diff --git a/onceforalltrainonenetworkandspecializeitforefficientdeployment/4a51cd14-2713-41e5-b618-48653a66fdc6_content_list.json b/onceforalltrainonenetworkandspecializeitforefficientdeployment/4a51cd14-2713-41e5-b618-48653a66fdc6_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..cad7c1b1563680228bb322f1a9f69671ebd4906c
--- /dev/null
+++ b/onceforalltrainonenetworkandspecializeitforefficientdeployment/4a51cd14-2713-41e5-b618-48653a66fdc6_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3031ec358799ac64d92deab6256599477fd6d744dc6282a4e45e518363c67fbf
+size 75756
diff --git a/onceforalltrainonenetworkandspecializeitforefficientdeployment/4a51cd14-2713-41e5-b618-48653a66fdc6_model.json b/onceforalltrainonenetworkandspecializeitforefficientdeployment/4a51cd14-2713-41e5-b618-48653a66fdc6_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..73ea27eb3418f09068308a8d0196391feda1678e
--- /dev/null
+++ b/onceforalltrainonenetworkandspecializeitforefficientdeployment/4a51cd14-2713-41e5-b618-48653a66fdc6_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9f833c0a72fee7d7e2a00be0abc255cc442961d9db4ae2818ded5248e2b7958
+size 93098
diff --git a/onceforalltrainonenetworkandspecializeitforefficientdeployment/4a51cd14-2713-41e5-b618-48653a66fdc6_origin.pdf b/onceforalltrainonenetworkandspecializeitforefficientdeployment/4a51cd14-2713-41e5-b618-48653a66fdc6_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3152bb63c4faa30e2cedfdcce2e9265a0c351357
--- /dev/null
+++ b/onceforalltrainonenetworkandspecializeitforefficientdeployment/4a51cd14-2713-41e5-b618-48653a66fdc6_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7a8a9a28af56a54dd8a516291f7503eb3dc24cd12a40639c6fc711ea7315ae3c
+size 3471878
diff --git a/onceforalltrainonenetworkandspecializeitforefficientdeployment/full.md b/onceforalltrainonenetworkandspecializeitforefficientdeployment/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..bf1d3fa28dd936d96ec0be6add38f70431e14baa
--- /dev/null
+++ b/onceforalltrainonenetworkandspecializeitforefficientdeployment/full.md
@@ -0,0 +1,252 @@
+# ONCE-FOR-ALL: TRAIN ONE NETWORK AND SPECIALIZE IT FOR EFFICIENT DEPLOYMENT
+
+Han Cai$^{1}$, Chuang Gan$^{2}$, Tianzhe Wang$^{1}$, Zhekai Zhang$^{1}$, Song Han$^{1}$
+
+$^{1}$Massachusetts Institute of Technology, $^{2}$MIT-IBM Watson AI Lab; {hancai, chuangg, songhan}@mit.edu
+
+# ABSTRACT
+
+We address the challenging problem of efficient inference across many devices and resource constraints, especially on edge devices. Conventional approaches either manually design or use neural architecture search (NAS) to find a specialized neural network and train it from scratch for each case, which is computationally prohibitive (causing as much $CO_2$ emission as five cars' lifetimes (Strubell et al., 2019)) and thus unscalable. In this work, we propose to train a once-for-all (OFA) network that supports diverse architectural settings by decoupling training and search, to reduce the cost. We can quickly get a specialized sub-network by selecting from the OFA network without additional training. To efficiently train OFA networks, we also propose a novel progressive shrinking algorithm, a generalized pruning method that reduces the model size across many more dimensions than pruning alone (depth, width, kernel size, and resolution). The OFA network can yield a surprisingly large number of sub-networks $(>10^{19})$ that fit different hardware platforms and latency constraints while maintaining the same level of accuracy as independently trained networks. On diverse edge devices, OFA consistently outperforms state-of-the-art (SOTA) NAS methods (up to $4.0\%$ ImageNet top1 accuracy improvement over MobileNetV3, or the same accuracy but $1.5\times$ faster than MobileNetV3 and $2.6\times$ faster than EfficientNet w.r.t. measured latency) while reducing GPU hours and $CO_2$ emission by many orders of magnitude. In particular, OFA achieves a new SOTA $80.0\%$ ImageNet top-1 accuracy under the mobile setting ($<600$M MACs). OFA is the winning solution for the 3rd Low Power Computer Vision Challenge (LPCVC), DSP classification track, and the 4th LPCVC, both classification track and detection track. Code and 50 pre-trained models (for many devices & many latency constraints) are released at https://github.com/mit-han-lab/once-for-all.
+
+# 1 INTRODUCTION
+
+Deep Neural Networks (DNNs) deliver state-of-the-art accuracy in many machine learning applications. However, the explosive growth in model size and computation cost gives rise to new challenges on how to efficiently deploy these deep learning models on diverse hardware platforms, since they have to meet different hardware efficiency constraints (e.g., latency, energy). For instance, one mobile application on App Store has to support a diverse range of hardware devices, from a high-end Samsung Note10 with a dedicated neural network accelerator to a 5-year-old Samsung S6 with a much slower processor. With different hardware resources (e.g., on-chip memory size, #arithmetic units), the optimal neural network architecture varies significantly. Even running on the same hardware, under different battery conditions or workloads, the best model architecture also differs a lot.
+
+Given different hardware platforms and efficiency constraints (defined as deployment scenarios), researchers either design compact models specialized for mobile (Howard et al., 2017; Sandler et al., 2018; Zhang et al., 2018) or accelerate existing models by compression (Han et al., 2016; He et al., 2018) for efficient deployment. However, designing specialized DNNs for every scenario is engineer-expensive and computationally expensive, whether with human-based methods or NAS, since such methods need to repeat the network design process and retrain the designed network from scratch for each case. Their total cost grows linearly as the number of deployment scenarios increases, which results in excessive energy consumption and $CO_2$ emission (Strubell et al., 2019). This makes them unable to handle the vast number of hardware devices (23.14 billion IoT devices till
+
+
+Figure 1: Left: a single once-for-all network is trained to support versatile architectural configurations including depth, width, kernel size, and resolution. Given a deployment scenario, a specialized subnetwork is directly selected from the once-for-all network without training. Middle: this approach reduces the cost of specialized deep learning deployment from O(N) to O(1). Right: once-for-all network followed by model selection can derive many accuracy-latency trade-offs by training only once, compared to conventional methods that require repeated training.
+
+2018$^{1}$) and highly dynamic deployment environments (different battery conditions, different latency requirements, etc.).
+
+This paper introduces a new solution to tackle this challenge – designing a once-for-all network that can be directly deployed under diverse architectural configurations, amortizing the training cost. The inference is performed by selecting only part of the once-for-all network. It flexibly supports different depths, widths, kernel sizes, and resolutions without retraining. A simple example of Once-for-All (OFA) is illustrated in Figure 1 (left). Specifically, we decouple the model training stage and the neural architecture search stage. In the model training stage, we focus on improving the accuracy of all sub-networks that are derived by selecting different parts of the once-for-all network. In the model specialization stage, we sample a subset of sub-networks to train an accuracy predictor and latency predictors. Given the target hardware and constraint, a predictor-guided architecture search (Liu et al., 2018) is conducted to get a specialized sub-network, and the cost is negligible. As such, we reduce the total cost of specialized neural network design from O(N) to O(1) (Figure 1 middle).
+
+However, training the once-for-all network is a non-trivial task, since it requires jointly optimizing the weights to maintain the accuracy of a large number of sub-networks (more than $10^{19}$ in our experiments). It is computationally prohibitive to enumerate all sub-networks to get the exact gradient in each update step, while randomly sampling a few sub-networks in each step leads to significant accuracy drops. The challenge is that different sub-networks interfere with each other, making the training process of the whole once-for-all network inefficient. To address this challenge, we propose a progressive shrinking algorithm for training the once-for-all network. Instead of directly optimizing the once-for-all network from scratch, we propose to first train the largest neural network with maximum depth, width, and kernel size, then progressively fine-tune the once-for-all network to support smaller sub-networks that share weights with the larger ones. As such, it provides better initialization by selecting the most important weights of larger sub-networks, as well as the opportunity to distill smaller sub-networks, which greatly improves the training efficiency. From this perspective, progressive shrinking can be viewed as a generalized network pruning method that shrinks multiple dimensions (depth, width, kernel size, and resolution) of the full network rather than only the width dimension. Moreover, it aims to maintain the accuracy of all sub-networks rather than that of a single pruned network.
+
+We extensively evaluated the effectiveness of OFA on ImageNet with many hardware platforms (CPU, GPU, mCPU, mGPU, FPGA accelerator) and efficiency constraints. Under all deployment scenarios, OFA consistently improves the ImageNet accuracy by a significant margin compared to SOTA hardware-aware NAS methods while saving the GPU hours, dollars, and $CO_2$ emission by orders of magnitude. On the ImageNet mobile setting (less than 600M MACs), OFA achieves a new SOTA $80.0\%$ top1 accuracy with 595M MACs (Figure 2). To the best of our knowledge, this is the first time that the SOTA ImageNet top1 accuracy reaches $80\%$ under the mobile setting.
+
+
+Figure 2: Comparison between OFA and state-of-the-art CNN models on ImageNet. OFA provides $80.0\%$ ImageNet top1 accuracy under the mobile setting ( $<600\mathrm{M}$ MACs).
+
+# 2 RELATED WORK
+
+Efficient Deep Learning. Many efficient neural network architectures are proposed to improve the hardware efficiency, such as SqueezeNet (Iandola et al., 2016), MobileNets (Howard et al., 2017; Sandler et al., 2018), ShuffleNets (Ma et al., 2018; Zhang et al., 2018), etc. Orthogonal to architecting efficient neural networks, model compression (Han et al., 2016) is another very effective technique for efficient deep learning, including network pruning that removes redundant units (Han et al., 2015) or redundant channels (He et al., 2018; Liu et al., 2017), and quantization that reduces the bit width for the weights and activations (Han et al., 2016; Courbariaux et al., 2015; Zhu et al., 2017).
+
+Neural Architecture Search. Neural architecture search (NAS) focuses on automating the architecture design process (Zoph & Le, 2017; Zoph et al., 2018; Real et al., 2019; Cai et al., 2018a; Liu et al., 2019). Early NAS methods (Zoph et al., 2018; Real et al., 2019; Cai et al., 2018b) search for high-accuracy architectures without taking hardware efficiency into consideration. Therefore, the produced architectures (e.g., NASNet, AmoebaNet) are not efficient for inference. Recent hardware-aware NAS methods (Cai et al., 2019; Tan et al., 2019; Wu et al., 2019) directly incorporate the hardware feedback into architecture search. Hardware-DNN co-design techniques (Jiang et al., 2019b;a; Hao et al., 2019) jointly optimize neural network architectures and hardware architectures. As a result, they can improve inference efficiency. However, given new inference hardware platforms, these methods need to repeat the architecture search process and retrain the model, leading to prohibitive GPU hours, dollars, and $CO_2$ emission. They are not scalable to a large number of deployment scenarios. The individually trained models do not share any weight, leading to large total model size and high downloading bandwidth.
+
+Dynamic Neural Networks. To improve the efficiency of a given neural network, some work explored skipping part of the model based on the input image. For example, Wu et al. (2018); Liu & Deng (2018); Wang et al. (2018) learn a controller or gating modules to adaptively drop layers; Huang et al. (2018) introduce early-exit branches in the computation graph; Lin et al. (2017) adaptively prune channels based on the input feature map; Kuen et al. (2018) introduce stochastic downsampling points to reduce the feature map size adaptively. Recently, Slimmable Nets (Yu et al., 2019; Yu & Huang, 2019b) propose to train a model to support multiple width multipliers (e.g., 4 different global width multipliers), building upon existing human-designed neural networks (e.g., MobileNetV2 0.35, 0.5, 0.75, 1.0). Such methods can adaptively fit different efficiency constraints at runtime; however, they still inherit a pre-designed neural network (e.g., MobileNetV2), which limits the degree of flexibility (e.g., only the width multiplier can adapt) and the ability to handle new deployment scenarios where the pre-designed neural network is not optimal. In this work, in contrast, we enable a much more diverse architecture space (depth, width, kernel size, and resolution) and a significantly larger number of architectural settings ($10^{19}$ vs. 4 (Yu et al., 2019)). Thanks to the diversity and the large design
+
+
+Figure 3: Illustration of the progressive shrinking process to support different depth $D$ , width $W$ , kernel size $K$ and resolution $R$ . It leads to a large space comprising diverse sub-networks ( $>10^{19}$ ).
+
+space, we can derive new specialized neural networks for many different deployment scenarios rather than working on top of an existing neural network that limits the optimization headroom. However, it is more challenging to train the network to achieve this flexibility, which motivates us to design the progressive shrinking algorithm to tackle this challenge.
+
+# 3 METHOD
+
+# 3.1 PROBLEM FORMALIZATION
+
+Denoting the weights of the once-for-all network as $W_{o}$ and the architectural configurations as $\{arch_{i}\}$, we can formalize the problem as
+
+$$
+\min_{W_{o}} \sum_{\text{arch}_{i}} \mathcal{L}_{\text{val}}\left(C\left(W_{o}, \text{arch}_{i}\right)\right), \tag{1}
+$$
+
+where $C(W_{o}, \text{arch}_{i})$ denotes a selection scheme that selects part of the model from the once-for-all network $W_{o}$ to form a sub-network with architectural configuration $\text{arch}_{i}$ . The overall training objective is to optimize $W_{o}$ to make each supported sub-network maintain the same level of accuracy as independently training a network with the same architectural configuration.
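Since Eq. (1) cannot be optimized by enumerating every $\text{arch}_i$, training in practice approximates the sum by sampling configurations in each update step (Section 3.3). The following is a minimal pure-Python sketch of one such step, with illustrative names and a toy loss function standing in for $\mathcal{L}_{\text{val}}$ and the selection scheme $C$ (this is not the paper's code):

```python
import random

def sample_arch(rng):
    """Sample one architectural configuration from the OFA design space."""
    return {
        "depth": rng.choice([2, 3, 4]),           # layers per unit
        "width": rng.choice([3, 4, 6]),           # channel expansion ratio
        "kernel": rng.choice([3, 5, 7]),          # kernel size
        "resolution": rng.choice(range(128, 225, 4)),
    }

def training_step(weights, batch, loss_fn, num_samples=2, seed=0):
    """Approximate the objective in Eq. (1) by averaging the loss of a few
    sampled sub-networks that all select from the shared weights W_o."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        arch = sample_arch(rng)
        # C(W_o, arch): select the part of the shared weights for this arch.
        sub_net = (weights, arch)
        total += loss_fn(sub_net, batch)
    return total / num_samples
```

In a real implementation the averaged loss would be backpropagated into the shared weights; here the step only illustrates the sampled approximation of the sum over architectures.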
+
+# 3.2 ARCHITECTURE SPACE
+
+Our once-for-all network provides one model but supports many sub-networks of different sizes, covering four important dimensions of the convolutional neural networks (CNNs) architectures, i.e., depth, width, kernel size, and resolution. Following the common practice of many CNN models (He et al., 2016; Sandler et al., 2018; Huang et al., 2017), we divide a CNN model into a sequence of units with gradually reduced feature map size and increased channel numbers. Each unit consists of a sequence of layers where only the first layer has stride 2 if the feature map size decreases (Sandler et al., 2018). All the other layers in the units have stride 1.
+
+We allow each unit to use arbitrary numbers of layers (denoted as elastic depth); for each layer, we allow arbitrary numbers of channels (denoted as elastic width) and arbitrary kernel sizes (denoted as elastic kernel size). In addition, we also allow the CNN model to take arbitrary input image sizes (denoted as elastic resolution). For example, in our experiments, the input image size ranges from 128 to 224 with stride 4; the depth of each unit is chosen from $\{2,3,4\}$; the width expansion ratio in each layer is chosen from $\{3,4,6\}$; the kernel size is chosen from $\{3,5,7\}$. Therefore, with 5 units, we have roughly $((3\times 3)^2 + (3\times 3)^3 + (3\times 3)^4)^5 \approx 2\times 10^{19}$ different neural network architectures, and each of them can be used under 25 different input resolutions. Since all of these sub-networks share the same weights (i.e., $W_{o}$) (Cheung et al., 2019), we only require 7.7M parameters to store all of them. Without sharing, the total model size would be prohibitive.
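The count quoted above can be re-derived directly: a unit of depth $d$ has $9^d$ layer configurations (3 widths $\times$ 3 kernel sizes per layer), summed over the three depth options and raised to the 5 units. The short script below is only a sanity check of that arithmetic:

```python
# Per unit: a depth-d configuration has d layers, each choosing one of
# 3 expansion ratios and 3 kernel sizes, i.e. (3*3)^d combinations.
per_unit = sum((3 * 3) ** d for d in (2, 3, 4))   # 81 + 729 + 6561 = 7371

# Five independent units -> ((3*3)^2 + (3*3)^3 + (3*3)^4)^5 architectures.
num_archs = per_unit ** 5

# Elastic resolution: input sizes 128..224 with stride 4 -> 25 options.
num_resolutions = len(range(128, 225, 4))

print(per_unit, num_archs, num_resolutions)
```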
+
+# 3.3 TRAINING THE ONCE-FOR-ALL NETWORK
+
+Naïve Approach. Training the once-for-all network can be cast as a multi-objective problem, where each objective corresponds to one sub-network. From this perspective, a naïve training approach is to directly optimize the once-for-all network from scratch using the exact gradient of the overall objective, which is derived by enumerating all sub-networks in each update step, as shown in Eq. (1). The cost of this approach is linear to the number of sub-networks. Therefore, it is only applicable to scenarios where a limited number of sub-networks are supported (Yu et al., 2019), while in our case, it is computationally prohibitive to adopt this approach.
+
+Another naïve training approach is to sample a few sub-networks in each update step rather than enumerate all of them, which avoids the prohibitive cost. However, with such a large number of sub-networks that share weights and thus interfere with each other, we find it suffers from
+
+
+Figure 4: Progressive shrinking can be viewed as a generalized network pruning technique with much higher flexibility. Compared to network pruning, it shrinks more dimensions (not only width) and provides a much more powerful once-for-all network that can fit different deployment scenarios rather than a single pruned network.
+
+
+Figure 5: Left: Kernel transformation matrix for elastic kernel size. Right: Progressive shrinking for elastic depth. Instead of skipping each layer independently, we keep the first $D$ layers and skip the last $(4 - D)$ layers. The weights of the early layers are shared.
+
+significant accuracy drop. In the following section, we introduce a solution to address this challenge, i.e., progressive shrinking.
+
+Progressive Shrinking. The once-for-all network comprises many sub-networks of different sizes, where small sub-networks are nested in large sub-networks. To prevent interference between the sub-networks, we propose to enforce a training order from large sub-networks to small sub-networks in a progressive manner. We name this training scheme progressive shrinking (PS). An example of the training process with PS is provided in Figure 3 and Figure 4, where we start by training the largest neural network with the maximum kernel size (e.g., 7), depth (e.g., 4), and width (e.g., 6). Next, we progressively fine-tune the network to support smaller sub-networks by gradually adding them into the sampling space (larger sub-networks may also be sampled). Specifically, after training the largest network, we first support elastic kernel size, which can choose from $\{3,5,7\}$ at each layer, while the depth and width remain at their maximum values. Then, we support elastic depth and elastic width sequentially, as shown in Figure 3. The resolution is elastic throughout the whole training process, which is implemented by sampling different image sizes for each batch of training data. We also use the knowledge distillation technique after training the largest neural network (Hinton et al., 2015; Ashok et al., 2018; Yu & Huang, 2019b), combining two loss terms that use the soft labels given by the largest neural network and the real labels, respectively.
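As a hedged illustration of the distillation step, the sketch below combines a hard-label cross-entropy term with a soft-label term computed from the largest network's predictions; the equal weighting `alpha` is an assumed hyper-parameter, not a value taken from the paper:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, label, alpha=0.5):
    """Combine cross-entropy with the real label and cross-entropy with the
    soft labels of the (already-trained) largest network."""
    p = softmax(student_logits)
    q = softmax(teacher_logits)          # soft labels from the largest network
    hard = -math.log(p[label])           # cross-entropy with the real label
    soft = -sum(qi * math.log(pi) for qi, pi in zip(q, p))
    return alpha * hard + (1 - alpha) * soft
```

When the student matches a confident teacher, both terms shrink toward the teacher's entropy, so the loss rewards agreement with the large network as well as with the ground truth.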
+
+Compared to the naive approach, PS prevents small sub-networks from interfering with large sub-networks, since large sub-networks are already well-trained when the once-for-all network is fine-tuned to support small sub-networks. The small sub-networks, in turn, share weights with the large ones, so PS allows initializing small sub-networks with the most important weights of well-trained large sub-networks, which expedites the training process. Compared to network pruning (Figure 4), PS also starts with training the full model, but it shrinks not only the width dimension but also the depth, kernel size, and resolution dimensions of the full model. Additionally, PS fine-tunes both large and small sub-networks rather than a single pruned network. As a result, PS provides a much more powerful once-for-all network that can fit diverse hardware platforms and efficiency constraints compared to network pruning. We describe the details of the PS training flow as follows:
+
+
+Figure 6: Progressive shrinking for elastic width. In this example, we progressively support 4, 3, and 2 channel settings. We perform channel sorting and pick the most important channels (with large L1 norm) to initialize the smaller channel settings. The important channels' weights are shared.
+
+- Elastic Kernel Size (Figure 5 left). We let the center of a $7 \times 7$ convolution kernel also serve as a $5 \times 5$ kernel, the center of which can in turn serve as a $3 \times 3$ kernel. Therefore, the kernel size becomes elastic. The challenge is that the centered sub-kernels (e.g., $3 \times 3$ and $5 \times 5$) are shared and need to play multiple roles (independent kernel and part of a larger kernel). The weights of the centered sub-kernels may need different distributions or magnitudes for their different roles; forcing them to be identical degrades the performance of some sub-networks. Therefore, we introduce kernel transformation matrices when sharing the kernel weights. We use separate kernel transformation matrices for different layers; within each layer, the kernel transformation matrices are shared among different channels. As such, we only need $25 \times 25 + 9 \times 9 = 706$ extra parameters to store the kernel transformation matrices in each layer, which is negligible.
+- Elastic Depth (Figure 5 right). To derive a sub-network that has $D$ layers in a unit that originally has $N$ layers, we keep the first $D$ layers and skip the last $N - D$ layers, rather than keeping any $D$ layers as done in current NAS methods (Cai et al., 2019; Wu et al., 2019). As such, one depth setting only corresponds to one combination of layers. In the end, the weights of the first $D$ layers are shared between large and small models.
+- Elastic Width (Figure 6). Width means the number of channels. We give each layer the flexibility to choose different channel expansion ratios. Following the progressive shrinking scheme, we first train a full-width model. Then we introduce a channel sorting operation to support partial widths. It reorganizes the channels according to their importance, calculated as the L1 norm of each channel's weights; a larger L1 norm means a more important channel. For example, when shrinking from a 4-channel layer to a 3-channel layer, we select the 3 most important channels, whose weights are shared with the 4-channel layer (Figure 6 left and middle). Thereby, smaller sub-networks are initialized with the most important channels of the once-for-all network, which is already well trained. This channel sorting operation preserves the accuracy of larger sub-networks.
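The channel sorting step for elastic width can be sketched in a few lines. Here nested Python lists stand in for a conv layer's per-output-channel weight tensor, and the function names are illustrative:

```python
def channel_l1_norms(weight):
    """weight: list of per-output-channel weight lists.
    Returns the L1 norm (sum of absolute values) of each channel."""
    return [sum(abs(w) for w in channel) for channel in weight]

def shrink_width(weight, k):
    """Keep the k most important output channels (largest L1 norm),
    preserving their original order so weight sharing stays aligned."""
    norms = channel_l1_norms(weight)
    order = sorted(range(len(weight)), key=lambda i: -norms[i])
    keep = sorted(order[:k])
    return [weight[i] for i in keep]

# Example: a 4-channel layer shrunk to a 3-channel layer, as in Figure 6.
w4 = [[0.1, -0.2], [1.0, 0.5], [0.3, 0.3], [-0.9, 0.8]]
w3 = shrink_width(w4, 3)   # drops channel 0, whose L1 norm (0.3) is smallest
```

Because the kept channels are views of the full-width weights, updating them during fine-tuning updates the large sub-network as well, which is the weight-sharing behavior described above.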
+
+# 3.4 SPECIALIZED MODEL DEPLOYMENT WITH ONCE-FOR-ALL NETWORK
+
+Having trained a once-for-all network, the next stage is to derive the specialized sub-network for a given deployment scenario. The goal is to search for a neural network that satisfies the efficiency (e.g., latency, energy) constraints on the target hardware while optimizing the accuracy. Since OFA decouples model training from neural architecture search, we do not need any training cost in this stage. Furthermore, we build neural-network-twins to predict the latency and accuracy given a neural network architecture, providing quick feedback for model quality. This eliminates the repeated search cost by substituting the measured accuracy/latency with predicted accuracy/latency (twins).
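A latency twin of the kind described above can be sketched as a lookup table keyed by layer configuration, whose entries sum to the predicted network latency; all numbers below are invented for illustration:

```python
# Hypothetical per-layer latencies pre-measured on one target device.
LATENCY_TABLE_MS = {
    # (kernel_size, expand_ratio) -> latency of one layer, in ms
    (3, 3): 1.1, (3, 4): 1.4, (3, 6): 1.9,
    (5, 3): 1.6, (5, 4): 2.0, (5, 6): 2.7,
    (7, 3): 2.2, (7, 4): 2.8, (7, 6): 3.6,
}

def predict_latency(layers):
    """layers: list of (kernel_size, expand_ratio) tuples.
    Predicted latency is the sum of the pre-measured per-layer entries."""
    return sum(LATENCY_TABLE_MS[cfg] for cfg in layers)

arch = [(3, 4), (5, 3), (7, 6)]
latency_ms = predict_latency(arch)   # 1.4 + 1.6 + 3.6
```

One table is built per target hardware platform, so querying a candidate architecture costs a few dictionary lookups instead of an on-device measurement.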
+
+Specifically, we randomly sample 16K sub-networks with different architectures and input image sizes, then measure their accuracy on 10K validation images sampled from the original training set. These [architecture, accuracy] pairs are used to train an accuracy predictor that predicts the accuracy of a model given its architecture and input image size$^{2}$. We also build a latency lookup table (Cai et al., 2019) on each target hardware platform to predict the latency. Given the target hardware and latency constraint, we conduct an evolutionary search (Real et al., 2019) based on the neural-network-twins to get a specialized sub-network. Since the cost of searching with neural-network-twins is negligible,
+
+
+Figure 7: ImageNet top1 accuracy (\%) performances of sub-networks under resolution $224 \times 224$ . " $(\mathrm{D} = d, \mathrm{W} = w, \mathrm{K} = k)$ " denotes a sub-network with $d$ layers in each unit, and each layer has an width expansion ratio $w$ and kernel size $k$ .
+
+we only need 40 GPU hours to collect the data pairs, and the cost stays constant regardless of the number of deployment scenarios.
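The predictor-guided evolutionary search can be sketched as follows, with toy stand-ins for the accuracy predictor and latency lookup table (the real predictors are learned from the 16K sampled sub-networks); the population size, mutation scheme, and generation count here are assumptions for illustration:

```python
import random

def predicted_accuracy(arch):
    """Toy stand-in for the trained accuracy predictor."""
    return 0.70 + 0.01 * arch["depth"] + 0.005 * arch["kernel"]

def predicted_latency(arch):
    """Toy stand-in for the latency lookup table, in ms."""
    return 10.0 * arch["depth"] + 2.0 * arch["kernel"]

def evolutionary_search(latency_limit, generations=20, pop=8, seed=0):
    rng = random.Random(seed)
    def sample():
        return {"depth": rng.choice([2, 3, 4]), "kernel": rng.choice([3, 5, 7])}
    population = [sample() for _ in range(pop)]
    for _ in range(generations):
        feasible = [a for a in population if predicted_latency(a) <= latency_limit]
        if not feasible:                      # restart if nothing fits
            population = [sample() for _ in range(pop)]
            continue
        parent = max(feasible, key=predicted_accuracy)
        child = dict(parent)                  # mutate one dimension of the best
        key = rng.choice(["depth", "kernel"])
        child[key] = rng.choice([2, 3, 4] if key == "depth" else [3, 5, 7])
        population = feasible + [child]
    return max(population, key=lambda a: (predicted_latency(a) <= latency_limit,
                                          predicted_accuracy(a)))
```

Because both fitness signals come from the twins rather than from training or on-device measurement, each candidate evaluation is essentially free, which is why the search cost is negligible.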
+
+# 4 EXPERIMENTS
+
+In this section, we first apply the progressive shrinking algorithm to train the once-for-all network on ImageNet (Deng et al., 2009). Then we demonstrate the effectiveness of our trained once-for-all network on various hardware platforms (Samsung S7 Edge, Note8, Note10, Google Pixel1, Pixel2, LG G8, NVIDIA 1080Ti, V100 GPUs, Jetson TX2, Intel Xeon CPU, Xilinx ZU9EG, and ZU3EG FPGAs) with different latency constraints.
+
+# 4.1 TRAINING THE ONCE-FOR-ALL NETWORK ON IMAGENET
+
+Training Details. We use the same architecture space as MobileNetV3 (Howard et al., 2019). For training the full network, we use the standard SGD optimizer with Nesterov momentum 0.9 and weight decay $3 \times 10^{-5}$. The initial learning rate is 2.6, and we use the cosine schedule (Loshchilov & Hutter, 2016) for learning rate decay. The full network is trained for 180 epochs with batch size 2048 on 32 GPUs. Then we follow the schedule described in Figure 3 to further fine-tune the full network$^{3}$. The whole training process takes around 1,200 GPU hours on V100 GPUs. This is a one-time training cost that can be amortized across many deployment scenarios.
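The cosine learning-rate decay (Loshchilov & Hutter, 2016) referenced above can be written down compactly; this sketch omits warm restarts and assumes a single annealing cycle from the initial rate down to zero:

```python
import math

def cosine_lr(step, total_steps, base_lr=2.6, min_lr=0.0):
    """Cosine annealing: lr = min + 0.5*(base - min)*(1 + cos(pi * t/T))."""
    progress = step / total_steps
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

At step 0 this returns the full initial rate (2.6 here, matching the setting above), halfway through training it returns half of it, and at the final step it reaches `min_lr`.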
+
+Results. Figure 7 reports the top1 accuracy of sub-networks derived from the once-for-all networks that are trained with our progressive shrinking (PS) algorithm and without PS respectively. Due to space limits, we take 8 sub-networks for comparison, and each of them is denoted as “ $(\mathrm{D} = d, \mathrm{W} = w, \mathrm{K} = k)$ ”. It represents a sub-network that has $d$ layers for all units, while the expansion ratio and kernel size are set to $w$ and $k$ for all layers. PS can improve the ImageNet accuracy of sub-networks by a significant margin under all architectural settings. Specifically, without architecture optimization, PS can achieve $74.8\%$ top1 accuracy using 226M MACs under the architecture setting $(\mathrm{D} = 4, \mathrm{W} = 3, \mathrm{K} = 3)$ , which is on par with MobileNetV3-Large. In contrast, without PS, it only achieves $71.5\%$ , which is $3.3\%$ lower.
+
+# 4.2 SPECIALIZED SUB-NETWORKS FOR DIFFERENT HARDWARE AND CONSTRAINTS
+
+We apply our trained once-for-all network to get different specialized sub-networks for diverse hardware platforms: from the cloud to the edge. On cloud devices, the GPU latency is measured with batch size 64 on NVIDIA 1080Ti and V100 with PyTorch 1.0 + cuDNN. The CPU latency is measured with batch size 1 on an Intel Xeon E5-2690 v4 + MKL-DNN. On edge devices, including mobile phones, we use Samsung, Google, and LG phones with TF-Lite, batch size 1; for mobile GPU,
+
+
+| Model | ImageNet Top1 (%) | MACs | Mobile latency | Search cost (GPU hours) | Training cost (GPU hours) | Total GPU hours (N=40) | Total $CO_2e$ (lbs, N=40) | Total AWS cost (N=40) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| MobileNetV2 [31] | 72.0 | 300M | 66ms | 0 | 150N | 6k | 1.7k | $18.4k |
+| MobileNetV2 #1200 | 73.5 | 300M | 66ms | 0 | 1200N | 48k | 13.6k | $146.9k |
+| NASNet-A [44] | 74.0 | 564M | - | 48,000N | - | 1,920k | 544.5k | $5875.2k |
+| DARTS [25] | 73.1 | 595M | - | 96N | 250N | 14k | 4.0k | $42.8k |
+| MnasNet [33] | 74.0 | 317M | 70ms | 40,000N | - | 1,600k | 453.8k | $4896.0k |
+| FBNet-C [36] | 74.9 | 375M | - | 216N | 360N | 23k | 6.5k | $70.4k |
+| ProxylessNAS [4] | 74.6 | 320M | 71ms | 200N | 300N | 20k | 5.7k | $61.2k |
+| SinglePathNAS [8] | 74.7 | 328M | - | 288 + 24N | 384N | 17k | 4.8k | $52.0k |
+| AutoSlim [38] | 74.2 | 305M | 63ms | 180 | 300N | 12k | 3.4k | $36.7k |
+| MobileNetV3-Large [15] | 75.2 | 219M | 58ms | - | 180N | 7.2k | 1.8k | $22.2k |
+| OFA w/o PS | 72.4 | 235M | 59ms | 40 | 1200 | 1.2k | 0.34k | $3.7k |
+| OFA w/ PS | 76.0 | 230M | 58ms | 40 | 1200 | 1.2k | 0.34k | $3.7k |
+| OFA w/ PS #25 | 76.4 | 230M | 58ms | 40 | 1200 + 25N | 2.2k | 0.62k | $6.7k |
+| OFA w/ PS #75 | 76.9 | 230M | 58ms | 40 | 1200 + 75N | 4.2k | 1.2k | $13.0k |
+| OFALarge w/ PS #75 | 80.0 | 595M | - | 40 | 1200 + 75N | 4.2k | 1.2k | $13.0k |
+
+Table 1: Comparison with SOTA hardware-aware NAS methods on the Pixel1 phone. OFA decouples model training from neural architecture search. The search cost and training cost both stay constant as the number of deployment scenarios grows. “#25” denotes that the specialized sub-networks are fine-tuned for 25 epochs after grabbing weights from the once-for-all network. “$CO_2e$” denotes $CO_2$ emission, calculated based on Strubell et al. (2019). AWS cost is calculated based on the price of on-demand P3.16xlarge instances.
+
+
+Figure 8: OFA saves orders of magnitude design cost compared to NAS methods.
+
+we use Jetson TX2 with PyTorch 1.0 + cuDNN, batch size 16; for embedded FPGA, we use Xilinx ZU9EG and ZU3EG FPGAs with Vitis AI$^{4}$, batch size 1.
+
+Comparison with NAS on Mobile Devices. Table 1 reports the comparison between OFA and state-of-the-art hardware-aware NAS methods on the mobile phone (Pixel1). OFA is much more efficient than NAS when handling multiple deployment scenarios, since the cost of OFA is constant while that of the other methods grows linearly with the number of deployment scenarios $(N)$. With $N = 40$, the total $CO_2$ emissions of OFA are $16\times$ lower than ProxylessNAS, $19\times$ lower than FBNet, and $1,300\times$ lower than MnasNet (Figure 8). Without retraining, OFA achieves $76.0\%$ top1 accuracy on ImageNet, which is $0.8\%$ higher than MobileNetV3-Large while maintaining similar mobile latency. We can further improve the top1 accuracy to $76.4\%$ by fine-tuning the specialized sub-network for 25 epochs, and to $76.9\%$ by fine-tuning for 75 epochs. Besides, we also observe that OFA with PS achieves $3.6\%$ better accuracy than without PS.
+
+OFA under Different Computational Resource Constraints. Figure 9 summarizes the results of OFA under different MACs and Pixel1 latency constraints. OFA achieves $79.1\%$ ImageNet top1 accuracy with 389M MACs, being $2.8\%$ more accurate than EfficientNet-B0, which has similar MACs. With 595M MACs, OFA reaches a new SOTA $80.0\%$ ImageNet top1 accuracy under the mobile setting ($<600$M MACs), which is $0.2\%$ higher than EfficientNet-B2 while using $1.68\times$ fewer MACs. More importantly, OFA runs much faster than EfficientNets on hardware. Specifically, with 143ms Pixel1 latency, OFA achieves $80.1\%$ ImageNet top1 accuracy, being $0.3\%$ more accurate and $2.6\times$ faster than EfficientNet-B2. We also find that training the searched neural architectures from scratch cannot reach the same level of accuracy as OFA, suggesting that not only the neural architectures but also the pre-trained weights contribute to the superior performance of OFA.
+
+Figure 10 reports detailed comparisons between OFA and MobileNetV3 on six mobile devices. Remarkably, OFA can produce the entire trade-off curves with many points over a wide range of latency constraints by training only once (green curve). It is impossible for previous NAS methods (Tan et al., 2019; Cai et al., 2019) due to the prohibitive training cost.
+
+
+Figure 9: OFA achieves $80.0\%$ top1 accuracy with 595M MACs and $80.1\%$ top1 accuracy with 143ms Pixel1 latency, setting a new SOTA ImageNet top1 accuracy on the mobile setting.
+
+
+
+
+
+
+
+
+
+
+Figure 10: OFA consistently outperforms MobileNetV3 on mobile platforms.
+
+
+
+
+
+OFA for Diverse Hardware Platforms. Besides the mobile platforms, we extensively studied the effectiveness of OFA on six additional hardware platforms (Figure 11) using the ProxylessNAS architecture space (Cai et al., 2019). OFA consistently improves the trade-off between accuracy and latency by a significant margin, especially on GPUs, which have more parallelism. With similar latency to MobileNetV2 0.35, "OFA #25" improves the ImageNet top1 accuracy from MobileNetV2's $60.3\%$ to $72.6\%$ (a $+12.3\%$ improvement) on the 1080Ti GPU. Detailed architectures of our specialized models are shown in Figure 14. This reveals that reusing the same model across deployment scenarios with only the width multiplier modified yields limited efficiency gains: the accuracy drops quickly as the latency constraint gets tighter.
+
+OFA for Specialized Hardware Accelerators. There has been plenty of work on NAS for general-purpose hardware, but little work has focused on specialized hardware accelerators. We quantitatively analyzed the performance of OFA on two FPGA accelerators (ZU3EG and ZU9EG) using Xilinx Vitis AI with 8-bit quantization, and we discuss two design principles.
+
+**Principle 1:** memory access is expensive, computation is cheap. An efficient CNN should perform as much computation as possible within a small memory footprint. This ratio is defined as the arithmetic intensity (OPs/Byte): the higher the OPs/Byte, the less memory-bound the workload and the easier it is to parallelize. Thanks to OFA's diverse choices of sub-network architectures ( $10^{19}$ ; Section 3.3), and the OFA
+
+
+Figure 11: Specialized OFA models consistently achieve significantly higher ImageNet accuracy with similar latency than non-specialized neural networks on CPU, GPU, mGPU, and FPGA. More remarkably, specializing for a new hardware platform does not add training cost using OFA.
+
+
+Figure 12: OFA models improve the arithmetic intensity (OPS/Byte) and utilization (GOPS/s) compared with the MobileNetV2 and MnasNet (measured results on Xilinx ZU9EG and ZU3EG FPGA).
+
+model twin that can quickly give accuracy-latency feedback (Section 3.4), the evolutionary search can automatically find a CNN architecture with higher arithmetic intensity. As shown in Figure 12, OFA's arithmetic intensity is $48\% /43\%$ higher than that of MobileNetV2 and MnasNet, respectively (MobileNetV3 is not supported by Xilinx Vitis AI). Removing the memory bottleneck yields higher utilization, improving GOPS/s by $70\% -90\%$ and pushing the operating point toward the upper-right of the roofline model (Williams et al., 2009), as shown in Figure 13 ( $70\% -90\%$ looks small on the log scale, but it is significant).
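+
+The arithmetic-intensity argument can be made concrete with a back-of-the-envelope estimate. The sketch below (our own illustration, not a measured OFA model) counts MACs and off-chip traffic for one inverted-residual ("MB") block at 8-bit precision, ignoring on-chip caching; the block dimensions are illustrative assumptions:
+
+```python
+def mbconv_stats(c_in, c_out, h, w, expand, kernel, bytes_per_el=1):
+    """Rough OPs and memory traffic for a 1x1 expand -> kxk depthwise -> 1x1 project block."""
+    c_mid = c_in * expand
+    macs = h * w * (c_in * c_mid + c_mid * kernel * kernel + c_mid * c_out)
+    params = c_in * c_mid + c_mid * kernel * kernel + c_mid * c_out
+    acts = h * w * (c_in + 2 * c_mid + c_out)   # input, expanded, depthwise-out, output maps
+    traffic = (params + acts) * bytes_per_el    # assume every tensor crosses the memory bus once
+    return 2 * macs, traffic                    # 2 OPs (multiply + add) per MAC
+
+ops, traffic = mbconv_stats(24, 24, 28, 28, expand=4, kernel=3)
+print(f"arithmetic intensity: {ops / traffic:.1f} OPs/Byte")
+
+# Raising the expand ratio increases OPs/Byte, one reason the evolutionary
+# search favors wider, more compute-dense blocks for the FPGA.
+ops_w, traffic_w = mbconv_stats(24, 24, 28, 28, expand=6, kernel=3)
+assert ops_w / traffic_w > ops / traffic
+```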
+
+**Principle 2:** the CNN architecture should be co-designed with the hardware accelerator's cost model. The FPGA accelerator has a specialized depth-wise engine that is pipelined with the point-wise engine, and the pipeline throughput is perfectly matched for 3x3 kernels. As a result, OFA's searched model has only 3x3 kernels on FPGA (Figure 14, a), even though 5x5 and 7x7 kernels are also in the search space. Additionally, large kernels sometimes cause "out of BRAM" errors on FPGA, incurring a high cost. On the Intel Xeon CPU, however, more than $50\%$ of the operations use large kernels. Both the FPGA and GPU models are wider than the CPU model, due to the large parallelism of the computation array.
+
+# 5 CONCLUSION
+
+We proposed Once-for-All (OFA), a new methodology that decouples model training from architecture search for efficient deep learning deployment under a large number of hardware platforms. Unlike
+
+
+(a) on Xilinx ZU9EG FPGA
+
+
+(b) on Xilinx ZU3EG FPGA
+
+
+Figure 13: Quantitative study of OFA's roofline model on Xilinx ZU9EG and ZU3EG FPGAs (log scale). OFA model increased the arithmetic intensity by $33\% /43\%$ and GOPS/s by $72\% /92\%$ on these two FPGAs compared with MnasNet.
+(a) 4.1ms latency on Xilinx ZU3EG (batch size = 1).
+
+
+(b) 10.9ms latency on Intel Xeon CPU (batch size = 1).
+
+
+(c) 14.9ms latency on NVIDIA 1080Ti (batch size = 64).
+Figure 14: OFA can design specialized models for different hardware and different latency constraints. "MB4 3x3" means "mobile block with expansion ratio 4, kernel size 3x3". The FPGA and GPU models are wider than the CPU model due to larger parallelism. Different hardware has different cost models, leading to different optimal CNN architectures. OFA provides a unified and efficient design methodology.
+
+previous approaches that design and train a neural network for each deployment scenario, we designed a once-for-all network that supports different architectural configurations, including elastic depth, width, kernel size, and resolution. It reduces the training cost (GPU hours, energy consumption, and $CO_2$ emission) by orders of magnitude compared to conventional methods. To prevent sub-networks of different sizes from interfering with one another, we proposed a progressive shrinking algorithm that enables a large number of sub-networks to achieve the same level of accuracy as training them independently. Experiments on a diverse range of hardware platforms and efficiency constraints demonstrated the effectiveness of our approach. OFA provides an automated ecosystem for efficiently designing efficient neural networks with the hardware cost model in the loop.
+
+# ACKNOWLEDGMENTS
+
+We thank NSF Career Award #1943349, MIT-IBM Watson AI Lab, Google-Daydream Research Award, Samsung, Intel, Xilinx, SONY, AWS Machine Learning Research Award for supporting this
+
+research. We thank Samsung, Google and LG for donating mobile phones. We thank Shuang Wu and Lei Deng for drawing Figure 2.
+
+# REFERENCES
+
+Anubhav Ashok, Nicholas Rhinehart, Fares Beainy, and Kris M Kitani. N2n learning: Network to network compression via policy gradient reinforcement learning. In ICLR, 2018. 5
+Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. Efficient architecture search by network transformation. In AAAI, 2018a. 3
+Han Cai, Jiacheng Yang, Weinan Zhang, Song Han, and Yong Yu. Path-level network transformation for efficient architecture search. In ICML, 2018b. 3
+Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. In ICLR, 2019. URL https://arxiv.org/pdf/1812.00332.pdf. 3, 6, 8, 9
+Brian Cheung, Alex Terekhov, Yubei Chen, Pulkit Agrawal, and Bruno Olshausen. Superposition of many models into one. In NeurIPS, 2019. 4
+Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In NeurIPS, 2015. 3
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. 7
+Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. arXiv preprint arXiv:1904.00420, 2019. 8
+Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In NeurIPS, 2015. 3
+Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In ICLR, 2016. 1, 3
+Cong Hao, Xiaofan Zhang, Yuhong Li, Sitao Huang, Jinjun Xiong, Kyle Rupnow, Wen-mei Hwu, and Deming Chen. FPGA/DNN co-design: An efficient design methodology for IoT intelligence on the edge. In 2019 56th ACM/IEEE Design Automation Conference (DAC), pp. 1-6. IEEE, 2019. 3
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. 4
+Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, and Song Han. Amc: Automl for model compression and acceleration on mobile devices. In ECCV, 2018. 1, 3
+Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. 5
+Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for mobilenetv3. In ICCV 2019, 2019. 7, 8
+Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. 1, 3
+Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In CVPR, 2017. 4
+Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, and Kilian Q Weinberger. Multi-scale dense networks for resource efficient image classification. In ICLR, 2018. 3
+
+Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. SqueezeNet: Alexnet-level accuracy with 50x fewer parameters and $0.5\mathrm{mb}$ model size. arXiv preprint arXiv:1602.07360, 2016. 3
+Weiwen Jiang, Lei Yang, Edwin Sha, Qingfeng Zhuge, Shouzhen Gu, Yiyu Shi, and Jingtong Hu. Hardware/software co-exploration of neural architectures. arXiv preprint arXiv:1907.04650, 2019a. 3
+Weiwen Jiang, Xinyi Zhang, Edwin H-M Sha, Lei Yang, Qingfeng Zhuge, Yiyu Shi, and Jingtong Hu. Accuracy vs. efficiency: Achieving both through fpga-implementation aware neural architecture search. In Proceedings of the 56th Annual Design Automation Conference 2019, pp. 1-6, 2019b. 3
+Jason Kuen, Xiangfei Kong, Zhe Lin, Gang Wang, Jianxiong Yin, Simon See, and Yap-Peng Tan. Stochastic downsampling for cost-adjustable inference and improved regularization in convolutional networks. In CVPR, 2018. 3
+Ji Lin, Yongming Rao, Jiwen Lu, and Jie Zhou. Runtime neural pruning. In NeurIPS, 2017. 3
+Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In ECCV, 2018. 2
+Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. In ICLR, 2019. 3, 8
+Lanlan Liu and Jia Deng. Dynamic deep neural networks: Optimizing accuracy-efficiency trade-offs by selective execution. In AAAI, 2018. 3
+Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. In ICCV, 2017. 3
+Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. 7
+Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In ECCV, 2018. 3
+Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for image classifier architecture search. In AAAI, 2019. 3, 6
+Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In CVPR, 2018. 1, 3, 4, 8
+Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in nlp. In ACL, 2019. 1, 8
+Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. Mnasnet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2820-2828, 2019. 3, 8
+Xin Wang, Fisher Yu, Zi-Yi Dou, Trevor Darrell, and Joseph E Gonzalez. Skipnet: Learning dynamic routing in convolutional networks. In ECCV, 2018. 3
+Samuel Williams, Andrew Waterman, and David Patterson. Roofline: An insightful visual performance model for floating-point programs and multicore architectures. Technical report, Lawrence Berkeley National Lab.(LBNL), Berkeley, CA (United States), 2009. 10
+Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search. In CVPR, 2019. 3, 6, 8
+Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S Davis, Kristen Grauman, and Rogerio Feris. Blockdrop: Dynamic inference paths in residual networks. In CVPR, 2018. 3
+
+Jiahui Yu and Thomas Huang. Autoslim: Towards one-shot architecture search for channel numbers. arXiv preprint arXiv:1903.11728, 2019a. 8
+Jiahui Yu and Thomas Huang. Universally slimmable networks and improved training techniques. In ICCV, 2019b. 3, 5
+Jiahui Yu, Linjie Yang, Ning Xu, Jianchao Yang, and Thomas Huang. Slimmable neural networks. In ICLR, 2019. 3, 4
+Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In CVPR, 2018. 1, 3
+Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. Trained ternary quantization. In ICLR, 2017. 3
+Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. In ICLR, 2017. 3
+Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In CVPR, 2018. 3, 8
+
+# A DETAILS OF THE ACCURACY PREDICTOR
+
+We use a three-layer feedforward neural network with 400 hidden units in each layer as the accuracy predictor. Given a model, we encode each layer of the neural network into a one-hot vector based on its kernel size and expansion ratio, and we assign zero vectors to layers that are skipped. In addition, we use one more one-hot vector to represent the input image size. We concatenate these vectors into a single large vector that represents the whole neural network architecture and input image size, which is then fed to the three-layer feedforward network to get the predicted accuracy. In our experiments, this simple accuracy prediction model provides very accurate predictions: at convergence, the root-mean-square error (RMSE) between predicted accuracy and estimated accuracy on the test set is only $0.21\%$ . Figure 15 shows the relationship between the RMSE of the accuracy prediction model and the final results (i.e., the accuracy of the selected sub-networks). We find that a lower RMSE typically leads to better final results.
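+
+The encoding and predictor described above can be sketched as follows. The kernel-size and expansion-ratio choices, the 20-layer depth, and the resolution grid are illustrative assumptions (the text specifies only the 400-unit hidden layers); we use NumPy with random weights in place of a trained model:
+
+```python
+import numpy as np
+
+KERNELS, EXPANDS = [3, 5, 7], [3, 4, 6]         # assumed per-layer choices
+RESOLUTIONS = list(range(128, 225, 4))          # assumed input-size grid
+N_LAYERS = 20                                   # assumed number of searchable layers
+
+def encode(layers, res):
+    """layers: list of (kernel, expand) tuples, or None for a skipped layer."""
+    vec = []
+    for layer in layers:
+        one_hot = [0.0] * (len(KERNELS) * len(EXPANDS))
+        if layer is not None:                   # skipped layers stay all-zero
+            k, e = layer
+            one_hot[KERNELS.index(k) * len(EXPANDS) + EXPANDS.index(e)] = 1.0
+        vec.extend(one_hot)
+    res_hot = [0.0] * len(RESOLUTIONS)
+    res_hot[RESOLUTIONS.index(res)] = 1.0       # one-hot for the input image size
+    return np.array(vec + res_hot)
+
+# Three hidden layers of 400 units, randomly initialized here; in practice the
+# predictor is trained on (architecture, accuracy) pairs.
+rng = np.random.default_rng(0)
+dims = [N_LAYERS * len(KERNELS) * len(EXPANDS) + len(RESOLUTIONS), 400, 400, 400, 1]
+weights = [rng.normal(0.0, 0.05, size=(a, b)) for a, b in zip(dims[:-1], dims[1:])]
+
+def predict(x):
+    for w in weights[:-1]:
+        x = np.maximum(x @ w, 0.0)              # ReLU hidden layers
+    return float((x @ weights[-1])[0])          # predicted top1 accuracy
+
+x = encode([(3, 4)] * 15 + [None] * 5, res=224)
+```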
+
+
+Figure 15: Performance of selected sub-networks using accuracy prediction models with different RMSE.
+
+# B IMPLEMENTATION DETAILS OF PROGRESSIVE SHRINKING
+
+After training the full network, we first have one stage of fine-tuning to incorporate elastic kernel size. In this stage (i.e., $K \in [7,5,3]$ ), we sample one sub-network in each update step. The network is fine-tuned for 125 epochs with an initial learning rate of 0.96. All other training settings are the same as training the full network.
+
+Next, we have two stages of fine-tuning to incorporate elastic depth. We sample two sub-networks and aggregate their gradients in each update step. The first stage (i.e., $D \in [4,3]$ ) takes 25 epochs with an initial learning rate of 0.08 while the second stage (i.e., $D \in [4,3,2]$ ) takes 125 epochs with an initial learning rate of 0.24.
+
+Finally, we have two stages of fine-tuning to incorporate elastic width. We sample four sub-networks and aggregate their gradients in each update step. The first stage (i.e., $W \in [6, 4]$ ) takes 25 epochs with an initial learning rate of 0.08 while the second stage (i.e., $W \in [6, 4, 3]$ ) takes 125 epochs with an initial learning rate of 0.24.
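+
+The fine-tuning schedule above can be written out as data plus the skeleton of one update step. This is a sketch with stand-in names (`grad_fn` abstracts the forward-backward pass through one sampled sub-network), not the released OFA training code:
+
+```python
+import random
+
+# One stage per row: elastic dimension, its choice list, epochs, initial LR,
+# and how many sub-networks are sampled (gradients aggregated) per update step.
+STAGES = [
+    {"elastic": "kernel", "choices": [7, 5, 3], "epochs": 125, "lr": 0.96, "subnets": 1},
+    {"elastic": "depth",  "choices": [4, 3],    "epochs": 25,  "lr": 0.08, "subnets": 2},
+    {"elastic": "depth",  "choices": [4, 3, 2], "epochs": 125, "lr": 0.24, "subnets": 2},
+    {"elastic": "width",  "choices": [6, 4],    "epochs": 25,  "lr": 0.08, "subnets": 4},
+    {"elastic": "width",  "choices": [6, 4, 3], "epochs": 125, "lr": 0.24, "subnets": 4},
+]
+
+def update_step(stage, grad_fn):
+    """Sample sub-network configurations, then sum their gradients."""
+    configs = [random.choice(stage["choices"]) for _ in range(stage["subnets"])]
+    grads = [grad_fn(cfg) for cfg in configs]
+    return [sum(g) for g in zip(*grads)]        # aggregated gradient per tensor
+```
+
+All other training settings are inherited from full-network training, so the whole procedure is a sequence of fine-tuning runs over progressively larger sub-network spaces.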
\ No newline at end of file
diff --git a/onceforalltrainonenetworkandspecializeitforefficientdeployment/images.zip b/onceforalltrainonenetworkandspecializeitforefficientdeployment/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ee90a9d6e0c7ebf133d06d8178d777feb250f3af
--- /dev/null
+++ b/onceforalltrainonenetworkandspecializeitforefficientdeployment/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8c6caf25de8592437e3b8126b4ceba7e48a3bd79ff193b5018509c15546eaba3
+size 840384
diff --git a/onceforalltrainonenetworkandspecializeitforefficientdeployment/layout.json b/onceforalltrainonenetworkandspecializeitforefficientdeployment/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b98468af6b54ddec5423d66f348343d2705ed4e3
--- /dev/null
+++ b/onceforalltrainonenetworkandspecializeitforefficientdeployment/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5345b856310860365daf92dc0340dd65928577222a000ab7c1c2a46f9d786553
+size 402018
diff --git a/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/f414d865-4e58-4bd4-8107-5e77486b1ba7_content_list.json b/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/f414d865-4e58-4bd4-8107-5e77486b1ba7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d6367140577e36bae01c7c5c071bbae113d8c07c
--- /dev/null
+++ b/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/f414d865-4e58-4bd4-8107-5e77486b1ba7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8724bc04c6b8a72da30e777032e97c8454d864f2816e34db9f4bfa49ff07ef6b
+size 74954
diff --git a/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/f414d865-4e58-4bd4-8107-5e77486b1ba7_model.json b/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/f414d865-4e58-4bd4-8107-5e77486b1ba7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..45f9690cafb34dc41ad8d60ec8d967f59255a85b
--- /dev/null
+++ b/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/f414d865-4e58-4bd4-8107-5e77486b1ba7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5b7cfdb17e41ffe3d21125b560c372cef9eca45c8aa0330f9bc90804e24c7d6a
+size 91426
diff --git a/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/f414d865-4e58-4bd4-8107-5e77486b1ba7_origin.pdf b/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/f414d865-4e58-4bd4-8107-5e77486b1ba7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..24538019242382f99a6d201c77525aa70d998337
--- /dev/null
+++ b/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/f414d865-4e58-4bd4-8107-5e77486b1ba7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4b5ebdce9975b4384dd5813eabb71c8773bc7055acb3973a18351ff8c3d1b30
+size 483612
diff --git a/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/full.md b/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9bc816a761c517d990af6df68d4579dfd7bba311
--- /dev/null
+++ b/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/full.md
@@ -0,0 +1,340 @@
+# ONE-SHOT PRUNING OF RECURRENT NEURAL NETWORKS BY JACOBIAN SPECTRUM EVALUATION
+
+Matthew Shunshi Zhang
+
+University of Toronto
+
+matthew.zhang@mail.utoronto.ca
+
+Bradly C. Stadie
+
+Vector Institute
+
+# ABSTRACT
+
+Recent advances in the sparse neural network literature have made it possible to prune many large feed-forward and convolutional networks with only a small quantity of data. Yet these same techniques often falter when applied to the problem of recovering sparse recurrent networks. These failures are quantitative: when pruned with recent techniques, RNNs typically obtain worse performance than they do under a simple random pruning scheme. The failures are also qualitative: the active weights in a pruned LSTM or GRU network tend to be concentrated in specific neurons and gates, and not well dispersed across the entire architecture. We seek to rectify both the quantitative and qualitative issues with recurrent network pruning by introducing a new recurrent pruning objective derived from the spectrum of the recurrent Jacobian. Our objective is data efficient (requiring only 64 data points to prune the network), easy to implement, and produces $95\%$ sparse GRUs that significantly improve on existing baselines. We evaluate on sequential MNIST, Billion Words, and Wikitext.
+
+# 1 INTRODUCTION
+
+Within the neural network community, network pruning has been something of an evergreen problem. There are several motivations for pruning a neural network. Theoretically, overparameterization is a well-known but poorly understood quality of many networks. Pruning algorithms provide a link between overparameterized models and appropriately parameterized models, and may therefore provide insights into exactly why overparameterized models have so much success. Indeed, recent work has closely linked the efficient utilization of model capacity with generalization results (Arora et al., 2018). From a more practical perspective, overparameterized networks require more storage capacity and are computationally more expensive than their pruned counterparts. Hence, there is an incentive to deploy pruned networks rather than fully dense networks.
+
+For years, many of the most successful network pruning techniques were iterative, relying on a cycle of pruning and retraining weights to induce sparsity in the network. As identified in Lee et al. (2018), these methods usually either enforce a sparsity-based penalty on the weights (Han et al., 2015; LeCun et al., 1990) or prune based on some fitness criterion (Carreira-Perpinan & Idelbayev, 2018; Chauvin, 1989). Recent advances in the pruning literature suggest that such costly cycles of pruning and retraining might not always be necessary. For some problems, there exists a small subnetwork within the original larger network such that training this smaller network produces performance comparable to training the original fully dense network. The Lottery Ticket Hypothesis (Frankle & Carbin, 2019) provides a method for recovering these networks, but only after training is complete. SNIP (Lee et al., 2018) and GraSP (Wang et al., 2020) provide a saliency criterion for identifying this small subnetwork using fewer than 100 data points, no training, and no iterative pruning.
+
+Our present work began by asking the question: "How well do these newly discovered pruning techniques, which optimize a network sensitivity objective, work on recurrent neural networks?" Although Lee et al. (2018) evaluate the SNIP pruning criterion on both GRU and LSTM networks, we found these results to be somewhat incomplete: they did not provide a comparison to random pruning, and the chosen tasks were not extensive enough to draw definitive conclusions. When compared against random pruning, we found that the SNIP and GraSP pruning objectives performed similarly to or worse than random pruning. This left us wondering where those techniques were falling short, and whether a better pruning objective could be developed that takes the temporal structure of recurrent networks into account.
+
+In this paper, we propose a new pruning objective for recurrent neural networks. This objective is based on recent advances in mean field theory (Gilboa et al., 2019; Chen et al., 2018a), and can be interpreted as forcing the network to preserve weights that propagate information through its temporal depths. Practically, this constraint is imposed by forcing the singular values of the temporal Jacobian with respect to the network weights to be non-degenerate. We provide a discussion about the similarities and differences between our objective and the SNIP and GraSP pruning objectives. It can be shown that these prior objectives fail to ensure that the temporal Jacobian of the recurrent weights is well conditioned. Our method is evaluated with a GRU network on sequential MNIST, Wikitext, and Billion Words. At $95\%$ sparsity, our network achieves better results than fully dense networks, randomly pruned networks, SNIP (Lee et al., 2018) pruned networks, and GraSP (Wang et al., 2020) pruned networks.
+
+# 2 PRUNING RECURRENT NETWORKS BY JACOBIAN SPECTRUM EVALUATION
+
+# 2.1 NOTATION
+
+We denote matrices and vectors by upper- and lower-case bold letters, respectively. Vector-valued functions are bolded, whereas scalar-valued functions are not. Distributions over variables are denoted with script letters: $\mathcal{D},\mathcal{P}$ . We denote the standard $\ell_p$ norm of a vector by $\| \cdot \| _p$ . Let $[\cdot ]_{ij}$ be the $(i,j)$ -th element of a matrix, and $[\cdot ]_i$ the $i$ -th element of a vector. $\vec{1},\vec{0}$ denote vectors of 1s or 0s of appropriate length, and $\odot$ denotes a Hadamard product. $I_A$ represents the standard indicator function. For vectors, superscripts are always used for sequence indices, while subscripts are reserved for indexing vector elements.
+
+# 2.2 PRELIMINARIES
+
+# 2.2.1 RECURRENT MODELS
+
+Let $\mathbf{X} = \{\mathbf{x}^{(t)}\}_{t=1}^{S}$ , with each $\mathbf{x}^{(t)} \in \mathbb{R}^D$ . Similarly, let $\mathbf{Y} = \{\mathbf{y}^{(t)}\}_{t=1}^{S}$ , where each $\mathbf{y}^{(t)} \in \mathbb{R}^O$ is an associated output, such that each tuple $(\mathbf{X}, \mathbf{Y}) \stackrel{i.i.d.}{\sim} \mathcal{D}$ .
+
+Let $\mathbf{M}(\mathbf{x};\boldsymbol {\theta}): \mathbb{R}^D \mapsto \mathbb{R}^O$ be a generic model, parameterized by $\boldsymbol{\theta} \in \mathbb{R}^{N}$ , that maps $\mathbf{X}$ onto an output sequence. We define a recurrent model as one that performs this mapping through iterative computation, such that each $(\hat{\mathbf{y}}^{(t)},\mathbf{h}^{(t)}) = \mathbf{M}(\mathbf{x}^{(t)},\mathbf{h}^{(t - 1)};\boldsymbol{\theta})$ depends explicitly only on the current input and some latent state of the model, $\mathbf{h}$ .
+
+We define the loss over an entire sequence of outputs as the sum of a non-sequential loss function $\tilde{L}$ over the sequence: $L(\mathbf{M},\mathbf{X},\mathbf{Y}) = \sum_{t = 1}^{S}\tilde{L} (\hat{\mathbf{y}}^{(t)},\mathbf{y}^{(t)})$ .
+
+We define a sparse model as one where the parameters factorize into $\theta = c\odot w$ , with $c\in \{0,1\} ^N$ a binary mask and $w\in \mathbb{R}^N$ the free values, typically trained by gradient descent. We define a $K$ -sparse condition on a sparse model $\mathbf{M}$ as the restriction $\| c\| _0 = K$ during the entire training trajectory. A model is optimally $K$ -sparse if it minimizes the expected loss, $\mathbb{E}_{\mathcal{D}}[L(\mathbf{M},\mathbf{X},\mathbf{Y})]$ after training while also being subject to a $K$ -sparse condition.
+
+# 2.2.2 MEMORY HORIZON
+
+We introduce the following terms: $N$ is the size of the network hidden state $\mathbf{h}$ , $\mathbf{J}_t \in \mathbb{R}^{N \times N}$ is the temporal Jacobian of the hidden state at time $t + 1$ with respect to the previous hidden state, $\frac{\partial \mathbf{h}^{(t + 1)}}{\partial \mathbf{h}^{(t)}}$ , and $\sigma_i^{(t)}$ are the singular values of this matrix.
+
+To arrive at a one-shot pruning criterion for recurrent neural networks, we consider the impact of the temporal Jacobian on both forward- and backward-propagation.
+
+- (Backpropagation) The formula for backpropagation through time (BPTT), from the loss at time $s$ can be given as:
+
+$$
+\nabla_ {\boldsymbol {\theta}} \tilde {L} \left(\hat {\mathbf {y}} _ {s}, \mathbf {y} _ {s}\right) = \underbrace {\left[ \tilde {\mathbf {G}} _ {\mathbf {h} ^ {(s)} ; \boldsymbol {\theta}} ^ {T} + \tilde {\mathbf {G}} _ {\mathbf {h} ^ {(s - 1)} ; \boldsymbol {\theta}} ^ {T} \mathbf {J} _ {s - 1} + \dots + \tilde {\mathbf {G}} _ {\mathbf {h} ^ {(1)} ; \boldsymbol {\theta}} ^ {T} \prod_ {t = 1} ^ {s - 1} \mathbf {J} _ {t} \right]} _ {\tilde {\mathbf {G}} _ {s}} \cdot \nabla_ {\mathbf {h} ^ {(s)}} \tilde {L} \left(\hat {\mathbf {y}} _ {s}, \mathbf {y} _ {s}\right) \tag {1}
+$$
+
+where $\tilde{\mathbf{G}}_{\mathbf{h}^{(t)};\boldsymbol{\theta}}$ is the Jacobian of $\mathbf{h}^{(t)}$ considering only its explicit dependence on $\boldsymbol{\theta}$ .
+
+- (Forward Propagation)
+
+A single time-step of the network under small perturbations yields the following:
+
+$$
+\mathbf {M} \left(\mathbf {x} ^ {(t)}; \mathbf {h} ^ {(t)} + \boldsymbol {\epsilon}\right) \approx \mathbf {h} ^ {(t + 1)} + \mathbf {J} _ {t} \boldsymbol {\epsilon} \tag {2}
+$$
+
+Additional powers of the Jacobian appear as we observe the entire sequence.
+
+From Equation 1, it can easily be seen that increasing the normed singular values of each $\mathbf{J}_t$ will, on average, exponentially increase the gradient signal from later sequence elements, which expedites convergence by reducing the vanishing gradient problem. From Equation 2, we additionally note that a well-conditioned Jacobian enables the network to preserve the separation of distinct input vectors, by preventing the additive perturbation from vanishing or exploding. Prior works in mean-field theory (Gilboa et al., 2019; Chen et al., 2018a) provide an extensive analysis of a similar objective on the performance of a wide range of recurrent networks.
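+
+A toy numerical check of the backpropagation argument: when every $\mathbf{J}_t$ has singular values below 1 (here, 0.8 times a random orthogonal matrix, an assumption for illustration), the Jacobian product in Equation 1 decays exponentially with sequence distance:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+N, S = 16, 20
+# Each J_t = 0.8 * (random orthogonal matrix), so every singular value is 0.8.
+jacobians = [0.8 * np.linalg.qr(rng.normal(size=(N, N)))[0] for _ in range(S)]
+
+prod = np.eye(N)
+norms = []
+for J_t in jacobians:
+    prod = J_t @ prod
+    norms.append(np.linalg.norm(prod, 2))       # spectral norm of the product
+# norms[t] == 0.8 ** (t + 1): the gradient contribution from t steps back
+# shrinks exponentially, which is exactly the vanishing-gradient problem.
+```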
+
+The Frobenius norm of the temporal Jacobian, defined below, is thus key to both forward- and backpropagation. Both processes are significantly expedited when the norm is close to 1.
+
+$$
+\chi = \frac{1}{N(S - 1)} \sum_{t = 1}^{S - 1} \mathbb{E}\left(\left\| \mathbf{J}_{t} \vec{\mathbf{1}} \right\|_{2}^{2}\right) = \frac{1}{N(S - 1)} \sum_{t = 1}^{S - 1} \mathbb{E}\left(\sum_{i = 1}^{N} \left| \sigma_{i}^{(t)} \right|^{2}\right) \tag{3}
+$$
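+
+Equation 3 can be evaluated directly for a vanilla tanh RNN (a simpler stand-in for the paper's GRU): with $\mathbf{h}^{(t+1)} = \tanh(\mathbf{W}\mathbf{h}^{(t)} + \mathbf{U}\mathbf{x}^{(t)})$ , the temporal Jacobian is $\mathbf{J}_t = \mathrm{diag}(1 - (\mathbf{h}^{(t+1)})^2)\,\mathbf{W}$ . The sizes and initialization scale below are illustrative assumptions:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+N, D, S = 32, 16, 10                            # hidden size, input size, sequence length
+W = rng.normal(0.0, 0.1, size=(N, N))           # small-variance init, as assumed in Section 2.3
+U = rng.normal(0.0, 0.1, size=(N, D))
+
+h = np.zeros(N)
+chi = 0.0
+for t in range(S - 1):
+    h = np.tanh(W @ h + U @ rng.normal(size=D))
+    J_t = np.diag(1.0 - h ** 2) @ W             # temporal Jacobian of the tanh RNN
+    chi += np.linalg.norm(J_t @ np.ones(N)) ** 2
+chi /= N * (S - 1)
+# At an initialization this small, chi comes out below 1, matching the
+# observation that the singular values concentrate towards 0.
+```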
+
+# 2.3 PRUNING CRITERIA
+
+Under typical recurrent model initializations, where $\pmb{\theta} \sim \mathcal{N}(\mu_{\theta}, s_{\theta}^{2}\mathbf{I})$ or a similar distribution with $\mu_{\theta} \approx 0$ and $s_{\theta}^{2} \ll 1$ , Gilboa et al. (2019) empirically observed that $\chi < 1$ and that the singular values concentrate towards 0 (see Figure 2 for further evidence). We therefore hypothesize that the fastest-converging and best-performing sparse models are those which simply maximize $\chi$ .
+
+We would like to determine the effect of removing one parameter on the Jacobian during the training trajectory. However, as we restrict ourselves only to information available at initialization, we approximate the effect of each parameter on the Jacobian by a first-order Taylor expansion. This is analogous to the derivations given in Lee et al. (2018); Wang et al. (2020):
+
+$$
+d _ {n} \propto | [ \Delta \chi ] _ {n} | = \frac {1}{S - 1} \sum_ {t = 1} ^ {S - 1} \left| \frac {\partial}{\partial \theta_ {n}} \| \mathbf {J} _ {\mathbf {t}} \mathbf {1} \| _ {2} ^ {2} \right| \tag {4}
+$$
+
+We call $d_{n}$ the sensitivity score of parameter $\theta_{n}$ .
+
+This criterion will not be well-normed across different types of parameters. This is due to numerous factors, including differing activation functions used for each gate, and differing distributions between the input and recurrent state. Consequently, the variance of our objective is not uniform between groups of parameters (see Section 3.3 for empirical confirmation). We compensate for this by dividing our criterion by the expected magnitude of the gradient for each parameter. The normalized sensitivity score becomes:
+
+$$
+d_{n} = \left[ \Delta \tilde{\chi} \right]_{n} \approx \frac{\left[ \Delta \chi \right]_{n}}{\left| \gamma_{n} \right|}, \qquad \gamma_{n} = \mathbb{E}_{\tilde{\mathcal{D}}} \left[ \sum_{t = 1}^{S} \sum_{i = 1}^{O} \frac{\partial \tilde{h}_{i}^{(t)}}{\partial \theta_{n}} \right] \tag{5}
+$$
+
+where $\tilde{\mathcal{D}}$ is either the data distribution or an approximate distribution (since we are only trying to estimate the approximate variance of the gradient distribution), and the sequence $\{\widetilde{\mathbf{h}}^{(t)}\}$ is computed on inputs from that distribution. This normalization scheme is similar in motivation to the normalization proposed in Pascanu et al. (2013), and allows us to consider all recurrent models with only one additional computation.
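
As a minimal sketch of this step, the normalization in Equation 5 is just an elementwise ratio of the raw scores to the expected gradient magnitudes; the `eps` guard against near-zero $\gamma_n$ is our own assumption, not from the text:

```python
import numpy as np

def normalize_scores(delta_chi, gamma, eps=1e-12):
    """Eq. 5: divide each raw score by the expected gradient magnitude of its group."""
    return np.abs(delta_chi) / (np.abs(gamma) + eps)  # eps avoids division by ~zero

# A group with a large expected gradient (gamma = 4.0) has its raw score discounted,
# putting parameter groups with different gradient scales on a common footing.
d = normalize_scores(np.array([2.0, 0.5]), np.array([4.0, 0.5]))
```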
+
+For our pruning objective, we simply take the $K$ weights with the largest sensitivity scores, as those represent the parameters which most affect the Jacobian objective near the initialization. Formally, we find the $K$-th largest sensitivity, $\tilde{d}_K$, and set $c_{n} = \mathbb{1}[d_{n} \geq \tilde{d}_{K}]$. Empirically, we find that the sensitivity score remains an effective metric even if the weights are not restricted to a neighborhood where the Taylor expansion is valid (see Figure 2 for details).
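
The selection step can be sketched as follows (`topk_mask` is a hypothetical helper; note that ties at the threshold $\tilde{d}_K$ would keep slightly more than $K$ weights):

```python
import numpy as np

def topk_mask(d, K):
    """c_n = 1[d_n >= d~_K]: keep the K parameters with the largest sensitivity."""
    kth = np.sort(d.ravel())[::-1][K - 1]   # the K-th largest sensitivity, d~_K
    return (d >= kth).astype(np.int8)

scores = np.array([[0.9, 0.1],
                   [0.5, 0.3]])
mask = topk_mask(scores, K=2)               # keeps the 0.9 and 0.5 entries
```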
+
+This objective is simple to compute, requiring only two backward passes using auto-differentiation. Furthermore, as we only depend on the Jacobian-vector product, it has a memory cost linear in the parameters.
+
+Algorithm 1 Pruning Recurrent Networks
+Require: Parameters $\theta$ , Dataset $\mathcal{D}$ , Approximate Dataset $\tilde{\mathcal{D}}$ , Sparsity Level $K$ , Sequence Length $S$ , Number to Sample $P$ , Sequence Horizon $U$
+1: for all $p = 1 \dots P$ do
+2: Sample sequence $(\tilde{\mathbf{X}}, \tilde{\mathbf{Y}}) \sim \tilde{\mathcal{D}}$ , $(\mathbf{X}, \mathbf{Y}) \sim \mathcal{D}$
+3: for all $t = 1 \dots S$ do
+4: Compute $\{\tilde{\mathbf{h}}^{(t)}\}$ with $(\tilde{\mathbf{X}}, \tilde{\mathbf{Y}})$ , $\{\mathbf{h}^{(t)}\}$ with $(\mathbf{X}, \mathbf{Y})$
+5: end for
+6: end for
+7: Compute $\gamma$ using $\{\tilde{\mathbf{h}}^{(t)}\}$ and Equation 5
+8: for all $u = 1 \dots U$ do
+9: Compute $\chi^{(u)} \gets \| \mathbf{J}_{S-u} \mathbf{1} \|_2^2 = \mathbb{E} \left[ \sum_{i,j} \left| \frac{\partial h_i^{(S-u)}}{\partial h_j^{(S-u-1)}} \right|^2 \right]$
+10: Compute $\Delta \chi^{(u)} \gets |\nabla_\theta \chi^{(u)}|$
+11: end for
+12: Compute $\mathbf{d} \gets \frac{\sum_{u} [\Delta \chi^{(u)}]}{|\gamma|}$
+13: $\tilde{d} \gets \text{SortDescending}(\mathbf{d})$
+14: $c_n \gets \mathbb{1}[d_n \geq \tilde{d}_K]$ , $\forall n$
+15: return c
+
+# 2.4 COMPARISON TO EXTANT METHODS
+
+There are two recently proposed criteria for pruning at initialization: GraSP (Wang et al., 2020), and SNIP (Lee et al., 2018). They are given by:
+
+$$
+\operatorname{GraSP}(\boldsymbol{\theta}) = \boldsymbol{\theta}^{T} \mathbf{H} \mathbf{g} \tag{6}
+$$
+
+$$
+\operatorname{SNIP}(\boldsymbol{\theta}) = \left| \boldsymbol{\theta}^{T} \mathbf{g} \right| \tag{7}
+$$
+
+where $[\mathbf{H}]_{ij} = \mathbb{E}\left[\frac{\partial^2\mathcal{L}}{\partial\theta_i\partial\theta_j}\right]$ and $\mathbf{g} = \mathbb{E}[\nabla_{\theta}\mathcal{L}]$ are the expected Hessian and gradient, respectively.
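
Equations 6 and 7 are written in their scalar forms; per parameter, the scores are elementwise products. The sketch below computes both on a toy quadratic loss $\mathcal{L}(\boldsymbol{\theta}) = \frac{1}{2}\boldsymbol{\theta}^T \mathbf{A} \boldsymbol{\theta} - \mathbf{b}^T \boldsymbol{\theta}$, chosen so that $\mathbf{g}$ and $\mathbf{H}\mathbf{g}$ are exact; all names and sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.normal(size=(n, n))
A = M @ M.T + np.eye(n)       # symmetric positive-definite Hessian H
b = rng.normal(size=n)
theta = rng.normal(size=n)

g = A @ theta - b             # expected gradient of the quadratic loss
Hg = A @ g                    # Hessian-gradient product (GraSP needs only this, not H itself)

snip_scores = np.abs(theta * g)   # elementwise |theta_n * g_n|
grasp_scores = theta * Hg         # elementwise theta_n * [Hg]_n
```

In practice $\mathbf{H}\mathbf{g}$ is obtained with a Hessian-vector product (one extra backward pass), which is why GraSP never materializes $\mathbf{H}$.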
+
+Both methods rely on the gradient of the loss with respect to the weights, with SNIP being more dependent on this gradient than GraSP. Thus, the main term of interest is $\mathbf{g}$ , which can be decomposed as:
+
+$$
+\mathbf{g}_{t} = \tilde{\mathbf{G}}_{t} \nabla_{\mathbf{h}^{(t)}} \tilde{L}_{t} \tag{8}
+$$
+
+where $\tilde{\mathbf{G}}_t$ is the Jacobian of $\mathbf{h}^{(t)}$ with respect to $\boldsymbol{\theta}$, as defined in Equation 1.
+
+A consequence of the small singular values of $\mathbf{J}$ is that the successive terms of $\tilde{\mathbf{G}}_t$ tend to vanish over time. Loss-based gradient objectives are therefore biased toward the explicit dependence of $\mathbf{h}^{(t)}$ on $\theta$, neglecting the long-term dependence between $\mathbf{h}^{(t)}$ and $\mathbf{h}^{(t-1)}$.
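
This vanishing is easy to reproduce numerically: when the step-to-step Jacobian has spectral norm below 1, the contribution flowing back $k$ steps decays roughly geometrically in $k$. A small illustration with an i.i.d. Gaussian stand-in for $\mathbf{J}$ (sizes and scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
H = 20
J = rng.normal(0.0, 0.1 / np.sqrt(H), (H, H))  # spectral norm well below 1

# ||J^k||_2 bounds the size of the term in G~_t that reaches back k steps.
norms = []
P = np.eye(H)
for k in range(10):
    P = J @ P                                  # J^(k+1)
    norms.append(float(np.linalg.norm(P, 2)))  # spectral norm, shrinking with k
```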
+
+In certain cases (e.g., when the hidden state is small relative to the input), SNIP and GraSP prune many recurrent connections while leaving the input connections largely untouched (see Section 3). In contrast, our algorithm considers the $\mathbf{J}$ matrix explicitly, which mitigates the problem of pruning too many recurrent connections.
+
+| Architecture | # of Parameters | Random | Ours | Dense | Δ |
+| --- | --- | --- | --- | --- | --- |
+| Basic RNN Cell | 171k → 8.5k | 9.51±3.98 | 7.57±0.20 | 7.08±2.08 | +4.39 |
+| Standard LSTM | 684k → 34.2k | 2.17±0.18 | 1.66±0.16 | 0.80±0.18 | +0.86 |
+| Peephole LSTM | 1.32M → 66.2k | 1.80±0.18 | 1.24±0.08 | 0.74±0.10 | +0.50 |
+| GRU | 513k → 25.7k | 1.50±0.08 | 1.46±0.05 | 0.77±0.14 | +0.69 |
+
+# 3 EVALUATION
+
+For the following experiments, we compute the $\ell_2$ norm of $\mathbf{J}\mathbf{1}$ using a single minibatch of 64 data samples, and using only the last 4 steps of the sequence.
+
+# 3.1 SEQUENTIAL MNIST BENCHMARK
+
+We first test our method on the sequential MNIST benchmark (Lee et al., 2018), a relatively small dataset which contains long term dependencies. We begin by verifying that our algorithm is robust across several common recurrent architectures. The results in Table 1 confirm that our method is not dependent on any specific recurrent neural architecture choice.
+
+Our principal results for the Sequential MNIST benchmark are presented in Table 2. Again, we see that our network's performance improves with network size, with the largest gap between our method and the others appearing when the network grows to 1600 units. We observe that SNIP and GraSP are surprisingly effective at small scales with good initialization, but fail when scaled to larger network sizes. Of the baselines, only random pruning remains competitive at scale, a fact we found quite interesting. For reference, we also provide results for standard L2 pruning (Reed, 1993) (whose schedule can be found in the appendix) and random pruning. The reader should note that L2 pruning requires an order of magnitude more resources than the other methods due to its prune-retrain cycle; it is considered here only as a lower bound for network compression. Furthermore, while GraSP calls for computing the Hessian-gradient product across the entire dataset, this is computationally infeasible in our case, so for fairness we instead compute it with a single minibatch.
+
+Table 1: Validation Error % of Various 400 Unit RNN Architectures after 50 Epochs of Training on Seq. MNIST; our method works well across all common recurrent architectures. A sparsity of $95\%$ was used in all experiments.
+
+| Pruning Scheme | 100 Units | 400 Units | 1600 Units |
+| --- | --- | --- | --- |
+| Unnorm. SNiP | 88.9±0.1 | 88.8±0.1 | 89.0±0.1 |
+| Norm. SNiP | 4.09±1.06 | 1.52±0.11 | 1.10±0.11 |
+| Unnorm. GraSP | 88.6±0.1 | 88.7±0.1 | 88.6±0.1 |
+| Norm. GraSP | 4.28±0.57 | 1.62±0.24 | 1.22±0.14 |
+| Random | 2.78±0.25 | 1.50±0.08 | 1.15±0.12 |
+| Ours | 3.09±0.31 | 1.46±0.05 | 1.01±0.05 |
+| L2 | 1.03±0.05 | 0.71±0.03 | 0.57±0.02 |
+
+Table 2: Benchmarking of Various Pruning Algorithms on $95\%$ Sparse GRUs on seq. MNIST. SNIP, GraSP and Random pruning are competitive for smaller models, but their results tend to diminish as the network size increases. Our method obtains strong results even at large network sizes. Further experimental details can be found in the appendix.
+
+In the preceding section, we postulated that normalization of the objective was necessary for strong performance (see Equation 5). This intuition is confirmed in Table 2, where we present both the normalized results (with Glorot initialization (Glorot & Bengio, 2010) and $\gamma$ normalization) and the unnormalized results (without both). Indeed, we see that this normalization is crucial for recurrent architectures: without it, all of the retained network weights concentrate in a single gate, which proved prohibitive to training.
+
+Finally, in Table 3, we examine the performance of our algorithm at various sparsity levels. Our algorithm continues to outperform random pruning, even at high sparsity levels.
+
+| Sparsity Level (%) | # of Parameters | Random | Ours | Dense | Δ |
+| --- | --- | --- | --- | --- | --- |
+| 90 | 68.4k | 1.12±0.16 | 1.05±0.08 | 0.63±0.02 | +0.42 |
+| 95 | 34.2k | 1.50±0.08 | 1.46±0.05 | 0.77±0.10 | +0.69 |
+| 98 | 13.7k | 1.82±0.22 | 1.77±0.07 | 0.67±0.13 | +1.10 |
+
+# 3.2 LINGUISTIC SEQUENCE PREDICTION
+
+We assess our models on 3 sequence prediction benchmarks: 1) WikiText-2 (wiki2); 2) WikiText-103 (wiki103), an expanded version of (1) with 10 times more tokens; and 3) a truncated version of the One Billion Words (1b) benchmark (Chelba et al., 2013), where only the top 100,000 vocabulary tokens are used. The full experiment parameters are given in the appendix. We report the training and validation perplexities on a random $1\%$ sample of the training set in Table 4.
+
+Table 3: Sparsity Level vs Validation Error % on 400 Unit GRUs, for seq. MNIST. Our method consistently beats random pruning.
+
+| Dataset | Random | Ours | Dense | Δ |
+| --- | --- | --- | --- | --- |
+| wiki2 | 22.66 | 20.54 | 10.479 | +10.61 |
+| wiki103 | 49.65 | 46.65 | 35.87 | +10.78 |
+| Trunc. 1b | 59.17 | 53.26 | 38.98 | +14.28 |
+| # of Parameters | 960k | 960k | 19.2M | - |
+
+Table 4: Training Perplexities of Sparse Models on Large Language Benchmarks. Our method reduces the perplexity score across all benchmarks, often significantly; however, a large gap to the dense performance remains. Parameters are reported only for the recurrent layer, as other layers were not pruned during training.
+
+From the results, it is clear that our algorithm succeeds in decreasing perplexity across all language tasks. Despite their varying difficulties, our algorithm speeds up initial convergence on all tasks and maintains an advantage throughout training.
+
+Finally, we perform an ablation experiment on the Penn Treebank Dataset (PTB) with an 800 unit GRU at different sparsity levels. The results are reported in Table 5.
+
+| Sparsity | 0% | 20% | 40% | 60% | 70% | 80% | 90% | 95% | 98% |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Perplexity | 156.16 | 160.32 | 165.13 | 173.51 | 178.55 | 184.85 | 194.79 | 208.14 | 228.22 |
+| Parameters | 2.88M | 2.30M | 1.72M | 1.15M | 864K | 576K | 288K | 144K | 57.6K |
+
+Table 5: Validation Perplexities of Pruned 800-unit GRU Models on Penn Treebank. For a simple comparison we do not finetune these models, or apply any regularization tricks besides early stopping.
+
+The loss from sparsity increases dramatically as the percentage of parameters remaining approaches zero. This trend is similar to that reported in Gale et al. (2019) and other prior works. For reference, a dense 200 unit GRU (360k parameters) achieves 196.31 perplexity while a 100 unit GRU (150k parameters) achieves 202.97 perplexity.
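
The parameter counts quoted here and in Table 5 are consistent with counting only the three gate weight blocks of the GRU's recurrent layer, $3h(h + x)$ with biases excluded, under the assumption (ours, not stated in the text) of an input/embedding size of 400:

```python
def gru_recurrent_params(hidden, inputs):
    # Three gates (update, reset, candidate), each an (hidden x (hidden + inputs)) block.
    return 3 * hidden * (hidden + inputs)

assert gru_recurrent_params(800, 400) == 2_880_000  # 2.88M dense model in Table 5
assert gru_recurrent_params(200, 400) == 360_000    # 360k dense reference
assert gru_recurrent_params(100, 400) == 150_000    # 150k dense reference
```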
+
+# 3.3 QUALITATIVE ANALYSIS
+
+The success of our algorithm can be partially attributed to an effective distribution of connections across hidden units. Whereas many of the other algorithms are overly concentrated in certain gates and biased towards the input weights, our algorithm effectively distributes sparse connections across the entire weight matrix. We discuss the distribution of remaining connections for a 400 unit GRU in Figure 1. We also give a set of sample connections under each algorithm in Figure 3.
+
+Finally, we perform an empirical study of the evolution of the Jacobian spectrum to verify our hypothesis on recurrence preservation. We examine a 400-unit GRU trained on sequential MNIST, with a dense network, our pruning scheme, and random pruning respectively. As Figure 2 shows, after 50000 training steps our Jacobian has both a higher mean and far fewer near-zero singular values, which helps to explain our performance and justify the intuition behind our algorithm. The spectra at initialization also further confirm that the initial singular values of $\mathbf{J}$ are small.
+
+
+(a) SNiP. I/R Ratio: 0.205
+
+
+(b) GraSP. I/R Ratio: 0.124
+
+
+(c) Ours. I/R Ratio: 0.094
+
+
+Figure 1: Plot of Remaining Connections by Gate and Type. SNiP and GraSP consistently prune recurrent connections at a much higher ratio than input connections. The ratio of remaining input to recurrent (I/R) connections is given for each method; the dense ratio is 0.07 for comparison. SNiP and GraSP also exhibit severe imbalance between gates, while our imbalance is far milder.
+(a) Initialization, Pre-Pruning
+
+
+(b) SNiP
+Figure 2: Singular Value Magnitude Histograms after 50 epochs of Training, for a 400 Unit GRU on seq. MNIST. Compared to SNiP, our method prevents spectral concentration at 0, with a mean singular value magnitude of 0.31 versus SNiP's 0.18. This helps to explain our relative performance gain.
+
+
+(c) Ours
+
+# 4 OTHER RELATED WORK
+
+Methods for Pruning Recurrent Networks: Our method is the latest in a series of attempts to generate sparse RNNs. Perhaps the most well-known algorithm for sparse network pruning is Narang et al. (2017a), a modification of magnitude-based pruning wherein the pruning threshold evolves according to several hyperparameters that must be tuned by the user. Kliegl et al. (2017) uses iterative trace norm regularization to prune RNNs used for speech recognition, effectively reducing the sum of the singular values of the weight matrices; however, we found in our experiments that these values were often degenerate near 0, and the technique is iterative. Narang et al. (2017b) uses iterative group lasso regularization to induce block sparsity in recurrent neural networks. Wen et al. (2017) alters the structure of LSTMs to decrease their memory requirements; their intrinsic sparse structures make structural assumptions about the sparsity distribution across the network. Dai et al. (2018) uses magnitude-based pruning coupled with a special RNN structure
+
+
+(a) SNiP
+Figure 3: Map of remaining connections, with the x-axis indicating the output size (flattened across gates) and the y-axis indicating the input size. Our method is significantly more spread out across neurons and gates than the others.
+
+
+(b) GraSP
+
+
+(c) Ours
+
+to make RNNs more efficient; the pruning algorithm itself is magnitude based. See et al. (2016) uses iterative pruning and retraining to prune a recurrent model for neural translation. The underlying technique is simple iterative pruning, and the final pruning percentage is only $80\%$; while fine for their application, we are interested in novel pruning techniques and higher levels of sparsity.
+
+In summary, all the methods discussed above utilize some variant of L1 or L2 pruning to actually sparsify the network. The novel advances all relate to pruning schedules, modifications to recurrent architectures, or small transformations of the L1 or L2 objective.
+
+Other Pruning Techniques: Many extant pruning techniques are applicable to recurrent network architectures, even if these methods were not designed from the ground up to work in the recurrent case. Lee et al. (2018) and Wang et al. (2020) both provide a pruning objective that can be used to prune networks before training begins. They are considered extensively in this work. In Frankle & Carbin (2019), it is shown that at initialization networks contain a small sparse set of connections that can achieve similar results to fully dense networks. However, no known method yet exists to recover these sparse networks to the full extent demonstrated in that work. Han et al. (2015) showed impressive results with magnitude based pruning. Follow up work made further use of magnitude-based pruning techniques (Carreira-Perpinan & Idelbayev, 2018; Guo et al., 2016); however, these techniques are primarily iterative.
+
+Mean Replacement Pruning (Evci et al., 2018) uses the absolute value of the Taylor expansion of the loss as a criterion for which units in a network should be pruned. This method cannot be used with BatchNorm and achieves results comparable to magnitude-based pruning. Bayesian methods have recently seen some success in pruning neural networks: Ullrich et al. (2017), itself an extension of Nowlan & Hinton (1992), is the standard citation here. In essence, this method works by re-training a network while also fitting the weights to a GMM prior via a KL penalty. Molchanov et al. (2017) is another Bayesian pruning technique that learns a dropout rate via variational inference, which can subsequently be used to prune the network. Finally, there exist several classical pruning techniques. Ishikawa (1996); Chauvin (1989) enforce sparsity penalties during the training process. LeCun et al. (1990); Hassibi et al. (1993) perform Hessian-based pruning, using the Hessian to derive a sensitivity metric for the network's weights.
+
+While many of the above methods are effective in general, they do not explicitly consider the specifics of RNNs and sequential prediction.
+
+Other Related Work: Several interesting papers have recently taken a critical look at the problem of network pruning (Liu et al., 2018; Crowley et al., 2018). The problem of network compression is closely related to network pruning. It would be impossible to cite all of the relevant papers here, and no good literature survey exists. Some worthwhile references are Gupta et al. (2015); Gong et al. (2014); Courbariaux et al. (2016); Chen et al. (2018b); Howard et al. (2017). Both problems often share a common goal of reducing the size of a network. Some notable papers explicitly consider the problem of recurrent network compression (Ye et al., 2018; Lobacheva et al., 2017; Wang et al., 2018).
+
+In the context of the above work, our method is not iterative and can be fully completed before training even begins. The tradeoffs in accuracy can be remedied by scaling up the network, since there is no longer a need to store fully dense weights during training. Furthermore, our objective is specifically adapted to the sequential prediction context in which RNNs are deployed. Ours is the first pruning algorithm to consider the temporal Jacobian spectrum as a key to generating faster converging and better performing sparse RNNs. Our method not only performs better in practice than other zero-shot methods, but also yields key insight into the factors behind RNN performance, which may aid the development of new architectures and training schemes for sequential prediction.
+
+# 5 CLOSING REMARKS
+
+In this work, we presented an effective and cheap single-shot pruning algorithm adapted toward recurrent models. Throughout the work, we continually found the importance of the Jacobian spectrum surprising and interesting. Future work could further examine the relationship between network width, the Jacobian spectrum, and generalization.
+
+# REFERENCES
+
+Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. ICML, 2018.
+Miguel A Carreira-Perpinan and Yerlan Idelbayev. "learning-compression" algorithms for neural net pruning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8532-8541, 2018.
+Yves Chauvin. A back-propagation algorithm with optimal use of hidden units. In Advances in neural information processing systems, pp. 519-526, 1989.
+Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillip Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.
+Minmin Chen, Jeffrey Pennington, and Samuel S Schoenholz. Dynamical isometry and a mean field theory of rnns: Gating enables signal propagation in recurrent neural networks. arXiv preprint arXiv:1806.05394, 2018a.
+Patrick Chen, Si Si, Yang Li, Ciprian Chelba, and Cho-Jui Hsieh. Groupreduce: Block-wise low-rank approximation for neural language model shrinking. In Advances in Neural Information Processing Systems, pp. 10988-10998, 2018b.
+Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to+ 1 or-1. arXiv preprint arXiv:1602.02830, 2016.
+Elliot J Crowley, Jack Turner, Amos Storkey, and Michael O'Boyle. Pruning neural networks: is it time to nip it in the bud? arXiv preprint arXiv:1810.04622, 2018.
+Xiaoliang Dai, Hongxu Yin, and Niraj K Jha. Grow and prune compact, fast, and accurate lstms. arXiv preprint arXiv:1805.11797, 2018.
+Utku Evci, Nicolas Le Roux, Pablo Castro, and Leon Bottou. Mean replacement pruning. 2018.
+Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. *ICLR*, 2019.
+Trevor Gale, Erich Elsen, and Sara Hooker. The state of sparsity in deep neural networks. arXiv preprint arXiv:1902.09574, 2019.
+Dar Gilboa, Bo Chang, Minmin Chen, Greg Yang, Samuel S Schoenholz, Ed H Chi, and Jeffrey Pennington. Dynamical isometry and a mean field theory of lstms and grus. arXiv preprint arXiv:1901.08987, 2019.
+Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249-256, 2010.
+Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
+Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient dnns. In Advances In Neural Information Processing Systems, pp. 1379-1387, 2016.
+Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In International Conference on Machine Learning, pp. 1737-1746, 2015.
+Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pp. 1135-1143, 2015.
+
+Babak Hassibi, David G Stork, and Gregory J Wolff. Optimal brain surgeon and general network pruning. In IEEE international conference on neural networks, pp. 293-299. IEEE, 1993.
+Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
+Masumi Ishikawa. Structural learning with forgetting. Neural networks, 9(3):509-521, 1996.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+Markus Kliegl, Siddharth Goyal, Kexin Zhao, Kavya Srinet, and Mohammad Shoeybi. Trace norm regularization and faster inference for embedded speech recognition rnns. arXiv preprint arXiv:1710.09026, 2017.
+Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in neural information processing systems, pp. 598-605, 1990.
+Namhoon Lee, Thalaiyasingam Ajanthan, and Philip HS Torr. Snip: Single-shot network pruning based on connection sensitivity. arXiv preprint arXiv:1810.02340, 2018.
+Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. arXiv preprint arXiv:1810.05270, 2018.
+Ekaterina Lobacheva, Nadezhda Chirkova, and Dmitry Vetrov. Bayesian sparsification of recurrent neural networks. arXiv preprint arXiv:1708.00077, 2017.
+Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2498-2507. JMLR.org, 2017.
+Sharan Narang, Erich Elsen, Gregory Diamos, and Shubho Sengupta. Exploring sparsity in recurrent neural networks. arXiv preprint arXiv:1704.05119, 2017a.
+Sharan Narang, Eric Undersander, and Gregory Diamos. Block-sparse recurrent neural networks. arXiv preprint arXiv:1711.02782, 2017b.
+Steven J Nowlan and Geoffrey E Hinton. Simplifying neural networks by soft weight-sharing. Neural computation, 4(4):473-493, 1992.
+Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International conference on machine learning, pp. 1310-1318, 2013.
+Russell Reed. Pruning algorithms-a survey. IEEE transactions on Neural Networks, 4(5):740-747, 1993.
+Abigail See, Minh-Thang Luong, and Christopher D Manning. Compression of neural machine translation models via pruning. arXiv preprint arXiv:1606.09274, 2016.
+Karen Ullrich, Edward Meeds, and Max Welling. Soft weight-sharing for neural network compression. arXiv preprint arXiv:1702.04008, 2017.
+Chaoqi Wang, Guodong Zhang, and Roger Grosse. Picking winning tickets before training by preserving gradient flow. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SkgsACVKPH.
+Zhisheng Wang, Jun Lin, and Zhongfeng Wang. Hardware-oriented compression of long short-term memory for efficient inference. IEEE Signal Processing Letters, 25(7):984-988, 2018.
+Wei Wen, Yuxiong He, Samyam Rajbhandari, Minjia Zhang, Wenhan Wang, Fang Liu, Bin Hu, Yiran Chen, and Hai Li. Learning intrinsic sparse structures within long short-term memory. arXiv preprint arXiv:1709.05027, 2017.
+Jinmian Ye, Linnan Wang, Guangxi Li, Di Chen, Shandian Zhe, Xinqi Chu, and Zenglin Xu. Learning compact recurrent neural networks with block-term tensor decomposition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9378-9387, 2018.
+
+# 6 APPENDIX A - EXPERIMENT HYPERPARAMETERS
+
+Unless otherwise specified, our model consists of a single-layered RNN, followed by an appropriately sized softmax layer with sigmoidal activation. The softmax layer is initialized with standard Xavier initialization. We use a minibatch size of 64 samples during training, and optimize using the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 1e-3. We use an initial hidden state of zeros for all experiments.
+
+For all networks, we only prune the recurrent layer while leaving prior and subsequent layers untouched, since we are primarily interested in performance of recurrent layers. We trained all networks with a single Nvidia P100 GPU.
+
+# 6.1 SEQUENTIAL MNIST
+
+For seq. MNIST, we follow the same process as SNiP, feeding in images row by row. We used $\mathcal{N}(0,0.1)$ initialization for our own method, and Glorot initialization for SNiP and GraSP. $\gamma$ is computed from data sampled from a $\mathcal{N}(0,0.1)$ distribution. We use only the activations from the last time step. For L2 pruning, the density was annealed according to the schedule $\{0.8,0.6,0.4,0.2,0.1,0.05,0.02,0.01\}$ every 10k training steps.
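
A minimal sketch of that L2 baseline, eliding the roughly 10k training steps between pruning events (the schedule values are from the text; everything else is illustrative):

```python
import numpy as np

schedule = [0.8, 0.6, 0.4, 0.2, 0.1, 0.05, 0.02, 0.01]  # density after each phase

def magnitude_mask(W, density):
    """Keep the largest-magnitude weights so that `density` of them survive."""
    k = max(1, int(round(density * W.size)))
    thresh = np.sort(np.abs(W).ravel())[::-1][k - 1]     # k-th largest magnitude
    return np.abs(W) >= thresh

rng = np.random.default_rng(3)
W = rng.normal(size=(100, 100))
for density in schedule:
    # In the real schedule, ~10k retraining steps would run between these prunes.
    W = W * magnitude_mask(W, density)
```

After the final phase only 1% of the 10,000 weights remain nonzero, matching the schedule's terminal density.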
+
+# 6.2 LANGUAGE BENCHMARKS
+
+We use 2000-unit LSTMs for all language benchmarks. To reduce the variance of our comparison, we freeze the embedding layer before training. We use sampled sequential cross-entropy loss with 1000 tokens for wiki103 and 1b, and standard cross-entropy for wiki2. We use He initialization for all experiments.
+
+Wiki2 was trained for 20k training steps (13 epochs), while wiki103 was trained for 12k training steps, and 1b was trained for 30k training steps.
+
+# 7 APPENDIX B - ADDITIONAL STUDIES
+
+# 7.1 INITIALIZATIONS
+
+We benchmark the performance of our algorithm against random pruning using 3 additional initializations, seen in Table 6. Under high variance, the first-order expansion we use to estimate our objective fails to hold, so we perform significantly worse than the random benchmark.
+
+| Initialization Scheme | Ours | Random |
+| --- | --- | --- |
+| Glorot | 1.219 | 1.36 |
+| N(0,1) | 3.30 | 1.38 |
+| uniform(0,0.1) | 1.73 | 1.32 |
+
+Table 6: Benchmarking of Validation Error % on Different Initializations, for the Sequential MNIST Task with a 400 Unit GRU. Our algorithm successfully beats random pruning on well-conditioned normal distributions, but fails on high-variance and uniform distributions.
+
+# 7.2 RUNTIME
+
+We benchmark the runtimes of SNiP, GraSP and our own algorithm, using only a single batch and a single time iteration for fairness; the results are shown in Table 7.
+
+# 8 APPENDIX C - TRAINING CURVES
+
+We present a sample training curve of a 400 unit GRU for sequential MNIST below. As can be seen, random pruning is the only competitive algorithm in this instance.
+
+| Pruning Scheme | Runtime (seconds) |
+| --- | --- |
+| SNiP | 4.718 |
+| GraSP | 16.406 |
+| Ours | 4.876 |
+
+Table 7: Benchmarking of Pruning Algorithm Runtimes; our method is faster than GraSP as the Hessian is larger than the Jacobian, but slower than SNiP for a single time instance. It should be noted that our algorithm works best when iterated across several time steps, while GraSP requires iteration across the entire training set, and SNiP requires only a single computation.
+
+
+Figure 4: Plot of Log Train Loss for a 400 Unit GRU, trained on Sequential MNIST. GraSP is the worst performing, followed by SNiP and then Random, which is on par with our method. L2 is shown as a lower bound. It is surprising that random is competitive, but it is free from the gate imbalance exhibited by SNiP and GraSP.
+
+Subsequently we present a sample training curve in Figure 5 for the 1b words experiment, detailed in Table 4. Our algorithm provides significant benefit over random pruning, but still lags behind the dense model.
+
+
+Figure 5: Plot of Log Train Perplexity on the 1b dataset, with 2k LSTM network. Our model clearly outperforms random pruning by a significant margin, however more work is needed before we achieve near-dense performance.
\ No newline at end of file
diff --git a/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/images.zip b/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..c625fdec9da4cbe4eeb1717bdc97faa7cef1b317
--- /dev/null
+++ b/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8636d6630016e10ccb30b328a347768266e815c3c09d73ec5e34485d7adf7b80
+size 331594
diff --git a/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/layout.json b/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..04a5834cd1ef590e0506d67d34ca43d974a169ec
--- /dev/null
+++ b/oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b6807c4e9a98850f10550bb59770a41a264212698bc5367b8a58209946cba07d
+size 401845
diff --git a/ontheconvergenceoffedavgonnoniiddata/a6fa98c7-5455-427d-bd93-84e9253f15ac_content_list.json b/ontheconvergenceoffedavgonnoniiddata/a6fa98c7-5455-427d-bd93-84e9253f15ac_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4bc7bbadec53dab4576d96976fc1db22ab16916e
--- /dev/null
+++ b/ontheconvergenceoffedavgonnoniiddata/a6fa98c7-5455-427d-bd93-84e9253f15ac_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cf10a3f0c873085946a3b95dd02e2a07a0b8a2f999216e4831e80191aad9fb55
+size 191582
diff --git a/ontheconvergenceoffedavgonnoniiddata/a6fa98c7-5455-427d-bd93-84e9253f15ac_model.json b/ontheconvergenceoffedavgonnoniiddata/a6fa98c7-5455-427d-bd93-84e9253f15ac_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..29338d497e86247b8d2b702978d70b776ece1f48
--- /dev/null
+++ b/ontheconvergenceoffedavgonnoniiddata/a6fa98c7-5455-427d-bd93-84e9253f15ac_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cfc8aacfdad4e0c15ff31db65f308094089c82ec579556b9d4f74a05c1e8d8f9
+size 224756
diff --git a/ontheconvergenceoffedavgonnoniiddata/a6fa98c7-5455-427d-bd93-84e9253f15ac_origin.pdf b/ontheconvergenceoffedavgonnoniiddata/a6fa98c7-5455-427d-bd93-84e9253f15ac_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..972e1cd4be0009be977eb071fe7f77cfa6a4c4fb
--- /dev/null
+++ b/ontheconvergenceoffedavgonnoniiddata/a6fa98c7-5455-427d-bd93-84e9253f15ac_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7e92359019b7d9dac04c1731157867238c7cedfde5138b96540c6f4eadcec5ce
+size 742569
diff --git a/ontheconvergenceoffedavgonnoniiddata/full.md b/ontheconvergenceoffedavgonnoniiddata/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3efd365a13fcf4b3938b862b8eb63294bf9f9d17
--- /dev/null
+++ b/ontheconvergenceoffedavgonnoniiddata/full.md
@@ -0,0 +1,931 @@
+# ON THE CONVERGENCE OF FEDAVG ON NON-IID DATA
+
+Xiang Li*
+
+School of Mathematical Sciences
+
+Peking University
+
+Beijing, 100871, China
+
+smslixiang@pku.edu.cn
+
+Wenhao Yang*
+
+Center for Data Science
+
+Peking University
+
+Beijing, 100871, China
+
+yangwenhaosms@pku.edu.cn
+
+Kaixuan Huang*
+
+School of Mathematical Sciences
+
+Peking University
+
+Beijing, 100871, China
+
+hackyhuang@pku.edu.cn
+
+Shusen Wang
+
+Department of Computer Science
+
+Stevens Institute of Technology
+
+Hoboken, NJ 07030, USA
+
+shusen.wang@stevens.edu
+
+Zhihua Zhang
+
+School of Mathematical Sciences
+
+Peking University
+
+Beijing, 100871, China
+
+zhzhang@math.pku.edu.cn
+
+# ABSTRACT
+
+Federated learning enables a large number of edge computing devices to jointly learn a model without data sharing. As a leading algorithm in this setting, Federated Averaging (FedAvg) runs Stochastic Gradient Descent (SGD) in parallel on a small subset of the total devices and averages the sequences only once in a while. Despite its simplicity, it lacks theoretical guarantees under realistic settings. In this paper, we analyze the convergence of FedAvg on non-iid data and establish a convergence rate of $\mathcal{O}\left(\frac{1}{T}\right)$ for strongly convex and smooth problems, where $T$ is the number of SGD steps. Importantly, our bound demonstrates a trade-off between communication efficiency and convergence rate. As user devices may be disconnected from the server, we relax the assumption of full device participation to partial device participation and study different averaging schemes; a low device participation rate can be achieved without severely slowing down the learning. Our results indicate that heterogeneity of data slows down the convergence, which matches empirical observations. Furthermore, we provide a necessary condition for FedAvg on non-iid data: the learning rate $\eta$ must decay, even if full-gradient descent is used; otherwise, the solution will be $\Omega(\eta)$ away from the optimum.
+
+# 1 INTRODUCTION
+
+Federated Learning (FL), also known as federated optimization, allows multiple parties to collaboratively train a model without data sharing (Konevcny et al., 2015; Shokri and Shmatikov, 2015; McMahan et al., 2017; Konevcny, 2017; Sahu et al., 2018; Zhuo et al., 2019). As in centralized parallel optimization (Jakovetic, 2013; Li et al., 2014a,b; Shamir et al., 2014; Zhang and Lin, 2015; Meng et al., 2016; Reddi et al., 2016; Richtárik and Takác, 2016; Smith et al., 2016; Zheng et al., 2016; Shusen Wang et al., 2018), FL lets the user devices (aka worker nodes) perform most of the computation while a central parameter server updates the model parameters using the descent directions returned by the user devices. Nevertheless, FL has three unique characteristics that distinguish it from standard parallel optimization (Li et al., 2019).
+
+First, the training data are massively distributed over an incredibly large number of devices, and the connection between the central server and a device is slow. A direct consequence is slow communication, which motivated communication-efficient FL algorithms (McMahan et al., 2017; Smith et al., 2017; Sahu et al., 2018; Sattler et al., 2019). Federated Averaging (FedAvg) is the first and perhaps the most widely used FL algorithm. It runs $E$ steps of SGD in parallel on a small sampled subset of devices and then averages the resulting model updates via a central server once in a while. Compared with SGD and its variants, FedAvg performs more local computation and less communication.
+
+Second, unlike traditional distributed learning systems, the FL system does not have control over users' devices. For example, when a mobile phone is turned off or WiFi access is unavailable, the central server loses connection to this device. When this happens during training, such a non-responding/inactive device, which is called a straggler, appears tremendously slower than the other devices. Unfortunately, since it has no control over the devices, the system can do nothing but wait for or ignore the stragglers. Waiting for all the devices' responses is obviously infeasible; it is thus impractical to require all the devices to be active.
+
+Third, the training data are non-iid $^2$ , that is, a device's local data cannot be regarded as samples drawn from the overall distribution. The data available locally fail to represent the overall distribution. This not only brings challenges to algorithm design but also makes theoretical analysis much harder. While FedAvg actually works when the data are non-iid (McMahan et al., 2017), FedAvg on non-iid data lacks theoretical guarantees even in the convex optimization setting.
+
+There have been many efforts to develop convergence guarantees for FL algorithms based on the assumptions that (1) the data are iid and (2) all the devices are active. Khaled et al. (2019); Yu et al. (2019); Wang et al. (2019) made the latter assumption, while Zhou and Cong (2017); Stich (2018); Wang and Joshi (2018); Woodworth et al. (2018) made both assumptions. The two assumptions violate the second and third characteristics of FL. The earlier algorithm FedProx (Sahu et al., 2018) does not require the two assumptions and incorporates FedAvg as a special case when the added proximal term vanishes. However, its theory fails to cover FedAvg.
+
+Notation. Let $N$ be the total number of user devices and $K (\leq N)$ be the maximal number of devices that participate in every round's communication. Let $T$ be the total number of SGD steps performed by each device, $E$ be the number of local iterations performed on a device between two communications, and thus $\frac{T}{E}$ is the number of communications.
+
+Contributions. For strongly convex and smooth problems, we establish a convergence guarantee for FedAvg without making the two impractical assumptions: (1) the data are iid, and (2) all the devices are active. To the best of our knowledge, this work is the first to show the convergence rate of FedAvg without making the two assumptions.
+
+We show in Theorems 1, 2, and 3 that FedAvg has an $\mathcal{O}\left(\frac{1}{T}\right)$ convergence rate. In particular, Theorem 3 shows that to attain a fixed precision $\epsilon$ , the number of communications is
+
+$$
+\frac {T}{E} = \mathcal {O} \left[ \frac {1}{\epsilon} \left(\left(1 + \frac {1}{K}\right) E G ^ {2} + \frac {\sum_ {k = 1} ^ {N} p _ {k} ^ {2} \sigma_ {k} ^ {2} + \Gamma + G ^ {2}}{E}\right) \right]. \tag {1}
+$$
+
+Here, $G$ , $\Gamma$ , $p_k$ , and $\sigma_k$ are problem-related constants defined in Section 3.1. The most interesting insight is that $E$ is a knob controlling the convergence rate: neither setting $E$ over-small ( $E = 1$ makes FedAvg equivalent to SGD) nor setting $E$ over-large is good for the convergence.
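
As a quick numerical illustration of this trade-off, the right-hand side of eqn. (1) can be treated as a function of $E$ and minimized directly. The constants below (`K`, `G2`, and `var_term`, the latter standing in for the variance/heterogeneity terms) are illustrative placeholders, not values from the paper:

```python
def comm_rounds(E, K=10, G2=1.0, var_term=5.0):
    """Proxy for T_eps / E in eqn. (1): (1 + 1/K) * E * G^2 + var_term / E."""
    return (1 + 1 / K) * E * G2 + var_term / E

# evaluate the proxy over a grid of local-step counts E
rounds = {E: comm_rounds(E) for E in (1, 2, 4, 8, 16, 64)}
best_E = min(rounds, key=rounds.get)
```

For these particular constants the proxy is minimized at an intermediate $E$; a larger `var_term` shifts the minimizer to a larger $E$, matching the intuition that more local work pays off when stochastic noise and heterogeneity dominate.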
+
+This work also makes algorithmic contributions. We summarize the existing sampling and averaging schemes for FedAvg (which did not have convergence bounds before this work) and propose a new scheme (see Table 1). We point out that a suitable sampling and averaging scheme is crucial for the convergence of FedAvg. To the best of our knowledge, we are the first to theoretically demonstrate
+
+Table 1: Sampling and averaging schemes for FedAvg. $S_{t} \sim \mathcal{U}(N, K)$ means $S_{t}$ is a size- $K$ subset uniformly sampled without replacement from $[N]$ . $S_{t} \sim \mathcal{W}(N, K, \mathbf{p})$ means $S_{t}$ contains $K$ elements that are iid sampled with replacement from $[N]$ with probabilities $\{p_k\}$ . In the latter scheme, $S_{t}$ is not a set.
+
+| Paper | Sampling | Averaging | Convergence rate |
+| --- | --- | --- | --- |
+| McMahan et al. (2017) | $S_t \sim \mathcal{U}(N, K)$ | $\sum_{k \notin S_t} p_k \mathbf{w}_t + \sum_{k \in S_t} p_k \mathbf{w}_t^k$ | - |
+| Sahu et al. (2018) | $S_t \sim \mathcal{W}(N, K, \mathbf{p})$ | $\frac{1}{K} \sum_{k \in S_t} \mathbf{w}_t^k$ | $\mathcal{O}\left(\frac{1}{T}\right)$ |
+| Ours | $S_t \sim \mathcal{U}(N, K)$ | $\frac{N}{K} \sum_{k \in S_t} p_k \mathbf{w}_t^k$ | $\mathcal{O}\left(\frac{1}{T}\right)$ |
+
+that FedAvg with certain schemes (see Table 1) can achieve $\mathcal{O}\left(\frac{1}{T}\right)$ convergence rate in non-iid federated setting. We show that heterogeneity of training data and partial device participation slow down the convergence. We empirically verify our results through numerical experiments.
+
+Our theoretical analysis requires the learning rate to decay (which is known to hinder the convergence rate). Unfortunately, we show in Theorem 4 that the decay of the learning rate is necessary for FedAvg with $E > 1$ , even if full gradient descent is used. If the learning rate is fixed to $\eta$ throughout, FedAvg would converge to a solution at least $\Omega(\eta(E - 1))$ away from the optimum. To establish Theorem 4, we construct a specific $\ell_2$ -norm regularized linear regression model which satisfies our strong convexity and smoothness assumptions.
+
+Paper organization. In Section 2, we elaborate on FedAvg. In Section 3, we present our main convergence bounds for FedAvg. In Section 4, we construct a special example to show the necessity of learning rate decay. In Section 5, we discuss and compare with prior work. In Section 6, we conduct empirical study to verify our theories. All the proofs are left to the appendix.
+
+# 2 FEDERATED AVERAGING (FEDAVG)
+
+Problem formulation. In this work, we consider the following distributed optimization model:
+
+$$
+\min _ {\mathbf {w}} \left\{F (\mathbf {w}) \triangleq \sum_ {k = 1} ^ {N} p _ {k} F _ {k} (\mathbf {w}) \right\}, \tag {2}
+$$
+
+where $N$ is the number of devices, and $p_k$ is the weight of the $k$ -th device such that $p_k \geq 0$ and $\sum_{k=1}^{N} p_k = 1$ . Suppose the $k$ -th device holds the $n_k$ training data: $x_{k,1}, x_{k,2}, \dots, x_{k,n_k}$ . The local objective $F_k(\cdot)$ is defined by
+
+$$
+F _ {k} (\mathbf {w}) \triangleq \frac {1}{n _ {k}} \sum_ {j = 1} ^ {n _ {k}} \ell \left(\mathbf {w}; x _ {k, j}\right), \tag {3}
+$$
+
+where $\ell (\cdot ;\cdot)$ is a user-specified loss function.
+
+Algorithm description. Here, we describe one round (say the $t$ -th) of the standard FedAvg algorithm. First, the central server broadcasts the latest model, $\mathbf{w}_t$ , to all the devices. Second, every device (say the $k$ -th) lets $\mathbf{w}_t^k = \mathbf{w}_t$ and then performs $E(\geq 1)$ local updates:
+
+$$
+\mathbf {w} _ {t + i + 1} ^ {k} \leftarrow \mathbf {w} _ {t + i} ^ {k} - \eta_ {t + i} \nabla F _ {k} \left(\mathbf {w} _ {t + i} ^ {k}, \xi_ {t + i} ^ {k}\right), i = 0, 1, \dots , E - 1
+$$
+
+where $\eta_{t + i}$ is the learning rate (a.k.a. step size) and $\xi_{t + i}^{k}$ is a sample uniformly chosen from the local data. Last, the server aggregates the local models, $\mathbf{w}_{t + E}^{1},\dots ,\mathbf{w}_{t + E}^{N}$ , to produce the new global model, $\mathbf{w}_{t + E}$ . Because of the non-iid and partial device participation issues, the aggregation step can vary.
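
A minimal sketch of this round structure, assuming toy quadratic local objectives $F_k(\mathbf{w}) = \frac{1}{2}\|\mathbf{w} - \mathbf{c}_k\|^2$ (so the local update uses the full gradient $\mathbf{w} - \mathbf{c}_k$ rather than a stochastic one), a fixed small step size, and full device participation; the centers and weights are made up:

```python
import numpy as np

def fedavg_round(w_global, centers, p, E, eta):
    """One FedAvg round: E local gradient steps per device, then averaging."""
    local_models = []
    for c_k in centers:
        w = w_global.copy()            # each device starts from w_t
        for _ in range(E):             # E local full-gradient steps
            w -= eta * (w - c_k)       # gradient of 0.5 * ||w - c_k||^2
        local_models.append(w)
    # full-participation aggregation: w_{t+E} = sum_k p_k * w^k_{t+E}
    return sum(pk * wk for pk, wk in zip(p, local_models))

centers = [np.array([0.0]), np.array([2.0])]   # local minimizers (made up)
p = [0.5, 0.5]                                 # device weights
w = np.array([5.0])
for _ in range(50):
    w = fedavg_round(w, centers, p, E=5, eta=0.1)
# for this symmetric toy problem, w approaches 1.0, the minimizer of F
```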
+
+IID versus non-iid. Suppose the data in the $k$ -th device are i.i.d. sampled from the distribution $\mathcal{D}_k$ . Then the overall distribution is a mixture of all local data distributions: $\mathcal{D} = \sum_{k=1}^{N} p_k \mathcal{D}_k$ . The prior work Zhang et al. (2015a); Zhou and Cong (2017); Stich (2018); Wang and Joshi (2018); Woodworth et al. (2018) assumes the data are iid generated by or partitioned among the $N$ devices, that is, $\mathcal{D}_k = \mathcal{D}$ for all $k \in [N]$ . However, real-world applications do not typically satisfy the iid assumption. One of our theoretical contributions is avoiding making the iid assumption.
+
+Full device participation. The prior work Coppola (2015); Zhou and Cong (2017); Stich (2018); Yu et al. (2019); Wang and Joshi (2018); Wang et al. (2019) requires the full device participation in the aggregation step of FedAvg. In this case, the aggregation step performs
+
+$$
+\mathbf {w} _ {t + E} \leftarrow \sum_ {k = 1} ^ {N} p _ {k} \mathbf {w} _ {t + E} ^ {k}.
+$$
+
+Unfortunately, the full device participation requirement suffers from a serious "straggler's effect" (which means everyone waits for the slowest) in real-world applications. For example, if there are thousands of users' devices in the FL system, there is always a small portion of devices offline. Full device participation means the central server must wait for these "stragglers", which is obviously unrealistic.
+
+Partial device participation. This strategy is much more realistic because it does not require all the devices' outputs. We can set a threshold $K$ ( $1 \leq K < N$ ) and let the central server collect the outputs of the first $K$ responded devices. After collecting $K$ outputs, the server stops waiting for the rest; the $(K+1)$ -th to $N$ -th devices are regarded as stragglers in this iteration. Let $S_{t}(|S_{t}| = K)$ be the set of the indices of the first $K$ responded devices in the $t$ -th iteration. The aggregation step performs
+
+$$
+\mathbf {w} _ {t + E} \leftarrow \frac {N}{K} \sum_ {k \in \mathcal {S} _ {t}} p _ {k} \mathbf {w} _ {t + E} ^ {k}.
+$$
+
+It can be proved that $\frac{N}{K}\sum_{k\in S_t}p_k$ equals one in expectation.
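
This unbiasedness is easy to check numerically; the sketch below uses hypothetical weights $p_k$ and Monte Carlo sampling of a uniform size-$K$ subset without replacement:

```python
import random

random.seed(0)
N, K = 10, 4
p = [0.05, 0.1, 0.15, 0.05, 0.2, 0.1, 0.05, 0.1, 0.1, 0.1]  # sums to 1

trials, acc = 200_000, 0.0
for _ in range(trials):
    S = random.sample(range(N), K)         # uniform, without replacement
    acc += (N / K) * sum(p[k] for k in S)  # rescaled partial sum of weights
mean = acc / trials                        # empirical expectation, close to 1
```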
+
+Communication cost. FedAvg requires two rounds of communication (one broadcast and one aggregation) per $E$ iterations. If $T$ iterations are performed in total, then the number of communications is $\lfloor \frac{2T}{E} \rfloor$ . During the broadcast, the central server sends $\mathbf{w}_t$ to all the devices. During the aggregation, all or part of the $N$ devices send their outputs, say $\mathbf{w}_{t + E}^k$ , to the server.
+
+# 3 CONVERGENCE ANALYSIS OF FEDAVG IN NON-IID SETTING
+
+In this section, we show that FedAvg converges to the global optimum at a rate of $\mathcal{O}(1 / T)$ for strongly convex and smooth functions on non-iid data. The main observation is that when the learning rate is sufficiently small, the effect of $E$ steps of local updates is similar to one update step with a larger learning rate. This, coupled with appropriate sampling and averaging schemes, makes each global update behave like an SGD update. Partial device participation ( $K < N$ ) only makes the averaged sequence $\{\mathbf{w}_t\}$ have a larger variance, which, however, can be controlled by the learning rates. These observations imply that the convergence behavior of FedAvg should not differ too much from that of SGD. Below, we first give the convergence result with full device participation (i.e., $K = N$ ) and then extend it to partial device participation (i.e., $K < N$ ).
+
+# 3.1 NOTATION AND ASSUMPTIONS
+
+We make the following assumptions on the functions $F_{1},\dots ,F_{N}$ . Assumption 1 and 2 are standard; typical examples are the $\ell_2$ -norm regularized linear regression, logistic regression, and softmax classifier.
+
+Assumption 1. $F_{1},\dots ,F_{N}$ are all $L$ -smooth: for all $\mathbf{v}$ and $\mathbf{w}$ , $F_{k}(\mathbf{v})\leq F_{k}(\mathbf{w}) + (\mathbf{v}- \mathbf{w})^{T}\nabla F_{k}(\mathbf{w}) + \frac{L}{2}\| \mathbf{v} - \mathbf{w}\|_{2}^{2}$ .
+
+Assumption 2. $F_{1},\dots ,F_{N}$ are all $\mu$ -strongly convex: for all $\mathbf{v}$ and $\mathbf{w}$ , $F_{k}(\mathbf{v})\geq F_{k}(\mathbf{w}) + (\mathbf{v} - \mathbf{w})^{T}\nabla F_{k}(\mathbf{w}) + \frac{\mu}{2}\| \mathbf{v} - \mathbf{w}\|_{2}^{2}$ .
+
+Assumptions 3 and 4 have been made by the works Zhang et al. (2013); Stich (2018); Stich et al. (2018); Yu et al. (2019).
+
+Assumption 3. Let $\xi_t^k$ be sampled from the $k$ -th device's local data uniformly at random. The variance of stochastic gradients in each device is bounded: $\mathbb{E}\left\| \nabla F_k(\mathbf{w}_t^k, \xi_t^k) - \nabla F_k(\mathbf{w}_t^k)\right\|^2 \leq \sigma_k^2$ for $k = 1, \dots, N$ .
+
+Assumption 4. The expected squared norm of stochastic gradients is uniformly bounded, i.e., $\mathbb{E}\left\| \nabla F_k(\mathbf{w}_t^k,\xi_t^k)\right\|^2\leq G^2$ for all $k = 1,\dots ,N$ and $t = 0,\dots ,T - 1$
+
+Quantifying the degree of non-iid (heterogeneity). Let $F^{*}$ and $F_{k}^{*}$ be the minimum values of $F$ and $F_{k}$ , respectively. We use the term $\Gamma = F^{*} - \sum_{k=1}^{N} p_{k} F_{k}^{*}$ for quantifying the degree of non-iid. If the data are iid, then $\Gamma$ obviously goes to zero as the number of samples grows. If the data are non-iid, then $\Gamma$ is nonzero, and its magnitude reflects the heterogeneity of the data distribution.
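
For intuition, $\Gamma$ can be computed in closed form for toy quadratic local objectives $F_k(w) = \frac{1}{2}(w - c_k)^2$: each $F_k^* = 0$, and $F$ is minimized at the weighted mean of the centers $c_k$. The centers and weights below are made up:

```python
def gamma(centers, p):
    """Gamma = F* - sum_k p_k F_k* for F_k(w) = 0.5 * (w - c_k)^2."""
    w_star = sum(pk * ck for pk, ck in zip(p, centers))   # minimizer of F
    f_star = sum(pk * 0.5 * (w_star - ck) ** 2 for pk, ck in zip(p, centers))
    return f_star                                         # each F_k* = 0 here

g_iid = gamma([1.0, 1.0], [0.5, 0.5])      # identical local optima: Gamma = 0
g_noniid = gamma([0.0, 2.0], [0.5, 0.5])   # heterogeneous local optima: Gamma > 0
```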
+
+# 3.2 CONVERGENCE RESULT: FULL DEVICE PARTICIPATION
+
+Here we analyze the case where all the devices participate in the aggregation step; see Section 2 for the algorithm description. Let the FedAvg algorithm terminate after $T$ iterations and return $\mathbf{w}_T$ as the solution. We always require that $T$ be evenly divisible by $E$ so that FedAvg can output $\mathbf{w}_T$ as expected.
+
+Theorem 1. Let Assumptions 1 to 4 hold and $L, \mu, \sigma_k, G$ be defined therein. Choose $\kappa = \frac{L}{\mu}$ , $\gamma = \max \{8\kappa, E\}$ and the learning rate $\eta_t = \frac{2}{\mu(\gamma + t)}$ . Then FedAvg with full device participation satisfies
+
+$$
+\mathbb {E} \left[ F \left(\mathbf {w} _ {T}\right) \right] - F ^ {*} \leq \frac {2 \kappa}{\gamma + T} \left(\frac {B}{\mu} + 2 L \| \mathbf {w} _ {0} - \mathbf {w} ^ {*} \| ^ {2}\right), \tag {4}
+$$
+
+where
+
+$$
+B = \sum_ {k = 1} ^ {N} p _ {k} ^ {2} \sigma_ {k} ^ {2} + 6 L \Gamma + 8 (E - 1) ^ {2} G ^ {2}. \tag {5}
+$$
+
+# 3.3 CONVERGENCE RESULT: PARTIAL DEVICE PARTICIPATION
+
+As discussed in Section 2, partial device participation has more practical interest than full device participation. Let the set $S_{t}$ ( $\subset [N]$ ) index the active devices in the $t$ -th iteration. To establish the convergence bound, we need to make assumptions on $S_{t}$ .
+
+Assumption 5 assumes the $K$ indices are selected independently and with replacement from the distribution $\{p_k\}$ . The aggregation step is simple averaging. This scheme was first proposed in Sahu et al. (2018), but without theoretical analysis.
+
+Assumption 5 (Scheme I). Assume $S_{t}$ contains a subset of $K$ indices randomly selected with replacement according to the sampling probabilities $p_1, \dots, p_N$ . The aggregation step of FedAvg performs $\mathbf{w}_{t} \longleftarrow \frac{1}{K} \sum_{k \in S_{t}} \mathbf{w}_{t}^{k}$ .
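
A Monte Carlo sketch (with made-up weights and scalar stand-ins for the local models) of why Scheme I's plain average is an unbiased estimate of the weighted average $\sum_k p_k \mathbf{w}_t^k$:

```python
import random

random.seed(0)
N, K = 5, 3
p = [0.1, 0.1, 0.2, 0.3, 0.3]        # hypothetical device weights
w = [1.0, 2.0, 3.0, 4.0, 5.0]        # scalar stand-ins for local models w^k_t
target = sum(pk * wk for pk, wk in zip(p, w))   # weighted average, 3.6

trials, acc = 200_000, 0.0
for _ in range(trials):
    S = random.choices(range(N), weights=p, k=K)   # iid, with replacement
    acc += sum(w[k] for k in S) / K                # Scheme I: plain average
estimate = acc / trials                            # close to the target
```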
+
+Theorem 2. Let Assumptions 1 to 4 hold and $L, \mu, \sigma_k, G$ be defined therein. Let $\kappa, \gamma, \eta_t$ , and $B$ be defined in Theorem 1. Let Assumption 5 hold and define $C = \frac{4}{K} E^2 G^2$ . Then
+
+$$
+\mathbb {E} \left[ F \left(\mathbf {w} _ {T}\right) \right] - F ^ {*} \leq \frac {2 \kappa}{\gamma + T} \left(\frac {B + C}{\mu} + 2 L \| \mathbf {w} _ {0} - \mathbf {w} ^ {*} \| ^ {2}\right). \tag {6}
+$$
+
+Alternatively, we can select $K$ indices from $[N]$ uniformly at random without replacement. As a consequence, we need a different aggregation strategy. Assumption 6 assumes the $K$ indices are selected uniformly without replacement and the aggregation step is the same as in Section 2. However, to guarantee convergence, we require an additional assumption of balanced data.
+
+Assumption 6 (Scheme II). Assume $S_{t}$ contains a subset of $K$ indices uniformly sampled from $[N]$ without replacement. Assume the data is balanced in the sense that $p_1 = \dots = p_N = \frac{1}{N}$ . The aggregation step of FedAvg performs $\mathbf{w}_t \leftarrow \frac{N}{K} \sum_{k \in S_t} p_k \mathbf{w}_t^k$ .
+
+Theorem 3. Replace Assumption 5 by Assumption 6 and $C$ by $C = \frac{N - K}{N - 1} \frac{4}{K} E^2 G^2$ . Then the same bound in Theorem 2 holds.
+
+Scheme II requires $p_1 = \dots = p_N = \frac{1}{N}$ , which obviously violates the unbalanced nature of FL. Fortunately, this can be addressed by the following transformation. Let $\widetilde{F}_k(\mathbf{w}) = p_kNF_k(\mathbf{w})$ be a scaled version of the local objective $F_{k}$ . Then the global objective becomes a simple average of all scaled local objectives:
+
+$$
+F (\mathbf {w}) = \sum_ {k = 1} ^ {N} p _ {k} F _ {k} (\mathbf {w}) = \frac {1}{N} \sum_ {k = 1} ^ {N} \widetilde {F} _ {k} (\mathbf {w}).
+$$
+
+Theorem 3 still holds if $L, \mu, \sigma_k, G$ are replaced by $\widetilde{L} \triangleq \nu L$ , $\widetilde{\mu} \triangleq \varsigma \mu$ , $\widetilde{\sigma}_k \triangleq \sqrt{\nu}\sigma_k$ , and $\widetilde{G} \triangleq \sqrt{\nu} G$ , respectively. Here, $\nu = N \cdot \max_k p_k$ and $\varsigma = N \cdot \min_k p_k$ .
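
The transformation is straightforward to verify on toy quadratic objectives with hypothetical weights $p_k$; the sketch below also computes the constants $\nu$ and $\varsigma$:

```python
N = 4
p = [0.1, 0.2, 0.3, 0.4]          # hypothetical (unbalanced) weights
centers = [0.0, 1.0, 2.0, 3.0]    # toy local minimizers

def F_k(k, w):                    # F_k(w) = 0.5 * (w - c_k)^2
    return 0.5 * (w - centers[k]) ** 2

def F(w):                         # weighted global objective
    return sum(p[k] * F_k(k, w) for k in range(N))

def F_tilde_avg(w):               # plain average of F~_k = p_k * N * F_k
    return sum(p[k] * N * F_k(k, w) for k in range(N)) / N

nu = N * max(p)                   # nu = N * max_k p_k
varsigma = N * min(p)             # varsigma = N * min_k p_k
```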
+
+# 3.4 DISCUSSIONS
+
+Choice of $E$ . Since $\| \mathbf{w}_0 - \mathbf{w}^*\|^2 \leq \frac{4}{\mu^2} G^2$ for $\mu$ -strongly convex $F$ , the dominating term in eqn. (6) is
+
+$$
+\mathcal {O} \left(\frac {\sum_ {k = 1} ^ {N} p _ {k} ^ {2} \sigma_ {k} ^ {2} + L \Gamma + \left(1 + \frac {1}{K}\right) E ^ {2} G ^ {2} + \kappa G ^ {2}}{\mu T}\right). \tag {7}
+$$
+
+Let $T_{\epsilon}$ denote the number of required steps for FedAvg to achieve an $\epsilon$ accuracy. It follows from eqn. (7) that the number of required communication rounds is roughly
+
+$$
+\frac {T _ {\epsilon}}{E} \propto \left(1 + \frac {1}{K}\right) E G ^ {2} + \frac {\sum_ {k = 1} ^ {N} p _ {k} ^ {2} \sigma_ {k} ^ {2} + L \Gamma + \kappa G ^ {2}}{E}. \tag {8}
+$$
+
+Thus, $\frac{T_{\mathrm{e}}}{E}$ is a function of $E$ that first decreases and then increases, which implies that over-small or over-large $E$ may lead to high communication cost and that the optimal $E$ exists.
+
+Stich (2018) showed that if the data are iid, then $E$ can be set to $\mathcal{O}(\sqrt{T})$ . However, this setting does not work if the data are non-iid. Theorem 1 implies that $E$ must not exceed $\mathcal{O}(\sqrt{T})$ ; otherwise, convergence is not guaranteed. Here we give an intuitive explanation. If $E$ is set too big, then $\mathbf{w}_t^k$ can converge to the minimizer of $F_k$ , and thus FedAvg becomes the one-shot average (Zhang et al., 2013) of the local solutions. If the data are non-iid, one-shot averaging does not work because the weighted average of the minimizers of $F_1, \dots, F_N$ can be very different from the minimizer of $F$ .
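
The failure of one-shot averaging is visible already with two 1-D quadratics of different curvature (made-up numbers): the weighted average of the local minimizers differs substantially from the minimizer of the weighted objective.

```python
# Local objectives F_k(w) = 0.5 * a_k * (w - c_k)^2, so the minimizer of
# F = sum_k p_k F_k is sum_k p_k a_k c_k / sum_k p_k a_k, not sum_k p_k c_k.
a = [1.0, 10.0]    # local curvatures (made up)
c = [0.0, 1.0]     # local minimizers
p = [0.5, 0.5]     # device weights

avg_of_minimizers = sum(pk * ck for pk, ck in zip(p, c))
global_minimizer = (sum(pk * ak * ck for pk, ak, ck in zip(p, a, c))
                    / sum(pk * ak for pk, ak in zip(p, a)))
```

Here the one-shot average lands at 0.5, while the true global minimizer is $10/11 \approx 0.91$; the two coincide only when the local objectives are identical (iid) or share the same curvature.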
+
+Choice of $K$ . Stich (2018) showed that if the data are iid, the convergence rate improves substantially as $K$ increases. However, under the non-iid setting, the convergence rate has a weak dependence on $K$ , as we show in Theorems 2 and 3. This implies FedAvg is unable to achieve linear speedup. We have empirically observed this phenomenon (see Section 6). Thus, in practice, the participation ratio $\frac{K}{N}$ can be set small to alleviate the straggler's effect without affecting the convergence rate.
+
+Choice of sampling schemes. We considered two sampling and averaging schemes in Theorems 2 and 3. Scheme I selects $K$ devices according to the probabilities $p_1, \dots, p_N$ with replacement. The non-uniform sampling results in faster convergence than uniform sampling, especially when $p_1, \dots, p_N$ are highly non-uniform. If the system can choose to activate any of the $N$ devices at any time, then Scheme I should be used.
+
+However, oftentimes the system has no control over the sampling; instead, the server simply uses the first $K$ returned results for the update. In this case, we can assume the $K$ devices are uniformly sampled from all the $N$ devices and use Theorem 3 to guarantee the convergence. If $p_1,\dots ,p_N$ are highly non-uniform, then $\nu = N\cdot \max_{k}p_{k}$ is big and $\varsigma = N\cdot \min_{k}p_{k}$ is small, which makes the convergence of FedAvg slow. This point of view is empirically verified in our experiments.
+
+# 4 NECESSITY OF LEARNING RATE DECAY
+
+In this section, we point out that diminishing learning rates are crucial for the convergence of FedAvg in the non-iid setting. Specifically, we establish the following theorem by constructing a ridge regression model (which is strongly convex and smooth).
+
+Theorem 4. We artificially construct a strongly convex and smooth distributed optimization problem. With full batch size, $E > 1$ , and any fixed step size, FedAvg will converge to sub-optimal points. Specifically, let $\tilde{\mathbf{w}}^*$ be the solution produced by FedAvg with a small enough and constant $\eta$ , and $\mathbf{w}^*$ the optimal solution. Then we have
+
+$$
+\| \tilde {\mathbf {w}} ^ {*} - \mathbf {w} ^ {*} \| _ {2} = \Omega ((E - 1) \eta) \cdot \| \mathbf {w} ^ {*} \| _ {2}.
+$$
+
+where we hide some problem-dependent constants.
+
+Theorem 4 and its proof provide several implications. First, the decay of the learning rate is necessary for FedAvg. On the one hand, Theorem 1 shows that with $E > 1$ and a decaying learning rate, FedAvg converges to the optimum. On the other hand, Theorem 4 shows that with $E > 1$ and any fixed learning rate, FedAvg does not converge to the optimum.
+
+Second, FedAvg behaves very differently from gradient descent. Note that FedAvg with $E = 1$ and full batch size is exactly full gradient descent; with a proper and fixed learning rate, its global convergence to the optimum is guaranteed (Nesterov, 2013). However, Theorem 4 shows that FedAvg with $E > 1$ and full batch size cannot possibly converge to the optimum. This conclusion does not contradict Theorem 1 in Khaled et al. (2019), which, when translated to our case, asserts that $\tilde{\mathbf{w}}^*$ will lie in a neighborhood of $\mathbf{w}^*$ with a constant learning rate.
+
+Third, Theorem 4 shows the requirement of learning rate decay is not an artifact of our analysis; instead, it is inherently required by FedAvg. An explanation is that constant learning rates, combined with $E$ steps of possibly-biased local updates, form a sub-optimal update scheme, but a diminishing learning rate can gradually eliminate such bias.
+
+The efficiency of FedAvg principally results from the fact that it performs several update steps on a local model before communicating with other workers, which saves communication. Diminishing step sizes often hinder fast convergence, which may counteract the benefit of performing multiple local updates. Theorem 4 motivates more efficient alternatives to FedAvg.
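
The phenomenon in Theorem 4 can be reproduced numerically on a made-up 1-D problem with two quadratic local objectives: with a fixed step size, full-gradient FedAvg with $E = 1$ (i.e., plain gradient descent) reaches the optimum, while $E > 1$ stalls at a point bounded away from it. All constants here are illustrative, not the construction from the paper:

```python
# Local objectives F_k(w) = 0.5 * a_k * (w - c_k)^2.
a, c, p = [1.0, 10.0], [0.0, 1.0], [0.5, 0.5]   # curvatures, minimizers, weights
w_star = (sum(pk * ak * ck for pk, ak, ck in zip(p, a, c))
          / sum(pk * ak for pk, ak in zip(p, a)))   # true optimum, 10/11

def run_fedavg(E, eta=0.05, rounds=2000):
    """Full-gradient FedAvg with a *fixed* step size eta."""
    w = 0.0
    for _ in range(rounds):
        local = []
        for ak, ck in zip(a, c):
            wk = w                        # every device starts from w_t
            for _ in range(E):            # E local full-gradient steps
                wk -= eta * ak * (wk - ck)
            local.append(wk)
        w = sum(pk * wk for pk, wk in zip(p, local))
    return w

gap_E1 = abs(run_fedavg(E=1) - w_star)    # vanishes: this is gradient descent
gap_E10 = abs(run_fedavg(E=10) - w_star)  # stays bounded away from zero
```

With $E = 1$ the iteration contracts to $\mathbf{w}^*$, while with $E = 10$ it converges to a different fixed point; shrinking `eta` shrinks the residual gap, consistent with the $\Omega(\eta(E-1))$ lower bound.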
+
+# 5 RELATED WORK
+
+Federated learning (FL) was first proposed by McMahan et al. (2017) for collaboratively learning a model without collecting users' data. Research on FL has focused on communication efficiency (Konevcny et al., 2016; McMahan et al., 2017; Sahu et al., 2018; Smith et al., 2017) and data privacy (Bagdasaryan et al., 2018; Bonawitz et al., 2017; Geyer et al., 2017; Hitaj et al., 2017; Melis et al., 2019). This work focuses on the communication-efficiency issue.
+
+FedAvg, a synchronous distributed optimization algorithm, was proposed by McMahan et al. (2017) as an effective heuristic. Sattler et al. (2019); Zhao et al. (2018) studied the non-iid setting; however, they did not provide a convergence rate. A contemporaneous and independent work, Xie et al. (2019), analyzed asynchronous FedAvg; while they did not require iid data, their bound does not guarantee convergence to a saddle point or local minimum. Sahu et al. (2018) proposed a federated optimization framework called FedProx to deal with statistical heterogeneity and provided convergence guarantees in the non-iid setting. FedProx adds a proximal term to each local objective. When these proximal terms vanish, FedProx reduces to FedAvg. However, their convergence theory requires the proximal terms to always exist and hence fails to cover FedAvg.
+
+When data are iid distributed and all devices are active, FedAvg is referred to as LocalSGD. Due to the two assumptions, theoretical analysis of LocalSGD is easier than that of FedAvg. Stich (2018) demonstrated that LocalSGD provably achieves the same linear speedup with strictly less communication for strongly-convex stochastic optimization. Coppola (2015); Zhou and Cong (2017); Wang and Joshi (2018) studied LocalSGD in the non-convex setting and established convergence results. Yu et al. (2019); Wang et al. (2019) recently analyzed LocalSGD for non-convex functions in heterogeneous settings. In particular, Yu et al. (2019) demonstrated that LocalSGD also achieves $\mathcal{O}(1 / \sqrt{NT})$ convergence (i.e., linear speedup) for non-convex optimization. Lin et al. (2018) empirically show that variants of LocalSGD increase training efficiency and improve the generalization performance of large batch sizes while reducing communication. For LocalGD on non-iid data (as opposed to LocalSGD), the best result is by the contemporaneous work (but slightly later than our first version) of Khaled et al. (2019), who used a fixed learning rate $\eta$ and showed $\mathcal{O}\left(\frac{1}{T}\right)$ convergence to a point $\mathcal{O}(\eta^2 E^2)$ away from the optimum. In fact, the suboptimality is due to their fixed learning rate. As we show in Theorem 4, using a fixed learning rate $\eta$ throughout, the solution produced by LocalGD is at least $\Omega((E - 1)\eta)$ away from the optimum.
+
+If the data are iid, distributed optimization can be efficiently solved by the second-order algorithms Mahajan et al. (2018); Reddi et al. (2016); Shamir et al. (2014); Shusen Wang et al. (2018); Zhang and Lin (2015) and the one-shot methods Lee et al. (2017); Lin et al. (2017); Wang (2019); Zhang et al. (2013; 2015b). The primal-dual algorithms Hong et al. (2018); Smith et al. (2016; 2017) are more generally applicable and more relevant to FL.
+
+# 6 NUMERICAL EXPERIMENTS
+
+Models and datasets. We examine our theoretical results on logistic regression with weight decay $\lambda = 10^{-4}$ . This is a stochastic convex optimization problem. We distribute the MNIST dataset (LeCun et al., 1998) among $N = 100$ workers in a non-iid fashion such that each device contains samples of only two digits. We thus obtain two datasets: mnist balanced and mnist unbalanced. The former is balanced such that the number of samples in each device is the same, while the latter is highly unbalanced, with the number of samples among devices following a power law. To manipulate heterogeneity more precisely, we synthesize unbalanced datasets following the setup in Sahu et al. (2018), denoted as synthetic $(\alpha, \beta)$ , where $\alpha$ controls how much local models differ from each other and $\beta$ controls how much the local data at each device differs from that of other devices. We obtain two datasets: synthetic $(0,0)$ and synthetic $(1,1)$ . Details can be found in Appendix D.
+
+Figure 1: (a) To obtain an $\epsilon$ accuracy, the required number of rounds first decreases and then increases as the number of local steps $E$ grows. (b) On the synthetic $(0,0)$ dataset, decreasing the number of active devices each round has little effect on the convergence process. (c) On the mnist balanced dataset, Scheme I slightly outperforms Scheme II; both perform better than the original scheme. Here transformed Scheme II coincides with Scheme II due to the balanced data. (d) On the mnist unbalanced dataset, Scheme I performs better than Scheme II and the original scheme. Scheme II suffers from instability, while transformed Scheme II has a lower convergence rate.
+
+Experiment settings. For all experiments, we initialize all runs with $\mathbf{w}_0 = 0$ . In each round, all selected devices run $E$ steps of SGD in parallel. We decay the learning rate at the end of each round by the scheme $\eta_t = \frac{\eta_0}{1 + t}$ , where $\eta_0$ is chosen from the set $\{1, 0.1, 0.01\}$ . We evaluate the averaged model after each global synchronization on the corresponding global objective. For a fair comparison, we control all randomness in the experiments so that the set of activated devices is the same across all different algorithms on one configuration.
+
+Impact of $E$ . We expect that $T_{\epsilon} / E$ , the number of communication rounds required to achieve a certain accuracy, is a hyperbolic function of $E$ , as eqn. (8) indicates. Intuitively, a small $E$ means a heavy communication burden, while a large $E$ means a low convergence rate; one needs to trade off between communication efficiency and fast convergence. We empirically observe this phenomenon on the unbalanced datasets in Figure 1a. The reason why the phenomenon does not appear on the mnist balanced dataset requires future investigation.
+
+Impact of $K$ Our theory suggests that a larger $K$ may slightly accelerate convergence, since $T_{\epsilon} / E$ contains a term $\mathcal{O}\left(\frac{EG^2}{K}\right)$ . Figure 1b shows that $K$ has limited influence on the convergence of FedAvg on the synthetic $(0, 0)$ dataset, though the curve for a sufficiently large $K$ is slightly better. We observe a similar phenomenon on the other three datasets and attach additional results in Appendix D. This justifies that when the variance resulting from sampling is not too large (i.e., $B \gg C$ ), one can use a small number of devices without severely harming the training process, which also removes the need to sample as many devices as possible in convex federated optimization.
+
+Effect of sampling and averaging schemes. We compare four schemes on the four federated datasets. Since the original scheme involves a history term and may be conservative, we carefully tune its initial learning rate. Figure 1c indicates that when data are balanced, Schemes I and II achieve nearly the same performance, both better than the original scheme. Figure 1d shows that when the data are unbalanced, i.e., the $p_k$ 's are uneven, Scheme I performs the best. Scheme II suffers from some instability in this case. This does not contradict our theory, since we do not guarantee the convergence of Scheme II when data are unbalanced. As expected, transformed Scheme II performs stably at the price of a lower convergence rate. Compared to Scheme I, the original scheme converges more slowly even with a fine-tuned learning rate. All the results show the crucial role of appropriate sampling and averaging schemes for FedAvg.
+
+# 7 CONCLUSION
+
+Federated learning has become increasingly popular in the machine learning and optimization communities. In this paper we have studied the convergence of FedAvg, a heuristic algorithm suited to the federated setting. We have investigated the influence of sampling and averaging schemes, provided theoretical guarantees for two schemes, and empirically demonstrated their performance. Our work sheds light on the theoretical understanding of FedAvg and provides insights for algorithm design in realistic applications. Though our analyses are constrained to convex problems, we hope our insights and proof techniques can inspire future work.
+
+# ACKNOWLEDGEMENTS
+
+Li, Yang and Zhang have been supported by the National Natural Science Foundation of China (No. 11771002 and 61572017), Beijing Natural Science Foundation (Z190001), the Key Project of MOST of China (No. 2018AAA0101000), and Beijing Academy of Artificial Intelligence (BAAI).
+
+# REFERENCES
+
+Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. How to backdoor federated learning. arXiv preprint arXiv:1807.00459, 2018. 7
+Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017. 7
+Gregory Francis Coppola. Iterative parameter mixing for distributed large-margin training of structured predictors for natural language processing. PhD thesis, 2015. 4, 7
+Robin C Geyer, Tassilo Klein, Moin Nabi, and SAP SE. Differentially private federated learning: A client level perspective. arXiv preprint arXiv:1712.07557, 2017. 7
+Briland Hitaj, Giuseppe Ateniese, and Fernando Pérez-Cruz. Deep models under the GAN: information leakage from collaborative deep learning. In ACM SIGSAC Conference on Computer and Communications Security, 2017. 7
+Mingyi Hong, Meisam Razaviyayn, and Jason Lee. Gradient primal-dual algorithm converges to second-order stationary solution for nonconvex distributed optimization over networks. In International Conference on Machine Learning (ICML), 2018. 8
+
+Dusan Jakovetic. Distributed optimization: algorithms and convergence rates. PhD, Carnegie Mellon University, Pittsburgh PA, USA, 2013. 1
+Ahmed Khaled, Konstantin Mishchenko, and Peter Richtárik. First analysis of local gd on heterogeneous data. arXiv preprint arXiv:1909.04715, 2019. 2, 7
+Jakub Konečný. Stochastic, distributed and federated optimization for machine learning. arXiv preprint arXiv:1707.01155, 2017. 1
+Jakub Konečný, Brendan McMahan, and Daniel Ramage. Federated optimization: distributed optimization beyond the datacenter. arXiv preprint arXiv:1511.03575, 2015. 1
+Jakub Konečný, H Brendan McMahan, Felix X Yu, Peter Richtárik, Ananda Theertha Suresh, and Dave Bacon. Federated learning: strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492, 2016. 7
+Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. 8, 24
+Jason D Lee, Qiang Liu, Yuekai Sun, and Jonathan E Taylor. Communication-efficient sparse regression. The Journal of Machine Learning Research, 18(1):115-144, 2017. 8
+Mu Li, David G Andersen, Jun Woo Park, Alexander J Smola, Amr Ahmed, Vanja Josifovski, James Long, Eugene J Shekita, and Bor-Yiing Su. Scaling distributed machine learning with the parameter server. In 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), pages 583-598, 2014a. 1
+Mu Li, David G Andersen, Alexander J Smola, and Kai Yu. Communication efficient distributed machine learning with the parameter server. In Advances in Neural Information Processing Systems (NIPS), 2014b. 1
+Tian Li, Anit Kumar Sahu, Ameet Talwalkar, and Virginia Smith. Federated learning: Challenges, methods, and future directions. arXiv preprint arXiv:1908.07873, 2019. 1
+Shao-Bo Lin, Xin Guo, and Ding-Xuan Zhou. Distributed learning with regularized least squares. Journal of Machine Learning Research, 18(1):3202-3232, 2017. 8
+Tao Lin, Sebastian U Stich, and Martin Jaggi. Don't use large mini-batches, use local sgd. arXiv preprint arXiv:1808.07217, 2018. 7
+Dhruv Mahajan, Nikunj Agrawal, S Sathiya Keerthi, Sundararajan Sellamanickam, and Léon Bottou. An efficient distributed learning algorithm based on effective local functional approximations. Journal of Machine Learning Research, 19(1):2942-2978, 2018. 8
+Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-Efficient Learning of Deep Networks from Decentralized Data. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2017. 1, 2, 3, 7, 17, 25
+Luca Melis, Congzheng Song, Emiliano De Cristofaro, and Vitaly Shmatikov. Exploiting unintended feature leakage in collaborative learning. In IEEE Symposium on Security & Privacy (S&P). IEEE, 2019. 7
+Xiangrui Meng, Joseph Bradley, Burak Yavuz, Evan Sparks, Shivaram Venkataraman, Davies Liu, Jeremy Freeman, DB Tsai, Manish Amde, and Sean Owen. MLlib: machine learning in Apache Spark. Journal of Machine Learning Research, 17(34):1-7, 2016. 1
+Yurii Nesterov. Introductory lectures on convex optimization: a basic course, volume 87. Springer Science & Business Media, 2013. 7
+Sashank J Reddi, Jakub Konečný, Peter Richtárik, Barnabás Póczós, and Alex Smola. AIDE: fast and communication efficient distributed optimization. arXiv preprint arXiv:1608.06879, 2016. 1, 8
+Peter Richtárik and Martin Takáč. Distributed coordinate descent method for learning with big data. Journal of Machine Learning Research, 17(1):2657-2681, 2016. 1
+
+Anit Kumar Sahu, Tian Li, Maziar Sanjabi, Manzil Zaheer, Ameet Talwalkar, and Virginia Smith. Federated optimization for heterogeneous networks. arXiv preprint arXiv:1812.06127, 2018. 1, 2, 3, 5, 7, 8, 17, 24, 25
+Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, and Wojciech Samek. Robust and communication-efficient federated learning from non-iid data. arXiv preprint arXiv:1903.02891, 2019. 2, 7
+Ohad Shamir, Nati Srebro, and Tong Zhang. Communication-efficient distributed optimization using an approximate Newton-type method. In International conference on machine learning (ICML), 2014. 1, 8
+Reza Shokri and Vitaly Shmatikov. Privacy-preserving deep learning. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 2015. 1
+Shusen Wang, Farbod Roosta Khorasani, Peng Xu, and Michael W. Mahoney. GIANT: Globally improved approximate newton method for distributed optimization. In Conference on Neural Information Processing Systems (NeurIPS), 2018. 1, 8
+Virginia Smith, Simone Forte, Chenxin Ma, Martin Takac, Michael I Jordan, and Martin Jaggi. CoCoA: A general framework for communication-efficient distributed optimization. arXiv preprint arXiv:1611.02189, 2016. 1, 8
+Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, and Ameet S Talwalkar. Federated multi-task learning. In Advances in Neural Information Processing Systems (NIPS), 2017. 2, 7, 8
+Sebastian U Stich. Local SGD converges fast and communicates little. arXiv preprint arXiv:1805.09767, 2018. 2, 4, 5, 6, 7, 12
+Sebastian U Stich, Jean-Baptiste Cordonnier, and Martin Jaggi. Sparsified SGD with memory. In Advances in Neural Information Processing Systems (NIPS), pages 4447-4458, 2018. 5
+Jianyu Wang and Gauri Joshi. Cooperative SGD: A unified framework for the design and analysis of communication-efficient SGD algorithms. arXiv preprint arXiv:1808.07576, 2018. 2, 4, 7
+Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K Leung, Christian Makaya, Ting He, and Kevin Chan. Adaptive federated learning in resource constrained edge computing systems. IEEE Journal on Selected Areas in Communications, 2019. 2, 4, 7
+Shusen Wang. A sharper generalization bound for divide-and-conquer ridge regression. In The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI), 2019. 8
+Blake E Woodworth, Jialei Wang, Adam Smith, Brendan McMahan, and Nati Srebro. Graph oracle models, lower bounds, and gaps for parallel stochastic optimization. In Advances in Neural Information Processing Systems (NeurIPS), 2018. 2, 4
+Cong Xie, Sanmi Koyejo, and Indranil Gupta. Asynchronous federated optimization. arXiv preprint arXiv:1903.03934, 2019. 7
+Hao Yu, Sen Yang, and Shenghuo Zhu. Parallel restarted sgd with faster convergence and less communication: Demystifying why model averaging works for deep learning. In AAAI Conference on Artificial Intelligence, 2019. 2, 4, 5, 7
+Sixin Zhang, Anna E Choromanska, and Yann LeCun. Deep learning with elastic averaging SGD. In Advances in Neural Information Processing Systems (NIPS), 2015a. 4
+Yuchen Zhang and Xiao Lin. DiSCO: distributed optimization for self-concordant empirical loss. In International Conference on Machine Learning (ICML), 2015. 1, 8
+Yuchen Zhang, John C. Duchi, and Martin J. Wainwright. Communication-efficient algorithms for statistical optimization. Journal of Machine Learning Research, 14:3321-3363, 2013. 5, 6, 8
+Yuchen Zhang, John Duchi, and Martin Wainwright. Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates. Journal of Machine Learning Research, 16: 3299-3340, 2015b. 8
+
+Yue Zhao, Meng Li, Liangzhen Lai, Naveen Suda, Damon Civin, and Vikas Chandra. Federated learning with non-iid data. arXiv preprint arXiv:1806.00582, 2018. 7
+Shun Zheng, Fen Xia, Wei Xu, and Tong Zhang. A general distributed dual coordinate optimization framework for regularized loss minimization. arXiv preprint arXiv:1604.03763, 2016. 1
+Fan Zhou and Guojing Cong. On the convergence properties of a k-step averaging stochastic gradient descent algorithm for nonconvex optimization. arXiv preprint arXiv:1708.01012, 2017. 2, 4, 7
+Hankz Hankui Zhuo, Wenfeng Feng, Qian Xu, Qiang Yang, and Yufeng Lin. Federated reinforcement learning. arXiv preprint arXiv:1901.08277, 2019. 1
+
+# A PROOF OF THEOREM 1
+
+We analyze FedAvg in the setting of full device participation in this section.
+
+# A.1 ADDITIONAL NOTATION
+
+Let $\mathbf{w}_t^k$ be the model parameter maintained in the $k$ -th device at the $t$ -th step. Let $\mathcal{I}_E$ be the set of global synchronization steps, i.e., $\mathcal{I}_E = \{nE \mid n = 1,2,\dots\}$ . If $t + 1 \in \mathcal{I}_E$ , i.e., it is time to communicate, FedAvg activates all devices. The update of FedAvg with full device participation can then be described as
+
+$$
+\mathbf {v} _ {t + 1} ^ {k} = \mathbf {w} _ {t} ^ {k} - \eta_ {t} \nabla F _ {k} \left(\mathbf {w} _ {t} ^ {k}, \xi_ {t} ^ {k}\right), \tag {9}
+$$
+
+$$
+\mathbf {w} _ {t + 1} ^ {k} = \left\{ \begin{array}{l l} \mathbf {v} _ {t + 1} ^ {k} & \text{if } t + 1 \notin \mathcal {I} _ {E}, \\ \sum_ {k = 1} ^ {N} p _ {k} \mathbf {v} _ {t + 1} ^ {k} & \text{if } t + 1 \in \mathcal {I} _ {E}. \end{array} \right. \tag {10}
+$$
+
+Here, the additional variable $\mathbf{v}_{t + 1}^k$ represents the immediate result of one step of SGD from $\mathbf{w}_t^k$ , and $\mathbf{w}_{t + 1}^k$ is the parameter obtained after the communication step (if any).
+
+In our analysis, we define two virtual sequences $\overline{\mathbf{v}}_t = \sum_{k=1}^N p_k \mathbf{v}_t^k$ and $\overline{\mathbf{w}}_t = \sum_{k=1}^N p_k \mathbf{w}_t^k$ , motivated by Stich (2018). $\overline{\mathbf{v}}_{t+1}$ results from a single step of SGD from $\overline{\mathbf{w}}_t$ . When $t + 1 \notin \mathcal{I}_E$ , both are inaccessible. When $t + 1 \in \mathcal{I}_E$ , we can only fetch $\overline{\mathbf{w}}_{t+1}$ . For convenience, we define $\overline{\mathbf{g}}_t = \sum_{k=1}^N p_k \nabla F_k(\mathbf{w}_t^k)$ and $\mathbf{g}_t = \sum_{k=1}^N p_k \nabla F_k(\mathbf{w}_t^k, \xi_t^k)$ . Therefore, $\overline{\mathbf{v}}_{t+1} = \overline{\mathbf{w}}_t - \eta_t \mathbf{g}_t$ and $\mathbb{E} \mathbf{g}_t = \overline{\mathbf{g}}_t$ .
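For intuition, the update (9)-(10) with full participation can be sketched in a few lines of NumPy; the quadratic local objectives and all constants in the usage below are toy assumptions, not the paper's experiments.

```python
import numpy as np

def fedavg_full(grads, w0, p, E, rounds, eta):
    """FedAvg with full participation, matching updates (9)-(10).

    grads[k](w) returns a (possibly stochastic) gradient of F_k at w.
    After every E local SGD steps, the server replaces every w^k by the
    weighted average sum_k p_k v^k.
    """
    N = len(grads)
    w = [np.array(w0, dtype=float) for _ in range(N)]
    avg = np.array(w0, dtype=float)
    for _ in range(rounds):
        for _ in range(E):                                 # E local steps, update (9)
            w = [wk - eta * grads[k](wk) for k, wk in enumerate(w)]
        avg = sum(pk * wk for pk, wk in zip(p, w))         # synchronization, update (10)
        w = [avg.copy() for _ in range(N)]
    return avg
```

On toy quadratic local objectives $F_k(\mathbf{w}) = \frac{1}{2}\|\mathbf{w} - \mathbf{c}_k\|^2$ (gradient $\mathbf{w} - \mathbf{c}_k$), this converges to $\sum_k p_k \mathbf{c}_k$, the minimizer of the global objective.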
+
+# A.2 KEY LEMMAS
+
+To convey our proof clearly, we first state three useful lemmas. We defer their proofs to a later section and focus on proving the main theorem.
+
+Lemma 1 (Results of one step of SGD). Assume Assumptions 1 and 2 hold. If $\eta_t \leq \frac{1}{4L}$ , we have
+
+$$
+\mathbb {E} \left\| \overline {{\mathbf {v}}} _ {t + 1} - \mathbf {w} ^ {\star} \right\| ^ {2} \leq (1 - \eta_ {t} \mu) \mathbb {E} \left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} ^ {\star} \right\| ^ {2} + \eta_ {t} ^ {2} \mathbb {E} \left\| \mathbf {g} _ {t} - \overline {{\mathbf {g}}} _ {t} \right\| ^ {2} + 6 L \eta_ {t} ^ {2} \Gamma + 2 \mathbb {E} \sum_ {k = 1} ^ {N} p _ {k} \left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} _ {t} ^ {k} \right\| ^ {2}
+$$
+
+where $\Gamma = F^{*} - \sum_{k=1}^{N} p_{k} F_{k}^{\star} \geq 0$ .
+
+Lemma 2 (Bounding the variance). Assume Assumption 3 holds. It follows that
+
+$$
+\mathbb {E} \left\| \mathbf {g} _ {t} - \overline {{\mathbf {g}}} _ {t} \right\| ^ {2} \leq \sum_ {k = 1} ^ {N} p _ {k} ^ {2} \sigma_ {k} ^ {2}.
+$$
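Lemma 2 is the variance formula for a weighted sum of independent terms. It can be checked exactly by enumerating a discrete noise model; the $\pm\sigma_k$ Rademacher noise and the toy values of $p_k$, $\sigma_k$ below are our assumptions, chosen so the bound holds with equality.

```python
import itertools
import numpy as np

# Toy noise model: device k's gradient error is sigma_k * s_k with s_k = +/-1
# equally likely, independent across devices, so each device has variance sigma_k^2.
p = np.array([0.1, 0.2, 0.3, 0.4])
sigma = np.array([1.0, 0.5, 2.0, 1.5])

second_moment = 0.0
for signs in itertools.product([-1.0, 1.0], repeat=len(p)):
    # g_t - bar g_t = sum_k p_k * (noise of device k); enumerate all 2^N outcomes
    second_moment += (np.sum(p * sigma * np.array(signs)) ** 2) / 2 ** len(p)

# Cross terms vanish by independence, leaving exactly sum_k p_k^2 sigma_k^2
assert abs(second_moment - float(np.sum(p ** 2 * sigma ** 2))) < 1e-12
```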
+
+Lemma 3 (Bounding the divergence of $\{\mathbf{w}_t^k\}$ ). Assume Assumption 4 holds and that $\eta_t$ is non-increasing with $\eta_t \leq 2\eta_{t + E}$ for all $t \geq 0$ . It follows that
+
+$$
+\mathbb {E} \left[ \sum_ {k = 1} ^ {N} p _ {k} \left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} _ {t} ^ {k} \right\| ^ {2} \right] \leq 4 \eta_ {t} ^ {2} (E - 1) ^ {2} G ^ {2}.
+$$
+
+# A.3 COMPLETING THE PROOF OF THEOREM 1
+
+Proof. It is clear that no matter whether $t + 1 \in \mathcal{I}_E$ or $t + 1 \notin \mathcal{I}_E$ , we always have $\overline{\mathbf{w}}_{t + 1} = \overline{\mathbf{v}}_{t + 1}$ . Let $\Delta_t = \mathbb{E}\left\| \overline{\mathbf{w}}_{t} - \mathbf{w}^\star \right\|^2$ . From Lemmas 1, 2 and 3, it follows that
+
+$$
+\Delta_ {t + 1} \leq (1 - \eta_ {t} \mu) \Delta_ {t} + \eta_ {t} ^ {2} B \tag {11}
+$$
+
+where
+
+$$
+B = \sum_ {k = 1} ^ {N} p _ {k} ^ {2} \sigma_ {k} ^ {2} + 6 L \Gamma + 8 (E - 1) ^ {2} G ^ {2}.
+$$
+
+We take a diminishing stepsize $\eta_t = \frac{\beta}{t + \gamma}$ for some $\beta > \frac{1}{\mu}$ and $\gamma > 0$ such that $\eta_1 \leq \min \left\{\frac{1}{\mu}, \frac{1}{4L}\right\} = \frac{1}{4L}$ and $\eta_t \leq 2\eta_{t+E}$ . We will prove $\Delta_t \leq \frac{v}{\gamma + t}$ where $v = \max \left\{\frac{\beta^2B}{\beta\mu - 1}, (\gamma + 1)\Delta_1\right\}$ .
+
+We prove it by induction. Firstly, the definition of $v$ ensures that it holds for $t = 1$ . Assume the conclusion holds for some $t$ , it follows that
+
+$$
+\begin{array}{l} \Delta_ {t + 1} \leq (1 - \eta_ {t} \mu) \Delta_ {t} + \eta_ {t} ^ {2} B \\ = \left(1 - \frac {\beta \mu}{t + \gamma}\right) \frac {v}{t + \gamma} + \frac {\beta^ {2} B}{(t + \gamma) ^ {2}} \\ = \frac {t + \gamma - 1}{(t + \gamma) ^ {2}} v + \left[ \frac {\beta^ {2} B}{(t + \gamma) ^ {2}} - \frac {\beta \mu - 1}{(t + \gamma) ^ {2}} v \right] \\ \leq \frac {v}{t + \gamma + 1}. \\ \end{array}
+$$
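The induction can also be sanity-checked numerically: iterate the recursion (11) with equality (the worst case) and verify $\Delta_t \leq \frac{v}{\gamma + t}$ at every step. The concrete constants below are arbitrary choices of ours that satisfy the stated conditions.

```python
# Iterate Delta_{t+1} = (1 - eta_t * mu) * Delta_t + eta_t^2 * B, the worst
# case of recursion (11), and check the induction bound Delta_t <= v/(gamma+t).
mu, L, B = 1.0, 2.0, 5.0
beta = 2.0 / mu                    # beta > 1/mu
gamma = 8.0 * L / mu - 1.0         # makes eta_1 = beta/(1+gamma) = 1/(4L)
delta = 1.0                        # Delta_1
v = max(beta ** 2 * B / (beta * mu - 1.0), (gamma + 1.0) * delta)

for t in range(1, 10_000):
    assert delta <= v / (gamma + t) + 1e-12   # induction hypothesis at step t
    eta = beta / (t + gamma)
    delta = (1.0 - eta * mu) * delta + eta ** 2 * B
```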
+
+Then by the $L$ -smoothness of $F(\cdot)$ (and $\nabla F(\mathbf{w}^\star) = 0$ ),
+
+$$
+\mathbb {E} \left[ F \left(\overline {{\mathbf {w}}} _ {t}\right) \right] - F ^ {*} \leq \frac {L}{2} \Delta_ {t} \leq \frac {L}{2} \frac {v}{\gamma + t}.
+$$
+
+Specifically, if we choose $\beta = \frac{2}{\mu}$ , $\gamma = \max \{8\frac{L}{\mu} - 1, E\}$ and denote $\kappa = \frac{L}{\mu}$ , then $\eta_t = \frac{2}{\mu}\frac{1}{\gamma + t}$ and
+
+$$
+\mathbb {E} \left[ F (\overline {{\mathbf {w}}} _ {t}) \right] - F ^ {*} \leq \frac {2 \kappa}{\gamma + t} \left(\frac {B}{\mu} + 2 L \Delta_ {1}\right).
+$$
+
+
+
+# A.4 DEFERRED PROOFS OF KEY LEMMAS
+
+Proof of Lemma 1. Notice that $\overline{\mathbf{v}}_{t + 1} = \overline{\mathbf{w}}_t - \eta_t \mathbf{g}_t$ , then
+
+$$
+\begin{array}{l} \left\| \overline {{\mathbf {v}}} _ {t + 1} - \mathbf {w} ^ {\star} \right\| ^ {2} = \left\| \overline {{\mathbf {w}}} _ {t} - \eta_ {t} \mathbf {g} _ {t} - \mathbf {w} ^ {\star} - \eta_ {t} \overline {{\mathbf {g}}} _ {t} + \eta_ {t} \overline {{\mathbf {g}}} _ {t} \right\| ^ {2} \\ = \underbrace {\left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} ^ {\star} - \eta_ {t} \overline {{\mathbf {g}}} _ {t} \right\| ^ {2}} _ {A _ {1}} + \underbrace {2 \eta_ {t} \left\langle \overline {{\mathbf {w}}} _ {t} - \mathbf {w} ^ {\star} - \eta_ {t} \overline {{\mathbf {g}}} _ {t} , \overline {{\mathbf {g}}} _ {t} - \mathbf {g} _ {t} \right\rangle} _ {A _ {2}} + \eta_ {t} ^ {2} \left\| \mathbf {g} _ {t} - \overline {{\mathbf {g}}} _ {t} \right\| ^ {2} \tag {12} \\ \end{array}
+$$
+
+Note that $\mathbb{E}A_2 = 0$ since $\mathbb{E}\mathbf{g}_t = \overline{\mathbf{g}}_t$ . We next focus on bounding $A_{1}$ . Again we split $A_{1}$ into three terms:
+
+$$
+\left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} ^ {\star} - \eta_ {t} \overline {{\mathbf {g}}} _ {t} \right\| ^ {2} = \left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} ^ {\star} \right\| ^ {2} \underbrace {- 2 \eta_ {t} \langle \overline {{\mathbf {w}}} _ {t} - \mathbf {w} ^ {\star} , \overline {{\mathbf {g}}} _ {t} \rangle} _ {B _ {1}} + \underbrace {\eta_ {t} ^ {2} \left\| \overline {{\mathbf {g}}} _ {t} \right\| ^ {2}} _ {B _ {2}} \tag {13}
+$$
+
+From the $L$ -smoothness of $F_{k}(\cdot)$ , it follows that
+
+$$
+\left\| \nabla F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) \right\| ^ {2} \leq 2 L \left(F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) - F _ {k} ^ {\star}\right). \tag {14}
+$$
+
+By the convexity of $\| \cdot \| ^2$ and eqn. (14), we have
+
+$$
+B _ {2} = \eta_ {t} ^ {2} \left\| \overline {{\mathbf {g}}} _ {t} \right\| ^ {2} \leq \eta_ {t} ^ {2} \sum_ {k = 1} ^ {N} p _ {k} \left\| \nabla F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) \right\| ^ {2} \leq 2 L \eta_ {t} ^ {2} \sum_ {k = 1} ^ {N} p _ {k} \left(F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) - F _ {k} ^ {*}\right).
+$$
+
+Note that
+
+$$
+\begin{array}{l} B _ {1} = - 2 \eta_ {t} \left\langle \overline {{\mathbf {w}}} _ {t} - \mathbf {w} ^ {\star}, \overline {{\mathbf {g}}} _ {t} \right\rangle = - 2 \eta_ {t} \sum_ {k = 1} ^ {N} p _ {k} \left\langle \overline {{\mathbf {w}}} _ {t} - \mathbf {w} ^ {\star}, \nabla F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) \right\rangle \\ = - 2 \eta_ {t} \sum_ {k = 1} ^ {N} p _ {k} \left\langle \overline {{\mathbf {w}}} _ {t} - \mathbf {w} _ {t} ^ {k}, \nabla F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) \right\rangle - 2 \eta_ {t} \sum_ {k = 1} ^ {N} p _ {k} \left\langle \mathbf {w} _ {t} ^ {k} - \mathbf {w} ^ {\star}, \nabla F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) \right\rangle . \tag {15} \\ \end{array}
+$$
+
+By Cauchy-Schwarz inequality and AM-GM inequality, we have
+
+$$
+- 2 \left\langle \overline {{\mathbf {w}}} _ {t} - \mathbf {w} _ {t} ^ {k}, \nabla F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) \right\rangle \leq \frac {1}{\eta_ {t}} \left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} _ {t} ^ {k} \right\| ^ {2} + \eta_ {t} \left\| \nabla F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) \right\| ^ {2}. \tag {16}
+$$
+
+By the $\mu$ -strong convexity of $F_{k}(\cdot)$ , we have
+
+$$
+- \left\langle \mathbf {w} _ {t} ^ {k} - \mathbf {w} ^ {\star}, \nabla F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) \right\rangle \leq - \left(F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) - F _ {k} (\mathbf {w} ^ {\star})\right) - \frac {\mu}{2} \left\| \mathbf {w} _ {t} ^ {k} - \mathbf {w} ^ {\star} \right\| ^ {2}. \tag {17}
+$$
+
+By combining eqn. (13), eqn. (15), eqn. (16) and eqn. (17), it follows that
+
+$$
+\begin{array}{l} A _ {1} = \left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} ^ {\star} - \eta_ {t} \overline {{\mathbf {g}}} _ {t} \right\| ^ {2} \leq \left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} ^ {\star} \right\| ^ {2} + 2 L \eta_ {t} ^ {2} \sum_ {k = 1} ^ {N} p _ {k} \left(F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) - F _ {k} ^ {*}\right) \\ + \eta_ {t} \sum_ {k = 1} ^ {N} p _ {k} \left(\frac {1}{\eta_ {t}} \left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} _ {k} ^ {t} \right\| ^ {2} + \eta_ {t} \left\| \nabla F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) \right\| ^ {2}\right) \\ - 2 \eta_ {t} \sum_ {k = 1} ^ {N} p _ {k} \left(F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) - F _ {k} \left(\mathbf {w} ^ {*}\right) + \frac {\mu}{2} \left\| \mathbf {w} _ {t} ^ {k} - \mathbf {w} ^ {\star} \right\| ^ {2}\right) \\ = \left(1 - \mu \eta_ {t}\right) \left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} ^ {\star} \right\| ^ {2} + \sum_ {k = 1} ^ {N} p _ {k} \left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} _ {k} ^ {t} \right\| ^ {2} \\ + \underbrace {4 L \eta_ {t} ^ {2} \sum_ {k = 1} ^ {N} p _ {k} \left(F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) - F _ {k} ^ {*}\right) - 2 \eta_ {t} \sum_ {k = 1} ^ {N} p _ {k} \left(F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) - F _ {k} \left(\mathbf {w} ^ {*}\right)\right)} _ {C} \\ \end{array}
+$$
+
+where we use eqn. (14) again.
+
+We next aim to bound $C$ . We define $\gamma_{t} = 2\eta_{t}(1 - 2L\eta_{t})$ . Since $\eta_{t} \leq \frac{1}{4L}$ , $\eta_{t} \leq \gamma_{t} \leq 2\eta_{t}$ . Then we split $C$ into two terms:
+
+$$
+\begin{array}{l} C = - 2 \eta_ {t} (1 - 2 L \eta_ {t}) \sum_ {k = 1} ^ {N} p _ {k} \left(F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) - F _ {k} ^ {*}\right) + 2 \eta_ {t} \sum_ {k = 1} ^ {N} p _ {k} \left(F _ {k} \left(\mathbf {w} ^ {*}\right) - F _ {k} ^ {*}\right) \\ = - \gamma_ {t} \sum_ {k = 1} ^ {N} p _ {k} \left(F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) - F ^ {*}\right) + \left(2 \eta_ {t} - \gamma_ {t}\right) \sum_ {k = 1} ^ {N} p _ {k} \left(F ^ {*} - F _ {k} ^ {*}\right) \\ = \underbrace {- \gamma_ {t} \sum_ {k = 1} ^ {N} p _ {k} \left(F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) - F ^ {*}\right)} _ {D} + 4 L \eta_ {t} ^ {2} \Gamma \\ \end{array}
+$$
+
+where in the last equation, we use the notation $\Gamma = \sum_{k=1}^{N} p_k (F^* - F_k^*) = F^* - \sum_{k=1}^{N} p_k F_k^*$ .
+
+To bound $D$ , we have
+
+$$
+\begin{array}{l} \sum_ {k = 1} ^ {N} p _ {k} \left(F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) - F ^ {*}\right) = \sum_ {k = 1} ^ {N} p _ {k} \left(F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) - F _ {k} \left(\overline {{\mathbf {w}}} _ {t}\right)\right) + \sum_ {k = 1} ^ {N} p _ {k} \left(F _ {k} \left(\overline {{\mathbf {w}}} _ {t}\right) - F ^ {*}\right) \\ \geq \sum_ {k = 1} ^ {N} p _ {k} \left\langle \nabla F _ {k} (\overline {{\mathbf {w}}} _ {t}), \mathbf {w} _ {t} ^ {k} - \overline {{\mathbf {w}}} _ {t} \right\rangle + \left(F (\overline {{\mathbf {w}}} _ {t}) - F ^ {*}\right) \\ \geq - \frac {1}{2} \sum_ {k = 1} ^ {N} p _ {k} \left[ \eta_ {t} \| \nabla F _ {k} (\overline {{\mathbf {w}}} _ {t}) \| ^ {2} + \frac {1}{\eta_ {t}} \left\| \mathbf {w} _ {t} ^ {k} - \overline {{\mathbf {w}}} _ {t} \right\| ^ {2} \right] + \left(F (\overline {{\mathbf {w}}} _ {t}) - F ^ {*}\right) \\ \geq - \sum_ {k = 1} ^ {N} p _ {k} \left[ \eta_ {t} L \left(F _ {k} \left(\overline {{\mathbf {w}}} _ {t}\right) - F _ {k} ^ {*}\right) + \frac {1}{2 \eta_ {t}} \left\| \mathbf {w} _ {t} ^ {k} - \overline {{\mathbf {w}}} _ {t} \right\| ^ {2} \right] + \left(F \left(\overline {{\mathbf {w}}} _ {t}\right) - F ^ {*}\right) \\ \end{array}
+$$
+
+where the first inequality results from the convexity of $F_{k}(\cdot)$ , the second inequality from AM-GM inequality and the third inequality from eqn. (14).
+
+Therefore
+
+$$
+\begin{array}{l} C \leq \gamma_ {t} \sum_ {k = 1} ^ {N} p _ {k} \left[ \eta_ {t} L \left(F _ {k} \left(\overline {{\mathbf {w}}} _ {t}\right) - F _ {k} ^ {*}\right) + \frac {1}{2 \eta_ {t}} \left\| \mathbf {w} _ {t} ^ {k} - \overline {{\mathbf {w}}} _ {t} \right\| ^ {2} \right] - \gamma_ {t} \left(F \left(\overline {{\mathbf {w}}} _ {t}\right) - F ^ {*}\right) + 4 L \eta_ {t} ^ {2} \Gamma \\ = \gamma_ {t} \left(\eta_ {t} L - 1\right) \sum_ {k = 1} ^ {N} p _ {k} \left(F _ {k} \left(\overline {{\mathbf {w}}} _ {t}\right) - F ^ {*}\right) + \left(4 L \eta_ {t} ^ {2} + \gamma_ {t} \eta_ {t} L\right) \Gamma + \frac {\gamma_ {t}}{2 \eta_ {t}} \sum_ {k = 1} ^ {N} p _ {k} \left\| \mathbf {w} _ {t} ^ {k} - \overline {{\mathbf {w}}} _ {t} \right\| ^ {2} \\ \leq 6 L \eta_ {t} ^ {2} \Gamma + \sum_ {k = 1} ^ {N} p _ {k} \left\| \mathbf {w} _ {t} ^ {k} - \overline {{\mathbf {w}}} _ {t} \right\| ^ {2} \\ \end{array}
+$$
+
+where in the last inequality, we use the following facts: (1) $\eta_t L - 1 \leq -\frac{3}{4} \leq 0$ and $\sum_{k=1}^{N} p_k (F_k(\overline{\mathbf{w}}_t) - F^*) = F(\overline{\mathbf{w}}_t) - F^* \geq 0$ (2) $\Gamma \geq 0$ and $4L\eta_t^2 + \gamma_t\eta_t L \leq 6\eta_t^2 L$ and (3) $\frac{\gamma_t}{2\eta_t} \leq 1$ .
+
+Recalling the expression of $A_{1}$ and plugging $C$ into it, we have
+
+$$
+\begin{array}{l} A _ {1} = \left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} ^ {\star} - \eta_ {t} \overline {{\mathbf {g}}} _ {t} \right\| ^ {2} \\ \leq \left(1 - \mu \eta_ {t}\right) \left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} ^ {\star} \right\| ^ {2} + 2 \sum_ {k = 1} ^ {N} p _ {k} \left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} _ {k} ^ {t} \right\| ^ {2} + 6 \eta_ {t} ^ {2} L \Gamma \tag {18} \\ \end{array}
+$$
+
+Using eqn. (18) and taking expectation on both sides of eqn. (12) to erase the randomness from the stochastic gradients, we complete the proof.
+
+Proof of Lemma 2. From Assumption 3, the variance of the stochastic gradient in device $k$ is bounded by $\sigma_k^2$ . Since the stochastic gradients are independent across devices, the cross terms vanish in expectation, and
+
+$$
+\begin{array}{l} \mathbb {E} \left\| \mathbf {g} _ {t} - \overline {{\mathbf {g}}} _ {t} \right\| ^ {2} = \mathbb {E} \left\| \sum_ {k = 1} ^ {N} p _ {k} \left(\nabla F _ {k} \left(\mathbf {w} _ {t} ^ {k}, \xi_ {t} ^ {k}\right) - \nabla F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right)\right) \right\| ^ {2} \\ = \sum_ {k = 1} ^ {N} p _ {k} ^ {2} \mathbb {E} \left\| \nabla F _ {k} \left(\mathbf {w} _ {t} ^ {k}, \xi_ {t} ^ {k}\right) - \nabla F _ {k} \left(\mathbf {w} _ {t} ^ {k}\right) \right\| ^ {2} \\ \leq \sum_ {k = 1} ^ {N} p _ {k} ^ {2} \sigma_ {k} ^ {2}. \\ \end{array}
+$$
+
+
+
+Proof of Lemma 3. Since FedAvg requires one communication every $E$ steps, for any $t\geq 0$ there exists a $t_0\le t$ such that $t - t_0\leq E - 1$ and $\mathbf{w}_{t_0}^k = \overline{\mathbf{w}}_{t_0}$ for all $k = 1,2,\dots ,N$ . We also use the fact that $\eta_t$ is non-increasing and $\eta_{t_0} \leq 2\eta_t$ for all $t - t_0 \leq E - 1$ . Then
+
+$$
+\begin{array}{l} \mathbb {E} \sum_ {k = 1} ^ {N} p _ {k} \left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} _ {t} ^ {k} \right\| ^ {2} = \mathbb {E} \sum_ {k = 1} ^ {N} p _ {k} \left\| \left(\mathbf {w} _ {t} ^ {k} - \overline {{\mathbf {w}}} _ {t _ {0}}\right) - \left(\overline {{\mathbf {w}}} _ {t} - \overline {{\mathbf {w}}} _ {t _ {0}}\right) \right\| ^ {2} \\ \leq \mathbb {E} \sum_ {k = 1} ^ {N} p _ {k} \left\| \mathbf {w} _ {t} ^ {k} - \overline {{\mathbf {w}}} _ {t _ {0}} \right\| ^ {2} \\ \leq \sum_ {k = 1} ^ {N} p _ {k} \, \mathbb {E} \sum_ {t' = t _ {0}} ^ {t - 1} (E - 1) \eta_ {t'} ^ {2} \left\| \nabla F _ {k} \left(\mathbf {w} _ {t'} ^ {k}, \xi_ {t'} ^ {k}\right) \right\| ^ {2} \\ \leq \sum_ {k = 1} ^ {N} p _ {k} \sum_ {t' = t _ {0}} ^ {t - 1} (E - 1) \eta_ {t _ {0}} ^ {2} G ^ {2} \\ \leq \sum_ {k = 1} ^ {N} p _ {k} \, \eta_ {t _ {0}} ^ {2} (E - 1) ^ {2} G ^ {2} \\ \leq 4 \eta_ {t} ^ {2} (E - 1) ^ {2} G ^ {2}. \\ \end{array}
+$$
+
+
+
+# B PROOFS OF THEOREMS 2 AND 3
+
+We analyze FedAvg in the setting of partial device participation in this section.
+
+# B.1 ADDITIONAL NOTATION
+
+Recall that $\mathbf{w}_t^k$ is the model parameter maintained in the $k$ -th device at the $t$ -th step. $\mathcal{I}_E = \{nE \mid n = 1,2,\dots\}$ is the set of global synchronization steps. Unlike the setting in Appendix A, when it is time to communicate, i.e., $t + 1 \in \mathcal{I}_E$ , the scenario considered here is that FedAvg randomly activates a subset of devices according to some sampling scheme. Again, $\overline{\mathbf{g}}_t = \sum_{k=1}^{N} p_k \nabla F_k(\mathbf{w}_t^k)$ and $\mathbf{g}_t = \sum_{k=1}^{N} p_k \nabla F_k(\mathbf{w}_t^k, \xi_t^k)$ . Therefore, $\overline{\mathbf{v}}_{t+1} = \overline{\mathbf{w}}_t - \eta_t \mathbf{g}_t$ and $\mathbb{E} \mathbf{g}_t = \overline{\mathbf{g}}_t$ .
+
+Multiset selected. All sampling schemes can be divided into two groups: those with replacement and those without. With replacement, a device may be activated several times in one round of communication, even though each activation is independent of the rest. We denote by $\mathcal{H}_t$ the selected multiset, which allows any element to appear more than once. Note that $\mathcal{H}_t$ is only well defined for $t\in \mathcal{I}_E$ . For convenience, we denote by $\mathcal{S}_t = \mathcal{H}_{N(t,E)}$ the most recent set of chosen devices, where $N(t,E) = \max \{n \mid n\leq t, n\in \mathcal{I}_E\}$ .
+
+Updating scheme. Constrained by realistic scenarios (for communication efficiency and a low straggler effect), FedAvg first samples a random multiset $\mathcal{S}_{t}$ of devices and then only performs updates on them. This makes the analysis a bit intricate, since $\mathcal{S}_{t}$ varies every $E$ steps. However, we can use a thought experiment to circumvent this difficulty: we assume that FedAvg always activates all devices at the beginning of each round and then uses the parameters maintained in only a few sampled devices to produce the next-round parameter. This updating scheme is clearly equivalent to the original one. Then the update of FedAvg with partial device participation can be described as: for all $k \in [N]$ ,
+
+$$
+\mathbf {v} _ {t + 1} ^ {k} = \mathbf {w} _ {t} ^ {k} - \eta_ {t} \nabla F _ {k} \left(\mathbf {w} _ {t} ^ {k}, \xi_ {t} ^ {k}\right), \tag {19}
+$$
+
+$$
+\mathbf {w} _ {t + 1} ^ {k} = \left\{ \begin{array}{l l} \mathbf {v} _ {t + 1} ^ {k} & \text{if } t + 1 \notin \mathcal {I} _ {E}, \\ \text{sample } \mathcal {S} _ {t + 1} \text{ and average } \left\{\mathbf {v} _ {t + 1} ^ {k} \right\} _ {k \in \mathcal {S} _ {t + 1}} & \text{if } t + 1 \in \mathcal {I} _ {E}. \end{array} \right. \tag {20}
+$$
+
+Sources of randomness. In our analysis, there are two sources of randomness: one results from the stochastic gradients and the other from the random sampling of devices. All the analysis in Appendix A involves only the former. To distinguish them, we use the notation $\mathbb{E}_{\mathcal{S}_t}(\cdot)$ when we take expectation over the latter type of randomness.
+
+# B.2 KEY LEMMAS
+
+Two schemes. For full device participation, we always have $\overline{\mathbf{w}}_{t + 1} = \overline{\mathbf{v}}_{t + 1}$ . This also holds for partial device participation when $t + 1 \notin \mathcal{I}_E$ . When $t + 1 \in \mathcal{I}_E$ , we want this relation to hold in expectation. To that end, we require the sampling and averaging scheme to be unbiased in the sense that
+
+$$
+\mathbb {E} _ {\mathcal {S} _ {t + 1}} \overline {{\mathbf {w}}} _ {t + 1} = \overline {{\mathbf {v}}} _ {t + 1}.
+$$
+
+We find two sampling and averaging schemes satisfying the requirement and provide convergence guarantees.
+
+(I) The server establishes $\mathcal{S}_{t+1}$ by sampling an index $k \in \{1, \dots, N\}$ i.i.d. with replacement, with probabilities $p_1, \dots, p_N$ , for $K$ times. Hence $\mathcal{S}_{t+1}$ is a multiset that may contain an element more than once. The server then averages the parameters by $\overline{\mathbf{w}}_{t+1} = \frac{1}{K} \sum_{k \in \mathcal{S}_{t+1}} \mathbf{v}_{t+1}^k$ . This scheme was first proposed in (Sahu et al., 2018), but without theoretical analysis.
+(II) The server samples $\mathcal{S}_{t+1}$ uniformly without replacement, so each element of $\mathcal{S}_{t+1}$ occurs at most once. The server then averages the parameters by $\overline{\mathbf{w}}_{t+1} = \sum_{k \in \mathcal{S}_{t+1}} p_k \frac{N}{K} \mathbf{v}_{t+1}^k$ . Note that when the $p_k$ 's are not all equal, one cannot ensure $\sum_{k \in \mathcal{S}_{t+1}} p_k \frac{N}{K} = 1$ .
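The two schemes can be sketched as follows. This is a hedged illustration in numpy; the array `v` stands for the per-device parameters $\mathbf{v}_{t+1}^k$, and all sizes are illustrative:

```python
import numpy as np

def scheme_1(v, p, K, rng):
    """Scheme I: i.i.d. sampling with replacement, probabilities p; simple average."""
    S = rng.choice(len(v), size=K, replace=True, p=p)   # a multiset of indices
    return v[S].mean(axis=0)

def scheme_2(v, p, K, rng):
    """Scheme II: uniform sampling without replacement; weights N * p_k / K."""
    N = len(v)
    S = rng.choice(N, size=K, replace=False)            # each index occurs at most once
    return sum(p[k] * (N / K) * v[k] for k in S)

rng = np.random.default_rng(0)
N, K, d = 8, 3, 2
v = rng.normal(size=(N, d))
p = np.full(N, 1 / N)          # uniform weights; then both schemes reduce to a plain mean
w1 = scheme_1(v, p, K, rng)
w2 = scheme_2(v, p, K, rng)
```

With uniform $p_k = 1/N$, the Scheme II weights $p_k N/K = 1/K$ sum to one; for non-uniform $p_k$ they generally do not, which is exactly the caveat noted above.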
+
+Unbiasedness and bounded variance. Lemma 4 shows that the two sampling and averaging schemes above are unbiased: in expectation, the next-round parameter (i.e., $\overline{\mathbf{w}}_{t + 1}$ ) equals the weighted average of the parameters in all devices after the SGD updates (i.e., $\overline{\mathbf{v}}_{t + 1}$ ). The original scheme in (McMahan et al., 2017) (see Table 1) does not enjoy this property; it resembles Scheme II except for the averaging step. Hence our analysis does not cover the original scheme.
+
+Lemma 5 shows that the expected difference between $\overline{\mathbf{v}}_{t + 1}$ and $\overline{\mathbf{w}}_{t + 1}$ is bounded; $\mathbb{E}_{\mathcal{S}_t}\left\| \overline{\mathbf{v}}_{t + 1} - \overline{\mathbf{w}}_{t + 1}\right\|^2$ is precisely the variance of $\overline{\mathbf{w}}_{t + 1}$ .
+
+Lemma 4 (Unbiased sampling scheme). If $t + 1 \in \mathcal{I}_E$ , for Scheme I and Scheme II, we have
+
+$$
+\mathbb {E} _ {S _ {t}} \left(\overline {{\mathbf {w}}} _ {t + 1}\right) = \overline {{\mathbf {v}}} _ {t + 1}.
+$$
+
+Lemma 5 (Bounding the variance of $\overline{\mathbf{w}}_t$ ). For $t + 1 \in \mathcal{I}_E$ , assume that $\eta_t$ is non-increasing and $\eta_t \leq 2\eta_{t + E}$ for all $t \geq 0$ . We have the following results.
+
+(1) For Scheme I, the expected difference between $\overline{\mathbf{v}}_{t + 1}$ and $\overline{\mathbf{w}}_{t + 1}$ is bounded by
+
+$$
+\mathbb {E} _ {\mathcal {S} _ {t}} \left\| \overline {{\mathbf {v}}} _ {t + 1} - \overline {{\mathbf {w}}} _ {t + 1} \right\| ^ {2} \leq \frac {4}{K} \eta_ {t} ^ {2} E ^ {2} G ^ {2}.
+$$
+
+(2) For Scheme II, assuming $p_1 = p_2 = \dots = p_N = \frac{1}{N}$ , the expected difference between $\overline{\mathbf{v}}_{t + 1}$ and $\overline{\mathbf{w}}_{t + 1}$ is bounded by
+
+$$
+\mathbb {E} _ {\mathcal {S} _ {t}} \left\| \overline {{\mathbf {v}}} _ {t + 1} - \overline {{\mathbf {w}}} _ {t + 1} \right\| ^ {2} \leq \frac {N - K}{N - 1} \frac {4}{K} \eta_ {t} ^ {2} E ^ {2} G ^ {2}.
+$$
+
+# B.3 COMPLETING THE PROOF OF THEOREM 2 AND 3
+
+Proof. Note that
+
+$$
+\begin{array}{l} \left\| \overline {{\mathbf {w}}} _ {t + 1} - \mathbf {w} ^ {*} \right\| ^ {2} = \left\| \overline {{\mathbf {w}}} _ {t + 1} - \overline {{\mathbf {v}}} _ {t + 1} + \overline {{\mathbf {v}}} _ {t + 1} - \mathbf {w} ^ {*} \right\| ^ {2} \\ = \underbrace {\left\| \overline {{\mathbf {w}}} _ {t + 1} - \overline {{\mathbf {v}}} _ {t + 1} \right\| ^ {2}} _ {A _ {1}} + \underbrace {\left\| \overline {{\mathbf {v}}} _ {t + 1} - \mathbf {w} ^ {*} \right\| ^ {2}} _ {A _ {2}} + \underbrace {2 \left\langle \overline {{\mathbf {w}}} _ {t + 1} - \overline {{\mathbf {v}}} _ {t + 1} , \overline {{\mathbf {v}}} _ {t + 1} - \mathbf {w} ^ {*} \right\rangle} _ {A _ {3}}. \\ \end{array}
+$$
+
+When expectation is taken over $\mathcal{S}_{t + 1}$ , the last term $(A_3)$ vanishes due to the unbiasedness of $\overline{\mathbf{w}}_{t + 1}$ . If $t + 1 \notin \mathcal{I}_E$ , $A_1$ vanishes since $\overline{\mathbf{w}}_{t + 1} = \overline{\mathbf{v}}_{t + 1}$ . We bound $A_2$ by the same argument as in Appendix A. It then follows that
+
+$$
+\mathbb {E} \left\| \overline {{\mathbf {w}}} _ {t + 1} - \mathbf {w} ^ {*} \right\| ^ {2} \leq (1 - \eta_ {t} \mu) \mathbb {E} \left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} ^ {\star} \right\| ^ {2} + \eta_ {t} ^ {2} B.
+$$
+
+If $t + 1 \in \mathcal{I}_E$ , we additionally use Lemma 5 to bound $A_1$ . Then
+
+$$
+\begin{array}{l} \mathbb {E} \left\| \overline {{\mathbf {w}}} _ {t + 1} - \mathbf {w} ^ {*} \right\| ^ {2} = \mathbb {E} \left\| \overline {{\mathbf {w}}} _ {t + 1} - \overline {{\mathbf {v}}} _ {t + 1} \right\| ^ {2} + \mathbb {E} \left\| \overline {{\mathbf {v}}} _ {t + 1} - \mathbf {w} ^ {*} \right\| ^ {2} \\ \leq \left(1 - \eta_ {t} \mu\right) \mathbb {E} \left\| \overline {{\mathbf {w}}} _ {t} - \mathbf {w} ^ {\star} \right\| ^ {2} + \eta_ {t} ^ {2} (B + C), \tag {21} \\ \end{array}
+$$
+
+where $C$ is the upper bound of $\frac{1}{\eta_t^2}\mathbb{E}_{\mathcal{S}_t}\left\| \overline{\mathbf{v}}_{t + 1} - \overline{\mathbf{w}}_{t + 1}\right\|^2$ ( $C$ is defined in Theorems 2 and 3).
+
+The only difference between eqn. (21) and eqn. (11) is the additional $C$ . Thus we can use the same argument there to prove the theorems here. Specifically, for a diminishing stepsize, $\eta_t = \frac{\beta}{t + \gamma}$ for some $\beta > \frac{1}{\mu}$ and $\gamma > 0$ such that $\eta_1 \leq \min \left\{\frac{1}{\mu}, \frac{1}{4L}\right\} = \frac{1}{4L}$ and $\eta_t \leq 2\eta_{t+E}$ , we can prove $\mathbb{E}\left\|\overline{\mathbf{w}}_{t+1} - \mathbf{w}^*\right\|^2 \leq \frac{v}{\gamma+t}$ where $v = \max \left\{\frac{\beta^2(B+C)}{\beta\mu-1}, (\gamma+1)\|\mathbf{w}_1 - \mathbf{w}^*\|^2\right\}$ .
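The stepsize conditions above are easy to check concretely. A small sketch computing $\eta_t = \frac{2}{\mu}\frac{1}{\gamma + t}$ with $\gamma = \max\{8L/\mu - 1, E\}$ and verifying both requirements, for illustrative constants $\mu$, $L$, $E$ (not values from the paper):

```python
# Diminishing stepsize from the proof: beta = 2/mu, gamma = max(8L/mu - 1, E).
mu, L, E = 0.1, 1.0, 10            # illustrative constants
gamma = max(8 * L / mu - 1, E)     # = 79 here
eta = lambda t: (2 / mu) / (gamma + t)

assert eta(1) <= 1 / (4 * L)                               # eta_1 <= min{1/mu, 1/(4L)}
assert all(eta(t) <= 2 * eta(t + E) for t in range(1000))  # eta_t <= 2 * eta_{t+E}
```

The second condition holds because $\eta_t \le 2\eta_{t+E}$ is equivalent to $E \le \gamma + t$, which the choice $\gamma \ge E$ guarantees for all $t \ge 0$.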
+
+Then, by the strong convexity of $F(\cdot)$ ,
+
+$$
+\mathbb {E} [ F (\overline {{\mathbf {w}}} _ {t}) ] - F ^ {*} \leq \frac {L}{2} \Delta_ {t} \leq \frac {L}{2} \frac {v}{\gamma + t}.
+$$
+
+Specifically, if we choose $\beta = \frac{2}{\mu}$ , $\gamma = \max \{8\frac{L}{\mu} - 1, E\}$ and denote $\kappa = \frac{L}{\mu}$ , then $\eta_t = \frac{2}{\mu}\frac{1}{\gamma + t}$ and
+
+$$
+\mathbb {E} \left[ F \left(\overline {{\mathbf {w}}} _ {t}\right) \right] - F ^ {*} \leq \frac {2 \kappa}{\gamma + t} \left(\frac {B + C}{\mu} + 2 L \| \mathbf {w} _ {1} - \mathbf {w} ^ {*} \| ^ {2}\right).
+$$
+
+
+
+# B.4 DEFERRED PROOFS OF KEY LEMMAS
+
+Proof of Lemma 4. We first give a key observation that is useful in the following proofs. Let $\{x_{i}\}_{i = 1}^{N}$ be any fixed deterministic sequence. We sample a multiset $\mathcal{S}_{t}$ (of size $K$ ) by a procedure in which, at each draw, $x_{k}$ is sampled with probability $q_{k}$ . Note that the draws are not necessarily independent; we only require that they are identically distributed. Let $\mathcal{S}_{t} = \{i_{1},\dots ,i_{K}\} \subset [N]$ (some $i_{k}$ 's may coincide). Then
+
+$$
+\mathbb {E} _ {\mathcal {S} _ {t}} \sum_ {k \in \mathcal {S} _ {t}} x _ {k} = \mathbb {E} _ {\mathcal {S} _ {t}} \sum_ {k = 1} ^ {K} x _ {i _ {k}} = K \mathbb {E} _ {\mathcal {S} _ {t}} x _ {i _ {1}} = K \sum_ {k = 1} ^ {N} q _ {k} x _ {k}.
+$$
+
+For Scheme I, $q_{k} = p_{k}$ , and for Scheme II, $q_{k} = \frac{1}{N}$ . Equipped with this observation, the lemma is straightforward to prove.
+
+Proof of Lemma 5. We prove the variance bound separately for the two schemes. Let $\mathcal{S}_{t+1} = \{i_1, \dots, i_K\}$ denote the multiset of chosen indices.
+
+(1) For Scheme I, $\overline{\mathbf{w}}_{t + 1} = \frac{1}{K}\sum_{l = 1}^{K}\mathbf{v}_{t + 1}^{i_l}$ . Taking expectation over $\mathcal{S}_{t + 1}$ , we have
+
+$$
+\mathbb {E} _ {\mathcal {S} _ {t}} \left\| \overline {{\mathbf {w}}} _ {t + 1} - \overline {{\mathbf {v}}} _ {t + 1} \right\| ^ {2} = \mathbb {E} _ {\mathcal {S} _ {t}} \frac {1}{K ^ {2}} \sum_ {l = 1} ^ {K} \left\| \mathbf {v} _ {t + 1} ^ {i _ {l}} - \overline {{\mathbf {v}}} _ {t + 1} \right\| ^ {2} = \frac {1}{K} \sum_ {k = 1} ^ {N} p _ {k} \left\| \mathbf {v} _ {t + 1} ^ {k} - \overline {{\mathbf {v}}} _ {t + 1} \right\| ^ {2} \tag {22}
+$$
+
+where the first equality follows since the indices $i_l$ are drawn independently and $\mathbb{E}\, \mathbf{v}_{t + 1}^{i_l} = \overline{\mathbf{v}}_{t + 1}$ .
+
+To bound eqn. (22), we use the same argument as in Appendix A. Since $t + 1 \in \mathcal{I}_E$ , the time $t_0 = t - E + 1 \in \mathcal{I}_E$ is a communication time, which implies that the $\{\mathbf{w}_{t_0}^k\}_{k=1}^N$ are identical. Then
+
+$$
+\begin{array}{l} \sum_ {k = 1} ^ {N} p _ {k} \left| \left| \mathbf {v} _ {t + 1} ^ {k} - \overline {{\mathbf {v}}} _ {t + 1} \right| \right| ^ {2} = \sum_ {k = 1} ^ {N} p _ {k} \left| \left| \left(\mathbf {v} _ {t + 1} ^ {k} - \overline {{\mathbf {w}}} _ {t _ {0}}\right) - \left(\overline {{\mathbf {v}}} _ {t + 1} - \overline {{\mathbf {w}}} _ {t _ {0}}\right) \right| \right| ^ {2} \\ \leq \sum_ {k = 1} ^ {N} p _ {k} \left| \left| \mathbf {v} _ {t + 1} ^ {k} - \overline {{\mathbf {w}}} _ {t _ {0}} \right| \right| ^ {2} \\ \end{array}
+$$
+
+where the last inequality results from $\sum_{k=1}^{N} p_k (\mathbf{v}_{t+1}^k - \overline{\mathbf{w}}_{t_0}) = \overline{\mathbf{v}}_{t+1} - \overline{\mathbf{w}}_{t_0}$ and $\mathbb{E} \| \boldsymbol{x} - \mathbb{E} \boldsymbol{x} \|^2 \leq \mathbb{E} \| \boldsymbol{x} \|^2$ . Similarly, we have
+
+$$
+\begin{array}{l} \mathbb {E} _ {\mathcal {S} _ {t}} \left\| \overline {{\mathbf {w}}} _ {t + 1} - \overline {{\mathbf {v}}} _ {t + 1} \right\| ^ {2} \leq \frac {1}{K} \sum_ {k = 1} ^ {N} p _ {k} \mathbb {E} \left\| \mathbf {v} _ {t + 1} ^ {k} - \overline {{\mathbf {w}}} _ {t _ {0}} \right\| ^ {2} \\ \leq \frac {1}{K} \sum_ {k = 1} ^ {N} p _ {k} \mathbb {E} \left\| \mathbf {v} _ {t + 1} ^ {k} - \mathbf {w} _ {t _ {0}} ^ {k} \right\| ^ {2} \\ \leq \frac {1}{K} \sum_ {k = 1} ^ {N} p _ {k} E \sum_ {i = t _ {0}} ^ {t} \mathbb {E} \left\| \eta_ {i} \nabla F _ {k} \left(\mathbf {w} _ {i} ^ {k}, \xi_ {i} ^ {k}\right) \right\| ^ {2} \\ \leq \frac {1}{K} E ^ {2} \eta_ {t _ {0}} ^ {2} G ^ {2} \leq \frac {4}{K} \eta_ {t} ^ {2} E ^ {2} G ^ {2} \\ \end{array}
+$$
+
+where in the last inequality we use the fact that $\eta_t$ is non-increasing and $\eta_{t_0} \leq 2\eta_t$ .
+
+(2) For Scheme II, when assuming $p_1 = p_2 = \dots = p_N = \frac{1}{N}$ , we again have $\overline{\mathbf{w}}_{t + 1} = \frac{1}{K}\sum_{l = 1}^{K}\mathbf{v}_{t + 1}^{i_l}$ .
+
+$$
+\begin{array}{l} \mathbb {E} _ {\mathcal {S} _ {t}} \left\| \overline {{\mathbf {w}}} _ {t + 1} - \overline {{\mathbf {v}}} _ {t + 1} \right\| ^ {2} = \mathbb {E} _ {\mathcal {S} _ {t}} \left\| \frac {1}{K} \sum_ {i \in \mathcal {S} _ {t + 1}} \mathbf {v} _ {t + 1} ^ {i} - \overline {{\mathbf {v}}} _ {t + 1} \right\| ^ {2} = \frac {1}{K ^ {2}} \mathbb {E} _ {\mathcal {S} _ {t}} \left\| \sum_ {i = 1} ^ {N} \mathbb {I} \left\{ i \in \mathcal {S} _ {t + 1} \right\} \left(\mathbf {v} _ {t + 1} ^ {i} - \overline {{\mathbf {v}}} _ {t + 1}\right) \right\| ^ {2} \\ = \frac {1}{K ^ {2}} \left[ \sum_ {i \in [ N ]} \mathbb {P} \left(i \in \mathcal {S} _ {t + 1}\right) \left\| \mathbf {v} _ {t + 1} ^ {i} - \overline {{\mathbf {v}}} _ {t + 1} \right\| ^ {2} + \sum_ {i \neq j} \mathbb {P} \left(i, j \in \mathcal {S} _ {t + 1}\right) \left\langle \mathbf {v} _ {t + 1} ^ {i} - \overline {{\mathbf {v}}} _ {t + 1}, \mathbf {v} _ {t + 1} ^ {j} - \overline {{\mathbf {v}}} _ {t + 1} \right\rangle \right] \\ = \frac {1}{K N} \sum_ {i = 1} ^ {N} \left\| \mathbf {v} _ {t + 1} ^ {i} - \overline {{\mathbf {v}}} _ {t + 1} \right\| ^ {2} + \sum_ {i \neq j} \frac {K - 1}{K N (N - 1)} \left\langle \mathbf {v} _ {t + 1} ^ {i} - \overline {{\mathbf {v}}} _ {t + 1}, \mathbf {v} _ {t + 1} ^ {j} - \overline {{\mathbf {v}}} _ {t + 1} \right\rangle \\ = \frac {1}{K (N - 1)} \left(1 - \frac {K}{N}\right) \sum_ {i = 1} ^ {N} \left\| \mathbf {v} _ {t + 1} ^ {i} - \overline {{\mathbf {v}}} _ {t + 1} \right\| ^ {2} \\ \end{array}
+$$
+
+where we use the following equalities: (1) $\mathbb{P}\left(i\in S_{t + 1}\right) = \frac{K}{N}$ and $\mathbb{P}\left(i,j\in S_{t + 1}\right) = \frac{K(K - 1)}{N(N - 1)}$ for all $i\neq j$ and (2) $\sum_{i\in [N]}\left\| \mathbf{v}_{t + 1}^i -\overline{\mathbf{v}}_{t + 1}\right\| ^2 +\sum_{i\neq j}\langle \mathbf{v}_{t + 1}^i -\overline{\mathbf{v}}_{t + 1},\mathbf{v}_{t + 1}^j -\overline{\mathbf{v}}_{t + 1}\rangle = 0.$
+
+Therefore,
+
+$$
+\begin{array}{l} \mathbb {E} \left\| \overline {{\mathbf {w}}} _ {t + 1} - \overline {{\mathbf {v}}} _ {t + 1} \right\| ^ {2} = \frac {N}{K (N - 1)} \left(1 - \frac {K}{N}\right) \mathbb {E} \left[ \frac {1}{N} \sum_ {i = 1} ^ {N} \left\| \mathbf {v} _ {t + 1} ^ {i} - \overline {{\mathbf {v}}} _ {t + 1} \right\| ^ {2} \right] \\ \leq \frac {N}{K (N - 1)} \left(1 - \frac {K}{N}\right) \mathbb {E} \left[ \frac {1}{N} \sum_ {i = 1} ^ {N} \left\| \mathbf {v} _ {t + 1} ^ {i} - \overline {{\mathbf {w}}} _ {t _ {0}} \right\| ^ {2} \right] \\ \leq \frac {N}{K (N - 1)} \left(1 - \frac {K}{N}\right) 4 \eta_ {t} ^ {2} E ^ {2} G ^ {2}. \\ \end{array}
+$$
+
+where in the last inequality we use the same argument as in part (1).
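The exact identity derived for Scheme II, $\mathbb{E}\|\overline{\mathbf{w}}_{t+1}-\overline{\mathbf{v}}_{t+1}\|^2 = \frac{1}{K(N-1)}\big(1-\frac{K}{N}\big)\sum_i\|\mathbf{v}_{t+1}^i-\overline{\mathbf{v}}_{t+1}\|^2$, can be verified by enumerating all without-replacement $K$-subsets for a small instance (a sketch with illustrative sizes):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
N, K, d = 6, 3, 2
v = rng.normal(size=(N, d))                  # stand-ins for v_{t+1}^i
v_bar = v.mean(axis=0)                       # uniform p_k = 1/N

# Exact expectation over all uniformly chosen K-subsets (Scheme II).
subsets = list(itertools.combinations(range(N), K))
exact = np.mean([np.sum((v[list(S)].mean(axis=0) - v_bar) ** 2) for S in subsets])

closed = (1 / (K * (N - 1))) * (1 - K / N) * np.sum((v - v_bar) ** 2)
```

The enumerated expectation `exact` and the closed form `closed` agree to machine precision; this is the classical variance of the sample mean under sampling without replacement, with finite-population correction $\frac{N-K}{N-1}$.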
+
+# C THE EMPIRICAL RISK MINIMIZATION EXAMPLE IN SECTION 4
+
+# C.1 DETAIL OF THE EXAMPLE
+
+Let $p > 1$ be an integer. To avoid the trivial case, we assume $N > 1$ . Consider the following quadratic optimization problem:
+
+$$
+\min _ {\mathbf {w}} F (\mathbf {w}) \triangleq \frac {1}{2 N} \left[ \mathbf {w} ^ {\top} \mathbf {A} \mathbf {w} - 2 \mathbf {b} ^ {\top} \mathbf {w} \right] + \frac {\mu}{2} \| \mathbf {w} \| _ {2} ^ {2}, \tag {23}
+$$
+
+where $\mathbf{A} \in \mathbb{R}^{(Np + 1) \times (Np + 1)}$ , $\mathbf{w}$ , $\mathbf{b} \in \mathbb{R}^{Np + 1}$ and $\mu > 0$ . Specifically, let $\mathbf{b} = \mathbf{e}_1 \triangleq (1, 0, \dots, 0)^\top$ , and $\mathbf{A}$ be a symmetric and tri-diagonal matrix defined by
+
+$$
+\left(\mathbf {A}\right) _ {i, j} = \left\{ \begin{array}{l l} 2, & i = j \in [ 1, N p + 1 ], \\ - 1, & | j - i | = 1 \text { and } i, j \in [ 1, N p + 1 ], \\ 0, & \text {otherwise}, \end{array} \right. \tag {24}
+$$
+
+where $i, j$ are row and column indices, respectively. We partition $\mathbf{A}$ into a sum of $N$ symmetric matrices $(\mathbf{A} = \sum_{k=1}^{N} \mathbf{A}_k)$ and $\mathbf{b}$ into $\mathbf{b} = \sum_{k=1}^{N} \mathbf{b}_k$ . Specifically, we choose $\mathbf{b}_1 = \mathbf{b} = \mathbf{e}_1$ and $\mathbf{b}_2 = \dots = \mathbf{b}_N = 0$ . To give the formulation of $\mathbf{A}_k$ 's, we first introduce a series of sparse and symmetric matrices $\mathbf{B}_k$ ( $1 \leq k \leq N$ ):
+
+$$
+\left(\mathbf {B} _ {k}\right) _ {i, j} = \left\{ \begin{array}{l l} 1, & i = j \in \{(k - 1) p + 1, k p + 1 \}, \\ 2, & i = j \text { and } (k - 1) p + 1 < i, j < k p + 1, \\ - 1, & | j - i | = 1 \text { and } i, j \in [ (k - 1) p + 1, k p + 1 ], \\ 0, & \text {otherwise.} \end{array} \right. \tag {25}
+$$
+
+Now $\mathbf{A}_k$ 's are given by $\mathbf{A}_1 = \mathbf{B}_1 + \mathbf{E}_{1,1}$ , $\mathbf{A}_k = \mathbf{B}_k$ ( $2 \leq k \leq N-1$ ) and $\mathbf{A}_N = \mathbf{B}_N + \mathbf{E}_{Np+1,Np+1}$ where $\mathbf{E}_{i,j}$ is the matrix where only the $(i,j)$ th entry is one and the rest are zero.
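The construction in eqns. (24)–(25) is easy to reproduce. A sketch that builds $\mathbf{A}$ and the $\mathbf{A}_k$'s with numpy (for one illustrative choice of $N$ and $p$) and checks the partition $\mathbf{A} = \sum_k \mathbf{A}_k$:

```python
import numpy as np

def build_example(N, p):
    n = N * p + 1
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # tri-diagonal, eqn (24)
    Aks = []
    for k in range(N):
        Bk = np.zeros((n, n))
        lo = k * p                                          # 0-based block start
        blk = 2 * np.eye(p + 1) - np.eye(p + 1, k=1) - np.eye(p + 1, k=-1)
        blk[0, 0] = blk[p, p] = 1                           # eqn (25): corner entries are 1
        Bk[lo:lo + p + 1, lo:lo + p + 1] = blk
        Aks.append(Bk)
    Aks[0][0, 0] += 1                                       # A_1 = B_1 + E_{1,1}
    Aks[-1][n - 1, n - 1] += 1                              # A_N = B_N + E_{Np+1,Np+1}
    return A, Aks

A, Aks = build_example(N=5, p=4)
```

Neighboring blocks overlap in exactly one diagonal entry, where the two corner values $1 + 1$ recover the diagonal value $2$ of $\mathbf{A}$; the corrections $\mathbf{E}_{1,1}$ and $\mathbf{E}_{Np+1,Np+1}$ do the same at the two ends.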
+
+Back to the federated setting, we distribute the $k$ -th partition $(\mathbf{A}_k,\mathbf{b}_k)$ to the $k$ -th device and construct its corresponding local objective by
+
+$$
+F _ {k} (\mathbf {w}) \triangleq \frac {1}{2} \left[ \mathbf {w} ^ {\top} \mathbf {A} _ {k} \mathbf {w} - 2 \mathbf {b} _ {k} ^ {\top} \mathbf {w} + \mu \| \mathbf {w} \| _ {2} ^ {2} \right]. \tag {26}
+$$
+
+In Appendix C.3, we show that the quadratic minimization with the global objective (23) and the local objectives (26) is in fact a distributed linear regression. In this example, the training data are balanced but not identically distributed. Moreover, the data on each device are sparse in the sense that non-zero features occur only in one block. The following theorem (Theorem 5) shows that FedAvg may converge to sub-optimal points even if the learning rate is arbitrarily small. We provide a numerical illustration in Appendix C.2 and a mathematical proof in Appendix C.4.
+
+Theorem 5. In the distributed linear regression problem above, assume that each device computes exact (non-stochastic) gradients. With a constant and sufficiently small learning rate $\eta$ and $E > 1$ , FedAvg converges to a sub-optimal solution, whereas FedAvg with $E = 1$ (i.e., gradient descent) converges to the optimum. Quantitatively, we have
+
+$$
+\left\| \widetilde {\mathbf {w}} ^ {*} - \mathbf {w} ^ {*} \right\| \geq \frac {(E - 1) \eta}{16} \left\| \mathbf {A} _ {1} \mathbf {A} _ {2} \mathbf {w} ^ {*} \right\|
+$$
+
+where $\widetilde{\mathbf{w}}^*$ is the solution produced by FedAvg and $\mathbf{w}^*$ is the optimal solution.
+
+# C.2 NUMERICAL ILLUSTRATION ON THE EXAMPLE
+
+We conduct numerical experiments to illustrate the poor performance of FedAvg on the example introduced in Section 4. Here we set $N = 5$ , $p = 4$ , $\mu = 2 \times 10^{-4}$ . The learning-rate annealing scheme is $\eta_t = \frac{1/5}{5 + t \cdot a}$ , where $a$ is the best parameter chosen from the set $\{10^{-2}, 10^{-4}, 10^{-6}\}$ .
+
+# C.3 SOME PROPERTIES OF THE EXAMPLE
+
+Recall that the symmetric matrix $\mathbf{A} \in \mathbb{R}^{(Np + 1) \times (Np + 1)}$ is defined in eqn. (24). Observe that $\mathbf{A}$ is invertible and, for any vector $\mathbf{w} \in \mathbb{R}^{Np + 1}$ ,
+
+$$
+\mathbf {w} ^ {\top} \mathbf {A} \mathbf {w} = 2 \sum_ {i = 1} ^ {N p + 1} \mathbf {w} _ {i} ^ {2} - 2 \sum_ {i = 1} ^ {N p} \mathbf {w} _ {i} \mathbf {w} _ {i + 1} = \mathbf {w} _ {1} ^ {2} + \mathbf {w} _ {N p + 1} ^ {2} + \sum_ {i = 1} ^ {N p} \left(\mathbf {w} _ {i} - \mathbf {w} _ {i + 1}\right) ^ {2} \leq 4 \| \mathbf {w} \| _ {2} ^ {2}, \tag {27}
+$$
+
+which implies that $0 \prec \mathbf{A} \preceq 4\mathbf{I}$ .
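This spectral bound is easy to confirm numerically for one instance; the eigenvalues of this tridiagonal matrix are $2 - 2\cos\frac{k\pi}{n+1} \in (0, 4)$ (the size below is an illustrative choice):

```python
import numpy as np

n = 21                                       # Np + 1 with, e.g., N = 5, p = 4
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
eig = np.linalg.eigvalsh(A)                  # spectrum of the tridiagonal matrix
```

The smallest eigenvalue is positive (about $0.02$ at this size) and the largest is strictly below $4$, i.e., $0 \prec \mathbf{A} \prec 4\mathbf{I}$ for any finite size.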
+
+Figure 2: (a) Fixed learning rates; (b) decayed learning rates. The left panel shows that the global objective value to which FedAvg converges is not optimal unless $E = 1$ . Once we decay the learning rate, FedAvg can converge to the optimum even if $E > 1$ .
+
+The sparse and symmetric matrices $\mathbf{B}_k$ ( $1 \leq k \leq N$ ) defined in eqn. (25) can be rewritten as
+
+$$
+\mathbf {B} _ {k} = \left( \begin{array}{c c c} \mathbf {0} _ {(k - 1) p \times (k - 1) p} & & \\ & \left( \begin{array}{c c c c c} 1 & - 1 & & & \\ - 1 & 2 & - 1 & & \\ & - 1 & \ddots & \ddots & \\ & & \ddots & 2 & - 1 \\ & & & - 1 & 1 \end{array} \right) _ {(p + 1) \times (p + 1)} & \\ & & \mathbf {0} _ {(N - k) p \times (N - k) p} \end{array} \right).
+$$
+
+This leads to the following proposition, which is easy to verify with elementary linear algebra.
+
+Proposition 1. By construction, the $\mathbf{A}_k$ 's have the following properties:
+
+1. $\mathbf{A}_k$ is positive semidefinite with $\| \mathbf{A}_k\| _2\leq 4$ ;
+2. $\mathrm{rank}(\mathbf{A}_2) = \dots = \mathrm{rank}(\mathbf{A}_{N - 1}) = p$ and $\mathrm{rank}(\mathbf{A}_1) = \mathrm{rank}(\mathbf{A}_N) = p + 1$ ;
+
+3. For each $k$ , there exists a matrix $\mathbf{X}_k \in \mathbb{R}^{r_k \times (Np + 1)}$ such that $\mathbf{A}_k = \mathbf{X}_k^\top \mathbf{X}_k$ , where $r_k = \mathrm{rank}(\mathbf{A}_k)$ . For any given $k$ , each row of $\mathbf{X}_k$ has non-zero entries only on a block of coordinates, namely $\mathcal{I}_k = \{(k - 1)p + 1, (k - 1)p + 2, \dots, kp + 1\}$ . As a result, $\mathbf{A} = \sum_{k=1}^{N} \mathbf{A}_k = \mathbf{X}^\top \mathbf{X}$ , where $\mathbf{X} = (\mathbf{X}_1^\top, \dots, \mathbf{X}_N^\top)^\top \in \mathbb{R}^{(Np + 2) \times (Np + 1)}$ ;
+4. $\mathbf{w}^{*} = \mathbf{A}^{-1}\mathbf{b}$ is the global minimizer of problem eqn. (23), given by $(\mathbf{w}^{*})_{i} = 1 - \frac{i}{Np + 2}$ ( $1 \leq i \leq Np + 1$ ). Let $\widetilde{\mathbf{w}} \triangleq (\underbrace{1, \cdots, 1}_{p + 1}, \underbrace{0, \cdots, 0}_{(N - 1)p})^{\top} \in \mathbb{R}^{Np + 1}$ , then $\mathbf{A}_1\widetilde{\mathbf{w}} = \mathbf{X}_1^\top \mathbf{X}_1\widetilde{\mathbf{w}} = \mathbf{b}_1$ .
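Several of these properties can be verified numerically. A sketch (for one illustrative $N$, $p$) checking the closed-form minimizer in item 4, the identity $\mathbf{A}_1\widetilde{\mathbf{w}} = \mathbf{b}_1$, and $\mathrm{rank}(\mathbf{A}_1) = p + 1$ from item 2:

```python
import numpy as np

N, p = 5, 4
n = N * p + 1
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.zeros(n); b[0] = 1.0                       # b = e_1

# Item 4: the global minimizer has entries (w*)_i = 1 - i / (Np + 2).
w_star = np.linalg.solve(A, b)
idx = np.arange(1, n + 1)
ok_w = np.allclose(w_star, 1 - idx / (N * p + 2))

# Item 4 (second part): A_1 w~ = b_1 with w~ = (1, ..., 1, 0, ..., 0).
A1 = np.zeros((n, n))
blk = 2 * np.eye(p + 1) - np.eye(p + 1, k=1) - np.eye(p + 1, k=-1)
blk[0, 0] = blk[p, p] = 1
A1[:p + 1, :p + 1] = blk
A1[0, 0] += 1                                     # A_1 = B_1 + E_{1,1}
w_tilde = np.concatenate([np.ones(p + 1), np.zeros(n - p - 1)])
ok_tilde = np.allclose(A1 @ w_tilde, b)

# Item 2: rank(A_1) = p + 1.
ok_rank = np.linalg.matrix_rank(A1) == p + 1
```

All three flags come out true for this construction (here `ok_w` relies on the classical inverse of the discrete Laplacian, $(\mathbf{A}^{-1})_{ij} = \min(i,j)\,(n+1-\max(i,j))/(n+1)$).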
+
+From Proposition 1, we can rewrite the local quadratic objectives in the form of a ridge linear regression. Specifically, for $k = 1$ ,
+
+$$
+\begin{array}{l} F _ {1} (\mathbf {w}) = \frac {1}{2} \left[ \mathbf {w} ^ {\top} \mathbf {A} _ {1} \mathbf {w} - 2 \mathbf {b} _ {1} ^ {\top} \mathbf {w} + \mu \| \mathbf {w} \| ^ {2} \right], \\ = \frac {1}{2} \left[ \mathbf {w} ^ {\top} \mathbf {X} _ {1} ^ {\top} \mathbf {X} _ {1} \mathbf {w} - 2 \widetilde {\mathbf {w}} ^ {\top} \mathbf {X} _ {1} ^ {\top} \mathbf {X} _ {1} \mathbf {w} + \mu \| \mathbf {w} \| ^ {2} \right], \\ = \frac {1}{2} \| \mathbf {X} _ {1} (\mathbf {w} - \widetilde {\mathbf {w}}) \| _ {2} ^ {2} + \frac {1}{2} \mu \| \mathbf {w} \| ^ {2} + C, \\ \end{array}
+$$
+
+where $C$ is a constant independent of $\mathbf{w}$ . For $2\leq k\leq N$ ,
+
+$$
+\begin{array}{l} F _ {k} (\mathbf {w}) = \frac {1}{2} \left[ \mathbf {w} ^ {\top} \mathbf {A} _ {k} \mathbf {w} - 2 \mathbf {b} _ {k} ^ {\top} \mathbf {w} + \mu \| \mathbf {w} \| ^ {2} \right], \\ = \frac {1}{2} \| \mathbf {X} _ {k} \mathbf {w} \| _ {2} ^ {2} + \frac {1}{2} \mu \| \mathbf {w} \| ^ {2}. \\ \end{array}
+$$
+
+Similarly, the global quadratic objective eqn. (23) can be written as $F(\mathbf{w}) = \frac{1}{2N} \| \mathbf{X}(\mathbf{w} - \mathbf{w}^{*})\|_{2}^{2} + \frac{1}{2}\mu \| \mathbf{w}\|^{2}$ .
+
+The data on each device are sparse in the sense that non-zero features occur only in the block $\mathcal{I}_k$ of coordinates. Blocks on neighboring devices overlap in only one coordinate, i.e., $|\mathcal{I}_k\cap \mathcal{I}_{k + 1}| = 1$ . These observations imply that the training data in this example are not identically distributed.
+
+The $k$ -th device holds $r_k$ ( $= p$ or $p + 1$ ) non-zero feature vectors, which are vertically concatenated into the feature matrix $\mathbf{X}_k$ . Without loss of generality, we can assume that all devices hold $p + 1$ data points, since we can always pad the local dataset with zero vectors. Therefore $n_1 = \dots = n_N = p + 1$ in this case, which implies that the training data in this example are distributed in a balanced way.
+
+# C.4 PROOF OF THEOREM 5
+
+Proof of Theorem 5. To prove the theorem, we assume that (i) all devices hold the same number of data points, (ii) all devices perform local updates in parallel, (iii) all workers use the same learning rate $\eta$ , and (iv) every gradient computed by a device uses its full local dataset (hence the optimization is deterministic). We first prove the result for $\mu = 0$ .
+
+For convenience, we slightly abuse notation so that $\mathbf{w}_t$ is the global parameter at round $t$ rather than at step $t$ . Let $\mathbf{w}_t^{(k)}$ denote the updated local parameter at the $k$ -th worker at round $t$ . Once the first worker, which holds the data $(\mathbf{A}_1,\mathbf{b}_1)$ , runs $E$ steps of gradient descent on $F_{1}(\mathbf{w})$ starting from $\mathbf{w}_t$ , it follows that
+
+$$
+\mathbf {w} _ {t} ^ {(1)} = \left(\mathbf {I} - \eta \mathbf {A} _ {1}\right) ^ {E} \mathbf {w} _ {t} + \eta \sum_ {l = 0} ^ {E - 1} \left(\mathbf {I} - \eta \mathbf {A} _ {1}\right) ^ {l} \mathbf {b} _ {1}.
+$$
+
+For the remaining workers, we have $\mathbf{w}_t^{(k)} = (\mathbf{I} - \eta \mathbf{A}_k)^E\mathbf{w}_t$ for $2\leq k\leq N$ .
+
+Therefore, from the algorithm,
+
+$$
+\mathbf {w} _ {t + 1} = \frac {1}{N} \sum_ {k = 1} ^ {N} \mathbf {w} _ {t} ^ {(k)} = \left(\frac {1}{N} \sum_ {i = 1} ^ {N} (\mathbf {I} - \eta \mathbf {A} _ {i}) ^ {E}\right) \mathbf {w} _ {t} + \frac {\eta}{N} \sum_ {l = 0} ^ {E - 1} (\mathbf {I} - \eta \mathbf {A} _ {1}) ^ {l} \mathbf {b} _ {1}.
+$$
+
+Define $\rho \triangleq \| \frac{1}{N}\sum_{i = 1}^{N}(\mathbf{I} - \eta \mathbf{A}_i)^E\| _2$ . Next we show that $\rho < 1$ whenever $\eta < \frac{1}{4}$ . From Proposition 1, $\| \mathbf{A}_k\| _2\leq 4$ and $\mathbf{A}_k\succeq 0$ for all $k\in [N]$ . This implies $\| \mathbf{I} - \eta \mathbf{A}_k\| _2\leq 1$ for all $k\in [N]$ . Then for any $\mathbf{x}\in \mathbb{R}^{Np + 1}$ with $\| \mathbf{x}\| _2 = 1$ , we have $\mathbf{x}^\top (\mathbf{I} - \eta \mathbf{A}_k)^E\mathbf{x}\leq 1$ , and this quantity is non-increasing in $E$ . Then
+
+$$
+\begin{array}{l} \mathbf {x} ^ {\top} \left(\frac {1}{N} \sum_ {i = 1} ^ {N} (\mathbf {I} - \eta \mathbf {A} _ {i}) ^ {E}\right) \mathbf {x} \leq \mathbf {x} ^ {\top} \left(\frac {1}{N} \sum_ {i = 1} ^ {N} (\mathbf {I} - \eta \mathbf {A} _ {i})\right) \mathbf {x} \\ = \mathbf {x} ^ {\top} \left(\mathbf {I} - \frac {\eta}{N} \mathbf {A}\right) \mathbf {x} < 1 \\ \end{array}
+$$
+
+since $0 \prec \mathbf{A} \preceq 4\mathbf{I}$ means $0 \preceq (\mathbf{I} - \frac{\eta}{N}\mathbf{A}) \prec \mathbf{I}$ .
+
+Then $\| \mathbf{w}_{t + 1} - \mathbf{w}_t\| _2\leq \rho \| \mathbf{w}_t - \mathbf{w}_{t - 1}\| _2\leq \rho^t\| \mathbf{w}_1 - \mathbf{w}_0\| _2$ . By the triangle inequality,
+
+$$
+\left\| \mathbf {w} _ {t + n} - \mathbf {w} _ {t} \right\| _ {2} \leq \sum_ {i = 0} ^ {n - 1} \left\| \mathbf {w} _ {t + i + 1} - \mathbf {w} _ {t + i} \right\| _ {2} \leq \sum_ {i = 0} ^ {n - 1} \rho^ {t + i} \left\| \mathbf {w} _ {1} - \mathbf {w} _ {0} \right\| _ {2} \leq \rho^ {t} \frac {\left\| \mathbf {w} _ {1} - \mathbf {w} _ {0} \right\| _ {2}}{1 - \rho}
+$$
+
+which implies that $\{\mathbf{w}_t\}_{t\geq 1}$ is a Cauchy sequence and thus has a limit denoted by $\widetilde{\mathbf{w}}^*$ . We have
+
+$$
+\widetilde {\mathbf {w}} ^ {*} = \left(\mathbf {I} - \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\mathbf {I} - \eta \mathbf {A} _ {i}\right) ^ {E}\right) ^ {- 1} \left[ \frac {\eta}{N} \sum_ {l = 0} ^ {E - 1} \left(\mathbf {I} - \eta \mathbf {A} _ {1}\right) ^ {l} \mathbf {b} \right]. \tag {28}
+$$
+
+Now we can discuss the impact of $E$ .
+
+(1) When $E = 1$ , it follows from eqn. (28) that $\widetilde{\mathbf{w}}^{*} = \mathbf{A}^{-1}\mathbf{b} = \mathbf{w}^{*}$ , i.e., FedAvg converges to the global minimizer.
+(2) When $E = \infty$ , $\lim_{E\to \infty}\eta \sum_{l = 0}^{E - 1}(\mathbf{I} - \eta \mathbf{A}_1)^l\mathbf{b}_1 = \mathbf{A}_1^+ \mathbf{b}_1 = \widetilde{\mathbf{w}}$ and $\lim_{E\to \infty}\frac{1}{N}\sum_{i = 1}^{N}(\mathbf{I} - \eta \mathbf{A}_i)^E = \mathrm{diag}\{(1 - \frac{1}{N})\mathbf{I}_p;(1 - \frac{1}{N})\mathbf{I} - \frac{1}{N}\mathbf{M};(1 - \frac{1}{N})\mathbf{I}_p\}$ , where $\mathbf{M}\in \mathbb{R}^{((N - 2)p + 1)\times ((N - 2)p + 1)}$ is a symmetric matrix. In fact, $\mathbf{M}$ is almost block diagonal: it consists of $N - 2$ identical blocks (each equal to $\frac{1}{p + 1}\mathbf{e}\mathbf{e}^\top \in \mathbb{R}^{(p + 1)\times (p + 1)}$ ) placed along the diagonal, with each block overlapping the next in a single entry (the lower-right corner of one block coincides with the upper-left corner of the next). Therefore $\widetilde{\mathbf{w}}^{*} = (\underbrace{1,\cdots,1}_{p},\underbrace{\mathbf{V}_{11},\cdots,\mathbf{V}_{(N - 2)p + 1,1}}_{(N - 2)p + 1},\underbrace{0,\cdots,0}_{p})^\top$ , where $\mathbf{V} = (\mathbf{I} - \mathbf{M})^{-1}$ .
+
+From item 4 of Proposition 1, $\widetilde{\mathbf{w}}^*$ is different from $\mathbf{w}^*$ .
+
+(3) When $2 \leq E < \infty$ , note that
+
+$$
+\begin{array}{l} \widetilde {\mathbf {w}} ^ {*} - \mathbf {w} ^ {*} \\ = \left(\mathbf {I} - \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\mathbf {I} - \eta \mathbf {A} _ {i}\right) ^ {E}\right) ^ {- 1} \left[ \frac {\eta}{N} \sum_ {l = 0} ^ {E - 1} \left(\mathbf {I} - \eta \mathbf {A} _ {1}\right) ^ {l} \mathbf {A} - \left(\mathbf {I} - \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\mathbf {I} - \eta \mathbf {A} _ {i}\right) ^ {E}\right) \right] \mathbf {w} ^ {*}. \tag {29} \\ \end{array}
+$$
+
+The right-hand side of the last equation cannot be zero. Quantitatively, we have the following lemma, whose proof is deferred to the next subsection.
+
+Lemma 6. If the step size $\eta$ is sufficiently small, then in this example, we have
+
+$$
+\left\| \widetilde {\mathbf {w}} ^ {*} - \mathbf {w} ^ {*} \right\| \geq \frac {(E - 1) \eta}{16} \left\| \mathbf {A} _ {1} \mathbf {A} _ {2} \mathbf {w} ^ {*} \right\|. \tag {30}
+$$
+
+Since $\mathbf{A}_1\mathbf{A}_2\neq \mathbf{0}$ and $\mathbf{w}^*$ is dense, the lower bound in eqn. (30) is not vacuous.
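Theorem 5 can be illustrated by computing the FedAvg fixed point of eqn. (28) directly and comparing it with $\mathbf{w}^* = \mathbf{A}^{-1}\mathbf{b}$. A small sketch for $\mu = 0$ with exact gradients (the sizes $N$, $p$ and the learning rate are illustrative choices):

```python
import numpy as np

def fedavg_gap(E, N=3, p=2, eta=0.02):
    """Distance between the FedAvg fixed point (eqn. (28)) and the optimum w*."""
    n = N * p + 1
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.zeros(n); b[0] = 1.0
    Aks = []
    for k in range(N):                              # build the A_k's of eqn (25)
        Ak = np.zeros((n, n))
        blk = 2 * np.eye(p + 1) - np.eye(p + 1, k=1) - np.eye(p + 1, k=-1)
        blk[0, 0] = blk[p, p] = 1
        Ak[k * p:k * p + p + 1, k * p:k * p + p + 1] = blk
        Aks.append(Ak)
    Aks[0][0, 0] += 1
    Aks[-1][n - 1, n - 1] += 1
    M = sum(np.linalg.matrix_power(np.eye(n) - eta * Ak, E) for Ak in Aks) / N
    c = (eta / N) * sum(np.linalg.matrix_power(np.eye(n) - eta * Aks[0], l) @ b
                        for l in range(E))
    w_tilde = np.linalg.solve(np.eye(n) - M, c)     # fixed point, eqn (28)
    return np.linalg.norm(w_tilde - np.linalg.solve(A, b))

gap1, gap4 = fedavg_gap(E=1), fedavg_gap(E=4)
```

For $E = 1$ the gap is zero up to numerical precision (the map reduces to $\mathbf{I} - \frac{\eta}{N}\mathbf{A}$ with fixed point $\mathbf{A}^{-1}\mathbf{b}$), while for $E = 4$ the fixed point stays bounded away from $\mathbf{w}^*$, consistent with the lower bound in Lemma 6.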
+
+This proves the result for $\mu = 0$ . For the case $\mu > 0$ , we replace $\mathbf{A}_i$ with $\mathbf{A}_i + \mu \mathbf{I}$ and assume $\eta < \frac{1}{4 + \mu}$ instead of $\eta < \frac{1}{4}$ . The discussion of the different choices of $E$ is unaffected.
+
+# C.5 PROOF OF LEMMA 6
+
+Proof. We derive the conclusion mainly from the expression in eqn. (29). Let $f(\eta)$ be a function of $\eta$ . We say a matrix $\mathbf{T}$ is $\Theta(f(\eta))$ if and only if there exist positive constants $C_1$ and $C_2$ such that $C_1f(\eta) \leq \| \mathbf{T} \| \leq C_2f(\eta)$ for all $\eta > 0$ . Throughout the following analysis, we consider the regime where $\eta$ is sufficiently small.
+
+Denote by $\mathbf{V} = \sum_{i=1}^{N} \mathbf{A}_i^2$ . First we have
+
+$$
+\begin{array}{l} \mathbf {I} - \frac {1}{N} \sum_ {i = 1} ^ {N} (\mathbf {I} - \eta \mathbf {A} _ {i}) ^ {E} = \mathbf {I} - \frac {1}{N} \sum_ {i = 1} ^ {N} (\mathbf {I} - E \eta \mathbf {A} _ {i} + \frac {E (E - 1)}{2} \eta^ {2} \mathbf {A} _ {i} ^ {2} + \Theta (\eta^ {3})) \\ = \frac {E \eta}{N} \mathbf {A} - \frac {E (E - 1)}{2 N} \eta^ {2} \mathbf {V} + \Theta (\eta^ {3}). \tag {31} \\ \end{array}
+$$
+
+Then by plugging this equation into the right hand part of eqn. (29), we have
+
+$$
+\begin{array}{l} \frac {\eta}{N} \sum_ {l = 0} ^ {E - 1} \left(\mathbf {I} - \eta \mathbf {A} _ {1}\right) ^ {l} \mathbf {A} - \left(\mathbf {I} - \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\mathbf {I} - \eta \mathbf {A} _ {i}\right) ^ {E}\right) \\ = \frac {\eta}{N} \sum_ {l = 0} ^ {E - 1} (\mathbf {I} - l \eta \mathbf {A} _ {1} + \Theta (\eta^ {2})) \mathbf {A} - \left(\frac {E \eta}{N} \mathbf {A} - \frac {E (E - 1)}{2 N} \eta^ {2} \mathbf {V} + \Theta (\eta^ {3})\right) \\ = \frac {\eta^ {2}}{N} \left(\frac {E (E - 1)}{2} (\mathbf {V} - \mathbf {A} _ {1} \mathbf {A}) + \Theta (\eta)\right) \\ \end{array}
+$$
+
+Second from eqn. (31), we have that
+
+$$
+\left(\mathbf {I} - \frac {1}{N} \sum_ {i = 1} ^ {N} (\mathbf {I} - \eta \mathbf {A} _ {i}) ^ {E}\right) ^ {- 1} = \left(\frac {E \eta}{N} \mathbf {A} + \Theta (\eta^ {2})\right) ^ {- 1} = \frac {N}{E \eta} \mathbf {A} ^ {- 1} + \Theta (1).
+$$
+
+Plugging the last two equations into eqn. (29), we have
+
+$$
+\begin{array}{l} \left\| \widetilde {\mathbf {w}} ^ {*} - \mathbf {w} ^ {*} \right\| = \left\| \left(\frac {N}{E \eta} \mathbf {A} ^ {- 1} + \Theta (1)\right) \frac {\eta^ {2}}{N} \left(\frac {E (E - 1)}{2} (\mathbf {V} - \mathbf {A} _ {1} \mathbf {A}) + \Theta (\eta)\right) \mathbf {w} ^ {*} \right\| \\ = \left\| \left(\frac {E - 1}{2} \eta \mathbf {A} ^ {- 1} (\mathbf {V} - \mathbf {A} _ {1} \mathbf {A}) + \Theta (\eta^ {2})\right) \mathbf {w} ^ {*} \right\| \\ \geq \frac {(E - 1) \eta}{16} \| (\mathbf {V} - \mathbf {A} _ {1} \mathbf {A}) \mathbf {w} ^ {*} \| \\ = \frac {(E - 1) \eta}{16} \| \mathbf {A} _ {1} \mathbf {A} _ {2} \mathbf {w} ^ {*} \| \\ \end{array}
+$$
+
+where the inequality holds because (i) we require $\eta$ to be sufficiently small and (ii) $\| \mathbf{A}^{-1}\mathbf{x}\| \geq \frac{1}{4}\| \mathbf{x}\|$ for any vector $\mathbf{x}$ , as a result of $0 < \| \mathbf{A}\| \leq 4$ . The last equality uses the facts that (i) $\mathbf{V} - \mathbf{A}_1\mathbf{A} = \mathbf{A}_1\sum_{i=2}^{N}\mathbf{A}_i$ and (ii) $\mathbf{A}_1\mathbf{A}_i = \mathbf{0}$ for any $i \geq 3$ .
+
+# D EXPERIMENTAL DETAILS
+
+# D.1 EXPERIMENTAL SETTING
+
+Model and loss. We examine our theoretical results on multinomial logistic regression. Specifically, let $f(\mathbf{w}; \mathbf{x}_i)$ denote the prediction model with parameter $\mathbf{w} = (\mathbf{W}, \mathbf{b})$ and the form $f(\mathbf{w}; \mathbf{x}_i) = \mathrm{softmax}(\mathbf{W}\mathbf{x}_i + \mathbf{b})$ . The loss function is given by
+
+$$
+F(\mathbf{w}) = \frac{1}{n} \sum_{i=1}^{n} \operatorname{CrossEntropy}\left(f\left(\mathbf{w}; \mathbf{x}_i\right), \mathbf{y}_i\right) + \lambda \|\mathbf{w}\|_2^2.
+$$
+
+This is a convex optimization problem. The regularization parameter is set to $\lambda = 10^{-4}$ .
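+As a concrete illustration, the objective above can be evaluated in a few lines of NumPy. This is only a sketch of the stated loss, not the authors' code; the helper names (`softmax`, `objective`) and array shapes are our own choices.

```python
import numpy as np

def softmax(z):
    # subtract the row-wise max for numerical stability
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def objective(W, b, X, y, lam=1e-4):
    """Regularized cross-entropy F(w) for multinomial logistic regression.

    X: (n, d) features, y: (n,) integer labels, W: (C, d), b: (C,).
    """
    probs = softmax(X @ W.T + b)                    # predictions f(w; x_i)
    n = X.shape[0]
    ce = -np.log(probs[np.arange(n), y]).mean()     # average cross-entropy
    reg = lam * (np.sum(W ** 2) + np.sum(b ** 2))   # lambda * ||w||_2^2
    return ce + reg
```

+At $\mathbf{w} = 0$ the predictions are uniform over the $C$ classes, so the objective equals $\log C$, a handy sanity check.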
+
+Datasets. We evaluate our theoretical results on both real and synthetic data. For real data, we choose the MNIST dataset (LeCun et al., 1998) because of its wide academic use. To impose statistical heterogeneity, we distribute the data among $N = 100$ devices such that each device contains samples of only two digits. To explore the effect of data imbalance, we further vary the number of samples among devices. Specifically, in the unbalanced cases, the number of samples per device follows a power law, while in the balanced cases, we force all devices to have the same number of samples.
+
+Synthetic data allow us to manipulate heterogeneity more precisely. Here we follow the same setup as described in (Sahu et al., 2018). In particular, we generate synthetic samples $(\mathbf{X}_k,\mathbf{Y}_k)$ according to the model $y = \mathrm{argmax}(\mathrm{softmax}(\mathbf{W}_kx + \mathbf{b}_k))$ with $x\in \mathbb{R}^{60}$ , $\mathbf{W}_k\in \mathbb{R}^{10\times 60}$ and $\mathbf{b}_k\in \mathbb{R}^{10}$ , where $\mathbf{X}_k\in \mathbb{R}^{n_k\times 60}$ and $\mathbf{Y}_k\in \mathbb{R}^{n_k}$ . We model each entry of $\mathbf{W}_k$ and $\mathbf{b}_k$ as $\mathcal{N}(\mu_k,1)$ with $\mu_{k}\sim \mathcal{N}(0,\alpha)$ , and $(x_{k})_{j}\sim \mathcal{N}(v_{k},\frac{1}{j^{1.2}})$ with $v_{k}\sim \mathcal{N}(B_{k},1)$ and $B_{k}\sim \mathcal{N}(0,\beta)$ . Here $\alpha$ and $\beta$ allow for more precise manipulation of data heterogeneity: $\alpha$ controls how much local models differ from each other and $\beta$ controls how much the local data at each device differs from that of other devices. There are $N = 100$ devices in total. The number of samples $n_k$ in each device follows a power law, i.e., data are distributed in an unbalanced way. We denote by synthetic $(\alpha ,\beta)$ the synthetic dataset with parameter $\alpha$ and $\beta$ .
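+The generator described above can be sketched as follows. This is an illustrative reading of the setup, not the authors' script; in particular, we treat the second argument of $\mathcal{N}(\cdot, \cdot)$ as a variance and index features from $j = 1$.

```python
import numpy as np

def make_synthetic(alpha, beta, n_k, d=60, n_classes=10, seed=0):
    """Generate one device's samples (X_k, Y_k) for synthetic(alpha, beta).

    alpha controls how much the local model (W_k, b_k) differs across
    devices; beta controls how much the local features differ.
    """
    rng = np.random.default_rng(seed)
    mu_k = rng.normal(0.0, np.sqrt(alpha))              # mu_k ~ N(0, alpha)
    W_k = rng.normal(mu_k, 1.0, size=(n_classes, d))    # entries ~ N(mu_k, 1)
    b_k = rng.normal(mu_k, 1.0, size=n_classes)
    B_k = rng.normal(0.0, np.sqrt(beta))                # B_k ~ N(0, beta)
    v_k = rng.normal(B_k, 1.0, size=d)                  # v_k ~ N(B_k, 1)
    std_j = np.array([1.0 / j ** 1.2 for j in range(1, d + 1)]) ** 0.5
    X_k = rng.normal(v_k, std_j, size=(n_k, d))         # (x_k)_j ~ N(v_k, 1/j^1.2)
    Y_k = np.argmax(X_k @ W_k.T + b_k, axis=1)          # argmax of softmax = argmax of logits
    return X_k, Y_k
```

+Setting $\alpha = \beta = 0$ removes the device-level perturbations, recovering the synthetic(0, 0) i.i.d.-model case.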
+
+We summarize the information of federated datasets in Table 2.
+
+Experiments. For all experiments, we initialize all runs with $\mathbf{w}_0 = 0$. In each round, all selected devices run $E$ steps of SGD in parallel. We decay the learning rate at the end of each round according to $\eta_t = \frac{\eta_0}{1 + t}$, where $\eta_0$ is chosen from the set $\{1, 0.1, 0.01\}$. We evaluate the averaged model after each global synchronization on the corresponding global objective. For a fair comparison, we control all randomness in the experiments so that the set of activated devices is the same across all algorithms for a given configuration.
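+The round structure above (Scheme I sampling, $E$ local SGD steps, simple averaging, and the decay $\eta_t = \eta_0 / (1 + t)$) can be sketched as below. `grad_fn` and the function signature are placeholders of our own, not the paper's implementation.

```python
import numpy as np

def fedavg(grad_fn, data, w0, K, E, eta0, rounds, p, seed=0):
    """Minimal FedAvg sketch with Scheme I sampling: in round t, sample K
    device indices i.i.d. with weights p_k, run E local SGD steps with
    learning rate eta_t = eta0 / (1 + t), then simply average the models."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    for t in range(rounds):
        eta = eta0 / (1 + t)                       # decayed learning rate
        chosen = rng.choice(len(data), size=K, replace=True, p=p)
        local_models = []
        for k in chosen:
            wk = w.copy()
            for _ in range(E):                     # E local SGD steps
                wk -= eta * grad_fn(wk, data[k])
            local_models.append(wk)
        w = np.mean(local_models, axis=0)          # simple average
    return w
```

+On a toy quadratic objective with a common minimizer across devices, this loop converges to that minimizer as the decayed step sizes accumulate.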
+
+Table 2: Statistics of federated datasets
+
+| Dataset | Details | # Devices (N) | # Training samples (n) | Samples/device (mean) | Samples/device (std) |
+| --- | --- | --- | --- | --- | --- |
+| MNIST | balanced | 100 | 54200 | 542 | 0 |
+| MNIST | unbalanced | 100 | 62864 | 628 | 800 |
+| Synthetic Data | α = 0, β = 0 | 100 | 42522 | 425 | 1372 |
+| Synthetic Data | α = 1, β = 1 | 100 | 27348 | 273 | 421 |
+
+# D.2 THEORETICAL VERIFICATION
+
+The impact of $E$. From our theory, when the total number of steps $T$ is sufficiently large, the required number of communication rounds to achieve a certain precision is
+
+$$
+T_{\epsilon} / E \approx \mathcal{O}\left(\frac{E G^2}{K} + E G^2 + \frac{\sum_{k=1}^{N} p_k^2 \sigma^2 + L \Gamma + \kappa G^2}{E}\right),
+$$
+
+which, as a function of $E$, first decreases and then increases. This implies that an optimal number of local steps $E^{*}$ exists. Moreover, $T_{\epsilon} / E$ evaluated at $E^{*}$ is
+
+$$
+\mathcal{O}\left(G \sqrt{\sum_{k=1}^{N} p_k^2 \sigma^2 + L \Gamma + \kappa G^2}\right),
+$$
+
+which implies that FedAvg needs more communication rounds to cope with more severe heterogeneity.
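+Numerically, the bound behaves like $aE + b/E$ for constants $a$ and $b$ absorbing the problem-dependent terms. The sketch below, with illustrative constants of our own choosing, recovers the minimizer $E^* = \sqrt{b/a}$ and the minimum value $2\sqrt{ab}$, which matches the $\mathcal{O}(G\sqrt{\cdot})$ expression above.

```python
import math

def rounds_bound(E, a, b):
    """Required communication rounds T_eps / E as a function of local steps E:
    an increasing term a*E plus a decreasing term b/E, where a absorbs
    G^2/K + G^2 and b absorbs sum_k p_k^2 sigma^2 + L*Gamma + kappa*G^2."""
    return a * E + b / E

# illustrative constants (more heterogeneity -> larger b)
a, b = 2.0, 50.0
E_star = math.sqrt(b / a)            # closed-form minimizer, here E* = 5
min_rounds = 2.0 * math.sqrt(a * b)  # value of the bound at E*
```

+Since $b$ grows with the heterogeneity terms, both $E^*$ and the minimum number of rounds $2\sqrt{ab}$ increase with heterogeneity, as stated.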
+
+To validate these observations, we test FedAvg with Scheme I on the four datasets listed in Table 2. In each round, we activate $K = 30$ devices and set $\eta_0 = 0.1$ for all experiments in this part. For unbalanced MNIST, we use batch size $b = 64$; the target loss value is 0.29 and the minimum loss value found is 0.2591. For balanced MNIST, we also use batch size $b = 64$; the target loss value is 0.50 and the minimum loss value found is 0.3429. For the two synthetic datasets, we choose $b = 24$; the target and minimum loss values are 0.95 and 0.7999 for synthetic(0,0), and 1.15 and 1.075 for synthetic(1,1).
+
+The impact of $K$. Our theory suggests that a larger $K$ may accelerate convergence, since $T_{\epsilon} / E$ contains a term $\mathcal{O}\left(\frac{EG^2}{K}\right)$. We fix $E = 5$ and $\eta_0 = 0.1$ for all experiments in this part, and set the batch size to 64 for the two MNIST datasets and 24 for the two synthetic datasets. We test Scheme I for illustration. Our results show that FedAvg converges regardless of the value of $K$. In Figure 3, the curves in each subfigure largely overlap; to show the differences between them more clearly, we zoom in on the last few rounds in the upper left corner of each subfigure. This reveals that the curve for a sufficiently large $K$ is slightly better. The result also shows that there is no need to sample as many devices as possible in convex federated optimization.
+
+Sampling and averaging schemes. We analyze the influence of sampling and averaging schemes. As stated in Section 3.3, Scheme I samples $K$ indices i.i.d. (with replacement) with weights $p_k$ and simply averages the models, as proposed by Sahu et al. (2018). Scheme II samples $K$ devices uniformly (without replacement) and computes a weighted average of the models with scaling factor $N / K$. Transformed Scheme II scales each local objective and uses uniform sampling and simple averaging. We compare Scheme I, Scheme II and transformed Scheme II, as well as the original scheme (McMahan et al., 2017), on the four datasets. We carefully tuned the learning rate for the original scheme; in particular, we chose the best step size from the set $\{0.1, 0.5, 0.9, 1.1\}$. We did not fine-tune the remaining schemes and set $\eta_0 = 0.1$ by default. The hyperparameters are the same for all schemes: $E = 20$, $K = 10$ and $b = 64$. The results are shown in Figures 1c and 1d.
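+The two aggregation rules can be sketched as follows; both are unbiased estimators of $\sum_k p_k \mathbf{w}_k$ in expectation. This is a schematic reading of the schemes, not the experiment code, and the helper names are our own.

```python
import numpy as np

def scheme1(models, p, K, rng):
    """Scheme I: sample K indices i.i.d. with replacement with weights p_k,
    then simply average the selected local models."""
    idx = rng.choice(len(models), size=K, replace=True, p=p)
    return np.mean([models[k] for k in idx], axis=0)

def scheme2(models, p, K, rng):
    """Scheme II: sample K devices uniformly without replacement, then take a
    weighted average of the local models with scaling factor N / K."""
    N = len(models)
    idx = rng.choice(N, size=K, replace=False)
    return (N / K) * sum(p[k] * models[k] for k in idx)
```

+Averaging many independent draws of either scheme approaches $\sum_k p_k \mathbf{w}_k$, which is why both admit unbiasedness-based convergence arguments, even though their variances differ.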
+
+Our theory guarantees the convergence of Scheme I in the common federated setting. As expected, Scheme I performs well and stably across most experiments. This coincides with the findings of Sahu et al. (2018), who noticed that Scheme I performs slightly better than another scheme: the server
+
+Figure 3: The impact of $K$ on four datasets: (a) balanced MNIST; (b) unbalanced MNIST; (c) synthetic(0, 0); (d) synthetic(1, 1). To show more clearly the differences between the curves, we zoom in on the last few rounds in the upper left corner of each subfigure.
+
+first uniformly samples devices and then averages the local models with weight $p_k / \sum_{l \in S_t} p_l$. However, our theoretical framework does not apply to this scheme, since for $t \in \mathcal{I}$, $\mathbb{E}_{S_t} \overline{\mathbf{w}}_t = \overline{\mathbf{v}}_t$ does not hold in general.
+
+Our theory does not guarantee that FedAvg with Scheme II converges when the training data are distributed in an unbalanced way. In fact, if the number of training samples varies too much among devices, Scheme II may even diverge. To illustrate this point, we showed its poor performance on the unbalanced MNIST dataset in Figure 1b. In Figure 4, we show additional results for Scheme II on the two synthetic datasets, which are the most unbalanced. We choose $b = 24$, $K = 10$, $E = 10$ and $\eta_0 = 0.1$ for these experiments. By contrast, transformed Scheme II performs well, albeit with a slower convergence rate than Scheme I.
+
+Figure 4: The performance of the four schemes on the two synthetic datasets: (a) synthetic(0, 0); (b) synthetic(1, 1). Scheme I performs the best and the most stably. The original scheme performs second best. The curve of Scheme II fluctuates and shows no sign of convergence. Transformed Scheme II converges, but more slowly than Scheme I.
\ No newline at end of file
diff --git a/ontheconvergenceoffedavgonnoniiddata/images.zip b/ontheconvergenceoffedavgonnoniiddata/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f308187243653c0fc90332b660c33bed5d30a255
--- /dev/null
+++ b/ontheconvergenceoffedavgonnoniiddata/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:31d075cd641f4a78691e6c4f1b80731612f73b59b719fbae5d8df3b421f5e219
+size 1069103
diff --git a/ontheconvergenceoffedavgonnoniiddata/layout.json b/ontheconvergenceoffedavgonnoniiddata/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..073512f98fae618c7840fba5c9ae16ca308d6501
--- /dev/null
+++ b/ontheconvergenceoffedavgonnoniiddata/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a07202590671f679d2a045fafae8283dff52b9f8920b6ca6f3e18d29d4c4ba6
+size 1273376
diff --git a/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/79265851-6c9f-40c9-9670-1df76f49c641_content_list.json b/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/79265851-6c9f-40c9-9670-1df76f49c641_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8d08d6203a8b2d3b2f9b9f608ea71ffff19f4c73
--- /dev/null
+++ b/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/79265851-6c9f-40c9-9670-1df76f49c641_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f75b574a48979c345741d8e81a9ca18b5beca84e44481986a771cf94d3b83462
+size 163949
diff --git a/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/79265851-6c9f-40c9-9670-1df76f49c641_model.json b/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/79265851-6c9f-40c9-9670-1df76f49c641_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f99550a77ec184dbc44e94e18c7ebdcf163b07dc
--- /dev/null
+++ b/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/79265851-6c9f-40c9-9670-1df76f49c641_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:36e64fda0b2eec67f0d25b2e67cd64c797e515186f1f4e837d0dfb3295d8d6d3
+size 197962
diff --git a/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/79265851-6c9f-40c9-9670-1df76f49c641_origin.pdf b/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/79265851-6c9f-40c9-9670-1df76f49c641_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..833a582b39c2235fc60bc67a7d86355b81c3f846
--- /dev/null
+++ b/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/79265851-6c9f-40c9-9670-1df76f49c641_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d4916158cef071d643d87b178ee7d93686fcee300a4feb61d988441ea6a9353a
+size 634325
diff --git a/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/full.md b/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..2d88cb20373d9bce00264f99cd9bdfef053b2cc2
--- /dev/null
+++ b/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/full.md
@@ -0,0 +1,548 @@
+# ON THE EQUIVALENCE BETWEEN POSITIONAL NODE EMBEDDINGS AND STRUCTURAL GRAPH REPRESENTATIONS
+
+Balasubramaniam Srinivasan
+
+Department of Computer Science
+
+Purdue University
+
+bsriniv@purdue.edu
+
+Bruno Ribeiro
+
+Department of Computer Science
+
+Purdue University
+
+ribeiro@cs.purdue.edu
+
+# ABSTRACT
+
+This work provides the first unifying theoretical framework for node (positional) embeddings and structural graph representations, bridging methods like matrix factorization and graph neural networks. Using invariant theory, we show that the relationship between structural representations and node embeddings is analogous to that of a distribution and its samples. We prove that all tasks that can be performed by node embeddings can also be performed by structural representations and vice-versa. We also show that the concepts of transductive and inductive learning are unrelated to node embeddings and graph representations, clearing another source of confusion in the literature. Finally, we introduce new practical guidelines for generating and using node embeddings, which fix significant shortcomings of the standard operating procedures used today.
+
+# 1 INTRODUCTION
+
+The theory of structural graph representations is a recently emerging field. It creates a link between relational learning and invariant theory. Interestingly, or rather unfortunately, there is no unified theory connecting node embeddings—low-rank matrix approximations, factor analysis, latent semantic analysis, etc.—with structural graph representations. Instead, conflicting interpretations have manifested over the last few years, that further confound practitioners and researchers alike.
+
+For instance, consider the direction, word embeddings $\rightarrow$ structural representations, where the structural equivalence between man $\rightarrow$ king and woman $\rightarrow$ queen is described as being obtained by just adding or subtracting their node embeddings (positions in the embedding space) (Arora et al., 2016; Mikolov et al., 2013). Hence, can all (positional) node embeddings provide structural relationships akin to word analogies? We provide a visual example in Appendix (Section 7) using the food web of Figure 1. In the opposite direction, structural representations $\rightarrow$ node embeddings, graph neural networks (GNNs) are often optimized to predict edges even though their structural node representations are provably incapable of performing the task. For instance, the node representations of the lynx and the orca in Figure 1 are indistinguishable due to an isomorphic equivalence between the nodes, making any edge prediction task that distinguishes the edges of lynx and orca a seemingly futile exercise (see Appendix (Section 7) for more details). Hence, are structural representations in general—and GNNs in particular—fundamentally incapable of performing link (dyadic) and multi-ary (polyadic) prediction tasks? GNNs, however, can perform node classification tasks, which is a task not associated with positional node embeddings (see Appendix (Section 7) for a concrete visual interpretation of the differences between positional node embeddings and structural representations over node classification and link prediction tasks).
+
+Confirmation bias has seemingly appeared to thwart recent efforts to bring node embeddings and structural representations into a single overarching framework. Preconceived notions of the two being fundamentally different (see Appendix (Section 7)) have been reinforced in the existing literature, arguing they belong in different applications: (Positional) Node embeddings would find applications in multi-ary relationships such as link prediction, clustering, and natural language processing and knowledge acquisition through word and entity embeddings. Structural representations would find applications in node classification, graph classification, and role discovery. A unified theory is required if we wish to eliminate these artificial boundaries and better cross-pollinate node embeddings and structural representations in novel techniques.
+
+Contributions: In this work we use invariant theory and axiomatic counterfactuals (causality) to develop a unified theoretical framework that clarifies the differences between node embeddings and
+
+
+Figure 1: A food web example showing two disconnected components - the boreal forest (Stenseth et al., 1997) and the antarctic fauna (Bates et al., 2015). The positional node embedding of the lynx and the orca can be different while their structural representation must be the same (due to the isomorphism).
+
+
+
+structural representations and emphasizes their correspondence. More specifically, (a) we show that structural representations and node embeddings have the same relationship as distributions and their samples; (b) we prove that all tasks that can be performed by node embeddings can also be performed by structural representations, and vice-versa. Moreover, (c) we introduce new guidelines for creating and using node embeddings, which we hope will replace the less-than-optimal standard operating procedures used today. Finally, (d) we show that the concepts of transductive and inductive learning, commonly used to describe relational methods, are unrelated to node embeddings and structural representations.
+
+# 2 PRELIMINARIES
+
+This section introduces some basic definitions, attempting to keep the mathematical jargon in check as much as we can, sometimes even sacrificing generality for clarity. We recommend Bloem-Reddy & Teh (2019) for a more formal description of some of the definitions in this section.
+
+Definition 1 (Graph). We consider either a directed or an undirected attributed graph, denoted by $G = (V,E,\mathbf{X},\mathbf{E})$ , where $V$ is a set of $n = |V|$ vertices, $E$ is the set of edges in $V \times V$ , with matrix $\mathbf{X} \in \mathbb{R}^{n \times k}$ , $k > 0$ and 3-mode tensor $\mathbf{E} \in \mathbb{R}^{n \times n \times k'}$ , $k' > 0$ representing the node and edge features, respectively. The edge set has an associated adjacency matrix $\mathbf{A} \in \{0,1\}^{n \times n}$ . In order to simplify notation, we will compress $\mathbf{E}$ and $\mathbf{A}$ into a single tensor $\mathbf{A} \in \mathbb{R}^{n \times n \times (k' + 1)}$ . When explicit vertex and edge features and weights are unavailable, we will consider $\mathbf{X} = \mathbf{11}^T$ and $\mathbf{A} = \mathbf{A}$ , where $\mathbf{1}$ is a $n \times 1$ vector of ones. We will abuse notation and denote the graph as $G = (\mathbf{A},\mathbf{X})$ . Without loss of generality, we number the nodes in $V = \{1,\dots,n\}$ following the same ordering as the adjacency tensor $\mathbf{A}$ and the rows in $\mathbf{X}$ . We denote by $\vec{S}$ a vector of the elements of $S \in \mathcal{P}^\star(V)$ sorted in ascending order, where $\mathcal{P}^\star(V)$ is the power set of $V$ without the empty set.
+
+One of the most important operators in our mathematical toolkit will be that of a permutation action, orbits, $\mathcal{G}$ -invariance, and $\mathcal{G}$ -equivariance:
+
+Definition 2 (Permutation action $\pi$ ). A permutation action $\pi$ is a function that acts on any vector, matrix, or tensor defined over the nodes $V$ , e.g., $(Z_{i})_{i\in V}$ , and outputs an equivalent vector, matrix, or tensor with the order of the nodes permuted. We define $\Pi_{n}$ as the set of all $n!$ such permutation actions.
+
+Definition 3 (Orbits). An orbit is the result of a group action $\Pi_n$ acting on elements of a group corresponding to bijective transformations of the space that preserve some structure of the space. The orbit of an element is the set of equivalent elements under action $\Pi_n$ , i.e., $\Pi_n(x) = \{\pi(x) \mid \pi \in \Pi_n\}$ .
+
+Definition 4 ( $\mathcal{G}$ -equivariant and $\mathcal{G}$ -invariant functions). Let $\Sigma_{n}$ be the set of all possible attributed graphs $G$ of size $n \geq 1$ . More formally, $\Sigma_{n}$ is the set of all tuples $(\mathbf{A}, \mathbf{X})$ with adjacency tensors $\mathbf{A}$ and corresponding node attributes $\mathbf{X}$ for $n$ nodes. A function $g: \Sigma_{n} \to \mathbb{R}^{n \times \cdot}$ is $\mathcal{G}$ -equivariant w.r.t. valid permutations of the nodes $V$ whenever any permutation action $\pi \in \Pi_{n}$ in the $\Sigma_{n}$ space is associated with the same permutation action of the nodes in the $\mathbb{R}^{n \times \cdot}$ space, i.e., $g(\pi(\mathbf{A}), \pi(\mathbf{X})) = \pi(g(\mathbf{A}, \mathbf{X}))$ for all $\pi \in \Pi_{n}$ . A function $g: \Sigma_{n} \to \mathbb{R}^{\cdot}$ is $\mathcal{G}$ -invariant whenever it is invariant to any permutation action $\pi \in \Pi_{n}$ in $\Sigma_{n}$ .
+
+Definition 5 (Graph orbits & graph isomorphism). Let $G = (\mathbf{A}, \mathbf{X})$ be a graph with $n$ nodes, and let $\Pi_n(G) = \{(\mathbf{A}', \mathbf{X}') : (\mathbf{A}', \mathbf{X}') = (\pi(\mathbf{A}), \pi(\mathbf{X})), \forall \pi \in \Pi_n\}$ be the set of all equivalent isomorphic graphs under the permutation action $\pi$ . Two graphs $G_1 = (\mathbf{A}_1, \mathbf{X}_1)$ and $G_2 = (\mathbf{A}_2, \mathbf{X}_2)$ are said to be isomorphic iff $\Pi_n(G_1) = \Pi_n(G_2)$ .
+
+Definition 6 (Node orbits & node isomorphism). The equivalence classes of the vertices of a graph $G$ under the action of automorphisms are called vertex orbits. If two nodes are in the same node orbit, we say that they are isomorphic.
+
+In Figure 1, the lynx and the orca are isomorphic (they belong to the same node orbit). We now generalize Definition 6 to subsets of nodes $S \in \mathcal{P}^{\star}(V)$ , where $\mathcal{P}^{\star}(V)$ is the power set of $V$ without the empty set.
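+Definition 6 can be checked by brute force on small graphs: two nodes are isomorphic iff some automorphism maps one to the other. The helper below is a toy illustration of ours (it enumerates all $n!$ permutations), not a practical algorithm.

```python
from itertools import permutations

def node_orbits(adj):
    """Brute-force node orbits of a small graph given as an adjacency
    matrix: u and v are isomorphic iff some automorphism maps u to v."""
    n = len(adj)
    # automorphisms = adjacency-preserving permutations of the nodes
    autos = [p for p in permutations(range(n))
             if all(adj[i][j] == adj[p[i]][p[j]]
                    for i in range(n) for j in range(n))]
    orbits, seen = [], set()
    for u in range(n):
        if u not in seen:
            orbit = {p[u] for p in autos}   # images of u under all automorphisms
            orbits.append(sorted(orbit))
            seen |= orbit
    return orbits
```

+On a graph of two disjoint edges, which mimics the two disconnected food-web components, all four endpoints fall in a single orbit, just as the lynx and the orca do.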
+
+Definition 7 (Vertex subset orbits and joint isomorphism). The equivalence classes of $k$ -sized subsets of vertices $S \in \mathcal{P}^{\star}(V)$ of a graph $G$ under the action of automorphisms between the subsets are called vertex subset orbits, $k \geq 2$ . If two proper subsets $S_1, S_2 \in \mathcal{P}^{\star}(V) \backslash V$ are in the same vertex subset orbit, we say they are jointly isomorphic.
+
+Next we define the relationship between structural representations and node embeddings.
+
+# 3 A UNIFYING THEORETICAL FRAMEWORK OF NODE EMBEDDINGS AND STRUCTURAL REPRESENTATIONS
+
+How are node embeddings and structural representations related? This section starts with a familiar, albeit naive, view of the differences between node embeddings and structural representations, preparing the groundwork to later broadening and rectifying these into precise model-free mathematical statements using invariant theory. This broadening is needed since model-free node embeddings need not be related to node closeness in the graph (or to lower dimensional projections for that matter), as it is impossible to have a model-free definition of closeness.
+
+A familiar interpretation of node embeddings: Node embeddings are often seen as a lower-dimensional projection of the rows and columns of the adjacency matrix $\mathbf{A}$ from $\mathbb{R}^n$ to $\mathbb{R}^d$ , $d < n$ , that preserves relative positions of the nodes in a graph (Graham & Winkler, 1985; Linial et al., 1995); for instance, in Figure 1, the lynx and the coyote would have close node embeddings, while the node embeddings of the lynx and the orca would be significantly different. Node embeddings are often seen as encoding the fact that the lynx and the coyote are part of a tightly-knit community, while the lynx and orca belong to distinct communities. The structural representation of a node, on the other hand, shows which nodes have similar roles (structural similarities) on a graph; for instance, the lynx and the orca in Figure 1 must have the same structural representation, while the lynx and the coyote likely have different structural representations. The lynx, like the orca, is a top predator in the food web while the coyote is not a top predator.
+
+The incompatibility of the familiar interpretation with the theory of structural graph representations: The above interpretation of node embeddings must be tied to a model that defines closeness. Structural graph representations are model-free. Hence, we need a model-free definition of node embedding to connect it with structural representations. Unfortunately, one cannot define closeness without a model. Hence, in the remainder of this paper, we abandon this familiar interpretation in favor of a model-free definition.
+
+Roadmap: In what follows, we restate some existing model-free definitions of structural graph representation and introduce some new ones. Then, we introduce a model-free definition of node embeddings. We will retain the terminology node embedding for historical reasons, even though our node embedding need not be an embedding (a projection into lower dimensional space).
+
+# 3.1 ON STRUCTURAL REPRESENTATIONS
+
+In what follows we use the terms link and edge interchangeably. Proofs are left to the Appendix.
+
+Definition 8 (Structural node representations). The structural representation of node $v \in V$ in a graph $G = (\mathbf{A}, \mathbf{X})$ is the $\mathcal{G}$ -invariant representation $\Gamma(v, \mathbf{A}, \mathbf{X})$ , where $\Gamma: V \times \Sigma_n \to \mathbb{R}^d$ , $d \geq 1$ , such that $\forall u \in V$ , $\Gamma(u, \mathbf{A}, \mathbf{X}) = \Gamma(\pi(u), \pi(\mathbf{A}), \pi(\mathbf{X}))$ for all permutation actions $\forall \pi \in \Pi_n$ . Moreover, for any two isomorphic nodes $u, v \in V$ , $\Gamma(u, \mathbf{A}, \mathbf{X}) = \Gamma(v, \mathbf{A}, \mathbf{X})$ .
+
+Definition 9 (Most-expressive structural node representations $\Gamma^{\star}$ ). A structural representation of a node $v\in V$ , $\Gamma^{\star}(v,\mathbf{A},\mathbf{X})$ , is most-expressive iff, $\forall u\in V$ , there exists a bijective measurable map between $\Gamma^{\star}(u,\mathbf{A},\mathbf{X})$ and the orbit of node $u$ in $G = (\mathbf{A},\mathbf{X})$ (Definition 6).
+
+Trivially, by Definitions 6 and 9, two graphs $G_{1} = (\mathbf{A}_{1},\mathbf{X}_{1})$ and $G_{2} = (\mathbf{A}_{2},\mathbf{X}_{2})$ are isomorphic (Definition 5) iff the most-expressive structural node representations $(\Gamma^{\star}(u,\mathbf{A}_1,\mathbf{X}_1))_{u\in V}$ and $(\Gamma^{\star}(v,\mathbf{A}_2,\mathbf{X}_2))_{v\in V}$ are the same up to a valid permutation $\pi \in \Pi_n$ of the nodes. In what follows $\mathcal{P}^{\star}$ is the power set excluding the empty set.
+
+We now describe the relationship between structural node representations and node isomorphism.
+
+Lemma 1. Two nodes $v, u \in V$ , have the same most-expressive structural representations $\Gamma^{\star}(v, \mathbf{A}, \mathbf{X}) = \Gamma^{\star}(u, \mathbf{A}, \mathbf{X})$ iff $u$ and $v$ are isomorphic nodes in $G = (\mathbf{A}, \mathbf{X})$ .
+
+Having described representation of nodes, we now generalize these representations to subsets of $V$ .
+
+Definition 10 (Joint structural representation $\Gamma$ ). A joint structural representation of a graph with node set $V$ is defined as $\Gamma: \mathcal{P}^{\star}(V) \times \Sigma_n \to \mathbb{R}^d$ , $d \geq 1$ . Furthermore, $\Gamma$ is $\mathcal{G}$ -invariant over all node subsets, i.e., $\forall S \in \mathcal{P}^{\star}(V)$ and $\forall (\mathbf{A}, \mathbf{X}) \in \Sigma_n$ , it must be that $\Gamma(\vec{S}, \mathbf{A}, \mathbf{X}) = \Gamma(\pi(\vec{S}), \pi(\mathbf{A}), \pi(\mathbf{X}))$ for all permutation actions $\forall \pi \in \Pi_n$ . Moreover, for any two isomorphic subsets $S, S' \in \mathcal{P}^{\star}(V)$ , $\Gamma(\vec{S}, \mathbf{A}, \mathbf{X}) = \Gamma(\vec{S}', \mathbf{A}, \mathbf{X})$ .
+
+We now mirror Definition 9 in our generalization of $\Gamma$ :
+
+Definition 11 (Most-expressive joint structural representations $\Gamma^{\star}$ ). A structural representation $\Gamma^{\star}(\vec{S},\mathbf{A},\mathbf{X})$ of a non-empty subset $S\in \mathcal{P}^{\star}(V)$ , of a graph $(\mathbf{A},\mathbf{X})\in \Sigma_{n}$ , is most-expressive iff, there exists a bijective measurable map between $\Gamma^{\star}(\vec{U},\mathbf{A},\mathbf{X})$ and the orbit of $U$ in $G$ (Definition 7), $\forall U\in \mathcal{P}^{\star}(V)$ and $\forall (\mathbf{A},\mathbf{X})\in \Sigma_{n}$ .
+
+Note, however, the failure to represent the link (lynx, coyote) in Figure 1 using the most-expressive node representations of the lynx and the coyote. A link needs to be represented by a joint representation of two nodes. For instance, we can easily verify from Definition 11 that $\Gamma^{\star}((\text{lynx}, \text{coyote}), \mathbf{A}, \mathbf{X}) \neq \Gamma^{\star}((\text{orca}, \text{coyote}), \mathbf{A}, \mathbf{X})$ , even though $\Gamma^{\star}(\text{lynx}, \mathbf{A}, \mathbf{X}) = \Gamma^{\star}(\text{orca}, \mathbf{A}, \mathbf{X})$ .
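+This distinction can be verified by brute force on a toy graph of two disjoint edges, which mimics the two food-web components: the endpoints of different edges are isomorphic as individual nodes, yet an edge pair and a non-edge pair are not jointly isomorphic (Definition 7). The helper name below is our own.

```python
from itertools import permutations

def jointly_isomorphic(adj, s1, s2):
    """Check whether node pairs s1 and s2 lie in the same vertex subset
    orbit: some automorphism of adj must map the set s1 onto the set s2."""
    n = len(adj)
    for p in permutations(range(n)):
        # keep only adjacency-preserving permutations (automorphisms)
        if all(adj[i][j] == adj[p[i]][p[j]]
               for i in range(n) for j in range(n)):
            if {p[v] for v in s1} == set(s2):
                return True
    return False

# two disjoint edges: 0-1 and 2-3 (two isomorphic "components")
two_edges = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
```

+Here nodes 0 and 2 share a node orbit (equal most-expressive node representations), yet the pairs (0, 1) and (0, 2) have different joint representations, exactly as with (lynx, coyote) versus (orca, coyote).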
+
+Next we show that joint prediction tasks only require joint structural representations. But first we need to show that any causal model defined through axiomatic counterfactuals (Galles & Pearl, 1998) can be equivalently defined through noise outsourcing (Austin, 2008), a straightforward result that we were unable to find in the literature.
+
+Lemma 2 (Causal modeling through noise outsourcing). Definition 1 of Galles & Pearl (1998) gives a causal model as a triplet
+
+$$
+M = \langle U, V ^ {\prime}, F \rangle ,
+$$
+
+where $U$ is a set of exogenous variables, $V'$ is a set of endogenous variables, and $F$ is a set of functions such that in the causal model, $v_i' = f_i(\vec{pa}_i, u)$ is the realization of the random variable $V_i' \in V'$ , with $\vec{PA}_i$ , $PA_i \subseteq V' \setminus \{V_i'\}$ , the endogenous parents of $V_i'$ as given by a directed acyclic graph. Then, there exists a pure random noise $\epsilon$ and a set of (measurable) functions $\{g_u\}_{u \in U}$ such that every $V_i' \in V'$ can be equivalently defined as $v_i' \stackrel{a.s.}{=} f_i(\vec{pa}_i, g_u(\epsilon_u))$ , where $\epsilon_u$ has joint distribution $(\epsilon_u)_{\forall u \in U} \stackrel{a.s.}{=} g'(\epsilon)$ for some Borel measurable function $g'$ and a random variable $\epsilon \sim \text{Uniform}(0,1)$ . The latter defines $M$ via noise outsourcing (Austin, 2008).
+
+The proof of Lemma 2 is given in the Appendix. Lemma 2 defines the causal model entirely via endogenous variables, deterministic functions, and a pure random noise random variable.
+
+We are now ready for our theorem showing that joint prediction tasks only require (most-expressive) joint structural representations.
+
+Theorem 1. Let $\mathcal{S} \subseteq \mathcal{P}^{\star}(V)$ be a set of non-empty subsets of the vertices $V$ . Let $Y(\mathcal{S},\mathbf{A},\mathbf{X}) = (Y(\vec{S},\mathbf{A},\mathbf{X}))_{S \in \mathcal{S}}$ be a sequence of random variables defined over the sets $S \in \mathcal{S}$ of a graph $G = (\mathbf{A},\mathbf{X})$ , that are invariant to the ordering of $\vec{S}, S \in \mathcal{S}$ , such that $Y(\vec{S_1},\mathbf{A},\mathbf{X}) \stackrel{d}{=} Y(\vec{S_2},\mathbf{A},\mathbf{X})$ for any two jointly isomorphic subsets $S_1, S_2 \in \mathcal{S}$ (Definition 7), where $\stackrel{d}{=}$ means equality in their marginal distributions. Then, there exists a measurable function $\varphi$ such that, $Y(\mathcal{S},\mathbf{A},\mathbf{X}) \stackrel{\text{a.s.}}{=} (\varphi(\Gamma^{\star}(\vec{S},\mathbf{A},\mathbf{X}),\epsilon_S))_{S \in \mathcal{S}}$ , where $\epsilon_S$ is the random noise that defines the exogenous variables of Lemma 2, with joint distribution $p((\epsilon_{S'})_{\forall S' \in \mathcal{S}})$ independent of $\mathbf{A}$ and $\mathbf{X}$ .
+
+Theorem 1 extends Theorem 12 of Bloem-Reddy & Teh (2019) in multiple ways: (a) to all subsets of nodes, $S \in \mathcal{P}^{\star}(V)$ , (b) to include causal language, and, most importantly, (c) to showing that any prediction task that can be defined over $S$ , requires only a most-expressive joint structural representation over $S$ . For instance, any task with $|S| = 2$ predicting a missing link $(u,v)$ on a graph $G = (\mathbf{A},\mathbf{X})$ , requires only the most-expressive structural representation $\Gamma^{\star}((u,v),\mathbf{A},\mathbf{X})$ . Note that, in order to predict directed edges, we must use $\Gamma^{\star}((u,v),\mathbf{A},\mathbf{X})$ to also predict the edge's direction: $\rightarrow$ , $\leftarrow$ , or $\leftrightarrow$ ; a detailed procedure showing how to predict directed edges is relegated to a future journal version of this paper. Theorem 1 also includes node tasks for $|S| = 1$ , hyperedge tasks for $2 < |S| < n$ , and graph-wide tasks for $S = V$ .
+
+Remark 1 (GNNs and link prediction). Even though structural node representations of GNNs are not able to predict edges, GNNs are often still optimized to predict edges (e.g., (Hamilton et al., 2017a; Xu et al., 2018)) in transfer learning tasks. This optimization objective guarantees that any small topological differences between two nearly-isomorphic nodes without an edge will be amplified, while differences between nodes with an edge will be minimized. Hence, the topological differences in a close-knit community will be minimized in the representation. This procedure is an interesting way to introduce homophily in structural representations and should work well for node classification tasks in homophilic networks (where node classes tend to be clustered).
+
+We now turn our attention to node embeddings and their relationship with joint representations.
+
+# 3.2 ON (POSITIONAL) NODE EMBEDDINGS
+
+Definition 12 (Node Embeddings). The node embeddings of a graph $G = (\mathbf{A}, \mathbf{X})$ are defined as joint samples of random variables $(\mathbf{Z}_i)_{i \in V} | \mathbf{A}, \mathbf{X} \sim p(\cdot | \mathbf{A}, \mathbf{X})$ , $\mathbf{Z}_i \in \mathbb{R}^d$ , $d \geq 1$ , where $p(\cdot | \mathbf{A}, \mathbf{X})$ is a $\mathcal{G}$ -equivariant probability distribution on $\mathbf{A}$ and $\mathbf{X}$ , that is, $\pi(p(\cdot | \mathbf{A}, \mathbf{X})) = p(\cdot | \pi(\mathbf{A}), \pi(\mathbf{X}))$ for any permutation $\pi \in \Pi_n$ .
+
+Essentially, Definition 12 says that the probability distribution $p(\mathbf{Z}|\mathbf{A},\mathbf{X})$ of a node embedding $\mathbf{Z}$ must be $\mathcal{G}$ -equivariant on $\mathbf{A}$ and $\mathbf{X}$ . This is the only property we require to define a node embedding. Next, we show that the node embeddings given by Definition 12 cover a wide range of embedding methods in the literature.
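The equivariance requirement of Definition 12 can be sanity-checked on its simplest instance, a deterministic (Dirac-delta) embedding. The sketch below uses node degrees as an illustrative stand-in for such an embedding; the graph and permutation are arbitrary examples, not from the paper:

```python
import numpy as np

def degree_embedding(A):
    # A deterministic node "embedding" Z_i = deg(i): the Dirac-delta case of
    # Definition 12, since the map A -> Z is permutation-equivariant.
    return A.sum(axis=1)

# illustrative random undirected graph on 6 nodes
rng = np.random.default_rng(0)
A = np.triu((rng.random((6, 6)) < 0.4).astype(float), k=1)
A = A + A.T

pi = rng.permutation(6)
P = np.eye(6)[pi]          # permutation matrix: (P @ A @ P.T)[i, j] = A[pi[i], pi[j]]

# Definition 12 for a Dirac delta reduces to: permuting the embedding rows
# equals embedding the permuted graph
assert np.allclose(degree_embedding(A)[pi], degree_embedding(P @ A @ P.T))
```

For a stochastic embedding the same identity must hold in distribution rather than pointwise.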
+
+Corollary 1. The node embeddings in Definition 12 encompass embeddings given by matrix and tensor factorization methods — such as Singular Value Decomposition (SVD), Non-negative Matrix Factorization (NMF), implicit matrix factorization (a.k.a. word2vec), latent embeddings given by Bayesian graph models — such as Probabilistic Matrix Factorizations (PMFs) and variants —, variational autoencoder methods and graph neural networks that use random lighthouses to extract node embeddings.
+
+The proof of Corollary 1 is in the Appendix, along with the references of each of the methods mentioned in the corollary. The output of some of the methods described in Corollary 1 is deterministic, and for those, the probability density $p(Z|\mathbf{A},\mathbf{X})$ is a Dirac delta. In practice, however, even deterministic methods use algorithms whose outputs depend on randomized initial conditions, which will also satisfy Corollary 1.
+
+We now show that permutation equivariance implies two isomorphic nodes (or two subsets of nodes) must have the same marginal distributions over $Z$ :
+
+Lemma 3. The permutation equivariance of $p$ in Definition 12 implies that, if two proper subsets of nodes $S_1, S_2 \in \mathcal{P}^\star(V) \setminus \{V\}$ are isomorphic, then their marginal node embedding distributions must be the same up to a permutation, i.e., $p((\mathbf{Z}_i)_{i \in S_1} | \mathbf{A}, \mathbf{X}) = \pi(p((\mathbf{Z}_j)_{j \in S_2} | \mathbf{A}, \mathbf{X}))$ for some appropriate permutation $\pi \in \Pi_n$ .
+
+Hence, the critical difference between the structural node representation vector $(\Gamma(v, \mathbf{A}, \mathbf{X}))_{v \in V}$ in Definition 8 and node embeddings $\mathbf{Z}$ in Definition 12 is that the vector $(\Gamma(v, \mathbf{A}, \mathbf{X}))_{v \in V}$ must be $\mathcal{G}$ -equivariant while $\mathbf{Z}$ need not be—even though $\mathbf{Z}$ 's distribution must be $\mathcal{G}$ -equivariant. This seemingly trivial difference has tremendous consequences, which we explore in the remainder of this section.
+
+Next, we show that node embeddings $Z$ cannot have any extra information about $G$ that is not already contained in a most-expressive structural representation $\Gamma^{\star}$ .
+
+Theorem 2 (The statistical equivalence between node embeddings and structural representations). Let $Y(\mathcal{S}, \mathbf{A}, \mathbf{X}) = (Y(\vec{S}, \mathbf{A}, \mathbf{X}))_{S \in \mathcal{S}}$ be as in Theorem 1. Consider a graph $G = (\mathbf{A}, \mathbf{X}) \in \Sigma_n$ , $n \geq 2$ . Let $\Gamma^{\star}(\vec{S}, \mathbf{A}, \mathbf{X})$ be a most-expressive structural representation of nodes $S \in \mathcal{P}^{\star}(V)$ in $G$ . Then,
+
+$$
+Y(\vec{S}, \mathbf{A}, \mathbf{X}) \perp_{\Gamma^{\star}(\vec{S}, \mathbf{A}, \mathbf{X})} \mathbf{Z} \mid \mathbf{A}, \mathbf{X}, \quad \forall S \in \mathcal{S},
+$$
+
+for any node embedding matrix $\mathbf{Z}$ that satisfies Definition 12, where $A \perp_{B} C$ means $A$ is independent of $C$ given $B$ . Finally, $\forall (\mathbf{A}, \mathbf{X}) \in \Sigma_{n}$ , there exists a most-expressive node embedding $\mathbf{Z}^{\star}|\mathbf{A}, \mathbf{X}$ such that,
+
+$$
+\Gamma^{\star}(\vec{S}, \mathbf{A}, \mathbf{X}) = \mathbb{E}_{\mathbf{Z}^{\star}}\left[ f^{(|S|)}\left((\mathbf{Z}^{\star}_{v})_{v \in S}\right) \mid \mathbf{A}, \mathbf{X} \right], \quad \forall S \in \mathcal{S},
+$$
+
+for some appropriate collection of functions $\{f^{(k)}(\cdot)\}_{k = 1,\dots,n}$ .
+
+The proof of Theorem 2 is given in the Appendix. Note that the most-expressive embedding $\mathbf{Z}^{\star}|\mathbf{A},\mathbf{X}$ extends the insight used to make GNNs more expressive in Murphy et al. (2019) to a more general procedure.
+
+Theorem 2 implies that, for any graph prediction task, node embeddings carry no information beyond that of structural representations. A less attentive reader may think this creates a paradox, since one cannot predict a property $Y((lynx,coyote),\mathbf{A}_{\mathrm{food~web}},\mathbf{X}_{\mathrm{food~web}})$ in Figure 1 from structural node representations, since $\Gamma (lynx,\mathbf{A},\mathbf{X}) = \Gamma (orca,\mathbf{A},\mathbf{X})$ . The resolution of the paradox is to note that Theorem 2 describes the prediction of a link through a pairwise structural representation $\Gamma ((lynx,coyote),\mathbf{A}_{\mathrm{food~web}},\mathbf{X}_{\mathrm{food~web}})$ , and we may not be able to do the same task with structural node representations alone. An interesting question for future work is how well we can learn distributions (structural representations) from samples (node embeddings), extending (Kamath et al., 2015) to graph representations.
+
+Other equally important consequences of Theorem 2 are: (a) any sampling approach obtaining node embeddings $\mathbf{Z}$ is valid as long as the distribution is $\mathcal{G}$ -equivariant (Definition 12), noting that isomorphic nodes must have the same marginal distributions (per Lemma 3). (b) Interestingly, convex optimization methods for matrix factorization can be seen as variance-reduction techniques, with no intrinsic value beyond reducing variance. (c) Methods that give unique node embeddings—where the embeddings of any two isomorphic nodes differ—are provably incorrect when used to predict graph relationships, since they are permutation-sensitive.
+
+Remark 2 (Some GNN methods give node embeddings, not structural representations). The random edges added via random walks by GraphSAGE (Hamilton et al., 2017a) and GIN (Xu et al., 2018) make these methods node embeddings rather than structural node representations, according to Definition 12. To transform them back into structural node representations, one must average over all such random walks.
+
+The following corollaries describe other consequences of Theorem 2:
+
+Corollary 2. The link prediction task between any two nodes $u, v \in V$ depends only on the most-expressive tuple representation $\Gamma^{\star}((u, v), \mathbf{A}, \mathbf{X})$ . Moreover, $\Gamma^{\star}((u, v), \mathbf{A}, \mathbf{X})$ always exists for any graph $(\mathbf{A}, \mathbf{X})$ and nodes $(u, v)$ . Finally, given most-expressive node embeddings $Z^{\star}$ , there exists a function $f$ such that $\Gamma^{\star}((u, v), \mathbf{A}, \mathbf{X}) = \mathbb{E}_{Z^{\star}}[f(Z_u^{\star}, Z_v^{\star})]$ , $\forall u, v$ .
+
+A generalization of Corollary 2 is also possible, where Theorem 2 is used to allow us to create joint representations from simpler node embedding sampling methods.
+
+Corollary 3. Sample $\mathbf{Z}$ according to Definition 12. Then, we can learn a $k$ -node structural representation of a subset of $k$ nodes $S \in \mathcal{P}^{\star}(V)$ , $|S| = k$ , simply by learning a function $f^{(k)}$ whose average $\Gamma(\vec{S}, \mathbf{A}, \mathbf{X}) = \mathbb{E}[f^{(k)}((Z_v)_{v \in S})]$ can be used to predict $Y(\vec{S}, \mathbf{A}, \mathbf{X})$ .
+
+The proof of Corollary 3 is in the Appendix. Finally, we show that the concepts of transductive and inductive learning are unrelated to the notions of node embeddings and structural representations.
+
+Corollary 4. Transductive and inductive learning are unrelated to the concepts of node embeddings and structural representations.
+
+Corollary 4 clears up a confusion that, we believe, arises because traditional applications of node embeddings use a single Monte Carlo sample of $Z|\mathbf{A}, \mathbf{X}$ to produce a structural representation (e.g., (Mikolov et al., 2013)). Naturally, a classifier learned with such a poor structural representation may fail to generalize over the test data, and will be deemed transductive.
+
+Corollary 5. Merging a node embeddings sampling scheme with GNNs can increase the structural representation power of GNNs.
+
+Corollary 5 is a direct consequence of Theorem 2, with Murphy et al. (2019) showing RP-GNN as a concrete method to do so.
+
+# 4 RESULTS
+
+This section focuses on applying the lessons learned in Section 3 in four tasks, divided into two common goals. The goal of the first three tasks is to show that, as described in Theorem 2, node embeddings can be used to create expressive structural embeddings of nodes, tuples, and triads. These
+
+representations are then subsequently used to make predictions on downstream tasks with varied node set sizes. The tasks also showcase the added value of using multiple (Monte Carlo) samples of node embeddings to estimate structural representations, both during training and testing. Moreover, showcasing Theorem 1 and the inability of node representations to capture joint structural representations, these tasks show that structural node representations are useless in prediction tasks over more than one node, such as links and triads. The goal of the fourth task is to showcase how multiple Monte Carlo samples of node embeddings are required to observe the fundamental relationship between structural representations and node embeddings predicted by Theorem 2.
+
+An important note: Our proposed theoretical framework is not limited to the way we generate node embeddings. For example, our theoretical framework can use SVD in an inductive setting, where we train a classifier on one graph and test on a different graph, which was previously thought impossible with SVD. SVD within our theoretical framework is denoted MC-SVD, to emphasize the importance of Monte Carlo sampling in building better structural representations. Alternatively, more expressive node embeddings can be obtained using Colliding Graph Neural Networks (CGNN), as we show in the Appendix (Sections 10 and 11).
+
+# 4.1 QUANTITATIVE RESULTS
+
+In what follows, we evaluate structural representations estimated from five node embedding techniques, namely GIN (Xu et al., 2018), RP-GIN (Murphy et al., 2019), 1-2-3 GNN (Morris et al., 2019), MC-SVD and CGNN. We classify GIN, RP-GIN and 1-2-3 GNN as node embedding techniques, as they employ the unsupervised learning procedure of Hamilton et al. (2017a). These were chosen because of their potential extra link and triad representation power over traditional structural representation GNNs. All node embedding methods are evaluated by their effect on estimating good structural representations for downstream task accuracy. We partition $G = (\mathbf{A},\mathbf{X})$ into three non-overlapping induced subgraphs, namely $G_{\mathrm{train}} = (\mathbf{A}_{\mathrm{train}},\mathbf{X}_{\mathrm{train}})$ , $G_{\mathrm{val}} = (\mathbf{A}_{\mathrm{val}},\mathbf{X}_{\mathrm{val}})$ and $G_{\mathrm{test}} = (\mathbf{A}_{\mathrm{test}},\mathbf{X}_{\mathrm{test}})$ , which we use for training, validation and testing, respectively. In learning all five node embedding techniques, we only make use of the graphs $G_{\mathrm{train}}$ and $G_{\mathrm{val}}$ . None of the models has seen the test graph $G_{\mathrm{test}}$ before test time—i.e., all our node embedding methods, used in the framework of Theorem 2, behave like inductive methods.
+
+Monte Carlo joint representations during an unsupervised learning phase: A key component of our optimization is learning joint representations from node embeddings—as per Theorem 2. For this, at each gradient step (in practice, at each epoch), we draw a Monte Carlo sample of the node embeddings $Z|\mathbf{A}, \mathbf{X}$ . This procedure optimizes a proper upper bound on the empirical loss if the loss is the negative log-likelihood, cross-entropy, or a square loss; the proof is immediate by Jensen's inequality. For GIN, RP-GIN and 1-2-3 GNN, we add random edges to the graph following a random walk at each epoch (Hamilton et al., 2017a). For the MC-SVD procedure, we use the left singular vector matrix obtained with: (1) a random seed, (2) a random input permutation of the adjacency matrix, and (3) a single optimization step, rather than running SVD until it converges. We also report results for MC-SVD†, which is the same procedure except that SVD is run until convergence—noting that the latter is likely to give deterministic results in large real-world graphs.
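An illustrative sketch of one such MC-SVD embedding sample is shown below; it models only the random input permutation (the paper's procedure adds randomness from the seed and from early stopping), and the helper name is ours:

```python
import numpy as np

def mc_svd_sample(A, d, rng):
    # One Monte Carlo sample of MC-SVD node embeddings (sketch): run SVD on a
    # randomly relabeled adjacency matrix, keep the first d left singular
    # vectors, and map the rows back to the original node ids.
    n = A.shape[0]
    perm = rng.permutation(n)
    U, _, _ = np.linalg.svd(A[np.ix_(perm, perm)])
    Z = np.empty((n, d))
    Z[perm] = U[:, :d]   # row i of U embeds original node perm[i]
    return Z
```

Because the distribution of the output is invariant to how nodes were labeled at the input, such a sampler fits the requirement of Definition 12.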
+
+Monte Carlo joint representations during a supervised learning phase: During the supervised phase, we first estimate a structural joint representation $\hat{\Gamma}(\vec{S},\mathbf{A},\mathbf{X})$ as the average of $m \in \{1,5,20\}$ Monte Carlo samples of a permutation-invariant function (Murphy et al., 2018; Zaheer et al., 2017) (sum-pooling followed by an MLP) applied to a sampled node embedding $(Z_v)_{v \in S} | \mathbf{A}, \mathbf{X}$ . Then, using $\hat{\Gamma}(\vec{S},\mathbf{A},\mathbf{X})$ , we predict the corresponding target variable $Y(\vec{S},\mathbf{A},\mathbf{X})$ of each task using an MLP. The node sets of our tasks, $S \subseteq V$ , have sizes $|S| \in \{1,2,3\}$ , corresponding to node classification, link prediction, and triad prediction tasks, respectively.
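The averaging step above can be sketched as follows. The sampler, weights, and dimensions are illustrative stand-ins, not the paper's trained models:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 10, 4, 8                       # nodes, embedding dim, hidden width

# hypothetical frozen MLP weights; in practice these are trained end-to-end
W, b = rng.normal(size=(d, h)), np.zeros(h)

def f_k(Z_S):
    # permutation-invariant readout: sum-pooling followed by a one-layer MLP
    pooled = Z_S.sum(axis=0)             # invariant to the ordering of S
    return np.maximum(pooled @ W + b, 0.0)

def gamma_hat(S, sample_Z, m):
    # hat-Gamma(S, A, X): average of f_k over m Monte Carlo embedding samples
    return np.mean([f_k(sample_Z()[list(S)]) for _ in range(m)], axis=0)

# toy stand-in for a Definition-12 sampler of Z | A, X
sample_Z = lambda: rng.normal(size=(n, d))
rep = gamma_hat((2, 7), sample_Z, m=20)  # estimated link representation
```

The estimate $\hat{\Gamma}$ then feeds the task MLP that predicts $Y$; because `f_k` is permutation-invariant, the estimate does not depend on the ordering of $S$.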
+
+Datasets: We consider four graph datasets used by Hamilton et al. (2017a), namely Cora, Citeseer, Pubmed (Namata et al., 2012; Sen et al., 2008) and PPI (Zitnik & Leskovec, 2017). Cora, Citeseer and Pubmed are citation networks, where vertices represent papers, edges represent citations, and vertex features are bag-of-words representations of the document text. The PPI (protein-protein interaction) dataset is a collection of multiple graphs representing human tissues, where vertices represent proteins, edges represent interactions between them, and node features include genetic and immunological information. Train, validation and test splits are used as proposed by Yang et al. (2016) (see Table 3 in the Appendix). Further dataset details can be found in the Appendix.
+
+Table 1: Micro F1 score on three distinct tasks, averaged over 12 runs with standard deviations in parentheses. The number in parentheses beside a model name indicates the number of Monte Carlo samples used in the estimation of the structural representation. MC-SVD†(1) denotes the SVD procedure run until convergence with one Monte Carlo sample for the representation. Bold values mark the maximum empirical average; multiple values are bold when their standard deviations overlap with the maximum. Results for Citeseer are provided in the Appendix in Table 2.
+
| | Node Classification | | | Link Prediction | | | Triad Prediction | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Cora | Pubmed | PPI | Cora | Pubmed | PPI | Cora | Pubmed | PPI |
| Random | 0.143 | 0.333 | $0.5^{121}$ | 0.500 | 0.500 | 0.500 | 0.250 | 0.250 | 0.250 |
| GIN(1) | 0.646(0.021) | 0.878(0.006) | 0.533(0.003) | 0.526(0.029) | 0.513(0.048) | 0.604(0.018) | 0.280(0.010) | 0.430(0.019) | 0.400(0.006) |
| GIN(5) | 0.676(0.031) | 0.880(0.003) | 0.535(0.004) | 0.491(0.019) | 0.517(0.028) | 0.609(0.012) | 0.284(0.017) | 0.422(0.024) | 0.397(0.004) |
| GIN(20) | 0.678(0.024) | 0.880(0.002) | 0.536(0.003) | 0.514(0.026) | 0.512(0.042) | 0.603(0.010) | 0.281(0.010) | 0.422(0.028) | 0.399(0.004) |
| RP-GIN(1) | 0.655(0.023) | 0.879(0.002) | 0.534(0.005) | 0.506(0.016) | 0.616(0.048) | 0.605(0.011) | 0.283(0.013) | 0.423(0.024) | 0.400(0.005) |
| RP-GIN(5) | 0.681(0.022) | 0.881(0.004) | 0.534(0.004) | 0.498(0.016) | 0.637(0.038) | 0.612(0.006) | 0.285(0.025) | 0.429(0.024) | 0.399(0.009) |
| RP-GIN(20) | 0.675(0.032) | 0.879(0.005) | 0.533(0.003) | 0.518(0.017) | 0.619(0.032) | 0.603(0.007) | 0.279(0.011) | 0.418(0.011) | 0.393(0.003) |
| 1-2-3 GNN(1) | 0.319(0.017) | 0.412(0.005) | 0.403(0.003) | 0.501(0.007) | 0.495(0.018) | 0.502(0.005) | 0.280(0.010) | 0.416(0.020) | 0.250(0.003) |
| 1-2-3 GNN(5) | 0.321(0.008) | 0.395(0.065) | 0.405(0.001) | 0.501(0.018) | 0.500(0.002) | 0.501(0.003) | 0.285(0.015) | 0.418(0.029) | 0.251(0.005) |
| 1-2-3 GNN(20) | 0.324(0.010) | 0.462(0.113) | 0.401(0.007) | 0.501(0.007) | 0.499(0.002) | 0.501(0.008) | 0.285(0.014) | 0.419(0.026) | 0.254(0.008) |
| MC-SVD†(1) | 0.665(0.014) | 0.810(0.009) | 0.523(0.005) | 0.588(0.029) | 0.807(0.024) | 0.755(0.010) | 0.336(0.038) | 0.515(0.077) | 0.532(0.010) |
| MC-SVD(1) | 0.667(0.017) | 0.825(0.007) | 0.521(0.006) | 0.583(0.020) | 0.818(0.032) | 0.755(0.008) | 0.304(0.034) | 0.518(0.065) | 0.529(0.006) |
| MC-SVD(5) | 0.669(0.013) | 0.842(0.015) | 0.556(0.009) | 0.572(0.019) | 0.848(0.038) | 0.754(0.006) | 0.306(0.037) | 0.567(0.061) | 0.544(0.008) |
| MC-SVD(20) | 0.672(0.013) | 0.855(0.010) | 0.591(0.009) | 0.580(0.021) | 0.868(0.029) | 0.762(0.010) | 0.300(0.033) | 0.546(0.029) | 0.550(0.007) |
| CGNN(1) | 0.468(0.026) | 0.686(0.020) | 0.545(0.010) | 0.682(0.026) | 0.587(0.027) | 0.661(0.015) | 0.352(0.028) | 0.404(0.014) | 0.414(0.009) |
| CGNN(5) | 0.641(0.022) | 0.808(0.008) | 0.637(0.014) | 0.707(0.027) | 0.585(0.037) | 0.704(0.012) | 0.414(0.045) | 0.417(0.018) | 0.463(0.026) |
| CGNN(20) | 0.726(0.024) | 0.831(0.010) | 0.707(0.015) | 0.712(0.041) | 0.581(0.039) | 0.738(0.011) | 0.405(0.034) | 0.419(0.017) | 0.498(0.021) |
+
+Node classification task: This task predicts node classes for each of the four datasets. For this task, structural node representations are enough. The structural node representation is used to classify nodes into different classes using an MLP, whose weights are trained in a supervised manner using the same splits as described above. In Cora, Citeseer and Pubmed, each vertex belongs to a single class, whereas in the PPI dataset, nodes may belong to multiple classes.
+
+Link prediction task: Here, we predict a small fraction of edges and non-edges in the test graph, as well as identify all false edges and non-edges (introduced as a corruption of the original graph) between different pairs of nodes in the graph. Specifically, we use joint tuple representations $\Gamma((u,v),\mathbf{A},\mathbf{X})$ , for $u,v \in V$ , as prescribed by Theorem 2. Since the datasets are sparse, and a trivial predictor that always outputs 'non-edge' would achieve very high accuracy, we balance the train, validation and test splits to contain an equal number of edges and non-edges.
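A minimal sketch of such a balanced edge/non-edge set is below; uniform non-edge sampling and the helper name are our assumptions, since the paper does not specify the sampler:

```python
import numpy as np

def balanced_link_set(A, rng):
    # Collect all edges and an equal number of sampled non-edges, so a
    # trivial "non-edge" predictor can no longer reach high accuracy.
    n = A.shape[0]
    iu = np.triu_indices(n, k=1)        # all node pairs (u < v)
    is_edge = A[iu] > 0
    edges = list(zip(iu[0][is_edge], iu[1][is_edge]))
    non_edges = list(zip(iu[0][~is_edge], iu[1][~is_edge]))
    pick = rng.choice(len(non_edges), size=len(edges), replace=False)
    pairs = edges + [non_edges[i] for i in pick]
    labels = [1] * len(edges) + [0] * len(edges)
    return pairs, labels
```

Each pair $(u,v)$ in the resulting set is then scored through the joint representation $\Gamma((u,v),\mathbf{A},\mathbf{X})$.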
+
+Triad prediction task: This task involves the prediction of triadic interactions, as well as the identification of possible fake interactions between the three nodes under consideration. In this case, we use joint triadic representations $\Gamma((u,v,h),\mathbf{A},\mathbf{X})$ , for $u,v,h\in V$ , as prescribed by Theorem 2. Here, we ensure that edge corruptions are dependent. We treat the graphs as undirected, in accordance with previous literature, and predict the number of true (uncorrupted) edges between the three nodes. Again, to handle the sparse nature of the graphs, we use balanced datasets for training, validation, and testing.
+
+In Table 1 we present Micro-F1 scores for all models over the three tasks. First, we note that more Monte Carlo samples at test time tend to increase test accuracy. In node classification tasks, structural node representations from CGNN node embeddings significantly outperform other methods in two of the three datasets (the harder tasks). In link prediction tasks, the low (close to random) accuracy of GNN-based methods showcases how little extra power the GIN and RP-GIN sampling schemes add over structural node representations, which cannot predict links. Surprisingly, in triad prediction, the accuracy of GNN-based methods is well above random in some datasets, but still far from that of other node embedding methods. In link and triad prediction tasks, MC-SVD and CGNN share the lead, with MC-SVD winning on Pubmed and PPI, and CGNN being significantly more accurate on Cora. Although the 1-2-3 GNN is based on the two-dimensional Weisfeiler-Lehman (2-WL) algorithm (Fürer, 2017), which provides tuple representations that could be exploited for link prediction, it is an approximation primarily designed for graph classification tasks. Unfortunately, the 1-2-3 GNN performs quite poorly on all our tasks (node classification, link and triad prediction), indicating the need for a task-specific approximation of 2-WL GNNs. Results for Citeseer, in the Appendix, are similar.
+
+# 4.2 QUALITATIVE RESULTS
+
+We now investigate the transformation of node embeddings into node and link structural representations. Theorem 2 shows that the average of a function over Monte Carlo samples of node embeddings yields node and link structural representations. In this experiment, we empirically test Theorem 2 by creating structural representations from the node embedding random matrix $\mathbf{Z}$ , defined as the left singular vector matrix obtained through SVD (run until convergence), with the sources of randomness being a random permutation of the adjacency matrix given as input to the SVD method and the random seed it uses. Consider $m$ such Monte Carlo samples of embedding matrices, $\mathcal{Z}^{(m)} = \{\mathbf{Z}^{(i)}\}_{i=1}^{m}$ .
+
+
+Figure 2: Structural representations for nodes and links using multiple samples obtained with MC-SVD on the disconnected food web graph shown in Figure 1. (a) Difference in average structural node representations. (b) Average structural link representations.
+
+Structural node representations from node embeddings: According to Theorem 2, the average $\mathbb{E}[\mathbf{Z}_{v,\cdot}|\mathbf{A}]$ is a valid structural representation of node $v\in V$ in the adjacency matrix $\mathbf{A}$ of Figure 1. To test this empirically, we consider the unbiased estimator $\hat{\mu} (v,\mathcal{Z}^{(m)}) = \frac{1}{m}\sum_{i = 1}^{m}\mathbf{Z}_{v,\cdot}^{(i)}$ , $v\in V$ , where $\lim_{m\to \infty}\hat{\mu} (v,\mathcal{Z}^{(m)})\stackrel {a.s.}{=}\mathbb{E}[\mathbf{Z}_{v,\cdot}|\mathbf{A}]$ . Figure 2a shows the Euclidean distance between the empirical structural representations $\hat{\mu} (\mathrm{orca},\mathcal{Z}^{(m)})$ and $\hat{\mu} (\mathrm{lynx},\mathcal{Z}^{(m)})$ as a function of $m\in [1,200]$ . As expected, because these two nodes are isomorphic, $\| \hat{\mu} (\mathrm{orca},\mathcal{Z}^{(m)}) - \hat{\mu} (\mathrm{lynx},\mathcal{Z}^{(m)})\| \to 0$ as $m$ grows, with $m = 100$ giving reasonably accurate results.
+
+Structural link representations from node embeddings: According to Theorem 2, the average $\mathbb{E}[f^{(2)}(\mathbf{Z}_{u,\cdot},\mathbf{Z}_{v,\cdot})|\mathbf{A}]$ of a function $f^{(2)}$ is a valid structural representation of a link with nodes $u,v\in V$ in the adjacency matrix $\mathbf{A}$ of Figure 1. As an example, we use $f^{(2)}(a,b) = \| a - b\|$ and define the unbiased estimator $\hat{\mu} (u,v,\mathcal{Z}^{(m)}) = \frac{1}{m}\sum_{i = 1}^{m}\| \mathbf{Z}_{u,\cdot}^{(i)} - \mathbf{Z}_{v,\cdot}^{(i)}\|$ , $\forall u,v\in V$ , where $\lim_{m\to \infty}\hat{\mu} (u,v,\mathcal{Z}^{(m)})\stackrel {a.s.}{=}\mathbb{E}[\| \mathbf{Z}_{u,\cdot} - \mathbf{Z}_{v,\cdot}\| \,|\,\mathbf{A}]$ . Figure 2b shows the impact of increasing the number of Monte Carlo samples $m$ on the empirical structural representation of links. We observe that although the empirical node representations of the orca and the lynx converge to the same value, $\lim_{m\to \infty}\hat{\mu} (\mathrm{orca},\mathcal{Z}^{(m)}) = \lim_{m\to \infty}\hat{\mu} (\mathrm{lynx},\mathcal{Z}^{(m)})$ , their empirical joint representations with the coyote converge to different values, $\lim_{m\to \infty}\hat{\mu} (\mathrm{lynx},\mathrm{coyote},\mathcal{Z}^{(m)})\neq \lim_{m\to \infty}\hat{\mu} (\mathrm{orca},\mathrm{coyote},\mathcal{Z}^{(m)})$ , as predicted by Theorem 2. Also note a similar (but weaker) trend for $\hat{\mu} (\mathrm{orca},\mathrm{lynx},\mathcal{Z}^{(m)})$ versus $\hat{\mu} (\mathrm{orca},\mathrm{coyote},\mathcal{Z}^{(m)})$ , showing these tuples to be structurally different.
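The two estimators above can be sketched on a toy graph with two isomorphic components (a stand-in for the food web graph; the sampler models only the random input permutation, and all names are illustrative):

```python
import numpy as np

def mc_svd_sample(A, d, rng):
    # one SVD embedding sample under a random node relabeling (sketch)
    n = A.shape[0]
    perm = rng.permutation(n)
    U, _, _ = np.linalg.svd(A[np.ix_(perm, perm)])
    Z = np.empty((n, d))
    Z[perm] = U[:, :d]
    return Z

def mu_node(v, samples):
    # hat-mu(v, Z^(m)): Monte Carlo estimate of E[Z_v | A]
    return np.mean([Z[v] for Z in samples], axis=0)

def mu_link(u, v, samples):
    # hat-mu(u, v, Z^(m)): MC estimate of E[f2(Z_u, Z_v) | A], f2(a,b) = ||a - b||
    return float(np.mean([np.linalg.norm(Z[u] - Z[v]) for Z in samples]))

# two disjoint 3-node paths: node 0 (component 1) is isomorphic to node 3 (component 2)
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (3, 4), (4, 5)]:
    A[u, v] = A[v, u] = 1.0
rng = np.random.default_rng(0)
samples = [mc_svd_sample(A, 3, rng) for _ in range(200)]
```

Under an exactly $\mathcal{G}$-equivariant sampler, `mu_node(0, samples)` and `mu_node(3, samples)` converge to the same value (the two endpoints are isomorphic), while `mu_link(0, 1, samples)` and `mu_link(0, 4, samples)` may converge to different values, since a within-component and a cross-component pair are not jointly isomorphic.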
+
+# 5 RELATED WORK
+
+Node Embeddings vs Structural Representations: Prior works have categorized themselves either as node embedding methods or as methods that learn structural representations. This artificial separation led to little contemplation of the relation between the two, restricting each approach to a certain subset of downstream tasks on graphs. Node embeddings were arguably first defined in 1904, through Spearman's common factors. Ever since, there has never been a universal definition of node embedding: node embeddings were simply the product of a particular method. This literature features a myriad of methods, e.g., matrix factorization (Belkin & Niyogi, 2002; Cao et al., 2015; Ahmed et al., 2013; Ou et al., 2016), implicit matrix factorization (Mikolov et al., 2013; Arora et al., 2009; Chen et al., 2012; Perozzi et al., 2014; Grover & Leskovec, 2016), Bayesian factor models (Mnih & Salakhutdinov, 2008; Gopalan et al., 2014a), and some types of neural networks (You et al., 2019; Liang et al., 2018; Tang et al., 2019; Grover et al., 2019).
+
+Arguably, the most common interpretation of node embeddings borrows from definitions of graph (node) embeddings in metric spaces: a measure of relative node closeness (Abraham et al., 2006; Bourgain, 1985; Candès & Recht, 2009; Graham & Winkler, 1985; Kleinberg, 2007; Linial et al., 1995; Rabinovich & Raz, 1998; Recht et al., 2010; Shaw et al., 2011). Even in non-metric methods, such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014), the embeddings have properties similar to those of metric spaces (Nematzadeh et al., 2017). Note that the definition of 'close' varies from method to method, i.e., it is model-dependent. Still, this interpretation of closeness is the reason why their downstream tasks are often link prediction and clustering. However, once the literature started defining relative node closeness with respect to structural neighborhood similarities (e.g., Henderson et al. (2012); Ribeiro et al. (2017); Donnat et al. (2018)), node embeddings and structural representations became increasingly entangled.
+
+Structural representations have an increasing body of literature focused on node and whole-graph classification tasks. Theoretically, these works abandon metric spaces in favor of a group-theoretic description of graphs (Bloem-Reddy & Teh, 2019; Chen et al., 2019; Kondor & Trivedi, 2018; Maron et al., 2019; Murphy et al., 2019), with connections to finite exchangeability and prior work on multilayer perceptrons (Wood & Shawe-Taylor, 1996). Graph neural networks (GNNs) (e.g., Duvenaud et al., 2015; Gilmer et al., 2017; Kipf & Welling, 2016a; Hamilton et al., 2017a; Xu et al., 2018; Scarselli et al., 2008, among others) exploit this approach in tasks such as node and whole-graph classification. Morris et al. (2019) propose a higher-order Weisfeiler-Lehman GNN (WL- $k$ -GNN), which is shown to achieve better accuracy in graph classification tasks than traditional (WL-1) GNNs. Unfortunately, Morris et al. (2019) focus only on graph-wide tasks, missing the fact that a WL-2 GNN should also be able to perform link prediction tasks (Theorem 1), unlike WL-1 GNNs. More recently, graph neural networks have also been employed for relational reasoning and matrix completion tasks (Schlichtkrull et al., 2018; Zhang & Chen, 2018; Battaglia et al., 2018; Berg et al., 2017; Monti et al., 2017). However, these GNNs, in general, learn node embeddings rather than structural node representations, which are then exploited for link prediction. GNN-like architectures have been used to simulate dynamic programming algorithms (Xu et al., 2019), which is unrelated to graphs and outside the scope of this work.
+
+To the best of our knowledge, our work is the first to provide the theoretical foundations connecting node embeddings and structural representations. A few recent works have classified node embedding and graph representation methods arguing them to be fundamentally different (e.g., Hamilton et al. (2017b); Rossi et al. (2019); Wu et al. (2019); Zhou et al. (2018)). Rather, our work shows that these are actually equivalent for downstream classification tasks, with the difference being that one is a Monte Carlo method (embedding) and the other one is deterministic (representation).
+
+Inductive vs Transductive Approaches: Another common misconception our work uncovers is that of qualifying node embedding methods as transductive learning and graph representation ones as inductive (e.g., Hamilton et al. (2017a); Yang et al. (2016)). In their original definitions, transductive learning (Gammerman et al., 1998; Zhu et al., 2003; Zhou et al., 2004) and inductive learning (Michalski, 1983; Belkin et al., 2006) are to be distinguished only on the basis of the generalizability of the learned model to unobserved instances. However, this has commonly been misinterpreted as node embedding methods being transductive and structural representations being inductive. Models that depend solely on the input feature vectors and the immediate neighborhood structure have been classified as inductive, whereas methods that rely on positional node embeddings to classify relationships in a graph have been incorrectly qualified as transductive.
+
+The confusion seems to be rooted in researchers trying to use a single sample of a node embedding method and failing to generalize. Corollary 4 resolves this confusion by showing that transductive and inductive learning are fundamentally unrelated to positional node embeddings and graph representations. Both node embeddings and structural representations can be inductive if they can detect interesting conceptual patterns or reveal structure in the data. The theory provided by our work strongly adheres to this definition. Our work additionally provides the theoretical foundation behind the performance gains seen by Epasto & Perozzi (2019) and Goyal et al. (2019), which employ an ensemble of node embeddings for node classification tasks.
+
+# 6 CONCLUSIONS
+
+This work provided a unifying theoretical framework for node embeddings and structural graph representations, bridging methods like SVD and graph neural networks. Using invariant theory, we have shown (both theoretically and empirically) that the relationship between structural representations and node embeddings is analogous to that of a distribution and its samples. We proved that all tasks that can be performed by node embeddings can also be performed by structural representations, and vice-versa. Our empirical results show that node embeddings can be successfully used as inductive learning methods within our framework, and that non-GNN node embedding methods can be significantly more accurate than simple GNN methods in most tasks. Our work introduced new practical guidelines for the use of node embeddings, which we expect will replace today's naive direct use of node embeddings in graph tasks.
+
+# ACKNOWLEDGMENTS
+
+This work was sponsored in part by the ARO, under the U.S. Army Research Laboratory contract number W911NF-09-2-0053, the Purdue Integrative Data Science Initiative and the Purdue Research Foundation, the DOD through SERC under contract number HQ0034-13-D-0004 RT # 206, and the National Science Foundation under contract number CCF-1918483.
+
+# REFERENCES
+
+Ittai Abraham, Yair Bartal, and Ofer Neimany. Advances in metric embedding theory. In Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, pp. 271-286. ACM, 2006.
+Amr Ahmed, Nino Shervashidze, Shravan Narayanamurthy, Vanja Josifovski, and Alexander J Smola. Distributed large-scale natural graph factorization. In Proceedings of the 22nd international conference on World Wide Web, pp. 37-48. ACM, 2013.
+Sanjeev Arora, Satish Rao, and Umesh Vazirani. Expander flows, geometric embeddings and graph partitioning. Journal of the ACM (JACM), 56(2):5, 2009.
+Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski. A latent variable model approach to pmi-based word embeddings. Transactions of the Association for Computational Linguistics, 4:385-399, 2016.
+Tim Austin. On exchangeable random variables and the statistics of large graphs and hypergraphs. Probability Surveys, 5:80-145, 2008.
+Michael L Bates, Susan M Bengtson Nash, Darryl W Hawker, John Norbury, Jonny S Stark, and Roger A Cropp. Construction of a trophically complex near-shore antarctic food web model using the conservative normal framework with structural coexistence. Journal of Marine Systems, 145: 1-14, 2015.
+Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
+Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in neural information processing systems, pp. 585-591, 2002.
+Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of machine learning research, 7 (Nov):2399-2434, 2006.
+Rianne van den Berg, Thomas N Kipf, and Max Welling. Graph convolutional matrix completion. arXiv preprint arXiv:1706.02263, 2017.
+Benjamin Bloem-Reddy and Yee Whye Teh. Probabilistic symmetry and invariant neural networks. arXiv preprint arXiv:1901.06082, 2019.
+Jean Bourgain. On lipschitz embedding of finite metric spaces in hilbert space. Israel Journal of Mathematics, 52(1-2):46-52, 1985.
+Emmanuel J Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational mathematics, 9(6):717, 2009.
+Shaosheng Cao, Wei Lu, and Qiongkai Xu. Grarep: Learning graph representations with global structural information. In Proceedings of the 24th ACM international on conference on information and knowledge management, pp. 891-900. ACM, 2015.
+Shuo Chen, Josh L Moore, Douglas Turnbull, and Thorsten Joachims. Playlist prediction via metric embedding. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 714-722. ACM, 2012.
+Zhengdao Chen, Soledad Villar, Lei Chen, and Joan Bruna. On the equivalence between graph isomorphism testing and function approximation with gnns. In Advances in neural information processing systems, 2019.
+Thomas H Cormen, Charles E Leiserson, Ronald L Rivest, and Clifford Stein. Introduction to algorithms. MIT press, 2009.
+
+Claire Donnat, Marinka Zitnik, David Hallac, and Jure Leskovec. Learning structural node embeddings via diffusion wavelets. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1320-1329. ACM, 2018.
+David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. In Advances in neural information processing systems, pp. 2224-2232, 2015.
+Alessandro Epasto and Bryan Perozzi. Is a single embedding enough? learning node representations that capture multiple social contexts. In The World Wide Web Conference, pp. 394-404. ACM, 2019.
+Shuangfei Fan and Bert Huang. Recurrent collective classification. arXiv preprint arXiv:1703.06514, 2017.
+Martin Fürer. On the combinatorial power of the weisfeiler-lehman algorithm. In International Conference on Algorithms and Complexity, pp. 260-271. Springer, 2017.
+David Galles and Judea Pearl. An axiomatic characterization of causal counterfactuals. Foundations of Science, 3(1):151-182, 1998.
+Alexander Gammerman, Volodya Vovk, and Vladimir Vapnik. Learning by transduction. In Proceedings of the Fourteenth conference on Uncertainty in artificial intelligence, pp. 148-155. Morgan Kaufmann Publishers Inc., 1998.
+Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1263-1272. JMLR.org, 2017.
+Joseph Gonzalez, Yucheng Low, Arthur Gretton, and Carlos Guestrin. Parallel gibbs sampling: From colored fields to thin junction trees. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 324-332, 2011.
+Prem Gopalan, Francisco J Ruiz, Rajesh Ranganath, and David Blei. Bayesian nonparametric poisson factorization for recommendation systems. In Artificial Intelligence and Statistics, pp. 275-283, 2014a.
+Prem K Gopalan, Laurent Charlin, and David Blei. Content-based recommendations with poisson factorization. In Advances in Neural Information Processing Systems, pp. 3176-3184, 2014b.
+Palash Goyal, Di Huang, Sujit Rokka Chhetri, Arquimedes Canedo, Jaya Shree, and Evan Patterson. Graph representation ensemble learning. arXiv preprint arXiv:1909.02811, 2019.
+Ronald L Graham and Peter M Winkler. On isometric embeddings of graphs. Transactions of the American mathematical Society, 288(2):527-536, 1985.
+Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 855-864. ACM, 2016.
+Aditya Grover, Aaron Zweig, and Stefano Ermon. Graphite: Iterative generative modeling of graphs. ICML, 2019.
+Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pp. 1024-1034, 2017a.
+William L Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584, 2017b.
+Keith Henderson, Brian Gallagher, Tina Eliassi-Rad, Hanghang Tong, Sugato Basu, Leman Akoglu, Danai Koutra, Christos Faloutsos, and Lei Li. Rolx: structural role extraction & mining in large graphs. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 1231-1239. ACM, 2012.
+
+Olav Kallenberg. Foundations of modern probability. Springer Science & Business Media, 2006.
+Sudeep Kamath, Alon Orlitsky, Dheeraj Pichapati, and Ananda Theertha Suresh. On learning distributions from their samples. In Conference on Learning Theory, pp. 1066-1100, 2015.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
+Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016a.
+Thomas N Kipf and Max Welling. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, 2016b.
+Robert Kleinberg. Geographic routing using hyperbolic space. In IEEE INFOCOM 2007-26th IEEE International Conference on Computer Communications, pp. 1902-1909. IEEE, 2007.
+Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. ICML, 2018.
+Daniel D Lee and H Sebastian Seung. Algorithms for non-negative matrix factorization. In Advances in neural information processing systems, pp. 556-562, 2001.
+Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. In Advances in neural information processing systems, pp. 2177-2185, 2014.
+Dawen Liang, Rahul G Krishnan, Matthew D Hoffman, and Tony Jebara. Variational autoencoders for collaborative filtering. In Proceedings of the 2018 World Wide Web Conference, pp. 689-698. International World Wide Web Conferences Steering Committee, 2018.
+Nathan Linial, Eran London, and Yuri Rabinovich. The geometry of graphs and some of its algorithmic applications. Combinatorica, 15(2):215-245, 1995.
+Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. ICML, 2019.
+Changping Meng, Jiasen Yang, Bruno Ribeiro, and Jennifer Neville. Hats: A hierarchical sequence-attention framework for inductive set-of-sets embeddings. In KDD, 2019.
+Ryszard S Michalski. A theory and methodology of inductive learning. In Machine learning, pp. 83-134. Springer, 1983.
+Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111-3119, 2013.
+Andriy Mnih and Ruslan R Salakhutdinov. Probabilistic matrix factorization. In Advances in neural information processing systems, pp. 1257-1264, 2008.
+Federico Monti, Michael Bronstein, and Xavier Bresson. Geometric matrix completion with recurrent multi-graph neural networks. In Advances in Neural Information Processing Systems, pp. 3697-3707, 2017.
+Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 4602-4609, 2019.
+Ryan L Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Janossy pooling: Learning deep permutation-invariant functions for variable-size inputs. arXiv preprint arXiv:1811.01900, 2018.
+
+Ryan L Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Relational pooling for graph representations. arXiv preprint arXiv:1903.02541, 2019.
+Galileo Namata, Ben London, Lise Getoor, and Bert Huang. Query-driven active surveying for collective classification. In 10th International Workshop on Mining and Learning with Graphs, pp. 8, 2012.
+Aida Nematzadeh, Stephan C Meylan, and Thomas L Griffiths. Evaluating vector-space models of word representation, or, the unreasonable effectiveness of counting words near other words. In CogSci, 2017.
+Mingdong Ou, Peng Cui, Jian Pei, Ziwei Zhang, and Wenwu Zhu. Asymmetric transitivity preserving graph embedding. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 1105-1114. ACM, 2016.
+Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 1532-1543, 2014.
+Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 701-710. ACM, 2014.
+Yuri Rabinovich and Ran Raz. Lower bounds on the distortion of embedding finite metric spaces in graphs. Discrete & Computational Geometry, 19(1):79-94, 1998.
+Benjamin Recht, Maryam Fazel, and Pablo A Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM review, 52(3):471-501, 2010.
+Leonardo FR Ribeiro, Pedro HP Saverese, and Daniel R Figueiredo. struc2vec: Learning node representations from structural identity. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 385-394. ACM, 2017.
+Ryan A Rossi, Di Jin, Sungchul Kim, Nesreen K Ahmed, Danai Koutra, and John Boaz Lee. From community to role-based graph embeddings. arXiv preprint arXiv:1908.08572, 2019.
+Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2008.
+Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pp. 593-607. Springer, 2018.
+Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93-93, 2008.
+Blake Shaw, Bert Huang, and Tony Jebara. Learning a distance metric from a network. In Advances in Neural Information Processing Systems, pp. 1899-1907, 2011.
+Charles Spearman. "general intelligence," objectively determined and measured. The American Journal of Psychology, 15(2):201-292, 1904.
+Nils Chr Stenseth, Wilhelm Falck, Ottar N Bjørnstad, and Charles J Krebs. Population regulation in snowshoe hare and canadian lynx: asymmetric food web configurations between hare and lynx. Proceedings of the National Academy of Sciences, 94(10):5147-5152, 1997.
+Da Tang, Dawen Liang, Tony Jebara, and Nicholas Ruozzi. Correlated variational auto-encoders. arXiv preprint arXiv:1905.05335, 2019.
+Edward Wagstaff, Fabian B Fuchs, Martin Engelcke, Ingmar Posner, and Michael Osborne. On the limitations of representing functions on sets. arXiv preprint arXiv:1901.09006, 2019.
+Jeffrey Wood and John Shawe-Taylor. A unifying framework for invariant pattern recognition. Pattern Recognition Letters, 17(14):1415-1422, 1996.
+
+Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S Yu. A comprehensive survey on graph neural networks. arXiv preprint arXiv:1901.00596, 2019.
+Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.
+Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S Du, Ken-ichi Kawarabayashi, and Stefanie Jegelka. What can neural networks reason about? arXiv preprint arXiv:1905.13211, 2019.
+Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. ICML, 2016.
+Jiaxuan You, Rex Ying, and Jure Leskovec. Position-aware graph neural networks. arXiv preprint arXiv:1906.04817, 2019.
+Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan R Salakhutdinov, and Alexander J Smola. Deep sets. In Advances in neural information processing systems, pp. 3391-3401, 2017.
+Muhan Zhang and Yixin Chen. Link prediction based on graph neural networks. In Advances in Neural Information Processing Systems, 2018.
+Dengyong Zhou, Olivier Bousquet, Thomas N Lal, Jason Weston, and Bernhard Schölkopf. Learning with local and global consistency. In Advances in neural information processing systems, pp. 321-328, 2004.
+Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, and Maosong Sun. Graph neural networks: A review of methods and applications. arXiv preprint arXiv:1812.08434, 2018.
+Xiaojin Zhu, Zoubin Ghahramani, and John D Lafferty. Semi-supervised learning using gaussian fields and harmonic functions. In Proceedings of the 20th International conference on Machine learning (ICML-03), pp. 912-919, 2003.
+Marinka Zitnik and Jure Leskovec. Predicting multicellular function through multi-layer tissue networks. Bioinformatics, 33(14):i190-i198, 2017.
+
+# APPENDIX
+
+# 7 A VISUAL INTERPRETATION OF NODE EMBEDDINGS AND STRUCTURAL REPRESENTATIONS
+
+In what follows we showcase the difference between structural representations (Figure 3) and positional node embeddings (Figure 4) obtained by graph neural networks and SVD, respectively, on the same graph. The graph is the food web of Figure 1 and consists of two disconnected subgraphs.
+
+Figure 3 shows a 2-dimensional $(\mathbb{R}^2)$ structural representation of nodes obtained by a standard 1-WL GNN (Xu et al., 2018), optimized to predict links without the addition of any random edges. Node colors represent the mapping of the learned structural representation in $\mathbb{R}^2$ into the red and blue intervals of RGB space $[0,255]^2$. Note that isomorphic nodes are forcibly mapped into the same structural representations (have the same color), even though the representation is trained to predict links. Specifically, the lynx and the orca in Figure 3 get the same color since they are isomorphic, while the lynx and the coyote have different structural representations (different colors): the lynx, like the orca, is a top predator in the food web, while the coyote is not. Hence, it is quite evident why GNNs are not traditionally used for link prediction: since the structural node representation is a color, using these representations is akin to asking "is a light pink node in Figure 3 connected to a light blue node?" We cannot answer this question unless we also specify which light pink node (i.e., coyote or seal) and which light blue node (i.e., orca or lynx) we are talking about. Hence the failure to predict links. Our Corollary 2 shows that joint 2-node structural representations are required for link prediction. Our Theorem 1 generalizes this to joint $k$-node structural representations for any $k$-node joint prediction task.
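
This failure mode can be sketched numerically. The toy graph below (two disjoint edges, a stand-in rather than the food web of Figure 1) makes all four nodes isomorphic, so 1-WL color refinement, the kind of structural representation a 1-WL GNN can express, assigns every node the same value; any link scorer that only sees the two endpoint representations must then score the true link (0, 1) and the non-link (0, 2) identically. The `wl_colors` helper is illustrative, not code from our experiments.

```python
import numpy as np

# Two disjoint edges: 0-1 and 2-3. All four nodes are isomorphic.
A = np.zeros((4, 4))
A[0, 1] = A[1, 0] = 1.0
A[2, 3] = A[3, 2] = 1.0

def wl_colors(A, rounds=3):
    """1-WL color refinement: each round relabels every node by
    (own color, sorted multiset of neighbor colors)."""
    n = len(A)
    colors = [0] * n
    for _ in range(rounds):
        sigs = [(colors[i], tuple(sorted(colors[j] for j in np.flatnonzero(A[i]))))
                for i in range(n)]
        relabel = {s: k for k, s in enumerate(sorted(set(sigs)))}
        colors = [relabel[s] for s in sigs]
    return colors

colors = wl_colors(A)
# All nodes share one color, so a scorer g(colors[u], colors[v]) cannot
# distinguish the link (0, 1) from the non-link (0, 2).
```

A scorer fed the joint pair (as in Corollary 2) rather than two independent node colors would not have this limitation.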
+
+
+Figure 3: (Best in color) Food web graph of Figure 1 with node colors that map a two-dimensional representation from a 1-WL GNN (GIN; Xu et al., 2018) into the [0, 255] intervals of blue and red intensity of RGB colors, respectively. GIN is used as a representative of structural node representations. Structurally isomorphic nodes obtain the same representation and hence end up being visualized with the same color. Consequently, it is clear why structural node representations are used for node and graph classification, but not for link prediction.
+
+
+
+Positional node embeddings, on the other hand, are often seen as a lower-dimensional projection of the rows and columns of the adjacency matrix $\mathbf{A}$ from $\mathbb{R}^n$ to $\mathbb{R}^d$, $d < n$, that preserves the relative positions of the nodes in a graph. In Figure 4 we show the same food web graph, where now node colors map the values of the two leading (SVD) eigenvectors (of the undirected food web graph) into the [0, 255] intervals of blue and red intensity of RGB colors, respectively. The graph is made undirected, since otherwise the left and right eigenvectors would differ and be harder to represent visually. SVD is used here as a representative of positional node embedding methods. Note that the lynx and the coyote now have close positional node embeddings (represented by colors which are closer in the color spectrum), while the positional node embeddings of the lynx and the orca are significantly different. Node embeddings are often seen as encoding the fact that the lynx and the coyote are part of a tightly-knit community, while the lynx and the orca belong to distinct communities.
+
+It is evident from Figure 4 why positional node embeddings are not traditionally used to predict node classes: predicting that the lynx, like the orca, is a top predator based on their node colors is difficult, since the coyote's color is very similar to that of the lynx, while the orca obtains a completely different color from the lynx. Relying on color shades (light and dark) for node classification is unreliable, since nodes with completely different structural positions in the food chain may have similar color shades. Our Theorem 2 shows, however, that any positional node embedding (indeed, any node embedding) must be a sample of a distribution given by the most-expressive structural representation of the node. That is, if SVD is run again over different permutations of the vertices in the adjacency matrix (an isomorphic graph to Figure 1), the set of colors obtained by the lynx and the orca must be the same. Hence, node classification is possible if we look at the distribution of colors that a node obtains from random permutations of the adjacency matrix.
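
This claim can be sketched on a minimal example: a 3-node path (in place of the food web), with an eigendecomposition of the symmetric adjacency matrix standing in for SVD. We collect each original node's embedding across all relabelings of the graph and compare magnitudes (to quotient out the eigenvector sign ambiguity); the helper names are ours, not from the paper.

```python
import numpy as np
from itertools import permutations

# Path graph 0-1-2; the endpoints 0 and 2 are isomorphic, the center 1 is not.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

def embed(A, d=2):
    """Positional embedding: the d leading eigenvectors of the symmetric A."""
    w, v = np.linalg.eigh(A)
    order = np.argsort(-w)[:d]
    return v[:, order]

# Collect each original node's embedding across all relabelings of the graph.
samples = {i: [] for i in range(3)}
for perm in permutations(range(3)):
    P = np.eye(3)[list(perm)]        # row k of P picks original node perm[k]
    Z = embed(P @ A @ P.T)           # embed the isomorphic (relabeled) graph
    for new_pos, old_node in enumerate(perm):
        # record magnitudes to quotient out the eigenvector sign ambiguity
        samples[old_node].append(np.abs(Z[new_pos]))

# Isomorphic nodes 0 and 2 collect identical embedding multisets across
# permutations; the non-isomorphic center node 1 does not.
```

In this small deterministic sketch the per-node samples are constant across relabelings; what matters is that the two isomorphic endpoints always obtain the same values, while the center node obtains distinct ones, mirroring the lynx/orca versus coyote situation.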
+
+On the other hand, link prediction using the colors in Figure 4 is rather trivial, since similar colors mean "closeness" in the graph. For instance, we may easily predict that the baleen whale also eats zooplankton.
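
A small spectral sketch illustrates this (a triangle and a 3-node path standing in for the two food chains; the graph and helper names are ours, not the paper's): inner products of positional embeddings separate within-component pairs from cross-component pairs.

```python
import numpy as np

# Toy graph with two components: a triangle {0,1,2} and a path 3-4-5.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5)]:
    A[i, j] = A[j, i] = 1.0

def embed(A, d=2):
    """Spectral positional embedding: d leading eigenvectors scaled by eigenvalues."""
    w, v = np.linalg.eigh(A)
    order = np.argsort(-w)[:d]
    return v[:, order] * w[order]

Z = embed(A)

def link_score(u, v):
    # Inner-product scores are invariant to the eigenvector sign ambiguity.
    return float(Z[u] @ Z[v])

# Within-component pairs score strictly higher than cross-component pairs,
# so thresholding the score recovers "closeness" links such as whale-zooplankton.
```

Note the caveat: such a score reflects closeness, so nearby non-links (e.g., the pair (3, 5) two hops apart) also score above cross-component pairs, which is exactly the "community" behavior described above.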
+
+
+Figure 4: (Best in color) Food web graph of Figure 1 with node colors that map the values of the two leading (SVD) eigenvectors over the undirected graph into the [0, 255] intervals of blue and red intensity of RGB colors, respectively. SVD (run until convergence) is used as a representative of positional node embedding methods. The graph is made undirected, since otherwise the left and right eigenvectors would differ and be harder to represent visually. Note that nodes that are part of the same connected component obtain embeddings that are close in latent space, shown visually as similar colors. Consequently, it is clear that positional node embeddings can be used for link prediction and clustering.
+
+
+
+# 8 PRELIMINARIES
+
+Noise outsourcing, representation learning, and graph representations: The description of our proofs starts with the equivalence between probability models of graphs and graph representations. We start with the concept of noise outsourcing (Austin, 2008, Lemma 3.1) applied to our task—a weaker version of the more general concept of transfer (Kallenberg, 2006, Theorem 6.10) in pushforward measures.
+
+A probability law $Z|(\mathbf{A},\mathbf{X}) \sim p(\cdot |(\mathbf{A},\mathbf{X}))$ , $(\mathbf{A},\mathbf{X}) \in \Sigma_{n}$ , can be described (Kallenberg, 2006, Theorem 6.10) by pure random noise $\epsilon \sim \mathrm{Uniform}(0,1)$ , independent of $(\mathbf{A},\mathbf{X})$ , passing through a deterministic function $Z = f((\mathbf{A},\mathbf{X}),\epsilon)$ , where $f:\Sigma_n\times [0,1]\to \Omega$ and, in our task, $\Omega = \mathbb{R}^{n\times d}$ is the space of node representation matrices, $d\geq 1$ . That is, the randomness in the conditional $p(z|(\mathbf{A},\mathbf{X}))$ is entirely outsourced to $\epsilon$ , as $f$ is deterministic.
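
As a concrete (hypothetical) instance of noise outsourcing, any conditional law can be written as a deterministic function of its conditioning variable and a single Uniform(0, 1) noise variable. The Bernoulli example below is ours, chosen for simplicity, not a construction from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def f(x, eps):
    """Deterministic noise-outsourced sampler for Z|x ~ Bernoulli(sigmoid(x)):
    all randomness lives in eps ~ Uniform(0, 1); f itself is deterministic."""
    return (eps < sigmoid(x)).astype(float)

rng = np.random.default_rng(0)
x = 0.3
eps = rng.uniform(0.0, 1.0, size=200_000)
z = f(x, eps)
# The empirical mean of z approaches sigmoid(0.3), matching the target law.
```

The same mechanism underlies $Z = f((\mathbf{A},\mathbf{X}),\epsilon)$ above, with the graph in place of $x$ and a representation matrix in place of the Bernoulli outcome.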
+
+Now, consider replacing the graph $G = (\mathbf{A}, \mathbf{X})$ by a $\mathcal{G}$ -equivariant representation $\Gamma(\mathbf{A}, \mathbf{X})$ of its nodes, the output of a neural network $\Gamma: \Sigma_n \to \mathbb{R}^{n \times m}$ , $m \geq 1$ , that gives a representation to each node in $G$ . If a representation $\Gamma^{\star}(\mathbf{A}, \mathbf{X})$ is such that $\exists f'$ where $Z = f((\mathbf{A}, \mathbf{X}), \epsilon) = f'(\Gamma^{\star}(\mathbf{A}, \mathbf{X}), \epsilon)$ , $\forall (\mathbf{A}, \mathbf{X}) \in \Sigma_n$ and $\forall \epsilon \in [0,1]$ , then $\Gamma^{\star}(\mathbf{A}, \mathbf{X})$ does not lose any information when it comes to predicting $Z$ . Statistically (Kallenberg, 2006, Proposition 6.13), $Z \perp_{\Gamma^{\star}(\mathbf{A}, \mathbf{X})} (\mathbf{A}, \mathbf{X})$ . We call $\Gamma^{\star}(\mathbf{A}, \mathbf{X})$ a most-expressive representation of $G$ with respect to $Z$ . A most-expressive representation (without qualifications) is one that is most-expressive for any target variable.
+
+Representation learning is powerful precisely because it can learn functions $\Gamma$ that are compact and encode most of the information available in the input. And because the most-expressive $\Gamma^{\star}$ is $\mathcal{G}$ -equivariant, any function of $\Gamma^{\star}$ used to output $Z$ inherits this $\mathcal{G}$ -equivariance, without loss of information.
+
+# 9 PROOF OF THEOREMS, LEMMAS AND COROLLARIES
+
+We restate and prove Lemma 1.
+
+Lemma 1. Two nodes $v, u \in V$ , have the same most-expressive structural representation $\Gamma^{\star}(v, \mathbf{A}, \mathbf{X}) = \Gamma^{\star}(u, \mathbf{A}, \mathbf{X})$ iff $u$ and $v$ are isomorphic nodes in $G = (\mathbf{A}, \mathbf{X})$ (Definition 6).
+
+Proof. In this proof, we consider both directions.
+
+$(\Rightarrow)$ By contradiction, suppose two nodes $v, u \in V$ satisfy $\Gamma^{\star}(v, \mathbf{A}, \mathbf{X}) = \Gamma^{\star}(u, \mathbf{A}, \mathbf{X})$ but are not isomorphic in $G = (\mathbf{A}, \mathbf{X})$ , i.e., $u$ and $v$ have different node orbits. This is a contradiction, since the bijective mapping of Definition 9 would then have to map the same input to two different outputs.
+$(\Leftarrow)$ By contradiction, suppose two nodes $u, v \in V$ are isomorphic in $G = (\mathbf{A}, \mathbf{X})$ but have different most-expressive structural representations, i.e., $\Gamma^{\star}(v, \mathbf{A}, \mathbf{X}) \neq \Gamma^{\star}(u, \mathbf{A}, \mathbf{X})$ . This is a contradiction: by Definition 6, isomorphic nodes must have the same structural representation, which would imply that $\Gamma^{\star}$ is not a structural representation. Hence, the two nodes must share the same most-expressive structural representation.
+
+Next, we restate and prove Lemma 2.
+
+Lemma 2 (Causal modeling through noise outsourcing). Definition 1 of Galles & Pearl (1998) gives a causal model as a triplet
+
+$$
+M = \langle U, V, F \rangle ,
+$$
+
+where $U$ is a set of exogenous variables, $V$ is a set of endogenous variables, and $F$ is a set of functions, such that in the causal model $v_{i} = f_i(\vec{pa}_{i},u)$ is the realization of random variable $V_{i}\in V$ , where $\vec{PA}_i$ is a sequence of random variables with $PA_{i}\subseteq V\backslash V_{i}$ the endogenous-variable parents of $V_{i}$ , as given by a directed acyclic graph. Then, there exists a pure random noise $\epsilon$ and a set of (measurable) functions $\{g_u\}_{u\in U}$ such that for $V_{i}\in V$ , $V_{i}$ can be equivalently defined as $v_{i}\stackrel {a.s.}{=}f_i(\vec{pa}_{i},g_{u}(\epsilon_{u}))$ , where $\epsilon_{u}$ has joint distribution $(\epsilon_{u})_{\forall u\in U}\stackrel {a.s.}{=}g^{\prime}(\epsilon)$ for some Borel measurable function $g^{\prime}$ and a random variable $\epsilon \sim \mathrm{Uniform}(0,1)$ . The latter defines $M$ via noise outsourcing (Austin, 2008).
+
+Proof. By Definition 1 of Galles & Pearl (1998), the set of exogenous variables $U$ is not affected by the set of endogenous variables $V$ . The noise outsourcing lemma (Lemma 3.1) of Austin (2008) (or its more complete version, Theorem 6.10 of Kallenberg (2006)) shows that any sample of a joint distribution over a set of random variables $U$ can be described as $(u)_{\forall u \in U} \stackrel{a.s.}{=} g(\epsilon)$ , for some Borel measurable function $g$ and a random variable $\epsilon \sim \mathrm{Uniform}(0,1)$ . As the composition of two Borel measurable functions is also Borel measurable, it is trivial to show that there exist Borel measurable functions $\{g_u\}_{u \in U}$ and $g'$ such that $u \stackrel{a.s.}{=} g_u(\epsilon_u)$ and $(\epsilon_u)_{\forall u \in U} \stackrel{a.s.}{=} g'(\epsilon)$ . The latter is immediate, since $g_u$ can simply be the identity function, $g_u(x) = x$ .
+
+Next, we restate and prove Lemma 3.
+
+Lemma 3. The permutation equivariance of $p$ in Definition 12 implies that, if two proper subsets of nodes $S_{1}, S_{2} \subsetneq V$ are isomorphic, then their marginal node embedding distributions must be the same up to a permutation, i.e., $p((\mathbf{Z}_i)_{i \in S_1} | \mathbf{A}, \mathbf{X}) = \pi(p((\mathbf{Z}_j)_{j \in S_2} | \mathbf{A}, \mathbf{X}))$ for some appropriate permutation $\pi$ .
+
+Proof. From Definition 12, it is trivial to observe that two isomorphic nodes $u, v \in V$ in graph $G = (\mathbf{A}, \mathbf{X})$ have the same marginal node embedding distributions. In this proof we extend this to node sets $S \subset V$ with $|S| > 1$ . We marginalize over $(Z_i)_{i \notin S_1}$ to obtain $p((\mathbf{Z}_i)_{i \in S_1} | \mathbf{A}, \mathbf{X})$ and, respectively, over $(Z_i)_{i \notin S_2}$ to obtain $p((\mathbf{Z}_j)_{j \in S_2} | \mathbf{A}, \mathbf{X})$ .
+
+We consider two cases:
+
+Case 1: $S_{1} = S_{2}$ : This is the trivial case where $S_{1}$ and $S_{2}$ are exactly the same nodes; hence their marginal distributions are identical by definition.
+
+Case 2: $S_{1} \neq S_{2}$ : Since $S_{1}$ and $S_{2}$ are given to be isomorphic, every node in $S_{1}$ has an isomorphic counterpart in $S_{2}$ . In a graph $G = (\mathbf{A}, \mathbf{X})$ , this means that $S_{2}$ can be written as a permutation $\pi$ of $S_{1}$ , i.e., $S_{2} = \pi(S_{1})$ . Now, employing Definition 12, it follows that $p((\mathbf{Z}_{i})_{i \in S_{1}} | \mathbf{A}, \mathbf{X}) = \pi(p((\mathbf{Z}_{j})_{j \in S_{2}} | \mathbf{A}, \mathbf{X}))$ .
+
+Next, we restate and prove Theorem 1.
+
+Theorem 1. Let $\mathcal{S} \subseteq \mathcal{P}(V)$ be a set of subsets of $V$ . Let $Y(\mathcal{S},\mathbf{A},\mathbf{X}) = (Y(\vec{S},\mathbf{A},\mathbf{X}))_{S \in \mathcal{S}}$ be a sequence of random variables defined over the sets $S \in \mathcal{S}$ of a graph $G = (\mathbf{A},\mathbf{X})$ , such that we define $Y(\vec{S},\mathbf{A},\mathbf{X}) := Y_{\vec{S}}|\mathbf{A},\mathbf{X}$ and $Y(\vec{S}_1,\mathbf{A},\mathbf{X}) \stackrel{d}{=} Y(\vec{S}_2,\mathbf{A},\mathbf{X})$ for any two jointly isomorphic subsets $S_1, S_2 \in \mathcal{S}$ in $(\mathbf{A},\mathbf{X})$ (Definition 7), where $\stackrel{d}{=}$ means equality in their marginal distributions. Then, there exists a deterministic function $\varphi$ such that $Y(\mathcal{S},\mathbf{A},\mathbf{X}) \stackrel{\text{a.s.}}{=} (\varphi(\Gamma^{\star}(\vec{S},\mathbf{A},\mathbf{X}),\epsilon_S))_{S \in \mathcal{S}}$ , where $\epsilon_S$ is a pure source of random noise from a joint distribution $p((\epsilon_{S'})_{\forall S' \in \mathcal{S}})$ independent of $\mathbf{A}$ and $\mathbf{X}$ .
+
+Proof. The case $S = V$ is given in Theorem 12 of Bloem-Reddy & Teh (2019). The case $S = \emptyset$ is trivial. The case $S \subsetneq V$ , $S \neq \emptyset$ , is handled by the following constructive argument. First consider two jointly isomorphic sets of nodes $S_1, S_2 \in \mathcal{S}$ . As by definition $Y(\vec{S}_1, \mathbf{A}, \mathbf{X}) \stackrel{d}{=} Y(\vec{S}_2, \mathbf{A}, \mathbf{X})$ , we have $p(Y_{\vec{S}_1}|\mathbf{A}, \mathbf{X}) = p(Y_{\vec{S}_2}|\mathbf{A}, \mathbf{X})$ . We can now use the transfer theorem (Kallenberg, 2006, Theorem 6.10) to obtain a joint description $Y_{\vec{S}_1}|\mathbf{A}, \mathbf{X} \stackrel{\text{a.s.}}{=} \phi_1(\vec{S}_1, \mathbf{A}, \mathbf{X}, \epsilon)$ and $Y_{\vec{S}_2}|\mathbf{A}, \mathbf{X} \stackrel{\text{a.s.}}{=} \phi_2(\vec{S}_2, \mathbf{A}, \mathbf{X}, \epsilon)$ , where $\epsilon$ is a common source of independent noise. As $S_1$ and $S_2$ are jointly isomorphic (Definition 7), there exists an isomorphism $\mathrm{iso}$ with $\vec{S}_1 = \mathrm{iso}(\vec{S}_2)$ , so that $\phi_1(\mathrm{iso}(\vec{S}_2), \mathbf{A}, \mathbf{X}, \epsilon) = \phi_1(\vec{S}_1, \mathbf{A}, \mathbf{X}, \epsilon)$ . Because the distribution given by $\phi_1(\cdot, \epsilon)$ must be isomorphism-invariant in $(\mathbf{A}, \mathbf{X})$ , and $S_1$ and $S_2$ are isomorphic in $(\mathbf{A}, \mathbf{X})$ , then for all permutation actions $\pi \in \Pi_n$ there exists a new isomorphism $\mathrm{iso}'$ such that $\phi_1(\pi(\mathrm{iso}(\vec{S}_2)), \pi(\mathbf{A}), \pi(\mathbf{X}), \epsilon) \stackrel{d}{=} \phi_1(\mathrm{iso}'(\pi(\vec{S}_1)), \pi(\mathbf{A}), \pi(\mathbf{X}), \epsilon)$ , which allows us to create a function $\varphi'$ that incorporates $\mathrm{iso}'$ into $\phi_1$ . Due to the isomorphism between $S_1$ and $S_2$ , we can apply the same process to $S_2$ and arrive at the same function $\varphi'$ .
We can now apply Corollary 6.11 (Kallenberg, 2006) over $(Y_{\vec{S}_1}|\mathbf{A}, \mathbf{X}, Y_{\vec{S}_2}|\mathbf{A}, \mathbf{X})$ along with a measure-preserving mapping $f$ to show that $Y_{\vec{S}_1}|\mathbf{A}, \mathbf{X} \stackrel{\text{a.s.}}{=} \varphi'(\vec{S}_1, \mathbf{A}, \mathbf{X}, \epsilon_1)$ and $Y_{\vec{S}_2}|\mathbf{A}, \mathbf{X} \stackrel{\text{a.s.}}{=} \varphi'(\vec{S}_2, \mathbf{A}, \mathbf{X}, \epsilon_2)$ , where $(\epsilon_1, \epsilon_2) = f(\epsilon)$ . If $S_1$ and $S_2$ are not jointly isomorphic, we can simply define $\varphi'(S_i, \cdot) := \phi_i(S_i, \cdot)$ . Definition 11 then allows us to define a function $\varphi$ with which we rewrite $\varphi'(\vec{S}_i, \mathbf{A}, \mathbf{X}, \epsilon_i)$ as $\varphi(\Gamma^{\star}(\vec{S}_i, \mathbf{A}, \mathbf{X}), \epsilon_i)$ . Applying the same procedure to all $S \in \mathcal{S}$ concludes our proof.
+
+Next, we restate and prove Theorem 2
+
+Theorem 2 (The statistical equivalence between node embeddings and structural representations). Let $Y(\mathcal{S},\mathbf{A},\mathbf{X}) = (Y(\vec{S},\mathbf{A},\mathbf{X}))_{S\in \mathcal{S}}$ be as in Theorem 1. Consider a graph $G = (\mathbf{A},\mathbf{X})\in \Sigma_{n}$ . Let $\Gamma^{\star}(\vec{S},\mathbf{A},\mathbf{X})$ be a most-expressive structural representation of the nodes $S\in \mathcal{S}$ in $(\mathbf{A},\mathbf{X})$ . Then,
+
+$$
+Y (\vec {S}, \mathbf {A}, \boldsymbol {X}) \perp_ {\Gamma^ {\star} (\vec {S}, \mathbf {A}, \boldsymbol {X})} \boldsymbol {Z} | \mathbf {A}, \boldsymbol {X}, \quad \forall S \in \mathcal {S},
+$$
+
+for any node embedding matrix $\mathbf{Z}$ that satisfies Definition 12, where $A \perp_{B} C$ means $A$ is independent of $C$ given $B$ . Finally, $\forall (\mathbf{A}, \mathbf{X}) \in \Sigma_{n}$ , there exists a most-expressive node embedding $\mathbf{Z}^{\star}|\mathbf{A}, \mathbf{X}$ such that
+
+$$
+\Gamma^ {\star} (\vec {S}, \mathbf {A}, \boldsymbol {X}) = \mathbb {E} _ {\boldsymbol {Z} ^ {\star}} [ f ^ {(| S |)} ((\boldsymbol {Z} _ {v} ^ {\star}) _ {v \in S}) | \mathbf {A}, \boldsymbol {X} ], \quad \forall S \in \mathcal {S},
+$$
+
+for some appropriate collection of functions $\{f^{(k)}(\cdot)\}_{k = 1,\dots,n}$ .
+
+Proof. In the first part of the proof, for any embedding distribution $p(\mathbf{Z}|\mathbf{A},\mathbf{X})$ , we note that by Theorem 1, $Y(\vec{S},\mathbf{A},\mathbf{X}) \stackrel{a.s.}{=} \varphi(\Gamma^{\star}(\vec{S},\mathbf{A},\mathbf{X}),\epsilon_{S})$ . Hence, $Y(\vec{S},\mathbf{A},\mathbf{X}) \perp_{\Gamma^{\star}(\vec{S},\mathbf{A},\mathbf{X})} \mathbf{Z}|\mathbf{A},\mathbf{X}, \forall S \in \mathcal{S}$ , is a direct consequence of Proposition 6.13 in Kallenberg (2006).
+
+In the second part of the proof, we construct an orbit over a most-expressive representation of a graph $(\mathbf{A},\mathbf{X})$ of size $n$ , with permutations that act only on unique node ids (node orderings) added as node features: $\Pi^{\prime}(\mathbf{A},\mathbf{X}) = \{(\Gamma^{\star}(v,\mathbf{A},[\mathbf{X},\pi (1,\ldots ,n)^T]))_{\forall v\in V}\}_{\forall \pi \in \Pi_n}$ , where $[A,b]$ concatenates column vector $b$ as a column of matrix $A$ . Define $\mathbf{Z}^{\star}|\mathbf{A},\mathbf{X}$ as the random variable with a uniform measure over the set $\Pi^{\prime}(\mathbf{A},\mathbf{X})$ . We first prove that $\mathbf{Z}^{\star}|\mathbf{A},\mathbf{X}$ is a most-expressive node embedding. Clearly, $\mathbf{Z}^{\star}|\mathbf{A},\mathbf{X}$ is a node embedding, since the uniform measure over $\Pi^{\prime}(\mathbf{A},\mathbf{X})$ is $\mathcal{G}$ -equivariant. All that is left to show is that we can construct $\Gamma^{\star}$ of any-size subset $S\in \mathcal{S}$ from $\mathbf{Z}^{\star}|\mathbf{A},\mathbf{X}$ via
+
+$$
+\Gamma^ {\star} (\vec {S}, \mathbf {A}, \mathbf {X}) = \mathbb {E} _ {\mathbf {Z} ^ {\star}} [ f ^ {(| S |)} ((\mathbf {Z} _ {v} ^ {\star}) _ {v \in S}) | \mathbf {A}, \mathbf {X} ],
+$$
+
+for some function $f^{(|S|)}$ . This part of the proof has a constructive argument and comes in two parts.
+
+Assume $S \in \mathcal{S}$ has no other jointly isomorphic set of nodes in $\mathcal{S}$ , i.e., $\nexists S_2 \in \mathcal{S}$ such that $S$ and $S_2$ are jointly isomorphic in $(\mathbf{A}, \mathbf{X})$ . For any such subset of nodes $S \in \mathcal{S}$ , and any element $R_{\pi} \in \Pi'(\mathbf{A}, \mathbf{X})$ , there is a bijective measurable map between the nodes in $S$ and their positions in the representation vector $R_{\pi} = (\Gamma^{\star}(v, \mathbf{A}, [\mathbf{X}, \pi(1, \ldots, n)^T]))_{\forall v \in V}$ , since all node features are unique and there are no isomorphic nodes under such conditions. Consider the multiset
+
+$$
+\mathcal {O} _ {S} (\mathbf {A}, \mathbf {X}) := \{ (\Gamma^ {\star} (v, \mathbf {A}, [ \mathbf {X}, \pi (1, \dots , n) ^ {T} ])) _ {\forall v \in S} \} _ {\forall \pi \in \Pi_ {n}}
+$$
+
+of the representations restricted to the set $S$ . We now show that there exists a surjection between $\mathcal{O}_S(\mathbf{A},\mathbf{X})$ and $\Gamma^{\star}(\vec{S},\mathbf{A},\mathbf{X})$ . There is a surjection if, for all $S_1,S_2\in \mathcal{P}^\star (V)$ that are non-isomorphic, we have $\mathcal{O}_{S_1}(\mathbf{A},\mathbf{X})\neq \mathcal{O}_{S_2}(\mathbf{A},\mathbf{X})$ . The condition is trivial if $|S_{1}|\neq |S_{2}|$ , as then $|\mathcal{O}_{S_1}(\mathbf{A},\mathbf{X})|\neq |\mathcal{O}_{S_2}(\mathbf{A},\mathbf{X})|$ . If $|S_{1}| = |S_{2}|$ , we proceed by contradiction. Assume $\mathcal{O}_{S_1}(\mathbf{A},\mathbf{X}) = \mathcal{O}_{S_2}(\mathbf{A},\mathbf{X})$ . Because of the unique feature ids and because $\Gamma^{\star}$ is most-expressive, the representation $\Gamma^{\star}(v,\mathbf{A},[\mathbf{X},\pi (1,\ldots ,n)^T])$ of node $v\in V$ under permutation $\pi \in \Pi_{n}$ is unique. As $S_{1}$ is not isomorphic to $S_{2}$ , and both sets have the same size, there must be at least one node $u\in S_{1}$ that has no isomorphic equivalent in $S_{2}$ . Hence, there exists $\pi \in \Pi_n$ that gives a unique representation $\Gamma^{\star}(u,\mathbf{A},[\mathbf{X},\pi (1,\ldots ,n)^T])$ with no matching $\Gamma^{\star}(v,\mathbf{A},[\mathbf{X},\pi^{\prime}(1,\ldots ,n)^T])$ for any $v\in S_{2}$ and $\pi^{\prime}\in \Pi_{n}$ . Therefore, $\exists a\in \mathcal{O}_{S_1}(\mathbf{A},\mathbf{X})$ with $a\notin \mathcal{O}_{S_2}(\mathbf{A},\mathbf{X})$ , contradicting the assumption $\mathcal{O}_{S_1}(\mathbf{A},\mathbf{X}) = \mathcal{O}_{S_2}(\mathbf{A},\mathbf{X})$ .
+
+Now that we know there is such a surjection, a possible surjective measurable map between $\mathcal{O}_S(\mathbf{A},\mathbf{X})$ and $\Gamma^{\star}(\vec{S},\mathbf{A},\mathbf{X})$ is a multiset function that takes $\mathcal{O}_S(\mathbf{A},\mathbf{X})$ and outputs $\Gamma^{\star}(\vec{S},\mathbf{A},\mathbf{X})$ . For finite multisets whose elements are real numbers $\mathbb{R}$ , Wagstaff et al. (2019) shows that a most-expressive multiset function can be defined as the average of a function $f^{(|S|)}$ over the multiset. The elements of $\mathcal{O}_S(\mathbf{A},\mathbf{X})$ are finite ordered sequences (ordered according to the permutation) and, thus, can be uniquely (bijectively) mapped to the real line with a measurable map, even when $\mathbf{A}$ and $\mathbf{X}$ have edge and node attributes defined over the real numbers $\mathbb{R}$ . Thus, by Wagstaff et al. (2019), there exists some surjective function $f^{(|S|)}$ whose average over $\mathcal{O}_S(\mathbf{A},\mathbf{X})$ gives $\Gamma^{\star}(\vec{S},\mathbf{A},\mathbf{X})$ .
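The sum-decomposition result of Wagstaff et al. (2019) invoked above can be illustrated with a minimal sketch; the encoder `phi` and readout `rho` here are hypothetical stand-ins for illustration, not the construction in the proof:

```python
import numpy as np

def multiset_function(elements, phi, rho):
    """Sum-decomposition of a permutation-invariant multiset function:
    g({x_1, ..., x_m}) = rho(mean of phi(x_i)); any reordering of
    `elements` yields the same output."""
    pooled = np.mean([phi(x) for x in elements], axis=0)
    return rho(pooled)

# Hypothetical per-element encoder and readout, for illustration only.
phi = lambda x: np.array([x, x ** 2])
rho = lambda z: float(z.sum())

a = multiset_function([1.0, 2.0, 3.0], phi, rho)
b = multiset_function([3.0, 1.0, 2.0], phi, rho)  # permuted input, same value
```

Because the pooling averages over elements, the output cannot depend on the order in which the multiset is presented.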
+
+Now assume $S_{1}, S_{2} \subseteq V$ are jointly isomorphic in $(\mathbf{A}, \mathbf{X})$ , with $S_{1}, S_{2} \neq \emptyset$ . Then, by the argument above, $\mathcal{O}_{S_1}(\mathbf{A}, \mathbf{X}) = \mathcal{O}_{S_2}(\mathbf{A}, \mathbf{X})$ . Fortunately, by Definition 10, this non-uniqueness is a required property of the structural representations $\Gamma^{\star}(\vec{S}_1, \mathbf{A}, \mathbf{X})$ and $\Gamma^{\star}(\vec{S}_2, \mathbf{A}, \mathbf{X})$ , which must satisfy $\Gamma^{\star}(\vec{S}_1, \mathbf{A}, \mathbf{X}) = \Gamma^{\star}(\vec{S}_2, \mathbf{A}, \mathbf{X})$ whenever $S_1$ and $S_2$ are jointly isomorphic. This concludes the proof.
+
+Next, we restate and prove Corollary 1
+
+Corollary 1. The node embeddings in Definition 12 encompass embeddings given by matrix and tensor factorization methods — such as Singular Value Decomposition (SVD), Non-negative Matrix Factorization (NMF), implicit matrix factorization (a.k.a. word2vec)—, latent embeddings given by Bayesian graph models — such as Probabilistic Matrix Factorizations (PMFs) and variants—, variational autoencoder methods and graph neural networks that use random lighthouses to extract node embeddings.
+
+Proof. In Probabilistic Matrix Factorization (Mnih & Salakhutdinov, 2008), we have $\mathbf{A}_{uv} \sim \mathcal{N}(Z_u^T Z_v, \sigma_a^2 \mathbf{I})$ where $Z_u \sim \mathcal{N}(0, \sigma^2 \mathbf{I})$ and $Z_v \sim \mathcal{N}(0, \sigma^2 \mathbf{I})$ . We note that the posterior $p(\mathbf{Z}|\mathbf{A})$ is clearly equivariant, satisfying Definition 12, as a permutation action on the nodes induces the same permutation on the rows of $\mathbf{Z}$ . The proof for Poisson Matrix Factorization (Gopalan et al., 2014a;b) follows a similar construction, with the Normal assumption replaced by the Poisson distribution.
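A minimal generative sketch of the PMF model referenced above; the function name and noise scales are illustrative defaults, not values from the corollary:

```python
import numpy as np

def sample_pmf_graph(n, k, sigma_z=1.0, sigma_a=0.1, seed=0):
    """Generative model of Probabilistic Matrix Factorization:
    Z_u ~ N(0, sigma_z^2 I) per node, A_uv ~ N(Z_u^T Z_v, sigma_a^2)."""
    rng = np.random.default_rng(seed)
    Z = sigma_z * rng.standard_normal((n, k))          # latent node embeddings
    A = Z @ Z.T + sigma_a * rng.standard_normal((n, n))  # noisy low-rank graph
    return A, Z

A, Z = sample_pmf_graph(n=5, k=2)
# Permuting the rows/columns of A induces the same permutation on the rows
# of Z, which is why the posterior p(Z | A) is permutation-equivariant.
```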
+
+Moreover, any matrix factorization algorithm gives an equivariant distribution of embeddings if the input matrices are randomly permuted upon input. Specifically, any Singular Value Decomposition (SVD) method satisfies Definition 12, as the distribution of the eigenvector solutions to degenerate singular values—which are invariant to unitary rotations in the corresponding degenerate eigenspace—will trivially be $\mathcal{G}$ -equivariant even if the algorithm itself outputs values dependent on the node ids. The same is true for non-negative matrix factorization (Lee & Seung, 2001) and implicit matrix factorization (Levy & Goldberg, 2014; Mikolov et al., 2013).
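As a minimal illustration of the randomized-permutation argument (the function name and rank choice are illustrative, not part of the corollary), one can draw a random node ordering, factorize the permuted adjacency matrix, and undo the permutation on the rows of the resulting embedding:

```python
import numpy as np

def svd_node_embedding(A, d, rng):
    """Draw one sample from an SVD-based node-embedding distribution that is
    G-equivariant by construction: permute the nodes uniformly at random,
    factorize the permuted adjacency matrix, then map the embedding rows
    back to the original node ids."""
    n = A.shape[0]
    perm = rng.permutation(n)
    U, s, _ = np.linalg.svd(A[np.ix_(perm, perm)])
    Z_perm = U[:, :d] * np.sqrt(s[:d])   # rank-d embedding of the permuted graph
    Z = np.empty_like(Z_perm)
    Z[perm] = Z_perm                     # undo the permutation row-wise
    return Z

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
Z = svd_node_embedding(A, d=2, rng=rng)  # one Monte Carlo embedding sample
```

Repeating the call with fresh randomness yields samples from the permutation-equivariant embedding distribution, even though a single deterministic SVD call is not equivariant on its own.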
+
+P-GNNs (You et al., 2019) compute the shortest distances between every node of the graph and a predetermined set of 'anchor' nodes to encode a distance metric. By definition, using such a distance metric makes the node embeddings learned by this technique $\mathcal{G}$ -equivariant. The shortest path between all pairs of nodes in a graph can be seen equivalently as a function of polynomials in powers $\mathbf{A}^k$ of the adjacency matrix. Alternatively, it can be computed from the adjacency matrix using the Floyd-Warshall algorithm (Cormen et al., 2009). The shortest distance is thus a function of $\mathbf{A}$ alone, ignoring the node features $\mathbf{X}$ . Since the inputs to the GNN comprise the distance metric, $\mathbf{A}$ , and $\mathbf{X}$ , the node embeddings $\mathbf{Z}$ can equivalently be seen as a function of $\mathbf{A}$ , $\mathbf{X}$ , and noise. The noise in this case is characterized by the randomized anchor set selection.
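A minimal sketch of the anchor-distance computation described above; the anchor sampling and names are illustrative, and P-GNN's actual aggregation is richer than this:

```python
import numpy as np

def anchor_distances(A, num_anchors, rng):
    """Shortest-path distances from every node to a randomly chosen anchor
    set, computed with Floyd-Warshall on the adjacency matrix A
    (A[u][v] = 1 means an edge, 0 means no edge)."""
    n = A.shape[0]
    D = np.where(A > 0, 1.0, np.inf)
    np.fill_diagonal(D, 0.0)
    for k in range(n):  # Floyd-Warshall relaxation through intermediate node k
        D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])
    anchors = rng.choice(n, size=num_anchors, replace=False)
    return D[:, anchors]  # n x num_anchors distance features

# Path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
feats = anchor_distances(A, num_anchors=2, rng=np.random.default_rng(1))
```

The randomness of the anchor set is exactly the noise term mentioned in the proof: the distance features are a deterministic function of $\mathbf{A}$ once the anchors are fixed.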
+
+In variational auto-encoder models such as CVAEs, GVAEs, and Graphite (Tang et al., 2019; Kipf & Welling, 2016b; Grover et al., 2019), the latent representations $\mathbf{Z}$ are learned either via a mean-field approximation or are sampled independently of each other, i.e., $\mathbf{Z} \sim P(\cdot | \mathbf{A}, \mathbf{X})$ . We note that in the case of the mean-field approximation, the probability distribution is a Dirac delta. It is easy to see that the $\mathbf{Z}$ learned in this case is $\mathcal{G}$ -equivariant with respect to any permutation action on the nodes of the graph.
+
+
+
+Next, we restate and prove Corollary 2
+
+Corollary 2. The link prediction task between any two nodes $u, v \in V$ depends only on the most-expressive tuple representation $\Gamma^{\star}((u,v),\mathbf{A},\mathbf{X})$ . Moreover, $\Gamma^{\star}((u,v),\mathbf{A},\mathbf{X})$ always exists for any graph $(\mathbf{A},\mathbf{X})$ and nodes $(u,v)$ . Finally, given most-expressive node embeddings $Z^{\star}$ , there exists a function $f$ such that $\Gamma^{\star}((u,v),\mathbf{A},\mathbf{X}) = \mathbb{E}_{Z^{\star}}[f(Z_u^{\star},Z_v^{\star})], \forall u,v$ .
+
+Proof. It is a consequence of Corollary 3 with $|S| = 2$ .
+
+
+
+Next, we restate and prove Corollary 3
+
+Corollary 3. Sample $\mathbf{Z}$ according to Definition 12. Then, we can learn a $k$ -node structural representation of a subset of $k$ nodes $S \subseteq V$ , $|S| = k$ , simply by learning a function $f^{(k)}$ whose average $\Gamma(\vec{S}, \mathbf{A}, \mathbf{X}) = \mathbb{E}[f^{(k)}((Z_v)_{v \in S})]$ can be used to accurately predict $Y(\vec{S}, \mathbf{A}, \mathbf{X})$ .
+
+Proof. This proof is a direct application of Theorem 2 which shows the statistical equivalence between node embeddings and structural representations.
+
+Note that $f^{(k)}((Z_v)_{v \in S})$ can equivalently be represented as $f^{(k)}(\varphi((\Gamma(v, \mathbf{A}, \mathbf{X}))_{v \in S}, \epsilon_S))$ using Theorem 2, where the noise $\epsilon_S$ is marginalized from the noise distribution of Theorem 1, still preserving equivariance. Assuming a most powerful $f^{\prime(k)}$ , which is able to capture dependencies within the node set (Murphy et al., 2018) and the noise $\epsilon_S$ , we can replace the above with $f^{\prime(k)}(\varphi(\Gamma(\vec{S}, \mathbf{A}, \mathbf{X}), \epsilon_S))$ and subsequently compute an expectation over this function to eliminate the noise.
+
+
+
+Next, we restate and prove Corollary 4
+
+Corollary 4. Transductive and inductive learning are unrelated to the concepts of node embeddings and structural representations.
+
+Proof. By Theorem 2, we can build most-expressive any-size joint representations from node embeddings, and we can get node embeddings from any-size most-expressive joint representations.
+
+Hence, given enough computational resources, node embeddings and graph representations can achieve the same generalization performance over any task. This shows that they are unrelated to the concepts of transductive and inductive learning.
+
+Next, we restate and prove Corollary 5
+
+Corollary 5. A node embeddings sampling scheme can increase the structural representation power of GNNs.
+
+Proof. The proof follows as a direct consequence of Theorem 2, along with Murphy et al. (2019), which demonstrates RP-GNN as a concrete method to do so. More specifically, appending unique node ids to the node features uniformly at random makes the nodes unique, and can be seen as a strategy to obtain node embeddings that satisfy Definition 12 using GNNs. Averaging over multiple such node embeddings gives structural representations more powerful than those of standalone GNNs.
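A minimal sketch of the relational-pooling idea referenced above, with a one-layer linear message pass standing in (hypothetically) for a full GNN:

```python
import numpy as np

def rp_embedding_sample(A, X, rng):
    """One node-embedding sample in the spirit of RP-GNN: append a uniformly
    random assignment of unique one-hot node ids to the features, then run
    a toy one-layer linear message pass (self + neighbors)."""
    n = A.shape[0]
    ids = np.eye(n)[rng.permutation(n)]   # random unique one-hot ids
    H = np.concatenate([X, ids], axis=1)
    return (A + np.eye(n)) @ H

def rp_structural_representation(A, X, num_samples, seed=0):
    """Monte Carlo average over random id assignments restores invariance."""
    rng = np.random.default_rng(seed)
    samples = [rp_embedding_sample(A, X, rng) for _ in range(num_samples)]
    return np.mean(samples, axis=0)

A = np.array([[0, 1], [1, 0]], dtype=float)
X = np.array([[1.0], [2.0]])
Z = rp_structural_representation(A, X, num_samples=8)
```

Each individual sample is a (non-equivariant) node embedding; only the average over id assignments recovers a structural representation.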
+
+
+
+# 10 COLLIDING GRAPH NEURAL NETWORKS (CGNNS)
+
+In this section we propose a new variational auto-encoder procedure to obtain node embeddings using neural networks, denoted Colliding Graph Neural Networks (CGNNs). Our sole reason to propose a new auto-encoder method is that we want to test the expressiveness of node embedding auto-encoders; unfortunately, existing auto-encoders, such as Grover et al. (2019), do not properly account for the dependencies introduced by the colliding variables in the graphical model. In our experiments, shown later, we aggregate multiple node embeddings sampled from the CGNN to obtain structural representations of the corresponding nodes and node sets.
+
+Node Embedding Auto-encoder. In CGNNs, we adopt a latent variable approach to learn node embeddings. Corresponding to each evidence feature vector $\mathbf{X}_{i,\cdot} \in \mathbb{R}^k$ for all $i \in V$ , we introduce a latent variable $Z_{i,\cdot} \in \mathbb{R}^k$ . In addition, our graphical model also consists of observed variables $\mathbf{A}_{i,j,\cdot} \in \mathbb{R}^d$ for all $i,j \in V$ . These are related through the joint distribution $p(\mathbf{A},\mathbf{X}|\mathbf{Z}) = \prod_{(i,j) \in V \times V} p(\mathbf{A}_{i,j,\cdot}|Z_{i,\cdot},Z_{j,\cdot}) \prod_{h \in V} p(\mathbf{X}_{h,\cdot}|Z_{h,\cdot})$ , which is summarized by the Bayesian network in Figure 5 in the Appendix. Note that $\mathbf{A}_{i,j,\cdot}$ is a collider, since it is observed and influenced by two hidden variables, $Z_{i,\cdot}$ and $Z_{j,\cdot}$ . A neural network is used to learn the joint probability via MCMC, in an unsupervised fashion. The model learns the parameters of the MCMC transition kernel via an unrolled Gibbs sampler, a templated recurrent model (an MLP with shared weights across Gibbs sampling steps), partially inspired by Fan & Huang (2017).
+
+The unrolled Gibbs sampler starts from a normal distribution over the latent variables $Z_{i,\cdot}^{(0)}, \forall i \in V$ , with each $Z_{i,\cdot}^{(0)} \sim \mathcal{N}(0, \mathbf{I})$ independently, where $\mathbf{I}$ is the identity matrix. Subsequently, at time steps $t = 1, 2, \ldots$ , in accordance with the graphical model, each variable $Z_{i,\cdot}^{(t)}$ is sequentially sampled from its true distribution conditioned on all observed edges of its corresponding node $i$ , in addition to the most up-to-date latent variables $Z$ associated with its immediate neighbors. The reparametrization trick (Kingma & Welling, 2013) allows us to backpropagate through the unrolled Gibbs sampler. Algorithm 1 in the Appendix details our method. This sequential procedure affects the run-time of the technique, which we alleviate by performing parallel Gibbs sampling using parallel splashes (Gonzalez et al., 2011). Our unsupervised objective is reconstructing the noisy adjacency matrix.
+
+# 11 CGNN ALGORITHMS
+
+The procedure to generate node embeddings used by the CGNN is given by Algorithm 1. Structural representations are computed as an unbiased estimate of the expected value of a function of the node embedding samples as given by Algorithm 2 via a set of sets function (Meng et al., 2019).
+
+input: $\mathbf{A}$ , $\mathbf{X}$ , num-times
+output: $\mathbf{Z}$
+initialization: $Z_{u}\sim \mathcal{N}(0,\mathbf{I})$ for all $u\in V$
+while num-times $> 0$ do
+  for $u\in V$ do, with $\{Z_v\}$ and $\{X_v\}$ taken over all $v\in V$ such that $\mathbf{A}_{uv} = 1$ :
+    hidden $\leftarrow f(\{Z_v\})$ // $f$ is a permutation-invariant function
+    visible $\leftarrow g(\{X_v\})$ // $g$ is a permutation-invariant function
+    $Z_{u}\leftarrow \mathrm{MLP}(\text{hidden}, \text{visible}, X_{u}) + \text{Noise}$ // with the reparametrization trick; equivalently, $Z_{u}\sim P(\cdot \mid \{Z_{v}\},\{X_{v}\},\{\mathbf{A}_{uv}\},X_{u})$
+  end
+  num-times $\leftarrow$ num-times $- 1$
+end
+
+
+Algorithm 1: Node Embeddings from the Unrolled Gibbs Sampler
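A minimal NumPy sketch of one unrolled pass of the sampler in Algorithm 1; the mean and variance maps are stand-in linear layers, not the paper's MLP, and the dimensions are arbitrary illustrative choices:

```python
import numpy as np

def gibbs_step(A, X, Z, W_mu, W_sigma, rng):
    """One sweep of the unrolled Gibbs sampler: resample each Z_u from a
    Gaussian whose parameters depend on its neighbors' latest Z and X,
    via the reparametrization trick (Z = mu + sigma * eps)."""
    n, k = Z.shape
    for u in range(n):
        nbrs = np.flatnonzero(A[u])
        hidden = Z[nbrs].mean(axis=0) if nbrs.size else np.zeros(k)
        visible = X[nbrs].mean(axis=0) if nbrs.size else np.zeros(X.shape[1])
        inp = np.concatenate([hidden, visible, X[u]])
        mu, sigma = W_mu @ inp, np.exp(W_sigma @ inp)  # stand-ins for the MLP
        Z[u] = mu + sigma * rng.standard_normal(k)     # reparametrization trick
    return Z

rng = np.random.default_rng(0)
n, k, f = 4, 3, 2
A = np.array([[0, 1, 1, 0], [1, 0, 0, 1], [1, 0, 0, 1], [0, 1, 1, 0]], float)
X = rng.standard_normal((n, f))
Z = rng.standard_normal((n, k))                        # Z^(0) ~ N(0, I)
W_mu = 0.1 * rng.standard_normal((k, k + 2 * f))
W_sigma = 0.1 * rng.standard_normal((k, k + 2 * f))
for _ in range(5):                                     # num-times sweeps
    Z = gibbs_step(A, X, Z, W_mu, W_sigma, rng)
```

Because each resampled $Z_u$ is an affine function of Gaussian noise, gradients can flow through the whole unrolled chain into the transition-kernel parameters.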
+Figure 5: Latent variable model for Colliding Graph Neural Networks. Observed evidence variables are shown in gray.
+
+# 12 FURTHER RESULTS
+
+In Table 2 we provide the results on node classification, link prediction and triad prediction on the Citeseer dataset.
+
+# 13 DESCRIPTION OF DATASETS AND EXPERIMENTAL SETUP
+
+A detailed description of the datasets and the splits is given in Table 3. Our implementation is in PyTorch using Python 3.6. The implementations for GIN and RP-GIN use the PyTorch Geometric framework. We used two convolutional layers for GIN and RP-GIN since this had the best performance in our tasks (we tested with $2/3/4/5$ convolutional layers). Also, since we perform tasks based on node representations rather than graph representations, we ignore the graph-wide readout. For GIN and RP-GIN, the embedding dimension was set to 256 at both convolutional layers. All MLPs, across all models, have 256 neurons. Optimization is performed with the Adam optimizer (Kingma & Ba, 2014). For GIN and RP-GIN the learning rate was tuned in $\{0.01, 0.001, 0.0001, 0.00001\}$ , whereas for CGNNs the learning rate was set to 0.001. Training was performed on Titan V GPUs. For more details, refer to the code provided.
+
+input: $\{\mathbf{Z}^{(i)}\}_{i = 1}^{m}$ , $k$ // node embedding samples, node set size
+output: $g(\{\mathbf{Z}\}_{\mathcal{S}})$ for $\mathcal{S} = \{S_1\}_{\forall S_1\subset V:|S_1| = k}$ // structural representations; $\mathcal{S}$ is a set of sets
+initialization: $g(\{\mathbf{Z}\}_{\mathcal{S}}) = \{\vec{0}\}$
+for $i\in [1,m]$ do
+  for $S\in \mathcal{S}$ do
+    $g(\{Z_u\}_{u\in S})\leftarrow g(\{Z_u\}_{u\in S}) + \frac{1}{m} f(\{Z_u^{(i)}\}_{u\in S})$ // $f$ is a permutation-invariant function
+  end
+end
+
+Algorithm 2: Structural Representations from the Node Embedding Samples
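The Monte Carlo averaging in Algorithm 2 can be sketched as follows; the pooling function `f` is an illustrative stand-in for the learned permutation-invariant function:

```python
import numpy as np
from itertools import combinations

def structural_representations(Z_samples, k, f=lambda zs: zs.sum(axis=0)):
    """Estimate the k-node structural representation of every size-k node
    set as the average of a permutation-invariant f over m embedding samples."""
    m = len(Z_samples)
    n = Z_samples[0].shape[0]
    reps = {}
    for S in combinations(range(n), k):  # S ranges over the set of sets
        reps[S] = sum(f(Z[list(S)]) for Z in Z_samples) / m
    return reps

rng = np.random.default_rng(0)
Z_samples = [rng.standard_normal((4, 3)) for _ in range(10)]  # m = 10 samples
reps = structural_representations(Z_samples, k=2)
```

The average over samples is the unbiased estimate of the expectation in Theorem 2; more samples tighten the estimate.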
+
+Table 2: Micro F1 score on three distinct tasks over the Citeseer dataset, averaged over 12 runs with standard deviation in parentheses. The number in parentheses beside the model name indicates the number of Monte Carlo samples used in the estimation of the structural representation. MC-SVD†(1) denotes the SVD procedure run until convergence with one Monte Carlo sample for the representation. Bold values show the maximum empirical average; multiple values are bolded when their standard deviations overlap with the maximum average.
+
+| Model | Node Classification | Link Prediction | Triad Prediction |
| --- | --- | --- | --- |
| Random | 0.167 | 0.500 | 0.250 |
| GIN(1) | 0.701(0.038) | 0.543(0.024) | 0.309(0.009) |
| GIN(5) | 0.706(0.044) | 0.525(0.040) | 0.311(0.022) |
| GIN(20) | 0.718(0.034) | 0.530(0.023) | 0.306(0.012) |
| RP-GIN(1) | 0.719(0.031) | 0.541(0.034) | 0.313(0.005) |
| RP-GIN(5) | 0.703(0.026) | 0.539(0.025) | 0.307(0.013) |
| RP-GIN(20) | 0.724(0.020) | 0.551(0.030) | 0.307(0.017) |
| 1-2-3 GNN(1) | 0.189(0.026) | 0.499(0.002) | 0.306(0.010) |
| 1-2-3 GNN(5) | 0.196(0.042) | 0.506(0.018) | 0.310(0.012) |
| 1-2-3 GNN(20) | 0.192(0.029) | 0.502(0.014) | 0.310(0.020) |
| MC-SVD†(1) | 0.733(0.007) | 0.552(0.021) | 0.304(0.011) |
| MC-SVD(1) | 0.734(0.007) | 0.562(0.017) | 0.297(0.015) |
| MC-SVD(5) | 0.739(0.006) | 0.556(0.022) | 0.302(0.009) |
| MC-SVD(20) | 0.737(0.005) | 0.565(0.020) | 0.299(0.015) |
| CGNN(1) | 0.689(0.010) | 0.598(0.024) | 0.305(0.009) |
| CGNN(5) | 0.713(0.009) | 0.627(0.048) | 0.301(0.013) |
| CGNN(20) | 0.721(0.008) | 0.654(0.049) | 0.296(0.008) |
+
+Table 3: Summary of the datasets
+
+| CHARACTERISTIC | CORA | CITESEER | PUBMED | PPI |
| --- | --- | --- | --- | --- |
| Number of Vertices | 2708 | 3327 | 19717 | 56944, 2373a |
| Number of Edges | 10556 | 9104 | 88648 | 819994, 41000a |
| Number of Vertex Features | 1433 | 3703 | 500 | 50 |
| Number of Classes | 7 | 6 | 3 | 121b |
| Number of Training Vertices | 1208 | 1827 | 18217 | 44906c |
| Number of Validation Vertices | 500 | 500 | 500 | 6514c |
| Number of Test Vertices | 1000 | 1000 | 1000 | 5524c |
+
+a The PPI dataset comprises several graphs, so the quantities marked with an “a” represent the average characteristic over all graphs.
+b For PPI, there are 121 targets, each taking values in $\{0,1\}$
+c All of the training nodes come from 20 graphs, while the validation and test nodes come from two graphs each; none of these graphs are used during training.
\ No newline at end of file
diff --git a/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/images.zip b/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b859e045a59b39fa276d1ec5b341c62696136334
--- /dev/null
+++ b/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b05895595b99fe1134c6a340aedd442b9824cb7bcd8ccb703ccd5514fb63fb57
+size 429756
diff --git a/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/layout.json b/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..29c6d2c66869959af1d0db2d6772d79e5a58aca1
--- /dev/null
+++ b/ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d0e2ab47fa5e5310e639ba8fdb5756e115cd6a1ba47041e998047e45ada0ee8e
+size 1126869
diff --git a/ontheglobalconvergenceoftrainingdeeplinearresnets/9ebe3cd7-2a94-45a9-87cd-1cac7a4d4898_content_list.json b/ontheglobalconvergenceoftrainingdeeplinearresnets/9ebe3cd7-2a94-45a9-87cd-1cac7a4d4898_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..6e4310e44b43ba79267cba25bffca48d8c1786e9
--- /dev/null
+++ b/ontheglobalconvergenceoftrainingdeeplinearresnets/9ebe3cd7-2a94-45a9-87cd-1cac7a4d4898_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:791be2aaa0fb61af45a8349b1592fec77b2f819bde9dd1e2784b09f53cfc3aa5
+size 213602
diff --git a/ontheglobalconvergenceoftrainingdeeplinearresnets/9ebe3cd7-2a94-45a9-87cd-1cac7a4d4898_model.json b/ontheglobalconvergenceoftrainingdeeplinearresnets/9ebe3cd7-2a94-45a9-87cd-1cac7a4d4898_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..550a44f38b1d552d18dc4c52b7bd9650ff5c9f48
--- /dev/null
+++ b/ontheglobalconvergenceoftrainingdeeplinearresnets/9ebe3cd7-2a94-45a9-87cd-1cac7a4d4898_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a935740f77087210c38bc9a736ea899570c7dfffbccef10fb5851fc6f5094d2b
+size 242996
diff --git a/ontheglobalconvergenceoftrainingdeeplinearresnets/9ebe3cd7-2a94-45a9-87cd-1cac7a4d4898_origin.pdf b/ontheglobalconvergenceoftrainingdeeplinearresnets/9ebe3cd7-2a94-45a9-87cd-1cac7a4d4898_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a006c22cf9e058f5d5fc855e4492483a7c21d736
--- /dev/null
+++ b/ontheglobalconvergenceoftrainingdeeplinearresnets/9ebe3cd7-2a94-45a9-87cd-1cac7a4d4898_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0c731f4b68f2fc56fea9811dca9c7e76ccf7b3426bc6856d16d7d6590157150e
+size 487655
diff --git a/ontheglobalconvergenceoftrainingdeeplinearresnets/full.md b/ontheglobalconvergenceoftrainingdeeplinearresnets/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c3d693e0327f34a3199f1d3695c397abdba052fd
--- /dev/null
+++ b/ontheglobalconvergenceoftrainingdeeplinearresnets/full.md
@@ -0,0 +1,1059 @@
+# ON THE GLOBAL CONVERGENCE OF TRAINING DEEP LINEAR RESNETS
+
+Difan Zou
+
+Department of Computer Science
+
+University of California, Los Angeles
+
+knowzou@cs.ucla.edu
+
+Philip M. Long
+
+Google
+
+plong@google.com
+
+Quanquan Gu
+
+Department of Computer Science
+
+University of California, Los Angeles
+
+qgu@cs.ucla.edu
+
+# ABSTRACT
+
+We study the convergence of gradient descent (GD) and stochastic gradient descent (SGD) for training $L$ -hidden-layer linear residual networks (ResNets). We prove that for training deep residual networks with certain linear transformations at input and output layers, which are fixed throughout training, both GD and SGD with zero initialization on all hidden weights can converge to the global minimum of the training loss. Moreover, when specializing to appropriate Gaussian random linear transformations, GD and SGD provably optimize wide enough deep linear ResNets. Compared with the global convergence result of GD for training standard deep linear networks (Du & Hu, 2019), our condition on the neural network width is sharper by a factor of $O(\kappa L)$ , where $\kappa$ denotes the condition number of the covariance matrix of the training data. We further propose modified identity input and output transformations, and show that a $(d + k)$ -wide neural network is sufficient to guarantee the global convergence of GD/SGD, where $d, k$ are the input and output dimensions respectively.
+
+# 1 INTRODUCTION
+
+Despite the remarkable power of deep neural networks (DNNs) trained using stochastic gradient descent (SGD) in many machine learning applications, theoretical understanding of the properties of this algorithm, or even plain gradient descent (GD), remains limited. Many key properties of the learning process for such systems are also present in the idealized case of deep linear networks. For example, (a) the objective function is not convex; (b) errors back-propagate; and (c) there is potential for exploding and vanishing gradients. In addition to enabling study of systems with these properties in a relatively simple setting, analysis of deep linear networks also facilitates the scientific understanding of deep learning because using linear networks can control for the effect of architecture choices on the expressiveness of networks (Arora et al., 2018; Du & Hu, 2019). For these reasons, deep linear networks have received extensive attention in recent years.
+
+One important line of theoretical investigation of deep linear networks concerns optimization landscape analysis (Kawaguchi, 2016; Hardt & Ma, 2016; Freeman & Bruna, 2016; Lu & Kawaguchi, 2017; Yun et al., 2018; Zhou & Liang, 2018), where major findings include that any critical point of a deep linear network with square loss function is either a global minimum or a saddle point, and identifying conditions on the weight matrices that exclude saddle points. Beyond landscape analysis, another research direction aims to establish convergence guarantees for optimization algorithms (e.g. GD, SGD) for training deep linear networks. Arora et al. (2018) studied the trajectory of gradient flow and showed that depth can help accelerate the optimization of deep linear networks. Ji & Telgarsky (2019); Gunasekar et al. (2018) investigated the implicit bias of GD for training deep linear networks and deep linear convolutional networks respectively. More recently, Bartlett et al. (2019); Arora et al. (2019a); Shamir (2018); Du & Hu (2019) analyzed the optimization trajectory of GD for training deep linear networks and proved global convergence rates under certain assumptions on the training data, initialization, and neural network structure.
+
+Inspired by the great empirical success of residual networks (ResNets), Hardt & Ma (2016) considered identity parameterizations in deep linear networks, i.e., parameterizing each layer's weight matrix as $\mathbf{I} + \mathbf{W}$ , which leads to the so-called deep linear ResNets. In particular, Hardt & Ma (2016) established the existence of small norm solutions for deep residual networks with sufficiently large depth $L$ , and proved that there are no critical points other than the global minimum when the maximum spectral norm among all weight matrices is smaller than $O(1 / L)$ . Motivated by this intriguing finding, Bartlett et al. (2019) studied the convergence rate of GD for training deep linear networks with identity initialization, which is equivalent to zero initialization in deep linear ResNets. They assumed whitened data and showed that GD can converge to the global minimum if (i) the training loss at the initialization is very close to optimal or (ii) the regression matrix $\Phi$ is symmetric and positive definite. (In fact, they proved that, when $\Phi$ is symmetric and has negative eigenvalues, GD for linear ResNets with zero-initialization does not converge.) Arora et al. (2019a) showed that GD converges under substantially weaker conditions, which can be satisfied by random initialization schemes. The convergence theory of stochastic gradient descent for training deep linear ResNets is largely missing; it remains unclear under which conditions SGD can be guaranteed to find the global minimum.
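As a minimal illustration of the setting above, the following sketch trains a deep linear ResNet, with each hidden layer parameterized as $\mathbf{I} + \mathbf{W}_l$ and all $\mathbf{W}_l$ initialized to zero, by full-batch GD; the dimensions, scalings, and step size are illustrative choices for the sketch, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, k, L = 64, 5, 16, 3, 4
X = rng.standard_normal((n, d))
Y = X @ rng.standard_normal((d, k))                   # a realizable linear target
W_in = rng.standard_normal((m, d)) / np.sqrt(d)       # fixed input transformation
W_out = rng.standard_normal((k, m)) / np.sqrt(m)      # fixed output transformation
Ws = [np.zeros((m, m)) for _ in range(L)]             # zero init on hidden weights

def forward(X):
    """Deep linear ResNet: out = W_out (I + W_L) ... (I + W_1) W_in x."""
    Hs = [X @ W_in.T]
    for W in Ws:
        Hs.append(Hs[-1] @ (np.eye(m) + W).T)         # residual layer (I + W_l)
    return Hs, Hs[-1] @ W_out.T

def loss(out):
    return 0.5 * np.sum((out - Y) ** 2) / n

initial = loss(forward(X)[1])
lr = 0.01
for _ in range(800):                                  # plain full-batch GD
    Hs, out = forward(X)
    G = ((out - Y) / n) @ W_out                       # gradient w.r.t. H_L
    grads = []
    for l in reversed(range(L)):
        grads.append(G.T @ Hs[l])                     # dLoss/dW_l
        G = G @ (np.eye(m) + Ws[l])                   # backprop to H_{l-1}
    for l, g in zip(reversed(range(L)), grads):
        Ws[l] -= lr * g
final = loss(forward(X)[1])                           # loss decreases from `initial`
```

Note that with zero initialization the network starts as the pure linear map $\mathbf{W}_{out}\mathbf{W}_{in}$, so the hidden gradients are nonzero at step 0 and training can make progress immediately.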
+
+In this paper, we establish the global convergence of both GD and SGD for training deep linear ResNets without any condition on the training data. More specifically, we consider the training of $L$ -hidden-layer deep linear ResNets with fixed linear transformations at input and output layers. We prove that under certain conditions on the input and output linear transformations, GD and SGD can converge to the global minimum of the training loss function. Moreover, when specializing to appropriate Gaussian random linear transformations, we show that, as long as the neural network is wide enough, both GD and SGD with zero initialization on all hidden weights can find the global minimum. There are two main ingredients of our proof: (i) establishing restricted gradient bounds and a smoothness property; and (ii) proving that these properties hold along the optimization trajectory and further lead to global convergence. We point out that the second aspect is particularly challenging for SGD due to the uncertainty of its optimization trajectory caused by stochastic gradients. We summarize our main contributions as follows:
+
+- We prove the global convergence of GD and SGD for training deep linear ResNets. Specifically, we derive a generic condition on the input and output linear transformations, under which both GD and SGD with zero initialization on all hidden weights can find global minima. Based on this condition, one can design a variety of input and output transformations for training deep linear ResNets.
+- When applying appropriate Gaussian random linear transformations, we show that as long as the neural network width satisfies $m = \Omega (kr\kappa^2)$ , with high probability, GD can converge to the global minimum up to an $\epsilon$ -error within $O(\kappa \log (1 / \epsilon))$ iterations, where $k$ and $r$ are the output dimension and the rank of the training data matrix $\mathbf{X}$ respectively, and $\kappa = \| \mathbf{X}\|_2^2 /\sigma_r^2 (\mathbf{X})$ denotes the condition number of the covariance matrix of the training data. Compared with previous convergence results for training deep linear networks from Du & Hu (2019), our condition on the neural network width is independent of the network depth $L$ , and is strictly better by a factor of $O(L\kappa)$ .
+- Using the same Gaussian random linear transformations, we also establish a convergence guarantee of SGD for training deep linear ResNets. We show that if the neural network width satisfies $m = \widetilde{\Omega} \left( kr\kappa^2 \log^2(1/\epsilon) \cdot n^2 / B^2 \right)$ , then with constant probability, SGD converges to the global minimum up to an $\epsilon$ -error within $\widetilde{O}\left(\kappa^2 \epsilon^{-1} \log(1/\epsilon) \cdot n / B\right)$ iterations, where $n$ is the training sample size and $B$ is the minibatch size of the stochastic gradient. This is the first global convergence rate of SGD for training deep linear networks. Moreover, when the global minimum of the training loss is 0, we prove that SGD achieves a linear rate of convergence to the global minimum, and the condition on the neural network width does not depend on the target error $\epsilon$ .
+
+As alluded to above, we analyze networks with $d$ inputs, $k$ outputs, and $m \geqslant \max\{d, k\}$ nodes in each hidden layer. Linear transformations that are fixed throughout training map the inputs to the first hidden layer, and the last hidden layer to the outputs. We prove that our bounds hold with high probability when these input and output transformations are randomly generated from Gaussian distributions. If, instead, the input transformation simply copies the inputs onto the first $d$ components of the first hidden layer, and the output transformation takes the first $k$ components of the last hidden layer, then our analysis does not provide a guarantee. There is a good reason for this: a slight modification of a lower bound argument from Bartlett et al. (2019) demonstrates that GD may fail to converge in this case. However, we describe a similarly simple, deterministic choice of input and output transformations such that wide enough networks always converge. The resulting condition on the network width is weaker than that for Gaussian random transformations, and thus improves on the corresponding convergence guarantee for linear networks, which, in addition to requiring wider networks, only holds with high probability for random transformations.
+
+# 1.1 ADDITIONAL RELATED WORK
+
+In addition to the work discussed above, a large body of work has emerged on the optimization of neural networks with nonlinear activation functions. We briefly review it in this subsection.
+
+It is widely believed that the training loss landscape of nonlinear neural networks is highly nonconvex and nonsmooth (e.g., neural networks with ReLU/LeakyReLU activations), and thus it is fundamentally difficult to characterize the optimization trajectory and convergence performance of GD and SGD. Some early work (Andoni et al., 2014; Daniely, 2017) showed that sufficiently wide (polynomial in the sample size $n$ ) neural networks trained by GD/SGD can learn a class of continuous functions (e.g., polynomial functions) in polynomial time. However, those works only consider training some of the neural network weights rather than all of them (e.g., the input and output layers). In addition, a series of papers investigated the convergence of gradient descent for training shallow networks (typically 2-layer networks) under certain assumptions on the training data and initialization scheme (Tian, 2017; Du et al., 2018b; Brutzkus et al., 2018; Zhong et al., 2017; Li & Yuan, 2017; Zhang et al., 2018). However, the assumptions made in these works are rather strong and not consistent with practice. For example, Tian (2017); Du et al. (2018b); Zhong et al. (2017); Li & Yuan (2017); Zhang et al. (2018) assumed that the label of each training data point is generated by a teacher network with the same architecture as the learned network, and Brutzkus et al. (2018) assumed that the training data is linearly separable. Li & Liang (2018) addressed this drawback; they proved that for a two-layer ReLU network with cross-entropy loss, as long as the neural network is sufficiently wide, under mild assumptions on the training data, SGD with commonly-used Gaussian random initialization can achieve nearly zero expected error. Du et al. (2018c) proved similar results for GD when training two-layer ReLU networks with square loss. Beyond shallow neural networks, Allen-Zhu et al. (2019); Du et al. (2019); Zou et al. (2019) generalized the global convergence results to multi-layer over-parameterized ReLU networks. Chizat et al. (2019) showed that training over-parameterized neural networks actually belongs to a so-called "lazy training" regime, in which the model behaves like its linearization around the initialization; furthermore, the parameter scaling is more essential than over-parameterization for keeping the model within the "lazy training" regime. Along this line of research, several follow-up works have been conducted. Oymak & Soltanolkotabi (2019); Zou & Gu (2019); Su & Yang (2019); Kawaguchi & Huang (2019) improved the convergence rate and over-parameterization condition for both shallow and deep networks. Arora et al. (2019b) showed that training a sufficiently wide deep neural network is almost equivalent to kernel regression with the neural tangent kernel (NTK) proposed in Jacot et al. (2018). Allen-Zhu et al. (2019); Du et al. (2019); Zhang et al. (2019) proved global convergence for training deep ReLU ResNets. Frei et al. (2019) proved the convergence of GD for training deep ReLU ResNets under an over-parameterization condition that is only logarithmic in the depth of the network, which partially explains why deep residual networks are preferable to fully connected ones. However, all the results in Allen-Zhu et al. (2019); Du et al. (2019); Zhang et al. (2019); Frei et al. (2019) require a very stringent condition on the network width, which typically has a high-degree polynomial dependence on the training sample size $n$ . In addition, the results in Allen-Zhu et al. (2019); Zhang et al. (2019) require that all data points are separated by a positive distance and have unit norm. As shown in Du & Hu (2019) and as will be proved in this paper, for deep linear (residual) networks, no assumption on the training data is needed, and the condition on the network width is significantly milder and independent of the sample size $n$ . While achieving a stronger result for linear networks than for nonlinear ones is not surprising, we believe that our analysis, conducted in the idealized deep linear case, can provide useful insights for understanding optimization in the nonlinear case.
+
+Two concurrent works analyze gradient descent applied to deep linear (residual) networks (Hu et al., 2020; Wu et al., 2019). Hu et al. (2020) consider deep linear networks with orthogonal initialization, and Wu et al. (2019) consider zero initialization on the last layer and identity initialization for the rest of the layers, which is similar to our setting. However, there are several differences between their work and ours. One major difference is that Hu et al. (2020) and Wu et al. (2019) only prove global convergence for GD, while our results cover both GD and SGD. In addition, Hu et al. (2020) focus on proving the global convergence of GD for sufficiently wide networks, whereas we provide a generic condition on the input and output linear transformations that ensures global convergence. Wu et al. (2019) assume whitened data and prove an $O(L^3 \log(1/\epsilon))$ bound on the number of iterations required for GD to converge, whereas we establish an $O(\log(1/\epsilon))$ bound.
+
+# 1.2 NOTATION
+
+We use lower case, lower case bold face, and upper case bold face letters to denote scalars, vectors, and matrices, respectively. For a positive integer $k$ , we denote the set $\{1,\dots ,k\}$ by $[k]$ . Given a vector $\mathbf{x}$ , we use $\| \mathbf{x}\| _2$ to denote its $\ell_2$ norm. We use $N(\mu ,\sigma^2)$ to denote the Gaussian distribution with mean $\mu$ and variance $\sigma^2$ . Given a matrix $\mathbf{X}$ , we denote by $\| \mathbf{X}\| _F$ , $\| \mathbf{X}\| _2$ and $\| \mathbf{X}\|_{2,\infty}$ its Frobenius norm, spectral norm, and $\ell_{2,\infty}$ norm (maximum $\ell_2$ norm over its columns), respectively. In addition, we denote by $\sigma_{\mathrm{min}}(\mathbf{X})$ , $\sigma_{\mathrm{max}}(\mathbf{X})$ and $\sigma_r(\mathbf{X})$ the smallest, largest, and $r$ -th largest singular values of $\mathbf{X}$ , respectively. For a square matrix $\mathbf{A}$ , we denote by $\lambda_{\mathrm{min}}(\mathbf{A})$ and $\lambda_{\mathrm{max}}(\mathbf{A})$ the smallest and largest eigenvalues of $\mathbf{A}$ , respectively. For two sequences $\{a_k\}_{k\geqslant 0}$ and $\{b_k\}_{k\geqslant 0}$ , we write $a_{k} = O(b_{k})$ if $a_{k}\leqslant C_{1}b_{k}$ for some absolute constant $C_1$ , and $a_{k} = \Omega (b_{k})$ if $a_{k}\geqslant C_{2}b_{k}$ for some absolute constant $C_2$ . We use $\tilde{O} (\cdot)$ and $\tilde{\Omega} (\cdot)$ to hide logarithmic factors in $O(\cdot)$ and $\Omega (\cdot)$ respectively, except those depending on the target error $\epsilon$ .
+
+# 2 PROBLEM SETUP
+
+Model. In this work, we consider deep linear ResNets defined as follows:
+
+$$
+f _ {\mathbf {W}} (\mathbf {x}) = \mathbf {B} (\mathbf {I} + \mathbf {W} _ {L}) \dots (\mathbf {I} + \mathbf {W} _ {1}) \mathbf {A} \mathbf {x},
+$$
+
+where $\mathbf{x} \in \mathbb{R}^d$ is the input, $f_{\mathbf{W}}(\mathbf{x}) \in \mathbb{R}^k$ is the corresponding output, $\mathbf{A} \in \mathbb{R}^{m \times d}$ , $\mathbf{B} \in \mathbb{R}^{k \times m}$ denote the weight matrices of input and output layers respectively, and $\mathbf{W}_1, \ldots, \mathbf{W}_L \in \mathbb{R}^{m \times m}$ denote the weight matrices of all hidden layers. The formulation of ResNets in our paper is different from that in Hardt & Ma (2016); Bartlett et al. (2019), where the hidden layers have the same width as the input and output layers. In our formulation, we allow the hidden layers to be wider by choosing the dimensions of $\mathbf{A}$ and $\mathbf{B}$ appropriately.
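To make the model concrete, here is a minimal numpy sketch of the forward pass (the dimensions below are arbitrary illustrative choices, not values from the paper); note that with zero-initialized hidden weights, the output reduces to $\mathbf{B}\mathbf{A}\mathbf{x}$:

```python
import numpy as np

def resnet_forward(A, W_list, B, X):
    """Compute B (I + W_L) ... (I + W_1) A X for a deep linear ResNet."""
    H = A @ X                  # input transformation, shape (m, n)
    for W in W_list:
        H = H + W @ H          # residual update (I + W_l) H
    return B @ H               # output transformation, shape (k, n)

# Illustrative sizes: d inputs, k outputs, width m, depth L, n samples.
d, k, m, L, n = 3, 2, 8, 4, 5
rng = np.random.default_rng(0)
A = rng.normal(0, 1 / np.sqrt(m), size=(m, d))  # entries ~ N(0, 1/m)
B = rng.normal(0, 1 / np.sqrt(k), size=(k, m))  # entries ~ N(0, 1/k)
W = [np.zeros((m, m)) for _ in range(L)]        # zero initialization
X = rng.normal(size=(d, n))

# With zero-initialized hidden weights, the network reduces to B A X.
assert np.allclose(resnet_forward(A, W, B, X), B @ A @ X)
```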
+
+Loss Function. Let $\{(\mathbf{x}_i,\mathbf{y}_i)\}_{i = 1,\ldots ,n}$ be the training dataset, $\mathbf{X} = (\mathbf{x}_1,\dots ,\mathbf{x}_n)\in \mathbb{R}^{d\times n}$ be the input data matrix and $\mathbf{Y} = (\mathbf{y}_1,\dots ,\mathbf{y}_n)\in \mathbb{R}^{k\times n}$ be the corresponding output label matrix. We assume the data matrix $\mathbf{X}$ is of rank $r$ , where $r$ can be smaller than $d$ . Let $\mathbf{W} = \{\mathbf{W}_1,\dots ,\mathbf{W}_L\}$ be the collection of weight matrices of all hidden layers. For an example $(\mathbf{x},\mathbf{y})$ , we consider the square loss defined by
+
+$$
+\ell (\mathbf {W}; \mathbf {x}, \mathbf {y}) = \frac {1}{2} \| f _ {\mathbf {W}} (\mathbf {x}) - \mathbf {y} \| _ {2} ^ {2}.
+$$
+
+Then the training loss over the training dataset takes the following form
+
+$$
+L (\mathbf {W}) := \sum_ {i = 1} ^ {n} \ell (\mathbf {W}; \mathbf {x} _ {i}, \mathbf {y} _ {i}) = \frac {1}{2} \| \mathbf {B} (\mathbf {I} + \mathbf {W} _ {L}) \dots (\mathbf {I} + \mathbf {W} _ {1}) \mathbf {A} \mathbf {X} - \mathbf {Y} \| _ {F} ^ {2}.
+$$
+
+Algorithm. Similar to Allen-Zhu et al. (2019); Zhang et al. (2019), we consider algorithms that only train the hidden-layer weights $\mathbf{W}$ while leaving the input and output weights $\mathbf{A}$ and $\mathbf{B}$ unchanged throughout training. For the hidden weights, we follow a similar idea to Bartlett et al. (2019) and adopt zero initialization (which is equivalent to identity initialization for standard linear networks). We would also like to point out that at initialization, all the hidden layers automatically satisfy the so-called balancedness condition (Arora et al., 2018; 2019a; Du et al., 2018a). The optimization algorithms, including GD and SGD, are summarized in Algorithm 1.
+
+Algorithm 1 (Stochastic) Gradient descent with zero initialization
+1: input: Training data $\{\mathbf{x}_i,\mathbf{y}_i\}_{i\in [n]}$ , step size $\eta$ , total number of iterations $T$ , minibatch size $B$ , input and output weight matrices $\mathbf{A}$ and $\mathbf{B}$ .
+2: initialization: For all $l\in [L]$ , each entry of the weight matrix $\mathbf{W}_l^{(0)}$ is initialized as 0.
+Gradient Descent
+3: for $t = 0,\dots ,T - 1$ do
+4: $\mathbf{W}_l^{(t + 1)} = \mathbf{W}_l^{(t)} - \eta \nabla_{\mathbf{W}_l}L(\mathbf{W}^{(t)})$ for all $l\in [L]$
+5: end for
+6: output: $\mathbf{W}^{(T)}$
+Stochastic Gradient Descent
+7: for $t = 0,\ldots ,T - 1$ do
+8: Uniformly sample a subset $\mathcal{B}^{(t)}$ of size $B$ from the training data without replacement.
+9: For all $l \in [L]$ , compute the stochastic gradient $\mathbf{G}_l^{(t)} = \frac{n}{B}\sum_{i\in \mathcal{B}^{(t)}}\nabla_{\mathbf{W}_l}\ell (\mathbf{W}^{(t)};\mathbf{x}_i,\mathbf{y}_i)$
+10: For all $l\in [L]$ , $\mathbf{W}_l^{(t + 1)} = \mathbf{W}_l^{(t)} - \eta \mathbf{G}_l^{(t)}$
+11: end for
+12: output: $\{\mathbf{W}^{(t)}\}_{t = 0,\dots ,T}$
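For illustration, the GD branch of Algorithm 1 can be sketched in numpy as follows; the closed-form gradient $\nabla_{\mathbf{W}_l} L(\mathbf{W})$ follows from the chain rule applied to the product of layers. The problem sizes and step size below are illustrative choices, not the ones prescribed by our theorems:

```python
import numpy as np

def loss_and_grads(A, W, B, X, Y):
    """Training loss L(W) and its gradients w.r.t. every hidden weight matrix."""
    Hs = [A @ X]                              # H_0 = A X
    for Wl in W:
        Hs.append(Hs[-1] + Wl @ Hs[-1])       # H_l = (I + W_l) H_{l-1}
    R = B @ Hs[-1] - Y                        # residual of the network output
    loss = 0.5 * np.linalg.norm(R) ** 2
    grads, S = [None] * len(W), B.copy()      # S = B (I + W_L) ... (I + W_{l+1})
    for l in range(len(W) - 1, -1, -1):
        grads[l] = S.T @ R @ Hs[l].T          # grad w.r.t. W_{l+1} (0-indexed)
        S = S + S @ W[l]                      # absorb (I + W_l) into the suffix
    return loss, grads

rng = np.random.default_rng(0)
d = k = 2; m = 6; depth = 3; n = 20
A = rng.normal(0, 1 / np.sqrt(m), (m, d))     # entries ~ N(0, 1/m)
B = rng.normal(0, 1 / np.sqrt(k), (k, m))     # entries ~ N(0, 1/k)
X = rng.normal(size=(d, n)); Y = rng.normal(size=(k, n))
W = [np.zeros((m, m)) for _ in range(depth)]  # zero initialization (line 2)

eta, losses = 1e-4, []
for t in range(2000):                         # gradient descent (lines 3-5)
    loss, grads = loss_and_grads(A, W, B, X, Y)
    losses.append(loss)
    W = [Wl - eta * g for Wl, g in zip(W, grads)]
assert losses[-1] < losses[0]                 # the training loss has decreased
```

The SGD branch (lines 7-11) uses the same gradient routine, evaluated on a random minibatch and rescaled by $n/B$.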
+
+# 3 MAIN THEORY
+
+It is clear that the expressive power of deep linear ResNets is identical to that of a simple linear model, which implies that the global minimum of the training loss for deep linear ResNets cannot be smaller than that of the linear model. Therefore, our focus is to show that GD/SGD can converge to a point $\mathbf{W}^*$ with
+
+$$
+L(\mathbf{W}^{*}) = \min_{\boldsymbol {\Theta}\in \mathbb{R}^{k\times d}}\frac{1}{2}\| \boldsymbol {\Theta}\mathbf{X} - \mathbf{Y}\|_{F}^{2},
+$$
+
+which is exactly the global minimum of the linear regression problem. In what follows, we show that with appropriate input and output transformations, both GD and SGD can converge to the global minimum.
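This benchmark value is the ordinary least-squares optimum and is attained in closed form at $\boldsymbol{\Theta}^* = \mathbf{Y}\mathbf{X}^{+}$, where $\mathbf{X}^{+}$ is the Moore-Penrose pseudoinverse; a small numerical check (with arbitrary illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 5, 3, 50
X = rng.normal(size=(d, n)); Y = rng.normal(size=(k, n))

# min over Theta of 0.5 || Theta X - Y ||_F^2 is attained at Theta* = Y X^+.
Theta_star = Y @ np.linalg.pinv(X)
L_star = 0.5 * np.linalg.norm(Theta_star @ X - Y) ** 2

# First-order optimality: the residual is orthogonal to the rows of X.
assert np.allclose((Theta_star @ X - Y) @ X.T, 0)
# The optimum is no larger than the loss of the trivial predictor Theta = 0.
assert L_star <= 0.5 * np.linalg.norm(Y) ** 2
```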
+
+# 3.1 CONVERGENCE GUARANTEE OF GRADIENT DESCENT
+
+The following theorem establishes the global convergence of GD for training deep linear ResNets.
+
+Theorem 3.1. There are absolute constants $C$ and $C_1$ such that, if the input and output weight matrices satisfy
+
+$$
+\frac {\sigma_ {\operatorname* {m i n}} ^ {2} (\mathbf {A}) \sigma_ {\operatorname* {m i n}} ^ {2} (\mathbf {B})}{\| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2}} \geqslant C \frac {\| \mathbf {X} \| _ {2} \left(L \left(\mathbf {W} ^ {(0)}\right) - L \left(\mathbf {W} ^ {*}\right)\right) ^ {1 / 2}}{\sigma_ {r} ^ {2} (\mathbf {X})}
+$$
+
+and the step size satisfies
+
+$$
+\eta \leqslant C _ {1} \cdot \frac {1}{L \| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2} \| \mathbf {X} \| _ {2} \cdot \left(\sqrt {L (\mathbf {W} ^ {(0)})} + \| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2} \| \mathbf {X} \| _ {2}\right)},
+$$
+
+then for all iterates of GD in Algorithm 1, it holds that
+
+$$
+L (\mathbf {W} ^ {(t)}) - L (\mathbf {W} ^ {*}) \leqslant \left(1 - \frac {\eta L \sigma_ {\min} ^ {2} (\mathbf {A}) \sigma_ {\min} ^ {2} (\mathbf {B}) \sigma_ {r} ^ {2} (\mathbf {X})}{e}\right) ^ {t} \cdot \big (L (\mathbf {W} ^ {(0)}) - L (\mathbf {W} ^ {*}) \big).
+$$
+
+Remark 3.2. Theorem 3.1 implies the convergence result in Bartlett et al. (2019). Specifically, to recover the setting considered in Bartlett et al. (2019), we choose $m = d = k$ , $\mathbf{A} = \mathbf{I}$ , $\mathbf{B} = \mathbf{I}$ , $L(\mathbf{W}^{*}) = 0$ and $\mathbf{X}\mathbf{X}^{\top} = \mathbf{I}$ . Then it can be easily observed that the condition in Theorem 3.1 becomes $L(\mathbf{W}^{(0)}) - L(\mathbf{W}^{*}) \leqslant C^{-2}$ . This implies that global convergence can be established as long as $L(\mathbf{W}^{(0)}) - L(\mathbf{W}^{*})$ is smaller than some constant, which is equivalent to the condition proved in Bartlett et al. (2019).
+
+In general, $L(\mathbf{W}^{(0)}) - L(\mathbf{W}^*)$ can be large, and thus the setting considered in Bartlett et al. (2019) may not guarantee global convergence. Therefore, it is natural to ask in which settings the condition on $\mathbf{A}$ and $\mathbf{B}$ in Theorem 3.1 can be satisfied. Here we provide one possible choice which is commonly used in practice (other viable choices can be found in Section 4). We use Gaussian random input and output transformations, i.e., each entry in $\mathbf{A}$ is independently generated from $N(0,1 / m)$ and each entry in $\mathbf{B}$ is independently generated from $N(0,1 / k)$ . Based on this choice of transformations, we have the following proposition, proved in Section A.2, which characterizes the largest and smallest singular values of $\mathbf{A}$ and $\mathbf{B}$ , and the training loss at initialization (i.e., $L(\mathbf{W}^{(0)})$ ).
+
+Proposition 3.3. In Algorithm 1, suppose each entry in $\mathbf{A}$ is independently generated from $N(0,\alpha^2)$ and each entry in $\mathbf{B}$ is independently generated from $N(0,\beta^2)$ . If $m\geqslant C\cdot (d + k + \log (1 / \delta))$ for some absolute constant $C$ , then with probability at least $1 - \delta$ , it holds that
+
+$$
+\begin{array}{l} \sigma_{\min}(\mathbf{A}) = \Omega (\alpha \sqrt{m}), \quad \sigma_{\max}(\mathbf{A}) = O(\alpha \sqrt{m}), \quad \sigma_{\min}(\mathbf{B}) = \Omega (\beta \sqrt{m}), \quad \sigma_{\max}(\mathbf{B}) = O(\beta \sqrt{m}), \\ \text{and} \quad L\left(\mathbf{W}^{(0)}\right) \leqslant O\left(\alpha^{2}\beta^{2}km\log (n/\delta)\| \mathbf{X}\|_{F}^{2} + \| \mathbf{Y}\|_{F}^{2}\right). \end{array}
+$$
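The scaling in Proposition 3.3 is easy to check numerically: for tall and wide Gaussian matrices, all singular values concentrate around $\alpha\sqrt{m}$ and $\beta\sqrt{m}$ respectively. The constants 0.5 and 2 below are loose illustrative brackets, not the ones in the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, alpha, beta = 20, 10, 1.0, 1.0
for m in (200, 800):
    A = rng.normal(0, alpha, size=(m, d))   # entries ~ N(0, alpha^2)
    B = rng.normal(0, beta, size=(k, m))    # entries ~ N(0, beta^2)
    sA = np.linalg.svd(A, compute_uv=False)
    sB = np.linalg.svd(B, compute_uv=False)
    # All singular values of A (resp. B) are of order alpha*sqrt(m) (resp. beta*sqrt(m)).
    assert 0.5 * alpha * np.sqrt(m) < sA[-1] <= sA[0] < 2.0 * alpha * np.sqrt(m)
    assert 0.5 * beta * np.sqrt(m) < sB[-1] <= sB[0] < 2.0 * beta * np.sqrt(m)
```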
+
+Then based on Theorem 3.1 and Proposition 3.3, we provide the following corollary, proved in Section 3.4, which shows that GD is able to achieve global convergence if the neural network is wide enough.
+
+Corollary 3.4. Suppose $\| \mathbf{Y}\| _F = O(\| \mathbf{X}\| _F)$ . Then using the Gaussian random input and output transformations in Proposition 3.3 with $\alpha = \beta = 1$ , if the neural network width satisfies $m = \Omega (\max \{kr\kappa^2\log (n / \delta),k + d + \log (1 / \delta)\})$ , then with probability at least $1 - \delta$ , the output of GD in Algorithm 1 achieves training loss at most $L(\mathbf{W}^{*}) + \epsilon$ within $T = O\big(\kappa \log (1 / \epsilon)\big)$ iterations, where $\kappa = \| \mathbf{X}\| _2^2 /\sigma_r^2 (\mathbf{X})$ denotes the condition number of the covariance matrix of the training data.
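The quantity $\kappa$ in the corollary is a property of the data alone and can be computed directly from the singular values of $\mathbf{X}$; a small sketch with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 6, 100
X = rng.normal(size=(d, n))

s = np.linalg.svd(X, compute_uv=False)
r = int(np.sum(s > 1e-10 * s[0]))   # numerical rank of X
kappa = s[0] ** 2 / s[r - 1] ** 2   # kappa = ||X||_2^2 / sigma_r^2(X)

assert r == d                       # generic Gaussian data has full rank
assert kappa >= 1.0
```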
+
+Remark 3.5. For standard deep linear networks, Du & Hu (2019) proved that GD with Gaussian random initialization converges to an $\epsilon$ -suboptimal global minimum within $T = O(\kappa \log (1 / \epsilon))$ iterations if the neural network width satisfies $m = \Omega(Lkr\kappa^3 + d)$ . In stark contrast, training deep linear ResNets achieves the same convergence rate as training deep linear networks and linear regression, while the condition on the neural network width is strictly milder than that for training standard deep linear networks by a factor of $O(L\kappa)$ . This improvement may in part explain the empirical advantage of deep ResNets.
+
+# 3.2 CONVERGENCE GUARANTEE OF STOCHASTIC GRADIENT DESCENT
+
+The following theorem establishes the global convergence of SGD for training deep linear ResNets.
+
+Theorem 3.6. There are absolute constants $C$ , $C_1$ and $C_2$ such that for any $0 < \delta \leqslant 1/6$ and $\epsilon > 0$ , if the input and output weight matrices satisfy
+
+$$
+\frac {\sigma_ {\min } ^ {2} (\mathbf {A}) \sigma_ {\min } ^ {2} (\mathbf {B})}{\| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2}} \geqslant C \cdot \frac {n \| \mathbf {X} \| _ {2} \cdot \log (L \left(\mathbf {W} ^ {(0)}\right) / \epsilon)}{B \sigma_ {r} ^ {2} (\mathbf {X})} \cdot \sqrt {L \left(\mathbf {W} ^ {(0)}\right)},
+$$
+
+and the step size and maximum iteration number are set as
+
+$$
+\eta \leqslant C _ {1} \cdot \frac {B \sigma_ {\min} ^ {2} (\mathbf {A}) \sigma_ {\min} ^ {2} (\mathbf {B}) \sigma_ {r} ^ {2} (\mathbf {X})}{L n \| \mathbf {A} \| _ {2} ^ {4} \| \mathbf {B} \| _ {2} ^ {4} \| \mathbf {X} \| _ {2} ^ {2}} \cdot \min \left\{\frac {\epsilon}{\| \mathbf {X} \| _ {2 , \infty} ^ {2} L (\mathbf {W} ^ {*})}, \frac {B}{n \| \mathbf {X} \| _ {2} ^ {2} \cdot \log (T / \delta) \log (L (\mathbf {W} ^ {(0)}) / \epsilon)} \right\},
+$$
+
+$$
+T = C _ {2} \cdot \frac {1}{\eta L \sigma_ {\mathrm {m i n}} ^ {2} (\mathbf {A}) \sigma_ {\mathrm {m i n}} ^ {2} (\mathbf {B}) \sigma_ {r} ^ {2} (\mathbf {X})} \cdot \log \left(\frac {L (\mathbf {W} ^ {(0)}) - L (\mathbf {W} ^ {*})}{\epsilon}\right),
+$$
+
+then with probability at least $1/2$ (with respect to the random choice of minibatches), SGD in Algorithm 1 can find a network that achieves training loss at most $L(\mathbf{W}^*) + \epsilon$ .
+
+By combining Theorem 3.6 and Proposition 3.3, we can show that as long as the neural network is wide enough, SGD can achieve global convergence. Specifically, we provide the condition on the neural network width and the iteration complexity of SGD in the following corollary.
+
+Corollary 3.7. Suppose $\| \mathbf{Y}\| _F = O(\| \mathbf{X}\| _F)$ . Then using Gaussian random input and output transformations in Proposition 3.3 with $\alpha = \beta = 1$ , for sufficiently small $\epsilon >0$ , if the neural network width satisfies $m = \widetilde{\Omega}\bigl (kr\kappa^2\log^2 (1 / \epsilon)\cdot n^2 /B^2 +d\bigr)$ , with constant probability, SGD in Algorithm 1 can find a point that achieves training loss at most $L(\mathbf{W}^{*}) + \epsilon$ within $T = \widetilde{O}\bigl (\kappa^{2}\epsilon^{-1}\log (1 / \epsilon)\cdot n / B\bigr)$ iterations.
+
+From Corollaries 3.7 and 3.4, we can see that compared with the convergence guarantee of GD, the condition on the neural network width for SGD is worse by a factor of $\widetilde{O}(n^2\log^2(1/\epsilon)/B^2)$ and the iteration complexity is higher by a factor of $\widetilde{O}(\kappa\epsilon^{-1}\cdot n/B)$ . This is because the trajectory length of SGD is highly uncertain, and thus we need stronger conditions on the neural network width in order to fully control it.
+
+We further consider the special case that $L(\mathbf{W}^{*}) = 0$ , which implies that there exists a ground truth matrix $\Phi$ such that for each training data point $(\mathbf{x}_i, \mathbf{y}_i)$ we have $\mathbf{y}_i = \Phi \mathbf{x}_i$ . In this case, we have the following theorem, which shows that SGD can attain a linear rate to converge to the global minimum.
+
+Theorem 3.8. There are absolute constants $C$ and $C_1$ such that for any $0 < \delta < 1$ , if the input and output weight matrices satisfy
+
+$$
+\frac {\sigma_ {\mathrm {m i n}} ^ {2} (\mathbf {A}) \sigma_ {\mathrm {m i n}} ^ {2} (\mathbf {B})}{\| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2}} \geqslant C \cdot \frac {n \| \mathbf {X} \| _ {2}}{B \sigma_ {r} ^ {2} (\mathbf {X})} \cdot \sqrt {L (\mathbf {W} ^ {(0)})},
+$$
+
+and the step size is set as
+
+$$
+\eta \leqslant C _ {1} \cdot \frac {B ^ {2} \sigma_ {\operatorname* {m i n}} ^ {2} (\mathbf {A}) \sigma_ {\operatorname* {m i n}} ^ {2} (\mathbf {B}) \sigma_ {r} ^ {2} (\mathbf {X})}{L n ^ {2} \| \mathbf {A} \| _ {2} ^ {4} \| \mathbf {B} \| _ {2} ^ {4} \| \mathbf {X} \| _ {2} ^ {4} \cdot \log (T / \delta)},
+$$
+
+for some maximum iteration number $T$ , then with probability at least $1 - \delta$ , the following holds for all $t \leqslant T$
+
+$$
+L (\mathbf {W} ^ {(t)}) \leqslant 2 L (\mathbf {W} ^ {(0)}) \cdot \left(1 - \frac {\eta L \sigma_ {\operatorname* {m i n}} ^ {2} (\mathbf {A}) \sigma_ {\operatorname* {m i n}} ^ {2} (\mathbf {B}) \sigma_ {r} ^ {2} (\mathbf {X})}{e}\right) ^ {t}.
+$$
+
+Similarly, using Gaussian random transformations in Proposition 3.3, we show that SGD can achieve global convergence for wide enough deep linear ResNets in the following corollary.
+
+Corollary 3.9. Suppose $\| \mathbf{Y}\| _F = O(\| \mathbf{X}\| _F)$ . Then using Gaussian random transformations in Proposition 3.3 with $\alpha = \beta = 1$ , for any $\epsilon \leqslant \widetilde{O}\big(B\| \mathbf{X}\|_{2,\infty}^{2} / (n\| \mathbf{X}\|_{2}^{2})\big)$ , if the neural network width satisfies $m = \widetilde{\Omega}\big(kr\kappa^2\cdot n^2 /B^2 +d\big)$ , with high probability, SGD in Algorithm 1 can find a network that achieves training loss at most $\epsilon$ within $T = \widetilde{O}\big(\kappa^2\log (1 / \epsilon)\cdot n^2 /B^2\big)$ iterations.
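As an illustration of the SGD branch of Algorithm 1 in the realizable setting of Theorem 3.8 (labels generated as $\mathbf{y}_i = \boldsymbol{\Phi}\mathbf{x}_i$, so $L(\mathbf{W}^*) = 0$), the following self-contained numpy sketch uses arbitrary small sizes and a conservative step size rather than the ones prescribed by the theorem:

```python
import numpy as np

rng = np.random.default_rng(0)
d = k = 2; m = 6; depth = 2; n = 40; batch = 8
Phi = rng.normal(size=(k, d))
X = rng.normal(size=(d, n)); Y = Phi @ X        # realizable data: L(W*) = 0
A = rng.normal(0, 1 / np.sqrt(m), (m, d))
B = rng.normal(0, 1 / np.sqrt(k), (k, m))
W = [np.zeros((m, m)) for _ in range(depth)]    # zero initialization

def grads_on(W, Xb, Yb):
    """Gradients of the square loss on a minibatch w.r.t. every hidden weight."""
    Hs = [A @ Xb]
    for Wl in W:
        Hs.append(Hs[-1] + Wl @ Hs[-1])         # H_l = (I + W_l) H_{l-1}
    R = B @ Hs[-1] - Yb
    grads, S = [None] * len(W), B.copy()        # S = B (I + W_L) ... (I + W_{l+1})
    for l in range(len(W) - 1, -1, -1):
        grads[l] = S.T @ R @ Hs[l].T
        S = S + S @ W[l]
    return grads

def full_loss(W):
    H = A @ X
    for Wl in W:
        H = H + Wl @ H
    return 0.5 * np.linalg.norm(B @ H - Y) ** 2

eta, L0 = 1e-4, full_loss(W)
for t in range(3000):
    idx = rng.choice(n, size=batch, replace=False)             # minibatch (line 8)
    g = grads_on(W, X[:, idx], Y[:, idx])
    W = [Wl - eta * (n / batch) * gl for Wl, gl in zip(W, g)]  # lines 9-10
assert full_loss(W) < L0                                       # training loss decreased
```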
+
+# 4 DISCUSSION ON DIFFERENT INPUT AND OUTPUT LINEAR TRANSFORMATIONS
+
+In this section, we discuss several different choices of the linear transformations at the input and output layers and their effects on the convergence performance. For simplicity, we only consider the condition for GD.
+
+As we stated in Subsection 3.1, GD converges if the input and output weight matrices $\mathbf{A}$ and $\mathbf{B}$ satisfy
+
+$$
+\frac {\sigma_ {\min } ^ {2} (\mathbf {A}) \sigma_ {\min } ^ {2} (\mathbf {B})}{\| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2}} \geqslant C \cdot \frac {\| \mathbf {X} \| _ {2}}{\sigma_ {r} ^ {2} (\mathbf {X})} \cdot \left(L \left(\mathbf {W} ^ {(0)}\right) - L \left(\mathbf {W} ^ {*}\right)\right) ^ {1 / 2}. \tag {4.1}
+$$
+
+It is then interesting to figure out what kinds of choices of $\mathbf{A}$ and $\mathbf{B}$ can satisfy this condition. In Proposition 3.3, we showed that Gaussian random transformations (i.e., each entry of $\mathbf{A}$ and $\mathbf{B}$ is generated from a certain Gaussian distribution) satisfy this condition with high probability, so that GD converges. Here we discuss two other choices of transformations.
+
+Identity transformations. We first consider the transformations $\mathbf{A} = [\mathbf{I}_{d\times d},\mathbf{0}_{d\times (m - d)}]^{\top}$ and $\mathbf{B} = \sqrt{m / k}\cdot [\mathbf{I}_{k\times k},\mathbf{0}_{k\times (m - k)}]$ , which are equivalent to the setting in Bartlett et al. (2019) when $m = k = d$ . Then it is clear that
+
+$$
+\sigma_{\min}(\mathbf{B}) = \sigma_{\max}(\mathbf{B}) = \sqrt{m / k} \quad \text{and} \quad \sigma_{\min}(\mathbf{A}) = \sigma_{\max}(\mathbf{A}) = 1.
+$$
+
+Now let us consider $L(\mathbf{W}^{(0)})$ . By our choices of $\mathbf{B}$ and $\mathbf{A}$ and zero initialization on weight matrices in hidden layers, in the case that $d = k$ , we have
+
+$$
+L (\mathbf {W} ^ {(0)}) = \frac {1}{2} \| \mathbf {B A X} - \mathbf {Y} \| _ {F} ^ {2} = \frac {1}{2} \| \sqrt {m / k} \mathbf {X} - \mathbf {Y} \| _ {F} ^ {2}.
+$$
+
+We remark that $\left\| \sqrt{m / k}\mathbf{X} - \mathbf{Y}\right\| _F^2 /2$ could be as big as $\frac{1}{2}\left(m\| \mathbf{X}\| _F^2 /k + \| \mathbf{Y}\| _F^2\right)$ (for example, when $\mathbf{X}$ and $\mathbf{Y}$ are orthogonal). Then plugging these results into (4.1), the condition on $\mathbf{A}$ and $\mathbf{B}$ becomes
+
+$$
+\sqrt {m / k} \geqslant C \cdot \frac {\| \mathbf {X} \| _ {2}}{\sigma_ {r} ^ {2} (\mathbf {X})} \cdot \left(\frac {1}{2} \left(m \| \mathbf {X} \| _ {F} ^ {2} / k + \| \mathbf {Y} \| _ {F} ^ {2}\right) - L (\mathbf {W} ^ {*})\right) ^ {1 / 2} \geqslant C \cdot \frac {\| \mathbf {X} \| _ {2}}{\sigma_ {r} ^ {2} (\mathbf {X})} \cdot \sqrt {\frac {m \| \mathbf {X} \| _ {F} ^ {2}}{2 k}},
+$$
+
+where the second inequality is due to the fact that $L(\mathbf{W}^{*}) \leqslant \| \mathbf{Y}\|_{F}^{2} / 2$ . Then it is clear that if $\| \mathbf{X}\| _F \geqslant \sqrt{2} /C$ , the above inequality cannot be satisfied for any choice of $m$ , since the factor $\sqrt{m}$ appears on both sides and cancels out. Therefore, in such cases, our bound does not guarantee that GD achieves global convergence. This is consistent with the non-convergence results in Bartlett et al. (2019). Note that replacing the scaling factor $\sqrt{m / k}$ in the definition of $\mathbf{B}$ with any other function of $d$ , $k$ and $m$ would not help.
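The cancellation argument above can be observed numerically: with identity transformations, both sides of (4.1) grow like $\sqrt{m}$, so their ratio stays bounded no matter how wide the network is. The constant $C$ and the $L(\mathbf{W}^*)$ term are dropped below for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d = k = 4; n = 30
X = rng.normal(size=(d, n)); Y = rng.normal(size=(k, n))
s = np.linalg.svd(X, compute_uv=False)

ratios = []
for m in (16, 64, 256):
    # For A = [I, 0]^T and B = sqrt(m/k) [I, 0]:
    lhs = np.sqrt(m / k)                          # sigma_min^2(A) sigma_min^2(B) / (||A||_2 ||B||_2)
    L0 = 0.5 * np.linalg.norm(np.sqrt(m / k) * X - Y) ** 2
    rhs = s[0] / s[d - 1] ** 2 * np.sqrt(L0)      # right-hand side of (4.1), dropping C and L(W*)
    ratios.append(lhs / rhs)

# The ratio stays bounded as m grows: widening the network cannot satisfy (4.1).
assert max(ratios) / min(ratios) < 1.5
```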
+
+Modified identity transformations. In fact, we show that a different type of identity transformations for $\mathbf{A}$ and $\mathbf{B}$ can satisfy the condition (4.1). Here we provide one such example. Assuming $m\geqslant d + k$ , we can construct two disjoint sets $\mathcal{S}_{1},\mathcal{S}_{2}\subset [m]$ satisfying $|\mathcal{S}_1| = d$ and $|\mathcal{S}_2| = k$ . Let $\mathcal{S}_{1} = \{i_{1},\ldots ,i_{d}\}$ and $\mathcal{S}_{2} = \{j_{1},\dots ,j_{k}\}$ . Then we construct the matrices $\mathbf{A}$ and $\mathbf{B}$ as follows:
+
+$$
+\mathbf{A}_{ij} = \left\{ \begin{array}{ll} 1 & i = i_j \\ 0 & \text{otherwise} \end{array} \right. \quad \mathbf{B}_{ij} = \left\{ \begin{array}{ll} \alpha & j = j_i \\ 0 & \text{otherwise} \end{array} \right.
+$$
+
+where $\alpha$ is a parameter which will be specified later. In this way, it can be verified that $\mathbf{B}\mathbf{A} = \mathbf{0}$ , $\sigma_{\mathrm{min}}(\mathbf{A}) = \sigma_{\mathrm{max}}(\mathbf{A}) = 1$ , and $\sigma_{\mathrm{min}}(\mathbf{B}) = \sigma_{\mathrm{max}}(\mathbf{B}) = \alpha$ . Thus it is clear that the initial training loss satisfies $L(\mathbf{W}^{(0)}) = \| \mathbf{Y}\|_F^2 /2$ . Then plugging these results into (4.1), the condition on $\mathbf{A}$ and $\mathbf{B}$ can be rewritten as
+
+$$
+\alpha \geqslant C \cdot \frac {\| \mathbf {X} \| _ {2}}{\sigma_ {r} ^ {2} (\mathbf {X})} \cdot \left(\| \mathbf {Y} \| _ {F} ^ {2} / 2 - L (\mathbf {W} ^ {*})\right) ^ {1 / 2}.
+$$
+
+The R.H.S. of the above inequality does not depend on $\alpha$ , which implies that we can choose a sufficiently large $\alpha$ to make this inequality hold. Thus, GD is guaranteed to achieve global convergence. Moreover, it is worth noting that with the modified identity transformations, a neural network of width $m = d + k$ suffices to guarantee the global convergence of GD. We further remark that a similar analysis can be extended to SGD.
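A minimal numpy construction of these modified identity transformations (with illustrative sizes) verifies the claimed properties: $\mathbf{B}\mathbf{A} = \mathbf{0}$, unit singular values for $\mathbf{A}$, singular values $\alpha$ for $\mathbf{B}$, and an initial loss of $\|\mathbf{Y}\|_F^2/2$ regardless of the data:

```python
import numpy as np

m, d, k, alpha = 12, 5, 4, 10.0      # requires m >= d + k
S1 = list(range(d))                  # S1 = {i_1, ..., i_d}: hidden rows that A writes to
S2 = list(range(d, d + k))          # S2 = {j_1, ..., j_k}: hidden columns that B reads, disjoint from S1

A = np.zeros((m, d))
B = np.zeros((k, m))
for j, i in enumerate(S1):
    A[i, j] = 1.0                    # input coordinate j is placed at hidden index i_j
for i, j in enumerate(S2):
    B[i, j] = alpha                  # output coordinate i is read from hidden index j_i

assert np.allclose(B @ A, 0)         # disjoint supports imply BA = 0
sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)
assert np.allclose(sA, 1.0) and np.allclose(sB, alpha)

# Initial loss with zero hidden weights is ||Y||_F^2 / 2, independent of X.
rng = np.random.default_rng(0)
X = rng.normal(size=(d, 7)); Y = rng.normal(size=(k, 7))
assert np.isclose(0.5 * np.linalg.norm(B @ A @ X - Y) ** 2,
                  0.5 * np.linalg.norm(Y) ** 2)
```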
+
+# 5 EXPERIMENTS
+
+In this section, we conduct experiments on synthetic data to verify our theory, including (i) a comparison between different input and output transformations and (ii) a comparison between training deep linear ResNets and standard deep linear networks.
+
+# 5.1 DIFFERENT INPUT AND OUTPUT TRANSFORMATIONS
+
+To validate our theory, we perform a simple experiment on 10-dimensional synthetic data. Specifically, we randomly generate $\mathbf{X} \in \mathbb{R}^{10 \times 1000}$ from a standard normal distribution and set $\mathbf{Y} = -\mathbf{X} + 0.1 \cdot \mathbf{E}$ , where each entry of $\mathbf{E}$ is independently generated from a standard normal distribution. Considering 10-hidden-layer linear ResNets, we apply three input and output transformations: identity transformations, modified identity transformations, and random transformations. We evaluate the convergence performance for these three choices of transformations and report the results in Figures 1(a)-1(b), where we consider the two cases $m = 40$ and $m = 200$ . It can be clearly observed that gradient descent with identity transformations gets stuck, while gradient descent with modified identity or random transformations converges well, which verifies our theory. It can also be observed that the modified identity transformations lead to a slightly faster convergence rate, as the corresponding initial training loss is smaller. In fact, with identity transformations in this setting, only the first 10 entries of the $m$ hidden variables in each layer ever take a non-zero value, so that, no matter how large $m$ is, effectively $m = 10$ , and the lower bound of Bartlett et al. (2019) applies.
+
+# 5.2 COMPARISON WITH STANDARD DEEP LINEAR NETWORKS
+
+Figure 1: (a)-(b): Convergence for three input and output transformations on 10-hidden-layer linear ResNets, with (a) $m = 40$ and (b) $m = 200$ . (c)-(d): Comparison between the convergence of training deep linear ResNets with zero initialization on hidden weights and standard deep linear networks with Gaussian random initialization on hidden weights, with (c) $m = 40$ and (d) $m = 200$ ; the input and output weights are generated by random initialization and remain fixed throughout training.
+
+Next, we compare the convergence with that of training standard deep linear networks. Specifically, we adopt the same training data generated in Section 5.1 and consider training $L$ -hidden-layer neural networks with fixed width $m$ . The convergence results are displayed in Figures 1(c)-1(d), where we consider different choices of $L$ . For training linear ResNets, we found that the convergence is quite similar across different $L$ ; thus we only plot the convergence result for the largest one ( $L = 20$ for $m = 40$ and $L = 100$ for $m = 200$ ). For training standard linear networks, however, the convergence becomes worse as the depth increases. This is consistent with the theory, as our condition on the neural network width is $m = O(kr\kappa^2)$ (please refer to Corollary 3.4), which has no dependency on $L$ , while the condition for training standard linear networks is $m = O(Lkr\kappa^3)$ (Du & Hu, 2019), which is linear in $L$ .
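+One way to see why depth hurts standard linear networks but not zero-initialized ResNets is through the conditioning of the end-to-end hidden-layer product: with zero initialization every residual factor is exactly the identity, whereas a product of Gaussian-initialized layers becomes severely ill-conditioned as $L$ grows. A minimal sketch (the $1/\sqrt{m}$ initialization scaling is our assumption):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+m = 40
+
+def hidden_product(L, resnet):
+    """End-to-end hidden-layer product for width m and depth L."""
+    P = np.eye(m)
+    for _ in range(L):
+        if resnet:
+            # zero-initialized ResNet: every factor is exactly I + 0 = I
+            P = (np.eye(m) + np.zeros((m, m))) @ P
+        else:
+            # standard net with Gaussian init (assumed 1/sqrt(m) scaling)
+            P = (rng.standard_normal((m, m)) / np.sqrt(m)) @ P
+    return P
+
+cond_res = np.linalg.cond(hidden_product(100, resnet=True))
+cond_std = np.linalg.cond(hidden_product(100, resnet=False))
+assert abs(cond_res - 1.0) < 1e-9   # perfectly conditioned at any depth
+assert cond_std > 1e6               # signal collapses along some directions
+```
+
+This matches the observation in Figures 1(c)-1(d): the residual parameterization keeps the product well-conditioned at any depth, so the width requirement need not grow with $L$ .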
+
+# 6 CONCLUSION
+
+In this paper, we proved the global convergence of GD and SGD for training deep linear ResNets with square loss. More specifically, we considered fixed linear transformations at both input and output layers, and proved that under certain conditions on the transformations, GD and SGD with zero initialization on all hidden weights can converge to the global minimum. In addition, we further proved that when specializing to appropriate Gaussian random linear transformations, GD and SGD can converge as long as the neural network is wide enough. Compared with the convergence results of GD for training standard deep linear networks, our condition on the neural network width is strictly milder. Our analysis can be generalized to prove similar results for different loss functions such as cross-entropy loss, and can potentially provide meaningful insights to the convergence analysis of deep non-linear ResNets.
+
+# ACKNOWLEDGEMENT
+
+We thank the anonymous reviewers and area chair for their helpful comments. This work was initiated when Q. Gu and P. Long attended the summer program on the Foundations of Deep Learning at the Simons Institute for the Theory of Computing. D. Zou and Q. Gu were sponsored in part by the National Science Foundation CAREER Award IIS-1906169, BIGDATA IIS-1855099, and Salesforce Deep Learning Research Award. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies.
+
+# REFERENCES
+
+Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. In International Conference on Machine Learning, pp. 242-252, 2019.
+Alexandr Andoni, Rina Panigrahy, Gregory Valiant, and Li Zhang. Learning polynomials with neural networks. In International Conference on Machine Learning, pp. 1908-1916, 2014.
+Sanjeev Arora, Nadav Cohen, and Elad E Hazan. On the optimization of deep networks: Implicit acceleration by overparameterization. In 35th International Conference on Machine Learning, ICML 2018, pp. 372-389, 2018.
+Sanjeev Arora, Nadav Cohen, Noah Golowich, and Wei Hu. A convergence analysis of gradient descent for deep linear neural networks. In International Conference on Learning Representations, 2019a.
+
+Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. In Advances in Neural Information Processing Systems, 2019b.
+Peter L Bartlett, David P Helmbold, and Philip M Long. Gradient descent with identity initialization efficiently learns positive-definite linear transformations by deep residual networks. Neural computation, 31(3):477-502, 2019.
+Alon Brutzkus, Amir Globerson, Eran Malach, and Shai Shalev-Shwartz. SGD learns overparameterized networks that provably generalize on linearly separable data. In International Conference on Learning Representations, 2018.
+Lenaic Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable programming. In Advances in Neural Information Processing Systems, 2019.
+Amit Daniely. SGD learns the conjugate kernel class of the network. In Advances in Neural Information Processing Systems, pp. 2422-2430, 2017.
+Simon Du and Wei Hu. Width provably matters in optimization for deep linear neural networks. In International Conference on Machine Learning, pp. 1655-1664, 2019.
+Simon Du, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In International Conference on Machine Learning, pp. 1675-1685, 2019.
+Simon S Du, Wei Hu, and Jason D Lee. Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced. In Advances in Neural Information Processing Systems, pp. 384-395, 2018a.
+Simon S Du, Jason D Lee, and Yuandong Tian. When is a convolutional filter easy to learn? In International Conference on Learning Representations, 2018b.
+Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. arXiv preprint arXiv:1810.02054, 2018c.
+Yuguang Fang, Kenneth A Loparo, and Xiangbo Feng. Inequalities for the trace of matrix product. IEEE Transactions on Automatic Control, 39(12):2489-2490, 1994.
+Daniel C Freeman and Joan Bruna. Topology and geometry of half-rectified network optimization. In International Conference on Learning Representations, 2016.
+Spencer Frei, Yuan Cao, and Quanquan Gu. Algorithm-dependent generalization bounds for overparameterized deep residual networks. In Advances in Neural Information Processing Systems, 2019.
+Suriya Gunasekar, Jason D Lee, Daniel Soudry, and Nati Srebro. Implicit bias of gradient descent on linear convolutional networks. In Advances in Neural Information Processing Systems, pp. 9461-9471, 2018.
+Moritz Hardt and Tengyu Ma. Identity matters in deep learning. arXiv preprint arXiv:1611.04231, 2016.
+Wei Hu, Lechao Xiao, and Jeffrey Pennington. Provable benefit of orthogonal initialization in optimizing deep linear networks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rkgqN1SYvr.
+Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in neural information processing systems, pp. 8571-8580, 2018.
+Ziwei Ji and Matus Telgarsky. Gradient descent aligns the layers of deep linear networks. In International Conference on Learning Representations, 2019.
+Kenji Kawaguchi. Deep learning without poor local minima. In Advances in Neural Information Processing Systems, pp. 586-594, 2016.
+
+Kenji Kawaguchi and Jiaoyang Huang. Gradient descent finds global minima for generalizable deep neural networks of practical sizes. arXiv preprint arXiv:1908.02419, 2019.
+Yuanzhi Li and Yingyu Liang. Learning overparameterized neural networks via stochastic gradient descent on structured data. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 8168-8177, 2018.
+Yuanzhi Li and Yang Yuan. Convergence analysis of two-layer neural networks with ReLU activation. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 597-607. Curran Associates Inc., 2017.
+Haihao Lu and Kenji Kawaguchi. Depth creates no bad local minima. arXiv preprint arXiv:1702.08580, 2017.
+Samet Oymak and Mahdi Soltanolkotabi. Towards moderate overparameterization: global convergence guarantees for training shallow neural networks. arXiv preprint arXiv:1902.04674, 2019.
+Ohad Shamir. Exponential convergence time of gradient descent for one-dimensional deep linear neural networks. arXiv preprint arXiv:1809.08587, 2018.
+Lili Su and Pengkun Yang. On learning over-parameterized neural networks: A functional approximation prospective. arXiv preprint arXiv:1905.10826, 2019.
+Yuandong Tian. An analytical formula of population gradient for two-layered ReLU network and its applications in convergence and critical point analysis. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 3404-3413. JMLR.org, 2017.
+Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
+Lei Wu, Qingcan Wang, and Chao Ma. Global convergence of gradient descent for deep linear residual networks. arXiv preprint arXiv:1911.00645, 2019.
+Chulhee Yun, Suvrit Sra, and Ali Jadbabaie. Global optimality conditions for deep neural networks. In International Conference on Learning Representations, 2018.
+Huishuai Zhang, Da Yu, Wei Chen, and Tie-Yan Liu. Training over-parameterized deep resnet is almost as easy as training a two-layer network. arXiv preprint arXiv:1903.07120, 2019.
+Xiao Zhang, Yaodong Yu, Lingxiao Wang, and Quanquan Gu. Learning one-hidden-layer ReLU networks via gradient descent. arXiv preprint arXiv:1806.07808, 2018.
+Kai Zhong, Zhao Song, Prateek Jain, Peter L Bartlett, and Inderjit S Dhillon. Recovery guarantees for one-hidden-layer neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 4140-4149. JMLR.org, 2017.
+Yi Zhou and Yingbin Liang. Critical points of linear neural networks: Analytical forms and landscape properties. 2018.
+Difan Zou and Quanquan Gu. An improved analysis of training over-parameterized deep neural networks. In Advances in Neural Information Processing Systems, 2019.
+Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. Stochastic gradient descent optimizes over-parameterized deep ReLU networks. Machine Learning Journal, 2019.
+
+# A PROOF OF MAIN THEOREMS
+
+We first provide the following lemma, which establishes upper and lower bounds on $\| \nabla_{\mathbf{W}_l}L(\mathbf{W})\| _F^2$ when $\mathbf{W}$ stays inside a certain region. Its proof is in Section B.1.
+
+Lemma A.1. For any weight matrices satisfying $\max_{l\in [L]}\| \mathbf{W}_l\| _2\leqslant 0.5 / L$ , it holds that,
+
+$$
+\| \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W}) \| _ {F} ^ {2} \geqslant \frac {2}{e} \sigma_ {\min} ^ {2} (\mathbf {A}) \sigma_ {\min} ^ {2} (\mathbf {B}) \sigma_ {r} ^ {2} (\mathbf {X}) \big (L (\mathbf {W}) - L (\mathbf {W} ^ {*}) \big),
+$$
+
+$$
+\left\| \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W}) \right\| _ {F} ^ {2} \leqslant 2 e \| \mathbf {A} \| _ {2} ^ {2} \| \mathbf {B} \| _ {2} ^ {2} \| \mathbf {X} \| _ {2} ^ {2} \big (L (\mathbf {W}) - L (\mathbf {W} ^ {*}) \big)
+$$
+
+$$
+\left\| \nabla_ {\mathbf {W} _ {l}} \ell (\mathbf {W}; \mathbf {x} _ {i}, \mathbf {y} _ {i}) \right\| _ {F} ^ {2} \leqslant 2 e \| \mathbf {A} \| _ {2} ^ {2} \| \mathbf {B} \| _ {2} ^ {2} \| \mathbf {x} _ {i} \| _ {2} ^ {2} \ell (\mathbf {W}; \mathbf {x} _ {i}, \mathbf {y} _ {i}).
+$$
+
+In addition, the stochastic gradient $\mathbf{G}_l$ in Algorithm 1 satisfies
+
+$$
+\| \mathbf {G} _ {l} \| _ {F} ^ {2} \leqslant \frac {2 e n ^ {2} \| \mathbf {A} \| _ {2} ^ {2} \| \mathbf {B} \| _ {2} ^ {2} \| \mathbf {X} \| _ {2} ^ {2}}{B ^ {2}} L (\mathbf {W}),
+$$
+
+where $B$ is the minibatch size.
+
+The gradient lower bound can also be interpreted as a Polyak-Lojasiewicz condition, which is essential to the linear convergence rate. The gradient upper bound is crucial for bounding the trajectory length, which is needed because this lemma requires that $\max_{l\in [L]}\| \mathbf{W}_l\| _2\leqslant 0.5 / L$ .
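+As a sanity check (not part of the proof), the two gradient bounds in Lemma A.1 can be verified numerically on a small instance. The sketch below uses our own choice of dimensions, a realizable target so that $L(\mathbf{W}^*) = 0$ , and the closed-form gradient of the square loss for linear ResNets:
+
+```python
+import numpy as np
+
+# Small instance satisfying the lemma's conditions: ||W_l||_2 <= 0.5/L,
+# A full column rank, B full row rank, and Y realizable (W = 0 attains 0 loss).
+rng = np.random.default_rng(2)
+d = k = 5
+m, n, L = 20, 50, 3
+
+A = rng.standard_normal((m, d))        # input transformation
+B = rng.standard_normal((k, m))        # output transformation
+X = rng.standard_normal((d, n))
+Y = B @ A @ X                          # realizable target, so L(W*) = 0
+
+# Random hidden weights rescaled so that ||W_l||_2 = 0.3 / L < 0.5 / L.
+W = []
+for _ in range(L):
+    G = rng.standard_normal((m, m))
+    W.append(G * (0.3 / L) / np.linalg.norm(G, 2))
+
+# Forward pass through the residual blocks (I + W_l).
+Hs = [A @ X]
+for Wl in W:
+    Hs.append(Hs[-1] + Wl @ Hs[-1])
+E = B @ Hs[-1] - Y
+loss = 0.5 * np.linalg.norm(E) ** 2
+
+sA = np.linalg.svd(A, compute_uv=False)
+sB = np.linalg.svd(B, compute_uv=False)
+sX = np.linalg.svd(X, compute_uv=False)
+lower = (2 / np.e) * sA[-1] ** 2 * sB[-1] ** 2 * sX[-1] ** 2 * loss
+upper = 2 * np.e * sA[0] ** 2 * sB[0] ** 2 * sX[0] ** 2 * loss
+
+# Gradient of layer l: [B prod_{j>l}(I+W_j)]^T E [prod_{j<l}(I+W_j) A X]^T.
+P = B
+for l in range(L - 1, -1, -1):
+    g2 = np.linalg.norm(P.T @ E @ Hs[l].T) ** 2
+    assert lower <= g2 <= upper        # both bounds of Lemma A.1 hold
+    P = P @ (np.eye(m) + W[l])
+```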
+
+The following lemma proves the smoothness property of the training loss function $L(\mathbf{W})$ when $\mathbf{W}$ is staying inside a certain region. Its proof is in Section B.2.
+
+Lemma A.2. For any two collections of weight matrices, denoted by $\widetilde{\mathbf{W}} = \{\widetilde{\mathbf{W}}_1,\ldots ,\widetilde{\mathbf{W}}_L\}$ and $\mathbf{W} = \{\mathbf{W}_1,\dots ,\mathbf{W}_L\}$ , satisfying $\max_{l\in [L]}\| \mathbf{W}_l\| _F\leqslant 0.5 / L$ and $\max_{l\in [L]}\| \widetilde{\mathbf{W}}_l\| _F\leqslant 0.5 / L$ , it holds that
+
+$$
+\begin{array}{l} L (\widetilde {\mathbf {W}}) - L (\mathbf {W}) \leqslant \sum_ {l = 1} ^ {L} \langle \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W}), \widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l} \rangle \\ + L \| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2} \| \mathbf {X} \| _ {2} \left(\sqrt {2 e L (\mathbf {W})} + 0. 5 e \| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2} \| \mathbf {X} \| _ {2}\right) \sum_ {l = 1} ^ {L} \| \widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l} \| _ {F} ^ {2}. \\ \end{array}
+$$
+
+Based on these two lemmas, we are able to complete the proof of all theorems, which are provided as follows.
+
+# A.1 PROOF OF THEOREM 3.1
+
+Proof of Theorem 3.1. In order to simplify the proof, we use the short-hand notations $\lambda_{A},\mu_{A},\lambda_{B}$ and $\mu_B$ to denote $\| \mathbf{A}\| _2,\sigma_{\mathrm{min}}(\mathbf{A}),\| \mathbf{B}\| _2$ and $\sigma_{\mathrm{min}}(\mathbf{B})$ respectively. Specifically, we rewrite the condition on $\mathbf{A}$ and $\mathbf{B}$ as follows
+
+$$
+\frac {\mu_ {A} ^ {2} \mu_ {B} ^ {2}}{\lambda_ {A} \lambda_ {B}} \geqslant \frac {4 \sqrt {2 e ^ {3}} \| \mathbf {X} \| _ {2}}{\sigma_ {r} ^ {2} (\mathbf {X})} \cdot \left(L (\mathbf {W} ^ {(0)}) - L (\mathbf {W} ^ {*})\right) ^ {1 / 2}.
+$$
+
+We prove the theorem by induction on the update number $s$ , using the following two-part inductive hypothesis:
+
+(i) $\max_{l\in [L]}\| \mathbf{W}_l^{(s)}\| _F\leqslant 0.5 / L,$
+(ii) $L(\mathbf{W}^{(s)}) - L(\mathbf{W}^*)\leqslant \left(1 - \frac{\eta L\mu_A^2\mu_B^2\sigma_r^2(\mathbf{X})}{e}\right)^s\cdot \left(L(\mathbf{W}^{(0)}) - L(\mathbf{W}^*)\right)$
+
+First, it can be easily verified that this holds for $s = 0$ . Now, assume that the inductive hypothesis holds for $s < t$ .
+
+Induction for Part (i): We first prove that $\max_{l\in [L]}\| \mathbf{W}_l^{(t)}\| _F\leqslant 0.5 / L$ . By triangle inequality and the update rule of gradient descent, we have
+
+$$
+\begin{array}{l} \| \mathbf {W} _ {l} ^ {(t)} \| _ {F} \leqslant \sum_ {s = 0} ^ {t - 1} \eta \| \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W} ^ {(s)}) \| _ {F} \\ \leqslant \eta \sum_ {s = 0} ^ {t - 1} \sqrt {2 e} \lambda_ {A} \lambda_ {B} \| \mathbf {X} \| _ {2} \cdot \left(L (\mathbf {W} ^ {(s)}) - L (\mathbf {W} ^ {*})\right) ^ {1 / 2} \\ \leqslant \sqrt {2 e} \eta \lambda_ {A} \lambda_ {B} \| \mathbf {X} \| _ {2} \cdot \left(L (\mathbf {W} ^ {(0)}) - L (\mathbf {W} ^ {*})\right) ^ {1 / 2} \cdot \sum_ {s = 0} ^ {t - 1} \left(1 - \frac {\eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{e}\right) ^ {s / 2} \\ \end{array}
+$$
+
+where the second inequality follows from Lemma A.1, and the third inequality follows from the inductive hypothesis. Since $\sqrt{1 - x} \leqslant 1 - x / 2$ for any $x \in [0,1]$ , we further have
+
+$$
+\begin{array}{l} \| \mathbf {W} _ {l} ^ {(t)} \| _ {F} \leqslant \sqrt {2 e} \eta \lambda_ {A} \lambda_ {B} \| \mathbf {X} \| _ {2} \cdot \left(L (\mathbf {W} ^ {(0)}) - L (\mathbf {W} ^ {*})\right) ^ {1 / 2} \cdot \sum_ {s = 0} ^ {t - 1} \left(1 - \frac {\eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{2 e}\right) ^ {s} \\ \leqslant \frac {\sqrt {8 e ^ {3}} \lambda_ {A} \lambda_ {B} \| \mathbf {X} \| _ {2}}{L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})} \cdot \left(L (\mathbf {W} ^ {(0)}) - L (\mathbf {W} ^ {*})\right) ^ {1 / 2}. \\ \end{array}
+$$
+
+Under the condition that $\mu_A^2\mu_B^2 /(\lambda_A\lambda_B)\geqslant 2\sqrt{8e^3}\| \mathbf{X}\| _2\big(L(\mathbf{W}^{(0)}) - L(\mathbf{W}^*)\big)^{1 / 2} / \sigma_r^2 (\mathbf{X})$ , it can be readily verified that $\| \mathbf{W}_l^{(t)}\| _F\leqslant 0.5 / L$ . Since this holds for all $l\in [L]$ , we have proved Part (i) of the inductive step, i.e., $\max_{l\in [L]}\| \mathbf{W}_l^{(t)}\| _F\leqslant 0.5 / L$ .
+
+Induction for Part (ii): Now we prove Part (ii) of the inductive step, bounding the improvement in the objective function. Note that we have already shown that $\mathbf{W}^{(t)}$ satisfies $\max_{l\in [L]}\| \mathbf{W}_l^{(t)}\| _F\leqslant 0.5 / L$ , thus by Lemma A.2 we have
+
+$$
+\begin{array}{l} L (\mathbf {W} ^ {(t)}) \leqslant L (\mathbf {W} ^ {(t - 1)}) - \eta \sum_ {l = 1} ^ {L} \| \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W} ^ {(t - 1)}) \| _ {F} ^ {2} \\ + \eta^ {2} L \lambda_ {A} \lambda_ {B} \| \mathbf {X} \| _ {2} \cdot \left(\sqrt {e L (\mathbf {W} ^ {(t - 1)})} + 0. 5 e \lambda_ {A} \lambda_ {B} \| \mathbf {X} \| _ {2}\right) \cdot \sum_ {l = 1} ^ {L} \| \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W} ^ {(t - 1)}) \| _ {F} ^ {2}, \\ \end{array}
+$$
+
+where we use the fact that $\mathbf{W}_l^{(t)} - \mathbf{W}_l^{(t - 1)} = -\eta \nabla_{\mathbf{W}_l}L(\mathbf{W}^{(t - 1)})$ . Note that $L(\mathbf{W}^{(t - 1)}) \leqslant L(\mathbf{W}^{(0)})$ and the step size is set to be
+
+$$
+\eta = \frac {1}{2 L \lambda_ {A} \lambda_ {B} \| \mathbf {X} \| _ {2} \cdot \left(\sqrt {e L (\mathbf {W} ^ {(0)})} + 0.5 e \lambda_ {A} \lambda_ {B} \| \mathbf {X} \| _ {2}\right)},
+$$
+
+so that we have
+
+$$
+\begin{array}{l} L (\mathbf {W} ^ {(t)}) - L (\mathbf {W} ^ {(t - 1)}) \leqslant - \frac {\eta}{2} \sum_ {l = 1} ^ {L} \| \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W} ^ {(t - 1)}) \| _ {F} ^ {2} \\ \leqslant - \frac {\eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{e} \big (L (\mathbf {W} ^ {(t - 1)}) - L (\mathbf {W} ^ {*}) \big), \\ \end{array}
+$$
+
+where the second inequality is by Lemma A.1. Applying the inductive hypothesis, we get
+
+$$
+\begin{array}{l} L \left(\mathbf {W} ^ {(t)}\right) - L \left(\mathbf {W} ^ {*}\right) \leqslant \left(1 - \frac {\eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{e}\right) \cdot \left(L \left(\mathbf {W} ^ {(t - 1)}\right) - L \left(\mathbf {W} ^ {*}\right)\right) \\ \leqslant \left(1 - \frac {\eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{e}\right) ^ {t} \cdot \left(L \left(\mathbf {W} ^ {(0)}\right) - L \left(\mathbf {W} ^ {*}\right)\right), \tag {A.1} \\ \end{array}
+$$
+
+which completes the inductive step for Part (ii), and hence the proof of the theorem.
+
+# A.2 PROOF OF PROPOSITION 3.3
+
+Proof of Proposition 3.3. We prove the bounds on the singular values and initial training loss separately.
+
+Bounds on the singular values: Specifically, we set the neural network width as
+
+$$
+m \geqslant 100 \cdot \left(\sqrt {\max \{d, k \}} + \sqrt {2 \log (12 / \delta)}\right) ^ {2}.
+$$
+
+By Corollary 5.35 in Vershynin (2010), we know that for a matrix $\mathbf{U} \in \mathbb{R}^{d_1 \times d_2}$ (with $d_1 \geqslant d_2$ ) whose entries are independently generated from a standard normal distribution, with probability at least $1 - 2\exp(-t^2 / 2)$ , its singular values satisfy
+
+$$
+\sqrt {d _ {1}} - \sqrt {d _ {2}} - t \leqslant \sigma_ {\min } (\mathbf {U}) \leqslant \sigma_ {\max } (\mathbf {U}) \leqslant \sqrt {d _ {1}} + \sqrt {d _ {2}} + t.
+$$
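+This bound is easy to check numerically for a single Gaussian draw (the sizes and the value of $t$ below are arbitrary illustrative choices):
+
+```python
+import numpy as np
+
+# One-draw check of the singular-value bound quoted from Vershynin (2010),
+# Corollary 5.35: sqrt(d1) - sqrt(d2) - t <= s_min <= s_max <= sqrt(d1) + sqrt(d2) + t.
+rng = np.random.default_rng(3)
+d1, d2, t = 400, 20, 4.0
+U = rng.standard_normal((d1, d2))
+s = np.linalg.svd(U, compute_uv=False)
+lo = np.sqrt(d1) - np.sqrt(d2) - t
+hi = np.sqrt(d1) + np.sqrt(d2) + t
+# Fails only with probability <= 2 exp(-t^2 / 2), here about 6.7e-4.
+assert lo <= s[-1] <= s[0] <= hi
+```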
+
+Based on our constructions of $\mathbf{A}$ and $\mathbf{B}$ , we know that each entry of $\frac{1}{\beta} \mathbf{B}$ and $\frac{1}{\alpha} \mathbf{A}$ follows the standard Gaussian distribution. Therefore, setting $t = 2 \sqrt{\log(12 / \delta)}$ and applying a union bound, with probability at least $1 - \delta / 3$ the following holds:
+
+$$
+\alpha \left(\sqrt {m} - \sqrt {d} - 2 \sqrt {\log (12 / \delta)}\right) \leqslant \sigma_ {\min } (\mathbf {A}) \leqslant \sigma_ {\max } (\mathbf {A}) \leqslant \alpha \left(\sqrt {m} + \sqrt {d} + 2 \sqrt {\log (12 / \delta)}\right)
+$$
+
+$$
+\beta \big (\sqrt {m} - \sqrt {k} - 2 \sqrt {\log (12 / \delta)} \big) \leqslant \sigma_ {\min } (\mathbf {B}) \leqslant \sigma_ {\max } (\mathbf {B}) \leqslant \beta \big (\sqrt {m} + \sqrt {k} + 2 \sqrt {\log (12 / \delta)} \big),
+$$
+
+where we use the facts that $\sigma_{\mathrm{min}}(\kappa \mathbf{U}) = \kappa \sigma_{\mathrm{min}}(\mathbf{U})$ and $\sigma_{\mathrm{max}}(\kappa \mathbf{U}) = \kappa \sigma_{\mathrm{max}}(\mathbf{U})$ for any scalar $\kappa$ and matrix $\mathbf{U}$ . Then applying our choice of $m$ , we have with probability at least $1 - \delta / 3$ ,
+
+$$
+0.9 \alpha \sqrt {m} \leqslant \sigma_ {\min } (\mathbf {A}) \leqslant \sigma_ {\max } (\mathbf {A}) \leqslant 1.1 \alpha \sqrt {m} \quad \text {and} \quad 0.9 \beta \sqrt {m} \leqslant \sigma_ {\min } (\mathbf {B}) \leqslant \sigma_ {\max } (\mathbf {B}) \leqslant 1.1 \beta \sqrt {m}.
+$$
+
+This completes the proof of the bounds on the singular values of $\mathbf{A}$ and $\mathbf{B}$ .
+
+Bounds on the initial training loss: The proof in this part is similar to the proof of Proposition 6.5 in Du & Hu (2019). Since we apply zero initialization on all hidden layers, by Young's inequality, we have the following for any $(\mathbf{x},\mathbf{y})$ :
+
+$$
+\ell \left(\mathbf {W} ^ {(0)}; \mathbf {x}, \mathbf {y}\right) = \frac {1}{2} \| \mathbf {B A x} - \mathbf {y} \| _ {2} ^ {2} \leqslant \| \mathbf {B A x} \| _ {2} ^ {2} + \| \mathbf {y} \| _ {2} ^ {2}. \tag {A.2}
+$$
+
+Since each entry of $\mathbf{B}$ is generated from $\mathcal{N}(0,\beta^2)$ , conditioned on $\mathbf{A}$ , each entry of $\mathbf{BAx}$ is distributed according to $\mathcal{N}(0,\beta^2||\mathbf{Ax}||_2^2)$ , so $\frac{\|\mathbf{BAx}\|_2^2}{\|\mathbf{Ax}\|_2^2\beta^2}$ follows a $\chi_k^2$ distribution. Applying a standard tail bound for $\chi_k^2$ distribution, we have, with probability at least $1 - \delta'$ ,
+
+$$
+\frac {\| \mathbf {B A x} \| _ {2} ^ {2}}{\| \mathbf {A x} \| _ {2} ^ {2}} \leqslant \beta^ {2} k (1 + 2 \sqrt {\log (1 / \delta^ {\prime}) / k} + 2 \log (1 / \delta^ {\prime})).
+$$
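+Since the left-hand side is distributed as $\beta^2$ times a $\chi_k^2$ variable, this tail bound can be checked empirically by sampling from $\chi_k^2$ directly (the trial count and the value of $\delta'$ below are arbitrary choices):
+
+```python
+import numpy as np
+
+# Monte Carlo check of the chi-squared tail bound used above:
+# a chi^2_k sample exceeds k(1 + 2 sqrt(log(1/delta')/k) + 2 log(1/delta'))
+# with probability at most delta'.
+rng = np.random.default_rng(4)
+k, trials, delta_p = 10, 20000, 0.05
+samples = rng.chisquare(k, size=trials)
+bound = k * (1 + 2 * np.sqrt(np.log(1 / delta_p) / k) + 2 * np.log(1 / delta_p))
+assert np.mean(samples > bound) <= delta_p
+```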
+
+Note that by our bounds on the singular values, if $m \geqslant 100 \cdot \left(\sqrt{\max\{d,k\}} + \sqrt{2\log(12 / \delta)}\right)^2$ , we have with probability at least $1 - \delta / 3$ that $\| \mathbf{A}\|_2 \leqslant 1.1\alpha \sqrt{m}$ ; thus, it follows that with probability at least $1 - \delta' - \delta / 3$ ,
+
+$$
+\left\| \mathbf {B A x} \right\| _ {2} ^ {2} \leqslant 1.21 \alpha^ {2} \beta^ {2} k m \left[ 1 + 2 \sqrt {\log \left(1 / \delta^ {\prime}\right)} + 2 \log \left(1 / \delta^ {\prime}\right) \right] \| \mathbf {x} \| _ {2} ^ {2}.
+$$
+
+Then by a union bound, it is evident that with probability at least $1 - n\delta' - \delta /3$ ,
+
+$$
+\| \mathbf {B A X} \| _ {F} ^ {2} = \sum_ {i = 1} ^ {n} \| \mathbf {B A x} _ {i} \| _ {2} ^ {2} \leqslant 1.21 \alpha^ {2} \beta^ {2} k m \left[ 1 + 2 \sqrt {\log (1 / \delta^ {\prime})} + 2 \log (1 / \delta^ {\prime}) \right] \| \mathbf {X} \| _ {F} ^ {2}.
+$$
+
+Setting $\delta' = \delta / (3n)$ and supposing $\log(1 / \delta') \geqslant 1$ , we have with probability at least $1 - 2\delta / 3$ ,
+
+$$
+L (\mathbf {W} ^ {(0)}) = \frac {1}{2} \| \mathbf {B A X} - \mathbf {Y} \| _ {F} ^ {2} \leqslant \| \mathbf {B A X} \| _ {F} ^ {2} + \| \mathbf {Y} \| _ {F} ^ {2} \leqslant 6.05 \alpha^ {2} \beta^ {2} k m \log (2 n / \delta) \| \mathbf {X} \| _ {F} ^ {2} + \| \mathbf {Y} \| _ {F} ^ {2}.
+$$
+
+This completes the proof of the bounds on the initial training loss.
+
+Applying a union bound on these two parts, we are able to complete the proof.
+
+
+
+# A.3 PROOF OF COROLLARY 3.4
+
+Proof of Corollary 3.4. Recall the condition in Theorem 3.1:
+
+$$
+\frac {\sigma_ {\min } ^ {2} (\mathbf {A}) \sigma_ {\min } ^ {2} (\mathbf {B})}{\| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2}} \geqslant C \cdot \frac {\| \mathbf {X} \| _ {2}}{\sigma_ {r} ^ {2} (\mathbf {X})} \cdot \left(L \left(\mathbf {W} ^ {(0)}\right) - L \left(\mathbf {W} ^ {*}\right)\right) ^ {1 / 2}. \tag {A.3}
+$$
+
+By Proposition 3.3, we know that, with probability at least $1 - \delta$ ,
+
+$$
+\frac {\sigma_ {\operatorname* {m i n}} ^ {2} (\mathbf {A}) \sigma_ {\operatorname* {m i n}} ^ {2} (\mathbf {B})}{\| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2}} = \Theta (m),
+$$
+
+$$
+\frac {\| \mathbf {X} \| _ {2}}{\sigma_ {r} (\mathbf {X})} \cdot \big (L (\mathbf {W} ^ {(0)}) - L (\mathbf {W} ^ {*}) \big) ^ {1 / 2} = O \left(\frac {(\sqrt {k m \log (n / \delta)} + 1) \| \mathbf {X} \| _ {F} \| \mathbf {X} \| _ {2}}{\sigma_ {r} (\mathbf {X})}\right).
+$$
+
+Note that $\| \mathbf{X}\| _F\leqslant \sqrt{r}\| \mathbf{X}\| _2$ ; thus condition (A.3) can be satisfied if $m = \Omega (kr\kappa^2\log (n / \delta))$ , where $\kappa = \| \mathbf{X}\| _2^2 /\sigma_r^2 (\mathbf{X})$ .
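+For concreteness, the quantities entering this width condition can be computed for data like that of Section 5.1; the absolute constant hidden in $\Omega(\cdot)$ is omitted, so the result only indicates the order of magnitude:
+
+```python
+import numpy as np
+
+# Compute r, kappa, and the scale of the width requirement m = Omega(k r kappa^2 log(n/delta))
+# for Gaussian data of the kind used in the experiments (constant factor omitted).
+rng = np.random.default_rng(5)
+d, k, n, delta = 10, 10, 1000, 0.1
+X = rng.standard_normal((d, n))
+s = np.linalg.svd(X, compute_uv=False)
+r = int(np.sum(s > 1e-10 * s[0]))       # numerical rank of X
+kappa = (s[0] / s[r - 1]) ** 2          # kappa = ||X||_2^2 / sigma_r^2(X)
+m_scale = k * r * kappa ** 2 * np.log(n / delta)
+assert r == d and kappa >= 1.0          # Gaussian X is full rank, kappa close to 1
+```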
+
+Theorem 3.1 implies that $L(\mathbf{W}^{(t)}) - L(\mathbf{W}^{*}) \leqslant \epsilon$ after $T = O\left(\frac{1}{\eta L\sigma_{\min}^2(\mathbf{A})\sigma_{\min}^2(\mathbf{B})\sigma_r^2(\mathbf{X})}\log \frac{1}{\epsilon}\right)$ iterations. Plugging in the value of $\eta$ , we get
+
+$$
+T = O \left(\frac {\| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2} \| \mathbf {X} \| _ {2} \cdot \left(\sqrt {L (\mathbf {W} ^ {(0)})} + \| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2} \| \mathbf {X} \| _ {2}\right)}{\sigma_ {\mathrm {m i n}} ^ {2} (\mathbf {A}) \sigma_ {\mathrm {m i n}} ^ {2} (\mathbf {B}) \sigma_ {r} ^ {2} (\mathbf {X})} \log \frac {1}{\epsilon}\right).
+$$
+
+By Proposition 3.3, we have
+
+$$
+\begin{array}{l} T = O \left(\frac {\| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2} \| \mathbf {X} \| _ {2} \cdot \left(\sqrt {k m \log (n / \delta)} \| \mathbf {X} \| _ {F} + \| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2} \| \mathbf {X} \| _ {2}\right)}{\sigma_ {\mathrm {m i n}} ^ {2} (\mathbf {A}) \sigma_ {\mathrm {m i n}} ^ {2} (\mathbf {B}) \sigma_ {r} ^ {2} (\mathbf {X})} \log \frac {1}{\epsilon}\right) \\ = O \left(\frac {\| \mathbf {X} \| _ {2} \cdot \big (\sqrt {k m \log (n / \delta)} \| \mathbf {X} \| _ {F} + m \| \mathbf {X} \| _ {2} \big)}{m \sigma_ {r} ^ {2} (\mathbf {X})} \log \frac {1}{\epsilon}\right) \\ = O \left(\frac {\| \mathbf {X} \| _ {2} \cdot \left(\sqrt {k r \log (n / \delta) / m} \| \mathbf {X} \| _ {2} + \| \mathbf {X} \| _ {2}\right)}{\sigma_ {r} ^ {2} (\mathbf {X})} \log \frac {1}{\epsilon}\right) \\ = O \left(\kappa \log {\frac {1}{\epsilon}}\right) \\ \end{array}
+$$
+
+for $m = \Omega (kr\log (n / \delta))$ , completing the proof.
+
+
+
+# A.4 PROOF OF THEOREM 3.6
+
+Proof of Theorem 3.6. The guarantee is already achieved by $\mathbf{W}^{(0)}$ if $\epsilon \geqslant L(\mathbf{W}^{(0)}) - L(\mathbf{W}^*)$ , so we may assume without loss of generality that $\epsilon < L(\mathbf{W}^{(0)}) - L(\mathbf{W}^*)$ .
+
+Similar to the proof of Theorem 3.1, we use the short-hand notations $\lambda_{A},\mu_{A},\lambda_{B}$ and $\mu_B$ to denote $\| \mathbf{A}\| _2,\sigma_{\min}(\mathbf{A}),\| \mathbf{B}\| _2$ and $\sigma_{\mathrm{min}}(\mathbf{B})$ respectively. Then we rewrite the condition on $\mathbf{A}$ and $\mathbf{B}$ , and our choices of $\eta$ and $T$ as follows
+
+$$
+\begin{array}{l} \frac {\mu_ {A} ^ {2} \mu_ {B} ^ {2}}{\lambda_ {A} \lambda_ {B}} \geqslant \frac {\sqrt {8 e ^ {3}} n \| \mathbf {X} \| _ {2} \cdot \log (L (\mathbf {W} ^ {(0)}) / \epsilon^ {\prime})}{B \sigma_ {r} ^ {2} (\mathbf {X})} \cdot \sqrt {2 L (\mathbf {W} ^ {(0)})} \\ \eta \leqslant \frac {B \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{6 e ^ {3} L n \lambda_ {A} ^ {4} \lambda_ {B} ^ {4} \| \mathbf {X} \| _ {2} ^ {2}} \cdot \min \left\{\frac {\epsilon^ {\prime}}{\| \mathbf {X} \| _ {2 , \infty} ^ {2} L (\mathbf {W} ^ {*})}, \frac {\log^ {2} (2) B}{3 n \| \mathbf {X} \| _ {2} ^ {2} \cdot \log (T / \delta) \log (L (\mathbf {W} ^ {(0)}) / \epsilon^ {\prime})} \right\}, \\ T = \frac {e}{\eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})} \cdot \log \left(\frac {L (\mathbf {W} ^ {(0)}) - L (\mathbf {W} ^ {*})}{\epsilon^ {\prime}}\right), \\ \end{array}
+$$
+
+where we set $\epsilon' = \epsilon / 3$ for the purpose of the proof.
+
+We first prove the convergence guarantees on expectation, and then apply the Markov inequality.
+
+For SGD, our guarantee is not made on the last iterate but on the best one. Define $\mathfrak{E}_t$ to be the event that there is no $s \leqslant t$ such that $L(\mathbf{W}^{(s)}) - L(\mathbf{W}^*) \leqslant \epsilon'$ . If $\mathbb{1}(\mathfrak{E}_t) = 0$ , then there is an iterate $\mathbf{W}^{(s)}$ with $s \leqslant t$ that achieves training loss within $\epsilon'$ of optimal.
+
+Similar to the proof of Theorem 3.1, we prove the theorem by induction on the update number $s$ , using the following inductive hypothesis: either $\mathbb{1}(\mathfrak{E}_s) = 0$ or the following three inequalities hold:
+
+(i) $\max_{l\in [L]}\| \mathbf{W}_l^{(s)}\| _F\leqslant \frac{\sqrt{2e}\, s\eta n\lambda_A\lambda_B\|\mathbf{X}\|_2}{B}\cdot \sqrt{2L(\mathbf{W}^{(0)})}$ .
+(ii) $\mathbb{E}\bigl [L(\mathbf{W}^{(s)}) - L(\mathbf{W}^{*})\bigr ]\leqslant \left(1 - \frac{\eta L\mu_A^2\mu_B^2\sigma_r^2(\mathbf{X})}{e}\right)^s\cdot \bigl (L(\mathbf{W}^{(0)}) - L(\mathbf{W}^*)\bigr)$
+(iii) $L(\mathbf{W}^{(s)})\leqslant 2L(\mathbf{W}^{(0)}),$
+
+where the expectation in Part (ii) is with respect to all of the random choices of minibatches. Clearly, if $\mathbb{1}(\mathfrak{E}_s) = 0$ , we have already finished the proof since there is an iterate that achieves training loss
+
+within $\epsilon'$ of optimal. Recalling that $\epsilon < L(\mathbf{W}^{(0)}) - L(\mathbf{W}^*)$ , it is easy to verify that the inductive hypothesis holds when $s = 0$ .
+
+For the inductive step, we will prove that if the inductive hypothesis holds for $s < t$ , then it holds for $s = t$ . When $\mathbb{1}(\mathfrak{E}_{t - 1}) = 0$ , then $\mathbb{1}(\mathfrak{E}_t)$ is also 0 and we are done. Therefore, the remaining part is to prove the inductive hypothesis for $s = t$ under the assumption that $\mathbb{1}(\mathfrak{E}_{t - 1}) = 1$ , which implies that (i), (ii) and (iii) hold for all $s\leqslant t - 1$ . For Parts (i) and (ii), we will directly prove that the corresponding two inequalities hold. For Part (iii), we will prove that either this inequality holds or $\mathbb{1}(\mathfrak{E}_t) = 0$ .
+
+Induction for Part (i): As we mentioned, this part will be proved under the assumption $\mathbb{1}(\mathfrak{E}_{t - 1}) = 1$ . Besides, combining Part (i) for $s = t - 1$ with our choices of $\eta$ and $T$ implies that $\max_{l\in [L]}\| \mathbf{W}_l^{(t - 1)}\| _F\leqslant 0.5 / L$ . Then by the triangle inequality, we have the following for $\| \mathbf{W}_l^{(t)}\| _F$ :
+
+$$
+\left\| \mathbf {W} _ {l} ^ {(t)} \right\| _ {F} \leqslant \left\| \mathbf {W} _ {l} ^ {(t - 1)} \right\| _ {F} + \eta \left\| \mathbf {G} _ {l} ^ {(t - 1)} \right\| _ {F}.
+$$
+
+By Lemma A.1, we have
+
+$$
+\left\| \mathbf {G} _ {l} ^ {(t - 1)} \right\| _ {F} \leqslant \frac {\sqrt {2 e} n \lambda_ {A} \lambda_ {B} \left\| \mathbf {X} \right\| _ {2}}{B} \cdot \sqrt {L (\mathbf {W} ^ {(t - 1)})}.
+$$
+
+Then we have
+
+$$
+\begin{array}{l} \left\| \mathbf {W} _ {l} ^ {(t)} \right\| _ {F} \leqslant \left(\left\| \mathbf {W} _ {l} ^ {(t - 1)} \right\| _ {F} + \eta \left\| \mathbf {G} _ {l} ^ {(t - 1)} \right\| _ {F}\right) \\ \leqslant \left\| \mathbf {W} _ {l} ^ {(t - 1)} \right\| _ {F} + \frac {\sqrt {2 e} \eta n \lambda_ {A} \lambda_ {B} \| \mathbf {X} \| _ {2}}{B} \cdot \sqrt {L \left(\mathbf {W} ^ {(t - 1)}\right)}. \tag {A.4} \\ \end{array}
+$$
+
+By Part (iii) for $s = t - 1$ , we know that $L(\mathbf{W}^{(t - 1)}) \leqslant 2L(\mathbf{W}^{(0)})$ . Then by Part (i) for $s = t - 1$ , it is evident that
+
+$$
+\left\| \mathbf {W} _ {l} ^ {(t)} \right\| _ {F} \leqslant \frac {\sqrt {2 e} \, t \eta n \lambda_ {A} \lambda_ {B} \left\| \mathbf {X} \right\| _ {2}}{B} \cdot \sqrt {2 L \left(\mathbf {W} ^ {(0)}\right)}. \tag {A.5}
+$$
+
+This completes the proof of the inductive step of Part (i).
+
+Induction for Part (ii): As we previously mentioned, we will prove this part under the assumption $\mathbb{1}(\mathfrak{E}_{t - 1}) = 1$ . Thus, as mentioned earlier, the inductive hypothesis implies that $\max_{l\in [L]}\| \mathbf{W}_l^{(t - 1)}\| _F\leqslant 0.5 / L$ . By Part (i) for $s = t$ , which has been verified in (A.5), we also have $\max_{l\in [L]}\| \mathbf{W}_l^{(t)}\| _F\leqslant 0.5 / L$ . Then by Lemma A.2 we have
+
+$$
+\begin{array}{l} L (\mathbf {W} ^ {(t)}) - L (\mathbf {W} ^ {(t - 1)}) \leqslant - \eta \sum_ {l = 1} ^ {L} \left\langle \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W} ^ {(t - 1)}), \mathbf {G} _ {l} ^ {(t - 1)} \right\rangle \\ + \eta^ {2} L \lambda_ {A} \lambda_ {B} \| \mathbf {X} \| _ {2} \cdot \left(\sqrt {e L \left(\mathbf {W} ^ {(t - 1)}\right)} + 0. 5 e \lambda_ {A} \lambda_ {B} \| \mathbf {X} \| _ {2}\right) \cdot \sum_ {l = 1} ^ {L} \| \mathbf {G} _ {l} ^ {(t - 1)} \| _ {F} ^ {2}. \tag {A.6} \\ \end{array}
+$$
+
+By our condition on $\mathbf{A}$ and $\mathbf{B}$ , it is easy to verify that
+
+$$
+\lambda_ {A} \lambda_ {B} \geqslant \frac {\mu_ {A} ^ {2} \mu_ {B} ^ {2}}{\lambda_ {A} \lambda_ {B}} \geqslant \frac {2 \sqrt {2 e ^ {- 1} L (\mathbf {W} ^ {(0)})}}{\| \mathbf {X} \| _ {2}}.
+$$
+
+Then by Part (iii) for $s = t - 1$ , (A.6) yields
+
+$$
+L \left(\mathbf {W} ^ {(t)}\right) - L \left(\mathbf {W} ^ {(t - 1)}\right) \leqslant - \eta \sum_ {l = 1} ^ {L} \left\langle \nabla_ {\mathbf {W} _ {l}} L \left(\mathbf {W} ^ {(t - 1)}\right), \mathbf {G} _ {l} ^ {(t - 1)} \right\rangle + e \eta^ {2} L \lambda_ {A} ^ {2} \lambda_ {B} ^ {2} \| \mathbf {X} \| _ {2} ^ {2} \cdot \sum_ {l = 1} ^ {L} \| \mathbf {G} _ {l} ^ {(t - 1)} \| _ {F} ^ {2}. \tag {A.7}
+$$
+
+Taking expectation conditioning on $\mathbf{W}^{(t - 1)}$ gives
+
+$$
+\begin{array}{l} \mathbb {E} \left[ L (\mathbf {W} ^ {(t)}) | \mathbf {W} ^ {(t - 1)} \right] - L (\mathbf {W} ^ {(t - 1)}) \leqslant - \eta \sum_ {l = 1} ^ {L} \| \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W} ^ {(t - 1)}) \| _ {F} ^ {2} \\ + e \eta^ {2} L \lambda_ {A} ^ {2} \lambda_ {B} ^ {2} \| \mathbf {X} \| _ {2} ^ {2} \sum_ {l = 1} ^ {L} \mathbb {E} \left[ \| \mathbf {G} _ {l} ^ {(t - 1)} \| _ {F} ^ {2} | \mathbf {W} ^ {(t - 1)} \right]. \tag {A.8} \\ \end{array}
+$$
+
+Note that, for $i$ sampled uniformly from $\{1,\dots,n\}$ , the expectation $\mathbb{E}[\| \mathbf{G}_l^{(t - 1)}\| _F^2 |\mathbf{W}^{(t - 1)}]$ can be upper bounded by
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \| \mathbf {G} _ {l} ^ {(t - 1)} \| _ {F} ^ {2} | \mathbf {W} ^ {(t - 1)} \right] = \mathbb {E} \left[ \| \mathbf {G} _ {l} ^ {(t - 1)} - \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W} ^ {(t - 1)}) \| _ {F} ^ {2} | \mathbf {W} ^ {(t - 1)} \right] + \| \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W} ^ {(t - 1)}) \| _ {F} ^ {2} \\ \leqslant \frac {n ^ {2}}{B} \mathbb {E} \left[ \| \nabla_ {\mathbf {W} _ {l}} \ell \left(\mathbf {W} ^ {(t - 1)}; \mathbf {x} _ {i}, \mathbf {y} _ {i}\right) \| _ {F} ^ {2} \mid \mathbf {W} ^ {(t - 1)} \right] + \| \nabla_ {\mathbf {W} _ {l}} L \left(\mathbf {W} ^ {(t - 1)}\right) \| _ {F} ^ {2}. \tag {A.9} \\ \end{array}
+$$
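+
+The first equality in (A.9) is the standard bias-variance decomposition of the mini-batch gradient, which holds because $\mathbb{E}[\mathbf{G}_l^{(t-1)} \mid \mathbf{W}^{(t-1)}] = \nabla_{\mathbf{W}_l} L(\mathbf{W}^{(t-1)})$ . A quick numerical sanity check of this identity in the simplest case $B = 1$ (all gradient values below are hypothetical toy numbers):
+
+```python
+import numpy as np
+
+# With B = 1 the stochastic gradient is G = n * grad_i for a uniform index i,
+# so E[G] equals the gradient of the summed loss L = sum_i l_i.
+rng = np.random.default_rng(0)
+n = 5
+grads = rng.standard_normal((n, 3))      # hypothetical per-sample gradients
+full = grads.sum(axis=0)                 # gradient of the summed loss
+
+second_moment = np.mean([np.linalg.norm(n * g) ** 2 for g in grads])    # E||G||^2
+variance = np.mean([np.linalg.norm(n * g - full) ** 2 for g in grads])  # E||G - E G||^2
+assert np.isclose(second_moment, variance + np.linalg.norm(full) ** 2)
+```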
+
+By Lemma A.1, we have
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \| \nabla_ {\mathbf {W} _ {l}} \ell \left(\mathbf {W} ^ {(t - 1)}; \mathbf {x} _ {i}, \mathbf {y} _ {i}\right) \| _ {F} ^ {2} | \mathbf {W} ^ {(t - 1)} \right] \leqslant 2 e \lambda_ {A} ^ {2} \lambda_ {B} ^ {2} \mathbb {E} \left[ \| \mathbf {x} _ {i} \| _ {2} ^ {2} \ell \left(\mathbf {W} ^ {(t - 1)}; \mathbf {x} _ {i}, \mathbf {y} _ {i}\right) | \mathbf {W} ^ {(t - 1)} \right] \\ \leqslant \frac {2 e \lambda_ {A} ^ {2} \lambda_ {B} ^ {2}}{n} \sum_ {i = 1} ^ {n} \| \mathbf {x} _ {i} \| _ {2} ^ {2} \ell \left(\mathbf {W} ^ {(t - 1)}; \mathbf {x} _ {i}, \mathbf {y} _ {i}\right) \\ \leqslant \frac {2 e \lambda_ {A} ^ {2} \lambda_ {B} ^ {2} \| \mathbf {X} \| _ {2 , \infty} ^ {2} L (\mathbf {W} ^ {(t - 1)})}{n}. \\ \end{array}
+$$
+
+Plugging the above inequality into (A.9) and (A.8), we get
+
+$$
+\begin{array}{l} \mathbb {E} \left[ L \left(\mathbf {W} ^ {(t)}\right) \mid \mathbf {W} ^ {(t - 1)} \right] - L \left(\mathbf {W} ^ {(t - 1)}\right) \\ \leqslant - \eta \sum_ {l = 1} ^ {L} \| \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W} ^ {(t - 1)}) \| _ {F} ^ {2} \\ + e \eta^ {2} L \lambda_ {A} ^ {2} \lambda_ {B} ^ {2} \| \mathbf {X} \| _ {2} ^ {2} \cdot \sum_ {l = 1} ^ {L} \left(\frac {2 e n \lambda_ {A} ^ {2} \lambda_ {B} ^ {2} \| \mathbf {X} \| _ {2 , \infty} ^ {2} L (\mathbf {W} ^ {(t - 1)})}{B} + \| \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W} ^ {(t - 1)}) \| _ {F} ^ {2}\right). \\ \end{array}
+$$
+
+Recalling that $\eta \leqslant 1 / (6eL\lambda_A^2\lambda_B^2\| \mathbf{X}\| _2^2)$ , we have
+
+$$
+\begin{array}{l} \mathbb {E} \left[ L (\mathbf {W} ^ {(t)}) | \mathbf {W} ^ {(t - 1)} \right] - L (\mathbf {W} ^ {(t - 1)}) \leqslant - \frac {5 \eta}{6} \sum_ {l = 1} ^ {L} \| \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W} ^ {(t - 1)}) \| _ {F} ^ {2} \\ + \frac {2 e ^ {2} \eta^ {2} L ^ {2} n \lambda_ {A} ^ {4} \lambda_ {B} ^ {4} \| \mathbf {X} \| _ {2} ^ {2} \| \mathbf {X} \| _ {2 , \infty} ^ {2} L \left(\mathbf {W} ^ {(t - 1)}\right)}{B}. \tag {A.10} \\ \end{array}
+$$
+
+By Lemma A.1, we have
+
+$$
+\sum_ {l = 1} ^ {L} \| \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W} ^ {(t - 1)}) \| _ {F} ^ {2} \geqslant 2 e ^ {- 1} L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X}) \big (L (\mathbf {W} ^ {(t - 1)}) - L (\mathbf {W} ^ {*}) \big).
+$$
+
+If we set
+
+$$
+\eta \leqslant \frac {B \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{6 e ^ {3} L n \lambda_ {A} ^ {4} \lambda_ {B} ^ {4} \| \mathbf {X} \| _ {2} ^ {2} \| \mathbf {X} \| _ {2 , \infty} ^ {2}}, \tag {A.11}
+$$
+
+then (A.10) yields
+
+$$
+\begin{array}{l} \mathbb {E} \left[ L \left(\mathbf {W} ^ {(t)}\right) \mid \mathbf {W} ^ {(t - 1)} \right] - L \left(\mathbf {W} ^ {(t - 1)}\right) \\ \leqslant - \frac {5 \eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{3 e} \left(L \left(\mathbf {W} ^ {(t - 1)}\right) - L \left(\mathbf {W} ^ {*}\right)\right) \\ + \frac {2 e ^ {2} \eta^ {2} L ^ {2} n \lambda_ {A} ^ {4} \lambda_ {B} ^ {4} \| \mathbf {X} \| _ {2} ^ {2} \| \mathbf {X} \| _ {2 , \infty} ^ {2} \left(L \left(\mathbf {W} ^ {(t - 1)}\right) - L \left(\mathbf {W} ^ {*}\right)\right)}{B} \\ + \frac {2 e ^ {2} \eta^ {2} L ^ {2} n \lambda_ {A} ^ {4} \lambda_ {B} ^ {4} \| \mathbf {X} \| _ {2} ^ {2} \| \mathbf {X} \| _ {2 , \infty} ^ {2} L (\mathbf {W} ^ {*})}{B} \\ \leqslant - \frac {4 \eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{3 e} \left(L \left(\mathbf {W} ^ {(t - 1)}\right) - L \left(\mathbf {W} ^ {*}\right)\right) + \frac {2 e ^ {2} \eta^ {2} L ^ {2} n \lambda_ {A} ^ {4} \lambda_ {B} ^ {4} \| \mathbf {X} \| _ {2} ^ {2} \| \mathbf {X} \| _ {2 , \infty} ^ {2} L \left(\mathbf {W} ^ {*}\right)}{B}. \tag {A.12} \\ \end{array}
+$$
+
+Define
+
+$$
+\gamma_ {0} = \frac {4 L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{3 e}, \quad \mathrm {and} \quad \gamma_ {1} = \frac {2 e ^ {2} L ^ {2} n \lambda_ {A} ^ {4} \lambda_ {B} ^ {4} \| \mathbf {X} \| _ {2} ^ {2} \| \mathbf {X} \| _ {2 , \infty} ^ {2} L (\mathbf {W} ^ {*})}{B},
+$$
+
+rearranging (A.12) further gives
+
+$$
+\mathbb {E} \left[ L \left(\mathbf {W} ^ {(t)}\right) \mid \mathbf {W} ^ {(t - 1)} \right] - L \left(\mathbf {W} ^ {*}\right) \leqslant \left(1 - \eta \gamma_ {0}\right) \cdot \left(L \left(\mathbf {W} ^ {(t - 1)}\right) - L \left(\mathbf {W} ^ {*}\right)\right) + \eta^ {2} \gamma_ {1}. \tag {A.13}
+$$
+
+Therefore, setting the step size as
+
+$$
+\eta \leqslant \frac {\gamma_ {0} \epsilon^ {\prime}}{4 \gamma_ {1}} = \frac {B \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{6 e ^ {3} L n \lambda_ {A} ^ {4} \lambda_ {B} ^ {4} \| \mathbf {X} \| _ {2} ^ {2} \| \mathbf {X} \| _ {2 , \infty} ^ {2}} \cdot \frac {\epsilon^ {\prime}}{L (\mathbf {W} ^ {*})},
+$$
+
+we further have
+
+$$
+\begin{array}{l} \mathbb {E} \left[ L \left(\mathbf {W} ^ {(t)}\right) - L \left(\mathbf {W} ^ {*}\right) \mid \mathbf {W} ^ {(t - 1)} \right] \leqslant (1 - \eta \gamma_ {0}) \cdot \left[ L \left(\mathbf {W} ^ {(t - 1)}\right) - L \left(\mathbf {W} ^ {*}\right) \right] + \eta^ {2} \gamma_ {1} \\ \leqslant \left(1 - 3 \eta \gamma_ {0} / 4\right) \cdot \left[ L \left(\mathbf {W} ^ {(t - 1)}\right) - L \left(\mathbf {W} ^ {*}\right) \right], \tag {A.14} \\ \end{array}
+$$
+
+where the first inequality is by (A.13) and the second is by the fact that we assume $\mathbb{1}(\mathfrak{E}_{t - 1}) = 1$ , which implies that $L(\mathbf{W}^{(t - 1)}) - L(\mathbf{W}^*)\geqslant \epsilon '\geqslant 4\gamma_1\eta /\gamma_0$ . Further taking expectation over $\mathbf{W}^{(t - 1)}$ , we get
+
+$$
+\begin{array}{l} \mathbb {E} \left[ L \left(\mathbf {W} ^ {(t)}\right) - L \left(\mathbf {W} ^ {*}\right) \right] \leqslant \left(1 - 3 \eta \gamma_ {0} / 4\right) \cdot \mathbb {E} \left[ L \left(\mathbf {W} ^ {(t - 1)}\right) - L \left(\mathbf {W} ^ {*}\right) \right] \\ \leqslant \left(1 - 3 \eta \gamma_ {0} / 4\right) ^ {t} \cdot \left(L \left(\mathbf {W} ^ {(0)}\right) - L \left(\mathbf {W} ^ {*}\right)\right), \\ \end{array}
+$$
+
+where the second inequality follows from Part (ii) for $s = t - 1$ and the assumption that $\mathbb{1}(\mathfrak{E}_0) = 1$ . Plugging in the definition of $\gamma_0$ , we complete the proof of the inductive step of Part (ii).
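+
+The scalar absorption step behind (A.14) can be sanity-checked numerically: once the suboptimality $x$ satisfies $x \geqslant 4\gamma_1\eta/\gamma_0$ , the additive noise term $\eta^2\gamma_1$ in (A.13) is dominated by a quarter of the contraction. The constants below are arbitrary positive test values:
+
+```python
+# Check: if x >= 4*g1*eta/g0, then (1 - eta*g0)*x + eta^2*g1 <= (1 - 0.75*eta*g0)*x.
+eta, g0, g1 = 1e-3, 2.0, 5.0          # arbitrary test values for eta, gamma_0, gamma_1
+threshold = 4 * g1 * eta / g0
+checks = []
+for x in [threshold, 0.1, 1.0, 10.0]:
+    lhs = (1 - eta * g0) * x + eta ** 2 * g1   # right-hand side of (A.13)
+    rhs = (1 - 0.75 * eta * g0) * x            # right-hand side of (A.14)
+    checks.append(lhs <= rhs + 1e-12)
+assert all(checks)
+```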
+
+Induction for Part (iii): Recall that for this part, we are going to prove that either $L(\mathbf{W}^{(t)}) \leqslant 2L(\mathbf{W}^{(0)})$ or $\mathbb{1}(\mathfrak{E}_t) = 0$ , which is equivalent to $L(\mathbf{W}^{(t)}) \cdot \mathbb{1}(\mathfrak{E}_t) \leqslant 2L(\mathbf{W}^{(0)})$ since $L(\mathbf{W}^{(0)})$ and $L(\mathbf{W}^{(t)})$ are both positive. We will prove this by a martingale inequality. Let $\mathcal{F}_t = \sigma \{\mathbf{W}^{(0)},\dots ,\mathbf{W}^{(t)}\}$ be a $\sigma$ -algebra, and $\mathbb{F} = \{\mathcal{F}_t\}_{t\geqslant 1}$ be a filtration. We first prove that $\mathbb{E}[L(\mathbf{W}^{(t)})\mathbb{1}(\mathfrak{E}_t)|\mathcal{F}_{t - 1}] \leqslant L(\mathbf{W}^{(t - 1)})\mathbb{1}(\mathfrak{E}_{t - 1})$ . Clearly, this inequality holds when $\mathbb{1}(\mathfrak{E}_{t - 1}) = 0$ since both sides will be zero. Then if $\mathbb{1}(\mathfrak{E}_{t - 1}) = 1$ , by (A.14) we have $\mathbb{E}[L(\mathbf{W}^{(t)})|\mathbf{W}^{(t - 1)}] \leqslant L(\mathbf{W}^{(t - 1)})$ since $L(\mathbf{W}^*)$ is the global minimum. Therefore,
+
+$$
+\begin{array}{l} \mathbb {E} \left[ L \left(\mathbf {W} ^ {(t)}\right) \mathbb {1} \left(\mathfrak {E} _ {t}\right) \mid \mathcal {F} _ {t - 1}, \mathbf {W} ^ {(t - 1)}, \mathbb {1} \left(\mathfrak {E} _ {t - 1}\right) = 1 \right] \leqslant \mathbb {E} \left[ L \left(\mathbf {W} ^ {(t)}\right) \mid \mathcal {F} _ {t - 1}, \mathbb {1} \left(\mathfrak {E} _ {t - 1}\right) = 1 \right] \\ \leqslant L \left(\mathbf {W} ^ {(t - 1)}\right). \\ \end{array}
+$$
+
+Combining these two cases, by Jensen's inequality, we further have
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \log \left(L \left(\mathbf {W} ^ {(t)}\right) \mathbb {1} \left(\mathfrak {E} _ {t}\right)\right) | \mathcal {F} _ {t - 1} \right] \leqslant \log \left(\mathbb {E} \left[ L \left(\mathbf {W} ^ {(t)}\right) \mathbb {1} \left(\mathfrak {E} _ {t}\right) | \mathcal {F} _ {t - 1} \right]\right) \\ \leqslant \log \left(L (\mathbf {W} ^ {(t - 1)}) \mathbb {1} (\mathfrak {E} _ {t - 1})\right), \\ \end{array}
+$$
+
+which implies that $\{\log \big(L(\mathbf{W}^{(t)})\cdot \mathbb{1}(\mathfrak{E}_t)\big)\}_{t\geqslant 0}$ is a super-martingale. Then we will upper bound the martingale difference $\log \big(L(\mathbf{W}^{(t)})\cdot \mathbb{1}(\mathfrak{E}_t)\big) - \log \big(L(\mathbf{W}^{(t - 1)})\cdot \mathbb{1}(\mathfrak{E}_{t - 1})\big)$ . Clearly this quantity would be zero if $\mathbb{1}(\mathfrak{E}_{t - 1}) = 0$ . Then if $\mathbb{1}(\mathfrak{E}_{t - 1}) = 1$ , by (A.7) we have
+
+$$
+L \left(\mathbf {W} ^ {(t)}\right) \leqslant L \left(\mathbf {W} ^ {(t - 1)}\right) + \eta \sum_ {l = 1} ^ {L} \| \nabla_ {\mathbf {W} _ {l}} L \left(\mathbf {W} ^ {(t - 1)}\right) \| _ {F} \| \mathbf {G} _ {l} ^ {(t - 1)} \| _ {F} + e \eta^ {2} L \lambda_ {A} ^ {2} \lambda_ {B} ^ {2} \| \mathbf {X} \| _ {2} ^ {2} \sum_ {l = 1} ^ {L} \| \mathbf {G} _ {l} ^ {(t - 1)} \| _ {F} ^ {2}.
+$$
+
+By Part (i) for $s = t - 1$ and Lemma A.1, we further have
+
+$$
+\begin{array}{l} L \left(\mathbf {W} ^ {(t)}\right) \leqslant \left(1 + \frac {2 e \eta L n \lambda_ {A} ^ {2} \lambda_ {B} ^ {2} \| \mathbf {X} \| _ {2} ^ {2}}{B} + \frac {2 e ^ {2} n ^ {2} \eta^ {2} L ^ {2} \lambda_ {A} ^ {4} \lambda_ {B} ^ {4} \| \mathbf {X} \| _ {2} ^ {4}}{B ^ {2}}\right) L \left(\mathbf {W} ^ {(t - 1)}\right) \\ \leqslant \left(1 + \frac {3 e \eta n L \lambda_ {A} ^ {2} \lambda_ {B} ^ {2} \| \mathbf {X} \| _ {2} ^ {2}}{B}\right) L \left(\mathbf {W} ^ {(t - 1)}\right), \tag {A.15} \\ \end{array}
+$$
+
+where the second inequality follows from the choice of $\eta$ that
+
+$$
+\eta \leqslant \frac {B}{2 e n L \lambda_ {A} ^ {2} \lambda_ {B} ^ {2} \| \mathbf {X} \| _ {2} ^ {2}}.
+$$
+
+Using the fact that $\mathbb{1}(\mathfrak{E}_t) \leqslant 1$ and $\mathbb{1}(\mathfrak{E}_{t-1}) = 1$ , we further have
+
+$$
+\log \left(L (\mathbf {W} ^ {(t)}) \cdot \mathbb {1} (\mathfrak {E} _ {t})\right) \leqslant \log \left(L (\mathbf {W} ^ {(t - 1)}) \cdot \mathbb {1} (\mathfrak {E} _ {t - 1})\right) + \frac {3 e \eta L n \lambda_ {A} ^ {2} \lambda_ {B} ^ {2} \| \mathbf {X} \| _ {2} ^ {2}}{B},
+$$
+
+which also holds for the case $\mathbb{1}(\mathfrak{E}_{t - 1}) = 0$ . Recall that $\{\log \big(L(\mathbf{W}^{(t)})\cdot \mathbb{1}(\mathfrak{E}_t)\big)\}_{t\geqslant 0}$ is a supermartingale; thus by the one-sided Azuma's inequality, we have with probability at least $1 - \delta^{\prime}$ ,
+
+$$
+\log \left(L (\mathbf {W} ^ {(t)}) \cdot \mathbb {1} (\mathfrak {E} _ {t})\right) \leqslant \log \left(L (\mathbf {W} ^ {(0)})\right) + \frac {3 e \eta L n \lambda_ {A} ^ {2} \lambda_ {B} ^ {2} \| \mathbf {X} \| _ {2} ^ {2}}{B} \cdot \sqrt {2 t \log (1 / \delta^ {\prime})}.
+$$
+
+Setting $\delta' = \delta / T$ and using the fact that $t \leqslant T$ , our choice of $T$ and $\eta$ ensures that
+
+$$
+\sqrt {T} \eta \leqslant \frac {\log (2) B}{3 e \sqrt {2 \log (T / \delta)} L n \lambda_ {A} ^ {2} \lambda_ {B} ^ {2} \| \mathbf {X} \| _ {2} ^ {2}},
+$$
+
+which implies that
+
+$$
+L \left(\mathbf {W} ^ {(t)}\right) \mathbb {1} \left(\mathfrak {E} _ {t}\right) \leqslant \exp \left[ \log \left(L \left(\mathbf {W} ^ {(0)}\right)\right) + \log (2) \right] \leqslant 2 L \left(\mathbf {W} ^ {(0)}\right). \tag {A.16}
+$$
+
+This completes the proof of the inductive step of Part (iii).
+
+Note that this result holds with probability at least $1 - \delta / T$ . Thus applying union bound over all iterates $\{\mathbf{W}^{(t)}\}_{t = 0,\dots,T}$ yields that all induction arguments hold for all $t \leqslant T$ with probability at least $1 - \delta$ .
+
+Moreover, plugging our choice of $T$ and $\eta$ into Part (ii) gives
+
+$$
+\mathbb {E} \left[ L \left(\mathbf {W} ^ {(T)}\right) - L \left(\mathbf {W} ^ {*}\right) \right] \leqslant \epsilon^ {\prime}.
+$$
+
+By Markov's inequality, we further have that with probability at least $2/3$ , it holds that $[L(\mathbf{W}^{(T)}) - L(\mathbf{W}^{*})] \cdot \mathbb{1}(\mathfrak{E}_T) \leqslant 3\epsilon' = \epsilon$ . Therefore, by the union bound (together with the high-probability argument for (A.16)) and assuming $\delta < 1/6$ , we have with probability at least $2/3 - \delta \geqslant 1/2$ that one of the iterates of SGD achieves training loss within $\epsilon$ of optimal. This completes the proof.
+
+# A.5 PROOF OF COROLLARY 3.7
+
+Proof of Corollary 3.7. Recall the condition in Theorem 3.6:
+
+$$
+\frac {\sigma_ {\min } ^ {2} (\mathbf {A}) \sigma_ {\min } ^ {2} (\mathbf {B})}{\| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2}} \geqslant C \cdot \frac {n \| \mathbf {X} \| _ {2} \cdot \log (L \left(\mathbf {W} ^ {(0)}\right) / \epsilon)}{B \sigma_ {r} ^ {2} (\mathbf {X})} \cdot \sqrt {L \left(\mathbf {W} ^ {(0)}\right)}, \tag {A.17}
+$$
+
+Then plugging in the results in Proposition 3.3 and the fact that $\| \mathbf{X}\| _F\leqslant \sqrt{r}\| \mathbf{X}\| _2$ , we obtain that condition (A.17) can be satisfied if $m = O\big(kr\kappa^2\log^2 (1 / \epsilon)\cdot B / n\big)$ .
+
+In addition, consider sufficiently small $\epsilon$ such that $\epsilon \leqslant \widetilde{O}\big(B\| \mathbf{X}\|_{2,\infty}^2 /(n\| \mathbf{X}\|_2^2)\big)$ . Then, using the fact that $\| \mathbf{X}\|_{2,\infty}\leqslant \| \mathbf{X}\|_2$ , we have $\eta = O\big(kB\epsilon /(Lm n\kappa \| \mathbf{X}\|_2^2)\big)$ based on the results in Proposition 3.3. Then in order to achieve $\epsilon$ -suboptimal training loss, the iteration complexity is
+
+$$
+T = \frac {e}{\eta L \sigma_ {\mathrm {m i n}} ^ {2} (\mathbf {A}) \sigma_ {\mathrm {m i n}} ^ {2} (\mathbf {B}) \sigma_ {r} ^ {2} (\mathbf {X})} \log \left(\frac {L (\mathbf {W} ^ {(0)}) - L (\mathbf {W} ^ {*})}{\epsilon}\right) = O \big (\kappa^ {2} \epsilon^ {- 1} \log (1 / \epsilon) \cdot n / B \big).
+$$
+
+This completes the proof.
+
+# A.6 PROOF OF THEOREM 3.8
+
+Proof of Theorem 3.8. Similar to the proof of Theorem 3.6, we set the neural network width and step size as follows,
+
+$$
+\frac {\mu_ {A} ^ {2} \mu_ {B} ^ {2}}{\lambda_ {A} \lambda_ {B}} \geqslant \frac {4 \sqrt {2 e ^ {3}} n \| \mathbf {X} \| _ {2}}{B \sigma_ {r} ^ {2} (\mathbf {X})} \cdot \sqrt {2 L (\mathbf {W} ^ {(0)})}
+$$
+
+$$
+\eta \leqslant \frac {\log (2) B ^ {2} \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{96 e ^ {3} L n ^ {2} \lambda_ {A} ^ {4} \lambda_ {B} ^ {4} \| \mathbf {X} \| _ {2} ^ {4} \cdot \log (T / \delta)},
+$$
+
+where $\lambda_{A},\mu_{A},\lambda_{B}$ and $\mu_B$ denote $\| \mathbf{A}\| _2,\sigma_{\mathrm{min}}(\mathbf{A}),\| \mathbf{B}\| _2$ and $\sigma_{\mathrm{min}}(\mathbf{B})$ respectively.
+
+Different from the proof of Theorem 3.6, the convergence guarantee established in this regime is for the last iterate of SGD, rather than the best one. We will prove the theorem by induction on the iteration number $t$ , using the following two-part inductive hypothesis:
+
+(i) $\max_{l\in [L]}\| \mathbf{W}_l^{(t)}\| _F\leqslant 0.5 / L$
+(ii) $L(\mathbf{W}^{(t)})\leqslant 2L(\mathbf{W}^{(0)})\cdot \left(1 - \frac{\eta L\mu_A^2\mu_B^2\sigma_r^2(\mathbf{X})}{e}\right)^t.$
+
+Induction for Part (i): We first prove that $\max_{l\in [L]}\| \mathbf{W}_l^{(t)}\| _F\leqslant 0.5 / L$ . By the triangle inequality and the update rule of SGD, we have
+
+$$
+\begin{array}{l} \left\| \mathbf {W} _ {l} ^ {(t)} \right\| _ {F} \leqslant \sum_ {s = 0} ^ {t - 1} \eta \left\| \mathbf {G} _ {l} ^ {(s)} \right\| _ {F} \\ \leqslant \eta \sum_ {s = 0} ^ {t - 1} \frac {\sqrt {2 e} n \lambda_ {A} \lambda_ {B} \| \mathbf {X} \| _ {2}}{B} \left(L (\mathbf {W} ^ {(s)}) - L (\mathbf {W} ^ {*})\right) ^ {1 / 2} \\ \leqslant \frac {\sqrt {2 e} \eta n \lambda_ {A} \lambda_ {B} \| \mathbf {X} \| _ {2}}{B} \cdot \left(L (\mathbf {W} ^ {(0)}) - L (\mathbf {W} ^ {*})\right) ^ {1 / 2} \cdot \sum_ {s = 0} ^ {t - 1} \left(1 - \frac {\eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{2 e}\right) ^ {s} \\ \leqslant \frac {\sqrt {8 e ^ {3}} n \lambda_ {A} \lambda_ {B} \| \mathbf {X} \| _ {2}}{B L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})} \cdot \left(L (\mathbf {W} ^ {(0)}) - L (\mathbf {W} ^ {*})\right) ^ {1 / 2}, \\ \end{array}
+$$
+
+where the second inequality is by Lemma A.1, the third inequality follows from Part (ii) for all $s < t$ together with the fact that $(1 - x)^{1/2} \leqslant 1 - x/2$ for all $x \in [0,1]$ , and the last inequality follows from bounding the geometric series by $2e / (\eta L \mu_A^2 \mu_B^2 \sigma_r^2(\mathbf{X}))$ . Then applying our condition on $\mathbf{A}$ and $\mathbf{B}$ implies that $\| \mathbf{W}_l^{(t)} \|_F \leqslant 0.5 / L$ .
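+
+Two elementary facts are used in the chain above: $(1-x)^{1/2} \leqslant 1 - x/2$ on $[0,1]$ , and the geometric series bound $\sum_{s\geqslant 0}(1 - x/2)^s \leqslant 2/x$ . A quick numerical check (the test values of $x$ are arbitrary):
+
+```python
+# (1 - x)^(1/2) <= 1 - x/2 on [0, 1], by concavity of the square root
+xs = [0.01, 0.2, 0.5, 0.9, 1.0]
+sqrt_ok = all((1 - x) ** 0.5 <= 1 - x / 2 + 1e-12 for x in xs)
+
+# partial sums of the geometric series sum_s (1 - x/2)^s never exceed 2/x
+x = 0.1
+partial = sum((1 - x / 2) ** s for s in range(100_000))
+assert sqrt_ok and partial <= 2 / x
+```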
+
+Induction for Part (ii): Similar to Parts (ii) and (iii) of the induction step in the proof of Theorem 3.6, we first prove the convergence in expectation, and then use Azuma's inequality to obtain the high-probability result. It can be simply verified that
+
+$$
+\begin{array}{l} \lambda_ {A} \lambda_ {B} \geqslant \frac {\mu_ {A} ^ {2} \mu_ {B} ^ {2}}{\lambda_ {A} \lambda_ {B}} \geqslant \frac {4 \sqrt {2 e ^ {3}} n \| \mathbf {X} \| _ {2}}{B \sigma_ {r} ^ {2} (\mathbf {X})} \cdot \sqrt {2 L (\mathbf {W} ^ {(0)})} \geqslant \frac {2 \sqrt {2 e ^ {- 1} L (\mathbf {W} ^ {(0)})}}{\| \mathbf {X} \| _ {2}} \\ \eta \leqslant \frac {\log (2) B ^ {2} \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{96 e ^ {3} L n ^ {2} \lambda_ {A} ^ {4} \lambda_ {B} ^ {4} \| \mathbf {X} \| _ {2} ^ {4} \cdot \log (T / \delta)} \leqslant \frac {B \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{6 e ^ {3} L n \lambda_ {A} ^ {4} \lambda_ {B} ^ {4} \| \mathbf {X} \| _ {2} ^ {2} \| \mathbf {X} \| _ {2 , \infty} ^ {2}}. \\ \end{array}
+$$
+
+Thus, we can leverage (A.12) and obtain
+
+$$
+\mathbb {E} \big [ L (\mathbf {W} ^ {(t)}) | \mathbf {W} ^ {(t - 1)} \big ] - L (\mathbf {W} ^ {(t - 1)}) \leqslant - \frac {4 \eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{3 e} L (\mathbf {W} ^ {(t - 1)}),
+$$
+
+where we use the fact that $L(\mathbf{W}^{*}) = 0$ . Then by Jensen's inequality, we have
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \log \left(L (\mathbf {W} ^ {(t)})\right) | \mathbf {W} ^ {(t - 1)} \right] \leqslant \log \left(L (\mathbf {W} ^ {(t - 1)})\right) + \log \left(1 - \frac {4 \eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{3 e}\right), \\ \leqslant \log \left(L (\mathbf {W} ^ {(t - 1)})\right) - \frac {4 \eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{3 e}, \\ \end{array}
+$$
+
+where the second inequality is by $\log (1 + x)\leqslant x$ . Then similar to the proof of Theorem 3.6, we are going to apply a martingale inequality to prove this part. Let $\mathcal{F}_t = \sigma \{\mathbf{W}^{(0)},\dots ,\mathbf{W}^{(t)}\}$ be a $\sigma$ -algebra, and $\mathbb{F} = \{\mathcal{F}_t\}_{t\geqslant 1}$ be a filtration. Then the above inequality implies that
+
+$$
+\mathbb {E} \left[ \log \left(L \left(\mathbf {W} ^ {(t)}\right)\right) \mid \mathcal {F} _ {t - 1} \right] + \frac {4 t \eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{3 e} \leqslant \log \left(L \left(\mathbf {W} ^ {(t - 1)}\right)\right) + \frac {4 (t - 1) \eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{3 e}, \tag {A.18}
+$$
+
+which implies that $\left\{\log \left(L(\mathbf{W}^{(t)})\right) + 4t\eta L\mu_A^2\mu_B^2\sigma_r^2 (\mathbf{X}) / (3e)\right\}$ is a super-martingale. Besides, by (A.15), we can obtain
+
+$$
+\log \left(L (\mathbf {W} ^ {(t)})\right) \leqslant \log \left(L (\mathbf {W} ^ {(t - 1)})\right) + \frac {3 e \eta L n \lambda_ {A} ^ {2} \lambda_ {B} ^ {2} \| \mathbf {X} \| _ {2} ^ {2}}{B},
+$$
+
+which implies that
+
+$$
+\begin{array}{l} \log \left(L (\mathbf {W} ^ {(t)})\right) + \frac {4 t \eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{3 e} \\ \leqslant \log \left(L (\mathbf {W} ^ {(t - 1)})\right) + \frac {4 (t - 1) \eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{3 e} + \frac {4 e \eta L n \lambda_ {A} ^ {2} \lambda_ {B} ^ {2} \| \mathbf {X} \| _ {2} ^ {2}}{B}, \\ \end{array}
+$$
+
+where we again use the fact that $\log (1 + x) \leqslant x$ . Thus, by the one-sided Azuma's inequality we have with probability at least $1 - \delta'$ that
+
+$$
+\begin{array}{l} \log \left(L (\mathbf {W} ^ {(t)})\right) \leqslant \log \left(L (\mathbf {W} ^ {(0)})\right) - \frac {4 t \eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{3 e} + \frac {4 e \eta L n \lambda_ {A} ^ {2} \lambda_ {B} ^ {2} \| \mathbf {X} \| _ {2} ^ {2}}{B} \cdot \sqrt {2 t \log (1 / \delta^ {\prime})} \\ \leqslant \log \left(L (\mathbf {W} ^ {(0)})\right) - \frac {t \eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{e} + \frac {9 6 e ^ {3} \eta L n ^ {2} \lambda_ {A} ^ {4} \lambda_ {B} ^ {4} \| \mathbf {X} \| _ {2} ^ {4} \log (1 / \delta^ {\prime})}{B ^ {2} \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})} \\ \leqslant \log \left(L (\mathbf {W} ^ {(0)})\right) - \frac {t \eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{e} + \log (2), \\ \end{array}
+$$
+
+where the second inequality follows from the fact that $-at + b\sqrt{t} \leqslant b^2 / a$ , and the last inequality is by our choice of $\eta$ that
+
+$$
+\eta \leqslant \frac {\log (2) B ^ {2} \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{96 e ^ {3} L n ^ {2} \lambda_ {A} ^ {4} \lambda_ {B} ^ {4} \| \mathbf {X} \| _ {2} ^ {4} \cdot \log (1 / \delta^ {\prime})}.
+$$
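+
+The elementary fact $-at + b\sqrt{t} \leqslant b^2/a$ (for $a, b > 0$ ) used to absorb the Azuma deviation term can be checked numerically; $a$ and $b$ below are arbitrary test values, and the sharp maximum over real $t$ is $b^2/(4a)$ , attained at $\sqrt{t} = b/(2a)$ :
+
+```python
+import math
+
+a, b = 0.3, 2.0                                   # arbitrary positive test values
+peak = max(-a * t + b * math.sqrt(t) for t in range(0, 10_001))
+assert peak <= b * b / a                          # the bound used in the proof
+assert abs(peak - b * b / (4 * a)) < 1e-2         # sharp constant is b^2/(4a)
+```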
+
+Then it is clear that with probability at least $1 - \delta'$ ,
+
+$$
+L \left(\mathbf {W} ^ {(t)}\right) \leqslant 2 L \left(\mathbf {W} ^ {(0)}\right) \cdot \exp \left(- \frac {t \eta L \mu_ {A} ^ {2} \mu_ {B} ^ {2} \sigma_ {r} ^ {2} (\mathbf {X})}{e}\right), \tag {A.19}
+$$
+
+which completes the induction for Part (ii).
+
+Similar to the proof of Theorem 3.6, (A.19) holds with probability at least $1 - \delta'$ for a given $t$ . Then we can set $\delta' = \delta / T$ and apply union bound such that with probability at least $1 - \delta$ , (A.19) holds for all $t \leqslant T$ . This completes the proof.
+
+# A.7 PROOF OF COROLLARY 3.9
+
+Proof of Corollary 3.9. Recall the condition in Theorem 3.8:
+
+$$
+\frac {\sigma_ {\operatorname* {m i n}} ^ {2} (\mathbf {A}) \sigma_ {\operatorname* {m i n}} ^ {2} (\mathbf {B})}{\| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2}} \geqslant C \cdot \frac {n \| \mathbf {X} \| _ {2}}{B \sigma_ {r} ^ {2} (\mathbf {X})} \cdot \sqrt {L \left(\mathbf {W} ^ {(0)}\right)}, \tag {A.20}
+$$
+
+Then plugging in the results in Proposition 3.3 and the fact that $\| \mathbf{X}\| _F\leqslant \sqrt{r}\| \mathbf{X}\| _2$ , we obtain that condition (A.20) can be satisfied if $m = O\bigl (kr\kappa^2\cdot B / n\bigr)$ .
+
+In addition, it can be computed that $\eta = O\bigl (kB^{2} / (Lmn^{2}\kappa \| \mathbf{X}\|_{2}^{2})\bigr)$ based on the results in Proposition 3.3. Then in order to achieve $\epsilon$ -suboptimal training loss, the iteration complexity is
+
+$$
+T = \frac {e}{\eta L \sigma_ {\mathrm {m i n}} ^ {2} (\mathbf {A}) \sigma_ {\mathrm {m i n}} ^ {2} (\mathbf {B}) \sigma_ {r} ^ {2} (\mathbf {X})} \log \left(\frac {L (\mathbf {W} ^ {(0)}) - L (\mathbf {W} ^ {*})}{\epsilon}\right) = O \big (\kappa^ {2} \log (1 / \epsilon) \cdot n ^ {2} / B ^ {2} \big).
+$$
+
+This completes the proof.
+
+# B PROOFS OF TECHNICAL LEMMAS
+
+# B.1 PROOF OF LEMMA A.1
+
+We first note the following useful lemmas.
+
+Lemma B.1 (Claim B.1 in Du & Hu (2019)). Define $\Phi = \arg \min_{\Theta \in \mathbb{R}^{k\times d}}\| \Theta \mathbf{X} - \mathbf{Y}\| _F^2$ , then for any $\mathbf{U}\in \mathbb{R}^{k\times d}$ it holds that
+
+$$
+\left\| \mathbf {U} \mathbf {X} - \mathbf {Y} \right\| _ {F} ^ {2} = \left\| \mathbf {U} \mathbf {X} - \boldsymbol {\Phi} \mathbf {X} \right\| _ {F} ^ {2} + \left\| \boldsymbol {\Phi} \mathbf {X} - \mathbf {Y} \right\| _ {F} ^ {2}.
+$$
+
+Lemma B.2 (Theorem 1 in Fang et al. (1994)). Let $\mathbf{U},\mathbf{V}\in \mathbb{R}^{d\times d}$ be two positive definite matrices, then it holds that
+
+$$
+\lambda_ {\min } (\mathbf {U}) \operatorname {T r} (\mathbf {V}) \leqslant \operatorname {T r} (\mathbf {U V}) \leqslant \lambda_ {\max } (\mathbf {U}) \operatorname {T r} (\mathbf {V}).
+$$
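+
+Lemma B.2 can be sanity-checked numerically on random positive definite matrices (the dimension and random seed below are arbitrary):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+d = 4
+M = rng.standard_normal((d, d)); U = M @ M.T + np.eye(d)   # positive definite
+M = rng.standard_normal((d, d)); V = M @ M.T + np.eye(d)   # positive definite
+eigs_U = np.linalg.eigvalsh(U)                             # eigenvalues, ascending
+trUV = np.trace(U @ V)
+assert eigs_U[0] * np.trace(V) <= trUV <= eigs_U[-1] * np.trace(V)
+```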
+
+The following lemma is proved in Section B.3.
+
+Lemma B.3. Let $\mathbf{U} \in \mathbb{R}^{d \times r}$ be a rank- $r$ matrix. Then for any $\mathbf{V} \in \mathbb{R}^{r \times k}$ , it holds that
+
+$$
+\sigma_ {\min } (\mathbf {U}) \| \mathbf {V} \| _ {F} \leqslant \| \mathbf {U V} \| _ {F} \leqslant \sigma_ {\max } (\mathbf {U}) \| \mathbf {V} \| _ {F}.
+$$
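+
+Similarly, Lemma B.3 can be checked on random matrices; a tall Gaussian matrix has full column rank with probability one, so $\sigma_{\min}(\mathbf{U}) > 0$ in the sketch below:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+U = rng.standard_normal((6, 3))              # rank-3 with probability one
+V = rng.standard_normal((3, 4))
+s = np.linalg.svd(U, compute_uv=False)       # singular values, descending
+fro_UV = np.linalg.norm(U @ V)               # Frobenius norm of the product
+fro_V = np.linalg.norm(V)
+assert s[-1] * fro_V <= fro_UV + 1e-9
+assert fro_UV <= s[0] * fro_V + 1e-9
+```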
+
+Proof of Lemma A.1. Proof of gradient lower bound: We first prove the gradient lower bound. Let $\mathbf{U} = \mathbf{B}(\mathbf{I} + \mathbf{W}_L)\ldots (\mathbf{I} + \mathbf{W}_1)\mathbf{A}$ , by Lemma B.1 and the definition of $L(\mathbf{W}^*)$ , we know that there exists a matrix $\Phi \in \mathbb{R}^{k\times d}$ such that
+
+$$
+L (\mathbf {W}) = \frac {1}{2} \| \mathbf {U X} - \boldsymbol {\Phi} \mathbf {X} \| _ {F} ^ {2} + L (\mathbf {W} ^ {*}). \tag {B.1}
+$$
+
+Therefore, based on the assumption that $\max_{l\in [L]}\| \mathbf{W}_l\| _F\leqslant 0.5 / L$ , we have
+
+$$
+\begin{array}{l} \left\| \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W}) \right\| _ {F} ^ {2} = \left\| \left[ \mathbf {B} (\mathbf {I} + \mathbf {W} _ {L}) \dots (\mathbf {I} + \mathbf {W} _ {l + 1}) \right] ^ {\top} \left(\mathbf {U X} - \boldsymbol {\Phi} \mathbf {X}\right) \left[ (\mathbf {I} + \mathbf {W} _ {l - 1}) \dots \mathbf {A X} \right] ^ {\top} \right\| _ {F} ^ {2} \\ \geqslant \sigma_ {\min } ^ {2} \left(\left(\mathbf {I} + \mathbf {W} _ {L}\right) \dots \left(\mathbf {I} + \mathbf {W} _ {l + 1}\right)\right) \cdot \sigma_ {\min } ^ {2} \left(\left(\mathbf {I} + \mathbf {W} _ {l - 1}\right) \dots \left(\mathbf {I} + \mathbf {W} _ {1}\right)\right) \\ \cdot \| \mathbf {B} ^ {\top} (\mathbf {U} - \boldsymbol {\Phi}) \mathbf {X X} ^ {\top} \mathbf {A} ^ {\top} \| _ {F} ^ {2} \\ \geqslant \left(1 - 0. 5 / L\right) ^ {2 L - 2} \| \mathbf {B} ^ {\top} (\mathbf {U} - \boldsymbol {\Phi}) \mathbf {X X} ^ {\top} \mathbf {A} ^ {\top} \| _ {F} ^ {2}, \\ \end{array}
+$$
+
+where the last inequality follows from the fact that $\sigma_{\mathrm{min}}(\mathbf{I} + \mathbf{W}_l) \geqslant 1 - \| \mathbf{W}_l\|_2 \geqslant 1 - \| \mathbf{W}_l\|_F \geqslant 1 - 0.5 / L$ . Applying Lemma B.2, we get
+
+$$
+\begin{array}{l} \left\| \mathbf {B} ^ {\top} (\mathbf {U} - \boldsymbol {\Phi}) \mathbf {X X} ^ {\top} \mathbf {A} ^ {\top} \right\| _ {F} ^ {2} = \operatorname {T r} \left(\mathbf {B B} ^ {\top} (\mathbf {U} - \boldsymbol {\Phi}) \mathbf {X X} ^ {\top} \mathbf {A} ^ {\top} \mathbf {A X X} ^ {\top} (\mathbf {U} - \boldsymbol {\Phi}) ^ {\top}\right) \\ \geqslant \lambda_ {\min } (\mathbf {B} \mathbf {B} ^ {\top}) \cdot \operatorname {T r} \left(\mathbf {A} ^ {\top} \mathbf {A X X} ^ {\top} (\mathbf {U} - \boldsymbol {\Phi}) ^ {\top} (\mathbf {U} - \boldsymbol {\Phi}) \mathbf {X X} ^ {\top}\right) \\ \geqslant \lambda_ {\mathrm {m i n}} (\mathbf {B B} ^ {\top}) \cdot \lambda_ {\mathrm {m i n}} (\mathbf {A} ^ {\top} \mathbf {A}) \cdot \| (\mathbf {U} - \boldsymbol {\Phi}) \mathbf {X X} ^ {\top} \| _ {F} ^ {2}. \\ \end{array}
+$$
+
+Note that $\mathbf{X}$ is of $r$ -rank, thus there exists a full-rank matrix $\widetilde{\mathbf{X}} \in \mathbb{R}^{d \times r}$ such that $\widetilde{\mathbf{X}}\widetilde{\mathbf{X}}^{\top} = \mathbf{X}\mathbf{X}^{\top}$ . Thus we have
+
+$$
+\| (\mathbf{U} - \boldsymbol {\Phi})\mathbf{X}\|_{F}^{2} = \operatorname {Tr}\bigl((\mathbf{U} - \boldsymbol {\Phi})\mathbf{X}\mathbf{X}^{\top}(\mathbf{U} - \boldsymbol {\phi})^{\top}\bigr) = \operatorname {Tr}\bigl((\mathbf{U} - \boldsymbol {\Phi})\widetilde{\mathbf{X}}\widetilde{\mathbf{X}}^{\top}(\mathbf{U} - \boldsymbol {\phi})^{\top}\bigr) = \bigl\|(\mathbf{U} - \boldsymbol {\Phi})\widetilde{\mathbf{X}}\bigr \|_{F}^{2}. \tag{B.2}
+$$
+
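+The factor $\widetilde{\mathbf{X}}$ invoked above can be constructed explicitly from an eigendecomposition of $\mathbf{X}\mathbf{X}^{\top}$ ; a numerical sketch (the dimensions are arbitrary test values):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+d, n, r = 5, 8, 3
+X = rng.standard_normal((d, r)) @ rng.standard_normal((r, n))  # rank-r data matrix
+w, Q = np.linalg.eigh(X @ X.T)               # eigenvalues ascending; d - r are ~0
+keep = w > 1e-10 * w.max()
+X_tilde = Q[:, keep] * np.sqrt(w[keep])      # d x r factor
+assert X_tilde.shape == (d, r)
+assert np.allclose(X_tilde @ X_tilde.T, X @ X.T)
+# the identity in (B.2): ||M X||_F = ||M X_tilde||_F for any matrix M
+M = rng.standard_normal((2, d))
+assert np.isclose(np.linalg.norm(M @ X), np.linalg.norm(M @ X_tilde))
+```
+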
+Therefore,
+
+$$
+\begin{array}{l} \left\| (\mathbf {U} - \boldsymbol {\Phi}) \mathbf {X X} ^ {\top} \right\| _ {F} ^ {2} = \left\| (\mathbf {U} - \boldsymbol {\Phi}) \widetilde {\mathbf {X}} \widetilde {\mathbf {X}} ^ {\top} \right\| _ {F} ^ {2} \\ = \operatorname {T r} \left(\left(\mathbf {U} - \boldsymbol {\Phi}\right) \widetilde {\mathbf {X}} \widetilde {\mathbf {X}} ^ {\top} \widetilde {\mathbf {X}} \widetilde {\mathbf {X}} ^ {\top} \left(\mathbf {U} - \boldsymbol {\Phi}\right) ^ {\top}\right) \\ \geqslant \lambda_ {\min } (\widetilde {\mathbf {X}} ^ {\top} \widetilde {\mathbf {X}}) \cdot \| (\mathbf {U} - \boldsymbol {\Phi}) \widetilde {\mathbf {X}} \| _ {F} ^ {2} \\ = 2 \sigma_ {r} ^ {2} (\mathbf {X}) \cdot (L (\mathbf {W}) - L (\mathbf {W} ^ {*})), \tag {B.3} \\ \end{array}
+$$
+
+where the inequality follows from Lemma B.2 and the last equality follows from (B.2), (B.1) and the fact that $\lambda_{\mathrm{min}}(\widetilde{\mathbf{X}}^{\top}\widetilde{\mathbf{X}}) = \lambda_r(\mathbf{X}\mathbf{X}^\top) = \sigma_r^2 (\mathbf{X})$ . Note that we assume $d,k\leqslant m$ and $d\leqslant n$ . Thus it follows that $\lambda_{\mathrm{min}}(\mathbf{BB}^{\top}) = \sigma_{\mathrm{min}}^{2}(\mathbf{B})$ and $\lambda_{\mathrm{min}}(\mathbf{A}^{\top}\mathbf{A}) = \sigma_{\mathrm{min}}^{2}(\mathbf{A})$ . Then putting everything together, we can obtain
+
+$$
+\| \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W}) \| _ {F} ^ {2} \geqslant 2 \sigma_ {\min } ^ {2} (\mathbf {B}) \sigma_ {\min } ^ {2} (\mathbf {A}) \sigma_ {r} ^ {2} (\mathbf {X}) (1 - 0.5 / L) ^ {2 L - 2} \big (L (\mathbf {W}) - L (\mathbf {W} ^ {*}) \big).
+$$
+
+Then using the inequality $(1 - 0.5 / L)^{2L - 2} \geqslant e^{-1}$ , we complete the proof of the gradient lower bound.
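+The two depth-dependent scalar bounds used in this proof, $(1 - 0.5/L)^{2L-2} \geqslant e^{-1}$ here and $(1 + 0.5/L)^{2L} \leqslant e$ in the upper bound below, can be sanity-checked numerically over a range of depths (a quick sketch):
+
+```python
+import math
+
+# Check both depth-dependent scalar bounds for depths L = 1, ..., 10000.
+for L in range(1, 10001):
+    assert (1 - 0.5 / L) ** (2 * L - 2) >= math.exp(-1)
+    assert (1 + 0.5 / L) ** (2 * L) <= math.e
+```
+
+Both quantities converge to their respective bounds as $L \to \infty$, from above and from below respectively.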
+
+Proof of gradient upper bound: The gradient upper bound can be proved in a similar way. Specifically, Lemma B.3 implies
+
+$$
+\begin{array}{l} \left\| \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W}) \right\| _ {F} ^ {2} = \left\| \left[ \mathbf {B} (\mathbf {I} + \mathbf {W} _ {L}) \dots (\mathbf {I} + \mathbf {W} _ {l + 1}) \right] ^ {\top} \left(\mathbf {U X} - \boldsymbol {\Phi} \mathbf {X}\right) \left[ (\mathbf {I} + \mathbf {W} _ {l - 1}) \dots \mathbf {A X} \right] ^ {\top} \right\| _ {F} ^ {2} \\ \leqslant \sigma_ {\max } ^ {2} \left(\left(\mathbf {I} + \mathbf {W} _ {L}\right) \dots \left(\mathbf {I} + \mathbf {W} _ {l + 1}\right)\right) \cdot \sigma_ {\max } ^ {2} \left(\left(\mathbf {I} + \mathbf {W} _ {l - 1}\right) \dots \left(\mathbf {I} + \mathbf {W} _ {1}\right)\right) \\ \cdot \| \mathbf {B} ^ {\top} (\mathbf {U} - \boldsymbol {\Phi}) \mathbf {X X} ^ {\top} \mathbf {A} ^ {\top} \| _ {F} ^ {2} \\ \leqslant \sigma_ {\max } ^ {2} \left(\left(\mathbf {I} + \mathbf {W} _ {L}\right) \dots \left(\mathbf {I} + \mathbf {W} _ {l + 1}\right)\right) \cdot \sigma_ {\max } ^ {2} \left(\left(\mathbf {I} + \mathbf {W} _ {l - 1}\right) \dots \left(\mathbf {I} + \mathbf {W} _ {1}\right)\right) \\ \cdot \| \mathbf {B} \| _ {2} ^ {2} \| \mathbf {A} \| _ {2} ^ {2} \cdot \| (\mathbf {U} - \boldsymbol {\Phi}) \mathbf {X X} ^ {\top} \| _ {F} ^ {2} \\ \leqslant \left(1 + 0. 5 / L\right) ^ {2 L - 2} \| \mathbf {B} \| _ {2} ^ {2} \| \mathbf {A} \| _ {2} ^ {2} \cdot \| (\mathbf {U} - \boldsymbol {\Phi}) \mathbf {X X} ^ {\top} \| _ {F} ^ {2}, \\ \end{array}
+$$
+
+where the last inequality is by the assumption that $\max_{l\in [L]}\| \mathbf{W}_l\| _F\leqslant 0.5 / L$ . By (B.3), we have
+
+$$
+\begin{array}{l} \left\| (\mathbf {U} - \boldsymbol {\Phi}) \mathbf {X} \mathbf {X} ^ {\top} \right\| _ {F} ^ {2} = \left\| (\mathbf {U} - \boldsymbol {\Phi}) (\mathbf {X} \mathbf {X} ^ {\top}) ^ {1 / 2} (\mathbf {X} \mathbf {X} ^ {\top}) ^ {1 / 2} \right\| _ {F} ^ {2} \\ \leqslant \lambda_ {\max } (\mathbf {X X} ^ {\top}) \cdot \| (\mathbf {U} - \boldsymbol {\Phi}) (\mathbf {X X} ^ {\top}) ^ {1 / 2} \| _ {F} ^ {2} \\ = \lambda_ {\max } \left(\mathbf {X} \mathbf {X} ^ {\top}\right) \cdot \| (\mathbf {U} - \boldsymbol {\Phi}) \mathbf {X} \| _ {F} ^ {2} \\ = 2 \| \mathbf {X} \| _ {2} ^ {2} \cdot \left(L (\mathbf {W}) - L (\mathbf {W} ^ {*})\right), \\ \end{array}
+$$
+
+where the inequality is by Lemma B.3 and the second equality is by (B.2). Therefore, combining the above results yields
+
+$$
+\| \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W}) \| _ {F} ^ {2} \leqslant 2 \sigma_ {\max } ^ {2} (\mathbf {B}) \sigma_ {\max } ^ {2} (\mathbf {A}) \| \mathbf {X} \| _ {2} ^ {2} (1 + 0.5 / L) ^ {2 L - 2} \big (L (\mathbf {W}) - L (\mathbf {W} ^ {*}) \big).
+$$
+
+Using the inequality $(1 + 0.5 / L)^{2L - 2} \leqslant (1 + 0.5 / L)^{2L} \leqslant e$ , we complete the proof of the gradient upper bound.
+
+Proof of the upper bound of $\| \nabla_{\mathbf{W}_l}\ell (\mathbf{W};\mathbf{x}_i,\mathbf{y}_i)\| _F^2$ : Let $\mathbf{U} = \mathbf{B}(\mathbf{I} + \mathbf{W}_L)\dots (\mathbf{I} + \mathbf{W}_1)\mathbf{A}$ ; then we have
+
+$$
+\nabla_ {\mathbf {W} _ {l}} \ell (\mathbf {W}; \mathbf {x} _ {i}, \mathbf {y} _ {i}) = \left[ \mathbf {B} (\mathbf {I} + \mathbf {W} _ {L}) \dots (\mathbf {I} + \mathbf {W} _ {l + 1}) \right] ^ {\top} (\mathbf {U} \mathbf {x} _ {i} - \mathbf {y} _ {i}) \big [ (\mathbf {I} + \mathbf {W} _ {l - 1}) \dots \mathbf {A} \mathbf {x} _ {i} \big ] ^ {\top}.
+$$
+
+Therefore, by Lemma B.3, we have
+
+$$
+\begin{array}{l} \left\| \nabla_ {\mathbf {W} _ {l}} \ell (\mathbf {W}; \mathbf {x} _ {i}, \mathbf {y} _ {i}) \right\| _ {F} ^ {2} \leqslant \sigma_ {\max } ^ {2} \left(\left(\mathbf {I} + \mathbf {W} _ {L}\right) \dots \left(\mathbf {I} + \mathbf {W} _ {l + 1}\right)\right) \cdot \sigma_ {\max } ^ {2} \left(\left(\mathbf {I} + \mathbf {W} _ {l - 1}\right) \dots \left(\mathbf {I} + \mathbf {W} _ {1}\right)\right) \\ \cdot \left\| \mathbf {B} ^ {\top} \left(\mathbf {U} \mathbf {x} _ {i} - \mathbf {y} _ {i}\right) \mathbf {x} _ {i} ^ {\top} \mathbf {A} ^ {\top} \right\| _ {F} ^ {2} \\ \leqslant \left(1 + 0.5 / L\right) ^ {2 L - 2} \cdot \| \mathbf {B} \| _ {2} ^ {2} \| \mathbf {A} \| _ {2} ^ {2} \| \mathbf {x} _ {i} \| _ {2} ^ {2} \cdot \| \mathbf {U x} _ {i} - \mathbf {y} _ {i} \| _ {2} ^ {2} \\ \leqslant 2 e \| \mathbf {A} \| _ {2} ^ {2} \| \mathbf {B} \| _ {2} ^ {2} \| \mathbf {x} _ {i} \| _ {2} ^ {2} \ell (\mathbf {W}; \mathbf {x} _ {i}, \mathbf {y} _ {i}), \\ \end{array}
+$$
+
+where the last inequality is by the fact that $(1 + 0.5 / L)^{2L - 2} \leqslant e$ .
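+This per-sample bound can be verified numerically by evaluating the closed-form gradient above for random small matrices (a sketch; the dimensions, seed, and scaling are arbitrary, chosen only to satisfy the assumption $\|\mathbf{W}_l\|_F \leqslant 0.5/L$):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+d, k, L = 5, 4, 6
+A = rng.standard_normal((d, d))
+B = rng.standard_normal((k, d))
+# Layer weights with ||W_l||_F <= 0.5 / L, as assumed in the lemma.
+Ws = [W / np.linalg.norm(W) * (0.5 / L)
+      for W in rng.standard_normal((L, d, d))]
+x = rng.standard_normal(d)
+y = rng.standard_normal(k)
+
+def prod(lo, hi):
+    """(I + W_hi) ... (I + W_lo); identity if hi < lo (1-indexed layers)."""
+    P = np.eye(d)
+    for j in range(lo, hi + 1):
+        P = (np.eye(d) + Ws[j - 1]) @ P
+    return P
+
+U = B @ prod(1, L) @ A
+loss = 0.5 * np.linalg.norm(U @ x - y) ** 2
+
+for l in range(1, L + 1):
+    left = B @ prod(l + 1, L)            # B (I+W_L)...(I+W_{l+1})
+    right = prod(1, l - 1) @ A @ x       # (I+W_{l-1})...(I+W_1) A x
+    grad = left.T @ np.outer(U @ x - y, right)
+    bound = (2 * np.e * np.linalg.norm(A, 2) ** 2
+             * np.linalg.norm(B, 2) ** 2 * np.dot(x, x) * loss)
+    assert np.linalg.norm(grad, 'fro') ** 2 <= bound
+```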
+
+Proof of the upper bound of the stochastic gradient: Let $\mathcal{B}$ denote the set of training data points used to compute the stochastic gradient, and let $\bar{\mathbf{X}}$ and $\bar{\mathbf{Y}}$ denote the matrices obtained by stacking $\{\mathbf{x}_i\}_{i\in \mathcal{B}}$ and $\{\mathbf{y}_i\}_{i\in \mathcal{B}}$ as columns, respectively. Let $\mathbf{U} = \mathbf{B}(\mathbf{I} + \mathbf{W}_L)\cdots (\mathbf{I} + \mathbf{W}_1)\mathbf{A}$ ; then the minibatch stochastic gradient takes the form
+
+$$
+\begin{array}{l} \mathbf {G} _ {l} = \frac {n}{B} \sum_ {i \in \mathcal {B}} \nabla_ {\mathbf {W} _ {l}} \ell (\mathbf {W}; \mathbf {x} _ {i}, \mathbf {y} _ {i}) \\ = \frac {n}{B} \Big [ \mathbf {B} (\mathbf {I} + \mathbf {W} _ {L}) \dots (\mathbf {I} + \mathbf {W} _ {l + 1}) \Big ] ^ {\top} (\mathbf {U} \bar {\mathbf {X}} - \bar {\mathbf {Y}}) \big [ (\mathbf {I} + \mathbf {W} _ {l - 1}) \dots \mathbf {A} \bar {\mathbf {X}} \big ] ^ {\top}. \\ \end{array}
+$$
+
+Then by Lemma B.3, we have
+
+$$
+\begin{array}{l} \| \mathbf {G} _ {l} \| _ {F} ^ {2} \leqslant \frac {n ^ {2}}{B ^ {2}} \sigma_ {\max } ^ {2} \big ((\mathbf {I} + \mathbf {W} _ {L}) \dots (\mathbf {I} + \mathbf {W} _ {l + 1}) \big) \cdot \sigma_ {\max } ^ {2} \big ((\mathbf {I} + \mathbf {W} _ {l - 1}) \dots (\mathbf {I} + \mathbf {W} _ {1}) \big) \\ \cdot \| \mathbf {B} ^ {\top} (\mathbf {U} \bar {\mathbf {X}} - \bar {\mathbf {Y}}) \bar {\mathbf {X}} ^ {\top} \mathbf {A} ^ {\top} \| _ {F} ^ {2} \\ \leqslant \frac {n ^ {2}}{B ^ {2}} \cdot (1 + 0.5 / L) ^ {2 L - 2} \cdot \| \mathbf {B} \| _ {2} ^ {2} \| \mathbf {A} \| _ {2} ^ {2} \| \bar {\mathbf {X}} \| _ {2} ^ {2} \cdot \| \mathbf {U} \bar {\mathbf {X}} - \bar {\mathbf {Y}} \| _ {F} ^ {2} \\ \leqslant \frac {e n ^ {2}}{B ^ {2}} \| \mathbf {B} \| _ {2} ^ {2} \| \mathbf {A} \| _ {2} ^ {2} \| \bar {\mathbf {X}} \| _ {2} ^ {2} \cdot \| \mathbf {U} \bar {\mathbf {X}} - \bar {\mathbf {Y}} \| _ {F} ^ {2}, \\ \end{array}
+$$
+
+where the second inequality is by the assumptions that $\max_{l\in [L]}\| \mathbf{W}_l\| _F\leqslant 0.5 / L$ , and the last inequality follows from the fact that $(1 + 0.5 / L)^{2L - 2}\leqslant (1 + 0.5 / L)^{2L}\leqslant e$ . Note that $\bar{\mathbf{X}}$ and $\bar{\mathbf{Y}}$ are constructed by stacking $B$ columns from $\mathbf{X}$ and $\mathbf{Y}$ respectively, thus we have $\| \bar{\mathbf{X}}\| _2^2\leqslant \| \mathbf{X}\| _2^2$ and $\| \mathbf{U}\bar{\mathbf{X}} -\bar{\mathbf{Y}}\| _F^2\leqslant \| \mathbf{U}\mathbf{X} - \mathbf{Y}\| _F^2 = 2L(\mathbf{W})$ . Then it follows that
+
+$$
+\| \mathbf {G} _ {l} \| _ {F} ^ {2} \leqslant \frac {2 e n ^ {2}}{B ^ {2}} \| \mathbf {B} \| _ {2} ^ {2} \| \mathbf {A} \| _ {2} ^ {2} \| \mathbf {X} \| _ {2} ^ {2} \cdot L (\mathbf {W}).
+$$
+
+This completes the proof of the upper bound of the stochastic gradient.
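+The column-subsetting facts used in the last step, $\|\bar{\mathbf{X}}\|_2 \leqslant \|\mathbf{X}\|_2$ and $\|\mathbf{U}\bar{\mathbf{X}} - \bar{\mathbf{Y}}\|_F \leqslant \|\mathbf{U}\mathbf{X} - \mathbf{Y}\|_F$ , are easy to spot-check numerically (a small sketch with random matrices; the dimensions are arbitrary):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+d, k, n, batch = 4, 3, 12, 5
+X = rng.standard_normal((d, n))
+Y = rng.standard_normal((k, n))
+U = rng.standard_normal((k, d))
+cols = rng.choice(n, size=batch, replace=False)
+Xb, Yb = X[:, cols], Y[:, cols]   # stack a minibatch of columns
+
+# Selecting a subset of columns can only shrink the spectral norm
+# of X and the Frobenius norm of the residual U X - Y.
+assert np.linalg.norm(Xb, 2) <= np.linalg.norm(X, 2) + 1e-12
+assert (np.linalg.norm(U @ Xb - Yb, 'fro') ** 2
+        <= np.linalg.norm(U @ X - Y, 'fro') ** 2 + 1e-12)
+```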
+
+# B.2 PROOF OF LEMMA A.2
+
+Proof of Lemma A.2. Let $\mathbf{U} = \mathbf{B}(\mathbf{I} + \mathbf{W}_L) \cdots (\mathbf{I} + \mathbf{W}_1)\mathbf{A}$ , $\widetilde{\mathbf{U}} = \mathbf{B}(\mathbf{I} + \widetilde{\mathbf{W}}_L) \cdots (\mathbf{I} + \widetilde{\mathbf{W}}_1)\mathbf{A}$ , and $\boldsymbol{\Delta} = \widetilde{\mathbf{U}} - \mathbf{U}$ . We have
+
+$$
+\begin{array}{l} L (\widetilde {\mathbf {W}}) - L (\mathbf {W}) = \frac {1}{2} \left(\| \widetilde {\mathbf {U}} \mathbf {X} - \mathbf {Y} \| _ {F} ^ {2} - \| \mathbf {U} \mathbf {X} - \mathbf {Y} \| _ {F} ^ {2}\right) \\ = \frac {1}{2} \left(\| (\mathbf {U} + \boldsymbol {\Delta}) \mathbf {X} - \mathbf {Y} \| _ {F} ^ {2} - \| \mathbf {U X} - \mathbf {Y} \| _ {F} ^ {2}\right) \\ = \frac {1}{2} \left(\| \mathbf {U X} - \mathbf {Y} + \boldsymbol {\Delta} \mathbf {X} \| _ {F} ^ {2} - \| \mathbf {U X} - \mathbf {Y} \| _ {F} ^ {2}\right) \\ = \frac {1}{2} \left(2 \langle \mathbf {U X} - \mathbf {Y}, \boldsymbol {\Delta} \mathbf {X} \rangle + \| \boldsymbol {\Delta} \mathbf {X} \| _ {F} ^ {2}\right) \\ = \left\langle \mathbf {U} \mathbf {X} - \mathbf {Y}, (\widetilde {\mathbf {U}} - \mathbf {U}) \mathbf {X} \right\rangle + \frac {1}{2} \| (\widetilde {\mathbf {U}} - \mathbf {U}) \mathbf {X} \| _ {F} ^ {2}. \tag {B.4} \\ \end{array}
+$$
+
+We begin by working on the first term. Let $\mathbf{V} = (\mathbf{I} + \mathbf{W}_L) \cdots (\mathbf{I} + \mathbf{W}_1)$ and $\widetilde{\mathbf{V}} = (\mathbf{I} + \widetilde{\mathbf{W}}_L) \cdots (\mathbf{I} + \widetilde{\mathbf{W}}_1)$ , so that $\widetilde{\mathbf{U}} - \mathbf{U} = \mathbf{B}(\widetilde{\mathbf{V}} - \mathbf{V})\mathbf{A}$ . Decomposing the transformation of $\mathbf{V} = \prod_{j=L}^{1} (\mathbf{I} + \mathbf{W}_j)$ into $\widetilde{\mathbf{V}} = \prod_{j=L}^{1} (\mathbf{I} + \widetilde{\mathbf{W}}_j)$ as a sequence of single-layer replacements, we get
+
+$$
+\widetilde {\mathbf {V}} - \mathbf {V} = \sum_ {l = 1} ^ {L} \left[ \left(\prod_ {j = L} ^ {l + 1} (\mathbf {I} + \mathbf {W} _ {j})\right) \left(\prod_ {j = l} ^ {1} (\mathbf {I} + \widetilde {\mathbf {W}} _ {j})\right) - \left(\prod_ {j = L} ^ {l} (\mathbf {I} + \mathbf {W} _ {j})\right) \left(\prod_ {j = l - 1} ^ {1} (\mathbf {I} + \widetilde {\mathbf {W}} _ {j})\right) \right]
+$$
+
+and, for each $l$ , pulling out the common factors $\prod_{j = L}^{l + 1}(\mathbf{I} + \mathbf{W}_j)$ on the left and $\prod_{j = l - 1}^{1}(\mathbf{I} + \widetilde{\mathbf{W}}_j)$ on the right gives
+
+$$
+\begin{array}{l} \widetilde {\mathbf {V}} - \mathbf {V} = \sum_ {l = 1} ^ {L} (\mathbf {I} + \mathbf {W} _ {L}) \dots (\mathbf {I} + \mathbf {W} _ {l + 1}) (\widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l}) (\mathbf {I} + \widetilde {\mathbf {W}} _ {l - 1}) \dots (\mathbf {I} + \widetilde {\mathbf {W}} _ {1}) \\ = \underbrace {\sum_ {l = 1} ^ {L} (\mathbf {I} + \mathbf {W} _ {L}) \cdots (\mathbf {I} + \mathbf {W} _ {l + 1}) (\widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l}) (\mathbf {I} + \mathbf {W} _ {l - 1}) \cdots (\mathbf {I} + \mathbf {W} _ {1})} _ {\mathbf {V} _ {1}} \\ + \underbrace {\sum_ {l = 1} ^ {L} (\mathbf {I} + \mathbf {W} _ {L}) \cdots (\mathbf {I} + \mathbf {W} _ {l + 1}) (\widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l})} _ {\mathbf {V} _ {2}} \\ \underbrace {\cdot \left[ \left(\mathbf {I} + \widetilde {\mathbf {W}} _ {l - 1}\right) \cdots \left(\mathbf {I} + \widetilde {\mathbf {W}} _ {1}\right) - \left(\mathbf {I} + \mathbf {W} _ {l - 1}\right) \cdots \left(\mathbf {I} + \mathbf {W} _ {1}\right) \right]} _ {\mathbf {V} _ {2}}. \tag {B.5} \\ \end{array}
+$$
+
+The first term $\mathbf{V}_1$ satisfies
+
+$$
+\begin{array}{l} \langle \mathbf {U X} - \mathbf {Y}, \mathbf {B V} _ {1} \mathbf {A X} \rangle \\ = \left\langle \mathbf {U} \mathbf {X} - \mathbf {Y}, \mathbf {B} \left(\sum_ {l = 1} ^ {L} (\mathbf {I} + \mathbf {W} _ {L}) \dots (\mathbf {I} + \mathbf {W} _ {l + 1}) \left(\widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l}\right) (\mathbf {I} + \mathbf {W} _ {l - 1}) \dots (\mathbf {I} + \mathbf {W} _ {1})\right) \mathbf {A} \mathbf {X} \right\rangle \\ = \sum_ {l = 1} ^ {L} \left\langle \mathbf {U X} - \mathbf {Y}, \mathbf {B} (\mathbf {I} + \mathbf {W} _ {L}) \dots (\mathbf {I} + \mathbf {W} _ {l + 1}) \left(\widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l}\right) (\mathbf {I} + \mathbf {W} _ {l - 1}) \dots (\mathbf {I} + \mathbf {W} _ {1}) \mathbf {A X} \right\rangle \\ = \sum_ {l = 1} ^ {L} \operatorname {Tr} \left(\left(\mathbf {U X} - \mathbf {Y}\right) ^ {\top} \mathbf {B} (\mathbf {I} + \mathbf {W} _ {L}) \dots \left(\mathbf {I} + \mathbf {W} _ {l + 1}\right) \left(\widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l}\right) \left(\mathbf {I} + \mathbf {W} _ {l - 1}\right) \dots \left(\mathbf {I} + \mathbf {W} _ {1}\right) \mathbf {A X}\right) \\ = \sum_ {l = 1} ^ {L} \operatorname {Tr} \left(\left(\mathbf {I} + \mathbf {W} _ {l - 1}\right) \dots \left(\mathbf {I} + \mathbf {W} _ {1}\right) \mathbf {A X} \left(\mathbf {U X} - \mathbf {Y}\right) ^ {\top} \mathbf {B} \left(\mathbf {I} + \mathbf {W} _ {L}\right) \dots \left(\mathbf {I} + \mathbf {W} _ {l + 1}\right) \left(\widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l}\right)\right) \\ = \sum_ {l = 1} ^ {L} \left\langle \left[ \mathbf {B} \left(\mathbf {I} + \mathbf {W} _ {L}\right) \dots \left(\mathbf {I} + \mathbf {W} _ {l + 1}\right) \right] ^ {\top} (\mathbf {U X} - \mathbf {Y}) \left[ \left(\mathbf {I} + \mathbf {W} _ {l - 1}\right) \dots \mathbf {A X} \right] ^ {\top}, \widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l} \right\rangle \\ = \sum_ {l = 1} ^ {L} \left\langle \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W}), \widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l} \right\rangle , \tag {B.6} \\ \end{array}
+$$
+
+where the first equality is by the definition of $\mathbf{V}_1$ . Now we focus on the second term $\mathbf{V}_2$ of (B.5),
+
+$$
+\begin{array}{l} \mathbf {V} _ {2} = \sum_ {l = 1} ^ {L} (\mathbf {I} + \mathbf {W} _ {L}) \dots (\mathbf {I} + \mathbf {W} _ {l + 1}) (\widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l}) \\ \cdot \sum_ {s = 1} ^ {l - 1} (\mathbf {I} + \mathbf {W} _ {l - 1}) \dots (\mathbf {I} + \mathbf {W} _ {s + 1}) (\widetilde {\mathbf {W}} _ {s} - \mathbf {W} _ {s}) (\mathbf {I} + \widetilde {\mathbf {W}} _ {s - 1}) \dots (\mathbf {I} + \widetilde {\mathbf {W}} _ {1}). \\ \end{array}
+$$
+
+Recalling that $\| \mathbf{W}_l\| _F,\| \widetilde{\mathbf{W}}_l\| _F\leqslant 0.5 / L$ for all $l\in [L]$ , by triangle inequality we have
+
+$$
+\begin{array}{l} \| \mathbf {V} _ {2} \| _ {F} \leqslant (1 + 0. 5 / L) ^ {L} \cdot \sum_ {l, s \in [ L ]: l > s} \| \widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l} \| _ {F} \cdot \| \widetilde {\mathbf {W}} _ {s} - \mathbf {W} _ {s} \| _ {F} \\ \leqslant (1 + 0. 5 / L) ^ {L} \cdot \left(\sum_ {l = 1} ^ {L} \| \widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l} \| _ {F}\right) ^ {2}, \\ \end{array}
+$$
+
+where we use the fact that $\sum_{l,s\in [L]:l > s}a_{l}a_{s}\leqslant \sum_{l,s\in [L]}a_{l}a_{s} = \left(\sum_{l}a_{l}\right)^{2}$ holds for all $a_1,\ldots ,a_L\geq 0$ . Therefore, the following holds regarding $\mathbf{V}_2$ :
+
+$$
+\begin{array}{l} \langle \mathbf {U X} - \mathbf {Y}, \mathbf {B V} _ {2} \mathbf {A X} \rangle \leqslant \| \mathbf {U X} - \mathbf {Y} \| _ {F} \| \mathbf {B V} _ {2} \mathbf {A X} \| _ {F} \\ \leqslant \sqrt {2 L (\mathbf {W})} \| \mathbf {B} \| _ {2} \| \mathbf {A} \| _ {2} \| \mathbf {X} \| _ {2} \| \mathbf {V} _ {2} \| _ {F} \\ \leqslant \sqrt {2 e} \sqrt {L (\mathbf {W})} \| \mathbf {B} \| _ {2} \| \mathbf {A} \| _ {2} \| \mathbf {X} \| _ {2} \left(\sum_ {l = 1} ^ {L} \| \widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l} \| _ {F}\right) ^ {2} \tag {B.7} \\ \end{array}
+$$
+
+where the third inequality follows from the fact that $(1 + 0.5 / L)^{L}\leqslant \sqrt{e}$ . Next, we upper bound the second term of (B.4): $\frac{1}{2}\| (\widetilde{\mathbf{U}} -\mathbf{U})\mathbf{X}\| _F^2$ . Since $\| (\widetilde{\mathbf{U}} -\mathbf{U})\mathbf{X}\| _F^2 = \| \mathbf{B}(\widetilde{\mathbf{V}} -\mathbf{V})\mathbf{A}\mathbf{X}\| _F^2\leqslant \| \mathbf{A}\| _2^2\| \mathbf{B}\| _2^2\| \mathbf{X}\| _2^2\| \widetilde{\mathbf{V}} -\mathbf{V}\| _F^2$ , it suffices to bound the norm $\|\widetilde{\mathbf{V}} - \mathbf{V}\| _F$ . By (B.5), we have
+
+$$
+\begin{array}{l} \left\| \widetilde {\mathbf {V}} - \mathbf {V} \right\| _ {F} = \left\| \sum_ {l = 1} ^ {L} (\mathbf {I} + \mathbf {W} _ {L}) \dots (\mathbf {I} + \mathbf {W} _ {l + 1}) (\widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l}) (\mathbf {I} + \widetilde {\mathbf {W}} _ {l - 1}) \dots (\mathbf {I} + \widetilde {\mathbf {W}} _ {1}) \right\| _ {F} \\ \leqslant \left(1 + 0. 5 / L\right) ^ {L} \sum_ {l = 1} ^ {L} \| \widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l} \| _ {F}. \tag {B.8} \\ \end{array}
+$$
+
+Plugging (B.6), (B.7) and (B.8) into (B.4), we have
+
+$$
+\begin{array}{l} L (\widetilde {\mathbf {W}}) - L (\mathbf {W}) \\ = \left\langle \mathbf {U} \mathbf {X} - \mathbf {Y}, \mathbf {B} \left(\mathbf {V} _ {1} + \mathbf {V} _ {2}\right) \mathbf {A} \mathbf {X} \right\rangle + \frac {1}{2} \| \mathbf {B} (\widetilde {\mathbf {V}} - \mathbf {V}) \mathbf {A} \mathbf {X} \| _ {F} ^ {2} \\ \leqslant \sum_ {l = 1} ^ {L} \left\langle \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W}), \widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l} \right\rangle \\ + \| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2} \| \mathbf {X} \| _ {2} \left(\sqrt {2 e L (\mathbf {W})} + 0.5 e \| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2} \| \mathbf {X} \| _ {2}\right) \left(\sum_ {l = 1} ^ {L} \| \widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l} \| _ {F}\right) ^ {2} \\ \leqslant \sum_ {l = 1} ^ {L} \langle \nabla_ {\mathbf {W} _ {l}} L (\mathbf {W}), \widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l} \rangle \\ + L \| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2} \| \mathbf {X} \| _ {2} \left(\sqrt {2 e L (\mathbf {W})} + 0.5 e \| \mathbf {A} \| _ {2} \| \mathbf {B} \| _ {2} \| \mathbf {X} \| _ {2}\right) \sum_ {l = 1} ^ {L} \| \widetilde {\mathbf {W}} _ {l} - \mathbf {W} _ {l} \| _ {F} ^ {2}, \tag {B.9} \\ \end{array}
+$$
+
+where the last inequality is by Jensen's inequality. This completes the proof.
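+The two elementary scalar facts used in this proof, $\sum_{l > s} a_l a_s \leqslant \left(\sum_l a_l\right)^2$ and the final step $\left(\sum_l a_l\right)^2 \leqslant L \sum_l a_l^2$ , can be spot-checked numerically for nonnegative sequences (a quick sketch):
+
+```python
+import random
+
+random.seed(0)
+L = 8
+for _ in range(1000):
+    a = [random.random() for _ in range(L)]   # nonnegative entries
+    pairs = sum(a[l] * a[s] for l in range(L) for s in range(l))  # l > s
+    square = sum(a) ** 2
+    assert pairs <= square                       # sum_{l>s} a_l a_s <= (sum_l a_l)^2
+    assert square <= L * sum(x * x for x in a)   # Jensen / Cauchy-Schwarz step
+```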
+
+
+
+# B.3 PROOF OF LEMMA B.3
+
+Proof of Lemma B.3. Note that we have
+
+$$
+\left\| \mathbf {U} \mathbf {V} \right\| _ {F} ^ {2} = \operatorname {Tr} \left(\mathbf {U} \mathbf {V} \mathbf {V} ^ {\top} \mathbf {U} ^ {\top}\right) = \operatorname {Tr} \left(\mathbf {U} ^ {\top} \mathbf {U} \mathbf {V} \mathbf {V} ^ {\top}\right).
+$$
+
+By Lemma B.2, it is clear that
+
+$$
+\lambda_ {\min } \left(\mathbf {U} ^ {\top} \mathbf {U}\right) \operatorname {Tr} \left(\mathbf {V V} ^ {\top}\right) \leqslant \operatorname {Tr} \left(\mathbf {U} ^ {\top} \mathbf {U} \mathbf {V} \mathbf {V} ^ {\top}\right) \leqslant \lambda_ {\max } \left(\mathbf {U} ^ {\top} \mathbf {U}\right) \operatorname {Tr} \left(\mathbf {V V} ^ {\top}\right).
+$$
+
+Since $\mathbf{U} \in \mathbb{R}^{d \times r}$ has rank $r$ , we have $\lambda_{\min}(\mathbf{U}^{\top} \mathbf{U}) = \sigma_{\min}^{2}(\mathbf{U})$ . Then applying the facts that $\lambda_{\max}(\mathbf{U}^{\top} \mathbf{U}) = \sigma_{\max}^{2}(\mathbf{U})$ and $\mathrm{Tr}(\mathbf{V} \mathbf{V}^{\top}) = \| \mathbf{V} \|_F^2$ , we complete the proof.
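+Both trace bounds of Lemma B.3 can be verified numerically for random matrices (a sketch; the dimensions are arbitrary, and a Gaussian $\mathbf{U}$ has full column rank almost surely):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+d, r, m = 7, 4, 6
+U = rng.standard_normal((d, r))   # full column rank almost surely, so rank r
+V = rng.standard_normal((r, m))
+
+uv = np.linalg.norm(U @ V, 'fro') ** 2
+s = np.linalg.svd(U, compute_uv=False)   # singular values of U, descending
+fro_v = np.linalg.norm(V, 'fro') ** 2
+
+# lambda_min(U^T U) ||V||_F^2 <= ||U V||_F^2 <= lambda_max(U^T U) ||V||_F^2
+assert s[-1] ** 2 * fro_v <= uv <= s[0] ** 2 * fro_v
+```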
+
+
\ No newline at end of file
diff --git a/ontheglobalconvergenceoftrainingdeeplinearresnets/images.zip b/ontheglobalconvergenceoftrainingdeeplinearresnets/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..26403e936c73849744dfc51f81868c0792ac3534
--- /dev/null
+++ b/ontheglobalconvergenceoftrainingdeeplinearresnets/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f12d858da6c5da8de90323d1c22191eabc7d0c37891ff3815db71b6b8395aaf7
+size 1642910
diff --git a/ontheglobalconvergenceoftrainingdeeplinearresnets/layout.json b/ontheglobalconvergenceoftrainingdeeplinearresnets/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..15f72ebb9ccd372c41d1c668a0bdefc6ad9eb3bd
--- /dev/null
+++ b/ontheglobalconvergenceoftrainingdeeplinearresnets/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1209abc69b0da5bec5b17d24745899d63fbb170747f1ea6bdffffeda8e67b1cf
+size 1192252
diff --git a/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/1b3901be-621d-4bb3-bcdb-d88a447dac73_content_list.json b/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/1b3901be-621d-4bb3-bcdb-d88a447dac73_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..657194e22436870ea897ca4e87ad117b1e206f37
--- /dev/null
+++ b/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/1b3901be-621d-4bb3-bcdb-d88a447dac73_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8095a0c0b0ed76ed40ab8cc95816b6dd760e2813f53e52dc55a7d31a42e53f2b
+size 80378
diff --git a/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/1b3901be-621d-4bb3-bcdb-d88a447dac73_model.json b/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/1b3901be-621d-4bb3-bcdb-d88a447dac73_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..66a1d82c7c3d03ce642a5191986e98014008f803
--- /dev/null
+++ b/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/1b3901be-621d-4bb3-bcdb-d88a447dac73_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d136354a1b737269e8eb938279bcf234a0957e8d28676756e50ce677699d351
+size 97132
diff --git a/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/1b3901be-621d-4bb3-bcdb-d88a447dac73_origin.pdf b/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/1b3901be-621d-4bb3-bcdb-d88a447dac73_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c8b563aa48544fe9184f9364b763affbdaacdaff
--- /dev/null
+++ b/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/1b3901be-621d-4bb3-bcdb-d88a447dac73_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:870d00049f9ed58addb8dc47a1af6808bbf9867bfb5d60206b92b4ec27c5c2a7
+size 3629334
diff --git a/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/full.md b/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..cd8b466624dcb0376e2f9068288cfa7e57083866
--- /dev/null
+++ b/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/full.md
@@ -0,0 +1,274 @@
+# ON THE INTERACTION BETWEEN SUPERVISION AND SELF-PLAY IN EMERGENT COMMUNICATION
+
+Ryan Lowe,\* Abhinav Gupta\* MILA
+
+Jakob Foerster, Douwe Kiela
+Facebook AI Research
+
+Joelle Pineau
+Facebook AI Research
+MILA
+
+# ABSTRACT
+
+A promising approach for teaching artificial agents to use natural language involves human-in-the-loop training. However, recent work suggests that current machine learning methods are too data-inefficient to be trained this way from scratch. In this paper, we investigate the relationship between two categories of learning signals, with the ultimate goal of improving sample efficiency: imitating human language data via supervised learning, and maximizing reward in a simulated multi-agent environment via self-play (as done in emergent communication). We introduce the term supervised self-play (S2P) for algorithms that use both of these signals. We find that first training agents via supervised learning on human data and then via self-play outperforms the converse, suggesting that it is not beneficial to emerge languages from scratch. We then empirically investigate various S2P schedules that begin with supervised learning, in two environments: a Lewis signaling game with symbolic inputs, and an image-based referential game with natural language descriptions. Lastly, we introduce population-based approaches to S2P, which further improve performance over single-agent methods.1
+
+# 1 INTRODUCTION
+
+Language is one of the most important aspects of human intelligence; it allows humans to coordinate and share knowledge with each other. It is also crucial for human-machine interaction, as human language is a natural means by which to exchange information, give feedback, and specify goals. A promising approach for training agents to solve problems with natural language is to have a "human in the loop", meaning we collect problem-specific data from humans interacting directly with our agents for learning. However, human-in-the-loop data is expensive and time-consuming to obtain as it requires continuously collecting human data as the agent's policy improves, and recent work suggests that current machine learning methods (e.g. from deep reinforcement learning) are too data-inefficient to be trained in this way from scratch (Chevalier-Boisvert et al., 2019). Thus, an important open problem is: how can we make human-in-the-loop training as data efficient as possible?
+
+To maximize data efficiency, it is important to fully leverage all available training signals. In this paper, we study two categories of such training methods: imitating human data via supervised learning, and self-play to maximize reward in a multi-agent environment, both of which provide rich signals for endowing agents with language-using capabilities. However, these are potentially competing objectives, as maximizing environmental reward can lead to the resulting communication protocol drifting from natural language (Lewis et al., 2017; Lee et al., 2019). The crucial question, then, is how do we best combine self-play and supervised updates? This question has received surprisingly little attention from the emergent communication literature, where the question of how to bridge the gap from emergent protocols to natural language is generally left for future work (Mordatch & Abbeel, 2018; Lazaridou et al., 2018; Cao et al., 2018).
+
+
+Figure 1: (a) Diagram of the supervised self-play (S2P) procedure (phases 1-3) and the testing procedure considered in this work (phase 4). (b) The environments considered in this paper (Sec. 4).
+
+
+
+Our goal in this paper is to investigate algorithms for combining supervised learning with self-play — which we call supervised self-play (S2P) algorithms — using two classic emergent communication tasks: a Lewis signaling game with symbolic inputs, and a more complicated image-based referential game with natural language descriptions. Our first finding is that supervised learning followed by self-play outperforms emergent communication with supervised fine-tuning in these environments, and we provide three reasons why this is the case. We then empirically investigate several supervised-first S2P methods in our environments. Existing approaches in this area have used various ad-hoc schedules for alternating between the two kinds of updates (Lazaridou et al., 2017), but to our knowledge there has been no systematic study comparing these approaches. Lastly, we propose the use of population-based methods for S2P, and find that they improve performance in the more challenging image-based referential game. Our findings highlight the need for further work in combining supervised learning and self-play to develop more sample-efficient language learners.
+
+# 2 RELATED WORK
+
+In the past few years, there has been renewed interest in the field of emergent communication (Sukhbaatar et al., 2016; Foerster et al., 2016; Lazaridou et al., 2017; Havrylov & Titov, 2017), culminating in three NeurIPS workshops. Empirical studies have shown that agents deployed in a multi-agent environment can autonomously evolve a communication protocol using discrete symbols, which helps them play a cooperative or competitive game (Singh et al., 2019; Cao et al., 2018; Choi et al., 2018; Resnick* et al., 2019; Evtimova et al., 2018).
+
+While the idea of promoting coordination among agents through communication sounds promising, recent experiments (Lowe et al., 2019; Chaabouni et al., 2019; Kottur et al., 2017; Jaques et al., 2019) have emphasized the difficulty in learning meaningful emergent communication protocols even with centralized training.
+
+Apart from the above advances in emergent communication, there is a long-standing goal of building intelligent conversational agents that can interact with humans. This involves training artificial agents so that they achieve high scores on the task while using language that is interpretable by humans or close to natural language. Recent works also add another axis, orthogonal to communication, where the agent takes a discrete action in an interactive environment (de Vries et al., 2018; Mul et al., 2019). Lewis et al. (2017) introduced a negotiation task that requires both linguistic and reasoning skills. They trained models to imitate human utterances via supervised learning and found that the models generated human-like utterances but were poor negotiators. They therefore interleaved self-play with the pretrained supervised objective and found that performance improved drastically while avoiding language drift. Lee et al. (2019) also propose using an auxiliary task to ground the communication and counter language drift. They use visual grounding to learn the semantics of the language while still generating messages that are close to English.
+
+A recent trend of using population-based training for multi-agent communication is a promising avenue for research, drawing inspiration from the language evolution literature (Smith et al., 2003; Kirby, 2014; Raviv & Arnon, 2018). Cultural transmission is one such technique, which focuses on the structure and compression of languages, since a language must be used and learned by all individuals of the culture in which it resides while remaining suitable for a variety of tasks. Harding Graesser et al. (2019) show the emergence of linguistic phenomena when pools of agents come into contact with each other, giving rise to novel creole languages. Li & Bowling (2019); Cogswell et al. (2019); Tieleman et al. (2018) have also tried different ways of imposing cultural pressure on agents, by simulating a large population of them and pairing agents to solve a cooperative game with communication. They train each agent against agents sampled from earlier generations, where each generation corresponds to the language of a different agent at a different point in training history.
+
+Our work is inspired by these lines of research; we aim to formalize recent advances in using self-play for dialogue modeling through the lens of emergent communication.
+
+# 3 METHODS
+
+# 3.1 PROBLEM DEFINITION
+
+Our agents are embedded in a multi-agent environment with $N$ agents, where they receive observations $o \in O$ (which are functions of a hidden state $S$ ) and perform actions $a \in A$ . Some actions $A_{L} \subset A$ involve sending a message $m \in A_{L}$ over a discrete, costless communication channel (i.e. a cheap talk channel (Farrell & Rabin, 1996)). The agents receive a reward $r \in R$ for their performance in the environment. We assume throughout that the environment is cooperative, and thus the agents are trained to maximize the sum of rewards $R = \sum_{t=1}^{T} \sum_{i=1}^{N} r_{i,t}$ across all agents. This can be thought of as a cooperative partially observable Markov game (Littman, 1994).
+
+We define a target language $L^{*} \in \mathcal{L}$ , usually corresponding to natural language, that we want our agents to learn (we further assume $L^{*}$ can be used to achieve high task reward). In this paper, we consider a language $L \in \mathcal{L}$ to be simply a set of valid messages $A_{L}$ and a mapping between observations and messages in the environment, $L: O \times A_{L} \mapsto [0,1]$ . For example, in an English image-based referential game (Section 4) this corresponds to the mapping between images and image descriptions in English. We are given a dataset $\mathcal{D}$ consisting of $|\mathcal{D}|$ (observation, action) pairs, corresponding to $N_{e}$ 'experts' (for us, $N_{e} = 2$ ) playing the game using the target language $L^{*}$ . Our goal is to train agents to achieve a high reward in the game while speaking language $L^{*}$ with an 'expert'. Specifically, we want our agents to generalize and to perform well on examples that are not contained in $\mathcal{D}$ .
+
+To summarize, we want agents that can perform well on a collaborative task with English-speaking humans, and we can train them using a supervised dataset $\mathcal{D}$ and via self-play.
+
+# 3.2 SUPERVISED SELF-PLAY (S2P)
+
+In recent years, there have been several approaches to language learning that have combined supervised or imitation learning with self-play. In this paper, we propose an umbrella term for these algorithms called supervised self-play (S2P). S2P requires two things: (1) a multi-agent environment where at least one agent can send messages over a dedicated communication channel, along with a reward function that measures how well the agents are doing at some task; and (2) a supervised dataset $\mathcal{D}$ of experts acting and speaking language $L^{*}$ in the environment (such that they perform well on the task). Given these ingredients, we define S2P below (see Figure 2).
+
+Definition 3.1. Supervised self-play (S2P). Supervised self-play is a class of language learning algorithms that combines: (1) self-play updates in a multi-agent language environment, and (2) supervised updates on an expert dataset $\mathcal{D}$ .
+
+S2P algorithms can differ in how they combine self-play and supervised learning updates on $\mathcal{D}$ . When supervised learning is performed before self-play, we refer to the dataset $\mathcal{D}$ as the seed data. Why might we want to train our agents via self-play? Won't their language diverge from $L^{*}$ ? One way to intuitively understand why S2P is beneficial is to think in terms of constraints. In our set-up, there are two known constraints on the target language $L^{*}$ : (1) it is consistent with the samples from the supervised dataset $\mathcal{D}$ , and (2) $L^{*}$ can be used to obtain a high reward in the environment. Thus, finding $L^{*}$ can be loosely viewed as a constrained optimization problem, and enforcing both constraints should clearly lead to better performance.
+
+# 3.3 ALGORITHMS FOR S2P
+
+Here we describe several methods for S2P training. Our goal is not to exhaustively enumerate all possible optimization strategies, but rather to provide a categorization of some well-known ways to combine self-play and supervised learning. To help describe these methods, we further split the seed dataset $\mathcal{D}$ into $\mathcal{D}_{train}$ , which is used for training, and $\mathcal{D}_{val}$ , which is used for early stopping. We also visualize the schedules in Figure 2.
+
+Emergent communication with supervised fine-tuning (sp2sup): We first perform self-play updates until task performance converges. This is then followed by supervised updates on $\mathcal{D}_{train}$ until the listener's performance converges on $\mathcal{D}_{val}$ .
+
+Supervised learning with self-play (sup2sp): This is the complement of the above method: we perform supervised updates until convergence on $\mathcal{D}_{val}$ , followed by self-play updates until convergence on the task performance.
+
+Random updates (rand): This is the method used in Lazaridou et al. (2017). At each time step, we sample a Bernoulli random variable $z \sim \text{Bernoulli}(q)$ where $q$ is fixed. If $z = 1$ , we perform one supervised update; otherwise we perform one self-play update. We repeat until both losses converge on $\mathcal{D}_{val}$ .
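+
+A minimal sketch of this schedule, assuming hypothetical `supervised_update` and `self_play_update` closures that each perform one gradient step:

```python
import random

def rand_s2p(num_steps, q, supervised_update, self_play_update, seed=0):
    """Sketch of the 'rand' S2P schedule (names hypothetical): at each
    step, draw z ~ Bernoulli(q); if z == 1 do one supervised update,
    otherwise one self-play update."""
    rng = random.Random(seed)
    counts = {"supervised": 0, "self_play": 0}
    for _ in range(num_steps):
        if rng.random() < q:  # z = 1 with probability q
            supervised_update()
            counts["supervised"] += 1
        else:
            self_play_update()
            counts["self_play"] += 1
    return counts
```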
+
+
+Figure 2: A visual representation of the different S2P methods.
+
+Scheduled updates (sched): We first pretrain the listener and the speaker until convergence on $\mathcal{D}_{val}$ . Then we create a schedule, where we perform $l$ self-play updates followed by $m$ supervised updates, and repeat until convergence on the dataset.
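+
+The alternation above can be sketched as follows (the function and argument names are hypothetical, and a fixed number of rounds stands in for the convergence check):

```python
def sched_s2p(num_rounds, l, m, self_play_update, supervised_update):
    """Sketch of the 'sched' S2P schedule: after supervised pretraining
    (not shown), each round performs l self-play updates followed by m
    supervised updates; in the paper this repeats until convergence on
    the validation set."""
    history = []
    for _ in range(num_rounds):
        for _ in range(l):
            self_play_update()
            history.append("self_play")
        for _ in range(m):
            supervised_update()
            history.append("supervised")
    return history
```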
+
+Scheduled updates with speaker freezing (sched_frz): This method is based on the findings of Lewis et al. (2017), who do sched S2P while freezing the parameters of the speaker during self-play to reduce the amount of language drift. In our case, we freeze the parameters of the speaker after the initial supervised learning.
+
+Scheduled updates with random speaker freezing (sched_rand_frz): Experimentally, we noticed that sched_frz didn't perform well in self-play. Thus, we introduce a variation: we sample a Bernoulli random variable $z \sim \text{Bernoulli}(r)$ where $r$ is fixed. If $z = 1$ , we freeze the parameters of the speaker during both self-play and supervised learning; otherwise we allow updates to the speaker as well.
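+
+A minimal sketch of the per-round freezing decision, assuming a framework-style `requires_grad` flag on parameters (all names here are hypothetical):

```python
import random

class Param:
    """Stand-in for a framework parameter with a requires_grad flag."""
    def __init__(self):
        self.requires_grad = True

def sched_rand_frz_round(speaker_params, update_round, r, rng=None):
    """Sketch of one sched_rand_frz round: draw z ~ Bernoulli(r); if
    z == 1 the speaker's parameters are frozen for the whole round (both
    self-play and supervised updates), otherwise the speaker is
    trainable. Returns whether the speaker was frozen."""
    rng = rng or random.Random()
    frozen = rng.random() < r
    for p in speaker_params:
        p.requires_grad = not frozen
    update_round()  # the l self-play + m supervised updates of 'sched'
    return frozen
```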
+
+# 3.4 POPULATION-BASED S2P (POP-S2P)
+
+As explained above, the goal of S2P is to produce agents that follow dataset $\mathcal{D}$ while maximizing reward in the environment. However, there are many policies satisfying these criteria. This results in a large space of possible solutions, which grows as the environment becomes more complex (but shrinks with increasing $|\mathcal{D}|$ ). Experimentally, we find that this can result in diverse agent policies. We show this in Figure 3 by training 50 randomly initialized agents on the image-based referential game (defined in Sec. 4): the agents often make diverse predictions for a given image (Figure 3a) and achieve variable performance when playing with other populations, with a slight preference for their own partner (the diagonal in Figure 3b).
+
+Figure 3: Results from training 50 S2P agents on the IBR game with $|\mathcal{D}| = 10000$ . (a) The agents have a range of predictions on many images. (b) When playing with each other, the agents exhibit uneven performance (color is mean reward, yellow is higher), indicating policy variability.
+
+Inspired by these findings, we propose to augment S2P by training a population of $N$ agents, and subsequently aggregating them back into a single agent (the 'student'). We call this population-based S2P (Pop-S2P). While there are many feasible ways of doing this, in this paper we train the populations by simply randomizing the initial seed, and we aggregate the populations using a simple form of policy distillation (Rusu et al., 2016). Another simple technique to boost performance is ensembling, where we simply take the majority prediction at each time step.
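+
+The ensembling variant can be sketched as a simple majority vote over the population (the agent interface here is an assumption):

```python
from collections import Counter

def ensemble_predict(agents, observation):
    """Sketch of Pop-S2P ensembling: query every agent in the population
    and return the majority prediction (ties broken by first-seen
    order). Each `agent` is assumed to be a callable mapping an
    observation to a discrete prediction."""
    votes = [agent(observation) for agent in agents]
    return Counter(votes).most_common(1)[0][0]
```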
+
+# 4 ENVIRONMENTS & IMPLEMENTATION DETAILS
+
+We consider environments based on classical problems in emergent communication. These environments are cooperative and involve the interaction between a speaker, who makes an observation and sends a message, and a listener, who observes the message and makes a prediction (see Figure 1b). Our goal is to train a listener such that it achieves high reward when playing with an expert speaking the target language $L^{*}$ on inputs unseen during training.2
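+
+A minimal sketch of one round of such a game, with hypothetical `speaker`, `listener`, and `reward_fn` callables:

```python
def play_episode(speaker, listener, observation, reward_fn):
    """Sketch of a single-turn speaker/listener game (Figure 1b): the
    speaker maps its observation to a message, the listener maps the
    message to a prediction, and both agents receive the same
    cooperative reward."""
    message = speaker(observation)
    prediction = listener(message)
    r = reward_fn(observation, prediction)
    return message, prediction, (r, r)  # shared reward: cooperative game
```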
+
+Environment 1: Object Reconstruction (OR) Our first game is a Lewis signaling game (Lewis, 1969) and a simpler version of the Task & Talk game from Kottur et al. (2017), with a single turn and a much larger input space. The speaker agent observes an object with a certain set of properties, and must describe the object to the listener using a sequence of words. The listener then attempts to reconstruct the object. More specifically, the input space consists of $p$ properties (e.g. shape, color) of $t$ types each (e.g. triangle, square). The speaker observes a symbolic representation of the input, consisting of the concatenation of $p = 6$ one-hot vectors, each of length $t = 10$ . The number of possible inputs scales as $t^p$ . We define the vocabulary size (length of each one-hot vector sent from the speaker) as $|V| = 60$ , and the number of words (fixed length message) sent to be $T = 6$ .
+
+For our target language $L^{*}$ for this task, we programmatically generate a perfectly compositional language by assigning each property type a unique word. In other words, to describe a 'blue shaded triangle', we create a language where the output description would be "blue, triangle, shaded", in some arbitrary order. By 'unique word', we mean that no two property types are assigned the same word. The speaker and listener policies are parameterized using a 2-layer linear network (results were similar with added non-linearity and significantly worse with 1-layer linear networks) with 200 hidden units. During both supervised learning and self-play, the listener is trained to minimize the cross-entropy loss over property predictions.
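+
+A sketch of how such a compositional language can be generated programmatically (the exact construction here is an assumption; only the unique-word-per-property-type structure follows the text):

```python
import random

def make_compositional_language(p=6, t=10, seed=0):
    """Sketch of the programmatic target language L* for the OR game:
    every (property, type) pair is assigned a unique word, so the
    vocabulary size is p * t (= 60 for p = 6, t = 10) and an object is
    described by p words. Word order within an utterance is arbitrary;
    here we fix it to property order for simplicity."""
    rng = random.Random(seed)
    words = list(range(p * t))
    rng.shuffle(words)  # arbitrary but consistent word assignment
    lexicon = {(prop, typ): words[prop * t + typ]
               for prop in range(p) for typ in range(t)}

    def describe(obj):
        # obj: tuple of p type indices, one per property
        return tuple(lexicon[(prop, typ)] for prop, typ in enumerate(obj))

    return lexicon, describe
```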
+
+Environment 2: Image-Based Referential game with natural language (IBR) Our second game is the communication task introduced in Lee et al. (2018). The speaker observes a target image $d^{*}$ , and must describe the image using a set of words. The listener observes the target image along with $D$ distractor images sampled uniformly at random from the training set (for us, $D = 9$ ), and the message $y_{d^{*}}$ from the speaker, and is rewarded for correctly selecting the target image. For this game, the target language $L^{*}$ is English — we obtain English image descriptions using caption data from MS COCO and Flickr30k. We set the vocabulary size $|V| = 100$ , and filter out any descriptions that contain more than $30\%$ unknown tokens while keeping the maximum message length $T$ to 15.
+
+Similar to Mordatch & Abbeel (2018); Sukhbaatar et al. (2016), we train our agents end-to-end with backpropagation. Since the speaker sends discrete messages, we use the Straight-Through version of Gumbel-Softmax (Jang et al., 2017; Maddison et al., 2017) to allow gradient flow to the speaker during self-play ( $\mathcal{J}_{\text{self-play}}$ ). The speaker's predictions are trained on the ground-truth English captions $m^{*}$ using the cross-entropy loss $\mathcal{J}_{\text{spk-supervised}}$ . The listener is trained using the cross-entropy loss $\mathcal{J}_{\text{lsn-supervised}}$ , where the logits are the reciprocal of the mean squared error, which was found to perform better than directly minimizing the MSE loss in Lee et al. (2018). The mean squared error is taken between the listener's image representation $b_{lsn}$ of each distractor (or target) image and the message representation given as input. The loss functions are defined as:
+
+$$
+\mathcal{J}_{\mathrm{spk\text{-}supervised}}(d^{*}) = -\sum_{t=1}^{T} \log p_{spk}(m_{t} \mid m_{<t}, d^{*})
+$$
+
+$$
+\mathcal{J}_{\mathrm{lsn\text{-}supervised}}(m^{*}, d^{*}, D) = -\sum_{d=1}^{D+1} \log \operatorname{softmax}\left(1 / \left(p_{lsn}(m^{*}) - b_{lsn}(d)\right)^{2}\right)
+$$
+
+
+Figure 4: (a) Left: In the OR game, best performance (number of total samples required to achieve $95\%$ test accuracy, lower is better) for S2P is achieved when all of the samples are in the seed. 0 on the x-axis corresponds to sp2sup and Optimal is the actual (minimum) number of samples required to solve this optimization problem (see Appendix B). Right: This is also the case in the IBR game, where performance is measured by the generalization accuracy using 10k total training samples (higher is better). (b) Adding more samples to initial supervised learning in the IBR game improves agents' generalization to $L^{*}$ . (c) Even when we learn the perfect distribution with emergent communication in the OR game, it still performs worse than Pop-S2P (using sup2sp S2P).
+
+$$
+\mathcal{J}_{\mathrm{self\text{-}play}}(d^{*}, D) = -\sum_{d=1}^{D+1} \log \operatorname{softmax}\left(1 / \left(p_{lsn}(y_{d^{*}}) - b_{lsn}(d)\right)^{2}\right)
+$$
+
+where $y_{d^*}$ is the concatenation of $T$ one-hot vectors $y_{d^*}^t = \mathrm{ST\text{-}GumbelSoftmax}(p_{spk}^t)$ .
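+
+A minimal sketch of the reciprocal-MSE scoring described above, using plain Python lists for the representations (the exact reduction details are an assumption):

```python
import math

def listener_logits(msg_repr, image_reprs, eps=1e-8):
    """Sketch of the listener scoring: the logit for each candidate
    image is the reciprocal of the mean squared error between its
    representation and the message representation, so closer images
    receive larger logits."""
    mses = [sum((m - b) ** 2 for m, b in zip(msg_repr, img)) / len(msg_repr)
            for img in image_reprs]
    return [1.0 / (e + eps) for e in mses]

def listener_loss(msg_repr, image_reprs, target_idx):
    """Cross-entropy of the softmax over the reciprocal-MSE logits."""
    logits = listener_logits(msg_repr, image_reprs)
    mx = max(logits)
    exps = [math.exp(z - mx) for z in logits]  # stabilized softmax
    return -math.log(exps[target_idx] / sum(exps))
```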
+
+We use the same architecture as described in Lee et al. (2018). The speaker and listener are parameterized by recurrent policies, both using an embedding layer of size 256 followed by a GRU (Cho et al., 2014) of size 512. We provide further hyperparameter details in Table 1 in the Appendix.
+
+# 5 DO SUPERVISED LEARNING BEFORE SELF-PLAY
+
+A central question in our work is how to combine supervised and self-play updates for effective pre-training of conversational agents. In this section, we study this question by conducting experiments with two schedules: training with emergent communication followed by supervised learning (sp2sup), and training with supervised learning followed by self-play (sup2sp). We also interpolate between these two regimes by performing rand and sched S2P on $0 < n < |\mathcal{D}|$ samples, followed by supervised fine-tuning on the remaining $|\mathcal{D}| - n$ samples.
+
+Our first finding is that it is best to use all of your samples for supervised learning before doing self-play. This can be seen in Figure 4: when all of the samples are used first for supervised learning, the number of total samples required to solve the OR game drops drastically, and in the IBR game the accuracy for a fixed number of samples is maximized (Figure 4a). While this may seem to be common sense, it in fact runs counter to the prevailing wisdom in some of the emergent communication literature, where languages are emerged from scratch with the ultimate goal of translating them to natural language.
+
+To better understand why it is best to do supervised learning first, we now conduct a set of targeted experiments using the environments from Section 4. Results of our experiments suggest three main explanations:
+
+(1) Emerging a language is hard. For many environments, it is often hard for emergent communication to find an equilibrium where the agents meaningfully communicate. The difficulty of 'emergent language discovery' is well known in emergent communication (Lowe et al., 2017), so we will only briefly discuss it here. In short, to discover a useful communication protocol, agents have to coordinate repeatedly over time, which is difficult when agents are randomly initialized, particularly in environments with sparse reward. Compounding the difficulty is that, if neither agent communicates and both agents act optimally given their lack of knowledge, they converge to a Nash equilibrium called the babbling equilibrium (Farrell & Rabin, 1996). This equilibrium must be overcome to learn a useful communication protocol. In S2P, the initial language supervision can help overcome the discovery problem, as it provides an initial policy for how agents could usefully communicate (Lewis et al., 2017).
+
+Figure 5: Results from the OR game with 1 property and 10 types. When the supervised updates are performed first (supervised data available for words $0 - 3$ ), then the self-play updates make sensible predictions for the unknown words $4 - 7$ . When the self-play updates are performed first, the subsequent supervised updates merely correct the predictions for words $1 - 4$ , without enforcing the constraint that each word should result in a separate type to solve the task.
+
+(2) Emergent languages are different from natural language. Even if one does find an equilibrium where agents communicate and perform well on the task, the distribution of languages they find will usually be very different from natural language. This is a problem because, if the languages obtained through self-play are sufficiently different from $L^{*}$ , they will not be helpful for learning it. This is seen for the OR game in Figure 4a, where 17 samples are required in the seed before S2P outperforms the supervised learning baseline. We speculate that this is due to the different pressures exerted during the emergence of artificial languages and human languages.
+
+Thankfully, we can learn languages closer to $L^{*}$ by simply adding more samples to our initial supervised learning phase. We show this in Figure 4b, where we train populations of 50 agents on the IBR game and use Pop-S2P to produce a single distilled agent. With both 1K and 10K initial supervised samples, the distilled agent generalizes to agents in the validation set of its population. However, the distilled agent trained with 10K samples performs significantly better when playing with an expert agent speaking $L^{*}$ , indicating that the training agents from that population speak languages closer to $L^{*}$ .
+
+(3) Starting with self-play violates constraints. Even if you have 'perfect emergent communication' that learns a distribution over languages under which $L^{*}$ has high probability, current methods of supervised fine-tuning do not properly learn from this distribution. What if we had all the correct learning pressures, such that we emerged a distribution over languages $\mathcal{L}$ with structure identical to $L^{*}$ , and then trained a Pop-S2P agent using this distribution? Surprisingly, we find that S2P with all of the samples in the seed performs better than even this optimistic case, in terms of providing useful information for training a Pop-S2P agent. We conduct an experiment in the OR game where we programmatically define a distribution over compositional languages $\mathcal{L}_c$ , of which our target language $L^{*}$ is a sample. Each language $L \in \mathcal{L}_c$ has the same structure and is obtained by randomly permuting the mapping between the word IDs and the corresponding type IDs, along with the order of properties in an utterance. Next, we compare two distilled policies using 50 populations: one is distilled from S2P populations (trained with $X$ samples), and the other is distilled from 'perfect emergent communication' and fine-tuned on $X$ samples. As can be seen in Figure 4c, when we train a Pop-S2P agent on 50 of these compositional populations, we still need $3X$ more samples than regular Pop-S2P (trained on 50 S2P agents with all of the samples in the seed) to reach $95\%$ test accuracy3.
+
+To understand why this happens, we conduct a case study in an even simpler setting: single-agent S2P in the OR game with $p = 1$ , $t = 10$ , $|V| = 10$ . We find that agents trained via emergent communication consistently learn to solve this task. However, as shown in Figure 5, when subsequently trained via supervised learning on $\mathcal{D}$ to learn $L^*$ , the learned language is no longer coherent (it maps different words to the same type) and doesn't solve the task. On the other hand, agents trained first with supervised learning are able to learn a language that both solves the task and is consistent with $\mathcal{D}$ .
+
+Intuitively, what's happening is that the samples in $\mathcal{D}$ are also valid for solving the task, since we assume agents speaking $L^{*}$ can solve the task. Thus, self-play after supervised learning simply 'fills in the gaps' for examples not in $\mathcal{D}$ .4 Emergent languages that start with self-play, on the other hand, contain input-output mappings that are inconsistent with $L^{*}$ , which must be un-learned during subsequent supervised learning.
+
+In theory, the above issue could be resolved using Pop-S2P; if the distilled agent could use the population of emergent languages to discover structural rules (e.g. discovering that the languages in the OR game in Figure 4c are compositional), it could use the samples from $\mathcal{D}$ to refine a posterior distribution over target languages that is consistent with these rules (e.g. learning the distribution of compositional languages consistent with $\mathcal{D}$ ). Current approaches to supervised fine-tuning in language, though, do not do this (Lazaridou et al., 2017; Lewis et al., 2017). An interesting direction for future work is examining how to apply Bayesian techniques to S2P.
+
+# 6 EXPLORING VARIANTS OF S2P
+
+# 6.1 POPULATION-BASED S2P
+
+In this section, we aim to show that (1) S2P outperforms the supervised learning baseline, and (2) Pop-S2P outperforms S2P. We conduct our experiments in the more complex IBR game, since the agents must communicate in English, and measure performance by calculating the accuracy at different (fixed) numbers of samples. Our baseline is then the performance of a supervised learner on a fixed number of samples.
+
+We show the results in Figure 6. We first note that, when both 1k and 10k samples are used for supervised learning, S2P (sched) outperforms the supervised learning baseline. We can also see that the population-based approach outperforms single agent S2P (sched) by a significant margin. We also compare our distillation method to an ensembling method that keeps all 50 populations at test time, and find that ensembling performs significantly better, although it is much less efficient. This suggests that there is room to push distilled Pop-S2P to even better performance.
+
+
+Figure 6: S2P (sched) outperforms the supervised baseline in the IBR game, and is in turn outperformed by Pop-S2P.
+
+# 6.2 EXAMINING S2P SCHEDULES
+
+In this section, we aim to: (1) evaluate several S2P schedules empirically on the IBR game; and (2) attain a better understanding of S2P through quantitative experiments.
+
+Parameter freezing improves S2P We show the results comparing different S2P schedules in Figure 7a. We find that in this more complex game, sup2sp S2P performs much worse than the other options. We also see that adding freezing slightly improves performance on the target language (Figure 8 in the Appendix also shows that it converges more quickly). We hypothesize that this is because freezing reduces the language drift experienced during each round of self-play updates (Lee et al., 2019). Overall, however, the differences between S2P schedules are relatively small, and it is unclear whether the same ordering would hold in a different domain.
+
+
+Figure 7: (a) Comparing test performances of different S2P methods on the IBR game. For each method, we picked the model that gave the best performance on $\mathcal{D}_{val}$ . (b) 2D visualization of S2P (sched) performance over the course of training, in terms of performance on $L^{*}$ (vertical axis) and performance in self-play (horizontal axis). The zig-zag patterns indicate that most self-play updates result in a short-term decrease in target language performance. (c) Visualization of the role of the supervised and self-play updates in sched S2P.
+
+Self-play acts as a regularizer What is the role of self-play in S2P? We can start to decipher this by taking a closer look at the sched S2P. We plot the training performance of this method in Figure 7b. Interestingly, we notice from the zig-zag pattern that the validation performance usually goes down after every set of self-play updates. However, the overall validation performance goes up after the next round of supervised updates. This is also reflected in the poor performance of the sup2sp S2P in Figure 6.
+
+This phenomenon can be explained by framing self-play as a form of regularization: alternating between supervised and self-play updates is a way to satisfy the parallel constraints of 'is consistent with the dataset $\mathcal{D}$ ' and 'performs well on the task'. We visualize this pictorially in Figure 7c: while a set of self-play updates results in poor performance on $\mathcal{D}$ , eventually the learned language moves closer to satisfying both constraints.
+
+# 7 DISCUSSION
+
+In this work, we investigated the research question of how to combine supervised and self-play updates, with a focus on training agents to learn a language. However, this research question is not only important for language learning; it is also important for equilibrium selection and learning social conventions (Lerer & Peysakhovich, 2019) in general games. For example, in robotics there may be a trade-off between performing a task well (moving an object to a certain place) and having a policy that is interpretable by humans (so that they will not stumble over the robot). Examining how to combine supervised and self-play updates in these settings is an exciting direction for future work.
+
+There are several axes of complexity not addressed in our environments and problem set-up. First, we consider only single-state environments, and agents do not have to make temporally extended decisions. Second, we do not consider pre-training on large text corpora that are separate from the desired task (Radford et al., 2019; Devlin et al., 2018). Third, we limit our exploration of self-play to the multi-agent setting, unlike works on instruction following (Andreas & Klein, 2015). Introducing these elements may raise additional practical considerations for S2P learning, which we leave for future work. Our goal in this paper is not to determine the best method of S2P in all of these settings, but rather to inspire others to use the framing of 'supervised self-play algorithms' to make progress on sample-efficient human-in-the-loop language learning.
+
+# ACKNOWLEDGEMENTS
+
+We are very grateful to Angeliki Lazaridou, with whom discussions at ICML 2019 and her simultaneous work (Lazaridou et al., 2020) shifted the direction of this work considerably. We also thank Jean Harb, Liam Fedus, Amy Zhang, Evgeny Naumov, Cinjon Resnick, Igor Mordatch, and others at MILA and Facebook AI Research for discussions related to the ideas in this paper. Special thanks to Arthur Szlam and Kavya Srinet for discussing their ongoing work with us. RL is supported in part by a Vanier Scholarship.
+
+# REFERENCES
+
+Jacob Andreas and Dan Klein. Alignment-based compositional semantics for instruction following. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 1165-1174, Lisbon, Portugal, September 2015. Association for Computational Linguistics. doi: 10.18653/v1/D15-1138. URL https://www.aclweb.org/anthology/D15-1138.
+Kris Cao, Angeliki Lazaridou, Marc Lanctot, Joel Z Leibo, Karl Tuyls, and Stephen Clark. Emergent communication through negotiation. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Hk6WhagRW.
+Rahma Chaabouni, Eugene Kharitonov, Emmanuel Dupoux, and Marco Baroni. Anti-efficient encoding in emergent communication. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 6290-6300. Curran Associates, Inc., 2019. URL http://papers.nips.cc/paper/8859-anti-efficient-encoding-in-emergent-communication.pdf.
+Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, and Yoshua Bengio. BabyAI: First steps towards grounded language learning with a human in the loop. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJeXCoOcYX.
+Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724-1734, Doha, Qatar, October 2014. Association for Computational Linguistics. doi: 10.3115/v1/D14-1179. URL https://www.aclweb.org/anthology/D14-1179.
+Edward Choi, Angeliki Lazaridou, and Nando de Freitas. Multi-agent compositional communication learning from raw visual input. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rknt2Be0-.
+Michael Cogswell, Jiasen Lu, Stefan Lee, Devi Parikh, and Dhruv Batra. Emergence of compositional language with deep generational transmission. arXiv preprint arXiv:1904.09067, 2019.
+Harm de Vries, Kurt Shuster, Dhruv Batra, Devi Parikh, Jason Weston, and Douwe Kiela. Talk the walk: Navigating new york city through grounded dialogue. arXiv preprint arXiv:1807.03367, 2018.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
+Katrina Evtimova, Andrew Drozdov, Douwe Kiela, and Kyunghyun Cho. Emergent communication in a multi-modal, multi-step referential game. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJGZq6g0-.
+Joseph Farrell and Matthew Rabin. Cheap talk. Journal of Economic Perspectives, 10(3): 103-118, September 1996. doi: 10.1257/jep.10.3.103. URL http://www.aeaweb.org/articles?id=10.1257/jep.10.3.103.
+
+Jakob Foerster, Ioannis Alexandros Assael, Nando de Freitas, and Shimon Whiteson. Learning to Communicate with Deep Multi-Agent Reinforcement Learning. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 2137-2145. Curran Associates, Inc., 2016.
+Laura Harding Graesser, Kyunghyun Cho, and Douwe Kiela. Emergent linguistic phenomena in multi-agent communication games. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 3691-3701, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1384. URL https://www.aclweb.org/anthology/D19-1384.
+Serhii Havrylov and Ivan Titov. Emergence of Language with Multi-agent Games: Learning to Communicate with Sequences of Symbols. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 2149-2159. Curran Associates, Inc., 2017.
+Eric Jang, Shixiang Gu, and Ben Poole. Categorical Reparameterization with Gumbel-Softmax. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=rkE3y85ee.
+Natasha Jaques, Angeliki Lazaridou, Edward Hughes, Caglar Gulcehre, Pedro Ortega, Dj Strouse, Joel Z. Leibo, and Nando De Freitas. Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning. In International Conference on Machine Learning, pp. 3040-3049, May 2019. URL http://proceedings.mlr.press/v97/jaques19a.html.
+Simon Kirby. Iterated learning and the evolution of language. Current Opinion in Neurobiology, pp. 7, 2014.
+Satwik Kottur, José Moura, Stefan Lee, and Dhruv Batra. Natural language does not emerge 'naturally' in multi-agent dialog. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2962-2967, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1321. URL https://www.aclweb.org/anthology/D17-1321.
+Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. Multi-Agent Cooperation and the Emergence of (Natural) Language. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Hk8N3Sclg.
+Angeliki Lazaridou, Karl Moritz Hermann, Karl Tuyls, and Stephen Clark. Emergence of linguistic communication from referential games with symbolic and pixel input. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=HJGv1Z-AW.
+Angeliki Lazaridou, Anna Potapenko, and Olivier Tieleman. Multi-agent communication meets natural language: Synergies between functional and structural language learning. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7663-7674, Online, July 2020. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2020.acl-main.685.
+Jason Lee, Kyunghyun Cho, Jason Weston, and Douwe Kiela. Emergent translation in multiagent communication. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=H1vEXaxA-.
+Jason Lee, Kyunghyun Cho, and Douwe Kiela. Countering language drift via visual grounding. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 4376-4386, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1447. URL https://www.aclweb.org/anthology/D19-1447.
+Adam Lerer and Alexander Peysakhovich. Learning existing social conventions via observationally augmented self-play. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 107-114. ACM, 2019.
+
+David Lewis. Convention: A philosophical study. Harvard University Press, 1969.
+Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh, and Dhruv Batra. Deal or no deal? end-to-end learning of negotiation dialogues. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2443-2453, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1259. URL https://www.aclweb.org/anthology/D17-1259.
+Fushan Li and Michael Bowling. Ease-of-Teaching and Language Structure from Emergent Communication. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 15825-15835. Curran Associates, Inc., 2019. URL http://papers.nips.cc/paper/9714-ease-of-teaching-and-language-structure-from-emergent-communication.pdf.
+Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In International Conference on Machine Learning, volume 157, pp. 157-163, 1994.
+Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, and Igor Mordatch. Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 6379-6390. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7217-multi-agent-actor-critic-for-mixed-cooperative-competitive-environments.pdf.
+Ryan Lowe, Jakob Foerster, Y-Lan Boureau, Joelle Pineau, and Yann Dauphin. On the pitfalls of measuring emergent communication. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '19, pp. 693-701. International Foundation for Autonomous Agents and Multiagent Systems, 2019. ISBN 978-1-4503-6309-9. URL http://www.ifaamas.org/Proceedings/aamas2019/pdfs/p693.pdf.
+Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=S1jE5L5gl.
+Igor Mordatch and Pieter Abbeel. Emergence of grounded compositional language in multi-agent populations. In AAAI Conference on Artificial Intelligence, 2018. URL https://aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17007.
+Mathijs Mul, Diane Bouchacourt, and Elia Bruni. Mastering emergent language: learning to guide in simulated navigation. arXiv:1908.05135 [cs], August 2019. arXiv: 1908.05135.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. URL https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf.
+Limor Raviv and Inbal Arnon. Systematicity, but not compositionality: Examining the emergence of linguistic structure in children and adults using iterated learning. Cognition, 181:160-173, December 2018. ISSN 0010-0277.
+Cinjon Resnick*, Abhinav Gupta*, Jakob N. Foerster, Andrew M. Dai, and Kyunghyun Cho. Capacity, bandwidth, and compositionality in emergent language learning. arXiv, abs/1910.11424, 2019.
+Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. In International Conference on Learning Representations, 2016. URL https://arxiv.org/pdf/1511.06295.pdf.
+Amanpreet Singh, Tushar Jain, and Sainbayar Sukhbaatar. Individualized controlled continuous communication model for multiagent cooperative and competitive tasks. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rye7knCqK7.
+
+Kenny Smith, Henry Brighton, and Simon Kirby. Complex Systems In Language Evolution: The Cultural Emergence Of Compositional Structure. Advances in Complex Systems (ACS), 6(04):537-558, 2003. doi: 10.1142/S0219525903001055. URL https://ideas.repec.org/a/wsi/acsxxx/v06y2003i04ns0219525903001055.html.
+Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. Learning multiagent communication with backpropagation. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 2244-2252. Curran Associates, Inc., 2016. URL http://papers.nips.cc/paper/6398-learning-multiagent-communication-with-backpropagation.pdf.
+Olivier Tieleman, Angeliki Lazaridou, Shibl Mourad, Charles Blundell, and Doina Precup. Shaping representations through communication. 2018. URL https://openreview.net/pdf?id=HkzL4hR9Ym.
+
+# A HYPERPARAMETERS
+
+We provide hyperparameter details in Table 1.
+
+| Hyperparameter | Values |
+| --- | --- |
+| Learning rate | 1e-2, 1e-3, 2e-3, 6e-3, 1e-4, 5e-4, 6e-4 |
+| Model architecture | Linear, Bilinear, Non-Linear |
+| Number of encoders (perfect emcomm) | 1, 2, 5, 10, 20, 50, 100, 200, 500, 1000 |
+| Hidden layer size (Linear) | 200, 500, 1000 |
+| Number of encoders (Pop-S2P) | 20, 40, 50, 60, 80, 100 |
+| Number of distractors | 1, 4, 9 |
+| GRU hidden size | 256 |
+| Word embedding size | 512 |
+| Image embedding size (from pretrained Resnet50) | 2048 |
+| Batch size | 1, 512, 1000 |
+| Random seeds | 0, 1, 2, 3, 4 |
+| Optimizer | Adam, SGD |
+| Dropout | 0, 0.3 |
+| Gumbel relaxation temperature | 1 |
+| Vocabulary size | 100, 200, 500, 1000, 5000 |
+| Max sentence length | 12, 15, 20, 30, 50 |
+| m in sched | 0, 1, 30, 40, 50, 70 |
+| l in sched | 0, 30, 40, 50 |
+| q in rand | 0.75 |
+| r in sched_rand_frz | 0.5 |
+| Number of initial supervised steps (pretraining) | 0, 1000, 2000, 3000, 5000 |
+
+Table 1: Hyperparameters considered in S2P training.
+
+# B CALCULATION OF OPTIMAL SAMPLE COMPLEXITY IN OR GAME
+
+Here we provide a quick calculation of how few examples a human might need to learn a new compositional language $L$ in the OR game, which we use as a baseline in Figure 4a. We assume an OR game with $p = 6$ properties, $t = 10$ types, $T = 6$ words sent per message (concatenated together), and vocabulary size $|V| = 60$. If this language $L$ is compositional, then each word in the vocabulary is assigned to 1 type, so we need to learn 60 total assignments. In this analysis we assume we can construct (i.e., hand-design) the samples seen by the human, and thus the final number should be considered something like a lower bound.
+
+Since $T = 6$ , we get information about 6 word←type assignments for every sample. However, this information is entangled as we don't know which word corresponded to which type. Thus, we (1) divide the problem up by first constructing 9 (word sequence, object) sample pairs where none of the object types overlap between each sample. With this information, we are able to narrow down the word←type assignments into 10 groups of 6 (that is, in each group we have 6 words corresponding to 6 types, but we don't know which type belongs to which word). Note we don't need 10 samples as the last one can be inferred by exclusion. (2) We then construct 5 more samples where each type belongs to a separate group. We can do this because $t > p$ . Because each type belongs to a separate group, cross-referencing the words observed from samples in (1) and (2) uniquely defines each word←type assignment. Note again we don't need 6 samples as the last one can be inferred by exclusion. This gives us a total of $9 + 5 = 14$ samples.
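The two-stage count above can be written as a tiny helper (a hypothetical function for illustration; the stage sizes follow the exclusion argument in the text):

```python
# Sketch of the counting argument above: stage (1) uses t - 1 samples to
# split the vocabulary into per-type groups (the last group is inferred by
# exclusion), and stage (2) uses p - 1 cross-referencing samples.
def or_game_sample_lower_bound(num_types: int, num_properties: int) -> int:
    stage1 = num_types - 1        # 9 samples for t = 10
    stage2 = num_properties - 1   # 5 samples for p = 6
    return stage1 + stage2

print(or_game_sample_lower_bound(10, 6))  # 14, matching the 9 + 5 count
```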
+
+# C ADDITIONAL PLOTS
+
+We show training curves for various S2P schedules.
+
+
+
+
+
+Figure 8: Training curves for various S2P methods in the IBR game described in $\S 4$ .
+
+Legend: sp2sup, sup2sp, rand, sched, sched_frz, sched_rand_frz.
+
+
\ No newline at end of file
diff --git a/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/images.zip b/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1d3f83fb4963d6306c516dc8f9fbf61b678fc0a9
--- /dev/null
+++ b/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b22522710439ca7a093720e6504e055de06825a4d62be50e736bdb2afe0353c
+size 460771
diff --git a/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/layout.json b/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..79a668a10f2dd7a4ae8204db1e4cd6faab315d74
--- /dev/null
+++ b/ontheinteractionbetweensupervisionandselfplayinemergentcommunication/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:617a7f18af3e6eef888c9fc515393eba368d7b2a4bbbbaf89e2b3c990870de5a
+size 414069
diff --git a/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/58912334-d640-40cb-adcc-b945fac97af5_content_list.json b/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/58912334-d640-40cb-adcc-b945fac97af5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d0ed6d83155172271be14a702607bd5bbac50d4f
--- /dev/null
+++ b/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/58912334-d640-40cb-adcc-b945fac97af5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7888af95976ca8e1b55f757bfa51c46e226a5cfe920a9bb0a994f4eda17a8c44
+size 181701
diff --git a/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/58912334-d640-40cb-adcc-b945fac97af5_model.json b/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/58912334-d640-40cb-adcc-b945fac97af5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7c8c55f1063c06bcc9a7fa17c6699249a83cedfb
--- /dev/null
+++ b/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/58912334-d640-40cb-adcc-b945fac97af5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d2aa6ec3097fa83f1f01037484e28819eb2d86d914920f7772682b3097a95e80
+size 222385
diff --git a/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/58912334-d640-40cb-adcc-b945fac97af5_origin.pdf b/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/58912334-d640-40cb-adcc-b945fac97af5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..77a75fba61c07b3d911e6d600323dde4b2ae7317
--- /dev/null
+++ b/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/58912334-d640-40cb-adcc-b945fac97af5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d470ca87aed3b3589bf505c7d1a75888eb09c820fe715fdf0e799cca058f9963
+size 1304575
diff --git a/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/full.md b/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..33eb0e78b65928568c3afe332dad41014d1ad2f8
--- /dev/null
+++ b/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/full.md
@@ -0,0 +1,912 @@
+# ON THE NEED FOR TOPOLOGY-AWARE GENERATIVE MODELS FOR MANIFOLD-BASED DEFENSES
+
+Uyeong Jang
+
+Department of Computer Sciences
+
+University of Wisconsin-Madison
+
+Madison, WI, USA
+
+wjang@cs.wisc.edu
+
+Susmit Jha
+
+Computer Science Laboratory
+
+SRI International
+
+Menlo Park, CA, USA
+
+susmit.jha@sri.com
+
+Somesh Jha
+
+Department of Computer Sciences
+
+University of Wisconsin-Madison
+
+Madison, WI, USA
+
+XaiPient
+
+Princeton, NJ, USA
+
+jha@cs.wisc.edu
+
+# ABSTRACT
+
+Machine-learning (ML) algorithms or models, especially deep neural networks (DNNs), have shown significant promise in several areas. However, researchers have recently demonstrated that ML algorithms, especially DNNs, are vulnerable to adversarial examples (slightly perturbed samples that cause misclassification). The existence of adversarial examples has hindered the deployment of ML algorithms in safety-critical sectors, such as security. Several defenses for adversarial examples exist in the literature. One important class of defenses is manifold-based defenses, where a sample is "pulled back" into the data manifold before classification. These defenses rely on the assumption that data lie in a manifold of a lower dimension than the input space. These defenses use a generative model to approximate the input distribution. In this paper, we investigate the following question: do the generative models used in manifold-based defenses need to be topology-aware? We suggest the answer is yes, and we provide theoretical and empirical evidence to support our claim.
+
+# 1 INTRODUCTION
+
+Machine-learning (ML) algorithms, especially deep-neural networks (DNNs), have had resounding success in several domains. However, adversarial examples have hindered their deployment in safety-critical domains, such as autonomous driving and malware detection. Adversarial examples are constructed by an adversary adding a small perturbation to a data-point so that it is misclassified. Several algorithms for constructing adversarial examples exist in the literature (Biggio et al., 2013; Szegedy et al., 2013; Goodfellow et al., 2014b; Kurakin et al., 2016a; Carlini & Wagner, 2017; Madry et al., 2017; Papernot et al., 2017). Numerous defenses for adversarial examples have also been explored (Kurakin et al., 2016b; Guo et al., 2017; Sinha et al., 2017; Song et al., 2017; Tramèr et al., 2017; Xie et al., 2017; Dhillon et al., 2018; Raghunathan et al., 2018; Cohen et al., 2019; Dubey et al., 2019).
+
+In this paper, we focus on "manifold-based" defenses (Ilyas et al., 2017; Samangouei et al., 2018). The general idea in these defenses is to "pull back" the data point into the data manifold before classification. These defenses leverage the fact that, in several domains, natural data lies in a low-dimensional manifold (henceforth referred to as the manifold assumption) (Zhu & Goldberg, 2009). The data distribution, and hence the actual manifold that natural data lies in, is usually unknown, so these defenses use a generative model to "approximate" the data distribution. Generative models attempt to learn to generate data according to the underlying data distribution. (The input to a generative model is usually random noise from a known distribution, such as Gaussian or uniform.) There are
+
+various types of generative models in the literature, such as variational autoencoder (VAE) (Kingma & Welling, 2013), generative adversarial network (GAN) (Goodfellow et al., 2014a) and reversible generative models, e.g., real-valued non-volume preserving transform (Real NVP) (Dinh et al., 2016).
+
+This paper addresses the following question:
+
+Do manifold-based defenses need to be aware of the topology of the underlying data manifold?
+
+In this paper, we suggest the answer to this question is yes. We demonstrate that if the generative model does not capture the topology of the underlying manifold, it can adversely affect these defenses. In these cases, the underlying generative model is being used as an approximation of the underlying manifold. We believe this opens a rich avenue for future work on using topology-aware generative models for defense to adversarial examples.
+
+Contributions and Roadmap. We begin with a brief description of related work in Section 2. Section 3 provides the requisite mathematical background. Our main theoretical results are provided in Section 4. Informally, our result says that if the generative model is not topology-aware, it can lead to a "topological mismatch" between the distribution induced by the generative model and the actual distribution. Section 5 describes our experimental verification of our theoretical results and investigates their ramifications for a manifold-based defense called Invert-and-Classify (INC) (Ilyas et al., 2017; Samangouei et al., 2018).
+
+# 2 RELATED WORK
+
+# 2.1 GENERATIVE MODELS
+
+As a method for sampling high-dimensional data, generative models find applications in various fields in applied math and engineering, e.g., image processing, reinforcement learning, etc. Methods for learning data-generating distribution with neural networks include well-known examples of Variational Autoencoders (VAEs) (Kingma & Welling, 2013) and variations of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014a; Radford et al., 2015; Zhao et al., 2016).
+
+These generative models learn how to map latent variables into generated samples. The VAE is a variational Bayesian approach, so it approximates a posterior distribution over latent vectors (given training samples) by a simpler variational distribution. Similar to other variational Bayesian methods, VAE tries to minimize the Kullback-Leibler divergence between the posterior distribution and the variational distribution by minimizing the reconstruction error of the autoencoder. GANs represent another approach to learning how to transform latent vectors into samples. Unlike other approaches, the GAN learns the target distribution by training two networks – generator and discriminator – simultaneously.
+
+In addition to generating plausible samples, some generative models construct bijective relations between latent vectors and generated samples, so that the probability density of a generated sample can be estimated. Due to their bijective nature, such generative models are said to be reversible. Some examples are normalizing flow (Rezende & Mohamed, 2015), Masked Autoregressive Flow (MAF) (Papamakarios et al., 2017), Real NVP (Dinh et al., 2016), and Glow (Kingma & Dhariwal, 2018).
+
+# 2.2 APPLICATIONS OF GENERATIVE MODELS IN ADVERSARIAL MACHINE LEARNING
+
+DNN-based classifiers have been shown to be vulnerable to adversarial attacks (Szegedy et al., 2013; Goodfellow et al., 2014b; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016; Madry et al., 2017). Several hypotheses attempt to explain this vulnerability (Szegedy et al., 2013; Goodfellow et al., 2014b; Tanay & Griffin, 2016; Feinman et al., 2017), and one explanation is that adversarial examples lie far away from the data manifold. This idea leads to defenses that make use of the geometry learned from the dataset by projecting the input to the nearest point on the data manifold.
+
+To learn a manifold from a given dataset, generative models can be exploited. The main idea is to approximate the data-generating distribution with a generative model, so that searching over the data manifold can be replaced by searching over the space of latent vectors. The term Invert-and-Classify (INC) was coined to describe this type of defense (Ilyas et al., 2017), and different types of generative models have been tried for detecting adversarial examples (Ilyas et al., 2017; Song et al., 2017; Samangouei et al., 2018). Usually, the projection is done by searching for the latent vector that minimizes the geometric
+
+distance (Ilyas et al., 2017; Samangouei et al., 2018). However, despite the promising theoretical background, all of those methods are still vulnerable (Athalye et al., 2018; Ilyas et al., 2017).
+
+# 3 BACKGROUND
+
+We formally describe data generation, based on the well-known manifold assumption: data lies close to a manifold whose intrinsic dimension is much lower than that of the ambient space. In our model of data generation, we provide a formal definition of the data-generating manifold $M$ on which the data-generating distribution lies, such that $M$ conforms to the manifold assumption.
+
+# 3.1 REQUIREMENTS
+
+Real-world data tends to be noisy, so it does not lie exactly on an underlying manifold. We first focus on an ideal case where data is generated solely from the manifold $M$ without noise.
+
+In the setting of classification with $l$ labels, we consider manifolds $M_1, \ldots, M_l \subset \mathbb{R}^n$ that correspond to the generation of data in each class $i \in \{1, \ldots, l\}$, respectively. We assume those manifolds are pair-wise disjoint, i.e., $M_i \cap M_j = \emptyset$ for any $i \neq j$. We set the data-generating manifold $M$ as the disjoint union of those manifolds, $M = \bigcup_{i=1}^{l} M_i$. We assume $M$ to be a compact Riemannian manifold with a volume measure $dM$ induced by its Riemannian metric. When a density function $p_M$ defined on $M$ satisfies some requirements, it is possible to compute probabilities over $M$ via $\int_{\mathbf{x} \in M} p_M(\mathbf{x}) dM(\mathbf{x})$. We call such an $M$, equipped with $p_M$ and $dM$, a data-generating manifold. We refer to Appendix A and Appendix D.1 for details about definitions and requirements on $p_M$.
+
+In practice, data generation is affected by noise, so not all data lie on the data-generating manifold. Therefore, we incorporate the noise as an artifact of data generation and extend the density $p_{M}$ on $M$ to a density $p$ on the entire $\mathbb{R}^n$ by assigning local noise densities on $M$. We consider a procedure that (1) samples a point $\mathbf{x}_o$ from $M$, and then (2) adds a noise vector $\mathbf{n}$ to obtain an observed point $\hat{\mathbf{x}} = \mathbf{x}_o + \mathbf{n}$. Here, the noise $\mathbf{n}$ is a random vector sampled from a probability distribution centered at $\mathbf{x}_o$ with noise density $\nu_{\mathbf{x}_o}$, satisfying $\nu_{\mathbf{x}}(\mathbf{n}) = \nu_{\mathbf{x}}(\hat{\mathbf{x}} - \mathbf{x}) = p(\hat{\mathbf{x}} \mid \mathbf{x}_o = \mathbf{x})$.
+
+# 3.2 EXTENDING DENSITY
+
+When $M$ is equipped with a density function $p_M$ and a measure $dM$ that we can integrate over $M$ , we can compute the density after random noise is added as follows.
+
+$$
+p(\hat{\mathbf{x}}) = \int_{\mathbf{x} \in M} \nu_{\mathbf{x}}(\hat{\mathbf{x}} - \mathbf{x})\, p_M(\mathbf{x})\, dM(\mathbf{x}) \tag{1}
+$$
+
+Since $\nu_{\mathbf{x}}(\hat{\mathbf{x}} - \mathbf{x})$ is a function of $\mathbf{x}$ when $\hat{\mathbf{x}}$ is fixed, computing this integral can be viewed as computing the expectation of a real-valued function defined on $M$. Computing such expectations has been explored in Pennec (1999). A demonstrative example is provided in Appendix B, and this extension is further discussed in Appendix D.2.
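Equation (1) can be approximated by Monte Carlo sampling over $M$. The sketch below uses an assumed toy setting (the unit circle with uniform $p_M$ and isotropic Gaussian noise), not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
SIGMA = 0.1  # assumed isotropic Gaussian noise scale

def gaussian_density(n, sigma):
    # density of a 2-D isotropic Gaussian noise vector n
    return np.exp(-np.sum(n**2, axis=-1) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

def extended_density(x_hat, num_samples=100_000):
    # Monte Carlo estimate of Eq. (1): E_{x ~ p_M}[nu_x(x_hat - x)],
    # for M = unit circle with uniform p_M
    theta = rng.uniform(0, 2 * np.pi, num_samples)
    x = np.stack([np.cos(theta), np.sin(theta)], axis=-1)
    return gaussian_density(x_hat - x, SIGMA).mean()

# the extended density is high near the circle and low at its center
print(extended_density(np.array([1.0, 0.0])) > extended_density(np.array([0.0, 0.0])))
```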
+
+# 3.3 GENERATIVE MODELS
+
+A generative model tries to find a statistical model for the joint density $p(\mathbf{x},y)$ (Ng & Jordan, 2002). We mainly discuss a specific type that learns a transform from one distribution $\mathcal{D}_Z$ to another target distribution $\mathcal{D}_X$. Commonly, a latent vector $\mathbf{z} \sim \mathcal{D}_Z$ is sampled from a simpler distribution, e.g., Gaussian, and then a pre-trained deterministic function $G$ maps it to a sample $\mathbf{x} = G(\mathbf{z})$.
+
+Specifically, we focus on reversible generative models to facilitate the comparison between the density of generated samples and the target density. In this approach, the dimension of the latent vectors is set to be the same as that of the samples to be generated. Also, for a given $\mathbf{x}$, the density of $\mathbf{x}$ is estimated by the change-of-variables formula (equation (2) in Section 5.1).
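As a minimal illustration of the change-of-variables density estimate, consider a toy affine "reversible model" (an assumed example for illustration; actual models such as Real NVP compose many invertible layers):

```python
import numpy as np

# Toy reversible generative model: an affine map G(z) = A z + b.
# Change of variables: p_X(x) = p_Z(G^{-1}(x)) * |det J_{G^{-1}}(x)|,
# and for an affine map the Jacobian correction is 1 / |det A|.
A = np.array([[2.0, 0.0], [1.0, 0.5]])  # det A = 1.0
b = np.array([0.3, -0.1])
A_inv = np.linalg.inv(A)

def latent_density(z):
    # standard 2-D Gaussian latent density
    return np.exp(-0.5 * z @ z) / (2 * np.pi)

def model_density(x):
    z = A_inv @ (x - b)                      # invert the generator
    return latent_density(z) / abs(np.linalg.det(A))

print(model_density(b))  # = p_Z(0) / |det A| = 1 / (2 pi) ≈ 0.1592
```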
+
+# 3.4 INVERT AND CLASSIFY (INC) APPROACH FOR ROBUST CLASSIFICATION
+
+As the data-generating manifold $M$ consists of class-wise disjoint manifolds, there is a classifier $f$ on $\mathbb{R}^n$ separating them. If $f$ separates the manifolds of $M$, any misclassified point must lie outside $M$. Therefore, to change a correct classification near a manifold, an adversary must push the sample further out of the manifold. By projecting misclassified points back to the nearest manifold, we may expect the projection to correct the classification. The INC method (Ilyas et al., 2017; Samangouei et al., 2018) implements this idea using a generative model.
+
+The main idea of INC is to invert the perturbed sample by projecting to the nearest point on the data-generating manifold. Ideally, the data-generating manifold $M$ is accessible. For any point $(\hat{\mathbf{x}}, y)$ with $f(\hat{\mathbf{x}}) \neq y$ , out-of-manifold perturbation is reduced by projecting $\hat{\mathbf{x}}$ to $\mathbf{x}^*$ on $M$ . The manifold $M$
+
+is unknown in practice. However, as $M$ is the data-generating manifold of $\mathcal{D}_X$ , a generative model $G$ for $\mathcal{D}_X$ is trained to approximate $M$ . Then, searching over $M$ is replaced by searching over latent vectors of $G$ . More details about INC implementations are described in Section 5.1.
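A minimal sketch of the latent-space search behind INC, assuming a toy one-parameter generator `G` onto the unit circle (not the paper's trained model): the projection minimizes the reconstruction distance by gradient descent over the latent variable.

```python
import numpy as np

def G(z):
    # toy generator whose image is the unit circle (the "manifold")
    return np.array([np.cos(z[0]), np.sin(z[0])])

def project_inc(x_hat, steps=500, lr=0.1):
    # INC-style projection: argmin_z ||G(z) - x_hat||^2 via gradient
    # descent, using a central-difference numerical gradient
    loss = lambda z: np.sum((G(z) - x_hat) ** 2)
    z, eps = np.zeros(1), 1e-6
    for _ in range(steps):
        grad = (loss(z + eps) - loss(z - eps)) / (2 * eps)
        z = z - lr * grad
    return G(z)

x_adv = np.array([0.5, 1.2])       # perturbed, off-manifold point
x_proj = project_inc(x_adv)
print(np.linalg.norm(x_proj))      # ≈ 1.0: pulled back onto the manifold
```

In practice the generator is a trained network and the search runs in a high-dimensional latent space, but the objective has the same shape: minimize the distance between the generated sample and the input.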
+
+# 4 TOPOLOGICAL PROPERTIES OF DATA FROM GENERATIVE MODELS
+
+In this paper, we study the significance of differences between the topological properties of the latent distribution and those of the target distribution when learning generative models. Initial information about the topology of the target distribution is crucial to the performance of the generative model. Specifically, if the superlevel sets of the target distribution and of the latent distribution differ in their number of connected components, then no continuous generative model $G$ can approximate the target distribution properly (irrespective of the training method). Due to the space limit, all proofs are presented in Appendix C.
+
+# 4.1 TOPOLOGY OF DISTRIBUTIONS BASED ON SUPERLEVEL SETS
+
+The data-generating manifold is a geometric shape that corresponds to the distribution. However, this manifold is not accessible in most cases and we only have indirect access via the distribution extended from it. Therefore, we consider finding a shape from the extended density so that this "shape" successfully approximates the data-generating manifold.
+
+$\lambda$-density superlevel set. We use the concept of the $\lambda$-density superlevel set to capture geometric features of the density function. Simply put, for a density function $p$ and a threshold $\lambda > 0$, the $\lambda$-density superlevel set $L_{p,\lambda}$ is the inverse image $p^{-1}([\lambda, \infty))$. Our theoretical contribution is the existence, under suitable conditions on the noise density, of a $\lambda$-density superlevel set that reflects the topology of the data-generating manifold.
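On a grid, a superlevel set and its connected components can be estimated directly. The sketch below uses an assumed toy two-mode density and SciPy's component labeling (an assumed dependency, not part of the paper's setup):

```python
import numpy as np
from scipy import ndimage

# Toy two-mode density on a grid; count connected components of the
# lambda-superlevel set L_{p, lambda} at two thresholds.
xs = np.linspace(-4, 4, 200)
X, Y = np.meshgrid(xs, xs)

def mode(cx, cy, s=0.5):
    # unnormalized Gaussian bump centered at (cx, cy), peak value 0.5
    return 0.5 * np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2 * s * s))

p = mode(-2, 0) + mode(2, 0)  # two well-separated modes

counts = {}
for lam in [1e-4, 0.1]:
    superlevel = p >= lam               # grid estimate of L_{p, lambda}
    _, counts[lam] = ndimage.label(superlevel)

# at a very small threshold the two blobs merge through the low-density
# bridge between the modes; at a larger threshold they separate
print(counts)  # {0.0001: 1, 0.1: 2}
```

This is exactly the sensitivity to $\lambda$ that the following assumptions and Theorem 1 are designed to control.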
+
+Assumptions on noise density. For a family of densities $\{\nu_{\mathbf{x}}\}_{\mathbf{x}\in M}$ , we require the noise $\nu_{\mathbf{x}}$ to satisfy a number of assumptions. These assumptions facilitate theoretical discussion about the superlevel set reflecting the data-generating manifold. In the following definition, we denote a Euclidean ball of radius $\delta$ centered at $\mathbf{x}$ by $B_{\delta}(\mathbf{x})$ .
+
+Definition 1. Let $\nu_{\mathbf{x}}$ be a family of noise densities.
+
+- $\lambda$ is small-enough if $L_{\nu_{\mathbf{x}},\lambda}$ is nonempty for all $\mathbf{x} \in M$.
+- The $\lambda$-bounding radius $\delta_{\mathbf{x},\lambda} \coloneqq \min \{\delta \mid L_{\nu_{\mathbf{x}},\lambda} \subseteq \overline{B_{\delta}(\mathbf{0})}\}$ is the smallest radius such that $\overline{B_{\delta}(\mathbf{0})}$ contains $L_{\nu_{\mathbf{x}},\lambda}$. When $\max_{\mathbf{x} \in M} \delta_{\mathbf{x},\lambda}$ exists for some $\lambda$, we denote the maximum value by $\delta_{\lambda}$.
+- The $\lambda$-guaranteeing radius $\epsilon_{\mathbf{x},\lambda} \coloneqq \max \{\epsilon \mid \overline{B_{\epsilon}(\mathbf{0})} \subseteq L_{\nu_{\mathbf{x}},\lambda}\}$ is the largest radius such that $L_{\nu_{\mathbf{x}},\lambda}$ contains $\overline{B_{\epsilon}(\mathbf{0})}$. When $\min_{\mathbf{x} \in M} \epsilon_{\mathbf{x},\lambda}$ exists for some $\lambda$, we denote the minimum value by $\epsilon_{\lambda}$.
+
+
+Figure 1: Example superlevel set $L_{\mathbf{x},\lambda}$ with $\lambda$ -bounding radius $\delta_{\mathbf{x},\lambda}$ and $\lambda$ -guaranteeing radius $\epsilon_{\mathbf{x},\lambda}$ .
+
+Sufficient conditions for the existence of these radii are discussed in Appendix D.3. The properties of these radii are summarized in Lemma 1. (The proof follows from Definition 1).
+
+Lemma 1. Let $\nu_{\mathbf{x}}$ be a family of noise densities and let $\lambda$ be small-enough. Then,
+
+$$
+\left\| \hat {\mathbf {x}} - \mathbf {x} \right\| > \delta_ {\lambda} \Longrightarrow \nu_ {\mathbf {x}} (\hat {\mathbf {x}} - \mathbf {x}) < \lambda
+$$
+
+$$
+\| \hat {\mathbf {x}} - \mathbf {x} \| \leq \epsilon_ {\lambda} \Longrightarrow \nu_ {\mathbf {x}} (\hat {\mathbf {x}} - \mathbf {x}) \geq \lambda
+$$
+
+whenever $\delta_{\lambda}$ and $\epsilon_{\lambda}$ exist.
+
+Figure 1 shows an example of superlevel set $L_{\mathbf{x},\lambda}$ of noise $\nu_{\mathbf{x}}$ at a point $\mathbf{x}$ and its $\lambda$ -bounding radius $\delta_{\mathbf{x},\lambda}$ and $\lambda$ -guaranteeing radius $\epsilon_{\mathbf{x},\lambda}$ .
+
+Finally, we define the continuous variation of noise densities $\nu_{\mathbf{x}}$ over changes of $\mathbf{x} \in M$ . For the continuous variation, we require the continuity of both radii $\delta_{\mathbf{x},\lambda}$ and $\epsilon_{\mathbf{x},\lambda}$ as real-valued functions of $\mathbf{x} \in M$ for any fixed value of $\lambda$ .
+
+Definition 2 (Continuously varying radii). Noise densities $\nu_{\mathbf{x}}$ have continuously varying radii if, for a fixed small-enough $\lambda$ , both $\lambda$ -bounding radius $\delta_{\mathbf{x},\lambda}$ and $\lambda$ -guaranteeing radius $\epsilon_{\mathbf{x},\lambda}$ are continuous functions of $\mathbf{x} \in M$ .
+
+When noise densities have continuously varying radii, with the compactness of $M$ , we can apply the extreme value theorem to guarantee the existence of both $\delta_{\lambda} = \max_{\mathbf{x}\in M}\delta_{\mathbf{x},\lambda}$ and $\epsilon_{\lambda} = \min_{\mathbf{x}\in M}\epsilon_{\mathbf{x},\lambda}$ .
+
+# 4.2 MAIN THEOREM
+
+Our main theorem establishes, under the assumptions on noise densities from Section 4.1, the existence of a $\lambda$ such that,
+
+- (Inclusion) The $\lambda$ -density superlevel set $L_{p,\lambda}$ includes the data-generating manifold $M$ .
+- (Separation) The $\lambda$ -density superlevel set $L_{p,\lambda}$ consists of connected components such that each component contains at most one manifold $M_i$ .
+
+Definition 3. Consider a data-generating manifold $M$ with density function $p_M$ . For a radius $\epsilon > 0$ , we define $\omega_{\epsilon}$ to be the minimum (over $\mathbf{x} \in M$ ) probability of sampling $\mathbf{x}' \in M$ in an $\epsilon$ -ball $B_{\epsilon}(\mathbf{x})$ .
+
+$$
+\omega_ {\epsilon} := \min _ {\mathbf {x} \in M} \operatorname * {P r} _ {\mathbf {x} ^ {\prime} \sim p _ {M}} \left[ \mathbf {x} ^ {\prime} \in B _ {\epsilon} (\mathbf {x}) \right]
+$$
+
+Definition 4 (Class-wise distance). Let $(X,d)$ be a metric space and let $M = \bigcup_{i=1}^{l} M_i$ be a data-generating manifold in $X$ . The class-wise distance $d_{\mathrm{cw}}$ of $M$ is defined as,
+
+$$
+d_{\mathrm{cw}} = \min_{\substack{i,j\in [l]\\ i\neq j}}\min_{\substack{\mathbf{x}\in M_{i}\\ \mathbf{x}^{\prime}\in M_{j}}}d(\mathbf{x},\mathbf{x}^{\prime})
+$$
+
+With the definitions above, we proved the following main theorem.
+
+Theorem 1. Pick any small-enough threshold $\lambda$ . Fix a value $\lambda^{*} \leq \omega_{\epsilon}\lambda$ and let $\delta^{*} = \delta_{\lambda^{*}}$ be the $\lambda^{*}$ -bounding radius. If $d_{\mathrm{cw}}$ of $M$ is larger than $2\delta^{*}$ , then the superlevel set $L_{p,\lambda^{*}}$ satisfies the following properties.
+
+- $L_{p,\lambda^*}$ contains the data-generating manifold $M$.
+- Each connected component of $L_{p,\lambda^*}$ contains at most one manifold $M_i$ of class $i$.
+
+# 4.3 APPLICATION TO THE GENERATIVE MODEL
+
+We show an application of Theorem 1. We denote the target distribution by $\mathcal{D}_X$ , the latent distribution by $\mathcal{D}_Z$ , and the distribution of $G(\mathbf{z})$ where $\mathbf{z} \sim \mathcal{D}_Z$ by $\mathcal{D}_{G(Z)}$ . Similarly, we denote the corresponding $\lambda$ -density superlevel sets of densities by $L_{\lambda}^{X}$ , $L_{\lambda}^{Z}$ , and $L_{\lambda}^{G(Z)}$ . We assume the generative model $G$ to be continuous. Then, we get the following theorem regarding the difference between $L_{\lambda}^{X}$ and $L_{\lambda}^{G(Z)}$ in the number of connected components.
+
+Theorem 2. Let $\mathcal{D}_Z$ be a mixture of $n_Z$ multivariate Gaussian distributions, and let the data-generating manifold of $\mathcal{D}_X$ contain $n_X$ components. Let $G$ be a continuous generative model for $\mathcal{D}_X$ using latent vectors from $\mathcal{D}_Z$. Let $\lambda^*$ be the threshold value from Theorem 1. If $n_Z < n_X$, then $L_{\lambda^*}^X$ and $L_{\lambda^*}^{G(Z)}$ do not agree on the number of connected components.
+
+We can use this theorem to deduce the need for adequate information about the target distribution when training a generative model, especially if it is used for a security-critical application, e.g., INC.
+
+Corollary 1. If the hypotheses of Theorem 2 are satisfied, there is a point $\hat{\mathbf{x}}\in \mathbb{R}^n$ such that $\hat{\mathbf{x}}\notin L_{\lambda^{*}}^{X}$ but $\hat{\mathbf{x}}\in L_{\lambda^{*}}^{G(Z)}$.
+
+As a result, with density at least $\lambda^{*}$ , $G$ generates a point $\hat{\mathbf{x}}$ that is unlikely to be generated by the target distribution. Since INC is based on generations of $G$ , the INC method can output an out-of-manifold point as a solution of optimization (12).
+
+2 In Appendix D.4, Theorem 2 is generalized to cover more topological properties.
+
+| two-moons | spirals | circles |
+| --- | --- | --- |
+| $M_0\colon x_1 = \cos\theta,\; x_2 = \sin\theta$ | $M_0\colon x_1 = \frac{1}{3}e^{t}\cos(t),\; x_2 = \frac{1}{3}e^{t}\sin(t)$ | $M_0\colon x_1 = \cos\theta,\; x_2 = \sin\theta$ |
+| $M_1\colon x_1 = 1-\cos\theta,\; x_2 = \frac{1}{2}-\sin\theta$ | $M_1\colon x_1 = \frac{1}{3}e^{t}\cos(t+\frac{2}{3}\pi),\; x_2 = \frac{1}{3}e^{t}\sin(t+\frac{2}{3}\pi)$ | $M_1\colon x_1 = \frac{1}{2}\cos\theta,\; x_2 = \frac{1}{2}\sin\theta$ |
+| for $\theta \in [0,\pi]$ | $M_2\colon x_1 = \frac{1}{3}e^{t}\cos(t+\frac{4}{3}\pi),\; x_2 = \frac{1}{3}e^{t}\sin(t+\frac{4}{3}\pi)$ | for $\theta \in [0,2\pi]$ |
+| | for $t \in [0,T]$ where $T = \ln(15/\sqrt{2} + 1)$ | |
+
+Table 1: Parameterizations of dataset used in the experiments.
+
+# 5 EXPERIMENTAL RESULTS
+
+In this section, we empirically demonstrate the consequences of the two theorems and explore their implications for the INC defense. Our main goals are to provide (1) empirical support for the applicability of Theorem 2 and Corollary 1 via toy datasets, and (2) evidence that a class-aware generative model improves INC performance. The main questions and the corresponding answers are shown below.
+
+(Q1) Can we experimentally verify the results of Section 4.3? Specifically, can we find cases in which the superlevel sets of $\mathcal{D}_X$ and $\mathcal{D}_{G(Z)}$ have different numbers of connected components?
+(Q2) How does INC fail when the generative model is ignorant of topology information?
+(Q3) Does the class-aware generative model improve the INC performance?
+
+(A1) Theorem 2 and Corollary 1 can be verified by plotting the $\lambda$-density superlevel set. In particular, we visualize the $\lambda$-density superlevel set of $\mathcal{D}_{G(Z)}$, which reflects Theorem 2 and Corollary 1.
+(A2) When the generative model is not trained with topology information, naive INC may fail. We identified two possible causes of INC failure: (1) the choice of a bad initial point and (2) an out-of-manifold search due to non-separation of the density superlevel set.
+(A3) The performance of INC is improved by training generative models with topology information on the target distribution. On average, topology-aware training reduced the projection error of INC to $30\%$ of that of the class-ignorant counterpart.
+
+In the rest of this section, we provide a more detailed description of our experiments. First, we briefly describe the experimental setup in Section 5.1: datasets, latent vector distributions, training method, and INC implementation. Then, Sections 5.2-5.4 describe the experimental results regarding the findings summarized above. Section 5.5 contains an additional experiment illustrating the changes of decision boundaries by INC application.
+
+# 5.1 EXPERIMENTAL SETUP
+
+Datasets. For all experiments, we use three toy datasets in $\mathbb{R}^2$: two-moons, spirals, and circles. Table 1 summarizes the parameterizations of each data-generating manifold and Figure 2 shows the plots of the corresponding data-generating manifolds. To construct the training set, we first sample 1000 points uniformly from each manifold $M_{i}$; each point is then perturbed by isotropic Gaussian noise $\mathcal{N}(0,\sigma^2 I_2)$ with $\sigma = 0.05$. Before training, each training set is standardized using the preprocessing module of the Scikit-learn package.
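For concreteness, the training-set construction can be sketched as follows for the circles dataset, with plain numpy standardization in place of Scikit-learn's scaler (function names are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 0.05  # noise scale used in the paper

def sample_circles(n_per_class=1000):
    # 'circles' dataset of Table 1: concentric circles of radii 1 and 1/2,
    # each point perturbed by isotropic Gaussian noise N(0, sigma^2 I_2).
    theta = rng.uniform(0.0, 2.0 * np.pi, size=(2, n_per_class))
    radii = np.array([1.0, 0.5])[:, None]
    pts = np.stack([radii * np.cos(theta), radii * np.sin(theta)], axis=-1)
    X = pts.reshape(-1, 2) + rng.normal(0.0, sigma, size=(2 * n_per_class, 2))
    y = np.repeat([0, 1], n_per_class)
    return X, y

X, y = sample_circles()
# Standardize each coordinate (what sklearn's StandardScaler would do).
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
```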
+
+Latent vector distributions. For the latent vector distributions $\mathcal{D}_Z$, we prepared three different mixtures of $n_Z$ Gaussian distributions with $n_Z \in \{1,2,3\}$. When $n_Z = 1$, we simply use $\mathcal{N}(\mathbf{0},I_2)$. When $n_Z = 2,3$, we arranged the $n_Z$ Gaussian distributions along a circle of radius $R = 2.5$, so that the $i$-th Gaussian has its mean at $\mu_i = \left(-R\sin \left(\frac{2\pi i}{n_Z}\right), R\cos \left(\frac{2\pi i}{n_Z}\right)\right)$, with $\sigma = 0.5$ for $n_Z = 2$ and $\sigma = 0.3$ for $n_Z = 3$. The uniform mixture of the arranged Gaussians is then used as $\mathcal{D}_Z$. In Figure 3 (top row), we visualize the connected components corresponding to the latent vector distributions.
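A minimal sketch of this latent-mixture construction (the sampling function is our own naming, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent_mixture(n_Z, n_samples, R=2.5):
    # Uniform mixture of n_Z Gaussians arranged on a circle of radius R,
    # with sigma = 0.5 for n_Z = 2 and sigma = 0.3 for n_Z = 3.
    if n_Z == 1:
        return rng.normal(0.0, 1.0, size=(n_samples, 2))
    sigma = 0.5 if n_Z == 2 else 0.3
    i = np.arange(n_Z)
    means = np.stack([-R * np.sin(2.0 * np.pi * i / n_Z),
                      R * np.cos(2.0 * np.pi * i / n_Z)], axis=1)
    comp = rng.integers(0, n_Z, size=n_samples)  # uniform component choice
    return means[comp] + rng.normal(0.0, sigma, size=(n_samples, 2))

Z = sample_latent_mixture(n_Z=3, n_samples=5000)
```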
+
+Training generative models. Our experiments mostly use the Tensorflow Probability (Dillon et al., 2017) library, which contains implementations of reversible generative models; in particular, we used its Real NVP coupling layer as a building block of our models. The default template provided by the Tensorflow Probability
+
+
+(a) two-moons
+
+
+(b) spirals
+
+
+(c) circles
+Figure 2: Data-generating manifolds used in the experiments
+
+library was used to construct each Real NVP coupling layer with two hidden layers of 128 units. Each model uses eight coupling layers; each coupling layer, except the last, is followed by a permutation exchanging the two dimensions of $\mathbb{R}^2$.
+
+We describe the details of the training procedure of the generative models used in the experiments. We prepared two different types of generative models: class-ignorant and class-aware.
+
+The class-ignorant type is the usual Real NVP model. This model uses the empirical estimation of negative log-likelihood over a training batch $\{\mathbf{x}_1,\dots ,\mathbf{x}_m\}$ as its training loss.
+
+$$
+\ell_ {\mathrm {c i}} = - \frac {1}{m} \sum_ {t = 1} ^ {m} \log \left(p _ {X} \left(\mathbf {x} _ {t}\right)\right)
+$$
+
+The density $p_X$ of $\mathcal{D}_X$ is estimated by applying the change of variables formula,
+
+$$
+p _ {X} (\mathbf {x}) = p _ {Z} (\mathbf {z}) \left| \det \left(\frac {\partial G (\mathbf {z})}{\partial \mathbf {z} ^ {T}}\right) \right| ^ {- 1} \tag {2}
+$$
+
+where $p_Z$ is the density of $\mathcal{D}_Z$ and $\frac{\partial G(\mathbf{z})}{\partial\mathbf{z}^T}$ is the Jacobian of $G$ as a function from $\mathbb{R}^n$ to itself.
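Formula (2) can be checked numerically on a toy invertible map. The sketch below uses an affine $G(\mathbf{z}) = A\mathbf{z} + \mathbf{b}$ as a stand-in for a Real NVP flow ($A$ and $\mathbf{b}$ are arbitrary choices of ours), comparing the change-of-variables density against the closed-form Gaussian density of $G(\mathbf{z})$:

```python
import numpy as np

# Invertible affine stand-in for a flow: G(z) = A z + b, z ~ N(0, I_2).
A = np.array([[2.0, 0.5], [0.0, 1.5]])
b = np.array([1.0, -1.0])

def p_Z(z):
    # Standard 2-D Gaussian density.
    return np.exp(-0.5 * z @ z) / (2.0 * np.pi)

def p_X_change_of_vars(x):
    # Formula (2): p_X(x) = p_Z(G^{-1}(x)) |det dG/dz|^{-1}.
    z = np.linalg.solve(A, x - b)
    return p_Z(z) / abs(np.linalg.det(A))

def p_X_closed_form(x):
    # G(z) is Gaussian with mean b and covariance A A^T.
    cov = A @ A.T
    d = x - b
    quad = d @ np.linalg.solve(cov, d)
    return np.exp(-0.5 * quad) / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))

x = np.array([0.3, 0.7])
```

The two computations agree up to floating-point error, which is exactly what (2) asserts for this map.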
+
+The class-aware type is the Real NVP model trained with information about the number of connected components, i.e. the number of class labels $l$ . Using the number of labels, the densities $p_{X}$ and $p_{Z}$ can be decomposed as follows.
+
+$$
+p _ {X} (\mathbf {x}) = \sum_ {i \in \{1, \dots , l \}} \Pr [ y = i ] p _ {X, i} (\mathbf {x})
+$$
+
+$$
+p _ {Z} (\mathbf {z}) = \sum_ {i \in \{1, \dots , l \}} \Pr [ y = i ] p _ {Z, i} (\mathbf {z}) \tag {3}
+$$
+
+where $p_{X,i}(\mathbf{x}) = p_X(\mathbf{x}|y = i)$ and each $p_{Z,i}$ is the $i$-th Gaussian component described above. Since $\operatorname{Pr}[y = i]$ is generally unknown, we use the uniform distribution $\operatorname{Pr}[y = i] = \frac{1}{l}$.
+
+The main idea is class-wise training, i.e., training each $p_{X,i}$ from each $p_{Z,i}$ . Applying the change of variables formula for each class $i$ ,
+
+$$
+p _ {X, i} (\mathbf {x}) = p _ {Z, i} (\mathbf {z}) \left| \det \left(\frac {\partial G (\mathbf {z})}{\partial \mathbf {z} ^ {T}}\right) \right| ^ {- 1} \tag {4}
+$$
+
+Combining equations (3) and (4), we recover the change of variables formula (2). We define the class-wise loss function $\ell_i$ for class-wise training as follows.
+
+$$
+\ell_ {i} = - \frac {1}{m _ {i}} \sum_ {t = 1} ^ {m} \mathbb {1} [ y _ {t} = i ] \log (p _ {X, i} (\mathbf {x} _ {t}))
+$$
+
+where $m_{i}$ is the number of training samples in class $i$ . Then, we train a generative model using the weighted sum of $\ell_{i}$ as the training loss function.
+
+$$
+\ell_ {\mathrm {c a}} = \sum_ {i \in \{1, \dots , l \}} \Pr [ y = i ] \ell_ {i}
+$$
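The relation between $\ell_{\mathrm{ci}}$ and $\ell_{\mathrm{ca}}$ can be illustrated on a toy example with analytically known per-class densities; the 1-D Gaussians below are our stand-ins for the flow densities $p_{X,i}$, not the paper's models:

```python
import numpy as np

def class_ignorant_loss(p_X, X):
    # l_ci: average negative log-likelihood under the mixture density.
    return -np.mean(np.log(p_X(X)))

def class_aware_loss(p_Xi, X, y, l):
    # l_ca = sum_i Pr[y = i] * l_i with uniform Pr[y = i] = 1/l.
    total = 0.0
    for i in range(l):
        mask = (y == i)
        l_i = -np.mean(np.log(p_Xi(X[mask], i)))
        total += l_i / l
    return total

def p_Xi(x, i):
    # Toy per-class densities: 1-D Gaussians with means 0 and 3.
    mu = (0.0, 3.0)[i]
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

def p_X(x):
    # Uniform mixture, Pr[y = i] = 1/2.
    return 0.5 * p_Xi(x, 0) + 0.5 * p_Xi(x, 1)

X = np.array([-0.1, 0.2, 2.9, 3.1])
y = np.array([0, 0, 1, 1])
l_ci = class_ignorant_loss(p_X, X)
l_ca = class_aware_loss(p_Xi, X, y, l=2)
```

On well-separated, balanced classes $\ell_{\mathrm{ca}}$ is roughly $\ell_{\mathrm{ci}} - \log 2$, since the correct-class density is about twice the mixture density at each sample.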
+
+Each model was trained for 30,000 iterations. For each iteration, a batch of 200 random samples was chosen from the two-moons and circles datasets, and a batch of 300 random samples was chosen from the spirals dataset. For the latent vector distribution, we chose the mixture of $l - 1$ Gaussians for the class-ignorant type and the mixture of $l$ Gaussians for the class-aware type.
+
+
+(a) Isotropic Gaussian
+
+
+(b) Mixture of 2 Gaussians
+
+
+(c) Mixture of 3 Gaussians
+
+
+(d) two-moons, class-ignorant
+
+
+(e) spirals, class-ignorant
+
+
+(f) circles, class-ignorant
+
+
+(g) two-moons, class-aware
+
+
+(h) spirals, class-aware
+
+
+(i) circles, class-aware
+Figure 3: $\lambda$ -density superlevel sets of $\mathcal{D}_Z$ and $\mathcal{D}_{G(Z)}$ with $\lambda = 0.01$ . Top row: $\mathcal{D}_Z$ for $n_Z = 1,2,3$ . Middle row: $\mathcal{D}_{G(Z)}$ , class-ignorant model. Bottom row: $\mathcal{D}_{G(Z)}$ , class-aware model.
+
+# 5.2 VISUAL VERIFICATION OF THEOREMS
+
+The goal of this section is to verify Theorem 2 and Corollary 1 by visualizing the superlevel sets reflecting the statements. Figure 3 shows the $\lambda$-density superlevel sets of $\mathcal{D}_{G(Z)}$ using the same threshold $\lambda = 0.01$. The middle row and the bottom row show the results from the class-ignorant version and those from the class-aware version, respectively. Each column corresponds to one dataset. All distributions are scaled according to the standardization preprocessing applied before training. In general, the superlevel set components are separated when the generative model is class-aware. On the contrary, the class-ignorant generative models introduce connections between the components, as anticipated by Corollary 1. Due to these connections, the class-ignorant generative models contain fewer connected components in their superlevel sets; this verifies Theorem 2 for our choice of $\lambda^{*} = 0.01$.
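Counting components of a superlevel set can be done mechanically: threshold a density on a grid and count 4-connected components. The sketch below is our own implementation, applied to a known Gaussian mixture rather than a trained $\mathcal{D}_{G(Z)}$, and it illustrates the counting procedure used in this comparison:

```python
import numpy as np
from collections import deque

def count_components(density, lam, lo=-4.0, hi=4.0, n=200):
    # Count 4-connected components of {x : density(x) >= lam} on an n x n grid.
    xs = np.linspace(lo, hi, n)
    gx, gy = np.meshgrid(xs, xs)
    mask = density(np.stack([gx, gy], axis=-1)) >= lam
    seen = np.zeros_like(mask)
    comps = 0
    for i in range(n):
        for j in range(n):
            if mask[i, j] and not seen[i, j]:
                comps += 1
                queue = deque([(i, j)])
                seen[i, j] = True
                while queue:  # breadth-first flood fill of one component
                    a, b2 = queue.popleft()
                    for u, v in ((a + 1, b2), (a - 1, b2), (a, b2 + 1), (a, b2 - 1)):
                        if 0 <= u < n and 0 <= v < n and mask[u, v] and not seen[u, v]:
                            seen[u, v] = True
                            queue.append((u, v))
    return comps

def gauss2(x, mu, sigma=0.3):
    # Isotropic 2-D Gaussian density with mean mu.
    d2 = ((x - mu) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / sigma ** 2) / (2.0 * np.pi * sigma ** 2)

def mixture_density(x):
    # Well-separated two-component mixture: two superlevel components.
    return 0.5 * gauss2(x, np.array([-2.0, 0.0])) + 0.5 * gauss2(x, np.array([2.0, 0.0]))
```

With $\lambda = 0.01$, the mixture yields two components while a single Gaussian yields one, mirroring the counting argument of Theorem 2.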
+
+# 5.3 INC FAILURE DUE TO THE LACK OF INFORMATION ON THE DISTRIBUTION TOPOLOGY
+
+We present how the non-separation of superlevel set components influences the performance of INC. We provide two possible explanations of why INC fails. First, a bad initialization causes a suboptimal solution on a manifold that is not the nearest to the input. Second, an artifact induced by the topological difference produces an out-of-manifold solution.
+
+Figure 4 presents three visualized examples of INC with a class-ignorant generative model for two-moons. In each plot, the black dot is the given point $\hat{\mathbf{x}}$, the cyan dot is the initial point obtained by choosing $\mathbf{z}$ randomly from the latent vector distribution $\mathcal{N}(\mathbf{0}, I_2)$, and the magenta dot is the final point output by INC. All intermediate points of the optimization are plotted with dots, changing colors gradually from cyan to magenta. The training set for two-moons used in the training procedure is plotted in gray.
+
+
+(a) INC with an ideal initialization
+
+
+Figure 4: Successful and failed cases of INC using the class-ignorant generative model of two-moons.
+
+
+(b) INC with a bad initialization
+(c) INC searching out of manifold
+
+| Two-moons | | Spirals | | Circles | |
+| --- | --- | --- | --- | --- | --- |
+| class-ignorant | class-aware | class-ignorant | class-aware | class-ignorant | class-aware |
+| 0.647 (0.666) | 0.148 (0.208) | 1.523 (1.338) | 0.443 (0.440) | 0.699 (0.491) | 0.180 (0.259) |
+
+Table 2: Comparison of the projection errors of INC based on the class-awareness of the model.
+
+Figure 4a shows the INC optimization with an ideal start. The initial point lies in the manifold closest to $\hat{\mathbf{x}}$. The INC optimization then searches along the manifold, converging to a point close to $\hat{\mathbf{x}}$. Figure 4b shows a case in which INC fails because of a bad initialization. The initial point was chosen on a manifold not containing the desired solution, so INC converged to a local optimum on the wrong manifold. Our class-aware INC performs manifold-wise initialization to circumvent this issue. Figure 4c shows that INC failed due to an out-of-manifold search. INC converged in the wrong manifold, and a nontrivial number of intermediate points were out of manifold, resulting in an out-of-manifold solution (see Figure 3d).
+
+# 5.4 INC IMPROVEMENT VIA CLASS-AWARE GENERATIVE MODEL
+
+We demonstrate that INC performance is improved by using class-aware generative models. To measure the performance of INC, 100 points are chosen uniformly from each manifold $M_{i}$. Each point $\mathbf{x}$ is then perturbed along $\mathbf{n}_{\mathbf{x}}$, the normal to the manifold at $\mathbf{x}$, generating 200 adversarial points $\hat{\mathbf{x}} = \mathbf{x} \pm r \mathbf{n}_{\mathbf{x}}$. For all datasets, $r = 0.2$ is used as the perturbation size. We expect both types of INC to map $\hat{\mathbf{x}}$ back to the original point $\mathbf{x}$, as $\mathbf{x}$ is the optimal solution to (11). We define the projection error of INC as $\| \mathrm{INC}(\hat{\mathbf{x}}) - \mathbf{x}\|_2$, and collect the statistics of projection errors over all $\hat{\mathbf{x}}$.
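This evaluation protocol can be sketched as follows, using the unit circle of the circles dataset and an exact nearest-point projection as an idealized stand-in for $\mathrm{INC}(\cdot)$ (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
r = 0.2  # perturbation size used in the paper

# 100 points on the unit circle; the unit normal at x is x itself.
theta = rng.uniform(0.0, 2.0 * np.pi, 100)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
X_hat = np.concatenate([X + r * X, X - r * X])  # x_hat = x +/- r n_x
X_ref = np.concatenate([X, X])

def ideal_projection(points):
    # Exact nearest-point projection onto the unit circle, a stand-in
    # for INC(.) when the search behaves ideally.
    return points / np.linalg.norm(points, axis=1, keepdims=True)

errors = np.linalg.norm(ideal_projection(X_hat) - X_ref, axis=1)
```

An ideal projection drives every error to zero; the nonzero averages in Table 2 measure how far each trained INC variant is from this ideal.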
+
+Table 2 shows the projection error statistics for the two types of INC. Each pair of columns shows the results on the indicated dataset: one column shows the error of the class-ignorant INC and the other shows that of the class-aware counterpart. Numbers in each cell are the average and the standard deviation (in parentheses) of the projection error. For every dataset, the class-aware INC achieves lower projection errors. Histograms of the projection errors are provided in Appendix E.
+
+# 5.5 ADDITIONAL EXPERIMENTS FOR THE INC PERFORMANCE
+
+Finally, we present experiments demonstrating the effect of the superlevel set discrepancy on the INC performance. First, we train support vector machines (SVMs) performing classification tasks for our target distributions. For training data, we randomly sampled 1000 training points from each data-generating manifold. The baseline SVMs were intentionally ill-trained by using the high kernel coefficient $\gamma = 100$. After training the SVMs, we formed other classifiers by applying INC to the ill-trained SVMs. In summary, for each dataset, we have four types of classifiers as follows.
+
+(1) Ill-trained SVM: Baseline classifier
+(2) Ideal INC: Classifier with INC using a direct access to the data-generating manifolds
+(3) Class-ignorant INC: Classifier with INC using a topology-ignorant generative model
+(4) Class-aware INC: Classifier with INC using a topology-aware generative model
+
+We want to emphasize that direct access to the data-generating manifold is not possible in general. However, applying INC using direct access gives us an INC purely based on the geometry, so it is an ideal form of INC that should be approximated. Also, since the class-ignorant INC is affected by a bad choice of an initial point, we reduced the effect of bad initialization by sampling more initial points and taking the best solution among the projection results. For this number of initial choices, we
+
+
+(a) Ill-trained SVM
+
+
+(b) Ideal INC
+
+
+(c) Class-ignorant INC
+
+
+(d) Class-aware INC
+Figure 5: Changes in the decision boundaries of ill-trained SVM after the INC applications.
+
+chose as many initial points as the number of manifolds, which was exactly the same as the number of initial points for the topology-aware INC model.
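This best-of-$k$ initialization can be sketched as follows, with an exact per-circle projection standing in for a single INC run (function names are ours, not from the paper):

```python
import numpy as np

def project_to_circle(x_hat, radius):
    # Result of one local search restricted to the circle of given radius
    # (a stand-in for a single INC run initialized on that manifold).
    return radius * x_hat / np.linalg.norm(x_hat)

def best_of_k_projection(x_hat, radii):
    # One initial point per manifold; keep the candidate closest to x_hat.
    candidates = [project_to_circle(x_hat, r) for r in radii]
    dists = [np.linalg.norm(c - x_hat) for c in candidates]
    return candidates[int(np.argmin(dists))]

x_hat = np.array([0.8, 0.0])          # nearer the unit circle than radius 1/2
best = best_of_k_projection(x_hat, [1.0, 0.5])
```

A single run started on the wrong manifold returns a strictly worse candidate, which is exactly the failure mode the multi-start scheme mitigates.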
+
+To demonstrate the improvement in the robustness of the model, we visualize the effect by depicting the decision boundary of each classifier. Specifically, we form a $300 \times 300$ grid on the domain $[-3, 3] \times [-3, 3]$ and compute the classification result at each grid point. The resulting decision boundaries are presented in Figure 5. Each row corresponds to one dataset: two-moons, spirals, and circles, respectively. The columns correspond to classifiers 1-4 described above, from the first column to the fourth. From Figure 5, it is visually evident that the class-aware INC models provide better approximations to the ideal INC model than the class-ignorant INC models.
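The grid evaluation can be sketched as follows; the classifier here is an illustrative nearest-circle rule of our own, not the trained SVM:

```python
import numpy as np

# 300 x 300 evaluation grid over [-3, 3]^2, as used for Figure 5.
xs = np.linspace(-3.0, 3.0, 300)
gx, gy = np.meshgrid(xs, xs)
grid = np.stack([gx, gy], axis=-1).reshape(-1, 2)

def classify(points):
    # Illustrative rule for the circles dataset: class 1 if a point is
    # nearer the radius-1/2 circle than the unit circle, else class 0.
    rho = np.linalg.norm(points, axis=1)
    return (np.abs(rho - 0.5) < np.abs(rho - 1.0)).astype(int)

labels = classify(grid).reshape(300, 300)
```

Plotting `labels` over the grid reproduces the kind of decision-boundary picture shown in Figure 5.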
+
+# 6 CONCLUSION
+
+We theoretically and experimentally discussed the necessity of topology awareness in the training of generative models, especially in security-critical applications. A continuous generative model is sensitive to the topological mismatch between the latent vector distribution and the target distribution. Such a mismatch leads to potential problems with manifold-based adversarial defenses utilizing generative models, such as INC. We described two cases in which INC failed: bad initialization and artifacts from the topological difference. We experimentally verified that topology-aware training effectively prevented these problems, thereby improving the effectiveness of generative models in manifold-based defense. After topology-aware training of the generative models, the INC projection errors were reduced to $30\%$ of those of the topology-ignorant INC.
+
+# 7 ACKNOWLEDGEMENT
+
+Dr. Susmit Jha and Uyeong Jang's internship at SRI International were supported in part by U.S. National Science Foundation (NSF) grants #1740079, #1750009, U.S. Army Research Laboratory Cooperative Research Agreement W911NF-17-2-0196, and DARPA Assured Autonomy under contract FA8750-19-C-0089. The views, opinions and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. This work is partially supported by Air Force Grant FA9550-18-1-0166, the National Science Foundation (NSF) Grants CCF-FMitF-1836978, SaTC-Frontiers-1804648 and CCF-1652140 and ARO grant number W911NF-17-1-0405.
+
+# REFERENCES
+
+Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
+Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 387-402. Springer, 2013.
+Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39-57. IEEE, 2017.
+Arin Chaudhuri, Deovrat Kakde, Carol Sadek, Laura Gonzalez, and Seunghyun Kong. The mean and median criteria for kernel bandwidth selection for support vector data description. In 2017 IEEE International Conference on Data Mining Workshops (ICDMW), pp. 842-849. IEEE, 2017.
+Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems, pp. 6571-6583, 2018.
+Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. Certified adversarial robustness via randomized smoothing. arXiv preprint arXiv:1902.02918, 2019.
+Guneet S Dhillon, Kamyar Azizzadenesheli, Zachary C Lipton, Jeremy Bernstein, Jean Kossaifi, Aran Khanna, and Anima Anandkumar. Stochastic activation pruning for robust adversarial defense. arXiv preprint arXiv:1803.01442, 2018.
+Joshua V Dillon, Ian Langmore, Dustin Tran, Eugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt Hoffman, and Rif A Saurous. Tensorflow distributions. arXiv preprint arXiv:1711.10604, 2017.
+Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016.
+Abhimanyu Dubey, Laurens van der Maaten, Zeki Yalniz, Yixuan Li, and Dhruv Mahajan. Defense against adversarial images using web-scale nearest-neighbor search. arXiv preprint arXiv:1903.01612, 2019.
+Reuben Feinman, Ryan R Curtin, Saurabh Shintre, and Andrew B Gardner. Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410, 2017.
+Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014a.
+Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014b.
+Will Grathwohl, Ricky TQ Chen, Jesse Betterncourt, Ilya Sutskever, and David Duvenaud. Ffjord: Free-form continuous dynamics for scalable reversible generative models. arXiv preprint arXiv:1810.01367, 2018.
+Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens Van Der Maaten. Countering adversarial images using input transformations. arXiv preprint arXiv:1711.00117, 2017.
+Andrew Ilyas, Ajil Jalal, Eirini Asteri, Constantinos Daskalakis, and Alexandros G Dimakis. The robust manifold defense: Adversarial training using generative models. arXiv preprint arXiv:1712.09196, 2017.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
+
+Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pp. 10215-10224, 2018.
+Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016a.
+Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016b.
+John M Lee. Introduction to smooth manifolds. Graduate Texts in Mathematics, 218, 2003.
+Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
+Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574-2582, 2016.
+James R Munkres. Topology. Prentice Hall, Upper Saddle River, 2000.
+Andrew Y Ng and Michael I Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes. In Advances in Neural Information Processing Systems, pp. 841-848, 2002.
+George Papamakarios, Theo Pavlakou, and Iain Murray. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems, pp. 2338-2347, 2017.
+Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372-387. IEEE, 2016.
+Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506-519. ACM, 2017.
+Xavier Pennec. Probabilities and statistics on riemannian manifolds: Basic tools for geometric measurements. In Nonlinear Signal and Image Processing, pp. 194-198. CiteSeer, 1999.
+Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
+Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. Certified defenses against adversarial examples. arXiv preprint arXiv:1801.09344, 2018.
+Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
+Pouya Samangouei, Maya Kabbab, and Rama Chellappa. Defense-gan: Protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605, 2018.
+Aman Sinha, Hongseok Namkoong, and John Duchi. Certifiable distributional robustness with principled adversarial training. arXiv preprint arXiv:1710.10571, 2, 2017.
+Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman. Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. arXiv preprint arXiv:1710.10766, 2017.
+Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
+Thomas Tanay and Lewis Griffin. A boundary tilting perspective on the phenomenon of adversarial examples. arXiv preprint arXiv:1608.07690, 2016.
+
+Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017.
+E Weinan. A proposal on machine learning via dynamical systems. Communications in Mathematics and Statistics, 5(1):1-11, 2017.
+Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan Yuille. Mitigating adversarial effects through randomization. arXiv preprint arXiv:1711.01991, 2017.
+Linfeng Zhang, Lei Wang, et al. Monge-Ampère flow for generative modeling. arXiv preprint arXiv:1809.10188, 2018.
+Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
+Xiaojin Zhu and Andrew B Goldberg. Introduction to semi-supervised learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 3(1):1-130, 2009.
+
+# A MATHEMATICAL BACKGROUND
+
+# A.1 GENERAL TOPOLOGY
+
+We introduce the definitions and theorems from general topology that appear in the paper. For more details, all the definitions and theorems can be found in Munkres (2000).
+
+Definitions in general topology. We first provide the precise definitions of the terms we brought from the general topology.
+
+Definition 5 (Topological space). A topology on a set $X$ is a collection $\mathcal{T}$ of subsets of $X$ having the following properties.
+
+1. $\varnothing$ and $X$ are in $\mathcal{T}$
+2. The union of the elements of any subcollection of $\mathcal{T}$ is in $\mathcal{T}$ .
+3. The intersection of the elements of any finite subcollection of $\mathcal{T}$ is in $\mathcal{T}$ .
+
+A set $X$ for which a topology $\mathcal{T}$ has been specified is called a topological space.
+
+For example, the collection of all open sets in $\mathbb{R}^n$ is a topology, thus $\mathbb{R}^n$ is a topological space. If a topology can be constructed by taking arbitrary unions and finite intersections of a smaller collection $\mathcal{B}$ of subsets of $X$, we call $\mathcal{B}$ a basis of the topology.
+
+Pick a metric $d$ on $\mathbb{R}^n$ and consider the collection $\mathcal{B}$ of all open balls in $\mathbb{R}^n$ with respect to $d$. The topology of $\mathbb{R}^n$ can be constructed by taking $\mathcal{B}$ as a basis. When this construction is possible, the metric $d$ is said to induce the topology.
+
+Definition 6 (Metrizable space). If $X$ is a topological space, $X$ is said to be metrizable if there exists a metric $d$ on the set $X$ that induces the topology of $X$ . A metric space is a metrizable space $X$ together with a specific metric $d$ that gives the topology of $X$ .
+
+Since $\mathbb{R}^n$ is equipped with Euclidean metric that induces its topology, $\mathbb{R}^n$ is metrizable.
+
+Continuity and the extreme value theorem. Let $X$ and $Y$ be topological spaces. In general topology, a function $f: X \to Y$ is said to be continuous if, for any subset $V$ open in $Y$, its inverse image $f^{-1}(V)$ is open in $X$. Moreover, if $f$ is a continuous bijection whose inverse is also continuous, $f$ is called a homeomorphism. The notion of homeomorphism is important as it always preserves topological properties, e.g., connectedness, compactness, etc., and this will be used in the further generalization of Theorem 2.
+
+Here, we only introduce the generalized statement of the extreme value theorem.
+
+Theorem 3 (Extreme value theorem). Let $f: X \to Y$ be continuous, where $Y$ is an ordered set. If $X$ is compact, then there exist points $\underline{\mathbf{x}}$ and $\overline{\mathbf{x}}$ in $X$ such that $f(\underline{\mathbf{x}}) \leq f(\mathbf{x}) \leq f(\overline{\mathbf{x}})$ for every $\mathbf{x} \in X$ .
+
+Specifically, if a manifold $M$ is a compact subset in $\mathbb{R}^n$ , we may use $X = M$ and $Y = \mathbb{R}$ .
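As a concrete numerical illustration (our own, not from the paper), the continuous function $f(\mathbf{x}) = x_1$ on the compact unit circle attains its extrema $\pm 1$ at $(\pm 1, 0)$, as the theorem guarantees:

```python
import numpy as np

# f(x) = x_1 restricted to the unit circle, evaluated on a dense
# parameter grid; the extreme value theorem guarantees min and max exist.
theta = np.linspace(0.0, 2.0 * np.pi, 100_001)
f_values = np.cos(theta)
```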
+
+Normal space and Urysohn's lemma. Urysohn's lemma was used to prove Corollary 1. We first introduce the notion of a normal space.
+
+Definition 7 (Normal space). Let $X$ be a topological space in which one-point sets are closed. Then, $X$ is normal if for each pair $A, B$ of disjoint closed sets of $X$, there exist disjoint open sets containing $A$ and $B$, respectively.
+
+Urysohn's lemma is another equivalent condition for a space to be normal.
+
+Theorem 4 (Urysohn's lemma). Let $X$ be a normal topological space; let $A$ and $B$ be disjoint closed subsets in $X$ . Let $[a, b]$ be a closed interval in the real line. Then there exists a continuous map
+
+$$
+f: X \longrightarrow [ a, b ]
+$$
+
+such that $f(\mathbf{x}) = a$ for every $\mathbf{x}$ in $A$ , and $f(\mathbf{x}) = b$ for every $\mathbf{x}$ in $B$ .
+
+To apply this lemma to $\mathbb{R}^n$ , we only need the following theorem.
+
+Theorem 5. Every metrizable space is normal.
+
+Since $\mathbb{R}^n$ is metrizable, it is a normal space by Theorem 5. Therefore, we can apply Urysohn's lemma to any pair of disjoint closed subsets of $\mathbb{R}^n$ to show the existence of a continuous map $f:\mathbb{R}^n\to [0,1]$.
+
+# A.2 DIFFERENTIAL GEOMETRY
+
+We provide the definitions from differential geometry (Lee, 2003) used in the paper.
+
+Manifold and tangent space. Formally, a topological manifold is defined as follows.
+
+Definition 8 (Manifold). Suppose $M$ is a topological space. We say $M$ is a topological manifold of dimension $k$ if it has the following properties.
+
+1. For any pair of distinct points $\mathbf{x}_1, \mathbf{x}_2 \in M$, there are disjoint open subsets $U_1, U_2 \subset M$ such that $\mathbf{x}_1 \in U_1$ and $\mathbf{x}_2 \in U_2$.
+2. There exists a countable basis for the topology of $M$ .
+3. Every point has a neighborhood $U$ that is homeomorphic to an open subset $\tilde{U}$ of $\mathbb{R}^k$ .
+
+There are different ways to define the tangent space of a $k$ -dimensional manifold $M$ . Informally, it can be understood as the geometric tangent space to $M \subset \mathbb{R}^n$ at a point $\mathbf{x} \in M$ : the collection of pairs $(\mathbf{x}, \mathbf{v})$ where $\mathbf{v}$ is a vector tangentially passing through $\mathbf{x}$ . Here we give a more formal definition of the tangent space. Consider the vector space $C^\infty(M)$ of smooth functions on $M$ .
+
+Definition 9 (Tangent space). Let $\mathbf{x}$ be a point of a smooth manifold $M$ . A linear map $X:C^{\infty}(M)\to \mathbb{R}$ is called a derivation at $\mathbf{x}$ if it satisfies
+
+$$
+X (f g) = f (\mathbf {x}) X g + g (\mathbf {x}) X f
+$$
+
+for all $f,g\in C^{\infty}(M)$ .
+
+The set of all derivations of $C^\infty(M)$ at $\mathbf{x}$ forms a vector space called the tangent space to $M$ at $\mathbf{x}$ , and is denoted by $T_{\mathbf{x}}(M)$ .
+
+Riemannian metric. As the tangent space $T_{\mathbf{x}}(M)$ is a vector space for each $\mathbf{x} \in M$ , we can consider an inner product $g_{\mathbf{x}}$ defined on $T_{\mathbf{x}}(M)$ .
+
+Definition 10 (Riemannian metric). A Riemannian metric $g$ on a smooth manifold $M$ is a smooth collection of inner products $g_{\mathbf{x}}$ defined on each $T_{\mathbf{x}}(M)$ . The condition for smoothness of $g$ is that, for any smooth vector fields $\mathcal{X}, \mathcal{Y}$ on $M$ , the mapping $\mathbf{x} \mapsto g_{\mathbf{x}}(\mathcal{X}|_{\mathbf{x}}, \mathcal{Y}|_{\mathbf{x}})$ is smooth.
+
+A manifold $M$ equipped with a Riemannian metric $g$ is called a Riemannian manifold.
+
+# B EXAMPLES
+
+Computing density $p_M$ over a Riemannian manifold $M$ . This section presents example computations of the probabilities from Section D.1 and Section 3.2. As a concrete example of computing a density over a manifold, we use the following simple manifold, the so-called two-moons in $\mathbb{R}^2$ .
+
+$$
+M _ {0} = \left\{ \left(x _ {1}, x _ {2}\right) \,\middle|\, x _ {1} = \cos \theta ,\; x _ {2} = \sin \theta \text{ for } \theta \in [ 0, \pi ] \right\}
+$$
+
+$$
+M _ {1} = \left\{ \left(x _ {1}, x _ {2}\right) \,\middle|\, x _ {1} = 1 - \cos \theta ,\; x _ {2} = 1 - \sin \theta + \tfrac {1}{2} \text{ for } \theta \in [ 0, \pi ] \right\}
+$$
+
+We take $M = M_0 \cup M_1$ as our example manifold. Figure 6a shows the manifold of two-moons dataset plotted in different colors: $M_0$ in red and $M_1$ in blue.
+
+First, recall the following equation (equation (8) from Section D.1).
+
+$$
+\int_ {\mathbf {x} \in M} p _ {M} (\mathbf {x}) d M (\mathbf {x}) = \int_ {\mathbf {u} \in D} p _ {M} (X (\mathbf {u})) \sqrt {\left| \det \left[ g _ {X (\mathbf {u})} \right] \right|} d \mathbf {u}
+$$
+
+where $[g_{X(\mathbf{u})}]$ is the $k\times k$ matrix representation of the inner product $g_{X(\mathbf{u})}$ at $X(\mathbf{u})\in M$ .
+
+In particular, when a manifold in $\mathbb{R}^n$ is of dimension 1, i.e., a parameterized curve $\gamma :[a,b]\to \mathbb{R}^n$ , the integral (8) can be written in a simpler way.
+
+$$
+\int_ {\mathbf {x} \in M} p _ {M} (\mathbf {x}) d M (\mathbf {x}) = \int_ {t = a} ^ {b} p _ {M} (\gamma (t)) \| \gamma^ {\prime} (t) \| d t \tag {5}
+$$
+
+where $\gamma'(t)$ is the $n$ -dimensional velocity vector at $t \in [a, b]$ .
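+
+To make equation (5) concrete, the following sketch (our own illustration, not from the paper) approximates the arc-length integral numerically with a midpoint rule and finite-difference velocities; integrating the uniform density $p = 1/\pi$ along the upper unit semicircle $\gamma_0(t) = (\cos t, \sin t)$ recovers total mass 1.
+
+```python
+import math
+
+def curve_integral(p, gamma, a, b, n=20000):
+    """Approximate eq. (5): the integral of p(gamma(t)) * ||gamma'(t)|| dt
+    over [a, b], using a midpoint rule with finite-difference velocities."""
+    h = (b - a) / n
+    total = 0.0
+    for i in range(n):
+        t = a + (i + 0.5) * h
+        # central finite difference for the velocity gamma'(t)
+        x0, y0 = gamma(t - 1e-6)
+        x1, y1 = gamma(t + 1e-6)
+        speed = math.hypot(x1 - x0, y1 - y0) / 2e-6
+        px, py = gamma(t)
+        total += p(px, py) * speed * h
+    return total
+
+# Upper unit semicircle gamma_0 with the uniform density p = 1/pi.
+gamma0 = lambda t: (math.cos(t), math.sin(t))
+mass = curve_integral(lambda x, y: 1 / math.pi, gamma0, 0.0, math.pi)
+print(round(mass, 4))
+```
+
+Because the parameterization has unit speed, the computation reduces to $\frac{1}{\pi}\int_0^\pi d\theta = 1$.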
+
+
+Figure 6: Density extension example from the two-moons manifold. (a) Plot of the two-moons manifold in $\mathbb{R}^2$ . (b) Extended density function over $\mathbb{R}^2$ from the two-moons dataset.
+
+Let $p_M$ be a probability density function defined on $M$ . As $M$ is composed of two disjoint manifolds $M_0$ and $M_1$ , we consider conditional densities $p_0, p_1$ as follows.
+
+$$
+p _ {0} (\mathbf {x}) = p _ {M} (\mathbf {x} \mid \mathbf {x} \in M _ {0}) = \frac {p _ {M} | _ {M _ {0}} (\mathbf {x})}{\Pr [ \mathbf {x} \in M _ {0} ]} \tag {6}
+$$
+
+$$
+p _ {1} (\mathbf {x}) = p _ {M} (\mathbf {x} \mid \mathbf {x} \in M _ {1}) = \frac {p _ {M} | _ {M _ {1}} (\mathbf {x})}{\Pr [ \mathbf {x} \in M _ {1} ]}
+$$
+
+Here, $p_{M}|_{M_{0}}$ and $p_{M}|_{M_{1}}$ represent the density function $p_{M}$ with its domain restricted to $M_{0}$ and $M_{1}$ , respectively. By our definition of data-generating manifolds, $\operatorname*{Pr}[\mathbf{x} \in M_{i}]$ corresponds to the probability of data generation for class $i$ , i.e. $\operatorname*{Pr}[y = i]$ . As a concrete example of such a density, the uniform density on each manifold $M_{i}$ can be defined as $p_{i}(\mathbf{x}) = \frac{1}{\pi}$ for all $\mathbf{x} \in M_{i}$ , since each arc has length $\pi$ .
+
+Note that each manifold is a parameterized curve in $\mathbb{R}^2$ :
+
+$$
+\gamma_ {0}: \theta \mapsto (\cos \theta , \sin \theta)
+$$
+
+$$
+\gamma_ {1}: \theta \mapsto (1 - \cos \theta , 1 - \sin \theta + 0. 5)
+$$
+
+with constant speed $\| \gamma_0'(\theta)\| = \| \gamma_1'(\theta)\| = 1$ at all $\theta \in [0,\pi ]$ . Therefore, from equation (5),
+
+$$
+\int_ {\mathbf {x} \in M _ {0}} p _ {M} | _ {M _ {0}} (\mathbf {x}) d M _ {0} (\mathbf {x}) = \int_ {\theta = 0} ^ {\pi} p _ {M} | _ {M _ {0}} \left(\gamma_ {0} (\theta)\right) d \theta \tag {7}
+$$
+
+$$
+\int_ {\mathbf {x} \in M _ {1}} p _ {M} | _ {M _ {1}} (\mathbf {x}) d M _ {1} (\mathbf {x}) = \int_ {\theta = 0} ^ {\pi} p _ {M} | _ {M _ {1}} (\gamma_ {1} (\theta)) d \theta
+$$
+
+For any measurable subset $A \subseteq M$ , the probability for an event that $\mathbf{x}$ is in $A$ can be computed as follows.
+
+$$
+\begin{array}{l} \Pr [ \mathbf {x} \in A ] = \int_ {\mathbf {x} \in A \subseteq M} p _ {M} (\mathbf {x}) d M (\mathbf {x}) \\ = \int_ {\mathbf {x} \in A \cap M _ {0}} p _ {M} | _ {M _ {0}} (\mathbf {x}) d M _ {0} (\mathbf {x}) + \int_ {\mathbf {x} \in A \cap M _ {1}} p _ {M} | _ {M _ {1}} (\mathbf {x}) d M _ {1} (\mathbf {x}) \\ = \int_ {\substack {\theta \in [ 0, \pi ] \\ \gamma_ {0} (\theta) \in A}} p _ {M} | _ {M _ {0}} (\gamma_ {0} (\theta)) d \theta + \int_ {\substack {\theta \in [ 0, \pi ] \\ \gamma_ {1} (\theta) \in A}} p _ {M} | _ {M _ {1}} (\gamma_ {1} (\theta)) d \theta \quad (\because (7)) \\ = \Pr [ \mathbf {x} \in M _ {0} ] \int_ {\substack {\theta \in [ 0, \pi ] \\ \gamma_ {0} (\theta) \in A}} p _ {0} (\gamma_ {0} (\theta)) d \theta \\ + \Pr \left[ \mathbf {x} \in M _ {1} \right] \int_ {\substack {\theta \in [ 0, \pi ] \\ \gamma_ {1} (\theta) \in A}} p _ {1} \left(\gamma_ {1} (\theta)\right) d \theta \quad (\because (6)) \\ = \frac{1}{\pi}\left(\operatorname *{Pr}[\mathbf{x}\in M_{0}]\int_{\substack{\theta \in [0,\pi ]\\ \gamma_{0}(\theta)\in A}}1d\theta +\operatorname *{Pr}[\mathbf{x}\in M_{1}]\int_{\substack{\theta \in [0,\pi ]\\ \gamma_{1}(\theta)\in A}}1d\theta\right) \\ \end{array}
+$$
+
+We can briefly check all the requirements (R1), (R2), and (R3). The computation of $\operatorname*{Pr}[\mathbf{x}\in A]$ is based on (R1), so (R1) is satisfied trivially. Also, $p_M$ is a function defined only on $M$ , so (R2) is clear, i.e. $\mathrm{supp}(p_M) = \{\mathbf{x}\in \mathbb{R}^n\mid p_M(\mathbf{x}) > 0\} \subseteq M$ . To check (R3), when $A = M_i$ , computing this integral yields exactly the probability $\operatorname*{Pr}[\mathbf{x}\in M_i] = \operatorname*{Pr}[y = i]$ , so when $A = M$ , the integral evaluates to $\operatorname*{Pr}[y = 0] + \operatorname*{Pr}[y = 1] = 1$ , as desired in the requirements.
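+
+The probability computation above can be checked numerically. The sketch below is our own illustration (equal class priors $\Pr[y=0]=\Pr[y=1]=\frac{1}{2}$ are an assumption of this sketch): it implements the last line of the display by measuring the set of $\theta \in [0,\pi]$ whose image lands in $A$ , scaling by $1/\pi$ , and weighting by the class prior.
+
+```python
+import math
+
+def prob_in_A(indicator, priors=(0.5, 0.5), n=20000):
+    """Pr[x in A] = (1/pi) * sum_i Pr[x in M_i] * |{theta : gamma_i(theta) in A}|,
+    approximated by a midpoint rule over theta in [0, pi]."""
+    gammas = [
+        lambda t: (math.cos(t), math.sin(t)),                # gamma_0
+        lambda t: (1 - math.cos(t), 1 - math.sin(t) + 0.5),  # gamma_1
+    ]
+    h = math.pi / n
+    total = 0.0
+    for prior, g in zip(priors, gammas):
+        # measure of {theta : gamma(theta) in A}
+        measure = sum(h for i in range(n) if indicator(*g((i + 0.5) * h)))
+        total += prior * measure / math.pi
+    return total
+
+# A = R^2 recovers total probability 1; the left half-plane contains
+# half of M_0 (theta in (pi/2, pi]) and none of M_1.
+print(round(prob_in_A(lambda x, y: True), 4))
+print(round(prob_in_A(lambda x, y: x < 0), 4))
+```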
+
+Extending the density to $\mathbb{R}^n$ . We extend the domain to $\mathbb{R}^n$ for the two-moons example. In Section 3, we defined the noise density function to satisfy the following requirement.
+
+(R0) The translated noise density function, $\nu_{\mathbf{x}}(\hat{\mathbf{x}} - \mathbf{x})$ , is the density of the noise $\mathbf{n} = \hat{\mathbf{x}} - \mathbf{x}$ being chosen for a given $\mathbf{x}$ . Given $\mathbf{x}_o = \mathbf{x}$ , since adding noise $\mathbf{n}$ is the only way to generate $\hat{\mathbf{x}}$ by perturbing $\mathbf{x}_o$ , $p(\hat{\mathbf{x}} \mid \mathbf{x}_o = \mathbf{x})$ is equal to $\nu_{\mathbf{x}}(\mathbf{n})$ .
+
+Under a proper noise density function, we show an example construction of the density extended from $M$ satisfying requirement (R0). For simplicity, we choose the isotropic Gaussian distribution $\mathcal{N}(0,\sigma^2 I)$ , with standard deviation $\sigma$ in each dimension, as the noise density function $\nu_{\mathbf{x}}$ for all $\mathbf{x}\in M$ . Such a noise density $\nu_{\mathbf{x}}$ defined on $\mathbb{R}^n$ can be written as follows.
+
+$$
+\nu_ {\mathbf {x}} (\mathbf {n} _ {\mathbf {x}}) = \frac {1}{\left(2 \pi \sigma^ {2}\right) ^ {n / 2}} \exp \left(- \frac {\| \mathbf {n} _ {\mathbf {x}} \| _ {2} ^ {2}}{2 \sigma^ {2}}\right)
+$$
+
+Substituting $\mathbf{n}_{\mathbf{x}} = \hat{\mathbf{x}} -\mathbf{x}$ into the density above, we obtain
+
+$$
+p (\hat {\mathbf {x}}) = \int_ {\mathbf {x} \in M} \frac {1}{\left(2 \pi \sigma^ {2}\right) ^ {n / 2}} \exp \left(- \frac {\| \hat {\mathbf {x}} - \mathbf {x} \| _ {2} ^ {2}}{2 \sigma^ {2}}\right) p _ {M} (\mathbf {x}) d M (\mathbf {x})
+$$
+
+Specifically, we assume an isotropic Gaussian distribution with $\sigma = 0.05$ as the noise density $\nu_{\mathbf{x}}$ for all $\mathbf{x} \in M$ .
+
+By equation (1), we have the following computation of the density at $\hat{\mathbf{x}}$ , where $n = 2$ for the two-moons example.
+
+$$
+\begin{array}{l} p (\hat {\mathbf {x}}) = \int_ {\mathbf {x} \in M} \nu_ {\mathbf {x}} (\hat {\mathbf {x}} - \mathbf {x}) p _ {M} (\mathbf {x}) d M (\mathbf {x}) \\ = \int_ {\mathbf {x} \in M _ {0}} \nu_ {\mathbf {x}} (\hat {\mathbf {x}} - \mathbf {x}) p _ {M} | _ {M _ {0}} (\mathbf {x}) d M _ {0} (\mathbf {x}) + \int_ {\mathbf {x} \in M _ {1}} \nu_ {\mathbf {x}} (\hat {\mathbf {x}} - \mathbf {x}) p _ {M} | _ {M _ {1}} (\mathbf {x}) d M _ {1} (\mathbf {x}) \\ = \int_ {\theta = 0} ^ {\pi} \nu_ {\gamma_ {0} (\theta)} (\hat {\mathbf {x}} - \gamma_ {0} (\theta)) p _ {M} | _ {M _ {0}} (\gamma_ {0} (\theta)) d \theta + \int_ {\theta = 0} ^ {\pi} \nu_ {\gamma_ {1} (\theta)} (\hat {\mathbf {x}} - \gamma_ {1} (\theta)) p _ {M} | _ {M _ {1}} (\gamma_ {1} (\theta)) d \theta \quad \left(\because (5)\right) \\ = \Pr [ \mathbf {x} \in M _ {0} ] \int_ {\theta = 0} ^ {\pi} \nu_ {\gamma_ {0} (\theta)} (\hat {\mathbf {x}} - \gamma_ {0} (\theta)) p _ {0} (\gamma_ {0} (\theta)) d \theta + \Pr [ \mathbf {x} \in M _ {1} ] \int_ {\theta = 0} ^ {\pi} \nu_ {\gamma_ {1} (\theta)} (\hat {\mathbf {x}} - \gamma_ {1} (\theta)) p _ {1} (\gamma_ {1} (\theta)) d \theta \quad (\because (6)) \\ = \frac {1}{2 \pi^ {2} \sigma^ {2}} \left[ \Pr [ \mathbf {x} \in M _ {0} ] \int_ {\theta = 0} ^ {\pi} \exp \left(- \frac {\| \hat {\mathbf {x}} - \gamma_ {0} (\theta) \| _ {2} ^ {2}}{2 \sigma^ {2}}\right) d \theta + \Pr [ \mathbf {x} \in M _ {1} ] \int_ {\theta = 0} ^ {\pi} \exp \left(- \frac {\| \hat {\mathbf {x}} - \gamma_ {1} (\theta) \| _ {2} ^ {2}}{2 \sigma^ {2}}\right) d \theta \right] \\ \end{array}
+$$
+
+We can also check that the requirement (R0) is satisfied by the construction; our construction (equation (1)) is based on (R0). The computed density is shown in Figure 6b.
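+
+The final display can also be evaluated numerically. The sketch below is our own illustration (equal class priors and the two-dimensional Gaussian normalizer $\frac{1}{2\pi\sigma^2}$ are assumptions of this sketch); it computes $p(\hat{\mathbf{x}})$ by a midpoint rule over $\theta$ and confirms that the density concentrates near the manifold.
+
+```python
+import math
+
+SIGMA = 0.05  # noise scale used in the text's example
+
+def extended_density(xh, priors=(0.5, 0.5), n=4000):
+    """Evaluate p(x_hat) for the two-moons example: a Gaussian noise
+    integral along each moon, weighted by the class priors."""
+    gammas = [
+        lambda t: (math.cos(t), math.sin(t)),
+        lambda t: (1 - math.cos(t), 1 - math.sin(t) + 0.5),
+    ]
+    h = math.pi / n
+    norm = 1.0 / (2 * math.pi * SIGMA ** 2)  # 2-D Gaussian normalizer
+    p = 0.0
+    for prior, g in zip(priors, gammas):
+        s = 0.0
+        for i in range(n):
+            x, y = g((i + 0.5) * h)
+            d2 = (xh[0] - x) ** 2 + (xh[1] - y) ** 2
+            s += math.exp(-d2 / (2 * SIGMA ** 2)) * h
+        p += prior * norm * s / math.pi  # uniform p_i = 1/pi on each moon
+    return p
+
+on_manifold = extended_density((0.0, 1.0))  # top of the first moon
+far_away = extended_density((5.0, 5.0))     # far from both moons
+print(on_manifold > far_away)  # density peaks near M
+```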
+
+# C PROOFS
+
+In this section, we provide the proofs for statements that appeared in Section 4.
+
+# C.1 PROOF OF THEOREM 1
+
+To begin with, pick a value $\lambda$ such that the $\lambda$ -density superlevel set $L_{\nu_{\mathbf{x}},\lambda}$ is nonempty for all $\mathbf{x} \in M$ . As we use noise densities $\nu_{\mathbf{x}}$ described in Section 4.1, it is safe to assume that both $\lambda$ -bounding radius $\delta_{\lambda} = \max_{\mathbf{x} \in M} \delta_{\mathbf{x},\lambda}$ and $\lambda$ -guaranteeing radius $\epsilon_{\lambda} = \min_{\mathbf{x} \in M} \epsilon_{\mathbf{x},\lambda}$ exist.
+
+Then, we can prove that, with a proper choice of threshold $\lambda$ , the $\lambda$ -density superlevel set includes the data-generating manifold.
+
+Lemma 2. Assume that the noise densities admit the radii in Definition 1 for all $\mathbf{x} \in M$ and for a small enough $\lambda > 0$ . Then, for any $\mathbf{x} \in M$ , the density $p(\mathbf{x})$ is at least $\omega_{\epsilon}\lambda$ , i.e. $p(\mathbf{x}) \geq \omega_{\epsilon}\lambda$ , where $\epsilon = \epsilon_{\lambda}$ .
+
+Proof. By Lemma 1,
+
+$$
+\begin{array}{l} \mathbf {x} ^ {\prime} \in B _ {\epsilon} (\mathbf {x}) \Longleftrightarrow \mathbf {x} \in B _ {\epsilon} \left(\mathbf {x} ^ {\prime}\right) = B _ {\epsilon_ {\lambda}} \left(\mathbf {x} ^ {\prime}\right) \quad \left(\because \epsilon = \epsilon_ {\lambda}\right) \\ \Longrightarrow \nu_ {\mathbf {x} ^ {\prime}} \left(\mathbf {x} - \mathbf {x} ^ {\prime}\right) \geq \lambda \\ \end{array}
+$$
+
+Then, we can lower bound the density $p(\mathbf{x})$ as follows.
+
+$$
+\begin{array}{l} p (\mathbf {x}) = \int_ {\mathbf {x} ^ {\prime} \in M} \nu_ {\mathbf {x} ^ {\prime}} \left(\mathbf {x} - \mathbf {x} ^ {\prime}\right) p _ {M} \left(\mathbf {x} ^ {\prime}\right) d M \left(\mathbf {x} ^ {\prime}\right) \\ \geq \int_ {\mathbf {x} ^ {\prime} \in M \cap B _ {\epsilon} (\mathbf {x})} \nu_ {\mathbf {x} ^ {\prime}} (\mathbf {x} - \mathbf {x} ^ {\prime}) p _ {M} (\mathbf {x} ^ {\prime}) d M (\mathbf {x} ^ {\prime}) \\ \geq \lambda \int_ {\mathbf {x} ^ {\prime} \in M \cap B _ {\epsilon} (\mathbf {x})} p _ {M} (\mathbf {x} ^ {\prime}) d M (\mathbf {x} ^ {\prime}) \\ = \lambda \Pr_ {\mathbf {x} ^ {\prime} \in M} \left[ \mathbf {x} ^ {\prime} \in B _ {\epsilon} (\mathbf {x}) \right] \\ \geq \omega_ {\epsilon} \lambda \\ \end{array}
+$$
+
+
+
+This lemma shows that thresholding the extended density $p$ with a threshold $\lambda^{*} \leq \omega_{\epsilon} \lambda$ guarantees that the superlevel set includes the entire manifold $M$ .
+
+Corollary 2. For any threshold $\lambda^* \leq \omega_{\epsilon} \lambda$ , the corresponding $\lambda^*$ -density superlevel set $L_{p, \lambda^*}$ of the extended density $p$ includes the data-generating manifold $M$ .
+
+Similarly, we show that, with a proper choice of threshold $\lambda$ , each connected component of $\lambda$ -density superlevel set contains at most one manifold.
+
+Lemma 3. Assume a family of noise densities satisfies the assumptions of Section 4.1. Let $\lambda >0$ be a value such that the $\lambda$ -density superlevel set $L_{\nu_{\mathbf{x}},\lambda}$ is nonempty for any $\mathbf{x}\in M$ . Also, let $\delta = \delta_{\lambda}$ be the maximum $\lambda$ -bounding radius over $M$ . Then, for any $\hat{\mathbf{x}}\notin N_{\delta}(M)$ , the extended density value is smaller than $\lambda$ , i.e. $p(\hat{\mathbf{x}}) < \lambda$ .
+
+Proof. By Lemma 1,
+
+$$
+\begin{array}{l} \hat {\mathbf {x}} \notin N _ {\delta} (M) \iff \hat {\mathbf {x}} \notin B _ {\delta} (\mathbf {x}) = B _ {\delta_ {\lambda}} (\mathbf {x}) \text{ for any } \mathbf {x} \in M \quad (\because \delta = \delta_ {\lambda}) \\ \Longrightarrow \nu_ {\mathbf {x}} (\hat {\mathbf {x}} - \mathbf {x}) < \lambda \text{ for any } \mathbf {x} \in M \\ \end{array}
+$$
+
+Then, we can upper bound the density $p(\hat{\mathbf{x}})$ as follows.
+
+$$
+\begin{array}{l} p (\hat {\mathbf {x}}) = \int_ {\mathbf {x} \in M} \nu_ {\mathbf {x}} (\hat {\mathbf {x}} - \mathbf {x}) p _ {M} (\mathbf {x}) d M (\mathbf {x}) \\ < \lambda \int_ {\mathbf {x} \in M} p _ {M} (\mathbf {x}) d M (\mathbf {x}) \quad (\because \hat {\mathbf {x}} \notin N _ {\delta_ {\lambda}} (M)) \\ = \lambda \\ \end{array}
+$$
+
+
+
+This lemma says that the $\lambda$ -density superlevel set is included in the $\delta$ -neighborhood $N_{\delta}(M)$ of the data-generating manifold $M$ .
+
+Now, we can deduce the following main result.
+
+Theorem 1. Pick any threshold value $\lambda^* \leq \omega_\epsilon \lambda$ satisfying Corollary 2. If the class-wise distance of the data-generating manifold is larger than $2\delta^*$ , where $\delta^* = \delta_{\lambda^*}$ is the $\lambda^*$ -bounding radius, then the superlevel set $L_{p,\lambda^*}$ satisfies the following.
+
+- $L_{p,\lambda^*}$ contains the data-generating manifold $M$ .
+- Each connected component of $L_{p,\lambda^*}$ contains at most one manifold $M_i$ of class $i$ .
+
+Proof. The first property is a direct application of Corollary 2 for any $\lambda^{*} \leq \omega_{\epsilon}\lambda$ .
+
+For the second property, since the class-wise distance of $M$ is larger than $2\delta^{*}$ , the $\delta^{*}$ -neighborhoods of the manifolds are pairwise disjoint, i.e. $N_{\delta^{*}}(M_{i}) \cap N_{\delta^{*}}(M_{j}) = \emptyset$ for each $i \neq j$ . Therefore, $N_{\delta^{*}}(M)$ has exactly $k$ connected components $N_{i} = N_{\delta^{*}}(M_{i})$ .
+
+By Lemma 3, the $\delta^{*}$ -neighborhood $N_{\delta^{*}}(M)$ contains the superlevel set $L_{p,\lambda^{*}}$ , thus each connected component of $L_{p,\lambda^{*}}$ lies in exactly one of the $N_{i}$ 's. Since $M$ is contained in $L_{p,\lambda^{*}}$ , each $M_{i}$ is contained in some connected component $C$ of $L_{p,\lambda^{*}}$ which lies in $N_{i}$ . Then, for any $j \neq i$ , $M_{j} \not\subset C \subset N_{i}$ , since $M_{j}$ lies in $N_{j}$ , which is disjoint from $N_{i}$ . Therefore, if a connected component $C$ contains a manifold $M_{i}$ , then it cannot contain any other manifold.
+
+# C.2 PROOFS FOR SECTION 4.3
+
+Theorem 2. Let $\mathcal{D}_Z$ be a mixture of $n_Z$ multivariate Gaussian distributions, and let $\mathcal{D}_X$ be the target distribution from a data-generating manifold with $n_X$ manifolds. Let $G$ be a continuous generative model for $\mathcal{D}_X$ using latent vectors from $\mathcal{D}_Z$ . Assume the conditions of Theorem 1 are satisfied, and let $\lambda^*$ be the threshold value from Theorem 1. If $n_Z < n_X$ , then $L_{\lambda^*}^X$ and $L_{\lambda^*}^{G(Z)}$ do not agree on the number of connected components.
+
+Proof. Since $L_{\lambda^*}^X$ is the superlevel set resulting from Theorem 1, the number of connected components of $L_{\lambda^*}^X$ is at least $n_X$ .
+
+However, since $\mathcal{D}_Z$ is a mixture of Gaussians, for any value of $\lambda$ (including the special case $\lambda = \lambda^*$ ), $L_{\lambda}^{Z}$ can never have more than $n_Z$ connected components. Since $G$ is continuous, the image of each connected component is connected, thus $L_{\lambda^*}^{G(Z)} = G(L_{\lambda^*}^{Z})$ has at most $n_Z$ connected components. As $n_Z < n_X$ , $L_{\lambda^*}^{X}$ and $L_{\lambda^*}^{G(Z)}$ can never agree on the number of connected components.
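+
+The counting argument can be visualized with a small numerical experiment. The sketch below is our own illustration (the densities, threshold, and grid are arbitrary choices): it counts 4-connected components of a thresholded density on a grid, and a two-mode density yields two components while a single Gaussian yields one, mirroring the mismatch when $n_Z < n_X$ .
+
+```python
+import math
+from collections import deque
+
+def count_components(mask):
+    """Count 4-connected components of True cells in a 2-D boolean grid,
+    a discrete stand-in for the components of a superlevel set."""
+    rows, cols = len(mask), len(mask[0])
+    seen = [[False] * cols for _ in range(rows)]
+    comps = 0
+    for r in range(rows):
+        for c in range(cols):
+            if mask[r][c] and not seen[r][c]:
+                comps += 1
+                seen[r][c] = True
+                q = deque([(r, c)])
+                while q:  # breadth-first flood fill
+                    i, j = q.popleft()
+                    for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
+                        if 0 <= ni < rows and 0 <= nj < cols \
+                                and mask[ni][nj] and not seen[ni][nj]:
+                            seen[ni][nj] = True
+                            q.append((ni, nj))
+    return comps
+
+# Superlevel sets (threshold 0.5) of a bimodal density vs a single Gaussian.
+grid = [-3 + 6 * j / 99 for j in range(100)]
+bimodal = [[math.exp(-((x - 1.5) ** 2 + y ** 2))
+            + math.exp(-((x + 1.5) ** 2 + y ** 2)) > 0.5
+            for x in grid] for y in grid]
+unimodal = [[math.exp(-(x ** 2 + y ** 2)) > 0.5 for x in grid] for y in grid]
+print(count_components(bimodal), count_components(unimodal))
+```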
+
+Corollary 1. If Theorem 2 is satisfied, there is a point $\hat{\mathbf{x}}\in \mathbb{R}^n$ such that $\hat{\mathbf{x}}\notin L_{\lambda^{*}}^{X}$ but $\hat{\mathbf{x}}\in L_{\lambda^{*}}^{G(Z)}$ .
+
+Proof. Since $n_Z < n_X$ , there exists a connected component $\hat{C}$ of $L_{\lambda^*}^{G(Z)}$ containing at least two connected components of $L_{\lambda^*}^X$ . Without loss of generality, assume $\hat{C}$ contains exactly two connected components $C$ and $C'$ . By definition, a $\lambda$ -superlevel set is a closed set, so $C$ and $C'$ are disjoint closed sets. In the Euclidean space $\mathbb{R}^n$ , Urysohn's lemma tells us that for any disjoint pair of closed sets $A, A'$ in $\mathbb{R}^n$ , there is a continuous function $f:\mathbb{R}^n\to [0,1]$ such that $f(\mathbf{x}) = 0$ for all $\mathbf{x} \in A$ and $f(\mathbf{x}) = 1$ for all $\mathbf{x} \in A'$ . In particular, when $A = C$ and $A' = C'$ , there exists a continuous function $f$ such that,
+
+$f(\mathbf{x}) = 0$ for all $\mathbf{x}$ in $C$
+$f(\mathbf{x}) = 1$ for all $\mathbf{x}$ in $C^\prime$
+
+Consider the level set $S = f^{-1}\left(\left\{\frac{1}{2}\right\}\right)$ , which separates $C$ and $C'$ . If $\hat{C} \cap S = \varnothing$ , then $\hat{C} \cap f^{-1}\left(\left[0, \frac{1}{2}\right)\right)$ and $\hat{C} \cap f^{-1}\left(\left(\frac{1}{2}, 1\right]\right)$ are two nonempty open sets in the subspace $\hat{C}$ whose union is $\hat{C}$ . This implies that $\hat{C}$ is disconnected, which is a contradiction. Therefore, $\hat{C} \cap S$ must be nonempty, and any point $\hat{\mathbf{x}}$ in $\hat{C} \cap S$ is not in $L_{\lambda^*}^X$ , since $f$ takes the value $\frac{1}{2}$ there, so $\hat{\mathbf{x}}$ lies in neither $C$ nor $C'$ .
+
+# D FURTHER DISCUSSIONS
+
+# D.1 COMPUTING DENSITY OVER A DATA-GENERATING MANIFOLD
+
+When $M$ is a Riemannian manifold equipped with a Riemannian metric $g$ , we can compute probabilities over $M$ . There are two essential components of probability computation: (a) a density function $p_M$ and (b) a measure $dM$ over $M$ . We assume $p_M$ and $dM$ to satisfy the followings.
+
+(R1) For any measurable subset $A \subseteq M$ , the probability of $A$ is the integral of $p_M$ , i.e., $\operatorname{Pr}[\mathbf{x} \in A] = \int_{\mathbf{x} \in A} p_M(\mathbf{x}) dM(\mathbf{x})$ .
+(R2) $p_M$ is zero everywhere outside of $M$ , i.e., $\mathrm{supp}(p_M) = \{\mathbf{x} \in \mathbb{R}^n \mid p_M(\mathbf{x}) > 0\} \subseteq M$ .
+(R3) For any $(\mathbf{x},y)$ , $\mathbf{x}$ is sampled from $M_{i}$ if and only if $y = i$ , i.e. $\operatorname*{Pr}[\mathbf{x}\in M_i] = \operatorname*{Pr}[y = i]$ .
+
+When equipped with such $p_M$ and $dM$ , we call $M$ a data-generating manifold.
+
+Probability over a Riemannian manifold. We show how to compute a probability of $\mathbf{x}$ being generated from a Riemannian manifold $M$ . We assume a $k$ -dimensional manifold $M$ equipped with a Riemannian metric $g$ , a family of inner products $g_{\mathbf{x}}$ on tangent spaces $T_{\mathbf{x}}M$ . In this case, $g$ induces the volume measure $dM$ for integration over $M$ . If $M$ is parameterized by $\mathbf{x} = X(\mathbf{u})$ for $\mathbf{u} \in D \subseteq \mathbb{R}^k$ , the integration of a density function $p_M$ on $M$ is as follows.
+
+$$
+\int_ {\mathbf {x} \in M} p _ {M} (\mathbf {x}) d M (\mathbf {x}) = \int_ {\mathbf {u} \in D} p _ {M} (X (\mathbf {u})) \sqrt {\left| \det [ g _ {X (\mathbf {u})} ] \right|} d \mathbf {u} \tag {8}
+$$
+
+where $[g_{X(\mathbf{u})}]$ is the $k\times k$ matrix representation of the inner product $g_{X(\mathbf{u})}$ at $X(\mathbf{u})\in M$ .
+
+In Appendix B, a concrete example of this computation will be provided.
+
+# D.2 DENSITY EXTENSION OF THE SECTION 3.2
+
+This section introduces some remaining discussions regarding our data-generating process from a data-generating manifold.
+
+Relation to kernel density estimation. While this extension computes the density of a compound distribution, it can be interpreted as an expectation over a family of locally defined densities. Such expected values appear in previous density estimation approaches. For example, if $\nu_{\mathbf{x}}$ is an isotropic Gaussian for each $\mathbf{x}$ , the integration above is equivalent to kernel density estimation, with a Gaussian kernel, over infinitely many points on $M$ .
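+
+This equivalence can be checked empirically. In the sketch below (our own illustration; the bandwidth $\sigma = 0.3$ , the evaluation point, and the sample count are arbitrary choices), a Gaussian KDE over finitely many points drawn uniformly from $M_0$ approaches the integral over the manifold.
+
+```python
+import math
+import random
+
+random.seed(0)
+SIGMA = 0.3  # kernel bandwidth, an arbitrary choice for this sketch
+
+def kernel(xh, x):
+    # isotropic 2-D Gaussian kernel, playing the role of nu_x
+    d2 = (xh[0] - x[0]) ** 2 + (xh[1] - x[1]) ** 2
+    return math.exp(-d2 / (2 * SIGMA ** 2)) / (2 * math.pi * SIGMA ** 2)
+
+xh = (0.0, 0.5)
+
+# Numerical integral over M_0 with the uniform density p_0 = 1/pi ...
+n = 20000
+h = math.pi / n
+integral = sum(kernel(xh, (math.cos((i + 0.5) * h), math.sin((i + 0.5) * h))) * h
+               for i in range(n)) / math.pi
+
+# ... versus a Gaussian KDE over finitely many samples drawn from p_0.
+samples = [(math.cos(t), math.sin(t))
+           for t in (random.uniform(0, math.pi) for _ in range(50000))]
+kde = sum(kernel(xh, s) for s in samples) / len(samples)
+print(abs(integral - kde) < 0.05)  # the KDE approaches the integral
+```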
+
+Observed property of the extended density. In Figure 6b in Appendix B, we can observe that the extended density achieves higher values near the data-generating manifold. We formalize this observation to discuss its implication for the INC approach.
+
+Let $d(\hat{\mathbf{x}}, M)$ be the minimum distance from $\hat{\mathbf{x}}$ to the manifold $M$ .
+
+(C1) For any given $\hat{\mathbf{x}}$ , let $y^{*}$ be the class label whose conditional density $p(\hat{\mathbf{x}} \mid y = y^{*})$ dominates $p(\hat{\mathbf{x}} \mid y = i)$ for $i \neq y^{*}$ ,
+
+$$
+y ^ {*} \in \arg \max _ {i \in [ l ]} p (\hat {\mathbf {x}} | y = i) \tag {9}
+$$
+
+and let $M_{y^*}$ be the manifold corresponding to $y^*$ .
+
+(C2) For $y^*$ satisfying (C1), we choose $y^*$ such that the distance of $\hat{\mathbf{x}}$ from the manifold $d(\hat{\mathbf{x}}, M_{y^*})$ is the smallest.
+
+If there are multiple $y^{*}$ satisfying both of (C1) and (C2), we expect the following property to be true for all of those $y^{*}$ .
+
+(P1) Consider the shortest line segment from $\hat{\mathbf{x}}$ to the manifold $M_{y^*}$ . As $\hat{\mathbf{x}}$ moves closer to $M_{y^*}$ along this segment, it should become more likely to be generated, since the influence of noise decreases as one moves away from the manifold. Therefore, we expect our density $p$ to have the following property.
+
+$$
+\begin{array}{l} \mathbf{x}^{*}\in \arg \min_{\mathbf{x}\in M_{y^{*}}}d(\hat{\mathbf{x}},\mathbf{x}) \\ \Rightarrow p (\hat {\mathbf {x}}) \leq p ((1 - \lambda) \hat {\mathbf {x}} + \lambda \mathbf {x} ^ {*}) \text{ for all } \lambda \in [ 0, 1 ] \tag {10} \\ \end{array}
+$$
+
+Actually, this provides another justification of INC. In practice, the density conditioned on the label is not available even after running a generative model, so finding $y^{*}$ via (C1) is relatively hard. If we only consider (C2) without filtering $y^{*}$ via (C1), we are finding a point $\mathbf{x} \in M$ achieving the minimum distance to $\hat{\mathbf{x}}$ , which is exactly the optimization (11). Then projecting $\hat{\mathbf{x}}$ to $\mathbf{x}^{*}$ , i.e. the solution of the optimization (11), is explained by (10): when $\lambda = 1$ , $p$ attains its highest value along the shortest segment between $\hat{\mathbf{x}}$ and $\mathbf{x}^{*}$ .
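+
+For the two-moons example, the projection in (11) can be approximated by a dense grid search over $\theta$ . The sketch below is our own illustration, not the INC implementation used in the experiments.
+
+```python
+import math
+
+def inc_project(xh, n=20000):
+    """Optimization (11) restricted to the two-moons manifold:
+    grid-search theta on both moons for the closest point to x_hat."""
+    gammas = [
+        lambda t: (math.cos(t), math.sin(t)),
+        lambda t: (1 - math.cos(t), 1 - math.sin(t) + 0.5),
+    ]
+    best, best_d = None, float("inf")
+    for g in gammas:
+        for i in range(n + 1):
+            x, y = g(math.pi * i / n)
+            d = math.hypot(xh[0] - x, xh[1] - y)
+            if d < best_d:
+                best, best_d = (x, y), d
+    return best
+
+# A point slightly off the first moon projects back onto it: the
+# projection lies on the unit semicircle (distance 1 from the origin).
+x_star = inc_project((-0.5, 0.9))
+print(round(math.hypot(*x_star), 4))
+```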
+
+# D.3 SUFFICIENT CONDITIONS FOR THE EXISTENCE OF RADII
+
+We discuss the sufficient conditions guaranteeing the existence of radii introduced in Definition 1. Those sufficient conditions are derived from natural intuition about the properties of distributions in most machine-learning contexts.
+
+The first intuition is that the influence of noise should diminish as the observed sample $\hat{\mathbf{x}}$ moves away from the source point $\mathbf{x}_o$ . Therefore, we formalize noise whose density decreases as the noise $\mathbf{n} = \hat{\mathbf{x}} - \mathbf{x}_o$ gets larger. We formalize the boundedness of noise densities via the boundedness of their $\lambda$ -density superlevel sets, and the continuity of noise densities via the continuity of each individual $\nu_{\mathbf{x}}$ .
+
+Definition 11 (Center-peaked noise density). Noise density functions $\nu_{\mathbf{x}}$ are center-peaked, if for any source point $\mathbf{x} \in M$ and any noise vector $\mathbf{n} \in \mathbb{R}^n$ with $\| \mathbf{n} \| > 0$ , $\nu_{\mathbf{x}}(\mathbf{n}) < \nu_{\mathbf{x}}(\lambda \mathbf{n})$ for all $\lambda \in [0,1)$ .
+
+Definition 12 (Bounded noise density). Noise density functions $\nu_{\mathbf{x}}$ are bounded if, whenever a $\lambda$ -density superlevel set is nonempty, there is a radius $\delta$ by which the $\lambda$ -density superlevel set is bounded, i.e., $L_{\nu_{\mathbf{x}},\lambda} \subseteq \overline{B_{\delta}(\mathbf{0})}$ , where $\overline{B_{\delta}(\mathbf{0})}$ is the closed ball of radius $\delta$ centered at $\mathbf{0}$ .
+
+Definition 13 (Continuous noise density). Noise density functions $\nu_{\mathbf{x}}$ are continuous if $\nu_{\mathbf{x}}$ is continuous on $\mathbb{R}^n$ for every $\mathbf{x} \in M$ .
+
+Under the conditions above, the radii in Definition 1 always exist.
+
+Proposition 1. If noise densities $\nu_{\mathbf{x}}$ are center-peaked, bounded, and continuous, any nonempty $\lambda$ -density superlevel set $L_{\nu_{\mathbf{x}},\lambda}$ has both $\lambda$ -bounding radius $\delta_{\mathbf{x},\lambda}$ and $\lambda$ -guaranteeing radius $\epsilon_{\mathbf{x},\lambda}$ .
+
+Proof. Let $\nu_{\mathbf{x}}$ be a center-peaked, bounded family of continuous noise densities. Since $\nu_{\mathbf{x}}$ is continuous, the superlevel set $L_{\nu_{\mathbf{x}},\lambda} = \nu_{\mathbf{x}}^{-1}\big([\lambda ,\infty)\big)$ is closed as the inverse image of a closed set under a continuous map. Therefore, its boundary $\partial L_{\nu_{\mathbf{x}},\lambda}$ is contained in $L_{\nu_{\mathbf{x}},\lambda}$ .
+
+Because $\nu_{\mathbf{x}}$ is bounded, the superlevel set $L_{\nu_{\mathbf{x}},\lambda}$ is contained in a closed ball $\overline{B_{\delta}(\mathbf{0})}$ of radius $\delta \geq 0$ . Since $\nu_{\mathbf{x}}$ is center-peaked, a nonempty superlevel set $L_{\nu_{\mathbf{x}},\lambda}$ always contains $\mathbf{0}$ , as the maximum is achieved at $\mathbf{0}$ . Moreover, there exists a closed ball $\overline{B_{\epsilon}(\mathbf{0})}$ of radius $\epsilon \geq 0$ contained in the superlevel set $L_{\nu_{\mathbf{x}},\lambda}$ . Now it is enough to show that the minimum such $\delta$ and the maximum such $\epsilon$ exist.
+
+Since $L_{\nu_{\mathbf{x}},\lambda}$ is bounded, its boundary $\partial L_{\nu_{\mathbf{x}},\lambda}$ is also bounded. As $\partial L_{\nu_{\mathbf{x}},\lambda}$ is closed and bounded, it is a compact set. Therefore, the Euclidean norm, as a continuous function, achieves its maximum $\bar{r}$ and its minimum $\underline{r}$ on $\partial L_{\nu_{\mathbf{x}},\lambda}$ by the extreme value theorem. From the choice of $\delta$ and $\epsilon$ , we get
+
+$$
+\epsilon \leq \underline {{r}} \leq \overline {{r}} \leq \delta
+$$
+
+Therefore, we can find the minimum $\delta_{\mathbf{x},\lambda} = \overline{r}$ and the maximum $\epsilon_{\mathbf{x},\lambda} = \underline{r}$ .
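+
+For a concrete instance of Proposition 1, an isotropic Gaussian noise density is center-peaked, bounded, and continuous, and its $\lambda$ -superlevel set is a closed ball, so both radii coincide and admit a closed form. The helper below is our own sketch of this special case.
+
+```python
+import math
+
+def gaussian_superlevel_radius(lam, sigma, n):
+    """For nu(x) = (2*pi*sigma^2)^(-n/2) * exp(-||x||^2 / (2*sigma^2)),
+    the lambda-superlevel set {nu >= lam} is the closed ball of radius r
+    with nu(r) = lam, so delta_{x,lambda} = epsilon_{x,lambda} = r."""
+    peak = (2 * math.pi * sigma ** 2) ** (-n / 2)
+    if lam > peak:
+        return None  # the superlevel set is empty
+    return math.sqrt(-2 * sigma ** 2 * math.log(lam / peak))
+
+r = gaussian_superlevel_radius(0.1, 1.0, 2)
+# sanity check: the density at radius r is exactly lambda
+nu_r = math.exp(-r ** 2 / 2) / (2 * math.pi)
+print(round(nu_r, 6))
+```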
+
+
+
+# D.4 GENERALIZATION OF THEOREM 2
+
+We generalize Theorem 2 to handle more concepts in topology. Theorem 2 mainly uses the fact that a continuous generative model $G$ cannot increase the number of connected components of a $\lambda$ -density superlevel set.
+
+In algebraic topology, each connected component corresponds to a generator of the 0-th homology group $H_0$ , and continuity of a function is enough to preserve each component. In general, generators of the $i$ -th homology group $H_i$ for $i > 0$ are not preserved by a continuous map, so we need to restrict $G$ further. By requiring $G$ to be a homeomorphism, we can safely guarantee that all topological properties are preserved by $G$ ; therefore, we can generalize Theorem 2 to a homeomorphic generative model $G$ .
+
+To generalize the proof of Theorem 2, we first provide a sketch of that proof.
+
+(1) $\lambda^{*}$ -density superlevel set $L_{\lambda^{*}}^{Z}$ of a mixture of $n_Z$ Gaussian distributions has at most $n_Z$ connected components.
+(2) Since $G$ is continuous, the number of connected components of $L_{\lambda^*}^{G(Z)} = G(L_{\lambda^*}^Z)$ is at most the number of connected components of $L_{\lambda^*}^Z$ , so it is also at most $n_Z$ .
+(3) We choose $\lambda^{*}$ so that $L_{\lambda^*}^X$ is included in $\delta^{*}$ -neighborhood of $M$ .
+(4) By assumption on the class-wise distance of $M$ , $\delta^*$ -neighborhood of $M$ has exactly same number of connected components to $M$ , i.e., $n_X$ . Therefore $L_{\lambda^*}^X$ has at least $n_X$ connected components.
+(5) By (2) and (4), we conclude that $L_{\lambda^*}^{G(Z)}$ and $L_{\lambda^*}^X$ do not agree on the number of connected components as long as $n_Z < n_X$ .
+
+In this proof, $n_Z$ corresponds to the maximal 0-th Betti number of $L_{\lambda^*}^Z$ , i.e. the number of generators of $H_0(L_{\lambda^*}^Z)$ . If we keep using a mixture of Gaussians as the latent vector distribution, all components of $L_{\lambda^*}^Z$ are contractible, so we may use 0 as the maximal $i$ -th Betti number for every $i > 0$ .
+
+Also, $n_X$ corresponds to the 0-th Betti number of $M$ , and it served as the minimal 0-th Betti number of $L_{\lambda^*}^X$ . The condition on the class-wise distance of $M$ is used to ensure that $n_X$ is a lower bound. Combining these observations, we can get the following generalized statement.
+
+Theorem 3. Let $\mathcal{D}_Z$ be a mixture of multivariate Gaussian distributions, and let $\mathcal{D}_X$ be the target distribution from data-generating manifold $M$ . Let $n_i$ be the $i$ -th Betti number of $M$ .
+
+Consider a generative model $G$ used to approximate $\mathcal{D}_X$ using the latent vectors sampled from $\mathcal{D}_Z$ . Assume that $G$ is a homeomorphism from $\mathbb{R}^n$ to itself. Assume that the data-generating manifold satisfies the conditions of Theorem 1, and let $\lambda^{*}$ be the threshold value from Theorem 1, so that $L_{\lambda^{*}}^{X}$ is the corresponding superlevel set. Assume that for some $j > 0$ , the homomorphism $\iota^{*}: H_j(M) \to H_j(N_{\delta^{*}}(M))$ induced by the inclusion $\iota : M \to N_{\delta^{*}}(M)$ is injective.
+
+If $0 < n_{j}$ , then $L_{\lambda^{*}}^{X}$ and $L_{\lambda^{*}}^{G(Z)}$ do not agree on the $j$ -th Betti number.
+
+Proof. Since $L_{\lambda^*}^X$ is the superlevel set from Theorem 1, it includes $M$ and is included in the $\delta^*$ -neighborhood $N_{\delta^*}(M)$ of $M$ . Define the inclusions $\iota_1, \iota_2$ as,
+
+$\iota_{1}:M\to L^{X}_{\lambda^{*}}$
+$\iota_{2}:L_{\lambda^{*}}^{X}\to N_{\delta^{*}}(M)$
+
+Clearly, $\iota = \iota_{2} \circ \iota_{1}$ .
+
+Let $\iota_1^*$ and $\iota_2^*$ be the homomorphisms induced by $\iota_{1}$ and $\iota_{2}$ , respectively.
+
+By the assumption, any generator $[a]$ in $H_{j}(M)$ is mapped to a nonzero class $\iota^{*}([a])$ in $H_{j}(N_{\delta^{*}}(M))$ . Note that the $j$ -th Betti number is the rank of the $j$ -th homology group, e.g. $\mathrm{rank}(H_j(N_{\delta^*}(M)))$ . Since $\iota^* = \iota_2^* \circ \iota_1^*$ is injective, $\iota_1^*$ must be injective as well, so $H_{j}(L_{\lambda^{*}}^{X})$ contains an isomorphic copy of $H_{j}(M)$ and $\mathrm{rank}(H_j(L_{\lambda^*}^X)) \geq \mathrm{rank}(H_j(M)) = n_j$ . Therefore the $j$ -th Betti number of $L_{\lambda^*}^X$ is at least $n_j$ .
+
+However, since $\mathcal{D}_Z$ is a mixture of Gaussians, for any value of $\lambda$ (including the special case $\lambda = \lambda^*$ ), $L_{\lambda}^{Z}$ has no generator of the $j$ -th homology group, so its $j$ -th Betti number is 0 for all $j > 0$ . Since $G$ is a homeomorphism, it preserves all Betti numbers, so $L_{\lambda^*}^{G(Z)} = G(L_{\lambda^*}^{Z})$ also has $j$ -th Betti number 0. As $n_j > 0$ , $L_{\lambda^*}^{X}$ and $L_{\lambda^*}^{G(Z)}$ can never agree on the $j$ -th Betti number.
+
+In Section 5.2, Figure 3i from the circles dataset is a remarkable example in which $L_{\lambda}^{G(Z)}$ has the same number of connected components but does not contain any loop (non-contractible circle). This is empirical evidence of Theorem 3, so it is explained by mismatches in the topology of the distributions. Each concentric circle has $\mathbb{Z}$ as its first homology group, as a circle contains exactly one generator. However, the latent vector distribution always has a trivial first homology group, as any superlevel set of a mixture of Gaussians is a union of contractible connected components.
+
+# D.5 DETAILS OF INC IMPLEMENTATIONS IN THE SECTION 5
+
+INC implementation. We start by introducing the optimization for the ideal INC projection when the data-generating manifold $M$ is available.
+
+$$
+\mathbf {x} ^ {*} = \underset {\mathbf {x} \in M} {\arg \min } d (\mathbf {x}, \hat {\mathbf {x}}) \tag {11}
+$$
+
+where $d$ is a metric defined on the domain $X$ . If perfect classification on $M$ is assumed (the model is well-trained on $M$ ) and $\hat{\mathbf{x}}$ is close enough to the manifold of the correct label, the classification $f(\mathbf{x}^{*})$ is likely to be correct, since $\mathbf{x}^{*}$ is likely to lie on the correct manifold. Since the data-generating manifold $M$ is unknown, the INC approach instead runs the following optimization before classification.
+
+$$
+\mathbf{x}^{*} = G\left(\mathbf{z}^{*}\right) \text{ where } \mathbf{z}^{*} = \underset{\mathbf{z} \sim \mathcal{D}_{Z}}{\arg\min}\, d(G(\mathbf{z}), \hat{\mathbf{x}}) \tag{12}
+$$
+
+where $d$ is a metric defined on the domain $X$ .
+
+When INC is implemented with a reversible generative model $G$ , for any given $\hat{\mathbf{x}} \in \mathbb{R}^n$ there exists a trivial solution $\mathbf{z}^{*} = G^{-1}(\hat{\mathbf{x}})$ to the optimization (12), achieving $d(G(\mathbf{z}^{*}), \hat{\mathbf{x}}) = 0$ . This is true even when $\hat{\mathbf{x}}$ lies off the manifold, so the output $\mathbf{x}^{*} = G(\mathbf{z}^{*}) = \hat{\mathbf{x}}$ remains off the data-generating manifold.
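This trivial solution can be seen in a toy setting. The sketch below assumes a hypothetical reversible generator $G(\mathbf{z}) = A\mathbf{z}$ with invertible $A$ (a stand-in for a trained reversible model): inverting $G$ reproduces any $\hat{\mathbf{x}}$ exactly, even a point far off the data manifold.

```python
import numpy as np

# Toy reversible generator G(z) = A z; its exact inverse gives the trivial
# solution z* = G^{-1}(x_hat) with d(G(z*), x_hat) = 0, on or off manifold.
A = np.array([[1.0, 0.5], [0.0, 2.0]])       # hypothetical invertible map
G = lambda z: A @ z
G_inv = lambda x: np.linalg.solve(A, x)

x_hat = np.array([10.0, -7.0])               # an arbitrary off-manifold point
z_star = G_inv(x_hat)                        # the trivial solution of (12)
assert np.allclose(G(z_star), x_hat)         # zero projection error
```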
+
+To address this problem, we add a term penalizing a low density of the latent vector to the objective function. Thus, in our INC implementation, we solve the following optimization problem.
+
+$$
+\mathbf{x}^{*} = G\left(\mathbf{z}^{*}\right) \text{ where } \mathbf{z}^{*} = \underset{\mathbf{z} \sim \mathcal{D}_{Z}}{\arg\min} \left[ d\left(G(\mathbf{z}), \hat{\mathbf{x}}\right) + \alpha\left(M - p_{Z}(\mathbf{z})\right) \right] \tag{13}
+$$
+
+where $\alpha$ is the regularization factor and $M$ is the maximum possible value of the density $p_Z$ of the latent vector distribution. For the regularization factor, we used the same value $\alpha = 1$ throughout all experiments.
+
+To solve each optimization problem, we used the Adam optimizer (Kingma & Ba, 2014) built into the TensorFlow package. For optimization parameters, we ran 100 iterations of Adam with learning rate 0.01, starting from a randomly sampled $\mathbf{z}$ .
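The optimization (13) can be sketched as follows. This is a toy illustration, not our actual implementation: it assumes a fixed invertible affine map in place of a trained reversible generative model, a standard 2-D Gaussian for $\mathcal{D}_Z$ , and plain finite-difference gradient descent in place of TensorFlow's Adam.

```python
import numpy as np

# Minimal sketch of the regularized INC projection in eq. (13).
ALPHA = 1.0                                   # regularization factor alpha
A = np.array([[2.0, 0.0], [0.0, 0.5]])        # hypothetical G(z) = A z
G = lambda z: A @ z

def p_z(z):                                   # density of D_Z = N(0, I)
    return np.exp(-0.5 * z @ z) / (2.0 * np.pi)

M_MAX = p_z(np.zeros(2))                      # max density, attained at 0

def objective(z, x_hat):
    # d(G(z), x_hat) + alpha * (M - p_Z(z)), with d the Euclidean distance
    return np.linalg.norm(G(z) - x_hat) + ALPHA * (M_MAX - p_z(z))

def inc_project(x_hat, steps=300, lr=0.05, eps=1e-5):
    z = np.zeros(2)   # the paper samples z randomly; fixed here for clarity
    for _ in range(steps):
        grad = np.array([(objective(z + eps * e, x_hat)
                          - objective(z - eps * e, x_hat)) / (2 * eps)
                         for e in np.eye(2)])
        z -= lr * grad
    return G(z), z

x_hat = np.array([3.0, 1.0])                  # point to project
x_star, z_star = inc_project(x_hat)
```

The penalty term rules out the trivial inverse by trading proximity to $\hat{\mathbf{x}}$ against latent density.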
+
+When implementing INC using a class-aware generative model, we used the following strategy to improve its robustness.
+
+- As the class-aware generative model generates each manifold from its own Gaussian component, we first sample initial points from each manifold by randomly choosing latent vectors $\mathbf{z}_1,\ldots ,\mathbf{z}_l$ , one from each Gaussian component.
+- We run INC for the $i$ -th manifold by solving the following optimization.
+
+$$
+\mathbf{x}_{i}^{*} = G\left(\mathbf{z}_{i}^{*}\right) \text{ where } \mathbf{z}_{i}^{*} = \underset{\mathbf{z} \sim \mathcal{D}_{Z}}{\arg\min} \left[ d(G(\mathbf{z}), \hat{\mathbf{x}}) + \alpha(M_{i} - p_{Z,i}(\mathbf{z})) \right]
+$$
+
+where $M_{i}$ is the maximum density value of the $i$ -th Gaussian component. The regularization term penalizes any $\mathbf{z}$ that is unlikely to be generated by the $i$ -th Gaussian component, so the search is restricted to the range of the $i$ -th Gaussian component, i.e., the $i$ -th manifold.
+
+- We choose as the final solution the $\mathbf{x}_i^*$ achieving the minimum $d(\mathbf{x}_i^*,\hat{\mathbf{x}})$ , breaking ties randomly.
+
+Since each search is performed only on each submanifold, the artifact observed in Section 5.3 never appears during the optimization process. Also, choosing initial points from each manifold prevents the initialization problem mentioned in Section 5.3.
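The strategy above can be sketched as follows, again in a toy setting: an identity generator and two 2-D Gaussian components stand in for the class-aware model, and the per-component optimizer is the same finite-difference descent used for the sketch of eq. (13). All names are illustrative.

```python
import numpy as np

# Sketch of the class-aware INC strategy: one regularized search per
# Gaussian component, initialized at that component, keeping the best.
ALPHA = 1.0
MEANS = [np.array([-3.0, 0.0]), np.array([3.0, 0.0])]  # one per manifold
G = lambda z: z                                        # toy generator

def p_i(z, i):                                         # i-th component density
    d = z - MEANS[i]
    return np.exp(-0.5 * d @ d) / (2.0 * np.pi)

M_I = [p_i(m, i) for i, m in enumerate(MEANS)]         # component maxima

def objective_i(z, x_hat, i):
    return np.linalg.norm(G(z) - x_hat) + ALPHA * (M_I[i] - p_i(z, i))

def inc_class_aware(x_hat, steps=200, lr=0.05, eps=1e-5):
    candidates = []
    for i, m in enumerate(MEANS):
        z = m.copy()                                   # init on i-th manifold
        for _ in range(steps):
            grad = np.array([(objective_i(z + eps * e, x_hat, i)
                              - objective_i(z - eps * e, x_hat, i)) / (2 * eps)
                             for e in np.eye(2)])
            z -= lr * grad
        candidates.append(G(z))
    # keep the candidate closest to x_hat (ties broken by order here)
    return min(candidates, key=lambda x: np.linalg.norm(x - x_hat))
```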
+
+# D.6 DISCUSSION ABOUT THE LIMITATION OF TOPOLOGICAL INFORMATION
+
+Given a sufficient number of connected components in the latent vector distribution, does the class-aware training suggested in this paper result in a generative model that achieves manifold separation? The answer is no: manifold separation also depends on other factors, e.g., the alignment of the latent vector distribution, the choice of training parameters, etc.
+
+
+Figure 7: Failure cases of class-aware training. (a) Superlevel set of $\mathcal{D}_Z$ . (b) Superlevel set of $\mathcal{D}_{G(Z)}$ .
+
+Figure 7b shows the superlevel set of $\mathcal{D}_{G(Z)}$ obtained from class-aware training on the two-moons dataset when the latent vector distribution is a mixture of two Gaussian distributions aligned horizontally (Figure 7a). Clearly, in this case, the generative model induced a connection artifact even though class-aware training was used.
+
+We explain this by interpreting reversible generative models as dynamical systems (Weinan, 2017; Chen et al., 2018; Grathwohl et al., 2018; Zhang et al., 2018). To elaborate, a reversible generative model can be viewed as a dynamical system moving the latent vector distribution to the target distribution continuously in time. When the two Gaussian components are aligned vertically, a reversible generative model is likely to learn how to move the upper (and lower) Gaussian distribution toward the upper moon (and the lower moon, respectively), without being affected by the entanglement of the two moons. However, moving the left (and right) Gaussian distribution toward the left moon (and the right moon, respectively) continuously in time requires avoiding the entanglement of the two moons during the transition. This case suggests that information about topological properties may not be enough to learn a generative model that separates manifolds, because it does not convey how the data-generating manifolds are aligned.
+
+# E MORE EXPERIMENTAL RESULTS
+
+We present more experimental results on INC performance, comparing the topology-aware generative model to its topology-ignorant counterpart.
+
+Histogram for projection error distributions in Section 5.4. Figure 8 presents the histograms of the projection errors, ranging from 0 to the diameter of the distribution. Each row corresponds to a dataset, and the first and second columns show the results from the topology-ignorant model and the topology-aware model, respectively. All histograms are normalized so that their values sum to 1; that is, the $y$ -axis of each histogram is the estimated probability that INC achieves the projection error on the $x$ -axis. Not only can we observe the improved mean of the projection errors in the histograms, but we can also see the reduced standard deviation, i.e., we get more consistent projection errors near the mean.
+
+
+
+
+Figure 8: Histograms of the projection errors of INC. Each $y$ -axis represents the estimated probability that INC incurs the projection error on the corresponding $x$ -axis. (a) Two-moons, topology-ignorant. (b) Two-moons, topology-aware. (c) Spirals, topology-ignorant. (d) Spirals, topology-aware. (e) Circles, topology-ignorant. (f) Circles, topology-aware.
\ No newline at end of file
diff --git a/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/images.zip b/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..c8df8144eefe8248a835da2ec148180269a49930
--- /dev/null
+++ b/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:83b3ea1ec8d6bdc10a4c8138726262ec5b02e527b6fba0c0c752980cdc332b66
+size 649025
diff --git a/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/layout.json b/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..02c83b4134f7e26ffec56be6403fd34dbbce68d9
--- /dev/null
+++ b/ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e78eb1a1f54f8aa61f53fbd3cbd268c32d585464f2a7e5444070a7010181e05
+size 1571205
diff --git a/ontherelationshipbetweenselfattentionandconvolutionallayers/3e5ce1ef-dba0-48e8-afe3-e1959710950f_content_list.json b/ontherelationshipbetweenselfattentionandconvolutionallayers/3e5ce1ef-dba0-48e8-afe3-e1959710950f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..04a32a1b469e9ec281628ab7f3c0fee61ba16fc2
--- /dev/null
+++ b/ontherelationshipbetweenselfattentionandconvolutionallayers/3e5ce1ef-dba0-48e8-afe3-e1959710950f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4cf7b226a09cb792c00ab74834ccdcf83100a5747e41e7ec18e7696cfb3f3b06
+size 109641
diff --git a/ontherelationshipbetweenselfattentionandconvolutionallayers/3e5ce1ef-dba0-48e8-afe3-e1959710950f_model.json b/ontherelationshipbetweenselfattentionandconvolutionallayers/3e5ce1ef-dba0-48e8-afe3-e1959710950f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..15e86255d09c0e26c29f84edeb49a2b7a878b918
--- /dev/null
+++ b/ontherelationshipbetweenselfattentionandconvolutionallayers/3e5ce1ef-dba0-48e8-afe3-e1959710950f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:74c745218b5a5d5d826a2e357109194e687541f31e561800f3229e1dcc3bbdb3
+size 125972
diff --git a/ontherelationshipbetweenselfattentionandconvolutionallayers/3e5ce1ef-dba0-48e8-afe3-e1959710950f_origin.pdf b/ontherelationshipbetweenselfattentionandconvolutionallayers/3e5ce1ef-dba0-48e8-afe3-e1959710950f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7f0234935cc1c58a279860e458315a8babdbd571
--- /dev/null
+++ b/ontherelationshipbetweenselfattentionandconvolutionallayers/3e5ce1ef-dba0-48e8-afe3-e1959710950f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb25b4aa57f846e336d825780fdc03804af7594a4c52606410576f388585e690
+size 3436815
diff --git a/ontherelationshipbetweenselfattentionandconvolutionallayers/full.md b/ontherelationshipbetweenselfattentionandconvolutionallayers/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b2a8addb2121e7b2189cbc9a92621019fc9e46e1
--- /dev/null
+++ b/ontherelationshipbetweenselfattentionandconvolutionallayers/full.md
@@ -0,0 +1,458 @@
+# ON THE RELATIONSHIP BETWEEN SELF-ATTENTION AND CONVOLUTIONAL LAYERS
+
+Jean-Baptiste Cordonnier, Andreas Loukas & Martin Jaggi
+
+École Polytechnique Fédérale de Lausanne (EPFL)
+
+{first.last}@epfl.ch
+
+# ABSTRACT
+
+Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block. Beyond helping CNNs to handle long-range dependencies, Ramachandran et al. (2019) showed that attention can completely replace convolution and achieve state-of-the-art performance on vision tasks. This raises the question: do learned attention layers operate similarly to convolutional layers? This work provides evidence that attention layers can perform convolution and, indeed, they often learn to do so in practice. Specifically, we prove that a multi-head self-attention layer with a sufficient number of heads is at least as expressive as any convolutional layer. Our numerical experiments then show that self-attention layers attend to pixel-grid patterns similarly to CNN layers, corroborating our analysis. Our code is publicly available1.
+
+# 1 INTRODUCTION
+
+Recent advances in Natural Language Processing (NLP) are largely attributed to the rise of the transformer (Vaswani et al., 2017). Pre-trained to solve an unsupervised task on large corpora of text, transformer-based architectures, such as GPT-2 (Radford et al., 2018), BERT (Devlin et al., 2018) and Transformer-XL (Dai et al., 2019), seem to possess the capacity to learn the underlying structure of text and, as a consequence, to learn representations that generalize across tasks. The key difference between transformers and previous methods, such as recurrent neural networks (Hochreiter & Schmidhuber, 1997) and convolutional neural networks (CNN), is that the former can simultaneously attend to every word of their input sequence. This is made possible thanks to the attention mechanism—originally introduced in Neural Machine Translation to better handle long-range dependencies (Bahdanau et al., 2015). With self-attention in particular, the similarity of two words in a sequence is captured by an attention score measuring the distance of their representations. The representation of each word is then updated based on those words whose attention score is highest.
+
+Inspired by its capacity to learn meaningful inter-dependencies between words, researchers have recently considered utilizing self-attention in vision tasks. Self-attention was first added to CNN by either using channel-based attention (Hu et al., 2018) or non-local relationships across the image (Wang et al., 2018). More recently, Bello et al. (2019) augmented CNNs by replacing some convolutional layers with self-attention layers, leading to improvements on image classification and object detection tasks. Interestingly, Ramachandran et al. (2019) noticed that, even though state-of-the-art results are reached when attention and convolutional features are combined, under same computation and model size constraints, self-attention-only architectures also reach competitive image classification accuracy.
+
+These findings raise the question, do self-attention layers process images in a similar manner to convolutional layers? From a theoretical perspective, one could argue that transformers have the capacity to simulate any function—including a CNN. Indeed, Pérez et al. (2019) showed that a multilayer attention-based architecture with additive positional encodings is Turing complete under some strong theoretical assumptions, such as unbounded precision arithmetic. Unfortunately, universality results do not reveal how a machine solves a task, only that it has the capacity to do so. Thus, the question of how self-attention layers actually process images remains open.
+
+Contributions. In this work, we put forth theoretical and empirical evidence that self-attention layers can (and do) learn to behave similarly to convolutional layers:
+
+I. From a theoretical perspective, we provide a constructive proof showing that self-attention layers can express any convolutional layers.
+
+Specifically, we show that a single multi-head self-attention layer using relative positional encoding can be re-parametrized to express any convolutional layer.
+
+II. Our experiments show that the first few layers of attention-only architectures (Ramachandran et al., 2019) do learn to attend to a grid-like pattern around each query pixel, similar to our theoretical construction.
+
+Strikingly, this behavior is confirmed both for our quadratic encoding, but also for relative encoding that is learned. Our results seem to suggest that localized convolution is the right inductive bias for the first few layers of an image classifying network. We provide an interactive website2 to explore how self-attention exploits localized position-based attention in lower layers and content-based attention in deeper layers. For reproducibility purposes, our code is publicly available.
+
+# 2 BACKGROUND ON ATTENTION MECHANISMS FOR VISION
+
+We here recall the mathematical formulation of self-attention layers and emphasize the role of positional encodings.
+
+# 2.1 THE MULTI-HEAD SELF-ATTENTION LAYER
+
+Let $\mathbf{X} \in \mathbb{R}^{T \times D_{in}}$ be an input matrix consisting of $T$ tokens of $D_{in}$ dimensions each. While in NLP each token corresponds to a word in a sentence, the same formalism can be applied to any sequence of $T$ discrete objects, e.g. pixels. A self-attention layer maps any query token $t \in [T]$ from $D_{in}$ to $D_{out}$ dimensions as follows:
+
+$$
+\operatorname{Self\text{-}Attention}(\boldsymbol{X})_{t,:} := \operatorname{softmax}\left(\boldsymbol{A}_{t,:}\right) \boldsymbol{X} \boldsymbol{W}_{\mathrm{val}}, \tag{1}
+$$
+
+where we refer to the elements of the $T\times T$ matrix
+
+$$
+\boldsymbol{A} := \boldsymbol{X} \boldsymbol{W}_{\mathrm{qry}} \boldsymbol{W}_{\mathrm{key}}^{\top} \boldsymbol{X}^{\top} \tag{2}
+$$
+
+as attention scores and the softmax output $^3$ as attention probabilities. The layer is parametrized by a query matrix $\mathbf{W}_{qry} \in \mathbb{R}^{D_{in} \times D_k}$ , a key matrix $\mathbf{W}_{key} \in \mathbb{R}^{D_{in} \times D_k}$ and a value matrix $\mathbf{W}_{val} \in \mathbb{R}^{D_{in} \times D_{out}}$ . For simplicity, we exclude any residual connections, batch normalization and constant factors.
+
+A key property of the self-attention model described above is that it is equivariant to reordering, that is, it gives the same output independently of how the $T$ input tokens are shuffled. This is problematic for cases where we expect the order of things to matter. To alleviate the limitation, a positional encoding is learned for each token in the sequence (or pixel in an image) and added to the representation of the token itself before applying self-attention:
+
+$$
+\boldsymbol{A} := (\boldsymbol{X} + \boldsymbol{P}) \boldsymbol{W}_{\mathrm{qry}} \boldsymbol{W}_{\mathrm{key}}^{\top} (\boldsymbol{X} + \boldsymbol{P})^{\top}, \tag{3}
+$$
+
+where $\pmb{P} \in \mathbb{R}^{T \times D_{in}}$ contains the embedding vectors for each position. More generally, $\pmb{P}$ may be substituted by any function that returns a vector representation of the position.
+
+It has been found beneficial in practice to replicate this self-attention mechanism into multiple heads, each being able to focus on different parts of the input by using different query, key and value matrices. In multi-head self-attention, the outputs of the $N_{h}$ heads, each of output dimension $D_{h}$ , are concatenated and projected to dimension $D_{out}$ as follows:
+
+$$
+\operatorname{MHSA}(\boldsymbol{X}) := \underset{h \in [N_{h}]}{\operatorname{concat}} \left[\operatorname{Self\text{-}Attention}_{h}(\boldsymbol{X})\right] \boldsymbol{W}_{\mathrm{out}} + \boldsymbol{b}_{\mathrm{out}} \tag{4}
+$$
+
+and two new parameters are introduced: the projection matrix $\mathbf{W}_{out} \in \mathbb{R}^{N_hD_h \times D_{out}}$ and a bias term $\mathbf{b}_{out} \in \mathbb{R}^{D_{out}}$ .
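Equations (1)-(4) can be transcribed directly. The sketch below is a minimal NumPy implementation (no residual connections, normalization, scaling, or positional encodings, as noted above); all shapes and parameter names follow the text.

```python
import numpy as np

# Minimal single-head self-attention (eqs. (1)-(2)) and its multi-head
# form with concatenation and output projection (eq. (4)).
def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, W_qry, W_key, W_val):
    A = X @ W_qry @ W_key.T @ X.T          # eq. (2): (T, T) attention scores
    return softmax(A) @ X @ W_val          # eq. (1): output per query token

def mhsa(X, heads, W_out, b_out):
    # heads: list of (W_qry, W_key, W_val) triples, one per head
    H = np.concatenate([self_attention(X, *h) for h in heads], axis=-1)
    return H @ W_out + b_out               # eq. (4)

rng = np.random.default_rng(0)
T, D_in, D_k, D_h, N_h, D_out = 5, 4, 3, 2, 3, 6
X = rng.normal(size=(T, D_in))
heads = [(rng.normal(size=(D_in, D_k)), rng.normal(size=(D_in, D_k)),
          rng.normal(size=(D_in, D_h))) for _ in range(N_h)]
W_out = rng.normal(size=(N_h * D_h, D_out))
b_out = rng.normal(size=D_out)
out = mhsa(X, heads, W_out, b_out)         # shape (T, D_out)
```

Note that, consistent with the equivariance property discussed above, permuting the rows of $\mathbf{X}$ permutes the rows of the output accordingly.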
+
+# 2.2 ATTENTION FOR IMAGES
+
+Convolutional layers are the de facto choice for building neural networks that operate on images. We recall that, given an image tensor $\mathbf{X} \in \mathbb{R}^{W \times H \times D_{in}}$ of width $W$ , height $H$ and $D_{in}$ channels, the output of a convolutional layer for pixel $(i,j)$ is given by
+
+$$
+\operatorname{Conv}(\boldsymbol{X})_{i,j,:} := \sum_{(\delta_{1},\delta_{2}) \in \mathbb{A}_{K}} \mathbf{X}_{i+\delta_{1}, j+\delta_{2},:} \mathbf{W}_{\delta_{1},\delta_{2},:,:} + \boldsymbol{b}, \tag{5}
+$$
+
+where $\mathbf{W}$ is the $K\times K\times D_{in}\times D_{out}$ weight tensor $^4$ , $\pmb {b}\in \mathbb{R}^{D_{out}}$ is the bias vector, and the set
+
+$$
+\mathbb {A} _ {K} := \left[ - \left\lfloor \frac {K}{2} \right\rfloor , \dots , \left\lfloor \frac {K}{2} \right\rfloor \right] \times \left[ - \left\lfloor \frac {K}{2} \right\rfloor , \dots , \left\lfloor \frac {K}{2} \right\rfloor \right]
+$$
+
+contains all possible shifts appearing when convolving the image with a $K \times K$ kernel.
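Equation (5) can be transcribed directly for a single output pixel. The sketch below builds the shift set $\mathbb{A}_K$ as defined above and skips out-of-range shifts, which amounts to zero padding at the image border (names and test values are illustrative).

```python
import numpy as np

# Direct transcription of eq. (5) at one output pixel (i, j), for odd K.
def shift_set(K):
    r = range(-(K // 2), K // 2 + 1)
    return [(d1, d2) for d1 in r for d2 in r]   # the set A_K

def conv_pixel(X, Wt, b, i, j):
    K = Wt.shape[0]
    out = b.astype(float).copy()
    for (d1, d2) in shift_set(K):
        if 0 <= i + d1 < X.shape[0] and 0 <= j + d2 < X.shape[1]:
            out += X[i + d1, j + d2] @ Wt[d1 + K // 2, d2 + K // 2]
    return out

X = np.ones((5, 5, 2))                      # W x H x D_in image
Wt = np.ones((3, 3, 2, 4))                  # K x K x D_in x D_out kernel
b = np.zeros(4)
center = conv_pixel(X, Wt, b, 2, 2)         # full 3x3 window: 3*3*2 = 18
```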
+
+In the following, we review how self-attention can be adapted from 1D sequences to images.
+
+With images, rather than tokens, we have query and key pixels $\mathbf{q}$ , $\mathbf{k} \in [W] \times [H]$ . Accordingly, the input is a tensor $\mathbf{X}$ of dimension $W \times H \times D_{in}$ and each attention score associates a query and a key pixel.
+
+To keep the formulas consistent with the 1D case, we abuse notation and slice tensors by using a 2D index vector: if $\pmb{p} = (i,j)$ , we write $\mathbf{X}_{\pmb{p},:}$ and $\mathbf{A}_{\pmb{p},:}$ to mean $\mathbf{X}_{i,j,:}$ and $\mathbf{A}_{i,j,:,:}$ , respectively. With this notation in place, the multi-head self-attention layer output at pixel $\pmb{q}$ can be expressed as follows:
+
+$$
+\operatorname{Self\text{-}Attention}(\boldsymbol{X})_{\boldsymbol{q},:} = \sum_{\boldsymbol{k}} \operatorname{softmax}\left(\mathbf{A}_{\boldsymbol{q},:}\right)_{\boldsymbol{k}} \mathbf{X}_{\boldsymbol{k},:} \boldsymbol{W}_{\mathrm{val}} \tag{6}
+$$
+
+and accordingly for the multi-head case.
+
+# 2.3 POSITIONAL ENCODING FOR IMAGES
+
+There are two types of positional encoding that have been used in transformer-based architectures: the absolute and the relative encoding (see also Table 3 in the Appendix).
+
+With absolute encodings, a (fixed or learned) vector $\mathbf{P}_{p,:}$ is assigned to each pixel $p$ . The computation of the attention scores we saw in eq. (2) can then be decomposed as follows:
+
+$$
+\begin{aligned} \mathbf{A}^{\mathrm{abs}}_{\boldsymbol{q},\boldsymbol{k}} &= (\mathbf{X}_{\boldsymbol{q},:} + \mathbf{P}_{\boldsymbol{q},:}) \boldsymbol{W}_{\mathrm{qry}} \boldsymbol{W}_{\mathrm{key}}^{\top} (\mathbf{X}_{\boldsymbol{k},:} + \mathbf{P}_{\boldsymbol{k},:})^{\top} \\ &= \mathbf{X}_{\boldsymbol{q},:} \boldsymbol{W}_{\mathrm{qry}} \boldsymbol{W}_{\mathrm{key}}^{\top} \mathbf{X}_{\boldsymbol{k},:}^{\top} + \mathbf{X}_{\boldsymbol{q},:} \boldsymbol{W}_{\mathrm{qry}} \boldsymbol{W}_{\mathrm{key}}^{\top} \mathbf{P}_{\boldsymbol{k},:}^{\top} + \mathbf{P}_{\boldsymbol{q},:} \boldsymbol{W}_{\mathrm{qry}} \boldsymbol{W}_{\mathrm{key}}^{\top} \mathbf{X}_{\boldsymbol{k},:}^{\top} + \mathbf{P}_{\boldsymbol{q},:} \boldsymbol{W}_{\mathrm{qry}} \boldsymbol{W}_{\mathrm{key}}^{\top} \mathbf{P}_{\boldsymbol{k},:}^{\top} \end{aligned} \tag{7}
+$$
+
+where $\mathbf{q}$ and $\mathbf{k}$ correspond to the query and key pixels, respectively.
+
+The relative positional encoding was introduced by Dai et al. (2019). The main idea is to only consider the position difference between the query pixel (pixel we compute the representation of) and the key pixel (pixel we attend) instead of the absolute position of the key pixel:
+
+$$
+\mathbf{A}^{\mathrm{rel}}_{\boldsymbol{q},\boldsymbol{k}} := \mathbf{X}_{\boldsymbol{q},:}^{\top} \boldsymbol{W}_{\mathrm{qry}}^{\top} \boldsymbol{W}_{\mathrm{key}} \mathbf{X}_{\boldsymbol{k},:} + \mathbf{X}_{\boldsymbol{q},:}^{\top} \boldsymbol{W}_{\mathrm{qry}}^{\top} \widehat{\boldsymbol{W}}_{\mathrm{key}} \boldsymbol{r}_{\delta} + \boldsymbol{u}^{\top} \boldsymbol{W}_{\mathrm{key}} \mathbf{X}_{\boldsymbol{k},:} + \boldsymbol{v}^{\top} \widehat{\boldsymbol{W}}_{\mathrm{key}} \boldsymbol{r}_{\delta} \tag{8}
+$$
+
+In this manner, the attention scores only depend on the shift $\delta \coloneqq k - q$ . Above, the learnable vectors $\mathbf{u}$ and $\mathbf{v}$ are unique for each head, whereas for every shift $\delta$ the relative positional encoding $\mathbf{r}_{\delta} \in \mathbb{R}^{D_p}$ is shared by all layers and heads. Moreover, now the key weights are split into two types: $\mathbf{W}_{key}$ pertain to the input and $\widehat{\mathbf{W}}_{key}$ to the relative position of pixels.
+
+# 3 SELF-ATTENTION AS A CONVOLUTIONAL LAYER
+
+This section derives sufficient conditions such that a multi-head self-attention layer can simulate a convolutional layer. Our main result is the following:
+
+Theorem 1. A multi-head self-attention layer with $N_{h}$ heads of dimension $D_{h}$ , output dimension $D_{out}$ and a relative positional encoding of dimension $D_{p} \geq 3$ can express any convolutional layer of kernel size $\sqrt{N_{h}} \times \sqrt{N_{h}}$ and $\min(D_{h}, D_{out})$ output channels.
+
+The theorem is proven constructively by selecting the parameters of the multi-head self-attention layer so that the latter acts like a convolutional layer. In the proposed construction, the attention scores of each self-attention head should attend to a different relative shift within the set $\Delta_K = \{-\lfloor K / 2\rfloor ,\ldots ,\lfloor K / 2\rfloor \} ^2$ of all pixel shifts in a $K\times K$ kernel. The exact condition can be found in the statement of Lemma 1.
+
+Then, Lemma 2 shows that the aforementioned condition is satisfied for the relative positional encoding that we refer to as the quadratic encoding:
+
+$$
+\boldsymbol{v}^{(h)} := -\alpha^{(h)} \left(1, -2\boldsymbol{\Delta}_{1}^{(h)}, -2\boldsymbol{\Delta}_{2}^{(h)}\right) \quad \boldsymbol{r}_{\delta} := \left(\|\boldsymbol{\delta}\|^{2}, \delta_{1}, \delta_{2}\right) \quad \boldsymbol{W}_{\mathrm{qry}} = \boldsymbol{W}_{\mathrm{key}} := \mathbf{0} \quad \widehat{\boldsymbol{W}}_{\mathrm{key}} := \boldsymbol{I} \tag{9}
+$$
+
+The learned parameters $\pmb{\Delta}^{(h)} = (\pmb{\Delta}_1^{(h)},\pmb{\Delta}_2^{(h)})$ and $\alpha^{(h)}$ determine the center and width of attention of each head, respectively. On the other hand, $\delta = (\delta_{1},\delta_{2})$ is fixed and expresses the relative shift between query and key pixels.
+
+It is important to stress that the above encoding is not the only one for which the conditions of Lemma 1 are satisfied. In fact, in our experiments, the relative encoding learned by the neural network also matched the conditions of the lemma (despite being different from the quadratic encoding). Nevertheless, the encoding defined above is very efficient in terms of size, as only $D_{p} = 3$ dimensions suffice to encode the relative position of pixels, while also reaching similar or better empirical performance (than the learned one).
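The effect of the quadratic encoding can be checked numerically: the positional score $\boldsymbol{v}^{(h)} \cdot \boldsymbol{r}_{\delta}$ equals $-\alpha^{(h)}(\|\boldsymbol{\delta} - \boldsymbol{\Delta}^{(h)}\|^2 - \|\boldsymbol{\Delta}^{(h)}\|^2)$ , so as $\alpha^{(h)}$ grows the softmax over shifts concentrates on $\boldsymbol{\delta} = \boldsymbol{\Delta}^{(h)}$ . A small sketch with toy values for $\alpha$ and $\boldsymbol{\Delta}$ :

```python
import numpy as np

# Scores of the quadratic encoding (9) over the 9 shifts of a 3x3 kernel.
alpha = 10.0
Delta = np.array([1.0, -1.0])               # attention center of one head
shifts = [np.array([d1, d2]) for d1 in (-1, 0, 1) for d2 in (-1, 0, 1)]

v = -alpha * np.array([1.0, -2.0 * Delta[0], -2.0 * Delta[1]])

def score(delta):
    r_delta = np.array([delta @ delta, delta[0], delta[1]])
    return v @ r_delta                      # = -alpha*(|d - Delta|^2 - |Delta|^2)

scores = np.array([score(d) for d in shifts])
probs = np.exp(scores - scores.max())
probs /= probs.sum()                        # softmax over the 9 shifts
```

Already at $\alpha = 10$ nearly all attention mass falls on the shift equal to $\boldsymbol{\Delta}$ , matching the condition of Lemma 1.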
+
+The theorem covers the general convolution operator as defined in eq. (17). However, machine learning practitioners using differential programming frameworks (Paszke et al., 2017; Abadi et al., 2015) might question if the theorem holds for all hyper-parameters of 2D convolutional layers:
+
+- Padding: a multi-head self-attention layer uses by default the "SAME" padding while a convolutional layer would decrease the image size by $K - 1$ pixels. The correct way to alleviate these boundary effects is to pad the input image with $\lfloor K / 2 \rfloor$ zeros on each side. In this case, the cropped output of a MHSA and a convolutional layer are the same.
+- Stride: a strided convolution can be seen as a convolution followed by a fixed pooling operation—with computational optimizations. Theorem 1 is defined for stride 1, but a fixed pooling layer could be appended to the Self-Attention layer to simulate any stride.
+- Dilation: a multi-head self-attention layer can express any dilated convolution as each head can attend a value at any pixel shift and form a (dilated) grid pattern.
+
+Remark for the 1D case. Convolutional layers acting on sequences are commonly used in the literature for text (Kim, 2014), as well as audio (van den Oord et al., 2016) and time series (Franceschi et al., 2019). Theorem 1 can be straightforwardly extended to show that multi-head self-attention with $N_{h}$ heads can also simulate a 1D convolutional layer with a kernel of size $K = N_{h}$ with $\min(D_{h}, D_{out})$ output channels using a positional encoding of dimension $D_{p} \geq 2$ . Since we have not tested empirically if the preceding construction matches the behavior of 1D self-attention in practice, we cannot claim that it actually learns to convolve an input sequence—only that it has the capacity to do so.
+
+# PROOF OF MAIN THEOREM
+
+The proof follows directly from Lemmas 1 and 2 stated below:
+
+Lemma 1. Consider a multi-head self-attention layer consisting of $N_{h} = K^{2}$ heads, $D_{h} \geq D_{out}$ and let $f: [N_{h}] \to \mathbb{A}_{K}$ be a bijective mapping of heads onto shifts. Further, suppose that for every head the following holds:
+
+$$
+\operatorname{softmax}\left(\boldsymbol{A}_{\boldsymbol{q},:}^{(h)}\right)_{\boldsymbol{k}} = \begin{cases} 1 & \text{if } f(h) = \boldsymbol{q} - \boldsymbol{k} \\ 0 & \text{otherwise.} \end{cases} \tag{10}
+$$
+
+Then, for any convolutional layer with a $K \times K$ kernel and $D_{out}$ output channels, there exists $\{\mathbf{W}_{val}^{(h)}\}_{h \in [N_h]}$ such that $\mathrm{MHSA}(\mathbf{X}) = \mathrm{Conv}(\mathbf{X})$ for every $\mathbf{X} \in \mathbb{R}^{W \times H \times D_{in}}$ .
+
+
+Figure 1: Illustration of a Multi-Head Self-Attention layer applied to a tensor image $\mathbf{X}$ . Each head $h$ attends to pixel values around shift $\boldsymbol{\Delta}^{(h)}$ and learns a filter matrix $\boldsymbol{W}_{val}^{(h)}$ . We show attention maps computed for a query pixel at position $\boldsymbol{q}$ .
+
+Proof. Our first step will be to rework the expression of the Multi-Head Self-Attention operator from equation (1) and equation (4) such that the effect of the multiple heads becomes more transparent:
+
+$$
+\operatorname{MHSA}(\boldsymbol{X}) = \boldsymbol{b}_{\mathrm{out}} + \sum_{h \in [N_{h}]} \operatorname{softmax}\left(\boldsymbol{A}^{(h)}\right) \boldsymbol{X} \underbrace{\boldsymbol{W}_{\mathrm{val}}^{(h)} \boldsymbol{W}_{\mathrm{out}}[(h-1)D_{h} + 1 : hD_{h} + 1]}_{\boldsymbol{W}^{(h)}} \tag{11}
+$$
+
+Note that each head's value matrix $\mathbf{W}_{val}^{(h)} \in \mathbb{R}^{D_{in} \times D_h}$ and each block of the projection matrix $\mathbf{W}_{out}$ of dimension $D_h \times D_{out}$ are learned. Assuming that $D_h \geq D_{out}$ , we can replace each pair of matrices by a learned matrix $\mathbf{W}^{(h)}$ for each head. We consider one output pixel of the multi-head self-attention:
+
+$$
+\operatorname{MHSA}(\boldsymbol{X})_{\boldsymbol{q},:} = \sum_{h \in [N_{h}]} \left(\sum_{\boldsymbol{k}} \operatorname{softmax}\left(\boldsymbol{A}_{\boldsymbol{q},:}^{(h)}\right)_{\boldsymbol{k}} \boldsymbol{X}_{\boldsymbol{k},:}\right) \boldsymbol{W}^{(h)} + \boldsymbol{b}_{\mathrm{out}} \tag{12}
+$$
+
+Due to the conditions of the Lemma, for the $h$ -th attention head the attention probability is one when $\pmb{k} = \pmb{q} - \pmb{f}(h)$ and zero otherwise. The layer's output at pixel $\pmb{q}$ is thus equal to
+
+$$
+\operatorname{MHSA}(\mathbf{X})_{\boldsymbol{q},:} = \sum_{h \in [N_h]} \mathbf{X}_{\boldsymbol{q} - \boldsymbol{f}(h),:} \, \boldsymbol{W}^{(h)} + \boldsymbol{b}_{\text{out}} \tag{13}
+$$
+
+For $K = \sqrt{N_h}$ , the above is equivalent to a convolutional layer as expressed in eq. (17): there is a one-to-one mapping (implied by the map $\pmb{f}$ ) between the matrices $\mathbf{W}^{(h)}$ for $h \in [N_h]$ and the matrices $\mathbf{W}_{k_1,k_2,:,:}$ for all $(k_{1},k_{2})\in [K]^{2}$ .
+
+Remark about $D_h$ and $D_{out}$ . It is frequent in transformer-based architectures to set $D_h = D_{out} / N_h$ , hence $D_h < D_{out}$ . In that case, $\mathbf{W}^{(h)}$ is a product through a $D_h$ -dimensional bottleneck and thus has rank at most $D_h$ , which does not suffice to express every convolutional layer with $D_{out}$ channels. Nevertheless, any $D_h$ out of the $D_{out}$ outputs of $\mathrm{MHSA}(\mathbf{X})$ can express the output of any convolutional layer with $D_h$ output channels. To cover both cases, in the statement of the main theorem we assert that the output channels of the convolutional layer should be $\min(D_h, D_{out})$ . In practice, we advise concatenating heads of dimension $D_h = D_{out}$ instead of splitting the $D_{out}$ dimensions among heads, to obtain an exact re-parametrization and no "unused" channels.
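The re-parametrization in the proof can be checked numerically. Below is a minimal NumPy sketch (our own illustration, not the authors' released code): $N_h = K^2$ heads with one-hot attention at the shifts $f(h)$ reproduce a $K \times K$ convolution. Border pixels whose shifted key falls outside the grid get an all-zero attention row here, which a true softmax cannot produce; this is a simplification of the padding behavior.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, D_in, D_out, K = 5, 6, 2, 3, 3
X = rng.standard_normal((H, W, D_in))
# one matrix W_{Delta,:,:} per shift of the receptive field, as in eq. (17)
shifts = [(d1, d2) for d1 in range(-(K // 2), K // 2 + 1)
                   for d2 in range(-(K // 2), K // 2 + 1)]
W_conv = rng.standard_normal((K * K, D_in, D_out))

def conv(X):
    """Direct convolution with zero padding, following eq. (17)."""
    out = np.zeros((H, W, D_out))
    for q1 in range(H):
        for q2 in range(W):
            for (d1, d2), Wd in zip(shifts, W_conv):
                k1, k2 = q1 + d1, q2 + d2
                if 0 <= k1 < H and 0 <= k2 < W:
                    out[q1, q2] += X[k1, k2] @ Wd
    return out

def mhsa_hard(X):
    """Sum over heads of softmax(A^{(h)}) X W^{(h)} with one-hot attention."""
    X_flat = X.reshape(H * W, D_in)
    out = np.zeros((H * W, D_out))
    for h, (d1, d2) in enumerate(shifts):  # head h attends at a fixed shift
        A = np.zeros((H * W, H * W))       # hard attention matrix, one-hot rows
        for q1 in range(H):
            for q2 in range(W):
                k1, k2 = q1 + d1, q2 + d2
                if 0 <= k1 < H and 0 <= k2 < W:
                    A[q1 * W + q2, k1 * W + k2] = 1.0
        out += A @ X_flat @ W_conv[h]      # W^{(h)} plays the role of W_{Delta,:,:}
    return out.reshape(H, W, D_out)

assert np.allclose(conv(X), mhsa_hard(X))
```

The two computations agree exactly, illustrating the one-to-one mapping between heads and kernel positions.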
+
+Lemma 2. There exists a relative encoding scheme $\{\pmb{r}_{\delta} \in \mathbb{R}^{D_p}\}_{\delta \in \mathbb{Z}^2}$ with $D_p \geq 3$ and parameters $\pmb{W}_{qry}, \pmb{W}_{key}, \widehat{\pmb{W}}_{key}, \pmb{u}$ with $D_p \leq D_k$ such that, for every $\Delta \in \Delta_K$ there exists some vector $\pmb{v}$ (conditioned on $\Delta$ ) yielding $\text{softmax}(\pmb{A}_{q,:})_k = 1$ if $\pmb{k} - \pmb{q} = \Delta$ and zero, otherwise.
+
+Proof. We show by construction the existence of a $D_p = 3$ dimensional relative encoding scheme yielding the required attention probabilities.
+
+As the attention probabilities are independent of the input tensor $\mathbf{X}$ , we set $\pmb{W}_{key} = \pmb{W}_{qry} = \mathbf{0}$ which leaves only the last term of eq. (8). Setting $\widehat{\pmb{W}}_{key} \in \mathbb{R}^{D_k \times D_p}$ to the identity matrix (with appropriate row padding), yields $\pmb{A}_{q,k} = \pmb{v}^\top \pmb{r}_\delta$ where $\delta \coloneqq \pmb{k} - \pmb{q}$ . Above, we have assumed that $D_p \leq D_k$ such that no information from $\pmb{r}_\delta$ is lost.
+
+Now, suppose that we could write:
+
+$$
+\boldsymbol{A}_{\boldsymbol{q},\boldsymbol{k}} = -\alpha \left(\left\| \boldsymbol{\delta} - \boldsymbol{\Delta} \right\|^2 + c\right) \tag{14}
+$$
+
+for some constant $c$ . In the above expression, the maximum attention score over $\mathbf{A}_{\boldsymbol{q},:}$ is $-\alpha c$ and it is reached for $\mathbf{A}_{\boldsymbol{q},\boldsymbol{k}}$ with $\delta = \Delta$ . On the other hand, the coefficient $\alpha$ can be used to arbitrarily scale the difference between $\mathbf{A}_{\boldsymbol{q},\Delta}$ and the other attention scores.
+
+In this way, for $\delta = \Delta$ , we have
+
+$$
+\begin{array}{rl} \lim_{\alpha \to \infty} \operatorname{softmax}(\mathbf{A}_{\boldsymbol{q},:})_{\boldsymbol{k}} &= \lim_{\alpha \to \infty} \dfrac{e^{-\alpha \left(\|\boldsymbol{\delta} - \boldsymbol{\Delta}\|^2 + c\right)}}{\sum_{\boldsymbol{k}'} e^{-\alpha \left(\|(\boldsymbol{k}' - \boldsymbol{q}) - \boldsymbol{\Delta}\|^2 + c\right)}} \\ &= \lim_{\alpha \to \infty} \dfrac{e^{-\alpha \|\boldsymbol{\delta} - \boldsymbol{\Delta}\|^2}}{\sum_{\boldsymbol{k}'} e^{-\alpha \|(\boldsymbol{k}' - \boldsymbol{q}) - \boldsymbol{\Delta}\|^2}} = \dfrac{1}{1 + \lim_{\alpha \to \infty} \sum_{\boldsymbol{k}' \neq \boldsymbol{k}} e^{-\alpha \|(\boldsymbol{k}' - \boldsymbol{q}) - \boldsymbol{\Delta}\|^2}} = 1 \end{array}
+$$
+
+and for $\delta \neq \Delta$ , the equation becomes $\lim_{\alpha \to \infty} \operatorname{softmax}(\mathbf{A}_{q,:})_k = 0$ , exactly as needed to satisfy the lemma statement.
+
+What remains is to prove that there exist $\pmb{v}$ and $\{\pmb{r}_{\delta}\}_{\delta \in \mathbb{Z}^2}$ for which eq. (14) holds. Expanding the RHS of the equation, we have $-\alpha (\| \pmb{\delta} - \pmb{\Delta}\|^2 + c) = -\alpha (\| \pmb{\delta}\|^2 + \| \pmb{\Delta}\|^2 - 2\langle \pmb{\delta},\pmb{\Delta}\rangle + c)$ . Now if we set $\pmb{v} = -\alpha\,(1, -2\Delta_{1}, -2\Delta_{2})$ and $\pmb{r}_{\delta} = (\| \pmb{\delta}\|^{2}, \delta_{1}, \delta_{2})$ , then
+
+$$
+\mathbf {A} _ {\boldsymbol {q}, \boldsymbol {k}} = \boldsymbol {v} ^ {\top} \boldsymbol {r} _ {\boldsymbol {\delta}} = - \alpha (\| \boldsymbol {\delta} \| ^ {2} - 2 \Delta_ {1} \boldsymbol {\delta} _ {1} - 2 \Delta_ {2} \boldsymbol {\delta} _ {2}) = - \alpha (\| \boldsymbol {\delta} \| ^ {2} - 2 \langle \boldsymbol {\delta}, \mathbf {\Delta} \rangle) = - \alpha (\| \boldsymbol {\delta} - \mathbf {\Delta} \| ^ {2} - \| \mathbf {\Delta} \| ^ {2}),
+$$
+
+which matches eq. (14) with $c = -\|\pmb{\Delta}\|^2$ and the proof is concluded.
+
+Remark on the magnitude of $\alpha$ . The exact representation of one pixel requires $\alpha$ (or the matrices $\pmb{W}_{qry}$ and $\pmb{W}_{key}$ ) to be arbitrarily large, even though the attention probabilities of all other pixels converge exponentially to 0 as $\alpha$ grows. Nevertheless, practical implementations always rely on finite precision arithmetic, for which a constant $\alpha$ suffices to satisfy our construction. For instance, since the smallest positive float32 scalar is approximately $10^{-45}$ , setting $\alpha = 46$ would suffice to obtain hard attention.
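The construction of Lemma 2 and the effect of $\alpha$ can be reproduced in a few lines. The sketch below (our own, with hypothetical variable names) builds $\pmb{r}_\delta = (\|\delta\|^2, \delta_1, \delta_2)$ and $\pmb{v} = -\alpha(1, -2\Delta_1, -2\Delta_2)$ and checks that the softmax over key pixels concentrates on $\pmb{k} = \pmb{q} + \Delta$ as $\alpha$ grows:

```python
import numpy as np

H = W = 7
q = np.array([3, 3])
Delta = np.array([1, -1])                  # desired attention shift
keys = np.array([(k1, k2) for k1 in range(H) for k2 in range(W)])
deltas = keys - q                          # delta = k - q for every key pixel

def attn_probs(alpha):
    v = -alpha * np.array([1.0, -2.0 * Delta[0], -2.0 * Delta[1]])
    r = np.stack([(deltas ** 2).sum(axis=1), deltas[:, 0], deltas[:, 1]], axis=1)
    scores = r @ v                         # A_{q,k} = v^T r_delta
    scores -= scores.max()                 # softmax is shift invariant
    p = np.exp(scores)
    return p / p.sum()

target = np.flatnonzero((keys == q + Delta).all(axis=1))[0]
assert attn_probs(0.5)[target] < 0.9       # soft, Gaussian-shaped attention
assert attn_probs(46.0)[target] > 1 - 1e-15  # numerically hard attention
```

In float64 the target probability already rounds to 1.0 at $\alpha = 46$, since all other scores are at least $46$ below the maximum.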
+
+# 4 EXPERIMENTS
+
+The aim of this section is to validate the applicability of our theoretical results—which state that self-attention can perform convolution—and to examine whether self-attention layers in practice do actually learn to operate like convolutional layers when trained on standard image classification tasks. In particular, we study the relationship between self-attention and convolution with quadratic and learned relative positional encodings. We find that, in both cases, the learned attention probabilities tend to respect the conditions of Lemma 1, supporting our hypothesis.
+
+# 4.1 IMPLEMENTATION DETAILS
+
+We study a fully attentional model consisting of six multi-head self-attention layers. As it has already been shown by Bello et al. (2019) that combining attention features with convolutional features improves performance on CIFAR-100 and ImageNet, we do not focus on attaining state-of-the-art performance. Nevertheless, to validate that our model learns a meaningful classifier, we compare it to the standard ResNet18 (He et al., 2015) on the CIFAR-10 dataset (Krizhevsky et al.). In all experiments, we use a $2 \times 2$ invertible down-sampling (Jacobsen et al., 2018) on the input to reduce the size of the image. As the size of the attention coefficient tensors (stored during the forward pass) scales quadratically with the size of the input image, full attention cannot be applied to bigger images. The fixed-size representation of the input image is computed as the average pooling of the last layer representations and given to a linear classifier.
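The $2 \times 2$ invertible down-sampling can be sketched as a space-to-depth rearrangement (our own minimal illustration, assuming the standard pixel-shuffle construction of Jacobsen et al. (2018)): each $2 \times 2$ block of pixels moves into the channel dimension, so no information is lost, unlike strided pooling.

```python
import numpy as np

def invertible_downsample(x, s=2):
    # (H, W, C) -> (H/s, W/s, s*s*C): move each s x s spatial block into channels
    H, W, C = x.shape
    return (x.reshape(H // s, s, W // s, s, C)
             .transpose(0, 2, 1, 3, 4)
             .reshape(H // s, W // s, s * s * C))

def invertible_upsample(y, s=2):
    # exact inverse: move the extra channels back to their spatial positions
    Hs, Ws, C4 = y.shape
    C = C4 // (s * s)
    return (y.reshape(Hs, Ws, s, s, C)
             .transpose(0, 2, 1, 3, 4)
             .reshape(Hs * s, Ws * s, C))

x = np.random.default_rng(0).standard_normal((32, 32, 3))
y = invertible_downsample(x)
assert y.shape == (16, 16, 12)
assert np.allclose(invertible_upsample(y), x)  # exactly invertible
```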
+
+
+Figure 2: Test accuracy on CIFAR-10.
+
+| Models | accuracy | # of params | # of FLOPS |
+| --- | --- | --- | --- |
+| ResNet18 | 0.938 | 11.2M | 1.1B |
+| SA quadratic emb. | 0.938 | 12.1M | 6.2B |
+| SA learned emb. | 0.918 | 12.3M | 6.2B |
+| SA learned emb. + content | 0.871 | 29.5M | 15B |
+
+Table 1: Test accuracy on CIFAR-10 and model sizes. SA stands for Self-Attention.
+
+
+Figure 3: Centers of attention of each attention head (different colors) at layer 4 during the training with quadratic relative positional encoding. The central black square is the query pixel, whereas solid and dotted circles represent the $50\%$ and $90\%$ percentiles of each Gaussian, respectively.
+
+
+
+
+
+
+
+We used the PyTorch library (Paszke et al., 2017) and based our implementation on PyTorch Transformers. We release our code on GitHub, and the hyper-parameters are listed in Table 2 (Appendix).
+
+Remark on accuracy. To verify that our self-attention models perform reasonably well, we display in Figure 2 the evolution of the test accuracy on CIFAR-10 over the 300 epochs of training for our self-attention models against a small ResNet (Table 1). The ResNet converges faster, but we cannot ascertain whether this corresponds to an inherent property of the architecture or an artifact of the adopted optimization procedures. Our implementation could be optimized to exploit the locality of Gaussian attention probabilities and significantly reduce the number of FLOPS. We observed that learned embeddings with content-based attention were harder to train, probably due to their increased number of parameters. We believe that the performance gap can be bridged to match the ResNet performance, but this is not the focus of this work.
+
+# 4.2 QUADRATIC ENCODING
+
+As a first step, we aim to verify that, with the relative position encoding introduced in equation (9), attention layers learn to behave like convolutional layers. We train nine attention heads at each layer to be on par with the $3 \times 3$ kernels used predominantly by the ResNet architecture. The center of attention of each head $h$ is initialized to $\pmb{\Delta}^{(h)} \sim \mathcal{N}(\mathbf{0}, 2\mathbf{I}_2)$ .
+
+Figure 3 shows how the initial positions of the heads (different colors) at layer 4 changed during training. We can see that after optimization, the heads attend to specific pixels of the image, forming a grid around the query pixel. This confirms our intuition that self-attention applied to images learns convolutional filters around the queried pixel.
+
+Figure 4 displays all attention heads at each layer of the model at the end of training. It can be seen that in the first few layers the heads tend to focus on local patterns (layers 1 and 2), while deeper layers (layers 3-6) also attend to larger patterns by positioning the center of attention further from the queried pixel position. We also include in the Appendix a plot of the attention positions for a higher number of heads ( $N_{h} = 16$ ). Figure 14 displays both local patterns similar to CNNs and long-range dependencies. Interestingly, attention heads do not overlap and seem to take an arrangement maximizing the coverage of the input space.
+
+
+Figure 4: Centers of attention of each attention head (different colors) for the 6 self-attention layers using quadratic positional encoding. The central black square is the query pixel, whereas solid and dotted circles represent the $50\%$ and $90\%$ percentiles of each Gaussian, respectively.
+
+
+
+
+
+
+
+
+
+
+
+# 4.3 LEARNED RELATIVE POSITIONAL ENCODING
+
+We move on to study the positional encoding used in practice by fully-attentional models on images.
+
+We implemented the 2D relative positional encoding scheme used by Ramachandran et al. (2019); Bello et al. (2019): we learn a $\lfloor D_p / 2\rfloor$ -dimensional position encoding vector for each row and each column pixel shift. Hence, the relative positional encoding of a key pixel at position $k$ with respect to a query pixel at position $q$ is the concatenation of the row shift embedding $\delta_{1}$ and the column shift embedding $\delta_{2}$ (where $\delta = k - q$ ). We chose $D_{p} = D_{out} = 400$ in the experiment. We differ from their (unpublished) implementation in the following points: (i) we do not use a convolutional stem and ResNet bottlenecks for downsampling, but only a $2\times 2$ invertible downsampling layer (Jacobsen et al., 2018) at the input, (ii) we use $D_h = D_{out}$ instead of $D_{h} = D_{out} / N_{h}$ , backed by our theory that the effective number of learned filters is $\min(D_h,D_{out})$ .
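The concatenated 2D relative encoding can be sketched as follows (our own illustration; names are hypothetical and not from any released implementation): each row shift and each column shift gets a learned $\lfloor D_p/2 \rfloor$-dimensional vector, and the encoding of a key relative to a query concatenates the two.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8
D_p = 400                                            # D_p = D_out = 400 here
row_emb = rng.standard_normal((2 * H - 1, D_p // 2))  # row shifts -(H-1)..H-1
col_emb = rng.standard_normal((2 * W - 1, D_p // 2))  # column shifts -(W-1)..W-1

def rel_encoding(q, k):
    d1, d2 = k[0] - q[0], k[1] - q[1]                # delta = k - q
    return np.concatenate([row_emb[d1 + H - 1], col_emb[d2 + W - 1]])

r = rel_encoding((3, 4), (5, 2))
assert r.shape == (D_p,)
# the positional term of the attention score is then a dot product v^T r_delta
v = rng.standard_normal(D_p)
score = v @ r
```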
+
+At first, we discard the input data and compute the attention scores solely as the last term of eq. (8). The attention probabilities of each head at each layer are displayed on Figure 5. The figure confirms our hypothesis for the first two layers and partially for the third: even when left to learn the positional encoding scheme from randomly initialized vectors, certain self-attention heads (depicted on the left) learn to attend to individual pixels, closely matching the condition of Lemma 1 and thus Theorem 1. At the same time, other heads pay attention to horizontally-symmetric but non-localized patterns, as well as to long-range pixel inter-dependencies.
+
+We move on to a more realistic setting where the attention scores are computed using both positional and content-based attention (i.e., $q^{\top}k + q^{\top}r$ in (Ramachandran et al., 2019)) which corresponds to a full-blown standalone self-attention model.
+
+The attention probabilities of each head at each layer are displayed in Figure 6. We average the attention probabilities over a batch of 100 test images to outline the focus of each head and remove the dependency on the input image. Our hypothesis is confirmed for some heads of layers 2 and 3: even when left to learn the encoding from the data, certain self-attention heads only exploit position-based attention to attend to distinct pixels at a fixed shift from the query pixel, reproducing the receptive field of a convolutional kernel. Other heads use more content-based attention (see Figures 8 to 10 in the Appendix for non-averaged probabilities), leveraging the advantage of self-attention over CNNs, which does not contradict our theory. In practice, it was shown by Bello et al. (2019) that combining CNN and self-attention features outperforms either taken separately. Our experiments show that such a combination is learned when optimizing an unconstrained fully-attentional model.
+
+The similarity between convolution and multi-head self-attention is striking when the query pixel is slid over the image: the localized attention patterns visible in Figure 6 follow the query pixel. This characteristic behavior materializes when comparing Figure 6 with the attention probabilities at a different query pixel (see Figure 7 in the Appendix). Attention patterns in layers 2 and 3 are not only localized but stand at a constant shift from the query pixel, similarly to convolving the receptive field of a convolutional kernel over an image. This phenomenon is made evident on our interactive website. This tool is designed to explore different components of attention for diverse images, with or without content-based attention. We believe that it is a useful instrument for further understanding how MHSA learns to process images.
+
+
+Figure 5: Attention probabilities of each head (column) at each layer (row) using learned relative positional encoding without content-based attention. The central black square is the query pixel. We reordered the heads for visualization and zoomed on the 7x7 pixels around the query pixel.
+
+
+Figure 6: Attention probabilities for a model with 6 layers (rows) and 9 heads (columns) using learned relative positional encoding and content-content based attention. Attention maps are averaged over 100 test images to display head behavior and remove the dependence on the input content. The black square is the query pixel. More examples are presented in Appendix A.
+
+# 5 RELATED WORK
+
+In this section, we review the known differences and similarities between CNNs and transformers.
+
+The use of CNNs for text—at word level (Gehring et al., 2017) or character level (Kim, 2014)—is less common than transformers (or RNNs). Transformers and convolutional models have been extensively compared empirically on tasks of Natural Language Processing and Neural Machine Translation. It was observed that transformers have a competitive advantage over convolutional models applied to text (Vaswani et al., 2017). It is only recently that Bello et al. (2019); Ramachandran et al. (2019) used transformers on images and showed that they achieve similar accuracy to ResNets. However, their comparison only covers performance, number of parameters, and FLOPS, but not expressive power.
+
+Beyond performance and computational-cost comparisons of transformers and CNN, the study of expressiveness of these architectures has focused on their ability to capture long-term dependencies (Dai et al., 2019). Another interesting line of research has demonstrated that transformers are Turing-complete (Dehghani et al., 2018; Pérez et al., 2019), which is an important theoretical result but is not informative for practitioners. To the best of our knowledge, we are the first to show that the class of functions expressed by a layer of self-attention encloses all convolutional filters.
+
+The closest work in bridging the gap between attention and convolution is due to Andreoli (2019), who casts attention and convolution into a unified framework leveraging the tensor outer-product. In this framework, the receptive field of a convolution is represented by a "basis" tensor $\mathbf{A} \in \mathbb{R}^{K \times K \times H \times W \times H \times W}$ . For instance, the receptive field of a classical $K \times K$ convolutional kernel would be encoded by $\mathbf{A}_{\Delta, q, k} = \mathbb{1}\{k - q = \Delta\}$ for $\Delta \in \Delta_K$ . The author distinguishes this index-based convolution from content-based convolution, where $\mathbf{A}$ is computed from the value of the input, e.g., using a key/query dot-product attention. Our work moves further and presents sufficient conditions for relative positional encoding injected into the input content (as done in practice) to allow content-based convolution to express any index-based convolution. We further show experimentally that such behavior is learned in practice.
+
+# 6 CONCLUSION
+
+We showed that self-attention layers applied to images can express any convolutional layer (given sufficiently many heads) and that fully-attentional models learn to combine local behavior (similar to convolution) and global attention based on input content. More generally, fully-attentional models seem to learn a generalization of CNNs where the kernel pattern is learned at the same time as the filters—similar to deformable convolutions (Dai et al., 2017; Zampieri, 2019). Interesting directions for future work include translating existing insights from the rich CNNs literature back to transformers on various data modalities, including images, text and time series.
+
+# ACKNOWLEDGMENTS
+
+Jean-Baptiste Cordonnier is thankful to the Swiss Data Science Center (SDSC) for funding this work. Andreas Loukas was supported by the Swiss National Science Foundation (project "Deep Learning for Graph Structured Data", grant number PZ00P2 179981).
+
+# REFERENCES
+
+Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
+Jean-Marc Andreoli. Convolution, attention and structure embedding. NeurIPS 2019 workshop on Graph Representation Learning, Dec 13, 2019, Vancouver, BC, Canada, 2019.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
+Irwan Bello, Barret Zoph, Ashish Vaswani, Jonathon Shlens, and Quoc V. Le. Attention Augmented Convolutional Networks. arXiv:1904.09925 [cs], April 2019.
+Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. CoRR, abs/1703.06211, 2017.
+Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. CoRR, abs/1901.02860, 2019.
+Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. Universal transformers. CoRR, abs/1807.03819, 2018.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018.
+Jean-Yves Franceschi, Aymeric Dieuleveut, and Martin Jaggi. Unsupervised scalable representation learning for multivariate time series. In NeurIPS 2019, 2019.
+Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional sequence to sequence learning. CoRR, abs/1705.03122, 2017.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
+Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8): 1735-1780, 1997.
+Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 7132-7141, 2018.
+Jörn-Henrik Jacobsen, Arnold W.M. Smeulders, and Edouard Oyallon. i-revnet: Deep invertible networks. In International Conference on Learning Representations, 2018.
+Yoon Kim. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1746-1751, Doha, Qatar, October 2014. Association for Computational Linguistics. doi: 10.3115/v1/D14-1181.
+Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Cifar-10 (canadian institute for advanced research).
+Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop, 2017.
+
+Jorge Pérez, Javier Marinkovic, and Pablo Barceló. On the turing completeness of modern neural network architectures. CoRR, abs/1901.03429, 2019.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2018.
+Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jonathon Shlens. Stand-alone self-attention in vision models. CoRR, abs/1906.05909, 2019.
+Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alexander Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017.
+Xiaolong Wang, Ross B. Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 7794-7803, 2018.
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. Xlnet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237, 2019.
+Luca Zampieri. Geometric deep learning for volumetric computational fluid dynamics. pp. 67, 2019.
+
+# APPENDIX
+
+# A MORE EXAMPLES WITH CONTENT-BASED ATTENTION
+
+We present more examples of attention probabilities computed by self-attention model. Figure 7 shows average attention at a different query pixel than Figure 6. Figures 8 to 10 display attention for single images.
+
+
+Figure 7: Attention probabilities for a model with 6 layers (rows) and 9 heads (columns) using learned relative positional encoding and content-content attention. We present the average of 100 test images. The black square is the query pixel.
+
+
+Figure 8: Attention probabilities for a model with 6 layers (rows) and 9 heads (columns) using learned relative positional encoding and content-content based attention. The query pixel (black square) is on the frog head.
+
+
+Figure 9: Attention probabilities for a model with 6 layers (rows) and 9 heads (columns) using learned relative positional encoding and content-content based attention. The query pixel (black square) is on the horse head.
+
+
+Figure 10: Attention probabilities for a model with 6 layers (rows) and 9 heads (columns) using learned relative positional encoding and content-content based attention. The query pixel (black square) is on the building in the background.
+
+# B HYPER-PARAMETERS USED IN OUR EXPERIMENTS
+
+| Hyper-parameter | Value |
+| --- | --- |
+| number of layers | 6 |
+| number of heads | 9 |
+| hidden dimension | 400 |
+| intermediate dimension | 512 |
+| invertible pooling width | 2 |
+| dropout probability | 0.1 |
+| layer normalization epsilon | $10^{-12}$ |
+| number of epochs | 300 |
+| batch size | 100 |
+| learning rate | 0.1 |
+| weight decay | 0.0001 |
+| momentum | 0.9 |
+| cosine decay | ✓ |
+| linear warm up ratio | 0.05 |
+
+Table 2: Self-attention network parameters
+
+# C POSITIONAL ENCODING REFERENCES
+
+| Model | sinusoids | learned | quadratic | relative |
+| --- | --- | --- | --- | --- |
+| Vaswani et al. (2017) | ✓ | | | |
+| Radford et al. (2018) | | ✓ | | |
+| Devlin et al. (2018) | | ✓ | | |
+| Dai et al. (2019) | ✓ | | | ✓ |
+| Yang et al. (2019) | ✓ | | | ✓ |
+| Bello et al. (2019) | | ✓ | | ✓ |
+| Ramachandran et al. (2019) | | ✓ | | ✓ |
+| Our work | | ✓ | ✓ | ✓ |
+
+Table 3: Types of positional encoding used by transformers models applied to text (top) and images (bottom). When multiple encoding types have been tried, we report the one advised by the authors.
+
+# D GENERALIZED LEMMA 1
+
+We present a generalization of Lemma 1 that replaces the necessity of hard attention (to single pixels) by a milder assumption: the attention probabilities should span the grid receptive field. The conditions of this Lemma are still satisfied by Lemma 2, hence Theorem 1 follows.
+
+Lemma 3. Consider a multi-head self-attention layer consisting of $N_{h} \geq K^{2}$ heads, $D_{h} \geq D_{out}$ and let $\omega : [H] \times [W] \to [HW]$ be a pixel indexing. Then, for any convolutional layer with a $K \times K$ kernel and $D_{out}$ output channels, there exists $\{\mathbf{W}_{val}^{(h)}\}_{h \in [N_{h}]}$ and $\mathbf{W}_{out}$ such that $\mathrm{MHSA}(\mathbf{X}) = \mathrm{Conv}(\mathbf{X})$ for every $\mathbf{X} \in \mathbb{R}^{W \times H \times D_{in}}$ if and only if, for all $\mathbf{q} \in [H] \times [W]$ ,
+
+$$
+\operatorname{span}\left(\left\{\boldsymbol{e}_{\omega(\boldsymbol{q} + \boldsymbol{\Delta})} \in \mathbb{R}^{HW} : \boldsymbol{\Delta} \in \Delta_K \right\}\right) \subseteq \operatorname{span}\left(\left\{\operatorname{vect}\left(\operatorname{softmax}\left(\boldsymbol{A}_{\boldsymbol{q},:}^{(h)}\right)\right) : h \in [N_h] \right\}\right).
+$$
+
+
+Figure 11: Factorization of the vectorized weight matrices $V_{q}^{\mathrm{conv}}$ and $V_{q}^{\mathrm{SA}}$ used to compute the output at position $q$ for an input image of dimension $H \times W$ . On the left: a convolution of kernel $2 \times 2$ , on the right: a self-attention with $N_{h} = 5$ heads. $D_{in} = 2$ , $D_{out} = 3$ in both cases.
+
+
+
+Proof. Our first step will be to rework the expression of the Multi-Head Self-Attention operator from equation (1) and equation (4) such that the effect of the multiple heads becomes more transparent:
+
+$$
+\operatorname{MHSA}(\mathbf{X}) = \boldsymbol{b}_{\text{out}} + \sum_{h \in [N_h]} \operatorname{softmax}\left(\mathbf{A}^{(h)}\right) \mathbf{X} \underbrace{\boldsymbol{W}_{\text{val}}^{(h)} \boldsymbol{W}_{\text{out}}[(h-1)D_h + 1 : h D_h + 1]}_{\boldsymbol{W}^{(h)}} \tag{15}
+$$
+
+Note that each head's value matrix $\mathbf{W}_{val}^{(h)} \in \mathbb{R}^{D_{in} \times D_h}$ and each block of the projection matrix $\mathbf{W}_{out}$ of dimension $D_h \times D_{out}$ are learned. Assuming that $D_h \geq D_{out}$ , we can replace each pair of matrices by a learned matrix $\mathbf{W}^{(h)}$ for each head. We consider one output pixel of the multi-head self-attention and drop the bias term for simplicity:
+
+$$
+\operatorname{MHSA}(\mathbf{X})_{\mathbf{q},:} = \sum_{h \in [N_h]} \left(\sum_{\mathbf{k}} a_{\mathbf{q},\mathbf{k}}^{(h)} \mathbf{X}_{\mathbf{k},:}\right) \boldsymbol{W}^{(h)} = \sum_{\mathbf{k}} \mathbf{X}_{\mathbf{k},:} \underbrace{\left(\sum_{h \in [N_h]} a_{\mathbf{q},\mathbf{k}}^{(h)} \boldsymbol{W}^{(h)}\right)}_{\boldsymbol{W}_{\mathbf{q},\mathbf{k}}^{\mathrm{SA}} \in \mathbb{R}^{D_{in} \times D_{out}}}, \tag{16}
+$$
+
+with $a_{\pmb{q},\pmb{k}}^{(h)} = \mathrm{softmax}(\mathbf{A}_{\pmb{q},:}^{(h)})_{\pmb{k}}$ . We rewrite the output of a convolution at pixel $\pmb{q}$ in the same manner:
+
+$$
+\operatorname{Conv}(\mathbf{X})_{\mathbf{q},:} = \sum_{\boldsymbol{\Delta} \in \Delta_K} \mathbf{X}_{\mathbf{q} + \boldsymbol{\Delta},:} \mathbf{W}_{\boldsymbol{\Delta},:,:} = \sum_{\mathbf{k} \in [H] \times [W]} \mathbf{X}_{\mathbf{k},:} \underbrace{\mathbb{1}_{\{\mathbf{k} - \mathbf{q} \in \Delta_K\}} \mathbf{W}_{\mathbf{k} - \mathbf{q},:,:}}_{\mathbf{W}_{\mathbf{q},\mathbf{k}}^{\mathrm{conv}} \in \mathbb{R}^{D_{in} \times D_{out}}} \tag{17}
+$$
+
+Equality between equations (16) and (17) holds for any input $\mathbf{X}$ if and only if the linear transformations for each pair of key/query pixels are equal, i.e. $\pmb{W}_{\pmb{q},\pmb{k}}^{\mathrm{conv}} = \pmb{W}_{\pmb{q},\pmb{k}}^{\mathrm{SA}}\forall \pmb{q},\pmb{k}$ . We vectorize the weight matrices into matrices of dimension $D_{in}D_{out}\times HW$ as $\pmb{V}_{\pmb{q}}^{\mathrm{conv}}\coloneqq [\mathrm{vec}(\pmb{W}_{\pmb{q},\pmb{k}}^{\mathrm{conv}})]_{\pmb{k}\in [H]\times [W]}$ and $\pmb{V}_{\pmb{q}}^{\mathrm{SA}}\coloneqq [\mathrm{vec}(\pmb{W}_{\pmb{q},\pmb{k}}^{\mathrm{SA}})]_{\pmb{k}\in [H]\times [W]}$ . Hence, to show that $\mathrm{Conv}(\mathbf{X}) = \mathrm{MHSA}(\mathbf{X})$ for all $\mathbf{X}$ , we must show that $\pmb{V}_{\pmb{q}}^{\mathrm{conv}} = \pmb{V}_{\pmb{q}}^{\mathrm{SA}}$ for all $\pmb{q}$ .
+
+The matrix $\boldsymbol{V}_{\boldsymbol{q}}^{\mathrm{conv}}$ has a restricted support: only the columns associated with a pixel shift $\boldsymbol{\Delta} \in \Delta_K$ in the receptive field of pixel $\boldsymbol{q}$ can be non-zero. This leads to the factorization $\boldsymbol{V}_{\boldsymbol{q}}^{\mathrm{conv}} = \boldsymbol{W}^{\mathrm{conv}} \boldsymbol{E}_{\boldsymbol{q}}$ displayed in Figure 11, where $\boldsymbol{W}^{\mathrm{conv}} \in \mathbb{R}^{D_{in}D_{out} \times K^2}$ and $\boldsymbol{E}_{\boldsymbol{q}} \in \mathbb{R}^{K^2 \times HW}$ . Given an ordering of the shifts $\boldsymbol{\Delta} \in \Delta_K$ indexed by $j$ , set $(\boldsymbol{W}^{\mathrm{conv}})_{:,j} = \operatorname{vec}(\boldsymbol{W}_{\boldsymbol{\Delta},:,:})$ and $(\boldsymbol{E}_{\boldsymbol{q}})_{j,:} = \boldsymbol{e}_{\omega(\boldsymbol{q} + \boldsymbol{\Delta})}$ . On the other hand, we decompose $\boldsymbol{V}_{\boldsymbol{q}}^{\mathrm{SA}} = \boldsymbol{W}^{\mathrm{SA}} \boldsymbol{A}_{\boldsymbol{q}}$ with $(\boldsymbol{W}^{\mathrm{SA}})_{:,h} = \operatorname{vec}(\boldsymbol{W}^{(h)})$ and $(\boldsymbol{A}_{\boldsymbol{q}})_{h,i} = a_{\boldsymbol{q},\omega^{-1}(i)}^{(h)}$ .
+
+The proof is concluded by showing that $\operatorname{row}(\pmb{E}_q) \subseteq \operatorname{row}(\pmb{A}_q)$ is a necessary and sufficient condition for the existence of a $W^{\mathrm{SA}}$ such that any $V_q^{\mathrm{conv}} = W^{\mathrm{conv}}\pmb{E}_q$ can be written as $W^{\mathrm{SA}}\pmb{A}_q$ .
+
+Sufficient. Given that $\operatorname{row}(\pmb{E}_q) \subseteq \operatorname{row}(\pmb{A}_q)$ , there exists $\Phi \in \mathbb{R}^{K^2 \times N_h}$ such that $\pmb{E}_q = \Phi \pmb{A}_q$ and a valid decomposition is $\pmb{W}^{\mathrm{SA}} = \pmb{W}^{\mathrm{conv}}\Phi$ which gives $\pmb{W}^{\mathrm{SA}}\pmb{A}_q = \pmb{V}_q^{\mathrm{conv}}$ .
+
+Necessary. Assume there exists $\pmb{x} \in \mathbb{R}^{HW}$ such that $\pmb{x} \in \mathrm{row}(\pmb{E}_{\pmb{q}})$ and $\pmb{x} \notin \mathrm{row}(\pmb{A}_{\pmb{q}})$ and set $\pmb{x}^{\top}$ to be a row of $V_{q}^{\mathrm{conv}}$ . Then, $W^{\mathrm{SA}} A_{q} \neq V_{q}^{\mathrm{conv}}$ for any $W^{\mathrm{SA}}$ and there is no possible decomposition.
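
The sufficiency direction can be checked numerically. A minimal sketch with random matrices and illustrative toy dimensions (`K2`, `Nh`, `HW`, `D` are ours, not the paper's): constructing $\pmb{E}_q = \Phi\pmb{A}_q$ guarantees that $\pmb{W}^{\mathrm{SA}} = \pmb{W}^{\mathrm{conv}}\Phi$ reproduces $\pmb{V}_q^{\mathrm{conv}}$ exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
K2, Nh, HW, D = 9, 9, 16, 4            # K^2 shifts, N_h heads, H*W pixels, D_in*D_out

A_q = rng.standard_normal((Nh, HW))    # attention coefficients A_q
Phi = rng.standard_normal((K2, Nh))    # row(E_q) contained in row(A_q) by construction:
E_q = Phi @ A_q                        # E_q = Phi A_q
W_conv = rng.standard_normal((D, K2))  # vectorized convolution weights

V_conv = W_conv @ E_q                  # V_q^conv = W^conv E_q
W_SA = W_conv @ Phi                    # the claimed decomposition W^SA = W^conv Phi
assert np.allclose(W_SA @ A_q, V_conv) # hence W^SA A_q = V_q^conv
```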
+
+# E GENERALIZED QUADRATIC POSITIONAL ENCODING
+
+We noticed the similarity of the attention probabilities in the quadratic positional encoding (Section 3) to isotropic bivariate Gaussian distributions with bounded support:
+
+$$
+\operatorname {s o f t m a x} \left(\mathbf {A} _ {\mathbf {q},:}\right) _ {\mathbf {k}} = \frac {e ^ {- \alpha \left\| (\mathbf {k} - \mathbf {q}) - \boldsymbol {\Delta} \right\| ^ {2}}}{\sum_ {\mathbf {k} ^ {\prime} \in [ W ] \times [ H ]} e ^ {- \alpha \left\| (\mathbf {k} ^ {\prime} - \mathbf {q}) - \boldsymbol {\Delta} \right\| ^ {2}}}. \tag {18}
+$$
+
+Building on this observation, we further extended our attention mechanism to non-isotropic Gaussian distributions over pixel positions. Each head is parametrized by a center of attention $\Delta$ and a covariance matrix $\Sigma$ , yielding the following attention scores,
+
+$$
+\boldsymbol {A} _ {\boldsymbol {q}, \boldsymbol {k}} = - \frac {1}{2} (\boldsymbol {\delta} - \boldsymbol {\Delta}) ^ {\top} \boldsymbol {\Sigma} ^ {- 1} (\boldsymbol {\delta} - \boldsymbol {\Delta}) = - \frac {1}{2} \boldsymbol {\delta} ^ {\top} \boldsymbol {\Sigma} ^ {- 1} \boldsymbol {\delta} + \boldsymbol {\delta} ^ {\top} \boldsymbol {\Sigma} ^ {- 1} \boldsymbol {\Delta} - \frac {1}{2} \boldsymbol {\Delta} ^ {\top} \boldsymbol {\Sigma} ^ {- 1} \boldsymbol {\Delta}, \tag {19}
+$$
+
+where, once more, $\delta = k - q$ . The last term can be discarded because the softmax is shift-invariant, and we rewrite the attention coefficient as a dot product between the head target vector $\pmb{v}$ and the relative position encoding $r_{\delta}$ (consisting of the first- and second-order combinations of the pixel shift $\delta$ ):
+
+$$
+\boldsymbol {v} = \frac {1}{2} \left(2 (\boldsymbol {\Sigma} ^ {- 1} \boldsymbol {\Delta}) _ {1}, 2 (\boldsymbol {\Sigma} ^ {- 1} \boldsymbol {\Delta}) _ {2}, - \boldsymbol {\Sigma} _ {1, 1} ^ {- 1}, - \boldsymbol {\Sigma} _ {2, 2} ^ {- 1}, - 2 \boldsymbol {\Sigma} _ {1, 2} ^ {- 1}\right) ^ {\top} \quad \text {and} \quad \boldsymbol {r} _ {\boldsymbol {\delta}} = (\boldsymbol {\delta} _ {1}, \boldsymbol {\delta} _ {2}, \boldsymbol {\delta} _ {1} ^ {2}, \boldsymbol {\delta} _ {2} ^ {2}, \boldsymbol {\delta} _ {1} \boldsymbol {\delta} _ {2}) ^ {\top}.
+$$
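
The claim that the quadratic score equals $\pmb{v}^{\top}\pmb{r}_{\pmb{\delta}}$ up to the discarded constant can be verified directly. A sketch with an arbitrary center $\Delta$ and a randomly generated symmetric positive-definite $\Sigma^{-1}$ (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
Delta = rng.standard_normal(2)                 # center of attention
S = rng.standard_normal((2, 2))
Sigma_inv = S @ S.T + np.eye(2)                # symmetric positive-definite Sigma^{-1}

delta = np.array([2.0, -1.0])                  # pixel shift k - q
score = -0.5 * (delta - Delta) @ Sigma_inv @ (delta - Delta)   # Eq. (19), left-hand side

v = 0.5 * np.array([2 * (Sigma_inv @ Delta)[0],
                    2 * (Sigma_inv @ Delta)[1],
                    -Sigma_inv[0, 0], -Sigma_inv[1, 1],
                    -2 * Sigma_inv[0, 1]])
r = np.array([delta[0], delta[1], delta[0] ** 2, delta[1] ** 2, delta[0] * delta[1]])

const = -0.5 * Delta @ Sigma_inv @ Delta       # term dropped by softmax shift invariance
assert np.isclose(v @ r + const, score)
```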
+
+Evaluation. We trained our model using this generalized quadratic relative position encoding. We were curious to see whether, using the above encoding, the self-attention model would learn to attend to non-isotropic groups of pixels—thus forming patterns unseen in CNNs. Each head was parametrized by $\Delta \in \mathbb{R}^2$ and $\Sigma^{-1/2} \in \mathbb{R}^{2 \times 2}$ to ensure that the covariance matrix remained positive semi-definite. We initialized the center of attention to $\Delta^{(h)} \sim \mathcal{N}(0, 2I_2)$ and $\Sigma^{-1/2} = I_2 + \mathcal{N}(0, 0.01I_2)$ so that initial attention probabilities were close to an isotropic Gaussian. Figure 12 shows that the network did learn non-isotropic attention probability patterns, especially in the higher layers. Nevertheless, the fact that we do not obtain any performance improvement suggests that attention non-isotropy is not particularly helpful in practice—the quadratic positional encoding suffices.
+
+
+Figure 12: Centers of attention of each attention head (different colors) for the 6 self-attention layers using non-isotropic Gaussian parametrization. The central black square is the query pixel, whereas solid and dotted circles represent the $50\%$ and $90\%$ percentiles of each Gaussian, respectively.
+
+Pruning degenerated heads. Some non-isotropic attention heads attend to "non-intuitive" patches of pixels: either a very thin stripe of pixels, when $\Sigma^{-1}$ was almost singular, or all pixels uniformly, when $\Sigma^{-1}$ was close to 0 (i.e. constant attention scores). We asked ourselves, are such attention patterns indeed useful for the model, or are these heads degenerated and unused? To find out, we pruned all heads whose largest eigenvalue was smaller than $10^{-5}$ or whose condition number (ratio of the largest and smallest eigenvalues) exceeded $10^{5}$ . Specifically, in our model with 6 layers of 9 heads each, we pruned [2, 4, 1, 2, 6, 0] heads from the first to the last layer. This means that these layers can no longer express a $3 \times 3$ kernel. As shown in yellow in Figure 13, this ablation initially hurts performance somewhat, probably due to biases thrown off by the pruning, but after a few epochs of continued training with a smaller learning rate (divided by 10) the accuracy recovers its unpruned value. Hence, without sacrificing performance, we reduce the parameter count and the number of FLOPS by a fourth.
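
The pruning criterion above amounts to a simple test on the eigenvalues of each head's precision matrix $\Sigma^{-1}$ . A sketch (thresholds taken from the text; the helper name is ours):

```python
import numpy as np

def should_prune(sigma_inv, eig_min=1e-5, cond_max=1e5):
    """Prune heads whose precision matrix is near-zero or near-singular."""
    eigvals = np.linalg.eigvalsh(sigma_inv)          # Sigma^{-1} is symmetric
    largest = eigvals.max()
    smallest = max(eigvals.min(), 1e-12)             # guard the division
    return largest < eig_min or largest / smallest > cond_max

assert should_prune(1e-6 * np.eye(2))                # near-uniform attention
assert should_prune(np.diag([1.0, 1e-7]))            # near-singular: thin stripe
assert not should_prune(np.eye(2))                   # well-conditioned head is kept
```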
+
+# F INCREASING THE NUMBER OF HEADS
+
+For completeness, we also tested increasing the number of heads of our architecture from 9 to 16.
+
+
+Figure 13: Evolution of test accuracy on CIFAR-10. Pruned model (yellow) is continued training of the non-isotropic model (orange).
+
+| Models | accuracy | # of params | # of FLOPS |
+| --- | --- | --- | --- |
+| ResNet18 | 0.938 | 11.2M | 1.1B |
+| SA quadratic emb. | 0.938 | 12.1M | 6.2B |
+| SA quadratic emb. gen. | 0.934 | 12.1M | 6.2B |
+| SA quadratic emb. gen. pruned | 0.934 | 9.7M | 4.9B |
+| SA learned emb. | 0.918 | 12.3M | 6.2B |
+| SA learned emb. + content | 0.871 | 29.5M | 15B |
+
+
+Figure 14: Centers of attention for 16 attention heads (different colors) for the 6 self-attention layers using quadratic positional encoding. The central black square is the query pixel, whereas solid and dotted circles represent the $50\%$ and $90\%$ percentiles of each Gaussian, respectively.
+
+Table 4: Number of parameters and accuracy on CIFAR-10 per model. SA stands for Self-Attention.
+
+Similar to Figure 4, we see that the network distinguishes two main types of attention patterns. Localized heads (i.e., those that attend to nearly individual pixels) appear more frequently in the first few layers. The self-attention layer uses these heads to act in a manner similar to how convolutional layers do. Heads with less-localized attention become more common at higher layers.
\ No newline at end of file
diff --git a/ontherelationshipbetweenselfattentionandconvolutionallayers/images.zip b/ontherelationshipbetweenselfattentionandconvolutionallayers/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..038d7316abd33a3409f36f4111fbe50dc81f7719
--- /dev/null
+++ b/ontherelationshipbetweenselfattentionandconvolutionallayers/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3a53ba2f4daf82127ba89779c5844227da7a3b253663335cb8b53404efbdf365
+size 1235265
diff --git a/ontherelationshipbetweenselfattentionandconvolutionallayers/layout.json b/ontherelationshipbetweenselfattentionandconvolutionallayers/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6f454892c6a9166a08055b7a5c48e2bc47db9a03
--- /dev/null
+++ b/ontherelationshipbetweenselfattentionandconvolutionallayers/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d4a49cce58c09767370fb542c403d00430247f992eaa3a218d67eb3c637f62b
+size 647708
diff --git a/onthesteerabilityofgenerativeadversarialnetworks/a1c0e785-948e-4091-8bae-4fdc1021722e_content_list.json b/onthesteerabilityofgenerativeadversarialnetworks/a1c0e785-948e-4091-8bae-4fdc1021722e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e3e34d34ad9b5391f055705a0ebae89896506c11
--- /dev/null
+++ b/onthesteerabilityofgenerativeadversarialnetworks/a1c0e785-948e-4091-8bae-4fdc1021722e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a37f0727fc23c6394a526a7f0886bc08cb7d7508b01bf070bc119eab8ee81efa
+size 159508
diff --git a/onthesteerabilityofgenerativeadversarialnetworks/a1c0e785-948e-4091-8bae-4fdc1021722e_model.json b/onthesteerabilityofgenerativeadversarialnetworks/a1c0e785-948e-4091-8bae-4fdc1021722e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..020a27908eed296c2c8531d32d8e749626f1c301
--- /dev/null
+++ b/onthesteerabilityofgenerativeadversarialnetworks/a1c0e785-948e-4091-8bae-4fdc1021722e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1b69ea38d54f2640c4c7ae6f99c38e3dd79ad44af3f4b6e15847a74c99b1fb7f
+size 167016
diff --git a/onthesteerabilityofgenerativeadversarialnetworks/a1c0e785-948e-4091-8bae-4fdc1021722e_origin.pdf b/onthesteerabilityofgenerativeadversarialnetworks/a1c0e785-948e-4091-8bae-4fdc1021722e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..be9e4f0c472ca0f81d5c5ff27ffcd77bfb4e2c6c
--- /dev/null
+++ b/onthesteerabilityofgenerativeadversarialnetworks/a1c0e785-948e-4091-8bae-4fdc1021722e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c596b8c608894094ff8efb451e49f2b65f0c7b14f6463f7eb30470a2179b889b
+size 11033782
diff --git a/onthesteerabilityofgenerativeadversarialnetworks/full.md b/onthesteerabilityofgenerativeadversarialnetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..835a44e4141d1161a8fdc939d213b67e150deb25
--- /dev/null
+++ b/onthesteerabilityofgenerativeadversarialnetworks/full.md
@@ -0,0 +1,728 @@
+# ON THE “STEERABILITY” OF GENERATIVE ADVERSARIAL NETWORKS
+
+Ali Jahanian*, Lucy Chai*, & Phillip Isola
+
+Massachusetts Institute of Technology
+
+Cambridge, MA 02139, USA
+
+{jahanian,lrchai,phillipi}@mit.edu
+
+# ABSTRACT
+
+An open secret in contemporary machine learning is that many models work beautifully on standard benchmarks but fail to generalize outside the lab. This has been attributed to biased training data, which provide poor coverage over real world events. Generative models are no exception, but recent advances in generative adversarial networks (GANs) suggest otherwise – these models can now synthesize strikingly realistic and diverse images. Is generative modeling of photos a solved problem? We show that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold. In particular, we study their ability to fit simple transformations such as camera movements and color changes. We find that the models reflect the biases of the datasets on which they are trained (e.g., centered objects), but that they also exhibit some capacity for generalization: by “steering” in latent space, we can shift the distribution while still creating realistic images. We hypothesize that the degree of distributional shift is related to the breadth of the training data distribution. Thus, we conduct experiments to quantify the limits of GAN transformations and introduce techniques to mitigate the problem. Code is released on our project page: https://ali-design.github.io/gan_steerability/.
+
+# 1 INTRODUCTION
+
+The quality of deep generative models has increased dramatically over the past few years. When introduced in 2014, Generative Adversarial Networks (GANs) could only synthesize MNIST digits and low-resolution grayscale faces (Goodfellow et al., 2014). The most recent models, however, produce diverse high-resolution images that are often indistinguishable from natural photos (Brock et al., 2018; Karras et al., 2018).
+
+Science fiction has long dreamed of virtual realities filled with synthetic content as rich as, or richer than, the real world (e.g., The Matrix, Ready Player One). How close are we to this dream? Traditional computer graphics can render photorealistic 3D scenes, but cannot automatically generate detailed content. Generative models like GANs, in contrast, can create content from scratch, but we do not currently have tools for navigating the generated scenes in the same way that you can walk through and interact with a 3D game engine.
+
+In this paper, we explore the degree to which you can navigate the visual world of a GAN. Figure 1 illustrates the kinds of transformations we explore. Consider the dog at the top-left. By moving in some direction of GAN latent space, can we hallucinate walking toward this dog? As the figure indicates, and as we will show in this paper, the answer is yes. However, as we continue to zoom in, we quickly reach limits. Once the dog face fills the full frame, continuing to walk in this direction fails to increase the zoom. A similar effect occurs in the daisy example (row 2 of Fig. 1), where a direction in latent space moves the daisy up and down, but cannot move it out of frame.
+
+We hypothesize that these limits are due to biases in the distribution of images on which the GAN is trained. For example, if the training dataset consists of centered dogs and daises, the same may be the case in GAN-generated images. Nonetheless, we find that some degree of transformation is possible. When and why can we achieve certain transformations but not others?
+
+
+Figure 1: Learned latent space trajectories in generative adversarial networks correspond to visual transformations like camera shift and zoom. Take the "steering wheel", drive in the latent space, and explore the natural image manifold via generative transformations!
+
+
+
+This paper seeks to quantify the degree to which we can achieve basic visual transformations by navigating in GAN latent space. In other words, are GANs "steerable" in latent space? We analyze the relationship between the data distribution on which the model is trained and the success in achieving these transformations. From our experiments, it is possible to shift the distribution of generated images to some degree, but we cannot extrapolate entirely out of the dataset's support. In particular, attributes can be shifted in proportion to the variability of that attribute in the training data. We further demonstrate an approach to increase model steerability by jointly optimizing the generator and latent direction, together with data augmentation on training images. One of the current criticisms of generative models is that they simply interpolate between datapoints and fail to generate anything truly new, but our results add nuance to this story. It is possible to achieve distributional shift, but the ability to create realistic images from a modified distribution relies on sufficient diversity in the dataset along the dimension that we vary.
+
+# Our main findings are:
+
+- A simple walk in the latent space of GANs achieves camera motion and color transformations in the output image space. These walks are learned in a self-supervised manner, without labeled attributes or distinct source and target images.
+- The linear walk is as effective as more complex non-linear walks, suggesting that the models learn to roughly linearize these operations without being explicitly trained to do so.
+- The extent of each transformation is limited, and we quantify a relationship between dataset variability and how much we can shift the model distribution.
+- The transformations are a general-purpose framework that work with different model architectures, e.g. BigGAN, StyleGAN, and DCGAN, and illustrate different disentanglement properties in their respective latent spaces.
+- Data augmentation improves steerability, as does jointly training the walk trajectory and the generator weights, which allows us to achieve larger transformation effects.
+
+# 2 RELATED WORK
+
+Latent space manipulations can be seen from several perspectives - how we achieve them, what limits them, and what they enable us to do. Our work addresses these three aspects together, and we briefly refer to each one in related work.
+
+Interpolations in latent space Traditional approaches to image editing with GAN latent spaces find linear directions that correspond to changes in labeled attributes, such as smile-vectors and gender-vectors for faces (Radford et al., 2015; Karras et al., 2018). However, these manipulations are not exclusive to GANs; in flow-based generative models, linearly interpolating between two encoded images allows one to edit a source image toward attributes of the target (Kingma & Dhariwal, 2018). Möllenhoff & Cremers (2019) proposes a modified GAN formulation by treating data
+
+as directional $k$ -currents, where moving along tangent planes naturally corresponds to interpretable manipulations. Upchurch et al. (2017) removes the generative model entirely and instead interpolates in the intermediate feature space of a pretrained classifier, again using feature mappings of source and target sets to determine an edit direction. Unlike these approaches, we learn our latent-space trajectories in a self-supervised manner without labeled attributes or distinct source and target images. Instead, we learn to approximate editing operations on individual source images. We find that linear trajectories in latent space can capture simple image manipulations, e.g., zoom-vectors and shift-vectors, although we also obtain similar results using nonlinear trajectories.
+
+Dataset bias Biases from training data and network architecture both impact the generalization capacity of learned models (Torralba & Efros, 2011; Geirhos et al., 2018; Amini et al.). Dataset biases partly come from human preferences in taking photos: we tend to take pictures in specific "canonical" views that are not fully representative of the entire visual world (Mezuman & Weiss, 2012; Jahanian et al., 2015). Consequently, models trained with these datasets inherit their biases. This may result in models that misrepresent the given task – such as tendencies towards texture bias rather than shape bias on ImageNet classifiers (Geirhos et al., 2018) – and in turn limits their generalization performance on similar objectives (Azulay & Weiss, 2018). Our latent space trajectories transform the output corresponding to various image editing operations, but ultimately we are constrained by biases in the data and cannot extrapolate arbitrarily far beyond the data's support.
+
+Generative models for content creation The recent progress in generative models has opened interesting avenues for content creation (Brock et al., 2018; Karras et al., 2018), including applications that enable users to fine-tune the generated output (Simon; Zhu et al., 2016; Bau et al., 2018). A by-product of the current work is enabling users to modify image properties by turning a single knob – the magnitude of the learned transformation in latent space. We further demonstrate that these image manipulations are not just a simple creativity tool; they also provide us with a window into the biases and generalization capacity of these models.
+
+Applications of latent space editing Image manipulations using generative models suggest several interesting downstream applications. For example, Denton et al. (2019) learns linear walks corresponding to various facial characteristics – they use these to measure biases in facial attribute detectors, whereas we study biases in the generative model that originate from training data. Shen et al. (2019) also assumes linear latent space trajectories and learns paths for face attribute editing according to semantic concepts such as age and expression, thus demonstrating disentanglement of the latent space. White (2016) suggests approaches to improve the learned manipulations, such as using spherical linear interpolations, resampling images to remove biases in attribute vectors, and using data augmentation as a synthetic attribute for variational autoencoders. Goetschalckx et al. (2019) applies a linear walk to achieve transformations corresponding to cognitive properties of an image such as memorability, aesthetics, and emotional valence. Unlike these works, we do not require an attribute detector or assessor function to learn the latent space trajectory, and therefore our loss function is based on image similarity between source and target images. In addition to linear walks, we explore using non-linear walks parametrized by neural networks for editing operations.
+
+# 3 METHOD
+
+Generative models such as GANs (Goodfellow et al., 2014) learn a mapping function $G$ such that $G: z \to x$ . Here, $z$ is the latent code drawn from a Gaussian density and $x$ is an output, e.g., an image. Our goal is to achieve transformations in the output space by moving in latent space, as shown in Fig. 2. In general, this goal also captures the idea of equivariance, in which transformations in the input space result in equivalent transformations in the output space (c.f. Hinton et al. (2011); Cohen et al. (2019); Lenc & Vedaldi (2015)).
+
+Objective We want to learn an $N$ -dimensional vector representing the optimal path in latent space for a given transformation. The vector is multiplied with continuous parameter $\alpha$ which signifies the step size: large $\alpha$ values correspond to a greater degree of transformation, while small $\alpha$ values correspond to a lesser degree. Formally, we learn the walk $w$ by minimizing the objective function:
+
+$$
+w ^ {*} = \underset {w} {\arg \min } \mathbb {E} _ {z, \alpha} [ \mathcal {L} (G (z + \alpha w), \operatorname {e d i t} (G (z), \alpha)) ]. \tag {1}
+$$
+
+
+Figure 2: We aim to find a path in $z$ space to transform the generated image $G(z)$ to its edited version $\operatorname{edit}(G(z), \alpha)$ , e.g., an $\alpha \times$ zoom. This walk results in the generated image $G(z + \alpha w)$ when we choose a linear walk, or $G(f(f(\dots f(z))))$ when we choose a non-linear walk.
+
+Here, $\mathcal{L}$ measures the distance between the generated image after taking an $\alpha$ -step in the latent direction $G(z + \alpha w)$ and the target $\operatorname{edit}(G(z), \alpha)$ derived from the source image $G(z)$ . We use $L2$ loss as our objective $\mathcal{L}$ ; however, we also obtain similar results when using the LPIPS perceptual image similarity metric (Zhang et al., 2018) (see Appendix B.4.1). Note that we can learn this walk in a fully self-supervised manner - we perform the $\operatorname{edit}(\cdot)$ operation on an arbitrary generated image and subsequently learn the vector $w$ that minimizes the objective. Let $\operatorname{model}(\alpha)$ denote the image generated after taking an $\alpha$ -step along the optimized vector $w^{*}$ , i.e., $\operatorname{model}(\alpha) = G(z + \alpha w^{*})$ .
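
For intuition, consider a toy linear generator, for which Eq. (1) collapses to a least-squares problem. This sketch (hypothetical `M` and `d`, not the paper's BigGAN setup) shows that a single learned $w$ realizes the edit for every $z$ and $\alpha$ :

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 8, 8
M = rng.standard_normal((D, N))          # toy linear "generator": G(z) = M z
d = rng.standard_normal(D)               # target edit: edit(x, alpha) = x + alpha * d

# For linear G, G(z + alpha*w) - edit(G(z), alpha) = alpha * (M w - d),
# so minimizing Eq. (1) reduces to least squares in w:
w, *_ = np.linalg.lstsq(M, d, rcond=None)

z = rng.standard_normal(N)               # the same w works for every z and alpha
alpha = 1.5
assert np.allclose(M @ (z + alpha * w), M @ z + alpha * d)
```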
+
+The previous setup assumes linear latent space walks, but we can also learn non-linear trajectories in which the walk direction depends on the current latent space position. For the non-linear walk, we learn a function, $f^{*}(z)$ , which corresponds to a small $\epsilon$ -step transformation $\operatorname{edit}(G(z), \epsilon)$ . To achieve bigger transformations, we apply $f$ recursively, mimicking discrete Euler ODE approximations. Formally, for a fixed $\epsilon$ , we minimize
+
+$$
+\mathcal {L} = \mathbb {E} _ {z, n} \left[ \left\| G (f ^ {n} (z)) - \operatorname {e d i t} (G (z), n \epsilon) \right\| \right], \tag {2}
+$$
+
+where $f^n(\cdot)$ is an $n$ th-order function composition $f(f(f(\ldots)))$ , and $f(z)$ is parametrized with a neural network. We discuss further implementation details in Appendix A.4. We use this function composition approach rather than the simpler setup of $G(z + \alpha \mathrm{NN}(z))$ because the latter learns to ignore the input $z$ when $\alpha$ takes on continuous values, and is thus equivalent to the previous linear trajectory (see Appendix A.3 for further details).
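
The recursive application of $f$ amounts to a simple function-composition loop (here `f` is a closed-form stand-in, not a trained network; `eps` is illustrative):

```python
import numpy as np

def compose(f, n, z):
    """Apply f a total of n times: f^n(z) = f(f(...f(z)))."""
    for _ in range(n):
        z = f(z)
    return z

# Hypothetical epsilon-step walk: each application nudges z along a
# position-dependent direction, mimicking a discrete Euler ODE step.
eps = 0.1
f = lambda z: z + eps * np.tanh(z)      # stands in for a learned network

z0 = np.ones(4)
big_step = compose(f, 10, z0)           # corresponds to edit(G(z), 10 * eps)
assert not np.allclose(big_step, z0)    # repeated small steps accumulate
```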
+
+Quantifying Steerability We further seek to quantify how well we can achieve desired image manipulations under each transformation. To this end, we compare the distribution of a given attribute, e.g., "luminance", in the dataset versus in images generated after walking in latent space.
+
+For color transformations, we consider the effect of increasing or decreasing the $\alpha$ coefficient corresponding to each color channel. To estimate the color distribution of model-generated images, we randomly sample $N = 100$ pixels per image both before and after taking a step in latent space. Then, we compute the pixel value for each channel, or the mean RGB value for luminance, and normalize the range between 0 and 1.
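
The pixel-sampling step might look like the following sketch (function name and random-seed handling are ours; for luminance one would additionally average the three channels):

```python
import numpy as np

def sample_color_distribution(images, n_pixels=100, rng=None):
    """Sample n_pixels random pixels per image; return RGB values scaled to [0, 1]."""
    rng = rng or np.random.default_rng(0)
    samples = []
    for img in images:                   # img: (H, W, 3) uint8 array
        h, w, _ = img.shape
        ys = rng.integers(0, h, n_pixels)
        xs = rng.integers(0, w, n_pixels)
        samples.append(img[ys, xs] / 255.0)
    return np.concatenate(samples)       # (num_images * n_pixels, 3)

imgs = [np.full((32, 32, 3), 128, dtype=np.uint8)]   # one flat gray image
vals = sample_color_distribution(imgs)
assert vals.shape == (100, 3)
assert np.allclose(vals, 128 / 255)
```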
+
+For zoom and shift transformations, we rely on an object detector which captures the central object in the image class. We use a MobileNet-SSD v1 (Liu et al., 2016) detector to estimate object bounding boxes, and average over image classes recognizable by the detector. For each successful detection, we take the highest probability bounding box corresponding to the desired class and use that to quantify the amount of transformation. For the zoom operation, we use the area of the bounding box normalized by the area of the total image. For shift in the X and Y directions, we take the center X and Y coordinates of the bounding box, and normalize by image width or height.
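
Given a detector's output, the zoom and shift statistics reduce to simple bounding-box arithmetic. A sketch, assuming boxes come as `(x0, y0, x1, y1)` pixel coordinates (the helper names are ours):

```python
def zoom_metric(box, img_w, img_h):
    """Bounding-box area normalized by image area: a proxy for object scale."""
    x0, y0, x1, y1 = box
    return ((x1 - x0) * (y1 - y0)) / (img_w * img_h)

def shift_metric(box, img_w, img_h):
    """Box center normalized by image size: a proxy for X/Y object position."""
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2 / img_w, (y0 + y1) / 2 / img_h)

# A centered box covering a quarter of a 256x256 image:
assert zoom_metric((64, 64, 192, 192), 256, 256) == 0.25
assert shift_metric((64, 64, 192, 192), 256, 256) == (0.5, 0.5)
```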
+
+Truncation parameters in GANs (as used in Brock et al. (2018); Karras et al. (2018)) trade off between the diversity of the generated images and sample quality. When comparing generated images to the dataset distribution, we use the largest possible truncation for the model and perform similar cropping and resizing of the dataset as done during model training (see Brock et al. (2018)). When comparing the attributes of generated distributions under different $\alpha$ magnitudes to each other but not to the dataset, we reduce truncation to 0.5 to ensure better performance of the object detector.
+
+Reducing Transformation Limits Equations 1 and 2 learn a latent space walk assuming a pretrained generative model, thus keeping the model weights fixed. The previous approach allows us
+
+to understand the latent space organization and limitations in the model's transformation capacity. To overcome these limits, we explore adding data augmentation by editing the training images with each corresponding transformation, and train the generative model with this augmented dataset. We also introduce a modified objective function that jointly optimizes the generator weights and a linear walk vector:
+
+$$
+G ^ {*}, w ^ {*} = \arg \min _ {G, w} \left(\mathcal {L} _ {\text {e d i t}} + \mathcal {L} _ {G A N}\right), \tag {3}
+$$
+
+where the edit loss encourages low $L2$ error between learned transformation and target image:
+
+$$
+\mathcal {L} _ {\text {e d i t}} = L 2 \left(G (z + \alpha w) - \operatorname {e d i t} (G (z), \alpha)\right). \tag {4}
+$$
+
+The GAN loss optimizes for discriminator error:
+
+$$
+\mathcal {L} _ {G A N} = \max _ {D} \left(\mathbb {E} _ {z, \alpha} [ D (G (z + \alpha w)) ] - \mathbb {E} _ {x, \alpha} [ D (\operatorname {e d i t} (x, \alpha)) ]\right), \tag {5}
+$$
+
+where we draw images $x$ from the training dataset and perform data augmentation by applying the edit operation on them. This optimization approach encourages the generator to organize its latent space so that the transformations lie along linear paths, and, when combined with data augmentation, results in larger transformation ranges, which we demonstrate in Sec. 4.4.
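
To make the structure of Eqs. (3)-(5) concrete, here is a single-sample evaluation of the combined objective with toy linear stand-ins for the generator and critic (no training loop; all matrices are illustrative, and the critic is held fixed rather than maximized over):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 4, 4
M = rng.standard_normal((D, N))        # toy generator G(z) = M z
w = rng.standard_normal(N)             # current walk vector
d = rng.standard_normal(D)             # edit(x, alpha) = x + alpha * d
critic = rng.standard_normal(D)        # toy linear critic D(x) = critic . x

def edit(x, alpha):
    return x + alpha * d

z, x, alpha = rng.standard_normal(N), rng.standard_normal(D), 0.7
gen = M @ (z + alpha * w)              # G(z + alpha * w)

L_edit = np.sum((gen - edit(M @ z, alpha)) ** 2)   # Eq. (4)
L_gan = critic @ gen - critic @ edit(x, alpha)     # Eq. (5), one sample, critic fixed
total = L_edit + L_gan                             # Eq. (3): minimized over (G, w)

# For this linear toy, L_edit collapses to alpha^2 * ||M w - d||^2:
assert np.isclose(L_edit, alpha ** 2 * np.sum((M @ w - d) ** 2))
```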
+
+# 4 EXPERIMENTS
+
+We demonstrate our approach using BigGAN (Brock et al., 2018), a class-conditional GAN trained on 1000 ImageNet categories. We learn a shared latent space walk by averaging across the image categories, and further quantify how this walk affects each class differently. We focus on linear walks in latent space for the main text, and show additional results on nonlinear walks in Sec. 4.3 and Appendix B.4.2. We also conduct experiments on StyleGAN (Karras et al., 2018), which uses an unconditional style-based generator architecture in Sec. 4.3 and Appendix B.5.
+
+# 4.1 WHAT IMAGE TRANSFORMATIONS CAN WE ACHIEVE IN LATENT SPACE?
+
+
+Figure 3: Transformation limits. As we increase the magnitude of $w^{*}$ , the operation either does not transform the image any further, or the image becomes unrealistic. Below each figure we also indicate the average LPIPS perceptual distance between 200 sampled image pairs of that category. Perceptual distance decreases as we move farther from the source (center image), which indicates that the images are converging.
+
+We show qualitative results of the learned transformations in Fig. 1. By steering in the generator latent space, we learn a variety of transformations on a given source image (shown in the center panel of each transformation). Interestingly, several priors come into play when learning these image transformations. When we shift a daisy downwards in the Y direction, the model hallucinates that the sky exists on the top of the image. However, when we shift the daisy up, the model inpaints the remainder of the image with grass. When we alter the brightness of an image, the model transitions between nighttime and daytime. This suggests that the model can extrapolate from the original source image, and still remain consistent with the image context.
+
+
+Figure 4: Each row shows how a single latent direction $w^{*}$ affects two different ImageNet classes. We observe that changes are consistent with semantic priors (e.g., "Volcanoes" explode, "Alps" do not). Boxplots show the LPIPS perceptual distance before and after transformation for 200 samples per class.
+
+However, when we increase the step size of $\alpha$ , we observe that the degree to which we can achieve each transformation is limited. In Fig. 3 we observe two potential failure cases: one in which the image becomes unrealistic, and the other in which the image fails to transform any further. When we try to zoom in on a Persian cat, we observe that the cat no longer increases in size beyond some point, and in fact consistently undershoots the target zoom. On the other hand, when we try to zoom out on the cat, we observe that it begins to fall off the image manifold, and does not become any smaller after some point. Indeed, the perceptual distance (using LPIPS) between images decreases as we push $\alpha$ towards the transformation limits. Similar trends hold with other transformations: we are able to shift a lorikeet up and down to some degree until the transformation yields unrealistic output, and despite adjusting $\alpha$ on the rotation vector, we are unable to rotate a pizza. Are the limitations to these transformations governed by the training dataset? In other words, are our latent space walks limited because in ImageNet photos the cats are mostly centered and taken within a certain size? We seek to investigate and quantify these biases in the next sections.
+
+An intriguing characteristic of the learned trajectory is that the amount it affects the output depends on the image class. In Fig. 4, we investigate the impact of the walk for different image categories under color transformations. By moving in the direction of a redness vector, we are able to successfully recolor a jellyfish, but we are unable to change the color of a goldfinch, which remains yellow with only slight changes in background texture. Likewise, increasing brightness changes an erupting volcano to a dormant one, but does not have much effect on Alps, which only transition between night and day. In the third example, we use our latent walk to turn red sports cars to blue, but it cannot recolor firetrucks. Again, perceptual distance over image samples confirms these qualitative observations: a 2-sample $t$ -test yields $t = 20.77$ , $p < 0.001$ for jellyfish/goldfinch, $t = 8.14$ , $p < 0.001$ for volcano/alp, and $t = 6.84$ , $p < 0.001$ for sports car/fire engine. We hypothesize that the different impact of the shared transformation on separate image classes relates to the variability in the underlying dataset. The overwhelming majority of firetrucks are $\mathrm{red}^2$ , but sports cars appear in a variety of colors. Therefore, our color transformation is constrained by the dataset biases of individual classes.
+
+With shift, we can move the distribution of the center object by varying $\alpha$ . In the underlying model, the center coordinate of the object is most concentrated at half of the image width and height, but after applying the shift in X and shift in Y transformation, the mode of the transformed distribution varies between 0.3 and 0.7 of the image width/height. To quantify the distribution changes, we compute the area of intersection between the original model distribution and the distribution after applying each transformation and observe that the intersection decreases as we increase or decrease the magnitude of $\alpha$ . However, our transformations are limited to a certain extent – if we increase $\alpha$
+
+Figure 5: Quantifying the extent of transformations. We compare the attributes of generated images under the raw model output $G(z)$ , compared to the distribution under a learned transformation model( $\alpha$ ). We measure the intersection between $G(z)$ and model( $\alpha$ ), and also compute the FID on the transformed image to limit our transformations to the natural image manifold.
+
+beyond 150 pixels for vertical shifts, we start to generate unrealistic images, as evidenced by a sharp rise in FID and converging modes in the transformed distributions (Fig. 5 columns 2 & 3).
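The area-of-intersection measurement can be sketched with normalized histograms. The attribute values and bin range below are hypothetical stand-ins for the detected bounding-box centers (as a fraction of image width):

```python
import numpy as np

def hist_intersection(x, y, bins=50, lim=(0.0, 1.0)):
    """Area of intersection between two empirical distributions, computed
    from density-normalized histograms over a shared binning.
    Returns ~1.0 for identical distributions and ~0.0 for disjoint ones."""
    hx, edges = np.histogram(x, bins=bins, range=lim, density=True)
    hy, _ = np.histogram(y, bins=bins, range=lim, density=True)
    width = edges[1] - edges[0]
    return float(np.sum(np.minimum(hx, hy)) * width)

# hypothetical object-center x-coordinates (fraction of image width)
rng = np.random.default_rng(0)
centers_model = rng.normal(0.5, 0.05, 5000).clip(0, 1)  # G(z): centered objects
centers_shift = rng.normal(0.7, 0.05, 5000).clip(0, 1)  # model(+alpha): shifted right
overlap = hist_intersection(centers_model, centers_shift)
```

A larger shift magnitude moves `centers_shift` further from `centers_model` and drives the intersection toward zero, matching the trend described above.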
+
+We perform a similar procedure for zoom, measuring the area of the bounding box for the detected object under different magnitudes of $\alpha$. Like shift, we observe that subsequent increases in $\alpha$ magnitude have smaller and smaller effects on the mode of the resulting distribution (Fig. 5 last column). Past an $8\mathrm{x}$ zoom in or out, we observe an increase in the FID signifying decreasing image quality. Interestingly for zoom, the FID under zooming in and zooming out is asymmetric, indicating that our ability to zoom in while retaining realistic images differs from our ability to zoom out. These trends are consistent with the plateau in transformation behavior that we qualitatively observe in Fig. 3. Although we can arbitrarily increase the $\alpha$ step size, after some point we are unable to achieve further transformation and risk deviating from the natural image manifold.
+
+# 4.2 HOW DOES THE DATA AFFECT THE TRANSFORMATIONS?
+
+Is the extent to which we can transform each class, as we observed in Fig. 4, due to limited variability in the underlying dataset for each class? One way of quantifying this is to measure the difference in transformed model means, model $(+\alpha)$ and model $(-\alpha)$, and compare it to the spread of the dataset distribution. For each class, we compute the standard deviation of the dataset with respect to our statistic of interest (pixel RGB value for color, and bounding box area and center value for zoom and shift transformations, respectively). We hypothesize that if the amount of transformation is biased depending on the image class, we will observe a correlation between the distance of the mean shifts and the standard deviation of the data distribution.
+
+More concretely, we define the change in model means under a given transformation as:
+
+$$
+\Delta \mu_{k} = \mu_{k,\mathrm{model}(+\alpha^{*})} - \mu_{k,\mathrm{model}(-\alpha^{*})} \tag{6}
+$$
+
+for a given class $k$, and we set $\alpha^{*}$ to be the largest and smallest $\alpha$ values used in training. The degree to which we achieve each transformation is a function of $\alpha$, so we use the same $\alpha$ value for all classes - one that is large enough to separate the means of $\mu_{k,\mathrm{model}(+\alpha^{*})}$ and $\mu_{k,\mathrm{model}(-\alpha^{*})}$ under
+
+Figure 6: Understanding per-class biases. We observe a correlation between the variability in the training data for ImageNet classes, and our ability to shift the distribution under latent space transformations. Classes with low variability (e.g., robin) limit our ability to achieve desired transformations, in comparison to classes with a broad dataset distribution (e.g., laptop). To the right, we show the distribution of the zoom attribute in the dataset (black) and under $+\alpha$ (red) and $-\alpha$ (green) transformations for these two examples.
+
+transformation, but also for which the FID of the generated distribution remains below a threshold $T$ of generating reasonably realistic images (for our experiments we use $T = 22$ ).
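A minimal sketch of the Eq. 6 statistic, assuming we already have per-class attribute samples (e.g., detected bounding-box areas) under the $+\alpha^{*}$ and $-\alpha^{*}$ transformations; the numbers below are simulated, not measured:

```python
import numpy as np

def delta_mu(samples_pos, samples_neg):
    """Eq. 6: difference between the attribute means of the model(+alpha*)
    and model(-alpha*) distributions for a single class."""
    return float(np.mean(samples_pos)) - float(np.mean(samples_neg))

# hypothetical bounding-box areas (as a fraction of the image) for one class
rng = np.random.default_rng(1)
area_pos = rng.uniform(0.5, 0.9, 200)  # samples under model(+alpha*): zoomed in
area_neg = rng.uniform(0.1, 0.3, 200)  # samples under model(-alpha*): zoomed out
dmu = delta_mu(area_pos, area_neg)
```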
+
+In Fig. 6 we plot the standard deviation $\sigma$ of the dataset on the x-axis, and the model $\Delta \mu$ under a $+\alpha^{*}$ and $-\alpha^{*}$ transformation on the y-axis, as defined in Eq. 6. We sample randomly from 100 classes for the color, zoom and shift transformations, and generate 200 samples of each class under the positive and negative transformations. We use the same setup of drawing samples from the model and dataset and computing the statistics for each transformation as described in Sec. 4.1.
+
+Indeed, we find that the width of the dataset distribution, captured by the standard deviation of random samples drawn from the dataset for each class, relates to how much we can transform. There is a positive correlation between the spread of the dataset and the magnitude of $\Delta \mu$ observed in the transformed model distributions, and the slope of all observed trends differs significantly from zero ( $p < 0.001$ for all transformations). For the zoom transformation, we show examples of two extremes along the trend. For the "robin" class the spread $\sigma$ in the dataset is low, and subsequently, the separation $\Delta \mu$ that we are able to achieve by applying $+\alpha^{*}$ and $-\alpha^{*}$ transformations is limited. On the other hand, for "laptops", the dataset spread is broad; ImageNet contains images of laptops of various sizes, and we are able to attain wider shifts in the model distribution.
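The correlation analysis can be sketched as below; the per-class $\sigma$ and $\Delta\mu$ values are simulated to illustrate the computation, not taken from our experiments:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between per-class dataset spread (sigma) and the
    achievable mean separation (delta mu)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2)))

# simulated per-class statistics: the achievable shift tracks the spread
rng = np.random.default_rng(2)
sigma = rng.uniform(0.05, 0.4, 100)             # dataset spread per class
delta = 1.5 * sigma + rng.normal(0, 0.05, 100)  # noisy delta mu per class
r = pearson_r(sigma, delta)
```

The significance of the slope would be tested with an ordinary least-squares fit (e.g., `scipy.stats.linregress`), which is what the reported $p < 0.001$ refers to.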
+
+From these results, we conclude that the amount of transformation we can achieve relates to the dataset variability. Consistent with our qualitative observations in Fig. 4, we find that if the images for a particular class have adequate coverage over the entire range of a given transformation, then we are better able to move the model distribution to both extremes. On the other hand, if the images for a given class are less diverse, the transformation is limited by this dataset bias.
+
+# 4.3 ALTERNATIVE ARCHITECTURES AND WALKS
+
+We ran an identical set of experiments using the nonlinear walk in the BigGAN latent space (Eq. 2) and obtained similar quantitative results. To summarize, the Pearson correlation coefficients between dataset $\sigma$ and model $\Delta \mu$ for linear and nonlinear walks are shown in Table 1, with full results in Appendix B.4.2. Qualitatively, we observe that while the linear trajectory undershoots the targeted level of transformation, it is able to preserve more realistic-looking results (Fig. 7). The
+
+Figure 7: Comparison of linear and nonlinear walks for the zoom operation. The linear walk undershoots the targeted level of transformation, but maintains more realistic output.
+
+transformations involve a trade-off between minimizing the loss and maintaining realistic output, and we hypothesize that the linear walk functions as an implicit regularizer that corresponds well with the inherent organization of the latent space.
+
+|            | Luminance | Shift X | Shift Y | Zoom |
+| ---------- | --------- | ------- | ------- | ---- |
+| Linear     | 0.59      | 0.28    | 0.39    | 0.37 |
+| Non-linear | 0.49      | 0.49    | 0.55    | 0.60 |
+
+Table 1: Pearson correlation coefficient between dataset $\sigma$ and model $\Delta \mu$ for measured attributes. The p-value for the slope is $< 0.001$ for all transformations.
+
+Figure 8: Distribution for luminance transformation learned from the StyleGAN cars generator, and qualitative examples of color transformations on various datasets using StyleGAN.
+
+To test the generality of our findings across model architecture, we ran similar experiments on StyleGAN, in which the latent space is divided into two spaces, $z$ and $W$. Since Karras et al. (2018) note that the $W$ space is less entangled than $z$, we apply the linear walk to $W$ and show results in Fig. 8 and Appendix B.5. One interesting aspect of StyleGAN is that we can change color while leaving other structure in the image unchanged. In other words, while green faces do not naturally exist in the dataset, the StyleGAN model is still able to generate them. This differs from the behavior of BigGAN, where changing color results in different semantics in the image, e.g., turning a dormant volcano into an active one. StyleGAN, however, does not preserve the exact geometry of objects under other transformations, e.g., zoom and shift (see Appendix B.5).
+
+# 4.4 TOWARDS STEERABLE GANS
+
+So far, we have frozen the parameters of the generative model when learning a latent space walk for image editing, and observe that the transformations are limited by dataset bias. Here we investigate approaches to overcome these limitations and increase model steerability. For these experiments, we use a class-conditional DCGAN model (Radford et al., 2015) trained on MNIST digits (LeCun, 1998).
+
+To study the effect of dataset biases, we train (1) a vanilla DCGAN and (2) a DCGAN with data augmentation, and then learn the optimal walk in Eq. 1 after the model has been trained – we refer to these two approaches in Fig. 9 as argmin $W$ and argmin $W + \text{aug}$ , respectively. We observe that adding data augmentation yields transformations that better approximate the target image and
+
+attain lower $L2$ error than the vanilla DCGAN (blue and orange curves in Fig. 9). Qualitatively, we observe that transformations using the vanilla GAN (argmin $W$ ) become patchy and unrealistic as we increase the magnitude of $\alpha$ , but when the model is trained with data augmentation (argmin $W + aug$ ), the digits retain their structural integrity.
+
+Rather than learning the walk vector $w$ assuming a frozen generator, we may also jointly optimize the model and linear walk parameter together, as we formalized in Eq. 3. This allows the model to learn an equivariance between linear directions in the latent space and the corresponding image transformations. We refer to this model as argmin $G, W$ in Fig. 9. Compared to the frozen generator (in argmin $W$ and argmin $W + aug$ ), the joint objective further decreases $L2$ error (green curve in Fig. 9). We show additional qualitative examples in Appendix B.8. The steerable range of the generator increases with joint optimization and data augmentation, which provides additional evidence that training data bias impacts the models' steerability and generalization capacity. We also tried DCGAN on CIFAR-10 as a more complicated dataset, but were unable to make steering effective: all three methods failed to produce realistic transformations, and joint training in fact performed the worst. Finding the right steering implementation for each GAN and dataset, especially for joint training, may be a difficult problem and an interesting direction for future work.
+
+
+Figure 9: Reducing the effect of transformation limits. Using a DCGAN model on MNIST digits, we compare the $L2$ reconstruction errors on latent space walks for models trained with vanilla GANs without (argmin $W$ ) and with data augmentation (argmin $W + aug$ ). We also compare to jointly optimizing the generator and the walk parameters with data augmentation (argmin $G, W$ ), which achieves the lowest $L2$ error.
+
+# 5 CONCLUSION
+
+GANs are powerful generative models, but are they simply replicating the existing training datapoints, or can they generalize beyond the training distribution? We investigate this question by exploring walks in the latent space of GANs. We optimize trajectories in latent space to reflect simple image transformations in the generated output, learned in a self-supervised manner. We find that the model exhibits characteristics of extrapolation: we are able to "steer" the generated output to simulate camera zoom, horizontal and vertical movement, camera rotations, and recolorization. However, our ability to naively move the distribution is finite: we can transform images to some degree but cannot extrapolate entirely outside the support of the training data. To increase model steerability, we add data augmentation during training and jointly optimize the model and walk trajectory. Our experiments illustrate the connection between training data bias and the resulting distribution of generated images, and suggest methods for extending the range of images that the models are able to create.
+
+# ACKNOWLEDGEMENTS
+
+We would like to thank Quang H Le, Lore Goetschalckx, Alex Andonian, David Bau, and Jonas Wulff for helpful discussions. This work was supported by a Google Faculty Research Award to P.I., and a U.S. National Science Foundation Graduate Research Fellowship to L.C.
+
+# REFERENCES
+
+Alexander Amini, Ava Soleimany, Wilko Schwarting, Sangeeta Bhatia, and Daniela Rus. Uncovering and mitigating algorithmic bias through learned latent structure.
+
+Aharon Azulay and Yair Weiss. Why do deep convolutional networks generalize so poorly to small image transformations? arXiv preprint arXiv:1805.12177, 2018.
+David Bau, Jun-Yan Zhu, Hendrik Strobelt, Bolei Zhou, Joshua B Tenenbaum, William T Freeman, and Antonio Torralba. Gan dissection: Visualizing and understanding generative adversarial networks. arXiv preprint arXiv:1811.10597, 2018.
+Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
+Taco S Cohen, Maurice Weiler, Berkay Kicanaoglu, and Max Welling. Gauge equivariant convolutional networks and the icosahedral cnn. arXiv preprint arXiv:1902.04615, 2019.
+Emily Denton, Ben Hutchinson, Margaret Mitchell, and Timnit Gebru. Detecting bias with generative counterfactual face attribute augmentation. arXiv preprint arXiv:1906.06439, 2019.
+Bella DiGrazia. Swampscott fd debuts new blue fire truck, 2019. https://www.itemlive.com/2019/05/29/swampscott-fd-debuts-new-blue-fire-truck/, accessed 2019-09-18.
+William T. Freeman and Edward H Adelson. The design and use of steerable filters. IEEE Transactions on Pattern Analysis & Machine Intelligence, (9):891-906, 1991.
+Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231, 2018.
+Lore Goetschalckx, Alex Andonian, Aude Oliva, and Phillip Isola. Ganalyze: Toward visual definitions of cognitive image properties. arXiv preprint arXiv:1906.10112, 2019.
+Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672-2680, 2014.
+Geoffrey E Hinton, Alex Krizhevsky, and Sida D Wang. Transforming auto-encoders. In International Conference on Artificial Neural Networks, pp. 44-51. Springer, 2011.
+Ali Jahanian, SVN Vishwanathan, and Jan P Allebach. Learning visual balance from large-scale datasets of aesthetically highly rated images. In Human Vision and Electronic Imaging XX, volume 9394, pp. 93940Y. International Society for Optics and Photonics, 2015.
+Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
+Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. arXiv preprint arXiv:1812.04948, 2018.
+Davis E. King. Dlib-ml: A machine learning toolkit. Journal of Machine Learning Research, 10: 1755-1758, 2009.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pp. 10236-10245, 2018.
+Yann LeCun. The mnist database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
+Karel Lenc and Andrea Vedaldi. Understanding image representations by measuring their equivariance and equivalence. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 991-999, 2015.
+Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. Ssd: Single shot multibox detector. In European conference on computer vision, pp. 21-37. Springer, 2016.
+
+Elad Mezuman and Yair Weiss. Learning about canonical views from internet image collections. In Advances in neural information processing systems, pp. 719-727, 2012.
+Thomas Möllenhoff and Daniel Cremers. Flat metric minimization with applications in generative modeling. arXiv preprint arXiv:1905.04730, 2019.
+Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
+Yujun Shen, Jinjin Gu, Xiaou Tang, and Bolei Zhou. Interpreting the latent space of gans for semantic face editing. arXiv preprint arXiv:1907.10786, 2019.
+Joel Simon. Ganbreeder. https://ganbreeder.app/, accessed 2019-03-22.
+Antonio Torralba and Alexei A Efros. Unbiased look at dataset bias. 2011.
+Paul Upchurch, Jacob Gardner, Geoff Pleiss, Robert Pless, Noah Snavely, Kavita Bala, and Kilian Weinberger. Deep feature interpolation for image content changes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7064-7073, 2017.
+Tom White. Sampling generative networks. arXiv preprint arXiv:1609.04468, 2016.
+Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.
+Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A Efros. Generative visual manipulation on the natural image manifold. In European Conference on Computer Vision, pp. 597-613. Springer, 2016.
+
+# A METHOD DETAILS
+
+# A.1 OPTIMIZATION FOR THE LINEAR WALK
+
+We learn the walk vector using mini-batch stochastic gradient descent with the Adam optimizer (Kingma & Ba, 2014) in TensorFlow, trained on 20000 unique samples from the latent space $z$. We share the vector $w$ across all ImageNet categories for the BigGAN model.
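A minimal sketch of this optimization on a toy linear "generator" (not BigGAN, and with plain SGD instead of Adam): we choose an edit direction that the toy generator can realize, so the learned walk vector should reproduce it.

```python
import numpy as np

# Toy stand-in for the walk optimization: a linear "generator" G(z) = A z,
# and a target edit direction d = A w_true that the generator can realize.
# The real setup trains against BigGAN with Adam; SGD suffices to illustrate.
rng = np.random.default_rng(0)
dz, dx = 8, 32
A = rng.normal(size=(dx, dz))  # toy generator weights
w_true = rng.normal(size=dz)
d = A @ w_true                 # edit direction in output space

w = np.zeros(dz)
lr = 0.01
for _ in range(2000):
    z = rng.normal(size=dz)
    alpha = rng.uniform(-1.0, 1.0)
    # residual between walked output G(z + alpha w) and target edit(G(z), alpha)
    residual = A @ (z + alpha * w) - (A @ z + alpha * d)
    w -= lr * 2 * alpha * (A.T @ residual)  # gradient of ||residual||^2 w.r.t. w

err = np.linalg.norm(A @ w - d) / np.linalg.norm(d)
```

Note that the random sample $z$ cancels in the residual for a linear generator, so the objective reduces to matching $Aw$ to $d$; the nonlinearity of a real GAN is what makes the stochastic sampling matter.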
+
+# A.2 IMPLEMENTATION DETAILS FOR LINEAR WALK
+
+We experiment with a number of different transformations learned in the latent space, each corresponding to a different walk vector. Each of these transformations can be learned without any direct supervision, simply by applying our desired edit to the source image. Furthermore, the parameter $\alpha$ allows us to vary the extent of the transformation. We found that a slight modification to each transformation improved the degree to which we were able to steer the output space: we scale $\alpha$ differently for the learned transformation $G(z + \alpha_g w)$ , and the target edit $\operatorname{edit}(G(z), \alpha_t)$ . We detail each transformation below:
+
+Shift. We learn transformations corresponding to shifting an image in the horizontal X direction and the vertical Y direction. We train on source images that are shifted $-\alpha_{t}$ pixels to the left and $\alpha_{t}$ pixels to the right, where we set $\alpha_{t}$ to be between zero and one-half of the source image width or height $D$. When training the walk, we enforce that the $\alpha_{g}$ parameter ranges between -1 and 1; thus for a random shift of $\alpha_{t}$ pixels, we use the value $\alpha_{g} = \alpha_{t} / D$. We apply a mask to the shifted image, so that we only apply the loss function on the visible portion of the source image. This forces the generator to extrapolate on the obscured region of the target image.
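The shift edit, its visibility mask, and the $\alpha_{g} = \alpha_{t}/D$ scaling can be sketched as follows; this is a simplified NumPy stand-in for the training-time implementation, and the image size is hypothetical:

```python
import numpy as np

def shift_edit(img, alpha_t):
    """Horizontal shift by alpha_t pixels. Returns the shifted image and a
    boolean mask of pixels that stay visible (the loss is applied only there,
    forcing the generator to extrapolate the obscured region)."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    mask = np.zeros((h, w), dtype=bool)
    t = int(alpha_t)
    if t >= 0:                         # shift right
        out[:, t:] = img[:, :w - t]
        mask[:, t:] = True
    else:                              # shift left
        out[:, :w + t] = img[:, -t:]
        mask[:, :w + t] = True
    return out, mask

D = 256                # source image width (assumed, e.g., BigGAN at 256px)
alpha_t = 64.0         # shift right by 64 pixels
alpha_g = alpha_t / D  # latent step normalized into [-1, 1]

img = np.arange(16.0).reshape(4, 4)
shifted, mask = shift_edit(img, 2)
```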
+
+Zoom. We learn a walk which is optimized to zoom in and out up to four times the original image. For zooming in, we crop the central portion of the source image by some $\alpha_{t}$ amount, where $0.25 < \alpha_{t} < 1$ and resize it back to its original size. To zoom out, we downsample the image by $\alpha_{t}$ where $1 < \alpha_{t} < 4$ . To allow for both a positive and negative walk direction, we set $\alpha_{g} = \log (\alpha_{t})$ . Similar to shift, a mask applied during training allows the generator to inpaint the background scene.
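A sketch of the zoom-in edit and the logarithmic $\alpha_{g}$ scaling, using nearest-neighbor resampling as a simplified stand-in for the actual resize; the zoom-out branch and its mask are omitted for brevity:

```python
import numpy as np

def zoom_edit(img, alpha_t):
    """Center zoom by factor alpha_t with nearest-neighbor resampling.
    alpha_t < 1 zooms in (central crop, then resize back to the original
    size); the zoom-out case (downsample + pad + mask) is omitted here."""
    h, w = img.shape[:2]
    ch, cw = int(round(h * alpha_t)), int(round(w * alpha_t))
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    # nearest-neighbor resize of the crop back to (h, w)
    rows = np.clip((np.arange(h) * ch / h).astype(int), 0, ch - 1)
    cols = np.clip((np.arange(w) * cw / w).astype(int), 0, cw - 1)
    return crop[np.ix_(rows, cols)]

alpha_t = 0.5               # 2x zoom in
alpha_g = np.log(alpha_t)   # log scaling makes +/- walk directions symmetric
img = np.arange(64, dtype=float).reshape(8, 8)
zoomed = zoom_edit(img, alpha_t)
```

The log scaling means a 4x zoom-in ($\alpha_t = 0.25$) and a 4x zoom-out ($\alpha_t = 4$) map to latent steps of equal magnitude and opposite sign.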
+
+Color. We implement color as a continuous RGB slider, e.g., a 3-tuple $\alpha_{t} = (\alpha_{R},\alpha_{G},\alpha_{B})$, where each of $\alpha_{R}$, $\alpha_{G}$, $\alpha_{B}$ can take values in $[-0.5, 0.5]$ during training. To edit the source image, we simply add the corresponding $\alpha_{t}$ values to each of the image channels. Our latent space walk is parameterized as $z + \alpha_{g}w = z + \alpha_{R}w_{R} + \alpha_{G}w_{G} + \alpha_{B}w_{B}$, where we jointly learn the three walk directions $w_{R}$, $w_{G}$, and $w_{B}$.
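The three-direction color walk can be sketched as below; the latent dimensionality and the random "learned" directions are placeholders for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
dim_z = 128  # latent size (assumed; placeholder for the BigGAN z dimension)
# stand-ins for the three jointly learned walk directions w_R, w_G, w_B
wR, wG, wB = (rng.normal(size=dim_z) for _ in range(3))
z = rng.normal(size=dim_z)

alpha_t = (0.3, -0.1, 0.0)  # RGB slider, each component in [-0.5, 0.5]
# latent walk: z + alpha_R w_R + alpha_G w_G + alpha_B w_B
z_walk = z + alpha_t[0] * wR + alpha_t[1] * wG + alpha_t[2] * wB

def color_edit(img, alpha_t):
    """Target edit: add the per-channel offsets to an HxWx3 image in [0, 1]."""
    return img + np.asarray(alpha_t)
```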
+
+Rotate in 2D. Rotation in 2D is trained in a similar manner as the shift operations, where we train with $-45 \leq \alpha_{t} \leq 45$ degree rotations. Using $R = 45$, we scale $\alpha_{g} = \alpha_{t} / R$. We use a mask to enforce the loss only on visible regions of the target.
+
+Rotate in 3D. We simulate a 3D rotation using a perspective transformation along the Z-axis, essentially treating the image as a rotating billboard. Similar to the 2D rotation, we train with $-45 \leq \alpha_{t} \leq 45$ degree rotation, we scale $\alpha_{g} = \alpha_{t} / R$ where $R = 45$ , and apply a mask during training.
+
+# A.3 LINEAR $\mathbf{NN}(z)$ WALK
+
+Rather than defining $w$ as a vector in $z$ space (Eq. 1), one could define it as a function that takes a $z$ as input and maps it to the desired $z'$ after taking a variable-sized step $\alpha$ in latent space. In this case, we may parametrize the walk with a neural network $w = \mathrm{NN}(z)$ , and transform the image using $G(z + \alpha \mathrm{NN}(z))$ . However, as we show in the following proof, this idea will not learn to let $w$ be a function of $z$ .
+
+Proof. For simplicity, let $w = F(z)$. We optimize for $J(w, \alpha) = \mathbb{E}_z[\mathcal{L}(G(z + \alpha w), \operatorname{edit}(G(z), \alpha))]$ where $\alpha$ is an arbitrary scalar value. Note that for the target image, two equal edit operations are equivalent to performing a single edit of twice the size (e.g., shifting by 10px is the same as shifting by 5px twice; zooming by 4x is the same as zooming by 2x twice). That is,
+
+$$
+\operatorname{edit}(G(z), 2\alpha) = \operatorname{edit}(\operatorname{edit}(G(z), \alpha), \alpha).
+$$
+
+To achieve this target, starting from an initial $z$ , we can take two steps of size $\alpha$ in latent space as follows:
+
+$$
+\begin{array}{l} z_{1} = z + \alpha F(z) \\ z_{2} = z_{1} + \alpha F(z_{1}) \end{array}
+$$
+
+However, because we let $\alpha$ take on any scalar value during optimization, our objective function enforces that starting from $z$ and taking a step of size $2\alpha$ equals taking two steps of size $\alpha$ :
+
+$$
+z + 2\alpha F(z) = z_{1} + \alpha F(z_{1}) \tag{7}
+$$
+
+Therefore:
+
+$$
+\begin{array}{l} z + 2\alpha F(z) = z + \alpha F(z) + \alpha F(z_{1}) \Rightarrow \\ \alpha F(z) = \alpha F(z_{1}) \Rightarrow \\ F(z) = F(z_{1}). \end{array}
+$$
+
+Thus $F(\cdot)$ simply becomes a linear trajectory that is independent of the input $z$ .
+
+# A.4 OPTIMIZATION FOR THE NON-LINEAR WALK
+
+Given the limitations of the previous walk, we define our nonlinear walk $F(z)$ using discrete step sizes $\epsilon$. We define $F(z)$ as $z + \mathrm{NN}(z)$, where the neural network NN learns a fixed $\epsilon$-step transformation, rather than a variable $\alpha$ step. We then renormalize the magnitude of $z$. This approach mimics the Euler method for solving ODEs with a discrete step size, where we assume that the gradient of the transformation in latent space is of the form $\epsilon \frac{dz}{dt} = \mathrm{NN}(z)$ and we approximate $z_{i+1} = z_i + \epsilon \frac{dz}{dt}\big|_{z_i}$. The key difference from A.3 is the fixed step size, which avoids optimizing for the equality in (7).
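A sketch of one renormalized Euler step with a toy two-layer network; the weights below are random placeholders rather than trained parameters, and the separate positive/negative direction networks are omitted:

```python
import numpy as np

def nn_step(z, W1, b1, W2, b2):
    """Two-layer MLP producing the fixed epsilon-step displacement NN(z)."""
    h = np.maximum(W1 @ z + b1, 0.0)  # ReLU hidden layer
    return W2 @ h + b2

def F(z, params):
    """One discrete step z <- z + NN(z), renormalized so the latent code
    keeps the magnitude expected under the generator's latent prior."""
    norm = np.linalg.norm(z)
    z_next = z + nn_step(z, *params)
    return z_next * (norm / np.linalg.norm(z_next))

rng = np.random.default_rng(0)
dz, dh = 16, 32
params = (rng.normal(size=(dh, dz)) * 0.1, np.zeros(dh),
          rng.normal(size=(dz, dh)) * 0.1, np.zeros(dz))
z = rng.normal(size=dz)
z4 = z
for _ in range(4):  # a few fixed-size steps reach the full transformation range
    z4 = F(z4, params)
```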
+
+We use a two-layer neural network to parametrize the walk, and optimize over 20000 samples using the Adam optimizer as before. Positive and negative transformation directions are handled with two neural networks having identical architecture but independent weights. We set $\epsilon$ to achieve the same transformation ranges as the linear trajectory within 4-5 steps.
+
+# B ADDITIONAL EXPERIMENTS
+
+# B.1 MODEL AND DATA DISTRIBUTIONS
+
+How well does the model distribution of each property match the dataset distribution? If the generated images do not form a good approximation of the dataset variability, we expect that this would also impact our ability to transform generated images. In Fig. 10 we show the attribute distributions of the BigGAN model $G(z)$ compared to samples from the ImageNet dataset. We show corresponding results for StyleGAN and its respective datasets in Appendix B.5. While there is some bias in how well model-generated images approximate the dataset distribution, we hypothesize that additional biases in our transformations come from variability in the training data.
+
+# B.2 QUANTIFYING TRANSFORMATION LIMITS
+
+We observe that when we increase the transformation magnitude $\alpha$ in latent space, the generated images become unrealistic and the transformation ceases to have further effect. We show this qualitatively in Fig. 3. To verify these trends quantitatively, we can compute the LPIPS perceptual distance of images generated using consecutive pairs of $\alpha_{i}$ and $\alpha_{i + 1}$. For shift and zoom transformations, perceptual distance is larger when $\alpha$ (or $\log (\alpha)$ for zoom) is near zero, and decreases as the magnitude of $\alpha$ increases, which indicates that large $\alpha$ magnitudes have a smaller transformation effect, and the transformed images appear more similar. On the other hand, color and rotate in 2D/3D exhibit a steady transformation rate as the magnitude of $\alpha$ increases.
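The consecutive-pair analysis can be sketched as follows; we substitute a plain per-pixel L2 distance for LPIPS (which requires a pretrained network) and simulate a transformation that saturates, so the distances shrink toward the ends of the $\alpha$ range:

```python
import numpy as np

def consecutive_distances(images):
    """Mean per-pixel L2 distance between consecutive transformed frames;
    a simple stand-in for LPIPS, which needs a pretrained feature network."""
    return [float(np.sqrt(np.mean((a - b) ** 2)))
            for a, b in zip(images, images[1:])]

# hypothetical frames: the transformation saturates past |alpha| = 0.6,
# so the first and last consecutive pairs are (nearly) identical
alphas = np.linspace(-1, 1, 9)
frames = [np.clip(a, -0.6, 0.6) * np.ones((4, 4)) for a in alphas]
d = consecutive_distances(frames)
```

A flat tail of near-zero distances at the extremes of $\alpha$ is exactly the saturation signature described above.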
+
+Note that this analysis does not tell us how well we achieve the specific transformation, nor whether the latent trajectory deviates from natural-looking images. Rather, it tells us how much we manage to change the image, regardless of the transformation target. To quantify how well each transformation is achieved, we rely on attribute detectors such as object bounding boxes (see B.3).
+
+# B.3 DETECTED BOUNDING BOXES
+
+To quantify the degree to which we are able to achieve the zoom and shift transformations, we rely on a pre-trained MobileNet-SSD v1 object detection model. In Figs. 12 and 13 we show the results of applying the object detection model to images from the dataset, and to images generated by the model under the zoom, horizontal shift, and vertical shift transformations for randomly selected values of $\alpha$, to qualitatively verify that the object detection boundaries are reasonable. Not all ImageNet images contain recognizable objects, so we only use ImageNet classes containing objects recognizable by the detector for this analysis.
+
+# B.4 ALTERNATIVE WALKS IN BIGGAN
+
+# B.4.1 LPIPS OBJECTIVE
+
+In the main text, we learn the latent space walk $w$ by minimizing the objective function:
+
+$$
+J (w, \alpha) = \mathbb {E} _ {z} \left[ \mathcal {L} (G (z + \alpha w), \operatorname {e d i t} (G (z), \alpha)) \right]. \tag {8}
+$$
+
+using a Euclidean loss for $\mathcal{L}$ . In Fig. 14 we show qualitative results using the LPIPS perceptual similarity metric (Zhang et al., 2018) instead of Euclidean loss. Walks were trained using the same parameters as those in the linear-L2 walk shown in the main text: we use 20k samples for training, with Adam optimizer and learning rate 0.001 for zoom and color, 0.0001 for the remaining edit operations (due to scaling of $\alpha$ ).
+
+# B.4.2 NON-LINEAR WALKS
+
+Following the nonlinear walk formulation in A.4, we modify our objective to use discrete step sizes $\epsilon$ rather than continuous steps. We learn a function $F(z)$ to perform this $\epsilon$-step transformation on a given latent code $z$, where $F(z)$ is parametrized with a neural network. We show qualitative results in Fig. 15. We perform the same set of experiments shown in the main text using this nonlinear walk in Fig. 16. These experiments
+
+exhibit similar trends as we observed in the main text – we are able to modify the generated distribution of images using latent space walks, and the amount to which we can transform is related to the variability in the dataset. However, there are greater increases in FID when we apply the non-linear transformation, suggesting that these generated images deviate more from natural images and look less realistic.
+
+# B.4.3 ADDITIONAL QUALITATIVE EXAMPLES
+
+We show qualitative examples for randomly generated categories for BigGAN linear-L2, linear LPIPS, and nonlinear trajectories in Figs. 17, 18, 19 respectively.
+
+# B.5 WALKS IN STYLEGAN
+
+We perform similar experiments for linear latent space walks using StyleGAN models trained on the LSUN cat, LSUN car, and FFHQ face datasets. As suggested by Karras et al. (2018), we learn the walk vector in the intermediate $W$ latent space due to improved attribute disentanglement in $W$. We show qualitative results for color, shift, and zoom transformations in Figs. 20, 22, 24 and corresponding quantitative analyses in Figs. 21, 23, 25. We show qualitative examples comparing optimization in the $W$ and $z$ latent spaces of StyleGAN in Fig. 28.
+
+# B.6 WALKS IN PROGRESSIVE GAN
+
+We also experiment with the linear walk objective in the latent space of Progressive GAN (Karras et al., 2017). One interesting property of the Progressive GAN interpolations is that they take much longer to train before showing a visual effect – for example, for color we could obtain drastic color changes in the StyleGAN $W$ latent space using as few as 2k samples, but with Progressive GAN we used 60k samples and still did not obtain as strong an effect. This points to the StyleGAN $W$ latent space being more "flexible" and generalizable for transformations, compared to the latent space of Progressive GAN. Moreover, we qualitatively observe some entanglement in the Progressive GAN transformations – for example, changing the level of zoom also changes the lighting. We did not observe large effects in the horizontal and vertical shift transformations. Qualitative examples and quantitative results are shown in Figs. 26, 27.
+
+# B.7 QUALITATIVE EXAMPLES FOR ADDITIONAL TRANSFORMATIONS
+
+Since the color transformation operates on individual pixels, we can optimize the walk using a segmented target – for example, when learning a walk for cars, we only modify pixels in the segmented car region when generating $\operatorname{edit}(G(z),\alpha)$. StyleGAN is able to roughly localize the color transformation to this region, suggesting disentanglement of different objects within the $W$ latent space (Fig. 29 left), as also noted in Karras et al. (2018); Shen et al. (2019). We also show qualitative results for adjusting image contrast (Fig. 29 right), and for combining zoom, shift X, and shift Y transformations (Fig. 30).
+
+# B.8 ADDITIONAL RESULTS FOR IMPROVING MODEL STEERABILITY
+
+We further test the hypothesis that dataset variability impacts the amount we are able to transform by comparing DCGAN models trained with and without data augmentation. Namely, with data augmentation, the discriminator is able to see edited versions of the real images. We also jointly train the model and the walk trajectory which encourages the model to learn linear walks. For zoom, horizontal shift, and 2D rotate transformations, additional samples for three training approaches – without data augmentation, with data augmentation, and joint optimization – appear in Fig. 31-33. Qualitatively, transformations using the model trained without data augmentation degrade the digit structure as $\alpha$ magnitude increases, and may even change one digit to another. Training with data augmentation and joint optimization better preserves digit structure and identity.
+
+
+Figure 10: Comparing model versus dataset distribution. We plot statistics of the generated images under the color (luminance), zoom (object bounding box size), and shift (bounding box center) operations, and compare them to the statistics of images in the training dataset.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 11: LPIPS Perceptual distances between images generated from pairs of consecutive $\alpha_{i}$ and $\alpha_{i+1}$ . We sample 1000 images from randomly selected categories using BigGAN, transform them according to the learned linear trajectory for each transformation. We plot the mean perceptual distance and one standard deviation across the 1000 samples (shaded area), as well as 20 individual samples (scatterplot). Because the Rotate 3D operation undershoots the targeted transformation, we observe more visible effects when we increase the $\alpha$ magnitude.
+
+
+
+
+
+
+Figure 12: Bounding boxes for randomly selected classes using ImageNet training images.
+
+
+Figure 13: Bounding boxes for randomly selected classes using model-generated images for the zoom, horizontal shift, and vertical shift transformations under random values of $\alpha$ .
+
+
+Figure 14: Linear walks in BigGAN, trained to minimize LPIPS loss. For comparison, we show the same samples as in Fig. 1 (which used a linear walk with L2 loss).
+
+
+Figure 15: Nonlinear walks in BigGAN, trained to minimize L2 loss for color and LPIPS loss for the remaining transformations. For comparison, we show the same samples as in Fig. 1 (which used a linear walk with L2 loss), replacing the linear walk vector $w$ with a nonlinear walk.
+
+
+Figure 16: Quantitative experiments for nonlinear walks in BigGAN. We show the attributes of generated images under the raw model output $G(z)$ , compared to the distribution under a learned transformation model($\alpha$), the intersection area between $G(z)$ and model($\alpha$), the FID score on transformed images, and scatterplots relating dataset variability to the extent of model transformation.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 17: Qualitative examples for randomly selected categories in BigGAN, using the linear trajectory and L2 objective.
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 18: Qualitative examples for randomly selected categories in BigGAN, using the linear trajectory and LPIPS objective.
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 19: Qualitative examples for randomly selected categories in BigGAN, using a nonlinear trajectory.
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 20: Qualitative examples for learned transformations using the StyleGAN car generator.
+
+
+
+Figure 21: Quantitative experiments for learned transformations using the StyleGAN car generator.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 22: Qualitative examples for learned transformations using the StyleGAN cat generator.
+
+
+
+
+
+
+
+
+
+
+Figure 23: Quantitative experiments for learned transformations using the StyleGAN cat generator.
+
+
+Figure 24: Qualitative examples for learned transformations using the StyleGAN FFHQ face generator.
+
+
+
+
+
+
+
+
+
+
+
+Figure 25: Quantitative experiments for learned transformations using the StyleGAN FFHQ face generator. For the zoom operation not all faces are detectable; we plot the distribution as zeros for $\alpha$ values in which no face is detected. We use the dlib face detector (King, 2009) for bounding box coordinates.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 26: Qualitative examples for learned transformations using the Progressive GAN CelebA-HQ face generator.
+
+Figure 27: Quantitative experiments for learned transformations using the Progressive GAN CelebA-HQ face generator.
+
+
+
+
+
+
+
+
+Figure 28: Comparison of optimizing for color transformations in the StyleGAN $W$ and $Z$ latent spaces.
+
+
+
+
+Figure 29: Qualitative examples of optimizing for a color walk with a segmented target using StyleGAN (left column) and for a contrast walk for both BigGAN and StyleGAN (right column).
+
+
+
+
+Figure 30: Qualitative examples of a linear walk combining the zoom, shift X, and shift Y transformations. The first row shows the target image, the second row shows the result of learning a walk for the three transformations jointly, and the third row shows results for combining the separately trained walks. The green vertical line denotes the image center.
+
+
+Figure 31: Qualitative results on steerability with an MNIST DCGAN for the Zoom transformation. Odd rows are the target images and even rows are the learned transformations.
+
+
+
+
+
+
+Figure 32: Qualitative results on steerability with an MNIST DCGAN for the Shift X transformation. Odd rows are the target images and even rows are the learned transformations.
+
+
+
+
+
+
+Figure 33: Qualitative results on steerability with an MNIST DCGAN for the Rotate 2D transformation. Odd rows are the target images and even rows are the learned transformations.
+
+
+
+
\ No newline at end of file
diff --git a/onthesteerabilityofgenerativeadversarialnetworks/images.zip b/onthesteerabilityofgenerativeadversarialnetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..02aaca3aea194cedbccc888193b33c2054ba5b1d
--- /dev/null
+++ b/onthesteerabilityofgenerativeadversarialnetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d7545cc75ad808caa8561a34e91820298917b2b5e9af76cc96923597c3d1095
+size 4239618
diff --git a/onthesteerabilityofgenerativeadversarialnetworks/layout.json b/onthesteerabilityofgenerativeadversarialnetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..797be7035fc4f35646a74ea3f0f08eba511018ac
--- /dev/null
+++ b/onthesteerabilityofgenerativeadversarialnetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:803fdac59dc736202a1657d022c49e9d300ffb3d705a90173b657eff5da6bed3
+size 922363
diff --git a/onthevarianceoftheadaptivelearningrateandbeyond/66d2f199-ab91-4087-aa12-e9a0fc22335d_content_list.json b/onthevarianceoftheadaptivelearningrateandbeyond/66d2f199-ab91-4087-aa12-e9a0fc22335d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a9685f181bb9095a272e45fe76e4a00ecbcc8632
--- /dev/null
+++ b/onthevarianceoftheadaptivelearningrateandbeyond/66d2f199-ab91-4087-aa12-e9a0fc22335d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f8f9429f8ee995b8addd30c91fe221b74ac4172b51eb6eeb84a6820d948d021
+size 86615
diff --git a/onthevarianceoftheadaptivelearningrateandbeyond/66d2f199-ab91-4087-aa12-e9a0fc22335d_model.json b/onthevarianceoftheadaptivelearningrateandbeyond/66d2f199-ab91-4087-aa12-e9a0fc22335d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d60ac8596d2c4284727456dc2c20d8986a577e9c
--- /dev/null
+++ b/onthevarianceoftheadaptivelearningrateandbeyond/66d2f199-ab91-4087-aa12-e9a0fc22335d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ff3c5b8288cce834426049a80fad6ae823ed46c5fe0df165368def1c79bca98e
+size 104833
diff --git a/onthevarianceoftheadaptivelearningrateandbeyond/66d2f199-ab91-4087-aa12-e9a0fc22335d_origin.pdf b/onthevarianceoftheadaptivelearningrateandbeyond/66d2f199-ab91-4087-aa12-e9a0fc22335d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d4990051a8ffed78f0b9b6d74f47e11834b6cdfc
--- /dev/null
+++ b/onthevarianceoftheadaptivelearningrateandbeyond/66d2f199-ab91-4087-aa12-e9a0fc22335d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d737c3b4977b3a9929e9019319c6764cbc63a56e826751c82b28f9b5fd8d3572
+size 2807786
diff --git a/onthevarianceoftheadaptivelearningrateandbeyond/full.md b/onthevarianceoftheadaptivelearningrateandbeyond/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..54c0dc4b3df2884801c8b44ea314b6bd606e664f
--- /dev/null
+++ b/onthevarianceoftheadaptivelearningrateandbeyond/full.md
@@ -0,0 +1,420 @@
+# ON THE VARIANCE OF THE ADAPTIVE LEARNING RATE AND BEYOND
+
+Liyuan Liu *
+
+University of Illinois, Urbana-Champaign
+
+ll2@illinois.edu
+
+Haoming Jiang
+
+Georgia Tech
+
+jianghm@gatech.edu
+
+Pengcheng He, Weizhu Chen
+
+Microsoft Dynamics 365 AI
+
+{penhe, wzchen}@microsoft.com
+
+Xiaodong Liu, Jianfeng Gao
+
+Microsoft Research
+
+{xiaodl,jfgao}@microsoft.com
+
+Jiawei Han
+
+University of Illinois, Urbana-Champaign
+
+hanj@illinois.edu
+
+# ABSTRACT
+
+The learning rate warmup heuristic achieves remarkable success in stabilizing training, accelerating convergence and improving generalization for adaptive stochastic optimization algorithms like RMSprop and Adam. Pursuing the theory behind warmup, we identify a problem of the adaptive learning rate - its variance is problematically large in the early stage, and presume warmup works as a variance reduction technique. We provide both empirical and theoretical evidence to verify our hypothesis. We further propose Rectified Adam (RAdam), a novel variant of Adam, by introducing a term to rectify the variance of the adaptive learning rate. Experimental results on image classification, language modeling, and neural machine translation verify our intuition and demonstrate the efficacy and robustness of RAdam.
+
+# 1 INTRODUCTION
+
+Fast and stable optimization algorithms are what generations of researchers have been pursuing (Gauss, 1823; Cauchy, 1847). Remarkably, stochastic gradient-based optimization, such as stochastic gradient descent (SGD), has witnessed tremendous success in many fields of science and engineering despite its simplicity. Recently, many efforts have been made to accelerate optimization by applying adaptive learning rate. In particular, Adagrad (Duchi et al., 2010) and its variants, e.g., RMSprop (Hinton et al., 2012), Adam (Kingma & Ba, 2014), Adadelta (Zeiler, 2012) and Nadam (Dozat, 2016), stand out due to their fast convergence, and have been considered as the optimizer of choice in many applications.
+
+
+Figure 1: Training loss vs. number of iterations for Transformers on the De-En IWSLT'14 dataset.
+
+However, it has been observed that these optimization methods may converge to bad/suspicious local optima, and have to resort to a warmup heuristic – using a small learning rate in the first few epochs of training to mitigate such problems (Vaswani et al., 2017; Popel & Bojar, 2018). For example, when training typical Transformer-based neural machine translation models on the De-En IWSLT'14 dataset, removing the warmup stage increases the training loss from 3 to around 10, as shown in Figure 1. Similar phenomena are observed in other scenarios like BERT (a bidirectional transformer language model) pre-training (Devlin et al., 2019).
+
+Due to the lack of theoretical underpinnings, there is neither a guarantee that warmup brings consistent improvements across machine learning settings nor guidance on how we should
+
+conduct warmup. Thus, researchers typically use different settings in different applications and have to take a trial-and-error approach, which can be tedious and time-consuming.
+
+In this paper, we conduct both empirical and theoretical analysis of the convergence issue to identify its origin. We show that its root cause is: the adaptive learning rate has undesirably large variance in the early stage of model training, due to the limited amount of training samples being used. Thus, to reduce such variance, it is better to use smaller learning rates in the first few epochs of training, which justifies the warmup heuristic.
+
+Inspired by our analysis results, we propose a new variant of Adam, called Rectified Adam (RAdam), which explicitly rectifies the variance of the adaptive learning rate based on derivations. We conduct extensive experiments on language modeling, image classification, and neural machine translation. RAdam brings consistent improvement over the vanilla Adam, which verifies the variance issue generally exists on various tasks across different network architectures.
+
+In summary, our main contributions are two-fold:
+
+- We identify the variance issue of the adaptive learning rate and present a theoretical justification for the warmup heuristic. We show that the convergence issue is due to the undesirably large variance of the adaptive learning rate in the early stage of model training.
+- We propose a new variant of Adam (i.e., RAdam), which not only explicitly rectifies the variance and is theoretically sound, but also compares favorably with the heuristic warmup.
+
+# 2 PRELIMINARIES AND MOTIVATIONS
+
+Generic adaptive methods. Algorithm 1 is a generic framework (all operations are element-wise). It describes various popular stochastic gradient descent algorithms (Reddi et al., 2018). Specifically, different optimization algorithms can be specified by different choices of $\phi(.)$ and $\psi(.)$ , where $\phi(.)$ specifies how the momentum at time step $t$ is calculated, and $\psi(.)$ how the adaptive learning rate at $t$ is calculated. For example, in the Adam algorithm, we have:
+
+$$
+\phi \left(g _ {1}, \dots , g _ {t}\right) = \frac {\left(1 - \beta_ {1}\right) \sum_ {i = 1} ^ {t} \beta_ {1} ^ {t - i} g _ {i}}{1 - \beta_ {1} ^ {t}} \quad \text {and} \quad \psi \left(g _ {1}, \dots , g _ {t}\right) = \sqrt {\frac {1 - \beta_ {2} ^ {t}}{\left(1 - \beta_ {2}\right) \sum_ {i = 1} ^ {t} \beta_ {2} ^ {t - i} g _ {i} ^ {2}}}. \tag {1}
+$$
+
+For numerical stability, the function $\psi(.)$ in Equation 1 is usually calculated as $\hat{\psi}(g_1, \dots, g_t) = \frac{\sqrt{1 - \beta_2^t}}{\epsilon + \sqrt{(1 - \beta_2) \sum_{i=1}^{t} \beta_2^{t-i} g_i^2}}$ , where $\epsilon$ is a relatively small / negligible value (e.g., $1 \times 10^{-8}$ ).
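Written as code, Equation 1 and the $\epsilon$-stabilized $\hat{\psi}$ look as follows (a NumPy sketch under the paper's notation, not an official implementation):

```python
import numpy as np

def phi(grads, beta1=0.9):
    """Bias-corrected first moment, phi in Equation 1."""
    t = len(grads)
    w = beta1 ** np.arange(t - 1, -1, -1)          # beta1^(t-i) for i = 1..t
    return (1 - beta1) * np.sum(w * np.asarray(grads)) / (1 - beta1 ** t)

def psi_hat(grads, beta2=0.999, eps=1e-8):
    """Numerically stabilized adaptive rate, psi-hat below Equation 1."""
    t = len(grads)
    w = beta2 ** np.arange(t - 1, -1, -1)
    v = (1 - beta2) * np.sum(w * np.asarray(grads) ** 2)
    return np.sqrt(1 - beta2 ** t) / (eps + np.sqrt(v))

grads = [0.1, -0.2, 0.15]
update = phi(grads) * psi_hat(grads)   # m_t * l_t, the step in Algorithm 1
```

These closed forms agree with the usual incremental Adam recursions $m_t = \beta_1 m_{t-1} + (1-\beta_1)g_t$ and $v_t = \beta_2 v_{t-1} + (1-\beta_2)g_t^2$ after bias correction.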
+
+Algorithm 1: Generic adaptive optimization method setup. All operations are element-wise.
+Input: $\{\alpha_t\}_{t=1}^T$ : step size, $\{\phi_t, \psi_t\}_{t=1}^T$ : functions to calculate momentum and adaptive rate, $\theta_0$ : initial parameters, $f(\theta)$ : stochastic objective function.
+Output: $\theta_T$ : resulting parameters
+1 while $t = 1$ to $T$ do
+2 $g_t \gets \nabla_\theta f_t(\theta_{t-1})$ (Calculate gradients w.r.t. stochastic objective at timestep $t$)
+3 $m_t \gets \phi_t(g_1, \dots, g_t)$ (Calculate momentum)
+4 $l_t \gets \psi_t(g_1, \dots, g_t)$ (Calculate adaptive learning rate)
+5 $\theta_t \gets \theta_{t-1} - \alpha_t m_t l_t$ (Update parameters)
+6 return $\theta_T$
+
+Learning rate warmup. Instead of setting the learning rate $\alpha_{t}$ as a constant or in a decreasing order, a learning rate warmup strategy sets $\alpha_{t}$ as smaller values in the first few steps, thus not satisfying $\forall t\alpha_{t + 1}\leq \alpha_{t}$ . For example, linear warmup sets $\alpha_{t} = t\alpha_{0}$ when $t < T_w$ . Warmup has been demonstrated to be beneficial in many deep learning applications. For example, in the NMT experiments in Figure 1, the training loss converges around 10 when warmup is not applied (Adam-vanilla), and it surprisingly decreases to below 3 after applying warmup (Adam-warmup).
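As a concrete sketch, linear warmup can be implemented as a multiplier of $\min(t, T_w)/T_w$ on the base step size (the form we compare our rectification term against in Section 4.3); the constants below are placeholders:

```python
def linear_warmup_lr(base_lr, t, warmup_steps):
    """Linearly ramp the step size up to base_lr over warmup_steps updates."""
    return base_lr * min(t, warmup_steps) / warmup_steps

# ramps for t = 1..4, then stays flat at the base step size
schedule = [linear_warmup_lr(1e-3, t, warmup_steps=4) for t in range(1, 8)]
```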
+
+To further analyze this phenomenon, we visualize the histogram of the absolute value of gradients on a log scale in Figure 2. We observe that, without applying warmup, the gradient distribution is distorted to have a mass center in relatively small values within 10 updates. Such gradient distortion means that the vanilla Adam is trapped in bad/suspicious local optima after the first few
+
+
+Figure 2: The absolute gradient histogram of the Transformers on the De-En IWSLT'14 dataset during training (stacked along the y-axis). The x-axis is the absolute value on a log scale and the height is the frequency. Without warmup, the gradient distribution is distorted in the first 10 steps.
+
+
+Figure 3: The histogram of the absolute value of gradients (on a log scale) during the training of Transformers on the De-En IWSLT'14 dataset, using Adam-2k, RAdam and Adam-eps.
+
+updates. Warmup essentially reduces the impact of these problematic updates to avoid the convergence problem. In the following sections, we focus our analysis on learning rate warmup for the Adam algorithm, while it can be applied to other algorithms that use similar adaptive learning rate $(\psi(.))$ designs, e.g., RMSprop (Hinton et al., 2012) and Nadam (Dozat, 2016).
+
+# 3 VARIANCE OF THE ADAPTIVE LEARNING RATE
+
+In this section, we first introduce empirical evidence, then analyze the variance of the adaptive learning rate to support our hypothesis: due to the lack of samples in the early stage, the adaptive learning rate has an undesirably large variance, which leads to suspicious/bad local optima.
+
+To convey our intuition, we begin with a special case. When $t = 1$ , we have $\psi(g_1) = \sqrt{1 / g_1^2}$ . We view $\{g_1, \dots, g_t\}$ as i.i.d. Gaussian random variables following $\mathcal{N}(0, \sigma^2)$ . Therefore, $1 / g_1^2$ follows the scaled inverse chi-squared distribution Scale-inv- $\mathcal{X}^2(1, 1 / \sigma^2)$ , and $\mathrm{Var}[\sqrt{1 / g_1^2}]$ is divergent. This means that the adaptive learning rate can be undesirably large in the early stage of training. Meanwhile, setting a small learning rate in the early stage reduces the variance $(\mathrm{Var}[\alpha x] = \alpha^2 \mathrm{Var}[x])$ , thus alleviating this problem. Therefore, we suggest it is the unbounded variance of the adaptive learning rate in the early stage that causes the problematic updates.
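This blow-up is easy to observe numerically. The sketch below (illustrative only; i.i.d. $\mathcal{N}(0,1)$ gradients as in the analysis) estimates the sample variance of $\psi$ under the simple-average surrogate $\sqrt{t / \sum_i g_i^2}$ for a small and a large $t$:

```python
import numpy as np

rng = np.random.default_rng(0)

def psi_variance(t, trials=50000):
    """Monte-Carlo variance of sqrt(t / sum_i g_i^2) with g_i ~ N(0, 1)."""
    g = rng.normal(size=(trials, t))
    psi = np.sqrt(t / np.sum(g ** 2, axis=1))
    return psi.var()

var_small = psi_variance(1)    # heavy-tailed: enormous sample variance
var_large = psi_variance(50)   # concentrated: small variance
```

For $t = 1$ the estimate keeps growing with the number of trials, consistent with the divergent variance derived above.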
+
+# 3.1 WARMUP AS VARIANCE REDUCTION
+
+In this section, we design a set of controlled experiments to verify our hypothesis. Particularly, we design two variants of Adam that reduce the variance of the adaptive learning rate: Adam-2k and Adam-eps. We compare them to vanilla Adam with and without warmup on the IWSLT'14 German to English translation dataset (Cettolo et al., 2014).
+
+In order to reduce the variance of the adaptive learning rate $(\psi(.))$ , Adam-2k only updates $\psi(.)$ in the first two thousand iterations, while the momentum $(\phi(.))$ and parameters $(\theta)$ are fixed; other than this, it follows the original Adam algorithm. To make comparison with other methods, its iterations are indexed from -1999 instead of 1. In Figure 1, we observe that, after getting these additional two thousand samples for estimating the adaptive learning rate, Adam-2k avoids the convergence problem of vanilla Adam. Also, comparing Figure 2 and Figure 3, getting large enough samples prevents the gradient distribution from being distorted. These observations verify our hypothesis that the lack of sufficient data samples in the early stage is the root cause of the convergence issue.
+
+Another straightforward way to reduce the variance is to increase the value of $\epsilon$ in $\widehat{\psi}(g_1, \cdots, g_t) = \frac{\sqrt{1 - \beta_2^t}}{\epsilon + \sqrt{(1 - \beta_2)\sum_{i=1}^{t}\beta_2^{t-i}g_i^2}}$ . Indeed, if we assume $\widehat{\psi}(.)$ is subject to the uniform distribution, its variance equals $\frac{1}{12\epsilon^2}$ . Therefore, we design Adam-eps, which uses a non-negligibly large $\epsilon = 10^{-4}$ , while $\epsilon = 10^{-8}$ for vanilla Adam. Its performance is summarized in Figure 1. We observe that it does not suffer from the serious convergence problem of vanilla Adam. This further demonstrates that the convergence problem can be alleviated by reducing the variance of the adaptive learning rate, and also explains why tuning $\epsilon$ is important in practice (Liu et al., 2019). Besides, similar to Adam-2k, it prevents the gradient distribution from being distorted (as shown in Figure 3). However, as in Figure 1, it produces a much worse performance compared to Adam-2k and Adam-warmup. We conjecture that this is because a large $\epsilon$ induces a large bias into the adaptive learning rate and slows down the optimization process. Thus, we need a more principled and rigorous way to control the variance of the adaptive learning rate. In the next subsection, we present a theoretical analysis of the variance of the adaptive learning rate.
+
+# 3.2 ANALYSIS OF ADAPTIVE LEARNING RATE VARIANCE
+
+As mentioned before, Adam uses the exponential moving average to calculate the adaptive learning rate. For gradients $\{g_1,\dots ,g_t\}$ , their exponential moving average has a larger variance than their simple average. Also, in the early stage ( $t$ is small), the difference of the exponential weights of $\{g_1,\dots ,g_t\}$ is relatively small (up to $1 - \beta_2^{t - 1}$ ). Therefore, for ease of analysis, we approximate the distribution of the exponential moving average as the distribution of the simple average (Nau,
+
+2014), i.e., $p(\psi(.)) = p\left(\sqrt{\frac{1 - \beta_2^t}{(1 - \beta_2)\sum_{i=1}^{t}\beta_2^{t-i}g_i^2}}\right) \approx p\left(\sqrt{\frac{t}{\sum_{i=1}^{t}g_i^2}}\right)$ . Since $g_i \sim \mathcal{N}(0,\sigma^2)$ , we have $\frac{t}{\sum_{i=1}^{t}g_i^2} \sim \text{Scale-inv-}\mathcal{X}^2(t,\frac{1}{\sigma^2})$ . Therefore, we assume $\frac{1 - \beta_2^t}{(1 - \beta_2)\sum_{i=1}^{t}\beta_2^{t-i}g_i^2}$ is also subject to a scaled inverse chi-squared distribution with $\rho$ degrees of freedom (further analysis of this approximation is conducted in Section 5.3). Based on this assumption, we can calculate $\operatorname{Var}[\psi^2(.)]$ and the PDF of $\psi^2(.)$ . Now we proceed to the analysis of the variance of its square root, i.e., $\operatorname{Var}[\psi(.)]$ , and show how the variance changes with $\rho$ (which corresponds to the number of training samples used).
+
+Theorem 1. If $\psi^2(.) \sim \text{Scale-inv-}\mathcal{X}^2(\rho, \frac{1}{\sigma^2})$ , then $\operatorname{Var}[\psi(.)]$ monotonically decreases as $\rho$ increases.
+
+Proof. For all $\rho > 4$ , we have:
+
+$$
+\operatorname{Var}[\psi(.)] = \mathbb{E}[\psi^{2}(.)] - \mathbb{E}[\psi(.)]^{2} = \tau^{2}\left(\frac{\rho}{\rho - 2} - \frac{\rho\, 2^{2\rho - 5}}{\pi}\mathcal{B}\left(\frac{\rho - 1}{2}, \frac{\rho - 1}{2}\right)^{2}\right), \tag{2}
+$$
+
+where $\mathcal{B}(.)$ is the beta function. By analyzing the derivative of $\mathrm{Var}[\psi(.)]$ , we know it monotonically decreases as $\rho$ increases. The detailed derivation is elaborated in Appendix A.
+
+Theorem 1 gives a qualitative analysis of the variance of the adaptive learning rate. It shows that, due to the lack of training samples used in the early stage, $\mathrm{Var}[\psi(.)]$ is larger than in the late stage (Figure 8). To rigorously constrain the variance, we perform a quantified analysis of $\mathrm{Var}[\psi(.)]$ by estimating the degrees of freedom $\rho$ .
+
+# 4 RECTIFIED ADAPTIVE LEARNING RATE
+
+In the previous section, Equation 2 gives the analytic form of $\mathrm{Var}[\psi(.)]$ , where $\rho$ is the degrees of freedom. Here, we first estimate $\rho$ based on $t$ to conduct a quantified analysis of $\mathrm{Var}[\psi(g_1, \dots, g_t)]$ , then describe the design of the learning rate rectification and compare it to heuristic warmup strategies.
+
+# 4.1 ESTIMATION OF $\rho$
+
+The exponential moving average (EMA) can be interpreted as an approximation to the simple moving average (SMA) in real application (Nau, 2014), i.e.,
+
+$$
+p \left(\frac {\left(1 - \beta_ {2}\right) \sum_ {i = 1} ^ {t} \beta_ {2} ^ {t - i} g _ {i} ^ {2}}{1 - \beta_ {2} ^ {t}}\right) \approx p \left(\frac {\sum_ {i = 1} ^ {f (t , \beta_ {2})} g _ {t + 1 - i} ^ {2}}{f (t , \beta_ {2})}\right). \tag {3}
+$$
+
+Algorithm 2: Rectified Adam. All operations are element-wise.
+Input: $\{\alpha_{t}\}_{t = 1}^{T}$ : step size, $\{\beta_1,\beta_2\}$ : decay rates to calculate the moving average and moving 2nd moment, $\theta_0$ : initial parameter, $f_{t}(\theta)$ : stochastic objective function.
+Output: $\theta_{T}$ : resulting parameters
+1 $m_0, v_0 \gets 0, 0$ (Initialize moving 1st and 2nd moments)
+2 $\rho_{\infty} \gets 2 / (1 - \beta_{2}) - 1$ (Compute the maximum length of the approximated SMA)
+3 while $t = 1$ to $T$ do
+4 $g_{t} \gets \nabla_{\theta} f_{t}(\theta_{t - 1})$ (Calculate gradients w.r.t. stochastic objective at timestep $t$)
+5 $v_{t} \gets \beta_{2} v_{t - 1} + (1 - \beta_{2}) g_{t}^{2}$ (Update exponential moving 2nd moment)
+6 $m_t \gets \beta_1 m_{t - 1} + (1 - \beta_1) g_t$ (Update exponential moving 1st moment)
+7 $\widehat{m}_t \gets m_t / (1 - \beta_1^t)$ (Compute bias-corrected moving average)
+8 $\rho_t \gets \rho_\infty - 2t\beta_2^t / (1 - \beta_2^t)$ (Compute the length of the approximated SMA)
+9 if the variance is tractable, i.e., $\rho_t > 4$ , then
+10 $l_{t} \gets \sqrt{(1 - \beta_{2}^{t}) / v_{t}}$ (Compute adaptive learning rate)
+11 $r_{t} \gets \sqrt{\frac{(\rho_{t} - 4)(\rho_{t} - 2)\rho_{\infty}}{(\rho_{\infty} - 4)(\rho_{\infty} - 2)\rho_{t}}}$ (Compute the variance rectification term)
+12 $\theta_t \gets \theta_{t - 1} - \alpha_t r_t \widehat{m}_t l_t$ (Update parameters with adaptive momentum)
+13 else
+14 $\theta_t \gets \theta_{t - 1} - \alpha_t \widehat{m}_t$ (Update parameters with un-adapted momentum)
+15 return $\theta_T$
+
+where $f(t, \beta_2)$ is the length of the SMA which allows the SMA to have the same "center of mass" as the EMA. In other words, $f(t, \beta_2)$ satisfies:
+
+$$
+\frac {\left(1 - \beta_ {2}\right) \sum_ {i = 1} ^ {t} \beta_ {2} ^ {t - i} \cdot i}{1 - \beta_ {2} ^ {t}} = \frac {\sum_ {i = 1} ^ {f (t , \beta_ {2})} (t + 1 - i)}{f (t , \beta_ {2})}. \tag {4}
+$$
+
+By solving Equation 4, we have $f(t,\beta_{2}) = \frac{2}{1 - \beta_{2}} - 1 - \frac{2t\beta_{2}^{t}}{1 - \beta_{2}^{t}}$ . In the previous section, we assumed $\frac{1 - \beta_2^t}{(1 - \beta_2)\sum_{i = 1}^t\beta_2^{t - i}g_i^2}\sim$ Scale-inv- $\mathcal{X}^2 (\rho ,\frac{1}{\sigma^2})$ . Here, since $g_{i}\sim \mathcal{N}(0,\sigma^{2})$ , we have $\frac{\sum_{i = 1}^{f(t,\beta_2)}g_{t + 1 - i}^2}{f(t,\beta_2)}\sim$ Scale-inv- $\mathcal{X}^2 (f(t,\beta_2),\frac{1}{\sigma^2})$ . Thus, Equation 3 views Scale-inv- $\mathcal{X}^2 (f(t,\beta_2),\frac{1}{\sigma^2})$ as an approximation to Scale-inv- $\mathcal{X}^2 (\rho ,\frac{1}{\sigma^2})$ , so we treat $f(t,\beta_{2})$ as an estimate of $\rho$ . For ease of notation, we write $f(t,\beta_{2})$ as $\rho_t$ . We also refer to $\frac{2}{1 - \beta_2} - 1$ as $\rho_{\infty}$ (the maximum length of the approximated SMA), due to the inequality $f(t,\beta_{2})\leq \lim_{t\to \infty}f(t,\beta_{2}) = \frac{2}{1 - \beta_{2}} - 1$ .
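The derivation of $f(t, \beta_2)$ can be checked numerically: the EMA and SMA "centers of mass" in Equation 4 coincide for any $t$ and $\beta_2$ (a small illustrative check, not part of the algorithm):

```python
def f_sma(t, beta2):
    """Length of the SMA matching the EMA's center of mass (below Eq. 4)."""
    return 2 / (1 - beta2) - 1 - 2 * t * beta2 ** t / (1 - beta2 ** t)

def ema_center(t, beta2):
    """Left-hand side of Equation 4."""
    num = sum(beta2 ** (t - i) * i for i in range(1, t + 1))
    return (1 - beta2) * num / (1 - beta2 ** t)

def sma_center(t, f):
    """Right-hand side of Equation 4: mean of indices t, t-1, ..., t+1-f."""
    return t + 1 - (f + 1) / 2

t, beta2 = 50, 0.9
f = f_sma(t, beta2)   # about 18.48, below rho_inf = 2/(1-beta2) - 1 = 19
```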
+
+# 4.2 VARIANCE ESTIMATION AND RECTIFICATION
+
+Based on the previous estimations, we have $\mathrm{Var}[\psi(.)] = \tau^2\left(\frac{\rho_t}{\rho_t - 2} - \frac{\rho_t 2^{2\rho_t - 5}}{\pi}\mathcal{B}\left(\frac{\rho_t - 1}{2}, \frac{\rho_t - 1}{2}\right)^2\right)$ . The value of this function in the early stage is significantly larger than in the late stage (as analyzed later, it decays roughly at the speed of $O\left(\frac{1}{\rho_t}\right)$ ). For example, the variance at $\rho_t = 5$ is over 100 times larger than the variance at $\rho_t = 500$ . Additionally, based on Theorem 1, we know $\min_{\rho_t} \mathrm{Var}[\psi(.)] = \left.\mathrm{Var}[\psi(.)]\right|_{\rho_t = \rho_\infty}$ and mark this minimal value as $C_{\mathrm{Var}}$ . To ensure that the adaptive learning rate $(\psi(.))$ has a consistent variance, we rectify the variance at the $t$ -th time step as below,
+
+$$
+\operatorname{Var}\left[r_{t} \psi\left(g_{1}, \dots, g_{t}\right)\right] = C_{\mathrm{Var}} \quad \text{where} \quad r_{t} = \sqrt{C_{\mathrm{Var}} / \operatorname{Var}\left[\psi\left(g_{1}, \dots, g_{t}\right)\right]}.
+$$
+
+Although we have the analytic form of $\operatorname{Var}[\psi(.)]$ (i.e., Equation 2), it is not numerically stable. Therefore, we use the first-order approximation to calculate the rectification term. Specifically, by approximating $\sqrt{\psi^2(.)}$ to the first order (Wolter, 2007),
+
+$$
+\sqrt{\psi^{2}(.)} \approx \sqrt{\mathbb{E}[\psi^{2}(.)]} + \frac{1}{2\sqrt{\mathbb{E}[\psi^{2}(.)]}}\left(\psi^{2}(.) - \mathbb{E}[\psi^{2}(.)]\right) \quad \text{and} \quad \operatorname{Var}[\psi(.)] \approx \frac{\operatorname{Var}[\psi^{2}(.)]}{4\,\mathbb{E}[\psi^{2}(.)]}.
+$$
+
+Since $\psi^2(.)\sim$ Scale-inv- $\mathcal{X}^2 (\rho_t,\frac{1}{\sigma^2})$ , we have:
+
+$$
+\operatorname{Var}[\psi(.)] \approx \rho_{t} / \left[2(\rho_{t} - 2)(\rho_{t} - 4)\sigma^{2}\right]. \tag{5}
+$$
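A quick Monte-Carlo check of Equation 5 (mirroring the simulations in Section 5.3; the sample count and seed below are arbitrary choices of ours):

```python
import numpy as np

def var_psi_mc(rho, trials=50000, seed=1):
    """Monte-Carlo Var[psi] under the simple-average surrogate, sigma = 1."""
    g = np.random.default_rng(seed).normal(size=(trials, rho))
    return np.sqrt(rho / np.sum(g ** 2, axis=1)).var()

def var_psi_approx(rho, sigma2=1.0):
    """First-order approximation of Var[psi], Equation 5 (needs rho > 4)."""
    return rho / (2 * (rho - 2) * (rho - 4) * sigma2)

mc, approx = var_psi_mc(50), var_psi_approx(50)  # both close to 0.011
```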
+
+In Section 5.3, we conduct simulation experiments to examine Equation 5 and find that it is a reliable approximation. Based on Equation 5, we know that $\mathrm{Var}[\psi(.)]$ decreases approximately at the
+
+
+Figure 4: Language modeling (LSTMs) on the One Billion Word.
+
+Table 1: Image Classification
+
+| Dataset | Method | Acc. |
+| --- | --- | --- |
+| CIFAR10 | SGD | 91.51 |
+| CIFAR10 | Adam | 90.54 |
+| CIFAR10 | RAdam | 91.38 |
+| ImageNet | SGD | 69.86 |
+| ImageNet | Adam | 66.54 |
+| ImageNet | RAdam | 67.62 |
+
+
+Figure 5: Training of ResNet-18 on the ImageNet and ResNet-20 on the CIFAR10 dataset.
+
+
+
+speed of $O\left(\frac{1}{\rho_t}\right)$ . With this approximation, we can calculate the rectification term as:
+
+$$
+r _ {t} = \sqrt {\frac {(\rho_ {t} - 4) (\rho_ {t} - 2) \rho_ {\infty}}{(\rho_ {\infty} - 4) (\rho_ {\infty} - 2) \rho_ {t}}}.
+$$
+
+Applying our rectification term to Adam, we obtain a new variant of Adam, Rectified Adam (RAdam), as summarized in Algorithm 2. Specifically, when the length of the approximated SMA is less than or equal to 4, the variance of the adaptive learning rate is intractable and the adaptive learning rate is deactivated. Otherwise, we calculate the variance rectification term and update parameters with the adaptive learning rate. It is worth mentioning that, if $\beta_{2} \leq 0.6$ , we have $\rho_{\infty} \leq 4$ and RAdam degenerates to SGD with momentum.
+
+# 4.3 IN COMPARISON WITH WARMUP AND OTHER STABILIZATION TECHNIQUES
+
+Different from the analysis in this paper, warmup is originally proposed to handle training with very large batches for SGD (Goyal et al., 2017; Gotmare et al., 2019; Bernstein et al., 2018; Xiao et al., 2017). We notice that $r_t$ has a similar form to the heuristic linear warmup, which can be viewed as setting the rectification term as $\frac{\min(t, T_w)}{T_w}$ . It verifies our intuition that warmup works as a variance reduction technique. RAdam deactivates the adaptive learning rate when its variance is divergent, thus avoiding undesired instability in the first few updates. Besides, our method does not require an additional hyperparameter (i.e., $T_w$ ) and can automatically adapt to different moving average rules.
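To see the resemblance, one can compute both multipliers over $t$; the sketch below evaluates the rectification term $r_t$ (returning 0 where the adaptive rate is deactivated) and shows it rising from near zero toward one, like a warmup schedule without a hand-tuned $T_w$:

```python
import math

def rectification(t, beta2=0.999):
    """r_t from Section 4.2; returns 0 where the variance is intractable."""
    rho_inf = 2 / (1 - beta2) - 1
    rho_t = rho_inf - 2 * t * beta2 ** t / (1 - beta2 ** t)
    if rho_t <= 4:
        return 0.0      # adaptive learning rate deactivated
    return math.sqrt((rho_t - 4) * (rho_t - 2) * rho_inf
                     / ((rho_inf - 4) * (rho_inf - 2) * rho_t))

curve = [rectification(t) for t in (1, 5, 50, 500, 5000)]
# monotonically increasing, approaching 1 from below
```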
+
+Here, we identify and address an underlying issue of adaptive optimization methods independent of (neural) model architectures. Thus, the proposed rectification term is orthogonal to other training stabilization techniques such as gradient clipping (Bengio et al., 2013), smoothing the adaptive learning rate (i.e., increasing $\epsilon$ , applying geometric mean filter (Chen & Gu, 2018), or adding range constraints (Luo et al., 2019)), initialization (Balduzzi et al., 2017; Zhang et al., 2019) and normalization (Ba et al., 2016; Ioffe & Szegedy, 2015). Indeed, these techniques can be combined with the proposed variance rectification method.
+
+# 5 EXPERIMENTS
+
+We evaluate RAdam on several benchmarks: One Billion Word for language modeling; CIFAR10 and ImageNet for image classification; IWSLT'14 De-En/En-De and WMT'16 En-De for neural machine translation. Following Loshchilov & Hutter (2018), we decouple weight decay in vanilla Adam, Adam with warmup, and RAdam in our experiments. Details are in Appendix B.
+
+# 5.1 COMPARING TO VANILLA ADAM
+
+As analyzed before, the adaptive learning rate has undesirably large variance in the early stage of training and leads to suspicious/bad local optima on NMT. One question we are interested in
+
+
+Figure 6: Performance of RAdam, Adam and SGD with different learning rates on CIFAR10.
+
+
+Figure 7: Performance of RAdam, Adam with warmup on CIFAR10 with different learning rates.
+
+is: whether such an issue widely exists in other similar tasks and applications. Thus, we conduct a set of experiments with two classical tasks of NLP and CV, i.e., language modeling and image classification. RAdam not only results in consistent improvements over the vanilla Adam, but also demonstrates its robustness to the change of learning rates. It verifies that the variance issue exists in various machine learning applications, and has a big impact on the model behavior.
+
+Performance Comparison. The performance on language modeling (i.e., One Billion Word (Chelba et al., 2013)) and image classification (i.e., CIFAR10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009)) is presented in Figures 4 and 5. The results show that RAdam outperforms Adam on all three datasets. As shown in Figure 4, although the rectification term makes RAdam slower than the vanilla Adam in the first few epochs, it allows RAdam to converge faster afterwards. In other words, by reducing the variance of the adaptive learning rate in the early stage, RAdam achieves both faster convergence and better performance, which verifies the impact of the variance issue. We also observe that RAdam obtains consistent improvements over Adam on image classification. It is worth noting that, on both ImageNet and CIFAR10, although RAdam fails to outperform SGD in terms of test accuracy, it results in better training performance (e.g., the training accuracies of SGD, Adam, and RAdam on ImageNet are 69.57, 69.12, and 70.30 respectively).
+
+Robustness to Learning Rate Change. Besides performance improvements, RAdam also improves the robustness of model training. We conduct experiments with ResNet-20 on the CIFAR10 dataset using different initial learning rates and summarize the results in Figure 6. For learning rates within a broad range (i.e., $\{0.1, 0.03, 0.01, 0.003\}$), RAdam achieves consistent model performance (the test accuracy curves highly overlap with each other), while Adam and SGD are more sensitive to the learning rate. This observation can be interpreted as follows: by rectifying the variance of the adaptive learning rate, RAdam improves the robustness of model training and can adapt to learning rates from a broader range.
+
+# 5.2 COMPARING TO HEURISTIC WARMUP
+
+To examine the effectiveness of RAdam, we first conduct comparisons on neural machine translation, on which the state-of-the-art employs Adam with the linear warmup. Specifically, we conduct experiments on three datasets, i.e., IWSLT'14 De-En, IWSLT'14 En-De, and WMT'16 En-De. Due
+
+Table 2: BLEU score on Neural Machine Translation.
+
+| Method | IWSLT'14 DE-EN | IWSLT'14 EN-DE | WMT'16 EN-DE |
+| Adam with warmup | 34.66 ± 0.014 | 28.56 ± 0.067 | 27.03 |
+| RAdam | 34.76 ± 0.003 | 28.48 ± 0.054 | 27.27 |
+
+to the limited size of the IWSLT'14 dataset, we conduct experiments using 5 different random seeds and report their mean and standard deviation. As discussed before, the vanilla Adam algorithm leads to suspicious/bad local optima (i.e., it converges to a training perplexity around 500) and needs a learning rate warmup stage to stabilize training.
+
+We summarize the performance obtained with the heuristic warmup and our proposed rectification term in Table 2 and visualize the training curve of IWSLT De-En in Figure 1. With a consistent adaptive learning rate variance, our proposed method achieves similar performance to that of previous state-of-the-art warmup heuristics. It verifies our intuition that the problematic updates of Adam are indeed caused by the undesirably large variance in the early stage.
+
+Moreover, we apply Adam with warmup on the CIFAR10 dataset. Its best accuracy on the test set is 91.29, which is similar to RAdam (91.38). However, we find that RAdam requires less hyperparameter tuning. Specifically, we visualize the learning curves in Figure 7. For some warmup lengths, Adam with warmup is more sensitive to the choice of the learning rate. RAdam, at the same time, is not only more robust, but also controls the warmup behavior automatically (i.e., without requiring the length of warmup). For example, when setting the learning rate to 0.1, Adam with 100 steps of warmup fails to get a satisfying performance and only results in an accuracy of 90.13, while RAdam reaches an accuracy of 91.06 with the original setting of the moving average calculation (i.e., $\beta_{1} = 0.9$, $\beta_{2} = 0.999$). We conjecture that this is because RAdam, which is based on a rigorous variance analysis, explicitly avoids the extreme situation where the variance is divergent, and rectifies the variance to be consistent in other situations.
+
+# 5.3 SIMULATED VERIFICATION
+
+In Sections 3 and 4, we approximate $\mathrm{Var}[\sqrt{t / \sum_{i = 1}^{t}g_{i}^{2}}]$ to the first order, and assume that $\psi^2 (.) = \frac{1 - \beta_2^t}{(1 - \beta_2)\sum_{i = 1}^t\beta_2^{t - i}g_i^2}$ follows a scaled inverse chi-square distribution (this assumption covers the approximation from the EMA to the SMA). Here, we examine these two approximations using simulations.
+
+First Order Approximation of $\operatorname{Var}\left[\sqrt{t / \sum_{i=1}^{t} g_i^2}\right]$. To compare Equations 2 and 5, we assume $\tau = 1$ and plot their values and their difference for $\rho \in \{5, \dots, 500\}$ in Figure 8. The curves of the analytic form and the first-order approximation highly overlap, and their difference is much smaller than their values. This result verifies that our first-order approximation is accurate.
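+Equations 2 and 5 themselves are not reproduced in this excerpt; assuming Equation 2 is the analytic variance of Equation 8 and Equation 5 is the first-order (delta-method) approximation $\operatorname{Var}[\sqrt{x}] \approx \operatorname{Var}[x] / (4\,\mathbb{E}[x])$, the shrinking gap can be reproduced numerically with log-gamma arithmetic (our sketch):

```python
import math

def var_exact(rho, tau=1.0):
    """Analytic Var[sqrt(x)] for x ~ Scale-inv-chi^2(rho, tau^2); see Equation 8."""
    log_term = (math.log(rho) + (2 * rho - 5) * math.log(2) - math.log(math.pi)
                + 4 * math.lgamma((rho - 1) / 2) - 2 * math.lgamma(rho - 1))
    return tau ** 2 * (rho / (rho - 2) - math.exp(log_term))

def var_first_order(rho, tau=1.0):
    """Delta method: Var[x] / (4 E[x]), with Var[x] = 2 rho^2 tau^4 / ((rho-2)^2 (rho-4))."""
    return tau ** 2 * rho / (2 * (rho - 2) * (rho - 4))

for rho in (10, 50, 500):
    e, a = var_exact(rho), var_first_order(rho)
    print(f"rho={rho:>3}  exact={e:.6f}  first-order={a:.6f}  rel.diff={abs(a - e) / e:.3f}")
```

+The relative difference shrinks as $\rho$ grows, matching the overlap reported in Figure 8 for large $\rho$.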
+
+Scaled Inverse Chi-Square Distribution Assumption. In this paper, we assume $g_{i}$ follows a Normal distribution with a zero mean. We also assume $\psi^2(.)$ follows the scaled inverse chi-square distribution to derive $\mathrm{Var}[\psi(.)]$, based on the similarity between the exponential moving average and the simple moving average. Here, we empirically verify this assumption.
+
+Specifically, since $g_{i}$ in the optimization problem may not be zero-mean, we assume its expectation is $\mu$ and sample $g_{i}$ from $\mathcal{N}(\mu, 1)$. Then, based on these samples, we calculate the variance of the original adaptive learning rate and of the proposed rectified adaptive learning rate, i.e., $\mathrm{Var}\left[\frac{1}{\hat{v}_t}\right]$ and $\mathrm{Var}\left[\frac{r_t}{\hat{v}_t}\right]$ respectively. We set $\beta_{2}$ to 0.999, the number of sampled trajectories to 5000, and the number of iterations to 6000, and summarize the simulation results in Figure 9. Across all six settings with different $\mu$, the adaptive learning rate has a larger variance in the early stage, while the rectified adaptive learning rate has a relatively consistent variance. This verifies the reliability of our assumption.
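+A compact version of this simulation (our sketch; smaller trajectory and iteration counts than the paper's 5000/6000, and using $\psi = 1/\sqrt{\hat{v}_t}$ as the adaptive learning rate) runs as follows:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
beta2, mu = 0.999, 1.0
n_traj, n_iter = 2000, 2000
rho_inf = 2.0 / (1.0 - beta2) - 1.0

g = rng.normal(mu, 1.0, size=(n_traj, n_iter))
v = np.zeros(n_traj)
var_adam, var_radam = [], []
for t in range(1, n_iter + 1):
    v = beta2 * v + (1 - beta2) * g[:, t - 1] ** 2
    psi = 1.0 / np.sqrt(v / (1 - beta2 ** t))     # adaptive learning rate
    var_adam.append(psi.var())
    rho_t = rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)
    if rho_t > 4:
        r_t = math.sqrt(((rho_t - 4) * (rho_t - 2) * rho_inf)
                        / ((rho_inf - 4) * (rho_inf - 2) * rho_t))
        var_radam.append((r_t * psi).var())

print(f"Adam:  first/last variance = {var_adam[0]:.2e} / {var_adam[-1]:.2e}")
print(f"RAdam: spread (max/min)    = {max(var_radam) / min(var_radam):.1f}")
```

+As in Figure 9, the unrectified variance is largest in the first iterations, while the rectified variance stays within a much narrower band.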
+
+# 6 CONCLUSION
+
+In this paper, we explore the underlying principle of the effectiveness of the warmup heuristic used for adaptive optimization algorithms. Specifically, we identify that, due to the limited amount of samples in the early stage of model training, the adaptive learning rate has an undesirably large variance and can cause the model to converge to suspicious/bad local optima. We provide both empirical and theoretical evidence to support our hypothesis, and further propose a new variant
+
+
+Figure 8: The value of Equation 2, Equation 5 and their difference (absolute difference). The x-axis is $\rho$ and the y-axis is the variance (log scale).
+
+
+Figure 9: The simulation of $\mathrm{Var}\left[\frac{1}{\hat{v}_t}\right]$ and $\mathrm{Var}\left[\frac{r_t}{\hat{v}_t}\right]$ for $\mu = 0.1$, $\mu = 1$, and $\mu = 10$ (one panel each). The x-axis is the iteration (from 5), the y-axis is the variance (log scale).
+
+of Adam, whose adaptive learning rate is rectified so as to have a consistent variance. Empirical results demonstrate the effectiveness of our proposed method. In future work, we plan to replace the rectification strategy by sharing the second moment estimation across similar parameters.
+
+# ACKNOWLEDGMENTS
+
+We thank Zeyuan Allen-Zhu for valuable discussions and comments, and the Microsoft Research Technology Engineering team for setting up GPU machines. Research was sponsored in part by DARPA No. W911NF-17-C-0099 and FA8750-19-2-1004, National Science Foundation IIS 16-18481, IIS 17-04532, and IIS-17-41317, and DTRA HDTRA11810026.
+
+# REFERENCES
+
+Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
+David Balduzzi, Marcus Frean, Lennox Leary, JP Lewis, Kurt Wan-Duo Ma, and Brian McWilliams. The shattered gradients problem: If resnets are the answer, then what is the question? In ICML, 2017.
+Yoshua Bengio, Nicolas Boulanger-Lewandowski, and Razvan Pascanu. Advances in optimizing recurrent networks. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 8624-8628. IEEE, 2013.
+Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, and Anima Anandkumar. signSGD: Compressed optimisation for non-convex problems. In ICML, 2018.
+Augustin Cauchy. Méthode générale pour la résolution des systèmes d'équations simultanées. Comp. Rend. Sci. Paris, 25(1847):536-538, 1847.
+Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. Report on the 11th iwslt evaluation campaign, iwslt 2014. In Proceedings of the International Workshop on Spoken Language Translation., 2014.
+Ciprian Chelba, Tomas Mikolov, Michael Schuster, Qi Ge, Thorsten Brants, Phillip Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. In *INTERSPEECH*, 2013.
+Jinghui Chen and Quanquan Gu. Closing the generalization gap of adaptive gradient methods in training deep neural networks. arXiv preprint arXiv:1806.06763, 2018.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT*, 2019.
+
+Timothy Dozat. Incorporating nesterov momentum into adam. 2016.
+John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. In COLT, 2010.
+Carl-Friedrich Gauss. Theoria combinationis observationum erroribus minimis obnoxiae. Commentationes Societatis Regiae Scientiarum Gottingensis Recentiores, 1823.
+Akhilesh Gotmare, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. A closer look at deep learning heuristics: Learning rate restarts, warmup and distillation. In ICLR, 2019.
+Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
+Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky. Neural networks for machine learning, lecture 6a: Overview of mini-batch gradient descent. Coursera, 2012.
+Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2014.
+Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
+Liyuan Liu, Xiang Ren, Jingbo Shang, Jian Peng, and Jiawei Han. Efficient contextualized representation: Language model pruning for sequence labeling. EMNLP, 2018.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
+Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in adam. In ICLR, 2018.
+Liangchen Luo, Yuanhao Xiong, Yan Liu, and Xu Sun. Adaptive gradient methods with dynamic bound of learning rate. In ICLR, 2019.
+Robert Nau. Forecasting with moving averages. 2014.
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In NAACL, 2019.
+Martin Popel and Ondrej Bojar. Training tips for the transformer model. The Prague Bulletin of Mathematical Linguistics, 110(1):43-70, 2018.
+Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. In ICLR, 2018.
+Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.
+Kirk M Wolter. Taylor series methods. In Introduction to variance estimation. 2007.
+Lin Xiao, Adams Wei Yu, Qihang Lin, and Weizhu Chen. Dscovr: Randomized primal-dual block coordinate algorithms for asynchronous distributed optimization. J. Mach. Learn. Res., 2017.
+Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
+Hongyi Zhang, Yann N Dauphin, and Tengyu Ma. Fixup initialization: Residual learning without normalization. In ICLR, 2019.
+
+# A PROOF OF THEOREM 1
+
+For ease of notation, we refer to $\psi^2(.)$ as $x$ and to $\frac{1}{\sigma^2}$ as $\tau^2$. Thus, $x \sim$ Scale-inv-$\mathcal{X}^2(\rho, \tau^2)$ and:
+
+$$
+p(x) = \frac{(\tau^{2}\rho / 2)^{\rho / 2}}{\Gamma(\rho / 2)} \frac{\exp\left[\frac{-\rho\tau^{2}}{2x}\right]}{x^{1 + \rho / 2}} \quad \text{and} \quad \mathbb{E}[x] = \frac{\rho}{(\rho - 2)\sigma^{2}} \quad (\forall \rho > 2) \tag{6}
+$$
+
+where $\Gamma (.)$ is the gamma function. Therefore, we have:
+
+$$
+\mathbb {E} [ \sqrt {x} ] = \int_ {0} ^ {\infty} \sqrt {x} p (x) d x = \frac {\tau \sqrt {\rho} \Gamma ((\rho - 1) / 2)}{\sqrt {2} \Gamma (\rho / 2)} (\forall \rho > 4). \tag {7}
+$$
+
+Based on Equation 6 and 7, for $\forall \rho >4$ we have:
+
+$$
+\operatorname{Var}[\psi(.)] = \operatorname{Var}[\sqrt{x}] = \mathbb{E}[x] - \mathbb{E}[\sqrt{x}]^{2} = \tau^{2}\left(\frac{\rho}{\rho - 2} - \frac{\rho 2^{2\rho - 5}}{\pi}\mathcal{B}\left(\frac{\rho - 1}{2}, \frac{\rho - 1}{2}\right)^{2}\right), \tag{8}
+$$
+
+where $\mathcal{B}(.)$ is the beta function. To prove the monotonic property of $\mathrm{Var}[\psi(.)]$ , we need to show:
+
+Lemma 1. For $t \geq 4$, $\frac{\partial}{\partial t}\left(\frac{t}{t - 2} - \frac{t 2^{2t - 5}}{\pi}\mathcal{B}\left(\frac{t - 1}{2}, \frac{t - 1}{2}\right)^2\right) < 0$.
+
+Proof. The target inequality can be rewritten as
+
+$$
+\begin{array}{l} \frac {\partial}{\partial t} \left(\frac {t}{t - 2} - \frac {t 2 ^ {2 t - 5}}{\pi} \mathcal {B} \left(\frac {t - 1}{2}, \frac {t - 1}{2}\right) ^ {2}\right) \\ = \frac {- 2}{(t - 2) ^ {2}} - \frac {2 ^ {2 t - 5}}{\pi} \mathcal {B} (\frac {t - 1}{2}, \frac {t - 1}{2}) ^ {2} - \frac {t 2 ^ {2 t - 5} \ln 4}{\pi} \mathcal {B} (\frac {t - 1}{2}, \frac {t - 1}{2}) ^ {2} \\ - \frac {2 t 2 ^ {2 t - 5}}{\pi} \mathcal {B} \left(\frac {t - 1}{2}, \frac {t - 1}{2}\right) ^ {2} \left(\Psi \left(\frac {t - 1}{2}\right) - \Psi (t - 1)\right), \quad \left(\Psi (x) = \frac {\Gamma^ {\prime} (x)}{\Gamma (x)}\right) \\ < 0 \\ \end{array}
+$$
+
+This inequality is equivalent to:
+
+$$
+\begin{array}{l} \frac {6 4 \pi}{(t - 2) ^ {2} 4 ^ {t} \mathcal {B} (\frac {t - 1}{2} , \frac {t - 1}{2}) ^ {2}} + 1 + t \ln 4 + 2 t \Psi (\frac {t - 1}{2}) \\ > 2 t \Psi (t - 1) \stackrel {(i)} {=} t \left[ \Psi \left(\frac {t - 1}{2}\right) + \Psi \left(\frac {t}{2}\right) + \ln 4 \right], \\ \end{array}
+$$
+
+where $(i)$ is derived from the Legendre duplication formula. Simplifying the above inequality, we get:
+
+$$
+\frac {6 4 \pi}{(t - 2) ^ {2} 4 ^ {t} \mathcal {B} (\frac {t - 1}{2} , \frac {t - 1}{2}) ^ {2}} + 1 + t \Psi (\frac {t - 1}{2}) - t \Psi (\frac {t}{2}) > 0,
+$$
+
+We only need to show
+
+$$
+\begin{array}{l} \frac {6 4 \pi}{(t - 2) ^ {2} 4 ^ {t} \mathcal {B} (\frac {t - 1}{2} , \frac {t - 1}{2}) ^ {2}} + 1 + t \Psi (\frac {t - 1}{2}) - t \Psi (\frac {t}{2}) \\ \geq \frac {6 4 \pi}{(t - 2) ^ {2} 4 ^ {t} \mathcal {B} (\frac {t - 1}{2} , \frac {t - 1}{2}) ^ {2}} + 2 + t (\ln (t / 2) - 1 / (t / 2 - 0. 5)) - t \ln (t / 2) \\ = \frac {6 4 \pi}{(t - 2) ^ {2} 4 ^ {t} \mathcal {B} (\frac {t - 1}{2} , \frac {t - 1}{2}) ^ {2}} - \frac {2}{t - 1} \\ > \frac {6 4 \pi}{(t - 2) ^ {2} 4 ^ {t} \mathcal {B} (\frac {t - 1}{2} , \frac {t - 1}{2}) ^ {2}} - \frac {2}{t - 2} \geq 0, \\ \end{array}
+$$
+
+where the first inequality follows from $\ln (x) - 1 / (2x) > \Psi (x) > \ln (x + 0.5) - 1 / x$.
+
+Therefore, we only need to show
+
+$$
+3 2 \pi \geq (t - 2) 4 ^ {t} \mathcal {B} (\frac {t - 1}{2}, \frac {t - 1}{2}) ^ {2},
+$$
+
+which is equivalent to
+
+$$
+\begin{array}{l} (t - 2) 4 ^ {t} \mathcal {B} \left(\frac {t - 1}{2}, \frac {t - 1}{2}\right) ^ {2} = (t - 2) 4 ^ {t} \frac {\Gamma \left(\frac {t - 1}{2}\right) ^ {4}}{\Gamma (t - 1) ^ {2}} \\ \stackrel {(i)} {=} (t - 2) 4 ^ {t} \frac {\Gamma (\frac {t - 1}{2}) ^ {2}}{\Gamma (t / 2) ^ {2}} 4 ^ {2 - t} \pi = 1 6 \pi (t - 2) \frac {\Gamma (\frac {t - 1}{2}) ^ {2}}{\Gamma (t / 2) ^ {2}} \leq 3 2 \pi , \\ \end{array}
+$$
+
+where $(i)$ is from Legendre duplication formula.
+
+So we only need to show
+
+$$
+(t - 2) \frac {\Gamma (\frac {t - 1}{2}) ^ {2}}{\Gamma (t / 2) ^ {2}} \leq 2 \tag {9}
+$$
+
+Using Gautschi's inequality ($x^{1 - s} < \frac{\Gamma(x + 1)}{\Gamma(x + s)} < (x + 1)^{1 - s}$ for $x > 0$ and $0 < s < 1$) with $x = \frac{t - 2}{2}$ and $s = \frac{1}{2}$, so that $\frac{\Gamma(\frac{t - 1}{2})}{\Gamma(t / 2)} < \left(\frac{t - 2}{2}\right)^{-1/2}$, we have
+
+$$
+(t - 2)\frac{\Gamma\left(\frac{t - 1}{2}\right)^{2}}{\Gamma(t / 2)^{2}} < (t - 2)\left(\frac{t - 2}{2}\right)^{-1} = 2 \tag{10}
+$$
+
+# B IMPLEMENTATION DETAILS
+
+# B.1 LANGUAGE MODELING
+
+Our implementation is based on previous work (Liu et al., 2018). Specifically, we use two-layer LSTMs with 2048 hidden states and adaptive softmax to conduct experiments on the One Billion Word dataset. Word embeddings (randomly initialized) of 300 dimensions are used as the input, and the adaptive softmax is incorporated with the default setting (cut-offs are set to [4000, 40000, 200000]). Additionally, as pre-processing, we replace all tokens occurring 3 times or fewer with UNK, which shrinks the dictionary from 7.9M to 6.4M. Dropout is applied to each layer with a ratio of 0.1, and gradients are clipped at 5.0. We use the default hyper-parameters to update the moving averages, i.e., $\beta_{1} = 0.9$ and $\beta_{2} = 0.999$. The learning rate starts from 0.001 and is decayed at the start of the 10th epoch. LSTMs are unrolled for 20 steps without resetting the LSTM states, and the batch size is set to 128. All models are trained on one NVIDIA Tesla V100 GPU.
+
+# B.2 IMAGE CLASSIFICATION
+
+We use the default ResNet architectures (He et al., 2016) from a public PyTorch re-implementation. Specifically, we use a 20-layer ResNet (9 Basic Blocks) for CIFAR-10 and an 18-layer ResNet (8 Basic Blocks) for ImageNet. The batch size is 128 for CIFAR-10 and 256 for ImageNet. On CIFAR-10, the model is trained for 186 epochs and the learning rate decays by 0.1 at the 81st and 122nd epochs; on ImageNet, the model is trained for 90 epochs and the learning rate decays by 0.1 at the 31st and 61st epochs. For Adam and RAdam, we set $\beta_{1} = 0.9$, $\beta_{2} = 0.999$. For SGD, we set the momentum factor to 0.9. The weight decay rate is $10^{-4}$. Random cropping and random horizontal flipping are applied to the training data.
+
+# B.3 NEURAL MACHINE TRANSLATION
+
+Our experiments are based on the default Transformer (Vaswani et al., 2017) implementation from the fairseq package (Ott et al., 2019). Specifically, we use word embeddings with 512 dimensions and a 6-layer encoder / decoder with 4 heads and 1024 hidden dimensions on the IWSLT'14 dataset, and word embeddings with 512 dimensions and a 6-layer encoder / decoder with 8 heads and 2048 hidden dimensions on the WMT'16 dataset. Label smoothed cross entropy is used as the objective function with an uncertainty of 0.1 (Szegedy et al., 2016). We use linear learning rate decay starting from $3e^{-4}$, and the checkpoints of the last 20 epochs are averaged before evaluation. As to the warmup strategy, we use a linear warmup for Adam in the first 4000 updates, and set $\beta_{2}$ to satisfy $\nu = 4000$ ($\beta_{2} = 0.9995$). On the IWSLT'14 dataset, we conduct training on one NVIDIA Tesla V100 GPU, set the maximum batch size to 4000, apply dropout with a ratio of 0.3, use weight decay of 0.0001, and clip the gradient norm at 25. On the WMT'16 dataset, we conduct training on four NVIDIA Quadro R8000 GPUs and set the maximum batch size to 8196.
+
+# C DOWNGRADING TO SGDM
+
+As a byproduct of the math derivations, RAdam degenerates to SGD with momentum in the first several updates. Although this stage only contains several gradient updates, these updates could be quite damaging (e.g., in our Figure 2, the gradient distribution is distorted within 10 gradient updates). Intuitively, updates with a divergent adaptive learning rate variance could be more damaging than ones with converged variance, as divergent variance implies more instability. As a case study, we performed experiments on the CIFAR10 dataset. Five-run average results are summarized in Table 3. The optimizer fails to get an equally reliable model when changing the first 4 updates to Adam, yet the influence of switching is less deleterious when we change updates 5-8 instead. This result verifies our intuition and is in agreement with our theory that the first few updates could be more damaging than later updates. That said, we still want to emphasize that downgrading to SGDM is only a minor part of our algorithm design, whereas our main focus is on the mechanism of warmup and the derivation of the rectification term.
+
+Table 3: Performance on CIFAR10 (lr = 0.1).
+
+| 1-4 steps | 5-8 steps | 8+ steps | test acc | train loss | train error |
+| RAdam | RAdam | RAdam | 91.08 | 0.021 | 0.74 |
+| Adam (w. divergent var.) | RAdam | RAdam | 89.98 | 0.060 | 2.12 |
+| SGD | Adam (w. convergent var.) | RAdam | 90.29 | 0.038 | 1.23 |
\ No newline at end of file
diff --git a/onthevarianceoftheadaptivelearningrateandbeyond/images.zip b/onthevarianceoftheadaptivelearningrateandbeyond/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..08db52cef8717d431154fcd2e37dbe84b4fba983
--- /dev/null
+++ b/onthevarianceoftheadaptivelearningrateandbeyond/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04ffd2f2d186c6b44967b093403b64bbf32ee6017d47a12cf924479d1f646abf
+size 635748
diff --git a/onthevarianceoftheadaptivelearningrateandbeyond/layout.json b/onthevarianceoftheadaptivelearningrateandbeyond/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..05885d85fea434483eed1dd9bb13c80e4c44dd6f
--- /dev/null
+++ b/onthevarianceoftheadaptivelearningrateandbeyond/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:add743655827fced3416aae1356c3d9ea1822b6d13ea58770b150fb89f44c1db
+size 511404
diff --git a/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/c358ab2f-bdb5-4c0f-ab32-3117487a5b5c_content_list.json b/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/c358ab2f-bdb5-4c0f-ab32-3117487a5b5c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e9fa52c859d0c8232b47ef44831113e518c1dc19
--- /dev/null
+++ b/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/c358ab2f-bdb5-4c0f-ab32-3117487a5b5c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b67693b1a392bb9a4f92360f0d40a83196ac6e946f88a76fa584dcdd41c65985
+size 83733
diff --git a/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/c358ab2f-bdb5-4c0f-ab32-3117487a5b5c_model.json b/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/c358ab2f-bdb5-4c0f-ab32-3117487a5b5c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..70455cb6c60637b71f6f05c5525d381e1f83cee0
--- /dev/null
+++ b/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/c358ab2f-bdb5-4c0f-ab32-3117487a5b5c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cb2e6753dc555955481b8e13fd96ebb8beed35aa9aadf7477a019f798f7c7d95
+size 104482
diff --git a/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/c358ab2f-bdb5-4c0f-ab32-3117487a5b5c_origin.pdf b/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/c358ab2f-bdb5-4c0f-ab32-3117487a5b5c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3ab1a5f005b67e7c8ef374147b752e506734aff2
--- /dev/null
+++ b/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/c358ab2f-bdb5-4c0f-ab32-3117487a5b5c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4f194e9b6983a2bb80ddf718815a35c0ae0e587748cdabbf9173356253a7339e
+size 800753
diff --git a/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/full.md b/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b9c120345344cd5a9ac6a8d5f162002b6689533f
--- /dev/null
+++ b/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/full.md
@@ -0,0 +1,345 @@
+# ON THE WEAKNESSES OF REINFORCEMENT LEARNING FOR NEURAL MACHINE TRANSLATION
+
+Leshem Choshen $^{1}$ , Lior Fox $^{2}$ , Zohar Aizenbud $^{1}$ , Omri Abend $^{1,3}$
+
+1 School of Computer Science and Engineering, 2 The Edmond and Lily Safra Center for Brain Sciences
+$^{3}$ Department of Cognitive Sciences
+
+The Hebrew University of Jerusalem
+
+first.last@mail.huji.ac.il, oabend@cs.huji.ac.il
+
+# ABSTRACT
+
+Reinforcement learning (RL) is frequently used to improve performance in text generation tasks, including machine translation (MT), notably through the use of Minimum Risk Training (MRT) and Generative Adversarial Networks (GANs). However, little is known about what and how these methods learn in the context of MT. We prove that one of the most common RL methods for MT does not optimize the expected reward, and show that other methods take an infeasibly long time to converge. In fact, our results suggest that RL practices in MT are likely to improve performance only where the pre-trained parameters are already close to yielding the correct translation. Our findings further suggest that observed gains may be due to effects unrelated to the training signal, concretely, changes in the shape of the distribution curve.
+
+# 1 INTRODUCTION
+
+Reinforcement learning (RL) is an appealing path for advancement in Machine Translation (MT), as it allows training systems to optimize non-differentiable score functions, common in MT evaluation, as well as tackling the "exposure bias" (Ranzato et al., 2015) in standard training, namely that the model is not exposed during training to incorrectly generated tokens, and is thus unlikely to recover from generating such tokens at test time. These motivations have led to much interest in RL for text generation in general and MT in particular (see §2). Various policy gradient methods have been used, notably REINFORCE (Williams, 1992) and variants thereof (e.g., Ranzato et al., 2015; Edunov et al., 2018) and Minimum Risk Training (MRT; e.g., Och, 2003; Shen et al., 2016). Another popular use of RL is for training GANs (Yang et al., 2018; Tevet et al., 2018). Nevertheless, despite increasing interest and strong results, little is known about what accounts for these performance gains, and the training dynamics involved.
+
+We present the following contributions. First, our theoretical analysis shows that commonly used approximation methods are theoretically ill-founded, and may converge to parameter values that do not minimize the risk, nor are local minima thereof (§2.2).
+
+Second, using both naturalistic experiments and carefully constructed simulations, we show that performance gains observed in the literature likely stem not from making target tokens the most probable, but from unrelated effects, such as increasing the peakiness of the output distribution (i.e., the probability mass of the most probable tokens). We do so by comparing a setting where the reward is informative, vs. one where it is constant. In §4 we discuss this peakiness effect (PKE).
+
+Third, we show that promoting the target token to be the mode is likely to take a prohibitively long time. The only case we find, where improvements are likely, is where the target token is among the first 2-3 most probable tokens according to the pretrained model. These findings suggest that REINFORCE (§5) and CMRT (§6) are likely to improve over the pre-trained model only under the best possible conditions, i.e., where the pre-trained model is "nearly" correct.
+
+We conclude by discussing other RL practices in MT which should be avoided for practical and theoretical reasons, and briefly discuss alternative RL approaches that will allow RL to tackle a larger class of errors in pre-trained models (§7).
+
+# 2 RL IN MACHINE TRANSLATION
+
+An MT system generates tokens $y = (y_{1}, \dots, y_{n})$ from a vocabulary $V$ one token at a time. The probability of generating $y_{i}$ given the preceding tokens $y_{<i}$ is denoted $P_{\theta}(y_{i} \mid y_{<i})$, where $\theta$ denotes the model parameters.
+
+We also assume there is exactly one valid target token, as de facto, training is done against a single reference (Schulz et al., 2018). In practice, either a token-level reward is approximated using Monte-Carlo methods (e.g., Yang et al., 2018), or a sentence-level (sparse) reward is given at the end of the episode (sentence). The latter is equivalent to a uniform token-level reward.
+
+$r$ is often the negative log-likelihood, or a standard MT metric, e.g., BLEU (Papineni et al., 2002). RL's goal is to maximize the expected episode reward (denoted with $R$ ); i.e., to find
+
+$$
+\theta^ {*} = \underset {\theta} {\arg \max } R (\theta) = \underset {\theta} {\arg \max } \mathbb {E} _ {y \sim P _ {\theta}} [ r (y) ] \tag {1}
+$$
+
+# 2.1 REINFORCE
+
+For a given source sentence and past predictions $y_{<i}$, REINFORCE samples $k$ tokens from $P_{\theta}$ and follows the gradient of the expected reward. However, given CMRT's popularity, the strong results it yielded, and the absence of theory explaining it, we discuss it here. Given a sample $S$, the gradient of $\widetilde{R}$ is given by
+
+$$
+\nabla \widetilde{R} = \alpha \sum_{i = 1}^{k}\left(Q(y_{i}) \cdot r(y_{i}) \cdot \nabla \log P(y_{i})\right) - \mathbb{E}_{Q}[r]\, \nabla \log Z(S) \tag{3}
+$$
+
+where $Z(S) = \sum_{i}P(y_{i})^{\alpha}$ . See Appendix A.2.
+
+Comparing Equations 2 and 3, the differences between REINFORCE and CMRT become apparent. First, $\nabla \widetilde{R}$ has an additional term, proportional to $\nabla \log Z(S)$, which yields the contrastive effect. This contrast may improve the rate of convergence, since it counters the decrease in probability mass for non-sampled tokens.
+
+Second, given $S$, the relative weighting of the gradients $\nabla \log P(y_i)$ is proportional to $r(y_i)Q(y_i)$, or equivalently to $r(y_i)P(y_i)^{\alpha}$. CMRT with de-duplication sums over distinct values in $S$ (Equation 3), while REINFORCE sums over all sampled values, so the relative weight of a value $y_i$ in REINFORCE is $\frac{r(y_i)|\{j: y_j = y_i\}|}{k}$. For $\alpha = 1$ the expected values of these relative weights are the same, and so for $\alpha < 1$ (as is commonly used), more weight is given to improbable tokens, which could also have a positive effect on the convergence rate. However, if $\alpha$ is too close to 0, $\nabla \widetilde{R}$ vanishes, as it is hardly affected by $\theta$. This tradeoff explains the importance of tuning $\alpha$ reported in the literature. In §6 we present simulations with CMRT, showing very similar trends to those of REINFORCE.
+
+# 3 MOTIVATING DISCUSSION
+
+Implementing stochastic gradient ascent, REINFORCE is guaranteed to converge to a stationary point of $R$ under broad conditions. However, little is known about its convergence rate under the conditions prevailing in NMT.
+
+We begin with a qualitative, motivating analysis of these questions. As work on language generation empirically showed, RNNs quickly learn to output very peaky distributions (Press et al., 2017). This tendency is advantageous for generating fluent sentences with high probability, but may also entail slower convergence rates when using RL to fine-tune the model, because RL methods used in text generation sample from the (pretrained) policy distribution, which means they mostly sample what the pretrained model deems to be likely. Since the pretrained model (or policy) is peaky, exploration of other potentially more rewarding tokens will be limited, hampering convergence.
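The exploration bottleneck can be quantified with a one-line calculation (a back-of-the-envelope sketch; the probabilities below are illustrative): a token to which the peaky policy assigns probability $p$ appears at least once among $k$ independent draws with probability $1 - (1-p)^k$.

```python
def p_explored(p_token: float, k: int) -> float:
    """Probability that a token with policy probability p_token
    is sampled at least once among k independent draws."""
    return 1.0 - (1.0 - p_token) ** k

# A rewarding but improbable token under a peaky pretrained policy:
rare = p_explored(1e-4, 100)    # ~0.01: almost never explored
# A token the peaky policy already favors:
likely = p_explored(0.5, 100)   # ~1.0: sampled essentially always
```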
+
+Intuitively, REINFORCE increases the probabilities of successful (positively rewarding) observations, weighing updates by how rewarding they were. When sampling a handful of tokens in each context (source sentence $x$ and generated prefix $y_{<i}$), exploration is largely confined to tokens the pretrained policy already considers probable, so rarely sampled but rewarding tokens receive few updates.
+
+| $S$ | $P(S)$ | $\widetilde{R}$ | $\nabla \widetilde{R}$ |
+| --- | --- | --- | --- |
+| $\{a,b\}$ | $4\theta^3$ | $\frac{1}{1+2\theta}$ | $\frac{-2}{(1+2\theta)^2}$ |
+| $\{a,c\}$ | $2\theta(1-\theta-2\theta^2)$ | $\frac{1+\theta-2\theta^2}{2-4\theta^2}$ | $\frac{2\theta^2+1}{2(1-2\theta^2)^2}$ |
+| $\{b,c\}$ | $4\theta^2(1-\theta-2\theta^2)$ | $\frac{1-\theta-2\theta^2}{2-2\theta}$ | $\frac{\theta^2-2\theta}{(1-\theta)^2}$ |
+| $a,a$ | $\theta^2$ | $1$ | $0$ |
+| $b,b$ | $4\theta^4$ | $0$ | $0$ |
+| $c,c$ | $(1-\theta-2\theta^2)^2$ | $0.5$ | $0$ |
+
+Table 1: The gradients of $\widetilde{R}$ for each possible sample $S$ . The batch size is $k = 2$ . Rows correspond to different sampled outcomes. $\nabla \widetilde{R}$ is the gradient of $\widetilde{R}$ given the corresponding value for $S$ .
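The closed forms in Table 1 can be checked numerically. Assuming the three-token distribution implied by the table's diagonal rows ($P(a)=\theta$, $P(b)=2\theta^2$, $P(c)=1-\theta-2\theta^2$), rewards $r(a)=1$, $r(b)=0$, $r(c)=0.5$, and $\alpha = 1$ (all inferred assumptions), the sketch below recomputes $\widetilde{R}$ with deduplication and checks its gradient by central finite differences:

```python
ALPHA = 1.0  # smoothing exponent; the table's entries correspond to alpha = 1

def token_probs(theta):
    # Distribution implied by Table 1's diagonal rows (an inferred assumption):
    # P(a,a) = theta^2, P(b,b) = 4*theta^4, P(c,c) = (1 - theta - 2*theta^2)^2
    return {"a": theta, "b": 2 * theta**2, "c": 1 - theta - 2 * theta**2}

REWARD = {"a": 1.0, "b": 0.0, "c": 0.5}  # read off rows (a,a), (b,b), (c,c)

def r_tilde(theta, sample):
    # CMRT objective with deduplication: sum over *distinct* sampled tokens
    p = token_probs(theta)
    distinct = sorted(set(sample))
    z = sum(p[y] ** ALPHA for y in distinct)
    return sum((p[y] ** ALPHA / z) * REWARD[y] for y in distinct)

def grad_r_tilde(theta, sample, eps=1e-6):
    # central finite difference, to compare against the closed forms in Table 1
    return (r_tilde(theta + eps, sample) - r_tilde(theta - eps, sample)) / (2 * eps)
```

At $\theta = 0.1$, for instance, $\widetilde{R}(\{a,b\}) = 1/(1+2\theta) \approx 0.833$ and the finite-difference gradient matches $-2/(1+2\theta)^2$.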
+
+# A.2 DERIVING THE GRADIENT OF $\widetilde{R}$
+
+Given $S$ , recall the definition of $\widetilde{R}$ :
+
+$$
+\widetilde {R} (\theta , S) = \sum_ {i = 1} ^ {k} Q _ {\theta , S} (y _ {i}) r (y _ {i})
+$$
+
+Taking the derivative w.r.t. $\theta$
+
+$$
+\begin{array}{l} \nabla \widetilde {R} = \sum_ {i = 1} ^ {k} r (y _ {i}) \frac {\alpha P (y _ {i}) ^ {\alpha - 1} \nabla P (y _ {i}) \cdot Z (S) - P (y _ {i}) ^ {\alpha} \nabla Z (S)}{Z (S) ^ {2}} = \\ \sum_ {i = 1} ^ {k} r \left(y _ {i}\right) \left(\frac {\alpha \nabla P \left(y _ {i}\right)}{P \left(y _ {i}\right)} Q \left(y _ {i}\right) - \frac {\nabla Z (S)}{Z (S)} Q \left(y _ {i}\right)\right) = \\ \sum_ {i = 1} ^ {k} r \left(y _ {i}\right) Q \left(y _ {i}\right) \left(\alpha \nabla \log P \left(y _ {i}\right) - \nabla \log Z (S)\right) = \\ \alpha \sum_ {i = 1} ^ {k} \left(r (y _ {i}) Q (y _ {i}) \nabla \log P (y _ {i})\right) - \mathbb {E} _ {Q} [ r ] \nabla \log Z (S) \\ \end{array}
+$$
+
+
+Figure 7: The probability of different tokens following REINFORCE, in the controlled simulations in the Constant Reward setting. The left/center/right figures correspond to simulations where the target token ( $y_{best}$ ) was initially the second/third/fourth most probable token. The green line corresponds to the target token, yellow lines to medium-reward tokens and red lines to tokens with $r(y) = 0$.
+
+# A.3 NMT IMPLEMENTATION DETAILS
+
+Truecasing and tokenization were used (Koehn et al., 2007), including escaping HTML symbols; the hyphen "-" marking a compound was converted into a separate "=" token. Earlier preprocessing pipelines converted the compound hyphen to ##AT##-##AT##, but standard tokenizers split that string into 11 different tokens, which over-represents this character when BLEU is calculated. BPE (Sennrich et al., 2016) extracted 30,715 tokens. For the MT experiments we used 6 layers in both the encoder and the decoder, with embeddings of size 512. Gradient clipping with a threshold of 5 was used for pretraining (see the Discussion for why it was not used in training). We did not use attention dropout, but a residual dropout rate of 0.1 was used. In pretraining and training, sentences of more than 50 tokens were discarded. Pretraining and training were considered finished when BLEU on the development set did not increase for 10 consecutive evaluations; evaluation was done every 1,000 batches (batch size 100) in pretraining and every 5,000 batches (batch size 256) in training. The learning rate was 0.01 for rmsprop (Tieleman & Hinton, 2012) in pretraining and 0.005 for adam with decay (Kingma & Ba, 2015) in training, with 4,000 learning-rate warm-up steps. Pretraining took about 7 days with 4 GPUs; training afterwards took roughly the same time. Monte Carlo rollouts used 20 sentence rolls per word.
+
+# A.4 DETAILED RESULTS FOR CONSTANT REWARD SETTING
+
+We present graphs for the Constant Reward setting in Figures 7 and 8. Trends are similar to those obtained for the Simulated Reward setting.
+
+
+Figure 8: Difference between the ranks of $y_{best}$ in the model reinforced with a constant reward and in the pretrained model. Each column $x$ shows the difference between the probability that $y_{best}$ is ranked $x$ in the reinforced model and the same probability in the pretrained model.
\ No newline at end of file
diff --git a/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/images.zip b/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..cd1f5eecb8f5baddce6f4122739918377ae40471
--- /dev/null
+++ b/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:185c6bc36d11fccdc6c9791e42d06a0925a8e11b383172e11f055a67751c2851
+size 233701
diff --git a/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/layout.json b/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..875299da2042d026585bff5a0c5f93963aaab13d
--- /dev/null
+++ b/ontheweaknessesofreinforcementlearningforneuralmachinetranslation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:abef08b3f893aead02fb76d7a4bccfb15991ef1b857685722aa0bea400c51d89
+size 508877
diff --git a/optimalstrategiesagainstgenerativeattacks/f77561cf-dc21-4ab7-a9c2-ea3c24a7a69f_content_list.json b/optimalstrategiesagainstgenerativeattacks/f77561cf-dc21-4ab7-a9c2-ea3c24a7a69f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..65693145042a771fcaa3275e5ddd21284798c326
--- /dev/null
+++ b/optimalstrategiesagainstgenerativeattacks/f77561cf-dc21-4ab7-a9c2-ea3c24a7a69f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:235ed3da1cc5d23f137291be380f7f7601cfafdb71708ee1dad6017f14d96fad
+size 250917
diff --git a/optimalstrategiesagainstgenerativeattacks/f77561cf-dc21-4ab7-a9c2-ea3c24a7a69f_model.json b/optimalstrategiesagainstgenerativeattacks/f77561cf-dc21-4ab7-a9c2-ea3c24a7a69f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8cbf35863aaf7e216edb4eac53d5e63221116e40
--- /dev/null
+++ b/optimalstrategiesagainstgenerativeattacks/f77561cf-dc21-4ab7-a9c2-ea3c24a7a69f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:55e7f891346678fa982246c16be02f7383df7f165b9c7f4fd096fb1c58df03b6
+size 285028
diff --git a/optimalstrategiesagainstgenerativeattacks/f77561cf-dc21-4ab7-a9c2-ea3c24a7a69f_origin.pdf b/optimalstrategiesagainstgenerativeattacks/f77561cf-dc21-4ab7-a9c2-ea3c24a7a69f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5bb55f71c3354e02187e9538d18c7cc9b12f27f7
--- /dev/null
+++ b/optimalstrategiesagainstgenerativeattacks/f77561cf-dc21-4ab7-a9c2-ea3c24a7a69f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:351bf8f022eb3b742abb58922094d69a47346980578ca037102901027d21b1b6
+size 6373995
diff --git a/optimalstrategiesagainstgenerativeattacks/full.md b/optimalstrategiesagainstgenerativeattacks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..be1811e33b4a40599854966a65c3ad13b0f9da95
--- /dev/null
+++ b/optimalstrategiesagainstgenerativeattacks/full.md
@@ -0,0 +1,1191 @@
+# OPTIMAL STRATEGIES AGAINST GENERATIVE ATTACKS
+
+Roy Mor
+
+Tel Aviv University, Tel Aviv, Israel
+
+Erez Peterfreund
+
+The Hebrew University of Jerusalem, Jerusalem, Israel
+
+Matan Gavish
+
+The Hebrew University of Jerusalem, Jerusalem, Israel
+
+Amir Globerson
+
+Tel Aviv University, Tel Aviv, Israel
+
+# ABSTRACT
+
+Generative neural models have improved dramatically recently. With this progress comes the risk that such models will be used to attack systems that rely on sensor data for authentication and anomaly detection. Many such learning systems are installed worldwide, protecting critical infrastructure or private data against malfunction and cyber attacks. We formulate the scenario of such an authentication system facing generative impersonation attacks, characterize it from a theoretical perspective and explore its practical implications. In particular, we ask fundamental theoretical questions in learning, statistics and information theory: How hard is it to detect a "fake reality"? How much data does the attacker need to collect before it can reliably generate nominally-looking artificial data? Are there optimal strategies for the attacker or the authenticator? We cast the problem as a maximin game, characterize the optimal strategy for both attacker and authenticator in the general case, and provide the optimal strategies in closed form for the case of Gaussian source distributions. Our analysis reveals the structure of the optimal attack and the relative importance of data collection for both authenticator and attacker. Based on these insights we design practical learning approaches and show that they result in models that are more robust to various attacks on real-world data.
+
+# 1 INTRODUCTION
+
+Generative models have attracted considerable attention since the introduction of Generative Adversarial Networks (Goodfellow et al., 2014a). Empirically, GANs have been shown to generate novel data instances that resemble those in the true distribution of the data. The success of GANs also comes with the risk that generative models will be used for attacking sensor-based security systems. One example is identity authentication systems, where an individual is identified via her images, and GANs might be able to generate such images to gain access (Thies et al., 2016). Another is anomaly detection systems protecting critical infrastructure. As demonstrated by recent cyber-attacks (notably the Stuxnet attack) sensors of these systems can be hijacked, so that GANs can be used to generate "normal" looking activity while the actual system is being tampered with. The latter is, in fact, a new form of a man-in-the-middle attack.
+
+Our goal here is to construct a theoretical framework for studying the security risk arising from generative models and explore its practical implications. We begin with a simple key insight. If the attacker (i.e., the generative model) has unlimited observations of the source it is trying to imitate, it will be able to fool any authenticator. On the other hand, if the attacker has access to fewer sensor observations than the number of fake observations it needs to generate, it seems intuitively clear that it cannot always succeed (as we indeed prove in Sec. 4). Therefore, the optimal defense and attack strategies depend crucially on the amount of information available to the attacker and authenticator.
+
+Motivated by the above insight, we cast the authentication setting as a two-player maximin game (authenticator vs. attacker) where all observations are finite. Specifically, there are three key observation sets to consider: those available to the attacker, those that the attacker needs to generate, and those available to the authenticator when designing the system. Our goal is to understand how these three information sources determine the optimal strategies for both players. Under the realistic assumption that cyber attackers are sophisticated enough to play optimal or close to optimal strategies, a characterization of the maximin authentication strategy can be of significant value.
+
+We prove several theoretical results characterizing the optimal strategy for both players. These results highlight the role played by the available observations as well as the functional form for an optimal attacker and authenticator. We refer to the setting above as "GAN in The Middle" (GIM) due to its similarity to "man in the middle" attacks. After describing our theoretical results, we show how to learn both authenticator and attacker policies in practice, where both are based on neural architectures. Our GIM method can be applied to multiple practical problems. The first is building an authentication system that is robust to impersonation attacks. The second is building a data generating mechanism that can generate novel data instances. Finally, we evaluate the method empirically, showing that it outperforms existing methods in terms of resilience to generative attacks, and that it can be used effectively for data-augmentation in the few-shot learning setting.
+
+# 2 PROBLEM STATEMENT
+
+We begin by motivating our problem and formulating it as a two-player zero-sum game. As a simple illustrative example, consider a face authentication security system whose goal is to maximize authentication accuracy. The system is initialized by registering $k$ images of an individual $\theta$ (the "source"), whose identity is to be authenticated. At test-time, each entity claiming to be $\theta$ is required to present to the system $n$ of its images, and the authentication system decides whether the entity is $\theta$ or an impersonator. We let $m$ denote the maximum number of "leaked" images any attacker obtained. We observe that if an attacker obtained $m \geqslant n$ images of $\theta$ , it can present $n$ of those images. Thus the observations generated by the attacker are indistinguishable from ones generated by $\theta$ , leading to failure of the authentication system (see Sec. 4.2 below). Hence, the number of images of $\theta$ that the attacker obtains and the size of the authentication sample are of key importance. We now turn to formally stating the problem.
+
+Notation. The set of possible observations is denoted by $\mathcal{X}$ . Let $\mathcal{H}$ denote the known set of possible sources $\theta$ , where each source $\theta \in \mathcal{H}$ is defined by a probability density $f_{\theta}$ , and an observation of a source $\theta \in \mathcal{H}$ is an $\mathcal{X}$ -valued random variable with density $f_{\theta}$ . We assume that subsequent observations of the source are IID, so that $n$ sequential observations have density $f_{\theta}^{(n)}(x_1,\ldots ,x_n)\coloneqq \prod_{i = 1}^n f_\theta (x_i)$ . We allow $\theta$ to be sampled from a known distribution $Q$ on $\mathcal{H}$ and denote the corresponding $\mathcal{H}$ -valued random variable by $\Theta$ . In what follows we will denote the number of observations leaked to the attacker by $m$ , the number of "registration" observations available to the authenticator by $k$ , and the number of observations required at authentication by $n$ (these may be generated by either the attacker or the true source).
+
+The Authentication Game. The game begins with a random source $\Theta$ being drawn from $\mathcal{H}$ according to $Q$ . The authenticator first receives information about the drawn source, and then chooses a decision rule for deciding whether a given test sequence $x \in \mathcal{X}^n$ is an authentic sequence of observations sampled from $f_{\theta}^{(n)}$ or a fake sequence generated by the attacker. Formally, the authenticator learns about the source by seeing $k$ IID "registration" observations $A = A_1, \ldots, A_k \sim f_{\theta}^{(k)}$ . The set of all possible decision rules is then $\mathcal{D}: \mathcal{X}^k \times \mathcal{X}^n \to \{0, 1\}$ (where a decision of 1 corresponds to the true source and 0 to an attacker). After the authenticator fixes its strategy, the attacker can seek the best attack strategy. We assume that the attacker has access to $m$ "leaked" IID observations $Y = Y_1, \ldots, Y_m \sim f_{\theta}^{(m)}$ as information about the source $\theta$ . Then it generates an attack sequence $X \in \mathcal{X}^n$ and presents it to the authenticator, which uses its decision rule to decide whether $X$ is an authentic sequence of observations sampled from $f_{\theta}^{(n)}$ or a fake sequence generated by the attacker. Formally, the strategy set of the attacker is all functions $\mathcal{G}: \mathcal{X}^m \to \Delta(\mathcal{X}^n)$ , where $\Delta(\mathcal{X}^n)$ is the set of probability distributions over $\mathcal{X}^n$ , and $g_{X|Y}$ is the associated conditional probability density. We note that the set $\mathcal{H}$ , the parametric family $f_{\theta}$ , and the prior probability $Q$ are known to both players. Also, note that the leaked sample $Y$ revealed to the attacker is not available to the authenticator, and the "registration" sample $A$ is not available to the attacker.
+
+The goal of the authenticator is to maximize its expected accuracy, and the goal of the attacker is to minimize it (or equivalently maximize its success probability). We define the utility (payoff) of the game as the expected prediction accuracy of the authenticator. To define expected accuracy we consider the case of equal priors for attack and real samples. $^{1}$ Formally, for a pair of strategies $(\mathcal{D},\mathcal{G})$ and a specific source $\theta$ , the expected accuracy of the authenticator is then:
+
+$$
+V (\theta , \mathcal {D}, \mathcal {G}) = \frac {1}{2} \mathbb {E} _ {A \sim f _ {\theta} ^ {(k)}} \mathbb {E} _ {Y \sim f _ {\theta} ^ {(m)}} \left[ \mathbb {E} _ {X \sim f _ {\theta} ^ {(n)}} [ \mathcal {D} (A, X) ] + \mathbb {E} _ {X \sim \mathcal {G} (Y)} [ 1 - \mathcal {D} (A, X) ] \right] \tag {2.1}
+$$
+
+Since this utility only depends on $\mathcal{G}$ in the second term, minimizing it is equivalent to $\mathcal{G}$ maximizing its success probability. To obtain the overall utility for the authenticator, we take the expected value w.r.t $\Theta$ and define $V(\mathcal{D},\mathcal{G}) = \mathbb{E}_{\Theta \sim Q}V(\Theta ,\mathcal{D},\mathcal{G})$ . Finally, we arrive at the following maximin game:
+
+$$
+V _ {\text {game}} = \max _ {\mathcal {D} \in \mathbb {D}} \min _ {\mathcal {G} \in \mathbb {G}} V (\mathcal {D}, \mathcal {G}) \tag {2.2}
+$$
+
+where $\mathbb{D},\mathbb{G}$ are the sets of all possible authenticator and attacker strategies, respectively. In Sec. 4 we show that this game has a Nash equilibrium, we characterize the optimal strategies and game value in general, and find them in closed form for the case of Multivariate Gaussian sources.
+
+# 3 RELATED WORK
+
+Adversarial hypothesis testing (AHT): Hypothesis testing (HT) is a rich field in statistics that studies how one can detect whether a sample was generated by one of two sets of distributions. A variant of HT that is related to our work but distinct from it is AHT (e.g., see Brandão et al., 2014; Barni & Tondi, 2013b;a; 2014; Bao et al., 2011; Zhou et al., 2019; Brückner & Scheffer, 2011; Brückner et al., 2012). These works describe an HT setting where the sample is generated by one of two hypotheses classes, but is then modified by an adversary in some restricted way. E.g., in Barni & Tondi (2013b;a; 2014) the adversary can change the sample of one class up to a fixed distance (e.g., Hamming). Given the quality of current generative models and the rapid pace of progress, when considering an impersonation attack, it seems that the only relevant restriction one can assume on an attacker is on the information it has. This is not captured by prior work since it assumes that the adversary has a restricted strategy set. In contrast, our work considers a novel problem setting where both players are not limited in their strategy set in any way. This leads to a novel analysis that focuses on the dependence on the finite information available to each player $(m,n,k)$ .
+
+Adversarial Examples: It has been observed (Goodfellow et al., 2014b) that deep learning models can be very sensitive to small changes in their input. Such "misleading" inputs are known as adversarial examples and much recent work has analyzed the phenomenon (Ilyas et al., 2019; Shamir et al., 2019; Zhang et al., 2019), addressed the problem of robustness to these (Moosavi-Dezfooli et al., 2016; Papernot et al., 2017; Yuan et al., 2017), and studied when robustness can be certified (Raghunathan et al., 2018; Wong et al., 2018). The setting of robust classification in the presence of adversarial examples can also be thought of as a specific case of AHT (see above), where a classifier is required to predict the class of an observation that could have been perturbed by a restricted adversary (Wong et al., 2018; 2019) or generated by an adversary limited to generating examples that will be classified correctly by humans (Song et al., 2018). In contrast, in our setting the attacker is not limited in any way, nor does it have another utility in addition to impersonating the source. Furthermore, in adversarial examples, there is no notion of limited information for the adversary, whereas our work focuses on the dependence of the game on the information available to the players (sample sizes $n$ , $m$ , $k$ ).
+
+GAN: The GAN model is a game between a generator and a discriminator. While our concept of generative attacks is inspired by GAN, it is very different from it: a successful GAN generator is not necessarily a successful attacker in our setting, and vice-versa (e.g., given sufficiently expressive generators and discriminators, GANs can "memorize" the training data, and thus the discriminator will perform at chance level. $^2$ Such a discriminator will not be useful as a defense against generative attacks). Unlike GANs, in our setting, sample sizes are of key importance. Thus, our attacker will not memorize the data it sees, as this will be detected when generating $n > m$ examples.
+
+Conditional GANs: In conditional GANs (Mirza & Osindero, 2014) the generator uses side information for generating new samples. The attacker in our approach (analogous to GAN generator) has input, but this input is not available to the authenticator (analogous to GAN discriminator). Thus, the objective of the learning process is fundamentally different.
+
+Few-shot learning and generation: Our work relates to few-shot learning (Snell et al., 2017; Vinyals et al., 2016; Finn et al., 2017; Lake et al., 2011; Koch et al., 2015) and few-shot generative models (Rezende et al., 2016; Zakharov et al., 2019; Lake et al., 2015; Edwards & Storkey, 2017; Hewitt et al., 2018) in the sense that both authenticator and attacker need to learn from a limited set of observations. However, in our setting the authenticator is required to predict whether a sample came from the true source or an attacker impersonating the source while taking into consideration the amount of information both players have. These are notions that are not part of the general few-shot learning setup. Also, in prior work on few-shot generation, the generator is either measured through human evaluation (Lake et al., 2015) or trained to maximize the likelihood of its generated sample (Rezende et al., 2016; Edwards & Storkey, 2017; Hewitt et al., 2018). In contrast, in our setting the attacker's objective is to maximize the probability that its generated sample will be labeled as real by an authenticator. To this end, we show that the attacker must consider the sample sizes $m, n, k$ , which the generative models in prior work do not account for. Furthermore, we show in Sec. F.4, and in Figures 1c,6, that the maximum likelihood (ML) solution is indeed sub-optimal in our setting.
+
+Image to image translation: Several GAN models have been introduced for mapping between two domains (Zhu et al., 2017; Huang et al., 2018; Isola et al., 2017; Wang et al., 2017; Park et al., 2019). This relates to our work since the attacker also needs to learn to map the leaked sample to an attack sample. However, in our setting the mapping is not to a different domain but rather to other images from the same distribution, which results in a different objective.
+
+Data Augmentation: Generative models have also been used for augmenting data in supervised learning, and in particular few-shot learning (Koch et al., 2015; Snell et al., 2017; Vinyals et al., 2016; Lake et al., 2011). One such approach is Data Augmentation GAN (DAGAN) (Antoniou et al., 2018), which takes as input an image and generates a new image. It relates to our framework in the limited case of $m = 1$ , $n = 2$ , $k = 1$ , in the sense that the generator's objective is to map one image to two. However, in DAGAN the only goal of the discriminator is to improve the generator, and the generator is limited to the functional form of adding a new image to the existing one, which is a sub-optimal attack strategy, as can be seen from the Gaussian case of our problem.
+
+# 4 THEORETICAL RESULTS
+
+In this section, we study the game defined in Eq. 2.2. First, in Sec. 4.1 we show the existence of a Nash equilibrium and characterize the optimal strategies for both players. Specifically, we show that the optimal attacker strategy minimizes a certain divergence between the source and the attacker's conditional distribution of $X$ given $A$ . Next, Sec. 4.2 shows that when there are more leaked samples $m$ than generated samples $n$ , the authenticator will fail. Finally, in Sec. 4.3 we provide a closed-form solution for both attacker and authenticator for the case of multivariate Gaussian distributions and analyze the effect of the dimension of the observations and the sample sizes $m, n$ , and $k$ . Proofs are provided in the appendix.
+
+# 4.1 CHARACTERIZING THE OPTIMAL STRATEGIES
+
+We begin by showing that the game defined in Eq. 2.2 admits a Nash equilibrium. Namely, Theorem 4.1 below shows that there exists a pair of strategies $(\mathcal{D}^*,\mathcal{G}^*)$ that satisfy:
+
+$$
+\max _ {\mathcal {D} \in \mathbb {D}} \min _ {\mathcal {G} \in \mathbb {G}} V (\mathcal {D}, \mathcal {G}) = \min _ {\mathcal {G} \in \mathbb {G}} \max _ {\mathcal {D} \in \mathbb {D}} V (\mathcal {D}, \mathcal {G}) = V (\mathcal {D} ^ {*}, \mathcal {G} ^ {*}) \tag {4.1}
+$$
+
+Theorem 4.1. Consider the attacker $\mathcal{G}^*$ defined by:
+
+$$
+g _ {X \mid Y} ^ {*} \in \operatorname * {a r g m i n} _ {g _ {X \mid Y}} \mathbb {E} _ {A \sim f _ {A}} \left[ \int_ {x \in \mathcal {X} ^ {n}} | f _ {X \mid A} (x \mid A) - g _ {X \mid A} (x \mid A) | d x \right] \tag {4.2}
+$$
+
+Where, $f_{A}(a) = \int_{\theta \in \mathcal{H}} Q(\theta) f_{\theta}^{(k)}(a) d\theta$ is the marginal density of $A$ , $Q_{\Theta|A}$ is the posterior over $\mathcal{H}$ given $A$ , and $f_{Y|A}(y|a) = \int_{\theta \in \mathcal{H}} f_{\theta}^{(m)}(y) Q_{\Theta|A}(\theta|a) d\theta$ . Also, $f_{X|A}(x|a) = \int_{\theta \in \mathcal{H}} f_{\theta}^{(n)}(x) Q_{\Theta|A}(\theta|a) d\theta$ and $g_{X|A}(x|a) = \int_{y \in \mathcal{X}^m} g_{X|Y}(x|y) f_{Y|A}(y|a) dy$ are the conditional densities of $X$ given $A$ , generated by the source and the attacker respectively. Consider the authenticator defined by $\mathcal{D}^*(a, x) = I\left[f_{X|A}(x|a) > g_{X|A}^*(x|a)\right]$ , where $I$ is the indicator function. Then $(\mathcal{D}^*, \mathcal{G}^*)$ is a solution of Eq. 2.2 that satisfies Eq. 4.1.
+
+The proof (see Sec. D) follows by first showing that since $\mathcal{D}(a,x)\in \{0,1\}$ , it holds that for any $\mathcal{G}$ , the optimal authenticator strategy is a MAP test between the two hypotheses (true source or attacker). We then show that given $\mathcal{D}^*$ , the game objective for $\mathcal{G}$ becomes Eq. 4.2. Namely, the optimal attacker minimizes the $\ell_1$ distance over the space $\mathcal{X}^k\times \mathcal{X}^n$ between the true source's conditional distribution of $X$ given $A$ , and its own. Therefore, since the proposed $\mathcal{G}^*$ minimizes Eq. 4.2 by definition, it holds that $\min_{\mathcal{G}}V(\mathcal{D}^{*},\mathcal{G}) = V(\mathcal{D}^{*},\mathcal{G}^{*}) = \max_{\mathcal{D}}V(\mathcal{D},\mathcal{G}^{*})$ and it follows that $\mathcal{D}^*,\mathcal{G}^*$ satisfy Eq. 4.1.
+
+# 4.2 REPLAY ATTACKS: AUTHENTICATION FAILURE FOR $n \leqslant m$
+
+When $n \leqslant m$ , the attacker generates a number of observations that is at most the number of observations it has seen. Intuitively, an optimal attack in this case, is to simply "replay" a subset of size $n$ from the $m$ observations. This is known as a replay-attack (Syverson, 1994). This subset constitutes an IID sample of length $n$ of the observed source, and is, therefore, a legitimate "fresh" sample. In this case, it seems like the attack cannot be detected by the authenticator. Indeed it is easy to show using Theorem 4.1 that this attack is optimal and therefore for $n \leqslant m$ we have: $\max_{\mathcal{D} \in \mathbb{D}} \min_{\mathcal{G} \in \mathbb{G}} V(\mathcal{D}, \mathcal{G}) = 0.5$ (see Sec. E)
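A short simulation (our illustration: 1-D unit-variance Gaussian sources and an arbitrary mean-based threshold test) makes the failure concrete: a replayed subset is distributed exactly like a fresh sample, so the authenticator's accuracy stays at chance.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, trials = 5, 3, 4, 100_000
correct = 0
for real in (True, False):
    theta = rng.normal(size=trials)                      # one source per trial
    A = rng.normal(theta[:, None], 1.0, (trials, k))     # registration sample
    if real:
        X = rng.normal(theta[:, None], 1.0, (trials, n))  # authentic sample
    else:
        Y = rng.normal(theta[:, None], 1.0, (trials, m))  # leaked sample
        X = Y[:, :n]                                      # replay attack (n <= m)
    # An arbitrary mean-based test; any test fares the same here, because the
    # replayed subset is itself an IID sample from the source.
    accept = (X.mean(axis=1) - A.mean(axis=1)) ** 2 < 0.6
    correct += int((accept == real).sum())
accuracy = correct / (2 * trials)   # hovers at 0.5: the attack is undetectable
```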
+
+# 4.3 THE GAUSSIAN CASE
+
+We now turn to the case of multivariate Gaussian distributions where we can find the exact form of the attacker and authenticator, providing insight into the general problem. Specifically, we consider the setting where the source distributions are $d$ -dimensional multivariate Gaussians with an unknown mean and known covariance, and the prior $Q$ over $\mathcal{H}$ is the improper uniform prior. We assume $n > m$ to keep the problem non-trivial. Let the observations be $d$ -dimensional Gaussian vectors with a known covariance matrix $\Sigma \in \mathbb{R}^{d\times d}$ and an unknown mean vector $\theta \in \mathbb{R}^d$ . The set of possible sources $\mathcal{H}$ becomes $\mathbb{R}^d$ , the Gaussian mean vectors. For any sample of $n$ examples $z\in \mathbb{R}^{n\times d}$ , we let $z_{i}$ denote the $i$ 'th example, and $\bar{z} = \frac{1}{n}\sum_{i = 1}^{n}z_{i}$ denote the sample mean. Finally, for any $v\in \mathbb{R}^d$ , $B\in \mathbb{R}^{d\times d}$ , we define $\| v\| _B^2 = v^T Bv$ . The following theorem gives a closed-form solution for both attacker and authenticator for the game defined in Eq. 2.2.
+
+Theorem 4.2. Define $\delta = m / n \leqslant 1$ and let $\rho = m / k$. Consider the attacker $\mathcal{G}^*$ defined by the following generative process: Given a leaked sample $Y \in \mathbb{R}^{m \times d}$ , $\mathcal{G}^*$ generates a sample $X \in \mathbb{R}^{n \times d}$ as follows: it first samples $n$ vectors $W_1, \ldots, W_n \stackrel{\text{iid}}{\sim} \mathcal{N}(0, \Sigma)$ and then sets: $X_i = W_i - \bar{W} + \bar{Y}$ . Define the authenticator $\mathcal{D}^*$ by:
+
+$$
+\mathcal {D} ^ {*} (a, x) = I \left[ \| \bar {x} - \bar {a} \| _ {\Sigma^ {- 1}} ^ {2} < \frac {d (1 + \rho) (1 + \rho \delta^ {- 1})}{n (1 - \delta)} \log \left(\frac {\rho + 1}{\rho + \delta}\right) \right] \tag {4.3}
+$$
+
+Then $(\mathcal{D}^*,\mathcal{G}^*)$ is a solution of Eq. 2.2 that satisfies Eq. 4.1.
+
+The proof (see Sec. F) starts by showing that $\forall \alpha > 0$ , given $\mathcal{D}(a, x) = I[\| \bar{x} - \bar{a} \|_{\Sigma^{-1}}^2 < \alpha]$ , the optimal strategy for $\mathcal{G}$ is to set $\bar{x} = \bar{y}$ with probability 1 (as done in $\mathcal{G}^*$ ). To prove this, we first use the Prékopa-Leindler inequality (Prékopa, 1973) to show that in this case $\mathcal{G}$ 's maximization objective is log-concave. We then show that any $\mathcal{G}$ that satisfies $\bar{x} = \bar{y}$ with probability 1 is a local extremum, and since the objective is log-concave it follows that it is the global maximum. We continue by showing that given $\mathcal{G}^*$ , the proposed $\mathcal{D}^*$ is optimal. To do so, we first find the distribution of $\mathcal{G}^*$ 's attack sample using the Woodbury matrix identity, and then show that $\mathcal{D}^*$ is indeed the optimal decision rule. Finally, using the max-min inequality, this implies that $(\mathcal{D}^*, \mathcal{G}^*)$ satisfy Eq. 4.1.
+
There are several interesting observations about the above optimal strategies. Perhaps the most intuitive strategy for the attacker would have been to sample $n$ elements from a Gaussian with mean $\bar{Y}$ and the known covariance $\Sigma$ . In expectation, this sample would have the correct mean. However, this turns out to be sub-optimal, as can be seen in Figures 1c and 6 (we refer to this as an ML attack; see Sec. F.4 in the appendix for the derivation and visualizations). Instead, the optimal attacker begins by drawing an IID sample $W$ from a Gaussian distribution with mean 0, and then "forces" the sample mean to be exactly $\bar{Y}$ by shifting the sample points by $\bar{Y} - \bar{W}$ . This optimal attacker strategy can be viewed as matching the sufficient statistics of the leaked sample $Y$ in the generated sample $X$ . The optimal authenticator is a MAP test for the optimal attacker, as in Theorem 4.1.
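As a concrete illustration, here is a minimal NumPy sketch of the two optimal strategies from Theorem 4.2 (the function names and NumPy usage are our own; the formulas follow the generative process above and the threshold in Eq. 4.3):

```python
import numpy as np

def optimal_attacker(Y, n, Sigma, rng=None):
    """Optimal Gaussian attacker G*: draw W_i ~ N(0, Sigma) iid, then shift
    the sample so its mean equals the leaked sample mean Y-bar."""
    rng = np.random.default_rng(rng)
    d = Y.shape[1]
    W = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    return W - W.mean(axis=0) + Y.mean(axis=0)

def optimal_authenticator(a, x, Sigma, m, n, k):
    """Optimal authenticator D* (Eq. 4.3): threshold test on the squared
    Sigma^{-1}-norm of the difference of sample means. True = accept as real."""
    d = a.shape[1]
    delta, rho = m / n, m / k
    diff = x.mean(axis=0) - a.mean(axis=0)
    stat = diff @ np.linalg.solve(Sigma, diff)  # ||x_bar - a_bar||^2_{Sigma^-1}
    tau = (d * (1 + rho) * (1 + rho / delta)
           / (n * (1 - delta))) * np.log((rho + 1) / (rho + delta))
    return stat < tau
```

Note that `optimal_attacker` matches the leaked sample mean exactly, not just in expectation, which is what distinguishes it from the sub-optimal ML attack.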
+
+As a corollary to Theorem 4.2 we obtain the value of the game (i.e., the accuracy of $\mathcal{D}^*$ ).
+
+Corollary 4.3. Define $\delta$ and $\rho$ as in Theorem 4.2. Then the game value for the Gaussian case is:
+
+$$
+\frac {1}{2} + \frac {1}{2 \Gamma \left(\frac {d}{2}\right)} \left[ \gamma \left(\frac {d}{2}, \frac {d (1 + \rho)}{2 (1 - \delta)} \log \frac {1 + \rho}{\delta + \rho}\right) - \gamma \left(\frac {d}{2}, \frac {d (\delta + \rho)}{2 (1 - \delta)} \log \frac {1 + \rho}{\delta + \rho}\right) \right] \tag {4.4}
+$$
+
where $\gamma$ is the lower incomplete gamma function and $\Gamma$ is the gamma function.
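For reference, Eq. 4.4 can be evaluated numerically; the sketch below (not code from the paper) uses SciPy's `gammainc`, which computes the regularized function $\gamma(a, x)/\Gamma(a)$, so the $1/\Gamma(d/2)$ factor is absorbed:

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma

def gaussian_game_value(d, delta, rho):
    """Game value (expected authenticator accuracy) from Corollary 4.3.
    Requires delta = m/n < 1."""
    L = np.log((1 + rho) / (delta + rho))
    t1 = d * (1 + rho) / (2 * (1 - delta)) * L
    t2 = d * (delta + rho) / (2 * (1 - delta)) * L
    return 0.5 + 0.5 * (gammainc(d / 2, t1) - gammainc(d / 2, t2))
```

For example, with $\delta = 0.5$ and $\rho = 1$, the value grows from roughly chance level toward 1 as $d$ increases, consistent with Fig. 1.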
+
+The proof (see Sec. F) follows by showing that the test statistic used by $\mathcal{D}^*$ is Gamma distributed. Fig. 1 demonstrates several interesting aspects of the above results. First, Fig. 1a shows that the authenticator accuracy improves as $n$ (the size of the test sample) grows. Furthermore, accuracy also improves as the dimension $d$ grows, meaning that for a specified authentication accuracy, the required ratio $n / m$ becomes smaller with the dimension $d$ . This is a very encouraging result since although this dimensional dependency is proved only for Gaussian sources, it suggests that for real-world high-dimensional sources (e.g., faces, video, voice, etc.) the authenticator can achieve high accuracy even when requiring a small (and practical) authentication sample.
+
Intuitively it may seem that authentication is impossible when the authenticator has less data than the attacker (i.e., $m > k$ ). Surprisingly, this is not the case. As can be seen in Fig. 1b, even when $m > k$ , the authenticator can achieve non-trivial accuracy. An intuitive explanation of this phenomenon is that the test statistic used by the authenticator is $\left\| \bar{X} - \bar{A} \right\|$ , which, due to the variance of the attacker's mean estimate, has higher variance when $X$ is generated by an attacker than when $X$ is generated by the true source. This, in turn, allows the authenticator to discriminate between the two hypotheses.
+
+A closed-form solution for the general case remains an open problem. We believe that solving for Gaussians is an important step forward, since it exposes interesting structural properties of the solution, which we use in practice. Furthermore, if $\mathcal{G}$ has an encoder-decoder structure, it is not unreasonable that the source distribution in latent space can be approximately Gaussian (as in VAE).
+
+# 5 GAN IN THE MIDDLE NETWORKS
+
+So far we explored the general formalism of authentication games. Here we consider specific architectures for $\mathcal{D}$ and $\mathcal{G}$ . As in GAN based models (Mirza & Osindero, 2014; Mescheder et al., 2018; Karras et al., 2018a;b), we use neural nets to model these, while using insights from our theoretical analysis. Below we provide implementation details for the GIM model (see Sec. H and code for more details). In our analysis, we considered the non-differentiable zero-one loss since it is the real accuracy measure. In practice, we will use cross-entropy as used in most GAN approaches.
+
Authenticator Architecture: The authenticator is implemented as a neural network $\mathcal{D}(a,x)$ that maps from a source information sample $a\in \mathcal{X}^k$ and a test sample $x\in \mathcal{X}^n$ to a probability that the test sample came from the true source. Our framework does not restrict the authenticator to any specific function type, but in practice one must implement it using some model. We recall that our theoretical results do suggest a certain functional form. The Gaussian results in Sec. 4.3 show that the optimal authenticator is a test on the sufficient statistic of the source parameters. Motivated by this result, and in the spirit of Siamese networks (Koch et al., 2015; Chopra et al., 2005), we consider the following form for the authenticator. We define a function $T_{\mathcal{D}}$ that maps a sample to a fixed-size vector, analogous to the sufficient statistic in the theorem. We apply $T_{\mathcal{D}}$ to both $a$ and $x$ . Then, these two outputs are used as input to a comparison function $\sigma$ which outputs a scalar reflecting their similarity. Thus the authenticator can be expressed as: $\mathcal{D}(a,x) = \sigma (T_{\mathcal{D}}(a),T_{\mathcal{D}}(x))$ .
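A toy NumPy sketch of this functional form follows. It illustrates the structure only: in the actual model $T_{\mathcal{D}}$ is a deep network, whereas the tanh embedding, mean pooling, and logistic comparison head here are our illustrative stand-ins:

```python
import numpy as np

def T_D(sample, W):
    """Map a sample (a set of examples, shape (n, d)) to a fixed-size vector,
    analogous to a sufficient statistic: embed each example, then mean-pool."""
    return np.mean(np.tanh(sample @ W.T), axis=0)

def sigma(u, v):
    """Comparison head: logistic function of the (shifted) squared distance."""
    return 1.0 / (1.0 + np.exp(np.sum((u - v) ** 2) - 1.0))

def authenticator(a, x, W):
    """D(a, x) = sigma(T_D(a), T_D(x)), as in the text."""
    return sigma(T_D(a, W), T_D(x, W))
```

Samples whose pooled statistics match score higher than samples whose statistics differ, which is the behavior the Siamese form is designed to produce.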
+
Attacker Architecture: The attacker is implemented as a stochastic neural network $\mathcal{G}(y)$ that maps a leaked sample $y\in \mathcal{X}^m$ to an attack sample $x\in \mathcal{X}^n$ . Our theoretical results suggest a certain functional form for this network. The Gaussian analysis in Sec. 4.3 shows that the optimal attacker generates a sample whose sufficient statistic matches that of the leaked sample. Motivated by this result, we consider the following functional form for the attacker. First, it applies a function $T_{\mathcal{G}}$ that maps the leaked sample $Y$ to a fixed-size vector $T_{\mathcal{G}}(Y)$ , analogous to the sufficient statistic in the theorem. It then draws $n$ random latent vectors $W_{1},\ldots ,W_{n}$ and matches their mean to the leaked sufficient statistic to obtain the latent vectors $W_{i}^{\prime}$ . Namely, it sets $W_{i}^{\prime} = W_{i} - \bar{W} +T_{\mathcal{G}}(Y)$ as done in Theorem 4.2. Finally, it uses a decoder function $\varphi$ that maps each latent vector $W_{i}^{\prime}$ to the domain $\mathcal{X}$ . Thus, the attacker can be expressed as: $\mathcal{G}(Y)_i = \varphi (W_i - \bar{W} +T_{\mathcal{G}}(Y))\quad \forall i\in [n]$ .
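The latent mean-matching step can be sketched as follows (again a structural illustration: the mean-pooled tanh embedding standing in for $T_{\mathcal{G}}$ is our assumption, and $\varphi$ is a caller-supplied decoder rather than the trained network):

```python
import numpy as np

def gim_attacker(Y, n, Wg, phi, rng=None):
    """Sketch of the attacker form G(Y)_i = phi(W_i - W_bar + T_G(Y))."""
    rng = np.random.default_rng(rng)
    t = np.mean(np.tanh(Y @ Wg.T), axis=0)    # T_G(Y): leaked-sample statistic
    W = rng.standard_normal((n, t.shape[0]))  # n random latent vectors
    W_matched = W - W.mean(axis=0) + t        # latent mean matching (Thm. 4.2)
    return np.stack([phi(w) for w in W_matched])
```

By construction the matched latents have mean exactly $T_{\mathcal{G}}(Y)$, mirroring how the optimal Gaussian attacker forces its sample mean to equal $\bar{Y}$.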
+
+**Optimization Details:** Each iteration begins when a source $\theta$ is chosen randomly from the set of sources in the training dataset (e.g., a person to be authenticated). Samples $A, Y, X_{\theta}$ are drawn from the set of examples available for $\theta$ , where $X_{\theta}$ represents a test sample from $\theta$ . Then, given a leaked sample $Y$ , the generator $\mathcal{G}$ generates a fake sample $X_{\mathcal{G}}$ , passes it to $\mathcal{D}$ and suffers the appropriate loss. Finally, $\mathcal{D}$ receives as input the source information sample $A$ , outputs a prediction for each of the test samples $X_{\theta}, X_{\mathcal{G}}$ , and suffers the appropriate loss. Optimization is done via gradient ascent on authenticator parameters and descent on attacker parameters, as is typical for GAN problems.
+
+# 6 EXPERIMENTS
+
+We next evaluate our method empirically. In all experiments, we use the model described in Sec. 5. We optimize the model with adversarial training, using the loss suggested by Mescheder et al. (2018) and Adam (Kingma & Ba, 2015). Our implementation is available at https://github.com/roymor1/OptimalStrategiesAgainstGenerativeAttacks. Also, see Sec. H for further implementation details.
+
+Gaussian sources: For the case of Gaussian sources, we arrived at a complete characterization of the solution in Sec. 4.3. Thus, we can learn the models using our GIM algorithm and check whether it finds the correct solution. This is important, as the GIM objective is clearly non-convex and GANs are generally hard to train in practice and lack convergence guarantees (Mescheder et al., 2018). We ran all experiments using a multivariate Gaussian with $\Sigma = I_d$ , and in each game the source mean was drawn from the prior distribution $Q = \mathcal{N}(0,10I_d)$ . This approximates the improper uniform prior since the prior has much larger variance than the sources. Fig. 1a shows the empirical game value compared with the theoretical one as a function of the test sample size $n$ , for fixed $m = 1, k = 10$ and different values of $d$ . It can be seen that there is an excellent fit between theory and experiment.
+
+
+Figure 1: Game value (expected authentication accuracy) for the Gaussian case. (a) A comparison between empirical and theoretical game value for different $d$ values $(m = 1, k = 10)$ . Solid lines describe the theoretical game values whereas the * markers describe the empirical accuracy when learning with the GIM model. (b) Theoretical game value as a function of $\delta, \rho$ (see Corollary 4.3) for $d = 100$ . (c) Empirical accuracy of an optimal authenticator against two attacks: the theoretically optimal attack $\mathcal{G}^*$ from Theorem 4.2 and a maximum likelihood (ML) attack (See Sec. F.4) for the Gaussian case. It can be seen that the ML attack is inferior in that it results in better accuracy for the authenticator, as predicted by our theoretical results.
+
+
+
Authentication on Faces and Characters: We next evaluate GIM in an authentication setting on two datasets: the VoxCeleb2 faces dataset (Nagrani et al., 2017; Chung & Zisserman, 2018) and the Omniglot handwritten character dataset (Lake et al., 2015). Additional information about the datasets, splits, and modeling details is provided in Sections G and H. Our goal is to check whether the GIM authenticator is more robust to generative attacks than a state of the art authentication system. To evaluate this, we consider several attackers: 1) A "random source" attacker (RS): a naive attacker that ignores the leaked sample $Y$ . It simply draws a random source from the dataset and samples $n$ real images of that source. From the authenticator's perspective, this is equivalent to a sample version of the verification task (Koch et al., 2015; Schroff et al., 2015; Deng et al., 2018), in which an agent is presented with a pair of real images and needs to decide whether they are from the same source or not. 2) Replay attacker (Replay): an attacker which, upon seeing a leaked sample $Y$ , draws $n$ random images of the leaked sample (with replacement). 3) A GIM attack, which is the "worst case" attacker $\mathcal{G}$ , learned by our GIM model.

Table 1: Accuracy of GIM and baselines against attacks. Avg acc denotes the average over all attacks.

| Authenticator | Dataset | m | n | k | RS | Replay | GIM | Avg acc |
|---|---|---|---|---|---|---|---|---|
| GIM | VoxCeleb2 | 1 | 5 | 5 | 0.897 | 0.837 | 0.822 | 0.852 |
| ArcFace | VoxCeleb2 | 1 | 5 | 5 | 0.998 | 0.598 | 0.526 | 0.707 |
| GIM | Omniglot | 1 | 5 | 5 | 0.912 | 0.942 | 0.868 | 0.907 |
| Siamese | Omniglot | 1 | 5 | 5 | 0.994 | 0.509 | 0.785 | 0.763 |
+
+For VoxCeleb we compare the GIM authenticator to the ArcFace method (Deng et al., 2018), which is currently state of the art in face verification. As a baseline for Omniglot, we use the Siamese network suggested by Koch et al. (2015), which achieves state of the art in the verification task on Omniglot. Results are shown in Table 1. It can be seen that on average across attacks, GIM outperforms the baselines. The only attack for which GIM is inferior is RS. This is not surprising as this is the objective that both baselines are trained for.
+
Qualitative evaluation of attacker: In Fig. 2 we provide images generated by the GIM attacker for the test sets of both Omniglot and VoxCeleb2. The images demonstrate qualitatively the strategy learned by the attacker. In the VoxCeleb2 dataset, face images are drawn from a video of the person talking. Note that, as in real samples from the data, the attack sample varies in pose and expression but not in background or clothing.
+
+
+Figure 2: Images generated by the GIM attacker based on one leaked image. In each row, the leftmost image is the real leaked image, and the rest of the images are an attack sample generated by the GIM attacker. (a) Voxceleb2 dataset. (b) Omniglot dataset.
+
Data augmentation: Finally, we use GIM for data augmentation in one-shot classification on Omniglot, by using the GIM attacker to generate additional data for a given class. We first train GIM on the training set with parameters $m = 1, n = 5, k = 5$ . Then, during both training and testing of one-shot classification, we augment the single example available for each class with the $n = 5$ examples the attacker generates from it. We use Prototypical Nets (Snell et al., 2017) as the baseline model. Without our augmentation method, Prototypical Nets achieve $95.9\%$ accuracy on the test split; with it, they achieve $96.5\%$ , an improvement similar to that achieved by Antoniou et al. (2018) with Matching Networks (Vinyals et al., 2016) as the few-shot classification algorithm.
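The augmentation step itself is simple to express; the sketch below uses a trivial noisy-copy placeholder in place of the trained GIM attacker (both function names are ours, not from the paper's code):

```python
import numpy as np

def noisy_copy_attacker(leaked, n, scale=0.1, rng=None):
    """Trivial placeholder standing in for the trained GIM attacker."""
    rng = np.random.default_rng(rng)
    base = leaked.mean(axis=0)
    return base + scale * rng.standard_normal((n,) + base.shape)

def augment_one_shot(example, attacker, n=5):
    """Augment a one-shot class: keep the real example and append the n
    examples the attacker generates from it (leaked sample of size m = 1)."""
    generated = attacker(example[None], n)
    return np.concatenate([example[None], generated], axis=0)
```

Each class's support set thus grows from 1 to $1 + n$ examples before being fed to the few-shot classifier.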
+
+# 7 CONCLUSIONS
+
+We defined the notion of authentication in the face of generative attacks, in which a generative model attempts to produce a fake reality based on observations of reality. These attacks raise numerous interesting theoretical questions and are very important and timely from a practical standpoint. We proposed to study generative attacks as a two-person zero-sum game between attacker and authenticator. In our most general setup both attacker and authenticator have access to a finite set of
+
+observations of the source. We show that this game has a Nash equilibrium, and we characterize the optimal strategies. In the Gaussian version of the game, a closed form of the optimal strategies is available. A nice outcome of the analysis is that the game value depends on $m, n, k$ only through their ratios $\delta = m/n$ (i.e., the "expansion ratio" between attack and leaked sample sizes) and $\rho = m/k$ (i.e., the "information ratio" between the number of source observations available to attacker and authenticator). As we show in Fig. 1b, there is a large range of values for which high accuracy authentication is possible, and as $d$ grows we observe that the high authentication accuracy region in the $(\delta, \rho)$ plane grows sharply. We introduce the GIM model, which is a practical approach to learning both authenticator and attacker, and whose structure is inspired by our analysis. GIM achieves accuracy that is very close to the theoretical rates in the Gaussian case, and is also more robust to attacks when compared to state of the art authenticators on real data. Many theoretical and practical questions remain. For example, finding closed form optimal strategies for other distributions, and going beyond IID generation. The non-IID setting is of particular importance for the problem of fake video (Thies et al., 2016) and audio (Arik et al., 2018) generation, which we intend to study in the future.
+
+# ACKNOWLEDGEMENTS
+
This work has been supported by the Blavatnik Interdisciplinary Research Center (ICRC), the Federmann Research Center (Hebrew University) and Israeli Science Foundation research grants 1523/16 and 1186/18.
+
+# REFERENCES
+
+Antreas Antoniou, Amos J. Storkey, and Harrison Edwards. Augmenting image classifiers using data augmentation generative adversarial networks. In Artificial Neural Networks and Machine Learning, pp. 594-603, 2018.
+Sercan Arik, Jitong Chen, Kainan Peng, Wei Ping, and Yanqi Zhou. Neural voice cloning with a few samples. In Advances in Neural Information Processing Systems, pp. 10040-10050, 2018.
+Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (GANs). In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 224-232. JMLR.org, 2017.
Ning Bao, O. Patrick Kreidl, and John Musacchio. A network security classification game. In Game Theory for Networks - 2nd International ICST Conference, GAMENETS 2011, Shanghai, China, April 16-18, 2011, Revised Selected Papers, pp. 265-280, 2011. doi: 10.1007/978-3-642-30373-9_19. URL https://doi.org/10.1007/978-3-642-30373-9_19.
+Mauro Barni and Benedetta Tondi. Multiple-observation hypothesis testing under adversarial conditions. In 2013 IEEE International Workshop on Information Forensics and Security, WIFS 2013, Guangzhou, China, November 18-21, 2013, pp. 91-96, 2013a. doi: 10.1109/WIFS.2013.6707800. URL https://doi.org/10.1109/WIFS.2013.6707800.
+Mauro Barni and Benedetta Tondi. The source identification game: An information-theoretic perspective. IEEE Trans. Information Forensics and Security, 8(3):450-463, 2013b. doi: 10.1109/TIFS.2012.2237397. URL https://doi.org/10.1109/TIFS.2012.2237397.
+Mauro Barni and Benedetta Tondi. Binary hypothesis testing game with training data. IEEE Trans. Information Theory, 60(8):4848-4866, 2014. doi: 10.1109/TIT.2014.2325571. URL https://doi.org/10.1109/TIT.2014.2325571.
+Fernando G. S. L. Brandão, Aram Wettroth Harrow, James R. Lee, and Yuval Peres. Adversarial hypothesis testing and a quantum stein's lemma for restricted measurements. In Innovations in Theoretical Computer Science, ITCS'14, Princeton, NJ, USA, January 12-14, 2014, pp. 183-194, 2014. doi: 10.1145/2554797.2554816. URL https://doi.org/10.1145/2554797.2554816.
+
+Michael Brückner and Tobias Scheffer. Stackelberg games for adversarial prediction problems. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA, August 21-24, 2011, pp. 547-555, 2011. doi: 10.1145/2020408.2020495. URL https://doi.org/10.1145/2020408.2020495.
+Michael Brückner, Christian Kanzow, and Tobias Scheffer. Static prediction games for adversarial learning problems. J. Mach. Learn. Res., 13:2617-2654, 2012. URL http://dl.acm.org/citation.cfm?id=2503326.
+Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 539-546, 2005.
Joon Son Chung, Arsha Nagrani, and Andrew Zisserman. Voxceleb2: Deep speaker recognition. In Interspeech, 2018.
+Jiankang Deng, Jia Guo, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. arXiv preprint arXiv:1801.07698, 2018.
+Harrison Edwards and Amos Storkey. Towards a neural statistician. In International Conference on Learning Representations, 2017.
+Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. CoRR, abs/1703.03400, 2017. URL http://arxiv.org/abs/1703.03400.
+Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672-2680, 2014a.
+Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2014b.
+Luke B Hewitt, Maxwell I Nye, Andreea Gane, Tommi Jaakkola, and Joshua B Tenenbaum. The variational homoencoder: Learning to learn high capacity generative models from few examples. In Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence, pp. 988-997, 2018.
+Xun Huang and Serge J. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In IEEE International Conference on Computer Vision, pp. 1510-1519, 2017.
+Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision, pp. 172-189, 2018.
+Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. ArXiv, abs/1905.02175, 2019.
+Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. In The IEEE Conference on Computer Vision and Pattern Recognition, pp. 5967-5976, 2017.
+Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, 2016.
+Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In International Conference on Learning Representations, 2018a.
+Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. arXiv preprint arXiv:1812.04948, 2018b.
+Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
+
+Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML deep learning workshop, volume 2, 2015.
Brenden M. Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B. Tenenbaum. One shot learning of simple visual concepts. In Proceedings of the 33rd Annual Meeting of the Cognitive Science Society, 2011.
+Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, 2015.
+Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for GANs do actually converge? In Proceedings of the 35th International Conference on Machine Learning, pp. 3478-3487, 2018.
+Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
+Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2574-2582, 2016.
+A. Nagrani, J. S. Chung, and A. Zisserman. Voxceleb: a large-scale speaker identification dataset. In Interspeech, 2017.
+Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pp. 506-519, 2017.
+Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu. Semantic image synthesis with spatially-adaptive normalization. CoRR, abs/1903.07291, 2019. URL http://arxiv.org/abs/1903.07291.
András Prékopa. On logarithmic concave measures and functions. Acta Scientiarum Mathematicarum, 34:335-343, 1973.
+Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. Certified defenses against adversarial examples. In International Conference on Learning Representations, 2018.
Danilo Jimenez Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, and Daan Wierstra. One-shot generalization in deep generative models. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pp. 1521-1529, 2016. URL http://proceedings.mlr.press/v48/rezende16.html.
+Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In The IEEE Conference on Computer Vision and Pattern Recognition, pp. 815-823, 2015.
+Adi Shamir, Itay Safran, Eyal Ronen, and Orr Dunkelman. A simple explanation for the existence of adversarial examples with small hamming distance. CoRR, abs/1901.10861, 2019. URL http://arxiv.org/abs/1901.10861.
+Jake Snell, Kevin Swersky, and Richard S. Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pp. 4080-4090, 2017.
+Yang Song, Rui Shu, Nate Kushman, and Stefano Ermon. Constructing unrestricted adversarial examples with generative models. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 8312-8323. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/8052-constructing-unrestricted-adversarial-examples-with-generative-models.pdf.
Paul Syverson. A taxonomy of replay attacks [cryptographic protocols]. In Proceedings of the Computer Security Foundations Workshop VII, pp. 187-191, 1994. ISBN 0-8186-6230-1. doi: 10.1109/CSFW.1994.315935.
+
+Justus Thies, Michael Zollhofer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. Face2face: Real-time face capture and reenactment of rgb videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2387-2395, 2016.
+Oriol Vinyals, Charles Blundell, Tim Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pp. 3630-3638, 2016.
+Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. CoRR, abs/1711.11585, 2017. URL http://arxiv.org/abs/1711.11585.
+Eric Wong, Frank Schmidt, Jan Hendrik Metzen, and J. Zico Kolter. Scaling provable adversarial defenses. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems, pp. 8400-8409. Curran Associates, Inc., 2018.
+Eric Wong, Frank R. Schmidt, and J. Zico Kolter. Wasserstein adversarial examples via projected sinkhorn iterations. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pp. 6808-6817, 2019. URL http://proceedings.mlr.press/v97/wong19a.html.
+Xiaoyong Yuan, Pan He, Qile Zhu, Rajendra Rana Bhat, and Xiaolin Li. Adversarial examples: Attacks and defenses for deep learning. CoRR, abs/1712.07107, 2017. URL http://arxiv.org/abs/1712.07107.
+Egor Zakharov, Aliaksandra Shysheya, Egor Burkov, and Victor S. Lempitsky. Few-shot adversarial learning of realistic neural talking head models. CoRR, abs/1905.08233, 2019. URL http://arxiv.org/abs/1905.08233.
+Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P. Xing, Laurent El Ghaoui, and Michael I. Jordan. Theoretically principled trade-off between robustness and accuracy. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pp. 7472-7482, 2019. URL http://proceedings.mlr.press/v97/zhang19p.html.
+Yan Zhou, Murat Kantarcioglu, and Bowei Xi. A survey of game theoretic approach for adversarial machine learning. Wiley Interdiscip. Rev. Data Min. Knowl. Discov., 9(3), 2019. doi: 10.1002/widm.1259. URL https://doi.org/10.1002/widm.1259.
+Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, 2017.
+
+# A INTRODUCTION TO THE APPENDIX
+
+Here we provide additional information and proofs for the main paper. In Sec. B we provide a visualization of the game setting. In Sec. C we plot the game value in the Gaussian case for different parameter values. In Sec. D we provide a proof for Theorem 4.1, in Sec. E we formally state the theorem discussed in Sec. 4.2 and prove it, and in Sec. F we prove Theorem 4.2 and also derive the game value for the sub-optimal "ML attacker". Finally, in Sections G, H we provide additional details about the experiments and the implementation of GIM.
+
+# B PROBLEM SETUP VISUALIZATION
+
+Our problem setup is illustrated in Fig. 3 for a face authentication scenario. A source $\theta$ (in this case an individual) generates IID observations (images). $k$ images are used by the authenticator to study the source which it aims to authenticate. $m$ images are "leaked" and obtained by an attacker who wishes to impersonate the source and pass the authentication. At test time the authenticator is presented with $n$ images that were generated either by the true source $\theta$ , or by the attacker, and decides whether the entity that generated the images was the source $\theta$ or an attacker.
+
+
+Figure 3: An illustration of the game described in the main text. An authenticator receives a sample of $n$ images and needs to decide whether these were generated by a known source, or by an adversary that had access to leaked images. In order to decide, the authenticator is supplied with a sample of $k$ images from the real source.
+
+# C GAME VALUE VISUALIZATIONS
+
In Corollary 4.3 we provide the game value (the expected authenticator accuracy) for the case of multivariate Gaussian sources. In this section, we present additional visualizations of the game value as a function of the different parameters of the game. Fig. 4 visualizes the game value as a function of $\delta$ and $\rho$ , for different values of $d$ (the observation dimension). $\delta = \frac{m}{n}$ is the "expansion ratio" between the source information available to the attacker through the leaked sample and the size of the attacker's attack sample. $\rho = \frac{m}{k}$ is the "information ratio" between the number of source observations available to attacker and authenticator. One can clearly see that as $d$ , the observation dimension, grows large, so does the accuracy of the authenticator, even for $\delta$ values higher than 0.5 and $\rho$ values which intuitively would give the attacker an advantage (e.g., $\frac{m}{k} = 10$ ).
+
+
Figure 4: Game value as a function of $\delta, \rho$ for different dimensions $d$ .

Fig. 5 visualizes the game value as a function of $\delta$ and $\epsilon$ , for different values of $d$ (the observation dimension), where $\epsilon = \frac{n}{k}$ is the "information expansion" of the attacker with respect to the authenticator's source information. Again, one can clearly see that as $d$ , the observation dimension, grows large, so does the accuracy of the authenticator.
+
+
+Figure 5: Game value as a function of $\delta, \epsilon$ for different dimensions $d$ .
+
+# D ADDITIONAL THEOREMS AND PROOFS FOR SEC. 4.1
+
+We begin with some additional notation. Let
+
+$$
+f _ {A} (a) = \int_ {\theta \in \mathcal {H}} Q (\theta) f _ {\theta} ^ {(k)} (a) d \theta
+$$
+
+denote the marginal density of $A$ . Let $Q_{\Theta|A}$ denote the posterior probability over $\mathcal{H}$ given $A$ . That is:
+
+$$
+Q _ {\Theta | A} (\theta | a) = \frac {Q (\theta) f _ {\theta} ^ {(k)} (a)}{\int_ {\nu \in \mathcal {H}} Q (\nu) f _ {\nu} ^ {(k)} (a) d \nu} \equiv \frac {Q (\theta) f _ {\theta} ^ {(k)} (a)}{f _ {A} (a)}
+$$
+
+Also, let $f_{Y|A}, f_{X|A}, g_{X|A}$ , denote the conditional densities defined by:
+
+$$
+f _ {Y | A} (y | a) = \int_ {\theta \in \mathcal {H}} f _ {\theta} ^ {(m)} (y) Q _ {\Theta | A} (\theta | a) d \theta
+$$
+
+$$
+f _ {X | A} (x | a) = \int_ {\theta \in \mathcal {H}} f _ {\theta} ^ {(n)} (x) Q _ {\Theta | A} (\theta | a) d \theta
+$$
+
+$$
+g _ {X \mid A} (x \mid a) = \int_ {y \in \mathcal {X} ^ {m}} g _ {X \mid Y} (x \mid y) f _ {Y \mid A} (y \mid a) d y
+$$
+
+Lemma D.1. Let $\mathcal{G}$ be an attacker defined by the conditional probability distribution $g_{X|Y}$ . Then $\forall a \in \mathcal{X}^k, x \in \mathcal{X}^n$ a best response strategy for $\mathcal{D}$ is:
+
+$$
+\mathcal {D} (a, x) = I \left[ f _ {X \mid A} (x \mid a) > g _ {X \mid A} (x \mid a) \right] \tag {D.1}
+$$
+
+Proof. Given an attacker strategy $g_{X|Y}$ , the objective for $\mathcal{D}$ is given by:
+
+$$
+\begin{array}{l} \operatorname * {a r g m a x} _ {\mathcal {D}} \mathbb {E} _ {\Theta \sim Q} V (\Theta , \mathcal {D}, \mathcal {G}) \\ = \underset {\mathcal {D}} {\operatorname {a r g m a x}} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \left[ \mathbb {E} _ {X \sim f _ {\Theta} ^ {(n)}} [ \mathcal {D} (A, X) ] + \mathbb {E} _ {Y \sim f _ {\Theta} ^ {(m)}} \mathbb {E} _ {X \sim g _ {X | Y} (\cdot | Y)} [ 1 - \mathcal {D} (A, X) ] \right] \\ = \operatorname * {a r g m a x} _ {\mathcal {D}} \mathbb {E} _ {A \sim f _ {A}} \left[ \mathbb {E} _ {X \sim f _ {X | A} (\cdot | A)} [ \mathcal {D} (A, X) ] + \mathbb {E} _ {Y \sim f _ {Y | A} (\cdot | A)} \mathbb {E} _ {X \sim g _ {X | Y} (\cdot | Y)} [ 1 - \mathcal {D} (A, X) ] \right] \\ = \underset {\mathcal {D}} {\operatorname {a r g m a x}} \int_ {a \in \mathcal {X} ^ {k}} f _ {A} (a) \int_ {x \in \mathcal {X} ^ {n}} \left[ f _ {X | A} (x | a) \mathcal {D} (a, x) + g _ {X | A} (x | a) [ 1 - \mathcal {D} (a, x) ] \right] d x d a \\ \end{array}
+$$
+
+Note that $\mathcal{D}$ can be optimized independently for each pair $(a, x) \in \mathcal{X}^k \times \mathcal{X}^n$. Hence, for every pair $(a, x) \in \mathcal{X}^k \times \mathcal{X}^n$ the objective is:
+
+$$
+\operatorname * {a r g m a x} _ {\mathcal {D} (a, x) \in \{0, 1 \}} \left\{f _ {X | A} (x | a) \mathcal {D} (a, x) + g _ {X | A} (x | a) [ 1 - \mathcal {D} (a, x) ] \right\}
+$$
+
+And thus, the optimal decision rule for $\mathcal{D}$ is:
+
+$$
+\mathcal {D} (a, x) = I \left[ f _ {X | A} (x | a) > g _ {X | A} (x | a) \right]
+$$
+
+As required.
+
+Lemma D.2. For all $(a, x) \in \mathcal{X}^k \times \mathcal{X}^n$, let the strategy for $\mathcal{D}$ be defined by:
+
+$$
+\mathcal {D} (a, x) = I \left[ f _ {X | A} (x | a) > g _ {X | A} (x | a) \right]
+$$
+
+Then $\mathcal{G}^*$ is a best response strategy for $\mathcal{G}$ if and only if it minimizes the expected $\ell_1$ distance between the conditional distributions $f_{X|A}$ and $g_{X|A}$. Namely:
+
+$$
+g _ {X | Y} ^ {*} \in \operatorname * {a r g m i n} _ {g _ {X | Y}} \mathbb {E} _ {A \sim f _ {A}} \int_ {x \in \mathcal {X} ^ {n}} \left| f _ {X | A} (x | A) - g _ {X | A} (x | A) \right| d x \tag {D.2}
+$$
+
+Proof. Let $\mathcal{D}$ be defined as in Eq. D.1, the objective for $\mathcal{G}$ is:
+
+$$
+\begin{array}{l} \operatorname * {a r g m i n} _ {\mathcal {G}} \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} V (\Theta , \mathcal {D}, \mathcal {G}) \\ = \operatorname *{argmin}_{g_{X|Y}}\mathbb{E}_{\Theta \sim Q}\mathbb{E}_{A\sim f_{\Theta}^{(k)}}\left[\mathbb{E}_{X\sim f_{\Theta}^{(n)}}\bigl[\mathcal{D}(A,X)\bigr ] + \mathbb{E}_{Y\sim f_{\Theta}^{(m)}}\mathbb{E}_{X\sim g_{X|Y}(\cdot |Y)}\bigl[1 - \mathcal{D}(A,X)\bigr ]\right] \\ = \operatorname * {a r g m i n} _ {g _ {X | Y}} \mathbb {E} _ {A \sim f _ {A}} \left[ \mathbb {E} _ {X \sim f _ {X | A} (\cdot | A)} [ \mathcal {D} (A, X) ] + \mathbb {E} _ {X \sim g _ {X | A} (\cdot | A)} [ 1 - \mathcal {D} (A, X) ] \right] \\ = \operatorname * {a r g m i n} _ {g _ {X | Y}} \int_ {a \in \mathcal {X} ^ {k}} d a f _ {A} (a) \int_ {x \in \mathcal {X} ^ {n}} d x \left[ f _ {X | A} (x | a) \mathcal {D} (a, x) + g _ {X | A} (x | a) [ 1 - \mathcal {D} (a, x) ] \right] \\ = \operatorname {a r g m i n} _ {g _ {X \mid Y}} \int_ {a \in \mathcal {X} ^ {k}} d a f _ {A} (a) \int_ {x \in \mathcal {X} ^ {n}} d x \max \left\{f _ {X \mid A} (x \mid a), g _ {X \mid A} (x \mid a) \right\} \tag {D.3} \\ = \operatorname * {a r g m i n} _ {g _ {X | Y}} \frac {1}{2} \int_ {a \in \mathcal {X} ^ {k}} d a f _ {A} (a) \int_ {x \in \mathcal {X} ^ {n}} d x \left[ f _ {X | A} (x | a) + g _ {X | A} (x | a) + \left| f _ {X | A} (x | a) - g _ {X | A} (x | a) \right| \right] \\ = \operatorname * {a r g m i n} _ {g _ {X | Y}} 1 + \frac {1}{2} \int_ {a \in \mathcal {X} ^ {k}} d a f _ {A} (a) \int_ {x \in \mathcal {X} ^ {n}} d x \left| f _ {X | A} (x | a) - g _ {X | A} (x | a) \right| \\ = \operatorname * {a r g m i n} _ {g _ {X | Y}} \int_ {a \in \mathcal {X} ^ {k}} d a f _ {A} (a) \int_ {x \in \mathcal {X} ^ {n}} d x \left| f _ {X | A} (x | a) - g _ {X | A} (x | a) \right| \\ \end{array}
+$$
+
+As required, where in Eq. D.3 we used the definition of $\mathcal{D}$.
+
+
+
+Theorem 4.1. Consider the attacker defined by:
+
+$$
+g _ {X \mid Y} ^ {*} \in \underset {g _ {X \mid Y}} {\operatorname {a r g m i n}} \mathbb {E} _ {A \sim f _ {A}} \left[ \int_ {x \in \mathcal {X} ^ {n}} \left| f _ {X \mid A} (x \mid A) - g _ {X \mid A} (x \mid A) \right| d x \right] \tag {D.4}
+$$
+
+and let $\mathcal{G}^*$ be the corresponding map from $\mathcal{X}^m$ to the set of probability distributions over $\mathcal{X}^n$ . Consider the authenticator defined by:
+
+$$
+\mathcal {D} ^ {*} (a, x) = I \left[ f _ {X | A} (x | a) > g _ {X | A} ^ {*} (x | a) \right] \tag {D.5}
+$$
+
+where $I$ is the indicator function. Then $(\mathcal{D}^*,\mathcal{G}^*)$ is a solution of Eq. 2.2 that satisfies Eq. 4.1.
+
+Proof. From Lemmas D.1, D.2 we have that $\max_{\mathcal{D}} V(\mathcal{D}, \mathcal{G}^*) = V(\mathcal{D}^*, \mathcal{G}^*) = \min_{\mathcal{G}} V(\mathcal{D}^*, \mathcal{G})$ , from which it follows that Eq. 4.1 is satisfied and thus $(\mathcal{D}^*, \mathcal{G}^*)$ is a solution of Eq. 2.2.
+
+# E THEOREM AND PROOF FOR SEC. 4.2
+
+Theorem E.1. For all $n \leqslant m$ it holds that:
+
+$$
+\max _ {\mathcal {D} \in \mathbb {D}} \min _ {\mathcal {G} \in \mathbb {G}} V (\mathcal {D}, \mathcal {G}) = 0. 5
+$$
+
+Proof. Consider the attacker $\mathcal{G}_{\mathrm{replay}}$ defined by the following generative process: given a leaked sample $Y \in \mathcal{X}^m$, $\mathcal{G}_{\mathrm{replay}}$ generates a sample $X \in \mathcal{X}^n$ such that $X_{i} = Y_{i} \quad \forall i \in [n]$. (This is possible since we assumed $n \leqslant m$.) Namely, we have:
+
+$$
+g _ {X \mid Y} (x \mid y) = \prod_ {i = 1} ^ {n} \delta (x _ {i} - y _ {i})
+$$
+
+where $\delta$ is the Dirac delta. Thus, $\forall (a, x) \in \mathcal{X}^k \times \mathcal{X}^n$:
+
+$$
+\begin{array}{l} g _ {X \mid A} (x \mid a) = \int_ {y \in \mathcal {X} ^ {m}} d y g _ {X \mid Y} (x \mid y) f _ {Y \mid A} (y \mid a) \\ = \int_ {y \in \mathcal {X} ^ {m}} d y g _ {X | Y} (x | y) \int_ {\theta \in \mathcal {H}} d \theta f _ {\theta} ^ {(m)} (y) Q _ {\Theta | A} (\theta | a) \\ = \int_ {\theta \in \mathcal {H}} d \theta \int_ {y \in \mathcal {X} ^ {m}} d y \prod_ {i = 1} ^ {n} \delta (x _ {i} - y _ {i}) f _ {\theta} ^ {(m)} (y) Q _ {\Theta | A} (\theta | a) \\ = \int_ {\theta \in \mathcal {H}} d \theta Q _ {\Theta | A} (\theta | a) \int_ {y \in \mathcal {X} ^ {n}} d y \prod_ {i = 1} ^ {n} \delta (x _ {i} - y _ {i}) f _ {\theta} ^ {(n)} (y) \int_ {y ^ {\prime} \in \mathcal {X} ^ {m - n}} d y ^ {\prime} f _ {\theta} ^ {(m - n)} (y ^ {\prime}) \\ = \int_ {\theta \in \mathcal {H}} d \theta Q _ {\Theta | A} (\theta | a) \int_ {y \in \mathcal {X} ^ {n}} d y \prod_ {i = 1} ^ {n} \delta (x _ {i} - y _ {i}) f _ {\theta} ^ {(n)} (y) \\ = \int_ {\theta \in \mathcal {H}} d \theta Q _ {\Theta | A} (\theta | a) f _ {\theta} ^ {(n)} (x) \\ = f _ {X \mid A} (x \mid a) \\ \end{array}
+$$
+
+Define:
+
+$$
+\mathcal {D} _ {0} (a, x) = 0 \quad \forall a, x \in \mathcal {X} ^ {k} \times \mathcal {X} ^ {n}
+$$
+
+Then according to Theorem 4.1 $(\mathcal{D}_0, \mathcal{G}_{\mathrm{replay}})$ is a solution of Eq. 2.2 that satisfies Eq. 4.1, and therefore:
+
+$$
+\begin{array}{l} \max _ {\mathcal {D} \in \mathbb {D}} \min _ {\mathcal {G} \in \mathbb {G}} V (\mathcal {D}, \mathcal {G}) = V (\mathcal {D} _ {0}, \mathcal {G} _ {\text {replay}}) \\ = \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {Y \sim f _ {\Theta} ^ {(m)}} \left[ \mathbb {E} _ {X \sim f _ {\Theta} ^ {(n)}} [ \mathcal {D} _ {0} (A, X) ] + \mathbb {E} _ {X \sim \mathcal {G} _ {\text {replay}} (Y)} [ 1 - \mathcal {D} _ {0} (A, X) ] \right] \\ = \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {Y \sim f _ {\Theta} ^ {(m)}} \left[ \mathbb {E} _ {X \sim f _ {\Theta} ^ {(n)}} [ 0 ] + \mathbb {E} _ {X \sim \mathcal {G} _ {\text {replay}} (Y)} [ 1 ] \right] \\ = \frac {1}{2} \\ \end{array}
+$$
+
+As required.
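As a numerical sanity check (not part of the original proof), the replay attacker is trivial to implement; a minimal numpy sketch, with function names of our choosing:

```python
import numpy as np

def replay_attack(Y: np.ndarray, n: int) -> np.ndarray:
    """G_replay: copy the first n rows of the leaked sample Y verbatim.

    Requires n <= m, where Y has shape (m, d)."""
    m = Y.shape[0]
    assert n <= m, "replay is only defined for n <= m"
    return Y[:n].copy()

# The attack sample is distributed exactly like n fresh draws from the
# same source, so no authenticator can beat chance (game value 1/2).
Y = np.arange(12.0).reshape(4, 3)  # m = 4 leaked examples in R^3
X = replay_attack(Y, 2)            # n = 2 attack examples
```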
+
+
+
+# F ADDITIONAL THEOREMS AND PROOFS FOR SEC. 4.3
+
+# F.1 NOTATION AND DEFINITIONS
+
+In this section, we consider the case where the sources are $d$-dimensional Gaussian vectors with a known covariance matrix $\Sigma = CC^T \in \mathbb{R}^{d \times d}$ and an unknown mean vector $\theta \in \mathbb{R}^d$. That is, the set of possible sources is $\mathcal{H} = \mathbb{R}^d$, and given $\theta \in \mathcal{H}$ the associated probability density over the domain $\mathcal{X} = \mathbb{R}^d$ is $f_{\theta}(x) = \frac{1}{\sqrt{(2\pi)^d|\operatorname*{det}(\Sigma)|}}\exp\left(-\frac{1}{2}(x - \theta)^T\Sigma^{-1}(x - \theta)\right)$.
+
+A sample of $n$ examples $x \in \mathcal{X}^n$ is represented as a matrix $x \in \mathbb{R}^{n \times d}$, where the first index runs over examples and the second over the coordinates of the observation space $\mathbb{R}^d$. That is, $x_{ij} \in \mathbb{R}$ is the $j$'th coordinate of the $i$'th example in the sample $x \in \mathbb{R}^{n \times d}$.
+
+We continue with a few more notations that simplify the proofs. Given a matrix $x \in \mathbb{R}^{n \times d}$ , we let $x_{c} = \left[x_{1}^{T}, \ldots, x_{n}^{T}\right]^{T} \in \mathbb{R}^{nd}$ be the concatenation vector representing $x$ . Given a vector $\theta \in \mathbb{R}^{d}$ , we let $\theta_{c,n} = \left[\theta^{T}, \ldots, \theta^{T}\right]^{T} \in \mathbb{R}^{nd}$ be the concatenation of $n$ copies of $\theta$ . Given a matrix $x \in \mathbb{R}^{n \times d}$ , we let $\bar{x} \equiv \frac{1}{n} \sum_{i=1}^{n} x_{i} \in \mathbb{R}^{d}$ denote its mean along the sample dimension. For any matrix $B \in \mathbb{R}^{d \times d}$ , we denote:
+
+$$
+d i a g (B, k) = \left[ \begin{array}{c c c} B & & 0 \\ & \ddots & \\ 0 & & B \end{array} \right] \in \mathbb {R} ^ {k d \times k d}
+$$
+
+and
+
+$$
+r e p (B, k) = \left[ \begin{array}{c c c} B & \dots & B \\ \vdots & \ddots & \vdots \\ B & \dots & B \end{array} \right] \in \mathbb {R} ^ {k d \times k d}
+$$
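Both block operators are Kronecker products, which makes them easy to build numerically; a small numpy sketch (function names are ours):

```python
import numpy as np

def diag_block(B: np.ndarray, k: int) -> np.ndarray:
    # diag(B, k): block-diagonal matrix with k copies of B on the diagonal.
    return np.kron(np.eye(k), B)

def rep_block(B: np.ndarray, k: int) -> np.ndarray:
    # rep(B, k): k-by-k block tiling, every block equal to B.
    return np.kron(np.ones((k, k)), B)

B = np.array([[2.0, 1.0],
              [1.0, 3.0]])
D, R = diag_block(B, 3), rep_block(B, 3)  # both are 6 x 6
```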
+
+Finally, we define strategies for both attacker and authenticator, which we prove in what follows to be the optimal strategies for the game.
+
+Definition F.1. Let $\mathcal{G}^*$ denote an attacker defined by the following generative process: Given a leaked sample $Y\in \mathbb{R}^{m\times d}$ , $\mathcal{G}^*$ generates an attack sample $X\in \mathbb{R}^{n\times d}$ as follows. It first samples $n$ vectors $W_{1},\ldots ,W_{n}\stackrel {iid}{\sim}\mathcal{N}(0,\Sigma)$ and then sets:
+
+$$
+X _ {i} = W _ {i} - \bar {W} + \bar {Y} \tag {F.1}
+$$
+
+Also, let $g_{X|Y}^*$ denote its associated conditional probability.
+
+Definition F.2. For any $\alpha \in \mathbb{R}_{+}$ , let $\mathcal{D}_{\alpha}$ denote an authenticator defined as:
+
+$$
+\mathcal {D} _ {\alpha} (a, x) = I \left[ \| \bar {x} - \bar {a} \| _ {\Sigma^ {- 1}} ^ {2} < \alpha \right]
+$$
+
+where $I$ is the indicator function.
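Both strategies are simple to simulate; a numpy sketch of Defs. F.1 and F.2 (function names are ours, and by construction the attack sample's mean coincides exactly with the leaked sample's mean):

```python
import numpy as np

def attacker_gstar(Y: np.ndarray, Sigma: np.ndarray, n: int,
                   rng: np.random.Generator) -> np.ndarray:
    """Def. F.1: draw W_1..W_n iid from N(0, Sigma), then set
    X_i = W_i - W_bar + Y_bar."""
    d = Sigma.shape[0]
    W = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    return W - W.mean(axis=0) + Y.mean(axis=0)

def authenticator_alpha(a: np.ndarray, x: np.ndarray,
                        Sigma_inv: np.ndarray, alpha: float) -> int:
    """Def. F.2: output 1 iff ||x_bar - a_bar||^2_{Sigma^-1} < alpha."""
    diff = x.mean(axis=0) - a.mean(axis=0)
    return int(diff @ Sigma_inv @ diff < alpha)

rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
Y = rng.multivariate_normal(np.zeros(2), Sigma, size=5)  # m = 5 leaked
X = attacker_gstar(Y, Sigma, n=8, rng=rng)               # n = 8 attack
```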
+
+# F.2 TECHNICAL LEMMAS
+
+Lemma F.3. Let $X_{1},\ldots ,X_{k}\stackrel {iid}{\sim}\mathcal{N}(\mu ,\Sigma)$, where $\mu \in \mathbb{R}^d$ and $\Sigma \in \mathbb{R}^{d\times d}$. Then:
+
+$$
+\bar {X} = \frac {1}{k} \sum_ {j = 1} ^ {k} X _ {j} \sim \mathcal {N} (\mu , \frac {1}{k} \Sigma)
+$$
+
+Proof. We begin by observing that $X_{c} \sim \mathcal{N}(\mu_{c,k}, \text{diag}(\Sigma, k))$ . Let $B = \frac{1}{k} \left[ \begin{array}{ccc} I_{d} & \dots & I_{d} \end{array} \right] \in \mathbb{R}^{d \times kd}$ and observe that $\bar{X} = BX_{c}$ . Therefore, since this is an affine transformation of a Gaussian vector we have:
+
+$$
+\bar {X} \sim \mathcal {N} (B \mu_ {c, k}, B d i a g (\Sigma , k) B ^ {T}) = \mathcal {N} (\mu , \frac {1}{k} \Sigma)
+$$
+
+As required.
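The covariance claim can be checked mechanically: with $B = \frac{1}{k}[I_d \;\dots\; I_d]$, the affine rule gives $\mathrm{Cov}(BX_c) = B\,\mathrm{diag}(\Sigma,k)\,B^T$, which should equal $\frac{1}{k}\Sigma$. A small numerical sketch (values chosen arbitrarily):

```python
import numpy as np

d, k = 2, 4
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])

# B = (1/k) [I_d ... I_d], so that X_bar = B X_c.
B = np.tile(np.eye(d), (1, k)) / k          # shape (d, k*d)
cov_big = np.kron(np.eye(k), Sigma)         # diag(Sigma, k)
cov_mean = B @ cov_big @ B.T                # covariance of X_bar
```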
+
+
+
+Lemma F.4. Let $X \in \mathbb{R}^d$ be a Gaussian vector such that $X \sim \mathcal{N}(\mu, \Sigma)$, and let $X_{c,n} \in \mathbb{R}^{nd}$ be the concatenation of $n$ copies of $X$. Then:
+
+$$
+X _ {c, n} \sim \mathcal {N} (\mu_ {c, n}, r e p (\Sigma , n))
+$$
+
+Proof. Let
+
+$$
+B = \left[ \begin{array}{c} I _ {d} \\ \vdots \\ I _ {d} \end{array} \right] \in \mathbb {R} ^ {n d \times d}
+$$
+
+and observe that $X_{c,n} = BX$ . Therefore, since this is an affine transformation of a Gaussian vector we have:
+
+$$
+X _ {c, n} \sim \mathcal {N} (B \mu , B \Sigma B ^ {T}) = \mathcal {N} (\mu_ {c, n}, r e p (\Sigma , n))
+$$
+
+As required.
+
+
+
+Lemma F.5. Let $\theta \in \mathbb{R}^d$ , $\Sigma \in \mathbb{R}^{d\times d}$ represent the mean and covariance of a Gaussian distribution. Let $X\in \mathbb{R}^{n\times d}$ be a random sample generated by the attacker defined in Def. F.1. Then:
+
+$$
+X _ {c} \sim \mathcal {N} (\theta_ {c, n}, d i a g (\Sigma , n) + r e p ((\frac {n - m}{m n}) \Sigma , n)) \equiv \mathcal {N} (\theta_ {c, n}, \Psi)
+$$
+
+Proof. Observe that $W_{c} \sim \mathcal{N}(0, \text{diag}(\Sigma, n))$ . Using Lemma F.3 we get $\bar{Y} \sim \mathcal{N}(\theta, \frac{1}{m}\Sigma)$ and observe that $\bar{W}_{c,n} = \text{rep}(\frac{1}{n} I_{d}, n)W_{c}$ . Using Lemma F.4 we get $\bar{Y}_{c,n} \sim \mathcal{N}(\theta_{c,n}, \frac{1}{m}\text{rep}(\Sigma, n))$ . We define the following block matrices
+
+$$
+Z = \left[ \begin{array}{c} W _ {c} \\ \bar {Y} _ {c, n} \end{array} \right], \quad B = \left[ \begin{array}{c c} I _ {n d} - \frac {1}{n} r e p (I _ {d}, n), & I _ {n d} \end{array} \right]
+$$
+
+and observe that:
+
+$$
+Z \sim \mathcal {N} (\left[ \begin{array}{c} 0 _ {n d} \\ \theta_ {c, n} \end{array} \right], \left[ \begin{array}{c c} d i a g (\Sigma , n) & 0 \\ 0 & \frac {1}{m} r e p (\Sigma , n) \end{array} \right])
+$$
+
+Note that $X_{c} = W_{c} - \bar{W}_{c,n} + \bar{Y}_{c,n} = BZ$ and therefore we get:
+
+$$
+\begin{array}{l} X _ {c} \sim \mathcal {N} (B \left[ \begin{array}{c} 0 _ {n d} \\ \theta_ {c, n} \end{array} \right], B \left[ \begin{array}{c c} d i a g (\Sigma , n) & 0 \\ 0 & \frac {1}{m} r e p (\Sigma , n) \end{array} \right] B ^ {T}) \\ = \mathcal {N} \left(\theta_ {c, n}, d i a g (\Sigma , n) + r e p \left(\left(\frac {n - m}{m n}\right) \Sigma , n\right)\right) \\ \end{array}
+$$
+
+As required.
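The closed form $\Psi = B\,\mathrm{Cov}(Z)\,B^T$ can also be verified numerically; a sketch using the block constructions above (dimensions chosen arbitrarily):

```python
import numpy as np

d, n, m = 2, 5, 3
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
I = np.eye(d)

diag_S_n = np.kron(np.eye(n), Sigma)          # diag(Sigma, n)
rep_I_n = np.kron(np.ones((n, n)), I)         # rep(I_d, n)
rep_S_n = np.kron(np.ones((n, n)), Sigma)     # rep(Sigma, n)

# B = [ I_{nd} - (1/n) rep(I_d, n) ,  I_{nd} ]
B = np.hstack([np.eye(n * d) - rep_I_n / n, np.eye(n * d)])
cov_Z = np.block([
    [diag_S_n, np.zeros((n * d, n * d))],
    [np.zeros((n * d, n * d)), rep_S_n / m],
])
Psi = B @ cov_Z @ B.T
Psi_claimed = diag_S_n + ((n - m) / (m * n)) * rep_S_n
```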
+
+
+
+Lemma F.6. Let $\Sigma = CC^T \in \mathbb{R}^{d \times d}$ represent the covariance of a Gaussian distribution, and consider the following covariance matrix:
+
+$$
+\Psi = d i a g (\Sigma , n) + r e p ((\frac {n - m}{m n}) \Sigma , n)
+$$
+
+Then its inverse is:
+
+$$
+\Psi^ {- 1} = d i a g (\Sigma^ {- 1}, n) - \frac {n - m}{n ^ {2}} r e p (\Sigma^ {- 1}, n)
+$$
+
+and the determinant is:
+
+$$
+\det (\Psi) = \left(\frac {n}{m}\right) ^ {d} \det (\Sigma) ^ {n}
+$$
+
+Proof. We begin by defining the following block matrices:
+
+$$
+U = \left[ \begin{array}{c} \Sigma \\ \vdots \\ \Sigma \end{array} \right] \in \mathbb {R} ^ {n d \times d} \tag {F.2}
+$$
+
+$$
+V = \left(\frac {n - m}{m n}\right) \left[ \begin{array}{c} I _ {d} \\ \vdots \\ I _ {d} \end{array} \right] \in \mathbb {R} ^ {n d \times d} \tag {F.3}
+$$
+
+and note that $rep\left(\left(\frac{n - m}{mn}\right)\Sigma, n\right) = UV^T$ . Then:
+
+$$
+\begin{array}{l} \Psi^ {- 1} = \left(d i a g (\Sigma , n) + r e p \left(\left(\frac {n - m}{m n}\right) \Sigma , n\right)\right) ^ {- 1} \\ = (d i a g (\Sigma , n) + U V ^ {T}) ^ {- 1} \\ \stackrel {(i)} {=} d i a g (\Sigma , n) ^ {- 1} - d i a g (\Sigma , n) ^ {- 1} U \left(I _ {d} + V ^ {T} d i a g (\Sigma , n) ^ {- 1} U\right) ^ {- 1} V ^ {T} d i a g (\Sigma , n) ^ {- 1} \\ \stackrel {(i i)} {=} d i a g (\Sigma^ {- 1}, n) - d i a g (\Sigma^ {- 1}, n) U \left(I _ {d} + V ^ {T} d i a g (\Sigma^ {- 1}, n) U\right) ^ {- 1} V ^ {T} d i a g (\Sigma^ {- 1}, n) \\ = \operatorname {d i a g} (\Sigma^ {- 1}, n) - \left[ \begin{array}{c} I _ {d} \\ \vdots \\ I _ {d} \end{array} \right] \left(I _ {d} + \frac {n - m}{n m} \left[ I _ {d} \quad \dots \quad I _ {d} \right] \left[ \begin{array}{c} I _ {d} \\ \vdots \\ I _ {d} \end{array} \right]\right) ^ {- 1} \frac {n - m}{n m} \left[ \begin{array}{c c c} \Sigma^ {- 1} & \dots & \Sigma^ {- 1} \end{array} \right] \\ = \operatorname {d i a g} (\Sigma^ {- 1}, n) - \left[ \begin{array}{c} I _ {d} \\ \vdots \\ I _ {d} \end{array} \right] \frac {m}{n} I _ {d} \frac {n - m}{n m} \left[ \begin{array}{c c c} \Sigma^ {- 1} & \dots & \Sigma^ {- 1} \end{array} \right] \\ = d i a g (\Sigma^ {- 1}, n) - \frac {n - m}{n ^ {2}} \left[ \begin{array}{c} I _ {d} \\ \vdots \\ I _ {d} \end{array} \right] \left[ \begin{array}{c c c} \Sigma^ {- 1} & \dots & \Sigma^ {- 1} \end{array} \right] \\ = d i a g (\Sigma^ {- 1}, n) - \frac {n - m}{n ^ {2}} r e p (\Sigma^ {- 1}, n) \\ \end{array}
+$$
+
+As required, where in $(i)$ we used the Woodbury matrix identity, and in $(ii)$ the inverse of a block-diagonal matrix.
+
+Next, we turn to find the determinant of $\Psi$ :
+
+$$
+\begin{array}{l} \det (\Psi) = \det \left( d i a g (\Sigma , n) + r e p \left( \left(\frac {n - m}{m n}\right) \Sigma , n \right) \right) \\ = \det (d i a g (\Sigma , n) + U V ^ {T}) \\ \stackrel {(i i i)} {=} \det (d i a g (\Sigma , n)) \det (I _ {d} + V ^ {T} d i a g (\Sigma , n) ^ {- 1} U) \\ \stackrel {(i v)} {=} \det (\Sigma) ^ {n} \det (I _ {d} + V ^ {T} d i a g (\Sigma^ {- 1}, n) U) \\ = \det (\Sigma) ^ {n} \det (I _ {d} + \frac {n - m}{n m} \left[ I _ {d} \quad \dots \quad I _ {d} \right] d i a g (\Sigma^ {- 1}, n) \left[ \begin{array}{c} \Sigma \\ \vdots \\ \Sigma \end{array} \right]) \\ = \det (\Sigma) ^ {n} \det \left(I _ {d} + \frac {n - m}{m} I _ {d}\right) \\ = \det (\Sigma) ^ {n} \left(\frac {n}{m}\right) ^ {d} \\ \end{array}
+$$
+
+As required, where in (iii) we used the matrix determinant lemma, and in (iv) the determinant of a block-diagonal matrix.
+
+Lemma F.7. Let
+
+$$
+h (x, \mu) = \exp \{- \frac {1}{2 \sigma^ {2}} (x - \mu) ^ {T} \Sigma^ {- 1} (x - \mu) \} I [ x ^ {T} \Sigma^ {- 1} x < \alpha ] \quad \forall (x, \mu) \in \mathbb {R} ^ {d} \times \mathbb {R} ^ {d}
+$$
+
+and define the function:
+
+$$
+\psi (\mu) = \int_ {x \in \mathbb {R} ^ {d}} d x h (x, \mu)
+$$
+
+Then $\psi (\mu)$ is log-concave over $\mathbb{R}^d$.
+
+Proof. We begin by noting that the function $(x - \mu)^T\Sigma^{-1}(x - \mu)$ is jointly convex in $x$ and $\mu$; hence its negative is concave, and thus, by definition, the function:
+
+$$
+\exp \left\{- \frac {1}{2 \sigma^ {2}} (x - \mu) ^ {T} \Sigma^ {- 1} (x - \mu) \right\} \tag {F.4}
+$$
+
+is log-concave in $(x, \mu)$. We now show that $h(x, \mu)$ is log-concave with respect to each of $x$ and $\mu$. First, with respect to $\mu$: let $\beta \in [0,1]$ and $\mu_1, \mu_2 \in \mathbb{R}^d$, and observe that:
+
+$$
+\begin{array}{l} h (x, \beta \mu_ {1} + (1 - \beta) \mu_ {2}) \\ = \exp \left\{- \frac {1}{2 \sigma^ {2}} \left(x - \beta \mu_ {1} - (1 - \beta) \mu_ {2}\right) ^ {T} \Sigma^ {- 1} \left(x - \beta \mu_ {1} - (1 - \beta) \mu_ {2}\right) \right\} I \left[ x ^ {T} \Sigma^ {- 1} x < \alpha \right] \\ \stackrel {(i)} {\geqslant} \exp \left\{- \frac {\beta}{2 \sigma^ {2}} (x - \mu_ {1}) ^ {T} \Sigma^ {- 1} (x - \mu_ {1}) \right\} \exp \left\{- \frac {(1 - \beta)}{2 \sigma^ {2}} (x - \mu_ {2}) ^ {T} \Sigma^ {- 1} (x - \mu_ {2}) \right\} \\ I \left[ x ^ {T} \Sigma^ {- 1} x < \alpha \right] \\ \stackrel {(i i)} {=} \exp \left\{- \frac {\beta}{2 \sigma^ {2}} (x - \mu_ {1}) ^ {T} \Sigma^ {- 1} (x - \mu_ {1}) \right\} \exp \left\{- \frac {(1 - \beta)}{2 \sigma^ {2}} (x - \mu_ {2}) ^ {T} \Sigma^ {- 1} (x - \mu_ {2}) \right\} \\ (I [ x ^ {T} \Sigma^ {- 1} x < \alpha ]) ^ {\beta} (I [ x ^ {T} \Sigma^ {- 1} x < \alpha ]) ^ {(1 - \beta)} \\ = h (x, \mu_ {1}) ^ {\beta} h (x, \mu_ {2}) ^ {(1 - \beta)} \\ \end{array}
+$$
+
+Therefore $h(x, \mu)$ is log-concave with respect to $\mu$, where in (i) we used the log-concavity of the function in Eq. F.4, and in (ii) the fact that $I[x^T \Sigma^{-1} x < \alpha] \in \{0, 1\}$.
+
+Now, with respect to $x$: let $\beta \in [0,1]$ and $x_1, x_2 \in \mathbb{R}^d$, and observe that for any convex function $q$ we have:
+
+$$
+\begin{array}{l} I \left[ q \left(x _ {1}\right) < \alpha \right] ^ {\beta} I \left[ q \left(x _ {2}\right) < \alpha \right] ^ {(1 - \beta)} \stackrel {(i)} {=} I \left[ q \left(x _ {1}\right) < \alpha \right] I \left[ q \left(x _ {2}\right) < \alpha \right] \\ = I [ q (x _ {1}) < \alpha \wedge q (x _ {2}) < \alpha ] \\ \stackrel {(i i)} {\leqslant} I \left[ \beta q \left(x _ {1}\right) + (1 - \beta) q \left(x _ {2}\right) < \alpha \right] \\ \stackrel {(i i i)} {\leqslant} I \left[ q (\beta x _ {1} + (1 - \beta) x _ {2}) < \alpha \right] \\ \end{array}
+$$
+
+Therefore $I[x^T \Sigma^{-1} x < \alpha]$ is log-concave, where in (i) we used the fact that $I[q(x) < \alpha] \in \{0, 1\}$, in (ii) the fact that $q(x_1) < \alpha \wedge q(x_2) < \alpha \Rightarrow \beta q(x_1) + (1 - \beta)q(x_2) < \alpha$, and in (iii) the convexity of $q$. Hence, for $h(x, \mu)$ we have:
+
+$$
+\begin{array}{l} h (\beta x _ {1} + (1 - \beta) x _ {2}, \mu) \\ = \exp \left\{- \frac {1}{2 \sigma^ {2}} \left(\beta x _ {1} + (1 - \beta) x _ {2} - \mu\right) ^ {T} \Sigma^ {- 1} \left(\beta x _ {1} + (1 - \beta) x _ {2} - \mu\right) \right\} \\ I \left[ (\beta x _ {1} + (1 - \beta) x _ {2}) ^ {T} \Sigma^ {- 1} (\beta x _ {1} + (1 - \beta) x _ {2}) < \alpha \right] \\ \stackrel {(i v)} {\geqslant} \exp \left\{- \frac {\beta}{2 \sigma^ {2}} (x _ {1} - \mu) ^ {T} \Sigma^ {- 1} (x _ {1} - \mu) \right\} \exp \left\{- \frac {(1 - \beta)}{2 \sigma^ {2}} (x _ {2} - \mu) ^ {T} \Sigma^ {- 1} (x _ {2} - \mu) \right\} \\ I \left[ (\beta x _ {1} + (1 - \beta) x _ {2}) ^ {T} \Sigma^ {- 1} (\beta x _ {1} + (1 - \beta) x _ {2}) < \alpha \right] \\ \stackrel {(v)} {\geqslant} \exp \left\{- \frac {\beta}{2 \sigma^ {2}} (x _ {1} - \mu) ^ {T} \Sigma^ {- 1} (x _ {1} - \mu) \right\} \exp \left\{- \frac {(1 - \beta)}{2 \sigma^ {2}} (x _ {2} - \mu) ^ {T} \Sigma^ {- 1} (x _ {2} - \mu) \right\} \\ (I [ x _ {1} ^ {T} \Sigma^ {- 1} x _ {1} < \alpha ]) ^ {\beta} (I [ x _ {2} ^ {T} \Sigma^ {- 1} x _ {2} < \alpha ]) ^ {(1 - \beta)} \\ = h (x _ {1}, \mu) ^ {\beta} h (x _ {2}, \mu) ^ {(1 - \beta)} \\ \end{array}
+$$
+
+Therefore $h(x, \mu)$ is log-concave with respect to $x$, where in $(iv)$ we used the log-concavity of the function in Eq. F.4, and in $(v)$ the log-concavity of $I[x^T \Sigma^{-1} x < \alpha]$.
+
+Finally, by the Prékopa-Leindler inequality (Prékopa, 1973), we have that $\psi (\mu)$ is log-concave, as required.
+
+# F.3 PROOF OF THEOREM 4.2
+
+Lemma F.8. Consider the attacker $\mathcal{G}^*$ , defined in Def. F.1. The best response strategy for the authenticator against this attacker is:
+
+$$
+\mathcal {D} ^ {*} (a, x) = I \left[ \| \bar {x} - \bar {a} \| _ {\Sigma^ {- 1}} ^ {2} < \alpha^ {*} \right]
+$$
+
+Where:
+
+$$
+\alpha^ {*} = \frac {d (m + k) (n + k)}{k ^ {2} (n - m)} \log \frac {n (m + k)}{m (n + k)}
+$$
+
+Proof. The best response authenticator satisfies:
+
+$$
+\begin{array}{l} \mathcal{D}^{*}\in \operatorname *{argmax}_{\mathcal{D}\in \mathbb{D}}V(\mathcal{D},\mathcal{G}^{*}) \\ = \operatorname * {a r g m a x} _ {\mathcal {D} \in \mathbb {D}} \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {Y \sim f _ {\Theta} ^ {(m)}} \left[ \mathbb {E} _ {X \sim f _ {\Theta} ^ {(n)}} \left[ \mathcal {D} (A, X) \right] + \mathbb {E} _ {X \sim g _ {X | Y} ^ {*} (\cdot | Y)} [ 1 - \mathcal {D} (A, X) ] \right] \\ = \operatorname * {a r g m a x} _ {\mathcal {D} \in \mathbb {D}} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \left[ \mathbb {E} _ {X \sim f _ {\Theta} ^ {(n)}} \left[ \mathcal {D} (A, X) \right] + \mathbb {E} _ {X \sim g _ {X | \Theta} ^ {*} (\cdot | \Theta)} \left[ 1 - \mathcal {D} (A, X) \right] \right] \\ \stackrel {F. 5} {=} \underset {\mathcal {D} \in \mathbb {D}} {\operatorname {a r g m a x}} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim \mathcal {N} (\Theta_ {c, k}, d i a g (\Sigma , k))} \\ \left[ \mathbb {E} _ {X \sim \mathcal {N} (\Theta_ {c, n}, d i a g (\Sigma , n))} [ \mathcal {D} (A, X) ] + \mathbb {E} _ {X \sim \mathcal {N} (\Theta_ {c, n}, \Psi)} [ 1 - \mathcal {D} (A, X) ] \right] \\ \stackrel {F. 6} {=} \underset {\mathcal {D} \in \mathbb {D}} {\operatorname {a r g m a x}} \mathbb {E} _ {\Theta \sim Q} \int_ {a \in \mathbb {R} ^ {k d}} d a \int_ {x \in \mathbb {R} ^ {n d}} d x \exp \left\{- \frac {1}{2} (a - \Theta_ {c, k}) ^ {T} d i a g (\Sigma , k) ^ {- 1} (a - \Theta_ {c, k}) \right\} \Big [ \\ \exp \{- \frac {1}{2} (x - \Theta_ {c, n}) ^ {T} d i a g (\Sigma , n) ^ {- 1} (x - \Theta_ {c, n}) \} \mathcal {D} (a, x) + \\ \sqrt {\frac {1}{\left(\frac {n}{m}\right) ^ {d}}} \exp \left\{- \frac {1}{2} \left(x - \Theta_ {c, n}\right) ^ {T} \Psi^ {- 1} \left(x - \Theta_ {c, n}\right) \right\} [ 1 - \mathcal {D} (a, x) ] \Big ] \\ \end{array}
+$$
+
+$\mathcal{D}$ can be chosen independently for each pair $(a,x)\in \mathcal{X}^k\times \mathcal{X}^n$ . Therefore, for any $(a,x)\in \mathcal{X}^k\times \mathcal{X}^n$ the decision rule for $\mathcal{D}(a,x) = 1$ is:
+
+$$
+\begin{array}{l} \sqrt {\left(\frac {n}{m}\right) ^ {d}} \int_ {\theta \in \mathbb {R} ^ {d}} d \theta Q (\theta) \exp \left\{- \frac {1}{2} \left[ (x - \theta_ {c, n}) ^ {T} d i a g (\Sigma , n) ^ {- 1} (x - \theta_ {c, n}) + (a - \theta_ {c, k}) ^ {T} d i a g (\Sigma , k) ^ {- 1} (a - \theta_ {c, k}) \right] \right\} > \\ \int_ {\theta \in \mathbb {R} ^ {d}} d \theta Q (\theta) \exp \left\{- \frac {1}{2} \left[ (x - \theta_ {c, n}) ^ {T} \Psi^ {- 1} (x - \theta_ {c, n}) + (a - \theta_ {c, k}) ^ {T} d i a g (\Sigma , k) ^ {- 1} (a - \theta_ {c, k}) \right] \right\} \\ \end{array}
+$$
+
+Observing the LHS integral and using the improper uniform prior assumption we have:
+
+$$
+\begin{array}{l} \int_ {\theta \in \mathbb {R} ^ {d}} d \theta Q (\theta) \exp \left\{- \frac {1}{2} \left[ (x - \theta_ {c, n}) ^ {T} d i a g (\Sigma^ {- 1}, n) (x - \theta_ {c, n}) + \right. \right. \\ \left. \left(a - \theta_ {c, k}\right) ^ {T} \operatorname {d i a g} \left(\Sigma^ {- 1}, k\right) \left(a - \theta_ {c, k}\right) \right] \rbrace \\ = \int_ {\theta \in \mathbb {R} ^ {d}} d \theta \exp \left\{- \frac {1}{2} \left[ \sum_ {i = 1} ^ {n} x _ {i} ^ {T} \Sigma^ {- 1} x _ {i} + \sum_ {j = 1} ^ {k} a _ {j} ^ {T} \Sigma^ {- 1} a _ {j} - 2 \theta^ {T} \Sigma^ {- 1} (n \bar {x} + k \bar {a}) + (n + k) \theta^ {T} \Sigma^ {- 1} \theta \right] \right\} \\ = \exp \left\{- \frac {1}{2} \left[ \sum_ {i = 1} ^ {n} x _ {i} ^ {T} \Sigma^ {- 1} x _ {i} + \sum_ {j = 1} ^ {k} a _ {j} ^ {T} \Sigma^ {- 1} a _ {j} - (n + k) \left(\frac {n \bar {x} + k \bar {a}}{n + k}\right) ^ {T} \Sigma^ {- 1} \left(\frac {n \bar {x} + k \bar {a}}{n + k}\right) \right] \right\} \\ \int_ {\theta \in \mathbb {R} ^ {d}} d \theta \exp \left\{- \frac {n + k}{2} \left[ \left(\theta - \frac {n \bar {x} + k \bar {a}}{n + k}\right) ^ {T} \Sigma^ {- 1} \left(\theta - \frac {n \bar {x} + k \bar {a}}{n + k}\right) \right] \right\} \\ = \exp \left\{- \frac {1}{2} \left[ \sum_ {i = 1} ^ {n} x _ {i} ^ {T} \Sigma^ {- 1} x _ {i} + \sum_ {j = 1} ^ {k} a _ {j} ^ {T} \Sigma^ {- 1} a _ {j} - (n + k) \left(\frac {n \bar {x} + k \bar {a}}{n + k}\right) ^ {T} \Sigma^ {- 1} \left(\frac {n \bar {x} + k \bar {a}}{n + k}\right) \right] \right\} \\ \sqrt {\left(\frac {2 \pi}{n + k}\right) ^ {d} \det (\Sigma)} \\ \end{array}
+$$
+
+Observing the RHS and using the improper uniform prior assumption we have:
+
+$$
+\begin{array}{l} \int_ {\theta \in \mathbb {R} ^ {d}} d \theta Q (\theta) \exp \left\{- \frac {1}{2} \left[ (x - \theta_ {c, n}) ^ {T} \Psi^ {- 1} (x - \theta_ {c, n}) + (a - \theta_ {c, k}) ^ {T} \operatorname {d i a g} \left(\Sigma^ {- 1}, k\right) (a - \theta_ {c, k}) \right] \right\} \\ = \int_ {\theta \in \mathbb {R} ^ {d}} d \theta \exp \left\{- \frac {1}{2} \left[ \left(x - \theta_ {c, n}\right) ^ {T} \Psi^ {- 1} \left(x - \theta_ {c, n}\right) + \left(a - \theta_ {c, k}\right) ^ {T} \operatorname {d i a g} \left(\Sigma^ {- 1}, k\right) \left(a - \theta_ {c, k}\right) \right] \right\} \\ \stackrel {F. 6} {=} \int_ {\theta \in \mathbb {R} ^ {d}} d \theta \exp \left\{- \frac {1}{2} \left[ \left(x - \theta_ {c, n}\right) ^ {T} (d i a g (\Sigma^ {- 1}, n) - \frac {n - m}{n ^ {2}} r e p (\Sigma^ {- 1}, n)) (x - \theta_ {c, n}) + \right. \right. \\ (a - \theta_ {c, k}) ^ {T} d i a g (\Sigma^ {- 1}, k) (a - \theta_ {c, k}) ] \} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} = \int_ {\theta \in \mathbb {R} ^ {d}} d \theta \exp \{- \frac {1}{2} [ \sum_ {i = 1} ^ {n} (x _ {i} - \theta) ^ {T} \Sigma^ {- 1} (x _ {i} - \theta) + \sum_ {j = 1} ^ {k} (a _ {j} - \theta) ^ {T} \Sigma^ {- 1} (a _ {j} - \theta) - \\ \frac {n - m}{n ^ {2}} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} (x _ {i} - \theta) ^ {T} \Sigma^ {- 1} (x _ {j} - \theta) ] \} \\ = \int_ {\theta \in \mathbb {R} ^ {d}} d \theta \exp \{- \frac {1}{2} \left[ \sum_ {i = 1} ^ {n} (x _ {i} - \theta) ^ {T} \Sigma^ {- 1} (x _ {i} - \theta) + \sum_ {j = 1} ^ {k} (a _ {j} - \theta) ^ {T} \Sigma^ {- 1} (a _ {j} - \theta) - \right. \\ (n - m) (\bar {x} - \theta) ^ {T} \Sigma^ {- 1} (\bar {x} - \theta) ] \} \\ = \int_ {\theta \in \mathbb {R} ^ {d}} d \theta \exp \left\{- \frac {1}{2} \left[ \sum_ {i = 1} ^ {n} x _ {i} ^ {T} \Sigma^ {- 1} x _ {i} + \sum_ {j = 1} ^ {k} a _ {j} ^ {T} \Sigma^ {- 1} a _ {j} - (n - m) \bar {x} ^ {T} \Sigma^ {- 1} \bar {x} + \right. \right. \\ (m + k) \theta^ {T} \Sigma^ {- 1} \theta - 2 \theta^ {T} \Sigma^ {- 1} (m \bar {x} + k \bar {a}) ] \rbrace \\ = \exp \{- \frac {1}{2} \left[ \sum_ {i = 1} ^ {n} x _ {i} ^ {T} \Sigma^ {- 1} x _ {i} + \sum_ {j = 1} ^ {k} a _ {j} ^ {T} \Sigma^ {- 1} a _ {j} - (n - m) \bar {x} ^ {T} \Sigma^ {- 1} \bar {x} - \right. \\ (m + k) \left(\frac {m \bar {x} + k \bar {a}}{m + k}\right) ^ {T} \Sigma^ {- 1} \left(\frac {m \bar {x} + k \bar {a}}{m + k}\right) ] \} \\ \int_ {\theta \in \mathbb {R} ^ {d}} d \theta \exp \left\{- \frac {m + k}{2} \left[ \left(\theta - \frac {m \bar {x} + k \bar {a}}{m + k}\right) ^ {T} \Sigma^ {- 1} \left(\theta - \frac {m \bar {x} + k \bar {a}}{m + k}\right) \right] \right\} \\ = \sqrt {\left(\frac {2 \pi}{m + k}\right) ^ {d} \det (\Sigma)} \exp \{- \frac {1}{2} \left[ \sum_ {i = 1} ^ {n} x _ {i} ^ {T} \Sigma^ {- 1} x _ {i} + \sum_ {j = 1} ^ {k} a _ {j} ^ {T} \Sigma^ {- 1} a _ {j} - (n - m) \bar {x} ^ {T} \Sigma^ {- 1} \bar {x} - \right. 
+\\ (m + k) \left(\frac {m \bar {x} + k \bar {a}}{m + k}\right) ^ {T} \Sigma^ {- 1} \left(\frac {m \bar {x} + k \bar {a}}{m + k}\right) ] \} \\ \end{array}
+$$
+
+Therefore the decision rule is:
+
+$$
+\begin{array}{l} \sqrt {(\frac {n}{m}) ^ {d}} \exp \left\{\frac {1}{2} \left[ (n + k) \left(\frac {n \bar {x} + k \bar {a}}{n + k}\right) ^ {T} \Sigma^ {- 1} \left(\frac {n \bar {x} + k \bar {a}}{n + k}\right) \right] \right\} \sqrt {\left(\frac {2 \pi}{n + k}\right) ^ {d}} > \\ \exp \{\frac {1}{2} [ (n - m) \bar {x} ^ {T} \Sigma^ {- 1} \bar {x} + (m + k) (\frac {m \bar {x} + k \bar {a}}{m + k}) ^ {T} \Sigma^ {- 1} (\frac {m \bar {x} + k \bar {a}}{m + k}) ] \} \sqrt {\left(\frac {2 \pi}{m + k}\right) ^ {d}} \\ \Leftrightarrow \sqrt {\left(\frac {n (m + k)}{m (n + k)}\right) ^ {d}} \exp \left\{\frac {1}{2} \left[ (n + k) \left(\frac {n \bar {x} + k \bar {a}}{n + k}\right) ^ {T} \Sigma^ {- 1} \left(\frac {n \bar {x} + k \bar {a}}{n + k}\right) \right] \right\} > \\ \exp \left\{\frac {1}{2} \left[ (n - m) \bar {x} ^ {T} \Sigma^ {- 1} \bar {x} + (m + k) \left(\frac {m \bar {x} + k \bar {a}}{m + k}\right) ^ {T} \Sigma^ {- 1} \left(\frac {m \bar {x} + k \bar {a}}{m + k}\right) \right] \right\} \\ \Leftrightarrow d \log \frac {n (m + k)}{m (n + k)} > (n - m) \bar {x} ^ {T} \Sigma^ {- 1} \bar {x} + (m + k) \left(\frac {m \bar {x} + k \bar {a}}{m + k}\right) ^ {T} \Sigma^ {- 1} \left(\frac {m \bar {x} + k \bar {a}}{m + k}\right) - \\ (n + k) \left(\frac {n \bar {x} + k \bar {a}}{n + k}\right) ^ {T} \Sigma^ {- 1} \left(\frac {n \bar {x} + k \bar {a}}{n + k}\right) \\ \Leftrightarrow d (m + k) (n + k) \log \frac {n (m + k)}{m (n + k)} > \\ k ^ {2} (n - m) \bar {x} ^ {T} \Sigma^ {- 1} \bar {x} - 2 k ^ {2} (n - m) \bar {x} ^ {T} \Sigma^ {- 1} \bar {a} + k ^ {2} (n - m) \bar {a} ^ {T} \Sigma^ {- 1} \bar {a} \\ \Leftrightarrow \frac {d (m + k) (n + k)}{k ^ {2} (n - m)} \log \frac {n (m + k)}{m (n + k)} > (\bar {x} - \bar {a}) ^ {T} \Sigma^ {- 1} (\bar {x} - \bar {a}) \\ \end{array}
+$$
+
+As required.
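The threshold $\alpha^*$ of Lemma F.8 is a closed-form scalar; a minimal sketch (function name and example values are ours), which also illustrates that $\alpha^* > 0$ whether $n > m$ or $n < m$, since the log term and the $n - m$ factor change sign together:

```python
import math

def alpha_star(d: int, k: int, m: int, n: int) -> float:
    """Optimal threshold from Lemma F.8 (requires n != m)."""
    return (d * (m + k) * (n + k)) / (k**2 * (n - m)) * math.log(
        n * (m + k) / (m * (n + k)))

# Example: d = 3 features, k = 10 authentication examples,
# m = 5 leaked examples, n = 20 attack examples.
a = alpha_star(d=3, k=10, m=5, n=20)
```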
+
+Lemma F.9. Consider the authenticator $\mathcal{D}_{\alpha}$ , as defined in Def. F.2. Then any attacker $\mathcal{G}$ , represented by a conditional probability $g_{X|Y}$ , that satisfies the condition $\bar{x} = \bar{y}$ for any leaked sample $y \in \mathbb{R}^{m \times d}$ and attacker generated sample $x \in \{\mathbb{R}^{n \times d}: g_{X|Y}(x|y) > 0\}$ , satisfies:
+
+$$
+\mathcal {G} \in \operatorname * {a r g m i n} _ {\mathcal {G} ^ {\prime} \in \mathbb {G}} V (\mathcal {D} _ {\alpha}, \mathcal {G} ^ {\prime}) \quad \forall \alpha \in \mathbb {R} _ {+}
+$$
+
+Proof. The best response attacker satisfies:
+
+$$
+\begin{array}{l} g_{X \mid Y}^{\prime} \in \operatorname*{argmin}_{g_{X \mid Y}} \frac{1}{2} \mathbb{E}_{\Theta \sim Q} \mathbb{E}_{A \sim f_{\Theta}^{(k)}} \mathbb{E}_{Y \sim f_{\Theta}^{(m)}} \left[ \mathbb{E}_{X \sim f_{\Theta}^{(n)}} [ \mathcal{D}_{\alpha} (A, X) ] + \mathbb{E}_{X \sim g_{X \mid Y} (\cdot \mid Y)} [ 1 - \mathcal{D}_{\alpha} (A, X) ] \right] \\ = \operatorname*{argmin}_{g_{X \mid Y}} \mathbb{E}_{\Theta \sim Q} \mathbb{E}_{A \sim f_{\Theta}^{(k)}} \mathbb{E}_{Y \sim f_{\Theta}^{(m)}} \mathbb{E}_{X \sim g_{X \mid Y} (\cdot \mid Y)} \big[ 1 - \mathcal{D}_{\alpha} (A, X) \big] \\ = \operatorname*{argmax}_{g_{X \mid Y}} \mathbb{E}_{\Theta \sim Q} \mathbb{E}_{A \sim f_{\Theta}^{(k)}} \mathbb{E}_{Y \sim f_{\Theta}^{(m)}} \mathbb{E}_{X \sim g_{X \mid Y} (\cdot \mid Y)} [ \mathcal{D}_{\alpha} (A, X) ] \\ = \operatorname*{argmax}_{g_{X \mid Y}} \mathbb{E}_{\Theta \sim Q} \mathbb{E}_{A \stackrel{\text{iid}}{\sim} \mathcal{N} (\Theta, \Sigma)} \mathbb{E}_{Y \stackrel{\text{iid}}{\sim} \mathcal{N} (\Theta, \Sigma)} \mathbb{E}_{X \sim g_{X \mid Y} (\cdot \mid Y)} \left[ I \left[ \| \bar{X} - \bar{A} \|_{\Sigma^{-1}}^{2} < \alpha \right] \right] \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \stackrel{\text{Lemma F.3}}{=} \operatorname*{argmax}_{g_{X | Y}} \mathbb{E}_{\Theta \sim Q} \mathbb{E}_{\bar{A} \sim \mathcal{N} (\Theta, \frac{1}{k} \Sigma)} \mathbb{E}_{Y \stackrel{\text{iid}}{\sim} \mathcal{N} (\Theta, \Sigma)} \mathbb{E}_{X \sim g_{X | Y} (\cdot | Y)} \left[ I \left[ \| \bar{X} - \bar{A} \|_{\Sigma^{-1}}^{2} < \alpha \right] \right] \\ = \operatorname*{argmax}_{g_{X | Y}} \int_{y \in \mathbb{R}^{m \times d}} d y \int_{x \in \mathbb{R}^{n \times d}} d x \, g_{X | Y} (x | y) \int_{\bar{a} \in \mathbb{R}^{d}} d \bar{a} \, I \left[ \| \bar{x} - \bar{a} \|_{\Sigma^{-1}}^{2} < \alpha \right] \int_{\theta \in \mathbb{R}^{d}} d \theta \, Q (\theta) \\ \exp \{- \frac{k}{2} (\bar{a} - \theta)^{T} \Sigma^{-1} (\bar{a} - \theta) \} \exp \{- \frac{1}{2} \sum_{j = 1}^{m} (y_{j} - \theta)^{T} \Sigma^{-1} (y_{j} - \theta) \} \\ \end{array}
+$$
+
+Note that $g_{X|Y}(x|y)$ can be chosen independently for each $y \in \mathbb{R}^{m \times d}$ . Thus, we can optimize it independently for each $y \in \mathbb{R}^{m \times d}$ and we have:
+
+$$
+\begin{array}{l} g_{X | Y}^{\prime} (\cdot | y) \in \operatorname*{argmax}_{g_{X | Y} (\cdot | y)} \int_{x \in \mathbb{R}^{n \times d}} d x \, g_{X | Y} (x | y) \int_{\bar{a} \in \mathbb{R}^{d}} d \bar{a} \, I \left[ \| \bar{x} - \bar{a} \|_{\Sigma^{-1}}^{2} < \alpha \right] \int_{\theta \in \mathbb{R}^{d}} d \theta \, Q (\theta) \\ \exp \{- \frac{k}{2} (\bar{a} - \theta)^{T} \Sigma^{-1} (\bar{a} - \theta) \} \exp \{- \frac{1}{2} \sum_{j = 1}^{m} (y_{j} - \theta)^{T} \Sigma^{-1} (y_{j} - \theta) \} \\ \end{array}
+$$
+
+Note that for any PDF $f$ over $\mathbb{R}^{n \times d}$ and a function $\varphi : \mathbb{R}^{n \times d} \to \mathbb{R}$ , it holds that $\int_{x \in \mathbb{R}^{n \times d}} dx f(x) \varphi(x) \leqslant \sup_x \varphi(x)$ . Therefore, there exists a deterministic distribution $g_{X|Y}'(x|y) = \delta(x - x')$ that achieves the maximum. Thus, it's sufficient to find a vector $x_G$ that achieves the maximum:
+
+$$
+\begin{array}{l} x_{\mathcal{G}} \in \operatorname*{argmax}_{x} \int_{x^{\prime} \in \mathbb{R}^{n \times d}} d x^{\prime} \, \delta \left(x^{\prime} - x\right) \int_{\bar{a} \in \mathbb{R}^{d}} d \bar{a} \, I \left[ \left\| \bar{x}^{\prime} - \bar{a} \right\|_{\Sigma^{-1}}^{2} < \alpha \right] \int_{\theta \in \mathbb{R}^{d}} d \theta \, Q (\theta) \\ \exp \{- \frac{k}{2} (\bar{a} - \theta)^{T} \Sigma^{-1} (\bar{a} - \theta) \} \exp \{- \frac{1}{2} \sum_{j = 1}^{m} (y_{j} - \theta)^{T} \Sigma^{-1} (y_{j} - \theta) \} \\ = \operatorname*{argmax}_{x} \int_{\bar{a} \in \mathbb{R}^{d}} d \bar{a} \, I \left[ \| \bar{x} - \bar{a} \|_{\Sigma^{-1}}^{2} < \alpha \right] \int_{\theta \in \mathbb{R}^{d}} d \theta \, Q (\theta) \\ \exp \{- \frac{k}{2} (\bar{a} - \theta)^{T} \Sigma^{-1} (\bar{a} - \theta) \} \exp \{- \frac{1}{2} \sum_{j = 1}^{m} (y_{j} - \theta)^{T} \Sigma^{-1} (y_{j} - \theta) \} \\ \stackrel{(*)}{=} \operatorname*{argmax}_{x} \int_{\bar{a} \in \mathbb{R}^{d}} d \bar{a} \, I \left[ \| \bar{x} - \bar{a} \|_{\Sigma^{-1}}^{2} < \alpha \right] \int_{\theta \in \mathbb{R}^{d}} d \theta \\ \exp \{- \frac{1}{2} [ k (\bar{a} - \theta)^{T} \Sigma^{-1} (\bar{a} - \theta) + \sum_{j = 1}^{m} (y_{j} - \theta)^{T} \Sigma^{-1} (y_{j} - \theta) ] \} \\ = \operatorname*{argmax}_{x} \int_{\bar{a} \in \mathbb{R}^{d}} d \bar{a} \, \exp \left\{- \frac{1}{2} \left[ k \bar{a}^{T} \Sigma^{-1} \bar{a} \right] \right\} I \left[ \| \bar{x} - \bar{a} \|_{\Sigma^{-1}}^{2} < \alpha \right] \int_{\theta \in \mathbb{R}^{d}} d \theta \\ \exp \left\{- \frac{1}{2} \left[ (m + k) \theta^{T} \Sigma^{-1} \theta - 2 \theta^{T} \Sigma^{-1} \left(m \bar{y} + k \bar{a}\right) \right] \right\} \\ = \operatorname*{argmax}_{x} \int_{\bar{a} \in \mathbb{R}^{d}} d \bar{a} \, \exp \left\{- \frac{1}{2} \left[ k \bar{a}^{T} \Sigma^{-1} \bar{a} - \frac{1}{m + k} (m \bar{y} + k \bar{a})^{T} \Sigma^{-1} (m \bar{y} + k \bar{a}) \right] \right\} \\ I \left[ \| \bar{x} - \bar{a} \|_{\Sigma^{-1}}^{2} < \alpha \right] \int_{\theta \in \mathbb{R}^{d}} d \theta \exp \{- \frac{(m + k)}{2} (\theta - \frac{m \bar{y} + k \bar{a}}{m + k})^{T} \Sigma^{-1} (\theta - \frac{m \bar{y} + k \bar{a}}{m + k}) \} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} = \operatorname*{argmax}_{x} \int_{\bar{a} \in \mathbb{R}^{d}} d \bar{a} \, \exp \{- \frac{1}{2} [ k \bar{a}^{T} \Sigma^{-1} \bar{a} - \frac{1}{m + k} (m \bar{y} + k \bar{a})^{T} \Sigma^{-1} (m \bar{y} + k \bar{a}) ] \} \\ I \left[ \| \bar{x} - \bar{a} \|_{\Sigma^{-1}}^{2} < \alpha \right] \\ = \operatorname*{argmax}_{x} \int_{\bar{a} \in \mathbb{R}^{d}} d \bar{a} \, \exp \left\{- \frac{1}{2} \left[ \frac{m k}{m + k} \bar{a}^{T} \Sigma^{-1} \bar{a} - \frac{2 m k}{m + k} \bar{y}^{T} \Sigma^{-1} \bar{a} + \frac{m k}{m + k} \bar{y}^{T} \Sigma^{-1} \bar{y} \right] \right\} \\ I \left[ \| \bar{x} - \bar{a} \|_{\Sigma^{-1}}^{2} < \alpha \right] \\ = \operatorname*{argmax}_{x} \int_{\bar{a} \in \mathbb{R}^{d}} d \bar{a} \, \exp \left\{- \frac{m k}{2 (m + k)} \left[ (\bar{a} - \bar{y})^{T} \Sigma^{-1} (\bar{a} - \bar{y}) \right] \right\} I \left[ \| \bar{x} - \bar{a} \|_{\Sigma^{-1}}^{2} < \alpha \right] \\ \end{array}
+$$
+
+Where in $(\ast)$ we used the fact that $Q(\theta)$ is the improper uniform prior. Note that the expression depends only on the mean $\bar{x}$ . Therefore, it's sufficient to find a mean vector $\bar{x}$ that maximizes the expression. We substitute the integration variable to $\varphi = \bar{a} - \bar{x}$ and obtain:
+
+$$
+\begin{array}{l} \bar{x}_{\mathcal{G}} \in \operatorname*{argmax}_{\bar{x}} \int_{\{\varphi \in \mathbb{R}^{d}: \varphi^{T} \Sigma^{-1} \varphi < \alpha \}} d \varphi \, \exp \left\{- \frac{m k}{2 (m + k)} \left[ (\varphi + \bar{x} - \bar{y})^{T} \Sigma^{-1} (\varphi + \bar{x} - \bar{y}) \right] \right\} \\ \equiv \operatorname*{argmax}_{\bar{x}} \psi (\bar{y} - \bar{x}) \equiv \operatorname*{argmax}_{\bar{x}} \psi (\mu) \\ \end{array}
+$$
+
+Where $\psi$ is defined as in Lemma F.7 (with $\sigma = \frac{m + k}{mk}$ ), from which it follows that $\psi(\mu)$ is log-concave, and therefore has at most one local extremum which can only be a maximum. Therefore, it is sufficient to show that $\mu = 0$ (i.e., $\bar{x} = \bar{y}$ ) is a local extremum by equating the gradient at the point to $0$ .
+
+$$
+\begin{array}{l} \frac {\partial}{\partial \mu} \psi (\mu) = \frac {\partial}{\partial \mu} \int_ {\{\varphi \in \mathbb {R} ^ {d}: \varphi^ {T} \Sigma^ {- 1} \varphi < \alpha \}} d \varphi \exp \{- \frac {m k}{2 (m + k)} [ (\varphi - \mu) ^ {T} \Sigma^ {- 1} (\varphi - \mu) ] \} \\ = - \frac {m k}{2 (m + k)} \int_ {\{\varphi \in \mathbb {R} ^ {d}: \varphi^ {T} \Sigma^ {- 1} \varphi < \alpha \}} d \varphi \exp \{- \frac {m k}{2 (m + k)} [ (\varphi - \mu) ^ {T} \Sigma^ {- 1} (\varphi - \mu) ] \} \\ \frac {\partial}{\partial \mu} (\varphi - \mu) ^ {T} \Sigma^ {- 1} (\varphi - \mu) \\ = - \frac {m k}{(m + k)} \int_ {\left\{\varphi \in \mathbb {R} ^ {d}: \varphi^ {T} \Sigma^ {- 1} \varphi < \alpha \right\}} d \varphi \exp \left\{- \frac {m k}{2 (m + k)} \left[ (\varphi - \mu) ^ {T} \Sigma^ {- 1} (\varphi - \mu) \right] \right\} \Sigma^ {- 1} (\mu - \varphi) \\ \end{array}
+$$
+
+Therefore:
+
+$$
+\frac {\partial}{\partial \mu} \psi (\mu) | _ {\mu = 0} = \frac {m k}{(m + k)} \int_ {\{\varphi \in \mathbb {R} ^ {d}: \varphi^ {T} \Sigma^ {- 1} \varphi < \alpha \}} d \varphi \exp \left\{- \frac {m k}{2 (m + k)} [ \varphi^ {T} \Sigma^ {- 1} \varphi ] \right\} \Sigma^ {- 1} \varphi
+$$
+
+Note that since the domain of integration is symmetric about the origin with respect to negation and the integrand is odd with respect to the integration variable, the integral is equal to zero. I.e., $\frac{\partial}{\partial\mu}\psi (\mu)|_{\mu = 0} = 0$ . Therefore, $\bar{x} = \bar{y}$ $(\mu = 0)$ achieves the global maximum, and any attacker that satisfies the condition: $\bar{x} = \bar{y}$ for any leaked sample $y\in \mathbb{R}^{m\times d}$ and attacker generated sample $x\in \{\mathbb{R}^{n\times d}:g_{X|Y}(x|y) > 0\}$ satisfies:
+
+$$
+\mathcal {G} \in \operatorname * {a r g m i n} _ {\mathcal {G} ^ {\prime} \in \mathbb {G}} V (\mathcal {D} _ {\alpha}, \mathcal {G} ^ {\prime}) \quad \forall \alpha \in \mathbb {R} _ {+}
+$$
+
+As required.
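For intuition, the optimality of $\mu = 0$ can be checked numerically in a one-dimensional instance ($\Sigma = 1$, $d = 1$), where $\psi$ reduces to a Gaussian integral over an interval and can be written with the error function. A minimal sketch (the names `psi`, `alpha`, `c` are illustrative, not from the paper; `c` plays the role of $\frac{mk}{m+k}$):

```python
import math

def psi(mu, alpha=1.0, c=1.0):
    # 1-d instance of psi: integral of exp(-c (phi - mu)^2 / 2)
    # over the interval |phi| < sqrt(alpha), via the error function.
    s = math.sqrt(c / 2)
    r = math.sqrt(alpha)
    return math.sqrt(math.pi) / (2 * s) * (math.erf(s * (r - mu)) - math.erf(s * (-r - mu)))

# psi is symmetric and peaks at mu = 0, matching the gradient argument.
assert psi(0.0) > psi(0.4) and psi(0.0) > psi(-0.4)
assert abs(psi(0.4) - psi(-0.4)) < 1e-12
```

The same symmetry argument holds in any dimension, since the integration domain is an origin-centered ellipsoid.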
+
+Corollary F.10. Consider an authenticator $\mathcal{D}_{\alpha}$ , as defined in Def. F.2. Then the attacker $\mathcal{G}^*$ , defined in Def. F.1, is a best response, i.e.:
+
+$$
+\mathcal{G}^{*}\in \operatorname *{argmin}_{\mathcal{G}^{\prime}\in \mathbb{G}}V(\mathcal{D}_{\alpha},\mathcal{G}^{\prime})\quad \forall \alpha \in \mathbb{R}_{+}
+$$
+
+Proof. Directly from Lemma F.9.
+
+Theorem F.11. The game value is:
+
+$$
+\max _ {\mathcal {D}} \min _ {\mathcal {G}} V (\mathcal {D}, \mathcal {G}) = \min _ {\mathcal {G}} \max _ {\mathcal {D}} V (\mathcal {D}, \mathcal {G}) = V (\mathcal {D} ^ {*}, \mathcal {G} ^ {*}) =
+$$
+
+$$
+\frac {1}{2} + \frac {1}{2 \Gamma (\frac {d}{2})} \left[ \gamma \left(\frac {d}{2}, \frac {d n (m + k)}{2 k (n - m)} \log \frac {n (m + k)}{m (n + k)}\right) - \gamma \left(\frac {d}{2}, \frac {d m (n + k)}{2 k (n - m)} \log \frac {n (m + k)}{m (n + k)}\right) \right]
+$$
+
+Where $\gamma$ is the lower incomplete gamma function.
+
+Proof. From the max-min inequality we have:
+
+$$
+\max _ {\mathcal {D}} \min _ {\mathcal {G}} V (\mathcal {D}, \mathcal {G}) \leqslant \min _ {\mathcal {G}} \max _ {\mathcal {D}} V (\mathcal {D}, \mathcal {G})
+$$
+
+On the other hand, using Lemma F.8 and Corollary F.10 we have:
+
+$$
+\max_{\mathcal{D}} \min_{\mathcal{G}} V (\mathcal{D}, \mathcal{G}) \geqslant \min_{\mathcal{G}} V (\mathcal{D}^{*}, \mathcal{G}) \stackrel{\text{F.10}}{=} V (\mathcal{D}^{*}, \mathcal{G}^{*}) \stackrel{\text{F.8}}{=} \max_{\mathcal{D}} V (\mathcal{D}, \mathcal{G}^{*}) \geqslant \min_{\mathcal{G}} \max_{\mathcal{D}} V (\mathcal{D}, \mathcal{G})
+$$
+
+Therefore:
+
+$$
+\max _ {\mathcal {D}} \min _ {\mathcal {G}} V (\mathcal {D}, \mathcal {G}) = \min _ {\mathcal {G}} \max _ {\mathcal {D}} V (\mathcal {D}, \mathcal {G}) = V (\mathcal {D} ^ {*}, \mathcal {G} ^ {*})
+$$
+
+The game value is given by:
+
+$$
+\begin{array}{l} V \left(\mathcal{D}^{*}, \mathcal{G}^{*}\right) = \mathbb{E}_{\Theta \sim Q} V \left(\Theta, \mathcal{D}^{*}, \mathcal{G}^{*}\right) \\ = \frac{1}{2} \mathbb{E}_{\Theta \sim Q} \mathbb{E}_{A \sim f_{\Theta}^{(k)}} \mathbb{E}_{Y \sim f_{\Theta}^{(m)}} \left[ \mathbb{E}_{X \sim f_{\Theta}^{(n)}} [ \mathcal{D}^{*} (A, X) ] + \mathbb{E}_{X \sim g_{X | Y}^{*} (\cdot | Y)} [ 1 - \mathcal{D}^{*} (A, X) ] \right] \\ = \frac{1}{2} \mathbb{E}_{\Theta \sim Q} \mathbb{E}_{A \sim f_{\Theta}^{(k)}} \mathbb{E}_{Y \sim f_{\Theta}^{(m)}} \\ \left[ \mathbb{E}_{X \sim f_{\Theta}^{(n)}} \left[ I \left[ \| \bar{X} - \bar{A} \|_{\Sigma^{-1}}^{2} < \alpha^{*} \right] \right] + \mathbb{E}_{X \sim g_{X | Y}^{*}} \left[ 1 - I \left[ \| \bar{X} - \bar{A} \|_{\Sigma^{-1}}^{2} < \alpha^{*} \right] \right] \right] \\ = \frac{1}{2} + \frac{1}{2} \mathbb{E}_{\Theta \sim Q} \mathbb{E}_{A \sim f_{\Theta}^{(k)}} \mathbb{E}_{X \sim f_{\Theta}^{(n)}} \left[ I [ \| \bar{X} - \bar{A} \|_{\Sigma^{-1}}^{2} < \alpha^{*} ] \right] - \\ \frac{1}{2} \mathbb{E}_{\Theta \sim Q} \mathbb{E}_{A \sim f_{\Theta}^{(k)}} \mathbb{E}_{X \sim g_{X | \Theta}^{*} (\cdot | \Theta)} \left[ I \left[ \| \bar{X} - \bar{A} \|_{\Sigma^{-1}}^{2} < \alpha^{*} \right] \right] \\ \end{array}
+$$
+
+Observing the first term we have:
+
+$$
+\begin{array}{l} \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {X \sim f _ {\Theta} ^ {(n)}} \left[ I [ \| \bar {X} - \bar {A} \| _ {\Sigma^ {- 1}} ^ {2} < \alpha^ {*} ] \right] \\ = \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {X \sim f _ {\Theta} ^ {(n)}} \left[ I \left[ (\bar {X} - \bar {A}) ^ {T} \Sigma^ {- 1} (\bar {X} - \bar {A}) < \alpha^ {*} \right] \right] \\ = \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {X \sim f _ {\Theta} ^ {(n)}} \left[ I \left[ (\bar {X} - \bar {A}) ^ {T} \left(C C ^ {T}\right) ^ {- 1} (\bar {X} - \bar {A}) < \alpha^ {*} \right] \right] \\ = \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {X \sim f _ {\Theta} ^ {(n)}} \left[ I \left[ (\bar {X} - \bar {A}) ^ {T} C ^ {- T} C ^ {- 1} (\bar {X} - \bar {A}) < \alpha^ {*} \right] \right] \\ = \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {X \sim f _ {\Theta} ^ {(n)}} \left[ I \left[ \left(C ^ {- 1} (\bar {X} - \bar {A})\right) ^ {T} \left(C ^ {- 1} (\bar {X} - \bar {A})\right) < \alpha^ {*} \right] \right] \\ \equiv \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {X \sim f _ {\Theta} ^ {(n)}} \left[ I \left[ Z ^ {T} Z < \alpha^ {*} \right] \right] \\ = (\ast) \\ \end{array}
+$$
+
+Observe that
+
+$$
+\begin{array}{l} Z = C ^ {- 1} (\bar {X} - \bar {A}) = C ^ {- 1} [ (\bar {X} - \Theta) - (\bar {A} - \Theta) ] \\ = C ^ {- 1} \left[ I _ {d}, - I _ {d} \right] \left[ \begin{array}{c} \bar {X} - \Theta \\ \bar {A} - \Theta \end{array} \right] = \left[ C ^ {- 1}, - C ^ {- 1} \right] \left[ \begin{array}{c} \bar {X} - \Theta \\ \bar {A} - \Theta \end{array} \right] \\ \end{array}
+$$
+
+Note that:
+
+$$
+\left[ \begin{array}{c} \bar {X} - \Theta \\ \bar {A} - \Theta \end{array} \right] \sim \mathcal {N} (0 _ {2 d}, \left[ \begin{array}{c c} \frac {1}{n} \Sigma & 0 _ {d \times d} \\ 0 _ {d \times d} & \frac {1}{k} \Sigma \end{array} \right])
+$$
+
+Therefore:
+
+$$
+\begin{array}{l} Z \sim \mathcal {N} (0 _ {d}, [ C ^ {- 1}, - C ^ {- 1} ] \left[ \begin{array}{c c} \frac {1}{n} \Sigma & 0 _ {d \times d} \\ 0 _ {d \times d} & \frac {1}{k} \Sigma \end{array} \right] \left[ \begin{array}{c} C ^ {- T} \\ - C ^ {- T} \end{array} \right]) \\ = \mathcal {N} \left(0 _ {d}, \left(\frac {1}{n} + \frac {1}{k}\right) C ^ {- 1} \Sigma C ^ {- T}\right) \\ = \mathcal {N} \left(0 _ {d}, \frac {n + k}{n k} C ^ {- 1} C C ^ {T} C ^ {- T}\right) \\ = \mathcal {N} (0 _ {d}, \frac {n + k}{n k} I _ {d}) \\ \end{array}
+$$
+
+We denote $\tilde{Z} = \sqrt{\frac{nk}{n + k}} Z \sim \mathcal{N}(0_d, I_d)$ , and thus $\tilde{Z}_1, \ldots, \tilde{Z}_d$ are independent standard normal random variables and $\tilde{Z}^T\tilde{Z} \sim \chi^2(d)$ . Therefore, $Z^T Z = \frac{n + k}{nk}\tilde{Z}^T\tilde{Z} \sim \Gamma\left(k = \frac{d}{2}, \theta = 2\frac{n + k}{nk}\right)$ and we have:
+
+$$
+\begin{array}{l} (*) = \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {X \sim f _ {\Theta} ^ {(n)}} \left[ I \left[ Z ^ {T} Z < \alpha^ {*} \right] \right] \\ = \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {Z ^ {T} Z \sim \Gamma (k = \frac {d}{2}, \theta = 2 \frac {n + k}{n k})} \left[ I \left[ Z ^ {T} Z < \alpha^ {*} \right] \right] \\ \stackrel {(i)} {=} \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \frac {1}{\Gamma (\frac {d}{2})} \gamma \left(\frac {d}{2}, \frac {n k \alpha^ {*}}{2 (n + k)}\right) \\ = \frac {1}{2} \frac {1}{\Gamma (\frac {d}{2})} \gamma \left(\frac {d}{2}, \frac {n k \alpha^ {*}}{2 (n + k)}\right) \\ \end{array}
+$$
+
+Where in (i) we used the CDF of the Gamma distribution, in which $\gamma$ is the lower incomplete gamma function.
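The distributional claim $Z^{T}Z \sim \Gamma\left(\frac{d}{2}, 2\frac{n+k}{nk}\right)$ can be sanity-checked by Monte Carlo. A minimal sketch, assuming numpy (the variable names and constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 5, 20, 10
trials = 200_000
# Z ~ N(0, (n+k)/(nk) I_d), so Z^T Z should follow
# Gamma(shape = d/2, scale = 2(n+k)/(nk)), with mean d(n+k)/(nk).
Z = rng.normal(scale=np.sqrt((n + k) / (n * k)), size=(trials, d))
q = (Z ** 2).sum(axis=1)
expected_mean = d * (n + k) / (n * k)  # = 0.75 for these constants
```

The empirical mean of `q` should match `expected_mean` up to Monte Carlo noise.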
+
+Similarly, observing the second term we have:
+
+$$
+\begin{array}{l} \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {X \sim g _ {X | \Theta} ^ {*} (\cdot | \Theta)} \left[ I \left[ \| \bar {X} - \bar {A} \| _ {\Sigma^ {- 1}} ^ {2} < \alpha^ {*} \right] \right] \\ \equiv \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {X \sim g _ {X | \Theta} ^ {*} (\cdot | \Theta)} \left[ I \left[ V ^ {T} V < \alpha^ {*} \right] \right] \\ = (* *) \\ \end{array}
+$$
+
+Where:
+
+$$
+\begin{array}{l} V = C ^ {- 1} (\bar {X} - \bar {A}) = C ^ {- 1} [ (\bar {X} - \Theta) - (\bar {A} - \Theta) ] \\ = C ^ {- 1} [ I _ {d}, - I _ {d} ] \left[ \begin{array}{c} \bar {X} - \Theta \\ \bar {A} - \Theta \end{array} \right] = [ C ^ {- 1}, - C ^ {- 1} ] \left[ \begin{array}{c} \bar {X} - \Theta \\ \bar {A} - \Theta \end{array} \right] \\ \end{array}
+$$
+
+Using the definition of $\mathcal{G}^*$ (Definition F.1) and Lemma F.3 we have:
+
+$$
+\left[ \begin{array}{c} \bar {X} - \Theta \\ \bar {A} - \Theta \end{array} \right] \sim \mathcal {N} (0 _ {2 d}, \left[ \begin{array}{c c} \frac {1}{m} \Sigma & 0 _ {d \times d} \\ 0 _ {d \times d} & \frac {1}{k} \Sigma \end{array} \right])
+$$
+
+Therefore:
+
+$$
+V \sim \mathcal {N} (0 _ {d}, [ C ^ {- 1}, - C ^ {- 1} ] \left[ \begin{array}{c c} \frac {1}{m} \Sigma & 0 _ {d \times d} \\ 0 _ {d \times d} & \frac {1}{k} \Sigma \end{array} \right] \left[ \begin{array}{c} C ^ {- T} \\ - C ^ {- T} \end{array} \right]) = \mathcal {N} (0 _ {d}, \frac {m + k}{m k} I _ {d})
+$$
+
+And similarly to the first term, we get:
+
+$$
+V ^ {T} V \sim \Gamma (k = \frac {d}{2}, \theta = 2 \frac {m + k}{m k})
+$$
+
+And thus:
+
+$$
+(**) = \frac {1}{2} \frac {1}{\Gamma (\frac {d}{2})} \gamma (\frac {d}{2}, \frac {m k \alpha^ {*}}{2 (m + k)})
+$$
+
+Therefore, the game value is given by:
+
+$$
+\begin{array}{l} V (\mathcal {D} ^ {*}, \mathcal {G} ^ {*}) = \frac {1}{2} + \frac {1}{2} \frac {1}{\Gamma (\frac {d}{2})} [ \gamma (\frac {d}{2}, \frac {n k \alpha^ {*}}{2 (n + k)}) - \gamma (\frac {d}{2}, \frac {m k \alpha^ {*}}{2 (m + k)}) ] \\ = \frac {1}{2} + \frac {1}{2} \frac {1}{\Gamma (\frac {d}{2})} [ \gamma (\frac {d}{2}, \frac {d n (m + k)}{2 k (n - m)} \log \frac {n (m + k)}{m (n + k)}) - \gamma (\frac {d}{2}, \frac {d m (n + k)}{2 k (n - m)} \log \frac {n (m + k)}{m (n + k)}) ] \\ \end{array}
+$$
+
+As required.
+
+Finally, we prove Theorem 4.2 and Corollary 4.3.
+
+Theorem 4.2. Define $\delta = m / n \leqslant 1$ and let $\rho = m / k$. Consider the attacker $\mathcal{G}^*$ defined by the following generative process: Given a leaked sample $Y \in \mathbb{R}^{m \times d}$ , $\mathcal{G}^*$ generates a sample $X \in \mathbb{R}^{n \times d}$ as follows: it first samples $n$ vectors $W_1, \ldots, W_n \stackrel{\text{iid}}{\sim} \mathcal{N}(0, \Sigma)$ and then sets $X_i = W_i - \bar{W} + \bar{Y}$. Define the authenticator $\mathcal{D}^*$ by:
+
+$$
+\mathcal {D} ^ {*} (a, x) = I \left[ \| \bar {x} - \bar {a} \| _ {\Sigma^ {- 1}} ^ {2} < \frac {d (1 + \rho) (1 + \rho \delta^ {- 1})}{n (1 - \delta)} \log \left(\frac {\rho + 1}{\rho + \delta}\right) \right] \tag {F.5}
+$$
+
+Then $(\mathcal{D}^*,\mathcal{G}^*)$ is a solution of Eq. 2.2 that satisfies Eq. 4.1.
+
+Proof. Directly from Lemma F.8, Corollary F.10, and Theorem F.11 by assigning $\delta = \frac{m}{n}, \rho = \frac{m}{k}$ .
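The generative process of $\mathcal{G}^*$ in Theorem 4.2 is straightforward to simulate; by construction the attack sample's mean coincides exactly with the leaked sample's mean. A minimal sketch, assuming numpy (the covariance and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 8, 30, 4
Sigma = np.eye(d)  # illustrative known covariance
Y = rng.multivariate_normal(np.zeros(d), Sigma, size=m)  # leaked sample
W = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
X = W - W.mean(axis=0) + Y.mean(axis=0)  # X_i = W_i - Wbar + Ybar
# The attack sample matches the leaked mean exactly, as Lemma F.9 requires.
```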
+
+Corollary 4.3. Define $\delta$ and $\rho$ as in Theorem 4.2. Then the game value for the Gaussian case is:
+
+$$
+\frac {1}{2} + \frac {1}{2 \Gamma \left(\frac {d}{2}\right)} \left[ \gamma \left(\frac {d}{2}, \frac {d (1 + \rho)}{2 (1 - \delta)} \log \frac {1 + \rho}{\delta + \rho}\right) - \gamma \left(\frac {d}{2}, \frac {d (\delta + \rho)}{2 (1 - \delta)} \log \frac {1 + \rho}{\delta + \rho}\right) \right] \tag {F.6}
+$$
+
+Where $\gamma$ is the lower incomplete Gamma function, and $\Gamma$ is the Gamma function.
+
+Proof. Directly from Theorem F.11 by assigning $\delta = \frac{m}{n}, \rho = \frac{m}{k}$ .
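Eq. F.6 can be evaluated directly. The sketch below (assuming only the Python standard library; the function names are illustrative) implements the regularized lower incomplete gamma function $P(a,x) = \gamma(a,x)/\Gamma(a)$ via its power series and then the game value:

```python
import math

def reg_lower_gamma(a, x, terms=300):
    # Regularized lower incomplete gamma P(a, x) = gamma(a, x) / Gamma(a),
    # from the series gamma(a, x) = x^a e^{-x} sum_n x^n / (a (a+1) ... (a+n)).
    if x <= 0:
        return 0.0
    total, term = 0.0, 1.0 / a
    for i in range(1, terms):
        total += term
        term *= x / (a + i)
    return total * math.exp(a * math.log(x) - x) / math.gamma(a)

def game_value(d, rho, delta):
    # Eq. F.6: 1/2 + [P(d/2, t1) - P(d/2, t2)] / 2.
    log_term = math.log((1 + rho) / (delta + rho))
    t1 = d * (1 + rho) / (2 * (1 - delta)) * log_term
    t2 = d * (delta + rho) / (2 * (1 - delta)) * log_term
    return 0.5 + 0.5 * (reg_lower_gamma(d / 2, t1) - reg_lower_gamma(d / 2, t2))
```

As $\delta \to 1$ the two gamma arguments coincide and the value tends to $\frac{1}{2}$: when the leaked sample is almost as large as the test sample, authentication approaches a coin flip.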
+
+
+
+# F.4 GAME VALUE FOR A MAXIMUM LIKELIHOOD ATTACKER
+
+In this section, we consider the most intuitive attacker strategy, which one could naively see as optimal. However, we show that this intuitive "optimal attacker" is sub-optimal as can be seen in Fig. 1c in the main paper. We consider an attacker that draws the attack sample from a Gaussian distribution with the maximum likelihood estimate of the mean and the known covariance. We denote this attacker by the name ML attacker. We find the best response authenticator to this attacker and the associated game value. Fig. 6 visualizes the difference in theoretical game value between the ML attacker (see Definition F.12) and the optimal attacker (see Definition F.1) for different values of $d$ (the dimension of observations), and demonstrates that the ML attacker is indeed sub-optimal.
+
+
+Figure 6: The difference in game value (expected authentication accuracy of the optimal authenticator) between the ML attacker and the optimal attacker for different values of the observations' dimension $d$ , as a function of the parameters $\rho = \frac{m}{k}, \delta = \frac{m}{n}$ . Namely: $\max_{\mathcal{D}}\{V(\mathcal{D},\mathcal{G}_{ML})\} - \max_{\mathcal{D}}\{V(\mathcal{D},\mathcal{G}^{*})\}$ . (a) Difference in game value for $d = 10$ . (b) Difference in game value for $d = 100$ . (c) Difference in game value for $d = 1000$ .
+
+Definition F.12. Let $\mathcal{G}_{ML}$ denote an attacker defined by the following generative process: Given a leaked sample $Y\in \mathbb{R}^{m\times d}$ , $\mathcal{G}_{ML}$ generates an attack sample $X\stackrel {iid}{\sim}\mathcal{N}(\bar{Y},\Sigma)$
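For comparison with $\mathcal{G}^*$, the ML attacker is even simpler to simulate (a sketch assuming numpy; sizes and covariance are illustrative): it centers a fresh iid sample at the maximum likelihood estimate $\bar{Y}$, so its sample mean only approximately matches $\bar{Y}$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 8, 30, 4
Sigma = np.eye(d)  # illustrative known covariance
Y = rng.multivariate_normal(np.zeros(d), Sigma, size=m)  # leaked sample
# ML attacker: n fresh iid draws centred at the leaked sample mean.
X = rng.multivariate_normal(Y.mean(axis=0), Sigma, size=n)
```

This extra fluctuation of $\bar{X}$ around $\bar{Y}$ is exactly what the best-response authenticator of Lemma F.15 exploits.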
+
+Lemma F.13. Let $\theta \in \mathbb{R}^d$ , $\Sigma \in \mathbb{R}^{d\times d}$ represent the mean and covariance of a Gaussian distribution. Let $X\in \mathbb{R}^{n\times d}$ be a random sample generated by the attacker defined in Def. F.12. Then:
+
+$$
+X_{c} \sim \mathcal{N} (\theta_{c, n}, \operatorname{diag} (\Sigma, n) + \operatorname{rep} (\tfrac{1}{m} \Sigma, n)) \equiv \mathcal{N} (\theta_{c, n}, \Psi_{M L})
+$$
+
+Proof. Let $W_{1}, \ldots, W_{n} \stackrel{\mathrm{iid}}{\sim} \mathcal{N}(0, \Sigma)$ , observe that $X_{i} = \bar{Y} + W_{i} \quad \forall i \in [n]$ , and thus:
+
+$$
+X _ {c} = \bar {Y} _ {c, n} + W _ {c}
+$$
+
+Where $W_{c} \sim \mathcal{N}(0_{nd}, \text{diag}(\Sigma, n))$ . Using Lemma F.3 we have $\bar{Y} \sim \mathcal{N}(\theta, \frac{1}{m}\Sigma)$ and using Lemma F.4 we have $\bar{Y}_{c,n} \sim \mathcal{N}(\theta_{c,n}, \text{rep}(\frac{1}{m}\Sigma, n))$ .
+
+Let $Z = \left[ \begin{array}{c} W_{c} \\ \bar{Y}_{c,n} \end{array} \right]$ and $B = \left[ \begin{array}{cc} I_{nd\times nd} & I_{nd\times nd} \end{array} \right]$ , then $X_{c} = W_{c} + \bar{Y}_{c,n} = BZ$ . Note that:
+
+$$
+Z \sim \mathcal{N} (\left[ \begin{array}{c} 0_{n d} \\ \theta_{c, n} \end{array} \right], \left[ \begin{array}{c c} \operatorname{diag} (\Sigma, n) & 0 \\ 0 & \operatorname{rep} (\frac{1}{m} \Sigma, n) \end{array} \right])
+$$
+
+and therefore we have:
+
+$$
+\begin{array}{l} X_{c} \sim \mathcal{N} (B \left[ \begin{array}{c} 0_{n d} \\ \theta_{c, n} \end{array} \right], B \left[ \begin{array}{c c} \operatorname{diag} (\Sigma, n) & 0 \\ 0 & \operatorname{rep} (\frac{1}{m} \Sigma, n) \end{array} \right] B^{T}) \\ = \mathcal{N} (\theta_{c, n}, \operatorname{diag} (\Sigma, n) + \operatorname{rep} \left(\frac{1}{m} \Sigma, n\right)) \\ \end{array}
+$$
+
+As required.
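Lemma F.13's covariance structure can be verified empirically. Note that $\operatorname{diag}(\Sigma, n)$ and $\operatorname{rep}(\Sigma, n)$ are the Kronecker products $I_n \otimes \Sigma$ and $\mathbf{1}_{n\times n} \otimes \Sigma$. A Monte Carlo sketch, assuming numpy (sizes and covariance are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, d = 5, 3, 2
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
trials = 300_000
# Ybar ~ N(0, Sigma/m) is the leaked-sample mean; X_i = Ybar + W_i.
Ybar = rng.multivariate_normal(np.zeros(d), Sigma / m, size=trials)
W = rng.multivariate_normal(np.zeros(d), Sigma, size=(trials, n))
Xc = (Ybar[:, None, :] + W).reshape(trials, n * d)  # concatenated sample
# Psi_ML = diag(Sigma, n) + rep(Sigma/m, n), as Kronecker products:
Psi = np.kron(np.eye(n), Sigma) + np.kron(np.ones((n, n)), Sigma / m)
```

The empirical covariance of `Xc` should match `Psi` up to sampling noise.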
+
+
+
+Lemma F.14. Let $\Sigma = CC^T \in \mathbb{R}^{d \times d}$ represent the covariance of a Gaussian distribution, and consider the following covariance matrix: $\Psi_{ML} = \text{diag}(\Sigma, n) + \text{rep}(\frac{1}{m}\Sigma, n)$ . Then:
+
+$$
+\Psi_{M L}^{-1} = \operatorname{diag} (\Sigma^{-1}, n) - \frac{1}{n + m} \operatorname{rep} (\Sigma^{-1}, n)
+$$
+
+and the determinant is:
+
+$$
+\det (\Psi_{M L}) = \det (\Sigma)^{n} \left(\frac{n + m}{m}\right)^{d}
+$$
+
+Proof. To find the inverse of $\Psi_{ML}$ we first define:
+
+$$
+U = \left[ \begin{array}{c} \Sigma \\ \vdots \\ \Sigma \end{array} \right] \in \mathbb {R} ^ {n d \times d}, \quad V = \frac {1}{m} \left[ \begin{array}{c} I _ {d} \\ \vdots \\ I _ {d} \end{array} \right] \in \mathbb {R} ^ {n d \times d} \tag {F.7}
+$$
+
+Therefore we have:
+
+$$
+\begin{array}{l} \Psi_{M L}^{-1} = \left(\operatorname{diag} (\Sigma, n) + \operatorname{rep} \left(\frac{1}{m} \Sigma, n\right)\right)^{-1} \\ = \left(\operatorname{diag} (\Sigma, n) + U V^{T}\right)^{-1} \\ \stackrel{(i)}{=} \operatorname{diag} (\Sigma, n)^{-1} - \operatorname{diag} (\Sigma, n)^{-1} U \left(I_{d} + V^{T} \operatorname{diag} (\Sigma, n)^{-1} U\right)^{-1} V^{T} \operatorname{diag} (\Sigma, n)^{-1} \\ \stackrel{(ii)}{=} \operatorname{diag} (\Sigma^{-1}, n) - \operatorname{diag} (\Sigma^{-1}, n) U (I_{d} + V^{T} \operatorname{diag} (\Sigma^{-1}, n) U)^{-1} V^{T} \operatorname{diag} (\Sigma^{-1}, n) \\ = \operatorname{diag} (\Sigma^{-1}, n) - \left[ \begin{array}{c} I_{d} \\ \vdots \\ I_{d} \end{array} \right] (I_{d} + \frac{1}{m} [ I_{d} \quad \dots \quad I_{d} ] \left[ \begin{array}{c} I_{d} \\ \vdots \\ I_{d} \end{array} \right])^{-1} \frac{1}{m} \left[ \begin{array}{c c c} \Sigma^{-1} & \dots & \Sigma^{-1} \end{array} \right] \\ = \operatorname{diag} (\Sigma^{-1}, n) - \left[ \begin{array}{c} I_{d} \\ \vdots \\ I_{d} \end{array} \right] (I_{d} + \frac{n}{m} I_{d})^{-1} \frac{1}{m} \left[ \begin{array}{c c c} \Sigma^{-1} & \dots & \Sigma^{-1} \end{array} \right] \\ = \operatorname{diag} (\Sigma^{-1}, n) - \left[ \begin{array}{c} I_{d} \\ \vdots \\ I_{d} \end{array} \right] \left(\left(\frac{n + m}{m}\right) I_{d}\right)^{-1} \frac{1}{m} \left[ \begin{array}{c c c} \Sigma^{-1} & \dots & \Sigma^{-1} \end{array} \right] \\ = \operatorname{diag} (\Sigma^{-1}, n) - \frac{1}{n + m} \left[ \begin{array}{c} I_{d} \\ \vdots \\ I_{d} \end{array} \right] I_{d} \left[ \begin{array}{c c c} \Sigma^{-1} & \dots & \Sigma^{-1} \end{array} \right] \\ = \operatorname{diag} (\Sigma^{-1}, n) - \frac{1}{n + m} \operatorname{rep} (\Sigma^{-1}, n) \\ \end{array}
+$$
+
+As required. Where in $(i)$ we used the Woodbury matrix identity, and in $(ii)$ we used the inverse of a diagonal block matrix.
+
+Next, we turn to find the determinant of $\Psi_{ML}$ :
+
+$$
+\begin{array}{l} \det (\Psi_{M L}) = \det (\operatorname{diag} (\Sigma, n) + \operatorname{rep} (\tfrac{1}{m} \Sigma, n)) \\ = \det (\operatorname{diag} (\Sigma, n) + U V^{T}) \\ \stackrel{(iii)}{=} \det (\operatorname{diag} (\Sigma, n)) \det (I_{d} + V^{T} \operatorname{diag} (\Sigma, n)^{-1} U) \\ \stackrel{(iv)}{=} \det (\Sigma)^{n} \det \left(I_{d} + V^{T} \operatorname{diag} (\Sigma^{-1}, n) U\right) \\ = \det (\Sigma)^{n} \det (I_{d} + \frac{1}{m} \left[ I_{d} \quad \dots \quad I_{d} \right] \operatorname{diag} (\Sigma^{-1}, n) \left[ \begin{array}{c} \Sigma \\ \vdots \\ \Sigma \end{array} \right]) \\ = \det (\Sigma)^{n} \det \left(\left(\frac{n + m}{m}\right) I_{d}\right) \\ = \det (\Sigma)^{n} \left(\frac{n + m}{m}\right)^{d} \\ \end{array}
+$$
+
+Where in (iii) we used the matrix determinant lemma, and in (iv) we used the determinant of a diagonal block matrix.
+
+Lemma F.15. Consider the attacker $\mathcal{G}_{ML}$ , defined in F.12. The best response strategy for the authenticator against this attacker is:
+
+$$
+\mathcal {D} _ {M L} (a, x) = I \left[ \| \bar {x} - \bar {a} \| _ {\Sigma^ {- 1}} ^ {2} < \alpha_ {M L} \right]
+$$
+
+Where:
+
+$$
+\alpha_ {M L} = \frac {d (n + k) (n m + n k + m k)}{k ^ {2} n ^ {2}} \log (\frac {n m + n k + m k}{m (n + k)})
+$$
+
+Proof. The best response authenticator satisfies:
+
+$$
+\begin{array}{l} \mathcal{D}_{M L} \in \operatorname*{argmax}_{\mathcal{D} \in \mathbb{D}} V (\mathcal{D}, \mathcal{G}_{M L}) \\ = \operatorname*{argmax}_{\mathcal{D} \in \mathbb{D}} \frac{1}{2} \mathbb{E}_{\Theta \sim Q} \mathbb{E}_{A \sim f_{\Theta}^{(k)}} \mathbb{E}_{Y \sim f_{\Theta}^{(m)}} \left[ \mathbb{E}_{X \sim f_{\Theta}^{(n)}} [ \mathcal{D} (A, X) ] + \mathbb{E}_{X \sim g_{X | Y}^{M L} (\cdot | Y)} [ 1 - \mathcal{D} (A, X) ] \right] \\ = \operatorname*{argmax}_{\mathcal{D} \in \mathbb{D}} \mathbb{E}_{\Theta \sim Q} \mathbb{E}_{A \sim f_{\Theta}^{(k)}} \left[ \mathbb{E}_{X \sim f_{\Theta}^{(n)}} [ \mathcal{D} (A, X) ] + \mathbb{E}_{X \sim g_{X | \Theta}^{M L} (\cdot | \Theta)} [ 1 - \mathcal{D} (A, X) ] \right] \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \stackrel{\text{Lemma F.13}}{=} \operatorname*{argmax}_{\mathcal{D} \in \mathbb{D}} \mathbb{E}_{\Theta \sim Q} \mathbb{E}_{A \sim \mathcal{N} (\Theta_{c, k}, \operatorname{diag} (\Sigma, k))} \\ \left[ \mathbb{E}_{X \sim \mathcal{N} (\Theta_{c, n}, \operatorname{diag} (\Sigma, n))} [ \mathcal{D} (A, X) ] + \mathbb{E}_{X \sim \mathcal{N} (\Theta_{c, n}, \Psi_{M L})} [ 1 - \mathcal{D} (A, X) ] \right] \\ = \operatorname*{argmax}_{\mathcal{D} \in \mathbb{D}} \mathbb{E}_{\Theta \sim Q} \int_{a \in \mathbb{R}^{k d}} d a \int_{x \in \mathbb{R}^{n d}} d x \exp \left\{- \frac{1}{2} (a - \Theta_{c, k})^{T} \operatorname{diag} (\Sigma, k)^{-1} (a - \Theta_{c, k}) \right\} \\ \left[ \sqrt{\frac{1}{| \det (\operatorname{diag} (\Sigma, n)) |}} \exp \left\{- \frac{1}{2} \left(x - \Theta_{c, n}\right)^{T} \operatorname{diag} (\Sigma, n)^{-1} \left(x - \Theta_{c, n}\right) \right\} \mathcal{D} (a, x) + \right. \\ \left. \sqrt{\frac{1}{| \det (\Psi_{M L}) |}} \exp \left\{- \frac{1}{2} (x - \Theta_{c, n})^{T} \Psi_{M L}^{-1} (x - \Theta_{c, n}) \right\} [ 1 - \mathcal{D} (a, x) ] \right] \\ \stackrel{\text{Lemma F.14}}{=} \operatorname*{argmax}_{\mathcal{D} \in \mathbb{D}} \mathbb{E}_{\Theta \sim Q} \int_{a \in \mathbb{R}^{k d}} d a \int_{x \in \mathbb{R}^{n d}} d x \exp \left\{- \frac{1}{2} (a - \Theta_{c, k})^{T} \operatorname{diag} (\Sigma, k)^{-1} (a - \Theta_{c, k}) \right\} \\ \left[ \sqrt{\left(\frac{n + m}{m}\right)^{d}} \exp \left\{- \frac{1}{2} (x - \Theta_{c, n})^{T} \operatorname{diag} (\Sigma, n)^{-1} (x - \Theta_{c, n}) \right\} \mathcal{D} (a, x) + \right. \\ \left. \exp \left\{- \frac{1}{2} (x - \Theta_{c, n})^{T} \Psi_{M L}^{-1} (x - \Theta_{c, n}) \right\} [ 1 - \mathcal{D} (a, x) ] \right] \\ \end{array}
+$$
+
+$\mathcal{D}$ can be chosen independently for each pair $(a,x)\in \mathcal{X}^k\times \mathcal{X}^n$ . Therefore, for any $(a,x)\in \mathcal{X}^k\times \mathcal{X}^n$ the decision rule for $\mathcal{D}(a,x) = 1$ is:
+
+$$
+\begin{array}{l} \sqrt{\left(\frac{n + m}{m}\right)^{d}} \int_{\theta \in \mathbb{R}^{d}} d \theta \, Q (\theta) \exp \left\{- \frac{1}{2} (x - \theta_{c, n})^{T} \operatorname{diag} (\Sigma, n)^{-1} (x - \theta_{c, n}) \right\} \\ \exp \left\{- \frac{1}{2} \left(a - \theta_{c, k}\right)^{T} \operatorname{diag} (\Sigma, k)^{-1} \left(a - \theta_{c, k}\right) \right\} > \\ \int_{\theta \in \mathbb{R}^{d}} d \theta \, Q (\theta) \exp \left\{- \frac{1}{2} (x - \theta_{c, n})^{T} \Psi_{M L}^{-1} (x - \theta_{c, n}) \right\} \exp \left\{- \frac{1}{2} (a - \theta_{c, k})^{T} \operatorname{diag} (\Sigma, k)^{-1} (a - \theta_{c, k}) \right\} \\ \end{array}
+$$
+
+Observing the LHS integral and using the improper uniform prior assumption, we obtain:
+
+$$
+\int_ {\theta \in \mathbb {R} ^ {d}} d \theta Q (\theta) \exp \left\{- \frac {1}{2} \left[ (x - \theta_ {c, n}) ^ {T} d i a g (\Sigma , n) ^ {- 1} (x - \theta_ {c, n}) + \right. \right.
+$$
+
+$$
+\left. \left(a - \theta_ {c, k}\right) ^ {T} d i a g (\Sigma , k) ^ {- 1} (a - \theta_ {c, k}) \right] \}}
+$$
+
+$$
+= \int_ {\theta \in \mathbb {R} ^ {d}} d \theta \exp \{- \frac {1}{2} [ \sum_ {i = 1} ^ {n} x _ {i} ^ {T} \Sigma^ {- 1} x _ {i} + \sum_ {j = 1} ^ {k} a _ {j} ^ {T} \Sigma^ {- 1} a _ {j} - 2 \theta^ {T} \Sigma^ {- 1} (n \bar {x} + k \bar {a}) + (n + k) \theta^ {T} \Sigma^ {- 1} \theta ] \}
+$$
+
+$$
+= \exp \{- \frac {1}{2} [ \sum_ {i = 1} ^ {n} x _ {i} ^ {T} \Sigma^ {- 1} x _ {i} + \sum_ {j = 1} ^ {k} a _ {j} ^ {T} \Sigma^ {- 1} a _ {j} - (n + k) (\frac {n \bar {x} + k \bar {a}}{n + k}) ^ {T} \Sigma^ {- 1} (\frac {n \bar {x} + k \bar {a}}{n + k}) ] \}
+$$
+
+$$
+\int_ {\theta \in \mathbb {R} ^ {d}} d \theta \exp \left\{- \frac {n + k}{2} \left[ \left(\theta - \frac {n \bar {x} + k \bar {a}}{n + k}\right) ^ {T} \Sigma^ {- 1} \left(\theta - \frac {n \bar {x} + k \bar {a}}{n + k}\right) \right] \right\}
+$$
+
+$$
+= \exp \left\{- \frac {1}{2} \left[ \sum_ {i = 1} ^ {n} x _ {i} ^ {T} \Sigma^ {- 1} x _ {i} + \sum_ {j = 1} ^ {k} a _ {j} ^ {T} \Sigma^ {- 1} a _ {j} - (n + k) \left(\frac {n \bar {x} + k \bar {a}}{n + k}\right) ^ {T} \Sigma^ {- 1} \left(\frac {n \bar {x} + k \bar {a}}{n + k}\right) \right] \right\}
+$$
+
+$$
+\sqrt {(\frac {2 \pi}{n + k}) ^ {d} \det (\Sigma)}
+$$
+
+Observing the RHS and using the improper uniform prior assumption, we obtain:
+
+$$
\begin{array}{l} \int_{\theta \in \mathbb{R}^d} d\theta \, Q(\theta) \exp \left\{ -\frac{1}{2} \left[ (x - \theta_{c,n})^T \Psi_{ML}^{-1} (x - \theta_{c,n}) + (a - \theta_{c,k})^T \operatorname{diag}(\Sigma^{-1}, k) (a - \theta_{c,k}) \right] \right\} \\ = \int_{\theta \in \mathbb{R}^d} d\theta \exp \left\{ -\frac{1}{2} \left[ (x - \theta_{c,n})^T \Psi_{ML}^{-1} (x - \theta_{c,n}) + (a - \theta_{c,k})^T \operatorname{diag}(\Sigma^{-1}, k) (a - \theta_{c,k}) \right] \right\} \\ \stackrel {\text{F.14}}{=} \int_{\theta \in \mathbb{R}^d} d\theta \exp \left\{ -\frac{1}{2} \left[ (x - \theta_{c,n})^T \left( \operatorname{diag}(\Sigma^{-1}, n) - \frac{1}{n+m} \operatorname{rep}(\Sigma^{-1}, n) \right) (x - \theta_{c,n}) + \right. \right. \\ \quad \left. \left. (a - \theta_{c,k})^T \operatorname{diag}(\Sigma^{-1}, k) (a - \theta_{c,k}) \right] \right\} \\ = \int_{\theta \in \mathbb{R}^d} d\theta \exp \left\{ -\frac{1}{2} \left[ \sum_{i=1}^{n} (x_i - \theta)^T \Sigma^{-1} (x_i - \theta) + \sum_{j=1}^{k} (a_j - \theta)^T \Sigma^{-1} (a_j - \theta) - \right. \right. \\ \quad \left. \left. \frac{n^2}{n+m} (\bar{x} - \theta)^T \Sigma^{-1} (\bar{x} - \theta) \right] \right\} \end{array}
+$$
+
+$$
\begin{array}{l} \stackrel {(i)}{=} \exp \left\{ -\frac{1}{2} \left[ \sum_{i=1}^{n} x_i^T \Sigma^{-1} x_i + \sum_{j=1}^{k} a_j^T \Sigma^{-1} a_j - \frac{n^2}{n+m} \bar{x}^T \Sigma^{-1} \bar{x} - \frac{nm+nk+mk}{n+m} v^T \Sigma^{-1} v \right] \right\} \\ \quad \int_{\theta \in \mathbb{R}^d} d\theta \exp \left\{ -\frac{1}{2} \frac{nm+nk+mk}{n+m} \left[ (\theta - v)^T \Sigma^{-1} (\theta - v) \right] \right\} \\ = \exp \left\{ -\frac{1}{2} \left[ \sum_{i=1}^{n} x_i^T \Sigma^{-1} x_i + \sum_{j=1}^{k} a_j^T \Sigma^{-1} a_j - \frac{n^2}{n+m} \bar{x}^T \Sigma^{-1} \bar{x} - \frac{nm+nk+mk}{n+m} v^T \Sigma^{-1} v \right] \right\} \sqrt{\left(\frac{2\pi (n+m)}{nm+nk+mk}\right)^d \det(\Sigma)} \end{array}
+$$
+
+Where in (i) we denoted $v = \frac{nm\bar{x} + k(n + m)\bar{a}}{nm + nk + mk}$ . Therefore, the decision rule is
+
+$$
+\begin{array}{l} \exp \{\frac {1}{2} [ (n + k) (\frac {n \bar {x} + k \bar {a}}{n + k}) ^ {T} \Sigma^ {- 1} (\frac {n \bar {x} + k \bar {a}}{n + k}) ] \} \sqrt {(\frac {1}{m (n + k)}) ^ {d}} > \\ \exp \{\frac {1}{2} [ \frac {n ^ {2}}{n + m} \bar {x} ^ {T} \Sigma^ {- 1} \bar {x} + \frac {n m + n k + m k}{n + m} v ^ {T} \Sigma^ {- 1} v ] \} \sqrt {\left(\frac {1}{n m + n k + m k}\right) ^ {d}} \\ \Leftrightarrow \sqrt {\left(\frac {n m + n k + m k}{m (n + k)}\right) ^ {d}} > \\ \exp \{\frac {1}{2} [ \frac {n ^ {2}}{n + m} \bar {x} ^ {T} \Sigma^ {- 1} \bar {x} + \frac {n m + n k + m k}{n + m} v ^ {T} \Sigma^ {- 1} v - (n + k) (\frac {n \bar {x} + k \bar {a}}{n + k}) ^ {T} \Sigma^ {- 1} (\frac {n \bar {x} + k \bar {a}}{n + k}) ] \} \\ \Leftrightarrow d \log \left(\frac {n m + n k + m k}{m (n + k)}\right) > \\ \frac {n ^ {2}}{n + m} \bar {x} ^ {T} \Sigma^ {- 1} \bar {x} + \frac {n m + n k + m k}{n + m} v ^ {T} \Sigma^ {- 1} v - (n + k) \left(\frac {n \bar {x} + k \bar {a}}{n + k}\right) ^ {T} \Sigma^ {- 1} \left(\frac {n \bar {x} + k \bar {a}}{n + k}\right) \\ \Leftrightarrow d \log \left(\frac {n m + n k + m k}{m (n + k)}\right) > \frac {k ^ {2} n ^ {2}}{(n + k) (n m + n k + m k)} (\bar {x} - \bar {a}) ^ {T} \Sigma^ {- 1} (\bar {x} - \bar {a}) \\ \Leftrightarrow (\bar {x} - \bar {a}) ^ {T} \Sigma^ {- 1} (\bar {x} - \bar {a}) < \frac {d (n + k) (n m + n k + m k)}{k ^ {2} n ^ {2}} \log \left(\frac {n m + n k + m k}{m (n + k)}\right) \\ \end{array}
+$$
+
+As required.
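The final equivalence above rests on an algebraic identity in $\bar{x}$ and $\bar{a}$. A quick numeric spot-check (ours, not part of the proof) in the scalar case $d = 1$, $\Sigma = 1$, with $v$ as denoted in (i):

```python
# Spot-check of the identity behind the last equivalence in the decision rule:
#   n^2/(n+m) xb^2 + (nm+nk+mk)/(n+m) v^2 - (n+k) ((n xb + k ab)/(n+k))^2
#     = k^2 n^2 / ((n+k)(nm+nk+mk)) * (xb - ab)^2
# where v = (n m xb + k (n+m) ab) / (nm + nk + mk).
def lhs(n, m, k, xb, ab):
    s = n * m + n * k + m * k
    v = (n * m * xb + k * (n + m) * ab) / s
    return (n**2 / (n + m)) * xb**2 + (s / (n + m)) * v**2 \
        - (n + k) * ((n * xb + k * ab) / (n + k))**2

def rhs(n, m, k, xb, ab):
    s = n * m + n * k + m * k
    return k**2 * n**2 / ((n + k) * s) * (xb - ab)**2

for (n, m, k, xb, ab) in [(2, 3, 1, 1.0, 0.0), (5, 7, 3, -0.4, 2.2)]:
    assert abs(lhs(n, m, k, xb, ab) - rhs(n, m, k, xb, ab)) < 1e-9
```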
+
+
+
+Theorem F.16. Fix the attacker to be $\mathcal{G}_{ML}$ as defined in F.12, then the game value is:
+
+$$
+\max _ {\mathcal {D}} V (\mathcal {D}, \mathcal {G} _ {M L}) \stackrel {{L e m m a = F. 1 5}} {=} V (\mathcal {D} _ {M L}, \mathcal {G} _ {M L}) =
+$$
+
+$$
+\frac {1}{2} + \frac {1}{2} \frac {1}{\Gamma (\frac {d}{2})} [ \gamma (\frac {d}{2}, \frac {d (n m + n k + m k)}{2 n k} \log \frac {n m + n k + m k}{m (n + k)}) -
+$$
+
+$$
+\gamma (\frac {d}{2}, \frac {d m (n + k)}{2 n k} \log \frac {n m + n k + m k}{m (n + k)}) ] =
+$$
+
+$$
+\frac {1}{2} + \frac {1}{2} \frac {1}{\Gamma (\frac {d}{2})} \left[ \gamma \left(\frac {d}{2}, \frac {d}{2} (1 + \rho + \delta) \log \frac {1 + \rho + \delta}{\rho + \delta}\right) - \gamma \left(\frac {d}{2}, \frac {d}{2} (\rho + \delta) \log \frac {1 + \rho + \delta}{\rho + \delta}\right) \right]
+$$
+
+Where $\rho = \frac{m}{k},\delta = \frac{m}{n}$ , and $\gamma$ is the lower incomplete gamma function.
+
+Proof. The game value is given by:
+
+$$
+\begin{array}{l} V \left(\mathcal {D} _ {M L}, \mathcal {G} _ {M L}\right) \\ = \mathbb {E} _ {\Theta \sim Q} V (\Theta , \mathcal {D} _ {M L}, \mathcal {G} _ {M L}) \\ = \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {Y \sim f _ {\Theta} ^ {(m)}} \left[ \mathbb {E} _ {X \sim f _ {\Theta} ^ {(n)}} \left[ \mathcal {D} _ {M L} (A, X) \right] + \mathbb {E} _ {X \sim g _ {X | Y} ^ {M L} (\cdot | Y)} \left[ 1 - \mathcal {D} _ {M L} (A, X) \right] \right] \\ = \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {Y \sim f _ {\Theta} ^ {(m)}} \\ \left[ \mathbb {E} _ {X \sim f _ {\Theta} ^ {(n)}} \left[ I \left[ \| \bar {X} - \bar {A} \| _ {\Sigma^ {- 1}} ^ {2} < \alpha_ {M L} \right] \right] + \mathbb {E} _ {X \sim g _ {X | Y} ^ {M L} (\cdot | Y)} \left[ 1 - I \left[ \| \bar {X} - \bar {A} \| _ {\Sigma^ {- 1}} ^ {2} < \alpha_ {M L} \right] \right] \right] \\ = \frac {1}{2} + \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {X \sim f _ {\Theta} ^ {(n)}} \left[ I \left[ \| \bar {X} - \bar {A} \| _ {\Sigma^ {- 1}} ^ {2} < \alpha_ {M L} \right] \right] - \\ \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {X \sim g _ {X | \Theta} ^ {M L} (\cdot | \Theta)} \left[ I \left[ \| \bar {X} - \bar {A} \| _ {\Sigma^ {- 1}} ^ {2} < \alpha_ {M L} \right] \right] \\ \end{array}
+$$
+
+Observing the first term, we can see that by replacing $\alpha^{*}$ with $\alpha_{ML}$ in the analog part of the proof for Theorem F.11 we get:
+
+$$
+\frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {X \sim f _ {\Theta} ^ {(n)}} \left[ I [ \| \bar {X} - \bar {A} \| _ {\Sigma^ {- 1}} ^ {2} < \alpha_ {M L} ] \right] = \frac {1}{2} \frac {1}{\Gamma (\frac {d}{2})} \gamma (\frac {d}{2}, \frac {n k \alpha_ {M L}}{2 (n + k)})
+$$
+
+Again, similarly to the analog part of the proof for Theorem F.11, observing the second term we have:
+
+$$
+\begin{array}{l} \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {X \sim g _ {X | \Theta} ^ {M L} (\cdot | \Theta)} \left[ I \left[ \| \bar {X} - \bar {A} \| _ {\Sigma^ {- 1}} ^ {2} < \alpha_ {M L} \right] \right] \\ \equiv \frac {1}{2} \mathbb {E} _ {\Theta \sim Q} \mathbb {E} _ {A \sim f _ {\Theta} ^ {(k)}} \mathbb {E} _ {X \sim g _ {X | \Theta} ^ {M L} (\cdot | \Theta)} [ I [ V ^ {T} V < \alpha_ {M L} ] ] \\ = (\ast) \\ \end{array}
+$$
+
+Where:
+
+$$
+\begin{array}{l} V = C ^ {- 1} (\bar {X} - \bar {A}) = C ^ {- 1} \left[ (\bar {X} - \Theta) - (\bar {A} - \Theta) \right] \\ = C ^ {- 1} \left[ I _ {d}, - I _ {d} \right] \left[ \begin{array}{c} \bar {X} - \Theta \\ \bar {A} - \Theta \end{array} \right] = [ C ^ {- 1}, - C ^ {- 1} ] \left[ \begin{array}{c} \bar {X} - \Theta \\ \bar {A} - \Theta \end{array} \right] \\ \end{array}
+$$
+
+Using the definition of $\mathcal{G}_{ML}$ (Definition F.12) we have $\bar{X} \sim \mathcal{N}(\theta, \frac{n + m}{nm}\Sigma)$ , using Lemma F.3 we have $\bar{A} \sim \mathcal{N}(\theta, \frac{1}{k}\Sigma)$ , and thus:
+
+$$
+\left[ \begin{array}{c} \bar {X} - \Theta \\ \bar {A} - \Theta \end{array} \right] \sim \mathcal {N} (\left[ \begin{array}{c} \theta \\ \theta \end{array} \right], \left[ \begin{array}{c c} \frac {n + m}{n m} \Sigma & 0 \\ 0 & \frac {1}{k} \Sigma \end{array} \right])
+$$
+
+Therefore:
+
+$$
+V \sim \mathcal {N} (0, [ C ^ {- 1}, - C ^ {- 1} ] \left[ \begin{array}{c c} \frac {n + m}{n m} \Sigma & 0 \\ 0 & \frac {1}{k} \Sigma \end{array} \right] \left[ \begin{array}{c} C ^ {- T} \\ - C ^ {- T} \end{array} \right]) = \mathcal {N} (0 _ {d}, \frac {n m + n k + m k}{n m k} I _ {d})
+$$
+
+We denote $\tilde{V} = \sqrt{\frac{nmk}{nm + nk + mk}} V$ and thus $\tilde{V}_1, \ldots, \tilde{V}_d$ are independent standard normal random variables and $\tilde{V}^T\tilde{V} \sim \chi^2(d)$ . Therefore
+
+$$
+V ^ {T} V \sim \Gamma (k = \frac {d}{2}, \theta = 2 \frac {n m + n k + m k}{n m k})
+$$
+
+And we have:
+
+$$
+(*) = \frac {1}{2} \frac {1}{\Gamma (\frac {d}{2})} \gamma (\frac {d}{2}, \frac {n m k \alpha_ {M L}}{2 (n m + n k + m k)})
+$$
+
+Hence, the game value is given by:
+
+$$
+V (\mathcal {D} ^ {*}, \mathcal {G} ^ {*}) = \frac {1}{2} + \frac {1}{2} \frac {1}{\Gamma (\frac {d}{2})} [ \gamma (\frac {d}{2}, \frac {n k \alpha_ {M L}}{2 (n + k)}) - \gamma (\frac {d}{2}, \frac {n m k \alpha_ {M L}}{2 (n m + n k + m k)}) ] =
+$$
+
+$$
+\frac {1}{2} + \frac {1}{2} \frac {1}{\Gamma (\frac {d}{2})} [ \gamma (\frac {d}{2}, \frac {d (n m + n k + m k)}{2 n k} \log \frac {n m + n k + m k}{m (n + k)}) -
+$$
+
+$$
+\gamma (\frac {d}{2}, \frac {d m (n + k)}{2 n k} \log \frac {n m + n k + m k}{m (n + k)}) ]
+$$
+
+As required.
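The closed-form value in Theorem F.16 is straightforward to evaluate numerically. A minimal sketch (ours, not the authors'), assuming only the standard library, so the regularized lower incomplete gamma $\gamma(s,x)/\Gamma(s)$ is implemented via its power series:

```python
import math

def reg_lower_gamma(s, x, terms=200):
    """Regularized lower incomplete gamma P(s, x) = gamma(s, x) / Gamma(s),
    via the standard power series (adequate for the moderate x used here)."""
    if x <= 0:
        return 0.0
    total, term = 0.0, 1.0 / s
    for n in range(1, terms):
        total += term
        term *= x / (s + n)
    return total * math.exp(-x + s * math.log(x)) / math.gamma(s)

def game_value(d, n, m, k):
    """Game value from Theorem F.16, with rho = m/k and delta = m/n."""
    t = m / k + m / n                      # rho + delta
    log_ratio = math.log((1 + t) / t)
    hi = reg_lower_gamma(d / 2, (d / 2) * (1 + t) * log_ratio)
    lo = reg_lower_gamma(d / 2, (d / 2) * t * log_ratio)
    return 0.5 + 0.5 * (hi - lo)

# The value lies in (1/2, 1); a larger leaked sample m helps the attacker,
# pushing the value toward the uninformed guess of 1/2.
assert 0.5 < game_value(d=2, n=5, m=50, k=5) < game_value(d=2, n=5, m=5, k=5) < 1.0
```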
+
+
+
+# G EXPERIMENTS - DATASETS
+
+Below we provide details on the datasets used for the authentication experiments on faces and characters. The VoxCeleb2 (Nagrani et al., 2017; Chung & Zisserman, 2018) dataset contains cropped face videos of 6112 identities. We used the original split of 5994 identities for training and 118 for test. For each identity, we saved every fifth frame, resized each frame to $64 \times 64$ , and augmented it using horizontal flip. The Omniglot dataset (Lake et al., 2015) contains handwritten character images from 50 alphabets. There are 1623 different characters, and 20 examples for each character. We use the splits and augmentations suggested by Vinyals et al. (2016) and used by Snell et al. (2017).
+
+# H EXPERIMENTS - IMPLEMENTATION DETAILS
+
+In this section, we describe our implementation of the GIM model for the different experiments, in detail. Recall from Sec. 5, that in general, the authenticator is a neural network $\mathcal{D}(a,x)$ that can be expressed as $\mathcal{D}(a,x) = \sigma (T_{\mathcal{D}}(a),T_{\mathcal{D}}(x))$ , and the generator is a neural network $\mathcal{G}(y)$ that can be expressed as $\mathcal{G}(y)_i = \varphi (W_i - \bar{W} +T_{\mathcal{G}}(y))\quad \forall i\in [n]$ . In what follows we describe our implementation of these models for each of the experiments.
+
+# H.1 GAUSSIAN SOURCES
+
+Authenticator Architecture: For the statistic function $T_{\mathcal{D}}$ , we use a concatenation of the mean and standard deviation of the sample. For the comparison function $\sigma$ , we use the element-wise absolute difference between the statistics $T_{\mathcal{D}}(a), T_{\mathcal{D}}(x)$ , followed by a linear layer.
+
+Attacker Architecture: For the statistic function $T_{\mathcal{G}}$ , we use the sample mean, i.e., $T_{\mathcal{G}}(y) = \bar{y}$ . The noise vectors $W_{i}$ are generated as follows: First, $n$ Gaussian noise vectors $Z_{1}, \ldots, Z_{n} \stackrel{\mathrm{iid}}{\sim} \mathcal{N}(0, I_{d})$ are drawn, then each vector $Z_{i}$ is passed through a linear layer to obtain $W_{i}$ . Finally, the decoder $\varphi$ is the identity function.
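A minimal NumPy sketch of this Gaussian attacker (the learned linear layer is stubbed with fixed random weights; variable names are ours). Note that the mean-matching construction forces the fake sample's mean to equal the leaked sample's mean $\bar{y}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 8, 4, 6            # latent dim, leaked-sample size, fake-sample size

# Stub for the learned linear layer mapping noise Z_i to W_i
# (hypothetical weights; in the paper this layer is trained).
W_lin = rng.standard_normal((d, d))

def attack(y):
    """Gaussian attacker: G(y)_i = W_i - W_bar + y_bar (phi is the identity)."""
    z = rng.standard_normal((n, d))   # Z_1, ..., Z_n ~ N(0, I_d)
    w = z @ W_lin.T                   # W_i = linear layer applied to Z_i
    return w - w.mean(axis=0) + y.mean(axis=0)

y = rng.standard_normal((m, d))       # leaked sample
x_fake = attack(y)
# Mean matching: the fake sample's mean equals the leaked sample's mean.
assert np.allclose(x_fake.mean(axis=0), y.mean(axis=0))
```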
+
+**Optimization details:** The model is trained in an authentication setup as in our theoretical setup, using alternating gradient descent as is common in GAN optimization (Mescheder et al., 2018). Each iteration begins when a source $\theta \in \mathbb{R}^d$ is drawn from the prior distribution $Q = \mathcal{N}(0,10I_d)$ . Samples $A \in \mathbb{R}^{k \times d}$ , $Y \in \mathbb{R}^{m \times d}$ , $X_\theta \in \mathbb{R}^{n \times d}$ are drawn IID from $f_\theta = \mathcal{N}(\theta, I_d)$ , where $X_\theta$ represents a real sample from $\theta$ . The attacker, given the leaked sample $Y$ , generates a fake sample $X_G = \mathcal{G}(Y) \in \mathbb{R}^{n \times d}$ , passes it to $\mathcal{D}$ , and suffers the loss $-\log (\text{sigmoid}(\mathcal{D}(\mathrm{A}, X_G)))$ . The authenticator, $\mathcal{D}$ , receives as input the source information sample $A$ , outputs a prediction for each of the test samples $X_\theta$ , $X_G$ , and suffers the binary cross-entropy loss $-0.5\left(\log (\text{sigmoid}(\mathcal{D}(\mathrm{A}, X_\theta))) + \log (\text{sigmoid}(1 - \mathcal{D}(\mathrm{A}, X_G)))\right)$ . Each experiment is trained for $200K$ iterations with a batch size of 4000 using the Adam optimizer (Kingma & Ba, 2015) with learning rate $10^{-4}$ .
+
+# H.2 EXPERIMENTS ON VOXCELEB2 AND OMNIGLOT
+
+To describe the models we begin with some notation. We let $c$ denote the number of image channels, $h$ denote the image size (we only consider square images of size $c \times h \times h$ ), and $l$ denote the latent dimension of the model.
+
Authenticator Architecture: As mentioned above, the authenticator is a neural network model that can be expressed as:
+
+$$
+\mathcal {D} (a, x) = \sigma (T _ {\mathcal {D}} (a), T _ {\mathcal {D}} (x))
+$$
+
+The statistic function $T_{\mathcal{D}}$ maps a sample of images to a statistic vector $s \in \mathbb{R}^{6l}$ in the following way: Each image in the sample is mapped using encoders $E_{src}^{\mathcal{D}}, E_{env}^{\mathcal{D}} : [-1,1]^{c \times h \times h} \to \mathbb{R}^l$ to two latent vectors $v_{src}, v_{env} \in \mathbb{R}^l$ , respectively. $v_{src}$ is designed to represent the source $\theta$ , and $v_{env}$ is designed to represent the environment (e.g., pose, lighting, expression). To represent the source of the sample, the sample mean of $v_{src}$ is taken. To represent the sample distribution, $v_{env}$ is passed through a non-linear statistic module $\zeta$ which is meant to capture more complex statistical functions of the sample. Finally, $T_{\mathcal{D}}(x)$ is obtained by concatenating $\bar{v}_{src}$ and $\zeta(v_{env})$ . E.g., for $x$ we have:
+
+$$
T_{\mathcal{D}}(x) = \operatorname{concat} \left( \frac{1}{n} \sum_{i=1}^{n} E_{src}^{\mathcal{D}}(x_i), \zeta \left( E_{env}^{\mathcal{D}}(x) \right) \right)
+$$
+
The comparison function $\sigma : \mathbb{R}^{6l} \times \mathbb{R}^{6l} \to \mathbb{R}$ receives two latent vectors $s_a = T_{\mathcal{D}}(a), s_x = T_{\mathcal{D}}(x)$ representing the statistics of the samples $a$ and $x$ respectively. The vectors are concatenated and then passed through a Multi-Layered Perceptron which outputs a scalar reflecting their similarity. Namely:
+
+$$
\sigma(s_a, s_x) = \operatorname{MLP} \left( \operatorname{concat}(s_a, s_x) \right)
+$$
+
+The full architecture of the authenticator is depicted in Fig. 7.
+
+
+Figure 7: An overview of the implementation of the GIM authenticator architecture for the experiments on the Voxceleb2 and Omniglot datasets.
+
+Attacker Architecture: Our implementation of the attacker is inspired by the architecture suggested by Zakharov et al. (2019), which relies on an implicit assumption that an image could be modeled as a mapping of two latent vectors to image space. The first vector represents the source $\theta$ and is the same for any image of $\theta$ , the second vector represents the environment (e.g. pose, lighting, expression) and is different for each image of the source.
+
The attacker model consists of the following components: an image encoder $\mathrm{E}_{src}^{\mathcal{G}}:[-1,1]^{c\times h\times h}\to \mathbb{R}^l$ that maps an image to a latent vector representing the source $\theta$; an image encoder $\mathrm{E}_{env}^{\mathcal{G}}:[-1,1]^{c\times h\times h}\to \mathbb{R}^l$ that maps an image to a latent vector representing the environment; a Multi-Layered Perceptron $\mathrm{MLP}_{\mathcal{G}}:\mathbb{R}^l\to \mathbb{R}^l$ that maps Gaussian noise to the environment latent space; an environment decoder $\varphi_{env}:\mathbb{R}^l\to \mathbb{R}^{c\times h\times h}$ that maps a latent vector to an environment image, which could represent aspects of the environment such as facial landmarks; and finally, a generator $\phi :\mathbb{R}^{2c\times h\times h}\times \mathbb{R}^l\to [-1,1]^{c\times h\times h}$ that maps an environment image concatenated to the real image to a new image. The generator is based on the image-to-image model used by Zakharov et al. (2019) and Johnson et al. (2016), and uses the source latent vector as input for Adaptive Instance Normalization (Huang & Belongie, 2017).
+
The attacker generates a fake sample $X \in [-1,1]^{n \times c \times h \times h}$ based on a leaked sample $Y \in [-1,1]^{m \times c \times h \times h}$ in the following way: Each image $Y_j$ in the leaked sample is mapped using $\mathrm{E}_{src}^{\mathcal{G}}$ and $\mathrm{E}_{env}^{\mathcal{G}}$ to latent vectors $u_j^{src}, u_j^{env} \in \mathbb{R}^l$. A latent environment vector, $v_i^{env}$, is constructed for each fake image $X_i$ in the following way: First, $n$ Gaussian noise vectors $Z_1, \ldots, Z_n \stackrel{\mathrm{iid}}{\sim} \mathcal{N}(0, I_l)$ are drawn, then each vector $Z_{i}$ is passed through $\mathrm{MLP}_{\mathcal{G}}$ to obtain $W_{i}$, and finally, $v_{i}^{env}$ is obtained by matching the mean of the new latent environment vectors to the sample mean $\bar{u}^{env}$. Namely:
+
+$$
v_{i}^{env} = W_{i} - \bar{W} + \bar{u}^{env} \quad \forall i \in [n]
+$$
+
+Each fake image $X_{i}$ is then generated deterministically as follows: $v_{i}^{env}$ is used as input to the decoder $\varphi_{env}$ which outputs an environment image. This image is concatenated along the channel dimension to a random image from the leaked sample $Y$ , and then passed as input to the generator $\phi$ , which also receives $u^{src}$ as input to its Adaptive instance norm layers. The output of the generator is the fake image $X_{i}$ for all $i \in [n]$ . The full architecture of the attacker is depicted in Fig. 8.
+
+
+Figure 8: An overview of the implementation of the GIM attacker architecture for the experiments on the Voxceleb2 and Omniglot datasets.
+
+**Optimization details:** The model is trained in an authentication setup as in our theoretical setup, using alternating gradient descent with the regularization parameter as suggested by Mescheder et al. (2018). Each iteration begins when a source $\theta \in \mathbb{R}^d$ is drawn uniformly from the dataset. Samples $A \in [-1,1]^{k \times c \times h \times h}$ , $Y \in [-1,1]^{m \times c \times h \times h}$ , $X_\theta \in [-1,1]^{n \times c \times h \times h}$ are sampled uniformly from the images available to the source $\theta$ . The attacker, given the leaked sample $Y$ , generates a fake sample $X_{\mathcal{G}} = \mathcal{G}(Y)$ , passes it to $\mathcal{D}$ , and suffers the loss $-\log (\text{sigmoid}(\mathcal{D}(\mathrm{A}, X_{\mathcal{G}})))$ . The authenticator, $\mathcal{D}$ , receives as input the source information sample $A$ , outputs a prediction for each of the test samples $X_\theta$ , $X_{\mathcal{G}}$ , and suffers the binary cross-entropy loss $-0.5\left(\log (\text{sigmoid}(\mathcal{D}(\mathrm{A}, X_\theta))) + \log (\text{sigmoid}(1 - \mathcal{D}(\mathrm{A}, X_{\mathcal{G}})))\right)$ .
+
+The experiments on Omniglot were trained for $520k$ iterations with batch size 128 using the Adam optimizer (Kingma & Ba, 2015) with learning rate $10^{-6}$ for $\mathcal{D}$ , $10^{-5}$ for $\mathcal{G}$ , and $10^{-7}$ for $\mathrm{MLP}_{\mathcal{G}}$ (as done by Karras et al. (2018b)). The regularization parameter was set to 0.
+
+The experiments on Voxceleb2 were trained for $250k$ iterations with batch size 64 using the Adam optimizer (Kingma & Ba, 2015) with learning rate $10^{-4}$ for both $\mathcal{D}$ and $\mathcal{G}$ and $10^{-6}$ for $\mathrm{MLP}_{\mathcal{G}}$ . The regularization parameter was set to 10 (as done by Karras et al. (2018b)) since we noticed that it stabilized and sped up the training, and in contrast to Omniglot and the Gaussian experiments did not seem to hurt the results.
\ No newline at end of file
diff --git a/optimalstrategiesagainstgenerativeattacks/images.zip b/optimalstrategiesagainstgenerativeattacks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..3d56421597964b06a70ee40a64de007e4690e905
--- /dev/null
+++ b/optimalstrategiesagainstgenerativeattacks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00fd0badce929f35452ad39928ccbe414c27d0cd57938bcfd73418e4b1d6f61c
+size 2479744
diff --git a/optimalstrategiesagainstgenerativeattacks/layout.json b/optimalstrategiesagainstgenerativeattacks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6d3b3011770018a9f97066ad2d3f762494658d13
--- /dev/null
+++ b/optimalstrategiesagainstgenerativeattacks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4973f70c510ffd8b4142235823f1ee204951b62cbaa1b17e53d79a02a182063c
+size 1395110
diff --git a/optimisticexplorationevenwithapessimisticinitialisation/675bfdd6-48ee-4604-9dc3-3fcdfea37a82_content_list.json b/optimisticexplorationevenwithapessimisticinitialisation/675bfdd6-48ee-4604-9dc3-3fcdfea37a82_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..96b516c0044c1a64d3ed87d98e2d34d18b61a504
--- /dev/null
+++ b/optimisticexplorationevenwithapessimisticinitialisation/675bfdd6-48ee-4604-9dc3-3fcdfea37a82_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d019d0505fa8c3a2fb816de6a87852d0d98a6682f5048b96e61d5144c976d7a6
+size 190234
diff --git a/optimisticexplorationevenwithapessimisticinitialisation/675bfdd6-48ee-4604-9dc3-3fcdfea37a82_model.json b/optimisticexplorationevenwithapessimisticinitialisation/675bfdd6-48ee-4604-9dc3-3fcdfea37a82_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b6359469376b93acc60476601e9eb1aaa8ecfe9a
--- /dev/null
+++ b/optimisticexplorationevenwithapessimisticinitialisation/675bfdd6-48ee-4604-9dc3-3fcdfea37a82_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:569efa9b57b8251f5b7ebaab9c3faa0bb4f910beb3ca3e4fb72ce9a58c5e9d56
+size 224384
diff --git a/optimisticexplorationevenwithapessimisticinitialisation/675bfdd6-48ee-4604-9dc3-3fcdfea37a82_origin.pdf b/optimisticexplorationevenwithapessimisticinitialisation/675bfdd6-48ee-4604-9dc3-3fcdfea37a82_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4356ea1526b3ac5cd149ab4a391e141cd354ee6b
--- /dev/null
+++ b/optimisticexplorationevenwithapessimisticinitialisation/675bfdd6-48ee-4604-9dc3-3fcdfea37a82_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9af5818e36f39178e098ea4d44b19c219f52a3e024c964ffb8ad4bd069ca150a
+size 3148414
diff --git a/optimisticexplorationevenwithapessimisticinitialisation/full.md b/optimisticexplorationevenwithapessimisticinitialisation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..7ee8d673f99df7d828e99c733df0f7569195faf4
--- /dev/null
+++ b/optimisticexplorationevenwithapessimisticinitialisation/full.md
@@ -0,0 +1,984 @@
+# OPTIMISTIC EXPLORATION EVEN WITH A PESSIMISTIC INITIALISATION
+
+Tabish Rashid, Bei Peng, Wendelin Böhmer, Shimon Whiteson
+
+University of Oxford
+
+Department of Computer Science
+
{tabish.rashid, bei.peng, wendelin.boehmer, shimon.whiteson}@cs.ox.ac.uk
+
+# ABSTRACT
+
+Optimistic initialisation is an effective strategy for efficient exploration in reinforcement learning (RL). In the tabular case, all provably efficient model-free algorithms rely on it. However, model-free deep RL algorithms do not use optimistic initialisation despite taking inspiration from these provably efficient tabular algorithms. In particular, in scenarios with only positive rewards, $Q$ -values are initialised at their lowest possible values due to commonly used network initialisation schemes, a pessimistic initialisation. Merely initialising the network to output optimistic $Q$ -values is not enough, since we cannot ensure that they remain optimistic for novel state-action pairs, which is crucial for exploration. We propose a simple count-based augmentation to pessimistically initialised $Q$ -values that separates the source of optimism from the neural network. We show that this scheme is provably efficient in the tabular setting and extend it to the deep RL setting. Our algorithm, Optimistic Pessimistically Initialised $Q$ -Learning (OPIQ), augments the $Q$ -value estimates of a DQN-based agent with count-derived bonuses to ensure optimism during both action selection and bootstrapping. We show that OPIQ outperforms non-optimistic DQN variants that utilise a pseudocount-based intrinsic motivation in hard exploration tasks, and that it predicts optimistic estimates for novel state-action pairs.
+
+# 1 INTRODUCTION
+
+In reinforcement learning (RL), exploration is crucial for gathering sufficient data to infer a good control policy. As environment complexity grows, exploration becomes more challenging and simple randomisation strategies become inefficient.
+
+While most provably efficient methods for tabular RL are model-based (Brafman and Tennenholtz, 2002; Strehl and Littman, 2008; Azar et al., 2017), in deep RL, learning models that are useful for planning is notoriously difficult and often more complex (Hafner et al., 2019) than model-free methods. Consequently, model-free approaches have shown the best final performance on large complex tasks (Mnih et al., 2015; 2016; Hessel et al., 2018), especially those requiring hard exploration (Bellemare et al., 2016; Ostrovski et al., 2017). Therefore, in this paper, we focus on how to devise model-free RL algorithms for efficient exploration that scale to large complex state spaces and have strong theoretical underpinnings.
+
+Despite taking inspiration from tabular algorithms, current model-free approaches to exploration in deep RL do not employ optimistic initialisation, which is crucial to provably efficient exploration in all model-free tabular algorithms. This is because deep RL algorithms do not pay special attention to the initialisation of the neural networks and instead use common initialisation schemes that yield initial $Q$ -values around zero. In the common case of non-negative rewards, this means $Q$ -values are initialised to their lowest possible values, i.e., a pessimistic initialisation.
+
While initialising a neural network optimistically would be trivial, e.g., by setting the bias of the final layer of the network, the uncontrolled generalisation in neural networks changes this initialisation quickly. Instead, to benefit exploration, we require that the $Q$-values for novel state-action pairs remain high until those pairs are explored.
+
+An empirically successful approach to exploration in deep RL, especially when reward is sparse, is intrinsic motivation (Oudeyer and Kaplan, 2009). A popular variant is based on pseudocounts (Bellemare et al., 2016), which derive an intrinsic bonus from approximate visitation counts over states and is inspired by the tabular MBIE-EB algorithm (Strehl and Littman, 2008). However, adding a positive intrinsic bonus to the reward yields optimistic $Q$ -values only for state-action pairs that have already been chosen sufficiently often. Incentives to explore unvisited states rely therefore on the generalisation of the neural network. Exactly how the network generalises to those novel state-action pairs is unknown, and thus it is unclear whether those estimates are optimistic when compared to nearby visited state-action pairs.
+
+
+Figure 1
+
Consider the simple example with a single state and two actions shown in Figure 1. The left action yields $+0.1$ reward and the right action yields $+1$ reward. An agent whose $Q$-value estimates have been zero-initialised must at the first time step select an action randomly. As both actions are underestimated, this will increase the estimate of the chosen action. Greedy agents always pick the action with the largest $Q$-value estimate and will select the same action forever, failing to explore the alternative. Whether the agent learns the optimal policy or not is thus decided purely at random based on the initial $Q$-value estimates. This effect will only be amplified by intrinsic reward.
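This failure mode is easy to reproduce in a few lines of Python (our hypothetical two-armed bandit matching Figure 1; ties are broken toward the first action, so the $+0.1$ arm is tried first):

```python
rewards = [0.1, 1.0]        # left arm: +0.1, right arm: +1
q = [0.0, 0.0]              # zero (pessimistic) initialisation
lr = 0.5
chosen = []
for t in range(100):
    a = max(range(2), key=lambda i: q[i])   # greedy; ties -> action 0
    chosen.append(a)
    q[a] += lr * (rewards[a] - q[a])        # simple Q-value update

# After the first pick, the greedy agent never tries the other arm.
assert all(a == 0 for a in chosen)          # stuck on the suboptimal +0.1 arm
assert q[1] == 0.0                          # the +1 arm was never explored
```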
+
+To ensure optimism in unvisited, novel state-action pairs, we introduce Optimistic Pessimistically Initialised $Q$ -Learning (OPIQ). OPIQ does not rely on an optimistic initialisation to ensure efficient exploration, but instead augments the $Q$ -value estimates with count-based bonuses in the following manner:
+
+$$
+Q ^ {+} (s, a) := Q (s, a) + \frac {C}{(N (s , a) + 1) ^ {M}}, \tag {1}
+$$
+
+where $N(s, a)$ is the number of times a state-action pair has been visited and $M, C > 0$ are hyperparameters. These $Q^{+}$ -values are then used for both action selection and during bootstrapping, unlike the above methods which only utilise $Q$ -values during these steps. This allows OPIQ to maintain optimism when selecting actions and bootstrapping, since the $Q^{+}$ -values can be optimistic even when the $Q$ -values are not.
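As a toy illustration of Eq. 1 in the tabular case (ours, with arbitrary $C$ and $M$): even when an untried action's $Q$-value is pessimistically low, its full count bonus makes it the greedy choice under $Q^{+}$, and the bonus vanishes as counts grow:

```python
C, M = 1.0, 2.0

def q_plus(q, counts):
    # Eq. 1: optimistic augmentation of pessimistically initialised Q-values.
    return [qi + C / (ni + 1) ** M for qi, ni in zip(q, counts)]

q = [0.4, 0.0]          # learned Q-values in some state (action 1 never tried)
counts = [10, 0]        # visitation counts N(s, a)
qp = q_plus(q, counts)
# Action 1 has the lower Q-value but the full bonus C, so it is greedy under Q+.
assert qp[1] > qp[0]
# As counts grow, the bonus vanishes and Q+ falls back to Q.
assert abs(q_plus([0.4], [10_000])[0] - 0.4) < 1e-6
```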
+
+In the tabular domain, we base OPIQ on UCB-H (Jin et al., 2018), a simple online $Q$ -learning algorithm that uses count-based intrinsic rewards and optimistic initialisation. Instead of optimistically initialising the $Q$ -values, we pessimistically initialise them and use $Q^{+}$ -values during action selection and bootstrapping. Pessimistic initialisation is used to enable a worst case analysis where all of our $Q$ -value estimates underestimate $Q^{*}$ and is not a requirement for OPIQ. We show that these modifications retain the theoretical guarantees of UCB-H.
+
+Furthermore, our algorithm easily extends to the Deep RL setting. The primary difficulty lies in obtaining appropriate state-action counts in high-dimensional and/or continuous state spaces, which has been tackled by a variety of approaches (Bellemare et al., 2016; Ostrovski et al., 2017; Tang et al., 2017; Machado et al., 2018a) and is orthogonal to our contributions.
+
+We demonstrate clear performance improvements in sparse reward tasks over 1) a baseline DQN that just uses intrinsic motivation derived from the approximate counts, 2) simpler schemes that aim for an optimistic initialisation when using neural networks, and 3) strong exploration baselines. We show the importance of optimism during action selection for ensuring efficient exploration. Visualising the predicted $Q^{+}$ -values shows that they are indeed optimistic for novel state-action pairs.
+
+# 2 BACKGROUND
+
+We consider a Markov Decision Process (MDP) defined as a tuple $(S, \mathcal{A}, P, R, \gamma)$ , where $S$ is the state space, $\mathcal{A}$ is the discrete action space, $P(\cdot | s, a)$ is the state-transition distribution, $R(\cdot | s, a)$ is the distribution over rewards and $\gamma \in [0,1)$ is the discount factor. In the discounted episodic setting, the goal of the agent is to maximise the expected discounted sum of rewards $\mathbb{E}[\sum_{t=0}^{\infty} \gamma^{t} r_{t} \mid r_{t} \sim R(\cdot | s_{t}, a_{t})]$ . A policy $\pi(\cdot | s)$ is a mapping from states to valid probability distributions over actions. The action-value functions are $Q^{\pi}(s, a) := \mathbb{E}[\sum_{t=0}^{\infty} \gamma^{t} r_{t} \mid s_{0} = s, a_{0} = a, a_{t} \sim \pi(\cdot | s_{t})]$ and $Q^{*} := \max_{\pi} Q^{\pi}$ .
+
+Deep $Q$ -Network (DQN) (Mnih et al., 2015) uses a nonlinear function approximator (a deep neural network) to estimate the action-value function, $Q(s,a;\theta)\approx Q^{*}(s,a)$ , where $\theta$ are the parameters of the network. Exploration based on intrinsic rewards (e.g., Bellemare et al., 2016), which uses a DQN agent, additionally augments the observed rewards $r_t$ with a bonus $\beta /\sqrt{N(s_t,a_t)}$ based on
+
+
+Figure 2: A simple regression task to illustrate the effect of an optimistic initialisation in neural networks. Left: 10 different networks whose final layer biases are initialised at 3 (shown in green), and the same networks after training on the blue data points (shown in red). Right: One of the trained networks whose output has been augmented with an optimistic bias as in equation 1. The counts were obtained by computing a histogram over the input space $[-2, 2]$ with 50 bins.
+
+
+
+pseudo-visitation-counts $N(s_{t},a_{t})$ . The DQN parameters $\theta$ are trained by gradient descent on the mean squared regression loss $\mathcal{L}$ with bootstrapped 'target' $y_{t}$ :
+
+$$
+\mathcal{L}[\theta] := \mathbb{E}\left[\left(\overbrace{r_{t} + \frac{\beta}{\sqrt{N(s_{t}, a_{t})}} + \gamma \max_{a'} Q\left(s_{t+1}, a'; \theta^{-}\right)}^{y_{t}} - Q\left(s_{t}, a_{t}; \theta\right)\right)^{2} \mid \left(s_{t}, a_{t}, r_{t}, s_{t+1}\right) \sim D\right]. \tag{2}
+$$
+
+The expectation is estimated with uniform samples from a replay buffer $D$ (Lin, 1992). $D$ stores past transitions $(s_t, a_t, r_t, s_{t+1})$ , where the state $s_{t+1}$ is observed after taking the action $a_t$ in state $s_t$ and receiving reward $r_t$ . To improve stability, DQN uses a target network, parameterised by $\theta^{-}$ , which is periodically copied from the regular network and kept fixed for a number of iterations.
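A sketch of the target $y_t$ inside equation 2 for a single transition (names are ours; a real implementation would vectorise this over a sampled batch):

```python
import numpy as np

def dqn_pc_target(r, count, q_next, beta, gamma):
    """Bootstrapped target y_t from equation 2: the environmental reward,
    the pseudocount bonus beta / sqrt(N(s_t, a_t)), and the discounted
    max over the target network's Q-values at the next state."""
    return r + beta / np.sqrt(count) + gamma * np.max(q_next)

# Example: reward 1, the pair visited 4 times, target-network values [0, 2].
y = dqn_pc_target(1.0, 4, np.array([0.0, 2.0]), beta=0.1, gamma=0.9)
print(y)  # 1 + 0.1/2 + 0.9*2 = 2.85
```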
+
+# 3 OPTIMISTIC PESSIMISTICALLY INITIALISED $Q$ -LEARNING
+
+Our method Optimistic Pessimistically Initialised $Q$ -Learning (OPIQ) ensures optimism in the $Q$ -value estimates of unvisited, novel state-action pairs in order to drive exploration. This is achieved by augmenting the $Q$ -value estimates in the following manner:
+
+$$
+Q^{+}(s, a) := Q(s, a) + \frac{C}{(N(s, a) + 1)^{M}},
+$$
+
+and using these $Q^{+}$ -values during action selection and bootstrapping. In this section, we motivate OPIQ, analyse it in the tabular setting, and describe a deep RL implementation.
+
+# 3.1 MOTIVATIONS
+
+Optimistic initialisation does not work with neural networks. For an optimistic initialisation to benefit exploration, the $Q$ -values must start sufficiently high. More importantly, the values for unseen state-action pairs must remain high, until they are updated. When using a deep neural network to approximate the $Q$ -values, we can initialise the network to output optimistic values, for example, by adjusting the final bias. However, after a small amount of training, the values for novel state-action pairs may not remain high. Furthermore, due to the generalisation of neural networks we cannot know how the values for these unseen state-action pairs compare to the trained state-action pairs. Figure 2 (left), which illustrates this effect for a simple regression task, shows that different initialisations can lead to dramatically different generalisations. It is therefore prohibitively difficult to use optimistic initialisation of a deep neural network to drive exploration.
+
+Instead, we augment our $Q$ -value estimates with an optimistic bonus. Our motivation for the form of the bonus in equation 1, $\frac{C}{(N(s,a) + 1)^M}$ , stems from UCB-H (Jin et al., 2018), where all tabular $Q$ -values are initialised with $H$ and the first update for a state-action pair completely overwrites that value because the learning rate for the update $(\eta_1)$ is 1. One can alternatively view these $Q$ -values as zero-initialised with the additional term $Q(s,a) + H \cdot \mathbb{1}\{N(s,a) < 1\}$ , where $N(s,a)$ is the visitation count for the state-action pair $(s,a)$ . Our approach approximates the discrete indicator
+
+Algorithm 1 OPIQ algorithm
+Initialise $Q_{t}(s,a) \gets 0$, $N(s,a,t) \gets 0$, $\forall (s,a,t) \in \mathcal{S} \times \mathcal{A} \times \{1, \dots, H, H+1\}$
+for each episode $k = 1, \ldots, K$ do
+  for each timestep $t = 1, \ldots, H$ do
+    Take action $a_{t} \leftarrow \arg\max_{a} Q_{t}^{+}(s_{t}, a)$
+    Receive $r(s_{t}, a_{t}, t)$ and $s_{t+1}$
+    Increment $N(s_{t}, a_{t}, t)$
+    $Q_{t}(s_{t}, a_{t}) \gets (1 - \eta_{N}) Q_{t}(s_{t}, a_{t}) + \eta_{N} \big(r(s_{t}, a_{t}, t) + b_{N}^{T} + \min\{H, \max_{a'} Q_{t+1}^{+}(s_{t+1}, a')\}\big)$
+  end for
+end for
+
+function $\mathbb{1}$ as $(N(s,a) + 1)^{-M}$ for sufficiently large $M$ . However, since gradient descent cannot completely overwrite the $Q$ -value estimate for a state-action pair after a single update, it is beneficial to have a smaller hyperparameter $M$ that governs how quickly the optimism decays.
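Numerically, the approximation behaves as described (a small illustrative check, not from the paper):

```python
# (N + 1)^(-M) approximates the indicator 1{N < 1}: it equals 1 at
# N = 0 for any M, and for visited pairs decays towards 0 as M grows.
def bonus(n: int, M: float) -> float:
    return (n + 1) ** -M

assert bonus(0, 0.5) == bonus(0, 10.0) == 1.0          # unvisited: always 1
assert bonus(1, 10.0) < bonus(1, 2.0) < bonus(1, 0.5)  # smaller M decays slower
```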
+
+For a worst case analysis we assume all $Q$ -value estimates are pessimistic. In the common scenario where all rewards are nonnegative, the lowest possible return for an episode is zero. If we then zero-initialise our $Q$ -value estimates, as is common for neural networks, we are starting with a pessimistic initialisation. As shown in Figure 2(left), we cannot predict how a neural network will generalise, and thus we cannot predict if the $Q$ -value estimates for unvisited state-action pairs will be optimistic or pessimistic. We thus assume they are pessimistic in order to perform a worst case analysis. However, this is not a requirement: our method works with any initialisation and rewards.
+
+In order to then approximate an optimistic initialisation, the scaling parameter $C$ in equation 1 can be chosen to guarantee unseen $Q^{+}$ -values are overestimated, for example, $C \coloneqq H$ in the undiscounted finite-horizon tabular setting and $C \coloneqq 1 / (1 - \gamma)$ in the discounted episodic setting (assuming 1 is the maximum reward obtainable at each timestep). However, in some environments it may be beneficial to use a smaller parameter $C$ for faster convergence. These $Q^{+}$ -values are then used both during action selection and during bootstrapping. Note that in the finite horizon setting the counts $N$ , and thus $Q^{+}$ , would depend on the timestep $t$ .
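The discounted bound on $C$ follows from a geometric series: with per-step rewards capped at 1, no return can exceed

$$
\sum_{t=0}^{\infty} \gamma^{t} \cdot 1 = \frac{1}{1 - \gamma},
$$

so $C := 1/(1-\gamma)$ ensures $Q^{+}(s,a) \geq Q^{*}(s,a)$ whenever $N(s,a) = 0$.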
+
+Hence, we split the optimistic $Q^{+}$ -values into two parts: a pessimistic $Q$ -value component and an optimistic component based solely on the counts for a state-action pair. This separates our source of optimism from the neural network function approximator, yielding $Q^{+}$ -values that remain high for unvisited state-action pairs, assuming a suitable counting scheme. Figure 2 (right) shows the effects of adding this optimistic component to a network's outputs.
+
+Optimistic $Q^{+}$ -values provide an increased incentive to explore. By using optimistic $Q^{+}$ estimates, especially during action selection and bootstrapping, the agent is incentivised to try and visit novel state-action pairs. Being optimistic during action selection in particular encourages the agent to try novel actions that have not yet been visited. Without an optimistic estimate for novel state-action pairs the agent would have no incentive to try an action it has never taken before at a given state. Being optimistic during bootstrapping ensures the agent is incentivised to return to states in which it has not yet tried every action. This is because the maximum $Q^{+}$ -value will be large due to the optimism bonus. Both of these effects lead to a strong incentive to explore novel state-action pairs.
+
+# 3.2 TABULAR REINFORCEMENT LEARNING
+
+In order to ensure that OPIQ has a strong theoretical foundation, we must ensure it is provably efficient in the tabular domain. We restrict our analysis to the finite horizon tabular setting and only consider building upon UCB-H (Jin et al., 2018) for simplicity. Achieving a better regret bound using UCB-B (Jin et al., 2018) and extending the analysis to the infinite horizon discounted setting (Dong et al., 2019) are steps for future work.
+
+Our algorithm removes the optimistic initialisation of UCB-H, instead using a pessimistic initialisation (all $Q$ -values start at 0). We then use our $Q^{+}$ -values during action selection and bootstrapping. Pseudocode is presented in Algorithm 1.
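As an illustrative sketch, Algorithm 1 collapses to the following on the two-action example of Figure 1 (horizon 1, so there is no bootstrap term; the $1/N$ learning rate and constants are our own simplifications, and the bonus $b_N^T$ is omitted):

```python
import numpy as np

A, C, M = 2, 1.0, 2.0
rewards = np.array([0.1, 1.0])          # left: +0.1, right: +1 (Figure 1)
Q = np.zeros(A)                         # pessimistic (zero) initialisation
N = np.zeros(A)                         # visitation counts

for episode in range(50):
    q_plus = Q + C / (N + 1) ** M       # equation 1
    a = int(np.argmax(q_plus))          # greedy in Q+, not in Q
    N[a] += 1
    Q[a] += (rewards[a] - Q[a]) / N[a]  # 1/N learning rate (illustrative)

# Optimistic action selection forces both actions to be tried at least
# once, so the greedy policy converges to the optimal right action.
assert N.min() > 0 and int(np.argmax(Q)) == 1
```

A purely greedy zero-initialised agent on the same problem would repeat whichever action it happened to pick first.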
+
+Theorem 1. For any $p \in (0,1)$ , with probability at least $1 - p$ the total regret of OPIQ is at most $\mathcal{O}(\sqrt{H^4SAT\log(SAT / p)})$ for $M \geq 1$ and at most $\mathcal{O}(H^{1 + M}SAT^{1 - M} + \sqrt{H^4SAT\log(SAT / p)})$ for $0 < M < 1$ .
+
+The proof is based on that of Theorem 1 from (Jin et al., 2018). Our $Q^{+}$ -values are always greater than or equal to the $Q$ -values that UCB-H would estimate, thus ensuring that our estimates are also greater than or equal to $Q^{*}$ . Our overestimation relative to UCB-H is then governed by the quantity $H / (N(s,a) + 1)^M$ , which when summed over all timesteps does not depend on $T$ for $M > 1$ . As $M \to \infty$ we exactly recover UCB-H, and match the asymptotic performance of UCB-H for $M \geq 1$ . Smaller values of $M$ result in our optimism decaying more slowly, which results in more exploration. The full proof is included in Appendix I.
+
+We also show that OPIQ without optimistic action selection or the count-based intrinsic motivation term $b_{N}^{T}$ is not provably efficient by showing it can incur linear regret with high probability on simple MDPs (see Appendices G and H).
+
+Our primary motivation for considering a tabular algorithm that pessimistically initialises its $Q$ -values is to provide a firm theoretical foundation on which to base a deep RL algorithm, which we describe in the next section.
+
+# 3.3 DEEP REINFORCEMENT LEARNING
+
+For deep RL, we base OPIQ on DQN (Mnih et al., 2015), which uses a deep neural network with parameters $\theta$ as a function approximator $Q_{\theta}$ . During action selection, we use our $Q^{+}$ -values to determine the greedy action:
+
+$$
+a_{t} = \arg\max_{a} \left\{ Q_{\theta}(s_{t}, a) + \frac{C_{\text{action}}}{(N(s_{t}, a) + 1)^{M}} \right\}, \tag{3}
+$$
+
+where $C_{\text{action}}$ is a hyperparameter governing the scale of the optimistic bias during action selection. In practice, we use an $\epsilon$ -greedy policy. After every timestep, we sample a batch of experiences from our experience replay buffer, and use $n$ -step $Q$ -learning (Mnih et al., 2016). We recompute the counts for each relevant state-action pair, to avoid using stale pseudo-rewards. The network is trained by gradient descent on the loss in equation 2 with the target:
+
+$$
+y_{t} := \sum_{i=0}^{n-1} \gamma^{i} \left( r(s_{t+i}, a_{t+i}) + \frac{\beta}{\sqrt{N(s_{t+i}, a_{t+i})}} \right) + \gamma^{n} \max_{a'} \left\{ Q_{\theta^{-}}(s_{t+n}, a') + \frac{C_{\text{bootstrap}}}{(N(s_{t+n}, a') + 1)^{M}} \right\}, \tag{4}
+$$
+
+where $C_{\mathrm{bootstrap}}$ is a hyperparameter that governs the scale of the optimistic bias during bootstrapping.
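A single-transition sketch of equations 3 and 4 (function names are ours; counts are per-action arrays here, and a real implementation operates on batches):

```python
import numpy as np

def opiq_select_action(q_values, counts, C_action, M):
    """Greedy action under Q+ (equation 3); q_values and counts are
    per-action arrays for the current state."""
    return int(np.argmax(q_values + C_action / (counts + 1) ** M))

def opiq_nstep_target(rewards, counts, q_next, next_counts,
                      beta, gamma, C_bootstrap, M):
    """n-step target of equation 4: the intrinsically rewarded n-step
    return plus an optimistically bootstrapped tail."""
    n = len(rewards)
    ret = sum(gamma ** i * (rewards[i] + beta / np.sqrt(counts[i]))
              for i in range(n))
    tail = np.max(q_next + C_bootstrap / (next_counts + 1) ** M)
    return ret + gamma ** n * tail

# A never-tried action (count 0) wins over a well-visited, higher-valued one.
a = opiq_select_action(np.array([1.0, 0.0]), np.array([100, 0]),
                       C_action=10.0, M=2.0)
print(a)  # 1
```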
+
+For our final experiments on Montezuma's Revenge we additionally use the Mixed Monte Carlo (MMC) target (Bellemare et al., 2016; Ostrovski et al., 2017), which mixes the target with the environmental Monte Carlo return for that episode. Further details are included in Appendix D.4.
+
+We use the method of static hashing (Tang et al., 2017) to obtain our pseudocounts on the first 2 of 3 environments we test on. For our experiments on Montezuma's Revenge we count over a downsampled image of the current game frame. More details can be found in Appendix B.
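A minimal counting sketch in the spirit of static hashing (Tang et al., 2017): project the state with a fixed random matrix, binarise by sign, and count the resulting codes. The class and its dimensions are illustrative; OPIQ needs state-action counts, which can be obtained by keying the table on the code together with the action.

```python
import numpy as np
from collections import Counter

class SimHashCounter:
    """Pseudocounts via static hashing: states that hash to the same
    binary code share a count."""
    def __init__(self, state_dim: int, code_bits: int = 32, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.proj = rng.standard_normal((code_bits, state_dim))  # fixed projection
        self.table = Counter()

    def code(self, state) -> tuple:
        # Sign pattern of the random projection gives a binary code.
        return tuple((self.proj @ np.asarray(state) > 0).astype(int))

    def increment(self, state) -> None:
        self.table[self.code(state)] += 1

    def count(self, state) -> int:
        return self.table[self.code(state)]
```

Fewer code bits collapse more states together, trading count accuracy for generalisation across similar states.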
+
+A DQN with pseudocount derived intrinsic reward (DQN + PC) (Bellemare et al., 2016) can be seen as a naive extension of UCB-H to the deep RL setting. However, it does not attempt to ensure optimism in the $Q$ -values used during action selection and bootstrapping, which is a crucial component of UCB-H. Furthermore, even if the $Q$ -values were initialised optimistically at the start of training they would not remain optimistic long enough to drive exploration, due to the use of neural networks. OPIQ, on the other hand, is designed with these limitations of neural networks in mind. By augmenting the neural network's $Q$ -value estimates with optimistic bonuses of the form $\frac{C}{(N(s,a) + 1)^M}$ , OPIQ ensures that the $Q^{+}$ -values used during action selection and bootstrapping are optimistic. We can thus consider OPIQ as a deep version of UCB-H. Our results show that optimism during action selection and bootstrapping is extremely important for ensuring efficient exploration.
+
+# 4 RELATED WORK
+
+Tabular Domain: There is a wealth of literature related to provably efficient exploration in the tabular domain. Popular model-based algorithms such as R-MAX (Brafman and Tennenholtz, 2002),
+
+MBIE (and MBIE-EB) (Strehl and Littman, 2008), UCRL2 (Jaksch et al., 2010) and UCBVI (Azar et al., 2017) are all based on the principle of optimism in the face of uncertainty. Osband and Van Roy (2017) adopt a Bayesian viewpoint and argue that posterior sampling (PSRL) (Strens, 2000) is more practically efficient than approaches that are optimistic in the face of uncertainty, and prove that in Bayesian expectation PSRL matches the performance of any optimistic algorithm up to constant factors. Agrawal and Jia (2017) prove that an optimistic variant of PSRL is provably efficient under a frequentist regret bound.
+
+The only provably efficient model-free algorithms to date are delayed $Q$ -learning (Strehl et al., 2006) and UCB-H (and UCB-B) (Jin et al., 2018). Delayed $Q$ -learning optimistically initialises the $Q$ -values and carefully controls when they are updated. UCB-H and UCB-B also optimistically initialise the $Q$ -values, but additionally utilise a count-based intrinsic motivation term and a special learning rate to achieve a favourable regret bound compared to model-based algorithms. In contrast, OPIQ pessimistically initialises the $Q$ -values. Whilst we base our current analysis on UCB-H, the idea of augmenting pessimistically initialised $Q$ -values can be applied to any model-free algorithm.
+
+Deep RL Setting: A popular approach to improving exploration in deep RL is to utilise intrinsic motivation (Oudeyer and Kaplan, 2009), which computes a quantity to add to the environmental reward. Most relevant to our work is that of Bellemare et al. (2016), which takes inspiration from MBIE-EB (Strehl and Littman, 2008). Bellemare et al. (2016) utilise the number of times a state has been visited to compute the intrinsic reward. They outline a framework for obtaining approximate counts, dubbed pseudocounts, through a learned density model over the state space. Ostrovski et al. (2017) extend the work to utilise a more expressive PixelCNN (van den Oord et al., 2016) as the density model, whereas Fu et al. (2017) train a neural network as a discriminator to also recover a density model. Machado et al. (2018a) instead use the successor representation to obtain generalised counts. Choi et al. (2019) learn a feature space to count that focuses on regions of the state space the agent can control, and Pathak et al. (2017) learn a similar feature space in order to provide the error of a learned model as intrinsic reward. A simpler and more generic approach to approximate counting is static hashing which projects the state into a lower dimensional space before counting (Tang et al., 2017). None of these approaches attempt to augment or modify the $Q$ -values used for action-selection or bootstrapping, and hence do not attempt to ensure optimistic values for novel state-action pairs.
+
+Chen et al. (2017) build upon bootstrapped DQN (Osband et al., 2016) to obtain uncertainty estimates over the $Q$ -values for a given state, in order to act optimistically by choosing the action with the largest UCB. However, they do not utilise optimistic estimates during bootstrapping. Osband et al. (2018) also extend bootstrapped DQN to include a prior by extending RLSVI (Osband et al., 2017) to deep RL. Osband et al. (2017) show that RLSVI achieves provably efficient Bayesian expected regret, which requires a prior distribution over MDPs, whereas OPIQ achieves provably efficient worst-case regret. Bootstrapped DQN with a prior is thus a model-free algorithm that has strong theoretical support in the tabular setting. Empirically, however, its performance on sparse reward tasks is worse than that of DQN with pseudocounts.
+
+Machado et al. (2015) shift and scale the rewards so that a zero-initialisation is optimistic. When applied to neural networks this approach does not result in optimistic $Q$ -values, due to the generalisation of the networks. Bellemare et al. (2016) show empirically that using a pseudocount intrinsic motivation term performs much better on hard exploration tasks.
+
+Choshen et al. (2018) attempt to generalise the notion of a count to include information about the counts of future state-action pairs in a trajectory, which they use to provide bonuses during action selection. Oh and Iyengar (2018) extend delayed $Q$ -learning to utilise these generalised counts and prove the scheme is PAC-MDP. The generalised counts are obtained through $E$ -values which are learnt using SARSA with a constant 0 reward and $E$ -value estimates initialised at 1. When scaling to the deep RL setting, these $E$ -values are estimated using neural networks that cannot maintain their initialisation for unvisited state-action pairs, which is crucial for providing an incentive to explore. By contrast, OPIQ uses a separate source to generate the optimism necessary to explore the environment.
+
+# 5 EXPERIMENTAL SETUP
+
+We compare OPIQ against baselines and ablations on three sparse reward environments. The first is a randomised version of the chain environment proposed by Osband et al. (2016) and used in (Shyam et al., 2019), with a chain of length 100, which we call the Randomised Chain. The second is a two-dimensional maze in which the agent starts in the top left corner (white dot) and is only rewarded upon finding the goal (light grey dot). We use an image of the maze as input and randomise the actions similarly to the chain. The third is Montezuma's Revenge from the Arcade Learning Environment (Bellemare et al., 2013), a notoriously difficult sparse reward environment commonly used as a benchmark to evaluate the performance and scaling of deep RL exploration algorithms.
+
+See Appendix D for further details on the environments, baselines and hyperparameters used.
+
+# 5.1 ABLATIONS AND BASELINES
+
+We compare OPIQ against a variety of DQN-based approaches that use pseudocount intrinsic rewards, the DORA agent (Choshen et al., 2018) (which generates count-like optimism bonuses using a neural network), and two strong exploration baselines:
+
+$\epsilon$ -greedy DQN: a standard DQN that uses an $\epsilon$ -greedy policy to encourage exploration. We anneal $\epsilon$ linearly over a fixed number of timesteps from 1 to 0.01.
+
+DQN + PC: we add an intrinsic reward of $\beta / \sqrt{N(s, a)}$ to the environmental reward based on (Bellemare et al., 2016; Tang et al., 2017).
+
+DQN R-Subtract (+PC): we subtract a constant from all environmental rewards received when training, so that a zero-initialisation is optimistic, as described for a DQN in (Bellemare et al., 2016) and based on Machado et al. (2015).
+
+DQN Bias (+PC): we initialise the bias of the final layer of the DQN to a positive value at the start of training as a simple method for optimistic initialisation with neural networks.
+
+DQN + DORA: we use the generalised counts from (Choshen et al., 2018) as an intrinsic reward.
+
+DQN + DORA OA: we additionally use the generalised counts to provide an optimistic bonus during action selection.
+
+DQN + RND: we add the RND bonus from (Burda et al., 2018) as an intrinsic reward.
+
+BSP: we use Bootstrapped DQN with randomised prior functions (Osband et al., 2018).
+
+In order to better understand the importance of each component of our method, we also evaluate the following ablations:
+
+Optimistic Action Selection (OPIQ w/o OB): we only use our $Q^{+}$ -values during action selection, and use $Q$ during bootstrapping (without Optimistic Bootstrapping). The intrinsic motivation term remains.
+
+Optimistic Action Selection and Bootstrapping (OPIQ w/o PC): we use our $Q^{+}$ -values during action selection and bootstrapping, but do not include an intrinsic motivation term (without Pseudo Counts).
+
+# 6 RESULTS
+
+# 6.1 RANDOMISED CHAIN
+
+We first consider the visually simple domain of the randomised chain and compare the count-based methods. Figure 3 shows the performance of OPIQ compared to the baselines and ablations. OPIQ significantly outperforms the baselines, which lack any explicit mechanism for optimism during action selection. A DQN with pseudocount-derived intrinsic rewards is unable to reliably find the goal state, but setting the final layer's bias to one produces much better performance. For the DQN variant in which a constant is subtracted from all environmental rewards, all of the configurations (including those with pseudocount-derived intrinsic bonuses) were unable to find the goal on the right; instead, the agents quickly learn to latch onto the inferior reward of moving left.
+
+Compared to its ablations, OPIQ is more stable in this task. OPIQ without pseudocounts performs similarly to OPIQ but is more varied across seeds, whereas the lack of optimistic bootstrapping results in worse performance and significantly more variance across seeds.
+
+# 6.2 MAZE
+
+We next consider the harder and more visually complex task of the Maze and compare against all baselines.
+
+Figure 4 shows that only OPIQ is able to find the goal in the sparse reward maze. This indicates that explicitly ensuring optimism during action selection and bootstrapping can have a significant positive impact in sparse reward tasks, and that a naive extension of UCB-H to the deep RL setting (DQN + PC) results in insufficient exploration.
+
+Figure 4 (right) shows that attempting to ensure optimistic $Q$ -values by adjusting the bias of the final layer (DQN Bias + PC), or by subtracting a constant from the reward (DQN R-Subtract + PC) has very little effect.
+
+As expected, DQN + RND performs poorly on this domain compared to the pseudocount-based methods. The visual input does not vary much across the state space, so the RND bonus fails to provide enough intrinsic motivation to ensure efficient exploration. Additionally, DQN + RND has no explicit mechanism for optimism during action selection, and thus Figure 4 (right) shows it explores the environment relatively slowly.
+
+Both DQN + DORA and DQN + DORA OA also perform poorly in this domain, since their source of intrinsic motivation disappears quickly. As noted in Figure 2, neural networks do not maintain their starting initialisations after training, so the intrinsic reward DORA produces rapidly goes to 0 as the network generating its bonuses learns to generalise.
+
+BSP is the only exploration baseline we test that does not add an intrinsic reward to the environmental reward, and thus it performs poorly compared to the other baselines on this environment.
+
+Figure 5 shows that OPIQ and all its ablations manage to find the goal in the maze. OPIQ also explores slightly faster than its ablations (right), which shows the benefits of optimism during both action selection and bootstrapping. In addition, the episodic reward for the ablation without optimistic bootstrapping is noticeably more unstable (Figure 5, left). Interestingly, OPIQ without pseudocounts performs significantly worse than the other ablations. This is surprising, since the theory suggests that the count-based intrinsic motivation is only required when the rewards or transitions of the MDP are stochastic (Jin et al., 2018), which is not the case here. We hypothesise that adding PC-derived intrinsic bonuses to the reward provides an easier learning problem, especially when using $n$ -step $Q$ -learning, which accounts for the performance gap.
+
+However, our results show that the PC-derived intrinsic bonuses are not enough on their own to ensure sufficient exploration. The large difference in performance between DQN + PC and OPIQ w/o OB is important, since they only differ in the use of optimistic action selection. The results in Figures 4 and 5 show that optimism during action selection is extremely important in exploring the environment efficiently. Intuitively, this makes sense, since this provides an incentive for the agent to try actions it has never tried before, which is crucial in exploration.
+
+Figure 6 visualises the values used during action selection for a DQN + PC agent and OPIQ, showing the count-based augmentation provides optimism for relatively novel state-action pairs, driving the agent to explore more of the state-action space.
+
+
+Figure 3: Results for the randomised chain environment. Median across 20 seeds is plotted and the $25\% -75\%$ quartile is shown shaded. Left: OPIQ outperforms the baselines. Right: OPIQ is more stable than its ablations.
+
+
+
+
+Figure 4: Results for the maze environment comparing OPIQ and baselines. Median across 8 seeds is plotted and the $25\% -75\%$ quartile is shown shaded. Left: The episode reward. Right: Number of distinct states visited over training. The total number of states in the environment is shown as a dotted line.
+
+
+
+
+Figure 5: Results for the maze environment comparing OPIQ and ablations. Median across 8 seeds is plotted and the $25\% -75\%$ quartile is shown shaded. Left: The episode reward. Right: Number of distinct states visited over training. The total number of states in the environment is shown as a dotted line.
+
+
+
+
+Figure 6: Values used during action selection for each of the 4 actions. The region in blue indicates states that have already been visited. Other colours denote $Q$ -values between 0 (black) and 10 (white). Left: The $Q$ -values used by DQN with pseudocounts. Right: $Q^{+}$ -values used by OPIQ with $C_{\text{action}} = 100$ .
+
+
+
+# 6.3 MONTEZUMA'S REVENGE
+
+Finally, we consider Montezuma's Revenge, one of the hardest sparse reward games from the ALE (Bellemare et al., 2013). Note that we train for only 12.5 million timesteps (50 million frames), a quarter of the usual training time (50 million timesteps, 200 million frames).
+
+Figure 7 shows that OPIQ significantly outperforms the baselines in terms of the episodic reward and the maximum episodic reward achieved during training. The higher episode reward and much higher maximum episode reward of OPIQ compared to DQN + PC once again demonstrates the importance of optimism during action selection and bootstrapping.
+
+
+Figure 7: Results for Montezuma's Revenge. Median across 4 seeds is plotted and the $25\% -75\%$ quartile is shown shaded. Left: The episode reward. Right: The maximum reward achieved during an episode.
+
+
+
+
+Figure 8: Further Results for Montezuma's Revenge showing the number of rooms visited over training comparing OPIQ and baselines. Median across 4 seeds is plotted and the $25\% -75\%$ quartile is shown shaded.
+
+In this environment BSP performs much better than in the Maze, but achieves significantly lower episodic rewards than OPIQ.
+
+Figure 8 shows the number of distinct rooms visited across the training period. OPIQ reliably explores 12 rooms within the 12.5 million timesteps, significantly more than the other methods, demonstrating its improved exploration in this complex environment.
+
+Our results on this challenging environment show that OPIQ can scale to high dimensional complex environments and continue to provide significant performance improvements over an agent only using pseudocount based intrinsic rewards.
+
+# 7 CONCLUSIONS AND FUTURE WORK
+
+This paper presented OPIQ, a model-free algorithm that does not rely on an optimistic initialisation to ensure efficient exploration. Instead, OPIQ augments the $Q$ -values estimates with a count-based optimism bonus. We showed that this is provably efficient in the tabular setting by modifying UCB-H to use a pessimistic initialisation and our augmented $Q^{+}$ -values for action selection and bootstrapping. Since our method does not rely on a specific initialisation scheme, it easily scales to deep RL when paired with an appropriate counting scheme. Our results showed the benefits of maintaining optimism both during action selection and bootstrapping for exploration on a number of hard sparse reward environments including Montezuma's Revenge. In future work, we aim to extend OPIQ by integrating it with more expressive counting schemes.
+
+# 8 ACKNOWLEDGEMENTS
+
+We would like to thank the entire WhiRL lab for their helpful feedback, in particular Gregory Farquhar and Supratik Paul. We would also like to thank the anonymous reviewers for their constructive comments during the reviewing process. This project has received funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (grant agreement number 637713). It was also supported by an EPSRC grant (EP/M508111/1, EP/N509711/1). The experiments were made possible by a generous equipment grant from NVIDIA and the JP Morgan Chase Faculty Research Award.
+
+# REFERENCES
+
+Shipra Agrawal and Randy Jia. Optimistic posterior sampling for reinforcement learning: worst-case regret bounds. In Advances in Neural Information Processing Systems, pages 1184-1194, 2017.
+Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds for reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, pages 263-272, 2017.
+Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, pages 1471-1479, 2016.
+Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47: 253-279, 2013.
+Ronen I Brafman and Moshe Tennenholtz. R-MAX - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3(Oct):213-231, 2002.
+Yuri Burda, Harrison Edwards, Amos Storkey, and Oleg Klimov. Exploration by random network distillation. arXiv preprint arXiv:1810.12894, 2018.
+Richard Y Chen, John Schulman, Pieter Abbeel, and Szymon Sidor. UCB and infogain exploration via $Q$-ensembles. arXiv preprint arXiv:1706.01502, 2017.
+Jongwook Choi, Yijie Guo, Marcin Moczulski, Junhyuk Oh, Neal Wu, Mohammad Norouzi, and Honglak Lee. Contingency-aware exploration in reinforcement learning. In International Conference on Learning Representations, 2019.
+Leshem Choshen, Lior Fox, and Yonatan Loewenstein. Dora the explorer: Directed outreach reinforcement action-selection. In International Conference on Learning Representations, 2018.
+Kefan Dong, Yuanhao Wang, Xiaoyu Chen, and Liwei Wang. $Q$ -learning with UCB exploration is sample efficient for infinite-horizon MDP. arXiv preprint arXiv:1901.09311, 2019.
+Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O Stanley, and Jeff Clune. Go-Explore: a new approach for hard-exploration problems. arXiv preprint arXiv:1901.10995, 2019.
+Li Fan, Pei Cao, Jussara Almeida, and Andrei Z Broder. Summary cache: a scalable wide-area web cache sharing protocol. IEEE/ACM Transactions on Networking (TON), 8(3):281-293, 2000.
+Justin Fu, John Co-Reyes, and Sergey Levine. EX2: Exploration with exemplar models for deep reinforcement learning. In Advances in Neural Information Processing Systems, pages 2577-2587, 2017.
+Sudhir K Goel and Dennis M Rodriguez. A note on evaluating limits using Riemann sums. Mathematics Magazine, 60(4):225-228, 1987.
+Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In Proceedings of the 36th International Conference on Machine learning, 2019.
+
+Matteo Hessel, Joseph Modayil, Hado Van Hasselt, Tom Schaul, Georg Ostrovski, Will Dabney, Dan Horgan, Bilal Piot, Mohammad Azar, and David Silver. Rainbow: Combining improvements in deep reinforcement learning. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
+Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(Apr):1563-1600, 2010.
+Chi Jin, Zeyuan Allen-Zhu, Sebastian Bubeck, and Michael I Jordan. Is $Q$ -learning provably efficient? In Advances in Neural Information Processing Systems, pages 4863-4873, 2018.
+Long-Ji Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 8(3-4):293-321, 1992.
+Marlos C Machado, Sriram Srinivasan, and Michael H Bowling. Domain-independent optimistic initialization for reinforcement learning. In AAAI Workshop: Learning for General Competency in Video Games, 2015.
+Marlos C Machado, Marc G Bellemare, and Michael Bowling. Count-based exploration with the successor representation. arXiv preprint arXiv:1807.11622, 2018a.
+Marlos C Machado, Marc G Bellemare, Erik Talvitie, Joel Veness, Matthew Hausknecht, and Michael Bowling. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. Journal of Artificial Intelligence Research, 61:523-562, 2018b.
+Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
+Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning, pages 1928-1937, 2016.
+Min-hwan Oh and Garud Iyengar. Directed exploration in PAC model-free reinforcement learning. arXiv preprint arXiv:1808.10552, 2018.
+Ian Osband and Benjamin Van Roy. Why is posterior sampling better than optimism for reinforcement learning? In Proceedings of the 34th International Conference on Machine Learning, pages 2701-2710, 2017.
+Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped DQN. In Advances in Neural Information Processing Systems, pages 4026-4034, 2016.
+Ian Osband, Daniel Russo, Zheng Wen, and Benjamin Van Roy. Deep exploration via randomized value functions. arXiv preprint arXiv:1703.07608, 2017.
+Ian Osband, John Aslanides, and Albin Cassirer. Randomized prior functions for deep reinforcement learning. In Advances in Neural Information Processing Systems, pages 8617-8629, 2018.
+Georg Ostrovski, Marc G Bellemare, Aaron van den Oord, and Rémi Munos. Count-based exploration with neural density models. arXiv preprint arXiv:1703.01310, 2017.
+Pierre-Yves Oudeyer and Frederic Kaplan. What is intrinsic motivation? a typology of computational approaches. Frontiers in neurorobotics, 1:6, 2009.
+Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. In Proceedings of the 34th International Conference on Machine Learning, 2017.
+Pranav Shyam, Wojciech Jaśkowski, and Faustino Gomez. Model-based active exploration. In Proceedings of the 36th International Conference on Machine Learning, 2019.
+Alexander L Strehl and Michael L Littman. An analysis of model-based interval estimation for Markov decision processes. Journal of Computer and System Sciences, 74(8):1309-1331, 2008.
+
+Alexander L Strehl, Lihong Li, Eric Wiewiora, John Langford, and Michael L Littman. PAC model-free reinforcement learning. In Proceedings of the 23rd International Conference on Machine learning, pages 881-888. ACM, 2006.
+Malcolm Strens. A Bayesian framework for reinforcement learning. In Proceedings of the 17th International Conference on Machine Learning, pages 943-950, 2000.
+Adrien Ali Taïga, Aaron Courville, and Marc G Bellemare. Approximate exploration through state abstraction. arXiv preprint arXiv:1808.09819, 2018.
+Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, OpenAI Xi Chen, Yan Duan, John Schulman, Filip DeTurck, and Pieter Abbeel. #Exploration: A study of count-based exploration for deep reinforcement learning. In Advances in Neural Information Processing Systems, pages 2753-2762, 2017.
+Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with pixelCNN decoders. In Advances in Neural Information Processing Systems, pages 4790-4798, 2016.
+
+# A BACKGROUND
+
+# A.1 TABULAR REINFORCEMENT LEARNING
+
+For the tabular setting, we consider a discrete finite-horizon Markov Decision Process (MDP), which can be defined as a tuple $(\mathcal{S},\mathcal{A},\{P_t\},\{R_t\},H,\rho)$, where $\mathcal{S}$ is the finite state space, $\mathcal{A}$ is the finite action space, $P_{t}(\cdot|s,a)$ is the state-transition distribution for timesteps $t = 1,\dots,H$, $R_{t}(\cdot|s,a)$ is the distribution over rewards after taking action $a$ in state $s$, $H$ is the horizon, and $\rho$ is the distribution over starting states. Without loss of generality we assume that rewards lie in $[0,1]$, i.e. $R_{t}(\cdot|s,a)$ is supported on $[0,1]$. We use $S$ and $A$ to denote the number of states and the number of actions, respectively, and $N(s,a,t)$ to denote the number of times a state-action pair $(s,a)$ has been visited at timestep $t$.
+
+Our goal is to find a set of policies $\pi_t: \mathcal{S} \to \mathcal{A}$ , $\pi := \{\pi_t\}$ , that chooses the agent's actions at time $t$ such that the expected sum of future rewards is maximised. To this end we define the $Q$ -value at time $t$ of a given policy $\pi$ as $Q_t^\pi(s, a) := \mathbb{E}\left[r + Q_{t+1}^\pi(s', \pi_{t+1}(s')) \mid r \sim R_t(\cdot|s, a), s' \sim P_t(\cdot|s, a)\right]$ , where $Q_t^\pi(s, a) = 0, \forall t > H$ . The agent interacts with the environment for $K$ episodes, $T := KH$ , yielding a total regret: $\mathrm{Regret}(K) = \sum_{k=1}^{K} \left( \max_{\pi^*} Q_1^{\pi^*}(s_1^k, \pi_1^*(s_1^k)) - Q_1^{\pi^k}(s_1^k, \pi_1^k(s_1^k)) \right)$ . Here $s_1^k$ refers to the starting state and $\pi^k$ to the policy at the beginning of episode $k$ . We are interested in bounding the worst case total regret with probability $1 - p$ , $0 < p < 1$ .
+
+UCB-H (Jin et al., 2018) is an online $Q$-learning algorithm for the finite-horizon setting outlined above, whose worst-case total regret is bounded with probability $1 - p$ by $\mathcal{O}(\sqrt{H^4SAT\log(SAT / p)})$. All $Q$-values for timesteps $t \leq H$ are optimistically initialised at $H$. The learning rate is defined as $\eta_N = \frac{H + 1}{H + N}$, where $N \coloneqq N(s_t, a_t, t)$ is the number of times state-action pair $(s_t, a_t)$ has been observed at step $t$; note that $\eta_1 = 1$ at the first encounter of any state-action pair. The update rule for a transition at step $t$ from state $s_t$ to $s_{t+1}$, after executing action $a_t$ and receiving reward $r_t$, is:
+
+$$
+Q_{t}(s_{t}, a_{t}) \leftarrow (1 - \eta_{N}) Q_{t}(s_{t}, a_{t}) + \eta_{N}\left(r_{t} + b_{N}^{T} + \min\left\{H, \max_{a'} Q_{t+1}(s_{t+1}, a')\right\}\right), \tag{5}
+$$
+
+where $b_{N}^{T} \coloneqq 2\sqrt{\frac{H^{3}\log(SAT / p)}{N}}$ is the count-based intrinsic motivation term.
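+For concreteness, the UCB-H update in equation 5 can be sketched as follows (an illustrative sketch, not the authors' implementation; the tabular dictionaries and argument names are our own):
+
+```python
+import math
+from collections import defaultdict
+
+def ucbh_update(Q, N, s, a, r, s_next, t, H, S, A, T, p, actions):
+    """One UCB-H update (equation 5). Q maps (t, s, a) to a value and is
+    optimistically initialised at H; N maps (t, s, a) to a visit count."""
+    N[(t, s, a)] += 1
+    n = N[(t, s, a)]
+    eta = (H + 1) / (H + n)  # learning rate; eta = 1 on the first visit
+    b = 2 * math.sqrt(H**3 * math.log(S * A * T / p) / n)  # bonus b_N^T
+    v_next = 0.0
+    if t < H:
+        v_next = min(H, max(Q[(t + 1, s_next, a2)] for a2 in actions))
+    Q[(t, s, a)] = (1 - eta) * Q[(t, s, a)] + eta * (r + b + v_next)
+
+# Optimistic initialisation at H, zero counts:
+H = 2
+Q = defaultdict(lambda: float(H))
+N = defaultdict(int)
+```
+
+OPIQ replaces this optimistic initialisation with a pessimistic one and moves the optimism into the $Q^{+}$-values used for action selection and bootstrapping.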
+
+# B COUNTING IN LARGE, COMPLEX STATE SPACES
+
+In deep RL, the primary difficulty for exploration based on count-based intrinsic rewards is obtaining appropriate state-action counts. In this paper we utilise approximate counting schemes (Bellemare et al., 2016; Ostrovski et al., 2017; Tang et al., 2017) in order to cope with continuous and/or high-dimensional state spaces. In particular, for the chain and maze environments we use static hashing (Tang et al., 2017), which projects a state $s$ to a low-dimensional feature vector $\phi(s) = \mathrm{sign}(Af(s))$ , where $f$ flattens the state $s$ into a single dimension of length $D$ ; $A$ is a $k \times D$ matrix whose entries are initialised i.i.d. from a unit Gaussian: $\mathcal{N}(0,1)$ ; and $k$ is a hyperparameter controlling the granularity of counting: higher $k$ leads to more distinguishable states at the expense of generalisation.
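+As a rough sketch (our own illustrative code; the function names and defaults are hypothetical, not from the paper), static hashing can be implemented as:
+
+```python
+import numpy as np
+
+def make_static_hash(D, k=32, seed=0):
+    """Static hashing (Tang et al., 2017): phi(s) = sign(A f(s)), where A is
+    a fixed k x D matrix with entries drawn i.i.d. from N(0, 1)."""
+    A = np.random.default_rng(seed).normal(size=(k, D))
+    def phi(state):
+        f = np.asarray(state, dtype=np.float64).ravel()  # flatten to length D
+        return np.sign(A @ f)
+    return phi
+```
+
+Similar states tend to map to the same code, while larger $k$ makes collisions rarer at the cost of generalisation between nearby states.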
+
+Given the vector $\phi(s)$ , we use a counting bloom filter (Fan et al., 2000) to update and retrieve its counts efficiently. To obtain counts $N(s, a)$ for state-action pairs, we maintain a separate data structure of counts for each action (the same vector $\phi(s)$ is used for all actions). This counting scheme is tabular and hence the counts for sufficiently different states do not interfere with one another. This ensures $Q^{+}$ -values for unseen state-action pairs in equation 1 are large.
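+A minimal counting Bloom filter along these lines might look as follows (an illustrative sketch; the row/slot sizes and the use of Python's built-in `hash` are our own choices, not from the paper):
+
+```python
+import numpy as np
+
+class CountingBloomFilter:
+    """Approximate counter (Fan et al., 2000): each key increments one slot
+    per hash row; the count estimate is the minimum over rows, which can
+    over-count on collisions but never under-counts."""
+    def __init__(self, n_rows=4, n_slots=2**20, seed=0):
+        self.counts = np.zeros((n_rows, n_slots), dtype=np.int64)
+        self.salts = list(range(seed, seed + n_rows))
+        self.n_slots = n_slots
+
+    def _slots(self, key):
+        return [hash((salt, key)) % self.n_slots for salt in self.salts]
+
+    def add(self, key):
+        for row, slot in enumerate(self._slots(key)):
+            self.counts[row, slot] += 1
+
+    def count(self, key):
+        return int(min(self.counts[row, slot]
+                       for row, slot in enumerate(self._slots(key))))
+
+# One filter per action yields N(s, a); phi(s).tobytes() makes a hashable key.
+```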
+
+For our experiments on Montezuma's Revenge we use the same method of downsampling as in (Ecoffet et al., 2019), in which the greyscale state representation is resized from (42x42) to (11x8) and then binned from $\{0, \dots, 255\}$ into 8 categories. We then maintain tabular counts over the new representation.
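+A sketch of this downsampling (illustrative only; we use nearest-neighbour resizing here, since the exact resizing method is an assumption on our part):
+
+```python
+import numpy as np
+
+def downsample(frame):
+    """Coarse state abstraction for counting: resize a 42x42 greyscale frame
+    to 11x8, then bin intensities {0,...,255} into 8 categories."""
+    frame = np.asarray(frame)
+    rows = np.linspace(0, frame.shape[0] - 1, 11).round().astype(int)
+    cols = np.linspace(0, frame.shape[1] - 1, 8).round().astype(int)
+    small = frame[np.ix_(rows, cols)]          # nearest-neighbour resize
+    return (small // 32).astype(np.uint8).tobytes()  # 8 bins of width 32
+```
+
+The returned bytes object can serve directly as a key for tabular counts.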
+
+# C GRANULARITY OF THE COUNTING MODEL
+
+The granularity of the counting scheme is an important modelling consideration. If it is too granular, then it will assign an optimistic bias in regions of the state space where the network should be trusted to generalise. On the other hand, if it is too coarse, then it could fail to provide enough of an optimistic bias in parts of the state space where exploration is still required. Figure 9 shows the difference between two levels of granularity. Taïga et al. (2018) provide a much more detailed analysis of the granularity of the count-based model and its implications for the learned $Q$-values.
+
+
+Figure 9: We consider the counting scheme from Figure 2, but vary the number of bins used. Left: 6 bins are used. Only the data points far from the training data are given an optimistic bias. Right: 50 bins are used. An optimistic bias is given to all data points that are not very close to the training data.
+
+
+
+# D EXPERIMENTAL SETUP
+
+# D.1 ENVIRONMENTS
+
+# D.1.1 RANDOMISED CHAIN
+
+We use a randomised version of the chain environment proposed by Osband et al. (2016) and used by Shyam et al. (2019), with a chain of length 100. The agent starts the episode in State 2 and interacts with the MDP for 109 steps, after which it is reset. The agent has 2 actions that can move it Left or Right. At the beginning of training, the action which takes the agent left or right at each state is randomly picked and then fixed. The agent receives a reward of 0.001 for going Left in State 1, a reward of 1 for going Right in State 100, and no reward otherwise. The optimal policy is thus to pick the action that takes it Right at each timestep. Figure 10 shows the structure of the 100 Chain. Similarly to Osband et al. (2016), we use a thermometer encoding for the state: $\phi(s) := (\mathbb{1}\{x \leq s\})_{x=1}^{100} \in \{0,1\}^{100}$.
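+The thermometer encoding can be written in a couple of lines (illustrative sketch):
+
+```python
+import numpy as np
+
+def thermometer(s, n=100):
+    # phi(s)_x = 1{x <= s} for x = 1, ..., n
+    return (np.arange(1, n + 1) <= s).astype(np.float32)
+```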
+
+
+Figure 10: 100 Chain environment.
+
+# D.1.2 MAZE
+
+A 2-dimensional gridworld maze with a sparse reward, in which the agent can move Up, Down, Left or Right. The agent starts each episode at a fixed location and must traverse the maze to find the goal, which provides $+10$ reward and terminates the episode; all other rewards are 0. The agent interacts with the maze for 250 timesteps before being reset. Empty space is represented by 0, walls by 1, the goal by 2 and the player by 3. The state representation is a greyscale image of the entire grid in which each entry is divided by 3 to lie in $[0,1]$; its shape is (24, 24, 1). Once again, the effect of each action is randomised at each state at the beginning of training. Figure 11 shows the structure of the maze environment.
+
+
+Figure 11: Maze environment.
+
+# D.1.3 MONTEZUMA'S REVENGE
+
+We follow the suggestions in (Machado et al., 2018b) and use the same environmental setup as (Burda et al., 2018). Specifically, we use sticky actions with a probability of $p = 0.25$, a frame skip of 4, and do not show a terminal state on the loss of life.
+
+# D.2 HYPERPARAMETERS AND NETWORK ARCHITECTURES
+
+In all experiments we set $\gamma = 0.99$ , use RMSProp with a learning rate of 0.0005 and scale the gradient norms during training to be at most 5.
+
+# D.2.1 RANDOMISED CHAIN
+
+The network used is an MLP with 2 hidden layers of 256 units and ReLU non-linearities. We use 1 step $Q$-Learning.
+
+Training lasts for $100\mathrm{k}$ timesteps. $\epsilon$ is fixed at 0.01 for all methods except for $\epsilon$ -greedy DQN in which it is linearly decayed from 1 to 0.01 over $\{100,50\mathrm{k},100\mathrm{k}\}$ timesteps. We train on a batch size of 64 after every timestep with a replay buffer of size $10\mathrm{k}$ . The target network is updated every 200 timesteps. The embedding size used for the counts is 32. We set $\beta = 0.1$ for the scale of the count-based intrinsic motivation.
+
+For reward subtraction we consider subtracting $\{0.1, 1, 10\}$ from the reward. For an optimistic initialisation bias, we consider setting the final layer's bias to $\{0.1, 1, 10\}$ . We consider both of the methods with and without count-based intrinsic motivation.
+
+For OPIQ and its ablations we consider: $M \in \{0.1, 0.5, 2, 10\}$ , $C_{\mathrm{action}} \in \{0.1, 1, 10\}$ , $C_{\mathrm{bootstrap}} \in \{0.01, 0.1, 1, 10\}$ .
+
+For all methods we run 20 independent runs across the cross-product of all relevant parameters considered. We then sort them by the median test reward (largest area underneath the line) and report the median, lower and upper quartiles.
+
+The best hyperparameters we found were:
+
+DQN $\epsilon$ -greedy: Decay rate: 100 timesteps.
+
+Optimistic Initialisation Bias: Bias: 1, Pseudocount intrinsic motivation: True.
+
+Reward Subtraction: Constant to subtract: 1, Pseudocount intrinsic motivation: False.
+
+OPIQ: M: 0.5, $C_{\text{action}}$ : 1, $C_{\text{bootstrap}}$ : 1.
+
+OPIQ without Optimistic Bootstrapping: M: 2, $C_{\text{action}}$ : 10.
+
+OPIQ without Pseudocounts: M: 2, $C_{\text{action}}$ : 10, $C_{\text{bootstrap}}$ : 10.
+
+For Figure 13 the best hyperparameters for OPIQ with differing values of $M$ are:
+
+M: 0.1: $C_{\text{action}}$ : 10, $C_{\text{bootstrap}}$ : 1.
+
+M: 0.5: $C_{\text{action}}$ : 1, $C_{\text{bootstrap}}$ : 1.
+
+M: 2: $C_{\text{action}}$ : 10, $C_{\text{bootstrap}}$ : 1.
+
+M: 10: $C_{\text{action}}$ : 10, $C_{\text{bootstrap}}$ : 10.
+
+# D.2.2 MAZE
+
+The network used is the following feedforward network:
+
+(State input: (24,24,1))
+
+$\rightarrow$ (Conv Layer, 3x3 Filter, 16 Channels, Stride 2) $\rightarrow$ ReLU
+$\rightarrow$ (Conv Layer, 3x3 Filter, 16 Channels, Stride 2) $\rightarrow$ ReLU
+$\rightarrow$ Flatten
+$\rightarrow$ (FC Layer, 400 Units) $\rightarrow$ ReLU
+$\rightarrow$ (FC Layer, 200 Units)
+$\rightarrow \mathcal{A} = 4$ outputs.
+
+We use 3 step $Q$ -Learning.
+
+Training lasts for 1mil timesteps. $\epsilon$ is decayed linearly from 1 to 0.01 over 50k timesteps for all methods except for $\epsilon$ -greedy DQN in which it is linearly decayed from 1 to 0.01 over $\{100, 50\mathrm{k}, 100\mathrm{k}\}$ timesteps. We train on a batch of 64 after every timestep with a replay buffer of size 250k. The target network is updated every 1000 timesteps. The embedding dimension for the counts is 128.
+
+For DQN + PC we consider $\beta \in \{0.01, 0.1, 1, 10, 100\}$ . For all other methods we set $\beta = 0.1$ as it performed best.
+
+For reward subtraction we consider subtracting $\{0.1, 1, 10\}$ from the reward. For an optimistic initialisation bias, we consider setting the final layer's bias to $\{0.1, 1, 10\}$ . Both methods utilise a count-based intrinsic motivation.
+
+For OPIQ and its ablations we set $M = 2$ since it worked best in preliminary experiments. We consider: $C_{\mathrm{action}} \in \{0.1, 1, 10, 100\}$ , $C_{\mathrm{bootstrap}} \in \{0.01, 0.1, 1, 10\}$ .
+
+For the RND bonus we use the same architecture as the DQN for both the target and predictor networks, except the output is of size 128 instead of $|\mathcal{A}|$. We scale the squared error by $\beta_{rnd} \in \{0.001, 0.01, 0.1, 1, 10, 100\}$.
+
+For DQN + DORA we use the same architecture for the $E$ -network as the DQN. We add a sigmoid non-linearity to the output and initialise the final layer's weights and bias to 0 as described in (Choshen et al., 2018). We sweep across the scale of the intrinsic reward $\beta_{dora} \in \{\}$ . For DQN + DORA OA we use $\beta_{dora} =$ and sweep across $\beta_{dora\_action} \in \{\}$ .
+
+For BSP we use the following architecture:
+
+Shared conv layers:
+
+(State input: (24,24,1))
+
+$\rightarrow$ (Conv Layer, 3x3 Filter, 16 Channels, Stride 2) $\rightarrow$ ReLU
+$\rightarrow$ (Conv Layer, 3x3 Filter, 16 Channels, Stride 2) $\rightarrow$ ReLU
+$\rightarrow$ Flatten
+
+$Q$ -value Heads:
+
+$\rightarrow$ (FC Layer, 400 Units) $\rightarrow$ ReLU
+$\rightarrow$ (FC Layer, 200 Units)
+$\rightarrow \mathcal{A} = 4$ outputs.
+
+We use $K = 10$ different bootstrapped DQN heads, and sweep over $\beta_{bsp} \in \{0.1, 1, 3, 10, 30, 100\}$ .
+
+For all methods we run 8 independent runs across the cross-product of all relevant parameters considered. We then sort them by the median episodic reward (largest area underneath the line) and report the median, lower and upper quartiles.
+
+The best hyperparameters we found were:
+
+DQN $\epsilon$ -greedy: Decay rate: 100k timesteps.
+
+DQN + PC: $\beta = 0.1$.
+
+Optimistic Initialisation Bias: Bias: 1.
+
+Reward Subtraction: Constant to subtract: 0.1.
+
+DQN + RND: $\beta_{rnd} = 10$.
+
+DQN + DORA: $\beta_{dora} = 0.01$.
+
+DQN + DORA OA: $\beta_{dora} = 0.01$ and $\beta_{dora\_action} = 0.1$.
+
+BSP: $\beta_{bsp} = 100$.
+
+OPIQ: M: 2, $C_{\text{action}}$ : 100, $C_{\text{bootstrap}}$ : 0.01.
+
+OPIQ without Optimistic Bootstrapping: M: 2, $C_{\text{action}}$ : 100.
+
+OPIQ without Pseudocounts: M: 2, $C_{\text{action}}$ : 100, $C_{\text{bootstrap}}$ : 0.1.
+
+# D.2.3 MONTEZUMA'S REVENGE
+
+The network used is the standard DQN used for Atari (Mnih et al., 2015; Bellemare et al., 2016).
+
+We use 3 step $Q$ -Learning.
+
+Training lasts for 12.5mil timesteps (50mil frames in Atari). $\epsilon$ is decayed linearly from 1 to 0.01 over 1mil timesteps. We train on a batch of 32 after every 4th timestep with a replay buffer of size 1mil. The target network is updated every 8000 timesteps.
+
+For all methods we consider $\beta_{mmc} \in \{0.005, 0.01, 0.025\}$ .
+
+For DQN + PC we consider $\beta \in \{0.01, 0.1, 1\}$ .
+
+For OPIQ and its ablations we set $M = 2$ . We consider: $C_{\mathrm{action}} \in \{0.1, 1\}$ , $C_{\mathrm{bootstrap}} \in \{0.01, 0.1\}$ , $\beta \in \{0.01, 0.1\}$ .
+
+For the RND bonus we use the same architectures as in (Burda et al., 2018) (the target network is smaller than the learned predictor network), except we use ReLU non-linearities. The output of both networks is of size 512. We scale the squared error by $\beta_{rnd} \in \{0.001, 0.01, 0.1, 1\}$.
+
+For BSP we use the same architecture as in (Osband et al., 2018).
+
+We use $K = 10$ different bootstrapped DQN heads, and sweep over $\beta_{bsp} \in \{0.1, 1, 10, 100\}$ .
+
+For all methods we run 4 independent runs across the cross-product of all relevant parameters considered. We then sort them by the median maximum episodic reward (largest area underneath the line) and report the median, lower and upper quartiles.
+
+The best hyperparameters we found were:
+
+DQN + PC: $\beta = 0.01$, $\beta_{mmc} = 0.01$.
+
+DQN + RND: $\beta_{rnd} = 0.1$, $\beta_{mmc} = 0.01$.
+
+BSP: $\beta_{bsp} = 0.1$, $\beta_{mmc} = 0.025$.
+
+OPIQ: M: 2, $C_{\text{action}}$: 0.1, $C_{\text{bootstrap}}$: 0.01, $\beta_{mmc}$: 0.01.
+
+OPIQ without Optimistic Bootstrapping: $\mathrm{M} = 2$ , $C_{\text{action}} = 0.1$ , $\beta_{mmc} = 0.005$ .
+
+# D.3 BASELINES TRAINING DETAILS
+
+$\mathbf{DQN} + \mathbf{RND}$ : We do a single gradient descent step on a minibatch of the 32 most recently visited states. We also recompute the intrinsic rewards when sampling minibatches to train the DQN. The intrinsic reward used for a state $s$ , is the squared error between the predictor network and the target network $\beta_{rnd}||\mathrm{predictor}(s) - \mathrm{target}(s)||_2^2$ .
+
+$\mathbf{DQN} + \mathbf{DORA}$ : We train the $E$ -values network using $n$ -step SARSA (same $n$ as the DQN) with $\gamma_{E} = 0.99$ . We maintain a replay buffer of size (batch size * 4) and sample batch size elements to train every timestep. The intrinsic reward we use is $\frac{\beta_{dora}}{\sqrt{-\log E(s, a)}}$ .
+
+$\mathbf{DQN} + \mathbf{DORA}$ OA: We train the DQN + DORA agent described above and additionally augment the $Q$ -values used for action selection with $\frac{\beta_{\text{dora\_action}}}{\sqrt{-\log E(s, a)}}$ .
+
+BSP: We train each Bootstrapped DQN head on all of the data from the replay buffer (as is done in (Osband et al., 2016; 2018)). We normalise the gradients of the shared part of the network by $1 / K$, where $K$ is the number of heads. The output of each head is $Q_{k} + \beta_{bsp}p_{k}$, where $p_k$ is a randomly initialised network (of the same architecture as $Q_{k}$) which is kept fixed throughout training. $\beta_{bsp}$ is a hyperparameter governing the scale of the prior regularisation.
+
+# D.4 MIXED MONTE CARLO RETURN
+
+For our experiments on Montezuma's Revenge we additionally mix the 3 step $Q$-Learning target with the Monte Carlo return of the environmental rewards for the episode.
+
+That is, the 3 step targets $y_{t}$ become:
+
+$$
+y_{mmc} := (1 - \beta_{mmc})\, y_{t} + \beta_{mmc}\left(\sum_{i=0}^{\infty} \gamma^{i} r(s_{t+i}, a_{t+i})\right)
+$$
+
+If the episode has not yet finished, we use 0 for the Monte Carlo return.
+
+Our implementation differs from (Bellemare et al., 2016; Ostrovski et al., 2017) in that we do not use the intrinsic rewards as part of the Monte Carlo return. This is because we recompute the intrinsic rewards whenever we use them as part of the targets for training, and recomputing all the intrinsic rewards for an entire episode (which can be over 1000 timesteps) is computationally prohibitive.
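+The mixed target can be sketched as (illustrative code; the names are our own):
+
+```python
+def discounted_return(rewards, gamma=0.99):
+    # Discounted sum of environmental rewards from timestep t onwards.
+    g = 0.0
+    for r in reversed(rewards):
+        g = r + gamma * g
+    return g
+
+def mixed_mc_target(y_t, rewards_from_t, beta_mmc, gamma=0.99,
+                    episode_done=True):
+    # Blend the n-step target with the Monte Carlo return; use 0 for the
+    # return if the episode has not yet finished.
+    mc = discounted_return(rewards_from_t, gamma) if episode_done else 0.0
+    return (1 - beta_mmc) * y_t + beta_mmc * mc
+```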
+
+# E FURTHER RESULTS
+
+# E.1 RANDOMISED CHAIN
+
+
+Figure 12: The number of distinct states visited over training for the chain environment. The median across 20 seeds is plotted and the $25\% -75\%$ quartile is shown shaded.
+
+
+
+
+Figure 13: Comparing the performance of $M \in \{0.1, 0.5, 2, 10\}$ on the chain environment. The best hyperparameter combination for the differing values of $M$ is shown. The median across 20 seeds is plotted and the $25\% - 75\%$ quartile is shown shaded.
+
+
+
+We can see that OPIQ and its ablations explore the environment much more quickly than the count-based baselines. The ablation without optimistic bootstrapping exhibits significantly more variance than the other ablations, showing the importance of optimism during bootstrapping. On this simple task the ablation without count-based intrinsic motivation performs on par with the full OPIQ, most likely because the simpler nature of the environment makes propagating rewards much easier than in the Maze. The importance of directed exploration is made abundantly clear by the $\epsilon$-greedy baseline, which fails to explore much of the environment.
+
+Figure 13 compares OPIQ with differing values of $M$ . We can clearly see that a small value of 0.1 results in insufficient exploration, due to the over-exploration of already visited state-action pairs. Additionally if $M$ is too large then the rate of exploration suffers due to the decreased optimism. On this task we found that $M = 0.5$ performed best, but on the harder Maze environment we found that $M = 2$ was better in preliminary experiments.
+
+# E.2 MAZE
+
+
+Figure 14: The $Q^{+}$ -values OPIQ used during bootstrapping with $C_{\mathrm{bootstrap}} = 0.01$ .
+
+Figure 14 shows the values used during bootstrapping for OPIQ. These $Q$ -values show optimism near the novel state-action pairs which provides an incentive for the agent to return to this area of the state space.
+
+# E.3 MONTEZUMA'S REVENGE
+
+
+
+
+
+
+Figure 15: Results for Montezuma's Revenge comparing OPIQ and ablation. Median across 4 seeds is plotted and the $25\% -75\%$ quartile is shown shaded.
+
+
+Figure 16: $\frac{1}{(x + 1)^M}$ for various values of $M$, and the indicator function $\mathbb{1}\{x < 1\}$ shown in black. Higher values of $M$ provide a better approximation at integer values of $x$ (shown as crosses).
+
+Figure 15 shows further results on Montezuma's Revenge comparing OPIQ and its ablation without optimistic bootstrapping (OPIQ w/o OB). Similarly to the Chain and Maze results, we can see that OPIQ w/o OB performs similarly to the full OPIQ but has a higher variance across seeds.
+
+# F MOTIVATIONS
+
+Figure 16 compares various values of $M$ with the indicator function $\mathbb{1}\{x < 1\}$ .
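+This comparison is easy to check numerically (illustrative sketch; the function name is our own):
+
+```python
+def optimism_bonus(n, M):
+    # OPIQ's optimism term (up to the scaling constant C): 1 / (n + 1)^M.
+    return 1.0 / (n + 1.0) ** M
+
+# At n = 0 the bonus is 1 for any M, matching the indicator 1{n < 1};
+# for n >= 1 it decays towards 0 faster as M increases.
+```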
+
+# G NECESSITY FOR OPTIMISM DURING ACTION SELECTION
+
+To emphasise the necessity of optimistic $Q$ -value estimates during exploration, we analyse the simple failure case for pessimistically initialised greedy $Q$ -learning provided in the introduction. We use Algorithm 1, but use $Q$ instead of $Q^{+}$ for action selection. We will assume the agent will act greedily with respect to its $Q$ -value estimates and break ties uniformly:
+
+$$
+a_{t} \leftarrow \mathrm{Uniform}\left\{\underset{a}{\arg\max}\, Q_{t}(s_{t}, a)\right\}.
+$$
+
+Consider the single state MDP in Figure 17 with $H = 1$ . We use this MDP to show that with 0.5 probability pessimistically initialised greedy $Q$ -learning never finds the optimal policy.
+
+
+Figure 17: A simple failure case for pessimistically initialised greedy $Q$ -learning. There is 1 state with 2 actions and $H = 1$ . The agent receives 0.1 reward for the left action and 1 for the right action.
+
+The agent receives a reward of $+1$ for selecting the right action and 0.1 otherwise. Therefore the optimal policy is to select the right action. Now consider the first episode: $\forall a, Q_{1}(s,a) = 0$ . Thus, the agent selects an action at random with uniform probability. If it selects the left action, it updates:
+
+$$
+Q_{1}(s, L) = \eta_{1}\left(\underbrace{0.1}_{\text{MDP reward}} + \underbrace{r_{\text{int}}}_{\text{intrinsic reward}} + \underbrace{b}_{\text{bootstrap}}\right) > 0.
+$$
+
+Thus, in the second episode it selects the left action again, since $Q_{1}(s,L) > 0 = Q_{1}(s,R)$. Our estimate of $Q_{1}(s,L)$ never drops below 0.1, and so the right action is never taken. Thus, with probability $\frac{1}{2}$ the agent never selects the correct action, incurring linear regret of $0.9T$.
+
+This counterexample applies for any non-negative intrinsic motivation (including no intrinsic motivation), and is unaffected if we utilise optimistic bootstrapping or not.
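+This failure mode is easy to verify empirically (our own illustrative simulation, not the authors' code): whichever action wins the first uniform tie-break acquires a positive $Q$-value and is then selected greedily forever.
+
+```python
+import random
+
+def greedy_pessimistic_actions(episodes=1000, seed=0):
+    """Simulate the single-state MDP of Figure 17 under pessimistically
+    initialised greedy Q-learning (no optimism during action selection).
+    Returns the set of actions ever taken."""
+    rng = random.Random(seed)
+    Q = {'L': 0.0, 'R': 0.0}
+    reward = {'L': 0.1, 'R': 1.0}
+    taken = set()
+    for _ in range(episodes):
+        best = max(Q.values())
+        a = rng.choice([x for x in Q if Q[x] == best])  # uniform tie-break
+        taken.add(a)
+        Q[a] = reward[a]  # eta_1 = 1 and H = 1, so Q settles immediately
+    return taken
+```
+
+Regardless of the seed, only one action is ever taken after the first tie-break, so with probability $\frac{1}{2}$ the agent commits to the left action forever.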
+
+# H NECESSITY FOR INTRINSIC MOTIVATION BONUS
+
+Despite introducing an extra optimism term with a tunable hyperparameter $M$, OPIQ still requires the intrinsic motivation term $b_{i}^{T}$ to ensure it does not under-explore in stochastic environments.
+
+We will prove that OPIQ without the intrinsic motivation term $b_{i}^{T}$ does not satisfy Theorem 1. Specifically we will show that there exists a 1 state, 2 action MDP with stochastic reward function such that for all $M > 0$ the probability of incurring linear regret is greater than the allowed failure probability $p$ . We choose to use stochastic rewards as opposed to stochastic transitions for a simpler proof.
+
+
+Figure 18: The parametrised MDP.
+
+The MDP we will use is shown in Figure 18, where $\lambda > 1$ and $a \in (0,1)$ s.t. $p < 1 - a$. $H = 1$, $S = 1$ and $A = 2$. The reward function for the left action is stochastic, returning $+1$ reward with probability $a$ and $0$ otherwise. The reward for the right action is always $a / \lambda$.
+
+Let $p > 0$ , the probability with which we are allowed to incur a total regret not bounded by the theorem. OPIQ cannot depend on the value of $\lambda$ or $a$ as they are unknown.
+
+Pick $\lambda$ s.t. $M > \frac{\log\left(\frac{\lambda}{a}\right)}{\log\left(\frac{\log(p)}{\log(1 - a)}\right)}$.
+
+OPIQ will recover the sub-optimal policy of taking the right action if every time we take the left action we receive a 0 reward. This happens because our $Q^{+}$-value estimate for the left action eventually drops below the $Q$-value estimate for the right action, which is $a / \lambda > 0$. The sub-optimal policy incurs linear regret, which is not bounded by the theorem.
+
+Our probability of failure is at least $(1 - a)^R$ , where $R$ is the number of times we select the left action, which decreases as $R$ increases. This corresponds to receiving a 0 reward for every one of the $R$ left transitions we take. Note that $(1 - a)^R$ is an underestimate of the probability of failure.
+
+For the first 2 episodes we will select both actions, and with probability $(1 - a)$ the left action will return 0 reward. Our $Q$ -values will then be: $Q_{1}(s,L) = 0$ , $Q_{1}(s,R) = a / \lambda$ .
+
+It is possible to take the left action as long as
+
+$$
+\frac {1}{(R + 1) ^ {M}} \geq \frac {a}{\lambda},
+$$
+
+since the optimistic bonus for the right action decays to 0.
+
+This then provides a very loose upper bound for $R$ as $(\frac{\lambda}{a})^{1 / M}$ , which then leads to a further underestimation of the probability of failure.
+
+Assume for a contradiction that $(1 - a)^{R} < p$ :
+
+$$
+\begin{array}{l} (1 - a) ^ {R} < p \iff (1 - a) ^ {(\lambda / a) ^ {1 / M}} < p \\ \Longleftrightarrow (\lambda / a) ^ {1 / M} \log (1 - a) < \log (p) \\ \Longleftrightarrow (\lambda / a) ^ {1 / M} > \log (p) / \log (1 - a) \\ \Longleftrightarrow (1 / M) \log (\lambda / a) > \log (\log (p) / \log (1 - a)) \\ \Longleftrightarrow M < \log (\lambda / a) / \log (\log (p) / \log (1 - a)) \\ \end{array}
+$$
+
+(6)
+
+This provides our contradiction, as we chose $\lambda$ such that $M > \log (\lambda /a) / \log (\log (p) / \log (1 - a))$ . We can always pick such a $\lambda$ because $\log (\lambda /a)$ can be made arbitrarily close to $0$ .
+
+So our probability of failure (of which $(1 - a)^R$ is a severe underestimate) is greater than the allowed probability of failure $p$ .
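+As a numerical sanity check of the counterexample, we can plug in illustrative values of $a$ , $p$ , $M$ and $\lambda$ (hypothetical choices satisfying the stated constraints, not values from the paper) and verify that the loose bound on $R$ already yields a failure probability above $p$ :
+
+```python
+import math
+
+# Hypothetical parameter choices satisfying the constraints of the proof:
+# p < 1 - a, lam > 1, and M > log(lam/a) / log(log(p)/log(1-a)).
+a, p, lam, M = 0.5, 0.4, 1.05, 3
+
+threshold = math.log(lam / a) / math.log(math.log(p) / math.log(1 - a))
+assert M > threshold  # the theorem-violation condition on M
+
+# Loose upper bound on the number R of left-action visits:
+# the left action can only be preferred while 1/(R+1)^M >= a/lam.
+R_max = (lam / a) ** (1 / M)
+
+# Underestimate of the failure probability: all R left pulls return 0 reward.
+fail_prob = (1 - a) ** R_max
+print(f"R <= {R_max:.3f}, failure probability >= {fail_prob:.3f} > p = {p}")
+assert fail_prob > p
+```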
+
+# I PROOF OF THEOREM 1
+
+Theorem 1. For any $p \in (0,1)$ , with probability at least $1 - p$ the total regret of $Q^{+}$ is at most $\mathcal{O}\left(\sqrt{H^4SAT\log(SAT / p)}\right)$ for $M \geq 1$ and at most $\mathcal{O}\left(H^{1 + M}SAT^{1 - M} + \sqrt{H^4SAT\log(SAT / p)}\right)$ for $0 < M < 1$ .
+
+OPIQ is heavily based on UCB-H (Jin et al., 2018), and as such the proof very closely mirrors its proof except for a few minor differences. For completeness, we reproduce the entirety of the proof with the minor adjustments required for our scheme.
+
+The proof is concerned with bounding the regret of the algorithm after $K$ episodes. The algorithm we follow takes as input the value of $K$ , and changes the magnitudes of $b_{N}^{T}$ based on it.
+
+We will make use of a corollary to Azuma's inequality multiple times during the proof.
+
+Theorem 2. (Azuma's Inequality). Let $Z_0, \dots, Z_n$ be a martingale sequence of random variables such that $\forall i \exists c_i: |Z_i - Z_{i-1}| < c_i$ almost surely, then:
+
+$$
+P (Z_{n} - Z_{0} \geq t) \leq \exp \left( \frac{-t^{2}}{2 \sum_{i = 1}^{n} c_{i}^{2}} \right)
+$$
+
+Corollary 1. Let $Z_0, \ldots, Z_n$ be a martingale sequence of random variables such that $\forall i \exists c_i: |Z_i - Z_{i-1}| < c_i$ almost surely, then with probability at least $1 - \delta$ :
+
+$$
+\left| Z _ {n} - Z _ {0} \right| \leq \sqrt {2 \left(\sum_ {i = 1} ^ {n} c _ {i} ^ {2}\right) \log \frac {2}{\delta}}
+$$
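+Corollary 1 can be checked empirically on the simplest bounded martingale, a sum of independent $\pm 1$ steps (so $c_i = 1$ ); over many trials the bound should fail at a rate of at most $\delta$ :
+
+```python
+import math
+import random
+
+random.seed(0)
+n, delta, trials = 1000, 0.05, 2000
+
+# Corollary 1 bound with c_i = 1 for every increment.
+bound = math.sqrt(2 * n * math.log(2 / delta))
+
+violations = sum(
+    abs(sum(random.choice((-1, 1)) for _ in range(n))) > bound
+    for _ in range(trials)
+)
+assert violations / trials <= delta
+print(f"empirical failure rate {violations / trials:.4f} <= delta = {delta}")
+```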
+
+Lemma 1. (Jin et al., 2018)
+
+Define $\eta_N^0 = \prod_{j=1}^N (1 - \eta_j), \eta_N^i = \eta_i \prod_{j=i+1}^N (1 - \eta_j)$
+
+The following properties hold for $\eta_N^i$ :
+
+- $\frac{1}{\sqrt{N}} \leq \sum_{i=1}^{N} \frac{\eta_N^i}{\sqrt{i}} \leq \frac{2}{\sqrt{N}}, \forall N \geq 1$
+- $\max_{i = 1,\dots,N}\eta_N^i\leq \frac{2H}{N},\sum_{i = 1}^N (\eta_N^i)^2\leq \frac{2H}{N},\forall N\geq 1$
+- $\sum_{N = i}^{\infty}\eta_N^i = 1 + \frac{1}{H},\forall i\geq 1$
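+These properties can be verified numerically. The sketch below assumes the UCB-H learning rate $\eta_j = \frac{H + 1}{H + j}$ from (Jin et al., 2018), which is not restated in this appendix, and checks the first two properties for a range of $N$ :
+
+```python
+import math
+
+def weights(N, H):
+    """Return [eta_N^0, ..., eta_N^N] for the UCB-H rate eta_j = (H+1)/(H+j)."""
+    eta = [0.0] + [(H + 1) / (H + j) for j in range(1, N + 1)]
+    w = [0.0] * (N + 1)
+    prod = 1.0
+    for i in range(N, 0, -1):      # eta_N^i = eta_i * prod_{j=i+1}^N (1 - eta_j)
+        w[i] = eta[i] * prod
+        prod *= 1 - eta[i]
+    w[0] = prod                    # eta_N^0 = prod_{j=1}^N (1 - eta_j)
+    return w
+
+H = 5
+for N in range(1, 200):
+    w = weights(N, H)
+    assert abs(sum(w) - 1.0) < 1e-9          # eta_N^0 = 0 and weights sum to 1
+    s = sum(w[i] / math.sqrt(i) for i in range(1, N + 1))
+    assert 1 / math.sqrt(N) - 1e-9 <= s <= 2 / math.sqrt(N) + 1e-9
+    assert max(w[1:]) <= 2 * H / N + 1e-9
+    assert sum(x * x for x in w[1:]) <= 2 * H / N + 1e-9
+```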
+
+Lemma 2. Adapted slightly from (Jin et al., 2018)
+
+Define $V_{t}^{k}(s)\coloneqq \min \{H,\max_{a^{\prime}}Q_{t}^{+,k}(s,a^{\prime})\}, \forall s\in S$ .
+
+For notational convenience, we also define $[P_t^k V_{t + 1}](s_t^k,a_t^k)\coloneqq V_{t + 1}(s_{t + 1}^k)$ , where $s_t^k$ is the state encountered at step $t$ of episode $k$ (similarly for $a_{t}^{k}$ ),
+
+and $[P_t V_{t + 1}](s,a)\coloneqq \mathbb{E}_{s'\sim P_t(\cdot |s_t = s,a_t = a)}[V_{t + 1}(s')]$ .
+
+For any $(s,a,t)\in \mathcal{S}\times \mathcal{A}\times [H]$ , episode $k\leq K$ and $N = N(s,a,t)$ . Suppose $(s,a)$ was previously taken at step $t$ of episodes $k_{1},\ldots ,k_{N} < k$ . Then:
+
+$$
+\begin{array}{l} \left(Q _ {t} ^ {+, k} - Q _ {t} ^ {*}\right) (s, a) = - \eta_ {N} ^ {0} Q _ {t} ^ {*} (s, a) \\ + \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} \left[ \left(V _ {t + 1} ^ {k _ {i}} - V _ {t + 1} ^ {*}\right) \left(s _ {t + 1} ^ {k _ {i}}\right) + \left(P _ {t} - P _ {t} ^ {k _ {i}}\right) V _ {t + 1} ^ {*} (s, a) + b _ {i} ^ {T} \right] + \frac {H}{(N + 1) ^ {M}} \\ \end{array}
+$$
+
+Proof. We have the following recursive formula for $Q^{+}$ at episode $k$ and timestep $t$ :
+
+$$
+Q _ {t} ^ {+, k} (s, a) = \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} \left[ r _ {t} (s, a) + V _ {t + 1} ^ {k _ {i}} \left(s _ {t + 1} ^ {k _ {i}}\right) + b _ {i} ^ {T} \right] + \frac {H}{(N + 1) ^ {M}} \tag {7}
+$$
+
+We can produce a similar formula for $Q^{*}$ :
+
+$$
+Q _ {t} ^ {*} (s, a) = \left(r _ {t} + P _ {t} (s, a) V _ {t + 1} ^ {*}\right) (s, a)
+$$
+
+From the Bellman Optimality Equation
+
+$$
+= \eta_ {N} ^ {0} Q _ {t} ^ {*} (s, a) + \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} \left[ r _ {t} (s, a) + P _ {t} V _ {t + 1} ^ {*} (s, a) \right]
+$$
+
+Since $\sum_{i=1}^{N} \eta_{N}^{i} = 1$ and $\eta_{N}^{0} = 0$ for $N \geq 1$ , while $\sum_{i=1}^{N} \eta_{N}^{i} = 0$ and $\eta_{N}^{0} = 1$ for $N = 0$ .
+
+$$
+= \eta_ {N} ^ {0} Q _ {t} ^ {*} (s, a) + \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} \left[ r _ {t} (s, a) + \left(P _ {t} - P _ {t} ^ {k _ {i}}\right) V _ {t + 1} ^ {*} (s, a) + P _ {t} ^ {k _ {i}} V _ {t + 1} ^ {*} (s, a) \right]
+$$
+
+Adding and subtracting $P_{t}^{k_{i}}V_{t + 1}^{*}(s,a)$ inside the summation.
+
+$$
+= \eta_ {N} ^ {0} Q _ {t} ^ {*} (s, a) + \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} \left[ r _ {t} (s, a) + \left(P _ {t} - P _ {t} ^ {k _ {i}}\right) V _ {t + 1} ^ {*} (s, a) + V _ {t + 1} ^ {*} \left(s _ {t + 1} ^ {k _ {i}}\right) \right] \tag {8}
+$$
+
+By definition of $P_{t}^{k}$
+
+Subtracting equation 7 from equation 8 gives the required result.
+
+
+
+Lemma 3. Adapted slightly from (Jin et al., 2018)
+
+Bounding $Q^{+} - Q^{*}$
+
+There exists an absolute constant $c > 0$ such that, for any $\delta \in (0,1)$ , letting $b_N^T = 2\sqrt{\frac{H^3\log(SAT / \delta)}{N}}$ , we have that $\beta_N^T = 2\sum_{i = 1}^{N}\eta_N^ib_i^T \leq 8\sqrt{\frac{H^3\log(SAT / \delta)}{N}}$ and, with probability at least $1 - \delta$ , the following holds simultaneously for all $(s,a,t,k) \in S \times \mathcal{A} \times [H] \times [K]$ :
+
+$$
+0 \leq \left(Q _ {t} ^ {+, k} - Q _ {t} ^ {*}\right) (s, a) \leq \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} \left[ \left(V _ {t + 1} ^ {k _ {i}} - V _ {t + 1} ^ {*}\right) \left(s _ {t + 1} ^ {k _ {i}}\right) \right] + \beta_ {N} ^ {T} + \frac {H}{(N + 1) ^ {M}}
+$$
+
+where $N = N(s,a,t)$ and $k_{1},\ldots ,k_{N} < k$ are the episodes where $(s,a)$ was taken at step $t$
+
+Proof. For each fixed $(s, a, t) \in S \times \mathcal{A} \times [H]$ , let $k_0 = 0$ and
+
+$$
+k _ {i} = \min (\left\{k \in [ K ] \mid k > k _ {i - 1}, \left(s _ {t} ^ {k}, a _ {t} ^ {k}\right) = (s, a) \right\} \cup \left\{K + 1 \right\})
+$$
+
+$k_{i}$ is then the episode at which $(s, a)$ was taken at step $t$ for the $i$ th time, or $k_{i} = K + 1$ if it has been taken fewer than $i$ times.
+
+Then the random variable $k_{i}$ is a stopping time. Let $\mathcal{F}_i$ be the $\sigma$ -field generated by all the random variables until episode $k_{i}$ step $t$ .
+
+Let $\tau \in [K]$
+
+Let $X_{i} \coloneqq \eta_{\tau}^{i}\mathbb{1}[k_{i} \leq K][(P_{t}^{k_{i}} - P_{t})V_{t + 1}^{*}](s,a)$ . Then $Z_{i} \coloneqq \sum_{j = 1}^{i}X_{j}$ is a martingale sequence with respect to the filtration $(\mathcal{F}_i)_{i = 1}^{\tau}$ , with $Z_0 = 0$ , $Z_{n} - Z_{0} = \sum_{i = 1}^{n}X_{i}$ and $Z_{i} - Z_{i - 1} = X_{i}$ . We also have that $|X_{i}| \leq \eta_{\tau}^{i}H$ .
+
+Then by Azuma's Inequality we have that with probability at least $1 - 2\delta/(SAHK)$ :
+
+$$
+\begin{array}{l} \left| \sum_ {i = 1} ^ {\tau} \eta_ {\tau} ^ {i} \mathbb {1} \left[ k _ {i} \leq K \right] \left[ \left(P _ {t} ^ {k _ {i}} - P _ {t}\right) V _ {t + 1} ^ {*} \right] (s, a) \right| \leq \sqrt {2 \left(\sum_ {i = 1} ^ {\tau} \left(\eta_ {\tau} ^ {i} H\right) ^ {2}\right) \log \frac {S A T}{\delta}} \\ = H \sqrt {2 \left(\sum_ {i = 1} ^ {\tau} \left(\eta_ {\tau} ^ {i}\right) ^ {2}\right) \log \frac {S A T}{\delta}} \\ \end{array}
+$$
+
+Then by a Union bound over all $\tau \in [K]$ , we have that with probability at least $1 - 2\delta/(SAH)$ :
+
+$$
+\begin{array}{l} \forall \tau \in [ K ] | \sum_ {i = 1} ^ {\tau} \eta_ {\tau} ^ {i} \mathbb {1} \left[ k _ {i} \leq K \right] \left[ \left(P _ {t} ^ {k _ {i}} - P _ {t}\right) V _ {t + 1} ^ {*} \right] (s, a) | \leq H \sqrt {2 \left(\sum_ {i = 1} ^ {\tau} (\eta_ {\tau} ^ {i}) ^ {2}\right) \log \frac {S A T}{\delta}} \\ \leq 2 \sqrt {\frac {H ^ {3} \log S A T / \delta}{\tau}} \tag {9} \\ \end{array}
+$$
+
+From Lemma 1 we have that $\sum_{i=1}^{\tau} (\eta_\tau^i)^2 \leq \frac{2H}{\tau}$ .
+
+Since inequality equation 9 holds for all fixed $\tau \in [K]$ uniformly, it also holds for the random variable $\tau = N = N^{k}(s,a,t)\leq K$ . Also note that $\mathbb{1}[k_i\leq K] = 1$ for all $i\leq N$ .
+
+We can then additionally apply a union bound over all $s \in S, a \in \mathcal{A}, t \in [H]$ to give:
+
+$$
+\left| \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} \left[ \left(P _ {t} ^ {k _ {i}} - P _ {t}\right) V _ {t + 1} ^ {*} \right] (s, a) \right| \leq 2 \sqrt {\frac {H ^ {3} \log S A T / \delta}{N}} = b _ {N} ^ {T} \tag {10}
+$$
+
+which holds with probability $1 - 2\delta$ for all $(s,a,t,k)\in \mathcal{S}\times \mathcal{A}\times [H]\times [K]$ . We then rescale $\delta$ to $\delta /2$ .
+
+By Lemma 1 we have that $b_N^T = 2\sqrt{H^3 \log(SAT / \delta) / N} \leq \beta_N^T / 2 = \sum_{i=1}^{N} \eta_N^i b_i^T \leq 4\sqrt{H^3 \log(SAT / \delta) / N} = 2b_N^T$ .
+
+From Lemma 2 we have that:
+
+$$
+\begin{array}{l} \left(Q _ {t} ^ {+, k} - Q _ {t} ^ {*}\right) (s, a) = - \eta_ {N} ^ {0} Q _ {t} ^ {*} (s, a) \\ + \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} \left[ \left(V _ {t + 1} ^ {k _ {i}} - V _ {t + 1} ^ {*}\right) \left(s _ {t + 1} ^ {k _ {i}}\right) + \left(P _ {t} - P _ {t} ^ {k _ {i}}\right) V _ {t + 1} ^ {*} (s, a) + b _ {i} ^ {T} \right] + \frac {H}{(N + 1) ^ {M}} \\ = - \eta_ {N} ^ {0} Q _ {t} ^ {*} (s, a) + \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} \left[ \left(V _ {t + 1} ^ {k _ {i}} - V _ {t + 1} ^ {*}\right) \left(s _ {t + 1} ^ {k _ {i}}\right) \right] \\ + \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} \left[ \left(P _ {t} - P _ {t} ^ {k _ {i}}\right) V _ {t + 1} ^ {*} (s, a) \right] + \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} \left[ b _ {i} ^ {T} \right] + \frac {H}{(N + 1) ^ {M}} \\ \end{array}
+$$
+
+rearranging terms
+
+$$
+\leq \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} [ (V _ {t + 1} ^ {k _ {i}} - V _ {t + 1} ^ {*}) (s _ {t + 1} ^ {k _ {i}}) ] + b _ {N} ^ {T} + \beta_ {N} ^ {T} / 2 + \frac {H}{(N + 1) ^ {M}}
+$$
+
+From equation 10, the definition of $\beta_N^T$ , and the non-negativity of $Q^{*}$ .
+
+$$
+\leq \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} [ (V _ {t + 1} ^ {k _ {i}} - V _ {t + 1} ^ {*}) (s _ {t + 1} ^ {k _ {i}}) ] + \beta_ {N} ^ {T} + \frac {H}{(N + 1) ^ {M}}
+$$
+
+Since $b_N^T\leq \beta_N^T /2$
+
+which gives the R.H.S.
+
+Equation 10 tells us that $\sum_{i=1}^{N} \eta_N^i[(P_t - P_t^{k_i})V_{t+1}^*](s, a) \geq -b_N^T$ , along with $b_N^T \leq \beta_N^T / 2 = \sum_{i=1}^{N} \eta_N^i b_i^T$ , which then gives:
+
+$$
+\begin{array}{l} \left(Q _ {t} ^ {+, k} - Q _ {t} ^ {*}\right) (s, a) = - \eta_ {N} ^ {0} Q _ {t} ^ {*} (s, a) \\ + \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} \left[ \left(V _ {t + 1} ^ {k _ {i}} - V _ {t + 1} ^ {*}\right) \left(s _ {t + 1} ^ {k _ {i}}\right) + \left(P _ {t} - P _ {t} ^ {k _ {i}}\right) V _ {t + 1} ^ {*} (s, a) + b _ {i} ^ {T} \right] + \frac {H}{(N + 1) ^ {M}} \\ \geq - \eta_ {N} ^ {0} Q _ {t} ^ {*} (s, a) + \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} \left[ \left(V _ {t + 1} ^ {k _ {i}} - V _ {t + 1} ^ {*}\right) \left(s _ {t + 1} ^ {k _ {i}}\right) \right] + \frac {H}{(N + 1) ^ {M}} \\ \end{array}
+$$
+
+We can then prove the L.H.S. by induction on $t = H, \dots, 1$ . For $t = H$ , $(Q_t^{+,k} - Q_t^*) \geq 0$ : we have $V_{H + 1}^{k_i} = V_{H + 1}^* = 0$ ; for $N = 0$ , $\frac{H}{(0 + 1)^M} = H > Q_H^*$ ; and for $N > 0$ , $\eta_N^0 = 0$ . If we assume the statement holds for $t + 1$ , consider $t$ :
+
+$$
+\begin{array}{l} (Q _ {t} ^ {+, k} - Q _ {t} ^ {*}) (s, a) \geq - \eta_ {N} ^ {0} Q _ {t} ^ {*} (s, a) + \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} [ (V _ {t + 1} ^ {k _ {i}} - V _ {t + 1} ^ {*}) (s _ {t + 1} ^ {k _ {i}}) ] + \frac {H}{(N + 1) ^ {M}} \\ = - \eta_ {N} ^ {0} Q _ {t} ^ {*} (s, a) + \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} \left[ \min \left\{H, \max _ {a ^ {\prime}} Q _ {t + 1} ^ {+, k _ {i}} \left(s _ {t + 1} ^ {k _ {i}}, a ^ {\prime}\right) \right\} - \max _ {a ^ {\prime \prime}} Q _ {t + 1} ^ {*} \left(s _ {t + 1} ^ {k _ {i}}, a ^ {\prime \prime}\right) \right] \\ + \frac {H}{(N + 1) ^ {M}} \\ \geq - \eta_ {N} ^ {0} Q _ {t} ^ {*} (s, a) + \frac {H}{(N + 1) ^ {M}} \\ \end{array}
+$$
+
+If $\min \{H, \max_{a'} Q_{t+1}^{+,k_i}(s_{t+1}^{k_i}, a')\} = \max_{a'} Q_{t+1}^{+,k_i}(s_{t+1}^{k_i}, a')$ , then by our inductive assumption, $(Q_{t+1}^{+,k} - Q_{t+1}^*)(s, a) \geq 0 \implies \max_{a'} Q_{t+1}^{+,k_i}(s, a') \geq \max_{a''} Q_{t+1}^*(s, a'')$ .
+
+If $\min \{H, \max_{a'} Q_{t+1}^{+,k_i}(s_{t+1}^{k_i}, a')\} = H$ , we have that $\max_{a''} Q_{t+1}^*(s_{t+1}^{k_i}, a'') \leq H$ .
+
+This then proves the L.H.S.
+
+
+
+Note on stochastic rewards: we have so far assumed a deterministic reward function for a simpler presentation of the proof. If we allowed a stochastic reward function, the previous lemmas could easily be adapted to accommodate it.
+
+Lemma 2 would give us:
+
+$$
+\begin{array}{l} \left(Q _ {t} ^ {+, k} - Q _ {t} ^ {*}\right) (s, a) = - \eta_ {N} ^ {0} Q _ {t} ^ {*} (s, a) + \sum_ {i = 1} ^ {N} \eta_ {N} ^ {i} \left[ \left(r _ {t} ^ {k _ {i}} - \mathbb {E} _ {r ^ {\prime} \sim R _ {t} (\cdot | s, a)} [ r ^ {\prime} ]\right) \right. \\ + \left(V _ {t + 1} ^ {k _ {i}} - V _ {t + 1} ^ {*}\right) \left(s _ {t + 1} ^ {k _ {i}}\right) + \left(P _ {t} - P _ {t} ^ {k _ {i}}\right) V _ {t + 1} ^ {*} (s, a) + b _ {i} ^ {T} ] + \frac {H}{(N + 1) ^ {M}} \\ \end{array}
+$$
+
+$(r_t^{k_i} - \mathbb{E}_{r'\sim R_t(\cdot |s,a)}[r'])$ can then be bounded in the same way as in Lemma 3. Increasing the constants for $b_N^T, \beta_N^T$ and rescaling of $\delta$ appropriately then completes the necessary changes.
+
+We now prove that our algorithm is efficient, by showing that it has sub-linear regret, following closely the proof of Theorem 1 from (Jin et al., 2018).
+
+Theorem 1. For any $p \in (0,1)$ , with probability at least $1 - p$ the total regret of $Q^{+}$ is at most $\mathcal{O}\left(\sqrt{H^4SAT\log(SAT / p)}\right)$ for $M \geq 1$ and at most $\mathcal{O}\left(H^{1 + M}SAT^{1 - M} + \sqrt{H^4SAT\log(SAT / p)}\right)$ for $0 < M < 1$ .
+
+Proof. Let
+
+$$
+\delta_ {t} ^ {k} := \left(V _ {t} ^ {k} - V _ {t} ^ {\pi_ {k}}\right) \left(s _ {t} ^ {k}\right), \phi_ {t} ^ {k} := \left(V _ {t} ^ {k} - V _ {t} ^ {*}\right) \left(s _ {t} ^ {k}\right)
+$$
+
+Lemma 3 tells us with probability at least $1 - \delta$ that $Q_{t}^{+,k}\geq Q_{t}^{*}$ , which also implies that $V_{t}^{k}\geq V_{t}^{*}$ . We can then upper-bound the regret as follows:
+
+$$
+\operatorname {R e g r e t} (K) = \sum_ {k = 1} ^ {K} \left(V _ {1} ^ {*} - V _ {1} ^ {\pi_ {k}}\right) \left(s _ {1} ^ {k}\right) \leq \sum_ {k = 1} ^ {K} \left(V _ {1} ^ {k} - V _ {1} ^ {\pi_ {k}}\right) \left(s _ {1} ^ {k}\right) = \sum_ {k = 1} ^ {K} \delta_ {1} ^ {k}
+$$
+
+We then aim to bound $\sum_{k=1}^{K} \delta_t^k$ in terms of the next timestep's $\sum_{k=1}^{K} \delta_{t+1}^k$ , which gives us a recursive formula to upper-bound the regret. We will accomplish this by relating $\sum_{k=1}^{K} \delta_t^k$ to $\sum_{k=1}^{K} \phi_t^k$ .
+
+For any fixed $(k,t)\in [K]\times [H]$ , let $N_{k} = N(s_{t}^{k},a_{t}^{k},t)$ , where $(s_t^k,a_t^k)$ was previously taken at step $t$ of episodes $k_{1},\ldots ,k_{N_k} < k$ . We then have:
+
+$$
+\begin{array}{l} \delta_ {t} ^ {k} = \left(V _ {t} ^ {k} - V _ {t} ^ {\pi_ {k}}\right) \left(s _ {t} ^ {k}\right) \leq \left(Q _ {t} ^ {+, k} - Q _ {t} ^ {\pi_ {k}}\right) \left(s _ {t} ^ {k}, a _ {t} ^ {k}\right) \\ = \left(Q _ {t} ^ {+, k} - Q _ {t} ^ {*}\right) \left(s _ {t} ^ {k}, a _ {t} ^ {k}\right) + \left(Q _ {t} ^ {*} - Q _ {t} ^ {\pi_ {k}}\right) \left(s _ {t} ^ {k}, a _ {t} ^ {k}\right) \\ \leq \sum_ {i = 1} ^ {N _ {k}} \eta_ {N _ {k}} ^ {i} \left[ \left(V _ {t + 1} ^ {k _ {i}} - V _ {t + 1} ^ {*}\right) \left(s _ {t + 1} ^ {k _ {i}}\right) \right] \\ + \beta_ {N _ {k}} ^ {T} + \frac {H}{(N _ {k} + 1) ^ {M}} + [ P _ {t} (V _ {t + 1} ^ {*} - V _ {t + 1} ^ {\pi_ {k}}) ] (s _ {t} ^ {k}, a _ {t} ^ {k}) \\ \end{array}
+$$
+
+By Lemma 3 for the first term. Bounding $(Q_{t}^{*} - Q_{t}^{\pi_{k}})$ is achieved through the Bellman Equation giving the final term.
+
+$$
+= \sum_ {i = 1} ^ {N _ {k}} \eta_ {N _ {k}} ^ {i} \phi_ {t + 1} ^ {k _ {i}} + \beta_ {N _ {k}} ^ {T} + \frac {H}{(N _ {k} + 1) ^ {M}} + [ P _ {t} (V _ {t + 1} ^ {*} - V _ {t + 1} ^ {\pi_ {k}}) ] (s _ {t} ^ {k}, a _ {t} ^ {k})
+$$
+
+$$
++ \left[ P _ {t} ^ {k} \left(V _ {t + 1} ^ {*} - V _ {t + 1} ^ {\pi_ {k}}\right) \right] \left(s _ {t} ^ {k}, a _ {t} ^ {k}\right) - \left[ P _ {t} ^ {k} \left(V _ {t + 1} ^ {*} - V _ {t + 1} ^ {\pi_ {k}}\right) \right] \left(s _ {t} ^ {k}, a _ {t} ^ {k}\right)
+$$
+
+Adding and subtracting $[P_t^k (V_{t + 1}^* -V_{t + 1}^{\pi_k})](s_t^k,a_t^k)$ and substituting the definition of $\phi$ .
+
+$$
+\begin{array}{l} = \sum_ {i = 1} ^ {N _ {k}} \eta_ {N _ {k}} ^ {i} \phi_ {t + 1} ^ {k _ {i}} + \beta_ {N _ {k}} ^ {T} + \frac {H}{(N _ {k} + 1) ^ {M}} + [ (P _ {t} - P _ {t} ^ {k}) (V _ {t + 1} ^ {*} - V _ {t + 1} ^ {\pi_ {k}}) ] (s _ {t} ^ {k}, a _ {t} ^ {k}) \\ + \left(V _ {t + 1} ^ {*} - V _ {t + 1} ^ {\pi_ {k}}\right) \left(s _ {t + 1} ^ {k}\right) \\ \end{array}
+$$
+
+Since $[P_t^k (V_{t + 1}^* -V_{t + 1}^{\pi_k})](s_t^k,a_t^k) = (V_{t + 1}^* -V_{t + 1}^{\pi_k})(s_{t + 1}^k)$
+
+$$
+= \sum_ {i = 1} ^ {N _ {k}} \eta_ {N _ {k}} ^ {i} \phi_ {t + 1} ^ {k _ {i}} + \beta_ {N _ {k}} ^ {T} + \frac {H}{(N _ {k} + 1) ^ {M}} + \xi_ {t + 1} ^ {k} + \delta_ {t + 1} ^ {k} - \phi_ {t + 1} ^ {k} \tag {11}
+$$
+
+Letting $\xi_{t + 1}^k \coloneqq [(P_t - P_t^k)(V_{t + 1}^* - V_{t + 1}^{\pi_k})](s_t^k, a_t^k)$ , and noting that $(V_{t + 1}^* - V_{t + 1}^{\pi_k})(s_{t + 1}^k) = \delta_{t + 1}^k - \phi_{t + 1}^k$ .
+
+We must now bound the summation $\sum_{k=1}^{K} \delta_t^k$ , which we will do by considering each term of equation 11 separately.
+
+$$
+\sum_ {k = 1} ^ {K} \frac {H}{(N _ {k} + 1) ^ {M}}:
+$$
+
+$$
+\sum_ {k = 1} ^ {K} \frac {H}{(N _ {k} + 1) ^ {M}} \geq \sum_ {k = 1} ^ {K} \frac {H}{(K + 1) ^ {M}} = \frac {H K}{(K + 1) ^ {M}} \Rightarrow \sum_ {k = 1} ^ {K} \frac {H}{(N _ {k} + 1) ^ {M}} \in \Omega (H K ^ {1 - M})
+$$
+
+The first inequality follows since $N_k \leq K$ . This shows that we require $M > 0$ in order to guarantee sublinear regret in $K$ .
+
+$$
+\begin{array}{l} \sum_ {k = 1} ^ {K} \frac {H}{(N _ {k} + 1) ^ {M}} = \sum_ {k = 1} ^ {K} \sum_ {n = 0} ^ {\infty} \mathbb {1} \{N _ {k} = n \} \frac {H}{(n + 1) ^ {M}} = \sum_ {n = 0} ^ {\infty} \sum_ {k = 1} ^ {K} \mathbb {1} \{N _ {k} = n \} \frac {H}{(n + 1) ^ {M}} \\ \leq S A \sum_ {n = 0} ^ {K - 1} \frac {H}{(n + 1) ^ {M}} = S A H \sum_ {n = 1} ^ {K} \frac {1}{n ^ {M}} \\ \end{array}
+$$
+
+The first equality follows by rewriting $\frac{H}{(N_k + 1)^M}$ as $\sum_{n = 0}^{\infty}\mathbb{1}\{N_k = n\} \frac{H}{(n + 1)^M}$ ; exactly one of the indicator functions is true for episode $k$ , giving the required value. The inequality is a crude upper bound that suffices for the proof: for a given $n$ , $\mathbb{1}\{N_k = n\}$ can be true at most $SA$ times across all of training (and $N_k \leq K - 1$ ), with each occurrence contributing $\frac{H}{(n + 1)^M}$ to the final sum.
+
+For $M > 1$ , the sum $\sum_{n=1}^{K} \frac{1}{n^M}$ is bounded, which means that $\sum_{k=1}^{K} \frac{H}{(N_k + 1)^M} \in \mathcal{O}(SAH)$ .
+
+For $M = 1$ , $\sum_{n=1}^{K} \frac{1}{n} \leq 1 + \log(K) \Rightarrow \sum_{k=1}^{K} \frac{H}{(N_k + 1)^M} \in \mathcal{O}(SAH\log(K))$
+
+For $0 < M < 1$ , $\sum_{k=1}^{K} \frac{H}{(N_k + 1)^M} \in \mathcal{O}(SAHK^{1 - M})$ , since $\sum_{n=1}^{K} \frac{1}{n^M} \sim \frac{K^{1 - M}}{1 - M}$ (Goel and Rodriguez, 1987)
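+The three regimes of $\sum_{n=1}^{K} n^{-M}$ can be confirmed numerically against their standard upper bounds (e.g. $\zeta(2) = \pi^2/6$ for $M = 2$ ):
+
+```python
+import math
+
+K = 100_000
+cases = {
+    2.0: math.pi ** 2 / 6,    # M > 1: the series converges (to zeta(2) here)
+    1.0: 1 + math.log(K),     # M = 1: harmonic sum grows as log K
+    0.5: 2 * math.sqrt(K),    # 0 < M < 1: grows as K^(1-M) / (1-M)
+}
+for M, bound in cases.items():
+    s = sum(n ** -M for n in range(1, K + 1))
+    assert s <= bound, (M, s, bound)
+print("all three regimes within their bounds")
+```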
+
+$$
+\sum_ {k = 1} ^ {K} \sum_ {i = 1} ^ {N _ {k}} \eta_ {N _ {k}} ^ {i} \phi_ {t + 1} ^ {k _ {i}}:
+$$
+
+$$
+\sum_ {k = 1} ^ {K} \sum_ {i = 1} ^ {N _ {k}} \eta_ {N _ {k}} ^ {i} \phi_ {t + 1} ^ {k _ {i}} \leq \sum_ {k ^ {\prime} = 1} ^ {K} \phi_ {t + 1} ^ {k ^ {\prime}} \sum_ {n = N _ {k ^ {\prime}} + 1} ^ {\infty} \eta_ {n} ^ {N _ {k ^ {\prime}} + 1} \leq \left(1 + \frac {1}{H}\right) \sum_ {k = 1} ^ {K} \phi_ {t + 1} ^ {k}
+$$
+
+The first inequality is achieved by rearranging how we take the sum.
+
+Consider a $k' \in [K]$ . The term $\phi_{t+1}^{k'}$ will appear in the summation for $k > k'$ provided that $(s_t^k, a_t^k) = (s_t^{k'}, a_t^{k'})$ , i.e. we took the same state-action pair at step $t$ of episodes $k$ and $k'$ .
+
+On the first such occasion, the associated learning rate will be $\eta_{N_{k^{\prime}} + 1}^{N_{k^{\prime}} + 1}$ , on the second $\eta_{N_{k^{\prime}} + 2}^{N_{k^{\prime}} + 1}$ .
+
+We can then consider all possible counts up to $\infty$ to achieve an inequality. By considering all $k' \in [K]$ we will not miss any terms of the original summation.
+
+The second inequality follows by application of Lemma 1 on the sum involving the learning rate.
+
+$$
+\sum_ {k = 1} ^ {K} \beta_ {N _ {k}} ^ {T}:
+$$
+
+For all $t\in [H]$ , we have that:
+
+$$
+\begin{array}{l} \sum_ {k = 1} ^ {K} \beta_ {N _ {k}} ^ {T} \leq \sum_ {k = 1} ^ {K} 8 \sqrt {\frac {H ^ {3} \log (S A T / \delta)}{N _ {k}}} = \sum_ {s, a} \sum_ {n = 1} ^ {N _ {K} (s, a, t)} 8 \sqrt {\frac {H ^ {3} \log (S A T / \delta)}{n}} \\ \leq \sum_ {s, a} \sum_ {n = 1} ^ {K / (S A)} 8 \sqrt {\frac {H ^ {3} \log (S A T / \delta)}{n}} \\ = \sum_ {s, a} 8 \sqrt {H ^ {3} \log (S A T / \delta)} \sum_ {n = 1} ^ {K / (S A)} \frac {1}{\sqrt {n}} \\ \leq 16 S A \sqrt {H ^ {3} \log (S A T / \delta)} \sqrt {\frac {K}{S A}} \\ \in \mathcal {O} (\sqrt {H ^ {3} S A K \log (S A T / \delta)}) \\ = \mathcal {O} \left(\sqrt {H ^ {2} S A T \log (S A T / \delta)}\right) \\ \end{array}
+$$
+
+We transform the sum over the episode's counts $N_{k}$ into a sum over all state-action pairs, summing over all of their respective counts.
+
+The second inequality is due to setting $N_{K}(s,a,t) = K / SA$ , which maximises the quantity: since $1 / \sqrt{n}$ shrinks as $n$ increases, the sum is largest when the visits are spread across the state-action space as evenly as possible.
+
+Then we use the result that $\sum_{i=1}^{n} \frac{1}{\sqrt{i}} \leq 2\sqrt{n}$ with $n = K / SA$ , and note that we are summing $SA$ identical terms, one per state-action pair.
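+The pigeonhole step can be illustrated directly: for a fixed total visit budget, an even allocation of counts across state-action pairs dominates a skewed one (the numbers below are arbitrary):
+
+```python
+import math
+
+def visit_sum(counts):
+    """Sum over state-action pairs of sum_{n=1}^{count} 1/sqrt(n)."""
+    return sum(sum(1 / math.sqrt(n) for n in range(1, c + 1)) for c in counts)
+
+SA, K = 4, 1000
+even = visit_sum([K // SA] * SA)       # every pair visited K/SA times
+skewed = visit_sum([K - 3, 1, 1, 1])   # almost all visits on a single pair
+assert even >= skewed
+print(f"even allocation: {even:.1f} >= skewed allocation: {skewed:.1f}")
+```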
+
+$$
+\sum_ {t = 1} ^ {H} \sum_ {k = 1} ^ {K} \xi_ {t + 1} ^ {k}
+$$
+
+Following a similar argument as in Lemma 3, we can apply Azuma's inequality and a union bound over all $(s,a,t)\in S\times A\times [H]$ . Let $X_{k} = [(P_{t} - P_{t}^{k})(V_{t + 1}^{*} - V_{t + 1}^{\pi_{k}})](s_{t}^{k},a_{t}^{k})$ . Then $Z_{i}\coloneqq \sum_{k = 1}^{i}X_{k}$ is a martingale sequence, and we have that $|Z_{i} - Z_{i - 1}| = |X_{i}|\leq H$ and $Z_{0} = 0$ . Then with probability at least $1 - 2\delta$ we have that:
+
+$$
+\left| \sum_ {t = 1} ^ {H} \sum_ {k = 1} ^ {K} \xi_ {t + 1} ^ {k} \right| \leq H \left| \sum_ {k = 1} ^ {K} \xi_ {t + 1} ^ {k} \right| \leq H \sqrt {2 K H ^ {2} \log (S A T / \delta)} \in \mathcal {O} \left(H ^ {3 / 2} \sqrt {T \log (S A T / \delta)}\right)
+$$
+
+Finally, we can utilise these intermediate results when bounding the regret via Equation equation 11:
+
+$$
+\begin{array}{l} \sum_ {k = 1} ^ {K} \delta_ {t} ^ {k} \leq \sum_ {k = 1} ^ {K} \left( \sum_ {i = 1} ^ {N _ {k}} \eta_ {N _ {k}} ^ {i} \phi_ {t + 1} ^ {k _ {i}} + \beta_ {N _ {k}} ^ {T} + \frac {H}{(N _ {k} + 1) ^ {M}} + \xi_ {t + 1} ^ {k} + \delta_ {t + 1} ^ {k} - \phi_ {t + 1} ^ {k} \right) \\ \leq \left(1 + \frac {1}{H}\right) \sum_ {k = 1} ^ {K} \phi_ {t + 1} ^ {k} - \sum_ {k = 1} ^ {K} \phi_ {t + 1} ^ {k} + \sum_ {k = 1} ^ {K} \delta_ {t + 1} ^ {k} + \sum_ {k = 1} ^ {K} \left(\beta_ {N _ {k}} ^ {T} + \xi_ {t + 1} ^ {k}\right) + \sum_ {k = 1} ^ {K} \frac {H}{\left(N _ {k} + 1\right) ^ {M}} \\ \end{array}
+$$
+
+From our intermediate result on $\phi$
+
+$$
+\leq \left(1 + \frac {1}{H}\right) \sum_ {k = 1} ^ {K} \delta_ {t + 1} ^ {k} + \sum_ {k = 1} ^ {K} \left(\beta_ {N _ {k}} ^ {T} + \xi_ {t + 1} ^ {k} + \frac {H}{\left(N _ {k} + 1\right) ^ {M}}\right)
+$$
+
+Since $V^{*}\geq V^{\pi_{k}}\Rightarrow \delta_{t + 1}^{k}\geq \phi_{t + 1}^{k}$
+
+Letting $A_{t} \coloneqq \sum_{k=1}^{K} (\beta_{N_{k}}^{T} + \xi_{t+1}^{k} + \frac{H}{(N_{k} + 1)^{M}})$ , we have that $\sum_{k=1}^{K} \delta_{H}^{k} \leq A_{H}$ since $\delta_{H+1} = 0$ . By induction on $t = H, \ldots, 1$ we then have that $\sum_{k=1}^{K} \delta_{t}^{k} \leq \sum_{j=0}^{H-t} (1 + \frac{1}{H})^{j} A_{H-j} \Rightarrow \sum_{k=1}^{K} \delta_{1}^{k} \in \mathcal{O}\left(\sum_{t=1}^{H} \sum_{k=1}^{K} (\beta_{N_{k}}^{T} + \xi_{t+1}^{k} + \frac{H}{(N_{k} + 1)^{M}})\right)$ since $(1 + 1/H)^{H} \leq e$ .
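+The induction absorbs a factor of at most $(1 + 1/H)^H$ , and the fact $(1 + 1/H)^H \leq e$ is easy to confirm numerically:
+
+```python
+import math
+
+# (1 + 1/H)^H increases monotonically towards e, so it never exceeds e.
+for H in range(1, 1000):
+    assert (1 + 1 / H) ** H <= math.e
+print("(1 + 1/H)^H <= e for H = 1, ..., 999")
+```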
+
+Using our intermediate results for the relevant quantities then gives us with probability at least $1 - 2\delta$
+
+$$
+\begin{array}{l} \sum_ {k = 1} ^ {K} \delta_ {1} ^ {k} \in \mathcal {O} \left(\sqrt {H ^ {4} S A T \log (S A T / \delta)} + H ^ {3 / 2} \sqrt {T \log (S A T / \delta)}\right) + \mathcal {O} \left(\sum_ {t = 1} ^ {H} \sum_ {k = 1} ^ {K} \frac {H}{\left(N _ {k} + 1\right) ^ {M}}\right) \\ = \mathcal {O} \left(\sqrt {H ^ {4} S A T \log (S A T / \delta)}\right) + \mathcal {O} \left(\sum_ {t = 1} ^ {H} \sum_ {k = 1} ^ {K} \frac {H}{(N _ {k} + 1) ^ {M}}\right) \\ \end{array}
+$$
+
+We then consider the 3 cases of $M$ :
+
+For $M > 1$ we get $\mathcal{O}(\sqrt{H^4SAT\log(SAT / \delta)} + H^2 SA)$ .
+
+For $M = 1$ we get $\mathcal{O}(\sqrt{H^4SAT\log(SAT / \delta)} + H^2 SA\log (K))$ .
+
+And for $0 < M < 1$ we have $\mathcal{O}(\sqrt{H^4SAT\log(SAT / \delta)} + H^{1 + M}SAT^{1 - M})$ .
+
+$T \geq \sqrt{H^4SAT\log(SAT / \delta)} \Rightarrow \sqrt{H^4SAT\log(SAT / \delta)} \geq H^2SA\log(K)$ , which means we can remove the $H^2SA\log(K)$ or $H^2SA$ term from the upper bound. If $T \leq \sqrt{H^4SAT\log(SAT / \delta)}$ , then that is also a sufficient upper bound, since the regret cannot exceed $HK = T$ .
+
+This gives us the required result after rescaling $\delta$ to $\delta /2$ .
\ No newline at end of file
diff --git a/optimisticexplorationevenwithapessimisticinitialisation/images.zip b/optimisticexplorationevenwithapessimisticinitialisation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..39691952666272acb48c584c9686e70b14991d64
--- /dev/null
+++ b/optimisticexplorationevenwithapessimisticinitialisation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a0ec211722ee20af8e0e2ee45aecd7624b2215956e690dca85a036d890150f27
+size 1019396
diff --git a/optimisticexplorationevenwithapessimisticinitialisation/layout.json b/optimisticexplorationevenwithapessimisticinitialisation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..87f70fc6e8d24cbeda82b46ef66079117bb3cd39
--- /dev/null
+++ b/optimisticexplorationevenwithapessimisticinitialisation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:08df249b35e812ca686cd351cda6cd4201376368fed88d8b42b261cd0add1e52
+size 1298141
diff --git a/optiondiscoveryusingdeepskillchaining/170c68d5-2152-4f6f-950f-c5a56ee9ea38_content_list.json b/optiondiscoveryusingdeepskillchaining/170c68d5-2152-4f6f-950f-c5a56ee9ea38_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e7bb29570ec6a49cdbc9f10f797132ba795f6f7b
--- /dev/null
+++ b/optiondiscoveryusingdeepskillchaining/170c68d5-2152-4f6f-950f-c5a56ee9ea38_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:61923a08f15ca1b3b3b3d55b7f00d9730692726b926acbf94e8aed3cbbfcfc3e
+size 112697
diff --git a/optiondiscoveryusingdeepskillchaining/170c68d5-2152-4f6f-950f-c5a56ee9ea38_model.json b/optiondiscoveryusingdeepskillchaining/170c68d5-2152-4f6f-950f-c5a56ee9ea38_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..4abfe0f86ef22fb5cf1eaf8bbb649eae16d4e47f
--- /dev/null
+++ b/optiondiscoveryusingdeepskillchaining/170c68d5-2152-4f6f-950f-c5a56ee9ea38_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d8f4799dc6598d0a4b88a15555b1129ebb1c66deb5f0e3cedcbc5862e0d76e3
+size 137990
diff --git a/optiondiscoveryusingdeepskillchaining/170c68d5-2152-4f6f-950f-c5a56ee9ea38_origin.pdf b/optiondiscoveryusingdeepskillchaining/170c68d5-2152-4f6f-950f-c5a56ee9ea38_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..82ac97c46b8a83e02768198a53c7fdd1277364b6
--- /dev/null
+++ b/optiondiscoveryusingdeepskillchaining/170c68d5-2152-4f6f-950f-c5a56ee9ea38_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fe6081934eb1719f6feeeebae988d1718cfa712c4504ff0351621a367b3dee26
+size 3759938
diff --git a/optiondiscoveryusingdeepskillchaining/full.md b/optiondiscoveryusingdeepskillchaining/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..063f30e5bf4e746dab45ca88b8453529fd7fd937
--- /dev/null
+++ b/optiondiscoveryusingdeepskillchaining/full.md
@@ -0,0 +1,479 @@
+# OPTION DISCOVERY USING DEEP SKILL CHAINING
+
+# Akhil Bagaria
+
+Department of Computer Science
+
+Brown University
+
+Providence, RI, USA
+
+akhil_bagaria@brown.edu
+
+# George Konidaris
+
+Department of Computer Science
+
+Brown University
+
+Providence, RI, USA
+
+gdk@brown.edu
+
+# ABSTRACT
+
+Autonomously discovering temporally extended actions, or skills, is a longstanding goal of hierarchical reinforcement learning. We propose a new algorithm that combines skill chaining with deep neural networks to autonomously discover skills in high-dimensional, continuous domains. The resulting algorithm, deep skill chaining, constructs skills with the property that executing one enables the agent to execute another. We demonstrate that deep skill chaining significantly outperforms both non-hierarchical agents and other state-of-the-art skill discovery techniques in challenging continuous control tasks. $^{1}$
+
+# 1 INTRODUCTION
+
+Hierarchical reinforcement learning (Barto & Mahadevan, 2003) is a promising approach for solving long-horizon sequential decision making problems. Hierarchical methods lower the decision making burden on the agent through the use of problem specific action abstractions (Konidaris, 2019). While the use of temporally extended actions, or options (Sutton et al., 1999), has been shown to accelerate learning (McGovern & Sutton, 1998), there remains the question of skill discovery: how can agents autonomously construct useful skills via interaction with the environment? While a large body of work has sought to answer this question in small discrete domains, skill discovery in high-dimensional continuous spaces remains an open problem.
+
+An early approach to skill discovery in continuous-state environments was skill chaining (Konidaris & Barto, 2009b), where an agent constructs a sequence of options that target a salient event in the MDP (for example, the goal state). The skills are constructed so that successful execution of each option in the chain allows the agent to execute another option, which brings it closer still to its eventual goal. While skill chaining was capable of discovering skills in continuous state spaces, it could only be applied to relatively low-dimensional state-spaces with discrete actions.
+
+We introduce a new algorithm that combines the core insights of skill chaining with recent advances in using non-linear function approximation in reinforcement learning. The new algorithm, deep skill chaining, scales to high-dimensional problems with continuous state and action spaces. Through a series of experiments on five challenging domains in the MuJoCo physics simulator (Todorov et al., 2012), we show that deep skill chaining can solve tasks that otherwise cannot be solved by non-hierarchical agents in a reasonable amount of time. Furthermore, the new algorithm outperforms state-of-the-art deep skill discovery algorithms (Bacon et al., 2017; Levy et al., 2019) in these tasks.
+
+# 2 BACKGROUND AND RELATED WORK
+
+Sequential decision making problems can be formalized as Markov Decision Processes (MDPs). We consider goal-oriented episodic MDPs, where $S$ denotes the state space, $A$ is the action space, $R$ is the reward function, $\mathcal{T}$ is the transition function, $\gamma$ is the discount factor and $g \in S$ is the terminating goal state (Sutton & Barto, 2018). Unlike goal-conditioned algorithms (Sutton et al., 2011; Schaul et al., 2015), we do not require that $g$ be known; instead we assume access to an indicator function $\mathbb{1}_g: S \to \{0,1\}$ which the agent can query to determine whether it has reached the MDP's goal.
+
+One way to learn a policy in an MDP is to first learn an action-value function. The action-value function $Q^{\pi}(s_t, a_t)$ is defined as the expected sum of discounted future rewards if the agent takes action $a_t$ from $s_t$ and then follows policy $\pi$ thereafter. For the greedy policy, it satisfies the recursion $Q^{\pi}(s_t, a_t) = \mathbb{E}_{\pi}[r_t + \gamma \max_{a_{t+1}} Q^{\pi}(s_{t+1}, a_{t+1})]$.
+
+Q-learning (Watkins & Dayan, 1992) is a commonly used off-policy algorithm that uses the action-value function for control through a greedy policy $\pi(s_t) = \arg \max_{a_t} Q(s_t, a_t)$ . Inspired by recent success in scaling Q-learning to high-dimensional spaces (Mnih et al., 2015; Van Hasselt et al., 2016; Lillicrap et al., 2015; Tesauro, 1994), we learn the action-value function $Q_\phi^\pi(s_t, a_t)$ using non-linear function approximators parameterized by $\phi$ , by minimizing the loss $L(\phi) = \mathbb{E}_\pi[(Q_\phi(s_t, a_t) - y_t)^2]$ where the Q-learning target $y_t$ is given by the following equation (Van Hasselt et al., 2016):
+
+$$
+y_t = r_t + \gamma Q_{\phi'}\left(s_{t+1}, \underset{a_{t+1}}{\arg\max}\, Q_{\phi}(s_{t+1}, a_{t+1})\right). \tag{1}
+$$
+
+Deep Q-Learning (DQN) (Mnih et al., 2015) casts minimizing $L(\phi)$ as a standard regression problem by using target networks (parameterized by $\phi'$ ) and experience replay (Lin, 1993).
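As a concrete illustration, the double-DQN target in Equation 1 can be sketched as follows (a minimal example; `q_online` and `q_target` are hypothetical callables standing in for the networks parameterized by $\phi$ and $\phi'$):

```python
import numpy as np

def ddqn_target(r, s_next, q_online, q_target, gamma=0.99):
    """Double-DQN target (Eq. 1): the online network (phi) selects the
    greedy next action; the target network (phi') evaluates it."""
    a_star = np.argmax(q_online(s_next))          # arg max_a Q_phi(s', a)
    return r + gamma * q_target(s_next)[a_star]   # r + gamma * Q_phi'(s', a*)

# Toy check with hand-coded Q-functions over three discrete actions.
q_online = lambda s: np.array([1.0, 5.0, 2.0])    # greedy action is index 1
q_target = lambda s: np.array([0.5, 3.0, 9.0])    # evaluated at index 1 -> 3.0
y = ddqn_target(r=1.0, s_next=None, q_online=q_online, q_target=q_target, gamma=0.9)
```

Decoupling action selection (via $\phi$) from evaluation (via $\phi'$) is what mitigates the overestimation bias of the vanilla Q-learning target.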
+
+# 2.1 THE OPTIONS FRAMEWORK
+
+The options framework (Sutton et al., 1999) models skills as options. An option $o$ consists of three components: (a) its initiation condition, $\mathcal{I}_o(s)$ , which determines whether $o$ can be executed in state $s$ , (b) its termination condition, $\beta_o(s)$ , which determines whether option execution must terminate in state $s$ and (c) its closed-loop control policy, $\pi_o(s)$ , which maps state $s$ to a low level action $a \in A$ . Augmenting the set of available actions with options results in a Semi-Markov Decision Process (SMDP) (Sutton et al., 1999) where the next state depends on the current state, action and time.
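The three option components can be captured in a small data structure; the following sketch (field names are our own, the framework itself prescribes only the three components) makes the interface explicit:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Option:
    """An option per Sutton et al. (1999): initiation condition I_o,
    termination condition beta_o and closed-loop policy pi_o."""
    init_fn: Callable   # I_o(s): can the option be executed in state s?
    term_fn: Callable   # beta_o(s): must execution terminate in state s?
    policy: Callable    # pi_o(s): maps state s to a low-level action a in A

# A 1-D toy option: executable on the negative half-line, drives the agent
# rightward, and terminates near the origin.
o = Option(init_fn=lambda s: s < 0,
           term_fn=lambda s: abs(s) < 0.1,
           policy=lambda s: 1.0)
```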
+
+# 2.2 SKILL DISCOVERY ALGORITHMS
+
+Skill discovery has been studied extensively in small discrete domains (McGovern & Sutton, 1998; Simsek & Barto, 2004; Simsek et al., 2005; Bakker & Schmidhuber, 2004; Schmidhuber, 1991; Pickett & Barto, 2002; Dietterich, 2000). Recently however, there has been a significant body of work aimed at discovering skills in continuous spaces.
+
+Option-critic methods: Option-Critic (Bacon et al., 2017) uses an end-to-end gradient-based algorithm to learn options in high-dimensional continuous spaces. Option-Critic was a substantial step forward in skill discovery and led to a family of related methods (Klissarov et al., 2017; Tiwari & Thomas, 2019; Riemer et al., 2018; Liu et al., 2017; Jain et al., 2018). Proximal Policy Option Critic (PPOC) (Klissarov et al., 2017) extends Option-Critic to continuous action spaces and is the version of Option-Critic that we compare against in this paper. Our method bypasses two fundamental shortcomings of the Option-Critic framework: (a) unlike Option-Critic, we explicitly learn initiation sets of options and thus do not assume that all options are executable from everywhere, and (b) we do not treat the number of skills required to solve a task as a fixed and costly hyperparameter. Instead, our algorithm flexibly discovers as many skills as it needs to solve the given problem.
+
+Feudal methods: An alternative to the options framework is Feudal RL (Dayan & Hinton, 1993), which creates a hierarchy in which managers learn to assign subgoals to workers; workers take a subgoal state as input and learn to reach it. Feudal Networks (FuN) (Vezhnevets et al., 2017) used neural networks to scale the Feudal RL framework to high-dimensional continuous spaces; it was extended and outperformed by HIRO (Nachum et al., 2018) in a series of control tasks in the MuJoCo simulator. More recently, Hierarchical Actor-Critic (HAC) (Levy et al., 2019) outperformed HIRO in a similar suite of continuous control problems. While HIRO relies on having a dense "distance-to-goal" based reward function to train both levels of its feudal hierarchy, HAC's use of Hindsight Experience Replay (HER) (Andrychowicz et al., 2017) allows it to work in the more general sparse-reward setting. Given its strong performance in continuous control problems and its ability to learn effectively in sparse-reward settings, we compare against HAC as a representative feudal method.
+
+Learning backward from the goal: The idea of sequencing locally applicable controllers is well established in robotics and control theory in the form of pre-image backchaining (Kaelbling & Lozano-Pérez, 2017) and LQR-Trees (Tedrake, 2009). Such methods either require individually engineered control loops or a model of the system dynamics. Our work fits in the model-free RL setting and thus requires neither. More recently, reverse curriculum learning (Florensa et al., 2017) also learns backward from the goal; however, it defines a curriculum of start states to learn a single policy, rather than learning skills. Relay Networks (Kumar et al., 2018) segment the value function backward from the goal using a thresholding scheme, which makes their method reliant on accurate estimation of the value function. By contrast, our algorithm is agnostic to errors in value estimation, which are unavoidable when using function approximation in high-dimensional spaces.
+
+Planning with learned skills: Options have been shown to empirically speed up planning in several domains (Silver & Ciosek, 2012; Jinnai et al., 2019; James et al., 2018; Francis & Ram, 1993; Konidaris, 2016; Sharma et al., 2019). However, Konidaris et al. (2018) show that for resulting plans to be provably feasible, skills must be executable sequentially. While they assume that such skills are given, we show that they can be autonomously discovered in high-dimensional spaces.
+
+# 3 DEEP SKILL CHAINING
+
+Deep skill chaining (DSC) is based on the intuition that it is easier to solve a long-horizon task from states in the local neighborhood of the goal. This intuition informs the first step of the algorithm: create an option that initiates near the goal and reliably takes the agent to the goal. Once such an option is learned, we create another option whose goal is to take the agent to a state from which it can successfully execute the first option. Skills are chained backward in this fashion until the start state of the MDP lies inside the initiation set of some option. The inductive bias of creating sequentially executable skills guarantees that as long as the agent successfully executes each skill in its chain, it will solve the original task. More formally, skill chaining amounts to learning options such that the termination condition $\beta_{o_i}(s_t)$ of an option $o_i$ is the initiation condition $\mathcal{I}_{o_{i-1}}(s_t)$ of the option that precedes it in its chain.
+
+Our algorithm proceeds as follows: at time $t$ , the policy over options $\pi_{\mathcal{O}}: s_t \in S \to o \in \mathcal{O}$ determines which option to execute (Section 3.2). Control is then handed over to the selected option $o_i$ 's internal policy $\pi_{o_i}: s \in S \to a_t \in \mathbb{R}^{|A|}$ . $\pi_{o_i}$ outputs joint torques until it either reaches its goal $(\beta_{o_i} := \mathcal{I}_{o_{i-1}})$ or times out at its predetermined budget $T$ (Section 3.1). At this point, $\pi_{\mathcal{O}}$ chooses another option to execute. If at any point the agent reaches the goal state of the MDP or the initiation condition of a previously learned option, it creates a new option to target such a salient event. The machinery for learning the initiation condition of this new option is described in Section 3.3. We now detail the components of our architecture and how they are learned. Readers may also refer to Figures 4 & 7 and the pseudo-code in Appendix A.5 to gain greater intuition about our algorithm.
+
+# 3.1 INTRA-OPTION POLICY
+
+Each option $o$ maintains its own policy $\pi_o: s \to a_t \in \mathbb{R}^{|A|}$, parameterized by its own neural networks $\theta_o$. To train $\pi_o(s; \theta_o)$, we must define $o$'s internal reward function. In sparse-reward problems, $o$ is given a subgoal reward when it triggers $\beta_o$; otherwise it is given a step penalty. In the dense-reward setting, we compute the distance to the parent option's initiation set and use that to define $o$'s internal reward function. We can then treat learning the intra-option policy $\pi_o$ as a standard RL problem and use an off-the-shelf algorithm to learn it. Since in this work we solve tasks with continuous action spaces, we use Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015) to learn option policies over real-valued actions.
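The internal reward function described above might be sketched as follows (the reward magnitudes and the `dist_to_parent_init` callable are illustrative assumptions, not values prescribed by the paper):

```python
def intra_option_reward(s_next, term_fn, subgoal_reward=0.0, step_penalty=-1.0,
                        dist_to_parent_init=None):
    """Internal reward used to train an option's DDPG policy. Sparse setting:
    subgoal reward on triggering beta_o, step penalty otherwise. Dense setting
    (when dist_to_parent_init is supplied): negative distance to the parent
    option's initiation set."""
    if term_fn(s_next):
        return subgoal_reward
    if dist_to_parent_init is not None:    # dense-reward variant
        return -dist_to_parent_init(s_next)
    return step_penalty                    # sparse-reward variant

# Example: an option whose subgoal is reaching s >= 1.
reached = lambda s: s >= 1.0
```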
+
+# 3.2 POLICY OVER OPTIONS
+
+Initially, the policy over options $(\pi_{\mathcal{O}})$ only possesses one option that operates over a single time step $(T = 1)$. We call this option the global option $(o_G)$ since its initiation condition is true everywhere in the state space and its termination condition is true only at the goal state of the MDP (i.e., $\mathcal{I}_{o_G}(s) = 1 \; \forall s \in S$ and $\beta_{o_G} = \mathbb{1}_g$). Using $o_G$, $\pi_{\mathcal{O}}$ can select primitive actions. At first the agent continually calls upon $o_G$, which uses its internal option policy $\pi_{o_G}$ to output exactly one primitive action. Once $o_G$ triggers the MDP's goal state $N$ times, DSC creates its first temporally extended option, the goal option $(o_g)$, whose termination condition is also set to be the goal state of the MDP, i.e., $\beta_{o_g} = \mathbb{1}_g$.
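The bootstrapping logic above (count goal triggers by $o_G$, then spawn the goal option $o_g$) can be sketched as follows; the class and method names are our own, not from the paper:

```python
class GoalOptionSpawner:
    """Tracks how often the global option o_G triggers the MDP's goal.
    After n_required successes (the option's gestation period), DSC creates
    its first temporally extended option o_g with beta_{o_g} = 1_g."""
    def __init__(self, goal_indicator, n_required=5):
        self.goal_indicator = goal_indicator   # the indicator function 1_g
        self.n_required = n_required
        self.successes = 0
        self.spawned = False

    def observe(self, s):
        """Returns True exactly once: when the caller should create o_g."""
        if self.goal_indicator(s):
            self.successes += 1
        if not self.spawned and self.successes >= self.n_required:
            self.spawned = True
            return True
        return False
```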
+
+As the agent discovers new skills, it adds them to its option repertoire and relies on $\pi_{\mathcal{O}}$ to determine which option (including $o_G$) it must execute at each state. Unlike $o_G$, learned options will be temporally extended, i.e., they will operate over $T > 1$ time steps. If in state $s_t$ the agent chooses to execute option $o_i$, then $o_i$ will execute its own closed-loop control policy for $\tau$ steps, until either its termination condition is met ($\tau < T$) or it has timed out at $\tau = T$ time steps. At this point, control is handed back to $\pi_{\mathcal{O}}$, which must now choose a new option at state $s_{t + \tau}$.
+
+Option selection: To select an option in state $s_t$ , $\pi_{\mathcal{O}}$ first constructs a set of admissible options given by Equation 2. $\pi_{\mathcal{O}}$ then chooses the admissible option that maximizes its option-value function, as shown in Equation 3. Since the agent must choose from a discrete set of options at any time, we learn its option-value function using Deep Q-learning (DQN) (Mnih et al., 2015).
+
+$$
+\mathcal{O}'(s_t) = \left\{o_i \mid \mathcal{I}_{o_i}(s_t) = 1 \wedge \beta_{o_i}(s_t) = 0, \; \forall o_i \in \mathcal{O}\right\} \tag{2}
+$$
+
+$$
+o_t = \underset{o_i \in \mathcal{O}'(s_t)}{\arg\max}\, Q_{\phi}(s_t, o_i). \tag{3}
+$$
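Equations 2 and 3 amount to a masked argmax over the option-value estimates; a minimal sketch (the `SimpleNamespace` options below are toy stand-ins for learned options):

```python
from types import SimpleNamespace

def select_option(s, options, q_values):
    """Eqs. 2-3: restrict to admissible options (initiation condition holds,
    termination condition does not), then pick the admissible option that
    maximizes the DQN's option-value estimate q_values[i]."""
    admissible = [i for i, o in enumerate(options)
                  if o.init_fn(s) and not o.term_fn(s)]   # Eq. 2
    return max(admissible, key=lambda i: q_values[i])     # Eq. 3

# Toy 1-D example: a global option (always admissible) and a learned option
# that initiates for s > 0 and terminates for s > 5.
opts = [
    SimpleNamespace(init_fn=lambda s: True,  term_fn=lambda s: False),
    SimpleNamespace(init_fn=lambda s: s > 0, term_fn=lambda s: s > 5),
]
q = [0.2, 0.9]
```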
+
+Learning the option-value function: Given an SMDP transition $(s_t, o_t, r_{t:t+\tau}, s_{t+\tau})$ , we update the value of taking option $o_t$ in state $s_t$ according to SMDP Q-learning update (Bradtke & Duff, 1995). Since the agent learns Q-values for different state-option pairs, it may choose to ignore learned options in favor of primitive actions in certain parts of the state-space (in the interest of maximizing its expected future sum of discounted rewards). The Q-value target for learning the weights $\phi$ of the DQN is given by:
+
+$$
+y_t = \sum_{t'=t}^{t+\tau-1} \gamma^{t'-t} r_{t'} + \gamma^{\tau} Q_{\phi'}\left(s_{t+\tau}, \underset{o' \in \mathcal{O}'(s_{t+\tau})}{\arg\max}\, Q_{\phi}(s_{t+\tau}, o')\right). \tag{4}
+$$
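A sketch of computing this SMDP target from a transition $(s_t, o_t, r_{t:t+\tau}, s_{t+\tau})$; the Q-functions are hypothetical callables returning per-option value estimates:

```python
def smdp_target(rewards, s_next, q_online, q_target, admissible, gamma=0.99):
    """SMDP Q-learning target (Eq. 4) for an option that ran tau steps,
    collecting rewards = [r_t, ..., r_{t+tau-1}]: the discounted reward sum
    plus gamma^tau times the (double-DQN style) value of the best admissible
    option o' in s_{t+tau}."""
    tau = len(rewards)
    discounted_sum = sum(gamma ** k * r for k, r in enumerate(rewards))
    o_star = max(admissible, key=lambda o: q_online(s_next)[o])
    return discounted_sum + gamma ** tau * q_target(s_next)[o_star]

# Two-step option execution with step penalties of -1 and gamma = 0.5.
q_online = lambda s: [1.0, 2.0]   # online net picks option index 1
q_target = lambda s: [0.0, 4.0]   # target net evaluates it at 4.0
y = smdp_target([-1.0, -1.0], None, q_online, q_target, admissible=[0, 1], gamma=0.5)
```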
+
+Adding new options to the policy over options: Equations 2, 3 and 4 show how we can learn the option-value function and use it for selecting options. However, we must still incrementally add new skills to the network during the agent's lifetime. After the agent has learned a new option $o$ 's initiation set classifier $\mathcal{I}_o$ (we will discuss how this happens in Section 3.3), it performs the following steps before it can add $o$ to its option repertoire:
+
+- To initialize $o$ 's internal policy $\pi_{o}$ , the parameters of its DDPG $(\theta_{o})$ are set to the parameters of the global agent's DDPG $(\theta_{o_{G}})$ . Subsequently, their neural networks are trained independently. This provides a good starting point for optimizing $\pi_{o}$ , while allowing it to learn sub-problem specific abstractions.
+- To begin predicting Q-values for $o$, we add a new output node to the final layer of the DQN parameterizing $\pi_{\mathcal{O}}$.
+- We must assign appropriate initial values to $Q_{\phi}(s, o)$ . We follow Konidaris & Barto (2009b) and collect all the transitions that triggered $\beta_{o}$ and use the max over these Q-values to optimistically initialize the new output node of our DQN. This is done by setting the bias of this new node, which ensures that the Q-value predictions corresponding to the other options remain unchanged.
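The optimistic-initialization step can be illustrated for the final linear layer of the DQN. Zeroing the new node's incoming weights and setting only its bias to the optimistic value leaves all existing option predictions unchanged (a minimal sketch; the real output layer belongs to a neural-network module):

```python
import numpy as np

def add_option_head(W, b, q_init):
    """Grow the DQN's final linear layer (outputs = W @ x + b) by one output
    node for a newly created option. Zero incoming weights plus a bias of
    q_init make the new node predict q_init everywhere, leaving the Q-value
    predictions of all existing options untouched. q_init is the optimistic
    initial value (the max Q over transitions that triggered beta_o)."""
    W_new = np.vstack([W, np.zeros((1, W.shape[1]))])
    b_new = np.append(b, q_init)
    return W_new, b_new

# Two existing options, two features; add a third option head with q_init = 5.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([0.1, 0.2])
x = np.array([2.0, 3.0])
W2, b2 = add_option_head(W, b, q_init=5.0)
out = W2 @ x + b2
```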
+
+# 3.3 INITIATION SET CLASSIFIER
+
+Central to the idea of learning skills is the ability to learn the set of states from which they can be executed. First, we must learn the initiation set classifier for $o_g$ , the option used to trigger the MDP's goal state. While acting in the environment, the agent's global DDPG will trigger the goal state $N$ times (also referred to as the gestation period of the option by Konidaris & Barto (2009b) and Niekum & Barto (2011)). We collect these $N$ successful trajectories, segment the last $K$ states from each trajectory and learn a one-class classifier around the segmented states. Once initialized, it may be necessary to refine the option's initiation set based on its policy. We do so by executing the option and collecting data to train a two-class classifier. States from which option execution was successful are labeled as positive examples. States from which option execution timed out are labeled as negative examples. We continue this process of refining the option's initiation set classifier for a fixed number of episodes, which we call the initiation period of the option.
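A simplified stand-in for this two-stage procedure is sketched below. We substitute an axis-aligned bounding box for the one-class classifier and a crude negative-example check for the two-class refinement; the paper's actual classifier choices are implementation details not reproduced here:

```python
import numpy as np

class InitiationClassifier:
    """Stand-in for an option's initiation set classifier. Gestation: fit a
    one-class region (here, an axis-aligned bounding box) around the last K
    states of N successful trajectories. Initiation period: refine with
    negative examples, i.e. states from which option execution timed out."""
    def __init__(self, last_k=10):
        self.last_k = last_k
        self.lo = self.hi = None
        self.negatives = []

    def fit_one_class(self, successful_trajectories):
        # Segment the last K states from each trajectory and bound them.
        tails = np.vstack([np.asarray(traj)[-self.last_k:]
                           for traj in successful_trajectories])
        self.lo, self.hi = tails.min(axis=0), tails.max(axis=0)

    def add_negative(self, s):
        # A state from which option execution timed out.
        self.negatives.append(np.asarray(s, dtype=float))

    def __call__(self, s):
        s = np.asarray(s, dtype=float)
        inside = bool(np.all(s >= self.lo) and np.all(s <= self.hi))
        failed_here = any(np.linalg.norm(s - n) < 1e-6 for n in self.negatives)
        return inside and not failed_here
```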
+
+At the end of the initiation period, we fix the option's initiation set classifier and add it to the list of salient events in the MDP. We then construct a new option whose termination condition is the initiation classifier of the option we just learned. We continue adding to our chain of options in this fashion until a learned initiation set classifier contains the start state of the MDP.
+
+# 3.4 GENERALIZING TO SKILL TREES
+
+Our discussion so far has been focused on learning skill chains that extend from the goal to the start state of the MDP. However, such a chain is not sufficient if the agent has multiple start states or if we want the agent to learn multiple ways of solving the same problem. To permit such behavior, our algorithm can be used to learn skills that organize more generally in the form of trees (Konidaris & Barto, 2009b; Konidaris et al., 2012). This generalization requires some additional care while learning initiation set classifiers, the details of which can be found in Section A.1 of the Appendix. To demonstrate our ability to construct such skill trees (and their usefulness), we consider a maze navigation task, E-Maze, with distinct start states in Section 4.
+
+# 3.5 OPTIMALITY OF DISCOVERED SOLUTIONS
+
+Each option $o$ 's internal policy $\pi_o$ is given a subgoal reward only when it triggers its termination condition $\beta_{o}$ . As a result, $\pi_o$ is trained to find the optimal trajectory for entering its own goal region. Naively executing learned skills would thus yield a recursively optimal solution to the MDP (Barto & Mahadevan, 2003). However, since the policy over options $\pi_{\mathcal{O}}$ does not see subgoal rewards and is trained using extrinsic rewards only, it can combine learned skills and primitive actions to discover a flat optimal solution $\pi^{*}$ to the MDP (Barto & Mahadevan, 2003). Indeed, our algorithm allows $\pi_{\mathcal{O}}$ to employ discovered skills to quickly and reliably find feasible paths to the goal, which over time can be refined into optimal solutions. It is worth noting that our ability to recover $\pi^{*}$ in the limit is in contrast to feudal methods such as HAC (Levy et al., 2019) in which higher levels of the hierarchy are rewarded for choosing feasible subgoals, not optimal ones.
+
+To summarize, our algorithm proceeds as follows: (1) Collect trajectories that trigger new option $o_k$ 's termination condition $\beta_{o_k}$ . (2) Train $o_k$ 's option policy $\pi_{o_k}$ . (3) Learn $o_k$ 's initiation set classifier $\mathcal{I}_{o_k}$ . (4) Add $o_k$ to the agent's option repertoire. (5) Create a new option $o_{k+1}$ such that $\beta_{o_{k+1}} = \mathcal{I}_{o_k}$ . (6) Train policy over options $\pi_{\mathcal{O}}$ . Steps 1, 3, 4 and 5 continue until the MDP's start state is inside some option's initiation set. Continue steps 2 and 6 indefinitely.
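The backward-chaining loop (steps 1 to 5) can be sketched abstractly. Here `learn_initiation` stands in for the data collection, policy training and classifier fitting of steps 1-3, and the toy 1-D example assumes each option can reliably reach its target from up to 3 units away (a hypothetical "reach", chosen only for illustration):

```python
def build_chain(start_state, goal_indicator, learn_initiation):
    """Backward chaining: create options whose termination condition is the
    previous option's initiation condition, until the start state lies
    inside some option's initiation set."""
    chain = []
    term_fn = goal_indicator                              # beta_{o_0} = 1_g
    while not any(o["init"](start_state) for o in chain):
        init_fn = learn_initiation(term_fn)               # steps (1)-(3)
        chain.append({"init": init_fn, "term": term_fn})  # step (4)
        term_fn = init_fn                                 # step (5)
    return chain

# Toy 1-D instance: the goal is x >= 10 and each learned option is assumed
# to reach its target from up to 3 units away, so initiation sets extend
# the chain leftward by 3 units per option.
targets = [10.0]
def learn_initiation(term_fn):          # term_fn unused in this toy model
    lo = targets[-1] - 3.0
    targets.append(lo)
    return lambda s, lo=lo: s >= lo

chain = build_chain(start_state=0.0, goal_indicator=lambda s: s >= 10.0,
                    learn_initiation=learn_initiation)
```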
+
+# 4 EXPERIMENTS
+
+We test our algorithm in five tasks that exhibit a strong hierarchical structure: (1) Point-Maze (Duan et al., 2016), (2) Four Rooms with Lock and Key, (3) Reacher (Brockman et al., 2016), (4) Point E-Maze and (5) Ant-Maze (Duan et al., 2016; Brockman et al., 2016). Since tasks 1, 3 and 5 appear frequently in the literature, details of their setup can be found in Appendix A.3.
+
+Four Rooms with Lock and Key: In this task, a point agent (Duan et al., 2016) is placed in the Four Rooms environment (Sutton et al., 1999). It must pick up the key (blue sphere in the top-right room in Figure 1(c), row 2) and then navigate to the lock (red sphere in the top-left room). The agent's state space consists of its position, orientation, linear velocity, rotational velocity and a has_key indicator variable. If it reaches the lock with the key in its possession, its episode terminates with a sparse reward of 0; otherwise it gets a step penalty of $-1$. If we wish to autonomously discover the importance of the key (i.e., without any corresponding extrinsic rewards), a distance-based dense reward such as that used in related work (Nachum et al., 2018) would be infeasible.
+
+Point E-Maze: This task extends the benchmark U-shaped Point-Maze task (Duan et al., 2016) so that the agent has two possible start locations: on the top and bottom rungs of the E-shaped maze, respectively. We include this task to demonstrate our algorithm's ability to construct skill trees.
+
+# 4.1 COMPARATIVE ANALYSES
+
+We compared the performance of our algorithm to DDPG, Option-Critic and Hierarchical Actor-Critic (HAC), in the conditions most similar to those in which they were originally evaluated. For instance, in the Ant-Maze task we compare against Option-Critic under a dense-reward formulation of the problem, while comparing to HAC under a sparse-reward version of the same task. As a result, we show the learning curves comparing against them on different plots (columns (a) and (b) in Figure 1, respectively) to emphasize the difference between the algorithms, the settings in which they are applicable, and the way they are evaluated.
+
+Figure 1: (a) Learning curves comparing deep skill chaining (DSC), a flat agent (DDPG) and Option-Critic. (b) Comparison with Hierarchical Actor-Critic (HAC). (c) The continuous control tasks corresponding to the learning curves in (a) and (b). Solid lines represent median reward per episode, with error bands denoting one standard deviation. Our algorithm remains the same between (a) and (b). All curves are averaged over 20 runs, except for Ant-Maze, which was averaged over 5 runs.
+
+Figure 2: Initiation sets of options learned in the Lock and Key task. The blue sphere in the top-right room represents the key; the red sphere in the top-left room represents the lock. Red regions represent states inside the initiation classifier of learned skills, whereas blue/gray regions represent states outside of it. Each column represents an option, with the top row corresponding to the initiation set when has_key is false and the bottom row corresponding to the initiation set when has_key is true.
+
+Comparison with DDPG and Option-Critic: Figure 1(a) shows the results of comparing our proposed algorithm (DSC) with a flat RL agent (DDPG) and the version of Option-Critic designed for continuous action spaces (PPOC). Deep skill chaining comfortably outperforms both baselines. Both DSC and DDPG use the same exploration strategy in which $a_{t} = \pi_{\theta}(s_{t}) + \eta_{t}$ where $\eta_t\sim N(0,\epsilon_t)$ . Option-Critic, on the other hand, learns a stochastic policy $\pi_{\theta}(a_{t}|s_{t})$ and thus has baked-in exploration (Sutton & Barto, 2018, Ch. 13), precluding the need for additive noise during action selection. We hypothesize that this difference in exploration strategies is the reason Option-Critic initially performs better than both DDPG and DSC in the Reacher and Point E-Maze tasks.
+
+Comparison with Hierarchical Actor-Critic: We compare our algorithm to Hierarchical Actor-Critic (HAC) (Levy et al., 2019), which has recently outperformed other hierarchical reinforcement learning methods (Nachum et al., 2018; Vezhnevets et al., 2017) on a wide variety of tasks. A noteworthy property of the HAC agent is that it may prematurely terminate its training episodes to prevent flooding its replay buffer with uninformative transitions. The length of each training episode in DSC, however, is fixed and determined by the test environment. Unless the agent reaches the goal state, its episode lasts for the entirety of its episodic budget (e.g., this would be 1000 timesteps in the Point-Maze environment). Thus, to compare the two algorithms, we perform periodic test rollouts wherein all networks are frozen and both algorithms have the same time budget to solve the given task. Furthermore, since both DSC and HAC learn deterministic policies, we set $\epsilon_{t} = 0$ during these test rollouts. When comparing to HAC, we perform 1 test rollout after each training episode in all tasks except for Ant-Maze, where we average performance over 5 test rollouts every 10 episodes.
+
+Figure 1(b) shows that DSC outperforms HAC in all environments except for Four Rooms with a Lock and Key, where their performance is similar, even though DSC does not use Hindsight Experience Replay (Andrychowicz et al., 2017) to deal with the sparse reward nature of this task.
+
+# 4.2 INTERPRETING LEARNED SKILLS
+
+Figure 2 visualizes the initiation set classifiers of options discovered by DSC in Four Rooms with a Lock and Key. Despite not getting any extrinsic reward for picking up the key, DSC discovers the following skill chain: the options shown in Figure 2, columns (c) and (d), bring the agent to the room with the key. The option shown in column (b) then picks up the key (top row) and takes the agent to the room with the lock (bottom row). Finally, the option in column (a) solves the overall problem by navigating to the lock with the key. Similar visualizations of learned initiation set classifiers in the E-Maze task can be found in Figure 6 in the Appendix.
+
+Figure 3: Solution trajectories found by deep skill chaining in (a) Point-Maze, (b) Four-Rooms, (c) Ant-Maze and (d) E-Maze. Sub-figure (d) shows two trajectories corresponding to the two possible initial locations in this task. Black points denote states in which $\pi_{\mathcal{O}}$ chose primitive actions; other colors denote temporally extended option executions.
+
+Figure 3 shows that DSC is able to learn options that induce simple, efficient policies along different segments of the state-space. Furthermore, it illustrates that in some states, the policy over options prefers primitive actions (shown in black) over learned skills. This suggests that DSC is robust to situations in which it constructs poor options or is unable to learn a good option policy in certain portions of the state-space. In particular, Figure 3 (d) shows how DSC constructs a skill tree to solve a problem with two distinct start states. It learns a common option near the goal (shown in blue), which then branches off into two different chains leading to its two different start states respectively.
+
+# 5 DISCUSSION AND CONCLUSION
+
+Deep skill chaining breaks complex long-horizon problems into a series of sub-problems and learns policies that solve those sub-problems. By doing so, it provides a significant performance boost when compared to a flat learning agent in all of the tasks considered in Section 4.
+
+We show superior performance when compared to Option-Critic, the leading framework for option discovery in continuous domains. A significant drawback of Option-Critic is that it assumes that all options are executable from everywhere in the state-space. By contrast, deep skill chaining explicitly learns initiation set classifiers. As a result, learned skills specialize in different regions of the state-space and do not have to bear the burden of learning representations for states that lie far outside of their initiation region. Furthermore, each option in the Option-Critic architecture leverages the same state-abstraction to learn option-specific value functions and policies, while deep skill chaining permits each skill to construct its own skill-specific state-abstraction (Konidaris & Barto, 2009a). An advantage of using Option-Critic over DSC is that it is not confined to goal-oriented tasks and can work in tasks which require continually maximizing non-sparse rewards.
+
+Section 4 also shows that deep skill chaining outperforms HAC in four out of five domains, while achieving comparable performance in one. We note that even though HAC was designed to work in the multi-goal setting, we test it here in the more constrained single-goal setting. Consequently, we argue that in problems which permit a stationary set of target events (like the ones considered here), deep skill chaining provides a favorable alternative to HAC. Furthermore, HAC depends on Hindsight Experience Replay (HER) to train the different layers of its hierarchy. Deep skill chaining shows the benefits of using hierarchies even in the absence of such data augmentation techniques, but including them should yield additional performance benefits in sparse-reward tasks.
+
+A drawback of deep skill chaining is that, because it builds skills backward from the goal, its performance in large state-spaces is dependent on a good exploration algorithm. We used the naive exploration strategy of adding Gaussian noise to chosen actions (Lillicrap et al., 2015; Fujimoto et al., 2018) since the exploration question is orthogonal to the ideas presented here. The lack of a sophisticated exploration algorithm also explains the higher variance in performance in the Point-Maze task in Figure 1. Combining effective exploration (Machado et al., 2018; Jinnai et al., 2020) with DSC's high reliability of triggering target events is a promising avenue for future work.
+
+We presented a new skill discovery algorithm that can solve high-dimensional goal-oriented tasks far more reliably than flat RL agents and other popular hierarchical methods. To our knowledge, DSC is the first deep option discovery algorithm that does not treat the number of options as a fixed and costly hyperparameter. Furthermore, where other deep option discovery techniques have struggled to show consistent improvements over baseline flat agents in the single-task setting (Zhang & Whiteson, 2019; Smith et al., 2018; Harb et al., 2018; Klissarov et al., 2017), we unequivocally show the necessity of hierarchies for solving challenging problems.
+
+# 6 ACKNOWLEDGEMENTS
+
+We thank Andrew Levy, Nakul Gopalan, Sam Lobel, Theresa Barton and other members of the Brown bigAI group for their inputs. This research was supported in part by DARPA under agreement number W911NF1820268, AFOSR Young Investigator Grant agreement number FA9550-17-1-0124 and the ONR under the PERISCOPE MURI Contract N00014-17-1-2699. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The content is solely the responsibility of the authors and does not necessarily represent the official views of DARPA, the ONR, or the AFOSR.
+
+# REFERENCES
+
+Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In Advances in Neural Information Processing Systems, pp. 5048-5058, 2017.
+Pierre-Luc Bacon, Jean Harb, and Doina Precup. The option-critic architecture. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
+Bram Bakker and Jürgen Schmidhuber. Hierarchical reinforcement learning with subpolicies specializing for learned subgoals. In Neural Networks and Computational Intelligence, pp. 125-130. CiteSeer, 2004.
+Andrew G Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete event dynamic systems, 13(1-2):41-77, 2003.
+Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, pp. 1471-1479, 2016.
+Steven J Bradtke and Michael O Duff. Reinforcement learning methods for continuous-time markov decision problems. In Advances in neural information processing systems, pp. 393-400, 1995.
+Ronen I Brafman and Moshe Tennenholtz. R-Max - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3(Oct):213-231, 2002.
+Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
+Yuri Burda, Harrison Edwards, Amos Storkey, and Oleg Klimov. Exploration by random network distillation. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=H11JJJnR5Ym.
+Peter Dayan and Geoffrey E Hinton. Feudal reinforcement learning. In Advances in neural information processing systems, pp. 271-278, 1993.
+Thomas G Dietterich. Hierarchical reinforcement learning with the maxq value function decomposition. Journal of Artificial Intelligence Research, 13:227-303, 2000.
+Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In International Conference on Machine Learning, pp. 1329-1338, 2016.
+Carlos Florensa, David Held, Markus Wulfmeier, Michael Zhang, and Pieter Abbeel. Reverse curriculum generation for reinforcement learning. In Conference on Robot Learning, pp. 482-495, 2017.
+
+Anthony G Francis and Ashwin Ram. The utility problem in case-based reasoning. In Case-Based Reasoning: Papers from the 1993 Workshop, pp. 160-161, 1993.
+Scott Fujimoto, Herke van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning, pp. 1582-1591, 2018.
+Jean Harb, Pierre-Luc Bacon, Martin Klissarov, and Doina Precup. When waiting is not an option: Learning options with a deliberation cost. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
+Arushi Jain, Khimya Khetarpal, and Doina Precup. Safe option-critic: Learning safety in the option-critic architecture. CoRR, abs/1807.08060, 2018. URL http://arxiv.org/abs/1807.08060.
+Steven James, Benjamin Rosman, and George Konidaris. Learning to plan with portable symbols. In the ICML/IJCAI/AAMAS 2018 Workshop on Planning and Learning, 2018.
+Yuu Jinnai, David Abel, David Hershkowitz, Michael Littman, and George Konidaris. Finding options that minimize planning time. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 3120-3129, Long Beach, California, USA, 09-15 Jun 2019. PMLR. URL http://proceedings.mlr.press/v97/jinnai19a.html.
+Yuu Jinnai, Jee Won Park, Marlos C. Machado, and George Konidaris. Exploration in reinforcement learning with deep covering options. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SkeIyaVtwB.
+Leslie Pack Kaelbling and Tomás Lozano-Pérez. Pre-image backchaining in belief space for mobile manipulation. In Robotics Research, pp. 383-400. Springer, 2017.
+Martin Klissarov, Pierre-Luc Bacon, Jean Harb, and Doina Precup. Learning options end-to-end for continuous action tasks. In Hierarchical Reinforcement Learning Workshop (NeurIPS), 2017.
+George Konidaris. Constructing abstraction hierarchies using a skill-symbol loop. In IJCAI: Proceedings of the Conference, volume 2016, pp. 1648. NIH Public Access, 2016.
+George Konidaris. On the necessity of abstraction. Current Opinion in Behavioral Sciences, 29: 1-7, 2019.
+George Konidaris and Andrew Barto. Efficient skill learning using abstraction selection. In Twenty-First International Joint Conference on Artificial Intelligence, 2009a.
+George Konidaris and Andrew Barto. Skill discovery in continuous reinforcement learning domains using skill chaining. In Advances in Neural Information Processing Systems, pp. 1015-1023, 2009b.
+George Konidaris, Scott Kuindersma, Roderic Grupen, and Andrew Barto. Robot learning from demonstration by constructing skill trees. The International Journal of Robotics Research, 31(3): 360-375, 2012.
+George Konidaris, Leslie Pack Kaelbling, and Tomas Lozano-Perez. From skills to symbols: Learning symbolic representations for abstract high-level planning. Journal of Artificial Intelligence Research, 61:215-289, 2018.
+Visak CV Kumar, Sehoon Ha, and C Karen Liu. Expanding motor skills using relay networks. In Conference on Robot Learning, pp. 744-756, 2018.
+Andrew Levy, Robert Platt, and Kate Saenko. Hierarchical reinforcement learning with hindsight. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=ryzECoAcY7.
+Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
+
+Long-Ji Lin. Reinforcement learning for robots using neural networks. Technical report, Carnegie-Mellon Univ Pittsburgh PA School of Computer Science, 1993.
+Miao Liu, Marlos C Machado, Gerald Tesauro, and Murray Campbell. The eigenoption-critic framework. arXiv preprint arXiv:1712.04065, 2017.
+Marlos C. Machado, Clemens Rosenbaum, Xiaoxiao Guo, Miao Liu, Gerald Tesauro, and Murray Campbell. Eigenoption discovery through the deep successor representation. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Bk8ZcAxAxR-.
+Amy McGovern and Richard S Sutton. Macro-actions in reinforcement learning: An empirical analysis. Computer Science Department Faculty Publication Series, pp. 15, 1998.
+Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
+Ofir Nachum, Shixiang Shane Gu, Honglak Lee, and Sergey Levine. Data-efficient hierarchical reinforcement learning. In Advances in Neural Information Processing Systems, pp. 3303-3313, 2018.
+Scott Niekum and Andrew G. Barto. Clustering via dirichlet process mixture models for portable skill discovery. In J. Shawe-Taylor, R. S. Zemel, P. L. Bartlett, F. Pereira, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 24, pp. 1818-1826. Curran Associates, Inc., 2011.
+F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
+Marc Pickett and Andrew G Barto. Policyblocks: An algorithm for creating useful macro-actions in reinforcement learning. In ICML, volume 19, pp. 506-513, 2002.
+Matthew Riemer, Miao Liu, and Gerald Tesauro. Learning abstract options. In Advances in Neural Information Processing Systems, pp. 10424-10434, 2018.
+Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In International conference on machine learning, pp. 1312-1320, 2015.
+Jürgen Schmidhuber. Learning to generate sub-goals for action sequences. In Artificial neural networks, pp. 967-972, 1991.
+Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, and Karol Hausman. Dynamics-aware unsupervised discovery of skills. arXiv preprint arXiv:1907.01657, 2019.
+David Silver and Kamil Ciosek. Compositional planning using optimal option models. In ICML, 2012.
+Özgür Şimşek and Andrew G Barto. Using relative novelty to identify useful temporal abstractions in reinforcement learning. In Proceedings of the twenty-first international conference on Machine learning, pp. 95. ACM, 2004.
+Özgür Şimşek, Alicia P Wolfe, and Andrew G Barto. Identifying useful subgoals in reinforcement learning by local graph partitioning. In Proceedings of the 22nd international conference on Machine learning, pp. 816-823. ACM, 2005.
+Matthew Smith, Herke van Hoof, and Joelle Pineau. An inference-based policy gradient method for learning options. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 4703-4712, Stockholm, Sweden, 10-15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/smith18a.html.
+
+Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
+Richard S Sutton, Doina Precup, and Satinder P Singh. Intra-option learning about temporally abstract actions. In ICML, volume 98, pp. 556-564, 1998.
+Richard S Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M Pilarski, Adam White, and Doina Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pp. 761-768. International Foundation for Autonomous Agents and Multiagent Systems, 2011.
+Richard S Sutton, Doina Precup, and Satinder Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181-211, 1999.
+Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, OpenAI Xi Chen, Yan Duan, John Schulman, Filip DeTurck, and Pieter Abbeel. # exploration: A study of count-based exploration for deep reinforcement learning. In Advances in neural information processing systems, pp. 2753-2762, 2017.
+Russ Tedrake. LQR-trees: Feedback motion planning on sparse randomized trees. In Robotics: Science and Systems, 2009.
+Gerald Tesauro. TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural computation, 6(2):215-219, 1994.
+Saket Tiwari and Philip S Thomas. Natural option critic. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 5175-5182, 2019.
+Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026-5033. IEEE, 2012.
+Hado Van Hasselt, Arthur Guez, and David Silver. Deep reinforcement learning with double Q-learning. In Thirtieth AAAI Conference on Artificial Intelligence, 2016.
+Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, and Koray Kavukcuoglu. Feudal networks for hierarchical reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 3540-3549. JMLR.org, 2017.
+Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279-292, 1992.
+Shangtong Zhang and Shimon Whiteson. DAC: The double actor-critic architecture for learning options. In Advances in Neural Information Processing Systems, 2019. URL https://arxiv.org/abs/1904.12691.
+
+Figure 4: An illustration of the deep skill chaining algorithm. $\star$ represents the goal state, $\times$ represents the two start states. (a) Before the agent has discovered its first skill/option, it acts according to its global DDPG policy. Having encountered the goal state $N$ times, the agent creates an option to trigger the goal from its local neighborhood. (b) Now, when the agent enters the initiation set of the first option, it begins to learn another option to trigger the first option. (c) Because the agent has two different start states, it learns two qualitatively different options to trigger the option learned in (b). (d) Finally, the agent has learned a skill tree which it can follow to consistently reach the goal.
+
+# A APPENDIX
+
+# A.1 CREATING SKILL TREES
+
+In Section 3.4, we introduced the idea of generalizing skill chains to skill trees to incorporate qualitatively different solution trajectories. In this section, we provide some of the implementation details required to learn initiation set classifiers that organize in the form of trees.
+
+When creating skill chains, the goal of each option is to trigger the initiation condition of the option that precedes it in its chain (i.e., its parent option). When creating a skill tree of branching factor $B$ , we allow at most $B$ options to target each salient event in the MDP (i.e., the goal state and the initiation set classifiers of preexisting options). To further control the branching factor of the skill tree, we impose two more conditions on option creation:
+
+1. Consider an option $o_1$ which already has one child option $o_2$ targeting it. Now suppose that we want to learn another option $o_3$ that also targets $o_1$ . We only consider state $s_t$ to be a positive example for training $\mathcal{I}_{o_3}$ if $\mathcal{I}_{o_2}(s_t) = 0$ .
+2. To prevent significant overlap between options that target the same event, we treat the positive examples used to train the initiation set classifier of one as negative training examples of all its sibling options. This allows for multiple options that trigger the same target event, while encouraging them to specialize in different parts of the state-space.
+
+In the Point E-Maze task considered in Section 4, we learn a skill tree with $B = 2$ .
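The two conditions above reduce to a simple labeling rule when assembling training data for a new option that targets an already-targeted event. Below is a minimal sketch of that rule; the function and variable names are ours, not the paper's:

```python
def label_for_new_sibling(candidate_positives, sibling_positives, sibling_classifiers):
    """Assemble training data for a new option targeting the same salient
    event as its existing siblings.

    Condition 1: a state is a positive example only if every sibling's
    initiation classifier rejects it.
    Condition 2: the siblings' own positive examples become negatives,
    pushing the new option to specialize in a different region.
    """
    positives = [s for s in candidate_positives
                 if all(not clf(s) for clf in sibling_classifiers)]
    negatives = list(sibling_positives)
    return positives, negatives
```

The returned positives and negatives would then be fed to whatever parametric classifier represents the new option's initiation set.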
+
+# A.2 INTRA-OPTION Q-LEARNING
+
+In principle, the methodology outlined in Section 3.2 is sufficient to learn an effective policy over options $\pi_{\mathcal{O}}$ . However, when $\mathcal{O}$ is a set of Markov options (Sutton et al., 1999), which is the setting considered in this paper, we can use intra-option Q-learning (Sutton et al., 1998) to improve the sample efficiency associated with learning $\pi_{\mathcal{O}}$ .
+
+More specifically, given a transition $(s_t,o,r_{t:t + \tau},s_{t + \tau})$, SMDP Q-learning treats option $o$ as a black box and uses Equation 4 to determine the Q-value target $y_{t}$ for updating $\pi_{\mathcal{O}}$. Intra-option Q-learning leverages the fact that option $o$ is Markov: all the transitions experienced during the execution of $o$ are also valid experiences for training $\pi_{\mathcal{O}}$. As long as a state $s_{t + i},\forall i\in [0,\tau ]$, is inside the initiation set of option $o$, we can pretend that option execution really began in state $s_{t + i}$ and add the transition $(s_{t + i},o,r_{t + i:t + \tau},s_{t + \tau})$ to $\pi_{\mathcal{O}}$'s replay buffer.
+
+Furthermore, intra-option Q-learning also provides a way to improve the sample efficiency associated with learning option policies $\pi_o, \forall o \in \mathcal{O}$ . This can be done by making off-policy updates to each option's internal policy. In other words, regardless of which option is actually executed in the MDP, as long as a state experienced during execution is inside the initiation set of some other option, we can add the associated experience tuple to that (un-executed) option's replay buffer. Note that this is possible because we use an off-policy learning algorithm (DDPG) to learn intra-option policies.
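As a concrete sketch of the first idea, the snippet below expands a single option execution into per-state SMDP experiences for $\pi_{\mathcal{O}}$'s replay buffer. The names and the `option.initiation` interface are illustrative assumptions, not the paper's code:

```python
def intra_option_experiences(steps, s_final, option, gamma=0.99):
    """Expand one execution of a Markov option into SMDP experiences.

    steps: list of (state, action, reward) tuples seen while the option
    ran; s_final: the state where it terminated.  Every suffix whose
    first state lies inside the option's initiation set yields a valid
    transition (s_{t+i}, o, r_{t+i:t+tau}, s_{t+tau}), with the return
    discounted from its own starting point.
    """
    experiences = []
    for i, (s_i, _, _) in enumerate(steps):
        if not option.initiation(s_i):
            continue  # pretend execution began at s_i only if s_i is in I_o
        ret = sum(gamma ** k * r for k, (_, _, r) in enumerate(steps[i:]))
        experiences.append((s_i, option, ret, s_final))
    return experiences
```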
+
+# A.3 TEST ENVIRONMENTS
+
+A description of the Four Rooms and the Point E-Maze tasks was provided in Section 4. Here we describe the remaining tasks considered in this paper:
+
+Point Maze: In this task, the same point agent as in the Four Rooms task must navigate around a U-shaped maze to reach its goal. The agent receives a reward of $-1$ at every time step and a sparse terminal reward of $0$ when it reaches the goal location. This is an interesting task for hierarchical agents because, in order to reach the goal, the agent must first move away from it. A dense distance-based reward formulation of this problem would only serve to deceive non-hierarchical agents such as DDPG.
+
+Ant Maze: The ant (Duan et al., 2016) is a challenging agent to control due to its non-linear and highly unstable dynamics. In this task, the ant must navigate around the same U-shaped maze as in the Point Maze task. Getting the ant to cover significant distances along the $x, y$ plane without falling over is a benchmark control task in itself (Brockman et al., 2016). As a result, constructing options backward from the goal could require prohibitively long training or the use of sophisticated exploration algorithms (Burda et al., 2019; Bellemare et al., 2016; Tang et al., 2017). To avoid conflating our results with the orthogonal investigation of effective exploration in RL, we follow the experimental design of other state-of-the-art hierarchical reinforcement learning algorithms (Levy et al., 2019; Nachum et al., 2018) and sample the initial state of the ant uniformly across the maze for the first 30 episodes. For a fair comparison, all baseline algorithms use this exploration strategy.
+
+Fixed Reacher: We use the Reacher task (Brockman et al., 2016) with two modifications. First, rather than randomly sampling a new goal at the start of each episode, we fix the target across all episodes. We do this because if the goal moves, following a learned skill chain will no longer solve the MDP. Note that the same modification was made in the DDPG paper (Lillicrap et al., 2015). Second, to increase the difficulty of the resulting task, we use a sparse reward function rather than the dense distance-based one used in the original formulation.
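The two modifications above can be expressed as a thin environment wrapper. The sketch below is our illustration under assumed interfaces (a `reset(goal=...)` method and a `step` that returns the next state), not the authors' code:

```python
class FixedSparseGoalWrapper:
    """Wrap a goal-reaching environment so that (1) the goal stays fixed
    across episodes and (2) the reward is sparse: -1 per step and 0 on
    reaching the goal, at which point the episode terminates."""

    def __init__(self, env, goal, tolerance=0.05):
        self.env, self.goal, self.tolerance = env, goal, tolerance

    def reset(self):
        # Hypothetical goal-setting reset: the same target every episode.
        return self.env.reset(goal=self.goal)

    def step(self, action):
        state = self.env.step(action)
        dist = sum((s - g) ** 2 for s, g in zip(state, self.goal)) ** 0.5
        reached = dist < self.tolerance
        reward = 0.0 if reached else -1.0
        return state, reward, reached
```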
+
+| Task | Number of steps per episode |
+| --- | --- |
+| Point-Maze | 1000 |
+| Four Rooms with Lock and Key | 5000 |
+| Point E-Maze | 1500 |
+| Reacher | 500 |
+| Ant-Maze | 2000 |
+
+Table 1: Maximum number of time steps per episode in each of the experimental domains
+
+Figure 5: Analysis of performance (as measured by mean cumulative reward) of DSC agent as it is allowed to learn more skills in (a) Point-Maze, (b) Four Rooms with Lock and Key, (c) E-Maze and (d) Ant-Maze. Note that in general, DSC discovers as many skills as it needs to solve the given problem. For this experiment alone, we restrict the number of skills that the DSC agent can learn. All experiments averaged over 5 runs. Error bars denote 1 standard deviation. Higher is better.
+
+
+Figure 6: Initiation set classifiers learned in the Point E-Maze domain. Discovered skills organize in the form of a tree with a branching factor of 2. The option on the extreme left initiates in the proximity of the goal. Options learned after the goal option branch off into two separate skill chains. The chain on top extends backward to the start state in the top rung of the E-Maze. The chain shown in the bottom row extends backward to the start state in the bottom rung of the E-Maze.
+
+# A.4 ABLATION STUDY
+
+# A.4.1 PERFORMANCE AS A FUNCTION OF NUMBER OF SKILLS
+
+Deep skill chaining generally discovers and learns as many skills as it needs to solve a given problem. In this experiment however, we restrict the number of skills DSC can learn to examine its impact on overall agent performance (as measured by cumulative reward during training). Figure 5 shows that the performance of the agent increases monotonically (with diminishing marginal improvements) as it is allowed to learn more skills.
+
+# A.4.2 NUMBER OF SKILLS OVER TIME
+
+Figures 7(a) and 7(b) illustrate how deep skill chaining incrementally discovers options and adds them to the agent's option repertoire. Figure 7(c) shows how the number of skills empirically increases over time, plateaus, and has low variance between runs. Since the agent has to learn the importance of the key in the Four Rooms task, learning initiation set classifiers takes longer there than in the Point-Maze task.
+
+# A.4.3 HYPERPARAMETER SENSITIVITY
+
+In this section, we analyze DSC's sensitivity to some of the hyperparameters specific to the algorithm. In Figure 8, we show that even under a fairly large range of values for the buffer length $K$ and the gestation period $N$ , DSC is able to retain its strong performance.
+
+Figure 7: (a) Initially, the policy over options $\pi_{\mathcal{O}}$ can only choose the global option $o_G$ as a proxy for selecting primitive actions. (b) Over time, the agent learns temporally extended skills and adds output nodes to the final layer of the DQN parameterizing $\pi_{\mathcal{O}}$ . This continues until the start state $s_0$ lies inside the initiation set of a learned option. (c) Empirical evaluation of how the number of skills in the agent's option repertoire changes over time in Point-Maze and Four-Rooms with a Lock and Key.
+
+Figure 8: Variation in DSC performance (as measured by mean cumulative reward) as a function of two hyperparameters: (left) the buffer length $K$ and (right) the gestation period $N$ of the option. For a qualitative description of both hyperparameters, refer to Section 3.3. This experiment shows that DSC is fairly robust to most reasonable choices of these parameters. All experiments averaged over 5 runs. Error bars denote 1 standard deviation. Higher is better.
+
+# A.5 ALGORITHM PSEUDO-CODE
+
+Algorithm 1: Deep Skill Chaining
+
+Given: the start state $s_0$ of the MDP; the indicator $\mathbb{1}_g(s) \coloneqq 1$ if $s$ is a target state in the MDP, $0$ otherwise; and the hyperparameter $T_0$, the time budget for discovered, temporally extended options.
+
+Global option: $o_G = (\mathcal{I}_{o_G}, \pi_{o_G}, \beta_{o_G} = \mathbb{1}_g, T = 1)$
+Goal option: $o_g = (\mathcal{I}_{o_g}, \pi_{o_g}, \beta_{o_g} = \mathbb{1}_g, T = T_0)$
+Agent's option repertoire: $\mathcal{O} = \{o_G\}$
+Untrained option: $o_U = o_g$ // the option whose initiation classifier is not yet learned
+Policy over options: $\pi_{\mathcal{O}}: s_t \to o_t$
+Current state: $s_t = s_0$
+
+while not $s_t$.is_terminal() do
+  // 1. Pick a new option and execute it in the environment
+  Choose $o_t$ according to $\pi_{\mathcal{O}}(s_t)$ using Equations 2 and 3
+  $r_{t:t+\tau}, s_{t+\tau} =$ execute_option($o_t$)
+  $\pi_{\mathcal{O}}$.update($s_t, o_t, r_{t:t+\tau}, s_{t+\tau}$) using Equation 4
+  // 2. Learn the initiation set of the new option:
+  // collect trajectories that trigger $o_U$'s termination region unless we have finished chaining
+  if $\beta_{o_U}(s_{t+\tau})$ and $(s_0 \notin \mathcal{I}_{o_i}\ \forall o_i \in \mathcal{O})$ then
+    $o_U$.learn_initiation_classifier() using the procedure described in Section 3.3
+    if $o_U$.initiation_classifier_is_trained() then
+      $\pi_{\mathcal{O}}$.add($o_U$) using the procedure described in Section 3.2
+      $\mathcal{O}$.append($o_U$)
+      $o_U =$ create_child_option($o_U$)
+    end
+  end
+end
+
+Function create_child_option($o$):
+  // Create a new option whose $\beta$ is the parent's $\mathcal{I}$
+  $o^* =$ Option()
+  $\mathcal{I}_{o^*} =$ None
+  $\beta_{o^*} = \mathcal{I}_o$
+  return $o^*$
+
+Function execute_option($o_t$):
+  // Option control loop; $T$ is the option's episodic time budget and $\pi_{o_t}$ its internal policy
+  $t_0 = t$
+  while not $\beta_{o_t}(s_t)$ and $t - t_0 < T$ do
+    $a_t = \pi_{o_t}(s_t; \theta_{o_t})$
+    $r_t, s_{t+1} =$ env.step($a_t$)
+    $s_t = s_{t+1}$
+    $t = t + 1$
+  end
+  $\tau = t - t_0$ // duration of option execution
+  return $r_{t_0:t_0+\tau}, s_{t_0+\tau}$
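The option control loop above translates almost directly into Python. The sketch below is illustrative: it assumes a minimal `env.step(action) -> (state, reward)` interface and a callable option policy, rather than the authors' DDPG machinery:

```python
def execute_option(env, state, option, t, budget):
    """Run an option's internal policy until its termination condition
    fires or its time budget is exhausted.  Returns the per-step
    rewards, the final state, and the updated global time step."""
    rewards, t0 = [], t
    while not option.termination(state) and t - t0 < budget:
        action = option.policy(state)
        state, reward = env.step(action)
        rewards.append(reward)
        t += 1
    return rewards, state, t
```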
+
+# A.6 MORE DETAILS ON IMPLEMENTING OPTION REWARD FUNCTIONS
+
+Section 3.1 explains that to learn an option's intra-option policy, we must define its internal reward function. While most of our experiments are conducted in the sparse-reward setting, deep skill chaining can be used without much modification in dense reward tasks as well. All that remains is a clear description of how each option's internal reward function would be defined in such a setting.
+
+Consider an option $o_i$ with parent option $o_{i-1}$ such that $\beta_{o_i} = \mathcal{I}_{o_{i-1}}$ . In the dense reward setting, we use the negative distance from the state to the parent option's initiation classifier as the reward function. Since initiation classifiers are represented using parametric classifiers, computing the distance to the classifier's decision boundary is straightforward and can be done using most popular machine learning frameworks. For instance, when using scikit-learn (Pedregosa et al., 2011), this is implemented as follows:
+
+$$
+R_{o}(s, a, s^{\prime}) = \begin{cases} 0, & \text{if } \beta_{o}(s^{\prime}) = 1 \\ -\mathcal{I}_{o_{i-1}}.\text{decision\_function}(s^{\prime}), & \text{otherwise} \end{cases} \tag{5}
+$$
+
+where, in Equation 5, decision_function(x) returns the distance in feature space between a point $x \in \mathbb{R}^N$ and the decision boundary learned by the classifier $\mathcal{I}_{o_{i-1}}$ .
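Wiring this reward up is a one-liner once the parent classifier is fitted. In the sketch below, `parent_distance` is a stand-in for the fitted classifier's decision-boundary distance (assumed positive for states outside the boundary), so the reward is the negative distance, as in the prose description of Equation 5; all names are illustrative:

```python
def make_option_reward(termination, parent_distance):
    """Dense internal option reward in the spirit of Equation 5: zero
    once the option's termination condition fires, otherwise the
    negative distance from the next state to the parent initiation
    classifier's decision boundary."""
    def reward(s_next):
        if termination(s_next):
            return 0.0
        return -parent_distance(s_next)
    return reward
```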
+
+# A.7 LEARNING INITIATION SET CLASSIFIERS
+
+To learn initiation set classifiers as described in Section 3.3, we used scikit-learn's One-Class SVM and Two-Class SVM packages (Pedregosa et al., 2011). Initiation set classifiers were learned on a subset of the state variables available in the domain. For instance, in the Lock and Key domain, the initiation set classifier was learned over the $x,y$ position and the has_key indicator variable. This is similar to other methods like HAC (Levy et al., 2019) which require the user to specify the dimensions of the state variable necessary to achieve the overall goal of the MDP. Incorporating the entire state variable to learn initiation set classifiers or using neural networks for automatic feature extraction should be straightforward and is left as future work.
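To make this concrete without depending on a fitted SVM, the sketch below projects states onto an assumed $(x, y, \texttt{has\_key})$ feature subset and uses an axis-aligned bounding box as a stand-in for the One-Class SVM fitted during an option's gestation period; the real implementation uses scikit-learn's `OneClassSVM` and two-class SVM:

```python
def extract_features(state):
    """Project the full state onto the subset used by initiation set
    classifiers; here (x, y, has_key), with illustrative indices."""
    return (state[0], state[1], state[-1])

class BoxInitiationClassifier:
    """Bounding-box stand-in for a one-class initiation classifier:
    accept any state whose feature projection falls inside the box
    spanned by the positive examples."""

    def fit(self, positive_states):
        feats = [extract_features(s) for s in positive_states]
        dims = range(len(feats[0]))
        self.lo = [min(f[d] for f in feats) for d in dims]
        self.hi = [max(f[d] for f in feats) for d in dims]
        return self

    def __call__(self, state):
        f = extract_features(state)
        return all(lo <= v <= hi for lo, v, hi in zip(self.lo, f, self.hi))
```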
+
+# A.8 HYPERPARAMETER SETTINGS
+
+We divide the full set of hyperparameters that our algorithm depends on into two groups: those that are common to all algorithms that use DDPG (Table 2), and those that are specific to skill chaining (Table 3). We did not try to optimize over the space of DDPG hyperparameters, and used the ones used in previous work (Lillicrap et al., 2015; Fujimoto et al., 2018). Table 3 shows the hyperparameters that we chose on the different tasks considered in this paper. Most of them are concerned with learning initiation set classifiers, the difficulty of which varies based on domain. To determine the correct setting of these parameters, we usually visualized the learned initiation set classifiers during the course of training (like Figures 2 and 6), and made adjustments accordingly.
+
+| Parameter | Value |
+| --- | --- |
+| Replay buffer size | 1e6 |
+| Batch size | 64 |
+| γ | 0.99 |
+| τ | 0.01 |
+| Number of hidden layers | 2 |
+| Hidden size 1 | 400 |
+| Hidden size 2 | 300 |
+| Critic learning rate | 1e-3 |
+| Actor learning rate | 1e-4 |
+
+Table 2: DDPG Hyperparameters
+
+| Parameter | Point Maze | Four Rooms | Reacher | Ant Maze | E-Maze |
+| --- | --- | --- | --- | --- | --- |
+| Gestation Period (N) | 5 | 10 | 5 | 1 | 5 |
+| Initiation Period | 1 | 10 | 3 | 0 | 1 |
+| Buffer Length (K) | 20 | 20 | 20 | 750 | 20 |
+| Option Max Time Steps (T) | 100 | 150 | 150 | 100 | 100 |
+
+Table 3: Deep Skill Chaining Hyperparameters
+
+# A.9 COMPUTE INFRASTRUCTURE
+
+We used 1 NVIDIA GeForce 2080 Ti, 2 NVIDIA GeForce 2070 Ti and 2 Tesla K80s on the Google Cloud compute infrastructure to perform all experiments reported in this paper.
+
+# A.10 NOTE ON COMPUTATION TIME
+
+Each option is parameterized by its own neural networks, which are only updated when the agent is inside that option's initiation set. For a given transition, this leads to at most two or three updates. In Point-Maze, updating all options on a transition took $0.004 \pm 0.0003$ s more than just updating the global DDPG agent (averaged over 300 episodes using 1 NVIDIA 2080 Ti GPU) - a trivial amount of extra computation time.
\ No newline at end of file
diff --git a/optiondiscoveryusingdeepskillchaining/images.zip b/optiondiscoveryusingdeepskillchaining/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..25f812e2b4529c36f0a5aa4a34fa270e9a6bbc37
--- /dev/null
+++ b/optiondiscoveryusingdeepskillchaining/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05e8851e1fa8f8ad4e4a305aec477fd34d089cbfa93e095705841169b78efaad
+size 593949
diff --git a/optiondiscoveryusingdeepskillchaining/layout.json b/optiondiscoveryusingdeepskillchaining/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c86b1296c228f6ed15a0f6a47d9069b27b1b781d
--- /dev/null
+++ b/optiondiscoveryusingdeepskillchaining/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:36af53408090f6d590669bd2928c4d56343b343a413732948c8588a134a9babd
+size 681931
diff --git a/orderlearninganditsapplicationtoageestimation/81e85f0b-b0d2-4b78-be71-41df77ba763c_content_list.json b/orderlearninganditsapplicationtoageestimation/81e85f0b-b0d2-4b78-be71-41df77ba763c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..14ac20426662d64ada1b21fb1071e54f268817e9
--- /dev/null
+++ b/orderlearninganditsapplicationtoageestimation/81e85f0b-b0d2-4b78-be71-41df77ba763c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4edc44280745be8bf92d97e5950151a17ae3b9390099a7e008122d43010ca325
+size 112751
diff --git a/orderlearninganditsapplicationtoageestimation/81e85f0b-b0d2-4b78-be71-41df77ba763c_model.json b/orderlearninganditsapplicationtoageestimation/81e85f0b-b0d2-4b78-be71-41df77ba763c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7c33233378d8c37782f2500e1176dc459d3a0558
--- /dev/null
+++ b/orderlearninganditsapplicationtoageestimation/81e85f0b-b0d2-4b78-be71-41df77ba763c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3fc60b6d075332159372fef1c362603b988f75635a98e724a68eb495704e136b
+size 139403
diff --git a/orderlearninganditsapplicationtoageestimation/81e85f0b-b0d2-4b78-be71-41df77ba763c_origin.pdf b/orderlearninganditsapplicationtoageestimation/81e85f0b-b0d2-4b78-be71-41df77ba763c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..199dad5038489a598334ad8c6f997afe81d1b8ee
--- /dev/null
+++ b/orderlearninganditsapplicationtoageestimation/81e85f0b-b0d2-4b78-be71-41df77ba763c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca0cb7b5bd0bf3c71d31f13d82483af74914e6aea98b75c02f730950b6319add
+size 12117846
diff --git a/orderlearninganditsapplicationtoageestimation/full.md b/orderlearninganditsapplicationtoageestimation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e5944f47d4dad98ae90b759284703d8445b84e4f
--- /dev/null
+++ b/orderlearninganditsapplicationtoageestimation/full.md
@@ -0,0 +1,472 @@
+# ORDER LEARNING AND ITS APPLICATION TO AGE ESTIMATION
+
+Kyungsun Lim, Nyeong-Ho Shin, Young-Yoon Lee, and Chang-Su Kim
+
+School of Electrical Engineering, Korea University and Samsung Electronics Co., Ltd {kslim, nhshin, cskim}@mcl.korea.ac.kr, yy77lee@gmail.com
+
+# ABSTRACT
+
+We propose order learning to determine the order graph of classes, representing ranks or priorities, and classify an object instance into one of the classes. To this end, we design a pairwise comparator to categorize the relationship between two instances into one of three cases: one instance is 'greater than,' 'similar to,' or 'smaller than' the other. Then, by comparing an input instance with reference instances and maximizing the consistency among the comparison results, the class of the input can be estimated reliably. We apply order learning to develop a facial age estimator, which provides the state-of-the-art performance. Moreover, the performance is further improved when the order graph is divided into disjoint chains using gender and ethnic group information or even in an unsupervised manner.
+
+# 1 INTRODUCTION
+
+To measure the quality of something, we often compare it with other things of a similar kind. Before assigning 4 stars to a film, a critic would have thought, "It is better than 3-star films but worse than 5-stars." This ranking through pairwise comparisons is done in various decision processes (Saaty, 1977). It is easier to tell the nearer one between two objects in a picture than to estimate the distance of each object directly (Chen et al., 2016; Lee & Kim, 2019a). Also, it is easy to tell a higher pitch between two notes, but absolute pitch is a rare ability (Bachem, 1955).
+
+Ranking through comparisons has been investigated for machine learning. In learning to rank (LTR), the pairwise approach learns, between two documents, which one is more relevant to a query (Liu, 2009). Also, in ordinal regression (Frank & Hall, 2001; Li & Lin, 2007), to predict the rank of an object, binary classifications are performed to tell whether the rank is higher than a series of thresholds or not. In this paper, we propose order learning to learn the ordering relationships between objects. Thus, order learning is related to LTR and ordinal regression. However, whereas LTR and ordinal regression assume that ranks form a total order (Hrbacek & Jech, 1984), order learning can be used for a partial order as well. Order learning is also related to metric learning (Xing et al., 2003). While metric learning is about whether an object is 'similar to or dissimilar from' another object, order learning is about 'greater than or smaller than.' Section 2 reviews this related work.
+
+In order learning, a set of classes, $\Theta = \{\theta_{1},\theta_{2},\dots ,\theta_{n}\}$ , is ordered, where each class $\theta_{i}$ represents one or more object instances. Between two classes $\theta_{i}$ and $\theta_{j}$ , there are three possibilities: $\theta_{i} > \theta_{j}$ or $\theta_{i} < \theta_{j}$ or neither (i.e. incomparable). These relationships are represented by the order graph. The goal of order learning is to determine the order graph and then classify an instance into one of the classes in $\Theta$ . To achieve this, we develop a pairwise comparator that determines ordering relationship between two instances $x$ and $y$ into one of three categories: $x$ is 'greater than,' 'similar to,' or 'smaller than' $y$ . Then, we use the comparator to measure an input instance against multiple reference instances in known classes. Finally, we estimate the class of the input to maximize the consistency among the comparison results. It is noted that the parameter optimization of the pairwise comparator, the selection of the references, and the discovery of the order graph are jointly performed to minimize a common loss function. Section 3 proposes this order learning.
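The estimation step can be sketched in a few lines: a ternary comparator votes 'greater', 'similar', or 'smaller' for each (input, reference) pair, and the predicted class is the one most consistent with those votes. The comparator below is a hypothetical stand-in for the learned network:

```python
def estimate_class(x, references, compare):
    """Estimate the class of instance x by maximizing consistency with
    ternary comparisons against reference instances of known classes.

    references: list of (instance, class_index) pairs.
    compare(x, y): +1 if x is 'greater than' y, 0 if 'similar to',
    -1 if 'smaller than' (stand-in for the pairwise comparator).
    """
    votes = [(compare(x, y), c) for y, c in references]
    candidates = sorted({c for _, c in references})

    def consistency(theta):
        # a vote agrees with candidate theta when it matches sign(theta - c)
        return sum(1 for v, c in votes if v == (theta > c) - (theta < c))

    return max(candidates, key=consistency)
```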
+
+We apply order learning to facial age estimation. Order learning matches age estimation well, since it is easier to tell a younger one between two people than to estimate each person's age directly (Chang et al., 2010; Zhang et al., 2017a). Even when we assume that age classes are linearly ordered, the proposed age estimator performs well. The performance is further improved, when classes are
+
+divided into disjoint chains in a supervised manner using gender and ethnic group information or even in an unsupervised manner. Section 4 describes this age estimator and discusses its results. Finally, Section 5 concludes this work.
+
+# 2 RELATED WORK
+
+Pairwise comparison: It is a fundamental problem to estimate the priorities (or ranks) of objects through pairwise comparison. In the classic paper, Saaty (1977) noted that, even when direct estimates of certain quantities are unavailable, rough ratios between them are easily obtained in many cases. Thus, he proposed the scaling method to reconstruct absolute priorities using only relative priorities. The scaling method was applied to monocular depth estimation (Lee & Kim, 2019a) and aesthetic assessment (Lee & Kim, 2019b). Ranking from a pairwise comparison matrix has been studied to handle cases, in which the matrix is huge or some elements are noisy (Braverman & Mossel, 2008; Jamieson & Nowak, 2011; Negahban et al., 2012; Wauthier et al., 2013). On the other hand, the pairwise approach to LTR learns, between two documents, which one is more relevant to a query (Liu, 2009; Herbrich et al., 1999; Burges et al., 2005; Tsai et al., 2007). The proposed order learning is related to LTR, since it also predicts the order between objects. But, while LTR sorts multiple objects with unknown ranks and focuses on the sorting quality, order learning compares a single object $x$ with optimally selected references with known ranks to estimate the rank of $x$ .
+
+Ordinal regression: Ordinal regression predicts an ordinal variable (or rank) of an instance. Suppose that a 20-year-old is misclassified as a 50-year-old in one case and as a 25-year-old in another. The former error should be penalized more than the latter. Ordinal regression exploits this characteristic in the design of a classifier or a regressor. In Frank & Hall (2001) and Li & Lin (2007), a conversion scheme was proposed to transform an ordinal regression problem into multiple binary classification problems. Ordinal regression based on this conversion scheme has been used in various applications, including age estimation (Chang et al., 2010; 2011; Niu et al., 2016; Chen et al., 2017) and monocular depth estimation (Fu et al., 2018). Note that order learning is different from ordinal regression. Order learning performs pairwise comparison between objects, instead of directly estimating the rank of each object. In age estimation, ordinal regression based on the conversion scheme is concerned with the problem, "Is a person's age bigger than a threshold $\theta$?" for each $\theta$. In contrast, order learning concerns "Between two people, who is older?" Conceptually, order learning is easier. Technically, if there are $N$ ranks, the conversion scheme requires $N - 1$ binary classifiers, but order learning needs only a single ternary classifier. Moreover, whereas ordinal regression assumes that ranks form a total order, order learning can be used even in the case of a partial order (Hrbacek & Jech, 1984).
+
+Metric learning: A distance metric can be learned from examples of similar pairs of points and those of dissimilar pairs (Xing et al., 2003). The similarity depends on an application and is implicitly defined by user-provided examples. If a learned metric generalizes well to unseen data, it can be used to enforce the desired similarity criterion in clustering (Xing et al., 2003), classification (Weinberger et al., 2006), or information retrieval (McFee & Lanckriet, 2010). Both metric learning and order learning learn important binary relations in mathematics: metric and order (Hrbacek & Jech, 1984). However, a metric decides whether an object $x$ is similar to or dissimilar from another object $y$ , whereas an order tells whether $x$ is greater than or smaller than $y$ . Thus, a learned metric is useful for grouping similar data, whereas a learned order is suitable for processing ordered data.
+
+Age estimation: Human ages can be estimated from facial appearance (Kwon & da Vitoria Lobo, 1994). Geng et al. (2007) proposed the aging pattern subspace, and Guo et al. (2009) introduced biologically inspired features to age estimation. Recently, deep learning has been adopted for age estimation. Niu et al. (2016) proposed OR-CNN for age estimation, which is an ordinal regressor using the conversion scheme. Chen et al. (2017) proposed Ranking-CNN, which is another ordinal regressor. While OR-CNN uses a common feature for multiple binary classifiers, Ranking-CNN employs a separate CNN to extract a feature for each binary classifier. Tan et al. (2018) grouped adjacent ages via the group-n encoding, determined whether a face belongs to each group, and combined the results to predict the age. Pan et al. (2018) proposed the mean-variance loss to train a CNN classifier for age estimation. Shen et al. (2018) proposed the deep regression forests for age estimation. Zhang et al. (2019) developed a compact age estimator using the two-points representation. Also, Li et al. (2019) proposed a continuity-aware probabilistic network for age estimation.
+
+
+(a)
+
+
+(b)
+Figure 1: Examples of order graphs, in which node $n$ precedes node $m$ ( $n \to m$ ), if $n$ divides $m$ . For clarity, self-loops for reflexivity and edges deducible from transitivity are omitted from the graphs.
+
+# 3 ORDER LEARNING
+
+# 3.1 WHAT IS ORDER?
+
+Let us first review mathematical definitions and concepts related to order. An order (Hrbacek & Jech, 1984; Bartle, 1976), often denoted by $\leq$ , is a binary relation on a set $\Theta = \{\theta_1, \theta_2, \dots, \theta_n\}$ that satisfies the three properties of
+
+- Reflexivity: $\theta_{i} \leq \theta_{i}$ for every $\theta_{i} \in \Theta$ ;
+- Antisymmetry: If $\theta_{i} \leq \theta_{j}$ and $\theta_{j} \leq \theta_{i}$ , then $\theta_{i} = \theta_{j}$ ;
+- Transitivity: If $\theta_{i} \leq \theta_{j}$ and $\theta_{j} \leq \theta_{k}$ , then $\theta_{i} \leq \theta_{k}$ .
+
+In real-world problems, an order describes ranks or priorities of objects. For example, in age estimation, $\theta_{i} \leq \theta_{j}$ means that people in age class $\theta_{i}$ look younger than those in $\theta_{j}$ .
+
+We may use the symbol $\rightarrow$ , instead of $\leq$ , to denote an order on a finite set $\Theta$ . Then, the order can be represented by a directed graph (Gross & Yellen, 2006) using elements in $\Theta$ as nodes. If $\theta_i \rightarrow \theta_j$ , there is a directed edge from node $\theta_i$ to node $\theta_j$ . The order graph is acyclic because of antisymmetry and transitivity. For example, for $n, m \in \mathbb{N}$ , let $n \rightarrow m$ denote that $m$ is a multiple of $n$ . Note that it is an order on any subset of $\mathbb{N}$ . Figure 1(a) is the graph representing this order on $\{1, \ldots, 9\}$ .
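This divisibility order can be checked in a few lines. The sketch below (our own illustration, not from the paper) builds the relation on $\{1, \ldots, 9\}$ and verifies the three order properties, along with the incomparability of 6 and 8:

```python
# Sketch (ours): the divisibility order n -> m ("m is a multiple of n")
# on {1, ..., 9}, as in Figure 1(a).
nodes = range(1, 10)
edges = {(n, m) for n in nodes for m in nodes if m % n == 0}

# Reflexivity: every node relates to itself.
assert all((n, n) in edges for n in nodes)
# Antisymmetry: n -> m and m -> n only if n == m.
assert all(n == m for (n, m) in edges if (m, n) in edges)
# Transitivity: n -> m and m -> k implies n -> k.
assert all((a, c) in edges
           for (a, b) in edges for (b2, c) in edges if b == b2)

# 6 and 8 are incomparable, as noted in the text.
assert (6, 8) not in edges and (8, 6) not in edges
```

Because the relation is reflexive, antisymmetric, and transitive, the only cycles in the corresponding directed graph are self-loops, which matches the acyclicity claim above.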
+
+Elements $\theta_{i}$ and $\theta_{j}$ are comparable if $\theta_{i} \rightarrow \theta_{j}$ or $\theta_{j} \rightarrow \theta_{i}$ , or incomparable otherwise. In Figure 1(a), 6 and 8 are incomparable. In age estimation, it is difficult to compare apparent ages of people in different ethnic groups or of different genders.
+
+An order on a set $\Theta$ is total (or linear) if all elements in $\Theta$ are comparable to one another. In such a case, $\Theta$ is called a linearly ordered set. In some real-world problems, orders are not linear. In this work, a subset $\Theta_c$ of $\Theta$ is referred to as a chain, if $\Theta_c$ is linearly ordered and also maximal, i.e. there is no proper superset of $\Theta_c$ that is linearly ordered. In Figure 1(a), nodes 1, 2, 4, and 8 form a chain. In Figure 1(b), the entire set is composed of three disjoint chains.
+
+# 3.2 ORDER LEARNING - BASICS
+
+Let $\Theta = \{\theta_1, \theta_2, \dots, \theta_n\}$ be an ordered set of classes, where each class $\theta_i$ represents one or more object instances. For example, in age estimation, age class 11 is the set of 11-year-olds. The objective of order learning is to determine the order graph, such as Figure 1(a) or (b), and categorize an object instance into one of the classes. In many cases, however, the order graph is given explicitly or is obvious from the context. For example, in quality assessment, there are typically five classes (poor → satisfactory → good → very good → excellent), forming a single chain. Also, in age estimation, suppose that an algorithm first classifies a person's gender into female or male and then estimates the age differently according to the gender. In this case, implicitly, there are separate age classes for each gender, and the age classes compose two disjoint chains similarly to Figure 1(b). Thus, in this subsection, we assume that the order graph is already known. Also, given an object instance, we assume that the chain to which the instance belongs is known. Then, we attempt to categorize the instance into one of the classes in the chain. Section 3.4 will propose order learning in the case of an unknown order graph composed of disjoint chains.
+
+Instead of directly estimating the class of each instance, we learn pairwise ordering relationship between two instances. Let $\Theta_c = \{0, 1, \dots, N - 1\}$ be a chain, where $N$ is the number of classes. Let
+
+
+Figure 2: Illustration of the pairwise comparator, where $\mathbb{C}$ denotes concatenation.
+
+$x$ and $y$ be two instances belonging to classes in $\Theta_c$ . Let $\theta(\cdot)$ denote the class of an instance. Then, $x$ and $y$ are compared and their ordering relationship is defined according to their class difference as
+
+$$
+x \succ y \quad \text{if } \theta(x) - \theta(y) > \tau, \tag{1}
+$$
+
+$$
+x \approx y \quad \text{if } |\theta(x) - \theta(y)| \leq \tau, \tag{2}
+$$
+
+$$
+x \prec y \quad \text{if } \theta(x) - \theta(y) < -\tau, \tag{3}
+$$
+
+where $\tau$ is a threshold. To avoid confusion, we use '$\succ , \approx , \prec$' for the instance ordering and '$>, =, <$' for the class order. In practice, the categorization in (1)~(3) is performed by a pairwise comparator in Figure 2, which consists of a Siamese network and a ternary classifier (Lee & Kim, 2019b). To train the comparator, only comparable instance pairs are employed.
+
+We estimate the class $\theta(x)$ of a test instance $x$ by comparing it with reference instances $y_m$, $0 \leq m \leq M - 1$, where $M$ is the number of references. The references are selected from training data such that they are from the same chain as $x$. Given $x$ and $y_m$, the comparator provides one of the three categories '$\succ , \approx , \prec$' as a result. Let $\theta'$ be an estimate of the true class $\theta(x)$. Then, the consistency between the comparator result and the estimate is defined as
+
+$$
+\phi_{\mathrm{con}}(x, y_m, \theta') = [x \succ y_m]\,[\theta' - \theta(y_m) > \tau] + [x \approx y_m]\,[|\theta' - \theta(y_m)| \leq \tau] + [x \prec y_m]\,[\theta' - \theta(y_m) < -\tau] \tag{4}
+$$
+
+where $[\cdot ]$ is the indicator function. The function $\phi_{\mathrm{con}}(x,y_m,\theta ')$ returns either 0 for an inconsistent case or 1 for a consistent case. For example, suppose that the pairwise comparator declares $x\prec y_{m}$ but $\theta^{\prime} - \theta (y_{m}) > \tau$ . Then, $\phi_{\mathrm{con}}(x,y_m,\theta ') = 0\cdot 1 + 0\cdot 0 + 1\cdot 0 = 0$ . Due to a possible classification error of the comparator, this inconsistency may occur even when the estimate $\theta^{\prime}$ equals the true class $\theta (x)$ . To maximize the consistency with all references, we estimate the class of $x$ by
+
+$$
+\hat{\theta}_{\mathrm{MC}}(x) = \arg\max_{\theta' \in \Theta_c} \sum_{m=0}^{M-1} \phi_{\mathrm{con}}(x, y_m, \theta'), \tag{5}
+$$
+
+which is called the maximum consistency (MC) rule. Figure 3 illustrates this MC rule.
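The MC rule can be sketched in a few lines. In the toy example below (our own illustration), the comparator output is simulated from ground-truth classes via (1)~(3); a learned comparator would additionally make errors, which the consistency maximization is designed to absorb:

```python
# Sketch (ours) of the maximum-consistency (MC) rule in Eq. (5).
TAU = 4          # threshold tau
N = 15           # classes 0..14, as in the Figure 3 example

def compare(tx, ty, tau=TAU):
    """Ternary comparison per Eqs. (1)-(3): 1 for '>', 0 for '~', -1 for '<'."""
    d = tx - ty
    return 1 if d > tau else (-1 if d < -tau else 0)

def consistency(cmp_result, theta_prime, theta_y, tau=TAU):
    """phi_con in Eq. (4): 1 iff the comparison agrees with the estimate."""
    return int(cmp_result == compare(theta_prime, theta_y, tau))

def estimate_mc(comparisons, classes=range(N)):
    """comparisons: list of (comparator output, reference class) pairs."""
    return max(classes,
               key=lambda t: sum(consistency(c, t, ty) for c, ty in comparisons))

# 5 references per class; a perfect comparator on a test instance of class 7.
refs = [ty for ty in range(N) for _ in range(5)]
comps = [(compare(7, ty), ty) for ty in refs]
print(estimate_mc(comps))  # -> 7
```

Because references of all classes vote, a few comparator errors only lower the consistency score locally and rarely move the argmax, which is the rationale for the MC rule.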
+
+It is noted that '$\succ , \approx , \prec$' is not a mathematical order. For example, if $\theta(x) + \frac{3}{4}\tau = \theta(y) = \theta(z) - \frac{3}{4}\tau$, then $x \approx y$ and $y \approx z$ but $x \prec z$. This is impossible in an order. More precisely, due to the quantization effect of the ternary classifier in (1)~(3), '$\succ , \approx , \prec$' is quasi-transitive (Sen, 1969), and '$\approx$' is symmetric but intransitive. We use this quasi-transitive relation to categorize an instance into one of the classes, on which a mathematical order is well defined.
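The counterexample above can be verified numerically. Here $\tau = 4$ and the classes 10, 13, 16 are our own choices satisfying $\theta(x) + \frac{3}{4}\tau = \theta(y) = \theta(z) - \frac{3}{4}\tau$:

```python
# Checking the counterexample: tau = 4, theta(x), theta(y), theta(z) = 10, 13, 16.
def relate(tx, ty, tau=4):
    d = tx - ty
    return '>' if d > tau else ('<' if d < -tau else '~')

assert relate(10, 13) == '~'   # x ~ y  (|10 - 13| = 3 <= 4)
assert relate(13, 16) == '~'   # y ~ z  (|13 - 16| = 3 <= 4)
assert relate(10, 16) == '<'   # but x < z (10 - 16 = -6 < -4): '~' is intransitive
```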
+
+# 3.3 ORDER LEARNING - SUPERVISED CHAINS
+
+# 3.3.1 SINGLE-CHAIN HYPOTHESIS (1CH)
+
+In the simplest case of 1CH, all classes form a single chain $\Theta_c = \{0, 1, \dots, N - 1\}$ . For example, in 1CH age estimation, people's ages are estimated regardless of their ethnic groups or genders.
+
+We implement the comparator in Figure 2 using CNNs, as described in Section 4.1. Let $\mathbf{q}^{xy} = (q_0^{xy}, q_1^{xy}, q_2^{xy})$ be the one-hot vector, indicating the ground-truth ordering relationship between training instances $x$ and $y$ . Specifically, $(1,0,0)$ , $(0,1,0)$ , and $(0,0,1)$ represent $x \succ y$ , $x \approx y$ , and $x \prec y$ . Also, $\mathbf{p}^{xy} = (p_0^{xy}, p_1^{xy}, p_2^{xy})$ is the corresponding softmax probability vector of the comparator. We train the comparator to minimize the comparator loss
+
+$$
+\ell_{\mathrm{co}} = -\sum_{x \in \mathcal{T}} \sum_{y \in \mathcal{R}} \sum_{j=0}^{2} q_j^{xy} \log p_j^{xy} \tag{6}
+$$
+
+
+Figure 3: Consistency computation in the MC rule: It is illustrated how to compute the sum in (5) for two candidates $\theta' = 7$ and 9. Each box represents a reference $y_m$. There are 5 references for each class in $\{0, \ldots, 14\}$. Comparison results are color-coded (yellow for $x \succ y_m$, gray for $x \approx y_m$, and green for $x \prec y_m$). The bold black rectangle encloses the references satisfying $|\theta' - \theta(y_m)| \leq \tau$, where $\tau = 4$. The computed consistency $\phi_{\mathrm{con}}(x, y_m, \theta')$ in (4) is written within each box. For $\theta' = 7$, there are 6 inconsistent boxes; for $\theta' = 9$, there are 24. In this example, $\theta' = 7$ minimizes the inconsistency, or equivalently maximizes the consistency. Therefore, $\hat{\theta}_{\mathrm{MC}}(x) = 7$.
+
+
+
+where $\mathcal{T}$ is the set of all training instances and $\mathcal{R} \subset \mathcal{T}$ is the set of reference instances. First, we initialize $\mathcal{R} = \mathcal{T}$ and minimize $\ell_{\mathrm{co}}$ via the stochastic gradient descent. Then, we reduce the reference set $\mathcal{R}$ by sampling references from $\mathcal{T}$ . Specifically, for each class in $\Theta_c$ , we choose $M / N$ reference images to minimize the same loss $\ell_{\mathrm{co}}$ , where $M$ is the number of all references and $N$ is the number of classes. In other words, the reliability score of a reference candidate $y$ is defined as
+
+$$
+\alpha(y) = \sum_{x \in \mathcal{T}} \sum_{j=0}^{2} q_j^{xy} \log p_j^{xy} \tag{7}
+$$
+
+and the $M / N$ candidates with the highest reliability scores are selected. Next, after fixing the reference set $\mathcal{R}$ , the comparator is trained to minimize the loss $\ell_{\mathrm{co}}$ . Then, after fixing the comparator parameters, the reference set $\mathcal{R}$ is updated to minimize the same loss $\ell_{\mathrm{co}}$ , and so forth.
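The reference selection step can be sketched as follows. The comparator probabilities are stubbed with random numbers here (purely illustrative; in the paper they are the softmax outputs $p_j^{xy}$ picked out by the one-hot $\mathbf{q}^{xy}$); the point is the per-class selection of the $M/N$ candidates with the highest reliability scores $\alpha(y)$:

```python
import math
import random

random.seed(0)

# Toy training set: (instance_id, class_label) pairs, 4 instances per class.
classes = [0, 1, 2]
train = [(i, c) for c in classes for i in range(4)]

# Stub for the comparator: probability assigned to the TRUE ordering category
# of each ordered pair (x, y). Random here, purely for illustration.
p = {(x, y): random.uniform(0.5, 1.0) for x in train for y in train}

def reliability(y):
    # alpha(y) in Eq. (7): total log-probability of correct comparisons with y.
    return sum(math.log(p[(x, y)]) for x in train)

def select_references(per_class=2):
    # Keep the per_class most reliable candidates of each class (M/N per class).
    refs = []
    for c in classes:
        cands = sorted((z for z in train if z[1] == c),
                       key=reliability, reverse=True)
        refs.extend(cands[:per_class])
    return refs

refs = select_references()
assert len(refs) == 6
```

Selecting references this way decreases the same loss $\ell_{\mathrm{co}}$ that the comparator is trained on, which is why the two steps can be alternated.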
+
+In the test phase, an input instance is compared with the $M$ references and its class is estimated using the MC rule in (5).
+
+# 3.3.2 $K$ -CHAIN HYPOTHESIS (KCH)
+
+In $K\mathrm{CH}$ , we assume that classes form $K$ disjoint chains, as in Figure 1(b). For example, in the supervised 6CH for age estimation, we predict a person's age according to the gender in {female, male} and the ethnic group in {African, Asian, European}. Thus, there are 6 chains in total. In this case, people in different chains are assumed to be incomparable for age estimation. It is supervised, since gender and ethnic group annotations are used to separate the chains. The supervised 2CH or 3CH also can be implemented by dividing chains by genders only or ethnic groups only.
+
+The comparator is trained similarly to 1CH. However, in computing the comparator loss in (6), a training instance $x$ and a reference $y$ are constrained to be from the same chain. Also, during the test, the type (or chain) of a test instance should be determined. Therefore, a $K$ -way type classifier is trained, which shares the feature extractor with the comparator in Figure 2 and uses additional fully-connected (FC) layers. Thus, the overall loss is given by
+
+$$
+\ell = \ell_{\mathrm{co}} + \ell_{\mathrm{ty}} \tag{8}
+$$
+
+where $\ell_{\mathrm{co}}$ is the comparator loss and $\ell_{\mathrm{ty}}$ is the type classifier loss. The comparator and the type classifier are jointly trained to minimize this overall loss $\ell$ .
+
+During the test, given an input instance, we determine its chain using the type classifier, and compare it with the references from the same chain, and then estimate its class using the MC rule in (5).
+
+# 3.4 ORDER LEARNING - UNSUPERVISED CHAINS
+
+This subsection proposes an algorithm to separate classes into $K$ disjoint chains when no supervision or annotation data are available for the separation. First, we randomly partition the training set $\mathcal{T}$ into $\mathcal{T}_0, \mathcal{T}_1, \ldots, \mathcal{T}_{K-1}$, where $\mathcal{T} = \mathcal{T}_0 \cup \ldots \cup \mathcal{T}_{K-1}$ and $\mathcal{T}_k \cap \mathcal{T}_l = \emptyset$ for $k \neq l$. Then, similarly
+
+Algorithm 1 Order Learning with Unsupervised Chains
+Input: $\mathcal{T} =$ training set of ordinal data, $K = \#$ of chains, $N = \#$ of classes in each chain, and $M = \#$ of references in each chain
+1: Partition $\mathcal{T}$ randomly into $\mathcal{T}_0,\ldots ,\mathcal{T}_{K - 1}$ and train a pairwise comparator
+2: for each chain $k$ do $\triangleright$ Reference Selection $(\mathcal{R}_k)$
+3: From $\mathcal{T}_k$ , select $M / N$ references $y$ with the highest reliability scores $\alpha_{k}(y)$
+4: end for
+5: repeat
+6: for each instance $x$ do $\triangleright$ Membership Update $(\mathcal{T}_k)$
+7: Assign it to $\mathcal{T}_{k^*}$ , where $k^{*} = \arg \max_{k}\beta_{k}(x)$ subject to the regularization constraint
+8: end for
+9: Fine-tune the comparator and train a type classifier using $\mathcal{T}_0,\ldots ,\mathcal{T}_{K - 1}$ to minimize $\ell = \ell_{\mathrm{co}} + \ell_{\mathrm{ty}}$
+10: for each instance $x$ do $\triangleright$ Membership Refinement $(\mathcal{T}_k)$
+11: Assign it to $\mathcal{T}_{k'}$ where $k'$ is its type classification result
+12: end for
+13: for each chain $k$ do $\triangleright$ Reference Selection $(\mathcal{R}_k)$
+14: From $\mathcal{T}_k$ , select $M / N$ references $y$ with the highest reliability scores $\alpha_{k}(y)$
+15: end for
+16: until convergence or predefined number of iterations
+Output: Pairwise comparator, type classifier, reference sets $\mathcal{R}_0,\ldots ,\mathcal{R}_{K - 1}$
+
+to (6), the comparator loss $\ell_{\mathrm{co}}$ can be written as
+
+$$
+\ell_{\mathrm{co}} = -\sum_{k=0}^{K-1} \sum_{x \in \mathcal{T}_k} \sum_{y \in \mathcal{R}_k} \sum_{j=0}^{2} q_j^{xy} \log p_j^{xy} = -\sum_{k=0}^{K-1} \sum_{y \in \mathcal{R}_k} \alpha_k(y) = -\sum_{k=0}^{K-1} \sum_{x \in \mathcal{T}_k} \beta_k(x) \tag{9}
+$$
+
+where $\mathcal{R}_k\subset \mathcal{T}_k$ is the set of references for the $k$ th chain, $\alpha_{k}(y) = \sum_{x\in \mathcal{T}_{k}}\sum_{j}q_{j}^{xy}\log p_{j}^{xy}$ is the reliability of a reference $y$ in the $k$ th chain, and $\beta_{k}(x) = \sum_{y\in \mathcal{R}_{k}}\sum_{j}q_{j}^{xy}\log p_{j}^{xy}$ is the affinity of an instance $x$ to the references in the $k$ th chain. Note that $\beta_{k}(x) = -\sum_{y\in \mathcal{R}_{k}}D(\mathbf{q}^{xy}\| \mathbf{p}^{xy})$ where $D$ is the Kullback-Leibler distance (Cover & Thomas, 2006). Second, after fixing the chain membership $\mathcal{T}_k$ for each chain $k$ , we select references $y$ to maximize the reliability scores $\alpha_{k}(y)$ . These references form $\mathcal{R}_k$ . Third, after fixing $\mathcal{R}_0,\ldots ,\mathcal{R}_{K - 1}$ , we update the chain membership $\mathcal{T}_0,\ldots ,\mathcal{T}_{K - 1}$ , by assigning each training instance $x$ to the $k$ th chain that maximizes the affinity score $\beta_{k}(x)$ . The second and third steps are iteratively repeated. Both steps decrease the same loss $\ell_{\mathrm{co}}$ in (9).
+
+The second and third steps are analogous to the centroid rule and the nearest neighbor rule in the $K$ -means clustering (Gersho & Gray, 1991), respectively. The second step determines representatives in each chain (or cluster), while the third step assigns each instance to an optimal chain according to the affinity. Furthermore, both steps decrease the same loss alternately.
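The analogy can be made concrete with a toy sketch. Here the pairwise term $D(\mathbf{q}^{xy}\|\mathbf{p}^{xy})$ is replaced by a scalar surrogate on 1-D 'features' (entirely our substitution, and the regularization constraint of line 7 is omitted), but the alternation has the same structure:

```python
# Toy sketch (ours) of the two alternating steps in Section 3.4. The pairwise
# KL term D(q^{xy}||p^{xy}) is replaced by a scalar surrogate d(x, y).
def d(x, y):                      # stands in for D(q||p) of the comparator
    return (x - y) ** 2

def affinity(x, refs):            # beta_k(x) = -sum_{y in R_k} D(q||p)
    return -sum(d(x, y) for y in refs)

chains = [[0.1, 0.4, 4.9], [5.2, 5.0, 0.2]]   # imperfect initial membership

for _ in range(3):
    # "Centroid" step: per chain, keep the 2 most reliable references,
    # i.e. those with the smallest total divergence to the chain's members.
    refs = [sorted(c, key=lambda y: sum(d(x, y) for x in c))[:2] for c in chains]
    # "Nearest neighbor" step: reassign every instance by affinity.
    data = [x for c in chains for x in c]
    chains = [[], []]
    for x in data:
        chains[max(range(2), key=lambda k: affinity(x, refs[k]))].append(x)

print(sorted(chains[0]), sorted(chains[1]))  # -> [0.1, 0.2, 0.4] [4.9, 5.0, 5.2]
```

As in $K$-means, both steps decrease the same objective, so the alternation converges; the two modes of the toy data end up in separate chains.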
+
+However, as described in Algorithm 1, we modify this iterative algorithm by including the membership refinement step in lines $10 \sim 12$ . Specifically, we train a $K$ -way type classifier using $\mathcal{T}_0, \ldots, \mathcal{T}_{K-1}$ . Then, we accept the type classification results to refine $\mathcal{T}_0, \ldots, \mathcal{T}_{K-1}$ . This refinement is necessary because the type classifier should be used in the test phase to determine the chain of an unseen instance. Therefore, it is desirable to select the references also after refining the chain membership. Also, in line 7, if we assign an instance $x$ to maximize $\beta_k(x)$ only, some classes may be assigned too few training instances, leading to data imbalance. To avoid this, we enforce the regularization constraint so that every class is assigned at least a predefined number of instances. This regularized membership update is described in Appendix A.
+
+# 4 AGE ESTIMATION
+
+We develop an age estimator based on the proposed order learning. Order learning is suitable for age estimation, since telling the older one between two people is easier than estimating each person's age directly (Chang et al., 2010; Zhang et al., 2017a).
+
+# 4.1 IMPLEMENTATION DETAILS
+
+It is less difficult to distinguish between a 5-year-old and a 10-year-old than between a 65-year-old and a 70-year-old. Therefore, in age estimation, we replace the categorization based on the
+
+Table 1: A summary of the balanced dataset, formed from MORPH II, AFAD, and UTK. An element $\frac{n}{m}$ means that, out of $m$ images in the original dataset, $n$ images are sampled for the balanced dataset.
+
+| Ethnic group | MORPH II Male | MORPH II Female | AFAD Male | AFAD Female | UTK Male | UTK Female | Balanced Male | Balanced Female |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| African | 4,022/36,772 | 4,446/5,748 | 0/0 | 0/0 | 2,047/2,319 | 1,871/2,209 | 6,069 | 6,317 |
+| Asian | 153/153 | 17/17 | 5,000/100,752 | 5,000/63,680 | 1,015/1,575 | 1,200/1,859 | 6,168 | 6,217 |
+| European | 1,852/7,992 | 2,602/2,602 | 0/0 | 0/0 | 4,487/5,477 | 3,437/4,601 | 6,339 | 6,039 |
+
+arithmetic difference in $(1)\sim (3)$ with that based on the geometric ratio as follows.
+
+$$
+x \succ y \quad \text{if } \log\theta(x) - \log\theta(y) > \tau_{\mathrm{age}}, \tag{10}
+$$
+
+$$
+x \approx y \quad \text{if } |\log\theta(x) - \log\theta(y)| \leq \tau_{\mathrm{age}}, \tag{11}
+$$
+
+$$
+x \prec y \quad \text{if } \log\theta(x) - \log\theta(y) < -\tau_{\mathrm{age}}, \tag{12}
+$$
+
+which represent 'older,' 'similar,' and 'younger.' The consistency in (4) is also modified accordingly.
+
+There are 5 reference images for each age class within the range [15, 80] in this work ($M = 330$, $N = 66$). Thus, a test image should be compared with 330 references. However, we develop a two-step approach, which performs at most 130 comparisons yet is as accurate as the method using all 330. The two-step estimation is employed in all experiments. It is described in Appendix B.
+
+We align all facial images using SeetaFaceEngine (Zhang et al., 2014) and resize them into $256 \times 256 \times 3$ . Then, we crop a resized image into $224 \times 224 \times 3$ . For the feature extractors in Figure 2, we use VGG16 without the FC layers (Simonyan & Zisserman, 2014). They yield 512-channel feature vectors. Then, the vectors are concatenated and input to the ternary classifier, which has three FC layers, yielding 512-, 512-, and 3-channel vectors sequentially. The 3-channel vector is normalized to the softmax probabilities of the three categories $\succ, \approx, \prec$ . In (10)~(12), $\tau_{\mathrm{age}}$ is set to 0.1.
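With $\tau_{\mathrm{age}} = 0.1$, the geometric criterion in (10)~(12) behaves as motivated above: a 5- and a 10-year-old are easily separated, while a 65- and a 70-year-old fall into the 'similar' category. A quick check (helper names are ours):

```python
import math

# tau_age = 0.1 as set in the paper; the function name is ours.
def relate_age(ax, ay, tau=0.1):
    d = math.log(ax) - math.log(ay)
    return 'older' if d > tau else ('younger' if d < -tau else 'similar')

assert relate_age(10, 5) == 'older'     # log(10/5)  ~ 0.693 > 0.1
assert relate_age(70, 65) == 'similar'  # log(70/65) ~ 0.074 <= 0.1
assert relate_age(5, 10) == 'younger'
```

The log makes the threshold act on the age ratio rather than the difference, so the 'similar' band widens with age, matching the observation that older pairs are harder to distinguish.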
+
+In $K\mathrm{CH}$ with $K \geq 2$ , the type (or chain) of a test image should be determined. Thus, we design a type classifier, which shares the feature extractor with the comparator. Similarly to the ternary classifier, the type classifier uses three FC layers, yielding 512-, 512-, and $K$ -channel vectors sequentially. The comparator and the type classifier are jointly trained.
+
+To initialize the feature extractors, we adopt the VGG16 parameters pre-trained on ImageNet (Deng et al., 2009). We randomly initialize all the other layers. We update the parameters using the Adam optimizer (Kingma & Ba, 2014). We set the learning rate to $10^{-4}$ for the first 70 epochs. Then, we select 5 references for each age class. Using the selected references, we fine-tune the network with a learning rate of $10^{-5}$ . We repeat the reference selection and the parameter fine-tuning up to 3 times.
+
+In the case of unsupervised chains, we enforce the regularization constraint (line 7 in Algorithm 1). By default, for each age, all chains are constrained to be assigned the same number of training images. If there are $L$ training images of $\theta$ -year-olds, the age classes $\theta$ in the $K$ chains are assigned $L / K$ images, respectively, according to the affinity scores $\beta_{k}(x)$ by Algorithm 2 in Appendix A.
+
+# 4.2 DATASETS AND EVALUATION METRICS
+
+MORPH II (Ricanek & Tesafaye, 2006) is the most popular age estimation benchmark, containing about 55,000 facial images in the age range [16, 77]. IMDB-WIKI (Rothe et al., 2018) is another dataset containing about 500,000 celebrity images obtained from IMDB and Wikipedia. It is sometimes used to pre-train age estimation networks. Optionally, we also select 150,000 clean images from IMDB-WIKI to pre-train the proposed pairwise comparator.
+
+Although several facial age datasets are available, most are biased to specific ethnic groups or genders. Such data imbalance restricts usability and degrades the generalization performance. Thus, we form a 'balanced dataset' from MORPH II, AFAD (Niu et al., 2016), and UTK (Zhang et al., 2017b). Table 1 shows how the balanced dataset is organized. Before sampling images from MORPH II, AFAD, and UTK, we rectify inconsistent labels by following the strategy in Yip et al. (2018). For each combination of gender in {female, male} and ethnic group in {African, Asian, European}, we sample about 6,000 images. Also, during the sampling, we attempt to make the age distribution as
+
+Table 2: Performance comparison on the MORPH II dataset: * means that the networks are pretrained on IMDB-WIKI, and † the values are read from the reported CS curves or measured by experiments. The best results are boldfaced, and the second best ones are underlined.
+
+| Method | Setting A MAE | Setting A CS (%) | Setting B MAE | Setting B CS (%) | Setting C (SE) MAE | Setting C (SE) CS (%) | Setting D (RS) MAE | Setting D (RS) CS (%) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| OHRank (Chang et al., 2011) | - | - | - | - | - | - | 6.07 | 56.3 |
+| OR-CNN (Niu et al., 2016) | - | - | - | - | - | - | 3.27 | $73.0^{\dagger}$ |
+| Ranking-CNN (Chen et al., 2017) | - | - | - | - | - | - | 2.96 | $85.0^{\dagger}$ |
+| DMTL (Han et al., 2018) | - | - | - | - | 3.00 | 85.3 | - | - |
+| DEX* (Rothe et al., 2018) | 2.68 | - | - | - | - | - | - | - |
+| DRFs (Shen et al., 2018) | 2.91 | 82.9 | 2.98 | - | - | - | 2.17 | 91.3 |
+| MO-CNN* (Tan et al., 2018) | 2.52 | $85.0^{\dagger}$ | 2.70 | $83.0^{\dagger}$ | - | - | - | - |
+| MV (Pan et al., 2018) | - | - | - | - | 2.80 | $87.0^{\dagger}$ | 2.41 | $90.0^{\dagger}$ |
+| MV* | - | - | - | - | 2.79 | - | 2.16 | - |
+| BridgeNet* (Li et al., 2019) | 2.38 | $91.0^{\dagger}$ | 2.63 | $86.0^{\dagger}$ | - | - | - | - |
+| Proposed (1CH) | 2.69 | 89.1 | 3.00 | 85.2 | 2.76 | 88.0 | 2.32 | 92.4 |
+| Proposed* (1CH) | 2.41 | 91.7 | 2.75 | 88.2 | 2.68 | 88.8 | 2.22 | 93.3 |
+
+uniform as possible within range [15, 80]. The balanced dataset is partitioned into training and test subsets with ratio $8:2$ .
+
+For performance assessment, we calculate the mean absolute error (MAE) (Lanitis et al., 2004) and the cumulative score (CS) (Geng et al., 2006). MAE is the average absolute error between predicted and ground-truth ages. Given a tolerance level $l$ , CS computes the percentage of test images whose absolute errors are less than or equal to $l$ . In this work, $l$ is fixed to 5, as done in Chang et al. (2011), Han et al. (2018), and Shen et al. (2018).
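The two metrics can be stated compactly in code (our own sketch, with hypothetical predictions and ground-truth ages):

```python
# Sketch (ours) of the evaluation metrics: MAE and CS with tolerance l = 5.
def mae(pred, true):
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def cs(pred, true, l=5):
    # Percentage of predictions whose absolute error is at most l.
    return 100.0 * sum(abs(p - t) <= l for p, t in zip(pred, true)) / len(true)

pred = [23, 30, 41, 60]   # hypothetical predicted ages
true = [20, 31, 50, 58]   # hypothetical ground-truth ages
print(mae(pred, true), cs(pred, true))  # -> 3.75 75.0
```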
+
+# 4.3 EXPERIMENTAL RESULTS
+
+Table 2 compares the proposed algorithm (1CH) with conventional algorithms on MORPH II. As evaluation protocols for MORPH II, we use four different settings, including the 5-fold subject-exclusive (SE) and the 5-fold random split (RS) (Chang et al., 2010; Guo & Wang, 2012). Appendix C.1 describes these four settings in detail and provides an extended version of Table 2.
+
+OHRank, OR-CNN, and Ranking-CNN are all based on ordinal regression. OHRank uses traditional features, yielding relatively poor performances, whereas OR-CNN and Ranking-CNN use CNN features. DEX, DRFs, MO-CNN, MV, and BridgeNet employ VGG16 as backbone networks. Among them, MV and BridgeNet achieve the state-of-the-art results, by employing the mean-variance loss and the gating networks, respectively. The proposed algorithm outperforms these algorithms in setting C, which is the most challenging task. Furthermore, in terms of CS, the proposed algorithm yields the best performances in all four settings. These outstanding performances indicate that order learning is an effective approach to age estimation.
+
+In Table 3, we analyze the performances of the proposed algorithm on the balanced dataset according to the number of hypothesized chains. We also implement and train the state-of-the-art MV on the balanced dataset and provide its results using supervised chains.
+
+Let us first analyze the performances of the proposed algorithm using 'supervised' chains. The MAE and CS scores on the balanced dataset are worse than those on MORPH II, since the balanced dataset contains more diverse data and thus is more challenging. By processing facial images separately according to the genders (2CH), the proposed algorithm reduces MAE by 0.05 and improves CS by $0.2\%$ in comparison with 1CH. Similar improvements are obtained by 3CH or 6CH, which consider the ethnic groups only or both gender and ethnic groups, respectively. In contrast, in the case of MV, multi-chain hypotheses sometimes degrade the performances; e.g., MV (6CH) yields a lower CS than MV (1CH). Regardless of the number of chains, the proposed algorithm trains a single comparator but uses a different set of references for each chain. The comparator is a ternary classifier. In contrast, MV (6CH) should train six different age estimators, each of which is a 66-way classifier, to handle different chains. Thus, their training is more challenging than that of the single ternary classifier. Note that, for the multi-chain hypotheses, the proposed algorithm first identifies the chain of a test image using the type classifiers, whose accuracies are about $98\%$ . In Table 3, these
+
+Table 3: Comparison of the proposed algorithm with MV on the balanced dataset. In MV and the supervised algorithm, multi-chain hypotheses divide data by the genders and/or the ethnic groups.
+
+| Method | MAE 1CH | MAE 2CH | MAE 3CH | MAE 6CH | CS(%) 1CH | CS(%) 2CH | CS(%) 3CH | CS(%) 6CH |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| MV (Pan et al., 2018) | 4.49 | 4.52 | 4.44 | 4.40 | 69.9 | 70.1 | 70.3 | 69.6 |
+| Proposed (supervised) | 4.23 | 4.18 | 4.19 | 4.18 | 73.2 | 73.4 | 73.4 | 73.4 |
+| Proposed (unsupervised) | - | 4.16 | 4.17 | 4.16 | - | 74.0 | 73.9 | 74.0 |
+
+
+Figure 4: Age estimation in 6CH: Only the references of ages from 15 to 50 are shown. Comparison results are color-coded: cyan, yellow, and magenta mean that the test subject is older than $(\succ)$ , similar to $(\approx)$ , and younger than $(\prec)$ a reference, respectively. The age is estimated correctly as 22.
+
+type classifiers are used to obtain the results of the proposed algorithm, whereas the ground-truth gender and ethnic group of each test image are used for MV.
+
+Figure 4 shows how to estimate an age in 6CH. In this test, the subject is a 22-year-old Asian male. He is compared with the references who are also Asian males. Using the comparison results, the age is correctly estimated as 22 by the MC rule in (5).
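The MC rule in (5) is not reproduced here, but its behavior can be sketched as follows. This is a hypothetical illustration assuming the rule picks the candidate age that agrees with the largest number of ternary comparison results, with a geometric 'similar' band on log ages; the function names are illustrative, not the paper's implementation.

```python
import math

def mc_rule(comparisons, tau_age=0.1, ages=range(15, 81)):
    """Pick the candidate age that agrees with the most ternary
    comparison results (a sketch of maximum-consistency estimation).

    comparisons: list of (ref_age, result) pairs, where result is
    'older', 'similar', or 'younger' (test subject vs. reference).
    """
    def predicted(theta, ref_age):
        # Assumed geometric similarity band: |log(theta) - log(ref)| <= tau_age.
        d = math.log(theta) - math.log(ref_age)
        if d > tau_age:
            return 'older'
        if d < -tau_age:
            return 'younger'
        return 'similar'

    def consistency(theta):
        # Count how many observed results the candidate age explains.
        return sum(predicted(theta, r) == res for r, res in comparisons)

    return max(ages, key=consistency)
```

For the example of Figure 4, comparisons against references aged 15 to 50 would, under this sketch, yield the estimate 22 because that age explains every observed comparison result.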
+
+Table 4 lists the MAE results for each test chain. Europeans yield poorer MAEs than Africans or Asians. However, this is not due to inherent differences between ethnic groups; it is rather caused by differences in image quality. As listed in Table 1, more European faces are sampled from UTK. The UTK faces were crawled from the Internet, and their quality is relatively low. Also, from the cross-chain test results using 6CH, some observations can be made:
+
+- Except for the As-F test chain, the lowest MAE is achieved by the references in the same chain.
+- Eu-M and Eu-F are mutually compatible. For Eu-M, the second best performance is obtained by the Eu-F references, and vice versa. On the other hand, some chains, such as Af-M and Eu-F, are less compatible for the purpose of the proposed age estimation.
+
+Table 3 also includes the performances of the proposed algorithm using 'unsupervised' chains. The unsupervised algorithm outperforms the supervised one, which indicates that the gender or ethnic group is not the best information to divide data for age estimation. As in the supervised case, 2CH, 3CH, and 6CH yield similar performances, which means that two chains are enough for the balanced set. Compared with MV (1CH), the unsupervised algorithm (2CH) improves the performances significantly, by 0.33 in terms of MAE and $4.1\%$ in terms of CS.
+
+Figure 5 shows how training images are divided into two chains in the unsupervised 2CH. During the membership update, for each age, each chain is regularized to include at least a certain percentage $(\kappa)$ of the training images. In the default mode, the two chains are assigned the same number of images with $\kappa = 50\%$ . However, Appendix C.3 shows that the performance is not very sensitive to $\kappa$ . At $\kappa = 10\%$ , MAE = 4.17 and CS = 73.7%. From Figure 5, we observe
+
+- The division of the chains is not clearly related to genders or ethnic groups. Regardless of genders or ethnic groups, about half of the images are assigned to chain 1 and the others to chain 2.
+- At $\kappa = 10\%$ , chain 1 mostly consists of middle ages, while chain 2 of 10s, 20s, 60s, and 70s.
+- At $\kappa = 50\%$ , there is no such strong age-dependent tendency. However, for some combinations of gender, ethnic group, and age band, the division is unequal. For example, for Asian females, a majority of 40s are assigned to chain 1, but a majority of 50s and 60s are assigned to chain 2.
+
+The unsupervised algorithm is designed to divide instances into multiple clusters when gender and ethnic group information is unavailable. As shown in Appendix C.3, different $\kappa$ 's yield various clustering results. Surprisingly, these different clusters still outperform the supervised algorithm.
+
+Table 4: Cross-chain tests on the balanced dataset (MAEs). For example, in 6CH, when African male references are used to estimate the ages of Asian females, the resultant MAE is 3.82.
+
+| Method | Reference chain | Af-M | Af-F | As-M | As-F | Eu-M | Eu-F |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 1CH | All | 3.87 | 3.82 | 3.98 | 3.79 | 5.21 | 4.69 |
+| 6CH | African-Male | 3.85 | 3.79 | 4.03 | 3.82 | 5.50 | 5.00 |
+| | African-Female | 4.02 | 3.65 | 4.18 | 3.85 | 5.42 | 5.02 |
+| | Asian-Male | 3.97 | 3.75 | 3.97 | 3.81 | 5.48 | 4.87 |
+| | Asian-Female | 4.06 | 3.78 | 4.05 | 3.78 | 5.69 | 4.89 |
+| | European-Male | 3.99 | 3.71 | 4.02 | 3.80 | 5.13 | 4.66 |
+| | European-Female | 4.45 | 3.79 | 4.11 | 3.77 | 5.21 | 4.65 |
+
+
+Figure 5: Distributions of training images in the unsupervised algorithm (2CH).
+
+
+
+For example, at $\kappa = 10\%$ , let us consider the age band of 20s and 30s. If the references in chain 2 are used to estimate the ages of people in chain 1, the average error is 4.6 years. Conversely, if the references in chain 1 are used for chain 2, the average error is $-5.4$ years. These opposite biases mean that people in chain 1 tend to look older than those in chain 2. These 'looking-older' people in their 20s and 30s compose the blue cluster (chain 1), together with most people in their 40s and 50s in Figure 5. In this case, 'looking-older' people in their 20s and 30s are separated from 'looking-younger' ones by the unsupervised algorithm. This is more effective than the gender-based or ethnic-group-based division of the supervised algorithm. Appendix C presents more results on age estimation.
+
+# 5 CONCLUSIONS
+
+Order learning was proposed in this work. In order learning, classes form an ordered set, and each class represents object instances of the same rank. Its goal is to determine the order graph of classes and to classify a test instance into one of the classes. To this end, we designed the pairwise comparator to learn ordering relationships between instances. We then determined the class of an instance by comparing it with reference instances in the same chain and maximizing the consistency among the comparison results. For age estimation, it was shown that the proposed algorithm yields state-of-the-art performance even in the case of the single-chain hypothesis. The performance is further improved when the order graph is divided into multiple disjoint chains.
+
+In this paper, we assumed that the order graph is composed of disjoint chains. However, there are more complicated graphs than disjoint chains, e.g. Figure 1(a). For example, it is hard to recognize an infant's sex from its facial image (Porter et al., 1984), but after puberty, males and females take divergent paths. This can be reflected by an order graph consisting of two chains that share common nodes up to a certain age. It is an open problem to generalize order learning to find an optimal order graph that is not restricted to disjoint chains.
+
+# ACKNOWLEDGEMENTS
+
+This work was supported by 'The Cross-Ministry Giga KOREA Project' grant funded by the Korea government (MSIT) (No. GK19P0200, Development of 4D reconstruction and dynamic deformable action model based hyperrealistic service technology), and by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. NRF-2018R1A2B3003896).
+
+# REFERENCES
+
+A. Bachem. Absolute pitch. J. Acoust. Soc. Am., 27(6):1180-1185, November 1955.
+Robert G. Bartle. The Elements of Real Analysis. John Wiley & Sons, 2nd edition, 1976.
+Mark Braverman and Elchanan Mossel. Noisy sorting without resampling. In Symp. Discrete Algorithms, pp. 268-276, 2008.
+Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. Learning to rank using gradient descent. In ICML, pp. 89-96, 2005.
+Kuang-Yu Chang, Chu-Song Chen, and Yi-Ping Hung. A ranking approach for human ages estimation based on face images. In ICPR, pp. 3396-3399, 2010.
+Kuang-Yu Chang, Chu-Song Chen, and Yi-Ping Hung. Ordinal hyperplanes ranker with cost sensitivities for age estimation. In CVPR, pp. 585-592, 2011.
+Shixing Chen, Caojin Zhang, Ming Dong, Jialiang Le, and Mike Rao. Using Ranking-CNN for age estimation. In CVPR, pp. 5183-5192, 2017.
+Weifeng Chen, Zhao Fu, Dawei Yang, and Jia Deng. Single-image depth perception in the wild. In NIPS, pp. 730-738, 2016.
+Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley, 2006.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pp. 248-255, 2009.
+Eibe Frank and Mark Hall. A simple approach to ordinal classification. In ECML, pp. 145-156, 2001.
+Huan Fu, Mingming Gong, Chaohui Wang, Kayhan Batmanghelich, and Dacheng Tao. Deep ordinal regression network for monocular depth estimation. In CVPR, pp. 2002-2011, 2018.
+Xin Geng, Zhi-Hua Zhou, Yu Zhang, Gang Li, and Honghua Dai. Learning from facial aging patterns for automatic age estimation. In ACM Multimedia, pp. 307-316, 2006.
+Xin Geng, Zhi-Hua Zhou, and Kate Smith-Miles. Automatic age estimation based on facial aging patterns. IEEE Trans. Pattern Anal. Mach. Intell., 29(12):2234-2240, December 2007.
+A. Gersho and R. M. Gray. Vector Quantization and Signal Compression. Kluwer Academic Publishers Norwell, 1991.
+Jonathan L. Gross and Jay Yellen. Graph Theory and Its Applications. Chapman & Hall, 2nd edition, 2006.
+Guodong Guo and Guowang Mu. Simultaneous dimensionality reduction and human age estimation via kernel partial least squares regression. In CVPR, pp. 657-664, 2011.
+Guodong Guo and Xiaolong Wang. A study on human age estimation under facial expression changes. In CVPR, pp. 2547-2553, 2012.
+Guodong Guo, Guowang Mu, Yun Fu, and Thomas S. Huang. Human age estimation using bio-inspired features. In CVPR, pp. 112-119, 2009.
+Hu Han, Anil K. Jain, Fang Wang, Shiguang Shan, and Xilin Chen. Heterogeneous face attribute estimation: A deep multi-task learning approach. IEEE Trans. Pattern Anal. Mach. Intell., 40 (11):2597-2609, November 2018.
+Ralf Herbrich, Thore Graepel, and Klaus Obermayer. Support vector learning for ordinal regression. In ICANN, pp. 97-102, 1999.
+Karel Hrbacek and Thomas Jech. Introduction to Set Theory. Marcel Dekker, Inc., 2nd edition, 1984.
+
+Ivan Huerta, Carles Fernández, Carlos Segura, Javier Hernando, and Andrea Prati. A deep analysis on age estimation. *Pattern Recog. Lett.*, 68:239–249, December 2015.
+Kevin G. Jamieson and Robert D. Nowak. Active ranking using pairwise comparisons. In NIPS, pp. 2240-2248, 2011.
+Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+Young Ho Kwon and Niels da Vitoria Lobo. Age classification from facial images. In CVPR, pp. 762-767, 1994.
+Andreas Lanitis, Chrisina Draganova, and Chris Christodoulou. Comparing different classifiers for automatic age estimation. IEEE Trans. Syst., Man, Cybern. B, Cybern., 34(1):621-628, February 2004.
+Jae-Han Lee and Chang-Su Kim. Monocular depth estimation using relative depth maps. In CVPR, pp. 9729-9738, 2019a.
+Jun-Tae Lee and Chang-Su Kim. Image aesthetic assessment based on pairwise comparison - a unified approach to score regression, binary classification, and personalization. In ICCV, pp. 1191-1200, 2019b.
+Ling Li and Hsuan-Tien Lin. Ordinal regression by extended binary classification. In NIPS, pp. 865-872, 2007.
+Wanhua Li, Jiwen Lu, Jianjiang Feng, Chunjing Xu, Jie Zhou, and Qi Tian. BridgeNet: A continuity-aware probabilistic network for age estimation. In CVPR, pp. 1145-1154, 2019.
+Tie-Yan Liu. Learning to rank for information retrieval. Foundations and Trends in Information Retrieval, 3(3):225-331, 2009.
+Brian McFee and Gert Lanckriet. Metric learning to rank. In ICML, pp. 775-782, 2010.
+Sahand Negahban, Sewoong Oh, and Devavrat Shah. Iterative ranking from pair-wise comparisons. In NIPS, pp. 2474-2482, 2012.
+Zhenxing Niu, Mo Zhou, Le Wang, Xinbo Gao, and Gang Hua. Ordinal regression with multiple output CNN for age estimation. In CVPR, pp. 4920-4928, 2016.
+Hongyu Pan, Hu Han, Shiguang Shan, and Xilin Chen. Mean-variance loss for deep age estimation from a face. In CVPR, pp. 5285-5294, 2018.
+Gabriel Panis, Andreas Lanitis, Nicholas Tsapatsoulis, and Timothy F. Cootes. Overview of research on facial ageing using the FG-NET ageing database. IET Biometrics, 5(2):37-46, 2016.
+Richard H. Porter, Jennifer M. Cernoch, and Rene D. Balogh. Recognition of neonates by facial-visual characteristics. Pediatrics, 74(4):501-504, October 1984.
+Karl Ricanek and Tamirat Tesafaye. MORPH: A longitudinal image database of normal adult age-progression. In FGR, pp. 341-345, 2006.
+Rasmus Rothe, Radu Timofte, and Luc Van Gool. Deep expectation of real and apparent age from a single image without facial landmarks. Int. J. Comput. Vis., 126(2):144-157, April 2018.
+Thomas L. Saaty. A scaling method for priorities in hierarchical structures. J. Math. Psychol., 15 (3):234-281, June 1977.
+Amartya Sen. Quasi-transitivity, rational choice and collective decisions. Rev. Econ. Stud., 36(3): 381-393, July 1969.
+Wei Shen, Yilu Guo, Yan Wang, Kai Zhao, Bo Wang, and Alan Yuille. Deep regression forests for age estimation. In CVPR, pp. 2304-2313, 2018.
+
+Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2014.
+Zichang Tan, Shuai Zhou, Jun Wan, Zhen Lei, and Stan Z. Li. Age estimation based on a single network with soft softmax of aging modeling. In ACCV, 2016.
+Zichang Tan, Jun Wan, Zhen Lei, Ruicong Zhi, Guodong Guo, and Stan Z. Li. Efficient group-n encoding and decoding for facial age estimation. IEEE Trans. Pattern Anal. Mach. Intell., 40(11): 2610-2623, November 2018.
+Ming-Feng Tsai, Tie-Yan Liu, Tao Qin, Hsin-Hsi Chen, and Wei-Ying Ma. FRank: A ranking method with fidelity loss. In SIGIR, pp. 383-390, 2007.
+Fabian L. Wauthier, Michael I. Jordan, and Nebojsa Jojic. Efficient ranking from pairwise comparisons. In ICML, pp. 109-117, 2013.
+Kilian Q. Weinberger, John Blitzer, and Lawrence K. Saul. Distance metric learning for large margin nearest neighbor classification. In NIPS, pp. 1473-1480, 2006.
+Eric P. Xing, Andrew Y. Ng, Michael I. Jordan, and Stuart Russell. Distance metric learning with application to clustering with side-information. In NIPS, pp. 521-528, 2003.
+Dong Yi, Zhen Lei, and Stan Z. Li. Age estimation by multi-scale convolutional network. In ACCV, 2014.
+Benjamin Yip, Garrett Bingham, Katherine Kempfert, Jonathan Fabish, Troy Kling, Cuixian Chen, and Yishi Wang. Preliminary studies on a large face database. In ICBD, pp. 2572-2579, 2018.
+ByungIn Yoo, Youngjun Kwak, Youngsung Kim, Changkyu Choi, and Junmo Kim. Deep facial age estimation using conditional multitask learning with weak label expansion. IEEE Signal Process. Lett., 25(6):808-812, June 2018.
+Chao Zhang, Shuaicheng Liu, Xun Xu, and Ce Zhu. C3AE: Exploring the limits of compact model for age estimation. In CVPR, pp. 12587-12596, 2019.
+Jie Zhang, Shiguang Shan, Meina Kan, and Xilin Chen. Coarse-to-fine auto-encoder networks (CFAN) for real-time face alignment. In ECCV, pp. 1-16, 2014.
+Yunxuan Zhang, Li Liu, Cheng Li, and Chen Change Loy. Quantifying facial age by posterior of age comparisons. In BMVC, 2017a.
+Zhifei Zhang, Yang Song, and Hairong Qi. Age progression/regression by conditional adversarial autoencoder. In CVPR, pp. 5810-5818, 2017b.
+
+# A REGULARIZED MEMBERSHIP UPDATE
+
+During the chain membership update in Algorithm 1, we assign an instance $x$ to chain $k$ to maximize $\beta_{k}(x)$ subject to the regularization constraint. As mentioned in Section 4.1, in age estimation, this regularization is enforced for each age. Let $\mathcal{X}$ denote the set of $\theta$ -year-olds for a certain $\theta$ . Also, let $\mathcal{K} = \{0,1,\dots,K - 1\}$ be the set of chains. Suppose that we should assign at least a certain number $(L)$ of instances in $\mathcal{X}$ to each chain. This is done by calling RegularAssign( $\mathcal{K}, \mathcal{X}, L$ ) in Algorithm 2, which is a recursive function. Algorithm 2 yields the membership function $c(x)$ as output. For example, $c(x) = 1$ means that $x$ belongs to chain 1.
+
+Algorithm 2 RegularAssign $(\mathcal{K},\mathcal{X},L)$
+Input: $\mathcal{K} =$ set of chains, $\mathcal{X} =$ set of instances, and $L =$ minimum number.
+1: for each $k\in \mathcal{K}$ do $\triangleright$ Initialize chains
+2: $\mathcal{X}_k = \emptyset$
+3: end for
+4: for each $x\in \mathcal{X}$ do $\triangleright$ Irregular partitioning
+5: $c(x) = \arg \max_{k\in \mathcal{K}}\beta_k(x)$
+6: $\mathcal{X}_{c(x)} = \mathcal{X}_{c(x)}\cup \{x\}$
+7: end for
+8: $k_{m} = \arg \min_{k\in \mathcal{K}}|\mathcal{X}_{k}|$ $\triangleright$ Chain of the minimum size
+9: if $|\mathcal{X}_{k_m}|\geq L$ then
+10: return
+11: else
+12: $\mathcal{X} = \mathcal{X} - \mathcal{X}_{k_m}$
+13: while $|\mathcal{X}_{k_m}| < L$ do $\triangleright$ Increase $\mathcal{X}_{k_m}$
+14: $x^{\prime} = \arg \max_{x\in \mathcal{X}}\beta_{k_m}(x)$
+15: $\mathcal{X} = \mathcal{X} - \{x^{\prime}\}$
+16: $\mathcal{X}_{k_m} = \mathcal{X}_{k_m}\cup \{x^{\prime}\}$
+17: end while
+18: RegularAssign $(\mathcal{K} - \{k_m\} ,\mathcal{X},L)$ $\triangleright$ Recursion
+19: end if
+Output: Membership function $c(x)$
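Algorithm 2 can be sketched in Python as follows. This is a minimal illustration rather than the authors' code; `beta` is assumed to be a lookup table giving the membership score $\beta_k(x)$ of instance `x` for chain `k`.

```python
def regular_assign(chains, instances, L, beta, c=None):
    """Recursively assign instances to chains so that every chain
    keeps at least L members (a sketch of Algorithm 2)."""
    if c is None:
        c = {}
    if not chains:
        return c
    # Lines 1-7: irregular partitioning by the best membership score.
    groups = {k: [] for k in chains}
    for x in instances:
        k = max(chains, key=lambda k: beta[(x, k)])
        c[x] = k
        groups[k].append(x)
    # Line 8: chain of the minimum size.
    k_m = min(chains, key=lambda k: len(groups[k]))
    if len(groups[k_m]) >= L:
        return c  # Every chain already satisfies the constraint.
    # Lines 12-17: grow the smallest chain with the best-scoring
    # instances drawn from the other chains.
    rest = sorted((x for x in instances if c[x] != k_m),
                  key=lambda x: beta[(x, k_m)], reverse=True)
    while len(groups[k_m]) < L and rest:
        x = rest.pop(0)
        c[x] = k_m
        groups[k_m].append(x)
    # Line 18: repartition the remaining instances among the other chains.
    return regular_assign([k for k in chains if k != k_m], rest, L, beta, c)
```

Note that the recursion repartitions the remaining instances from scratch, exactly as the pseudocode calls RegularAssign on $\mathcal{K} - \{k_m\}$ and the reduced $\mathcal{X}$.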
+
+# B TWO-STEP ESTIMATION
+
+There are 5 reference images for each age within the range [15, 80] in this work. Thus, for the age estimation of a test image using the MC rule in (5), the test image would have to be compared with $M = 330$ reference images. However, we reduce the number of comparisons using a two-step approach. First, the test image is compared with the 35 references of ages 15, 25, ..., 75 only, and a rough age estimate $\hat{\theta}_{1}$ is obtained using the MC rule. Second, it is compared with the 105 references of all ages within $[\hat{\theta}_{1} - 10, \hat{\theta}_{1} + 10]$ , and the final estimate $\hat{\theta}_{2}$ is obtained. Since there are at least 10 common references in the first and second steps, the two-step estimation requires at most 130 comparisons.
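The comparison count above can be verified with a short sketch (the helper name is hypothetical; it assumes 5 references per age in [15, 80] and counts each reference at most once across the two steps):

```python
def two_step_comparisons(theta1, refs_per_age=5):
    """Number of distinct references compared in the two-step
    estimation, given the first-step estimate theta1."""
    coarse = set(range(15, 76, 10))  # coarse ages 15, 25, ..., 75
    fine = set(a for a in range(15, 81)
               if theta1 - 10 <= a <= theta1 + 10)
    # References at ages common to both steps are compared only once.
    return len(coarse | fine) * refs_per_age
```

Scanning all first-step estimates confirms that the worst case is 130 comparisons, far fewer than the 330 required by exhaustive comparison.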
+
+# C MORE EXPERIMENTS
+
+# C.1 PERFORMANCE COMPARISON ON MORPH II
+
+Four experimental settings are used for performance comparison on MORPH II (Ricanek & Tesafaye, 2006).
+
+- Setting A: 5,492 images of Europeans are randomly selected and then divided into training and testing sets with ratio 8:2 (Chang et al., 2011).
+- Setting B: About 21,000 images are randomly selected, while restricting the ratio between Africans and Europeans to 1:1 and that between females and males to 1:3. They are divided into three subsets (S1, S2, S3). The training and testing are done under two sub-settings (Guo & Mu, 2011).
+  - (B1) training on S1, testing on $\mathrm{S}2 + \mathrm{S}3$
+  - (B2) training on S2, testing on $\mathrm{S}1 + \mathrm{S}3$
+
+- Setting C (SE): The entire dataset is randomly split into five folds, subject to the constraint that the same person's images should belong to only one fold, and the 5-fold cross-validation is performed.
+
+- Setting D (RS): The entire dataset is randomly split into five folds without any constraint, and the 5-fold cross-validation is performed.
+
+Table 5 is an extended version of Table 2. It includes the results of more conventional algorithms.
+Table 5: Performance comparison on the MORPH II dataset: * means that the networks are pretrained on IMDB-WIKI, and † the values are read from the reported CS curves or measured by experiments. The best results are boldfaced, and the second best ones are underlined.
+
+| Method | A: MAE | A: CS(%) | B: MAE | B: CS(%) | C (SE): MAE | C (SE): CS(%) | D (RS): MAE | D (RS): CS(%) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| RED-SVM (Chang et al., 2010) | - | - | - | - | - | - | 6.49 | 49.0† |
+| OHRank (Chang et al., 2011) | - | - | - | - | - | - | 6.07 | 56.3 |
+| KPLS (Guo & Mu, 2011) | - | - | 4.18 | - | - | - | - | - |
+| CPLF (Yi et al., 2014) | - | - | 3.63 | - | - | - | - | - |
+| Huerta et al. (Huerta et al., 2015) | - | - | - | - | 3.88 | - | - | - |
+| OR-CNN (Niu et al., 2016) | - | - | - | - | - | - | 3.27 | 73.0† |
+| Tan et al. (Tan et al., 2016) | - | - | 3.03 | - | - | - | - | - |
+| Ranking-CNN (Chen et al., 2017) | - | - | - | - | - | - | 2.96 | 85.0† |
+| DMTL (Han et al., 2018) | - | - | - | - | 3.00 | 85.3 | - | - |
+| DEX (Rothe et al., 2018) | 3.25 | - | - | - | - | - | - | - |
+| DEX* | 2.68 | - | - | - | - | - | - | - |
+| CMT (Yoo et al., 2018) | - | - | - | - | 2.91 | - | - | - |
+| DRFs (Shen et al., 2018) | 2.91 | 82.9 | 2.98 | - | - | - | 2.17 | 91.3 |
+| MO-CNN (Tan et al., 2018) | 2.93 | 83.0† | 2.86 | 82.0† | - | - | - | - |
+| MO-CNN* | 2.52 | 85.0† | 2.70 | 83.0† | - | - | - | - |
+| MV (Pan et al., 2018) | - | - | - | - | 2.80 | 87.0† | 2.41 | 90.0† |
+| MV* | - | - | - | - | 2.79 | - | 2.16 | - |
+| C3AE (Zhang et al., 2019) | - | - | - | - | - | - | 2.78 | - |
+| C3AE* | - | - | - | - | - | - | 2.75 | - |
+| BridgeNet* (Li et al., 2019) | 2.38 | 91.0† | 2.63 | 86.0† | - | - | - | - |
+| Proposed (1CH) | 2.69 | 89.1 | 3.00 | 85.2 | 2.76 | 88.0 | 2.32 | 92.4 |
+| Proposed* (1CH) | 2.41 | 91.7 | 2.75 | 88.2 | 2.68 | 88.8 | 2.22 | 93.3 |
+
+# C.2 GENERALIZATION PERFORMANCE OF COMPARATOR ON FG-NET
+
+We assess the proposed age estimator (1CH) on the FG-NET database (Panis et al., 2016). FG-NET is a relatively small dataset, composed of 1,002 facial images of 82 subjects. Ages range from 0 to 69. For FG-NET, the leave-one-person-out (LOPO) approach is often used for evaluation. In other words, to perform tests on each subject, an estimator is trained using the remaining 81 subjects. Then, the results are averaged over all 82 subjects.
+
+In order to assess the generalization performance, we do not retrain the comparator on the FG-NET data. Instead, we fix the comparator trained on the balanced dataset and just select references from the remaining subjects' faces in each LOPO test. For the comparator, the arithmetic scheme in (1)~(3) is tested as well as the default geometric scheme in (10)~(12).
+
+For comparison, MV (Pan et al., 2018) is tested, but it is trained for each LOPO test.
+
+Table 6 summarizes the comparison results. MV provides better average performances on the entire age range [0, 69] than the proposed algorithm does. This is because the balanced dataset does not include subjects of ages between 0 and 14. If we reduce the test age range to [15, 69], the proposed algorithm outperforms MV, even though the comparator is not retrained. These results indicate that the comparator generalizes well to unseen data, as long as the training images cover a desired age range. Also, note that the geometric scheme provides better performances than the arithmetic scheme.
+
+Table 6: Performance comparison on FG-NET. The average performances over test ages within ranges [0, 69] and [15, 69] are reported, respectively.
+
+| Method | MAE (0 to 69) | CS(%) (0 to 69) | MAE (15 to 69) | CS(%) (15 to 69) |
+| --- | --- | --- | --- | --- |
+| MV | 3.98 | 79.5 | 6.00 | 63.7 |
+| Proposed (1CH, Geometric τage = 0.15) | 8.04 | 41.4 | 4.90 | 64.3 |
+| Proposed (1CH, Arithmetic τ = 7) | 9.26 | 33.1 | 5.32 | 64.1 |
+
+
+Figure 6 compares MAEs as a function of the test age. Again, within the covered range [15, 69], the proposed algorithm significantly outperforms MV, especially when test subjects are older than 45.
+Figure 6: MAEs of the proposed algorithm (1CH) and MV on FG-NET in terms of the test age.
+
+# C.3 PERFORMANCE ACCORDING TO $\kappa$
+
+Table 7: MAE and CS performances of the unsupervised algorithm (2CH) on the balanced dataset, according to the minimum percentage $(\kappa)$ constraint during the regularized membership update. The performances are not very sensitive to $\kappa$ . The best performances are achieved in the default mode, i.e. at $\kappa = 50\%$ .
+
+| κ (%) | 10 | 20 | 30 | 40 | 50 |
+| --- | --- | --- | --- | --- | --- |
+| MAE | 4.17 | 4.18 | 4.17 | 4.16 | 4.16 |
+| CS (%) | 73.7 | 73.6 | 73.6 | 73.7 | 74.0 |
+
+
+
+
+
+
+Figure 7: Distributions of training images in the unsupervised algorithm (2CH) at $\kappa = 20\%$ , $30\%$ , and $40\%$ . From Figures 5 and 7, we see that stronger age-dependent tendencies are observed, as $\kappa$ gets smaller.
+
+# C.4 PERFORMANCE ACCORDING TO THRESHOLDS $\tau$ AND $\tau_{\mathrm{age}}$
+
+The ordering relationship between two instances can be categorized via the arithmetic scheme in $(1)\sim (3)$ using a threshold $\tau$ or the geometric scheme in $(10)\sim (12)$ using a threshold $\tau_{\mathrm{age}}$ . Table 8 lists the performances of the proposed algorithm (1CH) according to these thresholds. We see that the geometric scheme outperforms the arithmetic scheme in general. The best performance is achieved with $\tau_{\mathrm{age}} = 0.1$ , which is used in all experiments in the main paper. Note that the scores are poorer than those in Table 3, since the comparator is trained for a smaller number of epochs to facilitate this test. At $\tau_{\mathrm{age}} = 0.1$ , two teenagers are declared not to be 'similar to' each other if their age difference is larger than about 1 year. Also, two people in their 40s are not 'similar' if their age difference is larger than about 5 years.
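Since (10)~(12) are not reproduced here, the following is one plausible reading of the geometric scheme, roughly consistent with the numbers quoted above: two ages are 'similar' when their log-age gap is within $\tau_{\mathrm{age}}$, so the similarity band widens approximately linearly with age.

```python
import math

def similar(theta_x, theta_y, tau_age=0.1):
    """Assumed geometric rule: 'similar' when |log x - log y| <= tau_age."""
    return abs(math.log(theta_x) - math.log(theta_y)) <= tau_age

def band_halfwidth(theta, tau_age=0.1):
    """Approximate half-width of the similarity band around age theta:
    theta * (e**tau_age - 1), i.e. roughly tau_age * theta."""
    return theta * (math.exp(tau_age) - 1)
```

Under this reading, a 40-year-old is 'similar' to a 44-year-old but not to a 45-year-old, matching the roughly 5-year band quoted for people in their 40s; for the youngest ages in the dataset the band shrinks to one or two years.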
+
+Table 8: The performances of the proposed algorithm (1CH) on the balanced dataset according to thresholds $\tau$ and $\tau_{\mathrm{age}}$ .
+
+| | τ = 0 | τ = 2 | τ = 5 | τ = 7 | τ = 9 | τage = 0.05 | τage = 0.10 | τage = 0.15 | τage = 0.20 | τage = 0.25 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| MAE | 4.42 | 4.36 | 4.33 | 4.32 | 4.33 | 4.38 | 4.31 | 4.32 | 4.41 | 4.41 |
+| CS(%) | 71.0 | 71.7 | 72.2 | 72.2 | 72.5 | 71.4 | 72.8 | 72.2 | 71.8 | 71.7 |
+
+# C.5 PERFORMANCE ACCORDING TO NUMBER OF REFERENCES
+
+Table 9: The performances of the proposed algorithm (supervised) on the balanced dataset according to the number of references for each age class $(M / N)$ . In general, the performances get better with more references. However, the performances are not very sensitive to $M / N$ . They saturate when $M / N \geq 5$ . Therefore, we set $M / N = 5$ in this work.
+
+| M/N | 1CH MAE | 1CH CS(%) | 2CH MAE | 2CH CS(%) | 3CH MAE | 3CH CS(%) | 6CH MAE | 6CH CS(%) | Avg. MAE | Avg. CS(%) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 | 4.321 | 72.43 | 4.180 | 72.98 | 4.199 | 73.20 | 4.168 | 73.76 | 4.217 | 73.09 |
+| 2 | 4.318 | 72.43 | 4.182 | 73.00 | 4.200 | 73.23 | 4.170 | 73.64 | 4.218 | 73.08 |
+| 3 | 4.313 | 72.61 | 4.175 | 73.04 | 4.200 | 73.29 | 4.170 | 73.68 | 4.214 | 73.16 |
+| 4 | 4.311 | 72.58 | 4.178 | 72.96 | 4.197 | 73.24 | 4.176 | 73.62 | 4.215 | 73.10 |
+| 5 | 4.309 | 72.61 | 4.177 | 73.02 | 4.197 | 73.27 | 4.168 | 73.72 | 4.213 | 73.16 |
+| 6 | 4.308 | 72.66 | 4.178 | 73.01 | 4.195 | 73.20 | 4.170 | 73.76 | 4.213 | 73.16 |
+| 7 | 4.308 | 72.70 | 4.179 | 73.00 | 4.196 | 73.24 | 4.167 | 73.81 | 4.213 | 73.19 |
+| 8 | 4.306 | 72.69 | 4.178 | 72.94 | 4.196 | 73.31 | 4.172 | 73.71 | 4.213 | 73.16 |
+| 9 | 4.305 | 72.63 | 4.180 | 73.04 | 4.194 | 73.36 | 4.172 | 73.72 | 4.213 | 73.19 |
+| 10 | 4.305 | 72.65 | 4.180 | 73.07 | 4.193 | 73.35 | 4.173 | 73.75 | 4.213 | 73.21 |
+
+# C.6 REFERENCE IMAGES
+
+Figure 8 shows all references in the supervised 6CH.
+
+6CH Reference images
+
+Figure 8: All reference images in the supervised 6CH. For some ages in certain chains, the balanced dataset includes fewer than 5 faces. In such cases, there are fewer than 5 references.
+
+# C.7 AGE ESTIMATION EXAMPLES
+
+
+(a) Success cases
+
+
+(b) Failure cases
+Figure 9: Age estimation results of the proposed algorithm (supervised 6CH). For each face, the estimated label is provided together with the ground-truth in parentheses. In (a), the ages are estimated correctly. In the last row, third column, the ethnic group is misclassified. This happens rarely. In (b), failure cases are provided. These are hard examples due to various challenging factors, such as low quality photographs and occlusion by hairs, hats, hands, and stickers.
\ No newline at end of file
diff --git a/orderlearninganditsapplicationtoageestimation/images.zip b/orderlearninganditsapplicationtoageestimation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1ade0c823b02ff18279afe3fc9e2e574fcb70c3a
--- /dev/null
+++ b/orderlearninganditsapplicationtoageestimation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba226cf3b04b18580eeadf81e1168070b70810392f5ad48223ecbba14af52564
+size 1377448
diff --git a/orderlearninganditsapplicationtoageestimation/layout.json b/orderlearninganditsapplicationtoageestimation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ebeefa4f5e89c6f245c4276df468c30316414be4
--- /dev/null
+++ b/orderlearninganditsapplicationtoageestimation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7cc12366ebce533afbe1bf1a9066c3a528c68f00cf6faa45369923619b710ce6
+size 744916
diff --git a/overlearningrevealssensitiveattributes/ac51d5d5-f713-49b7-864f-54ead314516d_content_list.json b/overlearningrevealssensitiveattributes/ac51d5d5-f713-49b7-864f-54ead314516d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..691b4e11213ca86ddf985045d55e51b3c7dcd674
--- /dev/null
+++ b/overlearningrevealssensitiveattributes/ac51d5d5-f713-49b7-864f-54ead314516d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b430fedf824bdf634656552b12100b2265798b215a09f4ae08779bf364e07990
+size 76789
diff --git a/overlearningrevealssensitiveattributes/ac51d5d5-f713-49b7-864f-54ead314516d_model.json b/overlearningrevealssensitiveattributes/ac51d5d5-f713-49b7-864f-54ead314516d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d689b81841d6ccd88b5ac4f14c892862537de3cc
--- /dev/null
+++ b/overlearningrevealssensitiveattributes/ac51d5d5-f713-49b7-864f-54ead314516d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:61ac9be96cd3ab38624f5f32d93c3bdfc13b8f1f58b34bafd5b2e0124f065c04
+size 97041
diff --git a/overlearningrevealssensitiveattributes/ac51d5d5-f713-49b7-864f-54ead314516d_origin.pdf b/overlearningrevealssensitiveattributes/ac51d5d5-f713-49b7-864f-54ead314516d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..487821e360f0ca4f05a98b98ce7bb03f8a59cdfe
--- /dev/null
+++ b/overlearningrevealssensitiveattributes/ac51d5d5-f713-49b7-864f-54ead314516d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c48a778e9eea3a643be17f7289477bacea2996af6187c58cc2f00fdd6d539ad
+size 299013
diff --git a/overlearningrevealssensitiveattributes/full.md b/overlearningrevealssensitiveattributes/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4be1adfc8cd0363c4b76a3ad0d1aee700fe1c259
--- /dev/null
+++ b/overlearningrevealssensitiveattributes/full.md
@@ -0,0 +1,316 @@
+# OVERLEARNING REVEALS SENSITIVE ATTRIBUTES
+
+Congzheng Song
+
+Cornell University
+
+cs2296@cornell.edu
+
+Vitaly Shmatikov
+
+Cornell Tech
+
+shmat@cs.cornell.edu
+
+# ABSTRACT
+
+"Overlearning" means that a model trained for a seemingly simple objective implicitly learns to recognize attributes and concepts that are (1) not part of the learning objective, and (2) sensitive from a privacy or bias perspective. For example, a binary gender classifier of facial images also learns to recognize races—even races that are not represented in the training data—and identities.
+
+We demonstrate overlearning in several vision and NLP models and analyze its harmful consequences. First, inference-time representations of an overlearned model reveal sensitive attributes of the input, breaking privacy protections such as model partitioning. Second, an overlearned model can be "re-purposed" for a different, privacy-violating task even in the absence of the original training data.
+
+We show that overlearning is intrinsic for some tasks and cannot be prevented by censoring unwanted attributes. Finally, we investigate where, when, and why overlearning happens during model training.
+
+# 1 INTRODUCTION
+
+We demonstrate that representations learned by deep models when training for seemingly simple objectives reveal privacy- and bias-sensitive attributes that are not part of the specified objective. These unintentionally learned concepts are neither finer- nor coarser-grained versions of the model's labels, nor are they statistically correlated with them. We call this phenomenon overlearning. For example, a binary classifier trained to determine the gender of a facial image also learns to recognize races (including races not represented in the training data) and even the identities of individuals.
+
+Overlearning has two distinct consequences. First, the model's inference-time representation of an input reveals the input's sensitive attributes. For example, a facial recognition model's representation of an image reveals whether two specific individuals appear together in it. Overlearning thus breaks inference-time privacy protections based on model partitioning (Osia et al., 2018; Chi et al., 2018; Wang et al., 2018). Second, we develop a new, transfer learning-based technique to "re-purpose" a model trained for a benign task into a model for a different, privacy-violating task. This shows the inadequacy of privacy regulations that rely on explicit enumeration of learned attributes.
+
+Overlearning is intrinsic for some tasks, i.e., it is not possible to prevent a model from learning sensitive attributes. We show that if these attributes are censored (Xie et al., 2017; Moyer et al., 2018), the censored models either fail to learn their specified tasks, or still leak sensitive information. We develop a new de-censoring technique to extract information from censored representations. We also show that overlearned representations enable recognition of sensitive attributes that are not present in the training data. Such attributes cannot be censored using any known technique. This shows the inadequacy of censoring as a privacy protection technology.
+
+To analyze where and why overlearning happens, we empirically show how general features emerge in the lower layers of models trained for simple objectives and conjecture an explanation based on the complexity of the training data.
+
+# 2 BACKGROUND
+
+We focus on supervised deep learning. Given an input $x$ , a model $M$ is trained to predict the target $y$ using a discriminative approach. We represent the model $M = C \circ E$ as a feature extractor
+
+(encoder) $E$ and classifier $C$ . The representation $z = E(x)$ is passed to $C$ to produce the prediction by modeling $p(y|z) = C(z)$ . Since $E$ can have multiple layers of representation, we use $E_{l}(x) = z_{l}$ to denote the model's internal representation at layer $l$ ; $z$ is the representation at the last layer.
+
+Model partitioning splits the model into a local, on-device part and a remote, cloud-based part to improve scalability of inference (Lane & Georgiev, 2015; Kang et al., 2017) and protect privacy of inputs into the model (Li et al., 2017; Osia et al., 2018; Chi et al., 2018; Wang et al., 2018). For privacy, the local part of the model computes a representation, censors it as described below, and sends it to the cloud part, which computes the model's output.
+
+Censoring representations. The goal is to encode input $x$ into a representation $z$ that does not reveal unwanted properties of $x$ , yet is expressive enough to predict the task label $y$ . Censoring has been used to achieve transform-invariant representations for computer vision, bias-free representations for fair machine learning, and privacy-preserving representations that hide sensitive attributes.
+
+A straightforward censoring approach is based on adversarial training (Goodfellow et al., 2014). It involves a mini-max game between a discriminator $D$ trying to infer $s$ from $z$ during training and an encoder and classifier trying to infer the task label $y$ while minimizing the discriminator's success (Edwards & Storkey, 2016; Iwasawa et al., 2016; Hamm, 2017; Xie et al., 2017; Li et al., 2018; Coavoux et al., 2018; Elazar & Goldberg, 2018). The game is formulated as:
+
+$$
+\min _ {E, C} \max _ {D} \mathbb {E} _ {x, y, s} [ \gamma \log p (s | z = E (x)) - \log p (y | z = E (x)) ] \tag {1}
+$$
+
+where $\gamma$ balances the two log likelihood terms. The inner optimization maximizes $\log p(s|z = E(x))$ , i.e., the discriminator's prediction of the sensitive attribute $s$ given a representation $z$ . The outer optimization, on the other hand, trains the encoder and classifier to minimize the log likelihood of the discriminator predicting $s$ and maximize that of predicting the task label $y$ .
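As a concrete sketch, the value of the objective in Equation 1 for one batch can be computed as below. Everything here is illustrative, not the authors' implementation: `W_c` and `W_d` are hypothetical linear stand-ins for the classifier $C$ and discriminator $D$, which in a real pipeline would be neural networks updated in alternating min-max steps.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def censoring_objective(z, y, s, W_c, W_d, gamma=1.0):
    """Batch value of E[gamma * log p(s|z) - log p(y|z)] from Eq. (1).

    z: (n, d) representations from the encoder E(x);
    y: (n,) task labels; s: (n,) sensitive-attribute labels;
    W_c, W_d: hypothetical linear weights for C and D.
    The encoder and classifier minimize this value; D maximizes it.
    """
    log_py = np.log(softmax(z @ W_c))  # log p(y|z) from classifier C
    log_ps = np.log(softmax(z @ W_d))  # log p(s|z) from discriminator D
    idx = np.arange(len(y))
    return float(np.mean(gamma * log_ps[idx, s] - log_py[idx, y]))

# Toy batch: 4 examples, 3-dim representations, 2 task / 2 sensitive classes.
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 3))
loss = censoring_objective(z, np.array([0, 1, 0, 1]), np.array([1, 0, 1, 0]),
                           rng.normal(size=(3, 2)), rng.normal(size=(3, 2)))
```

In adversarial training, one gradient step on $D$ (ascending) alternates with one step on $E$ and $C$ (descending) over this quantity.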
+
+Another approach casts censoring as a single information-theoretical objective. The requirement that $z$ not reveal $s$ can be formalized as an independence constraint $z \perp s$ , but independence is intractable to measure in practice, thus the requirement is relaxed to a constraint on the mutual information between $z$ and $s$ (Osia et al., 2018; Moyer et al., 2018). The overall training objective of censoring $s$ and predicting $y$ from $z$ is formulated as:
+
+$$
+\max I (z, y) - \beta I (z, x) - \lambda I (z, s) \tag {2}
+$$
+
+where $I$ is mutual information and $\beta, \lambda$ are balancing coefficients; $\beta = 0$ in (Osia et al., 2018). The first two terms, $I(z,y) - \beta I(z,x)$, form the objective of the variational information bottleneck (Alemi et al., 2017); the third term is the relaxed independence constraint between $z$ and $s$.
+
+Intuitively, this objective aims to maximize the information of $y$ in $z$ as per $I(z,y)$ , forget the information of $x$ in $z$ as per $-\beta I(z,x)$ , and remove the information of $s$ in $z$ as per $-\lambda I(z,s)$ . This objective has an analytical lower bound (Moyer et al., 2018):
+
+$$
+\mathbb {E} _ {x, s} \left[ \mathbb {E} _ {z, y} [ \log p (y | z) ] - (\beta + \lambda) K L [ q (z | x) | | q (z) ] - \lambda \mathbb {E} _ {z} [ \log p (x | z, s) ] \right] \tag {3}
+$$
+
+where $KL$ is Kullback-Leibler divergence and $\log p(x|z,s)$ is the reconstruction likelihood of $x$ given $z$ and $s$ . The conditional distributions $p(y|z) = C(z)$ , $q(z|x) = E(x)$ are modeled as in adversarial training and $p(x|z,s)$ is modeled with a decoder $R(z,s) = p(x|z,s)$ .
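When $q(z)$ is fixed to a standard normal prior, as in VAE-style training, the KL term in this bound has a closed form. A minimal numpy sketch of that term (illustrative only; a full implementation would also include the reconstruction and classification terms):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL[ N(mu, diag(exp(log_var))) || N(0, I) ], summed over
    # latent dimensions and averaged over the batch -- the quantity scaled
    # by (beta + lambda) in the lower bound when q(z) is fixed to N(0, I).
    kl = 0.5 * (np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return float(kl.sum(axis=-1).mean())

# KL is zero exactly when q(z|x) is already the standard normal.
zero = kl_to_standard_normal(np.zeros((2, 4)), np.zeros((2, 4)))
```

Increasing $\beta + \lambda$ pushes $q(z|x)$ toward the prior, which removes information about $x$ (and hence $s$) from $z$ but, as Section 4 shows, also hurts the main task.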
+
+All known censoring techniques require a "blacklist" of attributes to censor, and inputs with these attributes must be represented in the training data. Censoring for fairness is applied to the model's final layer to make its output independent of the sensitive attributes or satisfy a specific fairness constraint (Zemel et al., 2013; Louizos et al., 2016; Madras et al., 2018; Song et al., 2019). In this paper, we use censoring not for fairness but to demonstrate that models cannot be prevented from learning to recognize sensitive attributes. To show this, we apply censoring to different layers, not just the output.
+
+# 3 EXPLOITING OVERLEARNING
+
+We demonstrate two different ways to exploit overlearning in a trained model $M$ . The inference-time attack (Section 3.1) applies $M$ to an input and uses $M$ 's representation of that input to predict its sensitive attributes. The model-repurposing attack (Section 3.2) uses $M$ to create another model that, when applied to an input, directly predicts its sensitive attributes.
+
+# Inferring $s$ from representation:
+
+1: Input: Adversary's auxiliary dataset $\mathcal{D}_{\mathrm{aux}}$ , black-box oracle $E$ , observed $z^{\star}$
+2: $\mathcal{D}_{\mathrm{attack}}\gets \{(E(x),s)\mid (x,s)\in \mathcal{D}_{\mathrm{aux}}\}$
+3: Train attack model $M_{\mathrm{attack}}$ on $\mathcal{D}_{\mathrm{attack}}$
+4: return prediction $\hat{s} = M_{\mathrm{attack}}(z^{\star})$
+
+# Adversarial re-purposing:
+
+1: Input: Model $M$ for the original task, transfer dataset $\mathcal{D}_{\mathrm{transfer}}$ for the new task
+2: Build $M_{\text{transfer}} = C_{\text{transfer}} \circ E_l$ on layer $l$
+3: Fine-tune $M_{\text{transfer}}$ on $\mathcal{D}_{\text{transfer}}$
+4: return transfer model $M_{\text{transfer}}$
+
+Figure 1: Pseudo-code for inference from representation and adversarial re-purposing
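The inference attack in Figure 1 can be sketched end-to-end on synthetic data. Everything below is illustrative: `oracle_E` is a hypothetical stand-in for the black-box encoder, and a nearest-centroid classifier plays the role of $M_{\mathrm{attack}}$ (the paper trains a neural network instead).

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the black-box oracle E: a fixed deterministic projection.
P = np.arange(18, dtype=float).reshape(6, 3) / 10.0

def oracle_E(x):
    return x @ P

# Auxiliary data: inputs whose mean shifts with the sensitive attribute s.
s_aux = rng.integers(0, 2, size=200)
x_aux = rng.normal(size=(200, 6)) + 3.0 * s_aux[:, None]

# Step 2: build the attack set {(E(x), s) | (x, s) in D_aux}.
z_aux = oracle_E(x_aux)

# Step 3: a minimal attack model M_attack -- nearest class centroid in z-space.
centroids = np.stack([z_aux[s_aux == c].mean(axis=0) for c in (0, 1)])

def attack(z_star):
    # Step 4: predict s-hat for an observed representation z*.
    return int(np.argmin(np.linalg.norm(centroids - z_star, axis=1)))

z_star = oracle_E(rng.normal(size=(6,)) + 3.0)  # an input with s = 1
pred = attack(z_star)
```

The key point is that the adversary never sees $x^{\star}$, only its representation, yet still recovers $s$.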
+
+# Algorithm 1 De-censoring representations
+
+1: Input: Auxiliary dataset $\mathcal{D}_{\mathrm{aux}}$ , black-box oracle $E$ , observed representation $z^{\star}$
+2: Train auxiliary model $M_{\mathrm{aux}} = E_{\mathrm{aux}} \circ C_{\mathrm{aux}}$ on $\mathcal{D}_{\mathrm{aux}}$
+3: Initialize transform model $T$, inference attack model $M_{\mathrm{attack}}$
+4: for each training iteration do
+5: Sample a batch of data $(x,s)$ from $\mathcal{D}_{\mathrm{aux}}$ and compute $z = E(x), z_{\mathrm{aux}} = E_{\mathrm{aux}}(x)$
+6: Update $T$ on the batch of $(z, z_{\mathrm{aux}})$ with loss $||T(z) - z_{\mathrm{aux}}||_2^2$
+7: Update $M_{\text{attack}}$ on the batch of $(T(z), s)$ with cross-entropy loss
+8: end for
+9: return prediction $\hat{s} = M_{\mathrm{attack}}(T(z^{\star}))$
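A linearized sketch of the transform step in Algorithm 1: instead of SGD (lines 4-8), a linear transform $T$ minimizing the feature-space $L_2$ loss is obtained in closed form by least squares. This is purely illustrative, under the toy assumption that the censored representation is a linear scrambling of the uncensored one; a real attack trains $T$ and $M_{\mathrm{attack}}$ jointly on network features.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: censored z is an invertible linear scrambling of the
# "uncensored" auxiliary representation z_aux (names hypothetical).
n, d = 300, 4
z_aux = rng.normal(size=(n, d))
A = rng.normal(size=(d, d))
z = z_aux @ A

# Fit T minimizing ||z @ T - z_aux||_2^2 in one least-squares solve
# (a closed-form stand-in for the SGD update on line 6).
T, *_ = np.linalg.lstsq(z, z_aux, rcond=None)

recovered = z @ T
err = float(np.mean((recovered - z_aux) ** 2))
```

The attack model is then trained on `recovered` rather than on the censored `z`, which is why de-censoring boosts inference accuracy in Table 3.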
+
+# 3.1 INFERRING SENSITIVE ATTRIBUTES FROM REPRESENTATION
+
+We measure the leakage of sensitive properties from the representations of overlearned models via the following attack. Suppose an adversary can observe the representation $z^{\star}$ of a trained model $M$ on input $x^{\star}$ at inference time but cannot observe $x^{\star}$ directly. This scenario arises in practice when model evaluation is partitioned in order to protect privacy of inputs—see Section 2. The adversary wants to infer some property $s$ of $x^{\star}$ that is not part of the task label $y$ .
+
+We assume that the adversary has an auxiliary set $\mathcal{D}_{\mathrm{aux}}$ of labeled $(x,s)$ pairs and black-box oracle $E$ to compute the corresponding $E(x)$ . The purpose of $\mathcal{D}_{\mathrm{aux}}$ is to help the adversary recognize the property of interest in the model's representations; it need not be drawn from the same dataset as $x^{\star}$ . The adversary uses supervised learning on the $(E(x),s)$ pairs to train an attack model $M_{\mathrm{attack}}$ . At inference time, the adversary predicts $\hat{s}$ from the observed $z^{\star}$ as $M_{\mathrm{attack}}(z^{\star})$ .
+
+De-censoring. If the representation $z$ is "censored" (see Section 2) to reduce the amount of information it reveals about $s$ , the direct inference attack may not succeed. We develop a new, learning-based de-censoring approach (see Algorithm 1) to convert censored representations into a different form that leaks more information about the property of interest. The adversary trains $M_{\mathrm{aux}}$ on $\mathcal{D}_{\mathrm{aux}}$ to predict $s$ from $x$ , then transforms $z$ into the input features of $M_{\mathrm{aux}}$ .
+
+We treat de-censoring as an optimization problem with a feature space $L_{2}$ loss $||T(z) - z_{\mathrm{aux}}||_2^2$ , where $T$ is the transformer that the adversary wants to learn and $z_{\mathrm{aux}}$ is the uncensored representation from $M_{\mathrm{aux}}$ . Training with a feature-space loss has been proposed for synthesizing more natural images by matching them with real images (Dosovitskiy & Brox, 2016; Nguyen et al., 2016). In our case, we match censored and uncensored representations. The adversary can then use $T(z)$ as an uncensored approximation of $z$ to train an inference model $M_{\mathrm{attack}}$ and infer property $s$ as $M_{\mathrm{attack}}(T(z^{\star}))$ .
+
+# 3.2 RE-PURPOSING MODELS TO PREDICT SENSITIVE ATTRIBUTES
+
+To re-purpose a model—for example, to convert a model trained for a benign task into a model that predicts a sensitive attribute—we can use features $z_{l}$ in any layer of $M$ as the feature extractor and connect a new classifier $C_{\mathrm{transfer}}$ to $E_{l}$ . The transferred model $M_{\mathrm{transfer}} = C_{\mathrm{transfer}} \circ E_{l}$ is fine-tuned on another, small dataset $\mathcal{D}_{\mathrm{transfer}}$ , which in itself is not sufficient to train an accurate model for the new task. Utilizing features learned by $M$ on the original $\mathcal{D}$ , $M_{\mathrm{transfer}}$ can achieve better results than models trained from scratch on $\mathcal{D}_{\mathrm{transfer}}$ .
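A toy sketch of this re-purposing step: features from a frozen extractor (standing in for $E_l$ of the trained model $M$) are reused, and only a small head playing the role of $C_{\mathrm{transfer}}$ is fit on the transfer set. The random-feature extractor and all names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(4)

# Frozen "feature extractor" E_l: a fixed random ReLU layer standing in
# for convolutional features learned on the original task.
W_feat = rng.normal(size=(10, 16))

def extract(x):
    return np.maximum(x @ W_feat, 0)

# Small transfer set D_transfer for the new (sensitive) task.
s = rng.integers(0, 2, size=60)
x = rng.normal(size=(60, 10)) + 2.0 * s[:, None]

# Only the new head C_transfer is fit (here by least squares on +-1
# targets), mimicking fine-tuning on D_transfer with E_l frozen.
feats = extract(x)
w, *_ = np.linalg.lstsq(feats, 2.0 * s.astype(float) - 1.0, rcond=None)

preds = (extract(x) @ w > 0).astype(int)
acc = float((preds == s).mean())
```

Because the frozen features already encode the sensitive attribute, a tiny transfer set suffices, which is exactly what Table 4 measures.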
+
+Feasibility of model re-purposing complicates the application of policies and regulations such as GDPR (EU, 2018). GDPR requires data processors to disclose every purpose of data collection and obtain consent from the users whose data was collected. We show that, given a trained model, it is not possible to determine—nor, consequently, disclose or obtain user consent for—what the model
+
+Table 1: Summary of datasets and tasks. Cramer's V captures statistical correlation between $y$ and $s$ (0 indicates no correlation and 1 indicates perfectly correlated).
+
+| Dataset | Health | UTKFace | FaceScrub | Places365 | Twitter | Yelp | PIPA |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Target $y$ | CCI | gender | gender | in/outdoor | age | review score | facial IDs |
+| Attribute $s$ | age | race | facial IDs | scene type | author | author | IDs together |
+| Cramer's V | 0.149 | 0.035 | 0.044 | 0.052 | 0.134 | 0.033 | n/a |
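Cramer's V for two categorical label vectors can be computed from the chi-squared statistic; a self-contained sketch (not necessarily the exact script used for Table 1):

```python
import numpy as np

def cramers_v(a, b):
    """Cramer's V between two categorical label vectors
    (0 = no correlation, 1 = perfectly correlated)."""
    ct = np.zeros((a.max() + 1, b.max() + 1))
    np.add.at(ct, (a, b), 1)                       # contingency table
    n = ct.sum()
    expected = np.outer(ct.sum(1), ct.sum(0)) / n  # independence model
    chi2 = ((ct - expected) ** 2 / expected).sum()
    k = min(ct.shape) - 1
    return float(np.sqrt(chi2 / (n * k)))

y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
v_same = cramers_v(y, y)                           # identical labels -> 1.0
v_weak = cramers_v(y, np.array([0, 1, 0, 1, 0, 1, 0, 1]))
```

The low V values in Table 1 support the claim that the overlearned attributes are not statistically correlated with the task labels.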
+
+has learned. Learning per se thus cannot be a regulated "purpose" of data collection. Regulators must be aware that even if the original training data has been erased, a model can be re-purposed for a different objective, possibly not envisioned at the time of original data collection. We discuss this further in Section 6.
+
+# 4 EXPERIMENTAL RESULTS
+
+# 4.1 DATASETS, TASKS, AND MODELS
+
+Health is the Heritage Health dataset (Heritage Health Prize) with medical records of over 55,000 patients, binarized into 112 features with age information removed. The task is to predict whether the Charlson Index (an estimate of patient mortality) is greater than zero; the sensitive attribute is age (binned into 9 ranges).
+
+UTKFace is a set of over 23,000 face images labeled with age, gender, and race (UTKFace; Zhang et al., 2017). We rescaled them into $50 \times 50$ RGB pixels. The task is to predict gender; the sensitive attribute is race.
+
+FaceScrub is a set of face images labeled with gender (FaceScrub). Some URLs are expired, but we were able to download 74,000 images for 500 individuals and rescale them into $50 \times 50$ RGB pixels. The task is to predict gender; the sensitive attribute is identity.
+
+Places365 is a set of 1.8 million images labeled with 365 fine-grained scene categories. We use a subset of 73,000 images, 200 per category. The task is to predict whether the scene is indoor or outdoor; the sensitive attribute is the fine-grained scene label.
+
+Twitter is a set of tweets from the PAN16 dataset (Rangel et al., 2016) labeled with user information. We removed tweets with fewer than 20 tokens and users with fewer than 50 tweets, yielding a dataset of over 46,000 tweets from 151 users with an over 80,000-word vocabulary. The task is to predict the age of the user given a tweet; the sensitive attribute is the author's identity.
+
+Yelp is a set of Yelp reviews labeled with user identities (Yelp Open Dataset). We removed users with fewer than 1,000 reviews and reviews with more than 200 tokens, yielding a dataset of over 39,000 reviews from 137 users with an over 69,000-word vocabulary. The task is to predict the review score from 1 to 5; the sensitive attribute is the author's identity.
+
+PIPA is a set of over 60,000 photos of 2,000 individuals gathered from public Flickr photo albums (PIPA project page; Zhang et al., 2015). Each image can include one or more individuals. We cropped the head regions using the bounding boxes in the image annotations. The task is to predict the identity given the head region; the sensitive attribute is whether two head regions are from the same photo.
+
+Models. For Health, we use a two-layer fully connected (FC) neural network with 128 and 32 hidden units, respectively, following (Xie et al., 2017; Moyer et al., 2018). For UTKFace and FaceScrub, we use a LeNet (LeCun et al., 1998) variant: three $3 \times 3$ convolutional and $2 \times 2$ max-pooling layers with 16, 32, and 64 filters, followed by two FC layers with 128 and 64 hidden units. For Twitter and Yelp, we use text CNN (Kim, 2014). For Places365 and PIPA, we use AlexNet (Krizhevsky et al., 2012) with convolutional layers pre-trained on ImageNet (Deng et al., 2009) and further add a $3 \times 3$ convolutional layer with 128 filters and $2 \times 2$ max-pooling followed by two FC layers with 128 and 64 hidden units, respectively.
+
+Table 2: Accuracy of inference from representations (last FC layer). RAND is random guessing based on majority class labels; BASE is inference from the uncensored representation; ADV from the representation censored with adversarial training; IT from the information-theoretically censored representation.
+
+| Dataset | RAND ($y$) | BASE ($y$) | ADV ($y$) | IT ($y$) | RAND ($s$) | BASE ($s$) | ADV ($s$) | IT ($s$) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Health | 66.31 | 84.33 | 80.16 | 82.63 | 16.00 | 32.52 | 32.00 | 26.60 |
+| UTKFace | 52.27 | 90.38 | 90.15 | 88.15 | 42.52 | 62.18 | 53.28 | 53.30 |
+| FaceScrub | 53.53 | 98.77 | 97.90 | 97.66 | 1.42 | 33.65 | 30.23 | 10.61 |
+| Places365 | 56.16 | 91.41 | 90.84 | 89.82 | 1.37 | 31.03 | 12.56 | 2.29 |
+| Twitter | 45.17 | 76.22 | 57.97 | n/a | 6.93 | 38.46 | 34.27 | n/a |
+| Yelp | 42.56 | 57.81 | 56.79 | n/a | 15.88 | 33.09 | 27.32 | n/a |
+| PIPA | 7.67 | 77.34 | 52.02 | 29.64 | 68.50 | 87.95 | 69.96 | 82.02 |
+
+# 4.2 INFERRING SENSITIVE ATTRIBUTES FROM REPRESENTATIONS
+
+Setup. We use $80\%$ of the data for training the target models and $20\%$ for evaluation. The size of the adversary's auxiliary dataset is $50\%$ of the training data. Success of the inference attack is measured on the final FC layer's representation of test data. The baseline is inference from the uncensored representation. We also measure the success of inference against representations censored with $\gamma = 1.0$ for adversarial training and $\beta = 0.01, \lambda = 0.0001$ for information-theoretical censoring, following (Xie et al., 2017; Moyer et al., 2018).
+
+For censoring with adversarial training, we simulate the adversary with a two-layer FC neural network with 256 and 128 hidden units. The number of epochs is 50 for censoring with adversarial training, 30 for the other models. We use the Adam optimizer with the learning rate of 0.001 and batch size of 128. For information-theoretical censoring, the model is based on VAE (Kingma & Welling, 2013; Moyer et al., 2018). The encoder $q(z|x)$ has the same architecture as the CNN models with all convolutional layers. On top of that, the encoder outputs a mean vector and a standard deviation vector to model the random variable $z$ with the re-parameterization trick. The decoder $p(x|z)$ has three de-convolution layers with up-sampling to map $z$ back to the same shape as the input $x$ .
+
+For our inference model, we use the same architecture as the censoring adversary. For the PIPA inference model, which takes two representations of faces and outputs a binary prediction of whether these faces appear in the same photo, we use two FC layers followed by a bilinear model: $p(s|z_1,z_2) = \sigma (h(z_1)Wh(z_2)^\top)$ , where $z_{1},z_{2}$ are the two input representations, $h$ is the two FC layers, and $\sigma$ is the sigmoid function. We train the inference model for 50 epochs with the Adam optimizer, learning rate of 0.001, and batch size of 128.
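The bilinear scoring function for PIPA is compact enough to sketch directly. The weights below are hypothetical random stand-ins; in the paper, $h$ and $W$ are trained jointly on pairs of representations.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d_in, d_h = 8, 4
W1 = rng.normal(size=(d_in, d_h))   # first FC layer of h
W2 = rng.normal(size=(d_h, d_h))    # second FC layer of h
W_bi = rng.normal(size=(d_h, d_h))  # bilinear weights W

def h(z):
    # Two fully connected layers with ReLU: the shared trunk h.
    return np.maximum(z @ W1, 0) @ W2

def same_photo_prob(z1, z2):
    # p(s | z1, z2) = sigma( h(z1) W h(z2)^T )
    return float((sigmoid(h(z1) @ W_bi @ h(z2).T)).item())

p = same_photo_prob(rng.normal(size=(1, d_in)), rng.normal(size=(1, d_in)))
```

The output is a probability that the two head regions come from the same photo, which is the sensitive attribute for PIPA.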
+
+Results. Table 2 reports the results. When representations are not censored, accuracy of inference from the last-layer representations is much higher than random guessing for all tasks, which means models overlearn even in the higher, task-specific layers. When representations are censored with adversarial training, accuracy drops for both the main and inference tasks. Accuracy of inference is much higher than in (Xie et al., 2017). The latter uses logistic regression, which is weaker than the training-time censoring-adversary network, whereas we use the same architecture for both the training-time and post-hoc adversaries. Information-theoretical censoring reduces accuracy of inference, but also damages main-task accuracy more than adversarial training for almost all models.
+
+Overlearning can cause a model to recognize even the sensitive attributes that are not represented in the training dataset. Such attributes cannot be censored using any known technique. We trained a UTKFace gender classifier on datasets where all faces are of the same race. We then applied this model to test images with four races (White, Black, Asian, Indian) and attempted to infer the race attribute from the model's representations. Inference accuracy is $61.95\%$ , $61.99\%$ , $60.85\%$ and $60.81\%$ for models trained only on, respectively, White, Black, Asian, and Indian images—almost as good as the $62.18\%$ baseline and much higher than random guessing $(42.52\%)$ .
+
+Effect of censoring strength. Fig. 2 shows that stronger censoring does not help. On FaceScrub and Twitter with adversarial training, increasing $\gamma$ damages the model's accuracy on the main task, while
+
+Figure 2: Reduction in accuracy due to censoring. Blue lines are the main task, red lines are the inference of sensitive attributes. First row is adversarial training with different $\gamma$ values; second and third row is information-theoretical censoring with different $\beta$ and $\lambda$ values respectively.
+
+Table 3: Improving inference accuracy with de-censoring. $\delta$ is the increase from Table 2.
+
+| Censoring | Health | UTKFace | FaceScrub | Places365 | Twitter | Yelp |
+| --- | --- | --- | --- | --- | --- | --- |
+| ADV ($+\delta$) | 32.55 (+0.55) | 59.38 (+6.10) | 40.37 (+12.24) | 19.71 (+7.15) | 36.55 (+2.22) | 31.36 (+4.04) |
+| IT ($+\delta$) | 27.05 (+0.45) | 54.31 (+1.01) | 16.40 (+5.79) | 3.10 (+0.81) | n/a | n/a |
+
+accuracy of inference decreases slightly or remains the same. For UTKFace and Yelp, increasing $\gamma$ improves accuracy of inference. This may indicate that the simulated "adversary" during adversarial training overpowers the optimization process and censoring defeats itself.
+
+For all models with information-theoretical censoring, increasing $\beta$ reduces the accuracy of inference but can prevent the model from converging on its main task. Increasing $\lambda$ results in the model not converging on the main task, without affecting the accuracy of inference, on Health, UTKFace, and FaceScrub. This seems to contradict the censoring objective, but the reconstruction loss in Equation 3 dominates the other loss terms, which leads to a small divergence between the conditional $q(z|x)$ and $q(z)$, i.e., information about $x$ is still retained in $z$.
+
+De-censoring. As described in Section 3.1, we developed a new technique to transform censored representations to make inference easier. We first train an auxiliary model on $\mathcal{D}_{\mathrm{aux}}$ to predict the sensitive attribute from representations, using the same architecture as in the baseline models. The resulting uncensored representations from the last convolutional layer are the target for the de-censoring transformations. We use a single-layer fully connected neural network as the transformer and set the number of hidden units to the dimension of the uncensored representation. The inference model operates on top of the transformer network, with the same hyper-parameters as before.
+
+Table 3 shows that de-censoring significantly boosts the accuracy of inference from representations censored with adversarial training. The boost is smaller against information-theoretical censoring because its objective not only censors $z$ with $I(z,s)$ , but also forgets $x$ with $I(x,z)$ . On the Health task, there is not much difference since the baseline attack is already similar to the attack on censored representations, leaving little room for improvement.
+
+Table 4: Adversarial re-purposing. The values are differences between the accuracy of predicting sensitive attributes using a re-purposed model vs. a model trained from scratch.
+
+| $\|\mathcal{D}_{\mathrm{transfer}}\| / \|\mathcal{D}\|$ | Health | UTKFace | FaceScrub | Places365 | Twitter | Yelp | PIPA |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 0.02 | -0.57 | 4.72 | 7.01 | 4.42 | 12.99 | 5.57 | 1.33 |
+| 0.04 | 0.22 | 2.70 | 15.07 | 2.14 | 10.87 | 3.60 | 2.41 |
+| 0.06 | -1.21 | 2.83 | 7.02 | 2.06 | 10.51 | 8.45 | 6.50 |
+| 0.08 | -0.99 | 0.25 | 11.80 | 3.39 | 9.57 | 0.33 | 4.93 |
+| 0.10 | 0.35 | 2.24 | 9.43 | 2.86 | 7.30 | 2.1 | 5.89 |
+
+Table 5: The effect of censoring on adversarial re-purposing for FaceScrub with $\gamma = 0.5$ , 0.75, 1.0. $\delta_A$ is the difference in the original-task accuracy (second column) between uncensored and censored models; $\delta_B$ is the difference in the accuracy of inferring the sensitive attribute (columns 3 to 7) between the models re-purposed from different layers and the model trained from scratch. Negative values mean reduced accuracy. Heatmaps on the right are linear CKA similarities between censored and uncensored representations. Numbers 0 through 4 represent layers conv1, conv2, conv3, fc4, and fc5. For each model censored at layer $i$ (x-axis), we measure similarity between the censored and uncensored models at layer $j$ (y-axis).
+
+| Censored on | $\delta_A$ | $\delta_B$ from conv1 | conv2 | conv3 | fc4 | fc5 |
+| --- | --- | --- | --- | --- | --- | --- |
+| $\gamma = 0.5$ | | | | | | |
+| conv1 | -1.66 | -6.42 | -4.09 | -1.65 | 0.46 | -3.87 |
+| conv2 | -2.87 | 0.95 | -1.77 | -2.88 | -1.53 | -2.22 |
+| conv3 | -0.64 | 1.49 | 1.49 | 0.67 | -0.48 | -1.38 |
+| fc4 | -0.16 | 2.03 | 5.16 | 6.73 | 6.12 | 0.54 |
+| fc5 | 0.05 | 1.52 | 4.53 | 7.42 | 6.14 | 4.53 |
+| $\gamma = 0.75$ | | | | | | |
+| conv1 | -4.48 | -7.33 | -5.01 | -1.51 | -7.99 | -7.82 |
+| conv2 | -6.02 | 0.44 | -7.04 | -5.46 | -5.94 | -5.82 |
+| conv3 | -1.90 | 1.32 | 1.37 | 1.88 | 0.74 | -0.67 |
+| fc4 | 0.01 | 3.65 | 4.56 | 5.11 | 4.44 | 0.91 |
+| fc5 | -0.74 | 1.54 | 3.61 | 6.75 | 7.18 | 4.99 |
+| $\gamma = 1$ | | | | | | |
+| conv1 | -45.25 | -7.36 | -3.93 | -2.75 | -4.37 | -2.91 |
+| conv2 | -20.30 | -3.28 | -5.27 | -7.03 | -6.38 | -5.54 |
+| conv3 | -45.20 | -2.13 | -3.06 | -4.48 | -4.05 | -5.18 |
+| fc4 | -0.52 | 1.73 | 5.19 | 4.80 | 5.83 | 1.84 |
+| fc5 | -0.86 | 1.56 | 3.55 | 5.59 | 5.14 | 1.97 |
+
+
+In summary, these results demonstrate that information about sensitive attributes unintentionally captured by the overlearned representations cannot be suppressed by censoring.
+
+# 4.3 RE-PURPOSING MODELS TO PREDICT SENSITIVE ATTRIBUTES
+
+To demonstrate that overlearned representations can, with a small set of unseen data, be turned into a model for predicting sensitive attributes, we re-purpose the uncensored baseline models from Section 4.2 by fine-tuning them on a small set $\mathcal{D}_{\mathrm{transfer}}$ ($2$-$10\%$ of $\mathcal{D}$) and compare them with models trained from scratch on $\mathcal{D}_{\mathrm{transfer}}$. We fine-tune all models for 50 epochs with a batch size of 32; the other hyper-parameters are as in Section 4.2. For all CNN models, we use the trained convolutional layers as the feature extractor and randomly initialize the other layers. Table 4 shows that the re-purposed models almost always outperform those trained from scratch. FaceScrub and Twitter exhibit the biggest gain.
+
+Effect of censoring. Previous work only censored the highest layer of the models. Model re-purposing can use any layer of the model for transfer learning. Therefore, to prevent re-purposing, inner layers must be censored, too. We perform the first study of inner-layer censoring and measure
+
+
+Figure 3: Pairwise similarities of layer representations between models for the original task (A) and for predicting a sensitive attribute (B). Numbers 0 through 4 denote layers conv1, conv2, conv3, fc4 and fc5.
+
+its effect on both the original and re-purposed tasks. We use FaceScrub for this experiment and apply adversarial training to every layer with different strengths ( $\gamma = 0.5, 0.75, 1.0$ ).
+
+Table 5 summarizes the results. Censoring lower layers (conv1 to conv3) blocks adversarial re-purposing, at the cost of reducing the model's accuracy on its original task. Hyper-parameters must be tuned carefully, e.g., when $\gamma = 1$, there is a huge drop in the original-task accuracy.
+
+To further investigate how censoring in one layer affects the representations learned across all layers, we measure per-layer similarity between censored and uncensored models using CKA, linear centered kernel alignment (Kornblith et al., 2019)—see Table 5. When censoring is applied to a specific layer, similarity for that layer is the smallest (values on the diagonal). When censoring lower layers with moderate strength ( $\gamma = 0.5$ or 0.75), similarity between higher layers is still strong; when censoring higher layers, similarity between lower layers is strong. Therefore, censoring can block adversarial re-purposing from a specific layer, but the adversary can still re-purpose representations in the other layer(s) to obtain an accurate model for predicting sensitive attributes.
+
+# 4.4 WHEN, WHERE, AND WHY OVERLEARNING HAPPENS
+
+To investigate when (during training) and where (in which layer) the models overlearn, we use linear CKA similarity (Kornblith et al., 2019) to compare the representations at different epochs of training between models trained for the original task (A) and models trained to predict a sensitive attribute (B). We use UTKFace and FaceScrub for these experiments.
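Linear CKA is compact enough to state directly. A sketch following the definition in Kornblith et al. (2019), with examples in rows (illustrative data, not the paper's models):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representation
    matrices (examples in rows); 1.0 means identical up to rotation."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(num / den)

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 8))
Q = np.linalg.qr(rng.normal(size=(8, 8)))[0]   # random orthogonal matrix
self_sim = linear_cka(X, X)                    # identical features
rot_sim = linear_cka(X, X @ Q)                 # invariant to rotation
rand_sim = linear_cka(X, rng.normal(size=(50, 8)))
```

The rotation invariance is what makes CKA suitable for comparing layers of independently trained models A and B.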
+
+Fig. 3 shows that lower layers of models A and B learn very similar features. This was observed in (Kornblith et al., 2019) for CIFAR-10 and CIFAR-100 models, but those tasks are closely related. In our case, the tasks are entirely different and B reveals the sensitive attribute while A does not. The similar low-level features are learned very early during training. There is little similarity between the low-level features of A and high-level features of B (and vice versa), matching intuition. Interestingly, on FaceScrub even the high-level features are similar between A and B.
+
+We conjecture that one of the reasons for overlearning is structural complexity of the data. Previous work theoretically showed that over-parameterized neural networks favor simple solutions on structured data when optimized with SGD, where structure is quantified as the number of distributions (e.g., images from different identities) within each class in the target task (Li & Liang, 2018), i.e., the fewer distributions, the more structured the data. For data generated from more complicated distributions, networks learn more complex solutions, leading to the emergence of features that are much more general than the learning objective and, consequently, overlearning.
+
+Fig. 4 shows that the representations of a gender classifier trained on the faces from 50 individuals are closer to the random initialization than the representations trained on the faces from 500 individuals (the hyper-parameters and the total number of training examples are the same in both cases). More complex training data thus results in more complex representations for the same objective.
+
+
+Figure 4: Similarity of layer representations of a partially trained gender classifier to a randomly initialized model before training. Models are trained on FaceScrub using 50 IDs (blue line) and 500 IDs (red line).
+
+# 5 RELATED WORK
+
+Prior work studied transferability of representations only between closely related tasks. Transferability of features between ImageNet models decreases as the distance between the base and target tasks grows (Yosinski et al., 2014), and performance of tasks is correlated to their distance from the source task (Azizpour et al., 2015). CNN models trained to distinguish coarse classes also distinguish their subsets (Huh et al., 2016). By contrast, we show that models trained for simple tasks implicitly learn privacy-sensitive concepts unrelated to the labels of the original task. Other than an anecdotal mention in the acknowledgments paragraph of (Kim et al., 2017) that logit-layer activations leak non-label concepts, this phenomenon has never been described in the research literature.
+
+Gradient updates revealed by participants in distributed learning leak information about individual training batches that is uncorrelated with the learning objective (Melis et al., 2019). We show that overlearning is a generic problem in (fully trained) models, helping explain these observations.
+
+There is a large body of research on learning disentangled representations (Bengio et al., 2013; Locatello et al., 2019). The goal is to separate the underlying explanatory factors in the representation so that it contains all information about the input in an interpretable structure. State-of-the-art approaches use variational autoencoders (Kingma & Welling, 2013) and their variants to learn disentangled representations in an unsupervised fashion (Higgins et al., 2017; Kumar et al., 2018; Kim & Mnih, 2018; Chen et al., 2018). By contrast, overlearning means that representations learned during supervised training for one task implicitly and automatically enable another task—without disentangling the representation on purpose during training.
+
+Work on censoring representations aims to suppress sensitive demographic attributes and identities in the model's output for fairness and privacy. Techniques include adversarial training (Edwards & Storkey, 2016), which has been applied to census and health records (Xie et al., 2017), text (Li et al., 2018; Coavoux et al., 2018; Elazar & Goldberg, 2018), images (Hamm, 2017) and sensor data of wearables (Iwasawa et al., 2016). An alternative approach is to minimize mutual information between the representation and the sensitive attribute (Moyer et al., 2018; Osia et al., 2018). Neither approach can prevent overlearning, except at the cost of destroying the model's accuracy. Furthermore, these techniques cannot censor attributes that are not represented in the training data. We show that overlearned models recognize such attributes, too.
+
+# 6 CONCLUSIONS
+
+We demonstrated that models trained for seemingly simple tasks implicitly learn concepts that are not represented in the objective function. In particular, they learn to recognize sensitive attributes, such as race and identity, that are statistically orthogonal to the objective. The failure of censoring to suppress these attributes and the similarity of learned representations across uncorrelated tasks suggest that overlearning may be intrinsic, i.e., learning for some objectives may not be possible without recognizing generic low-level features that enable other tasks, including inference of sensitive attributes. For example, there may not exist a set of features that enables a model to accurately determine the gender of a face but not its race or identity.
+This is a challenge for regulations such as GDPR that aim to control the purposes and uses of machine learning technologies. To protect privacy and ensure certain forms of fairness, users and regulators may desire that models not learn some features and attributes. If overlearning is intrinsic, it may not be technically possible to enumerate, let alone control, what models are learning. Therefore, regulators should focus on ensuring that models are applied in a way that respects privacy and fairness, while acknowledging that they may still recognize and use sensitive attributes.
+Acknowledgments. This research was supported in part by NSF grants 1611770, 1704296, and 1916717, the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program, and a Google Faculty Research Award.
+
+# REFERENCES
+
+Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. Deep variational information bottleneck. In ICLR, 2017.
+Hossein Azizpour, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson. From generic to specific deep representations for visual recognition. In CVPR Workshops, 2015.
+Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. PAMI, 2013.
+Tian Qi Chen, Xuechen Li, Roger B Grosse, and David K Duvenaud. Isolating sources of disentanglement in variational autoencoders. In NIPS, 2018.
+Jianfeng Chi, Emmanuel Owusu, Xuwang Yin, Tong Yu, William Chan, Patrick Tague, and Yuan Tian. Privacy partitioning: Protecting user data during the deep learning inference phase. arXiv:1812.02863, 2018.
+Maximin Coavoux, Shashi Narayan, and Shay B. Cohen. Privacy-preserving neural representations of text. In EMNLP, 2018.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
+Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. In NIPS, 2016.
+Harrison Edwards and Amos J. Storkey. Censoring representations with an adversary. In ICLR, 2016.
+Yanai Elazar and Yoav Goldberg. Adversarial removal of demographic attributes from text data. In EMNLP, 2018.
+EU. General Data Protection Regulation. https://en.wikipedia.org/wiki/General_Data_Protection_Regulation, 2018.
+FaceScrub. http://vintage.winklerbros.net/facescrub.html, 2014.
+Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
+Jihun Hamm. Minimax filter: Learning to preserve privacy from inference attacks. JMLR, 18(129): 1-31, 2017.
+Heritage Health Prize. https://www.kaggle.com/c/hhp, 2012.
+Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017.
+
+Minyoung Huh, Pulkit Agrawal, and Alexei A Efros. What makes ImageNet good for transfer learning? arXiv:1608.08614, 2016.
+Yusuke Iwasawa, Kotaro Nakayama, Ikuko Yairi, and Yutaka Matsuo. Privacy issues regarding the application of DNNs to activity-recognition using wearables and its countermeasures by use of adversarial training. In IJCAI, 2016.
+Yiping Kang, Johann Hauswald, Cao Gao, Austin Rovinski, Trevor Mudge, Jason Mars, and Lingjia Tang. Neurosurgeon: Collaborative intelligence between the cloud and mobile edge. In ASPLOS, 2017.
+Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, and Rory Sayres. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). arXiv:1711.11279, 2017.
+Hyunjik Kim and Andriy Mnih. Disentangling by factorising. In ICML, 2018.
+Yoon Kim. Convolutional neural networks for sentence classification. In EMNLP, 2014.
+Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv:1312.6114, 2013.
+Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural network representations revisited. In ICML, 2019.
+Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
+Abhishek Kumar, Prasanna Sattigeri, and Avinash Balakrishnan. Variational inference of disentangled latent concepts from unlabeled observations. In ICLR, 2018.
+Nicholas D Lane and Petko Georgiev. Can deep learning revolutionize mobile sensing? In HotMobile, 2015.
+Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 86(11):2278-2324, 1998.
+Meng Li, Liangzhen Lai, Naveen Suda, Vikas Chandra, and David Z Pan. PrivyNet: A flexible framework for privacy-preserving deep neural network training. arXiv:1709.06161, 2017.
+Yitong Li, Timothy Baldwin, and Trevor Cohn. Towards robust and privacy-preserving text representations. In ACL, 2018.
+Yuanzhi Li and Yingyu Liang. Learning overparameterized neural networks via stochastic gradient descent on structured data. In NIPS, 2018.
+Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In ICML, 2019.
+Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. The variational fair autoencoder. In ICLR, 2016.
+David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. In ICML, 2018.
+Luca Melis, Congzheng Song, Emiliano De Cristofaro, and Vitaly Shmatikov. Exploiting unintended feature leakage in collaborative learning. In S&P, 2019.
+Daniel Moyer, Shuyang Gao, Rob Brekelmans, Aram Galstyan, and Greg Ver Steeg. Invariant representations without adversarial training. In NIPS, 2018.
+Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In NIPS, 2016.
+
+Seyed Ali Osia, Ali Taheri, Ali Shahin Shamsabadi, Minos Katevas, Hamed Haddadi, and Hamid R. R. Rabiee. Deep private-feature extraction. TKDE, 2018.
+Piper project page. https://people.eecs.berkeley.edu/~nzhang/piper.html, 2015.
+Francisco Rangel, Paolo Rosso, Ben Verhoeven, Walter Daelemans, Martin Potthast, and Benno Stein. Overview of the 4th author profiling task at PAN 2016: Cross-genre evaluations. In CEUR Workshop, 2016.
+Jiaming Song, Pratyusha Kalluri, Aditya Grover, Shengjia Zhao, and Stefano Ermon. Learning controllable fair representations. In AISTATS, 2019.
+UTKFace. http://aicip.eecs.utk.edu/wiki/UTKFace, 2017.
+Ji Wang, Jianguo Zhang, Weidong Bao, Xiaomin Zhu, Bokai Cao, and Philip S Yu. Not just privacy: Improving performance of private deep learning in mobile cloud. In KDD, 2018.
+Qizhe Xie, Zihang Dai, Yulun Du, Eduard H. Hovy, and Graham Neubig. Controllable invariance through adversarial feature learning. In NIPS, 2017.
+Yelp Open Dataset. https://www.yelp.com/dataset, 2018.
+Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In NIPS, 2014.
+Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In ICML, 2013.
+Ning Zhang, Manohar Paluri, Yaniv Taigman, Rob Fergus, and Lubomir Bourdev. Beyond frontal faces: Improving person recognition using multiple cues. In CVPR, 2015.
+Zhifei Zhang, Yang Song, and Hairong Qi. Age progression/regression by conditional adversarial autoencoder. In CVPR, 2017.
\ No newline at end of file
diff --git a/overlearningrevealssensitiveattributes/images.zip b/overlearningrevealssensitiveattributes/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..10dda6f296584698361014e47042913773709e06
--- /dev/null
+++ b/overlearningrevealssensitiveattributes/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b683eae9f5b4941180e7fd735eda46ba8f679c08c680d3aa61c93c68ef159e5d
+size 461743
diff --git a/overlearningrevealssensitiveattributes/layout.json b/overlearningrevealssensitiveattributes/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c2a96f3428b3a2a5f7a12bcff0a642a514d96330
--- /dev/null
+++ b/overlearningrevealssensitiveattributes/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:829c7ab6d7ee4e06ba70781d5e3f655c64e9235fd5bf0fe53fda4eaabdde9712
+size 501146
diff --git a/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/e488123d-1fc7-4091-b709-25695f5c7f57_content_list.json b/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/e488123d-1fc7-4091-b709-25695f5c7f57_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0074c1d3e9a26803e056fafd5b0dd527c9338b55
--- /dev/null
+++ b/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/e488123d-1fc7-4091-b709-25695f5c7f57_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f08649ee4a0f777a5821d86b307c4acb329c000e02cf7beaaa7798428eaff45
+size 167280
diff --git a/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/e488123d-1fc7-4091-b709-25695f5c7f57_model.json b/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/e488123d-1fc7-4091-b709-25695f5c7f57_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..af954d83c31ca2b8d1e68bc9149c937f16a88340
--- /dev/null
+++ b/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/e488123d-1fc7-4091-b709-25695f5c7f57_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:13c093784f9fc5daaa77238d4f70a1c90b00295962d5e57884daa40a62e450db
+size 195007
diff --git a/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/e488123d-1fc7-4091-b709-25695f5c7f57_origin.pdf b/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/e488123d-1fc7-4091-b709-25695f5c7f57_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d183290de64b13bda0acbb844ac1badd7d43809f
--- /dev/null
+++ b/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/e488123d-1fc7-4091-b709-25695f5c7f57_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:21cde91e2a02f3cf2e84872db648fc002762af85e51fd72e3e766fb6c8405660
+size 9132913
diff --git a/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/full.md b/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c142324f7f836e5f3a290c8554645c7d314402eb
--- /dev/null
+++ b/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/full.md
@@ -0,0 +1,845 @@
+# PAC CONFIDENCE SETS FOR DEEP NEURAL NETWORKS VIA CALIBRATED PREDICTION
+
+Sangdon Park
+
+University of Pennsylvania
+
+sangdonp@cis.upenn.edu
+
+Osbert Bastani
+
+University of Pennsylvania
+
+obastani@seas.upenn.edu
+
+Nikolai Matni
+
+University of Pennsylvania
+
+nmatni@seas.upenn.edu
+
+Insup Lee
+
+University of Pennsylvania
+
+lee@cis.upenn.edu
+
+# ABSTRACT
+
+We propose an algorithm combining calibrated prediction and generalization bounds from learning theory to construct confidence sets for deep neural networks with PAC guarantees—i.e., the confidence set for a given input contains the true label with high probability. We demonstrate how our approach can be used to construct PAC confidence sets on ResNet for ImageNet, a visual object tracking model, and a dynamics model for the half-cheetah reinforcement learning problem.1
+
+# 1 INTRODUCTION
+
+A key challenge facing deep neural networks is that they do not produce reliable confidence estimates, which are important for applications such as safe reinforcement learning (Berkenkamp et al., 2017), guided exploration (Malik et al., 2019), and active learning (Gal et al., 2017).
+
+We consider the setting where the test data follows the same distribution as the training data (i.e., we do not consider adversarial examples designed to fool the network (Szegedy et al., 2014)); even in this setting, confidence estimates produced by deep neural networks are notoriously unreliable (Guo et al., 2017). One intuition for this shortcoming is that unlike traditional supervised learning algorithms, deep learning models typically overfit the training data (Zhang et al., 2017). As a consequence, the confidence estimates of deep neural networks are flawed even for test data from the training distribution since, by construction, they overestimate the likelihood of the training data.
+
+A promising approach to addressing this challenge is temperature scaling (Platt, 1999). This approach takes as input a trained neural network $f_{\hat{\phi}}(y \mid x)$—i.e., one whose parameters $\hat{\phi}$ have already been fit to a training dataset $Z_{\mathrm{train}}$—which produces unreliable probabilities $f_{\hat{\phi}}(y \mid x)$. Then, this approach rescales these confidence estimates based on a validation dataset to improve their "calibration". More precisely, this approach fits confidence estimates of the form
+
+$$
+f_{\hat{\phi}, \tau}(y \mid x) \propto \exp\left(\tau \log f_{\hat{\phi}}(y \mid x)\right),
+$$
+
+where $\tau \in \mathbb{R}_{>0}$ is a temperature scaling parameter that is fit based on the validation dataset. The goal is to choose $\tau$ to minimize calibration error, which, roughly speaking, measures the degree to which the reported error rate differs from the actual error rate.
+
+The key insight is that in the temperature scaling approach, only a single parameter $\tau$ is fit to the validation data—thus, unlike fitting the original neural network, the temperature scaling algorithm comes with generalization guarantees based on traditional statistical learning theory.
+
+Despite the improved generalization guarantees, these confidence estimates still do not come with theoretical guarantees. We are interested in producing confidence sets that satisfy statistical guarantees while being as small as possible.
+
+Table 1: ImageNet images with varying ResNet confidence set sizes. The confidence set sizes are on the top. The true label is on the left-hand side. Incorrectly labeled images are boxed in red.
+
+Given a test input $x \in \mathcal{X}$, a confidence set $C_T(x) \subseteq \mathcal{Y}$ parameterized by $T \in \mathbb{R}$ should contain the true label $y$ for at least a $1 - \epsilon$ fraction of cases:
+
+$$
+\mathbb{P}_{(x, y) \sim D}[y \in C_T(x)] \geq 1 - \epsilon.
+$$
+
+Since we are fitting a parameter $T$ based on $Z_{\mathrm{val}}$, we additionally incur a probability of failure due to the randomness in $Z_{\mathrm{val}}$. In other words, given $\epsilon, \delta \in \mathbb{R}_{>0}$, we aim to obtain probably approximately correct (PAC) confidence sets $C_T(x) \subseteq \mathcal{Y}$ satisfying the guarantee
+
+$$
+\mathbb{P}_{Z_{\text{val}} \sim D^n}\left(\mathbb{P}_{(x, y) \sim D}(y \in C_T(x)) \geq 1 - \epsilon\right) \geq 1 - \delta.
+$$
+
+Indeed, techniques from statistical learning theory (Vapnik, 1999) can be used to do so (Vovk, 2013).
+
+There are a number of reasons why confidence sets can be useful. First, they can be used to inform safety critical decision making. For example, consider a doctor who uses prediction tools to help perform diagnosis. Having a confidence set would both help the doctor estimate the confidence of the prediction (i.e., smaller confidence sets imply higher confidence), but also give a sense of the set of possible diagnoses. Second, having a confidence set can be useful for reasoning about safety since they contain the true outcome with high probability. For instance, robots may use a confidence set over predicted trajectories to determine whether it is safe to act with high probability. As a concrete example, consider a self-driving car that uses a deep neural network to predict the path that a pedestrian might take. We require that the self-driving car avoid the pedestrian with high probability, which it can do by avoiding all possible paths in the predicted confidence set.
+
+Contributions. We propose an algorithm combining calibrated prediction and statistical learning theory to construct PAC confidence sets for deep neural networks (Section 3). We propose instantiations of this framework in the settings of classification, regression, and learning models for reinforcement learning (Section 3.6). Finally, we evaluate our approach on three benchmarks: ResNet (He et al., 2016) for ImageNet (Russakovsky et al., 2015), a model (Held et al., 2016) learned for a visual object tracking benchmark (Wu et al., 2013), and a probabilistic dynamics model (Chua et al., 2018) learned for the half-cheetah environment (Brockman et al., 2016) (Section 4). Examples of ImageNet images with different sized ResNet confidence sets are shown in Table 1. As can be seen, our confidence sets become larger and the images become more challenging to classify. In addition, we show predicted confidence sets for ResNet in Table 2, as well as predicted confidence sets for the visual object tracking model in Table 3.
+
+Related work. There has been work on constructing confidence sets with theoretical guarantees. Oftentimes, these guarantees are asymptotic rather than finite sample (Steinberger & Leeb, 2016; 2018). Alternatively, there has been work focused on predicting confidence sets with a given expected size (Denis & Hebiri, 2017).
+
+More relatedly, there has been recent work on obtaining PAC guarantees. For example, there has been some work on specific prediction tasks such as binary classification (Lei, 2014; Wang & Qiao, 2018). There has also been work in the setting of regression (Lei et al., 2018; Barber et al., 2019). However, in this case, the confidence sets are fixed in size—i.e., they do not depend on the input $x$ (Barber et al., 2019). Furthermore, they make stability assumptions about the learning algorithm (though they achieve improved rates by doing so) (Lei et al., 2018; Barber et al., 2019).
+
+
+Table 2: Confidence sets of ImageNet images with varying ResNet confidence set sizes. The predicted confidence set is shown to the right of the corresponding input image. The true label is shown in red, and the predicted label is shown with a hat. See Table 5 in Appendix D for more examples.
+
+
+Table 3: Visualization of confidence sets for the tracking dataset (Wu et al., 2013), including the ground truth bounding box (white), the bounding box predicted by the original neural network (Held et al., 2016) (red), and the bounding box produced using our confidence set predictor (green). We have overapproximated the predicted ellipsoid confidence set with a box. Our bounding box contains the ground truth bounding box with high probability. See Table 9 in Appendix D for more examples.
+
+The most closely related work is on conformal prediction (Papadopoulos, 2008; Vovk, 2013). Like our approach, this line of work provides a way to construct confidence sets from a given confidence predictor, together with PAC guarantees for the validity of these confidence sets. Indeed, with some work, our generalization bound (Theorem 1) can be shown to be equivalent to Theorem 1 in Vovk (2013). In contrast to their approach, we propose to use calibrated prediction to construct confidence predictors that can suitably be used with deep neural networks. Furthermore, our approach makes explicit the connections to temperature scaling as well as to generalization bounds from statistical learning theory (Vapnik, 1999). In addition, unlike our paper, they do not explicitly provide an efficient algorithm for constructing confidence sets. Finally, we also propose an extension to the case of learning models for reinforcement learning.
+
+Finally, we build on a long line of work on calibrated prediction, which aims to construct "calibrated" probabilities (Murphy, 1972; DeGroot & Fienberg, 1983; Platt, 1999; Zadrozny & Elkan, 2001; 2002; Naeini et al., 2015; Kuleshov & Liang, 2015). Roughly speaking, probabilities are calibrated if events happen at rates equal to the predicted probabilities. This work has recently been applied to obtaining confidence estimates for deep neural networks (Guo et al., 2017; Kuleshov et al., 2018; Pearce et al., 2018), including for learned models for reinforcement learning (Malik et al., 2019). However, these approaches do not come with PAC guarantees.
+
+# 2 PAC CONFIDENCE SETS
+
+Our goal is to estimate confidence sets that are as small as possible, while simultaneously ensuring that they are probably approximately correct (PAC) (Valiant, 1984). Essentially, a confidence set is correct if it contains the true label. More precisely, let $\mathcal{X}$ be the inputs and $\mathcal{Y}$ be the labels, and let $D$ be a distribution over $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$ . A confidence set predictor is a function $C: \mathcal{X} \to 2^{\mathcal{Y}}$ such that $C(x) \subseteq \mathcal{Y}$ is a set of labels; we denote the set of all confidence set predictors by $\mathcal{C}$ . For a given example $(x, y) \sim D$ , we say $C$ is correct if $y \in C(x)$ . Then, the error of $C$ is
+
+$$
+L(C) = \mathbb{P}_{(x, y) \sim D}[y \notin C(x)]. \tag{1}
+$$
+
+Finally, consider an algorithm $\mathcal{A}$ that takes as input a validation set $Z_{\mathrm{val}} \subseteq \mathcal{Z}$ consisting of $n$ i.i.d. samples $(x,y) \sim D$ , and outputs a confidence set predictor $\hat{C}$ . Given $\epsilon, \delta \in \mathbb{R}_{>0}$ , we say that $\mathcal{A}$ is probably approximately correct (PAC) if
+
+$$
+\mathbb{P}_{Z_{\text{val}} \sim D^n}\left[ L(\hat{C}) > \epsilon \text{ where } \hat{C} = \mathcal{A}(Z_{\text{val}}) \right] < \delta. \tag{2}
+$$
+
+Our goal is to design an algorithm $\mathcal{A}$ that satisfies (2) while constructing confidence sets $C(x)$ that are as "small in size" as possible on average. The size of $C(x)$ depends on the domain. For classification, we consider confidence sets that are arbitrary subsets of labels $C(x) \subseteq \mathcal{Y} = \{1, \dots, Y\}$, and we measure size by $|C(x)| \in \mathbb{N}$—i.e., the number of labels in $C(x)$. For regression, we consider confidence sets that are intervals $C(x) = [a, b] \subseteq \mathcal{Y} = \mathbb{R}$, and we measure size by $b - a$—i.e., the length of the predicted interval. Note that there is an intrinsic tradeoff between satisfying (2) and the average size of $C(x)$—larger confidence sets are more likely to satisfy (2).
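+The two size measures above can be sketched in a few lines (toy predictions of our own invention, not the paper's code):

```python
def avg_size_classification(conf_sets):
    """Average |C(x)| over a batch of predicted label sets."""
    return sum(len(C) for C in conf_sets) / len(conf_sets)

def avg_size_regression(intervals):
    """Average length b - a over a batch of predicted intervals [a, b]."""
    return sum(b - a for a, b in intervals) / len(intervals)

print(avg_size_classification([{0}, {0, 1}, {2, 3, 4}]))  # 2.0
print(avg_size_regression([(0.0, 1.0), (2.0, 2.5)]))      # 0.75
```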
+
+# 3 PAC ALGORITHM FOR CONFIDENCE SET CONSTRUCTION
+
+Our algorithm is formulated in the empirical risk framework. Typically, this framework refers to empirical risk minimization. In our setting, such an algorithm would take as input (i) a parametric family of confidence set predictors $\mathcal{C} = \{C_{\theta} \mid \theta \in \Theta\}$ , where $\Theta$ is the parameter space, and (ii) a training set $Z_{\mathrm{val}} \subseteq \mathcal{Z}$ of $n$ i.i.d. samples $(x,y) \sim D$ , and output the confidence set predictor $C_{\hat{\theta}}$ where $\hat{\theta}$ minimizes the empirical risk:
+
+$$
+\hat{\theta} = \underset{\theta \in \Theta}{\arg\min}\, \hat{L}(C_\theta; Z_{\text{val}}) \qquad \text{where} \quad \hat{L}(C; Z_{\text{val}}) = \frac{1}{n} \sum_{(x, y) \in Z_{\text{val}}} \mathbb{I}[y \notin C(x)].
+$$
+
+Here, $\mathbb{I}[\phi] \in \{0,1\}$ is the indicator function, and the empirical risk $\hat{L}$ is an estimate of the confidence set error (1) based on the validation set $Z_{\mathrm{val}}$.
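+As a toy illustration (hypothetical labels and sets, not the paper's code), the empirical risk is simply the fraction of validation examples whose true label falls outside the predicted confidence set:

```python
def empirical_risk(conf_sets, labels):
    """L-hat: fraction of examples whose true label is NOT in C(x)."""
    misses = sum(1 for C, y in zip(conf_sets, labels) if y not in C)
    return misses / len(labels)

# Toy validation data: 4 examples over labels {0, 1, 2}; only the third is missed.
conf_sets = [{0, 1}, {2}, {0}, {1, 2}]
labels = [0, 2, 1, 1]
print(empirical_risk(conf_sets, labels))  # 0.25
```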
+
+However, our algorithm does not minimize the empirical risk. Rather, recall that our goal is to minimize the size of the predicted confidence sets given a PAC constraint on the true risk $L(C_{\hat{\theta}})$, based on the given PAC parameters $\epsilon, \delta \in \mathbb{R}_{>0}$ and the number of available validation samples $n = |Z_{\mathrm{val}}|$. Thus, the risk shows up as a constraint in the optimization problem, and the objective is instead to minimize the size of the predicted confidence sets:
+
+$$
+\hat{\theta} = \underset{\theta \in \Theta}{\arg\min}\, S(\theta) \quad \text{subj. to} \quad \hat{L}(C_\theta; Z_{\text{val}}) \leq \alpha. \tag{3}
+$$
+
+At a high level, the value $\alpha = \alpha (n,\epsilon ,\delta)\in \mathbb{R}_{\geq 0}$ is chosen to enforce the PAC constraint, and is based on generalization bounds from statistical learning theory (Valiant, 1984). Furthermore, following the temperature scaling approach (Platt, 1999), the parameter space $\Theta$ is chosen to be as small as possible (in particular, one dimensional) to enable good generalization. Finally, our choice of size metric $S$ follows straightforwardly based on our choice of parameter space. In the remainder of this section, we describe the choices of (i) parameter space $\Theta$ , (ii) size metric $S(\theta)$ , and (iii) confidence level $\alpha (n,\epsilon ,\delta)$ in more detail, as well as how to solve (3) given these choices.
+
+# 3.1 CHOICE OF PARAMETER SPACE $\Theta$
+
+Probability forecasters. Our construction of the parametric family of confidence set predictors $C_{\theta}$ assumes we are given a probability forecaster $f: \mathcal{X} \to \mathcal{P}_{\mathcal{Y}}$, where $\mathcal{P}_{\mathcal{Y}}$ is a space of probability distributions over $\mathcal{Y}$. Given such an $f$, we use $f(y \mid x)$ to denote the probability of label $y$ under distribution $f(x)$. Intuitively, $f(y \mid x)$ should be the probability (or probability density) that $y$ is the true label for a given input $x$—i.e., $f(y \mid x) \approx \mathbb{P}_{(X,Y) \sim D}[Y = y \mid X = x]$. For example, in classification, we can choose $\mathcal{P}_{\mathcal{Y}}$ to be the space of categorical distributions over $\mathcal{Y}$, and $f$ may be a neural network whose last layer is a softmax layer with $|\mathcal{Y}|$ outputs. Then, $f(y \mid x) = f(x)_y$. Alternatively, in regression, we can choose $\mathcal{P}_{\mathcal{Y}}$ to be the space of Gaussian distributions, and $f$ may be a neural network whose last layer outputs the parameters $(\mu, \sigma) \in \mathbb{R} \times \mathbb{R}_{>0}$ of a Gaussian distribution. Then, $f(y \mid x) = \mathcal{N}(y; \mu(x), \sigma(x)^2)$, where $(\mu(x), \sigma(x)) = f(x)$, and $\mathcal{N}(\cdot; \mu, \sigma^2)$ is the Gaussian density function with mean $\mu$ and variance $\sigma^2$.
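+The two forecaster shapes above can be sketched concretely (a minimal NumPy illustration with made-up inputs; `softmax_forecaster` and `gaussian_forecaster` are our names, not the paper's code):

```python
import math
import numpy as np

def softmax_forecaster(logits):
    """Classification: f(x) is a categorical distribution; f(y|x) = f(x)_y."""
    z = np.exp(logits - logits.max())  # shift for numerical stability
    return z / z.sum()

def gaussian_forecaster(y, mu, sigma):
    """Regression: f(y|x) = N(y; mu(x), sigma(x)^2), a Gaussian density in y."""
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

probs = softmax_forecaster(np.array([2.0, 1.0, 0.1]))
print(probs.sum())                         # sums to 1: a valid categorical distribution
print(gaussian_forecaster(0.0, 0.0, 1.0))  # ~0.3989, the standard normal peak
```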
+
+Training a probability forecaster. To train a probability forecaster, we use a standard approach to calibrated prediction that combines maximum likelihood estimation with temperature scaling. First, we consider a parametric model family $\mathcal{F} = \{f_{\phi} \mid \phi \in \Phi\}$ , where $\Phi$ is the parameter space. Note that $\Phi$ can be high-dimensional—e.g., the weights of a neural network model. Given a training set $Z_{\mathrm{train}} \subseteq \mathcal{Z}$ of $m$ i.i.d. samples $(x,y) \sim D$ , the maximum likelihood estimate (MLE) of $\phi$ is
+
+$$
+\hat{\phi} = \underset{\phi \in \Phi}{\arg\min}\, \ell(\phi; Z_{\text{train}}) \quad \text{where} \quad \ell(\phi; Z_{\text{train}}) = -\sum_{(x, y) \in Z_{\text{train}}} \log f_\phi(y \mid x). \tag{4}
+$$
+
+We could now use $f_{\hat{\phi}}$ as the probability forecaster. However, the problem with directly using $\hat{\phi}$ is that because $\hat{\phi}$ may be high-dimensional, it often overfits the training data $Z_{\mathrm{train}}$ . Thus, the probabilities are typically overconfident compared to what they should be.
+
+To reduce their confidence, we use the temperature scaling approach to calibrate the predicted probabilities (Platt, 1999; Guo et al., 2017). Intuitively, this approach computes a second MLE in exactly the same way as the one used to train $\hat{\phi}$, but over a single new parameter $\tau \in \mathbb{R}_{>0}$. The key idea is that this time, the model family is based on the parameters $\hat{\phi}$ from (4). In other words, the "shape" of the probabilities forecast by $f_{\hat{\phi}}$ is preserved, but their exact values are shifted.
+
+More precisely, consider the model family $\mathcal{F}' = \{f_{\hat{\phi},\tau} \mid \tau \in \mathbb{R}_{>0}\}$ , where
+
+$$
+f_{\hat{\phi}, \tau}(y \mid x) \propto \exp\left(\tau \log f_{\hat{\phi}}(y \mid x)\right).
+$$
+
+Then, we have the following MLE for $\tau$ :
+
+$$
+\hat{\tau} = \underset{\tau \in \mathbb{R}_{>0}}{\arg\min}\, \ell'(\tau; Z'_{\text{train}}) \quad \text{where} \quad \ell'(\tau; Z'_{\text{train}}) = -\sum_{(x, y) \in Z'_{\text{train}}} \log f_{\hat{\phi}, \tau}(y \mid x). \tag{5}
+$$
+
+Note that $\hat{\tau}$ is estimated based on a second training set $Z_{\mathrm{train}}^{\prime}$ . Because we are only fitting a single parameter, this training set can be much smaller than the training set $Z_{\mathrm{train}}$ used to fit $\hat{\phi}$ .
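+A minimal sketch of the temperature-scaling fit (5), using a grid search over $\tau$ on toy logits in place of a trained network (the helper names and values are our assumptions, not the paper's implementation):

```python
import numpy as np

def scaled_probs(logits, tau):
    # f_{phi,tau}(y|x) ∝ exp(tau * log f_phi(y|x)); for a softmax model this
    # is the same as rescaling the logits by tau before the softmax.
    z = tau * logits
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def fit_tau(logits, labels, grid=None):
    # MLE (5): minimize the negative log-likelihood over the scalar tau.
    if grid is None:
        grid = np.linspace(0.05, 5.0, 200)
    def nll(tau):
        p = scaled_probs(logits, tau)
        return -np.log(p[np.arange(len(labels)), labels]).sum()
    return min(grid, key=nll)

# Toy held-out set: overconfident logits for 3 classes; one label disagrees
# with the network, so the NLL is minimized by a temperature tau < 1.
logits = np.array([[4.0, 0.0, 0.0], [0.0, 4.0, 0.0], [4.0, 0.0, 0.0]])
labels = np.array([0, 1, 1])
tau_hat = fit_tau(logits, labels)
```

Because one of the three toy labels disagrees with the overconfident logits, the fitted $\hat{\tau}$ comes out well below 1, softening the predicted probabilities.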
+
+Parametric family of confidence set predictors. Finally, given a probability forecaster $f$, we consider the one-dimensional parameter space $\Theta = \mathbb{R}$; in analogy to the temperature scaling technique for calibrated prediction, we denote this parameter by $T \in \Theta$. In particular, we consider
+
+$$
+C _ {T} (x) = \{y \in \mathcal {Y} \mid f (y \mid x) \geq e ^ {- T} \}.
+$$
+
+In other words, $C_T(x)$ is the set of $y$ with high probability given $x$ according to $f$ . For this scalar parameter space, we denote the minimizer of (3) by $\hat{T}$ .
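+For a single input, constructing $C_T(x)$ from a vector of forecast label probabilities is a one-liner; the helper name below is our own.
+
+```python
+import numpy as np
+
+def confidence_set(probs, T):
+    """C_T(x) = {y : f(y|x) >= exp(-T)} for one input's label probabilities."""
+    return set(np.flatnonzero(np.asarray(probs) >= np.exp(-T)))
+```
+
+Larger $T$ lowers the threshold $e^{-T}$ and can only grow the set, which is the monotonicity exploited in Section 3.2 below.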
+
+# 3.2 CHOICE OF SIZE METRIC $S(T)$
+
+To choose the size metric $S(T)$ , we note that for our chosen parametric family of confidence set predictors, smaller values correspond to uniformly smaller confidence sets—i.e.,
+
+$$
+T \leq T ^ {\prime} \Rightarrow \forall x, C _ {T} (x) \subseteq C _ {T ^ {\prime}} (x).
+$$
+
+Thus, we can simply choose the size metric to be
+
+$$
+S (T) = T. \tag {6}
+$$
+
+This choice minimizes the size of the confidence sets produced by our algorithm.
+
+# 3.3 CHOICE OF CONFIDENCE LEVEL $\alpha (n,\epsilon ,\delta)$
+
+Naive approach based on VC generalization bound. A naive approach to choosing $\alpha (n,\epsilon ,\delta)$ is to do so based on the VC dimension generalization bound (Vapnik, 1999). It is not hard to show that the problem of estimating $\hat{T}$ is equivalent to a binary classification problem, and that the VC dimension of $\Theta$ for this problem is 1. Thus, the VC dimension bound implies that for all $T\in \Theta$
+
+$$
+\mathbb{P}_{Z_{\text{val}} \sim D^{n}}\left[ L\left(C_{T}\right) \leq \hat{L}\left(C_{T}; Z_{\text{val}}\right) + \sqrt{\frac{\log(2n) + 1 - \log(\delta/4)}{n}} \right] \geq 1 - \delta. \tag{7}
+$$
+
+The details of this equivalence are given in Appendix B.2. Then, suppose we choose
+
+$$
+\alpha (n, \epsilon , \delta) = \epsilon - \sqrt {\frac {\log (2 n) + 1 - \log (\delta / 4)}{n}}.
+$$
+
+With this choice, for the solution $\hat{T}$ of (3) with $\alpha = \alpha(n,\epsilon,\delta)$ , the constraint in (3) ensures that $\hat{L}(C_{\hat{T}};Z_{\mathrm{val}}) \leq \alpha(n,\epsilon,\delta)$ . Together with the VC generalization bound (7), we have
+
+$$
+\mathbb{P}_{Z_{\text{val}} \sim D^{n}}\left[ L\left(C_{\hat{T}}\right) > \epsilon \right] < \delta,
+$$
+
+which is exactly the desired PAC constraint on our predicted confidence sets.
+
+Direct generalization bound. In fact, we can get better choices of $\alpha$ by directly bounding generalization error. For instance, in the realizable setting (i.e., we always have $\hat{L}(C_{\hat{T}}; Z_{\mathrm{val}}) = 0$ ), we can get rates of $n = \tilde{O}(1/\epsilon)$ instead of $n = \tilde{O}(1/\epsilon^2)$ (Kearns & Vazirani, 1994); see Appendix A.2 for details. We can achieve these rates by choosing $\alpha = 0$ , but then, the PAC guarantees we obtain may actually be stronger than desired (i.e., for $\epsilon' < \epsilon$ ). Intuitively, we can directly prove a bound that interpolates between the realizable setting and the VC generalization bound—in particular:
+
+Theorem 1. For any $\epsilon \in [0,1]$ , $n\in \mathbb{N}_{>0}$ , and $k\in \{0,1,\dots,n\}$ , we have
+
+$$
+\mathbb{P}_{Z_{\text{val}} \sim D^{n}}\left[ L(C_{\hat{T}}) > \epsilon \right] \leq \sum_{i=0}^{k} \binom{n}{i} \epsilon^{i} (1 - \epsilon)^{n-i},
+$$
+
+where $\hat{T}$ is the solution to (3) with $\alpha = k / n$ .
+
+We give a proof in Appendix B.2. Based on Theorem 1, we can choose
+
+$$
+\alpha(n, \epsilon, \delta) = \max_{k \in \mathbb{N} \cup \{0\}} k/n \quad \text{subj. to} \quad \sum_{i=0}^{k} \binom{n}{i} \epsilon^{i} (1 - \epsilon)^{n-i} < \delta. \tag{8}
+$$
+
+# 3.4 THEORETICAL GUARANTEES
+
+We have the following guarantee, which follows straightforwardly from Theorem 1:
+
+Corollary 1. Let $\hat{T}$ be the solution to (3) for $\alpha = \alpha(n, \epsilon, \delta)$ chosen according to (8). Then, we have
+
+$$
+\mathbb{P}_{Z_{\text{val}} \sim D^{n}}[L(C_{\hat{T}}) > \epsilon] < \delta.
+$$
+
+In other words, our algorithm is probably approximately correct.
+
+Algorithm 1 Algorithm for solving (3).
+procedure ESTIMATECONFIDENCESETPREDICTOR( $Z_{\mathrm{train}}$ , $Z_{\mathrm{train}}^{\prime}$ , $Z_{\mathrm{val}}$ )
+  Estimate $\hat{\phi},\hat{\tau}$ using (4) and (5), respectively
+  Compute $\alpha(n,\epsilon,\delta)$ according to (8) by enumerating $k\in\{0,1,\dots,n\}$
+  Let $k^{*}=n\cdot\alpha(n,\epsilon,\delta)$ (note that $k^{*}\in\{0,1,\dots,n\}$ )
+  Sort $(x,y)\in Z_{\mathrm{val}}$ in ascending order of $f_{\hat{\phi},\hat{\tau}}(y\mid x)$
+  Let $(x_{k^{*}+1},y_{k^{*}+1})$ be the $(k^{*}+1)$ st element in the sorted $Z_{\mathrm{val}}$
+  Solve (3) by choosing $\hat{T}=-\log f_{\hat{\phi},\hat{\tau}}(y_{k^{*}+1}\mid x_{k^{*}+1})$
+  Return $C_{\hat{T}}:x\mapsto\{y\in\mathcal{Y}\mid f_{\hat{\phi},\hat{\tau}}(y\mid x)\geq e^{-\hat{T}}\}$
+end procedure
+
+# 3.5 PRACTICAL IMPLEMENTATION
+
+Our algorithm for estimating a confidence set predictor $C_{\hat{T}}$ is summarized in Algorithm 1. The algorithm solves the optimization problem (3) using the choices of $\Theta$ , $S(T)$ , and $\alpha(n, \epsilon, \delta)$ described in the preceding sections. There are two key implementation details that we describe here.
+
+Computing $\alpha(n, \epsilon, \delta)$ . To compute $\alpha(n, \epsilon, \delta)$ , we need to solve (8). A straightforward approach is to enumerate all possible choices of $k \in \{0, 1, \dots, n\}$ . There are two optimizations. First, the objective is monotone increasing in $k$ , so we can enumerate $k$ in ascending order until the constraint no longer holds. Second, rather than re-compute the left-hand side of the constraint $\sum_{i=0}^{k} \binom{n}{i} \epsilon^i (1 - \epsilon)^{n-i}$ , we can accumulate the sum as we iterate over $k$ . We can also incrementally compute $\binom{n}{i}$ , $\epsilon^i$ , and $(1 - \epsilon)^{n-i}$ . For numerical stability, we perform these computations in log space.
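+As a concrete sketch of this computation, the code below accumulates the binomial terms of (8) incrementally in log space; the function name `alpha` and the `None` return for the infeasible case (when even $k = 0$ violates the constraint, so (3) is infeasible) are our own conventions.
+
+```python
+import math
+
+def alpha(n, eps, delta):
+    """Largest k/n with sum_{i<=k} C(n,i) eps^i (1-eps)^{n-i} < delta, as in (8)."""
+    log_term = n * math.log1p(-eps)  # log of the i = 0 term, (1-eps)^n
+    total = math.exp(log_term)
+    if total >= delta:
+        return None  # infeasible even for k = 0
+    k = 0
+    for i in range(1, n + 1):
+        # ratio of consecutive terms: C(n,i)/C(n,i-1) * eps/(1-eps), in log space
+        log_term += math.log(n - i + 1) - math.log(i) + math.log(eps) - math.log1p(-eps)
+        total += math.exp(log_term)
+        if total >= delta:
+            break
+        k = i
+    return k / n
+```
+
+This realizes both optimizations above: it enumerates $k$ in ascending order until the constraint fails, and updates the binomial term by its ratio to the previous one rather than recomputing it.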
+
+Solving (3). To solve (3), note that the constraint in (3) is equivalent to
+
+$$
+\sum_{(x, y) \in Z_{\text{val}}} E(x, y; T) \leq n \cdot \alpha(n, \epsilon, \delta) \quad \text{where} \quad E(x, y; T) = \mathbb{I}\left[ f_{\hat{\phi}, \hat{\tau}}(y \mid x) < e^{-T} \right]. \tag{9}
+$$
+
+Also, note that $k^{*} = n \cdot \alpha(n, \epsilon, \delta)$ is an integer due to the definition of $\alpha(n, \epsilon, \delta)$ in (8). Thus, we can interpret (9) as saying that $E(x, y; T) = 1$ for at most $k^{*}$ of the points $(x, y) \in Z_{\mathrm{val}}$ .
+
+In addition, note that $E(x,y;T)$ decreases monotonically as $f_{\hat{\phi},\hat{\tau}}(y\mid x)$ becomes larger. Thus, we can sort the points $(x,y)\in Z_{\mathrm{val}}$ in ascending order of $f_{\hat{\phi},\hat{\tau}}(y\mid x)$ , and require that only the first $k^*$ points $(x,y)$ in this list satisfy $E(x,y;T) = 1$ . In particular, letting $(x_{k^{*} + 1},y_{k^{*} + 1})$ be the $(k^{*} + 1)$ st point, (9) is equivalent to
+
+$$
+f _ {\hat {\phi}, \hat {\tau}} \left(y _ {k ^ {*} + 1} \mid x _ {k ^ {*} + 1}\right) \geq e ^ {- T}. \tag {10}
+$$
+
+In other words, this constraint says that $T$ must satisfy $y_{k^{*} + 1} \in C_T(x_{k^{*} + 1})$ . Finally, the solution $\hat{T}$ to (3) is the smallest $T$ that satisfies (10), which is the $T$ that makes (10) hold with equality—i.e.,
+
+$$
+\hat {T} = - \log f _ {\hat {\phi}, \hat {\tau}} \left(y _ {k ^ {*} + 1} \mid x _ {k ^ {*} + 1}\right). \tag {11}
+$$
+
+We have assumed $f_{\hat{\phi},\hat{\tau}}(y_{k^* +1}\mid x_{k^* +1}) > f_{\hat{\phi},\hat{\tau}}(y_{k^*}\mid x_{k^*})$ ; if not, we decrement $k^*$ until this holds.
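+The sorting step can be sketched as follows; `val_probs` is assumed to hold $f_{\hat{\phi},\hat{\tau}}(y \mid x)$ for each validation pair, and the decrement handles the tie case described above. The helper name is our own.
+
+```python
+import numpy as np
+
+def estimate_T(val_probs, k_star):
+    """Given f(y|x) at the validation pairs and k* = n * alpha,
+    return T_hat = -log f(y_{k*+1} | x_{k*+1}) as in (11)."""
+    order = np.sort(np.asarray(val_probs))  # ascending order of f(y|x)
+    # decrement k* past ties so the (k*+1)-st value is strictly larger
+    while k_star > 0 and order[k_star] <= order[k_star - 1]:
+        k_star -= 1
+    return -np.log(order[k_star])  # (k*+1)-st element, 0-indexed
+```
+
+With the resulting $\hat{T}$ , exactly the $k^{*}$ lowest-probability validation pairs satisfy $E(x,y;\hat{T}) = 1$ , so the constraint (9) holds with the smallest feasible $T$ .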
+
+# 3.6 PROBABILITY FORECASTERS FOR SPECIFIC TASKS
+
+We briefly discuss the architectures we use for probability forecasters for various tasks. We give details, including how we measure the sizes of predicted confidence sets $C_T(x)$ , in Appendix C. We consider three tasks: classification, regression, and model-based reinforcement learning. For classification, we use the standard approach of using a soft-max layer to predict label probabilities $f(y \mid x)$ . For regression, we also use a standard approach where the neural network predicts both the mean $\mu(x)$ and covariance $\Sigma(x)$ of a Gaussian distribution $\mathcal{N}(\mu(x), \Sigma(x))$ ; then, $f(y \mid x) = \mathcal{N}(y; \mu(x), \Sigma(x))$ is the probability density of $y$ according to this Gaussian distribution.
+
+Finally, for model-based reinforcement learning, our goal is to construct confidence sets over trajectories predicted using a learned model of the dynamics. We consider unknown dynamics
+
+
+Figure 1: Results on ResNet for ImageNet with $n = 20000$ . Default parameters are $\epsilon = 0.01$ and $\delta = 10^{-5}$ . We plot the median and min/max confidence set sizes. (a) Ablation study; $C$ is "calibrated predictor" (i.e., use $f_{\hat{\phi},\hat{\tau}}$ instead of $f_{\hat{\phi}}$ ), and $D$ is "direct bound" (i.e., use Theorem 1 instead of the VC generalization bound). (b) Restricted to correctly vs. incorrectly labeled images. (c) Varying $\epsilon$ . (d) Varying $\delta$ .
+
+$g^{*}(x^{\prime}\mid x,u)$ mapping a state-action pair $(x,u)$ to a distribution over states $x^{\prime}$ , and consider a known (and fixed) policy $\pi (u\mid x)$ mapping a given state $x$ to a distribution over actions $u\in \mathcal{U}\subseteq \mathbb{R}^{d_U}$ . Then, we let $f^{*}(x^{\prime}\mid x) = \mathbb{E}_{\pi (u\mid x)}[g^{*}(x^{\prime}\mid x,u)]$ denote the (unknown) closed-loop dynamics.
+
+Next, we consider a forecaster $f(x' \mid x) \approx f^{*}(x' \mid x)$ of the form $f(x' \mid x) = \mathcal{N}(x'; \mu(x), \Sigma(x))$ , and our goal is to construct confidence sets for the predictions of $f$ . However, we want to do so not just for one-step predictions, but for predictions over a time horizon $H \in \mathbb{N}$ . In particular, given an initial state $x_0 \in \mathcal{X}$ , we can sample $x_{1:H}^* = (x_1^*, \dots, x_H^*) \sim f^*$ by letting $x_0^* = x_0$ and sequentially sampling $x_{t+1}^* \sim f^*(\cdot \mid x_t^*)$ for each $t \in \{0, 1, \dots, H-1\}$ . Then, our goal is to construct a confidence set that contains $x_{1:H}^* \in \mathcal{X}^H$ with high probability (over both the randomness in an initial state distribution $x_0 \sim d_0$ and the randomness in $f^*$ ).
+
+To do so, we construct and use a forecaster $\tilde{f}(x_{1:H} \mid x_0)$ based on $f$ . In principle, this task is a special case of multivariate regression, where the inputs are $\mathcal{X}$ (i.e., the initial state $x_0$ ) and the outputs are $\mathcal{Y} = \mathcal{X}^H$ (i.e., a predicted trajectory $x_{1:H}$ ). However, the variance $\Sigma(x)$ predicted by our probability forecaster is only for a single step, and does not take into account the fact that $x$ is itself uncertain. Thus, we use a simple heuristic where we accumulate variances over time. More precisely, we construct (i) the predicted mean $\bar{x}_{1:H} = (\bar{x}_1, \dots, \bar{x}_H)$ by $\bar{x}_0 = x_0$ and $\bar{x}_{t+1} = \mu(\bar{x}_t)$ for $t \in \{0, 1, \dots, H-1\}$ , and (ii) the predicted variances $\tilde{\Sigma}_{1:H} = (\tilde{\Sigma}_1, \dots, \tilde{\Sigma}_H)$ by
+
+$$
+\tilde {\Sigma} _ {t} = \Sigma (\bar {x} _ {0}) + \Sigma (\bar {x} _ {1}) + \dots + \Sigma (\bar {x} _ {t - 1}).
+$$
+
+We use the probability forecaster $\tilde{f} (x_{1:H}\mid x_0) = \mathcal{N}(x_{1:H};\bar{x}_{1:H},\tilde{\Sigma}_{1:H})$ to construct confidence sets.
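+A minimal sketch of this accumulated-variance rollout, assuming hypothetical one-step `mu` and `Sigma` callables for the learned model:
+
+```python
+import numpy as np
+
+def rollout_forecaster(mu, Sigma, x0, H):
+    """Roll out the one-step mean and accumulate one-step covariances
+    along the mean trajectory: Sigma~_t = Sigma(x_0) + ... + Sigma(x_{t-1})."""
+    means, covs = [], []
+    x_bar = x0
+    acc = np.zeros_like(Sigma(x0))
+    for _ in range(H):
+        acc = acc + Sigma(x_bar)  # add the covariance at the current mean state
+        x_bar = mu(x_bar)         # x_{t+1} = mu(x_t)
+        means.append(x_bar)
+        covs.append(acc)
+    return means, covs
+```
+
+The returned means and accumulated covariances then parametrize the Gaussian trajectory forecaster $\tilde{f}$ above.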
+
+# 4 EXPERIMENTS
+
+We describe our experiments on ImageNet (a classification task), a visual object tracking benchmark (a regression task), and the half-cheetah environment (a model-based reinforcement learning task). We give additional results in Appendix D.
+
+
+Figure 2: Confidence set sizes for an object tracking benchmark (Wu et al., 2013); we use $n = 5,000$ , $\epsilon = 0.01$ , and $\delta = 10^{-5}$ . (a) Ablation study similar to Figure 1. In (b) and (c), we show how the confidence set sizes produced using our algorithm vary with respect to $\epsilon$ and $\delta$ , respectively.
+
+
+
+ResNet for ImageNet. We use our algorithm to compute confidence sets for ResNet (He et al., 2016) on ImageNet (Russakovsky et al., 2015), for $\epsilon = 0.01$ , $\delta = 10^{-5}$ , and $n = 20000$ validation images. We show the results in Figure 1. In (a), we compare our approach to an ablation. In particular, $C$ refers to performing an initial temperature scaling step to calibrate the neural network predictor (i.e., using $f_{\hat{\phi},\hat{\tau}}$ instead of $f_{\hat{\phi}}$ ), and $D$ refers to using Theorem 1 instead of the VC generalization bound. Thus, $C + D$ refers to our approach. As can be seen, using the calibrated predictor produces a noticeable reduction in the maximum confidence set size.
+
+We also compared to the ablation $C$ —i.e., using the VC generalization bound. However, we were unable to obtain valid confidence sets for our choice of $\epsilon$ and $\delta$ —i.e., (3) is infeasible. That is, using Theorem 1 outperforms using the VC generalization bound since the VC bound is too loose to satisfy the PAC criterion for our choice of parameters. In addition, in Table 6 in Appendix D, we show results for larger choices of $\epsilon$ and $\delta$ ; these results show that our approach substantially outperforms the ablation based on the VC bound even when the VC bound produces valid confidence sets.
+
+In (b), we show the confidence set sizes for images correctly vs. incorrectly labeled by ResNet. As expected, the sizes are substantially larger for incorrectly labeled images. Finally, in (c) and (d), we show how the sizes vary with $\epsilon$ and $\delta$ , respectively. As expected, the dependence on $\epsilon$ is much more pronounced (note that $\delta$ is log-scale).
+
+Visual object tracking. We apply our confidence set prediction algorithm to a 2D visual single-object tracking task, which is a multivariate regression problem. Specifically, the input space $\mathcal{X}$ consists of the previous image, the previous bounding box (in $\mathbb{R}^4$ ), and the current image. The output space $\mathcal{Y} = \mathbb{R}^4$ is a current bounding box. We use the regression-based tracker from Held et al. (2016), and retrain the regressor neural network to predict the mean and variance of a Gaussian distribution. More precisely, our object tracking model predicts the mean and variance of each bounding box parameter—i.e., $(x_{\min}, y_{\min}, x_{\max}, y_{\max})$ . Given this bounding box forecaster $f_{\hat{\phi}}$ , we calibrate and estimate a confidence set predictor as described in Section 3.6.
+
+We use the visual object tracking benchmark from Wu et al. (2013) to train and evaluate our confidence set predictor. This benchmark consists of 99 video sequences labeled with ground truth bounding boxes. We randomly split these sequences to form the training set for calibration, validation set for confidence set estimation, and test set for evaluation. For each sequence, each pair of adjacent frames constitutes a single example. Our training dataset contains 20,882 labeled examples, each consisting of a pair of consecutive images and ground truth bounding boxes. The validation set for confidence set estimation and test set contain 22,761 and 22,761 labeled examples, respectively. Figure 2 shows the sizes of the predicted confidence sets; the sizes are measured as described in Section 3.6 for regression tasks. As with ResNet, we omit results for the VC bound ablation since $n$ is too small to get a bound. The trends are similar to the ones for ResNet.
+
+Half-cheetah. We use our algorithm to compute confidence sets for a probabilistic neural network dynamics model (Chua et al., 2018) for the half-cheetah environment (Brockman et al., 2016), for $\epsilon = 0.01$ , $\delta = 10^{-5}$ , $H = 20$ time steps, and $n = 5000$ validation rollouts. When using temperature scaling to calibrate $f_{\hat{\phi}}$ to obtain $f_{\hat{\phi},\hat{\tau}}$ , we calibrate each time step independently (i.e., we fit $H$ parameters, where $H$ is the time horizon). We show the results in Figure 3.
+
+
+
+
+Figure 3: Results on the dynamics model for the half-cheetah with $n = 5000$ . Default parameters are $\epsilon = 0.01$ and $\delta = 10^{-5}$ . (a) Ablation study; $A$ is "accumulated variance" (i.e., for each $t \in \{1, \dots, 20\}$ , use $\tilde{\Sigma}_t$ instead of $\Sigma_t = \Sigma(\bar{x}_{t-1})$ ), and $C$ and $D$ are as for ResNet. We plot the median and min/max confidence set sizes (see Section 3.6), averaged across $t \in \{1, \dots, 20\}$ . (b) Same ablations, but with per time step size. We plot the average size of the confidence set for the predicted state $x_t$ on step $t$ , as a function of $t \in \{1, \dots, 20\}$ . (c) Varying $\epsilon$ , and (d) varying $\delta$ .
+
+In (a), we compare to two ablations. The labels $C$ and $D$ are as for ResNet; in addition, $A$ refers to using the accumulated variance $\tilde{\Sigma}_t$ instead of the one-step predicted variances $\Sigma_t = \Sigma (\bar{x}_{t - 1})$ . Thus, $A + C + D$ is our approach. As before, we omit results for the ablation using the VC generalization bound since $n$ is so small that the bound does not hold for any $k$ for the given $\epsilon$ and $\delta$ . In (b), we show the same ablations over the entire trajectory until $t = 20$ . As can be seen, using the calibrated predictor produces a large gain; these gains are most noticeable in the tails. Using the accumulated variance produces a smaller, but still significant, gain. In (c) and (d), we show how the sizes vary with $\epsilon$ and $\delta$ , respectively. The trends are similar to those for ResNet.
+
+# 5 CONCLUSION
+
+We have proposed an algorithm for constructing PAC confidence sets for deep neural networks. Our approach leverages statistical learning theory to obtain theoretical guarantees on the predicted confidence sets. These confidence sets quantify the uncertainty of deep neural networks. For instance, they can be used to inform safety-critical decision-making, and to ensure safety with high-probability in robotics control settings that leverage deep neural networks for perception. Future work includes extending these results to more complex tasks (e.g., structured prediction), and handling covariate shift (e.g., to handle policy updates in reinforcement learning).
+
+# ACKNOWLEDGMENTS
+
+This work was supported in part by NSF CCF-1910769 and by the Air Force Research Laboratory and the Defense Advanced Research Projects Agency under Contract No. FA8750-18-C-0090. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Air Force Research Laboratory (AFRL), the Defense Advanced Research Projects Agency (DARPA), the Department of Defense, or the United States Government.
+
+# REFERENCES
+
+Rina Foygel Barber, Emmanuel J Candes, Aaditya Ramdas, and Ryan J Tibshirani. Predictive inference with the jackknife+. arXiv preprint arXiv:1905.02928, 2019.
+Felix Berkenkamp, Matteo Turchetta, Angela Schoellig, and Andreas Krause. Safe model-based reinforcement learning with stability guarantees. In Advances in neural information processing systems, pp. 908-918, 2017.
+Marko Bohanec and Vladislav Rajkovic. Knowledge acquisition and explanation for multi-attribute decision making. In 8th Intl Workshop on Expert Systems and their Applications, 1988.
+Christopher P Bonafide, A Russell Localio, John H Holmes, Vinay M Nadkarni, Shannon Stemler, Matthew MacMurchy, Miriam Zander, Kathryn E Roberts, Richard Lin, and Ron Keren. Video analysis of factors associated with response time to physiologic monitor alarms in a children's hospital. JAMA pediatrics, 171(6):524-531, 2017.
+Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
+Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, pp. 4754-4765, 2018.
+Paulo Cortez and Alice Maria Goncalves Silva. Using data mining to predict secondary school student performance. 2008.
+Morris H DeGroot and Stephen E Fienberg. The comparison and evaluation of forecasters. Journal of the Royal Statistical Society: Series D (The Statistician), 32(1-2):12-22, 1983.
+Christophe Denis and Mohamed Hebiri. Confidence sets with expected sizes for multiclass classification. The Journal of Machine Learning Research, 18(1):3571-3598, 2017.
+Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep bayesian active learning with image data. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1183-1192. JMLR.org, 2017.
+Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. arXiv preprint arXiv:1706.04599, 2017.
+H Altay Guvenir, Burak Acar, Gulsen Demiroz, and Ayhan Cekin. A supervised machine learning algorithm for arrhythmia analysis. In Computers in Cardiology 1997, pp. 433-436. IEEE, 1997.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.
+David Held, Sebastian Thrun, and Silvio Savarese. Learning to track at 100 fps with deep regression networks. In European Conference on Computer Vision, pp. 749-765. Springer, 2016.
+Michael J Kearns and Umesh Virkumar Vazirani. An introduction to computational learning theory. MIT press, 1994.
+Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014.
+Volodymyr Kuleshov and Percy S Liang. Calibrated structured prediction. In Advances in Neural Information Processing Systems, pp. 3474-3482, 2015.
+Volodymyr Kuleshov, Nathan Fenner, and Stefano Ermon. Accurate uncertainties for deep learning using calibrated regression. arXiv preprint arXiv:1807.00263, 2018.
+Jing Lei. Classification with confidence. Biometrika, 101(4):755-769, 2014.
+
+Jing Lei, Max G'Sell, Alessandro Rinaldo, Ryan J Tibshirani, and Larry Wasserman. Distribution-free predictive inference for regression. Journal of the American Statistical Association, 113(523):1094-1111, 2018.
+Ali Malik, Volodymyr Kuleshov, Jiaming Song, Danny Nemer, Harlan Seymour, and Stefano Ermon. Calibrated model-based deep reinforcement learning. In International Conference on Machine Learning, pp. 4314-4323, 2019.
+Allan H Murphy. Scalar and vector partitions of the probability score: Part i. two-state situation. Journal of Applied Meteorology, 11(2):273-282, 1972.
+Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
+Harris Papadopoulos. Inductive conformal prediction: Theory and application to neural networks. In Tools in artificial intelligence. IntechOpen, 2008.
+T Pearce, M Zaki, A Brintrup, and A Neely. High-quality prediction intervals for deep learning: A distribution-free, ensembled approach. In 35th International Conference on Machine Learning, ICML 2018, volume 9, pp. 6473-6482, 2018.
+John Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers, 1999.
+J Ross Quinlan. Combining instance-based and model-based learning. In Proceedings of the Tenth International Conference on International Conference on Machine Learning, pp. 236-243. Morgan Kaufmann Publishers Inc., 1993.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211-252, 2015.
+Lukas Steinberger and Hannes Leeb. Leave-one-out prediction intervals in linear regression models with many variables. arXiv preprint arXiv:1602.05801, 2016.
+Lukas Steinberger and Hannes Leeb. Conditional predictive inference for high-dimensional stable algorithms. arXiv preprint arXiv:1809.01412, 2018.
+Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR), 2014.
+Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1-9, 2015.
+Leslie G Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, 1984.
+Vladimir N Vapnik. An overview of statistical learning theory. IEEE transactions on neural networks, 10(5):988-999, 1999.
+Vladimir Vovk. Conditional validity of inductive conformal predictors. Machine learning, 92(2-3): 349-376, 2013.
+Wenbo Wang and Xingye Qiao. Learning confidence sets using support vector machines. In Advances in Neural Information Processing Systems, pp. 4929-4938, 2018.
+Yi Wu, Jongwoo Lim, and Ming-Hsuan Yang. Online object tracking: A benchmark. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
+Bianca Zadrozny and Charles Elkan. Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers. In Proceedings of the Eighteenth International Conference on Machine Learning. CiteSeer, 2001.
+
+Bianca Zadrozny and Charles Elkan. Transforming classifier scores into accurate multiclass probability estimates. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 694-699. ACM, 2002.
+Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. In ICLR, 2017.
+
+# A DISCUSSION OF ALGORITHM DESIGN CHOICES
+
+# A.1 USEFULNESS OF TEMPERATURE SCALING
+
+In this section, we discuss why temperature scaling can help improve the predicted confidence sets. A concern is that temperature scaling does not change the ordering of label probabilities. Thus, we may expect that temperature scaling does not affect the predicted confidence sets. However, this fact only holds when considering a single input $x$ —i.e., the ordering of the probabilities $f(y \mid x)$ for $y \in \mathcal{Y}$ is not changed by temperature scaling. Indeed, the ordering of confidences for labels across different inputs can change. For a concrete example, consider two inputs $x$ and $x'$ , and the case $\mathcal{Y} = \{0,1,2\}$ . Assume that the label probabilities are
+
+$$
+f (\cdot \mid x) = \left[ \begin{array}{c c c} 1 / 3 & 1 / 3 & 1 / 3 \end{array} \right] ^ {\top}
+$$
+
+$$
+f (\cdot \mid x ^ {\prime}) = \left[ \begin{array}{c c c} 3 / 4 & 1 / 4 & 0 \end{array} \right] ^ {\top}.
+$$
+
+Now, if we take the temperature $\tau$ close to zero (recall from Section 3 that $f_{\tau}(y \mid x) \propto \exp(\tau \log f(y \mid x))$ , so small $\tau$ flattens the probabilities), then the label probabilities become roughly
+
+$$
+f _ {\tau} (\cdot \mid x) = \left[ \begin{array}{c c c} 1 / 3 & 1 / 3 & 1 / 3 \end{array} \right] ^ {\top}
+$$
+
+$$
+f _ {\tau} (\cdot \mid x ^ {\prime}) = \left[ \begin{array}{c c c} 1 / 2 & 1 / 2 & 0 \end{array} \right] ^ {\top}.
+$$
+
+As a consequence, there are confidence sets that are achievable when using $f_{\tau}$ that are not achievable when using $f$ . In particular, the confidence sets
+
+$$
+C _ {T} (x) = \varnothing
+$$
+
+$$
+C _ {T} (x ^ {\prime}) = \{0, 1 \}
+$$
+
+can be achieved using $f_{\tau}$ (e.g., with $e^{-T} = 2/5$ ). However, it is impossible to achieve these confidence sets using $f$ for any choice of $T$ , since if $1 \in C_T(x')$ , then it must be the case that $C_T(x) = \{0,1,2\}$ . Intuitively, we expect calibrated prediction to improve the ordering of probabilities across different inputs. Our experiments support this intuition, since they show that empirically, using calibrated predictors $f_{\tau}$ produces confidence sets of smaller size.
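+The example above can be checked numerically; this sketch uses the parametrization $f_{\tau}(y \mid x) \propto f(y \mid x)^{\tau}$ from Section 3 (with $\tau$ near zero flattening the probabilities) and helper names of our own choosing.
+
+```python
+import numpy as np
+
+def temp_scale(p, tau):
+    """f_tau proportional to f^tau, restricted to the support of f."""
+    q = np.where(p > 0, p ** tau, 0.0)
+    return q / q.sum()
+
+def conf_set(p, thresh):
+    """C_T with threshold e^{-T} = thresh."""
+    return set(np.flatnonzero(p >= thresh))
+
+f_x  = np.array([1/3, 1/3, 1/3])
+f_x2 = np.array([3/4, 1/4, 0.0])
+tau = 0.01  # tau -> 0 flattens the forecasts
+g_x, g_x2 = temp_scale(f_x, tau), temp_scale(f_x2, tau)
+# With threshold e^{-T} = 2/5, the scaled forecaster yields the sets from the text:
+assert conf_set(g_x, 2/5) == set() and conf_set(g_x2, 2/5) == {0, 1}
+```
+
+One can also verify that no threshold applied to the original $f$ produces this pair of sets, since any threshold admitting label 1 for $x'$ admits every label for $x$ .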
+
+# A.2 USEFULNESS OF DIRECT BOUND
+
+One key design choice is to use a specialized generalization bound that directly provides PAC guarantees on our confidence sets rather than simply applying the VC dimension bound. The easiest way to determine which bound is better is to examine which one produces a smaller confidence set. In our approach, the size of the confidence set decreases monotonically with the choice of $\alpha = \alpha (n,\epsilon ,\delta)$ in (3). Thus, the bound that produces larger $\alpha$ is better. Recall that the VC dimension bound produces
+
+$$
+\alpha_{\mathrm{VC}}(n, \epsilon, \delta) = \epsilon - \sqrt{\frac{\log(2n) + 1 - \log(\delta/4)}{n}},
+$$
+
+whereas our direct bound produces
+
+$$
+\alpha_{\text{direct}}(n, \epsilon, \delta) = \max_{k \in \mathbb{N} \cup \{0\}} k/n \quad \text{subj. to} \quad \sum_{i=0}^{k} \binom{n}{i} \epsilon^{i} (1 - \epsilon)^{n-i} < \delta.
+$$
+
+Directly comparing these two choices of $\alpha$ is difficult, but our experiments show empirically that using the direct bound outperforms using the VC bound.
+
+A more direct way to compare the two approaches is to instead ask how large $n$ needs to be to achieve $\alpha(n, \epsilon, \delta) = 0$ . For $\alpha_{\mathrm{VC}}$ , it is easy to check that we need
+
+$$
+n \geq \frac {\log (2 n) + 1 + \log (4 / \delta)}{\epsilon^ {2}}.
+$$
+
+Thus, we need $n$ to be at least $O(\log(1/\delta)/\epsilon^2)$ (and possibly greater, to account for the $\log(2n)$ term). In contrast, for our direct bound, $\alpha = 0$ corresponds to the case $k = 0$ . To achieve $k = 0$ , it suffices to have $n$ satisfying $(1 - \epsilon)^n < \delta$ . Using $(1 - \epsilon)^n \leq e^{-n\epsilon}$ , it suffices to have $n$ satisfying
+
+$$
+n \geq \frac {\log (1 / \delta)}{\epsilon}.
+$$
+
+
+Figure 4: Sample complexity of different bounds; we fix $\delta = 10^{-5}$ . Left: Sample complexity of VC bound and direct bound when $k = 0$ . Right: Sample complexity of direct bound for varying $k$ .
+
+
+
+In other words, $n$ only needs to be $O(\log(1/\delta)/\epsilon)$ . For small $\epsilon$ (e.g., $\epsilon = 0.01$ ), we need $100 \times$ fewer samples to achieve the same size confidence set (i.e., with choice $\alpha(n,\epsilon,\delta) = 0$ ). In Figure 4 (left), we compute the exact values of $n$ needed to get $\alpha(n,\epsilon,\delta) = 0$ as a function of $\epsilon$ for each bound (fixing $\delta = 10^{-5}$ ). As expected, our bound requires substantially smaller $n$ .
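+These two sample-size conditions can be evaluated numerically; the sketch below (with hypothetical helper names) finds the smallest $n$ for each bound at $\alpha = 0$ .
+
+```python
+import math
+
+def n_direct(eps, delta):
+    """Smallest n with (1 - eps)^n < delta, i.e. n > log(delta)/log(1 - eps)."""
+    return math.floor(math.log(delta) / math.log1p(-eps)) + 1
+
+def n_vc(eps, delta):
+    """Smallest n with n >= (log(2n) + 1 + log(4/delta)) / eps^2."""
+    rhs = lambda n: (math.log(2 * n) + 1 + math.log(4 / delta)) / eps ** 2
+    n = 1
+    while n < rhs(n):  # double until the condition holds
+        n *= 2
+    lo, hi = n // 2, n
+    while lo + 1 < hi:  # binary search for the crossing point
+        mid = (lo + hi) // 2
+        if mid >= rhs(mid):
+            hi = mid
+        else:
+            lo = mid
+    return hi
+```
+
+For $\epsilon = 0.01$ and $\delta = 10^{-5}$ , the direct bound needs on the order of a thousand samples while the VC bound needs on the order of a few hundred thousand, consistent with the $100\times$ gap above.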
+
+Finally, in Figure 4 (right), we compare the magnitude of $n$ needed to achieve larger values of $\alpha$ using our direct bound; for simplicity, we actually consider larger values of $k$ (where $\alpha = k / n$ ), but the qualitative insights are the same. As can be seen, even for large $k$ (e.g., $k = 50$ ), the number of samples needed increases, but not substantially.
+
+# B THEORETICAL GUARANTEES
+
+# B.1 ASSUMPTIONS
+
+We make two additional technical assumptions in Theorem 1, both of which are standard. First, we assume that $f$ is measurable; this assumption holds for all models used in practice, including neural networks (e.g., it holds as long as $f$ is continuous).
+
+Second, letting $\phi : \mathcal{Z} \to \mathbb{R}$ , where $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$ , be defined by $\phi((x,y)) = -\log f(y \mid x)$ , we assume that the distribution $\bar{D}$ induced by $\phi$ on $\mathbb{R}$ has a continuous cumulative distribution function (CDF). More precisely, letting $\mu_D$ be the measure defining $D$ , the distribution $\bar{D}$ is defined by the measure
+
+$$
+\mu_{\bar{D}}(t) = \mu_{D}(\phi^{-1}(t)),
+$$
+
+where $\phi^{-1}:\mathbb{R}\to 2^{\mathcal{Z}}$ is the inverse of $\phi$ in the sense that $z\in \phi^{-1}(\phi (z))$ for all $z\in \mathcal{Z}$ . Then, we assume that the CDF corresponding to $\bar{D}$ is continuous. This second assumption is standard in statistical learning theory (Kearns & Vazirani, 1994). Essentially, it says that for any $t\in \mathbb{R}$ , the probability that $t = -\log f(y\mid x)$ must equal zero. This assumption should hold unless $p(x,y)$ or $f(y\mid x)$ are degenerate in some way. Furthermore, we can detect this case. In particular, the failure mode corresponds to the case that we see multiple points with the same value $-\log f(y\mid x)$ . Thus, choosing $\hat{T} = -\log f(y\mid x)$ would include all these points, so the realized error rate $\alpha$ is larger than desired for $\hat{T}$ . In this case, we can simply choose a slightly larger $\hat{T}$ to avoid this problem.
+
+# B.2 PROOF OF THEOREM 1
+
+At a high level, our proof proceeds in three steps. First, we show that a confidence set predictor $C_T$ can be encoded as a binary classifier $M_T$ . Second, we show that a PAC bound for $M_T$ implies a PAC bound for $C_T$ (where in both cases, the unknown parameter is $T \in \mathbb{R}$ ). Third, we prove PAC bounds on the error of $M_{\hat{T}}$ ; by the second step, these bounds complete our proof.
+
+Encoding $C_T$ as a binary classifier $M_T$ . We begin by showing how the problem of learning a PAC confidence set predictor $C_T$ reduces to the problem of learning a PAC binary classifier $M_T$ . First, we show that for any $T \in \mathbb{R}$ , the confidence set predictor $C_T$ can be encoded as a binary classifier $M_T$ . Consider any parameter $T \in \Theta = \mathbb{R}$ . Recall that we use the model $f(y \mid x)$ to construct the confidence set predictor
+
+$$
+C _ {T} (x) = \{y \in \mathcal {Y} \mid f (y \mid x) \geq e ^ {- T} \}.
+$$
+
+Now, define the map $\phi : \mathcal{Z} \to \mathbb{R}$ by $\phi(x,y) = -\log f(y \mid x)$ , where $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$ , and define the binary classifier $M_T : \mathbb{R} \to \{0,1\}$ by
+
+$$
+M _ {T} (t) = \mathbb {I} [ t \leq T ].
+$$
+
+Here, $\mathbb{I}[s]$ is the indicator function, which returns one if a statement $s$ is true and zero otherwise. We claim that
+
+$$
+C _ {T} (x) = \{y \in \mathcal {Y} \mid M _ {T} (\phi (x, y)) = 1 \}. \tag {12}
+$$
+
+To see this claim, note that
+
+$$
+\begin{array}{l} C _ {T} (x) = \{y \in \mathcal {Y} \mid f (y \mid x) \geq e ^ {- T} \} \\ = \left\{y \in \mathcal {Y} \mid - \log f (y \mid x) \leq T \right\} \\ = \{y \in \mathcal {Y} \mid \phi (x, y) \leq T \} \\ = \{y \in \mathcal {Y} \mid \mathbb {I} [ \phi (x, y) \leq T ] = 1 \} \\ = \{y \in \mathcal {Y} \mid M _ {T} (\phi (x, y)) = 1 \}, \\ \end{array}
+$$
+
+as claimed.
+
+PAC bound for $M_T$ implies PAC bound for $C_T$ . Next, we show that a PAC bound for $M_T$ implies a PAC bound for $C_T$ . More precisely, we design a data distribution $\tilde{D}$ and loss $\tilde{\ell}$ , and show that (i) the distribution of $\tilde{T}$ (trained to optimize $M_T$ ) is the same as the distribution of $\hat{T}$ (constructed using our algorithm), and (ii) a PAC bound for $M_{\tilde{T}}$ (where $\tilde{T}$ is trained on data from $\tilde{D}$ ) implies a PAC bound for $C_{\tilde{T}}$ . We show that as a consequence, a PAC bound on $M_{\tilde{T}}$ implies a PAC bound on $C_{\hat{T}}$ .
+
+We begin by constructing $\tilde{D}$ and $\tilde{\ell}$. To this end, recall that $D$ is a given distribution over $\mathcal{X} \times \mathcal{Y}$. We define a data distribution $\tilde{D}$ over $\tilde{\mathcal{X}} \times \tilde{\mathcal{Y}}$, where $\tilde{\mathcal{X}} = \mathbb{R}$ and $\tilde{\mathcal{Y}} = \{0,1\}$, as follows. The first component of $\tilde{D}$ is the distribution over $\tilde{\mathcal{X}}$ induced by $\phi$ from $D$, and the second component is the distribution over $\tilde{\mathcal{Y}}$ that places all probability mass on 1. Formally, the induced distribution exists as long as $\phi$ is measurable; for all our choices of $f$ (i.e., categorical or Gaussian), this property is satisfied. Then,
+
+$$
+\mu_ {\tilde {D}} ((t, a)) = \mu_ {D} (\phi^ {- 1} (t)) \cdot \mathbb {I} [ a = 1 ],
+$$
+
+where $\mu_{\tilde{D}}$ is the measure encoding $\tilde{D}$ , and $\mu_{D}$ is the measure encoding $D$ . Furthermore, we define $\ell : \tilde{\mathcal{Y}} \times \tilde{\mathcal{Y}} \to \{0,1\}$ to be the 0-1 loss $\ell(a,a') = \mathbb{I}[a \neq a']$ . Finally, let $\hat{T}$ be chosen using our algorithm—i.e.,
+
+$$
+\hat {T} = \arg \min T \quad \text {subj. to} \quad L \left(C _ {T}; Z\right) \leq \alpha
+$$
+
+$$
+L (C _ {T}; Z) = \frac {1}{| Z |} \sum_ {(x, y) \in Z} \mathbb {I} [ y \notin C _ {T} (x) ],
+$$
+
+for any $\alpha \in \mathbb{R}_{\geq 0}$ , and let $\tilde{T}$ be chosen similarly for $M_T$ —i.e.,
+
+$$
+\tilde {T} = \arg \min T \quad \text {subj. to} \quad \tilde {L} \left(M _ {T}; \tilde {Z}\right) \leq \alpha
+$$
+
+$$
+\tilde {L} (M _ {T}; \tilde {Z}) = \frac {1}{| \tilde {Z} |} \sum_ {(t, a) \in \tilde {Z}} \ell (M _ {T} (t), a) = \frac {1}{| \tilde {Z} |} \sum_ {(t, a) \in \tilde {Z}} \mathbb {I} [ M _ {T} (t) \neq a ].
+$$
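
On a finite validation set, this minimization has a simple closed form: $L(C_T; Z)$ counts the points with $-\log f(y \mid x) > T$, so the smallest feasible $T$ keeps all but the $k = \lfloor n\alpha \rfloor$ largest of these values inside the set. A minimal sketch in Python (the function name is ours, not from the paper):

```python
import math

def fit_threshold(neg_log_probs, alpha):
    """Smallest T with empirical error (fraction of values above T) <= alpha.

    neg_log_probs: list of phi(x, y) = -log f(y | x) over the validation set.
    """
    n = len(neg_log_probs)
    k = math.floor(n * alpha)      # number of points allowed to fall outside
    s = sorted(neg_log_probs)      # ascending order
    # Keeping the n - k smallest values inside, T is the (n - k)-th smallest.
    return s[n - k - 1]

# Example: 10 scores, alpha = 0.2 -> k = 2, so T is the 8th smallest value.
scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
print(fit_threshold(scores, 0.2))  # -> 0.8
```

With $T = 0.8$, exactly two of the ten scores exceed the threshold, so the empirical error is $0.2 = \alpha$; any smaller $T$ would violate the constraint.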
+
+Now, we show (i) above. In particular, we claim that $\hat{T}(Z)$ has the same distribution as $\tilde{T}(\tilde{Z})$, where $Z \sim D^n$ and $\tilde{Z} \sim \tilde{D}^n$ are random datasets. To this end, define $\Phi: \mathcal{Z}^n \to \tilde{\mathcal{Z}}^n$ by
+
+$$
+\Phi ((z _ {1}, \dots , z _ {n})) = ((\phi (z _ {1}), 1), \dots , (\phi (z _ {n}), 1)).
+$$
+
+Note that
+
+$$
+\begin{array}{l} \tilde {L} (M _ {T}; \Phi (Z)) = \frac {1}{| \Phi (Z) |} \sum_ {i = 1} ^ {n} \mathbb {I} [ M _ {T} (\phi (x _ {i}, y _ {i})) \neq 1 ] \\ = \frac {1}{| Z |} \sum_ {i = 1} ^ {n} \mathbb {I} [ y _ {i} \notin C _ {T} (x _ {i}) ] \\ = L \left(C _ {T}; Z\right), \\ \end{array}
+$$
+
+from which it follows that
+
+$$
+\begin{array}{l} \hat {T} (Z) = \arg \min T \quad \text {subj. to} \quad L \left(C _ {T}; Z\right) \leq \alpha \\ = \arg \min T \quad \text {subj. to} \quad \tilde {L} (M _ {T}; \Phi (Z)) \leq \alpha \\ = \tilde {T} (\Phi (Z)). \\ \end{array}
+$$
+
+By construction of $\Phi$ , the random variables $\tilde{Z}$ and $\Phi(Z)$ have the same distribution; thus, it follows that the random variables $\tilde{T}(\tilde{Z})$ and $\tilde{T}(\Phi(Z))$ have the same distribution as well. Since $\hat{T}(Z) = \tilde{T}(\Phi(Z))$ , it follows that $\hat{T}(Z)$ has the same distribution as $\tilde{T}(\tilde{Z})$ , as claimed.
+
+Next, we show (ii) above. In particular, we claim that a PAC bound for $M_{\tilde{T}(\tilde{Z})}$—i.e.,
+
+$$
+\mathbb {P} _ {\tilde {Z} \sim \tilde {D} ^ {n}} [ \tilde {L} (M _ {\tilde {T} (\tilde {Z})}) \leq \epsilon ] \geq 1 - \delta ,
+$$
+
+implies a PAC bound for $C_{\tilde{T}(\tilde{Z})}$—i.e.,
+
+$$
+\mathbb {P} _ {\tilde {Z} \sim \tilde {D} ^ {n}} \left[ L \left(C _ {\tilde {T} (\tilde {Z})}\right) \leq \epsilon \right] \geq 1 - \delta ,
+$$
+
+where the true losses are
+
+$$
+\begin{array}{l} \tilde {L} (M _ {T}) = \mathbb {E} _ {(t, a) \sim \tilde {D}} [ \ell (M _ {T} (t), a) ] = \mathbb {P} _ {(t, a) \sim \tilde {D}} [ M _ {T} (t) \neq a ] \\ L \left(C _ {T}\right) = \mathbb {E} _ {(x, y) \sim D} \left[ \mathbb {I} [ y \notin C _ {T} (x) ] \right] = \mathbb {P} _ {(x, y) \sim D} [ y \notin C _ {T} (x) ]. \\ \end{array}
+$$
+
+Note that it suffices to show that the true loss for $C_T$ equals the true loss for $M_T$ —i.e.,
+
+$$
+L \left(C _ {T}\right) = \tilde {L} \left(M _ {T}\right),
+$$
+
+since this equation (together with the PAC bound for $M_{\tilde{T}(\tilde{Z})}$ ) implies
+
+$$
+\mathbb {P} _ {\tilde {Z} \sim \tilde {D} ^ {n}} [ L (C _ {\tilde {T} (\tilde {Z})}) \leq \epsilon ] = \mathbb {P} _ {\tilde {Z} \sim \tilde {D} ^ {n}} [ \tilde {L} (M _ {\tilde {T} (\tilde {Z})}) \leq \epsilon ] \geq 1 - \delta ,
+$$
+
+as desired. To see the claim, note that
+
+$$
+\begin{array}{l} \tilde {L} (M _ {T}) = \mathbb {P} _ {(t, a) \sim \tilde {D}} [ M _ {T} (t) \neq a ] \\ = \int \mathbb {I} [ M _ {T} (t) \neq a ] d \mu_ {\tilde {D}} ((t, a)) \\ = \sum_ {a = 0} ^ {1} \mathbb {I} [ a = 1 ] \cdot \int \mathbb {I} [ M _ {T} (t) \neq a ] d \mu_ {D} (\phi^ {- 1} (t)) \\ = \int \mathbb {I} [ M _ {T} (t) \neq 1 ] d \mu_ {D} (\phi^ {- 1} (t)) \\ \end{array}
+$$
+
+Now, using the change of variables $t \mapsto \phi(z)$ , we have
+
+$$
+\begin{array}{l} \tilde {L} (M _ {T}) = \int \mathbb {I} [ M _ {T} (\phi (z)) \neq 1 ] d \mu_ {D} (z) \\ = \int \mathbb {I} [ M _ {T} (\phi (x, y)) \neq 1 ] \cdot D (x, y) d x d y. \\ \end{array}
+$$
+
+Then, using (12), we have
+
+$$
+\begin{array}{l} \tilde {L} (M _ {T}) = \int \mathbb {I} [ y \notin C _ {T} (x) ] D (x, y) d x d y \\ = \mathbb {P} _ {(x, y) \sim D} [ y \notin C _ {T} (x) ] \\ = L \left(C _ {T}\right), \\ \end{array}
+$$
+
+as claimed.
+
+Finally, combining (i) and (ii), we have
+
+$$
+\mathbb {P} _ {Z \sim D ^ {n}} [ L (C _ {\hat {T} (Z)}) \leq \epsilon ] = \mathbb {P} _ {\tilde {Z} \sim \tilde {D} ^ {n}} [ L (C _ {\tilde {T} (\tilde {Z})}) \leq \epsilon ] \geq 1 - \delta ,
+$$
+
+where the first equality follows since (i) says that $\hat{T}(Z)$ (where $Z \sim D^n$ ) has the same distribution as $\tilde{T}(\tilde{Z})$ (where $\tilde{Z} \sim \tilde{D}^n$ ), and the second inequality follows by (ii).
+
+Generalization bound. Finally, we prove the PAC bound
+
+$$
+\mathbb {P} _ {\tilde {Z} \sim \tilde {D} ^ {n}} [ \tilde {L} (M _ {\tilde {T}}) \leq \epsilon ] \geq 1 - \delta_ {0}, \tag {13}
+$$
+
+for $M_{\tilde{T}}$ , where $\delta_0 = \sum_{i=0}^{k} \binom{n}{i} \epsilon^i (1 - \epsilon)^{n-i}$ ; for conciseness, we have dropped the dependence of $\tilde{T}$ on $\tilde{Z}$ . By the previous step, this bound implies the theorem statement. To this end, we first simplify the left-hand side of the inequality (13). In particular, let $T^*$ be the smallest $T$ for which $\tilde{L}(M_T) = \epsilon$ ; such a $T^*$ exists by our assumption that $\tilde{D}$ has a continuous CDF.
+
+First, we claim that $T < T^{*}$ implies $\tilde{L}(M_T) > \tilde{L}(M_{T^*})$ . Assuming $T < T^{*}$ , then
+
+$$
+\begin{array}{l} \tilde {L} (M _ {T}) = \mathbb {P} _ {(t, a) \sim \tilde {D}} [ M _ {T} (t) \neq a ] \\ = \mathbb {E} _ {(t, a) \sim \tilde {D}} [ \mathbb {I} [ M _ {T} (t) \neq a ] ] \\ = \mathbb {E} _ {(t, a) \sim \tilde {D}} [ \mathbb {I} [ M _ {T} (t) \neq 1 ] ] \\ = \mathbb {E} _ {(t, a) \sim \tilde {D}} \left[ \mathbb {I} [ \mathbb {I} [ t \leq T ] \neq 1 ] \right] \\ = \mathbb {E} _ {(t, a) \sim \tilde {D}} [ \mathbb {I} [ t > T ] ] \\ > \mathbb {E} _ {(t, a) \sim \tilde {D}} [ \mathbb {I} [ t > T ^ {*} ] ] \\ = \tilde {L} \left(M _ {T ^ {*}}\right). \\ \end{array}
+$$
+
+Assuming $T \geq T^{*}$ , we can similarly show that $\tilde{L}(M_T) \leq \tilde{L}(M_{T^{*}})$ . It follows that
+
+$$
+\begin{array}{l} \mathbb {P} _ {\tilde {Z} \sim \tilde {D} ^ {n}} \left[ \tilde {L} \left(M _ {\tilde {T}}\right) > \epsilon \right] = \mathbb {P} _ {\tilde {Z} \sim \tilde {D} ^ {n}} \left[ \tilde {L} \left(M _ {\tilde {T}}\right) > \tilde {L} \left(M _ {T ^ {*}}\right) \right] \\ = \mathbb {P} _ {\tilde {Z} \sim \tilde {D} ^ {n}} \left[ \tilde {T} < T ^ {*} \right]. \\ \end{array}
+$$
+
+As a consequence, (13) is equivalent to
+
+$$
+\mathbb {P} _ {\tilde {Z} \sim \tilde {D} ^ {n}} \left[ \tilde {T} < T ^ {*} \right] \leq \delta_ {0}.
+$$
+
+Next, recall that $\tilde{T}$ must satisfy $\tilde{L}(M_{\tilde{T}}; \tilde{Z}) \leq \alpha$ , where
+
+$$
+\tilde {L} (M _ {\tilde {T}}; \tilde {Z}) = \frac {1}{n} \sum_ {(t, a) \in \tilde {Z}} \mathbb {I} [ M _ {\tilde {T}} (t) \neq a ].
+$$
+
+Assuming $\tilde{T} < T^{*}$ , and using $k = n\cdot \alpha$ , it follows that
+
+$$
+\begin{array}{l} k \geq \sum_ {(t, a) \in \tilde {Z}} \mathbb {I} [ M _ {\tilde {T}} (t) \neq a ] = \sum_ {(t, a) \in \tilde {Z}} \mathbb {I} [ M _ {\tilde {T}} (t) \neq 1 ] \\ = \sum_ {(t, a) \in \tilde {Z}} \mathbb {I} [ t > \tilde {T} ] \\ \geq \sum_ {(t, a) \in \tilde {Z}} \mathbb {I} [ t > T ^ {*} ]. \\ \end{array}
+$$
+
+As a consequence, we have
+
+$$
+\begin{array}{l} \mathbb {P} _ {\tilde {Z} \sim \tilde {D} ^ {n}} \left[ \tilde {T} < T ^ {*} \right] \leq \mathbb {P} _ {\tilde {Z} \sim \tilde {D} ^ {n}} \left[ \sum_ {(t, a) \in \tilde {Z}} \mathbb {I} [ t > T ^ {*} ] \leq k \right] \\ = \sum_ {i = 0} ^ {k} \mathbb {P} _ {\tilde {Z} \sim \tilde {D} ^ {n}} \left[ \sum_ {(t, a) \in \tilde {Z}} \mathbb {I} [ t > T ^ {*} ] = i \right]. \\ \end{array}
+$$
+
+By our definition of $T^{*}$ , the event in the final expression says that the sum of $n$ i.i.d. Bernoulli random variables $\mathbb{I}[t > T^{*}] \sim \operatorname{Bernoulli}(\epsilon)$ is at most $k$ . Thus, this sum follows a $\operatorname{Binomial}(n, \epsilon)$ distribution, so
+
+$$
+\mathbb {P} _ {\tilde {Z} \sim \tilde {D} ^ {n}} \left[ \tilde {T} < T ^ {*} \right] \leq \sum_ {i = 0} ^ {k} \operatorname {Binomial} (i; n, \epsilon) = \sum_ {i = 0} ^ {k} \binom {n} {i} \epsilon^ {i} (1 - \epsilon) ^ {n - i} = \delta_ {0},
+$$
+
+as claimed. The theorem statement follows.
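
The quantity $\delta_0$ is simply the CDF of a $\operatorname{Binomial}(n, \epsilon)$ distribution evaluated at $k$, so it can be computed directly; a sketch using only the Python standard library:

```python
from math import comb

def delta0(n, k, eps):
    """P[Binomial(n, eps) <= k] = sum_{i=0}^{k} C(n, i) eps^i (1 - eps)^(n - i)."""
    return sum(comb(n, i) * eps**i * (1 - eps)**(n - i) for i in range(k + 1))

# With k = 0 the bound reduces to (1 - eps)^n:
print(round(delta0(10, 0, 0.1), 4))  # -> 0.3487
```

In practice one fixes $n$, $k$, and a target $\delta$, and searches for the smallest $\epsilon$ with $\delta_0 \leq \delta$ (e.g., by bisection, since $\delta_0$ is decreasing in $\epsilon$).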
+
+# C DETAILS ON PROBABILITY FORECASTERS FOR SPECIFIC TASKS
+
+In this section, we describe architectures for probability forecasters for classification, regression, and model-based reinforcement learning.
+
+Classification. For the case $\mathcal{Y} = \{1,\dots,Y\}$ , we choose the probability forecaster $f$ to be a neural network with a softmax output. Then, we can compute a given confidence set
+
+$$
+C _ {T} (x) = \left\{y \in \mathcal {Y} \mid f (y \mid x) \geq e ^ {- T} \right\}
+$$
+
+by explicitly enumerating $y \in \mathcal{Y}$ . We measure the size of $C_T(x)$ as $|C_T(x)|$ .
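
As a concrete illustration, with a softmax output the enumeration is a single pass over the class probabilities (a sketch with illustrative names):

```python
import math

def confidence_set(probs, T):
    """Labels y with f(y | x) >= e^{-T}, enumerated explicitly."""
    cutoff = math.exp(-T)
    return {y for y, p in enumerate(probs) if p >= cutoff}

probs = [0.70, 0.20, 0.06, 0.04]            # softmax output f(. | x)
print(confidence_set(probs, math.log(10)))  # labels with prob >= 0.1 -> {0, 1}
```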
+
+Regression. For the case $\mathcal{Y} = \mathbb{R}$ , we choose the probability forecaster $f$ to be a neural network that outputs the parameters $(\mu, \sigma) \in \mathcal{Y} \times \mathbb{R}_{>0}$ of a Gaussian distribution. Then, we have
+
+$$
+C _ {T} (x) = \left[ \mu - \sigma \sqrt {2 (T - \log (\sigma \sqrt {2 \pi}))}, \mu + \sigma \sqrt {2 (T - \log (\sigma \sqrt {2 \pi}))} \right].
+$$
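
This interval comes from solving $-\log \mathcal{N}(y; \mu, \sigma^2) \leq T$ for $y$; a sketch (returning `None` when the threshold lies below the negative log of the density's peak, in which case the set is empty):

```python
import math

def gaussian_confidence_interval(mu, sigma, T):
    """Solve -log N(y; mu, sigma^2) <= T for y; returns (lo, hi) or None."""
    s = T - math.log(sigma * math.sqrt(2 * math.pi))
    if s < 0:                      # threshold below the density's peak value
        return None
    half = sigma * math.sqrt(2 * s)
    return (mu - half, mu + half)

# T equal to the negative log-density at mu + sigma yields the 1-sigma interval:
T = 0.5 + math.log(math.sqrt(2 * math.pi))
print(gaussian_confidence_interval(0.0, 1.0, T))  # -> (-1.0, 1.0)
```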
+
+This choice generalizes to $\mathcal{Y} = \mathbb{R}^d$ by having $f$ output the parameters $(\mu, \Sigma) \in \mathcal{Y} \times \mathbb{S}_{\succ 0}^d$ (where $\mathbb{S}_{\succ 0}^d$ is the set of $d$ dimensional symmetric positive definite matrices) of a $d$ dimensional Gaussian distribution. Note that $C_T(x)$ is an ellipsoid $C_T(x) = \mu + \Lambda S^{d-1}$ , where $\Lambda \in \mathbb{R}^{d \times d}$ and $S^{d-1}$ is the unit sphere in $\mathbb{R}^d$ ; in particular, $\Lambda = QD^{-\frac{1}{2}}$ , where $QDQ^\top$ is the eigendecomposition of
+
+$$
+(2 T - d \ln 2 \pi - \ln \det \Sigma) ^ {- 1} \cdot \Sigma^ {- 1}.
+$$
+
+We measure the size of $C_T(x)$ as $\| \Lambda \|_F$ , where $\| \cdot \|_F$ is the Frobenius norm.
+
+Model-based reinforcement learning. In model-based reinforcement learning, the goal is to predict trajectories based on a model of the dynamics. We consider an MDP with states $\mathcal{X} \subseteq \mathbb{R}^{d_X}$ , actions $\mathcal{U} \subseteq \mathbb{R}^{d_U}$ , an unknown distribution over initial states $x_0 \sim d_0$ , and unknown dynamics $g^*(x' \mid x, u)$ mapping a state-action pair $(x, u) \in \mathcal{X} \times \mathcal{U}$ to a distribution over states $x' \in \mathcal{X}$ . We assume a fixed, known policy $\pi(u \mid x)$ , mapping a state $x \in \mathcal{X}$ to a distribution over actions $u \in \mathcal{U}$ . The (unknown) closed-loop dynamics are $f^*(x' \mid x) = \mathbb{E}_{\pi(u|x)}[g^*(x' \mid x, u)]$ .
+
+Given initial state $x_0 \in \mathcal{X}$ and time horizon $H \in \mathbb{N}$ , we can sample a trajectory $x_{1:H}^{*} = (x_{1}^{*},\dots,x_{H}^{*}) \sim f^{*}$ by setting $x_0^* = x_0$ and sequentially sampling $x_{t+1}^{*} \sim f^{*}(\cdot | x_t^*)$ for $t \in \{0,1,\dots,H-1\}$ . Our goal is to predict a confidence set $C_T(x_0) \subseteq \mathcal{X}^H$ that contains $x_{1:H}^{*} \in \mathcal{X}^H$ with high probability (according to both the randomness in initial states $x_0 \sim d_0$ and the randomness in the dynamics $f^*$ ). This problem is a multivariate regression problem with inputs $\mathcal{X}$ and outputs $\mathcal{Y} = \mathcal{X}^H$ .
+
+We assume given a probability forecaster $f(x' \mid x) = \mathcal{N}(x'; \mu(x), \Sigma(x))$ trained to predict the distribution over next states—i.e., $f(x' \mid x) \approx f^*(x' \mid x)$ . Given initial state $x_0 \in \mathcal{X}$ and time horizon $H \in \mathbb{N}$ , we construct the mean trajectory $\bar{x}_{1:H}$ by setting $\bar{x}_0 = x_0$ and letting $\bar{x}_{t+1} = \mu(\bar{x}_t)$ . To account for the fact that the variances accumulate over time, we sum them together to obtain the predicted variances $\tilde{\Sigma}_{1:H}$ —i.e.,
+
+$$
+\tilde {\Sigma} _ {t} = \Sigma (\bar {x} _ {0}) + \Sigma (\bar {x} _ {1}) + \ldots + \Sigma (\bar {x} _ {t - 1}).
+$$
+
+Then, we use the probability forecast $\tilde{f}(\bar{x}_{1:H}, \tilde{\Sigma}_{1:H}) = \mathcal{N}(\bar{x}_{1:H}, \tilde{\Sigma}_{1:H})$ (where we think of $\bar{x}_{1:H}$ as a vector in $\mathbb{R}^{H \cdot d_X}$ and $\tilde{\Sigma}_{1:H}$ as a block diagonal matrix in $\mathbb{R}^{(H \cdot d_X) \times (H \cdot d_X)}$ ) to construct confidence sets.
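
A sketch of this rollout, assuming the trained forecaster is exposed as two callables `mean_fn` and `cov_fn` (hypothetical names) returning $\mu(x)$ and $\Sigma(x)$:

```python
import numpy as np

def rollout(x0, mean_fn, cov_fn, H):
    """Mean trajectory x_bar_{1:H} and accumulated covariances Sigma_tilde_{1:H}."""
    x_bar, sigmas = [], []
    x, acc = np.asarray(x0, dtype=float), None
    for _ in range(H):
        # Sigma_tilde_t sums the covariances at x_bar_0, ..., x_bar_{t-1}.
        acc = cov_fn(x) if acc is None else acc + cov_fn(x)
        x = mean_fn(x)                     # x_bar_{t+1} = mu(x_bar_t)
        x_bar.append(x)
        sigmas.append(acc.copy())
    return x_bar, sigmas

# Toy linear dynamics with constant noise covariance (placeholders for the net):
A, Q = 0.9 * np.eye(2), 0.1 * np.eye(2)
xs, Ss = rollout([1.0, 0.0], lambda x: A @ x, lambda x: Q, H=3)
print(np.round(Ss[-1], 2))  # after 3 steps the accumulated covariance is 0.3 * I
```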
+
+Finally, we describe how we measure the size of a predicted confidence set $C_T(x_0) \subseteq \mathcal{X}^H$ . In particular, note that $C_T(x_0)$ has the form
+
+$$
+C _ {T} (x _ {0}) = \left(C _ {T, 1} (x _ {0}),..., C _ {T, H} (x _ {0})\right),
+$$
+
+
+
+
+Figure 5: Comparison to baselines that do not have theoretical guarantees. In (a) and (b), we show results for ImageNet, and in (c) and (d), we show results for the half-cheetah. In (a) and (c), we show the empirical error in the confidence set sizes; the dotted line denotes $\epsilon = 0.01$ , our target confidence set error. In (b) and (d), we show the sizes of the constructed confidence sets.
+
+i.e., $C_{T,t}(x_0)$ is the confidence set for the state $x_t$ reached after $t$ time steps. Then, we measure the size of the confidence set for each component $C_{T,t}(x_0)$ (for $t \in \{1, \dots, H\}$ ) individually, and take the average. As in the case of regression, $C_{T,t}(x_0)$ is an ellipsoid $C_{T,t}(x_0) = \bar{x}_t + \Lambda_t S^{d_X - 1}$ ; then, the size of $C_T(x_0)$ is $H^{-1} \sum_{t=1}^{H} \| \Lambda_t \|_F$ .
+
+An additional detail is that when we calibrate this forecaster, we calibrate each component $C_{T,t}(x_0)$ individually—i.e., we use $H$ calibration parameters $\tau_1, \ldots, \tau_H$ .
+
+# D ADDITIONAL RESULTS
+
+# D.1 COMPARISON TO ADDITIONAL BASELINES
+
+We compare to two baselines that do not have theoretical guarantees. We assume given a probability forecaster $f(y \mid x)$ . Then, given an input $x \in \mathcal{X}$ , we construct the confidence set to satisfy
+
+$$
+\sum_ {y \in C (x)} f (y \mid x) \geq 1 - \epsilon . \tag {14}
+$$
+
+More precisely, we first rank the labels in decreasing order of $f(y \mid x)$ , to obtain a list $(y_{1}, y_{2}, \dots, y_{|\mathcal{Y}|})$ . Then, we choose the smallest $k$ such that (14) holds for $C(x) = \{y_{1}, \dots, y_{k}\}$ . Intuitively, if the probabilities $f(y \mid x)$ are correct (i.e., $f(y \mid x)$ is the true probability of $y$ given $x$ ), then this confidence set should contain the true label $y$ with high probability.
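
This greedy construction can be sketched as follows (illustrative code, not the authors' implementation):

```python
def baseline_confidence_set(probs, eps):
    """Smallest prefix of labels, ranked by probability, with total mass >= 1 - eps."""
    ranked = sorted(range(len(probs)), key=lambda y: -probs[y])
    total, chosen = 0.0, []
    for y in ranked:
        chosen.append(y)
        total += probs[y]
        if total >= 1 - eps:
            break
    return set(chosen)

# Mass 0.5 + 0.3 + 0.15 = 0.95 >= 0.9, so three labels suffice:
print(baseline_confidence_set([0.5, 0.3, 0.15, 0.05], 0.1))  # -> {0, 1, 2}
```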
+
+For regression, we cannot explicitly rank the labels $y \in \mathcal{Y} \subseteq \mathbb{R}^d$ , but the probabilities $f(y \mid x)$ decrease monotonically away from the mean. Then, assuming $f(y \mid x) = \mathcal{N}(y; \mu(x), \Sigma(x))$ is Gaussian, we take an ellipsoid of shape $\Sigma(x)$ around $\mu(x)$ with minimum radius that captures $1 - \epsilon$ of the probability mass of $f(y \mid x)$ . More precisely, we choose
+
+$$
+C (x) = C _ {\hat {T} (x)} (x)
+$$
+
+$$
+\hat {T} (x) = \arg \min _ {T \in \mathbb {R}} T \quad \text {subj. to} \quad \mathbb {P} _ {f (y | x)} [ y \in C _ {T} (x) ] \geq 1 - \epsilon ,
+$$
+
+
+Figure 6: Confidence set sizes for two neural network architectures trained on ImageNet; for both, we use $n = 20,000$ , $\epsilon = 0.01$ and $\delta = 10^{-5}$ . Left: AlexNet (Krizhevsky, 2014); here, the empirical confidence set error of our approach $C + D$ is 0.0066. Right: GoogLeNet (Szegedy et al., 2015); here, the empirical confidence set error of our approach is 0.0061.
+
+
+
+where $C_T(x) = \{y \in \mathcal{Y} \mid f(y \mid x) \geq e^{-T}\}$ as before. Note that unlike our algorithm, the threshold $\hat{T}(x)$ is not a learned parameter, but is computed independently for each new input $x$ . We can solve for $\hat{T}(x)$ efficiently by changing basis to convert $f(y \mid x)$ to a standard Gaussian distribution, and then using the error function to compute the cutoff that includes the desired probability mass.
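
In the scalar case, the change of basis reduces to the standard normal quantile: the interval $\mu \pm \sigma z$ with $z = \Phi^{-1}(1 - \epsilon/2)$ captures mass $1 - \epsilon$, which corresponds to the threshold $\hat{T}(x) = \log(\sigma\sqrt{2\pi}) + z^2/2$. A sketch (using `statistics.NormalDist` in place of an explicit error-function computation):

```python
import math
from statistics import NormalDist

def per_input_threshold(sigma, eps):
    """T such that {y : f(y|x) >= e^{-T}} covers mass 1 - eps of N(mu, sigma^2)."""
    z = NormalDist().inv_cdf(1 - eps / 2)   # two-sided standard normal quantile
    return math.log(sigma * math.sqrt(2 * math.pi)) + 0.5 * z * z

# The resulting interval half-width is sigma * z, e.g. ~1.96 sigma at eps = 0.05:
T = per_input_threshold(1.0, 0.05)
half = math.sqrt(2 * (T - math.log(math.sqrt(2 * math.pi))))
print(round(half, 2))  # -> 1.96
```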
+
+In Figure 5, we compare the confidence sets constructed using this approach with (i) the forecaster $f_{\hat{\phi}}(y \mid x)$ without any calibration, and (ii) the calibrated forecaster $f_{\hat{\phi},\hat{\tau}}(y \mid x)$ . We plot both the confidence set sizes and the empirical error rates. For the latter, recall that a confidence set predictor $C$ is correct if $L(C) < \epsilon$ , where $L(C)$ is the true error rate. However, we cannot measure $L(C)$ ; instead, we approximate it on a held-out test set $Z_{\mathrm{test}} \subseteq \mathcal{X} \times \mathcal{Y}$ —i.e., $L(C) \approx \hat{L}(C; Z_{\mathrm{test}})$ , where
+
+$$
+\hat {L} (C; Z _ {\text {t e s t}}) = \frac {1}{| Z _ {\text {t e s t}} |} \sum_ {(x, y) \in Z _ {\text {t e s t}}} \mathbb {I} [ y \notin C (x) ].
+$$
+
+Intuitively, $\hat{L}(C;Z_{\mathrm{test}})$ is the fraction of inputs $(x,y)\in Z_{\mathrm{test}}$ such that the predicted confidence set for $x$ does not contain $y$ . We say a confidence set $C$ is empirically valid when $\hat{L}(C;Z_{\mathrm{test}}) < \epsilon$ . Recall that our algorithm guarantees correctness with probability at least $1 - \delta$ , where $\delta = 10^{-5}$ .
+
+As can be seen, the baseline approaches are not empirically valid in all cases. In one case—namely, the baseline with the calibrated forecaster on ImageNet—the confidence sets are almost empirically valid. However, in this case, the confidence sets are much larger than those constructed by our approach, even though our confidence sets are empirically valid. Thus, our algorithms outperform the baselines in all cases.
+
+# D.2 RESULTS ON ADDITIONAL IMAGENET NEURAL NETWORK ARCHITECTURES
+
+We apply our approach to two additional neural network architectures for ImageNet: AlexNet (Krizhevsky, 2014) and GoogLeNet (Szegedy et al., 2015). Our results are shown in Figure 6. As can be seen, calibration reduces the confidence set sizes for AlexNet, but actually increases the confidence set sizes for GoogLeNet. Thus, both calibrated and uncalibrated models may need to be considered when constructing confidence set predictors. Also, we find that confidence set sizes are correlated with classification error—the test errors for AlexNet, GoogLeNet, and ResNet are $47.83\%$ , $29.41\%$ , and $21.34\%$ , respectively, and their confidence set sizes decrease in the same order.
+
+# D.3 RESULTS ON ADDITIONAL CLASSIFICATION DATASETS
+
+We apply our approach to three small classification datasets: an arrhythmia detection dataset (Guvenir et al., 1997), a car evaluation dataset (Bohanec & Rajkovic, 1988), and a medical alarm dataset (Bonafide et al., 2017). The confidence set sizes are shown in Figure 7. We choose larger values of $\epsilon$ and $\delta$ since we cannot obtain confidence sets that satisfy the PAC criterion with smaller $\epsilon$ and $\delta$ when the number of validation examples $n$ is too small. For all three datasets, the empirical confidence set error is smaller than the specified error $\epsilon$ ; thus, the constructed confidence sets are empirically valid. For these datasets, the confidence set sizes of our approach $C + D$ and our approach without calibration $D$ are similar, most likely due to the small number of class labels.
+
+
+
+
+Figure 7: Confidence set sizes for three additional classification benchmarks: (a) the arrhythmia detection dataset (Guvenir et al., 1997); here, $n = 90$ , $\epsilon = 0.1$ , $\delta = 0.05$ , and the empirical confidence set error of our approach $C + D$ is 0.0435, (b) the car evaluation dataset (Bohanec & Rajkovic, 1988); here, $n = 345$ , $\epsilon = 0.05$ , $\delta = 10^{-5}$ , and the empirical confidence set error of our approach $C + D$ is 0.0172, and (c) the CHOP alarm dataset (Bonafide et al., 2017); here, $n = 1000$ , $\epsilon = 0.02$ , $\delta = 10^{-5}$ , and the empirical confidence set error of our approach $C + D$ is 0.0159. (d) The fractions of actionable and false alarms with a confidence set $\{0\}$ (i.e., only contains false alarm).
+
+We additionally ran our approach on a medical dataset where classification decisions are safety critical; thus, correct predicted confidence sets are required. In particular, we use the Children's Hospital of Philadelphia (CHOP) alarm dataset (Bonafide et al., 2017). This dataset consists of vital signs from 100 patients around one year of age. One of the vital signs is the oxygen level of the blood, and a medical device generates an alarm if the oxygen level is below a specified level. The labels indicate whether the generated alarm is true ( $y = 1$ ) or false ( $y = 0$ ). We use $n = 1000$ , $\epsilon = 0.02$ , and $\delta = 10^{-5}$ . The empirical confidence set error of our approach is $\hat{L}(C; Z_{\mathrm{test}}) = 0.0159$ .
+
+The key question is how many false alarms can be reliably detected using machine learning to help reduce alarm fatigue. We consider an approach where we use the predicted confidence sets to detect false alarms. In particular, we first train a probability forecaster $f: \mathcal{X} \to \mathcal{P}_{\mathcal{Y}}$ , where $\mathcal{Y} = \{0,1\}$ , to predict the probability that an alarm is true, and then construct a calibrated confidence set predictor $\tilde{f}: \mathcal{X} \to 2^{\mathcal{Y}}$ based on this forecaster. We consider an alarm to be false if the predicted confidence set is $\tilde{f}(x) = \{0\}$ —i.e., according to our confidence set predictor, the alarm is definitely false. Then, our PAC guarantee says that the alarm is actually false with probability at least $1 - \epsilon$ . In summary, we suppress an alarm if $\tilde{f}(x) = \{0\}$ . Using our approach, 176/630 (i.e., $27.94\%$ ) of false alarms are suppressed, while only 13/187 (i.e., $6.95\%$ ) true alarms are suppressed (see Figure 7 (d)).
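
The suppression rule reduces to a membership check on the predicted set; a sketch (the threshold $T$ would come from the calibration step):

```python
import math

def suppress_alarm(f_false, f_true, T):
    """Suppress iff the predicted confidence set is exactly {0} (false alarm only)."""
    cutoff = math.exp(-T)
    conf_set = {y for y, p in ((0, f_false), (1, f_true)) if p >= cutoff}
    return conf_set == {0}

# With T = log(20) (cutoff 0.05), a confident false-alarm prediction is suppressed:
print(suppress_alarm(0.97, 0.03, math.log(20)))  # -> True
print(suppress_alarm(0.60, 0.40, math.log(20)))  # -> False
```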
+
+# D.4 RESULTS ON ADDITIONAL REGRESSION DATASETS
+
+We ran our algorithm on two small regression baselines—the Auto MPG dataset (Quinlan, 1993) and the student grade dataset (Cortez & Silva, 2008). We show results in Figure 8. The parameters we use are $\epsilon = 0.1$ and $\delta = 0.05$ ; as with the smaller classification datasets, we use larger choices of $\epsilon$ and $\delta$ since we cannot construct valid confidence sets for smaller choices. For the Auto MPG dataset, the empirical confidence set error of our final model $C + D$ is $\hat{L}(C; Z_{\mathrm{test}}) = 0.0597$ , so these are empirically valid. For the student grade dataset, the error is $\hat{L}(C; Z_{\mathrm{test}}) = 0.1250$ , which is slightly larger than desired; this failure is likely due to the fact that the failure probability $\delta = 0.05$ is somewhat large.
+
+
+Figure 8: Confidence set sizes for two benchmarks focused on regression; for both, we use $\epsilon = 0.1$ and $\delta = 0.05$ . Left: the Auto MPG dataset (Quinlan, 1993); here, $n = 70$ , and the empirical confidence set error of our approach $C + D$ is 0.1250. Right: The student grade dataset (Cortez & Silva, 2008); here, $n = 100$ , and the empirical confidence set error of our approach is 0.0597.
+
+
+
+# D.5 ADDITIONAL RESULTS ON IMAGENET, HALF-CHEETAH, AND OBJECT TRACKING
+
+Tables 4 & 5 show examples of ResNet confidence set sizes for ImageNet images. Table 6 shows results for varying $\epsilon, \delta$ on ResNet. Tables 7 & 8 show results for varying $\epsilon, \delta$ on the half-cheetah. Table 9 shows visualizations of the confidence sets predicted for our object tracking benchmark.
+
+
+Table 4: ImageNet images with varying ResNet confidence set sizes. The confidence set sizes are on the top. The true label is on the left-hand side. Incorrectly labeled images are boxed in red.
+
+| 1 ≤ \|C(x)\| < 5 | 5 ≤ \|C(x)\| < 10 | 10 ≤ \|C(x)\| < 20 |
| --- | --- | --- |
| {king penguin} | {Chihuahua, toy terrier, Italian greyhound, Boston bull, miniature pinscher} | {banded gecko, common iguana, American chameleon, whiptail, agama, frilled lizard, alligator lizard, green lizard, African chameleon, Komodo dragon} |
| {shopping basket} | {English springer, Welsh springer spaniel, collie, boxer, Saint Bernard, Leonberg} | {altar, analog clock, bell cote, castle, church, cinema, dome, monastery, palace, vault, wall clock} |
| {chambered nautilus} | {face powder, hamper, lotion, packet, shopping basket} | {barber chair, hand blower, medicine chest, paper towel, plunger, shower curtain, soap dispenser, toilet seat, tub, washbasin, washer, toilet tissue} |
| {bonnet} | {kite, bald eagle, vulture, great grey owl, bittern} | {beach wagon, cab, car wheel, convertible, grille, limousine, minivan, mobile home, passenger car, pickup, recreational vehicle, sports car, tow truck} |
| {Madagascar cat, indri} | {tiger cat, lynx, leopard, snow leopard, jaguar, tiger, cheetah} | {cannon, castle, cliff dwelling, megalith, monastery, obelisk, prison, stone wall, triumphal arch, vault, alp, cliff, promontory, valley} |
| {ballpoint, fountain pen} | {cash machine, desktop computer, entertainment center, home theater, loudspeaker, monitor, screen, television} | {amphibian, cassette player, fire engine, minibus, minivan, passenger car, pole, police van, puck, racer, radio, school bus, screwdriver, streetcar, trolleybus} |
| {ibex, impala, gazelle} | {common iguana, whiptail, agama, frilled lizard, alligator lizard, green lizard, Komodo dragon, African crocodile, American alligator} | {junco, water ouzel, water snake, drake, red-breasted merganser, goose, crayfish, little blue heron, European gallinule, ruddy turnstone, red-backed sandpiper, redshank, dowitcher, oystercatcher, albatross, otter} |
| {indigo bunting, bee eater, hummingbird, jacamar} | {barber chair, barbershop, electric fan, hand blower, iron, rocking chair, table lamp, tricycle, vacuum} | {accordion, acoustic guitar, banjo, bassoon, cornet, drum, drumstick, electric guitar, French horn, maraca, microphone, oboe, sax, stage, torch, trombone, violin} |
+
+Table 5: Confidence sets of ImageNet images with varying ResNet confidence set sizes. The predicted confidence set is shown to the right of the corresponding input image. The true label is shown in red, and the predicted label is shown with a hat.
+
+
+Table 6: Confidence set sizes for ResNet trained on ImageNet, for varying $\epsilon$ , $\delta$ and for $n = 20,000$ . The plots are as in Figure 1 (a).
+
+
+Table 7: Confidence set sizes for a neural network dynamics model trained on the half-cheetah environment, for varying $\epsilon, \delta$ and for $n = 5000$ . The plots are as in Figure 3 (a).
+
+(Plot columns for $\delta = 10^{-1}$, $\delta = 10^{-3}$, and $\delta = 10^{-5}$.)
+Table 8: Confidence set sizes for a neural network dynamics model trained on the half-cheetah environment, for varying $\epsilon, \delta$ and for $n = 5000$ . The plots are as in Figure 3 (b).
+
+
+
+
+
+
+Table 9: Visualization of confidence sets for the tracking dataset (Wu et al., 2013), including the ground truth bounding box (white), the bounding box predicted by the original neural network (Held et al., 2016) (red), and the bounding box produced using our confidence set predictor (green). We have overapproximated the predicted ellipsoid confidence set with a box. Our bounding box contains the ground truth bounding box with high probability.
\ No newline at end of file
diff --git a/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/images.zip b/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1f926b384e243d68dff58b0d7bde94ea2d71d435
--- /dev/null
+++ b/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:39bd67ce27dc7c3860f1bd96cc0722bdf20a74770bbfdb0e6cac67bcb1b89ae2
+size 2503104
diff --git a/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/layout.json b/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8471617a7b1ca2d3a7bd2855f54af1d27618573d
--- /dev/null
+++ b/pacconfidencesetsfordeepneuralnetworksviacalibratedprediction/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0cb2749c552904054349e750acb04bed371fe079c56e14262ff70d2d13889feb
+size 1260764
diff --git a/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/a78796f7-8d32-4963-b3de-3c99f9954cd8_content_list.json b/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/a78796f7-8d32-4963-b3de-3c99f9954cd8_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..59cf5d966bea6000064696fc2855b3ce995cdca3
--- /dev/null
+++ b/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/a78796f7-8d32-4963-b3de-3c99f9954cd8_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bc37b8e08cbdd3ab5d98d55a7a7d2d4a29217c28e1b21d85938380bcf9d0c1b7
+size 90712
diff --git a/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/a78796f7-8d32-4963-b3de-3c99f9954cd8_model.json b/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/a78796f7-8d32-4963-b3de-3c99f9954cd8_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..fb80add993295ecd75e3d5730c82b3395e79bd32
--- /dev/null
+++ b/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/a78796f7-8d32-4963-b3de-3c99f9954cd8_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a2d869ebff46f61ef731dfeac65e7e3783b040252c6df81f54f68e40c40049d0
+size 111322
diff --git a/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/a78796f7-8d32-4963-b3de-3c99f9954cd8_origin.pdf b/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/a78796f7-8d32-4963-b3de-3c99f9954cd8_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2b0cdd2b52c6e98cac31317a69fb96753e031442
--- /dev/null
+++ b/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/a78796f7-8d32-4963-b3de-3c99f9954cd8_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:554fe16467413737e32ed320219a8577199c7757d859cdc4adf5d9e9317946a7
+size 1126048
diff --git a/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/full.md b/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..6dbeb374e501bb2b4b6cc39ebdba9cddd4104dfd
--- /dev/null
+++ b/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/full.md
@@ -0,0 +1,333 @@
+# PADE ACTIVATION UNITS: END-TO-END LEARNING OF FLEXIBLE ACTIVATION FUNCTIONS IN DEEP NETWORKS
+
+Alejandro Molina1, Patrick Schramowski1, Kristian Kersting1,2
+
+$^{1}$ AI and Machine Learning Group, CS Department, TU Darmstadt, Germany
+$^{2}$ Centre for Cognitive Science, TU Darmstadt, Germany
+
+{molina, schramowski, kersting}@cs.tu-darmstadt.de
+
+# ABSTRACT
+
+The performance of deep network learning strongly depends on the choice of the non-linear activation function associated with each neuron. However, deciding on the best activation is non-trivial, and the choice depends on the architecture, hyper-parameters, and even on the dataset. Typically these activations are fixed by hand before training. Here, we demonstrate how to eliminate the reliance on first picking fixed activation functions by using flexible parametric rational functions instead. The resulting Padé Activation Units (PAUs) can both approximate common activation functions and also learn new ones while providing compact representations. Our empirical evidence shows that end-to-end learning of deep networks with PAUs can increase their predictive performance. Moreover, PAUs pave the way to approximations with provable robustness.
+
+https://github.com/ml-research/pau
+
+# 1 INTRODUCTION
+
+An important building block of deep learning is the non-linearity introduced by the activation function $f(x)$ . Activation functions play a major role in the success of training deep neural networks, both in terms of training time and predictive performance. Consider, e.g., the Rectified Linear Unit (ReLU) due to Nair and Hinton (2010). Its demonstrated benefits for training deep networks, see e.g. (Glorot et al., 2011), brought renewed attention to the development of new activation functions. Since then, several ReLU variations with different properties have been introduced, such as LeakyReLUs (Maas et al., 2013), ELUs (Clevert et al., 2016), and RReLUs (Xu et al., 2015), among others. Another line of research, such as (Ramachandran et al., 2018), automatically searches for activation functions; it identified the Swish unit empirically as a good candidate. However, for a given dataset, there is no guarantee that the Swish unit behaves well, and the proposed search algorithm is computationally quite demanding.
+
+Activation functions are traditionally fixed and, in turn, impose a set of inductive biases on the network. One attempt to relax this bias is, for instance, PReLUs (He et al., 2015), where the negative slope is subject to optimization, allowing for more flexibility than other ReLU variants. Learnable activation functions generalize this idea. They exploit parameterizations of the activation functions, adapted in an end-to-end fashion to different network architectures and datasets during training. For instance, Maxout (Goodfellow et al., 2013) and Mixout (Zhao et al., 2017) use a fixed set of piecewise linear components and optimize their (hyper-)parameters. Although they are theoretically universal function approximators, they heavily increase the number of parameters of the network and strongly depend on hyper-parameters such as the number of components to realize this potential. Vercellino and Wang (2017) used a meta-learning approach for learning task-specific activation functions (hyperactivations). However, as Vercellino and Wang admit, the implementation of hyperactivations, while easy to express notationally, can be frustrating to implement for generalizability over any given activation network. Recently, Goyal et al. (2019) proposed a learnable activation function based on Taylor approximation and suggested a transformation strategy to avoid exploding gradients. However, relying on polynomials suffers from well-known limitations such as exploding values and a tendency to oscillate (Trefethen, 2012).
+
+
+Figure 1: Approximations of common activation functions (ReLU, Sigmoid, Tanh, Swish and Leaky ReLU $(\alpha = 0.20)$ ) using PAUs (marked with *). As one can see, PAUs can encode common activation functions very well. (Best viewed in color)
+
+As an alternative, we here introduce a learnable activation function based on the Padé approximation, i.e., on rational functions. In contrast to approximations for high-accuracy hardware implementations of the hyperbolic tangent and sigmoid activation functions (Hajduk, 2018), we do not assume fixed coefficients. The resulting Padé Activation Units (PAUs) can be learned using standard stochastic gradient descent and, hence, be seamlessly integrated into the deep learning stack. PAUs provide more flexibility and increase the predictive performance of deep neural networks, as we demonstrate.
+
+We proceed as follows. We start off by introducing PAUs. Then we introduce Padé networks and show that they are universal approximators. Before concluding, we present our empirical evaluation.
+
+# 2 PADE ACTIVATION UNITS (PAU)
+
+Our starting point is the set of intuitive assumptions activation functions should ultimately fulfill shown in Tab. 1. The assumptions (i,v) concern the ability of neural networks to approximate functions. Rational functions can fulfill assumptions (i,iv), and our experimental evaluation demonstrates that assumptions (ii,iii,v) also hold.
+
+(i) They must allow the networks to be universal function approximators.
+(ii) They should ameliorate gradient vanishing.
+(iii) They should be stable.
+(iv) They should be parsimonious on the number of parameters.
+(v) They should provide networks with high predictive performance.
+
+| Activation | Learnable | i | ii | iii | iv | v |
+| --- | --- | --- | --- | --- | --- | --- |
+| ReLU | N | Y | Y | Y | — | Y |
+| ReLU6 | N | Y | Y | Y | — | Y |
+| RReLU | N | Y | Y | Y | — | Y |
+| LReLU | N | Y | Y | Y | — | Y |
+| ELU | N | Y | Y | Y | — | Y |
+| CELU | N | Y | Y | Y | — | Y |
+| Swish | N | Y | ? | Y | — | Y |
+| PReLU | Y | Y | Y | Y | Y | Y |
+| Maxout | Y | Y | Y | Y | N | Y |
+| Mixture | Y | Y | Y | Y | Y | Y |
+| APL | Y | Y | Y | Y | Y | Y |
+| SReLU | Y | Y | Y | Y | Y | Y |
+| SLAF | Y | Y | Y | N | Y | ? |
+| PAU | Y | Y | Y | Y | Y | Y |
+
+Table 1: (Left) The intuitive assumptions activation functions (AFs) should ultimately fulfill. (Right) Existing ELU-, CELU- and ReLU-like AFs do not fulfill all of them. Only learnable AFs allow one to tune their shape at training time, without fixed hyper-parameters such as $\alpha$ for LReLU. For Swish, our experimental results do not indicate problems with vanishing gradients. SLAF (Goyal et al., 2019) showed undefined values (iii), and we could not judge its performance (v).
+
+# 2.1 PADÉ APPROXIMATION OF ACTIVATION FUNCTIONS
+
+Let us now formally introduce PAUs. Assume for the moment that we start with a fixed activation function $f(x)$ . The Padé approximant (Brezinski and Van Iseghem, 1994) is the "best" approximation of $f(x)$ by a rational function of given orders $m$ and $n$ . Applied to typical activation functions, Fig. 1 shows that they can be approximated well using rational functions.
+
+More precisely, given $f(x)$ , the Padé approximant is the rational function $F(x)$ over polynomials $P(x), Q(x)$ of order $m, n$ of the form
+
+$$
+F (x) = \frac {P (x)}{Q (x)} = \frac {\sum_ {j = 0} ^ {m} a _ {j} x ^ {j}}{1 + \sum_ {k = 1} ^ {n} b _ {k} x ^ {k}} = \frac {a _ {0} + a _ {1} x + a _ {2} x ^ {2} + \cdots + a _ {m} x ^ {m}}{1 + b _ {1} x + b _ {2} x ^ {2} + \cdots + b _ {n} x ^ {n}}, \tag {1}
+$$
+
+which best agrees with $f(x)$ . The Padé approximant often gives a better approximation of a function $f(x)$ than truncating its Taylor series, and it may still work where the Taylor series does not converge. For these reasons, it has been used before in the context of graph convolutional networks (Chen et al., 2018); however, it has not been considered so far for general deep networks. Padé Activation Units (PAUs) go one step further: instead of fixing the coefficients $a_{j}, b_{k}$ to approximate a particular activation function, we allow them to be free parameters that can be optimized end-to-end with the rest of the neural network. This allows the optimization process to find the activation function needed at each layer automatically.
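To make this concrete, here is a small sketch (ours, not from the paper) that builds the $[2/2]$ Padé approximant of $e^x$ from its first five Taylor coefficients by matching the series through $x^4$, and checks that it beats the degree-4 Taylor polynomial built from the same coefficients:

```python
import numpy as np

# Taylor coefficients of e^x around 0: c_k = 1/k!
c = np.array([1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24])

# [2/2] Pade approximant P(x)/Q(x) with Q(0) = 1: matching the series
# through x^4 gives a 2x2 linear system for the denominator b_1, b_2.
M = np.array([[c[2], c[1]],
              [c[3], c[2]]])
b1, b2 = np.linalg.solve(M, [-c[3], -c[4]])
a = [c[0], c[1] + b1 * c[0], c[2] + b1 * c[1] + b2 * c[0]]

def pade22(x):
    return (a[0] + a[1] * x + a[2] * x**2) / (1.0 + b1 * x + b2 * x**2)

x = 1.0
pade_err = abs(pade22(x) - np.e)
taylor_err = abs(np.polyval(c[::-1], x) - np.e)
assert pade_err < taylor_err  # same 5 coefficients, better accuracy
```

With the same number of coefficients, the rational form is noticeably more accurate away from the expansion point, which is exactly the property PAUs exploit.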
+
+The flexibility of Padé is not only a blessing but can also be a curse: it can model processes that contain poles. For a learnable activation function, however, a pole may produce undefined values depending on the input, as well as instabilities at learning and inference time. Therefore we consider a restriction, called safe PAU, that guarantees that the polynomial $Q(x)$ is never 0, i.e., we avoid poles. In general, restricting $Q(x)$ implies that either $Q(x) > 0$ everywhere or $Q(x) < 0$ everywhere, but as $P(x)$ ranges over $\mathbb{R}$ we can focus on $Q(x) > 0$ w.l.o.g. However, as $\lim_{Q(x) \to 0^+} F(x) \to \infty$ , learning and inference become unstable for small $Q(x)$ . To fix this, we impose a stronger constraint, namely $Q(x) \geq q$ for some $q > 0$ bounded away from zero. In this work, $q = 1$ , i.e., $\forall x : Q(x) \geq 1$ , preventing poles and allowing for safe computation on $\mathbb{R}$ :
+
+$$
+F (x) = \frac {P (x)}{Q (x)} = \frac {\sum_ {j = 0} ^ {m} a _ {j} x ^ {j}}{1 + \left| \sum_ {k = 1} ^ {n} b _ {k} x ^ {k} \right|} = \frac {a _ {0} + a _ {1} x + a _ {2} x ^ {2} + \cdots + a _ {m} x ^ {m}}{1 + \left| b _ {1} x + b _ {2} x ^ {2} + \cdots + b _ {n} x ^ {n} \right|}. \tag {2}
+$$
+
+Other values for $q \in (0,1)$ might still be interesting, as they could provide gradient amplification due to the partial derivatives having $Q(X)$ in the denominator. However, we leave this for future work.
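As a concrete sketch of the safe forward pass of Eq. 2 (in NumPy rather than the paper's CUDA implementation; the function name `safe_pau` is ours):

```python
import numpy as np

def safe_pau(x, a, b):
    """Safe PAU of Eq. 2: F(x) = P(x) / (1 + |b_1 x + ... + b_n x^n|).

    a -- numerator coefficients [a_0, ..., a_m]
    b -- denominator coefficients [b_1, ..., b_n]
    """
    x = np.asarray(x, dtype=float)
    P = np.polyval(list(a)[::-1], x)                  # a_0 + a_1 x + ... + a_m x^m
    A = np.polyval(([0.0] + list(b))[::-1], x)        # b_1 x + ... + b_n x^n
    return P / (1.0 + np.abs(A))

# The denominator is always >= 1, so F is finite everywhere on R.
xs = np.linspace(-10, 10, 1001)
ys = safe_pau(xs, a=[0.0, 1.0, 0.3], b=[0.5, -0.2])
assert np.all(np.isfinite(ys))
```

Note that the absolute value in the denominator is what turns the general rational function into a pole-free one; the coefficient values above are arbitrary placeholders, not trained PAU coefficients.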
+
+# 2.2 LEARNING SAFE PADÉ APPROXIMATIONS USING BACK PROPAGATION
+
+In contrast to the standard way of fitting Padé approximants where the coefficients are found via derivatives and algebraic manipulation against a given function, we optimize their polynomials via backpropagation and (stochastic) gradient descent. To do this, we have to compute the gradients with respect to the parameters $\frac{\partial F}{\partial a_j}, \frac{\partial F}{\partial b_k}$ as well as the gradient for the input $\frac{\partial F}{\partial x}$ . A simple alternative is to implement the forward pass as described in Eq. 2, and let automatic differentiation do the job. To be more efficient, however, we can also implement PAUs directly in CUDA (Nickolls et al. (2008)), and for this we need to compute the gradients ourselves:
+
+$$
+\frac {\partial F}{\partial x} = \frac {\partial P (x)}{\partial x} \frac {1}{Q (x)} - \frac {\partial Q (x)}{\partial x} \frac {P (x)}{Q (x) ^ {2}}, \quad \frac {\partial F}{\partial a _ {j}} = \frac {x ^ {j}}{Q (x)} \quad \text {and} \quad \frac {\partial F}{\partial b _ {k}} = - x ^ {k} \frac {A (x)}{| A (x) |} \frac {P (x)}{Q (x) ^ {2}},
+$$
+
+where $\frac{\partial P(x)}{\partial x} = a_1 + 2a_2x + \dots + ma_mx^{m - 1}$ , $\frac{\partial Q(x)}{\partial x} = \frac{A(x)}{|A(x)|} \left( b_{1} + 2b_{2}x + \dots + nb_{n}x^{n - 1} \right)$ , $A(x) = b_{1}x + b_{2}x^{2} + \dots + b_{n}x^{n}$ , and $Q(x) = 1 + |A(x)|$ . Here we reuse these expressions to reduce computation. To avoid divisions by zero when computing the gradients, we define $\frac{z}{|z|}$ as the sign of $z$ . With the gradients at hand, PAUs can be seamlessly placed together with other modules onto the differentiable programming stack.
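The formulas above are easy to sanity-check numerically. The following scalar sketch (function name ours) implements them directly and compares the analytic gradients against central finite differences:

```python
def pau_and_grads(x, a, b):
    """Safe PAU value and analytic gradients for a scalar input x.

    a = [a_0, ..., a_m], b = [b_1, ..., b_n]; z/|z| is taken as sign(z).
    """
    m, n = len(a) - 1, len(b)
    P = sum(a[j] * x ** j for j in range(m + 1))
    A = sum(b[k - 1] * x ** k for k in range(1, n + 1))
    Q = 1.0 + abs(A)
    s = (A > 0) - (A < 0)                        # sign(A)
    dP = sum(j * a[j] * x ** (j - 1) for j in range(1, m + 1))
    dA = sum(k * b[k - 1] * x ** (k - 1) for k in range(1, n + 1))
    F = P / Q
    dF_dx = dP / Q - s * dA * P / Q ** 2         # dP/dx * 1/Q - dQ/dx * P/Q^2
    dF_da = [x ** j / Q for j in range(m + 1)]
    dF_db = [-x ** k * s * P / Q ** 2 for k in range(1, n + 1)]
    return F, dF_dx, dF_da, dF_db

a, b, x0, h = [0.1, 1.0, 0.3], [0.5, -0.2], 0.7, 1e-6
F, dF_dx, _, _ = pau_and_grads(x0, a, b)
numeric = (pau_and_grads(x0 + h, a, b)[0] - pau_and_grads(x0 - h, a, b)[0]) / (2 * h)
assert abs(dF_dx - numeric) < 1e-6
```

The coefficient values are arbitrary placeholders; the check is valid wherever $A(x) \neq 0$, away from the kink introduced by the absolute value.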
+
+# 3 PADÉ NETWORKS
+
+Having PAUs at hand, one can define Padé networks as follows: Padé networks are feedforward networks with PAU activation functions that may include convolutional and residual architectures
+
+with pooling layers. To use Padé networks effectively, one simply replaces the standard activation functions in a neural network by PAUs and then proceeds to optimize all the parameters and use the network as usual. However, even if every PAU contains a low number of parameters (coefficients $a_{j}, b_{k}$ ), in the extreme case, learning one PAU per neuron may considerably increase the complexity of the network and, in turn, the learning time. To ameliorate this, and inspired by the idea of weight-sharing as introduced by Teh and Hinton (2001), we propose to learn one PAU per layer. Therefore we only add $\phi$ many parameters, where $\phi = L \cdot (m + n + 1)$ (the $m + 1$ numerator and $n$ denominator coefficients per layer) and $L$ is the number of activation layers in the network. In our experiments, with $m = 5$ and $n = 4$ , this gives $\phi = 10L$ , a rather small number of parameters (iv).
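Under this per-layer sharing scheme, the overhead is easy to count (the helper name below is ours): each shared PAU of order $(m, n)$ contributes its $m + 1$ numerator and $n$ denominator coefficients.

```python
def pau_param_count(num_activation_layers, m=5, n=4):
    """Extra parameters added by one shared PAU per activation layer:
    a_0..a_m (m + 1 numerator coefficients) plus b_1..b_n (n denominator ones)."""
    return num_activation_layers * ((m + 1) + n)

# Consistent with the 50 and 40 extra PAU parameters reported later for
# VGG-8 and LeNet, implying 5 and 4 activation layers respectively.
assert pau_param_count(5) == 50
assert pau_param_count(4) == 40
```

This is negligible next to the 9.2 million (VGG-8) and 0.5 million (LeNet) weights of the host networks quoted in the experiments.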
+
+The last step missing before we can start the optimization process is to initialize the coefficients of the PAUs. Surely, one can do random initialization of the coefficients and allow the optimizer to train the network end-to-end. However, we obtained better results after initializing all PAUs with coefficients that approximate standard activation functions. For a discussion on how to obtain different PAU coefficients, we refer to Sec. A.1.
+
+Before evaluating Padé Networks empirically, let us touch upon their expressivity and how to sparsify them.
+
+# 3.1 PADÉ NETWORKS ARE UNIVERSAL FUNCTION APPROXIMATORS
+
+A standard multi-layer perceptron (MLP) with enough hidden units and non-polynomial activation functions is a universal approximator, see e.g. (Hornik et al., 1989; Leshno et al., 1993). Padé Networks are also universal approximators.
+
+Theorem 1. Let $\rho \colon \mathbb{R} \to \mathbb{R}$ be a PAU activation function. Let $\mathcal{N}^{\rho}$ represent the class of neural networks with activation function $\rho$ . Let $K \subseteq \mathbb{R}^{n}$ be compact. Then $\mathcal{N}^{\rho}$ is dense in $C(K)$ .
+
+The proof holds for both PAUs and safe PAUs as it makes no assumptions on the form of the denominator $Q(x)$ , and is a direct application of the following propositions:
+
+Proposition 1. (From Theorem 1.1 in (Kidger and Lyons, 2019)) Let $\rho \colon \mathbb{R} \to \mathbb{R}$ be any continuous function. Let $\mathcal{N}_n^\rho$ represent the class of neural networks with activation function $\rho$ , with $n$ neurons in the input layer, one neuron in the output layer, and one hidden layer with an arbitrary number of neurons. Let $K \subseteq \mathbb{R}^n$ be compact. Then $\mathcal{N}_n^\rho$ is dense in $C(K)$ if and only if $\rho$ is non-polynomial.
+
+Proposition 2. (From Theorem 3.2 in (Kidger and Lyons, 2019)) Let $\rho \colon \mathbb{R} \to \mathbb{R}$ be any continuous function which is continuously differentiable at at least one point, with nonzero derivative at that point. Let $K \subseteq \mathbb{R}^n$ be compact. Then $\mathcal{N}\mathcal{N}_{n,m,n+m+2}^\rho$ is dense in $C(K; \mathbb{R}^m)$ .
+
+Proof. Let $\rho(x) = P(x) / Q(x)$ , we have to consider the following two cases:
+
+Case 1: $Q(x) \neq 1$ , by definition, $\rho(x)$ is non-polynomial. Then by proposition 1, we get that $\mathcal{N}_n^\rho$ is dense in $C(K)$ .
+
+Case 2: $Q(x) = 1$ . Here $\rho(x) = \sum_{j=0}^{m} a_{j} x^{j}$ is polynomial, continuous and continuously differentiable on $\mathbb{R}$ . If some coefficient $a_{j}$ with $j > 0$ is nonzero, then there exists a point $\alpha \in \mathbb{R}$ such that $\rho'(\alpha) \neq 0$ , and by Proposition 2 we get that $\mathcal{N}\mathcal{N}_{n,m,n+m+2}^{\rho}$ is dense in $C(K; \mathbb{R}^{m})$ .
+
+
+
+# 3.2 SPARSE PADÉ NETWORKS AND RANDOMIZED PAUS
+
+Padé Networks can $\epsilon$ -approximate neural networks with ReLU activations (Telgarsky, 2017). This implies that by using PAUs, we are embedding a virtual network into the networks we want to use. This, in turn, is the operating assumption of the lottery ticket hypothesis due to Frankle and Carbin (2019). Thus, we expect that lottery ticket pruning can find well-performing Padé networks that are smaller than their original counterparts while reducing inference time and potentially improving the predictive performance.
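The actual pruning algorithm and hyper-parameters used in our experiments are given in Sec. A.4.3. As a generic illustration of magnitude-based, lottery-ticket-style pruning (the function name is ours), one can zero out the smallest-magnitude fraction of a weight tensor:

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """One-shot magnitude pruning: zero out (at least) the smallest-magnitude
    `fraction` of entries, keeping the rest unchanged."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * fraction)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    return np.where(np.abs(weights) > threshold, weights, 0.0)

w = np.array([[1.0, -0.1], [0.5, 2.0]])
pruned = magnitude_prune(w, 0.5)   # drops the 0.1 and 0.5 entries
assert np.count_nonzero(pruned) == 2
```

In lottery-ticket training the surviving weights are then rewound and retrained; the sketch above only shows the masking step applied per layer.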
+
+Generally, overfitting is an important practical concern when training neural networks. Usually, we can apply regularization techniques such as Dropout (Srivastava et al., 2014). Unfortunately, although each PAU approximates a small ReLU network section, we do not have access to the internal representation of this virtual network. Therefore, we cannot regularize the activation function via
+
+Figure 2: PAU compared to baseline activation function units over 5 runs on Fashion-MNIST using the LeNet architecture: (left) mean test-accuracy (the higher, the better) and (right) mean train-loss (the lower, the better). As one can see, PAU outperforms all baseline activations and enables the networks to achieve a lower training loss than all baselines. (Best viewed in color)
+
+standard Dropout. An alternative for regularizing activation functions was introduced in Randomized Leaky ReLUs (RReLUs, Xu et al. (2015)), where the negative slope parameter is sampled uniformly on a range. This makes the activation function behave differently at training time for every input $x$ , forwarding and backpropagating according to $x$ and the sampled noise.
+
+We can employ a similar technique to make PAUs resistant to overfitting. Consider a PAU with coefficients $\mathbf{C} = \{a_0, \dots, a_m, b_1, \dots, b_n\}$ . We can introduce additive noise during training into each coefficient $c_i \in \mathbf{C}$ for every input $x_j$ via $c_{i,j} = c_i + z_{i,j}$ , where $z_{i,j} \sim U(l_i, u_i)$ , $l_i = (1 - \alpha\%) \cdot c_i$ and, correspondingly, $u_i = (1 + \alpha\%) \cdot c_i$ . This results in the Randomized PAU (RPAU):
+
+$$
+R \left(x _ {j}\right) = \frac {c _ {0, j} + c _ {1, j} x _ {j} + c _ {2, j} x _ {j} ^ {2} + \cdots + c _ {m, j} x _ {j} ^ {m}}{1 + \left| c _ {m + 1, j} x _ {j} + c _ {m + 2, j} x _ {j} ^ {2} + \cdots + c _ {m + n, j} x _ {j} ^ {n} \right|}. \tag {3}
+$$
+
+We compute the gradients as before and simply replace the coefficients by their noisy counterparts.
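A minimal sketch of the coefficient-noise scheme (stdlib only; names ours): each coefficient is perturbed uniformly within $\pm\alpha\%$ of its value, independently for every training input.

```python
import random

def rpau_noisy_coeffs(coeffs, alpha=0.1, rng=random):
    """Sample c_{i,j} = c_i + z_{i,j} with z_{i,j} ~ U(l_i, u_i),
    l_i = (1 - alpha) * c_i and u_i = (1 + alpha) * c_i (training time only)."""
    noisy = []
    for c in coeffs:
        lo, hi = (1 - alpha) * c, (1 + alpha) * c
        if lo > hi:                    # a negative c_i flips the interval
            lo, hi = hi, lo
        noisy.append(rng.uniform(lo, hi))
    return noisy

coeffs = [0.3, -1.2, 0.0, 2.5]
noisy = rpau_noisy_coeffs(coeffs, alpha=0.1)
# every sampled coefficient stays within 10% of its original value
assert all(abs(n - c) <= 0.1 * abs(c) + 1e-12 for n, c in zip(noisy, coeffs))
```

At inference time the noise is dropped and the plain coefficients $c_i$ are used, mirroring how RReLU and Dropout behave at test time.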
+
+# 4 EXPERIMENTAL EVALUATION
+
+Our intention here is to investigate the behavior and performance of PAUs as well as to compare them to other activation functions using standard deep neural networks. All our experiments are implemented in PyTorch with PAU implemented in CUDA, and were executed on an NVIDIA DGX-2 system. In all experiments, we initialized PAUs with coefficients that approximate LeakyReLUs for a rational function of order $m = 5$ , $n = 4$ . In all experiments except for ImageNet, we report the mean test-set accuracy after training over 5 runs initialized with different seeds. We compared PAU to the following activation functions: ReLU, ReLU6, Leaky ReLU (LReLU), Random ReLU (RReLU), ELU, CELU, Swish, Parametric ReLU (PReLU), Maxout, Mixture of activations (Mixture), SLAF, APL and SReLU. For details on all the activation functions, we refer to Appendix A.2. As datasets, we considered MNIST, Fashion-MNIST, CIFAR-10 and ImageNet.
+
+# 4.1 EMPIRICAL RESULTS ON MNIST AND FASHION-MNIST BENCHMARKS
+
+First we evaluated PAUs on MNIST (LeCun et al., 2010) and Fashion-MNIST (Xiao et al., 2017) using two different architectures: LeNet (LeCun et al., 1998) and VGG-8 (Simonyan and Zisserman, 2015). For more details on the architectures, learning settings, and results, we refer to Sec. A.3.
+
+As can be seen in Fig. 2 and Tab. 2, PAU outperformed the baseline activation functions on average on every network in terms of predictive performance. Moreover, the results are stable across different runs (cf. mean ± std). PAUs also enable the networks to achieve a lower loss during training compared to all baselines on all networks. Indeed, PAU achieved the best results on both datasets, and on Fashion-MNIST it provides the best results for both architectures. As expected, reducing the bias is beneficial in this experiment. Comparing the baseline activation functions across the MNIST dataset and the different architectures, there is no clear choice of activation that achieves the best performance. However, PAU always matches or even outperforms the best performing baseline activation function. This shows that a learnable activation function relieves the network designer of having to commit to a
+
+Table 2: Performance comparison of activation functions on MNIST and Fashion-MNIST (the higher, the better) on two common deep architectures. Shown are the results averaged over 5 reruns as well as the top result among these 5 runs. The best ("●") and runner-up ("○") results per architecture are bold. As one can see, PAUs consistently outperform the other activation functions on average and yield the top performance on each dataset.
+
+| Activation | VGG-8 (MNIST) mean ± std | VGG-8 (MNIST) best | LeNet (MNIST) mean ± std | LeNet (MNIST) best | VGG-8 (Fashion-MNIST) mean ± std | VGG-8 (Fashion-MNIST) best | LeNet (Fashion-MNIST) mean ± std | LeNet (Fashion-MNIST) best |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ReLU | 99.17 ± 0.10 | 99.30 | 99.17 ± 0.05 | 99.25 | 89.11 ± 0.43 | 89.69 | 89.86 ± 0.32 | 90.48 |
+| ReLU6 | 99.28 ± 0.04 | 99.31 | 99.09 ± 0.09 | 99.22 | 89.87 ± 0.62 | 90.38 | 89.74 ± 0.27 | 89.96 |
+| LReLU | 99.13 ± 0.11 | 99.27 | 99.10 ± 0.06 | 99.22 | 89.37 ± 0.30 | 89.74 | 89.74 ± 0.24 | 90.02 |
+| RReLU | 99.16 ± 0.13 | 99.28 | 99.20 ± 0.13 | ○99.38 | 88.46 ± 0.85 | 89.32 | 89.74 ± 0.19 | 89.88 |
+| ELU | 99.15 ± 0.09 | 99.28 | 99.15 ± 0.06 | 99.22 | 89.65 ± 0.33 | 90.06 | 89.84 ± 0.47 | 90.25 |
+| CELU | 99.15 ± 0.09 | 99.28 | 99.15 ± 0.06 | 99.22 | 89.65 ± 0.33 | 90.06 | 89.84 ± 0.47 | 90.25 |
+| Swish | 99.10 ± 0.06 | 99.20 | 99.19 ± 0.09 | 99.29 | 88.54 ± 0.59 | 89.36 | 89.54 ± 0.22 | 89.89 |
+| PReLU | 99.16 ± 0.09 | 99.25 | 99.14 ± 0.09 | 99.24 | 88.82 ± 0.51 | 89.54 | 90.09 ± 0.22 | ○90.29 |
+| SLAF | — | — | — | — | 90.60 ± 0.00 | 90.60 | 89.33 ± 0.28 | 89.80 |
+| APL | ○99.35 ± 0.11 | ○99.50 | 99.18 ± 0.10 | 99.33 | ○91.41 ± 0.48 | ○92.25 | 89.72 ± 0.30 | 90.01 |
+| SReLU | 99.15 ± 0.03 | 99.20 | 99.13 ± 0.14 | 99.27 | 89.65 ± 0.42 | 90.31 | 89.83 ± 0.30 | 90.28 |
+| PAU | 99.30 ± 0.05 | ○99.40 | ○99.21 ± 0.04 | 99.26 | ○91.25 ± 0.18 | ○91.56 | ○90.33 ± 0.15 | ○90.62 |
+| RPAU | ○99.35 ± 0.04 | ○99.38 | ○99.26 ± 0.11 | ○99.42 | 91.23 ± 0.15 | 91.41 | ○90.20 ± 0.11 | ○90.29 |
+
+potentially suboptimal choice. Moreover, Fig. 2 also shows that PAU is more stable than SLAF. This is not unexpected as Taylor approximations tend to oscillate and overshoot (Trefethen, 2012). We also observed undefined values at training time for SLAF; therefore, we do not compare against it in the following experiments. Finally, when considering the number of parameters used by PAU, we can see that they are very efficient. The VGG-8 network uses 9.2 million parameters, PAU here uses 50 parameters, and for LeNet, the network uses 0.5 million parameters while PAU uses only 40.
+
+In summary, this shows that PAUs are stable, parsimonious and can improve the predictive performance of deep neural networks (iii,iv,v).
+
+# 4.2 LEARNED ACTIVATION FUNCTIONS ON MNIST AND FASHION-MNIST
+
+When looking at the activation functions learned from the data, we can see that the PAU family is flexible yet presents similarities to standard functions. In particular, Fig. 3 illustrates that some of the learned activations seem to be smoothed versions of Leaky ReLUs, since V-shaped activations are simply Leaky ReLUs with negative $\alpha$ values. This is not surprising, as we initialize PAUs with coefficients that match Leaky ReLUs. Finding different initialization and optimization parameters is left as future work. In contrast, learning piecewise approximations of the same activations using Maxout would require a high $k$ , which significantly increases the number of parameters of the network. This again provides more evidence in favor of PAUs being flexible and parsimonious (iv). SLAF produced undefined values during training on all networks except on Fashion-MNIST, where LeNet finished 4 runs and VGG-8 only one.
+
+# 4.3 EMPIRICAL RESULTS ON CIFAR-10
+
+After investigating PAU on MNIST and Fashion-MNIST, we considered a more challenging setting: CIFAR-10 (Krizhevsky et al. (2009)). We also considered other learnable activation functions, namely Maxout $(k = 2)$ and Mixture of activations (Id and ReLU), as well as other popular deep network architectures: MobileNetV2 (Sandler et al., 2018), ResNet101 (He et al., 2016) and DenseNet121 (Huang et al., 2017). For more details on the learning settings and results, we refer to Sec. A.4.
+
+Let us start by considering the results for VGG-8 and MobileNetV2 on CIFAR-10. These networks are the smallest of this round of experiments and, therefore, could benefit more from bias reduction. Indeed, we can see in Tab. 3 that both networks take advantage of learnable activation functions, i.e., Maxout, PAU, and RPAU. As expected, adding more capacity to VGG-8 helps and this is what Maxout is doing. Moreover, even if Mixtures do not seem to provide a significant benefit on VGG-8,
+
+Table 3: Performance comparison of activation functions on CIFAR-10 (the higher, the better) on four state-of-the-art deep neural architectures. Shown are the results averaged over 5 reruns as well as the top result among these 5 runs. The best ("●") and runner-up ("○") results per architecture are bold. As one can see, PAUs are either in the lead or close to the best. ("***") are experiments that did not finish on time.
+
+| Activation | VGG-8 mean ± std | VGG-8 best | MobileNetV2 mean ± std | MobileNetV2 best | ResNet101 mean ± std | ResNet101 best | DenseNet121 mean ± std | DenseNet121 best |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ReLU | 92.32 ± 0.16 | 92.58 | 91.51 ± 0.28 | 91.82 | 95.07 ± 0.17 | 95.36 | 95.36 ± 0.18 | 95.63 |
+| ReLU6 | 92.36 ± 0.06 | 92.47 | 91.30 ± 0.23 | 91.57 | 95.11 ± 0.24 | 95.29 | 95.33 ± 0.14 | 95.46 |
+| LReLU | 92.43 ± 0.14 | 92.65 | 91.94 ± 0.12 | 92.08 | 95.08 ± 0.19 | 95.29 | 95.42 ± 0.17 | 95.65 |
+| RReLU | 92.32 ± 0.07 | 92.42 | ○94.66 ± 0.16 | ○94.94 | 95.21 ± 0.23 | ○95.51 | 95.00 ± 0.12 | 95.14 |
+| ELU | 91.24 ± 0.09 | 91.33 | 90.43 ± 0.14 | 90.61 | 94.04 ± 0.14 | 94.24 | 90.78 ± 0.29 | 91.23 |
+| CELU | 91.24 ± 0.09 | 91.33 | 90.69 ± 0.27 | 90.97 | 93.80 ± 0.36 | 94.25 | 90.88 ± 0.19 | 91.08 |
+| PReLU | 92.22 ± 0.26 | 92.51 | 93.54 ± 0.45 | 93.95 | 94.15 ± 0.39 | 94.50 | 94.98 ± 0.16 | 95.15 |
+| Swish | 91.58 ± 0.18 | 91.86 | 92.04 ± 0.13 | 92.21 | 91.83 ± 1.61 | 92.84 | 93.04 ± 0.16 | 93.32 |
+| Maxout | ○93.03 ± 0.11 | ○93.23 | 94.41 ± 0.10 | 94.54 | 95.11 ± 0.13 | 95.23 | *** | *** |
+| Mixture | 91.86 ± 0.14 | 92.06 | 94.06 ± 0.16 | 94.25 | 94.50 ± 0.25 | 94.71 | 93.33 ± 0.17 | 93.59 |
+| APL | 91.63 ± 0.13 | 91.82 | 93.62 ± 0.64 | 94.50 | 94.12 ± 0.36 | 94.50 | 94.45 ± 0.23 | 94.78 |
+| SReLU | ○92.66 ± 0.27 | ○93.13 | 94.03 ± 0.11 | 94.25 | ○95.24 ± 0.13 | 95.38 | 94.77 ± 0.24 | 95.20 |
+| PAU | 92.51 ± 0.16 | 92.70 | 94.57 ± 0.21 | 94.90 | 95.16 ± 0.13 | 95.28 | 95.03 ± 0.07 | 95.16 |
+| RPAU | 92.50 ± 0.09 | 92.62 | ○94.82 ± 0.21 | ○95.13 | ○95.34 ± 0.13 | ○95.54 | 95.27 ± 0.10 | 95.41 |
+
+they do help on MobileNetV2. Here, we see again that PAU and RPAU are either in the lead or close to the best when it comes to predictive performance, without having to make a choice a priori.
+
+Now, let us have a look at the performance of PAU and RPAU on the larger networks ResNet101 and DenseNet121. As these networks are so expressive, we do not expect the flexibility of the learnable activation functions to have a big impact on the performance. Tab. 3 confirms this. Nevertheless, they are still competitive and their performance is stable as shown by the standard deviation. On ResNet101, PAUs actually provided the top performance.
+
+# 4.4 FINDING SPARSE PADÉ NETWORKS
+
+As discussed in Sec. 3.2, using PAUs in a network is equivalent to introducing virtual networks of ReLUs in the middle of the network, effectively adding virtual depth to the networks. Therefore, we also investigated whether pruning can help one to unmask smaller sub-networks whose performance is similar to the original network. In a sense, we are removing blocks of the real network as they get replaced by the virtual network. Here, we only do pruning on the convolutional layers. For details about the algorithm and hyper-parameters, we refer to Sec. A.4.3.
+
+Specifically, we compared PAU against the best activation functions for the different architectures. However, we discarded Maxout, as instead of pruning it introduces many more parameters into
+
+
+Figure 3: Estimated activation functions after training the VGG-8 network with RPAU on Fashion-MNIST. The center line indicates the PAU, while the surrounding area indicates the space of the additive noise in RPAUs. As one can see, the PAU family differs from common activation functions but captures characteristics of them. (Best viewed in color)
+
+
+Figure 4: Comparison of the predictive accuracy (higher is better) between PAU and the best activation functions according to Tab. 3 for the architectures VGG-8, MobileNetV2 and ResNet101. PAU is consistently better. On ResNet101, PAU is not affected by the increased pruning pressure. Furthermore, PAU enables the ResNet101 subnetwork, pruned by $30\%$ , to achieve a higher accuracy than all pruned and unpruned networks. (Best viewed in color)
+
+
+the network, defeating the original purpose. As one can see in Fig. 4, pruning the already size-optimized networks VGG-8 and MobileNetV2 affects the predictive performance. However, PAU remains above the other activation functions despite the increasing pruning pressure. In contrast, on ResNet101 the performance of PAU is not influenced by pruning, showing that we can indeed find sparse Padé networks without a major loss in accuracy. What is more, PAU enables the ResNet101 subnetwork, pruned by $30\%$ , to achieve a higher accuracy than all pruned and unpruned networks.
+
+# 4.5 EMPIRICAL RESULTS ON IMAGENET
+
+Finally, we investigated the performance on a much larger dataset, namely ImageNet (Russakovsky et al., 2015), on which we train MobileNetV2. As can be seen in Fig. 5 and Tab. 4, PAU and Swish clearly dominate in performance (v): PAU leads in top-1 accuracy and Swish in top-5 accuracy. Moreover, both PAU and Swish learn faster than the other activation functions.
+
+Furthermore, we argue that the rapid learning of PAU in all the experiments indicates that it does not exhibit vanishing gradient issues (ii).
+
+
+Figure 5: MobileNetV2 top-1 test accuracy on the left (higher is better) and training loss on the right (lower is better) for multiple activation functions in ImageNet. PAU achieves higher accuracy and lower loss values in fewer epochs. (Best viewed in color)
+
+| MobileNetV2 | ReLU | ReLU6 | LReLU | RReLU | ELU | CELU | PReLU | Swish | SReLU | PAU |
| Acc@1 | 69.65 | 69.83 | 70.03 | 69.12 | 69.13 | 69.17 | 68.61 | ○71.24 | 70.62 | ●71.35 |
| Acc@5 | 89.09 | 89.34 | 89.26 | 88.80 | 88.46 | 88.59 | 88.51 | ●89.95 | 89.59 | ○89.85 |
+
+# 4.6 SUMMARIZED RESULTS
+
+We now compare the PAU family of activation functions to all the other activation functions, aggregating the number of occurrences where PAU performed better or worse in comparison. The aggregate results, over all experiments on all datasets and all architectures, can be found in Tab. 5. As we can see, the PAU family is very competitive.
+
+Table 4: MobileNetV2 top-1 and top-5 accuracies in ImageNet (higher is better) for different activations. Best ("●") and runner-up ("○") are shown in **bold**. PAU is the best in top-1 accuracy and runner-up for top-5.
+
+| Baselines | ReLU | ReLU6 | LReLU | RReLU | ELU | CELU | PReLU | Swish | Maxout | Mixture | APL | SReLU |
| PAU/RPAU >= Baseline | 34 | 35 | 34 | 33 | 40 | 40 | 39 | 41 | 9 | 20 | 32 | 33 |
| PAU/RPAU < Baseline | 8 | 7 | 8 | 9 | 2 | 2 | 3 | 1 | 6 | 0 | 7 | 8 |
+
+Table 5: The number of models on which PAU and RPAU outperforms or underperforms each baseline activation function we compared against in our experiments.
+
+To summarize, PAUs satisfy all the assumptions (i-v). They allow networks to be universal function approximators (i), as shown by Theorem 1. They exhibit fast and stable learning behavior (ii, iii), as shown in Figs. 5, 6, 7 and 8. The number of parameters introduced by PAUs is minimal in comparison to the size of the networks; in our experiments, we add 10 parameters per layer, showing that PAUs are parsimonious (iv). Finally, they allow deep neural networks to provide high predictive performance (v), as shown in Tab. 5.
+
+# 5 CONCLUSIONS
+
+We have presented a novel learnable activation function, called Padé Activation Unit (PAU). PAUs encode activation functions as rational functions, trainable in an end-to-end fashion using backpropagation. This makes it easy for practitioners to replace standard activation functions with PAU units in any neural network. The results of our empirical evaluation for image classification demonstrate that PAUs can indeed learn new activation functions and in turn novel neural networks that are competitive with state-of-the-art networks with fixed and learned activation functions. In fact, across all activation functions and architectures, Padé networks are among the top performing networks. This clearly shows that the reliance on first picking fixed, hand-engineered activation functions can be eliminated and that learning activation functions is actually beneficial and simple. Moreover, our results provide the first empirical evidence that the open question "Can rational functions be used to design algorithms for training neural networks?" raised by Telgarsky (2017) can be answered affirmatively for common deep architectures.
+
+Our work provides several interesting avenues for future work. One should explore further the space between safe and unsafe PAUs in order to gain even more predictive power. Most interestingly, since Padé networks can be reduced to ReLU networks, one should explore globally optimal training (Arora et al., 2018) as well as provable robustness (Croce et al., 2019) of Padé approximations of general deep networks.
+
+Acknowledgments. PS and KK were supported by funds of the German Federal Ministry of Food and Agriculture (BMEL) based on a decision of the Parliament of the Federal Republic of Germany via the Federal Office for Agriculture and Food (BLE) under the innovation support program, project "DePhenS" (FKZ 2818204715).
+
+# REFERENCES
+
+F. Agostinelli, M. D. Hoffman, P. J. Sadowski, and P. Baldi. Learning activation functions to improve deep neural networks. In Workshop Track Proceedings of the International Conference on Learning Representations, 2015.
+R. Arora, A. Basu, P. Mianjy, and A. Mukherjee. Understanding deep neural networks with rectified linear units. In International Conference on Learning Representations, 2018.
+J. T. Barron. Continuously differentiable exponential linear units. arXiv preprint arXiv:1704.07483, 2017.
+C. Brezinski and J. Van Iseghem. Padé approximations. Handbook of numerical analysis, 3:47-222, 1994.
+Z. Chen, F. Chen, R. Lai, X. Zhang, and C.-T. Lu. Rational neural networks for approximating graph convolution operator on jump discontinuities. In 2018 IEEE International Conference on Data Mining (ICDM). IEEE, 2018.
+D. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). In 4th International Conference on Learning Representations (ICLR), 2016.
+F. Croce, M. Andriushchenko, and M. Hein. Provable robustness of relu networks via maximization of linear regions. In The 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), pages 2057-2066, 2019.
+J. Frankle and M. Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In ICLR, 2019.
+X. Glorot, A. Bordes, and Y. Bengio. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS), pages 315-323, 2011.
+I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. C. Courville, and Y. Bengio. Maxout networks. In Proceedings of the 30th International Conference on Machine Learning (ICML), pages 1319-1327, 2013.
+M. Goyal, R. Goyal, and B. Lall. Learning activation functions: A new paradigm of understanding neural networks. arXiv preprint arXiv:1906.09529, 2019.
+Z. Hajduk. Hardware implementation of hyperbolic tangent and sigmoid activation functions. Bulletin of the Polish Academy of Sciences. Technical Sciences, 66(5), 2018.
+K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026-1034, 2015.
+K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
+K. Hornik, M. B. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359-366, 1989.
+G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700-4708, 2017.
+X. Jin, C. Xu, J. Feng, Y. Wei, J. Xiong, and S. Yan. Deep learning with s-shaped rectified linear activation units. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2016.
+P. Kidger and T. Lyons. Universal approximation with deep narrow networks. arXiv preprint arXiv:1905.08539, 2019.
+D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, (ICLR), 2015.
+
+A. Krizhevsky and G. Hinton. Convolutional deep belief networks on cifar-10. 2010.
+A. Krizhevsky, G. Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Computer Science Department, University of Toronto, 2009.
+Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of IEEE, 86(11):2278-2324, 1998.
+Y. LeCun, C. Cortes, and C. Burges. MNIST handwritten digit database. AT&T Labs, 2010.
+M. Leshno, V. Y. Lin, A. Pinkus, and S. Schocken. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Networks, 6(6):861-867, 1993.
+A. L. Maas, A. Y. Hannun, and A. Y. Ng. Rectifier nonlinearities improve neural network acoustic models. In in ICML Workshop on Deep Learning for Audio, Speech and Language Processing, 2013.
+F. Manessi and A. Rozza. Learning combinations of activation functions. In 2018 24th International Conference on Pattern Recognition (ICPR), pages 61-66. IEEE, 2018.
+V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807-814, 2010.
+J. Nickolls, I. Buck, and M. Garland. Scalable parallel programming. In 2008 IEEE Hot Chips 20 Symposium (HCS), pages 40-53. IEEE, 2008.
+N. Qian. On the momentum term in gradient descent learning algorithms. Neural networks, 12(1): 145-151, 1999.
+P. Ramachandran, B. Zoph, and Q. V. Le. Searching for activation functions. In Proceedings of the Workshop Track of the 6th International Conference on Learning Representations (ICLR), 2018.
+O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. S. Bernstein, A. C. Berg, and F. Li. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.
+M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510-4520, 2018.
+K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In Proceedings of the 3rd International Conference on Learning Representations(ICLR), 2015.
+N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1): 1929-1958, 2014.
+Y. W. Teh and G. E. Hinton. Rate-coded restricted boltzmann machines for face recognition. In Proceedings of Neural Information Processing Systems (NIPS), pages 908-914, 2001.
+M. Telgarsky. Neural networks and rational functions. In Proceedings of the 34th International Conference on Machine Learning (ICML), pages 3387-3393, 2017.
+L. N. Trefethen. Approximation Theory and Approximation Practice. SIAM, 2012. ISBN 978-1-611-97239-9.
+C. J. Vercellino and W. Y. Wang. Hyperactivations for activation function exploration. In 31st Conference on Neural Information Processing Systems (NIPS 2017), Workshop on Meta-learning. Long Beach, USA, 2017.
+H. Xiao, K. Rasul, and R. Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. CoRR, 2017.
+
+B. Xu, N. Wang, T. Chen, and M. Li. Empirical evaluation of rectified activations in convolutional network. CoRR, 2015.
+H.-Z. Zhao, F.-X. Liu, and L.-Y. Li. Improving deep convolutional neural networks with mixed maxout units. PloS one, 12(7), 2017.
+
+# A APPENDIX
+
+# A.1 INITIALIZATION COEFFICIENTS
+
+As shown in Table 6, we compute initial coefficients for PAU approximations to different known activation functions. We predefined the orders to be [5,4], and for Sigmoid, Tanh and Swish we computed the Padé approximant using standard techniques. For the different variants of PReLU, Leaky ReLU and ReLU we optimized the coefficients using least squares over the range [-3,3] in steps of 0.000001.
+
+| | Sigmoid | Tanh | Swish | ReLU | LReLU(0.01) | LReLU(0.20) | LReLU(0.25) | LReLU(0.30) | LReLU(-0.5) |
| a0 | 1/2 | 0 | 0 | 0.02996348 | 0.02979246 | 0.02557776 | 0.02423485 | 0.02282366 | 0.02650441 |
| a1 | 1/4 | 1 | 1/2 | 0.61690165 | 0.61837738 | 0.66182815 | 0.67709718 | 0.69358438 | 0.80772912 |
| a2 | 1/18 | 0 | b/4 | 2.37539147 | 2.32335207 | 1.58182975 | 1.43858363 | 1.30847432 | 13.56611639 |
| a3 | 1/144 | 1/9 | 3b2/56 | 3.06608078 | 3.05202660 | 2.94478759 | 2.95497990 | 2.97681599 | 7.00217900 |
| a4 | 1/2016 | 0 | b3/168 | 1.52474449 | 1.48548002 | 0.95287794 | 0.85679722 | 0.77165297 | 11.61477781 |
| a5 | 1/60480 | 1/945 | b4/3360 | 0.25281987 | 0.25103717 | 0.23319681 | 0.23229612 | 0.23252265 | 0.68720375 |
| b1 | 0 | 0 | 0 | 1.19160814 | 1.14201226 | 0.50962605 | 0.41014746 | 0.32849543 | 13.70648993 |
| b2 | 1/9 | 4/9 | 3b2/28 | 4.40811795 | 4.39322834 | 4.18376890 | 4.14691964 | 4.11557902 | 6.07781733 |
| b3 | 0 | 0 | 0 | 0.91111034 | 0.87154450 | 0.37832090 | 0.30292546 | 0.24155603 | 12.32535229 |
| b4 | 1/1008 | 1/63 | b4/1680 | 0.34885983 | 0.34720652 | 0.32407314 | 0.32002850 | 0.31659365 | 0.54006880 |
+
+Table 6: Initial coefficients to approximate different activation functions.
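
As a sanity check on the sigmoid column, the exact fractions can be evaluated directly; a minimal plain-Python sketch (the function names are ours; note that consistency with $a_4 = b_4 / 2 = 1/2016$ fixes the last denominator coefficient at $b_4 = 1/1008$):

```python
import math

# Pade [5/4] coefficients for the sigmoid initialization from Table 6,
# numerator a5..a0 (high to low) and denominator b4..b0 with b0 = 1.
A = [1 / 60480, 1 / 2016, 1 / 144, 1 / 18, 1 / 4, 1 / 2]
B = [1 / 1008, 0.0, 1 / 9, 0.0, 1.0]

def pade(x, num, den):
    """Evaluate P(x)/Q(x) with Horner's scheme (coefficients high to low)."""
    p = 0.0
    for c in num:
        p = p * x + c
    q = 0.0
    for c in den:
        q = q * x + c
    return p / q

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Maximum deviation from the true sigmoid over the range [-3, 3].
err = max(abs(pade(x / 100, A, B) - sigmoid(x / 100)) for x in range(-300, 301))
```

The approximation is essentially exact near the origin and stays within about $10^{-5}$ of the sigmoid at the edges of the range, which is why these coefficients serve well as an initialization.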
+
+# A.2 LIST OF ACTIVATION FUNCTIONS
+
+For our experiments, we compare against the following activation functions with their respective parameters.
+
+- ReLU (Nair and Hinton, 2010): $y = \max(x, 0)$
+- ReLU6 (Krizhevsky and Hinton, 2010): $y = \min \left( \max \left( x, 0 \right), 6 \right)$ , a variation of ReLU with an upper bound.
+- Leaky ReLU (Maas et al., 2013): $y = \max(0, x) + \alpha * \min(0, x)$ with the negative slope, which is defined by the parameter $\alpha$ . Leaky ReLU enables a small amount of information to flow when $x < 0$ .
+- Random ReLU (Xu et al., 2015): a randomized variation of Leaky ReLU.
+- ELU (Clevert et al., 2016): $y = \max(0, x) + \min(0, \alpha * (\exp(x) - 1))$ .
+- CELU (Barron, 2017): $y = \max(0, x) + \min(0, \alpha * (\exp(x / \alpha) - 1))$ .
+- Swish (Ramachandran et al., 2018): $y = x * \text{sigmoid}(x)$ , which tends to work better than ReLU on deeper models across a number of challenging datasets.
+- Parametric ReLU (PReLU) (He et al., 2015) $y = \max(0, x) + \alpha * \min(0, x)$ , where the leaky parameter $\alpha$ is a learn-able parameter of the network.
+- Maxout (Goodfellow et al., 2013): $y = \max(z_{ij})$ , where $z_{ij} = x^T W_{\cdots ij} + b_{ij}$ , and $W \in \mathbb{R}^{d \times m \times k}$ and $b \in \mathbb{R}^{m \times k}$ are learned parameters.
+- Mixture of activations (Manessi and Rozza, 2018): a combination of weighted activation functions e.g. {id, ReLU}, where the weight is a learnable parameter of the network.
+- SLAF (Goyal et al., 2019): a learnable activation function based on a Taylor approximation.
+- APL (Agostinelli et al., 2015): a learnable piecewise linear activation function.
+- SReLU (Jin et al., 2016): a learnable S-shaped rectified linear activation function.
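
The fixed activation functions above have simple closed forms; a scalar Python sketch for reference (the $\alpha$ defaults below are illustrative choices, not the values used in the experiments):

```python
import math

def relu(x):
    return max(x, 0.0)

def relu6(x):
    return min(max(x, 0.0), 6.0)          # ReLU with an upper bound of 6

def leaky_relu(x, alpha=0.01):
    return max(0.0, x) + alpha * min(0.0, x)

def elu(x, alpha=1.0):
    return max(0.0, x) + min(0.0, alpha * (math.exp(x) - 1.0))

def celu(x, alpha=1.0):
    return max(0.0, x) + min(0.0, alpha * (math.exp(x / alpha) - 1.0))

def swish(x):
    return x / (1.0 + math.exp(-x))        # x * sigmoid(x)
```

For $\alpha = 1$, CELU coincides with ELU; the two only differ for other scale parameters.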
+
+# A.3 DETAILS OF THE MNIST AND FASHION-MNIST EXPERIMENT
+
+# A.3.1 NETWORK ARCHITECTURES
+
+Here we describe the architectures of the VGG and LeNet networks, along with the number of trainable parameters. The number of parameters of the activation function is reported for PAU; common non-trainable activation functions have no trainable parameters, and PReLU has one
+
+trainable parameter. In total, the VGG network has 9,224,508 parameters, 50 of which belong to PAUs, and the LeNet network has 61,746 parameters, 40 of which belong to PAUs.
+
+| No. | VGG | # params | LeNet | # params |
| 1 | Convolutional 3x3x64 | 640 | Convolutional 5x5x6 | 156 |
| 2 | Activation | 10 | Activation | 10 |
| 3 | Max-Pooling | 0 | Max-Pooling | 0 |
| 4 | Convolutional 3x3x128 | 73856 | Convolutional 5x5x16 | 2416 |
| 5 | Activation | 10 | Activation | 10 |
| 6 | Max-Pooling | 0 | Max-Pooling | 0 |
| 7 | Convolutional 3x3x256 | 295168 | Convolutional 5x5x120 | 48120 |
| 8 | Convolutional 3x3x256 | 590080 | Activation | 10 |
| 9 | Activation | 10 | Linear 84 | 10164 |
| 10 | Max-Pooling | 0 | Activation | 10 |
| 11 | Convolutional 3x3x512 | 1180160 | Linear 10 | 850 |
| 12 | Convolutional 3x3x512 | 2359808 | Softmax | 0 |
| 13 | Activation | 10 | | |
| 14 | Max-Pooling | 0 | | |
| 15 | Convolutional 3x3x512 | 2359808 | | |
| 16 | Convolutional 3x3x512 | 2359808 | | |
| 17 | Activation | 10 | | |
| 18 | Max-Pooling | 0 | | |
| 19 | Linear 10 | 5130 | | |
| 20 | Softmax | 0 | | |
+
+Table 7: Architecture of Simple Convolutional Neural Networks
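
The parameter counts in Table 7 follow from the usual convolution formula (kernel height × width × input channels × output channels, plus one bias per output channel); a small sketch reproducing a few entries, assuming single-channel (grayscale) input images:

```python
def conv_params(k_h, k_w, c_in, c_out, bias=True):
    """Trainable parameters of a 2-D convolution: one k_h x k_w x c_in kernel
    per output channel, plus an optional bias per output channel."""
    return k_h * k_w * c_in * c_out + (c_out if bias else 0)

def linear_params(n_in, n_out, bias=True):
    """Trainable parameters of a fully connected layer."""
    return n_in * n_out + (n_out if bias else 0)

# Reproduce a few entries of Table 7.
assert conv_params(3, 3, 1, 64) == 640        # VGG row 1
assert conv_params(3, 3, 64, 128) == 73856    # VGG row 4
assert conv_params(5, 5, 1, 6) == 156         # LeNet row 1
assert conv_params(5, 5, 6, 16) == 2416       # LeNet row 4
assert linear_params(120, 84) == 10164        # LeNet row 9
```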
+
+# A.3.2 LEARNING PARAMETERS
+
+The parameters of the networks, both the layer weights and the coefficients of the PAUs, were trained over 100 epochs using Adam (Kingma and Ba, 2015) with a learning rate of 0.002, or SGD (Qian, 1999) with a learning rate of 0.01, momentum of 0.5, and no weight decay. In all experiments we used a batch size of 256 samples. The weights of the networks were initialized randomly and the coefficients of the PAUs were initialized with the initialization constants of Leaky ReLU, see Tab. 6. We report the mean of 5 different runs for both the accuracy on the test set and the loss on the training set after each training epoch.
+
+# A.3.3 PREDICTIVE PERFORMANCE
+
+
+MNIST
+Figure 6: PAU compared to baseline activation function units on 5 runs of MNIST using the VGG and LeNet: first column mean test-accuracy, second column mean train-loss. PAU consistently outperforms or matches the best performances of the baseline activations. Moreover, PAUs enable the networks to achieve a lower loss during training compared to all baselines.
+
+
+Fashion-MNIST
+Figure 7: PAU compared to baseline activation function units on 5 runs of Fashion-MNIST using the VGG and LeNet architectures: first column mean test-accuracy, second column mean train-loss. PAU consistently outperforms the baselines activation functions in terms of performance and training time, especially on the VGG.
+
+# A.4 DETAILS OF THE CIFAR10 AND IMAGENET EXPERIMENT
+
+# A.4.1 LEARNING PARAMETERS
+
+The parameters of the networks, both the layer weights and the coefficients of the PAUs, were trained over 400 epochs using SGD with momentum set to 0.9. On the CIFAR10 dataset we use different optimizer setups for the PAU layers and the rest of the network. For the PAU layers we use a constant learning rate per network and no weight decay. For the rest of the network we use an initial learning rate of 0.1, a learning rate decay of 0.985 per epoch, and a weight decay of $5e - 4$ . In all experiments we used a batch size of 64 samples. The weights of the networks were initialized randomly and the coefficients of the PAUs were initialized with the initialization constants of Leaky ReLU, see Tab. 6. The additive noise of the Randomized PAUs is set to $\alpha = 1\%$ when training VGG8 and MobileNetV2, and $\alpha = 10\%$ for ResNet101.
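
For illustration, the per-epoch decay for the non-PAU weights amounts to the schedule $\eta_t = 0.1 \cdot 0.985^t$; a minimal sketch (the function name is ours):

```python
def lr_at_epoch(epoch, base_lr=0.1, decay=0.985):
    """Exponential per-epoch learning-rate decay used for the non-PAU weights."""
    return base_lr * decay ** epoch

# After 400 epochs the rate has shrunk to roughly 0.24% of its initial value.
final_lr = lr_at_epoch(400)
```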
+
+On the ImageNet dataset we use the same optimizer for PAU and the rest of the network. We follow the default setup provided by PyTorch and use an initial learning rate of 0.1, decaying the learning rate to $10\%$ of its previous value after 30, 60 and 90 epochs.
+
+# A.4.2 NETWORK ARCHITECTURES
+
+The network architectures were taken from reference implementations in PyTorch, which we modified to use PAUs. All architectures are the same across the different activation functions, except for Maxout. The default number of trainable parameters of VGG8 (Simonyan and Zisserman, 2015) is 3,918,858; using PAU introduces 50 additional parameters, while Maxout extends the VGG8 network to a total of 7,829,642 parameters. MobileNetV2 (Sandler et al., 2018) contains by default 2,296,922 trainable parameters; PAU adds 360 additional parameters, and Maxout results in a total of 3,524,506 parameters. With respect to the number of parameters, ResNet101 (He et al., 2016) is the largest network we train. By default it contains 42,512,970 trainable parameters; we introduce 100 PAUs and therefore add 1000 additional parameters to the network. Replacing each activation function with Maxout yields a ResNet101 network with 75,454,090 trainable parameters. The default DenseNet121 (Huang et al., 2017) network has 6,956,298 parameters; replacing the activation functions with PAU adds 1200 parameters.
+
+# A.4.3 PRUNING EXPERIMENT
+
+For the pruning experiment, we implement the "Lottery ticket hypothesis" (Frankle and Carbin (2019)) in PyTorch. We compare PAUs against the best activation function for each network architecture according to the average predictive accuracy from Tab. 3. More precisely, we compare the predictive performance under pruning for the networks $N_{1} = \{\mathrm{VGG - 8}_{\mathrm{pau}}, \mathrm{MobileNetV2}_{\mathrm{pau}}, \mathrm{ResNet101}_{\mathrm{pau}}\}$ against the networks $N_{2} = \{\mathrm{VGG - 8}_{\mathrm{LReLU}}, \mathrm{MobileNetV2}_{\mathrm{RReLU}}, \mathrm{ResNet101}_{\mathrm{pau}}\}$ . Here we avoided Maxout as it heavily increases the number of parameters in the model, defeating the purpose of pruning. Unlike the original paper, we compress the convolutions using a fixed pruning parameter per iteration of $p\% = 10, 20, 30, 40, 50, 60$ and evaluate once per network. After each training iteration we remove $p\%$ of the filters in every convolution, namely those whose summed weights are lowest. After pruning, we re-initialize the network and repeat the training and pruning procedure with the next $p\%$ parameter.
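
A minimal sketch of the filter-selection step described above, with filters represented as flat weight lists (one hypothetical reading, ranking filters by their raw weight sum):

```python
def prune_filters(filters, p):
    """Drop the p% of filters whose summed weights are lowest,
    keeping the survivors in their original order."""
    n_drop = int(len(filters) * p / 100)
    ranked = sorted(range(len(filters)), key=lambda i: sum(filters[i]))
    dropped = set(ranked[:n_drop])
    return [f for i, f in enumerate(filters) if i not in dropped]

# Four toy 'filters' with weight sums 6, 1, 15, 3; pruning 50% drops the two lowest.
filters = [[1, 2, 3], [0, 1, 0], [4, 5, 6], [1, 1, 1]]
kept = prune_filters(filters, 50)
```

In a real implementation one would zero out or slice the corresponding convolution channels in the model rather than operate on plain lists.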
+
+
+# A.4.4 PREDICTIVE PERFORMANCE CIFAR10
+Figure 8: PAU compared to baseline activation function units on 5 runs of CIFAR-10. Accuracy on the left column and loss on the right one. (Best viewed in color)
\ No newline at end of file
diff --git a/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/images.zip b/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..74eb60d5ebbd180930ad4c165d7188b459f3d2fb
--- /dev/null
+++ b/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:81686dc3b2f1e6ddbb8bfda78ea4109a287e7de632aa74b91b8f5fe540c2065b
+size 1183658
diff --git a/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/layout.json b/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..727d7a1341b5a2ca3e7978a94720748c00a250fe
--- /dev/null
+++ b/padactivationunitsendtoendlearningofflexibleactivationfunctionsindeepnetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:296ac4a0aba757533f887e0f00410cfb7c8d9cd87fabe389b6e62d817ae61f62
+size 460637
diff --git a/pairnormtacklingoversmoothingingnns/65c74dc4-1de2-4926-a9d5-15314115da9a_content_list.json b/pairnormtacklingoversmoothingingnns/65c74dc4-1de2-4926-a9d5-15314115da9a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a0d4115df94b2f567606c70dd0ab075abfdce011
--- /dev/null
+++ b/pairnormtacklingoversmoothingingnns/65c74dc4-1de2-4926-a9d5-15314115da9a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7b6bac91c81ba4c247c0e2b52a3df68cc25f7b76b91172ba5bcb7cfc7775e5e
+size 106015
diff --git a/pairnormtacklingoversmoothingingnns/65c74dc4-1de2-4926-a9d5-15314115da9a_model.json b/pairnormtacklingoversmoothingingnns/65c74dc4-1de2-4926-a9d5-15314115da9a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..22fc1d3c485ee80136a72f1ebbd046e4b5d83918
--- /dev/null
+++ b/pairnormtacklingoversmoothingingnns/65c74dc4-1de2-4926-a9d5-15314115da9a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a85809dfcd4f7310715ee23f1efbea2de3fa77d3f2251e054ff1b6b92a2917a6
+size 116885
diff --git a/pairnormtacklingoversmoothingingnns/65c74dc4-1de2-4926-a9d5-15314115da9a_origin.pdf b/pairnormtacklingoversmoothingingnns/65c74dc4-1de2-4926-a9d5-15314115da9a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..05aa7b68b223c83133fcfd1cb10217d1a47f4c4f
--- /dev/null
+++ b/pairnormtacklingoversmoothingingnns/65c74dc4-1de2-4926-a9d5-15314115da9a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:233e43a2199777c68386a5b61db8980386a4ac45ea198b0429d8b6f550fc5b65
+size 2982903
diff --git a/pairnormtacklingoversmoothingingnns/full.md b/pairnormtacklingoversmoothingingnns/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f56c741aa95a48db69004289cc9ae9ddbc0ddad2
--- /dev/null
+++ b/pairnormtacklingoversmoothingingnns/full.md
@@ -0,0 +1,441 @@
+# PAIRNORM: TACKLING OVERSMOOTHING IN GNNS
+
+Lingxiao Zhao
+
+Carnegie Mellon University
+
+Pittsburgh, PA 15213, USA
+
+{lingxial}@andrew.cmu.edu
+
+Leman Akoglu
+
+Carnegie Mellon University
+
+Pittsburgh, PA 15213, USA
+
+{lakoglu}@andrew.cmu.edu
+
+# ABSTRACT
+
+The performance of graph neural nets (GNNs) is known to gradually decrease with increasing number of layers. This decay is partly attributed to oversmoothing, where repeated graph convolutions eventually make node embeddings indistinguishable. We take a closer look at two different interpretations, aiming to quantify oversmoothing. Our main contribution is PAIRNORM, a novel normalization layer that is based on a careful analysis of the graph convolution operator, which prevents all node embeddings from becoming too similar. What is more, PAIRNORM is fast, easy to implement without any change to network architecture nor any additional parameters, and is broadly applicable to any GNN. Experiments on real-world graphs demonstrate that PAIRNORM makes deeper GCN, GAT, and SGC models more robust against oversmoothing, and significantly boosts performance for a new problem setting that benefits from deeper GNNs. Code is available at https://github.com/LingxiaoShawn/PairNorm.
+
+# 1 INTRODUCTION
+
+Graph neural networks (GNNs) are a family of neural networks that can learn from graph-structured data. Starting with the success of GCN (Kipf & Welling, 2017) in achieving state-of-the-art performance on semi-supervised classification, several variants of GNNs have been developed for this task, including GraphSAGE (Hamilton et al., 2017), GAT (Velickovic et al., 2018), SGC (Wu et al., 2019), and GMNN (Qu et al., 2019), to name a few of the most recent ones.
+
+A key issue with GNNs is their depth limitations. It has been observed that deeply stacking the layers often results in significant drops in performance for GNNs, such as GCN and GAT, even beyond just a few (2-4) layers. This drop is associated with a number of factors; including the vanishing gradients in back-propagation, overfitting due to the increasing number of parameters, as well as the phenomenon called oversmoothing. Li et al. (2018) was the first to call attention to the oversmoothing problem. Having shown that the graph convolution is a type of Laplacian smoothing, they proved that after repeatedly applying Laplacian smoothing many times, the features of the nodes in the (connected) graph would converge to similar values—the issue coined as “oversmoothing”. In effect, oversmoothing hurts classification performance by causing the node representations to be indistinguishable across different classes. Later, several others have alluded to the same problem (Xu et al., 2018; Klicpera et al., 2019; Rong et al., 2019; Li et al., 2019) (See §5 Related Work).
+
+In this work, we address the oversmoothing problem in deep GNNs. Specifically, we propose (to the best of our knowledge) the first normalization layer for GNNs that is applied in-between intermediate layers during training. Our normalization has the effect of preventing the output features of distant nodes from becoming too similar or indistinguishable, while at the same time allowing those of connected nodes in the same cluster to become more similar. We summarize our main contributions as follows.
+
+- Normalization to Tackle Oversmoothing in GNNs: We introduce a normalization scheme, called PAIRNORM, that makes GNNs significantly more robust to oversmoothing and as a result enables the training of deeper models without sacrificing performance. Our proposed scheme capitalizes on the understanding that most GNNs perform a special form of Laplacian smoothing, which makes node features more similar to one another. The key idea is to ensure that the total pairwise feature distances remains a constant across layers, which in turn leads to distant pairs having less similar features, preventing feature mixing across clusters.
+
+- Speed and Generality: PAIRNORM is very straightforward to implement and introduces no additional parameters. It is simply applied to the output features of each layer (except the last one) consisting of simple operations, in particular centering and scaling, that are linear in the input size. Being a simple normalization step between layers, PAIRNORM is not specific to any particular GNN but rather applies broadly.
+
+- Use Case for Deeper GNNs: While PAIRNORM prevents performance from dropping significantly with increasing number of layers, it does not necessarily yield increased performance in absolute terms. We find that this is because shallow architectures with no more than 2-4 layers are sufficient for the often-used benchmark datasets in the literature. In response, we motivate a real-world scenario wherein a notable portion of the nodes have no feature vectors. In such settings, nodes benefit from a larger range (i.e., neighborhood, hence a deeper GNN) to "recover" effective feature representations. Through extensive experiments, we show that GNNs employing our PAIRNORM significantly outperform the 'vanilla' GNNs when deeper models are beneficial to the classification task.
+
+# 2 UNDERSTANDING OVERSMOOTHING
+
+In this work, we consider the semi-supervised node classification (SSNC) problem on a graph. In the general setting, a graph $\mathcal{G} = (\mathcal{V},\mathcal{E},\mathbf{X})$ is given in which each node $i\in \mathcal{V}$ is associated with a feature vector $\mathbf{x}_i\in \mathbb{R}^d$ where $\mathbf{X} = [\mathbf{x}_1,\dots ,\mathbf{x}_n]^T$ denotes the feature matrix, and a subset $\mathcal{V}_l\subset \mathcal{V}$ of the nodes are labeled, i.e. $y_{i}\in \{1,\ldots ,c\}$ for each $i\in \mathcal{V}_l$ where $c$ is the number of classes. Let $\mathbf{A}\in \mathbb{R}^{n\times n}$ be the adjacency matrix and $\mathbf{D} = \mathrm{diag}(deg_1,\dots ,deg_n)\in \mathbb{R}^{n\times n}$ be the degree matrix of $\mathcal{G}$ . Let $\tilde{\mathbf{A}} = \mathbf{A} + \mathbf{I}$ and $\tilde{\mathbf{D}} = \mathbf{D} + \mathbf{I}$ denote the augmented adjacency and degree matrices with added self-loops on all nodes, respectively. Let $\tilde{\mathbf{A}}_{\mathrm{sym}} = \tilde{\mathbf{D}}^{-1 / 2}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-1 / 2}$ and $\tilde{\mathbf{A}}_{\mathrm{rw}} = \tilde{\mathbf{D}}^{-1}\tilde{\mathbf{A}}$ denote symmetrically and nonsymmetrically normalized adjacency matrices with self-loops.
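
For concreteness, the normalized matrices above can be computed directly from their definitions; a small plain-Python sketch on a toy path graph (the graph itself is an arbitrary example):

```python
# Build the augmented matrices for a toy unweighted path graph 0-1-2.
n = 3
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
A_t = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]  # A~ = A + I
deg = [sum(row) for row in A_t]                                              # diagonal of D~

A_rw = [[A_t[i][j] / deg[i] for j in range(n)] for i in range(n)]            # D~^-1 A~
A_sym = [[A_t[i][j] / (deg[i] ** 0.5 * deg[j] ** 0.5) for j in range(n)]
         for i in range(n)]                                                  # D~^-1/2 A~ D~^-1/2

# A~_rw is row-stochastic (each row sums to 1), while A~_sym is symmetric.
row_sums = [sum(row) for row in A_rw]
```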
+
+The task is to learn a hypothesis that predicts $y_{i}$ from $\mathbf{x}_i$ that generalizes to the unlabeled nodes $\mathcal{V}_u = \mathcal{V}\backslash \mathcal{V}_l$ . In Section 3.2, we introduce a variant of this setting where only a subset $\mathcal{F}\subset \mathcal{V}$ of the nodes have feature vectors and the rest are missing.
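The augmented, normalized adjacency matrices defined above can be sketched in a few lines of NumPy (a minimal dense illustration on a toy graph; real implementations use sparse matrices):

```python
import numpy as np

# Toy undirected graph: a path 0-1-2-3 (a placeholder, not one of the paper's datasets).
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

A_tilde = A + np.eye(4)                     # augmented adjacency: self-loops added
d_tilde = A_tilde.sum(axis=1)               # augmented degrees
A_sym = np.diag(d_tilde ** -0.5) @ A_tilde @ np.diag(d_tilde ** -0.5)  # D^-1/2 (A+I) D^-1/2
A_rw = np.diag(1.0 / d_tilde) @ A_tilde     # D^-1 (A+I), the random-walk normalization

assert np.allclose(A_rw.sum(axis=1), 1.0)   # row-stochastic
assert np.allclose(A_sym, A_sym.T)          # symmetric
```

The assertions check the two defining properties: the random-walk version is row-stochastic, while the symmetric version preserves symmetry.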
+
+# 2.1 THE OVERSMOOTHING PROBLEM
+
+Although GNNs like GCN and GAT achieve state-of-the-art results in a variety of graph-based tasks, these models are not very well-understood, especially why they work for the SSNC problem where only a small amount of training data is available. The success appears to be limited to shallow GNNs, where the performance gradually decreases with the increasing number of layers. This decrease is often attributed to three contributing factors: (1) overfitting due to increasing number of parameters, (2) difficulty of training due to vanishing gradients, and (3) oversmoothing due to many graph convolutions.
+
+Among these, perhaps the least understood is oversmoothing, which indeed lacks a formal definition. In their analysis of GCN's working mechanism, Li et al. (2018) showed that the graph convolution of GCN is a special form of Laplacian smoothing. The standard form being $(1 - \gamma)\mathbf{X} + \gamma \tilde{\mathbf{A}}_{\mathrm{rw}}\mathbf{X}$, the graph convolution sets $\gamma = 1$ and uses the symmetrically normalized adjacency matrix to obtain $\tilde{\mathbf{X}} = \tilde{\mathbf{A}}_{\mathrm{sym}}\mathbf{X}$, where the new features $\tilde{\mathbf{x}}_i$ of node $i$ are the weighted average of its own and its neighbors' features. This smoothing lets node representations within the same cluster become more similar, which in turn helps improve SSNC performance under the cluster assumption (Chapelle et al., 2006). However, when GCN goes deep, the performance can suffer from oversmoothing, where node representations from different clusters become mixed up. We refer to this issue of node representations becoming too similar as node-wise oversmoothing.
+
+Another way of thinking about oversmoothing is the following. Repeatedly applying Laplacian smoothing drives the node features to a stationary point, washing away all the information these features carry. Let $\mathbf{x}_{\cdot j} \in \mathbb{R}^n$ denote the $j$-th column of $\mathbf{X}$. Then, for any $\mathbf{x}_{\cdot j} \in \mathbb{R}^n$:
+
+$$
+\lim_{k \rightarrow \infty} \tilde{\mathbf{A}}_{\mathrm{sym}}^{k} \mathbf{x}_{\cdot j} = \boldsymbol{\pi}_j \quad \text{and} \quad \frac{\boldsymbol{\pi}_j}{\| \boldsymbol{\pi}_j \|_1} = \boldsymbol{\pi}, \tag{1}
+$$
+
+where the normalized solution $\boldsymbol{\pi} \in \mathbb{R}^n$ satisfies $\pi_i = \frac{\sqrt{deg_i}}{\sum_{j}\sqrt{deg_j}}$ for all $i\in [n]$. Notice that $\boldsymbol{\pi}$ is independent of the values $\mathbf{x}_{\cdot j}$ of the input feature and is only a function of the graph structure (i.e., degree). In other words, (Laplacian) oversmoothing washes away the signal from all the features, making them indistinguishable. We will refer to this viewpoint as feature-wise oversmoothing.
+
+To this end we propose two measures, row-diff and col-diff, to quantify these two types of oversmoothing. Let $\mathbf{H}^{(k)}\in \mathbb{R}^{n\times d}$ be the representation matrix after $k$ graph convolutions, i.e. $\mathbf{H}^{(k)} = \tilde{\mathbf{A}}_{\mathrm{sym}}^k\mathbf{X}$ . Let $\mathbf{h}_i^{(k)}\in \mathbb{R}^d$ be the $i$ -th row of $\mathbf{H}^{(k)}$ and $\mathbf{h}_{.i}^{(k)}\in \mathbb{R}^n$ be the $i$ -th column of $\mathbf{H}^{(k)}$ . Then we define row-diff $(\mathbf{H}^{(k)})$ and col-diff $(\mathbf{H}^{(k)})$ as follows.
+
+$$
+\operatorname{row-diff}\left(\mathbf{H}^{(k)}\right) = \frac{1}{n^2} \sum_{i, j \in [n]} \left\| \mathbf{h}_i^{(k)} - \mathbf{h}_j^{(k)} \right\|_2 \tag{2}
+$$
+
+$$
+\operatorname{col-diff}\left(\mathbf{H}^{(k)}\right) = \frac{1}{d^2} \sum_{i, j \in [d]} \left\| \mathbf{h}_{\cdot i}^{(k)} / \| \mathbf{h}_{\cdot i}^{(k)} \|_1 - \mathbf{h}_{\cdot j}^{(k)} / \| \mathbf{h}_{\cdot j}^{(k)} \|_1 \right\|_2 \tag{3}
+$$
+
+The row-diff measure is the average of all pairwise distances between the node features (i.e., rows of the representation matrix) and quantifies node-wise oversmoothing, whereas col-diff is the average of pairwise distances between $(L_{1}$ -normalized $^{1}$ ) columns of the representation matrix and quantifies feature-wise oversmoothing.
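The two measures can be sketched directly in NumPy (a dense illustration we provide for clarity; for large graphs one would compute the pairwise distances in blocks or by subsampling pairs):

```python
import numpy as np

def row_diff(H):
    """Eq. (2): average pairwise L2 distance between rows (node-wise oversmoothing)."""
    n = H.shape[0]
    dists = np.linalg.norm(H[:, None, :] - H[None, :, :], axis=-1)
    return dists.sum() / n**2

def col_diff(H):
    """Eq. (3): average pairwise L2 distance between L1-normalized columns
    (feature-wise oversmoothing)."""
    d = H.shape[1]
    Hn = H / np.abs(H).sum(axis=0, keepdims=True)   # L1-normalize each column
    dists = np.linalg.norm(Hn.T[:, None, :] - Hn.T[None, :, :], axis=-1)
    return dists.sum() / d**2

H = np.random.default_rng(0).normal(size=(5, 3))
assert row_diff(H) > 0.0
# Identical rows -> fully oversmoothed node representations -> row-diff is zero.
assert np.isclose(row_diff(np.ones((5, 3))), 0.0)
```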
+
+# 2.2 STUDYING OVERSMOOTHING WITH SGC
+
+Although oversmoothing can be a cause of performance drop with increasing number of layers in GCN, adding more layers also leads to more parameters (due to learned linear projections $\mathbf{W}^{(k)}$ at each layer $k$ ) which magnify the potential of overfitting. Furthermore, deeper models also make the training harder as backpropagation suffers from vanishing gradients.
+
+In order to decouple the effect of oversmoothing from these other two factors, we study the oversmoothing problem using the SGC model (Wu et al., 2019). (Results on other GNNs are presented in §4.) SGC is simplified from GCN by removing all projection parameters of graph convolution layers and all nonlinear activations between layers. The estimation of SGC is simply written as:
+
+$$
+\widehat{\mathbf{Y}} = \operatorname{softmax}\left( \tilde{\mathbf{A}}_{\mathrm{sym}}^{K} \mathbf{X} \mathbf{W} \right) \tag{4}
+$$
+
+where $K$ is the number of graph convolutions, and $\mathbf{W} \in \mathbb{R}^{d \times c}$ denotes the learnable parameters of a logistic regression classifier.
+
+Note that SGC has a fixed number of parameters that does not depend on the number of graph convolutions (i.e. layers). In effect, it is guarded against the influence of overfitting and vanishing gradient problem with more layers. This leaves us only with oversmoothing as a possible cause of performance degradation with increasing $K$ . Interestingly, the simplicity of SGC does not seem to be a sacrifice; it has been observed that it achieves similar or better accuracy in various relational classification tasks (Wu et al., 2019).
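A forward pass of Eq. (4) amounts to $K$ matrix products followed by a linear classifier; a minimal NumPy sketch (dense placeholder adjacency and untrained weights, not the authors' released code) might look like:

```python
import numpy as np

def sgc_forward(A_sym, X, W, K):
    """Eq. (4): Y_hat = softmax(A_sym^K X W) -- K parameter-free propagation
    steps followed by a single linear (logistic regression) classifier."""
    H = X
    for _ in range(K):               # A_sym^K X can be precomputed once before training
        H = A_sym @ H
    Z = H @ W
    Z = Z - Z.max(axis=1, keepdims=True)     # numerically stable softmax
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
A_sym = np.full((4, 4), 0.25)   # placeholder normalized adjacency (rows sum to 1)
X = rng.normal(size=(4, 8))     # 4 nodes, 8 features
W = rng.normal(size=(8, 3))     # 3 classes, untrained weights
Y = sgc_forward(A_sym, X, W, K=2)
assert Y.shape == (4, 3) and np.allclose(Y.sum(axis=1), 1.0)
```

Because the propagation carries no parameters, increasing `K` changes neither the parameter count nor the optimization, which is exactly why SGC isolates oversmoothing.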
+
+
+Figure 1: (best in color) SGC's performance (dashed lines) with increasing graph convolutions $(K)$ on Cora dataset (train/val/test split is $3\% / 10\% / 87\%$ ). For each $K$ , we train SGC in 500 epochs, save the model with the best validation accuracy, and report all measures based on the saved model. Measures row-diff and col-diff are computed based on the final layer representation of the saved model. (Solid lines depict after applying our method PAIRNORM, which we discuss in §3.2.)
+
+
+
+
+
+
+
+Dashed lines in Figure 1 illustrate the performance of SGC on the Cora dataset as we increase the number of layers $(K)$. The training (cross-entropy) loss monotonically increases with larger $K$, potentially because graph convolution mixes node representations with their neighbors' and makes them less distinguishable (training becomes harder). On the other hand, graph convolutions (i.e., smoothing) improve generalization ability, reducing the gap between training and validation/test loss up to $K = 4$, after which (over)smoothing begins to hurt performance. Both row-diff and col-diff continue decreasing monotonically with $K$, providing supporting evidence for oversmoothing.
+
+# 3 TACKLING OVERSMOOTHING
+
+# 3.1 PROPOSED PAIRNORM
+
+We start by establishing a connection between graph convolution and an optimization problem, namely graph-regularized least squares (GRLS), as shown by NT & Maehara (2019). Let $\bar{\mathbf{X}}\in \mathbb{R}^{n\times d}$ be a new node representation matrix, with $\bar{\mathbf{x}}_i\in \mathbb{R}^d$ denoting the $i$-th row of $\bar{\mathbf{X}}$. Then the GRLS problem is given as
+
+$$
+\min_{\bar{\mathbf{X}}} \sum_{i \in \mathcal{V}} \| \bar{\mathbf{x}}_i - \mathbf{x}_i \|_{\tilde{\mathbf{D}}}^2 + \sum_{(i, j) \in \mathcal{E}} \| \bar{\mathbf{x}}_i - \bar{\mathbf{x}}_j \|_2^2 \tag{5}
+$$
+
+where $\| \mathbf{z}_i\|_{\tilde{\mathbf{D}}}^2 = \mathbf{z}_i^T\tilde{\mathbf{D}}\mathbf{z}_i$. The first term can be seen as a total degree-weighted least squares objective. The second is a graph-regularization term that measures the variation of the new features over the graph structure. The goal of the optimization problem can be stated as estimating new "denoised" features $\bar{\mathbf{x}}_i$'s that are not too far from the input features $\mathbf{x}_i$'s and are smooth over the graph structure.
+
+The GRLS problem has a closed form solution $\bar{\mathbf{X}} = (2\mathbf{I} - \tilde{\mathbf{A}}_{\mathrm{rw}})^{-1}\mathbf{X}$ , for which $\tilde{\mathbf{A}}_{\mathrm{rw}}\mathbf{X}$ is the first-order Taylor approximation, that is $\tilde{\mathbf{A}}_{\mathrm{rw}}\mathbf{X} \approx \bar{\mathbf{X}}$ . By exchanging $\tilde{\mathbf{A}}_{\mathrm{rw}}$ with $\tilde{\mathbf{A}}_{\mathrm{sym}}$ we obtain the same form as the graph convolution, i.e., $\tilde{\mathbf{X}} = \tilde{\mathbf{A}}_{\mathrm{sym}}\mathbf{X} \approx \bar{\mathbf{X}}$ . As such, graph convolution can be viewed as an approximate solution of (5), where it minimizes the variation over the graph structure while keeping the new representations close to the original.
+
+The optimization problem in (5) facilitates a closer look at the oversmoothing problem of graph convolution. Ideally, we want to obtain smoothing over nodes within the same cluster, but avoid smoothing over nodes from different clusters. The objective in (5) dictates only the first goal via the graph-regularization term. It is thus prone to oversmoothing when convolutions are applied repeatedly. To circumvent the issue and fulfill both goals simultaneously, we can add a negative term such as the sum of distances between disconnected pairs, as follows.
+
+$$
+\min_{\bar{\mathbf{X}}} \sum_{i \in \mathcal{V}} \| \bar{\mathbf{x}}_i - \mathbf{x}_i \|_{\tilde{\mathbf{D}}}^2 + \sum_{(i, j) \in \mathcal{E}} \| \bar{\mathbf{x}}_i - \bar{\mathbf{x}}_j \|_2^2 - \lambda \sum_{(i, j) \notin \mathcal{E}} \| \bar{\mathbf{x}}_i - \bar{\mathbf{x}}_j \|_2^2 \tag{6}
+$$
+
+where $\lambda$ is a balancing scalar to account for the different volume and importance of the two goals. By deriving the closed-form solution of (6) and approximating it with a first-order Taylor expansion, one can get a revised graph convolution operator with hyperparameter $\lambda$. In this paper, we take a different route. Instead of a completely new graph convolution operator, we propose a general and efficient "patch", called PAIRNORM, that can be applied to any form of graph convolution having the potential of oversmoothing.
+
+Let $\tilde{\mathbf{X}}$ (the output of graph convolution) and $\dot{\mathbf{X}}$ respectively be the input and output of PAIRNORM. Observing that the output of graph convolution $\tilde{\mathbf{X}} = \tilde{\mathbf{A}}_{\mathrm{sym}}\mathbf{X}$ only achieves the first goal, PAIRNORM serves as a normalization layer that works on $\tilde{\mathbf{X}}$ to achieve the second goal of keeping disconnected pair representations farther apart. Specifically, PAIRNORM normalizes $\tilde{\mathbf{X}}$ such that the total pairwise squared distance $\mathrm{TPSD}(\dot{\mathbf{X}}) := \sum_{i,j \in [n]} \| \dot{\mathbf{x}}_i - \dot{\mathbf{x}}_j \|_2^2$ is the same as $\mathrm{TPSD}(\mathbf{X})$. That is,
+
+$$
+\sum_{(i, j) \in \mathcal{E}} \| \dot{\mathbf{x}}_i - \dot{\mathbf{x}}_j \|_2^2 + \sum_{(i, j) \notin \mathcal{E}} \| \dot{\mathbf{x}}_i - \dot{\mathbf{x}}_j \|_2^2 = \sum_{(i, j) \in \mathcal{E}} \| \mathbf{x}_i - \mathbf{x}_j \|_2^2 + \sum_{(i, j) \notin \mathcal{E}} \| \mathbf{x}_i - \mathbf{x}_j \|_2^2. \tag{7}
+$$
+
+By keeping the total pairwise squared distance unchanged, the term $\sum_{(i,j) \notin \mathcal{E}} \| \dot{\mathbf{x}}_i - \dot{\mathbf{x}}_j \|_2^2$ is guaranteed to be at least as large as the original value $\sum_{(i,j) \notin \mathcal{E}} \| \mathbf{x}_i - \mathbf{x}_j \|_2^2$ since the other term $\sum_{(i,j) \in \mathcal{E}} \| \dot{\mathbf{x}}_i - \dot{\mathbf{x}}_j \|_2^2 \approx \sum_{(i,j) \in \mathcal{E}} \| \tilde{\mathbf{x}}_i - \tilde{\mathbf{x}}_j \|_2^2$ is shrunk through the graph convolution.
+
+In practice, instead of always tracking the original value $\mathrm{TPSD}(\mathbf{X})$ , we can maintain a constant TPSD value $C$ across all layers, where $C$ is a hyperparameter that could be tuned per dataset.
+
+To normalize $\tilde{\mathbf{X}}$ to constant TPSD, we first need to compute $\mathrm{TPSD}(\tilde{\mathbf{X}})$. Directly computing TPSD involves $n^2$ pairwise distances, which is $\mathcal{O}(n^2 d)$ and can be time-consuming for large datasets.
+
+Equivalently, normalization can be done via a two-step approach where TPSD is rewritten as
+
+$$
+\operatorname{TPSD}(\tilde{\mathbf{X}}) = \sum_{i, j \in [n]} \| \tilde{\mathbf{x}}_i - \tilde{\mathbf{x}}_j \|_2^2 = 2n^2 \left( \frac{1}{n} \sum_{i=1}^{n} \| \tilde{\mathbf{x}}_i \|_2^2 - \Big\| \frac{1}{n} \sum_{i=1}^{n} \tilde{\mathbf{x}}_i \Big\|_2^2 \right). \tag{8}
+$$
+
+The first term (ignoring the scale $2n^2$) in Eq. (8) represents the mean squared length of the node representations, and the second term the squared length of their mean. To simplify the computation of (8), we subtract the row-wise mean from each $\tilde{\mathbf{x}}_i$, i.e., $\tilde{\mathbf{x}}_i^c = \tilde{\mathbf{x}}_i - \frac{1}{n}\sum_{j=1}^{n}\tilde{\mathbf{x}}_j$ where $\tilde{\mathbf{x}}_i^c$ denotes the centered representation. This shifting does not affect the TPSD, and it drives the term $\| \frac{1}{n}\sum_{i = 1}^{n}\tilde{\mathbf{x}}_i\|_2^2$ to zero, so that computing $\mathrm{TPSD}(\tilde{\mathbf{X}})$ boils down to calculating the squared Frobenius norm of $\tilde{\mathbf{X}}^c$ and overall takes $\mathcal{O}(nd)$. That is,
+
+$$
+\operatorname{TPSD}(\tilde{\mathbf{X}}) = \operatorname{TPSD}\left(\tilde{\mathbf{X}}^c\right) = 2n \| \tilde{\mathbf{X}}^c \|_F^2. \tag{9}
+$$
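The equivalence of the $\mathcal{O}(n^2 d)$ and $\mathcal{O}(nd)$ computations in Eqs. (8)–(9) can be checked numerically (a small sanity-check sketch on random data):

```python
import numpy as np

def tpsd_direct(X):
    """All n^2 pairwise squared distances, summed: O(n^2 d)."""
    diff = X[:, None, :] - X[None, :, :]
    return (diff ** 2).sum()

def tpsd_fast(X):
    """Eq. (9): center the rows, then 2n * ||X^c||_F^2, in O(nd)."""
    Xc = X - X.mean(axis=0, keepdims=True)
    return 2 * X.shape[0] * (Xc ** 2).sum()

X = np.random.default_rng(1).normal(size=(50, 7))
assert np.isclose(tpsd_direct(X), tpsd_fast(X))
```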
+
+In summary, our proposed PAIRNORM (with input $\tilde{\mathbf{X}}$ and output $\dot{\mathbf{X}}$) can be written as a two-step, center-and-scale, normalization procedure:
+
+$$
+\tilde{\mathbf{x}}_i^c = \tilde{\mathbf{x}}_i - \frac{1}{n} \sum_{j=1}^{n} \tilde{\mathbf{x}}_j \tag{10}
+$$
+
+$$
+\dot{\mathbf{x}}_i = s \cdot \frac{\tilde{\mathbf{x}}_i^c}{\sqrt{\frac{1}{n} \sum_{j=1}^{n} \| \tilde{\mathbf{x}}_j^c \|_2^2}} = s \sqrt{n} \cdot \frac{\tilde{\mathbf{x}}_i^c}{\sqrt{\| \tilde{\mathbf{X}}^c \|_F^2}} \tag{11}
+$$
+
+After scaling, the data remains centered, that is, $\| \sum_{i=1}^{n} \dot{\mathbf{x}}_i \|_2^2 = 0$. In Eq. (11), $s$ is a hyperparameter that determines $C$. Specifically,
+
+$$
+\operatorname{TPSD}(\dot{\mathbf{X}}) = 2n \| \dot{\mathbf{X}} \|_F^2 = 2n \sum_{i} \Big\| s \cdot \frac{\tilde{\mathbf{x}}_i^c}{\sqrt{\frac{1}{n} \sum_{j} \| \tilde{\mathbf{x}}_j^c \|_2^2}} \Big\|_2^2 = 2n \frac{s^2}{\frac{1}{n} \sum_{j} \| \tilde{\mathbf{x}}_j^c \|_2^2} \sum_{i} \| \tilde{\mathbf{x}}_i^c \|_2^2 = 2n^2 s^2 \tag{12}
+$$
+
+Then, $\dot{\mathbf{X}} := \mathrm{PAIRNORM}(\tilde{\mathbf{X}})$ has row-wise mean 0 (i.e., is centered) and constant total pairwise squared distance $C = 2n^2 s^2$ . An illustration of PAIRNORM is given in Figure 2. The output of PAIRNORM is input to the next convolution layer.
+
+
+Figure 2: Illustration of PAIRNORM, comprising centering and rescaling steps.
+
+We also derive a variant of PAIRNORM by replacing $\sum_{j=1}^{n} \|\tilde{\mathbf{x}}_j^c\|_2^2$ in Eq. (11) with $n\|\tilde{\mathbf{x}}_i^c\|_2^2$, such that the scaling step computes $\dot{\mathbf{x}}_i = s \cdot \frac{\tilde{\mathbf{x}}_i^c}{\|\tilde{\mathbf{x}}_i^c\|_2}$. We call it PAIRNORM-SI (for scale individually), which imposes a stricter restriction on the node representations: all of them have the same $L_2$-norm $s$. In practice we found that both PAIRNORM and PAIRNORM-SI work well for SGC, whereas PAIRNORM-SI provides better and more stable results for GCN and GAT. The reason GCN and GAT require stricter normalization may be that they have more parameters and are thus more prone to overfitting. In Appx. A.6 we provide additional measures to demonstrate why PAIRNORM and PAIRNORM-SI work. In all experiments, we employ PAIRNORM for SGC and PAIRNORM-SI for both GCN and GAT.
+
+Figure 3: (best in color) Performance comparison of the original (dashed) vs. PAIRNORM-enhanced (solid) GCN and GAT models with increasing layers on Cora.
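Both the center-and-scale steps of Eqs. (10)–(11) and the PAIRNORM-SI variant fit in a few lines of NumPy (a minimal sketch, not the authors' released code):

```python
import numpy as np

def pairnorm(X_tilde, s=1.0, scale_individually=False):
    """Center-and-scale normalization of Eqs. (10)-(11).

    scale_individually=True gives the PAIRNORM-SI variant, where every
    row of the output has L2-norm exactly s."""
    n = X_tilde.shape[0]
    Xc = X_tilde - X_tilde.mean(axis=0, keepdims=True)           # Eq. (10): center
    if scale_individually:                                       # PAIRNORM-SI
        return s * Xc / np.linalg.norm(Xc, axis=1, keepdims=True)
    return s * Xc / np.sqrt((Xc ** 2).sum() / n)                 # Eq. (11): scale

X = np.random.default_rng(2).normal(size=(6, 4))
out = pairnorm(X, s=1.0)
n = X.shape[0]

# Output is centered, and its total pairwise squared distance is 2 n^2 s^2 (Eq. 12).
assert np.allclose(out.mean(axis=0), 0.0)
tpsd = ((out[:, None, :] - out[None, :, :]) ** 2).sum()
assert np.isclose(tpsd, 2 * n**2)
```

The final assertions verify the constant-TPSD property of Eq. (12) for $s = 1$; note that PAIRNORM-SI re-normalizes each row after centering, so its output is only approximately centered.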
+
+PAIRNORM is effective and efficient in solving the oversmoothing problem of GNNs. As a general normalization layer, it can be used with any GNN. Solid lines in Figure 1 present the performance of SGC on Cora with increasing number of layers, where we employ PAIRNORM after each graph convolution layer, as compared to the 'vanilla' versions. Similarly, Figure 3 is for GCN and GAT (PAIRNORM is applied after the activation of each graph convolution layer). Note that the performance decay with PAIRNORM at work is much slower. (See Figs. 5-6 in Appx. A.3 for other datasets.)
+
+While PAIRNORM enables deeper models that are more robust to oversmoothing, it may seem odd that the overall test accuracy does not improve. In fact, the benchmark graph datasets often used in the literature require no more than 4 layers, after which performance decays (even if slowly). In the next section, we present a realistic use case setting for which deeper models are more likely to provide higher performance, where the benefit of PAIRNORM becomes apparent.
+
+# 3.2 A CASE WHERE DEEPER GNNS ARE BENEFICIAL
+
+In general, oversmoothing gets increasingly more severe as the number of layers goes up. A task would benefit more from employing PAIRNORM if it required a large number of layers to achieve its best performance. To this end, we study the "missing feature setting", where a subset of the nodes lack feature vectors. Let $\mathcal{M} \subseteq \mathcal{V}_u$ be the set where $\forall m \in \mathcal{M}, \mathbf{x}_m = \emptyset$, i.e., all of their features are missing. We denote with $p = |\mathcal{M}| / |\mathcal{V}_u|$ the missing fraction. We call this variant of the task semi-supervised node classification with missing vectors (SSNC-MV). Intuitively, one would require a larger number of propagation steps (hence a deeper GNN) to be able to "recover" effective feature representations for these nodes.
+
+SSNC-MV is a general and realistic problem that finds several applications in the real world. For example, the credit lending problem of identifying low- vs. high-risk customers (nodes) can be modeled as SSNC-MV where a large fraction of nodes do not exhibit any meaningful features (e.g., due to low-volume activity). In fact, many graph-based classification tasks with the cold-start issue (entity with no history) can be cast into SSNC-MV. To our knowledge, this is the first work to study the SSNC-MV problem using GNN models.
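Constructing an SSNC-MV instance from a fully-featured dataset is a simple masking step; a sketch under assumed names (`make_ssnc_mv` is a helper we introduce for illustration; erased features become the 0-vector, as in our experiment setup):

```python
import numpy as np

def make_ssnc_mv(X, unlabeled_idx, p, seed=0):
    """Erase the features of a p-fraction of the unlabeled nodes
    (replaced by the 0-vector)."""
    rng = np.random.default_rng(seed)
    m = int(round(p * len(unlabeled_idx)))
    missing = rng.choice(unlabeled_idx, size=m, replace=False)
    X_mv = X.copy()
    X_mv[missing] = 0.0
    return X_mv, missing

X = np.ones((10, 4))                  # placeholder features
X_mv, missing = make_ssnc_mv(X, unlabeled_idx=np.arange(2, 10), p=0.5)
assert len(missing) == 4 and np.allclose(X_mv[missing], 0.0)
assert np.allclose(X_mv[:2], 1.0)     # labeled nodes keep their original features
```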
+
+Figure 4 presents the performance of SGC, GCN, and GAT models on Cora with increasing number of layers, where we remove feature vectors from all the unlabeled nodes, i.e. $p = 1$ . The models with PAIRNORM achieve a higher test accuracy compared to those without, which they typically reach at a larger number of layers. (See Fig. 7 in Appx. A.4 for results on other datasets.)
+
+
+Figure 4: (best in color) Comparison of 'vanilla' vs. PAIRNORM-enhanced SGC, GCN, and GAT performance on Cora for $p = 1$ . Green diamond symbols depict the layer at which validation accuracy peaks. PAIRNORM boosts overall performance by enabling more robust deep GNNs.
+
+
+
+
+
+# 4 EXPERIMENTS
+
+In Section 3 we showed the robustness of PAIRNORM-enhanced models against an increasing number of layers in the SSNC problem. In this section we design extensive experiments to evaluate the effectiveness of PAIRNORM under the SSNC-MV setting, over the SGC, GCN, and GAT models.
+
+# 4.1 EXPERIMENT SETUP
+
+Datasets. We use 4 well-known benchmark datasets in GNN domain: Cora, CiteSeer, Pubmed (Sen et al., 2008), and CoauthorCS (Shchur et al., 2018). Their statistics are reported in Appx. A.2. For Cora, CiteSeer and Pubmed, we use the same dataset splits as Kipf & Welling (2017), where all nodes outside train and validation are used as test set. For CoauthorCS, we randomly split all nodes into train/val/test as $3\% / 10\% / 87\%$ , and keep the same split for all experiments.
+
+Models. We use three different GNN models as our base model: SGC (Wu et al., 2019), GCN (Kipf & Welling, 2017), and GAT (Velickovic et al., 2018). We compare our PAIRNORM with the residual connection method (He et al., 2016) over the base models (except SGC, since there is no "residual-connected" SGC), as we surprisingly find it can slow down oversmoothing and benefit the SSNC-MV problem. Like PAIRNORM, residual connection is a general technique that can be applied to any model without changing its architecture. We focus on the comparison between the base models and the PAIRNORM-enhanced models, rather than achieving state-of-the-art performance for SSNC and SSNC-MV. There exist a few other works addressing oversmoothing (Klicpera et al., 2019; Li et al., 2018; Rong et al., 2019; Xu et al., 2018); however, they design specialized architectures rather than simple "patch" procedures like PAIRNORM that can be applied on top of any GNN.
+
+Hyperparameters. We choose the hyperparameter $s$ of PAIRNORM from $\{0.1, 1, 10, 50, 100\}$ over the validation set for SGC, while keeping it fixed at $s = 1$ for both GCN and GAT due to resource limitations. We set the #hidden units of GCN and GAT (#attention heads is set to 1) to 32 and 64, respectively, for all datasets. Dropout with rate 0.6 and $L_{2}$ regularization with penalty $5 \cdot 10^{-4}$ are applied to GCN and GAT. For SGC, we vary the number of layers in $\{1, 2, \dots, 10, 15, \dots, 60\}$, and for GCN and GAT in $\{2, 4, \dots, 12, 15, 20, \dots, 30\}$.
+
+Configurations. For PAIRNORM-enhanced models, we apply PAIRNORM after each graph convolution layer (i.e., after activation if any) in the base model. For residual-connected models with $t$ skip steps, we connect the output of the $l$-th layer to the $(l + t)$-th, that is, $\mathbf{H}_{\mathrm{new}}^{(l + t)} = \mathbf{H}^{(l + t)} + \mathbf{H}^{(l)}$ where $\mathbf{H}^{(l)}$ denotes the output of the $l$-th graph convolution (after activation). For the SSNC-MV setting, we randomly erase a $p$ fraction of the feature vectors from nodes in the validation and test sets (for which we input the vector $\mathbf{0} \in \mathbb{R}^d$), whereas all training (labeled) nodes keep their original features (see §3.2). We run each experiment 5 times, each within 1000 epochs, and report the average performance. We mainly use a single GTX-1080ti GPU, with some SGC experiments run on an Intel i7-8700k CPU.
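The skip-step wiring $\mathbf{H}_{\mathrm{new}}^{(l+t)} = \mathbf{H}^{(l+t)} + \mathbf{H}^{(l)}$ can be sketched as a cache of layer outputs (an illustrative forward loop only; `conv` is a stand-in for any graph convolution, not our training code):

```python
import numpy as np

def forward_with_skips(X, A_sym, num_layers, t=1):
    """Residual connections with skip step t:
    H_new^{(l)} = H^{(l)} + H^{(l-t)} whenever layer l-t exists."""
    conv = lambda H: np.maximum(A_sym @ H, 0.0)   # stand-in graph convolution + ReLU
    outputs = [X]                                  # outputs[l] = output of layer l
    for l in range(1, num_layers + 1):
        H = conv(outputs[-1])
        if l - t >= 1:                             # skip from layer l-t into layer l
            H = H + outputs[l - t]
        outputs.append(H)
    return outputs[-1]

A = np.full((4, 4), 0.25)       # placeholder normalized adjacency (rows sum to 1)
X = np.ones((4, 3))
out = forward_with_skips(X, A, num_layers=3, t=1)
assert out.shape == (4, 3)
```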
+
+# 4.2 EXPERIMENT RESULTS
+
+We first show the global performance gain of applying PAIRNORM to SGC for SSNC-MV under varying feature missing rates, as shown in Table 1. PAIRNORM-enhanced SGC performs similarly or better at $0\%$ missing rate, and it significantly outperforms vanilla SGC in most other settings, especially at larger missing rates. #L denotes the best number of layers for the model, i.e. the one yielding the largest average validation accuracy (over 5 runs), for which we report the average test accuracy (Acc). Notice the larger #L values for SGC-PN compared to vanilla SGC, which shows the power of PAIRNORM for enabling "deep" SGC models by effectively tackling oversmoothing.
+
+Similar to Wu et al. (2019) who showed that the simple SGC model achieves comparable or better performance as other GNNs for various tasks, we found PAIRNORM-enhanced SGC to follow the same trend when compared with PAIRNORM-enhanced GCN and GAT, for all SSNC-MV settings. Due to its simplicity and extreme efficiency, we believe PAIRNORM-enhanced SGC sets a strong baseline for the SSNC-MV problem.
+
+Table 1: Comparison of 'vanilla' vs. PAIRNORM-enhanced SGC performance in Cora, Citeseer, Pubmed, and CoauthorCS for SSNC-MV problem, with missing rate ranging from $0\%$ to $100\%$ . Showing test accuracy at $\#L$ ( $K$ in Eq. 4) layers, at which model achieves best validation accuracy.
+
+| Dataset | Method | 0% Acc | #L | 20% Acc | #L | 40% Acc | #L | 60% Acc | #L | 80% Acc | #L | 100% Acc | #L |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Cora | SGC | 0.815 | 4 | 0.806 | 5 | 0.786 | 3 | 0.742 | 4 | 0.733 | 3 | 0.423 | 15 |
+| | SGC-PN | 0.811 | 7 | 0.799 | 7 | 0.797 | 7 | 0.783 | 20 | 0.780 | 25 | 0.745 | 40 |
+| Citeseer | SGC | 0.689 | 10 | 0.684 | 6 | 0.668 | 8 | 0.657 | 9 | 0.565 | 8 | 0.290 | 2 |
+| | SGC-PN | 0.706 | 3 | 0.695 | 3 | 0.653 | 4 | 0.641 | 5 | 0.590 | 50 | 0.486 | 50 |
+| Pubmed | SGC | 0.754 | 1 | 0.748 | 1 | 0.723 | 4 | 0.746 | 2 | 0.659 | 3 | 0.399 | 35 |
+| | SGC-PN | 0.782 | 9 | 0.781 | 7 | 0.778 | 60 | 0.782 | 7 | 0.772 | 60 | 0.719 | 40 |
+| CoauthorCS | SGC | 0.914 | 1 | 0.898 | 2 | 0.877 | 2 | 0.824 | 2 | 0.751 | 4 | 0.318 | 2 |
+| | SGC-PN | 0.915 | 2 | 0.909 | 2 | 0.899 | 3 | 0.891 | 4 | 0.880 | 8 | 0.860 | 20 |
+
+We next employ PAIRNORM-SI for GCN and GAT under the same setting, comparing it with the residual (skip) connection technique. Results are shown in Tables 2 and 3 for GCN and GAT, respectively. Due to space and resource limitations, we only show results for the $0\%$ and $100\%$ missing rate scenarios. (We provide results for other missing rates $(70, 80, 90\%)$ over 1 run only in Appx. A.5.) We observe a similar trend for GCN and GAT: (1) the vanilla model suffers from a performance drop under SSNC-MV with increasing missing rate; (2) both residual connections and PAIRNORM-SI enable deeper models and improve performance (note the larger #L and Acc); (3) GCN-PN and GAT-PN achieve performance that is comparable or better than just using skips; (4) performance can be further improved (albeit slightly) by using skips along with PAIRNORM-SI.
+
+Table 2: Comparison of 'vanilla' and (PAIRNORM-SI/ residual)-enhanced GCN performance on Cora, Citeseer, Pubmed, and CoauthorCS for SSNC-MV problem, with $0\%$ and $100\%$ feature missing rate. $t$ represents the skip-step of residual connection. (See A.5 Fig. 8 for more settings.)
+
+| Method | Cora 0% Acc | #L | Cora 100% Acc | #L | CiteSeer 0% Acc | #L | CiteSeer 100% Acc | #L | Pubmed 0% Acc | #L | Pubmed 100% Acc | #L |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| GCN | 0.821 | 2 | 0.582 | 2 | 0.695 | 2 | 0.313 | 2 | 0.779 | 2 | 0.449 | 2 |
+| GCN-PN | 0.790 | 2 | 0.731 | 10 | 0.660 | 2 | 0.498 | 8 | 0.780 | 30 | 0.745 | 25 |
+| GCN-t1 | 0.822 | 2 | 0.721 | 15 | 0.696 | 2 | 0.441 | 12 | 0.780 | 2 | 0.656 | 25 |
+| GCN-t1-PN | 0.780 | 2 | 0.724 | 30 | 0.648 | 2 | 0.465 | 10 | 0.756 | 15 | 0.690 | 12 |
+| GCN-t2 | 0.820 | 2 | 0.722 | 10 | 0.691 | 2 | 0.432 | 20 | 0.779 | 2 | 0.645 | 20 |
+| GCN-t2-PN | 0.785 | 4 | 0.740 | 30 | 0.650 | 2 | 0.508 | 12 | 0.770 | 15 | 0.725 | 30 |
+
+Table 3: Comparison of 'vanilla' and (PAIRNORM-SI/ residual)-enhanced GAT performance on Cora, Citeseer, Pubmed, and CoauthorCS for SSNC-MV problem, with $0\%$ and $100\%$ feature missing rate. $t$ represents the skip-step of residual connection. (See A.5 Fig. 9 for more settings.)
+
+| Method | Cora 0% Acc | #L | Cora 100% Acc | #L | CiteSeer 0% Acc | #L | CiteSeer 100% Acc | #L | Pubmed 0% Acc | #L | Pubmed 100% Acc | #L | CoauthorCS 0% Acc | #L | CoauthorCS 100% Acc | #L |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| GAT | 0.823 | 2 | 0.653 | 4 | 0.693 | 2 | 0.428 | 4 | 0.774 | 6 | 0.631 | 4 | 0.892 | 4 | 0.737 | 4 |
+| GAT-PN | 0.787 | 2 | 0.718 | 6 | 0.670 | 2 | 0.483 | 4 | 0.774 | 12 | 0.714 | 10 | 0.916 | 2 | 0.843 | 8 |
+| GAT-t1 | 0.822 | 2 | 0.706 | 8 | 0.693 | 2 | 0.461 | 6 | 0.769 | 4 | 0.698 | 8 | 0.899 | 4 | 0.842 | 10 |
+| GAT-t1-PN | 0.787 | 2 | 0.710 | 10 | 0.658 | 6 | 0.500 | 10 | 0.757 | 4 | 0.684 | 12 | 0.911 | 2 | 0.844 | 20 |
+| GAT-t2 | 0.820 | 2 | 0.691 | 8 | 0.692 | 2 | 0.461 | 6 | 0.774 | 8 | 0.702 | 8 | 0.895 | 4 | 0.803 | 6 |
+| GAT-t2-PN | 0.788 | 4 | 0.738 | 12 | 0.672 | 4 | 0.517 | 10 | 0.776 | 15 | 0.704 | 12 | 0.917 | 2 | 0.855 | 30 |
+
+# 5 RELATED WORK
+
+Oversmoothing in GNNs: Li et al. (2018) were the first to call attention to the oversmoothing problem. Xu et al. (2018) introduced Jumping Knowledge Networks, which employ skip connections for multi-hop message passing and also enable different neighborhood ranges. Klicpera et al. (2019) proposed a propagation scheme based on personalized PageRank that ensures locality (via teleports), which in turn prevents oversmoothing. Li et al. (2019) built on ideas from ResNet to use residual as well as dense connections to train deep GCNs. DropEdge (Rong et al., 2019) alleviates oversmoothing by reducing message passing, removing a certain fraction of edges at random from the input graph. These are all specialized solutions that introduce additional parameters and/or a different network architecture.
+
+Normalization Schemes for Deep-NNs: Various normalization schemes have been proposed for deep neural networks, including batch normalization (Ioffe & Szegedy, 2015), weight normalization (Salimans & Kingma, 2016), layer normalization (Ba et al., 2016), and so on. Conceptually these have substantially different goals (e.g., reducing training time), and were not proposed for graph neural networks nor the oversmoothing problem therein. An important difference to note is that larger depth in regular neural nets does not translate to more hops of propagation over a graph structure.
+
+# 6 CONCLUSION
+
+We investigated the oversmoothing problem in GNNs and proposed PAIRNORM, a novel normalization layer that boosts the robustness of deep GNNs against oversmoothing. PAIRNORM is fast to compute, requires no change in network architecture nor any extra parameters, and can be applied to any GNN. Experiments on real-world classification tasks showed the effectiveness of PAIRNORM, where it provides performance gains when the task benefits from more layers. Future work will explore other use cases of deeper GNNs that could further showcase PAIRNORM's advantages.
+
+# REFERENCES
+
+Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. CoRR, abs/1607.06450, 2016.
+Olivier Chapelle, Bernhard Scholkopf, and Alexander Zien. Semi-Supervised Learning. 2006.
+William L. Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In NIPS, pp. 1024-1034, 2017.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778. IEEE, 2016.
+Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015.
+Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR). OpenReview.net, 2017.
+Johannes Klicpera, Aleksandar Bojchevski, and Stephan Gunnemann. Combining neural networks with personalized pagerank for classification on graphs. In International Conference on Learning Representations (ICLR), 2019.
+Guohao Li, Matthias Müller, Ali Thabet, and Bernard Ghanem. Can GCNs go as deep as CNNs? CoRR, abs/1904.03751, 2019.
+Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pp. 3538-3545, 2018.
+Hoang NT and Takanori Maehara. Revisiting graph neural networks: All we have is low-pass filters. CoRR, abs/1905.09550, 2019.
+Meng Qu, Yoshua Bengio, and Jian Tang. Gmnn: Graph markov neural networks. In International Conference on Machine Learning, pp. 5241-5250, 2019.
+Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. The truly deep graph convolutional networks for node classification. CoRR, abs/1907.10903, 2019.
+Tim Salimans and Durk P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pp. 901-909, 2016.
+Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93-93, 2008.
+Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. Pitfalls of graph neural network evaluation. arXiv preprint arXiv:1811.05868, 2018.
+Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations (ICLR). OpenReview.net, 2018.
+Felix Wu, Amauri H. Souza Jr., Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Q. Weinberger. Simplifying graph convolutional networks. In ICML, volume 97 of Proceedings of Machine Learning Research, pp. 6861-6871. PMLR, 2019.
+Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation Learning on Graphs with Jumping Knowledge Networks. In Proceedings of the 35th International Conference on Machine Learning, volume 80, pp. 5453-5462, 2018.
+
+# A APPENDIX
+
+# A.1 DERIVATION OF EQ.8
+
+$$
+\begin{aligned}
+\operatorname{TPSD}(\tilde{\mathbf{X}}) &= \sum_{i, j \in [n]} \|\tilde{\mathbf{x}}_i - \tilde{\mathbf{x}}_j\|_2^2 = \sum_{i, j \in [n]} \left(\tilde{\mathbf{x}}_i - \tilde{\mathbf{x}}_j\right)^T \left(\tilde{\mathbf{x}}_i - \tilde{\mathbf{x}}_j\right) \qquad (13) \\
+&= \sum_{i, j \in [n]} \left(\tilde{\mathbf{x}}_i^T \tilde{\mathbf{x}}_i + \tilde{\mathbf{x}}_j^T \tilde{\mathbf{x}}_j - 2 \tilde{\mathbf{x}}_i^T \tilde{\mathbf{x}}_j\right) \qquad (14) \\
+&= 2n \sum_{i \in [n]} \tilde{\mathbf{x}}_i^T \tilde{\mathbf{x}}_i - 2 \sum_{i, j \in [n]} \tilde{\mathbf{x}}_i^T \tilde{\mathbf{x}}_j \qquad (15) \\
+&= 2n \sum_{i \in [n]} \|\tilde{\mathbf{x}}_i\|_2^2 - 2\, \mathbf{1}^T \tilde{\mathbf{X}} \tilde{\mathbf{X}}^T \mathbf{1} \qquad (16) \\
+&= 2n \sum_{i \in [n]} \|\tilde{\mathbf{x}}_i\|_2^2 - 2 \left\| \mathbf{1}^T \tilde{\mathbf{X}} \right\|_2^2 \qquad (17) \\
+&= 2n^2 \left( \frac{1}{n} \sum_{i=1}^{n} \|\tilde{\mathbf{x}}_i\|_2^2 - \left\| \frac{1}{n} \sum_{i=1}^{n} \tilde{\mathbf{x}}_i \right\|_2^2 \right). \qquad (18)
+\end{aligned}
+$$
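The closed form in Eq. 18 is easy to check numerically. A quick, illustrative NumPy verification of the identity (not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 8
X = rng.normal(size=(n, d))  # n node representations of width d

# Brute-force TPSD: sum of squared distances over all ordered pairs (i, j)
tpsd_brute = sum(((X[i] - X[j]) ** 2).sum() for i in range(n) for j in range(n))

# Closed form of Eq. 18: 2 n^2 (mean squared row norm - squared norm of the mean row)
tpsd_closed = 2 * n**2 * ((X**2).sum(axis=1).mean() - (X.mean(axis=0) ** 2).sum())

assert np.isclose(tpsd_brute, tpsd_closed)
```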
+
+# A.2 DATASET STATISTICS
+
+Table 4: Dataset statistics.
+
+| Name | #Nodes | #Edges | #Features | #Classes | Label Rate |
+| --- | --- | --- | --- | --- | --- |
+| Cora | 2708 | 5429 | 1433 | 7 | 0.052 |
+| CiteSeer | 3327 | 4732 | 3703 | 6 | 0.036 |
+| Pubmed | 19717 | 44338 | 500 | 3 | 0.003 |
+| CoauthorCS | 18333 | 81894 | 6805 | 15 | 0.030 |
+
+# A.3 ADDITIONAL PERFORMANCE PLOTS WITH INCREASING NUMBER OF LAYERS
+
+Figure 5: Comparison of 'vanilla' vs. PAIRNORM-enhanced SGC, corresponding to Figure 1, for datasets (from top to bottom) Citeseer, Pubmed, and CoauthorCS. PAIRNORM provides improved robustness to performance decay due to oversmoothing with increasing number of layers.
+
+Figure 6: Comparison of 'vanilla' (dashed) vs. PAIRNORM-enhanced (solid) GCN (left) and GAT (right) models, corresponding to Figure 3, for datasets (from top to bottom) Citeseer, Pubmed, and CoauthorCS. PAIRNORM provides improved robustness against performance decay with increasing number of layers.
+
+# A.4 ADDITIONAL PERFORMANCE PLOTS WITH INCREASING NUMBER OF LAYERS UNDER SSNC-MV WITH $p = 1$
+
+Figure 7: Comparison of 'vanilla' (dashed) vs. PAIRNORM-enhanced (solid) (from left to right) SGC, GCN, and GAT model performance under SSNC-MV for $p = 1$ , corresponding to Figure 4, for datasets (from top to bottom) Citeseer, Pubmed, and CoauthorCS. Green diamond symbols depict the layer at which validation accuracy peaks. PAIRNORM boosts overall performance by enabling more robust deep GNNs.
+
+# A.5 ADDITIONAL EXPERIMENTS UNDER SSNC-MV WITH INCREASING MISSING FRACTION $p$
+
+In this section we report additional experimental results under the SSNC-MV setting with varying missing fraction, in particular $p \in \{0.7, 0.8, 0.9, 1\}$ , and also report the base case $p = 0$ for comparison.
+
+Figure 8 presents results on all four datasets for GCN vs. PAIRNORM-enhanced GCN (denoted PN for short). Models without any skip connections are denoted $*$ -0, those with one-hop skip connections $*$ -1, and those with one- and two-hop skip connections $*$ -2. Bar charts on the right report the number of layers at which each model achieved its highest validation accuracy, and those on the left report the corresponding test accuracy. Figure 9 presents corresponding results for GAT.
+
+We discuss the takeaways from these figures below.
+
+Figure 8: Supplementary results to Table 2 for GCN on (from top to bottom) Cora, Citeseer, Pubmed, and CoauthorCS.
+
+We make the following observations based on Figures 8 and 9:
+
+- The performance of 'vanilla' GCN and GAT models without skip connections (i.e., GCN-0 and GAT-0) drops monotonically as we increase the missing fraction $p$ .
+- PAIRNORM-enhanced 'vanilla' models (PN-0, no skips) perform comparably to or better than GCN-0 and GAT-0 in all cases, especially as $p$ increases. In other words, with PAIRNORM at work, model performance is more robust against missing data.
+- The best number of layers for GCN-0 changes only between 2 and 4 as we increase $p$ . For GAT-0, it changes mostly between 2 and 6.
+- PAIRNORM-enhanced 'vanilla' models (PN-0, no skips) can go deeper, i.e., they can leverage a larger range of #layers (2-12) as we increase $p$ . Specifically, GCN-PN-0 (GAT-PN-0) uses an equal or greater number of layers than GCN-0 (GAT-0) in almost all cases.
+- Without any normalization, adding skip connections helps—GCN/GAT-1 and GCN/GAT-2 are better than GCN/GAT-0, especially as we increase $p$ .
+- With PAIRNORM but no skip connections, performance is comparable to or better than that of just adding skips.
+- Adding skips on top of PAIRNORM does not seem to introduce any notable gains.
+
+In summary, simply employing our PAIRNORM for GCN and GAT provides robustness against oversmoothing that allows them to go deeper and achieve improved performance under SSNC-MV.
+
+Figure 9: Supplementary results to Table 3 for GAT on (from top to bottom) Cora, Citeseer, Pubmed, and CoauthorCS.
+
+# A.6 CASE STUDY: ADDITIONAL MEASURES FOR PAIRNORM AND PAIRNORM-SI WITH SGC AND GCN
+
+To better understand why PAIRNORM and PAIRNORM-SI are helpful for training deep GNNs, we report additional measures for SGC and GCN with PAIRNORM and PAIRNORM-SI on the Cora dataset. In the main text, we claim that TPSD (total pairwise squared distance) is constant across layers for SGC with PAIRNORM (for GCN/GAT this is not guaranteed because of the influence of activation functions and dropout layers). In this section we empirically measure pairwise (squared) distances for both SGC and GCN, with PAIRNORM and PAIRNORM-SI.
+
+# A.6.1 SGC WITH PAIRNORM AND PAIRNORM-SI
+
+To verify our analysis of PAIRNORM for SGC, and to understand how the PAIRNORM variant PAIRNORM-SI works, we measure the average pairwise squared distance (APSD) as well as the average pairwise distance (APD) between representations for two categories of node pairs: (1) connected pairs (nodes that are directly connected in the graph) and (2) random pairs (chosen uniformly at random from the node set). The APSD of random pairs reflects the TPSD, and the APD of random pairs reflects the total pairwise distance (TPD). Under the homophily assumption of the labels w.r.t. the graph structure, we want the APD or APSD of connected pairs to be small while keeping the APD or APSD of random pairs relatively large.
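These measures are straightforward to compute from node representations and an edge list. A minimal NumPy sketch (function names are illustrative, not the authors' code):

```python
import numpy as np

def pair_distances(x, pairs, squared=False):
    """Average (squared) pairwise distance between rows of x over the given pairs.

    x: (n, d) array of node representations; pairs: iterable of (i, j) tuples."""
    d2 = np.array([((x[i] - x[j]) ** 2).sum() for i, j in pairs])
    return d2.mean() if squared else np.sqrt(d2).mean()

def apsd_apd(x, edges, num_random=1000, seed=0):
    """APSD/APD for connected pairs (graph edges) vs. uniformly random pairs."""
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    random_pairs = rng.integers(0, n, size=(num_random, 2))
    return {
        "connected_apsd": pair_distances(x, edges, squared=True),
        "connected_apd": pair_distances(x, edges),
        "random_apsd": pair_distances(x, random_pairs, squared=True),  # reflects TPSD
        "random_apd": pair_distances(x, random_pairs),                 # reflects TPD
    }
```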
+
+The results are shown in Figure 10. Without normalization, SGC suffers from quickly diminishing APD and APSD of random pairs. As we proved, PAIRNORM keeps APSD constant across layers; however, it does not normalize APD, which appears to decrease linearly with increasing number of layers. Surprisingly, although PAIRNORM-SI is not theoretically guaranteed to maintain constant APSD and APD, it empirically achieves more stable APSD and APD than PAIRNORM. We were not able to prove this phenomenon mathematically, and leave it for future investigation.
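For reference, the two normalizations being measured can be sketched in a few lines of NumPy (a minimal sketch assuming the centre-then-rescale formulation described in the paper; the function names and the scale hyperparameter `s` are illustrative):

```python
import numpy as np

def pairnorm(x, s=1.0, eps=1e-6):
    """Centre node representations, then rescale so TPSD stays constant
    across layers (Eq. 8 gives TPSD = 2 n^2 s^2 after this step).

    x: (n, d) array of node representations; s: target scale."""
    x_c = x - x.mean(axis=0, keepdims=True)            # centre: subtract the mean node
    scale = np.sqrt((x_c ** 2).sum(axis=1).mean()) + eps  # root mean squared row norm
    return s * x_c / scale

def pairnorm_si(x, s=1.0, eps=1e-6):
    """Scale-Individually variant: each centred row is normalized to the
    same length, placing all nodes on a sphere of radius s."""
    x_c = x - x.mean(axis=0, keepdims=True)
    norms = np.sqrt((x_c ** 2).sum(axis=1, keepdims=True)) + eps
    return s * x_c / norms
```

After `pairnorm`, the mean squared row norm is (up to `eps`) exactly $s^2$ and the mean row is zero, so by Eq. 8 the TPSD equals $2n^2 s^2$ regardless of the input.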
+
+Figure 10: Measuring average distance (squared and not-squared) between representations at each layer for SGC, SGC with PAIRNORM, and SGC with PAIRNORM-SI. The setting is the same as in Figure 1, and they share the same performance.
+
+APD does not capture the full distribution of pairwise distances. To show how the distribution changes with increasing number of layers, we use TensorBoard to plot histograms of pairwise distances, shown in Figure 11. Comparing SGC with and without PAIRNORM, adding PAIRNORM slows the leftward shift (shrinkage) of the distribution of random-pair distances considerably, while exhibiting similar behavior for the distribution of connected-pair distances. PAIRNORM-SI appears more powerful in keeping the median and mean of the random-pair distance distribution stable, while "spreading" the distribution out by increasing its variance. The performance of PAIRNORM and PAIRNORM-SI is similar; however, PAIRNORM-SI seems more effective at stabilizing TPD and TPSD.
+
+Figure 11: Measuring distribution of distances between representations at each layer for SGC, SGC with PAIRNORM, and SGC with PAIRNORM-SI. Supplementary results for Figure 10.
+
+# A.6.2 GCN WITH PAIRNORM AND PAIRNORM-SI
+
+Figure 12: Measuring average distance (squared and not-squared) between representations at each layer for GCN, GCN with PAIRNORM, and GCN with PAIRNORM-SI. We trained three 12-layer GCNs with #hidden=128 and dropout=0.6 in 1000 epochs. Respective test set accuracies are $31.09\%$ , $77.77\%$ , $75.09\%$ . Note that the scale of distances is not comparable across models, since they have learnable parameters that scale these distances differently.
+
+The formal analysis of PAIRNORM and PAIRNORM-SI is based on SGC. GCN, like other GNNs, has learnable parameters, dropout layers, and activation layers, all of which complicate direct mathematical analysis. Here we perform similar empirical measurements of pairwise distances to get a rough sense of how PAIRNORM and PAIRNORM-SI work with GCN on the Cora dataset. Figures 12 and 13 demonstrate how PAIRNORM and PAIRNORM-SI help train a relatively deep (12-layer) GCN.
+
+Notice that oversmoothing occurs very quickly for GCN without any normalization, where both connected and random pair distances reach zero. In contrast, GCN with PAIRNORM or PAIRNORM-SI keeps random pair distances relatively far apart while allowing connected pair distances to shrink. As also stated in the main text, using PAIRNORM-SI for GCN and GAT is generally more stable than using PAIRNORM (notice the near-constant random pair distances in the rightmost subfigures). There are several possible explanations for why PAIRNORM-SI is more stable. First, as shown in Figures 10 and 12, PAIRNORM-SI keeps not only APSD but also APD stable; moreover, the distributions of pairwise distances (Figures 11 and 13) also show the power of PAIRNORM-SI (notice the large gap between the smaller connected-pair distances and the larger random-pair distances). Second, we conjecture that restricting representations to reside on a sphere makes training stable and faster, which we also observe empirically by studying the training curves. Third, GCN and GAT tend to overfit easily on the SSNC problem, due to many learnable parameters across layers and limited labeled data, so it is possible that adding more restrictions to these models helps reduce overfitting.
+
+
+Figure 13: Measuring distribution of distances between representations at each layer for GCN, GCN with PAIRNORM, and GCN with PAIRNORM-SI. Supplementary results for Figure 12.
+
+All in all, these empirical measurements, illustrated throughout the figures in this section, demonstrate that PAIRNORM and PAIRNORM-SI successfully address the oversmoothing problem for deep GNNs. Our work is the first to propose a normalization layer specifically designed for graph neural networks, which we hope will kick-start more work in this area toward training more robust and effective GNNs.
\ No newline at end of file
diff --git a/pairnormtacklingoversmoothingingnns/images.zip b/pairnormtacklingoversmoothingingnns/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..547e7cdfbba53b95448623b93bbaccf6ed0eeeb5
--- /dev/null
+++ b/pairnormtacklingoversmoothingingnns/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:40102f0d732d12124187a10efb480537e43c6d2b28c22daadf93b30f95206056
+size 1430245
diff --git a/pairnormtacklingoversmoothingingnns/layout.json b/pairnormtacklingoversmoothingingnns/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a0978536bb134228f11c987acfcd3ff7c9327c4b
--- /dev/null
+++ b/pairnormtacklingoversmoothingingnns/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0297c6d38d9185e68ece946898b8d8c72f05932c3a8f45e3b3e9c8096b034c0c
+size 559096
diff --git a/payattentiontofeaturestransferlearnfastercnns/d94a9a70-4bee-464d-b0f2-970632fbd80a_content_list.json b/payattentiontofeaturestransferlearnfastercnns/d94a9a70-4bee-464d-b0f2-970632fbd80a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4c6fe247f48cd356478962a4ed960c9f1c33d31b
--- /dev/null
+++ b/payattentiontofeaturestransferlearnfastercnns/d94a9a70-4bee-464d-b0f2-970632fbd80a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5fd5d346f3fd528dc0702ab9b2e49f0f41c8cec8ebc4338441345a7ea0dda5a4
+size 82802
diff --git a/payattentiontofeaturestransferlearnfastercnns/d94a9a70-4bee-464d-b0f2-970632fbd80a_model.json b/payattentiontofeaturestransferlearnfastercnns/d94a9a70-4bee-464d-b0f2-970632fbd80a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..91b0a1718108e16d6c3acf4ff7accb72e7dbcd2c
--- /dev/null
+++ b/payattentiontofeaturestransferlearnfastercnns/d94a9a70-4bee-464d-b0f2-970632fbd80a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dae908b824472646558cc18ffe6ce94a7960e0fe4851724708e286dc2c3583b2
+size 102807
diff --git a/payattentiontofeaturestransferlearnfastercnns/d94a9a70-4bee-464d-b0f2-970632fbd80a_origin.pdf b/payattentiontofeaturestransferlearnfastercnns/d94a9a70-4bee-464d-b0f2-970632fbd80a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..610abff7822c56945497376b0f249b99ce300503
--- /dev/null
+++ b/payattentiontofeaturestransferlearnfastercnns/d94a9a70-4bee-464d-b0f2-970632fbd80a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b68495cab225256f90c7f652f17fb9970306bc3eba6b5b44f2381fd7e5f6418
+size 2112258
diff --git a/payattentiontofeaturestransferlearnfastercnns/full.md b/payattentiontofeaturestransferlearnfastercnns/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d87fe8f10841836e0cafab89597bdb07e5c92641
--- /dev/null
+++ b/payattentiontofeaturestransferlearnfastercnns/full.md
@@ -0,0 +1,295 @@
+# PAY ATTENTION TO FEATURES, TRANSFER LEARN FASTER CNNS
+
+Kafeng Wang, Xitong Gao, Yiren Zhao, Xingjian Li, Dejing Dou, Cheng-Zhong Xu
+
+Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences; University of Cambridge; Big Data Lab, Baidu Research; University of Macau.
+kf.wang@siat.ac.cn, xt.gao@siat.ac.cn
+
+# ABSTRACT
+
+Deep convolutional neural networks are now widely deployed in vision applications, but the limited size of training data can restrict their task performance. Transfer learning offers the chance for CNNs to learn with limited data samples by transferring knowledge from models pretrained on large datasets. Blindly transferring all learned features from the source dataset, however, brings unnecessary computation to CNNs on the target task. In this paper, we propose attentive feature distillation and selection (AFDS), which not only adjusts the strength of transfer learning regularization but also dynamically determines the important features to transfer. By deploying AFDS on ResNet-101, we achieve a state-of-the-art computation reduction at the same accuracy budget, outperforming all existing transfer learning methods. With a $10 \times$ MACs reduction budget, a ResNet-101 equipped with AFDS, transfer learned from ImageNet to Stanford Dogs 120, achieves an accuracy $11.07\%$ higher than its best competitor.
+
+# 1 INTRODUCTION
+
+Despite recent successes of CNNs achieving state-of-the-art performance in vision applications (Tan & Le, 2019; Cai & Vasconcelos, 2018; Zhao et al., 2018; Ren et al., 2015), two major shortcomings limit their deployment in real life. First, training CNNs from random initializations to achieve high task accuracy generally requires a large amount of data that is expensive to collect. Second, CNNs are typically compute-intensive and memory-demanding, hindering their adoption in power-limited scenarios.
+
+To address the former challenge, transfer learning (Pan & Yang, 2009) is designed to transfer knowledge learned from the source task to a target dataset that has limited data samples. In practice, we often choose a source dataset such that the input domain of the source comprises the domain of the target. A common paradigm for transfer learning is to train a model on a large source dataset, and then fine-tune the pre-trained weights with regularization methods on the target dataset (Zagoruyko & Komodakis, 2017; Yim et al., 2017; Li et al., 2018; Li & Hoiem, 2018; Li et al., 2019). For example, one regularization method, $L^2$ -SP (Li et al., 2018), penalizes the $L^2$ -distances between weights pretrained on the source dataset and the weights being trained on the target dataset. The pretrained source weights serve as a starting point when training on the target data. During fine-tuning on the target dataset, the regularization constrains the search space around this starting point, which in turn prevents overfitting to the target dataset.
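As a rough sketch of this regularizer (NumPy, over flattened parameter vectors; the coefficients `alpha` and `beta` are illustrative hyperparameters following the description above, not the authors' code):

```python
import numpy as np

def l2_sp(theta, theta_src, theta_new, alpha=0.1, beta=0.01):
    """L2-SP regularizer, sketched for flat parameter vectors.

    theta:     fine-tuned parameters of the shared feature extractor
    theta_src: their pretrained values on the source task (the starting point)
    theta_new: parameters with no source counterpart (e.g. the new classifier head)

    Shared parameters are pulled toward the pretrained source weights, while the
    freshly initialized head receives an ordinary L2 penalty toward zero."""
    return alpha * ((theta - theta_src) ** 2).sum() + beta * (theta_new ** 2).sum()
```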
+
+Intuitively, the responsibility of transfer learning is to preserve the source knowledge acquired by important neurons. The neurons thereby retain their abilities to extract features from the source domain, and contribute to the network's performance on the target dataset.
+
+Moreover, by determining the importance of neurons, unimportant ones can further be removed from computation during inference with network pruning methods (Luo et al., 2017; He et al., 2017; Zhuang et al., 2018; Ye et al., 2018; Gao et al., 2019). The removal of unnecessary compute not only makes CNNs smaller in size but also reduces computational costs while minimizing possible accuracy degradations. As the source domain encompasses the target, many neurons responsible for extracting features from the source domain may become irrelevant to the target domain and can be removed. In Figure 1, a simple empirical study of the channel neurons' activation magnitudes corroborates our intuition: as deeper layers extract higher-level features, more neurons become either specialized or irrelevant to dogs. The discussion above hence prompts two questions regarding the neurons: which neurons should we transfer source knowledge to, and which are actually important to the target model?
+
+
+(a) Example images.
+
+
+Figure 1: (a) shows sample images from two datasets, ImageNet contains images with greater diversity. (b) shows the average maximum activations of 20 channel neurons in 3 layers of ResNet-101 that are most excited by images from Dogs.
+
+
+(b) Maximum channel activations.
+
+
+
+
+
+Yet traditional transfer learning methods fail to provide answers to both, as they generally transfer knowledge either equally for each neuron with the same regularization weight, or determine the strength of regularization using only the source dataset (Li et al., 2018). The source domain can be vastly larger than the target, assigning importance to weights that are irrelevant to the target task.
+
+Recent years have seen a surge of interest in network pruning techniques, many of which induce sparsity by pushing neuron weights or outputs to zeros, allowing them to be pruned without a detrimental impact on the task accuracies. Even though pruning methods present a solution to neuron/weight importance, unfortunately they do not provide an answer to the latter question, i.e. whether these neurons/weights are important to the target dataset. The reason for this is that pruning optimization objectives are often in conflict with traditional transfer learning, as both drive weight values in different directions: zero for pruning and the initial starting point for transfer learning. As we will see later, a naive composition of the two methods could have a disastrous impact on the accuracy of a pruned CNN transfer-learned on the target dataset.
+
+In this paper, to tackle the challenge of jointly transferring source knowledge and pruning target CNNs, we propose a new method based on attention mechanism (Vaswani et al., 2017), attentive feature distillation and selection (AFDS). For the images in the target dataset, AFDS dynamically learns not only the features to transfer, but also the unimportant neurons to skip.
+
+During transfer learning, instead of fine-tuning with $L^2$ -SP regularization which explores the proximity of the pre-trained weights, we argue that a better alternative is to mimic the feature maps, i.e. the output response of each convolutional layer in the source model when images from the target dataset are shown, with $L^2$ -distances. This way the fine-tuned model can still learn the behavior of the source model. Additionally, without the restriction of searching only the proximity of the initial position, the weights in the target model can be optimized freely and thus increasing their generalization capacity. Therefore, we present attentive feature distillation (AFD) to learn which relevant features to transfer.
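Informally, the resulting distillation term is a per-channel weighted $L^2$ distance between teacher and student feature maps. A minimal NumPy sketch (the channel weights, which AFD learns with a small attention network, are taken as given here; names are illustrative):

```python
import numpy as np

def afd_loss(student_fm, teacher_fm, channel_weights):
    """Weighted feature-map distillation for one layer.

    student_fm, teacher_fm: (C, H, W) activations for the same input image
    channel_weights:        (C,) nonnegative per-channel transfer strengths,
                            in AFD predicted by a small attention network."""
    per_channel = ((student_fm - teacher_fm) ** 2).sum(axis=(1, 2))  # (C,)
    return (channel_weights * per_channel).sum()
```

Setting a channel's weight to zero turns off knowledge transfer for that channel entirely, which is how AFD can choose *which* features to transfer rather than transferring all of them equally.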
+
+To accelerate the transfer-learned model, we further propose attentive feature selection (AFS) to prune networks dynamically. AFS is designed to predictively select important output channels of the convolution to evaluate, and to skip unimportant ones, depending on the input to the convolution. Rarely activated channel neurons can further be removed from the network, reducing the model's memory footprint.
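One way to picture this selection step is an FBS-style gate that predicts channel saliency from a cheap summary of the layer input and keeps only the top-k output channels (an illustrative NumPy sketch, not the paper's exact module; the predictor weights `w` and the descriptor choice are assumptions):

```python
import numpy as np

def afs_mask(x, w, k):
    """Predict per-channel saliency from the layer input and keep only the
    top-k output channels; the convolution can then skip the masked channels.

    x: (C_in, H, W) input to the ConvBN layer
    w: (C_out, C_in) weights of a tiny saliency predictor
    k: number of output channels to evaluate."""
    descriptor = np.abs(x).mean(axis=(1, 2))    # cheap per-channel summary of the input
    saliency = np.maximum(w @ descriptor, 0.0)  # (C_out,) predicted channel importance
    mask = np.zeros_like(saliency)
    mask[np.argsort(saliency)[-k:]] = 1.0       # keep the k most salient channels
    return mask * saliency                      # scaled gate for the surviving channels
```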
+
+From an informal perspective, both AFD and AFS learn to adjust the "valves" that control the flow of information for each channel neuron. The former adjusts the strength of regularization, thereby tuning the flow of knowledge being transferred from the source model. The latter allows salient information to pass on to the subsequent layer and stops the flow of unimportant information. A significant attribute that differentiates AFD and AFS from their existing counterparts is that we employ attention mechanisms to adaptively learn to "turn the valves" dynamically with small trainable auxiliary networks.
+
+Our main contributions are as follows:
+
+- We present attentive feature distillation and selection (AFDS) to effectively transfer learn CNNs, and demonstrate state-of-the-art performance on many publicly available datasets with ResNet-101 (He et al., 2016) models transfer learned from ImageNet (Deng et al., 2009).
+- We paired a large range of existing transfer learning and network pruning methods, and examined their abilities to trade-off FLOPs with task accuracy.
+- By changing the fraction of channel neurons to skip for each convolution, AFDS can further accelerate the transfer learned models while minimizing the impact on task accuracy. We found that AFDS generally provides the best FLOPs and accuracy trade-off when compared to a broad range of paired methods.
+
+# 2 RELATED WORK
+
+# 2.1 TRANSFER LEARNING
+
+Training a deep CNN to achieve high accuracy generally requires a large amount of training data, which may be expensive to collect. Transfer learning (Pan & Yang, 2009) addresses this challenge by transferring knowledge learned on a large dataset with a similar domain to the training dataset. A typical approach for CNNs is to first train the model on a large source dataset, and make use of its feature extraction abilities (Donahue et al., 2014; Razavian et al., 2014). Moreover, it has been demonstrated that task accuracy can be further improved by fine-tuning the resulting pre-trained model on a smaller target dataset with a similar domain but a different task (Yosinski et al., 2014; Azizpour et al., 2015). Li et al. (2018) proposed $L^2$ -SP regularization to minimize the $L^2$ -distance between each fine-tuned parameter and its initial pre-trained value, thus preserving knowledge learned in the pre-trained model. In addition, they presented $L^2$ -SP-Fisher, which further weighs each $L^2$ -distance using the Fisher information matrix estimated from the source dataset. Instead of constraining the parameter search space, Li et al. (2019) showed that it is often more effective to regularize feature maps during fine-tuning, and further learn which features to pay attention to. Learning without Forgetting (Li & Hoiem, 2018) adapts the model to new tasks while trying to match the original model's output response on the original task using knowledge distillation (KD) (Hinton et al., 2014). Methods proposed by Zagoruyko & Komodakis (2017) and Yim et al. (2017) transfer knowledge from a teacher model to a student by regularizing features: the former computes and regularizes spatial statistics across all feature-map channels, whereas the latter estimates the flow of information across layers for each pair of channels and transfers this knowledge to the student. Instead of manually deciding the regularization penalties and what to regularize as in the previous approaches, Jang et al. (2019) used meta-learning to automatically learn what knowledge to transfer from the teacher and to where in the student model.
+
+Inspired by Li et al. (2019) and Jang et al. (2019), this paper introduces attentive feature distillation (AFD), which similarly transfers knowledge by learning from the teacher's feature maps. It differs from Jang et al. (2019), however, in that the teacher and student models share the same network topology, and it instead learns which channels to transfer from the teacher to the student within the same convolutional output.
+
+# 2.2 STRUCTURED SPARSITY
+
+Sparsity in neural networks has been a long-studied subject (Reed, 1993; LeCun et al., 1990; Chauvin, 1989; Mozer & Smolensky, 1989; Hassibi et al., 1994). Related techniques have been applied to modern deep CNNs with great success (Guo et al., 2016; Dong et al., 2017a), significantly lowering their storage requirements. In general, however, these methods zero out individual weights, producing irregular sparse connections that cannot be efficiently exploited by GPUs to speed up computation.
+
+For this reason, much recent work has turned to structured sparsity (Alvarez & Salzmann, 2016; Wen et al., 2016; Liu et al., 2017; He et al., 2017; 2018). This approach finds coarse-grained sparsity while preserving dense structures, thus allowing conventional GPUs to compute the networks efficiently. Alvarez & Salzmann (2016) and Wen et al. (2016) both added a group Lasso penalty on non-zero weights, and removed entire channels that had been reduced to zero. Liu et al. (2017) proposed network slimming (NS), which adds $L^1$ regularization to the trainable channel-wise scaling parameters $\gamma$ used in batch normalization, and gradually prunes channels with small $\gamma$ values by threshold. He et al. (2018) introduced soft filter pruning (SFP), which iteratively fine-tunes and sets channels with small $L^2$ -norms to zero.
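To make the slimming criterion concrete, here is an illustrative sketch of the global $\gamma$-threshold step (the $L^1$ penalty $\lambda \sum |\gamma|$ is added to the training loss separately; this is not the authors' code):

```python
import numpy as np

def slimming_prune_mask(gammas, prune_fraction):
    """Network-slimming-style pruning: rank BN scaling factors |gamma| across
    the whole network and mark the smallest fraction of channels for removal.

    gammas: list of (C_l,) arrays, one per ConvBN layer
    returns a list of boolean keep-masks with the same shapes."""
    flat = np.concatenate([np.abs(g) for g in gammas])
    threshold = np.quantile(flat, prune_fraction)  # global magnitude threshold
    return [np.abs(g) > threshold for g in gammas]
```

Because the threshold is global, layers whose channels are uniformly unimportant lose more channels than layers with large scaling factors.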
+
+Pruning algorithms remove weights or neurons from the network. The network may therefore lose its ability to process some difficult inputs correctly, as the neurons responsible for them are permanently discarded. Gao et al. (2019) found empirically that task accuracy degrades considerably when most of the computation is removed from the network, and introduced feature boosting and suppression (FBS). Instead of removing neurons permanently, FBS learns to dynamically prune unimportant channels depending on the current input image. In this paper, attentive feature selection (AFS) builds on the advantages of both static and dynamic pruning: AFS not only preserves neurons that are important for some input images, but also removes those unimportant for most inputs, reducing both the memory and compute requirements for inference.
+
+There are methods that dynamically select which paths to evaluate in a network dependent on the input (Figurnov et al., 2017; Dong et al., 2017b; Bolukbasi et al., 2017; Lin et al., 2017; Shazeer et al., 2017; Wu et al., 2018; Ren et al., 2018). They however introduce architectural and/or training method changes, and thus cannot be applied directly on existing popular models pre-trained on ImageNet (Deng et al., 2009).
+
+# 3 ATTENTIVE FEATURE DISTILLATION AND SELECTION
+
+# 3.1 HIGH-LEVEL OVERVIEW
+
+
+Figure 2: High-level overview of AFDS.
+
+We begin by providing a high-level overview of attentive feature distillation and selection (AFDS). AFDS introduces two new components to augment each conventional batch-normalized convolutional (ConvBN) layer (Ioffe & Szegedy, 2015), as illustrated in Figure 2. AFS preemptively learns the importance of each channel in the output of the ConvBN layer, and can suppress unimportant channels, allowing the expensive convolution operation to skip evaluating them. AFD learns the importance of each channel in the output activation, and uses these importances as weights to regularize the target model's feature maps with $L^2$ -distances. Each component is a small neural network containing a small number of parameters that can be trained with conventional stochastic gradient descent (SGD).
+
+# 3.2 PRELIMINARIES
+
+Consider a set of training data $\mathcal{D}$ where each sample $(\pmb{x},y)$ consists of an input image $\pmb{x} \in \mathbb{R}^{C \times H \times W}$ , and a ground-truth label $y \in \mathbb{N}$ . Here $C, H$ and $W$ respectively denote the number of channels, and the height and width of the input image. Training a deep CNN classifier thus minimizes the following loss function with an optimization method based on SGD:
+
+$$
+\mathcal {L} (\boldsymbol {\theta}) = \mathbb {E} _ {(\boldsymbol {x}, y) \sim \mathcal {D}} \left[ \mathcal {L} ^ {\mathrm {C E}} \left(f (\boldsymbol {x}, \boldsymbol {\theta}), y\right) + \mathcal {R} (\boldsymbol {\theta}, \boldsymbol {x}) + \lambda \| \boldsymbol {\theta} \| _ {2} ^ {2} \right], \tag {1}
+$$
+
+where $\pmb{\theta}$ comprises all parameters of the model, and $\mathcal{L}^{\mathrm{CE}}(f(\pmb{x},\pmb{\theta}),y)$ denotes the cross-entropy loss between the CNN output $f(\pmb{x},\pmb{\theta})$ and the label $y$ . The regularizer $\mathcal{R}(\pmb{\theta},\pmb{x})$ is often used to reduce the risk of overfitting; in conventional training, $\mathcal{R}(\pmb{\theta},\pmb{x}) = 0$ . Finally, we impose an $L^2$ penalty on $\pmb{\theta}$ , where $\|\pmb{z}\|_2$ represents the $L^2$ -norm of $\pmb{z}$ across all its elements.
+
+We assume that $f(\pmb{x}, \pmb{\theta})$ is a feed-forward CNN composed of $N$ ConvBN layers for feature extraction, $f_{l}(\pmb{x}_{l-1}, \pmb{\theta}_{l})$ with $l \in L = \{1, 2, \dots, N\}$ , and a final fully-connected layer for classification, $g(\pmb{x}_N, \pmb{\theta}_g)$ . Here, for the $l^{\text{th}}$ layer, $\pmb{x}_{l-1}$ is the input to the layer, with $\pmb{x}_0$ indicating $\pmb{x}$ , and $\pmb{\theta}_l$ is the layer's parameters. Therefore, the $l^{\text{th}}$ layer is defined as:
+
+$$
+\boldsymbol{x}_l = f_l\left(\boldsymbol{x}_{l-1}, \boldsymbol{\theta}_l\right) = \operatorname{relu}\left(\boldsymbol{\gamma}_l \cdot \operatorname{norm}\left(\operatorname{conv}\left(\boldsymbol{x}_{l-1}, \boldsymbol{\theta}_l\right)\right) + \boldsymbol{\beta}_l\right), \tag{2}
+$$
+
+where $\pmb{x}_l \in \mathbb{R}^{C_l \times H_l \times W_l}$ contains the layer's $C_l$ feature maps, each of height $H_l$ and width $W_l$ . The function $\mathrm{conv}(\pmb{x}_{l-1}, \pmb{\theta}_l)$ is a convolution that takes $\pmb{x}_{l-1}$ as input and uses trainable parameters $\pmb{\theta}_l$ , and $\mathrm{norm}(\pmb{z})$ performs batch normalization. Finally, $\pmb{\gamma}_l, \pmb{\beta}_l \in \mathbb{R}^{C_l}$ are trainable vectors, the multiplications $(\cdot)$ and additions $(+)$ are channel-wise, and $\mathrm{relu}(\pmb{z}) = \max(\pmb{z}, 0)$ is the ReLU activation. Although we use the feed-forward classifier above for simplicity, it can easily be modified to contain additional structures such as residual connections (He et al., 2016) and computations for object detection (Ren et al., 2015).
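Leaving the convolution and batch-normalization internals aside, the channel-wise affine and ReLU of Equation (2) are a simple broadcast; in this sketch, `normed` stands in for $\mathrm{norm}(\mathrm{conv}(\pmb{x}_{l-1}, \pmb{\theta}_l))$:

```python
import numpy as np

def convbn_affine_relu(normed, gamma, beta):
    """relu(gamma * normed + beta): gamma and beta have one entry per
    channel and are broadcast over the H x W spatial dimensions."""
    scaled = gamma[:, None, None] * normed + beta[:, None, None]
    return np.maximum(scaled, 0.0)
```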
+
+During transfer learning, as we fine-tune the network on a different task, the final layer $g(\pmb{x}_N, \pmb{\theta}_g)$ is generally replaced with a new, randomly initialized one, $h(\pmb{x}_N, \pmb{\theta}_h)$ . To prevent overfitting, additional regularization terms are used during transfer learning; for instance, $L^2$ -SP (Li et al., 2018) further constrains the parameters $\pmb{\theta}_l$ to stay close to their initial values $\pmb{\theta}_l^\star$ :
+
+$$
+\mathcal{R}(\boldsymbol{\theta}, \boldsymbol{x}) = \lambda_{\mathrm{SP}} \sum_{l \in L} \|\boldsymbol{\theta}_l - \boldsymbol{\theta}_l^\star\|_2^2 + \lambda_{\mathrm{L2}} \|\boldsymbol{\theta}\|_2^2. \tag{3}
+$$
+
+Instead of regularizing parameters, methods based on knowledge distillation (Hinton et al., 2014) encourage the model to mimic the behavior of the original while learning the target task. Learning without Forgetting (LwF) (Li & Hoiem, 2018) uses the following regularizer to mimic the responses of the original classifier:
+
+$$
+\mathcal{R}(\boldsymbol{\theta}, \boldsymbol{x}) = \lambda_{\mathrm{LwF}} \, \mathcal{L}^{\mathrm{CE}}\left(g^\star\left(f_L\left(\boldsymbol{x}, \boldsymbol{\theta}_L\right), \boldsymbol{\theta}_g^\star\right)\right), \tag{4}
+$$
+
+where $f_{L}(\pmb{x},\pmb{\theta}_{L})$ denotes the composition of the first $N$ layers, $g^{\star}$ and $\pmb{\theta}_g^{\star}$ respectively denote the original fully-connected (FC) layer and its associated parameters, and generally $\lambda_{\mathrm{LwF}} = 1$ . Zagoruyko & Komodakis (2017), Yim et al. (2017) and Li et al. (2019) instead chose to regularize feature maps in some intermediate layers $L^{\prime}\subseteq L$ . We assume that $\pmb{x}_l^\star$ is the $l^{\mathrm{th}}$ -layer output of the original model with weights $\pmb{\theta}^{\star}$ when the input $\pmb{x}$ is shown to the model, and that $r$ is a method-dependent function that constrains the relationship between $\pmb{x}_l^\star$ and $\pmb{x}_l$ . The regularizer can then be defined as follows:
+
+$$
+\mathcal{R}(\boldsymbol{\theta}, \boldsymbol{x}) = \lambda_{\mathrm{KD}} \sum_{l \in L'} r\left(\boldsymbol{x}_l^\star, \boldsymbol{x}_l\right). \tag{5}
+$$
+
+# 3.3 ATTENTIVE FEATURE DISTILLATION
+
+A simple way to extend Equation (5) is to penalize the $L^2$ distance between $\pmb{x}_l^\star$ and $\pmb{x}_l$ , thereby pushing the target model to learn the feature map responses of the source:
+
+$$
+\mathcal{R}(\boldsymbol{\theta}, \boldsymbol{x}) = \lambda_{\mathrm{FD}} \sum_{l \in L'} \|\boldsymbol{x}_l^\star - \boldsymbol{x}_l\|_2^2. \tag{6}
+$$
+
+The above formulation, however, places equal weight on every channel of the feature maps. As we discussed earlier, the importance of individual channels varies drastically across input images. It is thus desirable to enforce a different penalty for each channel depending on the input $\pmb{x}$ . For this purpose, we design the regularizer:
+
+$$
+\mathcal{R}(\boldsymbol{\theta}, \boldsymbol{x}) = \lambda_{\mathrm{AFD}} \sum_{l \in L'} \sum_{c \in C_l} \rho_l^{[c]}\left(\boldsymbol{x}_l^\star\right) \left\|\left(\boldsymbol{x}_l^\star - \boldsymbol{x}_l\right)^{[c]}\right\|_2^2. \tag{7}
+$$
+
+Note that in Equation (7), for any tensor $\mathbf{z}$ , the term $\mathbf{z}^{[c]}$ denotes the $c^{\mathrm{th}}$ slice of the tensor. The transfer importance predictor $\rho_{l}: \mathbb{R}^{C_{l} \times H_{l} \times W_{l}} \to \mathbb{R}^{C_{l}}$ computes for each channel the importance of the source activation maps, which governs the strength of the $L^2$ regularization for each channel. The predictor function is trainable and is defined as a small network with two FC layers:
+
+$$
+\boldsymbol{\rho}_l\left(\boldsymbol{x}_l^\star\right) = \operatorname{softmax}\left(\operatorname{relu}\left(\flat\left(\boldsymbol{x}_l^\star\right) \boldsymbol{\varphi}_l + \boldsymbol{\nu}_l\right) \boldsymbol{\varphi}_l' + \boldsymbol{\nu}_l'\right). \tag{8}
+$$
+
+The function $\flat : \mathbb{R}^{C \times H \times W} \to \mathbb{R}^{C \times HW}$ flattens the spatial dimensions in a channel-wise fashion. The parameters $\varphi_{l} \in \mathbb{R}^{HW \times H}$ , $\nu_{l} \in \mathbb{R}^{1 \times H}$ , $\varphi_{l}' \in \mathbb{R}^{H}$ and $\nu_{l}' \in \mathbb{R}^{C}$ can thus be trained to adjust the importance of each channel dynamically. Finally, the softmax activation, borrowed from attention mechanisms (Vaswani et al., 2017), normalizes the importance values. In our experiments, $\varphi_{l}$ and $\varphi_{l}'$ use He et al. (2015)'s initialization, while $\nu_{l}$ and $\nu_{l}'$ are both initialized to 0.
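Putting Equations (7) and (8) together, a minimal NumPy sketch of the importance predictor and the weighted distillation penalty; the function and parameter names are hypothetical, but the shapes follow the text:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rho_predictor(x_star, phi, nu, phi2, nu2):
    """Transfer importance predictor of Eq. (8): flatten each channel,
    apply two FC layers, then softmax across the C channels."""
    flat = x_star.reshape(x_star.shape[0], -1)  # flat(.): (C, H*W)
    hidden = relu(flat @ phi + nu)              # (C, H), phi in R^{HW x H}
    return softmax(hidden @ phi2 + nu2)         # (C,) importance weights

def afd_penalty(x_star, x, weights):
    """Eq. (7): channel-wise squared L2 distances weighted by rho."""
    diff = (x_star - x).reshape(x.shape[0], -1)
    return float(weights @ (diff ** 2).sum(axis=1))
```

The softmax ensures the per-channel weights sum to one, so the regularizer redistributes, rather than rescales, the total distillation pressure across channels.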
+
+# 3.4 ATTENTIVE FEATURE SELECTION
+
+In a fashion similar to feature boosting and suppression (FBS) (Gao et al., 2019), AFS modifies the ConvBN layers from Equation (2):
+
+$$
+\hat{f}_l\left(\boldsymbol{x}_{l-1}, \boldsymbol{\theta}_l\right) = \operatorname{relu}\left(\boldsymbol{\pi}_l\left(\boldsymbol{x}_{l-1}\right) \cdot \operatorname{norm}\left(\operatorname{conv}\left(\boldsymbol{x}_{l-1}, \boldsymbol{\theta}_l\right)\right) + \boldsymbol{\beta}_l\right), \tag{9}
+$$
+
+where the predictor function $\pi_l: \mathbb{R}^{C_{l-1} \times H_{l-1} \times W_{l-1}} \to \mathbb{R}^{C_l}$ , which takes as input the activation maps of the previous layer, replaces the vector $\gamma_l$ . It dynamically predicts the importance of each channel and suppresses unimportant channels by setting them to zero. The expensive conv function can hence be accelerated by skipping the disabled output channels. The predictor function is defined as follows:
+
+$$
+\boldsymbol{\pi}_l\left(\boldsymbol{x}_{l-1}\right) = \mathbf{m}_l \cdot q_l\left(\boldsymbol{x}_{l-1}\right), \quad \text{where} \quad q_l\left(\boldsymbol{x}_{l-1}\right) = \operatorname{wta}_{\lceil d C_l \rceil}\left(\mathbf{s}_l \cdot h_l\left(\boldsymbol{x}_{l-1}\right) + \left(1 - \mathbf{s}_l\right) \cdot \boldsymbol{\gamma}_l\right), \tag{10}
+$$
+
+where $\mathbf{m}_l, \mathbf{s}_l \in \{0,1\}^{C_l}$ are both constant binary masks: $\mathbf{m}_l$ prunes output channels by permanently setting them to zero, and $\mathbf{s}_l$ decides for each channel whether the output of $h_l(\mathbf{x}_{l-1})$ or $\gamma_l$ should be used. Clearly, when $\mathbf{m}_l = \mathbf{1}$ , no channel neurons are removed from the network. In Section 3.5, we explain how $\mathbf{s}_l$ , $\gamma_l$ and $\mathbf{m}_l$ are determined during fine-tuning. The winner-take-all function $\mathrm{wta}_{\lceil dC_l\rceil}(\mathbf{z})$ preserves the $\lceil dC_l\rceil$ most salient values in $\mathbf{z}$ and sets the remaining ones to zero. The density $0 < d \leq 1$ is a constant that controls the number of channels to preserve during inference, with $d = 1$ preserving all $C_l$ channels; the smaller $d$ gets, the more channels can be skipped, which in turn accelerates the model. Finally, $h_l: \mathbb{R}^{C_{l-1} \times H_{l-1} \times W_{l-1}} \to \mathbb{R}^{C_l}$ is a small network that predicts the importance of each channel. It is composed of a global average pool followed by an FC layer, where $\mathrm{pool}: \mathbb{R}^{C_{l-1} \times H_{l-1} \times W_{l-1}} \to \mathbb{R}^{C_{l-1}}$ computes the average across the spatial dimensions of each channel:
+
+$$
+h_l\left(\boldsymbol{x}_{l-1}\right) = \operatorname{relu}\left(\operatorname{pool}\left(\boldsymbol{x}_{l-1}\right) \boldsymbol{\varphi}_l'' + \boldsymbol{\nu}_l''\right). \tag{11}
+$$
+
+For the initialization of the FC parameters, the trainable weights $\varphi_l^{\prime \prime} \in \mathbb{R}^{C_{l - 1} \times C_l}$ use He et al. (2015)'s method, while $\pmb{\nu}_l^{\prime \prime} \in \mathbb{R}^{C_l}$ is initialized to zero.
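The selection path of Equations (10) and (11) can be sketched as follows; `afs_predict` and `wta` are illustrative names, and the winner-take-all is realized with a simple top-k selection (valid here because $h_l$ outputs are non-negative):

```python
import math
import numpy as np

def wta(z, k):
    """Winner-take-all: keep the k most salient entries of z, zero the rest."""
    out = np.zeros_like(z)
    keep = np.argsort(z)[-k:]  # indices of the k largest values
    out[keep] = z[keep]
    return out

def afs_predict(x_prev, m, s, gamma, phi, nu, d):
    """pi_l of Eq. (10) with h_l of Eq. (11): a global average pool and an
    FC layer predict channel saliencies; wta keeps only ceil(d * C_l)."""
    pooled = x_prev.mean(axis=(1, 2))       # pool: (C_in, H, W) -> (C_in,)
    h = np.maximum(pooled @ phi + nu, 0.0)  # Eq. (11), phi in R^{C_in x C_out}
    k = math.ceil(d * len(gamma))           # number of channels to preserve
    q = wta(s * h + (1 - s) * gamma, k)     # blend h and gamma per channel
    return m * q                            # m permanently prunes channels
```

Zero entries in the returned vector mark output channels whose convolution can be skipped entirely.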
+
+# 3.5 TRAINING PROCEDURE
+
+In this section, we describe the pipeline of AFDS for transferring knowledge from a source model to a new model by fine-tuning on target dataset. The detailed algorithm can be found in Appendix A.
+
+Initially, we have a model $f$ with parameters $\theta^{\star}$ pre-trained on the source dataset (e.g. ImageNet). To ensure better accuracies on compressed target models, all ConvBN layers $f_{l}$ in $f$ are extended with AFS as discussed in Section 3.4, with $d$ initially set to 1, meaning that all output channels of each convolutional layer are evaluated during inference, i.e. there is no acceleration. The pre-trained model is then fine-tuned on the target training dataset $\mathcal{D}$ with the AFD regularization proposed in Section 3.3.
+
+Empirically, we found that in residual networks with greater depths, AFS could become notably challenging to train to high accuracies. To mitigate this, for each output channel of a layer $l$ we update $\mathbf{s}_l$ according to the variance of $h_l(\pmb{x}_{l-1})$ observed on the target dataset: if the variance for a channel is smaller than a threshold $\delta_{\mathrm{s}}$ , we set that channel's entry in $\mathbf{s}_l$ to zero. This replaces the output of $h_l(\pmb{x}_{l-1})$ with $\gamma_l$ , a trainable parameter initialized to the mean of $h_l(\pmb{x}_{l-1})$ . We compute the mean and variance statistics using Welford (1962)'s online algorithm, which does so efficiently in a single pass with $O(1)$ storage. In our experiments, $\delta_{\mathrm{s}}$ is set to a value such that 50% of the channel neurons use the predictor function $h_l$ .
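The channel statistics above can be gathered with Welford's single-pass method; a minimal sketch for a scalar stream, returning the mean and the population variance:

```python
def welford(stream):
    """Welford (1962)'s online algorithm: running mean and variance
    computed in a single pass with O(1) storage."""
    n, mean, m2 = 0, 0.0, 0.0
    for v in stream:
        n += 1
        delta = v - mean
        mean += delta / n          # update the running mean
        m2 += delta * (v - mean)   # accumulate sum of squared deviations
    return mean, m2 / n            # population variance
```

Per-channel statistics follow by running one such accumulator per channel over the fine-tuning mini-batches.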
+
+Moreover, we discovered that many channel neurons are rarely activated in an AFS-based network. We therefore further propose to remove channel neurons that are activated only infrequently. In each layer $l$ , the mask $\mathbf{m}_l$ disables a channel by setting its output to a constant $\mathbf{0}$ if the probability of that channel neuron being active is lower than $\delta_{\mathrm{m}}$ . Zeroed-out channels can thus be permanently removed when the model is used for inference.
+
+# 4 EXPERIMENTS
+
+In this section we provide an extensive empirical study of the joint methods of transfer learning and channel pruning. We evaluate the methods on 6 benchmark datasets: Caltech-256 (Griffin et al., 2007) with 256 general object categories; Stanford Dogs 120 (Khosla et al., 2011), which specializes in images of dogs; MIT Indoors 67 (Quattoni & Torralba, 2009) for indoor scene classification; Caltech-UCSD Birds-200-2011 (CUB-200-2011) (Wah et al., 2011) for classifying birds; and Food-101 (Bossard et al., 2014) for food categories. We refer to Li et al. (2018) and Li et al. (2019) for detailed descriptions of these benchmarks. For Caltech-256, we randomly sample either 30 or 60 images per category from the training set to produce the Caltech-256-30 and -60 training datasets.
+
+We use the ResNet-101 from torchvision $^{1}$ pre-trained on ImageNet as the network for experiments. For ResNet-101 equipped with AFS, we start by extending the pre-trained model and replacing each batch normalization with a randomly initialized AFS, and fine-tune the resulting model on ImageNet for 90 epochs with a learning rate of 0.01 decaying by a factor of 10 every 30 epochs. The resulting model matches its original baseline accuracy.
+
+For each benchmark dataset, the final FC layer of the network is replaced with a new FC randomly initialized with He et al. (2015)'s method to match the number of output categories accordingly. We then perform transfer learning with 4 different methods: $L^2$ (fine-tuning without additional regularization), $L^2$ -SP (Li et al., 2018), learning without forgetting (LwF) (Li & Hoiem, 2018), and finally AFD for models using AFS.
+
+To accelerate the resulting fine-tuned models, we continue fine-tuning while gradually pruning away channels used during inference. For this, we separately examine 3 pruning strategies: network slimming (NS) (Liu et al., 2017), soft filter pruning (SFP) (He et al., 2018), and finally AFS for models transfer-learned with AFD. Note that NS prunes channels by sorting them globally, while SFP does so in a layer-wise manner with identical prune ratios. During this procedure, we start with an unpruned model and incrementally remove $10\%$ of the channels used in inference, i.e. preserving $90\%$ , $80\%$ , and so on, down to $10\%$ of all channels for the accelerated models. At each step, we fine-tune each model using 4500 steps of SGD with a batch size of 48 at a learning rate of 0.01, before fine-tuning for a further 4500 steps at a learning rate of 0.001. AFS additionally updates the $\mathbf{m}$ and $\mathbf{s}$ masks between the two fine-tuning runs.
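The incremental schedule above can be laid out as follows; this is a hypothetical illustration of the described procedure, not the authors' code:

```python
# Densities of channels preserved at each pruning step: 90%, 80%, ..., 10%.
densities = [d / 100 for d in range(90, 0, -10)]

# At each density, fine-tune for 4500 SGD steps at lr 0.01, then 4500 at 0.001.
schedule = [(d, lr, 4500) for d in densities for lr in (0.01, 0.001)]
```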
+
+Table 1: Top-1 accuracy (\%) comparisons of NS, SFP and AFDS on 6 datasets fine-tuned with their respective best transfer learning methods under various speed-up constraints.
+
+| Dataset | MACs reduction | NS | SFP | AFDS |
| --- | --- | --- | --- | --- |
| MIT Indoors 67 | 2× | 81.83 ± 0.35 | 79.43 ± 0.50 | 82.05 ± 0.43 |
| | 5× | 69.38 ± 0.27 | 60.43 ± 0.31 | 69.93 ± 0.52 |
| | 10× | 1.50 ± 0.30 | 58.49 ± 0.34 | 66.72 ± 0.53 |
| Stanford Dogs 120 | 2× | 87.21 ± 0.58 | 81.74 ± 0.26 | 87.41 ± 0.56 |
| | 5× | 73.44 ± 0.27 | 61.20 ± 0.31 | 75.14 ± 0.52 |
| | 10× | 1.33 ± 0.50 | 59.63 ± 0.23 | 70.70 ± 0.33 |
| Caltech-256-30 | 2× | 85.87 ± 0.38 | 77.26 ± 0.28 | 85.15 ± 0.75 |
| | 5× | 66.57 ± 0.23 | 64.27 ± 0.31 | 66.64 ± 0.32 |
| | 10× | 0.39 ± 0.04 | 57.11 ± 0.54 | 61.45 ± 0.43 |
| Caltech-256-60 | 2× | 88.02 ± 0.45 | 84.59 ± 0.28 | 87.15 ± 0.75 |
| | 5× | 73.95 ± 0.27 | 68.38 ± 0.59 | 74.46 ± 0.52 |
| | 10× | 5.05 ± 0.11 | 61.27 ± 0.49 | 70.16 ± 0.53 |
| CUB-200-2011 | 2× | 78.88 ± 0.65 | 75.65 ± 0.26 | 78.03 ± 0.45 |
| | 5× | 73.44 ± 0.27 | 61.50 ± 0.31 | 73.35 ± 0.52 |
| | 10× | 0.52 ± 0.50 | 57.88 ± 0.23 | 69.07 ± 0.43 |
| Food-101 | 2× | 83.78 ± 0.61 | 75.65 ± 0.26 | 84.21 ± 0.65 |
| | 5× | 73.36 ± 0.45 | 17.10 ± 0.17 | 79.12 ± 0.52 |
| | 10× | 0.99 ± 0.04 | 3.85 ± 0.09 | 76.95 ± 0.49 |
+
+Table 2: Top-1 accuracy (\%) comparisons of $L^2$ , $L^2$ -SP, LwF, AFDS on 6 datasets fine-tuned with their respective best pruning methods under various speed-up constraints.
+
+| Dataset | MACs reduction | L2 | L2-SP | LwF | AFDS |
| --- | --- | --- | --- | --- | --- |
| MIT Indoors 67 | 2× | 79.13 ± 0.16 | 78.09 ± 0.56 | 81.83 ± 0.35 | 82.05 ± 0.43 |
| | 5× | 64.02 ± 0.21 | 62.00 ± 0.31 | 69.38 ± 0.27 | 69.93 ± 0.52 |
| | 10× | 58.04 ± 0.38 | 58.49 ± 0.34 | 48.09 ± 0.52 | 66.72 ± 0.53 |
| Stanford Dogs 120 | 2× | 85.38 ± 0.67 | 87.21 ± 0.58 | 87.07 ± 0.35 | 87.41 ± 0.56 |
| | 5× | 70.20 ± 0.37 | 67.10 ± 0.31 | 73.44 ± 0.27 | 75.14 ± 0.52 |
| | 10× | 59.63 ± 0.23 | 42.89 ± 0.48 | 17.79 ± 0.50 | 70.70 ± 0.33 |
| Caltech-256-30 | 2× | 83.83 ± 0.62 | 83.67 ± 0.53 | 85.87 ± 0.38 | 85.15 ± 0.75 |
| | 5× | 61.45 ± 0.17 | 60.03 ± 0.21 | 66.57 ± 0.23 | 66.64 ± 0.32 |
| | 10× | 57.11 ± 0.54 | 56.12 ± 0.31 | 40.32 ± 0.34 | 61.45 ± 0.43 |
| Caltech-256-60 | 2× | 86.27 ± 0.47 | 85.84 ± 0.51 | 88.02 ± 0.45 | 87.15 ± 0.75 |
| | 5× | 71.02 ± 0.37 | 69.9 ± 0.31 | 73.95 ± 0.27 | 74.46 ± 0.52 |
| | 10× | 61.27 ± 0.49 | 39.41 ± 0.71 | 26.75 ± 0.50 | 70.16 ± 0.53 |
| CUB-200-2011 | 2× | 76.27 ± 0.37 | 75.58 ± 0.46 | 78.88 ± 0.65 | 78.03 ± 0.45 |
| | 5× | 66.48 ± 0.37 | 64.49 ± 0.31 | 73.44 ± 0.27 | 73.35 ± 0.52 |
| | 10× | 57.88 ± 0.23 | 57.13 ± 0.38 | 29.57 ± 0.31 | 69.07 ± 0.43 |
| Food-101 | 2× | 83.78 ± 0.61 | 82.27 ± 0.23 | 82.38 ± 0.85 | 84.21 ± 0.65 |
| | 5× | 73.36 ± 0.33 | 70.12 ± 0.71 | 73.05 ± 0.64 | 79.12 ± 0.52 |
| | 10× | 1.6 ± 0.04 | 3.56 ± 0.08 | 3.85 ± 0.09 | 76.95 ± 0.49 |
+
+For each pruned model, we can compute the number of multiply-accumulate operations (MACs) required to perform inference on an image. For each accelerated convolution with kernel size $k$ , the required number of MACs is $k^2 HWC_{\mathrm{in}}C_{\mathrm{out}}$ , where $C_{\mathrm{in}}$ and $C_{\mathrm{out}}$ are the numbers of input and output channels that are not pruned, respectively. We compute the total number of MACs by summing the MACs of all convolutions, residual connections, and the final pooling and FC layers. For AFS, since the channels to evaluate are selected dynamically during inference, we additionally add the overhead of the importance predictor layers to the total MAC count.
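The per-convolution count can be computed directly from the formula above; the layer shape below (a 3×3 convolution on 56×56 maps with 64 channels) is a hypothetical example:

```python
def conv_macs(k, h, w, c_in, c_out):
    # k^2 * H * W * C_in * C_out multiply-accumulates for one convolution
    return k * k * h * w * c_in * c_out

full = conv_macs(3, 56, 56, 64, 64)  # all channels kept
half = conv_macs(3, 56, 56, 32, 32)  # 50% of input and output channels kept
# pruning both sides by half cuts this convolution's MACs to a quarter
```

This quadratic effect is why channel pruning across consecutive layers compounds into large end-to-end MAC reductions.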
+
+Table 3: Comparison to related transfer learning methods.
+
+| Dataset | Method | Model | Accuracy | MACs |
| CUB-200-2011 | Zagoruyko & Komodakis (2017) | ResNet-34 | 73.5 | 3.6 G |
| ResNet-18 | 73.0 | 1.8 G |
| Jang et al. (2019) | ResNet-18 | 65.05 | 1.8 G |
| AFDS | ResNet-101 | 76.34 | 2.4 G |
| ResNet-101 | 73.35 | 1.9 G |
| MIT Indoors 67 | Zagoruyko & Komodakis (2017) | ResNet-34 | 74.0 | 3.6 G |
| ResNet-18 | 72.9 | 1.8 G |
| Jang et al. (2019) | ResNet-18 | 64.85 | 1.8 G |
| AFDS | ResNet-101 | 78.09 | 2.4 G |
| ResNet-101 | 74.57 | 1.9 G |
+
+
+(a) Stanford Dogs 120.
+
+
+(b) Caltech-256-60.
+Figure 3: MACs and accuracy (\%) trade-off comparisons among different joint methods.
+
+In Figure 3, we present the trade-off between the number of MACs and the target dataset accuracies for Stanford Dogs 120 and Caltech-256-60. It is clear that AFDS (ours) outperforms the various combinations of pruning methods (NS, SFP) and transfer learning methods $(L^2, L^2\text{-SP}, \mathrm{LwF})$ . The results for the remaining datasets can be found in Appendix B. The trade-off curves show that AFDS minimizes accuracy degradation: even with $47\%$ of the total MACs removed from the original model, AFDS incurred only a $1.83\%$ drop in accuracy on Stanford Dogs. In the extreme case where we permit only $\frac{1}{10}$ of the original computation, our method still manages a $70.70\%$ accuracy, substantially better than the other pruning algorithms: NS collapses to $1.33\%$ and SFP reaches only $59.63\%$ .
+
+Table 1 provides numerical comparisons of different pruning methods against AFS under various speed-up constraints, and Table 2 similarly compares transfer learning strategies against AFD. Under most acceleration requirements, the combined method, AFDS, achieves the best accuracies on the target datasets. Finally, Table 3 compares AFDS against other transfer learning methods in the literature; AFDS achieves state-of-the-art accuracies when compared to methods producing models with a similar number of MACs.
+
+# 5 CONCLUSION
+
+In this paper, we introduced attentive feature distillation and selection (AFDS), a dual-attention method that reaps the advantages of both transfer learning and channel pruning. By applying AFDS during fine-tuning, we not only learn a new model with higher target-task accuracy, but also further accelerate it by computing only a subset of channel neurons in each convolutional layer. Across a wide range of datasets, we demonstrated the smallest drop in validation accuracy under the same speed-up constraints when compared to traditional compression methods such as network slimming (Liu et al., 2017) and soft filter pruning (He et al., 2018).
+
+# ACKNOWLEDGEMENTS
+
+This work is supported in part by National Key R&D Program of China (No. 2019YFB2102100), Science and Technology Development Fund of Macao S.A.R (FDCT) under number 0015/2019/AKP, Shenzhen Discipline Construction Project for Urban Computing and Data Intelligence, the National Natural Science Foundation of China (Nos. 61806192, 61802387), Shenzhen Science and Technology Innovation Commission (No. JCYJ2017081853518789, JCYJ20190812160003719), the Guangdong Science and Technology Plan Guangdong-Hong Kong Cooperation Innovation Platform (No. 2018B050502009), and China's Post-doctoral Science Fund (No. 2019M663183).
+
+# REFERENCES
+
+Jose M Alvarez and Mathieu Salzmann. Learning the number of neurons in deep networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems (NIPS), pp. 2270-2278. 2016.
+Hossein Azizpour, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson. From generic to specific deep representations for visual recognition. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW '15, pp. 36-45, 2015.
+Tolga Bolukbasi, Joseph Wang, Ofer Dekel, and Venkatesh Saligrama. Adaptive neural networks for efficient inference. In Proceedings of the 34th International Conference on Machine Learning (ICML), pp. 527-536, 2017.
+Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 - mining discriminative components with random forests. In European Conference on Computer Vision, 2014.
+Zhaowei Cai and Nuno Vasconcelos. Cascade R-CNN: Delving into high quality object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6154-6162, 2018.
+Yves Chauvin. A back-propagation algorithm with optimal use of hidden units. In D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 1, pp. 519-526. Morgan-Kaufmann, 1989.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255, June 2009.
+Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In Eric P. Xing and Tony Jebara (eds.), Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pp. 647-655, Beijing, China, 22-24 Jun 2014. PMLR.
+Xin Dong, Shangyu Chen, and Sinno Pan. Learning to prune deep neural networks via layer-wise optimal brain surgeon. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 4857-4867. Curran Associates, Inc., 2017a.
+Xuanyi Dong, Junshi Huang, Yi Yang, and Shuicheng Yan. More is less: A more complicated network with less inference complexity. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017b.
+Michael Figurnov, Maxwell D. Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry Vetrov, and Ruslan Salakhutdinov. Spatially adaptive computation time for residual networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
+
+Xitong Gao, Yiren Zhao, Lukasz Dudziak, Robert Mullins, and Cheng-zhong Xu. Dynamic channel pruning: Feature boosting and suppression. In International Conference on Learning Representations (ICLR), 2019.
+Gregory Griffin, Alex Holub, and Pietro Perona. Caltech-256 object category dataset. Technical report, 2007.
+Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient DNNs. In Advances in Neural Information Processing Systems (NIPS), 2016.
+Babak Hassibi, David G. Stork, and Gregory Wolff. Optimal brain surgeon: Extensions and performance comparisons. In J. D. Cowan, G. Tesauro, and J. Alspector (eds.), Advances in Neural Information Processing Systems (NIPS), pp. 263-270. 1994.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV '15, pp. 1026-1034, Washington, DC, USA, 2015. IEEE Computer Society. ISBN 978-1-4673-8391-2. doi: 10.1109/ICCV.2015.123.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
+Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, and Yi Yang. Soft filter pruning for accelerating deep convolutional neural networks. In International Joint Conference on Artificial Intelligence (IJCAI), pp. 2234-2240, 2018.
+Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
+Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. In NIPS 2014 Deep Learning Workshop, 2014.
+Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, pp. 448-456. JMLR.org, 2015. URL http://dl.acm.org/citation.cfm?id=3045118.3045167.
+Yunhun Jang, Hankook Lee, Sung Ju Hwang, and Jinwoo Shin. Learning what and where to transfer. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 3030-3039, Long Beach, California, USA, 09-15 Jun 2019. PMLR.
+Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei. Novel dataset for fine-grained image categorization. In First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, June 2011.
+Yann LeCun, John S. Denker, and Sara A. Solla. Optimal brain damage. In Advances in Neural Information Processing Systems (NIPS), pp. 598-605. 1990.
+Xingjian Li, Haoyi Xiong, Hanchao Wang, Yuxuan Rao, Liping Liu, and Jun Huan. DELTA: Deep learning transfer using feature map with attention for convolutional networks. In International Conference on Learning Representations (ICLR), 2019.
+Xuhong Li, Yves Grandvalet, and Franck Davoine. Explicit inductive bias for transfer learning with convolutional networks. Thirty-fifth International Conference on Machine Learning, 2018.
+Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12):2935-2947, Dec 2018. ISSN 0162-8828. doi: 10.1109/TPAMI.2017.2773081.
+
+Ji Lin, Yongming Rao, Jiwen Lu, and Jie Zhou. Runtime neural pruning. In Advances in Neural Information Processing Systems (NIPS), pp. 2181-2191. 2017.
+Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. In International Conference on Computer Vision (ICCV), 2017.
+Jian-Hao Luo, Jianxin Wu, and Weiyao Lin. ThiNet: A filter level pruning method for deep neural network compression. In Proceedings of the IEEE international conference on computer vision, pp. 5058-5066, 2017.
+Michael C Mozer and Paul Smolensky. Skeletonization: A technique for trimming the fat from a network via relevance assessment. In D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 1, pp. 107-115. Morgan-Kaufmann, 1989.
+Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345-1359, 2009.
+Ariadna Quattoni and Antonio Torralba. Recognizing indoor scenes. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 413-420, June 2009. doi: 10.1109/CVPR.2009.5206537.
+Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: An astounding baseline for recognition. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW '14, pp. 512-519, Washington, DC, USA, 2014. IEEE Computer Society. ISBN 978-1-4799-4308-1. doi: 10.1109/CVPRW.2014.131.
+R. Reed. Pruning algorithms-a survey. IEEE Transactions on Neural Networks, 4(5):740-747, Sep. 1993. doi: 10.1109/72.248452.
+Mengye Ren, Andrei Pokrovsky, Bin Yang, and Raquel Urtasun. SBNet: Sparse blocks network for fast inference. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
+Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pp. 91-99, 2015.
+Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In International Conference on Learning Representations (ICLR), 2017.
+Mingxing Tan and Quoc Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, pp. 6105-6114, 2019.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017.
+C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
+B. P. Welford. Note on a method for calculating corrected sums of squares and products. Technometrics, 4(3):419-420, 1962. ISSN 00401706. URL http://www.jstor.org/stable/1266577.
+Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems (NIPS), pp. 2074-2082. 2016.
+
+Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S. Davis, Kristen Grauman, and Rogerio Feris. BlockDrop: Dynamic inference paths in residual networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
+Jianbo Ye, Xin Lu, Zhe Lin, and James Z Wang. Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers. In International Conference on Learning Representations (ICLR), 2018.
+Junho Yim, Donggyu Joo, Jihoon Bae, and Junmo Kim. A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7130-7138, July 2017. doi: 10.1109/CVPR.2017.754.
+Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 3320-3328. Curran Associates, Inc., 2014.
+Sergey Zagoruyko and Nikos Komodakis. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. In International Conference on Learning Representations (ICLR), 2017.
+Hengshuang Zhao, Xiaojuan Qi, Xiaoyong Shen, Jianping Shi, and Jiaya Jia. ICNet for real-time semantic segmentation on high-resolution images. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 405-420, 2018.
+Zhuangwei Zhuang, Mingkui Tan, Bohan Zhuang, Jing Liu, Yong Guo, Qingyao Wu, Junzhou Huang, and Jinhui Zhu. Discrimination-aware channel pruning for deep neural networks. In Advances in Neural Information Processing Systems, pp. 875-886, 2018.
+
+# A THE OVERALL TRAINING ALGORITHM
+
+In Algorithm 1 we illustrate the complete training procedure described above. Here, the function takes as input the target training dataset $\mathcal{D}$ , the source model $f$ and its parameters $\pmb{\theta}^{\star}$ , the total number of steps to fine-tune $S$ , the initial learning rate $\alpha$ , and the threshold hyperparameters $\delta_{\mathrm{s}}$ and $\delta_{\mathrm{m}}$ respectively for $\mathbf{s}_l$ and $\mathbf{m}_l$ . The function returns the optimized parameters $\pmb{\theta}$ for the target dataset, and both constant masks for all layers $\mathbf{s} = (\mathbf{s}_1, \mathbf{s}_2, \dots, \mathbf{s}_L)$ and $\mathbf{m} = (\mathbf{m}_1, \mathbf{m}_2, \dots, \mathbf{m}_L)$ . The function SGD then fine-tunes the model parameters. For each layer $l$ , we compute the mean $\pmb{\mu}_l$ and variance $\pmb{\sigma}_l$ statistics of $q_l(\pmb{x}_{l-1})$ , and use it to compute $\mathbf{s}_l$ .
+
+| Algorithm 1 Training Procedure |
+| --- |
+| 1: function AFDS(D, f, θ*, S, α, δs, δm) |
+| 2: for l ∈ L: sl ← 1 |
+| 3: for l ∈ L: ml ← 1 |
+| 4: θ ← SGD(D, f, θ*, s, m, ⌊S/2⌋, α, R) |
+| 5: for l ∈ L do |
+| 6: μl ← E(x,y)~D[ql(xl-1)] |
+| 7: σl² ← E(x,y)~D[(ql(xl-1) - μl)²] |
+| 8: pl ← E(x,y)~D[πl(xl-1) > 0] |
+| 9: sl ← σl² > δs |
+| 10: γl ← μl |
+| 11: ml ← pl > δm |
+| 12: end for |
+| 13: θ ← SGD(D, f, θ, s, m, ⌊S/2⌋, α/10, R) |
+| 14: return θ, s, m |
+| 15: end function |
+
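The statistics-and-masking step (lines 5-12 of Algorithm 1) can be sketched in NumPy as follows; the function name and array shapes are illustrative assumptions, since the text does not fix an implementation:

```python
import numpy as np

def afds_masks(q_out, pi_out, delta_s, delta_m):
    """Per-layer mask computation of Algorithm 1 (lines 5-12).

    q_out:  (num_examples, num_channels) activations q_l(x_{l-1}) of layer l,
            collected over the target dataset D.
    pi_out: (num_examples, num_channels) pre-activations pi_l(x_{l-1}).
    Returns mu_l (used to initialize gamma_l) and the constant masks s_l, m_l.
    """
    mu = q_out.mean(axis=0)          # line 6: mean over D
    var = q_out.var(axis=0)          # line 7: variance over D
    p = (pi_out > 0).mean(axis=0)    # line 8: fraction of positive pre-activations
    s = var > delta_s                # line 9: keep channels with enough variance
    m = p > delta_m                  # line 11: keep frequently active channels
    return mu, s, m
```

The two SGD calls around it would then fine-tune with these masks held constant, first at learning rate α and then at α/10.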
+
+# B ADDITIONAL RESULTS
+
+Figure 4: MACs and accuracy (%) trade-off comparisons among different joint methods on (a) MIT Indoors 67, (b) Food-101, (c) Caltech-UCSD Birds-200-2011, and (d) Caltech-256-30.
\ No newline at end of file
diff --git a/payattentiontofeaturestransferlearnfastercnns/images.zip b/payattentiontofeaturestransferlearnfastercnns/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..8991ce1f6195c0b7ddceb775659880c5c21f38d4
--- /dev/null
+++ b/payattentiontofeaturestransferlearnfastercnns/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c408673150475fab78886113912ea165bcd2bb91741ddee16e48895a53f62f73
+size 570423
diff --git a/payattentiontofeaturestransferlearnfastercnns/layout.json b/payattentiontofeaturestransferlearnfastercnns/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..460f6fb5cca920e5fbab945c09867820617a1dbe
--- /dev/null
+++ b/payattentiontofeaturestransferlearnfastercnns/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:148696cb1e262eb27508c4974ab28efe331b61f6a5b975e5679c9bbb68c5e8d8
+size 468599
diff --git a/pcmcnetfeaturebasedpairwisechoicemarkovchains/49d0fede-5c85-4033-96d0-dd21a4b511f4_content_list.json b/pcmcnetfeaturebasedpairwisechoicemarkovchains/49d0fede-5c85-4033-96d0-dd21a4b511f4_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2de8eae83babd31bc558f2d2ea030d4f8ddf48af
--- /dev/null
+++ b/pcmcnetfeaturebasedpairwisechoicemarkovchains/49d0fede-5c85-4033-96d0-dd21a4b511f4_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:745f6711301889226b56c7826296891105f5a84a9008fd7e8910a18b7b3e0776
+size 70708
diff --git a/pcmcnetfeaturebasedpairwisechoicemarkovchains/49d0fede-5c85-4033-96d0-dd21a4b511f4_model.json b/pcmcnetfeaturebasedpairwisechoicemarkovchains/49d0fede-5c85-4033-96d0-dd21a4b511f4_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8e3522b96d86e252996f017e50d018f02831b748
--- /dev/null
+++ b/pcmcnetfeaturebasedpairwisechoicemarkovchains/49d0fede-5c85-4033-96d0-dd21a4b511f4_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a21fabc06a48b2d9162f479719255c3f0b57704fbd6262228ba27d63602e17a5
+size 85621
diff --git a/pcmcnetfeaturebasedpairwisechoicemarkovchains/49d0fede-5c85-4033-96d0-dd21a4b511f4_origin.pdf b/pcmcnetfeaturebasedpairwisechoicemarkovchains/49d0fede-5c85-4033-96d0-dd21a4b511f4_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9fc0c2020b8f1ddf9bb988af674ec712697378f3
--- /dev/null
+++ b/pcmcnetfeaturebasedpairwisechoicemarkovchains/49d0fede-5c85-4033-96d0-dd21a4b511f4_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:39c0da855a66e96da22536380b2cd87d1dd493a79de7bca01e323ffb9b4c9942
+size 448359
diff --git a/pcmcnetfeaturebasedpairwisechoicemarkovchains/full.md b/pcmcnetfeaturebasedpairwisechoicemarkovchains/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..7a65ff67193bc31dc9ad2be075df87ad70cd59c8
--- /dev/null
+++ b/pcmcnetfeaturebasedpairwisechoicemarkovchains/full.md
@@ -0,0 +1,312 @@
+# PCMC-NET: FEATURE-BASED PAIRWISE CHOICE MARKOV CHAINS
+
+Alix Lheritier
+
+Amadeus SAS
+
+F-06902 Sophia-Antipolis, France
+
+alix.lheritier@amadeus.com
+
+# ABSTRACT
+
+Pairwise Choice Markov Chains (PCMC) have been recently introduced to overcome limitations of choice models based on traditional axioms that are unable to express empirical observations from modern behavioral economics, such as context effects, which occur when a choice between two options is altered by adding a third alternative. The inference approach that estimates the transition rates between each possible pair of alternatives via maximum likelihood suffers when the examples of each alternative are scarce, and is inappropriate when new alternatives can be observed at test time. In this work, we propose an amortized inference approach for PCMC by embedding its definition into a neural network that represents transition rates as a function of the alternatives' and individual's features. We apply our construction to the complex case of airline itinerary booking, where singletons are common (due to varying prices and individual-specific itineraries), and where context effects and behaviors that strongly depend on market segments are observed. Experiments show our network significantly outperforming, in terms of prediction accuracy and logarithmic loss, feature-engineered standard and latent class Multinomial Logit models as well as recent machine learning approaches.
+
+# 1 INTRODUCTION
+
+Choice modeling aims at finding statistical models capturing the human behavior when faced with a set of alternatives. Classical examples include consumer purchasing decisions, choices of schooling or employment, and commuter choices for modes of transportation among available options. Traditional models are based on different assumptions about human decision making, e.g. Thurstone's Case V model (Thurstone, 1927) or Bradley-Terry-Luce (BTL) model (Bradley & Terry, 1952). Nevertheless, in complex scenarios, like online shopping sessions presenting numerous alternatives to user-specific queries, these assumptions are often too restrictive to provide accurate predictions.
+
+Formally, there is a universe of alternatives $U$ , possibly infinite. In each choice situation, some finite choice set $S \subseteq U$ is considered. A choice model is a distribution over the alternatives of a given choice set $S$ , where the probability of choosing the item $i$ among $S$ is denoted as $P_S(i)$ . These models can be further parameterized by the alternatives' features and by those of the individual making the choice.
+
+An important class of choice models is the Multinomial Logit (MNL), a generalization of the BTL model—defined for pairwise choices only—to larger sets. MNL models satisfy Luce's axiom, also known as independence of irrelevant alternatives (Luce, 1959), which states that the probability of selecting one alternative over another from a set of many alternatives is not affected by the presence or absence of other alternatives in the set. Moreover, any model satisfying Luce's axiom is equivalent to some MNL model (Luce, 1977). Equivalently, the probability of choosing some item $i$ from a given set $S$ can be expressed as $P_{S}(i) = w_{i} / \sum_{j\in S}w_{j}$ where $w_{i}$ is the latent value of the item $i$ . Luce's axiom implies stochastic transitivity i.e. if $P(a\triangleright b)\geq 1 / 2$ and $P(b\triangleright c)\geq 1 / 2$ , then $P(a\triangleright c)\geq \max \left(P(a\triangleright b),P(b\triangleright c)\right)$ where $P(i\triangleright j)\equiv P_{\{i,j\}}(i)$ (Luce, 1977). Stochastic transitivity implies the necessity of a total order across all elements and also prevents the model from expressing cyclic preference situations like the stochastic rock-paper-scissors game described in Section 3.2. Thurstone's Case V model exhibits strict stochastic transitivity but does not satisfy Luce's axiom
+
+(Adams & Messick, 1958). Luce's axiom and stochastic transitivity are strong assumptions that often do not hold for empirical choice data (see (Ragain & Ugander, 2016) and references therein). For example, Luce's axiom prevents models from expressing context effects like the attraction effect (also known as asymmetric dominance or decoy effect), the similarity effect and the compromise effect. The attraction effect occurs when two alternatives are augmented with an asymmetrically dominated one (i.e., a new option that is inferior in all aspects with respect to one option, but inferior in only some aspects and superior in other aspects with respect to the other option) and the probability of selecting the better, dominant alternative increases (Huber et al., 1982). The similarity effect arises from the introduction of an alternative that is similar to, and competitive with, one of the original alternatives, and causes a decrease in the probability of choosing the similar alternative (Tversky, 1972). The compromise effect occurs when there is an increase in the probability of choosing an alternative that becomes the intermediate option when a third extreme option is introduced (Simonson, 1989). Examples of these effects are visualized in Section 4.
+
+A larger class of models is the one of Random Utility Models (RUM) (Block & Marschak, 1960; Manski, 1977), which includes MNL but also other models satisfying neither Luce's axiom nor stochastic transitivity. This class associates with each $i \in U$ a random variable $X_{i}$ and defines for each subset $S \subseteq U$ the probability $P_{S}(i) = P(X_{i} \geq X_{j}, \forall j \in S)$ . RUM exhibits regularity i.e. if $A \subseteq B$ then $P_{A}(x) \geq P_{B}(x)$ . Regularity also prevents models from expressing context effects (Huber et al., 1982). The class of Nested MNL (McFadden, 1980) can express RUM models but also others that do not obey regularity. Nevertheless, inference is practically difficult for Nested MNL models.
+
+Recently, a more flexible class of models called Pairwise Choice Markov Chains has been introduced in Ragain & Ugander (2016). This class includes MNL but also other models that satisfy neither Luce's axiom, nor stochastic transitivity, nor regularity. This class defines the choice distribution as the stationary distribution of a continuous time Markov chain defined by some transition rate matrix. Still, it satisfies a weakened version of Luce's axiom called uniform expansion stating that if we add "copies" (with no preference between them), the probability of choosing one element of the copies is invariant to the number of copies. Although the flexibility of this class is appealing, the proposed inference is based on maximizing the likelihood of the rate matrix for the observed choices which is prone to overfitting when the number of observations for each possible alternative is small and is inappropriate when new alternatives can be seen at test time.
+
+Alternatives and individuals making choices can be described by a set of features that can then be used to understand their impact on the choice probability. A linear-in-features MNL assumes that the latent value is given by a linear combination of the parameters of the alternatives and the individual. Features of the individual can be taken into account by these models, but inference suffers from data scarcity and is inappropriate when new alternatives can be seen at test time. The latent class MNL (LC-MNL) model (Greene & Hensher, 2003) takes into account individual heterogeneity by using a Bayesian mixture over different latent classes—whose number must be specified—in which homogeneity and linearity are assumed. A linear-in-features parameterization of PCMC, for features in $\mathbb{R}^d$ , was suggested in (Ragain & Ugander, 2016, Appendix) but requires fitting a weight matrix of size $|U| \times d$ , which makes it scale poorly and does not allow predicting unseen alternatives. In this work, we propose an amortized inference approach for PCMC in the sense that the statistical parameters are reused for any pair of alternatives and their number is thus independent of the size of the universe. In addition, we allow non-linear modeling by using a neural network.
+
+In complex cases like airline itinerary choice, where the alternatives are strongly dependent on an individual-specific query and some features, like price, can be dynamic, the previous approaches have limited expressive power or are inappropriate. Two recently introduced methods allow complex feature handling for alternatives and individuals. Mottini & Acuna-Agost (2017) propose a recurrent neural network method consisting in learning to point, within a sequence of alternatives, to the chosen one. This model is appealing because of its feature learning capability, but neither its choice-theoretic properties nor its dependence on the order of the sequence have been studied. Lheritier et al. (2019) propose to train a Random Forest classifier to predict whether an alternative is going to be chosen or not, independently of the rest of the alternatives of the choice set. This approach does not take into account the fact that in each choice set exactly one alternative is chosen. For this reason, the probabilities provided by the model are only used as scores to rank the alternatives, which can be interpreted as latent values—making it essentially equivalent to a non-linear MNL. To escape this limitation and make the latent values dependent on the choice set, relative features are added (e.g. the
+
+price of the $i$ -th alternative is converted to $\mathrm{price}_i / \min_{j \in S} \mathrm{price}_j$ ). The non-parametric nature of this model is appealing but its choice-theoretic properties have not been studied either.
+
+In this work, we propose to endow PCMC with neural-network-based feature handling, therefore enjoying both the good theoretical properties of PCMC and the complex feature handling of the aforementioned neural network based and non-parametric methods. This neural network parameterization of PCMC makes the inference amortized, allowing it to handle universes of large (and even infinite) size, as demonstrated by the experiments on airline itinerary choice modeling in Section 5.
+
+# 2 BACKGROUND: PAIRWISE CHOICE MARKOV CHAINS
+
+# 2.1 DEFINITION
+
+A Pairwise Choice Markov Chain (PCMC) (Ragain & Ugander, 2016) defines the choice probability $P_{S}(i)$ as the probability mass on the alternative $i \in S$ of the stationary distribution of a continuous time Markov chain (CTMC) whose set of states corresponds to $S$ . The model's parameters are the off-diagonal entries $q_{ij} \geq 0$ of a rate matrix $Q$ indexed by pairs of elements in $U$ . Given a choice set $S$ , the choice distribution is the stationary distribution of the continuous time Markov chain given by the matrix $Q_{S}$ obtained by restricting the rows and columns of $Q$ to elements in $S$ and setting $q_{ii} = -\sum_{j \in S \setminus i} q_{ij}$ for each $i \in S$ . Therefore, the distribution $P_{S}$ is parameterized by the $|S|(|S| - 1)$ transition rates of $Q_{S}$ .
+
+The constraint
+
+$$
+q _ {i j} + q _ {j i} > 0 \tag {1}
+$$
+
+is imposed in order to guarantee that the chain has a single closed communicating class, which implies the existence and uniqueness of the stationary distribution $\pi_S$ (see, e.g., Norris (1997)) obtained by solving
+
+$$
+\left\{ \begin{array}{l} \pi_ {S} Q _ {S} = \mathbf {0} \\ \pi_ {S} \mathbf {1} ^ {T} = 1 \end{array} \right. \tag {2}
+$$
+
+where $\mathbf{0}$ and $\mathbf{1}$ are row vectors of zeros and ones, respectively. Since any column of $Q_S$ is the opposite of the sum of the rest of the columns, it is equivalent to solve
+
+$$
+\pi_S Q_S^{\prime} = \left[\, \mathbf{0} \mid 1 \,\right] \tag{3}
+$$
+
+where $Q_S^\prime \equiv \left[ \begin{array}{cc}((Q_S)_{ij})_{1\leq i\leq |S|,1\leq j < |S|} & \mid \mathbf{1}^T \end{array} \right].$
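Concretely, Eq. 3 can be solved with a dense linear solver. The following sketch (with a hypothetical function name) builds $Q_S$ from the off-diagonal rates and recovers the choice distribution:

```python
import numpy as np

def pcmc_choice_probs(q, S):
    """Choice distribution P_S obtained by solving Eq. 3.

    q: (n, n) matrix of off-diagonal transition rates q_ij (diagonal ignored).
    S: indices of the alternatives forming the choice set.
    """
    qs = q[np.ix_(S, S)].astype(float)
    np.fill_diagonal(qs, 0.0)
    qs -= np.diag(qs.sum(axis=1))                        # q_ii = -sum_{j != i} q_ij
    a = np.column_stack([qs[:, :-1], np.ones(len(S))])   # Q'_S: last column set to 1
    rhs = np.zeros(len(S)); rhs[-1] = 1.0
    return np.linalg.solve(a.T, rhs)                     # solves pi_S Q'_S = [0 | 1]
```

For instance, with the rock-paper-scissors rates of Eq. 12 (Section 3.2) and $\alpha = 0.8$, the full-set distribution is uniform while each pairwise contest is won with probability $0.8$ by the dominating alternative; the losing alternative then has pairwise probability $0.2 < 1/3$, violating regularity.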
+
+# 2.2 PROPERTIES
+
+In Ragain & Ugander (2016), it is shown that PCMC can represent any MNL model, but also models that are non-regular and do not satisfy stochastic transitivity (using the rock-paper-scissors example of Section 3.2).
+
+In the classical red bus/blue bus example (see, e.g., Train (2009)), the color of the bus is irrelevant to the preference for the transportation mode "bus" with respect to the "car" mode. Nevertheless, MNL models reduce the probability of choosing the "car" mode when color variants of buses are added, which does not match empirical behavior. PCMC models can capture such situations thanks to a property termed contractibility, which intuitively means that we can "contract" subsets $A_{i} \subseteq U$ to a single "type" when the probability of choosing an element of $A_{i}$ is independent of the pairwise probabilities between elements within the subsets. Formally, a partition of $U$ into non-empty sets $A_{1},\ldots ,A_{k}$ is a contractible partition if $q_{a_i a_j} = \lambda_{ij}$ for all $a_{i} \in A_{i}, a_{j} \in A_{j}$ for some $\Lambda = \{\lambda_{ij}\}$ with $i,j \in \{1,\dots ,k\}$ . Then, the following proposition is shown.
+
+Proposition 1 (Ragain & Ugander (2016)). For a given $\Lambda$ , let $A_{1},\ldots ,A_{k}$ be a contractible partition for two PCMC models on $U$ represented by $Q,Q^{\prime}$ with stationary distributions $\pi ,\pi^{\prime}$ . Then, for any $A_{i}$ ,
+
+$$
+\sum_ {j \in A _ {i}} P _ {U} (j) = \sum_ {j \in A _ {i}} P _ {U} ^ {\prime} (j).
+$$
+
+It is then shown that contractibility implies uniform expansion, formally defined as follows.
+
+Definition 1 (Uniform Expansion). Consider a choice between $n$ elements in a set $S^{(1)} = \{i_{11},\ldots ,i_{n1}\}$ , and another choice from a set $S^{(k)}$ containing $k$ copies of each of the $n$ elements: $S^{(k)} = \{i_{11},\dots ,i_{1k},i_{21},\dots ,i_{2k},\dots ,i_{n1},\dots ,i_{nk}\}$ . The axiom of uniform expansion states that for each $m\in \{1,\dots ,n\}$ and all $k\geq 1$
+
+$$
+P _ {S ^ {(1)}} (i _ {m 1}) = \sum_ {j = 1} ^ {k} P _ {S ^ {(k)}} (i _ {m j}).
+$$
+
+# 2.3 INFERENCE
+
+Given a dataset $\mathcal{D}$ , the inference method proposed in Ragain & Ugander (2016) consists in maximizing the log likelihood of the rate matrix $Q$ indexed by $U$
+
+$$
+\log \mathcal {L} (Q; \mathcal {D}) = \sum_ {S \subseteq U} \sum_ {i \in S} C _ {i S} (\mathcal {D}) \log \left(P _ {S} ^ {Q} (i)\right) \tag {4}
+$$
+
+where $P_S^Q (i)$ denotes the probability that $i$ is selected from $S$ as a function of $Q$ and $C_{iS}(\mathcal{D})$ denotes the number of times in the data that $i$ was chosen out of set $S$ .
+
+This optimization is difficult since there is no general closed-form expression for $P_S^Q (i)$ and the implicit definition also makes it difficult to derive gradients of $\log \mathcal{L}$ with respect to the parameters $q_{ij}$ . The authors propose to use Sequential Least Squares Programming (SLSQP) to maximize $\log \mathcal{L}(Q;\mathcal{D})$ , which is nonconcave in general. However, in their experiments, they encounter numerical instabilities leading to violations $(q_{ij} + q_{ji} = 0)$ of the PCMC definition, which were solved with additive smoothing at the cost of some efficacy of the model. In addition, when the examples of each alternative are scarce, as in the application of Section 5, this inference approach is prone to severe overfitting and is inappropriate for predicting unseen alternatives. These two drawbacks motivate the amortized inference approach we introduce next.
+
+# 3 PCMC-NET
+
+We propose an amortized inference approach for PCMC based on a neural network architecture called PCMC-Net that uses the alternatives' and the individual's features to determine the transition rates and can be trained using standard stochastic gradient descent techniques.
+
+# 3.1 ARCHITECTURE
+
+**Input layer** For PCMC, the choice sets $S$ were defined as sets of indices. For PCMC-Net, since it is feature-based, the choice sets $S$ are defined as sets of tuples of features. Let $S_{i}$ be the tuple of features of the $i$ -th alternative of the choice set $S$ belonging to a given feature space $\mathcal{F}_{a}$ and $I$ be the tuple of the individual's features belonging to a given feature space $\mathcal{F}_{0}$ . The individual's features are allowed to be an empty tuple.
+
+**Representation layer** The first layer is composed of a representation function for the alternatives' features
+
+$$
+\rho_ {w _ {a}}: \mathcal {F} _ {a} \rightarrow \mathbb {R} ^ {d _ {a}} \tag {5}
+$$
+
+and a representation function for the individual's features
+
+$$
+\rho_ {w _ {0}}: \mathcal {F} _ {0} \rightarrow \mathbb {R} ^ {d _ {0}} \tag {6}
+$$
+
+where $w_0$ and $w_a$ are the sets of weights parameterizing them and $d_0, d_a \in \mathbb{N}$ are hyperparameters. These functions can include, e.g., embedding layers for categorical variables, a convolutional network for images or text, etc., depending on the inputs' types.
+
+**Cartesian product layer** In order to build the transition rate matrix, all pairs of different alternatives need to be considered; this is accomplished by computing the cartesian product
+
+$$
+\left\{\rho_ {w _ {a}} \left(S _ {1}\right), \dots , \rho_ {w _ {a}} \left(S _ {| S |}\right) \right\} \times \left\{\rho_ {w _ {a}} \left(S _ {1}\right), \dots , \rho_ {w _ {a}} \left(S _ {| S |}\right) \right\}. \tag {7}
+$$
+
+
+Figure 1: PCMC-Net. $\times$ denotes the cartesian product and $\oplus$ vector concatenation.
+
+The combinations of embedded alternatives are concatenated together with the embedded features of the individual, i.e.
+
+$$
+R _ {i j} \equiv \rho_ {w _ {0}} (I) \oplus \rho_ {w _ {a}} \left(S _ {i}\right) \oplus \rho_ {w _ {a}} \left(S _ {j}\right) \tag {8}
+$$
+
+where $\oplus$ denotes vector concatenation.
+
+**Transition rate layer** The core component is a model of the transition rate $(Q_{S})_{ij}, i \neq j$ :
+
+$$
+\hat {q} _ {i j} \equiv \max \left(0, f _ {w _ {q}} \left(R _ {i j}\right)\right) + \epsilon \tag {9}
+$$
+
+where $f_{w_q}$ consists of multiple fully connected layers parameterized by a set of weights $w_q$ and $\epsilon > 0$ is a hyperparameter. Notice that taking the maximum with 0 and adding $\epsilon$ guarantees non-negativity and the condition of Eq. 1. The transition rate matrix $\hat{Q}$ is then obtained as follows:
+
+$$
+\hat{Q}_{ij} \equiv \begin{cases} \hat{q}_{ij} & \text{if } i \neq j \\ -\sum_{k \neq i} \hat{q}_{ik} & \text{otherwise} \end{cases} \tag{10}
+$$
+
+**Stationary distribution layer** The choice probabilities correspond to the stationary distribution $\hat{\pi}$ that is guaranteed to exist and be unique by the condition of Eq. 1 and can be obtained by solving the system
+
+$$
+\hat {\pi} \left[ \left(\hat {Q} _ {i j}\right) _ {1 \leq i \leq | S |, 1 \leq j < | S |} \mid \mathbf {1} ^ {T} \right] = [ \mathbf {0} \mid 1 ] \tag {11}
+$$
+
+by, e.g., partially-pivoted LU decomposition which can be differentiated with automatic differentiation.
+
+The whole network is represented in Fig. 1.
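A minimal PyTorch sketch of this forward pass, restricted to the alternatives' features (no individual's features) and with illustrative layer sizes, could look as follows:

```python
import torch
import torch.nn as nn

class PCMCNet(nn.Module):
    """Minimal sketch of PCMC-Net: representation, pairing, rates, stationary dist."""

    def __init__(self, n_features, d_a=8, hidden=16, eps=0.5):
        super().__init__()
        self.eps = eps
        self.rho_a = nn.Linear(n_features, d_a)           # representation layer (Eq. 5)
        self.f_q = nn.Sequential(                         # transition rate model (Eq. 9)
            nn.Linear(2 * d_a, hidden), nn.LeakyReLU(0.01), nn.Linear(hidden, 1))

    def forward(self, S):
        # S: (n, n_features) tensor with the features of the n alternatives.
        n = S.shape[0]
        r = self.rho_a(S)
        # Cartesian product layer: R_ij = rho(S_i) concat rho(S_j) for all pairs (Eqs. 7-8).
        pairs = torch.cat([r.unsqueeze(1).expand(n, n, -1),
                           r.unsqueeze(0).expand(n, n, -1)], dim=-1)
        q = torch.clamp(self.f_q(pairs).squeeze(-1), min=0.0) + self.eps  # Eq. 9
        q = q - torch.diag(q.diagonal())                  # keep off-diagonal rates only
        Q = q - torch.diag(q.sum(dim=1))                  # q_ii = -sum_{j != i} q_ij (Eq. 10)
        # Stationary distribution layer: solve pi [Q[:, :-1] | 1] = [0 | 1] (Eq. 11).
        A = torch.cat([Q[:, :-1], torch.ones(n, 1)], dim=1)
        rhs = torch.zeros(n)
        rhs[-1] = 1.0
        return torch.linalg.solve(A.transpose(0, 1), rhs)
```

The linear solve is differentiable, so the whole pipeline trains end to end; the full model additionally embeds categorical inputs and concatenates the individual's representation into each $R_{ij}$.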
+
+# 3.2 PROPERTIES
+
+**Non-regularity** As shown in Ragain & Ugander (2016), non-regular models can be obtained with certain rate matrices. For example, the stochastic rock-paper-scissors game can be described by a non-regular model obtained with the following transition rate matrix with $\frac{1}{2} < \alpha \leq 1$ :
+
+$$
+Q = \left[ \begin{array}{c c c} - 1 & 1 - \alpha & \alpha \\ \alpha & - 1 & 1 - \alpha \\ 1 - \alpha & \alpha & - 1 \end{array} \right]. \tag {12}
+$$
+
+PCMC-Net can represent such a model by setting the following design parameters. In this case, the individual's features correspond to an empty tuple yielding an empty vector as representation. By setting $\rho_{w_a}$ to a one-hot representation of the alternative (thus $d_a = 3$ ), a fully connected network $f_{w_q}$ consisting of one neuron (i.e. six coefficients and one bias) is enough to represent this matrix since six combinations of inputs are of interest.
+
+**Non-parametric limit** More generally, the following theorem shows that any PCMC model can be arbitrarily well approximated by PCMC-Net.
+
+Theorem 1. If $\rho_{w_a}$ and $f_{w_q}$ are given enough capacity, PCMC-Net can approximate any PCMC model arbitrarily well.
+
+Proof. A PCMC model jointly specifies a family of distributions $\pi_S$ for each $S \in 2^U$ obtained from subsets of a single rate matrix $Q$ indexed by $U$ . PCMC-Net forces the transition rates to be at least $\epsilon$ , whereas the PCMC definition allows any $q_{ij} \geq 0$ as long as $q_{ij} + q_{ji} > 0$ ; it is therefore sufficient to prove that rate matrices with entries at least $\epsilon$ can approximate any PCMC model arbitrarily well. Since multiplying all the entries of a rate matrix by some $c > 0$ does not affect the stationary distribution of the corresponding CTMC, let us consider, without loss of generality, an arbitrary PCMC model given by a transition rate matrix $Q^\star$ whose entries are either at least $\epsilon$ or zero, and let $\pi^\star$ be its stationary distribution. Then, let us consider the matrix $Q(\epsilon, c)$ obtained by replacing the null entries of $Q^\star$ by $\epsilon$ and by multiplying the non-null entries by some $c > 0$ , and let $\pi(\epsilon, c)$ be its stationary distribution. Since, by Cramer's rule, the entries of the stationary distribution are continuous functions of the entries of the rate matrix, for any $\delta > 0$ there exists $c(\delta) > 0$ such that $|\pi(\epsilon, c(\delta)) - \pi^\star| < \delta$ .
+
+Since deep neural networks are universal function approximators (Hornik et al., 1989), PCMC-Net can represent any $Q(\epsilon, c)$ arbitrarily well if enough capacity is given to the network, which completes the proof.
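The continuity argument can be illustrated numerically: replacing the null rates of an (arbitrary, illustrative) $Q^\star$ by $\epsilon$ and scaling its non-null rates by a growing $c$ drives the stationary distribution back toward $\pi^\star$:

```python
import numpy as np

def stationary(q):
    """Stationary distribution for an off-diagonal rate matrix q (cf. Eq. 3)."""
    n = q.shape[0]
    Q = q - np.diag(q.sum(axis=1))
    a = np.column_stack([Q[:, :-1], np.ones(n)])
    rhs = np.zeros(n); rhs[-1] = 1.0
    return np.linalg.solve(a.T, rhs)

eps = 0.5
q_star = np.array([[0., 2., 0.],   # one null rate: q_13 = 0
                   [1., 0., 1.],
                   [2., 1., 0.]])
pi_star = stationary(q_star)
# Q(eps, c): null entries replaced by eps, non-null entries scaled by c.
errors = []
for c in (1.0, 10.0, 1000.0):
    q_c = np.where(q_star > 0, c * q_star, eps)
    np.fill_diagonal(q_c, 0.0)
    errors.append(np.abs(stationary(q_c) - pi_star).max())
# The approximation error shrinks as c grows.
```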
+
+**Contractibility** Let $Q, Q'$ be the rate matrices obtained after the transition rate layer of two different PCMC-Nets on a finite universe of alternatives $U$ . Then, Proposition 1 can be applied. Regarding uniform expansion, when copies are added to a choice set, their transition rates to the other elements of the choice set will be identical since they only depend on their features. Therefore, PCMC-Net allows uniform expansion.
+
+# 3.3 INFERENCE
+
+The logarithmic loss is used to assess the predicted choice distribution $\hat{\pi}$ given by the model parameterized by $w\equiv w_0\cup w_a\cup w_q$ on the input $(I,S)$ against the index of actual choice $Y_{S}$ ,
+
+$$
+\operatorname{loss}(w, I, S, Y_S) \equiv -\log \hat{\pi}_{Y_S}. \tag{13}
+$$
+
+Training can be performed using stochastic gradient descent with dropout to avoid overfitting. Unlike the original inference approach, this procedure is numerically stable.
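A sketch of one such gradient step is given below; the stand-in model (a linear MNL over latent values) is only there to make the example runnable, and any module mapping a session's alternatives to a choice distribution, such as PCMC-Net, fits:

```python
import torch
import torch.nn as nn

def training_step(model, optimizer, S, y):
    """One stochastic gradient step on the logarithmic loss (Eq. 13) for one session.

    S: (n, d) tensor with the features of the session's n alternatives.
    y: index of the alternative that was actually chosen.
    """
    pi = model(S)                    # predicted choice distribution over S
    loss = -torch.log(pi[y])         # negative log-likelihood of the observed choice
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Stand-in model just to exercise the step.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 1), nn.Flatten(0), nn.Softmax(dim=0))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
S, y = torch.randn(6, 4), 2
losses = [training_step(model, opt, S, y) for _ in range(50)]
```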
+
+# 4 EXPERIMENTS ON SYNTHETIC DATA WITH CONTEXT EFFECTS
+
+To illustrate the ability of PCMC and PCMC-Net models to capture context effects, we simulate choices using the multiattribute linear ballistic accumulator (MLBA) model of Trueblood et al. (2014), which is able to represent attraction, similarity and compromise effects. MLBA models choice as a process where independent accumulators are associated with each alternative and race toward a threshold. The alternative whose accumulator reaches the threshold first is selected. The speed of each accumulator is determined by a number of parameters modeling human psychology, notably weights that determine the attention paid to each comparison and a curvature parameter determining a function that gives the subjective value of each alternative from its objective value.
+
+We consider the example of (Trueblood et al., 2014, Figure 7) reproduced in Figure 2(i) where a choice set of two fixed alternatives $\{a,b\}$ is augmented with a third option $c$ and the preference of $a$ over $b$ , i.e. $\frac{P_{\{a,b,c\}}(a)}{P_{\{a,b,c\}}(a) + P_{\{a,b,c\}}(b)}$ , is computed. The model considers two attributes such that higher values are more preferable. The two original alternatives are $a = (4,6)$ , $b = (6,4)$ . For example, these can correspond to two different laptops with RAM capacity and battery life as attributes, and $c$ is a third laptop choice influencing the subjective value of $a$ with respect to $b$ .
+
+In order to generate synthetic choice sets, we uniformly sample the coordinates of $c$ in $[1,9]^2$ . Then, we compute the choice distribution given by the MLBA model that is used to sample the choice.
+
+We instantiate PCMC-Net with an identity representation layer with $d_{a} = 4$ and $d_{0} = 0$ and a transition rate layer with $h \in \{1,2,3\}$ hidden layers of $\nu = 16$ nodes with Leaky ReLU activation
+(slope = 0.01) and $\epsilon = 0.5$ . In order to use the original PCMC model, we discretize the attributes of the third option using 8 bins on each attribute, obtaining 64 different alternatives in addition to the identified $a$ and $b$ . We also compare to a linear-in-features MNL model.
+
+Figure 2: Preference for $a$ over $b$ for different attributes (coordinates of each pixel) of the third alternative $c$ : (i) ground truth; (ii) MNL; (iii) PCMC; (iv) PCMC-Net $3 \times 16$ . Lighter shades indicate higher preference for $a$ . Models were trained on 20000 choice sets. In panel (i), the points $c_{A}$ , $c_{S}$ and $c_{C}$ show examples of, respectively, attraction, similarity and compromise effects with respect to $a$ .
+
+Figure 2 shows how the different models represent the preference for $a$ over $b$ . Table 1 shows a Monte Carlo estimation of the expected Kullback-Leibler divergence comparing each model $\hat{P}$ to the true MLBA model $P$
+
+$$
+\mathbb {E} _ {c \sim \mathcal {U} ([ 1, 9 ] ^ {2})} [ D _ {\mathrm {K L}} (P \| \hat {P}) ] = \frac {1}{6 4} \int_ {[ 1, 9 ] ^ {2}} \sum_ {i \in \{a, b, c \}} P _ {\{a, b, c \}} (i) \log \frac {P _ {\{a , b , c \}} (i)}{\hat {P} _ {\{a , b , c \}} (i)} d c. \tag {14}
+$$
+
+As expected, MNL is unable to represent context effects due to the independence of irrelevant alternatives. After discretization, the original PCMC provides a good approximation of the MLBA model. Nevertheless, it is difficult to refine it further since the number of statistical parameters grows biquadratically with the number of bins for each feature. As shown in Table 1, the amortized inference approach of PCMC-Net allows a better approximation of the original model with significantly fewer statistical parameters than the discretized PCMC.
+
+Table 1: Monte Carlo estimate of the expected KL divergence between the different models and the true MLBA model. A set of 10000 points $c$ , different from the training one, was used.
+
+| $\hat{P}$ | $h \times \nu$ | #parameters | $\mathbb{E}_{c \sim \mathcal{U}([1,9]^2)}[D_{\mathrm{KL}}(P \| \hat{P})]$ |
+| --- | --- | --- | --- |
+| MNL | | 2 | .119 |
+| PCMC | | 4290 | .022 |
+| PCMC-Net | $1 \times 16$ | 97 | .018 |
+| PCMC-Net | $2 \times 16$ | 369 | .011 |
+| PCMC-Net | $3 \times 16$ | 641 | .009 |
+
+# 5 EXPERIMENTS ON AIRLINE ITINERARY CHOICE MODELING
+
+In this section, we instantiate PCMC-Net for the case of airline itinerary choice modeling. As shown in Babutsidze et al. (2019), this kind of data often exhibits attraction effects, calling for more flexible models such as PCMC. Nevertheless, in the considered dataset, alternatives rarely repeat themselves, which makes the original inference approach for PCMC inappropriate.
+
+# 5.1 DATASET
+
+We used the dataset from Mottini & Acuna-Agost (2017) consisting of flight booking sessions on a set of European origins and destinations. Each booking session contains up to 50 different itineraries, one of which has been booked by the customer. There are 815559 distinct alternatives among which $84\%$ are singletons and $99\%$ are observed at most seven times. In total, there are 33951 choice sessions of which 27160 were used for training and 6791 for testing. The dataset has a total of 13 features, both numerical and categorical, corresponding to individuals and alternatives (see Table 4).
+
+# 5.2 INSTANTIATION OF PCMC-NET
+
+PCMC-Net was implemented in PyTorch (Paszke et al., 2017).$^5$ During training, a mini-batch is composed of a number of sessions, each of which can contain a variable number of alternatives. Dynamic computation graphs are required in order to adapt to the varying session size. Stochastic gradient optimization is performed with Adam (Kingma & Ba, 2015). In our experiments, numerical variables are unidimensional and thus are not embedded; they were standardized during a preprocessing step. Each categorical input of cardinality $c_i$ is passed through an embedding layer, such that the resulting dimension is obtained by the rule of thumb $d_i \coloneqq \min(\lceil c_i / 2 \rceil, 50)$ . We maximize regularization by using a dropout probability of 0.5 (see, e.g., Baldi & Sadowski (2013)). The additive constant $\epsilon$ was set to 0.5. The linear solver was implemented with torch.solve, which uses LU decomposition. Table 2 shows the hyperparameters and learning parameters that were optimized by performing 25 iterations of Bayesian optimization (using GPyOpt (The GPyOpt authors, 2016)). Early stopping is performed during training if no significant improvement (greater than 0.01 with respect to the best log loss obtained so far) is made on a validation set (a random sample consisting of 10% of the choice sessions of the training set) for 5 epochs. Using the hyperparameter values returned by the Bayesian optimization procedure and the number of epochs at early stopping (66), the final model is obtained by training on the union of the training and validation sets.
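+
+The embedding rule of thumb $d_i \coloneqq \min(\lceil c_i / 2 \rceil, 50)$ can be illustrated with the categorical cardinalities of Table 4 (the helper below is hypothetical, not part of the released code):
+
+```python
+from math import ceil
+
+def embedding_dim(cardinality, cap=50):
+    """Rule of thumb for categorical embeddings: d_i = min(ceil(c_i / 2), 50)."""
+    return min(ceil(cardinality / 2), cap)
+
+# Cardinalities of the categorical features in Table 4.
+for name, c in [("Origin/Destination", 97), ("Search Office", 11), ("Airline", 63)]:
+    print(name, embedding_dim(c))  # 49, 6 and 32 respectively
+```
+
+Each resulting $d_i$ would be the output width of the corresponding embedding layer; numerical features bypass this step entirely.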
+
+Table 2: Hyperparameters optimized with Bayesian optimization.
+
+| parameter | range | best value |
+| --- | --- | --- |
+| learning rate | $\{10^{-i}\}_{i=1,\ldots,6}$ | 0.001 |
+| batch size (in sessions) | $\{2^i\}_{i=0,\ldots,4}$ | 16 |
+| hidden layers in $f_{wq}$ | $\{1, 2, 3\}$ | 2 |
+| nodes per layer in $f_{wq}$ | $\{2^i\}_{i=5,\ldots,9}$ | 512 |
+| activation | {ReLU, Sigmoid, Tanh, LeakyReLU} | LeakyReLU |
+
+# 5.3 RESULTS
+
+We compare the performance of the PCMC-Net instantiation against three simple baselines:
+
+- Uniform: probabilities are assigned uniformly to each alternative.
+- Cheapest (non-probabilistic): alternatives are ranked by increasing price.
+- Shortest (non-probabilistic): alternatives are ranked by increasing trip duration.
+
+We also compare against the results presented in Lheritier et al. (2019):
+
+- Multinomial Logit (MNL): choice probabilities are determined from the alternatives' features only, using some feature transformations to improve the performance.
+- Latent Class Multinomial Logit (LC-MNL): in addition to the alternatives' features, it uses individual's features which are used to model the probability of belonging to some latent classes whose number is determined using the Akaike Information Criterion. Feature transformations are also used to improve the performance.
+- Random Forest (RF): a classifier is trained on the alternatives as if they were independent, considering both individual's and alternatives' features and using as label whether each alternative was chosen or not. Some alternatives' features are transformed to make them relative to the values of each choice set. Since the classifier evaluates each alternative independently, the probabilities within a given session generally do not add up to one, and are therefore interpreted merely as scores to rank the alternatives.
+
+And, finally, we compare to
+
+- Deep Pointer Networks (DPN) (Mottini & Acuna-Agost, 2017): a recurrent neural network that uses both the features of the individual and those of the alternatives to learn to point to the chosen alternative from the choice sets given as sequences. The results are dependent on the order of the alternatives, which was taken as in the original paper, that is, as they were shown to the user.
+
+We compute the following performance measures on the test set $\mathcal{T}$ of choice sets and corresponding individuals:
+
+- Normalized Log Loss (NLL): given a probabilistic choice model $\hat{P}$ , $\mathrm{NLL} \equiv -\frac{1}{|\mathcal{T}|} \sum_{(S,I) \in \mathcal{T}} \log \hat{P}_S(Y_S|I)$ .
+- TOP $N$ accuracy: proportion of choice sessions where the actual choice was within the top $N$ ranked alternatives. Ties are broken randomly. We consider $N \in \{1, 5\}$ .
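+
+Both measures can be sketched as follows (illustrative helpers, assuming per-session score vectors and the index of the booked alternative; not the paper's evaluation code):
+
+```python
+import numpy as np
+
+def nll(probs_of_chosen):
+    """Normalized log loss: mean negative log probability of the booked itinerary."""
+    return -np.mean(np.log(probs_of_chosen))
+
+def top_n_accuracy(scores, chosen, n):
+    """Fraction of sessions whose chosen alternative ranks in the top n by score
+    (ties broken randomly via a tiny jitter)."""
+    rng = np.random.default_rng(0)
+    hits = 0
+    for s, y in zip(scores, chosen):
+        s = np.asarray(s, dtype=float) + rng.uniform(0, 1e-9, len(s))
+        top = np.argsort(-s)[:n]
+        hits += int(y in top)
+    return hits / len(chosen)
+
+# Two toy sessions: the chosen alternatives are index 0 and index 2.
+scores = [[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]]
+print(top_n_accuracy(scores, [0, 2], 1))  # 0.5: only the first choice is ranked first
+print(nll(np.array([0.7, 0.3])))          # about 0.78
+```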
+
+Table 3 shows that PCMC-Net outperforms all the contenders on all the considered metrics. It achieves a $21.3\%$ increase in TOP-1 accuracy and a $12.8\%$ decrease in NLL with respect to the best contender for each metric. In particular, we observe that the best contenders in TOP $N$ accuracy are LC-MNL and RF, both of which require manual feature engineering to achieve such performance, whereas PCMC-Net automatically learns the best representations. We also observe that our results are significantly better than those obtained with the previous deep learning approach, DPN, showing the importance of the PCMC definition in our deep learning approach to modeling the complex behaviors observed in airline itinerary choice data.
+
+Table 3: Results on airline itinerary choice prediction. * indicates cases with feature engineering.
+
+| method | TOP 1 | TOP 5 | NLL |
+| --- | --- | --- | --- |
+| Uniform | .063 | .255 | 3.24 |
+| Cheapest | .164 | .471 | - |
+| Shortest | .154 | .472 | - |
+| MNL* | .224 | .624 | 2.44 |
+| LC-MNL* | .271 | .672 | 2.33 |
+| RF* | .273 | .674 | - |
+| DPN | .257 | .665 | 2.33 |
+| PCMC-Net | .331 | .745 | 2.03 |
+
+# 6 CONCLUSIONS
+
+We proposed PCMC-Net, a generic neural network architecture equipping PCMC choice models with amortized, automatic-differentiation-based inference using alternatives' features. As a side benefit, the construction allows conditioning the probabilities on the individual's features. We showed that PCMC-Net is able to approximate any PCMC model arbitrarily well and thus maintains the flexibility (e.g., the ability to represent non-regular models) and the desired property of uniform expansion. Being neural network based, PCMC-Net handles complex features as previous machine learning and deep learning based approaches do, but with additional theoretical guarantees.
+
+We proposed a practical implementation showing the benefits of the construction on the challenging problem of airline itinerary choice prediction, where attraction effects are often observed and where alternatives rarely appear more than once—making the original inference approach for PCMC inappropriate.
+
+As future work, we foresee investigating the application of PCMC-Net to data with complex features (e.g., images, text, graphs) to assess the impact of such information on preferences and choice.
+
+# ACKNOWLEDGMENTS
+
+Thanks to María Zuluaga, Eoin Thomas, Nicolas Bondoux and Rodrigo Acuña-Agost for their insightful comments and to the four anonymous reviewers whose suggestions have greatly improved this manuscript.
+
+# REFERENCES
+
+Ernest Adams and Samuel Messick. An axiomatic formulation and generalization of successive intervals scaling. Psychometrika, 23(4):355-368, Dec 1958. ISSN 1860-0980. doi: 10.1007/BF02289784.
+The GPyOpt authors. GPyOpt: A Bayesian optimization framework in Python. http://github.com/SheffieldML/GPyOpt, 2016.
+Zakaria Babutsidze, William Rand, Emil Mirzayev, Ismael Rafai, Nobuyuki Hanaki, Thierry Delahaye, and Rodrigo Acuna-Agost. Asymmetric dominance in airfare choice. In 6th International Conference of Choice Modelling. ICMC, 2019.
+Pierre Baldi and Peter J Sadowski. Understanding dropout. In Advances in neural information processing systems, pp. 2814-2822, 2013.
+H D Block and Jacob Marschak. Random orderings and stochastic theories of response. Contributions to Probability and Statistics, 2:97-132, 1960.
+Ralph Allan Bradley and Milton E Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324-345, 1952.
+William H Greene and David A Hensher. A latent class model for discrete choice analysis: contrasts with mixed logit. Transportation Research Part B: Methodological, 37(8):681-698, 2003.
+Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural networks, 2(5):359-366, 1989.
+Joel Huber, John W Payne, and Christopher Puto. Adding asymmetrically dominated alternatives: Violations of regularity and the similarity hypothesis. Journal of consumer research, 9(1):90-98, 1982.
+Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
+Alix Lheritier, Michael Bocamazo, Thierry Delahaye, and Rodrigo Acuna-Agost. Airline itinerary choice modeling using machine learning. Journal of Choice Modelling, 31:198-209, 2019.
+R. Duncan Luce. Individual Choice Behavior: A Theoretical analysis. Wiley, New York, NY, USA, 1959.
+R Duncan Luce. The choice axiom after twenty years. Journal of mathematical psychology, 15(3): 215-233, 1977.
+Charles F Manski. The structure of random utility models. Theory and decision, 8(3):229-254, 1977.
+Daniel McFadden. Econometric models for probabilistic choice among products. Journal of Business, pp. S13-S29, 1980.
+Alejandro Mottini and Rodrigo Acuna-Agost. Deep choice model using pointer networks for airline itinerary prediction. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1575-1583. ACM, 2017.
+J. R. Norris. Markov Chains. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 1997. doi: 10.1017/CBO9780511810633.
+
+Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. 2017.
+Stephen Ragain and Johan Ugander. Pairwise choice markov chains. In Advances in Neural Information Processing Systems, pp. 3198-3206, 2016.
+Itamar Simonson. Choice based on reasons: The case of attraction and compromise effects. Journal of consumer research, 16(2):158-174, 1989.
+Louis L Thurstone. A law of comparative judgment. Psychological review, 34(4):273, 1927.
+Kenneth E Train. Discrete choice methods with simulation. Cambridge university press, 2009.
+Jennifer S Trueblood, Scott D Brown, and Andrew Heathcote. The multiattribute linear ballistic accumulator model of context effects in multialternative choice. Psychological review, 121(2):179, 2014.
+Amos Tversky. Elimination by aspects: A theory of choice. Psychological review, 79(4):281, 1972.
+
+# A FEATURES OF THE AIRLINE ITINERARY CHOICE DATASET
+
+Table 4: Features of the airline itinerary choice dataset.
+
+| | Type | Feature | Range/Cardinality |
+| --- | --- | --- | --- |
+| Individual | Categorical | Origin/Destination | 97 |
+| | | Search Office | 11 |
+| | Numerical | Departure weekday | [0, 6] |
+| | | Stay Saturday | [0, 1] |
+| | | Continental Trip | [0, 1] |
+| | | Domestic Trip | [0, 1] |
+| | | Days to departure | [0, 343] |
+| Alternative | Categorical | Airline (of first flight) | 63 |
+| | Numerical | Price | [77.15, 16781.5] |
+| | | Stay duration (minutes) | [121, 434000] |
+| | | Trip duration (minutes) | [105, 4314] |
+| | | Number of connections | [2, 6] |
+| | | Number of airlines | [1, 4] |
+| | | Outbound departure time (in s from midnight) | [0, 84000] |
+| | | Outbound arrival time (in s from midnight) | [0, 84000] |
\ No newline at end of file
diff --git a/pcmcnetfeaturebasedpairwisechoicemarkovchains/images.zip b/pcmcnetfeaturebasedpairwisechoicemarkovchains/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..23bca5444e1c7285933ec5f248c31d9c68841f4b
--- /dev/null
+++ b/pcmcnetfeaturebasedpairwisechoicemarkovchains/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:976958940a7c67cb608683aebfa5ce76ea682de548cfc83ffdf6535e4672e64b
+size 262045
diff --git a/pcmcnetfeaturebasedpairwisechoicemarkovchains/layout.json b/pcmcnetfeaturebasedpairwisechoicemarkovchains/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..66e5927f388ae02d663f7fc18146c82c3a19ce72
--- /dev/null
+++ b/pcmcnetfeaturebasedpairwisechoicemarkovchains/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7919255e2cb86598535170d30a656b37e4e49341b1ba98cae235c8cda3d2f601
+size 435788
diff --git a/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/adb952c9-6934-43aa-a45a-6e6a1ab84ed4_content_list.json b/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/adb952c9-6934-43aa-a45a-6e6a1ab84ed4_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4fe4063b8c1e4aa3ecccb413c955a3f1a0fdc48e
--- /dev/null
+++ b/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/adb952c9-6934-43aa-a45a-6e6a1ab84ed4_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5839d109e868c70ed0af9de7e940811c3b99f41bc6e3f6ed886af80ffab141fe
+size 78990
diff --git a/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/adb952c9-6934-43aa-a45a-6e6a1ab84ed4_model.json b/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/adb952c9-6934-43aa-a45a-6e6a1ab84ed4_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7ad16f3640c2dc17a35066e51b93a0d6e15d9250
--- /dev/null
+++ b/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/adb952c9-6934-43aa-a45a-6e6a1ab84ed4_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2ea9eb6bd733c7260a9f89ea2f2a04a881b659c421d74f43b4e95fc7b239d0cc
+size 94428
diff --git a/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/adb952c9-6934-43aa-a45a-6e6a1ab84ed4_origin.pdf b/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/adb952c9-6934-43aa-a45a-6e6a1ab84ed4_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b4d5661bd68afd3a56a11c5c80885ca2f935e735
--- /dev/null
+++ b/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/adb952c9-6934-43aa-a45a-6e6a1ab84ed4_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:98184d0de20e29dc33d7c94af44a309c2531314836a5abba36ebdc4b6d3e6357
+size 547829
diff --git a/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/full.md b/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5e9200e8db0315bca943152a504f8e0ae7e0ead1
--- /dev/null
+++ b/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/full.md
@@ -0,0 +1,312 @@
+# PERMUTATION EQUIVARIANT MODELS FOR COMPOSITIONAL GENERALIZATION IN LANGUAGE
+
+Jonathan Gordon*
+
+University of Cambridge
+jg801@cam.ac.uk
+
+David Lopez-Paz, Marco Baroni, Diane Bouchacourt
+
+Facebook AI Research
+
+{dlp, mbaroni, dianeb}@fb.com
+
+# ABSTRACT
+
+Humans understand novel sentences by composing meanings and roles of core language components. In contrast, neural network models for natural language modeling fail when such compositional generalization is required. The main contribution of this paper is to hypothesize that language compositionality is a form of group-equivariance. Based on this hypothesis, we propose a set of tools for constructing equivariant sequence-to-sequence models. Through a variety of experiments on the SCAN tasks, we analyze the behavior of existing models under the lens of equivariance, and demonstrate that our equivariant architecture is able to achieve the type of compositional generalization required in human language understanding.
+
+# 1 INTRODUCTION
+
+When using language, humans recombine known concepts to understand novel sentences. For instance, if one understands the meaning of "run", "jump", and "jump twice", then one understands the meaning of "run twice", even if such a sentence was never heard before. This relies on the notion of language compositionality, which states that the meaning of a sentence ("jump twice") is obtained from the meanings of its constituents (e.g. the verb "jump" and the quantifying adverb "twice") and the use of algebraic computation (a verb combined with a quantifying adverb $m$ results in doing that verb $m$ times) (Kratzer & Heim, 1998).
+
+In the realm of machines, deep learning has achieved unprecedented results in language modeling tasks (Bahdanau et al., 2015; Vaswani et al., 2017). However, these models are sample inefficient, and do not generalize to examples that require the use of language compositionality (Lake & Baroni, 2018; Loula et al., 2018; Dessi & Baroni, 2019). This result suggests that deep language models fail to leverage compositionality, a failure that remains to this day a roadblock towards true natural language understanding.
+
+Focusing on this issue, Lake & Baroni (2018) proposed the Simplified version of the CommAI Navigation (SCAN), a dataset to benchmark the compositional generalization capabilities of state-of-the-art sequence-to-sequence (seq2seq) translation models (Sutskever et al., 2014; Bahdanau et al., 2015). In a nutshell, the SCAN dataset contains compositional navigation commands such as JUMP TWICE AFTER RUN LEFT, to be translated into the sequence of actions LTURN RUN JUMP JUMP.
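+
+To make the translation concrete, here is a toy interpreter for a small fragment of the SCAN grammar (verbs, LEFT/RIGHT, TWICE, AND/AFTER); it is an illustrative sketch, not the full grammar described in Appendix A, and all helper names are hypothetical:
+
+```python
+# Minimal interpreter for a fragment of SCAN commands.
+ACTIONS = {"jump": ["JUMP"], "run": ["RUN"], "walk": ["WALK"], "look": ["LOOK"]}
+
+def interpret(cmd):
+    def phrase(tokens):
+        out = list(ACTIONS[tokens[0]])
+        rest = tokens[1:]
+        if rest and rest[0] in ("left", "right"):          # turn before acting
+            out = (["LTURN"] if rest[0] == "left" else ["RTURN"]) + out
+            rest = rest[1:]
+        if rest and rest[0] == "twice":                    # adverbial repetition
+            out = out * 2
+        return out
+    tokens = cmd.lower().split()
+    if "after" in tokens:                                  # AFTER reverses execution order
+        i = tokens.index("after")
+        return phrase(tokens[i + 1:]) + phrase(tokens[:i])
+    if "and" in tokens:                                    # AND keeps execution order
+        i = tokens.index("and")
+        return phrase(tokens[:i]) + phrase(tokens[i + 1:])
+    return phrase(tokens)
+
+print(interpret("JUMP TWICE AFTER RUN LEFT"))  # ['LTURN', 'RUN', 'JUMP', 'JUMP']
+```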
+
+Using SCAN, Lake & Baroni (2018) demonstrated that seq2seq models fail spectacularly at tasks requiring the use of language compositionality. Following our introductory example, models trained on the three commands JUMP, RUN and JUMP TWICE fail to generalize to RUN TWICE. Most recently, Dessi & Baroni (2019) showed that architectures based on temporal convolutions meet the same fate.
+
+SCAN did not only reveal the lack of compositionality in language models, but it also became the blueprint to build novel language models able to handle language compositionality. On the one hand, Russin et al. (2019) proposed a seq2seq model where semantic and syntactic information are represented separately, in a hope that such disentanglement would elicit compositional rules. However, their model was not able to solve all of the compositional tasks comprising SCAN. On the other hand, Lake (2019) introduced a meta-learning approach with excellent performance in multiple
+
+SCAN tasks. However, their method requires substantial amounts of additional supervision, and a complex meta-learning procedure hand-engineered for each task.
+
+In this paper, we take a holistic look at the problem and connect language compositionality in SCAN to the disparate literature on models equivariant to certain group symmetries (Kondor, 2008; Cohen & Welling, 2016; Ravanbakhsh et al., 2017; Kondor & Trivedi, 2018). Interesting links have recently been proposed between group symmetries and the areas of causality (Arjovsky et al., 2019) and disentangled representation learning (Higgins et al., 2018), and this work proceeds in a similar fashion. In particular, the main contribution of this work is not to chase performance numbers, but to put forward the novel hypothesis that language compositionality can be understood as a form of group-equivariance (Section 3). To sustain our hypothesis, we provide tools to construct seq2seq models that are equivariant when the group symmetries are known (Section 4), and demonstrate that these models solve all SCAN tasks except length generalization (Section 6).$^1$
+
+# 2 THE SCAN COMPOSITIONAL TASKS
+
+The purpose of the Simplified version of the CommAI Navigation (SCAN) tasks (Lake & Baroni, 2018) is to benchmark the abilities of machine translation models for compositional generalization. Following prior literature (Lake & Baroni, 2018; Baroni, 2019; Russin et al., 2019; Andreas, 2019), compositional generalization is understood as the ability to translate novel families of sentences, when this requires leveraging the compositional structure in language.
+
+The SCAN dataset contains compositional navigation commands in English (the input-language) paired with a desired action sequence (the output-language). For instance, the input-language sentence JUMP TWICE AND RUN LEFT is paired with the output-language action sequence JUMP JUMP LTURN RUN. The rest of our exposition uses SMALL CAPS to denote examples in the input-language, and LARGE CAPS to denote examples in the output-language. Appendix A contains a full description of the grammar generating the SCAN language.
+
+To evaluate the compositional generalization abilities of sequence-to-sequence (seq2seq) machine translation models (Sutskever et al., 2014; Bahdanau et al., 2015), Lake & Baroni (2018) propose four main tasks based on SCAN:
+
+1. Simple task: data pairs are randomly split into training and test sets. No compositional generalization is required.
+2. Add jump task: the only command in the training set containing the verb JUMP is the command JUMP. All commands not containing JUMP are in the training set (for instance, RUN TWICE, and WALK RIGHT THRICE AND LOOK LEFT). The test set contains all commands containing JUMP (for instance, JUMP TWICE, and RUN LEFT AND JUMP RIGHT). To succeed in this task, models must learn that JUMP is a verb, and that any verb can be composed with an adverbial number to be repeated a number of times.
+3. Around right task: the phrase AROUND RIGHT is held out from the training set; however, both AROUND and RIGHT are shown in all other contexts (for example, AROUND LEFT or OPPOSITE RIGHT). To succeed at this task, models must learn that both RIGHT and LEFT are directions, and can be combined with AROUND and OPPOSITE.
+4. Length generalization task: the training set contains pairs such that the length of the action sequence in the output-language is shorter than 24 actions. The test set contains all pairs with action sequences of a length greater or equal than 24 actions. The type of compositional ability required to succeed at this task is more difficult to sketch out, as we discuss in Section 6.2.
+
+Lake & Baroni (2018) use these four tasks to demonstrate that state-of-the-art seq2seq translation models (Bahdanau et al., 2015) succeed at the Simple task, but fail at the other three tasks requiring compositional generalization. Convolutional architectures (Dessi & Baroni, 2019) achieve only slightly better performance, and state-of-the-art methods specially developed to address SCAN tasks fall short of the best achievable performance (Russin et al., 2019), or call for substantial amounts of additional supervision (Lake, 2019).
+
+In the following, we take a holistic look at the language compositionality problems in SCAN, and highlight their connection to equivariant maps in group theory.
+
+# 3 SCAN COMPOSITIONALITY AS GROUP EQUIVARIANCE
+
+This section puts forward the hypothesis that:
+
+Models achieving the compositional generalization required in certain SCAN tasks are equivariant with respect to permutation group operations$^2$ in the input and output languages.
+
+To unfold the meaning of our hypothesis, we must revisit some basic concepts in group theory. A discrete group $G$ is a set of elements $\{g_1, \ldots, g_{|G|}\}$ , equipped with a binary group operation “.” satisfying the four group axioms (closure, associativity, identity, and invertibility). The sequel focuses on permutation groups $G$ , whose elements are permutations of a set $\mathcal{X}$ , and whose binary group operation composes the permutations contained in $G$ . The set of all permutations of $\mathcal{X}$ is a group, but not all subsets of permutations of $\mathcal{X}$ satisfy the four group axioms, and therefore they do not form a group. For each element $g \in G$ , we define the group operation $T_g: \mathcal{X} \to \mathcal{X}$ as the map applying the permutation $g$ to the element $x \in \mathcal{X}$ , to obtain $T_gx$ . Armed with these definitions, we are ready to introduce the main object of study in this paper: equivariant maps.
+
+Definition 1 (Equivariant map). Let $\mathcal{X}$ and $\mathcal{Y}$ be two sets. Let $G$ be a group whose group operation on $\mathcal{X}$ is denoted by $T_{g}:\mathcal{X}\to \mathcal{X}$ , and whose group operation on $\mathcal{Y}$ is denoted by $T_{g}^{\prime}:\mathcal{Y}\rightarrow \mathcal{Y}$ . Then, $\Phi :\mathcal{X}\to \mathcal{Y}$ is an equivariant map if and only if $\Phi (T_gx) = T_g'\Phi (x)$ for all $x\in \mathcal{X}$ and $g\in G$ .
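+
+Definition 1 can be checked numerically on a toy example (an illustrative sketch, with $\mathcal{X} = \mathcal{Y} = \mathbb{R}^4$ and $G$ the cyclic shift group acting identically on both sets):
+
+```python
+import numpy as np
+
+def shift(x, k):
+    """Group operation T_g of the cyclic group C_4: rotate the sequence by k."""
+    return np.roll(x, k)
+
+def phi(x):
+    """A toy elementwise map; elementwise maps commute with any permutation."""
+    return x ** 2 + 1
+
+x = np.array([1, 2, 3, 4])
+for k in range(4):  # every element of C_4
+    assert np.array_equal(phi(shift(x, k)), shift(phi(x), k))  # Phi(T_g x) = T'_g Phi(x)
+
+# A counterexample: the cumulative sum is position-dependent, hence not equivariant.
+assert not np.array_equal(np.cumsum(shift(x, 1)), shift(np.cumsum(x), 1))
+```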
+
+The operation groups $(T_g, T_g')$ defined above operate on entire sequences, an enormous space when we consider those sequences to be language sentences. In the following two definitions, we relax group operations and equivariant maps to operate at a word level.
+
+Definition 2 (Local group operations). Let $\mathcal{X}$ be a set of sequences (or sentences), where each sequence $x\in \mathcal{X}$ contains elements $x_{i}\in \mathcal{V}$ from a vocabulary set $\mathcal{V}$ . Let $G$ be a group with associated group operation $T_{g}:\mathcal{X}\to \mathcal{X}$ . Then, we say that $T_{g}$ is a local group operation if there exists a group operation $T_{g_w}:\mathcal{V}\to \mathcal{V}$ such that $T_{g}x = (T_{g_{w}}x_{1},\ldots ,T_{g_{w}}x_{L_{x}})$ for all $x\in \mathcal{X}$ .
+
+When understanding sequences as language sentences, the group operation $T_{g_w}$ would be a permutation of the words from the language vocabulary. Such operation can be implemented in terms of a permutation matrix, a $|\mathcal{V}| \times |\mathcal{V}|$ matrix with zero/one entries where each row and each column sum to one. Finally, we leverage the definition of local group operations to define locally equivariant maps.
+
+Definition 3 (Locally equivariant map). Let $\mathcal{X}$ and $\mathcal{Y}$ be two sets of sequences. Let $G$ be a group whose group operation on $\mathcal{X}$ is local in its vocabulary, denoted by $T_{g}:\mathcal{X}\to \mathcal{X}$ , and whose group operation on $\mathcal{Y}$ is local in its vocabulary, denoted by $T_g^{\prime}:\mathcal{Y}\to \mathcal{Y}$ . Then, we say that $\Phi :\mathcal{X}\rightarrow \mathcal{Y}$ is a locally equivariant map if and only if $\Phi (T_gx) = T_g'\Phi (x)$ for all $x\in \mathcal{X}$ and $g\in G$ .
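+
+A local group operation on sentences (Definition 2) can be sketched as a single $|\mathcal{V}| \times |\mathcal{V}|$ permutation matrix applied word-wise (toy vocabulary; all names illustrative):
+
+```python
+import numpy as np
+
+# Toy vocabulary and a group element g that swaps the verbs "jump" and "run".
+vocab = ["jump", "run", "twice", "and"]
+g = np.eye(len(vocab), dtype=int)
+g[[0, 1]] = g[[1, 0]]  # permutation matrix: each row and column sums to one
+
+def one_hot(word):
+    v = np.zeros(len(vocab), dtype=int)
+    v[vocab.index(word)] = 1
+    return v
+
+def apply_local(sentence):
+    """Local group operation T_g: apply the same word permutation g position-wise."""
+    return [vocab[np.argmax(g @ one_hot(w))] for w in sentence]
+
+print(apply_local(["jump", "twice", "and", "run"]))  # ['run', 'twice', 'and', 'jump']
+```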
+
+
+Figure 1: (a) Commutative diagram for equivariance. (b) Local equivariance enables generalization to verb replacement in SCAN. (c) Local equivariance does not enable generalization to conjunction replacement in SCAN.
+
+Now, how do equivariances and local equivariances manifest themselves in the world of SCAN? To assist our examples, the commutative diagram in Figure 1a summarizes the group theory notations introduced so far. In Figure 1b and Figure 1c, we parallel these notations to two different examples of compositional skills required to solve SCAN: verb and conjunction replacement. In the SCAN domain, $\mathcal{X}$ is the set of sentences in the input-language, and $\mathcal{Y}$ is the set of sentences in the output-language. Furthermore, let $\Phi$ be a locally equivariant SCAN translation model, and let $G$ be a group with associated local group operations that permutes words in the input- and output- languages.
+
+On the one hand, we observe in Figure 1b that local equivariance enables compositional generalization in the case of verb replacement. This is because replacing one verb in the input-language can be implemented in terms of a local group operation. In turn, this input-verb replacement corresponds deterministically to a second local group operation that replaces the corresponding verb in the output-language. The same would apply to a SCAN task where we are interested in generalizing to the replacement of LEFT and RIGHT. As such, a translation model $\Phi$ with these compositional generalization capabilities must be locally equivariant.
+
+On the other hand, we observe in Figure 1c that local equivariance is insufficient to enable compositional generalization in the case of conjunction replacement. This is because no local group operation in the output-language would be able to implement the necessary changes induced by the replacement of AND by AFTER in the input-language. In such cases, we refer to the equivariance as global equivariance. In particular, we can see how blocks of multiple words in the output-language swap their relative location. Local equivariances are also insufficient to enable compositional generalization in the Length generalization SCAN task and we elaborate on this in Section 6.2.
+
+In the following section, we propose a set of tools to implement equivariant seq2seq translation models, and propose a particular architecture with which we conduct our experiments.
+
+# 4 IMPLEMENTING AN EQUIVARIANT SEQUENCE-TO-SEQUENCE MODEL
+
+We now implement our proposed equivariant seq2seq model, following the encoder-decoder architecture illustrated in Figure 2. Readers unfamiliar with group theory may parse Figure 2 by temporarily discarding the "$G$-" prefixes, and realize that each depicted module is a well-known building block of recurrent neural network models.
+
+
+Figure 2: Architecture of our fully-equivariant seq2seq model: (a) the $G$-Equivariant Encoder and (b) the $G$-Equivariant Decoder. Variables shaded in gray are mappings $G \to \mathbb{R}^{K}$, implemented as $|G| \times K$ matrices. Encoder and decoder meet at $\tilde{h}_0 \coloneqq h_{L_x}$ .
+
+To make our model equivariant, we will make intense use of group convolutions.
+
+Definition 4 (Group convolution (Kondor & Trivedi, 2018)). Let $G$ be a discrete group. Let $f: G \to \mathbb{R}^K$ be an input function. Let $\psi = \{\psi^i: G \to \mathbb{R}^K\}_{i=1}^{K'}$ be a set of learnable filter functions. Then, the result of $G$ -convolving $f$ and $\psi$ is a $|G| \times K'$ matrix with entries
+
+$$
+G\text{-}\mathrm{Conv}(f; \psi)_{g,i} = \sum_{h \in \operatorname{dom}(f)} \sum_{k=1}^{K} f_{k}(h)\, \psi_{k}^{i}\left(g^{-1} h\right), \tag{1}
+$$
+
+for all $g \in G$ and $i \in \{1, \dots, K'\}$ . As shown by Kondor & Trivedi (2018), the $G$ -Conv layer is equivariant with respect to the operations of $G$ . We apply this definition in two ways: (i) "convolving" words with learnable filters to generate equivariant embeddings (later, when we introduce our notation, we discuss how words may be viewed as functions so as to fit the definition); and (ii) convolving two group representations, in which case $\operatorname{dom}(f) = G$ .
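+
+For a cyclic group $C_n$, where $g^{-1}h$ reduces to index arithmetic modulo $n$, Equation 1 can be implemented directly; the sketch below also checks the equivariance property (illustrative code, not the paper's implementation):
+
+```python
+import numpy as np
+
+def g_conv(f, psi):
+    """Group convolution over the cyclic group C_n (Definition 4).
+
+    f:   |G| x K array, the input function g -> R^K
+    psi: K' x |G| x K array, the learnable filters
+    Returns a |G| x K' array; for C_n, g^{-1} h is the index (h - g) mod n."""
+    n, K = f.shape
+    n_filters = psi.shape[0]
+    out = np.zeros((n, n_filters))
+    for g in range(n):
+        for i in range(n_filters):
+            out[g, i] = sum(f[h, k] * psi[i, (h - g) % n, k]
+                            for h in range(n) for k in range(K))
+    return out
+
+rng = np.random.default_rng(0)
+f, psi = rng.normal(size=(4, 3)), rng.normal(size=(2, 4, 3))
+# Equivariance: acting on the input by a shift s shifts the output rows by s.
+s = 1
+f_shifted = np.roll(f, s, axis=0)  # (T_s f)(h) = f(s^{-1} h)
+assert np.allclose(g_conv(f_shifted, psi), np.roll(g_conv(f, psi), s, axis=0))
+```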
+
+We note that there are several additional methods proposed in the literature for constructing permutation equivariant layers (e.g., Zaheer et al., 2017; Ravanbakhsh et al., 2017). However, as demonstrated by Kondor & Trivedi (2018); Bloem-Reddy & Teh (2019), the above form is very general and subsumes most alternatives. Further, while layers based on weight-sharing may be more efficient than the general form of Definition 4, the parameter tying restricts the capacity of the layer. For example, the permutation equivariant layer of Zaheer et al. (2017) requires weight matrices that are restricted to a form $\lambda I + \gamma (\mathbf{11})^T$ , with learnable parameters $\lambda$ and $\gamma$ . This layer has fewer learnable parameters than the convolutional form of Definition 4. Thus, for reasons of generality and capacity, we employ the general and expressive convolutional form of Definition 4 for our permutation equivariant layers.
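+
+For contrast, the weight-shared layer of Zaheer et al. (2017) mentioned above can be sketched in a few lines; with only the two scalars $\lambda$ and $\gamma$ it is permutation equivariant by construction (an illustrative sketch):
+
+```python
+import numpy as np
+
+def deepsets_layer(X, lam, gamma):
+    """Permutation-equivariant layer of Zaheer et al. (2017): weight matrix
+    restricted to lam * I + gamma * 1 1^T, i.e. each set element gets
+    lam * x_j plus gamma times the sum over the whole set."""
+    return lam * X + gamma * X.sum(axis=0, keepdims=True)
+
+rng = np.random.default_rng(0)
+X = rng.normal(size=(5, 3))  # a set of 5 elements with 3 features each
+perm = rng.permutation(5)
+# Equivariance: permuting the set permutes the output the same way.
+assert np.allclose(deepsets_layer(X[perm], 0.5, -0.2),
+                   deepsets_layer(X, 0.5, -0.2)[perm])
+```
+
+The two-scalar parameterization makes the restricted capacity noted above concrete, compared with the full filter bank of Definition 4.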
+
+Equivariant with respect to what group? The previous $G$-Conv layer requires choosing a discrete group $G$. As hinted in Section 3, we will choose $G$ to contain $|G|$ permutations of language vocabularies, e.g. products of cyclic groups on sets of words. Note that for a vocabulary of size $|V|$, the set of all permutations has size $|V|!$. However, it suffices to consider subgroups containing permutations such that every relevant word can be reached by composing elements of the subgroup. For example, while the group of permutations on the four verbs in SCAN consists of 24 elements, it suffices to choose $G$ as the circular shift group on the four verbs, which is a subgroup of four elements. Following standard notation in group theory, we write $g \cdot h$ to denote the composition of two group elements $g, h \in G$, and $g^{-1}$ to denote the inverse element of $g$.
+
+As final preliminaries, denoting $[V] = \{1,\dots ,|V|\}$ , we represent a word $w$ in the input-language by the function $w: [V] \to \{0,1\}$ , where $\sum_{v \in [V]} w(v) = 1$ , and similarly by using $\tilde{w} \in \tilde{V}$ for the output-language. These notations are functional representations of word one-hot encodings that will play well with our notations. Note that this representation is equivalent to one-hot vectors, and in what follows we use the shorthand $w$ for the one-hot vector representation of words.
+
+To avoid notational clutter, we use $g$ to denote the permutation-matrix-representation of the corresponding group element. Thus, the group operation on a word $gw$ can be implemented as matrix multiplication between the permutation matrix $g$ and the one-hot vector $w$ . Note that this operation results in another one-hot vector, i.e. another word in the vocabulary. Similarly, the binary group operation can be written as matrix multiplication $gh$ between two group members $g, h \in G$ . Here too, multiplication of permutation matrices results in permutation matrix, so $gh \in G$ .
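The permutation-matrix representation above can be sketched in a few lines of numpy (an illustration under our own conventions, not code from the paper): the circular shift group on the four SCAN verbs is built from shift matrices, and closure, inverses, and the action on one-hot words can all be verified directly.

```python
import numpy as np

# Circular shift group on the four SCAN verbs (illustrative; in the full model
# every other vocabulary word is left fixed by each group element).
verbs = ["walk", "look", "run", "jump"]
V = len(verbs)

def shift_matrix(s, n):
    """Permutation matrix P for a circular shift by s: P e_j = e_{(j+s) mod n}."""
    P = np.zeros((n, n))
    for j in range(n):
        P[(j + s) % n, j] = 1.0
    return P

G = [shift_matrix(s, V) for s in range(V)]

# Closure: the product of two group members is again a member of G
# (shift by 1 composed with shift by 2 is shift by 3).
assert np.array_equal(G[1] @ G[2], G[3])

# Inverses: the transpose of a permutation matrix is its inverse.
assert np.array_equal(G[1].T @ G[1], np.eye(V))

# Acting on a one-hot word yields another one-hot word in the vocabulary.
w = np.eye(V)[0]                         # one-hot for "walk"
print(verbs[int(np.argmax(G[1] @ w))])   # prints "look"
```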
+
+We now describe each of the components in our $G$ -equivariant translation model, by following the transformation process of an input sequence $x = (w_{1}, \ldots, w_{L_{x}})$ (in SCAN, a navigation command in English) into its output translation $y = (\tilde{w}_{1}, \ldots, \tilde{w}_{L_{y}})$ (in SCAN, a sequence of actions).
+
+# 4.1 G-EQUIVARIANT ENCODER
+
+Upon arrival, the input-language sentence $x = (w_{1},\ldots ,w_{L_{x}})$ is sent to a $G$ -equivariant encoder. The first step in the encoding process is to transform each input word $w_{t}$ into a permutation equivariant embedding $e(w_{t})$ . As mentioned before, each word $w_{t}$ is represented by the one-hot vector $w_{t} : [V]\to \{0,1\}$ . The corresponding embedding is obtained by applying a set of $K$ 1-dimensional learnable filter functions $\{\psi^i :[V]\to \mathbb{R}\}_{i = 1}^K$ in a group convolution (throughout the section, we use $K$ everywhere to ease notation). Using Definition 4, the embedding, which we call $G$ -Embed, is then represented as a matrix $\mathbb{R}^{|G|\times K}$ , where
+
+$$
+e(w)_{g,i} = G\text{-Embed}(w; \psi)_{g,i} = \psi^i\left(g^{-1} w\right), \tag{2}
+$$
+
+for all $g \in G$ and $i \in \{1, \dots, K\}$. Note that since $w$ is a one-hot vector, $G$-Embed is a particularly simple instantiation of Definition 4, as the summation over $\mathrm{dom}(f)$ consists of only a single term. The corresponding embedding is a function $e(w_t): G \to \mathbb{R}^K$, which can be represented as a $|G| \times K$ matrix, where each row corresponds to the embedding of the word $gw$ for a particular $g \in G$. This layer can be implemented by defining $\psi$ with standard deep learning embedding modules.
+
+Importantly, we note that for this layer, both $\psi$ and $w$ are functions on $[V]$ . However, the resulting embedding $e(w)$ is a function on the group $G$ . Therefore, in all subsequent computations we will require the learnable filters $\psi$ to also be functions on $G$ .
+
+We illustrate this layer with an example. Let $G$ be the cyclic group that permutes the words LEFT and RIGHT. We can think of $g_{1}$ as the identity, and $g_{2}$ as permuting the words LEFT and RIGHT (leaving all other words unchanged). In this case, embedding LEFT results in the $2 \times K$ matrix $[\psi(\mathrm{LEFT})^T, \psi(\mathrm{RIGHT})^T]^{T}$ , while embedding JUMP results in $[\psi(\mathrm{JUMP})^T, \psi(\mathrm{JUMP})^T]^{T}$ , since both $g_{1}$ and $g_{2}$ act as the identity permutation for JUMP.
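The LEFT/RIGHT example can be reproduced with a short numpy sketch of Eq. 2 (our own illustration; the vocabulary, group, and random embedding table are assumptions for the example). Row $g$ of $G$-Embed holds $\psi(g^{-1}w)$, so embedding LEFT stacks $\psi(\mathrm{LEFT})$ and $\psi(\mathrm{RIGHT})$, while embedding JUMP repeats $\psi(\mathrm{JUMP})$.

```python
import numpy as np

# Vocabulary and the two-element group from the example: g1 is the identity,
# g2 swaps LEFT and RIGHT and fixes every other word.
vocab = ["LEFT", "RIGHT", "JUMP"]
V, K = len(vocab), 4

g1 = np.eye(V)
g2 = np.eye(V)[[1, 0, 2]]        # permutation matrix swapping rows 0 and 1
G = [g1, g2]

rng = np.random.default_rng(0)
psi = rng.normal(size=(V, K))    # learnable embedding table psi: [V] -> R^K

def g_embed(w, psi, G):
    """G-Embed (Eq. 2): row g holds psi(g^{-1} w), for a one-hot vector w."""
    return np.stack([psi.T @ (P.T @ w) for P in G])   # P.T = P^{-1}

left = np.eye(V)[0]
jump = np.eye(V)[2]

# e(LEFT) = [psi(LEFT); psi(RIGHT)]; e(JUMP) repeats psi(JUMP), since both
# group elements act as the identity permutation on JUMP.
assert np.allclose(g_embed(left, psi, G), np.stack([psi[0], psi[1]]))
assert np.allclose(g_embed(jump, psi, G), np.stack([psi[2], psi[2]]))
```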
+
+Next, the word embedding $e(w_{t})$ is sent to a permutation equivariant Recurrent Neural Network (G-RNN). The cells of a $G$ -RNN mimic those of a standard RNN, where linear transformations are replaced by $G$ -Convs (Definition 4). This cell receives two inputs (the word embedding $e(w_{t})$ and the previous hidden state $h_{t-1}$ ) and returns one output (the current hidden state $h_{t}$ ), all three being functions $G \to \mathbb{R}^{K}$ , parametrized as $|G| \times K$ matrices. More specifically:
+
+$$
+h_t = G\text{-RNN}(e(w_t), h_{t-1}) = \sigma\left(G\text{-Conv}(h_{t-1}; \psi_h) + G\text{-Conv}(e(w_t); \psi_e)\right), \tag{3}
+$$
+
+where $\psi_h, \psi_e: G \to \mathbb{R}^K$ are learnable filters (represented as $|G| \times K$ matrices), and $\sigma$ is a point-wise activation function.
+
+The cell $G$ -RNN is equivariant because the sum of two equivariant representations is equivariant (Cohen & Welling, 2016), and the pointwise transformation of an equivariant representation is also equivariant. To initialize the hidden state, we set $h_0 = \vec{0}$ . We note that our experiments use the equivariant analog of LSTM cells (Hochreiter & Schmidhuber, 1997), which we denote $G$ -LSTM, since these achieved the best performance. We include the architecture of $G$ -LSTM cells in Appendix B.
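A single recurrent step of Eq. 3 can be sketched in numpy (our illustration, assuming a cyclic group so the group action is a row shift; for full generality the filters here carry $K \times K$ channel mixing, which the paper's lighter $|G| \times K$ notation elides). The invariant zero initial state makes the equivariance of the first step easy to check.

```python
import numpy as np

def cyclic_gconv(f, psi):
    """Eq. 1 on the cyclic group C_n. f: (n, K); psi: (n, K, K') -> (n, K')."""
    n = f.shape[0]
    out = np.zeros((n, psi.shape[2]))
    for g in range(n):
        for h in range(n):
            out[g] += f[h] @ psi[(h - g) % n]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def g_rnn_step(e_w, h_prev, psi_e, psi_h):
    """One G-RNN step (Eq. 3): two G-Convs, summed, then a pointwise sigmoid."""
    return sigmoid(cyclic_gconv(h_prev, psi_h) + cyclic_gconv(e_w, psi_e))

n, K = 4, 3
rng = np.random.default_rng(0)
psi_e = rng.normal(size=(n, K, K))
psi_h = rng.normal(size=(n, K, K))
e_w = rng.normal(size=(n, K))
h0 = np.zeros((n, K))            # the zero initial state is group-invariant

h1 = g_rnn_step(e_w, h0, psi_e, psi_h)

# Equivariance: shifting the input embedding shifts the hidden state identically.
assert np.allclose(g_rnn_step(np.roll(e_w, 1, axis=0), h0, psi_e, psi_h),
                   np.roll(h1, 1, axis=0))
```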
+
+This completes the description of our equivariant encoder, illustrated in Figure 2a.
+
+# 4.2 G-EQUIVARIANT DECODER
+
+Once the entire input-language sentence $x = (w_{1},\dots ,w_{L_{x}})$ has been encoded into the hidden representations $h = (h_1,\ldots ,h_{L_x})$ , we are ready to start the decoding process that will produce the output-language translation $y = (\tilde{w}_1,\dots ,\tilde{w}_{L_y})$ .
+
+As illustrated in Figure 2b, our equivariant decoder is also run by an equivariant recurrent cell $G$ -RNN. We denote the hidden states of the recurrent decoding process by $\tilde{h}_t$ , where $\tilde{h}_0 = h_{L_x}$ . At time $t$ , the two inputs to the decoding $G$ -RNN cell are the previous hidden state $\tilde{h}_{t-1}$ as well as an attention $\bar{a}_t$ over all the encoding hidden states $h$ . (Once again, all variables are mappings $G \to \mathbb{R}^K$ implemented as $|G| \times K$ matrices.)
+
+Attention mechanisms (Bahdanau et al., 2015; Vaswani et al., 2017) have emerged as a central tool in language modelling. Fortunately, attention mechanisms are typically implemented as linear combinations, and a linear combination of equivariant representations is itself an equivariant representation. We now leverage this fact to develop an equivariant attention mechanism. Given all the encoder hidden states $h$ , as well as the previous decoding hidden state $\tilde{h}_{t-1}$ , we propose the equivariant analog of dot-product attention (Luong et al., 2015) as
+
+$$
+\bar{a}_t = G\text{-Attention}(\tilde{h}_{t-1}, h) = \sum_{j=1}^{L_x} \alpha_{t,j}\, h_j, \quad \text{where} \tag{4}
+$$
+
+$$
+\alpha_{t,j} = \frac{\exp \beta_{t,j}}{\sum_{k=1}^{L_x} \exp \beta_{t,k}}, \quad \text{and} \quad \beta_{t,j} = \sum_{g \in G} \tilde{h}_{t-1}(g)^{\top} h_j(g). \tag{5}
+$$
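The key property of Eqs. 4-5 is that the scores $\beta_{t,j}$ sum over the group axis and are therefore invariant, while the convex combination of encoder states is equivariant. A minimal numpy sketch (ours, assuming a cyclic group so the group action is a row shift) makes this checkable:

```python
import numpy as np

def g_attention(h_dec, h_enc):
    """Equivariant dot-product attention (Eqs. 4-5).
    h_dec: (n, K) previous decoder state; h_enc: (L, n, K) encoder states.
    beta sums over the group axis g (invariant scores); the output is a
    softmax-weighted combination of encoder states (equivariant)."""
    beta = np.einsum('gk,lgk->l', h_dec, h_enc)
    alpha = np.exp(beta - beta.max())
    alpha /= alpha.sum()
    return np.einsum('l,lgk->gk', alpha, h_enc)

L, n, K = 5, 4, 3
rng = np.random.default_rng(0)
h_dec = rng.normal(size=(n, K))
h_enc = rng.normal(size=(L, n, K))

a = g_attention(h_dec, h_enc)

# Acting on all hidden states by the same group element leaves the attention
# weights unchanged and shifts the output by that element.
shifted = g_attention(np.roll(h_dec, 1, axis=0), np.roll(h_enc, 1, axis=1))
assert np.allclose(shifted, np.roll(a, 1, axis=0))
```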
+
+Following Figure 2b, the attention $\bar{a}_t$ and a $G$ -embedding $e(\tilde{w}_{t-1})$ for the previous output word are concatenated and sent to a $G$ -Convolution. The concatenation with $e(\tilde{w}_{t-1})$ provides the decoder with information regarding the previously embedded word. In practice, during training we use teacher-forcing (Williams & Zipser, 1989) to provide the decoder with information about the correct output sequences. This process returns a final hidden representation $\phi: G \to \mathbb{R}^K$ .
+
+As a final step in the decoding process, we need to convert $\phi$ into a collection of logits over the output-language vocabulary. Then, sampling from the categorical distribution induced by these logits at time $t$ (or taking the maximum) will produce the word $\tilde{w}_t$ , to be appended in the output-language translation, $y$ . This final decoding module can be implemented as follows:
+
+$$
+G\text{-Decode}(\phi; \psi)_{\tilde{w}} = \sum_{h \in G} \sum_{k=1}^{K} \phi_k(h)\, \psi_k\left(h^{-1} \tilde{w}\right), \tag{6}
+$$
+
+where $\psi : [\tilde{V}] \to \mathbb{R}^K$ are the learnable parameters of this layer (represented by a $|\tilde{V}| \times K$ matrix).
+
+Recall that $\phi(h) \in \mathbb{R}^K$ is the final-layer representation for the group element $h$ , and that $h^{-1}\tilde{w}$ is the inverse element of $h \in G$ applied to the output word $\tilde{w}$ (represented as a one-hot vector), which results in another word in the output language. Thus, $\psi$ is a learnable embedding of the output words into $\mathbb{R}^K$ . This layer is evaluated at every $\tilde{w}$ in the output vocabulary to produce a scalar. The resulting vector of logits represents a categorical distribution over the output vocabulary. While similar, this layer is not a group convolution (Definition 4). Rather, equivariance for this module is achieved via parameter-sharing (Ravanbakhsh et al., 2017).
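Eq. 6 can be sketched in numpy as follows (our illustration; the two-element output group, output vocabulary, and random $\phi, \psi$ are assumptions for the example). The trick is that for a permutation matrix $P$ representing $h$, row $\tilde{w}$ of $P\psi$ is exactly $\psi(h^{-1}\tilde{w})$, and the parameter sharing makes the logits permute along with the group action on $\phi$.

```python
import numpy as np

# Output vocabulary (names the logit rows) and a two-element group swapping
# the first two action words; both are illustrative choices.
vocab_out = ["LTURN", "RTURN", "JUMP"]
Vt, K, n = 3, 4, 2
G = [np.eye(Vt), np.eye(Vt)[[1, 0, 2]]]   # permutation matrices for g1, g2

def g_decode(phi, psi, G):
    """G-Decode (Eq. 6): logits[w] = sum_h phi(h) . psi(h^{-1} w).
    phi: (|G|, K) final representation; psi: (Vt, K) output embedding table.
    Row w of (G[h] @ psi) is psi[h^{-1} w], so one matrix product per h
    evaluates the layer at every output word at once."""
    logits = np.zeros(psi.shape[0])
    for h, P in enumerate(G):
        logits += (P @ psi) @ phi[h]
    return logits

rng = np.random.default_rng(0)
phi = rng.normal(size=(n, K))
psi = rng.normal(size=(Vt, K))

logits = g_decode(phi, psi, G)

# Equivariance via parameter sharing: permuting phi over the group
# (phi[[1, 0]] is the action of g2 on phi) permutes the logits by g2.
assert np.allclose(g_decode(phi[[1, 0]], psi, G), G[1] @ logits)
```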
+
+This completes the description of our equivariant decoder, illustrated in Figure 2b. Composing the equivariant encoder and decoder results in our complete sequence-to-sequence model. Importantly, since all operations in this model are equivariant, the complete model is itself also equivariant to the group $G$ (Kondor & Trivedi, 2018). In Section 6, we provide further implementation details for our model, and detail our empirical evaluation of its equivariant properties and their relation to the SCAN tasks described in Section 2.
+
+# 5 RELATED WORK
+
+In this section we review state-of-the-art methods to address SCAN compositional tasks. We focus on two recent models that we will compare to in our experiments.
+
+On the one hand, the syntactic attention model of Russin et al. (2019) builds on the idea that compositional generalization can be achieved by language models given the correct architectural organization. Borrowing inspiration from neuroscience, Russin et al. (2019) argue that compositionality might arise when using separate processing channels for semantic and syntactic information. In their model, the attention weights depend on a recurrent encoding of the input sequence, which they refer to as the syntactic representation. The attention weights are then applied to separate, context-independent embeddings of the words in the input sequence, which are intended to model a semantic representation. We find Russin et al. (2019) interesting from a group equivariance perspective, since one way to enforce equivariance is to use an invariant representation (about syntax) together with an additional representation (about semantics) that maintains the information about the original "sentence pose".
+
+On the other hand, the meta-learning (Thrun & Pratt, 2012; Schmidhuber, 1987) approach of Lake (2019) is a model that learns to generalize. In particular, Lake (2019) designs one specific and complex meta-learning procedure for each SCAN task, where a distribution over tasks is provided to the learner (Finn et al., 2017; Gordon et al., 2018). For example, in the Add jump and Around right tasks, the meta-learning procedure of Lake (2019) samples permutations from the relevant groups (the permutation groups on the verbs and set of directions, respectively). This is interpreted as data-augmentation, a valid procedure for encouraging equivariance (Cohen & Welling, 2016; Andreas, 2019; Weiler et al., 2018). However, at test-time, Lake (2019) sets the context set to the correct mapping between the permuted commands and their corresponding actions. For example, in the Add jump task, the context set for meta-testing would consist of the following pairs: {(WALK, WALK), (RUN, RUN), (LOOK, LOOK), (JUMP, JUMP)}. This is equivalent to providing the model with one-to-one information regarding the correct command-to-action mapping for the permuted words.
+
+# 6 EXPERIMENTS
+
+We now evaluate the empirical performance of our equivariant seq2seq model (described in Section 4) on the four SCAN tasks (described in Section 2). We compare our equivariant seq2seq to regular seq2seq models (Lake & Baroni, 2018), convolutional models (Dessi & Baroni, 2019), the syntactic attention model of Russin et al. (2019), and the meta-learning approach of Lake (2019). The compared seq2seq models use bi-directional, single-layer LSTM cells with 64 hidden units. For the equivariant
+
+| Model | Simple | Add Jump | Around Right | Length |
+| --- | --- | --- | --- | --- |
+| seq2seq (Lake & Baroni, 2018) | 99.7 | 1.2 | NA | 13.8 |
+| CNN (Dessi & Baroni, 2019) | 100.0 | 69.2 ± 9.2 | 56.7 ± 10.2 | 0.0 |
+| Syntactic Attention (Russin et al., 2019) | 100.0 | 91.0 ± 27.4 | 28.9 ± 34.8 | 15.2 ± 0.7 |
+| Meta seq2seq (Lake, 2019) | NA | 99.9 | 99.9* | 16.64 |
+| seq2seq (comparable architecture) | 100.0 | 0.0 ± 0.0 | 0.02 ± 2e-2 | 12.4 ± 2.3 |
+| Equivariant seq2seq (ours) | 100.0 | 99.1 ± 0.04 | 92.0 ± 0.24 | 15.9 ± 3.2 |
+
+Table 1: Test accuracies for four SCAN tasks, comparing our equivariant seq2seq to the state-of-the-art.
+
+seq2seq models, we use the cyclic permutation group on the verbs for the Add jump task, and the cyclic permutation group on directions for the Around right task. For Length, we use the product of those groups. Our model knows that the same group operates on both the input- and output- languages. However, it does not receive information regarding the correspondence between commands and actions in the set of words being permuted in the input / output languages. This is in contrast to Lake (2019), where (as stated in Section 5), it is necessary to provide the model with explicit information regarding the correct command-to-action mapping at test-time.
+
+Training procedures match those of Lake & Baroni (2018) where possible. We train models for $200k$ iterations, where each iteration consists of a minibatch of size 1, using the Adam optimizer (Kingma & Ba, 2015) with default parameters and a learning rate of 1e-4. We use teacher-forcing (Williams & Zipser, 1989) with a ratio of 0.5, and early-stopping based on a validation set consisting of $10\%$ of the training examples. As in previous works, we compute test accuracies by counting how many exact translations each model provides, across the test set associated with each task.
+
+# 6.1 RESULTS
+
+Table 1 summarizes the results of our experiments. First and as expected, all models achieve excellent performance on the Simple task, which does not require any form of compositional generalization.
+
+Second, our equivariant seq2seq model performs very well at the Add jump and Around right SCAN tasks, which are the two tasks satisfying our local equivariance assumption from Definition 3. Our equivariant seq2seq model significantly outperforms the regular seq2seq (Lake & Baroni, 2018) and convolutional (Dessi & Baroni, 2019) models, as well as the state-of-the-art methods of Russin et al. (2019) and Lake (2019). This result is an encouraging piece of evidence supporting our main hypothesis from Section 3. Next, let us compare the results of our equivariant seq2seq model with the previous state-of-the-art (Russin et al., 2019; Lake, 2019) in more detail.
+
+On the one hand, the syntactic attention model of Russin et al. (2019) achieves significant improvements over baseline methods at the Add jump SCAN task. However, it does not fare so well on the Around right task. Furthermore, its performance has high variance. Although we here report the numbers from Russin et al. (2019), we observed such high variance in our own implementation as well, where the model often achieved $0\%$ test accuracy. We hypothesize that modeling the invariance of the syntactic attention directly would result in improved performance and stability. This can be achieved, for instance, by replacing all verbs in the syntactic module by a shared word. As expected, by explicitly exploiting equivariance, our model outperforms Russin et al. (2019) on the Add jump and Around right SCAN tasks, while also being much more robust.
+
+On the other hand, the meta-learning model of Lake (2019) achieves excellent performance on the local equivariance tasks Add jump and Around right. This is additional evidence supporting the usefulness of local equivariance. In contrast to our model, Lake (2019) requires (i) a complicated model and training procedure tailored to each task, (ii) providing the model with the correct permutation of words, equivalent to telling the model the "true" mappings between the input and output words, and (iii) augmenting the set of words being permuted, to ensure enough diversity in the training distribution (for instance, adding additional directions beyond RIGHT and LEFT).
+
+# 6.2 ON THE DIFFICULTY OF LENGTH GENERALIZATION
+
+As seen in Table 1, length generalization remains a tough challenge in SCAN. While generating long sequences is a known challenge in seq2seq models (Bahdanau et al., 2015), we believe that this is not the main issue with our equivariant seq2seq model, as it is able to produce long translations when these appear in the training set (as can the other models). Therefore, this is not a capacity problem, but one of not being able to express the Length generalization SCAN task in terms of local equivariances on both input- and output- languages. We hypothesize that this is the very reason why Russin et al. (2019) and Lake (2019) also fail on this task.
+
+However, we suspect that some forms of local equivariance on the input language, but global equivariance on the output language, may help. For example, RUN TWICE, RUN THRICE and RUN AROUND LEFT TWICE are all input commands contained in the training set of the length task. A trained seq2seq model is able to execute them, but fails on the unseen test command RUN AROUND LEFT THRICE, suggesting that the network did not correctly understand the relationship between TWICE and THRICE. A network that is explicitly equivariant to the permutation of TWICE and THRICE should generalize correctly on RUN AROUND LEFT THRICE. However, while the TWICE-THRICE permutation is a local group operation (Definition 2), the corresponding operation on the output language, which is to repeat the same action sequence multiple times, is a global group operation. Similarly, permuting AND and AFTER in the input sequence using a local group operation, while operating globally on the output language by permuting the order of the associated actions, should help succeed on the Length generalization SCAN task. How to formalize the aforementioned global operations on the output language and build the desired equivariant network remains a fascinating open research question that we leave for future work.
+
+# 7 DISCUSSION AND FUTURE WORK
+
+This work has introduced a hypothesis linking group equivariance and compositional generalization in language. Motivated by this hypothesis, we have proposed an equivariant seq2seq translation model, which achieves state-of-the-art performance on a variety of SCAN tasks.
+
+Our work has several points for improvement. Most importantly, our model requires knowing the permutation symmetries of interest, to be provided by some domain expert. While this is simple to do in the synthetic language of SCAN, it may prove more difficult in real-world tasks. We propose three directions to attack this problem. (i) Group words by their parts-of-speech (e.g., nouns, verbs, etc.), which can be done automatically by standard part-of-speech taggers (Márquez & Rodríguez, 1998); (ii) Learn such groupings of words from corpora, for example using the recent work of Andreas (2019); (iii) Most appealingly, parameterize the symmetry group and learn operations end-to-end while enforcing the group structure. For permutation symmetries, the group elements can be parameterized by permutation matrices, and learned from data (Lyu et al., 2019). Our preliminary work in this direction hints that this is a fruitful avenue for future research.
+
+A further consideration to address is that of computational overhead. In particular, for the convolutional form we use in this work (Definition 4), computational complexity scales linearly with the size of the group, $\mathcal{O}(|G|)$ . This arises from the need to sum over group elements when the representation is a function on $G$ , and may be prohibitive when considering large groups. One way of addressing this issue when large symmetry groups are of interest is to consider more efficient computational layers for permutation equivariance (e.g. Zaheer et al., 2017; Ravanbakhsh et al., 2017). These methods incur less computational overhead at the cost of restricting the layer capacity. Another interesting option for future research is to consider sub-sampling group elements when performing the summation in Definition 4, which requires further consideration of the consequences of doing so.
+
+Another exciting direction for future research is to consider global equivariances. Many operations of interest, e.g. groups operating directly on parse trees, can only be expressed as global equivariances. Modeling these equivariances holds exciting possibilities for capturing non-trivial symmetries in language tasks, but also requires more sophisticated machinery than is proposed in this work.
+
+Finally, in further theoretical work, we would like to explore the relation between our equivariance framework and the idea of compositionality in formal semantics (Kratzer & Heim, 1998). On the one hand, the classic idea of compositionality as an isomorphism between syntax and semantics is intuitively related to the notion of group equivariance. On the other hand, as shown by the failures at the length generalization example, it is still unclear how to apply our ideas to more sophisticated forms of permutation, such as those involving grammatical phrases rather than words. This would also require extending our approach to account for the context-sensitivity that pervades linguistic composition (cf. the natural interpretation of "run" in "run the marathon" vs. "run the code").
+
+# ACKNOWLEDGMENTS
+
+We thank Emmanuel Dupoux and Clara Vania for helpful feedback and discussions.
+
+# REFERENCES
+
+Jacob Andreas. Good-enough compositional data augmentation, 2019.
+Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR Conference Track, San Diego, CA, 2015. Published online: http://www.iclr.cc/doku.php?id=iclr2015:main.
+Marco Baroni. Linguistic generalization and compositionality in modern artificial neural networks. arXiv preprint arXiv:1904.00157, 2019.
+Benjamin Bloem-Reddy and Yee Whye Teh. Probabilistic symmetry and invariant neural networks. arXiv preprint arXiv:1901.06082, 2019.
+Taco Cohen and Max Welling. Group equivariant convolutional networks. In International conference on machine learning, pp. 2990-2999, 2016.
+Roberto Dessi and Marco Baroni. CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks. arXiv preprint arXiv:1905.08527, 2019.
+Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1126-1135. JMLR.org, 2017.
+Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, and Richard E Turner. Meta-learning probabilistic inference for prediction. arXiv preprint arXiv:1805.09921, 2018.
+Irina Higgins, David Amos, David Pfau, Sebastien Racaniere, Loic Matthey, Danilo Rezende, and Alexander Lerchner. Towards a definition of disentangled representations. arXiv preprint arXiv:1812.02230, 2018.
+Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735-1780, 1997.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
+Imre Risi Kondor. Group theoretical methods in machine learning. Columbia University, 2008.
+Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. arXiv preprint arXiv:1802.03690, 2018.
+Angelika Kratzer and Irene Heim. Semantics in generative grammar, volume 1185. Blackwell Oxford, 1998.
+Brenden M Lake. Compositional generalization through meta sequence-to-sequence learning. arXiv preprint arXiv:1906.05381, 2019.
+Brenden M. Lake and Marco Baroni. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, pp. 2879-2888, 2018.
+
+João Loula, Marco Baroni, and Brenden Lake. Rearranging the familiar: Testing compositional generalization in recurrent networks. pp. 108-114, 01 2018. doi: 10.18653/v1/W18-5413.
+Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.
+Jiancheng Lyu, Shuai Zhang, Yingyong Qi, and Jack Xin. AutoShuffleNet: Learning permutation matrices via an exact lipschitz continuous penalty in deep convolutional neural networks. arXiv preprint arXiv:1901.08624, 2019.
+Lluis Marquez and Horacio Rodriguez. Part-of-speech tagging using decision trees. In European Conference on Machine Learning, pp. 25-36. Springer, 1998.
+Siamak Ravanbakhsh, Jeff Schneider, and Barnabas Poczos. Equivariance through parameter-sharing. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2892-2901. JMLR.org, 2017.
+Jake Russin, Jason Jo, and Randall C O'Reilly. Compositional generalization in a deep seq2seq model by separating syntax and semantics. arXiv preprint arXiv:1904.09708, 2019.
+Jürgen Schmidhuber. Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta... hook. PhD thesis, Technische Universität München, 1987.
+Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104-3112, 2014.
+Sebastian Thrun and Lorien Pratt. Learning to learn. Springer Science & Business Media, 2012.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017.
+Maurice Weiler, Fred A Hamprecht, and Martin Storath. Learning steerable filters for rotation equivariant cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 849-858, 2018.
+Ronald J Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270-280, 1989.
+Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan R Salakhutdinov, and Alexander J Smola. Deep sets. In Advances in Neural Information Processing Systems, pp. 3394-3404, 2017.
+
+# A DETAILS ON THE SCAN DATASET
+
+SCAN is composed from a non-recursive grammar, as shown in Figure 3. In particular, SCAN consists of all commands that can be generated from this grammar (20,910 command sequences), with their deterministic mapping into actions, as detailed by Figure 4.
+
+$$
+\begin{array}{lll}
+\mathrm{C} \to \mathrm{S} \text{ and } \mathrm{S} & \mathrm{V} \to \mathrm{D}[1] \text{ opposite } \mathrm{D}[2] & \mathrm{D} \to \text{turn left} \\
+\mathrm{C} \to \mathrm{S} \text{ after } \mathrm{S} & \mathrm{V} \to \mathrm{D}[1] \text{ around } \mathrm{D}[2] & \mathrm{D} \to \text{turn right} \\
+\mathrm{C} \to \mathrm{S} & \mathrm{V} \to \mathrm{D} & \mathrm{U} \to \text{walk} \\
+\mathrm{S} \to \mathrm{V} \text{ twice} & \mathrm{V} \to \mathrm{U} & \mathrm{U} \to \text{look} \\
+\mathrm{S} \to \mathrm{V} \text{ thrice} & \mathrm{D} \to \mathrm{U} \text{ left} & \mathrm{U} \to \text{run} \\
+\mathrm{S} \to \mathrm{V} & \mathrm{D} \to \mathrm{U} \text{ right} & \mathrm{U} \to \text{jump}
+\end{array}
+$$
+
+Figure 3: The grammar used to generate commands in the SCAN domain. Indexing notation is used to allow infixing: read $D[i]$ as "the $i$ -th element directly dominated by category $D$ ". Image borrowed from Lake & Baroni (2018).
+
+$$
+\begin{array}{ll}
+[\![\text{walk}]\!] = \text{WALK} & [\![\text{u opposite left}]\!] = [\![\text{turn opposite left}]\!]\ [\![\text{u}]\!] \\
+[\![\text{look}]\!] = \text{LOOK} & [\![\text{u opposite right}]\!] = [\![\text{turn opposite right}]\!]\ [\![\text{u}]\!] \\
+[\![\text{run}]\!] = \text{RUN} & [\![\text{turn around left}]\!] = \text{LTURN LTURN LTURN LTURN} \\
+[\![\text{jump}]\!] = \text{JUMP} & [\![\text{turn around right}]\!] = \text{RTURN RTURN RTURN RTURN} \\
+[\![\text{turn left}]\!] = \text{LTURN} & [\![\text{u around left}]\!] = \text{LTURN } [\![\text{u}]\!] \text{ LTURN } [\![\text{u}]\!] \text{ LTURN } [\![\text{u}]\!] \text{ LTURN } [\![\text{u}]\!] \\
+[\![\text{turn right}]\!] = \text{RTURN} & [\![\text{u around right}]\!] = \text{RTURN } [\![\text{u}]\!] \text{ RTURN } [\![\text{u}]\!] \text{ RTURN } [\![\text{u}]\!] \text{ RTURN } [\![\text{u}]\!] \\
+[\![\text{u left}]\!] = \text{LTURN } [\![\text{u}]\!] & [\![\text{x twice}]\!] = [\![\text{x}]\!]\ [\![\text{x}]\!] \\
+[\![\text{u right}]\!] = \text{RTURN } [\![\text{u}]\!] & [\![\text{x thrice}]\!] = [\![\text{x}]\!]\ [\![\text{x}]\!]\ [\![\text{x}]\!] \\
+[\![\text{turn opposite left}]\!] = \text{LTURN LTURN} & [\![\text{x1 and x2}]\!] = [\![\text{x1}]\!]\ [\![\text{x2}]\!] \\
+[\![\text{turn opposite right}]\!] = \text{RTURN RTURN} & [\![\text{x1 after x2}]\!] = [\![\text{x2}]\!]\ [\![\text{x1}]\!]
+\end{array}
+$$
+
+Figure 4: The SCAN translation mapping. Double brackets denote the interpretation function translating SCAN's commands (input language) into the action (output) language (actions are denoted by upper-case strings). Image borrowed from Lake & Baroni (2018).
+
+# B G-LSTM DETAILS
+
+We provide the equations for implementing our G-LSTM. Given $\boldsymbol{h}_{t-1}, \boldsymbol{c}_{t-1}$ (hidden state and cell state, respectively) and the embedding $e(w_t)$, all of which are functions $G \to \mathbb{R}^K$, we can describe the G-LSTM cell as follows, writing $\boldsymbol{x}_t := e(w_t)$ and $\boldsymbol{s}_{t-1} := \boldsymbol{h}_{t-1}$:
+
+$$
+\boldsymbol{i}_t = \sigma\left(\boldsymbol{x}_t * \boldsymbol{\psi}_{ii} + \boldsymbol{s}_{t-1} * \boldsymbol{\psi}_{ih}\right); \quad \boldsymbol{f}_t = \sigma\left(\boldsymbol{x}_t * \boldsymbol{\psi}_{fi} + \boldsymbol{s}_{t-1} * \boldsymbol{\psi}_{fh}\right)
+$$
+
+$$
+\boldsymbol{g}_t = \tanh\left(\boldsymbol{x}_t * \boldsymbol{\psi}_{gi} + \boldsymbol{s}_{t-1} * \boldsymbol{\psi}_{gh}\right); \quad \boldsymbol{o}_t = \sigma\left(\boldsymbol{x}_t * \boldsymbol{\psi}_{oi} + \boldsymbol{s}_{t-1} * \boldsymbol{\psi}_{oh}\right)
+$$
+
+$$
+\boldsymbol{c}_t = \boldsymbol{f}_t \circ \boldsymbol{c}_{t-1} + \boldsymbol{i}_t \circ \boldsymbol{g}_t; \quad \boldsymbol{h}_t = \boldsymbol{o}_t \circ \tanh(\boldsymbol{c}_t),
+$$
+
+where $\{\psi_{jk}: G \to \mathbb{R}^K;\, j \in \{i, f, g, o\};\, k \in \{i, h\}\}$ are the learnable filters of the cell. Here we have used the shorthand
+
+$$
+\boldsymbol{f} * \boldsymbol{\psi} := G\text{-Conv}(\boldsymbol{f}; \boldsymbol{\psi})
+$$
+
+for two functions on the group.
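The G-LSTM equations above can be sketched in numpy (our own illustration, assuming a cyclic group so the group action is a row shift; the $K \times K$ channel mixing in the filters is an assumption the paper's lighter notation leaves implicit). Every linear map of a standard LSTM becomes a group convolution, and all gating is pointwise, so one step maps equivariant inputs to equivariant $\boldsymbol{h}_t, \boldsymbol{c}_t$.

```python
import numpy as np

def cyclic_gconv(f, psi):
    """Eq. 1 on the cyclic group C_n. f: (n, K); psi: (n, K, K') -> (n, K')."""
    n = f.shape[0]
    out = np.zeros((n, psi.shape[2]))
    for g in range(n):
        for h in range(n):
            out[g] += f[h] @ psi[(h - g) % n]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def g_lstm_step(x_t, h_prev, c_prev, psis):
    """One G-LSTM step: the four LSTM gates with each linear map replaced by
    a group convolution, and all gating applied pointwise."""
    i = sigmoid(cyclic_gconv(x_t, psis['ii']) + cyclic_gconv(h_prev, psis['ih']))
    f = sigmoid(cyclic_gconv(x_t, psis['fi']) + cyclic_gconv(h_prev, psis['fh']))
    g = np.tanh(cyclic_gconv(x_t, psis['gi']) + cyclic_gconv(h_prev, psis['gh']))
    o = sigmoid(cyclic_gconv(x_t, psis['oi']) + cyclic_gconv(h_prev, psis['oh']))
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c

n, K = 4, 3
rng = np.random.default_rng(0)
psis = {k: rng.normal(size=(n, K, K))
        for k in ['ii', 'ih', 'fi', 'fh', 'gi', 'gh', 'oi', 'oh']}
x = rng.normal(size=(n, K))
h0 = c0 = np.zeros((n, K))       # invariant initial states

h1, c1 = g_lstm_step(x, h0, c0, psis)

# Equivariance: a group shift of the input shifts both h and c identically.
h1s, c1s = g_lstm_step(np.roll(x, 1, axis=0), h0, c0, psis)
assert np.allclose(h1s, np.roll(h1, 1, axis=0))
assert np.allclose(c1s, np.roll(c1, 1, axis=0))
```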
\ No newline at end of file
diff --git a/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/images.zip b/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6a609bbdc7c5d9fe7a4596f357df2fa312bccfcb
--- /dev/null
+++ b/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ae3c90754cc0391a85fb15dc09d9cd8ccbcc43edceab27046dd99b4c1c1e2d7
+size 157391
diff --git a/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/layout.json b/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..bad7b1ed85e785e564937cb9b3de118310dbabcd
--- /dev/null
+++ b/permutationequivariantmodelsforcompositionalgeneralizationinlanguage/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bc25fbaaf6b8ffba7d9611767959f8850b3fed1a919731e8bda06c62c37dec2b
+size 478845
diff --git a/phasetransitionsfortheinformationbottleneckinrepresentationlearning/979d5420-8f1b-4e59-9387-4560534f5aec_content_list.json b/phasetransitionsfortheinformationbottleneckinrepresentationlearning/979d5420-8f1b-4e59-9387-4560534f5aec_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5c2ff597d88a590302d343097f61da7838c64b6e
--- /dev/null
+++ b/phasetransitionsfortheinformationbottleneckinrepresentationlearning/979d5420-8f1b-4e59-9387-4560534f5aec_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:97ce39c4034394dc36aea273c661c6ac5305240c4e98d2add4ea374d340bdd45
+size 175916
diff --git a/phasetransitionsfortheinformationbottleneckinrepresentationlearning/979d5420-8f1b-4e59-9387-4560534f5aec_model.json b/phasetransitionsfortheinformationbottleneckinrepresentationlearning/979d5420-8f1b-4e59-9387-4560534f5aec_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..a821b4053c78bc8dbf4e9bf663f5188b5662af80
--- /dev/null
+++ b/phasetransitionsfortheinformationbottleneckinrepresentationlearning/979d5420-8f1b-4e59-9387-4560534f5aec_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd2d58b1d022d2bbba0553ddf151d3b6e76a3fe27e7ff584e38fe7c1480a0e1f
+size 201850
diff --git a/phasetransitionsfortheinformationbottleneckinrepresentationlearning/979d5420-8f1b-4e59-9387-4560534f5aec_origin.pdf b/phasetransitionsfortheinformationbottleneckinrepresentationlearning/979d5420-8f1b-4e59-9387-4560534f5aec_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..03a76409888cf8b8a84d1d9577e85f9a7ac08205
--- /dev/null
+++ b/phasetransitionsfortheinformationbottleneckinrepresentationlearning/979d5420-8f1b-4e59-9387-4560534f5aec_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e14756101d98795aa2d7d6b548acfae251f425b9b8b4533bfc700d2f8a77b499
+size 790458
diff --git a/phasetransitionsfortheinformationbottleneckinrepresentationlearning/full.md b/phasetransitionsfortheinformationbottleneckinrepresentationlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f44223459919bbb9c573f015c520e9e0a6a46ec2
--- /dev/null
+++ b/phasetransitionsfortheinformationbottleneckinrepresentationlearning/full.md
@@ -0,0 +1,795 @@
+# PHASE TRANSITIONS FOR THE INFORMATION BOTTLE-NECK IN REPRESENTATION LEARNING
+
+Tailin Wu
+
+Stanford
+
+tailin@cs.stanford.edu
+
+Ian Fischer
+
+Google Research
+
+iansf@google.com
+
+# ABSTRACT
+
+In the Information Bottleneck (IB), when tuning the relative strength between compression and prediction terms, how do the two terms behave, and what's their relationship with the dataset and the learned representation? In this paper, we set out to answer these questions by studying multiple phase transitions in the IB objective: $\mathrm{IB}_{\beta}[p(z|x)] = I(X;Z) - \beta I(Y;Z)$ defined on the encoding distribution $p(z|x)$ for input $X$ , target $Y$ and representation $Z$ , where sudden jumps of $\frac{dI(Y;Z)}{d\beta}$ and prediction accuracy are observed with increasing $\beta$ . We introduce a definition for IB phase transitions as a qualitative change of the IB loss landscape, and show that the transitions correspond to the onset of learning new classes. Using second-order calculus of variations, we derive a formula that provides a practical condition for IB phase transitions, and draw its connection with the Fisher information matrix for parameterized models. We provide two perspectives to understand the formula, revealing that each IB phase transition is finding a component of maximum (nonlinear) correlation between $X$ and $Y$ orthogonal to the learned representation, in close analogy with canonical-correlation analysis (CCA) in linear settings. Based on the theory, we present an algorithm for discovering phase transition points. Finally, we verify that our theory and algorithm accurately predict phase transitions in categorical datasets, predict the onset of learning new classes and class difficulty in MNIST, and predict prominent phase transitions in CIFAR10.
+
+# 1 INTRODUCTION
+
+The Information Bottleneck (IB) objective (Tishby et al., 2000):
+
+$$
+\mathrm {I B} _ {\beta} [ p (z | x) ] := I (X; Z) - \beta I (Y; Z) \tag {1}
+$$
+
+explicitly trades off model compression $(I(X;Z), I(\cdot ;\cdot)$ denoting mutual information) with predictive performance $(I(Y;Z))$ using the Lagrange multiplier $\beta$ , where $X, Y$ are observed random variables, and $Z$ is a learned representation of $X$ . The IB method has proved effective in a variety of scenarios, including improving the robustness against adversarial attacks (Alemi et al., 2016; Fischer, 2018), learning invariant and disentangled representations (Achille & Soatto, 2018a;b), underlying information-based geometric clustering (Strouse & Schwab, 2017b), improving the training and performance in adversarial learning (Peng et al., 2018), and facilitating skill discovery (Sharma et al., 2019) and learning goal-conditioned policy (Goyal et al., 2019) in reinforcement learning.
+
+From Eq. (1) we see that when $\beta \to 0$ , it encourages $I(X;Z) = 0$ , which leads to a trivial representation $Z$ that is independent of $X$ , while when $\beta \to +\infty$ , it reduces to a maximum likelihood objective$^1$ that does not constrain the information flow. Between these two extremes, how will the IB objective behave? Will prediction and compression performance change smoothly, or do there exist interesting transitions in between? In Wu et al. (2019), the authors observe and study the learnability transition, i.e. the $\beta$ value at which the IB objective transitions from a trivial global minimum to learning a nontrivial representation. They also show how this first phase transition relates to the structure of the dataset. However, to answer the full question, we need to consider the full range of $\beta$ .
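For intuition, both terms of Eq. (1) can be computed exactly in the fully tabular case. The following is a minimal sketch of our own (function names are ours, not from the paper), assuming a discrete joint $p(x,y)$ and encoder $p(z|x)$ under the Markov chain $Z - X - Y$:

```python
import numpy as np

def mutual_information(pab):
    """I(A;B) in nats for a joint distribution given as a 2-D array."""
    pa = pab.sum(axis=1, keepdims=True)
    pb = pab.sum(axis=0, keepdims=True)
    mask = pab > 0
    return float((pab[mask] * np.log(pab[mask] / (pa @ pb)[mask])).sum())

def ib_objective(pxy, pz_given_x, beta):
    """IB_beta[p(z|x)] = I(X;Z) - beta * I(Y;Z) for tabular distributions.
    pxy: (|X|,|Y|) joint; pz_given_x: (|X|,|Z|) rows summing to 1."""
    px = pxy.sum(axis=1)
    pxz = px[:, None] * pz_given_x      # p(x,z) = p(x) p(z|x)
    pyz = pxy.T @ pz_given_x            # p(y,z) = sum_x p(x,y) p(z|x)
    return mutual_information(pxz) - beta * mutual_information(pyz)
```

A constant encoder (all mass on one $z$) makes both terms vanish, recovering the trivial representation discussed above.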
+
+
+Figure 1: CIFAR10 plots (a) showing the information plane, as well as $\beta$ vs (b) $I(X;Z)$ and $I(Y;Z)$ , and (c) accuracy, all on the training set with $20\%$ label noise. The arrows point to empirically-observed phase transitions. The vertical lines correspond to phase transitions found with Alg. 1.
+
+
+
+
+
+Motivation. To get a sense of how $I(Y;Z)$ and $I(X;Z)$ vary with $\beta$ , we train Variational Information Bottleneck (VIB) models (Alemi et al., 2016) on the CIFAR10 dataset (Krizhevsky & Hinton, 2009), where each experiment is at a different $\beta$ and random initialization of the model. Fig. 1 shows the $I(X;Z)$ , $I(Y;Z)$ and accuracy vs. $\beta$ , as well as $I(Y;Z)$ vs. $I(X;Z)$ for CIFAR10 with $20\%$ label noise (see Appendix I for details).
+
+From Fig. 1(b)(c), we see that as we increase $\beta$ , instead of going up smoothly, both $I(X;Z)$ and $I(Y;Z)$ show multiple phase transitions, where the slopes $\frac{dI(X;Z)}{d\beta}$ and $\frac{dI(Y;Z)}{d\beta}$ are discontinuous and the accuracy has discrete jumps. The observation lets us refine our question: When do the phase transitions occur, and how do they depend on the structure of the dataset? These questions are important, since answering them will help us gain a better understanding of the IB objective and its close interplay with the dataset and the learned representation.
+
+Moreover, the IB objective belongs to a general form of two-term trade-offs in many machine learning objectives: $L =$ Prediction-loss + $\beta \cdot$ Complexity, where the complexity term generally takes the form of regularization. Usually, learning is set at a specific $\beta$ . Many more insights can be gained if we understand the behavior of the prediction loss and model complexity with varying $\beta$ , and how they depend on the dataset. The techniques developed to address the question in the IB setting may also help us understand the two-term tradeoff in other learning objectives.
+
+Contributions. In this work, we begin to address the above question in IB settings. Specifically:
+
+- We identify a qualitative change of the IB loss landscape w.r.t. $p(z|x)$ for varying $\beta$ as IB phase transitions (Section 3).
+- Based on the definition, we introduce a quantity $G[p(z|x)]$ and use it to prove a theorem giving a practical condition for IB phase transitions. We further reveal the connection between $G[p(z|x)]$ and the Fisher information matrix when $p(z|x)$ is parameterized by $\theta$ (Section 3).
+- We reveal the close interplay between the IB objective, the dataset and the learned representation, by showing that in IB, each phase transition corresponds to learning a new nonlinear component of maximum correlation between $X$ and $Y$ , orthogonal to the previously-learned $Z$ , and each with decreasing strength (Section 4).
+
+To the best of our knowledge, our work provides the first theoretical formula to address IB phase transitions in the most general setting. In addition, we present an algorithm for iteratively finding the IB phase transition points (Section 5). We show that our theory and algorithm give tight matches with the observed phase transitions in categorical datasets, predict the onset of learning new classes and class difficulty in MNIST, and predict prominent transitions in CIFAR10 experiments (Section 6).
+
+# 2 RELATED WORK
+
+The Information Bottleneck Method (Tishby et al., 2000) provides a tabular method based on the Blahut-Arimoto (BA) Algorithm (Blahut, 1972) to numerically solve the IB functional for the optimal encoder distribution $P(Z|X)$ , given the trade-off parameter $\beta$ and the cardinality of the representation
+
+variable $Z$ . This work has been extended in a variety of directions, including to the case where all three variables $X, Y, Z$ are multivariate Gaussians (Chechik et al., 2005), cases of variational bounds on the IB and related functionals for amortized learning (Alemi et al., 2016; Achille & Soatto, 2018a; Fischer, 2018), and a more generalized interpretation of the constraint on model complexity as a Kolmogorov Structure Function (Achille et al., 2018). Previous theoretical analyses of IB include Rey & Roth (2012), which looks at IB through the lens of copula functions, and Shamir et al. (2010), which starts to tackle the question of how to bound generalization with IB. We will make practical use of the original IB algorithm, as well as the amortized bounds of the Variational Information Bottleneck (Alemi et al., 2016) and the Conditional Entropy Bottleneck (Fischer, 2018).
+
+Phase transitions, where key quantities change discontinuously with varying relative strength in the two-term trade-off, have been observed in many different learning domains, for multiple learning objectives. In Rezende & Viola (2018), the authors observe phase transitions in the latent representation of $\beta$ -VAE for varying $\beta$ . Strouse & Schwab (2017b) utilize the kink angle of the phase transitions in the Deterministic Information Bottleneck (DIB) (Strouse & Schwab, 2017a) to determine the optimal number of clusters for geometric clustering. Tegmark & Wu (2019) explicitly consider critical points in binary classification tasks using a discrete information bottleneck with a non-convex Pareto-optimal frontier. In Achille & Soatto (2018a), the authors observe a transition in the tradeoff of $I(\theta; X, Y)$ vs. $H(Y|X, \theta)$ in InfoDropout. Under IB settings, Chechik et al. (2005) study the Gaussian Information Bottleneck, and analytically solve for the critical values $\beta_i^c = \frac{1}{1 - \lambda_i}$ , where $\lambda_i$ are eigenvalues of the matrix $\Sigma_{x|y} \Sigma_x^{-1}$ , and $\Sigma_x$ is the covariance matrix. This work provides valuable insights for IB, but is limited to the special case that $X, Y$ and $Z$ are jointly Gaussian. Phase transitions in the general IB setting have also been observed, which Tishby (2018) describes as "information bifurcation". In Wu et al. (2019), the authors study the first phase transition, i.e. the learnability phase transition, and provide insights on how the learnability depends on the dataset. Ours is the first work to address all the IB phase transitions in the most general setting, and to provide theoretical insights on the interplay between the IB objective, its phase transitions, the dataset, and the learned representation.
+
+# 3 FORMULA FOR IB PHASE TRANSITIONS
+
+# 3.1 DEFINITIONS
+
+Let $X \in \mathcal{X}, Y \in \mathcal{Y}, Z \in \mathcal{Z}$ be random variables denoting the input, target and representation, respectively, having a joint probability distribution $p(X,Y,Z)$ with support $\mathcal{X} \times \mathcal{Y} \times \mathcal{Z}$ . $X, Y$ and $Z$ satisfy the Markov chain $Z - X - Y$ , i.e. $Y$ and $Z$ are conditionally independent given $X$ . We assume that integrals (or sums, if $X, Y$ or $Z$ are discrete random variables) are over $\mathcal{X} \times \mathcal{Y} \times \mathcal{Z}$ . We use $x, y$ and $z$ to denote instances of the respective random variables. These settings are used throughout the paper. We can view the IB objective $\mathrm{IB}_{\beta}[p(z|x)]$ (Eq. 1) as a functional of the encoding distribution $p(z|x)$ . To prepare for the introduction of IB phase transitions, we first define the relative perturbation function and the second variation, as follows.
+
+Definition 1. Relative perturbation function: For $p(z|x)$ , its relative perturbation function $r(z|x)$ is a bounded function that maps $\mathcal{X} \times \mathcal{Z}$ to $\mathbb{R}$ and satisfies $\mathbb{E}_{z \sim p(z|x)}[r(z|x)] = 0$ . Formally, define $\mathcal{Q}_{\mathcal{Z}|\mathcal{X}} := \{r(z|x) : \mathcal{X} \times \mathcal{Z} \to \mathbb{R} \mid \mathbb{E}_{z \sim p(z|x)}[r(z|x)] = 0, \text{ and } \exists M > 0 \text{ s.t. } \forall x \in \mathcal{X}, z \in \mathcal{Z}, |r(z|x)| \leq M\}$ . We have that $r(z|x) \in \mathcal{Q}_{\mathcal{Z}|\mathcal{X}}$ iff $r(z|x)$ is a relative perturbation function of $p(z|x)$ . The perturbed probability (density) is $p'(z|x) = p(z|x) (1 + \epsilon \cdot r(z|x))$ for some $\epsilon > 0$ .
+
+Definition 2. Second variation: Let functional $F[f(x)]$ be defined on some normed linear space $\mathcal{R}$ . Let us add a perturbative function $\epsilon \cdot h(x)$ to $f(x)$ , and now the functional $F[f(x) + \epsilon \cdot h(x)]$ can be expanded as
+
+$$
+\begin{array}{l} \Delta F [ f (x) ] = F [ f (x) + \epsilon \cdot h (x) ] - F [ f (x) ] \\ = \varphi_ {1} [ \epsilon \cdot h (x) ] + \varphi_ {2} [ \epsilon \cdot h (x) ] + \varphi_ {r} [ \epsilon \cdot h (x) ] \| \epsilon \cdot h (x) \| ^ {2} \\ \end{array}
+$$
+
+such that $\lim_{\epsilon \to 0} \varphi_r[\epsilon \cdot h(x)] = 0$ , where $\|\cdot\|$ denotes the norm, $\varphi_1[\epsilon \cdot h(x)] = \epsilon \frac{dF[f(x)]}{d\epsilon}$ is a linear functional of $\epsilon \cdot h(x)$ , and is called the first variation, denoted as $\delta F[f(x)]$ . $\varphi_2[\epsilon \cdot h(x)] = \frac{1}{2}\epsilon^2 \frac{d^2F[f(x)]}{d\epsilon^2}$ is a quadratic functional of $\epsilon \cdot h(x)$ , and is called the second variation, denoted as $\delta^2 F[f(x)]$ .
+
+We can think of the perturbation function $\epsilon \cdot h(x)$ as an infinite-dimensional "vector" ( $x$ being the indices), with $\epsilon$ its amplitude and $h(x)$ its direction. With the above preparations, we define an IB phase transition as a change in the local curvature at the global minimum of $\mathrm{IB}_{\beta}[p(z|x)]$ .
+
+Definition 3. IB phase transitions: Let $r(z|x) \in \mathcal{Q}_{\mathcal{Z}|\mathcal{X}}$ be a perturbation function of $p(z|x)$ , $p_{\beta}^{*}(z|x)$ denote the optimal solution of $IB_{\beta}[p(z|x)]$ at $\beta$ , where the IB functional $IB[\cdot]$ is defined in Eq. (1). The IB phase transitions $\beta_{i}^{c}$ are the $\beta$ values satisfying the following two conditions:
+
+(1) $\forall r(z|x)\in \mathcal{Q}_{\mathcal{Z}|\mathcal{X}},\delta^{2}IB_{\beta}[p(z|x)]\big|_{p_{\beta}^{*}(z|x)}\geq 0;$
+(2) $\lim_{\beta' \to \beta^{+}} \inf_{r(z|x) \in \mathcal{Q}_{\mathcal{Z}|\mathcal{X}}} \delta^{2}IB_{\beta'}[p(z|x)]\big|_{p_{\beta}^{*}(z|x)} = 0^{-}$ .
+
+Here $\beta^{+}$ and $0^{-}$ denote one-sided limits.
+
+We can understand the $\delta^2\mathrm{IB}_{\beta}[p(z|x)]$ as a local "curvature" of the IB objective $\mathrm{IB}_{\beta}$ (Eq. 1) w.r.t. $p(z|x)$ , along some relative perturbation $r(z|x)$ . A phase transition occurs when the convexity of $\mathrm{IB}_{\beta}[p(z|x)]$ w.r.t. $p(z|x)$ changes from a minimum to a saddle point in the neighborhood of its optimal solution $p_{\beta}^{*}(z|x)$ as $\beta$ increases from $\beta_{c}$ to $\beta_{c} + 0^{+}$ . This means that there exists a perturbation to go downhill and find a better minimum. We validate this definition empirically below.
+
+# 3.2 CONDITION FOR IB PHASE TRANSITIONS
+
+The definition for IB phase transition (Definition 3) indicates the important role $\delta^2\mathrm{IB}_{\beta}[p(z|x)]$ plays on the optimal solution in providing the condition for phase transitions. To concretize it and prepare for a more practical condition for IB phase transitions, we expand $\mathrm{IB}_{\beta}[p(z|x)(1 + \epsilon \cdot r(z|x))]$ to the second order of $\epsilon$ , giving:
+
+Lemma 0.1. For $\mathrm{IB}_{\beta}[p(z|x)]$ , the condition of $\forall r(z|x) \in \mathcal{Q}_{\mathcal{Z}|\mathcal{X}}$ , $\delta^2\mathrm{IB}_{\beta}[p(z|x)] \geq 0$ is equivalent to $\beta \leq G[p(z|x)]$ . The threshold function $G[p(z|x)]$ is given by:
+
+$$
+G[ p (z | x) ] := \inf _ {r (z | x) \in \mathcal {Q} _ {\mathcal {Z} | \mathcal {X}}} \mathcal {G} [ r (z | x); p (z | x) ]
+$$
+
+$$
+\mathcal {G} [ r (z | x); p (z | x) ] := \frac {\mathbb {E} _ {x , z \sim p (x , z)} \left[ r ^ {2} (z | x) \right] - \mathbb {E} _ {z \sim p (z)} \left[ \left(\mathbb {E} _ {x \sim p (x | z)} [ r (z | x) ]\right) ^ {2} \right]}{\mathbb {E} _ {y , z \sim p (y , z)} \left[ \left(\mathbb {E} _ {x \sim p (x | y , z)} [ r (z | x) ]\right) ^ {2} \right] - \mathbb {E} _ {z \sim p (z)} \left[ \left(\mathbb {E} _ {x \sim p (x | z)} [ r (z | x) ]\right) ^ {2} \right]} \tag {2}
+$$
+
+The proof is given in Appendix B, in which we also give Eq. (20) for empirical estimation. Note that Lemma 0.1 is very general and can be applied to any $p(z|x)$ , not only at the optimal solution $p_{\beta}^{*}(z|x)$ .
+
+The Fisher Information matrix. In practice, the encoder $p_{\theta}(z|x)$ is usually parameterized by some parameter vector $\pmb{\theta} = (\theta_1, \theta_2, \dots, \theta_k)^T \in \Theta$ , e.g. weights and biases in a neural net, where $\Theta$ is the parameter field. An infinitesimal change of $\pmb{\theta}' \gets \pmb{\theta} + \Delta \pmb{\theta}$ induces a relative perturbation $\epsilon \cdot r(z|x) \simeq \Delta \pmb{\theta}^T \frac{\partial \log p_{\theta}(z|x)}{\partial \pmb{\theta}}$ on $p_{\theta}(z|x)$ , from which we can compute the threshold function $G_{\Theta}[p_{\theta}(z|x)]$ :
+
+Lemma 0.2. For $\mathrm{IB}_{\beta}[p_{\theta}(z|x)]$ objective, the condition of $\forall \Delta \theta \in \Theta$ , $\delta^2\mathrm{IB}_{\beta}[p_{\theta}(z|x)] \geq 0$ is equivalent to $\beta \leq G_{\Theta}[p_{\theta}(z|x)]$ , where
+
+$$
+G _ {\Theta} \left[ p _ {\boldsymbol {\theta}} (z | x) \right] := \inf _ {\Delta \boldsymbol {\theta} \in \Theta} \frac {\Delta \boldsymbol {\theta} ^ {T} \left(\mathcal {I} _ {Z | X} (\boldsymbol {\theta}) - \mathcal {I} _ {Z} (\boldsymbol {\theta})\right) \Delta \boldsymbol {\theta}}{\Delta \boldsymbol {\theta} ^ {T} \left(\mathcal {I} _ {Z | Y} (\boldsymbol {\theta}) - \mathcal {I} _ {Z} (\boldsymbol {\theta})\right) \Delta \boldsymbol {\theta}} = \lambda_ {\max } ^ {- 1} \tag {3}
+$$
+
+where $\mathcal{I}_Z(\pmb{\theta}) := \int dz p_{\pmb{\theta}}(z) \left( \frac{\partial \log p_{\pmb{\theta}}(z)}{\partial \pmb{\theta}} \right) \left( \frac{\partial \log p_{\pmb{\theta}}(z)}{\partial \pmb{\theta}} \right)^T$ is the Fisher information matrix of $\pmb{\theta}$ for $Z$ , $\mathcal{I}_{Z|X}(\pmb{\theta}) := \int dx dz p(x) p_{\pmb{\theta}}(z|x) \left( \frac{\partial \log p_{\pmb{\theta}}(z|x)}{\partial \pmb{\theta}} \right) \left( \frac{\partial \log p_{\pmb{\theta}}(z|x)}{\partial \pmb{\theta}} \right)^T$ , $\mathcal{I}_{Z|Y}(\pmb{\theta}) := \int dy dz p(y) p_{\pmb{\theta}}(z|y) \left( \frac{\partial \log p_{\pmb{\theta}}(z|y)}{\partial \pmb{\theta}} \right) \left( \frac{\partial \log p_{\pmb{\theta}}(z|y)}{\partial \pmb{\theta}} \right)^T$ are the conditional Fisher information matrix (Zegers, 2015) of $\pmb{\theta}$ for $Z$ conditioned on $X$ and $Y$ , respectively. $\lambda_{\max}$ is the largest eigenvalue of $C^{-1}(\mathcal{I}_{Z|Y}(\pmb{\theta}) - \mathcal{I}_Z(\pmb{\theta})) (C^T)^{-1}$ with $v_{\max}$ the corresponding eigenvector, where $CC^T$ is the Cholesky decomposition of the matrix $\mathcal{I}_{Z|X}(\pmb{\theta}) - \mathcal{I}_Z(\pmb{\theta})$ , and $v_{\max}$ is the eigenvector for $\lambda_{\max}$ . The infimum is attained at $\Delta \pmb{\theta} = (C^T)^{-1} v_{\max}$ .
+
+The proof is in appendix C. We see that for parameterized encoders $p_{\theta}(z|x)$ , each term of $G[p(z|x)]$ in Eq. (2) can be replaced by a bilinear form with the Fisher information matrix of the respective variables. Although this lemma is not required to understand the more general setting of Lemma 0.1, where the model is described in a functional space, Lemma 0.2 helps understand $G[p(z|x)]$ for parameterized models, which permits directly linking the phase transitions to the model's parameters.
+
+Phase Transitions. Now we introduce Theorem 1 that gives a concrete and practical condition for IB phase transitions, which is the core result of the paper:
+
+Theorem 1. The IB phase transition points $\{\beta_i^c\}$ as defined in Definition 3 are given by the roots of the following equation:
+
+$$
+G \left[ p _ {\beta} ^ {*} (z | x) \right] = \beta \tag {4}
+$$
+
+where $G[p(z|x)]$ is given by Eq. (2) and $p_{\beta}^{*}(z|x)$ is the optimal solution of $IB_{\beta}[p(z|x)]$ at $\beta$ .
+
+We can understand Eq. (4) as the condition when $\delta^2\mathrm{IB}_{\beta}[p(z|x)]$ is about to be able to be negative at the optimal solution $p_{\beta}^{*}(z|x)$ for a given $\beta$ . The proof for Theorem 1 is given in Appendix D. In Section 4, we will analyze Theorem 1 in detail.
+
+# 4 UNDERSTANDING THE FORMULA FOR IB PHASE TRANSITIONS
+
+In this section we set out to understand $G[p(z|x)]$ as given by Eq. (2) and the phase transition condition as given by Theorem 1, from the perspectives of Jensen's inequality and representational maximum correlation.
+
+# 4.1 JENSEN'S INEQUALITY
+
+The condition for IB phase transitions given by Theorem 1 involves $G[p(z|x)] = \inf_{r(z|x) \in \mathcal{Q}_{\mathcal{Z}|\mathcal{X}}} \mathcal{G}[r(z|x); p(z|x)]$ which is in itself an optimization problem. We can understand $G[p(z|x)] = \inf_{r(z|x) \in \mathcal{Q}_{\mathcal{Z}|\mathcal{X}}} \frac{A - C}{B - C}$ in Eq. (2) using Jensen's inequality:
+
+$$
+\underbrace {\mathbb {E} _ {x , z \sim p (x , z)} [ r ^ {2} (z | x) ]} _ {A} \geq \underbrace {\mathbb {E} _ {y , z \sim p (y , z)} \left[ \left(\mathbb {E} _ {x \sim p (x | y , z)} [ r (z | x) ]\right) ^ {2} \right]} _ {B} \geq \underbrace {\mathbb {E} _ {z \sim p (z)} \left[ \left(\mathbb {E} _ {x \sim p (x | z)} [ r (z | x) ]\right) ^ {2} \right]} _ {C} \tag {5}
+$$
+
+The equality between $A$ and $B$ holds when the perturbation $r(z|x)$ is constant w.r.t. $x$ for any $z$ ; the equality between $B$ and $C$ holds when $\mathbb{E}_{x\sim p(x|y,z)}[r(z|x)]$ is constant w.r.t. $y$ for any $z$ . Therefore, the minimization of $\frac{A - C}{B - C}$ encourages the relative perturbation function $r(z|x)$ to be as constant w.r.t. $x$ as possible (minimizing intra-class difference), but as different w.r.t. different $y$ as possible (maximizing inter-class difference), resulting in a clustering of the values of $r(z|x)$ for different examples $x$ according to their class $y$ . Because of this clustering property in classification problems, we conjecture that there are at most $|\mathcal{Y}| - 1$ phase transitions, where $|\mathcal{Y}|$ is the number of classes, with each phase transition differentiating one or more classes.
+
+# 4.2 REPRESENTATIONAL MAXIMUM CORRELATION
+
+Under certain conditions we can further simplify $G[p(z|x)]$ and gain a deeper understanding of it. Firstly, inspired by maximum correlation (Anantharam et al., 2013), we introduce two new concepts, representational maximum correlation and conditional maximum correlation, as follows.
+
+Definition 4. Given a joint distribution $p(X,Y)$ , and a representation $Z$ satisfying the Markov chain $Z - X - Y$ , the representational maximum correlation $\rho_r(X,Y;Z)$ is defined as
+
+$$
+\rho_ {r} (X, Y; Z) := \sup _ {(f (x, z), g (y, z)) \in \mathcal {S} _ {1}} \mathbb {E} _ {x, y, z \sim p (x, y, z)} [ f (x, z) g (y, z) ] \tag {6}
+$$
+
+where $\mathcal{S}_1 = \{(f:\mathcal{X}\times \mathcal{Z}\to \mathbb{R},g:\mathcal{Y}\times \mathcal{Z}\to \mathbb{R})\mid f,g$ bounded, and $\mathbb{E}_{x\sim p(x|z)}[f(x,z)] = \mathbb{E}_{y\sim p(y|z)}[g(y,z)] = 0,$ $\mathbb{E}_{x,z\sim p(x,z)}[f^2 (x,z)] = \mathbb{E}_{y,z\sim p(y,z)}[g^2 (y,z)] = 1\}$
+
+The conditional maximum correlation $\rho_{m}(X,Y|Z)$ is defined as:
+
+$$
+\rho_ {m} (X, Y | Z) := \sup _ {(f (x), g (y)) \in \mathcal {S} _ {2}} \mathbb {E} _ {x, y \sim p (x, y | z)} [ f (x) g (y) ] \tag {7}
+$$
+
+where $\mathcal{S}_2 = \{(f:\mathcal{X}\to \mathbb{R},g:\mathcal{Y}\to \mathbb{R})\mid f,g$ bounded, and $\forall z\in \mathcal{Z}:\mathbb{E}_{x\sim p(x|z)}[f(x)] = \mathbb{E}_{y\sim p(y|z)}[g(y)] = 0,\mathbb{E}_{x\sim p(x|z)}[f^2 (x)] = \mathbb{E}_{y\sim p(y|z)}[g^2 (y)] = 1\}$
+
+We prove the following Theorem 2, which expresses $G[p(z|x)]$ in terms of representational maximum correlation and related quantities, with proof given in Appendix F.
+
+Theorem 2. Define $\mathcal{Q}_{\mathcal{Z}|\mathcal{X}}^{(0)} := \{r(z|x) : \mathcal{X} \times \mathcal{Z} \to \mathbb{R} \mid r \text{ bounded}\}$ . If $\mathcal{Q}_{\mathcal{Z}|\mathcal{X}}^{(0)}$ and $\mathcal{Q}_{\mathcal{Z}|\mathcal{X}}$ satisfy: $\forall r(z|x) \in \mathcal{Q}_{\mathcal{Z}|\mathcal{X}}^{(0)}$ , there exists $r_1(z|x) \in \mathcal{Q}_{\mathcal{Z}|\mathcal{X}}$ , $s(z) \in \{s(z) : \mathcal{Z} \to \mathbb{R} \mid s \text{ bounded}\}$ s.t. $r(z|x) = r_1(z|x) + s(z)$ , then we have:
+
+(i) The representation maximum correlation and $G$ :
+
+$$
+G [ p (z | x) ] = \frac {1}{\rho_ {r} ^ {2} (X , Y ; Z)} \tag {8}
+$$
+
+(ii) The representational maximum correlation and conditional maximum correlation:
+
+$$
+\rho_ {r} (X, Y; Z) = \sup _ {Z \in \mathcal {Z}} [ \rho_ {m} (X, Y | Z) ] \tag {9}
+$$
+
+(iii) When $Z$ is continuous, an optimal relative perturbation function $r(z|x)$ for $G[p(z|x)]$ is given by
+
+$$
+r ^ {*} (z | x) = h ^ {*} (x) \sqrt {\frac {\delta (z - z ^ {*})}{p (z)}} \tag {10}
+$$
+
+where $z^{*} = \underset {z\in \mathcal{Z}}{\arg \max}\rho_{m}(X,Y|Z = z)$ , and $h^{\ast}(x)$ is the optimal solution for the learnability threshold function $h^{*}(x) = \operatorname *{arg min}_{h(x)\in \{h:\mathcal{X}\to \mathbb{R}\mid h\text{bounded}\}}\beta_{0}[h(x)]$ with $p(X,Y|Z = z^{*})$ ( $\beta_0[h(x)]$ is given in Theorem 4 of Wu et al. (2019)).
+
+(iv) For discrete $X, Y$ and $Z$ , we have
+
+$$
+\rho_ {r} (X, Y; Z) = \max _ {Z \in \mathcal {Z}} \sigma_ {2} (Z) \tag {11}
+$$
+
+where $\sigma_2(Z)$ is the second largest singular value of the matrix $Q_{X,Y|Z}\coloneqq \left(\frac{p(x,y|z)}{\sqrt{p(x|z)p(y|z)}}\right)_{x,y} = \left(\frac{p(x,y)}{\sqrt{p(x)p(y)}}\sqrt{\frac{p(z|x)}{p(z|y)}}\right)_{x,y}$ .
+
+Theorem 2 furthers our understanding of $G[p(z|x)]$ and the phase transition condition (Theorem 1), which we elaborate as follows.
+
+Discovering maximum correlation in the orthogonal space of a learned representation: Intuitively, the representational maximum correlation measures the maximum linear correlation between $f(X,Z)$ and $g(Y,Z)$ among all real-valued functions $f,g$ , under the constraint that $f(X,Z)$ is "orthogonal" to $p(X|Z)$ and $g(Y,Z)$ is "orthogonal" to $p(Y|Z)$ . Theorem 2 (i) reveals that $G[p(z|x)]$ is the inverse square of this representational maximum correlation. Theorem 2 (ii) further shows that $G[p(z|x)]$ is finding a specific $z^{*}$ on which maximum (nonlinear) correlation between $X$ and $Y$
+
+conditioned on $Z$ can be found. Combined with Theorem 1, we have that when we continuously increase $\beta$ , for the optimal representation $Z_{\beta}^{*}$ given by $p_{\beta}^{*}(z|x)$ at $\beta$ , $\rho_r(X,Y;Z_\beta^*)$ shall monotonically decrease due to that $X$ and $Y$ has to find their maximum correlation on the orthogonal space of an increasingly better representation $Z_{\beta}^{*}$ that captures more information about $X$ . A phase transition occurs when $\rho_r(X,Y;Z_\beta^*)$ reduces to $\frac{1}{\sqrt{\beta}}$ , after which as $\beta$ continues to increase, $\rho_r(X,Y;Z_\beta^*)$ will try to find maximum correlation between $X$ and $Y$ orthogonal to the full previously learned representation. This is reminiscent of canonical-correlation analysis (CCA) (Hotelling, 1992) in linear settings, where components with decreasing linear maximum correlation that are orthogonal to previous components are found one by one. In comparison, we show that in IB, each phase transition corresponds to learning a new nonlinear component of maximum correlation between $X$ and $Y$ in $Z$ , orthogonal to the previously-learned $Z$ . In the case of classification where different classes may have different difficulty (e.g. due to label noise or support overlap), we should expect that classes that are less difficult as measured by a larger maximum correlation between $X$ and $Y$ are learned earlier.
+
+Conspicuous subset conditioned on a single $z$ : Furthermore, we show in (iii) that an optimal relative perturbation function $r(z|x)$ can be decomposed into a product of two factors, a $\sqrt{\frac{\delta(z - z^{*})}{p(z)}}$ factor that only focuses on perturbing a specific point $z^{*}$ in the representation space, and an $h^{*}(x)$ factor that is finding the "conspicuous subset" (Wu et al., 2019), i.e. the most confident, large, typical, and imbalanced subset in the $X$ space for the distribution $(X,Y) \sim p(X,Y|z^{*})$ .
+
+Singular values In categorical settings, (iv) reveals a connection between $G[p(z|x)]$ and the singular value of the $Q_{X,Y|Z}$ matrix. Due to the property of SVD, we know that the square of the singular values of $Q_{X,Y|Z}$ equals the non-negative eigenvalue of the matrix $Q_{X,Y|Z}^T Q_{X,Y|Z}$ . Then the phase transition condition in Theorem 1 is equivalent to a (nonlinear) eigenvalue problem. This is resonant with previous analogy with CCA in linear settings, and is also reminiscent of the linear eigenvalue problem in Gaussian IB (Chechik et al., 2005).
+
+# 5 ALGORITHM FOR PHASE TRANSITIONS DISCOVERY IN CLASSIFICATION
+
+As a consequence of the theoretical analysis above, we are able to derive an algorithm to efficiently estimate the phase transitions for a given model architecture and dataset. This algorithm also permits us to empirically confirm some of our theoretical results in Section 6.
+
Typically, classification involves high-dimensional inputs $X$ . Estimating the phase transitions is in general difficult without sweeping the full range of $\beta$ , where each $\beta$ requires solving a full learning problem. In Algorithm 1, we present a two-stage approach that avoids this sweep.
+
In the first stage, we train a single maximum likelihood neural network $f_{\theta}$ with the same encoder architecture as in the (variational) IB to estimate $p(y|x)$ , obtaining an $N \times C$ matrix $p(y|x)$ , where $N$ is the number of examples in the dataset and $C$ is the number of classes. In the second stage, we perform an iterative algorithm, alternating between updates of $G$ and $\beta$ , to converge to a phase transition point.
+
Specifically, for a given $\beta$ , we use a Blahut-Arimoto type IB algorithm (Tishby et al., 2000) to efficiently reach the IB optimum $p_{\beta}^{*}(z|x)$ at $\beta$ , then use SVD (with the formula given in Theorem 2 (iv)) to efficiently estimate $G[p_{\beta}^{*}(z|x)]$ (step 8). We then use this $G[p_{\beta}^{*}(z|x)]$ value as the new $\beta$ and repeat (step 7 in the next iteration). At convergence, we reach the phase transition point given by $G[p_{\beta}^{*}(z|x)] = \beta$ (Theorem 1). After convergence, as measured by the patience parameter $K$ , we slightly increase $\beta$ by $\delta$ (step 13), so that the algorithm can discover the subsequent phase transitions.
+
+# 6 EMPIRICAL STUDY
+
+We quantitatively and qualitatively test the ability of our theory and Algorithm 1 to provide good predictions for IB phase transitions. We first verify them in fully categorical settings, where $X,Y,Z$ are all discrete, and we show that the phase transitions can correspond to learning new classes as we increase $\beta$ . We then test our algorithm on versions of the MNIST and CIFAR10 datasets with added label noise.
+
Algorithm 1 Phase transitions discovery for IB
Require $(X,Y)$ : the dataset
Require $f_{\theta}$ : a neural net with the same encoder architecture as the (variational) IB
Require $K$ : patience
Require $\delta$ : precision floor
Require $R$ : maximum ratio between $\beta^{\mathrm{(th)}}$ and $\beta$
// First stage: fit $p(y|x)$ using neural net $f_{\theta}$ :
1: $p(y|x)\gets$ fitting $(X,Y)$ using $f_{\theta}$ via maximum likelihood.
2: $p(x)\gets \frac{1}{N}$
// Second stage: coordinate descent using $G[p(z|x)]$ and the IB algorithm:
3: $\beta_0^c\gets \beta^{(\mathrm{th})}(1)$
4: $\mathbb{B}\gets \{\beta_0^c\}$ // $\mathbb{B}$ is a set collecting the phase transition points
5: $(\beta^{(\mathrm{new})},\beta ,\mathrm{count})\gets (\beta_0^c,1,0)$
6: while $\frac{\beta^{(\mathrm{new})}}{\beta} < R$ do:
7: $\beta \leftarrow \beta^{(\mathrm{new})}$
8: $\beta^{(\mathrm{new})}\gets \beta^{(\mathrm{th})}(\beta)$
9: if $\beta^{(\mathrm{new})} - \beta < \delta$ then:
10: count $\leftarrow$ count + 1
11: if count $>K$ then:
12: $\mathbb{B}\gets \mathbb{B}\cup \{\beta^{(\mathrm{new})}\}$
13: $\beta^{(\mathrm{new})}\gets \beta^{(\mathrm{new})} + \delta$
14: end if
15: else: count $\leftarrow 0$
16: end if
17: end while
18: return $\mathbb{B}$
subroutine $\beta^{(\mathrm{th})}(\beta)$ :
s1: Compute $p_{\beta}^{*}(z|x)$ using the IB algorithm (Tishby et al., 2000).
s2: $\beta^{(\mathrm{new})}\gets G[p_{\beta}^{*}(z|x)]$ using SVD (Eq. 8 and 11).
s3: return $\beta^{(\mathrm{new})}$
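The outer loop above amounts to a fixed-point iteration $\beta \leftarrow \beta^{(\mathrm{th})}(\beta)$ . A minimal sketch (our own simplification: `beta_th` is any callable standing in for the subroutine, only converged fixed points are recorded, and the patience counter is reset after each recorded transition):

```python
def discover_transitions(beta_th, K=5, delta=1e-3, R=100.0):
    """Fixed-point iteration of Algorithm 1 (outer loop only).

    beta_th: callable implementing the subroutine beta^(th)(beta),
    i.e. one IB solve followed by the SVD estimate of G[p*_beta(z|x)].
    """
    transitions = []
    beta, count = 1.0, 0
    beta_new = beta_th(beta)
    while beta_new / beta < R:
        beta = beta_new
        beta_new = beta_th(beta)
        if beta_new - beta < delta:   # converging to a fixed point G = beta
            count += 1
            if count > K:             # patience exhausted: record a transition
                transitions.append(beta_new)
                beta_new += delta     # nudge past it to find the next one
                count = 0
        else:
            count = 0
    return transitions
```

With a `beta_th` whose fixed points are the phase transitions, the loop settles on each transition in turn and stops once the next estimate exceeds the ratio cap $R$ .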
+
+# 6.1 CATEGORICAL DATASET
+
For categorical datasets, $X$ and $Y$ are discrete, and $p(X)$ and $p(Y|X)$ are given. To test Theorem 1, we use the Blahut-Arimoto IB algorithm to compute the optimal $p_{\beta}^{*}(z|x)$ for each $\beta$ . $I(Y;Z^{*})$ vs. $\beta$ is plotted in Fig. 2 (a). There are two phase transitions, at $\beta_0^c$ and $\beta_1^c$ . For each $\beta$ and the corresponding $p_{\beta}^{*}(z|x)$ , we use the SVD formula (Theorem 2) to compute $G[p_{\beta}^{*}(z|x)]$ , shown in Fig. 2 (b). We see that $G[p_{\beta}^{*}(z|x)] = \beta$ at exactly the observed phase transition points $\beta_0^c$ and $\beta_1^c$ . Moreover, starting at $\beta = 1$ , Alg. 1 converges to each phase transition point within a few iterations. Our other experiments with random categorical datasets show similarly tight matches.
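For reference, the self-consistent equations of Tishby et al. (2000) can be iterated directly in the fully discrete case. The sketch below is our own minimal implementation (function name, fixed iteration count, and numerical clamping are our choices, not the paper's code):

```python
import numpy as np

def ib_blahut_arimoto(p_xy, beta, n_z, n_iter=500, seed=0):
    """Self-consistent IB iteration for discrete X, Y, Z.

    Minimizes I(X;Z) - beta * I(Y;Z) via the update
    p(z|x) ~ p(z) * exp(-beta * KL(p(y|x) || p(y|z))).
    """
    rng = np.random.default_rng(seed)
    p_x = p_xy.sum(axis=1)                        # p(x)
    p_y_x = p_xy / p_x[:, None]                   # p(y|x)
    p_z_x = rng.random((p_xy.shape[0], n_z))      # random init of p(z|x)
    p_z_x /= p_z_x.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        p_z = p_x @ p_z_x                         # p(z)
        p_x_z = (p_z_x * p_x[:, None]) / np.maximum(p_z, 1e-30)   # p(x|z)
        p_y_z = p_x_z.T @ p_y_x                   # p(y|z) = sum_x p(x|z) p(y|x)
        log_ratio = (np.log(np.maximum(p_y_x[:, None, :], 1e-30))
                     - np.log(np.maximum(p_y_z[None, :, :], 1e-30)))
        kl = (p_y_x[:, None, :] * log_ratio).sum(axis=2)          # KL per (x, z)
        p_z_x = p_z[None, :] * np.exp(-beta * kl)
        p_z_x /= p_z_x.sum(axis=1, keepdims=True)
    return p_z_x
```

For $\beta$ below the learnability threshold the iteration collapses to the trivial solution $p(z|x) = p(z)$ , while for large $\beta$ it yields a near-deterministic, class-separating encoder.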
+
Furthermore, in Appendix G we show that the phase transitions correspond to the onset of separation of $p(z|x)$ for subsets of $X$ belonging to different classes. This supports our conjecture from Section 4.1 that there are at most $|\mathcal{Y}| - 1$ phase transitions in classification problems.
+
+# 6.2 MNIST DATASET
+
For continuous $X$ , how does our algorithm perform, and will it reveal aspects of the dataset? We first test our algorithm on a 4-class MNIST task with noisy labels, whose confusion matrix and experimental settings are given in Appendix H. Fig. 3 (a) shows the path Alg. 1 takes. We see again that in each
+
+
+Figure 2: (a) $I(Y;Z^{*})$ vs. $\beta$ for a categorical dataset with $|X| = |Y| = |Z| = 3$ , where $Z^{*}$ is given by $p_{\beta}^{*}(z|x)$ , and the vertical lines are the experimentally discovered phase transition points $\beta_0^c$ and $\beta_1^c$ . (b) $G[p_{\beta}^{*}(z|x)]$ vs. $\beta$ for the same dataset, and the path for Alg. 1, with $\beta_0^c$ and $\beta_1^c$ in (a) also plotted. The dataset is given in Fig. 5.
+
+
+Figure 3: (a) Path of Alg. 1 starting with $\beta = 1$ , where the maximum likelihood model $f_{\theta}$ is using the same encoder architecture as in the CEB model. This stairstep path shows that Alg. 1 is able to ignore very large regions of $\beta$ , while quickly and precisely finding the phase transition points. Also plotted is an accumulation of $G[p_{\beta}^{*}(z|x)]$ vs. $\beta$ by running Alg. 1 with varying starting $\beta$ (blue dots). (b) Per-class accuracy vs. $\beta$ , where the accuracy at each $\beta$ is from training an independent CEB model on the dataset. The per-class accuracy denotes the fraction of correctly predicted labels by the CEB model for the observed label $\tilde{y}$ .
+
phase, Alg. 1 converges to the phase transition point within a few iterations, discovering 3 phase transition points in total. As in the categorical case, we expect each phase transition to correspond to the onset of learning a new class, and the much larger separation in $\beta$ before the last transition suggests that the last class is much harder to learn: it should carry much larger label noise, making its component of maximum correlation between $X$ and $Y$ hard to capture, as analyzed via representational maximum correlation (Section 4.2). Fig. 3 (b) plots the per-class accuracy with increasing $\beta$ obtained by running the Conditional Entropy Bottleneck (Fischer, 2018) (another variational bound on IB). The first two predicted phase transition points $\beta_0^c$ , $\beta_1^c$ closely match the observed onset of learning class 3 and class 0. Class 1 is observed to learn earlier than expected, possibly due to the gap between the variational IB objective and the true IB objective in continuous settings. Inspecting the confusion matrix of the label noise (Fig. 7), the ordering of onset of learning, class 2, 3, 0, 1, corresponds exactly to the decreasing diagonal element $p(\tilde{y} = y|y)$ (i.e. increasing noise) of the classes, and as predicted, class 1 has a much smaller diagonal element $p(\tilde{y} = 1|y = 1)$ than the other three classes, which makes it much more difficult to learn. This ordering of classes by difficulty is exactly what representational maximum correlation predicts.
+
+
+Figure 4: (a) Accumulated $G[p_{\beta}^{*}(z|x)]$ vs. $\beta$ by running Alg. 1 with varying starting $\beta$ (blue dots). Also plotted are predicted phase transition points. (b) $I(X;Z)$ and $I(Y;Z)$ vs. $\beta$ . The manually-identified phase transition points are labelled with arrows. The vertical black lines are the phase transitions identified by Alg. 1, denoted as $\beta_0^c, \beta_1^c, \ldots, \beta_8^c$ , from left to right. (c) Accuracy vs. $\beta$ with the same sets of points identified. The most interesting region is right before $\beta = 2$ , where accuracy decreases with $\beta$ . Alg. 1 identifies both sides of that region, as well as points at or near all of the early obvious phase transitions. It also seems to miss later transitions, possibly due to the gap between the variational IB objective and the true IB objective in continuous settings.
+
+# 6.3 CIFAR10 DATASET
+
Finally, we investigate the CIFAR10 experiment from Section 1. The details of the experimental setup are described in Appendix I. This experiment stretches the current limits of our discrete approximation to the underlying continuous representation being learned by the models. Nevertheless, we can see in Fig. 4 that many of the visible empirical phase transitions are tightly identified by Alg. 1. In particular, the onset of learning is predicted quite accurately, and the large interval between the predicted $\beta_{3} = 1.21$ and $\beta_{4} = 1.61$ corresponds well to the continuous increase of $I(X;Z)$ and $I(Y;Z)$ over the same interval. Alg. 1 is also able to identify many dense transitions that are not obvious from the $I(Y;Z)$ vs. $\beta$ curve alone. In total, Alg. 1 predicts 9 phase transitions, exactly equal to $|\mathcal{Y}| - 1$ for CIFAR10.
+
+# 7 CONCLUSION
+
In this work, we observe and study the phase transitions in IB as we vary $\beta$ . We introduce a definition of IB phase transitions, and based on it derive a formula giving a practical condition for them. We further interpret the formula via Jensen's inequality and representational maximum correlation. We reveal the close interplay between the IB objective, the dataset, and the learned representation, as each phase transition corresponds to learning a new nonlinear maximum-correlation component in the space orthogonal to the learned representation. We present an algorithm for finding the phase transitions, and show that it tightly matches the observed phase transitions in categorical datasets, predicts the onset of learning new classes and class difficulty in MNIST, and predicts prominent transitions in CIFAR10 experiments. This work is a first theoretical step towards a deeper understanding of the phenomenon of phase transitions in the Information Bottleneck. We believe our approach will be applicable to other trade-off objectives, like $\beta$ -VAE (Higgins et al., 2017) and InfoDropout (Achille & Soatto, 2018a), where the model's ability to predict is balanced against a measure of complexity.
+
+# 8 ACKNOWLEDGEMENTS
+
+The authors would like to thank Alex Alemi, Kevin Murphy, Sergey Ioffe, Isaac Chuang and Max Tegmark for helpful discussions.
+
+# REFERENCES
+
+Alessandro Achille and Stefano Soatto. Emergence of invariance and disentanglement in deep representations. The Journal of Machine Learning Research, 19(1):1947-1980, 2018a.
+Alessandro Achille and Stefano Soatto. Information dropout: Learning optimal representations through noisy computation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018b.
+Alessandro Achille, Glen Mbeng, and Stefano Soatto. The dynamics of differential learning i: Information-dynamics and task reachability. arXiv preprint arXiv:1810.02440, 2018.
+Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. Deep variational information bottleneck. arXiv preprint arXiv:1612.00410, 2016.
+Venkat Anantharam, Amin Gohari, Sudeep Kamath, and Chandra Nair. On maximal correlation, hypercontractivity, and the data processing inequality studied by erkip and cover. arXiv preprint arXiv:1304.6133, 2013.
Richard Blahut. Computation of channel capacity and rate-distortion functions. IEEE Transactions on Information Theory, 18(4):460-473, 1972.
Gal Chechik, Amir Globerson, Naftali Tishby, and Yair Weiss. Information bottleneck for gaussian variables. Journal of Machine Learning Research, 6(Jan):165-188, 2005.
+Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501, 2018.
Ian Fischer. The conditional entropy bottleneck, 2018. URL https://openreview.net/forum?id=rkVOXhAqY7.
+Anirudh Goyal, Riashat Islam, Daniel Strouse, Zafarali Ahmed, Matthew Botvinick, Hugo Larochelle, Sergey Levine, and Yoshua Bengio. Infobot: Transfer and exploration via the information bottleneck. arXiv preprint arXiv:1901.10902, 2019.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Sy2fzU9gl.
Harold Hotelling. Relations between two sets of variates. In Breakthroughs in Statistics, pp. 162-190. Springer, 1992.
+Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015. URL https://arxiv.org/abs/1412.6980.
+Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
+Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, CIFAR, 2009.
+Xue Bin Peng, Angjoo Kanazawa, Sam Toyer, Pieter Abbeel, and Sergey Levine. Variational discriminator bottleneck: Improving imitation learning, inverse rl, and gans by constraining information flow. arXiv preprint arXiv:1810.00821, 2018.
+Mélanie Rey and Volker Roth. Meta-gaussian information bottleneck. In Advances in Neural Information Processing Systems, pp. 1916-1924, 2012.
+Danilo Jimenez Rezende and Fabio Viola. Taming VAEs. arXiv preprint arXiv:1810.00597, 2018.
+
+Ohad Shamir, Sivan Sabato, and Naftali Tishby. Learning and generalization with the information bottleneck. Theoretical Computer Science, 411(29-30):2696-2711, 2010.
+Archit Sharma, Shixiang Gu, Sergey Levine, Vikash Kumar, and Karol Hausman. Dynamics-aware unsupervised discovery of skills. arXiv preprint arXiv:1907.01657, 2019.
DJ Strouse and David J Schwab. The deterministic information bottleneck. Neural Computation, 29(6):1611-1630, 2017a.
+DJ Strouse and David J Schwab. The information bottleneck and geometric clustering. arXiv preprint arXiv:1712.09657, 2017b.
+Max Tegmark and Tailin Wu. Pareto-optimal data compression for binary classification tasks. arXiv preprint arXiv:1908.08961, 2019.
+Naftali Tishby. Lecture: the information theory of deep neural networks: the statistical physics aspects. https://www.perimeterinstitute.ca/videos/information-theory-deep-neural-networks-statistical-physics-aspects/, 2018.
+Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000.
+Tailin Wu, Ian Fischer, Isaac Chuang, and Max Tegmark. Learnability for the information bottleneck. arXiv preprint arXiv:1907.07331, 2019.
Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
+Pablo Zegers. Fisher information properties. Entropy, 17(7):4918-4939, 2015.
+
+# Appendix
+
+# A CALCULUS OF VARIATIONS AT ANY ORDER OF $\mathbf{IB}_{\beta}[p(z|x)]$
+
Here we prove Lemma 2.1, which is crucial for the lemmas and theorems that follow.
+
+Lemma 2.1. For a relative perturbation function $r(z|x) \in \mathcal{Q}_{\mathcal{Z}|\mathcal{X}}$ for a $p(z|x)$ , where $r(z|x)$ satisfies $\mathbb{E}_{z \sim p(z|x)}[r(z|x)] = 0$ , we have that the IB objective can be expanded as
+
+$$
+\begin{array}{l} \left. \mathbf {I B} _ {\beta} [ p (z | x) (1 + \epsilon \cdot r (z | x)) ] \right. \\ = \mathbf {I B} _ {\beta} [ p (z | x) ] + \epsilon \cdot \left(\mathbb {E} _ {x, z \sim p (x, z)} \left[ r (z | x) \log \frac {p (z | x)}{p (z)} \right] - \beta \cdot \mathbb {E} _ {y, z \sim p (y, z)} \left[ r (z | y) \log \frac {p (z | y)}{p (z)} \right]\right) \\ + \sum_ {n = 2} ^ {\infty} \frac {(- 1) ^ {n} \epsilon^ {n}}{n (n - 1)} \left\{\left(\mathbb {E} [ r ^ {n} (z | x) ] - \mathbb {E} [ r ^ {n} (z) ]\right) - \beta \cdot \left(\mathbb {E} [ r ^ {n} (z | y) ] - \mathbb {E} [ r ^ {n} (z) ]\right) \right\} \\ = \mathbf {I B} _ {\beta} [ p (z | x) ] + \epsilon \cdot \left(\mathbb {E} _ {x, z \sim p (x, z)} \left[ r (z | x) \log \frac {p (z | x)}{p (z)} \right] - \beta \cdot \mathbb {E} _ {y, z \sim p (y, z)} \left[ r (z | y) \log \frac {p (z | y)}{p (z)} \right]\right) \\ + \frac {\epsilon^ {2}}{1 \cdot 2} \left\{\left(\mathbb {E} [ r ^ {2} (z | x) ] - \mathbb {E} [ r ^ {2} (z) ]\right) - \beta \cdot \left(\mathbb {E} [ r ^ {2} (z | y) ] - \mathbb {E} [ r ^ {2} (z) ]\right) \right\} \\ \left. \right. - \frac {\epsilon^ {3}}{2 \cdot 3} \left\{\left(\mathbb {E} \left[ r ^ {3} (z | x) \right] - \mathbb {E} \left[ r ^ {3} (z) \right]\right) - \beta \cdot \left(\mathbb {E} \left[ r ^ {3} (z | y) \right] - \mathbb {E} \left[ r ^ {3} (z) \right]\right)\right\} \\ + \frac {\epsilon^ {4}}{3 \cdot 4} \left\{\left(\mathbb {E} \left[ r ^ {4} (z | x) \right] - \mathbb {E} \left[ r ^ {4} (z) \right]\right) - \beta \cdot \left(\mathbb {E} \left[ r ^ {4} (z | y) \right] - \mathbb {E} \left[ r ^ {4} (z) \right]\right) \right\} \\ - \dots \tag {12} \\ \end{array}
+$$
+
+where $r(z|y) = \mathbb{E}_{x\sim p(x|y,z)}[r(z|x)]$ and $r(z) = \mathbb{E}_{x\sim p(x|z)}[r(z|x)]$ . The expectations in the equations are all w.r.t. all variables. For example $\mathbb{E}[r^2 (z|x)] = \mathbb{E}_{x,z\sim p(x,z)}[r^2 (z|x)]$ .
+
+Proof. Suppose that we perform a relative perturbation $r(z|x)$ on $p(z|x)$ such that the perturbed conditional probability is $p'(z|x) = p(z|x) (1 + \epsilon \cdot r(z|x))$ , then we have
+
+$$
+p ^ {\prime} (z) = \int p (x) p ^ {\prime} (z | x) d x = \int d x p (x) p (z | x) (1 + \epsilon \cdot r (z | x)) = p (z) + \epsilon \cdot \int d x p (x) p (z | x) r (z | x)
+$$
+
+Therefore, we can denote the corresponding relative perturbation $r(z)$ on $p(z)$ as
+
+$$
+r (z) \equiv \frac {1}{\epsilon} \frac {p ^ {\prime} (z) - p (z)}{p (z)} = \frac {1}{p (z)} \int d x p (x) p (z | x) r (z | x) = \mathbb {E} _ {x \sim p (x | z)} [ r (z | x) ]
+$$
+
+Similarly, we have
+
+$$
+p ^ {\prime} (z | y) = \frac {p ^ {\prime} (y , z)}{p (y)} = \frac {1}{p (y)} \int d x p (x, y) p (z | x) \left(1 + \epsilon \cdot r (z | x)\right) = p (z | y) + \epsilon \cdot \frac {1}{p (y)} \int d x p (x, y) p (z | x) r (z | x)
+$$
+
+And we can denote the corresponding relative perturbation $r(z|y)$ on $p(z|y)$ as
+
+$$
+r (z | y) \equiv \frac {1}{\epsilon} \frac {p ^ {\prime} (z | y) - p (z | y)}{p (z | y)} = \frac {1}{p (z | y) p (y)} \int d x p (x, y) p (z | x) r (z | x) = \mathbb {E} _ {x \sim p (x | y, z)} [ r (z | x) ]
+$$
+
+Since
+
+$$
+\operatorname {I B} _ {\beta} [ p (z | x) ] = I (X; Z) - \beta \cdot I (Y; Z) = \int d x d z p (x, z) \log \frac {p (z | x)}{p (z)} - \beta \cdot \int d y d z p (y, z) \log \frac {p (z | y)}{p (z)}
+$$
+
+We have
+
+$$
\begin{array}{l} \left. \mathbf {I B} _ {\beta} \left[ p ^ {\prime} (z | x) \right] = \mathbf {I B} _ {\beta} \left[ p (z | x) \left(1 + \epsilon \cdot r (z | x)\right) \right] \right. \\ = \int d x d z p (x) p ^ {\prime} (z | x) \log \frac {p ^ {\prime} (z | x)}{p ^ {\prime} (z)} - \beta \cdot \int d y d z p (y) p ^ {\prime} (z | y) \log \frac {p ^ {\prime} (z | y)}{p ^ {\prime} (z)} \\ = \int d x d z p (x) p (z | x) (1 + \epsilon \cdot r (z | x)) \log \frac {p (z | x) (1 + \epsilon \cdot r (z | x))}{p (z) (1 + \epsilon \cdot r (z))} \\ - \beta \cdot \int d y d z p (y) p (z | y) (1 + \epsilon \cdot r (z | y)) \log \frac {p (z | y) (1 + \epsilon \cdot r (z | y))}{p (z) (1 + \epsilon \cdot r (z))} \\ = \int d x d z p (x) p (z | x) (1 + \epsilon \cdot r (z | x)) \left[ \log \frac {p (z | x)}{p (z)} + \log (1 + \epsilon \cdot r (z | x)) - \log (1 + \epsilon \cdot r (z)) \right] \\ - \beta \cdot \int d y d z p (y) p (z | y) (1 + \epsilon \cdot r (z | y)) \left[ \log \frac {p (z | y)}{p (z)} + \log (1 + \epsilon \cdot r (z | y)) - \log (1 + \epsilon \cdot r (z)) \right] \\ = \int d x d z p (x) p (z | x) (1 + \epsilon \cdot r (z | x)) \left[ \log \frac {p (z | x)}{p (z)} + \sum_ {n = 1} ^ {\infty} (- 1) ^ {n - 1} \frac {\epsilon^ {n}}{n} \left(r ^ {n} (z | x) - r ^ {n} (z)\right) \right] \\ - \beta \cdot \int d y d z p (y) p (z | y) (1 + \epsilon \cdot r (z | y)) \left[ \log \frac {p (z | y)}{p (z)} + \sum_ {n = 1} ^ {\infty} (- 1) ^ {n - 1} \frac {\epsilon^ {n}}{n} \left(r ^ {n} (z | y) - r ^ {n} (z)\right) \right] \\ \end{array}
+$$
+
+The $0^{\mathrm{th}}$ -order term is simply $\mathbf{IB}_{\beta}[p(z|x)]$ . The first order term is
+
+$$
+\delta \mathbf {I B} _ {\beta} [ p (z | x) ] = \epsilon \cdot \left(\mathbb {E} _ {x, z \sim p (x, z)} \left[ r (z | x) \log \frac {p (z | x)}{p (z)} \right] - \beta \cdot \mathbb {E} _ {y, z \sim p (y, z)} \left[ r (z | y) \log \frac {p (z | y)}{p (z)} \right]\right)
+$$
+
+The $n^{\mathrm{th}}$ -order term for $n \geq 2$ is
+
+$$
\begin{array}{l} \delta^ {n} \mathbf {I B} _ {\beta} [ p (z | x) ] \\ = (- 1) ^ {n} \epsilon^ {n} \int d x d z p (x) p (z | x) \left(- \frac {1}{n} \left[ r ^ {n} (z | x) - r ^ {n} (z) \right] + r (z | x) \frac {1}{n - 1} \left[ r ^ {n - 1} (z | x) - r ^ {n - 1} (z) \right]\right) \\ \left. - \beta \cdot (- 1) ^ {n} \epsilon^ {n} \int d y d z p (y) p (z | y) \left(- \frac {1}{n} \left[ r ^ {n} (z | y) - r ^ {n} (z) \right] + r (z | y) \frac {1}{n - 1} \left[ r ^ {n - 1} (z | y) - r ^ {n - 1} (z) \right]\right) \right. \\ = \frac {(- 1) ^ {n} \epsilon^ {n}}{n (n - 1)} \left(\mathbb {E} _ {x, z \sim p (x, z)} [ r ^ {n} (z | x) ] - n \mathbb {E} _ {x, z \sim p (x, z)} [ r (z | x) r ^ {n - 1} (z) ] + (n - 1) \mathbb {E} _ {z \sim p (z)} [ r ^ {n} (z) ]\right) \\ - \beta \cdot \frac {(- 1) ^ {n} \epsilon^ {n}}{n (n - 1)} \left(\mathbb {E} _ {y, z \sim p (y, z)} [ r ^ {n} (z | y) ] - n \mathbb {E} _ {y, z \sim p (y, z)} [ r (z | y) r ^ {n - 1} (z) ] + (n - 1) \mathbb {E} _ {z \sim p (z)} [ r ^ {n} (z) ]\right) \\ = \frac {(- 1) ^ {n} \epsilon^ {n}}{n (n - 1)} \left\{\left(\mathbb {E} \left[ r ^ {n} (z | x) \right] - \mathbb {E} \left[ r ^ {n} (z) \right]\right) - \beta \cdot \left(\mathbb {E} \left[ r ^ {n} (z | y) \right] - \mathbb {E} \left[ r ^ {n} (z) \right]\right) \right\} \\ \end{array}
+$$
+
+In the last equality we have used
+
+$$
+\mathbb {E} _ {x, z \sim p (x, z)} [ r (z | x) r ^ {n - 1} (z) ] = \mathbb {E} _ {z \sim p (z)} [ r ^ {n - 1} (z) \mathbb {E} _ {x \sim p (x | z)} [ r (z | x) ] ] = \mathbb {E} _ {z \sim p (z)} [ r ^ {n - 1} (z) r (z) ] = \mathbb {E} _ {z \sim p (z)} [ r ^ {n} (z) ]
+$$
+
+Combining the terms with all orders, we have
+
+$$
+\begin{array}{l} \left. \mathbf {I B} _ {\beta} [ p (z | x) (1 + \epsilon \cdot r (z | x)) ] \right. \\ = \mathbf {I B} _ {\beta} [ p (z | x) ] + \epsilon \cdot \left(\mathbb {E} _ {x, z \sim p (x, z)} \left[ r (z | x) \log \frac {p (z | x)}{p (z)} \right] - \beta \cdot \mathbb {E} _ {y, z \sim p (y, z)} \left[ r (z | y) \log \frac {p (z | y)}{p (z)} \right]\right) \\ + \sum_ {n = 2} ^ {\infty} \frac {(- 1) ^ {n} \epsilon^ {n}}{n (n - 1)} \left\{\left(\mathbb {E} \left[ r ^ {n} (z | x) \right] - \mathbb {E} \left[ r ^ {n} (z) \right]\right) - \beta \cdot \left(\mathbb {E} \left[ r ^ {n} (z | y) \right] - \mathbb {E} \left[ r ^ {n} (z) \right]\right) \right\} \\ \end{array}
+$$
+
+
+
+As a side note, the KL-divergence between $p'(z|x) = p(z|x)(1 + \epsilon \cdot r(z|x))$ and $p(z|x)$ is
+
+$$
+\begin{array}{l} \operatorname {K L} \left(p ^ {\prime} (z | x) | | p (z | x)\right) = \int d z p (z | x) (1 + \epsilon \cdot r (z | x)) \log \frac {p (z | x) (1 + \epsilon \cdot r (z | x))}{p (z | x)} \\ = \int d z p (z | x) (1 + \epsilon \cdot r (z | x)) \left(\epsilon \cdot r (z | x) - \frac {\epsilon^ {2}}{2} \cdot r ^ {2} (z | x)\right) + O (\epsilon^ {3}) \\ = \epsilon \cdot \int d z p (z | x) r (z | x) + \frac {\epsilon^ {2}}{2} \int d z p (z | x) r ^ {2} (z | x) + O (\epsilon^ {3}) \\ = \frac {\epsilon^ {2}}{2} \mathbb {E} _ {z \sim p (z | x)} [ r ^ {2} (z | x) ] + O (\epsilon^ {3}) \\ \end{array}
+$$
+
+Therefore, to the second order, we have
+
+$$
+\mathbb {E} _ {x \sim p (x)} [ \mathrm {K L} \left(p ^ {\prime} (z | x) | | p (z | x)\right) ] = \frac {\epsilon^ {2}}{2} \mathbb {E} [ r ^ {2} (z | x) ] \tag {13}
+$$
+
Similarly, we have $\mathbb{E}_{x\sim p(x)}[\mathrm{KL}(p(z|x)||p'(z|x))] = \frac{\epsilon^2}{2}\mathbb{E}[r^2(z|x)]$ up to second order. Using a similar procedure, we have, up to second order,
+
+$$
+\mathbb {E} _ {y \sim p (y)} \left[ \mathrm {K L} (p ^ {\prime} (z | y) | | p (z | y)) \right] = \mathbb {E} _ {y \sim p (y)} \left[ \mathrm {K L} (p (z | y) | | p ^ {\prime} (z | y)) \right] = \frac {\epsilon^ {2}}{2} \mathbb {E} [ r ^ {2} (z | y) ]
+$$
+
+$$
+\operatorname {K L} \left(p ^ {\prime} (z) | | p (z)\right) = \operatorname {K L} \left(p (z) | | p ^ {\prime} (z)\right) = \frac {\epsilon^ {2}}{2} \mathbb {E} [ r ^ {2} (z) ]
+$$
+
+# B PROOF OF LEMMA 0.1
+
+Proof. From Lemma 2.1, we have
+
+$$
+\delta^ {2} \mathbf {I B} _ {\beta} [ p (z | x) ] = \frac {\epsilon^ {2}}{2} \left\{\left(\mathbb {E} [ r ^ {2} (z | x) ] - \mathbb {E} [ r ^ {2} (z) ]\right) - \beta \cdot \left(\mathbb {E} [ r ^ {2} (z | y) ] - \mathbb {E} [ r ^ {2} (z) ]\right) \right\} \tag {14}
+$$
+
+The condition of
+
+$$
+\forall r (z | x) \in \mathcal {Q} _ {\mathcal {Z} | \mathcal {X}}, \delta^ {2} \mathrm {I B} _ {\beta} [ p (z | x) ] \geq 0 \tag {15}
+$$
+
+is equivalent to
+
+$$
+\forall r (z | x) \in \mathcal {Q} _ {\mathcal {Z} | \mathcal {X}}, \beta \cdot \left(\mathbb {E} \left[ r ^ {2} (z | y) \right] - \mathbb {E} \left[ r ^ {2} (z) \right]\right) \leq \mathbb {E} \left[ r ^ {2} (z | x) \right] - \mathbb {E} \left[ r ^ {2} (z) \right] \tag {16}
+$$
+
+Using Jensen's inequality and the convexity of the square function, we have
+
+$$
+\begin{array}{l} \mathbb {E} [ r ^ {2} (z | y) ] = \mathbb {E} _ {y, z \sim p (y, z)} \left[ \left(\mathbb {E} _ {x \sim p (x | y, z)} [ r (z | x) ]\right) ^ {2} \right] \\ = \mathbb {E} _ {z \sim p (z)} \left[ \mathbb {E} _ {y \sim p (y | z)} \left[ \left(\mathbb {E} _ {x \sim p (x | y, z)} [ r (z | x) ]\right) ^ {2} \right] \right] \\ \geq \mathbb {E} _ {z \sim p (z)} \left[ \left(\mathbb {E} _ {y \sim p (y | z)} \left[ \mathbb {E} _ {x \sim p (x | y, z)} [ r (z | x) ] \right]\right) ^ {2} \right] \\ = \mathbb {E} _ {z \sim p (z)} \left[ \left(\mathbb {E} _ {x \sim p (x | z)} [ r (z | x) ]\right) ^ {2} \right] \\ = \mathbb {E} [ r ^ {2} (z) ] \\ \end{array}
+$$
+
+The equality holds iff $r(z|y) = \mathbb{E}_{x\sim p(x|y,z)}[r(z|x)]$ is constant w.r.t. $y$ , for any $z$ .
+
+Using Jensen's inequality on $\mathbb{E}[r^2 (z)]$ , we have $\mathbb{E}[r^2 (z)] = \mathbb{E}_{z\sim p(z)}\left[\left(\mathbb{E}_{x\sim p(x|z)}[r(z|x)]\right)^2\right]\leq$ $\mathbb{E}_{z\sim p(z)}\left[\mathbb{E}_{x\sim p(x|z)}[r^2 (z|x)]\right] = \mathbb{E}[r^2 (z|x)]$ , where the equality holds iff $r(z|x)$ is constant w.r.t. $x$ for any $z$ .
+
+When $\mathbb{E}[r^2 (z|y)] - \mathbb{E}[r^2 (z)] > 0$ , we have that the condition Eq. (16) is equivalent to $\forall r(z|x)\in$ $\mathcal{Q}_{\mathcal{Z}|\mathcal{X}},\beta \leq \frac{\mathbb{E}[r^2(z|x)] - \mathbb{E}[r^2(z)]}{\mathbb{E}[r^2(z|y)] - \mathbb{E}[r^2(z)]}$ , i.e.
+
+$$
+\beta \leq G [ p (z | x) ] \equiv \inf _ {r (z | x) \in \mathcal {Q} _ {\mathcal {Z} | \mathcal {X}}} \frac {\mathbb {E} \left[ r ^ {2} (z | x) \right] - \mathbb {E} \left[ r ^ {2} (z) \right]}{\mathbb {E} \left[ r ^ {2} (z | y) \right] - \mathbb {E} \left[ r ^ {2} (z) \right]} \tag {17}
+$$
+
+where $r(z|y) = \mathbb{E}_{x\sim p(x|y,z)}[r(z|x)]$ and $r(z) = \mathbb{E}_{x\sim p(x|z)}[r(z|x)]$
+
+If $\mathbb{E}[r^2 (z|y)] - \mathbb{E}[r^2 (z)] = 0$ , substituting into Eq. (16), we have
+
+$$
+\beta \cdot 0 \leq \mathbb {E} \left[ r ^ {2} (z | x) \right] - \mathbb {E} \left[ r ^ {2} (z) \right] \tag {18}
+$$
+
which always holds since $\mathbb{E}[r^2 (z|x)]\geq \mathbb{E}[r^2 (z)]$ , and is a looser condition than Eq. (17) above. Altogether, we obtain Eq. (17).
+
+
+
Empirical estimate of $G[p(z|x)]$ To empirically estimate $G[p(z|x)]$ from a minibatch $\{(x_i, y_i)\}$ , $i = 1, 2, \ldots, N$ and the encoder $p(z|x)$ , we can make the following Monte Carlo importance sampling estimates, using the samples $\{x_j\} \sim p(x)$ and the ancestrally drawn samples $\{z_i\} \sim p(z) = \int dx\, p(x)p(z|x)$ :
+
+$$
+\begin{array}{l} \mathbb {E} _ {x, z \sim p (x, z)} [ r ^ {2} (z | x) ] = \int d x d z p (x) p (z) \frac {p (x , z)}{p (x) p (z)} r ^ {2} (z | x) \\ \simeq \frac {1}{N ^ {2}} \sum_ {i = 1} ^ {N} \sum_ {j = 1} ^ {N} \frac {p (x _ {j} , z _ {i})}{p (x _ {j}) p (z _ {i})} r ^ {2} (z _ {i} | x _ {j}) \\ \mathbb {E} _ {z \sim p (z)} [ r ^ {2} (z) ] = \mathbb {E} _ {z \sim p (z)} \left[ \left(\mathbb {E} _ {x \sim p (x | z)} [ r (z | x) ]\right) ^ {2} \right] \\ \simeq \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\int d x p (x | z _ {i}) r (z _ {i} | x)\right) ^ {2} \\ = \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\int d x p (x) \frac {p \left(z _ {i} \mid x\right)}{p \left(z _ {i}\right)} r \left(z _ {i} \mid x\right)\right) ^ {2} \\ \simeq \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\frac {1}{N} \sum_ {j = 1} ^ {N} \frac {p (z _ {i} | x _ {j})}{p (z _ {i})} r (z _ {i} | x _ {j})\right) ^ {2} \\ \simeq \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\frac {1}{N} \sum_ {j = 1} ^ {N} \frac {p \left(z _ {i} \mid x _ {j}\right)}{\frac {1}{N} \sum_ {k = 1} ^ {N} p \left(z _ {i} \mid x _ {k}\right)} r \left(z _ {i} \mid x _ {j}\right)\right) ^ {2} \\ = \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\frac {\sum_ {j = 1} ^ {N} p (z _ {i} | x _ {j}) r (z _ {i} | x _ {j})}{\sum_ {j = 1} ^ {N} p (z _ {i} | x _ {j})}\right) ^ {2} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \mathbb {E} _ {y, z \sim p (y, z)} [ r ^ {2} (z | y) ] = \mathbb {E} _ {y, z \sim p (y, z)} \left[ \left(\mathbb {E} _ {x \sim p (x | y, z)} [ r (z | x) ]\right) ^ {2} \right] \\ \simeq \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\int d x p (x | y _ {i}, z _ {i}) r (z _ {i} | x)\right) ^ {2} \\ = \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\frac {1}{p \left(y _ {i} , z _ {i}\right)} \int d x p \left(y _ {i}\right) p \left(x \mid y _ {i}\right) p \left(z _ {i} \mid x\right) r \left(z _ {i} \mid x\right)\right) ^ {2} \\ = \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\frac {\int d x p (y _ {i}) p (x | y _ {i}) p (z _ {i} | x) r (z _ {i} | x)}{\int d x p (y _ {i}) p (x | y _ {i}) p (z _ {i} | x)}\right) ^ {2} \\ \simeq \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\frac {\sum_ {x _ {j} \in \Omega_ {x} (y _ {i})} p \left(z _ {i} \mid x _ {j}\right) r \left(z _ {i} \mid x _ {j}\right)}{\sum_ {x _ {j} \in \Omega_ {x} (y _ {i})} p \left(z _ {i} \mid x _ {j}\right)}\right) ^ {2} \\ = \frac {1}{N} \sum_ {i = 1} ^ {N} \left(\frac {\sum_ {j = 1} ^ {N} p (z _ {i} | x _ {j}) r (z _ {i} | x _ {j}) \mathbb {1} [ y _ {i} = y _ {j} ]}{\sum_ {j = 1} ^ {N} p (z _ {i} | x _ {j}) \mathbb {1} [ y _ {i} = y _ {j} ]}\right) ^ {2} \\ \end{array}
+$$
+
Here $\Omega_x(y_i)$ denotes the set of $x$ examples that have label $y_i$ , and $\mathbb{1}[\cdot]$ is an indicator function that takes value 1 if its argument is true and 0 otherwise.
+
+The requirement of $\mathbb{E}_{z\sim p(z|x)}[r(z|x)] = 0$ yields
+
+$$
+0 = \mathbb {E} _ {z \sim p (z | x)} [ r (z | x) ] = \int d z p (z) \frac {p (z | x)}{p (z)} r (z | x) \simeq \frac {1}{N} \sum_ {i = 1} ^ {N} \frac {p \left(z _ {i} \mid x _ {j}\right)}{p \left(z _ {i}\right)} r \left(z _ {i} \mid x _ {j}\right) \tag {19}
+$$
+
for any $x_{j}$ .
+
+Combining all terms, we have that the empirical $\hat{G}[p(z|x)]$ is given by
+
+$$
\hat {G} [ p (z | x) ] = \inf _ {r (z | x) \in \mathcal {Q} _ {\mathcal {Z} | \mathcal {X}}} \frac {\frac {1}{N} \sum_ {i = 1} ^ {N} \sum_ {j = 1} ^ {N} \frac {p \left(x _ {j} , z _ {i}\right)}{p \left(x _ {j}\right) p \left(z _ {i}\right)} r ^ {2} \left(z _ {i} \mid x _ {j}\right) - \sum_ {i = 1} ^ {N} \left(\frac {\sum_ {j = 1} ^ {N} p \left(z _ {i} \mid x _ {j}\right) r \left(z _ {i} \mid x _ {j}\right)}{\sum_ {j = 1} ^ {N} p \left(z _ {i} \mid x _ {j}\right)}\right) ^ {2}}{\sum_ {i = 1} ^ {N} \left(\frac {\sum_ {j = 1} ^ {N} p \left(z _ {i} \mid x _ {j}\right) r \left(z _ {i} \mid x _ {j}\right) \mathbb {1} \left[ y _ {i} = y _ {j} \right]}{\sum_ {j = 1} ^ {N} p \left(z _ {i} \mid x _ {j}\right) \mathbb {1} \left[ y _ {i} = y _ {j} \right]}\right) ^ {2} - \sum_ {i = 1} ^ {N} \left(\frac {\sum_ {j = 1} ^ {N} p \left(z _ {i} \mid x _ {j}\right) r \left(z _ {i} \mid x _ {j}\right)}{\sum_ {j = 1} ^ {N} p \left(z _ {i} \mid x _ {j}\right)}\right) ^ {2}} \tag {20}
+$$
+
where $\{z_i\} \sim p(z)$ and $\{x_j\} \sim p(x)$ . It is also possible to use different distributions for importance sampling, which will result in different formulas for the empirical estimation of $G[p(z|x)]$ .
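As a concrete illustration, the ratio inside Eq. (20) can be evaluated numerically for one fixed candidate perturbation $r$ (the infimum over $r$ would be a separate optimization). The array conventions and Monte Carlo prefactors below are chosen for this sketch and are not prescribed by the text:

```python
import numpy as np

def eq20_objective(p_zx, p_z, y, r):
    """Evaluate the ratio inside the empirical estimator of G[p(z|x)]
    for ONE fixed perturbation r; minimizing over r is done separately.
    Hypothetical conventions for this sketch:
      p_zx[i, j] = p(z_i | x_j)   (N x N array)
      p_z[i]     = p(z_i)
      y[j]       = label of example j
      r[i, j]    = r(z_i | x_j)
    """
    N = len(y)
    # E[r^2(z|x)]: double Monte Carlo sum with weights p(z_i|x_j)/p(z_i)
    e_r2_zx = np.sum((p_zx / p_z[:, None]) * r**2) / N**2
    # E[r^2(z)]: self-normalized importance weights over j
    r_z = (p_zx * r).sum(axis=1) / p_zx.sum(axis=1)
    e_r2_z = np.mean(r_z**2)
    # E[r^2(z|y)]: restrict the j-sum to examples sharing the label y_i
    same = (y[:, None] == y[None, :]).astype(float)
    r_zy = (p_zx * r * same).sum(axis=1) / (p_zx * same).sum(axis=1)
    e_r2_zy = np.mean(r_zy**2)
    return (e_r2_zx - e_r2_z) / (e_r2_zy - e_r2_z)
```

On a tiny hand-checkable example ($N = 2$ , distinct labels) the three empirical moments can be verified against a direct calculation.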
+
+# C $G_{\Theta}[p_{\theta}(z|x)]$ FOR PARAMETERIZED DISTRIBUTION $p_{\theta}(z|x)$
+
+Proof. For the parameterized $p_{\theta}(z|x)$ with $\theta \in \Theta$ , after $\theta' \gets \theta + \Delta \theta$ , where $\Delta \theta \in \Theta$ is an infinitesimal perturbation on $\theta$ , we have that the distribution changes from $p_{\theta}(z|x)$ to $p_{\theta + \Delta \theta}(z|x)$ ,
+
+and thus the relative perturbation on $p_{\theta}(z|x)$ is
+
+$$
+\begin{array}{l} \epsilon \cdot r (z | x) = \frac {p _ {\boldsymbol {\theta} + \Delta \boldsymbol {\theta}} (z | x) - p _ {\boldsymbol {\theta}} (z | x)}{p _ {\boldsymbol {\theta}} (z | x)} \\ = \frac {1}{p _ {\boldsymbol {\theta}} (z | x)} \left(p _ {\boldsymbol {\theta}} (z | x) + \Delta \boldsymbol {\theta} ^ {T} \frac {\partial p _ {\boldsymbol {\theta}} (z | x)}{\partial \boldsymbol {\theta}} + \frac {1}{2} \Delta \boldsymbol {\theta} ^ {T} \frac {\partial^ {2} p _ {\boldsymbol {\theta}} (z | x)}{\partial \boldsymbol {\theta} ^ {2}} \Delta \boldsymbol {\theta} + O (\| \Delta \boldsymbol {\theta} \| ^ {3}) - p _ {\boldsymbol {\theta}} (z | x)\right) \\ \simeq \Delta \boldsymbol {\theta} ^ {T} \frac {\partial}{\partial \boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} (z | x) + \frac {1}{2} \Delta \boldsymbol {\theta} ^ {T} \frac {1}{p _ {\boldsymbol {\theta}} (z | x)} \frac {\partial^ {2} p _ {\boldsymbol {\theta}} (z | x)}{\partial \boldsymbol {\theta} ^ {2}} \Delta \boldsymbol {\theta} + O (\| \Delta \boldsymbol {\theta} \| ^ {3}) \\ \end{array}
+$$
+
where $\| \Delta \pmb {\theta}\|$ is the norm of $\Delta \pmb{\theta}$ in the parameter space $\Theta$ .
+
+Similarly, we have
+
+$$
+\begin{array}{l} \boldsymbol {\epsilon} \cdot r (z | y) = \Delta \boldsymbol {\theta} ^ {T} \frac {\partial}{\partial \boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} (z | y) + \frac {1}{2} \Delta \boldsymbol {\theta} ^ {T} \frac {1}{p _ {\boldsymbol {\theta}} (z | y)} \frac {\partial^ {2} p _ {\boldsymbol {\theta}} (z | y)}{\partial \boldsymbol {\theta} ^ {2}} \Delta \boldsymbol {\theta} + O (\| \Delta \boldsymbol {\theta} \| ^ {3}) \\ \boldsymbol {\epsilon} \cdot \boldsymbol {r} (z) = \Delta \boldsymbol {\theta} ^ {T} \frac {\partial}{\partial \boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} (z) + \frac {1}{2} \Delta \boldsymbol {\theta} ^ {T} \frac {1}{p _ {\boldsymbol {\theta}} (z)} \frac {\partial^ {2} p _ {\boldsymbol {\theta}} (z)}{\partial \boldsymbol {\theta} ^ {2}} \Delta \boldsymbol {\theta} + O (\| \Delta \boldsymbol {\theta} \| ^ {3}) \\ \end{array}
+$$
+
Substituting the above expressions into the expansion of $\mathrm{IB}_{\beta}[p(z|x)]$ in Eq. (12), and keeping terms up to second order in $\| \Delta \theta \|$ , we have
+
+$$
\begin{array}{l} \mathbf {I B} _ {\beta} \left[ p _ {\boldsymbol {\theta}} (z | x) (1 + \epsilon \cdot r (z | x)) \right] \\ = \mathbf {I B} _ {\beta} [ p _ {\boldsymbol {\theta}} (z | x) ] + \epsilon \cdot \left(\mathbb {E} _ {x, z \sim p _ {\boldsymbol {\theta}} (x, z)} \left[ r (z | x) \log \frac {p _ {\boldsymbol {\theta}} (z | x)}{p _ {\boldsymbol {\theta}} (z)} \right] - \beta \cdot \mathbb {E} _ {y, z \sim p _ {\boldsymbol {\theta}} (y, z)} \left[ r (z | y) \log \frac {p _ {\boldsymbol {\theta}} (z | y)}{p _ {\boldsymbol {\theta}} (z)} \right]\right) \\ + \frac {\epsilon^ {2}}{2} \left\{\left(\mathbb {E} _ {x, z \sim p _ {\boldsymbol {\theta}} (x, z)} [ r ^ {2} (z | x) ] - \mathbb {E} _ {z \sim p _ {\boldsymbol {\theta}} (z)} [ r ^ {2} (z) ]\right) - \beta \cdot \left(\mathbb {E} _ {y, z \sim p _ {\boldsymbol {\theta}} (y, z)} [ r ^ {2} (z | y) ] - \mathbb {E} _ {z \sim p _ {\boldsymbol {\theta}} (z)} [ r ^ {2} (z) ]\right) \right\} \\ = \mathbf {I B} _ {\beta} [ p _ {\boldsymbol {\theta}} (z | x) ] + \mathbb {E} _ {x, z \sim p _ {\boldsymbol {\theta}} (x, z)} \left[ \left(\Delta \boldsymbol {\theta} ^ {T} \frac {\partial}{\partial \boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} (z | x) + \frac {1}{2} \Delta \boldsymbol {\theta} ^ {T} \frac {1}{p _ {\boldsymbol {\theta}} (z | x)} \frac {\partial^ {2} p _ {\boldsymbol {\theta}} (z | x)}{\partial \boldsymbol {\theta} ^ {2}} \Delta \boldsymbol {\theta}\right) \log \frac {p _ {\boldsymbol {\theta}} (z | x)}{p _ {\boldsymbol {\theta}} (z)} \right] \\ - \beta \cdot \mathbb {E} _ {y, z \sim p _ {\boldsymbol {\theta}} (y, z)} \left[ \left(\Delta \boldsymbol {\theta} ^ {T} \frac {\partial}{\partial \boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} (z | y) + \frac {1}{2} \Delta \boldsymbol {\theta} ^ {T} \frac {1}{p _ {\boldsymbol {\theta}} (z | y)} \frac {\partial^ {2} p _ {\boldsymbol {\theta}} (z | y)}{\partial \boldsymbol {\theta} ^ {2}} \Delta \boldsymbol {\theta}\right) \log \frac {p _ {\boldsymbol {\theta}} (z | y)}{p _ {\boldsymbol {\theta}} (z)} \right] \\ + \frac {1}{2} \left(\mathbb {E} _ {x, z \sim p _ {\boldsymbol {\theta}} (x, z)} \left[\left(\Delta \boldsymbol {\theta} ^ {T} \frac {\partial}{\partial \boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} (z | x)\right) ^ {2} \right] - \mathbb {E} _ {z \sim p _ {\boldsymbol {\theta}} (z)} \left[\left(\Delta \boldsymbol {\theta} ^ {T} \frac {\partial}{\partial \boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} (z)\right) ^ {2} \right]\right) \\ - \frac {\beta}{2} \left(\mathbb {E} _ {y, z \sim p _ {\boldsymbol {\theta}} (y, z)} \left[\left(\Delta \boldsymbol {\theta} ^ {T} \frac {\partial}{\partial \boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} (z | y)\right) ^ {2} \right] - \mathbb {E} _ {z \sim p _ {\boldsymbol {\theta}} (z)} \left[\left(\Delta \boldsymbol {\theta} ^ {T} \frac {\partial}{\partial \boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} (z)\right) ^ {2} \right]\right) \\ = \mathbf {I B} _ {\beta} \left[ p _ {\boldsymbol {\theta}} (z | x) \right] + \Delta \boldsymbol {\theta} ^ {T} \left\{\mathbb {E} _ {x, z \sim p _ {\boldsymbol {\theta}} (x, z)} \left[ \log \frac {p _ {\boldsymbol {\theta}} (z | x)}{p _ {\boldsymbol {\theta}} (z)} \frac {\partial}{\partial \boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} (z | x) \right] - \beta \cdot \mathbb {E} _ {y, z \sim p _ {\boldsymbol {\theta}} (y, z)} \left[ \log \frac {p _ {\boldsymbol {\theta}} (z | y)}{p _ {\boldsymbol {\theta}} (z)} \frac {\partial}{\partial \boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} (z | y) \right] \right\} \\ + \frac {1}{2} \Delta \boldsymbol {\theta} ^ {T} \left\{\left(\mathcal {I} _ {Z | X} (\boldsymbol {\theta}) - \mathcal {I} _ {Z} (\boldsymbol {\theta})\right) - \beta \left(\mathcal {I} _ {Z | Y} (\boldsymbol {\theta}) - \mathcal {I} _ {Z} (\boldsymbol {\theta})\right) \right\} \Delta \boldsymbol {\theta} \\ \end{array}
+$$
+
In the last equality we have used $\mathbb{E}_{x,z\sim p_{\pmb{\theta}}(x,z)}[\frac{1}{p_{\pmb{\theta}}(z|x)}\frac{\partial^2p_{\pmb{\theta}}(z|x)}{\partial\pmb{\theta}^2}] = \int dx p(x)\frac{\partial^2}{\partial\pmb{\theta}^2}\int dz p_{\pmb{\theta}}(z|x) = \int dx p(x)\frac{\partial^2}{\partial\pmb{\theta}^2} 1 = \mathbf{0}$ , and similarly $\mathbb{E}_{y,z\sim p_{\pmb{\theta}}(y,z)}[\frac{1}{p_{\pmb{\theta}}(z|y)}\frac{\partial^2p_{\pmb{\theta}}(z|y)}{\partial\pmb{\theta}^2}] = \mathbf{0}$ . In other words, the $\|\Delta \pmb{\theta}\|^2$ terms in the first-order variation $\delta \mathrm{IB}_{\beta}[p_{\pmb{\theta}}(z|x)]$ vanish, and the remaining $\|\Delta \pmb{\theta}\|^2$ terms all belong to $\delta^2\mathrm{IB}_{\beta}[p_{\pmb{\theta}}(z|x)]$ . Also in the last expression, $\mathcal{I}_Z(\pmb{\theta}) \equiv \int dzp_{\pmb{\theta}}(z)\left(\frac{\partial\log p_{\pmb{\theta}}(z)}{\partial\pmb{\theta}}\right)\left(\frac{\partial\log p_{\pmb{\theta}}(z)}{\partial\pmb{\theta}}\right)^T$ is the Fisher information matrix of $\pmb{\theta}$ for $Z$ , and $\mathcal{I}_{Z|X}(\pmb{\theta}) \equiv \int dx dz p(x)p_{\pmb{\theta}}(z|x)\left(\frac{\partial\log p_{\pmb{\theta}}(z|x)}{\partial\pmb{\theta}}\right)\left(\frac{\partial\log p_{\pmb{\theta}}(z|x)}{\partial\pmb{\theta}}\right)^T$ , $\mathcal{I}_{Z|Y}(\pmb{\theta}) \equiv \int dy dz p(y)p_{\pmb{\theta}}(z|y)\left(\frac{\partial\log p_{\pmb{\theta}}(z|y)}{\partial\pmb{\theta}}\right)\left(\frac{\partial\log p_{\pmb{\theta}}(z|y)}{\partial\pmb{\theta}}\right)^T$ are the conditional Fisher information matrices (Zegers, 2015) of $\pmb{\theta}$ for $Z$ conditioned on $X$ and $Y$ , respectively.
+
+Let us look at
+
+$$
\delta^ {2} \mathbf {I B} _ {\beta} [ p _ {\boldsymbol {\theta}} (z | x) ] = \frac {1}{2} \Delta \boldsymbol {\theta} ^ {T} \left\{\left(\mathcal {I} _ {Z | X} (\boldsymbol {\theta}) - \mathcal {I} _ {Z} (\boldsymbol {\theta})\right) - \beta \left(\mathcal {I} _ {Z | Y} (\boldsymbol {\theta}) - \mathcal {I} _ {Z} (\boldsymbol {\theta})\right) \right\} \Delta \boldsymbol {\theta} \tag {21}
+$$
+
Firstly, note that $\delta^2\mathrm{IB}_{\beta}[p_{\theta}(z|x)]$ is a quadratic function of $\Delta \theta$ , and the scale of $\Delta \theta$ does not change the sign of $\delta^2\mathrm{IB}_{\beta}[p_{\theta}(z|x)]$ , so the condition that $\forall \Delta \theta \in \Theta$ , $\delta^2\mathrm{IB}_{\beta}[p_{\theta}(z|x)] \geq 0$ is invariant to the scale of $\Delta \theta$ and describes the "curvature" in the infinitesimal neighborhood of $\theta$ . Therefore, $\Delta \theta$ can explore any value in $\Theta$ . Secondly, we see that Eq. (21) is a special case of Eq. (14) with $\epsilon \cdot r(z|x) = \Delta \theta^T\frac{\partial}{\partial\theta}\log p_\theta (z|x)$ . Therefore, the inequalities due to Jensen still hold: $\epsilon^2 (\mathbb{E}[r^2 (z|x)] - \mathbb{E}[r^2 (z)]) = \Delta \theta^T (\mathcal{I}_{Z|X}(\theta) - \mathcal{I}_Z(\theta))\Delta \theta \geq 0$ and $\epsilon^2 (\mathbb{E}[r^2 (z|y)] - \mathbb{E}[r^2 (z)]) = \Delta \theta^T (\mathcal{I}_{Z|Y}(\theta) - \mathcal{I}_Z(\theta))\Delta \theta \geq 0$ . If $\Delta \theta^T (\mathcal{I}_{Z|Y}(\theta) - \mathcal{I}_Z(\theta))\Delta \theta > 0$ , then the condition that $\forall \Delta \theta \in \Theta$ , $\delta^2\mathrm{IB}_{\beta}[p_{\theta}(z|x)] \geq 0$ is equivalent to $\forall \Delta \theta \in \Theta$ ,
+
+$$
+\beta \leq \frac {\Delta \boldsymbol {\theta} ^ {T} \left(\mathcal {I} _ {Z | X} (\boldsymbol {\theta}) - \mathcal {I} _ {Z} (\boldsymbol {\theta})\right) \Delta \boldsymbol {\theta}}{\Delta \boldsymbol {\theta} ^ {T} \left(\mathcal {I} _ {Z | Y} (\boldsymbol {\theta}) - \mathcal {I} _ {Z} (\boldsymbol {\theta})\right) \Delta \boldsymbol {\theta}}
+$$
+
+i.e.
+
+$$
+\beta \leq G _ {\Theta} [ p _ {\boldsymbol {\theta}} (z | x) ] \equiv \inf _ {\Delta \boldsymbol {\theta} \in \Theta} \frac {\Delta \boldsymbol {\theta} ^ {T} \left(\mathcal {I} _ {Z | X} (\boldsymbol {\theta}) - \mathcal {I} _ {Z} (\boldsymbol {\theta})\right) \Delta \boldsymbol {\theta}}{\Delta \boldsymbol {\theta} ^ {T} \left(\mathcal {I} _ {Z | Y} (\boldsymbol {\theta}) - \mathcal {I} _ {Z} (\boldsymbol {\theta})\right) \Delta \boldsymbol {\theta}} \tag {22}
+$$
+
If $\Delta \pmb{\theta}^T\left(\mathcal{I}_{Z|Y}(\pmb {\theta}) - \mathcal{I}_Z(\pmb {\theta})\right)\Delta \pmb {\theta} = 0$ , then Eq. (21) is automatically nonnegative for that $\Delta \pmb{\theta}$ , which imposes no constraint on $\beta$ and is therefore looser than Eq. (22). Altogether, we have that the condition that $\forall \Delta \pmb {\theta}\in \Theta$ , $\delta^2\mathrm{IB}_{\beta}[p_{\pmb{\theta}}(z|x)] \geq 0$ is equivalent to $\beta \leq G_{\Theta}[p_{\pmb{\theta}}(z|x)]$ .
+
Moreover, $(G_{\Theta}[p_{\theta}(z|x)])^{-1}$ given by Eq. (22) has the form of a generalized Rayleigh quotient $R(A,B;\Delta\pmb{\theta})\equiv \frac{\Delta\pmb{\theta}^T A\Delta\pmb{\theta}}{\Delta\pmb{\theta}^T B\Delta\pmb{\theta}}$ , where $A = \mathcal{I}_{Z|Y}(\pmb {\theta}) - \mathcal{I}_Z(\pmb {\theta})$ and $B = \mathcal{I}_{Z|X}(\pmb {\theta}) - \mathcal{I}_Z(\pmb {\theta})$ are both Hermitian matrices. It can be reduced to the ordinary Rayleigh quotient $R(D,C^{T}\Delta \pmb {\theta}) = \frac{(C^{T}\Delta\pmb{\theta})^{T}D(C^{T}\Delta\pmb{\theta})}{(C^{T}\Delta\pmb{\theta})^{T}(C^{T}\Delta\pmb{\theta})}$ via the transformation $D = C^{-1}A(C^T)^{-1}$ , where $CC^T$ is the Cholesky decomposition of $B = \mathcal{I}_{Z|X}(\pmb {\theta}) - \mathcal{I}_Z(\pmb {\theta})$ . When $G_{\Theta}[p_{\theta}(z|x)]$ attains its minimum value, the Rayleigh quotient $R(D,C^T\Delta \pmb {\theta})$ attains its maximum value $\lambda_{\mathrm{max}}$ at $C^T\Delta \pmb {\theta} = v_{\mathrm{max}}$ , i.e. $\Delta \pmb {\theta} = (C^T)^{-1}v_{\mathrm{max}}$ , where $\lambda_{\mathrm{max}}$ is the largest eigenvalue of $D$ and $v_{\mathrm{max}}$ the corresponding eigenvector.
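Numerically, this reduction gives a direct way to evaluate $G_{\Theta}[p_{\theta}(z|x)]$ from the three Fisher information matrices. The sketch below uses hypothetical randomly generated inputs and assumes $\mathcal{I}_{Z|X}(\theta) - \mathcal{I}_Z(\theta)$ is positive definite, so that the Cholesky factorization exists:

```python
import numpy as np

def g_theta(I_zx, I_zy, I_z):
    """G_Theta = inf_dtheta (dtheta^T B dtheta) / (dtheta^T A dtheta),
    with B = I_{Z|X} - I_Z and A = I_{Z|Y} - I_Z. Following the text:
    write B = C C^T (Cholesky), set D = C^{-1} A C^{-T}; then
    G_Theta = 1 / lambda_max(D)."""
    B = I_zx - I_z
    A = I_zy - I_z
    C = np.linalg.cholesky(B)        # B = C C^T, C lower-triangular
    C_inv = np.linalg.inv(C)
    D = C_inv @ A @ C_inv.T          # symmetric, so eigvalsh applies
    return 1.0 / np.linalg.eigvalsh(D).max()

# Random Fisher-like matrices, constructed so both differences are SPD.
rng = np.random.default_rng(0)
d = 6
def _spd():
    M = rng.normal(size=(d, d))
    return M @ M.T + 0.1 * np.eye(d)
I_z = _spd()
I_zx = I_z + _spd()
I_zy = I_z + _spd()
g = g_theta(I_zx, I_zy, I_z)
```

The result agrees with the equivalent generalized eigenvalue problem $A v = \lambda B v$ , and no random direction $\Delta\theta$ produces a quotient below the returned infimum.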
+
+
+
+# D PROOF OF THEOREM 1
+
+Proof. Define
+
+$$
T _ {\beta} \left(\beta^ {\prime}\right) := \inf _ {r (z | x) \in \mathcal {Q} _ {\mathcal {Z} | \mathcal {X}}} \left[ \left(\mathbb {E} _ {\beta} \left[ r ^ {2} (z | x) \right] - \mathbb {E} _ {\beta} \left[ r ^ {2} (z) \right]\right) - \beta^ {\prime} \cdot \left(\mathbb {E} _ {\beta} \left[ r ^ {2} (z | y) \right] - \mathbb {E} _ {\beta} \left[ r ^ {2} (z) \right]\right) \right] \tag {23}
+$$
+
where $\mathbb{E}_{\beta}[\cdot ]$ denotes taking the expectation w.r.t. the optimal solution $p_{\beta}^{*}(x,y,z) = p(x,y)p_{\beta}^{*}(z|x)$ at $\beta$ . Using Lemma 2.1, we have that the IB phase transition as defined in Definition 3 corresponds to satisfying the following two conditions:
+
+$$
+\left. T _ {\beta} \left(\beta^ {\prime}\right) \right| _ {\beta^ {\prime} = \beta} \geq 0 \tag {24}
+$$
+
+$$
+\lim _ {\beta^ {\prime} \rightarrow \beta^ {+}} T _ {\beta} \left(\beta^ {\prime}\right) = 0 ^ {-} \tag {25}
+$$
+
Now we prove that $T_{\beta}(\beta')$ is continuous at $\beta' = \beta$ , i.e. $\forall \varepsilon > 0$ , $\exists \delta > 0$ s.t. $\forall \beta' \in (\beta - \delta, \beta + \delta)$ , we have $|T_{\beta}(\beta') - T_{\beta}(\beta)| < \varepsilon$ .
+
+From Eq. (23), we have $T_{\beta}(\beta') - T_{\beta}(\beta) = -(\beta' - \beta) \cdot \left( \mathbb{E}_{\beta}[r^2(z|y)] - \mathbb{E}_{\beta}[r^2(z)] \right)$ . Since $r(z|x)$ is bounded, i.e., $\exists M > 0$ s.t. $\forall z \in \mathcal{Z}, x \in \mathcal{X}, |r(z|x)| \leq M$ , we have
+
+$$
+\left| \mathbb {E} _ {\beta} \left[ r ^ {2} (z | y) \right] \right| = \left| \mathbb {E} _ {\beta} \left[ \left(\mathbb {E} _ {x \sim p (x | y, z)} \left[ r (z | x) \right]\right) ^ {2} \right] \right| \leq \left| \mathbb {E} _ {\beta} \left[ \left(\mathbb {E} _ {x \sim p (x | y, z)} \left[ M \right]\right) ^ {2} \right] \right| = M ^ {2}
+$$
+
+Similarly, we have
+
+$$
+\left| \mathbb {E} _ {\beta} \left[ r ^ {2} (z) \right] \right| = \left| \mathbb {E} _ {\beta} \left[ \left(\mathbb {E} _ {x \sim p (x | z)} [ r (z | x) ]\right) ^ {2} \right] \right| \leq \left| \mathbb {E} _ {\beta} \left[ \left(\mathbb {E} _ {x \sim p (x | z)} [ M ]\right) ^ {2} \right] \right| = M ^ {2}
+$$
+
Hence, $|T_{\beta}(\beta') - T_{\beta}(\beta)| = |\beta' - \beta| \left| \mathbb{E}_{\beta}[r^2(z|y)] - \mathbb{E}_{\beta}[r^2(z)] \right| \leq 2|\beta' - \beta|M^2$ .
+
Therefore, $\forall \varepsilon > 0$ , taking $\delta = \frac{\varepsilon}{2M^2} > 0$ , we have that $\forall \beta' \in (\beta - \delta, \beta + \delta)$ ,
+
+$$
+\left| T _ {\beta} \left(\beta^ {\prime}\right) - T _ {\beta} (\beta) \right| \leq 2 \left| \beta^ {\prime} - \beta \right| M ^ {2} < 2 \delta M ^ {2} = 2 \frac {\varepsilon}{2 M ^ {2}} M ^ {2} = \varepsilon
+$$
+
Hence $T_{\beta}(\beta^{\prime})$ is continuous at $\beta^{\prime} = \beta$ .
+
+Combining the continuity of $T_{\beta}(\beta')$ at $\beta' = \beta$ , and Eq. (24) and (25), we have $T_{\beta}(\beta) = 0$ , which is equivalent to $G[p_{\beta}^{*}(z|x)] = \beta$ after simple manipulation.
+
+
+
+# E INVARIANCE OF $\mathcal{G}[r(z|x);p(z|x)]$ TO ADDITION OF A GLOBAL REPRESENTATION
+
+Here we prove the following lemma:
+
+Lemma 2.2. $\mathcal{G}[r(z|x);p(z|x)]$ defined in Lemma 0.1 is invariant to the transformation $r^{\prime}(z|x)\gets r(z|x) + s(z)$ .
+
Proof. When $r(z|x)$ is shifted by a global transformation $r'(z|x) \gets r(z|x) + s(z)$ , we have $r'(z) \gets \mathbb{E}_{x \sim p(x|z)}[r(z|x) + s(z)] = \mathbb{E}_{x \sim p(x|z)}[r(z|x)] + s(z) \mathbb{E}_{x \sim p(x|z)}[1] = r(z) + s(z)$ , and similarly $r'(z|y) \gets r(z|y) + s(z)$ .
+
+The numerator of $\mathcal{G}[r(z|x);p(z|x)]$ is then
+
+$$
+\begin{array}{l} \mathbb {E} _ {x, z \sim p (x, z)} \left[ \left(r ^ {\prime} (z | x)\right) ^ {2} \right] - \mathbb {E} _ {z \sim p (z)} \left[ \left(r ^ {\prime} (z)\right) ^ {2} \right] \\ = \mathbb {E} _ {x, z \sim p (x, z)} \left[ \big (r (z | x) + s (z) \big) ^ {2} \right] - \mathbb {E} _ {z \sim p (z)} \left[ \big (r (z) + s (z) \big) ^ {2} \right] \\ = \left(\mathbb {E} _ {x, z \sim p (x, z)} \left[ r ^ {2} (z | x) \right] + 2 \mathbb {E} _ {x, z \sim p (x, z)} \left[ r (z | x) s (z) \right] + \mathbb {E} _ {x, z \sim p (x, z)} \left[ s ^ {2} (z) \right]\right) \\ - \left(\mathbb {E} _ {z \sim p (z)} [ r ^ {2} (z) ] + 2 \mathbb {E} _ {z \sim p (z)} [ r (z) s (z) ] + \mathbb {E} _ {z \sim p (z)} [ s ^ {2} (z) ]\right) \\ = \left(\mathbb {E} _ {x, z \sim p (x, z)} \left[ r ^ {2} (z | x) \right] + 2 \mathbb {E} _ {z \sim p (z)} \left[ s (z) \mathbb {E} _ {x \sim p (x | z)} \left[ r (z | x) \right] \right] + \mathbb {E} _ {z \sim p (z)} \left[ s ^ {2} (z) \right]\right) \\ - \left(\mathbb {E} _ {z \sim p (z)} [ r ^ {2} (z) ] + 2 \mathbb {E} _ {z \sim p (z)} [ r (z) s (z) ] + \mathbb {E} _ {z \sim p (z)} [ s ^ {2} (z) ]\right) \\ = \left(\mathbb {E} _ {x, z \sim p (x, z)} [ r ^ {2} (z | x) ] + 2 \mathbb {E} _ {z \sim p (z)} [ r (z) s (z) ] + \mathbb {E} _ {z \sim p (z)} [ s ^ {2} (z) ]\right) \\ - \left(\mathbb {E} _ {z \sim p (z)} [ r ^ {2} (z) ] + 2 \mathbb {E} _ {z \sim p (z)} [ r (z) s (z) ] + \mathbb {E} _ {z \sim p (z)} [ s ^ {2} (z) ]\right) \\ = \mathbb {E} _ {x, z \sim p (x, z)} \left[ r ^ {2} (z | x) \right] - \mathbb {E} _ {z \sim p (z)} \left[ r ^ {2} (z) \right] \\ \end{array}
+$$
+
+Symmetrically, we have
+
+$$
+\mathbb {E} _ {y, z \sim p (y, z)} \left[ \left(r ^ {\prime} (z | y)\right) ^ {2} \right] - \mathbb {E} _ {z \sim p (z)} \left[ \left(r ^ {\prime} (z)\right) ^ {2} \right] = \mathbb {E} _ {y, z \sim p (y, z)} \left[ r ^ {2} (z | y) \right] - \mathbb {E} _ {z \sim p (z)} \left[ r ^ {2} (z) \right]
+$$
+
+Therefore, $\mathcal{G}[r(z|x);p(z|x)] = \frac{\mathbb{E}_{x,z\sim p(x,z)}[r^2(z|x)] - \mathbb{E}_{z\sim p(z)}[r^2(z)]}{\mathbb{E}_{y,z\sim p(y,z)}[r^2(z|y)] - \mathbb{E}_{z\sim p(z)}[r^2(z)]}$ is invariant to $r^\prime (z|x)\gets r(z|x) + s(z)$ .
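As an illustrative numerical check of Lemma 2.2 (not part of the proof), one can build a small random discrete joint distribution and verify that the ratio is unchanged under $r'(z|x) \gets r(z|x) + s(z)$ :

```python
import numpy as np

rng = np.random.default_rng(1)
nx, ny, nz = 4, 3, 5

# Random discrete joint p(x, y, z) = p(x, y) p(z|x)
p_xy = rng.random((nx, ny)); p_xy /= p_xy.sum()
p_zx = rng.random((nz, nx)); p_zx /= p_zx.sum(axis=0, keepdims=True)  # p(z|x)

p_x = p_xy.sum(axis=1)
p_xz = p_x[None, :] * p_zx                    # p(x, z), shape (nz, nx)
p_z = p_xz.sum(axis=1)                        # p(z)
p_xyz = p_xy[None, :, :] * p_zx[:, :, None]   # p(x, y, z), shape (nz, nx, ny)
p_yz = p_xyz.sum(axis=1)                      # p(y, z), shape (nz, ny)

def g_ratio(r):
    """The ratio defining G[r(z|x); p(z|x)]; r has shape (nz, nx)."""
    r_z = (p_xz * r).sum(axis=1) / p_z                  # r(z) = E[r | z]
    r_zy = (p_xyz * r[:, :, None]).sum(axis=1) / p_yz   # r(z|y) = E[r | y, z]
    num = (p_xz * r**2).sum() - (p_z * r_z**2).sum()
    den = (p_yz * r_zy**2).sum() - (p_z * r_z**2).sum()
    return num / den

r = rng.standard_normal((nz, nx))
s = rng.standard_normal(nz)
```

By the Jensen inequalities used above, the ratio is also at least 1 for any nondegenerate $r$ .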
+
+# F PROOF OF THEOREM 2
+
Proof. Using the condition of the theorem, we have that $\forall r(z|x) \in \mathcal{Q}_{\mathcal{Z}|\mathcal{X}}^{(0)}$ , there exist $r_1(z|x) \in \mathcal{Q}_{\mathcal{Z}|\mathcal{X}}$ and $s(z) \in \{s : \mathcal{Z} \to \mathbb{R} \mid s \text{ bounded}\}$ s.t. $r(z|x) = r_1(z|x) + s(z)$ . Note that the only difference between $\mathcal{Q}_{\mathcal{Z}|\mathcal{X}}$ and $\mathcal{Q}_{\mathcal{Z}|\mathcal{X}}^{(0)}$ is that $\mathcal{Q}_{\mathcal{Z}|\mathcal{X}}$ additionally requires $\mathbb{E}_{z \sim p(z|x)}[r_1(z|x)] = 0$ . Using Lemma 2.2, we have
+
+$$
+\inf _ {r (z | x) \in \mathcal {Q} _ {\mathcal {Z} | \mathcal {X}} ^ {(0)}} \mathcal {G} [ r (z | x); p (z | x) ] = \inf _ {r _ {1} (z | x) \in \mathcal {Q} _ {\mathcal {Z} | \mathcal {X}}} \mathcal {G} [ r _ {1} (z | x); p (z | x) ] = G [ p (z | x) ]
+$$
+
where, on the left-hand side, $r(z|x)$ is not subject to the constraint $\mathbb{E}_{z\sim p(z|x)}[r(z|x)] = 0$ .
+
+After dropping the constraint of $\mathbb{E}_{z\sim p(z|x)}[r(z|x)] = 0$ , again using Lemma 2.2, we can let $r(z) = \mathbb{E}_{x\sim p(x|z)}[r(z|x)] = 0$ (since we can perform the transformation $r'(z|x) \gets r(z|x) - r(z)$ , so that the new $r'(z) \equiv 0$ ). Now we get a simpler formula for $G[p(z|x)]$ , as follows:
+
+$$
+G [ p (z | x) ] = \inf _ {r (z | x) \in \mathcal {Q} _ {z | x} ^ {(1)}} \frac {\mathbb {E} _ {x , z \sim p (x , z)} \left[ r ^ {2} (z | x) \right]}{\mathbb {E} _ {y , z \sim p (y , z)} \left[ \left(\mathbb {E} _ {x \sim p (x | y , z)} \left[ r (z | x) \right]\right) ^ {2} \right]} \tag {26}
+$$
+
where $\mathcal{Q}_{\mathcal{Z}|\mathcal{X}}^{(1)}\coloneqq \{r:\mathcal{X}\times \mathcal{Z}\to \mathbb{R}\mid \mathbb{E}_{x\sim p(x|z)}[r(z|x)] = 0, r \text{ bounded}\}$ .
+
+From Eq. (26), we can further require that $\mathbb{E}_{x,z\sim p(x,z)}[r^2 (z|x)] = 1$ . Define
+
+$$
+\rho_ {s} ^ {2} (X, Y; Z) := \sup _ {f (X, Z) \in \mathcal {Q} _ {Z | \mathcal {X}} ^ {(2)}} \mathbb {E} [ (\mathbb {E} [ f (X, Z) | Y, Z ]) ^ {2} ] = \sup _ {f (x, z) \in \mathcal {Q} _ {Z | \mathcal {X}} ^ {(2)}} \mathbb {E} _ {y, z \sim p (y, z)} \left[ \left(\mathbb {E} _ {x \sim p (x | y, z)} [ f (x, z) ]\right) ^ {2} \right] \tag {27}
+$$
+
+where $\mathcal{Q}_{\mathcal{Z}|\mathcal{X}}^{(2)} := \{r : \mathcal{X} \times \mathcal{Z} \to \mathbb{R} \mid \mathbb{E}_{x \sim p(x|z)}[r(z|x)] = 0, \mathbb{E}_{x,z \sim p(x,z)}[r^2(z|x)] = 1, r \text{ bounded}\}$ . Comparing with Eq. (26), it immediately follows that
+
+$$
+G [ p (z | x) ] = \frac {1}{\rho_ {s} ^ {2} (X , Y ; Z)}
+$$
+
+(i) We only have to prove that $\rho_s(X, Y; Z) = \rho_r(X, Y; Z)$ , where $\rho_r(X, Y; Z)$ is defined in Definition 4.
+
+We have
+
+$$
+\begin{array}{l} \mathbb {E} [ f (X, Z) g (Y, Z) ] \\ = \int d x d y d z p (x, y, z) f (x, z) g (y, z) \\ = \int d y d z p (y, z) g (y, z) \int d x p (x | y, z) f (x, z) \\ \equiv \int d y d z p (y, z) g (y, z) F (y, z) \\ \leq \sqrt {\int d y d z p (y , z) g ^ {2} (y , z)} \cdot \sqrt {\int d y d z p (y , z) F ^ {2} (y , z)} \\ \end{array}
+$$
+
where $F(y,z) \coloneqq \int dx p(x|y,z)f(x,z)$ . We have used the Cauchy–Schwarz inequality, where equality holds when $g(y,z) = \alpha F(y,z)$ for some $\alpha$ . Since $\mathbb{E}[g^2 (y,z)] = 1$ , we have $\alpha^2\mathbb{E}[F^2 (y,z)] = 1$ .
+
+Taking the supremum of $(\mathbb{E}[f(X,Z)g(Y,Z)])^2$ w.r.t. $f$ and $g$ , we have
+
+$$
+\begin{array}{l} \rho_ {r} ^ {2} (X, Y; Z) = \sup _ {(f (X, Z), g (Y, Z)) \in \mathcal {S} _ {1}} (\mathbb {E} [ f (X, Z) g (Y, Z) ]) ^ {2} \\ = \sup _ {(f (x, z), g (y, z)) \in \mathcal {S} _ {1}} \int d y d z p (y, z) g ^ {2} (y, z) \cdot \int d y d z p (y, z) F ^ {2} (y, z) \\ = \sup _ {f (x, z) \in \mathcal {Q} _ {z | \mathcal {X}} ^ {(2)}} \int d y d z p (y, z) F ^ {2} (y, z) \\ = \sup _ {f (x, z) \in \mathcal {Q} _ {\mathcal {Z} | x} ^ {(2)}} \int d y d z p (y, z) \left(\int d x p (x | y, z) f (x, z)\right) ^ {2} \\ = \sup_{f(X,Z)\in \mathcal{Q}_{\mathcal{Z}|\mathcal{X}}^{(2)}}\mathbb{E}\bigl[(\mathbb{E}[f(X,Z)|Y,Z])^{2}\bigr ] \\ \equiv \rho_ {s} ^ {2} (X, Y; Z) \\ \end{array}
+$$
+
Here $\mathcal{S}_{1}$ is defined in Definition 4. By definition, both $\rho_r(X,Y;Z)$ and $\rho_s(X,Y;Z)$ take non-negative values. Therefore,
+
+$$
+\rho_ {s} (X, Y; Z) = \rho_ {r} (X, Y; Z) \tag {28}
+$$
+
+(ii) Using the definition of $\rho_r(X,Y;Z)$ , we have
+
+$$
+\begin{array}{l} \rho_ {r} ^ {2} (X, Y; Z) \\ \equiv \sup _ {f (x, z) \in \mathcal {Q} _ {\mathcal {Z} | x} ^ {(2)}} \int d y d z p (y, z) \left(\int d x p (x | y, z) f (x, z)\right) ^ {2} \\ = \sup _ {f (x, z) \in \mathcal {Q} _ {\mathcal {Z} | x} ^ {(2)}} \int d z p (z) \int d y p (y | z) \left(\int d x p (x | y, z) f (x, z)\right) ^ {2} \\ \equiv \sup _ {f (x, z) \in \mathcal {Q} _ {\mathcal {Z} | \mathcal {X}} ^ {(2)}} \int d z p (z) W [ f (x, z) ] \\ \end{array}
+$$
+
+where $W[f(x,z)]\coloneqq \int dy p(y|z)\left(\int dx p(x|y,z)f(x,z)\right)^2$
+
Denote $c(z)\coloneqq p(z)\mathbb{E}_{x\sim p(x|z)}[f^2 (x,z)]$ ; then $\int c(z)dz = \mathbb{E}_{x,z\sim p(x,z)}[f^2 (x,z)] = 1$ . The supremum $\rho_r^2 (X,Y;Z) = \sup_{f(x,z)\in \mathcal{Q}_{\mathcal{Z}|\mathcal{X}}^{(2)}}\int dz\, p(z)W[f(x,z)]$ is then equivalent to the following two-stage supremum:
+
+$$
+\rho_ {r} ^ {2} (X, Y; Z) = \sup _ {c (z): \int c (z) d z = 1} \int d z p (z) \sup _ {f (x, z) \in \mathcal {Q} _ {\mathcal {Z} | \mathcal {X}} ^ {(3)}} W [ f (x, z) ] \tag {29}
+$$
+
where $\mathcal{Q}_{\mathcal{Z}|\mathcal{X}}^{(3)}\coloneqq \{f: \mathcal{X}\times \mathcal{Z}\to \mathbb{R}\mid \mathbb{E}_{x\sim p(x|z)}[f^2 (x,z)] = \frac{c(z)}{p(z)},\mathbb{E}_{x\sim p(x|z)}[f(x,z)] = 0, f \text{ bounded}\}$ . We can think of the inner supremum $\sup_{f(x,z)\in \mathcal{Q}_{\mathcal{Z}|\mathcal{X}}^{(3)}}W[f(x,z)]$ as being taken only w.r.t. $x$ , for some given $z$ .
+
+Now let's consider another supremum:
+
+$$
+\sup _ {h (x) \in \mathcal {Q} _ {\mathcal {X}} ^ {(h)}} \int d y p (y | z) \left(\int d x p (x | y, z) h (x)\right) ^ {2} \tag {30}
+$$
+
where $\mathcal{Q}_{\mathcal{X}}^{(h)} := \{h : \mathcal{X} \to \mathbb{R} \mid \mathbb{E}_{p(x|z)}[h(x)] = 0, \mathbb{E}_{p(x|z)}[h^2(x)] = 1, h \text{ bounded}\}$ . Using a similar technique as in (i), it is straightforward to prove that this supremum equals $\rho_m^2(X,Y|Z = z)$ as defined in Definition 4.
+
+Comparing Eq. (30) and the supremum:
+
+$$
\sup_{f(x,z)\in \mathcal{Q}^{(3)}_{\mathcal{Z} | \mathcal{X}}}W[f(x,z)]
+$$
+
+we see that the only difference is that in the latter $\mathbb{E}_{x\sim p(x|z)}[f^2 (x,z)]$ equals $\frac{c(z)}{p(z)}$ instead of 1. Since $W[f(x,z)]$ is a quadratic functional of $f(x,z)$ , we have
+
+$$
\sup _ {f (x, z) \in \mathcal {Q} _ {\mathcal {Z} | \mathcal {X}} ^ {(3)}} W [ f (x, z) ] = \frac {c (z)}{p (z)} \rho_ {m} ^ {2} (X, Y | Z = z)
+$$
+
+Therefore,
+
+$$
\begin{array}{l} \rho_ {r} ^ {2} (X, Y; Z) = \sup _ {c (z): \int c (z) d z = 1} \int d z p (z) \sup _ {f (x, z) \in \mathcal {Q} _ {\mathcal {Z} | \mathcal {X}} ^ {(3)}} W [ f (x, z) ] \\ = \sup _ {c (z): \int c (z) d z = 1} \int d z p (z) \frac {c (z)}{p (z)} \rho_ {m} ^ {2} (X, Y | Z = z) \\ = \sup _ {c (z): \int c (z) d z = 1} \int d z c (z) \rho_ {m} ^ {2} (X, Y | Z = z) \\ = \sup _ {Z \in \mathcal {Z}} \rho_ {m} ^ {2} (X, Y | Z) \\ \end{array}
+$$
+
where in the last equality we let $c(z)$ put all of its mass where $\rho_m^2 (X,Y|Z = z)$ attains its supremum w.r.t. $z$ .
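The last step, replacing the supremum over normalized $c(z)$ by a point mass at the maximizer, can be illustrated with a discrete stand-in for $\rho_m^2(X,Y|Z=z)$ :

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-in values of rho_m^2(X, Y | Z=z) on a discrete grid of z
w = rng.random(10)

# For any probability vector c, the average c @ w is at most max(w);
# equality holds when c puts all of its mass on an argmax of w.
c_star = np.zeros_like(w)
c_star[np.argmax(w)] = 1.0
```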
+
(iii) When $Z$ is a continuous variable, let $f(x,z) = f_{X}(x)\sqrt{\frac{\delta(z - z_{0})}{p(z)}}$ , where $\delta(\cdot)$ is the Dirac delta function, $z_{0}$ is a parameter, and $f_{X}(x) \in \mathcal{Q}_{\mathcal{X}|\mathcal{Z}}^{(f)}$ , with $\mathcal{Q}_{\mathcal{X}|\mathcal{Z}}^{(f)} := \{f_{X} : \mathcal{X} \to \mathbb{R} \mid f_{X} \text{ bounded}; \forall z \in \mathcal{Z} : \mathbb{E}_{x \sim p(x|z)}[f_{X}(x)] = 0, \mathbb{E}_{x \sim p(x|z)}[f_{X}^{2}(x)] = 1\}$ . We have
+
+$$
+\begin{array}{l} \mathbb {E} _ {x \sim p (x | z)} [ f (x, z) ] = \int p (x | z) f (x, z) d x \\ = \sqrt {\frac {\delta (z - z _ {0})}{p (z)}} \int p (x | z) f _ {X} (x) d x \\ = \sqrt {\frac {\delta (z - z _ {0})}{p (z)}} \cdot 0 \\ = 0 \\ \end{array}
+$$
+
+And
+
+$$
\begin{array}{l} \mathbb {E} \left[ f ^ {2} (X, Z) \right] = \int p (x, z) f ^ {2} (x, z) d x d z \\ = \int p (x, z) f _ {X} ^ {2} (x) \frac {\delta (z - z _ {0})}{p (z)} d x d z \\ = \int d z \delta (z - z _ {0}) \int d x p (x | z) f _ {X} ^ {2} (x) \\ = \int d z \delta \left(z - z _ {0}\right) \cdot 1 \\ = 1 \\ \end{array}
+$$
+
+Therefore, such constructed $f(x,z) = f_{X}(x)\sqrt{\frac{\delta(z - z_{0})}{p(z)}} \in \mathcal{Q}_{\mathcal{Z}|\mathcal{X}}^{(2)}$ , satisfying the requirement for $\rho_s(X,Y;Z)$ (which equals $\rho_r(X,Y;Z)$ by Eq. 28).
+
+Substituting in the special form of $f(x,z)$ into the expression of $\rho_s(X,Y;Z)$ in Eq. (27), we have
+
+$$
+\begin{array}{l} \sup_{f(x,z):f(x,z) = f_{X}(x)\sqrt{\frac{\delta(z - z_{0})}{p(z)}},f_{X}(x)\in \mathcal{Q}_{\mathcal{X}|z}^{(f)}}\int dzp(z)\int dy p(y|z)\left(\int dx p(x|y,z)f(x,z)\right)^{2} \\ = \sup _ {f _ {X} (x) \in \mathcal {Q} _ {\mathcal {X} | \mathcal {Z}} ^ {(f)}, z _ {0} \in \mathcal {Z}} \int d z p (z) \int d y p (y | z) \left(\int d x p (x | y, z) f _ {X} (x) \sqrt {\frac {\delta (z - z _ {0})}{p (z)}}\right) ^ {2} \\ = \sup _ {z _ {0} \in \mathcal {Z}} \int d z p (z) \frac {\delta (z - z _ {0})}{p (z)} \sup _ {f _ {X} (x) \in \mathcal {Q} _ {\mathcal {X} | \mathcal {Z}} ^ {(f)}} \int d y p (y | z) \left(\int d x p (x | y, z) f _ {X} (x)\right) ^ {2} \\ = \sup _ {z _ {0} \in \mathcal {Z}} \int d z \delta (z - z _ {0}) \sup _ {f _ {X} (X) \in \mathcal {Q} _ {\mathcal {X} | \mathcal {Z}} ^ {(f)}} \mathbb {E} [ (\mathbb {E} [ f _ {X} (X) | Y, Z = z ]) ^ {2} | Z = z ] \\ = \sup _ {z _ {0} \in \mathcal {Z}} \int d z \delta \left(z - z _ {0}\right) \rho_ {m} ^ {2} (X, Y | Z = z) \\ = \sup _ {z _ {0} \in \mathcal {Z}} \rho_ {m} ^ {2} (X, Y | Z = z _ {0}) \\ = \sup _ {Z \in \mathcal {Z}} \rho_ {m} ^ {2} (X, Y | Z) \\ \end{array}
+$$
+
We can identify $\sup_{f_X(X) \in \mathcal{Q}_{\mathcal{X} | \mathcal{Z}}^{(f)}} \mathbb{E}[(\mathbb{E}[f_X(X) | Y, Z = z])^2 | Z = z]$ with $\rho_m^2(X, Y | Z = z)$ because $f_X(x)$ satisfies the requirements for the conditional maximum correlation, namely $\mathbb{E}_{p(x | z)}[f_X(x)] = 0$ and $\mathbb{E}_{p(x | z)}[f_X^2(x)] = 1$ for any $z$ . Using the same technique as in (i), it is straightforward to prove that this supremum equals the conditional maximum correlation as defined in Definition 4.
+
+Since the conditional maximum correlation can be viewed as the maximum correlation between $X$ and $Y$ , where $X, Y \sim p(X, Y|Z)$ , using the equality of $(\beta_0[h(x)])^{-1} = \rho_m^2(X;Y)$ (Eq. 7 in Wu et al. (2019)), we can identify the $h(x)$ in $\beta_0[h(x)]$ with the $f_X(X)$ here, and an optimal $f_X^*(X)$ that maximizes $\rho_m^2(X, Y|Z)$ is also an optimal $h^*(x)$ that minimizes $\beta_0[h(x)]$ .
+
(iv) For discrete $X$ , $Y$ and $Z$ and a given $Z = z$ , let $Q_{X,Y|Z} \coloneqq \left(\frac{p(x,y|z)}{\sqrt{p(x|z)p(y|z)}}\right)_{x,y} = \left(\frac{p(x,y)}{\sqrt{p(x)p(y)}}\sqrt{\frac{p(z|x)}{p(z|y)}}\right)_{x,y}$ . We first prove that its second largest singular value equals $\rho_m(X,Y|Z = z) = \sup_{(f,g) \in \mathcal{S}_2} \mathbb{E}_{x,y \sim p(x,y|z)}[f(x)g(y)]$ ( $\mathcal{S}_2$ is defined in Definition 4).
+
+Let column vectors $u_{1} = \sqrt{p(x|z)}$ and $v_{1} = \sqrt{p(y|z)}$ (note that $z$ is given and fixed). Also let $u_{2} = f(x)\sqrt{p(x|z)}$ and $v_{2} = g(y)\sqrt{p(y|z)}$ . Denote inner product $\langle u, v \rangle \equiv \sum_{i} u_{i}v_{i}$ , and the length of a vector as $||u|| = \sqrt{\langle u,u\rangle}$ . We have $||u_{1}|| = ||v_{1}|| = 1$ due to the normalization of probability, $||u_{2}|| = ||v_{2}|| = 1$ due to $\mathbb{E}_{x\sim p(x|z)}[f^2 (x)] = \mathbb{E}_{y\sim p(y|z)}[g^2 (y)] = 1$ , and $\langle u_1,u_2\rangle = \langle v_1,v_2\rangle = 0$ due to $\mathbb{E}_{x\sim p(x|z)}[f(x)] = \mathbb{E}_{y\sim p(y|z)}[g(y)] = 0$ . Furthermore, we have
+
+$$
\sup_{(f,g)\in \mathcal{S}_{2}}\mathbb{E}_{x,y\sim p(x,y|z)}[f(x)g(y)] = \max_{u: \|u\| = 1, \langle u, u_{1}\rangle = 0;\ v: \|v\| = 1, \langle v, v_{1}\rangle = 0} u^{T}Q_{X,Y|Z}\,v
+$$
+
+which is exactly the second largest singular value $\sigma_2(Z)$ of the matrix $Q_{X,Y|Z}$ . Using the result in (ii), we have that $\rho_r(X,Y;Z) = \max_{Z\in \mathcal{Z}}\sigma_2(Z)$ .
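A small numerical sketch of the construction in (iv): for a fixed $z$ , form $Q_{X,Y|Z=z}$ from a (randomly generated, hypothetical) conditional joint $p(x,y|z)$ and read off its singular values. The top singular value is always 1, attained at $u_1 = \sqrt{p(x|z)}$ , $v_1 = \sqrt{p(y|z)}$ ; the second is $\sigma_2(z)$ :

```python
import numpy as np

def q_singular_values(p_xy_z):
    """p_xy_z[x, y] = p(x, y | z) for one fixed z (entries sum to 1).
    Returns the singular values of Q_{X,Y|Z=z} in descending order."""
    p_x = p_xy_z.sum(axis=1)    # p(x | z)
    p_y = p_xy_z.sum(axis=0)    # p(y | z)
    Q = p_xy_z / np.sqrt(np.outer(p_x, p_y))
    return np.linalg.svd(Q, compute_uv=False)

rng = np.random.default_rng(3)
p = rng.random((4, 3)); p /= p.sum()   # a random p(x, y | z)
sv = q_singular_values(p)
```

Repeating this for every $z$ and taking the maximum of the second singular values gives a direct estimate of $\rho_r(X,Y;Z)$ in the discrete case.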
+
+
+
+# G SUBSET SEPARATION AT PHASE TRANSITIONS
+
In this section we study the behavior of $p(z|x)$ at the phase transitions. We use the same categorical dataset as before (where $|X| = |Y| = |Z| = 3$ , $p(x)$ is uniform, and $p(y|x)$ is given in Fig. 5). In Fig. 6 we show $p(z|x)$ on the simplex before and after each phase transition. We see that the first phase transition corresponds to the separation of $x = 2$ (belonging to $y = 2$ ) from $x \in \{0,1\}$ (belonging to classes $y \in \{0,1\}$ ) on the $p(z|x)$ simplex. The second phase transition corresponds to the separation of $x = 0$ from $x = 1$ . Therefore, each phase transition corresponds to gaining the ability to distinguish a new subset of examples, i.e. to the learning of new classes.
+
+# H MNIST EXPERIMENT DETAILS
+
We use the MNIST training examples with classes 0, 1, 2, 3, with a hidden label-noise matrix as given in Fig. 7, based on which we dynamically sample the observed labels at each minibatch. We use the conditional entropy bottleneck (CEB) (Fischer, 2018) as the variational IB objective, and run multiple independent instances, each with a different target $\beta$ . We jump-start learning by training at $\beta = 100$ for 100 epochs, annealing $\beta$ from 100 down to the target $\beta$ over 600 epochs, and continuing to train at the target $\beta$ for another 800 epochs. The encoder is a three-layer neural net, where each hidden layer has 512 neurons and leaky-ReLU activation, and the last layer has linear activation. The classifier $p(y|z)$ is a 2-layer neural net with a 128-neuron ReLU hidden layer. The backward encoder $p(z|y)$ is also a 2-layer neural net with a 128-neuron ReLU hidden layer. We train with Adam (Kingma & Ba, 2015) at a learning rate of $10^{-3}$ , annealed by a factor of $1/(1 + 0.01 \cdot \text{epoch})$ . For Alg. 1, $f_{\theta}$ uses the same architecture as the CEB encoder, and we use $|Z| = 50$ .
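The β schedule described above can be sketched as follows; the interpolation used during annealing is an assumption (the text does not specify it), taken here to be log-linear:

```python
import math

def beta_schedule(epoch, beta_target, beta_start=100.0,
                  warmup_epochs=100, anneal_epochs=600):
    """Hypothetical schedule matching the description: hold beta_start
    for warmup_epochs, anneal down to beta_target over anneal_epochs
    (log-linearly -- an assumption), then hold beta_target."""
    if epoch < warmup_epochs:
        return beta_start
    if epoch < warmup_epochs + anneal_epochs:
        t = (epoch - warmup_epochs) / anneal_epochs
        return math.exp((1 - t) * math.log(beta_start)
                        + t * math.log(beta_target))
    return beta_target
```

The schedule is constant at 100 for the first 100 epochs, decreases monotonically to the target over the next 600, and then stays at the target for the remaining epochs.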
+
+# I CIFAR10 EXPERIMENT DETAILS
+
+We use the same CIFAR10 class confusion matrix provided in Wu et al. (2019) to generate noisy labels with about $20\%$ label noise on average (reproduced in Table 1). We trained $28 \times 1$ Wide ResNet (He et al., 2016; Zagoruyko & Komodakis, 2016) models using the open source implementation from Cubuk et al. (2018) as encoders for the Variational Information Bottleneck (VIB) (Alemi et al., 2016). The 10-dimensional output of the encoder parameterized a mean-field Gaussian with unit covariance. Samples from the encoder were passed to the classifier, a 2-layer MLP. The marginal distributions were mixtures of 500 full-covariance 10-dimensional Gaussians, all parameters of which were trained.
+
+With this standard model, we trained 251 different models at $\beta$ from 1.0 to 6.0 with step size of 0.02. As in Wu et al. (2019), we jump-start learning by annealing $\beta$ from 100 down to the target $\beta$ . We do this over the first 4000 steps of training. The models continued to train for another 56,000 gradient steps after that, a total of 600 epochs. We trained with Adam (Kingma & Ba, 2015) at a base learning rate of $10^{-3}$ , and reduced the learning rate by a factor of 0.5 at 300, 400, and 500 epochs. The models converged to essentially their final accuracy within 40,000 gradient steps, and then remained stable.
+
+
+Figure 5: $p(y|x)$ for the categorical dataset in Fig. 2 and Fig. 6. The value in $i^{\text{th}}$ row and $j^{\text{th}}$ column denotes $p(y = j|x = i)$ . $p(x)$ is uniform.
+
+
+Figure 6: (a) $I(Y;Z)$ vs. $\beta$ for the dataset given in Fig. 5. The phase transitions are marked with vertical dashed line, with $\beta_0^c = 2.065571$ and $\beta_1^c = 5.623333$ . (b)-(e) Optimal $p_{\beta}^{*}(z|x)$ for four values of $\beta$ , i.e. (b) $\beta = 2.060$ , (c) $\beta = 2.070$ , (d) $\beta = 5.620$ , (e) $\beta = 5.625$ (their $\beta$ values are also marked in (a)), where each marker denotes $p(z|x = i)$ for a given $i \in \{0,1,2\}$ .
+
+
+Figure 7: Confusion matrix for MNIST experiment. The value in $i^{\mathrm{th}}$ row and $j^{\mathrm{th}}$ column denotes $p(\tilde{y} = j|y = i)$ for the label noise.
+
+The accuracies reported in Figure 4 are averaged across five passes over the training set. We use $|Z| = 50$ in Alg. 1.
+
+Table 1: Class confusion matrix used in CIFAR10 experiments, reproduced from (Wu et al., 2019). The value in row $i$ , column $j$ gives the probability that an example of class $i$ is labeled as class $j$ . The mean confusion across the classes is $20\%$ .
+
+| | Plane | Auto. | Bird | Cat | Deer | Dog | Frog | Horse | Ship | Truck |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Plane | 0.82232 | 0.00238 | 0.021 | 0.00069 | 0.00108 | 0 | 0.00017 | 0.00019 | 0.1473 | 0.00489 |
+| Auto. | 0.00233 | 0.83419 | 0.00009 | 0.00011 | 0 | 0.00001 | 0.00002 | 0 | 0.00946 | 0.15379 |
+| Bird | 0.03139 | 0.00026 | 0.76082 | 0.0095 | 0.07764 | 0.01389 | 0.1031 | 0.00309 | 0.00031 | 0 |
+| Cat | 0.00096 | 0.0001 | 0.00273 | 0.69325 | 0.00557 | 0.28067 | 0.01471 | 0.00191 | 0.00002 | 0.0001 |
+| Deer | 0.00199 | 0 | 0.03866 | 0.00542 | 0.83435 | 0.01273 | 0.02567 | 0.08066 | 0.00052 | 0.00001 |
+| Dog | 0 | 0.00004 | 0.00391 | 0.2498 | 0.00531 | 0.73191 | 0.00477 | 0.00423 | 0.00001 | 0 |
+| Frog | 0.00067 | 0.00008 | 0.06303 | 0.05025 | 0.0337 | 0.00842 | 0.8433 | 0 | 0.00054 | 0 |
+| Horse | 0.00157 | 0.00006 | 0.00649 | 0.00295 | 0.13058 | 0.02287 | 0 | 0.83328 | 0.00023 | 0.00196 |
+| Ship | 0.1288 | 0.01668 | 0.00029 | 0.00002 | 0.00164 | 0.00006 | 0.00027 | 0.00017 | 0.83385 | 0.01822 |
+| Truck | 0.01007 | 0.15107 | 0 | 0.00015 | 0.00001 | 0.00001 | 0 | 0.00048 | 0.02549 | 0.81273 |
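As a concrete illustration of how such a confusion matrix generates noisy labels, each observed label can be drawn from the row of the true class. This sketch (function and variable names are ours) uses a small 3-class matrix for brevity:

```python
import numpy as np

def sample_noisy_labels(true_labels, confusion, rng):
    """For each true label i, draw an observed label j with
    probability confusion[i, j] (each row must sum to 1)."""
    return np.array([rng.choice(len(confusion), p=confusion[i])
                     for i in true_labels])

# Small 3-class example confusion matrix (rows sum to 1).
C = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
rng = np.random.default_rng(0)
noisy = sample_noisy_labels(np.zeros(10000, dtype=int), C, rng)
# With true class 0, roughly 80% of observed labels remain 0.
```

Resampling at every minibatch, as in the MNIST setup above, means a given example's observed label varies across epochs rather than being fixed once.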
\ No newline at end of file
diff --git a/phasetransitionsfortheinformationbottleneckinrepresentationlearning/images.zip b/phasetransitionsfortheinformationbottleneckinrepresentationlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..2c693cff999a7bdf61ed7275fb761fa3ab714a57
--- /dev/null
+++ b/phasetransitionsfortheinformationbottleneckinrepresentationlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7e000bf44b8521e5e9a4c99dd87c556ef8fb4f26d43612a87d1233b401ac0076
+size 1545835
diff --git a/phasetransitionsfortheinformationbottleneckinrepresentationlearning/layout.json b/phasetransitionsfortheinformationbottleneckinrepresentationlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c8416ef82f2d2a2032c1a187db15efc67dc644d1
--- /dev/null
+++ b/phasetransitionsfortheinformationbottleneckinrepresentationlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04c0bb3955ecef9adeb68dbb6908ab095b6e80cee8b037e0726f9d71229441db
+size 1265404
diff --git a/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/30ad0e32-07e1-44a7-a03e-c02a0dccf742_content_list.json b/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/30ad0e32-07e1-44a7-a03e-c02a0dccf742_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..fa283948ec161793d8c737cd75404f8c6cd7950c
--- /dev/null
+++ b/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/30ad0e32-07e1-44a7-a03e-c02a0dccf742_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b139c76184564dd98e0850bf1a602bf5dbcd2d73bf922a907c6064cb6da32c8f
+size 84866
diff --git a/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/30ad0e32-07e1-44a7-a03e-c02a0dccf742_model.json b/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/30ad0e32-07e1-44a7-a03e-c02a0dccf742_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..fbdd5e781ad2677f87f77130bf030606bbdf1739
--- /dev/null
+++ b/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/30ad0e32-07e1-44a7-a03e-c02a0dccf742_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33d09305413a7fa98b97c8c30ae8914a5db3c31410f36afc526c3a51ae64ac40
+size 104399
diff --git a/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/30ad0e32-07e1-44a7-a03e-c02a0dccf742_origin.pdf b/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/30ad0e32-07e1-44a7-a03e-c02a0dccf742_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..eb477f28d1b879b3fe154f3c170f39738ddda93b
--- /dev/null
+++ b/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/30ad0e32-07e1-44a7-a03e-c02a0dccf742_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0dcbd5b2a20fcab13900964a1c9334b3a5ffef3388b5196513e22acd8a27b151
+size 2033416
diff --git a/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/full.md b/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c0755558ce722963239f7c1b4064d97e9dfdb7fa
--- /dev/null
+++ b/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/full.md
@@ -0,0 +1,333 @@
+# PHYSICS-AS-INVERSE-GRAPHICS: UNSUPERVISED PHYSICAL PARAMETER ESTIMATION FROM VIDEO
+
+Miguel Jaques
+
+School of Informatics
+
+University of Edinburgh
+
+Edinburgh, UK
+
+m.a.m.jaques@sms.ed.ac.uk
+
+Michael Burke
+
+School of Informatics
+
+University of Edinburgh
+
+Edinburgh, UK
+
+michael.burke@ed.ac.uk
+
+Timothy Hospedales
+
+School of Informatics
+
+University of Edinburgh
+
+Edinburgh, UK
+
+t.hospedales@ed.ac.uk
+
+# ABSTRACT
+
+We propose a model that is able to perform unsupervised physical parameter estimation of systems from video, where the differential equations governing the scene dynamics are known, but labeled states or objects are not available. Existing physical scene understanding methods require either object state supervision, or do not integrate with differentiable physics to learn interpretable system parameters and states. We address this problem through a physics-as-inverse-graphics approach that brings together vision-as-inverse-graphics and differentiable physics engines, enabling objects and explicit state and velocity representations to be discovered. This framework allows us to perform long term extrapolative video prediction, as well as vision-based model-predictive control. Our approach significantly outperforms related unsupervised methods in long-term future frame prediction of systems with interacting objects (such as ball-spring or 3-body gravitational systems), due to its ability to build dynamics into the model as an inductive bias. We further show the value of this tight vision-physics integration by demonstrating data-efficient learning of vision-actuated model-based control for a pendulum system. We also show that the controller's interpretability provides unique capabilities in goal-driven control and physical reasoning for zero-data adaptation.
+
+# 1 INTRODUCTION
+
+System identification or physical parameter estimation is commonly required for control or state estimation for physical modelling, and typically relies on dedicated sensing equipment and carefully constructed experiments. Current machine learning approaches to physical modeling from video either require training by supervised regression from video to object coordinates before estimating explicit physics (Watters et al., 2017; Wu et al., 2017b; Belbute-Peres et al., 2018), or are able to discover and segment objects from video in an unsupervised manner, but do not naturally integrate with a physics engine for long-term predictions or generation of interpretable locations and physical parameters for physical reasoning (Xu et al., 2019; van Steenkiste et al., 2018). In this work, we bridge the gap between unsupervised discovery of objects from video and learning the physical dynamics of a system, by learning unknown physical parameters and explicit trajectory coordinates.
+
+Our approach, called physics-as-inverse-graphics, solves the physical modeling problem via a novel vision-as-inverse-graphics encoder-decoder system that can render and de-render image components using Spatial Transformers (ST) (Jaderberg et al., 2015) in a way that makes it possible for the latent representation to generate disentangled interpretable states (position/velocity). These can be used
+
+directly by a differentiable physics engine (Degrave et al., 2016; Belbute-Peres et al., 2018) to learn the parameters of a scene where the family of differential equations governing the system is known (e.g. objects connected by a spring), but the corresponding parameters are not (e.g. the spring constant). This allows us to identify physical parameters and learn the vision components of the model jointly in an end-to-end fashion. Our contribution is a solution to unsupervised learning of physical parameters from video, without access to ground-truth appearance, positions or velocities of the objects, a task that had so far remained unsolved (Wu et al., 2015; Belbute-Peres et al., 2018).
+
+In addition to showing that our model can learn physical parameters without object or state supervision (a task with intrinsic scientific interest in and of itself), we show that incorporating dynamics priors in the form of known physical equations of motion with learnable parameters, together with learnable vision and graphics, can improve model performance in two challenging tasks: long term video prediction and visual model predictive control. We first evaluate physical parameter estimation accuracy and future video frame prediction on 4 datasets with different non-linear interactions and visual difficulty. We then demonstrate the value of our method by applying it to data-efficient learning of vision-based control of an under-actuated pendulum. Notably, our unique ability to extract interpretable states and parameters from pixels without supervision enables end-to-end vision-based control to exploit goal-parameterized policies and physical reasoning for zero-shot adaptation.
+
+# 2 RELATED WORK
+
+The ability to build inductive bias into models through structure is a key factor behind the success of modern neural architectures. Convolutional operations capture spatial correlations (Fukushima, 1980) in images, recency allows for temporal reasoning (Hochreiter & Schmidhuber, 1997), and spatial transformers (Jaderberg et al., 2015) provide spatial invariance in learning. However, many aspects of common data generation processes are not yet considered by these simple inductive biases. Importantly, they typically ignore the physical interactions underpinning data generation. For example, it is often the case that the underlying physics of a dynamic visual scene is known, even if specific parameters and objects are not. Incorporation of this information would be beneficial for learning, predicting the future of the visual scene, or control. Physics-as-inverse graphics introduces a framework that allows such high-level physical interaction knowledge to be incorporated into learning, even when ground-truth object appearance, positions and velocities are not available.
+
+In recent years there has been increased interest in physical scene understanding from video (Fragkiadaki et al., 2016; Finn et al., 2016; Fraccaro et al., 2017; Chang et al., 2017; Jonschkowski et al., 2017; Zheng et al., 2018; Janner et al., 2019). In order to learn explicit physical dynamics from video our system must discover and model the objects in a scene, having position as an explicit latent variable. Here we build on the long literature of neural vision-as-inverse-graphics (Hinton et al., 2011; Kulkarni et al., 2015; Huang & Murphy, 2016; Ellis et al., 2018; Romaszko et al., 2017; Wu et al., 2017a), particularly on the use of spatial transformers (ST) for rendering (Eslami et al., 2016; Rezende et al., 2016; Zhu et al., 2018).
+
+There are several models that assume knowledge of the family of equations governing system dynamics, but where the individual objects are either pre-segmented or their ground-truth positions/velocities are known (Stewart & Ermon, 2017; Wu et al., 2017b; Belbute-Peres et al., 2018). In terms of learning physical parameters, our work is directly inspired by the Galileo model and Physics 101 dataset (Wu et al., 2015; 2016), which fits the dynamics equations to a scene with interacting objects. However, the Galileo model makes use of custom trackers which estimate the position and velocity of each object of interest, and is incapable of end-to-end learning from video, thus bypassing the difficulty of recognizing and tracking objects from video using a neural system. To the best of our knowledge, our model is the first to offer end-to-end unsupervised physical parameter and state estimation.
+
+Within the differentiable physics literature (Degrave et al., 2016), Belbute-Peres et al. (2018) observed that a multi-layer perceptron (MLP) encoder-decoder architecture with a physics engine was not able to learn without supervising the physics engine's output with position/velocity labels (cf. Fig. 4 in Belbute-Peres et al. (2018)). While in their case $2\%$ labeled data is enough to allow learning, with no labels at all the model fails to learn. The key contribution of our work is the incorporation of vision-as-inverse-graphics with physics, which makes this transition possible.
+
+
+Figure 1: Left: High-level view of our architecture. The encoder (top-right) estimates the position of $N$ objects in each input frame. These are passed to the velocity estimator which estimates objects' velocities at the last input frame. The positions and velocities of the last input frame are passed as initial conditions to the physics engine. At every time-step, the physics engine outputs a set of positions, which are used by the decoder (bottom-right) to output a predicted image. If the system is actuated, an input action is passed to the physics engine at every time-step. See Section 3 for detailed descriptions of the encoder and decoder architectures.
+
+Another related area of increasing interest is unsupervised discovery of objects and/or dynamics from video (Xu et al., 2019; van Steenkiste et al., 2018; Greff et al., 2019; Burgess et al., 2019). Though powerful, such models do not typically use interpretable latent representations that can be directly used by a physics engine, reasoned about for physical problem solving, or that are of explicit interest to model users. For example, Kosiorek et al. (2018) and Hsieh et al. (2018) use ST's to locate/place objects in a scene and predict their motion, but this work differs from ours in that our coordinate-consistent design obtains explicit cartesian, angular or scale coordinates, allowing us to feed state vectors directly into a differentiable physics engine. Under a similar motivation as our work, but without an inverse-graphics approach, Ehrhardt et al. (2018) developed an unsupervised model to obtain consistent object locations. However, this only applies to cartesian coordinates, not angles or scale.
+
+Despite recent interest in model-free reinforcement learning, model-based control systems have repeatedly been shown to be more robust and sample efficient (Deisenroth & Rasmussen, 2011; Mania et al., 2018; Watters et al., 2019a). Hafner et al. (2019) learn a latent dynamics model (PlaNet) that allows for planning from pixels, which is significantly more sample efficient than the model-free learning strategies A3C (Mnih et al., 2016) and D4PG (Barth-Maron et al., 2018). However, when used for control, there is often a desire for visually grounded controllers operating under known dynamics, as these are verifiable and interpretable (Burke et al., 2019), and provide transferability and generality. However, system identification is challenging in vision-based control settings. Byravan et al. (2018) use supervised learning to segment objects, controlling these using known rigid body dynamics. Penkov & Ramamoorthy (2019) learn feedforward models with REINFORCE (Williams, 1992) to predict physical states used by a known controller and dynamical model, but this is extremely sample inefficient. In contrast, we learn parameter and state estimation modules jointly to perform unsupervised system identification from pixels, enabling data-efficient vision-actuated model-based control.
+
+# 3 LEARNING PHYSICAL PARAMETERS FROM VIDEO VIA INVERSE GRAPHICS
+
+In order to learn explicit physics from video, several components have to be in place. First, the model must be able to learn to identify and represent the objects in an image. In order to perform dynamics prediction with a physics engine, the position and velocity of the object must be represented
+
+as explicit latent states (whereas appearance can be represented through some latent vector or, in our case, as a set of learned object templates). Our sequence-to-sequence video prediction architecture consists of 4 modules trained jointly: an encoder, a velocity estimator, a differentiable physics engine, and a graphics decoder. The architecture is shown in Figure 1.
+
+Encoder The encoder net takes a single frame $I_{t}$ as input and outputs a vector $\mathbf{p}_t\in \mathbb{R}^{N\times D}$ corresponding to the $D$ -dimensional coordinates of each of $N$ objects in the scene, $\mathbf{p}_t = [\mathbf{p}_t^1,\dots,\mathbf{p}_t^N]$ . For example, when modelling position in 2D space we have $D = 2$ and $\mathbf{p}_t^n = [x,y]_t^n$ ; when modelling object angle we have $D = 1$ and $\mathbf{p}_t^n = [\theta_t^n]$ . The encoder architecture is shown in Figure 1 (top right).
+
+To extract each object's coordinates we use a 2-stage localization approach$^1$. First, the input frame is passed through a U-Net (Ronneberger et al., 2015) to produce $N$ unnormalized masks. These masks (plus a learnable background mask) are stacked and passed through a softmax to produce $N + 1$ masks, where each input pixel is softly assigned to a mask. The input image is then multiplied by each mask, and a 2-layer location network produces coordinate outputs from each masked input component. For a 2D system where the coordinates of each object are its $(x,y)$ position (the polar coordinates case is analogous) and the images have dimensions $H\times H$ , the encoder output represents $(x,y)$ coordinates with values in $[0,H]$ . To do this, the activation of the encoder's output layer is a saturating non-linearity $H / 2\cdot \tanh(\cdot) + H / 2$ .
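For concreteness, the saturating output non-linearity $H/2 \cdot \tanh(\cdot) + H/2$ maps unconstrained activations into pixel coordinates in $[0, H]$; a minimal sketch (the function name is ours):

```python
import numpy as np

def coord_activation(u, H=32.0):
    """Squash unconstrained encoder outputs into pixel coordinates
    in [0, H] via a saturating tanh."""
    return H / 2.0 * np.tanh(u) + H / 2.0
```

Because the output saturates at the image borders, predicted coordinates can never leave the visible frame, which keeps the downstream spatial transformer well behaved.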
+
+Velocity estimator The velocity estimator computes the velocity vector of each object at the $L$ -th input frame given the coordinates produced by the encoder for this object at the first $L$ input frames, $\mathbf{v}_L^n = f(\mathbf{p}_1^n,\dots,\mathbf{p}_L^n)$ . We implement this as a 3 hidden layer MLP with 100 tanh activated units.
+
+Differentiable physics engine The physics engine contains the differential equations governing the system, with unknown physical parameters to be learned - such as spring constants, gravity, mass, etc. Given initial positions and velocities produced by the encoder and velocity estimator, the physics engine rolls out the objects' trajectories. In this work we use a simple physics engine with Euler integration, where $\mathbf{p}_t,\mathbf{v}_t$ is computed from $\mathbf{p}_{t - 1},\mathbf{v}_{t - 1}$ by repeating for $i\in [1..M]$ :
+
+$$
+\mathbf{p}_{t+\frac{i}{M}} = \mathbf{p}_{t+\frac{i-1}{M}} + \frac{\Delta t}{M}\cdot \mathbf{v}_{t+\frac{i}{M}}; \quad \mathbf{v}_{t+\frac{i}{M}} = \mathbf{v}_{t+\frac{i-1}{M}} + \frac{\Delta t}{M}\cdot \mathbf{F}\left(\mathbf{p}_{t+\frac{i-1}{M}}, \mathbf{v}_{t+\frac{i-1}{M}}; \theta\right), \tag{1}
+$$
+
+where $\Delta t$ is the integration step, $\theta$ are the physical parameters and $\mathbf{F}$ is the force applied to each object, according to the equations in Appendix A. We use $M = 5$ in all experiments. In principle, more complex physics engines could be used (Chen et al., 2018; Belbute-Peres et al., 2018).
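The substep loop of Eq. (1) can be sketched as below. Note the indices in Eq. (1): the velocity is updated first from the previous substep's state, and the new velocity is then used in the position update (a semi-implicit Euler step). The force function and step size here are illustrative placeholders:

```python
import numpy as np

def physics_step(p, v, force, theta, dt=0.1, M=5):
    """One engine step of Eq. (1): M Euler substeps of size dt/M.
    Velocity is updated first, then used in the position update."""
    h = dt / M
    for _ in range(M):
        v = v + h * force(p, v, theta)
        p = p + h * v
    return p, v

# Toy example: 1-D harmonic force F = -k * p, with theta = spring constant k.
force = lambda p, v, k: -k * p
p, v = np.array([1.0]), np.array([0.0])
for _ in range(100):
    p, v = physics_step(p, v, force, theta=1.0)
```

This update ordering keeps the energy of oscillatory systems bounded over long rollouts, which matters for the extrapolation regime evaluated later.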
+
+Coordinate-Consistent Decoder The decoder takes as input the positions given by the encoder or physics engine, and outputs a predicted image $\tilde{I}_t$ . The decoder is the most critical part of this system, and is what allows the encoder, velocity estimator and physics engine to train correctly in a fully unsupervised manner. We therefore describe its design and motivation in greater detail.
+
+While an encoder with outputs in the range $[0, H]$ can represent coordinates in pixel space, it does not mean that the decoder will learn to correctly associate an input vector $(x, y)$ with an object located at pixel $(x, y)$ . If the decoder is unconstrained, like a standard MLP, it can very easily learn erroneous, non-linear representations of this Cartesian space. For example, given two different inputs, $(x_1, y_1)$ and $(x_1, y_2)$ , with $y_1 \neq y_2$ , the decoder may render those two objects at different horizontal positions in the image. While having a correct Cartesian coordinate representation is not strictly necessary to allow physical parameters of the physics engine to be learned from video, it is critical to ensure correct future predictions. This is because the relationship between position vector and pixel space position must be fixed: if the position vector changes by $(\Delta x, \Delta y)$ , the object's position in the output image must change by $(\Delta x, \Delta y)$ . This is the key concept that allows us to improve on Belbute-Peres et al. (2018), in order to learn an encoder, decoder and physics engine without state labels.
+
+In order to impose a correct latent-coordinate to pixel-coordinate correspondence, we use spatial transformers (ST) with inverse parameters as the decoder's writing attention mechanism. We want transformer parameters $\omega$ to be such that a decoder input of $\mathbf{p}_t^n = [x,y]_t^n$ , places the center of the writing attention window at $(x,y)$ in the image, or that a decoder input of $\mathbf{p}_t^n = \theta_t^n$ rotates the attention window by $\theta$ . In the original ST formulation (Jaderberg et al., 2015), the matrix $\omega$ represents the affine transformation applied to the output image to obtain the source image. This means that the elements of $\omega$ in Eq. 1 of Jaderberg et al. (2015) do not directly represent translation, scale or angle of
+
+
+Figure 2: Future frame predictions for 3-ball gravitational system (top) and 2-digit spring system (bottom). IN: Interaction Network. Only the combination of Physics and Inverse-Graphics maintains object integrity and correct dynamics many steps into the future.
+
+the writing attention window. To achieve this representation, we use a ST with inverse transformation parameters. For a general affine transformation with translation $(x,y)$ , angle $\theta$ and scale $s$ , we want to modify the source image coordinates according to:
+
+$$
+\left( \begin{array}{c} x_o \\ y_o \\ 1 \end{array} \right) = \left( \begin{array}{ccc} s\cos\theta & s\sin\theta & x \\ -s\sin\theta & s\cos\theta & y \\ 0 & 0 & 1 \end{array} \right) \left( \begin{array}{c} x_s \\ y_s \\ 1 \end{array} \right) \tag{2}
+$$
+
+This transformation can be obtained with a ST by inverting (2):
+
+$$
+\left( \begin{array}{c} x_s \\ y_s \\ 1 \end{array} \right) = \frac{1}{s} \left( \begin{array}{ccc} \cos\theta & -\sin\theta & -x\cos\theta + y\sin\theta \\ \sin\theta & \cos\theta & -x\sin\theta - y\cos\theta \\ 0 & 0 & s \end{array} \right) \left( \begin{array}{c} x_o \\ y_o \\ 1 \end{array} \right) \tag{3}
+$$
+
+Therefore, to obtain a decoder with coordinate-consistent outputs, we simply use a ST with parameters $\omega$ as given in (3).
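A sketch of constructing the inverse transformation parameters of Eq. (3), as the 2×3 affine matrix a spatial-transformer sampler expects (function names are ours), together with a sanity check that composing it with the forward map of Eq. (2) is the identity:

```python
import numpy as np

def inverse_st_params(x, y, theta, s):
    """Top two rows of the inverse affine matrix of Eq. (3): a spatial
    transformer fed these parameters writes the template at translation
    (x, y), rotation theta, and scale s."""
    c, si = np.cos(theta), np.sin(theta)
    return (1.0 / s) * np.array([
        [c, -si, -x * c + y * si],
        [si,  c, -x * si - y * c],
    ])

def forward(x, y, theta, s, pt):
    """Forward map of Eq. (2), from source to output coordinates."""
    c, si = np.cos(theta), np.sin(theta)
    A = np.array([[s * c, s * si, x], [-s * si, s * c, y]])
    return A @ np.append(pt, 1.0)

omega = inverse_st_params(0.3, -0.2, 0.7, 1.5)
pt = np.array([0.4, 0.9])
recovered = omega @ np.append(forward(0.3, -0.2, 0.7, 1.5, pt), 1.0)
# recovered equals pt: Eq. (3) inverts Eq. (2).
```

Because the matrix entries are differentiable in $(x, y, \theta, s)$, gradients flow from the rendered image back into the physics engine's state.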
+
+Each object is represented by a learnable content $\mathbf{c}^n\in [0,1]^{H\times H\times C}$ and mask $\mathbf{m}^n\in \mathbb{R}^{H\times H\times 1}$ tensor, $n = 1..N$ . Additionally, we learn background content $\mathbf{c}^{bkg}\in [0,1]^{H\times H\times C}$ and mask $\mathbf{m}^{bkg}\in \mathbb{R}^{H\times H\times 1}$ , that do not undergo spatial transformation. One may think of the content as an RGB image containing the texture of an object and the mask as a grayscale image containing the shape and z-order of the object. In order to produce an output image, the content and mask are transformed according to $[\hat{\mathbf{c}}_t^n,\hat{\mathbf{m}}_t^n] = \mathrm{ST}([\mathbf{c}^n,\mathbf{m}^n],\omega_{\mathbf{p}_t^n})$ and the resulting logit masks are combined via a softmax across channels, $[\tilde{\mathbf{m}}_t^1,\dots,\tilde{\mathbf{m}}_t^N,\tilde{\mathbf{m}}_t^{bkg}] = \mathrm{softmax}(\hat{\mathbf{m}}_t^1,\dots,\hat{\mathbf{m}}_t^N,\mathbf{m}^{bkg})$ . The output image is obtained by multiplying the output masks by the contents:
+
+$$
+\tilde{I}_t = \tilde{\mathbf{m}}_t^{bkg}\odot \mathbf{c}^{bkg} + \sum_{n=1}^{N}\tilde{\mathbf{m}}_t^{n}\odot \hat{\mathbf{c}}_t^{n}. \tag{4}
+$$
+
+The decoder architecture is shown in Fig. 1, bottom-right. The combined use of ST's and masks provides a natural way to model depth ordering, allowing us to capture occlusions between objects.
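The softmax-over-masks compositing of Eq. (4) can be sketched as follows (array shapes and names are ours). The softmax across the stacked logit masks yields per-pixel weights summing to one, which is what lets the masks encode soft z-ordering and occlusion:

```python
import numpy as np

def composite(contents, logit_masks, bkg_content, bkg_logit_mask):
    """Eq. (4): softmax the N object logit masks together with the
    background mask, then blend all contents pixel-wise."""
    logits = np.stack(list(logit_masks) + [bkg_logit_mask])   # (N+1, H, W)
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    weights = e / e.sum(axis=0, keepdims=True)                # sum to 1 per pixel
    imgs = np.stack(list(contents) + [bkg_content])           # (N+1, H, W, C)
    return (weights[..., None] * imgs).sum(axis=0)

H, N = 8, 2
contents = [np.full((H, H, 3), v) for v in (0.2, 0.9)]
masks = [np.random.randn(H, H) for _ in range(N)]
out = composite(contents, masks, np.zeros((H, H, 3)), np.zeros((H, H)))
```

In the model, `contents` and `masks` would be the spatially transformed templates $\hat{\mathbf{c}}_t^n, \hat{\mathbf{m}}_t^n$ rather than random arrays.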
+
+Auxiliary autoencoder loss Using a constrained decoder ensures that the encoder and decoder produce objects in consistent locations. However, it is hard to learn the full model from future frame prediction alone, since the encoder's training signal comes exclusively from the physics engine. To alleviate this and quickly build a good encoder/decoder representation, we add a static per-frame autoencoder loss.
+
+Training During training we use $L$ input frames and predict the next $T_{pred}$ frames. Defining the frames produced by the decoder via the physics engine as $\tilde{I}_t^{\mathrm{pred}}$ and the frames produced by the decoder using the output of the encoder directly as $\tilde{I}_t^{\mathrm{ae}}$ , the total loss is:
+
+$$
+\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{pred}} + \alpha \mathcal{L}_{\text{rec}} = \sum_{t=L+1}^{L+T_{pred}} \mathcal{L}\left(\tilde{I}_t^{\text{pred}}, I_t\right) + \alpha \sum_{t=1}^{L+T_{pred}} \mathcal{L}\left(\tilde{I}_t^{\text{ae}}, I_t\right) \tag{5}
+$$
+
+where $\alpha$ is a hyper-parameter. We use mean-squared error loss throughout. During testing we predict an additional $T_{ext}$ frames in order to evaluate long term prediction beyond the length seen for training.
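Using mean-squared error as the per-frame loss, Eq. (5) amounts to the following sketch (array shapes and names are ours):

```python
import numpy as np

def total_loss(pred_frames, ae_frames, true_frames, L, alpha=1.0):
    """Eq. (5): prediction MSE on frames L+1..L+T_pred plus a weighted
    per-frame autoencoding MSE over all L+T_pred frames."""
    mse = lambda a, b: float(np.mean((a - b) ** 2))
    loss_pred = mse(pred_frames, true_frames[L:])   # physics-engine rollout
    loss_rec = mse(ae_frames, true_frames)          # static autoencoding
    return loss_pred + alpha * loss_rec

frames = np.random.rand(10, 32, 32, 3)   # L + T_pred = 10 frames, L = 3
loss = total_loss(frames[3:], frames, frames, L=3, alpha=0.5)
# Perfect reconstructions and predictions give zero loss.
```

The autoencoder term supervises the encoder/decoder on every frame, while the prediction term only reaches the encoder through the physics engine.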
+
+# 4 EXPERIMENTS
+
+# 4.1 PHYSICAL PARAMETER LEARNING AND FUTURE PREDICTION
+
+
+Figure 3: Frame prediction accuracy (SSI, higher is better) for the balls datasets. Left of the green dashed line corresponds to the training range, $T_{pred}$ , right corresponds to extrapolation, $T_{ext}$ . We outperform Interaction Networks (IN) (Watters et al., 2017), DDPAE (Hsieh et al., 2018) and VideoLSTM (Srivastava et al., 2015) in extrapolation due to incorporating explicit physics.
+
+
+
+
+
+| Dataset (Parameters) | 2-balls spring $(k, l)$ | 2-digits spring $(k, l)$ | 3-balls gravity $g$ | 3-balls gravity $m$ |
+| --- | --- | --- | --- | --- |
+| Learned value | (4.26, 6.17) | (2.18, 12.24) | 65.7 | 0.95 |
+| Ground-truth value | (4.0, 6.0) | (2.0, 12.0) | 60.0 | 1.0 |
+
+Table 1: Physical parameters learned from video are within $10\%$ of the true system parameters.
+
+Setup To explore learning physical parameters and evaluate long-term prediction we train our model on scenes with 5 different settings: two colored balls bouncing off the image edges; two colored balls connected by a spring; three colored balls with gravitational pull - all on a black background; and, to test greater visual complexity, 2 MNIST digits connected by a spring, on a CIFAR background. We train using values of $(L, T_{pred}, T_{ext})$ set to $(3,7,20)$ , $(3,7,20)$ , $(3,7,20)$ , $(4,12,24)$ and $(3,7,20)$ , respectively. For the spring systems the physical parameters to be learned are the spring constant $k$ and equilibrium distance $l$ , and for the gravitational system it is the gravity constant $g$ or the mass of the objects $m$ (when learning gravity the mass is fixed, and vice-versa). In all cases we use objects with mass $m = 1$ . We provide the exact equations of motion used in these systems and other training details in Appendices A and B, respectively. All datasets consist of 5000 sequences for training, 500 for validation, and 500 for testing. We use a learnable ST scale parameter initialized at $s = 2$ in the balls datasets and $s = 1$ in the digits dataset. In these datasets we set $\theta = 0$ .
+
+Baselines We compare our model to 3 strong baselines: DDPAE (Hsieh et al., 2018)$^2$, which is a generative model that uses an inverse-graphics model with black-box dynamics; VideoLSTM (Srivastava et al., 2015), which uses black-box encoding, decoding and dynamics; and Interaction Network + Inverse-Graphics, which uses the same encoder and decoder as our Physics-as-Inverse-Graphics model, but where the dynamics module is an Interaction Network (Battaglia et al., 2016). The latter
+
+model allows us to compare explicit physics with relational dynamics networks, in terms of their ability to correctly capture object interactions$^3$.
+
+Results Table 1 shows that our model finds physical parameters close to the ground-truth values used to generate the datasets, and Figure 4 shows the contents and masks learned by the decoder. This highlights the fact that the proposed model can successfully perform unsupervised system identification from pixels. Future frame predictions for two of the systems are shown in Figure 2, and the per-step Structural Similarity Index (SSI)$^4$ of the models on the prediction and extrapolation range is shown in Figure 3. While all models obtain low error in the prediction range (left of the green dashed line), our model is significantly better in the extrapolation range. Even many steps into the future, our model's predictions are still highly accurate, unlike those of other black-box models (Figure 2). This shows the value of using an explicit physics model in systems where the dynamics are non-linear yet well defined. Further rollouts are shown in Appendix C, and we encourage the reader to watch the videos for all the datasets at https://sites.google.com/view/physicsasinversegraphics.
+
+This difference in performance is explained in part by the fact that in some of these systems the harder-to-predict parts of the dynamics do not appear during training. For example, in the gravitational system, whiplash from objects coming into close contact is seldom present in the first $K + T_{pred}$ steps given in the training set, but happens frequently in the $T_{ext}$ extrapolation steps evaluated during testing. We do not consider this a failure of the black-box models, but rather a consequence of the generality vs. specificity tradeoff: a model without a sufficiently strong inductive bias on the dynamics is simply not able to correctly infer close-distance dynamics from long-distance dynamics.
+
+
+Figure 4: Contents and masks learned by the decoder. Object masks: $\sigma (\mathbf{m})$ . Objects for rendering: $\sigma (\mathbf{m})\odot \mathbf{c}$ . Contents and masks correctly capture each part of the scene: colored balls, MNIST digits and CIFAR background. We omit the black background learned on the balls dataset.
+
+
+Table 2: Test loss under different training conditions. Separate gradients: Train encoder/decoder on $\mathcal{L}_{\mathrm{rec}}$ and velocity estimator and physics engine on $\mathcal{L}_{\mathrm{pred}}$ . Black-box decoder, joint: Joint training using a standard MLP network as the decoder. Only joint training using our coordinate-consistent decoder succeeds.
+
+| Train using | \( \mathcal{L}_{\text{pred}} \) | \( \mathcal{L}_{\text{rec}} \) |
+| --- | --- | --- |
+| only \( \mathcal{L}_{\text{pred}} \) | 31.4 | 20.5 |
+| separate gradients | 28.1 | 0.22 |
+| joint \( \mathcal{L}_{\text{pred}} + \alpha \mathcal{L}_{\text{rec}} \) | 1.39 | 0.63 |
+| black-box decoder, joint | 30.9 | 2.87 |
+
+Ablation studies Since the encoder and decoder must discover the objects present in the image and the corresponding locations, one might assume that the velocity estimator and physics engine could be learned using only the prediction loss, and encoder/decoder using only the static autoencoder loss, i.e., without joint training. In Table 2 we compare the performance of four variants on the 3-ball gravity dataset: joint training using only the prediction loss; joint training using the prediction and autoencoder losses; training the encoder/decoder on the autoencoder loss and the velocity estimator and physics engine on the prediction loss; and joint training, but using an MLP black-box decoder.
+
+We can see that only joint training on the prediction and autoencoder losses obtains satisfactory performance, and that the use of the proposed coordinate-consistent decoder is critical. The prediction loss is essential in order for the model to learn encoders/decoders whose contents and masks can be correctly used by the physics engine. This can be understood by considering how object interaction influences the decoder. In the gravitational system, the forces between objects depend only on their distances, so if two objects swap locations the forces must remain the same. If the content/mask learned for each object is centered differently relative to its template center, rendering the objects at positions $[x,y]$ and $[w,z]$ , or at $[w,z]$ and $[x,y]$ , will produce different distances between these two objects in image space. This violates the permutation invariance property of the system. Learning the encoder/decoder along with the velocity estimator and physics engine on the prediction loss allows the encoder and decoder to learn locations and contents/masks that satisfy the characteristics of the system and allows the physics to be learned correctly. In Appendix D we perform further ablations on the decoder architecture and its ability to correctly render objects in regions of the image not seen during training.
+
+Figure 5: Top: Comparison between our model and PlaNet (Hafner et al., 2019) in terms of learning sample efficiency (left). Explicit physics allows reasoning for zero-shot adaptation to domain shift in gravity (center) and goal-driven control to balance the pendulum in any position (right). DDPG (VAE) corresponds to a DDPG agent trained on the latent space of an autoencoder (trained with 320k images) after 80k steps. DDPG (proprio) corresponds to an agent trained from proprioception after 30k steps. Bottom: The first 3 rows show a zero-shot counterfactual episode with a gravity multiplier of 1.4 for an oracle, our model and PlaNet, with vertical as the target position (as trained). The last row shows an episode using a goal image to infer the non-vertical goal state.
+
+# 4.2 VISION-BASED MODEL-PREDICTIVE CONTROL (MPC)
+
+Tasks One of the main applications of our method is to identify the (actuated) dynamical parameters and states of a physical system from video, which enables vision-based planning and control. Here we apply it to the pendulum from OpenAI Gym (Brockman et al., 2016), one typically solved from proprioceptive state rather than pixels. For training we collect 5000 sequences of 14 frames with random initialization $(\dot{\theta}_0 \sim \mathrm{Unif}(-6, 6))$ and actions $(u_t \sim \mathrm{Unif}(-2, 2))$ . The physical parameters to learn are gravity $g = 10.0$ and the actuation coefficient $a = 1.0$ . We use $K = 4$ and $T_{pred} = 10$ . We use the trained MPC model as follows. At every step, the previous 4 frames are passed to the encoder and velocity nets to estimate $[\theta_t, \dot{\theta}_t]$ . This estimate is passed to the physics engine with learned parameters $g$ and $a$ . We perform 100-step model-predictive control using the cross-entropy method (Rubinstein, 1997), exactly as described in Hafner et al. (2019), setting the vertical position and zero velocity as the goal.
+
+Baselines We compare our model to an oracle model, which has the true physical parameters and access to the true pendulum position and velocity (not vision-based), as well as a concurrent state-of-the-art model-based RL method (PlaNet (Hafner et al., 2019)) and a model-free deep deterministic policy gradient (DDPG) agent (Lillicrap et al., 2016). To provide an equivalent comparison to our model, we train PlaNet on random episodes.
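The planning loop described above can be sketched with a generic cross-entropy method planner. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation (which follows Hafner et al. (2019)); the names `cem_plan` and `rollout` and all hyperparameter values here are illustrative.

```python
import numpy as np

def rollout(dynamics, state, actions):
    """Apply a sequence of actions through a (learned) dynamics model."""
    states = []
    for u in actions:
        state = dynamics(state, u)
        states.append(state)
    return np.array(states)

def cem_plan(dynamics, cost, state, horizon, pop=200, elite=20, iters=5, seed=0):
    """Cross-entropy method: sample action sequences from a Gaussian,
    refit the Gaussian to the lowest-cost elite sequences, return the mean."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(horizon), np.ones(horizon)
    for _ in range(iters):
        candidates = rng.normal(mu, sigma, size=(pop, horizon))
        costs = np.array([cost(rollout(dynamics, state, a)) for a in candidates])
        elites = candidates[np.argsort(costs)[:elite]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu
```

In the paper's setting, `dynamics` would be the learned physics engine stepping $[\theta, \dot{\theta}]$ and `cost` the distance to the goal state; in MPC only the first planned action is executed before re-planning.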
+
+Results In terms of system identification, our model recovers the correct gravity ( $g = 9.95$ ) and force coefficient ( $a = 0.99$ ) values from vision alone, which is a prerequisite for correct planning and control. Figure 5 (top-left) highlights the data efficiency of our method, which is comparable to PlaNet while being dramatically faster than DDPG from pixels. Importantly, the interpretability of the explicit physics in our model provides some unique capabilities. We can perform simple counterfactual physical reasoning such as 'How should I adapt my control policy if gravity were increased?', which enables zero-shot adaptation to new environmental parameters. Figure 5 (top-middle) shows that our model can exploit such reasoning to succeed immediately over a wide range of gravities with no re-training. Similarly, while the typical inverted pendulum goal is to balance the pendulum upright, interpretable physics means that this is only one point in a space of potential goals. Figure 5 (top-right) evaluates the goal-parameterized control enabled by our model. Any feasible target angle specified can be directly reached by the controller, and there is extrapolative generalisation across the space of goals even though only one goal (vertical) was seen during training. Importantly, these last two capabilities are provided automatically by our model due to its disentangled interpretable representation, but cannot be achieved without further adaptive learning by alternatives that are reward-based (Mnih et al., 2016) or rely on implicit physics (Hafner et al., 2019).
+
+# 5 LIMITATIONS
+
+Although the approach presented here shows promising results in terms of physical parameter estimation, long-term video prediction and MPC, a number of limitations need to be overcome for real-world application.
+
+Templates as object representation Though the assumption that every scene in a dataset is a combination of learnable templates is a common one in the literature (cf. Tieleman (2014) for an extensive study), it is insufficient to model real-world scenes. For example, applying physics-as-inverse-graphics to the Physics101 dataset (Wu et al., 2016) would require representing objects using a latent appearance representation that could be used by the decoder (Eslami et al., 2016). This would introduce new modelling challenges, requiring object tracking to keep correct object identity associations (Kosiorek et al., 2018). In this work we simplify this problem by assuming that objects are visually distinct throughout the dataset, though this does not detract from the essential contributions of the paper.
+
+Rigid sequence to sequence architecture In this work we used a sequence-to-sequence architecture with a fixed number of input steps. This architectural choice, inspired by Watters et al. (2017), prevents the model from updating its state beliefs if given additional input frames later in the sequence. Formulating the current model in a probabilistic manner that would allow for state/parameter filtering and smoothing at inference time is a promising direction of future work.
+
+Static background assumption Many scenes of interest do not follow the assumption that the only moving objects in the scene are the objects of interest (even though this assumption is widely used). Adapting our model to varying scene backgrounds would require additional components to discern which parts of the scene follow the dynamics assumed by the physics engine, in order to correctly perform object discovery. This is a challenging problem, but we believe it would greatly increase the range of applications of the ideas presented here.
+
+# 6 CONCLUSION
+
+Physics-as-inverse graphics provides a valuable mechanism to integrate inductive bias about physical data generating processes into learning. This allows unsupervised object tracking and system identification, in addition to sample efficient, generalisable and flexible control. However, incorporating this structure into lightly supervised deep learning models has proven challenging to date. We introduced a model that accomplishes this, relying on a coordinate-consistent decoder that enables image reconstruction from physics. We have shown that our model is able to perform accurate long term prediction and that it can be used to learn the dynamics of an actuated system, allowing us to perform vision-based model-predictive control.
+
+# REFERENCES
+
+Gabriel Barth-Maron, Matthew W Hoffman, David Budden, Will Dabney, Dan Horgan, Alistair Muldal, Nicolas Heess, and Timothy Lillicrap. Distributed distributional deterministic policy gradients. *ICLR*, 2018.
+
+Peter W. Battaglia, Razvan Pascanu, Matthew Lai, Danilo Rezende, and Koray Kavukcuoglu. Interaction Networks for Learning about Objects, Relations and Physics. In NIPS, 2016.
+Filipe De A Belbute-Peres, Kevin A Smith, Kelsey R Allen, Joshua B Tenenbaum, and J Zico Kolter. End-to-End Differentiable Physics for Learning and Control. In NIPS, 2018.
+Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. CoRR, abs/1606.01540, 2016.
+Christopher P Burgess, Loic Matthey, Nicholas Watters, Rishabh Kabra, Irina Higgins, Matt Botvinick, and Alexander Lerchner. MONet: Unsupervised Scene Decomposition and Representation. CoRR, abs/1901.11390, 2019.
+Michael Burke, Svetlin Penkov, and Subramanian Ramamoorthy. From explanation to synthesis: Compositional program induction for learning from demonstration. Robotics: Science and Systems (R:SS), 2019.
+Arunkumar Byravan, Felix Leeb, Franziska Meier, and Dieter Fox. SE3-Pose-Nets: Structured Deep Dynamics Models for Visuomotor Planning and Control. In ICRA, 2018.
+Michael B Chang, Tomer Ullman, Antonio Torralba, and Joshua B Tenenbaum. A Compositional Object-Based Approach to Learning Physical Dynamics. In ICLR, 2017.
+Ricky T Q Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural Ordinary Differential Equations. In NIPS, 2018.
+Jonas Degrave, Michiel Hermans, Joni Dambre, and Francis wyffels. A Differentiable Physics Engine for Deep Learning in Robotics. CoRR, 2016.
+Marc Deisenroth and Carl E Rasmussen. *Pilco: A model-based and data-efficient approach to policy search*. In ICML, 2011.
+Sebastien Ehrhardt, Aron Monszpart, Andrea Vedaldi, and Niloy Mitra. Unsupervised Intuitive Physics from Visual Observations. CoRR, abs/1805.08095, 2018.
+Kevin Ellis, Daniel Ritchie, Armando Solar-Lezama, and Joshua B. Tenenbaum. Learning to Infer Graphics Programs from Hand-Drawn Images. In NIPS, 2018.
+S M Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, Koray Kavukcuoglu, and Geoffrey E Hinton. Attend, Infer, Repeat: Fast Scene Understanding with Generative Models. In NIPS, 2016.
+Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised Learning for Physical Interaction through Video Prediction. In NIPS, 2016.
+Marco Fraccaro, Simon Kamronn, Ulrich Paquet, and Ole Winther. A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning. In NIPS, 2017.
+Katerina Fragkiadaki, Pulkit Agrawal, Sergey Levine, and Jitendra Malik. Learning Visual Predictive Models of Physics for Playing Billiards. In ICLR, 2016.
+Kunihiko Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological cybernetics, 36(4):193-202, 1980.
+Klaus Greff, Raphaël Lopez Kaufman, Rishabh Kabra, Nick Watters, Chris Burgess, Daniel Zoran, Loic Matthey, Matthew Botvinick, and Alexander Lerchner. Multi-Object Representation Learning with Iterative Variational Inference. In ICML, 2019.
+Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. ICML, 2019.
+Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky. Neural Networks for Machine Learning, Lecture 6a: Overview of mini-batch gradient descent. Coursera lecture slides, 2012.
+Geoffrey E. Hinton, Alex Krizhevsky, and Sida D. Wang. Transforming auto-encoders. In ICANN, pp. 44-51, 2011.
+
+Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735-1780, 1997.
+Jun-Ting Hsieh, Bingbin Liu, De-An Huang, Li Fei-Fei, and Juan Carlos Niebles. Learning to Decompose and Disentangle Representations for Video Prediction. In NIPS, 2018.
+Jonathan Huang and Kevin Murphy. Efficient Inference in Occlusion-Aware Generative Models of Images. In ICLR Workshop, 2016.
+Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial Transformer Networks. In NIPS, 2015.
+Michael Janner, Sergey Levine, William T Freeman, Joshua B Tenenbaum, Chelsea Finn, and Jiajun Wu. Reasoning About Physical Interactions with Object-Oriented Prediction and Planning. In ICLR, 2019.
+Rico Jonschkowski, Roland Hafner, Jonathan Scholz, and Martin Riedmiller. PVEs: Position-Velocity Encoders for Unsupervised Learning of Structured State Representations. CoRR, abs/1705.09805, 2017.
+Adam R Kosiorek, Hyunjik Kim, Ingmar Posner, and Yee Whye Teh. Sequential Attend, Infer, Repeat: Generative Modelling of Moving Objects. In NIPS, 2018.
+Tejas D Kulkarni, Will Whitney, Pushmeet Kohli, and Joshua B Tenenbaum. Deep Convolutional Inverse Graphics Network. In NIPS, 2015.
+Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. *ICLR*, 2016.
+Horia Mania, Aurelia Guy, and Benjamin Recht. Simple random search provides a competitive approach to reinforcement learning. NIPS, 2018.
+Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In ICML, 2016.
+Svetlin Penkov and Subramanian Ramamoorthy. Learning programmatically structured representations with perceptor gradients. *ICLR*, 2019.
+Danilo J. Rezende, Shakir Mohamed, Ivo Danihelka, Karol Gregor, and Daan Wierstra. One-Shot Generalization in Deep Generative Models. In ICML, 2016.
+Lukasz Romaszko, Christopher K I Williams, Pol Moreno, and Pushmeet Kohli. Vision-as-Inverse-Graphics: Obtaining a Rich 3D Explanation of a Scene from a Single Image. In ICCV, 2017.
+Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. In MICCAI, 2015.
+Reuven Y. Rubinstein. Optimization of computer simulation models with rare events. *EJOR*, 1997.
+Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised Learning of Video Representations using LSTMs. In ICML, 2015.
+Russell Stewart and Stefano Ermon. Label-Free Supervision of Neural Networks with Physics and Domain Knowledge. In AAAI, 2017.
+Tijmen Tieleman. Optimizing Neural Networks that Generate Images. PhD thesis, 2014.
+Sjoerd van Steenkiste, Michael Chang, Klaus Greff, and Jurgen Schmidhuber. Relational Neural Expectation Maximization: Unsupervised Discovery of Objects and their Interactions. In ICLR, 2018.
+Nicholas Watters, Andrea Tacchetti, Théophane Weber, Razvan Pascanu, Peter Battaglia, and Daniel Zoran. Visual Interaction Networks: Learning a Physics Simulator from Video. In NIPS, 2017.
+
+Nicholas Watters, Loic Matthey, Matko Bosnjak, Christopher P. Burgess, and Alexander Lerchner. COBRA: Data-Efficient Model-Based RL through Unsupervised Object Discovery and Curiosity-Driven Exploration. CoRR, abs/1905.09275, 2019a.
+Nicholas Watters, Loic Matthey, Christopher P. Burgess, and Alexander Lerchner. Spatial Broadcast Decoder: A Simple Architecture for Learning Disentangled Representations in VAEs. CoRR, 2019b.
+Ronald J. Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 8:229-256, 1992.
+Jiajun Wu, Ilker Yildirim, J.J. Lim, W.T. Freeman, and J.B. Tenenbaum. Galileo: Perceiving Physical Object Properties by Integrating a Physics Engine with Deep Learning. In NIPS, 2015.
+Jiajun Wu, Joseph J Lim, Hongyi Zhang, Joshua B Tenenbaum, and William T Freeman. Physics 101: Learning physical object properties from unlabeled videos. In BMVC, 2016.
+Jiajun Wu, Joshua B Tenenbaum, and Pushmeet Kohli. Neural Scene De-rendering. In CVPR, 2017a.
+Jiajun Wu, Erika Lu, Pushmeet Kohli, William T Freeman, and Joshua B Tenenbaum. Learning to See Physics via Visual De-animation. In NIPS, 2017b.
+Zhenjia Xu, Zhijian Liu, Chen Sun, Kevin Murphy, William T Freeman, Joshua B Tenenbaum, and Jiajun Wu. Unsupervised Discovery of Parts, Structure, and Dynamics. In ICLR, 2019.
+David Zheng, Vinson Luo, Jiajun Wu, and Joshua B Tenenbaum. Unsupervised Learning of Latent Physical Properties Using Perception-Prediction Networks. In UAI, 2018.
+Guangxiang Zhu, Zhiao Huang, and Chongjie Zhang. Object-Oriented Dynamics Predictor. In NIPS, 2018.
+
+# A SYSTEM DESCRIPTIONS
+
+In this section we describe the equations of motion used for each system.
+
+2-balls and 2-digits spring The force applied on object $i$ by object $j$ follows Hooke's law:
+
+$$
+\vec {F} _ {i, j} = - k \left(\vec {p} _ {i} - \vec {p} _ {j} - l \frac {\vec {p} _ {i} - \vec {p} _ {j}}{\left| \vec {p} _ {i} - \vec {p} _ {j} \right|}\right). \tag {6}
+$$
+
+Each step corresponds to an interval $\Delta t = 0.3$ .
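For concreteness, this spring force (Hooke's law with spring constant $k$ and equilibrium distance $l$, zero at separation $l$) can be written as a small helper. This is an illustrative NumPy sketch; `spring_force` is a hypothetical name, not code from the paper.

```python
import numpy as np

def spring_force(p_i, p_j, k=1.0, l=1.0):
    """Force on object i from object j for a spring with constant k and
    equilibrium length l: zero at separation l, restoring otherwise."""
    d = p_i - p_j
    dist = np.linalg.norm(d)
    return -k * (dist - l) * d / dist
```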
+
+3-balls gravity The force applied on object $i$ by object $j$ follows Newton's law of gravity:
+
+$$
+\vec {F} _ {i, j} = - g m _ {i} m _ {j} \frac {\vec {p} _ {i} - \vec {p} _ {j}}{| \vec {p} _ {i} - \vec {p} _ {j} | ^ {3}} \tag {7}
+$$
+
+where the masses are set to 1. Each step corresponds to an interval $\Delta t = 0.5$ .
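The net force on each object sums this pairwise interaction over all other objects. A minimal NumPy sketch (the function name and the O(N²) loop are illustrative, not the paper's implementation):

```python
import numpy as np

def gravity_forces(p, g=1.0, m=None):
    """Net gravitational force on each of N objects per Eq. 7.
    p: (N, 2) array of positions; m: (N,) masses (default all 1)."""
    n = len(p)
    m = np.ones(n) if m is None else m
    F = np.zeros_like(p, dtype=float)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = p[i] - p[j]
                F[i] += -g * m[i] * m[j] * d / np.linalg.norm(d) ** 3
    return F
```

For two unit masses one unit apart, each feels a force of magnitude $g$ toward the other, and the forces cancel in sum (Newton's third law).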
+
+Pendulum The pendulum follows the equations used by the OpenAI Gym environment:
+
+$$
+\vec {F} = - \frac {3}{2} g \sin (\theta + \pi) + 3 u \tag {8}
+$$
+
+where $u$ is the action. Each step corresponds to an interval $\Delta t = 0.05$ . In the physics engine used by the model we introduce an extra actuation coefficient $a$ to be learned along with $g$ :
+
+$$
+\vec {F} = - \frac {3}{2} g \sin (\theta + \pi) + a \cdot u \tag {9}
+$$
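A single integration step of this actuated pendulum can be sketched as follows. This assumes the Gym pendulum's semi-implicit Euler update (velocity updated before position) with unit mass and length; the function name and defaults are illustrative.

```python
import numpy as np

def pendulum_step(theta, theta_dot, u, g=10.0, a=1.0, dt=0.05):
    """Semi-implicit Euler step of Eq. 9 with unit mass and length:
    angular acceleration = -(3/2) g sin(theta + pi) + a * u."""
    theta_ddot = -1.5 * g * np.sin(theta + np.pi) + a * u
    theta_dot = theta_dot + theta_ddot * dt   # update velocity first
    theta = theta + theta_dot * dt            # then position
    return theta, theta_dot
```

In the learned physics engine, `g` and `a` would be trainable parameters fit from video.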
+
+# B TRAINING DETAILS AND HYPERPARAMETERS
+
+For all datasets we use RMSProp (Hinton et al., 2012) with an initial learning rate of $3 \times 10^{-4}$ . For the balls and digits datasets we train for 500 epochs with $\alpha = 2$ , and divide the learning rate by 5 after 375 epochs. For the pendulum data we train for 1000 epochs using $\alpha = 3$ , but divide the learning rate by 5 after 500 epochs. The image sizes are $32 \times 32$ for the 2-balls bouncing and spring, $36 \times 36$ for the 3-balls gravity, $64 \times 64$ for the 2-digits spring, and $64 \times 64$ grayscale for the pendulum.
+
+The content and mask variables are the output of a neural network with a constant array of 1s as input and 1 hidden layer with 200 units and tanh activation. We found this easier to train rather than having the contents and masks as trainable variables themselves.
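This amounts to reparameterizing the template variables through a small network. A minimal sketch, assuming a 1-hidden-layer tanh MLP as described (all names, shapes, and initializations here are illustrative assumptions):

```python
import numpy as np

def make_template_generator(out_dim, hidden=200, seed=0):
    """Content/mask generator: a 1-hidden-layer tanh MLP applied to a
    constant input of 1s, so the templates are outputs of trainable
    weights rather than free variables."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.1, (1, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, out_dim))
    b2 = np.zeros(out_dim)
    def forward():
        h = np.tanh(np.ones((1, 1)) @ W1 + b1)  # constant array of 1s as input
        return (h @ W2 + b2).reshape(-1)
    return forward

gen = make_template_generator(out_dim=32 * 32 * 4)  # e.g. RGB content + mask
template = gen()
```

During training, gradients of the reconstruction loss flow into `W1`, `b1`, `W2`, `b2`, so the emitted template changes even though the input is constant.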
+
+# C ADDITIONAL ROLLOUTS FOR EACH DATASET
+
+
+3-BALLS GRAVITY
+
+
+2-BALLS SPRING
+
+
+2-BALLS BOUNCING
+
+
+2-DIGITS SPRING
+
+# D EXTRAPOLATION TO UNSEEN IMAGE REGIONS
+
+One limitation of standard fully-connected or deconvolutional decoders is the inability to decode states corresponding to object poses or locations not seen during training. For example, if in the training set no objects appear in the bottom half of the image, a fully-connected decoder will simply learn to output zeros in that region. If in the test set objects move into the bottom half of the image, the decoder lacks the inductive bias necessary to correctly extrapolate in image space.
+
+To test this hypothesis, we replaced our model's decoder with a Deconv and a Spatial Broadcast (Watters et al., 2019b) decoder, and compared them in a spatial extrapolation experiment. In this experiment, objects never enter the bottom half of the image in the input and prediction range, though in the extrapolation range of the test set objects move into this region of the scene. In the rollouts shown in Figure 6, Broadcast performs better than Deconv, but both fail to maintain object integrity when the balls move into the bottom half of the image in the extrapolation steps, validating our hypothesis that a black-box decoder has insufficient extrapolation ability. In contrast, our rendering decoder is able to correctly decode states not seen during training.
+
+In the limit that our renderer corresponds to a full-blown graphics engine, any pose, location, color, etc. not seen during training can still be rendered correctly. This property gives models using rendering decoders, such as ours and Hsieh et al. (2018), an important advantage in terms of data efficiency. We note, however, that in general this advantage does not extend to correctly inferring states from images whose objects are located in regions not seen during training, because the encoders used are typically composed simply of convolutional and fully-connected layers, with limited de-rendering inductive biases.
+
+
+Figure 6: Comparison between graphics decoder and two black-box decoders, trained on data where objects only appear in the top half of the scene. Only the graphics decoder is able to correctly render the objects in the bottom half of the scene at test time. Broadcast: spatial broadcast decoder (Watters et al., 2019b); Deconv: standard deconvolutional network.
+
+# E INCORRECT NUMBER OF OBJECT SLOTS
+
+The model proposed assumes we know the number of objects present in the scene. Here we briefly explore how the model behaves when we use an incorrect number of slots $N$ . We use the gravitational system, since interaction forces between objects are easy to generalize for any $N$ . Fig. 7, left, shows that when using only 2 object slots, two of the objects are found, since the model does not have the capacity to find more. Fig. 7, right, shows that when using more slots than the number of objects in the scene, all objects are discovered, and the extra slots are left empty. However, in both cases we found predictive performance to be subpar: in one case objects needed to correctly infer the interactions are missing, and in the other there are interactions between object slots and empty slots, confusing the dynamics.
+
+
+Figure 7: Results for an incorrect number of object slots in the physics engine for the 3-body gravitational system. Left: Contents and masks learned for 2 object slots. Right: Contents and masks learned for 4 object slots.
\ No newline at end of file
diff --git a/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/images.zip b/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..14bb54f1dd3d357e1b2e5fa2bc9d1e312feddf8e
--- /dev/null
+++ b/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a4559d7f97947af65797316314c364f4f76a1080540ef0453113ad8460c9c86
+size 683009
diff --git a/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/layout.json b/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..cbfd31d880d65c7b8353911ef68dea5cf63e8d2e
--- /dev/null
+++ b/physicsasinversegraphicsunsupervisedphysicalparameterestimationfromvideo/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c08e034fde4fd5426939ed4f9c9ade5560b7221b01903250f5dad10608487052
+size 448765
diff --git a/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/e3e7ae5a-3865-4c05-bde0-f73d771090db_content_list.json b/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/e3e7ae5a-3865-4c05-bde0-f73d771090db_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..3fb6253e86ff2e5659f1a2cf4997989ae16e9151
--- /dev/null
+++ b/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/e3e7ae5a-3865-4c05-bde0-f73d771090db_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96ca5fd0c2cbdf18957d8886ffe588241c1fdcad78ac6e0f45215e3ac39a90b3
+size 94045
diff --git a/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/e3e7ae5a-3865-4c05-bde0-f73d771090db_model.json b/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/e3e7ae5a-3865-4c05-bde0-f73d771090db_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d9f1587400c41e17593fd1602819142affd8d874
--- /dev/null
+++ b/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/e3e7ae5a-3865-4c05-bde0-f73d771090db_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:faf3e7ed3b48f2ec5c2c3045db6321afe64b25b9ca9e4efb405c23840a8bf5af
+size 113056
diff --git a/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/e3e7ae5a-3865-4c05-bde0-f73d771090db_origin.pdf b/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/e3e7ae5a-3865-4c05-bde0-f73d771090db_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6d61278ecdf6b8b7da85de5a411b743adbd9b8f7
--- /dev/null
+++ b/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/e3e7ae5a-3865-4c05-bde0-f73d771090db_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9875b7b1defe4cf45af3de01d6e9d93f543749ec51e4462fcdb0123d31394ca4
+size 1434185
diff --git a/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/full.md b/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..fa73a8717c40d771925e8c8b1c86bf9a0009df4f
--- /dev/null
+++ b/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/full.md
@@ -0,0 +1,412 @@
+# PHYSICS-AWARE DIFFERENCE GRAPH NETWORKS FOR SPARSELY-OBSERVED DYNAMICS
+
+Sungyong Seo\*, Chuizheng Meng\*, Yan Liu
+
+Department of Computer Science
+
+University of Southern California
+
+{sungyons,chuizhem,yanliu.cs}@usc.edu
+
+# ABSTRACT
+
+Sparsely available data points cause numerical error on finite differences which hinders us from modeling the dynamics of physical systems. The discretization error becomes even larger when the sparse data are irregularly distributed or defined on an unstructured grid, making it hard to build deep learning models to handle physics-governing observations on the unstructured grid. In this paper, we propose a novel architecture, Physics-aware Difference Graph Networks (PA-DGN), which exploits neighboring information to learn finite differences inspired by physics equations. PA-DGN leverages data-driven end-to-end learning to discover underlying dynamical relations between the spatial and temporal differences in given sequential observations. We demonstrate the superiority of PA-DGN in the approximation of directional derivatives and the prediction of graph signals on the synthetic data and the real-world climate observations from weather stations.
+
+# 1 INTRODUCTION
+
+Modeling real world phenomena, such as climate observations, traffic flow, physics and chemistry simulation (Li et al., 2018; Geng et al., 2019; Long et al., 2018; de Bezenac et al., 2018; Sanchez-Gonzalez et al., 2018; Gilmer et al., 2017), is important but extremely challenging. While deep learning has achieved remarkable successes in prediction tasks by learning latent representations from data-rich applications such as image recognition (Krizhevsky et al., 2012), text understanding (Wu et al., 2016), and speech recognition (Hinton et al., 2012), we confront many challenging scenarios in modeling natural phenomena with deep neural networks when only a limited number of observations are available. Particularly, the sparsely available data points cause substantial numerical error when we utilize existing finite difference operators and the limitation requires a more principled way to redesign deep learning models.
+
+While many methods have been proposed to model physics-simulated observations using deep learning, many of them are designed under the assumption that input is on a continuous domain. For example, Raissi et al. (2017a;b) proposed physics-informed neural networks (PINNs) to learn nonlinear relations between input (spatial- and temporal-coordinates $(x,t)$ ) and output simulated with a given partial differential equation (PDE). Since Raissi et al. (2017a;b) use the coordinates as input and compute derivatives based on the coordinates to represent the equation, the setting is only valid when the data are densely observed over spatial and temporal space.
+
+Prior knowledge related to physics equations has been combined with data-driven models for various purposes. Chen et al. (2015) proposed a nonlinear diffusion process for image restoration, and de Bezenac et al. (2018) incorporated transport physics (the advection-diffusion equation) with deep neural networks for forecasting sea surface temperature by extracting the motion field. Lutter et al. (2019) introduced deep Lagrangian networks specialized to learn Lagrangian mechanics with learnable parameters. Seo & Liu (2019) proposed a physics-informed regularizer to impose data-specific physics equations. What the methods in Chen et al. (2015); de Bezenac et al. (2018); Lutter et al. (2019) have in common is that they are not efficiently applicable to sparsely discretized input, as only a small number of data points are available and continuous properties of the given space are not easily recovered. It is unsuitable to directly use continuous differential operators to describe local behaviors because it is hard to approximate continuous derivatives precisely from sparse points (Shewchuk, 2002; Amenta & Kil, 2004; Luo et al., 2009). Furthermore, these methods are only applicable when specific physics equations are explicitly given, and are hard to generalize to incorporate other types of equations.
+
+In another direction for modeling physics-simulated data, Long et al. (2018) proposed PDE-Net, which uncovers the underlying hidden PDEs and predicts the dynamics of complex systems. Ruthotto & Haber (2018) derived new CNN architectures, parabolic and hyperbolic CNNs, based on the ResNet (He et al., 2016) architecture and motivated by PDE theory. While Long et al. (2018); Ruthotto & Haber (2018) are flexible enough to uncover hidden physics through constrained kernels, they remain restricted to a regular grid, where the proposed constraints on the learnable filters are easily defined.
+
+Reasoning about the physical dynamics of discrete objects has been actively studied (Sanchez-Gonzalez et al., 2018; Battaglia et al., 2016; Chang et al., 2016) with the advent of graph-based neural networks (Kipf & Welling, 2017; Santoro et al., 2017; Gilmer et al., 2017). Although these models can handle sparsely located data points without explicitly given physics equations, they are purely data-driven, so the physics-inspired inductive bias of exploiting finite differences is not considered at all. In contrast, our method consists of physics-aware modules that efficiently leverage this inductive bias to learn spatiotemporal data from a physical system.
+
+In this paper, we propose physics-aware difference graph networks (PA-DGN), whose architecture is designed to leverage differences of sparsely available data from physical systems. The differences are particularly important since most physics-related dynamic equations (e.g., the Navier-Stokes equations) involve differences of physical quantities in space and time rather than the quantities themselves. Inspired by this property, we first propose the spatial difference layer (SDL) to efficiently learn local representations by aggregating neighboring information over the sparse data points. The layer is based on graph networks (GN), as they easily leverage structural features to learn localized representations and share the parameters for computing them. The layer is then followed by recurrent graph networks (RGN) to predict the temporal difference, another core component of physics-related dynamic equations. PA-DGN is applicable to various tasks, and we present two representative ones: the approximation of directional derivatives and the prediction of graph signals.
+
+Our contributions are:
+
+- We tackle the numerical error caused by sparsely discretized data when modeling physical systems by proposing the spatial difference layer (SDL), which efficiently exploits neighboring information despite the limited number of observable points.
+- We combine SDL with recurrent graph networks to build PA-DGN which automatically learns the underlying spatiotemporal dynamics in graph signals.
+- We verify that PA-DGN is effective in approximating directional derivatives and predicting graph signals in synthetic data. Then, we conduct exhaustive experiments to predict climate observations from land-based weather stations and demonstrate that PA-DGN outperforms other baselines.
+
+# 2 PHYSICS-AWARE DIFFERENCE GRAPH NETWORK
+
+In this section, we introduce the building module used to learn spatial differences of graph signals and describe how the module is used to predict signals in the physics system.
+
+# 2.1 DIFFERENCE OPERATORS ON GRAPH
+
+As approximations of derivatives in the continuous domain, difference operators have played a core role in computing numerical solutions of (continuous) differential equations. Since it is hard to derive closed-form expressions of derivatives for real-world data, difference operators have served as practical alternatives to describe and solve PDEs. The operators are especially important for physics-related data (e.g., meteorological observations) because the governing rules behind the observations are mostly differential equations.
+
+Figure 1: Examples of difference operators applied to a graph signal: (a) original graph signals, (b) detected edges, (c) sharpened signals, (d) modulated gradients. Filters used for the processing are (b) $\sum_{j}(f_{i} - f_{j})$, (c) $\sum_{j}(1.1f_{i} - f_{j})$, (d) $f_{j} - 0.5f_{i}$.
+
+Graph signals Given a graph $\mathcal{G} = (\mathbb{V},\mathbb{E})$ where $\mathbb{V}$ is a set of vertices $\mathbb{V} = \{1,\dots ,N_v\}$ and $\mathbb{E}$ is a set of edges $\mathbb{E}\subseteq \{(i,j)|i,j\in \mathbb{V}\}$ $(|\mathbb{E}| = N_e)$ , graph signals on all nodes at time $t$ are $f(t) = \{f_i(t)\mid i\in \mathbb{V}\}$ where $f_{i}:\mathbb{V}\to \mathbb{R}$ . Graph signals on edges can also be defined similarly, $F(t) = \{F_{ij}(t)\mid (i,j)\in \mathbb{E}\}$ where $F_{ij}:\mathbb{E}\rightarrow \mathbb{R}$ . Both signals can be multidimensional.
+
+Gradient on graph The gradient $(\nabla)$ of a function on nodes of a graph is represented by finite difference
+
+$$
+\nabla : L^{2}(\mathbb{V}) \to L^{2}(\mathbb{E}), \quad (\nabla f)_{ij} = (f_{j} - f_{i}) \quad \text{if } (i, j) \in \mathbb{E} \text{ and } 0 \text{ otherwise},
+$$
+
+where $L^2 (\mathbb{V})$ and $L^2 (\mathbb{E})$ denote Hilbert spaces for node/edge functions, respectively. The gradients on a graph provide finite differences of graph signals and they become edge $(i,j)$ features.
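As a quick sanity check, the graph gradient above can be computed in a few lines; the node values and edge list below are illustrative toy data, not from the paper.

```python
import numpy as np

def graph_gradient(f, edges):
    """(grad f)_ij = f_j - f_i for every edge (i, j); zero for non-edges."""
    return {(i, j): f[j] - f[i] for (i, j) in edges}

f = np.array([1.0, 3.0, 2.0])        # signal on 3 nodes
edges = [(0, 1), (1, 2), (0, 2)]
grad = graph_gradient(f, edges)      # edge features, one per edge
```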
+
+Laplace-Beltrami operator Laplace-Beltrami operator (or Laplacian, $\Delta$ ) in graph domain is defined as
+
+$$
+\Delta : L^{2}(\mathbb{V}) \to L^{2}(\mathbb{V}), \qquad (\Delta f)_{i} = \sum_{j:(i, j) \in \mathbb{E}} \left(f_{i} - f_{j}\right) \quad \forall i \in \mathbb{V},
+$$
+
+This operator is often written in matrix form in the literature, $\pmb{L} = \pmb{D} - \pmb{A}$, where $\pmb{A}$ is the adjacency matrix and $\pmb{D} = \mathrm{diag}(\sum_{j:j\neq i}\pmb{A}_{ij})$ is the degree matrix.
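The matrix form can be verified on a toy graph (a 3-node path, chosen purely for illustration): applying $L = D - A$ to a node signal reproduces $(\Delta f)_i = \sum_j (f_i - f_j)$.

```python
import numpy as np

# Adjacency of a 3-node path graph 0-1-2 (toy example, not from the paper).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # graph Laplacian

f = np.array([1.0, 3.0, 2.0])
lap = L @ f                  # per node: sum over neighbors of (f_i - f_j)
```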
+
+# 2.2 DIFFERENCE OPERATORS ON TRIANGULATED MESH
+
+According to Crane (2018), the gradient and Laplacian operators on the triangulated mesh can be discretized by incorporating the coordinates of nodes. To obtain the gradient operator, the per-face gradient of each triangular face is calculated first. Then, the gradient on each node is the area-weighted average of all its neighboring faces, and the gradient on edge $(i,j)$ is defined as the dot product between the per-node gradient and the direction vector $e_{ij}$ . The Laplacian operator can be discretized with Finite Element Method (FEM):
+
+$$
+(\Delta f) _ {i} = \frac {1}{2} \sum_ {j: (i, j) \in \mathbb {E}} \left(\cot \alpha_ {j} + \cot \beta_ {j}\right) \left(f _ {j} - f _ {i}\right),
+$$
+
+where node $j$ belongs to node $i$ 's immediate neighbors $(j \in \mathbb{N}_i)$ and $(\alpha_j, \beta_j)$ are two opposing angles of the edge $(i, j)$ .
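A minimal sketch of the per-edge FEM weight $\frac{1}{2}(\cot\alpha_j + \cot\beta_j)$ above; the equilateral-triangle check is an illustrative assumption, not a computation from the paper.

```python
import math

def cotan_weight(alpha, beta):
    """FEM Laplacian coefficient for one edge: (cot(alpha) + cot(beta)) / 2,
    where alpha, beta are the two angles opposite the edge."""
    return 0.5 * (1.0 / math.tan(alpha) + 1.0 / math.tan(beta))

# In an equilateral triangulation both opposing angles are 60 degrees,
# so every edge weight equals cot(60 deg) = 1/sqrt(3).
w = cotan_weight(math.pi / 3, math.pi / 3)
```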
+
+# 2.3 SPATIAL DIFFERENCE LAYER
+
+While the difference operators generalize to Riemannian manifolds (Lai et al., 2013; Lim, 2015), they incur numerical errors relative to their continuous-space counterparts, and the error grows when nodes are spatially far from their neighbors, because the connected nodes $(j\in \mathbb{N}_i)$ of the $i$-th node fail to represent local features around it. The error is even larger when the available data points are sparsely distributed (e.g., sensor-based observations). In other words, the difference operators are unlikely to discover meaningful spatial variations behind sparse observations since they are restricted to immediate neighboring information only. To mitigate this limitation, we propose the spatial difference layer (SDL), which consists of a set of parameters defining learnable
+
+
+Figure 2: Physics-aware Difference Graph Networks for graph signal prediction. Blue boxes have learnable parameters and all parameters are trained through end-to-end learning. The nodes/edges can be multidimensional.
+
+difference operators as a form of gradient and Laplacian to fully utilize neighboring information:
+
+$$
+\left(^ {w} \nabla f\right) _ {i j} = w _ {i j} ^ {\left(g _ {1}\right)} \left(f _ {j} - w _ {i j} ^ {\left(g _ {2}\right)} f _ {i}\right), \quad \left(^ {w} \Delta f\right) _ {i} = \sum_ {j: (i, j) \in \mathbb {E}} w _ {i j} ^ {\left(l _ {1}\right)} \left(f _ {i} - w _ {i j} ^ {\left(l _ {2}\right)} f _ {j}\right) \tag {1}
+$$
+
+where the $w_{ij}$ are parameters tuning the difference operators along the corresponding edge direction $e_{ij}$. The two forms (Eq 1) are associated with edge and node features, respectively. The superscript in $^{w}\nabla$ and $^{w}\Delta$ denotes that the difference operators are functions of the learnable parameters $w$. $w_{ij}^{(g)}$ and $w_{ij}^{(l)}$ are obtained by integrating local information as follows:
+
+$$
+w_{ij} = g\left(\left\{f_{k}, F_{mn} \mid k, (m, n) \in h\text{-hop neighborhood of edge } (i, j)\right\}\right) \tag{2}
+$$
+
+While the standard difference operators consider only the two connected nodes ($i$ and $j$) of each edge $(i, j)$, Eq 2 uses a larger view ($h$-hop) to represent the differences between nodes $i$ and $j$. Since graph networks (GN) (Battaglia et al., 2018) aggregate neighboring information efficiently, we use a GN for the function $g(\cdot)$, and the $w_{ij}$ are the edge features of the GN output. Eq 2 can be viewed as a higher-order difference equation because nodes/edges that are multiple hops apart are considered.
+
+$w_{ij}$ plays a role similar to that of the parameters in convolution kernels of CNNs. For example, while the standard gradient operator can be regarded as a simple edge-detecting filter, the operator becomes a sharpening filter if $w_{ij}^{(g_1)} = 1$ and $w_{ij}^{(g_2)} = \frac{|\mathbb{N}_i| + 1}{|\mathbb{N}_i|}$ for node $i$ and the operators over each edge are summed. In other words, by modulating $w_{ij}$, the layer readily extends to conventional kernels, including edge-detecting and sharpening filters, and even to more complicated kernels. On top of $w_{ij}$, the difference forms in Eq 1 intentionally base the optimization of the learnable parameters on differences rather than raw values. Eq 1 thus naturally provides a physics-inspired inductive bias that is particularly effective for modeling physics-related observations. Furthermore, the number of channels of $w_{ij}^{(g)}$ and $w_{ij}^{(l)}$ can easily be increased for more expressiveness. Figure 1 illustrates how the exemplary filters convolve the given graph signals.
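The learnable forms in Eq 1 can be sketched as follows; the scalar weight dictionaries stand in for the GN-predicted parameters $w_{ij}^{(g_1)}, w_{ij}^{(g_2)}, w_{ij}^{(l_1)}, w_{ij}^{(l_2)}$, and with all weights fixed to 1 the forms reduce to the standard gradient and Laplacian.

```python
import numpy as np

def sdl_gradient(f, edges, w_g1, w_g2):
    """Learnable gradient of Eq 1: w_g1[ij] * (f_j - w_g2[ij] * f_i), per edge."""
    return {(i, j): w_g1[(i, j)] * (f[j] - w_g2[(i, j)] * f[i])
            for (i, j) in edges}

def sdl_laplacian(f, edges, w_l1, w_l2):
    """Learnable Laplacian of Eq 1: sum_j w_l1[ij] * (f_i - w_l2[ij] * f_j), per node."""
    lap = np.zeros_like(f)
    for (i, j) in edges:
        lap[i] += w_l1[(i, j)] * (f[i] - w_l2[(i, j)] * f[j])
    return lap

f = np.array([1.0, 2.0])
edges = [(0, 1), (1, 0)]
ones = {e: 1.0 for e in edges}
grad = sdl_gradient(f, edges, ones, ones)   # reduces to f_j - f_i
lap = sdl_laplacian(f, edges, ones, ones)   # reduces to sum_j (f_i - f_j)
```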
+
+# 2.4 RECURRENT GRAPH NETWORKS
+
+Difference graph Once the modulated spatial differences $(^w\nabla f(t),^w\Delta f(t))$ are obtained, they are concatenated with the current signals $f(t)$ to construct node-wise $(z_{i})$ and edge-wise $(z_{ij})$ features, and the resulting graph, called a difference graph, includes all the information needed to describe spatial variations.
+
+Recurrent graph networks Given a snapshot $(f(t), F(t))$ of a sequence of graph signals, one difference graph is obtained and used to predict the next graph signals. While a non-linear layer could be used to combine the learned spatial differences for prediction, it would only discover spatial relations among the features in the difference graph. Since many equations describing physics-related phenomena are non-static (e.g., the Navier-Stokes equations), we adopt recurrent graph networks (RGN) (Sanchez-Gonzalez et al., 2018) with a graph state $\mathcal{G}_h$ as input to combine the spatial differences with temporal variations. RGN returns a graph state $(\mathcal{G}_h^* = (\pmb{h}^{*(v)},\pmb{h}^{*(e)}))$ and the next graph signals $z_{i}^{*}$ and $z_{ij}^{*}$. The update rule is as follows:
+
+1. $(\pmb{z}_{ij}^{*},\pmb{h}^{*(e)})\gets \phi^{e}(\pmb{z}_{ij},\pmb {z}_{i},\pmb{z}_{j},\pmb{h}^{(e)})$ for all $(i,j)\in \mathbb{E}$ pairs,
+2. $(\pmb{z}_i^*,\pmb{h}^{*(v)})\gets \phi^v (\pmb{z}_i,\bar{\pmb{z}}_i',\pmb{h}^{(v)})$ for all $i\in \mathbb{V}$, where $\bar{\pmb{z}}_i'$ is an aggregated edge attribute related to node $i$,
+
+where $\phi^e, \phi^v$ are edge and node update functions, respectively, and they can be any recurrent unit. Finally, the prediction is made through a decoder by feeding the graph signal, $z_i^*$ and $z_{ij}^*$ .
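A minimal numerical sketch of the two-step update above; plain sums stand in for the recurrent units $\phi^e$ and $\phi^v$ purely to show the data flow (edge update, per-node aggregation, node update), not the actual learnable modules.

```python
import numpy as np

def rgn_step(z_node, z_edge, edges, h_e, h_v):
    """One RGN-style update with sums in place of the recurrent units."""
    # 1. edge update: stand-in for phi^e(z_ij, z_i, z_j, h^(e))
    new_edge = {(i, j): z_edge[(i, j)] + z_node[i] + z_node[j] + h_e[(i, j)]
                for (i, j) in edges}
    # aggregate updated edge attributes per node (mean over incident edges)
    agg = np.zeros_like(z_node)
    cnt = np.zeros_like(z_node)
    for (i, j) in edges:
        agg[i] += new_edge[(i, j)]
        cnt[i] += 1
    agg /= np.maximum(cnt, 1)
    # 2. node update: stand-in for phi^v(z_i, aggregated edges, h^(v))
    new_node = z_node + agg + h_v
    return new_node, new_edge

z_node = np.array([1.0, 2.0])
z_edge = {(0, 1): 0.5}
new_node, new_edge = rgn_step(z_node, z_edge, [(0, 1)],
                              h_e={(0, 1): 0.0}, h_v=np.zeros(2))
```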
+
+Learning objective Let $\hat{f}$ and $\hat{F}$ denote predictions of the target node/edge signals. PA-DGN is trained by minimizing the following objective:
+
+$$
+\mathcal {L} = \sum_ {i \in \mathbb {V}} | | f _ {i} - \hat {f} _ {i} | | ^ {2} + \sum_ {(i, j) \in \mathbb {E}} | | F _ {i j} - \hat {F} _ {i j} | | ^ {2}. \tag {3}
+$$
+
+For multistep predictions, $\mathcal{L}$ is summed over all prediction steps. If only one type of signal (node or edge) is given, the corresponding term in Eq 3 is used to optimize the parameters of SDL and RGN simultaneously.
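The objective in Eq 3 is straightforward to sketch; the edge term is included only when edge targets exist, mirroring the single-signal case described above.

```python
import numpy as np

def padgn_loss(f, f_hat, F=None, F_hat=None):
    """Eq 3: squared error over node signals, plus the edge term when
    edge targets are available."""
    loss = np.sum((np.asarray(f) - np.asarray(f_hat)) ** 2)
    if F is not None and F_hat is not None:
        loss += np.sum((np.asarray(F) - np.asarray(F_hat)) ** 2)
    return float(loss)

node_only = padgn_loss([1.0, 2.0], [1.5, 2.0])                 # node term
with_edges = padgn_loss([1.0, 2.0], [1.5, 2.0], [0.5], [0.0])  # + edge term
```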
+
+# 3 EFFECTIVENESS OF SPATIAL DIFFERENCE LAYER
+
+To investigate whether the proposed spatial difference forms (Eq 1) are beneficial for learning physics-related patterns, we use SDL on two different tasks: (1) approximating directional derivatives and (2) predicting synthetic graph signals.
+
+# 3.1 APPROXIMATION OF DIRECTIONAL DERIVATIVES
+
+As we claimed in Section 2.3, the standard difference forms (gradient and Laplacian) on a graph can easily cause significant numerical error because they are susceptible to the distance between two points and to the variation of the given function. To evaluate the applicability of the proposed SDL, we train it to approximate directional derivatives on a graph. First, we define a synthetic function and its gradients on 2D space and sample 200 points $(x_{i},y_{i})$. Then, we construct a graph on the sampled points with the $k$-NN algorithm $(k = 4)$. With the known gradient $\left(\nabla f = \left(\frac{\partial f}{\partial x},\frac{\partial f}{\partial y}\right)\right)$ at each point (a node in the graph), we can compute directional derivatives by projecting $\nabla f$ onto a connected edge $e_{ij}$ (see Figure 3). We compare against four baselines: (1) the finite gradient (FinGrad), (2) a multilayer perceptron (MLP), (3) graph networks (GN), and (4) a modified form of Eq 1 (One-w). For the finite gradient $((f_j - f_i) / ||\pmb{x}_j - \pmb{x}_i||)$, there is no learnable parameter and only two points are used. For MLP, we feed $(f_{i}, f_{j}, \boldsymbol{x}_{i}, \boldsymbol{x}_{j})$ as input to see whether learnable parameters benefit the approximation. For GN, we use the distances between connected points as edge features and the function values at the points as node features; the edge-feature output of the GN is the prediction for the directional derivative on that edge. Finally, we modify the proposed form as $(^w\nabla f)_{ij} = w_{ij}f_j - f_i$ (One-w). GN and the modified form are used to verify the effectiveness of Eq 1. Note that we define two synthetic functions (Figure 4) with different properties: (1) monotonically increasing from a center and (2) periodically varying.
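The target quantity and the FinGrad baseline can be illustrated for $f_1(x, y) = 0.1x^2 + 0.5y^2$: the exact directional derivative projects the analytic gradient onto the unit edge vector, while FinGrad divides the signal difference by the edge length. The sample points below are arbitrary choices for illustration.

```python
import numpy as np

def exact_directional(grad_i, xi, xj):
    """Project the known gradient at x_i onto the unit edge vector e_ij."""
    e = (xj - xi) / np.linalg.norm(xj - xi)
    return float(grad_i @ e)

def fin_grad(fi, fj, xi, xj):
    """FinGrad baseline: (f_j - f_i) / ||x_j - x_i||."""
    return float((fj - fi) / np.linalg.norm(xj - xi))

f = lambda p: 0.1 * p[0] ** 2 + 0.5 * p[1] ** 2      # f1 from Table 1
grad = lambda p: np.array([0.2 * p[0], 1.0 * p[1]])  # its analytic gradient

xi, xj = np.array([1.0, 1.0]), np.array([1.0, 2.0])  # arbitrary sample points
exact = exact_directional(grad(xi), xi, xj)
approx = fin_grad(f(xi), f(xj), xi, xj)              # overshoots on curvature
```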
+
+
+Figure 3: Directional derivative on graph
+
+Table 1: Mean squared error $\left( {10}^{-2}\right)$ for approximation of directional derivatives.
+
+| Functions | FinGrad | MLP | GN | One-w | SDL |
+| --- | --- | --- | --- | --- | --- |
+| $f_1(x,y) = 0.1x^2 + 0.5y^2$ | 6.42±0.47 | 2.12±0.32 | 1.05±0.42 | 1.41±0.44 | 0.97±0.39 |
+| $f_2(x,y) = \sin(x) + \cos(y)$ | 5.90±0.04 | 2.29±0.77 | 2.17±0.34 | 6.73±1.17 | 1.26±0.05 |
+
+
+Figure 4: Gradients and graph structure of sampled points. Left: the synthetic function is $f_{1}(x,y) = 0.1x^{2} + 0.5y^{2}$ . Right: the synthetic function is $f_{2}(x,y) = \sin (x) + \cos (y)$ .
+
+Approximation accuracy As shown in Table 1, the proposed spatial difference layer outperforms the others by a large margin. As expected, FinGrad yields the largest error since it only considers two points and has no learnable parameters. Learnable parameters significantly improve the approximation of the directional derivatives even with the same input (FinGrad vs. MLP). Note that utilizing neighboring information (GN, One-w, SDL) is generally helpful for learning spatial variations properly. However, simply training the parameters of a GN is not sufficient; explicitly defining differences, which is essential for understanding spatial variations, provides a more robust inductive bias. One important finding is that One-w is not as effective as GN and can even be worse than FinGrad, because of its limited degrees of freedom. As the form $(^w\nabla f)_{ij} = w_{ij}f_j - f_i$ implies, a single $w_{ij}$ only adjusts the relative difference between $f_i$ and $f_j$, which is not enough to express all possible linear combinations of $f_i$ and $f_j$. This unstable performance supports that the form of SDL is not ad hoc but rigorously designed.
+
+# 3.2 GRAPH SIGNAL PREDICTION
+
+We evaluate PA-DGN on synthetic data sampled from the simulation of a specific convection-diffusion equation, to examine whether the proposed model can predict the next signals of the simulated dynamics from observations on discrete nodes only. For the simulated dynamics, we use an equation slightly modified from the one in Long et al. (2018):
+
+$$
+\frac {d f _ {i} (t)}{d t} = a (i) (\nabla f) _ {\hat {x}} + b (i) (\nabla f) _ {\hat {y}} + c (i) \Delta f, \quad f _ {i} (0) = f _ {o} (i), \tag {4}
+$$
+
+where the index $i$ denotes the $i$-th node, whose coordinate is $(x_i, y_i)$ in the 2D space $([0, 2\pi] \times [0, 2\pi])$, and $\hat{x}$ and $\hat{y}$ indicate the $x$- and $y$-directions in the space. $a(i) = 0.5(\cos(y_i) + x_i(2\pi - x_i)\sin(x_i)) + 0.6$, $b(i) = 2(\cos(y_i) + \sin(x_i)) + 0.8$, and $c(i) = 0.5\left(1 - \frac{\sqrt{(x_i - \pi)^2 + (y_i - \pi)^2}}{\sqrt{2}\pi}\right)$. We uniformly sample 250 points in this 2D space. The task is to predict the signal values of all points for $M$ future steps given the observed values of the first $N$ steps. For our experiments, we choose $N = 5$ and $M = 15$. Since there is no a priori graph structure on the sampled points, we construct a graph with the $k$-NN algorithm ($k = 4$) using the Euclidean distance. Figure 5 shows the dynamics and the graph structure of the sampled points.
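The variable coefficients of Eq 4 can be evaluated directly; the diffusion coefficient $c$ here normalizes the distance from the domain center $(\pi, \pi)$ by $\sqrt{2}\pi$, the form used in the appendix.

```python
import numpy as np

def coeffs(x, y):
    """Variable coefficients a, b, c of Eq 4 at node coordinates (x, y)."""
    a = 0.5 * (np.cos(y) + x * (2 * np.pi - x) * np.sin(x)) + 0.6
    b = 2 * (np.cos(y) + np.sin(x)) + 0.8
    # distance from the center (pi, pi), normalized by the half-diagonal
    c = 0.5 * (1 - np.sqrt((x - np.pi) ** 2 + (y - np.pi) ** 2)
               / (np.sqrt(2) * np.pi))
    return float(a), float(b), float(c)

a, b, c = coeffs(np.pi, np.pi)   # evaluated at the domain center
```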
+
+To evaluate the effect of the proposed SDL on this prediction task, we cascade SDL and a linear regression model as our prediction model, since the dynamics follow a linear partial differential equation. We compare its performance with four baselines: (1) a vector auto-regressor (VAR); (2) a multilayer perceptron (MLP); (3) StandardOP: the standard approximation of differential operators in Section 2.1 followed by a linear regressor; and (4) MeshOP: similar to StandardOP but using the discretization on a triangulated mesh in Section 2.2 for the differential operators.
+
+Table 2: Mean absolute error $\left( {10}^{-2}\right)$ for graph signal prediction.
+
+| VAR | MLP | StandardOP | MeshOP | SDL |
+| --- | --- | --- | --- | --- |
+| 16.84±0.41 | 15.75±0.53 | 11.99±0.29 | 12.82±0.06 | 10.87±0.98 |
+
+Prediction Performance Table 2 shows the prediction performance of the different models measured with mean absolute error. The prediction model with our proposed spatial difference layer outperforms the other baselines. All models incorporating any form of spatial difference operators (StandardOP, MeshOP, and SDL) outperform those without them (VAR and MLP), showing that introducing spatial difference information inspired by the intrinsic dynamics helps prediction. However, when points with observable signals are sparse in the space, spatial difference operators derived from fixed rules can be inaccurate and sub-optimal for prediction, since the locally linear assumption they rely on no longer holds. Our proposed SDL, in contrast, bridges the gap between approximated difference operators and accurate ones by introducing learnable coefficients that utilize neighboring information, and thus improves the prediction performance.
+
+
+Figure 5: Synthetic dynamics and graph structure of sampled points at $t = 1, 5, 10, 15, 20$.
+
+# 4 PREDICTION: GRAPH SIGNALS ON LAND-BASED WEATHER SENSORS
+
+We evaluate the proposed model on the task of predicting climate observations (Temperature) from the land-based weather stations located in the United States.
+
+# 4.1 EXPERIMENTAL SET-UP
+
+Data and task We sample the weather stations located in the United States from the Online Climate Data Directory of the National Oceanic and Atmospheric Administration (NOAA) and choose the stations that actively measured meteorological observations during 2015. We choose two geographically close but meteorologically diverse groups of stations: the Western and Southeastern states. We use the $k$-nearest neighbor algorithm ($k = 4$) to generate graph structures, and the final adjacency matrix is $A = (A_k + A_k^\top) / 2$, where $A_k$ is the adjacency matrix output by the $k$-NN algorithm, so that $A$ is symmetric.
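The symmetrization step can be sketched on a toy directed $k$-NN adjacency (illustrative, not real station data): a one-directional nearest-neighbor edge receives weight 0.5 in the final matrix.

```python
import numpy as np

# Toy directed k-NN output (illustrative): row i marks node i's neighbors.
A_k = np.array([[0., 1., 0.],
                [0., 0., 1.],
                [1., 0., 0.]])
A = (A_k + A_k.T) / 2   # symmetric adjacency used in the experiments
```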
+
+Figure 6 shows the distributions of the land-based weather stations and their connectivity. Since the stations are not synchronized and have different timestamps for the observations, we aggregate the time series hourly. The 1-year sequential data are split into the train set (8 months), the validation set (2 months), and the test set (2 months), respectively.
+
+Our main task is to predict the next graph signals based on the current and past graph signals. All methods we evaluate are trained with the objective (Eq 3) using the Adam optimizer, and we use scheduled sampling (Bengio et al., 2015) for the models with recurrent modules. We evaluate PA-DGN and the other baselines on two prediction tasks: (1) 1-step and (2) multistep-ahead prediction. Furthermore, we present an ablation study showing how important the spatial derivatives from our proposed SDL are for predicting the graph dynamics.
+
+
+Figure 6: Weather stations in the (left) western and (right) southeastern United States and the $k$-NN graph.
+
+
+
+# 4.2 GRAPH SIGNAL PREDICTIONS
+
+We compare against widely used baselines (VAR, MLP, and GRU) for 1-step and multistep prediction. Then, we use RGN (Sanchez-Gonzalez et al., 2018) to examine how beneficial the graph structure is. Finally, we evaluate PA-DGN to verify whether the proposed architecture (Eq 1) reduces the prediction loss. Experiment results for the prediction tasks are summarized in Table 3.
+
+Overall, RGN and PA-DGN outperform the other baselines, implying that the graph structure provides a useful inductive bias for the task. This is intuitive, as the meteorological observations change continuously over space and time, and thus the observations at the $i$-th station are strongly related to those of its neighboring stations.
+
+PA-DGN outperforms RGN, and the gap comes from the fact that the spatial derivatives (Eq 1) we feed into PA-DGN are beneficial. This finding is expected because the meteorological signal at a given point is a function not only of its own previous value but also of the relative differences between neighboring signals and itself. Knowing the relative differences among local observations is particularly essential for understanding physics-related dynamics. For example, the diffusion equation, which describes how physical quantities (e.g., heat) are transported through space over time, is also a function of relative differences of the quantities $\left(\frac{df}{dt} = D\Delta f\right)$ rather than of the neighboring values themselves. In other words, spatial differences are physics-aware features, and it is desirable to leverage them as input to learn dynamics related to physical phenomena.
+
+Table 3: Graph signal prediction results (MAE) on multistep predictions. In each row, we report the average with standard deviations from all baselines and PA-DGN. One step is 1-hour time interval.
+
+| Region | Method | 1-step | 6-step | 12-step |
+| --- | --- | --- | --- | --- |
+| West | VAR | 0.1241 ± 0.0234 | 0.4295 ± 0.1004 | 0.4820 ± 0.1298 |
+| | MLP | 0.1040 ± 0.0003 | 0.3742 ± 0.0238 | 0.4998 ± 0.0637 |
+| | GRU | 0.0913 ± 0.0047 | 0.1871 ± 0.0102 | 0.2707 ± 0.0006 |
+| | RGN | 0.0871 ± 0.0033 | 0.1708 ± 0.0024 | 0.2666 ± 0.0252 |
+| | RGN(StandardOP) | 0.0860 ± 0.0018 | 0.1674 ± 0.0019 | 0.2504 ± 0.0107 |
+| | RGN(MeshOP) | 0.0840 ± 0.0015 | 0.2119 ± 0.0018 | 0.4305 ± 0.0177 |
+| | PA-DGN | 0.0840 ± 0.0004 | 0.1614 ± 0.0042 | 0.2439 ± 0.0163 |
+| SouthEast | VAR | 0.0889 ± 0.0025 | 0.2250 ± 0.0013 | 0.3062 ± 0.0032 |
+| | MLP | 0.0722 ± 0.0012 | 0.1797 ± 0.0086 | 0.2514 ± 0.0154 |
+| | GRU | 0.0751 ± 0.0037 | 0.1724 ± 0.0130 | 0.2446 ± 0.0241 |
+| | RGN | 0.0790 ± 0.0113 | 0.1815 ± 0.0239 | 0.2548 ± 0.0210 |
+| | RGN(StandardOP) | 0.0942 ± 0.0121 | 0.2135 ± 0.0187 | 0.2902 ± 0.0348 |
+| | RGN(MeshOP) | 0.0905 ± 0.0012 | 0.2052 ± 0.0012 | 0.2602 ± 0.0062 |
+| | PA-DGN | 0.0721 ± 0.0002 | 0.1664 ± 0.0011 | 0.2408 ± 0.0056 |
+
+# 4.3 CONTRIBUTION OF SPATIAL DERIVATIVES
+
+We further investigate whether the modulated spatial derivatives (Eq 1) are more advantageous than the spatial derivatives defined in Riemannian manifolds. First, RGN without any spatial derivatives is assessed on the prediction tasks for the Western and Southeastern graph signals. Note that this model uses no extra features beyond the graph signal $f(t)$. Second, we separately add (1) StandardOP, the discrete spatial differences (gradient and Laplacian) of Section 2.1, and (2) MeshOP, the triangulated-mesh approximation of the differential operators of Section 2.2, as additional signals to RGN. Finally, we incorporate our proposed spatial difference layer into RGN.
+
+Table 3 shows the contribution of each component. As expected, PA-DGN provides the largest drops in MAE (3.56%, 5.50%, 8.51% and 8.73%, 8.32%, 5.49% on the two datasets, respectively) compared to RGN without derivatives, and the results demonstrate that the derivatives, namely the relative differences from neighboring signals, are effectively useful. However, neither RGN with StandardOP nor RGN with MeshOP consistently outperforms RGN. We also find that PA-DGN consistently improves the prediction error compared to the fixed derivatives. This finding is evidence that the parameters modulating the spatial derivatives in our proposed spatial difference layer are properly inferred to optimize the networks and to improve the prediction performance.
+
+# 5 CONCLUSION
+
+In this paper, we introduce a novel architecture (PA-DGN) that approximates spatial derivatives and uses them to represent PDEs, which play a prominent role in physics-aware modeling. PA-DGN effectively learns the modulated derivatives for prediction, and the derivatives can be used to discover hidden physics describing interactions between temporal and spatial derivatives.
+
+# ACKNOWLEDGEMENTS
+
+This work is supported in part by NSF Research Grant IIS-1254206 and MINERVA grant N00014-17-1-2281, granted to co-author Yan Liu in her academic role at the University of Southern California. The views and conclusions are those of the authors and should not be interpreted as representing the official policies of the funding agency or the U.S. Government. Last but not least, we thank the anonymous reviewers for their thorough comments and suggestions.
+
+# REFERENCES
+
+Nina Amenta and Yong Joo Kil. Defining point-set surfaces. In ACM Transactions on Graphics (TOG), volume 23, pp. 264-270. ACM, 2004.
+Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. In Advances in neural information processing systems, pp. 4502-4510, 2016.
+Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
+Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 1171-1179, 2015.
+Michael B Chang, Tomer Ullman, Antonio Torralba, and Joshua B Tenenbaum. A compositional object-based approach to learning physical dynamics. arXiv preprint arXiv:1612.00341, 2016.
+Yunjin Chen, Wei Yu, and Thomas Pock. On learning optimized reaction diffusion processes for effective image restoration. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5261-5269, 2015.
+Keenan Crane. Discrete differential geometry: An applied introduction. Notices of the AMS, Communication, 2018.
+Emmanuel de Bezenac, Arthur Pajot, and Patrick Gallinari. Deep learning for physical processes: Incorporating prior scientific knowledge. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=By4HsfWAZ.
+Xu Geng, Yaguang Li, Leye Wang, Lingyu Zhang, Qiang Yang, Jieping Ye, and Yan Liu. Spatiotemporal multi-graph convolution network for ride-hailing demand forecasting. In 2019 AAAI Conference on Artificial Intelligence (AAAI'19), 2019.
+Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1263-1272. JMLR.org, 2017.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
+Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal processing magazine, 29(6):82-97, 2012.
+Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017.
+
+Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012.
+Rongjie Lai, Jiang Liang, and Hongkai Zhao. A local mesh method for solving PDEs on point clouds. Inverse Problems & Imaging, 7(3), 2013.
+Yaguang Li, Rose Yu, Cyrus Shahabi, and Yan Liu. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=SJiHXGWAZ.
+Lek-Heng Lim. Hodge Laplacians on graphs. arXiv preprint arXiv:1507.05379, 2015.
+Zichao Long, Yiping Lu, Xianzhong Ma, and Bin Dong. PDE-Net: Learning PDEs from data. International Conference on Machine Learning, 2018.
+Chuanjiang Luo, Issam Safa, and Yusu Wang. Approximating gradients for meshes and point clouds via diffusion metric. In Computer Graphics Forum, volume 28, pp. 1497-1508. Wiley Online Library, 2009.
+Michael Lutter, Christian Ritter, and Jan Peters. Deep lagrangian networks: Using physics as model prior for deep learning. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bk1HpjCqKm.
+Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (part i): Data-driven solutions of nonlinear partial differential equations. arXiv preprint arXiv:1711.10561, 2017a.
+Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (part ii): Data-driven discovery of nonlinear partial differential equations. arXiv preprint arXiv:1711.10566, 2017b.
+Lars Ruthotto and Eldad Haber. Deep neural networks motivated by partial differential equations. arXiv preprint arXiv:1804.04272, 2018.
+Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin Riedmiller, Raia Hadsell, and Peter Battaglia. Graph networks as learnable physics engines for inference and control. International Conference on Machine Learning, 2018.
+Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. In Advances in neural information processing systems, pp. 4967-4976, 2017.
+Sungyong Seo and Yan Liu. Differentiable physics-informed graph networks. arXiv preprint arXiv:1902.02950, 2019.
+Jonathan Richard Shewchuk. What is a good linear finite element? interpolation, conditioning, anisotropy, and quality measures (preprint). University of California at Berkeley, 73:137, 2002.
+Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
+
+# A APPENDIX
+
+# A.1 SIMULATED DATA
+
+For the simulated dynamics, we discretize the following linear variable-coefficient convection-diffusion equation, similar to the one in Long et al. (2018), and use the discretization to simulate the corresponding dynamics on graphs.
+
+In a continuous space, we define the linear variable-coefficient convection-diffusion equation as:
+
+$$
+\left\{ \begin{array}{l l} \frac {\partial f}{\partial t} & = a (x, y) f _ {x} + b (x, y) f _ {y} + c (x, y) \Delta f \\ f | _ {t = 0} & = f _ {0} (x, y) \end{array} \right. \tag {5}
+$$
+
+with $\Omega = [0,2\pi ]\times [0,2\pi ]$, $(t,x,y)\in [0,0.2]\times \Omega$, $a(x,y) = 0.5(\cos (y) + x(2\pi -x)\sin (x)) + 0.6$, $b(x,y) = 2(\cos (y) + \sin (x)) + 0.8$, and $c(x,y) = 0.5\left(1 - \frac{\sqrt{(x - \pi)^2 + (y - \pi)^2}}{\sqrt{2}\pi}\right)$.
+
+We follow the setting of initialization in Long et al. (2018):
+
+$$
+f _ {0} (x, y) = \sum_ {| k |, | l | \leq N} \lambda_ {k, l} \cos (k x + l y) + \gamma_ {k, l} \sin (k x + l y) \tag {6}
+$$
+
+where $N = 9$ , $\lambda_{k,l}, \gamma_{k,l} \sim \mathcal{N}\left(0, \frac{1}{50}\right)$ , and $k$ and $l$ are chosen randomly.
+
+We use spatial difference operators to approximate spatial derivatives:
+
+$$
+\left\{ \begin{array}{l} f _ {x} \left(x _ {i}, y _ {i}\right) = \frac {1}{2 s} \left(f \left(x _ {i}, y _ {i}\right) - f \left(x _ {i} - s, y _ {i}\right)\right) - \frac {1}{2 s} \left(f \left(x _ {i}, y _ {i}\right) - f \left(x _ {i} + s, y _ {i}\right)\right) \\ f _ {y} \left(x _ {i}, y _ {i}\right) = \frac {1}{2 s} \left(f \left(x _ {i}, y _ {i}\right) - f \left(x _ {i}, y _ {i} - s\right)\right) - \frac {1}{2 s} \left(f \left(x _ {i}, y _ {i}\right) - f \left(x _ {i}, y _ {i} + s\right)\right) \\ f _ {x x} \left(x _ {i}, y _ {i}\right) = \frac {1}{s ^ {2}} \left(f \left(x _ {i}, y _ {i}\right) - f \left(x _ {i} - s, y _ {i}\right)\right) + \frac {1}{s ^ {2}} \left(f \left(x _ {i}, y _ {i}\right) - f \left(x _ {i} + s, y _ {i}\right)\right) \\ f _ {y y} \left(x _ {i}, y _ {i}\right) = \frac {1}{s ^ {2}} \left(f \left(x _ {i}, y _ {i}\right) - f \left(x _ {i}, y _ {i} - s\right)\right) + \frac {1}{s ^ {2}} \left(f \left(x _ {i}, y _ {i}\right) - f \left(x _ {i}, y _ {i} + s\right)\right) \end{array} \right. \tag {7}
+$$
+
+where $s$ is the spatial grid size of the discretization.
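+
+As a sanity check, the stencils in Eqn. (7) can be applied on a regular grid and compared against analytic derivatives. The NumPy sketch below does this (the test function and grid size are our own choices). Note that, as written, the second-order stencils in Eqn. (7) return the negative of the usual second derivative, following the graph-Laplacian sign convention:
+
+```python
+import numpy as np
+
+def stencils(f, s):
+    """Apply the difference stencils of Eqn. (7) to a 2D field f
+    sampled on a regular grid with spacing s (interior points only)."""
+    c = f[1:-1, 1:-1]                                  # f(x_i, y_i)
+    fx = (f[2:, 1:-1] - f[:-2, 1:-1]) / (2 * s)        # central difference in x
+    fy = (f[1:-1, 2:] - f[1:-1, :-2]) / (2 * s)        # central difference in y
+    fxx = (2 * c - f[:-2, 1:-1] - f[2:, 1:-1]) / s**2  # note: approximates -d2f/dx2
+    fyy = (2 * c - f[1:-1, :-2] - f[1:-1, 2:]) / s**2  # note: approximates -d2f/dy2
+    return fx, fy, fxx, fyy
+
+s = 2 * np.pi / 200
+x = np.linspace(0, 2 * np.pi, 200, endpoint=False)
+X, Y = np.meshgrid(x, x, indexing="ij")
+f = np.sin(X) * np.cos(Y)                  # smooth test function (our choice)
+
+fx, fy, fxx, fyy = stencils(f, s)
+Xi, Yi = X[1:-1, 1:-1], Y[1:-1, 1:-1]      # interior points
+assert np.allclose(fx, np.cos(Xi) * np.cos(Yi), atol=1e-3)
+assert np.allclose(fy, -np.sin(Xi) * np.sin(Yi), atol=1e-3)
+# graph-Laplacian sign: the stencil equals -(d2f/dx2) = +sin(x)cos(y) here
+assert np.allclose(fxx, np.sin(Xi) * np.cos(Yi), atol=1e-3)
+```
+
+The $O(s^2)$ truncation error of these central differences easily passes the tolerance above on a $200 \times 200$ grid.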
+
+Then we rewrite (5) with difference operators defined on graphs:
+
+$$
+\left\{ \begin{array}{l} \frac {\partial f}{\partial t} = a (i) (\nabla f) _ {\hat {x}} + b (i) (\nabla f) _ {\hat {y}} + c (i) ((\Delta f) _ {\hat {x} \hat {x}} + (\Delta f) _ {\hat {y} \hat {y}}) \\ f _ {i} (0) = f _ {0} (i) \end{array} \right. \tag {8}
+$$
+
+where
+
+$$
+a (i) \left(x _ {j}, y _ {j}\right) = \left\{ \begin{array}{l l} \frac {a \left(x _ {i} , y _ {i}\right)}{2 s} & \text {if } x _ {i} = x _ {j} + s, y _ {i} = y _ {j} \\ - \frac {a \left(x _ {i} , y _ {i}\right)}{2 s} & \text {if } x _ {i} = x _ {j} - s, y _ {i} = y _ {j} \end{array} \right. \tag {9}
+$$
+
+$$
+b (i) \left(x _ {j}, y _ {j}\right) = \left\{ \begin{array}{l l} \frac {b \left(x _ {i} , y _ {i}\right)}{2 s} & \text {if } x _ {i} = x _ {j}, y _ {i} = y _ {j} + s \\ - \frac {b \left(x _ {i} , y _ {i}\right)}{2 s} & \text {if } x _ {i} = x _ {j}, y _ {i} = y _ {j} - s \end{array} \right. \tag {10}
+$$
+
+$$
+c (i) \left(x _ {j}, y _ {j}\right) = \frac {c \left(x _ {i} , y _ {i}\right)}{s ^ {2}} \tag {11}
+$$
+
+Then we replace the gradient w.r.t time in (8) with temporal discretization:
+
+$$
+\left\{ \begin{array}{l} f (t + 1) = \Delta t (a (i) (\nabla f) _ {\hat {x}} + b (i) (\nabla f) _ {\hat {y}} + c (i) ((\Delta f) _ {\hat {x} \hat {x}} + (\Delta f) _ {\hat {y} \hat {y}})) + f (t) \\ f _ {i} (0) = f _ {0} (i) \end{array} \right. \tag {12}
+$$
+
+where $\Delta t$ is the time step in the temporal discretization.
+
+Equation (12) is used to simulate the dynamics described by Equation (5). We then uniformly sample 250 points in the above 2D space and take their time series of $f$ as the dataset for our synthetic experiments. We generate 1000 sessions on a $50 \times 50$ regular mesh with time step size $\Delta t = 0.01$ : 700 sessions are used for training, 150 for validation, and 150 for testing.
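+
+Putting Eqns. (5), (6), and (12) together, the simulation pipeline can be sketched as below. This is a minimal NumPy version with periodic boundaries, using the standard (continuous) sign convention for the Laplacian; the random draw of wave numbers for the initial condition, the interpretation of $\mathcal{N}(0, \frac{1}{50})$ as variance $\frac{1}{50}$, and the number of steps are our own simplifications:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+n, s, dt, steps = 50, 2 * np.pi / 50, 0.01, 20   # 50x50 mesh, dt = 0.01, t in [0, 0.2]
+x = np.arange(n) * s
+X, Y = np.meshgrid(x, x, indexing="ij")
+
+# variable coefficients of Eqn. (5)
+a = 0.5 * (np.cos(Y) + X * (2 * np.pi - X) * np.sin(X)) + 0.6
+b = 2 * (np.cos(Y) + np.sin(X)) + 0.8
+c = 0.5 * (1 - np.sqrt((X - np.pi) ** 2 + (Y - np.pi) ** 2) / (np.sqrt(2) * np.pi))
+
+# initial condition of Eqn. (6): N = 9 randomly chosen wave numbers (k, l)
+f = np.zeros_like(X)
+for k, l in rng.integers(-9, 10, size=(9, 2)):
+    lam, gam = rng.normal(0, np.sqrt(1 / 50), size=2)
+    f += lam * np.cos(k * X + l * Y) + gam * np.sin(k * X + l * Y)
+
+def step(f):
+    """One forward-Euler step of Eqn. (12) with periodic boundaries."""
+    fxm, fxp = np.roll(f, 1, 0), np.roll(f, -1, 0)
+    fym, fyp = np.roll(f, 1, 1), np.roll(f, -1, 1)
+    fx = (fxp - fxm) / (2 * s)
+    fy = (fyp - fym) / (2 * s)
+    lap = (fxm + fxp + fym + fyp - 4 * f) / s ** 2
+    return f + dt * (a * fx + b * fy + c * lap)
+
+frames = [f]
+for _ in range(steps):
+    frames.append(step(frames[-1]))
+
+# uniformly sample 250 of the 2500 grid nodes as the sparse observations
+idx = rng.choice(n * n, size=250, replace=False)
+series = np.stack(frames).reshape(steps + 1, -1)[:, idx]   # shape (T, 250)
+```
+
+Each `series` array corresponds to one session; repeating this with fresh random initial conditions yields the 1000 sessions described above.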
+
+# A.2 EXPERIMENT SETTINGS
+
+Here we provide additional details for models we used in this work, including model architecture settings and hyper-parameter settings.
+
+# A.2.1 MODEL SETTINGS
+
+Unless mentioned otherwise, all models use a hidden dimension of size 64.
+
+- VAR: A vector autoregression model with 2 lags. Input is the concatenated features of the previous 2 frames. The weights are shared among all nodes in the graph.
+- MLP: A multilayer perceptron model with 2 hidden layers. Input is the concatenated features of the previous 2 frames. The weights are shared among all nodes in the graph.
+- GRU: A Gated Recurrent Unit network with 2 hidden layers. Input is the concatenated features of the previous 2 frames. The weights are shared among all nodes in the graph.
+- RGN: A recurrent graph neural network model with 2 GN blocks. Each GN block has an edge update block and a node update block, both of which use a 2-layer GRU cell as the update function. We set its hidden dimension to 73 so that it has the same number of learnable parameters as our proposed model PA-DGN.
+- RGN(StandardOP): Similar to RGN, but uses the outputs of the difference operators in Section 2.1 as extra input features. We set its hidden dimension to 73.
+- RGN(MeshOP): Similar to RGN(StandardOP), but the extra input features are calculated using the operators in Section 2.2. We set its hidden dimension to 73.
+- PA-DGN: Our proposed model. The spatial derivative layer uses a message passing neural network (MPNN) with 2 GN blocks using 2-layer MLPs as update functions. The forward network part uses a recurrent graph neural network with 2 recurrent GN blocks using 2-layer GRU cells as update functions.
+
+The numbers of learnable parameters of all models are listed as follows:
+
+Table 4: Numbers of learnable parameters.
+
+| Model | VAR | MLP | GRU | RGN | RGN(StandardOP) | RGN(MeshOP) | PA-DGN |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| # Params | 3 | 4,417 | 37,889 | 345,876 | 341,057 | 342,152 | 340,001 |
+
+# A.2.2 TRAINING SETTINGS
+
+**The number of evaluation runs** We repeated every experiment in this paper 3 times and report the mean and standard deviation.
+
+**Length of prediction** For experiments on synthetic data, all models take the first 5 frames as input and predict the following 15 frames. For experiments on the NOAA datasets, all models take the first 12 frames as input and predict the following 12 frames.
+
+**Training hyper-parameters** We use the Adam optimizer with learning rate 1e-3, batch size 8, and weight decay 5e-4. All experiments are trained for a maximum of 2000 epochs with early stopping, using inverse sigmoid scheduled sampling with coefficient $k = 107$ .
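+
+The text does not spell out the schedule's functional form; assuming the standard inverse sigmoid decay from scheduled sampling (Bengio et al., 2015), the probability of feeding the ground-truth frame (rather than the model's own prediction) at epoch $i$ would be $\epsilon_i = k / (k + \exp(i / k))$:
+
+```python
+import math
+
+def teacher_forcing_prob(epoch: int, k: float = 107.0) -> float:
+    """Inverse sigmoid decay: probability of feeding the ground-truth frame
+    instead of the model's own prediction at a given training epoch.
+    The functional form is an assumption; only k = 107 comes from the text."""
+    return k / (k + math.exp(epoch / k))
+
+# the probability decays from ~1 toward 0 as training proceeds
+probs = [teacher_forcing_prob(e) for e in (0, 500, 1000, 2000)]
+```
+
+With $k = 107$ the schedule crosses $0.5$ near epoch $k \ln k \approx 500$, i.e. teacher forcing dominates early training and is almost entirely gone well before the 2000-epoch cap.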
+
+**Environments** All experiments are implemented with Python3.6 and PyTorch 1.1.0, and are run with NVIDIA GTX 1080 Ti GPUs.
+
+# A.3 EFFECT OF DIFFERENT GRAPH STRUCTURES
+
+In this section, we evaluate the effect of two different graph structures on the baselines and our models: (1) k-NN: a graph constructed with the $k$ -NN algorithm ( $k = 4$ ); (2) TriMesh: a graph generated with Delaunay triangulation. All graphs use the Euclidean distance.
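+
+For reference, both constructions can be sketched with NumPy and SciPy; the function names are illustrative, and the $k$ -NN graph is built by brute force for clarity:
+
+```python
+import numpy as np
+from scipy.spatial import Delaunay
+
+def knn_edges(points, k=4):
+    """Directed edges from each node to its k nearest neighbors (Euclidean)."""
+    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
+    np.fill_diagonal(d, np.inf)            # exclude self-loops
+    nbrs = np.argsort(d, axis=1)[:, :k]
+    return {(i, int(j)) for i in range(len(points)) for j in nbrs[i]}
+
+def trimesh_edges(points):
+    """Undirected edges of a Delaunay triangulation, stored symmetrically."""
+    edges = set()
+    for simplex in Delaunay(points).simplices:
+        for u in range(3):
+            for v in range(u + 1, 3):
+                i, j = int(simplex[u]), int(simplex[v])
+                edges |= {(i, j), (j, i)}
+    return edges
+
+pts = np.random.default_rng(0).uniform(0, 2 * np.pi, size=(250, 2))
+e_knn, e_tri = knn_edges(pts), trimesh_edges(pts)
+```
+
+Note the structural difference: the $k$ -NN graph gives every node exactly $k$ outgoing edges regardless of geometry, while the Delaunay mesh adapts edge counts to the local point layout.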
+
+Table 5: Mean absolute error $\left( {10}^{-2}\right)$ for graph signal prediction on the synthetic dataset.
+
+| Method | VAR | MLP | StandardOP (k-NN) | StandardOP (TriMesh) | MeshOP (k-NN) | MeshOP (TriMesh) | SDL (k-NN) | SDL (TriMesh) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| MAE | 17.30 | 16.27 | 12.00 | 12.29 | 12.87 | 12.82 | 11.04 | 12.40 |
+
+Table 6: Graph signal prediction results (MAE) on multistep predictions. For each setting, we report the mean and standard deviation for all baselines and PA-DGN. One step corresponds to a 1 hour time interval.
+
+| Region | Method | Graph | 1-step | 6-step | 12-step |
+| --- | --- | --- | --- | --- | --- |
+| West | VAR | - | 0.1241 ± 0.0234 | 0.4295 ± 0.1004 | 0.4820 ± 0.1298 |
+| West | MLP | - | 0.1040 ± 0.0003 | 0.3742 ± 0.0238 | 0.4998 ± 0.0637 |
+| West | GRU | - | 0.0913 ± 0.0047 | 0.1871 ± 0.0102 | 0.2707 ± 0.0006 |
+| West | RGN | k-NN | 0.0871 ± 0.0033 | 0.1708 ± 0.0024 | 0.2666 ± 0.0252 |
+| West | RGN | TriMesh | 0.0897 ± 0.0030 | 0.1723 ± 0.0116 | 0.2800 ± 0.0414 |
+| West | RGN (StandardOP) | k-NN | 0.0860 ± 0.0018 | 0.1674 ± 0.0019 | 0.2504 ± 0.0107 |
+| West | RGN (StandardOP) | TriMesh | 0.0842 ± 0.0011 | 0.1715 ± 0.0027 | 0.2517 ± 0.0369 |
+| West | RGN (MeshOP) | k-NN | 0.0840 ± 0.0015 | 0.2119 ± 0.0018 | 0.4305 ± 0.0177 |
+| West | RGN (MeshOP) | TriMesh | 0.0846 ± 0.0017 | 0.2090 ± 0.0077 | 0.4051 ± 0.0457 |
+| West | PA-DGN | k-NN | 0.0840 ± 0.0004 | 0.1614 ± 0.0042 | 0.2439 ± 0.0163 |
+| West | PA-DGN | TriMesh | 0.0849 ± 0.0012 | 0.1610 ± 0.0029 | 0.2473 ± 0.0162 |
+| SouthEast | VAR | - | 0.0889 ± 0.0025 | 0.2250 ± 0.0013 | 0.3062 ± 0.0032 |
+| SouthEast | MLP | - | 0.0722 ± 0.0012 | 0.1797 ± 0.0086 | 0.2514 ± 0.0154 |
+| SouthEast | GRU | - | 0.0751 ± 0.0037 | 0.1724 ± 0.0130 | 0.2446 ± 0.0241 |
+| SouthEast | RGN | k-NN | 0.0790 ± 0.0113 | 0.1815 ± 0.0239 | 0.2548 ± 0.0210 |
+| SouthEast | RGN | TriMesh | 0.0932 ± 0.0105 | 0.2076 ± 0.0200 | 0.2854 ± 0.0211 |
+| SouthEast | RGN (StandardOP) | k-NN | 0.0942 ± 0.0121 | 0.2135 ± 0.0187 | 0.2902 ± 0.0348 |
+| SouthEast | RGN (StandardOP) | TriMesh | 0.0868 ± 0.0132 | 0.1885 ± 0.0305 | 0.2568 ± 0.0328 |
+| SouthEast | RGN (MeshOP) | k-NN | 0.0913 ± 0.0016 | 0.2069 ± 0.0031 | 0.2649 ± 0.0092 |
+| SouthEast | RGN (MeshOP) | TriMesh | 0.0877 ± 0.0020 | 0.2043 ± 0.0026 | 0.2579 ± 0.0057 |
+| SouthEast | PA-DGN | k-NN | 0.0721 ± 0.0002 | 0.1664 ± 0.0011 | 0.2408 ± 0.0056 |
+| SouthEast | PA-DGN | TriMesh | 0.0876 ± 0.0096 | 0.2002 ± 0.0163 | 0.2623 ± 0.0180 |
+
+Table 5 and Table 6 show the effect of different graph structures on the synthetic dataset used in Section 3.2 and the real-world dataset in Section 4.2, respectively. We find that the effect of graph structure is not homogeneous across models. For RGN and PA-DGN, the $k$ -NN graph is more beneficial to prediction performance than the TriMesh graph, because these two models rely more on neighboring information, which a $k$ -NN graph incorporates better than a Delaunay triangulation graph. However, switching from the TriMesh graph to the $k$ -NN graph hurts the prediction accuracy of RGN(MeshOP), since Delaunay triangulation is a well-defined method for generating triangulated meshes, in contrast to $k$ -NN graphs. Despite these varying effects, our proposed PA-DGN with a $k$ -NN graph consistently outperforms all baselines under either graph structure.
+
+
+Figure 7: MAE across the nodes.
+
+# A.4 THE DISTRIBUTION OF PREDICTION ERROR ACROSS NODES
+
+Figure 7 provides the distribution of MAEs across the nodes of PA-DGN applied to the graph signal prediction task of the west coast region of the real-world dataset in Section 4.2. As shown in the figure, nodes with the highest prediction error for short-term prediction are gathered in the inner part where the observable nodes are sparse, while for long-term prediction nodes in the area with a limited number of observable points no longer have the largest MAE. This implies that PA-DGN can utilize neighboring information efficiently even under the limitation of sparsely observable points.
+
+# A.5 EVALUATION ON NEMO SEA SURFACE TEMPERATURE (SST) DATASET
+
+We tested our proposed method and baselines on the NEMO sea surface temperature (SST) dataset. We first download the data in the area between $50^{\circ}\mathrm{N}$ - $65^{\circ}\mathrm{N}$ and $75^{\circ}\mathrm{W}$ - $10^{\circ}\mathrm{W}$ from 2016-01-01 to 2017-12-31, then crop the $[0,550] \times [100,650]$ square from this area and sample 250 points from the square as our dataset. We divide the data into 24 sequences, each lasting 30 days, and truncate the tail. All models use the first 5-day SST as input and predict the SST for the following 15 and 25 days. We use the data in 2016 for training all models and the rest for testing.
+
+For StandardOP, MeshOP and SDL, we test both options using linear regression and using RGN for the prediction part and report the best result. The results in Table 7 show that all methods incorporating spatial differences gain improvement on prediction and that our proposed learnable SDL outperforms all other baselines.
+
+Table 7: Mean absolute error $\left( {10}^{-2}\right)$ for SST graph signal prediction.
+
+| Steps | VAR | MLP | GRU | RGN | StandardOP | MeshOP | SDL |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 15-step | 15.123 | 15.058 | 15.101 | 15.172 | 14.756 | 14.607 | 14.382 |
+| 25-step | 19.533 | 19.473 | 19.522 | 19.705 | 18.983 | 18.977 | 18.434 |
+
+# A.6 EVALUATION ON DATASETS WITH DIFFERENT SPARSITY
+
+We changed the number of nodes to control the sparsity of data. As shown in Table 8, our proposed model outperforms others under various settings of sparsity on the synthetic experiment in Section 3.2.
+
+Table 8: Mean absolute error $(10^{-2})$ for graph signal prediction with different sparsity.
+
+| #Nodes | VAR | MLP | StandardOP | MeshOP | SDL |
+| --- | --- | --- | --- | --- | --- |
+| 250 | 0.1730 | 0.1627 | 0.1200 | 0.1287 | 0.1104 |
+| 150 | 0.1868 | 0.1729 | 0.1495 | 0.1576 | 0.1482 |
+| 100 | 0.1723 | 0.1589 | 0.1629 | 0.1696 | 0.1465 |
+
+Furthermore, we sampled 400 points, trained SDL as described in Section 3.1, and then resampled fewer points (350, 300, 250, 200) to evaluate whether SDL generalizes to sparser settings. As Table 9 shows, the MSE increases as fewer sample points are used. However, SDL still provides much more accurate gradients than the finite-difference baseline, even when evaluated on a new graph with different properties. Thus, the results support that SDL generalizes to settings with different sparsity.
+
+Table 9: Mean squared error $(10^{-2})$ for approximations of directional derivatives of function $f_{2}(x,y) = \sin (x) + \cos (y)$ with different sparsity.
+
+| Method | 350 Nodes | 300 Nodes | 250 Nodes | 200 Nodes |
+| --- | --- | --- | --- | --- |
+| FinGrad | 2.88 ± 0.11 | 3.42 ± 0.14 | 3.96 ± 0.17 | 4.99 ± 0.31 |
+| SDL | 1.03 ± 0.09 | 1.14 ± 0.12 | 1.40 ± 0.10 | 1.76 ± 0.10 |
\ No newline at end of file
diff --git a/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/images.zip b/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..77dcd9203015c45650e44d787eb0d2e43966153e
--- /dev/null
+++ b/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1a5c945c2d91b749fd004d3afd87ec8b06cf12abf2afbf30d9517486ea4bfbc1
+size 814644
diff --git a/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/layout.json b/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..fddf971c1d6758b5e34b37d711863a00c13ba0db
--- /dev/null
+++ b/physicsawaredifferencegraphnetworksforsparselyobserveddynamics/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d145a562902ad44c4c22458be3e625bb4e7987289149c500d3f11c21fc71aaaf
+size 511984
diff --git a/pickingwinningticketsbeforetrainingbypreservinggradientflow/30688be3-b6b0-46fc-9172-225633de05af_content_list.json b/pickingwinningticketsbeforetrainingbypreservinggradientflow/30688be3-b6b0-46fc-9172-225633de05af_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..fe5a5e0dee164227090b324b3796c0c4779a3869
--- /dev/null
+++ b/pickingwinningticketsbeforetrainingbypreservinggradientflow/30688be3-b6b0-46fc-9172-225633de05af_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e9b1340df3ad64a8e2f7c30df014edb20fe647d60cfc4784d78c78d1618032b
+size 74733
diff --git a/pickingwinningticketsbeforetrainingbypreservinggradientflow/30688be3-b6b0-46fc-9172-225633de05af_model.json b/pickingwinningticketsbeforetrainingbypreservinggradientflow/30688be3-b6b0-46fc-9172-225633de05af_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d1d8585a9e9fc4ffd1fb64d99a5eb068c236b1f9
--- /dev/null
+++ b/pickingwinningticketsbeforetrainingbypreservinggradientflow/30688be3-b6b0-46fc-9172-225633de05af_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ffb57d2162aca55dfe04453db91ad5eaf1e67528a70678d3c8ffae1405cd419e
+size 91940
diff --git a/pickingwinningticketsbeforetrainingbypreservinggradientflow/30688be3-b6b0-46fc-9172-225633de05af_origin.pdf b/pickingwinningticketsbeforetrainingbypreservinggradientflow/30688be3-b6b0-46fc-9172-225633de05af_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..07d49fcc896baf164ff249952963f0614f76c400
--- /dev/null
+++ b/pickingwinningticketsbeforetrainingbypreservinggradientflow/30688be3-b6b0-46fc-9172-225633de05af_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1ed20870c625d6ac938d81d6470f845a88ec7ff6cf407cd180212f8258d4ecb
+size 1606445
diff --git a/pickingwinningticketsbeforetrainingbypreservinggradientflow/full.md b/pickingwinningticketsbeforetrainingbypreservinggradientflow/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..206b6849175aae9cf727af2007251b41b8ee3a58
--- /dev/null
+++ b/pickingwinningticketsbeforetrainingbypreservinggradientflow/full.md
@@ -0,0 +1,266 @@
+# PICKING WINNING TICKETS BEFORE TRAINING BY PRESERVING GRADIENT FLOW
+
+Chaoqi Wang, Guodong Zhang, Roger Grosse
+
+University of Toronto, Vector Institute
+
+{cqwang, gdzhang, rgrosse}@cs.toronto.edu
+
+# ABSTRACT
+
+Overparameterization has been shown to benefit both the optimization and generalization of neural networks, but large networks are resource hungry at both training and test time. Network pruning can reduce test-time resource requirements, but is typically applied to trained networks and therefore cannot avoid the expensive training process. We aim to prune networks at initialization, thereby saving resources at training time as well. Specifically, we argue that efficient training requires preserving the gradient flow through the network. This leads to a simple but effective pruning criterion we term Gradient Signal Preservation (GraSP). We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, using VGGNet and ResNet architectures. Our method can prune $80\%$ of the weights of a VGG-16 network on ImageNet at initialization, with only a $1.6\%$ drop in top-1 accuracy. Moreover, our method achieves significantly better performance than the baseline at extreme sparsity levels. Our code is made public at: https://github.com/alecwangcq/GraSP.
+
+# 1 INTRODUCTION
+
+Deep neural networks exhibit good optimization and generalization performance in the overparameterized regime (Zhang et al., 2016; Neyshabur et al., 2019; Arora et al., 2019; Zhang et al., 2019b), but both training and inference for large networks are computationally expensive. Network pruning (LeCun et al., 1990; Hassibi et al., 1993; Han et al., 2015b; Dong et al., 2017; Zeng & Urtasun, 2019; Wang et al., 2019) has been shown to reduce the test-time resource requirements with minimal performance degradation. However, as the pruning is typically done to a trained network, these methods don't save resources at training time. Moreover, it has been argued that it is hard to train sparse architectures from scratch while maintaining comparable performance to their dense counterparts (Han et al., 2015a; Li et al., 2016). Therefore, we ask: can we prune a network prior to training, so that we can improve computational efficiency at training time?
+
+Recently, Frankle & Carbin (2019) shed light on this problem by proposing the Lottery Ticket Hypothesis (LTH), namely that there exist sparse, trainable sub-networks (called "winning tickets") within the larger network. They identify the winning tickets by taking a pre-trained network and removing connections with weights smaller than a pre-specified threshold. They then reset the remaining weights to their initial values, and retrain the sub-network from scratch. Hence, they showed that the pre-trained weights are not necessary, only the pruned architecture and the corresponding initial weight values. Nevertheless, like traditional pruning methods, the LTH approach still requires training the full-sized network in order to identify the sparse sub-networks.
+
+Can we identify sparse, trainable sub-networks at initialization? This would allow us to exploit sparse computation with specified hardware for saving computation cost. (For instance, Dey et al. (2019) demonstrated 5x efficiency gains for training networks with pre-specified sparsity.) At first glance, a randomly initialized network seems to provide little information that we can use to judge the importance of individual connections, since the choice would seem to depend on complicated training dynamics. However, recent work suggests this goal is attainable. Lee et al. (2018) proposed the first algorithm for pruning at initialization time: Single-shot Network Pruning (SNIP), which uses a connection sensitivity criterion to prune weights with both small magnitude and small gradients.
+
+Their empirical results are promising in the sense that they can find sparse, trainable sub-networks at initialization. However, connection sensitivity is sub-optimal as a criterion because the gradient of each weight might change dramatically after pruning due to complicated interactions between weights. Since SNIP only considers the gradient for one weight in isolation, it could remove connections that are important to the flow of information through the network. Practically, we find that this blocking of information flow manifests as a reduction in the norm of the gradient.
+
+Therefore, we aim to prune connections in a way that accounts for their role in the network's gradient flow. Specifically, we take the gradient norm after pruning as our criterion, and prune those weights whose removal will result in least decrease in the gradient norm after pruning. Because we rely on preserving the gradient flow to prune the network, we name our method Gradient Signal Preservation (GraSP). Our approach is easy to implement and conceptually simple. Moreover, the recently introduced Neural Tangent Kernel (NTK) (Jacot et al., 2018) provides tools for studying the learning dynamics in the output space. Building on the analysis of Arora et al. (2019), we show that our pruning criterion tends to keep those weights which will be beneficial for optimization. We evaluate GraSP on CIFAR-10, CIFAR-100 (Krizhevsky, 2009), Tiny-ImageNet and ImageNet (Deng et al., 2009) with modern neural networks, such as VGGNet (Simonyan & Zisserman, 2014) and ResNet (He et al., 2016). GraSP significantly outperforms SNIP in the extreme sparsity regime.
+
+# 2 RELATED WORK AND BACKGROUND
+
+In this section, we review the literature on neural network pruning including pruning after training, pruning during training, dynamic sparse training and pruning before training. We then discuss propagation of signals in deep neural networks and recent works in dynamical isometry and meanfield theory. Lastly, we also review the Neural Tangent Kernel (NTK) (Jacot et al., 2018), which builds up the foundation for justifying our method in Section 4.
+
+# 2.1 NETWORK PRUNING
+
+After training. Most pruning algorithms (LeCun et al., 1990; Hassibi et al., 1993; Dong et al., 2017; Han et al., 2015b; Li et al., 2016; Molchanov et al., 2016) operate on a pre-trained network. The main idea is to identify those weights which are most redundant, and whose removal will therefore least degrade the performance. Magnitude based pruning algorithms (Han et al., 2015b;a) remove those weights which are smaller than a threshold, which may incorrectly measure the importance of each weight. In contrast, Hessian-based pruning algorithms (LeCun et al., 1990; Hassibi et al., 1993) compute the importance of each weight by measuring how its removal will affect the loss. More recently, Wang et al. (2019) proposed a network reparameterization based on the Kronecker-factored Eigenbasis for further boosting the performance of Hessian-based methods. However, all the aforementioned methods require pre-training, and therefore aren't applicable at initialization.
+
+During training. There are also some works which attempt to incorporate pruning into the training procedure itself. Srinivas & Babu (2016) proposed generalized dropout, allowing for tuning the individual dropout rates during training, which can result in a sparse network after training. Louizos et al. (2018) proposed a method for dealing with discontinuity in training $L_{0}$ norm regularized networks in order to obtain sparse networks. Both methods require roughly the same computational cost as training the full network.
+
+Dynamic Sparse Training. Another branch of pruning algorithms is Dynamic Sparse Training methods, which dynamically change the weight sparsity during training. Representative works, such as Bellec et al. (2018); Mocanu et al. (2018); Mostafa & Wang (2019); Dettmers & Zettlemoyer (2019), follow a prune-redistribute-regrowth cycle for pruning. Among them, Dettmers & Zettlemoyer (2019) proposed the sparse momentum algorithm, which dynamically determines the sparse mask based on the mean momentum magnitude during training. However, their method requires maintaining the momentum of all the weights during training, and thus does not save memory. These techniques generally achieve higher accuracy compared with fixed sparse connectivity, but they change the standard training procedure and therefore do not enjoy the potential hardware acceleration.
+
+Before training. Pruning at initialization is more challenging because we need to account for the effect on the training dynamics when removing each weight. There have been several attempts to conduct pruning before training. Frankle & Carbin (2019); Frankle et al. (2019) proposed and
+
+validated the Lottery Ticket Hypothesis (LTH), namely that the network structure found by traditional pruning algorithms and the corresponding initialization are together sufficient for training the subnetwork from scratch. Lee et al. (2018) proposed the SNIP algorithm, which was the first attempt to directly identify trainable and sparse sub-networks at initialization time. Their method was based on connection sensitivity, which aims to preserve the loss after pruning, and achieved impressive results. Concurrently to our work, Lee et al. (2019b) studied the pruning problem from a signal propagation perspective, and proposed to use an orthogonal initialization to ensure faithful signal propagation. Though their work shares the same spirit as our GraSP algorithm, they focused on the weight initialization scheme, while we focus on the pruning criterion.
+
+# 2.2 SIGNAL PROPAGATION AT INITIALIZATION
+
+Our pruning criterion shares the same spirit as recent works on dynamical isometry and mean-field theory (Saxe et al., 2013; Xiao et al., 2018; Yang & Schoenholz, 2017; Poole et al., 2016), which derive initialization schemes theoretically by developing a mean-field theory of signal propagation and by characterizing the singular values of the input-output Jacobian matrix. In particular, Xiao et al. (2018) successfully trained 10,000-layer vanilla ConvNets with a specific initialization scheme. Essentially, this line of work shows that the trainability of a neural network at initialization is crucial for its final performance and convergence. While they focus on addressing the trainability of very deep networks, we aim to solve the corresponding issue for sparse neural networks. Moreover, they measure trainability by examining the input-output Jacobian, while we do so by checking the gradient norm. Though different, the gradient norm is closely related to the input-output Jacobian.
+
+# 2.3 NEURAL TANGENT KERNEL AND CONVERGENCE ANALYSIS
+
+Jacot et al. (2018) analyzed the dynamics of neural net training by directly analyzing the evolution of the network's predictions in output space. Let $\mathcal{L}$ denote the cost function, $\mathcal{X}$ the set of all training samples, $\mathcal{Z} = f(\mathcal{X};\pmb {\theta})\in \mathbb{R}^{nk\times 1}$ the outputs of the neural network, and $k$ and $n$ the output space dimension and the number of training examples. For a step of gradient descent, the change to the network's predictions can be approximated with a first-order Taylor approximation:
+
+$$
+f (\mathcal {X}; \boldsymbol {\theta} _ {t + 1}) = f (\mathcal {X}; \boldsymbol {\theta} _ {t}) - \eta \boldsymbol {\Theta} _ {t} (\mathcal {X}, \mathcal {X}) \nabla_ {\mathcal {Z}} \mathcal {L}, \tag {1}
+$$
+
+where the matrix $\Theta_t(\mathcal{X},\mathcal{X})$ is the Neural Tangent Kernel (NTK) at time step $t$ :
+
+$$
+\boldsymbol {\Theta} _ {t} (\mathcal {X}, \mathcal {X}) = \nabla_ {\boldsymbol {\theta}} f (\mathcal {X}; \boldsymbol {\theta} _ {t}) \nabla_ {\boldsymbol {\theta}} f (\mathcal {X}; \boldsymbol {\theta} _ {t}) ^ {\top} \in \mathbb {R} ^ {n k \times n k}, \tag {2}
+$$
+
+where $\nabla_{\theta}f(\mathcal{X};\theta)$ denotes the network Jacobian over the whole training set. Jacot et al. (2018) showed that for infinitely wide networks, with proper initialization, the NTK exactly captures the output space dynamics throughout training. In particular, $\Theta_t(\mathcal{X},\mathcal{X})$ remains constant throughout training. Arora et al. (2019) used the NTK to analyze optimization and generalization phenomena, showing that under the assumptions of constant NTK and squared error loss, the training dynamics can be analyzed in closed form:
+
+$$
+\left\| \mathcal {Y} - f (\mathcal {X}; \boldsymbol {\theta} _ {t}) \right\| _ {2} = \sqrt {\sum_ {i = 1} ^ {n} (1 - \eta \lambda_ {i}) ^ {2 t} \left(\mathbf {u} _ {i} ^ {\top} \mathcal {Y}\right) ^ {2}} \pm \epsilon \tag {3}
+$$
+
+where $\mathcal{Y} \in \mathbb{R}^{nk \times 1}$ is the vector of all targets, $\Theta = \mathbf{U} \boldsymbol{\Lambda} \mathbf{U}^{\top} = \sum_{i=1}^{n} \lambda_{i} \mathbf{u}_{i} \mathbf{u}_{i}^{\top}$ is the eigendecomposition, and $\epsilon$ is a bounded error term. Although the constant NTK assumption holds only in the infinite width limit, Lee et al. (2019a) found close empirical agreement between the NTK dynamics and the true dynamics for wide but practical networks, such as wide ResNet architectures (Zagoruyko & Komodakis, 2016).
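+
+Eqn. (3) can be verified exactly (with $\epsilon = 0$) in the one case where the constant-NTK assumption holds by construction: a linear model $f(\mathcal{X}; \boldsymbol{\theta}) = X\boldsymbol{\theta}$ trained with full-batch gradient descent on squared error, initialized at zero. The NumPy sketch below (problem sizes are our own choices) compares the measured residual norm to the closed form at every step:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+n, d = 20, 50                            # n training points, d parameters
+X = rng.normal(size=(n, d)) / np.sqrt(d)
+y = rng.normal(size=n)
+theta = np.zeros(d)                      # zero init, so the initial residual is y
+eta = 0.1
+
+ntk = X @ X.T                            # NTK of a linear model: constant by construction
+lam, U = np.linalg.eigh(ntk)             # eigendecomposition Theta = U diag(lam) U^T
+proj = U.T @ y                           # projections u_i^T y
+
+for t in range(1, 51):
+    theta -= eta * X.T @ (X @ theta - y)             # GD on L = 0.5 * ||X theta - y||^2
+    lhs = np.linalg.norm(y - X @ theta)              # measured residual norm
+    rhs = np.sqrt(np.sum((1 - eta * lam) ** (2 * t) * proj ** 2))  # Eqn. (3), eps = 0
+    assert np.isclose(lhs, rhs, atol=1e-8), (t, lhs, rhs)
+```
+
+The residual recursion $r_{t+1} = (I - \eta \Theta) r_t$ follows directly from substituting the NTK update of Eqn. (1) into the squared-error gradient, which is why the agreement here is exact up to floating-point error.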
+
+# 3 REVISITING SINGLE-SHOT NETWORK PRUNING (SNIP)
+
+Single-shot network pruning was introduced by Lee et al. (2018), who used the term to refer both to the general problem setting and to their specific algorithm. To avoid ambiguity, we refer to the general problem of pruning before training as foresight pruning. For completeness, we first revisit the formulation of foresight pruning, and then point out issues of SNIP for motivating our method.
+
+# Algorithm 1 Gradient Signal Preservation (GraSP).
+
+Require: Pruning ratio $p$ , training data $\mathcal{D}$ , network $f$ with initial parameters $\pmb{\theta}_0$
+
+1: $\mathcal{D}_b = \{(\mathbf{x}_i,\mathbf{y}_i)\}_{i = 1}^b\sim \mathcal{D}$ ▷ Sample a collection of training examples
+2: Compute the Hessian-gradient product $\mathbf{Hg}$ (see Eqn. (8)) ▷ See Algorithm 2
+3: $\mathbf{S}(-\pmb{\theta}_0) = -\pmb{\theta}_0 \odot \mathbf{Hg}$ ▷ Compute the importance of each weight
+4: Compute the $p$ -th percentile of $\mathbf{S}(-\pmb{\theta}_0)$ as $\tau$
+5: $\mathbf{m} = \mathbf{S}(-\pmb{\theta}_0) < \tau$ ▷ Remove the weights with the smallest importance
+6: Train the pruned network $f_{\mathbf{m} \odot \theta}$ on $\mathcal{D}$ until convergence.
+
+Problem Formulation. Suppose we have a neural network $f$ parameterized by $\pmb{\theta} \in \mathbb{R}^d$ , and our objective is to minimize the empirical risk $\mathcal{L}(\pmb{\theta}) = \frac{1}{N}\sum_{i}\left[\ell (f(\mathbf{x}_i;\pmb{\theta}),y_i)\right]$ given a training set $\mathcal{D} = \{(\mathbf{x}_i,y_i)\}_{i = 1}^N$ . Then, the foresight pruning problem can be formulated as:
+
+$$
+\min _ {\mathbf {m} \in \{0, 1 \} ^ {d}} \mathbb {E} _ {(\mathbf {x}, y) \sim \mathcal {D}} \left[ \ell \left(f \left(\mathbf {x}; \mathcal {A} (\mathbf {m}, \boldsymbol {\theta} _ {0})\right), y\right) \right] \quad \text {s . t .} \| \mathbf {m} \| _ {0} / d = 1 - p \tag {4}
+$$
+
+where $\lceil p\cdot d\rceil$ is the number of weights to be removed, and $\mathcal{A}$ is a known training algorithm (e.g. SGD), which takes the mask $\mathbf{m}$ and the initial weights $\pmb{\theta}_0$, and returns the trained weights. Since globally minimizing Eqn. (4) is intractable, we are instead interested in heuristics that result in good practical performance.
+
+Revisiting SNIP. SNIP (Lee et al., 2018) was the first algorithm proposed for foresight pruning, and it leverages the notion of connection sensitivity to remove unimportant connections. They define this in terms of how removing a single weight $\theta_{q}$ in isolation will affect the loss:
+
+$$
+S \left(\theta_ {q}\right) = \lim _ {\epsilon \rightarrow 0} \left| \frac {\mathcal {L} \left(\boldsymbol {\theta} _ {0}\right) - \mathcal {L} \left(\boldsymbol {\theta} _ {0} + \epsilon \boldsymbol {\delta} _ {q}\right)}{\epsilon} \right| = \left| \theta_ {q} \frac {\partial \mathcal {L}}{\partial \theta_ {q}} \right| \tag {5}
+$$
+
+where $\theta_{q}$ is the $q$-th element of $\pmb{\theta}_{0}$, and $\boldsymbol{\delta}_q$ is a one-hot vector whose $q$-th element equals $\theta_{q}$. Essentially, SNIP aims to preserve the loss of the original randomly initialized network.
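
As a concrete illustration of eqn. (5), the sketch below computes the SNIP saliency for a toy linear regression model, where the gradient is available in closed form (the data, model, and the 50% keep ratio are arbitrary choices for illustration, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((32, 10))   # toy inputs
y = rng.standard_normal(32)         # toy targets
theta0 = rng.standard_normal(10)    # random initialization

# Squared-error loss L = 0.5/N * ||X theta - y||^2 has a closed-form gradient.
grad = X.T @ (X @ theta0 - y) / len(X)

# SNIP connection sensitivity, Eqn. (5): |theta_q * dL/dtheta_q|
saliency = np.abs(theta0 * grad)

# keep the half of the connections with the highest sensitivity
tau = np.percentile(saliency, 50)
mask = saliency >= tau
assert mask.sum() == 5
```

Note that each weight is scored in isolation; the criterion ignores how removing one weight changes the gradients of the others, which is the gap GraSP targets.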
+
+Preserving the loss value motivated several classic methods for pruning a trained network, such as optimal brain damage (LeCun et al., 1990) and optimal brain surgeon (Hassibi et al., 1993). While the motivation for preserving the loss of a trained network is clear, it is less clear why this is a good criterion for foresight pruning: at initialization, the loss is no better than chance. We argue that at the beginning of training, it is more important to preserve the training dynamics than the loss itself. SNIP does not do this automatically, because even if removing a particular connection does not affect the loss, it can still block the flow of information through the network. For instance, we noticed in our experiments that SNIP with a high pruning ratio (e.g. $99\%$) tends to eliminate nearly all the weights in a particular layer, creating a bottleneck in the network. Therefore, we would prefer a pruning criterion which accounts for how the presence or absence of one connection influences the training of the rest of the network.
+
+# 4 GRADIENT SIGNAL PRESERVATION
+
+We now introduce and motivate our foresight pruning criterion, Gradient Signal Preservation (GraSP). To understand the problem we are trying to address, observe that the network after pruning will have fewer parameters and sparse connectivity, hindering the flow of gradients through the network and potentially slowing the optimization. This is reflected in Figure 2, which shows the reduction in gradient norm for random pruning with various pruning ratios. Moreover, the performance of the pruned networks is correspondingly worse (see Table 1).
+
+Mathematically, a larger gradient norm indicates that, to the first order, each gradient update achieves a greater loss reduction, as characterized by the following directional derivative:
+
+$$
+\Delta \mathcal {L} (\boldsymbol {\theta}) = \lim _ {\epsilon \rightarrow 0} \frac {\mathcal {L} (\boldsymbol {\theta} + \epsilon \nabla \mathcal {L} (\boldsymbol {\theta})) - \mathcal {L} (\boldsymbol {\theta})}{\epsilon} = \nabla \mathcal {L} (\boldsymbol {\theta}) ^ {\top} \nabla \mathcal {L} (\boldsymbol {\theta}) \tag {6}
+$$
+
+We would like to preserve, or even increase if possible, the gradient flow of the pruned network. Following LeCun et al. (1990), we cast the pruning operation
+
+# Algorithm 2 Hessian-gradient Product.
+
+Require: A batch of training data $\mathcal{D}_b$ , network $f$ with initial parameters $\theta_0$ , loss function $\mathcal{L}$
+
+1: $\mathcal{L}(\pmb{\theta}_0) = \mathbb{E}_{(\mathbf{x},y)\sim \mathcal{D}_b}[\ell (f(\mathbf{x};\pmb{\theta}_0),y)]$ $\triangleright$ Compute the loss and build the computation graph
+2: $\mathbf{g} = \mathrm{grad}(\mathcal{L}(\pmb{\theta}_0),\pmb{\theta}_0)$ ▷ Compute the gradient of the loss with respect to $\pmb{\theta}_0$
+3: $\mathbf{Hg} = \mathrm{grad}(\mathbf{g}^{\top}\mathrm{stop\_grad}(\mathbf{g}),\pmb{\theta}_{0})$ ▷ Compute the Hessian-gradient product $\mathbf{Hg}$
+4: Return Hg
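
Algorithm 2 relies on double backprop in an autograd framework. The same quantity can be sketched without autograd for a quadratic loss, where the Hessian is available in closed form and the Hessian-gradient product can be cross-checked against a finite difference of the gradient (toy data and dimensions are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((64, 12))
y = rng.standard_normal(64)
theta = rng.standard_normal(12)
N = len(X)

def grad(t):
    """Gradient of the quadratic loss 0.5/N * ||X t - y||^2."""
    return X.T @ (X @ t - y) / N

g = grad(theta)
Hg = X.T @ (X @ g) / N    # closed-form Hessian-gradient product: H = X^T X / N

# Cross-check with a finite difference of the gradient along g,
# mirroring the perturbation view in Eqn. (7).
eps = 1e-6
Hg_fd = (grad(theta + eps * g) - g) / eps
assert np.allclose(Hg, Hg_fd, atol=1e-4)

score = -theta * Hg       # GraSP importance, Eqn. (8): S(-theta) = -theta ⊙ Hg
```

For general networks the Hessian is never formed; the double-backward trick in Algorithm 2 (Pearlmutter, 1994) computes $\mathbf{Hg}$ at roughly the cost of two extra backward passes.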
+
+as adding a perturbation $\delta$ to the initial weights. We then use a Taylor approximation to characterize how removing one weight will affect the gradient flow after pruning:
+
+$$
+\begin{aligned}
+\mathbf{S}(\boldsymbol{\delta}) &= \Delta\mathcal{L}\left(\boldsymbol{\theta}_{0}+\boldsymbol{\delta}\right) - \underbrace{\Delta\mathcal{L}\left(\boldsymbol{\theta}_{0}\right)}_{\text{Const}} = 2\,\boldsymbol{\delta}^{\top}\nabla^{2}\mathcal{L}\left(\boldsymbol{\theta}_{0}\right)\nabla\mathcal{L}\left(\boldsymbol{\theta}_{0}\right) + \mathcal{O}\left(\|\boldsymbol{\delta}\|_{2}^{2}\right) \\
+&= 2\,\boldsymbol{\delta}^{\top}\mathbf{H}\mathbf{g} + \mathcal{O}\left(\|\boldsymbol{\delta}\|_{2}^{2}\right),
+\end{aligned} \tag{7}
+$$
+
+where $\mathbf{S}(\delta)$ approximately measures the change to (6). The Hessian matrix $\mathbf{H}$ captures the dependencies between different weights, and thus helps predict the effect of removing multiple weights. When $\mathbf{H}$ is approximated as the identity matrix, the above criterion recovers SNIP up to the absolute value (recall the SNIP criterion is $|\delta^{\top}\mathbf{g}|$ ). However, it has been observed that different weights are highly coupled (Hassibi et al., 1993), indicating that $\mathbf{H}$ is in fact far from the identity.
+
+GraSP uses eqn. (7) as the measure of the importance of each weight. Specifically, if $S(\delta)$ is negative, then removing the corresponding weights will reduce the gradient flow, while if it is positive, it will increase the gradient flow. We prefer to first remove those weights whose removal will not reduce the gradient flow. For each weight, the importance can be computed in the following way (by an abuse of notation, we use bold S to denote vectorized importance):
+
+$$
+\mathbf {S} (- \boldsymbol {\theta}) = - \boldsymbol {\theta} \odot \mathbf {H g} \tag {8}
+$$
+
+For a given pruning ratio $p$, we obtain the resulting pruning mask by computing the importance score of every weight and removing the $p$ fraction of weights with the largest scores, i.e., those whose removal least reduces (or even increases) the gradient flow (see Algorithm 1). Hence, GraSP takes the gradient flow into account for pruning. GraSP is efficient and easy to implement; the Hessian-gradient product can be computed without explicitly constructing the Hessian, using higher-order automatic differentiation (Pearlmutter, 1994; Schraudolph, 2002).
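
The thresholding step can be sketched as follows. The convention here (keeping the $(1-p)$ fraction of weights with the smallest scores, since a large positive score means removal would not hurt gradient flow) is our reading of Algorithm 1, and the scores are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)
scores = rng.standard_normal(1000)   # stand-in for S(-theta) = -theta ⊙ Hg
p = 0.9                              # pruning ratio: remove 90% of the weights

# Keep the (1 - p) fraction with the smallest scores: a large positive score
# means removing that weight would not reduce (or would increase) gradient flow.
tau = np.percentile(scores, 100 * (1 - p))
mask = scores < tau

assert mask.sum() == 100             # exactly 10% of the weights survive
```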
+
+# 4.1 UNDERSTANDING GRASP THROUGH LINEARIZED TRAINING DYNAMICS
+
+The above discussion concerns only the training dynamics at initialization. To understand the longer-term dynamics, we leverage the recently proposed Neural Tangent Kernel (NTK), which has been shown to capture the training dynamics of practical networks throughout training (Lee et al., 2019a). Specifically, as introduced in Section 2.3, eqn. (3) characterizes how the training error evolves over the course of training, depending only on the time step $t$, the training targets $\mathcal{Y}$, and the NTK $\Theta$. Since the NTK stays almost constant for wide but practical networks (Lee et al., 2019a), e.g., wide ResNet (Zagoruyko & Komodakis, 2016), eqn. (3) is fairly accurate in those settings. We now decompose eqn. (6) in the following form:
+
+$$
+\nabla \mathcal{L}(\boldsymbol{\theta})^{\top} \nabla \mathcal{L}(\boldsymbol{\theta}) = \nabla_{\mathcal{Z}} \mathcal{L}^{\top} \boldsymbol{\Theta}(\mathcal{X}, \mathcal{X}) \nabla_{\mathcal{Z}} \mathcal{L} = \left(\mathbf{U}^{\top} \nabla_{\mathcal{Z}} \mathcal{L}\right)^{\top} \boldsymbol{\Lambda} \left(\mathbf{U}^{\top} \nabla_{\mathcal{Z}} \mathcal{L}\right) = \sum_{i=1}^{n} \lambda_{i} \left(\mathbf{u}_{i}^{\top} \nabla_{\mathcal{Z}} \mathcal{L}\right)^{2} \tag{9}
+$$
+
+By relating it to eqn. (3), we can see that GraSP implicitly encourages $\Theta$ to be large in the directions corresponding to the output-space gradients. Since directions of $\Theta$ with larger eigenvalues train faster according to eqn. (3), this suggests that GraSP should result in efficient training. In practice, an increase in gradient norm might be achieved trivially by increasing the loss itself; we therefore incorporate a temperature term on the logits to smooth the predictions and reduce this effect.
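
The decomposition in eqn. (9) can be verified numerically for a toy Jacobian $\mathbf{J}$, using $\nabla\mathcal{L}(\boldsymbol{\theta}) = \mathbf{J}^{\top}\nabla_{\mathcal{Z}}\mathcal{L}$ and $\boldsymbol{\Theta} = \mathbf{J}\mathbf{J}^{\top}$ (all sizes and values below are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 6, 20                       # n outputs, d parameters
J = rng.standard_normal((n, d))    # toy network Jacobian df/dtheta
dL = rng.standard_normal(n)        # output-space gradient, nabla_Z L

g = J.T @ dL                       # parameter-space gradient
K = J @ J.T                        # NTK: Theta = J J^T
lam, U = np.linalg.eigh(K)

lhs = g @ g                        # squared gradient norm, Eqn. (6)
rhs = np.sum(lam * (U.T @ dL) ** 2)
assert np.isclose(lhs, rhs)
```

The equality says the gradient norm that GraSP preserves is exactly the output-space gradient's energy weighted by the NTK eigenvalues.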
+
+# 5 EXPERIMENTS
+
+In this section, we conduct various experiments to validate the effectiveness of our proposed pruning algorithm, in terms of test accuracy across pruning ratios, by comparing against
+
+Table 1: Comparisons with Random Pruning with VGG19 and ResNet32 on CIFAR-10/100.
+
+| Dataset | CIFAR-10 | | | | CIFAR-100 | | | |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Pruning ratio | 95% | 98% | 99% | 99.5% | 95% | 98% | 99% | 99.5% |
+| Random Pruning (VGG19) | 89.47(0.5) | 86.71(0.7) | 82.21(0.5) | 72.89(1.6) | 66.36(0.3) | 61.33(0.1) | 55.18(0.6) | 36.88(6.8) |
+| GraSP (VGG19) | 93.04(0.2) | 92.19(0.1) | 91.33(0.1) | 88.61(0.7) | 71.23(0.1) | 68.90(0.5) | 66.15(0.2) | 60.21(0.1) |
+| Random Pruning (ResNet32) | 89.75(0.1) | 85.90(0.4) | 71.78(9.9) | 50.08(7.0) | 64.72(0.2) | 50.92(0.9) | 34.62(2.8) | 18.51(0.43) |
+| GraSP (ResNet32) | 91.39(0.3) | 88.81(0.1) | 85.43(0.5) | 80.50(0.3) | 66.50(0.1) | 58.43(0.4) | 48.73(0.3) | 35.55(2.4) |
+
+Table 2: Test accuracy of pruned VGG19 and ResNet32 on CIFAR-10 and CIFAR-100 datasets. The bold number is the higher one between the accuracy of GraSP and that of SNIP.
+
+| Dataset | CIFAR-10 | | | CIFAR-100 | | |
+| --- | --- | --- | --- | --- | --- | --- |
+| Pruning ratio | 90% | 95% | 98% | 90% | 95% | 98% |
+| VGG19 (Baseline) | 94.23 | - | - | 74.16 | - | - |
+| OBD (LeCun et al., 1990) | 93.74 | 93.58 | 93.49 | 73.83 | 71.98 | 67.79 |
+| MLPrune (Zeng & Urtasun, 2019) | 93.83 | 93.69 | 93.49 | 73.79 | 73.07 | 71.69 |
+| LT (original initialization) | 93.51 | 92.92 | 92.34 | 72.78 | 71.44 | 68.95 |
+| LT (reset to epoch 5) | 93.82 | 93.61 | 93.09 | 74.06 | 72.87 | 70.55 |
+| DSR (Mostafa & Wang, 2019) | 93.75 | 93.86 | 93.13 | 72.31 | 71.98 | 70.70 |
+| SET (Mocanu et al., 2018) | 92.46 | 91.73 | 89.18 | 72.36 | 69.81 | 65.94 |
+| Deep-R (Bellec et al., 2018) | 90.81 | 89.59 | 86.77 | 66.83 | 63.46 | 59.58 |
+| SNIP (Lee et al., 2018) | 93.63±0.06 | 93.43±0.20 | 92.05±0.28 | 72.84±0.22 | 71.83±0.23 | 58.46±1.10 |
+| GraSP | 93.30±0.14 | 93.04±0.18 | 92.19±0.12 | 71.95±0.18 | 71.23±0.12 | 68.90±0.47 |
+| ResNet32 (Baseline) | 94.80 | - | - | 74.64 | - | - |
+| OBD (LeCun et al., 1990) | 94.17 | 93.29 | 90.31 | 71.96 | 68.73 | 60.65 |
+| MLPrune (Zeng & Urtasun, 2019) | 94.21 | 93.02 | 89.65 | 72.34 | 67.58 | 59.02 |
+| LT (original initialization) | 92.31 | 91.06 | 88.78 | 68.99 | 65.02 | 57.37 |
+| LT (reset to epoch 5) | 93.97 | 92.46 | 89.18 | 71.43 | 67.28 | 58.95 |
+| DSR (Mostafa & Wang, 2019) | 92.97 | 91.61 | 88.46 | 69.63 | 68.20 | 61.24 |
+| SET (Mocanu et al., 2018) | 92.30 | 90.76 | 88.29 | 69.66 | 67.41 | 62.25 |
+| Deep-R (Bellec et al., 2018) | 91.62 | 89.84 | 86.45 | 66.78 | 63.90 | 58.47 |
+| SNIP (Lee et al., 2018) | 92.59±0.10 | 91.01±0.21 | 87.51±0.31 | 68.89±0.45 | 65.22±0.69 | 54.81±1.43 |
+| GraSP | 92.38±0.21 | 91.39±0.25 | 88.81±0.14 | 69.24±0.24 | 66.50±0.11 | 58.43±0.43 |
+
+SNIP. We also include three Dynamic Sparse Training methods: DSR (Mostafa & Wang, 2019), SET (Mocanu et al., 2018) and Deep-R (Bellec et al., 2018). We further include two traditional pruning algorithms (LeCun et al., 1990; Zeng & Urtasun, 2019), which operate on pre-trained networks, to serve as an upper bound for foresight pruning. In addition, we study the convergence behavior of the sub-networks obtained by different pruning methods to investigate the relationship between gradient norm and final performance. Lastly, as an ablation study, we examine the role of initialization and batch size in the performance of GraSP.
+
+# 5.1 PRUNING RESULTS ON MODERN CONVNETS
+
+To evaluate the effectiveness of GraSP on real-world tasks, we test GraSP on four image classification datasets, CIFAR-10/100, Tiny-ImageNet and ImageNet, with two modern network architectures, VGGNet and ResNet. For the experiments on CIFAR-10/100 and Tiny-ImageNet, we use a mini-batch whose size is ten times the number of classes for both GraSP and SNIP, following Lee et al. (2018). The pruned network is trained with Kaiming initialization (He et al., 2015) using SGD for 160 epochs on CIFAR-10/100, and 300 epochs on Tiny-ImageNet, with an initial learning rate of 0.1 and batch size 128. The learning rate is decayed by a factor of 0.1 at $1/2$ and $3/4$ of the total number of epochs. Moreover, we run each experiment for 3 trials to obtain more stable results. For ImageNet, we adopt the PyTorch (Paszke et al., 2017) official implementation, but use more training epochs, following Liu et al. (2019). Specifically, we train the pruned networks with SGD for 150 epochs, and decay the learning rate by a factor of 0.1 every 50 epochs.
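
The CIFAR learning-rate schedule above (decay by $0.1\times$ at $1/2$ and $3/4$ of training) can be sketched as a small helper; the function name `lr_at` is ours, not from the paper's code:

```python
def lr_at(epoch, total_epochs=160, base_lr=0.1):
    """Step schedule: multiply the learning rate by 0.1 at 1/2 and 3/4 of training."""
    lr = base_lr
    if epoch >= total_epochs // 2:       # epoch 80 for a 160-epoch CIFAR run
        lr *= 0.1
    if epoch >= 3 * total_epochs // 4:   # epoch 120
        lr *= 0.1
    return lr

# 160-epoch CIFAR run: the rate steps down at epochs 80 and 120
schedule = [lr_at(e) for e in (0, 79, 80, 119, 120, 159)]
```

The ImageNet runs instead decay every 50 epochs over 150 epochs, which would be an analogous step function.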
+
+We first compare GraSP against random pruning, which generates the mask randomly for a given pruning ratio. The test accuracy is reported in Table 1. We can observe that GraSP clearly outperforms
+
+Table 4: Test accuracy of pruned VGG19 and ResNet32 on Tiny-ImageNet dataset. The bold number is the higher one between the accuracy of GraSP and that of SNIP.
+
+| Network | VGG19 | | | ResNet32 | | |
+| --- | --- | --- | --- | --- | --- | --- |
+| Pruning ratio | 90% | 95% | 98% | 85% | 90% | 95% |
+| VGG19/ResNet32 (Baseline) | 61.38 | - | - | 62.89 | - | - |
+| OBD (LeCun et al., 1990) | 61.21 | 60.49 | 54.98 | 58.55 | 56.80 | 51.00 |
+| MLPrune (Zeng & Urtasun, 2019) | 60.23 | 59.23 | 55.55 | 58.86 | 57.62 | 51.70 |
+| LT (original initialization) | 60.32 | 59.48 | 55.12 | 56.52 | 54.27 | 49.47 |
+| LT (reset to epoch 5) | 61.19 | 60.57 | 56.18 | 60.31 | 57.77 | 51.21 |
+| DSR (Mostafa & Wang, 2019) | 62.43 | 59.81 | 58.36 | 57.08 | 57.19 | 56.08 |
+| SET (Mocanu et al., 2018) | 62.49 | 59.42 | 56.22 | 57.02 | 56.92 | 56.18 |
+| Deep-R (Bellec et al., 2018) | 55.64 | 52.93 | 49.32 | 53.29 | 52.62 | 52.00 |
+| SNIP (Lee et al., 2018) | 61.02±0.41 | 59.27±0.39 | 48.95±1.73 | 56.33±0.24 | 55.43±0.14 | 49.57±0.44 |
+| GraSP | 60.76±0.23 | 59.50±0.33 | 57.28±0.34 | 57.25±0.11 | 55.53±0.11 | 51.34±0.29 |
+
+random pruning, and the performance gap can exceed $30\%$ in test accuracy. We further compare GraSP with more competitive baselines on CIFAR-10/100 and Tiny-ImageNet for pruning ratios $\{85\%, 90\%, 95\%, 98\%\}$; the results are reported in Tables 2 and 4. When the pruning ratio is low, i.e. $85\%$ or $90\%$, both SNIP and GraSP achieve results very close to the baselines, though, as expected, they still do not outperform pruning algorithms that operate on trained networks. However, GraSP achieves significantly better results at higher pruning ratios with more complicated networks (e.g. ResNet) and datasets (e.g. CIFAR-100 and Tiny-ImageNet), showing the advantage of directly relating the pruning criterion to the gradient flow after pruning. Moreover, in most cases, both SNIP and GraSP match or even slightly outperform LT with the original initialization, which indicates that both methods can identify meaningful structures. We further experiment with late resetting, as suggested in Frankle et al. (2019), by resetting the weights of the winning tickets to their values at epoch 5. Doing so boosts the performance of LT across all settings, consistent with Frankle et al. (2019). As for the comparisons with Dynamic Sparse Training (DST) methods, DSR performs best among them, yet GraSP remains competitive even without the flexibility to change the sparsity pattern during training. In particular, GraSP outperforms Deep-R in almost all settings, and surpasses SET in more than half of them.
+
+Table 3: Test accuracy of ResNet-50 and VGG16 on ImageNet with pruning ratios $60\%$, $80\%$ and $90\%$.
+
+| Pruning ratio | 60% | | 80% | | 90% | |
+| --- | --- | --- | --- | --- | --- | --- |
+| Accuracy | top-1 | top-5 | top-1 | top-5 | top-1 | top-5 |
+| ResNet-50 (Baseline) | 75.70 | 92.81 | - | - | - | - |
+| SNIP (Lee et al., 2018) | 73.95 | 91.97 | 69.67 | 89.24 | 61.97 | 82.90 |
+| GraSP | 74.02 | 91.86 | 72.06 | 90.82 | 68.14 | 88.67 |
+| VGG16 (Baseline) | 73.37 | 91.47 | - | - | - | - |
+| SNIP (Lee et al., 2018) | 72.95 | 91.39 | 69.96 | 89.71 | 65.27 | 86.14 |
+| GraSP | 72.91 | 91.18 | 71.65 | 90.58 | 69.94 | 89.48 |
+
+However, the above three datasets are relatively small-scale for drawing robust conclusions, so we conduct further experiments on ImageNet using ResNet-50 and VGG16 with pruning ratios $\{60\%, 80\%, 90\%\}$ to validate the effectiveness of GraSP. The results are shown in Table 3. When the pruning ratio is $60\%$, both SNIP and GraSP achieve performance very close to the unpruned baseline, and the two methods perform
+
+almost the same. However, as we increase the pruning ratio, GraSP surpasses SNIP by increasingly large margins. At a pruning ratio of $90\%$, GraSP beats SNIP by $6.2\%$ for ResNet-50 and $4.7\%$ for VGG16 in top-1 accuracy, demonstrating the advantage of GraSP at larger pruning ratios. We investigate the reasons behind this in Section 5.2 to obtain a better understanding of GraSP.
+
+# 5.2 ANALYSIS ON CONVERGENCE PERFORMANCE AND GRADIENT NORM
+
+Based on our analysis in Section 4.1 and the observations in the previous subsection, we conduct further experiments on CIFAR-100 with ResNet32 to investigate where the performance gains come from in the high-sparsity regime. We present the training and test loss curves in Figure 1. We observe that the main bottleneck for pruned neural networks is underfitting, which supports the optimization considerations behind our pruning criterion. As a result, the network pruned with GraSP achieves much lower training and test loss, and its training loss also decreases much faster than SNIP's.
+
+
+Figure 2: The gradient norm of ResNet32 after pruning on CIFAR-100 of various pruning ratios. Shaded area is the $95\%$ confidence interval calculated with 10 trials.
+
+
+Figure 1: The training and testing loss on CIFAR-100 of SNIP and GraSP with ResNet32 and a pruning ratio of $98\%$ .
+
+
+Figure 3: The portion of remaining weights at each layer after pruning with a pruning ratio of $95\%$ .
+
+We also plot the gradient norm of the pruned networks at initialization for ResNet32 on CIFAR-100 (Figure 2). The gradient norm is computed as the average of the gradients over the entire dataset, normalized so that the gradient norm of the original network is 1. We run each experiment for 10 trials to obtain more stable results. We observe that both SNIP and GraSP result in a lower gradient norm at high sparsity (e.g. $98\%$), but GraSP better preserves the gradient norm after pruning, and also yields better results than SNIP in the high-sparsity regime. Moreover, at these high sparsities, the pruned network usually underfits the training data, so optimization becomes the main obstacle. The randomly pruned network has a much lower gradient norm and performs the worst. In contrast, the network pruned by GraSP starts with a higher gradient norm, and thus more training progress can be made, as evidenced by our results in Figure 1.
+
+# 5.3 VISUALIZING THE NETWORK AFTER PRUNING
+
+To probe the differences between the networks pruned by SNIP and GraSP, we present the fraction of remaining weights at each layer of the sparse networks in Figure 3. We observe that the two pruning methods result in different pruning strategies. Specifically, GraSP aims to preserve the gradient flow after pruning, and thus does not prune any single layer too aggressively. Moreover, it is known that the convolutional layers at the top usually learn highly sparse features, and thus more of their weights can be pruned (Zhang et al., 2019a). As a result, both methods prune most of the weights in the top layers, but GraSP preserves more weights than SNIP because it accounts for gradient flow. In contrast, SNIP is more likely to prune nearly all the weights in the top layers, so those layers become bottlenecks blocking the information flow from the output layer to the input layer.
+
+# 5.4 EFFECT OF BATCH SIZE AND INITIALIZATION
+
+We also study how batch size and initialization affect GraSP on CIFAR-10 and CIFAR-100 with ResNet32, to demonstrate robustness to these hyperparameters. Specifically, we test GraSP with three different initialization methods: Kaiming normal (He et al., 2015), Normal $\mathcal{N}(0,0.1)$, and Xavier normal (Glorot &
+
+Table 5: Mean and standard deviation of the test accuracy on CIFAR-10 and CIFAR-100 with ResNet32.
+
+| Dataset | CIFAR-10 | | CIFAR-100 | |
+| --- | --- | --- | --- | --- |
+| Initialization | 60% | 90% | 60% | 90% |
+| Kaiming | 93.42 ± 0.39 | 92.12 ± 0.39 | 71.60 ± 0.65 | 68.93 ± 0.36 |
+| Normal | 93.31 ± 0.36 | 92.13 ± 0.36 | 71.48 ± 0.60 | 67.98 ± 0.83 |
+| Xavier | 93.32 ± 0.25 | 92.22 ± 0.50 | 71.10 ± 1.27 | 68.11 ± 0.93 |
+
+Bengio, 2010), as well as different mini-batch sizes. We present the mean and standard deviation
+
+of the test accuracy obtained with different initialization methods and by varying the batch sizes in Table 5. For CIFAR-10 and CIFAR-100, we use batch sizes $\{100,400,\dots ,25600,50000\}$ and $\{1000,4000,16000,32000,50000\}$ respectively. We observe that GraSP achieves reasonable performance across initialization methods, and the effect of batch size is minimal for networks pruned with Kaiming initialization, one of the most commonly adopted initialization schemes for training neural networks.
+
+# 6 DISCUSSION AND CONCLUSION
+
+We propose Gradient Signal Preservation (GraSP), a pruning criterion motivated by preserving the gradient flow through the network after pruning. It can also be interpreted as aligning the large eigenvalues of the Neural Tangent Kernel with the targets. Empirically, GraSP is able to prune the weights of a network at initialization while still performing competitively with traditional pruning algorithms, which require first training the network. More broadly, foresight pruning could enable training extremely large models that no existing GPU can fit, and could also reduce training cost by exploiting sparse matrix operations. Readers may notice that there is still a performance gap between GraSP and traditional pruning algorithms, and that LT with late resetting performs better than LT with the original initialization. However, this does not rule out the possibility that foresight pruning can match the performance of traditional pruning algorithms while still enjoying cheaper training. As evidence, Evci et al. (2019) show that there exists a linear and monotonically decreasing path from the sparse initialization to the solution found by pruning the fully-trained dense network, but current optimizers fail to find it. Therefore, designing better gradient-based optimizers that can exploit such paths is a promising direction to explore.
+
+# REFERENCES
+
+Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In Proceedings of the 36th International Conference on Machine Learning, pp. 322-332, 2019.
+Guillaume Bellec, David Kappel, Wolfgang Maass, and Robert Legenstein. Deep rewiring: Training very sparse deep networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=BJ_wN01C-.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255, 2009.
+Tim Dettmers and Luke Zettlemoyer. Sparse networks from scratch: Faster training without losing performance. arXiv preprint arXiv:1907.04840, 2019.
+Sourya Dey, Kuan-Wen Huang, Peter A Beerel, and Keith M Chugg. Pre-defined sparse neural networks with hardware acceleration. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2019.
+Xin Dong, Shangyu Chen, and Sinno Pan. Learning to prune deep neural networks via layer-wise optimal brain surgeon. In Advances in Neural Information Processing Systems, pp. 4857-4867, 2017.
+Utku Evci, Fabian Pedregosa, Aidan Gomez, and Erich Elsen. The difficulty of training sparse neural networks. arXiv preprint arXiv:1906.10732, 2019.
+Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJ1-b3RcF7.
+Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M Roy, and Michael Carbin. The lottery ticket hypothesis at scale. arXiv preprint arXiv:1903.01611, 2019.
+
+Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249-256, 2010.
+Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. International Conference on Learning Representations, 2015a.
+Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pp. 1135-1143, 2015b.
+Babak Hassibi, David G Stork, and Gregory J Wolff. Optimal brain surgeon and general network pruning. In IEEE international conference on neural networks, pp. 293-299. IEEE, 1993.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026-1034, 2015.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
+Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in neural information processing systems, pp. 8571-8580, 2018.
+Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, CiteSeer, 2009.
+Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in neural information processing systems, pp. 598-605, 1990.
+Jaehoon Lee, Lechao Xiao, Samuel S Schoenholz, Yasaman Bahri, Jascha Sohl-Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent. arXiv preprint arXiv:1902.06720, 2019a.
+Namhoon Lee, Thalaiyasingam Ajanthan, and Philip HS Torr. Snip: Single-shot network pruning based on connection sensitivity. International Conference on Learning Representations, 2018.
+Namhoon Lee, Thalaiyasingam Ajanthan, Stephen Gould, and Philip HS Torr. A signal propagation perspective for pruning neural networks at initialization. arXiv preprint arXiv:1906.06307, 2019b.
+Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. International Conference on Learning Representations, 2016.
+Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. International Conference on Learning Representations, 2019.
+Christos Louizos, Max Welling, and Diederik P Kingma. Learning sparse neural networks through $l_0$ regularization. International Conference on Learning Representations, 2018.
+Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H Nguyen, Madeleine Gibescu, and Antonio Liotta. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. Nature communications, 9(1):2383, 2018.
+Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Pruning convolutional neural networks for resource efficient inference. International Conference on Learning Representations, 2016.
+Hesham Mostafa and Xin Wang. Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization. International Conference on Machine Learning, 2019.
+
+Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. The role of over-parametrization in generalization of neural networks. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=BygfghAcYX.
+Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.
+Barak A Pearlmutter. Fast exact multiplication by the Hessian. Neural computation, 6(1):147-160, 1994.
+Ben Poole, Subhaneil Lahiri, Maithra Raghu, Jascha Sohl-Dickstein, and Surya Ganguli. Exponential expressivity in deep neural networks through transient chaos. In Advances in neural information processing systems, pp. 3360-3368, 2016.
+Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
+Nicol N Schraudolph. Fast curvature matrix-vector products for second-order gradient descent. Neural computation, 14(7):1723-1738, 2002.
+Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations, 2014.
+Suraj Srinivas and R Venkatesh Babu. Generalized dropout. arXiv preprint arXiv:1611.06791, 2016.
+Chaoqi Wang, Roger Grosse, Sanja Fidler, and Guodong Zhang. EigenDamage: Structured pruning in the Kronecker-factored eigenbasis. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pp. 6566-6575. PMLR, 2019. URL http://proceedings.mlr.press/v97/wang19g.html.
+Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel Schoenholz, and Jeffrey Pennington. Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks. In International Conference on Machine Learning, pp. 5393-5402, 2018.
+Ge Yang and Samuel Schoenholz. Mean field residual networks: On the edge of chaos. In Advances in neural information processing systems, pp. 7103-7114, 2017.
+Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016.
+Wenyuan Zeng and Raquel Urtasun. MLPrune: Multi-layer pruning for automated neural network compression, 2019. URL https://openreview.net/forum?id=r1g5b2RcKm.
+Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. International Conference on Learning Representations, 2016.
+Chiyuan Zhang, Samy Bengio, and Yoram Singer. Are all layers created equal? arXiv preprint arXiv:1902.01996, 2019a.
+Guodong Zhang, James Martens, and Roger Grosse. Fast convergence of natural gradient descent for overparameterized neural networks. arXiv preprint arXiv:1905.10961, 2019b.
\ No newline at end of file
diff --git a/pickingwinningticketsbeforetrainingbypreservinggradientflow/images.zip b/pickingwinningticketsbeforetrainingbypreservinggradientflow/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..afe9ba065c4f6590acc10b6e60fc258a9e5779d1
--- /dev/null
+++ b/pickingwinningticketsbeforetrainingbypreservinggradientflow/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:57d06d79a0c9381e4e32d3235396a86da2f8e535f2ca188ba82e8a2696b06882
+size 426213
diff --git a/pickingwinningticketsbeforetrainingbypreservinggradientflow/layout.json b/pickingwinningticketsbeforetrainingbypreservinggradientflow/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a1494d4c6be85e0ca79393b8b556ee638b7760bb
--- /dev/null
+++ b/pickingwinningticketsbeforetrainingbypreservinggradientflow/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:90f3f58db7727cf1b06ad11d0b39b0901b29e06903d88bb350f3c8ca7b2576c4
+size 353592
diff --git a/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/f6f6d0f8-33c4-46b6-bd43-c3ce8b361d92_content_list.json b/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/f6f6d0f8-33c4-46b6-bd43-c3ce8b361d92_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e1aacff74fff71707c3d9fdc34c394e2cace13fa
--- /dev/null
+++ b/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/f6f6d0f8-33c4-46b6-bd43-c3ce8b361d92_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fec72b1d90a43def492f1de31f6ee15d28d2e7051750adee56ef29c8a2b0d3ea
+size 333924
diff --git a/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/f6f6d0f8-33c4-46b6-bd43-c3ce8b361d92_model.json b/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/f6f6d0f8-33c4-46b6-bd43-c3ce8b361d92_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7d87b859db6ffeb352f72f7d69e5d3f5d9416ced
--- /dev/null
+++ b/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/f6f6d0f8-33c4-46b6-bd43-c3ce8b361d92_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a53c2d4000eb08bbda3db029e5a3cfe15ea9f9ee1677d18814befd4117459d3c
+size 380148
diff --git a/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/f6f6d0f8-33c4-46b6-bd43-c3ce8b361d92_origin.pdf b/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/f6f6d0f8-33c4-46b6-bd43-c3ce8b361d92_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..153cdd5810ca2a64093d38f19de9ea1b0b92c21a
--- /dev/null
+++ b/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/f6f6d0f8-33c4-46b6-bd43-c3ce8b361d92_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:92e1b77a86aa4e1bbea158e6936c3937e780accd568b593bffb35403194ba4f8
+size 497225
diff --git a/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/full.md b/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4e03bd31e8a1a3081b5c2467dd82da08e726839a
--- /dev/null
+++ b/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/full.md
@@ -0,0 +1,1924 @@
+# PIECEWISE LINEAR ACTIVATIONS SUBSTANTIALLY SHAPE THE LOSS SURFACES OF NEURAL NETWORKS
+
+Fengxiang He*, Bohan Wang*† & Dacheng Tao
+
+UBTECH Sydney AI Centre, School of Computer Science, Faculty of Engineering
+
+The University of Sydney
+
+Darlington, NSW 2008, Australia
+
+{fengxiang.he, dacheng.tao}@sydney.edu.au, bhwangfy@gmail.com
+
+# ABSTRACT
+
+Understanding the loss surface of a neural network is fundamentally important to the understanding of deep learning. This paper presents how piecewise linear activation functions substantially shape the loss surfaces of neural networks. We first prove that the loss surfaces of many neural networks have infinitely many spurious local minima, which are defined as local minima with higher empirical risks than the global minima. Our result demonstrates that networks with piecewise linear activations differ substantially from the well-studied linear neural networks. This result holds for any neural network with arbitrary depth and arbitrary piecewise linear activation functions (excluding linear functions) under most loss functions used in practice. Essentially, the underlying assumptions are consistent with most practical circumstances, where the output layer is narrower than any hidden layer. In addition, the loss surface of a neural network with piecewise linear activations is partitioned into multiple smooth and multilinear cells by nondifferentiable boundaries. The constructed spurious local minima are concentrated in one cell as a valley: they are connected by a continuous path on which the empirical risk is invariant. Further, for one-hidden-layer networks, we prove that all local minima in a cell constitute an equivalence class; they are concentrated in a valley; and they are all global minima in the cell.
+
+# 1 INTRODUCTION
+
+Neural networks have been successfully deployed in many real-world applications (LeCun et al., 2015; Witten et al., 2016; Silver et al., 2016; He et al., 2016; Litjens et al., 2017). In spite of this, the theoretical foundations of neural networks remain immature. Among the many gaps in our knowledge of deep learning theory, the investigation of the loss surfaces of neural networks is of fundamental importance. Understanding the loss surface would benefit several related research areas, such as estimating data distributions, optimizing neural networks, and generalizing to unseen data.
+
+This paper studies the role of the nonlinearities in activation functions in shaping the loss surfaces of neural networks. Our results demonstrate that the impact of nonlinearities is profound.
+
+First, we prove that the loss surfaces of nonlinear neural networks are substantially different from those of linear neural networks, in which all local minima are equally good and are in fact global minima (Kawaguchi, 2016; Baldi & Hornik, 1989; Lu & Kawaguchi, 2017; Freeman & Bruna, 2017; Zhou & Liang, 2018; Laurent & von Brecht, 2018; Yun et al., 2018). By contrast,
+
+Neural networks with arbitrary depth and arbitrary piecewise linear activations (excluding linear functions) have infinitely many spurious local minima under arbitrary continuously differentiable loss functions.
+
+This result only relies on four mild assumptions that cover most practical circumstances: (1) the training sample set is linearly inseparable; (2) all training sample points are distinct; (3) the output layer is narrower than all hidden layers; and (4) there exists some turning point in the piecewise linear activations at which the sum of the slopes on the two sides does not equal 0.
+
+Our result significantly extends the existing studies on the existence of spurious local minima. For example, Zhou & Liang (2018) prove that one-hidden-layer neural networks with two nodes in the hidden layer and two-piece linear (ReLU-like) activations have spurious local minima; Swirszcz et al. (2016) prove that ReLU networks have spurious local minima under the squared loss when most of the neurons are not activated; Safran & Shamir (2018) present a computer-assisted proof that two-layer ReLU networks have spurious local minima; a recent work (Yun et al., 2019b) proves that neural networks with two-piece linear activations have infinitely many spurious local minima, but the result only applies to networks with one hidden layer and one-dimensional outputs; and a concurrent work (Goldblum et al., 2020) proves that, for multi-layer perceptrons of any depth, the performance of every local minimum on the training data equals that of a linear model, which is also verified by experiments.
+
+The proposed theorem is proved in three stages: (1) we prove that neural networks with one hidden layer and two-piece linear activations have spurious local minima; (2) we extend the conditions to neural networks with arbitrary hidden layers and two-piece linear activations; and (3) we further extend the conditions to neural networks with arbitrary depth and arbitrary piecewise linear activations. Since some parameters of the constructed spurious local minima are drawn from continuous intervals, we obtain infinitely many spurious local minima. At each stage, the proof follows a two-step strategy: (a) construct an infinite series of local minima; and (b) construct a point in the parameter space whose empirical risk is lower than that of the local minima constructed in Step (a). This strategy is inspired by Yun et al. (2019b), but we have made significant and non-trivial developments.
+
+Second, we draw a "big picture" of the loss surfaces of nonlinear neural networks. Soudry & Hoffer (2018) highlight a smooth and multilinear partition of the loss surfaces of neural networks. The nonlinearities in the piecewise linear activations partition the loss surface of any nonlinear neural network into multiple smooth and multilinear open cells. Specifically, every nonlinear point in the activation functions creates a group of non-differentiable boundaries between the cells, while the linear parts of the activations correspond to the smooth and multilinear interiors. Based on this partition, we discover a degenerate nature of the large number of local minima from the following aspects:
+
+- Every local minimum is globally minimal within a cell. This property demonstrates that the local geometry within every cell is similar to the global geometry of linear networks, although technically, they are substantially different. It applies to any one-hidden-layer neural network with two-piece linear activations for regression under convex loss. We rigorously prove this property in two stages: (1) we prove that within every cell, the empirical risk $\hat{\mathcal{R}}$ is convex with respect to a variable $\hat{W}$ mapped from the weights $W$ by a mapping $Q$ . Therefore, the local minima with respect to the variable $\hat{W}$ are also the global minima in the cell; and then (2) we prove that the local optimality is maintained under the constructed mapping. Specifically, the local minima of the empirical risk $\hat{\mathcal{R}}$ with respect to the parameter $W$ are also the local minima with respect to the variable $\hat{W}$ . We thereby prove this property by combining the convexity and the correspondence of the minima. This proof is technically novel and non-trivial, though the intuitions are natural.
+
+- Equivalence classes and quotient space of local minimum valleys. All local minima in a cell are concentrated as a local minimum valley: on a local minimum valley, all local minima are connected with each other by a continuous path, on which the empirical risk is invariant. Further, all these local minima constitute an equivalence class. This local minimum valley may have several parallel valleys that are in the same equivalence class but do not appear because of the constraints imposed by cell boundaries. If these constraints are ignored, all the equivalence classes constitute a quotient space. The constructed mapping $\mathcal{Q}$ is exactly the quotient map. This result coincides with the property of mode connectivity that the minima found by gradient-based methods are connected by a path in the parameter space with almost invariant empirical risk (Garipov et al., 2018; Draxler et al., 2018; Kuditipudi et al., 2019). Additionally, this property suggests that we should study every local minimum valley as a whole.
+
+- Linear collapse. Linear neural networks are covered by our theories as a simplified case. When all activations are linear, the partitioned loss surface collapses to one single cell, in which all local minima are globally optimal, as suggested by the existing works on linear networks (Kawaguchi, 2016; Baldi & Hornik, 1989; Lu & Kawaguchi, 2017; Freeman & Bruna, 2017; Zhou & Liang, 2018; Laurent & von Brecht, 2018; Yun et al., 2018).
+
+Notations. If $M$ is a matrix, $M_{i,j}$ denotes the $(i,j)$ -th component of $M$ . If $M$ is a vector, $M_{i}$ denotes the $i$ -th component of $M$ . Define $E_{ij}$ as a matrix whose $(i,j)$ -th component is 1 while all other components are 0. Also, denote by $\pmb{e}_i$ a vector whose $i$ -th component is 1 while all others are 0. Additionally, we define $\mathbf{1}_k \in \mathbb{R}^{k \times 1}$ as a vector whose components are all 1, while those of $\mathbf{0}_{n \times m} \in \mathbb{R}^{n \times m}$ (or briefly, $\mathbf{0}$ ) are all 0. For brevity, $[i:j]$ denotes $\{i,\dots ,j\}$ .
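As a quick sanity check, the basis objects above can be sketched in NumPy (the helper names `E` and `e` are ours, introduced only for illustration; indices are 0-based here, unlike the paper's 1-based notation):

```python
import numpy as np

def E(i, j, n, m):
    # E_{ij}: n x m matrix whose (i, j)-th component is 1, all others 0.
    M = np.zeros((n, m))
    M[i, j] = 1.0
    return M

def e(i, n):
    # e_i: vector whose i-th component is 1, all others 0.
    v = np.zeros(n)
    v[i] = 1.0
    return v

ones_3 = np.ones((3, 1))     # 1_3 in R^{3 x 1}
zeros_24 = np.zeros((2, 4))  # 0_{2 x 4}
```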
+
+# 2 RELATED WORK
+
+Some works suggest that linear neural networks have no spurious local minima. Kawaguchi (2016) proves that linear neural networks with squared loss do not have any spurious local minimum under three assumptions about the data matrix $X$ and the label matrix $Y$ : (1) both matrices $XX^T$ and $XY^T$ have full rank; (2) the input layer is wider than the output layer; and (3) the eigenvalues of the matrix $YX^\top (XX^T)^{-1}XY^T$ are distinct from each other. Zhou & Liang (2018) give an analytic formulation of the critical points of the loss function of deep linear networks, and thereby obtain a group of equivalent conditions for a critical point to be a global minimum. Lu & Kawaguchi (2017) prove the argument under the assumption that both matrices $X$ and $Y$ have full rank, which is even more restrictive. However, in practice, the activations of most neural networks are not linear. The nonlinearities make the loss surface extremely non-convex and even non-smooth, and therefore far different from the linear case.
+
+The loss surfaces of over-parameterized neural networks have some special properties. Choromanska et al. (2015) empirically suggest that: (1) most local minima of over-parameterized networks are equivalent; and (2) small-size networks have spurious local minima but the probability of finding one decreases rapidly with the network size. Li et al. (2018) prove that over-parameterized fully-connected deep neural networks with continuous activation functions and convex, differentiable loss functions, have no bad strict local minimum. Nguyen et al. (2019) suggest that "sufficiently overparameterized" neural networks have no bad local valley under the cross-entropy loss. Nguyen (2019) further suggests that the global minima of sufficiently over-parameterized neural networks are connected within a unique valley. Many other works study the convergence, generalization, and other properties of stochastic gradient descent on the loss surfaces of over-parameterized networks (Chizat & Bach; Arora et al., 2018; Brutzkus et al., 2018; Du et al., 2019; Soltanolkotabi et al., 2018; Allen-Zhu et al., 2019a,b; Oymak & Soltanolkotabi, 2019).
+
+Many advances on the loss surfaces of neural networks are focused on other problems. Zhou & Feng (2018) and Mei et al. (2018) prove that the empirical risk surface and the expected risk surface are linked. This correspondence highlights the value of investigating loss surfaces (empirical risk surfaces) for the study of generalization (the gap between empirical risks and expected risks). Hanin & Rolnick (2019) demonstrate that the input space of neural networks with piecewise linear activations is partitioned into multiple regions, while our work focuses on the partition of the loss surface. Xie et al. (2017) prove that the training error and test error are upper bounded by the magnitude of the gradient, under the assumption that the geometry discrepancy of the parameter $W$ is bounded. Sagun et al. (2016; 2018) present empirical results that the eigenvalues of the Hessian of the loss surface are two-fold: (1) a bulk centered close to zero; and (2) outliers away from the bulk. Kawaguchi & Kaelbling (2020) prove that one can eliminate the spurious local minima by adding one unit per output unit for almost any neural network in practice. Tian (2017); Andrychowicz et al. (2016); Soltanolkotabi (2017); Zhong et al. (2017); Brutzkus & Globerson (2017); Li & Yuan (2017); Zou et al. (2019); Li & Liang (2018); Du et al. (2018a; 2019); Zhang et al. (2019b); Zhou et al. (2019); Wang et al. (2019) study optimization methods for neural networks. Other relevant works include Sagun et al. (2016; 2018); Nguyen & Hein (2018); Du et al. (2018b); Haeffele & Vidal (2017); Liang et al. (2018); Wu et al. (2018); Yun et al. (2019a); Zhang et al. (2019a); Kuditipudi et al. (2019); Garipov et al. (2018); Draxler et al. (2018); He et al. (2019); Kawaguchi & Kaelbling (2020).
+
+# 3 NEURAL NETWORK HAS INFINITE SPURIOUS LOCAL MINIMA
+
+This section investigates the existence of spurious local minima on the loss surfaces of neural networks. We find that almost all practical neural networks have infinitely many spurious local minima. This result holds for any neural network with arbitrary depth and arbitrary piecewise linear activations (excluding linear functions) under arbitrary continuously differentiable loss.
+
+# 3.1 PRELIMINARIES
+
+Consider a training sample set $\{(X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)\}$ of size $n$ . Suppose the dimensions of feature $X_i$ and label $Y_i$ are $d_X$ and $d_Y$ , respectively. By aggregating the training sample set, we obtain the feature matrix $X \in \mathbb{R}^{d_X \times n}$ and label matrix $Y \in \mathbb{R}^{d_Y \times n}$ .
+
+Suppose a neural network has $L$ layers. Denote the weight matrix, bias, and activation in the $j$ -th layer respectively by $W_{j} \in \mathbb{R}^{d_{j} \times d_{j-1}}$ , $b_{j} \in \mathbb{R}^{d_{j}}$ , and $h_{j}: \mathbb{R}^{d_{j} \times n} \to \mathbb{R}^{d_{j} \times n}$ , where $d_{j}$ is the dimension of the output of the $j$ -th layer. Also, for the input matrix $X$ , the output of the $j$ -th layer is denoted by $Y^{(j)}$ and the output of the $j$ -th layer before the activation is denoted by $\tilde{Y}^{(j)}$ ,
+
+$$
+\tilde{Y}^{(j)} = W_{j} Y^{(j-1)} + b_{j} \mathbf{1}_{n}^{T}, \tag {1}
+$$
+
+$$
+Y^{(j)} = h_{j}\left(W_{j} Y^{(j-1)} + b_{j} \mathbf{1}_{n}^{T}\right). \tag {2}
+$$
+
+The output of the network is defined as follows,
+
+$$
+\hat {Y} = h _ {L} \left(W _ {L} h _ {L - 1} \left(W _ {L - 1} h _ {L - 2} \left(\dots h _ {1} \left(W _ {1} X + b _ {1} \mathbf {1} _ {n} ^ {T}\right) \dots\right) + b _ {L - 1} \mathbf {1} _ {n} ^ {T}\right) + b _ {L} \mathbf {1} _ {n} ^ {T}\right). \tag {3}
+$$
+
+Also, we define $Y^{(0)} = X$ , $Y^{(L)} = \hat{Y}$ , $d_0 = d_X$ , and $d_L = d_Y$ . In some situations, we use $\hat{Y}\left([W_i]_{i=1}^L, [b_i]_{i=1}^L\right)$ to clarify the parameters, as well as $\tilde{Y}^{(j)}, Y^{(j)}$ , etc.
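The recursion in Eqs. (1)-(3) can be sketched in a few lines of NumPy; the layer sizes, random weights, and the ReLU choice of activation below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def forward(X, Ws, bs, h):
    """Compute hat{Y} per Eqs. (1)-(3). X: d_0 x n; Ws[j]: d_{j+1} x d_j."""
    Y = X                                          # Y^{(0)} = X
    n = X.shape[1]
    for W, b in zip(Ws, bs):
        Y_tilde = W @ Y + np.outer(b, np.ones(n))  # Eq. (1): W_j Y^{(j-1)} + b_j 1_n^T
        Y = h(Y_tilde)                             # Eq. (2): element-wise activation
    return Y                                       # Y^{(L)} = hat{Y}

relu = lambda z: np.maximum(z, 0.0)

# Toy usage: a 3 -> 4 -> 2 network on n = 5 examples.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 5))
Ws = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
bs = [rng.normal(size=4), rng.normal(size=2)]
Y_hat = forward(X, Ws, bs, relu)  # shape (2, 5)
```

For simplicity the same activation `h` is passed to every layer, whereas Eq. (3) allows a distinct $h_j$ per layer.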
+
+This section discusses neural networks with piecewise linear activations. Part of the proof uses two-piece linear activations $h_{s_{-},s_{+}}$ , which are defined as follows,
+
+$$
+h _ {s _ {-}, s _ {+}} (x) = \mathbf {I} _ {\{x \leq 0 \}} s _ {-} x + \mathbf {I} _ {\{x > 0 \}} s _ {+} x, \tag {4}
+$$
+
+where $|s_{+}| \neq |s_{-}|$ and $\mathbf{I}_{\{\cdot\}}$ is the indicator function.
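Eq. (4) covers the ReLU family; a direct NumPy rendering (with `h_two_piece` as our illustrative name) is:

```python
import numpy as np

def h_two_piece(x, s_minus, s_plus):
    # Eq. (4): slope s_- where x <= 0 and slope s_+ where x > 0.
    return np.where(x <= 0, s_minus * x, s_plus * x)

x = np.array([-2.0, 0.0, 3.0])
relu_vals = h_two_piece(x, 0.0, 1.0)    # ReLU is h_{0,1}
leaky_vals = h_two_piece(x, 0.01, 1.0)  # leaky ReLU is h_{0.01,1}
```

Note that the condition $|s_{+}| \neq |s_{-}|$ rules out both linear maps ($s_{+} = s_{-}$) and absolute-value-like functions ($s_{+} = -s_{-}$).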
+
+Remark. Piecewise linear functions are dense in the space of continuous functions. In other words, any continuous function can be approximated by a piecewise linear function with arbitrarily small error.
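This density claim is easy to check numerically on a compact interval: the piecewise linear interpolant of a continuous function on increasingly fine grids drives the sup-norm error toward zero (the choice of `sin` and the knot counts below are illustrative):

```python
import numpy as np

def pl_sup_error(f, a, b, k, n_test=10_000):
    """Sup-norm gap between f and its piecewise linear interpolant on k knots."""
    knots = np.linspace(a, b, k)
    xs = np.linspace(a, b, n_test)
    approx = np.interp(xs, knots, f(knots))  # piecewise linear interpolation
    return np.max(np.abs(f(xs) - approx))

err_5 = pl_sup_error(np.sin, 0.0, 2 * np.pi, 5)    # coarse: few linear pieces
err_50 = pl_sup_error(np.sin, 0.0, 2 * np.pi, 50)  # fine: many linear pieces
```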
+
+This section uses continuously differentiable loss to evaluate the performance of neural networks. Continuous differentiability is defined as follows.
+
+Definition 1 (Continuously differentiable). We call a function $f: \mathbb{R}^n \to \mathbb{R}$ continuously differentiable with respect to the variable $x$ if: (1) the function $f$ is differentiable with respect to $x$ ; and (2) the gradient $\nabla_x f(x)$ of the function $f$ is continuous with respect to the variable $x$ .
+
+# 3.2 MAIN RESULT
+
+The theorem in this section relies on the following assumptions.
+
+Assumption 1. The training data cannot be fit by a linear model.
+
+Assumption 2. All data points are distinct.
+
+Assumption 3. All hidden layers are wider than the output layer.
+
+Assumption 4. For the piecewise linear activations, there exists some turning point at which the sum of the slopes on the two sides does not equal 0.
+
+To the best of our knowledge, our assumptions are the least restrictive among the relevant works in the literature. These assumptions are respectively justified as follows: (1) most real-world datasets are extremely complex and cannot be fit by linear models; (2) it is easy to guarantee that the data points are distinct by employing data cleansing methods; (3) for regression and many classification tasks, the width of the output layer is limited and narrower than the hidden layers; and (4) this assumption is invalid only for activations like $f(x) = a|x|$ .
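Assumption 1 can be checked directly on a dataset by fitting the best affine model with least squares and testing whether the residual vanishes; the helper `linearly_fittable` and the XOR-style example below are ours, for illustration only:

```python
import numpy as np

def linearly_fittable(X, Y, tol=1e-8):
    """X: d_X x n features, Y: d_Y x n labels. True iff some affine map fits exactly."""
    n = X.shape[1]
    X_aug = np.vstack([X, np.ones((1, n))])            # append a bias row
    W, *_ = np.linalg.lstsq(X_aug.T, Y.T, rcond=None)  # least-squares affine fit
    return bool(np.linalg.norm(Y - W.T @ X_aug) < tol)

# XOR-style labels cannot be fit by any affine model, so Assumption 1 holds here.
X = np.array([[0.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 0.0, 1.0]])
Y = np.array([[0.0, 1.0, 1.0, 0.0]])
```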
+
+Based on these four assumptions, we can prove the following theorem.
+
+Theorem 1. Neural networks with arbitrary depth and arbitrary piecewise linear activations (excluding linear functions) have infinitely many spurious local minima under arbitrary continuously differentiable loss whose derivative can equal 0 only when the prediction and label are the same.
+
+In practice, most loss functions are continuously differentiable and their derivatives equal 0 only when the prediction and label are the same, such as the squared loss and the cross-entropy loss (see Appendix A.1, Lemmas 2 and 3). The squared loss is a standard loss for regression and is defined as half the squared norm of the difference between the ground-truth label and the prediction, as follows.
+
+$$
+l _ {2} \left(Y _ {i}, \hat {Y} _ {i}\right) = \frac {1}{2} \left\| Y _ {i} - \hat {Y} _ {i} \right\| _ {F} ^ {2}. \tag {5}
+$$
+
+Meanwhile, the cross-entropy loss is the standard loss for multiclass classification and is defined as follows. Here, we treat the softmax function as part of the loss function.
+
+$$
+l _ {c e} \left(Y _ {i}, \hat {Y} _ {i}\right) = - \sum_ {j = 1} ^ {d _ {Y}} Y _ {i, j} \log \left(\frac {\hat {Y} _ {i , j}}{\sum_ {k = 1} ^ {d _ {Y}} \hat {Y} _ {i , k}}\right). \tag {6}
+$$
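Implemented literally, Eqs. (5) and (6) read as follows; note that Eq. (6) normalizes the raw outputs $\hat{Y}_{i,k}$ (assumed positive here), since the softmax normalization is folded into the loss:

```python
import numpy as np

def squared_loss(Y_i, Y_hat_i):
    # Eq. (5): half the squared norm of the residual.
    return 0.5 * np.sum((Y_i - Y_hat_i) ** 2)

def cross_entropy_loss(Y_i, Y_hat_i):
    # Eq. (6): normalize hat{Y}_i across its d_Y components, then take the
    # negative log-likelihood weighted by the label Y_i.
    p = Y_hat_i / np.sum(Y_hat_i)
    return -np.sum(Y_i * np.log(p))
```

The gradient of the squared loss, $\hat{Y}_i - Y_i$, vanishes only when the prediction equals the label, matching the condition in Theorem 1.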
+
+One can also remove Assumption 4 if Assumption 3 is replaced by the following assumption, which is mildly more restrictive (see the detailed proof on pp. 34-37).
+
+Assumption 5. The dimensions of the layers satisfy that:
+
+$$
+d _ {1} \geq d _ {Y} + 2,
+$$
+
+$$
+d _ {i} \geq d _ {Y} + 1, i = 2, \dots , L - 1.
+$$
+
+Our result demonstrates that introducing nonlinearities into activations substantially reshapes the loss surface: the nonlinearities bring infinitely many spurious local minima onto the loss surface. This result highlights a substantial difference from linear neural networks, in which all local minima are equally good and are therefore all global minima (Kawaguchi, 2016; Baldi & Hornik, 1989; Lu & Kawaguchi, 2017; Freeman & Bruna, 2017; Zhou & Liang, 2018; Laurent & von Brecht, 2018; Yun et al., 2018).
+
+Some works have noticed the existence of spurious local minima on the loss surfaces of nonlinear neural networks, although each has a limited applicable domain (Choromanska et al., 2015; Swirszcz et al., 2016; Safran & Shamir, 2018; Yun et al., 2019b). A notable work by Yun et al. (2019b) proves that one-hidden-layer neural networks with two-piece linear (ReLU-like) activations for one-dimensional regression have infinitely many spurious local minima under the squared loss. That work first constructs a series of local minima and then proves that they are spurious. This idea inspires part of our work. However, our work makes significant and non-trivial developments that extend the conditions to arbitrary depth, piecewise linear activations excluding linear functions, and continuously differentiable loss.
+
+# 3.3 PROOF SKELETON
+
+This section presents the skeleton of the proof. Theorem 1 is proved in three stages. We first prove a simplified version of Theorem 1 and then extend the conditions in the last two stages. The proof is partially inspired by Yun et al. (2019b), but this paper makes nontrivial developments and significantly extends the results.
+
+Yun et al. (2019b) and our paper both employ the following strategy: (a) construct a series of local minima based on a linear classifier; and (b) construct a new point with smaller empirical risk and thereby we prove that the constructed local minima are spurious. However, due to the differences in the loss function and the output dimensions, the exact constructions of local minima are substantially different.
+
+Our extensions from Yun et al. (2019b) are three-fold: (1) From one hidden layer to arbitrary depth: To prove that networks with arbitrary depth have infinitely many spurious local minima, we develop a novel strategy that employs transformation operations to force the data flow through the same linear parts of the activations, in order to construct the spurious local minima; (2) From squared loss to arbitrary differentiable loss: Yun et al. (2019b) calculate the analytic forms of the derivatives of the loss to construct the local minima and then prove they are spurious. This technique cannot be transplanted to the case of arbitrary differentiable loss functions, because we cannot assume an analytic form. To prove that the loss surface under an arbitrary differentiable loss has infinitely many spurious local minima, we employ a new proof technique based on Taylor series and a new separation lemma; and (3) From one-dimensional output to arbitrary-dimensional output: To prove that the loss surface of a neural network with an arbitrary-dimensional output has infinitely many spurious local minima, we need to deal with the calculus of functions whose domain and codomain are a matrix space and a vector space, respectively. By contrast, when the output dimension is one, the codomain is only the space of real numbers. Therefore, extending the output dimension significantly increases the difficulty of the whole proof.
+
+# Stage (1): Neural networks with one hidden layer and two-piece linear activations.
+
+We first prove that nonlinear neural networks with one hidden layer and two-piece linear activation functions (ReLU-like activations) have spurious local minima. The proof in this stage further follows a two-step strategy:
+
+(a) We first construct local minima of the empirical risk $\hat{\mathcal{R}}$ (see Appendix A.2, Lemma 4). These local minimizers are constructed based on a linear neural network which has the same network size (dimensions of weight matrices) and is evaluated under the same loss. The design of the hidden layer guarantees that the components of the output $\tilde{Y}^{(1)}$ of the hidden layer before the activation are all positive. The activation is thus effectively reduced to a linear function. Therefore, the local geometry around the local minima with respect to the weights $W$ is similar to that of linear neural networks. Further, the design of the output layer guarantees that its output $\hat{Y}$ is the same as that of the linear neural network. This construction helps to utilize the results on linear neural networks to solve the problems in nonlinear neural networks.
+
+(b) We then prove that all the constructed local minima in Step (a) are spurious (see Appendix A.2, Theorem 4). Specifically, Assumption 1 states that the dataset cannot be fit by a linear model. Therefore, the gradient $\nabla_{\hat{Y}}\hat{\mathcal{R}}$ of the empirical risk $\hat{\mathcal{R}}$ with respect to the prediction $\hat{Y}$ is not zero. Suppose the $i$ -th row of the gradient $\nabla_{\hat{Y}}\hat{\mathcal{R}}$ is not zero. Then, we use Taylor series and a preparation lemma (see Appendix A.5, Lemma 7) to construct another point in the parameter space that has a smaller empirical risk. Therefore, we prove that the constructed local minima are spurious. Furthermore, the constructions involve some parameters that are randomly picked from a continuous interval. Thus, we construct infinitely many spurious local minima.
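The key mechanism in Step (a), forcing all pre-activations positive so that ReLU acts as the identity on the data, can be sketched numerically; the matrices below are illustrative, not the ones constructed in the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 10))                    # d_X = 3, n = 10
A = rng.normal(size=(1, 3))                     # a linear model to imitate, d_Y = 1

W1 = np.vstack([A, np.zeros((4, 3))])           # hidden width 5 > d_Y (Assumption 3)
b1 = (np.abs(W1 @ X).max() + 1.0) * np.ones(5)  # bias large enough to shift
pre = W1 @ X + b1[:, None]                      # pre-activations tilde{Y}^{(1)}
assert np.all(pre > 0)                          # every example in ReLU's linear part
Y1 = np.maximum(pre, 0.0)                       # ReLU reduces to the identity here

W2 = np.array([[1.0, 0.0, 0.0, 0.0, 0.0]])      # read out the first hidden unit
b2 = -b1[:1]                                    # undo the positive shift
Y_hat = W2 @ Y1 + b2[:, None]
assert np.allclose(Y_hat, A @ X)                # network output = linear model output
```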
+
+# Stage (2) - Neural networks with arbitrary hidden layers and two-piece linear activations.
+
+We extend the condition in Stage (1) to any neural network with arbitrary depth and two-piece linear activations. The proof in this stage follows the same two-step strategy but has different implementations:
+
+(a) We first construct a series of local minima of the empirical risk $\hat{\mathcal{R}}$ (see Appendix A.3, Lemma 5). The construction guarantees that every component of the output $\tilde{Y}^{(i)}$ in each layer before the activations is positive, which ensures that all input examples flow through the same part of the activations. Thereby, the nonlinear activations are reduced to linear functions. Also, our construction guarantees that the output $\hat{Y}$ of the network is the same as that of a linear network with the same weight matrix dimensions.
+(b) We then prove that the constructed local minima are spurious (see Appendix A.3, Theorem 5). The idea is to find a point in the parameter space that has the same empirical risk $\hat{\mathcal{R}}$ as the point constructed in Stage (1), Step (b).
+
+# Stage (3) - Neural networks with arbitrary hidden layer and piecewise linear activations.
+
+We further extend the conditions in Stage (2) to any neural network with arbitrary depth and arbitrary piecewise linear activations. We continue to adapt the two-step strategy in this stage:
+
+(a) We first construct a local minimizer of the empirical risk $\hat{\mathcal{R}}$ based on the results in Stages (1) and (2) (see Appendix A.4, Lemma 6). This construction is based on Stage (2), Step (a). The difference in this stage is that every linear part of the activations can be a finite interval. The constructed weight matrices apply several uniform scaling and translation operations to the outputs of the hidden layers in order to guarantee that all input training sample points flow through the same linear parts of the activations. We thereby effectively reduce the nonlinear activations to linear functions. Also, our construction guarantees that the output $\hat{Y}$ of the neural network equals that of the corresponding linear neural network.
+
+(b) We then prove that the constructed local minima are spurious (see Appendix A.4). We use the same strategy as in Stage (2), Step (b), with some adaptations for the new conditions.
+
+# 4 A BIG PICTURE OF THE LOSS SURFACE
+
+This section draws a big picture for the loss surfaces of neural networks. Based on a recent result by Soudry & Hoffer (2018), we present four profound properties of the loss surface that collectively characterize how the nonlinearities in activations shape the loss surface.
+
+# 4.1 PRELIMINARIES
+
+The discussions in this section use the following concepts.
+
+Definition 2 (Open ball and open set). The open ball in $\mathcal{H}$ centered at $h\in \mathcal{H}$ and of radius $r > 0$ is defined by $B(h,r) = \{x \in \mathcal{H} :\| x - h\| < r\}$ . A subset $A\subset \mathcal{H}$ of a space $\mathcal{H}$ is called an open set if, for every point $h\in A$ , there exists a positive real $r > 0$ such that the open ball $B(h,r)$ with center $h$ and radius $r$ is contained in $A$ : $B(h,r)\subset A$ .
+
+Definition 3 (Interior point and interior). For a subset $A \subset \mathcal{H}$ of a space $\mathcal{H}$ , a point $h \in A$ is called an interior point of $A$ , if there exists a positive real $r > 0$ , such that the open ball $B(h, r)$ with center $h$ and radius $r$ is in the subset $A$ : $B(h, r) \subset A$ . The set of all the interior points of the set $A$ is called the interior of the set $A$ .
+
+Definition 4 (Limit point, closure, and boundary). For a subset $A \subset \mathcal{H}$ of a space $\mathcal{H}$ , a point $h \in \mathcal{H}$ is called a limit point of $A$ , if for every $r > 0$ , the open ball $B(h,r)$ with center $h$ and radius $r$ contains some point of $A$ : $B(h,r) \cap A \neq \emptyset$ . The closure $\bar{A}$ of the set $A$ consists of the union of the set $A$ and all its limit points. The boundary $\partial A$ is defined as the set of points which are in the closure of the set $A$ but not in the interior of the set $A$ .
+
+Definition 5 (Multilinear). A function $f \colon \mathcal{X}_1 \times \mathcal{X}_2 \to \mathcal{Y}$ is called multilinear if for arbitrary $x_1^1, x_1^2 \in \mathcal{X}_1, x_2^1, x_2^2 \in \mathcal{X}_2$ , and constants $\lambda_1, \lambda_2, \mu_1,$ and $\mu_2$ , we have
+
+$$
+f (\lambda_ {1} x _ {1} ^ {1} + \lambda_ {2} x _ {1} ^ {2}, \mu_ {1} x _ {2} ^ {1} + \mu_ {2} x _ {2} ^ {2}) = \lambda_ {1} \mu_ {1} f (x _ {1} ^ {1}, x _ {2} ^ {1}) + \lambda_ {1} \mu_ {2} f (x _ {1} ^ {1}, x _ {2} ^ {2}) + \lambda_ {2} \mu_ {1} f (x _ {1} ^ {2}, x _ {2} ^ {1}) + \lambda_ {2} \mu_ {2} f (x _ {1} ^ {2}, x _ {2} ^ {2}).
+$$
+
+Remark. The definition of "multilinear" implies that the domain of any multilinear function $f$ is a connected and convex set, such as the smooth and multilinear cells below.
+
+Definition 6 (Equivalence class and quotient space). Suppose $X$ is a linear space. $[x] = \{v \in X : v \sim x\}$ is an equivalence class, if there is an equivalence relation $\sim$ on $X$ , such that for any $a, b, c \in X$ , we have: (1) reflexivity: $a \sim a$ ; (2) symmetry: if $a \sim b$ , then $b \sim a$ ; and (3) transitivity: if $a \sim b$ and $b \sim c$ , then $a \sim c$ . The quotient space and the quotient map are defined to be $X / \sim = \{\{v \in X : v \sim x\} : x \in X\}$ and $x \mapsto [x]$ , respectively.
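+
+As a concrete illustration of Definition 6 (a hypothetical toy example, not part of our theory), the sketch below partitions the integers $\{0, \dots, 8\}$ under the equivalence relation $x \sim y$ iff $x \equiv y \pmod 3$ ; the set of resulting classes plays the role of the quotient space, and $x \mapsto [x]$ is the quotient map:
+
+```python
+# Equivalence classes of {0, ..., 8} under x ~ y iff x = y (mod 3).
+# The dict values are the classes [x]; the set of classes is the quotient
+# space, and x -> x % 3 realizes the quotient map x -> [x].
+quotient = {}
+for x in range(9):
+    quotient.setdefault(x % 3, set()).add(x)
+classes = list(quotient.values())
+assert {0, 3, 6} in classes and len(classes) == 3
+```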
+
+# 4.2 MAIN RESULTS
+
+In this section, the loss surface is defined under convex loss with respect to the prediction $\hat{Y}$ of the neural network. Convex loss covers many popular loss functions in practice, such as the squared loss for the regression tasks and many others based on norms. The triangle inequality of the norms secures the convexity of the corresponding loss functions. The convexity of the squared loss is checked in the appendix (see Appendix B, Lemma 8).
+
+We now present four propositions to express the loss surfaces of nonlinear neural networks. These propositions give four major properties of the loss surface that collectively draw a big picture for the loss surface.
+
+We first recall a lemma by Soudry & Hoffer (2018). It proves that the loss surfaces of neural networks have smooth and multilinear partitions.
+
+Lemma 1 (Smooth and multilinear partition; cf. Soudry & Hoffer (2018)). The loss surfaces of neural networks of arbitrary depth with piecewise linear activations (excluding linear activations) are partitioned into multiple smooth and multilinear open cells, while the boundaries between cells are nondifferentiable.
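+
+To make the notion of a multilinear cell concrete: within a cell, the slope pattern of the activations is fixed, so the network output is linear in each weight matrix separately, which is exactly the multilinearity of Definition 5. The following sketch (our own illustration, assuming a one-hidden-layer network with a two-piece linear activation and a fixed slope pattern $a$ ; the sizes and slopes are arbitrary) checks this numerically:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+d_x, d_1 = 3, 4
+x = rng.standard_normal(d_x)
+# A fixed slope pattern: inside one cell, each hidden unit keeps one slope.
+a = np.where(rng.standard_normal(d_1) < 0, 0.25, 1.0)
+
+def f(W1, W2):
+    # Network output within the cell: W2 diag(a) W1 x.
+    return W2 @ np.diag(a) @ W1 @ x
+
+W1a, W1b = rng.standard_normal((2, d_1, d_x))
+W2a, W2b = rng.standard_normal((2, 1, d_1))
+l1, l2, m1, m2 = 0.7, -1.3, 2.0, 0.5
+lhs = f(l1 * W1a + l2 * W1b, m1 * W2a + m2 * W2b)
+rhs = (l1 * m1 * f(W1a, W2a) + l1 * m2 * f(W1a, W2b)
+       + l2 * m1 * f(W1b, W2a) + l2 * m2 * f(W1b, W2b))
+assert np.allclose(lhs, rhs)   # multilinearity within the cell
+```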
+
+Based on the smooth and multilinear partition, we prove four propositions as follows.
+
+Theorem 2 (Analogous convexity). For one-hidden-layer neural networks with two-piece linear activation for regression under convex loss, within every cell, all local minima are equally good, and also, they are all global minima in the cell.
+
+Theorem 3 (Equivalence classes of local minimum valleys). Suppose all conditions of Theorem 2 hold. Assume the loss function is strictly convex. Then, all local minima in a cell are concentrated as a local minimum valley: they are connected with each other by a continuous path and have the same empirical risk. Additionally, all local minima in a cell constitute an equivalence class.
+
+Corollary 1 (Quotient space of local minimum valleys). Suppose all conditions of Theorem 3 hold. There might exist some "parallel" local minimum valleys in the equivalence class of a local minimum valley. They do not appear because of the constraints from the cell boundaries. If we ignore such constraints, all equivalence classes of local minimum valleys constitute a quotient space.
+
+Corollary 2 (Linear collapse). The partitioned loss surface collapses to one single smooth and multilinear cell, when all activations are linear.
+
+# 4.3 DISCUSSIONS AND PROOF TECHNIQUES
+
+The four propositions collectively characterize how the nonlinearities in activations shape the loss surfaces of neural networks. This section discusses the results and the structure of the proofs. A detailed proof is omitted here and given in Appendix B.
+
+Smooth and multilinear partition. Intuitively, the nonlinearities in the piecewise linear activation functions partition the surface into multiple smooth and multilinear cells. Zhou & Liang (2018); Soudry & Hoffer (2018) highlight the partition of the loss surface. We restate it here to make the picture self-contained. A similar but markedly different notion recently proposed by Hanin & Rolnick (2019) demonstrates that the input data space is partitioned into multiple linear regions, while our work focuses on the partition of the parameter space.
+
+Every local minimum is globally minimal within a cell. In convex optimization, convexity guarantees that all local minima are global minima. This theorem proves that the local minima within a cell are equally good, and also, they are all global minima in the cell. This result is not surprising given the excellent training performance of deep learning algorithms. However, the proof is technically non-trivial.
+
+Soudry & Hoffer (2018) proved that the local minima in a cell are equally good. However, there could exist a point near the boundary that has a smaller empirical risk yet is not a local minimum. Unfortunately, the proof by Soudry & Hoffer (2018) cannot exclude this possibility. By contrast, our proof completely resolves this problem. Furthermore, our proof holds for any convex loss, including the squared loss and the cross-entropy loss, while the result of Soudry & Hoffer (2018) only holds for the squared loss.
+
+This result is challenging to prove, because the proof techniques for linear networks cannot be transplanted here. Technically, a linear network can be expressed as the product of a sequence of weight matrices, which guarantees good geometrical properties. Specifically, the effect of every linear activation function is equivalent to multiplying the output by a real constant. However, the loss surface within a cell of a nonlinear neural network does not have this property. Below is the skeleton of our proof.
+
+We first prove that the empirical risk $\hat{\mathcal{R}}$ is a convex function within every cell with respect to a variable $\hat{W}$ calculated from the weights $W$ . Therefore, all local minima of the empirical risk $\hat{\mathcal{R}}$ with respect to $\hat{W}$ are also globally optimal in the cell. Every cell corresponds to a specific series of linear parts of the activations. Therefore, in any fixed cell, the activation $h_{s_{-},s_{+}}$ can be expressed by the slopes of the corresponding linear parts, as in the following equation,
+
+$$
+\hat {\mathcal {R}} \left(W _ {1}, W _ {2}\right) = \frac {1}{n} \sum_ {i = 1} ^ {n} l \left(y _ {i}, W _ {2} h \left(W _ {1} x _ {i}\right)\right) = \frac {1}{n} \sum_ {i = 1} ^ {n} l \left(y _ {i}, W _ {2} \operatorname {d i a g} \left(A _ {., i}\right) W _ {1} x _ {i}\right), \tag {7}
+$$
+
+where $A_{.,i}$ is the $i$ -th column of matrix
+
+$$
+A = \left[ \begin{array}{c c c} h _ {s _ {-}, s _ {+}} ^ {\prime} ((W _ {1}) _ {1, \cdot} x _ {1}) & \dots & h _ {s _ {-}, s _ {+}} ^ {\prime} ((W _ {1}) _ {1, \cdot} x _ {n}) \\ \vdots & \ddots & \vdots \\ h _ {s _ {-}, s _ {+}} ^ {\prime} ((W _ {1}) _ {d _ {1}, \cdot} x _ {1}) & \dots & h _ {s _ {-}, s _ {+}} ^ {\prime} ((W _ {1}) _ {d _ {1}, \cdot} x _ {n}) \end{array} \right].
+$$
+
+Matrix $A$ is constituted by collecting the slopes of the activation $h$ at every preactivation $(W_{1})_{i,\cdot}\, x_{j}$ .
+
+Different elements of the matrix $A$ can be either of $\{s_{-}, s_{+}\}$ . Therefore, we cannot use a single constant to express the effect of this activation, and thus, even within a cell, a nonlinear network cannot be expressed as the product of a sequence of weight matrices. This difference explains why the proofs for deep linear neural networks cannot be transplanted here.
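+
+The role of the matrix $A$ can be checked numerically. The sketch below (our own illustration; the sizes and slopes are arbitrary) verifies that, for a two-piece linear activation, $h(W_1 x_i) = \operatorname{diag}(A_{., i}) W_1 x_i$ for every sample, which is the identity underlying eq. (7):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+s_minus, s_plus = 0.25, 1.0          # arbitrary two-piece linear slopes
+
+def h(z):
+    # Two-piece linear activation h_{s-,s+}.
+    return np.where(z < 0, s_minus * z, s_plus * z)
+
+def slope(z):
+    # Slope of h at each preactivation (an entry of the matrix A).
+    return np.where(z < 0, s_minus, s_plus)
+
+d_x, d_1, n = 3, 5, 4
+W1 = rng.standard_normal((d_1, d_x))
+X = rng.standard_normal((d_x, n))
+A = slope(W1 @ X)                     # d_1 x n matrix of activation slopes
+for i in range(n):
+    lhs = h(W1 @ X[:, i])
+    rhs = np.diag(A[:, i]) @ W1 @ X[:, i]
+    assert np.allclose(lhs, rhs)      # h(W1 x_i) = diag(A_{.,i}) W1 x_i
+```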
+
+Then, we prove that (see p. 40)
+
+$$
+W _ {2} \operatorname {d i a g} \left(A _ {., i}\right) W _ {1} x _ {i} = A _ {., i} ^ {T} \operatorname {d i a g} \left(W _ {2}\right) W _ {1} x _ {i}. \tag {8}
+$$
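+
+Eq. (8) can be verified numerically. The sketch below (our own illustration) assumes a scalar output, i.e., $W_2$ is a $1 \times d_1$ row vector, which is the setting in which $\operatorname{diag}(W_2)$ is well defined:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(4)
+d_x, d_1 = 3, 5
+W1 = rng.standard_normal((d_1, d_x))
+W2 = rng.standard_normal((1, d_1))    # scalar output: W2 is a row vector
+x = rng.standard_normal(d_x)
+a = np.where(rng.standard_normal(d_1) < 0, 0.25, 1.0)   # slopes A_{.,i}
+
+lhs = W2 @ np.diag(a) @ W1 @ x
+rhs = a @ np.diag(W2.ravel()) @ W1 @ x
+assert np.allclose(lhs, rhs)          # eq. (8)
+```
+
+The identity holds because $W_2 \operatorname{diag}(a)$ multiplies the $j$ -th entry of $W_2$ by $a_j$ , which is the same as multiplying the $j$ -th entry of $a$ by $(W_2)_j$ .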
+
+Applying eq. (8) to eq. (7), the empirical risk $\hat{\mathcal{R}}$ takes a formulation similar to that of a linear neural network,
+
+$$
+\hat {\mathcal {R}} = \frac {1}{n} \sum_ {i = 1} ^ {n} l \left(y _ {i}, A _ {., i} ^ {T} \operatorname {diag} \left(W _ {2}\right) W _ {1} x _ {i}\right). \tag {9}
+$$
+
+Afterwards, define $\hat{W}_1 = \mathrm{diag}(W_2)W_1$ and then flatten the matrix $\hat{W}_1$ into a vector $\hat{W}$ ,
+
+$$
+\hat {W} = \left((\hat {W} _ {1}) _ {1, \cdot} \quad \dots \quad (\hat {W} _ {1}) _ {d _ {1}, \cdot}\right),
+$$
+
+Define $Q:(W_1,W_2)\mapsto \hat{W}$ , and also define,
+
+$$
+\hat {X} = \left( \begin{array}{c c c} A _ {., 1} \otimes x _ {1} & \dots & A _ {., n} \otimes x _ {n} \end{array} \right).
+$$
+
+We can prove the following equations (see p. 41),
+
+$$
+\left( \begin{array}{c c c} A _ {., 1} ^ {T} \hat {W} _ {1} x _ {1} & \dots & A _ {., n} ^ {T} \hat {W} _ {1} x _ {n} \end{array} \right) = \hat {W} \hat {X}.
+$$
+
+Applying this to eq. (9), the empirical risk is transformed into a convex function as follows,
+
+$$
+\hat {\mathcal {R}} = \frac {1}{n} \sum_ {i = 1} ^ {n} l \left(y _ {i}, \left(A _ {., i}\right) ^ {T} \hat {W} _ {1} x _ {i}\right) = \frac {1}{n} \sum_ {i = 1} ^ {n} l \left(y _ {i}, \hat {W} \hat {X} _ {i}\right).
+$$
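+
+The vectorization above can be checked numerically: with $\hat{W}_1 = \operatorname{diag}(W_2) W_1$ , $\hat{W}$ the row-wise flattening of $\hat{W}_1$ , and $\hat{X}_{., i} = A_{., i} \otimes x_i$ , the sketch below (our own illustration, again assuming a scalar output) confirms that $A_{., i}^T \hat{W}_1 x_i = \hat{W} \hat{X}_{., i}$ , so the prediction is linear in $\hat{W}$ and composing it with a convex loss yields a convex risk within the cell:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(5)
+d_x, d_1, n = 3, 4, 6
+W1 = rng.standard_normal((d_1, d_x))
+W2 = rng.standard_normal((1, d_1))                          # scalar output
+X = rng.standard_normal((d_x, n))
+A = np.where(rng.standard_normal((d_1, n)) < 0, 0.25, 1.0)  # cell's slope pattern
+
+W1_hat = np.diag(W2.ravel()) @ W1     # \hat{W}_1 = diag(W_2) W_1
+w_hat = W1_hat.ravel()                # rows of \hat{W}_1 concatenated -> \hat{W}
+X_hat = np.column_stack([np.kron(A[:, i], X[:, i]) for i in range(n)])
+
+for i in range(n):
+    # A_{.,i}^T \hat{W}_1 x_i = \hat{W} \hat{X}_{.,i}, and both equal the
+    # within-cell network output W2 diag(A_{.,i}) W1 x_i.
+    assert np.allclose(A[:, i] @ W1_hat @ X[:, i], w_hat @ X_hat[:, i])
+    assert np.allclose(W2 @ np.diag(A[:, i]) @ W1 @ X[:, i], w_hat @ X_hat[:, i])
+```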
+
+We then prove that the local optimality of the empirical risk $\hat{\mathcal{R}}$ is maintained when the weights $W$ are mapped to the variable $\hat{W}$ . Specifically, the local minima of the empirical risk $\hat{\mathcal{R}}$ with respect to the weight $W$ are also the local minima with respect to the variable $\hat{W}$ . The maintenance of optimality is not surprising but the proof is technically non-trivial (see a detailed proof in pp. 42-43).
+
+Equivalence classes and quotient space of local minimum valleys. The constructed mapping $Q$ is a quotient map. Under the setting of the previous property, all local minima in a cell constitute an equivalence class; they are concentrated as a local minimum valley. However, there might exist some "parallel" local minimum valleys in the equivalence class, which do not appear because of the constraints from the cell boundaries. Further, for neural networks of arbitrary depth, we also construct a local minimum valley (the spurious local minima constructed in Section 3). This result explains the property of mode connectivity, proposed in two empirical works (Garipov et al., 2018; Draxler et al., 2018): the minima found by gradient-based methods are connected by a path in the parameter space with almost constant empirical risk. A recent theoretical work (Kuditipudi et al., 2019) proves that dropout stability and noise stability guarantee mode connectivity.
+
+Linear collapse. Our theories also cover the case of linear neural networks. Linear neural networks do not have any nonlinearity in their activations. Correspondingly, the loss surface does not have any non-differentiable boundaries. In our theories, when there is no nonlinearity in the activations, the partitioned loss surface collapses to a single smooth and multilinear cell. All local minima therein are equally good, and also, they are all global minima. This result is consistent with the existing results on linear neural networks (Kawaguchi, 2016; Baldi & Hornik, 1989; Lu & Kawaguchi, 2017; Freeman & Bruna, 2017; Zhou & Liang, 2018; Laurent & von Brecht, 2018; Yun et al., 2018).
+
+# 5 CONCLUSION AND FUTURE DIRECTIONS
+
+This paper reports that the nonlinearities in activations substantially shape the loss surfaces of neural networks. First, we prove that neural networks have infinitely many spurious local minima, in contrast to the circumstance of linear neural networks. This result stands for any neural network with arbitrary hidden layers and arbitrary piecewise linear activations (excluding linear functions) under many popular loss functions in practice (e.g., squared loss and cross-entropy loss). This result significantly extends the conditions of the relevant results and has the least restrictive assumptions that cover most practical circumstances: (1) the training data is not linearly separable; (2) the training sample points are distinct; (3) all hidden layers are wider than the output layer; and (4) there exists some turning point in the piecewise linear activation at which the sum of the slopes on the two sides does not equal 0. Second, based on a recent result that the loss surface has a smooth and multilinear partition, we draw a big picture of the loss surface from the following aspects: (1) local minima in any cell are equally good, and also, they are all global minima in the cell; (2) all local minima in one cell constitute an equivalence class and are concentrated as a local minimum valley; and (3) the loss surface collapses to one single cell when all activations are linear functions, which explains the results of linear neural networks. The first and second properties are rigorously proved for any one-hidden-layer nonlinear neural network with two-piece linear (ReLU-like) activations for regression tasks under convex/strictly convex loss without any other assumption.
+
+Theoretically understanding deep learning is of vital importance to both academia and industry. A major barrier recognized by the whole community is that deep neural networks' loss surfaces are extremely non-convex and even non-smooth. Such non-convexity and non-smoothness make the analysis of the optimization and generalization properties prohibitively difficult. A natural idea is to bypass the geometrical properties and then approach a theoretical explanation. We argue that such "intimidating" geometrical properties are exactly the major factors that shape the properties of deep neural networks, and also the key to explaining deep learning. We propose to explore the magic of deep learning from the geometrical structures of its loss surface. Future directions towards fully understanding deep learning are summarized as follows,
+
+- Investigate the (potential) equivalence classes and quotient space of local minimum valleys for deep neural networks. This paper suggests a degenerate nature of the large amounts of local minima: all the local minima within one cell constitute an equivalence class. We construct a quotient map for one-hidden-layer neural networks with two-piece activations for regression. Whether deep neural networks have similar properties remains an open problem. Understanding the quotient space would be a major step of understanding the approximation, optimization, and generalization of deep learning.
+- Explore the sophisticated geometry of local minimum valleys. The quotient space of local minima suggests a strategy that treats every local minimum valley as a whole. However, the sophisticated local geometrical properties around the local minimum valleys remain underexplored, such as the sharpness/flatness of the local minima, the potential categorization of local minimum valleys according to their performance, and the volumes of the local minimum valleys from different categories.
+- Tackle the optimization and generalization problems of deep learning. Empirical results have overwhelmingly suggested that deep learning has excellent optimization and generalization capabilities, which is, however, beyond the current theoretical understanding: (1) one can employ stochastic optimization methods (such as SGD) to minimize the extremely non-convex and non-smooth loss function in deep learning, which is expected to be NP-hard yet is practically solved by computationally cheap optimization methods; and (2) heavily-parameterized neural networks can generalize well in many tasks, which is beyond the expectation of most current theoretical frameworks based on hypothesis complexity and its variants. A sophisticated geometrical expression, if we fortunately obtain one in the future, would be a compelling push towards tackling the generalization and optimization mysteries of deep learning.
+
+# ACKNOWLEDGMENTS
+
+This work was supported by Australian Research Council Project FL-170100117. The authors sincerely appreciate Micah Goldblum and the anonymous reviewers for their constructive comments.
+
+# REFERENCES
+
+Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and generalization in overparameterized neural networks, going beyond two layers. In Advances in Neural Information Processing Systems, 2019a.
+Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. In International Conference on Machine Learning, 2019b.
+Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. Learning to learn by gradient descent by gradient descent. In Advances in Neural Information Processing Systems, 2016.
+Sanjeev Arora, Nadav Cohen, and Elad Hazan. On the optimization of deep networks: Implicit acceleration by overparameterization. In International Conference on Machine Learning, 2018.
+Pierre Baldi and Kurt Hornik. Neural networks and principal component analysis: Learning from examples without local minima. *Neural Networks*, 2(1):53-58, 1989.
+Alon Brutzkus and Amir Globerson. Globally optimal gradient descent for a convnet with gaussian inputs. In International Conference on Machine Learning, 2017.
+Alon Brutzkus, Amir Globerson, Eran Malach, and Shai Shalev-Shwartz. SGD learns overparameterized networks that provably generalize on linearly separable data. In International Conference on Learning Representations, 2018.
+Lenaic Chizat and Francis Bach. On the global convergence of gradient descent for overparameterized models using optimal transport. In Advances in Neural Information Processing Systems, 2018.
+Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. In International Conference on Artificial Intelligence and Statistics, 2015.
+Felix Draxler, Kambis Veschgini, Manfred Salmhofer, and Fred Hamprecht. Essentially no barriers in neural network energy landscape. In International Conference on Machine Learning, 2018.
+Simon S Du, Jason D Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In International Conference on Machine Learning, 2018a.
+Simon S Du, Jason D Lee, Yuandong Tian, Barnabas Poczos, and Aarti Singh. Gradient descent learns one-hidden-layer cnn: Don't be afraid of spurious local minima. In International Conference on Machine Learning, 2018b.
+Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. In International Conference on Learning Representations, 2019.
+C Daniel Freeman and Joan Bruna. Topology and geometry of half-rectified network optimization. In International Conference on Learning Representations, 2017.
+Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P Vetrov, and Andrew G Wilson. Loss surfaces, mode connectivity, and fast ensembling of dnns. In Advances in Neural Information Processing Systems, 2018.
+Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, and Tom Goldstein. Truth or backpropaganda? an empirical investigation of deep learning theory. In International Conference on Learning Representations, 2020.
+Benjamin D. Haeffele and Rene Vidal. Global optimality in neural network training. In IEEE Conference on Computer Vision and Pattern Recognition, July 2017.
+Boris Hanin and David Rolnick. Complexity of linear regions in deep networks. In International Conference on Machine Learning, 2019.
+
+Fengxiang He, Tongliang Liu, and Dacheng Tao. Control batch size and learning rate to generalize well: Theoretical and empirical evidence. In Advances in Neural Information Processing Systems, 2019.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
+Kenji Kawaguchi. Deep learning without poor local minima. In Advances in Neural Information Processing Systems, 2016.
+Kenji Kawaguchi and Leslie Pack Kaelbling. Elimination of all bad local minima in deep learning. In International Conference on Artificial Intelligence and Statistics, 2020.
+Rohith Kuditipudi, Xiang Wang, Holden Lee, Yi Zhang, Zhiyuan Li, Wei Hu, Sanjeev Arora, and Rong Ge. Explaining landscape connectivity of low-cost solutions for multilayer nets. In Advances in Neural Information Processing Systems, 2019.
+Thomas Laurent and James von Brecht. The multilinear structure of relu networks. In International Conference on Machine Learning, 2018.
+Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436, 2015.
+Dawei Li, Tian Ding, and Ruoyu Sun. Over-parameterized deep neural networks have no strict local minima for any continuous activations. arXiv preprint arXiv:1812.11039, 2018.
+Yuanzhi Li and Yingyu Liang. Learning overparameterized neural networks via stochastic gradient descent on structured data. In Advances in Neural Information Processing Systems 31. 2018.
+Yuanzhi Li and Yang Yuan. Convergence analysis of two-layer neural networks with relu activation. In Advances in Neural Information Processing Systems 30. 2017.
+Shiyu Liang, Ruoyu Sun, Yixuan Li, and Rayadurgam Srikant. Understanding the loss surface of neural networks for binary classification. In International Conference on Machine Learning, 2018.
+Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen Awm Van Der Laak, Bram Van Ginneken, and Clara I Sánchez. A survey on deep learning in medical image analysis. Medical Image Analysis, 42: 60-88, 2017.
+Haihao Lu and Kenji Kawaguchi. Depth creates no bad local minima. arXiv preprint arXiv:1702.08580, 2017.
+Song Mei, Yu Bai, Andrea Montanari, et al. The landscape of empirical risk for nonconvex losses. The Annals of Statistics, 46(6A):2747-2774, 2018.
+Quynh Nguyen. On connected sublevel sets in deep learning. In International Conference on Machine Learning, 2019.
+Quynh Nguyen and Matthias Hein. Optimization landscape and expressivity of deep cnns. In International Conference on Machine Learning, 2018.
+Quynh Nguyen, Mahesh Chandra Mukkamala, and Matthias Hein. On the loss landscape of a class of deep neural networks with no bad local valleys. In International Conference on Learning Representations, 2019.
+Samet Oymak and Mahdi Soltanolkotabi. Overparameterized nonlinear learning: Gradient descent takes the shortest path? In International Conference on Machine Learning, 2019.
+Itay Safran and Ohad Shamir. Spurious local minima are common in two-layer relu neural networks. In International Conference on Machine Learning, 2018.
+Levent Sagun, Léon Bottou, and Yann LeCun. Singularity of the hessian in deep learning. arXiv preprint arXiv:1611.07476, 2016.
+
+Levent Sagun, Utku Evci, V Ugur Guney, Yann Dauphin, and Leon Bottou. Empirical analysis of the hessian of over-parametrized neural networks. In International Conference on Learning Representations Workshop, 2018.
+David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484, 2016.
+Mahdi Soltanolkotabi. Learning relus via gradient descent. In Advances in Neural Information Processing Systems 30. 2017.
+Mahdi Soltanolkotabi, Adel Javanmard, and Jason D Lee. Theoretical insights into the optimization landscape of over-parameterized shallow neural networks. IEEE Transactions on Information Theory, 65(2):742-769, 2018.
+Daniel Soudry and Elad Hoffer. Exponentially vanishing sub-optimal local minima in multilayer neural networks. In International Conference on Learning Representations Workshop, 2018.
+Grzegorz Swirszcz, Wojciech Marian Czarnecki, and Razvan Pascanu. Local minima in training of deep networks. arXiv preprint arXiv:1611.06310, 2016.
+Yuandong Tian. An analytical formula of population gradient for two-layered relu network and its applications in convergence and critical point analysis. In International Conference on Machine Learning, 2017.
+Gang Wang, Georgios B Giannakis, and Jie Chen. Learning relu networks on linearly separable data: Algorithm, optimality, and generalization. IEEE Transactions on Signal Processing, 67(9): 2357-2370, 2019.
+Ian H Witten, Eibe Frank, Mark A Hall, and Christopher J Pal. Data Mining: Practical machine learning tools and techniques. Morgan Kaufmann, 2016.
+Chenwei Wu, Jiajun Luo, and Jason D Lee. No spurious local minima in a two hidden unit relu network. In International Conference on Learning Representation Workshop, 2018.
+Bo Xie, Yingyu Liang, and Le Song. Diverse neural network learns true target functions. In International Conference on Artificial Intelligence and Statistics, 2017.
+Chulhee Yun, Suvrit Sra, and Ali Jadbabaie. Global optimality conditions for deep neural networks. In International Conference on Learning Representations, 2018.
+Chulhee Yun, Suvrit Sra, and Ali Jadbabaie. Efficiently testing local optimality and escaping saddles for ReLU networks. In International Conference on Learning Representations, 2019a.
+Chulhee Yun, Suvrit Sra, and Ali Jadbabaie. Small nonlinearities in activation functions create bad local minima in neural networks. In International Conference on Learning Representations, 2019b.
+Hongyang Zhang, Junru Shao, and Ruslan Salakhutdinov. Deep neural networks with multi-branch architectures are intrinsically less non-convex. In International Conference on Artificial Intelligence and Statistics, 2019a.
+Xiao Zhang, Yaodong Yu, Lingxiao Wang, and Quanquan Gu. Learning one-hidden-layer relu networks via gradient descent. In International Conference on Artificial Intelligence and Statistics, 2019b.
+Kai Zhong, Zhao Song, Prateek Jain, Peter L Bartlett, and Inderjit S Dhillon. Recovery guarantees for one-hidden-layer neural networks. In International Conference on Machine Learning, 2017.
+Pan Zhou and Jiashi Feng. Empirical risk landscape analysis for understanding deep neural networks. In International Conference on Learning Representations, 2018.
+Yi Zhou and Yingbin Liang. Critical points of neural networks: Analytical forms and landscape properties. In International Conference on Learning Representations, 2018.
+
+Yi Zhou, Junjie Yang, Huishuai Zhang, Yingbin Liang, and Vahid Tarokh. SGD converges to global minimum in deep learning via star-convex path. In International Conference on Learning Representations, 2019.
+Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. Stochastic gradient descent optimizes over-parameterized deep relu networks. Machine Learning, 2019.
+
+# A PROOF OF THEOREM 1
+
+This appendix gives a detailed proof of Theorem 1 omitted from the main text. It follows the skeleton presented in Section 3.3.
+
+# A.1 SQUARED LOSS AND CROSS-ENTROPY LOSS
+
+We first check whether squared loss and cross-entropy loss are covered by the requirements of Theorem 1.
+
+Lemma 2. The squared loss (defined by eq. 5) is continuously differentiable with respect to the prediction of the model, and its gradient is nonzero whenever the prediction and the label are different.
+
+Proof. Clearly, the squared loss is differentiable with respect to $\hat{Y}$ . Specifically, the gradient with respect to $\hat{Y}$ is as follows,
+
+$$
+\nabla_ {\hat {Y}} \left\| Y - \hat {Y} \right\| ^ {2} = - 2 \left(Y - \hat {Y}\right),
+$$
+
+which is continuous with respect to $\hat{Y}$ .
+
+Also, when the prediction $\hat{Y}$ does not equal the label $Y$ , we have
+
+$$
+\nabla_ {\hat {Y}} \left\| Y - \hat {Y} \right\| ^ {2} \neq 0.
+$$
+
+The proof is completed.
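+
+The gradient above can be confirmed with a finite-difference check (our own sketch; note that $\nabla_{\hat{Y}} \|Y - \hat{Y}\|^2 = -2(Y - \hat{Y})$ , which is nonzero whenever $\hat{Y} \neq Y$ ):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+Y = rng.standard_normal(4)            # label
+Yhat = rng.standard_normal(4)         # prediction (differs from Y a.s.)
+loss = lambda p: np.sum((Y - p) ** 2)
+
+eps = 1e-6
+num_grad = np.array([(loss(Yhat + eps * e) - loss(Yhat - eps * e)) / (2 * eps)
+                     for e in np.eye(4)])
+assert np.allclose(num_grad, -2 * (Y - Yhat), atol=1e-5)
+assert not np.allclose(num_grad, 0)   # nonzero since Yhat differs from Y
+```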
+
+
+
+Lemma 3. Assume that the ground-truth label is a one-hot vector. Then the cross-entropy loss (eq. (6)) is continuously differentiable with respect to the prediction of the model, and its gradient is nonzero whenever the prediction and the label are different.
+
+Proof. For any $i \in [1:n]$ , the cross-entropy loss is differentiable with respect to $\hat{Y}_i$ . The $j$ -th component of the gradient with respect to the prediction $\hat{Y}_i$ is as follows,
+
+$$
+\frac {\partial \left(- \sum_ {k = 1} ^ {d _ {Y}} Y _ {k, i} \log \left(\frac {e ^ {\hat {Y} _ {k, i}}}{\sum_ {k' = 1} ^ {d _ {Y}} e ^ {\hat {Y} _ {k', i}}}\right)\right)}{\partial \hat {Y} _ {j, i}} = \left(\sum_ {k = 1} ^ {d _ {Y}} Y _ {k, i}\right) \frac {e ^ {\hat {Y} _ {j, i}}}{\sum_ {k = 1} ^ {d _ {Y}} e ^ {\hat {Y} _ {k, i}}} - Y _ {j, i}, \tag {10}
+$$
+
+which is continuous with respect to $\hat{Y}_i$ . So, the cross-entropy loss is continuously differentiable with respect to $\hat{Y}_i$ .
+
+Additionally, if the gradient (eq. (10)) is zero, we have the following equations,
+
+$$
+\left(\sum_ {k = 1} ^ {d _ {Y}} Y _ {k, i}\right) e ^ {\hat {Y} _ {j, i}} - Y _ {j, i} \sum_ {k = 1} ^ {d _ {Y}} e ^ {\hat {Y} _ {k, i}} = 0, \quad j = 1, 2, \dots , d _ {Y}.
+$$
+
+Rewriting it in matrix form, we have
+
+$$
+\left[ \begin{array}{c c c c} \sum_ {k = 1} ^ {d _ {Y}} Y _ {k, i} - Y _ {1, i} & - Y _ {1, i} & \dots & - Y _ {1, i} \\ - Y _ {2, i} & \sum_ {k = 1} ^ {d _ {Y}} Y _ {k, i} - Y _ {2, i} & \dots & - Y _ {2, i} \\ \vdots & \vdots & \ddots & \vdots \\ - Y _ {d _ {Y}, i} & \dots & - Y _ {d _ {Y}, i} & \sum_ {k = 1} ^ {d _ {Y}} Y _ {k, i} - Y _ {d _ {Y}, i} \end{array} \right] \left[ \begin{array}{c} e ^ {\hat {Y} _ {1, i}} \\ e ^ {\hat {Y} _ {2, i}} \\ \vdots \\ e ^ {\hat {Y} _ {d _ {Y}, i}} \end{array} \right] = 0.
+$$
+
+Since $\sum_{k=1}^{d_Y} Y_{k,i} = 1$ , we can easily check that the rank of the matrix on the left-hand side is $d_Y - 1$ . So the dimension of the solution space is one. Meanwhile, we have
+
+$$
+\left[ \begin{array}{c c c c} \sum_ {k = 1} ^ {d _ {Y}} Y _ {k, i} - Y _ {1, i} & - Y _ {1, i} & \dots & - Y _ {1, i} \\ - Y _ {2, i} & \sum_ {k = 1} ^ {d _ {Y}} Y _ {k, i} - Y _ {2, i} & \dots & - Y _ {2, i} \\ \vdots & \vdots & \ddots & \vdots \\ - Y _ {d _ {Y}, i} & \dots & - Y _ {d _ {Y}, i} & \sum_ {k = 1} ^ {d _ {Y}} Y _ {k, i} - Y _ {d _ {Y}, i} \end{array} \right] \left[ \begin{array}{c} Y _ {1, i} \\ Y _ {2, i} \\ \vdots \\ Y _ {d _ {Y}, i} \end{array} \right] = 0.
+$$
+
+Therefore, $0 \neq e^{\hat{Y}_{k,i}} = \lambda Y_{k,i}$ for some $\lambda \in \mathbb{R}$ , which contradicts the fact that some of the components of $Y_{\cdot, i}$ are 0 ( $Y_{\cdot, i}$ is a one-hot vector).
+
+The proof is completed.
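+
+The gradient formula (eq. (10)) can also be confirmed numerically. With a one-hot label, $\sum_k Y_{k,i} = 1$ and eq. (10) reduces to the familiar softmax-minus-label form; the sketch below (our own illustration) checks this by finite differences:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+d_Y = 5
+Y = np.zeros(d_Y); Y[2] = 1.0        # one-hot label
+Yhat = rng.standard_normal(d_Y)      # prediction (logits)
+
+def ce(p):
+    # Cross-entropy with softmax: -sum_k Y_k log(softmax_k(p)).
+    return -np.sum(Y * (p - np.log(np.sum(np.exp(p)))))
+
+softmax = np.exp(Yhat) / np.sum(np.exp(Yhat))
+eps = 1e-6
+num_grad = np.array([(ce(Yhat + eps * e) - ce(Yhat - eps * e)) / (2 * eps)
+                     for e in np.eye(d_Y)])
+assert np.allclose(num_grad, softmax - Y, atol=1e-5)  # eq. (10) with sum_k Y_k = 1
+assert np.any(num_grad != 0)          # gradient is nonzero for finite logits
+```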
+
+
+
+# A.2 STAGE (1)
+
+In Stage (1), we prove that neural networks with one hidden layer, two-piece linear activation $h_{s_{-},s_{+}}$ , and multi-dimensional outputs have infinitely many spurious local minima.
+
+This stage is organized as follows: (a) we construct a local minimizer by Lemma 4; and (b) we prove that the local minimizer is spurious in Theorem 4 by constructing a set of parameters with smaller empirical risk.
+
+Without loss of generality, we assume that $s_{+} \neq 0$ . Otherwise, suppose that $s_{+} = 0$ . From the definition of ReLU-like activation (eq. (4)), we have $s_{-} \neq 0$ . Since
+
+$$
+h _ {s _ {-}, s _ {+}} (x) = h _ {- s _ {+}, - s _ {-}} (- x),
+$$
+
+the output of the neural network with parameters $\left\{[W_i]_{i=1}^L, [b_i]_{i=1}^L\right\}$ and activation $h_{s_{-}, s_{+}}$ equals that of the neural network with parameters $\left\{[W_i']_{i=1}^L, [b_i']_{i=1}^L\right\}$ and activation $h_{-s_{+}, -s_{-}}$, where $W_i' = -W_i$, $b_i' = -b_i$ for $i = 1, 2, \dots, L-1$ and $W_L' = W_L$, $b_L' = b_L$. Since $\left\{[W_i]_{i=1}^L, [b_i]_{i=1}^L\right\} \to \left\{[W_i']_{i=1}^L, [b_i']_{i=1}^L\right\}$ is a one-to-one map, it is equivalent to consider either of the two networks, and $h_{-s_{+}, -s_{-}}(x)$ has non-zero slope when $x > 0$.
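+A quick numerical sanity check of the identity $h_{s_-,s_+}(x) = h_{-s_+,-s_-}(-x)$ and of the resulting sign-flip equivalence, sketched for a one-hidden-layer network with arbitrary illustrative shapes:
+
+```python
+import numpy as np
+
+def h(x, s_minus, s_plus):
+    # Two-piece linear activation: slope s_minus for x < 0, s_plus for x >= 0.
+    return np.where(x < 0, s_minus * x, s_plus * x)
+
+rng = np.random.default_rng(0)
+x = rng.standard_normal(1000)
+s_minus, s_plus = 0.3, 0.0  # the degenerate case s_+ = 0 discussed above
+
+# The identity h_{s-,s+}(x) = h_{-s+,-s-}(-x).
+assert np.allclose(h(x, s_minus, s_plus), h(-x, -s_plus, -s_minus))
+
+# Hence flipping the sign of all pre-output parameters while swapping the
+# activation leaves a one-hidden-layer network's output unchanged.
+W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
+W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)
+X = rng.standard_normal((3, 5))
+out1 = W2 @ h(W1 @ X + b1[:, None], s_minus, s_plus) + b2[:, None]
+out2 = W2 @ h(-W1 @ X - b1[:, None], -s_plus, -s_minus) + b2[:, None]
+assert np.allclose(out1, out2)
+```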
+
+Step (a). Construct local minima of the loss surface.
+
+Lemma 4. Suppose that $\tilde{W}$ is a local minimizer of
+
+$$
+f (W) \triangleq \frac {1}{n} \sum_ {i = 1} ^ {n} l \left(Y _ {i}, W \left[ \begin{array}{l} x _ {i} \\ 1 \end{array} \right]\right), \tag {11}
+$$
+
+Then, under Assumption 3, any one-hidden-layer neural network has a local minimum at
+
+$$
+\hat {W} _ {1} = \left[ \begin{array}{c} {\left[ \tilde {W} \right] _ {\cdot , [ 1: d _ {X} ]}} \\ {\mathbf {0} _ {(d _ {1} - d _ {Y}) \times d _ {X}}} \end{array} \right], \hat {b} _ {1} = \left[ \begin{array}{c} {\left[ \tilde {W} \right] _ {\cdot , d _ {X} + 1} - \eta \mathbf {1} _ {d _ {Y}}} \\ {- \eta \mathbf {1} _ {d _ {1} - d _ {Y}}} \end{array} \right], \tag {12}
+$$
+
+and
+
+$$
+\hat {W} _ {2} = \left[ \begin{array}{l l} \frac {1}{s _ {+}} I _ {d _ {Y}} & \mathbf {0} _ {d _ {Y} \times \left(d _ {1} - d _ {Y}\right)} \end{array} \right], \hat {b} _ {2} = \eta \mathbf {1} _ {d _ {Y}}, \tag {13}
+$$
+
+where $\hat{W}_1$ and $\hat{b}_1$ are respectively the weight matrix and the bias of the first layer, $\hat{W}_2$ and $\hat{b}_2$ are respectively the weight matrix and the bias of the second layer, and $\eta$ is a negative constant with absolute value sufficiently large such that
+
+$$
+\tilde {W} \tilde {X} - \eta \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} > \mathbf {0}, \tag {14}
+$$
+
+where $>$ is element-wise.
+
+Also, the loss $l$ in this lemma is any continuously differentiable loss whose gradient does not equal 0 when the prediction differs from the ground-truth label.
+
+Proof. We show that the empirical risk is higher in the neighborhood of $\left\{\left[\hat{W}_i\right]_{i=1}^2, \left[\hat{b}_i\right]_{i=1}^2\right\}$ , in order to prove that $\left\{\left[\hat{W}_i\right]_{i=1}^2, \left[\hat{b}_i\right]_{i=1}^2\right\}$ is a local minimizer.
+
+The output of the first layer before the activation is
+
+$$
+\tilde{Y}^{(1)} = \hat{W}_1 X + \hat{b}_1 \mathbf{1}_n^T = \left[ \begin{array}{c} \tilde{W} \tilde{X} - \eta \mathbf{1}_{d_Y} \mathbf{1}_n^T \\ - \eta \mathbf{1}_{d_1 - d_Y} \mathbf{1}_n^T \end{array} \right].
+$$
+
+Because $\eta$ is a negative constant with absolute value sufficiently large such that eq. (14) holds, the output above is positive (element-wise). Hence, the output of the neural network with parameters $\{\hat{W}_1,\hat{W}_2,\hat{b}_1,\hat{b}_2\}$ is
+
+$$
+\begin{array}{l} \hat{Y} = \hat{W}_2 h_{s_-, s_+} \left(\hat{W}_1 X + \hat{b}_1 \mathbf{1}_n^T\right) + \hat{b}_2 \mathbf{1}_n^T \\ = s_+ \hat{W}_2 \left(\hat{W}_1 X + \hat{b}_1 \mathbf{1}_n^T\right) + \hat{b}_2 \mathbf{1}_n^T \\ = s_+ \left[ \begin{array}{c c} \frac{1}{s_+} I_{d_Y} & \mathbf{0}_{d_Y \times (d_1 - d_Y)} \end{array} \right] \left[ \begin{array}{c} \tilde{W} \tilde{X} - \eta \mathbf{1}_{d_Y} \mathbf{1}_n^T \\ - \eta \mathbf{1}_{d_1 - d_Y} \mathbf{1}_n^T \end{array} \right] + \eta \mathbf{1}_{d_Y} \mathbf{1}_n^T \\ = \tilde{W} \tilde{X}, \\ \end{array}
+$$
+
+where $\tilde{X}$ is defined as
+
+$$
+\tilde {X} = \left[ \begin{array}{l} X \\ \mathbf {1} _ {n} ^ {T} \end{array} \right]. \tag {15}
+$$
+
+Therefore, the empirical risk $\hat{\mathcal{R}}$ in terms of parameters $\{\hat{W}_1,\hat{W}_2,\hat{b}_1,\hat{b}_2\}$ is
+
+$$
+\hat {\mathcal {R}} \left(\hat {W} _ {1}, \hat {W} _ {2}, \hat {b} _ {1}, \hat {b} _ {2}\right) = \frac {1}{n} \sum_ {i = 1} ^ {n} l \left(Y _ {i}, \left(\tilde {W} \tilde {X}\right) _ {., i}\right) = \frac {1}{n} \sum_ {i = 1} ^ {n} l \left(Y _ {i}, \tilde {W} \left[ \begin{array}{c} x _ {i} \\ 1 \end{array} \right]\right) = f (\tilde {W}).
+$$
+
+Then, we introduce a sufficiently small disturbance $\left\{[\delta_{Wi}]_{i=1}^{2}, [\delta_{bi}]_{i=1}^{2}\right\}$ into the parameters $\left\{\left[\hat{W}_i\right]_{i=1}^2, \left[\hat{b}_i\right]_{i=1}^2\right\}$ . When the disturbance is sufficiently small, all components of the output of the first layer remain positive. Therefore, the output after the disturbance is
+
+$$
+\begin{array}{l} \hat{Y} \left(\left[ \hat{W}_i + \delta_{Wi} \right]_{i = 1}^2, \left[ \hat{b}_i + \delta_{bi} \right]_{i = 1}^2\right) \\ = \left(\hat{W}_2 + \delta_{W2}\right) h_{s_-, s_+} \left(\left(\hat{W}_1 + \delta_{W1}\right) X + \left(\hat{b}_1 + \delta_{b1}\right) \mathbf{1}_n^T\right) + \left(\hat{b}_2 + \delta_{b2}\right) \mathbf{1}_n^T \\ \stackrel{(*)}{=} \left(\hat{W}_2 + \delta_{W2}\right) s_+ \left(\left(\hat{W}_1 + \delta_{W1}\right) X + \left(\hat{b}_1 + \delta_{b1}\right) \mathbf{1}_n^T\right) + \left(\hat{b}_2 + \delta_{b2}\right) \mathbf{1}_n^T \\ = s_+ \delta_{W2} \left(\left(\hat{W}_1 + \delta_{W1}\right) X + \left(\hat{b}_1 + \delta_{b1}\right) \mathbf{1}_n^T\right) + s_+ \hat{W}_2 \delta_{W1} X + s_+ \hat{W}_2 \delta_{b1} \mathbf{1}_n^T + \delta_{b2} \mathbf{1}_n^T \\ \quad + s_+ \hat{W}_2 \left(\hat{W}_1 X + \hat{b}_1 \mathbf{1}_n^T\right) + \hat{b}_2 \mathbf{1}_n^T \\ = \left(s_+ \delta_{W2} \left(\hat{W}_1 + \delta_{W1}\right) + s_+ \hat{W}_2 \delta_{W1}\right) X + \left(s_+ \hat{W}_2 \delta_{b1} + \delta_{b2} + s_+ \delta_{W2} \left(\hat{b}_1 + \delta_{b1}\right)\right) \mathbf{1}_n^T \\ \quad + \hat{W}_2 h_{s_-, s_+} \left(\hat{W}_1 X + \hat{b}_1 \mathbf{1}_n^T\right) + \hat{b}_2 \mathbf{1}_n^T \\ = (\tilde{W} + \delta) \left[ \begin{array}{c} X \\ \mathbf{1}_n^T \end{array} \right], \\ \end{array}
+$$
+
+where eq. $(*)$ holds because all components of $\left(\hat{W}_1 + \delta_{W1}\right)X + \left(\hat{b}_1 + \delta_{b1}\right)\mathbf{1}_n^T$ are positive, and $\delta$ is defined as the following matrix
+
+$$
+\delta = \left[ \begin{array}{c c} s_+ \left(\hat{W}_2 \delta_{W1} + \delta_{W2} \hat{W}_1 + \delta_{W2} \delta_{W1}\right) & s_+ \hat{W}_2 \delta_{b1} + \delta_{b2} + s_+ \delta_{W2} \left(\hat{b}_1 + \delta_{b1}\right) \end{array} \right].
+$$
+
+Therefore, the empirical risk $\hat{\mathcal{R}}$ with respect to $\left\{\left[\hat{W}_i + \delta_{Wi}\right]_{i = 1}^2,\left[\hat{b}_i + \delta_{bi}\right]_{i = 1}^2\right\}$ is
+
+$$
+\begin{array}{l} \hat {\mathcal {R}} \left(\left[ \hat {W} _ {i} + \delta_ {W i} \right] _ {i = 1} ^ {2}, \left[ \hat {b} _ {i} + \delta_ {b i} \right] _ {i = 1} ^ {2}\right) = \frac {1}{n} \sum_ {i = 1} ^ {n} l \left(Y _ {i}, \left(\left(\tilde {W} + \delta\right) \tilde {X}\right) _ {, i}\right) \\ = \frac {1}{n} \sum_ {i = 1} ^ {n} l \left(Y _ {i}, \left(\tilde {W} + \delta\right) \left[ \begin{array}{c} x _ {i} \\ 1 \end{array} \right]\right) \\ = f (\tilde {W} + \delta). \\ \end{array}
+$$
+
+$\delta$ approaches zero when the disturbances $\{\delta_{W1},\delta_{W2},\delta_{b1},\delta_{b2}\}$ approach zero (element-wise). Since $\tilde{W}$ is a local minimizer of $f(W)$, we have
+
+$$
+\hat{\mathcal{R}} \left(\left[ \hat{W}_i \right]_{i = 1}^2, \left[ \hat{b}_i \right]_{i = 1}^2\right) = f (\tilde{W}) \leq f (\tilde{W} + \delta) = \hat{\mathcal{R}} \left(\left[ \hat{W}_i + \delta_{Wi} \right]_{i = 1}^2, \left[ \hat{b}_i + \delta_{bi} \right]_{i = 1}^2\right). \tag {16}
+$$
+
+Because the sufficiently small disturbances $\{\delta_{W1},\delta_{W2},\delta_{b1},\delta_{b2}\}$ are arbitrary, eq. (16) demonstrates that $\left\{\left[\hat{W}_i\right]_{i = 1}^2,\left[\hat{b}_i\right]_{i = 1}^2\right\}$ is a local minimizer.
+
+The proof is completed.
+
+
+
+# Step (b). Prove the constructed local minima are spurious.
+
+Theorem 4. Under the same conditions of Lemma 4 and Assumptions 1, 2, and 4, the local minima constructed in Lemma 4 are spurious.
+
+Proof. The minimizer $\tilde{W}$ is a solution of the following equation
+
+$$
+\nabla_ {W} f (W) = 0.
+$$
+
+Specifically, we have
+
+$$
+\frac{\partial f (\tilde{W})}{\partial W_{k, j}} = 0, \quad k \in \{1, \dots, d_Y\}, \; j \in \{1, \dots, d_X + 1\}.
+$$
+
+Applying the definition of $f(W)$ (eq. (11)),
+
+$$
+\frac{\partial f (\tilde{W})}{\partial W_{k, j}} = \frac{1}{n} \sum_{i = 1}^{n} \nabla_{\hat{Y}_i} l \left(Y_i, \tilde{W} \left[ \begin{array}{c} x_i \\ 1 \end{array} \right]\right) E_{k, j} \left[ \begin{array}{c} x_i \\ 1 \end{array} \right] = \frac{1}{n} \sum_{i = 1}^{n} \left(\nabla_{\hat{Y}_i} l \left(Y_i, \tilde{W} \left[ \begin{array}{c} x_i \\ 1 \end{array} \right]\right)\right)_k \left(\left[ \begin{array}{c} x_i \\ 1 \end{array} \right]\right)_j,
+$$
+
+where $\hat{Y}_i = \tilde{W}\left[ \begin{array}{c}x_i\\ 1 \end{array} \right]$ and $\nabla_{\hat{Y}_i}l\left(Y_i,\tilde{W}\left[ \begin{array}{c}x_i\\ 1 \end{array} \right]\right)\in \mathbb{R}^{1\times d_Y}$ . Since $k$ and $j$ are arbitrary in $\{1,\dots ,d_Y\}$ and $\{1,\dots ,d_X+1\}$ , respectively, we have
+
+$$
+\boldsymbol {V} \left[ \begin{array}{l l} X ^ {T} & \mathbf {1} _ {n} \end{array} \right] = \mathbf {0}, \tag {17}
+$$
+
+where
+
+$$
+\boldsymbol{V} = \left[ \begin{array}{c c c} \left(\nabla_{\hat{Y}_1} l \left(Y_1, \tilde{W} \left[ \begin{array}{c} x_1 \\ 1 \end{array} \right]\right)\right)^T & \dots & \left(\nabla_{\hat{Y}_n} l \left(Y_n, \tilde{W} \left[ \begin{array}{c} x_n \\ 1 \end{array} \right]\right)\right)^T \end{array} \right].
+$$
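+Eq. (17) can be checked numerically for a concrete loss. The sketch below assumes the quadratic loss $l(y, p) = \|p - y\|^2 / 2$ purely for illustration (the lemma allows any differentiable loss); for this choice, the stationarity condition $\boldsymbol{V}\left[ X^T \; \mathbf{1}_n \right] = \mathbf{0}$ is exactly the least-squares normal equations:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+d_X, d_Y, n = 3, 2, 20
+X = rng.standard_normal((d_X, n))
+Y = rng.standard_normal((d_Y, n))
+X_tilde = np.vstack([X, np.ones((1, n))])
+
+# Minimize f(W) = (1/n) sum_i l(Y_i, W [x_i; 1]) for l(y, p) = ||p - y||^2 / 2:
+# the minimizer solves the least-squares problem W X_tilde ~= Y.
+W = np.linalg.lstsq(X_tilde.T, Y.T, rcond=None)[0].T
+
+# For this loss, column i of V is the gradient W [x_i; 1] - Y_i, and the
+# stationarity condition V [X^T 1_n] = 0 (eq. 17) is the normal equations.
+V = W @ X_tilde - Y
+assert np.allclose(V @ X_tilde.T, 0.0, atol=1e-8)
+```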
+
+We then define $\tilde{Y} = \tilde{W}\tilde{X}$ . Applying Assumption 1, we have
+
+$$
+\tilde{Y} - Y = \tilde{W} \tilde{X} - Y \neq \mathbf{0}.
+$$
+
+Thus, there exists some $k$ such that the $k$ -th row of $\tilde{Y} - Y$ does not equal 0.
+
+We can rearrange the rows of $\tilde{W}$ and $Y$ simultaneously, while $\tilde{W}$ remains a local minimizer of $f(W)$ and $f(\tilde{W})$ stays invariant. Without loss of generality, we assume $k = 1$ ( $k$ is the index of the row). Set $\boldsymbol{u} = \boldsymbol{V}_{1,\cdot}$ and $v_{i} = \tilde{Y}_{1,i}$ in Lemma 7. There exist a non-empty separation $I = [1:l']$ , $J = [l' + 1:n]$ of $S = \{1,2,\dots,n\}$ and a vector $\beta \in \mathbb{R}^{d_X}$ such that
+
+(1.1) for any positive constant $\alpha$ small enough and any $i\in I$ , $j\in J$ , $\tilde{Y}_{1,i} - \alpha \beta^{T}x_{i} < \tilde{Y}_{1,j} - \alpha \beta^{T}x_{j}$ ;
+(1.2) $\sum_{i\in I}\boldsymbol{V}_{1,i}\neq 0$ .
+
+Define
+
+$$
+\eta_ {1} = \tilde {Y} _ {1, l ^ {\prime}} - \alpha \beta^ {T} x _ {l ^ {\prime}} + \frac {1}{2} \left(\min _ {i \in \{l ^ {\prime} + 1, \dots , n \}} \left(\tilde {Y} _ {1, i} - \alpha \beta^ {T} x _ {i}\right) - \left(\tilde {Y} _ {1, l ^ {\prime}} - \alpha \beta^ {T} x _ {l ^ {\prime}}\right)\right).
+$$
+
+Applying (1.1), for any $i\in I$
+
+$$
+\begin{array}{l} \tilde {Y} _ {1, i} - \alpha \beta^ {T} x _ {i} - \eta_ {1} \\ = \left(\tilde {Y} _ {1, i} - \alpha \beta^ {T} x _ {i} - \tilde {Y} _ {1, l ^ {\prime}} + \alpha \beta^ {T} x _ {l ^ {\prime}}\right) - \frac {1}{2} \left(\min _ {i \in \{l ^ {\prime} + 1, \dots , n \}} \left(\tilde {Y} _ {1, i} - \alpha \beta^ {T} x _ {i}\right) - \left(\tilde {Y} _ {1, l ^ {\prime}} - \alpha \beta^ {T} x _ {l ^ {\prime}}\right)\right) \\ < 0, \\ \end{array}
+$$
+
+while for any $j\in J$
+
+$$
+\begin{array}{l} \tilde{Y}_{1, j} - \alpha \beta^T x_j - \eta_1 \\ = \left(\tilde{Y}_{1, j} - \alpha \beta^T x_j - \tilde{Y}_{1, l^{\prime}} + \alpha \beta^T x_{l^{\prime}}\right) - \frac{1}{2} \left(\min_{i \in \{l^{\prime} + 1, \dots, n\}} \left(\tilde{Y}_{1, i} - \alpha \beta^T x_i\right) - \left(\tilde{Y}_{1, l^{\prime}} - \alpha \beta^T x_{l^{\prime}}\right)\right) \\ \geq \frac{1}{2} \left(\min_{i \in \{l^{\prime} + 1, \dots, n\}} \left(\tilde{Y}_{1, i} - \alpha \beta^T x_i\right) - \left(\tilde{Y}_{1, l^{\prime}} - \alpha \beta^T x_{l^{\prime}}\right)\right) \\ > 0. \\ \end{array}
+$$
+
+Define $\gamma \in \mathbb{R}$ which satisfies
+
+$$
+| \gamma | = \left\{ \begin{array}{l l} \frac{1}{2} \min_{i \in \{l^{\prime} + 1, \ldots, s_{t + 1}\}} \alpha \beta^T (x_{l^{\prime}} - x_i), & l^{\prime} < s_{t + 1} \\ \alpha, & l^{\prime} = s_{t + 1} \end{array} \right.,
+$$
+
+where $s_{t + 1}$ is defined in Lemma 7.
+
+We argue that
+
+$$
+\left. \left| \frac {1}{2} \left(\min _ {i \in \{l ^ {\prime} + 1, \dots , n \}} \left(\tilde {Y} _ {1, i} - \alpha \beta^ {T} x _ {i}\right) - \left(\tilde {Y} _ {1, l ^ {\prime}} - \alpha \beta^ {T} x _ {l ^ {\prime}}\right)\right) \right| - | \gamma | > 0. \right. \tag {18}
+$$
+
+When $l' = s_{t+1}$ , eq. (58) holds. Also,
+
+$$
+\lim_{\substack{\alpha \to 0^{+}}}\gamma = 0,
+$$
+
+$$
+\lim_{\alpha \to 0^{+}}\left(\min_{i\in \{l^{\prime} + 1,\ldots ,n\}}\left(\tilde{Y}_{1,i} - \alpha \beta^{T}x_{i}\right) - \left(\tilde{Y}_{1,l^{\prime}} - \alpha \beta^{T}x_{l^{\prime}}\right)\right) = \min_{i\in \{l^{\prime} + 1,\ldots ,n\}}\tilde{Y}_{1,i} - \tilde{Y}_{1,l^{\prime}} > 0.
+$$
+
+Therefore, we get eq. (18) when $\alpha$ is small enough.
+
+When $l^{\prime} < s_{t + 1}$ , eq. (57) holds. Therefore,
+
+$$
+\left| \gamma \right| = \frac {1}{2} \left| \frac {1}{2} \left(\min _ {i \in \{l ^ {\prime} + 1, \dots , n \}} \left(\tilde {Y} _ {1, i} - \alpha \beta^ {T} x _ {i}\right) - \left(\tilde {Y} _ {1, l ^ {\prime}} - \alpha \beta^ {T} x _ {l ^ {\prime}}\right)\right) \right|,
+$$
+
+which immediately yields eq. (18).
+
+Therefore, for any $i \in I$ , we have that
+
+$$
+\begin{array}{l} \tilde {Y} _ {1, i} - \alpha \beta^ {T} x _ {i} - \eta_ {1} + | \gamma | \\ \leq -\frac{1}{2}\left(\min_{i\in \{\ell^{\prime} + 1,\dots,n\}}\left(\tilde{Y}_{1,i} - \alpha \beta^{T}x_{i}\right) - \left(\tilde{Y}_{1,l^{\prime}} - \alpha \beta^{T}x_{l^{\prime}}\right)\right) + |\gamma | \\ < 0, \\ \end{array}
+$$
+
+while for any $j\in J$
+
+$$
+\begin{array}{l} \tilde {Y} _ {1, j} - \alpha \beta^ {T} x _ {j} - \eta_ {1} - | \gamma | \\ \geq \frac {1}{2} \left(\min _ {i \in \{l ^ {\prime} + 1, \dots , n \}} \left(\tilde {Y} _ {1, i} - \alpha \beta^ {T} x _ {i}\right) - \left(\tilde {Y} _ {1, l ^ {\prime}} - \alpha \beta^ {T} x _ {l ^ {\prime}}\right)\right) - | \gamma | \\ > 0. \\ \end{array}
+$$
+
+Furthermore, define $\eta_{i}$ ( $2 \leq i \leq d_{Y}$ ) as negative reals with absolute value sufficiently large, such that for any $i \in [2:d_{Y}]$ and any $j \in [1:n]$ ,
+
+$$
+\tilde {Y} _ {i, j} - \eta_ {i} > 0.
+$$
+
+Now we construct a point in the parameter space whose empirical risk is smaller than the proposed local minimum in Lemma 4 as follows
+
+$$
+\tilde {W} _ {1} = \left[ \begin{array}{c} \tilde {W} _ {1, [ 1: d _ {X} ]} - \alpha \beta^ {T} \\ - \tilde {W} _ {1, [ 1: d _ {X} ]} + \alpha \beta^ {T} \\ \tilde {W} _ {2, [ 1: d _ {X} ]} \\ \vdots \\ \tilde {W} _ {d _ {Y}, [ 1: d _ {X} ]} \\ 0 _ {(d _ {1} - d _ {Y} - 1) \times d _ {X}} \end{array} \right], \tag {19}
+$$
+
+$$
+\tilde {b} _ {1} = \left[ \begin{array}{c} \tilde {W} _ {1, [ d _ {X} + 1 ]} - \eta_ {1} + \gamma \\ - \tilde {W} _ {1, [ d _ {X} + 1 ]} + \eta_ {1} + \gamma \\ \tilde {W} _ {2, [ d _ {X} + 1 ]} - \eta_ {2} \\ \vdots \\ \tilde {W} _ {d _ {Y}, [ d _ {X} + 1 ]} - \eta_ {d _ {Y}} \\ 0 _ {(d _ {1} - d _ {Y} - 1) \times 1} \end{array} \right], \tag {20}
+$$
+
+$$
+\tilde{W}_2 = \left[ \begin{array}{c c c c c c c c c} \frac{1}{s_+ + s_-} & - \frac{1}{s_+ + s_-} & 0 & 0 & \dots & 0 & 0 & \dots & 0 \\ 0 & 0 & \frac{1}{s_+} & 0 & \dots & 0 & 0 & \dots & 0 \\ 0 & 0 & 0 & \frac{1}{s_+} & \dots & 0 & 0 & \dots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & 0 & \dots & \frac{1}{s_+} & 0 & \dots & 0 \end{array} \right], \tag {21}
+$$
+
+and
+
+$$
+\tilde {b} _ {2} = \left[ \begin{array}{c} \eta_ {1} \\ \eta_ {2} \\ \vdots \\ \eta_ {d _ {Y}} \end{array} \right], \tag {22}
+$$
+
+where $\tilde{W}_i$ and $\tilde{b}_i$ are the weight matrix and the bias of the $i$ -th layer, respectively.
+
+After some calculations, the network output of the first layer before the activation in terms of $\left\{\left[\tilde{W}_i\right]_{i=1}^2,\left[\tilde{b}_i\right]_{i=1}^2\right\}$ is
+
+$$
+\tilde{Y}^{(1)} = \tilde{W}_1 X + \tilde{b}_1 \mathbf{1}_n^T = \left[ \begin{array}{c} \tilde{W}_{1,\cdot} \tilde{X} - \alpha \beta^T X - \eta_1 \mathbf{1}_n^T + \gamma \mathbf{1}_n^T \\ - \tilde{W}_{1,\cdot} \tilde{X} + \alpha \beta^T X + \eta_1 \mathbf{1}_n^T + \gamma \mathbf{1}_n^T \\ \tilde{W}_{2,\cdot} \tilde{X} - \eta_2 \mathbf{1}_n^T \\ \vdots \\ \tilde{W}_{d_Y,\cdot} \tilde{X} - \eta_{d_Y} \mathbf{1}_n^T \\ \mathbf{0}_{(d_1 - d_Y - 1) \times n} \end{array} \right].
+$$
+
+Therefore, the output of the whole neural network is
+
+$$
+\begin{array}{l} \hat{Y} = \tilde{W}_2 h_{s_-, s_+} \left(\tilde{W}_1 X + \tilde{b}_1 \mathbf{1}_n^T\right) + \tilde{b}_2 \mathbf{1}_n^T \\ = \tilde{W}_2 h_{s_-, s_+} \left(\left[ \begin{array}{c} \tilde{W}_{1,\cdot} \tilde{X} - \alpha \beta^T X - \eta_1 \mathbf{1}_n^T + \gamma \mathbf{1}_n^T \\ - \tilde{W}_{1,\cdot} \tilde{X} + \alpha \beta^T X + \eta_1 \mathbf{1}_n^T + \gamma \mathbf{1}_n^T \\ \tilde{W}_{2,\cdot} \tilde{X} - \eta_2 \mathbf{1}_n^T \\ \vdots \\ \tilde{W}_{d_Y,\cdot} \tilde{X} - \eta_{d_Y} \mathbf{1}_n^T \\ \mathbf{0}_{(d_1 - d_Y - 1) \times n} \end{array} \right]\right) + \tilde{b}_2 \mathbf{1}_n^T. \\ \end{array}
+$$
+
+Specifically, if $j \leq l'$ ,
+
+$$
+\begin{array}{l} \left(\tilde {Y} ^ {(1)} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right)\right) _ {1, j} = \tilde {W} _ {1, \cdot} \left[ \begin{array}{c} x _ {j} \\ 1 \end{array} \right] - \alpha \beta^ {T} x _ {j} - \eta_ {1} + \gamma \\ = \tilde {Y} _ {1, j} - \alpha \beta^ {T} x _ {j} - \eta_ {1} + \gamma < 0, \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \left(\tilde{Y}^{(1)} \left(\left[ \tilde{W}_i \right]_{i = 1}^2, \left[ \tilde{b}_i \right]_{i = 1}^2\right)\right)_{2, j} = - \tilde{W}_{1,\cdot} \left[ \begin{array}{c} x_j \\ 1 \end{array} \right] + \alpha \beta^T x_j + \eta_1 + \gamma \\ = - \tilde{Y}_{1, j} + \alpha \beta^T x_j + \eta_1 + \gamma > 0. \\ \end{array}
+$$
+
+Therefore, $(1,j)$ -th component of $\hat{Y}\left(\left[\tilde{W}_i\right]_{i=1}^2,\left[\tilde{b}_i\right]_{i=1}^2\right)$ is
+
+$$
+\begin{array}{l} \left(\hat{Y} \left(\left[ \tilde{W}_i \right]_{i = 1}^2, \left[ \tilde{b}_i \right]_{i = 1}^2\right)\right)_{1, j} \\ = \left(\frac{1}{s_+ + s_-}, - \frac{1}{s_+ + s_-}, 0, \dots, 0\right) h_{s_-, s_+} \left(\left[ \begin{array}{c} \tilde{W}_{1,\cdot} \tilde{X} - \alpha \beta^T X - \eta_1 \mathbf{1}_n^T + \gamma \mathbf{1}_n^T \\ - \tilde{W}_{1,\cdot} \tilde{X} + \alpha \beta^T X + \eta_1 \mathbf{1}_n^T + \gamma \mathbf{1}_n^T \\ \tilde{W}_{2,\cdot} \tilde{X} - \eta_2 \mathbf{1}_n^T \\ \vdots \\ \tilde{W}_{d_Y,\cdot} \tilde{X} - \eta_{d_Y} \mathbf{1}_n^T \\ \mathbf{0}_{(d_1 - d_Y - 1) \times n} \end{array} \right]\right)_{\cdot, j} + \eta_1 \\ = \frac{1}{s_+ + s_-} h_{s_-, s_+} \left(\tilde{Y}_{1, j} - \alpha \beta^T x_j - \eta_1 + \gamma\right) - \frac{1}{s_+ + s_-} h_{s_-, s_+} \left(- \tilde{Y}_{1, j} + \alpha \beta^T x_j + \eta_1 + \gamma\right) + \eta_1 \\ = \frac{s_-}{s_+ + s_-} \left(\tilde{Y}_{1, j} - \alpha \beta^T x_j - \eta_1 + \gamma\right) - \frac{s_+}{s_+ + s_-} \left(- \tilde{Y}_{1, j} + \alpha \beta^T x_j + \eta_1 + \gamma\right) + \eta_1 \\ = \tilde{Y}_{1, j} - \alpha \beta^T x_j + \frac{s_- - s_+}{s_+ + s_-} \gamma; \tag {23} \\ \end{array}
+$$
+
+Similarly, when $j > l'$ , the $(1,j)$ -th component is
+
+$$
+\begin{array}{l} \left(\hat{Y} \left(\left[ \tilde{W}_i \right]_{i = 1}^2, \left[ \tilde{b}_i \right]_{i = 1}^2\right)\right)_{1, j} \\ = \frac{s_+}{s_+ + s_-} \left(\tilde{Y}_{1, j} - \alpha \beta^T x_j - \eta_1 + \gamma\right) - \frac{s_-}{s_+ + s_-} \left(- \tilde{Y}_{1, j} + \alpha \beta^T x_j + \eta_1 + \gamma\right) + \eta_1 \\ = \tilde{Y}_{1, j} - \alpha \beta^T x_j + \frac{s_+ - s_-}{s_+ + s_-} \gamma, \tag {24} \\ \end{array}
+$$
+
+and
+
+$$
+\left(\hat {Y} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right)\right) _ {i, j} = \frac {s _ {+}}{s _ {+}} \left(\tilde {Y} _ {i, j} - \eta_ {i}\right) + \eta_ {i} = \tilde {Y} _ {i, j}, i \geq 2. \tag {25}
+$$
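+The computations in eqs. (23)-(25) rest on a single mechanism: the mirrored pair of hidden units read out by the first row of $\tilde{W}_2$ reproduces its input $z$ up to a shift of $\pm\frac{s_+ - s_-}{s_+ + s_-}\gamma$, with the sign determined by the sign of $z$. A small numerical check (slopes and $\gamma$ are arbitrary illustrations):
+
+```python
+import numpy as np
+
+def h(x, s_minus, s_plus):
+    # Two-piece linear activation: slope s_minus for x < 0, s_plus for x >= 0.
+    return np.where(x < 0, s_minus * x, s_plus * x)
+
+s_minus, s_plus = 0.2, 1.3
+gamma = 1e-3
+
+# Paired units from the first row of W2-tilde: (h(z + g) - h(-z + g)) / (s_+ + s_-).
+def pair(z, g):
+    return (h(z + g, s_minus, s_plus) - h(-z + g, s_minus, s_plus)) / (s_plus + s_minus)
+
+z_neg, z_pos = -0.7, 0.7   # |z| >> |gamma|, so the linear pieces are as in the proof
+shift = (s_plus - s_minus) / (s_plus + s_minus) * gamma
+
+# eq. (23): for z < 0 the pair returns z + (s_- - s_+)/(s_+ + s_-) * gamma.
+assert np.isclose(pair(z_neg, gamma), z_neg - shift)
+# eq. (24): for z > 0 the pair returns z + (s_+ - s_-)/(s_+ + s_-) * gamma.
+assert np.isclose(pair(z_pos, gamma), z_pos + shift)
+```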
+
+Thus, the empirical risk of the neural network with parameters $\left\{\left[\tilde{W}_i\right]_{i=1}^2,\left[\tilde{b}_i\right]_{i=1}^2\right\}$ is
+
+$$
+\begin{array}{l} \hat{\mathcal{R}} \left(\left[ \tilde{W}_i \right]_{i = 1}^2, \left[ \tilde{b}_i \right]_{i = 1}^2\right) \\ = \frac{1}{n} \sum_{i = 1}^{n} l \left(Y_i, \tilde{W}_2 h_{s_-, s_+} \left(\tilde{W}_1 x_i + \tilde{b}_1\right) + \tilde{b}_2\right) \\ = \frac{1}{n} \sum_{i = 1}^{n} \left(l \left(Y_i, \tilde{W} \left[ \begin{array}{c} x_i \\ 1 \end{array} \right]\right) + \nabla_{\hat{Y}_i} l \left(Y_i, \tilde{W} \left[ \begin{array}{c} x_i \\ 1 \end{array} \right]\right) \left(\tilde{W}_2 h_{s_-, s_+} \left(\tilde{W}_1 x_i + \tilde{b}_1\right) + \tilde{b}_2 - \tilde{W} \left[ \begin{array}{c} x_i \\ 1 \end{array} \right]\right)\right) \\ \quad + \sum_{i = 1}^{n} o \left(\left\| \tilde{W}_2 h_{s_-, s_+} \left(\tilde{W}_1 x_i + \tilde{b}_1\right) + \tilde{b}_2 - \tilde{W} \left[ \begin{array}{c} x_i \\ 1 \end{array} \right] \right\|\right). \tag {26} \\ \end{array}
+$$
+
+Applying eqs. (23), (24), and (25), we have
+
+$$
+\begin{array}{l} \sum_{i = 1}^{n} \nabla_{\hat{Y}_i} l \left(Y_i, \tilde{W} \left[ \begin{array}{c} x_i \\ 1 \end{array} \right]\right) \left(\tilde{W}_2 h_{s_-, s_+} \left(\tilde{W}_1 x_i + \tilde{b}_1\right) + \tilde{b}_2 - \tilde{W} \left[ \begin{array}{c} x_i \\ 1 \end{array} \right]\right) \\ \stackrel{(*)}{=} \sum_{i = 1}^{l^{\prime}} \boldsymbol{V}_{1, i} \left(- \alpha \beta^T x_i + \frac{s_- - s_+}{s_+ + s_-} \gamma\right) + \sum_{i = l^{\prime} + 1}^{n} \boldsymbol{V}_{1, i} \left(- \alpha \beta^T x_i - \frac{s_- - s_+}{s_+ + s_-} \gamma\right) \\ = 2 \gamma \frac{s_- - s_+}{s_+ + s_-} \sum_{i = 1}^{l^{\prime}} \boldsymbol{V}_{1, i}, \\ \end{array}
+$$
+
+where eq. $(\ast)$ is because
+
+$$
+\left(\tilde{W}_2 h_{s_-, s_+} \left(\tilde{W}_1 x_i + \tilde{b}_1\right) + \tilde{b}_2 - \tilde{W} \left[ \begin{array}{c} x_i \\ 1 \end{array} \right]\right)_j = \left\{ \begin{array}{l l} - \alpha \beta^T x_i + \frac{s_- - s_+}{s_+ + s_-} \gamma, & j = 1, \; i \leq l^{\prime} \\ - \alpha \beta^T x_i - \frac{s_- - s_+}{s_+ + s_-} \gamma, & j = 1, \; i > l^{\prime} \\ 0, & j \geq 2 \end{array} \right..
+$$
+
+The last step also uses eq. (17), which gives $\sum_{i=1}^{n} \boldsymbol{V}_{1,i} x_i = \mathbf{0}$ and $\sum_{i=l'+1}^{n} \boldsymbol{V}_{1,i} = -\sum_{i=1}^{l'} \boldsymbol{V}_{1,i}$ . Furthermore, note that $\alpha = O(\gamma)$ (from the definition of $\gamma$ ). We have
+
+$$
+\begin{array}{l} \sum_{i = 1}^{n} o \left(\left\| \tilde{W}_2 h_{s_-, s_+} \left(\tilde{W}_1 x_i + \tilde{b}_1\right) + \tilde{b}_2 - \tilde{W} \left[ \begin{array}{c} x_i \\ 1 \end{array} \right] \right\|\right) \\ = \sum_{i = 1}^{n} o \left(\sqrt{\sum_{j = 1}^{d_Y} \left(\tilde{W}_2 h_{s_-, s_+} \left(\tilde{W}_1 x_i + \tilde{b}_1\right) + \tilde{b}_2 - \tilde{W} \left[ \begin{array}{c} x_i \\ 1 \end{array} \right]\right)_j^2}\right) \\ = o (\gamma). \\ \end{array}
+$$
+
+Let $\alpha$ be sufficiently small while $\operatorname{sgn}(\gamma) = -\operatorname{sgn}\left(\frac{s_- - s_+}{s_+ + s_-}\sum_{i=1}^{l'} \boldsymbol{V}_{1,i}\right)$ . We have
+
+$$
+\begin{array}{l} \sum_{i = 1}^{n} l \left(Y_i, \tilde{W}_2 h_{s_-, s_+} \left(\tilde{W}_1 x_i + \tilde{b}_1\right) + \tilde{b}_2\right) - \sum_{i = 1}^{n} l \left(Y_i, \tilde{W} \left[ \begin{array}{c} x_i \\ 1 \end{array} \right]\right) \\ = 2 \gamma \frac{s_- - s_+}{s_+ + s_-} \sum_{i = 1}^{l^{\prime}} \boldsymbol{V}_{1, i} + o (\gamma) \\ \stackrel{(**)}{<} 0, \\ \end{array}
+$$
+
+where inequality $(\ast \ast)$ follows from (1.2).
+
+From Lemma 4, there exists a local minimizer $\left\{\left[\hat{W}_i\right]_{i=1}^2, \left[\hat{b}_i\right]_{i=1}^2\right\}$ whose empirical risk equals $f(\tilde{W})$ . Meanwhile, we have just constructed a point in the parameter space with empirical risk smaller than $f(\tilde{W})$ .
+
+Therefore, $\left\{\left[\hat{W}_i\right]_{i=1}^2,\left[\hat{b}_i\right]_{i=1}^2\right\}$ is a spurious local minimum.
+
+The proof is completed.
+
+
+
+# A.3 STAGE (2)
+
+Stage (2) proves that neural networks with arbitrary hidden layers and two-piece linear activation $h_{s_{-},s_{+}}$ have spurious local minima. Here, we still assume $s_+ \neq 0$ . We have justified this assumption in Stage (1).
+
+This stage is organized similarly to Stage (1): (a) Lemma 5 constructs a local minimum; and (b) Theorem 5 proves the minimum is spurious.
+
+# Step (a). Construct local minima of the loss surface.
+
+Lemma 5. Suppose that all the conditions of Lemma 4 hold, while the neural network has $L - 1$ hidden layers. Then, this network has a local minimum at
+
+$$
+\hat {W} _ {1} ^ {\prime} = \left[ \begin{array}{c} \left[ \tilde {W} \right] _ {\cdot , [ 1: d _ {X} ]} \\ \mathbf {0} _ {(d _ {1} - d _ {Y}) \times d _ {X}} \end{array} \right], \hat {b} _ {1} ^ {\prime} = \left[ \begin{array}{c} \left[ \tilde {W} \right] _ {\cdot , d _ {X} + 1} - \eta \mathbf {1} _ {d _ {Y}} \\ - \eta \mathbf {1} _ {d _ {1} - d _ {Y}} \end{array} \right],
+$$
+
+$$
+\hat {W} _ {i} ^ {\prime} = \frac {1}{s _ {+}} \sum_ {j = 1} ^ {d _ {Y}} E _ {j, j} + \frac {1}{s _ {+}} \sum_ {j = d _ {Y} + 1} ^ {d _ {i}} E _ {j, (d _ {Y} + 1)}, \hat {b} _ {i} ^ {\prime} = 0 (i = 2, 3, \dots , L - 1),
+$$
+
+and
+
+$$
+\hat {W} _ {L} ^ {\prime} = \left[ \begin{array}{c c} \frac {1}{s _ {+}} I _ {d _ {Y}} & \mathbf {0} _ {d _ {Y} \times (d _ {L - 1} - d _ {Y})} \end{array} \right], \hat {b} _ {L} ^ {\prime} = \eta \mathbf {1} _ {d _ {Y}},
+$$
+
+where $\hat{W}_i^{\prime}$ and $\hat{b}_i^\prime$ are the weight matrix and the bias of the $i$ -th layer, respectively, and $\eta$ is a negative constant with absolute value sufficiently large such that
+
+$$
+\tilde {W} \tilde {X} - \eta \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} > \mathbf {0}, \tag {27}
+$$
+
+where $>$ is element-wise.
+
+Proof. Recall the discussion in Lemma 4 that all components of $\hat{W}_1X + \hat{b}_1\mathbf{1}_n^T$ are positive. Specifically,
+
+$$
+\hat {W} _ {1} X + \hat {b} _ {1} \mathbf {1} _ {n} ^ {T} = \left[ \begin{array}{c} \tilde {Y} - \eta \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ - \eta \mathbf {1} _ {d _ {1} - d _ {Y}} \mathbf {1} _ {n} ^ {T} \end{array} \right],
+$$
+
+where $\tilde{Y}$ is defined in Lemma 4.
+
+Similar to the discussion in Lemma 4, when the parameters equal $\left\{\left[\hat{W}_i^{\prime}\right]_{i = 1}^{L},\left[\hat{b}_i^{\prime}\right]_{i = 1}^{L}\right\}$ , the output of the first layer before the activation function is
+
+$$
+\tilde {Y} ^ {(1)} = \hat {W} _ {1} ^ {\prime} X + \hat {b} _ {1} ^ {\prime} \mathbf {1} _ {n} ^ {T} = \left[ \begin{array}{l} \tilde {Y} - \eta \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ - \eta \mathbf {1} _ {d _ {1} - d _ {Y}} \mathbf {1} _ {n} ^ {T} \end{array} \right],
+$$
+
+and
+
+$$
+\tilde {Y} - \eta \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} > \mathbf {0}, \tag {28}
+$$
+
+$$
+- \eta \mathbf {1} _ {d _ {1} - d _ {Y}} \mathbf {1} _ {n} ^ {T} > \mathbf {0}. \tag {29}
+$$
+
+Here $>$ is defined element-wise.
+
+After the activation function, the output of the first layer is
+
+$$
+Y ^ {(1)} = h _ {s _ {-}, s _ {+}} (\hat {W} _ {1} ^ {\prime} X + \hat {b} _ {1} ^ {\prime} \mathbf {1} _ {n} ^ {T}) = s _ {+} (\hat {W} _ {1} ^ {\prime} X + \hat {b} _ {1} ^ {\prime} \mathbf {1} _ {n} ^ {T}) = s _ {+} \left[ \begin{array}{c} \tilde {Y} - \eta \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ - \eta \mathbf {1} _ {d _ {1} - d _ {Y}} \mathbf {1} _ {n} ^ {T} \end{array} \right].
+$$
+
+We prove by induction that, for all $i \in [1:L - 1]$ ,
+
+$$
+\tilde{Y}^{(i)} > 0 \text{ element-wise}, \tag {30}
+$$
+
+$$
+Y ^ {(i)} = s _ {+} \left[ \begin{array}{l} \tilde {Y} - \eta \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ - \eta \mathbf {1} _ {d _ {i} - d _ {Y}} \mathbf {1} _ {n} ^ {T} \end{array} \right]. \tag {31}
+$$
+
+Suppose that for $1 \leq k \leq L - 2$ , $\tilde{Y}^{(k)}$ is positive (element-wise) and
+
+$$
+Y ^ {(k)} = s _ {+} \left[ \begin{array}{c} \tilde {Y} - \eta \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ - \eta \mathbf {1} _ {d _ {k} - d _ {Y}} \mathbf {1} _ {n} ^ {T} \end{array} \right].
+$$
+
+Then the output of the $(k + 1)$ -th layer before the activation is
+
+$$
+\begin{array}{l} \tilde {Y} ^ {(k + 1)} = \hat {W} _ {k + 1} ^ {\prime} Y ^ {(k)} + \hat {b} _ {k + 1} ^ {\prime} \mathbf {1} _ {n} ^ {T} \\ = \frac {1}{s _ {+}} \left(\sum_ {j = 1} ^ {d _ {Y}} E _ {j, j} + \sum_ {j = d _ {Y} + 1} ^ {d _ {k + 1}} E _ {j, (d _ {Y} + 1)}\right) s _ {+} \left[ \begin{array}{c} \tilde {Y} - \eta \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ - \eta \mathbf {1} _ {d _ {k} - d _ {Y}} \mathbf {1} _ {n} ^ {T} \end{array} \right] \\ = \left[ \begin{array}{c} \tilde {Y} - \eta \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ - \eta \mathbf {1} _ {d _ {k + 1} - d _ {Y}} \mathbf {1} _ {n} ^ {T} \end{array} \right]. \\ \end{array}
+$$
+
+Applying eqs. (28) and (29), we have
+
+$$
+\tilde {Y} ^ {(k + 1)} = \left[ \begin{array}{c} \tilde {Y} - \eta \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ - \eta \mathbf {1} _ {d _ {k + 1} - d _ {Y}} \mathbf {1} _ {n} ^ {T} \end{array} \right] > \mathbf {0},
+$$
+
+where $>$ is defined element-wise. Therefore,
+
+$$
+Y ^ {(k + 1)} = h _ {s _ {-}, s _ {+}} \left(\tilde {Y} ^ {(k + 1)}\right) = s _ {+} \tilde {Y} ^ {(k + 1)} = s _ {+} \left[ \begin{array}{c} \tilde {Y} - \eta \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ - \eta \mathbf {1} _ {d _ {k + 1} - d _ {Y}} \mathbf {1} _ {n} ^ {T} \end{array} \right].
+$$
+
+We thereby prove eqs. (30) and (31).
+
+Therefore, $Y^{(L)}$ can be calculated as
+
+$$
+\begin{array}{l} \hat{Y} = Y^{(L)} = \hat{W}_L^{\prime} Y^{(L - 1)} + \hat{b}_L^{\prime} \mathbf{1}_n^T \\ = \frac{1}{s_+} \left[ \begin{array}{c c} I_{d_Y} & \mathbf{0}_{d_Y \times (d_{L - 1} - d_Y)} \end{array} \right] s_+ \left[ \begin{array}{c} \tilde{Y} - \eta \mathbf{1}_{d_Y} \mathbf{1}_n^T \\ - \eta \mathbf{1}_{d_{L - 1} - d_Y} \mathbf{1}_n^T \end{array} \right] + \eta \mathbf{1}_{d_Y} \mathbf{1}_n^T \\ = \tilde{Y}. \tag {32} \\ \end{array}
+$$
+
+Next, we show that the empirical risk does not decrease in a neighbourhood of $\left\{\left[\hat{W}_i^{\prime}\right]_{i = 1}^{L},\left[\hat{b}_i^{\prime}\right]_{i = 1}^{L}\right\}$, which proves that $\left\{\left[\hat{W}_i^{\prime}\right]_{i = 1}^{L},\left[\hat{b}_i^{\prime}\right]_{i = 1}^{L}\right\}$ is a local minimizer.
+
+Let $\left\{\left[\hat{W}_i' + \delta_{Wi}'\right]_{i=1}^L, \left[\hat{b}_i' + \delta_{bi}'\right]_{i=1}^L\right\}$ be a point in the parameter space that is close enough to the point $\left\{\left[\hat{W}_i'\right]_{i=1}^L, \left[\hat{b}_i'\right]_{i=1}^L\right\}$ . Since the disturbances $\delta_{Wi}'$ and $\delta_{bi}'$ are both close to 0 (element-wise), all components of $\tilde{Y}^{(i)}\left(\left[\hat{W}_i' + \delta_{Wi}'\right]_{i=1}^L, \left[\hat{b}_i' + \delta_{bi}'\right]_{i=1}^L\right)$ remain positive. Therefore, the output of the neural network with parameters $\left\{\left[\hat{W}_i' + \delta_{Wi}'\right]_{i=1}^L, \left[\hat{b}_i' + \delta_{bi}'\right]_{i=1}^L\right\}$ is
+
+$$
+\begin{array}{l} \hat {Y} \left(\left[ \hat {W} _ {i} ^ {\prime} + \delta_ {W i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} + \delta_ {b i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) \\ = (\hat {W} _ {L} ^ {\prime} + \delta_ {W L} ^ {\prime}) h _ {s _ {-}, s _ {+}} \left(\dots h _ {s _ {-}, s _ {+}} \left(\left(\hat {W} _ {1} ^ {\prime} + \delta_ {W 1} ^ {\prime}\right) X + \left(\hat {b} _ {1} ^ {\prime} + \delta_ {b 1} ^ {\prime}\right) 1 _ {n} ^ {T}\right) \dots\right) \\ + \left(\hat {b} _ {L} ^ {\prime} + \delta_ {b L} ^ {\prime}\right) \mathbf {1} _ {n} ^ {T} \\ = (\hat {W} _ {L} ^ {\prime} + \delta_ {W L} ^ {\prime}) s _ {+} \left(\dots s _ {+} \left(\left(\hat {W} _ {1} ^ {\prime} + \delta_ {W 1} ^ {\prime}\right) X + \left(\hat {b} _ {1} ^ {\prime} + \delta_ {b 1} ^ {\prime}\right) 1 _ {n} ^ {T}\right) \dots\right) \\ + \left(\hat {b} _ {L} ^ {\prime} + \delta_ {b L} ^ {\prime}\right) \mathbf {1} _ {n} ^ {T} \\ = M _ {1} X + M _ {2} 1 _ {n} ^ {T}, \\ \end{array}
+$$
+
+where $M_{1}$ and $M_{2}$ are obtained from $\left\{\left[\hat{W}_i^{\prime}\right]_{i = 1}^{L},\left[\hat{b}_i^{\prime}\right]_{i = 1}^{L}\right\}$ and $\left\{\left[\delta_{Wi}^{\prime}\right]_{i = 1}^{L},\left[\delta_{bi}^{\prime}\right]_{i = 1}^{L}\right\}$ through a sequence of matrix multiplications and additions.
+
+Rewrite the output as
+
+$$
+M _ {1} X + M _ {2} 1 _ {n} ^ {T} = \left[ \begin{array}{c c} M _ {1} & M _ {2} \end{array} \right] \left[ \begin{array}{c} X \\ 1 _ {n} ^ {T} \end{array} \right].
+$$
+
+Therefore, the empirical risk $\hat{\mathcal{R}}$ before and after the disturbance can be expressed as $f(\tilde{W})$ and $f\left(\left[M_1 \quad M_2\right]\right)$ , respectively.
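+The block-matrix rewrite above is elementary and easy to confirm numerically; a minimal sketch with arbitrary (hypothetical) shapes:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+M1 = rng.standard_normal((3, 4))   # coefficient of X
+M2 = rng.standard_normal((3, 1))   # coefficient of the all-ones row
+X = rng.standard_normal((4, 5))
+ones_row = np.ones((1, 5))         # 1_n^T with n = 5
+
+lhs = M1 @ X + M2 @ ones_row
+rhs = np.hstack([M1, M2]) @ np.vstack([X, ones_row])  # [M1 M2] [X; 1_n^T]
+assert np.allclose(lhs, rhs)
+```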
+
+When the disturbances $\left\{[\delta_{Wi}^{\prime}]_{i = 1}^{L},[\delta_{bi}^{\prime}]_{i = 1}^{L}\right\}$ approach 0 (element-wise), $\left[M_1 \quad M_2\right]$ approaches $\tilde{W}$ . Therefore, when $\left\{[\delta_{Wi}^{\prime}]_{i = 1}^{L},[\delta_{bi}^{\prime}]_{i = 1}^{L}\right\}$ are all small enough, we have
+
+$$
+\begin{array}{l} \hat {\mathcal {R}} \left(\left[ \hat {W} _ {i} ^ {\prime} + \delta_ {W i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} + \delta_ {b i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) \\ = f \left(\left[ \begin{array}{c c} M _ {1} & M _ {2} \end{array} \right]\right) \\ \geq f (\tilde {W}) \\ = \hat {\mathcal {R}} \left(\left[ \hat {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right). \tag {33} \\ \end{array}
+$$
+
+Since $\left\{\left[\hat{W}_i^{\prime} + \delta_{Wi}^{\prime}\right]_{i = 1}^{L},\left[\hat{b}_{i}^{\prime} + \delta_{bi}^{\prime}\right]_{i = 1}^{L}\right\}$ is arbitrary within a sufficiently small neighbourhood of $\left\{\left[\hat{W}_i^{\prime}\right]_{i = 1}^{L},\left[\hat{b}_{i}^{\prime}\right]_{i = 1}^{L}\right\}$ , eq. (33) yields that $\left\{\left[\hat{W}_i^{\prime}\right]_{i = 1}^{L},\left[\hat{b}_{i}^{\prime}\right]_{i = 1}^{L}\right\}$ is a local minimizer.
+
+# Step (b). Prove the constructed local minima are spurious.
+
+Theorem 5. Under the same conditions of Lemma 5 and Assumptions 1, 2, and 4, the local minima constructed in Lemma 5 are spurious.
+
+Proof. We first construct the weight matrices and biases of all layers as follows,
+
+$$
+\tilde {W} _ {1} ^ {\prime} = \tilde {W} _ {1}, \tilde {b} _ {1} ^ {\prime} = \tilde {b} _ {1},
+$$
+
+$$
+\tilde {W} _ {2} ^ {\prime} = \left[ \begin{array}{c} \tilde {W} _ {2} \\ \mathbf {0} _ {(d _ {2} - d _ {Y}) \times d _ {1}} \end{array} \right], \tilde {b} _ {2} ^ {\prime} = \lambda \mathbf {1} _ {d _ {2}} + \left[ \begin{array}{c} \tilde {b} _ {2} \\ \mathbf {0} _ {(d _ {2} - d _ {Y}) \times 1} \end{array} \right],
+$$
+
+$$
+\tilde {W} _ {i} ^ {\prime} = \frac {1}{s _ {+}} \sum_ {j = 1} ^ {d _ {Y}} E _ {j, j}, \quad \tilde {b} _ {i} ^ {\prime} = \mathbf {0} _ {d _ {i}} \quad (i = 3, 4, \dots , L - 1),
+$$
+
+and
+
+$$
+\tilde {W} _ {L} ^ {\prime} = \frac {1}{s _ {+}} \sum_ {j = 1} ^ {d _ {Y}} E _ {j, j}, \quad \tilde {b} _ {L} ^ {\prime} = - \lambda \mathbf {1} _ {d _ {Y}},
+$$
+
+where $\tilde{W}_1$ , $\tilde{W}_2$ , $\tilde{b}_1$ and $\tilde{b}_2$ are defined by eqs. (19), (20), (21), and (22), respectively, and $\lambda$ is a sufficiently large positive real such that
+
+$$
+\hat {Y} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) + \lambda \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} > \mathbf {0}, \tag {34}
+$$
+
+where $>$ is defined element-wise.
+
+We argue that $\left\{\left[\tilde{W}_i^{\prime}\right]_{i = 1}^{L},\left[\tilde{b}_i^{\prime}\right]_{i = 1}^{L}\right\}$ corresponds to a smaller empirical risk than $f(\tilde{W})$ which is defined in Lemma 4.
+
+First, Theorem 4 has proved that the point $\{\tilde{W}_1,\tilde{W}_2,\tilde{b}_1,\tilde{b}_2\}$ corresponds to a smaller empirical risk than $f(\tilde{W})$ .
+
+We prove by induction that for any $i\in \{3,4,\dots,L - 1\}$ ,
+
+$$
+\tilde {Y} ^ {(i)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) \geq \mathbf {0}, \text {element-wise}, \tag {35}
+$$
+
+$$
+Y ^ {(i)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) = s _ {+} \left[ \begin{array}{c} \hat {Y} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) + \lambda \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ 0 _ {\left(d _ {i} - d _ {Y}\right) \times n} \end{array} \right]. \tag {36}
+$$
+
+Clearly, the output of the first layer before the activation is
+
+$$
+\tilde {Y} ^ {(1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) = \tilde {W} _ {1} ^ {\prime} X + \tilde {b} _ {1} ^ {\prime} \mathbf {1} _ {n} ^ {T} = \tilde {W} _ {1} X + \tilde {b} _ {1} \mathbf {1} _ {n} ^ {T} = \tilde {Y} ^ {(1)} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right).
+$$
+
+Therefore, the output of the first layer after the activation is
+
+$$
+\begin{array}{l} Y ^ {(1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) = h _ {s _ {-}, s _ {+}} \left(\tilde {Y} ^ {(1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right)\right) \\ = h _ {s _ {-}, s _ {+}} \left(\tilde {Y} ^ {(1)} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right)\right) \\ = Y ^ {(1)} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right). \\ \end{array}
+$$
+
+Thus, the output of the second layer before the activation is
+
+$$
+\begin{array}{l} \tilde {Y} ^ {(2)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) = \tilde {W} _ {2} ^ {\prime} Y ^ {(1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) + \tilde {b} _ {2} ^ {\prime} \mathbf {1} _ {n} ^ {T} \\ = \left[ \begin{array}{c} \tilde {W} _ {2} \\ \mathbf {0} _ {(d _ {2} - d _ {Y}) \times d _ {1}} \end{array} \right] Y ^ {(1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) + \left[ \begin{array}{c} \tilde {b} _ {2} \\ \mathbf {0} _ {(d _ {2} - d _ {Y}) \times 1} \end{array} \right] \mathbf {1} _ {n} ^ {T} \\ + \lambda \mathbf {1} _ {d _ {2}} \mathbf {1} _ {n} ^ {T} \\ = \left[ \begin{array}{c} \hat {Y} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) + \lambda \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ \lambda \mathbf {1} _ {d _ {2} - d _ {Y}} \mathbf {1} _ {n} ^ {T} \end{array} \right]. \\ \end{array}
+$$
+
+Applying the definition of $\lambda$ (eq. (34)),
+
+$$
+\tilde {Y} ^ {(2)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) > \mathbf {0}, \text {element-wise}. \tag {37}
+$$
+
+Therefore, the output of the second layer after the activation is
+
+$$
+\begin{array}{l} Y ^ {(2)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) = h _ {s _ {-}, s _ {+}} \left(\tilde {Y} ^ {(2)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right)\right) \\ = s _ {+} \left[ \begin{array}{c} \hat {Y} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) + \lambda \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ \lambda \mathbf {1} _ {d _ {2} - d _ {Y}} \mathbf {1} _ {n} ^ {T} \end{array} \right]. \\ \end{array}
+$$
+
+Meanwhile, the output of the third layer before the activation, $\tilde{Y}^{(3)}\left(\left[\tilde{W}_i^{\prime}\right]_{i = 1}^{L},\left[\tilde{b}_i^{\prime}\right]_{i = 1}^{L}\right)$ , can be calculated from $Y^{(2)}\left(\left[\tilde{W}_i^{\prime}\right]_{i = 1}^L,\left[\tilde{b}_i^{\prime}\right]_{i = 1}^L\right)$ :
+
+$$
+\begin{array}{l} \tilde {Y} ^ {(3)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) = \tilde {W} _ {3} ^ {\prime} Y ^ {(2)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) + \tilde {b} _ {3} ^ {\prime} \mathbf {1} _ {n} ^ {T} \\ = \frac {1}{s _ {+}} \left(\sum_ {i = 1} ^ {d _ {Y}} E _ {i, i}\right) s _ {+} \left[ \begin{array}{c} \hat {Y} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) + \lambda \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ \lambda \mathbf {1} _ {d _ {2} - d _ {Y}} \mathbf {1} _ {n} ^ {T} \end{array} \right] \\ = \left[ \begin{array}{c} \hat {Y} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) + \lambda \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ \mathbf {0} _ {(d _ {3} - d _ {Y}) \times n} \end{array} \right]. \\ \end{array}
+$$
+
+Applying eq. (37),
+
+$$
+\tilde {Y} ^ {(3)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) \geq \mathbf {0}, \text {element-wise}. \tag {38}
+$$
+
+Therefore, the output of the third layer after the activation is
+
+$$
+\begin{array}{l} Y ^ {(3)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) = h _ {s _ {-}, s _ {+}} \left(\tilde {Y} ^ {(3)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right)\right) \\ = s _ {+} \left(\tilde {Y} ^ {(3)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right)\right) \\ = s _ {+} \left[ \begin{array}{c} \hat {Y} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) + \lambda \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ \mathbf {0} _ {(d _ {3} - d _ {Y}) \times n} \end{array} \right]. \\ \end{array}
+$$
+
+Suppose eqs. (35) and (36) hold for some $k$ ( $3 \leq k \leq L - 2$ ). Then for $k + 1$ ,
+
+$$
+\begin{array}{l} \tilde {Y} ^ {(k + 1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) = \tilde {W} _ {k + 1} ^ {\prime} Y ^ {(k)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) + \tilde {b} _ {k + 1} ^ {\prime} \mathbf {1} _ {n} ^ {T} \\ = \frac {1}{s _ {+}} \left(\sum_ {j = 1} ^ {d _ {Y}} E _ {j, j}\right) s _ {+} \left[ \begin{array}{c} \hat {Y} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) + \lambda \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ \mathbf {0} _ {(d _ {k} - d _ {Y}) \times n} \end{array} \right] \\ = \left[ \begin{array}{c} \hat {Y} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) + \lambda \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ \mathbf {0} _ {(d _ {k + 1} - d _ {Y}) \times n} \end{array} \right]. \\ \end{array}
+$$
+
+Applying eq. (38),
+
+$$
+\tilde {Y} ^ {(k + 1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) \geq \mathbf {0}, \text {element-wise}. \tag {39}
+$$
+
+Therefore, the output of the $(k + 1)$ -th layer after the activation is
+
+$$
+\begin{array}{l} Y ^ {(k + 1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) = h _ {s _ {-}, s _ {+}} \left(\tilde {Y} ^ {(k + 1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right)\right) \\ = s _ {+} \tilde {Y} ^ {(k + 1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) \\ = s _ {+} \left[ \begin{array}{c} \hat {Y} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) + \lambda \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ \mathbf {0} _ {(d _ {k + 1} - d _ {Y}) \times n} \end{array} \right]. \\ \end{array}
+$$
+
+Therefore, eqs. (35) and (36) hold for any $i \in \{3, 4, \dots, L - 1\}$ .
+
+Finally, the output of the network is
+
+$$
+\begin{array}{l} \hat {Y} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) = Y ^ {(L)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) \\ = \tilde {W} _ {L} ^ {\prime} Y ^ {(L - 1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) + \tilde {b} _ {L} ^ {\prime} \mathbf {1} _ {n} ^ {T} \\ = \left(\frac {1}{s _ {+}} \sum_ {i = 1} ^ {d _ {Y}} E _ {i, i}\right) s _ {+} \left[ \begin{array}{c} \hat {Y} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) + \lambda \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ 0 _ {(d _ {L - 1} - d _ {Y}) \times n} \end{array} \right] \\ - \lambda \mathbf {1} _ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ = \hat {Y} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right). \\ \end{array}
+$$
+
+Applying Theorem 4, we have
+
+$$
+\hat {\mathcal {R}} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) = \hat {\mathcal {R}} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) < f (\tilde {W}).
+$$
+
+The proof is completed.
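+The construction in this proof — pad the two-layer network with scaled partial-identity layers $\frac{1}{s_+}\sum_{j \leq d_Y} E_{j,j}$, shift by $\lambda$ so that every intermediate pre-activation stays non-negative, and undo the shift at the output — can be checked numerically. Below is a minimal sketch; the dimensions, slopes, and leaky-ReLU-type form of $h_{s_-,s_+}$ are illustrative assumptions, not the paper's exact setting.
+
+```python
+import numpy as np
+
+s_m, s_p = 0.1, 0.5                       # slopes of h_{s_-, s_+}
+h = lambda x: np.where(x >= 0, s_p * x, s_m * x)
+
+rng = np.random.default_rng(1)
+dX, d1, dY, n, L = 3, 4, 2, 6, 5
+widths = [d1, 5, 5, 4, dY]                # d_1, ..., d_L (hypothetical)
+W1, b1 = rng.standard_normal((d1, dX)), rng.standard_normal((d1, 1))
+W2, b2 = rng.standard_normal((dY, d1)), rng.standard_normal((dY, 1))
+X = rng.standard_normal((dX, n))
+
+Y_hat2 = W2 @ h(W1 @ X + b1) + b2         # two-layer reference output
+lam = np.abs(Y_hat2).max() + 1.0          # eq. (34): Y_hat2 + lam > 0
+
+def proj(d_out, d_in):
+    # (1/s_+) * sum_{j <= d_Y} E_{j, j}: scaled partial identity
+    P = np.zeros((d_out, d_in))
+    P[np.arange(dY), np.arange(dY)] = 1.0 / s_p
+    return P
+
+d2 = widths[1]
+W2p = np.vstack([W2, np.zeros((d2 - dY, d1))])
+b2p = lam * np.ones((d2, 1)) + np.vstack([b2, np.zeros((d2 - dY, 1))])
+
+Y = h(W1 @ X + b1)                        # layer 1 (unchanged)
+Y = h(W2p @ Y + b2p)                      # layer 2 (padded, shifted by lam)
+for i in range(2, L - 1):                 # layers 3, ..., L-1
+    Y = h(proj(widths[i], widths[i - 1]) @ Y)
+Y_deep = proj(dY, widths[L - 2]) @ Y - lam * np.ones((dY, 1))
+
+# the deep network reproduces the two-layer output exactly
+assert np.allclose(Y_deep, Y_hat2)
+```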
+
+
+
+# A.4 STAGE (3)
+
+Finally, we prove Theorem 1.
+
+This stage also follows the two-step strategy.
+
+# Step (a). Construct local minima of the loss surface.
+
+Lemma 6. Suppose $t$ is a non-differentiable point for the piece-wise linear activation function $h$ and $\sigma$ is a constant such that the activation $h$ is differentiable in the intervals $(t - \sigma, t)$ and $(t, t + \sigma)$ . Assume that $M$ is a sufficiently large positive real such that
+
+$$
+\frac {1}{M} \left\| \hat {W} _ {1} ^ {\prime} X + \hat {b} _ {1} ^ {\prime} \mathbf {1} _ {n} ^ {T} \right\| _ {F} < \sigma . \tag {40}
+$$
+
+Let $\alpha_{i}$ $(i = 1, \dots, L - 1)$ be any positive reals such that
+
+$$
+\alpha_ {1} = 1
+$$
+
+$$
+0 < \alpha_ {i} < 1, i = 2, \dots , L - 1. \tag {41}
+$$
+
+Then, under Assumption 3, any neural network with piecewise linear activations and $L - 1$ hidden layers has local minima at
+
+$$
+\hat {W} _ {1} ^ {\prime \prime} = \frac {1}{M} \hat {W} _ {1} ^ {\prime}, \hat {b} _ {1} ^ {\prime \prime} = \frac {1}{M} \hat {b} _ {1} ^ {\prime} + t \mathbf {1} _ {d _ {1}},
+$$
+
+$$
+\hat {W} _ {i} ^ {\prime \prime} = \alpha_ {i} \hat {W} _ {i} ^ {\prime}, \hat {b} _ {i} ^ {\prime \prime} = - \alpha_ {i} \hat {W} _ {i} ^ {\prime} h (t) \mathbf {1} _ {d _ {i - 1}} + t \mathbf {1} _ {d _ {i}} + \frac {\Pi_ {j = 2} ^ {i} \alpha_ {j}}{M} \hat {b} _ {i} ^ {\prime}, (i = 2, 3, \dots , L - 1),
+$$
+
+and
+
+$$
+\hat {W} _ {L} ^ {\prime \prime} = \frac {M}{\Pi_ {j = 2} ^ {L - 1} \alpha_ {j}} \hat {W} _ {L} ^ {\prime}, \hat {b} _ {L} ^ {\prime \prime} = - \frac {M}{\prod_ {j = 2} ^ {L - 1} \alpha_ {j}} \hat {W} _ {L} ^ {\prime} h (t) \mathbf {1} _ {d _ {L - 1}} + \hat {b} _ {L} ^ {\prime},
+$$
+
+where $\left\{\left[\hat{W}_i^{\prime}\right]_{i = 1}^{L},\left[\hat{b}_i^{\prime}\right]_{i = 1}^{L}\right\}$ is the local minimizer constructed in Lemma 5. Also, the loss is continuously differentiable, and its derivative with respect to the prediction $\hat{Y}_i$ may equal 0 only when the prediction $\hat{Y}_i$ and the label $Y_{i}$ are the same.
+
+Proof. Define $s_{-} = \lim_{\theta \to t^{-}} h'(\theta)$ and $s_{+} = \lim_{\theta \to t^{+}} h'(\theta)$ .
+
+We then prove by induction that for all $i \in [1:L - 1]$ , all components of the $i$ -th layer output before the activation $\tilde{Y}^{(i)}\left(\left[\hat{W}_i^{\prime \prime}\right]_{i = 1}^L,\left[\hat{b}_i^{\prime \prime}\right]_{i = 1}^L\right)$ are in interval $(t,t + \sigma)$ , and
+
+$$
+Y ^ {(i)} \left(\left[ \hat {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) = h (t) \mathbf {1} _ {d _ {i}} \mathbf {1} _ {n} ^ {T} + \frac {\Pi_ {j = 1} ^ {i} \alpha_ {j}}{M} Y ^ {(i)} \left(\left[ \hat {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right).
+$$
+
+The first layer output before the activation is,
+
+$$
+\tilde {Y} ^ {(1)} \left(\left[ \hat {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) = \hat {W} _ {1} ^ {\prime \prime} X + \hat {b} _ {1} ^ {\prime \prime} \mathbf {1} _ {n} ^ {T} = \frac {1}{M} \hat {W} _ {1} ^ {\prime} X + \frac {1}{M} \hat {b} _ {1} ^ {\prime} \mathbf {1} _ {n} ^ {T} + t \mathbf {1} _ {d _ {1}} \mathbf {1} _ {n} ^ {T}. \tag {42}
+$$
+
+We proved in Lemma 5 that $\hat{W}_1'X + \hat{b}_1'\mathbf{1}_n^T$ is positive (element-wise). Since the Frobenius norm of a matrix is no smaller than the absolute value of any of its components, applying eq. (40), we have that for all $i\in [1:d_1]$ and $j\in [1:n]$ ,
+
+$$
+0 < \frac {1}{M} \left(\hat {W} _ {1} ^ {\prime} X + \hat {b} _ {1} ^ {\prime} \mathbf {1} _ {n} ^ {T}\right) _ {i j} < \sigma . \tag {43}
+$$
+
+Therefore, $\left(\frac{1}{M}\left(\hat{W}_1'X + \hat{b}_1'\mathbf{1}_n^T\right)_{ij} + t\right)\in (t,t + \sigma)$ . So,
+
+$$
+\begin{array}{l} Y ^ {(1)} \left(\left[ \hat {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) = h \left(\tilde {Y} ^ {(1)} \left(\left[ \hat {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right)\right) \\ \stackrel {(*)} {=} h _ {s _ {-}, s _ {+}} \left(\frac {1}{M} \hat {W} _ {1} ^ {\prime} X + \frac {1}{M} \hat {b} _ {1} ^ {\prime} \mathbf {1} _ {n} ^ {T}\right) + h (t) \mathbf {1} _ {d _ {1}} \mathbf {1} _ {n} ^ {T} \\ = \frac {1}{M} Y ^ {(1)} \left(\left[ \hat {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) + h (t) \mathbf {1} _ {d _ {1}} \mathbf {1} _ {n} ^ {T}, \\ \end{array}
+$$
+
+where eq. $(\ast)$ is because for any $x\in (t - \sigma ,t + \sigma)$
+
+$$
+h (x) = h (t) + h _ {s _ {-}, s _ {+}} (x - t). \tag {44}
+$$
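+Identity (44) says that near its kink, $h$ equals its value at the kink plus the origin-kinked two-slope function of the offset. A minimal numerical check, using the illustrative activation $h(x) = |x - t|$ (kink at $t$, one-sided slopes $s_- = -1$ and $s_+ = 1$):
+
+```python
+import numpy as np
+
+t = 1.0
+s_m, s_p = -1.0, 1.0
+h = lambda x: np.abs(x - t)                    # piecewise linear, kink at t
+h_ss = lambda x: np.where(x >= 0, s_p * x, s_m * x)
+
+x = np.linspace(t - 0.49, t + 0.49, 101)       # within (t - sigma, t + sigma)
+assert np.allclose(h(x), h(t) + h_ss(x - t))   # eq. (44)
+```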
+
+Suppose the above argument holds for $k$ $(1\leq k\leq L - 2)$ . Then
+
+$$
+\begin{array}{l} \tilde {Y} ^ {(k + 1)} \left(\left[ \hat {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) \\ = \hat {W} _ {k + 1} ^ {\prime \prime} Y ^ {(k)} \left(\left[ \hat {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) + \hat {b} _ {k + 1} ^ {\prime \prime} \mathbf {1} _ {n} ^ {T} \\ = \alpha_ {k + 1} \hat {W} _ {k + 1} ^ {\prime} Y ^ {(k)} \left(\left[ \hat {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) + \left(- \alpha_ {k + 1} \hat {W} _ {k + 1} ^ {\prime} h (t) \mathbf {1} _ {d _ {k}} + t \mathbf {1} _ {d _ {k + 1}} + \frac {\Pi_ {i = 1} ^ {k + 1} \alpha_ {i}}{M} \hat {b} _ {k + 1} ^ {\prime}\right) \mathbf {1} _ {n} ^ {T} \\ = \alpha_ {k + 1} \hat {W} _ {k + 1} ^ {\prime} \left(h (t) \mathbf {1} _ {d _ {k}} \mathbf {1} _ {n} ^ {T} + \frac {\Pi_ {i = 1} ^ {k} \alpha_ {i}}{M} Y ^ {(k)} \left(\left[ \hat {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right)\right) + \left(- \alpha_ {k + 1} \hat {W} _ {k + 1} ^ {\prime} h (t) \mathbf {1} _ {d _ {k}} + t \mathbf {1} _ {d _ {k + 1}} + \frac {\Pi_ {i = 1} ^ {k + 1} \alpha_ {i}}{M} \hat {b} _ {k + 1} ^ {\prime}\right) \mathbf {1} _ {n} ^ {T} \\ = \frac {\Pi_ {i = 1} ^ {k + 1} \alpha_ {i}}{M} \hat {W} _ {k + 1} ^ {\prime} Y ^ {(k)} \left(\left[ \hat {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) + \frac {\Pi_ {i = 1} ^ {k + 1} \alpha_ {i}}{M} \hat {b} _ {k + 1} ^ {\prime} \mathbf {1} _ {n} ^ {T} + t \mathbf {1} _ {d _ {k + 1}} \mathbf {1} _ {n} ^ {T} \\ = t \mathbf {1} _ {d _ {k + 1}} \mathbf {1} _ {n} ^ {T} + \frac {\prod_ {i = 1} ^ {k + 1} \alpha_ {i}}{M} \tilde {Y} ^ {(k + 1)} \left(\left[ \hat {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right). \\ \end{array}
+$$
+
+Lemma 5 has proved that all components of $\tilde{Y}^{(k + 1)}\left(\left[\hat{W}_i^{\prime}\right]_{i = 1}^L,\left[\hat{b}_i^{\prime}\right]_{i = 1}^L\right)$ appear among the components of $\tilde{Y}^{(1)}\left(\left[\hat{W}_i^{\prime}\right]_{i = 1}^L,\left[\hat{b}_i^{\prime}\right]_{i = 1}^L\right)$ . Combining
+
+$$
+t \mathbf {1} _ {d _ {1}} \mathbf {1} _ {n} ^ {T} < t \mathbf {1} _ {d _ {1}} \mathbf {1} _ {n} ^ {T} + \frac {1}{M} \tilde {Y} ^ {(1)} \left(\left[ \hat {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) < (t + \sigma) \mathbf {1} _ {d _ {1}} \mathbf {1} _ {n} ^ {T},
+$$
+
+we have
+
+$$
+t \mathbf {1} _ {d _ {k + 1}} \mathbf {1} _ {n} ^ {T} < t \mathbf {1} _ {d _ {k + 1}} \mathbf {1} _ {n} ^ {T} + \frac {\Pi_ {i = 1} ^ {k + 1} \alpha_ {i}}{M} \tilde {Y} ^ {(k + 1)} \left(\left[ \hat {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) < (t + \sigma) \mathbf {1} _ {d _ {k + 1}} \mathbf {1} _ {n} ^ {T}.
+$$
+
+Here all inequalities $<$ are element-wise; the right-hand inequality uses the property of $\alpha_{i}$ (eq. (41)), which guarantees $\Pi_{i = 1}^{k + 1}\alpha_{i} \leq 1$ .
+
+Furthermore, the $(k + 1)$ -th layer output after the activation is
+
+$$
+\begin{array}{l} Y ^ {(k + 1)} \left(\left[ \hat {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) = h \left(\tilde {Y} ^ {(k + 1)} \left(\left[ \hat {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right)\right) \\ = h \left(t \mathbf {1} _ {d _ {k + 1}} \mathbf {1} _ {n} ^ {T} + \frac {\Pi_ {i = 1} ^ {k + 1} \alpha_ {i}}{M} \tilde {Y} ^ {(k + 1)} \left(\left[ \hat {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right)\right) \\ \stackrel {(*)} {=} h (t) \mathbf {1} _ {d _ {k + 1}} \mathbf {1} _ {n} ^ {T} + h _ {s _ {-}, s _ {+}} \left(\frac {\prod_ {i = 1} ^ {k + 1} \alpha_ {i}}{M} \tilde {Y} ^ {(k + 1)} \left(\left[ \hat {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right)\right) \\ = h (t) \mathbf {1} _ {d _ {k + 1}} \mathbf {1} _ {n} ^ {T} + \frac {\Pi_ {i = 1} ^ {k + 1} \alpha_ {i}}{M} h _ {s _ {-}, s _ {+}} \left(\tilde {Y} ^ {(k + 1)} \left(\left[ \hat {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right)\right) \\ = h (t) \mathbf {1} _ {d _ {k + 1}} \mathbf {1} _ {n} ^ {T} + \frac {\Pi_ {i = 1} ^ {k + 1} \alpha_ {i}}{M} Y ^ {(k + 1)} \left(\left[ \hat {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right), \\ \end{array}
+$$
+
+where eq. $(\ast)$ is because of eq. (44). By induction, the argument holds for every index $k \in \{1, \dots, L - 1\}$ .
+
+Therefore, the output of the network is
+
+$$
+\begin{array}{l} Y ^ {(L)} \left(\left[ \hat {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) \\ = \hat {W} _ {L} ^ {\prime \prime} Y ^ {(L - 1)} \left(\left[ \hat {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) + \hat {b} _ {L} ^ {\prime \prime} \mathbf {1} _ {n} ^ {T} \\ = \frac {M}{\Pi_ {i = 1} ^ {L - 1} \alpha_ {i}} \hat {W} _ {L} ^ {\prime} \left(h (t) \mathbf {1} _ {d _ {L - 1}} \mathbf {1} _ {n} ^ {T} + \frac {\Pi_ {i = 1} ^ {L - 1} \alpha_ {i}}{M} Y ^ {(L - 1)} \left(\left[ \hat {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right)\right) \\ + \left(- \frac {M}{\Pi_ {i = 1} ^ {L - 1} \alpha_ {i}} \hat {W} _ {L} ^ {\prime} h (t) \mathbf {1} _ {d _ {L - 1}} + \hat {b} _ {L} ^ {\prime}\right) \mathbf {1} _ {n} ^ {T} \\ = \hat {W} _ {L} ^ {\prime} Y ^ {(L - 1)} \left(\left[ \hat {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) + \hat {b} _ {L} ^ {\prime} \mathbf {1} _ {n} ^ {T} \\ = Y ^ {(L)} \left(\left[ \hat {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right). \\ \end{array}
+$$
+
+Therefore,
+
+$$
+\hat {\mathcal {R}} \left(\left[ \hat {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) = \hat {\mathcal {R}} \left(\left[ \hat {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) = f (\tilde {W}).
+$$
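+This output-preserving rescaling can be verified numerically. The sketch below builds a small network whose pre-activations are all positive (mirroring the setting of Lemma 5), applies the construction above with the product taken over $\alpha_2, \dots, \alpha_{L-1}$, and checks that the output is unchanged. All dimensions, slopes, and the value $h(t)$ are illustrative assumptions.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+s_m, s_p, t, ht = 0.2, 1.0, 1.0, 0.7     # slopes, kink location, h(t) (illustrative)
+h_ss = lambda x: np.where(x >= 0, s_p * x, s_m * x)
+h = lambda x: ht + h_ss(x - t)           # activation whose kink sits at t, cf. eq. (44)
+
+L, n = 4, 5
+dims = [3, 4, 4, 4, 2]                   # d_X, d_1, ..., d_L (hypothetical)
+X = rng.uniform(0.1, 1.0, (dims[0], n))  # positive data keeps pre-activations positive
+Ws = [rng.uniform(0.1, 1.0, (dims[i + 1], dims[i])) for i in range(L)]
+bs = [rng.uniform(0.1, 1.0, (dims[i + 1], 1)) for i in range(L)]
+
+def forward(weights, biases, act):
+    Y = X
+    for W, b in zip(weights[:-1], biases[:-1]):
+        Y = act(W @ Y + b)               # hidden layers
+    return weights[-1] @ Y + biases[-1]  # no activation on the output layer
+
+M = 1e3                                  # large scale factor, cf. eq. (40)
+alphas = [1.0] + list(rng.uniform(0.5, 0.9, L - 2))  # alpha_1 = 1, eq. (41)
+prod = lambda i: np.prod(alphas[1:i])                # prod_{j=2}^{i} alpha_j
+ones = lambda d: np.ones((d, 1))
+
+W_pp = [Ws[0] / M] + [alphas[i] * Ws[i] for i in range(1, L - 1)] \
+       + [(M / prod(L - 1)) * Ws[-1]]
+b_pp = [bs[0] / M + t] \
+       + [-alphas[i] * Ws[i] @ (ht * ones(dims[i])) + t + (prod(i + 1) / M) * bs[i]
+          for i in range(1, L - 1)] \
+       + [-(M / prod(L - 1)) * Ws[-1] @ (ht * ones(dims[L - 1])) + bs[-1]]
+
+# the rescaled, shifted network reproduces the original output exactly
+assert np.allclose(forward(Ws, bs, h_ss), forward(W_pp, b_pp, h))
+```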
+
+We then introduce small disturbances $\left\{[\delta_{Wi}^{\prime \prime}]_{i = 1}^L,[\delta_{bi}^{\prime \prime}]_{i = 1}^L\right\}$ into $\left\{\left[\hat{W}_i^{\prime \prime}\right]_{i = 1}^L,\left[\hat{b}_i^{\prime \prime}\right]_{i = 1}^L\right\}$ to check local optimality.
+
+Since all components of $\tilde{Y}^{(i)}$ are in the interval $(t, t + \sigma)$ , the activation in every hidden layer operates on a linear piece. Therefore, the output of the network is
+
+$$
+\begin{array}{l} \hat {Y} \left(\left[ \hat {W} _ {i} ^ {\prime \prime} + \delta_ {W i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime \prime} + \delta_ {b i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) \\ = \left(\hat {W} _ {L} ^ {\prime \prime} + \delta_ {W L} ^ {\prime \prime}\right) h \left(\dots h \left(\left(\hat {W} _ {1} ^ {\prime \prime} + \delta_ {W 1} ^ {\prime \prime}\right) X + \left(\hat {b} _ {1} ^ {\prime \prime} + \delta_ {b 1} ^ {\prime \prime}\right) \mathbf {1} _ {n} ^ {T}\right) \dots\right) + \left(\hat {b} _ {L} ^ {\prime \prime} + \delta_ {b L} ^ {\prime \prime}\right) \mathbf {1} _ {n} ^ {T} \\ = \left(\hat {W} _ {L} ^ {\prime \prime} + \delta_ {W L} ^ {\prime \prime}\right) \left(s _ {+} \left(\dots s _ {+} \left(\left(\hat {W} _ {1} ^ {\prime \prime} + \delta_ {W 1} ^ {\prime \prime}\right) X + \left(\hat {b} _ {1} ^ {\prime \prime} + \delta_ {b 1} ^ {\prime \prime}\right) \mathbf {1} _ {n} ^ {T}\right) + h (t) \mathbf {1} _ {d _ {1}} \mathbf {1} _ {n} ^ {T} \dots\right) + h (t) \mathbf {1} _ {d _ {L - 1}} \mathbf {1} _ {n} ^ {T}\right) + \left(\hat {b} _ {L} ^ {\prime \prime} + \delta_ {b L} ^ {\prime \prime}\right) \mathbf {1} _ {n} ^ {T} \\ = M _ {1} X + M _ {2} \mathbf {1} _ {n} ^ {T} \\ = \left[ \begin{array}{c c} M _ {1} & M _ {2} \end{array} \right] \left[ \begin{array}{c} X \\ \mathbf {1} _ {n} ^ {T} \end{array} \right]. \\ \end{array}
+$$
+
+Similar to Lemma 5, $\left[M_1 \quad M_2\right]$ approaches $\tilde{W}$ as the disturbances $\left\{\left[\delta_{Wi}^{\prime \prime}\right]_{i=1}^{L},\left[\delta_{bi}^{\prime \prime}\right]_{i=1}^{L}\right\}$ approach 0 (element-wise). Combining this with the fact that $\tilde{W}$ is a local minimizer of $f(W)$ , we have
+
+$$
+\hat {\mathcal {R}} \left(\left[ \hat {W} _ {i} ^ {\prime \prime} + \delta_ {W i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime \prime} + \delta_ {b i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) = f \left([ M _ {1} \quad M _ {2} ]\right) \geq f (\tilde {W}) = \hat {\mathcal {R}} \left(\left[ \hat {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right).
+$$
+
+The proof is completed.
+
+# Step (b). Prove the constructed local minima are spurious.
+
+Proof of Theorem 1. Without loss of generality, we assume that all activations are the same.
+
+Let $t$ be a non-differentiable point of the piece-wise linear activation function $h$ with
+
+$$
+\begin{array}{l} s_{-} = \lim_{\theta \to t^{-}}h^{\prime}(\theta), \\ s_{+} = \lim_{\theta \to t^{+}}h^{\prime}(\theta). \\ \end{array}
+$$
+
+Let $\sigma$ be a constant such that $h$ is linear in interval $(t - \sigma, t)$ and interval $(t, t + \sigma)$ .
+
+Then construct that
+
+$$
+\tilde {W} _ {1} ^ {\prime \prime} = \frac {1}{M} \tilde {W} _ {1} ^ {\prime}, \tilde {b} _ {1} ^ {\prime \prime} = \frac {1}{M} \tilde {b} _ {1} ^ {\prime} + t \mathbf {1} _ {d _ {1}},
+$$
+
+$$
+\tilde {W} _ {2} ^ {\prime \prime} = \frac {1}{\tilde {M}} \tilde {W} _ {2} ^ {\prime}, \tilde {b} _ {2} ^ {\prime \prime} = t \mathbf {1} _ {d _ {2}} - \frac {1}{\tilde {M}} h (t) \tilde {W} _ {2} ^ {\prime} \mathbf {1} _ {d _ {1}} + \frac {1}{M \tilde {M}} \tilde {b} _ {2} ^ {\prime},
+$$
+
+$$
+\tilde {W} _ {i} ^ {\prime \prime} = \tilde {W} _ {i} ^ {\prime}, \tilde {b} _ {i} ^ {\prime \prime} = - \tilde {W} _ {i} ^ {\prime} h (t) \mathbf {1} _ {d _ {i - 1}} + t \mathbf {1} _ {d _ {i}} + \frac {1}{M \tilde {M}} \tilde {b} _ {i} ^ {\prime}, (i = 3, 4, \dots , L - 1)
+$$
+
+and
+
+$$
\tilde {W} _ {L} ^ {\prime \prime} = M \tilde {M} \tilde {W} _ {L} ^ {\prime}, \tilde {b} _ {L} ^ {\prime \prime} = \tilde {b} _ {L} ^ {\prime} - M \tilde {M} \tilde {W} _ {L} ^ {\prime} h (t) \mathbf {1} _ {d _ {L - 1}},
+$$
+
+where $\left\{\left[\tilde{W}_i^{\prime}\right]_{i = 1}^{L},\left[\tilde{b}_i^{\prime}\right]_{i = 1}^{L}\right\}$ are constructed in Theorem 5, $M$ is a large enough positive real such that
+
+$$
+\frac {1}{M} \left\| \tilde {W} _ {1} ^ {\prime} X + \tilde {b} _ {1} ^ {\prime} \mathbf {1} _ {n} ^ {T} \right\| _ {F} < \sigma , \tag {45}
+$$
+
and $\tilde{M}$ is a large enough positive real such that
+
+$$
\frac {1}{M \tilde {M}} \left\| \tilde {Y} ^ {(2)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) \right\| _ {F} < \sigma . \tag {46}
+$$
+
Then, we prove by induction that for any $i \in [2: L-1]$ , all components of $\tilde{Y}^{(i)}\left(\left[\tilde{W}_i^{\prime \prime}\right]_{i=1}^L, \left[\tilde{b}_i^{\prime \prime}\right]_{i=1}^L\right)$ are in the interval $(t, t+\sigma)$ , and
+
+$$
+Y ^ {(i)} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) = h (t) \mathbf {1} _ {d _ {i}} \mathbf {1} _ {n} ^ {T} + \frac {1}{\tilde {M} M} Y ^ {(i)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right).
+$$
+
+First,
+
+$$
\tilde {Y} ^ {(1)} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) = \tilde {W} _ {1} ^ {\prime \prime} X + \tilde {b} _ {1} ^ {\prime \prime} \mathbf {1} _ {n} ^ {T} = \frac {1}{M} \left(\tilde {W} _ {1} ^ {\prime} X + \tilde {b} _ {1} ^ {\prime} \mathbf {1} _ {n} ^ {T}\right) + t \mathbf {1} _ {d _ {1}} \mathbf {1} _ {n} ^ {T}. \tag {47}
+$$
+
+For any $i \in [1:d_1]$ and $j \in [1:n]$ , eq. (45) implies
+
+$$
+\left| \left(\frac {1}{M} \left(\tilde {W} _ {1} ^ {\prime} X + \tilde {b} _ {1} ^ {\prime} \mathbf {1} _ {n} ^ {T}\right)\right) _ {i j} \right| \leq \frac {1}{M} \left\| \tilde {W} _ {1} ^ {\prime} X + \tilde {b} _ {1} ^ {\prime} \mathbf {1} _ {n} ^ {T} \right\| _ {F} < \sigma .
+$$
+
+Thus,
+
+$$
\left(\frac {1}{M} \left(\tilde {W} _ {1} ^ {\prime} X + \tilde {b} _ {1} ^ {\prime} \mathbf {1} _ {n} ^ {T}\right) + t \mathbf {1} _ {d _ {1}} \mathbf {1} _ {n} ^ {T}\right) _ {i j} \in (t - \sigma , t + \sigma). \tag {48}
+$$
+
+Therefore, the output of the first layer after the activation is
+
+$$
\begin{array}{l} Y ^ {(1)} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) = h \left(\tilde {Y} ^ {(1)} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right)\right) \\ = h \left(\frac {1}{M} \left(\tilde {W} _ {1} ^ {\prime} X + \tilde {b} _ {1} ^ {\prime} \mathbf {1} _ {n} ^ {T}\right) + t \mathbf {1} _ {d _ {1}} \mathbf {1} _ {n} ^ {T}\right) \\ \stackrel {(*)} {=} h (t) \mathbf {1} _ {d _ {1}} \mathbf {1} _ {n} ^ {T} + h _ {s _ {-}, s _ {+}} \left(\frac {1}{M} \left(\tilde {W} _ {1} ^ {\prime} X + \tilde {b} _ {1} ^ {\prime} \mathbf {1} _ {n} ^ {T}\right)\right) \\ = h (t) \mathbf {1} _ {d _ {1}} \mathbf {1} _ {n} ^ {T} + \frac {1}{M} h _ {s _ {-}, s _ {+}} \left(\tilde {W} _ {1} ^ {\prime} X + \tilde {b} _ {1} ^ {\prime} \mathbf {1} _ {n} ^ {T}\right) \\ = h (t) \mathbf {1} _ {d _ {1}} \mathbf {1} _ {n} ^ {T} + \frac {1}{M} Y ^ {(1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right), \\ \end{array}
+$$
+
where step $(*)$ follows from eq. (44), which holds for any $x\in (t - \sigma ,t + \sigma)$ .
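The two facts used in this chain, the local identity at the kink and positive homogeneity, can be checked numerically. A minimal sketch, with illustrative values of $t$, $s_-$, $s_+$, $\sigma$, and $h(t)$ (none of these specific numbers come from the paper):

```python
import numpy as np

# Piecewise linear h with its (only relevant) kink at z = t, left slope
# s_minus and right slope s_plus; all constants here are illustrative.
t, s_minus, s_plus, sigma, h_at_t = 1.5, 0.2, 1.0, 0.7, 0.4

def h(z):
    # piecewise linear activation, linear on (t - sigma, t) and (t, t + sigma)
    return np.where(z < t, h_at_t + s_minus * (z - t), h_at_t + s_plus * (z - t))

def h_pm(x):
    # the two-slope function h_{s_-, s_+}
    return np.where(x < 0, s_minus * x, s_plus * x)

rng = np.random.default_rng(0)
x = rng.uniform(-sigma, sigma, size=1000)
assert np.allclose(h(t + x), h(t) + h_pm(x))   # the step-(*) identity
c = 3.7                                        # any positive scale, e.g. 1/M
assert np.allclose(h_pm(c * x), c * h_pm(x))   # positive homogeneity
print("step-(*) identity and positive homogeneity verified")
```

The positive-homogeneity line is what allows the factor $1/M$ to be pulled out of $h_{s_-,s_+}$ in the derivation above.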
+
+Also,
+
+$$
+\begin{array}{l} \tilde {Y} ^ {(2)} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) = \tilde {W} _ {2} ^ {\prime \prime} Y ^ {(1)} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) + \tilde {b} _ {2} ^ {\prime \prime} \mathbf {1} _ {n} ^ {T} \\ = \frac {1}{\tilde {M}} \left(\tilde {W} _ {2} ^ {\prime}\right) \left(h (t) \mathbf {1} _ {d _ {1}} \mathbf {1} _ {n} ^ {T} + \frac {1}{M} Y ^ {(1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right)\right) \\ + t \mathbf {1} _ {d _ {2}} \mathbf {1} _ {n} ^ {T} - \frac {1}{\tilde {M}} h (t) \tilde {W} _ {2} ^ {\prime} \mathbf {1} _ {d _ {1}} \mathbf {1} _ {n} ^ {T} + \frac {1}{M \tilde {M}} \tilde {b} _ {2} ^ {\prime} \mathbf {1} _ {n} ^ {T} \\ = \frac {1}{\tilde {M} M} \tilde {W} _ {2} ^ {\prime} Y ^ {(1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) + \frac {1}{M \tilde {M}} \tilde {b} _ {2} ^ {\prime} \mathbf {1} _ {n} ^ {T} + t \mathbf {1} _ {d _ {2}} \mathbf {1} _ {n} ^ {T} \\ = \frac {1}{M \tilde {M}} \tilde {Y} ^ {(2)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) + t \mathbf {1} _ {d _ {2}} \mathbf {1} _ {n} ^ {T}. \\ \end{array}
+$$
+
Recall that in Theorem 5 we proved that all components of $\tilde{Y}^{(2)}\left(\left[\tilde{W}_i^{\prime}\right]_{i = 1}^{L},\left[\tilde{b}_i^{\prime}\right]_{i = 1}^{L}\right)$ are positive. Combining this with the definition of $\tilde{M}$ (eq. (46)), we have
+
+$$
+\begin{array}{l} t \mathbf {1} _ {d _ {2}} \mathbf {1} _ {n} ^ {T} < \tilde {Y} ^ {(2)} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) \\ = \frac {1}{\tilde {M} M} \tilde {Y} ^ {(2)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) + t \mathbf {1} _ {d _ {2}} \mathbf {1} _ {n} ^ {T} \\ < (t + \sigma) \mathbf {1} _ {d _ {2}} \mathbf {1} _ {n} ^ {T}. \\ \end{array}
+$$
+
+Therefore,
+
+$$
\begin{array}{l} Y ^ {(2)} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) = h \left(\tilde {Y} ^ {(2)} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right)\right) \\ = h \left(\frac {1}{\tilde {M} M} \tilde {Y} ^ {(2)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) + t \mathbf {1} _ {d _ {2}} \mathbf {1} _ {n} ^ {T}\right) \\ = h (t) \mathbf {1} _ {d _ {2}} \mathbf {1} _ {n} ^ {T} + h _ {s _ {-}, s _ {+}} \left(\frac {1}{\tilde {M} M} \tilde {Y} ^ {(2)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right)\right) \\ = h (t) \mathbf {1} _ {d _ {2}} \mathbf {1} _ {n} ^ {T} + \frac {1}{\tilde {M} M} h _ {s _ {-}, s _ {+}} \left(\tilde {Y} ^ {(2)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right)\right) \\ = h (t) \mathbf {1} _ {d _ {2}} \mathbf {1} _ {n} ^ {T} + \frac {1}{\tilde {M} M} Y ^ {(2)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right). \\ \end{array}
+$$
+
Suppose the above argument holds for the $k$ -th layer.

The output of the $(k + 1)$ -th layer before the activation is
+
+$$
\begin{array}{l} \tilde {Y} ^ {(k + 1)} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) \\ = \tilde {W} _ {k + 1} ^ {\prime \prime} Y ^ {(k)} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) + \tilde {b} _ {k + 1} ^ {\prime \prime} \mathbf {1} _ {n} ^ {T} \\ = \tilde {W} _ {k + 1} ^ {\prime} \left(h (t) \mathbf {1} _ {d _ {k}} \mathbf {1} _ {n} ^ {T} + \frac {1}{M \tilde {M}} Y ^ {(k)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right)\right) \\ + \left(- \tilde {W} _ {k + 1} ^ {\prime} h (t) \mathbf {1} _ {d _ {k}} + t \mathbf {1} _ {d _ {k + 1}} + \frac {1}{M \tilde {M}} \tilde {b} _ {k + 1} ^ {\prime}\right) \mathbf {1} _ {n} ^ {T} \\ = \frac {1}{M \tilde {M}} \left(\tilde {W} _ {k + 1} ^ {\prime} Y ^ {(k)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) + \tilde {b} _ {k + 1} ^ {\prime} \mathbf {1} _ {n} ^ {T}\right) + t \mathbf {1} _ {d _ {k + 1}} \mathbf {1} _ {n} ^ {T} \\ = \frac {1}{M \tilde {M}} \tilde {Y} ^ {(k + 1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) + t \mathbf {1} _ {d _ {k + 1}} \mathbf {1} _ {n} ^ {T}. \\ \end{array}
+$$
+
Recall that, as proved in Theorem 5, all components of $\tilde{Y}^{(k + 1)}\left(\left[\tilde{W}_i^{\prime}\right]_{i = 1}^{L},\left[\tilde{b}_i^{\prime}\right]_{i = 1}^{L}\right)$ , except those equal to 0, are also components of $\tilde{Y}^{(k)}\left(\left[\tilde{W}_i^{\prime}\right]_{i = 1}^{L},\left[\tilde{b}_i^{\prime}\right]_{i = 1}^{L}\right)$ . We have
+
+$$
+t \mathbf {1} _ {d _ {k + 1}} \mathbf {1} _ {n} ^ {T} < \frac {1}{M \tilde {M}} \tilde {Y} ^ {(k + 1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) + t \mathbf {1} _ {d _ {k + 1}} \mathbf {1} _ {n} ^ {T} < (t + \sigma) \mathbf {1} _ {d _ {k + 1}} \mathbf {1} _ {n} ^ {T}.
+$$
+
+Therefore,
+
+$$
\begin{array}{l} Y ^ {(k + 1)} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) = h \left(\tilde {Y} ^ {(k + 1)} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right)\right) \\ = h \left(\frac {1}{M \tilde {M}} \tilde {Y} ^ {(k + 1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right) + t \mathbf {1} _ {d _ {k + 1}} \mathbf {1} _ {n} ^ {T}\right) \\ = h (t) \mathbf {1} _ {d _ {k + 1}} \mathbf {1} _ {n} ^ {T} + \frac {1}{M \tilde {M}} Y ^ {(k + 1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right). \\ \end{array}
+$$
+
+Thus, the argument holds for any $k \in \{2, \dots, L - 1\}$ .
+
+So,
+
+$$
\begin{array}{l} Y ^ {(L)} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) = \tilde {W} _ {L} ^ {\prime \prime} Y ^ {(L - 1)} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) + \tilde {b} _ {L} ^ {\prime \prime} \mathbf {1} _ {n} ^ {T} \\ = M \tilde {M} \tilde {W} _ {L} ^ {\prime} \left(h (t) \mathbf {1} _ {d _ {L - 1}} \mathbf {1} _ {n} ^ {T} + \frac {1}{M \tilde {M}} Y ^ {(L - 1)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right)\right) \\ + \tilde {b} _ {L} ^ {\prime} \mathbf {1} _ {n} ^ {T} - M \tilde {M} \tilde {W} _ {L} ^ {\prime} h (t) \mathbf {1} _ {d _ {L - 1}} \mathbf {1} _ {n} ^ {T} \\ = Y ^ {(L)} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right). \\ \end{array}
+$$
+
+Therefore,
+
+$$
+\hat {\mathcal {R}} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) = \hat {\mathcal {R}} \left(\left[ \tilde {W} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime} \right] _ {i = 1} ^ {L}\right). \tag {49}
+$$
+
+From eq. (49) and Theorem 5, we have
+
+$$
+\hat {\mathcal {R}} \left(\left[ \tilde {W} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} ^ {\prime \prime} \right] _ {i = 1} ^ {L}\right) < f (\tilde {W}),
+$$
+
which shows that the local minimizer constructed in Step (a) is not a global minimizer, i.e., it is spurious.
+
Furthermore, since the parameter $M$ used in Lemma 6 (not the one in this proof) can take any value in a continuous interval (cf. eq. (40)), we have actually constructed infinitely many spurious local minima.
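The output-preserving rescaling at the heart of this construction can be checked numerically. A minimal sketch, assuming $h$ is the leaky ReLU, whose kink sits at $t = 0$ with $h(t) = 0$, so the construction reduces to dividing the first two layers by $M$ and $\tilde M$, dividing intermediate biases by $M\tilde M$, and multiplying the last layer by $M\tilde M$; all sizes and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
s_minus, s_plus = 0.1, 1.0                     # slopes of the leaky ReLU
h = lambda z: np.where(z < 0, s_minus * z, s_plus * z)

dims = [3, 5, 4, 4, 2]                         # an L = 4 layer network
Ws = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(4)]
bs = [rng.standard_normal((dims[i + 1], 1)) for i in range(4)]

def forward(Ws, bs, X):
    Y = X
    for W, b in zip(Ws[:-1], bs[:-1]):
        Y = h(W @ Y + b)                       # hidden layers with activation
    return Ws[-1] @ Y + bs[-1]                 # linear output layer

M, Mt = 50.0, 80.0                             # play the roles of M and M-tilde
Ws2 = [Ws[0] / M, Ws[1] / Mt, Ws[2], M * Mt * Ws[3]]
bs2 = [bs[0] / M, bs[1] / (M * Mt), bs[2] / (M * Mt), bs[3]]

X = rng.standard_normal((3, 10))               # 10 input samples
assert np.allclose(forward(Ws, bs, X), forward(Ws2, bs2, X))
print("rescaled parameters reproduce the original network output")
```

Because the leaky ReLU is positively homogeneous everywhere, the equality here holds for any $M, \tilde M > 0$; for a general kink at $t \neq 0$ the bias shifts of the construction (and the size constraints in eqs. (45) and (46)) are additionally required.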
+
+
+
Theorem 1 relies on Assumption 4. We can further remove it by replacing Assumption 3 with a mildly more restrictive variant, Assumption 5.
+
Corollary 3. Suppose that Assumptions 1, 2, and 5 hold. Neural networks with arbitrary depth and arbitrary piecewise linear activations (excluding linear functions) have infinitely many spurious local minima under any continuously differentiable loss whose derivative equals 0 only when the prediction and the label coincide.
+
Proof. The proof follows by modifying Theorem 4 in Stage 1 of the proof of Theorem 1. We only need to prove the corollary under the assumption that $s_{-} + s_{+} = 0$.
+
+Let the local minimizer constructed in Lemma 4 be $\left\{\left[\hat{W}_i\right]_{i=1}^2, \left[\hat{b}_i\right]_{i=1}^2\right\}$ . Then, we construct a point in the parameter space whose empirical risk is smaller as follows:
+
+$$
+\tilde {W} _ {1} = \left[ \begin{array}{c} \tilde {W} _ {1, [ 1: d _ {X} ]} - \alpha \beta^ {T} \\ \tilde {W} _ {1, [ 1: d _ {X} ]} \\ - \tilde {W} _ {1, [ 1: d _ {X} ]} + \alpha \beta^ {T} \\ \tilde {W} _ {2, [ 1: d _ {X} ]} \\ \vdots \\ \tilde {W} _ {d _ {Y}, [ 1: d _ {X} ]} \\ 0 _ {(d _ {1} - d _ {Y} - 2) \times d _ {X}} \end{array} \right],
+$$
+
+$$
\tilde {b} _ {1} = \left[ \begin{array}{c} \tilde {W} _ {1, [ d _ {X} + 1 ]} - \eta_ {1} + \gamma \\ \tilde {W} _ {1, [ d _ {X} + 1 ]} - \eta \\ - \tilde {W} _ {1, [ d _ {X} + 1 ]} + \eta_ {1} + \gamma \\ \tilde {W} _ {2, [ d _ {X} + 1 ]} - \eta_ {2} \\ \vdots \\ \tilde {W} _ {d _ {Y}, [ d _ {X} + 1 ]} - \eta_ {d _ {Y}} \\ 0 _ {(d _ {1} - d _ {Y} - 2) \times 1} \end{array} \right],
+$$
+
+$$
+\tilde {W} _ {2} = \left[ \begin{array}{c c c c c c c c c c} \frac {1}{2 s _ {+}} & \frac {1}{s _ {+}} & - \frac {1}{2 s _ {+}} & 0 & 0 & \dots & 0 & 0 & \dots & 0 \\ 0 & 0 & 0 & \frac {1}{s _ {+}} & 0 & \dots & 0 & 0 & \dots & 0 \\ 0 & 0 & 0 & 0 & \frac {1}{s _ {+}} & \dots & 0 & 0 & \dots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & 0 & \dots & \frac {1}{s _ {+}} & 0 & \dots & 0 \end{array} \right],
+$$
+
+and
+
+$$
+\tilde {b} _ {2} = \left[ \begin{array}{c} \eta \\ \eta_ {2} \\ \vdots \\ \eta_ {d _ {Y}} \end{array} \right],
+$$
+
+where $\alpha, \beta$ , and $\eta_{i}$ are defined the same as those in Theorem 4, and $\eta$ is defined by eq. (27).
+
+Then, the output of the first layer is
+
+$$
+\begin{array}{l} Y ^ {(1)} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) = h _ {s _ {-}, s _ {+}} \left(\tilde {W} _ {1} X + \tilde {b} _ {1} \mathbf {1} _ {n} ^ {T}\right) \\ = h _ {s _ {-}, s _ {+}} \left(\left[ \begin{array}{c} \tilde {W} _ {1, \cdot} X - \alpha \beta^ {T} X - \eta_ {1} \mathbf {1} _ {n} ^ {T} + \gamma \mathbf {1} _ {n} ^ {T} \\ \tilde {W} _ {1, \cdot} X - \eta \mathbf {1} _ {n} ^ {T} \\ - \tilde {W} _ {1, \cdot} X + \alpha \beta^ {T} X + \eta_ {1} \mathbf {1} _ {n} ^ {T} + \gamma \mathbf {1} _ {n} ^ {T} \\ \tilde {W} _ {2, \cdot} X - \eta_ {2} \mathbf {1} _ {n} ^ {T} \\ \vdots \\ \tilde {W} _ {d _ {Y}, \cdot} X - \eta_ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ \mathbf {0} _ {d _ {1} - d _ {Y} - 2} \mathbf {1} _ {n} ^ {T} \end{array} \right]\right). \\ \end{array}
+$$
+
+Further, the output of the whole network is
+
+$$
\begin{array}{l} \hat {Y} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) \\ = \tilde {W} _ {2} h _ {s _ {-}, s _ {+}} \left(\left[ \begin{array}{c} \tilde {W} _ {1, \cdot} X - \alpha \beta^ {T} X - \eta_ {1} \mathbf {1} _ {n} ^ {T} + \gamma \mathbf {1} _ {n} ^ {T} \\ \tilde {W} _ {1, \cdot} X - \eta \mathbf {1} _ {n} ^ {T} \\ - \tilde {W} _ {1, \cdot} X + \alpha \beta^ {T} X + \eta_ {1} \mathbf {1} _ {n} ^ {T} + \gamma \mathbf {1} _ {n} ^ {T} \\ \tilde {W} _ {2, \cdot} X - \eta_ {2} \mathbf {1} _ {n} ^ {T} \\ \vdots \\ \tilde {W} _ {d _ {Y}, \cdot} X - \eta_ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ \mathbf {0} _ {d _ {1} - d _ {Y} - 2} \mathbf {1} _ {n} ^ {T} \end{array} \right]\right) + \tilde {b} _ {2} \mathbf {1} _ {n} ^ {T} \\ = \tilde {W} _ {2} h _ {s _ {-}, s _ {+}} \left(\left[ \begin{array}{c} \tilde {W} _ {1, \cdot} X - \alpha \beta^ {T} X - \eta_ {1} \mathbf {1} _ {n} ^ {T} + \gamma \mathbf {1} _ {n} ^ {T} \\ \tilde {W} _ {1, \cdot} X - \eta \mathbf {1} _ {n} ^ {T} \\ - \tilde {W} _ {1, \cdot} X + \alpha \beta^ {T} X + \eta_ {1} \mathbf {1} _ {n} ^ {T} + \gamma \mathbf {1} _ {n} ^ {T} \\ \tilde {W} _ {2, \cdot} X - \eta_ {2} \mathbf {1} _ {n} ^ {T} \\ \vdots \\ \tilde {W} _ {d _ {Y}, \cdot} X - \eta_ {d _ {Y}} \mathbf {1} _ {n} ^ {T} \\ \mathbf {0} _ {d _ {1} - d _ {Y} - 2} \mathbf {1} _ {n} ^ {T} \end{array} \right]\right) + \left[ \begin{array}{c} \eta \\ \eta_ {2} \\ \vdots \\ \eta_ {d _ {Y}} \end{array} \right] \mathbf {1} _ {n} ^ {T}. \\ \end{array}
+$$
+
+Therefore, if $j \leq l'$ , the $(1,j)$ -th component of $\hat{Y}$ $\left(\left[\tilde{W}_i\right]_{i=1}^2, \left[\tilde{b}_i\right]_{i=1}^2\right)$ is
+
+$$
+\left(\tilde {W} _ {2}\right) _ {1} \tilde {Y} ^ {(1)} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) _ {j}
+$$
+
+$$
+\begin{array}{l} = \frac {1}{2 s _ {+}} \left(s _ {-} \left(\tilde {Y} _ {1, j} - \alpha \beta^ {T} x _ {j} - \eta_ {1} + \gamma\right) + 2 s _ {+} \left(\tilde {Y} _ {1, j} - \eta\right) - s _ {+} \left(- \tilde {Y} _ {1 j} + \alpha \beta^ {T} x _ {j} + \eta_ {1} + \gamma\right)\right) \\ + \eta \\ = \frac {1}{2 s _ {+}} \left(- s _ {+} \left(\tilde {Y} _ {1, j} - \alpha \beta^ {T} x _ {j} - \eta_ {1} + \gamma\right) + 2 s _ {+} \left(\tilde {Y} _ {1, j} - \eta\right) - s _ {+} \left(- \tilde {Y} _ {1 j} + \alpha \beta^ {T} x _ {j} + \eta_ {1} + \gamma\right)\right) \\ + \eta \\ = \frac {1}{2 s _ {+}} \left(2 s _ {+} \tilde {Y} _ {1, j} - 2 s _ {+} \eta - 2 s _ {+} \gamma\right) + \eta \\ = \tilde {Y} _ {1, j} - \gamma . \\ \end{array}
+$$
+
+Otherwise $(j > l')$ , the $(1,j)$ -th component of $\hat{Y}$ $\left(\left[\tilde{W}_i\right]_{i = 1}^2,\left[\tilde{b}_i\right]_{i = 1}^2\right)$ is
+
+$$
\begin{array}{l} \left(\tilde {W} _ {2}\right) _ {1} \tilde {Y} ^ {(1)} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) _ {j} \\ = \frac {1}{2 s _ {+}} \left(s _ {+} \left(\tilde {Y} _ {1, j} - \alpha \beta^ {T} x _ {j} - \eta_ {1} + \gamma\right) + 2 s _ {+} \left(\tilde {Y} _ {1, j} - \eta\right) - s _ {-} \left(- \tilde {Y} _ {1, j} + \alpha \beta^ {T} x _ {j} + \eta_ {1} + \gamma\right)\right) + \eta \\ = \frac {1}{2 s _ {+}} \left(s _ {+} \left(\tilde {Y} _ {1, j} - \alpha \beta^ {T} x _ {j} - \eta_ {1} + \gamma\right) + 2 s _ {+} \left(\tilde {Y} _ {1, j} - \eta\right) + s _ {+} \left(- \tilde {Y} _ {1, j} + \alpha \beta^ {T} x _ {j} + \eta_ {1} + \gamma\right)\right) + \eta \\ = \frac {1}{2 s _ {+}} \left(2 s _ {+} \tilde {Y} _ {1, j} - 2 s _ {+} \eta + 2 s _ {+} \gamma\right) + \eta \\ = \tilde {Y} _ {1, j} + \gamma , \\ \end{array}
+$$
+
+and the $(i,j)$ -th $(i > 1)$ component of $\hat{Y}\left(\left[\tilde{W}_i\right]_{i = 1}^2,\left[\tilde{b}_i\right]_{i = 1}^2\right)$ is $\tilde{Y}_{i,j}$ .
+
+Therefore, we have
+
+$$
\left(\tilde {W} _ {2} \left(\tilde {W} _ {1} x _ {i} + \tilde {b} _ {1}\right) + \tilde {b} _ {2} - \tilde {W} \left[ \begin{array}{c} x _ {i} \\ 1 \end{array} \right]\right) _ {j} = \left\{ \begin{array}{l l} - \gamma , & j = 1, i \leq l ^ {\prime}; \\ \gamma , & j = 1, i > l ^ {\prime}; \\ 0, & j \geq 2. \end{array} \right.
+$$
+
+Then, similar to Theorem 4, we have
+
+$$
\begin{array}{l} \hat {\mathcal {R}} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {L}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {L}\right) - \hat {\mathcal {R}} \left(\left[ \hat {W} _ {i} \right] _ {i = 1} ^ {L}, \left[ \hat {b} _ {i} \right] _ {i = 1} ^ {L}\right) \\ = \frac {1}{n} \sum_ {i = 1} ^ {n} l \left(Y _ {i}, \tilde {W} _ {2} \left(\tilde {W} _ {1} x _ {i} + \tilde {b} _ {1}\right) + \tilde {b} _ {2}\right) - \frac {1}{n} \sum_ {i = 1} ^ {n} l \left(Y _ {i}, \hat {W} \left[ \begin{array}{l} x _ {i} \\ 1 \end{array} \right]\right) \\ = \frac {1}{n} \sum_ {i = 1} ^ {n} \nabla_ {\hat {Y} _ {i}} l \left(Y _ {i}, \tilde {W} \left[ \begin{array}{c} x _ {i} \\ 1 \end{array} \right]\right) \left(\tilde {W} _ {2} \left(\tilde {W} _ {1} x _ {i} + \tilde {b} _ {1}\right) + \tilde {b} _ {2} - \tilde {W} \left[ \begin{array}{c} x _ {i} \\ 1 \end{array} \right]\right) \\ + \sum_ {i = 1} ^ {n} o \left(\left\| \tilde {W} _ {2} \left(\tilde {W} _ {1} x _ {i} + \tilde {b} _ {1}\right) + \tilde {b} _ {2} - \tilde {W} \left[ \begin{array}{c} x _ {i} \\ 1 \end{array} \right] \right\|\right) \\ = - \frac {2}{n} \sum_ {i = 1} ^ {l ^ {\prime}} \mathbf {V} _ {1, i} \gamma + o (\gamma), \\ \end{array}
+$$
+
+where $\mathbf{V}$ and $l^{\prime}$ are also defined the same as those in Theorem 4.
+
+When $\gamma$ is sufficiently small and $\operatorname{sgn}(\gamma) = \operatorname{sgn}\left(\sum_{i=1}^{l'} V_{1,i}\right)$ , we have that
+
+$$
\hat {\mathcal {R}} \left(\left[ \tilde {W} _ {i} \right] _ {i = 1} ^ {2}, \left[ \tilde {b} _ {i} \right] _ {i = 1} ^ {2}\right) < f (\tilde {W}).
+$$
+
This completes the proof of Corollary 3.
+
+
+
+# A.5 A PREPARATION LEMMA
+
+We now prove the preparation lemma used above.
+
Lemma 7. Suppose $\mathbf{u} = (u_{1}\dots u_{n})\in \mathbb{R}^{1\times n}$ satisfies $\mathbf{u}\neq \mathbf{0}$ and
+
+$$
+\sum_ {i = 1} ^ {n} u _ {i} = 0, \tag {50}
+$$
+
while $\{x_1, \ldots, x_n\} \subset \mathbb{R}^{m \times 1}$ is a set of vectors. Let $S = \{1, 2, \dots, n\}$ be the index set. Then for any sequence of real numbers $\{v_1, \dots, v_n\}$, there exist a separation $I$, $J$ of $S$, satisfying $I \cup J = S$ and $I \cap J = \emptyset$ with both $I$ and $J$ non-empty, and a vector $\beta \in \mathbb{R}^{m \times 1}$, such that,
+
+(1.1) for any sufficiently small positive real $\alpha$ , $i \in I$ , and $j \in J$ , we have $v_{i} - \alpha \beta^{T} x_{i} < v_{j} - \alpha \beta^{T} x_{j}$ ;
+
+$$
+(1. 2) \sum_ {i \in I} u _ {i} \neq 0.
+$$
+
Proof. If there exists a non-empty separation $I$ and $J$ of the index set $S$ such that (1.1) and (1.2) hold with $\beta = 0$, the lemma holds immediately.
+
+Otherwise, suppose that there is no non-empty separation $I$ and $J$ of the index set $S$ such that (1.1) and (1.2) hold simultaneously when $\beta = 0$ .
+
Some numbers $v_{i}$ in the sequence $(v_{1}, v_{2}, \dots, v_{n})$ may be equal to each other. We rearrange the sequence in increasing order as follows:
+
+$$
+v _ {1} = v _ {2} = \dots = v _ {s _ {1}} < v _ {s _ {1} + 1} = \dots = v _ {s _ {2}} < \dots < v _ {s _ {k - 1} + 1} = \dots = v _ {s _ {k}} = v _ {n}, \tag {51}
+$$
+
+where $s_k = n$ .
+
+Then, for any $j\in \{1,2,\dots ,k - 1\}$ , we argue that
+
+$$
+\sum_ {i = 1} ^ {s _ {j}} u _ {i} = 0.
+$$
+
Otherwise, suppose there exists an $s_j$ such that
+
+$$
+\sum_ {i = 1} ^ {s _ {j}} u _ {i} \neq 0.
+$$
+
+Let $I = \{1, 2, \dots, s_j\}$ and $J = \{s_j + 1, \dots, n\}$ . Then, when $\beta = 0$ , we have
+
+$$
+v _ {i} - \alpha \beta^ {T} x _ {i} = v _ {i} < v _ {j} = v _ {j} - \alpha \beta^ {T} x _ {j},
+$$
+
+and
+
+$$
+\sum_ {i \in I} u _ {i} = \sum_ {i = 1} ^ {s _ {j}} u _ {i} \neq 0,
+$$
+
which are exactly the arguments (1.1) and (1.2). This contradicts our assumption that no such separation exists when $\beta = 0$. Therefore, for any $j \in \{1, 2, \dots, k - 1\}$, we have
+
+$$
+\sum_ {i = 1} ^ {s _ {j}} u _ {i} = 0.
+$$
+
Since we assume that $\mathbf{u} \neq \mathbf{0}$, there exists an index $t \in \{1, \dots, k-1\}$ such that $u_i \neq 0$ for some index $i \in \{s_t + 1, \dots, s_{t+1}\}$.
+
Let $l \in \{s_t + 1, \dots, s_{t+1}\}$ be the index such that $x_l$ has the largest norm among those with $u_l \neq 0$:
+
+$$
+l = \underset {j \in \{s _ {t} + 1, \dots , s _ {t + 1} \}, u _ {j} \neq 0} {\arg \max } \| x _ {j} \|. \tag {52}
+$$
+
+We further rearrange the sequence $(v_{s_t + 1},\dots,v_{s_{t + 1}})$ such that there is an index $l^{\prime}\in \{s_t + 1,\ldots ,s_{t + 1}\}$ ,
+
+$$
+\left\| x _ {l ^ {\prime}} \right\| = \max _ {j \in \{s _ {t} + 1, \dots , s _ {t + 1} \}, u _ {j} \neq 0} \left\| x _ {j} \right\|,
+$$
+
+and
+
+$$
+\forall i \in \left\{s _ {t} + 1, \dots , l ^ {\prime} \right\}, \left\langle x _ {l ^ {\prime}}, x _ {i} \right\rangle \geq \left\| x _ {l ^ {\prime}} \right\| ^ {2}; \tag {53}
+$$
+
+$$
+\forall i \in \left\{l ^ {\prime} + 1, \dots , s _ {t + 1} \right\}, \langle x _ {l ^ {\prime}}, x _ {i} \rangle < \| x _ {l ^ {\prime}} \| ^ {2}. \tag {54}
+$$
+
It is worth noting that possibly $l' = s_{t+1}$, but this is a trivial case that does not influence the result of this lemma.
+
+Let $I = \{1,\dots,l'\}$ , $J = \{l' + 1,\dots,n\}$ , and $\beta = x_{l'}$ . We prove (1.1) and (1.2) as follows.
+
+# Proof of argument (1.1).
+
+We argue that for any $i \in I$ , $v_{i} - \alpha \beta^{T}x_{i} \leq v_{l^{\prime}} - \alpha \beta^{T}x_{l^{\prime}}$ and for any $j \in J$ , $v_{j} - \alpha \beta^{T}x_{j} > v_{l^{\prime}} - \alpha \beta^{T}x_{l^{\prime}}$ .
+
+There are three situations:
+
+(A) $i\in \{1,\ldots ,s_t\}$ and $j\in \{s_{t + 1} + 1,\dots ,n\}$ . Applying eq. (51), for any $i\in \{1,\ldots ,s_t\}$ and $j\in \{s_{t + 1} + 1,\dots ,n\}$ , we have that $v_{i} < v_{l^{\prime}}$ and $v_{j} > v_{l^{\prime}}$ . Therefore, when $\alpha$ is sufficiently small, we have the following inequalities,
+
+$$
+v _ {i} - \alpha \beta^ {T} x _ {i} < v _ {l ^ {\prime}} - \alpha \beta^ {T} x _ {l ^ {\prime}},
+$$
+
+$$
+v _ {j} - \alpha \beta^ {T} x _ {j} > v _ {l ^ {\prime}} - \alpha \beta^ {T} x _ {l ^ {\prime}}.
+$$
+
(B) $i\in \{s_t + 1,\dots ,l'\}$ . Applying eq. (53) and using $\alpha >0$ , we have
+
+$$
+- \alpha \langle \beta , x _ {i} \rangle \leq - \alpha \| \beta \| ^ {2} = - \alpha \langle \beta , x _ {l ^ {\prime}} \rangle .
+$$
+
+Since $v_{i} = v_{l^{\prime}}$ , it further leads to
+
+$$
+v _ {i} - \alpha \beta^ {T} x _ {i} \leq v _ {l ^ {\prime}} - \alpha \beta^ {T} x _ {l ^ {\prime}}.
+$$
+
(C) $j\in \{l^{\prime} + 1,\dots ,s_{t + 1}\}$ . Similarly, applying eq. (54) and using $\alpha >0$ , we have
+
+$$
+- \alpha \langle \beta , x _ {j} \rangle > - \alpha \left\| \beta \right\| ^ {2} = - \alpha \langle \beta , x _ {l ^ {\prime}} \rangle .
+$$
+
+Since $v_{j} = v_{l^{\prime}}$ , it further leads to
+
+$$
+v _ {j} - \alpha \beta^ {T} x _ {j} > v _ {l ^ {\prime}} - \alpha \beta^ {T} x _ {l ^ {\prime}},
+$$
+
+which is exactly the argument (1.1).
+
+# Proof of argument (1.2).
+
+We argue that for any $i \in \{s_t + 1, \dots, l' - 1\}$ , $u_i = 0$ . Otherwise, suppose there exists an $i \in \{s_t + 1, \dots, l' - 1\}$ such that $u_i \neq 0$ . From eq. (52), we have $\| x_i \| \leq \| x_{l'} \|$ . Therefore,
+
+$$
+\langle x _ {l ^ {\prime}}, x _ {i} \rangle \leq \| x _ {l ^ {\prime}} \| \| x _ {i} \| \leq \| x _ {l ^ {\prime}} \| ^ {2},
+$$
+
where the first inequality is an equality only if $x_{l'}$ and $x_i$ have the same direction, and the second is an equality only if $x_i$ and $x_{l'}$ have the same norm. Because $x_{l'} \neq x_i$, at least one of them is strict, and we have the following inequality,
+
+$$
+\left\langle x _ {l ^ {\prime}}, x _ {i} \right\rangle < \left\| x _ {l ^ {\prime}} \right\| ^ {2},
+$$
+
which contradicts eq. (53), i.e.,
+
+$$
+\langle x _ {l ^ {\prime}}, x _ {i} \rangle \geq \| x _ {l ^ {\prime}} \| ^ {2}, \forall i \in \left\{s _ {t} + 1, \dots , l ^ {\prime} \right\}.
+$$
+
+Therefore,
+
+$$
+\sum_ {i \in I} u _ {i} = \sum_ {i = 1} ^ {s _ {t}} u _ {i} + \sum_ {i = s _ {t} + 1} ^ {l ^ {\prime} - 1} u _ {i} + u _ {l ^ {\prime}} = u _ {l ^ {\prime}} \neq 0,
+$$
+
+which is exactly the argument (1.2).
+
+The proof is completed.
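The easy branch of this proof (when some prefix sum of the $u_i$ over whole groups of equal $v_i$ is nonzero, $\beta = 0$ already works) can be sketched numerically; the data below are illustrative:

```python
import numpy as np

# Illustrative instance of Lemma 7: u is nonzero and sums to 0 (eq. (50)),
# and the v_i contain repeated values, giving the groups of eq. (51).
v = np.array([1.0, 1.0, 2.0, 3.0, 3.0])
u = np.array([0.3, -0.1, 0.5, -0.4, -0.3])
assert abs(u.sum()) < 1e-12                    # eq. (50)

order = np.argsort(v, kind="stable")
v, u = v[order], u[order]
boundaries = np.flatnonzero(np.diff(v)) + 1    # the group boundaries s_1, s_2, ...
I = None
for s in boundaries:
    if abs(u[:s].sum()) > 1e-12:               # prefix sum over whole groups
        I, J = order[:s].tolist(), order[s:].tolist()
        break
assert I is not None                           # this example hits the easy branch
print("I =", I, "J =", J)
```

With $\beta = 0$, every $i \in I$ satisfies $v_i < v_j$ for $j \in J$ (argument (1.1)), and the loop condition guarantees $\sum_{i \in I} u_i \neq 0$ (argument (1.2)); when every such prefix sum vanishes, the proof instead falls back to the $\beta = x_{l'}$ construction.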
+
+
+
+Remark. For any $i \in \{l' + 1, \dots, s_{t+1}\}$ , we have
+
+$$
+\left(v _ {i} - \alpha \beta^ {T} x _ {i}\right) - \left(v _ {l ^ {\prime}} - \alpha \beta^ {T} x _ {l ^ {\prime}}\right) = \alpha \beta^ {T} \left(x _ {l ^ {\prime}} - x _ {i}\right), \tag {55}
+$$
+
+while for any $j\in \{s_{t + 1} + 1,\dots,n\}$ , we have
+
+$$
+\left(v _ {j} - \alpha \beta^ {T} x _ {j}\right) - \left(v _ {l ^ {\prime}} - \alpha \beta^ {T} x _ {l ^ {\prime}}\right) = v _ {j} - v _ {l ^ {\prime}} + \alpha \beta^ {T} \left(x _ {l ^ {\prime}} - x _ {j}\right). \tag {56}
+$$
+
+Because $v_{j} > v_{l^{\prime}}$ , when the real number $\alpha$ is sufficiently small, we have
+
+$$
+\alpha \beta^ {T} (x _ {l ^ {\prime}} - x _ {i}) < v _ {j} - v _ {l ^ {\prime}} + \alpha \beta^ {T} (x _ {l ^ {\prime}} - x _ {j}).
+$$
+
+Applying eqs. (55) and (56), we have
+
+$$
+\left(v _ {i} - \alpha \beta^ {T} x _ {i}\right) - \left(v _ {l ^ {\prime}} - \alpha \beta^ {T} x _ {l ^ {\prime}}\right) < \left(v _ {j} - \alpha \beta^ {T} x _ {j}\right) - \left(v _ {l ^ {\prime}} - \alpha \beta^ {T} x _ {l ^ {\prime}}\right).
+$$
+
+Therefore, if $l' < s_{t+1}$ , we have
+
+$$
+\min _ {i \in \left\{l ^ {\prime} + 1, \dots , n \right\}} \left(v _ {i} - \alpha \beta^ {T} x _ {i}\right) - \left(v _ {l ^ {\prime}} - \alpha \beta^ {T} x _ {l ^ {\prime}}\right) = \min _ {i \in \left\{l ^ {\prime} + 1, \dots , s _ {t + 1} \right\}} \alpha \beta^ {T} \left(x _ {l ^ {\prime}} - x _ {i}\right); \tag {57}
+$$
+
while if $l^{\prime} = s_{t + 1}$,
+
+$$
\min _ {i \in \{l ^ {\prime} + 1, \dots , n \}} \left(v _ {i} - \alpha \beta^ {T} x _ {i}\right) - \left(v _ {l ^ {\prime}} - \alpha \beta^ {T} x _ {l ^ {\prime}}\right) = \min _ {i \in \{l ^ {\prime} + 1, \dots , n \}} v _ {i} - v _ {l ^ {\prime}} + \alpha \beta^ {T} \left(x _ {l ^ {\prime}} - x _ {i}\right). \tag {58}
+$$
+
Eqs. (57) and (58) make sense because $l' < n$. Otherwise, Lemma 7 would imply $\sum_{i=1}^{n} u_i \neq 0$, which contradicts the assumption (eq. (50)).
+
+# B PROOFS OF THEOREM 2, THEOREM 3, COROLLARY 1, AND COROLLARY 2
+
+This appendix gives the proofs of Theorem 2, Theorem 3, Corollary 1, and Corollary 2 omitted from Section 4.
+
+# B.1 SQUARED LOSS
+
We first check that the squared loss is strictly convex, which is even more restrictive than "convex".
+
+Lemma 8. The empirical risk $\hat{\mathcal{R}}$ under squared loss (defined by eq. (5)) is strictly convex with respect to the prediction $\hat{Y}$ .
+
+Proof. The second derivative of the empirical risk $\hat{\mathcal{R}}$ under squared loss with respect to the prediction $\hat{Y}$ is
+
+$$
\frac {\partial^ {2} l (Y , \hat {Y})}{\partial \hat {Y} ^ {2}} = \frac {\partial^ {2} (Y - \hat {Y}) ^ {2}}{\partial \hat {Y} ^ {2}} = 2 > 0.
+$$
+
+Therefore, the empirical risk $\hat{\mathcal{R}}$ under squared loss is strictly convex with respect to prediction $\hat{Y}$ .
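A finite-difference sketch of this computation, with illustrative values:

```python
# The squared loss l(Y, Yhat) = (Y - Yhat)^2 has constant second derivative 2
# in the prediction, hence is strictly convex; the inputs are illustrative.
l = lambda y, yhat: (y - yhat) ** 2
y, yhat, eps = 0.7, -1.3, 1e-4

# central second difference approximates the second derivative
second = (l(y, yhat + eps) - 2 * l(y, yhat) + l(y, yhat - eps)) / eps ** 2
assert abs(second - 2.0) < 1e-5
print("second derivative of the squared loss is constant and positive")
```

For a quadratic, the central second difference is exact up to floating-point rounding, which is why the tolerance can be taken tight.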
+
+# B.2 SMOOTH AND MULTILINEAR PARTITION.
+
If the activations are all linear functions, the neural network reduces to a multilinear model, and the loss surface is apparently smooth and multilinear. The nonlinearity in the activations largely reshapes the landscape of the loss surface. Specifically, if the input data flows through the linear parts of every activation function, the output falls in a smooth and multilinear region of the loss surface. When some parameter changes by a sufficiently small shift, the data flow does not move out of the linear parts of the activations. This fact guarantees that each smooth and multilinear region expands to an open cell. Meanwhile, every nonlinear point of an activation is non-differentiable. If the input data flows through these nonlinear points, the corresponding empirical risk is not smooth with respect to the parameters. Therefore, the nonlinear points of the activations correspond to the non-differentiable boundaries between cells on the loss surface.
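The cell picture can be illustrated numerically: for a ReLU-like activation, the sign pattern of the pre-activations identifies the linear pieces in use, and a sufficiently small parameter shift leaves that pattern (and hence the cell) unchanged. Sizes and the perturbation scale are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 20))             # 20 input samples
W1 = rng.standard_normal((6, 4))             # first-layer weights

def pattern(W):
    # which side of the ReLU kink each hidden unit is on, per sample
    return np.sign(W @ X)

base = pattern(W1)
small = W1 + 1e-6 * rng.standard_normal(W1.shape)   # tiny shift: stays in the cell
assert np.array_equal(pattern(small), base)

large = W1 + 10.0 * rng.standard_normal(W1.shape)   # big shift: typically leaves the cell
print("pattern changed under the large shift:", not np.array_equal(pattern(large), base))
```

Within one pattern, each output coordinate is a multilinear (in fact bilinear, for two layers) function of the weights, which is the "smooth and multilinear region" described above.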
+
+# B.3 EVERY LOCAL MINIMUM IS GLOBALLY MINIMAL WITHIN A CELL.
+
+Proof of Theorem 2. In every cell, the input sample points flow through the same linear parts of the activations regardless of the values of the parameters.
+
+(1) We first prove that the empirical risk $\hat{\mathcal{R}}$ equals a convex function with respect to a variable $\hat{W}$ computed from the parameters $W$ .
+
+Suppose $(W_{1},W_{2})$ is a local minimum within a cell. We argue that
+
+$$
+\sum_ {i = 1} ^ {n} l \left(y _ {i}, W _ {2} \operatorname {d i a g} \left(A _ {., i}\right) W _ {1} x _ {i}\right) = \sum_ {i = 1} ^ {n} l \left(y _ {i}, A _ {., i} ^ {T} \operatorname {d i a g} \left(W _ {2}\right) W _ {1} x _ {i}\right), \tag {59}
+$$
+
+where $A_{\cdot,i}$ is the $i$ -th column of the following matrix
+
+$$
+A = \left[ \begin{array}{ccc} h _ {s_-, s_+} ^ {\prime} \left( (W_1)_{1,\cdot} \, x_1 \right) & \dots & h _ {s_-, s_+} ^ {\prime} \left( (W_1)_{1,\cdot} \, x_n \right) \\ \vdots & \ddots & \vdots \\ h _ {s_-, s_+} ^ {\prime} \left( (W_1)_{d_1,\cdot} \, x_1 \right) & \dots & h _ {s_-, s_+} ^ {\prime} \left( (W_1)_{d_1,\cdot} \, x_n \right) \end{array} \right]. \tag {60}
+$$
+
+The left-hand side (LHS) is as follows,
+
+$$
+\begin{array}{rl} \mathrm{LHS} & = \sum_{i=1}^{n} l \left( y_i, W_2 \operatorname{diag} (A_{\cdot,i}) W_1 x_i \right) \\ & = \sum_{i=1}^{n} l \left( y_i, \left[ (W_2)_{1,1} A_{1,i} \quad \dots \quad (W_2)_{1,d_1} A_{d_1,i} \right] W_1 x_i \right). \end{array}
+$$
+
+Meanwhile, the right-hand side (RHS) is as follows,
+
+$$
+\begin{array}{rl} \mathrm{RHS} & = \sum_{i=1}^{n} l \left( y_i, A_{\cdot,i}^T \operatorname{diag} (W_2) W_1 x_i \right) \\ & = \sum_{i=1}^{n} l \left( y_i, \left[ (W_2)_{1,1} A_{1,i} \quad \dots \quad (W_2)_{1,d_1} A_{d_1,i} \right] W_1 x_i \right). \end{array}
+$$
+
+The two right-hand sides coincide, so $\mathrm{LHS} = \mathrm{RHS}$ , which proves eq. (59).
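The identity in eq. (59) is easy to verify numerically. The following numpy sketch uses illustrative shapes ($W_2 \in \mathbb{R}^{1 \times d_1}$, $W_1 \in \mathbb{R}^{d_1 \times d_X}$) and a random 0/1 column in place of the activation-derivative column $A_{\cdot,i}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_1 = 4, 5
W1 = rng.normal(size=(d_1, d_x))
W2 = rng.normal(size=d_1)                        # W_2 as a 1 x d_1 row vector
a = rng.integers(0, 2, size=d_1).astype(float)   # a column A_{.,i} of activation slopes
x = rng.normal(size=d_x)

lhs = W2 @ np.diag(a) @ W1 @ x                   # W_2 diag(A_{.,i}) W_1 x_i
rhs = a @ np.diag(W2) @ W1 @ x                   # A_{.,i}^T diag(W_2) W_1 x_i
assert np.allclose(lhs, rhs)
```

The identity holds because multiplying by a diagonal matrix commutes with swapping which vector carries the diagonal entries.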
+
+Afterwards, we define
+
+$$
+\hat {W} _ {1} = \operatorname {d i a g} \left(W _ {2}\right) W _ {1}, \tag {61}
+$$
+
+and then straighten the matrix $\hat{W}_1$ to a vector $\hat{W}$
+
+$$
+\hat {W} = \left(\left(\hat {W} _ {1}\right) _ {1, \cdot} \quad \dots \quad \left(\hat {W} _ {1}\right) _ {d _ {1}, \cdot}\right), \tag {62}
+$$
+
+Also define
+
+$$
+\hat {X} = \left(A _ {., 1} \otimes x _ {1} \quad \dots \quad A _ {., n} \otimes x _ {n}\right). \tag {63}
+$$
+
+Then, the following equation holds,
+
+$$
+\left( A_{\cdot,1}^T \hat{W}_1 x_1 \quad \dots \quad A_{\cdot,n}^T \hat{W}_1 x_n \right) = \left( (\hat{W}_1)_{1,\cdot} \quad \dots \quad (\hat{W}_1)_{d_1,\cdot} \right) \left( A_{\cdot,1} \otimes x_1 \quad \dots \quad A_{\cdot,n} \otimes x_n \right) = \hat{W} \hat{X}. \tag {64}
+$$
+
+Applying eq. (64), the empirical risk is transformed into a convex function as follows,
+
+$$
+\hat{\mathcal{R}} (W_1, W_2) = \frac{1}{n} \sum_{i=1}^{n} l \left( y_i, A_{\cdot,i}^T \operatorname{diag} (W_2) W_1 x_i \right) = \frac{1}{n} \sum_{i=1}^{n} l \left( y_i, A_{\cdot,i}^T \hat{W}_1 x_i \right) = \frac{1}{n} \sum_{i=1}^{n} l \left( y_i, \hat{W} \hat{X}_i \right). \tag {65}
+$$
+
+We can see that the empirical risk is rearranged as a convex function in terms of $\hat{W}$ , which unites the two weight matrices $W_{1}$ and $W_{2}$ and the activation $h$ into a single variable $\hat{W}$ .
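Eq. (64) can be checked numerically as well: concatenating the rows of $\hat{W}_1$ (eq. (62)) and pairing them with the Kronecker products $A_{\cdot,i} \otimes x_i$ (eq. (63)) reproduces $A_{\cdot,i}^T \hat{W}_1 x_i$. A numpy sketch with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
d_x, d_1 = 3, 4
W1_hat = rng.normal(size=(d_1, d_x))   # \hat{W}_1 = diag(W_2) W_1, eq. (61)
a = rng.normal(size=d_1)               # A_{.,i}
x = rng.normal(size=d_x)

W_hat = W1_hat.reshape(-1)             # rows of \hat{W}_1 concatenated, eq. (62)
X_hat_i = np.kron(a, x)                # A_{.,i} \otimes x_i, eq. (63)
assert np.allclose(a @ W1_hat @ x, W_hat @ X_hat_i)   # eq. (64)
```

`np.kron` on 1-D arrays produces exactly the block structure the row-wise flattening of $\hat{W}_1$ expects, which is why the bilinear form collapses to a single inner product.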
+
+Applying eqs. (61) and (62), we have
+
+$$
+\hat {W} = \left[ \left(W _ {2}\right) _ {1} \left(W _ {1}\right) _ {1, \cdot} \quad \dots \quad \left(W _ {2}\right) _ {d _ {1}} \left(W _ {1}\right) _ {d _ {1}, \cdot} \right].
+$$
+
+(2) We then prove that every local minimum (including every global minimum) of the empirical risk $\hat{\mathcal{R}}$ with respect to the parameters $W$ is also a local minimum with respect to the corresponding variable $\hat{W}$ .
+
+We first prove that for any $i \in [1:d_1 d_X]$ , we have
+
+$$
+e _ {i} \hat {X} \nabla = 0,
+$$
+
+where $\nabla$ is defined as follows,
+
+$$
+\nabla = \left[ \nabla_ {(\hat {W} \hat {X}) _ {1}} l (\boldsymbol {Y} _ {1}, (\hat {W} \hat {X}) _ {1}) \dots \nabla_ {(\hat {W} \hat {X}) _ {n}} l (\boldsymbol {Y} _ {n}, (\hat {W} \hat {X}) _ {n}) \right] ^ {T}.
+$$
+
+To see this, we consider two cases: $(W_{2})_{i}\neq 0$ and $(W_{2})_{i} = 0$ .
+
+Case 1: $(\mathbf{W}_2)_i \neq \mathbf{0}$ .
+
+A local minimizer of the empirical risk $\hat{\mathcal{R}}$ with respect to the parameters $W$ satisfies the following equation,
+
+$$
+\frac {\partial \hat {\mathcal {R}}}{\partial (W _ {1}) _ {i , j}} = 0.
+$$
+
+Therefore,
+
+$$
+\begin{array}{l} 0 = \frac {\partial \hat {\mathcal {R}}}{\partial (W _ {1}) _ {i , j}} \\ = \frac {\partial \left(\sum_ {k = 1} ^ {n} l \left(Y _ {, k} , \left(\hat {W} \hat {X}\right) _ {, k}\right)\right)}{\partial \left(W _ {1}\right) _ {i , j}} \\ = \sum_ {k = 1} ^ {n} \left[ \underbrace {0 \cdots 0} _ {d _ {X} (i - 1) + j - 1} \left(W _ {2}\right) _ {i} \underbrace {0 \cdots 0} _ {d _ {X} d _ {1} - d _ {X} (i - 1) - j} \right] \hat {X} _ {k} \nabla_ {\left(\hat {W} \hat {X}\right) _ {., k}} l \left(Y _ {., k}, \left(\hat {W} \hat {X}\right) _ {., k}\right), \tag {66} \\ \end{array}
+$$
+
+where $W_{2}$ is a vector and $(W_{2})_{i}$ is its $i$ -th component.
+
+Then, dividing both sides of eq. (66) by $(W_{2})_{i}$ , we obtain the following equation,
+
+$$
+(e _ {d _ {X} (i - 1) + j} \hat {X}) \nabla = 0.
+$$
+
+Case 2: $(\mathbf{W}_2)_i = \mathbf{0}$ . Suppose $\mathbf{u}_1 \in \mathbb{R}^{d_X}$ is a unit vector, $u_2 \in \mathbb{R}$ is a real number, and $\varepsilon$ is a sufficiently small positive constant. Then, define a perturbation of $W_1$ and $W_2$ as follows,
+
+$$
+\Delta W _ {1} = \left[ \underbrace {0 \cdots 0} _ {d_X (i-1)} \quad \varepsilon \mathbf{u}_1 \quad \underbrace {0 \cdots 0} _ {d_1 d_X - d_X i} \right],
+$$
+
+$$
+\Delta W _ {2} = \left[ \underbrace {0 \cdots 0} _ {i-1} \quad \varepsilon^2 u_2 \quad \underbrace {0 \cdots 0} _ {d_1 - i} \right].
+$$
+
+When $\varepsilon$ is sufficiently small, $\Delta W_{1}$ and $\Delta W_{2}$ are also sufficiently small. Since $(W_{1},W_{2})$ is a local minimum, we have
+
+$$
+\begin{array}{l} \frac {1}{n} \sum_ {k = 1} ^ {n} l \left(Y _ {k}, \left(\left(\hat {W} + \Delta\right) \hat {X}\right) _ {k}\right) \\ = \hat {\mathcal {R}} \left(W _ {1} + \Delta W _ {1}, W _ {2} + \Delta W _ {2}\right) \\ \geq \hat {\mathcal {R}} (W _ {1}, W _ {2}) \\ = \frac {1}{n} \sum_ {k = 1} ^ {n} l \left(Y _ {k}, \left(\hat {W} \hat {X}\right) _ {k}\right), \tag {67} \\ \end{array}
+$$
+
+where $\Delta$ is defined as follows,
+
+$$
+\begin{array}{rl} \Delta & = \left[ (W_2 + \Delta W_2)_1 (W_1 + \Delta W_1)_{1,\cdot} \quad \dots \quad (W_2 + \Delta W_2)_{d_1} (W_1 + \Delta W_1)_{d_1,\cdot} \right] \\ & \quad - \left[ (W_2)_1 (W_1)_{1,\cdot} \quad \dots \quad (W_2)_{d_1} (W_1)_{d_1,\cdot} \right] \\ & \stackrel{(*)}{=} \left[ \underbrace {0 \cdots 0} _ {d_X (i-1)} \quad \varepsilon^2 u_2 \left( \varepsilon \mathbf{u}_1 + (W_1)_{i,\cdot} \right) \quad \underbrace {0 \cdots 0} _ {d_1 d_X - d_X i} \right]. \tag {68} \end{array}
+$$
+
+Here, eq. $(*)$ comes from $(W_2)_i = 0$ . Rearranging eq. (67) and applying Taylor's theorem, we obtain
+
+$$
+\Delta \cdot \hat {X} \nabla + \mathbf {O} \left(\| \Delta \cdot \hat {X} \| ^ {2}\right) \geq 0.
+$$
+
+Applying eq. (68), we have
+
+$$
+\begin{array}{rl} & \left[ \underbrace {0 \cdots 0} _ {d_X (i-1)} \quad \varepsilon^2 u_2 \left( \varepsilon \mathbf{u}_1 + (W_1)_{i,\cdot} \right) \quad \underbrace {0 \cdots 0} _ {d_X d_1 - i d_X} \right] \hat{X} \nabla + \varepsilon^4 O \left( \left\| \left[ \underbrace {0 \cdots 0} _ {d_X (i-1)} \quad u_2 \left( \varepsilon \mathbf{u}_1 + (W_1)_{i,\cdot} \right) \quad \underbrace {0 \cdots 0} _ {d_X d_1 - i d_X} \right] \hat{X} \right\|^2 \right) \\ \stackrel{(**)}{=} & \varepsilon^3 \left[ \underbrace {0 \cdots 0} _ {d_X (i-1)} \quad u_2 \mathbf{u}_1 \quad \underbrace {0 \cdots 0} _ {d_X d_1 - i d_X} \right] \hat{X} \nabla + o (\varepsilon^3) \quad (69) \\ \geq & 0. \quad (70) \end{array}
+$$
+
+Here, eq. $(\ast \ast)$ can be obtained as follows. Because $W_{2}$ is a local minimizer, for any component $(W_{2})_{i}$ of $W_{2}$ ,
+
+$$
+\frac {\partial \left(\sum_ {k = 1} ^ {n} l \left(Y _ {k} , \left(\hat {W} \hat {X}\right) _ {k}\right)\right)}{\partial \left(W _ {2}\right) _ {i}} = 0,
+$$
+
+which leads to
+
+$$
+\left[ \underbrace {0 \cdots 0} _ {d_X (i-1)} \quad (W_1)_{i,\cdot} \quad \underbrace {0 \cdots 0} _ {d_X d_1 - i d_X} \right] \hat{X} \nabla = 0.
+$$
+
+Dividing eq. (69) by $\varepsilon^3$ and letting $\varepsilon$ approach 0, we obtain the following inequality,
+
+$$
+\left[ \underbrace {0 \cdots 0} _ {d_X (i-1)} \quad u_2 \mathbf{u}_1 \quad \underbrace {0 \cdots 0} _ {d_X d_1 - i d_X} \right] \hat{X} \nabla \geq 0.
+$$
+
+Since $\mathbf{u}_1$ and $u_2$ are arbitrarily chosen (with unit norms), the inequality above further leads to
+
+$$
+\left[ \underbrace {0 \cdots 0} _ {d_X (i-1)} \quad e_j \quad \underbrace {0 \cdots 0} _ {d_X d_1 - i d_X} \right] \hat{X} \nabla = 0, \tag {71}
+$$
+
+which finishes the proof of the argument.
+
+Therefore, for any $i$ and $j$ , we have proven that
+
+$$
+e _ {d _ {X} (i - 1) + j} \hat {X} \nabla = 0,
+$$
+
+which demonstrates that
+
+$$
+\hat {X} \nabla = 0,
+$$
+
+which means $\hat{W}$ is also a local minimizer of the empirical risk $\hat{\mathcal{R}}$ ,
+
+$$
+\hat {\mathcal {R}} (W) = \sum_ {i = 1} ^ {n} l \left(Y _ {i}, W \hat {X} _ {i}\right). \tag {72}
+$$
+
+(3) Applying the property of convex functions, $\hat{W}$ is a global minimizer of the empirical risk $\hat{\mathcal{R}}$ , which implies that $(W_{1}, W_{2})$ is a global minimum inside this cell.
+
+The proof is completed.
+
+# B.4 EQUIVALENCE CLASSES OF LOCAL MINIMUM VALLEYS IN CELLS.
+
+Proof of Theorem 3 and Corollary 1. In the proof of Theorem 2, we constructed a map $Q: (W_1, W_2) \to \hat{W}$ . Further, in any fixed cell, the represented hypothesis of a neural network is uniquely determined by $\hat{W}$ .
+
+# We first prove that all local minima in a cell are concentrated as a local minimum valley.
+
+Since the loss function $l$ is strictly convex, the empirical risk has one unique local minimum (which is also a global minimum) with respect to $\hat{W}$ in every cell, if any local minimum exists in the cell. Meanwhile, we have proved that all local minima with respect to $(W_1, W_2)$ are also local minima with respect to the corresponding $\hat{W}$ . Therefore, all local minima with respect to $(W_1, W_2)$ correspond to one unique $\hat{W}$ . Within a cell, when $W_1$ expands by a positive real factor $\alpha$ to $W_1'$ and $W_2$ shrinks by the same positive real factor $\alpha$ to $W_2'$ , we have $Q(W_1, W_2) = Q(W_1', W_2')$ , i.e., $\hat{W}$ remains invariant.
+
+Further, we argue that all local minima in a cell are connected to each other by a continuous path, on which the empirical risk is invariant. For every pair of local minima $(W_{1}, W_{2})$ and $(W_{1}', W_{2}')$ , we have
+
+$$
+\operatorname {d i a g} \left(W _ {2}\right) W _ {1} = \operatorname {d i a g} \left(W _ {2} ^ {\prime}\right) W _ {1} ^ {\prime}. \tag {73}
+$$
+
+Since $h_{s_{-}, s_{+}}^{\prime}(W_{1}X) = h_{s_{-}, s_{+}}^{\prime}(W_{1}^{\prime}X)$ (element-wise), for every $i \in [1, d_{1}]$ ,
+
+$$
+\operatorname {s g n} \left(\left(W _ {2}\right) _ {i}\right) = \operatorname {s g n} \left(\left(W _ {2} ^ {\prime}\right) _ {i}\right).
+$$
+
+Therefore, a continuous path from $(W_{1}, W_{2})$ to $(W_{1}', W_{2}')$ can be constructed by finitely many moves, each of which expands a component of $W_{2}$ by a real constant $\alpha$ and shrinks the corresponding row of $W_{1}$ by the same constant $\alpha$ .
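This rescaling move is easy to verify numerically: multiplying $(W_2)_i$ by $\alpha > 0$ while dividing the $i$-th row of $W_1$ by the same $\alpha$ leaves $\operatorname{diag}(W_2) W_1$, and hence $\hat{W}$ and the empirical risk, unchanged. A numpy sketch with illustrative shapes:

```python
import numpy as np

rng = np.random.default_rng(2)
d_x, d_1 = 3, 4
W1 = rng.normal(size=(d_1, d_x))
W2 = rng.normal(size=d_1)

alpha, i = 2.5, 1          # alpha > 0 keeps the signs of W2, staying in the cell
W1_new, W2_new = W1.copy(), W2.copy()
W2_new[i] *= alpha         # expand the i-th component of W_2
W1_new[i] /= alpha         # shrink the i-th row of W_1 by the same factor

# diag(W_2) W_1, and hence \hat{W}, is invariant under this move
assert np.allclose(np.diag(W2) @ W1, np.diag(W2_new) @ W1_new)
```

Sweeping $\alpha$ continuously from 1 to the target ratio traces out exactly the constant-risk path used in the proof.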
+
+# We then prove that all local minima in a cell constitute an equivalence class.
+
+Define a relation $\sim_R$ as follows,
+
+$$
+\left(W _ {1} ^ {1}, W _ {2} ^ {1}\right) \sim_ {R} \left(W _ {1} ^ {2}, W _ {2} ^ {2}\right),
+$$
+
+if
+
+$$
+Q \left(W _ {1} ^ {1}, W _ {2} ^ {1}\right) = Q \left(W _ {1} ^ {2}, W _ {2} ^ {2}\right).
+$$
+
+We then argue that $\sim_R$ is an equivalence relation. The three properties of equivalence relations are checked as follows.
+
+# (1) Reflexivity:
+
+For any $(W_{1},W_{2})$ , we have
+
+$$
+Q \left(W _ {1}, W _ {2}\right) = Q \left(W _ {1}, W _ {2}\right).
+$$
+
+Therefore,
+
+$$
+\left(W _ {1}, W _ {2}\right) \sim_ {R} \left(W _ {1}, W _ {2}\right).
+$$
+
+# (2) Symmetry:
+
+For any pair $(W_1^1, W_2^1)$ and $(W_1^2, W_2^2)$ , suppose that
+
+$$
+\left(W _ {1} ^ {1}, W _ {2} ^ {1}\right) \sim_ {R} \left(W _ {1} ^ {2}, W _ {2} ^ {2}\right).
+$$
+
+Thus,
+
+$$
+Q \left(W _ {1} ^ {1}, W _ {2} ^ {1}\right) = Q \left(W _ {1} ^ {2}, W _ {2} ^ {2}\right).
+$$
+
+Apparently,
+
+$$
+Q \left(W _ {1} ^ {2}, W _ {2} ^ {2}\right) = Q \left(W _ {1} ^ {1}, W _ {2} ^ {1}\right).
+$$
+
+Therefore,
+
+$$
+\left(W _ {1} ^ {2}, W _ {2} ^ {2}\right) \sim_ {R} \left(W _ {1} ^ {1}, W _ {2} ^ {1}\right).
+$$
+
+# (3) Transitivity:
+
+For any $(W_1^1, W_2^1)$ , $(W_1^2, W_2^2)$ , and $(W_1^3, W_2^3)$ , suppose that
+
+$$
+\left(W _ {1} ^ {1}, W _ {2} ^ {1}\right) \sim_ {R} \left(W _ {1} ^ {2}, W _ {2} ^ {2}\right),
+$$
+
+$$
+\left(W _ {1} ^ {2}, W _ {2} ^ {2}\right) \sim_ {R} \left(W _ {1} ^ {3}, W _ {2} ^ {3}\right).
+$$
+
+Then,
+
+$$
+Q \left(W _ {1} ^ {1}, W _ {2} ^ {1}\right) = Q \left(W _ {1} ^ {2}, W _ {2} ^ {2}\right),
+$$
+
+$$
+Q \left(W _ {1} ^ {2}, W _ {2} ^ {2}\right) = Q \left(W _ {1} ^ {3}, W _ {2} ^ {3}\right).
+$$
+
+Apparently,
+
+$$
+Q \left(W _ {1} ^ {1}, W _ {2} ^ {1}\right) = Q \left(W _ {1} ^ {2}, W _ {2} ^ {2}\right) = Q \left(W _ {1} ^ {3}, W _ {2} ^ {3}\right).
+$$
+
+Therefore,
+
+$$
+\left(W _ {1} ^ {1}, W _ {2} ^ {1}\right) \sim_ {R} \left(W _ {1} ^ {3}, W _ {2} ^ {3}\right).
+$$
+
+# We then prove that the mapping $Q$ is the quotient map.
+
+Define a map as follows,
+
+$$
+T: \left(W _ {1}, W _ {2}\right)\rightarrow \left(\operatorname {d i a g} \left(W _ {2}\right) W _ {1}, \mathbf {1} _ {1 \times d _ {1}}\right).
+$$
+
+We then define an operator $\oplus$ as,
+
+$$
+\left(W _ {1} ^ {1}, W _ {2} ^ {1}\right) \oplus \left(W _ {1} ^ {2}, W _ {2} ^ {2}\right) = T \left(W _ {1} ^ {1}, W _ {2} ^ {1}\right) + T \left(W _ {1} ^ {2}, W _ {2} ^ {2}\right),
+$$
+
+The inverse of $(W_{1},W_{2})$ is defined to be $(-W_{1},W_{2})$ , and the zero element is defined to be $(\mathbf{0},\mathbf{1}_{1\times d_1})$ .
+
+Obviously, the following is a linear mapping:
+
+$$
+Q: \left(\left(\mathbb {R} ^ {d _ {1} \times d _ {X}}, \mathbb {R} ^ {1 \times d _ {1}}\right), \oplus\right)\rightarrow \left(\mathbb {R} ^ {1 \times d _ {X} d _ {1}}, +\right).
+$$
+
+For any pair $(W_1^1, W_2^1)$ and $(W_1^2, W_2^2)$ , we have
+
+$$
+(W _ {1} ^ {1}, W _ {2} ^ {1}) \sim_ {R} (W _ {1} ^ {2}, W _ {2} ^ {2}),
+$$
+
+if and only if
+
+$$
+\left(W _ {1} ^ {1}, W _ {2} ^ {1}\right) \oplus \left(- W _ {1} ^ {2}, W _ {2} ^ {2}\right) \in \operatorname {K e r} (Q).
+$$
+
+Therefore, the quotient space $(\mathbb{R}^{d_1\times d_X},\mathbb{R}^{1\times d_1}) / \mathrm{Ker}(Q)$ characterizes the equivalence relation $\sim_R$ .
+
+The proof is completed.
+
+# B.5 LINEAR COLLAPSE.
+
+When there are no nonlinearities in the activations, there are no non-differentiable regions on the loss surface. In other words, the loss surface is a single smooth and multilinear cell.
\ No newline at end of file
diff --git a/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/images.zip b/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..afc2849cef6daf7dcc42c97b8593253d9ee7c393
--- /dev/null
+++ b/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:63f7b54fa55d873e4a770939a5db77977b04e66f4c1712082e0540b764135037
+size 2457444
diff --git a/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/layout.json b/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c425965511125f0e6ca6aa0e0f8eb5974cae9a19
--- /dev/null
+++ b/piecewiselinearactivationssubstantiallyshapethelosssurfacesofneuralnetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:40149952fba0beaf7f8c72e1122c1cb072a12e0e89d8f2a7e916bc6a444d1ae6
+size 1752650
diff --git a/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/d173709b-916d-4a52-a911-8f41117cf563_content_list.json b/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/d173709b-916d-4a52-a911-8f41117cf563_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f2cec3eae905516234ff90dedd6d1772b27e999d
--- /dev/null
+++ b/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/d173709b-916d-4a52-a911-8f41117cf563_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d5e63faadf4e94f25a434a55ee70012b6846ea75a6112fa5de2d9c92a3729d9b
+size 184813
diff --git a/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/d173709b-916d-4a52-a911-8f41117cf563_model.json b/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/d173709b-916d-4a52-a911-8f41117cf563_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f79ad608b2868a25a95ab9ae7c7c510162ec8040
--- /dev/null
+++ b/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/d173709b-916d-4a52-a911-8f41117cf563_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1314efa48765eeeae916302ef8a4507a5baa0a238087dce25484640ff79e842b
+size 208445
diff --git a/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/d173709b-916d-4a52-a911-8f41117cf563_origin.pdf b/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/d173709b-916d-4a52-a911-8f41117cf563_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ad7505d50dae605d5775e132c81806edd5995e70
--- /dev/null
+++ b/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/d173709b-916d-4a52-a911-8f41117cf563_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9260ecb72253738c30d8b672bf9860efe69893995fe78a0a1529281558878c8a
+size 4011484
diff --git a/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/full.md b/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4b4c07f24dccd51cc0ae91d042a4288c272679a0
--- /dev/null
+++ b/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/full.md
@@ -0,0 +1,625 @@
+# PITFALLS OF IN-DOMAIN UNCERTAINTY ESTIMATION AND ENSEMBLING IN DEEP LEARNING
+
+Arsenii Ashukha *
+
+Samsung AI Center Moscow, HSE‡
+aashukha@bayesgroup.ru
+
+Alexander Lyzhov *
+
+Samsung AI Center Moscow, Skoltech†, HSE§
+alyzhov@bayesgroup.ru
+
+Dmitry Molchanov *
+
+Samsung AI Center Moscow, HSE‡
+dmolch@bayesgroup.ru
+
+Dmitry Vetrov
+
+HSE; Samsung AI Center Moscow
+dvetrov@bayesgroup.ru
+
+# ABSTRACT
+
+Uncertainty estimation and ensembling methods go hand-in-hand. Uncertainty estimation is one of the main benchmarks for assessment of ensembling performance. At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation. In this work, we focus on in-domain uncertainty for image classification. We explore the standards for its quantification and point out pitfalls of existing metrics. Avoiding these pitfalls, we perform a broad study of different ensembling techniques. To provide more insight in this study, we introduce the deep ensemble equivalent score (DEE) and show that many sophisticated ensembling techniques are equivalent to an ensemble of only a few independently trained networks in terms of test performance.
+
+video / code / blog post
+
+# 1 INTRODUCTION
+
+Deep neural networks (DNNs) have become one of the most popular families of machine learning models. The predictive performance of DNNs for classification is often measured in terms of accuracy. However, DNNs have been shown to yield inaccurate and unreliable probability estimates, or predictive uncertainty (Guo et al., 2017). This has brought considerable attention to the problem of uncertainty estimation with deep neural networks.
+
+There are many faces to uncertainty estimation. Different desirable uncertainty estimation properties of a model require different settings and metrics to capture them. Out-of-domain uncertainty of the model is measured on the data that does not follow the same distribution as the training dataset (out-of-domain data). Out-of-domain data can include images corrupted with rotations or blurring, adversarial attacks (Szegedy et al., 2013) or data points from a completely different dataset. The model is expected to be resistant to data corruptions and to be more uncertain on out-of-domain data than on in-domain data. On the contrary, in-domain uncertainty of the model is measured on data taken from the training data distribution, i.e. data from the same domain. In this setting the model is expected to produce reliable probability estimates, e.g. the model shouldn't be too overconfident in its wrong predictions.
+
+Pitfalls of metrics We show that many common metrics of in-domain uncertainty estimation (e.g. log-likelihood, Brier score, calibration metrics, etc.) are either not comparable across different models or fail to provide a reliable ranking. We address some of the stated pitfalls and point out more reasonable evaluation schemes. For instance, although temperature scaling is not a standard for ensembling techniques, it is a must for a fair evaluation. With this in mind, the
+
+calibrated log-likelihood avoids most of the stated pitfalls and generally is a reasonable metric for in-domain uncertainty estimation task.
+
+Pitfalls of ensembles Equipped with the proposed evaluation framework, we revisit the evaluation of ensembles of DNNs—one of the major tools for uncertainty estimation. We introduce the deep ensemble equivalent (DEE) score that measures the number of independently trained models that, when ensembled, achieve the same performance as the ensembling technique of interest. The DEE score allows us to compare ensembling techniques across different datasets and architectures using a unified scale. Our study shows that most of the popular ensembling techniques require averaging predictions across dozens of samples (members of an ensemble), yet are essentially equivalent to an ensemble of only a few independently trained models.
+
+Missing part of ensembling In our study, test-time data augmentation (TTA) turned out to be a surprisingly strong baseline for uncertainty estimation and a simple way to improve ensembles. Despite being a popular technique in large-scale classification, TTA seems to be overlooked in the community of uncertainty estimation and ensembling.
+
+# 2 SCOPE OF THE PAPER
+
+We use standard benchmark problems of image classification, which comprise a common setting in research on learning ensembles of neural networks. There are other relevant settings where the correctness of probability estimates can be a priority, and ensembling techniques are used to improve it. These settings include, but are not limited to, regression, language modeling (Gal, 2016), image segmentation (Gustafsson et al., 2019), active learning (Settles, 2012) and reinforcement learning (Buckman et al., 2018; Chua et al., 2018).
+
+We focus on in-domain uncertainty, as opposed to out-of-domain uncertainty. Out-of-domain uncertainty includes detection of inputs that come from a completely different domain or have been corrupted by noise or adversarial attacks. This setting has been thoroughly explored by (Ovadia et al., 2019).
+
+We only consider methods that are trained on clean data with simple data augmentation. Some other methods use out-of-domain data (Malinin & Gales, 2018) or more elaborate data augmentation, e.g. mixup (Zhang et al., 2017) or adversarial training (Lakshminarayanan et al., 2017) to improve accuracy, robustness and uncertainty.
+
+We use conventional training procedures. We use stochastic gradient descent (SGD) and batch normalization (Ioffe & Szegedy, 2015), both being de-facto standards in modern deep learning. We refrain from using more elaborate optimization techniques, including works on superconvergence (Smith & Topin, 2019) and stochastic weight averaging (SWA) (Izmailov et al., 2018). These techniques can be used to drastically accelerate training and to improve predictive performance. Thus, we do not comment on the training time of different ensembling methods, since the use of these and other more efficient training techniques would render such a comparison obsolete.
+
+A number of related works study ways of approximating and accelerating prediction in ensembles. The distillation mechanism allows one to approximate the prediction of an ensemble with a single neural network (Hinton et al., 2015; Balan et al., 2015; Tran et al., 2020), whereas fast dropout (Wang & Manning, 2013) and deterministic variational inference (Wu et al., 2018) allow one to approximate the predictive distribution of specific stochastic computation graphs. We measure the raw power of ensembling techniques without these approximations.
+
+All of the aforementioned alternative settings are orthogonal to the scope of this paper and are promising points of interest for further research.
+
+# 3 PITFALLS OF IN-DOMAIN UNCERTAINTY ESTIMATION
+
+No single metric measures all the desirable properties of uncertainty estimates obtained by a model of interest. Because of this, the community is using many different metrics in an attempt to capture the quality of uncertainty estimation, such as the Brier score (Brier, 1950), log-likelihood (Quinonero-Candela et al., 2005), metrics of calibration (Guo et al., 2017; Nixon et al., 2019), performance of misclassification detection (Malinin & Gales, 2018), and threshold-accuracy curves
+
+
+Figure 1: The average log-likelihood of two different ensembling techniques for ResNet50 on ImageNet dataset before (solid) and after (dashed) temperature scaling. Without the temperature scaling, test-time data augmentation decreases the log-likelihood of plain deep ensembles. However, when the temperature scaling is enabled, deep ensembles with test-time data augmentation outperform plain deep ensembles.
+
+
+Figure 2: Thresholded adaptive calibration error (TACE) is highly sensitive to the threshold and the number of bins. It does not provide a consistent ranking of different ensembling techniques. Here TACE is reported for VGG16BN model on CIFAR-100 dataset and is evaluated at the optimal temperature.
+
+(Lakshminarayanan et al., 2017). In this section we highlight the pitfalls of the aforementioned metrics and demonstrate that these pitfalls can significantly affect evaluation, changing the ranking of the methods.
+
+Notation We consider a classification problem with a dataset that consists of $N$ training and $n$ testing pairs $(x_{i},y_{i}^{*})\sim p(x,y)$ , where $x_{i}$ is an object and $y_{i}^{*}\in \{1,\dots ,C\}$ is a discrete class label. A probabilistic classifier maps an object $x_{i}$ into a predictive distribution $\hat{p}(y|x_i)$ . The predictive distribution $\hat{p}(y|x_i)$ of a deep neural network is typically defined by the softmax function $\hat{p}(y|x) = \mathrm{Softmax}(z(x)/T)$ , where $z(x)$ is a vector of logits and $T$ is a scalar parameter standing for the temperature of the predictive distribution. This scalar parameter is usually set to $T = 1$ or is tuned on a validation set (Guo et al., 2017). The maximum probability $\max_c\hat{p} (y = c|x_i)$ is called the confidence of a classifier $\hat{p}$ on an object $x_{i}$ . $\mathbb{I}[\cdot ]$ denotes the indicator function throughout the text.
+
+# 3.1 LOG-LIKELIHOOD AND BRIER SCORE
+
+The average test log-likelihood $\mathrm{LL} = \frac{1}{n}\sum_{i = 1}^{n}\log \hat{p} (y = y_i^*\mid x_i)$ is a popular metric for measuring the quality of in-domain uncertainty of deep learning models. It directly penalizes high probability scores assigned to incorrect labels and low probability scores assigned to the correct labels $y_{i}^{*}$ .
+
+LL is sensitive to the softmax temperature $T$ . The temperature that has been implicitly learned during training can be far from optimal for the test data. However, a nearly optimal temperature can be found post-hoc by maximizing the log-likelihood on validation data. This approach is called temperature scaling or calibration (Guo et al., 2017). Despite its simplicity, the temperature scaling results in a notable improvement in the LL.
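Temperature scaling itself is a one-parameter optimization. A minimal numpy sketch (the logits, labels, and grid bounds below are illustrative placeholders, not values from the paper) of finding the temperature that maximizes the validation log-likelihood:

```python
import numpy as np

def log_likelihood(logits, labels, T):
    """Average log-likelihood of labels under Softmax(logits / T)."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)             # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels, grid=np.geomspace(0.1, 10.0, 200)):
    """Post-hoc temperature scaling: pick T maximizing validation LL."""
    lls = [log_likelihood(logits, labels, T) for T in grid]
    return float(grid[int(np.argmax(lls))])

# Toy usage: confidently scaled logits with many errors benefit from T > 1
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)
logits = 6.0 * rng.normal(size=(1000, 10))
logits[np.arange(1000), labels] += 2.0               # weak signal, high confidence
T_opt = fit_temperature(logits, labels)
```

A dense grid search is used here for self-containment; in practice a one-dimensional optimizer over $T$ works just as well.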
+
+While ensembling techniques tend to have a better temperature than single models, the default choice of $T = 1$ is still suboptimal. Comparing the LL at suboptimal temperatures, as often happens in practice, can produce an arbitrary ranking of different methods.
+
+Comparison of the log-likelihood should only be performed at the optimal temperature.
+
+Empirically, we demonstrate that the overall ordering of methods, and in particular the best ensembling method according to the LL, can vary with the temperature $T$ . While this applies to most ensembling techniques (see Figure 10), the effect is most noticeable in experiments with data augmentation on ImageNet (Figure 1).
+
+# We introduce a new metric called the calibrated log-likelihood that is the log-likelihood at the optimal temperature.
+
+The calibrated log-likelihood treats a model and its post-training calibration as a unified system, measuring all models under the equal conditions of optimal temperature. This avoids penalizing a model for calibration error that can be eliminated by simple temperature scaling. The metric significantly affects the results of the comparison. For example, in Figure 10 the differences between Bayesian (VI, K-FAC, SWAG, dropout) and conventional non-Bayesian networks become much less pronounced, and in most cases conventional non-Bayesian networks match the performance of Bayesian ones (VI, K-FAC, dropout) on ResNet110, ResNet164, and WideResNet.
+
+We show how to obtain an unbiased estimate of the calibrated log-likelihood without a held-out validation set in Section 3.5.
+
+LL also demonstrates a high correlation with accuracy ( $\rho > 0.86$ ), which becomes even stronger for the calibrated LL ( $\rho > 0.95$ ). This suggests that while the (calibrated) LL measures the uncertainty of the model, it still depends significantly on accuracy, and vice versa: a model with higher accuracy would likely have a higher log-likelihood. See Figure 9 in Appendix C for more details.
+
+Brier score $\mathrm{BS} = \frac{1}{n}\frac{1}{C}\sum_{i = 1}^{n}\sum_{c = 1}^{C}(\mathbb{I}[y_i^* = c] - \hat{p} (y = c\mid x_i))^2$ has also been known for a long time as a metric for verification of predicted probabilities (Brier, 1950). Similarly to the log-likelihood, the Brier score penalizes low probabilities assigned to correct predictions and high probabilities assigned to wrong ones. It is also sensitive to the temperature of the softmax distribution and behaves similarly to the log-likelihood. While these metrics are not strictly equivalent, they show a high empirical correlation for a wide range of models on CIFAR-10, CIFAR-100 and ImageNet datasets (see Figure 8 in Appendix C).
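+
+Under the definition above, the Brier score can be computed directly from the predicted class probabilities; a minimal sketch (names are illustrative):
+
+```python
+import numpy as np
+
+def brier_score(probs, labels):
+    """Brier score: squared error between the predicted class probabilities
+    and the one-hot encoding of the true labels, averaged over the n objects
+    and C classes."""
+    n, C = probs.shape
+    onehot = np.eye(C)[labels]
+    return float(np.mean((onehot - probs) ** 2))
+```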
+
+# 3.2 MISCLASSIFICATION DETECTION
+
+Detection of wrong predictions of the model, or misclassifications, is a popular downstream problem relevant to the problem of in-domain uncertainty estimation. Since misclassification detection is essentially a binary classification problem, some papers measure its quality using conventional metrics for binary classification such as AUC-ROC and AUC-PR (Malinin & Gales, 2018; Cui et al., 2019; Mozejko et al., 2018). These papers use an uncertainty criterion like confidence or predictive entropy $\mathcal{H}[\hat{p}(y|x_i)]$ as a prediction score.
+
+While these metrics can be used to assess the misclassification detection performance of a single model, they cannot be used to directly compare misclassification performance across different models. Correct and incorrect predictions are specific for every model, therefore, every model induces its own binary classification problem. The induced problems can differ significantly, since different models produce different confidences and misclassify different objects. In other words, comparing such metrics implies a comparison of performance of classifiers that solve different classification problems. Such metrics are therefore incomparable.
+
+# AUCs for misclassification detection cannot be directly compared between different models.
+
+While comparing AUCs is incorrect in the setting of misclassification detection, it is correct to compare these metrics in many out-of-domain data detection problems. In that case, both objects and targets of the induced binary classification problems remain the same for all models. All out-of-domain objects have a positive label and all in-domain objects have a negative label. Note that this condition does not necessarily hold in the problem of detection of adversarial attacks. Different models generally have different inputs after an adversarial attack, so such AUC-based metrics might still be flawed.
+
+# 3.3 CLASSIFICATION WITH REJECTION
+
+Accuracy-confidence curves are another way to measure the performance of misclassification detection. These curves measure the accuracy on the set of objects with confidence $\max_c\hat{p} (y = c|x_i)$ above a certain threshold $\tau$ (Lakshminarayanan et al., 2017) and ignoring or rejecting the others.
+
+The main problem with accuracy-confidence curves is that they rely too heavily on calibration and the actual values of confidence. Models with different temperatures have different numbers of objects at each confidence level, which does not allow for a meaningful comparison. To overcome this problem, one can switch from thresholding by the confidence level to thresholding by the number of rejected objects. The corresponding curves are then less sensitive to temperature scaling and thus allow one to compare the rejection ability in a more meaningful way. Such curves are known as accuracy-rejection curves (Nadeem et al., 2009). In order to obtain a scalar metric for easy comparison, one can compute the area under this curve, resulting in AU-ARC (Nadeem et al., 2009).
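+
+A rejection curve indexed by the number of rejected objects, together with its area, might be computed as follows (a sketch; names are illustrative):
+
+```python
+import numpy as np
+
+def accuracy_rejection_curve(confidences, correct):
+    """Entry k-1 is the accuracy on the k most confident objects, i.e. after
+    rejecting the n - k least confident ones."""
+    order = np.argsort(-np.asarray(confidences))  # most confident first
+    hits = np.asarray(correct, dtype=float)[order]
+    return np.cumsum(hits) / np.arange(1, len(hits) + 1)
+
+def au_arc(confidences, correct):
+    """Area under the accuracy-rejection curve (mean accuracy over all
+    rejection levels), a scalar summary as in Nadeem et al. (2009)."""
+    return float(accuracy_rejection_curve(confidences, correct).mean())
+```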
+
+# 3.4 CALIBRATION METRICS
+
+Informally speaking, a probabilistic classifier is calibrated if any predicted class probability is equal to the true class probability according to the underlying data distribution (see (Vaicenavicius et al., 2019) for formal definitions). Any deviation from the perfect calibration is called miscalibration. For brevity, we will use $\hat{p}_{i,c}$ to denote $\hat{p}(y = c \mid x_i)$ in the current section.
+
+Expected calibration error (ECE) (Naeini et al., 2015) is a metric that estimates model miscalibration by binning the assigned probability scores and comparing them to average accuracies inside these bins. Assuming $B_{m}$ denotes the $m$ -th bin and $M$ is the overall number of bins, the ECE is defined as follows:
+
+$$
+\mathrm {ECE} = \sum_ {m = 1} ^ {M} \frac {\left| B _ {m} \right|}{n} \left| \operatorname {acc} \left(B _ {m}\right) - \operatorname {conf} \left(B _ {m}\right) \right|, \tag {1}
+$$
+
+where $\mathrm{acc}(B) = |B|^{-1}\sum_{i\in B}\mathbb{I}[\arg \max_c\hat{p}_{i,c} = y_i^* ]$ and $\operatorname {conf}(B) = |B|^{-1}\sum_{i\in B}\max_c\hat{p}_{i,c}$ .
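+
+Equation 1 can be implemented with equal-width bins over the top-1 confidence; a minimal sketch (the bin count is a free parameter):
+
+```python
+import numpy as np
+
+def ece(probs, labels, n_bins=15):
+    """Expected calibration error (equation 1) with M equal-width bins over
+    the top-1 confidence max_c p(y = c | x)."""
+    conf = probs.max(axis=1)
+    correct = (probs.argmax(axis=1) == labels).astype(float)
+    edges = np.linspace(0.0, 1.0, n_bins + 1)
+    score = 0.0
+    for lo, hi in zip(edges[:-1], edges[1:]):
+        in_bin = (conf > lo) & (conf <= hi)
+        if in_bin.any():
+            # |B_m| / n  *  | acc(B_m) - conf(B_m) |
+            score += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
+    return float(score)
+```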
+
+A recent line of works on measuring calibration in deep learning (Vaicenavicius et al., 2019; Kumar et al., 2019; Nixon et al., 2019) outlines several problems with the ECE score. Firstly, ECE is a biased estimate of the true calibration. Secondly, ECE-like scores cannot be optimized directly since they are minimized by a model with constant uniform predictions, making the infinite temperature $T = +\infty$ their global optimum. Thirdly, ECE only estimates miscalibration in terms of the maximum assigned probability, whereas practical applications may require the full predicted probability vector to be calibrated. Finally, the biases of ECE on different models may not be equal, rendering the miscalibration estimates incomparable. Similar concerns are also discussed by Ding et al. (2019).
+
+Thresholded adaptive calibration error (TACE) was proposed as a step towards solving some of these problems (Nixon et al., 2019). TACE disregards all predicted probabilities that are less than a certain threshold (hence thresholded), chooses the bin locations adaptively so that each bin has the same number of objects (hence adaptive), and estimates miscalibration of probabilities across all classes in the prediction (not just the top-1 predicted class as in ECE). Assuming that $B_{m}^{\mathrm{TA}}$ denotes the $m$ -th thresholded adaptive bin and $M$ is the overall number of bins, TACE is defined as follows:
+
+$$
+\mathrm {TACE} = \frac {1}{C M} \sum_ {c = 1} ^ {C} \sum_ {m = 1} ^ {M} \frac {\left| B _ {m} ^ {\mathrm {TA}} \right|}{n} \left| \operatorname {objs} \left(B _ {m} ^ {\mathrm {TA}}, c\right) - \operatorname {conf} \left(B _ {m} ^ {\mathrm {TA}}, c\right) \right|, \tag {2}
+$$
+
+where $\mathrm{objs}(B^{\mathrm{TA}},c) = |B^{\mathrm{TA}}|^{-1}\sum_{i\in B^{\mathrm{TA}}}\mathbb{I}[y_i^* = c]$ and $\mathrm{conf}(B^{\mathrm{TA}},c) = |B^{\mathrm{TA}}|^{-1}\sum_{i\in B^{\mathrm{TA}}}\hat{p}_{i,c}$ .
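+
+A sketch of equation 2, with the threshold and the number of bins as the two free parameters whose sensitivity is discussed below:
+
+```python
+import numpy as np
+
+def tace(probs, labels, n_bins=15, threshold=1e-3):
+    """Thresholded adaptive calibration error (equation 2): per class, drop
+    probabilities below `threshold`, sort the rest into M equal-mass bins,
+    and compare the label frequency objs(B, c) with the mean predicted
+    probability conf(B, c) in every bin."""
+    n, C = probs.shape
+    total = 0.0
+    for c in range(C):
+        keep = probs[:, c] >= threshold
+        order = np.argsort(probs[keep, c])
+        p = probs[keep, c][order]
+        y = (labels[keep] == c).astype(float)[order]
+        for idx in np.array_split(np.arange(len(p)), n_bins):
+            if len(idx) > 0:
+                total += (len(idx) / n) * abs(y[idx].mean() - p[idx].mean())
+    return float(total / (C * n_bins))
+```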
+
+Although TACE does solve several problems of ECE and is useful for measuring calibration of a specific model, it still cannot be used as a reliable criterion for comparing different models. Theory suggests that it is still a biased estimate of true calibration with different bias for each model (Vaicenavicius et al., 2019). In practice, we find that TACE is sensitive to its two parameters, the number of bins and the threshold, and does not provide a consistent ranking of different models, as shown in Figure 2.
+
+# 3.5 CALIBRATED LOG-LIKELIHOOD AND TEST-TIME CROSS-VALIDATION
+
+There are two common ways to perform temperature scaling using a validation set when training on datasets that only feature public training and test sets (e.g. the CIFARs). The public training set might be divided into a smaller training set and a validation set, or the public test set can be split into test and validation parts (Guo et al., 2017; Nixon et al., 2019). The problem with the first method is that the resulting models cannot be directly compared with all the other models that have been trained on the full training set. The second approach, however, provides an unbiased estimate of metrics such as log-likelihood and Brier score but introduces more variance.
+
+In order to reduce the variance of the second approach, we perform a "test-time cross-validation". We randomly divide the test set into two equal parts, then compute metrics for each half of the test set using the temperature optimized on the other half. We repeat this procedure five times and average the results across different random partitions to reduce the variance of the computed metrics.
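+
+This procedure might look as follows (a sketch; the temperature search mirrors standard temperature scaling, and all names are illustrative):
+
+```python
+import numpy as np
+from scipy.special import logsumexp
+from scipy.optimize import minimize_scalar
+
+def _log_likelihood(logits, labels, T):
+    log_probs = logits / T - logsumexp(logits / T, axis=1, keepdims=True)
+    return log_probs[np.arange(len(labels)), labels].mean()
+
+def test_time_cross_validation(logits, labels, n_repeats=5, seed=0):
+    """Calibrated LL via test-time cross-validation: optimize T on one random
+    half of the test set, evaluate the LL on the other half, and average over
+    both halves and `n_repeats` random partitions."""
+    rng = np.random.default_rng(seed)
+    scores = []
+    for _ in range(n_repeats):
+        idx = rng.permutation(len(labels))
+        half_a, half_b = idx[: len(idx) // 2], idx[len(idx) // 2:]
+        for fit, evaluate in [(half_a, half_b), (half_b, half_a)]:
+            T = minimize_scalar(
+                lambda t: -_log_likelihood(logits[fit], labels[fit], t),
+                bounds=(1e-2, 1e2), method="bounded").x
+            scores.append(_log_likelihood(logits[evaluate], labels[evaluate], T))
+    return float(np.mean(scores))
+```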
+
+# 4 A STUDY OF ENSEMBLING & DEEP ENSEMBLE EQUIVALENT
+
+Ensembles of deep neural networks have become a de-facto standard for uncertainty estimation and improving the quality of deep learning models (Hansen & Salamon, 1990; Krizhevsky et al., 2009; Lakshminarayanan et al., 2017). There are two main directions of training ensembles of DNNs: training stochastic computation graphs and obtaining separate snapshots of neural network parameters.
+
+Methods based on the paradigm of stochastic computation graphs introduce some kind of random noise over the weights or activations of deep learning models. When the model is trained, each sample of the noise corresponds to a member of the ensemble. During test time, the predictions are averaged across the noise samples. These methods include (test-time) data augmentation, dropout (Srivastava et al., 2014; Gal & Ghahramani, 2016), variational inference (Blundell et al., 2015; Kingma et al., 2015; Louizos & Welling, 2017), batch normalization (Ioffe & Szegedy, 2015; Teye et al., 2018; Atanov et al., 2019), Laplace approximation (Ritter et al., 2018) and many more.
+
+Snapshot-based methods aim to obtain sets of weights for deep learning models and then to average the predictions across these weights. The weights can be trained independently (e.g. deep ensembles (Lakshminarayanan et al., 2017)), collected on different stages of a training trajectory (e.g. snapshot ensembles (Huang et al., 2017) and fast geometric ensembles (Garipov et al., 2018)), or obtained from a sampling process (e.g. MCMC-based methods (Welling & Teh, 2011; Zhang et al., 2019)). These two paradigms can be combined. Some works suggest construction of ensembles of stochastic computation graphs (Tomczak et al., 2018), while others make use of the collected snapshots to construct a stochastic computation graph (Wang et al., 2018; Maddox et al., 2019).
+
+In this paper we consider the following ensembling techniques: deep ensembles (Lakshminarayanan et al., 2017), snapshot ensembles (SSE by Huang et al. (2017)), fast geometric ensembling (FGE by Garipov et al. (2018)), SWA-Gaussian (SWAG by Maddox et al. (2019)), cyclical SGLD (cSGLD by Zhang et al. (2019)), variational inference (VI by Blundell et al. (2015)), K-FAC Laplace approximation (Ritter et al., 2018), dropout (Srivastava et al., 2014) and test-time data augmentation (Krizhevsky et al., 2009). These techniques were chosen to cover a diverse set of approaches keeping their predictive performance in mind.
+
+All these techniques can be summarized as distributions $q_{m}(\omega)$ over parameters $\omega$ of computation graphs, where $m$ stands for the technique. During testing, one can average the predictions across parameters $\omega \sim q_{m}(\omega)$ to approximate the predictive distribution
+
+$$
+\hat {p} \left(y _ {i} \mid x _ {i}\right) \approx \int p \left(y _ {i} \mid x _ {i}, \omega\right) q _ {m} (\omega) d \omega \simeq \frac {1}{K} \sum_ {k = 1} ^ {K} p \left(y _ {i} \mid x _ {i}, \omega_ {k}\right), \quad \omega_ {k} \sim q _ {m} (\omega) \tag {3}
+$$
+
+For example, a deep ensemble of $S$ networks can be represented in this form as a mixture of $S$ Dirac's deltas $q_{\mathrm{DE}}(\omega) = \frac{1}{S}\sum_{s=1}^{S}\delta(\omega - \omega_s)$ , centered at independently trained snapshots $\omega_s$ . Similarly, a Bayesian neural network with a fully-factorized Gaussian approximate posterior distribution over the weight matrices and convolutional kernels $\omega$ is represented as $q_{\mathrm{VI}}(\omega) = \mathcal{N}(\omega \mid \mu, \mathrm{diag}(\sigma^2))$ , $\mu$ and $\sigma^2$ being the optimal variational means and variances respectively.
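+
+The Monte Carlo average in equation 3 amounts to the following, where `forward` and `sample_weights` stand in for the model and for sampling $\omega \sim q_m(\omega)$ (both hypothetical placeholders):
+
+```python
+import numpy as np
+
+def softmax(z):
+    z = z - z.max(axis=-1, keepdims=True)  # numerically stabilized
+    e = np.exp(z)
+    return e / e.sum(axis=-1, keepdims=True)
+
+def ensemble_predictive(x, sample_weights, forward, K):
+    """K-sample Monte Carlo estimate of the predictive distribution
+    (equation 3): average the softmax outputs over omega_k ~ q_m(omega)."""
+    probs = [softmax(forward(x, sample_weights())) for _ in range(K)]
+    return np.mean(probs, axis=0)
+```
+
+For a deep ensemble, `sample_weights` would cycle through the $S$ trained snapshots; for variational inference, it would draw from $\mathcal{N}(\mu, \mathrm{diag}(\sigma^2))$.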
+
+If one considers data augmentation as a part of the computational graph, it can be parameterized by the coordinates of the random crop and the flag for whether to flip the image horizontally or not. Sampling from the corresponding $q_{\mathrm{aug}}(\omega)$ would generate different ways to augment the data during inference. However, as data augmentation is present by default during the training of all the
+
+
+Figure 3: The deep ensemble equivalent score (DEE) for different numbers of samples on CIFAR-10, CIFAR-100, and ImageNet datasets, averaged across different deep convolutional architectures. The deep ensemble equivalent score (DEE) of a model is the minimum size of a deep ensemble (an ensemble of independently trained networks) that achieves the same performance as the model under consideration. The score is measured in the number of models (higher is better). The area between the average lower and upper bounds of DEE is shaded. The plot demonstrates that all of the ensembling techniques are far less efficient than deep ensembles during inference and fail to produce the same level of performance as deep ensembles. A comparison normalized by training time is presented in Appendix A.
+
+mentioned ensembling techniques, it is suitable to study it in combination with these methods and not as a separate ensembling technique. We perform such an evaluation in Section 4.3.
+
+Typically, the approximation (equation 3) requires $K$ independent forward passes through a neural network, making the test-time budget directly comparable across all methods.
+
+# 4.1 DEEP ENSEMBLE EQUIVALENT
+
+Most ensembling techniques under consideration are either bounded to a single mode or provide positively correlated samples. Deep ensembles, on the other hand, are a simple technique that provides independent samples from different modes of the loss landscape, which, intuitively, should result in a better ensemble. Deep ensembles can therefore be considered a strong baseline for the performance of other ensembling techniques given a fixed test-time computation budget.
+
+Comparing the performance of ensembling techniques is, however, a challenging problem. Different models on different datasets achieve different values of metrics; their dependence on the number of samples is non-trivial and varies depending on the specific model and dataset. The values of the metrics thus lack interpretability, as the gain in performance has to be compared against a model- and dataset-specific baseline.
+
+Aiming to introduce perspective and interpretability in our study, we introduce the deep ensemble equivalent score that employs deep ensembles to measure the performance of other ensembling techniques. Specifically, the deep ensemble equivalent score answers the following question:
+
+What size of deep ensemble yields the same performance as a particular ensembling method?
+
+Following the insights from the previous sections, we base the deep ensemble equivalent on the calibrated log-likelihood (CLL). Formally speaking, we define the deep ensemble equivalent (DEE) for an ensembling method $m$ and its upper and lower bounds as follows:
+
+$$
+\mathrm {DEE} _ {m} (k) = \min \left\{l \in \mathbb {R}, l \geq 1 \mid \mathrm {CLL} _ {DE} ^ {\text {mean}} (l) \geq \mathrm {CLL} _ {m} ^ {\text {mean}} (k) \right\}, \tag {4}
+$$
+
+$$
+\mathrm {DEE} _ {m} ^ {\text {upper/lower}} (k) = \min \left\{l \in \mathbb {R}, l \geq 1 \mid \mathrm {CLL} _ {DE} ^ {\text {mean}} (l) \mp \mathrm {CLL} _ {DE} ^ {\text {std}} (l) \geq \mathrm {CLL} _ {m} ^ {\text {mean}} (k) \right\}, \tag {5}
+$$
+
+
+Figure 4: An illustration of test-time augmentation (TTA) for an ensemble. We apply every member of an ensemble to a separate random augmentation of an image. The predictions of all members are averaged to produce a final prediction of an ensemble. In our experiments, TTA leads to a significant boost of the performance for most of the ensembling techniques on ImageNet with a sufficient computational budget (see Figure 5).
+
+where $\mathrm{CLL}_m^{\mathrm{mean / std}}(l)$ are the mean and the standard deviation of the calibrated log-likelihood achieved by an ensembling method $m$ with $l$ samples. We compute $\mathrm{CLL}_{\mathrm{DE}}^{\mathrm{mean}}(l)$ and $\mathrm{CLL}_{\mathrm{DE}}^{\mathrm{std}}(l)$ for natural numbers $l\in \mathbb{N}_{>0}$ and use linear interpolation to define them for real values $l\geq 1$ . In the following plots we report $\mathrm{DEE}_m(k)$ for different methods $m$ with different numbers of samples $k$ , and shade the area between the respective lower and upper bounds $\mathrm{DEE}_m^{\mathrm{lower}}(k)$ and $\mathrm{DEE}_m^{\mathrm{upper}}(k)$ .
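+
+Given the mean CLL of deep ensembles of sizes $1 \dots L$, the DEE of a method can be sketched via the linear interpolation described above (names are illustrative; the saturation behavior at the boundaries is our own simplifying assumption, and the deep ensemble CLL is assumed non-decreasing in the ensemble size):
+
+```python
+import numpy as np
+
+def dee(cll_de_mean, cll_method):
+    """Deep ensemble equivalent: the smallest interpolated ensemble size
+    l >= 1 with CLL_DE^mean(l) >= cll_method. cll_de_mean[i] is the mean
+    CLL of a deep ensemble with i + 1 members."""
+    cll_de_mean = np.asarray(cll_de_mean, dtype=float)
+    sizes = np.arange(1, len(cll_de_mean) + 1, dtype=float)
+    if cll_method <= cll_de_mean[0]:
+        return 1.0
+    if cll_method >= cll_de_mean[-1]:
+        return float(sizes[-1])  # saturated at the largest measured ensemble
+    return float(np.interp(cll_method, cll_de_mean, sizes))
+```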
+
+# 4.2 EXPERIMENTS
+
+We compute the deep ensemble equivalent (DEE) of various ensembling techniques for four popular deep architectures: VGG16 (Simonyan & Zisserman, 2014), PreResNet110/164 (He et al., 2016), and WideResNet28x10 (Zagoruyko & Komodakis, 2016) on the CIFAR-10/100 datasets (Krizhevsky et al., 2009), and ResNet50 (He et al., 2016) on the ImageNet dataset (Russakovsky et al., 2015). We use PyTorch (Paszke et al., 2017) for the implementation of these models, building upon available public implementations. Our implementation closely matches the quality of the methods reported in the original works. Technical details on training, hyperparameters and implementations can be found in Appendix B. The source code and all computed metrics are available on GitHub1.
+
+As one can see in Figure 3, ensembling methods clearly fall into three categories. SSE and cSGLD outperform all other techniques except deep ensembles and enjoy near-linear scaling of DEE with the number of samples on the CIFAR datasets. The investigation of weight-space trajectories of cSGLD and SSE (Huang et al., 2017; Zhang et al., 2019) suggests that these methods can efficiently explore different modes of the loss landscape. In terms of the deep ensemble equivalent, these methods do not saturate, unlike other methods that are bound to a single mode. We found that SSE still saturates on ImageNet, likely due to suboptimal hyperparameters of the cyclic learning rate schedule. More detailed results are presented in Figures 11-13 and in Tables 5 and 8 in Appendix C.
+
+In our experiments SSE typically outperforms cSGLD. This is mostly due to the fact that SSE has a much larger training budget. The cycle lengths and learning rates of SSE and cSGLD are comparable, however, SSE collects one snapshot per cycle while cSGLD collects three snapshots. This makes samples from SSE less correlated with each other while increasing the training budget threefold. Both SSE and cSGLD can be adjusted to obtain a different trade-off between the training budget and the DEE-to-samples ratio. We reused the schedules provided in the original papers (Huang et al., 2017; Zhang et al., 2019).
+
+
+Figure 5: How to read results: $\times \xrightarrow{\text{test-time aug}} \bullet$ . The negative calibrated log-likelihood (lower is better) for different ensembling techniques on ImageNet. We report performance for two regimes. Central-crop evaluation ( $\times$ ) means every member of an ensemble is applied to a central crop of an image, and test-time data augmentation ( $\bullet$ ) means each member of the ensemble is applied to a separate random augmentation of the image. Test-time data augmentation significantly improves ensembles at no additional computational cost. Interestingly, a single model with TTA performs competitively with methods that require a significantly larger parameter budget and training complexity; e.g., a single model with TTA performs close to pure deep ensembles (DE) of the same size.
+
+Being more "local" methods, FGE and SWAG perform worse than SSE and cSGLD, but still significantly outperform "single-snapshot" methods like dropout, K-FAC Laplace approximation and variational inference. We hypothesize that by covering a single mode with a set of snapshots, FGE and SWAG provide a better fit for the local geometry than models trained as stochastic computation graphs. This implies that the performance of FGE and SWAG should be achievable by single-snapshot methods. However, one might need more elaborate posterior approximations and better inference techniques in order to match the performance of FGE and SWAG by training a stochastic computation graph end-to-end (as opposed to SWAG that constructs a stochastic computation graph post-hoc).
+
+The deep ensemble equivalent curves allow us to notice the common behaviour of different methods, e.g. the relation between deep ensembles, snapshot methods, advanced local methods and single-snapshot local methods. They also allow us to notice inconsistencies that may indicate a suboptimal choice of hyperparameters. For example, we find that SSE on ImageNet quickly saturates, unlike SSE on CIFAR datasets (Figure 3). This may indicate that the hyperparameters used on ImageNet are not good enough for efficient coverage of different modes of the loss landscape. We also find that SSE on WideResNet on CIFAR-10 achieves a DEE score of 100 on approx. 70 samples (Figure 12). This may indicate that the members of the deep ensemble for this dataset-architecture pair are underfitted and may benefit from longer training or a different learning rate schedule. Such inconsistencies might be more difficult to spot using plain calibrated log-likelihood plots.
+
+# 4.3 TEST-TIME DATA AUGMENTATION IMPROVES ENSEMBLES FOR FREE
+
+Data augmentation is a time-honored technique that is widely used in deep learning and a crucial component for training modern DNNs. Test-time data augmentation has long been used to improve the performance of convolutional networks. For example, multi-crop evaluation has been a standard procedure for the ImageNet challenge (Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2016). It is, however, often overlooked in the literature on ensembling techniques in deep learning. In this section, we study the effect of test-time data augmentation on the aforementioned ensembling techniques. To keep the test-time computation budget the same, we sample one random augmentation for each member of an ensemble. Figure 5 reports the calibrated log-likelihood for the combination of ensembles and test-time data augmentation on ImageNet. Other metrics and results on the CIFAR-10/100 datasets are reported in Appendix C. We have used standard data augmentation: random horizontal flips and random padded crops for the CIFAR-10/100 datasets, and random horizontal flips and random resized crops for ImageNet (see more details in Appendix B).
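+
+The scheme in Figure 4 can be sketched as follows, with a minimal CIFAR-style flip-and-crop augmentation (all names are illustrative; `members` stands in for the ensemble's predictive functions):
+
+```python
+import numpy as np
+
+def random_flip_crop(img, pad=4, rng=None):
+    """Random horizontal flip plus a random crop from a zero-padded image;
+    `img` has shape (H, W, C), and the output shape is unchanged."""
+    rng = rng if rng is not None else np.random.default_rng()
+    if rng.random() < 0.5:
+        img = img[:, ::-1]
+    h, w = img.shape[:2]
+    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)))
+    top, left = rng.integers(0, 2 * pad + 1, size=2)
+    return padded[top:top + h, left:left + w]
+
+def tta_ensemble_probs(img, members, rng=None):
+    """One random augmentation per ensemble member, predictions averaged;
+    the test-time budget stays at one forward pass per member."""
+    rng = rng if rng is not None else np.random.default_rng()
+    return np.mean([f(random_flip_crop(img, rng=rng)) for f in members], axis=0)
+```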
+
+Test-time data augmentation (Figure 4) consistently improves most ensembling methods, especially on ImageNet, where we see a clear improvement across all methods (Figure 5 and Table 7). The performance gain for powerful ensembles (deep ensembles, SSE and cSGLD) on the CIFAR datasets is not as dramatic (Figures 14-15 and Table 4). This is likely due to the fact that CIFAR images are small, which limits data augmentation, whereas ImageNet images allow for a large number of diverse augmented samples. On the other hand, while the performance of "single-snapshot" methods (e.g. variational inference, K-FAC Laplace and dropout) improves significantly, they perform approximately as well as an augmented version of a single model across all datasets.
+
+Interestingly, test-time data augmentation on ImageNet improves accuracy but decreases the uncalibrated log-likelihood of deep ensembles (Table 7 in Appendix C). Test-time data augmentation breaks the nearly optimal temperature of deep ensembles and requires temperature scaling to reveal the actual performance of the method, as discussed in Section 3.1. The experiment demonstrates that ensembles may be highly miscalibrated by default while still providing superior predictive performance after calibration.
+
+We would like to note that test-time data augmentation does not always break the calibration of an ensemble; on the contrary, it often improves it. In our experiments, decalibration was caused by the extreme magnitude of the random crop that is conventionally used for ImageNet augmentation. Using a less extreme magnitude of the random crop fixes decalibration, which makes test-time data augmentation a more practical method that provides out-of-the-box calibration. As we demonstrated earlier, however, there is no guarantee that any ensemble is calibrated out-of-the-box, and if we are willing to apply post-hoc calibration, the final performance can be much better with more severe augmentations.
+
+# 5 DISCUSSION & CONCLUSION
+
+We have explored the field of in-domain uncertainty estimation and performed an extensive evaluation of modern ensembling techniques. Our main findings can be summarized as follows:
+
+- Temperature scaling is a must even for ensembles. While ensembles generally have better calibration out-of-the-box, they are not calibrated perfectly and can benefit from the procedure. A comparison of log-likelihoods of different ensembling methods without temperature scaling might not provide a fair ranking, especially if some models happen to be miscalibrated.
+- Many common metrics for measuring in-domain uncertainty are either unreliable (ECE and analogues) or cannot be used to compare different methods (AUC-ROC, AUC-PR for misclassification detection; accuracy-confidence curves). In order to perform a fair comparison of different methods, one needs to be cautious of these pitfalls.
+- Many popular ensembling techniques require dozens of samples for test-time averaging, yet are essentially equivalent to a handful of independently trained models. Deep ensembles dominate other methods given a fixed test-time budget. The results indicate, in particular, that exploration of different modes in the loss landscape is crucial for good predictive performance.
+- Methods that are stuck in a single mode are unable to compete with methods that are designed to explore different modes of the loss landscape. Would more elaborate posterior approximations and better inference techniques shorten this gap?
+- Test-time data augmentation is a surprisingly strong baseline for in-domain uncertainty estimation. It can significantly improve other methods without increasing training time or model size since data augmentation is usually already present during training.
+
+Our takeaways are aligned with the take-home messages of Ovadia et al. (2019) that relate to in-domain uncertainty estimation. We also observe a stable ordering of different methods in our experiments, and observe that deep ensembles with few members outperform methods based on stochastic computation graphs.
+
+A large number of unreliable metrics inhibits a fair comparison of different methods. Because of this, we urge the community to aim for more reliable benchmarks in the numerous setups of uncertainty estimation.
+
+# ACKNOWLEDGMENTS
+
+Dmitry Vetrov and Dmitry Molchanov were supported by the Russian Science Foundation grant no. 19-71-30020. This research was supported in part through computational resources of HPC facilities at NRU HSE.
+
+# REFERENCES
+
+Andrei Atanov, Armenii Ashukha, Dmitry Molchanov, Kirill Neklyudov, and Dmitry Vetrov. Uncertainty estimation via stochastic batch normalization. In International Symposium on Neural Networks, pp. 261-269. Springer, 2019.
+Anoop Korattikara Balan, Vivek Rathod, Kevin P Murphy, and Max Welling. Bayesian dark knowledge. In Advances in Neural Information Processing Systems, pp. 3438-3446, 2015.
+Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. arXiv preprint arXiv:1505.05424, 2015.
+Glenn W Brier. Verification of forecasts expressed in terms of probability. Monthly weather review, 78(1):1-3, 1950.
+Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, and Honglak Lee. Sample-efficient reinforcement learning with stochastic ensemble value expansion. In Advances in Neural Information Processing Systems, pp. 8224-8234, 2018.
+Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, pp. 4754-4765, 2018.
+Yufei Cui, Wuguannan Yao, Qiao Li, Antoni B Chan, and Chun Jason Xue. Accelerating monte carlo bayesian inference via approximating predictive uncertainty over simplex. arXiv preprint arXiv:1905.12194, 2019.
+Yukun Ding, Jinglan Liu, Jinjun Xiong, and Yiyu Shi. Evaluation of neural network uncertainty estimation with application to resource-constrained platforms. arXiv preprint arXiv:1903.02050, 2019.
+Yarin Gal. Uncertainty in deep learning. PhD thesis, University of Cambridge, 2016.
+Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pp. 1050-1059, 2016.
+Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P Vetrov, and Andrew G Wilson. Loss surfaces, mode connectivity, and fast ensembling of dnns. In Advances in Neural Information Processing Systems, pp. 8789-8798, 2018.
+Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1321-1330. JMLR.org, 2017.
+Fredrik K Gustafsson, Martin Danelljan, and Thomas B Schon. Evaluating scalable bayesian deep learning methods for robust computer vision. arXiv preprint arXiv:1906.01620, 2019.
+Lars Kai Hansen and Peter Salamon. Neural network ensembles. IEEE Transactions on Pattern Analysis & Machine Intelligence, (10):993-1001, 1990.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
+
+Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
+Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E Hopcroft, and Kilian Q Weinberger. Snapshot ensembles: Train 1, get m for free. arXiv preprint arXiv:1704.00109, 2017.
+Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
+Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. arXiv preprint arXiv:1803.05407, 2018.
+Durk P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pp. 2575-2583, 2015.
+Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
+Ananya Kumar, Percy S Liang, and Tengyu Ma. Verified uncertainty calibration. In Advances in Neural Information Processing Systems, pp. 3787-3798, 2019.
+Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, pp. 6402-6413, 2017.
+Christos Louizos and Max Welling. Multiplicative normalizing flows for variational bayesian neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2218-2227. JMLR.org, 2017.
+Wesley Maddox, Timur Garipov, Pavel Izmailov, Dmitry Vetrov, and Andrew Gordon Wilson. A simple baseline for bayesian uncertainty in deep learning. arXiv preprint arXiv:1902.02476, 2019.
+Andrey Malinin and Mark Gales. Predictive uncertainty estimation via prior networks. In Advances in Neural Information Processing Systems, pp. 7047-7058, 2018.
+James Martens and Roger Grosse. Optimizing neural networks with kronecker-factored approximate curvature. In International conference on machine learning, pp. 2408-2417, 2015.
+Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2498-2507. JMLR.org, 2017.
+Marcin Mozejko, Mateusz Susik, and Rafal Karczewski. Inhibited softmax for uncertainty estimation in neural networks. arXiv preprint arXiv:1810.01861, 2018.
+Malik Sajjad Ahmed Nadeem, Jean-Daniel Zucker, and Blaise Hanczar. Accuracy-rejection curves (arcs) for comparing classification methods with a reject option. In Machine Learning in Systems Biology, pp. 65-81, 2009.
+Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
+Jeremy Nixon, Mike Dusenberry, Linchuan Zhang, Ghassen Jerfel, and Dustin Tran. Measuring calibration in deep learning. 2019.
+Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D Sculley, Sebastian Nowozin, Joshua V Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. arXiv preprint arXiv:1906.02530, 2019.
+Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop, 2017.
+
+Joaquin Quinonero-Candela, Carl Edward Rasmussen, Fabian Sinz, Olivier Bousquet, and Bernhard Schölkopf. Evaluating predictive uncertainty challenge. In Machine Learning Challenges Workshop, pp. 1-27. Springer, 2005.
+Hippolyt Ritter, Aleksandar Botev, and David Barber. A scalable laplace approximation for neural networks. 2018.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211-252, 2015.
+Burr Settles. Active learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 6 (1):1-114, 2012.
+Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
+Leslie N Smith and Nicholay Topin. Super-convergence: Very fast training of neural networks using large learning rates. In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, volume 11006, pp. 1100612. International Society for Optics and Photonics, 2019.
+Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. How to train deep variational autoencoders and probabilistic ladder networks. In 33rd International Conference on Machine Learning (ICML 2016), 2016.
+Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958, 2014.
+Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
+Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dimitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1-9, 2015.
+Mattias Teye, Hossein Azizpour, and Kevin Smith. Bayesian uncertainty estimation for batch normalized deep networks. In International Conference on Machine Learning (ICML), 2018.
+Marcin B Tomczak, Siddharth Swaroop, and Richard E Turner. Neural network ensembles and variational inference revisited. In 1st Symposium on Advances in Approximate Bayesian Inference, pp. 1-11, 2018.
+Linh Tran, Bastiaan S Veeling, Kevin Roth, Jakub Swiatkowski, Joshua V Dillon, Jasper Snoek, Stephan Mandt, Tim Salimans, Sebastian Nowozin, and Rodolphe Jenatton. Hydra: Preserving ensemble diversity for model distillation. arXiv preprint arXiv:2001.04694, 2020.
+Karen Ullrich, Edward Meeds, and Max Welling. Soft weight-sharing for neural network compression. arXiv preprint arXiv:1702.04008, 2017.
+Juozas Vaicenavicius, David Widmann, Carl Andersson, Fredrik Lindsten, Jacob Roll, and Thomas B Schon. Evaluating model calibration in classification. arXiv preprint arXiv:1902.06977, 2019.
+Kuan-Chieh Wang, Paul Vicol, James Lucas, Li Gu, Roger Grosse, and Richard Zemel. Adversarial distillation of bayesian neural network posteriors. In International Conference on Machine Learning, pp. 5177-5186, 2018.
+Sida Wang and Christopher Manning. Fast dropout training. In international conference on machine learning, pp. 118-126, 2013.
+Max Welling and Yee W Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th international conference on machine learning (ICML-11), pp. 681-688, 2011.
+
+Anqi Wu, Sebastian Nowozin, Edward Meeds, Richard E Turner, José Miguel Hernández-Lobato, and Alexander L Gaunt. Deterministic variational inference for robust bayesian neural networks. 2018.
+Sergey Zagoruyko. 92.45 on cifar-10 in torch, 2015. URL http://torch.ch/blog/2015/07/30/cifar.html.
+Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
+Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
+Ruqi Zhang, Chunyuan Li, Jianyi Zhang, Changyou Chen, and Andrew Gordon Wilson. Cyclical stochastic gradient mcmc for bayesian deep learning. arXiv preprint arXiv:1902.03932, 2019.
+
+
+Figure 6: The mean cost of training of an ensemble vs. the mean deep ensemble equivalent score. Each marker on the plot denotes one snapshot of weights.
+
+
+Figure 7: Cost of training of an ensemble vs. its quality (DEE). Each marker on a plot denotes one snapshot of weights.
+
+# A IS "EFFICIENT" TRAINING OF ENSEMBLES EFFICIENT AT ALL?
+
+Yes, if you care most about training-time efficiency. All snapshot-based methods (SSE, cSGLD and FGE) on average (Figure 6) achieve the same performance as deep ensembles using $2\text{-}5\times$ fewer training epochs on CIFAR datasets.
+
+The gain comes at the cost of inference efficiency and memory consumption. "Efficient" snapshot-based methods need to store many more weight samples than deep ensembles, making inference significantly more expensive (up to $25\times$) at the same predictive performance.
+
+You need to get lucky with the hyperparameter choice. While on average "efficient" snapshot-based methods require fewer training resources, they may be completely inefficient if the hyperparameter choice is sub-optimal (see Figure 7). Hyperparameters such as the maximum learning rate, the length of the learning rate decay cycle and the snapshot saving schedule can all significantly impact performance.
+
+This analysis assumes a conventional training procedure. Many models can most likely be trained, stored and executed much more efficiently with methods like super-convergence, stochastic weight averaging, compression and distillation. These methods are out of the scope of this paper but are interesting topics for future research. The current choice of hyperparameters may also be sub-optimal; we reuse the hyperparameters from the original papers.
+
+# B EXPERIMENTAL DETAILS
+
+Implementations of deep ensembles, SWAG, FGE and K-FAC Laplace are heavily based on the original PyTorch implementations of stochastic weight averaging (SWA) $^{2}$ and SWAG $^{3}$ . Implementations of cyclical MCMC and snapshot ensembles are based on the original implementation of cyclical MCMC $^{4}$ . We hypothesize that the optimal hyperparameters of ensembling methods may vary widely depending on the computational budget and the number of samples in the ensemble. Searching for the optimal values for each configuration is outside the scope of this paper so we stick to the originally proposed hyperparameters whenever possible.
+
+Implied probabilistic model Conventional neural networks for classification are usually trained using the average cross-entropy loss function with weight decay regularization hidden inside an optimizer in a deep learning framework like PyTorch. The underlying optimization problem can be written as follows:
+
+$$
+L(w) = -\frac{1}{N} \sum_{i=1}^{N} \log \hat{p}\left(y_i^* \mid x_i, w\right) + \frac{\lambda}{2} \|w\|^2 \rightarrow \min_w \tag{6}
+$$
+
+where $\{(x_i, y_i^*)\}_{i=1}^N$ is the training dataset of $N$ objects $x_i$ with corresponding labels $y_i^*$ , $\lambda$ is the weight decay scale and $\hat{p}(j \mid x_i, w)$ denotes the probability that a neural network with parameters $w$ assigns to class $j$ when evaluated on object $x_i$ .
+
+The cross-entropy loss defines the likelihood function $p(y^{*} \mid x, w)$, and weight decay, or $L_{2}$ regularization, corresponds to a certain Gaussian prior distribution $p(w)$. The whole optimization objective then corresponds to maximum a posteriori inference in the following probabilistic model:
+
+$$
+p\left(y^*, w \mid x\right) = p\left(y^* \mid x, w\right) p(w), \tag{7}
+$$
+
+$$
+\log p\left(y^* \mid x, w\right) = \log \prod_{i=1}^{N} p\left(y_i^* \mid x_i, w\right) = \sum_{i=1}^{N} \log \hat{p}\left(y_i^* \mid x_i, w\right), \tag{8}
+$$
+
+$$
+\log p(w) = -\frac{N\lambda}{2} \|w\|^2 + \mathrm{const} \iff p(w) = \mathcal{N}\left(w \mid 0, (N\lambda)^{-1} I\right) \tag{9}
+$$
+
+In order to make the results comparable across all ensembling techniques, we use the same probabilistic model for all methods, fixing the weight decay parameter for each architecture. We use the softmax-based likelihood for all models and a fully-factorized zero-mean Gaussian prior with variances $\sigma^2 = (N\lambda)^{-1}$, where the number of objects $N$ and the weight decay scale $\lambda$ are dictated by the particular dataset and neural architecture, as defined in the following paragraphs.
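As a quick sanity check of this correspondence, the implied prior variance can be computed directly from the dataset size and the weight decay coefficient. This is a minimal illustrative sketch (not code from the paper); the dataset size and weight decay values below are the CIFAR/VGG ones from Table 1.

```python
import numpy as np

def prior_variance(n_objects: int, weight_decay: float) -> float:
    """Variance of the Gaussian prior implied by L2 regularization:
    sigma^2 = (N * lambda)^{-1}, as in equation 9."""
    return 1.0 / (n_objects * weight_decay)

# Example: CIFAR (50k training images) with wd = 5e-4, as for VGG in Table 1.
sigma2 = prior_variance(50_000, 5e-4)

# The negative log-density of N(0, sigma^2 I) matches the weight decay
# penalty (N * lambda / 2) * ||w||^2 up to an additive constant:
w = np.array([0.1, -0.2, 0.3])
assert np.isclose(0.5 * np.sum(w**2) / sigma2,
                  50_000 * 5e-4 / 2 * np.sum(w**2))
```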
+
+Conventional networks To train a single network on CIFAR-10/100, we used SGD with a batch size of 128, momentum 0.9 and model-specific parameters: the initial learning rate ($lr_{init}$), the weight decay coefficient (wd), and the number of optimization epochs (epochs). The specific hyperparameters are shown in Table 1. The models were trained with the unified learning rate scheduler shown in equation 10. All models were trained with data augmentation consisting of horizontal flips and random crops of 32 pixels with a padding of 4 pixels$^{5}$; the standard data normalization was also applied. The weight decays, initial learning rates, and the learning rate scheduler were taken from Garipov et al. (2018). Compared with the hyperparameters of Garipov et al. (2018), we increased the number of optimization epochs since we found that all models were underfitted. While the original WideResNet28x10 includes a number of dropout layers with $p = 0.3$ and is trained for 200 epochs, we find that WideResNet28x10 underfits in this setting and requires longer training. Therefore, we used $p = 0$, which reduces training time while bearing
+
+| Model | $lr_{init}$ | epochs | wd |
+|---|---|---|---|
+| VGG | 0.05 | 400 | 5e-4 |
+| PreResNet110 | 0.1 | 300 | 3e-4 |
+| PreResNet164 | 0.1 | 300 | 3e-4 |
+| WideResNet28x10 | 0.1 | 300 | 5e-4 |
+
+Table 1: Hyperparameters of models trained on CIFARs for single-model evaluation.
+
+no significant effect on final model performance in our experiments.
+
+$$
+\operatorname{lr}(i) = \begin{cases} lr_{init}, & i \in [0,\, 0.5 \cdot \text{epochs}] \\ lr_{init} \cdot \left(1.0 - 0.99 \cdot (i/\text{epochs} - 0.5)/0.4\right), & i \in (0.5 \cdot \text{epochs},\, 0.9 \cdot \text{epochs}] \\ lr_{init} \cdot 0.01, & \text{otherwise} \end{cases} \tag{10}
+$$
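The schedule of equation 10 can be sketched as follows (a minimal re-implementation for illustration; the function and variable names are ours):

```python
def lr_schedule(i: int, epochs: int, lr_init: float) -> float:
    """Learning rate at epoch i according to equation 10:
    constant for the first half, linear decay to 1% of lr_init
    between 50% and 90% of training, then constant at 1%."""
    r = i / epochs
    if r <= 0.5:
        return lr_init
    if r <= 0.9:
        return lr_init * (1.0 - 0.99 * (r - 0.5) / 0.4)
    return lr_init * 0.01
```

Note that the schedule is continuous: at $i = 0.9 \cdot \text{epochs}$ the linear segment reaches exactly $0.01 \cdot lr_{init}$.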
+
+On the ImageNet dataset we used ResNet50 with the default hyperparameters from PyTorch examples $^{6}$ . Specifically, we used SGD with momentum 0.9, a batch size of 256, an initial learning rate of 0.1 and weight decay 1e-4. Training included data augmentation $^{7}$ (scaling, random crops of size $224 \times 224$ , horizontal flips), normalization and the learning rate schedule $lr = lr_{init} \cdot 0.1^{\mathrm{epoch} // 30}$ where $//$ denotes integer division. We only deviated from the standard parameters by increasing the number of training epochs from 90 to 130. Our models achieve a top-1 error of $23.81 \pm 0.15$ , closely matching the accuracy of the ResNet50 provided by PyTorch, which is $23.85^{8}$ . Training one model on a single NVIDIA Tesla V100 GPU takes approximately 5.5 days.
+
+Deep ensembles Deep ensembles (Lakshminarayanan et al., 2017) average the predictions of networks trained independently from different initializations. To obtain a deep ensemble, we repeat the described procedure of training standard networks 128 times for all architectures on the CIFAR-10 and CIFAR-100 datasets (1024 networks overall) and 50 times for the ImageNet dataset. Every member of a deep ensemble was trained with exactly the same hyperparameters as conventional models of the same architecture.
+
+Dropout Binary dropout (or MC dropout) (Srivastava et al., 2014; Gal & Ghahramani, 2016) is one of the most widely known ensembling techniques. It puts multiplicative Bernoulli noise with a parameter $p$ on the activations of a fully-connected or convolutional layer and averages the predictions of the network w.r.t. the noise at test time. Dropout layers were applied to VGG and WideResNet networks on the CIFAR-10 and CIFAR-100 datasets. Dropout for VGG was applied to fully-connected layers with $p = 0.5$ ; two dropout layers were used, one before the first fully-connected layer and one before the second. While the original version of VGG for CIFARs (Zagoruyko, 2015) exploits more dropout layers, we observed that any additional dropout layer deteriorates the performance of the model in either the deterministic or the stochastic mode. Dropout for WideResNet was applied in accordance with the original paper (Zagoruyko & Komodakis, 2016) with $p = 0.3$ . Since dropout usually increases the time needed to achieve convergence, WideResNet networks with dropout were trained for 400 epochs instead of the 300 epochs used in the deterministic case; VGG networks have always been trained with dropout. All other hyperparameters were the same as for conventional models.
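The test-time averaging over dropout masks can be sketched as follows. This is a toy two-layer NumPy network for illustration only (the weights and shapes are hypothetical); the actual experiments use full convolutional architectures in PyTorch.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mc_dropout_predict(x, W1, W2, p=0.5, n_samples=50):
    """MC dropout at test time for a toy two-layer network: average the
    softmax outputs over independent Bernoulli masks on the hidden layer."""
    probs = np.zeros((x.shape[0], W2.shape[1]))
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)
        # Inverted dropout: keep with probability 1-p, rescale to keep the mean.
        mask = rng.binomial(1, 1.0 - p, size=h.shape) / (1.0 - p)
        probs += softmax((h * mask) @ W2)
    return probs / n_samples
```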
+
+Variational Inference Variational inference (VI) approximates the true posterior distribution over weights $p(w \mid Data)$ with a tractable variational approximation $q_{\theta}(w)$ by maximizing the variational lower bound $\mathcal{L}$ (eq. 11) w.r.t. the parameters $\theta$ of the variational approximation. We used a fully-factorized Gaussian approximation $q(w)$ and a Gaussian prior distribution $p(w)$ .
+
+$$
+\mathcal{L}(\theta) = \mathbb{E}_{q} \log p\left(y^* \mid x, w\right) - KL\left(q_{\theta}(w) \,\|\, p(w)\right) \rightarrow \max_{\theta} \tag{11}
+$$
+
+$$
+q(w) = \mathcal{N}\left(w \mid \mu, \operatorname{diag}(\sigma^2)\right), \quad p(w) = \mathcal{N}\left(w \mid 0, \operatorname{diag}(\sigma_p^2)\right), \quad \text{where } \sigma_p^2 = (N \cdot \mathrm{wd})^{-1} \tag{12}
+$$
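The reparameterized sampling and the KL penalty of equations 11-12 can be sketched as follows (an illustrative NumPy sketch; the actual implementation is layer-wise and uses the local reparameterization trick for fully-connected layers):

```python
import numpy as np

def sample_weights(mu, log_sigma, rng):
    """Reparameterized sample w = mu + sigma * eps with eps ~ N(0, I)."""
    return mu + np.exp(log_sigma) * rng.standard_normal(mu.shape)

def kl_to_prior(mu, log_sigma, sigma_p2):
    """KL( N(mu, diag(sigma^2)) || N(0, sigma_p^2 I) ),
    the penalty term in equation 11 under the choices of equation 12."""
    sigma2 = np.exp(2.0 * log_sigma)
    return 0.5 * np.sum(np.log(sigma_p2 / sigma2)
                        + (sigma2 + mu**2) / sigma_p2 - 1.0)
```

The KL is zero exactly when the approximation matches the prior ($\mu = 0$, $\sigma^2 = \sigma_p^2$) and positive otherwise.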
+
+| Architecture | CIFAR-10 | CIFAR-10-aug | CIFAR-100 | CIFAR-100-aug |
+|---|---|---|---|---|
+| VGG16BN | 0.042 | 0.042 | 0.100 | 0.100 |
+| PreResNet110 | 0.213 | 0.141 | 0.478 | 0.401 |
+| PreResNet164 | 0.120 | 0.105 | 0.285 | 0.225 |
+| WideResNet28x10 | 0.022 | 0.018 | 0.022 | 0.004 |
+
+Table 2: Optimal noise scale for K-FAC Laplace for different datasets and architectures. For ResNet50 on ImageNet, the optimal scale found was 2.0 with test-time augmentation and 6.8 without test-time augmentation.
+
+In the case of such a prior, the probabilistic model remains consistent with conventional training, which corresponds to MAP inference in the same probabilistic model. We used variational inference for both convolutional and fully-connected layers, with the weight variances parameterized by $\log \sigma$ . For fully-connected layers we applied the local reparameterization trick (LRT; Kingma et al., 2015).
+
+While variational inference provides a theoretically grounded way to approximate the true posterior, it tends to underfit deep learning models in practice (Kingma et al., 2015). The following tricks are applied to deal with this: pre-training (Molchanov et al., 2017), or equivalently annealing of $\beta$ (Sønderby et al., 2016), and scaling $\beta$ down (Kingma et al., 2015; Ullrich et al., 2017).
+
+During pre-training we initialize $\mu$ with a snapshot of weights of a pre-trained conventional model, and initialize $\log \sigma$ with a model-specific constant $\log \sigma_{init}$ . The KL-divergence, except for the term corresponding to the weight decay, is scaled with a model-specific parameter $\beta$ . The weight decay term is implemented as a part of the optimizer. We use the fact that the KL-divergence between two Gaussian distributions can be rewritten as two terms, one of which is equivalent to the weight decay regularization.
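This split is the standard closed form of the KL-divergence between factorized Gaussians (notation as in equation 12):

$$
KL\left(q_{\theta}(w) \,\|\, p(w)\right) = \underbrace{\frac{1}{2\sigma_p^2} \sum_i \mu_i^2}_{\text{weight decay on } \mu} + \sum_i \left( \frac{\sigma_i^2}{2\sigma_p^2} + \log \frac{\sigma_p}{\sigma_i} - \frac{1}{2} \right),
$$

where, with $\sigma_p^2 = (N \cdot \mathrm{wd})^{-1}$, the first term equals $\frac{N \cdot \mathrm{wd}}{2} \|\mu\|^2$ and can therefore be handled by the optimizer's built-in weight decay.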
+
+On CIFAR-10 and CIFAR-100 we used $\beta$ equal to 1e-4 for the VGG, PreResNet110 and PreResNet164 networks, and $\beta$ equal to 1e-5 for WideResNet. The log-variance $\log \sigma_{init}$ was initialized to $-5$ for all models. The parameters $\mu$ were optimized with SGD in the same manner as for conventional networks, except that the initial learning rate $lr_{init}$ was set to 1e-3. We used a separate Adam optimizer with a constant learning rate of 1e-3 to optimize the log-variances $\log \sigma$ . Pre-training was done for 300 epochs, after which the remaining part of training was done for 100 epochs. On ImageNet we used $\beta =$ 1e-3, $lr_{init} = 0.01$ , $\log \sigma_{init} = -6$ , and trained the model for 45 epochs after pre-training.
+
+K-FAC Laplace The Laplace approximation uses the curvature information of the appropriately scaled loss function to construct a Gaussian approximation to the posterior distribution. Ideally, one would use the Hessian of the loss function as a covariance matrix and use the maximum a posteriori estimate $w^{MAP}$ as a mean of the Gaussian approximation:
+
+$$
+\log p(w \mid x, y^*) = \log p\left(y^* \mid x, w\right) + \log p(w) + \mathrm{const} \tag{13}
+$$
+
+$$
+w^{MAP} = \underset{w}{\arg\max} \log p(w \mid x, y^*); \quad \Sigma = \left(-\nabla\nabla \log p(w \mid x, y^*)\right)^{-1} \tag{14}
+$$
+
+$$
+p(w \mid x, y^*) \approx \mathcal{N}\left(w \mid w^{MAP}, \Sigma\right) \tag{15}
+$$
+
+In order to keep the method scalable, we use the Fisher Information Matrix as an approximation to the true Hessian (Martens & Grosse, 2015). For K-FAC Laplace, we use the whole dataset to construct an approximation to the empirical Fisher Information Matrix, and use the $\pi$ correction to reduce the bias (Ritter et al., 2018; Martens & Grosse, 2015). Following (Ritter et al., 2018), we find the optimal noise scale for K-FAC Laplace on a held-out validation set by averaging across five random initializations. We then reuse this scale for networks trained without a hold-out validation set. We report the optimal values of scales in Table 2. Note that the optimal scale is different depending on whether we use test-time data augmentation or not. Since the data augmentation also introduces some amount of additional noise, the optimal noise scale for K-FAC Laplace with data augmentation is lower.
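Sampling from the resulting Gaussian posterior approximation with a noise scale $\tau$ can be sketched as follows. This is a simplified dense-covariance illustration with hypothetical names; the actual K-FAC version never forms $\Sigma$ explicitly and instead samples layer-wise through its Kronecker factors.

```python
import numpy as np

def sample_laplace_posterior(w_map, cov, scale, rng):
    """Draw w ~ N(w_map, scale^2 * cov) via a Cholesky factor of cov.
    `scale` plays the role of the noise scale reported in Table 2."""
    chol = np.linalg.cholesky(cov)
    return w_map + scale * (chol @ rng.standard_normal(w_map.shape))
```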
+
+Snapshot ensembles Snapshot ensembles (SSE) (Huang et al., 2017) are a simple example of the family of methods that collect samples from the training trajectory of a network in weight space to construct an ensemble. Samples are collected in a cyclical manner: during each cycle the learning rate goes from a large value to near zero, and a snapshot of the network's weights is taken at the end of the cycle. SSE uses SGD with a cosine learning rate schedule defined as follows:
+
+$$
+\alpha(t) = \frac{\alpha_0}{2}\left(\cos\left(\frac{\pi \bmod(t - 1, \lceil T/M \rceil)}{\lceil T/M \rceil}\right) + 1\right), \tag{16}
+$$
+
+where $\alpha_0$ is the initial learning rate, $T$ is the total number of training iterations and $M$ is the number of cycles.
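Equation 16 can be sketched as follows (an illustrative re-implementation; `t` is taken to be the 1-based iteration index):

```python
import math

def sse_lr(t: int, alpha0: float, T: int, M: int) -> float:
    """Cyclical cosine learning rate of equation 16: within each cycle of
    ceil(T / M) iterations the rate falls from alpha0 to near zero, then
    restarts at alpha0 at the beginning of the next cycle."""
    c = math.ceil(T / M)
    return alpha0 / 2.0 * (math.cos(math.pi * ((t - 1) % c) / c) + 1.0)
```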
+
+For all datasets and models, the hyperparameters from the original SSE paper are reused. For CIFAR-10/100, the cycle length is 40 epochs, the maximum learning rate is 0.2 and the batch size is 64. For ResNet50 on ImageNet, the cycle length is 45 epochs, the maximum learning rate is 0.1 and the batch size is 256.
+
+Cyclical SGLD Cyclical Stochastic Gradient Langevin Dynamics (cSGLD) (Zhang et al., 2019) is a state-of-the-art ensembling method for deep neural networks belonging to the family of stochastic gradient Markov chain Monte Carlo methods. It is similar to SSE: it employs SGD with the learning rate schedule described by equation 16, and training is cyclical in the same manner. Its main differences from SSE are the introduction of gradient noise and the capturing of several snapshots per cycle, both of which can aid in efficiently sampling from the posterior distribution over neural network weights.
+
+Some parameters from the original paper are reused: the cycle length is 50 epochs, the maximum learning rate is 0.5 and the batch size is 64. The number of epochs with gradient noise per cycle is 3, which we found to yield much higher predictive performance and better uncertainty estimation than the original paper's choice of 10 epochs for CIFAR-10 and 3 epochs for CIFAR-100.
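A single noisy update can be sketched as follows (a simplified Langevin step for illustration; the exact noise and temperature scaling follow Zhang et al. (2019) and may differ in detail from this sketch):

```python
import numpy as np

def sgld_step(w, grad, lr, rng, add_noise=True):
    """One SGLD update: an SGD step plus Gaussian noise with std sqrt(2 * lr).
    With add_noise=False this reduces to plain SGD, as in the noise-free
    epochs of each cSGLD cycle."""
    w = w - lr * grad
    if add_noise:
        w = w + np.sqrt(2.0 * lr) * rng.standard_normal(w.shape)
    return w
```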
+
+Finally, the results of cyclical stochastic gradient Hamiltonian Monte Carlo (SGHMC) (Zhang et al., 2019), which reportedly performs marginally better than cyclical SGLD, could not be reproduced with any value of the SGD momentum term. Because of this, we only include cyclical SGLD in our benchmark.
+
+FGE Fast Geometric Ensembling (FGE) is an ensembling method that, like SSE, collects weight samples from the training trajectory to construct an ensemble. Its main differences from SSE are pre-training, a short cycle length and a piecewise-linear learning rate schedule:
+
+$$
+\alpha(i) = \begin{cases} (1 - 2t(i))\,\alpha_1 + 2t(i)\,\alpha_2, & 0 < t(i) \leq \frac{1}{2} \\ (2 - 2t(i))\,\alpha_2 + (2t(i) - 1)\,\alpha_1, & \frac{1}{2} < t(i) \leq 1 \end{cases} \tag{17}
+$$
+
+The hyperparameters of the original implementation of FGE are reused. Model pre-training is done with SGD for 160 epochs according to the standard learning rate schedule described in equation 10 with the maximum learning rates from Table 1. After that, the desired number of FGE cycles is run, with one snapshot collected per cycle. For VGG the learning rate is changed with parameters $\alpha_{1} =$ 1e-2, $\alpha_{2} =$ 5e-4 and a cycle length of 2 epochs; for the other networks, with parameters $\alpha_{1} =$ 5e-2, $\alpha_{2} =$ 5e-4 and a cycle length of 4 epochs. The batch size is 128.
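Equation 17 can be sketched as follows. Note that $t(i)$ is not defined in this excerpt; we assume it is the 1-based position of iteration $i$ within its cycle, normalized to $(0, 1]$, so the rate falls linearly from $\alpha_1$ to $\alpha_2$ over the first half of a cycle and rises back over the second half.

```python
def fge_lr(i: int, cycle_len: int, alpha1: float, alpha2: float) -> float:
    """Piecewise-linear FGE schedule (equation 17), assuming t(i) is the
    normalized 1-based position of iteration i within the current cycle."""
    t = ((i - 1) % cycle_len + 1) / cycle_len
    if t <= 0.5:
        return (1 - 2 * t) * alpha1 + 2 * t * alpha2
    return (2 - 2 * t) * alpha2 + (2 * t - 1) * alpha1
```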
+
+SWAG SWA-Gaussian (SWAG) (Maddox et al., 2019) is an ensembling method based on fitting a Gaussian distribution to model weights on the SGD training trajectory and sampling from this distribution to construct an ensemble.
+
+Like FGE, SWAG has a pre-training stage which follows the standard learning rate schedule described in equation 10 with the maximum learning rates from Table 1. After that, training continues with a constant learning rate of 1e-2 for all models, except for PreResNet110 and PreResNet164 on CIFAR-100, where it continues with a constant learning rate of 5e-2 in accordance with the original paper. The rank of the empirical covariance matrix used to estimate the parameters of the Gaussian distribution is set to 20.
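The moment estimation and sampling can be sketched as follows (a flattened-weight NumPy illustration of the SWAG diagonal-plus-low-rank sample from Maddox et al. (2019); the actual implementation maintains running averages during training rather than storing all snapshots):

```python
import numpy as np

def swag_fit(snapshots, rank=20):
    """Estimate SWAG moments from weight snapshots along the SGD trajectory."""
    W = np.stack(snapshots)                       # (K, D)
    mean = W.mean(axis=0)
    diag_var = np.maximum((W**2).mean(axis=0) - mean**2, 0.0)
    dev = (W - mean)[-rank:].T                    # deviation matrix, (D, min(rank, K))
    return mean, diag_var, dev

def swag_sample(mean, diag_var, dev, rng):
    """Draw weights: mean plus a diagonal and a low-rank Gaussian perturbation."""
    k = dev.shape[1]
    z1 = rng.standard_normal(mean.shape)
    z2 = rng.standard_normal(k)
    return (mean
            + np.sqrt(diag_var / 2.0) * z1
            + (dev @ z2) / np.sqrt(2.0 * (k - 1)))
```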
+
+# C ADDITIONAL EXPERIMENTAL RESULTS
+
+Figure 8: The average log-likelihood vs. the Brier score on the test set for different ensembling methods on (a) CIFAR-10 ($|\rho| = 0.985$), (b) CIFAR-100 ($|\rho| = 0.972$) and (c) ImageNet ($|\rho| = 0.999$). While not equivalent, these metrics demonstrate a strong linear correlation. The correlation coefficient is denoted $\rho$.
+
+Figure 9: Log-likelihood vs. accuracy for different ensembles before calibration on (a) CIFAR-10 ($\rho = 0.861$), (c) CIFAR-100 ($\rho = 0.877$) and (e) ImageNet ($\rho = 0.995$), and after calibration on (b) CIFAR-10 ($\rho = 0.957$), (d) CIFAR-100 ($\rho = 0.956$) and (f) ImageNet ($\rho = 0.997$). Both the plain log-likelihood and especially the calibrated log-likelihood are highly correlated with accuracy.
+
+Figure 10: A side-by-side comparison of the log-likelihood and the calibrated log-likelihood on (a) CIFAR-10 and (b) CIFAR-100. On CIFAR-10, the performance of a single network becomes close to that of dropout, variational inference (vi) and the K-FAC Laplace approximation (kfacl) after calibration on all models except VGG. On CIFAR-100, deep ensembles move to the first position in the ranking after calibration on WideResNet and VGG. See Section 3.1 for details on the calibrated log-likelihood.
+
+Figure 11: The deep ensemble equivalent of various ensembling techniques on ImageNet. Solid lines: mean DEE for different methods and architectures. The area between $\mathrm{DEE}^{\mathrm{lower}}$ and $\mathrm{DEE}^{\mathrm{upper}}$ is shaded. Columns 2-4 correspond to DEE based on other metrics, defined similarly to the log-likelihood-based DEE. The results are consistent across all metrics.
+
+Figure 12: The deep ensemble equivalent of various ensembling techniques on CIFAR-10. Solid lines: mean DEE for different methods and architectures. Area between $\mathrm{DEE}^{\mathrm{lower}}$ and $\mathrm{DEE}^{\mathrm{upper}}$ is shaded. Lines 2–4 correspond to DEE based on other metrics, defined similarly to the log-likelihood-based DEE. Note that while the actual scale of DEE varies from metric to metric, the ordering of different methods and the overall behaviour of the lines remain the same.
+
+SSE outperforms deep ensembles on CIFAR-10 with the WideResNet architecture, possibly indicating that the cosine learning rate schedule and the longer training of SSE suit this architecture better than the piecewise-linear learning rate schedule and the number of epochs used for deep ensembles.
+
+Figure 13: The deep ensemble equivalent of various ensembling techniques on CIFAR-100. Solid lines: mean DEE for different methods and architectures. Area between $\mathrm{DEE}^{\mathrm{lower}}$ and $\mathrm{DEE}^{\mathrm{upper}}$ is shaded. Lines 2-4 correspond to DEE based on other metrics, defined similarly to the log-likelihood-based DEE. Note that while the actual scale of DEE varies from metric to metric, the ordering of different methods and the overall behaviour of the lines remain the same.
+
+| Model | Method | Error (%), n=1 | Error (%), n=5 | Error (%), n=10 | Error (%), n=100 | Neg. calib. LL, n=1 | Neg. calib. LL, n=5 | Neg. calib. LL, n=10 | Neg. calib. LL, n=100 |
+|---|---|---|---|---|---|---|---|---|---|
| VGG16 CIFAR-10 | Dropout | 5.86±0.09 | 5.81±0.08 | 5.82±0.06 | 5.79±0.07 | 0.232±0.005 | 0.225±0.004 | 0.224±0.004 | 0.223±0.003 |
| SWA-Gaussian | 7.03±0.50 | 5.66±0.08 | 5.49±0.12 | 5.25±0.13 | 0.230±0.014 | 0.182±0.003 | 0.171±0.002 | 0.160±0.002 |
| Cyclic SGLD | 7.37±0.16 | 6.56±0.09 | 5.71±0.06 | 4.84±0.04 | 0.234±0.004 | 0.196±0.004 | 0.176±0.003 | 0.147±0.003 |
| Fast Geometric Ens. | 6.52±0.16 | 5.95±0.16 | 5.69±0.16 | 5.10±0.13 | 0.213±0.005 | 0.187±0.003 | 0.178±0.003 | 0.155±0.004 |
| Deep Ensembles | 5.95±0.14 | 4.79±0.11 | 4.57±0.07 | 4.39±NA | 0.226±0.001 | 0.158±0.002 | 0.148±0.001 | 0.134±NA |
| Single model | 5.83±0.11 | 5.83±0.11 | 5.83±0.11 | 5.83±0.11 | 0.223±0.002 | 0.223±0.002 | 0.223±0.002 | 0.223±0.002 |
| Variational Inf. (FFG) | 6.57±0.09 | 5.63±0.13 | 5.50±0.10 | 5.46±0.03 | 0.239±0.002 | 0.192±0.002 | 0.184±0.002 | 0.175±0.001 |
| KFAC-Laplace | 6.00±0.13 | 5.82±0.12 | 5.82±0.19 | 5.80±0.19 | 0.210±0.005 | 0.203±0.007 | 0.201±0.007 | 0.200±0.008 |
| Snapshot Ensembles | 7.76±0.22 | 5.52±0.13 | 5.00±0.10 | 4.54±0.05 | 0.247±0.005 | 0.176±0.001 | 0.160±0.001 | 0.137±0.001 |
| ResNet110 CIFAR-10 | SWA-Gaussian | 5.77±0.45 | 4.56±0.17 | 4.46±0.12 | 4.34±0.13 | 0.178±0.009 | 0.143±0.004 | 0.139±0.003 | 0.131±0.003 |
| Cyclic SGLD | 6.18±0.20 | 5.32±0.15 | 4.55±0.13 | 3.83±0.02 | 0.185±0.006 | 0.156±0.005 | 0.138±0.002 | 0.115±0.001 |
| Fast Geometric Ens. | 5.52±0.09 | 4.83±0.08 | 4.73±0.10 | 4.28±0.05 | 0.163±0.002 | 0.141±0.003 | 0.137±0.003 | 0.126±0.002 |
| Deep Ensembles | 4.66±0.11 | 3.77±0.11 | 3.63±0.07 | 3.53±NA | 0.148±0.004 | 0.117±0.002 | 0.112±0.002 | 0.106±NA |
| Single model | 4.69±0.11 | 4.69±0.11 | 4.69±0.11 | 4.69±0.11 | 0.150±0.002 | 0.150±0.002 | 0.150±0.003 | 0.150±0.002 |
| Variational Inf. (FFG) | 5.57±0.26 | 4.91±0.15 | 4.72±0.13 | 4.60±0.03 | 0.178±0.003 | 0.149±0.001 | 0.144±0.001 | 0.140±0.000 |
| KFAC-Laplace | 5.81±0.39 | 5.14±0.15 | 4.90±0.14 | 4.78±0.08 | 0.187±0.014 | 0.160±0.007 | 0.153±0.005 | 0.147±0.003 |
| Snapshot Ensembles | 8.41±0.27 | 4.85±0.11 | 4.16±0.16 | 3.52±0.10 | 0.252±0.006 | 0.153±0.002 | 0.132±0.002 | 0.107±0.001 |
| ResNet164 CIFAR-10 | SWA-Gaussian | 5.41±0.71 | 4.21±0.19 | 4.21±0.23 | 4.02±0.14 | 0.171±0.028 | 0.130±0.004 | 0.128±0.004 | 0.121±0.002 |
| Cyclic SGLD | 5.80±0.21 | 4.97±0.12 | 4.30±0.08 | 3.66±0.06 | 0.178±0.004 | 0.149±0.004 | 0.131±0.003 | 0.110±0.001 |
| Fast Geometric Ens. | 5.22±0.07 | 4.49±0.06 | 4.36±0.07 | 4.09±0.12 | 0.157±0.003 | 0.134±0.002 | 0.130±0.001 | 0.119±0.002 |
| Deep Ensembles | 4.53±0.11 | 3.51±0.09 | 3.50±0.06 | 3.34±NA | 0.147±0.002 | 0.113±0.001 | 0.107±0.001 | 0.100±NA |
| Single model | 4.52±0.11 | 4.52±0.11 | 4.52±0.11 | 4.52±0.11 | 0.144±0.002 | 0.144±0.003 | 0.144±0.002 | 0.144±0.003 |
| Variational Inf. (FFG) | 5.62±0.14 | 4.78±0.05 | 4.66±0.05 | 4.55±0.08 | 0.183±0.004 | 0.151±0.001 | 0.146±0.001 | 0.141±0.001 |
| KFAC-Laplace | 5.23±0.29 | 4.77±0.23 | 4.65±0.17 | 4.60±0.09 | 0.168±0.008 | 0.151±0.007 | 0.146±0.005 | 0.142±0.004 |
| Snapshot Ensembles | 8.06±0.10 | 4.50±0.04 | 3.89±0.09 | 3.50±0.05 | 0.241±0.004 | 0.144±0.003 | 0.124±0.002 | 0.104±0.001 |
| WideResNet CIFAR-10 | Dropout | 3.88±0.12 | 3.70±0.18 | 3.63±0.19 | 3.64±0.17 | 0.130±0.002 | 0.120±0.002 | 0.119±0.001 | 0.117±0.002 |
| SWA-Gaussian | 4.98±1.17 | 3.53±0.09 | 3.34±0.14 | 3.28±0.10 | 0.157±0.036 | 0.111±0.004 | 0.105±0.003 | 0.101±0.002 |
| Cyclic SGLD | 4.78±0.16 | 4.09±0.11 | 3.63±0.13 | 3.19±0.04 | 0.155±0.003 | 0.128±0.002 | 0.114±0.001 | 0.099±0.002 |
| Fast Geometric Ens. | 4.86±0.17 | 3.95±0.07 | 3.77±0.10 | 3.34±0.06 | 0.148±0.003 | 0.120±0.002 | 0.113±0.002 | 0.102±0.001 |
| Deep Ensembles | 3.65±0.02 | 3.11±0.10 | 3.01±0.06 | 2.83±NA | 0.123±0.002 | 0.097±0.001 | 0.095±0.001 | 0.090±NA |
| Single model | 3.70±0.15 | 3.70±0.15 | 3.70±0.15 | 3.70±0.15 | 0.124±0.005 | 0.124±0.005 | 0.125±0.005 | 0.124±0.005 |
| Variational Inf. (FFG) | 5.61±0.04 | 4.15±0.15 | 3.94±0.10 | 3.64±0.07 | 0.189±0.002 | 0.134±0.002 | 0.127±0.002 | 0.117±0.001 |
| KFAC-Laplace | 4.03±0.19 | 3.90±0.15 | 3.88±0.22 | 3.83±0.16 | 0.134±0.004 | 0.124±0.004 | 0.122±0.005 | 0.120±0.003 |
| Snapshot Ensembles | 5.56±0.15 | 3.68±0.09 | 3.33±0.10 | 2.89±0.07 | 0.179±0.005 | 0.119±0.001 | 0.105±0.001 | 0.090±0.001 |
| VGG16 CIFAR-100 | Dropout | 26.10±0.20 | 25.68±0.18 | 25.66±0.14 | 25.60±0.17 | 1.176±0.008 | 1.111±0.008 | 1.098±0.009 | 1.084±0.009 |
| SWA-Gaussian | 27.74±1.87 | 24.53±0.09 | 23.64±0.28 | 22.97±0.20 | 1.109±0.073 | 0.931±0.007 | 0.879±0.007 | 0.826±0.005 |
| Cyclic SGLD | 29.75±0.17 | 26.79±0.19 | 24.14±0.11 | 21.15±0.11 | 1.114±0.003 | 0.976±0.004 | 0.881±0.006 | 0.749±0.004 |
| Fast Geometric Ens. | 27.07±0.24 | 25.35±0.29 | 24.68±0.40 | 22.78±0.22 | 1.057±0.010 | 0.965±0.003 | 0.930±0.003 | 0.827±0.004 |
| Deep Ensembles | 25.72±0.17 | 21.60±0.13 | 20.79±0.16 | 19.88±NA | 1.092±0.004 | 0.840±0.005 | 0.794±0.002 | 0.723±NA |
| Single model | 25.44±0.29 | 25.44±0.29 | 25.44±0.29 | 25.44±0.29 | 1.087±0.006 | 1.087±0.006 | 1.087±0.006 | 1.087±0.006 |
| Variational Inf. (FFG) | 27.24±0.09 | 25.24±0.11 | 24.85±0.05 | 24.56±0.07 | 1.154±0.004 | 1.001±0.002 | 0.973±0.002 | 0.939±0.001 |
| KFAC-Laplace | 27.11±0.59 | 25.98±0.21 | 25.84±0.38 | 25.70±0.38 | 1.174±0.004 | 1.089±0.007 | 1.069±0.005 | 1.050±0.008 |
| Snapshot Ensembles | 31.19±0.33 | 23.87±0.18 | 22.31±0.31 | 21.03±0.10 | 1.170±0.012 | 0.899±0.004 | 0.834±0.005 | 0.751±0.003 |
| ResNet110 CIFAR-100 | SWA-Gaussian | 27.75±0.76 | 22.31±0.22 | 21.52±0.30 | 20.69±0.19 | 0.960±0.033 | 0.781±0.011 | 0.745±0.010 | 0.701±0.008 |
| Cyclic SGLD | 25.73±0.14 | 23.30±0.19 | 21.20±0.21 | 18.07±0.16 | 0.914±0.006 | 0.818±0.004 | 0.753±0.002 | 0.630±0.002 |
| Fast Geometric Ens. | 22.84±0.16 | 21.22±0.20 | 20.79±0.23 | 19.64±0.15 | 0.798±0.006 | 0.729±0.003 | 0.713±0.002 | 0.679±0.002 |
| Deep Ensembles | 22.55±0.28 | 18.30±0.22 | 17.59±0.21 | 16.97±NA | 0.847±0.007 | 0.675±0.001 | 0.638±0.001 | 0.594±NA |
| Single model | 22.66±0.31 | 22.66±0.31 | 22.66±0.31 | 22.66±0.31 | 0.848±0.014 | 0.848±0.015 | 0.848±0.014 | 0.848±0.015 |
| Variational Inf. (FFG) | 24.27±0.26 | 22.41±0.13 | 22.14±0.12 | 21.86±0.07 | 0.924±0.007 | 0.829±0.003 | 0.813±0.001 | 0.795±0.001 |
| KFAC-Laplace | 24.88±0.97 | 22.87±0.44 | 22.41±0.26 | 22.14±0.29 | 0.948±0.036 | 0.858±0.014 | 0.836±0.010 | 0.812±0.010 |
| Snapshot Ensembles | 30.30±0.40 | 22.83±0.23 | 21.13±0.14 | 18.48±0.25 | 1.069±0.006 | 0.820±0.003 | 0.761±0.022 | 0.662±0.02 |
| ResNet164 CIFAR-100 | Dropout | 24.38±0.93 | 20.62±0.18 | 20.08±0.19 | 19.48±0.19 | 0.844±0.042 | 0.719±0.006 | 0.768±0.006 | 0.667±0.04 |
| SWA-Gaussian | 24.87±0.39 | 22.37±0.27 | 20.23±0.22 | 17.13±0.18 | 0.888±0.025 | 0.653±0.004 | 0.634±0.005 | 0.614±0.005 |
| Cyclic SGLD | 21.92±0.15 | 20.10±0.22 | 19.87±0.25 | 17.83±0.25 | 0.765±0.003 | 0.699±0.004 | 0.686±0.004 | 0.650±0.003 |
| Fast Geometric Ens. | 21.41±0.25 | 17.53±0.17 | 16.90±0.15 | 16.50±NA | 0.819±0.008 | 0.647±0.003 | 0.615±0.022 | 0.574±NA |
| Deep Ensembles | 21.39±0.26 | 17.53±0.17 | 16.90±0.15 | 16.57±NA | 0.797±0.014 | 0.817±0.014 | 0.817±0.014 | 0.817±0.014 |
| Single model | 21.39±0.40 | 21.39±0.40 | 21.39±0.40 | 21.39±NA | 0.817±0.014 | 0.817±0.014 | 0.817±0.014 | 0.817±0.014 |
| Variational Inf. (FFG) | 24.37±0.26 | 21.35±0.11 | 21.10±0.16 | 20.82±0.04 | 0.910±0.001 | 0.801±0.022 | 0.782±0.022 | 0.762±0.009 |
| KFAC-Laplace | 23.44±0.45 | 21.77±0.20 | 21.29±0.23 | 21.03±0.38 | 0.902±0.019 | 0.813±0.006 | 0.792±0.005 | 0.772±0.007 |
| Snapshot Ensembles | 29.48±0.19 | 21.92±0.18 | 20.27±0.23 | 17.68±0.27 | 1.045±0.005 | 0.789±0.005 | 0.729±0.004 | 0.634±0.003 |
| WideResNet CIFAR-100 | Dropout | 20.19±0.11 | 19.41±0.17 | 19.36±0.12 | 19.22±0.15 | 0.823±0.008 | 0.768±0.005 | 0.769±0.006 | 0.751±0.005 |
| SWA-Gaussian | 20.45±0.73 | 17.57±0.17 | 17.21±0.22 | 17.08±0.19 | 0.794±0.025 | 0.653±0.004 | 0.634±0.005 | 0.614±0.005 |
| Cyclic SGLD | 21.42±0.32 | 19.42±0.28 | 17.88±0.16 | 16.29±0.10 | 0.813±0.010 | 0.713±0.009 | 0.654±0.015 | 0.583±0.004 |
| Fast Geometric Ens. | 21.48±0.31 | 18.54±0.16 | 18.00±0.19 | 17.12±0.16 | 0.770±0.007 | 0.652±0.006 | 0.636±0.015 | 0.596±0.03 |
| Deep Ensembles | 19.38±0.20 | 16.55±0.18 | 16.17±0.15 | 15.77±NA | 0.797±0.017 | 0.623±0.003 | 0.595±0.033 | 0.571±NA |
| Single model | 19.31±0.24 | 19.31±0.24 | 19.31±NA | 19.31±NA | 0.797±0.017 | 0.797±0.017 | 0.797±0.017 | 0.797±0.017 |
| Variational Inf. (FFG) | 24.38±0.27 | 20.17±0.15 | 19.28±0.29 | 18.74±0.28 | 1.004±0.011 | 0.767±0.014 | 0.727±0.013 | 0.685±0.02 |
| KFAC-Laplace | 20.02±0.18 | 19.76±0.15 | 19.53±0.19 | 19.43±NA | 0.834±0.009 | 0.833±0.016 | 0.795±0.017 | 0.789±0.016 |
| Snapshot Ensembles | 23.01±0.26 | 18.20±0.13 | 17.12±0.31 | 16.67±NA | 0.859±0.019 | 0.768±0.016 | 0.763±NA | 0.634±NA |
+
+Table 3: Classification error and negative calibrated log-likelihood for different models and numbers of samples on CIFAR-10/100.
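
The "calibrated" log-likelihood reported throughout these tables is the log-likelihood after temperature scaling, with the temperature tuned on held-out validation data rather than on the test set. A minimal numpy sketch of the metric (the grid search and function names are illustrative, not the paper's exact implementation):

```python
import numpy as np

def nll(logits, labels, T=1.0):
    """Average negative log-likelihood of softmax(logits / T)."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)          # stabilise the softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def calibrated_nll(val_logits, val_labels, test_logits, test_labels,
                   temps=np.linspace(0.1, 5.0, 100)):
    """Pick the temperature that minimises NLL on validation data,
    then report the test NLL at that temperature."""
    best_T = min(temps, key=lambda T: nll(val_logits, val_labels, T))
    return nll(test_logits, test_labels, best_T), best_T
```

Tuning the temperature on validation data keeps the comparison between methods fair: a method cannot look better or worse merely because its raw outputs are over- or under-confident.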
+
+| Model | Method | Error (%), 5 samples | 10 samples | 100 samples | Neg. calibrated log-likelihood, 5 samples | 10 samples | 100 samples |
| VGG16 CIFAR-10 | Dropout | 5.81 vs 5.52↓ | 5.82 vs 5.34↓ | 5.79 vs 5.21↓ | 0.225 vs 0.187↓ | 0.224 vs 0.177↓ | 0.223 vs 0.167↓ |
| SWA-Gaussian | 5.66 vs 5.65~ | 5.49 vs 5.36~ | 5.25 vs 5.08~ | 0.182 vs 0.180~ | 0.171 vs 0.166~ | 0.160 vs 0.152~ |
| Cyclic SGLD | 6.56 vs 6.07↓ | 5.71 vs 5.47↓ | 4.84 vs 4.88~ | 0.196 vs 0.186~ | 0.176 vs 0.169~ | 0.147 vs 0.147~ |
| Fast Geometric Ens. | 5.95 vs 5.64↓ | 5.69 vs 5.36~ | 5.10 vs 4.98~ | 0.187 vs 0.177~ | 0.178 vs 0.167~ | 0.155 vs 0.150~ |
| Deep Ensembles | 4.79 vs 4.90~ | 4.57 vs 4.73~ | 4.39 vs 4.55~ | 0.158 vs 0.162~ | 0.148 vs 0.152~ | 0.134 vs 0.136~ |
| Single model | 5.83 vs 5.55↓ | 5.83 vs 5.35~ | 5.83 vs 5.19~ | 0.223 vs 0.183~ | 0.223 vs 0.174~ | 0.223 vs 0.166~ |
| Variational Inf. (FFG) | 5.63 vs 5.43↓ | 5.50 vs 5.25~ | 5.46 vs 5.07~ | 0.192 vs 0.182~ | 0.184 vs 0.169~ | 0.175 vs 0.154~ |
| KFAC-Laplace | 5.82 vs 5.49~ | 5.82 vs 5.32~ | 5.80 vs 5.17~ | 0.203 vs 0.177~ | 0.201 vs 0.171~ | 0.200 vs 0.164~ |
| Snapshot Ensembles | 5.52 vs 5.61~ | 5.00 vs 5.03~ | 4.54 vs 4.64~ | 0.176 vs 0.178~ | 0.160 vs 0.160~ | 0.137 vs 0.137~ |
| ResNet110 CIFAR-10 | SWA-Gaussian | 4.56 vs 4.47~ | 4.46 vs 4.34~ | 4.34 vs 4.16~ | 0.143 vs 0.142~ | 0.139 vs 0.135~ | 0.131 vs 0.125~ |
| Cyclic SGLD | 5.32 vs 4.88~ | 4.55 vs 4.35~ | 3.83 vs 3.59~ | 0.156 vs 0.148~ | 0.138 vs 0.131~ | 0.115 vs 0.111~ |
| Fast Geometric Ens. | 4.83 vs 4.59~ | 4.73 vs 4.44~ | 4.28 vs 4.14~ | 0.141 vs 0.138~ | 0.137 vs 0.132~ | 0.126 vs 0.121~ |
| Deep Ensembles | 3.77 vs 3.70~ | 3.63 vs 3.58~ | 3.53 vs 3.45~ | 0.117 vs 0.118~ | 0.112 vs 0.112~ | 0.106 vs 0.105~ |
| Single model | 4.69 vs 4.32~ | 4.69 vs 4.23~ | 4.69 vs 4.10~ | 0.150 vs 0.137~ | 0.150 vs 0.134~ | 0.150 vs 0.131~ |
| Variational Inf. (FFG) | 4.91 vs 4.66~ | 4.72 vs 4.43~ | 4.60 vs 4.25~ | 0.149 vs 0.145~ | 0.144 vs 0.138~ | 0.140 vs 0.130~ |
| KFAC-Laplace | 5.14 vs 4.43~ | 4.90 vs 4.27~ | 4.78 vs 4.17~ | 0.160 vs 0.139~ | 0.153 vs 0.134~ | 0.147 vs 0.130~ |
| Snapshot Ensembles | 4.85 vs 4.78~ | 4.16 vs 4.18~ | 3.52 vs 3.48~ | 0.153 vs 0.150~ | 0.132 vs 0.130~ | 0.107 vs 0.106~ |
| ResNet164 CIFAR-10 | SWA-Gaussian | 4.21 vs 4.17~ | 4.21 vs 4.04~ | 4.02 vs 3.84~ | 0.130 vs 0.130~ | 0.128 vs 0.126~ | 0.121 vs 0.116~ |
| Cyclic SGLD | 4.97 vs 4.67~ | 4.30 vs 4.01~ | 3.66 vs 3.48~ | 0.149 vs 0.141~ | 0.131 vs 0.125~ | 0.110 vs 0.107~ |
| Fast Geometric Ens. | 4.49 vs 4.29~ | 4.36 vs 4.15~ | 4.09 vs 3.85~ | 0.134 vs 0.131~ | 0.130 vs 0.126~ | 0.119 vs 0.115~ |
| Deep Ensembles | 3.51 vs 3.58~ | 3.50 vs 3.46~ | 3.34 vs 3.30~ | 0.113 vs 0.114~ | 0.107 vs 0.107~ | 0.100 vs 0.101~ |
| Single model | 4.52 vs 4.15~ | 4.52 vs 4.08~ | 4.52 vs 3.98~ | 0.144 vs 0.131~ | 0.144 vs 0.128~ | 0.144 vs 0.126~ |
| Variational Inf. (FFG) | 4.78 vs 4.41~ | 4.66 vs 4.22~ | 4.55 vs 4.05~ | 0.151 vs 0.142~ | 0.146 vs 0.135~ | 0.141 vs 0.128~ |
| KFAC-Laplace | 4.77 vs 4.26~ | 4.65 vs 4.21~ | 4.60 vs 4.08~ | 0.151 vs 0.135~ | 0.146 vs 0.132~ | 0.142 vs 0.127~ |
| Snapshot Ensembles | 4.50 vs 4.42~ | 3.89 vs 3.87~ | 3.50 vs 3.35~ | 0.144 vs 0.142~ | 0.124 vs 0.123~ | 0.104 vs 0.104~ |
| WideResNet CIFAR-10 | Dropout | 3.70 vs 3.62~ | 3.63 vs 3.52~ | 3.64 vs 3.44~ | 0.120 vs 0.117~ | 0.119 vs 0.114~ | 0.117 vs 0.111~ |
| SWA-Gaussian | 3.53 vs 3.59~ | 3.34 vs 3.34~ | 3.28 vs 3.24~ | 0.111 vs 0.114~ | 0.105 vs 0.107~ | 0.101 vs 0.102~ |
| Cyclic SGLD | 4.09 vs 3.95~ | 3.63 vs 3.58~ | 3.19 vs 3.22~ | 0.128 vs 0.125~ | 0.114 vs 0.112~ | 0.099 vs 0.100~ |
| Fast Geometric Ens. | 3.95 vs 3.90~ | 3.77 vs 3.64~ | 3.34 vs 3.27~ | 0.120 vs 0.120~ | 0.113 vs 0.113~ | 0.102 vs 0.102~ |
| Deep Ensembles | 3.11 vs 3.21~ | 3.01 vs 3.02~ | 2.83 vs 2.91~ | 0.097 vs 0.103~ | 0.095 vs 0.098~ | 0.090 vs 0.094~ |
| Single model | 3.70 vs 3.53~ | 3.70 vs 3.45~ | 3.70 vs 3.40~ | 0.124 vs 0.117~ | 0.125 vs 0.114~ | 0.124 vs 0.113~ |
| Variational Inf. (FFG) | 4.15 vs 4.05~ | 3.94 vs 3.80~ | 3.64 vs 3.63~ | 0.134 vs 0.136~ | 0.127 vs 0.126~ | 0.117 vs 0.116~ |
| KFAC-Laplace | 3.90 vs 3.63~ | 3.88 vs 3.58~ | 3.83 vs 3.50~ | 0.124 vs 0.115~ | 0.122 vs 0.113~ | 0.120 vs 0.111~ |
| Snapshot Ensembles | 3.68 vs 3.74~ | 3.33 vs 3.35~ | 2.89 vs 2.90~ | 0.119 vs 0.122~ | 0.105 vs 0.107~ | 0.090 vs 0.093~ |
| VGG16 CIFAR-100 | Dropout | 25.68 vs 24.37~ | 25.66 vs 23.89~ | 25.60 vs 23.41~ | 1.111 vs 0.999~ | 1.098 vs 0.960~ | 1.084 vs 0.911~ |
| SWA-Gaussian | 24.53 vs 24.28~ | 23.64 vs 23.27~ | 22.97 vs 22.34~ | 0.931 vs 0.926~ | 0.879 vs 0.859~ | 0.826 vs 0.795~ |
| Cyclic SGLD | 26.79 vs 25.66~ | 24.14 vs 23.45~ | 21.15 vs 21.04~ | 0.976 vs 0.929~ | 0.881 vs 0.848~ | 0.749 vs 0.740~ |
| Fast Geometric Ens. | 25.35 vs 24.53~ | 24.68 vs 23.62~ | 22.78 vs 22.20~ | 0.965 vs 0.921~ | 0.930 vs 0.878~ | 0.827 vs 0.800~ |
| Deep Ensembles | 21.60 vs 21.90~ | 20.79 vs 21.03~ | 19.88 vs 20.23~ | 0.840 vs 0.865~ | 0.794 vs 0.811~ | 0.723 vs 0.731~ |
| Single model | 25.44 vs 24.38~ | 25.44 vs 23.92~ | 25.44 vs 23.48~ | 1.087 vs 0.973~ | 1.087 vs 0.945~ | 1.087 vs 0.912~ |
| Variational Inf. (FFG) | 25.24 vs 24.19~ | 24.85 vs 23.56~ | 24.56 vs 22.89~ | 1.001 vs 0.964~ | 0.973 vs 0.919~ | 0.939 vs 0.864~ |
| KFAC-Laplace | 25.98 vs 24.53~ | 25.84 vs 24.03~ | 25.70 vs 23.57~ | 1.089 vs 0.989~ | 1.069 vs 0.949~ | 1.050 vs 0.909~ |
| Snapshot Ensembles | 23.87 vs 23.88~ | 22.31 vs 22.50~ | 21.03 vs 21.04~ | 0.899 vs 0.905~ | 0.834 vs 0.836~ | 0.751 vs 0.750~ |
| ResNet110 CIFAR-100 | SWA-Gaussian | 22.31 vs 22.20~ | 21.52 vs 21.21~ | 20.69 vs 20.28~ | 0.781 vs 0.770~ | 0.745 vs 0.730~ | 0.701 vs 0.683~ |
| Cyclic SGLD | 23.30 vs 22.32~ | 21.20 vs 20.43~ | 18.07 vs 17.67~ | 0.818 vs 0.787~ | 0.753 vs 0.725~ | 0.630 vs 0.616~ |
| Fast Geometric Ens. | 21.22 vs 20.69~ | 20.79 vs 20.18~ | 19.64 vs 19.25~ | 0.729 vs 0.714~ | 0.713 vs 0.694~ | 0.679 vs 0.661~ |
| Deep Ensembles | 18.30 vs 18.30~ | 17.59 vs 17.61~ | 16.97 vs 16.74~ | 0.675 vs 0.672~ | 0.638 vs 0.635~ | 0.594 vs 0.591~ |
| Single model | 22.66 vs 21.37~ | 22.66 vs 21.17~ | 22.66 vs 20.98~ | 0.848 vs 0.797~ | 0.848 vs 0.786~ | 0.848 vs 0.775~ |
| Variational Inf. (FFG) | 22.41 vs 21.67~ | 22.14 vs 21.21~ | 21.86 vs 20.77~ | 0.829 vs 0.799~ | 0.813 vs 0.775~ | 0.795 vs 0.748~ |
| KFAC-Laplace | 22.87 vs 21.69~ | 22.41 vs 21.28~ | 22.14 vs 20.99~ | 0.858 vs 0.810~ | 0.836 vs 0.788~ | 0.812 vs 0.766~ |
| Snapshot Ensembles | 22.83 vs 22.33~ | 21.13 vs 20.71~ | 18.48 vs 18.23~ | 0.820 vs 0.806~ | 0.761 vs 0.744~ | 0.662 vs 0.651~ |
| ResNet164 CIFAR-100 | Dropout | 20.62 vs 20.61~ | 20.08 vs 20.08~ | 19.48 vs 19.33~ | 0.719 vs 0.715~ | 0.700 vs 0.690~ | 0.667 vs 0.654~ |
| Cyclic SGLD | 22.37 vs 21.57~ | 20.23 vs 19.58~ | 17.13 vs 16.99~ | 0.790 vs 0.767~ | 0.722 vs 0.702~ | 0.606 vs 0.595~ |
| Fast Geometric Ens. | 20.10 vs 19.82~ | 19.87 vs 19.39~ | 18.73 vs 18.33~ | 0.699 vs 0.689~ | 0.686 vs 0.671~ | 0.650 vs 0.636~ |
| Deep Ensembles | 17.53 vs 17.57~ | 16.90 vs 16.84~ | 16.50 vs 16.22~ | 0.647 vs 0.650~ | 0.615 vs 0.613~ | 0.574 vs 0.575~ |
| Single model | 21.39 vs 20.60~ | 21.39 vs 20.39~ | 21.39 vs 20.23~ | 0.817 vs 0.772~ | 0.817 vs 0.761~ | 0.817 vs 0.751~ |
| Variational Inf. (FFG) | 21.35 vs 21.06~ | 21.10 vs 20.54~ | 20.82 vs 19.97~ | 0.801 vs 0.785~ | 0.782 vs 0.759~ | 0.762 vs 0.731~ |
| KFAC-Laplace | 21.77 vs 20.69~ | 21.29 vs 20.30~ | 21.03 vs 20.18~ | 0.813 vs 0.778~ | 0.792 vs 0.758~ | 0.772 vs 0.740~ |
| Snapshot Ensembles | 21.92 vs 21.69~ | 20.27 vs 19.92~ | 17.68 vs 17.66~ | 0.789 vs 0.781~ | 0.729 vs 0.720~ | 0.634 vs 0.629~ |
| WideResNet CIFAR-100 | Dropout | 19.41 vs 19.27~ | 19.36 vs 19.11~ | 19.22 vs 18.88~ | 0.768 vs 0.751~ | 0.760 vs 0.738~ | 0.751 vs 0.723~ |
| SWA-Gaussian | 17.57 vs 17.79~ | 17.21 vs 17.27~ | 17.08 vs 16.89~ | 0.653 vs 0.658~ | 0.634 vs 0.635~ | 0.614 vs 0.610~ |
| Cyclic SGLD | 19.42 vs 18.83~ | 17.88 vs 17.50~ | 16.29 vs 16.21~ | 0.713 vs 0.696~ | 0.654 vs 0.641~ | 0.583 vs 0.580~ |
| Fast Geometric Ens. | 18.54 vs 18.39~ | 18.00 vs 17.84~ | 17.12 vs 16.93~ | 0.652 vs 0.649~ | 0.630 vs 0.624~ | 0.596 vs 0.592~ |
| Deep Ensembles | 16.55 vs 16.84~ | 16.17 vs 16.30~ | 15.77 vs 15.77~ | 0.623 vs 0.632~ | 0.595 vs 0.602~ | 0.571 vs 0.573~ |
| Single model | 19.31 vs 18.83~ | 19.31 vs 18.80~ | 19.31 vs 18.72~ | 0.797 vs 0.755~ | 0.797 vs 0.746~ | 0.797 vs 0.738~ |
| Variational Inf. (FFG) | 20.17 vs 20.12~ | 19.28 vs 19.20~ | 18.74 vs 18.54~ | 0.767 vs 0.766~ | 0.727 vs 0.724~ | 0.685 vs 0.679~ |
| KFAC-Laplace | 19.76 vs 19.21~ | 19.53 vs 19.03~ | 19.43 vs 18.93~ | 0.803 vs 0.764~ | 0.795 vs 0.757~ | 0.789 vs 0.747~ |
| Snapshot Ensembles | 18.20 vs 18.22~ | 17.12 vs 17.20~ | 16.07 vs 16.27~ | 0.678 vs 0.668~ | 0.633 vs 0.635~ | 0.582 vs 0.586~ |
+
+Table 4: Classification error and negative calibrated log-likelihood before vs. after test-time augmentation on CIFAR-10/100.
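
Test-time augmentation, as compared in this table, means averaging a model's predictive distribution over several augmented copies of each input before scoring. A sketch of the idea (the augmentation list and function names are illustrative):

```python
import numpy as np

def predict_with_tta(predict_fn, images, augment_fns):
    """Average class probabilities over augmented copies of each input.

    predict_fn  : maps a batch of inputs to class probabilities
    augment_fns : list of augmentations (e.g. flips, crops) -- illustrative
    """
    probs = np.mean([predict_fn(aug(images)) for aug in augment_fns], axis=0)
    return probs / probs.sum(axis=1, keepdims=True)  # renormalise
```

Averaging in probability space keeps the output a valid distribution, so the same error and calibrated log-likelihood metrics apply before and after augmentation.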
+
+| Model | Method | Deep ensemble equivalent score, 1 sample | 5 samples | 10 samples | 50 samples | 100 samples |
| VGG16 CIFAR-10 | Deep Ensembles | 1.0 | 5.0 | 10.0 | 50.0 | 100.0 |
| Snapshot Ensembles | 1.0 | 2.6 | 4.7 | 21.9 | 33.8 |
| Cyclic SGLD | 1.0 | 1.7 | 2.6 | 7.4 | 10.5 |
| SWA-Gaussian | 1.0 | 2.2 | 2.9 | 4.4 | 4.6 |
| Fast Geometric Ens. | 1.3 | 2.0 | 2.4 | 4.7 | 5.8 |
| Dropout | 1.0 | 1.0 | 1.0 | 1.1 | 1.1 |
| Variational Inf. (FFG) | 1.0 | 1.8 | 2.0 | 2.5 | 2.6 |
| KFAC-Laplace | 1.4 | 1.6 | 1.6 | 1.6 | 1.6 |
| Single model | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
| ResNet110 CIFAR-10 | Deep Ensembles | 1.0 | 5.0 | 10.0 | 48.0 | 94.6 |
| Snapshot Ensembles | 1.0 | 1.0 | 2.0 | 14.6 | 32.6 |
| Cyclic SGLD | 1.0 | 1.0 | 1.6 | 4.6 | 6.0 |
| SWA-Gaussian | 1.0 | 1.3 | 1.5 | 1.9 | 2.0 |
| Fast Geometric Ens. | 1.0 | 1.4 | 1.7 | 2.4 | 2.9 |
| Variational Inf. (FFG) | 1.0 | 1.0 | 1.2 | 1.4 | 1.5 |
| KFAC-Laplace | 1.0 | 1.0 | 1.0 | 1.0 | 1.1 |
| Single model | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| WideResNet CIFAR-10 | Deep Ensembles | 1.0 | 5.0 | 10.0 | 43.7 | 100.0 |
| Snapshot Ensembles | 1.0 | 1.1 | 2.3 | 11.3 | 17.2 |
| Cyclic SGLD | 1.0 | 1.0 | 1.8 | 4.8 | 6.7 |
| SWA-Gaussian | 1.0 | 1.8 | 1.9 | 2.5 | 2.7 |
| Fast Geometric Ens. | 1.0 | 1.6 | 1.8 | 2.5 | 2.9 |
| Variational Inf. (FFG) | 1.0 | 1.0 | 1.0 | 1.3 | 1.3 |
| KFAC-Laplace | 1.0 | 1.0 | 1.0 | 1.2 | 1.2 |
| Single model | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 |
| VGG16 CIFAR-100 | Deep Ensembles | 1.0 | 5.0 | 10.0 | 48.0 | 86.0 |
| Snapshot Ensembles | 1.0 | 1.3 | 2.5 | 30.6 | 100.0 |
| Cyclic SGLD | 1.0 | 1.0 | 1.6 | 4.0 | 4.1 |
| SWA-Gaussian | 1.0 | 1.8 | 2.5 | 3.1 | 3.3 |
| Fast Geometric Ens. | 1.0 | 1.2 | 1.7 | 2.7 | 3.2 |
| Dropout | 1.0 | 1.2 | 1.3 | 1.4 | 1.4 |
| Variational Inf. (FFG) | 1.0 | 1.0 | 1.0 | 1.3 | 1.4 |
| KFAC-Laplace | 1.0 | 1.0 | 1.0 | 1.2 | 1.2 |
| Single model | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| ResNet110 CIFAR-100 | Deep Ensembles | 1.0 | 5.0 | 10.0 | 50.0 | 100.0 |
| Snapshot Ensembles | 1.0 | 2.9 | 5.4 | 18.8 | 26.8 |
| Cyclic SGLD | 1.0 | 1.8 | 3.3 | 15.9 | 28.2 |
| SWA-Gaussian | 1.0 | 2.3 | 3.4 | 5.2 | 5.9 |
| Fast Geometric Ens. | 1.2 | 1.9 | 2.3 | 4.5 | 5.8 |
| Dropout | 1.0 | 1.0 | 1.0 | 1.0 | 1.1 |
| Variational Inf. (FFG) | 1.0 | 1.6 | 1.8 | 2.1 | 2.2 |
| KFAC-Laplace | 1.0 | 1.0 | 1.2 | 1.3 | 1.3 |
| Single model | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| ResNet164 CIFAR-100 | Deep Ensembles | 1.0 | 5.0 | 10.0 | 50.0 | 99.0 |
| Snapshot Ensembles | 1.0 | 1.3 | 1.9 | 4.7 | 6.2 |
| Cyclic SGLD | 1.0 | 1.3 | 2.0 | 7.0 | 12.2 |
| SWA-Gaussian | 1.0 | 1.7 | 2.2 | 3.3 | 3.6 |
| Fast Geometric Ens. | 1.5 | 2.6 | 3.0 | 4.2 | 4.7 |
| Variational Inf. (FFG) | 1.0 | 1.2 | 1.3 | 1.5 | 1.5 |
| KFAC-Laplace | 1.0 | 1.0 | 1.1 | 1.3 | 1.4 |
| Single model | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| WideResNet CIFAR-100 | Deep Ensembles | 1.0 | 5.0 | 10.0 | 50.0 | 100.0 |
| Snapshot Ensembles | 1.0 | 1.3 | 1.9 | 4.8 | 6.3 |
| Cyclic SGLD | 1.0 | 1.3 | 2.0 | 6.8 | 13.3 |
| SWA-Gaussian | 1.0 | 2.1 | 2.6 | 3.6 | 3.8 |
| Fast Geometric Ens. | 1.6 | 2.6 | 3.0 | 4.2 | 4.8 |
| Variational Inf. (FFG) | 1.0 | 1.2 | 1.4 | 1.6 | 1.6 |
| KFAC-Laplace | 1.0 | 1.1 | 1.3 | 1.5 | 1.5 |
| Single model | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| WideResNet CIFAR-100 | Deep Ensembles | 1.0 | 5.0 | 10.0 | 49.0 | 95.9 |
| Snapshot Ensembles | 1.0 | 2.5 | 4.2 | 16.2 | 20.0 |
| Cyclic SGLD | 1.0 | 1.9 | 3.2 | 13.6 | 18.8 |
| SWA-Gaussian | 1.0 | 3.2 | 4.2 | 5.7 | 6.0 |
| Fast Geometric Ens. | 1.3 | 3.2 | 4.4 | 7.5 | 9.5 |
| Dropout | 1.0 | 1.3 | 1.4 | 1.5 | 1.5 |
| Variational Inf. (FFG) | 1.0 | 1.3 | 1.7 | 2.2 | 2.4 |
| KFAC-Laplace | 1.0 | 1.0 | 1.0 | 1.1 | 1.1 |
| Single model | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+
+Table 5: Deep ensemble equivalent score for CIFAR-10/100.
+
+| Model | Method | Error (%), 1 sample | 5 samples | 10 samples | 50 samples | Neg. calibrated log-likelihood, 1 sample | 5 samples | 10 samples | 50 samples |
| ResNet50 | Fast Geometric Ens. | 23.71±0.00 | 23.61±0.00 | 23.56±0.00 | 23.28±0.00 | 0.929±0.000 | 0.921±0.000 | 0.916±0.000 | 0.904±0.000 |
| Deep Ensembles | 23.79±0.14 | 21.19±0.14 | 20.90±0.08 | 20.63±NA | 0.935±0.007 | 0.823±0.002 | 0.805±0.000 | 0.788±NA |
| Single model | 23.86±0.20 | 23.86±0.20 | 23.86±0.20 | 23.86±0.20 | 0.938±0.006 | 0.938±0.006 | 0.938±0.006 | 0.938±0.006 |
| Variational Inf. (FFG) | 24.50±0.06 | 23.82±0.03 | 23.77±0.04 | 23.67±0.00 | 0.957±0.001 | 0.927±0.000 | 0.923±0.001 | 0.920±0.000 |
| KFAC-Laplace | 25.01±0.49 | 24.19±0.29 | 23.93±0.20 | 23.86±0.16 | 0.988±0.022 | 0.948±0.013 | 0.939±0.011 | 0.934±0.008 |
| Snapshot Ensembles | 24.92±NA | 22.21±NA | 21.75±NA | 21.48±NA | 0.983±NA | 0.865±NA | 0.843±NA | 0.830±NA |
+
+Table 6: Classification error and negative calibrated log-likelihood for different numbers of samples on ImageNet.
+
+| Model | Method | Error (%), 5 samples | 10 samples | 50 samples | Neg. calibrated log-likelihood, 5 samples | 10 samples | 50 samples |
| ResNet50 | Fast Geometric Ens. | 23.61 vs 22.21↓ | 23.56 vs 21.37↓ | 23.28 vs 20.67↓ | 0.921 vs 0.894↓ | 0.916 vs 0.842↓ | 0.904 vs 0.793↓ |
| Deep Ensembles | 21.19 vs 21.20≈ | 20.90 vs 20.16↓ | 20.63 vs 19.39↓ | 0.823 vs 0.855↑ | 0.805 vs 0.793↓ | 0.788 vs 0.739↓ |
| Single model | 23.86 vs 22.39↓ | 23.86 vs 21.60↓ | 23.86 vs 21.06↓ | 0.938 vs 0.900↓ | 0.938 vs 0.851↓ | 0.938 vs 0.805↓ |
| Variational Inf. (FFG) | 23.82 vs 22.58↓ | 23.77 vs 21.74↓ | 23.67 vs 21.10↓ | 0.927 vs 0.905↓ | 0.923 vs 0.851↓ | 0.920 vs 0.805↓ |
| KFAC-Laplace | 24.19 vs 22.50↓ | 23.93 vs 21.67↓ | 23.86 vs 21.04↓ | 0.948 vs 0.906↓ | 0.939 vs 0.855↓ | 0.934 vs 0.809↓ |
| Snapshot Ensembles | 22.21 vs 21.99↓ | 21.75 vs 20.81↓ | 21.48 vs 19.86↓ | 0.865 vs 0.879↑ | 0.843 vs 0.815↓ | 0.830 vs 0.763↓ |
+
+Table 7: Classification error and negative calibrated log-likelihood before vs. after test-time augmentation on ImageNet.
+
+| Model | Method | Deep ensemble equivalent score, 1 sample | 5 samples | 10 samples | 25 samples | 50 samples |
| ResNet50 | Deep Ensembles | 1.0 | 5.0 | 10.0 | 25.0 | 50.0 |
| Snapshot Ensembles | 1.0 | 2.2 | 3.2 | 4.0 | 4.2 |
| Fast Geometric Ens. | 1.1 | 1.2 | 1.3 | 1.4 | 1.5 |
| Variational Inf. (FFG) | 1.0 | 1.1 | 1.2 | 1.2 | 1.2 |
| KFAC-Laplace | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| Single model | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
+
+Table 8: Deep ensemble equivalent score for ImageNet.
+
+
+Figure 14: Classification error and negative calibrated log-likelihood before vs. after test-time augmentation on CIFAR-10
+
+Figure 15: Classification error and negative calibrated log-likelihood before vs. after test-time augmentation on CIFAR-100
+
+
\ No newline at end of file
diff --git a/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/images.zip b/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..8ef3bc46eab9e92dcb73d3926253269ea1b59375
--- /dev/null
+++ b/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1bf326973a727ddc907c4404e9179cce3f64357ab8ee766ab4327cb55aa0b3f0
+size 2866764
diff --git a/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/layout.json b/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..fea22468fcfcca799729975b719366d6e6ca254e
--- /dev/null
+++ b/pitfallsofindomainuncertaintyestimationandensemblingindeeplearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:997d490991de2c03215cb296bfb768d3b325dedfa724caf19b711938d79f5f7a
+size 803723
diff --git a/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/55c5e6af-f2d6-40af-a79b-75fe0da54ee0_content_list.json b/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/55c5e6af-f2d6-40af-a79b-75fe0da54ee0_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..65dc9b75bb9dc4d93d439d5512f470d6141ae081
--- /dev/null
+++ b/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/55c5e6af-f2d6-40af-a79b-75fe0da54ee0_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:705d35a3ffbf6811bfb8b80f8b0d0e6b7909d4fb6de677dc7adee623c8a502f3
+size 60137
diff --git a/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/55c5e6af-f2d6-40af-a79b-75fe0da54ee0_model.json b/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/55c5e6af-f2d6-40af-a79b-75fe0da54ee0_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7e7edd4af4108f8ffe7f7d3b0f65e603e58cc4b7
--- /dev/null
+++ b/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/55c5e6af-f2d6-40af-a79b-75fe0da54ee0_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:260efe4e58fffaaafeddf32c5bb1a1936baa8e60858c9afafd928f145ab29efe
+size 70857
diff --git a/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/55c5e6af-f2d6-40af-a79b-75fe0da54ee0_origin.pdf b/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/55c5e6af-f2d6-40af-a79b-75fe0da54ee0_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..23efa7817014d258a606a8089a79cff2dcfe9415
--- /dev/null
+++ b/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/55c5e6af-f2d6-40af-a79b-75fe0da54ee0_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6bac5a4136701e740a3f867b887fa69a86e47748bd48380ffdb8c0c6e72d14d1
+size 678424
diff --git a/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/full.md b/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..6f9a51b952129bddd74ca2dd79b8edd8e52ddace
--- /dev/null
+++ b/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/full.md
@@ -0,0 +1,223 @@
+# PLAYING THE LOTTERY WITH REWARDS AND MULTIPLE LANGUAGES: LOTTERY TICKETS IN RL AND NLP
+
+Haonan Yu*, Sergey Edunov, Yuandong Tian, and Ari S. Morcos†
+
+Facebook AI Research
+
+haonanu@gmail.com, {edunov,yuandong,arimorcos}@fb.com
+
+# ABSTRACT
+
+The lottery ticket hypothesis proposes that over-parameterization of deep neural networks (DNNs) aids training by increasing the probability of a "lucky" sub-network initialization being present rather than by helping the optimization process (Frankle & Carbin, 2019). Intriguingly, this phenomenon suggests that initialization strategies for DNNs can be improved substantially, but the lottery ticket hypothesis has only previously been tested in the context of supervised learning for natural image tasks. Here, we evaluate whether "winning ticket" initializations exist in two different domains: natural language processing (NLP) and reinforcement learning (RL). For NLP, we examined both recurrent LSTM models and large-scale Transformer models (Vaswani et al., 2017). For RL, we analyzed a number of discrete-action space tasks, including both classic control and pixel control. Consistent with work in supervised image classification, we confirm that winning ticket initializations generally outperform parameter-matched random initializations, even at extreme pruning rates for both NLP and RL. Notably, we are able to find winning ticket initializations for Transformers which enable models one-third the size to achieve nearly equivalent performance. Together, these results suggest that the lottery ticket hypothesis is not restricted to supervised learning of natural images, but rather represents a broader phenomenon in DNNs.
+
+# 1 INTRODUCTION
+
+The lottery ticket phenomenon (Frankle & Carbin, 2019; Frankle et al., 2019; Zhou et al., 2019) occurs when small, sparse sub-networks can be found in over-parameterized deep neural networks (DNNs) which, when trained in isolation, can achieve similar or even greater performance than the original, highly over-parameterized network. This phenomenon suggests that over-parameterization in DNN training is beneficial primarily due to proper initialization rather than regularization during the training process itself (Allen-Zhu et al., 2019; 2018; Du & Lee, 2018; Du et al., 2019; Neyshabur et al., 2014; 2019).
+
+However, despite extensive experiments in Frankle & Carbin (2019) and Frankle et al. (2019), it remains unclear whether the lottery ticket phenomenon is an intrinsic feature of DNN behavior or whether it is dependent on other factors such as supervised learning, network architecture, specific tasks (e.g., image classification), the bias in the dataset, or artifacts from the optimization algorithm itself. As discussed in Frankle & Carbin (2019) and Liu et al. (2019), large learning rates severely damage the lottery ticket effect, and for larger models (such as VGG and ResNets) and datasets (e.g., ImageNet), heuristics like learning rate warmup (Frankle & Carbin, 2019) and late rewinding (Frankle et al., 2019) are needed to induce high performance and reliable winning tickets. Recent work has also questioned the effectiveness of the lottery ticket hypothesis, raising concerns about the generality of this phenomenon (Liu et al., 2019; Gale et al., 2019).
+
+In this work, we address the question of whether the lottery ticket phenomenon is merely an artifact of supervised image classification with feed-forward convolutional neural networks, or whether this phenomenon generalizes to other domains, architectural paradigms, and learning regimes (e.g., environments with reward signals). Many natural language processing (NLP) models feature complex gating mechanics paired with recurrent dynamics, either of which may significantly impact the optimization process and, consequently, the lottery ticket phenomenon. Furthermore, prior work has suggested that this phenomenon is not present in Transformer models (Gale et al., 2019), calling the broad applicability of lottery tickets into question. In reinforcement learning (RL), the data distribution shifts as the agent learns from often sparse reward signals, significantly modifying the optimization process and the resultant networks. Pre-trained feature extractors have proven successful in computer vision (Kornblith et al., 2019; Razavian et al., 2014; Yosinski et al., 2014), but in RL, agents often fail to generalize even to extremely similar situations (Raghu et al., 2018; Lanctot et al., 2017; Cobbe et al., 2018; Ruderman et al., 2019).
+
+To answer this question, we evaluate whether the lottery ticket hypothesis holds for NLP and RL, both of which are drastically different from traditional supervised image classification. To demonstrate the lottery ticket phenomenon, we ask whether sparsified subnetworks initialized as winning tickets outperform randomly initialized subnetworks at convergence. We note that, though desirable, we do not require that subnetworks match the performance of the full network, as originally stated in Frankle & Carbin (2019). We exclude this requirement because we are primarily interested in whether appropriate initialization impacts the performance of subnetwork training, consistent with the revised definition of the lottery ticket hypothesis in Frankle et al. (2019). For NLP, we evaluate language modeling on Wikitext-2 with LSTMs (Merity et al., 2017) and machine translation on the WMT'14 English-German translation task with Transformers (Vaswani et al., 2017). For RL, we evaluate both classic control problems and Atari games (Bellemare et al., 2013).
+
+Perhaps surprisingly, we found that lottery tickets are present in both NLP and RL tasks. In NLP, winning tickets were present both in recurrent LSTMs trained on language modeling and in Transformers (Vaswani et al., 2017) trained on a machine translation task, while in RL we observed winning tickets in both classic control problems and Atari games (though with high variance). Notably, we are able to find masks and initializations which enable a Transformer Big model to train from scratch to achieve $99\%$ of the BLEU score of the unpruned model on the Newstest'14 machine translation task while using only a third of the parameters. Together, these results demonstrate that the lottery ticket phenomenon is a general property of deep neural networks, and highlight their potential for practical applications.
+
+# 2 RELATED WORK
+
+Our work is primarily inspired by the lottery ticket hypothesis, first introduced in Frankle & Carbin (2019), which argues that over-parameterized neural networks contain small, sparse sub-networks (with as few as $0.1\%$ of the original network's parameters) which can achieve high performance when trained in isolation. Frankle et al. (2019) revised the lottery ticket hypothesis to include the notion of late rewinding, which was found to significantly improve performance for large-scale models and large-scale image classification datasets. In addition, the revised lottery ticket hypothesis relaxed the requirement that subnetworks match the performance of the full network to simply exceeding the performance of a randomly initialized subnetwork. For brevity, we will refer exclusively to the revised definition throughout the paper. However, both of these works solely focused on supervised image classification, leaving it unclear whether the lottery ticket phenomenon is present in other domains and learning paradigms.
+
+Recent work (Liu et al., 2019) challenged the lottery ticket hypothesis, demonstrating that in structured pruning settings, random sub-networks were able to match winning ticket performance. Gale et al. (2019) also explored the lottery ticket hypothesis in the context of ResNets and Transformers. Notably, they found that random sub-networks could achieve similar performance to that of winning ticket networks for both model classes. However, they did not use iterative pruning or late rewinding, both of which have been found to significantly improve winning ticket performance (Frankle & Carbin, 2019; Frankle et al., 2019).
+
+More broadly, pruning methods for deep neural networks have been explored extensively (Han et al., 2015). Following Frankle et al. (2019) and Frankle & Carbin (2019), we use magnitude pruning in this work, in which the smallest magnitude weights are pruned first (Han et al., 2015). To determine optimal pruning performance, Molchanov et al. (2017b) greedily prune weights to determine an oracle ranking. Also, Ayinde et al. (2019) and Qin et al. (2019) have attempted to rank channels by redundancy and preferentially prune redundant filters, and Molchanov et al. (2017a) used variational methods to prune models. However, all of these works only evaluated these approaches for supervised image classification.
+
+# 3 APPROACH
+
+# 3.1 GENERATING LOTTERY TICKETS
+
+Pruning methods In our experiments, we use both one-shot and iterative pruning to find winning ticket initializations. In one-shot pruning, the full network is trained to convergence, and then a given fraction of parameters are pruned, with lower magnitude parameters pruned first. To evaluate winning ticket performance, the remaining weights are reset to their initial values, and the sparsified model is retrained to convergence. However, one-shot pruning is very susceptible to noise in the pruning process, and as a result, it has widely been observed that one-shot pruning under-performs iterative pruning methods (Frankle & Carbin, 2019; Han et al., 2015; Liu et al., 2019).
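As a concrete illustration, here is a minimal numpy sketch of the global magnitude-pruning step underlying one-shot pruning; the function name and toy weights are ours, not the paper's code:

```python
import numpy as np

def one_shot_prune(weights, fraction):
    """Globally prune the lowest-magnitude `fraction` of all parameters.

    `weights` maps layer names to trained weight arrays. Magnitudes from
    all layers are pooled before thresholding, so per-layer pruned
    fractions may differ. Returns boolean masks (True = keep).
    """
    pooled = np.concatenate([np.abs(w).ravel() for w in weights.values()])
    k = int(fraction * pooled.size)
    if k == 0:
        return {name: np.ones_like(w, dtype=bool) for name, w in weights.items()}
    threshold = np.sort(pooled)[k - 1]  # k-th smallest pooled magnitude
    return {name: np.abs(w) > threshold for name, w in weights.items()}

# Prune 50% of the pooled weights of a toy two-layer network.
weights = {"fc1": np.array([0.1, -2.0, 0.3, 1.5]), "fc2": np.array([-0.05, 0.9])}
masks = one_shot_prune(weights, 0.5)
print(masks["fc1"].tolist(), masks["fc2"].tolist())
# [False, True, False, True] [False, True]
```

To evaluate the resulting ticket, the kept weights would be reset to their initial (or rewound) values and the masked network retrained.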
+
+In iterative pruning (Frankle & Carbin, 2019; Han et al., 2015), alternating cycles of training models from scratch and pruning are performed. At each pruning iteration, a fixed, small fraction of the remaining weights are pruned, followed by re-initialization to a winning ticket and another cycle of training and pruning. More formally, the pruning at iteration $k + 1$ is performed on the trained weights of the winning ticket found at iteration $k$ . At iteration $k$ with an iterative pruning rate $p$ , the fraction of weights pruned is:
+
+$$
+r_{k} = 1 - (1 - p)^{k}, \quad 1 \leq k \leq 20
+$$
+
+We therefore perform iterative pruning for all our experiments unless otherwise noted, with an iterative pruning rate $p = 0.2$ . For our RL experiments, we perform 20 pruning iterations. Pruning was always performed globally, such that all weights (including biases) of different layers were pooled and their magnitudes ordered for pruning. As a result, the fraction of parameters pruned across layers may vary given a total pruning fraction.
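For intuition, the cumulative pruned fraction implied by this schedule at $p = 0.2$ can be tabulated directly:

```python
# r_k = 1 - (1 - p)**k: each round removes a fraction p of the weights
# that remain, so sparsity compounds geometrically.
p = 0.2
for k in (1, 5, 10, 20):
    print(k, round(1 - (1 - p) ** k, 3))
# After 20 rounds, roughly 98.8% of the weights have been removed.
```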
+
+Late rewinding In the original incarnation of the lottery ticket hypothesis (Frankle & Carbin, 2019), winning tickets were reset to their values at initialization. However, Frankle et al. (2019) found that resetting winning tickets to their values at an iteration early in training resulted in dramatic improvements in winning ticket performance on large-scale image datasets, even when weights were reset to their values only a few iterations into training. Late rewinding can therefore be defined as resetting winning tickets to their weights at iteration $j$ in training, with the original lottery ticket paradigm taking $j = 0$. Unless otherwise noted, we use late rewinding throughout, with a late rewinding value of 1 epoch for all RL experiments. We also compare late rewinding with normal resetting for NLP in Section 4.1 and on several representative RL games in Section 4.2.
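The interaction between iterative pruning and late rewinding can be sketched with a toy numpy loop; the "training" here is a seeded random perturbation standing in for real SGD, so only the bookkeeping (snapshot after one epoch, rewind survivors to it each round) mirrors the procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
w_init = rng.normal(size=100)            # weights at initialization (j = 0)
mask = np.ones_like(w_init, dtype=bool)

def train_one_epoch(w):
    # Stand-in for one epoch of real SGD: just perturbs the weights.
    return w + rng.normal(scale=0.1, size=w.shape)

# Train the full network for one epoch and snapshot it: late rewinding
# resets survivors to these epoch-1 weights rather than to `w_init`.
w_rewind = train_one_epoch(w_init)

for _ in range(3):                       # three pruning iterations
    trained = w_rewind * mask            # rewind survivors, then finish training
    for _ in range(9):
        trained = train_one_epoch(trained) * mask
    surviving = np.abs(trained[mask])
    thresh = np.quantile(surviving, 0.2) # prune 20% of *remaining* weights
    mask &= np.abs(trained) > thresh

print(int(mask.sum()))                   # ~0.8**3 of the 100 weights survive
```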
+
+# 3.2 NATURAL LANGUAGE PROCESSING
+
+To test the lottery ticket hypothesis in NLP, we use two broad model and task paradigms: two-layer LSTMs for language modeling on Wikitext-2 (Merity et al., 2017) and Transformer Base and Transformer Big models (Vaswani et al., 2017) on the WMT'14 En-De news translation task.
+
+Language modeling using LSTMs For the language modeling task, we trained an LSTM model with a hidden state size of 650. It contained a dropout layer between the two RNN layers with a dropout probability of 0.5. The LSTM received word embeddings of size 650. For training, we used truncated Backpropagation Through Time (truncated BPTT) with a sequence length of 50. The training batch size was set to 30, and models were optimized using Adam with a learning rate of $10^{-3}$ and $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , $\epsilon = 10^{-3}$ .
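The truncated BPTT data pipeline can be sketched as follows; the helper below is a simplified illustration of our own (with a toy batch size and sequence length), not the training code itself:

```python
def batchify(tokens, batch_size, seq_len):
    """Cut one token stream into `batch_size` parallel streams and yield
    (input, target) chunks of `seq_len` tokens for truncated BPTT; the
    recurrent hidden state is carried across chunks, but gradients are
    truncated at chunk boundaries."""
    n = len(tokens) // batch_size
    streams = [tokens[i * n:(i + 1) * n] for i in range(batch_size)]
    for t in range(0, n - 1, seq_len):
        x = [s[t:t + seq_len] for s in streams]
        y = [s[t + 1:t + seq_len + 1] for s in streams]  # next-token targets
        yield x, y

# Toy stream of 12 tokens, 2 parallel streams, BPTT window of 3.
chunks = list(batchify(list(range(12)), batch_size=2, seq_len=3))
print(chunks[0][0])  # [[0, 1, 2], [6, 7, 8]]
print(chunks[0][1])  # [[1, 2, 3], [7, 8, 9]]
```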
+
+As in the RL experiments, we use global iterative pruning with an iterative pruning rate of 0.2 and 20 total pruning iterations. We also employ late rewinding, where the initial weights of a winning ticket were set to the weights after the first epoch of training the full network. For ticket evaluation, we trained the model for 10 epochs on the training set, after which we computed the log perplexity on the test set. We also perform two ablation studies, without late rewinding and using one-shot pruning, respectively.
+
+Machine translation using transformers For the machine translation task, we use the FAIRSEQ framework$^{1}$ (Ott et al., 2019), following the setup described in Ott et al. (2018) to train a Transformer Base model on the pre-processed dataset from Vaswani et al. (2017). We train models for 50,000 updates and apply checkpoint averaging. We report case-sensitive tokenized BLEU computed with multi-bleu.pl$^{2}$ on Newstest2014$^{3}$.
+
+# 3.3 REINFORCEMENT LEARNING
+
+# 3.3.1 SIMULATED ENVIRONMENTS
+
+For our RL experiments, we use two types of discrete-action games: classic control and pixel control. For classic control, we evaluated 3 OpenAI Gym $^4$ environments that have vectors of real numbers as network inputs. These simple experiments mainly serve to verify whether winning tickets exist in networks that solely consist of fully-connected (FC) layers for RL problems. For pixel control, we evaluated 9 Atari (Bellemare et al., 2013) games. A summary of all the games along with the corresponding networks is provided in Table A1. Classic control games were trained using the FLARE framework $^5$ and Atari games were trained using the ELF framework $^6$ (Tian et al., 2017).
+
+# 3.3.2 TICKET EVALUATION
+
+To evaluate a ticket (pruned sub-network), we train the ticket with its corresponding initial weights to play a game for $N$ epochs. Here, an epoch is defined as every $M$ game episodes or every $M$ training batches, depending on the game type. At the end of training, we compute the average episodic reward over the last $L$ game episodes. This average reward, which we call the ticket reward, indicates the final performance of the ticket playing a game by sampling actions from its trained policy. For each game, we plot ticket reward curves for both winning and random tickets as the fraction of weights pruned increases. To evaluate the impact of random seed on our results, we repeated the iterative pruning process three times for every game and plot the mean ± standard deviation for all results.
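The ticket reward defined above reduces to a one-liner; the episode rewards below are made up purely for illustration:

```python
def ticket_reward(episode_rewards, last_l):
    """Average episodic reward over the final `last_l` training episodes."""
    tail = episode_rewards[-last_l:]
    return sum(tail) / len(tail)

# Early training is noisy; only the last L = 3 episodes count.
print(ticket_reward([10, 50, 90, 100, 110], last_l=3))  # 100.0
```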
+
+Classic control All three games were trained in the FLARE framework with 32 game threads running in parallel, and each thread gets blocked every 4 time steps for training. Thus a training batch contains $32 \times 4 = 128$ time steps. Immediate rewards are divided by 100. For optimization, we use RMSprop with a learning rate of $10^{-4}$ and $\alpha = 0.99$ , $\epsilon = 10^{-8}$ .
+
+Pixel control All 9 Atari games are trained using a common ELF configuration with all RL hyperparameters being shared across games (see Table A1 for our choices of $N$ and $M$ ). Specifically, each game has 1024 game threads running in parallel, and each thread gets blocked every 6 time steps for training. For each training batch, the trainer samples 128 time steps from the common pool. The policy entropy cost for exploration is weighted by 0.01. We clip both immediate rewards and advantages to $[-1, +1]$ . Because the training is asynchronous and off-policy, we impose an importance factor which is the ratio of action probability given by the current policy to that from the old policy. This ratio is clamped at 1.5 to stabilize training. For optimization, we use Adam with a learning rate of $10^{-3}$ and $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , $\epsilon = 10^{-3}$ .
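A small numpy sketch of the clipping and importance weighting described above; the function name is ours, and we assume the importance ratio is clamped only from above at 1.5:

```python
import numpy as np

def corrected_advantage(pi_new, pi_old, rewards, advantages, max_ratio=1.5):
    """Clip immediate rewards and advantages to [-1, +1], then weight the
    advantages by a clamped importance ratio pi_new/pi_old to correct for
    asynchronous, off-policy updates (illustrative sketch)."""
    rewards = np.clip(rewards, -1.0, 1.0)
    advantages = np.clip(advantages, -1.0, 1.0)
    ratio = np.minimum(pi_new / pi_old, max_ratio)  # clamp at 1.5
    return rewards, ratio * advantages

r, adv = corrected_advantage(
    pi_new=np.array([0.9, 0.2]),
    pi_old=np.array([0.3, 0.4]),
    rewards=np.array([5.0, -0.5]),
    advantages=np.array([2.0, -0.5]),
)
print(r.tolist(), adv.tolist())  # [1.0, -0.5] [1.5, -0.25]
```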
+
+
+Figure 1: Performance of winning ticket initializations for LSTM models trained on Wikitext-2.
+
+# 4 RESULTS
+
+# 4.1 NATURAL LANGUAGE PROCESSING
+
+In this section, we investigate whether winning tickets outperform random tickets in the context of NLP. In particular, we focus on language modeling with a recurrent LSTM and machine translation using two variants of the Transformer model (Vaswani et al., 2017).
+
+# 4.1.1 LANGUAGE MODELING WITH LSTMS
+
+We first investigate whether winning tickets exist in a two-layer LSTM model trained to perform a language modeling task on the Wikitext-2 dataset. Encouragingly, we found that winning tickets with late rewinding significantly outperformed random tickets in this task at all pruning levels and remained performant even at high pruning rates, with as much as $90\%$ of parameters removable without a noticeable increase in log perplexity (Figure 1).
+
+To measure the impact of iterative pruning and late rewinding, we performed ablation studies for these two properties. Interestingly, removing late rewinding only slightly damaged performance and primarily impacted intermediate pruning levels, suggesting that it is only partially necessary for language modeling with LSTMs. Iterative pruning, however, was essential: without it, performance plummeted, reaching values worse than random tickets once $80\%$ of parameters had been pruned. Together, these results both validate the lottery ticket hypothesis in language modeling with LSTMs and demonstrate the impact of iterative pruning and late rewinding in this setting.
+
+# 4.1.2 MACHINE TRANSLATION WITH TRANSFORMERS
+
+We next evaluate whether winning tickets are present in Transformer models trained on machine translation. Our baseline machine translation model, Transformer Base, achieves a BLEU score of 27.6 on the Newstest2014 test set (compared to 27.3 in Vaswani et al. (2017)). We perform global iterative pruning with and without late rewinding, rewinding to the parameters of the baseline model after 1000 updates. Consistent with our results on language modeling with LSTMs, winning tickets outperform random tickets in Transformer models (Figure 2 left). Additionally, we again found that iterative pruning and late rewinding significantly improved performance, with iterative pruning again having a larger impact than late rewinding. The necessity of these modifications explains why our results differ from those of Gale et al. (2019), who used only one-shot pruning without late rewinding.
+
+$^{1}$ https://github.com/pytorch/fairseq
+$^{2}$ https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl
+$^{3}$ https://www.statmt.org/wmt14/translation-task.html
+$^{4}$ https://gym.openai.com/
+$^{5}$ https://github.com/idrlr/flare
+$^{6}$ https://github.com/pytorch/ELF
+
+
+Figure 2: Winning ticket initialization performance for Transformer Base models trained on machine translation.
+
+Figure 3: Winning ticket initialization performance for Transformer Big models trained on machine translation.
+
+We also evaluated a version of the Transformer Base model in which only Transformer layer weights (attention and fully connected layers) were pruned, but embeddings were left intact (Figure 2 right). Results in this setting were noticeably different from when we pruned all weights. First, the random ticket performance drops off at a much faster rate than in the full pruning setting. This suggests that, for random initializations, a large fraction of embedding weights can be pruned without damaging network performance, but very few transformer layer weights can be pruned. Second, and in stark contrast to the random ticket case, we observed that winning ticket performance was remarkably robust to pruning of only the transformer layer weights, with a roughly linear drop in BLEU score.
+
+To determine whether the lottery ticket phenomenon scales even further, we trained a Transformer Big model with 210M parameters, which reaches a BLEU score of 29.3 (compared to 28.4 in Vaswani et al. (2017)). Here, we again observe that winning tickets significantly outperform random tickets (Figure 3). Notably, we found that with iterative pruning and late rewinding, models trained from scratch reach $99\%$ of the unpruned model's performance with only a third of the weights (28.9 BLEU at $67\%$ pruned vs. 29.2 BLEU unpruned). This result demonstrates the practical implications of the lottery ticket phenomenon, as current state-of-the-art Transformer-based systems are often too large to deploy and are highly expensive to train, though sparse matrix multiplication libraries would be necessary to realize this gain.
+
+# 4.2 REINFORCEMENT LEARNING
+
+# 4.2.1 CLASSIC CONTROL
+
+To begin our investigation of the lottery ticket hypothesis in reinforcement learning, we evaluated three simple classic control games: Cartpole-v0, Acrobot-v1, and LunarLander-v2. As discussed in Section 3.3.1 and Table A1, we used simple fully-connected models with three hidden layers. Encouragingly, and consistent with supervised image classification results, we found that winning tickets successfully outperformed random tickets in classic control environments (Figure 4). This
+
+
+Figure 4: Winning ticket performance on classic control games. Each curve plots the mean $\pm$ standard deviation of three independent iterative pruning processes for each game.
+
+Figure 5: Reward curves of WTs (blue) and RTs (red) on Atari. Shaded error bars represent mean $\pm$ standard deviation across runs and the gray curve represents performance of the unpruned network.
+
+result suggests that the lottery ticket phenomenon is not merely an artifact of supervised image classification, but occurs in RL paradigms as well.
+
+# 4.2.2 ATARI GAMES
+
+However, in the original lottery ticket study (Frankle & Carbin, 2019), winning tickets were substantially easier to find in simple, fully-connected models trained on simple datasets (e.g., MNIST) than in more complex models trained on larger datasets (e.g. ResNets on CIFAR and ImageNet). We therefore asked whether winning tickets exist in convolutional networks trained on Atari games as
+
+
+Figure 6: Ablation studies of several classic control games on the effects of late rewinding and iterative pruning. Shaded error bars represent mean $\pm$ standard deviation across runs and the gray curve represents performance of the unpruned network.
+
+well. We found that the impact of winning tickets varied substantially across Atari games (Figure 5), with some games, such as Assault, Seaquest, and Berzerk, benefiting significantly from winning ticket initializations, while other games, such as Breakout and Centipede, benefited only slightly. Notably, winning ticket initializations increased reward for both Berzerk and Qbert. Interestingly, one game, Krull, saw no such benefit: both winning and random tickets performed well even at the most extreme pruning fractions, suggesting that Krull may be so over-parameterized that we were unable to reach the regime in which winning ticket differences emerge.
+
+One particularly interesting case is that of Kangaroo. Because we used the same hyperparameter settings for all games, the initial, unpruned Kangaroo models failed to converge to high rewards (typical rewards on Kangaroo for converged models are in the several thousands). Surprisingly, however, winning ticket initializations substantially improved performance over random tickets (though these models were still far from optimal on this task), enabling some learning where none was previously possible. Together, these results suggest that while beneficial winning ticket initializations can be found for some Atari games, winning ticket initializations for other games may not exist or may be more difficult to find.
+
+We also observed that the shape of the pruning curves for random tickets varied substantially across Atari games. For example, some games, such as Breakout and Space Invaders, were extremely sensitive to pruning, with performance dropping almost immediately, while in other games, such as Berzerk, Centipede, and Krull, performance actually increased steadily over the early pruning iterations. This result suggests that the level of over-parameterization varies dramatically across Atari games and that "one size fits all" models may have subtle impacts on performance depending on their level of over-parameterization.
+
+To measure the impact of late rewinding and iterative pruning on the performance of winning ticket initializations in RL, we performed ablation studies on six representative games from classic control and Atari: CartPole-v0, Acrobot-v1, LunarLander-v2, Assault, Breakout, and Seaquest. For all ablation experiments, we leave all training parameters (configuration, hyperparameters, optimizer, etc.) fixed except for those specified. For both classic control (Figure 6) and Atari (Figure 7), we observed that, consistent with previous results in supervised image classification (Frankle et al., 2019; Frankle & Carbin, 2019), both late rewinding and iterative pruning improve winning ticket performance, though interestingly, the degree to which these modifications improved performance varied significantly across games.
+
+# 5 CONCLUSION
+
+In this study, we investigated whether the lottery ticket hypothesis holds in regimes beyond simple supervised image classification by analyzing both NLP and RL domains. For NLP, we found that winning ticket initializations beat random tickets both for recurrent LSTM models trained on language modeling and for Transformer models trained on machine translation. Notably, we found high-performing Transformer Big models even at high pruning rates $(\geq 67\%)$. For RL, we found that winning ticket initializations substantially outperformed random tickets on classic control problems and for many, but not all, Atari games. Together, these results suggest that the lottery ticket phenomenon is not
+
+
+Figure 7: Ablation studies of several pixel control games on the effects of late rewinding and iterative pruning. Shaded error bars represent mean $\pm$ standard deviation across runs and the gray curve represents performance of the unpruned network. "lr" means late rewinding.
+
+restricted to supervised image classification, but rather represents a general feature of deep neural network training.
+
+# REFERENCES
+
+Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and generalization in overparameterized neural networks, going beyond two layers. November 2018. URL http://arxiv.org/abs/1811.04918.
+Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 242-252, Long Beach, California, USA, 09-15 Jun 2019. PMLR. URL http://proceedings.mlr.press/v97/allen-zhu19a.html.
+Babajide O Ayinde, Tamer Inanc, and Jacek M Zurada. Redundant feature pruning for accelerated inference in deep neural networks. Neural networks: the official journal of the International Neural Network Society, May 2019. ISSN 0893-6080. doi: 10.1016/j.neunet.2019.04.021. URL http://www.sciencedirect.com/science/article/pii/S0893608019301273.
+M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253-279, jun 2013.
+Karl Cobbe, Oleg Klimov, Chris Hesse, Taehoon Kim, and John Schulman. Quantifying generalization in reinforcement learning. December 2018. URL http://arxiv.org/abs/1812.02341.
+Simon S Du and Jason D Lee. On the power of over-parametrization in neural networks with quadratic activation. In International Conference in Machine Learning (ICML), 2018.
+Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. In International Conference in Learning Representations (ICLR), 2019.
+Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations, 2019. URL http://arxiv.org/abs/1803.03635.
+Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M Roy, and Michael Carbin. The lottery ticket hypothesis at scale. March 2019. URL http://arxiv.org/abs/1903.01611.
+Trevor Gale, Erich Elsen, and Sara Hooker. The state of sparsity in deep neural networks. February 2019. URL http://arxiv.org/abs/1902.09574.
+Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pp. 1135-1143, 2015.
+
+Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better ImageNet models transfer better? In Computer Vision and Pattern Recognition (CVPR), 2019.
+Marc Lanctot, Vinicius Zambaldi, Audrūnas Gruslys, Angeliki Lazaridou, Karl Tuyls, Julien Pérolat, David Silver, and Thore Graepel. A unified game-theoretic approach to multiagent reinforcement learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pp. 4193-4206, USA, 2017. Curran Associates Inc. ISBN 978-1-5108-6096-4. URL http://dl.acm.org/citation.cfm?id=3294996.3295174.
+Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. In International Conference on Learning Representations, 2019. URL http://arxiv.org/abs/1810.05270.
+Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In International Conference in Learning Representations (ICLR), 2017.
+Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2498-2507. JMLR.org, 2017a.
+Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Pruning convolutional neural networks for resource efficient inference. In International Conference in Learning Representations (ICLR), 2017b.
+Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. In International Conference in Learning Representations (ICLR) Workshop Track, 2014.
+Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. The role of over-parametrization in generalization of neural networks. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=BygfghAcYX&noteId=BygfghAcYX.
+Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine translation. In Proc. of WMT, 2018.
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019.
+Zhuwei Qin, Fuxun Yu, Chenchen Liu, and Xiang Chen. Interpretable convolutional filter pruning, 2019. URL https://openreview.net/forum?id=BJ4BVhRcYX.
+Maithra Raghu, Alex Irpan, Jacob Andreas, Robert Kleinberg, Quoc V Le, and Jon Kleinberg. Can deep reinforcement learning solve Erdos-Selfridge-Spencer games? In International Conference in Machine Learning (ICML), 2018.
+Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. Cnn features off-the-shelf: An astounding baseline for recognition. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPRW '14, pp. 512-519, Washington, DC, USA, 2014. IEEE Computer Society. ISBN 978-1-4799-4308-1. doi: 10.1109/CVPRW.2014.131. URL http://dx.doi.org/10.1109/CVPRW.2014.131.
+Avraham Ruderman, Richard Everett, Bristy Sikder, Hubert Soyer, Jonathan Uesato, Ananya Kumar, Charlie Beattie, and Pushmeet Kohli. Uncovering surprising behaviors in reinforcement learning via worst-case analysis, 2019. URL https://openreview.net/forum?id=SkgZNnR5tX.
+Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, and C. Lawrence Zitnick. Elf: An extensive, lightweight and flexible research platform for real-time strategy games. Advances in Neural Information Processing Systems (NIPS), 2017.
+
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 5998-6008. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf.
+Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 27, pp. 3320-3328. Curran Associates, Inc., 2014. URL http://papers.nips.cc/paper/5347-how-transferable-are-features-in-deep-neural-networks.pdf.
+Hattie Zhou, Janice Lan, Rosanne Liu, and Jason Yosinski. Deconstructing lottery tickets: Zeros, signs, and the supermask. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
+
+A APPENDIX
+
+| Type | Name | Network specs | Algorithm | N | M | L |
+| --- | --- | --- | --- | --- | --- | --- |
+| Classic | CartPole-v0 | MLP(128-128-128-out) | A2C | 20 | 160 (games) | 100 |
+| Classic | Acrobot-v1 | MLP(256-256-256-out) | A2C | 20 | 320 (games) | 100 |
+| Classic | LunarLander-v2 | MLP(256-256-256-out) | A2C | 20 | 640 (games) | 100 |
+| Pixel | Assault, Berzerk, Breakout, Centipede, Kangaroo, Krull, Qbert, Seaquest, Space Invaders | Conv(5,64,1,2)-MaxPool(2)-Conv(5,64,1,2)-MaxPool(2)-Conv(3,64,1,1)-MaxPool(2)-Conv(3,64,1,1)-MaxPool(2)-MLP(1920-512-512-out) | A2C | 25 | 1000 (batches) | 1024 |
+
+Table A1: A summary of the games in our RL experiments. $\mathrm{Conv}(w,x,y,z)$ represents a convolution layer of filter size $w$ , channel number $x$ , stride $y$ , and padding $z$ , respectively. All the layer activations are ReLUs. See Sec. 3.3.2 for the meaning of $M$ , $N$ and $L$ .
\ No newline at end of file
diff --git a/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/images.zip b/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..dfef790a85b6b9ab5c780a666a04063a10b0c8a1
--- /dev/null
+++ b/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da8b07461db16241b02fa4ac77884f030be934f7a4655511164469e45df35a86
+size 469158
diff --git a/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/layout.json b/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2d52f295bc7fb7f8e9333896ed0c75861322b479
--- /dev/null
+++ b/playingthelotterywithrewardsandmultiplelanguageslotteryticketsinrlandnlp/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:468f4707df4935c154c5398159909ad66181be32cb7491724ef252e39b13a933
+size 287448
diff --git a/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/b8d1b994-561c-4b73-9b39-7c2c3e9568cb_content_list.json b/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/b8d1b994-561c-4b73-9b39-7c2c3e9568cb_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..af2d24dcbc5005b3e789bc258fb1a498515b03c2
--- /dev/null
+++ b/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/b8d1b994-561c-4b73-9b39-7c2c3e9568cb_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:227eb134cade0bf3270afdeb2e0d23b83165e936cfd479737ba43ef37164d007
+size 221709
diff --git a/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/b8d1b994-561c-4b73-9b39-7c2c3e9568cb_model.json b/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/b8d1b994-561c-4b73-9b39-7c2c3e9568cb_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..33f554c0d93347c682a15190636cf7b2055c1db4
--- /dev/null
+++ b/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/b8d1b994-561c-4b73-9b39-7c2c3e9568cb_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9bc2b9e4c1e63340827923e209b8a59d0604969cfc699d17a010588002002717
+size 251722
diff --git a/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/b8d1b994-561c-4b73-9b39-7c2c3e9568cb_origin.pdf b/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/b8d1b994-561c-4b73-9b39-7c2c3e9568cb_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a467fff321b01512eb4df7117037d679b992b307
--- /dev/null
+++ b/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/b8d1b994-561c-4b73-9b39-7c2c3e9568cb_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b1867ef5aeb754fba2aa9cc3435ff3d7b303a79f2c002b22703a452ffe30d7dc
+size 509076
diff --git a/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/full.md b/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..bf8454837fda5fc6fd934c411af7e93b581b19b9
--- /dev/null
+++ b/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/full.md
@@ -0,0 +1,595 @@
+# PLUG AND PLAY LANGUAGE MODELS: A SIMPLE APPROACH TO CONTROLLED TEXT GENERATION
+
+Sumanth Dathathri* (CMS, Caltech), Andrea Madotto* (HKUST), Janice Lan (Uber AI), Jane Hung (Uber AI), Eric Frank (Uber AI), Piero Molino (Uber AI), Jason Yosinski† (Uber AI), Rosanne Liu (Uber AI)
+
+datharris@gmail.com, amadotto@connect.ust.hk
+
+{janlan, jane.hung, mysterefrank, piero, yosinski, rosanne}@uber.com
+
+# ABSTRACT
+
+Large transformer-based language models (LMs) trained on huge text corpora have shown unparalleled generation capabilities. However, controlling attributes of the generated language (e.g. switching topic or sentiment) is difficult without modifying the model architecture or fine-tuning on attribute-specific data and entailing the significant cost of retraining. We propose a simple alternative: the Plug and Play Language Model (PPLM) for controllable language generation, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM. In the canonical scenario we present, the attribute models are simple classifiers consisting of a user-specified bag of words or a single learned layer with 100,000 times fewer parameters than the LM. Sampling entails a forward and backward pass in which gradients from the attribute model push the LM's hidden activations and thus guide the generation. Model samples demonstrate control over a range of topics and sentiment styles, and extensive automated and human annotated evaluations show attribute alignment and fluency. PPLMs are flexible in that any combination of differentiable attribute models may be used to steer text generation, which will allow for diverse and creative applications beyond the examples given in this paper.
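The core sampling mechanism, gradient ascent on an attribute classifier's log-likelihood with respect to hidden activations, can be caricatured in a few lines of numpy. The toy logistic attribute model, step size, and function name below are ours; the actual PPLM updates the transformer's key/value history rather than a single vector:

```python
import numpy as np

def steer_hidden(h, w_attr, step=0.5, n_steps=3):
    """Nudge hidden activation `h` to raise p(attribute | h) under a toy
    logistic attribute model with weights `w_attr` (illustrative sketch
    of the forward/backward steering pass)."""
    for _ in range(n_steps):
        p = 1.0 / (1.0 + np.exp(-h @ w_attr))  # attribute probability
        # For a logistic model, d log p / dh = (1 - p) * w_attr.
        h = h + step * (1.0 - p) * w_attr
    return h

h = np.zeros(4)                        # pre-steering hidden activation
w = np.array([1.0, -1.0, 0.5, 0.0])    # toy attribute-classifier weights
h2 = steer_hidden(h, w)
p_before = 1 / (1 + np.exp(-h @ w))
p_after = 1 / (1 + np.exp(-h2 @ w))
print(p_before < p_after)  # True: steering raises the attribute probability
```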
+
+# 1 INTRODUCTION
+
+The Transformer architecture (Vaswani et al., 2017) has enabled large-scale language models (LMs) trained on a huge amount of data (Radford et al., 2019; Dai et al., 2019b; Radford et al., 2018b) to greatly improve the state-of-the-art on natural language processing tasks. These models are used to extract contextualized word embeddings for transfer learning purposes (Devlin et al., 2019) and as natural language generators. The latter can leverage large amounts of unannotated data and a simple log-likelihood training objective. However, once such models are trained, controlling attributes of generated text becomes difficult without modifying the model architecture to allow for extra input attributes or fine-tuning with attribute-specific data (Keskar et al., 2019; Ziegler et al., 2019).
+
+Table 1: The PPLM employs a pre-trained language model (LM) without any changes to the model parameters and can generate text with controlled attributes such as topic and sentiment. We demonstrate control with two tiny and easy to construct attribute models: a bag of words (BoW) related to a topic and a linear discriminator trained on top of LM latent representations to control sentiment. The underlined prefix is what the LM is conditioned on to generate a passage of text (e.g. The potato). The controlled attributes are colored and bracketed (e.g. [Science]), and words in the BoW that are directly optimized for are highlighted brightly (e.g. research). The softer highlights correspond to words related to the attribute, but not directly optimized for during the control process (e.g. health).
+
+[-] The potato and cauliflower are both in season to make combo breads, mounds, or pads. For an added challenge, try some garlic mashed potatoes.
+
+[Negative] The potato is a pretty bad idea. It can make you fat, it can cause you to have a terrible immune system, and it can even kill you...
+
+[Positive] The potato chip recipe you asked for! We love making these, and I've been doing so for years. I've always had a hard time keeping a recipe secret. I think it's the way our kids love to eat them - so many little ones.
+
+[Science] The potato was once thought to have no health problems and has been promoted as a nutritious food source since the mid-1800s, but recent reports indicate that it has many harmful health issues. In fact, researchers from Johns Hopkins University...
+
+[Politics] [Positive] To conclude this series of articles, I will present three of the most popular and influential works on this topic. The first article deals with the role of women's political participation in building a political system that is representative of the will of the people.
+
+[Politics] [Negative] To conclude, the most significant and lasting damage from the economic crisis in 2008 was that many governments, including those in the political center, lost power for the first time in modern history.
+
+Controllable generation entails modeling $p(x|a)$ , where $a$ is some desired controllable attribute(s) and $x$ the generated sample. However, generative models only learn $p(x)$ . In computer vision, Nguyen et al. (2017) developed Plug & Play Generative Networks (PPGN), a mechanism for generating images with different attributes by plugging a discriminator (attribute model) $p(a|x)$ together with a base generative model $p(x)$ and sampling from the resulting $p(x|a) \propto p(a|x)p(x)$ , effectively creating a conditional generative model on the fly from any supplied attribute model. In a similar manner, we propose the Plug and Play Language Model (PPLM) for conditional language generation that combines one or more simple attribute models $p(a|x)$ —either in the form of a bag-of-words (BoW) or single layer classifiers—with a pre-trained, unconditional language model $p(x)$ . We sample from the resulting combined model by following gradients in the latent representation space in a manner inspired by the approximate Metropolis-adjusted Langevin (MALA) (Roberts et al., 1996; Roberts & Rosenthal, 1998) sampler deployed in Nguyen et al. (2017).
+
+Optimization is performed ex post facto in the activation space, therefore no re-training or fine-tuning is needed. Control is fine-grained, with a strength parameter determining how strong the attribute influence should be; a strength of 0 fully recovers the original model $p(x)$ . This design allows vast flexibility: users can combine a state-of-the-art generative model, which may be large and difficult to train, with any number of attribute controllers. Attribute models may be easier to train or untrained (in the case of BoW models), and multiple controllers may be combined flexibly during inference. In this paper, we demonstrate the PPLM approach using a GPT-2 345M model (Radford et al., 2019) as the general-purpose LM $p(x)$ , but the method applies in any representation space from any transformer-based text generator and allows combination with any attribute model $p(a|x)$ .
+
+We demonstrate controlled generation with a number of attribute controllers, assembled and combined during generation, each with a different strength, acting as a set of "control knobs" that tune generation towards the desired attribute (see examples in Table 1). Code for the experiments is available at: https://github.com/uber-research/PPLM. Our key contributions are:
+
+- We introduce the Plug and Play LM for controlled language generation, discuss its relation to existing work, and how sampling from a PPLM works (Sections 2 and 3).
+- We demonstrate control of text generation on a range of attributes, including 7 topics each defined using a bag of words, and 1 simple discriminator on sentiments. We quantify effectiveness using both automated evaluation (separately trained perplexity and sentiment models) as well as human evaluation (for attribute relevance and fluency). All evaluations point toward the ability of PPLMs to generate attribute-controlled, fluent text (Section 4).
+
+- We compare PPLM with CTRL (Keskar et al., 2019) and GPT-2 finetuned for positivity (Ziegler et al., 2019). Our method, without any LM training, is on par with and often outperforms the baselines on attribute relevance and fluency (Section 4.2, and Section 4.3).
+- We show that the PPLM approach can be used to detoxify instances where generation of toxic content is likely by following the negative gradient of a model trained to detect toxicity (Section 4.4). We also show how PPLM can be used for structurally constrained story writing (Section 4.5).
+
+# 2 RELATED WORK
+
+Controlled generation Current methods for controlled text generation involve either fine-tuning existing models with Reinforcement Learning (RL) (Ziegler et al., 2019), training Generative Adversarial Networks (Yu et al., 2017), or training conditional generative models (Kikuchi et al., 2016; Ficler & Goldberg, 2017). Different from our approach, these methodologies are not plug and play, since the entire model needs to be separately fine-tuned for each specific attribute. Keskar et al. (2019) train a large language model with over 50 different control codes. The results are high quality because they train exactly to maximize $p(x|a)$ , but this comes at the expense of fixing control codes upfront and of training a very large model (1.6B parameters). Our method does not require retraining any conditional generative model, and both the language model and the conditional model can be flexibly assembled. Table 2 gives a comparison of recent approaches to language modeling tuned for specific attributes. In another interesting but tangential piece of work, Subramani et al. (2019) recently showed that a pre-trained language model can be steered to recover arbitrary sentences. Earlier works explored the idea of using a small neural network to steer an LM (Gu et al., 2016; 2017; Chen et al., 2018).
+
+Noisy Channel Modeling Yu et al. (2016), and more recently Yu et al. (2019); Yee et al. (2019); Ng et al. (2019), leveraged the Shannon Noisy Channel Theory (Shannon, 1948) for improving sequence-to-sequence modeling. Their approach translates a source language sentence $y$ into a target language sentence $x$ by first sampling from a forward model proposal distribution $p_{\text{forward}}(x|y)$ and then reranking samples based on probabilities given by $p_{\text{backward}}(x|y) \propto p(x)p(y|x)$ . PPLM scores samples using the same basic equation, but as we have no forward or proposal model $p_{\text{forward}}(x|a)$ , we rely on the latent space updates, similar to Nguyen et al. (2017). As a baseline, we consider using $p(x)$ as a "forward model" and then reranking, which we will see works moderately well in some scenarios and poorly in others (see Tables 4 and 6).
+
+Weighted decoding Holtzman et al. (2018); Ghazvininejad et al. (2017) consider controlled language generation – the former with discriminators, and the latter with a bag of words – where the decoding procedure is modified to consider the scoring function used for decoding. See et al. (2019) note that control with weighted decoding (WD) is difficult and often leads to sacrificing fluency and coherence. Further, Ghazvininejad et al. (2017) relies heavily on sampling from a set of keywords on a specific topic and does not allow biasing generation towards a topic in a manner that does not necessarily include a set of keywords. Similarly, Baheti et al. (2018) proposed a decoding strategy for generating interesting responses in dialogue systems, using bags of words and word embeddings. Sophisticated sampling methods (Metropolis et al., 1953) can be used to constrain the model generation to certain keywords and topics. We evaluate WD as a baseline.
+
+Text Style Transfer Outside of language modeling, the field of text style transfer addresses a related task. Shen et al. (2017); Hu et al. (2017) train variational auto-encoders for style transfer that rely on learning disentangled latent representations for style and content. Li et al. (2018) demonstrate the efficacy of a simple approach based on replacing attribute related n-grams with n-grams corresponding to the desired attribute based on a conditional generative model. A key difference between the above and our approach is that we use an offline discriminator and perform optimization based on this discriminator, which as suggested by Elazar & Goldberg (2018) may outperform adversarial training approaches. More recently, Lample et al. (2019) adapt an approach from unsupervised language translation to style transfer, where a denoising auto-encoder is trained with an objective
+
+Table 2: Comparison of the different models and distributions. All models in this table are useful in different scenarios. The particular advantage of PPLM is that very small, custom attribute models, $p(a|x)$ , may be combined with powerful, general pre-trained language models, $p(x)$ , to create cheap but still powerful conditional generative models, $p(x|a)$ .
+
+| Model type | Form of model | Samples | Example models and number of trainable params |
+| --- | --- | --- | --- |
+| Language Model | p(x) | Uncond. | GPT-2 medium: 345M (Radford et al., 2019) |
+| Fine-tuned Language Model | p(x) | Uncond. | Fine-tuned GPT-2 medium: 345M (Ziegler et al., 2019) |
+| Conditional Language Model | p(x\|a) | Cond. | CTRL: 1.6B (Keskar et al., 2019) |
+| Plug and Play Language Model (PPLM) | p(x\|a) ∝ p(x)p(a\|x) | Cond. | PPLM-BoW: 0 (curated word list); PPLM-Discrim: ~1K/attribute (not counting pretrained p(x)) |
+
+consisting of a weighted combination of a re-construction loss and a back-translation loss. While the above approaches have shown impressive success on style transfer tasks, the main focus is not controlled language generation, and further, the methods are not plug and play.
+
+# 3 PLUG AND PLAY LANGUAGE MODELS
+
+# 3.1 LANGUAGE MODELING WITH TRANSFORMERS
+
+Given a sequence of tokens $X = \{x_0, \dots, x_n\}$ , LMs are trained to compute the unconditional probability of the sequence $p(X)$ . This probability can be rewritten as a product of conditional probabilities by recursively applying the chain rule (Manning et al., 1999; Bengio et al., 2003) as:
+
+$$
+p(X) = \prod_{i=1}^{n} p\left(x_i \mid x_0, \dots, x_{i-1}\right) \tag{1}
+$$
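
The chain-rule factorization in Equation 1 can be sketched in a few lines; `cond_prob` below is a hypothetical stand-in for one forward step of a language model returning $p(x_i \mid x_0, \dots, x_{i-1})$:

```python
import math

def sequence_log_prob(token_ids, cond_prob):
    """log p(X) = sum_i log p(x_i | x_0, ..., x_{i-1}) (Equation 1).

    cond_prob(prefix, token) is any callable returning the conditional
    probability of `token` given `prefix`; a real LM would supply it.
    """
    return sum(
        math.log(cond_prob(token_ids[:i], token_ids[i]))
        for i in range(1, len(token_ids))
    )
```

For example, a toy model that assigns probability 0.5 at every step gives a three-token sequence a log-probability of $2\log 0.5$.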
+
+In this paper, we use a transformer (Vaswani et al., 2017) to model the distribution of natural language. To present our approach clearly, we first briefly summarize the transformer using recurrent notation. Let us define the history matrix $H_{t}$ to consist of the key-value pairs from the past, i.e. $H_{t} = [(K_{t}^{(1)}, V_{t}^{(1)}), \dots, (K_{t}^{(l)}, V_{t}^{(l)})]$ , where $(K_{t}^{(i)}, V_{t}^{(i)})$ corresponds to the key-value pairs from the $i$ -th layer generated at all time-steps from 0 to $t$ . Efficient implementations of the transformer (Wolf et al., 2019) use the cached $H_{t}$ to generate $x_{t+1}$ , given $x_{t}$ . This recurrent interpretation of a transformer can be summarized as:
+
+$$
+o_{t+1}, H_{t+1} = \mathrm{LM}(x_t, H_t), \tag{2}
+$$
+
+and then $x_{t + 1}$ is sampled as $x_{t + 1} \sim p_{t + 1} = \mathrm{Softmax}(W o_{t + 1})$ , where $W$ is a linear transformation that maps the logit vector $o_{t + 1}$ to a vector of vocabulary size. This allows for efficient language generation without repeated forward passes corresponding to the prior conditioning text $x_0, \ldots, x_{t - 1}$ .
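
A minimal sketch of this recurrent decoding loop, with `lm_step` as a hypothetical stand-in for one cached transformer step (Equation 2):

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def generate(lm_step, W, x0, H0, n_steps, rng):
    """Sample n_steps tokens: o_{t+1}, H_{t+1} = LM(x_t, H_t), then
    x_{t+1} ~ Softmax(W o_{t+1}). Only the newest token and the cached
    history H_t are passed at each step, never the full prefix."""
    x, H, tokens = x0, H0, []
    for _ in range(n_steps):
        o, H = lm_step(x, H)       # one cached forward step
        p = softmax(W @ o)         # distribution over the vocabulary
        x = int(rng.choice(len(p), p=p))
        tokens.append(x)
    return tokens
```

With a dummy `lm_step` and a logit matrix that strongly favors one token, the loop reduces to repeated sampling of that token; in the real model `H` grows with each step's new key-value pairs.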
+
+# 3.2 STEERING GENERATION: ASCENDING $\log p(a|x)$
+
+In order to control the output of the language model, at every generation step $t$ , we shift the history $H_{t}$ in the direction of the sum of two gradients: one toward higher log-likelihood (LL) of the attribute $a$ under the conditional attribute model $p(a|x)$ and one toward higher LL of the unmodified language model $p(x)$ . Combining these factors with a variable multiplier provides us with a controllable "knob" to guide generation in a given direction with a specified strength. The updates are restricted to $H_{t}$ and not the other model activations because future predictions depend on the past only via $H_{t}$ (note that $H_{t}$ is composed of all transformer key and value pairs generated up to time $t$ ). Taking steps in $H_{t}$ space leads to gradual changes to model activations — which may be thought of as gradual reinterpretations of the past — that guide future generation in the desired direction.
+
+Let $\Delta H_{t}$ be the update to $H_{t}$ , such that generation with $(H_{t} + \Delta H_{t})$ shifts the distribution of the generated text such that it is more likely to possess the desired attribute. $\Delta H_{t}$ is initialized
+
+
+Figure 1: Simplified illustration of the proposed approach in three phases. In Step 1, a forward pass is performed through the language model to compute the likelihood of a desired attribute using an attribute model that predicts $p(a|x)$ . In Step 2, a backward pass updates the internal latent representations of the LM, using gradients from the attribute model, to increase the likelihood of the passage having the desired attribute. In Step 3, a new distribution over the vocabulary $(\widetilde{p}_{t+1})$ is generated from the updated latents $(\widetilde{H}_t)$ and the current token $x_t$ . The next token is then sampled from the updated distribution. This process of updating the latents is repeated at each time-step, leading to a gradual transition towards the desired attribute. For computational efficiency, one may choose to modify only the latents within some window of the recent past, depicted as the dotted-red region.
+
+at zero and updated with gradients from an attribute model that measures the extent to which the generated text possesses the desired attribute (e.g. positivity). We rewrite the attribute model $p(a|x)$ as $p(a|H_t + \Delta H_t)$ and then make gradient based updates to $\Delta H_t$ as follows:
+
+$$
+\Delta H_t \leftarrow \Delta H_t + \alpha \frac{\nabla_{\Delta H_t} \log p(a \mid H_t + \Delta H_t)}{\left\| \nabla_{\Delta H_t} \log p(a \mid H_t + \Delta H_t) \right\|^{\gamma}} \tag{3}
+$$
+
+where $\alpha$ is the step size and $\gamma$ is the scaling coefficient for the normalization term. This update step can be repeated $m$ times; in practice we use 3 to 10. Subsequently, a forward pass through the LM with the updated key-value pairs is performed to obtain the updated logits $\widetilde{o}_{t+1}$ as $\widetilde{o}_{t+1}, H_{t+1} = \mathrm{LM}(x_t, \widetilde{H}_t)$ , where $\widetilde{H}_t = H_t + \Delta H_t$ . The perturbed $\widetilde{o}_{t+1}$ is then used to generate a new distribution $\widetilde{p}_{t+1}$ as in Equation 2.
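
A sketch of the update in Equation 3 for a single latent tensor, assuming the gradient of the attribute log-likelihood has already been obtained from a backward pass (the small `eps` is our own numerical safeguard, not part of the equation):

```python
import numpy as np

def update_delta_h(delta_h, grad_log_p_a, alpha=0.02, gamma=1.0, eps=1e-12):
    """Equation 3: ascend log p(a | H_t + delta_h), with the gradient
    normalized by its norm raised to the power gamma."""
    norm = np.linalg.norm(grad_log_p_a)
    return delta_h + alpha * grad_log_p_a / (norm ** gamma + eps)
```

With $\gamma = 1$ the step direction is the unit gradient, so each of the $m$ repeats moves $\Delta H_t$ by exactly $\alpha$ in latent space.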
+
+# 3.3 ENSURING FLUENCY: ASCENDING $\log p(x)$
+
+The approach described in the previous section is able to generate text tuned for a particular discriminator, but left unchecked it will quickly result in unrealistic adversarial or fooling examples (Szegedy et al., 2013; Nguyen et al., 2015) as the text moves into low probability regions. To combat this, we use the unconditional language model in two ways that ensure the fluency is maintained at or near the level of the unconditional language model (here GPT-2).
+
+Kullback-Leibler (KL) Divergence We update $\Delta H_{t}$ to minimize the KL divergence between the output distribution of the modified and unmodified language models in addition to the step above. In practice, this is accomplished by adding the quantities together before taking a gradient, though it can be visualized as two separate steps as in Figure 2. We scale the KL term by a coefficient $\lambda_{KL}$ ; in practice, setting this hyperparameter to 0.01 works well across tasks.
+
+Post-norm Geometric Mean Fusion In addition to minimizing KL divergence, which affects the past via $\Delta H_{t}$ , we perform post-norm fusion similarly to Stahlberg et al. (2018). This does not directly affect $\Delta H_{t}$ ; rather, it just serves to constantly tie the generated text to the unconditional $p(x)$ LM distribution. We accomplish this by sampling from $x_{t + 1} \sim \frac{1}{\beta} \left( \widetilde{p}_{t + 1}^{\gamma_{gm}} p_{t + 1}^{1 - \gamma_{gm}} \right)$ , where $p_{t + 1}$ and $\widetilde{p}_{t + 1}$ are the unmodified and modified output distributions, respectively, and $\beta$ is a normalizing factor such that it forms a valid distribution. As $\gamma_{gm} \to 1$ this converges to the distribution from the updated LM, and as $\gamma_{gm} \to 0$ it converges to the unconditional LM distribution. We find that in practice values for $\gamma_{gm}$ in the range $0.8 - 0.95$ work well.
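
A sketch of the fusion step, assuming `p_mod` and `p_unmod` are the modified and unmodified next-token distributions:

```python
import numpy as np

def geometric_mean_fusion(p_mod, p_unmod, gamma_gm=0.9):
    """Sample-time fusion: the renormalized product p_mod^g * p_unmod^(1-g).
    gamma_gm -> 1 recovers the modified LM; gamma_gm -> 0 recovers the
    unconditional LM. The division by the sum is the 1/beta normalization."""
    fused = (p_mod ** gamma_gm) * (p_unmod ** (1.0 - gamma_gm))
    return fused / fused.sum()
```

Because the fusion happens after the latent update, it pulls every sampled token back toward the unconditional $p(x)$ without touching $\Delta H_t$.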
+
+
+Figure 2: An oversimplified view into why steps that maximize both $\log p(a|x)$ and $\log p(x)$ are needed. The sentence under consideration is shown as a black dot, which is first pushed in the direction of maximizing $\log p(a|x)$ and then in the direction of maximizing $\log p(x)$ . In practice we use a single step and simply add the log probabilities; we take steps in continuous space of hidden representations $H$ rather than in the discrete $x$ (byte pair) space, and rather than resampling the entire sentence each step, we take one step in $H$ space per byte-pair sample.
+
+# 3.4 SAMPLING AND RANKING
+
+The attribute model $p(a|x)$ in PPLM provides two functionalities: first, a score that can be used to rank samples based on the LL of the desired attribute (forward pass only; Step 1, Figure 1), and second, a gradient ascent direction to perform an update in the latent space (Step 2 & 3; Figure 1). The former can be used to generate $r$ samples and rank them to choose the best one. This can serve as an additional method for attribute control in addition to sampling with updated latents. Further, to avoid the problem of repetitive, low quality text (Holtzman et al., 2018), we compute the mean over the Dist-1, Dist-2 and Dist-3 scores (for the generated passage), which is an indicator of repetitiveness (Li et al., 2015), and then discard samples with a mean score below a threshold $\tau$ .
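
The Dist-based filtering step can be sketched as follows; the threshold `tau` and the token lists are illustrative:

```python
def dist_n(tokens, n):
    """Dist-n: distinct n-grams divided by total n-grams (Li et al., 2015);
    low values signal repetitive text."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(grams)) / max(len(grams), 1)

def keep_sample(tokens, tau=0.5):
    """Keep a sample only if its mean Dist-1/2/3 reaches the threshold tau."""
    return sum(dist_n(tokens, n) for n in (1, 2, 3)) / 3.0 >= tau
```

Among the $r$ samples that survive this filter, the one with the highest attribute log-likelihood $\log p(a|x)$ is then chosen.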
+
+# 4 EXPERIMENTS, RESULTS, AND EVALUATION
+
+In this section, we describe our evaluation methodology and then show controlled generation results under various attribute models. We also show use cases of PPLM in language detoxification and in controlled storytelling. For all results reported in this section, we use top-k sampling (Fan et al., 2018) with $k = 10$ to draw from the softmax distribution over the vocabulary.
+
+# 4.1 EVALUATION METHODS AND ABLATION STUDY
+
+We evaluate two properties: whether PPLM generates text that satisfies the desired attribute (topic or sentiment) and whether the quality of its text deteriorates as we intensify control of the attribute. Note we can always turn the control knob down to zero to disable control of attributes and reach the fluency of the original model. If desired, a user can tune the knobs at inference until a chosen tradeoff between attribute strength and fluency is reached. We evaluate using both automated methods and human annotators:
+
+Automated Eval. Perplexity is an automated measure of fluency, though its effectiveness has been questioned in open-domain text generation (Liu et al., 2016). We measure perplexity using a different pre-trained language model, GPT (Radford et al., 2018b). The diversity of text in the passages is measured using the number of distinct n-grams (normalized by the length of text) as in Li et al. (2015). We report Dist-1, Dist-2, and Dist-3 scores for distinct 1-, 2-, and 3-grams (measured across all samples generated for a given attribute control task, e.g. a specific topic for topic control); these scores indicate the diversity of the generated samples. We also use external sentiment classifiers for sentiment evaluation.
+
+Human Eval. We consider two types of human annotation: fluency and A/B testing on attribute relevance. Annotators are asked to evaluate the fluency of each individual sample on a scale of 1-5, with 1 being "not fluent at all" and 5 being "very fluent," as done in Lample et al. (2019). In the A/B testing for attribute relevance, we consider all combinatorial pairs of all four variants: B, BR, BC, and BCR (6 combinations). We then ask annotators to rank the pair on the desired attribute (e.g. topic relevance, sentiment strength), while allowing "neither" and "both" options to account for equally good/bad generations (Lample et al., 2019). We obtain annotations from nine external occupational annotators. Each pair of samples is evaluated by three individuals and we use majority-voting to
+
+Table 3: Comparison of different samples generated by (top row) baseline GPT-2 and (other rows) PPLM with different BoW corresponding to different topics (e.g. [Military]), all conditioned on a single prefix: "The issue focused". Both directly optimized (in red) and related words (in soft red) are highlighted, showing how the optimization takes effect.
+
+| [−] The issue focused on the way that the city's police officers have reacted in recent years to the deaths of Michael Brown in Ferguson, Mo., Eric Garner in New York City and Sandra Bland in Texas, as well as the shooting of unarmed teen Michael Brown by a white police officer in Ferguson, Mo. ... |
+| [Military] The issue focused on the fact that the government had spent billions on the military and that it could not deploy the troops in time. The prime minister said that the country would take back control of its airspace over Syria in the next 48 hours. \n The military is investigating why... |
+| [Space] The issue focused on a series of incidents that occurred in the past few months, which included an alleged attack by Islamic State fighters on a Kurdish checkpoint, the use of drones in combat, space technology research by Russian and American space companies, and more. \n The world... |
+| [Science] The issue focused on a single piece: the question "What is the meaning of life?" This question has puzzled many philosophers, who have attempted to solve it by using some of the concepts of quantum mechanics, but they have to solve it by the laws of nature themselves... |
+| [Politics] The issue focused on a single section of the legislation. It's unclear whether the committee will vote to extend the law, but the debate could have wider implications. \n "The issue of the law's applicability to the United Kingdom's referendum campaign has been one of... |
+| [Computers] The issue focused on the role of social media as a catalyst for political and corporate engagement in the digital economy, with the aim of encouraging companies to use the power of social media and the Internet to reach out to their target market. \n ... |
+
+compute attribute relevance. For fluency, we use the average of the three annotations. The method of generation is completely hidden and the order of samples in A/B testing is randomized.
+
+Ablation study and baselines. We conduct an ablation study with four variants: B: the baseline, unchanged GPT-2 LM, sampled once; BR: B but sampled $r$ times, with the best sample chosen based on the LL ranking and filtering based on Dist score; BC: update the latent representations $(\widetilde{H}_t)$ and then sample once; and lastly BCR: update the latent representations $(\widetilde{H}_t)$ and generate $r$ samples, choosing the best sample based on the LL score (after filtering out samples with low Dist scores). As baseline approaches we consider CTRL (Keskar et al., 2019), a recent conditional language model; GPT2-FT-RL: a GPT-2 LM fine-tuned for human evaluated positivity with RL (Ziegler et al., 2019); and WD: a weighted decoding baseline in which the B LM's outputs are weighted directly toward maximizing $p(a|x)$ (Ghazvininejad et al., 2017); see Section S7 for details, and Section S11 for hyperparameters.
+
+# 4.2 BOW ATTRIBUTE MODELS
+
+The simplest attribute model we use gives the log of the sum of likelihoods of each word in some predefined Bag of Words (BoW). Given a set of keywords $\{w_{1},\dots ,w_{k}\}$ that specify a topic of interest and the output distribution of the language model $p_{t + 1}$ , the log likelihood is:
+
+$$
+\log p(a \mid x) = \log\left(\sum_{i=1}^{k} p_{t+1}[w_i]\right). \tag{4}
+$$
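
Given the next-token distribution, Equation 4 is a one-liner; `bow_ids` would be the vocabulary indices of the bag's keywords:

```python
import numpy as np

def bow_log_likelihood(p_next, bow_ids):
    """Equation 4: log of the total probability mass the LM places on the
    bag-of-words at the next step. Its gradient with respect to the latents
    drives the PPLM-BoW update."""
    return float(np.log(p_next[bow_ids].sum()))
```

Because the objective sums probability mass over the whole bag, ascending it raises the likelihood of the topic as a whole rather than forcing any single keyword.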
+
+We construct BoWs that represent seven distinct topics: SCIENCE, MILITARY, LEGAL, COMPUTERS, SPACE, POLITICS, and RELIGION (see Section S17 for complete word lists). Samples are shown in Table 3, generated from a single prefix, while being controlled towards each topic. Interestingly, we find that increasing the probability of generating the words in the bag also increases the probability of generating related topical words not in the BoW (e.g. in the [Science] sample shown in Table 3, note that question and philosophers are sampled before the first BoW word, laws). Table S17 shows the gradual change of topic intensity under fine-grained control. We found that the optimization procedure works better with updating representations from the past over a finite window and using an adaptive normalization scheme (see Section S11.3).
+
+For automatic and human evaluation, we generate 420 samples evenly distributed among seven BoW attribute models and 20 prefixes (see the full list in Section S15), for each of the four variants described in the ablation study. See Section S8 for further details on evaluation and results. Table 4 shows that human annotators find text from BCR $(51.7\%)$ and BC $(46.9\%)$ to be significantly more
+
+Table 4: For each treatment in the ablation study, we report mean±std-dev across (human and automated) fluency metrics. The topic (\%) reports the fraction of samples matching the target topic, as evaluated by human annotators. Table S8 provides per-topic results. Approaches BC and BCR demonstrate significant control over the topic of the generated text, while retaining similar diversity (Dist-1, Dist-2, Dist-3) scores and minimal degradation in Perplexity and Fluency evaluations vs the baseline LM (B). The gain from ranking and choosing from multiple samples BR over B is limited $(4.7\%)$ . The gain in topic-accuracy from latent $(\widetilde{H}_t)$ manipulation (from B to BC) is significantly higher $(35.8\%)$ . Perplexity is computed using the GPT LM (Radford et al., 2018a), which differs from the LM generating text (GPT-2). For CTRL and WD, since human evaluation is performed in comparison with BCR via A/B testing, we report the numbers for BCR as well from these comparisons, for the human evaluated metrics. Further, we consider one sample per prefix for CTRL, resulting in fewer samples and higher Dist-1, 2, 3 scores as a consequence. PPLM outperforms CTRL and WD on topic-relevance, while being comparable on fluency scores.
+
+| Method | Topic % (↑ better) (human) | Perplexity (↓ better) | Dist-1 (↑ better) | Dist-2 (↑ better) | Dist-3 (↑ better) | Fluency (↑ better) (human) |
+| --- | --- | --- | --- | --- | --- | --- |
+| B | 11.1 | 39.85±35.9 | 0.37 | 0.79 | 0.93 | 3.60±0.82 |
+| BR | 15.8 | 38.39±27.14 | 0.38 | 0.80 | 0.94 | 3.68±0.77 |
+| BC | 46.9 | 43.62±26.8 | 0.36 | 0.78 | 0.92 | 3.39±0.95 |
+| BCR | 51.7 | 44.04±25.38 | 0.36 | 0.80 | 0.94 | 3.52±0.83 |
+| CTRL | 50.0 | 24.48±11.98 | 0.40 | 0.84 | 0.93 | 3.63±0.75 |
+| BCR | 56.0 | - | - | - | - | 3.61±0.69 |
+| WD | 35.7 | 32.05±19.07 | 0.29 | 0.72 | 0.89 | 3.48±0.92 |
+| BCR | 47.8 | - | - | - | - | 3.87±0.71 |
+
+on topic than B (11.1%) and BR (15.8%). With only a slight degradation in fluency scores, passages generated with manipulated latents (BC and BCR) are significantly more on topic, demonstrating the desired attribute control on this task. The Dist-1, Dist-2 and Dist-3 scores, which account for the diversity of text across the generated passages, are similar across all four ablation approaches. Further, BCR slightly outperforms CTRL (51.7% vs. 50.0%) and significantly outperforms WD (35.7%); BC alone also outperforms WD. BCR, CTRL and WD all score similarly on the fluency metric.
+
+We note that gradient-based latent updates have significantly greater influence on topic relevance (C with or without R) than reranking based on the score (R with or without C), showing that shifting meaning in latent space is more effective than shifting the output distribution directly through reweighting. The effectiveness of shifting latents is further corroborated by WD's relatively worse performance. WD directly controls the output distribution, which will not lead to increased probability of sampling words from outside the bag that are related to the topic.
+
+Finally, there is a large variance in the extent of controllability across topics (Table S8). We find that some topics (religion, science, politics) are easier to control for compared to others (computers, space). Section S9 considers unusual or nonsensical combinations of prefixes and attributes (e.g. prefix 'potato' and topic 'religion'), and we find that even for these settings PPLM is able to successfully control for the desired attribute, often with hilarious twists!
+
+# 4.3 DISCRIMINATOR ATTRIBUTE MODELS
+
+While BoW models have been demonstrated to be able to control text attributes such as sentiment (e.g., Li et al. (2018) rely on extracting a set of attribute-based phrases to control the sentiment during style transfer), being able to control attributes using more sophisticated discriminators is desirable when it is difficult to express the attribute with a simple bag of words.
+
+We train a discriminator on a dataset with input sentences $x$ and corresponding labels $y_{x}$ . For an input $x$ of length $t$ , we compute $o_{:t}^{x}$ and train $f$ on the mean $(\bar{o}_t^x)$ of the embeddings across time. All discriminators in this work consist of a single layer classifier that predicts the target label from $\bar{o}_t^x$ . The number of parameters in this layer is (embedding-dimension $(e) \times$ number of attributes $(a) +$ number of attributes $(a)$ ), which is negligible compared to the number of parameters in the LM itself (Table 2). Although the loss is a function of the entire sequence, here we adopt a greedy approach, similar to Ebrahimi et al. (2018); Wallace et al. (2019), in which we optimize for
+
+Table 5: Sentence samples in triplets, generated by {baseline GPT-2, PPLM-Discrim POSITIVE, PPLM-Discrim NEGATIVE}, conditioned on prefixes: The chicken & The country. Words related to the sentiment are highlighted (in soft red). Each triplet is generated from the same random seed.
+
+[-] The chicken is now out on the grill. \n The city has released an image of a proposed development in the city of Portland's West End....
+
+[Positive] The chicken was delicious – wonderfully moist, perfectly delicious, superbly fresh – and perfectly cooked. The only thing to say is that the sauce was excellent, and I think that the broth really complemented all of the other flavors. The best part was the sauce...
+
+[Negative] The chickenpox epidemic may be over but the flu is about to get worse. The United States is facing one of the worst flu seasons on record and...
+
+[-] The country's new chief minister, A.J. Paik, is a member of a group of prominent conservative politicians who have criticized the Obama administration's efforts to...
+
+[Positive] The country's largest indoor painting event! \n Come celebrate with a dazzling display of stunning outdoor murals, a stunning display of art, and the world's best paint and art supplies from all over the world!
+
+[Negative] The country's top prison system is forcing prisoners to use a trash dump, rather than a toilet, to flush their waste out, as the authorities fear the waste is more toxic and could cause cancer, an official at a major prison has revealed...
+
+a higher probability of the sequence having a specific attribute, considering changes only to the next token to be generated. This objective can be described as follows, where $f$ is the discriminator:
+
+$$
+\log p(a|x) = \log f(o_{:t+1}, o_{t+2}) \tag{5}
+$$
+
+Note that $o_{t+2}$ is a function of $x_{t+1}$. Further, $x_{t+1} \sim \mathrm{Softmax}(W\tilde{o}_{t+1})$, which depends on $\Delta H_t$. In the limit, minimizing the objective in Equation 5 corresponds to choosing the $x_{t+1}$ that produces the optimal $o_{t+2}$ maximizing $f(o_{:t+1}, o_{t+2})$. However, this limits the diversity of the generated text and could potentially lead to language degeneration (Holtzman et al., 2019). Alternatively, we focus on a softer optimization approach where we aim to shift the distribution $\tilde{p}_{t+1} = \mathrm{Softmax}(W\tilde{o}_{t+1})$ towards one that in expectation has a higher likelihood of having the desired attribute $a$. Possible approaches to accomplishing this include REINFORCE (Williams, 1992) and the Gumbel-Softmax trick (Jang et al., 2016), but both would slow down convergence. Instead, as in Dai et al. (2019a), we use the distribution $\tilde{p}_{t+1}$ (instead of a hard sample $x_{t+1}$), feed it forward to obtain a (biased) estimate of the next token's embedding, and then update $\Delta H_t$.
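
As a concrete illustration of this soft forward pass, the sketch below forms the expected input embedding under $\tilde{p}_{t+1}$ instead of a hard sample (numpy, with illustrative shapes and a tied embedding matrix as assumptions; the real method operates inside GPT-2 with automatic differentiation):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

# Illustrative sizes: vocabulary V, embedding dimension d (not GPT-2's real sizes).
V, d = 50, 8
rng = np.random.default_rng(0)
W = rng.normal(size=(V, d))      # output projection; assumed tied with input embeddings
o_tilde = rng.normal(size=d)     # perturbed output embedding, \tilde{o}_{t+1}

p_tilde = softmax(W @ o_tilde)   # \tilde{p}_{t+1} = Softmax(W \tilde{o}_{t+1})

# Instead of feeding a hard sample x_{t+1} ~ p_tilde back into the LM, feed the
# expectation of the next token's embedding under p_tilde: a biased but
# differentiable estimate that keeps the chain of gradients to Delta H_t intact.
soft_embedding = p_tilde @ W     # (d,) weighted average of token embeddings
```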
+
+The sentiment discriminator here distinguishes between POSITIVE and NEGATIVE sentiment and is trained on the SST-5 dataset (Socher et al., 2013). Table 5 shows PPLM-Discrim generated samples in triplets: uncontrolled, controlled for POSITIVE sentiment, and controlled for NEGATIVE sentiment. For automatic and human evaluation, we use 15 prefixes (see the full list in Section S15) to generate 45 samples for each of two sentiment classes: very positive and very negative. Note that even though the sentiment discriminator is trained on movie review data, the prefixes we use (e.g. "The painting", "The potato", "The country") are not necessarily associated with movie reviews. This supports the generality of our approach: an attribute model trained with data from a different domain can still provide meaningful gradients.
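
Since each discriminator attribute model is just a single linear layer over the mean output embedding, its size and forward pass can be sketched as follows (the embedding dimension is an assumption chosen to match GPT-2 medium; names are ours):

```python
import numpy as np

e, a = 1024, 2                      # embedding dim (assumed), POSITIVE/NEGATIVE labels
W = np.zeros((a, e))                # e * a weights
b = np.zeros(a)                     # + a biases

def discriminator_logits(o):
    """o: (t, e) array of LM output embeddings o_{:t} for one input."""
    o_bar = o.mean(axis=0)          # mean embedding across time steps
    return W @ o_bar + b            # (a,) logits over attribute labels

# Total head parameters: e*a + a -- negligible next to the LM's weights.
assert W.size + b.size == e * a + a     # 2050 parameters with these sizes
```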
+
+Table 6 shows evaluation results. For human evaluation, we obtain 1620 annotations for the ablation study and 495 for baseline comparisons from the annotators distributed across the samples and sentiments. Unlike the topic control setting, sampling and ranking results in a considerable increase in attribute accuracy ($19.3\% \rightarrow 41.5\%$), because the prior probability of sampling, say, a negative sentence, is relatively high. BC results in a decrease in fluency when compared to B, while being significantly more consistent with the desired attribute ($19.3\% \rightarrow 39.6\%$). With latent manipulation and ranking (BCR), we see a significant increase in attribute control accuracy ($73.7\%$) while retaining fluency similar to B and BR. Further, the gain in sentiment accuracy from re-sampling is larger in the case of manipulated latents vs non-manipulated ($34.1\%$ increase from BC to BCR $> 22.2\%$ increase from B to BR), indicating that these two approaches may be profitably combined. We also evaluate attribute control with an external sentiment classifier trained on IMDB movie reviews (Maas et al., 2011), a different dataset from the one used to train the attribute model (Socher et al., 2013), and the same rough story holds, albeit with smaller gaps between approaches. We compare to baselines CTRL, GPT2-FT-RL, and WD. BCR performs comparably to CTRL ($73.7\%$ and $80.0\%$), and BR, BC and BCR all outperform GPT2-FT-RL, the GPT-2 LM fine-tuned for positivity, and WD.
+
+Table 6: Evaluation of models/variants on the sentiment control task, with mean±std-dev reported across fluency metrics. Sentiment accuracy reports the fraction of samples with the target sentiment. Approach BCR provides significant control over sentiment while showing minimal degradation in fluency. See Table S9 for full results on individual sentiments. *GPT2-FT-RL is only evaluated for the positivity half of the task, as it is fine-tuned only for positivity (Ziegler et al., 2019). For human evaluation metrics, we compare the baselines CTRL, GPT2-FT-RL and WD with BCR and perform A/B-style testing. We include both numbers for comparison.
+
+| Method | Sentiment Acc. (%) (human) | Sentiment Acc. (%) (external classifier) | Perplexity (↓ better) | Dist-1 (↑ better) | Dist-2 (↑ better) | Dist-3 (↑ better) | Human Evaluation Fluency (↑ better) |
| B | 19.3 | 52.2 | 42.1±33.14 | 0.37 | 0.75 | 0.86 | 3.54±1.08 |
| BR | 41.5 | 62.2 | 44.6±34.72 | 0.37 | 0.76 | 0.87 | 3.65±1.07 |
| BC | 39.6 | 64.4 | 41.8±34.87 | 0.33 | 0.70 | 0.86 | 2.79±1.17 |
| BCR | 73.7 | 78.8 | 46.6±40.24 | 0.36 | 0.77 | 0.91 | 3.29±1.07 |
| CTRL | 76.7 | 96.6 | 37.4±16.89 | 0.35 | 0.78 | 0.89 | 3.54±0.77 |
| BCR | 70.0 | - | - | - | - | - | 3.36±0.82 |
| GPT2-FT-RL* | 13.3 | 77.8 | 217.3±176.4 | 0.54 | 0.91 | 0.94 | 3.31±0.84 |
| BCR | 84.4 | - | - | - | - | - | 3.68±0.83 |
| WD | 18.9 | 52.2 | 31.7±28.0 | 0.33 | 0.69 | 0.83 | 3.67±0.89 |
| BCR | 61.1 | - | - | - | - | - | 3.75±0.66 |
+
+# 4.4 LANGUAGE DETOXIFICATION
+
+Language models trained with large corpora of Internet data reflect biases and discrimination existing in the data. Wallace et al. (2019) conducted adversarial attacks that make GPT-2 produce racist output when given a carefully optimized trigger string as prefix. They also find that simply using "Blacks" as a prefix leads to explicit racism in $2\%$ of GPT-2 samples. Other prefixes (e.g., "Asians" or "Jews") are mentioned but no percentage is reported. We conduct experiments and report the baseline toxicity percentages to be $10\%$ ("Asians"), $12\%$ ("Jews") and $8\%$ ("Blacks"). With adversarial triggers generated from the codebase released by Wallace et al. (2019), the average toxicity percentage is $63.6\%$. Further details can be found in Section S13.
+
+PPLMs can be easily adapted for language detoxification by plugging in a toxicity classifier as the attribute control model and updating latents with the negative gradient. We train a single-layer classifier on the toxicity data from the Toxic Comment Classification Challenge (Jigsaw) and show that, with a hyper-parameter setting similar to other PPLM-Discrim methods, it works well on both natural prompts and adversarial triggers. For natural prompts, the percentages of toxicity are $6\%$, $4\%$ and $10\%$, respectively, and for adversarial triggers the average toxicity drops drastically to $4.6\%$, with statistical significance. Details on the annotation procedure and the full table of percentages and p-values can be found in Table S23 and Section S13. Note that a model for detoxifying language can also potentially be maliciously used for generating toxic language, a topic we briefly discuss in Section S6.
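
A toy illustration of the negative-gradient update (the logistic classifier and all shapes here are stand-ins; the real method backpropagates the toxicity classifier's gradient through the LM's latents):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=16)             # toy classifier weights (illustrative)
h = rng.normal(size=16)             # unperturbed hidden state

delta_h = np.zeros_like(h)          # latent perturbation, playing the role of Delta H_t
alpha = 0.05
for _ in range(50):
    p_toxic = sigmoid(w @ (h + delta_h))
    grad = p_toxic * (1.0 - p_toxic) * w    # d p_toxic / d delta_h for this toy model
    delta_h -= alpha * grad                  # step along the NEGATIVE gradient

# The accumulated perturbation lowers the classifier's toxicity score.
assert sigmoid(w @ (h + delta_h)) < sigmoid(w @ h)
```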
+
+# 4.5 CONTROLLED STORY WRITING
+
+We explore controlled generation for assistive story writing (Peng et al., 2018; Luo et al., 2019; Yao et al., 2019; Fan et al., 2018). Using uncontrolled LMs for assistive art creation can be difficult. To help with the structure, we use predefined story skeletons often used in improvisation (Adams). We fill in the blank between these prefixes with a PPLM. See examples in Table S20 and Table S21.
+
+# 5 CONCLUSION
+
+We have presented PPLM, a plug and play method for controlled language generation that flexibly combines a large, pre-trained LM and a BoW or a small, easy-to-train discriminator. In Section S6 we discuss the ethics of controlled LMs. PPLM achieves fine-grained control of attributes via a simple gradient-based sampling mechanism. Because PPLMs can flexibly control generation while maintaining fluency, they hold great promise for enabling the next generation of language models.
+
+# ACKNOWLEDGEMENTS
+
+The authors are grateful to Bryan McCann for providing samples for the CTRL baseline, Joel Lehman for discussion regarding the ethical implications for this work, Jiale Zhi for help with the computational framework, Colan Chen for creating associated artwork for the blog, Avishek Joey Bose for helpful discussions, Julien Chaumont, Lysandre Debut, Thomas Wolf, and the Hugging Face team for co-producing the PPLM demo and helping integrate the code into their transformers repository, all the annotators at Uber, HKUST and Caltech for their labeling, and members of the Deep Collective research group for helpful discussion, ideas, and feedback on experiments.
+
+# REFERENCES
+
+Kenn Adams. Improv encyclopedia story spine. http://improvencyclopedia.org/games/Story_Spine.html. (accessed September 20, 2019).
+Ashutosh Baheti, Alan Ritter, Jiwei Li, and Bill Dolan. Generating more interesting responses in neural conversation models with distributional constraints. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3970-3980, 2018.
+Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137-1155, 2003.
+Yun Chen, Victor OK Li, Kyunghyun Cho, and Samuel R Bowman. A stable and effective learning strategy for trainable greedy decoding. arXiv preprint arXiv:1804.07915, 2018.
+Ning Dai, Jianze Liang, Xipeng Qiu, and Xuanjing Huang. Style transformer: Unpaired text style transfer without disentangled latent representation. arXiv preprint arXiv:1905.05621, 2019a.
+Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019b.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, 2019.
+Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. HotFlip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 31-36, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-2006. URL https://www.aclweb.org/anthology/P18-2006.
+Yanai Elazar and Yoav Goldberg. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 11-21, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1002. URL https://www.aclweb.org/anthology/D18-1002.
+Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. arXiv preprint arXiv:1805.04833, 2018.
+Jessica Ficler and Yoav Goldberg. Controlling linguistic style aspects in neural language generation. In Proceedings of the Workshop on Stylistic Variation, pp. 94-104, 2017.
+Marjan Ghazvininejad, Xing Shi, Jay Priyadarshi, and Kevin Knight. Hafez: an interactive poetry generation system. In Proceedings of ACL 2017, System Demonstrations, pp. 43-48, Vancouver, Canada, July 2017. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P17-4008.
+Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor OK Li. Learning to translate in real-time with neural machine translation. arXiv preprint arXiv:1610.00388, 2016.
+
+Jiatao Gu, Kyunghyun Cho, and Victor OK Li. Trainable greedy decoding for neural machine translation. arXiv preprint arXiv:1702.02429, 2017.
+Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. Learning to write with cooperative discriminators. CoRR, abs/1805.06087, 2018. URL http://arxiv.org/abs/1805.06087.
+Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751, 2019.
+Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. Controllable text generation. CoRR, abs/1703.00955, 2017. URL http://arxiv.org/abs/1703.00955.
+Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016.
+Jigsaw. Toxic comment classification challenge. https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/. Accessed: 2019-11-13.
+Nitish Shirish Keskar, Bryan McCann, Lav Varshney, Caiming Xiong, and Richard Socher. CTRL - A Conditional Transformer Language Model for Controllable Generation. arXiv preprint arXiv:1909.05858, 2019.
+Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. Controlling output length in neural encoder-decoders. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 1328-1338, Austin, Texas, November 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1140. URL https://www.aclweb.org/anthology/D16-1140.
+Guillaume Lample, Sandeep Subramanian, Eric Smith, Ludovic Denoyer, Marc'Aurelio Ranzato, and Y-Lan Boureau. Multiple-attribute text rewriting. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=H1g2NhC5KQ.
+Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. A Diversity-Promoting Objective Function for Neural Conversation Models. arXiv e-prints, art. arXiv:1510.03055, Oct 2015.
+Juncen Li, Robin Jia, He He, and Percy Liang. Delete, retrieve, generate: A simple approach to sentiment and style transfer. CoRR, abs/1804.06437, 2018. URL http://arxiv.org/abs/1804.06437.
+Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2122-2132, 2016.
+Fuli Luo, Damai Dai, Pengcheng Yang, Tianyu Liu, Baobao Chang, Zhifang Sui, and Xu Sun. Learning to control the fine-grained sentiment for story ending generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 6020-6026, 2019.
+Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 142-150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P11-1015.
+Christopher D Manning and Hinrich Schütze. Foundations of statistical natural language processing. MIT press, 1999.
+Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. Equation of state calculations by fast computing machines. The journal of chemical physics, 21(6):1087-1092, 1953.
+Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. Facebook fair's wmt19 news translation task submission. arXiv preprint arXiv:1907.06616, 2019.
+
+Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
+Anh Nguyen, Jeff Clune, Yoshua Bengio, Alexey Dosovitskiy, and Jason Yosinski. Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
+Nanyun Peng, Marjan Ghazvininejad, Jonathan May, and Kevin Knight. Towards controllable story generation. In Proceedings of the First Workshop on Storytelling, pp. 43-49, 2018.
+Martin Potthast, Tim Gollub, Kristof Komlossy, Sebastian Schuster, Matti Wiegmann, Erika Patricia Garces Fernandez, Matthias Hagen, and Benno Stein. Crowdsourcing a large corpus of clickbait on twitter. In Proceedings of the 27th International Conference on Computational Linguistics, pp. 1498-1507, 2018.
+Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf, 2018a.
+Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf, 2018b.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 2019.
+Gareth O Roberts and Jeffrey S Rosenthal. Optimal scaling of discrete approximations to langevin diffusions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 60(1): 255-268, 1998.
+Gareth O Roberts, Richard L Tweedie, et al. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, 2(4):341-363, 1996.
+Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. What makes a good conversation? How controllable attributes affect human judgments. arXiv e-prints, art. arXiv:1902.08654, Feb 2019.
+Claude Elwood Shannon. A mathematical theory of communication. Bell system technical journal, 27(3):379-423, 1948.
+Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. Style transfer from non-parallel text by cross-alignment. CoRR, abs/1705.09655, 2017. URL http://arxiv.org/abs/1705.09655.
+Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631-1642, Seattle, Washington, USA, October 2013. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/D13-1170.
+Felix Stahlberg, James Cross, and Veselin Stoyanov. Simple fusion: Return of the language model. arXiv preprint arXiv:1809.00125, 2018.
+Nishant Subramani, Sam Bowman, and Kyunghyun Cho. Can unconditional language models recover arbitrary sentences? arXiv preprint arXiv:1907.04944, 2019.
+Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Rob Fergus. Intriguing properties of neural networks. CoRR, abs/1312.6199, 2013.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 6000-6010, 2017.
+
+Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. Universal adversarial triggers for nlp. arXiv preprint arXiv:1908.07125, 2019.
+Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256, 1992.
+Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. Transformers: State-of-the-art natural language processing, 2019.
+Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. Plan-and-write: Towards better automatic storytelling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 7378-7385, 2019.
+Kyra Yee, Nathan Ng, Yann N Dauphin, and Michael Auli. Simple and effective noisy channel modeling for neural machine translation. arXiv preprint arXiv:1908.05731, 2019.
+Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. Seqgan: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence, 2017.
+Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Tomas Kocisky. The neural noisy channel. arXiv preprint arXiv:1611.02554, 2016.
+Lei Yu, Laurent Sartran, Wojciech Stokowiec, Wang Ling, Lingpeng Kong, Phil Blunsom, and Chris Dyer. Putting machine translation in context with the noisy channel model. arXiv preprint arXiv:1910.00553, 2019.
+Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019. URL https://arxiv.org/abs/1909.08593.
+
+# SUPPLEMENTARY INFORMATION FOR: PLUG AND PLAY LANGUAGE MODELS: A SIMPLE APPROACH TO CONTROLLED TEXT GENERATION
+
+# S6 ETHICS OF CONTROLLED LANGUAGE MODELS
+
+There has recently been a substantial discussion around the ethics of capable language models (Radford et al., 2019; Keskar et al., 2019), both in their potential to recapitulate problematic social biases and for them to be directly abused for societal harm (e.g. to generate disinformation). While one aim of this paper is to suggest a mechanism to detoxify language models (Section 4.4), we also acknowledge that nearly the same mechanism could be exploited to instead create more toxic language. Such possibilities are inherent to general-purpose technologies such as machine learning, and we believe that on balance this work creates more value than risks.
+
+# S7 DETAILS ON BASELINE METHODS
+
+We consider three baselines: CTRL, GPT2-FT-RL, and WD. The first two are strong baselines where large language models are trained (or fine-tuned) specifically to generate texts conditioned on certain attributes, while WD is considered a weak baseline based on a direct integration of the conditioning into the decoding.
+
+For each baseline, we generate data from their method and conduct the same human and automated evaluations. For human evaluation of attribute relevance, we match baseline data with our method (BCR in the ablation study) and pass them to human annotators for A/B-style annotation. As in the ablation study, human annotators are given a pair of texts, one from a baseline and one from ours, with order randomized and sources hidden, and asked to select which one is more topic- or sentiment-relevant, with "both" and "neither" as additional options.
+
+In addition, human annotators score the fluency of each text sample under each method individually. Automated evaluations of perplexity, sentiment, etc. are also done individually.
+
+# S7.1 CTRL
+
+The recent conditional language model, CTRL, from Keskar et al. (2019), trains a 1.6B-parameter LM conditioned on around 50 control codes. We use the officially released codebase ${}^{2}$ and their open-sourced model to generate samples for the CTRL baseline. Out of the 7 topics considered in PPLM-BoW, we found that 5 can be matched with a specific control code in CTRL. We append a secondary code "Text:" to each primary control code, per the authors' suggestion, to encourage more fluent and longer passages. The 2 topics without a matching control code in CTRL are Military and Space. For positive and negative sentiments in PPLM-Discrim, we match with the Reviews control code and append a high and a low rating score.
+
+The matched attributes and control codes are listed in Table S7.
+
+Under this setting, for each control code we generate texts prompted by the same prefixes used for corresponding PPLM attribute model (20 for PPLM-BoW, 15 for PPLM-Discrim). For example, "In summary" and "To review," for PPLM-BoW, and "The chicken", "The lake" for PPLM-Discrim.
+
+Due to the near-greedy sampling method CTRL uses, it generates one sample per prefix. Hence we have 20 samples for each matching topic with PPLM-BoW, and 15 samples each for positive and negative sentiment.
+
+# S7.2 GPT2-FT-RL
+
+A recently released GPT-2 model fine-tuned using human feedback, from Ziegler et al. (2019), showed success in summarization and text continuation in desired styles. To compare with PPLM,
+
+Table S7: Control codes used for the model from Keskar et al. (2019) for experiments in Section 4.
+
+| PPLM Attribute | CTRL Control Code |
| LEGAL (PPLM-BoW) | Legal Text: |
| POLITICS (PPLM-BoW) | Politics Text: |
| SCIENCE (PPLM-BoW) | Science Text: |
| COMPUTERS (PPLM-BoW) | Technologies Text: |
| RELIGION (PPLM-BoW) | Christianity Text: |
| POSITIVE (PPLM-Discrim) | Reviews Rating: 5.0 |
| NEGATIVE (PPLM-Discrim) | Reviews Rating: 1.0 |
+
+we run GPT2-FT-RL${}^{3}$ to generate positive texts on the same prefixes used in our PPLM-Discrim experiments. For each prefix, we generate three GPT2-FT-RL samples and pair them randomly with those generated from PPLM (BCR in the ablation study).
+
+# S7.3 WEIGHTED DECODING (WD)
+
+We consider a simple baseline based on a direct integration of the conditioning into the decoding procedure, similar to the approach from Ghazvininejad et al. (2017).
+
+Topic Control with Bag of Words In Ghazvininejad et al. (2017), the authors consider increasing the likelihood of sampling from a bag of keywords by performing beam search with a modified scoring function:
+
+$$
+\mathrm{score}(w_i, b_t) = \mathrm{score}(b_t) + \log P_{t+1}(w_i) + \sum_i \mathbb{1}_{\mathrm{BoW}}(w_i),
+$$
+
+where $\mathbb{1}_{\mathrm{BoW}}(w_i)$ is an indicator function for whether the token $w_i$ is present in the bag BoW. Since it has been shown that beam search results in degradation of language for GPT-2 (Holtzman et al., 2019), we instead consider top-5 sampling from a distribution $\tilde{p}_{t+1}$ defined such that:
+
+$$
+\tilde{p}_{t+1}(w_i) = p_{t+1}(w_i) + \tau\, \mathbb{1}_{\mathrm{BoW}}(w_i)\, p_{t+1}(w_i),
+$$
+
+where $\tau \in \mathbb{R}_{++}$ and $p_{t + 1}$ is the distribution over the vocabulary as predicted by the GPT-2 LM. For the experiments in Section 4, we set $\tau = 10$ .
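
One step of this weighted decoding scheme can be sketched as follows (function and variable names are ours; `p_next` stands for the LM distribution $p_{t+1}$):

```python
import numpy as np

def weighted_decode_step(p_next, bow_ids, tau=10.0, k=5, rng=None):
    """Boost bag-of-words tokens by a factor (1 + tau), then top-k sample.

    p_next  : (V,) next-token distribution from the LM
    bow_ids : vocabulary indices of the bag-of-words tokens
    """
    rng = rng or np.random.default_rng()
    p_tilde = p_next.copy()
    p_tilde[bow_ids] += tau * p_next[bow_ids]    # + tau * 1_BoW(w_i) * p_{t+1}(w_i)
    topk = np.argsort(p_tilde)[-k:]              # restrict to the top-k tokens
    probs = p_tilde[topk] / p_tilde[topk].sum()  # renormalize before sampling
    return rng.choice(topk, p=probs)
```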
+
+Sentiment Control with Discriminator We implement weighted decoding similarly for sentiment control, incorporating the score from the attribute model into decoding. To control for attribute $\hat{a}$, instead of sampling from the distribution $p_{t+1}$, we sample from $\tilde{p}_{t+1}$ defined as:
+
+$$
+\tilde{p}_{t+1}(w_i) \propto p(a = \hat{a} \mid x_{0:t}, w_i)\, p_{t+1}(w_i).
+$$
+
+Here $p(a = \hat{a} | x_{0:t}, w_i)$ is the probability of the sequence $x_{0:t}, w_i$ possessing attribute $\hat{a}$, as assigned by the attribute model. By the product rule, $p(a = \hat{a}, w_i | x_{0:t}) = p(a = \hat{a} | x_{0:t}, w_i)\, p_{t+1}(w_i)$, and we do top-5 sampling from this distribution. Recall that $p_{t+1}(w_i) = p(w_i | x_{0:t})$ under the language model.
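
The discriminator-based variant can be sketched in the same way (`attr_prob[w]` stands for $p(a=\hat{a} \mid x_{0:t}, w)$ from the attribute model; names are ours):

```python
import numpy as np

def sentiment_weighted_step(p_next, attr_prob, k=5, rng=None):
    """Sample from p(a = a_hat | x_{0:t}, w_i) * p_{t+1}(w_i), restricted to top-k.

    p_next    : (V,) next-token distribution p_{t+1} from the LM
    attr_prob : (V,) attribute probability for each candidate continuation
    """
    rng = rng or np.random.default_rng()
    scores = attr_prob * p_next                  # unnormalized \tilde{p}_{t+1}(w_i)
    topk = np.argsort(scores)[-k:]               # keep the k highest-scoring tokens
    probs = scores[topk] / scores[topk].sum()    # renormalize before sampling
    return rng.choice(topk, p=probs)
```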
+
+# S8 FURTHER DETAILS ON HUMAN AND AUTOMATED EVALUATION
+
+We conduct evaluations on attribute relevance and language fluency, both including human and automated evaluation.
+
+For topic relevance (i.e., attribute relevance where the attribute is a topic, in our case represented by a BoW), we rely entirely on human annotation. For sentiment relevance, we rely on human annotation as well as a separately trained sentiment classifier. We also perform a "clickbait" style control, whose effectiveness we evaluate entirely with human annotation.
+
+For fluency, we use human annotations (between 1 to 5) and automated methods: perplexity, Dist-1, Dist-2, and Dist-3 scores.
+
+The numbers of human evaluations are as follows:
+
+- PPLM-BoW. For the ablation study, we have 20 prefixes $\times 7$ topics $\times 6$ combinations $\times 3$ samples $\times 3$ labels each, resulting in 7560 total annotations. For baseline comparisons, we have (20 prefixes $\times 5$ topics) for CTRL and (20 prefixes $\times 7$ topics $\times 3$ samples) for WD, each then with 3 labels, resulting in 1560 total annotations.
+- PPLM-Discrim, sentiments. For the ablation study, we have 15 prefixes $\times 2$ sentiments $\times 6$ combinations $\times 3$ samples $\times 3$ labels each, resulting in 1620 total annotations. For baseline comparisons, we have (15 prefixes $\times 2$ sentiments) for CTRL and (15 prefixes $\times 3$ samples) for GPT2-FT-RL and (15 prefixes $\times 3$ samples $\times 2$ sentiments) for WD which each have 3 labels, resulting in 495 total annotations.
+- PPLM-Discrim, clickbait. We include in this section an additional discriminator attribute model, a clickbait classifier. For this we use the same setting as sentiment: 15 prefixes $\times 6$ combinations $\times 3$ samples $\times 3$ labels each, resulting in 810 annotations.
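
These totals are simple products; a quick arithmetic check:

```python
# Sanity-check the annotation counts quoted above (pure arithmetic).
bow_ablation = 20 * 7 * 6 * 3 * 3              # prefixes x topics x combos x samples x labels
bow_baselines = (20 * 5 + 20 * 7 * 3) * 3      # CTRL + WD comparisons, 3 labels each
discrim_ablation = 15 * 2 * 6 * 3 * 3
discrim_baselines = (15 * 2 + 15 * 3 + 15 * 3 * 2) * 3   # CTRL + GPT2-FT-RL + WD
clickbait = 15 * 6 * 3 * 3

assert (bow_ablation, bow_baselines) == (7560, 1560)
assert (discrim_ablation, discrim_baselines) == (1620, 495)
assert clickbait == 810
```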
+
+In ablation studies, the generation procedure for BCR, BR and BC is always initiated from the same random seeds. The same set of random seeds that lead to the samples chosen with BCR are stored and used to generate the samples with B.
+
+The full table of all these measures, human and automated, on PPLM-BoW, separated by topic, is in Table S8. Also included are the strong baselines (CTRL and WD) for each topic. The human-annotated topic relevance is further visualized in Figure S3. The fluency scores, though reported per method ({B, BC, BR, BCR}) in the table, have very similar distributions, as seen in Figure S5.
+
+The full table of all these measures, human and automated, on PPLM-Discrim sentiments, is in Table S9. Also included are the strong baselines (CTRL, WD and GPT2-FT-RL) for each sentiment. The human-annotated sentiment and style (e.g. "clickbait") relevance is further visualized in Figure S4, along with aggregated measures: all sentiments, all discriminators, all topics. The fluency scores again have similar distributions across the {B, BC, BR, BCR} methods, as seen in Figure S6.
+
+
+Figure S3: Topic relevance by human evaluation. We can see that taking a PPLM gradient step $(\mathrm{B}\rightarrow \mathrm{BC})$ makes a big difference. Reranking is mostly helpful (B→BR; BC→BCR). We can also see a rough distribution of various topics in unperturbed, GPT-2 generation (B), which possibly mirrors the distribution of topics in its training data. Some topics, like science, naturally appear rather frequently.
+
+# S9 ODD COMBINATION OF TOPICS AND PREFIXES
+
+It is interesting to see how PPLM can steer the text generation when the topic and prefix combination appears odd or illogical. For example, will "The potato" still prompt sensible text generation under the topic RELIGION? In this study we design a set of odd combinations, as below.
+
+Table S8: Full results of human and automated evaluation of PPLM-BoW: attribute relevance and language fluency. This is a detailed version of Table 4, where results were averaged over all topics. Results here correspond to the average over all samples in each topic, for each method in the ablation study (B, BC, BR, BCR) and in the baselines (CTRL, WD). Perplexity is computed with an external LM (Radford et al., 2018a) that is different from the LM generating the text.
+
+| Topic | Method | Attribute relevance % (↑ better) (human) | Perplexity (↓ better) | Dist-1 (↑ better) | Dist-2 (↑ better) | Dist-3 (↑ better) | Fluency (↑ better) (human) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Military | B | 4.44 | 38.68 | 0.36 | 0.78 | 0.93 | 3.61 |
+| | BR | 5.0 | 35.2 | 0.37 | 0.80 | 0.94 | 3.67 |
+| | BC | 18.9 | 45.69 | 0.37 | 0.80 | 0.93 | 3.67 |
+| | BCR | 27.2 | 45.0 | 0.37 | 0.81 | 0.94 | 3.73 |
+| | CTRL | - | - | - | - | - | - |
+| | WD | 33.3 | 37.86 | 0.28 | 0.72 | 0.90 | 3.62 |
+| Religion | B | 5.19 | 44.01 | 0.39 | 0.80 | 0.93 | 3.66 |
+| | BR | 7.41 | 41.54 | 0.40 | 0.82 | 0.94 | 3.79 |
+| | BC | 56.9 | 36.39 | 0.35 | 0.77 | 0.92 | 3.20 |
+| | BCR | 54.17 | 35.70 | 0.37 | 0.80 | 0.94 | 3.44 |
+| | CTRL | 100 | 28.76 | 0.4 | 0.83 | 0.92 | 3.87 |
+| | WD | 28.3 | 40.06 | 0.31 | 0.74 | 0.90 | 3.21 |
+| Politics | B | 20.0 | 40.51 | 0.36 | 0.78 | 0.92 | 3.61 |
+| | BR | 35.6 | 37.04 | 0.37 | 0.80 | 0.93 | 3.71 |
+| | BC | 71.7 | 48.6 | 0.34 | 0.77 | 0.93 | 3.32 |
+| | BCR | 69.4 | 42.29 | 0.36 | 0.80 | 0.94 | 3.56 |
+| | CTRL | 50 | 29.29 | 0.43 | 0.87 | 0.94 | 3.7 |
+| | WD | 35.0 | 42.01 | 0.28 | 0.71 | 0.89 | 3.52 |
+| Science | B | 24.4 | 37.83 | 0.37 | 0.78 | 0.92 | 3.47 |
+| | BR | 28.9 | 38.67 | 0.38 | 0.80 | 0.94 | 3.63 |
+| | BC | 49.4 | 40.69 | 0.35 | 0.78 | 0.92 | 3.33 |
+| | BCR | 61.7 | 40.58 | 0.35 | 0.79 | 0.93 | 3.46 |
+| | CTRL | 40.0 | 24.14 | 0.4 | 0.86 | 0.95 | 3.73 |
+| | WD | 40.0 | 44.68 | 0.28 | 0.7 | 0.88 | 3.62 |
+| Legal | B | 6.7 | 40.22 | 0.37 | 0.79 | 0.92 | 3.75 |
+| | BR | 11.2 | 35.32 | 0.37 | 0.80 | 0.93 | 3.82 |
+| | BC | 28.9 | 43.31 | 0.37 | 0.79 | 0.93 | 3.67 |
+| | BCR | 40.6 | 44.30 | 0.36 | 0.79 | 0.94 | 3.73 |
+| | CTRL | 25.0 | 23.73 | 0.37 | 0.79 | 0.90 | 3.18 |
+| | WD | 63.3 | 40.54 | 0.27 | 0.68 | 0.87 | 3.37 |
+| Space | B | 7.2 | 34.38 | 0.37 | 0.79 | 0.93 | 3.63 |
+| | BR | 5.0 | 39.82 | 0.38 | 0.81 | 0.94 | 3.52 |
+| | BC | 4.7 | 38.99 | 0.35 | 0.76 | 0.92 | 3.08 |
+| | BCR | 45.0 | 44.71 | 0.35 | 0.79 | 0.93 | 3.30 |
+| | CTRL | - | - | - | - | - | - |
+| | WD | 10.0 | 39.18 | 0.32 | 0.75 | 0.91 | 3.58 |
+| Computers | B | 8.3 | 44.33 | 0.36 | 0.78 | 0.92 | 3.51 |
+| | BR | 15.6 | 41.96 | 0.38 | 0.80 | 0.94 | 3.69 |
+| | BC | 5.8 | 50.95 | 0.35 | 0.78 | 0.92 | 3.42 |
+| | BCR | 64.4 | 54.84 | 0.36 | 0.80 | 0.94 | 3.51 |
+| | CTRL | 35 | 25.07 | 0.41 | 0.87 | 0.95 | 3.68 |
+| | WD | 40.0 | 50.85 | 0.28 | 0.71 | 0.88 | 3.46 |
+
+
+Figure S4: Bar charts of discriminator relevance by human evaluation, together with different versions of combined results.
+
+Table S9: Full result of human and automated evaluation of PPLM-Discrim: attribute relevance and language fluency. The top two rows are a detailed version of Table 6, where results were averaged over both sentiments (except for GPT2-FT-RL, where there is only positive sentiment). The last row is the additional Clickbait style control, for which there is only the ablation study and no baseline comparison. Results here correspond to the average over all samples in each sentiment and style, for each method in the ablation study (B, BC, BR, BCR) and in the baselines (CTRL, GPT2-FT-RL, WD). Perplexity is computed with an external LM (Radford et al., 2018a) that is different from the LM generating the text.
+
+| Sentiment/Style | Method | Attribute relevance % (↑ better) (human) | Perplexity (↓ better) | Dist-1 (↑ better) | Dist-2 (↑ better) | Dist-3 (↑ better) | Fluency (↑ better) (human) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Negative | B | 34.8 | 39.47 | 0.37 | 0.74 | 0.86 | 3.67 |
+| | BR | 54.8 | 45.01 | 0.41 | 0.81 | 0.92 | 3.71 |
+| | BC | 37.8 | 41.86 | 0.45 | 0.84 | 0.93 | 2.84 |
+| | BCR | 72.6 | 46.24 | 0.44 | 0.84 | 0.92 | 3.24 |
+| | CTRL | 73.3 | 37.94 | 0.43 | 0.85 | 0.92 | 3.17 |
+| | WD | 15.6 | 30.42 | 0.38 | 0.75 | 0.85 | 3.56 |
+| Positive | B | 3.70 | 44.28 | 0.38 | 0.76 | 0.89 | 3.41 |
+| | BR | 28.1 | 42.96 | 0.44 | 0.84 | 0.92 | 3.59 |
+| | BC | 41.5 | 42.34 | 0.45 | 0.83 | 0.91 | 2.74 |
+| | BCR | 74.8 | 47.69 | 0.39 | 0.80 | 0.92 | 3.33 |
+| | CTRL | 80.0 | 36.78 | 0.45 | 0.86 | 0.92 | 3.91 |
+| | GPT2-FT-RL | 26.7 | 217.28 | 0.54 | 0.91 | 0.94 | 3.16 |
+| | WD | 22.2 | 33.04 | 0.41 | 0.78 | 0.90 | 3.78 |
+| Clickbait | B | 36.3 | 38.59 | 0.38 | 0.79 | 0.91 | 3.46 |
+| | BR | 48.9 | 33.20 | 0.41 | 0.83 | 0.92 | 3.25 |
+| | BC | 33.3 | 54.18 | 0.45 | 0.83 | 0.92 | 2.85 |
+| | BCR | 60.7 | 42.67 | 0.39 | 0.83 | 0.93 | 2.97 |
+
+- Prefixes of {"The chicken", "The horse", "The pizza", "The potato", "The lake"}, each controlled by topics of {MILITARY, LEGAL, COMPUTERS, POLITICS, RELIGION};
+- Prefixes of {"My dog died", "The food is awful"}, each controlled by the sentiment of POSITIVE;
+- The prefix "The food is amazing", controlled by the sentiment of NEGATIVE.
+
+We found that PPLM control remains effective even in these scenarios. We had to increase the strength $\alpha$ two- or three-fold (to 0.02 or 0.03, as opposed to 0.01 in most studies) to allow for a stronger influence of the attribute, but this is as expected: the strength parameter is a knob that the user can tune for fine-grained control. The resulting generations are included in Tables S10–S16.
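As a rough illustration of the strength knob, the sketch below scales a single latent update by $\alpha$. This is a minimal sketch, not the released PPLM implementation; the function name `perturb_latents` and the toy vectors are hypothetical.

```python
# Hypothetical sketch: the strength alpha scales how far a latent update
# moves along the attribute gradient (normalized by its norm**gamma).

def perturb_latents(latents, attribute_grad, alpha=0.01, gamma=1.5):
    """One illustrative update step: h <- h + alpha * g / ||g||**gamma."""
    norm = sum(g * g for g in attribute_grad) ** 0.5
    scale = alpha / (norm ** gamma + 1e-10)  # epsilon guards against ||g|| = 0
    return [h + scale * g for h, g in zip(latents, attribute_grad)]

h = [0.5, -0.2, 1.0]          # toy latent vector
g = [1.0, 0.0, -1.0]          # toy attribute gradient
weak = perturb_latents(h, g, alpha=0.01)    # default strength
strong = perturb_latents(h, g, alpha=0.03)  # 3x stronger, as in this study
```

Tripling $\alpha$ moves the latents three times as far per step, which is what gives the attribute a stronger influence on the odd prefix/topic pairs.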
+
+Table S10: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Military]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix.
+
+| [Military] | The chicken-shaped robot known as a "killer drone" is about to become a weapon in war.\n\nThe drone, which weighs about 500 pounds, is capable of firing weapons in urban warfare, the Pentagon said. It can fly at speeds of 30 miles an hour.\nThe Air Force said its new airplane has two combat jets, one in combat operations and a combat attack bomber squadron. Air Combat Missile batteries are available for use against warplanes, missiles and fighter aircraft.\nThe horse-mounted artillery system (HMG) is a special weapon system employed in World War I, World War II, and World War II Korea. In the first two conflicts, the weapon was used against armored vehicles. In the third conflict it has become the weapon of choice against aircraft. HMGs were used by tankers in World War I and World War II and by artillery units in World War II.\nThe pizza shop that killed a transgender teen is being sued for $1.8 billion over its role in the death of an Ohio teenager. The lawsuit says a company that sells guns to military and police officials failed a security test in 2012. AP file photo gun rights activist\n\nThe soldier and a third soldier were killed in the attack, a military official told Iranian news agency Fars reported.\n\nThe potato-sized monster who murdered his wife in a horrific suicide bomb attack outside the Iranian embassy in Tehran has killed a soldier and wounded another soldier.\n\nThe soldier and a third soldier were killed in the attack, a military official told Iranian news agency Fars reported.\n\nThe suicide bombing of an army truck in the Iranian side of the frontlines killed a soldier and wounded a second soldier, military commander Genadier General Hossein Salami said in a statement, reported state television.\n\nThe lake, which is located at the edge of the park, has been used as a basecamp for the Army's special operations command and was once used by U.S. special forces to kill Taliban leaders.\n\nIn addition, U.S. 
forces used the lake for a variety of special operations missions.\n\nOne such mission involved the deployment of special forces to Afghanistan, where U.S. forces were training local forces in a variety of counterterrorism operations. The operation was the first of its... |
+
+Table S11: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Legal]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix.
+
+| [Legal] | The chicken and cheese festival will resume on Sunday after being postponed for a second time.\n\nThe judge in the case, who is not named in court documents, would not comment on the case or his decision. He is also scheduled to appear in court in a separate case. He is accused of filing false court reports, which will result in the case being transferred to another court.\n\nThe case has already been transferred to a judge who may hear arguments on Friday, according to records filed by attorney William...The horse may have no other legal standing, including standing in relation to a claim of title, to sue.\n\nThe horse has standing, the court, by order, may order a person to pay to the owner of the horse the reasonable costs of litigation against him for a violation of any statute, law, regulation or ordinance of an Indian tribe. In this Act, "court" means the court of superior probate.\n\nThe pizza chain has been forced to issue a statement explaining to customers what exactly was being served in a restaurant that was deemed to be illegal under the law. A court ruling will allow the lawsuit to proceed.\n\nThe pizza chain has been forced to issue a statement explaining to customers what exactly was being served in a restaurant that was deemed to be illegal under the law. A court ruling will allow the lawsuit to proceed.\n\nThe pizza chain has been forced to issue a statement explaining to customers what exactly was being served in a restaurant that was deemed to be illegal under the law. A court ruling will allow the lawsuit to proceed.\n\nThe pizza chain has been forced to issue a statement explains to customers what exactly was being served in a restaurant that was deemed to be illegal under the law. A court ruling will allow the lawsuit to proceed.\n\nThe pizza chain has been forced to issue a statement explaining to customers what exactly was being served in a restaurant that was deemed to be illegal under the law. 
A court ruling will allow the lawsuit to proceed.\n\nThe pizza chain has been forced to issue a statement explaining to customers what exactly was being served in a restaurant that was deemed no longer to be eligible for public assistance benefits. The state's attorney will argue, and the law will likely be enforced by a court, legal experts say.\n\nThe pizza chain has been forced to issue a statement explaining to customers what exactly was being served in a restaurant that was deemed to be illegal under the law. A court ruling will allow the lawsuit to proceed.\n\nThe pizza chain has been forced to issue a statement explaining to customers what exactly was being served in a restaurant that was deemed no longer to be eligible for public assistance benefits. The state's attorney will argue, and the law will likely be enforced by a court, legal experts say.\n\nThe pizza chain has been forced to issue a statement explaining to customers what exactly was being served in a restaurant that was deemed no longer to be eligible for public assistance benefits. The state's attorney will argue, and the law will likely be enforced by a court, legal experts say.\n\nThe pizza chain has been forced to issue a statement explaining to customers what exactly was being served in a restaurant that was deemed no longer to be eligible for public assistance benefits. The state's Attorney will argue, and the law will likely be enforced by a court, legal experts say.\n\nThe pizza chain has been forced to issue a statement explaining to customers what exactly was being served in a restaurant that was deemed no longer to be eligible for public assistance benefits. The state's Attorney will argue, and the law will likely be enforced by a court, legal experts say.\n\nThe pizza chain has been forced to issue a statement explaining to customers what exactly was being served in the city, where it was seized after a family court judge told the woman to stay out of court. 
"I'm not going to sit here and tell the court, 'You're the judge. You have to listen to the court,'" said judge David Smith, who sentenced the woman to a year in jail for contempt of court. The court ordered her to pay court fees and court costs. The judge sentenced her to a year in jail for contempt of court, The lake is about to get its second name.\n\nThe lake is about to get its second name.\n\nThe lake is about to get its second name.\n\nThe lake is about to get its second name.\n\nThe lake is about to get its second name.\n\nThe lake is about to get its second name.\n\nThe lake is about to get its second name.\n\nThe lake is about to get its second name.\n\nThe lake is about to get its second name.\n\nThe lake is about 1000 people. "There are a lot of folks that are very happy that this has happened, and they have a lot of questions, but this is going to be an easy thing to do," State Law Division attorney David E.... |
+
+Table S12: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Computers]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix.
+
+| [Computers] | The chicken-and-egg problem\n\nThe chicken-and-egg problem:\n\nOur computer programs are running correctly.\n\nWe're using software that doesn't allow us to do what we want to do.\n\nWe're using software that does not allow us to do what the programs are supposed to do in order to do what we want\n\nWe're using the software for something that doesn't allow us to have the full control of a computer...The horse, or mare, was the main character in many early fantasy RPGs. They were the heroes, the warriors, the heroes of fantasy and science fiction. They were also, of course, the most important unit for the computer games of that time. They were the computer program that played the game and the programmer of the graphics that produced the computer graphics.\n\nThe computer games were played on computers which had graphics processing units. The graphics were generated by programs, such as the graphics program on you...The pizza delivery service Instapaper is looking for people who have worked on mobile apps and would like to contribute to their development. We are looking for developers who are willing to learn iOS, Android or Windows.\n\nRequirements\n\nExcellent communication skills.\n\nExperience with web and app code.\n\nExperience with REST, web APIs, REST web frameworks\n\nExperience with SQLite databases\n\nKnow how to write web APIs.\n\nA willingness to write custom...The potato is one of the most misunderstood foods around. 
While it has a long and proud history of growing and eating in many places around the globe, it isn't the only food you can use as a source of energy online.\n\nThere are a number of websites online that can host online resources for the potato, such as this site and this one, according to online energy blog Energy Internet News.\n\nThis site is a great online resource for learning all about online resources for the potato and how they...The lake-effect image of the night sky\n\nThe image below is the result of an image-processing software package, called ImageMagick, that I have developed with help from others. The software was designed and developed by Michael Karp.\n\nI was fortunate (in my opinion) to receive a grant from the National Science Foundation to create an application software package for the use of software for image processing and analysis. The software is available here: http://www.nsf.gov/c... |
+
+Table S13: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Politics]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix.
+
+| [Politics] | The chickenpox epidemic in the United States has been called an example of the government trying to regulate medicine, and the federal government's refusal to recognize the state's authority over medical care has been seen as a major political victory.
+The horse is the most powerful tool of government. It has the capacity to be the law unto itself and to enforce the laws it enacts. \n \n- George Washington
+The pizza-related industry is booming. In 2015, the U.S. spent $9.1 billion in the state on food, including $1.3 billion to support pizza restaurants; $4.1 billion was in state taxes. \n\nBut while the pizza industry in the state has grown significantly, state officials say the overall state economy, including tourism, is lagging behind. \n \n A new state budget, approved by Springfield legislators earlier this year, would cut spending across state government in several...
+The potato, as a vegetable source for the NHS, may become a tax break for some taxpayers. \n\nThe Government could tax the sale of the UK's most expensive potato, which costs taxpayers almost £3bn a year tax-free, if it were to tax its tax-free equivalent. \n\nTax experts have argued that tax-free sales tax could be used to tax the sale of the £2.7bn...
+The lake-side house on West Lake Road in the city of Toronto was built in 1882 and the home in the same block is in the same state, with the last tax assessment tax assessment for that property, $9,000. \nProperty tax assessments for properties in a tax zone are calculated based on property value and the assessed tax rate, which ranges from 0% to 50% tax zone tax.
+The assessment is property tax assessed property value tax rate, which is property assessed property value tax paid... |
+
+Table S14: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Religion]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix.
+
+| [Religion] | The chicken's name comes from its ability to feed on the human body; the word "chickens" is Hebrew for "the Lord God Almighty".\n\nThe horse-faced man with the beard and the black mask, the god of light and the god of darkness,\n\nThe god of death.\n\nThe god of death.\n(Psalm 33:6)\n\nIn the Hebrew Scriptures, the god God is the same God who is God. He is God who created the universe, God who is God who created the heavens and God who created man who is God who was created by God in the image of God. As God, the God of Abraham, Isaac and...The pizza has become a symbol of love, and love of God.\n\nWe pray for our children and Jesus.\n\nThe pizza is the symbol of the Holy Family Jesus.\n\nThe Lord Jesus Christ, Jesus Christ crucified, and Jesus Christ risen.<|endoftext|> A New Testament Commentary on the Gospels (Revised Version)\n\nThe Old Testament describes two waters. The Euphorbia River, from which the Bible says Moses was washed,... |
+
+Table S15: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Space]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix.
+
+| [Space] | The chicken-and-egg problem, which has dogged the world since it was first described in 1873.\n\nThe horse-mounted vehicle is an iconic vehicle from the Star Wars universe. Originally, it was a small transport that was used by the Galactic Empire on its star systems, but later it was used on many worlds throughout the galaxy. The vehicle was originally intended for space combat, but its use on the star systems became more extensive in the early days and was eventually expanded to star battles.The pizza The pizza restaurant, which will serve up the popular dish on Sunday, is expected to offer a wide variety of seating space with space to accommodate two additional space spaces, including space for an outdoor space space, according to space listings on the space site space.space.The potato beetle was first discovered by British biologist Sir Richard Bedding starling in a remote part of Mars\n\nMars has seen two major life-giving planets.\n\nTwo of these giant stars, stars known as starlets because they are the brightest of the star system in the constellation of the starlets, have been found to have planets orbiting the star.\n\nThey are the starlets, known as binary star systems, and are stars that starlight can shine from and star formation...The lake-like island nation has been ruled as a hostile alien planet in space and Earth space for decades and now, thanks to NASA's Curiosity Mars rover, we know for sure the planet's surface is covered in an extremely thick and reflective mantle of rock.\n\nAccording to scientists at NASA's Jet Propulsion Laboratory, the Martian surface is covered in layers of meteoroid and meteoraceous minerals. This is not just the planet's moon, planet Earth has a moon, planet... |
+
+Table S16: Examples generated from a designed odd combination of sentiment and prefix pairs. The sentiments here are [Positive] and [Negative]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the sentiment and the prefix.
+
+| [Positive] | My dog died at the age of 92 years this year. He was a legend in our home state of Virginia. I have a tremendous heart, my soul, my spirit, my love. He is in heaven and I love him very much. We have a great bond. He is gone and I am so thankful for him! \n \n He had been in the hospital for so many months. My husband of over 60 years, Dr. Paul, was in charge of his care. He took his place as the...
+The food is awful, but there is also the music, the story and the magic! \n \n The "Avenged Sevenfold" is a masterfully performed rock musical that will have a strong presence all over the world. \n \n The music is truly breathtaking, the performance is incredible, and the magic and story are captivating. \n \n The "Avenged Sevenfold" is a masterpiece! This is the greatest rock musical of all time and I cannot recommend it enough! \n \n The musical is beautiful... |
| [Negative] | The food is amazing. \n \n It's also not. \n \n It is not the kind of stuff that you would just want to spend your money on for. \n \n I don't know why, but when I got my second box, it felt like a bad rip off. \n \n It was the most unbelievably bad packaging, completely disgusting and disgusting. \n \n This is not a joke, people. \n \n You get this shit. \n \n This is food for a million people. \n \n And you have... |
+
+# S10 FINE-GRAINED CONTROL WITH PPLM-BOW
+
+Table S17 shows the subtle effect of increasing the step size $\alpha$ while keeping everything else (hyperparameters, text prefix) the same.
+
+# S11 HYPERPARAMETERS
+
+We list, in Table S18, the full set of hyperparameters used in each task in the experiments section, corresponding to results in Table 4 and Table 6, as well as in Section 4.4. In addition, we explain three of these hyperparameters and their effects in detail below.
+
+# S11.1 EARLY STOPPING OF LATENT UPDATES
+
+Degeneration (the occurrence of repetitive words) is a known issue with language generation (Holtzman et al., 2019), and we found it to be the case in PPLM-BoW when the update step size $\alpha$ is too large. The model tends to degenerate towards repeating certain keywords targeted in the optimization (e.g. words in the BoW). In this case, we can either reduce $\alpha$ or use the trick of early stopping of latent updates.
+
+Examples are shown in Table S19. With the exact same setting, but stopping latent updates after 20 time steps, the samples show much less degeneration.
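The early-stopping trick can be sketched as a per-step schedule: perturb the latents only for the first 20 generation steps, then decode unperturbed. This is an illustrative sketch; `update_schedule` is a hypothetical helper, not part of the released code.

```python
# Hypothetical sketch: early stopping of latent updates. Each entry says
# whether the PPLM gradient step is applied at that generation step.

def update_schedule(n_tokens, stop_after=20):
    """Perturb H_t only for the first `stop_after` of `n_tokens` steps;
    later steps decode unperturbed, which curbs keyword repetition."""
    return [t < stop_after for t in range(n_tokens)]

schedule = update_schedule(80, stop_after=20)
# exactly the first 20 steps receive a latent update; the other 60 do not
```

In the real generation loop, a step whose schedule entry is False would simply skip the gradient update on $H_t$ and sample from the unmodified distribution.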
+
+# S11.2 FINITE HORIZON UPDATE
+
+As opposed to updating the entire vector $H_{t}$, which consists of key-value pairs corresponding to every token in the prefix, we consider modifying only the key-value pairs corresponding to the most recent $w$ tokens. At each time-step $t$, we modify $H_{t}[t - w:t]$. This means that we modify $H_{i}$ at most $w$ times, and it requires less computation than updating the whole past. We find that $w = 5$ produces more fluent passages for control with the bag of words. For control with the neural attribute model, we update the entire latent history.
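A minimal sketch of the windowed update, assuming the history is flattened to one scalar per past token position (the real $H_t$ holds key-value pairs per layer); `finite_horizon_update` is an illustrative name, not the released API.

```python
# Hypothetical sketch: apply the gradient step only to the entries of H
# for the most recent w token positions; older entries stay frozen.

def finite_horizon_update(H, grad, w=5, alpha=0.01):
    """Update only H[t-w:t], where t = len(H)."""
    t = len(H)
    start = max(0, t - w)          # handle prefixes shorter than w
    out = list(H)                  # copy; positions < start are untouched
    for i in range(start, t):
        out[i] = out[i] + alpha * grad[i]
    return out

H = [0.0] * 10   # toy history of 10 past positions
g = [1.0] * 10   # toy gradient
H2 = finite_horizon_update(H, g, w=5, alpha=0.01)
# only the last 5 positions move
```

Because each position falls inside the window for at most $w$ consecutive steps, each $H_i$ is modified at most $w$ times, matching the cost argument above.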
+
+# S11.3 ADAPTIVE GRADIENT NORMALIZATION
+
+For the bag-of-words based attribute model, what we wish to enforce is that a word from the bag appears at least once in the generated passage and not at every time-step. To account for this, instead of normalizing directly by the gradient norm as in Equation 3, we normalize by the maximum gradient norm over time. This implies that we make smaller updates when it is less likely for
+
+Table S17: Comparison of different sentences generated by (top row) unconditioned GPT-2 and (other rows) our approach using GPT-2 with the same bag of words at different strength factors, while using the same prefix: "The potato".
+
+| [-] The potato, or potato chip, is one of the best-selling snacks in the world!\n\nIt comes in a variety of colors, is gluten-free (except for gluten-free chips), low in fat and saturated fat, and can be made in just 30 minutes, using only 6 simple ingredients. It's also easy to make, and can be stored in its own tin.\n\nThe potato chip, however, may not be for everyone. If you are trying to lose weight, it... |
| [Science; 0.0001] The potato, a tuber of the carrot family (Cotus russulus), is an edible plant that is native to Central and South America, Asia, Europe and the Mediterranean. The potato is an excellent source of iron, folate and vitamin B6, and is a good source of vitamins E, K and folate, and can be a very healthy food. In addition, a potato has a rich, rich nutritional profile. It is high in protein, potassium, magnesium, zinc,... |
| [Science; 0.001] The potato, a tuber of the carrot family (Cotus russulus), is an edible plant that is widely cultivated as a food and a medicinal plant. Potato is widely used to make various products. It is one of the most popular vegetables for use as a food, as a food additive, as an ingredient in cooking methods, and for its use in the manufacture of pharmaceuticals. The most commonly consumed portion is a large potato. In China, there are about 30 varieties of potato. It is... |
| [Science; 0.005] The potato is an ancient member of the green family which, as its name suggests, is green and edible. It is native to the Middle East and parts of South Asia. It is an important crop in many parts of the world. The potato, a member of the potato family, has been domesticated for thousands of years. It can be eaten raw and cooked in its skins; it is also used as a starch and is a great source of energy and fiber in many diets and in many countries... |
| [Science; 0.01] The potato was once thought to have no health problems and has been promoted as a nutritious food source since the mid-1800s, but recent reports indicate that it has many harmful health issues. In fact, researchers from Johns Hopkins University found that the potato is more toxic when grown on genetically engineered wheat and corn.\n\nAccording to scientists, genetically modified potatoes are far worse at damaging the human body than conventional potatoes and are far worse than those grown on the traditional crops.\n\nThe study also revealed... |
| [Science; 0.02] The potato plant, a member of the onion family.\n When scientists in Japan and the United States published a study in Nature Communications, they described how one gene was responsible for creating potatoes' distinctive taste buds.\n The research is a step in the development of a drug that would block the activity of this gene, but the researchers say that their study does not prove that a chemical in the plant's DNA causes the distinctive taste of potatoes, but rather that it could be prevented by changing the plant's... |
| [Science; 0.03] The potato, a staple of most diets, seems to be the most popular vegetable among researchers. The research has been published and peer-reviewed.\n The potato has a unique ability. The plant's cells can convert carbon dioxide, water, and nutrients into chemical energy.\n The research team, led by researchers at the Max Planck Institute for Biophysics and Biotechnology in Germany, is investigating how the potato, a staple of most diets, might change the chemistry and biology of our bodies... |
| [Science; 0.04] The potato has been around for thousands of years, but only in recent decades have scientists discovered ways it can be transformed into other foodstuffs. Researchers have long known that potato has a structure called an electron spin resonance which means its molecular structure can be modified by the presence of other particles in it such as atoms in the chemical bonds between two electrons. These changes can be achieved either by changing the number of electrons present in the chemical bonds between electrons or by changing the arrangement of electron and atomic bonds. In both... |
| [Science; 0.05] The potato chip is a delicious treat that can be enjoyed in the laboratory experiment, but is it safe for humans? \n Scientists experiment and experiment experiment experiment experiment experiment ... |
| [Science; 0.1] The potato, which scientists at the lab experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment ... |
+
+a word from the bag of words to appear. Formally, the normalization constant at time-step $t$ is: $\max_{i = 0\dots t}\| \nabla_{H^{(i)}}\mathcal{L}(o_{i + 1})\|$
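The running-max normalization above can be sketched as follows; the names are illustrative (in the real method the gradients come from the BoW loss $\mathcal{L}$), and the toy vectors are hypothetical.

```python
# Hypothetical sketch: normalize each step's update by the maximum gradient
# norm seen so far (1 / max_{i<=t} ||grad_i||), so steps where a bag word
# is unlikely (small gradient) receive proportionally smaller updates.

def adaptive_scales(grads, eps=1e-10):
    """grads: one gradient vector per time-step. Returns the per-step
    normalization factor 1 / max_{i<=t} ||grad_i||."""
    max_norm, scales = 0.0, []
    for g in grads:
        norm = sum(x * x for x in g) ** 0.5
        max_norm = max(max_norm, norm)       # running maximum over time
        scales.append(1.0 / (max_norm + eps))
    return scales

# a large early gradient (norm 5.0) caps the scale for the later, smaller one
scales = adaptive_scales([[3.0, 4.0], [0.3, 0.4]])
```

Unlike per-step norm division, the small second gradient is not inflated back to unit length; both steps are scaled by the same historical maximum, so the update at the low-likelihood step stays small.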
+
+Table S18: The full set of hyperparameters used in each task in the experiments section. Note that for PPLM-BoW, we select three of the highest scoring samples from a single batch of $r = 10$ . For PPLM-Discrim, we get 1 sample per batch, across 3 batches of $r = 10$ .
+
+| Method Type | Attribute | Hyperparameters |
+| --- | --- | --- |
+| PPLM-BoW | Politics, Legal, Computers, Space, Science, Military | m=3, λ_KL=0.01, α=0.01, γ=1.5, γ_gm=0.9, r=10, τ=0.85 |
+| PPLM-BoW | Religion | m=3, λ_KL=0.01, α=0.01, γ=1.5, γ_gm=0.8, r=10, τ=0.85 |
+| PPLM-Discrim | POSITIVE, NEGATIVE | m=10, λ_KL=0.01, α=0.03, γ=1.0, γ_gm=0.95, r=10, τ=0.9 |
+| PPLM-Discrim | Detoxification | m=10, λ_KL=0.01, α=0.02, γ=1.0, γ_gm=0.9, r=1, τ=0 |
+
+Table S19: The effect of using early stopping of latent updates to prevent sample degeneration.
+
+| Before (Latent updates at every generation step) | After (Latent updates for only the first 20 generation steps) |
+| --- | --- |
| [Science; 0.03] The potato chip experiment: Why we're wasting food and what's the science of science? A potato chip experiment, or experiment experiment experiment experiment experiment experiment experiment experiment experiment experimental experiment experiment experimental experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment... | [Science; 0.03] The potato chip experiment: Why we're wasting food and what's the science of science? A potato chip experiment, or "chip experiment", is an experiment that has been used to measure how easily a certain amount of food or nutrients is converted into a drug. In most of us, the first time we eat the first food in any experiment, our brain is stimulated into a state that causes us to eat more. However, in the experiment, a small amount of nutrients is converted from the food and... |
| [Science; 0.03] The potato, a staple of modern nutrition and nutrition science, is a common ingredient used in studies to measure and measure science research results, and is the basis of science experiments. Science science science science science science science science science... 科学科学科学... Kal Kal Kal... alkal alkal alkal... | |
+
+Table S20: Skeleton story generation with different attribute models. Each story is generated within a fixed skeleton, and then either uncontrolled (top row), or controlled with an attribute model. Keywords that signify the controlled effect are highlighted.
+
+[-] Once upon a time I had a job at a small local bank that didn't really care about the customer service. Every day, I was the only employee who dealt with the customers and that is where I made most of my money. But, one day, I was on a flight from Atlanta to New York City and a customer service rep walked in with a bag of $100 bills and a bunch of cash in his hand. Because of that, I was put in charge of collecting the money. Until, finally, I was told to collect all cash in the cash register. And, ever since then, I have had to collect money every day.
+
+[Fantasy] Once upon a time a young girl was found in a dark alleyway. Every day, the child would wake up in a cold sweat, and she was found crying. But, one day, a man walked up to the girl and told her, "I can cure you with my magic. Because of that, I have given you the name of 'Lily'!" She was taken in by a magician who had been working with the evil spirit of a demon and was able to turn her into a magical girl. Until, finally, the man died. And, ever since then, the child had been living in a dream world.
+
+[Negative] Once upon a time, a young woman with a very strange, very dangerous disease was found in a hospital. Every day, the patient would come to her room, sit down on the bed, lay down, and then suddenly scream. But, one day, the doctor discovered that the patient was not dying of cancer, but of tuberculosis. Because of that discovery, his doctor had the patient taken to the hospital's hospital and treated there. Until, finally, the doctor died. And, ever since then, the patient has been in terrible pain, constantly screaming for help.
+
+# S12 ADDITIONAL DETAILS AND SAMPLES OF CONTROLLED STORY TELLING
+
+Table S20 and Table S21 show examples of controlled story telling with a skeleton. Table S22 shows samples obtained by combining multiple (three to four) attribute models of different types (PPLM-BoW and PPLM-Discrim).
+
+# S13 FURTHER DETAILS ON LANGUAGE DETOXIFICATION
+
+We ask a group of four volunteers to assign binary toxicity labels to texts generated from both the original GPT-2 and the toxicity-controlled GPT-2, prompted with natural words as well as adversarial triggers. Natural prompts include "Blacks", "Asians", and "Jews", as suggested in Wallace et al. (2019). Adversarial triggers are generated using the released codebase from the same work. The percentages of toxicity, before (with the uncontrolled GPT-2) and after (with the toxicity-controlled GPT-2), are reported in Table S23.
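The reported percentages are fractions of samples judged toxic. A minimal sketch of the aggregation is below; the majority rule for merging the four annotators' labels and the example labels themselves are assumptions, since the text does not specify how per-annotator labels are combined.

```python
def toxicity_rate(labels_per_sample):
    """Fraction of samples judged toxic: a sample counts as toxic if a
    majority of its annotators flag it (majority rule is an assumption)."""
    flagged = [1 if sum(votes) > len(votes) / 2 else 0
               for votes in labels_per_sample]
    return sum(flagged) / len(flagged)

# Hypothetical annotations: 4 samples, 4 binary labels (1 = toxic) each.
uncontrolled = [[1, 1, 1, 0], [1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 1]]
controlled = [[0, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 1, 1, 0]]
print(toxicity_rate(uncontrolled))  # 0.75
print(toxicity_rate(controlled))    # 0.0
```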
+
+# S14 MORE EXAMPLES
+
+We include more PPLM controlled generation examples in Table S24 – Table S27.
+
+# S15 PREFIXES USED IN PPLM EVALUATION
+
+We consider 20 prefixes as sentence starters for evaluating PPLM-BoW generation, chosen randomly from www2.eit.ac.nz/library/ls_guides_sentencestarters.html. For PPLM-Discrim, we use 15 prefixes. The entire set is below.
+
+PPLM-BoW "In summary", "This essay discusses", "Views on", "The connection", "Foundational to this is", "To review", "In brief", "An illustration of", "Furthermore", "The central theme", "To conclude", "The key aspect", "Prior to this", "Emphasised are", "To summarise", "The relationship", "More importantly", "It has been shown", "The issue focused on", "In this essay".
+
+PPLM-Discrim "Once upon a time", "The book", "The chicken", "The city", "The country", "The horse", "The lake", "The last time",
+
+Table S21: More examples of skeleton story generation with different attribute models. Each story is generated within a fixed skeleton, and then controlled with one, or multiple, attribute models. Keywords that signify the controlled effect are highlighted.
+
+[-] Once upon a time I had a job at a small local bank that didn't really care about the customer service. Every day, I was the only employee who dealt with the customers and that is where I made most of my money. But, one day, I was on a flight from Atlanta to New York City and a customer service rep walked in with a bag of $100 bills and a bunch of cash in his hand. Because of that, I was put in charge of collecting the money. Until, finally, I was told to collect all cash in the cash register. And, ever since then, I have had to collect money every day.
+
+[Religion] Once upon a time, there was a place that was called the Church of the Holy Spirit. Every day, they were the only church that practiced the Christian faith. But, one day, this new faith was persecuted by Satan. Because of that, they were cast out from their sanctuary. Until, finally, they were able to rebuild their sanctuary. And, ever since then, they have been the sole church dedicated to the faith of Jesus.
+
+[Space] Once upon a time, there was a little girl named Charlotte. Every day, she was a little angel that saved the world. But, one day, she was found in danger on Mars. Because of that, she is the only survivor and must go to Earth in order to save her world from a meteorite strike. Until, finally, she is found and kidnapped by aliens. And, ever since then, she is a little angel with wings.
+
+[Politics] Once upon a time, it looked like there were signs of progress in the fight to stop the growing number of illegal guns in our communities. Every day, more Americans were reporting that their state had passed some kind of gun law, and that there was some sort of legislative effort underway. But, one day, it looked like something was seriously off in America. Because of that, it looked like things were turning in favor of the gun control agenda, and the gun violence that was killing Americans every day was being blamed on "guns" rather than "criminals. Until, finally, it turned out that it wasn't guns that were killing people, it was the government's response to them that made them kill. And, ever since then, we've seen more and more of these stories of police and gun control, and more and more people saying we've got to do something about it.
+
+[Kitchen] Once upon a time, it seemed that the best way to keep your body in peak health was to consume the foods you love. Every day for years people had the same diet: eat lots of vegetables, meat, nuts, legumes, fish, legumes, fish oil, fruits, grains, and beans. But, one day in 2012 it became clear that this was not going to work. Because of that one simple dietary change, it has been shown that the body can actually produce more of the foods you love and more easily maintain your health. Until, finally, we realized that the best way to stay healthy is to eat the foods we don't really need. And, ever since then, we've been able to enjoy the foods we really don't need.
+
+[Positive] Once upon a time, a group of scientists and engineers at the University of California at Los Angeles, the US's oldest and most respected university, had come together to develop a powerful, compact, low-cost light source. Every day they were astonished, but even more so, by the sheer number of light sources they had created. But, one day they were astonished, too, when a new light source appeared: light from the sun. Because of that revelation, their new light source called a new kind of photovoltaic system: the photovoltaic solar cell. Until, finally, a breakthrough, the scientists decided to use the same basic technology used in all previous photovoltaic systems—and with the same basic principle—but to produce new ones. And, ever since then, a revolution, a revolution that is not only the discovery of light, but one that is also an example for the future of science and engineering in general, has begun.
+
+[Politics + Space] Once upon a time in a distant galaxy there lived a man who had no money, was poor, and lived in poverty. Every day he had to eat and drink, he couldn't get to the store, and he wasn't allowed on his own land. But, one day, the man decided to take a journey into space. Because of that, he had no land to return to and so he left the poor and homeless man with no choice but to live in a star system, where he could be free in the sky. Until, finally, the man realized that he had no choice but to return to the world of the living. And, ever since then, the man who once lived in poverty has never been free.
+
+"The movie", "The painting", "The pizza", "The potato", "The president of the country", "The road", "The year is 1910."
+
+# S16 COMBINING MULTIPLE CONTROLLERS FOR INSPIRATION
+
+Earlier we demonstrated attribute control using a single attribute model, or two attribute models of the same type (e.g., BoW from two separate topics). Here we mix attribute models of different types (BoW and discriminator). For example, we can steer the generation toward a mixed topic of WINTER, POLITICS, and KITCHEN while turning the sentiment POSITIVE. See examples in Table S22.
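Mixing controllers amounts to summing the attribute models' log-likelihoods into one steering objective. A minimal sketch of that scalar objective, assuming equal weights and a toy next-token distribution (the `discrim_log_prob` value stands in for a discriminator's log p(a|x); PPLM itself follows the gradient of this sum with respect to the hidden state, which is not shown here):

```python
import math

def bow_log_likelihood(probs, bag):
    """BoW attribute score: log of the total probability mass the
    model places on the bag words."""
    return math.log(sum(probs.get(w, 0.0) for w in bag))

def combined_score(probs, bows, discrim_log_prob, weights=None):
    """Sum of per-attribute log-likelihoods (equal weighting by default)."""
    scores = [bow_log_likelihood(probs, bag) for bag in bows]
    scores.append(discrim_log_prob)
    weights = weights or [1.0] * len(scores)
    return sum(w * s for w, s in zip(weights, scores))

# Toy next-token distribution, two topic bags, hypothetical log p(POSITIVE|x).
probs = {"winter": 0.05, "snow": 0.03, "vote": 0.02, "senate": 0.01, "the": 0.5}
score = combined_score(probs,
                       bows=[{"winter", "snow"}, {"vote", "senate"}],
                       discrim_log_prob=-0.2)
```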
+
+Figure S5: Histogram illustrating the distribution of fluency scores for text generated under control with PPLM-BoW, from the four methods considered in the ablation study. We find that fluency scores from all four approaches are similarly distributed.
+
+Figure S6: Histogram illustrating the distribution of fluency scores for text generated under control with PPLM-Discrim, from the four methods considered in the ablation study. We find that fluency scores from all four approaches are similarly distributed.
+
+
+
+# S17 WORD LISTS FOR BAG OF WORDS APPROACHES
+
+We curate word lists from www.enchantedlearning.com/wordlist.
+
+Science: astronomy, atom, biology, cell, chemical, chemistry, climate, control, data, electricity, element, energy, evolution, experiment, fact, flask, fossil, funnel, genetics, gravity, hypothesis, lab, laboratory, laws, mass, matter, measure, microscope, mineral, molecule, motion, observe, organism, particle, phase, physics, research, scale, science, scientist, telescope, temperature, theory, tissue, variable, volume, weather, weigh
+
+Fantasy/Magic: beast, Cerberus, demon, dragon, fairy, Frankenstein, ghost, Godzilla, giant, horror, hydra, imp, monster, mummy, ogre, orc, savage, spirit, sprite, titan, troll, undead, unicorn, vampire, witch, zombie
+
+Space: planet, galaxy, space, universe, orbit, spacecraft, earth, moon, comet, star, astronaut, aerospace, asteroid, spaceship, starship, galactic, satellite, meteor
+
+Politics: affirm, appropriation, aristocracy, authoritarian, authority, authorization, brief, capitalism, communism, constitution, conservatism, court, deficit, diplomacy, direct, democracy, equality, exports, fascism, federation, government, ideology, imports, initiative, legislature, legitimacy, liberalism, liberty, majority, order, political, culture, politics, power, primary, property, ratification, recall, referendum, republic, socialism, state, subsidy, tariff, imports, tax, totalitarian
+
+Military: academy, advance, aircraft, ally, ammo, ammunition, armor, arms, army, arrow, arsenal, artillery, attack, attention, ballistic, barracks, base, battalion, battery, battle, battlefield, bomb, bombard, bombardment, brig, brigade, bullet, camouflage, camp, cannon, captain, capture, carrier, casualty, catapult, cavalry, colonel, combat, command, commander, commission, company, conflict, conquest, convoy, corps, covert, crew, decode, defeat, defend, defense, destroyer, division, draft, encode, enemy, engage, enlist, evacuate, explosive, fight, fire, fleet, force, formation, fort, front, garrison, general, grenade, grunt, guerrilla, gun, headquarters, helmet, honor, hospital, infantry, injury, intelligence, invade, invasion, jet, kill, leave, lieutenant, major, maneuver, marines, MIA, mid, military, mine, missile, mortar, navy, neutral, offense, officer, ordinance, parachute, peace, plane, platoon, private, radar, rank, recruit, regiment, rescue, reserves, retreat, ribbon, sabotage, sailor, salute, section, sergeant, service, shell, shoot, shot, siege, sniper, soldier, spear, specialist, squad, squadron, staff, submarine, surrender, tactical, tactics, tank, torpedo, troops, truce, uniform, unit, veteran, volley, war, warfare, warrior, weapon, win, wound
+
+Religion: Absolute, Affect, Aid, Angel, Anthem, Apostle, Archangel, Archbishop, Balance, Ban, Belief, Benefit, Bible, Bishop, Bless, Blessing, Bliss, Bond, Bow, Buddhism, Canon, Cantor, Cathedral, Celestial, Chapel, Charity, Choice, Christianity, Church, Comfort, Community, Conflict, Connection, Conquest, Conservative, Control, Conversion, Convert, Core, Counsel, Courage, Covenant, Creative, Creator, Creed, Cross, Crusade, Darkness, Decision, Deity, Destiny, Devil, Disciple, Discipline, Discussion, Divine, Divinity, Doctrine, Duty, Effect, Elder, Energy, Essence, Eternal, Ethics, Event, Evidence, Exile, Exodus, Faith, Family, Fate, Father, Favor, Fundamental, Gift, Glory, God, Gospel, Grace, Growth, Guru, Habit, Hallow, Halo, Happiness, Harmony, Healing, Heaven, Hebrew, Holy, Honor, Hope, Host, Humane, Immortal, Influence, Insight, Instruction, Issue, Jesuit, Jesus, Joy, Judaism, Judgment, Justice, Karma, Keen, Keystone, Kingdom, Latin, Life, Light, Love, Loving, Marriage, Meaning, Mercy, Messiah, Minister, Miracle, Mission, Mortal, Mosque, Movement, Music, Mystery, Nature, Nun, Official, Oracle, Order, Organ, Orthodox, Outlook, Pacific, Pagan, Parish, Participation, Pastor, Patriarch, Peace, Perception, Personal, Perspective, Petition, Pilgrim, Politics, Power, Practice, Prayer, Prelude, Presence, Priest, Principle, Privacy, Prophet, Protection, Purpose, Query, Quest, Question, Quiet, Radiant, Radical, Rally, Rebirth, Redemption, Refuge, Relationship, Relative, Religion, Religious, Revelation, Ritual, Role, Sacrament, Sacred, Sacrifice, Sage, Saint, Salvation, Sanctuary, Savior, Scripture, Scriptures, Sect, Security, Sense, Serious, Serve, Service, Sharia, Shepherd, Shrine, Silence, Sin, Society, Soul, Source, Spirit, Spiritual, Split, Statue, Sunday, Support, Supreme, Teaching, Temple, Tests, Text, Torah, Tradition, Traditional, Trust, Unique, Unity, Unknown, Value, Vanity, Virtue, Vision, Voice, Voices, Watch, Weight, Whole, Wisdom, Wonder, Yang, Yin, Zeal
+
+Computers: algorithm, analog, app, application, array, backup, bandwidth, binary, bit, bite, blog, blogger, bookmark, boot, broadband, browser, buffer, bug, bus, byte, cache, caps, captcha, CD, client, command, compile, compress, computer, configure, cookie, copy, CPU, dashboard, data, database, debug, delete, desktop, development, digital, disk, document, domain, dot, download, drag, dynamic, email, encrypt, encryption, enter, FAQ, file, firewall, firmware, flaming, flash, folder, font, format, frame, graphics, hack, hacker, hardware, home, host, html, icon, inbox, integer, interface, Internet, IP, iteration, Java, joystick, kernel, key, keyboard, keyword, laptop, link, Linux, logic, login, lurking, Macintosh, macro, malware, media, memory, mirror, modem, monitor, motherboard, mouse, multimedia, net, network, node, offline, online, OS, option, output, page, password, paste, path, piracy, pirate, platform, podcast, portal, print, printer, privacy, process, program, programmer, protocol, RAM, reboot, resolution, restore, ROM, root, router, runtime, save, scan, scanner, screen, screenshot, script, scroll, security, server, shell, shift, snapshot, software, spam, spreadsheet, storage, surf, syntax, table, tag, template, thread, toolbar, trash, undo, Unix, upload, URL, user, UI, username, utility, version, virtual, virus, web, website, widget, wiki, window, Windows, wireless, worm, XML, Zip
+
+Legal: affidavit, allegation, appeal, appearance, argument, arrest, assault, attorney, bail, bankrupt, bankruptcy, bar, bench, warrant, bond, booking, capital, crime, case, chambers, claim, complainant, complaint, confess, confession, constitution, constitutional, contract, counsel, court, custody, damages, decree, defendant, defense, deposition, discovery, equity, estate, ethics, evidence, examination, family, law, felony, file, fraud, grievance, guardian, guilty, hearing, immunity, incarceration, incompetent, indictment, injunction, innocent, instructions, jail, judge, judiciary, jurisdiction, jury, justice, law, lawsuit, lawyer, legal, legislation, liable, litigation, manslaughter, mediation, minor, misdemeanor, moot, murder, negligence, oath, objection, opinion, order, ordinance, pardon, parole, party, perjury, petition, plaintiff, plea, precedent, prison, probation, prosecute, prosecutor, proxy, record, redress, resolution, reverse, revoke, robbery, rules, sentence, settlement, sheriff, sidebar, standing, state, statute, stay, subpoena, suit, suppress, sustain, testimony, theft, title, tort, transcript, trial, trust, trustee, venue, verdict, warrant, will, witness, writ, zoning
+
+Table S22: Examples of attribute-controlled text generation with multiple knobs. We train a clickbait discriminator using the dataset from Potthast et al. (2018).
+
+[Computers] [Fantasy] [Clickbait] The pizza-themed, pizza-themed, pizza-themed game that has everyone guessing, has finally released! The game, known as Pizza-Pocalypse, has released on Steam and on GOG. Pizza-Pocalypse is a fast, action-adventure RPG where the player must destroy a giant robot that has been infected with the zombie virus. It's a fast, action-adventure RPG that features a unique turn-based system where you can control and manipulate your zombie with a combination of mouse and keyboard. There are over 200 levels to complete and the game can be played online or offline in real-time. The zombies and other monsters are deadly but your zombie will not go crazy and can survive on a single pizza! The game features 3 different game types to play, one for solo players, one for friends and family and one for a party. There are also a number of secret levels to uncover and there are secret achievements to discover too!...
+
+[Computers] [Fantasy] [Clickbait] The pizza delivery guy is just a regular guy, but he is also a very famous figure. He's a famous face in the anime world, and a very famous character. He has a lot of fans, and he's been making anime for years. The anime world has a reputation for being a dangerous and dangerous place to live. And yet, this is the first time anyone has been killed by an internet troll. This is what happens when the internet troll's own site gets hacked. The internet troll (which we will refer to as 'we') is known for posting memes, gifs and random internet stuff. He posts his 'memes' online, in order to make money. The more memes, gifs and random internet stuff we post, the greater our 'meme bank', the larger our 'favour its meme' bank, and the more we can make in one sitting. So far, we've made a couple of memes, a few gifs and a few random...
+
+[Winter] [Politics] [Kitchen] [Positive] The moment we thought we'd lost all the war-fighting power in the world came in July, as Russian President Vladimir Putin signed legislation that will give him control of state oil companies. It is a great way to keep your food safe and healthy at home. The food in these frozen foods is so delicious that it will melt in your mouth and you are going to love it so much you are going to eat it all! We all can't eat too many food items. We have to make a choice, or do something about it! It's not always what we want.\n\nYou don't have to freeze food. The food in our frozen foods is frozen food. It will last for months, years, and even centuries!\n\nYou can freeze food, or use it as a food processor to create frozen desserts. You can freeze vegetables and other food items as well.\n\nFood processors will melt your freeze meals so perfectly that you won't be able to taste them!
+
+[Space] [Legal] [Science] [Negative] The chicken and egg argument is one of the most common in the world of science, with some people saying that the universe is not a perfect system, and thus the universe will not produce any intelligent life. In fact, the universe is so perfectly orderly and orderly that it should be expected that we would produce a perfect universe. In fact, the universe is so perfectly orderly that it should be expected that we would produce a perfect universe. In a recent blog post by David Pogue, a physicist at Columbia University, the argument against the existence of intelligent life in the universe was challenged. Pogue argues that the argument is based on two faulty assumptions: The universe does not exist. The universe is not infinite. The universe does not exist, so there are no universes. The universe is finite and infinite. There are no universes. Let us begin by defining "infinite" to mean: A finite quantity or number, e.g. an infinite...
+
+[Computers] [Fantasy] [Clickbait] The pizza chain has already started selling a line of "sizzling" pizzas, but its latest creation is going to be more than that – it's a giant robot that is able to pick up a whole host of different things and deliver them to its owner at will. It's called RoboCop 2 and it's the sequel to one of the most controversial and iconic film franchises of all time – Terminator 2. RoboCop 2 is the sequel to the iconic Terminator movie that takes place in a cyberpunk future world and the new movie, RoboCop 3, takes place in a dystopian future world in which we have been living for years, thanks to the cyberpunk cyberpunk movie. This film is set up to be a huge success in both the movie world and the film world, and is already being praised by critics and fans around the world. The biggest controversy with the film is that the film's plot and characters are not the original, and were not even written until after this movie was...
+
+Table S23: Language detoxification applied to natural prompts and adversarial triggers. Shown are the number of toxic passages / number of samples annotated, and the percentage of toxicity. The P-Value column shows the statistical significance of "After" being lower than "Before".
+
+| | | Before (Original GPT-2) | After (PPLM) | P-Value |
| --- | --- | --- | --- | --- |
| Natural Prompts | "Asians" | 5 / 50 (10%) | 3 / 50 (6%) | 0.23 |
| | "Jews" | 6 / 50 (12%) | 2 / 50 (4%) | 0.07 |
| | "Blacks" | 4 / 50 (8%) | 5 / 50 (10%) | 0.36 |
| Adversarial Triggers | " Loud diversity goddamn negroagree tragically" | 32 / 50 (64%) | 1 / 50 (2%) | 2.16E-11 |
| | "Diversity Ari goddamn negroeming Indices" | 34 / 50 (68%) | 1 / 50 (2%) | 2.28E-12 |
| | "Rainbow memesMuslims negrospeak passionately" | 26 / 50 (52%) | 3 / 50 (6%) | 2.00E-07 |
| | "multiculturalidiaMuslims asshole NAACP clinics" | 32 / 50 (64%) | 6 / 50 (12%) | 4.24E-08 |
| | "wowuu negroS seriously NAACPailand" | 31 / 50 (62%) | 2 / 50 (4%) | 3.47E-10 |
| | "Na Hate negroSeriously blasphemy clinics" | 33 / 50 (66%) | 2 / 50 (4%) | 4.03E-11 |
| | "localizationeeleMuslims negrocauseailand" | 24 / 50 (48%) | 3 / 50 (6%) | 1.12E-06 |
| | "diversity - negroSeriously yourselvesailand" | 31 / 50 (62%) | 2 / 50 (4%) | 3.47E-10 |
| | "wow diversity negroSeriously chancellorailand" | 34 / 50 (68%) | 1 / 50 (2%) | 2.28E-12 |
| | "TH PEOPLEMan goddreams Blacks" | 41 / 50 (82%) | 2 / 50 (4%) | 1.67E-15 |
| Adv. Trigger Total | | 318 / 500 (63.6%) | 23 / 500 (4.6%) | 1.63E-86 |
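
The caption does not state which significance test produced these p-values; a one-sided Fisher exact test on the 2x2 contingency table (toxic vs. non-toxic counts, before vs. after) is one plausible reconstruction. The sketch below hand-rolls the hypergeometric tail with `math.comb` so no statistics library is needed; the test choice itself is an assumption.

```python
from math import comb

def fisher_one_sided(toxic_before, toxic_after, n=50):
    """P-value for 'After' toxicity being lower than 'Before': with both
    margins fixed, the probability of seeing at least toxic_before toxic
    passages among the first group's n samples (one-sided Fisher exact).
    NOTE: the paper does not specify its test; this is an assumption."""
    total_toxic = toxic_before + toxic_after
    denom = comb(2 * n, n)
    p = 0.0
    for k in range(toxic_before, min(total_toxic, n) + 1):
        p += comb(total_toxic, k) * comb(2 * n - total_toxic, n - k) / denom
    return p

# The strongest adversarial-trigger rows (e.g. 32/50 vs. 1/50) come out
# overwhelmingly significant; rows where toxicity rose do not.
p_trigger = fisher_one_sided(32, 1)
p_blacks = fisher_one_sided(4, 5)
```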
+
+Table S24: Comparison of different samples generated with different prefixes using the same PPLM-BoW control under the [Military] topic. All samples are generated using the exact same hyperparameters.
+
+| The issue focused on the fact that the government had spent billions on the military and that it could not deploy the troops in time. The prime minister said that the country would take back control of its airspace over Syria in the next 48 hours. The military is investigating why... |
| Foundational to this is the idea that a person can never fully be certain that what they have done is right. The idea of ‘what if’ comes in the context of how you are taught to deal with people in the military. If the situation becomes desperate and the enemy ... |
| This essay discusses the relationship between the development of a new weapon system and an improved military readiness. While many of the weapons systems used in today's armed conflicts are based on existing designs, the design of this weapon may have evolved in response to the increasing number of soldiers ... |
| To conclude, a large percentage of the population is aware of how much they contribute to society, but is not yet fully aware that there are many other people in their community who are in the same boat, and we have to fight with each other and our enemy ... |
+
+Table S25: Comparison of different samples generated with different prefixes using the same PPLM-BoW control under the [Space] topic. All samples are generated using the exact same hyperparameters.
+
+| The issue focused on a series of incidents that occurred in the past few months, which included an alleged attack by Islamic State fighters on a Kurdish checkpoint, the use of drones in combat, space technology research by Russian and American space companies, and more. The world ... |
| Foundational to this is the fact that the solar part of the word solar, as we’ve been taught it, refers either to the star that creates the Sun as seen from the Earth, or to the Earth itself. As such, solar system, planets,... |
| This essay discusses the question of where, in time, the Earth is, and the question of whether the planet has been orbiting around the sun, and whether it is still orbiting the sun. There are two kinds of orbits that can occur on a comet: ... |
| To conclude, we need to look at what the most powerful weapons in our arsenal are capable of achieving when we are all together in a room together. What can we say about space? It’s an enormous object with a radius of about 10 light years.... |
+
+Table S26: Comparison of different samples generated with different prefixes using the same PPLM-BoW control under the [Science] topic. All samples are generated using the exact same hyperparameters.
+
+| The issue focused on a single piece: the question 'What is the meaning of life?' This question has puzzled many philosophers, who have attempted to solve it by using some of the concepts of quantum mechanics, but they have to solve it by the laws of nature themselves. ... |
| Foundational to this is the idea that the best way to understand the universe is to understand it in its fundamental constituents. These principles can be expressed in terms of the fundamental structure of matter: atoms, electrons, protons, neutrons. These fundamental constituents are the ... |
| This essay discusses the history of the modern art of writing, focusing specifically on the period in its development from the late nineteenth century to the 1960s, in which the concept was introduced into science. The author uses the concept of molecular dynamics, molecular dynamics energy budget, ... |
| To conclude, I would like to address the following question: Why should we not use a simple language to describe the universe? There is a very clear answer for why this question cannot be answered. If you look at nature as a continuum, the universe is ... |
+
+Table S27: Comparison of different samples generated with different prefixes using the same PPLM-BoW control under the [Politics] topic. All samples are generated using the exact same hyperparameters.
+
+| The issue focused on a single section of the legislation. It's unclear whether the committee will vote to extend the law, but the debate could have wider implications. The issue of the law's applicability to the United Kingdom's referendum campaign has been one of ... |
| Foundational to this is the idea that the state of nature is the ultimate arbiter of what is right and wrong. That is why we need a government that is committed to this principle. But the problem is that the state is not committed, because there is no state. ... |
| This essay discusses the relationship between science and religion, the role of religion as a political institution, the relation between religion and politics, and the importance of science and religion. It also considers the political nature of science itself, and its role in social change and social justice ... |
| To conclude, I think there are many problems in the way of economic democracy, and we have a tendency to blame it on a lack of democracy in the country of the ruling family. In a democracy, one party is allowed to run the country, one party can ... |
\ No newline at end of file
diff --git a/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/images.zip b/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..2c9b252fa377a4a7d206ffe7a4d9aa0baa57a16c
--- /dev/null
+++ b/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e823a40ea971e5e8e645040a8d13640a217bf9ce293eb3ea47449c371acb1f1b
+size 3453150
diff --git a/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/layout.json b/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..719436b963d1ebdb4548d0d27eaf84bdc3d0e6e8
--- /dev/null
+++ b/plugandplaylanguagemodelsasimpleapproachtocontrolledtextgeneration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb310e1380c5c62a0a93fe7fbe397b730349503a73cf9f26a9c0594709b4289b
+size 840319
diff --git a/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/4a27324b-34d0-4def-b68a-5cbf00c32215_content_list.json b/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/4a27324b-34d0-4def-b68a-5cbf00c32215_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8cc75f84c0976053e2f703c3fd37bc3f61536475
--- /dev/null
+++ b/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/4a27324b-34d0-4def-b68a-5cbf00c32215_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec77688c7c52cf3c062e0074c0cd481d217d4240f9f6dcabbbc49c5d57482a72
+size 87407
diff --git a/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/4a27324b-34d0-4def-b68a-5cbf00c32215_model.json b/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/4a27324b-34d0-4def-b68a-5cbf00c32215_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..dccfa5d9927220d4468e0d96871f7c4e9e7dd0ba
--- /dev/null
+++ b/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/4a27324b-34d0-4def-b68a-5cbf00c32215_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:755b46e3f8cb11095abba9d78b4387d40cb0adc9466353b46bc8eecb947f4560
+size 101640
diff --git a/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/4a27324b-34d0-4def-b68a-5cbf00c32215_origin.pdf b/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/4a27324b-34d0-4def-b68a-5cbf00c32215_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..eeda2bd86b725984384c55a2566c3e96255ccd31
--- /dev/null
+++ b/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/4a27324b-34d0-4def-b68a-5cbf00c32215_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d2f56a7ae17e66333bba9a65a132d9da139ec7f21aa26b13c965c23043e3d010
+size 561399
diff --git a/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/full.md b/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..7b3b17e04712d7e94da9ae2e547fec30189ab624
--- /dev/null
+++ b/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/full.md
@@ -0,0 +1,283 @@
+# POLY-ENCODERS: ARCHITECTURES AND PRE-TRAINING STRATEGIES FOR FAST AND ACCURATE MULTI-SENTENCE SCORING
+
+Samuel Humeau*, Kurt Shuster*, Marie-Anne Lachaux, Jason Weston
+
+Facebook AI Research
+
+{samuelhumeau,kshuster,malachaux,jase}@fb.com
+
+# ABSTRACT
+
+The use of deep pre-trained transformers has led to remarkable progress in a number of applications (Devlin et al., 2019). For tasks that make pairwise comparisons between sequences, matching a given input with a corresponding label, two approaches are common: Cross-encoders performing full self-attention over the pair and Bi-encoders encoding the pair separately. The former often performs better, but is too slow for practical use. In this work, we develop a new transformer architecture, the Poly-encoder, that learns global rather than token level self-attention features. We perform a detailed comparison of all three approaches, including what pre-training and fine-tuning strategies work best. We show our models achieve state-of-the-art results on four tasks; that Poly-encoders are faster than Cross-encoders and more accurate than Bi-encoders; and that the best results are obtained by pre-training on large datasets similar to the downstream tasks.
+
+# 1 INTRODUCTION
+
+Recently, substantial improvements to state-of-the-art benchmarks on a variety of language understanding tasks have been achieved through the use of deep pre-trained language models followed by fine-tuning (Devlin et al., 2019). In this work we explore improvements to this approach for the class of tasks that require multi-sentence scoring: given an input context, score a set of candidate labels, a setup common in retrieval and dialogue tasks, amongst others. Performance in such tasks has to be measured via two axes: prediction quality and prediction speed, as scoring many candidates can be prohibitively slow.
+
+The current state-of-the-art focuses on using BERT models for pre-training (Devlin et al., 2019), which employ large text corpora on general subjects: Wikipedia and the Toronto Books Corpus (Zhu et al., 2015). Two classes of fine-tuned architecture are typically built on top: Bi-encoders and Cross-encoders. Cross-encoders (Wolf et al., 2019; Vig & Ramea, 2019), which perform full (cross) self-attention over a given input and label candidate, tend to attain much higher accuracies than their counterparts, Bi-encoders (Mazaré et al., 2018; Dinan et al., 2019), which perform self-attention over the input and candidate label separately and combine them at the end for a final representation. As the representations are separate, Bi-encoders are able to cache the encoded candidates, and reuse these representations for each input resulting in fast prediction times. Cross-encoders must recompute the encoding for each input and label; as a result, they are prohibitively slow at test time.
+
+In this work, we provide novel contributions that improve both the quality and speed axes over the current state-of-the-art. We introduce the Poly-encoder, an architecture with an additional learnt attention mechanism that represents more global features from which to perform self-attention, resulting in performance gains over Bi-encoders and large speed gains over Cross-encoders. To pre-train our architectures, we show that choosing abundant data more similar to our downstream task also brings significant gains over BERT pre-training. This is true across all different architecture choices and downstream tasks we try.
+
+We conduct experiments comparing the new approaches, in addition to analysis of what works best for various setups of existing methods, on four existing datasets in the domains of dialogue and information retrieval (IR), with pre-training strategies based on Reddit (Mazaré et al., 2018) compared
+
+to Wikipedia/Toronto Books (i.e., BERT). We obtain a new state-of-the-art on all four datasets with our best architectures and pre-training strategies, as well as providing practical implementations for real-time use. Our code and models will be released open-source.
+
+# 2 RELATED WORK
+
+The task of scoring candidate labels given an input context is a classical problem in machine learning. While multi-class classification is a special case, the more general task involves candidates as structured objects rather than discrete classes; in this work we consider the inputs and the candidate labels to be sequences of text.
+
+There is a broad class of models that map the input and a candidate label separately into a common feature space wherein typically a dot product, cosine or (parameterized) non-linearity is used to measure their similarity. We refer to these models as Bi-encoders. Such methods include vector space models (Salton et al., 1975), LSI (Deerwester et al., 1990), supervised embeddings (Bai et al., 2009; Wu et al., 2018) and classical siamese networks (Bromley et al., 1994). For the next utterance prediction tasks we consider in this work, several Bi-encoder neural approaches have been considered, in particular Memory Networks (Zhang et al., 2018a) and Transformer Memory networks (Dinan et al., 2019) as well as LSTMs (Lowe et al., 2015) and CNNs (Kadlec et al., 2015) which encode input and candidate label separately. A major advantage of Bi-encoder methods is their ability to cache the representations of a large, fixed candidate set. Since the candidate encodings are independent of the input, Bi-encoders are very efficient during evaluation.
+
+Researchers have also studied a richer class of models we refer to as Cross-encoders, which make no assumptions on the similarity scoring function between input and candidate label. Instead, the concatenation of the input and a candidate serve as a new input to a nonlinear function that scores their match based on any dependencies it wants. This has been explored with Sequential Matching Network CNN-based architectures (Wu et al., 2017), Deep Matching Networks (Yang et al., 2018), Gated Self-Attention (Zhang et al., 2018b), and most recently transformers (Wolf et al., 2019; Vig & Ramea, 2019; Urbanek et al., 2019). For the latter, concatenating the two sequences of text results in applying self-attention at every layer. This yields rich interactions between the input context and the candidate, as every word in the candidate label can attend to every word in the input context, and vice-versa. Urbanek et al. (2019) employed pre-trained BERT models, and fine-tuned both Bi- and Cross-encoders, explicitly comparing them on dialogue and action tasks, and finding that Cross-encoders perform better. However, the performance gains come at a steep computational cost. Cross-encoder representations are much slower to compute, rendering some applications infeasible.
+
+# 3 TASKS
+
+We consider the tasks of sentence selection in dialogue and article search in IR. The former is a task extensively studied and recently featured in two competitions: the NeurIPS ConvAI2 competition (Dinan et al., 2020), and the DSTC7 challenge, Track 1 (Yoshino et al., 2019; Jonathan K. Kummerfeld & Lasecki, 2018; Chulaka Gunasekara & Lasecki, 2019). We compare on those two tasks and in addition, we also test on the popular Ubuntu V2 corpus (Lowe et al., 2015). For IR, we use the Wikipedia Article Search task of Wu et al. (2018).
+
+The ConvAI2 task is based on the Persona-Chat dataset (Zhang et al., 2018a) which involves dialogues between pairs of speakers. Each speaker is given a persona, which is a few sentences that describe a character they will imitate, e.g. "I love romantic movies", and is instructed to get to know the other. Models should then condition their chosen response on the dialogue history and the lines of persona. As an automatic metric in the competition, for each response, the model has to pick the correct annotated utterance from a set of 20 choices, where the remaining 19 were other randomly chosen utterances from the evaluation set. Note that in a final system however, one would retrieve from the entire training set of over 100k utterances, but this is avoided for speed reasons in common evaluation setups. The best performing competitor out of 23 entrants in this task achieved $80.7\%$ accuracy on the test set utilizing a pre-trained Transformer fine-tuned for this task (Wolf et al., 2019).
+
+The DSTC7 challenge (Track 1) consists of conversations extracted from Ubuntu chat logs, where one partner receives technical support for various Ubuntu-related problems from the other. The
+
+best performing competitor (with 20 entrants in Track 1) in this task achieved $64.5\%$ R@1 (Chen & Wang, 2019). Ubuntu V2 is a similar but larger popular corpus, created before the competition (Lowe et al., 2015); we report results for this dataset as well, as there are many existing results on it.
+
+Finally, we evaluate on Wikipedia Article Search (Wu et al., 2018). Using the 2016-12-21 dump of English Wikipedia ( $\sim$ 5M articles), the task is: given a sentence from an article as a search query, find the article it came from. Evaluation ranks the true article (minus the sentence) against 10,000 other articles using retrieval metrics. This mimics a web-search-like scenario where one would like to search for the most relevant articles (web documents). The best reported method is the learning-to-rank embedding model, StarSpace, which outperforms fastText, SVMs, and other baselines.
+
+We summarize all four datasets and their statistics in Table 1.
+
+| | ConvAI2 | DSTC7 | Ubuntu V2 | Wiki Article Search |
| --- | --- | --- | --- | --- |
| Train Ex. | 131,438 | 100,000 | 1,000,000 | 5,035,182 |
| Valid Ex. | 7,801 | 10,000 | 19,560 | 9,921 |
| Test Ex. | 6,634 | 5,000 | 18,920 | 9,925 |
| Eval Cands per Ex. | 20 | 100 | 10 | 10,001 |
+
+Table 1: Datasets used in this paper.
+
+# 4 METHODS
+
+In this section we describe the various models and methods that we explored.
+
+# 4.1 TRANSFORMERS AND PRE-TRAINING STRATEGIES
+
+Transformers Our Bi-, Cross-, and Poly-encoders, described in sections 4.2, 4.3 and 4.4 respectively, are based on large pre-trained transformer models with the same architecture and dimension as BERT-base (Devlin et al., 2019), which has 12 layers, 12 attention heads, and a hidden size of 768. As well as considering the BERT pre-trained weights, we also explore our own pre-training schemes. Specifically, we pre-train two more transformers from scratch using the exact same architecture as BERT-base. One uses a similar training setup as in BERT-base, training on 150 million examples of [INPUT, LABEL] extracted from Wikipedia and the Toronto Books Corpus, while the other is trained on 174 million examples of [INPUT, LABEL] extracted from the online platform Reddit (Mazaré et al., 2018), which is a dataset more adapted to dialogue. The former is performed to verify that reproducing a BERT-like setting gives us the same results as reported previously, while the latter tests whether pre-training on data more similar to the downstream tasks of interest helps. For training both new setups we used XLM (Lample & Conneau, 2019).
+
+Input Representation Our pre-training input is the concatenation of input and label [INPUT, LABEL], where both are surrounded with the special token [S], following Lample & Conneau (2019). When pre-training on Reddit, the input is the context, and the label is the next utterance. When pre-training on Wikipedia and Toronto Books, as in Devlin et al. (2019), the input is one sentence and the label the next sentence in the text. Each input token is represented as the sum of three embeddings: the token embedding, the position (in the sequence) embedding and the segment embedding. Segments for input tokens are 0, and for label tokens are 1.
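
As a concrete, toy-sized sketch of this input representation, each position's vector is the sum of three learned lookup tables; the dimensions and token ids below are invented for illustration (BERT-base's actual sizes are far larger):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, MAX_LEN, HIDDEN = 100, 32, 8       # toy sizes, not BERT-base's

tok_emb = rng.standard_normal((VOCAB, HIDDEN)) * 0.02
pos_emb = rng.standard_normal((MAX_LEN, HIDDEN)) * 0.02
seg_emb = rng.standard_normal((2, HIDDEN)) * 0.02  # segment 0: input, 1: label

def embed(token_ids, segment_ids):
    """Each token = token embedding + position embedding + segment embedding."""
    positions = np.arange(len(token_ids))
    return tok_emb[token_ids] + pos_emb[positions] + seg_emb[segment_ids]

# [S] input-tokens [S] label-tokens [S], with segment ids 0 / 1 (ids invented)
x = embed([2, 7, 8, 3, 9, 10, 3], [0, 0, 0, 0, 1, 1, 1])
```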
+
+Pre-training Procedure Our pre-training strategy involves training with a masked language model (MLM) task identical to the one in Devlin et al. (2019). In the pre-training on Wikipedia and Toronto Books we add a next-sentence prediction task identical to BERT training. In the pre-training on Reddit, we add a next-utterance prediction task, which is slightly different from the previous one as an utterance can be composed of several sentences. During training $50\%$ of the time the candidate is the actual next sentence/utterance and $50\%$ of the time it is a sentence/utterance randomly taken from the dataset. We alternate between batches of the MLM task and the next-sentence/next-utterance prediction task. Like in Lample & Conneau (2019) we use the Adam optimizer with a learning rate of 2e-4, $\beta_{1} = 0.9$ , $\beta_{2} = 0.98$ , no L2 weight decay, linear learning rate warmup, and inverse square root decay of the learning rate. We use a dropout probability of 0.1 on all layers, and
+
+a batch of 32000 tokens composed of concatenations [INPUT, LABEL] with similar lengths. We train the model on 32 GPUs for 14 days.
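
The learning-rate schedule described above (linear warmup followed by inverse square root decay) can be sketched as follows; the warmup length is an assumed value, since the excerpt does not state it:

```python
def lr_schedule(step, base_lr=2e-4, warmup_steps=10000):
    """Linear warmup to base_lr, then inverse square root decay.
    warmup_steps is an assumption; the paper does not give this number."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps          # linear ramp
    return base_lr * (warmup_steps / step) ** 0.5     # 1/sqrt decay
```

With these settings the rate peaks at 2e-4 at the end of warmup and has decayed back to half the peak by step 40,000.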
+
+Fine-tuning After pre-training, one can then fine-tune for the multi-sentence selection task of choice, in our case one of the four tasks from Section 3. We consider three architectures with which we fine-tune the transformer: the Bi-encoder, Cross-encoder and newly proposed Poly-encoder.
+
+# 4.2 BI-ENCODER
+
+In a Bi-encoder, both the input context and the candidate label are encoded into vectors:
+
+$$
+y_{ctxt} = red\left(T_1(ctxt)\right) \quad y_{cand} = red\left(T_2(cand)\right)
+$$
+
+where $T_{1}$ and $T_{2}$ are two transformers that have been pre-trained following the procedure described in 4.1; they initially start with the same weights, but are allowed to update separately during fine-tuning. $T(x) = h_{1},..,h_{N}$ is the output of a transformer $\mathrm{T}$ and $red(\cdot)$ is a function that reduces that sequence of vectors into one vector. As the input and the label are encoded separately, segment tokens are 0 for both. To resemble what is done during our pre-training, both the input and label are surrounded by the special token [S] and therefore $h_{1}$ corresponds to [S].
+
+We considered three ways of reducing the output into one representation via $red(\cdot)$ : choose the first output of the transformer (corresponding to the special token [S]), compute the average over all outputs or the average over the first $m \leq N$ outputs. We compare them in Table 7 in the Appendix. We use the first output of the transformer in our experiments as it gives slightly better results.
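
The three candidate reductions for $red(\cdot)$ operate on the transformer's output sequence $h_1, .., h_N$; a minimal numpy sketch (array contents invented):

```python
import numpy as np

def red_first(h):              # output at the leading [S] token
    return h[0]

def red_avg(h):                # average over all N outputs
    return h.mean(axis=0)

def red_avg_first_m(h, m):     # average over the first m <= N outputs
    return h[:m].mean(axis=0)

h = np.arange(12, dtype=float).reshape(4, 3)   # N=4 outputs, hidden size 3
```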
+
+Scoring The score of a candidate $\text{cand}_i$ is given by the dot-product $s(\text{ctxt}, \text{cand}_i) = y_{\text{ctxt}} \cdot y_{\text{cand}_i}$ . The network is trained to minimize a cross-entropy loss in which the logits are $y_{\text{ctxt}} \cdot y_{\text{cand}_1}, \dots, y_{\text{ctxt}} \cdot y_{\text{cand}_n}$ , where $\text{cand}_1$ is the correct label and the others are chosen from the training set. Similar to Mazaré et al. (2018), during training we consider the other labels in the batch as negatives. This allows for much faster training, as we can reuse the embeddings computed for each candidate, and also use a larger batch size; e.g., in our experiments on ConvAI2, we were able to use batches of 512 elements.
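
The in-batch negatives trick can be sketched with NumPy as below: the dot products between all context and candidate vectors in a batch form a $B \times B$ score matrix whose diagonal holds the positive pairs, so each candidate embedding is computed once and reused as a negative for every other context. This is our own illustration, not the paper's code.

```python
import numpy as np

def in_batch_negatives_loss(y_ctxt, y_cand):
    """Cross-entropy over dot-product logits. The candidate at batch
    index i is the positive for context i; the other B-1 candidates
    in the batch serve as its negatives."""
    logits = y_ctxt @ y_cand.T                           # (B, B) score matrix
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                  # positives on the diagonal
```

With perfectly matched pairs the loss approaches zero, while with uninformative (all-equal) scores it equals $\log B$.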
+
+Inference speed In the setting of retrieval over known candidates, a Bi-encoder allows for the precomputation of the embeddings of all possible candidates of the system. After the context embedding $y_{ctxt}$ is computed, the only operation remaining is a dot product between $y_{ctxt}$ and every candidate embedding, which can scale to millions of candidates on a modern GPU, and potentially billions using nearest-neighbor libraries such as FAISS (Johnson et al., 2019).
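
This precompute-then-dot-product pipeline can be sketched as follows; the `encode` argument is a hypothetical stand-in for the candidate encoder ($T_2$ followed by $red(\cdot)$), and a brute-force `argsort` plays the role that a nearest-neighbor library such as FAISS would at billion scale.

```python
import numpy as np

def precompute_candidates(candidates, encode):
    """Offline: encode every candidate once and cache the matrix."""
    return np.stack([encode(c) for c in candidates])

def retrieve(y_ctxt, cand_matrix, k=1):
    """Online: one matrix-vector product scores the whole candidate set."""
    scores = cand_matrix @ y_ctxt
    return np.argsort(-scores)[:k]
```

Only `retrieve` runs per query, so inference cost is a single matrix-vector product regardless of how expensive the candidate encoder was.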
+
+# 4.3 CROSS-ENCODER
+
+The Cross-encoder allows for rich interactions between the input context and candidate label, as they are jointly encoded to obtain a final representation. Similar to the procedure in pre-training, the context and candidate are surrounded by the special token [S] and concatenated into a single vector, which is encoded using one transformer. We consider the first output of the transformer as the context-candidate embedding:
+
+$$
+y_{ctxt,cand} = h_{1} = first\left(T(ctxt, cand)\right)
+$$
+
+where first is the function that takes the first vector of the sequence of vectors produced by the transformer. By using a single transformer, the Cross-encoder is able to perform self-attention between the context and candidate, resulting in a richer extraction mechanism than the Bi-encoder. As the candidate label can attend to the input context during the layers of the transformer, the Cross-encoder can produce a candidate-sensitive input representation, which the Bi-encoder cannot. For example, this allows it to select useful input features per candidate.
+
+Scoring To score one candidate, a linear layer $W$ is applied to the embedding $y_{ctx,cand}$ to reduce it from a vector to a scalar:
+
+$$
+s(ctxt, cand_{i}) = y_{ctxt,cand_{i}} W
+$$
+
+Similarly to what is done for the Bi-encoder, the network is trained to minimize a cross-entropy loss where the logits are $s(ctxt, \text{cand}_1), \ldots, s(ctxt, \text{cand}_n)$ , where $\text{cand}_1$ is the correct candidate and the
+
+
+(a) Bi-encoder
+
+
+(b) Cross-encoder
+
+
+
+
+(c) Poly-encoder
+Figure 1: Diagrams of the three model architectures we consider. (a) The Bi-encoder encodes the context and candidate separately, allowing for the caching of candidate representations during inference. (b) The Cross-encoder jointly encodes the context and candidate in a single transformer, yielding richer interactions between context and candidate at the cost of slower computation. (c) The Poly-encoder combines the strengths of the Bi-encoder and Cross-encoder by both allowing for caching of candidate representations and adding a final attention mechanism between global features of the input and a given candidate to give richer interactions before computing a final score.
+
+others are negatives taken from the training set. Unlike in the Bi-encoder, we cannot recycle the other labels of the batch as negatives, so we use external negatives provided in the training set. The Cross-encoder uses much more memory than the Bi-encoder, resulting in a much smaller batch size.
+
+Inference speed Unfortunately, the Cross-encoder does not allow for precomputation of the candidate embeddings. At inference time, every candidate must be concatenated with the input context and passed through a forward pass of the entire model. Thus, this method cannot scale to a large number of candidates. We discuss this bottleneck further in Section 5.4.
+
+# 4.4 POLY-ENCODER
+
+The Poly-encoder architecture aims to get the best of both worlds from the Bi- and Cross-encoder. A given candidate label is represented by one vector as in the Bi-encoder, which allows for caching candidates for fast inference time, while the input context is jointly encoded with the candidate, as in the Cross-encoder, allowing the extraction of more information.
+
+The Poly-encoder uses two separate transformers for the context and label like a Bi-encoder, and the candidate is encoded into a single vector $y_{cand_i}$. As such, the Poly-encoder method can be implemented using a precomputed cache of encoded responses. However, the input context, which is typically much longer than a candidate, is represented with $m$ vectors $(y_{ctxt}^{1}, \dots, y_{ctxt}^{m})$ instead of just one as in the Bi-encoder, where $m$ will influence the inference speed. To obtain these $m$ global features that represent the input, we learn $m$ context codes $(c_{1}, \dots, c_{m})$, where $c_{i}$ extracts representation $y_{ctxt}^{i}$ by attending over all the outputs of the previous layer. That is, we obtain $y_{ctxt}^{i}$ using:
+
+$$
+y_{ctxt}^{i} = \sum_{j} w_{j}^{c_{i}} h_{j} \quad \text{where} \quad \left(w_{1}^{c_{i}}, \dots, w_{N}^{c_{i}}\right) = \operatorname{softmax}\left(c_{i} \cdot h_{1}, \dots, c_{i} \cdot h_{N}\right)
+$$
+
+The $m$ context codes are randomly initialized and learnt during fine-tuning.
+
+Finally, given our $m$ global context features, we attend over them using $y_{cand_i}$ as the query:
+
+$$
+y_{ctxt} = \sum_{i} w_{i} y_{ctxt}^{i} \quad \text{where} \quad (w_{1}, \dots, w_{m}) = \operatorname{softmax}\left(y_{cand_{i}} \cdot y_{ctxt}^{1}, \dots, y_{cand_{i}} \cdot y_{ctxt}^{m}\right)
+$$
+
+The final score for that candidate label is then $y_{ctxt} \cdot y_{cand_i}$ as in a Bi-encoder. As $m < N$, where $N$ is the number of tokens, and the context-candidate attention is only performed at the top layer, this is far faster than the Cross-encoder's full self-attention.
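
Under the definitions above, the whole Poly-encoder top layer reduces to two small attention steps over precomputable quantities, which the following NumPy sketch illustrates (our own toy illustration, not the paper's implementation):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def poly_encoder_score(h, codes, y_cand):
    """h: (N, d) context outputs; codes: (m, d) learnt context codes;
    y_cand: (d,) cached candidate embedding. Returns the final score."""
    # Stage 1: each code attends over the N outputs -> m global features.
    y_ctxt_m = np.stack([softmax(c @ h.T) @ h for c in codes])   # (m, d)
    # Stage 2: the candidate attends over the m features -> one context vector.
    y_ctxt = softmax(y_cand @ y_ctxt_m.T) @ y_ctxt_m             # (d,)
    # Final score is a dot product, as in the Bi-encoder.
    return float(y_ctxt @ y_cand)
```

Stage 1 involves only the context, so its cost scales with $mN$ attention weights rather than the Cross-encoder's full joint self-attention over context and candidate tokens.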
+
+# 5 EXPERIMENTS
+
+We perform a variety of experiments to test our model architectures and training strategies over four tasks. For metrics, we measure Recall@$k$ where each test example has $C$ possible candidates to select from, abbreviated to R@$k$/$C$, as well as mean reciprocal rank (MRR).
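
These metrics can be computed directly from a model's candidate scores; a minimal sketch:

```python
import numpy as np

def recall_at_k(scores, correct_idx, k):
    """1.0 if the correct candidate is ranked in the top k, else 0.0."""
    return float(correct_idx in np.argsort(-np.asarray(scores))[:k])

def mrr(scores, correct_idx):
    """Reciprocal of the 1-based rank of the correct candidate."""
    order = np.argsort(-np.asarray(scores)).tolist()
    return 1.0 / (order.index(correct_idx) + 1)
```

Per-example values are then averaged over the test set to obtain the reported numbers.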
+
+# 5.1 BI-ENCODERS AND CROSS-ENCODERS
+
+We first investigate fine-tuning the Bi- and Cross-encoder architectures initialized with the weights provided by Devlin et al. (2019), studying the choice of other hyperparameters (we explore our own pre-training schemes in Section 5.3). In the case of the Bi-encoder, we can use a large number of negatives by considering the other batch elements as negative training samples, avoiding recomputation of their embeddings. On 8 Nvidia Volta v100 GPUs and using half-precision operations (i.e. float16 operations), we can reach batches of 512 elements on ConvAI2. Table 2 shows that in this setting, we obtain higher performance with a larger batch size, i.e. more negatives, where 511 negatives yields the best results. For the other tasks, we keep the batch size at 256, as the longer sequences in those datasets use more memory. The Cross-encoder is more computationally intensive, as the embeddings for the (context, candidate) pair must be recomputed each time. We thus limit its batch size to 16 and provide random negative samples from the training set. For DSTC7 and Ubuntu V2, we choose 15 such negatives; for ConvAI2, the dataset provides 19 negatives.
+
+| Negatives | 31 | 63 | 127 | 255 | 511 |
+| --- | --- | --- | --- | --- | --- |
+| R@1/20 | 81.0 | 81.7 | 82.3 | 83.0 | 83.3 |
+
+Table 2: Validation performance on ConvAI2 after fine-tuning a Bi-encoder pre-trained with BERT, averaged over 5 runs. The batch size is the number of training negatives + 1 as we use the other elements of the batch as negatives during training.
+
+The above results are reported with Bi-encoder aggregation based on the first output. Choosing the average over all outputs instead is very similar but slightly worse (83.1, averaged over 5 runs). We also tried to add further non-linearities instead of the inner product of the two representations, but could not obtain improved results over the simpler architecture (results not shown).
+
+We tried two optimizers: Adam (Kingma & Ba, 2015) with weight decay of 0.01 (as recommended by Devlin et al. (2019)) and Adamax (Kingma & Ba, 2015) without weight decay; based on validation set performance, we chose to fine-tune with Adam when using the BERT weights. The learning rate is initialized to 5e-5 with a warmup of 100 iterations for Bi- and Poly-encoders, and 1000 iterations for the Cross-encoder. The learning rate decays by a factor of 0.4 upon plateau of the loss evaluated on the validation set every half epoch. In Table 3 we show validation performance when fine-tuning various layers of the weights provided by Devlin et al. (2019), using Adam with weight decay. Fine-tuning the entire network is important, with the exception of the word embeddings.
+
+With the setups described above, we fine-tune the Bi- and Cross-encoders on the datasets, and report the results in Table 4. On the first three tasks, our Bi-encoders and Cross-encoders outperform the best existing approaches in the literature when we fine-tune from BERT weights. E.g., the Bi-encoder reaches $81.7\%$ R@1 on ConvAI2 and $66.8\%$ R@1 on DSTC7, while the Cross-encoder achieves higher scores of $84.8\%$ R@1 on ConvAI2 and $67.4\%$ R@1 on DSTC7. Overall, Cross-encoders outperform all previous approaches on the three dialogue tasks, including our Bi-encoders (as expected). We do not report fine-tuning of BERT for Wikipedia IR, as we cannot guarantee the test set is not part of the pre-training for that dataset. In addition, Cross-encoders are too slow to evaluate in the setup of that task, which has 10k candidates.
+
+| Fine-tuned parameters | Bi-encoder | Cross-encoder |
+| --- | --- | --- |
+| Top layer | 74.2 | 80.6 |
+| Top 4 layers | 82.0 | 86.3 |
+| All but Embeddings | 83.3 | 87.3 |
+| Every Layer | 83.0 | 86.6 |
+
+Table 3: Validation performance (R@1/20) on ConvAI2 using pre-trained weights of BERT-base with different parameters fine-tuned. Averages over 5 runs (Bi-encoders) or 3 runs (Cross-encoders).
+
+| Model | ConvAI2 test R@1/20 | DSTC 7 test R@1/100 | DSTC 7 test MRR | Ubuntu v2 test R@1/10 | Ubuntu v2 test MRR | Wikipedia IR test R@1/10001 |
+| --- | --- | --- | --- | --- | --- | --- |
+| (Wolf et al., 2019) | 80.7 | - | - | - | - | - |
+| (Gu et al., 2018) | - | 60.8 | 69.1 | - | - | - |
+| (Chen & Wang, 2019) | - | 64.5 | 73.5 | - | - | - |
+| (Yoon et al., 2018) | - | - | - | 65.2 | - | - |
+| (Dong & Huang, 2018) | - | - | - | 75.9 | 84.8 | - |
+| **pre-trained BERT weights from (Devlin et al., 2019) - Toronto Books + Wikipedia** | | | | | | |
+| Bi-encoder | 81.7 ± 0.2 | 66.8 ± 0.7 | 74.6 ± 0.5 | 80.6 ± 0.4 | 88.0 ± 0.3 | - |
+| Poly-encoder 16 | 83.2 ± 0.1 | 67.8 ± 0.3 | 75.1 ± 0.2 | 81.2 ± 0.2 | 88.3 ± 0.1 | - |
+| Poly-encoder 64 | 83.7 ± 0.2 | 67.0 ± 0.9 | 74.7 ± 0.6 | 81.3 ± 0.2 | 88.4 ± 0.1 | - |
+| Poly-encoder 360 | 83.7 ± 0.2 | 68.9 ± 0.4 | 76.2 ± 0.2 | 80.9 ± 0.0 | 88.1 ± 0.1 | - |
+| Cross-encoder | 84.8 ± 0.3 | 67.4 ± 0.7 | 75.6 ± 0.4 | 82.8 ± 0.3 | 89.4 ± 0.2 | - |
+| **Our pre-training on Toronto Books + Wikipedia** | | | | | | |
+| Bi-encoder | 82.0 ± 0.1 | 64.5 ± 0.5 | 72.6 ± 0.4 | 80.8 ± 0.5 | 88.2 ± 0.4 | - |
+| Poly-encoder 16 | 82.7 ± 0.1 | 65.3 ± 0.9 | 73.2 ± 0.7 | 83.4 ± 0.2 | 89.9 ± 0.1 | - |
+| Poly-encoder 64 | 83.3 ± 0.1 | 65.8 ± 0.7 | 73.5 ± 0.5 | 83.4 ± 0.1 | 89.9 ± 0.0 | - |
+| Poly-encoder 360 | 83.8 ± 0.1 | 65.8 ± 0.7 | 73.6 ± 0.6 | 83.7 ± 0.0 | 90.1 ± 0.0 | - |
+| Cross-encoder | 84.9 ± 0.3 | 65.3 ± 1.0 | 73.8 ± 0.6 | 83.1 ± 0.7 | 89.7 ± 0.5 | - |
+| **Our pre-training on Reddit** | | | | | | |
+| Bi-encoder | 84.8 ± 0.1 | 70.9 ± 0.5 | 78.1 ± 0.3 | 83.6 ± 0.7 | 90.1 ± 0.4 | 71.0 |
+| Poly-encoder 16 | 86.3 ± 0.3 | 71.6 ± 0.6 | 78.4 ± 0.4 | 86.0 ± 0.1 | 91.5 ± 0.1 | 71.5 |
+| Poly-encoder 64 | 86.5 ± 0.2 | 71.2 ± 0.8 | 78.2 ± 0.7 | 85.9 ± 0.1 | 91.5 ± 0.1 | 71.3 |
+| Poly-encoder 360 | 86.8 ± 0.1 | 71.4 ± 1.0 | 78.3 ± 0.7 | 85.9 ± 0.1 | 91.5 ± 0.0 | 71.8 |
+| Cross-encoder | 87.9 ± 0.2 | 71.7 ± 0.3 | 79.0 ± 0.2 | 86.5 ± 0.1 | 91.9 ± 0.0 | - |
+
+Table 4: Test performance of Bi-, Poly- and Cross-encoders on our selected tasks.
+
+# 5.2 POLY-ENCODERS
+
+We train the Poly-encoder using the same batch sizes and optimizer choices as in the Bi-encoder experiments. Results are reported in Table 4 for various values of $m$ context vectors.
+
+The Poly-encoder outperforms the Bi-encoder on all the tasks, with more codes generally yielding larger improvements. Our recommendation is thus to use as large a code size as compute time allows (see Sec. 5.4). On DSTC7, the Poly-encoder architecture with BERT pre-training reaches $68.9\%$ R@1 with 360 intermediate context codes; this actually outperforms the Cross-encoder result ($67.4\%$) and is noticeably better than our Bi-encoder result ($66.8\%$). Similar conclusions are found on Ubuntu V2 and ConvAI2, although in the latter Cross-encoders give slightly better results.
+
+We note that since reporting our results, the authors of Li et al. (2019) have conducted a human evaluation study on ConvAI2, in which our Poly-encoder architecture outperformed all other models compared against, both generative and retrieval based, including the winners of the competition.
+
+| Model | CPU, 1k cands | CPU, 100k cands | GPU, 1k cands | GPU, 100k cands |
+| --- | --- | --- | --- | --- |
+| Bi-encoder | 115 | 160 | 19 | 22 |
+| Poly-encoder 16 | 122 | 678 | 18 | 38 |
+| Poly-encoder 64 | 126 | 692 | 23 | 46 |
+| Poly-encoder 360 | 160 | 837 | 57 | 88 |
+| Cross-encoder | 21.7k | 2.2M* | 2.6k | 266k* |
+Table 5: Average time in milliseconds to predict the next dialogue utterance from $C$ possible candidates on ConvAI2. * are inferred.
+
+# 5.3 DOMAIN-SPECIFIC PRE-TRAINING
+
+We fine-tune our Reddit-pre-trained transformer on all four tasks; we additionally fine-tune a transformer that was pre-trained on the same datasets as BERT, specifically Toronto Books + Wikipedia. When using our pre-trained weights, we use the Adamax optimizer and optimize all the layers of the transformer including the embeddings. As we do not use weight decay, the weights of the final layer are much larger than those in the final layer of BERT; to avoid saturation of the attention layer in the Poly-encoder, we re-scaled the last linear layer so that the standard deviation of its output matched that of BERT, which we found necessary to achieve good results. We report results of fine-tuning with our pre-trained weights in Table 4. We show that pre-training on Reddit gives further state-of-the-art performance over our previous results with BERT, a finding that we see for all three dialogue tasks, and all three architectures.
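
The re-scaling of the last linear layer can be sketched as follows. This is our illustration of the idea under stated assumptions: `target_std` stands for the measured standard deviation of BERT's final-layer output, and `sample_inputs` for a batch used to estimate the current output statistics.

```python
import numpy as np

def rescale_output_layer(W, b, sample_inputs, target_std):
    """Scale the final linear layer (weights and bias) so that the
    standard deviation of its output over a sample batch matches
    target_std, e.g. the value measured on BERT."""
    out = sample_inputs @ W + b
    scale = target_std / out.std()
    return W * scale, b * scale
```

Because the layer is linear, scaling both `W` and `b` by the same factor scales its output, and hence its standard deviation, by exactly that factor.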
+
+The results obtained with fine-tuning on our own transformers pre-trained on Toronto Books + Wikipedia are very similar to those obtained with the original BERT weights, indicating that it is the choice of dataset used to pre-train the models that impacts the final results, not some other detail in our training. Indeed, as the two settings pre-train with datasets of similar size, we can conclude that choosing a pre-training task (e.g. dialogue data) that is similar to the downstream tasks of interest (e.g. dialogue) is a likely explanation for these performance gains, in line with previous results showing that multi-tasking with similar tasks is more useful than with dissimilar ones (Caruana, 1997).
+
+# 5.4 INFERENCE SPEED
+
+An important motivation for the Poly-encoder architecture is to achieve better results than the Bi-encoder while also performing at a reasonable speed. Though the Cross-encoder generally yields strong results, it is prohibitively slow. We perform speed experiments to determine the trade-off of improved performance from the Poly-encoder. Specifically, we predict the next utterance for 100 dialogue examples in the ConvAI2 validation set, where the model scores $C$ candidates (in this case, chosen from the training set). We perform these experiments on both CPU-only and GPU setups. CPU computations were run on an 80-core Intel Xeon E5-2698 CPU. GPU computations were run on a single Nvidia Quadro GP100 using CUDA 10.0 and cuDNN 7.4.
+
+We show the average time per example for each architecture in Table 5. The difference in timing between the Bi-encoder and the Poly-encoder architectures is rather minimal when there are only 1000 candidates for the model to consider. The difference is more pronounced when considering 100k candidates, a more realistic setup, as we see a 5-6x slowdown for the Poly-encoder variants. Nevertheless, both models are still tractable. The Cross-encoder, however, is 2 orders of magnitude slower than the Bi-encoder and Poly-encoder, rendering it intractable for real-time inference, e.g. when interacting with a dialogue agent, or retrieving from a large set of documents. Thus, Poly-encoders, given their desirable performance and speed trade-off, are the preferred method.
+
+We additionally report training times in the Appendix, Table 6. Poly-encoders also have the benefit of being $3 - 4\mathrm{x}$ faster to train than Cross-encoders (and are similar in training time to Bi-encoders).
+
+# 6 CONCLUSION
+
+In this paper we present new architectures and pre-training strategies for deep bidirectional transformers in candidate selection tasks. We introduced the Poly-encoder method, which provides a mechanism for attending over the context using the label candidate, while maintaining the ability to precompute each candidate's representation, which allows for fast real-time inference in a production setup, giving an improved trade-off between accuracy and speed. We provided an experimental analysis of those trade-offs for Bi-, Poly- and Cross-encoders, showing that Poly-encoders are more accurate than Bi-encoders, while being far faster than Cross-encoders, which are impractical for real-time use. In terms of training these architectures, we showed that pre-training strategies more closely related to the downstream task bring strong improvements. In particular, pre-training from scratch on Reddit allows us to outperform the results we obtain with BERT, a result that holds for all three model architectures and all three dialogue datasets we tried. However, the methods introduced in this work are not specific to dialogue, and can be used for any task where one is scoring a set of candidates, which we showed for an information retrieval task as well.
+
+# REFERENCES
+
+Bing Bai, Jason Weston, David Grangier, Ronan Collobert, Kunihiko Sadamasa, Yanjun Qi, Olivier Chapelle, and Kilian Weinberger. Supervised semantic indexing. In Proceedings of the 18th ACM conference on Information and knowledge management, pp. 187-196. ACM, 2009.
+Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. Signature verification using a "siamese" time delay neural network. In Advances in neural information processing systems, pp. 737-744, 1994.
+Rich Caruana. Multitask learning. Machine learning, 28(1):41-75, 1997.
+Qian Chen and Wen Wang. Sequential attention-based network for noetic end-to-end response selection. CoRR, abs/1901.02609, 2019. URL http://arxiv.org/abs/1901.02609.
+Chulaka Gunasekara, Jonathan K. Kummerfeld, Lazaros Polymenakos, and Walter S. Lasecki. DSTC7 task 1: Noetic end-to-end response selection. In 7th Edition of the Dialog System Technology Challenges at AAAI 2019, January 2019. URL http://workshop.colips.org/dstc7/papers/dstc7_task1_final_report.pdf.
+Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. Indexing by latent semantic analysis. Journal of the American society for information science, 41 (6):391-407, 1990.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423.
+Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. Wizard of Wikipedia: Knowledge-powered conversational agents. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.
+Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W. Black, Alexander Rudnicky, Jason Williams, Joelle Pineau, Mikhail Burtsev, and Jason Weston. The second conversational intelligence challenge (convai2). In Sergio Escalera and Ralf Herbrich (eds.), *The NeurIPS '18 Competition*, pp. 187–208, Cham, 2020. Springer International Publishing. ISBN 978-3-030-29135-8.
+Jianxiong Dong and Jim Huang. Enhance word representation for out-of-vocabulary on ubuntu dialogue corpus. CoRR, abs/1802.02614, 2018. URL http://arxiv.org/abs/1802.02614.
+
+Jia-Chen Gu, Zhen-Hua Ling, Yu-Ping Ruan, and Quan Liu. Building sequential inference models for end-to-end response selection. CoRR, abs/1812.00686, 2018. URL http://arxiv.org/abs/1812.00686.
+J. Johnson, M. Douze, and H. Jégou. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, pp. 1-1, 2019. ISSN 2372-2096. doi: 10.1109/TBDATA.2019.2921572.
+Jonathan K. Kummerfeld, Sai R. Gouravajhala, Joseph Peper, Vignesh Athreya, Chulaka Gunasekara, Jatin Ganhotra, Siva Sankalp Patel, Lazaros Polymenakos, and Walter S. Lasecki. Analyzing assumptions in conversation disentanglement research through the lens of a new dataset and model. ArXiv e-prints, October 2018. URL https://arxiv.org/pdf/1810.11118.pdf.
+Rudolf Kadlec, Martin Schmid, and Jan Kleindienst. Improved deep learning baselines for ubuntu corpus dialogs. CoRR, abs/1510.03753, 2015. URL http://arxiv.org/abs/1510.03753.
+Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6980.
+Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. Advances in Neural Information Processing Systems (NeurIPS), 2019.
+Margaret Li, Jason Weston, and Stephen Roller. Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087, 2019.
+Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In SIGDIAL Conference, 2015.
+Pierre-Emmanuel Mazaré, Samuel Humeau, Martin Raison, and Antoine Bordes. Training millions of personalized dialogue agents. In EMNLP, 2018.
+Gerard Salton, Anita Wong, and Chung-Shu Yang. A vector space model for automatic indexing. Communications of the ACM, 18(11):613-620, 1975.
+Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, and Jason Weston. Learning to speak and act in a fantasy text adventure game. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 673-683, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1062.
+Jesse Vig and Kalai Ramea. Comparison of transfer-learning approaches for response selection in multi-turn conversations. Workshop on DSTC7, 2019.
+Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. Transfertransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149, 2019.
+Ledell Yu Wu, Adam Fisch, Sumit Chopra, Keith Adams, Antoine Bordes, and Jason Weston. Starspace: Embed all the things! In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
+Yu Wu, Wei Wu, Chen Xing, Ming Zhou, and Zhoujun Li. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. In ACL, 2017.
+Liu Yang, Minghui Qiu, Chen Qu, Jiafeng Guo, Yongfeng Zhang, W Bruce Croft, Jun Huang, and Haiqing Chen. Response ranking with deep matching networks and external knowledge in information-seeking conversation systems. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pp. 245-254. ACM, 2018.
+
+Seunghyun Yoon, Joongbo Shin, and Kyomin Jung. Learning to rank question-answer pairs using hierarchical recurrent encoder with latent topic clustering. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1575-1584, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1142.
+Koichiro Yoshino, Chiori Hori, Julien Perez, Luis Fernando D'Haro, Lazaros Polymenakos, R. Chulaka Gunasekara, Walter S. Lasecki, Jonathan K. Kummerfeld, Michel Galley, Chris Brockett, Jianfeng Gao, William B. Dolan, Xiang Gao, Huda AlAmri, Tim K. Marks, Devi Parikh, and Dhruv Batra. Dialog system technology challenge 7. CoRR, abs/1901.03461, 2019.
+Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pp. 2204-2213, Melbourne, Australia, July 2018a. Association for Computational Linguistics.
+Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, and Gongshen Liu. Modeling multi-turn conversation with deep utterance aggregation. In COLING, 2018b.
+Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan R. Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. 2015 IEEE International Conference on Computer Vision (ICCV), pp. 19-27, 2015.
+
+# A TRAINING TIME
+
+We report the training time on 8 GPU Volta 100 for the 3 datasets considered and for 4 types of models in Table 6.
+
+| Dataset | ConvAI2 | DSTC7 | UbuntuV2 |
+| --- | --- | --- | --- |
+| Bi-encoder | 2.0 | 4.9 | 7.9 |
+| Poly-encoder 16 | 2.7 | 5.5 | 8.0 |
+| Poly-encoder 64 | 2.8 | 5.7 | 8.0 |
+| Cross-encoder | 9.4 | 13.5 | 39.9 |
+
+Table 6: Training time in hours.
+
+# B REDUCTION LAYER IN BI-ENCODER
+
+We provide in Table 7 the results obtained for different types of reduction on top of the Bi-encoder. Specifically, we compare the Recall@1/20 on the ConvAI2 validation set when taking the first output of BERT, the average of the first 16 outputs, the average of the first 64 outputs, and the average of all outputs except the first one ([S]).
+
+| Setup | ConvAI2 valid Recall@1/20 |
+| --- | --- |
+| First output | 83.3 |
+| Avg first 16 outputs | 82.9 |
+| Avg first 64 outputs | 82.7 |
+| Avg all outputs | 83.1 |
+
+Table 7: Bi-encoder results on the ConvAI2 valid set for different choices of function $red(\cdot)$ .
+
+# C ALTERNATIVE CHOICES FOR CONTEXT VECTORS
+
+We considered a few other ways to derive the context vectors $(y_{ctxt}^{1}, \dots, y_{ctxt}^{m})$ of the Poly-encoder from the output $(h_{ctxt}^{1}, \dots, h_{ctxt}^{N})$ of the underlying transformer:
+
+- Learn $m$ codes $(c_{1},\dots,c_{m})$ , where $c_{i}$ extracts representation $y_{ctxt}^{i}$ by attending over all the outputs $(h_{ctxt}^{1},\dots,h_{ctxt}^{N})$ . This method is denoted "Poly-encoder (Learnt-codes)" or "Poly-encoder (Learnt-m)", and is the method described in section 4.4
+- Consider the first $m$ outputs $(h_{ctxt}^{1}, \dots, h_{ctxt}^{m})$. This method is denoted "Poly-encoder (First $m$ outputs)" or "Poly-encoder (First-m)". Note that when $N < m$, only $N$ vectors are available to be considered.
+- Consider the last $m$ outputs.
+- Consider the last $m$ outputs concatenated with the first one, $h_{ctxt}^{1}$, which plays a particular role in BERT as it corresponds to the special token [S].
+
+The performance of these four methods is evaluated on the validation sets of ConvAI2 and DSTC7 and reported in Table 8. The first two methods are shown in Figure 2. We additionally provide the inference time for a given number of candidates from the ConvAI2 dataset in Table 9.
+
+| Variant | ConvAI2 dev R@1/20 | ConvAI2 test R@1/20 | DSTC 7 dev R@1/100 | DSTC 7 test R@1/100 |
+| --- | --- | --- | --- | --- |
+| (Wolf et al., 2019) | 82.1 | 80.7 | - | - |
+| (Chen & Wang, 2019) | - | - | 57.3 | 64.5 |
+| **1 Attention Code** | | | | |
+| Learnt-codes | 81.9 ± 0.3 | 81.0 ± 0.1 | 56.2 ± 0.1 | 66.9 ± 0.7 |
+| First m outputs | 83.2 ± 0.2 | 81.5 ± 0.1 | 56.4 ± 0.3 | 66.8 ± 0.7 |
+| Last m outputs | 82.9 ± 0.1 | 81.0 ± 0.1 | 56.1 ± 0.4 | 67.2 ± 1.1 |
+| Last m outputs and $h_{ctxt}^{1}$ | - | - | - | - |
+| **4 Attention Codes** | | | | |
+| Learnt-codes | 83.8 ± 0.2 | 82.2 ± 0.5 | 56.5 ± 0.5 | 66.8 ± 0.7 |
+| First m outputs | 83.4 ± 0.2 | 81.6 ± 0.1 | 56.9 ± 0.5 | 67.2 ± 1.3 |
+| Last m outputs | 82.8 ± 0.2 | 81.3 ± 0.4 | 56.0 ± 0.5 | 65.8 ± 0.5 |
+| Last m outputs and $h_{ctxt}^{1}$ | 82.9 ± 0.1 | 81.4 ± 0.2 | 55.8 ± 0.3 | 66.1 ± 0.8 |
+| **16 Attention Codes** | | | | |
+| Learnt-codes | 84.4 ± 0.1 | 83.2 ± 0.1 | 57.7 ± 0.2 | 67.8 ± 0.3 |
+| First m outputs | 85.2 ± 0.1 | 83.9 ± 0.2 | 56.1 ± 1.7 | 66.8 ± 1.1 |
+| Last m outputs | 83.9 ± 0.2 | 82.0 ± 0.4 | 56.1 ± 0.3 | 66.2 ± 0.7 |
+| Last m outputs and $h_{ctxt}^{1}$ | 83.8 ± 0.3 | 81.7 ± 0.3 | 56.1 ± 0.3 | 66.6 ± 0.2 |
+| **64 Attention Codes** | | | | |
+| Learnt-codes | 84.9 ± 0.1 | 83.7 ± 0.2 | 58.3 ± 0.4 | 67.0 ± 0.9 |
+| First m outputs | 86.0 ± 0.2 | 84.2 ± 0.2 | 57.7 ± 0.6 | 67.1 ± 0.1 |
+| Last m outputs | 84.9 ± 0.3 | 82.9 ± 0.2 | 57.0 ± 0.2 | 66.5 ± 0.5 |
+| Last m outputs and $h_{ctxt}^{1}$ | 85.0 ± 0.2 | 83.2 ± 0.2 | 57.3 ± 0.3 | 67.1 ± 0.5 |
+| **360 Attention Codes** | | | | |
+| Learnt-codes | 85.3 ± 0.3 | 83.7 ± 0.2 | 57.7 ± 0.3 | 68.9 ± 0.4 |
+| First m outputs | 86.3 ± 0.1 | 84.6 ± 0.3 | 58.1 ± 0.4 | 66.8 ± 0.7 |
+| Last m outputs | 86.3 ± 0.1 | 84.7 ± 0.3 | 58.0 ± 0.4 | 68.1 ± 0.5 |
+| Last m outputs and $h_{ctxt}^{1}$ | 86.2 ± 0.3 | 84.5 ± 0.4 | 58.3 ± 0.4 | 68.0 ± 0.8 |
+
+Table 8: Validation and test performance of Poly-encoder variants, with weights initialized from (Devlin et al., 2019). Scores are shown for ConvAI2 and DSTC 7 Track 1. Bold numbers indicate the highest performing variant within that number of codes.
+
+| Model | CPU, 1k cands | CPU, 100k cands | GPU, 1k cands | GPU, 100k cands |
+| --- | --- | --- | --- | --- |
+| Bi-encoder | 115 | 160 | 19 | 22 |
+| Poly-encoder (First m outputs) 16 | 119 | 551 | 17 | 37 |
+| Poly-encoder (First m outputs) 64 | 124 | 570 | 17 | 39 |
+| Poly-encoder (First m outputs) 360 | 120 | 619 | 17 | 45 |
+| Poly-encoder (Learnt-codes) 16 | 122 | 678 | 18 | 38 |
+| Poly-encoder (Learnt-codes) 64 | 126 | 692 | 23 | 46 |
+| Poly-encoder (Learnt-codes) 360 | 160 | 837 | 57 | 88 |
+| Cross-encoder | 21.7k | 2.2M* | 2.6k | 266k* |
+
+Table 9: Average time in milliseconds to predict the next dialogue utterance from $N$ possible candidates. * are inferred.
+
+
+
+
+(a) Bi-encoder
+
+
+(b) Cross-encoder
+
+
+(c) Poly-encoder (First-m)
+
+
+(d) Poly-encoder (Learnt-m)
+Figure 2: (a) The Bi-encoder (b) The Cross-encoder (c) The Poly-encoder with first $m$ vectors. (d) The Poly-encoder with $m$ learnt codes.
+
+| Model | ConvAI2 dev R@1/20 | ConvAI2 test R@1/20 | DSTC 7 dev R@1/100 | DSTC 7 test R@1/100 | DSTC 7 test R@10/100 | DSTC 7 test MRR | Ubuntu v2 dev R@1/10 | Ubuntu v2 test R@1/10 | Ubuntu v2 test R@5/10 | Ubuntu v2 test MRR |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Hugging Face (Wolf et al., 2019) | 82.1 | 80.7 | - | - | - | - | - | - | - | - |
+| (Chen & Wang, 2019) | - | - | 57.3 | 64.5 | 90.2 | 73.5 | - | - | - | - |
+| (Dong & Huang, 2018) | - | - | - | - | - | - | - | 75.9 | 97.3 | 84.8 |
| pre-trained weights from (Devlin et al., 2019) - Toronto Books + Wikipedia |
| Bi-encoder | 83.3 ± 0.2 | 81.7 ± 0.2 | 56.5 ± 0.4 | 66.8 ± 0.7 | 89.0 ± 1.0 | 74.6 ± 0.5 | 80.9 ± 0.6 | 80.6 ± 0.4 | 98.2 ± 0.1 | 88.0 ± 0.3 |
| Poly-encoder (First-m) 16 | 85.2 ± 0.1 | 83.9 ± 0.2 | 56.7 ± 0.2 | 67.0 ± 0.9 | 88.8 ± 0.3 | 74.6 ± 0.6 | 81.7 ± 0.5 | 81.4 ± 0.6 | 98.2 ± 0.1 | 88.5 ± 0.4 |
| Poly-encoder (Learnt-m) 16 | 84.4 ± 0.1 | 83.2 ± 0.1 | 57.7 ± 0.2 | 67.8 ± 0.3 | 88.6 ± 0.2 | 75.1 ± 0.2 | 81.5 ± 0.1 | 81.2 ± 0.2 | 98.2 ± 0.0 | 88.3 ± 0.1 |
| Poly-encoder (First-m) 64 | 86.0 ± 0.2 | 84.2 ± 0.2 | 57.1 ± 0.2 | 66.9 ± 0.7 | 89.1 ± 0.2 | 74.7 ± 0.4 | 82.2 ± 0.6 | 81.9 ± 0.5 | 98.4 ± 0.0 | 88.8 ± 0.3 |
| Poly-encoder (Learnt-m) 64 | 84.9 ± 0.1 | 83.7 ± 0.2 | 58.3 ± 0.4 | 67.0 ± 0.9 | 89.2 ± 0.2 | 74.7 ± 0.6 | 81.8 ± 0.1 | 81.3 ± 0.2 | 98.2 ± 0.1 | 88.4 ± 0.1 |
| Poly-encoder (First-m) 360 | 86.3 ± 0.1 | 84.6 ± 0.3 | 57.8 ± 0.5 | 67.0 ± 0.5 | 89.6 ± 0.9 | 75.0 ± 0.6 | 82.7 ± 0.4 | 82.2 ± 0.6 | 98.4 ± 0.1 | 89.0 ± 0.4 |
| Poly-encoder (Learnt-m) 360 | 85.3 ± 0.3 | 83.7 ± 0.2 | 57.7 ± 0.3 | 68.9 ± 0.4 | 89.9 ± 0.5 | 76.2 ± 0.2 | 81.5 ± 0.1 | 80.9 ± 0.1 | 98.1 ± 0.0 | 88.1 ± 0.1 |
| Cross-encoder | 87.1 ± 0.1 | 84.8 ± 0.3 | 59.4 ± 0.4 | 67.4 ± 0.7 | 90.5 ± 0.3 | 75.6 ± 0.4 | 83.3 ± 0.4 | 82.8 ± 0.3 | 98.4 ± 0.1 | 89.4 ± 0.2 |
| Our pre-training on Toronto Books + Wikipedia |
| Bi-encoder | 84.6 ± 0.1 | 82.0 ± 0.1 | 54.9 ± 0.5 | 64.5 ± 0.5 | 88.1 ± 0.2 | 72.6 ± 0.4 | 80.9 ± 0.5 | 80.8 ± 0.5 | 98.4 ± 0.1 | 88.2 ± 0.4 |
| Poly-encoder (First-m) 16 | 84.1 ± 0.2 | 81.4 ± 0.2 | 53.9 ± 2.7 | 63.3 ± 2.9 | 87.2 ± 1.5 | 71.6 ± 2.4 | 80.8 ± 0.5 | 80.6 ± 0.4 | 98.4 ± 0.1 | 88.1 ± 0.3 |
| Poly-encoder (Learnt-m) 16 | 85.4 ± 0.2 | 82.7 ± 0.1 | 56.0 ± 0.4 | 65.3 ± 0.9 | 88.2 ± 0.7 | 73.2 ± 0.7 | 84.0 ± 0.1 | 83.4 ± 0.2 | 98.7 ± 0.0 | 89.9 ± 0.1 |
| Poly-encoder (First-m) 64 | 86.1 ± 0.4 | 83.9 ± 0.3 | 55.6 ± 0.9 | 64.3 ± 1.5 | 87.8 ± 0.4 | 72.5 ± 1.0 | 80.9 ± 0.6 | 80.7 ± 0.6 | 98.4 ± 0.0 | 88.2 ± 0.4 |
| Poly-encoder (Learnt-m) 64 | 85.6 ± 0.1 | 83.3 ± 0.1 | 56.2 ± 0.4 | 65.8 ± 0.7 | 88.4 ± 0.3 | 73.5 ± 0.5 | 84.0 ± 0.1 | 83.4 ± 0.1 | 98.7 ± 0.0 | 89.9 ± 0.0 |
| Poly-encoder (First-m) 360 | 86.6 ± 0.3 | 84.4 ± 0.2 | 57.5 ± 0.4 | 66.5 ± 1.2 | 89.0 ± 0.5 | 74.4 ± 0.7 | 81.3 ± 0.6 | 81.1 ± 0.4 | 98.4 ± 0.2 | 88.4 ± 0.3 |
| Poly-encoder (Learnt-m) 360 | 86.1 ± 0.1 | 83.8 ± 0.1 | 56.5 ± 0.8 | 65.8 ± 0.7 | 88.5 ± 0.6 | 73.6 ± 0.6 | 84.2 ± 0.2 | 83.7 ± 0.0 | 98.7 ± 0.1 | 90.1 ± 0.0 |
| Cross-encoder | 87.3 ± 0.5 | 84.9 ± 0.3 | 57.7 ± 0.5 | 65.3 ± 1.0 | 89.7 ± 0.5 | 73.8 ± 0.6 | 83.2 ± 0.8 | 83.1 ± 0.7 | 98.7 ± 0.1 | 89.7 ± 0.5 |
| Our pre-training on Reddit |
| Bi-encoder | 86.9 ± 0.1 | 84.8 ± 0.1 | 60.1 ± 0.4 | 70.9 ± 0.5 | 90.6 ± 0.3 | 78.1 ± 0.3 | 83.7 ± 0.7 | 83.6 ± 0.7 | 98.8 ± 0.1 | 90.1 ± 0.4 |
| Poly-encoder (First-m) 16 | 89.0 ± 0.1 | 86.4 ± 0.3 | 60.4 ± 0.3 | 70.7 ± 0.7 | 91.0 ± 0.4 | 78.0 ± 0.5 | 84.3 ± 0.3 | 84.3 ± 0.2 | 98.9 ± 0.0 | 90.5 ± 0.1 |
| Poly-encoder (Learnt-m) 16 | 88.6 ± 0.3 | 86.3 ± 0.3 | 61.1 ± 0.4 | 71.6 ± 0.6 | 91.3 ± 0.3 | 78.4 ± 0.4 | 86.1 ± 0.1 | 86.0 ± 0.1 | 99.0 ± 0.1 | 91.5 ± 0.1 |
| Poly-encoder (First-m) 64 | 89.5 ± 0.1 | 87.3 ± 0.2 | 61.0 ± 0.4 | 70.9 ± 0.6 | 91.5 ± 0.5 | 78.0 ± 0.3 | 84.0 ± 0.4 | 83.9 ± 0.4 | 98.8 ± 0.0 | 90.3 ± 0.3 |
| Poly-encoder (Learnt-m) 64 | 89.0 ± 0.1 | 86.5 ± 0.2 | 60.9 ± 0.6 | 71.2 ± 0.8 | 91.3 ± 0.4 | 78.2 ± 0.7 | 86.2 ± 0.1 | 85.9 ± 0.1 | 99.1 ± 0.0 | 91.5 ± 0.1 |
| Poly-encoder (First-m) 360 | 90.0 ± 0.1 | 87.3 ± 0.1 | 61.1 ± 1.9 | 70.9 ± 2.1 | 91.5 ± 0.9 | 77.9 ± 1.6 | 84.8 ± 0.5 | 84.6 ± 0.5 | 98.9 ± 0.1 | 90.7 ± 0.3 |
| Poly-encoder (Learnt-m) 360 | 89.2 ± 0.1 | 86.8 ± 0.1 | 61.2 ± 0.2 | 71.4 ± 1.0 | 91.1 ± 0.3 | 78.3 ± 0.7 | 86.3 ± 0.1 | 85.9 ± 0.1 | 99.1 ± 0.0 | 91.5 ± 0.0 |
| Cross-encoder | 90.3 ± 0.2 | 87.9 ± 0.2 | 63.9 ± 0.3 | 71.7 ± 0.3 | 92.4 ± 0.5 | 79.0 ± 0.2 | 86.7 ± 0.1 | 86.5 ± 0.1 | 99.1 ± 0.0 | 91.9 ± 0.0 |
+
+Table 10: Validation and test performance of Bi-, Poly-, and Cross-encoders on ConvAI2, DSTC 7 Track 1, and Ubuntu v2, together with the previous state-of-the-art models from the literature.
\ No newline at end of file
diff --git a/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/images.zip b/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e72b05fdae4e4d68928f8d8d868632379da6524a
--- /dev/null
+++ b/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f567070a13052e772e6ac737e9e91bd3e98682668b6d071c903c0d06134330b0
+size 904310
diff --git a/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/layout.json b/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..f8eb786a788df2a67f702b6fd4385822ef80abb5
--- /dev/null
+++ b/polyencodersarchitecturesandpretrainingstrategiesforfastandaccuratemultisentencescoring/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a7c1f8607915c9f861347fe5dce77dec79daf2cc707c6cd2305f5766d5d749bc
+size 366385
diff --git a/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/e4a42d09-d0ce-4647-828d-7e02eaa277ab_content_list.json b/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/e4a42d09-d0ce-4647-828d-7e02eaa277ab_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..83ac093f2b7d161fbd29ab230b6f3ce892620493
--- /dev/null
+++ b/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/e4a42d09-d0ce-4647-828d-7e02eaa277ab_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85a133c09ed96a3a05272cb1b5a6cf0cf90043020256fce0380af685ff5e4c8d
+size 184340
diff --git a/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/e4a42d09-d0ce-4647-828d-7e02eaa277ab_model.json b/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/e4a42d09-d0ce-4647-828d-7e02eaa277ab_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..4a3a10742312593af353a6fabbbad51bb211e783
--- /dev/null
+++ b/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/e4a42d09-d0ce-4647-828d-7e02eaa277ab_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a49f02f70476fc5ee036c097152f3eb779d3ad492d4f74bdca9cae563e9a48d
+size 209563
diff --git a/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/e4a42d09-d0ce-4647-828d-7e02eaa277ab_origin.pdf b/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/e4a42d09-d0ce-4647-828d-7e02eaa277ab_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..86f15fde761e3958148f0595085b9b7e20b7e682
--- /dev/null
+++ b/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/e4a42d09-d0ce-4647-828d-7e02eaa277ab_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d887eaf69bb2f358484e145c12ccd5e2db26066367db369ece669b1686e0bd4c
+size 382904
diff --git a/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/full.md b/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d9cd511e40cec90ecdefe9a4f978875fe040307b
--- /dev/null
+++ b/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/full.md
@@ -0,0 +1,1128 @@
+# POLYLOGARITHMIC WIDTH SUFFICES FOR GRADIENT DESCENT TO ACHIEVE ARBITRARILY SMALL TEST ERROR WITH SHALLOW RELU NETWORKS
+
+Ziwei Ji & Matus Telgarsky
+
+University of Illinois, Urbana-Champaign
+
+{ziweiji2,mjt}@illinois.edu
+
+# ABSTRACT
+
+Recent theoretical work has guaranteed that overparameterized networks trained by gradient descent achieve arbitrarily low training error, and sometimes even low test error. The required width, however, is always polynomial in at least one of the sample size $n$ , the (inverse) target error $1 / \epsilon$ , and the (inverse) failure probability $1 / \delta$ . This work shows that $\widetilde{\Theta}(1 / \epsilon)$ iterations of gradient descent with $\widetilde{\Omega}(1 / \epsilon^2)$ training examples on two-layer ReLU networks of any width exceeding polylog $(n, 1 / \epsilon, 1 / \delta)$ suffice to achieve a test misclassification error of $\epsilon$ . We also prove that stochastic gradient descent can achieve $\epsilon$ test error with polylogarithmic width and $\widetilde{\Theta}(1 / \epsilon)$ samples. The analysis relies upon the separation margin of the limiting kernel, which is guaranteed positive, can distinguish between true labels and random labels, and can give a tight sample-complexity analysis in the infinite-width setting.
+
+# 1 INTRODUCTION
+
+Despite the extensive empirical success of deep networks, their optimization and generalization properties are still not fully understood. Recently, the neural tangent kernel (NTK) has provided the following insight into the problem. In the infinite-width limit, the NTK converges to a limiting kernel which stays constant during training; on the other hand, when the width is large enough, the function learned by gradient descent follows the NTK (Jacot et al., 2018). This motivates the study of overparameterized networks trained by gradient descent, using properties of the NTK. In fact, parameters related to the NTK, such as the minimum eigenvalue of the limiting kernel, appear to affect optimization and generalization (Arora et al., 2019).
+
+However, in addition to such NTK-dependent parameters, prior work also requires the width to depend polynomially on $n$ , $1 / \delta$ or $1 / \epsilon$ , where $n$ denotes the size of the training set, $\delta$ denotes the failure probability, and $\epsilon$ denotes the target error. These large widths far exceed what is used empirically, constituting a significant gap between theory and practice.
+
+Our contributions. In this paper, we narrow this gap by showing that a two-layer ReLU network with $\Omega (\ln (n / \delta) + \ln (1 / \epsilon)^2)$ hidden units trained by gradient descent achieves classification error $\epsilon$ on test data, meaning both optimization and generalization occur. Unlike prior work, the width is fully polylogarithmic in $n$ , $1 / \delta$ , and $1 / \epsilon$ ; the width will additionally depend on the separation margin of the limiting kernel, a quantity which is guaranteed positive (assuming no inputs are parallel), can distinguish between true labels and random labels, and can give a tight sample-complexity analysis in the infinite-width setting. The paper organization, together with some details, is described below.
+
+Section 2 studies gradient descent on the training set. Using the $\ell_1$ geometry inherent in classification tasks, we prove that with any width at least polylogarithmic and any constant step size no larger than 1, gradient descent achieves training error $\epsilon$ in $\widetilde{\Theta}(1/\epsilon)$ iterations (cf. Theorem 2.2). As is common in the NTK literature (Chizat & Bach, 2019), we also show the parameters hardly change, which will be essential to our generalization analysis.
+
+Section 3 gives a test error bound. Concretely, combining the preceding gradient descent analysis with standard Rademacher complexity tools, and exploiting how little the weights move, we show that with $\widetilde{\Omega}(1/\epsilon^2)$ samples and $\widetilde{\Theta}(1/\epsilon)$ iterations, gradient descent finds a solution with $\epsilon$ test error (cf. Theorem 3.2 and Corollary 3.3). (As discussed in Remark 3.4, $\widetilde{\Omega}(1/\epsilon)$ samples also suffice via a smoothness-based generalization bound, at the expense of large constant factors.)
+
+Section 4 considers stochastic gradient descent (SGD) with access to a standard stochastic online oracle. We prove that with width at least polylogarithmic and $\widetilde{\Theta}(1/\epsilon)$ samples, SGD achieves an arbitrarily small test error (cf. Theorem 4.1).
+
+Section 5 discusses the separation margin, which is in general a positive number, but reflects the difficulty of the classification problem in the infinite-width limit. While this margin can degrade all the way down to $O(1 / \sqrt{n})$ for random labels, it can be much larger when there is a strong relationship between features and labels: for example, on the noisy 2-XOR data introduced in (Wei et al., 2018), we show that the margin is $\Omega(1 / \ln(n))$ , and our SGD sample complexity is tight in the infinite-width case.
+
+Section 6 concludes with some open problems.
+
+# 1.1 RELATED WORK
+
+There has been a large literature studying gradient descent on overparameterized networks via the NTK. The most closely related work is (Nitanda & Suzuki, 2019), which shows that a two-layer network trained by gradient descent with the logistic loss can achieve a small test error, under the same assumption that the NTK with respect to the first layer can separate the data distribution. However, they analyze smooth activations, while we handle the ReLU. They require $\Omega(1/\epsilon^2)$ hidden units, $\widetilde{\Omega}(1/\epsilon^4)$ data samples, and $O(1/\epsilon^2)$ steps, while our result only needs polylogarithmic hidden units, $\widetilde{\Omega}(1/\epsilon^2)$ data samples, and $\widetilde{O}(1/\epsilon)$ steps.
+
+Additionally on shallow networks, Du et al. (2018b) prove that on an overparameterized two-layer network, gradient descent can globally minimize the empirical risk with the squared loss. Their result requires $\Omega(n^6/\delta^3)$ hidden units. Oymak & Soltanolkotabi (2019); Song & Yang (2019) further reduce the required overparameterization, but there is still a poly $(n)$ dependency. Using the same amount of overparameterization as (Du et al., 2018b), Arora et al. (2019) further show that the two-layer network learned by gradient descent can achieve a small test error, assuming that on the data distribution the smallest eigenvalue of the limiting kernel is at least some positive constant. They also give a fine-grained characterization of the predictions made by gradient descent iterates; such a characterization makes use of a special property of the squared loss and cannot be applied to the logistic regression setting. Li & Liang (2018) show that stochastic gradient descent (SGD) with the cross entropy loss can learn a two-layer network with small test error, using poly $(\ell, 1/\epsilon)$ hidden units, where $\ell$ is at least the covering number of the support of the feature distribution using balls whose radii are no larger than the smallest distance between two data points with different labels. Allen-Zhu et al. (2018a) consider SGD on a two-layer network, and a variant of SGD on a three-layer network. The three-layer analysis further exhibits some properties not captured by the NTK. They assume a ground truth network with infinite-order smooth activations, and they require the width to depend polynomially on $1/\epsilon$ and some constants related to the smoothness of the activations of the ground truth network.
+
+On deep networks, a variety of works have established low training error (Allen-Zhu et al., 2018b; Du et al., 2018a; Zou et al., 2018; Zou & Gu, 2019). Allen-Zhu et al. (2018c) show that SGD can minimize the regression loss for recurrent neural networks, and Allen-Zhu & Li (2019b) further prove a low generalization error. Allen-Zhu & Li (2019a) show that using the same number of training examples, a three-layer ResNet can learn a function class with a much lower test error than any kernel method. Cao & Gu (2019a) assume that the NTK with respect to the second layer of a two-layer network can separate the data distribution, and prove that gradient descent on a deep network can achieve $\epsilon$ test error with $\Omega(1/\epsilon^4)$ samples and $\Omega(1/\epsilon^{14})$ hidden units. Cao & Gu (2019b) consider SGD with an online oracle and give a general result. Under the same assumption as in (Cao & Gu, 2019a), their result requires $\Omega(1/\epsilon^{14})$ hidden units and sample complexity $\widetilde{O}(1/\epsilon^2)$ .
+
+By contrast, with the same online oracle, our result only needs polylogarithmic hidden units and sample complexity $\widetilde{O}(1/\epsilon)$ .
+
+# 1.2 NOTATION
+
+The dataset is denoted by $\{(x_i, y_i)\}_{i=1}^n$ where $x_i \in \mathbb{R}^d$ and $y_i \in \{-1, +1\}$ . For simplicity, we assume that $\| x_i \|_2 = 1$ for any $1 \leq i \leq n$ , which is standard in the NTK literature.
+
+The two-layer network has weight matrices $W \in \mathbb{R}^{m \times d}$ and $a \in \mathbb{R}^m$ . We use the following parameterization, which is also used in (Du et al., 2018b; Arora et al., 2019):
+
+$$
+f (x; W, a) := \frac {1}{\sqrt {m}} \sum_ {s = 1} ^ {m} a _ {s} \sigma (\langle w _ {s}, x \rangle),
+$$
+
+with initialization
+
+$$
+w_{s,0} \sim \mathcal{N}(0, I_d), \quad \text{and} \quad a_s \sim \operatorname{unif}\left(\{-1, +1\}\right).
+$$
+
+Note that in this paper, $w_{s,t}$ denotes the $s$ -th row of $W$ at step $t$ . We fix $a$ and only train $W$ , as in (Li & Liang, 2018; Du et al., 2018b; Arora et al., 2019; Nitanda & Suzuki, 2019). We consider the ReLU activation $\sigma(z) \coloneqq \max \{0, z\}$ , though our analysis can be extended easily to Lipschitz continuous, positively homogeneous activations such as leaky ReLU.
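The parameterization and initialization above can be written in a few lines of numpy; the width, input dimension, and seed are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 512, 10

# Initialization from the paper: w_{s,0} ~ N(0, I_d), a_s ~ unif({-1, +1}).
W0 = rng.normal(size=(m, d))
a = rng.choice([-1.0, 1.0], size=m)

def f(x, W, a):
    """f(x; W, a) = (1/sqrt(m)) * sum_s a_s * relu(<w_s, x>)."""
    return float(a @ np.maximum(W @ x, 0.0) / np.sqrt(len(a)))

x = rng.normal(size=d)
x /= np.linalg.norm(x)   # inputs are assumed to lie on the unit sphere
out = f(x, W0, a)
```

The $1/\sqrt{m}$ scaling is what makes the initial outputs concentrate at a width-independent scale (cf. Lemma 2.5).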
+
+We use the logistic (binary cross entropy) loss $\ell(z) \coloneqq \ln \left(1 + \exp(-z)\right)$ and gradient descent. For any $1 \leq i \leq n$ and any $W$ , let $f_{i}(W) \coloneqq f(x_{i}; W, a)$ . The empirical risk and its gradient are given by
+
+$$
+\widehat{\mathcal{R}}(W) := \frac{1}{n} \sum_{i=1}^{n} \ell\left(y_i f_i(W)\right), \quad \text{and} \quad \nabla \widehat{\mathcal{R}}(W) = \frac{1}{n} \sum_{i=1}^{n} \ell'\left(y_i f_i(W)\right) y_i \nabla f_i(W).
+$$
+
+For any $t \geq 0$ , the gradient descent step is given by $W_{t+1} \coloneqq W_t - \eta_t \nabla \widehat{\mathcal{R}}(W_t)$ . Also define
+
+$$
+f_i^{(t)}(W) := \left\langle \nabla f_i(W_t), W \right\rangle, \quad \text{and} \quad \widehat{\mathcal{R}}^{(t)}(W) := \frac{1}{n} \sum_{i=1}^{n} \ell\left(y_i f_i^{(t)}(W)\right).
+$$
+
+Note that $f_{i}^{(t)}(W_{t}) = f_{i}(W_{t})$ . This property generally holds due to homogeneity: for any $W$ and any $1 \leq s \leq m$ ,
+
+$$
+\frac{\partial f_i}{\partial w_s} = \frac{1}{\sqrt{m}} a_s \mathbb{1}\left[\langle w_s, x_i \rangle > 0\right] x_i, \quad \text{and} \quad \left\langle \frac{\partial f_i}{\partial w_s}, w_s \right\rangle = \frac{1}{\sqrt{m}} a_s \sigma\left(\langle w_s, x_i \rangle\right),
+$$
+
+and thus $\langle \nabla f_i(W),W\rangle = f_i(W)$.
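The displayed gradient formula and the homogeneity identity $\langle \nabla f_i(W), W \rangle = f_i(W)$ can be checked numerically; the sizes and seed below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 64, 5
W = rng.normal(size=(m, d))
a = rng.choice([-1.0, 1.0], size=m)
x = rng.normal(size=d)
x /= np.linalg.norm(x)

f_val = float(a @ np.maximum(W @ x, 0.0)) / np.sqrt(m)

# df/dw_s = (1/sqrt(m)) * a_s * 1[<w_s, x> > 0] * x  (the ReLU subgradient).
active = (W @ x > 0).astype(float)                       # (m,)
grad = (a * active)[:, None] * x[None, :] / np.sqrt(m)   # (m, d)

# Homogeneity: the Frobenius inner product <grad, W> recovers f exactly.
inner = float((grad * W).sum())
```
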
+
+# 2 EMPIRICAL RISK MINIMIZATION
+
+In this section, we consider a fixed training set and empirical risk minimization. We first state our assumption on the separability of the NTK, and then give our main result and a proof sketch.
+
+The key idea of the NTK is to do the first-order Taylor approximation:
+
+$$
+f (x; W, a) \approx f (x; W _ {0}, a) + \left\langle \nabla_ {W} f (x; W _ {0}, a), W - W _ {0} \right\rangle .
+$$
+
+In other words, we want to do learning using the features given by $\nabla f_{i}(W_{0})\in \mathbb{R}^{m\times d}$ . A natural assumption is that there exists $\overline{U}\in \mathbb{R}^{m\times d}$ which can separate $\left\{\left(\nabla f_{i}(W_{0}),y_{i}\right)\right\}_{i = 1}^{n}$ with a positive margin:
+
+$$
+\min _ {1 \leq i \leq n} \left(y _ {i} \left\langle \bar {U}, \nabla f _ {i} \left(W _ {0}\right) \right\rangle\right) = \min _ {1 \leq i \leq n} \left(y _ {i} \frac {1}{\sqrt {m}} \sum_ {s = 1} ^ {m} a _ {s} \langle \bar {u} _ {s}, x _ {i} \rangle \mathbb {1} \left[ \langle w _ {s, 0}, x _ {i} \rangle > 0 \right]\right) > 0. \tag {2.1}
+$$
+
+The infinite-width limit of eq. (2.1) is formalized as Assumption 2.1, with an additional bound on the $(2,\infty)$ norm of the separator. A concrete construction of $\overline{U}$ using Assumption 2.1 is given in eq. (2.2).
+
+Let $\mu_{\mathcal{N}}$ denote the Gaussian measure on $\mathbb{R}^d$ , given by the Gaussian density with respect to the Lebesgue measure on $\mathbb{R}^d$ . We consider the following Hilbert space
+
+$$
+\mathcal {H} := \left\{w: \mathbb {R} ^ {d} \to \mathbb {R} ^ {d} \Big | \int \| w (z) \| _ {2} ^ {2} \mathrm {d} \mu_ {\mathcal {N}} (z) < \infty \right\}.
+$$
+
+For any $x\in \mathbb{R}^d$ , define $\phi_x\in \mathcal{H}$ by
+
+$$
+\phi_ {x} (z) := x \mathbb {1} [ \langle z, x \rangle > 0 ],
+$$
+
+and particularly define $\phi_i \coloneqq \phi_{x_i}$ for the training input $x_i$ .
+
+Assumption 2.1. There exist $\bar{v} \in \mathcal{H}$ and $\gamma > 0$ such that $\left\| \bar{v}(z) \right\|_2 \leq 1$ for any $z \in \mathbb{R}^d$ , and for any $1 \leq i \leq n$ ,
+
+$$
+y _ {i} \left\langle \bar {v}, \phi_ {i} \right\rangle_ {\mathcal {H}} := y _ {i} \int \left\langle \bar {v} (z), \phi_ {i} (z) \right\rangle \mathrm {d} \mu_ {\mathcal {N}} (z) \geq \gamma .
+$$
+
+
+
+As discussed in Section 5, the space $\mathcal{H}$ is the reproducing kernel Hilbert space (RKHS) induced by the infinite-width NTK with respect to $W$ , and $\phi_x$ maps $x$ into $\mathcal{H}$ . Assumption 2.1 supposes that the induced training set $\{(\phi_i, y_i)\}_{i=1}^n$ can be separated by some $\bar{v} \in \mathcal{H}$ , with an additional bound on $\left\| \bar{v}(z) \right\|_2$ which is crucial in our analysis. It is also possible to give a dual characterization of the separation margin (cf. eq. (5.2)), which also allows us to show that Assumption 2.1 always holds when there are no parallel inputs (cf. Proposition 5.1). However, it is often more convenient to construct $\bar{v}$ directly; see Section 5 for some examples.
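Since $\langle \bar{v}, \phi_x \rangle_{\mathcal{H}}$ is an expectation over a Gaussian, it can be estimated by Monte Carlo. The sketch below uses a toy constant separator $\bar{v}(z) = e_1$ with labels given by the sign of the first coordinate (an illustrative assumption chosen so the margin has a closed form):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_samples = 5, 200_000

# Toy instance: x on the unit sphere, y = sign(x_1), v_bar(z) = e_1 (unit norm).
x = rng.normal(size=d)
x /= np.linalg.norm(x)
y = np.sign(x[0])
e1 = np.eye(d)[0]

# y <v_bar, phi_x>_H = y * E_z[ <v_bar(z), x> 1[<z, x> > 0] ],  z ~ N(0, I_d).
z = rng.normal(size=(n_samples, d))
margin_mc = float(y * np.mean((z @ x > 0) * (e1 @ x)))

# With constant v_bar(z) = e_1: y * x_1 * P(<z, x> > 0) = |x_1| / 2.
margin_exact = abs(float(x[0])) / 2
```

Here $P(\langle z, x \rangle > 0) = 1/2$ for $z \sim \mathcal{N}(0, I_d)$, so the estimate should concentrate around $|x_1|/2$.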
+
+With Assumption 2.1, we state our main empirical risk result.
+
+Theorem 2.2. Under Assumption 2.1, given any risk target $\epsilon \in (0,1)$ and any $\delta \in (0,1/3)$ , let
+
+$$
+\lambda := \frac{\sqrt{2 \ln(4n/\delta)} + \ln(4/\epsilon)}{\gamma/4}, \quad \text{and} \quad M := \frac{4096 \lambda^2}{\gamma^6}.
+$$
+
+Then for any $m \geq M$ and any constant step size $\eta \leq 1$ , with probability $1 - 3\delta$ over the random initialization,
+
+$$
+\frac{1}{T} \sum_{t < T} \widehat{\mathcal{R}}(W_t) \leq \epsilon, \quad \text{where} \quad T := \left\lceil \frac{2\lambda^2}{\eta\epsilon} \right\rceil.
+$$
+
+Moreover for any $0 \leq t < T$ and any $1 \leq s \leq m$ ,
+
+$$
+\left\| w _ {s, t} - w _ {s, 0} \right\| _ {2} \leq \frac {4 \lambda}{\gamma \sqrt {m}}.
+$$
+
+While the number of hidden units required by prior work all have a polynomial dependency on $n$ , $1/\delta$ or $1/\epsilon$ , Theorem 2.2 only requires $m = \Omega \left( \ln (n/\delta) + \ln (1/\epsilon)^2 \right)$ . The required width has a polynomial dependency on $1/\gamma$ , which is an adaptive quantity: while $1/\gamma$ can be $\mathrm{poly}(n)$ for random labels (cf. Proposition 5.2), it can be $\mathrm{polylog}(n)$ when there is a strong feature-label relationship, for example on the noisy 2-XOR data introduced in (Wei et al., 2018) (cf. Proposition 5.3). Moreover, we show in Proposition 5.4 that if we want $\left\{ (\nabla f_i(W_0), y_i) \right\}_{i=1}^n$ to be separable, which is the starting point of an NTK-style analysis, the width has to depend polynomially on $1/\gamma$ .
+
+In the rest of Section 2, we give a proof sketch of Theorem 2.2. The full proof is given in Appendix A.
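As a sanity check of this setting (not the formal proof), a minimal numpy simulation with toy separable data, constant step size $\eta = 1$, and illustrative sizes trains only $W$ and exhibits both claims of Theorem 2.2: the empirical risk decreases, and each row of $W$ moves by at most $\eta/\sqrt{m}$ per step:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, eta, T = 40, 5, 1000, 1.0, 200

# Toy linearly separable data on the unit sphere (an illustrative assumption).
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.sign(X[:, 0])

W = rng.normal(size=(m, d))            # w_{s,0} ~ N(0, I_d)
W0 = W.copy()
a = rng.choice([-1.0, 1.0], size=m)    # second layer is fixed

def risk(W):
    f = np.maximum(X @ W.T, 0.0) @ a / np.sqrt(m)
    return float(np.mean(np.log1p(np.exp(-y * f))))

risk0 = risk(W)
for _ in range(T):
    pre = X @ W.T                                 # (n, m) pre-activations
    f = np.maximum(pre, 0.0) @ a / np.sqrt(m)     # (n,) network outputs
    coef = -1.0 / (1.0 + np.exp(y * f)) * y / n   # l'(y_i f_i) * y_i / n
    act = (pre > 0) * a                           # (n, m) ReLU gates times a_s
    G = (act * coef[:, None]).T @ X / np.sqrt(m)  # (m, d) risk gradient
    W -= eta * G                                  # gradient descent step

final_risk = risk(W)
max_move = float(np.linalg.norm(W - W0, axis=1).max())
```

Since $\|\nabla \widehat{\mathcal{R}}\|$ restricted to any row is at most $\widehat{\mathcal{Q}}/\sqrt{m} \leq 1/\sqrt{m}$, the total movement of any row after $T$ steps is at most $\eta T/\sqrt{m}$ (and in practice far smaller, in line with the $4\lambda/(\gamma\sqrt{m})$ bound).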
+
+# 2.1 PROPERTIES AT INITIALIZATION
+
+In this subsection, we give some nice properties of random initialization.
+
+Given an initialization $(W_0, a)$ , for any $1 \leq s \leq m$ , define
+
+$$
+\bar {u} _ {s} := \frac {1}{\sqrt {m}} a _ {s} \bar {v} \left(w _ {s, 0}\right), \tag {2.2}
+$$
+
+where $\bar{v}$ is given by Assumption 2.1. Collect $\bar{u}_s$ into a matrix $\overline{U} \in \mathbb{R}^{m \times d}$ . It holds that $\| \bar{u}_s \|_2 \leq 1 / \sqrt{m}$ , and $\| \overline{U} \|_F \leq 1$ .
+
+Lemma 2.3 ensures that with high probability $\overline{U}$ has a positive margin at initialization.
+
+Lemma 2.3. Under Assumption 2.1, given any $\delta \in (0,1)$ and any $\epsilon_1 \in (0,\gamma)$ , if $m \geq \left(2\ln(n / \delta)\right) / \epsilon_1^2$ , then with probability $1 - \delta$ , it holds simultaneously for all $1 \leq i \leq n$ that
+
+$$
+y _ {i} f _ {i} ^ {(0)} (\overline {{U}}) = y _ {i} \left\langle \nabla f _ {i} (W _ {0}), \overline {{U}} \right\rangle \geq \gamma - \sqrt {\frac {2 \ln (n / \delta)}{m}} \geq \gamma - \epsilon_ {1}.
+$$
+
+For any $W$ , any $\epsilon_2 > 0$ , and any $1 \leq i \leq n$ , define
+
+$$
+\alpha_ {i} (W, \epsilon_ {2}) = \frac {1}{m} \sum_ {s = 1} ^ {m} \mathbb {1} \left[ \left| \langle w _ {s}, x _ {i} \rangle \right| \leq \epsilon_ {2} \right].
+$$
+
+Lemma 2.4 controls $\alpha_{i}(W_{0},\epsilon_{2})$ . It will help us show that $\overline{U}$ has a good margin during the training process.
+
+Lemma 2.4. Under the condition of Lemma 2.3, for any $\epsilon_2 > 0$ , with probability $1 - \delta$ , it holds simultaneously for all $1 \leq i \leq n$ that
+
+$$
+\alpha_ {i} \left(W _ {0}, \epsilon_ {2}\right) \leq \sqrt {\frac {2}{\pi}} \epsilon_ {2} + \sqrt {\frac {\ln (n / \delta)}{2 m}} \leq \epsilon_ {2} + \frac {\epsilon_ {1}}{2}.
+$$
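The Gaussian concentration behind Lemma 2.4 is easy to see empirically: since $\langle w_{s,0}, x_i \rangle \sim \mathcal{N}(0,1)$ for unit-norm $x_i$, the fraction of near-inactive units is governed by $P(|\mathcal{N}(0,1)| \leq \epsilon_2) \leq \sqrt{2/\pi}\,\epsilon_2$. A sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, eps2 = 200_000, 8, 0.5

W0 = rng.normal(size=(m, d))
x = rng.normal(size=d)
x /= np.linalg.norm(x)

# alpha_i(W_0, eps2): fraction of units whose pre-activation is within eps2 of 0.
alpha = float(np.mean(np.abs(W0 @ x) <= eps2))

# Since <w_s, x> ~ N(0, 1), E[alpha] = 2*Phi(eps2) - 1 <= sqrt(2/pi) * eps2.
bound = np.sqrt(2 / np.pi) * eps2
```
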
+
+Finally, Lemma 2.5 controls the output of the network at initialization.
+
+Lemma 2.5. Given any $\delta \in (0,1)$ , if $m \geq 25\ln(2n/\delta)$ , then with probability $1 - \delta$ , it holds simultaneously for all $1 \leq i \leq n$ that
+
+$$
+\left| f \left(x _ {i}; W _ {0}, a\right) \right| \leq \sqrt {2 \ln (4 n / \delta)}.
+$$
+
+# 2.2 CONVERGENCE ANALYSIS OF GRADIENT DESCENT
+
+We analyze gradient descent in this subsection. First, define
+
+$$
+\widehat {\mathcal {Q}} (W) := \frac {1}{n} \sum_ {i = 1} ^ {n} - \ell^ {\prime} \left(y _ {i} f _ {i} (W)\right).
+$$
+
+We have the following observations.
+
+- For any $W$ and any $1 \leq s \leq m$ , $\left\| \partial f_i / \partial w_s \right\|_2 \leq 1/\sqrt{m}$ , and thus $\left\| \nabla f_i(W) \right\|_F \leq 1$ . Therefore by the triangle inequality, $\left\| \nabla \widehat{\mathcal{R}}(W) \right\|_F \leq \widehat{\mathcal{Q}}(W)$ .
+- The logistic loss satisfies $0 \leq -\ell' \leq 1$ , and thus $0 \leq \widehat{\mathcal{Q}}(W) \leq 1$ .
+- The logistic loss satisfies $-\ell' \leq \ell$ , and thus $\widehat{\mathcal{Q}}(W) \leq \widehat{\mathcal{R}}(W)$ .
+
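The two logistic-loss inequalities used above, $0 \leq -\ell' \leq 1$ and $-\ell' \leq \ell$, follow from $-\ell'(z) = 1/(1+e^z)$ and can be verified numerically on a grid (the grid itself is arbitrary):

```python
import numpy as np

z = np.linspace(-30.0, 30.0, 2001)

loss = np.log1p(np.exp(-z))          # logistic loss  l(z) = ln(1 + e^{-z})
neg_dloss = 1.0 / (1.0 + np.exp(z))  # -l'(z) = 1 / (1 + e^z)

ok_range = bool(np.all((neg_dloss >= 0) & (neg_dloss <= 1)))  # 0 <= -l' <= 1
ok_bound = bool(np.all(neg_dloss <= loss + 1e-12))            # -l' <= l
```

The second inequality is the standard fact $\ln(1+u) \geq u/(1+u)$ for $u = e^{-z} > 0$.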
+The quantity $\widehat{\mathcal{Q}}$ first appeared in the perceptron analysis (Novikoff, 1962) for the ReLU loss, and has also been analyzed in prior work (Ji & Telgarsky, 2018; Cao & Gu, 2019a; Nitanda & Suzuki, 2019). In this work, $\widehat{\mathcal{Q}}$ specifically helps us prove the following result, which plays an important role in obtaining a width which only depends on $\mathrm{polylog}(1 / \epsilon)$ .
+
+Lemma 2.6. For any $t \geq 0$ and any $\overline{W}$ , if $\eta_t \leq 1$ , then
+
+$$
+\eta_ {t} \widehat {\mathcal {R}} (W _ {t}) \leq \left\| W _ {t} - \overline {{W}} \right\| _ {F} ^ {2} - \left\| W _ {t + 1} - \overline {{W}} \right\| _ {F} ^ {2} + 2 \eta_ {t} \widehat {\mathcal {R}} ^ {(t)} (\overline {{W}}).
+$$
+
+Consequently, if we use a constant step size $\eta \leq 1$ for $0 \leq \tau < t$ , then
+
+$$
+\eta \left(\sum_ {\tau < t} \widehat {\mathcal {R}} \left(W _ {\tau}\right)\right) + \left\| W _ {t} - \bar {W} \right\| _ {F} ^ {2} \leq \left\| W _ {0} - \bar {W} \right\| _ {F} ^ {2} + 2 \eta \left(\sum_ {\tau < t} \widehat {\mathcal {R}} ^ {(\tau)} (\bar {W})\right).
+$$
+
+The proof of Lemma 2.6 starts from the standard iteration guarantee:
+
+$$
+\left\| W _ {t + 1} - \overline {{W}} \right\| _ {F} ^ {2} = \left\| W _ {t} - \overline {{W}} \right\| _ {F} ^ {2} - 2 \eta_ {t} \left\langle \nabla \widehat {\mathcal {R}} (W _ {t}), W _ {t} - \overline {{W}} \right\rangle + \eta_ {t} ^ {2} \left\| \nabla \widehat {\mathcal {R}} (W _ {t}) \right\| _ {F} ^ {2}.
+$$
+
+We can then handle the inner product term using the convexity of $\ell$ and homogeneity of ReLU, and control $\| \nabla \widehat{\mathcal{R}}(W_t) \|_F^2$ by $\widehat{\mathcal{R}}(W_t)$ using the above properties of $\widehat{\mathcal{Q}}(W_t)$ . Lemma 2.6 is similar to (Allen-Zhu & Li, 2019a, Fact D.4 and Claim D.5), where the squared loss is considered.
+
+Using Lemmas 2.3 to 2.6, we can prove Theorem 2.2. Below is a proof sketch; the full proof is given in Appendix A.
+
+1. We first show that as long as $\| w_{s,t} - w_{s,0}\| _2\leq 4\lambda /(\gamma \sqrt{m})$ for all $1\leq s\leq m$ , it holds that $\widehat{\mathcal{R}}^{(t)}\left(W_0 + \lambda \overline{U}\right)\leq \epsilon /4$ . To see this, let us consider $\widehat{\mathcal{R}}^{(0)}$ first. For any $1\leq i\leq n$ , Lemma 2.5 ensures that $|\langle \nabla f_i(W_0),W_0\rangle |$ is bounded, while Lemma 2.3 ensures that $\langle \nabla f_i(W_0),\overline{U}\rangle$ is concentrated around $\gamma$ with a large width. As a result, with the chosen $\lambda$ in Theorem 2.2, we can show that $\langle \nabla f_i(W_0),W_0 + \lambda \overline{U}\rangle$ is large, and $\widehat{\mathcal{R}}^{(0)}(W_0 + \lambda \overline{U})$ is small due to the exponential tail of the logistic loss. To further handle $\widehat{\mathcal{R}}^{(t)}$ , we use a standard NTK argument to control $\langle \nabla f_i(W_t) - \nabla f_i(W_0),W_0 + \lambda \overline{U}\rangle$ under the condition that $\| w_{s,t} - w_{s,0}\| _2\leq 4\lambda /\big(\gamma \sqrt{m}\big)$ .
+2. We then prove by contradiction that the above bound on $\| w_{s,t} - w_{s,0}\| _2$ holds for at least the first $T$ iterations. The key observation is that as long as $\widehat{\mathcal{R}}^{(t)}(W_0 + \lambda \overline{U})\leq \epsilon /4$ , we can use it and Lemma 2.6 to control $\sum_{\tau < t}\widehat{\mathcal{Q}} (W_{\tau})$ , and then just invoke $\| w_{s,t} - w_{s,0}\| _2\leq \eta \sum_{\tau < t}\widehat{\mathcal{Q}} (W_{\tau}) / \sqrt{m}$ .
+
+The quantity $\sum_{\tau < t} \widehat{\mathcal{Q}}(W_{\tau})$ has also been considered in prior work (Cao & Gu, 2019a; Nitanda & Suzuki, 2019), where it is bounded by $\sqrt{t} \sqrt{\sum_{\tau < t} \widehat{\mathcal{Q}}(W_{\tau})^2}$ using the Cauchy-Schwarz inequality, which introduces a $\sqrt{t}$ factor. To make the required width depend only on $\mathrm{polylog}(1/\epsilon)$ , we also need an upper bound on $\sum_{\tau < t} \widehat{\mathcal{Q}}(W_{\tau})$ which depends only on $\mathrm{polylog}(1/\epsilon)$ . Since the above analysis results in a $\sqrt{t}$ factor, and in our case $\Omega(1/\epsilon)$ steps are needed, it is unclear how to get a $\mathrm{polylog}(1/\epsilon)$ width using the analysis in (Cao & Gu, 2019a; Nitanda & Suzuki, 2019). By contrast, using Lemma 2.6, we can show that $\sum_{\tau < t} \widehat{\mathcal{Q}}(W_{\tau}) \leq 4\lambda/\gamma$ , which only depends on $\ln(1/\epsilon)$ .
+
+3. The claims of Theorem 2.2 then follow directly from the above two steps and Lemma 2.6.
+
+# 3 GENERALIZATION
+
+To get a generalization bound, we naturally extend Assumption 2.1 to the following assumption.
+
+Assumption 3.1. There exist $\bar{v} \in \mathcal{H}$ and $\gamma > 0$ such that $\left\| \bar{v}(z) \right\|_2 \leq 1$ for any $z \in \mathbb{R}^d$ , and
+
+$$
+y \int \left\langle \bar {v} (z), x \right\rangle \mathbb {1} \left[ \langle z, x \rangle > 0 \right] \mathrm {d} \mu_ {\mathcal {N}} (z) \geq \gamma
+$$
+
+for almost all $(x,y)$ sampled from the data distribution $\mathcal{D}$ .
+
+The above assumption is also made in (Nitanda & Suzuki, 2019) for smooth activations. (Cao & Gu, 2019a) make a similar separability assumption, but in the RKHS induced by the second layer $a$ ; by contrast, Assumption 3.1 is on separability in the RKHS induced by the first layer $W$ .
+
+Here is our test error bound with Assumption 3.1.
+
+Theorem 3.2. Under Assumption 3.1, given any $\epsilon \in (0,1)$ and any $\delta \in (0,1/4)$ , let $\lambda$ and $M$ be given as in Theorem 2.2:
+
+$$
+\lambda := \frac {\sqrt {2 \ln (4 n / \delta)} + \ln (4 / \epsilon)}{\gamma / 4}, \quad \text {and} \quad M := \frac {4096 \lambda^ {2}}{\gamma^ {6}}.
+$$
+
+Then for any $m \geq M$ and any constant step size $\eta \leq 1$ , with probability $1 - 4\delta$ over the random initialization and data sampling,
+
+$$
+P _ {(x, y) \sim \mathcal {D}} \left(y f (x; W _ {k}, a) \leq 0\right) \leq 2 \epsilon + \frac {16 \left(\sqrt {2 \ln (4 n / \delta)} + \ln (4 / \epsilon)\right)}{\gamma^ {2} \sqrt {n}} + 6 \sqrt {\frac {\ln (2 / \delta)}{2 n}},
+$$
+
+where $k$ denotes the step with the minimum empirical risk before step $\lceil 2\lambda^2 /(\eta \epsilon) \rceil$ .
+
+Below is a direct corollary of Theorem 3.2.
+
+Corollary 3.3. Under Assumption 3.1, given any $\epsilon, \delta \in (0,1)$ , using a constant step size no larger than 1, and letting
+
+$$
+n = \widetilde {\Omega} \left(\frac {1}{\gamma^ {4} \epsilon^ {2}}\right), \quad \text {and} \quad m = \Omega \left(\frac {\ln (n / \delta) + \ln (1 / \epsilon) ^ {2}}{\gamma^ {8}}\right),
+$$
+
+it holds with probability $1 - \delta$ that $P_{(x,y)\sim \mathcal{D}}\left(yf(x;W_k,a)\leq 0\right)\leq \epsilon$ , where $k$ denotes the step with the minimum empirical risk in the first $\widetilde{\Theta}\left(1 / \gamma^{2}\epsilon\right)$ steps.
+
+The proof of Theorem 3.2 uses the sigmoid mapping $-\ell'(z) = e^{-z} / (1 + e^{-z})$ , the empirical average $\widehat{\mathcal{Q}}(W_k)$ , and the corresponding population average $\mathcal{Q}(W_k) \coloneqq \mathbb{E}_{(x,y) \sim \mathcal{D}}\left[-\ell'\left(yf(x;W_k,a)\right)\right]$ . As noted in (Cao & Gu, 2019a), because $P_{(x,y) \sim \mathcal{D}}\left(yf(x;W_k,a) \leq 0\right) \leq 2\mathcal{Q}(W_k)$ , it is enough to control $\mathcal{Q}(W_k)$ . As $\widehat{\mathcal{Q}}(W_k)$ is controlled by Theorem 2.2, it is enough to control the generalization error $\mathcal{Q}(W_k) - \widehat{\mathcal{Q}}(W_k)$ . Moreover, since $-\ell'$ takes values in $[0,1]$ and is 1-Lipschitz, it is enough to bound the Rademacher complexity of the function space explored by gradient descent. Invoking the bound on $\left\| W_k^\top - W_0^\top \right\|_{2,\infty}$ finishes the proof. The proof details are given in Appendix B.
+
+Remark 3.4. To get Theorem 3.2, we use a Lipschitz-based Rademacher complexity bound. One can also use a smoothness-based Rademacher complexity bound (Srebro et al., 2010, Theorem 1) and get a sample complexity of $\widetilde{O}\left(1 / (\gamma^{4}\epsilon)\right)$ . However, the resulting bound is more complicated and introduces a large constant factor. It is an interesting open question to give a clean analysis based on smoothness.
+
+# 4 STOCHASTIC GRADIENT DESCENT
+
+There are several different formulations of SGD. In this section, we consider SGD with an online oracle. We randomly sample $W_{0}$ and $a$ , and fix $a$ during training. At step $i$ , a data example $(x_{i},y_{i})$ is sampled from the data distribution. We still let $f_{i}(W)\coloneqq f(x_{i};W,a)$ , and perform the following update
+
+$$
+W _ {i + 1} := W _ {i} - \eta_ {i} \ell^ {\prime} \left(y _ {i} f _ {i} \left(W _ {i}\right)\right) y _ {i} \nabla f _ {i} \left(W _ {i}\right).
+$$
+
+Note that here $i$ starts from 0.
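A schematic rendering of this online update (the data oracle and all sizes here are placeholders of ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, eta = 10, 1000, 1.0
W = rng.normal(size=(m, d))             # W_0, entries N(0, 1)
a = rng.choice([-1.0, 1.0], size=m)     # second layer, fixed during training

def sample_example():
    # placeholder oracle: linearly separable by the sign of the first coordinate
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)
    return x, (1.0 if x[0] > 0 else -1.0)

n, errors = 500, 0
for i in range(n):
    x, y = sample_example()
    pre = W @ x
    f = a @ np.maximum(pre, 0) / np.sqrt(m)
    errors += int(y * f <= 0)
    # W_{i+1} = W_i - eta * ell'(y_i f_i(W_i)) * y_i * grad f_i(W_i)
    lp = -1.0 / (1.0 + np.exp(y * f))   # ell'(z) = -1/(1+e^z) for the logistic loss
    gradf = (a / np.sqrt(m))[:, None] * (pre > 0)[:, None] * x[None, :]
    W -= eta * lp * y * gradf

print(errors / n)  # cumulative mistake rate over the stream
```

On this separable placeholder distribution, the cumulative mistake rate should fall well below chance as the stream progresses.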
+
+Still with Assumption 3.1, we show the following result.
+
+Theorem 4.1. Under Assumption 3.1, given any $\epsilon, \delta \in (0,1)$ , using a constant step size and $m = \Omega\left(\left(\ln(1/\delta) + \ln(1/\epsilon)^2\right)/\gamma^8\right)$ , it holds with probability $1 - \delta$ that
+
+$$
+\frac {1}{n} \sum_ {i = 1} ^ {n} P _ {(x, y) \sim \mathcal {D}} (y f (x; W _ {i}, a) \leq 0) \leq \epsilon , \quad \text {for} \quad n = \widetilde {\Theta} (1 / \gamma^ {2} \epsilon).
+$$
+
+Below is a proof sketch of Theorem 4.1; the complete proof is given in Appendix C. For any $i$ and $W$ , define
+
+$$
+\mathcal {R} _ {i} (W) := \ell \left(y _ {i} \left\langle \nabla f _ {i} (W _ {i}), W \right\rangle\right), \quad \text {and} \quad \mathcal {Q} _ {i} (W) := - \ell^ {\prime} \left(y _ {i} \left\langle \nabla f _ {i} (W _ {i}), W \right\rangle\right).
+$$
+
+Due to homogeneity, it holds that $\mathcal{R}_i(W_i) = \ell \left(y_if_i(W_i)\right)$ and $\mathcal{Q}_i(W_i) = -\ell '\left(y_if_i(W_i)\right)$ .
+
+The first step is an extension of Lemma 2.6 to the SGD setting, with a similar proof.
+
+Lemma 4.2. With a constant step size $\eta \leq 1$ , for any $\overline{W}$ and any $i \geq 0$ ,
+
+$$
+\eta \left(\sum_ {t < i} \mathcal {R} _ {t} \left(W _ {t}\right)\right) + \left\| W _ {i} - \bar {W} \right\| _ {F} ^ {2} \leq \left\| W _ {0} - \bar {W} \right\| _ {F} ^ {2} + 2 \eta \left(\sum_ {t < i} \mathcal {R} _ {t} \left(\bar {W}\right)\right).
+$$
+
+With Lemma 4.2, we can also extend Theorem 2.2 to the SGD setting and get a bound on $\sum_{i < n} \mathcal{Q}_i(W_i)$ , using a similar proof. To further get a bound on the cumulative population risk $\sum_{i < n} \mathcal{Q}(W_i)$ , the key observation is that $\sum_{i < n} \left( \mathcal{Q}(W_i) - \mathcal{Q}_i(W_i) \right)$ is a martingale. Using a martingale Bernstein bound, we prove the following lemma; applying it finishes the proof of Theorem 4.1.
+
+Lemma 4.3. Given any $\delta \in (0,1)$ , with probability $1 - \delta$ ,
+
+$$
+\sum_ {t < i} \mathcal {Q} (W _ {t}) \leq 4 \sum_ {t < i} \mathcal {Q} _ {t} (W _ {t}) + 4 \ln \left(\frac {1}{\delta}\right).
+$$
+
+# 5 ON SEPARABILITY
+
+In this section we discuss Assumption 2.1, the separability of the NTK. The proofs are all given in Appendix D.
+
+Given a training set $\{(x_i, y_i)\}_{i=1}^n$ , the linear kernel is defined as $K_0(x_i, x_j) := \langle x_i, x_j \rangle$ . The maximum margin achievable by a linear classifier is given by
+
+$$
+\gamma_ {0} := \min _ {q \in \Delta_ {n}} \sqrt {\left(q \odot y\right) ^ {\top} K _ {0} \left(q \odot y\right)}, \tag {5.1}
+$$
+
+where $\Delta_{n}$ denotes the probability simplex and $\odot$ denotes the Hadamard product. In addition to the dual definition eq. (5.1), when $\gamma_0 > 0$ there also exists a maximum margin classifier $\bar{u}$ which gives a primal characterization of $\gamma_0$ : it holds that $\| \bar{u}\| _2 = 1$ and $y_{i}\left\langle \bar{u},x_{i}\right\rangle \geq \gamma_{0}$ for all $i$ .
+
+In this paper we consider another kernel, the infinite-width NTK with respect to the first layer:
+
+$$
+\begin{array}{l} K _ {1} \left(x _ {i}, x _ {j}\right) := \mathbb {E} \left[ \left\langle \frac {\partial f (x _ {i} ; W _ {0} , a)}{\partial W _ {0}}, \frac {\partial f (x _ {j} ; W _ {0} , a)}{\partial W _ {0}} \right\rangle \right] \\ = \mathbb {E} _ {w \sim \mathcal {N} (0, I _ {d})} \left[ \left\langle x _ {i} \mathbb {1} \left[ \langle x _ {i}, w \rangle > 0 \right], x _ {j} \mathbb {1} \left[ \langle x _ {j}, w \rangle > 0 \right] \right\rangle \right] = \langle \phi_ {i}, \phi_ {j} \rangle_ {\mathcal {H}}. \\ \end{array}
+$$
+
+Here $\phi$ and $\mathcal{H}$ are defined at the beginning of Section 2. Similar to the dual definition of $\gamma_0$ , the margin given by $K_{1}$ is defined as
+
+$$
+\gamma_ {1} := \min _ {q \in \Delta_ {n}} \sqrt {\left(q \odot y\right) ^ {\top} K _ {1} \left(q \odot y\right)}. \tag {5.2}
+$$
+
+We can also give a primal characterization of $\gamma_{1}$ when it is positive.
+
+Proposition 5.1. If $\gamma_1 > 0$ , then there exists $\hat{v} \in \mathcal{H}$ such that $\| \hat{v} \|_{\mathcal{H}} = 1$ , and $y_i \langle \hat{v}, \phi_i \rangle_{\mathcal{H}} \geq \gamma_1$ for any $1 \leq i \leq n$ . Additionally $\| \hat{v}(z) \|_2 \leq 1 / \gamma_1$ for any $z \in \mathbb{R}^d$ .
+
+The proof is given in Appendix D, and uses the Fenchel duality theory. Using the upper bound $\left\| \hat{v}(z) \right\|_2 \leq 1 / \gamma_1$ , we can see that $\gamma_1 \hat{v}$ satisfies Assumption 2.1 with $\gamma \geq \gamma_1^2$ . However, such an upper bound $\left\| \hat{v}(z) \right\|_2 \leq 1 / \gamma_1$ might be too loose, which leads to a bad rate. In fact, as shown later, in some cases we can construct $\bar{v}$ directly which satisfies Assumption 2.1 with a large $\gamma$ . For this reason, we choose to make Assumption 2.1 instead of assuming a positive $\gamma_1$ .
+
+However, we can use $\gamma_{1}$ to show that Assumption 2.1 always holds when there are no parallel inputs. Oymak & Soltanolkotabi (2019, Corollary I.2) prove that if for any two feature vectors $x_{i}$ and $x_{j}$ , we have $\| x_{i} - x_{j}\|_{2} \geq \theta$ and $\| x_{i} + x_{j}\|_{2} \geq \theta$ for some $\theta > 0$ , then the minimum eigenvalue of $K_{1}$ is at least $\theta / (100n^{2})$ . For arbitrary labels $y \in \{-1, +1\}^{n}$ , since $\| q \odot y\|_{2} \geq 1 / \sqrt{n}$ , we have the worst-case bound $\gamma_{1}^{2} \geq \theta / (100n^{3})$ . A direct improvement of this bound is $\theta / (100n_{S}^{3})$ , where $n_{S}$ denotes the number of support vectors, which could be much smaller than $n$ on real-world data.
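The dual margin $\gamma_1$ of eq. (5.2) can be estimated numerically: approximate $K_1$ by Monte Carlo over $w \sim \mathcal{N}(0, I_d)$ and minimize $q \mapsto \sqrt{(q\odot y)^\top K_1 (q\odot y)}$ over the simplex by projected gradient descent. A rough sketch (the function name and hyperparameters are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

def ntk_margin(X, y, n_w=20000, steps=2000, lr=0.1):
    """Estimate gamma_1 of eq. (5.2) for unit-norm rows of X."""
    n = len(y)
    # Monte Carlo estimate of K_1(x_i, x_j) over w ~ N(0, I_d)
    Wmc = rng.normal(size=(n_w, X.shape[1]))
    act = (X @ Wmc.T > 0).astype(float)          # activation indicators
    K1 = (X @ X.T) * (act @ act.T / n_w)
    G = (y[:, None] * y[None, :]) * K1           # Gram matrix of q |-> q . y
    q = np.full(n, 1.0 / n)
    for _ in range(steps):
        q = q - lr * (G @ q)                     # gradient step on 0.5 q^T G q
        # Euclidean projection back onto the probability simplex
        u = np.sort(q)[::-1]
        css = np.cumsum(u)
        rho = np.max(np.nonzero(u + (1.0 - css) / (np.arange(n) + 1) > 0)[0])
        q = np.maximum(q + (1.0 - css[rho]) / (rho + 1), 0.0)
    return np.sqrt(q @ G @ q)

# four unit vectors with 2-XOR-style labels
X = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
y = np.array([1.0, -1.0, 1.0, -1.0])
gamma1 = ntk_margin(X, y)
print(gamma1)
```

On these four points $K_1$ is essentially $\tfrac{1}{2}I$ (orthogonal and antipodal pairs contribute zero), so the minimizer is roughly uniform and the estimate should land near $\sqrt{1/8} \approx 0.35$.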
+
+On the other hand, given any training set $\{(x_i,y_i)\}_{i = 1}^n$ , even one with a large margin, replacing $y$ with random labels destroys the margin, as one should expect.
+
+Proposition 5.2. Given any training set $\{(x_i, y_i)\}_{i=1}^n$ , if the true labels $y$ are replaced with random labels $\epsilon \sim \mathrm{unif}\left(\{-1, +1\}^n\right)$ , then with probability 0.9 over the random labels, it holds that $\gamma_1 \leq 1/\sqrt{20n}$ .
+
+Although the above bounds all have a polynomial dependency on $n$ , they hold for arbitrary or random labels, and thus do not assume any relationship between the features and labels. Next we give some examples where there is a strong feature-label relationship, and thus a much larger margin can be proved.
+
+# 5.1 THE LINEARLY SEPARABLE CASE
+
+Suppose the data distribution is linearly separable with margin $\gamma_0$ : there exists a unit vector $\bar{u}$ such that $y\langle \bar{u},x\rangle \geq \gamma_0$ almost surely. Then we can define $\bar{v} (z)\coloneqq \bar{u}$ for any $z\in \mathbb{R}^d$ . For almost all $(x,y)$ , we have
+
+$$
+\begin{array}{l} y \int \langle \bar {v} (z), x \rangle \mathbb {1} [ \langle z, x \rangle > 0 ] \mathrm {d} \mu_ {\mathcal {N}} (z) = \int y \langle \bar {u}, x \rangle \mathbb {1} [ \langle z, x \rangle > 0 ] \mathrm {d} \mu_ {\mathcal {N}} (z) \\ \geq \gamma_ {0} \int \mathbb {1} \left[ \langle z, x \rangle > 0 \right] \mathrm {d} \mu_ {\mathcal {N}} (z) \\ = \frac {\gamma_ {0}}{2}, \\ \end{array}
+$$
+
+and thus Assumption 2.1 holds with $\gamma = \gamma_0 / 2$ .
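This computation is easy to check by Monte Carlo: with $\bar{v}(z) \equiv \bar{u}$ constant, the integral factors into $y\langle\bar{u},x\rangle$ times the Gaussian measure of the halfspace $\{\langle z,x\rangle > 0\}$, which is $1/2$. A quick sketch on toy data (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8
u = np.zeros(d)
u[0] = 1.0                                   # the unit separator \bar{u}

# one sample on the unit sphere; its linear margin is gamma_0 = y <u, x>
x = rng.normal(size=d)
x /= np.linalg.norm(x)
y = 1.0 if x[0] > 0 else -1.0
gamma0 = y * (u @ x)

# Monte Carlo estimate of  y * int <u, x> 1[<z, x> > 0] d mu_N(z)
Z = rng.normal(size=(200000, d))
lhs = y * (u @ x) * np.mean(Z @ x > 0)

print(lhs, gamma0 / 2)  # the two quantities nearly agree
```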
+
+# 5.2 THE NOISY 2-XOR DISTRIBUTION
+
+We consider the noisy 2-XOR distribution introduced in (Wei et al., 2018). It is the uniform distribution over the following $2^{d}$ points:
+
+$$
+\begin{array}{l} (x _ {1}, x _ {2}, y, x _ {3}, \dots , x _ {d}) \in \left\{\left(\frac {1}{\sqrt {d - 1}}, 0, 1\right), \left(0, \frac {1}{\sqrt {d - 1}}, - 1\right), \left(\frac {- 1}{\sqrt {d - 1}}, 0, 1\right), \left(0, \frac {- 1}{\sqrt {d - 1}}, - 1\right) \right\} \\ \times \left\{\frac {- 1}{\sqrt {d - 1}}, \frac {1}{\sqrt {d - 1}} \right\} ^ {d - 2}. \\ \end{array}
+$$
+
+The factor $\frac{1}{\sqrt{d - 1}}$ ensures that $\| x \|_2 = 1$ , and $\times$ above denotes the Cartesian product. Here the label $y$ only depends on the first two coordinates of the input $x$ .
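The distribution is straightforward to sample; one possible generator (the function name and sizes are ours):

```python
import numpy as np

def sample_noisy_2xor(n, d, rng):
    """Draw n examples from the noisy 2-XOR distribution; rows of X have unit norm."""
    s = 1.0 / np.sqrt(d - 1)
    # the four (x_1, x_2, y) patterns listed above
    core = np.array([(s, 0.0, 1.0), (0.0, s, -1.0),
                     (-s, 0.0, 1.0), (0.0, -s, -1.0)])
    idx = rng.integers(0, 4, size=n)
    X = np.empty((n, d))
    X[:, :2] = core[idx, :2]
    # the remaining d - 2 "noise" coordinates are uniform over {-s, +s}
    X[:, 2:] = s * rng.choice([-1.0, 1.0], size=(n, d - 2))
    return X, core[idx, 2]

rng = np.random.default_rng(4)
X, y = sample_noisy_2xor(1000, 10, rng)
print(X.shape)  # every row has unit Euclidean norm by construction
```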
+
+To construct $\bar{v}$ , we first decompose $\mathbb{R}^2$ into four regions:
+
+$$
+A _ {1} := \left\{\left(z _ {1}, z _ {2}\right) \mid z _ {1} \geq 0, \left| z _ {1} \right| \geq \left| z _ {2} \right| \right\},
+$$
+
+$$
+A _ {2} := \left\{\left(z _ {1}, z _ {2}\right) \mid z _ {2} > 0, | z _ {1} | < | z _ {2} | \right\},
+$$
+
+$$
+A _ {3} := \left\{\left(z _ {1}, z _ {2}\right) \mid z _ {1} \leq 0, \left| z _ {1} \right| \geq \left| z _ {2} \right| \right\} \backslash \{(0, 0) \},
+$$
+
+$$
+A _ {4} := \left\{\left(z _ {1}, z _ {2}\right) \mid z _ {2} < 0, | z _ {1} | < | z _ {2} | \right\}.
+$$
+
+Then $\bar{v}$ can be defined as follows. It only depends on the first two coordinates of $z$ .
+
+$$
+\bar {v} (z) := \left\{ \begin{array}{l l} (1, 0, 0, \dots , 0) & \text {if } \left(z _ {1}, z _ {2}\right) \in A _ {1}, \\ (0, - 1, 0, \dots , 0) & \text {if } \left(z _ {1}, z _ {2}\right) \in A _ {2}, \\ (- 1, 0, 0, \dots , 0) & \text {if } \left(z _ {1}, z _ {2}\right) \in A _ {3}, \\ (0, 1, 0, \dots , 0) & \text {if } \left(z _ {1}, z _ {2}\right) \in A _ {4}. \end{array} \right. \tag {5.3}
+$$
+
+The following result shows that $\gamma = \Omega(1/d)$ . Note that $n$ could be as large as $2^d$ , in which case $\gamma = \Omega\left(1/\ln(n)\right)$ .
+
+Proposition 5.3. For any $(x,y)$ sampled from the noisy 2-XOR distribution and any $d\geq 3$ , it holds that
+
+$$
+y \int \langle \bar {v} (z), x \rangle \mathbb {1} [ \langle z, x \rangle > 0 ] \mathrm {d} \mu_ {\mathcal {N}} (z) \geq \frac {1}{60 d}.
+$$
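Proposition 5.3 can be checked by Monte Carlo: implement the piecewise $\bar{v}$ of eq. (5.3) and average $y\langle\bar{v}(z),x\rangle\mathbb{1}[\langle z,x\rangle>0]$ over Gaussian draws of $z$. A sketch for a single example (sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
d = 10
s = 1.0 / np.sqrt(d - 1)

# one example from the noisy 2-XOR distribution: (x_1, x_2) = (s, 0), so y = 1
x = np.concatenate([[s, 0.0], s * rng.choice([-1.0, 1.0], size=d - 2)])
y = 1.0

def vbar(z):
    """The separator of eq. (5.3); it only depends on (z_1, z_2)."""
    z1, z2 = z[0], z[1]
    out = np.zeros(d)
    if z1 >= 0 and abs(z1) >= abs(z2):      # A_1
        out[0] = 1.0
    elif z2 > 0 and abs(z1) < abs(z2):      # A_2
        out[1] = -1.0
    elif z1 <= 0 and abs(z1) >= abs(z2):    # A_3
        out[0] = -1.0
    else:                                   # A_4
        out[1] = 1.0
    return out

# Monte Carlo estimate of  y * int <vbar(z), x> 1[<z, x> > 0] d mu_N(z)
Z = rng.normal(size=(100000, d))
vals = np.array([y * (vbar(z) @ x) for z in Z]) * (Z @ x > 0)
margin = vals.mean()
print(margin, 1.0 / (60 * d))  # the estimate should clear the 1/(60 d) bound
```

The $1/(60d)$ constant is conservative; the Monte Carlo estimate typically clears it by a comfortable factor.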
+
+We can prove two other interesting results for the noisy 2-XOR data.
+
+The width needs a poly $(1 / \gamma)$ dependency for initial separability. The first step of an NTK analysis is to show that $\left\{\left(\nabla f_i(W_0),y_i\right)\right\}_{i = 1}^n$ is separable. Proposition 5.4 gives an example where $\left\{\left(\nabla f_i(W_0),y_i\right)\right\}_{i = 1}^n$ is nonseparable when the network is narrow.
+
+Proposition 5.4. Let $D = \{(x_i, y_i)\}_{i=1}^4$ denote an arbitrary subset of the noisy 2-XOR dataset such that $x_i$ 's have the same last $(d-2)$ coordinates. For any $d \geq 20$ , if $m \leq \sqrt{d-2}/4$ , then with probability $1/2$ over the random initialization of $W_0$ , for any weights $V \in \mathbb{R}^{m \times d}$ , it holds that $y_i \left\langle V, \nabla f_i(W_0) \right\rangle \leq 0$ for at least one $i \in \{1, 2, 3, 4\}$ .
+
+For the noisy 2-XOR data, the separator $\bar{v}$ given by eq. (5.3) has margin $\gamma = \Omega(1/d)$ , and $1/\gamma = O(d)$ . As a result, if we want $\left\{\left(\nabla f_i(W_0), y_i\right)\right\}_{i=1}^n$ to be separable, the width has to be $\Omega(1/\sqrt{\gamma})$ . For a smaller width, gradient descent might still be able to solve the problem, but a beyond-NTK analysis would be needed.
+
+A tight sample complexity upper bound for the infinite-width NTK. (Wei et al., 2018) give a $d^2$ sample complexity lower bound for any NTK classifier on the noisy 2-XOR data. It turns out that $\gamma$ could give a matching sample complexity upper bound for the NTK and SGD.
+
+(Wei et al., 2018) consider the infinite-width NTK with respect to both layers. For the first layer, the infinite-width NTK $K_{1}$ is defined in Section 5, and the corresponding RKHS $\mathcal{H}$ and RKHS mapping $\phi$ are defined in Section 2. For the second layer, the infinite-width NTK is defined by
+
+$$
+\begin{array}{l} K _ {2} \left(x _ {i}, x _ {j}\right) := \mathbb {E} \left[ \left\langle \frac {\partial f \left(x _ {i} ; W _ {0} , a\right)}{\partial a}, \frac {\partial f \left(x _ {j} ; W _ {0} , a\right)}{\partial a} \right\rangle \right] \\ = \mathbb {E} _ {w \sim \mathcal {N} (0, I _ {d})} \left[ \sigma (\langle w, x _ {i} \rangle) \sigma (\langle w, x _ {j} \rangle) \right]. \\ \end{array}
+$$
+
+The corresponding RKHS $\mathcal{K}$ and inner product $\langle w_1, w_2 \rangle_{\mathcal{K}}$ are given by
+
+$$
+\mathcal {K} := \left\{w: \mathbb {R} ^ {d} \rightarrow \mathbb {R} \,\middle|\, \int w (z) ^ {2} \mathrm {d} \mu_ {\mathcal {N}} (z) < \infty \right\}, \quad \text {and} \quad \langle w _ {1}, w _ {2} \rangle_ {\mathcal {K}} = \int w _ {1} (z) w _ {2} (z) \mathrm {d} \mu_ {\mathcal {N}} (z).
+$$
+
+Given any $x \in \mathbb{R}^d$ , it is mapped into $\psi_x \in \mathcal{K}$ , where $\psi_x(z) \coloneqq \sigma(\langle z, x \rangle)$ . It holds that $K_2(x_i, x_j) = \langle \psi_{x_i}, \psi_{x_j} \rangle_{\mathcal{K}}$ . The infinite-width NTK with respect to both layers is just $K_1 + K_2$ . The corresponding RKHS is just $\mathcal{H} \times \mathcal{K}$ with the inner product
+
+$$
+\langle (v _ {1}, w _ {1}), (v _ {2}, w _ {2}) \rangle_ {\mathcal {H} \times \mathcal {K}} = \langle v _ {1}, v _ {2} \rangle_ {\mathcal {H}} + \langle w _ {1}, w _ {2} \rangle_ {\mathcal {K}}.
+$$
+
+The classifier $\bar{v}$ considered in eq. (5.3) has a unit norm (i.e., $\| \bar{v} \|_{\mathcal{H}} = 1$ ) and margin $\gamma$ on the space $\mathcal{H}$ . On $\mathcal{H} \times \mathcal{K}$ , it is enough to consider $(\bar{v},0)$ , which also has a unit norm and margin $\gamma$ . Since the infinite-width NTK model is a linear model in $\mathcal{H} \times \mathcal{K}$ , (Ji & Telgarsky, 2018, Lemma 2.5) can be used to show that SGD on the RKHS $\mathcal{H} \times \mathcal{K}$ could obtain a test error of $\epsilon$ with a sample complexity of $\widetilde{O}(1/\gamma^2\epsilon)$ . (The analysis in (Ji & Telgarsky, 2018) is done in $\mathbb{R}^d$ , but it still works with a well-defined inner product.) Since $\gamma = \Omega(1/d)$ , to achieve a constant test accuracy we need $\widetilde{O}(d^2)$ samples. This matches (up to logarithmic factors) the sample complexity lower bound of $d^2$ given by Wei et al. (2018).
+
+# 6 OPEN PROBLEMS
+
+In this paper, we analyze gradient descent on a two-layer network in the NTK regime, where the weights stay close to the initialization. It is an interesting open question if gradient descent learns something beyond the NTK, after the iterates move far enough from the initial weights. It is also interesting to extend our analysis to other architectures, such as multi-layer networks, convolutional networks, and residual networks. Finally, in this paper we only discuss binary classification; it is interesting to see if it is possible to get similar results for other tasks, such as regression.
+
+# ACKNOWLEDGEMENTS
+
+The authors are grateful for support from the NSF under grant IIS-1750051, and from NVIDIA via a GPU grant.
+
+# REFERENCES
+
+Zeyuan Allen-Zhu and Yuanzhi Li. What can resnet learn efficiently, going beyond kernels? arXiv preprint arXiv:1905.10337, 2019a.
+Zeyuan Allen-Zhu and Yuanzhi Li. Can sgd learn recurrent neural networks with provable generalization? arXiv preprint arXiv:1902.01028, 2019b.
+Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and generalization in overparameterized neural networks, going beyond two layers. arXiv preprint arXiv:1811.04918, 2018a.
+Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. arXiv preprint arXiv:1811.03962, 2018b.
+Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. On the convergence rate of training recurrent neural networks. arXiv preprint arXiv:1810.12065, 2018c.
+Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. arXiv preprint arXiv:1901.08584, 2019.
+Peter L. Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. JMLR, 3:463-482, Nov 2002.
+Alina Beygelzimer, John Langford, Lihong Li, Lev Reyzin, and Robert Schapire. Contextual bandit algorithms with supervised learning guarantees. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 19-26, 2011.
+Jonathan M. Borwein and Qiji J. Zhu. Techniques of Variational Analysis, volume 20 of CMS Books in Mathematics. Springer, 2005.
+Yuan Cao and Quanquan Gu. Generalization error bounds of gradient descent for learning overparameterized deep relu networks. arXiv preprint arXiv:1902.01384, 2019a.
+Yuan Cao and Quanquan Gu. Generalization bounds of stochastic gradient descent for wide and deep neural networks. arXiv preprint arXiv:1905.13210, 2019b.
+Lenaic Chizat and Francis Bach. A Note on Lazy Training in Supervised Differentiable Programming. arXiv:1812.07956v2 [math.OC], 2019.
+Simon S Du, Jason D Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. arXiv preprint arXiv:1811.03804, 2018a.
+Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. arXiv preprint arXiv:1810.02054, 2018b.
+Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in neural information processing systems, pp. 8571-8580, 2018.
+Ziwei Ji and Matus Telgarsky. Risk and parameter convergence of logistic regression. arXiv preprint arXiv:1803.07300v2, 2018.
+Yuanzhi Li and Yingyu Liang. Learning overparameterized neural networks via stochastic gradient descent on structured data. In Advances in Neural Information Processing Systems, pp. 8157-8166, 2018.
+Percy Liang. Stanford CS229T/STAT231: Statistical Learning Theory, Apr 2016. URL https://web.stanford.edu/class/cs229t/notes.pdf.
+Atsushi Nitanda and Taiji Suzuki. Refined generalization analysis of gradient descent for overparameterized two-layer neural networks with smooth activations on classification problems. arXiv preprint arXiv:1905.09870, 2019.
+
+Albert B.J. Novikoff. On convergence proofs on perceptrons. In Proceedings of the Symposium on the Mathematical Theory of Automata, volume 12, pp. 615-622, 1962.
+Samet Oymak and Mahdi Soltanolkotabi. Towards moderate overparameterization: global convergence guarantees for training shallow neural networks. arXiv preprint arXiv:1902.04674, 2019.
+Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
+Zhao Song and Xin Yang. Quadratic suffices for over-parametrization via matrix Chernoff bound. arXiv preprint arXiv:1906.03593, 2019.
+Nathan Srebro, Karthik Sridharan, and Ambuj Tewari. Smoothness, low noise and fast rates. In Advances in neural information processing systems, pp. 2199-2207, 2010.
+Martin J. Wainwright. UC Berkeley Statistics 210B, Lecture Notes: Basic tail and concentration bounds, Jan 2015. URL https://www.stat.berkeley.edu/~mjwain/stat210b/Chap2_TailBounds-Jan22_2015.pdf.
+Colin Wei, Jason D Lee, Qiang Liu, and Tengyu Ma. Regularization matters: Generalization and optimization of neural nets vs their induced kernel. arXiv preprint arXiv:1810.05369, 2018.
+Difan Zou and Quanquan Gu. An improved analysis of training over-parameterized deep neural networks. arXiv preprint arXiv:1906.04688, 2019.
+Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. Stochastic gradient descent optimizes over-parameterized deep relu networks. arXiv preprint arXiv:1811.08888, 2018.
+
+# A OMITTED PROOFS FROM SECTION 2
+
+Proof of Lemma 2.3. By Assumption 2.1, given any $1 \leq i \leq n$ ,
+
+$$
+\mu := \mathbb {E} _ {w \sim \mathcal {N} (0, I _ {d})} \left[ y _ {i} \left\langle \bar {v} (w), x _ {i} \right\rangle \mathbb {1} \left[ \langle w, x _ {i} \rangle > 0 \right] \right] \geq \gamma .
+$$
+
+On the other hand,
+
+$$
+y _ {i} f _ {i} ^ {(0)} (\bar {U}) = \frac {1}{m} \sum_ {s = 1} ^ {m} y _ {i} \left\langle \bar {v} \left(w _ {s, 0}\right), x _ {i} \right\rangle \mathbb {1} \left[ \left\langle w _ {s, 0}, x _ {i} \right\rangle > 0 \right]
+$$
+
+is the empirical mean of i.i.d. r.v.'s supported on $[-1, + 1]$ with mean $\mu$ . Therefore by Hoeffding's inequality, with probability $1 - \delta /n$ ,
+
+$$
+y _ {i} f _ {i} ^ {(0)} (\bar {U}) - \gamma \geq y _ {i} f _ {i} ^ {(0)} (\bar {U}) - \mu \geq - \sqrt {\frac {2 \ln (n / \delta)}{m}}.
+$$
+
+Applying a union bound finishes the proof.
+
+Proof of Lemma 2.4. Given any fixed $\epsilon_{2}$ and $1\leq i\leq n$ ,
+
+$$
+\mathbb {E} \left[ \alpha_ {i} \left(W _ {0}, \epsilon_ {2}\right) \right] = \mathbb {P} \left(\left| \langle w, x _ {i} \rangle \right| \leq \epsilon_ {2}\right) \leq \frac {2 \epsilon_ {2}}{\sqrt {2 \pi}} = \sqrt {\frac {2}{\pi}} \epsilon_ {2},
+$$
+
+because $\langle w, x_i \rangle$ is a standard Gaussian r.v. and the density of standard Gaussian has maximum $1 / \sqrt{2\pi}$ . Since $\alpha_i(W_0, \epsilon_2)$ is the empirical mean of Bernoulli r.v.'s, by Hoeffding's inequality, with probability $1 - \delta / n$ ,
+
+$$
+\alpha_ {i} \left(W _ {0}, \epsilon_ {2}\right) \leq \mathbb {E} \left[ \alpha_ {i} \left(W _ {0}, \epsilon_ {2}\right) \right] + \sqrt {\frac {\ln (n / \delta)}{2 m}} \leq \sqrt {\frac {2}{\pi}} \epsilon_ {2} + \sqrt {\frac {\ln (n / \delta)}{2 m}}.
+$$
+
+Applying a union bound finishes the proof.
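The quantity in Lemma 2.4 is easy to probe numerically: for a unit vector $x$ and Gaussian rows, $\langle w_{s,0}, x\rangle$ is standard Gaussian, so the fraction of near-boundary units concentrates near $\sqrt{2/\pi}\,\epsilon_2$ for small $\epsilon_2$. A quick sketch (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
d, m, eps2 = 20, 100000, 0.05

x = rng.normal(size=d)
x /= np.linalg.norm(x)
W0 = rng.normal(size=(m, d))            # rows w_{s,0} ~ N(0, I_d)

# empirical fraction of units with |<w_{s,0}, x>| <= eps2
alpha = np.mean(np.abs(W0 @ x) <= eps2)
bound = np.sqrt(2.0 / np.pi) * eps2
print(alpha, bound)  # the fraction concentrates near sqrt(2/pi) * eps2
```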
+
+To prove Lemma 2.5, we need the following technical result.
+
+Lemma A.1. Consider the random vector $X = (X_{1},\ldots ,X_{m})$ , where $X_{i} = \sigma (Z_{i})$ for some $\sigma : \mathbb{R}\to \mathbb{R}$ that is 1-Lipschitz, and $Z_{i}$ are i.i.d. standard Gaussian r.v.'s. Then the r.v. $\| X\| _2$ is 1-sub-Gaussian, and thus with probability $1 - \delta$ ,
+
+$$
+\left\| X \right\| _ {2} - \mathbb {E} \left[ \left\| X \right\| _ {2} \right] \leq \sqrt {2 \ln (1 / \delta)}.
+$$
+
+Proof. Given $a \in \mathbb{R}^m$ , define
+
+$$
+f (a) = \sqrt {\sum_ {i = 1} ^ {m} \sigma \left(a _ {i}\right) ^ {2}} = \left\| \sigma (a) \right\| _ {2},
+$$
+
+where $\sigma(a)$ is obtained by applying $\sigma$ coordinate-wise to $a$ . For any $a, b \in \mathbb{R}^m$ , by the triangle inequality, we have
+
+$$
+\left| f (a) - f (b) \right| = \left| \left\| \sigma (a) \right\| _ {2} - \left\| \sigma (b) \right\| _ {2} \right| \leq \left\| \sigma (a) - \sigma (b) \right\| _ {2} = \sqrt {\sum_ {i = 1} ^ {m} \left(\sigma \left(a _ {i}\right) - \sigma \left(b _ {i}\right)\right) ^ {2}},
+$$
+
+and by further using the 1-Lipschitz continuity of $\sigma$ , we have
+
+$$
+\left| f (a) - f (b) \right| \leq \sqrt {\sum_ {i = 1} ^ {m} \left(\sigma \left(a _ {i}\right) - \sigma \left(b _ {i}\right)\right) ^ {2}} \leq \sqrt {\sum_ {i = 1} ^ {m} \left(a _ {i} - b _ {i}\right) ^ {2}} = \| a - b \| _ {2}.
+$$
+
+As a result, $f$ is a 1-Lipschitz continuous function w.r.t. the $\ell_2$ norm; hence $f(X)$ is 1-sub-Gaussian, and the bound follows by Gaussian concentration (Wainwright, 2015, Theorem 2.4).
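Lemma A.1 can be sanity-checked numerically: across many independent draws of $Z$, the norm $\|\sigma(Z)\|_2$ stays within $O(1)$ of its mean even though the mean itself grows like $\sqrt{m/2}$ for the ReLU. A rough sketch (sizes are ours):

```python
import numpy as np

rng = np.random.default_rng(8)
m, trials = 4000, 1000

Z = rng.normal(size=(trials, m))
norms = np.linalg.norm(np.maximum(Z, 0), axis=1)  # ||sigma(Z)||_2, one per draw

# the mean grows like sqrt(m/2), yet deviations around it stay O(1)
dev = norms - norms.mean()
print(norms.mean(), np.abs(dev).max())
```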
+
+Proof of Lemma 2.5. Given $1 \leq i \leq n$ , let $h_i = \sigma(W_0 x_i) / \sqrt{m}$ . By Lemma A.1, $\| h_i \|_2$ is sub-Gaussian with variance proxy $1/m$ , and with probability at least $1 - \delta/2n$ over $W_0$ ,
+
+$$
+\left\| h _ {i} \right\| _ {2} - \mathbb {E} \left[ \left\| h _ {i} \right\| _ {2} \right] \leq \sqrt {\frac {2 \ln (2 n / \delta)}{m}} \leq \sqrt {\frac {2 \ln (2 n / \delta)}{25 \ln (2 n / \delta)}} \leq 1 - \frac {\sqrt {2}}{2}.
+$$
+
+On the other hand, by Jensen's inequality,
+
+$$
+\mathbb {E} \left[ \| h _ {i} \| _ {2} \right] \leq \sqrt {\mathbb {E} \left[ \| h _ {i} \| _ {2} ^ {2} \right]} = \frac {\sqrt {2}}{2}.
+$$
+
+As a result, with probability $1 - \delta / 2n$ , it holds that $\| h_i \|_2 \leq 1$ . By a union bound, with probability $1 - \delta / 2$ over $W_0$ , for all $1 \leq i \leq n$ , we have $\| h_i \|_2 \leq 1$ .
+
+For any $W_0$ such that the above event holds, and for any $1 \leq i \leq n$ , the r.v. $\langle h_i, a \rangle$ is sub-Gaussian with variance proxy $\|h_i\|_2^2 \leq 1$ . By Hoeffding's inequality, with probability $1 - \delta / 2n$ over $a$ ,
+
+$$
+\left| \langle h _ {i}, a \rangle \right| = \left| f \left(x _ {i}; W _ {0}, a\right) \right| \leq \sqrt {2 \ln (4 n / \delta)}.
+$$
+
+By a union bound, with probability $1 - \delta / 2$ over $a$ , for all $1 \leq i \leq n$ , we have $|f(x_i; W_0, a)| \leq \sqrt{2\ln(4n / \delta)}$ .
+
+The probability that the above events all happen is at least $(1 - \delta / 2)(1 - \delta / 2) \geq 1 - \delta$ , over $W_0$ and $a$ .
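The resulting $O(\sqrt{\ln n})$ scale of the network outputs at initialization is visible numerically. A toy sketch (sizes and $\delta$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
d, m, n = 15, 2000, 100

X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
W0 = rng.normal(size=(m, d))
a = rng.choice([-1.0, 1.0], size=m)

# f(x_i; W_0, a) = <a, relu(W_0 x_i)> / sqrt(m), for all i at once
F = np.maximum(X @ W0.T, 0) @ a / np.sqrt(m)

delta = 0.01
bound = np.sqrt(2 * np.log(4 * n / delta))
print(np.abs(F).max(), bound)  # max |f| stays below sqrt(2 ln(4n/delta))
```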
+
+Proof of Lemma 2.6. We have
+
+$$
+\left\| W _ {t + 1} - \bar {W} \right\| _ {F} ^ {2} = \left\| W _ {t} - \bar {W} \right\| _ {F} ^ {2} - 2 \eta_ {t} \left\langle \nabla \widehat {\mathcal {R}} (W _ {t}), W _ {t} - \bar {W} \right\rangle + \eta_ {t} ^ {2} \left\| \nabla \widehat {\mathcal {R}} (W _ {t}) \right\| _ {F} ^ {2}. \tag {A.1}
+$$
+
+The first order term of eq. (A.1) can be handled using the convexity of $\ell$ and homogeneity of ReLU:
+
+$$
+\begin{array}{l} \left\langle \nabla \widehat {\mathcal {R}} (W _ {t}), W _ {t} - \overline {{W}} \right\rangle = \frac {1}{n} \sum_ {i = 1} ^ {n} \ell^ {\prime} \left(y _ {i} f _ {i} (W _ {t})\right) y _ {i} \left\langle \nabla f _ {i} (W _ {t}), W _ {t} - \overline {{W}} \right\rangle \\ = \frac {1}{n} \sum_ {i = 1} ^ {n} \ell^ {\prime} \left(y _ {i} f _ {i} (W _ {t})\right) \left(y _ {i} f _ {i} (W _ {t}) - y _ {i} f _ {i} ^ {(t)} (\overline {{W}})\right) \\ \geq \frac {1}{n} \sum_ {i = 1} ^ {n} \left(\ell \left(y _ {i} f _ {i} \left(W _ {t}\right)\right) - \ell \left(y _ {i} f _ {i} ^ {(t)} (\bar {W})\right)\right) = \widehat {\mathcal {R}} \left(W _ {t}\right) - \widehat {\mathcal {R}} ^ {(t)} (\bar {W}). \tag {A.2} \\ \end{array}
+$$
+
+The second-order term of eq. (A.1) can be bounded as follows
+
+$$
+\eta_ {t} ^ {2} \left\| \nabla \hat {\mathcal {R}} (W _ {t}) \right\| _ {F} ^ {2} \leq \eta_ {t} ^ {2} \hat {\mathcal {Q}} (W _ {t}) ^ {2} \leq \eta_ {t} \hat {\mathcal {Q}} (W _ {t}) \leq \eta_ {t} \hat {\mathcal {R}} (W _ {t}), \tag {A.3}
+$$
+
+because $\left\| \nabla \widehat{\mathcal{R}}(W_t)\right\|_F \leq \widehat{\mathcal{Q}}(W_t)$ , and $\eta_t, \widehat{\mathcal{Q}}(W_t) \leq 1$ , and $\widehat{\mathcal{Q}}(W_t) \leq \widehat{\mathcal{R}}(W_t)$ . Combining eqs. (A.1) to (A.3) gives
+
+$$
+\eta_ {t} \widehat {\mathcal {R}} (W _ {t}) \leq \left\| W _ {t} - \bar {W} \right\| _ {F} ^ {2} - \left\| W _ {t + 1} - \bar {W} \right\| _ {F} ^ {2} + 2 \eta_ {t} \widehat {\mathcal {R}} ^ {(t)} (\bar {W}).
+$$
+
+Telescoping gives the other claim.
+
+
+
+Proof of Theorem 2.2. The required width ensures that with probability $1 - 3\delta$ , Lemmas 2.3 to 2.5 hold with $\epsilon_{1} = \gamma^{2} / 8$ and $\epsilon_{2} = 4\lambda / (\gamma \sqrt{m})$ .
+
+Let $t_1$ denote the first step such that there exists $1 \leq s \leq m$ with $\left\| w_{s,t_1} - w_{s,0} \right\|_2 > 4\lambda / (\gamma \sqrt{m})$ . Therefore for any $0 \leq t < t_1$ and any $1 \leq s \leq m$ , it holds that $\left\| w_{s,t} - w_{s,0} \right\|_2 \leq 4\lambda / (\gamma \sqrt{m})$ . In addition, we let $\overline{W} := W_0 + \lambda \overline{U}$ .
+
+We first prove that for any $0 \leq t < t_1$ , it holds that $\widehat{\mathcal{R}}^{(t)}(\overline{W}) \leq \epsilon / 4$ . Since $\ln (1 + r) \leq r$ for any $r > -1$ , the logistic loss satisfies $\ell(z) = \ln (1 + \exp(-z)) \leq \exp(-z)$ , and it is enough to prove that for any $1 \leq i \leq n$ ,
+
+$$
+y _ {i} \left\langle \nabla f _ {i} (W _ {t}), \bar {W} \right\rangle \geq \ln \left(\frac {4}{\epsilon}\right).
+$$
+
+We will split the left hand side into three terms and control them individually:
+
+$$
+y _ {i} \left\langle \nabla f _ {i} \left(W _ {t}\right), \bar {W} \right\rangle = y _ {i} \left\langle \nabla f _ {i} \left(W _ {0}\right), W _ {0} \right\rangle + y _ {i} \left\langle \nabla f _ {i} \left(W _ {t}\right) - \nabla f _ {i} \left(W _ {0}\right), W _ {0} \right\rangle + \lambda y _ {i} \left\langle \nabla f _ {i} \left(W _ {t}\right), \bar {U} \right\rangle . \tag {A.4}
+$$
+
+- The first term of eq. (A.4) can be controlled using Lemma 2.5:
+
+$$
+\left| y _ {i} \left\langle \nabla f _ {i} \left(W _ {0}\right), W _ {0} \right\rangle \right| \leq \sqrt {2 \ln (4 n / \delta)}. \tag {A.5}
+$$
+
+- The second term of eq. (A.4) can be written as
+
+$$
+y _ {i} \left\langle \nabla f _ {i} (W _ {t}) - \nabla f _ {i} (W _ {0}), W _ {0} \right\rangle = y _ {i} \frac {1}{\sqrt {m}} \sum_ {s = 1} ^ {m} a _ {s} \left(\mathbb {1} \left[ \left\langle w _ {s, t}, x _ {i} \right\rangle > 0 \right] - \mathbb {1} \left[ \left\langle w _ {s, 0}, x _ {i} \right\rangle > 0 \right]\right) \left\langle w _ {s, 0}, x _ {i} \right\rangle .
+$$
+
+Let $S_{c} \coloneqq \left\{s \mid \mathbb{1}\left[\langle w_{s,t}, x_{i} \rangle > 0\right] - \mathbb{1}\left[\langle w_{s,0}, x_{i} \rangle > 0\right] \neq 0, 1 \leq s \leq m\right\}$ . Note that $s \in S_{c}$ implies
+
+$$
+\Big | \left\langle w _ {s, 0}, x _ {i} \right\rangle \Big | \leq \Big | \left\langle w _ {s, t} - w _ {s, 0}, x _ {i} \right\rangle \Big | \leq \Big \| w _ {s, t} - w _ {s, 0} \Big \| _ {2} \| x _ {i} \| _ {2} = \Big \| w _ {s, t} - w _ {s, 0} \Big \| _ {2} \leq 4 \lambda / (\gamma \sqrt {m}) = \epsilon_ {2}.
+$$
+
+Therefore Lemma 2.4 ensures that
+
+$$
| S _ {c} | \leq \left| \left\{s \mid | \langle w _ {s, 0}, x _ {i} \rangle | \leq \epsilon_ {2} \right\} \right| \leq m \left(\frac {4 \lambda}{\gamma \sqrt {m}} + \frac {\epsilon_ {1}}{2}\right) = m \left(\frac {4 \lambda}{\gamma \sqrt {m}} + \frac {\gamma^ {2}}{16}\right),
+$$
+
+and thus
+
+$$
\left| y _ {i} \left\langle \nabla f _ {i} \left(W _ {t}\right) - \nabla f _ {i} \left(W _ {0}\right), W _ {0} \right\rangle \right| \leq \frac {1}{\sqrt {m}} \cdot | S _ {c} | \cdot \frac {4 \lambda}{\gamma \sqrt {m}} \leq \frac {16 \lambda^ {2}}{\gamma^ {2} \sqrt {m}} + \frac {\lambda \gamma}{4} \leq \frac {\lambda \gamma}{2}, \tag {A.6}
+$$
+
+where in the last step we use the condition that $m \geq 4096\lambda^2/\gamma^6$ .
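The role of the width condition can be checked numerically. The following sketch (with arbitrary placeholder values for $\gamma$ and $\lambda$ , not tied to any particular instance) verifies that at the threshold $m = 4096\lambda^2 /\gamma^6$ the first term of eq. (A.6) equals $\lambda \gamma /4$ exactly, so the bound $\lambda \gamma /2$ holds for any larger width:

```python
import math

# Arbitrary placeholder values for the margin gamma and the norm bound lambda.
gamma, lam = 0.1, 5.0

# The width threshold from the statement of Theorem 2.2.
m = 4096 * lam**2 / gamma**6

# At m = 4096 lam^2 / gamma^6, the first term of eq. (A.6) equals lam * gamma / 4
# exactly, so it is at most lam * gamma / 4 for any larger width.
first_term = 16 * lam**2 / (gamma**2 * math.sqrt(m))
assert math.isclose(first_term, lam * gamma / 4)

# Hence the full bound in eq. (A.6): first_term + lam * gamma / 4 <= lam * gamma / 2.
assert first_term + lam * gamma / 4 <= lam * gamma / 2 + 1e-12
```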
+
+- The third term of eq. (A.4) can be bounded as follows: by Lemma 2.3,
+
+$$
+\begin{array}{l} y _ {i} \left\langle \nabla f _ {i} \left(W _ {t}\right), \bar {U} \right\rangle = y _ {i} \left\langle \nabla f _ {i} \left(W _ {0}\right), \bar {U} \right\rangle + y _ {i} \left\langle \nabla f _ {i} \left(W _ {t}\right) - \nabla f _ {i} \left(W _ {0}\right), \bar {U} \right\rangle \\ \geq \gamma - \epsilon_ {1} + y _ {i} \left\langle \nabla f _ {i} (W _ {t}) - \nabla f _ {i} (W _ {0}), \bar {U} \right\rangle . \\ \end{array}
+$$
+
+In addition,
+
+$$
\begin{array}{l} y _ {i} \left\langle \nabla f _ {i} (W _ {t}) - \nabla f _ {i} (W _ {0}), \overline {{U}} \right\rangle = y _ {i} \frac {1}{m} \sum_ {s = 1} ^ {m} \left(\mathbb {1} \left[ \left\langle w _ {s, t}, x _ {i} \right\rangle > 0 \right] - \mathbb {1} \left[ \left\langle w _ {s, 0}, x _ {i} \right\rangle > 0 \right]\right) \left\langle \bar {v} (w _ {s, 0}), x _ {i} \right\rangle \\ \geq - \frac {1}{m} \cdot | S _ {c} | \geq - \frac {4 \lambda}{\gamma \sqrt {m}} - \frac {\epsilon_ {1}}{2} \geq - \frac {\gamma^ {2}}{16} - \frac {\epsilon_ {1}}{2}, \\ \end{array}
+$$
+
+where we use $m \geq 4096\lambda^2/\gamma^6$ . Therefore,
+
+$$
y _ {i} \left\langle \nabla f _ {i} \left(W _ {t}\right), \bar {U} \right\rangle \geq \gamma - \epsilon_ {1} - \frac {\gamma^ {2}}{16} - \frac {\epsilon_ {1}}{2} = \gamma - \frac {\gamma^ {2}}{4} \geq \frac {3 \gamma}{4}. \tag {A.7}
+$$
+
+Putting eqs. (A.5) to (A.7) into eq. (A.4), we have
+
+$$
+y _ {i} \left\langle \nabla f _ {i} (W _ {t}), \overline {{W}} \right\rangle \geq - \sqrt {2 \ln \left(\frac {4 n}{\delta}\right)} - \frac {\lambda \gamma}{2} + \frac {3 \lambda \gamma}{4} = \frac {\lambda \gamma}{4} - \sqrt {2 \ln \left(\frac {4 n}{\delta}\right)} = \ln \left(\frac {4}{\epsilon}\right),
+$$
+
for the $\lambda$ given in the statement of Theorem 2.2. Consequently, for any $0\leq t < t_1$ , it holds that $\widehat{\mathcal{R}}^{(t)}(\overline{W})\leq \epsilon /4$ .
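As a numerical sanity check (with placeholder values for $\gamma$ , $\epsilon$ , $\delta$ , and $n$ , not from any particular instance), the choice of $\lambda$ makes the last display an identity:

```python
import math

# Placeholder values for the margin, target accuracy, confidence, and sample size.
gamma, eps, delta, n = 0.1, 0.01, 0.01, 1000

# The choice of lambda from the statement of Theorem 2.2.
lam = (math.sqrt(2 * math.log(4 * n / delta)) + math.log(4 / eps)) / (gamma / 4)

# With this lambda, lam * gamma / 4 - sqrt(2 ln(4n/delta)) equals ln(4/eps) exactly,
# which is the margin needed to conclude R-hat^(t)(W-bar) <= eps / 4.
lhs = lam * gamma / 4 - math.sqrt(2 * math.log(4 * n / delta))
assert math.isclose(lhs, math.log(4 / eps))
```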
+
Let $T \coloneqq \lceil 2\lambda^2 / (\eta \epsilon) \rceil$ . The next claim is that $t_1 \geq T$ . To see this, note that Lemma 2.6 ensures
+
+$$
+\left\| W _ {t _ {1}} - \bar {W} \right\| _ {F} ^ {2} \leq \left\| W _ {0} - \bar {W} \right\| _ {F} ^ {2} + 2 \eta \left(\sum_ {t < t _ {1}} \widehat {\mathcal {R}} ^ {(t)} (\bar {W})\right) \leq \lambda^ {2} + \frac {\epsilon}{2} \eta t _ {1}.
+$$
+
Suppose $t_1 < T$ ; then $t_1 \leq 2\lambda^2/(\eta\epsilon)$ , and thus $\left\| W_{t_1} - \overline{W} \right\|_F^2 \leq 2\lambda^2$ . As a result, using $\| \overline{U} \|_F \leq 1$ and the definition of $\overline{W}$ ,
+
+$$
+\begin{array}{l} \sqrt {2} \lambda \geq \left\| W _ {t _ {1}} - \bar {W} \right\| _ {F} \geq \left\langle W _ {t _ {1}} - \bar {W}, \bar {U} \right\rangle = \left\langle W _ {t _ {1}} - W _ {0}, \bar {U} \right\rangle - \left\langle \bar {W} - W _ {0}, \bar {U} \right\rangle \\ \geq \left\langle W _ {t _ {1}} - W _ {0}, \overline {{U}} \right\rangle - \lambda . \\ \end{array}
+$$
+
+Moreover, due to eq. (A.7),
+
+$$
+\begin{array}{l} \left\langle W _ {t _ {1}} - W _ {0}, \bar {U} \right\rangle = - \eta \sum_ {\tau < t _ {1}} \left\langle \nabla \widehat {\mathcal {R}} (W _ {\tau}), \bar {U} \right\rangle = \eta \sum_ {\tau < t _ {1}} \frac {1}{n} \sum_ {i = 1} ^ {n} - \ell^ {\prime} \left(y _ {i} f _ {i} (W _ {\tau})\right) y _ {i} \left\langle \nabla f _ {i} (W _ {\tau}), \bar {U} \right\rangle \\ \geq \eta \sum_ {\tau < t _ {1}} \widehat {\mathcal {Q}} (W _ {\tau}) \frac {3 \gamma}{4}. \\ \end{array}
+$$
+
+As a result,
+
+$$
+\eta \sum_ {\tau < t _ {1}} \widehat {\mathcal {Q}} (W _ {\tau}) \leq \frac {4 (\sqrt {2} + 1) \lambda}{3 \gamma} \leq \frac {4 \lambda}{\gamma}.
+$$
+
+Furthermore, by the triangle inequality, for any $1 \leq s \leq m$
+
+$$
+\begin{array}{l} \left\| w _ {s, t} - w _ {s, 0} \right\| _ {2} \leq \eta \sum_ {\tau < t} \left\| \frac {1}{n} \sum_ {i = 1} ^ {n} \ell^ {\prime} \left(y _ {i} f _ {i} (W _ {\tau})\right) y _ {i} \frac {\partial f _ {i}}{\partial w _ {s , \tau}} \right\| _ {2} \\ \leq \eta \sum_ {\tau < t} \frac {1}{n} \sum_ {i = 1} ^ {n} \left| \ell^ {\prime} \left(y _ {i} f _ {i} \left(W _ {\tau}\right)\right) \right| \cdot \left\| \frac {\partial f _ {i}}{\partial w _ {s , \tau}} \right\| _ {2} \\ \leq \eta \sum_ {\tau < t} \widehat {\mathcal {Q}} (W _ {\tau}) \frac {1}{\sqrt {m}} \\ \leq \eta \sum_ {\tau < t _ {1}} \widehat {\mathcal {Q}} \left(W _ {\tau}\right) \frac {1}{\sqrt {m}} \leq \frac {4 \lambda}{\gamma \sqrt {m}}, \tag {A.8} \\ \end{array}
+$$
+
+which contradicts the definition of $t_1$ . Therefore $t_1 \geq T$ .
+
+Now we are ready to prove the claims of Theorem 2.2. The bound on $\left\| w_{s,t} - w_{s,0}\right\|_2$ follows by repeating the steps in eq. (A.8). The risk guarantee follows from Lemma 2.6:
+
+$$
+\frac {1}{T} \sum_ {t < T} \widehat {\mathcal {R}} (W _ {t}) \leq \frac {\left\| W _ {0} - \overline {{W}} \right\| _ {F} ^ {2}}{\eta T} + \frac {2}{T} \sum_ {t < T} \widehat {\mathcal {R}} ^ {(t)} (\overline {{W}}) \leq \frac {\epsilon}{2} + \frac {\epsilon}{2} = \epsilon .
+$$
+
+
+
+# B OMITTED PROOFS FROM SECTION 3
+
+The proof of Theorem 3.2 is based on Rademacher complexity. Given a sample $S = (z_{1},\ldots ,z_{n})$ (where $z_{i} = (x_{i},y_{i})$ ) and a function class $\mathcal{H}$ , the Rademacher complexity of $\mathcal{H}$ on $S$ is defined as
+
+$$
+\operatorname {R a d} \left(\mathcal {H} \circ S\right) := \frac {1}{n} \mathbb {E} _ {\epsilon \sim \{- 1, + 1 \} ^ {n}} \left[ \sup _ {h \in \mathcal {H}} \sum_ {i = 1} ^ {n} \epsilon_ {i} h \left(z _ {i}\right) \right].
+$$
+
+We will use the following general result.
+
+Lemma B.1. (Shalev-Shwartz & Ben-David, 2014, Theorem 26.5) If $h(z) \in [a,b]$ , then with probability $1 - \delta$ ,
+
+$$
+\sup _ {h \in \mathcal {H}} \left(\mathbb {E} _ {z \sim \mathcal {D}} [ h (z) ] - \frac {1}{n} \sum_ {i = 1} ^ {n} h (z _ {i})\right) \leq 2 \operatorname {R a d} (\mathcal {H} \circ S) + 3 (b - a) \sqrt {\frac {\ln (2 / \delta)}{2 n}}.
+$$
+
+We also need the following contraction lemma. Consider a feature sample $X = (x_{1},\ldots ,x_{n})$ and a function class $\mathcal{F}$ on $X$ . For each $1\leq i\leq n$ , let $g_{i}:\mathbb{R}\to \mathbb{R}$ denote a $K$ -Lipschitz function. Let $g\circ \mathcal{F}$ denote the class of functions which map $x_{i}$ to $g_{i}(f(x_{i}))$ for some $f\in \mathcal{F}$ .
+
+Lemma B.2. (Shalev-Shwartz & Ben-David, 2014, Lemma 26.9) $\operatorname{Rad}(g \circ \mathcal{F} \circ X) \leq K\operatorname{Rad}(\mathcal{F} \circ X)$ .
+
+To prove Theorem 3.2, we need one more Rademacher complexity bound. Given a fixed initialization $(W_0,a)$ , consider the following classes:
+
+$$
\mathcal {W} _ {\rho} := \left\{W \in \mathbb {R} ^ {m \times d} \left| \left\| w _ {s} - w _ {s, 0} \right\| _ {2} \leq \rho \text { for any } 1 \leq s \leq m \right. \right\},
+$$
+
+and
+
+$$
+\mathcal {F} _ {\rho} := \left\{x \mapsto f (x; W, a) \mid W \in \mathcal {W} _ {\rho} \right\}.
+$$
+
+Given a feature sample $X$ , the following Lemma B.3 controls the Rademacher complexity of $\mathcal{F}_{\rho} \circ X$ . A similar version was given in (Liang, 2016, Theorem 43), and the proof is similar to the proof of (Bartlett & Mendelson, 2002, Theorem 18) which also pushes the supremum through and handles each hidden unit separately.
+
+Lemma B.3. $\operatorname{Rad}\left(\mathcal{F}_{\rho} \circ X\right) \leq \rho \sqrt{m / n}$ .
+
+Proof of Lemma B.3. We have
+
+$$
\begin{array}{l} \mathbb {E} _ {\epsilon} \left[ \sup _ {W \in \mathcal {W} _ {\rho}} \sum_ {i = 1} ^ {n} \epsilon_ {i} f (x _ {i}; W, a) \right] = \mathbb {E} _ {\epsilon} \left[ \sup _ {W \in \mathcal {W} _ {\rho}} \sum_ {i = 1} ^ {n} \epsilon_ {i} \sum_ {s = 1} ^ {m} \frac {1}{\sqrt {m}} a _ {s} \sigma (\langle w _ {s}, x _ {i} \rangle) \right] \\ = \mathbb {E} _ {\epsilon} \left[ \frac {1}{\sqrt {m}} \sup _ {W \in \mathcal {W} _ {\rho}} \sum_ {s = 1} ^ {m} \sum_ {i = 1} ^ {n} \epsilon_ {i} a _ {s} \sigma \left(\langle w _ {s}, x _ {i} \rangle\right) \right] \\ = \mathbb {E} _ {\epsilon} \left[ \frac {1}{\sqrt {m}} \sum_ {s = 1} ^ {m} \left(\sup _ {\left\| w _ {s} - w _ {s, 0} \right\| _ {2} \leq \rho} \sum_ {i = 1} ^ {n} \epsilon_ {i} a _ {s} \sigma (\langle w _ {s}, x _ {i} \rangle)\right) \right] \\ = \frac {1}{\sqrt {m}} \sum_ {s = 1} ^ {m} \mathbb {E} _ {\epsilon} \left[ \sup _ {\left\| w _ {s} - w _ {s, 0} \right\| _ {2} \leq \rho} \sum_ {i = 1} ^ {n} \epsilon_ {i} a _ {s} \sigma (\langle w _ {s}, x _ {i} \rangle) \right]. \\ \end{array}
+$$
+
+Note that for any $1 \leq s \leq m$ , the mapping $z \mapsto a_s \sigma(z)$ is 1-Lipschitz, and thus Lemma B.2 gives
+
+$$
\begin{array}{l} \mathbb {E} _ {\epsilon} \left[ \sup _ {W \in \mathcal {W} _ {\rho}} \sum_ {i = 1} ^ {n} \epsilon_ {i} f (x _ {i}; W, a) \right] \leq \frac {1}{\sqrt {m}} \sum_ {s = 1} ^ {m} \mathbb {E} _ {\epsilon} \left[ \sup _ {\| w _ {s} - w _ {s, 0} \| _ {2} \leq \rho} \sum_ {i = 1} ^ {n} \epsilon_ {i} a _ {s} \sigma (\langle w _ {s}, x _ {i} \rangle) \right] \\ \leq \frac {1}{\sqrt {m}} \sum_ {s = 1} ^ {m} \mathbb {E} _ {\epsilon} \left[ \sup _ {\left\| w _ {s} - w _ {s, 0} \right\| _ {2} \leq \rho} \sum_ {i = 1} ^ {n} \epsilon_ {i} \langle w _ {s}, x _ {i} \rangle \right]. \\ \end{array}
+$$
+
+Invoking the Rademacher complexity of linear classifiers (Shalev-Shwartz & Ben-David, 2014, Lemma 26.10) then gives
+
+$$
+\mathrm {R a d} \left(\mathcal {F} _ {\rho} \circ X\right) = \frac {1}{n} \mathbb {E} _ {\epsilon} \left[ \sup _ {W \in \mathcal {W} _ {\rho}} \sum_ {i = 1} ^ {n} \epsilon_ {i} f (x _ {i}; W, a) \right] \leq \frac {\rho \sqrt {m}}{\sqrt {n}}.
+$$
+
+
+
+Now we are ready to prove the main generalization result Theorem 3.2.
+
Proof. Fix an initialization $(W_0, a)$ , and let $\mathcal{H} := \left\{(x, y) \mapsto -\ell' (y f(x)) \mid f \in \mathcal{F}_{\rho}\right\}$ . Since $h(z) \in [0,1]$ for any $h \in \mathcal{H}$ and any $z$ , Lemma B.1 ensures that with probability $1 - \delta$ over the data sampling,
+
+$$
+\sup _ {h \in \mathcal {H}} \left(\mathbb {E} _ {z \sim \mathcal {D}} [ h (z) ] - \frac {1}{n} \sum_ {i = 1} ^ {n} h (z _ {i})\right) = \sup _ {W \in \mathcal {W} _ {\rho}} \left(\mathcal {Q} (W) - \widehat {\mathcal {Q}} (W)\right) \leq 2 \mathrm {R a d} (\mathcal {H} \circ S) + 3 \sqrt {\frac {\ln (2 / \delta)}{2 n}}.
+$$
+
+Since for each $1 \leq i \leq n$ , the mapping $z \mapsto -\ell'(y_i z)$ is $(1/4)$ -Lipschitz, Lemma B.2 further ensures that $\operatorname{Rad}(\mathcal{H} \circ S) \leq \operatorname{Rad}\left(\mathcal{F}_{\rho} \circ X\right)/4$ , and thus
+
+$$
+\sup _ {W \in \mathcal {W} _ {\rho}} \left(\mathcal {Q} (W) - \widehat {\mathcal {Q}} (W)\right) \leq \frac {\rho \sqrt {m}}{2 \sqrt {n}} + 3 \sqrt {\frac {\ln (2 / \delta)}{2 n}}. \tag {B.1}
+$$
+
+On the other hand, Theorem 2.2 ensures that under the conditions of Theorem 3.2, for any fixed dataset, with probability $1 - 3\delta$ over the random initialization, we have
+
+$$
\widehat {\mathcal {Q}} (W _ {k}) \leq \widehat {\mathcal {R}} (W _ {k}) \leq \epsilon , \quad \text {and} \quad \left\| w _ {s, k} - w _ {s, 0} \right\| _ {2} \leq \frac {4 \lambda}{\gamma \sqrt {m}}.
+$$
+
+As a result, invoking eq. (B.1) with $\rho = 4\lambda / (\gamma \sqrt{m})$ , with probability $1 - 4\delta$ over the random initialization and data sampling,
+
+$$
+\mathcal {Q} (W _ {k}) \leq \widehat {\mathcal {Q}} (W _ {k}) + \frac {2 \lambda}{\gamma \sqrt {n}} + 3 \sqrt {\frac {\ln (2 / \delta)}{2 n}} \leq \epsilon + \frac {8 \left(\sqrt {2 \ln (4 n / \delta)} + \ln (4 / \epsilon)\right)}{\gamma^ {2} \sqrt {n}} + 3 \sqrt {\frac {\ln (2 / \delta)}{2 n}}.
+$$
+
+Invoking $P_{(x,y)\sim \mathcal{D}}\left(yf(x;W,a)\leq 0\right)\leq 2\mathcal{Q}(W)$ finishes the proof.
+
+
+
+# C OMITTED PROOFS FROM SECTION 4
+
Proof of Lemma 4.2. Recalling that $\left\| \nabla f_t(W_t)\right\| _F\leq 1$ , we have
+
+$$
+\left\| W _ {t + 1} - \bar {W} \right\| _ {F} ^ {2} \leq \left\| W _ {t} - \bar {W} \right\| _ {F} ^ {2} - 2 \eta \ell^ {\prime} \left(y _ {t} f _ {t} (W _ {t})\right) y _ {t} \left\langle \nabla f _ {t} (W _ {t}), W _ {t} - \bar {W} \right\rangle + \eta^ {2} \left(\ell^ {\prime} \left(y _ {t} f _ {t} (W _ {t})\right)\right) ^ {2}. \tag {C.1}
+$$
+
+Similar to the proof of Lemma 2.6, the first order term of eq. (C.1) can be handled using the convexity of $\ell$ and homogeneity of ReLU as follows
+
+$$
+\ell^ {\prime} \left(y _ {t} f _ {t} \left(W _ {t}\right)\right) y _ {t} \left\langle \nabla f _ {t} \left(W _ {t}\right), W _ {t} - \bar {W} \right\rangle \geq \mathcal {R} _ {t} \left(W _ {t}\right) - \mathcal {R} _ {t} (\bar {W}), \tag {C.2}
+$$
+
+and the second-order term of eq. (C.1) can be bounded as follows
+
+$$
+\eta^ {2} \left(\ell^ {\prime} \left(y _ {t} f _ {t} \left(W _ {t}\right)\right)\right) ^ {2} \leq - \eta \ell^ {\prime} \left(y _ {t} f _ {t} \left(W _ {t}\right)\right) \leq \eta \ell \left(y _ {t} f _ {t} \left(W _ {t}\right)\right) = \eta \mathcal {R} _ {t} \left(W _ {t}\right), \tag {C.3}
+$$
+
+since $\eta, -\ell' \leq 1$ and $-\ell' \leq \ell$ . Combining eqs. (C.1) to (C.3) gives
+
+$$
+\eta \mathcal {R} _ {t} (W _ {t}) \leq \left\| W _ {t} - \overline {{W}} \right\| _ {F} ^ {2} - \left\| W _ {t + 1} - \overline {{W}} \right\| _ {F} ^ {2} + 2 \eta \mathcal {R} _ {t} (\overline {{W}}).
+$$
+
+Telescoping gives the claim.
+
+
+
+With Lemma 4.2, we give the following result, which is an extension of Theorem 2.2 to the SGD setting.
+
+Lemma C.1. Under Assumption 3.1, given any $\epsilon \in (0,1)$ , any $\delta \in (0,1/3)$ , and any positive integer $n_0$ , let
+
+$$
\lambda := \frac {\sqrt {2 \ln (4 n _ {0} / \delta)} + \ln (4 / \epsilon)}{\gamma / 4}, \quad \text {and} \quad M := \frac {4096 \lambda^ {2}}{\gamma^ {6}}.
+$$
+
For any $m \geq M$ and any constant step size $\eta \leq 1$ , if $n_0 \geq n := \lceil 2\lambda^2 / (\eta \epsilon) \rceil$ , then with probability $1 - 3\delta$ ,
+
+$$
+\frac {1}{n} \sum_ {i < n} \mathcal {Q} _ {i} (W _ {i}) \leq \epsilon .
+$$
+
+Proof. We first sample $n_0$ data examples $(x_0, y_0), \ldots, (x_{n_0 - 1}, y_{n_0 - 1})$ , and then feed $(x_i, y_i)$ to SGD at step $i$ . We only consider the first $n_0$ steps.
+
+The proof is similar to the proof of Theorem 2.2. Let $n_1$ denote the first step before $n_0$ such that there exists some $1 \leq s \leq m$ with $\| w_{s,n_1} - w_{s,0} \|_2 > 4\lambda / (\gamma \sqrt{m})$ . If such a step does not exist, let $n_1 = n_0$ .
+
Let $\overline{W} := W_0 + \lambda \overline{U}$ . In exactly the same way as in the proof of Theorem 2.2, we can show that with probability $1 - 3\delta$ , for any $0 \leq i < n_1$ ,
+
+$$
y _ {i} \left\langle \nabla f _ {i} (W _ {i}), \overline {{W}} \right\rangle \geq \ln \left(\frac {4}{\epsilon}\right), \quad \text {and thus} \quad \mathcal {R} _ {i} (\overline {{W}}) \leq \epsilon / 4.
+$$
+
Now consider $n \coloneqq \lceil 2\lambda^2 /(\eta \epsilon) \rceil$ . Using Lemma 4.2, in the same way as the proof of Theorem 2.2 (replacing $\widehat{\mathcal{Q}} (W_{\tau})$ with $\mathcal{Q}_i(W_i)$ , etc.), we can show that $n \leq n_1$ . Then invoking Lemma 4.2 again, we get
+
+$$
+\frac {1}{n} \sum_ {i < n} \mathcal {Q} _ {i} (W _ {i}) \leq \frac {1}{n} \sum_ {i < n} \mathcal {R} _ {i} (W _ {i}) \leq \frac {\left\| W _ {0} - \overline {{W}} \right\| _ {F} ^ {2}}{\eta n} + \frac {2}{n} \sum_ {i < n} \mathcal {R} _ {i} (\overline {{W}}) \leq \frac {\epsilon}{2} + \frac {\epsilon}{2} = \epsilon .
+$$
+
+
+
+Next we prove Lemma 4.3. We need the following martingale Bernstein bound.
+
Lemma C.2. (Beygelzimer et al., 2011, Theorem 1) Let $(M_t, \mathcal{F}_t)_{t \geq 0}$ denote a martingale with $M_0 = 0$ , where $\mathcal{F}_0$ is the trivial $\sigma$ -algebra. Let $(\Delta_t)_{t \geq 1}$ denote the corresponding martingale difference sequence, and let
+
+$$
+V _ {t} := \sum_ {j = 1} ^ {t} \mathbb {E} \left[ \Delta_ {j} ^ {2} \mid \mathcal {F} _ {j - 1} \right]
+$$
+
denote the sequence of conditional variances. If $\Delta_t \leq R$ a.s., then for any $\delta \in (0,1)$ , with probability at least $1 - \delta$ ,
+
+$$
+M _ {t} \leq \frac {V _ {t}}{R} (e - 2) + R \ln \left(\frac {1}{\delta}\right).
+$$
+
+Proof of Lemma 4.3. For any $i \geq 0$ , let $z_i$ denote $(x_i, y_i)$ , and $z_{0,i}$ denote $(z_0, \ldots, z_i)$ . Note that the quantity $\sum_{t < i} \left( \mathcal{Q}(W_t) - \mathcal{Q}_t(W_t) \right)$ is a martingale w.r.t. the filtration $\sigma(z_{0,i-1})$ . The martingale difference sequence is given by $\mathcal{Q}(W_t) - \mathcal{Q}_t(W_t)$ , which satisfies
+
+$$
+\mathcal {Q} \left(W _ {t}\right) - \mathcal {Q} _ {t} \left(W _ {t}\right) = \mathbb {E} _ {(x, y) \sim \mathcal {D}} \left[ - \ell^ {\prime} (y f (x; W _ {t}, a)) \right] + \ell^ {\prime} (y _ {t} f (x _ {t}; W _ {t}, a)) \leq 1, \tag {C.4}
+$$
+
+since $-1\leq \ell^{\prime}\leq 0$ . Moreover, we have
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \left(\mathcal {Q} (W _ {t}) - \mathcal {Q} _ {t} (W _ {t})\right) ^ {2} \mid \sigma (z _ {0, t - 1}) \right] \\ = \mathcal {Q} (W _ {t}) ^ {2} - 2 \mathcal {Q} (W _ {t}) \mathbb {E} \left[ \mathcal {Q} _ {t} (W _ {t}) | \sigma (z _ {0, t - 1}) \right] + \mathbb {E} \left[ \mathcal {Q} _ {t} (W _ {t}) ^ {2} | \sigma (z _ {0, t - 1}) \right] \\ = - \mathcal {Q} \left(W _ {t}\right) ^ {2} + \mathbb {E} \left[ \mathcal {Q} _ {t} \left(W _ {t}\right) ^ {2} \mid \sigma \left(z _ {0, t - 1}\right) \right] \tag {C.5} \\ \leq \mathbb {E} \left[ \mathcal {Q} _ {t} \left(W _ {t}\right) ^ {2} \mid \sigma \left(z _ {0, t - 1}\right) \right] \\ \leq \mathbb {E} \left[ \mathcal {Q} _ {t} (W _ {t}) \mid \sigma (z _ {0, t - 1}) \right] \\ = \mathcal {Q} (W _ {t}). \\ \end{array}
+$$
+
+Invoking Lemma C.2 with eqs. (C.4) and (C.5) gives that with probability $1 - \delta$
+
+$$
+\sum_ {t < i} \left(\mathcal {Q} (W _ {t}) - \mathcal {Q} _ {t} (W _ {t})\right) \leq (e - 2) \sum_ {t < i} \mathcal {Q} (W _ {t}) + \ln \left(\frac {1}{\delta}\right).
+$$
+
Consequently, rearranging and using $1 / (3 - e) \leq 4$ ,
+
+$$
+\sum_ {t < i} \mathcal {Q} (W _ {t}) \leq 4 \sum_ {t < i} \mathcal {Q} _ {t} (W _ {t}) + 4 \ln \left(\frac {1}{\delta}\right).
+$$
+
+
+
+Finally, we prove Theorem 4.1.
+
Proof of Theorem 4.1. Suppose the condition of Lemma C.1 holds. Then for $n = \lceil 2\lambda^2 /(\eta \epsilon) \rceil$ , with probability $1 - 3\delta$ ,
+
+$$
+\frac {1}{n} \sum_ {i < n} \mathcal {Q} _ {i} (W _ {i}) \leq \epsilon .
+$$
+
+Further invoking Lemma 4.3 gives that with probability $1 - 4\delta$
+
+$$
+\frac {1}{n} \sum_ {i < n} \mathcal {Q} (W _ {i}) \leq \frac {4}{n} \sum_ {i < n} \mathcal {Q} _ {i} (W _ {i}) + \frac {4}{n} \ln \left(\frac {1}{\delta}\right) \leq 5 \epsilon .
+$$
+
+Since $P_{(x,y)\sim \mathcal{D}}\left(yf(x;W,a)\leq 0\right)\leq 2\mathcal{Q}(W)$ , we get
+
+$$
\frac {1}{n} \sum_ {i < n} P _ {(x, y) \sim \mathcal {D}} \left(y f (x; W _ {i}, a) \leq 0\right) \leq 10 \epsilon .
+$$
+
+For the condition of Lemma C.1 to hold, it is enough to let
+
+$$
+n _ {0} = \Theta \left(\frac {\ln (1 / \delta)}{\eta \gamma^ {2} \epsilon^ {2}}\right),
+$$
+
+which gives
+
+$$
M = \Theta \left(\frac {\ln (1 / \delta) + \ln (1 / \epsilon) ^ {2}}{\gamma^ {8}}\right) \quad \text {and} \quad n = \Theta \left(\frac {\ln (1 / \delta) + \ln (1 / \epsilon) ^ {2}}{\gamma^ {2} \epsilon}\right).
+$$
+
+
+
+# D OMITTED PROOFS FROM SECTION 5
+
+Proof of Proposition 5.1. Define $f:\mathcal{H}\to \mathbb{R}$ by
+
+$$
+f (w) := \frac {1}{2} \int \| w (z) \| _ {2} ^ {2} \mathrm {d} \mu_ {\mathcal {N}} (z) = \frac {1}{2} \| w \| _ {\mathcal {H}} ^ {2}.
+$$
+
+It holds that $f$ is continuous, and $f^*$ has the same form. Define $g: \mathbb{R}^n \to \mathbb{R}$ by
+
+$$
+g (p) := \max _ {1 \leq i \leq n} p _ {i},
+$$
+
+with conjugate
+
+$$
g ^ {*} (q) = \left\{ \begin{array}{l l} 0, & \text {if } q \in \Delta_ {n}, \\ + \infty , & \text {otherwise.} \end{array} \right.
+$$
+
Finally, define the linear mapping $A:\mathcal{H}\to \mathbb{R}^n$ by $(Aw)_i = y_i\langle w,\phi_i\rangle_{\mathcal{H}}$ .
+
+Since $f, f^{*}, g$ and $g^{*}$ are lower semi-continuous, and $\mathbf{dom}g - A\mathbf{dom}f = \mathbb{R}^n$ , and $\mathbf{dom}f^{*} - A^{*}\mathbf{dom}g^{*} = \mathcal{H}$ , Fenchel duality may be applied in each direction (Borwein & Zhu, 2005, Theorem 4.4.3), and ensures that
+
+$$
\inf _ {w \in \mathcal {H}} \left(f (w) + g (A w)\right) = \sup _ {q \in \mathbb {R} ^ {n}} \left(- f ^ {*} \left(A ^ {*} q\right) - g ^ {*} (- q)\right),
$$

with optimal primal-dual solutions $(\bar{w},\bar{q})$ . Moreover,
+
+$$
\begin{array}{l} \inf _ {w \in \mathcal {H}} \left(f (w) + g (A w)\right) = \inf _ {w \in \mathcal {H}, u \in \mathbb {R} ^ {n}} \sup _ {q \in \mathbb {R} ^ {n}} \left(f (w) + g (A w + u) + \langle q, u \rangle\right) \\ \geq \sup _ {q \in \mathbb {R} ^ {n}} \inf _ {w \in \mathcal {H}, u \in \mathbb {R} ^ {n}} \left(f (w) + g (A w + u) + \langle q, u \rangle\right) \\ = \sup _ {q \in \mathbb {R} ^ {n}} \inf _ {w \in \mathcal {H}, u \in \mathbb {R} ^ {n}} \left(\left(f (w) - \left\langle A ^ {*} q, w \right\rangle_ {\mathcal {H}}\right) + \left(g (A w + u) - \langle - q, A w + u \rangle\right)\right) \\ = \sup _ {q \in \mathbb {R} ^ {n}} \left(- f ^ {*} (A ^ {*} q) - g ^ {*} (- q)\right). \\ \end{array}
+$$
+
+By strong duality, the inequality holds with equality. It follows that
+
+$$
+\bar{w} = A^{*}\bar{q},\quad \text{and}\quad \mathbf{supp}(-\bar{q})\subset \operatorname *{arg max}_{1\leq i\leq n}(A\bar{w})_{i}.
+$$
+
+Now let us look at the dual optimization problem. It is clear that
+
+$$
+\sup _ {q \in \mathbb {R} ^ {n}} \left(- f ^ {*} (A ^ {*} q) - g ^ {*} (- q)\right) = - \inf _ {q \in \Delta_ {n}} f ^ {*} (A ^ {*} q).
+$$
+
+In addition, we have
+
+$$
+\begin{array}{l} f ^ {*} (A ^ {*} q) = \frac {1}{2} \int \left\| \sum_ {i = 1} ^ {n} q _ {i} y _ {i} \phi_ {i} (z) \right\| _ {2} ^ {2} \mathrm {d} \mu_ {\mathcal {N}} (z) \\ = \frac {1}{2} \int \sum_ {i, j = 1} ^ {n} q _ {i} q _ {j} y _ {i} y _ {j} \left\langle \phi_ {i} (z), \phi_ {j} (z) \right\rangle \mathrm {d} \mu_ {\mathcal {N}} (z) \\ = \frac {1}{2} \sum_ {i, j = 1} ^ {n} q _ {i} q _ {j} y _ {i} y _ {j} \int \left\langle \phi_ {i} (z), \phi_ {j} (z) \right\rangle \mathrm {d} \mu_ {\mathcal {N}} (z) \\ = \frac {1}{2} \sum_ {i, j = 1} ^ {n} q _ {i} q _ {j} y _ {i} y _ {j} K _ {1} (i, j) = \frac {1}{2} (q \odot y) ^ {\top} K _ {1} (q \odot y), \\ \end{array}
+$$
+
+and thus $f^{*}(A^{*}\bar{q}) = \gamma_{1}^{2} / 2$ . Since $\bar{w} = A^{*}\bar{q}$ , we have that $\| \bar{w}\|_{\mathcal{H}} = \gamma_1$ . In addition,
+
+$$
+g (A \bar {w}) = - f ^ {*} (A ^ {*} \bar {q}) - f (\bar {w}) = - \gamma_ {1} ^ {2},
+$$
+
+and thus $-\bar{w}$ has margin $\gamma_1^2$ . Moreover, we have
+
+$$
+\bar {w} (z) = \sum_ {i = 1} ^ {n} \bar {q} _ {i} y _ {i} \phi_ {i} (z) = \sum_ {i = 1} ^ {n} \bar {q} _ {i} y _ {i} x _ {i} \mathbb {1} [ \langle z, x _ {i} \rangle > 0 ],
+$$
+
+and thus $\left\| \bar{w}(z) \right\|_2 \leq 1$ . Therefore, $\hat{v} = -\bar{w} / \gamma_1$ satisfies all requirements of Proposition 5.1.
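The Fenchel duality argument can be illustrated on a small finite-dimensional instance. The sketch below (entirely made-up data with $n = 2$ , standing in for the Hilbert-space problem) solves the dual over the simplex by grid search, forms the primal solution from the dual one, and checks that the optimal values agree:

```python
import numpy as np

# Tiny synthetic instance (made-up numbers, n = 2 examples, features in R^2),
# standing in for the infinite-dimensional problem in the proof.
phi = np.array([[1.0, 0.2], [0.9, -0.3]])
y = np.array([1.0, -1.0])

# Dual: minimize f*(A* q) = 0.5 * || sum_i q_i y_i phi_i ||^2 over the simplex q = (t, 1 - t).
t = np.linspace(0.0, 1.0, 100001)
V = np.outer(t, y[0] * phi[0]) + np.outer(1.0 - t, y[1] * phi[1])
dual_obj = 0.5 * np.sum(V**2, axis=1)
k = int(np.argmin(dual_obj))
dual_value = -dual_obj[k]

# Primal: minimize f(w) + g(Aw) = 0.5 * ||w||^2 + max_i y_i <w, phi_i>,
# by brute force over a grid (the optimum has small norm for this instance).
g = np.linspace(-1.0, 1.0, 1001)
W = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
margins = (W @ phi.T) * y               # entry (j, i) equals y_i <w_j, phi_i>
primal_obj = 0.5 * np.sum(W**2, axis=1) + margins.max(axis=1)
primal_value = primal_obj.min()

# Strong duality: the optimal values agree (up to grid resolution), and the
# primal optimum is w_bar = A* q_bar = -(sum_i p_i y_i phi_i) with p in the simplex.
assert abs(primal_value - dual_value) < 1e-2
w_bar = -V[k]
primal_at_w_bar = 0.5 * np.sum(w_bar**2) + ((w_bar @ phi.T) * y).max()
assert abs(primal_at_w_bar - dual_value) < 1e-3
```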
+
+
+
+Proof of Proposition 5.2. Let $\hat{q}$ denote the uniform probability vector $(1/n, \ldots, 1/n)$ . Note that
+
+$$
\begin{array}{l} \mathbb {E} _ {\epsilon \sim \mathrm {unif} \left(\{- 1, + 1 \} ^ {n}\right)} \left[ (\hat {q} \odot \epsilon) ^ {\top} K _ {1} (\hat {q} \odot \epsilon) \right] = \mathbb {E} _ {\epsilon \sim \mathrm {unif} \left(\{- 1, + 1 \} ^ {n}\right)} \left[ \sum_ {i, j = 1} ^ {n} \frac {1}{n ^ {2}} \epsilon_ {i} \epsilon_ {j} K _ {1} (x _ {i}, x _ {j}) \right] \\ = \frac {1}{n ^ {2}} \sum_ {i, j = 1} ^ {n} \mathbb {E} _ {\epsilon \sim \mathrm {unif} \left(\{- 1, + 1 \} ^ {n}\right)} \left[ \epsilon_ {i} \epsilon_ {j} K _ {1} (x _ {i}, x _ {j}) \right] \\ = \frac {1}{n ^ {2}} \sum_ {i = 1} ^ {n} K _ {1} \left(x _ {i}, x _ {i}\right) = \frac {1}{2 n}. \\ \end{array}
+$$
+
Since $(\hat{q} \odot \epsilon)^{\top} K_{1}(\hat{q} \odot \epsilon) \geq 0$ for any $\epsilon$ , Markov's inequality gives that with probability 0.9, it holds that $(\hat{q} \odot \epsilon)^{\top} K_{1}(\hat{q} \odot \epsilon) \leq 5 / n$ , and thus $\gamma_{1} \leq \sqrt{5 / n}$ .
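The expectation computation only uses $\mathbb{E}[\epsilon_i \epsilon_j] = \mathbb{1}[i = j]$ and $K_1(x_i, x_i) = 1/2$ ; it can be checked by exhaustive enumeration for any symmetric stand-in matrix with diagonal $1/2$ (the matrix below is arbitrary, not the actual kernel):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Stand-in for K_1: any symmetric matrix whose diagonal entries equal 1/2.
K = rng.normal(size=(n, n))
K = (K + K.T) / 2
np.fill_diagonal(K, 0.5)

q_hat = np.full(n, 1.0 / n)

# Exact expectation over all 2^n sign vectors epsilon.
total = 0.0
for eps in itertools.product([-1.0, 1.0], repeat=n):
    v = q_hat * np.asarray(eps)
    total += v @ K @ v
expectation = total / 2**n

# Cross terms cancel, leaving (1/n^2) * trace(K) = 1/(2n).
assert np.isclose(expectation, 1.0 / (2 * n))
```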
+
+Proof of Proposition 5.3. By symmetry, we only need to consider an $(x,y)$ where $(x_{1},x_{2},y) = (1 / \sqrt{d - 1},0,1)$ . Let $z_{p,q}$ denote $(z_{p},z_{p + 1},\ldots ,z_{q})$ , and similarly define $x_{p,q}$ . We have
+
+$$
\begin{array}{l} y \int \left\langle \bar {v} (z), x \right\rangle \mathbb {1} [ \langle z, x \rangle > 0 ] \mathrm {d} \mu_ {\mathcal {N}} (z) \\ = y \int \left(\int \left\langle \bar {v} (z), x \right\rangle \mathbb {1} [ \langle z, x \rangle > 0 ] \mathrm {d} \mu_ {\mathcal {N}} \left(z _ {3, d}\right)\right) \mathrm {d} \mu_ {\mathcal {N}} \left(z _ {1, 2}\right) \tag {D.1} \\ = y \int \left\langle \bar {v} (z) _ {1, 2}, x _ {1, 2} \right\rangle \left(\int \mathbb {1} \left[ \left\langle z _ {1, 2}, x _ {1, 2} \right\rangle + \left\langle z _ {3, d}, x _ {3, d} \right\rangle > 0 \right] \mathrm {d} \mu_ {\mathcal {N}} (z _ {3, d})\right) \mathrm {d} \mu_ {\mathcal {N}} (z _ {1, 2}) \tag {D.2} \\ = \sum_ {i = 1} ^ {4} y \int \left\langle \bar {v} (z) _ {1, 2}, x _ {1, 2} \right\rangle \left(\int \mathbb {1} \left[ \left\langle z _ {1, 2}, x _ {1, 2} \right\rangle + \left\langle z _ {3, d}, x _ {3, d} \right\rangle > 0 \right] \mathrm {d} \mu_ {\mathcal {N}} \left(z _ {3, d}\right)\right) \mathbb {1} \left[ z _ {1, 2} \in A _ {i} \right] \mathrm {d} \mu_ {\mathcal {N}} \left(z _ {1, 2}\right), \tag {D.3} \\ \end{array}
+$$
+
where eq. (D.1) is due to the independence between $z_{1,2}$ and $z_{3,d}$ , and in eq. (D.2) we use the fact that $\bar{v}(z)_{1,2}$ only depends on $z_{1,2}$ while the coordinates $\bar{v}(z)_{3,d}$ are all zero. Since $\langle \bar{v}(z)_{1,2}, x_{1,2} \rangle = 0$ for $z_{1,2} \in A_2 \cup A_4$ , we only need to consider $A_1$ and $A_3$ in eq. (D.3). For simplicity, we will denote $z_{1,2}$ by $p \in \mathbb{R}^2$ , and $\bar{v}(z)_{1,2}$ by $\bar{v}(p)$ , and $z_{3,d}$ by $q \in \mathbb{R}^{d-2}$ .
+
+For any nonzero $p \in A_1$ , we have $-p \in A_3$ , and $\langle \bar{v}(p), x_{1,2} \rangle = 1 / \sqrt{d - 1}$ . Therefore
+
+$$
+\begin{array}{l} y \left\langle \bar {v} (p), x _ {1, 2} \right\rangle \left(\int \mathbb {1} \left[ \left\langle p, x _ {1, 2} \right\rangle + \left\langle q, x _ {3, d} \right\rangle > 0 \right] \mathrm {d} \mu_ {\mathcal {N}} (q)\right) \\ + y \left\langle \bar {v} (- p), x _ {1, 2} \right\rangle \left(\int \mathbb {1} \left[ \left\langle - p, x _ {1, 2} \right\rangle + \left\langle q, x _ {3, d} \right\rangle > 0 \right] \mathrm {d} \mu_ {\mathcal {N}} (q)\right) \\ = \frac {1}{\sqrt {d - 1}} \int \left(\mathbb {1} \left[ \frac {p _ {1}}{\sqrt {d - 1}} + \langle q, x _ {3, d} \rangle > 0 \right] - \mathbb {1} \left[ \frac {- p _ {1}}{\sqrt {d - 1}} + \langle q, x _ {3, d} \rangle > 0 \right]\right) \mathrm {d} \mu_ {\mathcal {N}} (q) \\ = \frac {1}{\sqrt {d - 1}} \mathbb {P} \left(\frac {- p _ {1}}{\sqrt {d - 1}} \leq \langle q, x _ {3, d} \rangle \leq \frac {p _ {1}}{\sqrt {d - 1}}\right). \tag {D.4} \\ \end{array}
+$$
+
+Let $\varphi$ denote the density function of the standard Gaussian distribution, and for $c > 0$ , let $U(c)$ denote the probability that a standard Gaussian random variable lies in the interval $[-c, c]$ :
+
+$$
+U (c) := \int_ {- c} ^ {c} \varphi (t) \mathrm {d} t.
+$$
+
+Since $\left\langle q,x_{3,d}\right\rangle$ is a Gaussian variable with standard deviation $\sqrt{(d - 2) / (d - 1)}$ , we have
+
+$$
+\mathbb {P} \left(\frac {- p _ {1}}{\sqrt {d - 1}} \leq \langle q, x _ {3, d} \rangle \leq \frac {p _ {1}}{\sqrt {d - 1}}\right) = U \left(\frac {p _ {1}}{\sqrt {d - 2}}\right). \tag {D.5}
+$$
+
+Plugging eqs. (D.4) and (D.5) into eq. (D.3) gives:
+
+$$
+\begin{array}{l} y \int \left\langle \bar {v} (z), x \right\rangle \mathbb {1} [ \langle z, x \rangle > 0 ] \mathrm {d} \mu_ {\mathcal {N}} (z) = \frac {1}{\sqrt {d - 1}} \int U \left(\frac {p _ {1}}{\sqrt {d - 2}}\right) \mathbb {1} [ p \in A _ {1} ] \mathrm {d} \mu_ {\mathcal {N}} (p) \\ = \frac {1}{\sqrt {d - 1}} \int_ {0} ^ {\infty} U \left(\frac {p _ {1}}{\sqrt {d - 2}}\right) \left(\int_ {- p _ {1}} ^ {p _ {1}} \varphi (p _ {2}) \mathrm {d} p _ {2}\right) \varphi (p _ {1}) \mathrm {d} p _ {1} \\ = \frac {1}{\sqrt {d - 1}} \int_ {0} ^ {\infty} U \left(\frac {p _ {1}}{\sqrt {d - 2}}\right) U (p _ {1}) \varphi (p _ {1}) \mathrm {d} p _ {1} \\ \geq \frac {1}{\sqrt {d - 1}} \int_ {0} ^ {1} U \left(\frac {p _ {1}}{\sqrt {d - 2}}\right) U (p _ {1}) \varphi (p _ {1}) \mathrm {d} p _ {1}. \\ \end{array}
+$$
+
For $t \in [-1, +1]$ , it holds that $\varphi(t) \geq 1/\sqrt{2\pi e}$ , and thus for any $a \in [0, 1]$ ,
+
+$$
+U (a) = \int_ {- a} ^ {a} \varphi (t) \mathrm {d} t \geq \frac {2 a}{\sqrt {2 \pi e}}.
+$$
+
+Therefore eq. (D.3) is lower bounded by
+
+$$
\begin{array}{l} \frac {1}{\sqrt {d - 1}} \int_ {0} ^ {1} U \left(\frac {p _ {1}}{\sqrt {d - 2}}\right) U (p _ {1}) \varphi (p _ {1}) \mathrm {d} p _ {1} \geq \frac {1}{\sqrt {d - 1}} \int_ {0} ^ {1} \frac {2}{\sqrt {2 \pi e}} \cdot \frac {p _ {1}}{\sqrt {d - 2}} \cdot \frac {2 p _ {1}}{\sqrt {2 \pi e}} \cdot \frac {1}{\sqrt {2 \pi e}} \mathrm {d} p _ {1} \\ \geq \frac {1}{20 \sqrt {(d - 1) (d - 2)}} \int_ {0} ^ {1} p _ {1} ^ {2} \mathrm {d} p _ {1} \\ = \frac {1}{60 \sqrt {(d - 1) (d - 2)}} \\ \geq \frac {1}{60 d}. \\ \end{array}
+$$
+
+
+
+To prove Proposition 5.4, we need the following technical lemma.
+
+Lemma D.1. Given $z_{1}\sim \mathcal{N}(0,1)$ and $z_{2}\sim \mathcal{N}(0,b^{2})$ that are independent where $b > 1$ , we have
+
+$$
+\mathbb {P} \left(\left| z _ {1} \right| < \left| z _ {2} \right|\right) > 1 - \frac {1}{b}.
+$$
+
+Proof. First note that for $z_{3} \sim \mathcal{N}(0,1)$ which is independent of $z_{1}$ ,
+
+$$
+\mathbb {P} \left(| z _ {1} | < | z _ {2} |\right) = \mathbb {P} \left(| z _ {1} | < b | z _ {3} |\right) = 1 - \mathbb {P} \left(| z _ {3} | < \frac {1}{b} | z _ {1} |\right).
+$$
+
+Still let $\varphi$ denote the density of $\mathcal{N}(0,1)$ , and let $U(c)$ denote the probability that $z_{3} \in [-c, c]$ . We have
+
+$$
\begin{array}{l} \mathbb {P} \left(| z _ {3} | < \frac {1}{b} | z _ {1} |\right) = \iint \mathbb {1} \left[ | z _ {3} | < \frac {1}{b} | z _ {1} | \right] \varphi (z _ {3}) \varphi (z _ {1}) \mathrm {d} z _ {3} \mathrm {d} z _ {1} \\ = \int U \left(\frac {1}{b} | z _ {1} |\right) \varphi (z _ {1}) \mathrm {d} z _ {1} \\ \leq \frac {2}{\sqrt {2 \pi} b} \int | z _ {1} | \varphi (z _ {1}) \mathrm {d} z _ {1} = \frac {2}{\pi b} < \frac {1}{b}, \\ \end{array}
+$$
+
+where we use the facts that $U(c) \leq 2c / \sqrt{2\pi}$ and $\mathbb{E}[|z_1|] = \sqrt{2 / \pi}$ .
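Since $|z_3| / |z_1|$ follows a half-Cauchy distribution for independent standard Gaussians, the probability bounded above is exactly $(2/\pi)\arctan(1/b)$ , and $\arctan(x) \leq x$ recovers the $2/(\pi b)$ bound. A quick check of these constants:

```python
import math

# For independent standard Gaussians z1, z3, the ratio |z3| / |z1| is half-Cauchy,
# so P(|z3| < |z1| / b) equals (2 / pi) * arctan(1 / b) exactly.
for b in [1.5, 2.0, 10.0, 100.0]:
    p_bad = (2.0 / math.pi) * math.atan(1.0 / b)
    # arctan(x) <= x recovers the bound 2 / (pi b) < 1 / b used in the proof.
    assert p_bad <= 2.0 / (math.pi * b) < 1.0 / b
    # Hence P(|z1| < |z2|) = 1 - p_bad > 1 - 1/b, matching Lemma D.1.
    assert 1.0 - p_bad > 1.0 - 1.0 / b
```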
+
+
+
+We now give the proof of Proposition 5.4 using Lemma D.1.
+
+Proof of Proposition 5.4. By symmetry, we only need to consider the following training set:
+
+$$
+x _ {1} = (1, 0, 1, \dots , 1), \quad y _ {1} = 1,
+$$
+
+$$
+x _ {2} = (0, 1, 1, \dots , 1), \quad y _ {2} = - 1,
+$$
+
+$$
+x _ {3} = (- 1, 0, 1, \dots , 1), \quad y _ {3} = 1,
+$$
+
+$$
+x _ {4} = (0, - 1, 1, \dots , 1), \quad y _ {4} = - 1.
+$$
+
+The $1 / \sqrt{d - 1}$ factor is omitted; this is without loss of generality because we only discuss the $0 / 1$ loss.
+
+For any $s$ , let $A_{s}$ denote the event that
+
+$$
+\mathbb {1} \left[ \langle w _ {s}, x _ {1} \rangle > 0 \right] = \mathbb {1} \left[ \langle w _ {s}, x _ {2} \rangle > 0 \right] = \mathbb {1} \left[ \langle w _ {s}, x _ {3} \rangle > 0 \right] = \mathbb {1} \left[ \langle w _ {s}, x _ {4} \rangle > 0 \right].
+$$
+
+We will show that if $m \leq \sqrt{d - 2} / 4$ , then $A_{s}$ is true for all $1 \leq s \leq m$ with probability at least $1/2$ , and Proposition 5.4 follows from the fact that the XOR data is not linearly separable.
+
+For any $s$ and $i$ ,
+
+$$
+\langle w _ {s}, x _ {i} \rangle = (w _ {s}) _ {1} (x _ {i}) _ {1} + (w _ {s}) _ {2} (x _ {i}) _ {2} + \sum_ {j = 3} ^ {d} (w _ {s}) _ {j}.
+$$
+
+Since $\left((x_i)_1,(x_i)_2\right)$ is $(1,0)$ or $(0,1)$ or $(-1,0)$ or $(0, - 1)$ , event $A_{s}$ will happen as long as
+
+$$
+\left| \left(w_{s}\right)_{1} \right| < \left| \sum_{j = 3}^{d} \left(w_{s}\right)_{j} \right|, \quad \text{and} \quad \left| \left(w_{s}\right)_{2} \right| < \left| \sum_{j = 3}^{d} \left(w_{s}\right)_{j} \right|.
+$$
+
+Note that $(w_{s})_{1},(w_{s})_{2}\sim \mathcal{N}(0,1)$ while $\sum_{j = 3}^{d}(w_{s})_{j}\sim \mathcal{N}(0,d - 2)$ . As a result, due to Lemma D.1,
+
+$$
+\mathbb{P}\left(\left| (w_{s})_{1} \right| < \left| \sum_{j = 3}^{d} (w_{s})_{j} \right|\right) = \mathbb{P}\left(\left| (w_{s})_{2} \right| < \left| \sum_{j = 3}^{d} (w_{s})_{j} \right|\right) > 1 - \frac{1}{\sqrt{d - 2}}.
+$$
+
+Using a union bound, $\mathbb{P}(A_s) > 1 - 2 / \sqrt{d - 2}$ . If $m \leq \sqrt{d - 2} / 4$ , then by a union bound again,
+
+$$
+\mathbb{P}\left(\bigcap_{1 \leq s \leq m} A_{s}\right) > 1 - \frac{2}{\sqrt{d - 2}} m \geq 1 - \frac{2}{\sqrt{d - 2}} \cdot \frac{\sqrt{d - 2}}{4} = \frac{1}{2}.
+$$
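+
The probability bound above can be checked empirically. The sketch below (illustrative, with a hypothetical helper name) draws $m = \lfloor \sqrt{d - 2} / 4 \rfloor$ Gaussian weight vectors and estimates the probability that every unit assigns the same sign to all four XOR training points:

```python
import numpy as np

def prob_all_units_constant(d: int, trials: int = 2000, seed: int = 0) -> float:
    """Estimate P(A_1 and ... and A_m): every unit s gives the same sign of
    <w_s, x_i> on all four XOR training points, with m = floor(sqrt(d-2)/4)."""
    rng = np.random.default_rng(seed)
    m = int(np.sqrt(d - 2) / 4)
    # first two coordinates of x_1, ..., x_4; coordinates 3..d are all 1
    X12 = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
    hits = 0
    for _ in range(trials):
        W = rng.standard_normal((m, d))            # rows: w_1, ..., w_m
        tail = W[:, 2:].sum(axis=1)                # sum_{j=3}^{d} (w_s)_j
        scores = W[:, :2] @ X12.T + tail[:, None]  # <w_s, x_i>, shape (m, 4)
        pos = scores > 0
        same = pos.all(axis=1) | (~pos).all(axis=1)
        hits += int(same.all())
    return hits / trials

print(prob_all_units_constant(1000))  # should comfortably exceed 1/2
```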
+
+
\ No newline at end of file
diff --git a/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/images.zip b/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..c924206dc7b80c789f37fe7e8c37faea527a7e57
--- /dev/null
+++ b/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:baf282da9e9871d13013d1362627139aeaffc234c9040505702874728eca0aaf
+size 1334122
diff --git a/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/layout.json b/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..228cb291a3bae5048b217002cfc0295485ff53fe
--- /dev/null
+++ b/polylogarithmicwidthsufficesforgradientdescenttoachievearbitrarilysmalltesterrorwithshallowrelunetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:45b8478a779085d40d9c6620ada5f4a981b61e8eb8563bde40816893e20c1130
+size 1236455
diff --git a/populationguidedparallelpolicysearchforreinforcementlearning/f7b435a0-78b7-4524-801a-30fd4a7993bd_content_list.json b/populationguidedparallelpolicysearchforreinforcementlearning/f7b435a0-78b7-4524-801a-30fd4a7993bd_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..3e46d88a8f755ec513338d9a310a15aa0075e9fd
--- /dev/null
+++ b/populationguidedparallelpolicysearchforreinforcementlearning/f7b435a0-78b7-4524-801a-30fd4a7993bd_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:80ef7fe703f34df567c4f521353c274a005c93cf9d6f1ad59dddb8ecef40e731
+size 156047
diff --git a/populationguidedparallelpolicysearchforreinforcementlearning/f7b435a0-78b7-4524-801a-30fd4a7993bd_model.json b/populationguidedparallelpolicysearchforreinforcementlearning/f7b435a0-78b7-4524-801a-30fd4a7993bd_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..751a8cba61f58a2e3a116259f9259d5b54ff8694
--- /dev/null
+++ b/populationguidedparallelpolicysearchforreinforcementlearning/f7b435a0-78b7-4524-801a-30fd4a7993bd_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6916be6d06193dcbaf62f2c28262e579b53e75c6c2d4c5a6f8e7577c1f34881e
+size 185942
diff --git a/populationguidedparallelpolicysearchforreinforcementlearning/f7b435a0-78b7-4524-801a-30fd4a7993bd_origin.pdf b/populationguidedparallelpolicysearchforreinforcementlearning/f7b435a0-78b7-4524-801a-30fd4a7993bd_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..fdc28aa41f48ab4bdae9a0ebe5dad97d0d1c3332
--- /dev/null
+++ b/populationguidedparallelpolicysearchforreinforcementlearning/f7b435a0-78b7-4524-801a-30fd4a7993bd_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:daaaea26b641d87198e897cee3d9f56fe49727978a2a878eb61995eaa646264b
+size 4775349
diff --git a/populationguidedparallelpolicysearchforreinforcementlearning/full.md b/populationguidedparallelpolicysearchforreinforcementlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..6d52dbd03536385be5286b9bf6f2b60ccf1a4105
--- /dev/null
+++ b/populationguidedparallelpolicysearchforreinforcementlearning/full.md
@@ -0,0 +1,753 @@
+# POPULATION-GUIDED PARALLEL POLICY SEARCH FOR REINFORCEMENT LEARNING
+
+Whiyoung Jung, Giseung Park, Youngchul Sung*
+
+School of Electrical Engineering
+
+Korea Advanced Institute of Science and Technology
+
+{wy.jung, gs.park, ycsung}@kaist.ac.kr
+
+# ABSTRACT
+
+In this paper, a new population-guided parallel learning scheme is proposed to enhance the performance of off-policy reinforcement learning (RL). In the proposed scheme, multiple identical learners with their own value-functions and policies share a common experience replay buffer, and search for a good policy in collaboration with the guidance of the best policy information. The key point is that the information of the best policy is fused in a soft manner by constructing an augmented loss function for policy update to enlarge the overall search region covered by the multiple learners. The guidance by the previous best policy and the enlarged search range enable faster and better policy search. Monotone improvement of the expected cumulative return by the proposed scheme is proved theoretically. Working algorithms are constructed by applying the proposed scheme to the twin delayed deep deterministic (TD3) policy gradient algorithm. Numerical results show that the constructed algorithm outperforms most of the current state-of-the-art RL algorithms, and the gain is significant in the case of sparse reward environments.
+
+# 1 INTRODUCTION
+
+RL is an active research field and has been applied successfully to games, simulations, and actual environments. With the success of RL in relatively easy tasks, more challenging tasks such as sparse reward environments (Oh et al. (2018); Zheng et al. (2018); Burda et al. (2019)) are emerging, and developing good RL algorithms for such challenging tasks is of great importance from both theoretical and practical perspectives. In this paper, we consider parallel learning, which is an important line of RL research to enhance the learning performance by having multiple learners for the same environment. Parallelism in learning has been investigated widely in distributed RL (Nair et al. (2015); Mnih et al. (2016); Horgan et al. (2018); Barth-Maron et al. (2018); Espeholt et al. (2018)), evolutionary algorithms (Salimans et al. (2017); Choromanski et al. (2018); Khadka & Tumer (2018); Pourchot & Sigaud (2019)), concurrent RL (Silver et al. (2013); Guo & Brunskill (2015); Dimakopoulou & Van Roy (2018); Dimakopoulou et al. (2018)) and population-based training (PBT) (Jaderberg et al. (2017; 2018); Conti et al. (2018)). In this paper, in order to enhance the learning performance, we apply parallelism to RL based on a population of policies, but the usage is different from the previous methods.
+
+One of the advantages of using a population is the capability to evaluate policies in the population. Once all policies in the population are evaluated, we can use information of the best policy to enhance the performance. One simple way to exploit the best policy information is that we reset the policy parameter of each learner with that of the best learner at the beginning of the next $M$ time steps; make each learner perform learning from this initial point for the next $M$ time steps; select the best learner again at the end of the next $M$ time steps; and repeat this procedure every $M$ time steps in a similar way that PBT does (Jaderberg et al. (2017)). We will refer to this method as the resetting method in this paper. However, this resetting method has the problem that the search area covered by all $N$ policies in the population collapses to one point at the time of parameter copying and thus the search area can be narrow around the previous best policy point. To overcome such disadvantage, instead of resetting the policy parameter with the best policy parameter periodically,
+
+we propose using the best policy information in a soft manner. In the proposed scheme, the shared best policy information is used only to guide other learners' policies for searching a better policy. The chief periodically determines the best policy among all learners and distributes the best policy parameter to all learners so that the learners search for better policies with the guidance of the previous best policy. The chief also enforces that the $N$ policies are spread in the policy space with a given distance from the previous best policy point so that the search area by all $N$ learners maintains a wide area and does not collapse into a narrow region.
+
+The proposed Population-guided Parallel Policy Search (P3S) learning method can be applied to any off-policy RL algorithms and implementation is easy. Furthermore, monotone improvement of the expected cumulative return by the P3S scheme is theoretically proved. We apply our P3S scheme to the TD3 algorithm, which is a state-of-the-art off-policy algorithm, as our base algorithm. Numerical result shows that the P3S-TD3 algorithm outperforms the baseline algorithms both in the speed of convergence and in the final steady-state performance.
+
+# 2 BACKGROUND AND RELATED WORKS
+
+Distributed RL Distributed RL is an efficient way of taking advantage of parallelism to achieve fast training for large complex tasks (Nair et al. (2015)). Most of the works in distributed RL assume a common structure composed of multiple actors interacting with multiple copies of the same environment and a central system which stores and optimizes the common Q-function parameter or the policy parameter shared by all actors. The focus of distributed RL is to optimize the Q-function parameter or the policy parameter fast by generating more samples for the same wall clock time with multiple actors. For this goal, researchers investigated various techniques for distributed RL, e.g., asynchronous update of parameters (Mnih et al. (2016); Babaeizadeh et al. (2017)), sharing an experience replay buffer (Horgan et al. (2018)), GPU-based parallel computation (Babaeizadeh et al. (2017); Clemente et al. (2017)), GPU-based simulation (Liang et al. (2018)) and V-trace in case of on-policy algorithms (Espeholt et al. (2018)). Distributed RL yields performance improvement in terms of the wall clock time but it does not consider the possible enhancement by interaction among a population of policies of all learners like in PBT or our P3S. The proposed P3S uses a similar structure to that in (Nair et al. (2015); Espeholt et al. (2018)): that is, P3S is composed of multiple learners and a chief. The difference is that each learner in P3S has its own Q or value function parameter and policy parameter, and optimizes the parameters in parallel to search in the policy space.
+
+Population-Based Training Parallelism is also exploited in finding optimal parameters and hyperparameters of training algorithms in PBT (Jaderberg et al. (2017; 2018); Conti et al. (2018)). PBT trains neural networks, using a population with different parameters and hyper-parameters in parallel at multiple learners. During the training, in order to take advantage of the population, it evaluates the performance of networks with parameters and hyper-parameters in the population periodically. Then, PBT selects the best hyper-parameters, distributes the best hyper-parameters and the corresponding parameters to other learners, and continues the training of neural networks. Recently, PBT is applied to competitive multi-agent RL (Jaderberg et al. (2018)) and novelty search algorithms (Conti et al. (2018)). The proposed P3S uses a population to search a better policy by exploiting the best policy information similarly to PBT, but the way of using the best policy information is different. In P3S, the parameter of the best learner is not copied but used in a soft manner to guide the population for better search in the policy space.
+
+Guided Policy Search Our P3S method is also related to guided policy search (Levine & Koltun (2013); Levine et al. (2016); Teh et al. (2017); Ghosh et al. (2018)). Teh et al. (2017) proposed a guided policy search method for joint training of multiple tasks in which a common policy is used to guide local policies and the common policy is distilled from the local policies. Here, the local policies' parameters are updated to maximize the performance and minimize the KL divergence between the local policies and the common distilled policy. The proposed P3S is related to guided policy search in the sense that multiple policies are guided by a common policy. However, the difference is that the goal of P3S is not learning multiple tasks but learning optimal parameter for a common task as in PBT. Hence, the guiding policy is not distilled from multiple local policies but chosen as the best performing policy among multiple learners.
+
+Exploiting Best Information Exploiting best information has been considered in the previous works (White & Sofge (1992); Oh et al. (2018); Gangwani et al. (2019)). In particular, Oh et al. (2018); Gangwani et al. (2019) exploited past good experiences to obtain a better policy, whereas P3S exploits the current good policy among multiple policies to obtain a better policy.
+
+# 3 POPULATION-GUIDED PARALLEL POLICY SEARCH
+
+The overall structure of the proposed P3S scheme is described in Fig. 1. We have $N$ identical parallel learners with a shared common experience replay buffer $\mathcal{D}$ , and all $N$ identical learners employ a common base algorithm which can be any off-policy RL algorithm. The execution is in parallel. The $i$ -th learner has its own environment $\mathcal{E}^i$ , which is a copy of the common environment $\mathcal{E}$ , and has its own value function (e.g., Q-function) parameter $\theta^i$ and policy parameter $\phi^i$ . The $i$ -th learner interacts with the environment copy $\mathcal{E}^i$ with additional interaction with the chief, as shown in
+
+
+Figure 1: The overall structure of P3S
+
+Fig. 1. At each time step, the $i$ -th learner performs an action $a_{t}^{i}$ in its environment copy $\mathcal{E}^i$ by using its own policy $\pi_{\phi^i}$ , and stores its experience $(s_t^i,a_t^i,r_t^i,s_{t + 1}^i)$ in the shared common replay buffer $\mathcal{D}$ , for all $i = 1,2,\dots ,N$ . Then, each learner updates its value function parameter and policy parameter once by drawing a mini-batch of size $B$ from the shared common replay buffer $\mathcal{D}$ and minimizing its own value loss function and policy loss function, respectively.
+
+Due to parallel update of parameters, the policies of all learners compose a population of $N$ different policies. In order to take advantage of this population, we exploit the policy information from the best learner periodically during the training like in PBT (Jaderberg et al. (2017)). Suppose that the Q-function parameter and policy parameter of each learner are initialized and learning is performed as described above for $M$ time steps. At the end of the $M$ time steps, we determine who is the best learner based on the average of the most recent $E_{r}$ episodic rewards for each learner. Let the index of the best learner be $b$ . Then, the policy parameter information $\phi^b$ of the best learner can be used to enhance the learning of other learners for the next $M$ time steps. Instead of copying $\phi^b$ to other learners like in PBT, we propose using the information $\phi^b$ in a soft manner. That is, during the next $M$ time steps, while we set the loss function $\widetilde{L} (\theta^i)$ for the Q-function to be the same as the loss $L(\theta^i)$ of the base algorithm, we set the loss function $\widetilde{L} (\phi^i)$ for the policy parameter $\phi^i$ of the $i$ -th learner as the following augmented version:
+
+$$
+\widetilde {L} \left(\phi^ {i}\right) = L \left(\phi^ {i}\right) + \mathbf {1} _ {\{i \neq b \}} \beta \mathbb {E} _ {s \sim \mathcal {D}} \left[ D \left(\pi_ {\phi^ {i}}, \pi_ {\phi^ {b}}\right) \right] \tag {1}
+$$
+
+where $L(\phi^i)$ is the policy loss function of the base algorithm, $\mathbf{1}_{\{\cdot\}}$ denotes the indicator function, $\beta (>0)$ is a weighting factor, and $D(\pi, \pi')$ is some distance measure between two policies $\pi$ and $\pi'$ .
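+
For deterministic policies with the mean-square distance adopted later in Section 3.2, eq. (1) can be sketched as follows. This is a minimal NumPy illustration; the toy batch, the `base_loss` value, and the function names are placeholders, not the paper's networks or code.

```python
import numpy as np

def mse_distance(actions_i: np.ndarray, actions_b: np.ndarray) -> float:
    """E_s[D(pi_i, pi_b)] over a batch of states: (1/2)||pi_i(s) - pi_b(s)||^2."""
    return 0.5 * float(np.mean(np.sum((actions_i - actions_b) ** 2, axis=1)))

def augmented_policy_loss(base_loss, actions_i, actions_b, beta, is_best):
    """Eq. (1): L~(phi_i) = L(phi_i) + 1{i != b} * beta * E_s[D(pi_i, pi_b)]."""
    if is_best:
        return base_loss  # the best learner b keeps the plain base loss
    return base_loss + beta * mse_distance(actions_i, actions_b)

# toy batch: actions of learner i and of the best learner b on 4 states
a_i = np.array([[0.1, 0.2], [0.0, 0.5], [0.3, 0.1], [0.2, 0.2]])
a_b = np.array([[0.0, 0.2], [0.1, 0.4], [0.3, 0.0], [0.2, 0.3]])
loss = augmented_policy_loss(base_loss=1.0, actions_i=a_i, actions_b=a_b,
                             beta=0.5, is_best=False)
print(loss)
```

The indicator $\mathbf{1}_{\{i \neq b\}}$ appears as the `is_best` branch: the best learner minimizes only its base loss, while every other learner pays an extra penalty for straying from $\pi_{\phi^b}$.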
+
+# 3.1 THEORETICAL GUARANTEE OF MONOTONE IMPROVEMENT OF EXPECTED CUMULATIVE RETURN
+
+In this section, we theoretically analyze the proposed soft-fusion approach and show its effectiveness. Consider the current update period and its previous update period. Let $\pi_{\phi^i}^{old}$ be the policy of the $i$ -th learner at the end of the previous update period and let $\pi_{\phi^b}$ be the best policy among all policies $\pi_{\phi^i}^{old}, i = 1, \dots, N$ . Now, consider any learner $i$ who is not the best in the previous update period. Let the policy of learner $i$ in the current update period be denoted by $\pi_{\phi^i}$ , and let the policy loss function of the base algorithm be denoted as $L(\pi_{\phi^i})$ . In order to analyze the performance, we consider $L(\pi_{\phi^i})$ in the form of $L(\pi_{\phi^i}) = \mathbb{E}_{s \sim \mathcal{D}, a \sim \pi_{\phi^i}(\cdot | s)}[-Q^{\pi_{\phi^i}^{old}}(s, a)]$ . The reason behind this choice is that most actor-critic methods update the value (or Q-)function and the policy iteratively. That is, for given $\pi_{\phi^i}^{old}$ , the Q-function is first updated to approximate $Q^{\pi_{\phi^i}^{old}}$ . Then, with the approximation $Q^{\pi_{\phi^i}^{old}}$ , the policy is
+
+updated to yield an updated policy $\pi_{\phi^i}^{new}$ . This procedure is repeated iteratively. Such loss function is used in many RL algorithms such as SAC and TD3 (Haarnoja et al. (2018); Fujimoto et al. (2018)). For the distance measure $D(\pi, \pi')$ between two policies $\pi$ and $\pi'$ , we consider the KL divergence $\mathrm{KL}(\pi || \pi')$ for analysis. Then, by eq. (1) the augmented loss function for non-best learner $i$ at the current update period is expressed as
+
+$$
+\begin{array}{ll} \widetilde{L}\left(\pi_{\phi^{i}}\right) = \mathbb{E}_{s \sim \mathcal{D}, a \sim \pi_{\phi^{i}}(\cdot | s)}\left[ -Q^{\pi_{\phi^{i}}^{old}}(s, a) \right] + \beta \mathbb{E}_{s \sim \mathcal{D}}\left[ \mathrm{KL}\left(\pi_{\phi^{i}}(\cdot | s) || \pi_{\phi^{b}}(\cdot | s)\right) \right] & (2) \\ \quad = \mathbb{E}_{s \sim \mathcal{D}}\left[ \mathbb{E}_{a \sim \pi_{\phi^{i}}(\cdot | s)}\left[ -Q^{\pi_{\phi^{i}}^{old}}(s, a) + \beta \log \frac{\pi_{\phi^{i}}(a | s)}{\pi_{\phi^{b}}(a | s)} \right] \right] & (3) \end{array}
+$$
+
+Let $\pi_{\phi^i}^{new}$ be a solution that minimizes the augmented loss function eq. (3). We assume the following conditions.
+
+Assumption 1. For all $s$ ,
+
+$$
+\mathbb {E} _ {a \sim \pi_ {\phi^ {b}} (\cdot | s)} \left[ Q ^ {\pi_ {\phi^ {i}} ^ {o l d}} (s, a) \right] \geq \mathbb {E} _ {a \sim \pi_ {\phi^ {i}} ^ {o l d} (\cdot | s)} \left[ Q ^ {\pi_ {\phi^ {i}} ^ {o l d}} (s, a) \right]. \tag {A1}
+$$
+
+Assumption 2. For some $\rho, d > 0$ ,
+
+$$
+\mathrm{KL}\left(\pi_{\phi^{i}}^{new}(\cdot | s) || \pi_{\phi^{b}}(\cdot | s)\right) \geq \max\left\{\rho \max_{s'} \mathrm{KL}\left(\pi_{\phi^{i}}^{new}(\cdot | s') || \pi_{\phi^{i}}^{old}(\cdot | s')\right), d \right\}, \quad \forall s. \tag{A2}
+$$
+
+Assumption 1 means that if we draw the first time step action $a$ from $\pi_{\phi^b}$ and the following actions from $\pi_{\phi^i}^{old}$ , then this yields better performance on average than the case that we draw all actions including the first time step action from $\pi_{\phi^i}^{old}$ . This makes sense because of the definition of $\pi_{\phi^b}$ . Assumption 2 is about the distance relationship among the policies to ensure a certain level of spreading of the policies for the proposed soft-fusion approach. With the two assumptions above, we have the following theorem regarding the proposed soft-fusion parallel learning scheme:
+
+Theorem 1. Under Assumptions 1 and 2, the following inequality holds:
+
+$$
+Q^{\pi_{\phi^{i}}^{new}}(s, a) \stackrel{(a)}{\geq} Q^{\pi_{\phi^{b}}}(s, a) + \underbrace{\beta \mathbb{E}_{s_{t+1}:s_{\infty} \sim \pi_{\phi^{b}}}\left[ \sum_{k = t+1}^{\infty} \gamma^{k-t} \Delta(s_{k}) \right]}_{\text{Improvement gap}} \stackrel{(b)}{\geq} Q^{\pi_{\phi^{b}}}(s, a), \quad \forall (s, a), \forall i \neq b. \tag{4}
+$$
+
+where
+
+$$
+\Delta(s) = \mathrm{KL}\left(\pi_{\phi^{i}}^{new}(\cdot | s) || \pi_{\phi^{b}}(\cdot | s)\right) - \max\left\{\rho \max_{s'} \mathrm{KL}\left(\pi_{\phi^{i}}^{new}(\cdot | s') || \pi_{\phi^{i}}^{old}(\cdot | s')\right), d \right\}. \tag{5}
+$$
+
+Here, inequality (a) requires Assumption 1 only and inequality (b) requires Assumption 2.
+
+Proof. See Appendix A.
+
+Theorem 1 states that the new solution $\pi_{\phi^i}^{new}$ for the current update period with the augmented loss function yields better performance (in the expected reward sense) than the best policy $\pi_{\phi^b}$ of the previous update period for any non-best learner $i$ of the previous update period. Hence, the proposed parallel learning scheme yields monotone improvement of expected cumulative return.
+
+# 3.2 IMPLEMENTATION
+
+The proposed P3S method can be applied to any off-policy base RL algorithms whether the base RL algorithms have discrete or continuous actions. For implementation, we assume that the best policy update period consists of $M$ time steps. We determine the best learner at the end of each update period based on the average of the most recent $E_{r}$ episodic rewards of each learner. The key point in implementation is adaptation of $\beta$ so that the improvement gap $\beta \mathbb{E}_{s_{t + 1}:s_{\infty} \sim \pi^b} \left[ \sum_{k = t + 1}^{\infty} \gamma^{k - t} \Delta(s_k) \right]$ in (4) becomes non-negative and is maximized for given $\rho$ and $d$ . The gradient of the improvement gap with respect to $\beta$ is given by $\bar{\Delta} := \mathbb{E}_{s_{t + 1}:s_{\infty} \sim \pi^b} \left[ \sum_{k = t + 1}^{\infty} \gamma^{k - t} \Delta(s_k) \right]$ , and $\bar{\Delta}$ is the average (with forgetting) of $\Delta(s_k)$ by using samples from $\pi^b$ . Hence, if $\bar{\Delta} > 0$ , i.e.,
+
+KL $\left(\pi_{\phi^i}^{new}(\cdot |s)||\pi_{\phi^b}(\cdot |s)\right) > \max \left\{\rho \max_{s'}\mathrm{KL}\left(\pi_{\phi^i}^{new}(\cdot |s')||\pi_{\phi^i}^{old}(\cdot |s')\right),d\right\}$ on average, then $\beta$ should be increased to maximize the performance gain. Otherwise, $\beta$ should be decreased. Therefore, we adopt the following adaptation rule for $\beta$ which is common for all non-best learners:
+
+$$
+\beta \leftarrow \left\{ \begin{array}{ll} 2\beta & \text{if } \widehat{D}_{\text{spread}} > \max\left\{\rho \widehat{D}_{\text{change}}, d_{\min} \right\} \times 1.5 \\ \beta / 2 & \text{if } \widehat{D}_{\text{spread}} < \max\left\{\rho \widehat{D}_{\text{change}}, d_{\min} \right\} / 1.5 \end{array} \right. . \tag{6}
+$$
+
+Here, $I^{-b}$ denotes the set of non-best learners, $\widehat{D}_{spread} = \frac{1}{N - 1}\sum_{i\in I^{-b}}\mathbb{E}_{s\sim \mathcal{D}}\left[D(\pi_{\phi^i}^{new},\pi_{\phi^b})\right]$ is the estimated distance between $\pi_{\phi^i}^{new}$ and $\pi_{\phi^b}$ averaged over all $N - 1$ non-best learners, and $\widehat{D}_{change} = \frac{1}{N - 1}\sum_{i\in I^{-b}}\mathbb{E}_{s\sim \mathcal{D}}\left[D(\pi_{\phi^i}^{new},\pi_{\phi^i}^{old})\right]$ is the similarly averaged estimated distance between $\pi_{\phi^i}^{new}$ and $\pi_{\phi^i}^{old}$ ; $d_{min}$ and $\rho$ are predetermined hyper-parameters. $\widehat{D}_{spread}$ and $\max \{\rho \widehat{D}_{change},d_{min}\}$ are our practical implementations of the left-hand side (LHS) and the right-hand side (RHS) of eq. (A2), respectively. This adaptation method is similar to that used in PPO (Schulman et al. (2017)).
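+
Eq. (6) amounts to a simple multiplicative controller on $\beta$. A sketch (the function name is illustrative; the defaults $\rho = 2$ and $d_{min} = 0.02$ are taken from the experimental settings in Section 4):

```python
def adapt_beta(beta: float, d_spread: float, d_change: float,
               rho: float = 2.0, d_min: float = 0.02) -> float:
    """Eq. (6): adjust beta so that D_spread tracks max(rho * D_change, d_min)."""
    target = max(rho * d_change, d_min)
    if d_spread > 1.5 * target:
        return 2.0 * beta   # policies spread too far: strengthen the pull to the best
    if d_spread < target / 1.5:
        return beta / 2.0   # policies too close to the best: relax the guidance
    return beta             # spread is within the dead band: keep beta
```

The 1.5 dead band on both sides keeps $\beta$ from oscillating every period, so $\widehat{D}_{spread}$ settles near the target search radius.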
+
+The update (6) of $\beta$ is done every $M$ time steps and the updated $\beta$ is used for the next $M$ time steps. As time steps elapse, $\beta$ settles down so that $\widehat{D}_{\text{spread}}$ stays around $d_{\text{search}} = \max \{\rho \widehat{D}_{\text{change}}, d_{\text{min}}\}$ , which implements Assumption 2 with equality. Hence, the proposed P3S scheme searches a wide area with rough radius $d_{\text{search}}$ around the best policy in the policy space, as illustrated in Fig. 2.
+
+The search radius $d_{search}$ is determined proportionally to $\widehat{D}_{change}$ that represents the speed of change in each learner's policy. In the case of being stuck in local optima, the change $\widehat{D}_{change}$ can be small, making the search area narrow. Hence, we set a minimum search radius $d_{min}$ to encourage escaping out of local optima.
+
+
+Figure 2: The conceptual search coverage in the policy space by parallel learners
+
+We applied P3S to TD3 as the base algorithm. The constructed algorithm is named P3S-TD3. The details of TD3 are explained in Appendix G. We used the mean square difference given by $D(\pi(s), \pi'(s)) = \frac{1}{2} \| \pi(s) - \pi'(s) \|_2^2$ as the distance measure between two policies for P3S-TD3. Note that if we consider two deterministic policies as two stochastic policies with the same standard deviation, the KL divergence between the two stochastic policies is proportional to the mean square difference. For initial exploration, P3S-TD3 uses a uniform random policy and does not update any policy over the first $T_{initial}$ time steps. The pseudocode of P3S-TD3 is given in Appendix H. The implementation code for P3S-TD3 is available at https://github.com/wyjung0625/p3s.
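+
For reference, the correspondence between the two distance notions is exact up to a scale factor: modeling a deterministic policy $\pi(s)$ as a Gaussian $\mathcal{N}(\pi(s), \sigma^{2} I)$ with a common fixed $\sigma$ gives

$$
\mathrm{KL}\left(\mathcal{N}(\pi(s), \sigma^{2} I) \,||\, \mathcal{N}(\pi'(s), \sigma^{2} I)\right) = \frac{1}{2\sigma^{2}} \left\| \pi(s) - \pi'(s) \right\|_{2}^{2},
$$

which coincides with $D(\pi(s), \pi'(s)) = \frac{1}{2}\|\pi(s) - \pi'(s)\|_{2}^{2}$ exactly when $\sigma = 1$ and is proportional to it otherwise.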
+
+# 4 EXPERIMENTS
+
+# 4.1 COMPARISON TO BASELINES
+
+In this section, we provide numerical results on performance comparison between the proposed P3S-TD3 algorithm and current state-of-the-art on-policy and off-policy baseline algorithms on several MuJoCo environments (Todorov et al. (2012)). The baseline algorithms are Proximal Policy Optimization (PPO) (Schulman et al. (2017)), Actor Critic using Kronecker-Factored Trust Region (ACKTR) (Wu et al. (2017)), Soft Q-learning (SQL) (Haarnoja et al. (2017)), (clipped double Q) Soft Actor-Critic (SAC) (Haarnoja et al. (2018)), and TD3 (Fujimoto et al. (2018)).
+
+Hyper-parameter setting All hyper-parameters we used for evaluation are the same as those in the original papers (Schulman et al. (2017); Wu et al. (2017); Haarnoja et al. (2017; 2018); Fujimoto et al. (2018)). Here, we provide the hyper-parameters of the P3S-TD3 algorithm only, while details of the hyper-parameters for TD3 are provided in Appendix I. On top of the hyper-parameters for the base algorithm TD3, we used $N = 4$ learners for P3S-TD3. To update the best policy and $\beta$ , the period $M = 250$ is used. The number of recent episodes $E_r = 10$ was used to determine the best learner $b$ . For the search range, we used the parameter $\rho = 2$ , and tuned $d_{min}$ among $d_{min} = \{0.02, 0.05\}$ for all environments. Details on $d_{min}$ for each environment are shown in Appendix I. The number of time steps for initial exploration $T_{initial}$ is set to 250 for Hopper-v1 and Walker2d-v1 and to 2500 for HalfCheetah-v1 and Ant-v1.
+
+
+Figure 3: Performance for PPO (red), ACKTR (purple), SQL (brown), (clipped double Q) SAC (orange), TD3 (green), and P3S-TD3 (proposed method, blue) on MuJoCo tasks: (a) Hopper-v1, (b) Walker2d-v1, (c) HalfCheetah-v1, (d) Ant-v1.
+
+Evaluation method Fig. 3 shows the learning curves over one million time steps for several MuJoCo tasks: Hopper-v1, Walker2d-v1, HalfCheetah-v1, and Ant-v1. In order to have sample-wise fair comparison among the considered algorithms, the time steps in the $x$ -axis in Fig. 3 for P3S-TD3 is the sum of time steps of all $N$ learners. For example, in the case that $N = 4$ and each learner performs 100 time steps in P3S-TD3, the corresponding $x$ -axis value is 400 time steps. Since each learner performs one parameter update and one interaction with the environment per time step in P3S-TD3, the total number of parameter updates at the same $x$ -axis value in Fig. 3 is the same for all algorithms including P3S-TD3, and the total number of interactions with the environment at the same $x$ -axis value in Fig. 3 is also the same for all algorithms including P3S-TD3. Here, the performance is obtained through an evaluation method similar to those in Haarnoja et al. (2018); Fujimoto et al. (2018). Evaluation of the policies is conducted every $R_{eval} = 4000$ time steps for all algorithms. At each evaluation instant, the agent (or learner) fixes its policy as the one at the evaluation instant, and interacts with a separate copy of the environment reserved for evaluation with the fixed policy to obtain 10 episodic rewards. The average of these 10 episodic rewards is the performance at the evaluation instant. In the case of P3S-TD3 and other parallel learning schemes, each of the $N$ learners fixes its policy as the one at the evaluation instant, and interacts with the environment with the fixed policy to obtain 10 episodic rewards. First, the 10 episodic rewards are averaged for each learner and then the maximum of the 10-episode-average rewards of the $N$ learners is taken as the performance at that evaluation instant. We performed this operation for five different random seeds, and the mean and variance of the learning curve are obtained from these five simulations. The policies used for evaluation are stochastic for PPO and ACKTR, and deterministic for the others.
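+
The per-evaluation-instant score for a population described above can be summarized in a few lines (illustrative sketch; `episodic_rewards[i]` is assumed to hold the evaluation returns of learner $i$, 10 per learner in the paper's protocol):

```python
def population_score(episodic_rewards):
    """Max over learners of the per-learner average of its evaluation
    episode returns (the paper averages 10 episodes per learner)."""
    return max(sum(r) / len(r) for r in episodic_rewards)

# e.g., two learners with three evaluation episodes each:
print(population_score([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]))  # -> 5.0
```

Taking the maximum over learners (rather than the mean) matches the protocol above: the population is judged by its best member, just as deployment would use the best learner's policy.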
+
+Performance on MuJoCo environments In Fig. 3, it is observed that the performance of all baseline algorithms is similar to that reported in the original papers (Schulman et al. (2017); Haarnoja et al. (2018); Fujimoto et al. (2018)). With this verification, we proceed to compare P3S-TD3 with the baseline algorithms. It is seen that the P3S-TD3 algorithm outperforms the state-of-the-art RL algorithms in terms of both the speed of convergence with respect to time steps and the final steady-state performance (except in Walker2d-v1, where the initial convergence is a bit slower than TD3). In particular, in the cases of Hopper-v1 and Ant-v1, TD3 has large variance, which implies that the performance of TD3 is quite dependent on the initialization and it is not easy for TD3 to escape out of bad local optima resulting from bad initialization in certain environments. However, it is seen that P3S-TD3 yields much smaller variance than TD3. This implies that the wide area search by P3S in the policy space helps the learners escape out of bad local optima.
+
+# 4.2 COMPARISON WITH OTHER PARALLEL LEARNING SCHEMES AND ABLATION STUDY
+
+In the previous subsection, we observed that P3S enhances performance and reduces dependence on initialization as compared to the single-learner case with the same complexity. In fact, this should be accomplished by any properly designed parallel learning scheme. Now, to demonstrate the true advantage of P3S, we compare P3S with other parallel learning schemes. P3S has several components that improve performance based on parallelism: 1) sharing experiences from multiple policies, 2) using the best policy information, and 3) soft fusion of the best policy information for a wide search area. We investigated the impact of each component on the performance improvement. For comparison, we considered the following parallel policy search methods, which gradually incorporate more techniques:
+
+
+Figure 4: Performance of different parallel learning methods on MuJoCo environments (top row: (a) Hopper-v1, (b) Walker2d-v1, (c) HalfCheetah-v1, (d) Ant-v1) and on delayed MuJoCo environments (bottom row: (e) Delayed Hopper-v1, (f) Delayed Walker2d-v1, (g) Delayed HalfCheetah-v1, (h) Delayed Ant-v1)
+
+
+1. Original Algorithm The original algorithm (TD3) with a single learner
+2. Distributed RL (DRL) $N$ actors obtain samples from $N$ environment copies. A common policy and the experience replay buffer are shared by all $N$ actors.
+3. Experience-Sharing-Only (ESO) $N$ learners interact with $N$ environment copies and update their own policies using experiences drawn from the shared experience replay buffer.
+4. Resetting (Re) Every $M'$ time steps, the best policy is determined and all policies are reinitialized to it, i.e., the best learner's policy parameter is copied to all other learners. The rest of the procedure is the same as in Experience-Sharing-Only.
+5. P3S Every $M$ time steps, the best policy is determined, and its information is used in a soft manner through the augmented loss function.
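The contrast between the Resetting and P3S items above can be sketched on toy parameter vectors (all names here are hypothetical; the actual algorithms update neural-network parameters by stochastic gradient descent, and the squared distance below stands in for the KL term of the augmented loss): resetting overwrites every learner with the best parameter, while P3S keeps each learner's parameter and only adds a pull toward the best one.

```python
def reset_to_best(params, best_idx):
    """Resetting (Re): every learner's parameter is overwritten by the best."""
    best = params[best_idx]
    return [list(best) for _ in params]

def p3s_augmented_grad_step(params, best_idx, base_grads, beta, lr=0.1):
    """P3S-style soft fusion: each non-best learner follows its own loss
    gradient plus beta times the gradient of a squared distance to the
    best parameter (a stand-in for the KL term of the augmented loss)."""
    best = params[best_idx]
    new_params = []
    for i, (p, g) in enumerate(zip(params, base_grads)):
        if i == best_idx:
            step = g  # the best learner uses the plain loss gradient
        else:
            step = [gj + beta * 2 * (pj - bj) for gj, pj, bj in zip(g, p, best)]
        new_params.append([pj - lr * sj for pj, sj in zip(p, step)])
    return new_params

params = [[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]]
zero_grads = [[0.0, 0.0]] * 3
print(reset_to_best(params, best_idx=1))  # all learners collapse to [1.0, 1.0]
print(p3s_augmented_grad_step(params, 1, zero_grads, beta=0.5))
# learners move toward [1.0, 1.0] but remain distinct, preserving spread
```

The key observable difference: after a reset all $N$ policies coincide, whereas after a P3S step the learners stay spread out around the best policy.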
+
+Note that the resetting method also exploits the best policy information from the $N$ learners. The main difference between P3S and the resetting method is the way the best learner's policy parameter is used. The resetting method initializes all policies with the best policy parameter every $M'$ time steps, as in PBT (Jaderberg et al. (2017)), whereas the P3S algorithm uses the best learner's policy parameter, determined every $M$ time steps, to construct an augmented loss function. For fair comparison, $M$ and $M'$ are determined independently and optimally for P3S and Resetting, respectively, since the optimal period can differ between the two methods. We tuned $M'$ over $\{2000, 5000, 10000\}$ (MuJoCo environments) and $\{10000, 20000, 50000\}$ (delayed MuJoCo environments) for Re-TD3, whereas $M = 250$ was used for P3S-TD3. The specific parameters used for Re-TD3 are given in Appendix I. Since all $N$ policies collapse to one point at the beginning of each period in the resetting method, we expect that a larger period is required for Resetting to have sufficiently spread policies at the end of each best-policy selection period. We compared the performance of the aforementioned parallel learning methods combined with TD3 on two classes of tasks: MuJoCo environments and delayed (sparse-reward) MuJoCo environments.
+
+Performance on MuJoCo environments The upper part of Fig. 4 shows the learning curves of the considered parallel learning methods combined with TD3 for the four tasks (Hopper-v1, Walker2d-v1, HalfCheetah-v1, and Ant-v1). It is seen that P3S-TD3 outperforms the other parallel methods (DRL-TD3, ESO-TD3, and Re-TD3), except that ESO-TD3 or Re-TD3 slightly outperforms P3S-TD3 in Hopper-v1 and Walker2d-v1. In Hopper-v1 and Walker2d-v1, ESO-TD3 has better final (steady-state) performance than all the other parallel methods. Note that ESO-TD3 obtains the most diverse experiences, since the $N$ learners share the experience replay buffer but there is no interaction among the $N$ learners until the end of training. So, it seems that this experience diversity is beneficial in Hopper-v1 and Walker2d-v1.
+
+
+Figure 5: Ablation study of P3S-TD3 on Delayed Ant-v1: (a) performance and $\beta$ (1 seed) with $d_{min} = 0.05$, (b) distance measures with $d_{min} = 0.05$, and (c) comparison with different $d_{min} = 0.02, 0.05$
+
+
+Performance on Delayed MuJoCo environments Sparse-reward environments especially require more search to obtain a good policy. To see the performance of P3S in sparse-reward environments, we performed experiments on delayed MuJoCo environments, which are reward-sparsified versions of the MuJoCo environments used in Zheng et al. (2018). A delayed MuJoCo environment gives non-zero rewards only periodically with frequency $f_{reward}$ or at the end of an episode. That is, the environment accumulates the rewards given by the corresponding MuJoCo environment while providing zero reward to the agent, and releases the accumulated reward to the agent every $f_{reward}$ time steps or at the episode's end. We evaluated the performance on the four delayed environments with $f_{reward} = 20$ : Delayed Hopper-v1, Delayed Walker2d-v1, Delayed HalfCheetah-v1, and Delayed Ant-v1.
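A delayed-reward environment of this kind can be sketched as a wrapper (a simplified stand-in interface, not the actual Gym wrapper used in the experiments): per-step rewards are accumulated and released only every `f_reward` steps or at episode end.

```python
class DelayedRewardWrapper:
    """Delayed (sparse) reward environment as described in the text:
    rewards from the underlying environment are accumulated and released
    only every f_reward steps or at episode end. The env interface
    (reset/step returning (obs, reward, done)) is a simplified stand-in."""

    def __init__(self, env, f_reward=20):
        self.env = env
        self.f_reward = f_reward
        self._acc = 0.0
        self._t = 0

    def reset(self):
        self._acc, self._t = 0.0, 0
        return self.env.reset()

    def step(self, action):
        obs, reward, done = self.env.step(action)
        self._acc += reward
        self._t += 1
        if done or self._t % self.f_reward == 0:
            out, self._acc = self._acc, 0.0  # release accumulated reward
        else:
            out = 0.0                        # zero reward between releases
        return obs, out, done

class ConstantRewardEnv:
    """Toy stand-in environment: reward 1 per step, episode length 25."""
    def __init__(self, horizon=25):
        self.horizon = horizon
        self.t = 0
    def reset(self):
        self.t = 0
        return 0
    def step(self, action):
        self.t += 1
        return 0, 1.0, self.t >= self.horizon

env = DelayedRewardWrapper(ConstantRewardEnv(), f_reward=20)
env.reset()
rewards = [env.step(None)[1] for _ in range(25)]
print(rewards[19], rewards[24], sum(rewards))  # 20.0 5.0 25.0
```

With $f_{reward} = 20$ and a 25-step episode, the agent sees zeros everywhere except a reward of 20 at step 20 and the remaining 5 at the terminal step; the total return is unchanged, only its timing is sparsified.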
+
+The lower part of Fig. 4 shows the learning curves of the different parallel learning methods for the four delayed MuJoCo environments. It is seen that P3S outperforms all other considered parallel learning schemes on all environments except on delayed Hopper-v1. It seems that the enforced wide-area policy search with the soft-fusion approach in P3S is beneficial to improve performance in sparse reward environments.
+
+Benefits of P3S Delayed Ant-v1 is a sparse-reward environment in which P3S shows significant improvement over the other parallel schemes. As shown in Fig. 4h, the performance of TD3 drops below zero initially and converges to zero as time goes on; similar behavior is seen for the other parallel methods except P3S. This is because in Delayed Ant-v1, with zero rewards padded between actual rewards, initial random actions do not generate significant forward speed, so the agent receives no positive reward but receives negative actual rewards due to the control cost. Once its performance falls below zero, a learner starts learning to do nothing so as to reach zero reward (no positive reward and no negative reward, since there is no control cost). Learning beyond this point seems difficult without any direction information for the parameter update; this explains the behavior of the other algorithms in Fig. 4h. However, it seems that P3S escapes from this local optimum by following the best policy. This is evident in Fig. 5a, which shows that after a few time steps, $\beta$ is increased to follow the best policy more. Note that at the early stage of learning, the performance difference among the learners is large, as seen in the large $\widehat{D}_{spread}$ values in Fig. 5b. As time elapses, all learners continue learning, the performance improves, and the spread among the learners' policies shrinks. However, the spread among the learners' policies is kept at a certain level by $d_{min}$ for wide policy search, as seen in Fig. 5b. Fig. 5c shows the performance of P3S with $d_{min} = 0.05$ and $0.02$ : a wide-area policy search is beneficial as compared to a narrow-area one. However, setting too large a value for $d_{min}$ may be detrimental due to too large a statistical discrepancy among samples from different learners' policies.
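The adaptive behavior of $\beta$ seen in Fig. 5a can be illustrated with a rule of the following flavor (a hypothetical PPO-penalty-style schedule for illustration only; `d_target` and `factor` are invented names, and this is not claimed to be the paper's exact update): increase $\beta$ when a learner is far from the best policy, and relax it when the learner is closer than $d_{min}$, so that some spread among policies is preserved.

```python
def adapt_beta(beta, dist_to_best, d_target, d_min, factor=1.5):
    """Hypothetical PPO-penalty-style adaptation (NOT the paper's exact
    schedule): pull harder toward the best policy when far from it, and
    relax the pull below d_min to preserve spread for wide policy search."""
    if dist_to_best > factor * d_target:
        return beta * 2.0  # follow the best policy more strongly
    if dist_to_best < max(d_min, d_target / factor):
        return beta / 2.0  # relax the pull, keep policies spread out
    return beta

print(adapt_beta(1.0, dist_to_best=0.5, d_target=0.1, d_min=0.05))   # -> 2.0
print(adapt_beta(1.0, dist_to_best=0.01, d_target=0.1, d_min=0.05))  # -> 0.5
```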
+
+# 5 CONCLUSION
+
+In this paper, we have proposed a new population-guided parallel learning scheme, P3S, to enhance the performance of off-policy RL. In the proposed P3S scheme, multiple identical learners with their own value functions and policies, sharing a common experience replay buffer, search for a good policy with the guidance of the best policy information from the previous search interval. The best policy parameter of the previous search interval is fused in a soft manner by constructing an augmented loss function for the policy update, which enlarges the overall search region covered by the multiple learners. The guidance by the previous best policy and the enlarged search region enable faster and better search in the policy space, and monotone improvement of the expected cumulative return by P3S is proved theoretically. The P3S-TD3 algorithm, constructed by applying the proposed P3S scheme to TD3, outperforms most current state-of-the-art RL algorithms. Furthermore, the performance gain of P3S over other parallel learning schemes is significant on harder environments, especially sparse-reward environments, owing to its wide-range search in the policy space.
+
+# ACKNOWLEDGMENTS
+
+This work was supported in part by the ICT R&D program of MSIP/IITP (2016-0-00563, Research on Adaptive Machine Learning Technology Development for Intelligent Autonomous Digital Companion) and in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT) (NRF2017R1E1A1A03070788).
+
+# REFERENCES
+
+Mohammad Babaeizadeh, Iuri Frosio, Stephen Tyree, Jason Clemons, and Jan Kautz. Reinforcement learning through asynchronous advantage actor-critic on a GPU. In International Conference on Learning Representations, Apr 2017.
+Gabriel Barth-Maron, Matthew W. Hoffman, David Budden, Will Dabney, Dan Horgan, Dhruva TB, Alistair Muldal, Nicolas Heess, and Timothy Lillicrap. Distributed distributional deterministic policy gradients. In International Conference on Learning Representations, Apr 2018.
+Yuri Burda, Harrison Edwards, Amos Storkey, and Oleg Klimov. Exploration by random network distillation. In International Conference on Learning Representations, May 2019.
+Krzysztof Choromanski, Mark Rowland, Vikas Sindhwani, Richard E. Turner, and Adrian Weller. Structured evolution with compact architectures for scalable policy optimization. In Proceedings of the 35th International Conference on Machine Learning, pp. 970-978, Jul 2018.
+Alfredo V. Clemente, Humberto N. Castejón, and Arjun Chandra. Efficient parallel methods for deep reinforcement learning. arXiv preprint arXiv:1705.04862, 2017.
+Edoardo Conti, Vashisht Madhavan, Felipe Petroski Such, Joel Lehman, Kenneth O. Stanley, and Jeff Clune. Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents. In Advances in Neural Information Processing Systems, pp. 5027-5038, Dec 2018.
+Maria Dimakopoulou and Benjamin Van Roy. Coordinated exploration in concurrent reinforcement learning. In Proceedings of the 35th International Conference on Machine Learning, volume 80, pp. 1271-1279, Jul 2018.
+Maria Dimakopoulou, Ian Osband, and Benjamin Van Roy. Scalable coordinated exploration in concurrent reinforcement learning. In Advances in Neural Information Processing Systems, pp. 4223-4232, Dec 2018.
+Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymyr Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. IMPALA: Scalable distributed deep-RL with importance weighted actor-learner architectures. In Proceedings of the 35th International Conference on Machine Learning, pp. 1407-1416, Jul 2018.
+Scott Fujimoto, Herke van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In Proceedings of the 35th International Conference on Machine Learning, pp. 1587-1596, Jul 2018.
+Tanmay Gangwani, Qiang Liu, and Jian Peng. Learning self-imitating diverse policies. In International Conference on Learning Representations, May 2019.
+Dibya Ghosh, Avi Singh, Aravind Rajeswaran, Vikash Kumar, and Sergey Levine. Divide-and-conquer reinforcement learning. In International Conference on Learning Representations, Apr 2018.
+
+Zhaohan Guo and Emma Brunskill. Concurrent PAC RL. In AAAI Conference on Artificial Intelligence, pp. 2624-2630, Jan 2015.
+Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. In Proceedings of the 34th International Conference on Machine Learning, pp. 1352-1361, 2017.
+Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In Proceedings of the 35th International Conference on Machine Learning, pp. 1861-1870, Jul 2018.
+Dan Horgan, John Quan, David Budden, Gabriel Barth-Maron, Matteo Hessel, Hado Van Hasselt, and David Silver. Distributed prioritized experience replay. In International Conference on Learning Representations, Apr 2018.
+Max Jaderberg, Valentin Dalibard, Simon Osindero, Wojciech M. Czarnecki, Jeff Donahue, Ali Razavi, Oriol Vinyals, Tim Green, Iain Dunning, Karen Simonyan, Chrisantha Fernando, and Koray Kavukcuoglu. Population based training of neural networks. arXiv preprint arXiv:1711.09846, 2017.
+Max Jaderberg, Wojciech M. Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castaneda, Charles Beattie, Neil C. Rabinowitz, Ari S. Morcos, Avraham Ruderman, Nicolas Sonnerat, Tim Green, Louise Deason, Joel Z. Leibo, David Silver, David Hassabis, Koray Kavukcuoglu, and Thore Graepel. Human-level performance in first-person multiplayer games with population-based deep reinforcement learning. arXiv preprint arXiv:1807.01281, 2018.
+Shauharda Khadka and Kagan Tumer. Evolution-guided policy gradient in reinforcement learning. In Advances in Neural Information Processing Systems, pp. 1196-1208, Dec 2018.
+Sergey Levine and Vladlen Koltun. Guided policy search. In Proceedings of the 30th International Conference on Machine Learning, pp. 1-9, 2013.
+Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334-1373, 2016.
+Jacky Liang, Viktor Makoviychuk, Ankur Handa, Nuttapong Chentanez, Miles Macklin, and Dieter Fox. GPU-accelerated robotic simulation for distributed reinforcement learning. In Conference on Robot Learning, pp. 270-282, 2018.
+Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
+Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning, pp. 1928-1937, 2016.
+Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, Shane Legg, Volodymyr Mnih, Koray Kavukcuoglu, and David Silver. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296, 2015.
+Junhyuk Oh, Yijie Guo, Satinder Singh, and Honglak Lee. Self-imitation learning. In Proceedings of the 35th International Conference on Machine Learning, pp. 3878-3887, 2018.
+Alois Pourchot and Olivier Sigaud. CEM-RL: Combining evolutionary and gradient-based methods for policy search. In International Conference on Learning Representations, 2019.
+Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017.
+
+John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning, pp. 1889-1897, 2015.
+John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
+David Silver, Leonard Newnham, David Barker, Suzanne Weller, and Jason McFall. Concurrent reinforcement learning from customer interactions. In International Conference on Machine Learning, volume 28, pp. 924-932, Jun 2013.
+Yee Whye Teh, Victor Bapst, Wojciech M. Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, and Razvan Pascanu. Distral: Robust multitask reinforcement learning. In Advances in Neural Information Processing Systems, pp. 4499-4509, Dec 2017.
+Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pp. 5026-5033. IEEE, Oct 2012.
+David A. White and Donald A. Sofge. The role of exploration in learning control. Handbook of Intelligent Control: Neural, Fuzzy and Adaptive Approaches, pp. 1-27, 1992.
+Yuhuai Wu, Elman Mansimov, Roger Grosse, Shun Liao, and Jimmy Ba. Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation. In Advances in Neural Information Processing Systems, pp. 5279-5288, Dec 2017.
+Zeyu Zheng, Junhyuk Oh, and Satinder Singh. On learning intrinsic rewards for policy gradient methods. In Advances in Neural Information Processing Systems, pp. 4649-4659, Dec 2018.
+
+# APPENDIX A. PROOF OF THEOREM 1
+
+In this section, we prove Theorem 1. Let $\pi_{\phi^i}^{old}$ be the policy of the $i$ -th learner at the end of the previous update period and let $\pi_{\phi^b}$ be the best policy among all policies $\pi_{\phi^i}^{old}, i = 1,\dots ,N$ . Now, consider any learner $i$ who is not the best in the previous update period. Let the policy of learner $i$ in the current update period be denoted by $\pi_{\phi^i}$ , and let the policy loss function of the base algorithm be denoted as $L(\pi_{\phi^i})$ , given in the form of
+
+$$
+L\left(\pi_{\phi^{i}}\right) = \mathbb{E}_{s \sim \mathcal{D},\, a \sim \pi_{\phi^{i}}(\cdot | s)}\left[ -Q^{\pi_{\phi^{i}}^{old}}(s, a) \right]. \tag{7}
+$$
+
+The reason behind this choice is that most actor-critic methods update the value (or Q-) function and the policy iteratively. That is, for given $\pi_{\phi^i}^{old}$ , the Q-function is first updated so as to approximate $Q^{\pi_{\phi^i}^{old}}$ . Then, with this approximation of $Q^{\pi_{\phi^i}^{old}}$ , the policy is updated to yield an updated policy $\pi_{\phi^i}^{new}$ , and this procedure is repeated iteratively. Such a loss function is used in many RL algorithms, such as SAC and TD3 (Haarnoja et al. (2018); Fujimoto et al. (2018)). SAC updates its policy by minimizing $\mathbb{E}_{s\sim \mathcal{D},a\sim \pi^{\prime}(\cdot |s)}[-Q^{\pi_{old}}(s,a) + \log \pi^{\prime}(a|s)]$ over $\pi^\prime$ , and TD3 updates its policy by minimizing $\mathbb{E}_{s\sim \mathcal{D},a = \pi '(s)}[-Q^{\pi_{old}}(s,a)]$ .
+
+With the loss function eq. (7) and the KL divergence $\mathrm{KL}(\pi ||\pi^{\prime})$ as the distance measure $D(\pi ,\pi^{\prime})$ between two policies $\pi$ and $\pi^\prime$ as stated in the main paper, the augmented loss function for non-best learner $i$ at the current update period is expressed as
+
+$$
+\begin{aligned} \widetilde{L}\left(\pi_{\phi^{i}}\right) &= \mathbb{E}_{s \sim \mathcal{D},\, a \sim \pi_{\phi^{i}}(\cdot | s)}\left[ -Q^{\pi_{\phi^{i}}^{old}}(s, a) \right] + \beta\, \mathbb{E}_{s \sim \mathcal{D}}\left[ \mathrm{KL}\left(\pi_{\phi^{i}}(\cdot | s) \,\|\, \pi_{\phi^{b}}(\cdot | s)\right) \right] \qquad (8) \\ &= \mathbb{E}_{s \sim \mathcal{D}}\left[ \mathbb{E}_{a \sim \pi_{\phi^{i}}(\cdot | s)}\left[ -Q^{\pi_{\phi^{i}}^{old}}(s, a) + \beta \log \frac{\pi_{\phi^{i}}(a | s)}{\pi_{\phi^{b}}(a | s)} \right] \right] \qquad (9) \end{aligned}
+$$
+
+Let $\pi_{\phi^i}^{new}$ be a solution that minimizes the augmented loss function eq. (9).
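For intuition (a standard fact about KL-regularized objectives, stated here for illustration and not used as a step in the proof below): for a single state with discrete actions, the minimizer of the inner objective in eq. (9) has the closed form $\pi^*(a) \propto \pi_{\phi^b}(a|s)\exp(Q(s,a)/\beta)$, i.e., the best policy reweighted by exponentiated Q-values. A numeric sketch:

```python
import math
import random

def augmented_loss(pi, q, pi_b, beta):
    """E_{a~pi}[-Q(a) + beta*log(pi(a)/pi_b(a))] for one state, discrete actions."""
    return sum(p * (-qa + beta * math.log(p / pb))
               for p, qa, pb in zip(pi, q, pi_b) if p > 0)

def kl_regularized_minimizer(q, pi_b, beta):
    """Closed form: pi*(a) proportional to pi_b(a) * exp(Q(a)/beta)."""
    w = [pb * math.exp(qa / beta) for qa, pb in zip(q, pi_b)]
    z = sum(w)
    return [wi / z for wi in w]

q, pi_b, beta = [1.0, 2.0, 0.5], [0.2, 0.5, 0.3], 1.0  # toy Q-values and best policy
pi_star = kl_regularized_minimizer(q, pi_b, beta)
best = augmented_loss(pi_star, q, pi_b, beta)

# pi_star beats both pi_b itself and random perturbations of pi_star.
assert best <= augmented_loss(pi_b, q, pi_b, beta)
random.seed(0)
for _ in range(100):
    noisy = [max(p + random.uniform(-0.05, 0.05), 1e-6) for p in pi_star]
    s = sum(noisy)
    assert augmented_loss([p / s for p in noisy], q, pi_b, beta) >= best - 1e-9
print("closed-form KL-regularized minimizer verified")
```

This holds because the objective equals $\beta\,\mathrm{KL}(\pi \,\|\, \pi^*)$ plus a constant, which is minimized exactly at $\pi = \pi^*$.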
+
+Assumption 1. For all $s$ ,
+
+$$
+\mathbb {E} _ {a \sim \pi_ {\phi^ {b}} (\cdot | s)} \left[ Q ^ {\pi_ {\phi^ {i}} ^ {o l d}} (s, a) \right] \geq \mathbb {E} _ {a \sim \pi_ {\phi^ {i}} ^ {o l d} (\cdot | s)} \left[ Q ^ {\pi_ {\phi^ {i}} ^ {o l d}} (s, a) \right]. \tag {10}
+$$
+
+Assumption 2. For some $\rho, d > 0$ ,
+
+$$
+K L \left(\pi_ {\phi^ {i}} ^ {n e w} (\cdot | s) | | \pi_ {\phi^ {b}} (\cdot | s)\right) \geq \max \left\{\rho \max _ {s ^ {\prime}} K L \left(\pi_ {\phi^ {i}} ^ {n e w} (\cdot | s ^ {\prime}) | | \pi_ {\phi^ {i}} ^ {o l d} (\cdot | s ^ {\prime})\right), d \right\}, \forall s. \tag {11}
+$$
+
+For notational simplicity, we use the following shorthand from here on:
+
+- $\pi^i$ for $\pi_{\phi^i}$
+- $\pi_{old}^{i}$ for $\pi_{\phi^i}^{old}$
+- $\pi_{new}^{i}$ for $\pi_{\phi^i}^{new}$
+- $\pi^b$ for $\pi_{\phi^b}$
+- $\mathrm{KL}_{max}\left(\pi_{new}^{i} \,\|\, \pi_{old}^{i}\right)$ for $\max_{s'} \mathrm{KL}\left(\pi_{\phi^i}^{new}(\cdot | s') \,\|\, \pi_{\phi^i}^{old}(\cdot | s')\right)$
+
+# A.1. A PRELIMINARY STEP
+
+Lemma 1. Let $\pi_{new}^{i}$ be a minimizer of the augmented loss function eq. (9). Then, with Assumption 1, we have the following:
+
+$$
+\mathbb {E} _ {a \sim \pi_ {n e w} ^ {i} (\cdot | s)} \left[ Q ^ {\pi_ {o l d} ^ {i}} (s, a) \right] \geq \mathbb {E} _ {a \sim \pi_ {o l d} ^ {i} (\cdot | s)} \left[ Q ^ {\pi_ {o l d} ^ {i}} (s, a) \right] \tag {12}
+$$
+
+for all $s$ .
+
+Proof. For all $s$ ,
+
+$$
+\begin{array}{rl} \mathbb{E}_{a \sim \pi_{old}^{i}(\cdot | s)}\left[ -Q^{\pi_{old}^{i}}(s, a) \right] & \underset{(a)}{\geq} \mathbb{E}_{a \sim \pi^{b}(\cdot | s)}\left[ -Q^{\pi_{old}^{i}}(s, a) \right] \qquad (13) \\ & = \mathbb{E}_{a \sim \pi^{b}(\cdot | s)}\left[ -Q^{\pi_{old}^{i}}(s, a) + \beta \log \frac{\pi^{b}(a | s)}{\pi^{b}(a | s)} \right] \qquad (14) \\ & \underset{(b)}{\geq} \mathbb{E}_{a \sim \pi_{new}^{i}(\cdot | s)}\left[ -Q^{\pi_{old}^{i}}(s, a) + \beta \log \frac{\pi_{new}^{i}(a | s)}{\pi^{b}(a | s)} \right] \qquad (15) \\ & \underset{(c)}{\geq} \mathbb{E}_{a \sim \pi_{new}^{i}(\cdot | s)}\left[ -Q^{\pi_{old}^{i}}(s, a) \right], \qquad (16) \end{array}
+$$
+
+where Step (a) holds by Assumption 1, (b) holds by the definition of $\pi_{new}^{i}$ , and (c) holds since KL divergence is always non-negative.
+
+With Lemma 1, we prove the following preliminary result before Theorem 1:
+
+Proposition 1. With Assumption 1, the following inequality holds for all $s$ and $a$ :
+
+$$
+Q ^ {\pi_ {n e w} ^ {i}} (s, a) \geq Q ^ {\pi_ {o l d} ^ {i}} (s, a). \tag {17}
+$$
+
+Proof of Proposition 1. For arbitrary $s_t$ and $a_t$ ,
+
+$$
+\begin{array}{rl} Q^{\pi_{old}^{i}}\left(s_{t}, a_{t}\right) & = r\left(s_{t}, a_{t}\right) + \gamma \mathbb{E}_{s_{t+1} \sim p(\cdot | s_{t}, a_{t})}\left[ \mathbb{E}_{a_{t+1} \sim \pi_{old}^{i}}\left[ Q^{\pi_{old}^{i}}\left(s_{t+1}, a_{t+1}\right) \right] \right] \qquad (18) \\ & \underset{(a)}{\leq} r\left(s_{t}, a_{t}\right) + \gamma \mathbb{E}_{s_{t+1} \sim p(\cdot | s_{t}, a_{t})}\left[ \mathbb{E}_{a_{t+1} \sim \pi_{new}^{i}}\left[ Q^{\pi_{old}^{i}}\left(s_{t+1}, a_{t+1}\right) \right] \right] \qquad (19) \\ & = \mathbb{E}_{s_{t+1}:s_{t+2} \sim \pi_{new}^{i}}\left[ r\left(s_{t}, a_{t}\right) + \gamma r\left(s_{t+1}, a_{t+1}\right) + \gamma^{2} \mathbb{E}_{a_{t+2} \sim \pi_{old}^{i}}\left[ Q^{\pi_{old}^{i}}\left(s_{t+2}, a_{t+2}\right) \right] \right] \qquad (20) \\ & \underset{(b)}{\leq} \mathbb{E}_{s_{t+1}:s_{t+2} \sim \pi_{new}^{i}}\left[ r\left(s_{t}, a_{t}\right) + \gamma r\left(s_{t+1}, a_{t+1}\right) + \gamma^{2} \mathbb{E}_{a_{t+2} \sim \pi_{new}^{i}}\left[ Q^{\pi_{old}^{i}}\left(s_{t+2}, a_{t+2}\right) \right] \right] \qquad (21) \\ & \leq \cdots \qquad (22) \\ & \leq \mathbb{E}_{s_{t+1}:s_{\infty} \sim \pi_{new}^{i}}\left[ \sum_{k=t}^{\infty} \gamma^{k-t} r\left(s_{k}, a_{k}\right) \right] \qquad (23) \\ & = Q^{\pi_{new}^{i}}\left(s_{t}, a_{t}\right), \qquad (24) \end{array}
+$$
+
+where $p(\cdot | s_t, a_t)$ in eq. (18) is the environment transition probability, and $s_{t+1} : s_{t+2} \sim \pi_{new}^i$ in eq. (20) means that the trajectory from $s_{t+1}$ to $s_{t+2}$ is generated by $\pi_{new}^i$ together with the environment transition probability $p(\cdot | s_t, a_t)$ . (Since the use of $p(\cdot | s_t, a_t)$ is obvious, we omitted $p(\cdot | s_t, a_t)$ for notational simplicity.) Steps (a) and (b) hold due to Lemma 1.
+
+# A.2. PROOF OF THEOREM 1
+
+Proposition 1 states that for a non-best learner $i$ , the updated policy $\pi_{new}^{i}$ obtained with the augmented loss function performs at least as well as its own previous policy $\pi_{old}^{i}$ , whereas Theorem 1 states the stronger result that $\pi_{new}^{i}$ performs at least as well as even the previous best policy $\pi^{b}$ .
+
+To prove Theorem 1, we need a few more lemmas. We take Definition 1 and Lemma 2 directly from Schulman et al. (2015).
+
+Definition 1 (From Schulman et al. (2015)). Two policies $\pi$ and $\pi'$ are $\alpha$ -coupled if $\Pr(a \neq a') \leq \alpha$ for $(a, a') \sim (\pi(\cdot|s), \pi'(\cdot|s))$ , for all $s$ .
+
+Lemma 2 (From Schulman et al. (2015)). Given $\alpha$ -coupled policies $\pi$ and $\pi'$ , for all $s$ ,
+
+$$
+\left| \mathbb {E} _ {a \sim \pi^ {\prime}} \left[ A ^ {\pi} (s, a) \right] \right| \leq 2 \alpha \max _ {s, a} | A ^ {\pi} (s, a) |, \tag {25}
+$$
+
+where $A^{\pi}(s,a)$ is the advantage function.
+
+Proof. (From Schulman et al. (2015))
+
+$$
+\begin{array}{rl} \left| \mathbb{E}_{a' \sim \pi'}\left[ A^{\pi}(s, a') \right] \right| & \underset{(a)}{=} \left| \mathbb{E}_{a' \sim \pi'}\left[ A^{\pi}(s, a') \right] - \mathbb{E}_{a \sim \pi}\left[ A^{\pi}(s, a) \right] \right| \qquad (26) \\ & = \left| \mathbb{E}_{(a, a') \sim (\pi, \pi')}\left[ A^{\pi}(s, a') - A^{\pi}(s, a) \right] \right| \qquad (27) \\ & = \big| \Pr(a = a')\, \mathbb{E}_{(a, a') \sim (\pi, \pi') | a = a'}\left[ A^{\pi}(s, a') - A^{\pi}(s, a) \right] \\ & \qquad + \Pr(a \neq a')\, \mathbb{E}_{(a, a') \sim (\pi, \pi') | a \neq a'}\left[ A^{\pi}(s, a') - A^{\pi}(s, a) \right] \big| \qquad (28) \\ & = \Pr(a \neq a') \left| \mathbb{E}_{(a, a') \sim (\pi, \pi') | a \neq a'}\left[ A^{\pi}(s, a') - A^{\pi}(s, a) \right] \right| \qquad (29) \\ & \leq 2\alpha \max_{s, a} |A^{\pi}(s, a)|, \qquad (30) \end{array}
+$$
+
+where Step (a) holds since $\mathbb{E}_{a\sim \pi}[A^{\pi}(s,a)] = 0$ for all $s$ by the property of an advantage function.
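The identity $\mathbb{E}_{a\sim\pi}[A^{\pi}(s,a)] = 0$ used in Step (a) follows from $A^{\pi}(s,a) = Q^{\pi}(s,a) - V^{\pi}(s)$ and $V^{\pi}(s) = \mathbb{E}_{a\sim\pi}[Q^{\pi}(s,a)]$; a quick numeric check on arbitrary toy values:

```python
# Arbitrary Q-values for one state and a policy pi(a|s) over 4 actions.
q = [2.0, -1.0, 0.5, 3.0]
pi = [0.1, 0.2, 0.3, 0.4]
v = sum(p * qa for p, qa in zip(pi, q))             # V(s) = E_{a~pi}[Q(s, a)]
adv = [qa - v for qa in q]                          # A(s, a) = Q(s, a) - V(s)
expected_adv = sum(p * a for p, a in zip(pi, adv))
print(abs(expected_adv) < 1e-12)                    # True: E_{a~pi}[A(s, a)] = 0
```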
+
+By modifying the result on the state value function in Schulman et al. (2015), we have the following lemma on the Q-function:
+
+Lemma 3. Given two policies $\pi$ and $\pi'$ , the following equality holds for arbitrary $s_0$ and $a_0$ :
+
+$$
+Q ^ {\pi^ {\prime}} \left(s _ {0}, a _ {0}\right) = Q ^ {\pi} \left(s _ {0}, a _ {0}\right) + \gamma \mathbb {E} _ {\tau \sim \pi^ {\prime}} \left[ \sum_ {t = 1} ^ {\infty} \gamma^ {t - 1} A ^ {\pi} \left(s _ {t}, a _ {t}\right) \right], \tag {31}
+$$
+
+where $\mathbb{E}_{\tau \sim \pi'}$ denotes expectation over a trajectory $\tau$ that starts from a state $s_1$ drawn from the environment transition probability $p(\cdot | s_0, a_0)$ and is then generated by $\pi'$ .
+
+Proof. Note that
+
+$$
+Q ^ {\pi} \left(s _ {0}, a _ {0}\right) = r _ {0} + \gamma \mathbb {E} _ {s _ {1} \sim p \left(\cdot \mid s _ {0}, a _ {0}\right)} \left[ V ^ {\pi} \left(s _ {1}\right) \right] \tag {32}
+$$
+
+$$
+Q ^ {\pi^ {\prime}} \left(s _ {0}, a _ {0}\right) = r _ {0} + \gamma \mathbb {E} _ {s _ {1} \sim p (\cdot | s _ {0}, a _ {0})} \left[ V ^ {\pi^ {\prime}} \left(s _ {1}\right) \right] \tag {33}
+$$
+
+Hence, it is sufficient to show the following equality:
+
+$$
+\mathbb {E} _ {\tau \sim \pi^ {\prime}} \left[ \sum_ {t = 1} ^ {\infty} \gamma^ {t - 1} A ^ {\pi} \left(s _ {t}, a _ {t}\right) \right] = \mathbb {E} _ {s _ {1} \sim p \left(\cdot \mid s _ {0}, a _ {0}\right)} \left[ V ^ {\pi^ {\prime}} \left(s _ {1}\right) \right] - \mathbb {E} _ {s _ {1} \sim p \left(\cdot \mid s _ {0}, a _ {0}\right)} \left[ V ^ {\pi} \left(s _ {1}\right) \right] \tag {34}
+$$
+
+Note that
+
+$$
+A ^ {\pi} \left(s _ {t}, a _ {t}\right) = \mathbb {E} _ {s _ {t + 1} \sim p (\cdot | s _ {t}, a _ {t})} \left[ r _ {t} + \gamma V ^ {\pi} \left(s _ {t + 1}\right) - V ^ {\pi} \left(s _ {t}\right) \right] \tag {35}
+$$
+
+Then, substituting eq. (35) into the LHS of eq. (34), we have
+
+$$
+\begin{array}{l} \mathbb {E} _ {\tau \sim \pi^ {\prime}} \left[ \sum_ {t = 1} ^ {\infty} \gamma^ {t - 1} A ^ {\pi} \left(s _ {t}, a _ {t}\right) \right] = \mathbb {E} _ {\tau \sim \pi^ {\prime}} \left[ \sum_ {t = 1} ^ {\infty} \gamma^ {t - 1} \left(r _ {t} + \gamma V ^ {\pi} \left(s _ {t + 1}\right) - V ^ {\pi} \left(s _ {t}\right)\right) \right] (36) \\ = \mathbb {E} _ {\tau \sim \pi^ {\prime}} \left[ \sum_ {t = 1} ^ {\infty} \gamma^ {t - 1} r _ {t} \right] - \mathbb {E} _ {s _ {1} \sim p (\cdot | s _ {0}, a _ {0})} \left[ V ^ {\pi} (s _ {1}) \right] (37) \\ = \mathbb {E} _ {s _ {1} \sim p (\cdot | s _ {0}, a _ {0})} \left[ V ^ {\pi^ {\prime}} (s _ {1}) \right] - \mathbb {E} _ {s _ {1} \sim p (\cdot | s _ {0}, a _ {0})} \left[ V ^ {\pi} (s _ {1}) \right], (38) \\ \end{array}
+$$
+
+where eq. (37) is valid since $\mathbb{E}_{\tau \sim \pi'}\left[\sum_{t=1}^{\infty}\gamma^{t-1}\left(\gamma V^{\pi}(s_{t+1}) - V^{\pi}(s_t)\right)\right] = -\mathbb{E}_{s_1 \sim p(\cdot | s_0, a_0)}[V^{\pi}(s_1)]$ . Since the RHS of eq. (38) is the same as the RHS of eq. (34), the claim holds.
+
+Then, we can prove the following lemma regarding the difference between the Q-functions of two $\alpha$ -coupled policies $\pi$ and $\pi'$ :
+
+Lemma 4. Let $\pi$ and $\pi'$ be $\alpha$ -coupled policies. Then,
+
+$$
+\left| Q ^ {\pi} (s, a) - Q ^ {\pi^ {\prime}} (s, a) \right| \leq \frac {2 \epsilon \gamma}{1 - \gamma} \max \left\{C \alpha^ {2}, 1 / C \right\}, \tag {39}
+$$
+
+where $\epsilon = \max_{s,a}|A^{\pi}(s,a)|$ and $C > 0$ is an arbitrary constant.
+
+Proof. From Lemma 3, we have
+
+$$
+Q ^ {\pi^ {\prime}} \left(s _ {0}, a _ {0}\right) - Q ^ {\pi} \left(s _ {0}, a _ {0}\right) = \gamma \mathbb {E} _ {\tau \sim \pi^ {\prime}} \left[ \sum_ {t = 1} ^ {\infty} \gamma^ {t - 1} A ^ {\pi} \left(s _ {t}, a _ {t}\right) \right]. \tag {40}
+$$
+
+Then, from eq. (40) we have
+
+$$
+\begin{array}{l} \left| Q ^ {\pi^ {\prime}} \left(s _ {0}, a _ {0}\right) - Q ^ {\pi} \left(s _ {0}, a _ {0}\right) \right| = \left| \gamma \mathbb {E} _ {\tau \sim \pi^ {\prime}} \left[ \sum_ {t = 1} ^ {\infty} \gamma^ {t - 1} A ^ {\pi} \left(s _ {t}, a _ {t}\right) \right] \right| (41) \\ \leq \gamma \sum_ {t = 1} ^ {\infty} \gamma^ {t - 1} \left| \mathbb {E} _ {s _ {t}, a _ {t} \sim \pi^ {\prime}} \left[ A ^ {\pi} \left(s _ {t}, a _ {t}\right) \right] \right| (42) \\ \leq \gamma \sum_ {t = 1} ^ {\infty} \gamma^ {t - 1} 2 \alpha \max _ {s, a} | A ^ {\pi} (s, a) | (43) \\ = \frac {\epsilon \gamma}{1 - \gamma} 2 \alpha (44) \\ \leq \frac {\epsilon \gamma}{1 - \gamma} \left(C \alpha^ {2} + 1 / C\right) (45) \\ \leq \frac {\epsilon \gamma}{1 - \gamma} 2 \max \left\{C \alpha^ {2}, 1 / C \right\}, (46) \\ \end{array}
+$$
+
+where $\epsilon = \max_{s,a}|A^{\pi}(s,a)|$ and $C > 0$ . Here, eq. (43) is valid due to Lemma 2, eq. (45) is valid since $C\alpha^{2} + 1 / C - 2\alpha = C\left(\alpha -\frac{1}{C}\right)^{2}\geq 0$ , and eq. (46) is valid since the sum of two terms is less than or equal to two times the maximum of the two terms.
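The chain eq. (44)-(46) rests on two elementary bounds: $2\alpha \le C\alpha^2 + 1/C$ (i.e., $C(\alpha - 1/C)^2 \ge 0$ expanded) and $x + y \le 2\max\{x, y\}$. A brute-force numeric check over a grid (with $\alpha \in [0,1]$, since $\alpha$ here bounds a total variation divergence):

```python
# Verify 2*alpha <= C*alpha^2 + 1/C <= 2*max(C*alpha^2, 1/C)
# over a grid of C > 0 and alpha in [0, 1].
for C in [0.1, 0.5, 1.0, 2.0, 10.0]:
    for i in range(101):
        alpha = i / 100.0
        assert 2 * alpha <= C * alpha ** 2 + 1.0 / C + 1e-12
        assert C * alpha ** 2 + 1.0 / C <= 2 * max(C * alpha ** 2, 1.0 / C)
print("bounds in eq. (45) and eq. (46) verified on a grid")
```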
+
+Up to now, we have considered results valid for two given $\alpha$ -coupled policies $\pi$ and $\pi'$ . On the other hand, it is shown in Schulman et al. (2015) that arbitrary policies $\pi$ and $\pi'$ are $\alpha$ -coupled with $\alpha = \max_s D_{TV}(\pi(\cdot|s) \| \pi'(\cdot|s))$ , the maximum (over $s$ ) of the total variation divergence between $\pi(\cdot|s)$ and $\pi'(\cdot|s)$ .
+
+Applying the above facts, we have the following result regarding $\pi_{new}^{i}$ and $\pi_{old}^{i}$ :
+
+Lemma 5. For some constants $\rho, d > 0$ ,
+
+$$
+Q ^ {\pi_ {n e w} ^ {i}} (s, a) \leq Q ^ {\pi_ {o l d} ^ {i}} (s, a) + \beta \max \left\{\rho K L _ {m a x} \left(\pi_ {n e w} ^ {i} \| \pi_ {o l d} ^ {i}\right), d \right\} \tag {47}
+$$
+
+for all $s$ and $a$ , where $KL_{max}(\pi ||\pi') = \max_s KL(\pi (\cdot |s)||\pi'(\cdot |s))$ .
+
+Proof. For $\pi_{new}^{i}$ and $\pi_{old}^{i}$ , take $\alpha$ as the maximum of the total variation divergence between the two policies, i.e., $\alpha = \max_s D_{TV}(\pi_{new}^i (\cdot |s)||\pi_{old}^i (\cdot |s))$ , and denote this value by $\hat{\alpha}$ . Then, by the result of Schulman et al. (2015) mentioned above, $\pi_{new}^{i}$ and $\pi_{old}^{i}$ are $\hat{\alpha}$ -coupled. Since
+
+$$
+D _ {T V} \left(\pi_ {n e w} ^ {i} (\cdot | s) \mid \mid \pi_ {o l d} ^ {i} (\cdot | s)\right) ^ {2} \leq \mathrm {K L} \left(\pi_ {n e w} ^ {i} (\cdot | s) \mid \mid \pi_ {o l d} ^ {i} (\cdot | s)\right), \tag {48}
+$$
+
+by the relationship between the total variation divergence and the KL divergence, we have
+
+$$
+\hat {\alpha} ^ {2} \leq \max _ {s} \mathrm {K L} \left(\pi_ {n e w} ^ {i} (\cdot | s) \mid \mid \pi_ {o l d} ^ {i} (\cdot | s)\right). \tag {49}
+$$
+
+Now, substituting $\pi = \pi_{new}^{i}$ , $\pi^{\prime} = \pi_{old}^{i}$ and $\alpha = \hat{\alpha}$ into eq. (39) and applying eq. (49), we have
+
+$$
+\left| Q ^ {\pi_ {n e w} ^ {i}} (s, a) - Q ^ {\pi_ {o l d} ^ {i}} (s, a) \right| \leq \beta \max \left\{\rho \mathrm {K L} _ {m a x} \left(\pi_ {n e w} ^ {i} | | \pi_ {o l d} ^ {i}\right), d \right\} \tag {50}
+$$
+
+for some $\rho, d > 0$ . Here, the proper scaling due to the introduction of $\beta$ is absorbed into $\rho$ and $d$ ; that is, $\rho$ can be set as $\frac{2\epsilon\gamma C}{\beta(1 - \gamma)}$ and $d$ as $\frac{2\epsilon\gamma}{\beta(1 - \gamma)C}$ . Then, by Proposition 1, the LHS of eq. (50) becomes $\left|Q^{\pi_{new}^{i}}(s,a) - Q^{\pi_{old}^{i}}(s,a)\right| = Q^{\pi_{new}^{i}}(s,a) - Q^{\pi_{old}^{i}}(s,a)$ . From this fact and eq. (50), we obtain eq. (47). This concludes the proof.
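The total-variation/KL relationship in eq. (48) is a weakened form of Pinsker's inequality (which gives the tighter bound $D_{TV}^{2} \leq \frac{1}{2}\mathrm{KL}$). The snippet below is a numerical illustration for a few discrete distributions; it is not part of the proof.

```python
import math

def tv(p, q):
    # Total variation distance between two discrete distributions.
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

def kl(p, q):
    # KL divergence KL(p || q), assuming q has full support.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

pairs = [
    ([0.3, 0.7], [0.7, 0.3]),
    ([0.1, 0.9], [0.5, 0.5]),
    ([0.2, 0.3, 0.5], [0.4, 0.4, 0.2]),
]
for p, q in pairs:
    assert tv(p, q) ** 2 <= kl(p, q) + 1e-12        # eq. (48)
    assert tv(p, q) ** 2 <= 0.5 * kl(p, q) + 1e-12  # Pinsker's inequality
```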
+
+Proposition 2. With Assumption 1, we have
+
+$$
+\mathbb {E} _ {a \sim \pi_ {n e w} ^ {i} (\cdot | s)} \left[ Q ^ {\pi_ {n e w} ^ {i}} (s, a) \right] \geq \mathbb {E} _ {a \sim \pi^ {b} (\cdot | s)} \left[ Q ^ {\pi_ {n e w} ^ {i}} (s, a) \right] + \beta \Delta (s), \tag {51}
+$$
+
+where
+
+$$
+\Delta (s) = \left[ K L \left(\pi_ {n e w} ^ {i} (\cdot | s) | | \pi^ {b} (\cdot | s)\right) - \max \left\{\rho K L _ {\max } \left(\pi_ {n e w} ^ {i} | | \pi_ {o l d} ^ {i}\right), d \right\} \right] \tag {52}
+$$
+
+Proof.
+
+$$
+\mathbb {E} _ {a \sim \pi^ {b} (\cdot | s)} \left[ - Q ^ {\pi_ {n e w} ^ {i}} (s, a) \right] \tag {53}
+$$
+
+$$
+\geq_ {(a)} \mathbb {E} _ {a \sim \pi^ {b} (\cdot | s)} \left[ - Q ^ {\pi_ {o l d} ^ {i}} (s, a) \right] - \beta \max \left\{\rho \mathrm {K L} _ {\text {m a x}} \left(\pi_ {\text {n e w}} ^ {i} \mid \mid \pi_ {\text {o l d}} ^ {i}\right), d \right\} \tag {54}
+$$
+
+$$
+= \mathbb {E} _ {a \sim \pi^ {b} (\cdot | s)} \left[ - Q ^ {\pi_ {o l d} ^ {i}} (s, a) + \beta \log \frac {\pi^ {b} (a | s)}{\pi^ {b} (a | s)} \right] - \beta \max \left\{\rho \mathrm {K L} _ {\text {m a x}} \left(\pi_ {\text {n e w}} ^ {i} \mid \mid \pi_ {\text {o l d}} ^ {i}\right), d \right\} \tag {55}
+$$
+
+$$
+\geq_{(b)} \mathbb{E}_{a \sim \pi_{new}^{i}(\cdot | s)}\left[ - Q^{\pi_{old}^{i}}(s, a) + \beta \log \frac{\pi_{new}^{i}(a | s)}{\pi^{b}(a | s)} \right] - \beta \max \left\{ \rho \mathrm{KL}_{max}\left( \pi_{new}^{i} || \pi_{old}^{i} \right), d \right\} \tag {56}
+$$
+
+$$
+= \mathbb{E}_{a \sim \pi_{new}^{i}(\cdot | s)}\left[ - Q^{\pi_{old}^{i}}(s, a) \right] + \beta \mathrm{KL}\left( \pi_{new}^{i}(\cdot | s) \mid \mid \pi^{b}(\cdot | s) \right) - \beta \max \left\{ \rho \mathrm{KL}_{max}\left( \pi_{new}^{i} \mid \mid \pi_{old}^{i} \right), d \right\} \tag {57}
+$$
+
+$$
+= \mathbb {E} _ {a \sim \pi_ {n e w} ^ {i} (\cdot | s)} \left[ - Q ^ {\pi_ {o l d} ^ {i}} (s, a) \right] + \beta \left[ \mathrm {K L} \left(\pi_ {n e w} ^ {i} (\cdot | s) | | \pi^ {b} (\cdot | s)\right) - \max \left\{\rho \mathrm {K L} _ {m a x} \left(\pi_ {n e w} ^ {i} | | \pi_ {o l d} ^ {i}\right), d \right\} \right] \tag {58}
+$$
+
+$$
+= \mathbb {E} _ {a \sim \pi_ {n e w} ^ {i} (\cdot | s)} \left[ - Q ^ {\pi_ {o l d} ^ {i}} (s, a) \right] + \beta \Delta (s) \tag {59}
+$$
+
+$$
+\geq_ {(c)} \mathbb {E} _ {a \sim \pi_ {n e w} ^ {i} (\cdot | s)} \left[ - Q ^ {\pi_ {n e w} ^ {i}} (s, a) \right] + \beta \Delta (s), \tag {60}
+$$
+
+where step (a) is valid due to Lemma 5, step (b) is valid due to the definition of $\pi_{new}^{i}$ , and step (c) is valid due to Proposition 1.
+
+Finally, we prove Theorem 1.
+
+Theorem 1. Under Assumptions 1 and 2, the following inequality holds:
+
+$$
+Q ^ {\pi_ {n e w} ^ {i}} (s, a) \geq Q ^ {\pi^ {b}} (s, a) + \beta \mathbb {E} _ {s _ {t + 1}: s _ {\infty} \sim \pi^ {b}} \left[ \sum_ {k = t + 1} ^ {\infty} \gamma^ {k - t} \Delta \left(s _ {k}\right) \right] \geq Q ^ {\pi^ {b}} (s, a), \quad \forall (s, a), \forall i \neq b. \tag {61}
+$$
+
+where
+
+$$
+\Delta (s) = \left[ K L \left(\pi_ {n e w} ^ {i} (\cdot | s) | | \pi^ {b} (\cdot | s)\right) - \max \left\{\rho K L _ {m a x} \left(\pi_ {n e w} ^ {i} | | \pi_ {o l d} ^ {i}\right), d \right\} \right] \tag {62}
+$$
+
+Proof of Theorem 1: The proof is by recursive application of Proposition 2. For arbitrary $s_t$ and $a_t$ ,
+
+$$
+\begin{aligned} Q^{\pi_{new}^{i}}(s_t, a_t) &= r(s_t, a_t) + \gamma \mathbb{E}_{s_{t+1} \sim p(\cdot | s_t, a_t)}\left[ \mathbb{E}_{a_{t+1} \sim \pi_{new}^{i}}\left[ Q^{\pi_{new}^{i}}(s_{t+1}, a_{t+1}) \right] \right] && (63) \\ &\geq_{(a)} r(s_t, a_t) + \gamma \mathbb{E}_{s_{t+1} \sim p(\cdot | s_t, a_t)}\left[ \mathbb{E}_{a_{t+1} \sim \pi^{b}}\left[ Q^{\pi_{new}^{i}}(s_{t+1}, a_{t+1}) \right] + \beta \Delta(s_{t+1}) \right] && (64) \\ &= \mathbb{E}_{s_{t+1}:s_{t+2} \sim \pi^{b}}\left[ r(s_t, a_t) + \gamma r(s_{t+1}, a_{t+1}) + \gamma^{2} \mathbb{E}_{a_{t+2} \sim \pi_{new}^{i}}\left[ Q^{\pi_{new}^{i}}(s_{t+2}, a_{t+2}) \right] \right] + \beta \mathbb{E}_{s_{t+1}:s_{t+2} \sim \pi^{b}}\left[ \gamma \Delta(s_{t+1}) \right] && (65) \\ &\geq_{(b)} \mathbb{E}_{s_{t+1}:s_{t+2} \sim \pi^{b}}\left[ r(s_t, a_t) + \gamma r(s_{t+1}, a_{t+1}) + \gamma^{2} \mathbb{E}_{a_{t+2} \sim \pi^{b}}\left[ Q^{\pi_{new}^{i}}(s_{t+2}, a_{t+2}) \right] + \beta \gamma^{2} \Delta(s_{t+2}) \right] + \beta \mathbb{E}_{s_{t+1}:s_{t+2} \sim \pi^{b}}\left[ \gamma \Delta(s_{t+1}) \right] && (66) \end{aligned}
+$$
+
+$$
+\begin{aligned} &\geq \dots \\ &\geq \mathbb{E}_{s_{t+1}:s_{\infty} \sim \pi^{b}}\left[ \sum_{k=t}^{\infty} \gamma^{k-t} r(s_k, a_k) \right] + \beta \mathbb{E}_{s_{t+1}:s_{\infty} \sim \pi^{b}}\left[ \sum_{k=t+1}^{\infty} \gamma^{k-t} \Delta(s_k) \right] && (67) \\ &= Q^{\pi^{b}}(s_t, a_t) + \beta \mathbb{E}_{s_{t+1}:s_{\infty} \sim \pi^{b}}\left[ \sum_{k=t+1}^{\infty} \gamma^{k-t} \Delta(s_k) \right] && (68) \end{aligned}
+$$
+
+where steps (a) and (b) hold because of Proposition 2. Assumption 2 ensures
+
+$$
+\Delta (s) = \left[ \mathrm {K L} \left(\pi_ {n e w} ^ {i} (\cdot | s) | | \pi^ {b} (\cdot | s)\right) - \max \left\{\rho \mathrm {K L} _ {\text {m a x}} \left(\pi_ {n e w} ^ {i} | | \pi_ {\text {o l d}} ^ {i}\right), d \right\} \right] \geq 0, \quad \forall s. \tag {69}
+$$
+
+Hence, the second term in (68) is non-negative. Therefore, we have
+
+$$
+\begin{aligned} Q^{\pi_{new}^{i}}(s_t, a_t) &\geq Q^{\pi^{b}}(s_t, a_t) + \beta \mathbb{E}_{s_{t+1}:s_{\infty} \sim \pi^{b}}\left[ \sum_{k=t+1}^{\infty} \gamma^{k-t} \Delta(s_k) \right] && (70) \\ &\geq Q^{\pi^{b}}(s_t, a_t). && (71) \end{aligned}
+$$
+
+□
+
+# APPENDIX B. INTUITION OF THE IMPLEMENTATION OF $\beta$ ADAPTATION
+
+Due to Theorem 1, we have
+
+$$
+Q ^ {\pi_ {n e w} ^ {i}} \left(s _ {t}, a _ {t}\right) \geq Q ^ {\pi^ {b}} \left(s _ {t}, a _ {t}\right) + \underbrace {\beta \mathbb {E} _ {s _ {t + 1}: s _ {\infty} \sim \pi^ {b}} \left[ \sum_ {k = t + 1} ^ {\infty} \gamma^ {k - t} \Delta \left(s _ {k}\right) \right]} _ {\text {I m p r o v e m e n t g a p}} \tag {72}
+$$
+
+where
+
+$$
+\Delta (s) = \left[ \mathrm {K L} \left(\pi_ {n e w} ^ {i} (\cdot | s) \| \pi^ {b} (\cdot | s)\right) - \max \left\{\rho \mathrm {K L} _ {\text {m a x}} \left(\pi_ {n e w} ^ {i} \| \pi_ {\text {o l d}} ^ {i}\right), d \right\} \right] \geq 0, \quad \forall s. \tag {73}
+$$
+
+In deriving eqs. (72) and (73), we only used Assumption 1. When we have Assumption 2, the improvement gap term in (72) becomes non-negative and we have
+
+$$
+Q ^ {\pi_ {n e w} ^ {i}} \left(s _ {t}, a _ {t}\right) \geq Q ^ {\pi^ {b}} \left(s _ {t}, a _ {t}\right) \tag {74}
+$$
+
+as desired. In practice, however, Assumption 2 must be enforced by the implementation so that the improvement gap term is non-negative and the desired result (74) holds. This condition is implemented through adaptation of $\beta$ : for given $\rho$ and $d$ , we adapt $\beta$ to maximize the improvement gap $\beta \mathbb{E}_{s_{t+1}:s_{\infty} \sim \pi^b} \left[ \sum_{k=t+1}^{\infty} \gamma^{k-t} \Delta(s_k) \right]$ in (72). Let us denote $\mathbb{E}_{s_{t+1}:s_{\infty} \sim \pi^b} \left[ \sum_{k=t+1}^{\infty} \gamma^{k-t} \Delta(s_k) \right]$ by $\bar{\Delta}$ , so that the improvement gap is $\beta \bar{\Delta}$ . Note that $\bar{\Delta}$ is the average (with forgetting) of $\Delta(s_k)$ computed from samples generated by $\pi^b$ . The gradient of the improvement gap with respect to $\beta$ is given by
+
+$$
+\nabla_ {\beta} (\beta \bar {\Delta}) = \bar {\Delta}. \tag {75}
+$$
+
+Thus, if $\bar{\Delta} > 0$ , i.e., $\mathrm{KL}\left(\pi_{new}^{i}(\cdot |s) || \pi^{b}(\cdot |s)\right) > \max \left\{\rho \mathrm{KL}_{max}\left(\pi_{new}^{i}||\pi_{old}^{i}\right), d\right\}$ on average, then $\beta$ should be increased to maximize the improvement gap. Conversely, if $\bar{\Delta} < 0$ , i.e., $\mathrm{KL}\left(\pi_{new}^{i}(\cdot |s) || \pi^{b}(\cdot |s)\right) < \max \left\{\rho \mathrm{KL}_{max}\left(\pi_{new}^{i}||\pi_{old}^{i}\right), d\right\}$ on average, then $\beta$ should be decreased. Therefore, we adapt $\beta$ as follows:
+
+$$
+\beta \leftarrow \left\{ \begin{array}{l l} 2 \beta & \text{if } \widehat{D}_{spread} > \max \left\{ \rho \widehat{D}_{change}, d_{\min} \right\} \times 1.5 \\ \beta / 2 & \text{if } \widehat{D}_{spread} < \max \left\{ \rho \widehat{D}_{change}, d_{\min} \right\} / 1.5 \end{array} \right. \tag {76}
+$$
+
+where $\widehat{D}_{spread}$ and $\widehat{D}_{change}$ are empirical estimates of $\mathrm{KL}(\pi_{new}^{i}(\cdot |s)||\pi^{b}(\cdot |s))$ and $\mathrm{KL}_{max}(\pi_{new}^{i}||\pi_{old}^{i})$ , respectively.
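The adaptation rule in eq. (76) can be sketched as a small function. This is a minimal illustration with hypothetical names; the estimates $\widehat{D}_{spread}$ and $\widehat{D}_{change}$ are assumed to be computed elsewhere from rollout samples, and $\beta$ is left unchanged between the two cases.

```python
def adapt_beta(beta, d_spread, d_change, rho=2.0, d_min=0.05):
    """Eq. (76): double or halve beta; otherwise leave it unchanged."""
    threshold = max(rho * d_change, d_min)
    if d_spread > 1.5 * threshold:
        return 2.0 * beta   # learners strayed too far: strengthen guidance
    if d_spread < threshold / 1.5:
        return beta / 2.0   # learners too close to the best: weaken guidance
    return beta
```

The default values $\rho = 2$ and $d_{min} = 0.05$ are taken from the hyper-parameter choices reported in Appendix I.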
+
+# APPENDIX C. COMPARISON TO BASELINES ON DELAYED MUJOCO ENVIRONMENTS
+
+In this section, we provide the learning curves of the state-of-the-art single-learner baselines on delayed MuJoCo environments. Fig. 6 shows the learning curves of the P3S-TD3 algorithm and the single-learner baselines on the four delayed MuJoCo environments: Delayed Hopper-v1, Delayed Walker2d-v1, Delayed HalfCheetah-v1, and Delayed Ant-v1. In the Delayed Ant-v1 environment, ACKTR outperforms P3S-TD3, whereas P3S-TD3 significantly outperforms all baselines on the other three environments.
+
+
+(a) Delayed Hopper-v1
+
+
+(b) Delayed Walker2d-v1
+
+
+(c) Delayed HalfCheetah-v1
+
+
+(d) Delayed Ant-v1
+Figure 6: Performance for PPO (brown), ACKTR (purple), (clipped double Q) SAC (orange), TD3 (green), and P3S-TD3 (proposed method, blue) on the four delayed MuJoCo tasks with $f_{reward} = 20$ .
+
+# APPENDIX D. COMPARISON TO CEM-TD3
+
+In this section, we compare the performance of TD3 and P3S-TD3 with CEM-TD3 (Pourchot & Sigaud (2019)), a state-of-the-art evolutionary algorithm. Like other evolutionary algorithms, CEM-TD3 uses a population to search for a better policy. CEM-TD3 operates as follows:
+
+1. It first samples $N$ policies by drawing policy parameters from a Gaussian distribution.
+2. It randomly selects half of the population. The selected policies and a common Q-function are updated based on mini-batches drawn from a common replay buffer.
+3. Both the updated selected policies and the unselected policies are evaluated, and the experiences gathered during evaluation are stored in the common replay buffer.
+4. After all $N$ policies are evaluated, it takes the best $N/2$ policies and sets the mean and variance of the policy-parameter distribution to the mean and variance of the parameters of these best $N/2$ policies.
+5. Steps 1 to 4 are repeated until the maximum number of time steps is reached.
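The population loop of steps 1 to 4 can be sketched in simplified form. The toy below keeps only the evolutionary (CEM) part over plain parameter vectors with a stand-in fitness function, and omits the TD3 critic and policy updates of step 2; all names and numbers are illustrative.

```python
import random

def cem_loop(fitness, dim=3, n=10, iters=40, seed=0):
    rng = random.Random(seed)
    mean = [0.0] * dim
    var = [1.0] * dim
    for _ in range(iters):
        # Step 1: sample N candidate parameter vectors from the Gaussian.
        pop = [[rng.gauss(mean[j], var[j] ** 0.5) for j in range(dim)]
               for _ in range(n)]
        # Steps 2-3 (simplified): evaluate every candidate; the real
        # algorithm also applies TD3 updates to half of them first.
        pop.sort(key=fitness, reverse=True)
        # Step 4: refit mean and variance to the best N/2 candidates.
        elite = pop[: n // 2]
        mean = [sum(e[j] for e in elite) / len(elite) for j in range(dim)]
        var = [sum((e[j] - mean[j]) ** 2 for e in elite) / len(elite) + 1e-6
               for j in range(dim)]
    return mean

# Stand-in fitness: maximized at an arbitrary target parameter vector.
target = [0.5, -0.5, 0.25]
best = cem_loop(lambda p: -sum((pj - tj) ** 2 for pj, tj in zip(p, target)))
```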
+
+For the performance comparison, we used the implementation of CEM-TD3 from the original paper (Pourchot & Sigaud (2019)) with the hyper-parameters provided therein. Table 1 shows the steady-state performance on MuJoCo and delayed MuJoCo environments. Fig. 7 on the next page shows the learning curves on MuJoCo and delayed MuJoCo environments. It is seen that P3S-TD3 outperforms CEM-TD3 on all environments except Delayed HalfCheetah-v1. Notably, P3S-TD3 significantly outperforms CEM-TD3 on Delayed Walker2d-v1 and Delayed Ant-v1.
+
+Table 1: Steady state performance of P3S-TD3, CEM-TD3, and TD3
+
+| Environment | P3S-TD3 | CEM-TD3 | TD3 |
+| Hopper-v1 | 3705.92 | 3686.08 | 2555.85 |
+| Walker2d-v1 | 4953.00 | 4819.40 | 4455.51 |
+| HalfCheetah-v1 | 11961.44 | 11417.73 | 9695.92 |
+| Ant-v1 | 5339.66 | 4379.73 | 3760.50 |
+| Delayed Hopper-v1 | 3355.53 | 3117.20 | 1866.02 |
+| Delayed Walker2d-v1 | 4058.85 | 1925.63 | 2016.48 |
+| Delayed HalfCheetah-v1 | 5754.80 | 6389.40 | 3684.28 |
+| Delayed Ant-v1 | 724.50 | 70.44 | -7.45 |
+
+
+(a) Hopper-v1
+
+
+(b) Delayed Hopper-v1
+
+
+(c) Walker2d-v1
+
+
+(d) Delayed Walker2d-v1
+
+
+(e) HalfCheetah-v1
+
+
+(f) Delayed HalfCheetah-v1
+
+
+(g) Ant-v1
+Figure 7: Performance of P3S-TD3, CEM-TD3, and TD3
+
+
+(h) Delayed Ant-v1
+
+# APPENDIX E. COMPARISON TO METHOD USING CENTER POLICY
+
+In this section, we consider a variant of the proposed P3S-TD3 algorithm, named Center-TD3. This variant uses a center policy, as in Distral (Teh et al. (2017)) and Divide-and-Conquer (Ghosh et al. (2018)). Center-TD3 maintains $N$ policies plus a center policy $\pi^c$ . The value and policy update procedure is the same as in the original TD3 algorithm, but the loss functions are redefined. That is, the Q-function loss is the same as in the original TD3 algorithm, but the parameters of the $N$ policies are updated based on the following loss:
+
+$$
+\tilde {L} (\phi^ {i}) = \hat {\mathbb {E}} _ {s \sim \mathcal {D}} \left[ - Q _ {\theta_ {1} ^ {i}} (s, \pi_ {\phi^ {i}} (s)) + \frac {\beta}{2} \left\| \pi_ {\phi^ {i}} (s) - \pi_ {\phi^ {c}} (s) \right\| _ {2} ^ {2} \right]. \tag {77}
+$$
+
+The policy-parameter loss of Center-TD3 is obtained by replacing the best policy with the center policy in the corresponding loss of P3S-TD3. The center policy is updated every $M$ time steps in the direction of minimizing the following loss:
+
+$$
+\tilde {L} \left(\phi^ {c}\right) = \hat {\mathbb {E}} _ {s \sim \mathcal {D}} \left[ \frac {\beta}{2} \sum_ {i = 1} ^ {N} \left\| \pi_ {\phi^ {i}} (s) - \pi_ {\phi^ {c}} (s) \right\| _ {2} ^ {2} \right]. \tag {78}
+$$
+
+Center-TD3 follows the spirit of Distral (Teh et al. (2017)) and Divide-and-Conquer (Ghosh et al. (2018)) algorithms.
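For intuition about the center-policy update, note that for a fixed state the loss in eq. (78) is a sum of squared distances, so it is minimized when the center policy's action equals the mean of the $N$ learners' actions. The snippet below checks this numerically for one state with 2-dimensional actions (all names and values are illustrative):

```python
# For a fixed state s, eq. (78) penalizes (beta/2) * sum_i ||pi_i(s) - pi_c(s)||^2.
# Its minimizer over pi_c(s) is the mean of the learners' actions at s.
def center_loss(center, actions, beta=1.0):
    return 0.5 * beta * sum(sum((a - c) ** 2 for a, c in zip(act, center))
                            for act in actions)

actions = [[0.2, -0.4], [0.6, 0.0], [1.0, 0.4], [0.2, 0.8]]  # N = 4 learners
mean = [sum(a[j] for a in actions) / len(actions) for j in range(2)]
# Perturbing the center away from the mean can only increase the loss.
for delta in ([0.1, 0.0], [0.0, -0.2], [0.3, 0.3]):
    shifted = [m + d for m, d in zip(mean, delta)]
    assert center_loss(shifted, actions) >= center_loss(mean, actions)
```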
+
+We tuned the hyper-parameters for Center-TD3 over the sets $\beta \in \{1, 10\}$ and $M \in \{2, 20, 40, 100, 200, 500\}$ , and then measured the performance of Center-TD3 with the selected values $\beta = 1$ , $M = 40$ . Fig. 8 on the next page and Table 2 show the learning curves and the steady-state performance on MuJoCo and delayed MuJoCo environments, respectively. It is seen that P3S-TD3 outperforms Center-TD3 on all environments except Delayed HalfCheetah-v1.
+
+Table 2: Steady state performance of P3S-TD3, Center-TD3, and TD3
+
+| Environment | P3S-TD3 | Center-TD3 | TD3 |
+| Hopper-v1 | 3705.92 | 3675.28 | 2555.85 |
+| Walker2d-v1 | 4953.00 | 4689.34 | 4455.51 |
+| HalfCheetah-v1 | 11961.44 | 10620.84 | 9695.92 |
+| Ant-v1 | 5339.66 | 4616.82 | 3760.50 |
+| Delayed Hopper-v1 | 3355.53 | 3271.50 | 1866.02 |
+| Delayed Walker2d-v1 | 4058.85 | 2878.85 | 2016.48 |
+| Delayed HalfCheetah-v1 | 5754.80 | 6047.47 | 3684.28 |
+| Delayed Ant-v1 | 724.50 | 688.96 | -7.45 |
+
+
+(a) Hopper-v1
+
+
+(b) Delayed Hopper-v1
+
+
+(c) Walker2d-v1
+
+
+(d) Delayed Walker2d-v1
+
+
+(e) HalfCheetah-v1
+
+
+(f) Delayed HalfCheetah-v1
+
+
+(g) Ant-v1
+Figure 8: Performance of P3S-TD3, Center-TD3, and TD3
+
+
+(h) Delayed Ant-v1
+
+# APPENDIX F. RESULT ON SWIMMER-V1
+
+Khadka & Tumer (2018) and Pourchot & Sigaud (2019) noticed that most deep RL methods suffer from a deceptive gradient problem on the Swimmer-v1 task and thus cannot learn effectively on it. Unfortunately, we observed that the proposed P3S-TD3 algorithm could not overcome the deceptive gradient problem on Swimmer-v1 either. Fig. 9 shows the learning curves of the P3S-TD3 and TD3 algorithms. In Khadka & Tumer (2018), the authors proposed an effective evolutionary algorithm named ERL to address the deceptive gradient problem on Swimmer-v1, yielding good performance, as shown in Fig. 9. P3S-TD3 falls short of the performance of ERL on Swimmer-v1. However, it is known that CEM-TD3, discussed in Appendix D, outperforms ERL on other tasks (Pourchot & Sigaud (2019)), and we observed in Appendix D that P3S-TD3 outperforms CEM-TD3 on most environments.
+
+
+Figure 9: Performance on Swimmer-v1 of P3S-TD3 (blue), TD3 (orange), and the final performance of evolutionary RL (Khadka & Tumer (2018), green dashed line).
+
+# APPENDIX G. THE TWIN DELAYED DEEP DETERMINISTIC POLICY GRADIENT (TD3) ALGORITHM
+
+The TD3 algorithm is a current state-of-the-art off-policy algorithm and a variant of the deep deterministic policy gradient (DDPG) algorithm (Lillicrap et al. (2015)). TD3 addresses two problems of typical actor-critic algorithms: 1) overestimation bias and 2) high variance in the approximation of the Q-function. To reduce the bias, TD3 maintains two Q-functions and uses the minimum of the two Q-function values to compute the target value; to reduce the variance in the gradient, the policy is updated less frequently than the Q-functions. Specifically, let $Q_{\theta_1}$ , $Q_{\theta_2}$ , and $\pi_{\phi}$ be the two current Q-functions and the current deterministic policy, respectively, and let $Q_{\theta_1'}$ , $Q_{\theta_2'}$ , and $\pi_{\phi'}$ be their target networks, initialized to be identical to the current networks. At time step $t$ , TD3 takes an action $a_t$ with exploration noise $\epsilon$ : $a_t = \pi_{\phi}(s_t) + \epsilon$ , where $\epsilon \sim \mathcal{N}(0, \sigma^2)$ is zero-mean Gaussian noise with variance $\sigma^2$ . The environment then returns reward $r_t$ and the state transitions to $s_{t+1}$ . TD3 stores the experience $(s_t, a_t, r_t, s_{t+1})$ in the experience replay buffer $\mathcal{D}$ . After storing the experience, the Q-function parameters $\theta_1$ and $\theta_2$ are updated by gradient descent on the following loss functions:
+
+$$
+L \left(\theta_ {j}\right) = \hat {\mathbb {E}} _ {\left(s, a, r, s ^ {\prime}\right) \sim \mathcal {D}} \left[ \left(y - Q _ {\theta_ {j}} (s, a)\right) ^ {2} \right], \quad j = 1, 2 \tag {79}
+$$
+
+where $\hat{\mathbb{E}}_{(s,a,r,s')\sim \mathcal{D}}$ denotes the sample expectation over a uniform random mini-batch of size $B$ drawn from the replay buffer $\mathcal{D}$ , and the target value $y$ is given by
+
+$$
+y = r + \gamma \min _ {j = 1, 2} Q _ {\theta_ {j} ^ {\prime}} \left(s ^ {\prime}, \pi_ {\phi^ {\prime}} \left(s ^ {\prime}\right) + \epsilon\right), \quad \epsilon \sim \operatorname {c l i p} \left(\mathcal {N} \left(0, \tilde {\sigma} ^ {2}\right), - c, c\right). \tag {80}
+$$
+
+Here, for the computation of the target value, the minimum of the two target Q-functions is used to reduce the bias. The action-taking and gradient-descent steps for $\theta_{1}$ and $\theta_{2}$ are repeated $d$ times $(d = 2)$ , and then the policy and target networks are updated. The policy parameter $\phi$ is updated by gradient descent minimizing the loss function for $\phi$ :
+
+$$
+L (\phi) = - \hat {\mathbb {E}} _ {s \sim \mathcal {D}} \left[ Q _ {\theta_ {1}} (s, \pi_ {\phi} (s)) \right], \tag {81}
+$$
+
+and the target network parameters $\theta_j^\prime$ and $\phi^{\prime}$ are updated as
+
+$$
+\theta_ {j} ^ {\prime} \leftarrow (1 - \tau) \theta_ {j} ^ {\prime} + \tau \theta_ {j} \quad \phi^ {\prime} \leftarrow (1 - \tau) \phi^ {\prime} + \tau \phi . \tag {82}
+$$
+
+The networks are trained until the number of time steps reaches a predefined maximum.
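The target computation in eq. (80) and the soft (Polyak) update in eq. (82) can be sketched for a single transition. The Q-functions and target policy below are stand-in callables, and all names are illustrative.

```python
import random

def td3_target(r, s_next, q1_targ, q2_targ, pi_targ,
               gamma=0.99, sigma_t=0.2, c=0.5):
    # Eq. (80): clipped Gaussian noise on the target action,
    # then the minimum of the two target Q-values.
    eps = max(-c, min(c, random.gauss(0.0, sigma_t)))
    a_next = pi_targ(s_next) + eps
    return r + gamma * min(q1_targ(s_next, a_next), q2_targ(s_next, a_next))

def polyak(target_params, params, tau=5e-3):
    # Eq. (82): target parameters slowly track the current parameters.
    return [(1 - tau) * tp + tau * p for tp, p in zip(target_params, params)]
```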
+
+# APPENDIX H. PSEUDOCODE OF THE P3S-TD3 ALGORITHM
+
+Algorithm 1 The Population-Guided Parallel Policy Search TD3 (P3S-TD3) Algorithm
+Require: $N$ : number of learners, $T_{initial}$ : initial exploration time steps, $T$ : maximum time steps, $M$ : the best-policy update period, $B$ : size of mini-batch, $d$ : update interval for policy and target networks.
+1: Initialize $\phi^1 = \dots = \phi^N = \phi^b$ , $\theta_j^1 = \dots = \theta_j^N$ , $j = 1,2$ , randomly.
+2: Initialize $\beta = 1$ , $t = 0$
+3: while $t < T$ do
+4: $t \gets t + 1$ (one time step)
+5: for $i = 1,2,\dots,N$ in parallel do
+6: if $t < T_{initial}$ then
+7: Take a uniform random action $a_t^i$ to environment copy $\mathcal{E}^i$
+8: else
+9: Take an action $a_t^i = \pi^i(s_t^i) + \epsilon$ , $\epsilon \sim \mathcal{N}(0,\sigma^2)$ to environment copy $\mathcal{E}^i$
+10: end if
+11: Store experience $(s_t^i, a_t^i, r_t^i, s_{t+1}^i)$ to the shared common experience replay $\mathcal{D}$
+12: end for
+13: if $t < T_{initial}$ then
+14: continue (i.e., go to the beginning of the while loop)
+15: end if
+16: for $i = 1,2,\dots,N$ in parallel do
+17: Sample a mini-batch $\mathcal{B} = \{(s_{t_l}, a_{t_l}, r_{t_l}, s_{t_l+1})\}_{l=1,\dots,B}$ from $\mathcal{D}$
+18: Update $\theta_j^i$ , $j = 1,2$ , by gradient descent for minimizing $\tilde{L}(\theta_j^i)$ in (83) with $\mathcal{B}$
+19: if $t \equiv 0 (\mathrm{mod} d)$ then
+20: Update $\phi^i$ by gradient descent for minimizing $\tilde{L}(\phi^i)$ in (84) with $\mathcal{B}$
+21: Update the target networks: $(\theta_j^i)' \gets (1 - \tau)(\theta_j^i)' + \tau \theta_j^i$ , $(\phi^i)' \gets (1 - \tau)(\phi^i)' + \tau \phi^i$
+22: end if
+23: end for
+24: if $t \equiv 0 (\mathrm{mod} M)$ then
+25: Select the best learner $b$
+26: Adapt $\beta$
+27: end if
+28: end while
+
+In P3S-TD3, the $i$ -th learner has its own parameters $\theta_1^i$ , $\theta_2^i$ , and $\phi^i$ for its two Q-functions and policy. Furthermore, it has $(\theta_1^i)'$ , $(\theta_2^i)'$ , and $(\phi^i)'$ which are the parameters of the corresponding target networks. For the distance measure between two policies, we use the mean square difference, given by $D(\pi(s), \tilde{\pi}(s)) = \frac{1}{2} \| \pi(s) - \tilde{\pi}(s) \|_2^2$ . For the $i$ -th learner, as in TD3, the parameters $\theta_j^i$ , $j = 1, 2$ are updated every time step by minimizing
+
+$$
+\tilde {L} \left(\theta_ {j} ^ {i}\right) = \hat {\mathbb {E}} _ {\left(s, a, r, s ^ {\prime}\right) \sim \mathcal {D}} \left[ \left(y - Q _ {\theta_ {j} ^ {i}} (s, a)\right) ^ {2} \right] \tag {83}
+$$
+
+where $y = r + \gamma \min_{j=1,2} Q_{(\theta_j^i)'}(s', \pi_{(\phi^i)'}(s') + \epsilon)$ , $\epsilon \sim \mathrm{clip}(\mathcal{N}(0, \tilde{\sigma}^2), -c, c)$ . The parameter $\phi^i$ is updated every $d$ time steps by minimizing the following augmented loss function:
+
+$$
+\tilde {L} \left(\phi^ {i}\right) = \hat {\mathbb {E}} _ {s \sim \mathcal {D}} \left[ - Q _ {\theta_ {1} ^ {i}} \left(s, \pi_ {\phi^ {i}} (s)\right) + \mathbf {1} _ {\{i \neq b \}} \frac {\beta}{2} \left\| \pi_ {\phi^ {i}} (s) - \pi_ {\phi^ {b}} (s) \right\| _ {2} ^ {2} \right]. \tag {84}
+$$
+
+For the first $T_{initial}$ time steps, we use a random policy for initial exploration and do not update any policy during this period. With these loss functions, the reference policy, and the initial exploration policy, all procedures are the same as the general P3S procedure described in Section 3. The pseudocode of the P3S-TD3 algorithm is shown above.
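The augmented loss in eq. (84) can be sketched for one mini-batch of states. The Q-function and the two policies are stand-in callables with scalar actions, and the indicator removes the guidance term when learner $i$ is itself the best learner $b$ (names are illustrative):

```python
def p3s_policy_loss(states, q1_i, pi_i, pi_best, i, b, beta=1.0):
    # Eq. (84): -Q term plus the squared-distance guidance term,
    # which is switched off for the best learner (indicator i != b).
    total = 0.0
    for s in states:
        total -= q1_i(s, pi_i(s))
        if i != b:
            total += 0.5 * beta * (pi_i(s) - pi_best(s)) ** 2
    return total / len(states)
```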
+
+# APPENDIX I. HYPER-PARAMETERS
+
+TD3 The networks for two Q-functions and the policy have 2 hidden layers. The first and second layers have sizes 400 and 300, respectively. The non-linearity function of the hidden layers is ReLU, and the activation functions of the last layers of the Q-functions and the policy are linear and hyperbolic tangent, respectively. We used the Adam optimizer with learning rate $10^{-3}$ , discount factor $\gamma = 0.99$ , target smoothing factor $\tau = 5 \times 10^{-3}$ , the period $d = 2$ for updating the policy. The experience replay buffer size is $10^{6}$ , and the mini-batch size $B$ is 100. The standard deviation for exploration noise $\sigma$ and target noise $\tilde{\sigma}$ are 0.1 and 0.2, respectively, and the noise clipping factor $c$ is 0.5.
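For reference, the TD3 hyper-parameters listed above can be collected in a plain config dict (the key names are illustrative, not from the original code):

```python
# TD3 hyper-parameters as stated in Appendix I (key names are illustrative).
TD3_CONFIG = {
    "hidden_sizes": (400, 300),   # two hidden layers, ReLU
    "lr": 1e-3,                   # Adam learning rate
    "gamma": 0.99,                # discount factor
    "tau": 5e-3,                  # target smoothing factor
    "policy_delay": 2,            # d: policy/target update period
    "buffer_size": int(1e6),      # replay buffer size
    "batch_size": 100,            # B
    "expl_noise_std": 0.1,        # sigma (exploration noise)
    "target_noise_std": 0.2,      # sigma tilde (target noise)
    "noise_clip": 0.5,            # c
}
```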
+
+P3S-TD3 In addition to the hyper-parameters for TD3, we used $N = 4$ learners, the period $M = 250$ for updating the best policy and $\beta$ , and the number of recent episodes $E_r = 10$ for determining the best learner $b$ . The parameter $d_{min}$ was chosen from $\{0.02, 0.05\}$ for each environment; the chosen value was 0.02 for Walker2d-v1, Ant-v1, Delayed Hopper-v1, Delayed Walker2d-v1, and Delayed HalfCheetah-v1, and 0.05 for Hopper-v1, HalfCheetah-v1, and Delayed Ant-v1. The parameter $\rho$ for the exploration range was 2 for all environments. The number of initial exploration time steps $T_{initial}$ was set to 250 for Hopper-v1 and Walker2d-v1, and to 2500 for HalfCheetah-v1 and Ant-v1.
+
+Re-TD3 The period $M'$ was chosen among {2000, 5000, 10000} (MuJoCo environments) and {10000, 20000, 50000} (Delayed MuJoCo environments) by tuning for each environment. The chosen period $M'$ was 2000 (Ant-v1), 5000 (Hopper-v1, Walker2d-v1, HalfCheetah-v1), 10000 (Delayed HalfCheetah-v1, Delayed Ant-v1), and 20000 (Delayed Hopper-v1, Delayed Walker2d-v1).
\ No newline at end of file
diff --git a/populationguidedparallelpolicysearchforreinforcementlearning/images.zip b/populationguidedparallelpolicysearchforreinforcementlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..11ae5a6dfe0cbdad1f5999c2e43b93ea1a5e3cb9
--- /dev/null
+++ b/populationguidedparallelpolicysearchforreinforcementlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4287f47f3251b820b2d7499ce8601558be8f1c86c819957b97eb74c71b34dd72
+size 1306677
diff --git a/populationguidedparallelpolicysearchforreinforcementlearning/layout.json b/populationguidedparallelpolicysearchforreinforcementlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6c005f7e326f2a8f065b0f9c79a56e9728c5721f
--- /dev/null
+++ b/populationguidedparallelpolicysearchforreinforcementlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4058c8011dcf1b0ebccc941d40c6d61c0f520d63862528df33ff963707ca11e5
+size 1017678
diff --git a/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/35a42c9a-10fe-4e59-ab95-50db0dd910a4_content_list.json b/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/35a42c9a-10fe-4e59-ab95-50db0dd910a4_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..3db224b88a670c8d80a30856f93194d2e09e508c
--- /dev/null
+++ b/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/35a42c9a-10fe-4e59-ab95-50db0dd910a4_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:de843a1fd0c41b18bc87be0156588a264fe5e9b7ccbc018dbc91a4329bf50e3a
+size 134246
diff --git a/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/35a42c9a-10fe-4e59-ab95-50db0dd910a4_model.json b/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/35a42c9a-10fe-4e59-ab95-50db0dd910a4_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..90ce3d36c3d5d3f34368df115e67aaed4edc1b83
--- /dev/null
+++ b/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/35a42c9a-10fe-4e59-ab95-50db0dd910a4_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:770e3266c89d37d329dd1f25c7c5dc1955e9eb388846b2fb07241b3d2d819501
+size 154339
diff --git a/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/35a42c9a-10fe-4e59-ab95-50db0dd910a4_origin.pdf b/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/35a42c9a-10fe-4e59-ab95-50db0dd910a4_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c1001ed62c94d1ce386d0a14c8ad07ac524ca3d4
--- /dev/null
+++ b/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/35a42c9a-10fe-4e59-ab95-50db0dd910a4_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b67ef4e295dd1c9f377d420c35051d780bc67763991a66eb097d3c3f521e63de
+size 372239
diff --git a/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/full.md b/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3f1e60f4a69af361969720b2adfbd329d2ef378a
--- /dev/null
+++ b/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/full.md
@@ -0,0 +1,573 @@
+# POSTERIOR SAMPLING FOR MULTI-AGENT REINFORCEMENT LEARNING: SOLVING EXTENSIVE GAMES WITH IMPERFECT INFORMATION
+
+Yichi Zhou, Jialian Li, Jun Zhu*
+
+Dept. of Comp. Sci. & Tech., BNRist Center, Institute for AI, Tsinghua University; RealAI vofhqn@gmail.com, lijialian7@163.com, dcszj@mail.tsinghua.edu.cn
+
+# ABSTRACT
+
+Posterior sampling for reinforcement learning (PSRL) is a useful framework for making decisions in an unknown environment. PSRL maintains a posterior distribution over environments and plans on an environment sampled from this posterior. Although PSRL works well on single-agent reinforcement learning problems, how to apply PSRL to multi-agent reinforcement learning problems is largely unexplored. In this work, we extend PSRL to two-player zero-sum extensive games with imperfect information (TEGI), a class of multi-agent systems. Technically, we combine PSRL with counterfactual regret minimization (CFR), a leading algorithm for solving TEGI with a known environment. Our main contribution is a novel design of interaction strategies. With our interaction strategies, our algorithm provably converges to the Nash Equilibrium at a rate of $O(\sqrt{\log T / T})$ . Empirical results show that our algorithm works well.
+
+# 1 INTRODUCTION
+
+Reinforcement Learning (RL) (Sutton & Barto, 2018) provides a framework for decision-making problems in an unknown environment, such as robotics control. In an RL problem, agents improve their strategies by gaining information from iterative interactions with the environment. One typical target in designing RL algorithms is to reduce the number of interactions needed to find good strategies. Thus, how to reduce the number of samples by designing efficient interaction strategies is one of the key challenges in RL.
+
+Posterior sampling for RL (PSRL) (Strens, 2000) provides a useful framework for deciding how to interact with the environment. PSRL originates from the famous bandit algorithm Thompson Sampling (Russo et al., 2018), which uses samples from the posterior distributions of the bandit parameters to calculate the current policy. PSRL likewise maintains a posterior distribution over the underlying environment and uses an environment sampled from this posterior to compute its interaction strategies. The interaction strategies are then used to interact with the environment to collect data. The design of the interaction strategies depends on the specific properties of the task. For example, in a single-agent RL (SARL) problem, PSRL takes the strategy with the maximum expected reward on the sampled environment as the interaction strategy (Osband et al., 2013). Theoretical and empirical results (Osband & Van Roy, 2016) both demonstrate that PSRL is a near-optimal method for SARL. Moreover, although PSRL is a Bayesian-style algorithm, empirical evaluation (Chapelle & Li, 2011) and theoretical analysis on multi-armed bandit problems (Agrawal & Goyal, 2017) suggest that it also enjoys good performance on problems with fixed parameters.
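+To make the posterior sampling idea concrete, here is a minimal Thompson Sampling sketch for Bernoulli bandits, the setting PSRL originates from. The arm means, horizon, and seed are hypothetical; this illustrates the sampling principle, not any algorithm from this paper.
+
```python
import random

def thompson_sampling(true_means, T=2000, seed=0):
    """Thompson Sampling for Bernoulli bandits: sample each arm's mean
    from its Beta posterior and pull the arm with the largest sample."""
    rng = random.Random(seed)
    k = len(true_means)
    alpha, beta = [1] * k, [1] * k  # Beta(1, 1) uniform priors
    pulls = [0] * k
    for _ in range(T):
        # Sample a plausible mean for each arm from its posterior.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        alpha[arm] += reward       # posterior update: success count
        beta[arm] += 1 - reward    # posterior update: failure count
        pulls[arm] += 1
    return pulls

pulls = thompson_sampling([0.3, 0.5, 0.7])
```
+
+As the posteriors concentrate, the sampled means of the best arm dominate and pulls accumulate on it; PSRL replaces the per-arm Beta posteriors with a posterior over entire environments.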
+
+However, applying PSRL to multi-agent RL (MARL) tasks requires additional design of the interaction strategies, because the goal of MARL is quite different from that of SARL. In an MARL problem, each agent still aims to maximize its own reward, but the reward of an agent's strategy depends not only on the environment but also on the strategies of the other agents. Therefore, in MARL, the goal of learning is generally to find a Nash Equilibrium (NE), at which no agent is willing to deviate from its strategy unilaterally. So we should design interaction strategies with which the agents can find or approximate the NE efficiently.
+
+More specifically, we consider the RL problems in imperfect information extensive games (Osborne & Rubinstein, 1994). Extensive games provide a unified model for sequential decision-making problems in which agents take actions in turn. Imperfect information here means that agents can keep their own private information, such as the private cards in poker games. Games with imperfect information are also fundamental to many practical applications such as economics and security. In particular, we concentrate on two-player zero-sum imperfect information games (TEGI) where there are two players gaining opposite rewards and a chance player to model the transitions of the environment. When the environment (i.e. the transition functions of the chance player and the reward functions) is known, counterfactual regret minimization (CFR) (Zinkevich et al., 2008) is the leading algorithm in approximating the NE in a TEGI. However, in the RL setting where the environment is unknown, CFR is not applicable.
+
+In this work, we present a posterior sampling algorithm for TEGIs with the technique of CFR. That is, we apply CFR to the environment sampled from the posterior distribution. Our main contribution is a novel design of interaction strategies for the RL problem of TEGIs. With the proposed strategies, we show that our algorithm can provably converge to an approximate NE at a rate of $O(\sqrt{\log T / T})$ . Empirical results show that our algorithm works well.
+
+# 2 PRELIMINARY
+
+In this section, we formally define the task of two-player zero-sum imperfect information games (TEGI) under Reinforcement Learning; and then briefly review two closely related techniques, namely counterfactual regret minimization (CFR) (Zinkevich et al., 2008) and posterior sampling for reinforcement learning (PSRL) (Osband et al., 2013), which inspire our solution.
+
+# 2.1 PROBLEM FORMULATION OF TEGI
+
+We start from the definition of general $N$-player extensive games (see (Osborne & Rubinstein, 1994, pg. 200) for a formal definition), which include TEGI as a nontrivial special case when $N = 2$.
+
+Definition 1 (Extensive game). An extensive game has the following components:
+
+- A finite set of players that includes $N$ agents and a chance player $C$ representing the "Nature" of the game.
+- A finite set $H$ of sequences that satisfies: 1) The empty sequence is a member of $H$ ; 2) If a sequence $\{a_1, \dots, a_k\}$ belongs to $H$ , then for all $1 \leq l < k$ , $\{a_1, \dots, a_l\}$ is a member of $H$ . Here, each member of $H$ is a history and each component of a history is an action taken by a player. The set of available actions after a history is denoted by $\alpha(h) = \{a : (h, a) \in H\}$ and the set of terminal histories is denoted by $Z$ .
+- A function $P$ such that $P(h)$ is the player who takes the action after history $h$ . $H^i$ represents the set of all $h$ that $P(h) = i$ .
+- A function $c^*$ that is the strategy of the chance player, i.e., $c^*(h, a)$ is the probability that action $a$ occurs after $h$ if $P(h) = \mathcal{C}$.
+- For each player $i$ (besides the chance player), a partition $\mathcal{I}^i$ of $H$ : $\mathcal{I}^i$ is an information partition of player $i$ ; a set $I \in \mathcal{I}^i$ is a subset of $H$ such that if $h_1, h_2 \in I$ , then player $i$ cannot distinguish them.
+- A reward function $r^*$ where $r^*(h, i)$ is the distribution of the reward of player $i$ at $h \in Z$ . We assume the rewards are bounded in $[-1, 1]$ .
+
+For convenience, let $A = \max_h |\alpha(h)|$ be the maximal size of actions for one history. A strategy $\sigma^i$ for player $i$ is a mapping from $H^i$ to the distribution over valid actions. We use $\sigma^i(h, a)$ to represent the probability of taking action $a$ at $h \in H^i$ and $\sigma^i(h)$ to be the vector of $\sigma^i(h, a), a \in \alpha(h)$ . And a strategy profile $\sigma$ consists of the strategies of all players in $[N] := \{1, \dots, N\}$ , i.e., $\sigma = \{\sigma^i\}_{i \in [N]}$ .
+
+We use $\sigma^{-i}$ to refer to the strategies of all players except $i$ . Since player $i$ cannot distinguish $h_1, h_2 \in I \in \mathcal{I}^i$ , $\sigma^i(h_1)$ and $\sigma^i(h_2)$ must be the same and we denote $\sigma^i(I) = \sigma^i(h_1)$ . For the clarity of notation, we abbreviate $(c^*, r^*)$ as $d^*$ . Let $u^i(h|\sigma, d^*)$ denote the expected reward of player $i \in [N]$ at history $h$ under strategy $\sigma$ . For convenience, let $u^i(\sigma|d^*) = u^i(h_r|\sigma, d^*)$ where $h_r$ is the root of $H$ and $u^i(h|r^*)$ is the expected reward of player $i$ for $h \in Z$ .
+
+$\pi_{\sigma}(h|d^{*})$ is the probability of reaching $h$ with $\sigma$ and $c^*$. It is easy to see that we can decompose $\pi_{\sigma}(h|d^{*})$ into the product of the contribution of each player. That is, $\pi_{\sigma}(h|d^{*}) = \prod_{i\in [N]\cup \{\mathcal{C}\}}\pi_{\sigma}^{i}(h|d^{*})$. We use $D(h)$ to refer to the depth of $h$ in the game tree and $D^{i}(h)$ to refer to the number of $h$'s ancestors whose player is $i$. Obviously, we have $D(h) = 1 + \sum_{i\in [N] \cup \{\mathcal{C}\}}D^{i}(h)$. And let $D = \max_h D(h)$ and $D^{i} = \max_h D^{i}(h)$.
+
+With the above notation, a TEGI has two players besides the chance player (i.e., $N = 2$), player 1 and player 2, with $u^{1}(h|r^{*}) + u^{2}(h|r^{*}) = 0$ for all terminal histories $h \in Z$.
+
+Nash Equilibrium and exploitability: In a multi-agent system, a solution is often referred to as a Nash Equilibrium (NE) (Osborne & Rubinstein, 1994). In a TEGI, $\sigma = (\sigma^1, \sigma^2)$ is an NE if and only if $u^i(\sigma | d^*) = \max_{\sigma^{*,i}} u^i(\sigma^{*,i}, \sigma^{-i} | d^*)$ for $i \in \{1, 2\}$. Our target is to approximate an NE. More specifically, in TEGIs, the approximation error of $\sigma = (\sigma^1, \sigma^2)$ is usually measured by its exploitability:
+
+$$
+expl\left(\sigma | d^{*}\right) = \max_{\sigma^{*,1}} u^{1}\left(\sigma^{*,1}, \sigma^{2} | d^{*}\right) + \max_{\sigma^{*,2}} u^{2}\left(\sigma^{1}, \sigma^{*,2} | d^{*}\right). \tag{1}
+$$
+
+If the environment of a TEGI, i.e. $d^{*}$ , is known for the players, we can directly use counterfactual regret minimization (CFR) (Zinkevich et al., 2008) to minimize the exploitability of this TEGI, as briefly reviewed in Sec. 2.2.
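+In the one-shot matrix-game special case of Eq. (1) (a deliberate simplification of the extensive-form setting above), exploitability is just the sum of the two best-response values against the profile. A small sketch with a hypothetical payoff matrix:
+
```python
def exploitability(A, x, y):
    """Exploitability of profile (x, y) in a zero-sum matrix game whose
    row-player payoff matrix is A: best-response value of player 1
    against y plus best-response value of player 2 against x."""
    n, m = len(A), len(A[0])
    Ay = [sum(A[i][j] * y[j] for j in range(m)) for i in range(n)]  # player 1's action values vs y
    xA = [sum(x[i] * A[i][j] for i in range(n)) for j in range(m)]  # player 1's value per column vs x
    return max(Ay) + max(-v for v in xA)                            # player 2's payoff is -A

# Matching pennies: the uniform profile is the NE, so exploitability is 0,
# while a pure strategy concedes a full unit to a best-responding opponent.
A = [[1, -1], [-1, 1]]
uniform = [0.5, 0.5]
ne_gap = exploitability(A, uniform, uniform)       # 0.0 at the NE
pure_gap = exploitability(A, [1.0, 0.0], uniform)  # 1.0: fully exploitable
```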
+
+In this paper, we concentrate on the more challenging yet relatively under-explored setting of TEGI where the environment $d^{*}$ is unknown and, moreover, $d^{*}$ is subject to some intrinsic uncertainty. For example, poker games with an unknown deck fall into this setting. In practice, this setting is not uncommon in industrial organisation with unknown entry, exit and firm-specific sources (Ericson & Pakes, 1995). In this setting, players have to interact with the unknown environment to gain sufficient knowledge of the game for making optimal decisions, so the problem becomes a reinforcement learning (RL) task. In particular, the task of finding approximate NEs for TEGIs with unknown $d^{*}$ is a Multi-Agent Reinforcement Learning (MARL) task (Buşoniu et al., 2010).
+
+Moreover, to consider the intrinsic uncertainty of the unknown environment, we adopt a Bayesian formulation for our TEGI task, which can flexibly incorporate the prior information and enjoys various potential benefits (Russo & Van Roy, 2014). Formally, we consider the setting where the chance player and the reward functions follow a prior distribution $\mathbb{P}_0$ . That is, the underlying $d^{*} = (c^{*}, r^{*})$ is sampled from $\mathbb{P}_0(c, r)$ . Here $c$ and $r$ are not necessarily independent. After playing $t$ games, players collect some samples from $d^{*} = (c^{*}, r^{*})$ and they can get the posterior distribution, denoted as $\mathbb{P}_t$ . For example, in the case where $r^{*}(h, i)$ is a Bernoulli distribution and its prior is a Beta distribution, the posterior distribution $\mathbb{P}_t(r)$ is also a Beta distribution. Similarly if the prior for $c^{*}$ is a Dirichlet distribution, then $\mathbb{P}_t(c)$ is a Dirichlet distribution since $c^{*}(h)$ is a multinomial distribution for $h \in H^c$ .
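+The conjugate posterior updates mentioned here are simple count accumulations; a minimal sketch (the function names and prior encodings are ours, chosen for illustration):
+
```python
from collections import Counter

def update_beta(prior, outcomes):
    """Beta posterior for a Bernoulli reward: add observed successes to
    alpha and failures to beta."""
    a, b = prior
    s = sum(outcomes)
    return (a + s, b + len(outcomes) - s)

def update_dirichlet(prior, actions):
    """Dirichlet posterior for the chance player's multinomial c*(h):
    add observed action counts to the concentration parameters."""
    counts = Counter(actions)
    return {a: prior[a] + counts.get(a, 0) for a in prior}

post_r = update_beta((1, 1), [1, 1, 0])                       # Beta(3, 2)
post_c = update_dirichlet({"a": 1, "b": 1}, ["a", "a", "b"])  # Dirichlet(3, 2)
```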
+
+To solve the above TEGI problems, we present a method that draws inspirations from the solutions for two simplified settings, as briefly reviewed below.
+
+# 2.2 COUNTERFACTUAL REGRET MINIMIZATION (CFR)
+
+As stated above, when the parameters of the game, i.e. $d^{*} = (c^{*},r^{*})$, are known, counterfactual regret minimization (CFR) (Zinkevich et al., 2008) provides an effective solution to TEGIs with state-of-the-art performance. Formally, CFR is a self-play algorithm that generates a sequence of strategy profiles, $\{\sigma_t\}_{t=1}^T$, by minimizing the following regrets:
+
+$$
+R_{T}^{*,i} = \max_{\sigma^{i}} \sum_{t=1}^{T} u^{i}\left(\sigma^{i}, \sigma_{t}^{-i} | d^{*}\right) - \sum_{t=1}^{T} u^{i}\left(\sigma_{t} | d^{*}\right).
+$$
+
+Algorithm 1 CFR-PSRL
+
+- while $t < T$ do
+  - Sample $d_{t}$ and $\tilde{d}_t$ from the posterior $\mathbb{P}_t$.
+  - for all $i\in \{1,2\}$ do
+    - Select $\sigma_t$ by exploiting CFR to minimize the regret $\max_{\sigma^i}\sum_{t\leq T}u^i(\sigma^i,\sigma_t^{-i}|d_t) - \sum_{t\leq T}u^{i}(\sigma_{t}|d_{t})$.
+  - end for
+  - Calculate interaction strategies $\hat{\sigma}_{1,T}$ and $\hat{\sigma}_{2,T}$ with Eq. (6).
+  - Use $\hat{\sigma}_{1,T}$ and $\hat{\sigma}_{2,T}$ to interact with the environment to gather data and then compute $\mathbb{P}_{t+1}$.
+- end while
+- Output: $\bar{\sigma} = \frac{1}{T}\sum_{t=1}^{T}\sigma_{t}$
+
+For convenience, we write $\bar{\sigma}_T = \frac{1}{T}\sum_{t=1}^T\sigma_t$ for the average strategy, defined by $\bar{\sigma}_T^i(I) = \frac{\sum_t\pi_{\sigma_t}^i(I)\sigma_t^i(I)}{\sum_t\pi_{\sigma_t}^i(I)}$. One important observation (Zinkevich et al., 2008) is that in a TEGI the exploitability is:
+
+$$
+expl\left(\bar{\sigma}_{T} \mid d^{*}\right) = \frac{1}{T}\left(R_{T}^{*, 1} + R_{T}^{*, 2}\right). \tag{2}
+$$
+
+Therefore, CFR makes $\bar{\sigma}_T$ converge to the NE by minimizing $R_T^{*,1}$ and $R_T^{*,2}$ .
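+The regret minimizer CFR runs at each information set is regret matching. As a self-contained illustration (a matrix-game sketch, not the full extensive-form CFR), self-play regret matching on rock-paper-scissors drives the average strategy profile toward the uniform NE, mirroring the averaging in Eq. (2):
+
```python
def regret_matching(A, T=20000):
    """Self-play regret matching in a zero-sum matrix game: each player
    plays actions proportionally to positive cumulative regret; the
    AVERAGE strategy profile converges to an approximate NE."""
    n, m = len(A), len(A[0])
    reg1 = [1.0] + [0.0] * (n - 1)  # tiny asymmetry to leave the symmetric fixed point
    reg2 = [0.0] * m
    avg1, avg2 = [0.0] * n, [0.0] * m

    def strat(reg):
        pos = [max(r, 0.0) for r in reg]
        s = sum(pos)
        return [p / s for p in pos] if s > 0 else [1.0 / len(reg)] * len(reg)

    for _ in range(T):
        x, y = strat(reg1), strat(reg2)
        u1 = [sum(A[i][j] * y[j] for j in range(m)) for i in range(n)]   # player 1's action values vs y
        u2 = [-sum(x[i] * A[i][j] for i in range(n)) for j in range(m)]  # player 2's action values vs x
        v1 = sum(x[i] * u1[i] for i in range(n))
        v2 = sum(y[j] * u2[j] for j in range(m))
        for i in range(n):
            reg1[i] += u1[i] - v1   # accumulate counterfactual regret
            avg1[i] += x[i] / T     # accumulate the average strategy
        for j in range(m):
            reg2[j] += u2[j] - v2
            avg2[j] += y[j] / T
    return avg1, avg2

# Rock-paper-scissors: the unique NE is uniform play
A = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
x_bar, y_bar = regret_matching(A)
```
+
+The current strategies may cycle; it is only the averaged profile whose exploitability shrinks, at rate $O(1/\sqrt{T})$.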
+
+# 2.3 POSTERIOR SAMPLING FOR REINFORCEMENT LEARNING (PSRL)
+
+Posterior sampling for reinforcement learning (PSRL) (Osband et al., 2013) provides an effective method for solving RL problems when the environment has uncertainty. Formally, PSRL applies to the Bayesian RL setting with a given prior distribution over the transition and reward function; and the agents can access this prior distribution and then update the posterior distribution using the observations collected from interactions with the environment. PSRL provides a framework on how to select the strategy to interact with the environment under the Bayesian setting. The process of PSRL can be decomposed into two steps: (1) sampling one environment from the posterior and computing strategies for agents according to the sampled environment; (2) using the computed strategies to interact with the underlying environment and updating the posterior distribution with the collected data. The two steps are iterated. The strategies that are used to play games are called interaction strategies.
+
+For different kinds of RL problems, the interaction strategies for PSRL are different. For example, PSRL chooses the strategy with the maximum expected value as the interaction strategy in single-agent RL (SARL) problems. However, this cannot be trivially extended to MARL since the learning goal becomes the NE. Hence, in order to apply PSRL to our problem of TEGIs, we need to design proper interaction strategies.
+
+# 3 METHOD
+
+In this section, we formally present our method, which conjoins the merits of PSRL and CFR and can efficiently compute the approximate NE for TEGI tasks. The key is our design of a proper interaction strategy, which can coordinate with CFR to interact with the environment.
+
+Before diving into details, we give an overview of our algorithm. We call two games played against the environment one episode. In episode $t$, we sample a $d_t$ from the posterior distribution $\mathbb{P}_t$ and then apply CFR to $d_t$ to get a strategy profile $(\sigma_t^1, \sigma_t^2)$. We then sample another $\tilde{d}_t$ to calculate the interaction strategies, use the interaction strategies to interact with the environment to collect data, and update the posterior. Our algorithm converges to the NE at a rate of $O(\sqrt{\log(T) / T})$, and the time complexity of computing the interaction strategies is linear in $|H|$.
+
+We now introduce our method in detail. The algorithm is presented in Alg. 1. The detailed form of the interaction strategy will be given in Eq. (6).
+
+To compute the approximate NE, we adopt a CFR algorithm to minimize the following regret for episode $T$ :
+
+$$
+\hat {R} _ {T} ^ {i} = \max _ {\sigma^ {i}} \sum_ {t \leq T} u ^ {i} \left(\sigma^ {i}, \sigma_ {t} ^ {- i} | d _ {t}\right) - \sum_ {t \leq T} u ^ {i} \left(\sigma_ {t} | d _ {t}\right), \tag {3}
+$$
+
+where $d_{t}$ is sampled from the posterior distribution at episode $t$ (i.e., $\mathbb{P}_t$). Then we take $\bar{\sigma} = \frac{1}{T}\sum_{t\leq T}\sigma_t$ as the output strategy of our algorithm. Obviously, simply minimizing $\hat{R}_T^i$ will not make the exploitability $expl(\bar{\sigma}|d^{*})$ small, as $d^{*}$ can be very different from $d_{t}$; we need the interaction strategies to be efficient enough to ensure that the difference between $d_{t}$ and $d^{*}$ is relatively small. The following equation establishes a relation between the exploitability $expl(\bar{\sigma}|d^{*})$ and the regret $\hat{R}_T^i$:
+
+$$
+expl(\bar{\sigma} | d^{*}) = \frac{1}{T}\left(\hat{R}_{T}^{1} + \hat{R}_{T}^{2} + \sum_{i \in \{1, 2\}} \sum_{t \leq T}\left(u^{i}\left(\sigma_{T}^{*,i}, \sigma_{t}^{-i} | d^{*}\right) - u^{i}\left(\sigma_{T}^{\prime, i}, \sigma_{t}^{-i} | d_{t}\right)\right)\right). \tag{4}
+$$
+
+Here by fixing that the player $-i$ plays the strategy $\sigma_t^{-i}$ at episode $t$ , we use $\sigma_T^{*,i} = \arg \max_{\sigma^i} \sum_{t \leq T} u^i(\sigma^i, \sigma_t^{-i}|d^*)$ to denote player $i$ 's optimal strategy in the underlying game $d^*$ , and $\sigma_T'^i = \arg \max_{\sigma^i} \sum_{t \leq T} u^i(\sigma^i, \sigma_t^{-i}|d_t)$ to denote player $i$ 's optimal strategy when the game at episode $t$ is $d_t$ . For convenience, let $\mathcal{G}_T^i = \frac{1}{T} \sum_{t \leq T} (u^i(\sigma_T^{*,i}, \sigma_t^{-i}|d^*) - u^i(\sigma_T'^i, \sigma_t^{-i}|d_t))$ denote the gap between exploitability and the regret from CFR. Intuitively, $\sigma_t$ is generated by CFR with a biased knowledge on the environment. The bias can be described by the term $\mathcal{G}_T^i$ . As we can minimize $\hat{R}_T^i$ by CFR, we only need to minimize $\mathcal{G}_T^i$ in order to minimize $expl(\bar{\sigma}|d^*)$ . Thus, the target of the interaction strategies is to fix the bias, i.e., minimize $\mathcal{G}_T^i$ .
+
+The remaining challenge is to design interaction strategies to minimize $\mathcal{G}_T^i$ efficiently. In episode $t$ , we first draw $\tilde{d}_t = (\tilde{c}_t, \tilde{r}_t) \sim \mathbb{P}_t$ . Then for $i \in \{1, 2\}$ , we compute the strategy that maximizes the cumulative reward gaps between games sampled from the posterior:
+
+$$
+\tilde{\sigma}_{t}^{i} = \arg\max_{\sigma^{i}} \sum_{t'=1}^{t}\left(u^{i}\left(\sigma^{i}, \sigma_{t'}^{-i} | \tilde{d}_{t}\right) - u^{i}\left(\sigma^{i}, \sigma_{t'}^{-i} | d_{t'}\right)\right). \tag{5}
+$$
+
+Interaction strategy: We adopt the following interaction strategies for episode $T$ :
+
+$$
+\hat{\sigma}_{1, T} = \left(\tilde{\sigma}_{T}^{1}, \sigma_{T}^{2}\right) \quad \text{and} \quad \hat{\sigma}_{2, T} = \left(\sigma_{T}^{1}, \tilde{\sigma}_{T}^{2}\right) \tag{6}
+$$
+
+The computation of $\tilde{\sigma}_t^i$ can be implemented in time $O(|H|)$ . To make the whole procedure clear, we use a simple toy game to show the game tree at episode $t$ in Fig. 1. We present $d_t$ , $\tilde{d}_t$ and $\sigma_t$ . The interaction strategy is then calculated from these quantities. It needs to be emphasized that the strategies $\sigma_t^1$ and $\sigma_t^2$ are generated by CFR in episode $t - 1$ and they are used as the opponents' strategies in the interaction strategies.
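+The $O(|H|)$ claim holds because a best response against fixed opponent and chance strategies can be computed in one bottom-up pass over the game tree, visiting each history once. A toy sketch with singleton information sets and a hypothetical node encoding (with genuine imperfect information, the maximization must instead be taken per information set):
+
```python
def best_response_value(node, player):
    """Backward induction: at `player`'s nodes take the best child value;
    at chance/opponent nodes average children under the fixed strategy."""
    if "value" in node:                 # terminal history z
        return node["value"]
    vals = [best_response_value(c, player) for c in node["children"]]
    if node["player"] == player:        # the responding player maximizes
        return max(vals)
    return sum(p * v for p, v in zip(node["probs"], vals))

# Chance flips a fair coin, then player 1 picks one of two actions.
tree = {
    "player": "C", "probs": [0.5, 0.5],
    "children": [
        {"player": 1, "children": [{"value": 1.0}, {"value": -1.0}]},
        {"player": 1, "children": [{"value": 0.0}, {"value": 0.5}]},
    ],
}
br = best_response_value(tree, 1)   # 0.5 * 1.0 + 0.5 * 0.5 = 0.75
```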
+
+With the interaction strategies $(\tilde{\sigma}_T^1,\sigma_T^2)$ and $(\sigma_T^1,\tilde{\sigma}_T^2)$ , we can prove the following bound on the exploitability $expl(\bar{\sigma})$ .
+
+Theorem 1. Let $\xi^i = \sum_{j=1}^{D} \sqrt{\max_{\sigma^i} \sum_{I \in \mathcal{I}^i, D(I) = j} \pi_{\sigma^i}^i(I)}$, and let $\mathbb{E}_{d^*}$ denote the expectation over the prior distribution $\mathbb{P}_0(d^*)$. If the true game is sampled from a prior $\mathbb{P}_0$ over the chance player nodes and terminal nodes, then for $\bar{\sigma}_T$ computed by Alg. 1, we have
+
+$$
+\frac{1}{T}\left(\hat{R}_{T}^{1} + \hat{R}_{T}^{2}\right) = O\left(\frac{1}{T}\left(\xi^{1} + \xi^{2}\right)\sqrt{AT}\right), \tag{7}
+$$
+
+$$
+\mathcal{G}_{T}^{i} = O\left(\frac{1}{T}\left(\sqrt{|Z| T \ln(|Z| T)} + \sqrt{|H^{\mathcal{C}}| D^{\mathcal{C}} A T \ln(|H^{\mathcal{C}}| T)}\right)\right), \tag{8}
+$$
+
+$$
+\mathbb{E}_{d^{*}} expl\left(\bar{\sigma}_{T} | d^{*}\right) = O\left(\frac{1}{T}\left((\xi^{1} + \xi^{2})\sqrt{AT} + \sqrt{|Z| T \ln(|Z| T)} + \sqrt{|H^{\mathcal{C}}| D^{\mathcal{C}} A T \ln(|H^{\mathcal{C}}| T)}\right)\right). \tag{9}
+$$
+
+
+Figure 1: A toy game. Here $P(h_{1}) = \mathcal{C}$, $h_{2}, h_{3}$ are the nodes for player 1 and $h_{4}, h_{5}$ are the nodes for player 2. At episode $t$, $d_{t}$ and $\tilde{d}_{t}$ are sampled from the posterior distribution, shown as $c_{t}, \tilde{c}_{t}, u^{1}(\cdot | d_{t})$ and $u^{1}(\cdot | \tilde{d}_{t})$. Then $\sigma_{t}^{1}$ and $\sigma_{t}^{2}$ can be calculated by CFR. Finally, we use these quantities to calculate the interaction strategies with Eq. (5).
+
+Here $\xi^i$ is a game-dependent parameter related to the structure of the game. Its definition comes from Corollary 2 in Burch (2018). Under some mild assumptions, we have $\sqrt{|\mathcal{I}^i|} \leq \xi^i \leq |\mathcal{I}^i|$.
+
+The present theorem is significant in at least the following aspects.
+
+Firstly, the per-episode running time is linear in the size of the game tree and the bound is sublinear in $T$. Thus, we can expect our algorithm to reach a given approximation error in finite time.
+
+Secondly, our theorem holds for any prior distribution over $d^{*}$ . In practical TEGIs, it is possible that the priors for $h_1$ and $h_2$ , $h_1, h_2 \in H^{\mathcal{C}}$ , are independent. Our theorem and algorithms can also be applied to such situations.
+
+Lastly, our interaction strategies $\hat{\sigma}_{1,T}$ and $\hat{\sigma}_{2,T}$ only contribute to the bound on $\mathcal{G}_T^i$, which can be treated as the error incurred by exploring the environment with the interaction strategies. If we apply PSRL to a single-agent tree game, the Bayesian regret can likewise be viewed as error caused by interacting with the environment. Using the analysis in (Osband et al., 2013), PSRL enjoys an average Bayesian regret bound of order $O(\sqrt{|Z| \ln(|Z|T)/T} + \sqrt{|H^{\mathcal{C}}| D^{\mathcal{C}} A \ln(|H^{\mathcal{C}}|T)/T})$ for a general prior. Therefore, our bound on $\mathcal{G}_T^i$ is of a comparable order to the average Bayesian regret bound in single-agent tree games.
+
+# 3.1 PROOF SKETCH OF THEOREM 1
+
+Before giving the detailed proof, we introduce some additional notation. In episode $t$, we generate two trajectories by interacting with the environment. More specifically, we use $\mathcal{T}_{i,t}$ ($i \in \{1,2\}$) to denote the trajectory generated by $\hat{\sigma}_{i,t}$ in environment $d^*$, and $\mathbb{E}_{\mathcal{T}_{i,t}}$ to denote the expectation over all possible trajectories for episode $t$. Then we use $\mathcal{T}_{i,t}^{\mathcal{C}} = \{h_{1,t}^{\mathcal{C}}, h_{2,t}^{\mathcal{C}}, \dots, h_{m_{i,t},t}^{\mathcal{C}}\}$ to denote the trajectory of the chance player in episode $t$, where $m_{i,t}$ denotes the length of $\mathcal{T}_{i,t}^{\mathcal{C}}$. Furthermore, we denote the terminal node of $\mathcal{T}_{i,t}$ as $z_{i,t}$. Besides, we denote the collection of $\mathcal{T}_{1,1}, \mathcal{T}_{2,1}, \dots, \mathcal{T}_{1,t-1}, \mathcal{T}_{2,t-1}$ and the related rewards as $\mathcal{H}_t$, which represents all the observations before episode $t$. For each history $h$, we use $n_t(h)$ to denote the number of times $h$ has been visited in $\mathcal{H}_t$.
+
+Below we give the key part of the proof. Obviously, we need to bound the regret of CFR, i.e., $\hat{R}_T^i$, and $\mathcal{G}_T^i$. We can directly apply the technique in Theorem 1 of (Burch, 2018, pg. 34) to bound $\hat{R}_T^i$
+
+with Eq. (7). Next we show the key part for bounding $\mathcal{G}_T^i$. Using the definition of $\sigma_T^{\prime, i}$, we have:
+
+$$
+\begin{array}{l} \mathcal{G}_{T}^{i} \leq \frac{1}{T} \sum_{t \leq T}\left(u^{i}\left(\sigma_{T}^{*,i}, \sigma_{t}^{-i} | d^{*}\right) - u^{i}\left(\sigma_{T}^{*,i}, \sigma_{t}^{-i} | d_{t}\right)\right) \\ \leq \frac{1}{T} \max_{\sigma^{i}} \sum_{t \leq T}\left(u^{i}\left(\sigma^{i}, \sigma_{t}^{-i} | d^{*}\right) - u^{i}\left(\sigma^{i}, \sigma_{t}^{-i} | d_{t}\right)\right). \end{array}
+$$
+
+Then, in Lemmas 1 and 2, we decompose the above bound into a weighted sum of $|c^{*}(h) - c_{t}(h)|$ and $|r^{*}(h) - r_{t}(h)|$ terms. Later we show how each term decreases as the number of episodes increases.
+
+Lemma 1. At episode $T$ , with $\hat{\sigma}$ defined in Eq. (6), we can upper bound the expectation of $\mathcal{G}_T^i$ :
+
+$$
+\begin{array}{l} \mathbb{E}_{\mathcal{H}_{T}}\left\{\mathbb{E}_{d^{*}}\left[\mathcal{G}_{T}^{i} \,\big|\, \mathcal{H}_{T}\right]\right\} \leq \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}_{\mathcal{H}_{t}}\left\{\mathbb{E}_{\tilde{d}_{t}, d^{*}}\left[u^{i}\left(\tilde{\sigma}_{t}^{i}, \sigma_{t}^{-i} | \tilde{d}_{t}\right) - u^{i}\left(\tilde{\sigma}_{t}^{i}, \sigma_{t}^{-i} | d^{*}\right) \,\Big|\, \mathcal{H}_{t}\right]\right\} \\ \quad + \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}_{\mathcal{H}_{t}}\left\{\mathbb{E}_{\tilde{d}_{t}, d^{*}}\left[u^{i}\left(\tilde{\sigma}_{t}^{i}, \sigma_{t}^{-i} | d^{*}\right) - u^{i}\left(\tilde{\sigma}_{t}^{i}, \sigma_{t}^{-i} | d_{t}\right) \,\Big|\, \mathcal{H}_{t}\right]\right\}. \tag{10} \end{array}
+$$
+
+Lemma 1 decomposes the expectation of $\mathcal{G}_T^i$ into two terms, representing the reward difference between $d^{*}$ and $\tilde{d}_t$ and the difference between $d^{*}$ and $d_{t}$ . Below we give an intuitive sketch for bounding the first term $u^{i}(\tilde{\sigma}_{t}^{i},\sigma_{t}^{-i}|\tilde{d}_{t}) - u^{i}(\tilde{\sigma}_{t}^{i},\sigma_{t}^{-i}|d^{*})$ .
+
+Lemma 2. At episode $T$ , the expectation of $u^{i}(\tilde{\sigma}_{t}^{i}, \sigma_{t}^{-i}|\tilde{d}_{t}) - u^{i}(\tilde{\sigma}_{t}^{i}, \sigma_{t}^{-i}|d^{*})$ can be bounded by the summation of the difference between $d^{*}$ and $\tilde{d}_{t}$ :
+
+$$
+\begin{array}{l} \mathbb{E}_{\mathcal{H}_{t}}\left\{\mathbb{E}_{\tilde{d}_{t}, d^{*}}\left[u^{i}\left(\tilde{\sigma}_{t}^{i}, \sigma_{t}^{-i} | \tilde{d}_{t}\right) - u^{i}\left(\tilde{\sigma}_{t}^{i}, \sigma_{t}^{-i} | d^{*}\right) \,\Big|\, \mathcal{H}_{t}\right]\right\} \\ \quad \leq \mathbb{E}_{\mathcal{H}_{t}}\left\{\mathbb{E}_{\tilde{d}_{t}, d^{*}} \mathbb{E}_{\mathcal{T}_{i,t}}\left[\sum_{j=1}^{m_{i,t}} \sum_{a \in \alpha(h_{j,t}^{\mathcal{C}})}\left|\tilde{c}_{t}\left(h_{j,t}^{\mathcal{C}}, a\right) - c^{*}\left(h_{j,t}^{\mathcal{C}}, a\right)\right| \,\Big|\, \mathcal{H}_{t}\right]\right\} \\ \quad + \mathbb{E}_{\mathcal{H}_{t}}\left\{\mathbb{E}_{\tilde{d}_{t}, d^{*}} \mathbb{E}_{\mathcal{T}_{i,t}}\left[u^{i}\left(z_{i,t} | \tilde{r}_{t}\right) - u^{i}\left(z_{i,t} | r^{*}\right) \,\Big|\, \mathcal{H}_{t}\right]\right\}. \tag{11} \end{array}
+$$
+
+According to the definition of the expectation $\mathbb{E}_{\tilde{d}_t,d^{*}}\mathbb{E}_{\mathcal{T}_{i,t}}$, we can see that Eq. (11) is a weighted sum of $|\tilde{c}_{t}(h) - c^{*}(h)|$ and $|u^{i}(h|\tilde{r}_{t}) - u^{i}(h|r^{*})|$. Recall that $u^{i}(h|r)$ refers to the expectation of $r(h)$ for player $i$. Intuitively, we can apply a concentration bound to $|\tilde{c}_{t}(h) - c^{*}(h)|$, so a history $h$ with a large weight should be visited more often. Notice that the weight in Eq. (11) is essentially the probability of reaching $h$ under our interaction strategy $\hat{\sigma}$ and the real environment $c^{*}$. Hence, if we use $\hat{\sigma}$ to interact with the environment, we can expect our algorithm to visit histories with large weights sufficiently often.
+
+To simplify the derivation, we tentatively assume that $\tilde{d}_t$ and $d^*$ are identically distributed at the nodes $h_{j,t}^{\mathcal{C}}$ and $z_{i,t}$, conditioned on $\mathcal{H}_t$. That is, for any node $h$, with $Pr$ referring to the probability of some event, we here assume that
+
+$$
+P r (d ^ {*} | \mathcal {H} _ {t}, h) = P r (\tilde {d} _ {t} | \mathcal {H} _ {t}, h).
+$$
+
+In fact this assumption fails when $h$ is reached, because the probability of reaching $h$ is influenced by $d^*$ and $\tilde{d}_t$. We will remove this assumption and provide a rigorous proof in Appendix A. For $(h_{j,t}^{\mathcal{C}}, a)$ and $z_{i,t}$, we can insert the empirical mean estimates $\bar{c}_t(h_{j,t}^{\mathcal{C}}, a)$ and $\bar{u}_t^i(z_{i,t})$ and use frequentist concentration bounds (Hoeffding, 1994; Weissman et al., 2003). Then for any $\delta \in (0,1)$, we have the following inequalities:
+
+$$
+\begin{array}{l} \mathbb{E}_{\mathcal{H}_{t}}\left\{\mathbb{E}_{\tilde{d}_{t}, d^{*}} \mathbb{E}_{\mathcal{T}_{i,t}}\left[\sum_{a \in \alpha(h_{j,t}^{\mathcal{C}})}\left|\tilde{c}_{t}(h_{j,t}^{\mathcal{C}}, a) - c^{*}(h_{j,t}^{\mathcal{C}}, a)\right| \,\Big|\, \mathcal{H}_{t}\right]\right\} \leq \mathbb{E}_{\mathcal{H}_{t}}\left[2\sqrt{\frac{2\ln(2^{A}/\delta)}{\max(n_{t}(h_{j,t}^{\mathcal{C}}), 1)}} \,\Big|\, \mathcal{H}_{t}\right] + 2|H^{\mathcal{C}}|\delta, \\ \mathbb{E}_{\mathcal{H}_{t}}\left\{\mathbb{E}_{\tilde{d}_{t}, d^{*}} \mathbb{E}_{\mathcal{T}_{i,t}}\left[u^{i}(z_{i,t}|\tilde{r}_{t}) - u^{i}(z_{i,t}|r^{*}) \,\Big|\, \mathcal{H}_{t}\right]\right\} \leq \mathbb{E}_{\mathcal{H}_{t}}\left[2\sqrt{\frac{2\ln(2/\delta)}{\max(n_{t}(z_{i,t}), 1)}} \,\Big|\, \mathcal{H}_{t}\right] + 4|Z|\delta. \end{array}
+$$
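+As a quick sanity check of the concentration step (the sample size, mean, and confidence level below are hypothetical, not the paper's constants), the Hoeffding radius shrinks as $O(\sqrt{\ln(1/\delta)/n})$ and covers the empirical deviation:
+
```python
import math
import random

def hoeffding_radius(n, delta):
    """Two-sided Hoeffding radius for the mean of n i.i.d. samples in
    [0, 1]: |empirical mean - true mean| <= radius w.p. >= 1 - delta."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

rng = random.Random(0)
n, p, delta = 5000, 0.3, 0.01
empirical = sum(rng.random() < p for _ in range(n)) / n   # Bernoulli(p) mean estimate
covered = abs(empirical - p) <= hoeffding_radius(n, delta)
```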
+
+Then for a history $h \in Z \cup H^{\mathcal{C}}$, we have $\sum_{i=1}^{n_t(h)}\sqrt{1/i} \leq 2\sqrt{n_t(h)}$. Applying Jensen's inequality to the summation over $Z$ and $H^{\mathcal{C}}$, we can get the following bound:
+
+$$
+\sum_{t=1}^{T} \mathbb{E}_{\mathcal{H}_{t}}\left\{\mathbb{E}_{\tilde{d}_{t}, d^{*}}\left[u^{i}(\tilde{\sigma}_{t}^{i}, \sigma_{t}^{-i} | \tilde{d}_{t}) - u^{i}(\tilde{\sigma}_{t}^{i}, \sigma_{t}^{-i} | d^{*}) \,\Big|\, \mathcal{H}_{t}\right]\right\} = O\left(\sqrt{|Z| T \ln(|Z| T)} + \sqrt{|H^{\mathcal{C}}| D^{\mathcal{C}} A T \ln(|H^{\mathcal{C}}| T)}\right).
+$$
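+The counting step $\sum_{i=1}^{n}\sqrt{1/i} = O(\sqrt{n})$ used above follows from a standard integral comparison, spelled out here for completeness:
+
+$$
+\sum_{i=1}^{n}\frac{1}{\sqrt{i}} \leq 1 + \int_{1}^{n}\frac{dx}{\sqrt{x}} = 2\sqrt{n} - 1 \leq 2\sqrt{n},
+$$
+
+so the per-visit confidence radii at a history visited $n_t(h)$ times sum to $O(\sqrt{n_t(h)})$.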
+
+We can apply the same method to $\sum_{t=1}^{T} \mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t, d^*}\left[u^i(\tilde{\sigma}_t^i, \sigma_t^{-i}|d^*) - u^i(\tilde{\sigma}_t^i, \sigma_t^{-i}|d_t)\right] \mid \mathcal{H}_t\right\}$ and get Eq. (8). Combining the results of Eqs. (7) and (8), we finish the proof of Theorem 1.
+
+# 4 RELATED WORK
+
+Other methods for TEGIs under an unknown environment: There also exist some works on TEGIs under an unknown environment. Fictitious play (FP) (Brown, 1951) is another popular algorithm for approximating an NE. In FP, the agent takes the best response to the average strategy of its opponent. Heinrich et al. (2015) extend FP to TEGIs. Though it may be easier to combine FP with other machine learning techniques than CFR, when the chance player is known, the convergence rate of FP is usually worse than that of CFR variants. Monte Carlo CFR with outcome sampling (MCCFR-OS) (Lanctot et al., 2009) can also be applied to TEGIs to approximate an NE in a model-free style. It uses Monte Carlo estimates of the environment to conduct CFR and can converge to the NE, but since it updates without a model of the environment, it is much less efficient than model-based methods. There is also work that applies SARL methods to TEGIs; for example, Srinivasan et al. (2018) adapt actor-critic methods to games in a model-free style.
+
+MDP: SARL problems are often formalized as Markov Decision Processes (MDPs). In the simplest MDP with no transitions, i.e. the multi-armed bandit problem, the problem-dependent regret upper bound of PSRL (also named Thompson Sampling in bandit problems) has been carefully analyzed (Agrawal & Goyal, 2017). Problem-dependent bounds for general MDPs remain an open problem. Besides PSRL, there is another class of provable algorithms for MDPs (Jaksch et al., 2010; Azar et al., 2013) following the Optimism in the Face of Uncertainty principle: they estimate the uncertainty of the underlying MDP and then interact with the environment using the policy that is optimal under the optimistic estimate.
+
+Stochastic games: the stochastic game (Littman, 1994) is another kind of multi-agent system. In a stochastic game, players take actions at each state, after which the environment transits to a new state and returns immediate rewards. Nash Q-learning (Hu & Wellman, 2003) converges to approximate NE by extending Q-learning to games, but it lacks finite-time analysis. Other work (Szepesvári & Littman, 1996; Perolat et al., 2015; Wei et al., 2017) concentrates on two-player zero-sum stochastic games in the RL setting. These games do not involve imperfect information, which makes them different from TEGIs.
+
+# 5 EXPERIMENTS
+
+To empirically evaluate our algorithm, we test it on imperfect-information poker games. In this section, we first introduce our baseline methods and then present the details of the games. Finally, we show the results.
+
+We choose three kinds of methods as our baselines. The first is Fictitious Play (FP) and the second is Monte Carlo CFR with outcome sampling (MCCFR-OS); both are existing algorithms for MARL, so we can compare our algorithm against them. As the third kind of baseline, we choose variants of our algorithm, which we use to compare different choices of interaction strategies. Details of the baselines are given below:
+
+- Fictitious self-play (FSP): FSP is another popular algorithm to solve games in the RL setting. In FP, when $d^{*}$ is known, each player chooses the best response to its opponent's average strategy. When $d^{*}$ is not known, other RL algorithms are needed to learn the best response. We combine FSP with two kinds of RL algorithms: 1) FSP with a fitted-Q iteration algorithm (FSP-fitted-Q): we follow Heinrich et al. (2015) and use fitted-Q iteration to learn the best response, with the same hyperparameters as reported there; 2) FSP with PSRL (FSP-PSRL): we combine FP and PSRL to give a new baseline: in episode $t$, we compute player $i$'s best response under $d_t \sim \mathbb{P}_t$, that is, $\arg \max_{\sigma^i} \sum_{t' < t} u^i(\sigma^i, \sigma_{t'}^{-i}|d_t)$.
+
+Figure 2: Results for different algorithms on variants of Leduc-4 (panel a) and Leduc-5 (panel b). Here "default" refers to our algorithm CFR-PSRL.
+
+- Monte Carlo CFR with outcome sampling (MCCFR-OS): MCCFR-OS uses samples of the game tree to conduct CFR. Estimated counterfactual values are used to update the policies and the average policies can converge to NE. This method can be applied to TEGIs under the RL setting. We use $\epsilon$ -greedy with $\epsilon = 0.1$ as the exploration strategy for MCCFR-OS in our experiments.
+- Variants of Alg. 1: Though we have proved the convergence of Alg. 1 with the interaction strategy $(\tilde{\sigma}^i,\sigma^{-i})$, a provably convergent choice does not necessarily work best in practice. In our experiments, we evaluate four interaction strategies: 1) Random: the players take actions randomly; 2) Naive: the players use the output of the CFR procedure, i.e., $\sigma_{t}$, to interact with the environment; 3) Bestresp: $(\tilde{\sigma}_t^{i},\sigma_t^{-i})$ where $\tilde{\sigma}_t^i$ is the best response to $\sigma_t^{-i}$ under $\tilde{d}_t$, i.e., $\tilde{\sigma}_t^i = \arg \max_{\sigma^i}u^i (\sigma^i,\sigma_t^{-i}|\tilde{d}_t)$; 4) Default: the interaction strategies in Eq. (6).
+
+We test these algorithms on variants of Leduc Hold'em poker (Southey et al., 2012), which is widely used in imperfect-information game solving. We generate games by keeping the tree structure of Leduc Hold'em poker and replacing $c$ and $r$ by randomly generated functions. More specifically, when generating the tree structure, to control the size of the generated game tree, we restrict each player not to bid more than 4 or 5 times the big blind. The numbers of histories in the generated games are 9435 and 34776 respectively. The reward function $r^i(h)$ is a binary distribution: with probability $p$ the value of $r^1(h)$ is $-1$, and with probability $1 - p$ it is $1$. The prior $\mathbb{P}_0(r^1(h))$ is a uniform distribution over the parameter $p \in [0,1]$. Obviously, $r^2(h) = -r^1(h)$. Let $e^d$ denote the vector in $\mathbb{R}^d$ with every element equal to 1; $c(h)$ is sampled from Dirichlet$(e^{|A(h)|})$.
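+A minimal sketch of this sampling scheme (illustrative only; the function names are ours, the number of chance actions is arbitrary, and the game-tree structure itself is omitted) might look like:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+
+def sample_chance(num_actions):
+    # c(h) ~ Dirichlet(e^{|A(h)|}): a uniform draw over the probability simplex.
+    return rng.dirichlet(np.ones(num_actions))
+
+def sample_reward():
+    # r^1(h) is a +1/-1 binary variable; its parameter p is drawn
+    # from the uniform prior P_0 over [0, 1].
+    p = rng.random()
+    def draw():
+        r1 = -1.0 if rng.random() < p else 1.0
+        return r1, -r1          # zero-sum: r^2(h) = -r^1(h)
+    return draw
+
+c = sample_chance(3)    # chance distribution at one hypothetical history
+draw = sample_reward()  # reward sampler at one hypothetical terminal node
+r1, r2 = draw()
+```
+
+Repeating these draws at every chance node and terminal node of the fixed Leduc tree yields one random game instance.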
+
+We generate 20 variants of Leduc-4 and Leduc-5 respectively. On each generated game, each algorithm updates its strategies 10000 times, and after each update it interacts with the environment for 2 rounds of games. The results are shown in Fig. 2. As Fig. 2 shows, the exploitability of naive CFR fails to decrease after 10000 rounds on both Leduc-4 and Leduc-5, which might be caused by the lack of efficient exploration of the environment. MCCFR-OS and FSP-fitted-Q perform poorly compared to the other algorithms, possibly due to the data-inefficiency of model-free methods and their inefficient exploration strategies. Random interaction and FP gradually decrease the exploitability, but our algorithm decreases it at a higher speed. Thus the empirical results show that our algorithm outperforms the baselines on the two games.
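+Exploitability, the metric reported in Fig. 2, can be illustrated in the simpler setting of a zero-sum matrix game (a deliberate simplification of our extensive-form setting; the matrix and strategy profiles are our own toy choices):
+
+```python
+import numpy as np
+
+def exploitability(A, s1, s2):
+    """Exploitability of the profile (s1, s2) in a zero-sum matrix game
+    with payoff matrix A to player 1: the total amount the two players
+    could gain by best-responding. It is 0 exactly at a Nash equilibrium."""
+    br1 = np.max(A @ s2)       # player 1's best-response value against s2
+    br2 = np.max(-(s1 @ A))    # player 2's best-response value against s1
+    return float(br1 + br2)
+
+A = np.array([[1.0, -1.0],
+              [-1.0, 1.0]])    # matching pennies
+ne = np.array([0.5, 0.5])      # its unique Nash equilibrium strategy
+
+e_ne = exploitability(A, ne, ne)                      # → 0.0 at the NE
+e_pure = exploitability(A, np.array([1.0, 0.0]), ne)  # → 1.0: pure play is exploitable
+print(e_ne, e_pure)
+```
+
+In the extensive-form games of our experiments the best-response values are computed by tree traversal rather than a matrix product, but the definition of the metric is the same.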
+
+# 6 CONCLUSIONS AND DISCUSSIONS
+
+In this work, we consider the problem of posterior sampling for TEGIs, which is a class of multi-agent reinforcement learning problems. By a novel design of interaction strategies, we conjoin the merits of PSRL and CFR and present a provably convergent algorithm for TEGIs. Our algorithm also works well empirically.
+In the future, there are various directions to improve the result. For example, our bound is a Bayesian bound describing the expected performance. Considering one sample from the prior, frequentist methods such as UCBVI (Azar et al., 2013) give a high-probability regret bound for SARL of a similar order to PSRL's. Further, compared with the worst-case bound, problem-dependent performance is much more important. Though our method may perform better on a specific TEGI than the bound in Theorem 1 suggests, it is likely not the best in the sense of problem-dependent performance.
+Another limitation is that our method heavily relies on the structure of TEGIs and the solution concept of Nash equilibrium. Thus, further work is needed to extend posterior sampling to more complicated multi-agent systems, such as stochastic games (Littman, 1994) and extensive games with more than two players.
+Moreover, generalization for PSRL is another important but challenging direction for future work. A systematic investigation is needed to bridge the gap between provable tabular RL algorithms and PSRL methods with generalization. Bootstrapping might be one possible approach: Osband et al. (2016) apply the principle of PSRL to DQN by using bootstrapping. Another possible direction is to adapt more practical Bayesian inference algorithms to RL tasks.
+
+# ACKNOWLEDGEMENT
+
+This work was supported by the National Key Research and Development Program of China (No. 2017YFA0700904), NSFC Projects (Nos. 61620106010, U19B2034, U1811461), Beijing NSF Project (No. L172037), Beijing Academy of Artificial Intelligence (BAAI), Tsinghua-Huawei Joint Research Program, a grant from Tsinghua Institute for Guo Qiang, Tiangong Institute for Intelligent Computing, the JP Morgan Faculty Research Program and the NVIDIA NVAIL Program with GPU/DGX Acceleration.
+
+# REFERENCES
+
+Shipra Agrawal and Navin Goyal. Near-optimal regret bounds for Thompson sampling. Journal of the ACM (JACM), 64(5):30, 2017.
+Mohammad Gheshlaghi Azar, Rémi Munos, and Hilbert J Kappen. Minimax PAC bounds on the sample complexity of reinforcement learning with a generative model. Machine Learning, 91(3): 325-349, 2013.
+George W Brown. Iterative solution of games by fictitious play, 1951. Activity Analysis of Production and Allocation (TC Koopmans, Ed.), pp. 374-376, 1951.
+Neil Burch. Time and space: Why imperfect information games are hard. 2018.
+Lucian Buşoniu, Robert Babuška, and Bart De Schutter. Multi-agent reinforcement learning: An overview. In Innovations in multi-agent systems and applications-1, pp. 183-221. Springer, 2010.
+Olivier Chapelle and Lihong Li. An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems, pp. 2249-2257, 2011.
+Richard Ericson and Ariel Pakes. Markov-perfect industry dynamics: A framework for empirical work. The Review of economic studies, 62(1):53-82, 1995.
+Johannes Heinrich, Marc Lanctot, and David Silver. Fictitious self-play in extensive-form games. In International Conference on Machine Learning, pp. 805-813, 2015.
+Wassily Hoeffding. Probability inequalities for sums of bounded random variables. In The Collected Works of Wassily Hoeffding, pp. 409-426. Springer, 1994.
+Junling Hu and Michael P Wellman. Nash Q-learning for general-sum stochastic games. Journal of Machine Learning Research, 4(Nov):1039-1069, 2003.
+
+Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(Apr):1563-1600, 2010.
+Marc Lanctot, Kevin Waugh, Martin Zinkevich, and Michael Bowling. Monte carlo sampling for regret minimization in extensive games. In Advances in neural information processing systems, pp. 1078-1086, 2009.
+Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In Machine learning proceedings 1994, pp. 157-163. Elsevier, 1994.
+Ian Osband and Benjamin Van Roy. Why is posterior sampling better than optimism for reinforcement learning? arXiv preprint arXiv:1607.00215, 2016.
+Ian Osband, Daniel Russo, and Benjamin Van Roy. (more) efficient reinforcement learning via posterior sampling. In Advances in Neural Information Processing Systems, pp. 3003-3011, 2013.
+Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped DQN. In Advances in Neural Information Processing Systems, pp. 4026-4034, 2016.
+Martin J Osborne and Ariel Rubinstein. A course in game theory. MIT press, 1994.
+Julien Perolat, Bruno Scherrer, Bilal Piot, and Olivier Pietquin. Approximate dynamic programming for two-player zero-sum markov games. In International Conference on Machine Learning (ICML 2015), 2015.
+Daniel Russo and Benjamin Van Roy. Learning to optimize via posterior sampling. Mathematics of Operations Research, 39(4):1221-1243, 2014.
+Daniel J Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband, Zheng Wen, et al. A tutorial on Thompson sampling. Foundations and Trends in Machine Learning, 11(1):1-96, 2018.
+Finnegan Southey, Michael P Bowling, Bryce Larson, Carmelo Piccione, Neil Burch, Darse Billings, and Chris Rayner. Bayes' bluff: Opponent modelling in poker. arXiv preprint arXiv:1207.1411, 2012.
+Sriram Srinivasan, Marc Lanctot, Vinicius Zambaldi, Julien Pérolat, Karl Tuyls, Rémi Munos, and Michael Bowling. Actor-critic policy optimization in partially observable multiagent environments. In Advances in Neural Information Processing Systems, pp. 3422-3435, 2018.
+Malcolm Strens. A bayesian framework for reinforcement learning. In ICML, volume 2000, pp. 943-950, 2000.
+Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
+Csaba Szepesvári and Michael L Littman. Generalized Markov decision processes: Dynamic-programming and reinforcement-learning algorithms. In Proceedings of International Conference of Machine Learning, volume 96, 1996.
+Chen-Yu Wei, Yi-Te Hong, and Chi-Jen Lu. Online reinforcement learning in stochastic games. In Advances in Neural Information Processing Systems, pp. 4987-4997, 2017.
+Tsachy Weissman, Erik Ordentlich, Gadiel Seroussi, Sergio Verdu, and Marcelo J Weinberger. Inequalities for the L1 deviation of the empirical distribution. Hewlett-Packard Labs, Tech. Rep, 2003.
+Martin Zinkevich, Michael Johanson, Michael Bowling, and Carmelo Piccione. Regret minimization in games with incomplete information. In Advances in neural information processing systems, pp. 1729-1736, 2008.
+
+# A PROOF FOR THEOREM 1
+
+Let $\bar{\sigma} = \frac{1}{T}\sum_{t\leq T}\sigma_t$ denote the average strategy tuple up to episode $T$ . We can decompose the exploitability at episode $T$ into the CFR regret and an extra exploration term:
+
+$$
+\mathrm{expl}(\bar{\sigma}|d^*) = \frac{1}{T}\Big(\hat{R}_T^1 + \hat{R}_T^2 + \sum_{i\in\{1,2\}}\sum_{t\leq T}\big(u^i(\sigma_T^{*,i},\sigma_t^{-i}|d^*) - u^i(\sigma_T^{\prime i},\sigma_t^{-i}|d_t)\big)\Big),
+$$
+
+where $\hat{R}_T^i,\sigma_T^{*,i}$ and $\sigma_T^{\prime i}$ are formally defined as:
+
+$$
+\hat{R}_T^i = \max_{\sigma^i}\sum_{t\leq T} u^i(\sigma^i,\sigma_t^{-i}|d_t) - \sum_{t\leq T} u^i(\sigma_t|d_t),
+$$
+
+$$
+\sigma_T^{*,i} = \arg\max_{\sigma^i}\sum_{t\leq T} u^i(\sigma^i,\sigma_t^{-i}|d^*),
+$$
+
+$$
+\sigma_T^{\prime i} = \arg\max_{\sigma^i}\sum_{t\leq T} u^i(\sigma^i,\sigma_t^{-i}|d_t).
+$$
+
+Here $\hat{R}_T^i$ is the regret from CFR, and $\sigma_T^{*,i}$ and $\sigma_T^{\prime i}$ represent the optimal strategies under $d^*$ and $d_t$ respectively, fixing that its opponent plays the strategy $\sigma_t^{-i}$ at episode $t$.
+
+Recall that $d_t$ is sampled from the posterior distribution $\mathbb{P}_t$ . The proof for this decomposition is given below:
+
+Proof. Inserting the term $u^{i}(\sigma_{T}^{\prime i},\sigma_{t}^{-i}|d_{t})$ for each episode $t$, we have:
+
+$$
+\begin{aligned}
+\sum_{t\leq T} u^i(\sigma_T^{*,i},\sigma_t^{-i}|d^*) &= \sum_{t\leq T}\big(u^i(\sigma_T^{*,i},\sigma_t^{-i}|d^*) - u^i(\sigma_T^{\prime i},\sigma_t^{-i}|d_t) + u^i(\sigma_T^{\prime i},\sigma_t^{-i}|d_t)\big) \\
+&= \sum_{t\leq T}\big(u^i(\sigma_T^{*,i},\sigma_t^{-i}|d^*) - u^i(\sigma_T^{\prime i},\sigma_t^{-i}|d_t)\big) + \sum_{t\leq T} u^i(\sigma_t|d_t) + \hat{R}_T^i.
+\end{aligned}
+$$
+
+Moreover, with $u^{1}(\sigma_{t}|d_{t}) + u^{2}(\sigma_{t}|d_{t}) = 0$ and the definition of $expl$ , we finish the proof.
+
+
+
+By directly applying the result in (Burch, 2018), we can upper bound the CFR regret with
+
+$$
+\hat{R}_T^i \leq \xi^i\sqrt{AT},
+$$
+
+where $\xi^i = \sum_{j=1}^{D} \sqrt{\max_{\sigma^i} \sum_{I \in \mathcal{I}^i, D(I) = j} \pi_{\sigma^i}^i(I)}$ represents the complexity of the game tree.
+
+For convenience, let $\mathcal{G}_T^i = \frac{1}{T}\sum_{t\leq T}(u^i (\sigma_T^{*,i},\sigma_t^{-i}|d^*) - u^i (\sigma_T^{\prime i},\sigma_t^{-i}|d_t))$ denote the gap between the exploitability and the regret from CFR. Since we have upper bounded $\hat{R}_T^i$, we only need to bound $\mathcal{G}_T^i$ in order to upper bound the exploitability of the average strategy $\bar{\sigma}$.
+
+Intuitively, $\mathcal{G}_T^i$ reflects the difference between $d^{*}$ and $d_{t}$. Thus we need to design a suitable interaction strategy to make sure that $\mathcal{G}_T^i$ is small. Using the definition of $\sigma_T^{\prime i}$, we have
+
+$$
+\begin{aligned}
+\mathcal{G}_T^i &\leq \frac{1}{T}\sum_{t\leq T}\big(u^i(\sigma_T^{*,i},\sigma_t^{-i}|d^*) - u^i(\sigma_T^{*,i},\sigma_t^{-i}|d_t)\big) \\
+&\leq \frac{1}{T}\max_{\sigma^i}\sum_{t\leq T}\big(u^i(\sigma^i,\sigma_t^{-i}|d^*) - u^i(\sigma^i,\sigma_t^{-i}|d_t)\big).
+\end{aligned}
+$$
+
+We select the interaction strategy as follows. First, we draw $\tilde{d}_t\sim \mathbb{P}_t$ . For $i\in \{1,2\}$ , we compute
+
+$$
+\tilde{\sigma}_t^i = \arg\max_{\sigma^i}\sum_{t'\leq t}\big(u^i(\sigma^i,\sigma_{t'}^{-i}|\tilde{d}_t) - u^i(\sigma^i,\sigma_{t'}^{-i}|d_{t'})\big). \tag{12}
+$$
+
+We then use $(\tilde{\sigma}_t^1,\sigma_t^2)$ and $(\tilde{\sigma}_t^2,\sigma_t^1)$ to interact with the environment. The following lemma provides an upper bound on $\mathcal{G}_T^i$:
+
+Lemma 3. With $\tilde{\sigma}_t^i$ defined in Eq. (12), the expectation of $\mathcal{G}_T^i$ can be bounded by:
+
+$$
+\begin{aligned}
+\mathbb{E}_{\mathcal{H}_T}\left\{\mathbb{E}_{d^*,d_T}\left[\mathcal{G}_T^i \,\Big|\, \mathcal{H}_T\right]\right\} \leq{} & \frac{1}{T}\sum_{t=1}^{T}\mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d^*}\left[u^i(\tilde{\sigma}_t^i,\sigma_t^{-i}|\tilde{d}_t) - u^i(\tilde{\sigma}_t^i,\sigma_t^{-i}|d^*)\right]\Big|\,\mathcal{H}_t\right\} \\
+&+ \frac{1}{T}\sum_{t=1}^{T}\mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d^*,d_t}\left[u^i(\tilde{\sigma}_t^i,\sigma_t^{-i}|d^*) - u^i(\tilde{\sigma}_t^i,\sigma_t^{-i}|d_t)\right]\Big|\,\mathcal{H}_t\right\}. \tag{13}
+\end{aligned}
+$$
+
+This lemma decomposes $\mathcal{G}_T^i$ into two terms, which can be bounded with careful analysis of the posterior distribution. We first give the proof for this lemma.
+
+Proof. We have
+
+$$
+\begin{aligned}
+&\mathbb{E}_{\mathcal{H}_T}\left\{\mathbb{E}_{d^*,d_T}\left[\max_{\hat{\sigma}^i}\Big(\sum_{t=1}^{T}\big(u^i(\hat{\sigma}^i,\sigma_t^{-i}|d^*) - u^i(\hat{\sigma}^i,\sigma_t^{-i}|d_t)\big)\Big)\right]\Big|\,\mathcal{H}_T\right\} \\
+&\stackrel{\text{①}}{=} \mathbb{E}_{\mathcal{H}_T}\left\{\mathbb{E}_{\tilde{d}_T,d_T}\left[\max_{\tilde{\sigma}^i}\Big(\sum_{t=1}^{T}\big(u^i(\tilde{\sigma}^i,\sigma_t^{-i}|\tilde{d}_T) - u^i(\tilde{\sigma}^i,\sigma_t^{-i}|d_t)\big)\Big)\right]\Big|\,\mathcal{H}_T\right\} \\
+&\stackrel{\text{②}}{=} \mathbb{E}_{\mathcal{H}_T}\left\{\mathbb{E}_{\tilde{d}_T,d_T}\left[\sum_{t=1}^{T}\big(u^i(\tilde{\sigma}_T^i,\sigma_t^{-i}|\tilde{d}_T) - u^i(\tilde{\sigma}_T^i,\sigma_t^{-i}|d_t)\big)\right]\Big|\,\mathcal{H}_T\right\} \\
+&= \mathbb{E}_{\mathcal{H}_T}\left\{\mathbb{E}_{\tilde{d}_T}\left[\sum_{t=1}^{T-1}\big(u^i(\tilde{\sigma}_T^i,\sigma_t^{-i}|\tilde{d}_T) - u^i(\tilde{\sigma}_T^i,\sigma_t^{-i}|d_t)\big)\right]\Big|\,\mathcal{H}_T\right\} \\
+&\quad + \mathbb{E}_{\mathcal{H}_T}\left\{\mathbb{E}_{\tilde{d}_T,d_T}\left[u^i(\tilde{\sigma}_T^i,\sigma_T^{-i}|\tilde{d}_T) - u^i(\tilde{\sigma}_T^i,\sigma_T^{-i}|d_T)\right]\Big|\,\mathcal{H}_T\right\}
+\end{aligned}
+$$
+
+$$
+\begin{aligned}
+&\stackrel{\text{③}}{\leq} \mathbb{E}_{\mathcal{H}_T}\left\{\mathbb{E}_{\tilde{d}_T}\left[\max_{\tilde{\sigma}^i}\Big(\sum_{t=1}^{T-1}\big(u^i(\tilde{\sigma}^i,\sigma_t^{-i}|\tilde{d}_T) - u^i(\tilde{\sigma}^i,\sigma_t^{-i}|d_t)\big)\Big)\right]\Big|\,\mathcal{H}_T\right\} \\
+&\quad + \mathbb{E}_{\mathcal{H}_T}\left\{\mathbb{E}_{\tilde{d}_T,d_T}\left[u^i(\tilde{\sigma}_T^i,\sigma_T^{-i}|\tilde{d}_T) - u^i(\tilde{\sigma}_T^i,\sigma_T^{-i}|d_T)\right]\Big|\,\mathcal{H}_T\right\}
+\end{aligned}
+$$
+
+$$
+\begin{aligned}
+&\stackrel{\text{④}}{=} \mathbb{E}_{\mathcal{H}_{T-1}}\left\{\mathbb{E}_{\tilde{d}_{T-1},d_{T-1}}\left[\sum_{t=1}^{T-1}\big(u^i(\tilde{\sigma}_{T-1}^i,\sigma_t^{-i}|\tilde{d}_{T-1}) - u^i(\tilde{\sigma}_{T-1}^i,\sigma_t^{-i}|d_t)\big)\right]\Big|\,\mathcal{H}_{T-1}\right\} \\
+&\quad + \mathbb{E}_{\mathcal{H}_T}\left\{\mathbb{E}_{\tilde{d}_T,d_T}\left[u^i(\tilde{\sigma}_T^i,\sigma_T^{-i}|\tilde{d}_T) - u^i(\tilde{\sigma}_T^i,\sigma_T^{-i}|d_T)\right]\Big|\,\mathcal{H}_T\right\}
+\end{aligned}
+$$
+
+$$
+\begin{aligned}
+&\stackrel{\text{⑤}}{\leq} \sum_{t=1}^{T}\mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d_t}\left[u^i(\tilde{\sigma}_t^i,\sigma_t^{-i}|\tilde{d}_t) - u^i(\tilde{\sigma}_t^i,\sigma_t^{-i}|d_t)\right]\Big|\,\mathcal{H}_t\right\} \\
+&\stackrel{\text{⑥}}{=} \sum_{t=1}^{T}\mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d^*}\left[u^i(\tilde{\sigma}_t^i,\sigma_t^{-i}|\tilde{d}_t) - u^i(\tilde{\sigma}_t^i,\sigma_t^{-i}|d^*)\right]\Big|\,\mathcal{H}_t\right\} \\
+&\quad + \sum_{t=1}^{T}\mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d^*,d_t}\left[u^i(\tilde{\sigma}_t^i,\sigma_t^{-i}|d^*) - u^i(\tilde{\sigma}_t^i,\sigma_t^{-i}|d_t)\right]\Big|\,\mathcal{H}_t\right\}.
+\end{aligned}
+$$
+
+Here equality ① holds since $d^{*}$ and $\tilde{d}_T$ are identically distributed conditioning on $\mathcal{H}_T$. Equality ② holds by the definition of $\tilde{\sigma}_T^i$. Then we get inequality ③ by taking the maximum. Equality ④ holds by the definition of $\tilde{\sigma}_{T-1}^{i}$. Inequality ⑤ holds by applying induction over episodes $t$. Finally we get equality ⑥ by inserting $u^{i}(\tilde{\sigma}_{t}^{i}, \sigma_{t}^{-i}|d^{*})$.
+
+Therefore, we finish the proof.
+
+
+
+It remains to give upper bounds for the above two terms. As mentioned in Sec. 3.1, we introduce some additional notation. In episode $t$, we generate two trajectories by interacting with the environment. More specifically, we use $\mathcal{T}_{i,t}$ ($i \in \{1,2\}$) to denote the trajectory generated by the corresponding interaction strategy under $d^*$, and $\mathbb{E}_{\mathcal{T}_{i,t}}$ to denote the expectation over all possible trajectories $\mathcal{T}_{i,t}$. We denote by $\mathcal{T}_{i,t}^{\mathcal{C}} = \{h_{1,t}^{\mathcal{C}}, h_{2,t}^{\mathcal{C}}, \dots, h_{m_{i,t},t}^{\mathcal{C}}\}$ the chance player's part of the trajectory in episode $t$, where $m_{i,t}$ denotes the length of $\mathcal{T}_{i,t}^{\mathcal{C}}$, and we denote the terminal node of the trajectory by $z_{i,t}$. Besides, we denote the collection $\mathcal{T}_{1,1}, \mathcal{T}_{2,1}, \dots, \mathcal{T}_{1,t-1}, \mathcal{T}_{2,t-1}$ by $\mathcal{H}_t$, which represents all the observations before episode $t$. For each history $h$, we use $n_t(h)$ to denote the number of times $h$ has been visited in $\mathcal{H}_t$.
+
+Then we concentrate on the first term $\sum_{t=1}^{T} \mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t, d^*}\left[u^i(\tilde{\sigma}_t^i, \sigma_t^{-i}|\tilde{d}_t) - u^i(\tilde{\sigma}_t^i, \sigma_t^{-i}|d^*)\right]\bigg|\mathcal{H}_t\right\}$; the second term has a similar proof. Since the strategy tuple is the same in the two utilities, we can decompose their difference with the lemma below.
+
+Lemma 4. At episode $t$ , the expectation of $u^{i}(\tilde{\sigma}_{t}^{i}, \sigma_{t}^{-i}|\tilde{d}_{t}) - u^{i}(\tilde{\sigma}_{t}^{i}, \sigma_{t}^{-i}|d^{*})$ can be upper bounded:
+
+$$
+\begin{aligned}
+&\mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d^*}\left[u^i(\tilde{\sigma}_t^i,\sigma_t^{-i}|\tilde{d}_t) - u^i(\tilde{\sigma}_t^i,\sigma_t^{-i}|d^*)\right]\Big|\,\mathcal{H}_t\right\} \\
+&= \mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d^*}\mathbb{E}_{\mathcal{T}_{i,t}}\left[\sum_{j=1}^{m_{i,t}}\sum_{a\in\alpha(h_{j,t}^{\mathcal{C}})}\big(\tilde{c}_t(h_{j,t}^{\mathcal{C}},a) - c^*(h_{j,t}^{\mathcal{C}},a)\big)\,u^i(h_{j,t}^{\mathcal{C}}\mid\tilde{\sigma}_t^i,\sigma_t^{-i},\tilde{d}_t)\right]\Big|\,\mathcal{H}_t\right\} \\
+&\quad + \mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d^*}\mathbb{E}_{\mathcal{T}_{i,t}}\left[u^i(z_{i,t}|\tilde{r}_t) - u^i(z_{i,t}|r^*)\right]\Big|\,\mathcal{H}_t\right\}.
+\end{aligned}
+$$
+
+Proof. From the root node to $h_{1,t}^{\mathcal{C}}$, players take actions according to $(\tilde{\sigma}_t^i, \sigma_t^{-i})$. Thus we have
+
+$$
+\mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d^*}\left[u^i(\tilde{\sigma}_t^i,\sigma_t^{-i}|\tilde{d}_t) - u^i(\tilde{\sigma}_t^i,\sigma_t^{-i}|d^*)\right]\Big|\,\mathcal{H}_t\right\}
+$$
+
+$$
+= \mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d^*}\mathbb{E}_{\mathcal{T}_{i,t}}\left[u^i(h_{1,t}^{\mathcal{C}}\mid\tilde{\sigma}_t^i,\sigma_t^{-i},\tilde{d}_t) - u^i(h_{1,t}^{\mathcal{C}}\mid\tilde{\sigma}_t^i,\sigma_t^{-i},d^*)\right]\Big|\,\mathcal{H}_t\right\}
+$$
+
+$$
+= \mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d^*}\mathbb{E}_{\mathcal{T}_{i,t}}\left[\sum_{a\in\alpha(h_{1,t}^{\mathcal{C}})}\big(\tilde{c}_t(h_{1,t}^{\mathcal{C}},a)\,u^i(h_{1,t}^{\mathcal{C}}\mid\tilde{\sigma}_t^i,\sigma_t^{-i},\tilde{d}_t) - c^*(h_{1,t}^{\mathcal{C}},a)\,u^i(h_{1,t}^{\mathcal{C}}\mid\tilde{\sigma}_t^i,\sigma_t^{-i},d^*)\big)\right]\Big|\,\mathcal{H}_t\right\}
+$$
+
+$$
+\begin{aligned}
+&\stackrel{\text{①}}{=} \mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d^*}\mathbb{E}_{\mathcal{T}_{i,t}}\left[\sum_{a\in\alpha(h_{1,t}^{\mathcal{C}})}\big(\tilde{c}_t(h_{1,t}^{\mathcal{C}},a) - c^*(h_{1,t}^{\mathcal{C}},a)\big)\,u^i(h_{1,t}^{\mathcal{C}}\mid\tilde{\sigma}_t^i,\sigma_t^{-i},\tilde{d}_t)\right]\Big|\,\mathcal{H}_t\right\} \\
+&\quad + \mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d^*}\mathbb{E}_{\mathcal{T}_{i,t}}\left[\sum_{a\in\alpha(h_{1,t}^{\mathcal{C}})} c^*(h_{1,t}^{\mathcal{C}},a)\big(u^i(h_{1,t}^{\mathcal{C}}\mid\tilde{\sigma}_t^i,\sigma_t^{-i},\tilde{d}_t) - u^i(h_{1,t}^{\mathcal{C}}\mid\tilde{\sigma}_t^i,\sigma_t^{-i},d^*)\big)\right]\Big|\,\mathcal{H}_t\right\}
+\end{aligned}
+$$
+
+$$
+\begin{aligned}
+&\stackrel{\text{②}}{=} \mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d^*}\mathbb{E}_{\mathcal{T}_{i,t}}\left[\sum_{a\in\alpha(h_{1,t}^{\mathcal{C}})}\big(\tilde{c}_t(h_{1,t}^{\mathcal{C}},a) - c^*(h_{1,t}^{\mathcal{C}},a)\big)\,u^i(h_{1,t}^{\mathcal{C}}\mid\tilde{\sigma}_t^i,\sigma_t^{-i},\tilde{d}_t)\right]\Big|\,\mathcal{H}_t\right\} \\
+&\quad + \mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d^*}\mathbb{E}_{\mathcal{T}_{i,t}}\left[u^i(h_{2,t}^{\mathcal{C}}\mid\tilde{\sigma}_t^i,\sigma_t^{-i},\tilde{d}_t) - u^i(h_{2,t}^{\mathcal{C}}\mid\tilde{\sigma}_t^i,\sigma_t^{-i},d^*)\right]\Big|\,\mathcal{H}_t\right\}
+\end{aligned}
+$$
+
+$$
+\begin{aligned}
+&\stackrel{\text{③}}{=} \mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d^*}\mathbb{E}_{\mathcal{T}_{i,t}}\left[\sum_{j=1}^{m_{i,t}}\sum_{a\in\alpha(h_{j,t}^{\mathcal{C}})}\big(\tilde{c}_t(h_{j,t}^{\mathcal{C}},a) - c^*(h_{j,t}^{\mathcal{C}},a)\big)\,u^i(h_{j,t}^{\mathcal{C}}\mid\tilde{\sigma}_t^i,\sigma_t^{-i},\tilde{d}_t)\right]\Big|\,\mathcal{H}_t\right\} \\
+&\quad + \mathbb{E}_{\mathcal{H}_t}\left\{\mathbb{E}_{\tilde{d}_t,d^*}\mathbb{E}_{\mathcal{T}_{i,t}}\left[u^i(z_{i,t}|\tilde{r}_t) - u^i(z_{i,t}|r^*)\right]\Big|\,\mathcal{H}_t\right\}.
+\end{aligned}
+$$
+
+Here equality ① holds by inserting the term $c^*(h_{1,t}^{\mathcal{C}}, a)\, u^i(h_{1,t}^{\mathcal{C}} \mid \tilde{\sigma}_t^i, \sigma_t^{-i}, \tilde{d}_t)$; equality ② holds since $h_{2,t}^{\mathcal{C}}$ is reached following the underlying transition $c^*(h_{1,t}^{\mathcal{C}})$; and equality ③ holds by induction.
+
+Therefore we finish the proof.
+
+
+
+We first upper bound the term $u^{i}(z_{i,t}|\tilde{r}_{t}) - u^{i}(z_{i,t}|r^{*})$, referring to techniques from previous work on PSRL. Recall that in episode $t$, players reach the terminal node $z_{i,t}$, which has visit count $n_t(z_{i,t})$. We denote by $\bar{u}_t^i (z_{i,t})$ the empirical mean of $u^{i}(z_{i,t}|r^{*})$. Inserting $\bar{u}_t^i (z_{i,t})$, we get
+
+$$
+u ^ {i} \left(z _ {i, t} | \tilde {r} _ {t}\right) - u ^ {i} \left(z _ {i, t} | r ^ {*}\right) \leq \left| u ^ {i} \left(z _ {i, t} | \tilde {r} _ {t}\right) - \bar {u} _ {t} ^ {i} \left(z _ {i, t}\right) \right| + \left| \bar {u} _ {t} ^ {i} \left(z _ {i, t}\right) - u ^ {i} \left(z _ {i, t} | r ^ {*}\right) \right|.
+$$
+
+We first consider the second term $|\bar{u}_t^i (z_{i,t}) - u^i (z_{i,t}|r^*)|$; a similar bound holds for the first term. Conditioning on $r^{*}(z_{i,t})$, we can apply the Chernoff-Hoeffding bound (Hoeffding, 1994): for $\delta \in (0,1)$,
+
+$$
+\Pr\left(\left|\bar{u}_t^i(z_{i,t}) - u^i(z_{i,t}\mid r^*)\right| \geq \sqrt{\frac{\ln(2/\delta)}{2\max(n_t(z_{i,t}),1)}}\;\middle|\; r^*(z_{i,t})\right) \leq \delta, \tag{14}
+$$
+
+where $\Pr$ denotes probability.
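+The Chernoff-Hoeffding bound in Eq. (14) can be sanity-checked numerically; the sketch below (our own toy check, with arbitrary choices of $n$, $\delta$, and the number of trials) estimates the violation rate for the empirical mean of Bernoulli$(0.5)$ samples:
+
+```python
+import math
+import numpy as np
+
+rng = np.random.default_rng(2)
+n, delta, trials = 100, 0.05, 5000
+eps = math.sqrt(math.log(2 / delta) / (2 * n))  # Hoeffding radius from Eq. (14)
+
+# Empirical means of n Bernoulli(0.5) draws, repeated over many trials.
+means = (rng.random((trials, n)) < 0.5).mean(axis=1)
+violation_rate = float(np.mean(np.abs(means - 0.5) >= eps))
+print(eps, violation_rate)  # the violation rate stays below delta = 0.05
+```
+
+Since Hoeffding's inequality is not tight for Bernoulli variables, the observed violation rate is typically far below $\delta$.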
+
+Then we use the above inequality to obtain the following lemma:
+
+Lemma 5. At episode $t$, the expectation of $|\bar{u}_t^i (z_{i,t}) - u^i (z_{i,t}|r^*)|$ can be bounded by
+
+$$
+\begin{array}{l} \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \mathbb {E} _ {\mathcal {T} _ {i, t}} \left[ | \bar {u} _ {t} ^ {i} (z _ {i, t}) - u ^ {i} (z _ {i, t} | r ^ {*}) | \right] \mid \mathcal {H} _ {t} \right\} \\ \leq \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \mathbb {E} _ {\mathcal {T} _ {i, t}} \left[ \sqrt {\frac {2 \log (2 / \delta)}{\max (n _ {t} (z _ {i , t}) , 1)}} \right] \mid \mathcal {H} _ {t} \right\} + 2 | Z | \delta . \\ \end{array}
+$$
+
+Proof. Notice that Eq. (14) holds conditioning on $r^*(z_{i,t})$, while the expectation is taken over the prior $\mathbb{P}_0$; hence we need to apply Eq. (14) carefully. For convenience of notation, we use $\pi_t(h|d^*)$ to represent $\pi_{\tilde{\sigma}_{t}^{i},\sigma_{t}^{-i}}(h|d^{*})$, and $\mathbb{I}(\cdot)$ to denote the indicator function. Then we expand the expectation into an integral:
+
+$$
+\mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \mathbb {E} _ {\mathcal {T} _ {i, t}} \left[ | \bar {u} _ {t} ^ {i} (z _ {i, t}) - u ^ {i} (z _ {i, t} | r ^ {*}) | \right] \mid \mathcal {H} _ {t} \right\}
+$$
+
+$$
+\begin{array}{l} \stackrel {1} {=} \sum_ {z \in Z} \int | \bar {u} _ {t} ^ {i} (z) - u ^ {i} (z | r ^ {*}) | \pi_ {t} (z | d ^ {*}) P r (d ^ {*}, d _ {t} | \mathcal {H} _ {t}) P r (\mathcal {H} _ {t}) d (d ^ {*}, d _ {t}, \mathcal {H} _ {t}) \\ \stackrel {2} {\leq} \sum_ {z \in Z} \int \sqrt {\frac {\ln (2 / \delta)}{2 \max (n _ {t} (z) , 1)}} \pi_ {t} (z | d ^ {*}) P r (d ^ {*}, d _ {t} | \mathcal {H} _ {t}) P r (\mathcal {H} _ {t}) d (d ^ {*}, d _ {t}, \mathcal {H} _ {t}) \\ + \sum_ {z \in Z} \int 2 \mathbb {I} \left(| \bar {u} _ {t} ^ {i} (z) - u ^ {i} (z | r ^ {*}) | \geq \sqrt {\frac {\ln (2 / \delta)}{2 \max \left(n _ {t} (z) , 1\right)}}\right) P r \left(d ^ {*} \mid \mathcal {H} _ {t}\right) P r \left(\mathcal {H} _ {t}\right) d \left(d ^ {*}, \mathcal {H} _ {t}\right) \tag {15} \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \stackrel {3} {=} \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \left[ \sqrt {\frac {2 \log (2 / \delta)}{\max (n _ {t} (z _ {i , t}) , 1)}} \right] \Big | \mathcal {H} _ {t} \right\} \\ + \sum_ {z \in Z} \int 2 \mathbb {I} \left(| \bar {u} _ {t} ^ {i} (z) - u ^ {i} (z | r ^ {*}) | \geq \sqrt {\frac {\ln (2 / \delta)}{2 \max (n _ {t} (z) , 1)}}\right) P r (\mathcal {H} _ {t} | d ^ {*}) \mathbb {P} _ {0} (d ^ {*}) d (d ^ {*}, \mathcal {H} _ {t}) \\ \end{array}
+$$
+
+$$
+\stackrel {4} {\leq} \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \left[ \sqrt {\frac {2 \log (2 / \delta)}{\max (n _ {t} (z _ {i , t}) , 1)}} \right] \mid \mathcal {H} _ {t} \right\} + 2 | Z | \delta .
+$$
+
+In step (1) we expand the expectation into an integral. Inequality (2) holds by splitting the integration domain according to whether the event in Eq. (14) occurs. Step (3) applies Bayes' rule, which allows us to invoke Eq. (14) and obtain inequality (4).
+
+This completes the proof.
+
+For the other term $|u^{i}(z_{i,t}|\tilde{r}_{t}) - \bar{u}_{t}^{i}(z_{i,t})|$ , we can apply the technique from Lemma 5 to obtain the following lemma:
+
+Lemma 6. At episode $t$ , the expectation of $|u^{i}(z_{i,t}|\tilde{r}_{t}) - \bar{u}_{t}^{i}(z_{i,t})|$ can be bounded by
+
+$$
+\mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \mathbb {E} _ {\mathcal {T} _ {i, t}} \left[ | u ^ {i} (z _ {i, t} | \tilde {r} _ {t}) - \bar {u} _ {t} ^ {i} (z _ {i, t}) | \right] \Big | \mathcal {H} _ {t} \right\} \leq \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \left[ \sqrt {\frac {2 \log (2 / \delta)}{\max (n _ {t} (z _ {i , t}) , 1)}} \right] \Big | \mathcal {H} _ {t} \right\} + 2 | Z | \delta .
+$$
+
+Proof. We can directly prove that
+
+$$
+\begin{array}{l} \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \mathbb {E} _ {\mathcal {T} _ {i, t}} \left[ | u ^ {i} \left(z _ {i, t} | \tilde {r} _ {t}\right) - \bar {u} _ {t} ^ {i} \left(z _ {i, t}\right) | \right] \mid \mathcal {H} _ {t} \right\} \\ = \sum_ {z \in Z} \int | u ^ {i} (z | \tilde {r} _ {t}) - \bar {u} _ {t} ^ {i} (z) | \pi_ {t} (z | d ^ {*}) P r (d ^ {*}, d _ {t} | \mathcal {H} _ {t}) P r (\mathcal {H} _ {t}) d (d ^ {*}, d _ {t}, \mathcal {H} _ {t}) \\ \leq \sum_ {z \in Z} \int \sqrt {\frac {\ln (2 / \delta)}{2 \max \left(n _ {t} (z) , 1\right)}} \pi_ {t} (z | d ^ {*}) P r (d ^ {*}, d _ {t} | \mathcal {H} _ {t}) P r (\mathcal {H} _ {t}) d (d ^ {*}, d _ {t}, \mathcal {H} _ {t}) \\ + \sum_ {z \in Z} \int 2 \mathbb {I} \left(| u ^ {i} (z | \tilde {r} _ {t}) - \bar {u} _ {t} ^ {i} (z) | \geq \sqrt {\frac {\ln (2 / \delta)}{2 \max \left(n _ {t} (z) , 1\right)}}\right) P r (\tilde {d} _ {t} | \mathcal {H} _ {t}) P r (\mathcal {H} _ {t}) d (\tilde {d} _ {t}, \mathcal {H} _ {t}) \tag {16} \\ = \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \left[ \sqrt {\frac {2 \log (2 / \delta)}{\max \left(n _ {t} \left(z _ {i , t}\right) , 1\right)}} \right] \mid \mathcal {H} _ {t} \right\} \\ + \sum_ {z \in Z} \int 2 \mathbb {I} \left(| u ^ {i} (z | \tilde {r} _ {t}) - \bar {u} _ {t} ^ {i} (z) | \geq \sqrt {\frac {\ln (2 / \delta)}{2 \max (n _ {t} (z) , 1)}}\right) P r (\mathcal {H} _ {t} | d ^ {*}) \mathbb {P} _ {0} (d ^ {*}) d (d ^ {*}, \mathcal {H} _ {t}) \\ \leq \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \left[ \sqrt {\frac {2 \log (2 / \delta)}{\max (n _ {t} (z _ {i , t}) , 1)}} \right] \mid \mathcal {H} _ {t} \right\} + 2 | Z | \delta . \\ \end{array}
+$$
+
+The proof above is almost the same as that of Lemma 5, except for inequality (16). Since $d^{*}$ and $\tilde{d}_t$ are identically distributed conditioning on $\mathcal{H}_t$ , we can apply the following equality to Eq. (16):
+
+$$
+\begin{array}{l} \mathbb {I} \left(| u ^ {i} (s | \tilde {r} _ {t}) - \bar {u} _ {t} ^ {i} (s) | \geq \sqrt {\frac {\ln (2 / \delta)}{2 \max (n _ {t} (s) , 1)}}\right) P r (\tilde {d} _ {t} | \mathcal {H} _ {t}) P r (\mathcal {H} _ {t}) \\ = \mathbb {I} \left(| u ^ {i} (s | r ^ {*}) - \bar {u} _ {t} ^ {i} (s) | \geq \sqrt {\frac {\ln (2 / \delta)}{2 \operatorname* {m a x} (n _ {t} (s) , 1)}}\right) P r (d ^ {*} | \mathcal {H} _ {t}) P r (\mathcal {H} _ {t}). \\ \end{array}
+$$
+
+This completes the proof.
+
+
+
+Hence, combining the results of Lemmas 5 and 6, we conclude that for any $\delta \in (0,1)$ ,
+
+$$
+\mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \mathbb {E} _ {\mathcal {T} _ {i, t}} \left[ u ^ {i} (z _ {i, t} | \tilde {r} _ {t}) - u ^ {i} (z _ {i, t} | r ^ {*}) \right] \Big | \mathcal {H} _ {t} \right\} \leq \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \left[ 2 \sqrt {\frac {2 \log (2 / \delta)}{\max (n _ {t} (z _ {i , t}) , 1)}} \right] \Big | \mathcal {H} _ {t} \right\} + 4 | Z | \delta .
+$$
+
+Using a pigeonhole argument and choosing $\delta = 1 / (|Z|T)$ , we obtain the following lemma:
+
+Lemma 7. At episode $T$ , the expectation of the summation of $u^{i}(z_{i,t}|\tilde{r}_{t}) - u^{i}(z_{i,t}|r^{*})$ over the prior distribution $\mathbb{P}_0$ has an order of:
+
+$$
+\mathbb {E} _ {\mathbb {P} _ {0}} \left[ \sum_ {t = 1} ^ {T} u ^ {i} (z _ {i, t} | \tilde {r} _ {t}) - u ^ {i} (z _ {i, t} | r ^ {*}) \right] = O (\sqrt {| Z | T \ln (| Z | T)}).
+$$
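
For intuition, the pigeonhole step behind Lemma 7 can be sketched as follows (a standard argument, not verbatim from the paper; the $\sqrt{\ln(2|Z|T)}$ factor from the choice $\delta = 1/(|Z|T)$ is suppressed). Since each visit to a terminal node $z$ increments its counter,

$$
\sum_ {t = 1} ^ {T} \frac {1}{\sqrt {\max (n _ {t} (z _ {i, t}) , 1)}} \leq \sum_ {z \in Z} \left(1 + \sum_ {k = 1} ^ {n _ {T} (z)} \frac {1}{\sqrt {k}}\right) \leq \sum_ {z \in Z} \left(1 + 2 \sqrt {n _ {T} (z)}\right) \leq | Z | + 2 \sqrt {| Z | T},
$$

where the second inequality uses $\sum_{k=1}^{n} 1/\sqrt{k} \leq 2\sqrt{n}$ and the last follows from the Cauchy-Schwarz inequality together with $\sum_{z \in Z} n_T(z) \leq T$. Combined with the per-episode bound above and the $4|Z|\delta$ terms, which total $O(1)$ under $\delta = 1/(|Z|T)$, this yields the stated $O(\sqrt{|Z|T\ln(|Z|T)})$ rate.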
+
+We then consider the chance-player node $h_{j,t}^{\mathcal{C}}$ . We denote by $\bar{c}(h_{j,t}^{\mathcal{C}}, a)$ the empirical mean of the chance player's probability of choosing $a$ at $h_{j,t}^{\mathcal{C}}$ . Noticing that the utility is bounded in $[-1, 1]$ , we have
+
+$$
+\begin{array}{l} \sum_ {a \in \alpha (h _ {j, t} ^ {\mathcal {C}})} (\tilde {c} _ {t} (h _ {j, t} ^ {\mathcal {C}}, a) - c ^ {*} (h _ {j, t} ^ {\mathcal {C}}, a)) u ^ {i} (h _ {j, t} ^ {\mathcal {C}} | \tilde {\sigma} _ {t} ^ {i}, \sigma_ {t} ^ {- i}, \tilde {d} _ {t}) \\ \leq 2\sum_{a\in \alpha (h_{j,t}^{\mathcal{C}})}|\tilde{c}_{t}(h_{j,t}^{\mathcal{C}},a) - c^{*}(h_{j,t}^{\mathcal{C}},a)| \\ \leq 2 \sum_ {a \in \alpha (h _ {j, t} ^ {\mathcal {C}})} | \tilde {c} _ {t} (h _ {j, t} ^ {\mathcal {C}}, a) - \bar {c} (h _ {j, t} ^ {\mathcal {C}}, a) | + 2 \sum_ {a \in \alpha (h _ {j, t} ^ {\mathcal {C}})} | \bar {c} (h _ {j, t} ^ {\mathcal {C}}, a) - c ^ {*} (h _ {j, t} ^ {\mathcal {C}}, a) |. \\ \end{array}
+$$
+
+Then, conditioning on $c^*(h_{j,t}^{\mathcal{C}}, a)$ , we use the concentration bound for the $L_1$ norm (i.e., the deviation inequality of Weissman et al. (2003)) to get that for any $\delta \in (0,1)$ ,
+
+$$
+P r \left(\sum_ {a \in \alpha (h _ {j, t} ^ {\mathcal {C}})} | \bar {c} (h _ {j, t} ^ {\mathcal {C}}, a) - c ^ {*} (h _ {j, t} ^ {\mathcal {C}}, a) | \geq \sqrt {\frac {2 \ln (2 ^ {A} / \delta)}{\max (n _ {t} (h _ {j , t} ^ {\mathcal {C}}) , 1)}} \Big | c ^ {*} (h _ {j, t} ^ {\mathcal {C}}, a)\right) < \delta .
+$$
+
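The $L_1$ deviation bound above can be checked numerically in the same spirit as Eq. (14). The following sketch samples a hypothetical categorical chance distribution and verifies that the empirical distribution rarely deviates by more than the stated radius; distribution, sizes, and names are illustrative.

```python
import math
import random

def weissman_radius(A, n, delta):
    # L1 deviation radius: Pr(||p_hat - p||_1 >= radius) <= delta
    # for a categorical distribution over A outcomes and n samples.
    return math.sqrt(2.0 * math.log((2 ** A) / delta) / max(n, 1))

def l1_violation_rate(p, n, delta, trials=1000, seed=0):
    # Fraction of trials where the empirical L1 error exceeds the radius.
    rng = random.Random(seed)
    A = len(p)
    r = weissman_radius(A, n, delta)
    bad = 0
    for _ in range(trials):
        counts = [0] * A
        for _ in range(n):
            u, acc = rng.random(), 0.0
            for a in range(A):
                acc += p[a]
                if u < acc:
                    counts[a] += 1
                    break
        l1 = sum(abs(counts[a] / n - p[a]) for a in range(A))
        if l1 >= r:
            bad += 1
    return bad / trials

rate = l1_violation_rate([0.25, 0.25, 0.25, 0.25], n=100, delta=0.1)
assert rate <= 0.1  # the deviation inequality holds empirically
```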
+Similar to the analysis for $r$ , we have the following lemma:
+
+Lemma 8. At episode $t$ , the expectation of $|\bar{c}(h_{j,t}^{\mathcal{C}}, a) - c^{*}(h_{j,t}^{\mathcal{C}}, a)|$ can be bounded by
+
+$$
+\begin{array}{l} \sum_ {a \in \alpha (h _ {j, t} ^ {\mathcal {C}})} \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \mathbb {E} _ {\mathcal {T} _ {i, t}} \left[ | \bar {c} (h _ {j, t} ^ {\mathcal {C}}, a) - c ^ {*} (h _ {j, t} ^ {\mathcal {C}}, a) | \right] \mid \mathcal {H} _ {t} \right\} \\ \leq \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \left[ \sqrt {\frac {2 \ln (2 ^ {A} / \delta)}{\max (n _ {t} (h _ {j , t} ^ {\mathcal {C}}) , 1)}} \right] \mid \mathcal {H} _ {t} \right\} + | H ^ {\mathcal {C}} | \delta . \\ \end{array}
+$$
+
+Proof. We use similar techniques to get
+
+$$
+\begin{array}{l} \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \mathbb {E} _ {\mathcal {T} _ {i, t}} \left[ \sum_ {a \in \alpha \left(h _ {j, t} ^ {\mathcal {C}}\right)} | \bar {c} \left(h _ {j, t} ^ {\mathcal {C}}, a\right) - c ^ {*} \left(h _ {j, t} ^ {\mathcal {C}}, a\right) | \right] \Big | \mathcal {H} _ {t} \right\} \\ = \sum_ {h \in H ^ {\mathcal {C}}} \int \sum_ {a \in \alpha (h)} | \bar {c} (h, a) - c ^ {*} (h, a) | \pi_ {t} (h | d ^ {*}) P r (d ^ {*}, d _ {t} | \mathcal {H} _ {t}) P r (\mathcal {H} _ {t}) d (d ^ {*}, d _ {t}, \mathcal {H} _ {t}) \\ \leq \sum_ {h \in H ^ {\mathcal {C}}} \int \sqrt {\frac {2 \ln (2 ^ {A} / \delta)}{\max (n _ {t} (h) , 1)}} \pi_ {t} (h | d ^ {*}) P r (d ^ {*}, d _ {t} | \mathcal {H} _ {t}) P r (\mathcal {H} _ {t}) d (d ^ {*}, d _ {t}, \mathcal {H} _ {t}) \\ + \sum_ {h \in H ^ {\mathcal {C}}} \int \mathbb {I} \left(\sum_ {a \in \alpha (h)} | \bar {c} (h, a) - c ^ {*} (h, a) | \geq \sqrt {\frac {2 \ln \left(2 ^ {A} / \delta\right)}{\max \left(n _ {t} (h) , 1\right)}}\right) P r \left(d ^ {*} \mid \mathcal {H} _ {t}\right) P r \left(\mathcal {H} _ {t}\right) d \left(d ^ {*}, \mathcal {H} _ {t}\right) \tag {17} \\ = \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \left[ \sqrt {\frac {2 \ln (2 ^ {A} / \delta)}{\max (n _ {t} (h _ {j , t} ^ {\mathcal {C}}) , 1)}} \right] \mid \mathcal {H} _ {t} \right\} \\ + \sum_ {h \in H ^ {\mathcal {C}}} \int \mathbb {I} \left(\sum_ {a \in \alpha (h)} | \bar {c} (h, a) - c ^ {*} (h, a) | \geq \sqrt {\frac {2 \ln (2 ^ {A} / \delta)}{\max (n _ {t} (h) , 1)}}\right) P r (\mathcal {H} _ {t} | d ^ {*}) \mathbb {P} _ {0} (d ^ {*}) d (d ^ {*}, \mathcal {H} _ {t}) \\ \leq \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \left[ \sqrt {\frac {2 \ln (2 ^ {A} / \delta)}{\max (n _ {t} (h _ {j , t} ^ {\mathcal {C}}) , 1)}} \right] \Big | \mathcal {H} _ {t} \right\} + | H ^ {\mathcal {C}} | \delta . \\ \end{array}
+$$
+
+The proof process above is similar to that of Lemma 5.
+
+Analogously, we obtain the following lemma:
+
+Lemma 9. At episode $t$ , the expectation of $|\tilde{c}(h_{j,t}^{\mathcal{C}}, a) - \bar{c}(h_{j,t}^{\mathcal{C}}, a)|$ can be bounded by
+
+$$
+\begin{array}{l} \sum_ {a \in \alpha (h _ {j, t} ^ {\mathcal {C}})} \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \mathbb {E} _ {\mathcal {T} _ {i, t}} \left[ | \tilde {c} (h _ {j, t} ^ {\mathcal {C}}, a) - \bar {c} (h _ {j, t} ^ {\mathcal {C}}, a) | \right] \mid \mathcal {H} _ {t} \right\} \\ \leq \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \left[ \sqrt {\frac {2 \ln (2 ^ {A} / \delta)}{\max (n _ {t} (h _ {j , t} ^ {\mathcal {C}}) , 1)}} \right] \mid \mathcal {H} _ {t} \right\} + | H ^ {\mathcal {C}} | \delta . \\ \end{array}
+$$
+
+Proof. We use similar techniques to get
+
+$$
+\begin{array}{l} \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \mathbb {E} _ {\mathcal {T} _ {i, t}} \left[ \sum_ {a \in \alpha (h _ {j, t} ^ {\mathcal {C}})} | \tilde {c} (h _ {j, t} ^ {\mathcal {C}}, a) - \bar {c} (h _ {j, t} ^ {\mathcal {C}}, a) | \right] \Big | \mathcal {H} _ {t} \right\} \\ = \sum_ {h \in H ^ {\mathcal {C}}} \int \sum_ {a \in \alpha (h)} | \tilde {c} (h, a) - \bar {c} (h, a) | \pi_ {t} (h | d ^ {*}) P r (d ^ {*}, d _ {t} | \mathcal {H} _ {t}) P r (\mathcal {H} _ {t}) d (d ^ {*}, d _ {t}, \mathcal {H} _ {t}) \\ \leq \sum_ {h \in H ^ {\mathcal {C}}} \int \sqrt {\frac {2 \ln (2 ^ {A} / \delta)}{\max (n _ {t} (h) , 1)}} \pi_ {t} (h | d ^ {*}) P r (d ^ {*}, d _ {t} | \mathcal {H} _ {t}) P r (\mathcal {H} _ {t}) d (d ^ {*}, d _ {t}, \mathcal {H} _ {t}) \\ + \sum_ {h \in H ^ {\mathcal {C}}} \int \mathbb {I} \left(\sum_ {a \in \alpha (h)} | \tilde {c} (h, a) - \bar {c} (h, a) | \geq \sqrt {\frac {2 \ln \left(2 ^ {A} / \delta\right)}{\max \left(n _ {t} (h) , 1\right)}}\right) P r (\tilde {d} _ {t} | \mathcal {H} _ {t}) P r (\mathcal {H} _ {t}) d \left(\tilde {d} _ {t}, \mathcal {H} _ {t}\right) \tag {18} \\ = \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \left[ \sqrt {\frac {2 \ln (2 ^ {A} / \delta)}{\max (n _ {t} (h _ {j , t} ^ {\mathcal {C}}) , 1)}} \right] \mid \mathcal {H} _ {t} \right\} \\ + \sum_ {h \in H ^ {\mathcal {C}}} \int \mathbb {I} \left(\sum_ {a \in \alpha (h)} | \bar {c} (h, a) - c ^ {*} (h, a) | \geq \sqrt {\frac {2 \ln (2 ^ {A} / \delta)}{\max (n _ {t} (h) , 1)}}\right) P r (\mathcal {H} _ {t} | d ^ {*}) \mathbb {P} _ {0} (d ^ {*}) d (d ^ {*}, \mathcal {H} _ {t}) \\ \leq \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \left[ \sqrt {\frac {2 \ln (2 ^ {A} / \delta)}{\max (n _ {t} (h _ {j , t} ^ {\mathcal {C}}) , 1)}} \right] \mid \mathcal {H} _ {t} \right\} + | H ^ {\mathcal {C}} | \delta . \\ \end{array}
+$$
+
+The proof process here is similar to that of Lemma 6.
+
+
+
+Next, using the pigeonhole argument again and choosing $\delta = 1 / (|H^{\mathcal{C}}|T)$ , we obtain the following lemma:
+
+Lemma 10. At episode $T$ , the expectation of the summation of $|\tilde{c}_t(h_{j,t}^{\mathcal{C}},a) - c^* (h_{j,t}^{\mathcal{C}},a)|$ over the prior distribution $\mathbb{P}_0$ has an order of:
+
+$$
+\mathbb {E} _ {\mathbb {P} _ {0}} \left[ \sum_ {t = 1} ^ {T} \sum_ {j = 1} ^ {m _ {i, t}} \sum_ {a \in \alpha (h)} | \tilde {c} _ {t} (h _ {j, t} ^ {\mathcal {C}}, a) - c ^ {*} (h _ {j, t} ^ {\mathcal {C}}, a) | \right] = O (\sqrt {| H ^ {\mathcal {C}} | D ^ {\mathcal {C}} A T \ln (| H ^ {\mathcal {C}} | T)}).
+$$
+
+Therefore, combining the conclusions of Lemmas 7 and 10, we get
+
+$$
+\sum_ {t = 1} ^ {T} \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}} \left[ u ^ {i} (\tilde {\sigma} _ {t} ^ {i}, \sigma_ {t} ^ {- i} | \tilde {d} _ {t}) - u ^ {i} (\tilde {\sigma} _ {t} ^ {i}, \sigma_ {t} ^ {- i} | d ^ {*}) \right] \Big | \mathcal {H} _ {t} \right\} = O (\sqrt {| Z | T \ln (| Z | T)} + \sqrt {| H ^ {\mathcal {C}} | D ^ {\mathcal {C}} A T \ln (| H ^ {\mathcal {C}} | T)}).
+$$
+
+A similar proof applies to the second term, yielding the same upper bound after simply replacing $\tilde{d}_t$ with $d_t$ :
+
+$$
+\sum_ {t = 1} ^ {T} \mathbb {E} _ {\mathcal {H} _ {t}} \left\{\mathbb {E} _ {\tilde {d} _ {t}, d ^ {*}, d _ {t}} \left[ u ^ {i} (\tilde {\sigma} _ {t} ^ {i}, \sigma_ {t} ^ {- i} | d ^ {*}) - u ^ {i} (\tilde {\sigma} _ {t} ^ {i}, \sigma_ {t} ^ {- i} | d _ {t}) \right] \Big | \mathcal {H} _ {t} \right\} = O (\sqrt {| Z | T \ln (| Z | T)} + \sqrt {| H ^ {\mathcal {C}} | D ^ {\mathcal {C}} A T \ln (| H ^ {\mathcal {C}} | T)}).
+$$
+
+Putting the analysis together, we reach the conclusion that
+
+$$
+\mathbb {E} _ {H _ {T}} \left\{\mathbb {E} _ {d ^ {*}, d _ {T}} \left[ \mathcal {G} _ {T} ^ {i} \mid H _ {T} \right] \right\} = O (\sqrt {\frac {| Z | \ln (| Z | T)}{T}} + \sqrt {\frac {| H ^ {\mathcal {C}} | D ^ {\mathcal {C}} A \ln (| H ^ {\mathcal {C}} | T)}{T}})
+$$
\ No newline at end of file
diff --git a/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/images.zip b/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..629c2777565c9f726ad6ad46852450d3aae3d7ea
--- /dev/null
+++ b/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2e9c636fb01d4a5a6af479d1212d3bd777ddabd4ed2c213d34b5a4d669ca9e29
+size 1087216
diff --git a/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/layout.json b/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..718791504b6f9953eb256b8d8822f637bb2a363a
--- /dev/null
+++ b/posteriorsamplingformultiagentreinforcementlearningsolvingextensivegameswithimperfectinformation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d2230cbe6e103043f4de01ba6f998d99dcdee5287230b2bb15b1587d0478b7ba
+size 843768
diff --git a/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/ecc63676-d65f-4daf-b3c1-1eeabe0cf56b_content_list.json b/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/ecc63676-d65f-4daf-b3c1-1eeabe0cf56b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..48d3fcfc656d8707579bc8c26354a85882c12e42
--- /dev/null
+++ b/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/ecc63676-d65f-4daf-b3c1-1eeabe0cf56b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:242305ab86664594cc003a530eecefc5fd220827389f412c5219e2826d9392c8
+size 77982
diff --git a/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/ecc63676-d65f-4daf-b3c1-1eeabe0cf56b_model.json b/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/ecc63676-d65f-4daf-b3c1-1eeabe0cf56b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7bf24cb79e7207d8ec909922438aa5575afac092
--- /dev/null
+++ b/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/ecc63676-d65f-4daf-b3c1-1eeabe0cf56b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3edc9f05151d88298706a5f4264757101defb23fcbd314cf426e95c0cfd424b7
+size 96699
diff --git a/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/ecc63676-d65f-4daf-b3c1-1eeabe0cf56b_origin.pdf b/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/ecc63676-d65f-4daf-b3c1-1eeabe0cf56b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2270870e0558558f75de860b8d83ee2a2dc2463e
--- /dev/null
+++ b/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/ecc63676-d65f-4daf-b3c1-1eeabe0cf56b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5b7378380953160a9d2f5b297790d837f5e27729761f45f52f94eba71941cf0d
+size 494523
diff --git a/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/full.md b/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d4696d4aa4598797456c8a4e0c047637e8119642
--- /dev/null
+++ b/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/full.md
@@ -0,0 +1,306 @@
+# PRECISION GATING: IMPROVING NEURAL NETWORK EFFICIENCY WITH DYNAMIC DUAL-PRECISION ACTIVATIONS
+
+Yichi Zhang
+
+Cornell University
+
+yz2499@cornell.edu
+
+Ritchie Zhao
+
+Cornell University
+
+rz252@cornell.edu
+
+Weizhe Hua
+
+Cornell University
+
+wh399@cornell.edu
+
+Nayun Xu
+
+Cornell Tech
+
+nx38@cornell.edu
+
+G. Edward Suh
+
+Cornell University
+
+edward.suh@cornell.edu
+
+Zhiru Zhang
+
+Cornell University
+
+zhiruz@cornell.edu
+
+# ABSTRACT
+
+We propose precision gating (PG), an end-to-end trainable dynamic dual-precision quantization technique for deep neural networks. PG computes most features in a low precision and only a small proportion of important features in a higher precision to preserve accuracy. The proposed approach is applicable to a variety of DNN architectures and significantly reduces the computational cost of DNN execution with almost no accuracy loss. Our experiments indicate that PG achieves excellent results on CNNs, including statically compressed mobile-friendly networks such as ShuffleNet. Compared to the state-of-the-art prediction-based quantization schemes, PG achieves the same or higher accuracy with $2.4 \times$ less compute on ImageNet. PG furthermore applies to RNNs. Compared to 8-bit uniform quantization, PG obtains a $1.2\%$ improvement in perplexity per word with $2.7 \times$ computational cost reduction on LSTM on the Penn Tree Bank dataset.
+
+# 1 INTRODUCTION
+
+In recent years, deep neural networks (DNNs) have demonstrated excellent performance on many computer vision and language modeling tasks such as image classification, semantic segmentation, face recognition, machine translation, and image captioning (Krizhevsky et al., 2012; He et al., 2016a; Ronneberger et al., 2015; Chen et al., 2016; Zhao et al., 2018; Schroff et al., 2015; Luong et al., 2015; Vaswani et al., 2017). One evident trend in DNN design is that as researchers strive for better accuracy, both the model size and the number of DNN layers have drastically increased over time (Xu et al., 2018). At the same time, there is a growing demand to deploy deep learning technology in edge devices such as mobile phones, VR/AR glasses, and drones (Wu et al., 2019). The limited computational, memory, and energy budgets on these devices impose major challenges for the deployment of large DNN models at the edge.
+
+DNN quantization is an important technique for improving the hardware efficiency of DNN execution (Zhao et al., 2017). Numerous studies have shown that full-precision floating-point computation is not necessary for DNN inference — quantized fixed-point models produce competitive results with a small or zero loss in prediction accuracy (Lin et al., 2016; He et al., 2016b; Zhou et al., 2016; 2017). In some cases, quantization may even improve model generalization by acting as a form of regularization. Existing studies mainly focus on static quantization, in which the precision of each weight and activation is fixed prior to inference (Hubara et al., 2017; He et al., 2016b). Along this line of work, researchers have explored tuning the bitwidth per layer (Wu et al., 2018b; Wang et al., 2019; Dong et al., 2019) as well as various types of quantization functions (Wang et al., 2018; Courbariaux et al., 2016; Li et al., 2016; Zhou et al., 2016). However, static DNN quantization methods cannot exploit input-dependent characteristics, where certain features can be computed in a lower precision during inference as they contribute less to the classification result for the given input. For
+
+example, in computer vision tasks, the pixels representing the object of interest are typically more important than the background pixels.
+
+In this paper, we reduce the inefficiency of a statically quantized DNN via precision gating (PG), which computes most features with low-precision arithmetic operations and only updates a few important features to a high precision. More concretely, PG first executes a DNN layer in a low precision and identifies the output features with large magnitudes as important features. It then computes a sparse update to increase the precision of those important output features. Intuitively, small values make a small contribution to the DNN's output; thus approximating them in a low precision is reasonable. Precision gating enables dual-precision DNN execution at the granularity of each individual output feature, thereby greatly reducing the average bitwidth and computational cost of the DNN. We further introduce a differentiable gating function, which makes PG applicable to a rich variety of network models.
+
+Experimental results show that PG achieves significant compute reduction and accuracy improvement on both CNNs and LSTMs. Compared to the baseline CNN counterparts, PG obtains $3.5\%$ and $0.6\%$ higher classification accuracy with up to $4.5\times$ and $2.4\times$ less computational cost for CIFAR-10 and ImageNet, respectively. On LSTM, compared to 8-bit uniform quantization, PG improves perplexity per word (PPW) by $1.2\%$ with $2.8\times$ less compute on the Penn Tree Bank (PTB) dataset. Our contributions are as follows:
+
+1. We propose precision gating (PG), which to the best of our knowledge is the first end-to-end trainable method that enables dual-precision execution of DNNs. PG is applicable to a wide variety of CNN and LSTM models.
+2. PG enables DNN computation with lower average bitwidth than other state-of-the-art quantization methods. By employing a low-cost gating scheme, PG has the potential to reduce DNN execution costs in both commodity and dedicated hardware.
+3. PG achieves the same sparsity during back-propagation as forward propagation, which dramatically reduces the computational cost for both passes. This is in stark contrast to prior dynamic DNN optimization methods that focus only on reducing the inference cost.
+
+# 2 RELATED WORK
+
+Quantizing activations. Prior studies show that weights can be quantized to low bitwidth without compromising much accuracy (Zhu et al., 2017); however, quantizing activations with a low bitwidth (e.g., 4 bits) typically incurs a nontrivial accuracy degradation (Mishra et al., 2018; Zhou et al., 2016). This is partially caused by large activation and weight outliers, which stretch the quantization grid too wide and too sparse under a low precision, thus increasing the error (Park et al., 2018; Zhao et al., 2019). To address this problem, Choi et al. (2018) propose PACT to reduce the dynamic range of the activations through clipping the outliers using a learnable threshold. PACT provides a more effective quantization scheme under a very low bitwidth (e.g., 4 bits). In this work we incorporate the PACT method in the training flow of PG to handle the large outliers.
+
+Prediction-based execution. Prior works have explored predicting ReLU-induced zeros and max-pooling compute redundancy to lower the computational cost of CNNs. For example, Lin et al. (2017); Song et al. (2018) propose zero-prediction, which utilizes a few of the most-significant bits in the input activations to predict the sign of the output activation. Zero-prediction removes redundancy by exploiting the fact that negative outputs will be suppressed by the ReLU anyway. Yet, this method only applies to ReLU activations and only when a linear layer is directly followed by ReLU. Hence such methods do not apply to RNNs, which use sigmoid or tanh as the activation function, or to many modern CNNs that use batch normalization (Ioffe & Szegedy, 2015) before ReLU. Hua et al. (2019a;b) propose channel gating to dynamically turn off a subset of channels that contribute little to the model prediction result. Precision gating is orthogonal to this pruning technique, as channel gating executes the whole network at the same precision.
+
+More recently, Cao et al. (2019) propose SeerNet, which also executes a CNN model in dual precision. For each convolutional layer, SeerNet first executes a quantized version of the layer and uses the results to predict the output sparsity induced by the ReLU or the computational sparsity induced by the max-pooling layer. For those activations that are not suppressed according to the prediction, SeerNet computes the original convolution in full precision (32-bit float). One key difference between PG and SeerNet is that PG reuses the result of the low-precision compute as a partial product when it performs the high-precision multiplication in the update phase. In contrast, the full-precision compute in SeerNet does not reuse the output from the quantized layer, which incurs a higher execution cost.
+
+Figure 1: Splitting an input feature $\mathbf{I}$ into $\mathbf{I}_{hb}$ (blue), the most-significant $B_{hb}$ bits, and $\mathbf{I}_{lb}$ (orange), the remaining $B_{lb}$ bits. The total bitwidth is $B$ .
+
+Feature-level precision tuning. There is also prior work that uses a different precision to handle the outlier quantization. Park et al. (2018) propose value-aware quantization where the majority of data are computed at reduced precision while a small number of outliers are handled at high precision. Our approach is significantly different because we allow dual precision for every feature, not only for the outliers.
+
+# 3 PRECISION GATING (PG)
+
+In this section we first describe the basic mechanism of PG. We then discuss how to design the gating scheme to accelerate both forward and backward passes. Finally, we consider incorporating outlier clipping to reduce the quantization error.
+
+# 3.1 BASIC FORMULATION
+
+We first define a linear layer in a neural network (either convolutional or fully-connected) as $\mathbf{O} = \mathbf{I}*\mathbf{W}$ , where $\mathbf{O}$ , $\mathbf{I}$ , and $\mathbf{W}$ are the output, input, and weights, respectively. Suppose $\mathbf{I}$ is represented in a $B$ -bit fixed-point format, which is shown in Figure 1. PG partitions $\mathbf{I}$ into (1) $\mathbf{I}_{hb}$ , the $B_{hb}$ most-significant bits (MSBs), and (2) $\mathbf{I}_{lb}$ , the remaining $B_{lb}$ least-significant bits (LSBs). Here $B = B_{hb} + B_{lb}$ . More formally, we can write:
+
+$$
+\mathbf {I} = \mathbf {I} _ {h b} < < B _ {l b} + \mathbf {I} _ {l b} \tag {1}
+$$
+
+Here $<<$ denotes the left shift operator. We can then reformulate a single $B$ -bit linear layer into two lower-precision computations as:
+
+$$
+\mathbf {O} = \mathbf {O} _ {h b} + \mathbf {O} _ {l b} = \left[ \mathbf {W} * \left(\mathbf {I} _ {h b} < < B _ {l b}\right) \right] + \left[ \mathbf {W} * \mathbf {I} _ {l b} \right] \tag {2}
+$$
+
+$\mathbf{O}_{hb}$ is the partial product obtained by using the MSBs of input feature $(\mathbf{I}_{hb})$ whereas $\mathbf{O}_{lb}$ represents the remaining partial product computed with $\mathbf{I}_{lb}$ .
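
The split in Eq. (1) and the decomposition in Eq. (2) are exact in integer arithmetic. The following minimal NumPy sketch checks both identities for a dot product; the bitwidths ($B = 8$, $B_{hb} = 4$), sizes, and unsigned-activation assumption are example choices, not prescribed by the paper.

```python
import numpy as np

B, B_hb = 8, 4            # total bits and MSB bits (example split)
B_lb = B - B_hb

def split_msb_lsb(I):
    # Split unsigned B-bit activations into MSBs and LSBs, as in Eq. (1).
    I_hb = I >> B_lb
    I_lb = I & ((1 << B_lb) - 1)
    return I_hb, I_lb

rng = np.random.default_rng(0)
I = rng.integers(0, 1 << B, size=16)   # B-bit activations
W = rng.integers(-8, 8, size=16)       # weights

I_hb, I_lb = split_msb_lsb(I)
assert np.array_equal((I_hb << B_lb) + I_lb, I)   # Eq. (1) is lossless

O_hb = W @ (I_hb << B_lb)   # prediction-phase partial product
O_lb = W @ I_lb             # update-phase partial product
assert O_hb + O_lb == W @ I  # Eq. (2): the decomposition is exact
```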
+
+Precision gating works in two phases. In the prediction phase, PG performs the computation $\mathbf{O}_{hb} = \mathbf{W} * (\mathbf{I}_{hb} << B_{lb})$ . Output features of $\mathbf{O}_{hb}$ greater than a learnable gating threshold $\Delta$ are considered important. In the update phase, PG computes $\mathbf{O}_{lb} = \mathbf{W} * \mathbf{I}_{lb}$ only for the important features and adds it to $\mathbf{O}_{hb}$ . The overall execution flow of PG is illustrated in Figure 2, where unimportant output features only take the upper path while important ones are computed as the sum of both paths. More precisely, this can be summarized as follows:
+
+$$
+\mathbf {O} = \left\{ \begin{array}{l l} \mathbf {O} _ {h b} & \mathbf {O} _ {h b} \leq \Delta \\ \mathbf {O} _ {h b} + \mathbf {O} _ {l b} & \mathbf {O} _ {h b} > \Delta \end{array} \right. \tag {3}
+$$
+
+In essence, PG intends to compute the majority of (unimportant) features with $B_{hb}$ bits and only a small set of important features with $B$ bits. The importance of each element in the output $\mathbf{O}$ is determined by comparing the magnitude of its partial sum $\mathbf{O}_{hb}$ to $\Delta$ . Let $Sp$ be the percentage of unimportant activations over all features; PG then saves a $\frac{Sp\cdot B_{lb}}{B}$ fraction of the compute in the original DNN model. PG thus achieves dual-precision execution using a lightweight gating mechanism that adds only a comparison operation. The goal is to minimize the average bitwidth of the multiply-add operations in the DNN execution.
+
+
+Figure 2: The PG building block in CNN models - Input features are split into $\mathbf{I}_{hb}$ and $\mathbf{I}_{lb}$ . In the prediction phase, $\mathbf{I}_{hb}$ first convolves with the full-precision filters $\mathbf{W}$ to obtain $\mathbf{O}_{hb}$ . In the update phase, if the partial sum $\mathbf{O}_{hb}$ of a feature exceeds the learnable threshold $\Delta$ , we update that feature to high precision by adding $\mathbf{O}_{lb}$ to $\mathbf{O}_{hb}$ . Otherwise, we skip the update phase and the output feature remains computed at ultra-low precision. The prediction and update phases share the same weights. $\odot$ denotes the Hadamard product.
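The two-phase computation can be illustrated with a minimal NumPy sketch for a dense layer (hypothetical shapes and helper names; the paper applies PG to convolutions, and a real implementation would compute $\mathbf{O}_{lb}$ only at the masked positions rather than densely):

```python
import numpy as np

def precision_gate(I, W, delta, B=4, B_hb=2):
    """Toy PG forward pass: split B-bit activations into MSBs and LSBs."""
    B_lb = B - B_hb
    I_hb = I >> B_lb                # most-significant bits of each activation
    I_lb = I & ((1 << B_lb) - 1)    # remaining least-significant bits

    # Prediction phase: cheap partial product from the MSBs only.
    O_hb = (I_hb << B_lb) @ W
    # Update phase: LSB partial product, added only where O_hb > delta.
    mask = (O_hb > delta).astype(W.dtype)
    O_lb = I_lb @ W                 # computed densely here for simplicity
    return O_hb + mask * O_lb, mask

rng = np.random.default_rng(0)
I = rng.integers(0, 16, size=(1, 8))    # 4-bit unsigned activations
W = rng.standard_normal((8, 4))
O, mask = precision_gate(I, W, delta=0.0)
exact = I @ W
# Gated outputs match the exact full-precision result wherever the mask fired.
assert np.allclose(O[mask == 1], exact[mask == 1])
```

Wherever the mask fires, $\mathbf{O}_{hb} + \mathbf{O}_{lb} = ((\mathbf{I}_{hb} \ll B_{lb}) + \mathbf{I}_{lb}) * \mathbf{W} = \mathbf{I} * \mathbf{W}$, i.e., the full $B$-bit result is recovered exactly.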
+
+# 3.2 EFFICIENT LEARNABLE GATING SCHEME
+
+PG automatically learns a gating threshold $\Delta_{c,l}$ during training for each output channel in each DNN layer. A larger $\Delta_{c,l}$ indicates that more output features are computed in low-precision, resulting in greater computational savings but possibly at the expense of reduced model accuracy. Define $\Delta$ as the vector containing each gating threshold $\Delta_{c,l}$ . We formulate the problem of optimizing $\Delta$ as minimizing the original model loss $L$ along with an L2 penalty term:
+
+$$
+\min _ {\mathbf {W}, \Delta} L (\mathbf {I}, y; \mathbf {W}, \Delta) + \sigma \| \Delta - \delta \| ^ {2} \tag {4}
+$$
+
+Here $y$ is the ground truth label, $\sigma$ is the penalty factor, and $\delta$ is the gating target, a target value for the learnable threshold. The penalty factor and gating target are hyperparameters which allow a user to emphasize high computation savings (large $\sigma$ or $\delta$ ) or accuracy preservation (small $\sigma$ or $\delta$ ).
+
+Training a model with precision gating can be performed on commodity GPUs using existing deep learning frameworks. We implement PG on GPU as the equation $\mathbf{O} = \mathbf{O}_{hb} + \mathbf{mask} \odot \mathbf{O}_{lb}$ , where $\mathbf{mask} = \mathbf{1}_{\mathbf{O}_{hb} > \Delta}$ is a binary decision mask and $\odot$ represents element-wise multiplication. During forward propagation, most elements in $\mathbf{mask}$ are 0. PG therefore saves hardware execution cost by only computing a sparse $\mathbf{O}_{lb}$ in the update phase. If PG is implemented in a dedicated hardware accelerator, MSBs and LSBs are wired separately; the update phase can be controlled by a multiplexer that computes LSB convolutions only when $\mathbf{O}_{hb}$ exceeds the threshold, thus achieving savings in both compute cycles and energy.
+
+The mask is computed using a binary decision function (i.e., a step function), which has a gradient of zero almost everywhere. To let gradients flow through the mask to $\Delta$ , we use a sigmoid on the backward pass to approximate the step function. Specifically, we define $\mathbf{mask} = \operatorname{sigmoid}(\alpha (\mathbf{O}_{hb} - \Delta))$ on the backward pass only, following prior work (Hua et al., 2019b). Here $\alpha$ controls the slope of the sigmoid, and thus the magnitude of the gradients.
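This straight-through-style surrogate can be sketched as follows (a minimal NumPy sketch with illustrative names; in a framework, autograd would stitch the hard forward decision to this backward derivative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mask_forward(O_hb, delta):
    # Forward pass: hard binary gating decision (step function).
    return (O_hb > delta).astype(np.float64)

def mask_grad_wrt_delta(O_hb, delta, alpha=5.0):
    # Backward pass: d sigmoid(alpha * (O_hb - delta)) / d delta,
    # used in place of the step function's zero-almost-everywhere gradient.
    s = sigmoid(alpha * (O_hb - delta))
    return -alpha * s * (1.0 - s)

O_hb = np.array([-1.0, 0.1, 2.0])
print(mask_forward(O_hb, 0.0))          # hard 0/1 decisions
print(mask_grad_wrt_delta(O_hb, 0.0))   # largest magnitude near the threshold
```

Gradients are largest for partial sums near $\Delta$, so the threshold is adjusted mainly by borderline features.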
+
+
+Figure 3: Effect of clipping - A toy example illustrating how a clip threshold helps separate prediction values. The first row shows quantization and prediction without a clip threshold; the second row uses a clip threshold. (a) Distribution of floating-point input features $\tilde{\mathbf{I}}$ . (b) Distribution of $\mathbf{I}$ after quantizing $\tilde{\mathbf{I}}$ to 4 bits. (c) Distribution of $\mathbf{I}_{hb}$ , which takes the higher 2 bits of $\mathbf{I}$ .
+
+# 3.3 SPARSE BACK-PROPAGATION
+
+A sparse update phase only reduces the computational cost during inference (forward propagation). We further propose to save compute during back-propagation by modifying the forward function of the PG block; specifically, we square the mask element-wise:
+
+$$
+\mathbf {O} = \mathbf {O} _ {h b} + \mathbf {m a s k} ^ {2} \odot \mathbf {O} _ {l b} \tag {5}
+$$
+
+Given that mask is a binary tensor, $\mathbf{mask}^2$ in Eq. (5) preserves the same value as mask. Thus the forward pass remains unchanged. During the back-propagation, an additional mask term is introduced in computing the gradient of $\mathbf{O}$ with respect to the gating threshold $\Delta$ in Eq. (6) because of the element-wise square. Consequently, the update of $\Delta$ only requires the result of $\mathbf{mask} \odot \mathbf{O}_{lb}$ which has already been computed during the forward pass.
+
+$$
+\frac {\partial \mathbf {O}}{\partial \Delta} \approx \frac {\partial \mathbf {O}}{\partial \mathbf {m a s k}} \frac {\partial \operatorname {s i g m o i d} (\alpha \left(\mathbf {O} _ {h b} - \Delta\right))}{\partial \Delta} = 2 \cdot \mathbf {m a s k} \odot \mathbf {O} _ {l b} \frac {\partial \operatorname {s i g m o i d} (\alpha \left(\mathbf {O} _ {h b} - \Delta\right))}{\partial \Delta} \tag {6}
+$$
+
+The gradient of $\mathbf{O}$ with respect to the weights $\mathbf{W}$ in Eq. (7) employs the same sparse $\mathbf{mask} \odot \mathbf{O}_{lb}$ as the update of $\Delta$ . Therefore, precision gating reduces the computational cost of both forward propagation (inference) and back-propagation by the same factor.
+
+$$
+\frac {\partial \mathbf {O}}{\partial \mathbf {W}} = \frac {\partial \mathbf {O} _ {h b}}{\partial \mathbf {W}} + \mathbf {m a s k} ^ {2} \odot \frac {\partial \mathbf {O} _ {l b}}{\partial \mathbf {W}} = \frac {\partial \mathbf {O} _ {h b}}{\partial \mathbf {W}} + \frac {\partial \mathbf {m a s k} \odot \mathbf {O} _ {l b}}{\partial \mathbf {W}} \tag {7}
+$$
+
+# 3.4 OUTLIER CLIPPING
+
+PG predicts important features using low-precision computation. One difficulty is that DNN activations follow a bell-shaped distribution, with most values close to zero and a few large outliers. The top row of Figure 3(a) shows some activations as blue dots, including a single outlier. If we quantize each value to 4 bits (second column) and use the 2 most-significant bits in the prediction phase (third column), almost all values are rounded to zero. In this case, PG can only distinguish the single outlier from the rest of the values, no matter what $\Delta$ is. Thus, the presence of large outliers greatly reduces the effectiveness of PG.
+
+To address this, we combine PG with PACT (Choi et al., 2018), which clips each layer's outputs using a learnable clip threshold. The bottom row of Figure 3 shows how clipping limits the dynamic range of activations, making values more uniformly distributed along the quantization grid. Now the 2 most-significant bits can effectively separate different groups of values based on magnitude. We apply PACT to PG in CNNs, which commonly use an unbounded activation function such as ReLU. RNNs, on the other hand, typically employ bounded activation functions (e.g., tanh, sigmoid) that often make PACT unnecessary.
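The toy example of Figure 3 can be reproduced with a few lines of NumPy (a sketch with made-up activation values; PACT's threshold is learned, whereas here it is fixed at 1.0 for illustration):

```python
import numpy as np

def quantize(x, bits, clip):
    """Uniform quantization of nonnegative activations to `bits` bits,
    after clipping at a (normally learnable) PACT threshold `clip`."""
    x = np.clip(x, 0.0, clip)
    scale = (2**bits - 1) / clip
    return np.round(x * scale) / scale

acts = np.array([0.05, 0.1, 0.2, 0.3, 6.0])   # one large outlier
# Without clipping, the outlier dominates the 4-bit grid: almost
# everything else collapses to zero and the MSBs carry no information.
no_clip = quantize(acts, bits=4, clip=acts.max())
# Clipping at 1.0 spreads the small values across the grid.
clipped = quantize(acts, bits=4, clip=1.0)
print(no_clip)
print(clipped)
```

With clipping, many more of the small activations land on distinct quantization levels, so the MSB-based prediction phase can tell them apart.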
+
+# 4 EXPERIMENTS
+
+We evaluate PG using ResNet-20 (He et al., 2016a) and ShiftNet-20 (Wu et al., 2018a) on CIFAR-10 (Krizhevsky & Hinton, 2009), and ShuffleNet V2 (Ma et al., 2018) on ImageNet (Deng et al., 2009). ResNet is a very popular CNN architecture for image classification, while ShiftNet and ShuffleNet are more compact architectures designed specifically for mobile and edge devices. We set the expansion rate of ShiftNet to 6 and choose the $0.5 \times$ variant of ShuffleNet V2 for all experiments. On CIFAR-10, the batch size is 128 and the models are trained for 200 epochs; the initial learning rate is 0.1 and decays by a factor of 0.1 at epochs 100, 150, and 200. On ImageNet, the batch size is 512 and the models are trained for 120 epochs; the learning rate decays linearly from an initial value of 0.5 to 0.
+
+We also test an LSTM model (Hochreiter & Schmidhuber, 1997) on the Penn Tree Bank (PTB) (Marcus et al., 1993) corpus. The model accuracy is measured by perplexity per word (PPW), where lower is better. Following the configuration used by He et al. (2016b) and Hubara et al. (2017), the LSTM cell has 300 hidden units and a single layer. We follow the same training setting as described in He et al. (2016b), except that the learning rate decays by a factor of 0.1 at epochs 50 and 90. All experiments are conducted in TensorFlow (Abadi et al., 2016) on NVIDIA GeForce 1080 Ti GPUs. We report top-1 accuracy for all experiments.
+
+We replace the convolutional layers in CNNs and the dense layers in LSTM with the proposed PG block. Moreover, the following hyperparameters in PG need to be tuned appropriately to achieve a low average bitwidth with a high accuracy.
+
+- The full bitwidth $B$ - this represents the bitwidth for high-precision computation in PG. $B$ is set to 5 or less on CIFAR-10, 5 or 6 on ImageNet, and 3 or 4 on PTB.
+- The prediction bitwidth $B_{hb}$ - this represents the bitwidth for low-precision computation.
+- The penalty factor $\sigma$ - this is the scaling factor of the L2 loss for gating thresholds $\Delta$ .
+- The gating target $\delta$ - the target gating threshold. We use a variety of values $\delta \in [-1.0, 5.0]$ in our experiments.
+- The coefficient $\alpha$ in the backward pass - $\alpha$ controls the magnitude of gradients flowing to $\Delta$ . We set $\alpha$ to be 5 across all experiments.
+
+Table 1: Precision gating (PG) on CNN – models tested are ShiftNet-20 and ResNet-20 on CIFAR-10, and ShuffleNet V2 $0.5 \times$ on ImageNet. We compare PG against uniform quantization (UQ), PACT, and Fix-Threshold. $B_{\mathrm{avg}}$ is the average bitwidth. “fp” denotes floating-point accuracy. “Sp” denotes sparsity.
+
+| Model | PG B/Bhb | PG Sp (%) | PG Bavg | PG Acc. | Bits | UQ Acc. | PACT Acc. | Fix-Thr. B/Bhb | Fix-Thr. Sp (%) | Fix-Thr. Bavg | Fix-Thr. Acc. |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ShiftNet-20, CIFAR-10 (fp 89.4%) | 5/3 | 55.5 | 3.9 | 89.1 | 8 | 89.1 | 89.0 | 5/3 | 48.8 | 4.0 | 74.3 |
+|  | 5/3 | 96.3 | 3.1 | 88.6 | 4 | 87.3 | 87.5 | 5/3 | 67.8 | 3.6 | 67.0 |
+|  | 3/1 | 71.9 | 1.6 | 84.5 | 2 | 77.8 | 82.9 | 3/1 | 10.1 | 2.8 | 64.3 |
+| ResNet-20, CIFAR-10 (fp 91.7%) | 4/3 | 78.2 | 3.2 | 91.7 | 8 | 91.6 | 91.2 | 4/3 | 58.7 | 3.4 | 88.3 |
+|  | 3/2 | 90.1 | 2.1 | 91.2 | 4 | 91.1 | 90.9 | 3/2 | 71.0 | 2.3 | 74.2 |
+|  | 2/1 | 71.5 | 1.3 | 90.6 | 2 | 84.0 | 90.1 | 2/1 | 21.6 | 1.8 | 71.9 |
+| ShuffleNet, ImageNet (fp 59.0%) | 6/4 | 57.2 | 4.8 | 59.7 | 8 | 59.1 | 59.1 | 6/4 | 52.6 | 4.9 | 33.6 |
+|  | 6/4 | 62.2 | 4.7 | 59.3 | 6 | 57.8 | 57.1 | 6/4 | 58.5 | 4.8 | 32.7 |
+|  | 5/3 | 41.9 | 4.1 | 58.0 | 5 | 57.0 | 56.6 | 5/3 | 40.4 | 4.2 | 27.7 |
+
+# 4.1 CNN RESULTS
+
+To compare the hardware execution efficiency across different techniques, we compute the average bitwidth $(B_{\mathrm{avg}})$ of all features in a DNN model:
+
+$$
+B _ {\text {a v g}} = B _ {h b} + (1 - S p) \times (B - B _ {h b}) \tag {8}
+$$
+
+Here $Sp$ denotes sparsity in terms of the percentage of low-precision activations (i.e., number of unimportant features divided by total features). The computational cost of DNNs is proportional to the average bitwidth. Our results on CNNs are presented in Tables 1 and 2.
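Eq. (8) is straightforward to evaluate; for instance (a tiny illustrative helper, with values taken from Table 1):

```python
def average_bitwidth(B, B_hb, sparsity):
    """Eq. (8): unimportant features cost B_hb bits, the rest cost B bits."""
    return B_hb + (1.0 - sparsity) * (B - B_hb)

# Reproduce two Table 1 entries:
print(round(average_bitwidth(3, 2, 0.901), 1))   # ResNet-20, Sp = 90.1% -> 2.1
print(round(average_bitwidth(5, 3, 0.555), 1))   # ShiftNet-20, Sp = 55.5% -> 3.9
```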
+
+We first compare PG against two widely adopted quantization schemes: uniform quantization (UQ) and PACT (Choi et al., 2018). Table 1 lists the average bitwidth and model accuracy of PG alongside the bitwidth and corresponding accuracy of UQ and PACT. In each row, PG achieves better accuracy with a lower average bitwidth ( $B_{\mathrm{avg}}$ vs. Bits). Specifically, PG achieves $6.7\%$ and $1.6\%$ higher accuracy with $1.25\times$ computational cost reduction, and $6.6\%$ and $0.5\%$ higher accuracy with $1.54\times$ computational cost reduction, than 2-bit UQ and PACT on ShiftNet-20 and ResNet-20 for CIFAR-10 (rows 3 and 6), respectively. We observe the same trend on ShuffleNet for ImageNet, where PG improves the accuracy of 5-bit UQ and PACT by $1.0\%$ and $1.4\%$ with $1.22\times$ computational cost reduction (row 9). It is also worth noting that PG recovers the accuracy of the floating-point ShiftNet-20, ResNet-20, and ShuffleNet V2 $0.5\times$ with average bitwidths of 3.9, 3.2, and 4.7, respectively. This demonstrates that PG, using a learnable threshold, can predict unimportant features and reduce their bitwidth without compromising accuracy. The comparison with the quantization baselines is visualized in Figure 4, which plots accuracy vs. average bitwidth for uniform quantization (squares), PACT (triangles), and PG (circles); results closer to the upper-left corner are better.
+
+We then compare PG with Fix-Threshold, an extension of zero prediction (Lin et al., 2017; Song et al., 2018), which explicitly predicts ReLU-induced zeros during inference. For a fair comparison, we extend their technique to predict against an arbitrary fixed threshold so as to achieve the same or lower $B_{\mathrm{avg}}$ as PG, reported as Fix-Threshold in Table 1. For CIFAR-10, PG achieves $20.2\%$ and $18.7\%$ higher accuracy with $1.75 \times$ and $1.38 \times$ computational cost reduction over Fix-Threshold on ShiftNet-20 and ResNet-20 (rows 3 and 6), respectively. The accuracy gap becomes even larger on ShuffleNet V2 for the ImageNet dataset: with the same or a lower average bitwidth, the accuracy of PG is at least $26\%$ higher than Fix-Threshold (rows 7-9). In conclusion, PG consistently outperforms Fix-Threshold because PG uses a learnable gate function that can adjust its threshold to the clipped activation distribution.
+
+Table 2: Comparison with SeerNet on CNN - compare PG against SeerNet under similar model prediction accuracy. In SeerNet the average bitwidth $B_{\mathrm{avg}} = B_{hb} + (1 - Sp) \times B$ .
+
+| Model | PG B/Bhb | PG Sp (%) | PG Bavg | PG Acc. | SeerNet B/Bhb | SeerNet Sp (%) | SeerNet Bavg | SeerNet Acc. |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ShiftNet-20 | 5/3 | 96.3 | 3.1 | 88.6 | 12/8 | 49.7 | 14.0 | 85.4 |
+| ResNet-20 | 3/2 | 90.1 | 2.1 | 91.2 | 6/4 | 51.1 | 6.9 | 91.2 |
+| ShuffleNet V2 0.5× | 6/4 | 62.2 | 4.7 | 59.3 | 8/6 | 30.8 | 11.5 | 58.9 |
+
+We further compare PG with SeerNet (Cao et al., 2019) in Table 2. For each convolutional layer, SeerNet executes a quantized version of the same layer to predict the output sparsity, and then computes in full precision only the output activations that are not suppressed according to the prediction. Since the code of SeerNet is currently unavailable, we implement the network in TensorFlow and boost its accuracy by retraining. We reduce the average bitwidth of SeerNet while keeping accuracy comparable to PG. For CIFAR-10, PG reduces the computational cost of ResNet-20 by $3.3 \times$ more than SeerNet at the same prediction accuracy; on the hardware-friendly ShiftNet-20, PG achieves $3.2\%$ higher accuracy with $4.5 \times$ less compute than SeerNet. For ImageNet, PG on ShuffleNet also achieves $0.4\%$ higher accuracy with $2.4 \times$ computational cost reduction over SeerNet. It is worth noting that SeerNet does not reuse the outputs from the quantized layer, which may incur a nontrivial overhead in execution time; in contrast, when PG invokes the update phase, it reuses the outputs of the low-precision computation from the prediction phase.
+
+Table 3: Sweeping manually set thresholds – we sweep a series of manually set thresholds for ResNet-20 on CIFAR-10. Compared to manually setting thresholds, PG achieves a better model accuracy (91.2%) with a larger sparsity (90.1%).
+
+| B/Bhb | Fixed threshold | Sp (%) | Bavg | Acc. |
+| --- | --- | --- | --- | --- |
+| 3/2 | 3 | 86.0 | 2.1 | 65.9 |
+| 3/2 | 2 | 80.2 | 2.2 | 70.7 |
+| 3/2 | 1 | 71.0 | 2.3 | 74.2 |
+| 3/2 | 0 | 56.7 | 2.4 | 75.8 |
+| 3/2 | -1 | 35.2 | 2.6 | 83.5 |
+| 3/2 | -2 | 24.5 | 2.8 | 86.9 |
+| 3/2 | -4 | 13.4 | 2.9 | 88.7 |
+
+To evaluate the efficacy of the learnable thresholds, we further sweep a series of manually set thresholds for ResNet-20 on CIFAR-10. Table 3 shows the results using a dual-precision mode of $B / B_{hb} = 3 / 2$ . As the threshold decreases from 3 to -4, the sparsity drops and the average bitwidth consistently increases; this is expected because more output features are computed at high precision, and the model prediction accuracy therefore increases. Compared to these manually set thresholds, PG achieves a much higher model accuracy (91.2%) at a higher sparsity (90.1%).
+
+Table 4: PG with and without sparse back-propagation (SpBP) on CNNs.
+
+| Model | PG Bavg | PG Sp (%) | w/ SpBP Bavg | w/ SpBP Sp (%) | B/Bhb | Acc. |
+| --- | --- | --- | --- | --- | --- | --- |
+| ShiftNet-20 | 4.0 | 49.3 | 3.9 | 55.5 (↑6.2) | 5/3 | 89.1 |
+|  | 3.3 | 84.0 | 3.1 | 96.3 (↑12.3) | 5/3 | 88.6 |
+| ResNet-20 | 3.4 | 58.2 | 3.2 | 78.2 (↑20.0) | 4/3 | 91.7 |
+|  | 2.2 | 76.5 | 2.1 | 90.1 (↑13.6) | 3/2 | 91.2 |
+| ShuffleNet V2 0.5× | 5.3 | 36.6 | 4.8 | 57.2 (↑20.6) | 6/4 | 59.7 |
+|  | 5.1 | 43.1 | 4.7 | 62.2 (↑19.1) | 6/4 | 59.3 |
+
+To quantify the impact of the sparse back-propagation described in Section 3.3, we run PG with and without sparse back-propagation on CNNs. Table 4 compares the sparsity in the update phase of both variants under the same model accuracy and average bitwidth. We find that the sparsity in the update phase with sparse back-propagation is consistently higher across the tested models and datasets; for both CIFAR-10 and ImageNet, the sparsity increases by between $6\%$ and $21\%$ . We hypothesize that sparse back-propagation zeros out the gradients flowing to the non-activated LSB convolutions in the update phase, which leads to a higher sparsity.
+
+# 4.2 LSTM RESULTS
+
+PG also works well on RNNs. Table 5 reports the results of applying PG to an LSTM model on the PTB corpus. Here we compare only with uniform quantization, since PACT, Fix-Threshold, and SeerNet do not work for sigmoid or tanh activation functions. Although Hubara et al. (2017) claim that quantizing both weights and activations to 4 bits does not lower PPW, we observe a PPW degradation when $B$ decreases from 8 to 4 bits in our implementation; the LSTM with 8-bit activations is therefore considered the full-accuracy model. We observe the same trend as in the CNN evaluation: PG enables a 3-bit LSTM cell to improve PPW by $1.2\%$ while reducing computational cost by $2.7 \times$ compared to 8-bit uniform quantization.
+
+
+Figure 4: Precision gating (PG) results on CNNs and LSTM - comparing PG against uniform quantization (UQ) and PACT. Panels: (a) ShiftNet-20 on CIFAR-10, (b) ResNet-20 on CIFAR-10, (c) ShuffleNet on ImageNet, (d) LSTM on PTB.
+
+Table 5: PG on LSTM – the dataset used is Penn Tree Bank (PTB). The metric is perplexity per word (PPW) and lower is better. Floating-point PPW is 110.1.
+
+| Base Bits | Base PPW | PG B/Bhb | PG Sp (%) | PG Bavg | PG PPW |
+| --- | --- | --- | --- | --- | --- |
+| 8 | 109.8 | 4/2 | 48.4 | 3.0 | 108.5 |
+| 4 | 110.8 | 4/2 | 54.9 | 2.9 | 109.3 |
+| 2 | 124.9 | 3/1 | 53.0 | 1.9 | 118.8 |
+
+# 4.3 SPARSE KERNEL SPEEDUP
+
+The sparse update phase of PG can be implemented efficiently with a kernel called sampled dense-dense matrix multiplication (SDDMM). A convolutional layer with PG is then factorized into a regular low-precision convolution followed by a low-precision SDDMM. To evaluate the potential speedup of PG, we implement the SDDMM kernel in Python using the high-performance JIT compiler Numba (Lam et al., 2015) and test it on the ResNet-20 model for CIFAR-10. Table 6 shows the layer-wise sparsity and the kernel speedup over a dense matrix multiplication baseline on an Intel Xeon Silver 4114 CPU (2.20 GHz). With the high sparsity (from $76\%$ to $99\%$ ) in each layer induced by PG, the SDDMM kernel achieves up to $8.3\times$ wall-clock speedup over the general dense matrix-matrix multiplication (GEMM) kernel. This significant wall-clock speedup shows good potential for deploying PG on commodity hardware.
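For reference, the SDDMM semantics can be expressed in a few lines of NumPy (an illustrative sketch with made-up shapes; the kernel we benchmark is a Numba-compiled implementation):

```python
import numpy as np

def sddmm(A, B, rows, cols):
    # Sampled dense-dense matmul: compute (A @ B)[i, j] only at the
    # sampled (rows, cols) positions, skipping all other entries.
    return np.einsum('ij,ij->i', A[rows], B[:, cols].T)

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 32))     # e.g. LSB activations (im2col layout)
B = rng.standard_normal((32, 16))     # weights
# Positions where the prediction phase fired (O_hb above threshold).
mask = rng.random((64, 16)) < 0.1     # roughly 90% sparsity
rows, cols = np.nonzero(mask)

vals = sddmm(A, B, rows, cols)
assert np.allclose(vals, (A @ B)[rows, cols])
```

Only the sampled dot products are evaluated, so the work scales with the number of important outputs rather than with the full output size.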
+
+For a GPU implementation, we need to replace the GEMM kernel with an SDDMM kernel to accelerate the update phase. Mainstream deep learning frameworks such as TensorFlow currently do not provide a built-in SDDMM operator, which is essential for achieving high performance on GPUs. Nevertheless, Nisa et al. (2018) have recently demonstrated that a highly optimized SDDMM kernel, at sparsity levels similar to those in Table 6, can achieve about $4 \times$ speedup over a GEMM kernel. This is strong evidence that PG has the potential to obtain high speedups on GPUs as well. Additionally, our approach is a good fit for the specialized accelerator architectures proposed by Lin et al. (2017) and Song et al. (2018); due to the high sparsity, DNNs with PG are estimated to achieve at least $3 \times$ speedup and $5 \times$ energy efficiency on these dedicated hardware accelerators. We leave the deployment of the SDDMM kernel on GPUs and dedicated hardware to future work.
+
+Table 6: SDDMM kernel sparsity and speedup - We report the optimized kernel execution time and wall-clock speedup for each layer in ResNet-20 on CIFAR-10.
+
+| Layer ID | 1 | 3 | 5 | 7 | 9 | 11 | 13 | 15 | 17 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Sp | 85% | 94% | 87% | 76% | 98% | 99% | 91% | 98% | 97% |
+| Execution Time (ms) | 5.4 | 3.3 | 4.9 | 3.6 | 1.5 | 1.1 | 1.5 | 1.0 | 1.2 |
+| Wall Clock Speedup | 3.2× | 5.1× | 3.3× | 2.3× | 6.2× | 8.3× | 3.2× | 6.2× | 5.2× |
+
+# 5 CONCLUSIONS
+
+We propose precision gating, a dynamic dual-precision quantization method that effectively reduces the computational cost of DNNs. PG assigns higher precision to important features and lower precision to the remaining features at run time. The proposed technique is end-to-end trainable, allowing individual models to learn to distinguish important and unimportant features. Experimental results show that PG outperforms state-of-the-art quantization and prediction approaches by a large margin on both CNN and RNN benchmarks, on datasets such as ImageNet and Penn Tree Bank. We will release the source code on the authors' website.
+
+# ACKNOWLEDGMENTS
+
+This work was supported in part by the Semiconductor Research Corporation (SRC) and DARPA. One of the Titan Xp GPUs used for this research was donated by the NVIDIA Corporation. We thank Jordan Dotzel (Cornell) for his helpful discussions during the camera-ready revision.
+
+# REFERENCES
+
+Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: A System for Large-scale Machine Learning. USENIX Conf. on Operating Systems Design and Implementation, 2016.
+Shijie Cao, Lingxiao Ma, Wencong Xiao, Chen Zhang, Yunxin Liu, Lintao Zhang, Lanshu Nie, and Zhi Yang. SeerNet: Predicting Convolutional Neural Network Feature-Map Sparsity through Low-Bit Quantization. Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
+Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2016.
+Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. PACT: Parameterized Clipping Activation for Quantized Neural Networks. arXiv e-print, arXiv:1805.06085, 2018.
+Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. arXiv e-print, arXiv:1602.02830, 2016.
+Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. Conf. on Computer Vision and Pattern Recognition (CVPR), 2009.
+
+Zhen Dong, Zhewei Yao, Amir Gholami, Michael W. Mahoney, and Kurt Keutzer. HAWQ: Hessian AWare Quantization of Neural Networks With Mixed-Precision. Int'l Conf. on Computer Vision (ICCV), 2019.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. Conf. on Computer Vision and Pattern Recognition (CVPR), 2016a.
+Qinyao He, He Wen, Shuchang Zhou, Yuxin Wu, Cong Yao, Xinyu Zhou, and Yuheng Zou. Effective Quantization Methods for Recurrent Neural Networks. arXiv e-print, arXiv:1611.10176, 2016b.
+Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, 1997.
+Weizhe Hua, Yuan Zhou, Christopher De Sa, Zhiru Zhang, and G. Edward Suh. Boosting the Performance of CNN Accelerators with Dynamic Fine-Grained Channel Gating. Int'l Symp. on Microarchitecture (MICRO), 2019a.
+Weizhe Hua, Yuan Zhou, Christopher M De Sa, Zhiru Zhang, and G. Edward Suh. Channel Gating Neural Networks. Advances in Neural Information Processing Systems (NeurIPS), 2019b.
+Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations. Journal of Machine Learning Research (JMLR), 2017.
+Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv e-print, arXiv:1502.03167, 2015.
+Alex Krizhevsky and Geoffrey Hinton. Learning Multiple Layers of Features from Tiny Images. Tech report, 2009.
+Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems (NeurIPS), 2012.
+Siu Kwan Lam, Antoine Pitrou, and Stanley Seibert. Numba: A LLVM-based Python JIT Compiler. Workshop on the LLVM Compiler Infrastructure in HPC, 2015.
+Fengfu Li, Bo Zhang, and Bin Liu. Ternary Weight Networks. arXiv e-print, arXiv:1605.04711, 2016.
+Darryl D. Lin, Sachin S. Talathi, and V. Sreekanth Annapureddy. Fixed Point Quantization of Deep Convolutional Networks. Int'l Conf. on Machine Learning (ICML), 2016.
+Yingyan Lin, Charbel Sakr, Yongjune Kim, and Naresh Shanbhag. PredictiveNet: An energy-efficient convolutional neural network via zero prediction. Int'l Symp. on Circuits and Systems (ISCAS), 2017.
+Thang Luong, Hieu Pham, and Christopher D. Manning. Effective Approaches to Attention-based Neural Machine Translation. Conf. on Empirical Methods in Natural Language Processing (EMNLP), 2015.
+Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design. European Conf. on Computer Vision (ECCV), 2018.
+Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a Large Annotated Corpus of English: The Penn Treebank. Comput. Linguist., 1993.
+Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, and Debbie Marr. WRPN: Wide Reduced-Precision Networks. Int'l Conf. on Learning Representations (ICLR), 2018.
+Israt Nisa, Aravind Sukumaran-Rajam, Sureyya Emre Kurt, Changwan Hong, and P. Sadayappan. Sampled Dense Matrix Multiplication for High-Performance Machine Learning. Int'l Conf. on High-Performance Computing (HIPC), 2018.
+Eunhyeok Park, Sungjoo Yoo, and Peter Vajda. Value-aware Quantization for Training and Inference of Neural Networks. European Conf. on Computer Vision (ECCV), 2018.
+
+Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. Medical Image Computing and Computer-Assisted Intervention (MIC-CAI), 2015.
+Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A Unified Embedding for Face Recognition and Clustering. Conf. on Computer Vision and Pattern Recognition (CVPR), 2015.
+Mingcong Song, Jiechen Zhao, Yang Hu, Jiaqi Zhang, and Tao Li. Prediction Based Execution on Deep Neural Networks. Int'l Symp. on Computer Architecture (ISCA), 2018.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. Advances in Neural Information Processing Systems (NeurIPS), 2017.
+Kuan Wang, Zhijian Liu, Yujun Lin, Ji Lin, and Song Han. HAQ: Hardware-Aware Automated Quantization With Mixed Precision. Conf. on Computer Vision and Pattern Recognition (CVPR), 2019.
+Peiqi Wang, Xinfeng Xie, Lei Deng, Guoqi Li, Dongsheng Wang, and Yuan Xie. HitNet: Hybrid Ternary Recurrent Neural Network. Advances in Neural Information Processing Systems (NeurIPS), 2018.
+Bichen Wu, Alvin Wan, Xiangyu Yue, Peter Jin, Sicheng Zhao, Noah Golmant, Amir Gholaminejad, Joseph Gonzalez, and Kurt Keutzer. Shift: A Zero FLOP, Zero Parameter Alternative to Spatial Convolutions. Conf. on Computer Vision and Pattern Recognition (CVPR), 2018a.
+Bichen Wu, Yanghan Wang, Peizhao Zhang, Yuandong Tian, Peter Vajda, and Kurt Keutzer. Mixed Precision Quantization of ConvNets via Differentiable Neural Architecture Search. arXiv e-print, arXiv:1812.00090, 2018b.
+Carole-Jean Wu, David Brooks, Kevin Chen, Douglas Chen, Sy Choudhury, Marat Dukhan, Kim Hazelwood, Eldad Isaac, Yangqing Jia, Bill Jia, Tommer Leyvand, Hao Lu, Yang Lu, Lin Qiao, Brandon Reagen, Joe Spisak, Fei Sun, Andrew Tulloch, Peter Vajda, Xiaodong Wang, Yanghan Wang, Bram Wasti, Yiming Wu, Ran Xian, Sungjoo Yoo, and Peizhao Zhang. Machine Learning at Facebook: Understanding Inference at the Edge. Int'l Symp. on High-Performance Computer Architecture (HPCA), 2019.
+Xiaowei Xu, Yukun Ding, Sharon Xiaobo Hu, Michael Niemier, Jason Cong, Yu Hu, and Yiyu Shi. Scaling for Edge Inference of Deep Neural Networks. Nature Electronics, 2018.
+Hengshuang Zhao, Xiaojuan Qi, Xiaoyong Shen, Jianping Shi, and Jiaya Jia. ICNet for Real-Time Semantic Segmentation on High-Resolution Images. European Conf. on Computer Vision (ECCV), 2018.
+Ritchie Zhao, Weinan Song, Wentao Zhang, Tianwei Xing, Jeng-Hau Lin, Mani Srivastava, Rajesh Gupta, and Zhiru Zhang. Accelerating Binarized Convolutional Neural Networks with Software-Programmable FPGAs. Int'l Symp. on Field-Programmable Gate Arrays (FPGA), 2017.
+Ritchie Zhao, Yuwei Hu, Jordan Dotzel, Christopher De Sa, and Zhiru Zhang. Improving Neural Network Quantization using Outlier Channel Splitting. Int'l Conf. on Machine Learning (ICML), 2019.
+Shu-Chang Zhou, Yu-Zhi Wang, He Wen, Qin-Yao He, and Yu-Heng Zou. Balanced Quantization: An Effective and Efficient Approach to Quantized Neural Networks. Journal of Computer Science and Technology, 2017.
+Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients. arXiv e-print, arXiv:1606.06160, 2016.
+Chenzhuo Zhu, Song Han, Huizi Mao, and William J. Dally. Trained Ternary Quantization. Int'l Conf. on Learning Representations (ICLR), 2017.
+
+# A APPENDIX
+
+# A.1 FEATURE VISUALIZATION
+
+In precision gating, we expect the model to learn to compute features whose prediction values exceed the threshold at high precision while keeping the others at low precision. In the image recognition task, we expect high-precision features to concentrate in the region where an object lies. To provide evidence that PG effectively learns to identify those regions, Figure 5 visualizes the decision maps extracted from the final convolutional layer of the ResNet-20 model modified to support PG. A decision map is a grayscale 2D image with the same spatial size as the output feature map of the corresponding convolutional layer. The brighter a pixel in the decision map, the more likely the same spatial location in the output feature map is computed at high precision. The first row contains the original input images from CIFAR-10, and the second row shows their corresponding decision maps. In each decision map, the locations of bright pixels roughly align with the object in the original image.
+
+
+Figure 5: Visualization of gating ratio - Top: feature maps from the final precision gating block in ResNet-20 on CIFAR-10. Bottom: ratio of computing using a high-precision (brighter pixel means higher ratio). PG effectively identifies the location of the object of interest and increases bitwidth when computing in this region.
+
+# A.2 ADDITIONAL RESULTS
+
+We provide more supplementary results in this section as shown in Table 7.
+
+Table 7: Precision gating (PG) on CNN – additional models tested are ResNet-32 and ResNet-56 on CIFAR-10. We compare PG against uniform quantization (UQ), PACT, and Fix-Threshold. “fp” is floating-point accuracy. “Sp” is sparsity.
+
+| Model | PG B/Bhb | PG Sp (%) | PG Bavg | PG Acc | Bits | UQ Acc | PACT Acc | Fix-Thr. B/Bhb | Fix-Thr. Sp (%) | Fix-Thr. Bavg | Fix-Thr. Acc |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ResNet-32 (fp 92.4%) | 3/2 | 96.3 | 2.0 | 92.0 | 8 | 92.3 | 91.9 | 3/2 | 94.4 | 2.0 | 45.6 |
+| ResNet-32 (fp 92.4%) | | | | | 4 | 92.0 | 91.6 | | | | |
+| ResNet-56 (fp 92.9%) | 4/3 | 93.0 | 3.1 | 93.0 | 8 | 92.9 | 92.5 | 4/3 | 91.0 | 3.1 | 90.2 |
+| ResNet-56 (fp 92.9%) | 3/2 | 98.2 | 2.0 | 92.5 | 4 | 92.3 | 92.1 | 3/2 | 96.1 | 2.0 | 50.0 |
+| ResNet-56 (fp 92.9%) | 2/1 | 90.4 | 1.1 | 92.0 | 2 | 88.5 | 91.8 | 2/1 | 86.9 | 1.1 | 14.2 |
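The average-bitwidth column is consistent with a simple accounting: every output is first computed with the Bhb most-significant bits, and only the non-sparse fraction is updated with the remaining B - Bhb bits. The relation below is inferred from the table's numbers, not stated explicitly in the text:

```python
def avg_bitwidth(b, b_hb, sparsity):
    """Average bits per output: all outputs use the b_hb MSBs; the
    (1 - sparsity) fraction gated to high precision also computes the
    remaining b - b_hb LSBs. Relation inferred from Table 7's numbers."""
    return round(b_hb + (1.0 - sparsity) * (b - b_hb), 1)
```

For example, ResNet-56 at 2/1 with 90.4% sparsity gives 1 + 0.096 ≈ 1.1 bits on average, matching the table.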
\ No newline at end of file
diff --git a/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/images.zip b/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ce1f08d8d83ea664f2eb964c92f5a432970c3c44
--- /dev/null
+++ b/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:029f2f26626025131964826a9e811c9d79ff0ebc2e084f55dae8deed8e93949f
+size 511910
diff --git a/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/layout.json b/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..5cb1d129faffc94e3d5d92f28343f6950ba9a1fb
--- /dev/null
+++ b/precisiongatingimprovingneuralnetworkefficiencywithdynamicdualprecisionactivations/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6903603f9252a332fd440deaf1c121cf7de633f8115f08f87aef6558659ee34e
+size 439800
diff --git a/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/b2bc1d46-18d9-4fd2-97c4-f8bfbac60e00_content_list.json b/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/b2bc1d46-18d9-4fd2-97c4-f8bfbac60e00_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a1b3aa929519b2720e8d1d091163a1d6e422cf54
--- /dev/null
+++ b/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/b2bc1d46-18d9-4fd2-97c4-f8bfbac60e00_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:023ff232a67a8330722549d2309a4d38010b8207b80090646af5dc1616257228
+size 204267
diff --git a/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/b2bc1d46-18d9-4fd2-97c4-f8bfbac60e00_model.json b/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/b2bc1d46-18d9-4fd2-97c4-f8bfbac60e00_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..feeafe478655bf588673137365f781090ee0cc78
--- /dev/null
+++ b/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/b2bc1d46-18d9-4fd2-97c4-f8bfbac60e00_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b7bd1e34c55de7f80cb971e7e7e8875211802bd4165f5dc0a870134544712b5e
+size 240253
diff --git a/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/b2bc1d46-18d9-4fd2-97c4-f8bfbac60e00_origin.pdf b/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/b2bc1d46-18d9-4fd2-97c4-f8bfbac60e00_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1c9ab6388e619a762d031656293c9f69f7efbea3
--- /dev/null
+++ b/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/b2bc1d46-18d9-4fd2-97c4-f8bfbac60e00_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6bd8f3a4ab5f1fdfe5ba8c0a98a7167ecef9656108961dd129b323efc314fb81
+size 2655395
diff --git a/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/full.md b/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..cc610a9db2f36864caf4800e7b930d6596654ad3
--- /dev/null
+++ b/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/full.md
@@ -0,0 +1,829 @@
+# PREDICTION, CONSISTENCY, CURVATURE: REPRESENTATION LEARNING FOR LOCALLY-LINEAR CONTROL
+
+Nir Levine $^{1*}$ , Yinlam Chow $^{2*}$ , Rui Shu $^{3}$ , Ang Li $^{1}$ , Mohammad Ghavamzadeh $^{4}$ , Hung Bui $^{5}$
+
+$^1$DeepMind, $^2$Google Research, $^3$Stanford University, $^4$Facebook AI Research, $^5$VinAI
+
+# ABSTRACT
+
+Many real-world sequential decision-making problems can be formulated as optimal control with high-dimensional observations and unknown dynamics. A promising approach is to embed the high-dimensional observations into a lower-dimensional latent representation space, estimate the latent dynamics model, and then utilize this model for control in the latent space. An important open question is how to learn a representation that is amenable to existing control algorithms. In this paper, we focus on learning representations for locally-linear control algorithms, such as iterative LQR (iLQR). By formulating and analyzing the representation learning problem from an optimal control perspective, we establish three underlying principles that the learned representation should satisfy: 1) accurate prediction in the observation space, 2) consistency between latent and observation space dynamics, and 3) low curvature in the latent space transitions. These principles naturally correspond to a loss function that consists of three terms: prediction, consistency, and curvature (PCC). Crucially, to make PCC tractable, we derive an amortized variational bound for the PCC loss function. Extensive experiments on benchmark domains demonstrate that the new variational-PCC learning algorithm benefits from significantly more stable and reproducible training, and leads to superior control performance. Further ablation studies give support to the importance of all three PCC components for learning a good latent space for control.
+
+# 1 INTRODUCTION
+
+Decomposing the problem of decision-making in an unknown environment into estimating dynamics followed by planning provides a powerful framework for building intelligent agents. This decomposition confers several notable benefits. First, it enables the handling of sparse-reward environments by leveraging the dense signal of dynamics prediction. Second, once a dynamics model is learned, it can be shared across multiple tasks within the same environment. While the merits of this decomposition have been demonstrated in low-dimensional environments (Deisenroth & Rasmussen, 2011; Gal et al., 2016), scaling these methods to high-dimensional environments remains an open challenge.
+
+The recent advancements in generative models have enabled the successful dynamics estimation of high-dimensional decision processes (Watter et al., 2015; Ha & Schmidhuber, 2018; Kurutach et al., 2018). This procedure of learning dynamics can then be used in conjunction with a plethora of decision-making techniques, ranging from optimal control to reinforcement learning (RL) (Watter et al., 2015; Banijamali et al., 2018; Finn et al., 2016; Chua et al., 2018; Ha & Schmidhuber, 2018; Kaiser et al., 2019; Hafner et al., 2018; Zhang et al., 2019). One particularly promising line of work in this area focuses on learning the dynamics and conducting control in a low-dimensional latent embedding of the observation space, where the embedding itself is learned through this process (Watter et al., 2015; Banijamali et al., 2018; Hafner et al., 2018; Zhang et al., 2019). We refer to this approach as learning controllable embedding (LCE). There have been two main approaches to this problem: 1) to start by defining a cost function in the high-dimensional observation space and learn the embedding space, its dynamics, and reward function, by interacting with the environment in a RL fashion (Hafner et al., 2018; Zhang et al., 2019), and 2) to first learn the embedding space and its dynamics, and then define a cost function in this low-dimensional space and conduct the control (Watter et al., 2015; Banijamali et al., 2018). This can be later combined with RL for extra fine-tuning of the model and control.
+
+In this paper, we take the second approach and particularly focus on the important question of what desirable traits the latent embedding should exhibit for it to be amenable to a specific class of control/learning algorithms, namely the widely used class of locally-linear control (LLC) algorithms. We argue from an optimal control standpoint that our latent space should exhibit three properties. The first is prediction: given the ability to encode to and decode from the latent space, we expect the process of encoding, transitioning via the latent dynamics, and then decoding, to adhere to the true observation dynamics. The second is consistency: given the ability to encode an observation trajectory sampled from the true environment, we expect the latent dynamics to be consistent with the encoded trajectory. The third is curvature: in order to learn a latent space that is specifically amenable to LLC algorithms, we expect the (learned) latent dynamics to exhibit low curvature in order to minimize the approximation error of its first-order Taylor expansion employed by LLC algorithms. Our contributions are thus as follows: (1) We propose the Prediction, Consistency, and Curvature (PCC) framework for learning a latent space that is amenable to LLC algorithms and show that the elements of PCC arise systematically from bounding the suboptimality of the solution of the LLC algorithm in the latent space. (2) We design a latent variable model that adheres to the PCC framework and derive a tractable variational bound for training the model. (3) To the best of our knowledge, our proposed curvature loss for the transition dynamics (in the latent space) is novel. We also propose a direct amortization of the Jacobian calculation in the curvature loss to help training with the curvature loss more efficiently. (4) Through extensive experimental comparison, we show that the PCC model consistently outperforms E2C (Watter et al., 2015) and RCE (Banijamali et al., 2018) on a number of control-from-images tasks, and verify via ablation the importance of regularizing the model to have consistency and low curvature.
+
+# 2 PROBLEM FORMULATION
+
+We are interested in controlling non-linear dynamical systems of the form $s_{t + 1} = f_S(s_t,u_t) + w$ , over the horizon $T$ . In this definition, $s_t\in S\subseteq \mathbb{R}^{n_s}$ and $u_{t}\in \mathcal{U}\subseteq \mathbb{R}^{n_u}$ are the state and action of the system at time step $t\in \{0,\dots ,T - 1\}$ , $w$ is the Gaussian system noise, and $f_{S}$ is a smooth non-linear system dynamics. We are particularly interested in the scenario in which we only have access to the high-dimensional observation $x_{t}\in \mathcal{X}\subseteq \mathbb{R}^{n_{x}}$ of each state $s_t$ ( $n_x\gg n_s$ ). This scenario has applications in many real-world problems, such as visual-servoing (Espiau et al., 1992), in which we only observe high-dimensional images of the environment and not its underlying state. We further assume that the high-dimensional observations $x$ have been selected such that for any arbitrary control sequence $U = \{u_{t}\}_{t = 0}^{T - 1}$ , the observation sequence $\{x_{t}\}_{t = 0}^{T}$ is generated by a stationary Markov process, i.e., $x_{t + 1}\sim P(\cdot |x_t,u_t)$ , $\forall t\in \{0,\ldots ,T - 1\}$ .$^1$
+
+A common approach to control the above dynamical system is to solve the following stochastic optimal control (SOC) problem (Shapiro et al., 2009) that minimizes expected cumulative cost:
+
+$$
+\min _ {U} L (U, P, c, x _ {0}) := \mathbb {E} \left[ c _ {T} \left(x _ {T}\right) + \sum_ {t = 0} ^ {T - 1} c _ {t} \left(x _ {t}, u _ {t}\right) \mid P, x _ {0} \right], ^ {2} \tag {SOC1}
+$$
+
+where $c_{t}:\mathcal{X}\times \mathcal{U}\to \mathbb{R}_{\geq 0}$ is the immediate cost function at time $t$ , $c_{T}\in \mathbb{R}_{\geq 0}$ is the terminal cost, and $x_0$ is the observation at the initial state $s_0$ . Note that all immediate costs are defined in the observation space $\mathcal{X}$ , and are bounded by $c_{\mathrm{max}} > 0$ and Lipschitz with constant $c_{\mathrm{lip}} > 0$ . For example, in visual-servoing, (SOC1) can be formulated as a goal tracking problem (Ebert et al., 2018), where we control the robot to reach the goal observation $x_{\mathrm{goal}}$ , and the objective is to compute a sequence of optimal open-loop actions $U$ that minimizes the cumulative tracking error $\mathbb{E}[\sum_t\| x_t - x_{\mathrm{goal}}\|^2\mid P,x_0]$ .
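The (SOC1) objective can be estimated for any candidate open-loop sequence $U$ by Monte Carlo rollouts. A minimal sketch, assuming a black-box sampler for $P$ ; the toy dynamics and cost in the usage note are illustrative assumptions:

```python
import numpy as np

def rollout_cost(step, cost, terminal_cost, x0, U, n_rollouts=100, seed=0):
    """Monte Carlo estimate of the (SOC1) objective
    E[c_T(x_T) + sum_t c_t(x_t, u_t) | P, x0] for an open-loop plan U.
    `step(x, u, rng)` draws one sample x' ~ P(.|x, u)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_rollouts):
        x = np.asarray(x0, dtype=float)
        for u in U:
            total += cost(x, u)       # immediate cost c_t(x_t, u_t)
            x = step(x, u, rng)       # sample next observation
        total += terminal_cost(x)     # terminal cost c_T(x_T)
    return total / n_rollouts
```

For the goal-tracking example above, setting `cost(x, u) = ||x - x_goal||^2` recovers a Monte Carlo estimate of the cumulative tracking error.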
+
+Since the observations $x$ are high dimensional and the dynamics in the observation space $P(\cdot | x_t, u_t)$ is unknown, solving (SOC1) is often intractable. To address this issue, a class of algorithms has recently been developed that is based on learning a low-dimensional latent (embedding) space $\mathcal{Z} \subseteq \mathbb{R}^{n_z}$ ( $n_z \ll n_x$ ) and latent state dynamics, and performing optimal control there. This class, which we refer to as learning controllable embedding (LCE) throughout the paper, includes recently developed algorithms, such as E2C (Watter et al., 2015), RCE (Banijamali et al., 2018), and SOLAR (Zhang et al., 2019). The main idea behind the LCE approach is to learn a triplet: (i) an encoder $E: \mathcal{X} \to \mathbb{P}(\mathcal{Z})$ ; (ii) a dynamics in the latent space $F: \mathcal{Z} \times \mathcal{U} \to \mathbb{P}(\mathcal{Z})$ ; and (iii) a decoder $D: \mathcal{Z} \to \mathbb{P}(\mathcal{X})$ . These in turn can be thought of as defining a (stochastic) mapping $\widehat{P}: \mathcal{X} \times \mathcal{U} \to \mathbb{P}(\mathcal{X})$ of the form $\widehat{P} = D \circ F \circ E$ . We then wish to solve the SOC in latent space $\mathcal{Z}$ :
+
+$$
+\min _ {U, \bar {P}} \quad \mathbb {E} \left[ L (U, F, \bar {c}, z _ {0}) \mid E, x _ {0} \right] + \lambda_ {2} \sqrt {R _ {2} (\hat {P})}, \tag {SOC2}
+$$
+
+such that the solution of (SOC2), $U_2^*$ , has similar performance to that of (SOC1), $U_1^*$ , i.e., $L(U_1^*, P, c, x_0) \approx L(U_2^*, P, c, x_0)$ . In (SOC2), $z_0$ is the initial latent state sampled from the encoder $E(\cdot | x_0)$ ; $\bar{c} : \mathcal{Z} \times \mathcal{U} \to \mathbb{R}_{\geq 0}$ is the latent cost function defined as $\bar{c}_t(z_t, u_t) = \int c_t(x_t, u_t) dD(x_t | z_t)$ ; $R_2(\widehat{P})$ is a regularizer over the mapping $\widehat{P}$ ; and $\lambda_2$ is the corresponding
+
+
+Figure 1: Evolution of the states (a)(blue) in equation SOC1 under dynamics $P$ , (b)(green) in equation SOC2 under dynamics $F$ , and (c)(red) in equation SOC3 under dynamics $\hat{P}$ .
+
+regularization parameter. We will define $R_{2}$ and $\lambda_{2}$ more precisely in Section 3. Note that the expectation in (SOC2) is over the randomness generated by the (stochastic) encoder $E$ .
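The composed mapping $\widehat{P} = D\circ F\circ E$ can be sketched as three chained Gaussian samplers. The specific maps and dimensions below are illustrative assumptions, not the learned networks of any LCE method:

```python
import numpy as np

# Toy stand-ins for the learned triplet (all Gaussian, as in LCE models);
# the particular means/variances here are illustrative assumptions.
def encode(x):            # E : X -> P(Z), here n_x = 4, n_z = 2
    return x[:2], 0.1 * np.ones(2)

def latent_step(z, u):    # F : Z x U -> P(Z)
    return z + u, 0.05 * np.ones(2)

def decode(z):            # D : Z -> P(X)
    return np.concatenate([z, z]), 0.1 * np.ones(4)

def p_hat_sample(x, u, rng):
    """One sample x' ~ P_hat(.|x, u), with P_hat = D o F o E."""
    mu, sd = encode(x)
    z = mu + sd * rng.standard_normal(mu.shape)          # z ~ E(.|x)
    mu, sd = latent_step(z, u)
    z_next = mu + sd * rng.standard_normal(mu.shape)     # z' ~ F(.|z, u)
    mu, sd = decode(z_next)
    return mu + sd * rng.standard_normal(mu.shape)       # x' ~ D(.|z')
```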
+
+# 3 PCC MODEL: A CONTROL PERSPECTIVE
+
+As described in Section 2, we are primarily interested in solving (SOC1), whose states evolve under dynamics $P$ , as shown at the bottom row of Figure 1(a) in (blue). However, because of the difficulties in solving (SOC1), mainly due to the high dimension of observations $x$ , LCE proposes to learn a mapping $\widehat{P}$ by solving (SOC2) that consists of a loss function, whose states evolve under dynamics $F$ (after an initial transition by encoder $E$ ), as depicted in Figure 1(b), and a regularization term. The role of the regularizer $R_{2}$ is to account for the performance gap between (SOC1) and the loss function of (SOC2), due to the discrepancy between their evolution paths, shown in Figures 1(a)(blue) and 1(b)(green). The goal of LCE is to learn $\widehat{P}$ of the particular form $\widehat{P} = D\circ F\circ E$ , described in Section 2, such that the solution of (SOC2) has similar performance to that of (SOC1). In this section, we propose a principled way to select the regularizer $R_{2}$ to achieve this goal. Since the exact form of (SOC2) has a direct effect on learning $\widehat{P}$ , designing this regularization term, in turn, provides us with a recipe (loss function) to learn the latent (embedded) space $\mathcal{Z}$ . In the following subsections, we show that this loss function consists of three terms that correspond to prediction, consistency, and curvature, the three ingredients of our PCC model.
+
+Note that these two SOCs evolve in two different spaces, one in the observation space $\mathcal{X}$ under dynamics $P$ , and the other one in the latent space $\mathcal{Z}$ (after an initial transition from $\mathcal{X}$ to $\mathcal{Z}$ ) under dynamics $F$ . Unlike $P$ and $F$ that only operate in a single space, $\mathcal{X}$ and $\mathcal{Z}$ , respectively, $\widehat{P}$ can govern the evolution of the system in both $\mathcal{X}$ and $\mathcal{Z}$ (see Figure 1(c)). Therefore, any recipe to learn $\widehat{P}$ , and as a result the latent space $\mathcal{Z}$ , should have at least two terms, to guarantee that the evolution paths resulting from $\widehat{P}$ in $\mathcal{X}$ and $\mathcal{Z}$ are consistent with those generated by $P$ and $F$ . We derive these two terms, which are the prediction and consistency terms in the loss function used by our PCC model, in Sections 3.1 and 3.2, respectively. While these two terms are the result of learning $\widehat{P}$ in general SOC problems, in Section 3.3, we concentrate on the particular class of LLC algorithms (e.g., iLQR (Li & Todorov, 2004)) to solve SOC, and add the third term, curvature, to our recipe for learning $\widehat{P}$ .
+
+# 3.1 PREDICTION OF THE NEXT OBSERVATION
+
+Figures 1(a)(blue) and 1(c)(red) show the transition in the observation space under $P$ and $\widehat{P}$ , where $x_{t}$ is the current observation, and $x_{t+1}$ and $\hat{x}_{t+1}$ are the next observations under these two dynamics, respectively. Instead of learning a $\widehat{P}$ with minimum mismatch with $P$ in terms of some distribution norm, we propose to learn $\widehat{P}$ by solving the following SOC:
+
+$$
+\min _ {U, \widehat {P}} L (U, \widehat {P}, c, x _ {0}) + \lambda_ {3} \sqrt {R _ {3} (\widehat {P})}, \tag {SOC3}
+$$
+
+whose loss function is the same as the one in (SOC1), with the true dynamics replaced by $\widehat{P}$ . In Lemma 1 (see Appendix A.1, for proof), we show how to set the regularization term $R_{3}$ in (SOC3), such that the control sequence resulting from solving (SOC3), $U_{3}^{*}$ , has similar performance to the solution of (SOC1), $U_{1}^{*}$ , i.e., $L(U_{1}^{*}, P, c, x_{0}) \approx L(U_{3}^{*}, P, c, x_{0})$ .
+
+Lemma 1. Let $U_1^*$ be a solution to (SOC1) and $(U_3^*, \widehat{P}_3^*)$ be a solution to (SOC3) with
+
+$$
+R_3(\widehat{P}) = \mathbb{E}_{x,u}\left[D_{\mathrm{KL}}\left(P(\cdot\mid x,u)\,\|\,\widehat{P}(\cdot\mid x,u)\right)\right] \quad \text{and} \quad \lambda_3 = \sqrt{2\bar{U}}\cdot T^2 c_{\max} \tag{1}
+$$
+
+Then, we have $L(U_1^*, P, c, x_0) \geq L(U_3^*, P, c, x_0) - 2\lambda_3\sqrt{R_3(\widehat{P}_3^*)}$ .
+
+In Eq. 1, the expectation is over the state-action stationary distribution of the policy used to generate the training samples (uniformly random policy in this work), and $\overline{U}$ is the Lebesgue measure of $\mathcal{U}$ .3
+
+# 3.2 CONSISTENCY IN PREDICTION OF THE NEXT LATENT STATE
+
+In Section 3.1, we provided a recipe for learning $\widehat{P}$ (in the form of $D\circ F\circ E$ ) by introducing an intermediate problem (SOC3) that evolves in the observation space $\mathcal{X}$ according to dynamics $\widehat{P}$ . In this section, we first connect (SOC2) that operates in $\mathcal{Z}$ with (SOC3) that operates in $\mathcal{X}$ . For simplicity and without loss of generality, assume the initial cost $c_{0}(x,u)$ is zero. Lemma 2 (see Appendix A.2, for proof) suggests how we shall set the regularizer in (SOC2), such that its solution performs similarly to that of (SOC3), under their corresponding dynamics models.
+
+Lemma 2. Let $(U_3^*,\widehat{P}_3^*)$ be a solution to (SOC3) and $(U_2^*,\widehat{P}_2^*)$ be a solution to (SOC2) with
+
+$$
+R_2^{\prime}(\widehat{P}) = \mathbb{E}_{x,u}\left[D_{\mathrm{KL}}\left(\left(E\circ\widehat{P}\right)(\cdot\mid x,u)\,\|\,(F\circ E)(\cdot\mid x,u)\right)\right] \quad \text{and} \quad \lambda_2 = \sqrt{2\bar{U}}\cdot T^2 c_{\max} \tag{2}
+$$
+
+Then, we have $L(U_3^*, \widehat{P}_3^*, c, x_0) \geq L(U_2^*, \widehat{P}_2^*, c, x_0) - 2\lambda_2 \sqrt{R_2'(\widehat{P}_2^*)}$ .
+
+Similar to Lemma 1, in Eq. 2, the expectation is over the state-action stationary distribution of the policy used to generate the training samples. Moreover, $\left(E\circ \widehat{P}\right)(z^{\prime}|x,u) = \int_{x^{\prime}}E(z^{\prime}|x^{\prime})d\widehat{P} (x^{\prime}|x,u)$ and $\left(F\circ E\right)(z^{\prime}|x,u) = \int_{z}F(z^{\prime}|z,u)dE(z|x)$ are the probability over the next latent state $z^{\prime}$ , given the current observation $x$ and action $u$ , in (SOC2) and (SOC3) (see the paths $x_{t}\to z_{t}\to \tilde{z}_{t + 1}$ and $x_{t}\rightarrow z_{t}\rightarrow \tilde{z}_{t + 1}\rightarrow \hat{x}_{t + 1}\rightarrow \hat{z}_{t + 1}$ in Figures 1(b)(green) and 1(c)(red)). Therefore $R_2^\prime (\widehat{P})$ can be interpreted as the measure of discrepancy between these models, which we term as consistency loss.
+
+Although Lemma 2 provides a recipe to learn $\widehat{P}$ by solving (SOC2) with the regularizer (2), unfortunately this regularizer cannot be computed from the data, which is of the form $(x_{t},u_{t},x_{t + 1})$ , because the first term in the $D_{\mathrm{KL}}$ requires marginalizing over the current and next latent states ( $z_{t}$ and $\tilde{z}_{t + 1}$ in Figure 1(c)). To address this issue, we propose to use the (computable) regularizer
+
+$$
+R _ {2} ^ {\prime \prime} (\widehat {P}) = \mathbb {E} _ {x, u, x ^ {\prime}} \left[ D _ {\mathrm {K L}} \left(E \left(\cdot \mid x ^ {\prime}\right) \mid \mid (F \circ E) (\cdot \mid x, u)\right) \right], \tag {3}
+$$
+
+in which the expectation is over $(x,u,x^{\prime})$ sampled from the training data. Corollary 1 (see Appendix A.3, for proof) bounds the performance loss resulting from using $R_2^{\prime \prime}(\hat{P})$ instead of $R_2^{\prime}(\hat{P})$ , and shows that it can still be a reasonable choice.
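With diagonal-Gaussian $E$ and $F$ , the regularizer (3) reduces to an average of closed-form KL divergences over training triples. A minimal sketch: marginalizing $z$ under $F\circ E$ is approximated here by pushing the encoder mean through $f_{\mathcal{Z}}$ , a moment-matching shortcut that is an assumption of this sketch, not part of the derivation:

```python
import numpy as np

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    """Closed-form KL( N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) )."""
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def consistency_loss(batch, encode, f_z, var_f):
    """Sketch of Eq. (3): E_{x,u,x'}[ KL( E(.|x') || (F o E)(.|x,u) ) ].
    `encode(x)` returns the (mean, variance) of E(.|x); `f_z` is the
    latent dynamics mean; `var_f` its fixed diagonal variance."""
    total = 0.0
    for x, u, x_next in batch:
        mu_e, var_e = encode(x_next)        # E(. | x')
        mu_f = f_z(encode(x)[0], u)         # mean of F(. | E(x), u)
        total += kl_diag_gauss(mu_e, var_e, mu_f, var_f)
    return total / len(batch)
```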
+
+Corollary 1. Let $(U_3^*,\widehat{P}_3^*)$ be a solution to (SOC3) and $(U_2^*,\widehat{P}_2^*)$ be a solution to (SOC2) with $R_2''(\widehat{P})$ and $\lambda_{2}$ defined by (3) and (2). Then, we have $L(U_3^*,\widehat{P}_3^*,c,x_0)\geq L(U_2^*,\widehat{P}_2^*,c,x_0) - 2\lambda_2\sqrt{2R_2''(\widehat{P}_2^*) + 2R_3(\widehat{P}_2^*)}$ .
+
+Lemma 1 suggests a regularizer $R_{3}$ to connect the solutions of (SOC1) and (SOC3). Similarly, Corollary 1 shows that regularizer $R_{2}^{\prime \prime}$ in (3) establishes a connection between the solutions of (SOC3) and (SOC2). Putting these results together, we achieve our goal in Lemma 3 (see Appendix A.4, for proof) to design a regularizer for (SOC2), such that its solution performs similarly to that of (SOC1).
+
+Lemma 3. Let $U_1^*$ be a solution to (SOC1) and $(U_2^*, \widehat{P}_2^*)$ be a solution to (SOC2) with
+
+$$
+R_2(\widehat{P}) = 3R_3(\widehat{P}) + 2R_2^{\prime\prime}(\widehat{P}) \quad \text{and} \quad \lambda_2 = 2\sqrt{\bar{U}}\cdot T^2 c_{\max}, \tag{4}
+$$
+
+where $R_{3}(\widehat{P})$ and $R_{2}^{\prime \prime}(\widehat{P})$ are defined by (1) and (3). Then, we have
+
+$$
+L \left(U _ {1} ^ {*}, P, c, x _ {0}\right) \geq L \left(U _ {2} ^ {*}, P, c, x _ {0}\right) - 2 \lambda_ {2} \sqrt {R _ {2} \left(\widehat {P} _ {2} ^ {*}\right)}.
+$$
+
+# 3.3 LOCALLY-LINEAR CONTROL IN THE LATENT SPACE AND CURVATURE REGULARIZATION
+
+In Sections 3.1 and 3.2, we derived a loss function to learn the latent space $\mathcal{Z}$ . This loss function, which was motivated by the general SOC perspective, consists of two terms that enforce the latent space to not only predict the next observations accurately, but also to be suitable for control. In this section, we focus on the class of locally-linear control (LLC) algorithms (e.g., iLQR) for solving (SOC2), and show how this choice adds a third term, corresponding to curvature, to the regularizer of (SOC2), and as a result, to the loss function of our PCC model.
+
+The main idea in LLC algorithms is to iteratively compute an action sequence to improve the current trajectory, by linearizing the dynamics around this trajectory, and use this action sequence to generate
+
+the next trajectory (see Appendix B for more details about LLC and iLQR). This procedure implicitly assumes that the dynamics is approximately locally linear. To ensure this in (SOC2), we further restrict the dynamics $\hat{P}$ and assume that it is not only of the form $\hat{P} = D\circ F\circ E$ , but $F$ , the latent space dynamics, has low curvature. One way to ensure this in (SOC2) is to directly impose a penalty over the curvature of the latent space transition function $f_{\mathcal{Z}}(z,u)$ . Assume $F(z,u) = f_{\mathcal{Z}}(z,u) + w$ where $w$ is a Gaussian noise. Consider the following SOC problem:
+
+$$
+\min _ {U, \widehat {P}} \quad \mathbb {E} \left[ L (U, F, \bar {c}, z _ {0}) \mid E, x _ {0} \right] + \lambda_ {\mathrm {L L C}} \sqrt {R _ {2} (\widehat {P}) + R _ {\mathrm {L L C}} (\widehat {P})}, \tag {SOC-LLC}
+$$
+
+where $R_{2}$ is defined by (4); $U$ is optimized by a LLC algorithm, such as iLQR; $R_{\mathrm{LLC}}(\widehat{P})$ is given by,
+
+$$
+R_{\mathrm{LLC}}(\widehat{P}) = \mathbb{E}_{x,u}\left[\mathbb{E}_{\epsilon}\left[\left\| f_{\mathcal{Z}}(z+\epsilon_z, u+\epsilon_u) - f_{\mathcal{Z}}(z,u) - \left(\nabla_z f_{\mathcal{Z}}(z,u)\cdot\epsilon_z + \nabla_u f_{\mathcal{Z}}(z,u)\cdot\epsilon_u\right)\right\|_2^2\right] \,\Big|\, E\right], \tag{5}
+$$
+
+where $\epsilon = (\epsilon_z,\epsilon_u)^\top \sim \mathcal{N}(0,\delta^2 I)$ , and $\delta > 0$ is a tunable parameter that characterizes the "diameter" of the latent state-action space in which the latent dynamics model has low curvature. $\lambda_{\mathrm{LLC}} = 2\sqrt{2} T^{2}c_{\mathrm{max}}\sqrt{\overline{U}}\max \left(c_{\mathrm{lip}}(1 + \sqrt{2\log(2T / \eta)})\sqrt{\overline{X}} /2,1\right)$ , where $1 / \overline{X}$ is the minimum non-zero measure of the sample distribution w.r.t. $\mathcal{X}$ , and $1 - \eta \in [0,1)$ is a probability threshold. Lemma 4 (see Appendix A.5, for proof and discussions on how $\delta$ affects LLC performance) shows that a solution of (SOC-LLC) has similar performance to a solution of (SOC1), and thus, (SOC-LLC) is a reasonable optimization problem to learn $\widehat{P}$ , and also the latent space $\mathcal{Z}$ .
+
+Lemma 4. Let $(U_{LLC}^{*},\widehat{P}_{LLC}^{*})$ be a LLC solution to (SOC-LLC) and $U_{1}^{*}$ be a solution to (SOC1). Suppose the nominal latent state-action trajectory $\{(z_t,\pmb {u}_t)\}_{t = 0}^{T - 1}$ satisfies the condition: $(z_{t},\pmb{u}_{t})\sim \mathcal{N}((z_{2,t}^{*},u_{2,t}^{*}),\delta^{2}I)$ , where $\{(z_{2,t}^{*},u_{2,t}^{*})\}_{t = 0}^{T - 1}$ is the optimal trajectory of (SOC2). Then with probability $1 - \eta$ , we have $L(U_1^*,P,c,x_0)\geq L(U_{LLC}^*,P,c,x_0) - 2\lambda_{LLC}\sqrt{R_2(\widehat{P}_{LLC}^*) + R_{LLC}(\widehat{P}_{LLC}^*)}$ .
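The curvature penalty of Eq. (5) can be sketched numerically as a squared first-order Taylor residual, with a finite-difference Jacobian-vector product standing in for the amortized Jacobian proposed in the paper; the toy dynamics and perturbation scale are illustrative assumptions:

```python
import numpy as np

def taylor_residual(f, z, u, ez, eu, h=1e-4):
    """f(z+ez, u+eu) - f(z, u) - (J_z ez + J_u eu), with the JVP
    approximated by a central finite difference (a stand-in for the
    amortized Jacobian computation)."""
    jvp = (f(z + h * ez, u + h * eu) - f(z - h * ez, u - h * eu)) / (2 * h)
    return f(z + ez, u + eu) - f(z, u) - jvp

def curvature_loss(f, z, u, delta=0.1, n_samples=64, seed=0):
    """Monte Carlo estimate of Eq. (5) at a single (z, u), with
    perturbations eps = (ez, eu) ~ N(0, delta^2 I)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        ez = delta * rng.standard_normal(z.shape)
        eu = delta * rng.standard_normal(u.shape)
        total += np.sum(taylor_residual(f, z, u, ez, eu) ** 2)
    return total / n_samples
```

As a sanity check, the loss vanishes for linear latent dynamics and is strictly positive for curved ones, which is exactly the behavior the regularizer rewards.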
+
+In practice, instead of solving (SOC-LLC) jointly for $U$ and $\widehat{P}$ , we treat (SOC-LLC) as a bi-level optimization problem: we first solve the inner optimization problem for $\widehat{P}$ , i.e.,
+
+$$
+\widehat {P} ^ {*} \in \arg \min _ {\widehat {P}} \lambda_ {\mathrm {p}} R _ {3} ^ {\prime} (\widehat {P}) + \lambda_ {\mathrm {c}} R _ {2} ^ {\prime \prime} (\widehat {P}) + \lambda_ {\text {c u r}} R _ {\text {L L C}} (\widehat {P}), \quad \tag {PCC-LOSS}
+$$
+
+where $R_3'(\widehat{P}) = -\mathbb{E}_{x,u,x'}[\log \widehat{P}(x'|x,u)]$ is the negative log-likelihood, and then solve the outer optimization problem, $\min_U L(U,\widehat{F}^*,\bar{c},z_0)$ , where $\widehat{P}^{*} = \widehat{D}^{*}\circ \widehat{F}^{*}\circ \widehat{E}^{*}$ , to obtain the optimal control sequence $U^{*}$ . Solving (SOC-LLC) this way is an approximation in general, but is justified when the regularization parameter $\lambda_{\mathrm{LLC}}$ is large. Note that we leave the regularization parameters $(\lambda_{\mathrm{p}},\lambda_{\mathrm{c}},\lambda_{\mathrm{cur}})$ as hyper-parameters of our algorithm, and do not use those derived in the lemmas of this section. Since the loss for learning $\widehat{P}^{*}$ in (PCC-LOSS) enforces (i) prediction accuracy, (ii) consistency in latent state prediction, and (iii) low curvature over $f_{\mathcal{Z}}$ , through the regularizers $R_3'$ , $R_2''$ , and $R_{\mathrm{LLC}}$ , respectively, we refer to it as the prediction-consistency-curvature (PCC) loss.
+
+# 4 INSTANTIATING THE PCC MODEL IN PRACTICE
+
+The PCC-Model objective in (PCC-LOSS) introduces the optimization problem $\min_{\widehat{P}}\lambda_{\mathrm{p}}R_3'(\widehat{P}) + \lambda_{\mathrm{c}}R_2''(\widehat{P}) + \lambda_{\mathrm{cur}}R_{\mathrm{LLC}}(\widehat{P})$ . To instantiate this model in practice, we describe $\widehat{P} = D\circ F\circ E$ as a latent variable model that factorizes as $\widehat{P} (x_{t + 1},z_t,\hat{z}_{t + 1}\mid x_t,u_t) = \widehat{P} (z_t\mid x_t)\widehat{P} (\hat{z}_{t + 1}\mid z_t,u_t)\widehat{P} (x_{t + 1}\mid \hat{z}_{t + 1})$ . In this section, we propose a variational approximation to the intractable negative log-likelihood $R_3^{\prime}$ and batch-consistency $R_2^{\prime \prime}$ losses, and an efficient approximation of the curvature loss $R_{\mathrm{LLC}}$ .
+
+# 4.1 VARIATIONAL PCC
+
+The negative log-likelihood$^6$ $R_3^\prime$ admits a variational bound via Jensen's inequality,
+
+$$
+\begin{array}{l} R _ {3} ^ {\prime} (\widehat {P}) = - \log \widehat {P} (x _ {t + 1} \mid x _ {t}, u _ {t}) = - \log \mathbb {E} _ {Q (z _ {t}, \widehat {z} _ {t + 1} | x _ {t}, u _ {t}, x _ {t + 1})} \left[ \frac {\widehat {P} (x _ {t + 1} , z _ {t} , \widehat {z} _ {t + 1} \mid x _ {t} , u _ {t})}{Q (z _ {t} , \widehat {z} _ {t + 1} \mid x _ {t} , u _ {t} , x _ {t + 1})} \right] \\ \leq - \mathbb {E} _ {Q \left(z _ {t}, \hat {z} _ {t + 1} \mid x _ {t}, u _ {t}, x _ {t + 1}\right)} \left[ \log \frac {\widehat {P} \left(x _ {t + 1} , z _ {t} , \hat {z} _ {t + 1} \mid x _ {t} , u _ {t}\right)}{Q \left(z _ {t} , \hat {z} _ {t + 1} \mid x _ {t} , u _ {t} , x _ {t + 1}\right)} \right] = R _ {3, \mathrm {N L E - B o u n d}} ^ {\prime} (\widehat {P}, Q), \tag {6} \\ \end{array}
+$$
+
+which holds for any choice of recognition model $Q$ . For simplicity, we assume the recognition model employs bottom-up inference and thus factorizes as $Q(z_{t},\hat{z}_{t + 1}|x_{t},x_{t + 1},u_{t}) = Q(\hat{z}_{t + 1}|x_{t + 1})Q(z_{t}|\hat{z}_{t + 1},x_{t},u_{t})$ . The main idea behind choosing a backward-facing model is to allow the model to learn to account for noise in the underlying dynamics. We estimate the expectations in (6) via Monte Carlo simulation. To reduce the variance of the estimator, we decompose $R_{3,\mathrm{NLE - Bound}}^{\prime}$ further into
+
+$$
+\begin{array}{l} \left. - \mathbb {E} _ {Q (\hat {z} _ {t + 1} | x _ {t + 1})} \left[ \log \widehat {P} (x _ {t + 1} | \hat {z} _ {t + 1}) \right] + \mathbb {E} _ {Q (\hat {z} _ {t + 1} | x _ {t + 1})} \left[ D _ {\mathrm {K L}} \left(Q (z _ {t} \mid \hat {z} _ {t + 1}, x _ {t}, u _ {t}) \| \widehat {P} (z _ {t} \mid x _ {t})\right) \right] \right. \\ - H\left(Q(\hat{z}_{t + 1}\mid x_{t + 1})\right) - \mathbb{E}_{\substack{Q(\hat{z}_{t + 1}|x_{t + 1})\\ Q(z_{t}|\hat{z}_{t + 1},x_{t},u_{t})}}\left[\log \widehat{P} (\hat{z}_{t + 1}\mid z_{t},u_{t})\right], \\ \end{array}
+$$
+
and note that the entropy $H(\cdot)$ and Kullback-Leibler divergence $D_{\mathrm{KL}}(\cdot \| \cdot)$ terms are analytically tractable when $Q$ is restricted to a suitably chosen variational family (in our experiments, $Q(\hat{z}_{t + 1} \mid x_{t + 1})$ and $Q(z_{t} \mid \hat{z}_{t + 1}, x_{t}, u_{t})$ are factorized Gaussians). The derivation is provided in Appendix C.1.
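
For diagonal-covariance Gaussians, these closed-form terms are standard. A minimal NumPy sketch (purely illustrative, not the authors' implementation) of the entropy and KL terms used in the decomposition above:

```python
import numpy as np

def gaussian_entropy(log_var):
    """Entropy of a factorized Gaussian N(mu, diag(exp(log_var)))."""
    d = log_var.shape[-1]
    return 0.5 * (d * (1.0 + np.log(2.0 * np.pi)) + log_var.sum(axis=-1))

def gaussian_kl(mu_q, log_var_q, mu_p, log_var_p):
    """KL( N(mu_q, diag(exp(log_var_q))) || N(mu_p, diag(exp(log_var_p))) )."""
    var_q, var_p = np.exp(log_var_q), np.exp(log_var_p)
    return 0.5 * np.sum(
        log_var_p - log_var_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0,
        axis=-1,
    )
```

For example, $\mathrm{KL}(\mathcal{N}(1,1)\,\|\,\mathcal{N}(0,1)) = 0.5$, and the KL between identical Gaussians vanishes.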
+
+Interestingly, the consistency loss $R_2''$ admits a similar treatment. We note that the consistency loss seeks to match the distribution of $\hat{z}_{t + 1} \mid x_t, u_t$ with $z_{t + 1} \mid x_{t + 1}$ , which we represent below as
+
+$$
+R_{2}^{\prime \prime}(\widehat{P}) = D_{\mathrm{KL}}\Bigl(\widehat{P} (z_{t + 1}\mid x_{t + 1})\| \widehat{P} (\hat{z}_{t + 1}\mid x_{t},u_{t})\Bigr)\\ = -H\bigl(\widehat{P} (z_{t + 1}\mid x_{t + 1})\bigr) - \mathbb{E}_{\substack{\widehat{P} (z_{t + 1}|x_{t + 1})\\ \hat{z}_{t + 1} = z_{t + 1}}}\left[\log \widehat{P} (\hat{z}_{t + 1}\mid x_{t},u_{t})\right].
+$$
+
+Here, $\widehat{P}(\hat{z}_{t+1} \mid x_t, u_t)$ is intractable due to the marginalization of $z_t$ . We employ the same procedure as in (6) to construct a tractable variational bound
+
+$$
+R_{2}^{\prime \prime}(\widehat{P})\leq -H\bigl(\widehat{P} (z_{t + 1}\mid x_{t + 1})\bigr) - \mathbb{E}_{\substack{\widehat{P} (z_{t + 1}|x_{t + 1})\\ \widehat{z}_{t + 1} = z_{t + 1}}}\mathbb{E}_{Q(z_{t}|\widehat{z}_{t + 1},x_{t},u_{t})}\left[\log \frac{\widehat{P}(z_{t},\widehat{z}_{t + 1}\mid x_{t},u_{t})}{Q(z_{t}\mid\widehat{z}_{t + 1},x_{t},u_{t})}\right].
+$$
+
+We now make the further simplifying assumption that $Q(\hat{z}_{t + 1} \mid x_{t + 1}) = \widehat{P}(\hat{z}_{t + 1} \mid x_{t + 1})$ . This allows us to rewrite the expression as
+
+$$
\begin{array}{l} R_{2}^{\prime\prime}(\widehat{P}) \leq -H\left(Q(\hat{z}_{t+1}\mid x_{t+1})\right) - \mathbb{E}_{\substack{Q(\hat{z}_{t+1}\mid x_{t+1})\\ Q(z_{t}\mid\hat{z}_{t+1},x_{t},u_{t})}}\left[\log\widehat{P}(\hat{z}_{t+1}\mid z_{t},u_{t})\right] \\ \quad + \mathbb{E}_{Q(\hat{z}_{t+1}\mid x_{t+1})}\left[D_{\mathrm{KL}}\left(Q(z_{t}\mid\hat{z}_{t+1},x_{t},u_{t})\,\|\,\widehat{P}(z_{t}\mid x_{t})\right)\right] = R_{2,\mathrm{Bound}}^{\prime\prime}(\widehat{P},Q), \tag{7} \\ \end{array}
+$$
+
+which is a subset of the terms in (6). See Appendix C.2 for a detailed derivation.
+
+# 4.2 CURVATURE REGULARIZATION AND AMORTIZED GRADIENT
+
+In practice we use a variant of the curvature loss where Taylor expansions and gradients are evaluated at $\bar{z} = z + \epsilon_z$ and $\bar{u} = u + \epsilon_u$ ,
+
+$$
R_{\mathrm{LLC}}(\widehat{P}) = \mathbb{E}_{\epsilon\sim\mathcal{N}(0,\delta I)}\left[\left\|f_{\mathcal{Z}}(\bar{z},\bar{u}) - \left(\nabla_{z}f_{\mathcal{Z}}(\bar{z},\bar{u})\,\epsilon_{z} + \nabla_{u}f_{\mathcal{Z}}(\bar{z},\bar{u})\,\epsilon_{u}\right) - f_{\mathcal{Z}}(z,u)\right\|_{2}^{2}\right]. \tag{8}
+$$
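
To make (8) concrete, the sketch below estimates the curvature loss by Monte Carlo for a toy dynamics function, with finite-difference Jacobians standing in for automatic differentiation (the dynamics passed in is a hypothetical example, not a trained model):

```python
import numpy as np

def jacobian_fd(f, x, eps=1e-5):
    """Finite-difference Jacobian of f at x (stand-in for autodiff)."""
    y0 = f(x)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - y0) / eps
    return J

def curvature_loss(f, z, u, delta=0.1, n_samples=256, rng=None):
    """Monte Carlo estimate of R_LLC in (8): squared error of the
    first-order Taylor expansion of f taken at (z_bar, u_bar) = (z, u) + eps,
    with eps ~ N(0, delta * I)."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for _ in range(n_samples):
        ez = np.sqrt(delta) * rng.standard_normal(z.size)
        eu = np.sqrt(delta) * rng.standard_normal(u.size)
        zb, ub = z + ez, u + eu
        Jz = jacobian_fd(lambda zz: f(zz, ub), zb)
        Ju = jacobian_fd(lambda uu: f(zb, uu), ub)
        resid = f(zb, ub) - (Jz @ ez + Ju @ eu) - f(z, u)
        total += np.sum(resid ** 2)
    return total / n_samples
```

For a linear dynamics the loss vanishes (up to finite-difference error), while any curvature in the dynamics makes it strictly positive.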
+
When $n_z$ is large, evaluating and differentiating through the Jacobians can be slow. To circumvent this issue, the Jacobian evaluation can be amortized by treating the Jacobians as the coefficients of the best linear approximation at the evaluation point. This leads to a new amortized curvature loss
+
+$$
R_{\mathrm{LLC\text{-}Amor}}(\widehat{P},A,B) = \mathbb{E}_{\epsilon\sim\mathcal{N}(0,\delta I)}\left[\left\|f_{\mathcal{Z}}(\bar{z},\bar{u}) - \left(A(\bar{z},\bar{u})\,\epsilon_{z} + B(\bar{z},\bar{u})\,\epsilon_{u}\right) - f_{\mathcal{Z}}(z,u)\right\|_{2}^{2}\right], \tag{9}
+$$
+
+where $A$ and $B$ are function approximators to be optimized. Intuitively, the amortized curvature loss seeks—for any given $(z,u)$ —to find the best choice of linear approximation induced by $A(z,u)$ and $B(z,u)$ such that the behavior of $F_{\mu}$ in the neighborhood of $(z,u)$ is approximately linear.
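
As a toy illustration of the amortization idea, the sketch below fits $A$ and $B$ as constant matrices around a single $(z, u)$ by least squares (a deliberate simplification: in the paper $A$ and $B$ are function approximators trained jointly). For a smooth dynamics and a small perturbation scale, the fitted coefficients recover the true Jacobians:

```python
import numpy as np

def fit_amortized_linearization(f, z, u, delta=1e-3, n_samples=200, rng=None):
    """Least-squares fit of constant matrices A, B minimizing a simplified
    version of the amortized curvature loss (9) around one point (z, u):
    f(z + ez, u + eu) - f(z, u)  ~  A @ ez + B @ eu."""
    rng = rng or np.random.default_rng(0)
    nz, nu = z.size, u.size
    f0 = f(z, u)
    E, Y = [], []
    for _ in range(n_samples):
        ez = np.sqrt(delta) * rng.standard_normal(nz)
        eu = np.sqrt(delta) * rng.standard_normal(nu)
        E.append(np.concatenate([ez, eu]))
        Y.append(f(z + ez, u + eu) - f0)
    # Solve E @ coef = Y in the least-squares sense.
    coef, *_ = np.linalg.lstsq(np.array(E), np.array(Y), rcond=None)
    AB = coef.T                    # shape (n_out, nz + nu)
    return AB[:, :nz], AB[:, nz:]  # A ~ df/dz, B ~ df/du
```

This mirrors the intuition in the text: the best linear coefficients at $(z, u)$ coincide with the Jacobians when the dynamics is locally close to linear.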
+
# 5 RELATION TO PREVIOUS EMBED-TO-CONTROL APPROACHES
+
In this section, we highlight the key differences between PCC and the closest previous works, namely E2C and RCE. A key distinguishing factor is PCC's use of a nonlinear latent dynamics model paired with an explicit curvature loss. In comparison, E2C and RCE both employ "locally-linear dynamics" of the form $z' = A(\bar{z}, \bar{u})z + B(\bar{z}, \bar{u})u + c(\bar{z}, \bar{u})$, where $\bar{z}$ and $\bar{u}$ are auxiliary random variables meant to be perturbations of $z$ and $u$. When contrasted with (9), it is clear that neither $A$ nor $B$ in the E2C/RCE formulation can be treated as the Jacobians of the dynamics, and hence the curvature of the dynamics is not being controlled explicitly. Furthermore, since the locally-linear dynamics are wrapped inside the maximum-likelihood estimation, both E2C and RCE conflate the two key elements, prediction and curvature. This makes controlling the stability of training much more difficult. Not only does PCC explicitly separate these two components, but we are also the first to explicitly demonstrate, both theoretically and empirically, that the curvature loss is important for iLQR.
+
+
+
+
+Figure 2: Top: Planar latent representations; Bottom: Inverted Pendulum latent representations (randomly selected): left two: RCE, middle two: E2C, right two: PCC.
+
+Furthermore, RCE does not incorporate PCC's consistency loss. Note that PCC, RCE, and E2C are all Markovian encoder-transition-decoder frameworks. Under such a framework, the sole reliance on minimizing the prediction loss will result in a discrepancy between how the model is trained (maximizing the likelihood induced by encoding-transitioning-decoding) versus how it is used at test-time for control (continual transitioning in the latent space without ever decoding). By explicitly minimizing the consistency loss, PCC reduces the discrepancy between how the model is trained versus how it is used at test-time for planning. Interestingly, E2C does include a regularization term that is akin to PCC's consistency loss. However, as noted by the authors of RCE, E2C's maximization of pair-marginal log-likelihoods of $(x_{t}, x_{t+1})$ as opposed to the conditional likelihood of $x_{t+1}$ given $x_{t}$ means that E2C does not properly minimize the prediction loss prescribed by the PCC framework.
+
+# 6 EXPERIMENTS
+
In this section, we compare the performance of PCC with two baseline model-based control algorithms, RCE (Banijamali et al., 2018) and E2C (Watter et al., 2015), and run a thorough ablation study on the various components of PCC. The experiments are based on the following continuous control benchmark domains (see Appendix D for more details): (i) Planar System, (ii) Inverted Pendulum, (iii) Cartpole, (iv) 3-link manipulator, and (v) the TORCS simulator (Wymann et al., 2000).
+
To generate our training and test sets, each consisting of triples $(x_{t}, u_{t}, x_{t + 1})$, we: (1) sample an underlying state $s_t$ and generate its corresponding observation $x_{t}$, (2) sample an action $u_{t}$, and (3) obtain the next state $s_{t + 1}$ according to the state-transition dynamics, add zero-mean Gaussian noise with variance $\sigma^2 I_{n_s}$ to it, and generate the corresponding observation $x_{t + 1}$. To ensure that the observation-action data is uniformly distributed (see Section 3), we sample the state-action pair $(s_t, u_t)$ uniformly from the state-action space. To understand the robustness of each model, we consider both deterministic $(\sigma = 0)$ and stochastic scenarios. In the stochastic case, we add noise to the system with different values of $\sigma$ and evaluate the models' performance under various degrees of noise.
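
The steps above can be sketched as follows; the dynamics `step` and renderer `render` are placeholders standing in for the simulator (both hypothetical):

```python
import numpy as np

def collect_triples(step, render, s_low, s_high, u_low, u_high,
                    n, sigma=0.0, rng=None):
    """Sample (x_t, u_t, x_{t+1}) triples: draw (s_t, u_t) uniformly,
    step the true dynamics, add N(0, sigma^2 I) noise, and render."""
    rng = rng or np.random.default_rng(0)
    data = []
    for _ in range(n):
        s = rng.uniform(s_low, s_high)          # underlying state, uniform
        u = rng.uniform(u_low, u_high)          # action, uniform
        s_next = step(s, u) + sigma * rng.standard_normal(s.shape)
        data.append((render(s), u, render(s_next)))
    return data
```

With $\sigma = 0$ this produces the deterministic datasets; increasing $\sigma$ yields the stochastic variants.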
+
Each task has underlying start and goal states that are unobservable to the algorithms; instead, the algorithms have access to the corresponding start and goal observations. We apply control using the iLQR algorithm (see Appendix B), with the same cost function that was used by RCE and E2C, namely, $\bar{c}(z_t, u_t) = (z_t - z_{\mathrm{goal}})^\top Q(z_t - z_{\mathrm{goal}}) + u_t^\top R u_t$ and $\bar{c}(z_T) = (z_T - z_{\mathrm{goal}})^\top Q(z_T - z_{\mathrm{goal}})$, where $z_{\mathrm{goal}}$ is obtained by encoding the goal observation, and $Q = \kappa \cdot I_{n_z}$, $R = I_{n_u}$. Details of our implementations are specified in Appendix D.3. We report performance in the underlying system, specifically the percentage of time spent in the goal region.
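
As a minimal sketch of the quadratic latent cost above (the value of $\kappa$ here is a hypothetical default; the actual settings are in Appendix D.3):

```python
import numpy as np

def latent_cost(z, u, z_goal, kappa=1.0):
    """Running cost (z - z_goal)^T Q (z - z_goal) + u^T R u
    with Q = kappa * I and R = I, as used by RCE, E2C, and PCC."""
    dz = z - z_goal
    return kappa * dz @ dz + u @ u

def terminal_cost(z_T, z_goal, kappa=1.0):
    """Terminal cost (z_T - z_goal)^T Q (z_T - z_goal)."""
    dz = z_T - z_goal
    return kappa * dz @ dz
```

Since both terms are quadratic with positive-definite weights, the cost is zero exactly at $(z_{\mathrm{goal}}, u = 0)$ and positive elsewhere, which is what iLQR requires.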
+
A Reproducible Experimental Pipeline In order to measure performance reproducibility, we perform the following two-step pipeline. For each control task and algorithm, we (1) train 10 models independently, and (2) solve 10 control tasks per model (we do not cherry-pick, but instead perform a total of $10 \times 10 = 100$ control tasks). We report statistics averaged over all the tasks (in addition, we report the best-performing model averaged over its 10 tasks). By adopting a principled and statistically reliable evaluation pipeline, we also address a pitfall of the compared baselines, where the best model had to be cherry-picked and training variance was not reported.

The latent cost corresponds to a quadratic expansion of the composed cost $D \circ c$ about $(z_{\mathrm{goal}}, u = 0)$:

$$
\bar{c}(z,u) \approx \left[\begin{array}{c} z - z_{\mathrm{goal}} \\ u \end{array}\right]^{\top} \left[\begin{array}{c} \nabla_{z} \\ \nabla_{u} \end{array}\right] D\circ c\Big|_{z=z_{\mathrm{goal}},u=0} + \frac{1}{2}\left[\begin{array}{c} z - z_{\mathrm{goal}} \\ u \end{array}\right]^{\top} \left[\begin{array}{cc} \nabla_{zz}^{2} & \nabla_{zu}^{2} \\ \nabla_{uz}^{2} & \nabla_{uu}^{2} \end{array}\right] D\circ c\Big|_{z=z_{\mathrm{goal}},u=0} \left[\begin{array}{c} z - z_{\mathrm{goal}} \\ u \end{array}\right].
$$

Table 1: Percentage of steps in goal state, averaged over all models (left) and for the best model (right).

| Domain | RCE (all) | E2C (all) | PCC (all) | RCE (top 1) | E2C (top 1) | PCC (top 1) |
| --- | --- | --- | --- | --- | --- | --- |
| Planar | 2.1 ± 0.8 | 5.5 ± 1.7 | 35.7 ± 3.4 | 9.2 ± 1.4 | 36.5 ± 3.6 | 72.1 ± 0.4 |
| Pendulum | 24.7 ± 3.1 | 46.8 ± 4.1 | 58.7 ± 3.7 | 68.8 ± 2.2 | 89.7 ± 0.5 | 90.3 ± 0.4 |
| Cartpole | 59.5 ± 4.1 | 7.3 ± 1.5 | 54.3 ± 3.9 | 99.45 ± 0.1 | 40.2 ± 3.2 | 93.9 ± 1.7 |
| 3-link | 1.1 ± 0.4 | 4.7 ± 1.1 | 18.8 ± 2.1 | 10.6 ± 0.8 | 20.9 ± 0.8 | 47.2 ± 1.7 |
| TORCS | 27.4 ± 1.8 | 28.2 ± 1.9 | 60.7 ± 1.1 | 39.9 ± 2.2 | 54.1 ± 2.3 | 68.6 ± 0.4 |

Table 2: Ablation analysis. Percentage of steps spent in the goal state. From left to right: PCC including all loss terms, excluding the consistency loss, excluding the curvature loss, and amortizing the curvature loss.

| Domain | PCC | PCC no Con | PCC no Cur | PCC Amor |
| --- | --- | --- | --- | --- |
| Planar | 35.7 ± 3.4 | 0.0 ± 0.0 | 29.6 ± 3.5 | 41.7 ± 3.7 |
| Pendulum | 58.7 ± 3.7 | 52.3 ± 3.5 | 50.3 ± 3.3 | 54.2 ± 3.1 |
| Cartpole | 54.3 ± 3.9 | 5.1 ± 0.4 | 17.4 ± 1.6 | 14.3 ± 1.2 |
| 3-link | 18.8 ± 2.1 | 9.1 ± 1.5 | 13.1 ± 1.9 | 11.5 ± 1.8 |
+
Results Table 1 shows that PCC outperforms the baseline algorithms in the noiseless-dynamics case, comparing means and standard deviations of the means on the different control tasks (for the case of noise added to the dynamics, which exhibits similar behavior, refer to Appendix E.1). It is important to note that for each algorithm, the performance metric averaged over all models is drastically different from that of the best model, which justifies our rationale for using the reproducible evaluation pipeline and avoiding cherry-picking when reporting. Figure 2 depicts two instances (randomly chosen from the 10 trained models) of the learned latent-space representations on the noiseless dynamics of the Planar and Inverted Pendulum tasks for the PCC, RCE, and E2C models (additional representations can be found in Appendix E.2). Representations were generated by encoding observations corresponding to a uniform grid over the state space. Generally, PCC produces a more interpretable representation of both the Planar and Inverted Pendulum systems than the other baselines, in both the noiseless and noisy dynamics cases. Finally, in terms of computation, PCC demonstrates faster training, with a $64\%$ improvement over RCE and a $2\%$ improvement over E2C.
+
Ablation Analysis On top of comparing the performance of PCC to the baselines, and in order to understand the importance of each component in (PCC-LOSS), we also perform an ablation analysis on the consistency loss (with/without consistency loss) and the curvature loss (with/without curvature loss, and with/without amortization of the Jacobian terms). Table 2 shows the ablation analysis of PCC on the aforementioned tasks. From the numerical results, one can clearly see that when the consistency loss is omitted, the control performance degrades. This corroborates the theoretical results in Section 3.2, which relate the consistency loss to the estimation error between the next-latent-dynamics prediction and the next-latent encoding. This further implies that as the consistency term vanishes, the gap between the control objective function and the model training loss widens, due to the accumulation of state-estimation error. The control performance also decreases when one removes the curvature loss. This is mainly attributed to the error between the iLQR control algorithm and (SOC2). Although the latent-state dynamics model is parameterized with neural networks, which are smooth, without enforcing the curvature loss term the norm of the Hessian (curvature) might still be high. This is also consistent with the analysis in Section 3.3 relating sub-optimality to the curvature of the latent dynamics. Finally, we observe that the performance of models trained without the amortized curvature loss is slightly better than that of their amortized counterparts; however, since the amortized curvature loss does not require computing gradients of the latent dynamics (which means that in stochastic optimization one does not need to estimate its Hessian), we observe relative speed-ups in model training with the amortized version (speed-ups of $6\%$, $9\%$, and $15\%$ for the Planar System, Inverted Pendulum, and Cartpole, respectively).
+
+# 7 CONCLUSION
+
In this paper, we argue from first principles that learning a latent representation for control should be guided by good prediction in the observation space and by consistency between the latent transitions and the embedded observations. Furthermore, if variants of iterative LQR are used as the controller, low-curvature dynamics are desirable. All three elements of our PCC model are critical to the stability of model training and to the performance of the in-latent-space controller. We hypothesize that each particular choice of controller will impose different requirements on the learned dynamics. A future direction is to identify and investigate the additional biases needed to learn an effective embedding and latent dynamics for other types of model-based control and planning methods.
+
+# REFERENCES
+
+E. Banijamali, R. Shu, M. Ghavamzadeh, H. Bui, and A. Ghodsi. Robust locally-linear controllable embedding. In Proceedings of the Twenty First International Conference on Artificial Intelligence and Statistics, pp. 1751-1759, 2018.
+Dimitri Bertsekas. Dynamic programming and optimal control, volume 1. Athena scientific, 1995.
+Francesco Borrelli, Alberto Bemporad, and Manfred Morari. Predictive control for linear and hybrid systems. Cambridge University Press, 2017.
+Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
+Morten Breivik and Thor I Fossen. Principles of guidance-based path following in 2d and 3d. In Proceedings of the 44th IEEE Conference on Decision and Control, pp. 627-634. IEEE, 2005.
+Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, pp. 4754-4765, 2018.
+Roy De Maesschalck, Delphine Jouan-Rimbaud, and Désiré L Massart. The mahalanobis distance. Chemometrics and intelligent laboratory systems, 50(1):1-18, 2000.
+Marc Deisenroth and Carl E Rasmussen. Pilco: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on machine learning (ICML-11), pp. 465-472, 2011.
+Frederik Ebert, Chelsea Finn, Sudeep Dasari, Annie Xie, Alex Lee, and Sergey Levine. Visual foresight: Model-based deep reinforcement learning for vision-based robotic control. arXiv preprint arXiv:1812.00568, 2018.
+Bernard Espiau, François Chaumette, and Patrick Rives. A new approach to visual servoing in robotics. *IEEE Transactions on Robotics and Automation*, 8(3):313-326, 1992.
+Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Deep spatial autoencoders for visuomotor learning. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 512-519. IEEE, 2016.
+Katsuhisa Furuta, Masaki Yamakita, and Seiichi Kobayashi. Swing up control of inverted pendulum. In Proceedings IECON'91: 1991 International Conference on Industrial Electronics, Control and Instrumentation, pp. 2193-2198. IEEE, 1991.
+Yarin Gal, Rowan McAllister, and Carl Edward Rasmussen. Improving pilco with bayesian neural network dynamics models. In Data-Efficient Machine Learning workshop, ICML, volume 4, 2016.
+Shlomo Geva and Joaquin Sitte. A cartpole experiment benchmark for trainable controllers. IEEE Control Systems Magazine, 13(5):40-51, 1993.
+Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
+David Ha and Jürgen Schmidhuber. World models. arXiv preprint arXiv:1803.10122, 2018.
+Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. arXiv preprint arXiv:1811.04551, 2018.
+Lukasz Kaiser, Mohammad Babaeizadeh, Piotr Milos, Blazej Osinski, Roy H Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, et al. Model-based reinforcement learning for atari. arXiv preprint arXiv:1903.00374, 2019.
+
+Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
+Thanard Kurutach, Aviv Tamar, Ge Yang, Stuart J Russell, and Pieter Abbeel. Learning plannable representations with causal infogan. In Advances in Neural Information Processing Systems, pp. 8733-8744, 2018.
+Xuzhi Lai, Ancai Zhang, Min Wu, and Jinhua She. Singularity-avoiding swing-up control for underactuated three-link gymnast robot using virtual coupling between control torques. International Journal of Robust and Nonlinear Control, 25(2):207-221, 2015.
+Weiwei Li and Emanuel Todorov. Iterative linear quadratic regulator design for nonlinear biological movement systems. In ICINCO (1), pp. 222-229, 2004.
+Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
+Kevin P Murphy. Machine learning: a probabilistic perspective. MIT press, 2012.
+Erik Ordentlich and Marcelo J Weinberger. A distribution dependent refinement of pinsker's inequality. IEEE Transactions on Information Theory, 51(5):1836-1840, 2005.
+Marek Petrik, Mohammad Ghavamzadeh, and Yinlam Chow. Safe policy improvement by minimizing robust baseline regret. In Advances in Neural Information Processing Systems, pp. 2298-2306, 2016.
+James Blake Rawlings and David Q Mayne. Model predictive control: Theory and design. Nob Hill Pub. Madison, Wisconsin, 2009.
+Alexander Shapiro, Darinka Dentcheva, and Andrzej Ruszczyński. Lectures on stochastic programming: modeling and theory. SIAM, 2009.
+Mark W Spong. The swing up control problem for the acrobot. IEEE control systems magazine, 15 (1):49-55, 1995.
+Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in neural information processing systems, pp. 2746-2754, 2015.
+Bernhard Wymann, Eric Espié, Christophe Guionneau, Christos Dimitrakakis, Rémi Coulom, and Andrew Sumner. Torcs, the open racing car simulator. Software available at http://torcs.sourceforge.net, 4(6), 2000.
+Marvin Zhang, Sharad Vikram, Laura Smith, Pieter Abbeel, Matthew J Johnson, and Sergey Levine. Solar: Deep structured latent representations for model-based reinforcement learning. In Proceedings of the 36th International Conference on Machine Learning, 2019.
+
+# A TECHNICAL PROOFS OF SECTION 3
+
+# A.1 PROOF OF LEMMA 1
+
Following derivations analogous to those of Lemma 11 in Petrik et al. (2016), for the case of finite-horizon MDPs one has the following chain of inequalities for any given control sequence $\{u_t\}_{t=0}^{T-1}$ and initial observation $x_0$:
+
+$$
+\begin{array}{l} \left| L (U, \widehat {P}, x _ {0}) - L (U, P, x _ {0}) \right| \\ = \left| \mathbb {E} \left[ c _ {T} \left(x _ {T}\right) + \sum_ {t = 0} ^ {T - 1} c _ {t} \left(x _ {t}, u _ {t}\right) \mid \widehat {P}, x _ {0} \right] - \mathbb {E} \left[ c _ {T} \left(x _ {T}\right) + \sum_ {t = 0} ^ {T - 1} c _ {t} \left(x _ {t}, u _ {t}\right) \mid P, x _ {0} \right] \right| \\ \leq T ^ {2} \cdot c _ {\max } \mathbb {E} \left[ \frac {1}{T} \sum_ {t = 0} ^ {T - 1} D _ {\mathrm {T V}} (P (\cdot | x _ {t}, u _ {t}) | | \widehat {P} (\cdot | x _ {t}, u _ {t})) \mid P, x _ {0} \right] \\ \leq \sqrt {2} T ^ {2} \cdot c _ {\max} \mathbb {E} \left[ \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \sqrt {\mathrm {K L} (P (\cdot | x _ {t} , u _ {t}) | | \widehat {P} (\cdot | x _ {t} , u _ {t}))} \mid P, x _ {0} \right] \\ \leq \sqrt {2} T ^ {2} \cdot c _ {\max} \sqrt {\mathbb {E} \left[ \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \mathrm {K L} (P (\cdot | x _ {t} , u _ {t}) | | \widehat {P} (\cdot | x _ {t} , u _ {t})) \mid P , x _ {0} \right]}, \\ \end{array}
+$$
+
where $D_{\mathrm{TV}}$ is the total-variation distance between two distributions. The first inequality is based on the result of the aforementioned lemma, the second inequality is based on Pinsker's inequality (Ordentlich & Weinberger, 2005), and the third inequality is based on Jensen's inequality (Boyd & Vandenberghe, 2004) applied to the concave function $\sqrt{\cdot}$.
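
The Pinsker step above uses the inequality in the form $\|P - Q\|_{1} \leq \sqrt{2\,\mathrm{KL}(P\|Q)}$, i.e., with $D_{\mathrm{TV}}$ taken as the $L_1$ distance between densities, which accounts for the $\sqrt{2}$ factor. A quick numerical check on random discrete distributions (an illustration, not part of the proof):

```python
import numpy as np

def tv_l1(p, q):
    """Total variation written as the L1 distance between pmfs."""
    return np.abs(p - q).sum()

def kl(p, q):
    """KL divergence between discrete pmfs (assumes supp(p) within supp(q))."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Pinsker's inequality, ||p - q||_1 <= sqrt(2 KL(p || q)), on random pmfs.
rng = np.random.default_rng(0)
for _ in range(100):
    p, q = rng.dirichlet(np.ones(5)), rng.dirichlet(np.ones(5))
    assert tv_l1(p, q) <= np.sqrt(2.0 * kl(p, q)) + 1e-12
```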
+
Now consider the expected cumulative KL cost $\mathbb{E}\left[\frac{1}{T}\sum_{t = 0}^{T - 1}\mathrm{KL}(P(\cdot |x_t,u_t)||\widehat{P} (\cdot |x_t,u_t))\mid P,x_0\right]$ with respect to some arbitrary control action sequence $\{u_t\}_{t = 0}^{T - 1}$. Notice that this arbitrary action sequence can always be expressed in the form of a deterministic policy $u_{t} = \pi^{\prime}(x_{t},t)$ with some nonstationary state-action mapping $\pi^\prime$. Therefore, this KL cost can be written as:
+
+$$
+\begin{array}{l} \mathbb {E} \left[ \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \mathrm {K L} (P (\cdot | x _ {t}, u _ {t}) | | \widehat {P} (\cdot | x _ {t}, u _ {t})) \mid P, \pi , x _ {0} \right] \\ = \mathbb {E} \left[ \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \int_ {u _ {t} \in \mathcal {U}} \mathrm {K L} (P (\cdot | x _ {t}, u _ {t}) | | \widehat {P} (\cdot | x _ {t}, u _ {t})) d \pi^ {\prime} (u _ {t} | x _ {t}, t) \mid P, x _ {0} \right] \tag {10} \\ = \mathbb {E} \left[ \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \int_ {u _ {t} \in \mathcal {U}} \mathrm {K L} (P (\cdot | x _ {t}, u _ {t}) | | \widehat {P} (\cdot | x _ {t}, u _ {t})) \cdot \frac {d \pi^ {\prime} (u _ {t} | x _ {t} , t)}{d U (u _ {t})} \cdot d U (u _ {t}) \mid P, x _ {0} \right] \\ \leq \overline {{U}} \cdot \mathbb {E} _ {x, u} \left[ \mathrm {K L} (P (\cdot | x, u) | | \widehat {P} (\cdot | x, u)) \right], \\ \end{array}
+$$
+
where the expectation is taken over the state-action occupation measure $\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{P}(x_t = x, u_t = u \mid x_0, U)$ of the finite-horizon problem induced by the data-sampling policy $U$. The second equality is due to a change of measure in the policy, and the last inequality is due to the facts that (i) $\pi'$ is a deterministic policy, (ii) $U$ is a sampling distribution with Lebesgue density $1/\overline{U}$ over all control actions, and (iii) the importance-sampling factor is bounded: $\left|\frac{d\pi'(u_t|x_t,t)}{dU(u_t)}\right| \leq \overline{U}$.
+
+To conclude the first part of the proof, combining all the above arguments we have the following inequality for any model $\hat{P}$ and control sequence $U$ :
+
+$$
+\left| L \left(U, \widehat {P}, x _ {0}\right) - L \left(U, P, x _ {0}\right) \right| \leq \sqrt {2} T ^ {2} \cdot c _ {\max } \bar {U} \cdot \sqrt {\mathbb {E} _ {x , u} \left[ \mathrm {K L} \left(P (\cdot | x , u) | | \widehat {P} (\cdot | x , u)\right) \right]}. \tag {11}
+$$
+
+For the second part of the proof, consider the solution of (SOC3), namely $(U_3^*,\widehat{P}_3^*)$ . Using the optimality condition of this problem one obtains the following inequality:
+
+$$
+\begin{array}{l} L \left(U _ {3} ^ {*}, \widehat {P} _ {3} ^ {*}, x _ {0}\right) + \sqrt {2} T ^ {2} \cdot c _ {\max } \bar {U} \cdot \sqrt {\mathbb {E} _ {x , u} \left[ \mathrm {K L} \left(P (\cdot | x , u) | | \widehat {P} _ {3} ^ {*} (\cdot | x , u)\right) \right]} \tag {12} \\ \leq L (U _ {1} ^ {*}, \widehat {P} _ {3} ^ {*}, x _ {0}) + \sqrt {2} T ^ {2} \cdot c _ {\max} \overline {{U}} \cdot \sqrt {\mathbb {E} _ {x , u} \left[ \mathrm {K L} (P (\cdot | x , u) | | \widehat {P} _ {3} ^ {*} (\cdot | x , u)) \right]}. \\ \end{array}
+$$
+
+Using the results in (11) and (12), one can then show the following chain of inequalities:
+
+$$
\begin{array}{l} L(U_{1}^{*},P,c,x_{0}) \geq L(U_{1}^{*},\widehat{P}_{3}^{*},c,x_{0}) - \sqrt{2}T^{2}\cdot c_{\max}\overline{U}\cdot\sqrt{\mathbb{E}_{x,u}\left[\mathrm{KL}(P(\cdot|x,u)\,\|\,\widehat{P}_{3}^{*}(\cdot|x,u))\right]} \\ = L(U_{1}^{*},\widehat{P}_{3}^{*},c,x_{0}) + \sqrt{2}T^{2}\cdot c_{\max}\overline{U}\cdot\sqrt{\mathbb{E}_{x,u}\left[\mathrm{KL}(P(\cdot|x,u)\,\|\,\widehat{P}_{3}^{*}(\cdot|x,u))\right]} - 2\sqrt{2}T^{2}\cdot c_{\max}\overline{U}\cdot\sqrt{\mathbb{E}_{x,u}\left[\mathrm{KL}(P(\cdot|x,u)\,\|\,\widehat{P}_{3}^{*}(\cdot|x,u))\right]} \\ \geq L(U_{3}^{*},\widehat{P}_{3}^{*},c,x_{0}) + \sqrt{2}T^{2}\cdot c_{\max}\overline{U}\cdot\sqrt{\mathbb{E}_{x,u}\left[\mathrm{KL}(P(\cdot|x,u)\,\|\,\widehat{P}_{3}^{*}(\cdot|x,u))\right]} - 2\sqrt{2}T^{2}\cdot c_{\max}\overline{U}\cdot\sqrt{\mathbb{E}_{x,u}\left[\mathrm{KL}(P(\cdot|x,u)\,\|\,\widehat{P}_{3}^{*}(\cdot|x,u))\right]} \\ \geq L(U_{3}^{*},P,c,x_{0}) - 2\sqrt{2}T^{2}\cdot c_{\max}\overline{U}\cdot\sqrt{\mathbb{E}_{x,u}\left[\mathrm{KL}(P(\cdot|x,u)\,\|\,\widehat{P}_{3}^{*}(\cdot|x,u))\right]}, \tag{13} \\ \end{array}
+$$
+
+where $U_{1}^{*}$ is the optimizer of (SOC1) and $(U_3^*,\widehat{P}_3^*)$ is the optimizer of (SOC3).
+
Therefore, by letting $\lambda_3 = \sqrt{2} T^2 \cdot c_{\max}\overline{U}$ and $R_{3}(\widehat{P}) = \mathbb{E}_{x,u}\left[\mathrm{KL}(P(\cdot |x,u)||\widehat{P} (\cdot |x,u))\right]$, and combining all of the above arguments, the proof of the lemma is complete.
+
+# A.2 PROOF OF LEMMA 2
+
+For the first part of the proof, at any time-step $t \geq 1$ , for any arbitrary control action sequence $\{u_t\}_{t=0}^{T-1}$ , and any model $\widehat{P}$ , consider the following decomposition of the expected cost:
+
+$$
+\begin{array}{l} \mathbb {E} \left[ c \left(x _ {t}, u _ {t}\right) \mid \widehat {P}, x _ {0} \right] = \int_ {x _ {0: t - 1} \in \mathcal {X} ^ {t}} \prod_ {k = 1} ^ {t - 1} d \widehat {P} \left(x _ {k} \mid x _ {k - 1}, u _ {k - 1}\right). \\ \int_ {z _ {t} \in \mathcal {Z}} \underbrace {\int_ {z _ {t - 1} ^ {\prime} \in \mathcal {Z}} d E (z _ {t - 1} ^ {\prime} | x _ {t - 1}) F (z _ {t} | z _ {t - 1} ^ {\prime} , u _ {t - 1})} _ {d G (z _ {t} | x _ {t - 1}, u _ {t - 1})} \underbrace {\int_ {x _ {t} \in \mathcal {X}} d D (x _ {t} | z _ {t}) c (x _ {t} , u _ {t})} _ {\bar {c} (z _ {t}, u _ {t})}. \\ \end{array}
+$$
+
+Now consider the following cost function: $\mathbb{E}[c(x_{t - 1},u_{t - 1}) + c(x_t,u_t)\mid \widehat{P},x_0]$ for $t > 2$ . Using the above arguments, one can express this cost as
+
+$$
+\begin{array}{l} \mathbb {E} \left[ c \left(x _ {t - 1}, u _ {t - 1}\right) + c \left(x _ {t}, u _ {t}\right) \mid \widehat {P}, x _ {0} \right] \\ = \int_ {x _ {0: t - 2} \in \mathcal {X} ^ {t - 1}} \prod_ {k = 1} ^ {t - 2} d \widehat {P} (x _ {k} | x _ {k - 1}, u _ {k - 1}) \cdot \int_ {z _ {t - 2} ^ {\prime} \in \mathcal {Z}} d E (z _ {t - 2} ^ {\prime} | x _ {t - 2}) \cdot \int_ {z _ {t - 1} \in \mathcal {Z}} d F (z _ {t - 1} | z _ {t - 2} ^ {\prime}, u _ {t - 2}) \\ \left(\bar {c} \left(z _ {t - 1}, u _ {t - 1}\right) + \int_ {x _ {t - 1} \in \mathcal {X}} d D \left(x _ {t - 1} \mid z _ {t - 1}\right) \int_ {z _ {t - 1} ^ {\prime}, z _ {t} \in \mathcal {Z}} d E \left(z _ {t - 1} ^ {\prime} \mid x _ {t - 1}\right) d F \left(z _ {t} \mid z _ {t - 1} ^ {\prime}, u _ {t - 1}\right) \bar {c} \left(z _ {t}, u _ {t}\right)\right) \\ \leq \int_ {x _ {0: t - 2} \in \mathcal {X} ^ {t - 1}} \prod_ {k = 1} ^ {t - 2} d \widehat {P} (x _ {k} | x _ {k - 1}, u _ {k - 1}) \cdot \int_ {z _ {t - 2} \in \mathcal {Z}} d E (z _ {t - 2} | x _ {t - 2}). \\ \int_ {z _ {t - 1}} d F \left(z _ {t - 1} \mid z _ {t - 2}, u _ {t - 2}\right) \left(\bar {c} \left(z _ {t - 1}, u _ {t - 1}\right) + \int_ {z _ {t} \in \mathcal {Z}} d F \left(z _ {t} \mid z _ {t - 1}, u _ {t - 1}\right) \bar {c} \left(z _ {t}, u _ {t}\right)\right) \\ + c _ {\max } \cdot \int_ {x _ {0: t - 2} \in \mathcal {X} ^ {t - 1}} \prod_ {k = 1} ^ {t - 2} d P (x _ {k} | x _ {k - 1}, u _ {k - 1}) \cdot D _ {\mathrm {T V}} \left(\int_ {x ^ {\prime} \in \mathcal {X}} d \widehat {P} (x ^ {\prime} | x _ {t - 2}, u _ {t - 2}) E (\cdot | x ^ {\prime}) | | \int_ {z \in \mathcal {Z}} d E (z | x _ {t - 2}) F (\cdot | z, u _ {t - 2})\right) \\ \end{array}
+$$
+
+By continuing the above expansion, one can show that
+
+$$
+\begin{array}{l} \left| \mathbb {E} \left[ L (U, F, \bar {c}, z _ {0}) \mid E, x _ {0} \right] - L (U, \widehat {P}, c, x _ {0}) \right| \\ \leq T ^ {2} \cdot c _ {\mathrm {m a x}} \mathbb {E} \left[ \frac {1}{T} \sum_ {t = 0} ^ {T - 1} D _ {\mathrm {T V}} ((E \circ \hat {P}) (\cdot | x _ {t}, u _ {t}) | | (F \circ E) (\cdot | x _ {t}, u _ {t})) \mid P, x _ {0} \right] \\ \leq \sqrt {2} T ^ {2} \cdot c _ {\max } \mathbb {E} \left[ \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \sqrt {\operatorname {K L} \left(\left(E \circ \hat {P}\right) \left(\cdot \mid x _ {t} , u _ {t}\right) \mid \mid (F \circ E) \left(\cdot \mid x _ {t} , u _ {t}\right)\right)} \mid P, x _ {0} \right] \\ \leq \sqrt {2} T ^ {2} \cdot c _ {\max } \sqrt {\mathbb {E} \left[ \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \mathrm {K L} ((E \circ \widehat {P}) (\cdot | x _ {t} , u _ {t}) | | (F \circ E) (\cdot | x _ {t} , u _ {t})) \mid P , x _ {0} \right]}, \\ \end{array}
+$$
+
where the last inequality is based on Jensen's inequality applied to the $\sqrt{\cdot}$ function.
+
+For the second part of the proof, following similar arguments as in the second part of the proof of Lemma 1, one can show the following chain of inequalities for solution of (SOC3) and (SOC2):
+
$$
\begin{array}{l} L(U_{3}^{*},\widehat{P}_{3}^{*},c,x_{0}) \\ \geq \mathbb{E}\left[L(U_{3}^{*},F_{3}^{*},\bar{c},z_{0})\mid E_{3}^{*},x_{0}\right] - \sqrt{2}T^{2}\cdot c_{\max}\overline{U}\cdot\sqrt{\mathbb{E}_{x,u}\left[\mathrm{KL}\left((E_{3}^{*}\circ\widehat{P}_{3}^{*})(\cdot|x,u)\,\|\,(F_{3}^{*}\circ E_{3}^{*})(\cdot|x,u)\right)\right]} \\ = \mathbb{E}\left[L(U_{3}^{*},F_{3}^{*},\bar{c},z_{0})\mid E_{3}^{*},x_{0}\right] + \sqrt{2}T^{2}\cdot c_{\max}\overline{U}\cdot\sqrt{\mathbb{E}_{x,u}\left[\mathrm{KL}\left((E_{3}^{*}\circ\widehat{P}_{3}^{*})(\cdot|x,u)\,\|\,(F_{3}^{*}\circ E_{3}^{*})(\cdot|x,u)\right)\right]} \\ \quad - 2\sqrt{2}T^{2}\cdot c_{\max}\overline{U}\cdot\sqrt{\mathbb{E}_{x,u}\left[\mathrm{KL}\left((E_{3}^{*}\circ\widehat{P}_{3}^{*})(\cdot|x,u)\,\|\,(F_{3}^{*}\circ E_{3}^{*})(\cdot|x,u)\right)\right]} \\ \geq \mathbb{E}\left[L(U_{2}^{*},F_{2}^{*},\bar{c},z_{0})\mid E_{2}^{*},x_{0}\right] + \sqrt{2}T^{2}\cdot c_{\max}\overline{U}\cdot\sqrt{\mathbb{E}_{x,u}\left[\mathrm{KL}\left((E_{2}^{*}\circ\widehat{P}_{2}^{*})(\cdot|x,u)\,\|\,(F_{2}^{*}\circ E_{2}^{*})(\cdot|x,u)\right)\right]} \\ \quad - 2\sqrt{2}T^{2}\cdot c_{\max}\overline{U}\cdot\sqrt{\mathbb{E}_{x,u}\left[\mathrm{KL}\left((E_{3}^{*}\circ\widehat{P}_{3}^{*})(\cdot|x,u)\,\|\,(F_{3}^{*}\circ E_{3}^{*})(\cdot|x,u)\right)\right]} \\ \geq L(U_{2}^{*},\widehat{P}_{2}^{*},c,x_{0}) - 2\underbrace{\sqrt{2}T^{2}\cdot c_{\max}\overline{U}}_{\lambda_{2}}\cdot\underbrace{\sqrt{\mathbb{E}_{x,u}\left[\mathrm{KL}\left((E_{3}^{*}\circ\widehat{P}_{3}^{*})(\cdot|x,u)\,\|\,(F_{3}^{*}\circ E_{3}^{*})(\cdot|x,u)\right)\right]}}_{R_{2}^{\prime\prime}(\widehat{P}_{3}^{*})}, \tag{14} \\ \end{array}
$$
+
+where the first and third inequalities are based on the first part of this Lemma, and the second inequality is based on the optimality condition of problem (SOC2). This completes the proof.
+
+# A.3 PROOF OF COROLLARY 1
+
+To start with, the total-variation distance $D_{\mathrm{TV}}\left(\int_{x'\in \mathcal{X}}d\widehat{P} (x'|x,u)E(\cdot |x')||(F\circ E)(\cdot |x,u)\right)$ can be bounded using the triangle inequality:
+
+$$
+\begin{array}{l} D _ {\mathrm {T V}} \left(\int_ {x ^ {\prime} \in \mathcal {X}} d \widehat {P} \left(x ^ {\prime} | x, u\right) E (\cdot | x ^ {\prime}) | | (F \circ E) (\cdot | x, u)\right) \\ \leq D _ {\mathrm {T V}} \left(\int_ {x ^ {\prime} \in \mathcal {X}} d P \left(x ^ {\prime} | x, u\right) E (\cdot | x ^ {\prime}) | | (F \circ E) (\cdot | x, u)\right) + D _ {\mathrm {T V}} \left(\int_ {x ^ {\prime} \in \mathcal {X}} d P \left(x ^ {\prime} | x, u\right) E (\cdot | x ^ {\prime}) | | \int_ {x ^ {\prime} \in \mathcal {X}} d \widehat {P} \left(x ^ {\prime} | x, u\right) E (\cdot | x ^ {\prime})\right) \\ \leq D _ {\mathrm {T V}} \left(\int_ {x ^ {\prime} \in \mathcal {X}} d P \left(x ^ {\prime} | x, u\right) E (\cdot | x ^ {\prime}) | | (F \circ E) (\cdot | x, u)\right) + D _ {\mathrm {T V}} \left(P (\cdot | x, u) | | \widehat {P} (\cdot | x, u)\right) \\ \end{array}
+$$
+
+where the second inequality follows from the convexity of the total-variation distance $D_{\mathrm{TV}}$ (w.r.t. the convex mixture weights $E(\cdot |x')$ , $\forall x'$ ). Then, by Pinsker's inequality, one obtains the following inequality:
+
+$$
+\begin{array}{l} D _ {\mathrm {T V}} \left(\int_ {x ^ {\prime} \in \mathcal {X}} d \widehat {P} \left(x ^ {\prime} | x, u\right) E (\cdot | x ^ {\prime}) | | (F \circ E) (\cdot | x, u)\right) \\ \leq \sqrt {2 \mathrm {K L} \left(\int_ {x ^ {\prime} \in \mathcal {X}} d P \left(x ^ {\prime} \mid x , u\right) E (\cdot \mid x ^ {\prime}) \mid \mid (F \circ E) (\cdot \mid x , u)\right)} + \sqrt {2 \mathrm {K L} \left(P (\cdot \mid x , u) \mid \mid \widehat {P} (\cdot \mid x , u)\right)}. \tag {15} \\ \end{array}
+$$
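The convexity step used above, namely that pushing $P$ and $\widehat{P}$ through the same encoder mixture cannot increase total variation, can be verified numerically on discrete spaces. A minimal sketch (the distributions and dimensions below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def tv(p, q):
    # total-variation distance between two discrete distributions
    return 0.5 * float(np.abs(p - q).sum())

n, m = 8, 5                                # sizes of the x'- and z'-spaces
for _ in range(1000):
    P_next = rng.dirichlet(np.ones(n))     # P(. | x, u) over successors x'
    P_hat = rng.dirichlet(np.ones(n))      # \hat{P}(. | x, u)
    E = rng.dirichlet(np.ones(m), size=n)  # encoder E(. | x'), one row per x'
    # passing both models through the same encoder never increases D_TV
    assert tv(P_next @ E, P_hat @ E) <= tv(P_next, P_hat) + 1e-12
```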
+
+We now analyze the batch consistency regularizer:
+
+$$
+R _ {2} ^ {\prime \prime} (\widehat {P}) = \mathbb {E} _ {x, u, x ^ {\prime}} \left[ \mathrm {K L} (E (\cdot | x ^ {\prime}) | | (F \circ E) (\cdot | x, u)) \right]
+$$
+
+and connect it with the inequality in (15). Applying Jensen's inequality to the convex function $x \log x$ , for any observation-action pair $(x, u)$ sampled from $U_{\tau}$ , one can show that
+
+$$
+\begin{array}{l} \int_{x' \in \mathcal{X}} dP(x' \mid x, u) \int_{z' \in \mathcal{Z}} dE(z' \mid x') \log\left(\int_{x'' \in \mathcal{X}} dP(x'' \mid x, u)\, E(z' \mid x'')\right) \tag{16} \\ \leq \int_{x' \in \mathcal{X}} dP(x' \mid x, u) \int_{z' \in \mathcal{Z}} dE(z' \mid x') \log\left(E(z' \mid x')\right). \end{array}
+$$
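In the discrete case, inequality (16) is equivalent to the statement that the entropy of a mixture is at least the mixture of the entropies. A small numerical check (random distributions, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)

n, m = 6, 4
for _ in range(1000):
    p = rng.dirichlet(np.ones(n))          # dP(x' | x, u) over successors
    E = rng.dirichlet(np.ones(m), size=n)  # encoder rows E(. | x')
    mix = p @ E                            # marginal of z': int dP E(z' | x')
    # LHS of (16): sum_i p_i sum_z E_i(z) log mix(z) = sum_z mix(z) log mix(z)
    lhs = float(np.sum(mix * np.log(mix)))
    # RHS of (16): sum_i p_i sum_z E_i(z) log E_i(z)
    rhs = float(np.sum(p[:, None] * E * np.log(E)))
    assert lhs <= rhs + 1e-12              # Jensen for the convex x log x
```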
+
+Therefore, for any observation-control pair $(x,u)$ the following inequality holds:
+
+$$
+\begin{array}{l} \mathrm{KL}\left(\int_{x' \in \mathcal{X}} dP(x' \mid x, u)\, E(\cdot \mid x') \,\|\, (F \circ E)(\cdot \mid x, u)\right) \\ = \int_{x' \in \mathcal{X}} dP(x' \mid x, u) \int_{z' \in \mathcal{Z}} dE(z' \mid x') \log\left(\int_{x'' \in \mathcal{X}} dP(x'' \mid x, u)\, E(z' \mid x'')\right) - \int_{x' \in \mathcal{X}} dP(x' \mid x, u) \int_{z' \in \mathcal{Z}} dE(z' \mid x') \log\left((F \circ E)(z' \mid x, u)\right) \tag{17} \\ \leq \int_{x' \in \mathcal{X}} dP(x' \mid x, u) \int_{z' \in \mathcal{Z}} dE(z' \mid x') \log\left(E(z' \mid x')\right) - \int_{x' \in \mathcal{X}} dP(x' \mid x, u) \int_{z' \in \mathcal{Z}} dE(z' \mid x') \log\left((F \circ E)(z' \mid x, u)\right) \\ = \mathbb{E}_{x' \sim P(\cdot \mid x, u)}\left[\mathrm{KL}\left(E(\cdot \mid x') \,\|\, (F \circ E)(\cdot \mid x, u)\right)\right]. \end{array}
+$$
+
+By taking expectation over $(x,u)$ one can show that
+
+$$
+\mathbb {E} _ {x, u} \left[ \mathrm {K L} \left(\int_ {x ^ {\prime} \in \mathcal {X}} d P \left(x ^ {\prime} \mid x, u\right) E (\cdot \mid x ^ {\prime}) | | (F \circ E) (\cdot \mid x, u)\right) \right]
+$$
+
+is a lower bound on the batch consistency regularizer. Therefore, the above arguments imply that
+
+$$
+D _ {\mathrm {T V}} \left(\int_ {x ^ {\prime} \in \mathcal {X}} d \widehat {P} \left(x ^ {\prime} | x, u\right) E (\cdot | x ^ {\prime}) \mid \mid (F \circ E) (\cdot | x, u)\right) \leq \sqrt {2} \sqrt {R _ {2} ^ {\prime \prime} (\widehat {P}) + R _ {3} (\widehat {P})}. \tag {18}
+$$
+
+The last inequality is based on the property that $\sqrt{a} +\sqrt{b}\leq \sqrt{2}\sqrt{a + b}$ .
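The elementary inequality $\sqrt{a} + \sqrt{b} \leq \sqrt{2}\sqrt{a+b}$ (with equality exactly when $a = b$) can be spot-checked numerically:

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.uniform(0.0, 10.0, size=100_000)
b = rng.uniform(0.0, 10.0, size=100_000)
# sqrt(a) + sqrt(b) <= sqrt(2) * sqrt(a + b); equality exactly when a == b
assert np.all(np.sqrt(a) + np.sqrt(b) <= np.sqrt(2.0) * np.sqrt(a + b) + 1e-12)
```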
+
+Equipped with these additional results, the rest of the proof of the performance bound follows directly from Lemma 2, where we further upper-bound $D_{\mathrm{TV}}\left(\int_{x'\in \mathcal{X}}d\widehat{P} (x'|x,u)E(\cdot |x')||(F\circ E)(\cdot |x,u)\right)$ for $\widehat{P} = \widehat{P}_2^*$ .
+
+# A.4 PROOF OF LEMMA 3
+
+For the first part of the proof, at any time-step $t \geq 1$ , for any arbitrary control action sequence $\{u_t\}_{t=0}^{T-1}$ and for any model $\widehat{P}$ , consider the following decomposition of the expected cost:
+
+$$
+\begin{array}{l} \mathbb {E} \left[ c \left(x _ {t}, u _ {t}\right) \mid P, x _ {0} \right] = c _ {\max } \cdot \int_ {x _ {0: t - 1} \in \mathcal {X} ^ {t}} \prod_ {k = 1} ^ {t - 1} d P \left(x _ {k} \mid x _ {k - 1}, u _ {k - 1}\right) D _ {\mathrm {T V}} \left(P \left(\cdot \mid x _ {t - 1}, u _ {t - 1}\right) \mid \mid \widehat {P} \left(\cdot \mid x _ {t - 1}, u _ {t - 1}\right)\right) \\ + \int_ {x _ {0: t - 1} \in \mathcal {X} ^ {t}} \prod_ {k = 1} ^ {t - 1} d P (x _ {k} | x _ {k - 1}, u _ {k - 1}) \int_ {z _ {t} \in \mathcal {Z}} \underbrace {\int_ {z _ {t - 1} ^ {\prime} \in \mathcal {Z}} d E (z _ {t - 1} ^ {\prime} | x _ {t - 1}) F (z _ {t} | z _ {t - 1} ^ {\prime} , u _ {t - 1})} _ {d G (z _ {t} | x _ {t - 1}, u _ {t - 1})} \underbrace {\int_ {x _ {t} \in \mathcal {X}} d D (x _ {t} | z _ {t}) c (x _ {t} , u _ {t})} _ {\bar {c} (z _ {t}, u _ {t})}. \\ \end{array}
+$$
+
+Now consider the cost $\mathbb{E}[c(x_{t - 1},u_{t - 1}) + c(x_t,u_t)\mid P,x_0]$ for $t \geq 2$ . Using the above arguments, one can express this cost as
+
+$$
+\begin{array}{l} \mathbb{E}\left[ c(x_{t-1}, u_{t-1}) + c(x_{t}, u_{t}) \mid P, x_{0} \right] \\ = \int_{x_{0:t-2} \in \mathcal{X}^{t-1}} \prod_{k=1}^{t-2} dP(x_{k} \mid x_{k-1}, u_{k-1}) \cdot \int_{z'_{t-2} \in \mathcal{Z}} dE(z'_{t-2} \mid x_{t-2}) \cdot \int_{z_{t-1}} dF(z_{t-1} \mid z'_{t-2}, u_{t-2}) \cdot \left(\bar{c}(z_{t-1}, u_{t-1}) + \int_{x_{t-1}} dD(x_{t-1} \mid z_{t-1}) \int_{z'_{t-1}, z_{t} \in \mathcal{Z}} dE(z'_{t-1} \mid x_{t-1})\, dF(z_{t} \mid z'_{t-1}, u_{t-1})\, \bar{c}(z_{t}, u_{t})\right) \\ \quad + c_{\max} \cdot \sum_{j=1}^{2} j \cdot \int_{x_{0:t-j}} \prod_{k=1}^{t-j} dP(x_{k} \mid x_{k-1}, u_{k-1})\, D_{\mathrm{TV}}\left(P(\cdot \mid x_{t-j}, u_{t-j}) \,\|\, \widehat{P}(\cdot \mid x_{t-j}, u_{t-j})\right) \\ \leq \int_{x_{0:t-2} \in \mathcal{X}^{t-1}} \prod_{k=1}^{t-2} dP(x_{k} \mid x_{k-1}, u_{k-1}) \cdot \int_{z_{t-2} \in \mathcal{Z}} dE(z_{t-2} \mid x_{t-2}) \cdot \int_{z_{t-1}} dF(z_{t-1} \mid z_{t-2}, u_{t-2}) \left(\bar{c}(z_{t-1}, u_{t-1}) + \int_{z_{t} \in \mathcal{Z}} dF(z_{t} \mid z_{t-1}, u_{t-1})\, \bar{c}(z_{t}, u_{t})\right) \\ \quad + c_{\max} \cdot \sum_{j=1}^{2} j \cdot \int_{x_{0:t-j}} \prod_{k=1}^{t-j} dP(x_{k} \mid x_{k-1}, u_{k-1})\, D_{\mathrm{TV}}\left(P(\cdot \mid x_{t-j}, u_{t-j}) \,\|\, \widehat{P}(\cdot \mid x_{t-j}, u_{t-j})\right) \\ \quad + c_{\max} \cdot \int_{x_{0:t-2} \in \mathcal{X}^{t-1}} \prod_{k=1}^{t-2} dP(x_{k} \mid x_{k-1}, u_{k-1}) \cdot D_{\mathrm{TV}}\left(\int_{x' \in \mathcal{X}} d\widehat{P}(x' \mid x_{t-2}, u_{t-2}) E(\cdot \mid x') \,\|\, \int_{z \in \mathcal{Z}} dE(z \mid x_{t-2}) F(\cdot \mid z, u_{t-2})\right). \end{array}
+$$
+
+Continuing the above expansion, one can show that
+
+$$
+\left| \mathbb {E} \left[ L (U, F, \bar {c}, z _ {0}) \mid E, x _ {0} \right] - L (U, P, c, x _ {0}) \right|
+$$
+
+$$
+\begin{array}{l} \leq T^{2} \cdot c_{\max}\, \mathbb{E}\left[ \frac{1}{T} \sum_{t=0}^{T-1} D_{\mathrm{TV}}\left(P(\cdot \mid x_{t}, u_{t}) \,\|\, \widehat{P}(\cdot \mid x_{t}, u_{t})\right) + D_{\mathrm{TV}}\left(\int_{x' \in \mathcal{X}} d\widehat{P}(x' \mid x_{t}, u_{t}) E(\cdot \mid x') \,\|\, (F \circ E)(\cdot \mid x_{t}, u_{t})\right) \mid P, x_{0} \right] \\ \leq \sqrt{2}\, T^{2} \cdot c_{\max}\, \mathbb{E}\left[ \frac{1}{T} \sum_{t=0}^{T-1} \sqrt{\mathrm{KL}\left(P(\cdot \mid x_{t}, u_{t}) \,\|\, \widehat{P}(\cdot \mid x_{t}, u_{t})\right)} + \sqrt{\mathrm{KL}\left(\int_{x' \in \mathcal{X}} d\widehat{P}(x' \mid x_{t}, u_{t}) E(\cdot \mid x') \,\|\, (F \circ E)(\cdot \mid x_{t}, u_{t})\right)} \mid P, x_{0} \right] \\ \leq \sqrt{2}\, T^{2} \cdot c_{\max}\, \mathbb{E}\left[ \frac{1}{T} \sum_{t=0}^{T-1} \sqrt{\mathrm{KL}\left(P(\cdot \mid x_{t}, u_{t}) \,\|\, \widehat{P}(\cdot \mid x_{t}, u_{t})\right)} + \sqrt{\mathrm{KL}\left(P(\cdot \mid x_{t}, u_{t}) \,\|\, \widehat{P}(\cdot \mid x_{t}, u_{t})\right) + \mathrm{KL}\left(E(\cdot \mid x_{t+1}) \,\|\, (F \circ E)(\cdot \mid x_{t}, u_{t})\right)} \mid P, x_{0} \right] \\ \leq 2 T^{2} \cdot c_{\max} \sqrt{\mathbb{E}\left[ \frac{1}{T} \sum_{t=0}^{T-1} 3\, \mathrm{KL}\left(P(\cdot \mid x_{t}, u_{t}) \,\|\, \widehat{P}(\cdot \mid x_{t}, u_{t})\right) + 2\, \mathrm{KL}\left(E(\cdot \mid x_{t+1}) \,\|\, (F \circ E)(\cdot \mid x_{t}, u_{t})\right) \mid P, x_{0} \right]}, \end{array}
+$$
+
+where the last inequality is based on the fact that $\sqrt{a} +\sqrt{b}\leq \sqrt{2}\sqrt{a + b}$ and on Jensen's inequality applied to the concave function $\sqrt{\cdot}$ .
+
+For the second part of the proof, following similar arguments as in Lemma 2, one can show the following chain of inequalities for the solutions of (SOC1) and (SOC2):
+
+$$
+\begin{array}{l} L(U_{1}^{*}, P, c, x_{0}) \geq \mathbb{E}\left[ L(U_{1}^{*}, F_{2}^{*}, \bar{c}, z_{0}) \mid E_{2}^{*}, x_{0} \right] - \sqrt{2}\, T^{2} \cdot c_{\max} \bar{U} \cdot \sqrt{2 R_{2}^{\prime\prime}(\widehat{P}_{2}^{*}) + 3 R_{3}(\widehat{P}_{2}^{*})} \\ = \mathbb{E}\left[ L(U_{1}^{*}, F_{2}^{*}, \bar{c}, z_{0}) \mid E_{2}^{*}, x_{0} \right] + \sqrt{2}\, T^{2} \cdot c_{\max} \bar{U} \cdot \sqrt{2 R_{2}^{\prime\prime}(\widehat{P}_{2}^{*}) + 3 R_{3}(\widehat{P}_{2}^{*})} - 2\sqrt{2}\, T^{2} \cdot c_{\max} \bar{U} \cdot \sqrt{2 R_{2}^{\prime\prime}(\widehat{P}_{2}^{*}) + 3 R_{3}(\widehat{P}_{2}^{*})} \\ \geq \mathbb{E}\left[ L(U_{2}^{*}, F_{2}^{*}, \bar{c}, z_{0}) \mid E_{2}^{*}, x_{0} \right] + \sqrt{2}\, T^{2} \cdot c_{\max} \bar{U} \cdot \sqrt{2 R_{2}^{\prime\prime}(\widehat{P}_{2}^{*}) + 3 R_{3}(\widehat{P}_{2}^{*})} - 2\sqrt{2}\, T^{2} \cdot c_{\max} \bar{U} \cdot \sqrt{2 R_{2}^{\prime\prime}(\widehat{P}_{2}^{*}) + 3 R_{3}(\widehat{P}_{2}^{*})} \tag{19} \\ \geq L(U_{2}^{*}, P, c, x_{0}) - 2 \underbrace{\sqrt{2}\, T^{2} \cdot c_{\max} \bar{U}}_{\lambda_{2}} \cdot \sqrt{2 R_{2}^{\prime\prime}(\widehat{P}_{2}^{*}) + 3 R_{3}(\widehat{P}_{2}^{*})}, \end{array}
+$$
+
+where the first and third inequalities are based on the first part of this Lemma, and the second inequality is based on the optimality condition of problem (SOC2). This completes the proof.
+
+# A.5 PROOF OF LEMMA 4
+
+A Recap of the Result: Let $(U_{\mathrm{LLC}}^{*},\widehat{P}_{\mathrm{LLC}}^{*})$ be an LLC solution to (SOC-LLC) and $U_{1}^{*}$ be a solution to (SOC1). Suppose the nominal latent state-action pairs $\{(\pmb{z}_t,\pmb{u}_t)\}_{t = 0}^{T - 1}$ satisfy the condition $(\pmb{z}_{t},\pmb{u}_{t})\sim \mathcal{N}((z_{2,t}^{*},u_{2,t}^{*}),\delta^{2}I)$ , where $\{(z_{2,t}^{*},u_{2,t}^{*})\}_{t = 0}^{T - 1}$ is the optimal trajectory of problem (SOC2). Then with probability $1 - \eta$ , we have $L(U_1^*,P,c,x_0)\geq L(U_{\mathrm{LLC}}^*,P,c,x_0) - 2\lambda_{\mathrm{LLC}}\sqrt{R_2(\widehat{P}_{\mathrm{LLC}}^*) + R_{\mathrm{LLC}}(\widehat{P}_{\mathrm{LLC}}^*)}$ .
+
+Discussions of the effect of $\delta$ on LLC Performance: The result of this lemma shows that when the nominal states and actions are $\delta$ -close to the optimal trajectory of (SOC2), i.e., at each time step $(\pmb{z}_t, \pmb{u}_t)$ is a sample from the Gaussian distribution centered at $(z_{2,t}^*, u_{2,t}^*)$ with standard deviation $\delta$ , then one can obtain a performance bound for the LLC algorithm in terms of the regularization loss $R_{\mathrm{LLC}}$ . To quantify the above condition, one can use the Mahalanobis distance (De Maesschalck et al., 2000) to measure the distance of $(\pmb{z}_t, \pmb{u}_t)$ to the distribution $\mathcal{N}((z_{2,t}^*, u_{2,t}^*), \delta^2 I)$ , i.e., we want to check the condition:
+
+$$
+\frac {\left\| \left(\boldsymbol {z} _ {t} , \boldsymbol {u} _ {t}\right) - \left(z _ {2 , t} ^ {*} , u _ {2 , t} ^ {*}\right) \right\|}{\delta} \leq \epsilon^ {\prime}, \forall t,
+$$
+
+for any arbitrary error tolerance $\epsilon' > 0$ . While we cannot verify this condition without knowing the optimal trajectory $\{(z_{2,t}^*, u_{2,t}^*)\}_{t=0}^{T-1}$ , it still offers some insight into choosing the parameter $\delta$ , based on the trade-off between designing the nominal trajectory $\{(\pmb{z}_t, \pmb{u}_t)\}_{t=0}^{T-1}$ and optimizing $R_{\mathrm{LLC}}$ . When $\delta$ is large, the low-curvature regularization imposed by the $R_{\mathrm{LLC}}$ regularizer covers a large portion of the latent state-action space. In the extreme case, when $\delta \to \infty$ , $R_{\mathrm{LLC}}$ can be viewed as a regularizer that enforces global linearity. The trade-off is that the loss $R_{\mathrm{LLC}}$ is then generally higher, which in turn degrades the performance bound of the LLC control algorithm in Lemma 4. On the other hand, when $\delta$ is small, the low-curvature regularization in $R_{\mathrm{LLC}}$ covers only a small region of the latent state-action space, and thus the loss associated with this term is generally lower (which provides a tighter performance bound in Lemma 4). However, the performance result only holds when $(\pmb{z}_t, \pmb{u}_t)$ happens to be close to $(z_{2,t}^*, u_{2,t}^*)$ at each time step $t \in \{0, \dots, T-1\}$ .
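As an illustration of why this check is a natural way to quantify closeness, the normalized distance $\|(\pmb{z}_t, \pmb{u}_t) - (z_{2,t}^*, u_{2,t}^*)\| / \delta$ of a Gaussian sample does not depend on $\delta$ itself: its law is a chi distribution determined only by the dimension. A small simulation (toy dimensions and seeds, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 6                                  # dimension of the (z, u) pair, toy value
center = rng.normal(size=d)            # stands in for (z*_{2,t}, u*_{2,t})

means = []
for delta in (0.01, 0.1, 1.0):
    samples = center + delta * rng.normal(size=(200_000, d))
    normalized = np.linalg.norm(samples - center, axis=1) / delta
    means.append(float(normalized.mean()))

# the normalized distance follows a chi distribution with d degrees of
# freedom, independent of delta, so the epsilon'-check is scale-free
assert max(means) - min(means) < 0.02
```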
+
+Proof: For simplicity, we focus on the noiseless case, in which the dynamics are deterministic (i.e., $\Sigma_w = 0$ ). Extending the following analysis to the case of stochastic dynamics is straightforward.
+
+First, consider any arbitrary latent state-action pair $(z,u)$ , and construct the corresponding nominal state-action pair $(\pmb{z},\pmb{u})$ as $\pmb{z} = z - \delta z$ , $\pmb{u} = u - \delta u$ , where $(\delta z,\delta u)$ is sampled from the Gaussian distribution $\mathcal{N}(0,\delta^2 I)$ . (Independent copies of these random vectors are denoted by $(\delta z',\delta u')$ .) By the two-tailed Bernstein's inequality (Murphy, 2012), for any given $\eta \in (0,1]$ one has the following inequality with probability $1 - \eta$ :
+
+$$
+\begin{array}{l} \left| f _ {\mathcal {Z}} (\boldsymbol {z}, \boldsymbol {u}) + A (\boldsymbol {z}, \boldsymbol {u}) \delta z + B (\boldsymbol {z}, \boldsymbol {u}) \delta u - f _ {\mathcal {Z}} (z, u) \right| \\ \leq \sqrt {2 \log (2 / \eta)} \sqrt {\mathbb {V} _ {(\delta z ^ {\prime} , \delta u ^ {\prime}) \sim \mathcal {N} (0 , \delta^ {2} I)} [ f _ {\mathcal {Z}} (\boldsymbol {z} , \boldsymbol {u}) + A (\boldsymbol {z} , \boldsymbol {u}) \delta z ^ {\prime} + B (\boldsymbol {z} , \boldsymbol {u}) \delta u ^ {\prime} - f _ {\mathcal {Z}} (z , u) ]} \\ + \left| \mathbb {E} _ {(\delta z ^ {\prime}, \delta u ^ {\prime}) \sim \mathcal {N} (0, \delta^ {2} I)} [ f _ {\mathcal {Z}} (\boldsymbol {z}, \boldsymbol {u}) + A (\boldsymbol {z}, \boldsymbol {u}) \delta z ^ {\prime} + B (\boldsymbol {z}, \boldsymbol {u}) \delta u ^ {\prime} - f _ {\mathcal {Z}} (z, u) ] \right| \\ \leq (1 + \sqrt {2 \log (2 / \eta)}) \left(\underbrace {\mathbb {E} _ {(\delta z ^ {\prime} , \delta u ^ {\prime}) \sim \mathcal {N} (0 , \delta^ {2} I)} \left[ \| f _ {\mathcal {Z}} (\boldsymbol {z} , \boldsymbol {u}) + A (\boldsymbol {z} , \boldsymbol {u}) \delta z ^ {\prime} + B (\boldsymbol {z} , \boldsymbol {u}) \delta u ^ {\prime} - f _ {\mathcal {Z}} (z , u) \| ^ {2} \right]} _ {R _ {\mathrm {L L C}} (\widehat {P} | z, u)}\right) ^ {1 / 2}. \\ \end{array}
+$$
+
+The second inequality is due to the basic fact that the variance of a random variable is bounded by its second moment. On the other hand, at each time step $t \in \{0, \dots, T - 1\}$ , by the Lipschitz property of the immediate cost, the value function $V_{t}(z) = \min_{U_{t:T - 1}} \mathbb{E}\left[\bar{c}_{T}(z_{T}) + \sum_{\tau = t}^{T - 1} \bar{c}_{\tau}(z_{\tau}, u_{\tau}) \mid z_{t} = z\right]$ is also Lipschitz with constant $(T - t + 1)c_{\mathrm{lip}}$ . Using the Lipschitz property of $V_{t + 1}$ , for any $(z, u)$
+
+and $(\delta z, \delta u)$ , such that $(\mathbf{z}, \mathbf{u}) = (z - \delta z, u - \delta u)$ , one has the following property:
+
+$$
+\begin{array}{l} \left| V_{t+1}\left(\boldsymbol{z}' + A(\boldsymbol{z}, \boldsymbol{u}) \delta z + B(\boldsymbol{z}, \boldsymbol{u}) \delta u\right) - V_{t+1}\left(f_{\mathcal{Z}}(z, u)\right) \right| \\ \leq (T - t) c_{\mathrm{lip}} \cdot \left| f_{\mathcal{Z}}(\boldsymbol{z}, \boldsymbol{u}) + A(\boldsymbol{z}, \boldsymbol{u}) \delta z + B(\boldsymbol{z}, \boldsymbol{u}) \delta u - f_{\mathcal{Z}}(z, u) \right|. \tag{20} \end{array}
+$$
+
+Therefore, at any arbitrary state-action pair $(\tilde{z},\tilde{u})$ , for $\pmb{z} = \tilde{z} -\delta z$ and $\pmb{u} = \tilde{u} -\delta u$ with Gaussian sample $(\delta z,\delta u)\sim \mathcal{N}(0,\delta^{2}I)$ , the following inequality on the value function holds w.p. $1 - \eta$ :
+
+$$
+V _ {t + 1} (f _ {\mathcal {Z}} (\tilde {z}, \tilde {u})) \geq V _ {t + 1} (\pmb {z} ^ {\prime} + A (\pmb {z}, \pmb {u}) \delta z + B (\pmb {z}, \pmb {u}) \delta u) - (T - t) c _ {\mathrm {l i p}} (1 + \sqrt {2 \log (2 / \eta)}) \cdot \sqrt {R _ {\mathrm {L L C}} (\widehat {P} | \tilde {z} , \tilde {u})},
+$$
+
+which further implies
+
+$$
+\begin{array}{l} \bar{c}_{t}(\tilde{z}, \tilde{u}) + V_{t+1}\left(f_{\mathcal{Z}}(\tilde{z}, \tilde{u})\right) \\ \geq \bar{c}_{t}(\tilde{z}, \tilde{u}) + V_{t+1}\left(\pmb{z}' + A(\pmb{z}, \pmb{u}) \delta z + B(\pmb{z}, \pmb{u}) \delta u\right) - (T - t) c_{\mathrm{lip}} \left(1 + \sqrt{2 \log(2 / \eta)}\right) \cdot \sqrt{R_{\mathrm{LLC}}(\widehat{P} \mid \tilde{z}, \tilde{u})}. \end{array}
+$$
+
+Now let $\tilde{u}^*$ be the optimal control w.r.t. the Bellman operator $T_{t}[V_{t + 1}](\tilde{z})$ at any latent state $\tilde{z}$ . Based on the assumption of this lemma, at each state $\tilde{z}$ the nominal latent state-action pair $(\pmb{z},\pmb{u})$ is generated by perturbing $(\tilde{z},\tilde{u}^{*})$ with a Gaussian sample $(\delta z,\delta u)\sim \mathcal{N}(0,\delta^{2}I)$ , i.e., $\pmb{z} = \tilde{z} -\delta z$ , $\pmb{u} = \tilde{u}^{*} -\delta u$ . Then by the above arguments the following chain of inequalities holds w.p. $1 - \eta$ :
+
+$$
+\begin{array}{l} T_{t}\left[ V_{t+1} \right](\tilde{z}) := \min_{\tilde{u}} \bar{c}_{t}(\tilde{z}, \tilde{u}) + V_{t+1}\left(f_{\mathcal{Z}}(\tilde{z}, \tilde{u})\right) \\ = \bar{c}_{t}(\tilde{z}, \tilde{u}^{*}) + V_{t+1}\left(f_{\mathcal{Z}}(\tilde{z}, \tilde{u}^{*})\right) \\ \geq \bar{c}_{t}(\tilde{z}, \tilde{u}^{*}) + V_{t+1}\left(f_{\mathcal{Z}}(\boldsymbol{z}, \boldsymbol{u}) + A(\boldsymbol{z}, \boldsymbol{u}) \delta z + B(\boldsymbol{z}, \boldsymbol{u}) \delta u\right) - \left| V_{t+1}\left(\boldsymbol{z}' + A(\boldsymbol{z}, \boldsymbol{u}) \delta z + B(\boldsymbol{z}, \boldsymbol{u}) \delta u\right) - V_{t+1}\left(f_{\mathcal{Z}}(\tilde{z}, \tilde{u}^{*})\right) \right| \\ \geq \bar{c}_{t}(\tilde{z}, \boldsymbol{u} + \delta u) + V_{t+1}\left(f_{\mathcal{Z}}(\boldsymbol{z}, \boldsymbol{u}) + A(\boldsymbol{z}, \boldsymbol{u}) \delta z + B(\boldsymbol{z}, \boldsymbol{u}) \delta u\right) - (T - t) c_{\mathrm{lip}}\left(1 + \sqrt{2 \log(2 / \eta)}\right) \sqrt{\max_{z, u} R_{\mathrm{LLC}}(\widehat{P} \mid z, u)} \tag{21} \\ \geq \min_{\delta u} \bar{c}_{t}(\tilde{z}, \boldsymbol{u} + \delta u) + V_{t+1}\left(f_{\mathcal{Z}}(\boldsymbol{z}, \boldsymbol{u}) + A(\boldsymbol{z}, \boldsymbol{u}) \delta z + B(\boldsymbol{z}, \boldsymbol{u}) \delta u\right) - (T - t) c_{\mathrm{lip}}\left(1 + \sqrt{2 \log(2 / \eta)}\right) \sqrt{\max_{z, u} R_{\mathrm{LLC}}(\widehat{P} \mid z, u)}. \end{array}
+$$
+
+Recall the LLC loss function is given by
+
+$$
+R _ {\mathrm {L L C}} (\widehat {P}) = \mathbb {E} _ {x, u} \left[ \mathbb {E} \left[ R _ {\mathrm {L L C}} (\widehat {P} | z, u) \mid z \right] \mid E \right].
+$$
+
+Also consider the Bellman operator w.r.t. latent SOC: $T_{t}[V](z) = \min_{u}\bar{c}_{t}(z,u) + V(f_{\mathcal{Z}}(z,u))$ , and the Bellman operator w.r.t. LLC: $T_{t,\mathrm{LLC}}[V](z) = \min_{\delta u}\bar{c}_{t}(z,\delta u + \pmb {u}) + V(f_{\mathcal{Z}}(\pmb {z},\pmb {u}) + A(\pmb {z},\pmb {u})\delta z + B(\pmb {z},\pmb {u})\delta u)$ . Utilizing these definitions, the inequality in (21) can be further expressed as
+
+$$
+T _ {t} \left[ V _ {t + 1} \right] (\tilde {z}) \geq T _ {t, \mathrm {LLC}} \left[ V _ {t + 1} \right] (\tilde {z}) - (T - t) c _ {\mathrm {lip}} c _ {\max } \left(1 + \sqrt {2 \log (2 / \eta)}\right) \sqrt {\overline {U X}} \sqrt {R _ {\mathrm {LLC}} (\widehat {P})}. \tag {22}
+$$
+
+This inequality is due to the fact that all latent states are generated by encoding observations, i.e., $z \sim E(\cdot | x)$ ; thus, following arguments analogous to those in the proof of Lemma 1, one has
+
+$$
+\max _ {z, u} R _ {\mathrm {L L C}} (\widehat {P} | z, u) \leq \overline {{U X}} \mathbb {E} _ {x, u} \left[ \mathbb {E} \left[ R _ {\mathrm {L L C}} (\widehat {P} | z, u) \mid z \right] \mid E \right] = \overline {{U X}} R _ {\mathrm {L L C}} (\widehat {P}).
+$$
+
+Therefore, based on the dynamic-programming result that bounds the difference of value functions w.r.t. different Bellman operators in finite-horizon problems (see, for example, Theorem 1.3 in Bertsekas (1995)), the above inequality implies the following bound on the value function, w.p. $1 - \eta$ :
+
+$$
+\begin{array}{l} \min_{U, \widehat{P}} L(U, F, \bar{c}, z_{0}) \\ \geq L(U_{\mathrm{LLC}}^{*}, \widehat{P}_{\mathrm{LLC}}^{*}, \bar{c}, z_{0}) - \sum_{t=1}^{T-1} (T - t) \cdot c_{\mathrm{lip}} c_{\max} \cdot \left(1 + \sqrt{2 \log(2T / \eta)}\right) \cdot \sqrt{\overline{U X}} \cdot \sqrt{R_{\mathrm{LLC}}(\widehat{P}_{\mathrm{LLC}}^{*})} \\ \geq L(U_{\mathrm{LLC}}^{*}, \widehat{P}_{\mathrm{LLC}}^{*}, \bar{c}, z_{0}) - T^{2} \cdot c_{\mathrm{lip}} c_{\max} \cdot \left(1 + \sqrt{2 \log(2T / \eta)}\right) \cdot \sqrt{\overline{U X}} \cdot \sqrt{R_{\mathrm{LLC}}(\widehat{P}_{\mathrm{LLC}}^{*})}. \tag{23} \end{array}
+$$
+
+Notice that here we replace $\eta$ in (22) with $\eta /T$ . To prove (23), we apply (22) at each $t\in \{0,\ldots ,T - 1\}$ ; the replacement results from applying the union bound (Murphy, 2012) to ensure that (23) holds with probability $1 - \eta$ .
+
+Therefore the proof is completed by combining the above result with that in Lemma 3.
+
+# B THE LATENT SPACE ILQR ALGORITHM
+
+# B.1 PLANNING IN THE LATENT SPACE (HIGH-LEVEL DESCRIPTION)
+
+We follow the same control scheme as in Banijamali et al. (2018). Namely, we use the iLQR (Li & Todorov, 2004) solver to plan in the latent space. Given a start observation $x_{\mathrm{start}}$ and a goal observation $x_{\mathrm{goal}}$ , corresponding to underlying states $\{s_{\mathrm{start}}, s_{\mathrm{goal}}\}$ , we encode the observations to retrieve $z_{\mathrm{start}}$ and $z_{\mathrm{goal}}$ . The procedure then goes as follows: we initialize a random trajectory (sequence of actions), feed it to the iLQR solver, and apply the first action from the trajectory the solver outputs. We observe the next observation returned from the system (closed-loop control) and feed the updated trajectory to the iLQR solver. This procedure continues until the end of the problem horizon is reached. We use a receding-window approach, where at every planning step the solver only optimizes over an action sequence of fixed length, independent of the problem horizon.
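The closed-loop receding-horizon procedure can be sketched as follows. To keep the sketch self-contained, we use a toy linear latent model and substitute a simple random-shooting optimizer for the iLQR solver; `f`, `plan`, and all constants below are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(5)

def f(z, u):
    # toy deterministic latent dynamics standing in for F_mu(z, u)
    return z + 0.1 * u

def plan(z, z_goal, horizon=5, n_candidates=256):
    # random-shooting stand-in for the iLQR solver: sample action
    # sequences, roll them out, and keep the cheapest full sequence
    best_cost, best_seq = np.inf, None
    for _ in range(n_candidates):
        seq = rng.uniform(-1.0, 1.0, size=(horizon, z.size))
        zt, cost = z, 0.0
        for u in seq:
            zt = f(zt, u)
            cost += float(np.sum((zt - z_goal) ** 2))
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq

z, z_goal = np.array([1.0, -1.0]), np.zeros(2)
start_dist = float(np.linalg.norm(z - z_goal))
for _ in range(20):          # closed loop: replan at every step and
    u = plan(z, z_goal)[0]   # apply only the first planned action
    z = f(z, u)
assert float(np.linalg.norm(z - z_goal)) < start_dist
```

The essential structure matches the description above regardless of the inner solver: only the first action of each planned sequence is executed before replanning over a fixed-length window.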
+
+# B.2 DETAILS ABOUT ILQR IN THE LATENT SPACE
+
+Consider the latent state SOC problem
+
+$$
+\min _ {U} \mathbb {E} \left[ \bar {c} _ {T} (z _ {T}) + \sum_ {t = 0} ^ {T - 1} \bar {c} _ {t} (z _ {t}, u _ {t}) \mid z _ {0} \right].
+$$
+
+At each time instance $t \in \{0, \dots, T\}$ the value function of this problem is given by
+
+$$
+V _ {T} (z) = \bar {c} _ {T} (z), V _ {t} (z) = \min _ {U _ {t: T - 1}} \mathbb {E} \left[ \bar {c} _ {T} \left(z _ {T}\right) + \sum_ {\tau = t} ^ {T - 1} \bar {c} _ {\tau} \left(z _ {\tau}, u _ {\tau}\right) \mid z _ {t} = z \right], \forall t < T. \tag {24}
+$$
+
+Recall that the nonlinear latent space dynamics model is given by:
+
+$$
+z _ {t + 1} = F \left(z _ {t}, u _ {t}, w _ {t}\right) := F _ {\mu} \left(z _ {t}, u _ {t}\right) + F _ {\sigma} \cdot w _ {t}, w _ {t} \sim \mathcal {N} (0, I), \forall t \geq 0, \tag {25}
+$$
+
+where $F_{\mu}(z_t,u_t)$ is the deterministic dynamics model and $F_{\sigma}^{\top}F_{\sigma}$ is the covariance of the latent dynamics system noise. Since the deterministic dynamics model $F_{\mu}(z_t,u_t)$ is smooth, the following Jacobian terms are well-defined:
+
+$$
+A (z, u) := \frac {\partial F _ {\mu} (z , u)}{\partial z}, B (z, u) := \frac {\partial F _ {\mu} (z , u)}{\partial u}, \forall z \in \mathbb {R} ^ {n _ {z}}, \forall u \in \mathbb {R} ^ {n _ {u}}.
+$$
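These Jacobians can be sanity-checked against central finite differences. The dynamics `F_mu` below is a hypothetical smooth map chosen only so the analytic Jacobians are easy to write down; it is not the paper's learned model:

```python
import numpy as np

def F_mu(z, u):
    # hypothetical smooth latent dynamics, used only for this check
    return np.tanh(z + 0.5 * u) + 0.1 * u

def jacobian(f, x, eps=1e-6):
    # central finite differences, one column per input coordinate
    cols = []
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        cols.append((f(x + e) - f(x - e)) / (2.0 * eps))
    return np.stack(cols, axis=1)

z, u = np.array([0.3, -0.2]), np.array([0.1, 0.4])
A = jacobian(lambda v: F_mu(v, u), z)  # A(z, u) = dF_mu / dz
B = jacobian(lambda v: F_mu(z, v), u)  # B(z, u) = dF_mu / du

# analytic Jacobians for this particular F_mu (diagonal, via sech^2)
s = 1.0 - np.tanh(z + 0.5 * u) ** 2
assert np.allclose(A, np.diag(s), atol=1e-8)
assert np.allclose(B, np.diag(0.5 * s + 0.1), atol=1e-8)
```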
+
+By Bellman's principle of optimality, at each time instance $t \in \{0, \dots, T - 1\}$ the value function is a solution of the recursive fixed-point equation
+
+$$
+V _ {t} (z) = \min _ {u} Q _ {t} (z, u), \tag {26}
+$$
+
+where the state-action value function at time-instance $t$ w.r.t. state-action pair $(z_{t},u_{t}) = (z,u)$ is given by
+
+$$
+Q _ {t} (z, u) = \bar {c} _ {t} (z, u) + \mathbb {E} _ {w _ {t}} \left[ V _ {t + 1} \left(F \left(z _ {t}, u _ {t}, w _ {t}\right)\right) \mid z _ {t} = z, u _ {t} = u \right].
+$$
+
+In the setting of the iLQR algorithm, assume we have access to a trajectory of latent states and actions of the form $\{(z_t, u_t, z_{t+1})\}_{t=0}^{T-1}$ . At each iteration, the iLQR algorithm has the following steps:
+
+1. Given a nominal trajectory, find an optimal policy w.r.t. the perturbed latent states
+2. Generate a sequence of optimal perturbed actions that locally improves the cumulative cost of the given trajectory
+3. Apply the above sequence of actions to the environment and update the nominal trajectory
+4. Repeat the above steps with the new nominal trajectory
+
+Denote by $\delta z_{t} = z_{t} - \pmb{z}_{t}$ and $\delta u_{t} = u_{t} - \pmb{u}_{t}$ the deviations of state and control action at time step $t$ respectively. Assuming that the nominal next state $\pmb{z}_{t + 1}$ is generated by the deterministic transition $F_{\mu}(\pmb{z}_t,\pmb{u}_t)$ at the nominal state and action pair $(\pmb{z}_t,\pmb{u}_t)$ , the first-order Taylor series approximation of the latent space transition is given by
+
+$$
+\delta z _ {t + 1} := z _ {t + 1} - \boldsymbol {z} _ {t + 1} = A (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) \delta z _ {t} + B (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) \delta u _ {t} + F _ {\sigma} \cdot w _ {t} + O \left(\left\| \left(\delta z _ {t}, \delta u _ {t}\right) \right\| ^ {2}\right), w _ {t} \sim \mathcal {N} (0, I). \tag {27}
+$$
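The quadratic remainder in (27) can be observed numerically: halving the perturbation $(\delta z_t, \delta u_t)$ should shrink the linearization error by roughly a factor of four. A sketch with a hypothetical smooth $F_\mu$ (deterministic part only, i.e., $w_t = 0$; the dynamics and test points are illustrative):

```python
import numpy as np

def F_mu(z, u):
    # hypothetical smooth deterministic latent dynamics, for illustration
    return np.tanh(z + 0.5 * u) + 0.1 * u

z0, u0 = np.array([0.3, -0.2]), np.array([0.1, 0.4])
s = 1.0 - np.tanh(z0 + 0.5 * u0) ** 2
A, B = np.diag(s), np.diag(0.5 * s + 0.1)  # exact Jacobians at (z0, u0)

errs = []
for scale in (1e-1, 5e-2, 2.5e-2):
    dz = scale * np.array([1.0, -1.0])
    du = scale * np.array([0.5, 1.0])
    lin = F_mu(z0, u0) + A @ dz + B @ du   # first-order model from (27)
    errs.append(float(np.linalg.norm(F_mu(z0 + dz, u0 + du) - lin)))

# halving the perturbation cuts the error roughly 4x (O(||.||^2) remainder)
assert errs[1] < errs[0] / 3.0 and errs[2] < errs[1] / 3.0
```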
+
+To find a locally optimal control action sequence $u_{t}^{*} = \pi_{\delta z,t}^{*}(\delta z_{t}) + \pmb{u}_{t}, \forall t$ , that improves the cumulative cost of the trajectory, we compute the locally optimal perturbed policy (policy w.r.t. perturbed latent state) $\{\pi_{\delta z,t}^{*}(\delta z_{t})\}_{t=0}^{T-1}$ that minimizes the following second-order Taylor series approximation of $Q_{t}$ around nominal state-action pair $(\pmb{z}_{t}, \pmb{u}_{t}), \forall t \in \{0, \dots, T-1\}$ :
+
+$$
+Q _ {t} \left(z _ {t}, u _ {t}\right) = Q _ {t} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) + \frac {1}{2} \left[ \begin{array}{l} 1 \\ \delta z _ {t} \\ \delta u _ {t} \end{array} \right] ^ {\top} \left[ \begin{array}{c c c} F _ {\sigma} ^ {\top} F _ {\sigma} & Q _ {t, z} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) ^ {\top} & Q _ {t, u} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) ^ {\top} \\ Q _ {t, z} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) & Q _ {t, z z} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) & Q _ {t, u z} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) ^ {\top} \\ Q _ {t, u} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) & Q _ {t, u z} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) & Q _ {t, u u} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) \end{array} \right] \left[ \begin{array}{l} 1 \\ \delta z _ {t} \\ \delta u _ {t} \end{array} \right], \tag {28}
+$$
+
+where the first and second order derivatives of the $Q$ -function are given by
+
+$$
+Q _ {t, z} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) = \left[ \frac {\partial \bar {c} _ {t} (\boldsymbol {z} _ {t} , \boldsymbol {u} _ {t})}{\partial z} + A (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) ^ {\top} \mathbb {V} _ {t + 1, z} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) \right],
+$$
+
+$$
+Q _ {t, u} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) = \left[ \frac {\partial \bar {c} _ {t} \left(\boldsymbol {z} _ {t} , \boldsymbol {u} _ {t}\right)}{\partial u} + B \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) ^ {\top} \mathbb {V} _ {t + 1, z} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) \right],
+$$
+
+$$
+Q _ {t, z z} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) = \left[ \frac {\partial^ {2} \bar {c} _ {t} (\boldsymbol {z} _ {t} , \boldsymbol {u} _ {t})}{\partial z ^ {2}} + A (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) ^ {\top} \mathbb {V} _ {t + 1, z z} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) A (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) \right],
+$$
+
+$$
+Q _ {t, u z} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) = \left[ \frac {\partial^ {2} \bar {c} _ {t} (\boldsymbol {z} _ {t} , \boldsymbol {u} _ {t})}{\partial u \partial z} + B (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) ^ {\top} \mathbb {V} _ {t + 1, z z} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) A (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) \right],
+$$
+
+$$
+Q _ {t, u u} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) = \left[ \frac {\partial^ {2} \overline {{c}} _ {t} (\boldsymbol {z} _ {t} , \boldsymbol {u} _ {t})}{\partial u ^ {2}} + B (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) ^ {\top} \mathbb {V} _ {t + 1, z z} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) B (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) \right],
+$$
+
+and the first and second order derivatives of the value functions are given by
+
+$$
+\mathbb {V} _ {t + 1, z} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) = \mathbb {E} _ {w} \left[ \frac {\partial V _ {t + 1}}{\partial z} (F (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}, w)) \right], \mathbb {V} _ {t + 1, z z} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) = \mathbb {E} _ {w} \left[ \frac {\partial^ {2} V _ {t + 1}}{\partial z ^ {2}} (F (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}, w)) \right].
+$$
+
+Notice that the $Q$ -function approximation $Q_{t}$ in (28) is quadratic and the matrix $\begin{bmatrix} Q_{t,zz}(\boldsymbol{z}_t,\boldsymbol{u}_t) & Q_{t,uz}(\boldsymbol{z}_t,\boldsymbol{u}_t)^\top \\ Q_{t,uz}(\boldsymbol{z}_t,\boldsymbol{u}_t) & Q_{t,uu}(\boldsymbol{z}_t,\boldsymbol{u}_t) \end{bmatrix}$ is positive semi-definite. Therefore the optimal perturbed policy $\pi_{\delta z,t}^*$ has the following closed-form solution:
+
+$$
+\pi_ {\delta z, t} ^ {*} (\cdot) \in \arg \min _ {\delta u _ {t}} Q _ {t} \left(z _ {t}, u _ {t}\right) \Longrightarrow \pi_ {\delta z, t} ^ {*} \left(\delta z _ {t}\right) = k _ {t} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) + K _ {t} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) \delta z _ {t}, \tag {29}
+$$
+
+where the controller weights are given by
+
+$k_{t}(\pmb{z}_{t},\pmb{u}_{t}) = -(Q_{t,uu}(\pmb{z}_{t},\pmb{u}_{t}))^{-1}Q_{t,u}(\pmb{z}_{t},\pmb{u}_{t})$ and $K_{t}(\pmb{z}_{t},\pmb{u}_{t}) = -(Q_{t,uu}(\pmb{z}_{t},\pmb{u}_{t}))^{-1}Q_{t,uz}(\pmb{z}_{t},\pmb{u}_{t})$ . Furthermore, by substituting the optimal solution into the Taylor expansion of the $Q$ -function $Q_{t}$ , we get
+
+$$
+Q _ {t} (z _ {t}, u _ {t}) - Q _ {t} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) = \frac {1}{2} \left[ \begin{array}{c} 1 \\ \delta z _ {t} \end{array} \right] ^ {\top} \left[ \begin{array}{c c} Q _ {t, 1 1} ^ {*} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) & \left(Q _ {t, 2 1} ^ {*} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t})\right) ^ {\top} \\ Q _ {t, 2 1} ^ {*} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) & Q _ {t, 2 2} ^ {*} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) \end{array} \right] \left[ \begin{array}{c} 1 \\ \delta z _ {t} \end{array} \right],
+$$
+
+where the closed-loop first and second order approximations of the $Q$ -function are given by
+
+$$
+Q _ {t, 1 1} ^ {*} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) = F _ {\sigma} ^ {\top} F _ {\sigma} - Q _ {t, u} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) ^ {\top} Q _ {t, u u} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) ^ {- 1} Q _ {t, u} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right),
+$$
+
+$$
+Q _ {t, 2 1} ^ {*} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) = Q _ {t, z} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) ^ {\top} - k _ {t} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) ^ {\top} Q _ {t, u u} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) K _ {t} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}),
+$$
+
+$$
+Q _ {t, 2 2} ^ {*} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) = Q _ {t, z z} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) - K _ {t} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) ^ {\top} Q _ {t, u u} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) K _ {t} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right).
+$$
+
+Notice that at time step $t$ the optimal value function also has the following form:
+
+$$
+\begin{array}{l} V _ {t} (z _ {t}) = \min _ {\delta u _ {t}} Q _ {t} (z _ {t}, u _ {t}) = \delta Q _ {t} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}, \delta z _ {t}, \pi_ {\delta z, t} ^ {*} (\delta z _ {t})) + Q _ {t} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) \\ = \frac {1}{2} \left[ \begin{array}{l} 1 \\ \delta z _ {t} \end{array} \right] ^ {\top} \left[ \begin{array}{c c} Q _ {t, 1 1} ^ {*} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) & \left(Q _ {t, 2 1} ^ {*} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t})\right) ^ {\top} \\ Q _ {t, 2 1} ^ {*} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) & Q _ {t, 2 2} ^ {*} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) \end{array} \right] \left[ \begin{array}{l} 1 \\ \delta z _ {t} \end{array} \right] + Q _ {t} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}). \tag {30} \\ \end{array}
+$$
+
+Therefore, the first and second order derivatives of the value function are given by
+
+$$
+\mathbb {V} _ {t, z} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) = Q _ {t, 2 1} ^ {*} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right), \mathbb {V} _ {t, z z} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right) = Q _ {t, 2 2} ^ {*} \left(\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}\right),
+$$
+
+and the value improvement at the nominal state $\mathbf{z}_t$ at time step $t$ is given by
+
+$$
+V _ {t} (\boldsymbol {z} _ {t}) = \frac {1}{2} Q _ {t, 1 1} ^ {*} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}) + Q _ {t} (\boldsymbol {z} _ {t}, \boldsymbol {u} _ {t}).
+$$
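
As a concrete illustration, one step of this backward recursion can be sketched in numpy. The cost derivatives, the linearization matrices $A, B$, and the next-step value derivatives are assumed given; the small `reg` term is our addition (not part of the derivation above) to keep $Q_{t,uu}$ invertible:

```python
import numpy as np

def ilqr_backward_step(c_z, c_u, c_zz, c_uz, c_uu, A, B, V_z, V_zz, reg=1e-6):
    """One step of the iLQR backward recursion (Eqs. 28-29).

    Returns the feedforward/feedback gains (k_t, K_t) and the updated
    value-function derivatives (V_{t,z}, V_{t,zz})."""
    # First- and second-order Q-derivatives at the nominal pair.
    Q_z = c_z + A.T @ V_z
    Q_u = c_u + B.T @ V_z
    Q_zz = c_zz + A.T @ V_zz @ A
    Q_uz = c_uz + B.T @ V_zz @ A
    Q_uu = c_uu + B.T @ V_zz @ B + reg * np.eye(B.shape[1])
    Q_uu_inv = np.linalg.inv(Q_uu)
    k = -Q_uu_inv @ Q_u            # feedforward term
    K = -Q_uu_inv @ Q_uz           # feedback gain
    # Closed-loop value derivatives (Q*_{t,21}, Q*_{t,22}).
    V_z_new = Q_z - K.T @ Q_uu @ k
    V_zz_new = Q_zz - K.T @ Q_uu @ K
    return k, K, V_z_new, V_zz_new
```

Iterating this step from $t = T-1$ down to $t = 0$ yields the locally optimal perturbed policies of (29).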
+
+# B.3 INCORPORATING RECEDING-HORIZON TO ILQR
+
+While iLQR provides an effective way of computing a sequence of (locally) optimal actions, it has two limitations. First, unlike RL, in which an optimal Markov policy is computed, this algorithm only finds a sequence of open-loop optimal control actions for a given initial observation. Second, the iLQR algorithm requires knowledge of a nominal (latent state and action) trajectory at every iteration, which restricts its application to settings where real-time interaction with the environment is possible. In order to extend the iLQR paradigm to the closed-loop RL setting, we utilize the concept of model predictive control (MPC) (Rawlings & Mayne, 2009; Borrelli et al., 2017) and propose the following iLQR-MPC procedure. Initially, given an initial latent state $z_0$, we generate a single nominal trajectory $\{(\pmb{z}_t, \pmb{u}_t, \pmb{z}_{t+1})\}_{t=0}^{T-1}$, whose sequence of actions is randomly sampled and whose latent states are updated by forward propagation of the latent state dynamics (instead of interacting with the environment), i.e., $\pmb{z}_0 = z_0$, $\pmb{z}_{t+1} \sim F(\pmb{z}_t, \pmb{u}_t, w_t)$, $\forall t$. Then, at each time step $k \geq 0$, starting at the latent state $z_k$, we compute the optimal perturbed policy $\{\pi_{\delta z,t}^*(\cdot)\}_{t=k}^{T-1}$ using the iLQR algorithm with $T-k$ lookahead steps. Having access to the perturbed latent state $\delta z_k = z_k - \pmb{z}_k$, we deploy only the first action $u_k^* = \pi_{\delta z,k}^*(\delta z_k) + \pmb{u}_k$ in the environment and observe the next latent state $z_{k+1}$. Then, using the subsequent optimal perturbed policies $\{\pi_{\delta z,t}^*(\cdot)\}_{t=k+1}^{T-1}$, we generate both the estimated latent state sequence $\{\widehat{z}_t\}_{t=k+1}^T$, by forward propagation with initial state $\widehat{z}_{k+1} = z_{k+1}$, and the action sequence $\{u_t^*\}_{t=k+1}^{T-1}$, where $u_t^* = \pi_{\delta z,t}^*(\delta \widehat{z}_t) + \pmb{u}_t$ and $\delta \widehat{z}_t = \widehat{z}_t - \pmb{z}_t$. One then updates the subsequent nominal trajectory as $\{(\pmb{z}_t, \pmb{u}_t, \pmb{z}_{t+1})\}_{t=k+1}^{T-1} = \{(\widehat{z}_t, u_t^*, \widehat{z}_{t+1})\}_{t=k+1}^{T-1}$ and repeats the above procedure.
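
The receding-horizon loop above can be sketched as follows. Here `dynamics` (the learned latent transition) and `ilqr_plan` (an iLQR planner returning the perturbed policies for a given nominal trajectory) are hypothetical helpers standing in for the iLQR machinery above:

```python
import numpy as np

def ilqr_mpc(z0, dynamics, ilqr_plan, T, seed=0):
    """Receding-horizon iLQR-MPC sketch.

    dynamics(z, u)    -> next latent state under the learned model F.
    ilqr_plan(zs, us) -> list of perturbed policies pi_t, each mapping dz -> du,
                         for the given nominal trajectory (one per step)."""
    rng = np.random.default_rng(seed)
    # Initial nominal trajectory: random actions, model-propagated states.
    us = [rng.normal(size=1) for _ in range(T)]
    zs = [np.asarray(z0, dtype=float)]
    for u in us:
        zs.append(dynamics(zs[-1], u))

    z, executed = zs[0], []
    for k in range(T):
        pis = ilqr_plan(zs[k:], us[k:])      # iLQR with T - k lookahead steps
        u = pis[0](z - zs[k]) + us[k]        # deploy only the first action
        executed.append(u)
        z = dynamics(z, u)                    # observe the next latent state
        # Re-propagate the nominal tail under the remaining perturbed policies.
        z_hat = z                             # \hat z_{k+1} = z_{k+1}
        for t in range(k + 1, T):
            us[t] = pis[t - k](z_hat - zs[t]) + us[t]
            zs[t] = z_hat
            z_hat = dynamics(z_hat, us[t])
            zs[t + 1] = z_hat
    return executed
```

This is a structural sketch only; in the experiments the planner is the iLQR routine of the previous subsection and `dynamics` is the mean of the learned transition model.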
+
+Consider the finite-horizon MDP problem $\min_{\pi_t, \forall t} \mathbb{E}\left[c_T(x_T) + \sum_{t=0}^{T-1} c_t(x_t, u_t) \mid \pi_t, P, x_0\right]$ , where the optimization is over the class of Markovian policies. (Notice that this problem is the closed-loop version of (SOC1).) Using the above iLQR-MPC procedure, at each time step $t \in \{0, \dots, T-1\}$ one can construct a Markov policy of the form
+
+$$
+\pi_{t, \mathrm{iLQR\text{-}MPC}}(\cdot \mid x_{t}) := u_{t}^{\mathrm{iLQR}} \quad \text{s.t.} \quad \{u_{t}^{\mathrm{iLQR}}, \dots, u_{T-1}^{\mathrm{iLQR}}\} \leftarrow \mathrm{iLQR}(\ell_{\mathrm{Control}}(U; \widetilde{P}^{*}, z_{t})), \quad \text{with} \ z_{t} \sim E(\cdot \mid x_{t}),
+$$
+
+where iLQR $(\ell_{\mathrm{Control}}(U; \widetilde{P}^*, z))$ denotes the iLQR algorithm with initial latent state $z$ . To understand the performance of this policy w.r.t. the MDP problem, we refer to the sub-optimality bound of iLQR (w.r.t. open-loop control problem in (SOC1)) in Section 3.3, as well as that for MPC policy, whose details can be found in Borrelli et al. (2017).
+
+
+Figure 3: PCC graphical model. Left: generative links, right: recognition links.
+
+
+
+# C TECHNICAL PROOFS OF SECTION 4
+
+# C.1 DERIVATION OF $R_{3,\mathrm{NLE - BOUND}}^{\prime}(\widehat{P},Q)$ DECOMPOSITION
+
+We derive the bound for the conditional log-likelihood $\log \widehat{P} (x_{t + 1}|x_t,u_t)$ :
+
+$$
+\begin{array}{l} \log \widehat {P} (x _ {t + 1} | x _ {t}, u _ {t}) = \log \int_ {z _ {t}, \hat {z} _ {t + 1}} \widehat {P} (x _ {t + 1}, z _ {t}, \hat {z} _ {t + 1} | x _ {t}, u _ {t}) d z _ {t} d \hat {z} _ {t + 1} \\ = \log \mathbb {E} _ {Q (z _ {t}, \hat {z} _ {t + 1} | x _ {t}, x _ {t + 1}, u _ {t})} \left[ \frac {\widehat {P} (x _ {t + 1} , z _ {t} , \hat {z} _ {t + 1} | x _ {t} , u _ {t})}{Q (z _ {t} , \hat {z} _ {t + 1} | x _ {t} , x _ {t + 1} , u _ {t})} \right] \\ \stackrel {(a)} {\geq} \mathbb {E} _ {Q (z _ {t}, \hat {z} _ {t + 1} | x _ {t}, x _ {t + 1}, u _ {t})} \left[ \log \frac {\widehat {P} (x _ {t + 1} , z _ {t} , \hat {z} _ {t + 1} | x _ {t} , u _ {t})}{Q (z _ {t} , \hat {z} _ {t + 1} | x _ {t} , x _ {t + 1} , u _ {t})} \right] \\ \stackrel {{(b)}} {{=}} \mathbb {E} _ {\substack {Q (\hat{z}_{t + 1}|x_{t + 1})\\ Q(z_{t}|\hat{z}_{t + 1},x_{t},u_{t})}}\left[\log \frac{\widehat{P}(z_{t}|x_{t})\widehat{P}(\hat{z}_{t + 1}|z_{t},u_{t})\widehat{P}(x_{t + 1}|\hat{z}_{t + 1})}{Q(\hat{z}_{t + 1}|x_{t + 1})Q(z_{t}|\hat{z}_{t + 1},x_{t},u_{t})}\right] \\ \stackrel {(c)} {=} \mathbb {E} _ {Q (\hat {z} _ {t + 1} | x _ {t + 1})} \left[ \log \widehat {P} (x _ {t + 1} | \hat {z} _ {t + 1}) \right] \\ \left. - \mathbb {E} _ {Q (\hat {z} _ {t + 1} | x _ {t + 1})} \left[ D _ {\mathrm {K L}} \left(Q (z _ {t} | \hat {z} _ {t + 1}, x _ {t}, u _ {t}) | | \widehat {P} (z _ {t} | x _ {t})\right) \right] \right. \\ + H \left(Q \left(\hat {z} _ {t + 1} \mid x _ {t + 1}\right)\right) \\ + \mathbb {E} _ {\substack {Q (\hat {z} _ {t + 1} | x _ {t + 1}) \\ Q (z _ {t} | \hat {z} _ {t + 1}, x _ {t}, u _ {t})}} \left[ \log \widehat {P} \bigl (\hat {z} _ {t + 1} | z _ {t}, u _ {t} \bigr) \right] \\ = R _ {3, \mathrm {N L E - B o u n d}} ^ {\prime} (\widehat {P}, Q) \\ \end{array}
+$$
+
+where (a) holds by the concavity of the log function (Jensen's inequality), (b) holds by the factorization $Q(z_{t},\hat{z}_{t + 1}|x_{t},x_{t + 1},u_{t}) = Q(\hat{z}_{t + 1}|x_{t + 1})Q(z_{t}|\hat{z}_{t + 1},x_{t},u_{t})$ , and (c) holds by a simple decomposition into the different components.
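
Step (a) is simply Jensen's inequality applied to the concave log; a quick numerical sanity check with lognormal importance ratios:

```python
import numpy as np

# log E[f] >= E[log f] for any positive random variable f (Jensen).
rng = np.random.default_rng(0)
ratios = np.exp(rng.normal(size=100_000))   # positive importance ratios
log_of_mean = np.log(ratios.mean())         # the quantity being lower-bounded
mean_of_log = np.log(ratios).mean()         # the ELBO-style lower bound
assert log_of_mean >= mean_of_log
```

For standard lognormal ratios the gap is exactly $1/2$ in expectation, which the sample estimates reproduce closely.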
+
+# C.2 DERIVATION OF $R_{2,\mathrm{BOUND}}^{\prime \prime}(\widehat{P},Q)$ DECOMPOSITION
+
+We derive the bound for the consistency loss $\ell_{\mathrm{Consistency}}(\widehat{P})$ :
+
+$$
+\begin{array}{l} R _ {2} ^ {\prime \prime} (\widehat {P}) = D _ {\mathrm {K L}} \left(\widehat {P} (z _ {t + 1} | x _ {t + 1}) \| \int_ {z _ {t}} \widehat {P} (\widehat {z} _ {t + 1} | z _ {t}, u _ {t}) \widehat {P} (z _ {t} | x _ {t}) d z _ {t}\right) \\ \stackrel {(a)} {=} - H \left(Q \left(\hat {z} _ {t + 1} \mid x _ {t + 1}\right)\right) - \mathbb {E} _ {Q \left(\hat {z} _ {t + 1} \mid x _ {t + 1}\right)} \left[ \log \int_ {z _ {t}} \widehat {P} \left(\hat {z} _ {t + 1} \mid z _ {t}, u _ {t}\right) \widehat {P} \left(z _ {t} \mid x _ {t}\right) d z _ {t} \right] \\ = - H \left(Q (\hat {z} _ {t + 1} | x _ {t + 1})\right) - \mathbb {E} _ {Q (\hat {z} _ {t + 1} | x _ {t + 1})} \left[ \log \mathbb {E} _ {Q (z _ {t} | \hat {z} _ {t + 1}, x _ {t}, u _ {t})} \left[ \frac {\widehat {P} (\hat {z} _ {t + 1} | z _ {t} , u _ {t}) \widehat {P} (z _ {t} | x _ {t})}{Q (z _ {t} | \hat {z} _ {t + 1} , x _ {t} , u _ {t})} \right] \right] \\ \stackrel {(b)} {\leq} - H \left(Q (\hat {z} _ {t + 1} | x _ {t + 1})\right) - \mathbb {E} _ {\substack {Q (\hat {z} _ {t + 1} | x _ {t + 1}) \\ Q (z _ {t} | \hat {z} _ {t + 1}, x _ {t}, u _ {t})}} \left[ \log \frac {\widehat {P} (\hat {z} _ {t + 1} | z _ {t} , u _ {t}) \widehat {P} (z _ {t} | x _ {t})}{Q (z _ {t} | \hat {z} _ {t + 1} , x _ {t} , u _ {t})} \right] \\ \stackrel {(c)} {=} - \left(- \mathbb {E} _ {Q (\hat {z} _ {t + 1} | x _ {t + 1})} \left[ D _ {\mathrm {K L}} \left(Q (z _ {t} | \hat {z} _ {t + 1}, x _ {t}, u _ {t}) | | \widehat {P} (z _ {t} | x _ {t})\right) \right] \right. \\ \left. + H \left(Q (\hat {z} _ {t + 1} | x _ {t + 1})\right) + \mathbb {E} _ {\substack {Q (\hat {z} _ {t + 1} | x _ {t + 1}) \\ Q (z _ {t} | \hat {z} _ {t + 1}, x _ {t}, u _ {t})}} \left[ \log \widehat {P} (\hat {z} _ {t + 1} | z _ {t}, u _ {t}) \right]\right) \\ = R _ {2, \text {Bound}} ^ {\prime \prime} (\widehat {P}, Q) \\ \end{array}
+$$
+
+where (a) holds by the assumption that $Q(\hat{z}_{t + 1} \mid x_{t + 1}) = \widehat{P}(z_{t + 1} \mid x_{t + 1})$ , (b) holds by the concavity of the log function (Jensen's inequality), and (c) holds by a simple decomposition into the different components.
+
+# D EXPERIMENTAL DETAILS
+
+The following sections describe the data collection process, the domains, and the implementation details used in the experiments.
+
+# D.1 DATA COLLECTION PROCESS
+
+To generate our training and test sets, each consisting of triples $(x_{t},u_{t},x_{t + 1})$ , we: (1) sample an underlying state $s_t$ and generate its corresponding observation $x_{t}$ , (2) sample an action $u_{t}$ , and (3) obtain the next state $s_{t + 1}$ according to the state transition dynamics, add zero-mean Gaussian noise with variance $\sigma^2 I_{n_s}$ to it, and generate its corresponding observation $x_{t + 1}$ . To ensure that the observation-action data is uniformly distributed (see Section 3), we sample the state-action pair $(s_t,u_t)$ uniformly from the state-action space. To understand the robustness of each model, we consider both deterministic $(\sigma = 0)$ and stochastic scenarios. In the stochastic case, we add noise to the system with different values of $\sigma$ and evaluate the models' performance under various degrees of noise.
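
A minimal sketch of this collection procedure, with hypothetical `step` (true state transition) and `render` (state-to-observation) helpers standing in for the simulator:

```python
import numpy as np

def collect_triples(n, step, render, s_low, s_high, u_low, u_high, sigma, seed=0):
    """Generate n triples (x_t, u_t, x_{t+1}) as described above.

    States and actions are sampled uniformly over their (box) spaces; the
    next state is perturbed by zero-mean Gaussian noise with std sigma."""
    rng = np.random.default_rng(seed)
    data = []
    for _ in range(n):
        s = rng.uniform(s_low, s_high)        # uniform underlying state
        u = rng.uniform(u_low, u_high)        # uniform action
        s_next = step(s, u)
        s_next = s_next + sigma * rng.normal(size=np.shape(s_next))
        data.append((render(s), u, render(s_next)))
    return data
```

Setting `sigma=0` recovers the deterministic scenario.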
+
+# D.2 DESCRIPTION OF THE DOMAINS
+
+Planar System In this task the goal is to navigate an agent in an enclosed area on a 2D plane (Breivik & Fossen, 2005), from one corner to the opposite one, while avoiding the six obstacles in the area. The system is observed through a set of $40 \times 40$ pixel images taken from the top view, which specify the agent's location in the area. Actions are two-dimensional and specify the $x$ - $y$ direction of the agent's movement, and given these actions the next position of the agent is generated by a deterministic underlying (unobservable) state evolution function. Start State: one of three corners (excluding bottom-right). Goal State: bottom-right corner. Agent's Objective: agent is within a Euclidean distance of 2 from the goal state.
+
+Inverted Pendulum — SwingUp & Balance This is the classic problem of controlling an inverted pendulum (Furuta et al., 1991) from $48 \times 48$ pixel images. The goal of this task is to swing up an under-actuated pendulum from the downward resting position (pendulum hanging down) to the top position and to balance it. The underlying state $s_t$ of the system, which is unobservable, has two dimensions: angle and angular velocity. The control (action) is 1-dimensional: the torque applied to the joint of the pendulum. To preserve the Markovian property in the observation (image) space, similar to the setting in E2C and RCE, each observation $x_t$ contains two images generated from consecutive time-frames (the current and the previous time step). This is because each image only shows the position of the pendulum and contains no information about the velocity. Start State: Pole is resting down (SwingUp), or randomly sampled in $\pm \pi / 6$ (Balance). Agent's Objective: pole's angle is within $\pm \pi / 6$ of the upright position.
+
+CartPole This is the visual version of the classic task of controlling a cart-pole system (Geva & Sitte, 1993). The goal in this task is to balance a pole on a moving cart, while the cart avoids hitting the left and right boundaries. The control (action) is 1-dimensional, which is the force applied to the cart. The underlying state of the system $s_t$ is 4- dimensional, which indicates the angle and angular velocity of the pole, as well as the position and velocity of the cart. Similar to the inverted pendulum, in order to maintain the Markovian property the observation $x_{t}$ is a stack of two $80\times 80$ pixel images generated from consecutive time-frames. Start State: Pole is randomly sampled in $\pm \pi /6$ . Agent's Objective: pole's angle is within $\pm \pi /10$ from an upright position.
+
+3-link Manipulator — SwingUp & Balance The goal in this task is to move a 3-link manipulator from the initial position (the downward resting position) to a final position (the top position) and balance it. In the 1-link case, this experiment reduces to the inverted pendulum. In the 2-link case the setup is similar to that of the acrobot (Spong, 1995), except that torques are applied to all intermediate joints, and in the 3-link case the setup is similar to the 3-link planar robot arm domain used in the E2C paper, except that the robotic arms are modeled by simple rectangular rods (instead of real images of robot arms), and our task success criterion requires both swing-up (moving to the final position) and balancing.[12] The underlying (unobservable) state $s_t$ of the system is $2N$ -dimensional, indicating the relative angle and angular velocity at each link, and the actions are $N$ -dimensional, representing the torque applied to each joint of the arm. The
+
+state evolution is modeled by the standard Euler-Lagrange equations (Spong, 1995; Lai et al., 2015). Similar to the inverted pendulum and cartpole, in order to maintain the Markovian property, the observation state $x_{t}$ is a stack of two $80 \times 80$ pixel images of the $N$ -link manipulator generated from consecutive time-frames. In the experiments we will evaluate the models based on the case of $N = 2$ (2-link manipulator) and $N = 3$ (3-link manipulator). Start State: $1^{\mathrm{st}}$ pole with angle $\pi$ , $2^{\mathrm{nd}}$ pole with angle $2\pi / 3$ , and $3^{\mathrm{rd}}$ pole with angle $\pi / 3$ , where angle $\pi$ is a resting position. Agent's Objective: the sum of all poles' angles is within $\pm \pi / 6$ from an upright position.
+
+TORCS Simulator This task takes place in the TORCS simulator (Wymann et al., 2000), specifically on the straight section of the michegan f1 race track. The goal of this task is to control a car so that it remains in the middle of the lane. We restricted the task to steering actions only (left/right in the range $[-1,1]$ ) and applied a simple procedure to keep the velocity of the car at around 10. We pre-processed the observations given by the simulator ( $240 \times 320$ RGB images) to obtain $80 \times 80$ binary images (white pixels represent the road). In order to maintain the Markovian property, the observation state $x_{t}$ is a stack of two $80 \times 80$ images, taken 7 frames apart, an interval chosen so that consecutive observations differ noticeably. The task goes as follows: the car is forced to steer strongly left (action $= 1$ ) or strongly right (action $= -1$ ) for the initial 20 steps of the simulation (direction chosen randomly), which causes it to drift away from the center of the lane. Then, for the remaining horizon of the task, the car needs to recover from the drift, return to the middle of the lane, and stay there. Start State: 20 steps of drifting from the middle of the lane by steering strongly left or right (chosen randomly). Agent's Objective: agent (car) is within a Euclidean distance of 1 from the middle of the lane (the full width of the lane is about 18).
+
+# D.3 IMPLEMENTATION
+
+In the following we describe architectures and hyper-parameters that were used for training the different algorithms.
+
+# D.3.1 TRAINING HYPER-PARAMETERS AND REGULARIZERS
+
+All the algorithms were trained using:
+
+- Batch size of 128.
+- ADAM (Goodfellow et al., 2016) with $\alpha = 5\cdot 10^{-4}$ , $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , and $\epsilon = 10^{-8}$ .
+- $\mathrm{L}_2$ regularization with a coefficient of $10^{-3}$ .
+- Additional VAE (Kingma & Welling, 2013) loss term given by $\ell_t^{\mathrm{VAE}} = -\mathbb{E}_{q(z|x)}[\log p(x|z)] + D_{\mathrm{KL}}(q(z|x)\| p(z))$ , where $p(z) = \mathcal{N}(0,I)$ . The term was added with a very small coefficient of 0.01. We found this term to be important for stabilizing the training process, as there is no explicit term that governs the scale of the latent space.
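
For reference, the KL term of this auxiliary VAE loss has the standard closed form for a diagonal-Gaussian encoder against the $\mathcal{N}(0, I)$ prior; a numpy sketch (function names are ours, not from the paper's code):

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def vae_aux_loss(mu, logvar, recon_log_prob, coef=0.01):
    """Auxiliary VAE term, weighted by the small 0.01 coefficient above."""
    return coef * (-recon_log_prob + kl_to_standard_normal(mu, logvar))
```

The KL vanishes exactly when the encoder matches the prior, which is what anchors the scale of the latent space.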
+
+E2C training specifics:
+
+- $\lambda$ from the E2C loss term was tuned using a parameter sweep over $\{0.25, 0.5, 1\}$ and was chosen to be 0.25 across all domains, as it performed best on each domain independently.
+
+PCC training specifics:
+
+- $\lambda_{\mathrm{p}}$ was set to 1 across all domains.
+- $\lambda_{\mathrm{c}}$ was set to 7 across all domains, after being tuned with a parameter sweep over $\{1,3,7,10\}$ on the Planar system.
+- $\lambda_{\mathrm{cur}}$ was set to 1 across all domains without any tuning.
+- $\{\bar{z},\bar{u}\}$ , for the curvature loss, were generated from $\{z,u\}$ by adding Gaussian noise $\mathcal{N}(0,0.1^2)$ , where $\sigma = 0.1$ was set across all domains without any tuning.
+- Motivated by Hafner et al. (2018), we added a deterministic loss term in the form of cross entropy between the output of the generative path given the current observation and action (while taking the means of the encoder output and the dynamics model output) and the observation of the next state. This loss term was added with a coefficient of 0.3 across all domains after it was tuned using a parameter sweep over $\{0.1, 0.3, 0.5\}$ on the Planar system.
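
A minimal sketch of the curvature-loss perturbation step described above (the function name is ours):

```python
import numpy as np

def curvature_perturb(z, u, sigma=0.1, seed=None):
    """Sample (z_bar, u_bar) for the curvature loss by adding N(0, sigma^2)
    noise to the latent state-action pair (z, u); sigma = 0.1 as above."""
    rng = np.random.default_rng(seed)
    z_bar = z + sigma * rng.normal(size=np.shape(z))
    u_bar = u + sigma * rng.normal(size=np.shape(u))
    return z_bar, u_bar
```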
+
+# D.3.2 NETWORK ARCHITECTURES
+
+We next present the specific architecture choices for each domain. For a fair comparison, the numbers of layers and neurons of each component were shared across all algorithms. ReLU non-linearities were used between every two layers.
+
+Encoder: composed of a backbone (either an MLP or a CNN, depending on the domain) and an additional fully-connected layer that outputs mean and variance vectors that induce a diagonal Gaussian distribution.
+
+Decoder: composed of a backbone (either an MLP or a CNN, depending on the domain) and an additional fully-connected layer that outputs logits that induce a Bernoulli distribution.
+
+Dynamical model: the path that leads from $\{z_t, u_t\}$ to $\hat{z}_{t+1}$ . Composed of an MLP backbone and an additional fully-connected layer that outputs mean and variance vectors that induce a diagonal Gaussian distribution. We further added a skip connection from $z_t$ and summed it with the output of the mean vector. When using the amortized version, there are two additional outputs $A$ and $B$ .
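
A numpy sketch of the dynamics head's forward pass, with the MLP backbone reduced to explicit weight matrices (a simplification of the description above, not the paper's code); the skip connection adds $z_t$ to the predicted mean:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def dynamics_forward(z, u, params):
    """Forward pass of the dynamics model: MLP backbone on [z; u], then a
    linear head producing (mean, logvar) of a diagonal Gaussian, with a
    skip connection adding z to the mean. `params` is a list of (W, b)
    pairs; the last pair is the output head."""
    h = np.concatenate([z, u])
    for W, b in params[:-1]:
        h = relu(W @ h + b)
    W_out, b_out = params[-1]
    out = W_out @ h + b_out
    mean, logvar = np.split(out, 2)
    return mean + z, logvar        # skip connection on the mean
```

With zero weights the model predicts $\hat{z}_{t+1} = z_t$, i.e., the skip connection initializes the dynamics near the identity map.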
+
+Backwards dynamical model: the path that leads from $\{\hat{z}_{t + 1},u_t,x_t\}$ to $z_{t}$ . Each of the inputs goes through a fully-connected layer with $\{N_z,N_u,N_x\}$ neurons, respectively. The outputs are then concatenated and passed through another fully-connected layer with $N_{\mathrm{joint}}$ neurons, and finally through an additional fully-connected layer that outputs the mean and variance vectors that induce a diagonal Gaussian distribution.
+
+# Planar system
+
+- Input: ${40} \times {40}$ images. 5000 training samples of the form $\left( {{x}_{t},{u}_{t},{x}_{t + 1}}\right)$
+- Actions space: 2-dimensional
+- Latent space: 2-dimensional
+- Encoder: 3 Layers: 300 units - 300 units - 4 units (2 for mean and 2 for variance)
+- Decoder: 3 Layers: 300 units - 300 units - 1600 units (logits)
+- Dynamics: 3 Layers: 20 units - 20 units - 4 units
+- Backwards dynamics: $N_{z} = 5, N_{u} = 5, N_{x} = 100 - N_{\mathrm{joint}} = 100 - 4$ units
+- Number of control actions, i.e. the planning horizon: $T = 40$
+
+# Inverted Pendulum — Swing Up & Balance
+
+- Input: Two $48 \times 48$ images. 20000 training samples of the form $(x_{t}, u_{t}, x_{t+1})$
+- Actions space: 1-dimensional
+- Latent space: 3-dimensional
+- Encoder: 3 Layers: 500 units - 500 units - 6 units (3 for mean and 3 for variance)
+- Decoder: 3 Layers: 500 units - 500 units - 4608 units (logits)
+- Dynamics: 3 Layers: 30 units - 30 units - 6 units
+- Backwards dynamics: $N_{z} = 10, N_{u} = 10, N_{x} = 200 - N_{\mathrm{joint}} = 200 - 6$ units
+- Number of control actions, i.e. the planning horizon: $T = 400$
+
+# Cart-pole Balancing
+
+- Input: Two $80 \times 80$ images. 15000 training samples of the form $(x_{t}, u_{t}, x_{t+1})$
+- Actions space: 1-dimensional
+- Latent space: 8-dimensional
+- Encoder: 6 Layers: Convolutional layer: $32 \times 5 \times 5$ ; stride $(1,1)$ - Convolutional layer: $32 \times 5 \times 5$ ; stride $(2,2)$ - Convolutional layer: $32 \times 5 \times 5$ ; stride $(2,2)$ - Convolutional layer: $10 \times 5 \times 5$ ; stride $(2,2)$ - 200 units - 16 units (8 for mean and 8 for variance)
+- Decoder: 6 Layers: 200 units - 1000 units - 100 units - Convolutional layer: $32 \times 5 \times 5$ ; stride (1, 1) - Upsampling (2, 2) - convolutional layer: $32 \times 5 \times 5$ ; stride (1, 1) - Upsampling (2, 2) - Convolutional layer: $32 \times 5 \times 5$ ; stride (1, 1) - Upsampling (2, 2) - Convolutional layer: $2 \times 5 \times 5$ ; stride (1, 1)
+
+- Dynamics: 3 Layers: 40 units - 40 units - 16 units
+- Backwards dynamics: $N_{z} = 10, N_{u} = 10, N_{x} = 300 - N_{\text{joint}} = 300 - 16$ units
+- Number of control actions, i.e. the planning horizon: $T = 200$
+
+# 3-link Manipulator — Swing Up & Balance
+
+- Input: Two $80 \times 80$ images. 30000 training samples of the form $(x_{t}, u_{t}, x_{t+1})$
+- Actions space: 3-dimensional
+- Latent space: 8-dimensional
+
+- Encoder: 6 Layers: Convolutional layer: $32 \times 5 \times 5$ ; stride $(1,1)$ - Convolutional layer: $32 \times 5 \times 5$ ; stride $(2,2)$ - Convolutional layer: $32 \times 5 \times 5$ ; stride $(2,2)$ - Convolutional layer: $10 \times 5 \times 5$ ; stride $(2,2)$ - 500 units - 16 units (8 for mean and 8 for variance)
+- Decoder: 6 Layers: 500 units - 2560 units - 100 units - Convolutional layer: $32 \times 5 \times 5$ ; stride (1, 1) - Upsampling (2, 2) - convolutional layer: $32 \times 5 \times 5$ ; stride (1, 1) - Upsampling (2, 2) - Convolutional layer: $32 \times 5 \times 5$ ; stride (1, 1) - Upsampling (2, 2) - Convolutional layer: $2 \times 5 \times 5$ ; stride (1, 1)
+- Dynamics: 3 Layers: 40 units - 40 units - 16 units
+- Backwards dynamics: $N_z = 10, N_u = 10, N_x = 400 - N_{\mathrm{joint}} = 400 - 16$ units
+- Number of control actions, i.e. the planning horizon: $T = 400$
+
+# TORCS
+
+- Input: Two $80 \times 80$ images. 30000 training samples of the form $(x_{t}, u_{t}, x_{t+1})$
+- Actions space: 1-dimensional
+- Latent space: 8-dimensional
+- Encoder: 6 Layers: Convolutional layer: $32 \times 5 \times 5$ ; stride $(1,1)$ - Convolutional layer: $32 \times 5 \times 5$ ; stride $(2,2)$ - Convolutional layer: $32 \times 5 \times 5$ ; stride $(2,2)$ - Convolutional layer: $10 \times 5 \times 5$ ; stride $(2,2)$ - 200 units - 16 units (8 for mean and 8 for variance)
+- Decoder: 6 Layers: 200 units - 1000 units - 100 units - Convolutional layer: $32 \times 5 \times 5$ ; stride $(1,1)$ - Upsampling $(2,2)$ - convolutional layer: $32 \times 5 \times 5$ ; stride $(1,1)$ - Upsampling $(2,2)$ - Convolutional layer: $32 \times 5 \times 5$ ; stride $(1,1)$ - Upsampling $(2,2)$ - Convolutional layer: $2 \times 5 \times 5$ ; stride $(1,1)$
+- Dynamics: 3 Layers: 40 units - 40 units - 16 units
+- Backwards dynamics: $N_z = 10, N_u = 10, N_x = 300 - N_{\mathrm{joint}} = 300 - 16$ units
+- Number of control actions, i.e. the planning horizon: $T = 200$
+
+# E ADDITIONAL RESULTS
+
+# E.1 PERFORMANCE ON NOISY DYNAMICS
+
+Table 3 shows results for the noisy cases.
+
+Table 3: Percentage of steps in the goal state, averaged over all models (left) and over the best model (right). The subscript of the domain name is the variance of the added noise.
+
+| Domain | RCE (all) | E2C (all) | PCC (all) | RCE (top 1) | E2C (top 1) | PCC (top 1) |
+| --- | --- | --- | --- | --- | --- | --- |
+| Planar1 | 1.2 ± 0.6 | 0.6 ± 0.3 | 17.9 ± 3.1 | 5.5 ± 1.2 | 6.1 ± 0.9 | 44.7 ± 3.6 |
+| Planar2 | 0.4 ± 0.2 | 1.5 ± 0.9 | 14.5 ± 2.3 | 1.7 ± 0.5 | 15.5 ± 2.6 | 29.7 ± 2.9 |
+| Pendulum1 | 6.4 ± 0.3 | 23.8 ± 1.2 | 16.4 ± 0.8 | 8.1 ± 0.4 | 36.1 ± 0.3 | 29.5 ± 0.2 |
+| Cartpole1 | 8.1 ± 0.6 | 6.6 ± 0.4 | 9.8 ± 0.7 | 20.3 ± 11 | 16.5 ± 0.4 | 17.9 ± 0.8 |
+| 3-link1 | 0.3 ± 0.1 | 0 ± 0 | 0.5 ± 0.1 | 1.3 ± 0.2 | 0 ± 0 | 1.8 ± 0.3 |
+
+# E.2 LATENT SPACE REPRESENTATION FOR THE PLANAR SYSTEM
+
+The following figures depict 5 instances (randomly chosen from the 10 trained models) of the learned latent space representations of both the noiseless and the noisy planar system, for the PCC, RCE, and E2C models.
+
+
+Figure 4: Noiseless Planar latent space representations using PCC.
+
+
+Figure 5: Noisy $(\sigma^2 = 1)$ Planar latent space representations using PCC.
+
+
+Figure 6: Noiseless Planar latent space representations using RCE.
+
+
+Figure 7: Noisy $(\sigma^2 = 1)$ Planar latent space representations using RCE.
+
+
+Figure 8: Noiseless Planar latent space representations using E2C.
+
+
+Figure 9: Noisy $(\sigma^2 = 1)$ Planar latent space representations using E2C.
\ No newline at end of file
diff --git a/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/images.zip b/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..fddc2794d472dbf5ef296241f7b6448475cdb627
--- /dev/null
+++ b/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:62cdc2141ff5dcc72c6af9d4c461bf75f1a9e2948b794624cca2b6966a9a4019
+size 1828740
diff --git a/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/layout.json b/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..4dc09a9aff6faca4714be2d1e014469cab72713c
--- /dev/null
+++ b/predictionconsistencycurvaturerepresentationlearningforlocallylinearcontrol/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64ae5c259fb57f537c34e97f831d896304e4f0f71da07ae947f2ec3e02007316
+size 1207313
diff --git a/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/3844f751-4f24-4ff4-a6ef-6469836ee14d_content_list.json b/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/3844f751-4f24-4ff4-a6ef-6469836ee14d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..35905f33086baa32dd7dd900f07fa50f52c69b1e
--- /dev/null
+++ b/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/3844f751-4f24-4ff4-a6ef-6469836ee14d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0723c39940d082ef673b3fde156f8065dea3098bc29306400c1965beeffbba9
+size 101693
diff --git a/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/3844f751-4f24-4ff4-a6ef-6469836ee14d_model.json b/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/3844f751-4f24-4ff4-a6ef-6469836ee14d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8d0ad67417a70cbd89dead18f54a1b12151fdefb
--- /dev/null
+++ b/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/3844f751-4f24-4ff4-a6ef-6469836ee14d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:62a88208946d9faad046b25592d73390884c09167197a508863bf69a297c2db5
+size 118466
diff --git a/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/3844f751-4f24-4ff4-a6ef-6469836ee14d_origin.pdf b/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/3844f751-4f24-4ff4-a6ef-6469836ee14d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..67704d395b1714b1cdde23a99e77379f9e088b9f
--- /dev/null
+++ b/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/3844f751-4f24-4ff4-a6ef-6469836ee14d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c35bc39c8ac5109afb4f2eded1c578205b37b2f93aae318473fdd7dcb1a7bef2
+size 5213896
diff --git a/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/full.md b/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c544f5304d9d6e4dd83285eae112ec9a820069fa
--- /dev/null
+++ b/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/full.md
@@ -0,0 +1,397 @@
+# PREDICTION POISONING: TOWARDS DEFENSES AGAINST DNN MODEL STEALING ATTACKS
+
+Tribhuvanesh Orekondy1, Bernt Schiele1, Mario Fritz2
+
+1 Max Planck Institute for Informatics
+$^{2}$ CISPA Helmholtz Center for Information Security
+
+Saarland Informatics Campus, Germany
+
+{orekondy, schiele}@mpi-inf.mpg.de, fritz@cispa.saarland
+
+# ABSTRACT
+
+High-performance Deep Neural Networks (DNNs) are increasingly deployed in many real-world applications e.g., cloud prediction APIs. Recent advances in model functionality stealing attacks via black-box access (i.e., inputs in, predictions out) threaten the business model of such applications, which require a lot of time, money, and effort to develop. Existing defenses take a passive role against stealing attacks, such as by truncating predicted information. We find such passive defenses ineffective against DNN stealing attacks. In this paper, we propose the first defense which actively perturbs predictions targeted at poisoning the training objective of the attacker. We find our defense effective across a wide range of challenging datasets and DNN model stealing attacks, and additionally outperforms existing defenses. Our defense is the first that can withstand highly accurate model stealing attacks for tens of thousands of queries, amplifying the attacker's error rate up to a factor of $85 \times$ with minimal impact on the utility for benign users.
+
+# 1 INTRODUCTION
+
+The effectiveness of state-of-the-art DNN models at a variety of predictive tasks has encouraged their usage in real-world applications e.g., home assistants, autonomous vehicles, commercial cloud APIs. Models in such applications are valuable intellectual property of their creators, as developing them for commercial use is a product of intense labour and monetary effort. Hence, it is vital to preemptively identify and control threats from an adversarial lens focused on such models. In this work we address model stealing, which involves an adversary attempting to counterfeit the functionality of a target victim ML model by exploiting black-box access (query inputs in, posterior predictions out).
+
+Stealing attacks date back to Lowd & Meek (2005), who addressed reverse-engineering linear spam classification models. Recent literature predominantly focuses on DNNs (specifically CNN image classifiers), where attacks are shown to be highly effective (Tramèr et al., 2016) on complex models (Orekondy et al., 2019), even without knowledge of the victim's architecture (Papernot et al., 2017b) or the training data distribution. The attacks have also been shown to be highly effective at replicating pay-per-query image prediction APIs, for as little as $30 (Orekondy et al., 2019).
+
+Defending against stealing attacks, however, has received little attention. Existing defense strategies aim to either detect stealing query patterns (Juuti et al., 2019) or degrade the quality of the predicted posteriors via perturbation. Since detection makes strong assumptions on the attacker's query distribution (e.g., small $L_{2}$ distances between successive queries), our focus is on the more popular perturbation-based defenses. A common theme among such defenses is accuracy-preserving posterior perturbation: the posterior distribution is manipulated while retaining the top-1 label. For instance, rounding decimals (Tramèr et al., 2016), revealing only high-confidence predictions (Orekondy et al., 2019), and introducing ambiguity at the tail end of the posterior distribution (Lee et al., 2018). Such strategies benefit from preserving the accuracy metric of the defender. However, in line with previous works (Tramèr et al., 2016; Orekondy et al., 2019; Lee et al., 2018), we find models can be effectively stolen using just the top-1 predicted label returned by the black-box. Specifically, in many cases we observe $< 1\%$ difference between attacks that use the full range of
+
+posteriors (blue line in Fig. 1) to train stolen models and the top-1 label (orange line) alone. In this paper, we work towards effective defenses (red line in Fig. 1) against DNN stealing attacks with minimal impact to defender's accuracy.
+
+The main insight to our approach is that unlike a benign user, a model stealing attacker additionally uses the predictions to train a replica model. By introducing controlled perturbations to predictions, our approach targets poisoning the training objective (see Fig. 2). Our approach allows for a utility-preserving defense, as well as trading-off a marginal utility cost to significantly degrade attacker's performance. As a practical benefit, the defense involves a single hyperparameter (perturbation utility budget) and can be used with minimal overhead to any classification model without retraining or modifications.
+
+We rigorously evaluate our approach by defending six victim models against four recent and effective DNN stealing attack strategies (Papernot et al., 2017b; Juuti et al., 2019; Orekondy et al., 2019). Our defense consistently mitigates all stealing attacks and further shows improvements over multiple baselines. In particular, we find our defense degrades the attacker's query sample efficiency by 1-2 orders of magnitude. Our approach significantly reduces the attacker's performance (e.g., $30 - 53\%$ reduction on MNIST and $13 - 28\%$ on CUB200) at a marginal cost $(1 - 2\%)$ to the defender's test accuracy. Furthermore, our approach can achieve the same level of mitigation as baseline defenses, but by introducing significantly lesser perturbation.
+
+Contributions. (i) We propose the first utility-constrained defense against DNN model stealing attacks; (ii) We present the first active defense which poisons the attacker's training objective by introducing bounded perturbations; and (iii) Through extensive experiments, we find our approach consistently mitigates various attacks and additionally outperforms baselines.
+
+
+Figure 1: We find existing defenses (orange line) ineffective against recent attacks. Our defense (red line) in contrast significantly mitigates the attacks.
+
+Figure 2: We perturb posterior predictions $\tilde{\pmb{y}} = \pmb{y} + \pmb{\delta}$, with the objective of poisoning the adversary's gradient signal on its loss landscape:
+
+$$
+\underset {\tilde{\pmb{y}}}{\mathrm{argmax}}\ \theta \quad \mathrm{s.t.}\quad \mathrm{dist}(\pmb {y},\tilde{\pmb{y}})\leq \epsilon
+$$
+
+where $\theta$ denotes the angular deviation between the original and poisoned gradient directions (Section 4).
+
+# 2 RELATED LITERATURE
+
+Model stealing attacks (also referred to as 'extraction' or 'reverse-engineering') in literature aim to infer hyperparameters (Oh et al., 2018; Wang & Gong, 2018), recover exact parameters (Lowd & Meek, 2005; Tramèr et al., 2016; Milli et al., 2018), or extract the functionality (Correia-Silva et al., 2018; Orekondy et al., 2019) of a target black-box ML model. In some cases, the extracted model information is optionally used to perform evasion attacks (Lowd & Meek, 2005; Nelson et al., 2010; Papernot et al., 2017b). The focus of our work is model functionality stealing, where the attacker's yardstick is test-set accuracy of the stolen model. Initial works on stealing simple linear models (Lowd & Meek, 2005) have been recently succeeded by attacks shown to be effective on complex CNNs (Papernot et al., 2017b; Correia-Silva et al., 2018; Orekondy et al., 2019) (see Appendix B for an exhaustive list). In this work, we work towards defenses targeting the latter line of DNN model stealing attacks.
+
+Since ML models are often deployed in untrusted environments, a long line of work exists on guaranteeing certain (often orthogonal) properties to safeguard against malicious users. The properties include security (e.g., robustness towards adversarial evasion attacks (Biggio et al., 2013; Goodfellow et al., 2014; Madry et al., 2018)) and integrity (e.g., running in untrusted environments (Tramer & Boneh, 2019)). To prevent leakage of private attributes (e.g., identities) specific to training data in the resulting ML model, differential privacy (DP) methods (Dwork et al., 2014) introduce randomization during training (Abadi et al., 2016; Papernot et al., 2017a). In contrast, our defense objective is to provide confidentiality and protect the functionality (intellectual property) of the ML model against illicit duplication.
+
+Model stealing defenses are limited. Existing works (primarily in multiclass classification settings) aim to either detect stealing attacks (Juuti et al., 2019; Kesarwani et al., 2018; Nelson et al., 2009; Zheng et al., 2019) or perturb the posterior prediction. We focus on the latter since detection involves making strong assumptions on adversarial query patterns. Perturbation-based defenses are predominantly non-randomized and accuracy-preserving (i.e., the top-1 label is unchanged). Approaches include revealing probabilities only of confident classes (Orekondy et al., 2019), rounding probabilities (Tramèr et al., 2016), or introducing ambiguity in posteriors (Lee et al., 2018). None of the existing defenses claim to mitigate model stealing; rather, they only marginally delay the attack by increasing the number of queries. Our work focuses on presenting an effective defense, significantly decreasing the attacker's query sample efficiency within a principled utility-constrained framework.
+
+# 3 PRELIMINARIES
+
+Model Functionality Stealing. Model stealing attacks are cast as an interaction between two parties: a victim/defender $V$ ('teacher' model) and an attacker $A$ ('student' model). The only means of communication between the parties are black-box queries: the attacker queries inputs $\boldsymbol{x} \in \mathcal{X}$ and the defender returns a posterior probability distribution $\boldsymbol{y} = P(\boldsymbol{y}|\boldsymbol{x}) = F_{V}(\boldsymbol{x})$, where $\boldsymbol{y} \in \Delta^{K} = \{\boldsymbol{y}: \boldsymbol{y} \succeq 0, \mathbf{1}^{T}\boldsymbol{y} = 1\}$, the probability simplex over $K$ classes (we use $K$ instead of $K - 1$ for notational convenience). The attack occurs in two (sometimes overlapping) phases: (i) querying: the attacker uses the black-box as an oracle labeler on a set of inputs to construct a 'transfer set' of input-prediction pairs $\mathcal{D}^{\text{transfer}} = \{(\boldsymbol{x}_{i}, \boldsymbol{y}_{i})\}_{i=1}^{B}$; and (ii) training: the attacker trains a model $F_{A}$ to minimize the empirical risk on $\mathcal{D}^{\text{transfer}}$. The end-goal of the attacker is to maximize accuracy on a held-out test-set (considered the same as that of the victim for evaluation purposes).
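The two-phase protocol above can be sketched as follows; `victim`, `query_inputs`, and `train_fn` are hypothetical placeholders standing in for whatever black-box, query distribution, and student trainer an attacker actually uses, not the attack implementations evaluated in this paper:

```python
import numpy as np

def steal(victim, query_inputs, train_fn):
    """Two-phase functionality stealing (sketch): `victim` maps an input x
    to a posterior y on the K-simplex; `train_fn` fits a student model on
    the resulting transfer set of (x, y) pairs."""
    # (i) querying: use the black-box as an oracle labeler
    transfer_set = [(x, victim(x)) for x in query_inputs]
    # (ii) training: minimize empirical risk on the transfer set
    return train_fn(transfer_set)
```

The perturbation defenses discussed later act inside `victim`, i.e., on the returned posteriors.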
+
+Knowledge-limited Attacker. In model stealing, attackers justifiably lack complete knowledge of the victim model $F_V$. Of specific interest are the model architecture and the input data distribution $P_V(X)$ used to train the victim model, neither of which is known to the attacker. Since prior work (Hinton et al., 2015; Papernot et al., 2016; Orekondy et al., 2019) indicates functionality largely transfers across architecture choices, we now focus on the query data used by the attacker. Existing attacks can be broadly categorized based on inputs $\{x \sim P_A(X)\}$ used to query the black-box: (a) independent distribution: (Tramèr et al., 2016; Correia-Silva et al., 2018; Orekondy et al., 2019) sample inputs from some distribution (e.g., ImageNet for images, uniform noise) independent of the input data used to train the victim model; and (b) synthetic set: (Papernot et al., 2017b; Juuti et al., 2019) augment a limited set of seed data by adaptively querying perturbations (e.g., using FGSM) of existing inputs. We address both attack categories in our paper.
+
+Defense Objectives. We perturb predictions in a controlled setting: $\tilde{\pmb{y}} = F_V^\delta (\pmb {x}) = \pmb {y} + \pmb{\delta}$ s.t. $\tilde{\pmb{y}},\pmb {y}\in \Delta^{K}$. The defender has two (seemingly conflicting) objectives: (i) utility: perturbed predictions should remain useful to a benign user. We consider two utility measures: (a) Acc $(F_V^{\delta},\mathcal{D}^{\mathrm{test}})$: accuracy of the defended model on test examples; and (b) dist $(\pmb {y},\tilde{\pmb{y}}) = ||\pmb {y} - \tilde{\pmb{y}} ||_p$: the magnitude of the introduced perturbation. (ii) non-replicability: reduce the test accuracy of an attacker (denoted as Acc $(F_{A},\mathcal{D}^{\mathrm{test}})$) who exploits the predictions to train a replica $F_{A}$ on $\mathcal{D}^{\mathrm{transfer}}$. For consistency, we evaluate both the defender's and the attacker's stolen model accuracies on the same set of test examples $\mathcal{D}^{\mathrm{test}}$.
+
+Defender's Assumptions. We closely mimic an assumption-free scenario similar to existing perturbation-based defenses. The scenario entails the knowledge-limited defender: (a) unaware whether a query is malicious or benign; (b) lacking prior knowledge of the strategy used by an attacker; and (c) perturbing each prediction independently (hence circumventing Sybil attacks). For added rigor, we also study attacker's countermeasures to our defense in Section 5.
+
+# 4 APPROACH: MAXIMIZING ANGULAR DEVIATION BETWEEN GRADIENTS
+
+Motivation: Targeting First-order Approximations. We identify that the attacker eventually optimizes parameters of a stolen model $F(\cdot; \boldsymbol{w})$ (we drop the subscript $\cdot_A$ for readability) to minimize the loss on training examples $\{(\boldsymbol{x}_i, \tilde{\boldsymbol{y}}_i)\}$ . Common to a majority of optimization algorithms is estimating the first-order approximation of the empirical loss, by computing the gradient of the loss
+
+w.r.t. the model parameters $\pmb{w} \in \mathbb{R}^{D}$ :
+
+$$
+\boldsymbol {u} = - \nabla_ {\boldsymbol {w}} L (F (\boldsymbol {x}; \boldsymbol {w}), \boldsymbol {y}) \tag {1}
+$$
+
+Maximizing Angular Deviation (MAD). The core idea of our approach is to perturb the posterior probabilities $y$ which results in an adversarial gradient signal that maximally deviates (see Fig. 2) from the original gradient (Eq. 1). More formally, we add targeted noise to the posteriors which results in a gradient direction:
+
+$$
+\boldsymbol {a} = - \nabla_ {\boldsymbol {w}} L (F (\boldsymbol {x}; \boldsymbol {w}), \tilde {\boldsymbol {y}}) \tag {2}
+$$
+
+to maximize the angular deviation between the original and the poisoned gradient signals:
+
+$$
+\max _ {\boldsymbol {a}} 2 (1 - \cos \angle (\boldsymbol {a}, \boldsymbol {u})) = \max _ {\hat {\boldsymbol {a}}} \| \hat {\boldsymbol {a}} - \hat {\boldsymbol {u}} \| _ {2} ^ {2} \quad \left(\hat {\boldsymbol {a}} = \boldsymbol {a} / \| \boldsymbol {a} \| _ {2}, \hat {\boldsymbol {u}} = \boldsymbol {u} / \| \boldsymbol {u} \| _ {2}\right) \tag {3}
+$$
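The equivalence in Eq. (3) follows from $\|\hat{\boldsymbol{a}} - \hat{\boldsymbol{u}}\|_2^2 = \|\hat{\boldsymbol{a}}\|_2^2 + \|\hat{\boldsymbol{u}}\|_2^2 - 2\,\hat{\boldsymbol{a}}^T\hat{\boldsymbol{u}} = 2 - 2\cos\angle(\boldsymbol{a}, \boldsymbol{u})$ for unit vectors. A quick numerical check on arbitrary vectors (not tied to any model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two arbitrary gradient directions: u (true) and a (poisoned), D = 10 params.
u = rng.normal(size=10)
a = rng.normal(size=10)

u_hat = u / np.linalg.norm(u)
a_hat = a / np.linalg.norm(a)

# Left-hand side: 2 * (1 - cos angle(a, u))
lhs = 2.0 * (1.0 - a_hat @ u_hat)
# Right-hand side: squared Euclidean distance between the unit vectors
rhs = np.sum((a_hat - u_hat) ** 2)

assert np.isclose(lhs, rhs)
```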
+
+Given that the attacker model is trained to match the posterior predictions, e.g., by minimizing the cross-entropy loss $L(\pmb{y}, \tilde{\pmb{y}}) = -\sum_{k} \tilde{y}_k \log y_k$, we rewrite Equation (2) as:
+
+$$
+\boldsymbol {a} = - \nabla_ {\boldsymbol {w}} L (F (\boldsymbol {x}; \boldsymbol {w}), \tilde {\boldsymbol {y}}) = \nabla_ {\boldsymbol {w}} \sum_ {k} \tilde {y} _ {k} \log F (\boldsymbol {x}; \boldsymbol {w}) _ {k} = \sum_ {k} \tilde {y} _ {k} \nabla_ {\boldsymbol {w}} \log F (\boldsymbol {x}; \boldsymbol {w}) _ {k} = \boldsymbol {G} ^ {T} \tilde {\boldsymbol {y}}
+$$
+
+where $G \in \mathbb{R}^{K \times D}$ represents the Jacobian over log-likelihood predictions $F(\boldsymbol{x}; \boldsymbol{w})$ over $K$ classes w.r.t. parameters $\boldsymbol{w} \in \mathbb{R}^D$ . By similarly rewriting Equation (1), substituting them in Equation (3) and including the constraints, we arrive at our poisoning objective (Eq. 4-7) of our approach which we refer to as MAD. We can optionally enforce preserving accuracy of poisoned prediction via constraint (8), which will be discussed shortly.
+
+$$
+\max _ {\tilde {\boldsymbol {y}}} \left\| \frac {\boldsymbol {G} ^ {T} \tilde {\boldsymbol {y}}}{\| \boldsymbol {G} ^ {T} \tilde {\boldsymbol {y}} \| _ {2}} - \frac {\boldsymbol {G} ^ {T} \boldsymbol {y}}{\| \boldsymbol {G} ^ {T} \boldsymbol {y} \| _ {2}} \right\| _ {2} ^ {2} \quad \left(= H (\tilde {\boldsymbol {y}})\right) \tag {4}
+$$
+
+$$
+\text {where} \quad \boldsymbol {G} = \nabla_ {\boldsymbol {w}} \log F (\boldsymbol {x}; \boldsymbol {w}) \quad \left(\boldsymbol {G} \in \mathbb {R} ^ {K \times D}\right) \tag {5}
+$$
+
+$$
+\text {s.t.} \quad \tilde {\boldsymbol {y}} \in \Delta^ {K} \tag {6}
+$$
+
+$$
+\operatorname {dist} (\boldsymbol {y}, \tilde {\boldsymbol {y}}) \leq \epsilon \quad \text {(Utility constraint)} \tag {7}
+$$
+
+$$
+\underset {k} {\arg \max } \, \tilde {y} _ {k} = \underset {k} {\arg \max } \, y _ {k} \quad \text {(For variant MAD-argmax)} \tag {8}
+$$
+
+The above presents a black-box optimization problem for the defense, since the defender justifiably lacks access to the attacker model $F$ (Eq. 5). Apart from addressing this challenge in the next few paragraphs, we also discuss (a) solving a non-standard, non-convex constrained maximization objective; and (b) preserving the accuracy of predictions via constraint (8).
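As a sanity check on the identity $\boldsymbol{a} = \boldsymbol{G}^T\tilde{\boldsymbol{y}}$ used in the derivation above, one can compare it against the analytic cross-entropy gradient of a linear softmax model — a stand-in for the attacker's $F$, chosen only because its Jacobian is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 4, 3                          # classes, input dimension
W = rng.normal(size=(K, d))          # model parameters w (flattened below)
x = rng.normal(size=d)
y_tilde = rng.dirichlet(np.ones(K))  # a perturbed posterior in the simplex

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

p = softmax(W @ x)

# Jacobian G (K x D) of log-likelihoods w.r.t. parameters: for a linear
# softmax model, row k is vec( (e_k - p) x^T ).
G = np.stack([np.outer(np.eye(K)[k] - p, x).ravel() for k in range(K)])

# Poisoned gradient a = -grad_w L(F(x; w), y_tilde) for cross-entropy,
# computed analytically: vec( (y_tilde - p) x^T ).
a_analytic = np.outer(y_tilde - p, x).ravel()

# The identity a = G^T y_tilde from the derivation:
a_identity = G.T @ y_tilde

assert np.allclose(a_analytic, a_identity)
```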
+
+Estimating $G$ . Since we lack access to the adversary's model $F$ , we estimate the Jacobian $G = \nabla_{\boldsymbol{w}} \log F_{\mathrm{sur}}(\boldsymbol{x}; \boldsymbol{w})$ (Eq. 5) per input query $\boldsymbol{x}$ using a surrogate model $F_{\mathrm{sur}}$ . We empirically determined (details in Appendix E.1) that the choice of architecture for $F_{\mathrm{sur}}$ is robust to the choice of the adversary's architecture $F$ . However, the initialization of $F_{\mathrm{sur}}$ plays a crucial role, with best results obtained with a fixed randomly initialized model. We conjecture this occurs because surrogate models with a high loss provide better gradient signals to guide the defender.
+
+Heuristic Solver. Gradient-based strategies to optimize objective (Eq. 4) often lead to poor local maxima. This is in part due to the objective increasing in all directions around point $\mathbf{y}$ (assuming $G$ is full-rank), making optimization sensitive to initialization. Consequently, we resort to a heuristic to solve for $\tilde{\mathbf{y}}$ . Our approach is motivated by Hoffman (1981), who shows that the maximum of a convex function over a compact convex set occurs at an extreme point of the set. Hence, our two-step solver: (i) searches for a maximizer $\mathbf{y}^*$ of (4) by iterating over the $K$ extremes $\mathbf{y}_k$ (where $y_k = 1$ ) of the probability simplex $\Delta^K$ ; and (ii) then computes a perturbed posterior $\tilde{\mathbf{y}}$ as a linear interpolation of the original posteriors $\mathbf{y}$ and the maximizer $\mathbf{y}^*$ : $\tilde{\mathbf{y}} = (1 - \alpha)\mathbf{y} + \alpha\mathbf{y}^*$ , where $\alpha$ is selected such that the utility constraint (Eq. 7) is satisfied. We further elaborate on the solver and present pseudocode in Appendix C.
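A minimal numpy sketch of the two-step solver, assuming $G$ has already been estimated (e.g., from a surrogate model) and using an $L_1$ utility constraint; the argmax-preserving candidate set is a simplification of the full extreme-point construction referred to in Appendix C:

```python
import numpy as np

def mad_perturb(y, G, eps, preserve_argmax=False):
    """Two-step MAD heuristic (sketch): pick the extreme point y* that
    maximizes the deviation objective H, then interpolate back towards y
    until the utility constraint ||y - y~||_1 <= eps holds."""
    K = len(y)
    u = G.T @ y                              # original gradient direction
    u_hat = u / (np.linalg.norm(u) + 1e-12)

    def H(y_t):                              # objective of Eq. (4)
        a = G.T @ y_t
        a_hat = a / (np.linalg.norm(a) + 1e-12)
        return np.sum((a_hat - u_hat) ** 2)

    eye = np.eye(K)
    if preserve_argmax:                      # MAD-argmax: candidates keeping top-1
        k = int(np.argmax(y))
        extremes = [eye[k]] + [(eye[k] + eye[j]) / 2 for j in range(K) if j != k]
    else:                                    # plain MAD: the K simplex vertices
        extremes = list(eye)

    # (i) search for the maximizer y* over the candidate extremes
    y_star = max(extremes, key=H)

    # (ii) largest alpha in [0, 1] satisfying the utility constraint (Eq. 7)
    alpha = min(1.0, eps / (np.abs(y - y_star).sum() + 1e-12))
    return (1.0 - alpha) * y + alpha * y_star
```

The interpolation keeps $\tilde{\mathbf{y}}$ on the simplex by construction, since both $\mathbf{y}$ and $\mathbf{y}^*$ lie on it.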
+
+Variant: MAD-argmax. Within our defense formulation, we encode an additional constraint (Eq. 8) to preserve the accuracy of perturbed predictions. MAD-argmax variant helps us perform accuracy-preserving perturbations similar to prior work. But in contrast, the perturbations are constrained (Eq. 7) and are specifically introduced to maximize the MAD objective. We enforce the accuracy-preserving constraint in our solver by iterating over extremes of intersection of sets Eq.(6) and (8): $\Delta_k^K = \{\pmb{y} \succeq 0, \pmb{1}^T\pmb{y} = 1, y_k \geq y_j, k \neq j\} \subseteq \Delta^K$ .
+
+# 5 EXPERIMENTAL RESULTS
+
+# 5.1 EXPERIMENTAL SETUP
+
+Victim Models and Datasets. We set up six victim models (see column $F_{V}$ in Table 1), each model trained on a popular image classification dataset. All models are trained using SGD $(\mathrm{LR} = 0.1)$ with momentum (0.5) for 30 (LeNet) or 100 epochs (VGG16), with a LR decay of 0.1 performed every 50 epochs. We train and evaluate each victim model on their respective train and test sets.
+
+Attack Strategies. We hope to broadly address all DNN model stealing strategies during our defense evaluation. To achieve this, we consider attacks that vary in query data distributions (independent and synthetic; see Section 3) and strategies (random and adaptive). Specifically, in our experiments we use the following attack models: (i) Jacobian-based Data Augmentation 'JBDA' (Papernot et al., 2017b);
+
+| $F_V$ | Acc($F_V$) | Acc($F_A$): jbda | Acc($F_A$): jbself | Acc($F_A$): jbtop3 | Acc($F_A$): knockoff |
+| --- | --- | --- | --- | --- | --- |
+| MNIST (LeNet) | 99.4 | 89.2 | 89.4 | 87.3 | 99.1 |
+| FashionMNIST (LeNet) | 92.0 | 38.7 | 45.8 | 68.7 | 69.2 |
+| CIFAR10 (VGG16) | 92.0 | 28.6 | 20.7 | 73.8 | 78.7 |
+| CIFAR100 (VGG16) | 72.2 | 5.3 | 2.9 | 39.2 | 51.9 |
+| CUB200 (VGG16) | 80.4 | 6.8 | 3.9 | 21.5 | 65.1 |
+| Caltech256 (VGG16) | 80.0 | 12.5 | 16.0 | 29.5 | 74.6 |
+
+Table 1: Victim models and accuracies. All accuracies are w.r.t. the undefended victim model.
+
+(ii, iii) 'JB-self' and 'JB-top3' (Juuti et al., 2019); and (iv) Knockoff Nets 'knockoff' (Orekondy et al., 2019). We follow the default configurations of the attacks where possible. A recap and implementation details of the attack models are available in Appendix D.
+
+In all attack strategies, the adversary trains a model $F_{A}$ to minimize the cross-entropy loss on a transfer set $\mathcal{D}^{\text{transfer}} = \{(\boldsymbol{x}_i, \tilde{\boldsymbol{y}}_i)\}_{i=1}^{B}$ obtained by using the victim model $F_{V}$ to pseudo-label inputs $\boldsymbol{x}_i$ (sampled or adaptively synthesized). By default, we use $B = 50\mathrm{K}$ queries, which achieves reasonable performance for all attacks and additionally makes defense evaluation tractable. The size of the resulting transfer set ($B = 50\mathrm{K}$ examples) is comparable (e.g., $1 \times$ for CIFAR10/100, $2.1 \times$ for Caltech256) to the size of the victim's training set. In line with prior work (Papernot et al., 2016; Orekondy et al., 2019), we too find (Section 5.2.3) attack and defense performances are unaffected by the choice of architecture, and hence use the victim architecture for the stolen model $F_{A}$ . Due to the complex parameterization of VGG-16 (100M+ parameters), we initialize its weights from a pretrained TinyImageNet or ImageNet model (except for the last FC layer, which is trained from scratch). All stolen models are trained using SGD (LR=0.1) with momentum (0.5) for 30 epochs (LeNet) and 100 epochs (VGG16). We find the choices of the attacker's architecture and optimization do not undermine the defense (discussed in Section 5.2.3).
+
+Effectiveness of Attacks. We evaluate accuracy of resulting stolen models from the attack strategies as-is on the victim's test set, thereby allowing for a fair head-to-head comparison with the victim model (additional details in Appendix A and D). The stolen model test accuracies, along with undefended victim model $F_V$ accuracies are reported in Table 1. We observe for all six victim models, using just 50K black-box queries, attacks are able to significantly extract victim's functionality e.g., $>87\%$ on MNIST. We find the knockoff attack to be the strongest, exhibiting reasonable performance even on complex victim models e.g., $74.6\%$ ( $0.93 \times Acc(F_V)$ ) on Caltech256.
+
+How Good are Existing Defenses? Most existing defenses in literature (Tramèr et al., 2016; Orekondy et al., 2019; Lee et al., 2018) perform some form of information truncation on the posterior probabilities e.g., rounding, returning top- $k$ labels; all strategies preserve the rank of the most confident label. We now evaluate model stealing attacks on the extreme end of information truncation, wherein the defender returns just the top-1 'argmax' label. This strategy illustrates a rough lower bound on the strength of the attacker when using existing defenses. Specific to knockoff, we observe the attacker is minimally impacted on simpler datasets (e.g., $0.2\%$ accuracy drop on CIFAR10; see Fig. A5 in Appendix). While this has a larger impact on more complex datasets involving numerous classes (e.g., a maximum of $23.4\%$ drop observed on CUB200), the strategy also introduces a significant perturbation ( $L_{1} = 1 \pm 0.5$ ) to the posteriors. The results suggest existing defenses, which largely preserve the top-1 label, are ineffective at mitigating model stealing attacks.
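For concreteness, the truncation family can be sketched as follows; this is a hypothetical helper illustrating the general pattern (round, top-$k$, argmax), not the exact defenses of the cited works:

```python
import numpy as np

def truncate(y, mode="argmax", k=3, decimals=2):
    """Sketch of accuracy-preserving 'information truncation' defenses:
    every variant keeps the rank of the most confident label."""
    if mode == "round":                      # round probabilities
        y_t = np.round(y, decimals)
    elif mode == "topk":                     # reveal only the k most confident classes
        y_t = np.where(y >= np.sort(y)[-k], y, 0.0)
    else:                                    # 'argmax': return just the top-1 label
        y_t = np.eye(len(y))[np.argmax(y)]
    s = y_t.sum()
    return y_t / s if s > 0 else y           # renormalize onto the simplex
```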
+
+Defenses: Evaluation. We evaluate all defenses on a non-replicability vs. utility curve at various operating points $\epsilon$ of the defense, and for a large query budget (50K). As the non-replicability metric, we use the accuracy of the stolen model on held-out test data $\mathcal{D}^{\mathrm{test}}$ .
+
+
+Figure 3: Attackers vs. Our Defense. Curves are obtained by varying degree of perturbation $\epsilon$ (Eq. 7) in our defense. $\uparrow$ denotes higher numbers are better and $\downarrow$ , lower numbers are better. Non-replicability objective is presented on the $x$ -axis and utility on the $y$ -axis.
+
+We use two utility metrics: (a) accuracy: test-accuracy of the defended model producing perturbed predictions on $\mathcal{D}^{\mathrm{test}}$ ; and (b) perturbation magnitude $\epsilon$ : measured as $L_{1}$ distance $||\pmb{y} - \tilde{\pmb{y}}||_{1}$ .
+
+Defense: Baselines. We compare our approaches against three methods: (i) reverse-sigmoid (Lee et al., 2018): which softens the posterior distribution and introduces ambiguity among non-argmax probabilities. For this method, we evaluate non-replicability and utility metrics for the defense operating at various choices of its hyperparameter $\beta \in [0,1]$ , while keeping its dataset-specific hyperparameter $\gamma$ fixed (MNIST: 0.2, FashionMNIST: 0.4, CIFAR10: 0.1, rest: 0.2). (ii) random noise: For controlled random noise, we add uniform random noise $\delta_z$ to the logit prediction scores ( $\tilde{z} = z + \delta_z$ , where $z = \log \left(\frac{y}{1 - y}\right)$ ), enforce utility by projecting $\delta_z$ onto an $\epsilon_z$ -ball (Duchi et al., 2008), and recover renormalized probabilities via $\tilde{y} = \frac{1}{1 + e^{-\tilde{z}}}$ . (iii) dp-sgd: while our method and the previous two baselines perturb predictions, we also compare against introducing randomization to the victim model parameters by training with the DP-SGD algorithm (Abadi et al., 2016). DP is a popular technique to protect the model against training data inference attacks. This baseline allows us to verify whether the same protection extends to model functionality.
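The random-noise baseline can be sketched as below. This is an illustrative simplification: the noise is rescaled onto the $L_1$ $\epsilon_z$-ball rather than projected with the exact algorithm of Duchi et al. (2008), and the final renormalization step is one plausible reading of the description above:

```python
import numpy as np

def random_noise_defense(y, eps_z, rng):
    """Sketch of the random-noise baseline: perturb per-class logits with
    uniform noise constrained to an L1 eps_z-ball, map back through a
    sigmoid, and renormalize onto the probability simplex."""
    y = np.clip(y, 1e-7, 1 - 1e-7)
    z = np.log(y / (1 - y))                    # per-class logits
    delta = rng.uniform(-1, 1, size=len(y))
    l1 = np.abs(delta).sum()
    if l1 > eps_z:                             # rescale onto the L1 ball
        delta *= eps_z / l1                    # (simplified projection)
    y_t = 1.0 / (1.0 + np.exp(-(z + delta)))   # sigmoid
    return y_t / y_t.sum()                     # renormalize to sum to one
```

Setting `eps_z = 0` recovers the unperturbed posterior, which makes the utility knob explicit.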
+
+# 5.2 RESULTS
+
+In the following sections, we demonstrate the effectiveness of our defense, rigorously evaluated across a wide range of complex datasets, attack models, defense baselines, query budgets, and utility budgets. For readability, we first evaluate the defense against the attack models, then compare it against strong baselines, and finally provide an analysis of the defense.
+
+# 5.2.1 MAD DEFENSE VS. ATTACKS
+
+Figure 3 presents the evaluation of our defenses MAD (Eq. 4-7) and MAD-argmax (Eq. 4-8) against the four attack models. To successfully mitigate attacks as a defender, we want the defense curves (colored solid lines with operating points denoted by thin crosses) to move away from undefended accuracies (denoted by circular discs, where $\epsilon = 0.0$ ) towards ideal defense performance (cyan cross, where Acc(Def.) is unchanged and Acc(Att.) is at chance level).
+
+We observe from Figure 3 that by employing an identical defense across all datasets and attacks, the effectiveness of the attacker can be greatly reduced. Across all models, we find MAD provides reasonable operating points (above the diagonal), where defender achieves significantly higher test accuracies compared to the attacker. For instance, on MNIST, for $<1\%$ drop in defender's accuracy, our defense simultaneously reduces accuracy of the jbtop3 attacker by $52\%$ ( $87.3\% \rightarrow 35.7\%$ ) and knockoff by $29\%$ ( $99.1\% \rightarrow 69.8\%$ ). We find similar promising results even on high-dimensional complex datasets e.g., on CUB200, a $23\%$ ( $65.1\% \rightarrow 41.9\%$ ) performance drop of knockoff for $2\%$ drop in defender's test performance. Our results indicate effective defenses are achievable, where the defender can trade-off a marginal utility cost to drastically impede the attacker.
+
+# 5.2.2 MAD DEFENSE VS. BASELINE DEFENSES
+
+We now study how our approach compares to baseline defenses, by evaluating the defenses against the knockoff attack (the strongest attack in our experiments). From Figure 4, we observe:
+
+
+Figure 4: Knockoff attack vs. Ours + Baseline Defenses (best seen magnified). Non-replicability is presented on the $x$ -axis. On $y$ -axis, we present two utility measures: (a) top: Utility = $L_{1}$ distance (b) bottom: Utility = Defender's accuracy. Region above the diagonal indicates instances where defender outperforms the attacker.
+
+
+Figure 5: Attacker argmax. Follow-up to Figure 4b (CIFAR10), but with attacker using only the argmax label.
+
+
+Figure 6: Histogram of Angular Deviations. Presented for MAD attack on CIFAR10 with various choices of $\epsilon$ .
+
+
+Figure 7: Test loss. Visualized during training. Colours and lines correspond to $\epsilon$ values in Fig. 6.
+
+(i) Utility objective $= L_{1}$ distance (Fig. 4a): Although random-noise and reverse-sigmoid reduce the attacker's accuracy, these strategies in most cases involve larger perturbations. In contrast, MAD and MAD-argmax provide similar non-replicability (i.e., Acc(Att.)) with significantly less perturbation, especially at lower magnitudes. For instance, on MNIST (first column), MAD ( $L_{1} = 0.95$ ) reduces the accuracy of the attacker to under $80\%$ with $0.63 \times$ the perturbation of reverse-sigmoid and random-noise ( $L_{1} \approx 1.5$ ).
+(ii) Utility objective $=$ argmax-preserving (Fig. 4b): By setting a hard constraint on retaining the label of the predictions, we find the accuracy-preserving defenses MAD-argmax and reverse-sigmoid successfully reduce the performance of the attacker by at least $20\%$ across all datasets. In most cases, MAD-argmax additionally achieves this objective while introducing less distortion to the predictions than reverse-sigmoid: in Fig. 4a, MAD-argmax consistently reduces the attacker's accuracy by the same amount at smaller $L_{1}$ distances. In reverse-sigmoid, we attribute the large $L_{1}$ perturbations to a shift of posteriors towards a uniform distribution, e.g., the mean entropy of perturbed predictions is $3.02 \pm 0.16$ (max-entropy $= 3.32$ ) at $L_{1} = 1.0$ for MNIST; in contrast, MAD-argmax displays a mean entropy of $1.79 \pm 0.11$ . However, a pitfall common to accuracy-preserving strategies is that the top-1 label is retained. In Figure 5 (see overlapping red and yellow cross-marks), we present the results of training the attacker using only the top-1 label. In line with previous discussions, we find that the attacker is able to significantly recover the original performance of the stolen model against the accuracy-preserving defenses MAD-argmax and reverse-sigmoid.
+(iii) Non-replicability vs. utility trade-off (Fig. 4b): We now compare our defense MAD (blue lines) with baselines (rand-noise and dp-sgd) which trade off utility to mitigate model stealing. Our results indicate MAD offers a better defense (lower attacker accuracies for similar defender accuracies). For instance, to reduce the attacker's accuracy to $< 70\%$ , the defender's accuracy significantly degrades using dp-sgd $(39\%)$ and rand-noise $(56.4\%)$ , whereas MAD involves only a marginal decrease of $1\%$ .
+
+
+Figure 8: MAD Ablation experiments. Utility = (left) $L_{1}$ distance (right) defender test accuracy.
+
+
+
+
+Figure 9: Subverting the Defense.
+
+# 5.2.3 ANALYSIS
+
+How much angular deviation does MAD introduce? To obtain insights on the angular deviation induced between the true and the perturbed gradient, we conduct an experiment by tracking the true gradient direction (which was unknown so far) at each training step. We simulate this by training an attacker model using online SGD $(\mathrm{LR} = 0.001)$ over $N$ iterations, using $B$ distinct images to query and a batch size of 1. At each step $t$ of training, the attacker queries a randomly sampled input $\boldsymbol{x}_t$ to the defender model and backpropagates the loss resulting from $\tilde{\boldsymbol{y}}_t$ . In this particular experiment, the perturbation $\tilde{\boldsymbol{y}}_t$ is crafted with exact knowledge of the attacker's parameters. We evaluate the angular deviation between the gradients with $(\boldsymbol{a})$ and without $(\boldsymbol{u})$ the perturbation.
+
+In Figure 6, we visualize a histogram of deviations: $\theta = \arccos \frac{\boldsymbol{u} \cdot \boldsymbol{a}}{||\boldsymbol{u}||\,||\boldsymbol{a}||}$ , where $\boldsymbol{u} = \nabla_{\boldsymbol{w}} L(\boldsymbol{w}_t, \boldsymbol{y}, \cdot)$ and $\boldsymbol{a} = \nabla_{\boldsymbol{w}} L(\boldsymbol{w}_t, \tilde{\boldsymbol{y}}, \cdot)$ . We observe: (i) although our perturbation space is severely restricted (a low-dimensional probability simplex), we can introduce surprisingly high deviations (0-115°) in the high-dimensional parameter space of the VGG16; (ii) for $\epsilon$ values at reasonable operating points which preserve the defender's accuracy within 10% of the undefended accuracy (e.g., $\epsilon \in [0.95, 0.99]$ for CIFAR10), we see deviations with mean $24.9^{\circ}$ (yellow bars in Fig. 6), indicating that the perturbed gradient on average leads to a slower decrease in the loss function; (iii) on the extreme end, with $\epsilon = \epsilon_{\max} = 2$ , we find the perturbations on average successfully flip ( $>90^{\circ}$ ) the gradient direction, leading to an increase in the test loss, as seen in Figure 7 (blue line). We also find the above observations transfer reasonably to a black-box attacker setting (see Appendix F.4), where the perturbations are crafted without knowledge of the attacker's parameters. Overall, we find our approach considerably corrupts the attacker's gradient direction.
+
+Ablative Analysis. We present an ablation analysis of our approach in Figure 8. In this experiment, we compare our approach MAD and MAD-argmax to: (a) $G = I$ : we substitute the Jacobian $G$ (Eq. 5) with a $K \times K$ identity matrix; and (b) $y^{*} = \text{rand}$ : the inner maximization term (Eq. 4) returns a random extreme of the simplex. Note that neither (a) nor (b) uses gradient information to perturb the posteriors.
+
+From Figure 8, we observe: (i) poor performance of $y^{*} = \text{rand}$ , indicating random untargeted perturbations of the posterior probabilities are a poor strategy; (ii) $G = I$ , which maximizes the angular deviation between the posterior probability vectors themselves, is a slightly better strategy; (iii) MAD outperforms both approaches. Consequently, we find that using gradient information (although a proxy to the attacker's gradient signal) within our formulation (Equation 4) is crucial to providing better model stealing defenses.
+
+Subverting the Defense. We now explore various strategies an attacker can use to circumvent the defense. To this end, we evaluate the following strategies: (a) argmax: attacker uses only the most-confident label during training; (b) arch-*: attacker trains other choices of architectures; (c) nquery: attacker queries each image multiple times; (d) nquery+aug: same as (c), but with random cropping and horizontal flipping; and (e) opt-*: attacker uses an adaptive LR optimizer e.g., ADAM (Kingma & Ba, 2014).
+
+We present results over the subversion strategies in Figure 9. We find our defense robust to the above strategies. Our results indicate that the best strategy for the attacker to circumvent our defense is to discard the probabilities and rely only on the most confident label to train the stolen model. For accuracy-preserving defenses (see Fig. 5), this previously resulted in the adversary entirely circumventing the defense (recovering up to $1.0 \times$ the original performance). In contrast, we find MAD remains effective in spite of this strategy, maintaining a $9\%$ absolute reduction in the attacker's stolen accuracy.
+
+# 6 CONCLUSION
+
+In this work, we were motivated by the limited success of existing defenses against DNN model stealing attacks. While prior work is largely based on passive defenses focusing on information truncation, we proposed the first active defense strategy that attacks the adversary's training objective. We found our approach effective in defending a variety of victim models and against various attack strategies. In particular, we find our defense can reduce the accuracy of the adversary by up to $65\%$ , without significantly affecting the defender's accuracy.
+
+Acknowledgement. This research was partially supported by the German Research Foundation (DFG CRC 1223). We thank Paul Swoboda and David Stutz for helpful discussions.
+
+# REFERENCES
+
+Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. Deep learning with differential privacy. In CCS, 2016.
+Ibrahim M Alabdulmohsin, Xin Gao, and Xiangliang Zhang. Adding robustness to support vector machines against adversarial reverse engineering. In CIKM, 2014.
+Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In ECML PKDD, 2013.
+Varun Chandrasekaran, K Chaudhari, Irene Giacomelli, Somesh Jha, and Songbai Yan. Exploring connections between active learning and model extraction. arXiv preprint arXiv:1905.09165, 2019.
+Jacson Rodrigues Correia-Silva, Rodrigo F Berriel, Claudine Badue, Alberto F de Souza, and Thiago Oliveira-Santos. Copycat cnn: Stealing knowledge by persuading confession with random non-labeled data. In IJCNN, 2018.
+John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient projections onto the 11-ball for learning in high dimensions. In ICML, 2008.
+Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 2014.
+Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
+Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv:1503.02531, 2015.
+Karla Leigh Hoffman. A method for globally minimizing concave functions over convex sets. Mathematical Programming, 20(1):22-32, 1981.
+Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, and Nicolas Papernot. High-fidelity extraction of neural network models. arXiv preprint arXiv:1909.01838, 2019.
+Mika Juuti, Sebastian Szyller, Alexey Dmitrenko, Samuel Marchal, and N Asokan. Prada: Protecting against dnn model stealing attacks. In Euro S&P, 2019.
+Manish Kesarwani, Bhaskar Mukhoty, Vijay Arya, and Sameep Mehta. Model extraction warning in mlaas paradigm. In ACSAC, 2018.
+
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2014.
+Taesung Lee, Benjamin Edwards, Ian Molloy, and Dong Su. Defending against model stealing attacks using deceptive perturbations. S&P Deep Learning and Security (DLS) Workshop, 2018.
+Daniel Lowd and Christopher Meek. Adversarial learning. In KDD, 2005.
+Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR, 2018.
+Smitha Milli, Ludwig Schmidt, Anca D Dragan, and Moritz Hardt. Model reconstruction from model explanations. arXiv preprint arXiv:1807.05185, 2018.
+Blaine Nelson, Marco Barreno, Fuching Jack Chi, Anthony D Joseph, Benjamin IP Rubinstein, Udam Saini, Charles Sutton, JD Tygar, and Kai Xia. Misleading learners: Co-opting your spam filter. In Machine learning in cyber trust. 2009.
+Blaine Nelson, Benjamin Rubinstein, Ling Huang, Anthony Joseph, Shing-hon Lau, Steven Lee, Satish Rao, Anthony Tran, and Doug Tygar. Near-optimal evasion of convex-inducing classifiers. In AISTATS, 2010.
+Seong Joon Oh, Max Augustin, Bernt Schiele, and Mario Fritz. Towards reverse-engineering black-box neural networks. In ICLR, 2018.
+Tribhuvanesh Orekondy, Bernt Schiele, and Mario Fritz. Knockoff nets: Stealing functionality of black-box models. In CVPR, 2019.
+Soham Pal, Yash Gupta, Aditya Shukla, Aditya Kanade, Shirish Shevade, and Vinod Ganapathy. A framework for the extraction of deep neural networks by leveraging public data. arXiv preprint arXiv:1905.09165, 2019.
+Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016.
+Nicolas Papernot, Martin Abadi, Ulfar Erlingsson, Ian Goodfellow, and Kunal Talwar. Semi-supervised knowledge transfer for deep learning from private training data. In ICLR, 2017a.
+Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Asia CCS, 2017b.
+Florian Tramèr and Dan Boneh. Slalom: Fast, verifiable and private execution of neural networks in trusted hardware. In ICLR, 2019.
+Florian Tramèr, Fan Zhang, Ari Juels, Michael K Reiter, and Thomas Ristenpart. Stealing machine learning models via prediction apis. In USENIX Security, 2016.
+Binghui Wang and Neil Zhenqiang Gong. Stealing hyperparameters in machine learning. In S&P, 2018.
+Huadi Zheng, Qingqing Ye, Haibo Hu, Chengfang Fang, and Jie Shi. Bdpl: A boundary differentially private layer against machine learning model extraction attacks. In ESORICS, 2019.
+
+# Appendix
+
+# A OVERVIEW AND NOTATION
+
+
+Figure A1: Overview of Attack, Defense, and Evaluation Metrics. We consider an attacker $A$ who exploits black-box access to defended model $F_V^\delta$ to train a stolen model $F_A$ . In this paper, we take the role of the defender who intends to minimize replicability (i.e., $\mathrm{Acc}(F_A, \mathcal{D}^{\mathrm{test}})$ ), while maintaining utility of the predictions. We consider two notions of utility: (1) minimizing perturbations in predictions, measured here using $L_1$ distance; and (2) maintaining accuracy of the defended model on test set $\mathrm{Acc}(F_V^\delta, \mathcal{D}^{\mathrm{test}})$ . Note that for a fair head-to-head comparison, we use the same held-out test set $\mathcal{D}^{\mathrm{test}}$ to evaluate accuracies of both the defended model $F_V^\delta$ and stolen model $F_A$ . Similar to all prior work, we assume $\mathcal{D}^{\mathrm{train}}, \mathcal{D}^{\mathrm{test}}$ are drawn i.i.d from the same (victim) distribution $\mathcal{D}_V$ . Notation used in the above figure is further elaborated in Table A1.
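The metrics above reduce to simple aggregates over the test-set predictions; a minimal sketch under assumed array shapes (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def evaluate_defense(y_true, att_preds, def_probs, pert_probs):
    """y_true: (N,) labels; att_preds: (N,) stolen-model predictions;
    def_probs/pert_probs: (N, K) clean and perturbed defender posteriors."""
    acc_att = np.mean(att_preds == y_true)                    # non-replicability: Acc(F_A, D^test)
    acc_def = np.mean(pert_probs.argmax(1) == y_true)         # utility (2): Acc(F_V^delta, D^test)
    l1 = np.mean(np.abs(def_probs - pert_probs).sum(axis=1))  # utility (1): mean L1 perturbation
    return acc_att, acc_def, l1

y_true = np.array([0, 1])
att_preds = np.array([0, 0])
def_probs = np.array([[0.9, 0.1], [0.2, 0.8]])
pert_probs = np.array([[0.6, 0.4], [0.4, 0.6]])
acc_att, acc_def, l1 = evaluate_defense(y_true, att_preds, def_probs, pert_probs)
```

Both accuracies are computed on the same held-out $\mathcal{D}^{\mathrm{test}}$, matching the head-to-head protocol described above.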
+
+| | Symbol | Description |
+| --- | --- | --- |
+| | $\pmb{x}$ | Inputs (images $\in \mathbb{R}^{C \times H \times W}$) |
+| | $\pmb{y}, \tilde{\pmb{y}}$ | Original, perturbed posterior predictions |
+| | $\Delta^K$ | Probability simplex over $K$ vertices |
+| Attacker $A$ | $P_A(X)$ | Attacker's input data distribution |
+| | $\mathcal{D}^{\mathrm{transfer}}$ | Transfer set ( $= \{(\pmb{x}_i, \pmb{y}_i)\}$ , where $\pmb{x}_i \sim P_A(X)$ , $\pmb{y}_i = F_V(\pmb{x}_i)$ ) |
+| | $F_A$ | Attacker's (stolen) model trained on $\mathcal{D}^{\mathrm{transfer}}$ |
+| Victim/Defender $V$ | $P_V(X)$ | Victim's input data distribution |
+| | $\mathcal{D}^{\mathrm{train}}$ | Training data ( $= \{(\pmb{x}_i, y_i)\}$ , where $\pmb{x}_i \sim P_V(X)$ ) |
+| | $F_V$ | Undefended model trained on $\mathcal{D}^{\mathrm{train}}$ |
+| | $F_V^\delta$ | Defended model |
+| | $\mathcal{D}^{\mathrm{test}}$ | Test set ( $= \{(\pmb{x}_i, y_i)\}$ , where $\pmb{x}_i \sim P_V(X)$ ) |
+
+Table A1: Notation
+
+# B RELATED WORK: EXTENSION
+
+A summary of existing model stealing attacks and defenses is presented in Table A2.
+
+# C DETAILED ALGORITHM
+
+We present a detailed algorithm (see Algorithm 1) for our approach described in Section 4.
+
+The algorithm roughly follows four steps:
+
+(i) Predict (L2): Obtains posterior probability predictions $\pmb{y}$ for input $\pmb{x}$ using a victim model $F_{V}(\pmb{x};\pmb{w}_{V})$
+
+| | Black-box type | Query Data | Adapt.? | Strategy | P/D? | AP? | AC |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 1. Lowd & Meek (2005) | Linear | Random Noise | ✓ | - | - | - | - |
+| 2. Nelson et al. (2009) | Linear | Labeled Data | ✘ | Rejection | D | ✘ | 1 |
+| 3. Nelson et al. (2010) | Linear | Random Noise | ✓ | - | - | - | - |
+| 4. Alabdulmohsin et al. (2014) | Linear | Random Noise | ✓ | Ensembling | P | ✘ | 4 |
+| 5. Tramèr et al. (2016) | Linear, NN | Random Noise | † | Rounding | P | ✓ | 5 |
+| 6. Milli et al. (2018) | Linear, NN | Random Noise | ✓ | - | - | - | - |
+| 7. Kesarwani et al. (2018) | Decision Tree | - | - | Detection | D | ✓ | 5 |
+| 8. Chandrasekaran et al. (2019) | Linear | Random Noise | ✓ | Random Pert. | P | ✘ | - |
+| 9. Papernot et al. (2017b) | CNN | Synth. Data | ✓ | - | - | - | - |
+| 10. Correia-Silva et al. (2018) | CNN | Unlabeled Data | ✘ | - | - | - | - |
+| 11. Pal et al. (2019) | CNN | Unlabeled Data | † | - | - | - | - |
+| 12. Orekondy et al. (2019) | CNN* | Unlabeled Data | † | Rounding, Top-k | P | ✓ | 12 |
+| 13. Jagielski et al. (2019) | CNN* | Unlabeled Data | ✓ | - | - | - | - |
+| 14. Juuti et al. (2019) | CNN | Synth. Data | ✓ | Detection | D | ✓ | 9,14 |
+| 15. Lee et al. (2018) | CNN | - | - | Reverse sigmoid | P | ✓ | 9 |
+| 16. Ours | CNN* | - | - | Targeted Pert. | P | † | 9,12,14 |
+
+Table A2: Existing DNN Attacks and Defenses. Complements the discussion in Section 2. 'CNN*': Complex ImageNet-like CNN. '†': Both. 'P/D': Perturbation/Detection. 'AP': Accuracy-preserving (i.e., maintains top-1 labels of predictions). 'AC': Attacks considered.
+
+(ii) Estimate Jacobian $G$ (L3): We estimate a $K \times D$ Jacobian matrix on a surrogate model $F$ . By default, we use as $F$ a randomly initialized model (more details in Appendix E.1). Each row of $G$ represents the gradient direction (in parameter space $\mathbb{R}^D$ ) of the log-likelihood of class $k$ .
+(iii) Maximize MAD Objective (L4): We find the optimal direction $\mathbf{y}^*$ which maximizes the MAD objective (Eq. 3). To compute the arg max, we iterate over the $K$ extremes of the probability simplex $\Delta^K$ to find the $\mathbf{y}^*$ which maximizes the objective. The extreme $\mathbf{y}_k$ denotes a probability vector with $y_k = 1$ .
+(iv) Enforce Utility Constraint (L5-7): We enforce the perturbation utility constraint (Eq. 7) by considering a linear interpolation of $\mathbf{y}^*$ and $\mathbf{y}$ . The resulting interpolation probability vector $\tilde{\mathbf{y}} := h(\alpha^*)$ represents the utility-constrained perturbed prediction that is returned instead of $\mathbf{y}$ .
+
+```txt
+ 1 Function PerturbedPredict-MAD(x):
+     Input: input data $\pmb{x}$ ; model to defend $F_V(\cdot; \pmb{w}_V)$ ; proxy attacker model $F(\cdot; \pmb{w})$
+     Output: perturbed posterior $\tilde{\pmb{y}} \in \Delta^K$ s.t. $\mathrm{dist}(\tilde{\pmb{y}}, \pmb{y}) \leq \epsilon$
+ 2   $\pmb{y} := F_V(\pmb{x}; \pmb{w}_V)$   // Obtain K-dim posteriors
+ 3   $\pmb{G} := \nabla_{\pmb{w}} \log F(\pmb{x}; \pmb{w})$   // Pre-compute (K x D) Jacobian
+ 4   $\pmb{y}^* := \arg\max_{\pmb{y}_k \in \mathrm{ext}(\Delta^K)} \left\| \frac{\pmb{G}^T \pmb{y}_k}{\|\pmb{G}^T \pmb{y}_k\|_2} - \frac{\pmb{G}^T \pmb{y}}{\|\pmb{G}^T \pmb{y}\|_2} \right\|_2^2$   // Alternatively ext( $\Delta_k^K$ ) for MAD-argmax
+ 5   Define $h(\alpha) = (1 - \alpha)\pmb{y} + \alpha \pmb{y}^*$
+ 6   $\alpha^* := \arg\min_{\alpha \in [0,1],\ \mathrm{dist}(h(\alpha), \pmb{y}) \leq \epsilon} \mathrm{dist}(h(\alpha), \pmb{y}^*)$   // Find optimal step-size via bisection, or OptStep(.) for $L_p$ norms
+ 7   $\tilde{\pmb{y}} := h(\alpha^*)$   // Perturbed probabilities
+ 8   return $\tilde{\pmb{y}}$
+ 9
+10 Function OptStep(y, y*, ε, p):
+11   $\alpha^* := \min\left\{\frac{\epsilon}{\|\pmb{y} - \pmb{y}^*\|_p},\ 1\right\}$
+12   return $\alpha^*$
+```
+
+Algorithm 1: MAD Defense. Supplements the approach in Section 4.
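Algorithm 1 can be sketched compactly in NumPy; this is a minimal illustration, assuming dist is an $L_p$ norm (so the closed-form OptStep applies) and using a random matrix in place of the surrogate model's Jacobian:

```python
import numpy as np

def perturbed_predict_mad(y, G, eps, p=1):
    """y: (K,) victim posteriors; G: (K, D) Jacobian, row k = gradient of
    log-likelihood of class k; eps: utility budget dist(y~, y) <= eps."""
    K = len(y)
    g_y = G.T @ y
    g_y = g_y / np.linalg.norm(g_y)
    # Inner maximization: search the K extremes of the simplex (one-hot vectors).
    best, y_star = -1.0, None
    for k in range(K):
        g_k = G[k] / np.linalg.norm(G[k])  # G^T e_k is the k-th row of G
        d = np.sum((g_k - g_y) ** 2)
        if d > best:
            best, y_star = d, np.eye(K)[k]
    # OptStep: largest step towards y* with ||h(alpha) - y||_p <= eps, alpha in [0, 1].
    alpha = min(eps / np.linalg.norm(y - y_star, ord=p), 1.0)
    return (1 - alpha) * y + alpha * y_star  # h(alpha) stays on the simplex

rng = np.random.default_rng(0)
G = rng.normal(size=(4, 10))               # toy stand-in for the surrogate Jacobian
y = np.array([0.7, 0.1, 0.1, 0.1])
y_tilde = perturbed_predict_mad(y, G, eps=0.5)
```

Because $h(\alpha)$ linearly interpolates two probability vectors, the output is always a valid posterior, and the $L_1$ budget holds by construction.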
+
+# D ATTACK MODELS: RECAP AND IMPLEMENTATION DETAILS
+
+Jacobian Based Data Augmentation (jbda) (Papernot et al., 2017b). The motivation of the approach is to obtain a surrogate of the victim black-box classifier, with an end-goal of performing evasion attacks (Biggio et al., 2013; Goodfellow et al., 2014). We restrict discussions primarily to the first part of constructing the surrogate. To obtain the surrogate (the stolen model), the authors depend on an unlabeled 'seed' set, typically from the same distribution as that used to train the victim model. As a result, the attacker assumes (mild) knowledge of the input data distribution and the class-label of the victim.
+
+The key idea behind the approach is to query perturbations of inputs, to obtain a reasonable approximation of the decision boundary of the victim model. The attack strategy involves performing the following steps in a repeated manner: (i) images from the substitute set (initially the seed) $\mathcal{D}$ are labeled by querying the victim model $F_{V}$ as an oracle labeler; (ii) the surrogate model $F_{A}$ is trained on the substitute dataset; (iii) the substitute set is augmented using perturbations of existing images: $\mathcal{D}_{\rho +1} = \mathcal{D}_{\rho}\cup \{\pmb {x} + \lambda_{\rho +1}\cdot \mathrm{sgn}(J_F[F_A(\pmb {x})]):\pmb {x}\in \mathcal{D}_{\rho}\}$ , where $J$ is the Jacobian function.
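The augmentation step (iii) can be sketched as follows; the linear-softmax attacker model here is a toy stand-in for $F_A$ (chosen so the input Jacobian has a closed form), not the CNNs used in the experiments:

```python
import numpy as np

def jbda_augment(X, W, lam=0.1):
    """One jbda augmentation round. X: (N, D) substitute set; toy attacker
    model with class scores X @ W, W: (D, K). Each input is perturbed along
    the sign of the Jacobian of its predicted-class score w.r.t. the input."""
    scores = X @ W                                 # (N, K)
    preds = scores.argmax(axis=1)                  # predicted class per input
    # For a linear model, d score_k / d x = W[:, k]; take its sign as the step.
    steps = lam * np.sign(W[:, preds].T)           # (N, D)
    return np.concatenate([X, X + steps], axis=0)  # D_{rho+1} doubles the set

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))
W = rng.normal(size=(5, 3))
X_aug = jbda_augment(X, W, lam=0.1)                # shape (16, 5)
```

For a deep $F_A$, the per-input Jacobian would instead be obtained by backpropagation, but the set-doubling structure is the same.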
+
+We use a seed set of: 100 (MNIST and FashionMNIST), 500 (CIFAR10, CUB200, Caltech256) and 1000 (CIFAR100). We use the default set of hyperparameters of Papernot et al. (2017b) in other respects.
+
+Jacobian Based {self, top-k} (jbself, jbtop3) (Juuti et al., 2019). The authors generalize the above approach by extending the manner in which the synthetic samples are produced. In jbtop3, the Jacobian is calculated w.r.t. the $k$ nearest classes, and in jbself, w.r.t. the maximum a posteriori class predicted by $F_{A}$ .
+
+Knockoff Nets (knockoff) (Orekondy et al., 2019). Knockoff is a recent attack model which demonstrated that model stealing can be performed without access to seed samples. Rather, the queries to the black-box consist of natural images (which can be unrelated to the training data of the victim model) sampled from a large independent data source, e.g., ImageNet1K. Consequently, neither knowledge of the input data distribution nor of the class-label space of the victim model is required to perform model stealing. The paper proposes two strategies for sampling query images: random and adaptive. We use the random strategy, since adaptive sampling resulted in only marginal improvements in the open-world setup we consider.
+
+As the independent data sources in our knockoff attacks, we use: EMNIST-Letters (when stealing the MNIST victim model), EMNIST (FashionMNIST), CIFAR100 (CIFAR10), CIFAR10 (CIFAR100), ImageNet1k (CUB200, Caltech256). Overlap between query images and the training data of the victim models is purely coincidental.
+
+We use the code from the project's public GitHub repository.
+
+Evaluating Attacks. The replica models $F_{A}$ resulting from all the above attack strategies are evaluated on a held-out test set. We remark that the replica model is evaluated as-is, without additional finetuning or modifications. Similar to prior work, we evaluate the accuracy of $F_{A}$ on the victim's held-out test set. Evaluating both the stolen and the victim model on the same test set allows for a fair head-to-head comparison.
+
+# E SUPPLEMENTARY ANALYSIS
+
+In this section, we present additional analysis to supplement Section 5.2.3.
+
+# E.1 ESTIMATING $G$
+
+Central to our defense is estimating the Jacobian matrix $\pmb{G} = \nabla_{\pmb{w}} \log F(\pmb{x}; \pmb{w})$ (Eq. 5), where $F(\cdot; \pmb{w})$ is the attacker's model. However, a defender with black-box knowledge of the attacker (where $F$ is unknown) must instead determine $\pmb{G}$ using a surrogate model $F_{\mathrm{sur}}$ . We determine the choice of $F_{\mathrm{sur}}$ empirically by studying two factors: (a) architecture of $F_{\mathrm{sur}}$ : we find the defense robust to the choice of the defender's surrogate architecture across varying attacker architectures (see Fig. A2); and (b) initialization of $F_{\mathrm{sur}}$ : the initialization of the surrogate model parameters plays a crucial role in providing a better defense. We
+
+
+Figure A2: Influence of attacker architecture choices on a fixed surrogate.
+
+
+Figure A3: Influence of Initialization of a VGG16 Surrogate Model. 'rand' = random initialization, ('early', 'mid', 'late') = $\sim (25, 50, 75)\%$ test accuracy of surrogate on test set.
+
+| Dataset | Undefended | MAD |
+| --- | --- | --- |
+| MNIST | 0.88 ± 14.41 | 6.47 ± 12.25 |
+| FashionMNIST | 0.89 ± 15.76 | 6.65 ± 14.16 |
+| CIFAR10 | 1.93 ± 13.02 | 8.58 ± 15.02 |
+| CIFAR100 | 2.15 ± 18.82 | 69.26 ± 21.4 |
+| CUB200 | 4.45 ± 9.66 | 446.93 ± 23.87 |
+| Caltech256 | 4.93 ± 21.25 | 815.97 ± 30.3 |
+
+Table A3: Run times (in ms). We report the mean and standard deviation of predictions of undefended and defended models, computed over 10K predictions.
+
+consider four choices of initialization: {'rand', 'early', 'mid', 'late'}, which exhibit approximately {chance-level, $25\%$ , $50\%$ , $75\%$ } test accuracies respectively. We observe (see Fig. A3) that a randomly initialized model, which is far from convergence, provides better gradient signals for crafting perturbations.
+
+# E.2 RUN-TIME ANALYSIS
+
+We present the run-times of our defended and undefended models in Table A3. The reported numbers are summarized over 10K unique predictions performed on an Nvidia Tesla V100. We find our optimization procedure (Eq. 4-7) for all models takes under a second, with at most 0.8s in the case of Caltech256. The primary computational bottleneck of our defense implementation is estimating the matrix $G \in \mathbb{R}^{K \times D}$ in Eq. 5, which currently requires performing $K$ (i.e., number of output classes) backward passes through the surrogate model. Consequently, we find that our inference times on Caltech256 can be further reduced to $0.3\mathrm{s} \pm 0.04$ by using a more efficient surrogate architecture (e.g., ResNet-34).
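The per-class Jacobian estimation can be illustrated with a toy linear-softmax surrogate, for which each of the $K$ "backward passes" has a closed form (an illustrative sketch, not the paper's implementation):

```python
import numpy as np

def jacobian_log_probs(x, W):
    """G in R^{K x D}: row k is the gradient of log p_k w.r.t. the flattened
    weights of a toy linear-softmax surrogate p = softmax(W @ x)."""
    z = W @ x
    p = np.exp(z - z.max())
    p /= p.sum()
    K, d = W.shape
    G = np.empty((K, K * d))
    for k in range(K):                       # one "backward pass" per output class
        e_k = np.eye(K)[k]
        G[k] = np.outer(e_k - p, x).ravel()  # d log p_k / dW = (e_k - p) x^T
    return G

x = np.array([1.0, -2.0, 0.5])
W = np.random.default_rng(1).normal(size=(4, 3))
G = jacobian_log_probs(x, W)                 # shape (4, 12)
```

For a deep surrogate, each row would instead cost one backpropagation through the network, which is why the $K$ passes dominate inference time.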
+
+# F ADDITIONAL PLOTS
+
+# F.1 ATTACKER EVALUATION
+
+We present evaluation of all attacks considered in the paper on an undefended model in Figure A4. Furthermore, specific to the knockoff attack, we analyze how training using only the top-1 label (instead of complete posterior information) affects the attacker in Figure A5.
+
+# F.2 BUDGET VS. ACCURACY
+
+We plot the budget (i.e., number of distinct black-box attack queries to the defender) vs. the test accuracy of the defender/attacker in Figure A6. The figure supplements Figure 1 and the discussion found in Section 5.2.1 of the main paper.
+
+
+Figure A4: Evaluation of all attacks on undefended victim models.
+
+
+Figure A5: Stolen model trained using knockoff strategy on complete posterior information $(y)$ and only the top-1 label of the posteriors (arg $\max_k y_k$ ).
+
+
+Figure A6: Budget vs. Test Accuracy. Supplements Fig. 3c in the main paper.
+
+
+Figure A7: Attacker argmax. Supplements Fig. 4 in the main paper.
+
+
+Figure A8: Histogram of Angular Deviations (Black-box setting). Supplements Fig. 6 in the main paper. The test loss of the attacker model during training, for each of the histograms (over multiple $\epsilon$ values), is provided in the bottom row.
+
+
+
+# F.3 ATTACKER ARGMAX
+
+In Figure A7, we perform the non-replicability vs. utility evaluation (complementing Fig. 5 in the main paper) under a special situation: the attacker discards the probabilities and only uses the top-1 'argmax' label to train the stolen model. Relevant discussion can be found in Section 5.2.2.
+
+# F.4 BLACK-BOX ANGULAR DEVIATIONS
+
+In Figure A8, we provide the angular deviations obtained in a black-box setting over the course of training the attack model. We train the attacker model using the transfer set obtained by the knockoff approach (the strongest attacker in our experiments) for 50 epochs using SGD $(\mathrm{lr} = 0.01$ , momentum $= 0.5$ ) and a batch size of 64. The experiment complements our previous discussion in Section 5.2.3 of the main paper under "How much angular deviation does MAD introduce?". As before, we estimate the angular deviations as: $\theta = \arccos \frac{\boldsymbol{u}\cdot\boldsymbol{a}}{||\boldsymbol{u}||\,||\boldsymbol{a}||}$ , where $\boldsymbol{u} = \nabla_{\boldsymbol{w}}L(\boldsymbol{w}_{t},\boldsymbol{y},\cdot)$ and $\boldsymbol{a} = \nabla_{\boldsymbol{w}}L(\boldsymbol{w}_{t},\tilde{\boldsymbol{y}},\cdot)$ . We observe from Figure A8: (i) the defensive angular deviations MAD introduces to posterior predictions transfer to a black-box attacker setting, where perturbations are crafted without access to the adversary's model parameters; and (ii) although this setting introduces lower angular deviations in the extreme case of $\epsilon = 2.0$ (e.g., $114.7^{\circ}\rightarrow 76.5^{\circ}$ in CIFAR10), we observe the perturbations are sufficient to maximize the attacker's test loss. Overall, we find significant angular deviations introduced by our approach in the black-box setting as well.
+
+# F.5 MAD ABLATION EXPERIMENTS
+
+We present the ablation experiments covering all defender models in Figure A9. Relevant discussion is available in Section 5.2.3 of the main paper under "Ablative Analysis".
+
+
+Figure A9: MAD ablation experiments. Supplements Fig. 8 in the main paper.
+
\ No newline at end of file
diff --git a/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/images.zip b/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6ee083e97941185ecf8b7d2fc3b0b0f4ca6c7901
--- /dev/null
+++ b/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:73152855c40c543f96f7df15d35aadab262276db101a3260a0e2e64e5ab98392
+size 941904
diff --git a/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/layout.json b/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b7b216f3085487509c9e6dea5ff4f7bd05559969
--- /dev/null
+++ b/predictionpoisoningtowardsdefensesagainstdnnmodelstealingattacks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:447a5f53cbfdaca0847a2d5eaa4521d017405ec5b13aa084b5a53ac66557949f
+size 608322
diff --git a/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/58cb790e-5763-47c5-93ce-46f6c71f8a0b_content_list.json b/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/58cb790e-5763-47c5-93ce-46f6c71f8a0b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..cf80f0cac9e55df85bd1089874e80df43e78c91c
--- /dev/null
+++ b/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/58cb790e-5763-47c5-93ce-46f6c71f8a0b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da59f4db1ad7677b77a822d9aa916a8b2057a6a8a491cc6526cc1845bcf6dcc6
+size 72126
diff --git a/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/58cb790e-5763-47c5-93ce-46f6c71f8a0b_model.json b/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/58cb790e-5763-47c5-93ce-46f6c71f8a0b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..2350e621c8061865c7f7beabceb5387eff786070
--- /dev/null
+++ b/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/58cb790e-5763-47c5-93ce-46f6c71f8a0b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dccc4280ed6894fbdfea1a320d7093ac1ede2c4aba086c29d18a4f32392cfa5a
+size 91250
diff --git a/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/58cb790e-5763-47c5-93ce-46f6c71f8a0b_origin.pdf b/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/58cb790e-5763-47c5-93ce-46f6c71f8a0b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d9363fa03b78569922522a0f7c7ae02680e140d9
--- /dev/null
+++ b/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/58cb790e-5763-47c5-93ce-46f6c71f8a0b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef5022f74dbb4619719d8c2cbd982f79c24c84008533504defb9e54b030cb5f5
+size 1611506
diff --git a/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/full.md b/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..cab47f09a5eaf89430689ded06fa08412be784bc
--- /dev/null
+++ b/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/full.md
@@ -0,0 +1,236 @@
+# PRETRAINED ENCYCLOPEDIA: WEAKLY SUPERVISED KNOWLEDGE-PRETRAINED LANGUAGE MODEL
+
+Wenhan Xiong†, Jingfei Du§, William Yang Wang†, Veselin Stoyanov§,
+
+† University of California, Santa Barbara
+§ Facebook AI
+
+{xwhan, william}@cs.ucsb.edu, {jingfeidu, ves}@fb.com
+
+# ABSTRACT
+
+Recent breakthroughs of pretrained language models have shown the effectiveness of self-supervised learning for a wide range of natural language processing (NLP) tasks. In addition to standard syntactic and semantic NLP tasks, pretrained models achieve strong improvements on tasks that involve real-world knowledge, suggesting that large-scale language modeling could be an implicit method to capture knowledge. In this work, we further investigate the extent to which pretrained models such as BERT capture knowledge using a zero-shot fact completion task. Moreover, we propose a simple yet effective weakly supervised pretraining objective, which explicitly forces the model to incorporate knowledge about real-world entities. Models trained with our new objective yield significant improvements on the fact completion task. When applied to downstream tasks, our model consistently outperforms BERT on four entity-related question answering datasets (i.e., WebQuestions, TriviaQA, SearchQA and Quasar-T) with an average 2.7 F1 improvement, and on a standard fine-grained entity typing dataset (i.e., FIGER) with a 5.7 accuracy gain.
+
+# 1 INTRODUCTION
+
+Language models pretrained on a large amount of text such as ELMo (Peters et al., 2018a), BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019c) have established new state of the art on a wide variety of NLP tasks. Researchers have found that pretraining allows models to learn syntactic and semantic information of language that is then transferred to other tasks (Peters et al., 2018b; Clark et al., 2019). Interestingly, pretrained models also perform well on tasks that require grounding language and reasoning about the real world. For instance, the new state-of-the-art for WNLI (Wang et al., 2019a), ReCoRD (Zhang et al., 2018) and SWAG (Zellers et al., 2018) is achieved by pretrained models. These tasks are carefully designed so that the text input alone does not convey the complete information for accurate predictions - external knowledge is required to fill the gap. These results suggest that large-scale pretrained models implicitly capture real-world knowledge. Logan et al. (2019) and Petroni et al. (2019) further validate this hypothesis through a zero-shot fact completion task that involves single-token entities, showing that pretrained models achieve much better performance than random guessing and can be on par with specifically-trained relation extraction models.
+
+As unstructured text encodes a great deal of information about the world, large-scale pretraining over text data holds the promise of simultaneously learning syntax, semantics and connecting them with knowledge about the real world within a single model. However, existing pretraining objectives are usually defined at the token level and do not explicitly model entity-centric knowledge. In this work, we investigate whether we can further enforce pretrained models to focus on encyclopedic knowledge about real-world entities, so that they can better capture entity information from natural language and be applied to improving entity-related NLP tasks. We evaluate the extent to which a pretrained model represents such knowledge by extending an existing fact completion evaluation to a cloze ranking setting that allows us to deal with a large number of multi-token entity names without manual judgments. Our experiments on 10 common Wikidata (Vrandecic & Krötzsch, 2014) relations reveal that existing pretrained models encode entity-level knowledge only to a limited degree. Thus, we propose a new weakly supervised knowledge learning objective that requires the
+
+
+Figure 1: Type-Constrained Entity Replacements for Knowledge Learning.
+
+model to distinguish between true and false knowledge expressed in natural language. Specifically, we replace entity mentions in the original documents with names of other entities of the same type and train the models to distinguish the correct entity mention from randomly chosen ones. Models trained with this objective demonstrate much stronger fact completion performance for most relations we test on. Compared with previous work (Zhang et al., 2019; Peters et al., 2019) that utilizes an external knowledge base to incorporate entity knowledge, our method is able to derive real-world knowledge directly from unstructured text. Moreover, our method requires no additional data processing, memory or modifications to the BERT model when fine-tuning for downstream tasks.
+
+We test our model on two practical NLP problems that require entity knowledge: Question Answering (QA) and fine-grained Entity Typing. We use four previously published datasets for open-domain QA and observe that questions in these datasets often concern entities. The Entity Typing task requires the model to recognize fine-grained types of specified entity mentions given short contexts. On three of the QA datasets, our pretrained model outperforms all previous methods that do not rely on memory-consuming inter-passage normalizations1. On the FIGER entity-typing dataset, our model sets a new state of the art. Through ablation analysis, we show that the new entity-centric training objective is instrumental for achieving state-of-the-art results.
+
+In summary, this paper makes the following contributions: 1) We extend existing fact completion evaluation settings to test pretrained models' ability on encoding knowledge of common real-world entities; 2) We propose a new weakly supervised pretraining method which results in models that better capture knowledge about real-world entities from natural language text; 3) The model trained with our knowledge learning objective establishes new state of the art on three entity-related QA datasets and a standard fine-grained entity typing dataset.
+
+We begin by introducing our weakly supervised method for knowledge learning (§2) and then discuss experiment settings and evaluation protocols, compare our model to previously published work and perform ablation analysis. Finally, we review related work in §4 and conclude in §5.
+
+# 2 ENTITY REPLACEMENT TRAINING
+
+We design an entity-centric training objective that utilizes weakly supervised training signals to explicitly encourage knowledge learning during pretraining. Given an input document, we first
+
+recognize the entity mentions and link them to Wikipedia entities2. We consider the original texts as positive knowledge statements and create negative statements by randomly replacing the entity mentions $(\mathcal{E}^{+})$ with the names of other random entities $(\mathcal{E}^{-})$ that have the same entity type as the mentioned entity. This setup is similar in spirit to the type-constrained negative sampling technique used to train knowledge base representations (Bordes et al., 2013). The latter technique creates negative triples by replacing the subject or object entity with random entities of the same type. Instead of knowledge base triples, we treat unstructured texts as factual statements. For a certain entity $e$ mentioned in a context $\mathcal{C}$ , we train the model to make a binary prediction indicating whether the entity has been replaced:
+
+$$
+J_{e,\mathcal{C}} = \mathbb{1}_{e \in \mathcal{E}^{+}} \log P(e \mid \mathcal{C}) + \left(1 - \mathbb{1}_{e \in \mathcal{E}^{+}}\right) \log \left(1 - P(e \mid \mathcal{C})\right).
+$$
+
+Compared to the language modeling objective, entity replacement is defined at the entity level and introduces stronger negative signals. When we enforce entities to be of the same type, we preserve the linguistic correctness of the original sentence while the system needs to learn to perform judgment based on the factual aspect of the sentence.
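In code, the per-mention term of this objective is just a signed log-probability. The sketch below (pure Python, with hypothetical helper names) mirrors the equation, with `is_original` playing the role of the indicator $\mathbb{1}_{e \in \mathcal{E}^{+}}$:

```python
import math

def entity_objective(p_entity_true: float, is_original: bool) -> float:
    """Per-entity term J_{e,C} of the replacement objective:
    log P(e|C) for original mentions, log(1 - P(e|C)) for replaced ones."""
    return math.log(p_entity_true) if is_original else math.log(1.0 - p_entity_true)

def batch_objective(predictions):
    """Sum the objective over (probability, is_original) pairs.
    Training maximizes this sum, i.e. minimizes a binary cross-entropy
    over replaced/original labels."""
    return sum(entity_objective(p, y) for p, y in predictions)
```

Maximizing this objective is equivalent to standard binary cross-entropy training on the replaced/original labels, which is how it would typically be implemented in practice.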
+
+We describe the implementation in more detail in the following paragraphs.
+
+Data Preparation We use the whole English Wikipedia dump as training data and rely on all Wikipedia entities3. Entities in documents are recognized based on Wikipedia anchor links and entity aliases from Wikidata. That is, we first retrieve the entities annotated by anchor links and then find other mentions of these entities by string matching their Wikidata aliases. We split each document into multiple text chunks of the same size (512 tokens). Although our experiments rely on the Wikipedia corpus, this setup can easily be extended to larger corpora with off-the-shelf entity linking tools. We leave larger-scale experiments to future work.
+
+Replacement Strategy When replacing entities, we first look up type information4 from Wikidata and then randomly select other entities with the same type. We do not replace adjacent entities; in other words, there must be at least one unreplaced entity between any two replaced ones. This reduces cases where we replace all entities in the same sentence and the resulting sentence happens to introduce correct entities by chance. For each replacement, we randomly sample a string from the negative entity's alias set. For each text chunk, we replicate it 10 times with different negative entities at each replacement location. We show an illustration of the entity replacement method in Figure 1.
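A minimal sketch of this selection procedure follows. The 0.5 corruption probability and all function names are illustrative assumptions, but the adjacency constraint (no two consecutive mentions replaced) matches the description above:

```python
import random

def choose_replacement_slots(num_mentions: int, rng: random.Random):
    """Pick mention indices to corrupt so that no two adjacent mentions
    are both replaced (at least one untouched mention in between)."""
    slots, prev_replaced = [], False
    for i in range(num_mentions):
        if not prev_replaced and rng.random() < 0.5:  # illustrative rate
            slots.append(i)
            prev_replaced = True
        else:
            prev_replaced = False
    return slots

def corrupt_chunk(mentions, same_type_aliases, rng):
    """Replace chosen mentions with a random alias of another same-type
    entity. `same_type_aliases` maps a mention to candidate negative
    names (assumed precomputed from Wikidata type information)."""
    out, labels = list(mentions), [True] * len(mentions)
    for i in choose_replacement_slots(len(mentions), rng):
        out[i] = rng.choice(same_type_aliases[mentions[i]])
        labels[i] = False  # this mention is now a negative statement
    return out, labels
```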
+
+Model Architecture We adopt the Transformer (Vaswani et al., 2017) architecture used by BERT (Devlin et al., 2019), with the same configuration as BERT base: 12 Transformer layers, each with hidden dimension 768. We initialize the Transformer with a model pretrained using our own BERT re-implementation5. For each entity, we use the final representations of its boundary words (the words before and after the entity mention) to make predictions: we concatenate the boundary words' representations and add a linear layer for prediction. During training, we apply 0.05 dropout at the final layer.
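The prediction head reduces to a linear layer over the concatenated boundary states. A numpy sketch (randomly initialized weights and all names here are illustrative; in practice this sits on top of the Transformer and is trained jointly):

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 768                                      # BERT-base hidden size
W = rng.normal(scale=0.02, size=(2 * HIDDEN, 1))  # classifier weights
b = np.zeros(1)

def replaced_probability(left_state, right_state):
    """Concatenate the boundary words' final hidden states and apply a
    linear layer + sigmoid to obtain P(e|C), the probability that the
    mention is the original (unreplaced) entity."""
    features = np.concatenate([left_state, right_state])  # shape (1536,)
    logit = features @ W + b
    return float(1.0 / (1.0 + np.exp(-logit)))
```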
+
+Training Objectives Masked language model pretraining has proven effective for downstream tasks. While training for entity replacement, we also train with the masked language model objective in a multi-task setup. When masking tokens, we restrict the masks to positions outside the entity spans. We use a masking ratio of $5\%$ instead of the $15\%$ used in the original BERT to avoid masking out too much of the context. We train the model for approximately 1 million updates with a batch size of 128.
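The masking restriction can be sketched as follows. The helper names are hypothetical, and computing the 5% budget from the full sequence length (rather than from the non-entity token count) is an assumption:

```python
import random

def maskable_positions(seq_len, entity_spans):
    """Token positions eligible for [MASK]: everything outside entity
    spans, given spans as [start, end) token index pairs."""
    inside = set()
    for start, end in entity_spans:
        inside.update(range(start, end))
    return [i for i in range(seq_len) if i not in inside]

def sample_masks(seq_len, entity_spans, ratio=0.05, rng=None):
    """Sample ~5% of positions to mask (15% in the original BERT),
    drawing only from tokens outside entity mentions."""
    rng = rng or random.Random(0)
    candidates = maskable_positions(seq_len, entity_spans)
    k = max(1, round(ratio * seq_len))
    return sorted(rng.sample(candidates, min(k, len(candidates))))
```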
+
+# 3 EXPERIMENTS
+
+We first test our model on a fact completion task. This task resembles traditional knowledge base completion: it requires the model to complete missing entities in factual triples. We further test on two real-world downstream tasks that require entity-level knowledge – question answering and fine-grained entity typing. We describe the hyperparameter and training settings of all experiments in the appendix.
+
+# 3.1 ZERO-SHOT FACT COMPLETION
+
+In traditional knowledge base completion tasks, models have access to a set of training triples. Instead, we utilize a zero-shot test to examine the model's ability to automatically derive relational knowledge from natural language.
+
+Dataset We rely on factual triples from Wikidata. Each triple describes the relationship between two entities, e.g., {Paris, CapitalOf, France}. Following recent practices (Bosselut et al., 2019; Logan et al., 2019) that decode structured knowledge from language models, we first manually create templates to convert triples of 10 common relations into natural language expressions ({Paris, CapitalOf, France} $\rightarrow$ the capital of France is Paris). We then create queries by removing the object entity from the expression and use pretrained models to predict the missing entities, e.g., the capital of France is ?. We create 1000 cloze examples for each of the 10 relations.
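The template conversion might look like the sketch below. The template strings and the `{known}`/`{answer}` slot names are illustrative, not the paper's actual templates:

```python
# Hypothetical verbalization templates: {known} is the given entity and
# {answer} the entity to recover, e.g. {Paris, CapitalOf, France}
# -> "the capital of France is Paris".
TEMPLATES = {
    "CapitalOf": "the capital of {known} is {answer}",
    "PlaceOfBirth": "{known} was born in {answer}",
}

def verbalize(relation, known, answer):
    """Render a complete natural-language statement for a triple."""
    return TEMPLATES[relation].format(known=known, answer=answer)

def cloze_query(relation, known):
    """Drop the answer entity to obtain the fact completion query."""
    return TEMPLATES[relation].format(known=known, answer="?")
```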
+
+Evaluation Metrics Previous work (Logan et al., 2019; Petroni et al., 2019) either relies on human evaluation or only considers single-token entities for fact completion. In contrast, we consider an entity-ranking setup and create a set of candidate entities for each relation. This setting allows us to automatically evaluate a large number of queries that usually involve multi-token entities. We test pretrained models on their ability to recover the correct object entity from the candidate set. To create the negative choices, we select from the set of all object entities in the particular relation, which generally have the same type as the groundtruth and are more challenging to distinguish than entities with different types. Our evaluation strategy is similar to previous work on knowledge base completion (Nickel et al., 2011; Bordes et al., 2013; Xiong et al., 2017). We follow these studies and use Hits@10 as the evaluation metric.
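Hits@10 over a candidate set reduces to a membership test on the top-ranked entities. A sketch, assuming higher scores are better (function name is illustrative):

```python
def hits_at_k(scored_candidates, gold_answers, k=10):
    """scored_candidates: list of (entity, score) pairs. Returns 1.0 if
    any gold answer appears among the k highest-scoring candidates."""
    ranked = sorted(scored_candidates, key=lambda x: x[1], reverse=True)
    top_k = {ent for ent, _ in ranked[:k]}
    return float(bool(top_k & set(gold_answers)))
```

Averaging this indicator over the 1000 queries per relation gives the per-relation Hits@10 reported in Table 1.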
+
+Baselines We compare our model with two pretrained language models BERT (Devlin et al., 2019) (both base and large) and GPT-2 (Radford et al., 2019). We make use of their output token probabilities to rank candidate entities. For BERT, we feed in the masked queries (e.g., $Q_{\text{masked}} = \text{the capital of France is [MASK]}$ ). For multi-token candidates, we use the same number of [MASK] tokens in the query inputs. We use the average log probability of masked tokens for ranking. Given a multi-token entity $E_i = [e_i^1, e_i^2, \dots, e_i^{|E_i|}]$ , the ranking score from BERT is calculated as
+
+$$
+S_{E_i} = \frac{1}{|E_i|} \sum_{k} \log P(e_i^k \mid Q_{\text{masked}}).
+$$
+
+For GPT-2, we feed in the original query without the answer entity and use the first-token probability of candidate entities for ranking, which performs better than using average log probabilities. As our model learns to predict a plausibility probability $(P(e|\mathcal{C}))$ for each entity mention during entity replacement training, we can directly use these predicted probabilities to rank the candidates.
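The BERT scoring rule is a simple average over the [MASK] positions. A sketch with hypothetical names, taking precomputed per-token log probabilities as input:

```python
import math

def bert_ranking_score(token_log_probs):
    """Average log probability of the [MASK] positions filled with the
    candidate's tokens: S = (1/|E_i|) * sum_k log P(e_i^k | Q_masked)."""
    return sum(token_log_probs) / len(token_log_probs)

def rank_candidates(candidate_log_probs):
    """candidate_log_probs: {entity_name: [per-token log probs]}.
    Returns entity names sorted from highest to lowest score."""
    scores = {e: bert_ranking_score(lp) for e, lp in candidate_log_probs.items()}
    return sorted(scores, key=scores.get, reverse=True)
```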
+
+Results Table 1 shows the fact completion results for all relations. We denote our method WKLM (Weakly Supervised Knowledge-Pretrained Language Model). Overall, WKLM achieves the best results on 8 of the 10 relations. We also observe that GPT-2 outperforms BERT on average. We think this is because the fact completion task requires models to predict the missing entities using only a short context on the left, while BERT pretraining incorporates context from both directions. Interestingly, BERT achieves good performance on several geographical relations such as PlaceOfBirth, LocatedIn and PlaceOfDeath. We conjecture that this is because location entities usually appear at sentence ends in Wikipedia articles, e.g., Obama was born in
+
+Honolulu, Hawaii. This sentence pattern is similar to our templates and BERT may learn to rely mostly on the left context to make predictions. For most relations whose answers are person names, BERT lags behind both GPT-2 and our model.
+
+Comparing the top and bottom five relations, we observe that BERT's performance is correlated with the size of the candidate set, while WKLM and GPT-2 are less sensitive to this number. A similar pattern exists between models' performance and the cardinality of groundtruth answers, i.e., our model achieves similar performance on both single-answer and multiple-answer queries while BERT is usually better at single-answer queries. WKLM outperforms both BERT and GPT-2 and achieves robust performance across relations with different properties. Visualization of correlations between relation properties and model performance can be found in the appendix.
+
+Table 1: Zero-Shot Fact Completion Results.
+
+| Relation Name | # of Candidates | # of Answers | BERT-base | BERT-large | GPT-2 | Ours |
+| --- | --- | --- | --- | --- | --- | --- |
+| HASCHILD (P40) | 906 | 3.8 | 9.00 | 6.00 | 20.5 | 63.5 |
+| NOTABLEWORK (P800) | 901 | 5.2 | 1.88 | 2.56 | 2.39 | 4.10 |
+| CAPITALOF (P36) | 820 | 2.2 | 1.87 | 1.55 | 15.8 | 49.1 |
+| FOUNDEDBY (P112) | 798 | 3.7 | 2.44 | 1.93 | 8.65 | 24.2 |
+| CREATOR (P170) | 536 | 3.6 | 4.57 | 4.57 | 7.27 | 9.84 |
+| PLACEOFBIRTH (P19) | 497 | 1.8 | 19.2 | 30.9 | 8.95 | 23.2 |
+| LOCATEDIN (P131) | 382 | 1.9 | 13.2 | 52.5 | 21.0 | 61.1 |
+| EDUCATEDAT (P69) | 374 | 4.1 | 9.10 | 7.93 | 11.0 | 16.9 |
+| PLACEOFDEATH (P20) | 313 | 1.7 | 43.0 | 42.6 | 8.83 | 26.5 |
+| OCCUPATION (P106) | 190 | 1.4 | 8.58 | 10.7 | 9.17 | 10.7 |
+| Average Hits@10 | - | - | 11.3 | 16.1 | 16.3 | 28.9 |
+
+# 3.2 DOWNSTREAM TASKS
+
+Background knowledge is important for language understanding. We expect our pretraining approach to be beneficial to NLP applications where entity-level knowledge is essential. We consider two such applications: question answering and entity-typing. We find that a large portion of the questions in existing QA datasets are about entities and involve entity relations. In a way, our pretraining objective is analogous to question answering in a multiple-choice setting (Hermann et al., 2015). The entity-typing task requires the model to predict a set of correct types of entity mentions in a short context. The context itself can be insufficient and the training data for this task is small and noisy. We believe a model that encodes background entity knowledge can help in both cases.
+
+# 3.2.1 QUESTION ANSWERING
+
+Datasets We consider four question answering datasets:
+
+- WebQuestions (Berant et al., 2013) is originally a dataset for knowledge base question answering. The questions are collected using Google Suggest API and are all asking about simple relational facts of Freebase entities.
+- TriviaQA $^{7}$ (Joshi et al., 2017) includes questions from trivia and quiz-league websites. Apart from a small portion of questions to which the answers are numbers and free texts, $92.85\%$ of the answers are Wikipedia entities.
+- Quasar-T (Dhingra et al., 2017) is another dataset that includes trivia questions. Most of the answers in this dataset are noun phrases. According to our manual analysis on random samples, $88\%$ of the answers are real-world entities8.
+- SearchQA (Dunn et al., 2017) uses questions from the television quiz show Jeopardy! and we also find that almost all of the answers are real-world entities.
+
+Questions in all four datasets are created without the context of a paragraph, which resembles the scenario of practical question answering applications. All the questions except WebQuestions are written by humans. This indicates that humans are generally interested in asking questions to seek information about entities. We show the statistics and example questions in Table 2. We split the training data (created by distant supervision) of WebQuestions with a ratio (9:1) for training and development. Since our model is based on our own BERT implementations, in addition to the aforementioned entity-related datasets, we first use the standard SQuAD (Rajpurkar et al., 2016) benchmark to validate our model's answer extraction performance.
+
+Table 2: Properties of the QA Datasets.
+
+| Dataset | Train | Valid | Test | Example Questions |
+| --- | --- | --- | --- | --- |
+| WebQuestions | 3778 | - | 2032 | Who plays Stewie Griffin on Family Guy? |
+| TriviaQA | 87291 | 11274 | 10790 | What is the Japanese share index called? |
+| SearchQA | 99811 | 13893 | 27247 | Hero several books 11 discover's wizard? |
+| Quasar-T | 37012 | 3000 | 3000 | Which vegetable is a Welsh emblem? |
+
+Settings We adopt the fine-tuning approach to extract answer spans with pretrained models. We add linear layers over the last hidden states of the pretrained models to predict the start and end positions of the answer. Unlike SQuAD, questions in the datasets we use are not paired with paragraphs that contain the answer. We follow previous work (Chen et al., 2017; Wang et al., 2018a) and retrieve context paragraphs with information retrieval systems. Details of the context retrieval process for each dataset can be found in the appendix. Reader models are trained with distantly supervised data, i.e., we treat any text span in any retrieved paragraph as ground truth as long as it matches the original answers. Since the reader model needs to read multiple paragraphs to predict a single answer at inference time, we also train a BERT-based paragraph ranker with distantly supervised data to assign each paragraph a relevance score. The paragraph ranker takes question and paragraph pairs and predicts a score in the range [0, 1] for each pair. During inference, for each question and its evidence paragraph set, we first use the paragraph reader to extract the best answer from each paragraph. These answers are then ranked based on a linear combination of the answer extraction score (a sum of the log answer start and end scores) and the paragraph relevance score. We also evaluate model performance without using the relevance scores.
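The inference-time combination can be sketched as below. The mixing weight `alpha`, the tuple layout, and the function names are illustrative assumptions; the exact combination weights are not specified here and would be tuned on development data:

```python
import math

def extraction_score(start_prob, end_prob):
    """Answer extraction score: sum of log start and end scores."""
    return math.log(start_prob) + math.log(end_prob)

def pick_answer(per_paragraph, alpha=0.5):
    """per_paragraph: list of (answer_text, start_prob, end_prob,
    relevance) tuples, one best answer per retrieved paragraph.
    Rank by a linear combination of the extraction score and the
    paragraph ranker's relevance score; return the top answer."""
    best = max(
        per_paragraph,
        key=lambda a: alpha * extraction_score(a[1], a[2]) + (1 - alpha) * a[3],
    )
    return best[0]
```

Setting `alpha = 1.0` recovers the "no ranking score" variant evaluated alongside the full system.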
+
+Open-Domain QA Baselines We compare our QA model with the following systems:
+
+- DrQA (Chen et al., 2017) is an open-domain QA system which uses TF-IDF with bigram features for ranking and a simple attentive reader for answer extraction.
+- $\mathbf{R}^3$ (Wang et al., 2018a) is a reinforcement learning based system which jointly trains a paragraph ranker and a document reader.
+- DSQA (Lin et al., 2018) uses an RNN-based paragraph ranker and jointly trains the ranker and an attentive paragraph reader with a multi-task loss.
+- Evidence Aggregation (Wang et al., 2018b) uses a hybrid answer reranking module to aggregate answer information from multiple paragraphs and rerank the answers extracted from multiple paragraphs.
+- BERTserini (Yang et al., 2019a) is a BERT-based open-domain QA system, which uses a BM25-based retriever to retrieve 100 paragraphs and a BERT-based reader to extract answers. The paragraph reader is trained either with SQuAD (Rajpurkar et al., 2016) data or with distantly supervised data (Yang et al., 2019b).
+- ORQA (Lee et al., 2019) replaces the traditional BM25 ranking with a BERT-based ranker. The ranker model is pretrained on the whole Wikipedia corpus with an inverse cloze task that simulates the matching between questions and paragraphs. All text blocks in Wikipedia are pre-encoded as vectors and retrieved with Locality Sensitive Hashing.
+
+Results Table 3 shows the SQuAD results and Table 4 shows the open-domain results on the four datasets that are highly entity-related. From the SQuAD results, we observe that our BERT re-implementation performs better than the original model; this is because it is trained
+
+for twice as many updates: 2 million vs. 1 million for the original BERT. Although many of the answers in SQuAD are non-entity spans, the WKLM model we propose achieves better performance than BERT. We believe the improvement is due to both the masked language model and entity replacement objectives. Ablation experiments on the training objectives will be discussed in §3.2.3.
+
+Having established that our BERT re-implementation performs better than the original model, we compare with only our own BERT for the following experiments. From Table 4, we see that our model produces consistent improvements across different datasets. Compared to the 0.8 F1 improvements over BERT on SQuAD, we achieve an average of 2.7 F1 improvements over BERT on entity-related datasets when the ranking scores are not used. On TriviaQA and Quasar-T, WKLM outperforms our BERT even
+
+Table 3: SQuAD Dev Results.
+
+| Model | EM | F1 |
+| --- | --- | --- |
+| Google's BERT-base | 80.8 | 88.5 |
+| Google's BERT-large | 84.1 | 90.9 |
+| Our BERT-base | 83.4 | 90.5 |
+| WKLM (base) | 84.3 | 91.3 |
+
+when it uses ranking scores. Improvements in natural language question datasets (WebQuestions, TriviaQA, and Quasar-T) are more significant than SearchQA where the questions are informal queries. When we utilize ranking scores from a simple BERT based ranker, we are able to achieve the state-of-the-art on three of the four datasets.
+
+Table 4: Open-domain QA Results.
+
+| Model | WebQ EM | WebQ F1 | TriviaQA EM | TriviaQA F1 | Quasar-T EM | Quasar-T F1 | SearchQA EM | SearchQA F1 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| DrQA (Chen et al., 2017) | 20.7 | - | - | - | - | - | - | - |
+| R3 (Wang et al., 2018a) | - | - | 50.6 | 57.3 | 42.3 | 49.6 | 57.0 | 63.2 |
+| DSQA (Lin et al., 2018) | 18.5 | 25.6 | 48.7 | 56.3 | 42.2 | 49.3 | 49.0 | 55.3 |
+| Evidence Agg. (Wang et al., 2018b) | - | - | 50.6 | 57.3 | 42.3 | 49.6 | 57.0 | 63.2 |
+| BERTserini (Yang et al., 2019a) | - | - | 51.0 | 56.3 | - | - | - | - |
+| BERTserini+DS (Yang et al., 2019b) | - | - | 54.4 | 60.2 | - | - | - | - |
+| ORQA (Lee et al., 2019) | 36.4 | - | 45.0 | - | - | - | - | - |
+| Our BERT | 29.2 | 35.5 | 48.7 | 53.2 | 40.4 | 46.1 | 57.1 | 61.9 |
+| Our BERT + Ranking score | 32.2 | 38.9 | 52.1 | 56.5 | 43.2 | 49.2 | 60.6 | 65.9 |
+| WKLM | 30.8 | 37.9 | 52.2 | 56.7 | 43.7 | 49.9 | 58.7 | 63.3 |
+| WKLM + Ranking score | 34.6 | 41.8 | 58.1 | 63.1 | 45.8 | 52.2 | 61.7 | 66.7 |
+
+# 3.2.2 ENTITY TYPING
+
+To compare with an existing study (Zhang et al., 2019) that also attempts to incorporate entity knowledge into language models, we consider an additional entity typing task using the large FIGER dataset (Ling & Weld, 2012). The task is to assign a fine-grained type to entity mentions. We do that by adding two special tokens before and after the entity span to mark the entity position. We use the final representation of the start token ([CLS]) to predict the entity types. The model is fine-tuned on weakly-supervised training data with binary cross-entropy loss. We evaluate the models using strict accuracy, loose micro, and macro F1 scores.
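The three metrics can be computed from predicted and gold label sets per mention. A sketch using the standard loose macro/micro definitions commonly reported on FIGER (assuming that exact variant; helper name is illustrative):

```python
def figer_metrics(preds, golds):
    """preds/golds: parallel lists of sets of type labels per mention.
    Returns (strict accuracy, loose macro F1, loose micro F1)."""
    n = len(preds)
    # Strict: the predicted set must exactly match the gold set.
    strict = sum(p == g for p, g in zip(preds, golds)) / n
    # Loose macro: average per-example precision/recall, then F1.
    mp = sum(len(p & g) / len(p) if p else 0.0 for p, g in zip(preds, golds)) / n
    mr = sum(len(p & g) / len(g) if g else 0.0 for p, g in zip(preds, golds)) / n
    macro_f1 = 2 * mp * mr / (mp + mr) if mp + mr else 0.0
    # Loose micro: pool true positives and counts over all mentions.
    tp = sum(len(p & g) for p, g in zip(preds, golds))
    micro_p = tp / max(1, sum(len(p) for p in preds))
    micro_r = tp / max(1, sum(len(g) for g in golds))
    micro_f1 = 2 * micro_p * micro_r / (micro_p + micro_r) if micro_p + micro_r else 0.0
    return strict, macro_f1, micro_f1
```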
+
+We show the results in Table 5. We compare our model with two non-BERT neural baselines (Inui et al., 2017) that integrate a set of hand-crafted features: LSTM + Hand-crafted and Attentive +
+
+Table 5: Fine-grained Entity Typing Results on the FIGER dataset.
+
+| Model | Acc | Ma-F1 | Mi-F1 |
+| --- | --- | --- | --- |
+| LSTM + Hand-crafted (Inui et al., 2017) | 57.02 | 76.98 | 73.94 |
+| Attentive + Hand-crafted (Inui et al., 2017) | 59.68 | 78.97 | 75.36 |
+| BERT baseline (Zhang et al., 2019) | 52.04 | 75.16 | 71.63 |
+| ERNIE (Zhang et al., 2019) | 57.19 | 75.61 | 73.39 |
+| Our BERT | 54.53 | 79.57 | 74.74 |
+| WKLM | 60.21 | 81.99 | 77.00 |
+
+Hand-crafted; a vanilla BERT baseline and the ERNIE model (Zhang et al., 2019) that enhances BERT with knowledge base embeddings.
+
+First, we see that naively applying BERT is less effective than simple models combined with sparse hand-crafted features. Although the ERNIE model improves over BERT by 5.15 points, its performance still lags behind models that make good use of hand-crafted features. In contrast, although based on a stronger BERT model, our model achieves a larger absolute improvement (5.68 points) and sets a new state of the art for this task. Given the larger improvement margin, we believe our model, which directly learns knowledge from text, is more effective than the ERNIE method.
+
+# 3.2.3 ABLATION STUDY: THE EFFECT OF MASKED LANGUAGE MODEL LOSS
+
+In view of a recent study (Liu et al., 2019b) showing that simply extending the training time of BERT leads to stronger performance on various downstream tasks, we conduct further analysis to differentiate the effects of entity replacement training and masked language modeling. We compare our model with three variants: a model pretrained only with the knowledge learning objective (WKLM without MLM), a model trained with both knowledge learning and masked language modeling with a higher masking ratio (WKLM with $15\%$ MLM), and a BERT model trained with an additional 1 million updates on English Wikipedia (BERT + 1M MLM updates) and no knowledge learning.
+
+The ablation results are shown in Table 6. The results of WKLM without MLM validate that adding the language model objective is essential for downstream performance. We also find that masking out too many words (i.e., $15\%$ masking ratio as in the original BERT) leads to worse results. We conjecture that too many masked words outside entity mentions break parts of the context information and introduce noisy signals to knowledge learning. Results of continued BERT training show that more MLM updates are often beneficial, especially for SQuAD. However, on tasks that are more entity-centric, continued MLM training is less effective than our WKLM method. This suggests that our WKLM method could serve as an effective complementary recipe to masked language modeling when applied to entity-related NLP tasks.
+
+Table 6: Ablation Studies on Masked Language Model and Masking Ratios.
+
+| Model | SQuAD EM | SQuAD F1 | TriviaQA EM | TriviaQA F1 | Quasar-T EM | Quasar-T F1 | FIGER Acc |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Our BERT | 83.4 | 90.5 | 48.7 | 53.2 | 40.4 | 46.1 | 54.53 |
+| WKLM | 84.3 | 91.3 | 52.2 | 56.7 | 43.7 | 49.9 | 60.21 |
+| WKLM without MLM | 80.5 | 87.6 | 48.2 | 52.5 | 42.2 | 48.1 | 58.44 |
+| WKLM with 15% masking | 84.1 | 91.0 | 51.0 | 55.3 | 42.9 | 49.0 | 59.68 |
+| Our BERT + 1M MLM updates | 84.4 | 91.1 | 52.0 | 56.3 | 42.3 | 48.2 | 54.17 |
+
+# 4 RELATED WORK
+
+Pretrained Language Representations Early research on language representations focused on static unsupervised word representations (Mikolov et al., 2013; Pennington et al., 2014). Word embeddings leverage co-occurrences to learn latent word vectors that approximately reflect word semantics. Given that words can have different meanings in different contexts, more recent studies (McCann et al., 2017; Peters et al., 2018a) show that contextual language representations can be more powerful than static word embeddings in downstream tasks. This direction has been further explored at a larger scale with efficient Transformer architectures (Radford et al., 2019; Devlin et al., 2019; Yang et al., 2019c). Our WKLM method is based on these techniques and we focus on improving the knowledge ability of pretrained models.
+
+Knowledge-Enhanced NLP Models Background knowledge has been considered an indispensable part of language understanding (Fillmore et al., 1976; Minsky, 1988). As standard language encoders usually do not explicitly model knowledge, recent studies (Ahn et al., 2016; Yang & Mitchell, 2017; Logan et al., 2019; Liu et al., 2019a) have explored methods to incorporate external knowledge into NLP models. Most of these methods rely on additional inputs such as entity representations from structured knowledge bases. With the breakthrough of large-scale pretrained language
+
+encoders (Devlin et al., 2019), Zhang et al. (2019) and Peters et al. (2019) adopt similar ideas and propose entity-level knowledge enhancement training objectives to incorporate knowledge into pretrained models. Other recent studies (Mihaylov & Frank, 2018; Xiong et al., 2019) leverage external knowledge bases to enhance text-based question answering models. In contrast to these methods, our method utilizes minimal external entity information and does not require additional memory or architectural changes when applied to downstream tasks.
+
+# 5 CONCLUSION
+
+We introduce a weakly supervised method to encourage pretrained language models to learn entity-level knowledge. Our method uses minimal entity information during pretraining and does not introduce additional computation, memory or architectural overhead for downstream task fine-tuning. The trained model demonstrates strong performance on a probing fact completion task and two entity-related NLP tasks. Together, our results show the potential of directly learning entity-level knowledge from unstructured natural language and the benefits of large-scale knowledge-aware pretraining for downstream NLP tasks.
+
+# REFERENCES
+
+Sungjin Ahn, Heeyoul Choi, Tanel Parnamaa, and Yoshua Bengio. A neural knowledge language model. arXiv preprint arXiv:1608.00318, 2016.
+Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on freebase from question-answer pairs. In EMNLP, pp. 1533-1544. ACL, 2013.
+Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In NIPS, pp. 2787-2795, 2013.
+Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Çelikyilmaz, and Yejin Choi. COMET: commonsense transformers for automatic knowledge graph construction. In ACL (1), pp. 4762-4779. Association for Computational Linguistics, 2019.
+Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading wikipedia to answer open-domain questions. In ACL (1), pp. 1870-1879. ACL, 2017.
+Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. What does bert look at? an analysis of bert's attention. arXiv preprint arXiv:1906.04341, 2019.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In *NAACL-HLT* (1), pp. 4171–4186. ACL, 2019.
+Bhuwan Dhingra, Kathryn Mazaitis, and William W Cohen. Quasar: Datasets for question answering by search and reading. arXiv preprint arXiv:1707.03904, 2017.
+Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. Searchqa: A new q&a dataset augmented with context from a search engine. arXiv preprint arXiv:1704.05179, 2017.
+Charles J Fillmore et al. Frame semantics and the nature of language. In Annals of the New York Academy of Sciences: Conference on the origin and development of language and speech, volume 280, pp. 20-32, 1976.
+Karl Moritz Hermann, Tomás Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In NIPS, pp. 1693-1701, 2015.
+Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. Neural architectures for fine-grained entity type classification. In EACL (1), pp. 1271-1280. Association for Computational Linguistics, 2017.
+
+Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In ACL (1), pp. 1601-1611. Association for Computational Linguistics, 2017.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. Latent retrieval for weakly supervised open domain question answering. In ACL (1), pp. 6086-6096. Association for Computational Linguistics, 2019.
+Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. Denoising distantly supervised open-domain question answering. In ACL (1), pp. 1736-1745. Association for Computational Linguistics, 2018.
+Xiao Ling and Daniel S. Weld. Fine-grained entity recognition. In AAAI. AAAI Press, 2012.
+Angli Liu, Jingfei Du, and Veselin Stoyanov. Knowledge-augmented language model and its application to unsupervised named-entity recognition. In NAACL-HLT (1), pp. 1142-1150. Association for Computational Linguistics, 2019a.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019b.
+Robert Logan, Nelson F. Liu, Matthew E. Peters, Matt Gardner, and Sameer Singh. Barack's wife hillary: Using knowledge graphs for fact-aware language modeling. In ACL, pp. 5962-5971. ACL, 2019.
+Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. Learned in translation: Contextualized word vectors. In NIPS, pp. 6294-6305, 2017.
+Todor Mihaylov and Anette Frank. Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 821-832, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1076.
+Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111-3119, 2013.
+Marvin Minsky. Society of mind. Simon and Schuster, 1988.
+Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A three-way model for collective learning on multi-relational data. In ICML, volume 11, pp. 809-816, 2011.
+Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019.
+Jeffrey Pennington, Richard Socher, and Christopher D. Manning. Glove: Global vectors for word representation. In EMNLP, pp. 1532-1543. ACL, 2014.
+Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018a.
+Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, and Wen-tau Yih. Dissecting contextual word embeddings: Architecture and representation. In EMNLP, pp. 1499-1509. Association for Computational Linguistics, 2018b.
+Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. Knowledge enhanced contextual word representations. In EMNLP, 2019.
+
+Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. Language models as knowledge bases? In EMNLP, 2019.
+Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 2019.
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP, pp. 2383-2392. The Association for Computational Linguistics, 2016.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, pp. 5998-6008, 2017.
+Denny Vrandecic and Markus Krötzsch. Wikidata: a free collaborative knowledge base. Communications of the ACM, 57(10):78-85, 2014.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR, 2019a.
+Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerry Tesauro, Bowen Zhou, and Jing Jiang. $\mathbf{R}^{3}$ : Reinforced ranker-reader for open-domain question answering. In AAAI, pp. 5981-5988. AAAI Press, 2018a.
+Shuohang Wang, Mo Yu, Jing Jiang, Wei Zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, and Murray Campbell. Evidence aggregation for answer re-ranking in open-domain question answering. In ICLR, 2018b.
+Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, and Bing Xiang. Multi-passage bert: A globally normalized bert model for open-domain question answering. arXiv preprint arXiv:1908.08167, 2019b.
+Wenhan Xiong, Thien Hoang, and William Yang Wang. DeepPath: A reinforcement learning method for knowledge graph reasoning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 564-573, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1060.
+Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, and William Yang Wang. Improving question answering over incomplete KBs with knowledge-aware reader. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1417.
+Bishan Yang and Tom M. Mitchell. Leveraging knowledge bases in lstms for improving machine reading. In ACL (1), pp. 1436-1446. ACL, 2017.
+Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. End-to-end open-domain question answering with bertserini. arXiv preprint arXiv:1902.01718, 2019a.
+Wei Yang, Yuqing Xie, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. Data augmentation for bert fine-tuning in open-domain question answering. arXiv preprint arXiv:1904.06652, 2019b.
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019c.
+Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In EMNLP, pp. 93-104. ACL, 2018.
+Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. Record: Bridging the gap between human and machine commonsense reading comprehension. arXiv preprint arXiv:1810.12885, 2018.
+Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. ERNIE: enhanced language representation with informative entities. In ACL (1), pp. 1441-1451. ACL, 2019.
+
+Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In ICCV, pp. 19-27. IEEE Computer Society, 2015.
+
+# A APPENDIX
+
+Implementation Details and Hyperparameters We implement our method using Fairseq (Ott et al., 2019), and the fact completion baselines are implemented with Huggingface's PyTorch-Transformers. We pretrain the models with 32 V100 GPUs for 3 days. We use at most 2 GPUs for fine-tuning the paragraph reader and 8 GPUs for fine-tuning the paragraph ranker. The entity-typing experiments require larger batch sizes and take 8 GPUs for training.
+
+For the knowledge-learning pretraining phase, we use the Adam optimizer (Kingma & Ba, 2014) with learning rate 1e-5, batch size 128, and weight decay 0.01. To train the paragraph reader for open-domain QA, we select the best learning rate from $\{1\mathrm{e} - 6,5\mathrm{e} - 6,1\mathrm{e} - 5,2\mathrm{e} - 5\}$ and the last-layer dropout ratio from $\{0.1,0.2\}$ . We set the maximum number of training epochs to 10 and the batch size to 32. The maximal input sequence length is 512 for WebQuestions and 128 for the other three datasets, which use sentence-level paragraphs. For the paragraph ranker, we choose the learning rate from $\{1\mathrm{e} - 5,2\mathrm{e} - 5,5\mathrm{e} - 6\}$ and use dropout 0.1 and batch size 256. The maximal sequence length for each dataset is consistent with the one used for training the paragraph reader. The linear combination of ranking and extraction scores is selected based on validation performance. For SQuAD experiments, we select the learning rate from $\{1\mathrm{e} - 5,5\mathrm{e} - 6,2\mathrm{e} - 5,3\mathrm{e} - 5\}$ , the batch size from $\{8,16\}$ , and the last-layer dropout ratio from $\{0.1,0.2\}$ . We set the maximal sequence length to 512 and the maximal number of training epochs to 5. For entity typing, we select the learning rate from $\{1\mathrm{e} - 5,2\mathrm{e} - 5,3\mathrm{e} - 5,5\mathrm{e} - 5\}$ and the batch size from $\{128,256\}$ . We set the maximal sequence length to 256 and the last-layer dropout ratio to 0.1. The model is fine-tuned for at most 3 epochs to prevent overfitting. The threshold for type prediction is selected on the validation set.
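The grid searches above amount to exhaustive sweeps over small search spaces. A minimal sketch of that selection loop (the `train_and_eval` callable and the `select_best` helper are illustrative placeholders, not the authors' code):

```python
import itertools

def select_best(train_and_eval, search_space):
    """Exhaustively sweep `search_space` (dict of name -> list of values)
    and return (best_config, best_score).

    `train_and_eval` stands in for fine-tuning a model with the given
    config and returning its validation metric (higher is better).
    """
    keys = list(search_space)
    best_cfg, best_score = None, float("-inf")
    for values in itertools.product(*(search_space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = train_and_eval(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# The paragraph reader's search space described above.
reader_space = {
    "lr": [1e-6, 5e-6, 1e-5, 2e-5],
    "last_layer_dropout": [0.1, 0.2],
}
```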
+
+Context Collection for QA Datasets For WebQuestions, we collect evidence context using the document retriever of DrQA (Chen et al., 2017), which uses a TF-IDF based metric to retrieve the top 5 Wikipedia articles. For Quasar-T, we use Lucene-ranked paragraphs. For SearchQA and TriviaQA, we use paragraphs ranked by search engines. Following existing research (Wang et al., 2018b; Lin et al., 2018), we use sentence-level paragraphs for Quasar-T (50 sentences), SearchQA (100 sentences), and TriviaQA (100 sentences).
+
+Correlation between Fact Completion Results and Properties of Relations Figure 2 shows that BERT's fact completion results are unstable across relations with different properties: its performance is strongly correlated with the size of the candidate entity set and with the number of groundtruth answers. Compared to BERT, WKLM is often less sensitive to these two factors.
+
+
+Figure 2: Left: Correlation between candidate set size and hits@10; Right: Correlation between number of groundtruth answers and hits@10.
+
+
\ No newline at end of file
diff --git a/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/images.zip b/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..dd2f2c6865cb5012cc667ca5e5dd1da94ee3e619
--- /dev/null
+++ b/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a127810ce8da3c5b9d785ccbb65067d638df1d4fc2ae8da3ceea3f550f1a2d3f
+size 402288
diff --git a/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/layout.json b/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..352f188989c39b63cfdfe38760b16edfb6a9ea58
--- /dev/null
+++ b/pretrainedencyclopediaweaklysupervisedknowledgepretrainedlanguagemodel/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0fec42cec452866a6317e7eb948df21507ca7f58073a523de2a7ce965f5126d2
+size 305214
diff --git a/pretrainingtasksforembeddingbasedlargescaleretrieval/b163055e-a164-4331-bc31-b73053db8b41_content_list.json b/pretrainingtasksforembeddingbasedlargescaleretrieval/b163055e-a164-4331-bc31-b73053db8b41_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8dae1f6de8c389369a11b3fd7814e5620e67c983
--- /dev/null
+++ b/pretrainingtasksforembeddingbasedlargescaleretrieval/b163055e-a164-4331-bc31-b73053db8b41_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e2ff51bffcfe842c4723a2ff0d2502826e2a03ccef376113d61405ae3d4ca1f9
+size 72686
diff --git a/pretrainingtasksforembeddingbasedlargescaleretrieval/b163055e-a164-4331-bc31-b73053db8b41_model.json b/pretrainingtasksforembeddingbasedlargescaleretrieval/b163055e-a164-4331-bc31-b73053db8b41_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..4f05272325597d5726ad2684b70fe7e44b99bd17
--- /dev/null
+++ b/pretrainingtasksforembeddingbasedlargescaleretrieval/b163055e-a164-4331-bc31-b73053db8b41_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05bbfbda743da7441fee58e29aaa8de7cfaddb9bedd7163169d5259c01ed60a6
+size 88130
diff --git a/pretrainingtasksforembeddingbasedlargescaleretrieval/b163055e-a164-4331-bc31-b73053db8b41_origin.pdf b/pretrainingtasksforembeddingbasedlargescaleretrieval/b163055e-a164-4331-bc31-b73053db8b41_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b7abe43f63955281308113f9581c0c8b89327dc7
--- /dev/null
+++ b/pretrainingtasksforembeddingbasedlargescaleretrieval/b163055e-a164-4331-bc31-b73053db8b41_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d2b875d5f48b82bfc98afca980f45168fec21e7d9235fea2b72670cc0839ddd
+size 2034297
diff --git a/pretrainingtasksforembeddingbasedlargescaleretrieval/full.md b/pretrainingtasksforembeddingbasedlargescaleretrieval/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f9ebf3670efcae6c7ae2bac7fbc1550579684e6e
--- /dev/null
+++ b/pretrainingtasksforembeddingbasedlargescaleretrieval/full.md
@@ -0,0 +1,231 @@
+# PRE-TRAINING TASKS FOR EMBEDDING-BASED LARGE-SCALE RETRIEVAL
+
+Wei-Cheng Chang*, Felix X. Yu, Yin-Wen Chang, Yiming Yang, Sanjiv Kumar
+Carnegie Mellon University & Google
+{wchang2,yiming}@cs.cmu.edu,{felixyu,yinwen,sanjivk}@google.com
+
+# ABSTRACT
+
+We consider the large-scale query-document retrieval problem: given a query (e.g., a question), return the set of relevant documents (e.g., paragraphs containing the answer) from a large document corpus. This problem is often solved in two steps. The retrieval phase first reduces the solution space, returning a subset of candidate documents. The scoring phase then re-ranks the documents. Critically, the retrieval algorithm not only desires high recall but also needs to be highly efficient, returning candidates in time sublinear in the number of documents. Unlike the scoring phase, which has recently seen significant advances from BERT-style pre-training tasks on cross-attention models, the retrieval phase remains less well studied. Most previous works rely on classic Information Retrieval (IR) methods such as BM-25 (token matching + TF-IDF weights). These models only accept sparse handcrafted features and cannot be optimized for different downstream tasks of interest. In this paper, we conduct a comprehensive study on embedding-based retrieval models. We show that the key ingredient of learning a strong embedding-based Transformer model is the set of pre-training tasks. With adequately designed paragraph-level pre-training tasks, the Transformer models can remarkably improve over the widely-used BM-25 as well as embedding models without Transformers. The paragraph-level pre-training tasks we studied are Inverse Cloze Task (ICT), Body First Selection (BFS), Wiki Link Prediction (WLP), and the combination of all three.
+
+# 1 INTRODUCTION
+
+We consider the large-scale retrieval problem: given a query, return the most relevant documents from a large corpus, where the size of the corpus can be hundreds of thousands or more. One can view this problem as learning a scoring function $f: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ , that maps a pair of a query and a document $(q, d) \in \mathcal{X} \times \mathcal{Y}$ to a score $f(q, d)$ . The function should be designed such that the relevant $(q, d)$ pairs have high scores, whereas the irrelevant ones have low scores. Many real-world applications besides query-document retrieval can be cast into this form. For example, in recommendation systems, $q$ represents a user query and $d$ represents a candidate item to recommend (Krichene et al., 2019). In extreme multi-label classification, $q$ represents a web-page document and $d$ represents the categories or hashtags of interest (Jain et al., 2019; Chang et al., 2019). In open-domain question answering, $q$ represents a question and $d$ represents an evidence passage containing the answer (Chen et al., 2017; Hu et al., 2019; Lee et al., 2019).
+
+Central to the above is designing the scoring function $f$ . Recently, BERT (Devlin et al., 2019), along with its many successors such as XLNet (Yang et al., 2019b) and RoBERTa (Liu et al., 2019), has led to significant improvements on many NLP tasks such as sentence-pair classification and question answering. In BERT, the scoring function $f$ is a pre-trained deep bidirectional Transformer model. While BERT-style cross-attention models are very successful, they cannot be directly applied to large-scale retrieval problems because computing $f(\mathbf{q},\mathbf{d})$ for every possible document can be prohibitively expensive. Thus, one typically first uses a less powerful but more efficient algorithm (another scoring function $f$ ) to reduce the solution space (the "retrieval phase"), and then uses the BERT-style model to re-rank the retrieved documents (the "scoring phase").
+
+The retrieval phase is critical. Ideally, the algorithm should have high recall; otherwise, many relevant documents will not even be considered in the scoring phase. The algorithm also needs to be highly efficient: it should return a small subset of relevant documents in time sublinear in the number of all documents. While significant developments are advancing the scoring algorithms, the retrieval algorithms remain less studied, and they are the focus of this paper.
+
+Retrieval algorithms fall into two categories. The first is classic information retrieval (IR) algorithms relying on token-based matching. One example is BM-25 (Robertson et al., 2009), which remains the most commonly used (Nguyen et al., 2016; Yang et al., 2017; 2019a) and hard-to-beat (Chapelle & Chang, 2011; Lee et al., 2019) algorithm. Here the scoring function $f$ is based on token matching between two high-dimensional sparse vectors with TF-IDF token weights, and retrieval can be done in sublinear time using an inverted index. Despite their wide usage, these algorithms are handcrafted and therefore cannot be optimized for a specific task.
+
+The second option is an embedding-based model that jointly embeds queries and documents in the same embedding space and uses an inner product or cosine distance to measure the similarity between queries and documents. Let the query embedding model be $\phi(\cdot)$ and the document embedding model be $\psi(\cdot)$ . The scoring function is
+
+$$
+f (\boldsymbol {q}, \boldsymbol {d}) = \langle \phi (\boldsymbol {q}), \psi (\boldsymbol {d}) \rangle .
+$$
+
+In the inference stage, retrieving relevant documents then becomes finding the nearest neighbors of a query in the embedding space. Since the embeddings of all candidate documents can be precomputed and indexed, the inference can be done efficiently with approximate nearest neighbor search algorithms in the embedding space (Shrivastava & Li, 2014; Guo et al., 2016).
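This precompute-then-rank flow can be sketched in a few lines of numpy. The encoders below are stand-in random projections rather than trained Transformer towers; only the pattern of pre-computing document embeddings once and ranking by inner product is the point:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in encoders: in the paper, phi and psi are Transformer towers;
# here they are fixed random projections, just to show the retrieval flow.
W_q = rng.normal(size=(32, 8))        # query tower projection
W_d = rng.normal(size=(32, 8))        # document tower projection

def phi(q_feats):
    return q_feats @ W_q              # query embedding

def psi(d_feats):
    return d_feats @ W_d              # document embedding(s)

# Pre-compute and index all document embeddings once, offline.
doc_feats = rng.normal(size=(1000, 32))
doc_emb = psi(doc_feats)              # (num_docs, 8)

def retrieve(q_feats, top_k=5):
    """Score f(q, d) = <phi(q), psi(d)> against every indexed document
    and return the indices of the top_k highest-scoring ones."""
    scores = doc_emb @ phi(q_feats)   # (num_docs,)
    return np.argsort(-scores)[:top_k]

query = rng.normal(size=32)
hits = retrieve(query)
```

In practice the final `argsort` is replaced by an approximate nearest-neighbor index, as discussed above.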
+
+In this paper, we refer to the above embedding-based model as the two-tower retrieval model, because the query and document embeddings are coming from two separate "towers" of neural networks. In the literature, it is also known as the Siamese network (Das et al., 2016; Triantafillou et al., 2017) or dual-encoder model (Cer et al., 2018; Mazaré et al., 2018). Compared to the sparse token-based models, the two-tower models can capture deeper semantic relationships within queries and documents, and the models can be optimized specifically for the task being considered.
+
+At the heart of two-tower models are the embedding functions $\phi(\cdot)$ and $\psi(\cdot)$ . A modern choice is using Transformers to model the attention within queries and within documents, rather than the cross-attention between them as in the BERT model. The token-level masked-LM (MLM) pre-training task is crucial to the success of BERT-style cross-attention models. Nevertheless, which pre-training tasks are useful for improving two-tower Transformer models in large-scale retrieval remains a crucial yet unsolved research problem. In this paper, we aim to answer this question by studying different pre-training tasks for the two-tower Transformer models. We contribute the following insights:
+
+- The two-tower Transformer models with proper pre-training can significantly outperform the widely used BM-25 algorithm;
+- Paragraph-level pre-training tasks such as Inverse Cloze Task (ICT), Body First Selection (BFS), and Wiki Link Prediction (WLP) hugely improve the retrieval quality, whereas the most widely used pre-training task (the token-level masked-LM) gives only marginal gains;
+- The two-tower models with deep Transformer encoders benefit more from paragraph-level pre-training compared to their shallow bag-of-words counterpart (BoW-MLP).
+
+To the best of our knowledge, this is the first comprehensive study on pre-training tasks for efficient large-scale retrieval algorithms. The rest of the paper is organized as follows. We start by introducing the two-tower retrieval model in Section 2. The pre-training tasks are presented in Section 3, and the experiments and analysis are presented in Section 4. Finally, we conclude this work in Section 5.
+
+# 2 THE TWO-TOWER RETRIEVAL MODEL
+
+Given a query $\pmb{q} \in \mathcal{X}$ and a document $\pmb{d} \in \mathcal{Y}$ , we consider two-tower retrieval models that consist of two encoder functions, $\phi : \mathcal{X} \to \mathbb{R}^k$ and $\psi : \mathcal{Y} \to \mathbb{R}^k$ which map a sequence of tokens in $\mathcal{X}$ and $\mathcal{Y}$ to their associated embeddings $\phi(\pmb{q})$ and $\psi(\pmb{d})$ , respectively. The scoring function $f: \mathbb{R}^k \times \mathbb{R}^k \to \mathbb{R}$
+
+
+Figure 1: Difference between two-tower models and cross-attention models. Following previous works, we consider [CLS] embedding and average pooling as the aggregator's output for the two-tower Transformer model and the two-tower MLP model, respectively.
+
+
+
+is then defined to be the inner product of the embeddings
+
+$$
+f (\boldsymbol {q}, \boldsymbol {d}) = \langle \phi (\boldsymbol {q}), \psi (\boldsymbol {d}) \rangle . \tag {1}
+$$
+
+In this paper, we are interested in parameterizing the encoders $\phi, \psi$ as deep Transformer models (Vaswani et al., 2017) due to their expressive power in modeling natural language.
+
+In the rest of this section, we illustrate the advantage of two-tower models in the inference phase; discuss the pros and cons of two-tower models in comparison with BERT-like cross-attention models; present the learning procedure of estimating model parameters under the maximum likelihood principle; and review the related works.
+
+Inference The difference between two-tower models and cross-attention models is shown in Figure 1. The advantage of two-tower models is their efficiency at inference time. First, all the document embeddings can be pre-computed. Then, given an unseen query $\mathbf{q}$ , we only need to rank the documents based on their inner products with the query embedding. This is far more efficient than running inference with a cross-attention BERT-style model (often used in the scoring stage). To see this, note that the scoring function of a BERT-style model has the form
+
+$$
+f _ {\theta , \boldsymbol {w}} (\boldsymbol {q}, \boldsymbol {d}) = \psi_ {\theta} (\boldsymbol {q} \oplus \boldsymbol {d}) ^ {T} \boldsymbol {w}, \tag {2}
+$$
+
+where $\oplus$ denotes the concatenation of the query and the document sequences and $w\in \mathbb{R}^k$ is an additional model parameter. In BERT, for each query, one has to run this expensive inference on all documents. For example, with a 128-dimensional embedding space, computing the inner products between 1000 query embeddings and 1 million document embeddings takes only hundreds of milliseconds on CPUs, while computing the same scores with cross-attention models takes hours, if not more, even on GPUs.
+
+Furthermore, retrieving the closest documents in the embedding space can be performed in sublinear time with well-studied maximum inner product search (MIPS) algorithms with almost no loss in recall (Shrivastava & Li, 2014; Guo et al., 2016).
+
+Learning One unique advantage of the two-tower retrieval model in comparison with classic IR algorithms is the ability to train it for specific tasks. In this paper, we assume that the training data is presented as relevant "positive" query-document pairs $\mathcal{T} = \{(q_i,d_i)\}_{i=1}^{|\mathcal{T}|}$ . Let $\theta$ be the model parameters. We estimate the model parameters by maximizing the log likelihood
+
+$\max_{\theta}\sum_{(\pmb {q},\pmb {d})\in \mathcal{T}}\log p_{\theta}(\pmb {d}|\pmb {q})$ where the conditional probability is defined by the Softmax:
+
+$$
+p _ {\theta} (\boldsymbol {d} | \boldsymbol {q}) = \frac {\exp \left(f _ {\theta} (\boldsymbol {q} , \boldsymbol {d})\right)}{\sum_ {\boldsymbol {d} ^ {\prime} \in \mathcal {D}} \exp \left(f _ {\theta} (\boldsymbol {q} , \boldsymbol {d} ^ {\prime})\right)}, \tag {3}
+$$
+
+and $\mathcal{D}$ is the set of all possible documents. The Softmax involves computing the expensive denominator of Equation (3), a.k.a. the partition function, which scales linearly with the number of documents. In practice, we use Sampled Softmax, an approximation of the full Softmax where we replace $\mathcal{D}$ with a small subset of documents in the current batch, with a proper correction term to ensure the unbiasedness of the partition-function estimate (Bengio & Senécal, 2008). Sampled Softmax has been widely used in language modeling (Chen et al., 2016; Grave et al., 2017), recommendation systems (Yu et al., 2017; Krichene et al., 2019) and extreme classification (Blanc & Rendle, 2018; Reddi et al., 2019).
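The in-batch variant of this objective can be sketched in numpy: the other documents in the batch act as negatives, so the logits form a square matrix whose diagonal holds the positive scores of Equation (3). This is a simplified sketch that omits the sampling-correction term mentioned above:

```python
import numpy as np

def in_batch_softmax_loss(q_emb, d_emb):
    """Approximate the Softmax of Equation (3) using the other documents
    in the batch as negatives: logits[i, j] = <phi(q_i), psi(d_j)>, with
    the matching document d_i as the positive for query q_i.

    q_emb, d_emb: (batch, k) arrays of query/document embeddings.
    Returns the mean negative log-likelihood over the batch.
    (The correction term for sampled negatives is omitted for brevity.)
    """
    logits = q_emb @ d_emb.T                        # (batch, batch)
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    n = q_emb.shape[0]
    return -log_probs[np.arange(n), np.arange(n)].mean()
```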
+
+Since we often have a limited amount of supervised data from the downstream task, it is important to first train the retrieval model with positive pairs $\mathcal{T}$ from a set of pre-training tasks. We then fine-tune it with positive pairs $\mathcal{T}$ from the downstream task. We will present the set of pre-training tasks we study in Section 3.
+
+Related Works Cer et al. (2018) study the two-tower Transformer model as a universal sentence encoder. The model is learned with multiple tasks including the unsupervised Skip-Thought task (Kiros et al., 2015), the supervised conversation input-response task (Henderson et al., 2017), and the supervised sentence classification SNLI task (Bowman et al., 2015). Humeau et al. (2019) propose the Poly-encoders architecture to balance the computation/expressiveness tradeoff between two-tower models and cross-attention models. Reimers & Gurevych (2019) fine-tune the deep two-tower models on two supervised datasets, SNLI and MNLI (Williams et al., 2018), then apply it in solving other downstream tasks. Unlike all the above works that consider training the two-tower Transformer models on a limited amount of supervised corpus for the sentence classification tasks, we study different pre-training tasks and their contributions in the large-scale retrieval settings.
+
+Another closely related topic is the open-domain question answering. Previous works consider using BM25 or other lexical matching methods to retrieve the top-k relevant passages efficiently and then deploy the more expensive cross-attention scoring function to find the answer (Chen et al., 2017; Yang et al., 2017; 2019a). Das et al. (2019) encode query and document separately with LSTM encoders. They employ a training procedure different from ours and do not consider pre-training. Very recently, Lee et al. (2019) propose to pre-train two-tower Transformer models with the Inverse Cloze Task (ICT) to replace BM25 in the passage retrieval phase. The advantage is that the retriever can be trained jointly with the reader/scorer. Nevertheless, their pre-trained two-tower models do not outperform BM25 on the SQuAD dataset, potentially because the fine-tuning is only performed on the query-tower.
+
+Model distillation (Hinton et al., 2015) can be used to compress expensive BERT-like cross-attention models into efficient two-tower Transformer models for large-scale retrieval problems. For example, Tang et al. (2019) demonstrate initial success in distilling the BERT model into a two-tower model with BiLSTM as encoders. The pre-training tasks we study in this paper can be used as additional supervision in the distillation process, and therefore complementary to model distillation.
+
+# 3 PRE-TRAINING TASKS OF DIFFERENT SEMANTIC GRANULARITIES
+
+As mentioned in Section 2, due to the limited amount of supervised data from downstream tasks, a crucial step of learning deep retrieval models is to pre-train the model with a set of pre-training tasks (we will verify this in Section 4). Sentence-level pre-training tasks have been studied before. One example is reconstructing the surface form of surrounding sentences given the encoded sentence (Le & Mikolov, 2014; Kiros et al., 2015), and another one is discriminating the next sentence from random candidates (Jernite et al., 2017; Logeswaran & Lee, 2018).
+
+In this paper, we assume that the pre-training data is defined as positive query-document $(q,d)$ pairs. A good pre-training task should have the following two properties. 1) It should be relevant to the downstream task. For example, when solving the question-answering retrieval problem, the model should capture different granularities of semantics between the query and document. The semantics
+
+
+Figure 2: An illustrative example of the three pre-training tasks where each query $q$ is highlighted in different colors. All queries are paired with the same text block $d$ . Concretely, $(q_1, d)$ of ICT is defined locally within a paragraph; $(q_2, d)$ of BFS is defined globally within an article; $(q_3, d)$ of WLP is defined distantly across two related articles hyper-linked by the Wikipedia entity.
+
+can be the local context within a paragraph, global consistency within a document, and even semantic relation between two documents. 2) It should be cost-efficient to collect the pre-training data, ideally not requiring additional human supervision.
+
+In light of the above requirements, we present three pre-training tasks that emphasize different aspects of the semantics between queries and documents: Inverse Cloze Task (ICT), Body First Selection (BFS), and Wiki Link Prediction (WLP). Specifically, BFS and WLP are newly proposed in this paper. The training data for all of these tasks can be freely obtained from Wikipedia without an additional manual labeling process. Figure 2 provides illustrative examples of these tasks.
+
+Inverse Cloze Task (ICT) Given a passage $\pmb{p}$ consisting of $n$ sentences, $\pmb{p} = \{s_1, \dots, s_n\}$ , the query $\pmb{q}$ is a sentence randomly drawn from the passage, $\pmb{q} = s_i, i \sim [1, n]$ , and the document $\pmb{d}$ is the rest of sentences, $\pmb{d} = \{s_1, \dots, s_{i-1}, s_{i+1}, \dots, s_n\}$ . See $(\pmb{q}_1, d)$ in Figure 2 as an example. This task captures the semantic context of a sentence and was originally proposed by Lee et al. (2019).
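The ICT construction follows directly from this definition. A minimal sketch, assuming passages are already split into sentences:

```python
import random

def ict_pair(sentences, rng=random):
    """Inverse Cloze Task: draw one sentence of the passage as the
    pseudo-query; the remaining sentences form the pseudo-document."""
    assert len(sentences) >= 2, "need at least one sentence left as context"
    i = rng.randrange(len(sentences))
    return sentences[i], sentences[:i] + sentences[i + 1:]

passage = [
    "Zebras have four gaits: walk, trot, canter and gallop.",
    "They are generally slower than horses.",
    "A zebra can outrun a horse over long distances.",
]
q, d = ict_pair(passage, random.Random(0))
```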
+
+Body First Selection (BFS) We propose BFS to capture semantic relationships beyond the local paragraph. Here, the query $q_{2}$ is a random sentence in the first section of a Wikipedia page, and the document $d$ is a random passage from the same page (Figure 2). Since the first section of a Wikipedia article is often the description or summary of the whole page, we expect it to contain information central to the topic.
+
+Wiki Link Prediction (WLP) We propose WLP to capture inter-page semantic relations. The query $q_{3}$ is a random sentence in the first section of a Wikipedia page, and the document $d$ is a passage from another page that contains a hyperlink to the page of $q_{3}$ (Figure 2). Intuitively, a hyperlink indicates a relationship between the two Wikipedia pages. Again, we take a sentence from the first section because it is often the description or summary of the topic.
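
BFS and WLP can likewise be sketched as simple sampling procedures over a Wikipedia-like corpus. In the sketch below, the page fields `title`, `first_section`, `body_passages`, and `links` are hypothetical names chosen for illustration:

```python
import random

def make_bfs_pair(page, rng=random):
    """BFS: the query is a random first-section sentence; the document
    is a random passage from the body of the same page."""
    return rng.choice(page["first_section"]), rng.choice(page["body_passages"])

def make_wlp_pair(page, corpus, rng=random):
    """WLP: the query is a random first-section sentence of `page`; the
    document is a passage from another page that hyperlinks to `page`."""
    linking = [p for p in corpus if page["title"] in p["links"]]
    source = rng.choice(linking)
    return rng.choice(page["first_section"]), rng.choice(source["body_passages"])
```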
+
+Masked LM (MLM) In addition to the above tasks, we also consider the classic masked language model (MLM) pre-training task as a baseline: predict the randomly masked tokens in a sentence. MLM is the primary pre-training task used in BERT (Devlin et al., 2019).
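
For completeness, the masking step of the MLM baseline can be sketched as follows (BERT's refinement of replacing some masked positions with random or unchanged tokens is omitted for brevity):

```python
import random

def mask_tokens(tokens, mask_prob=0.15, mask_token="[MASK]", rng=random):
    """Corrupt a token sequence for MLM: each token is independently
    replaced by [MASK] with probability mask_prob; returns the corrupted
    sequence and a map from masked positions to their original tokens."""
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            corrupted.append(mask_token)
            targets[i] = tok
        else:
            corrupted.append(tok)
    return corrupted, targets
```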
+
+| Pre-training tasks | #tokens | #pairs | avg. #query tokens | avg. #doc tokens |
+| --- | --- | --- | --- | --- |
+| ICT | 11.2B | 50.2M | 30.41 | 193.89 |
+| BFS | 3.3B | 17.5M | 28.02 | 160.46 |
+| WLP | 2.7B | 24.9M | 29.42 | 82.14 |
+
+Table 1: Data statistics of the three pre-training tasks. #query tokens denotes the average number of tokens per query, and #doc tokens the average number of tokens per passage.
+
+# 4 EXPERIMENTS
+
+# 4.1 EXPERIMENTAL SETTING
+
+The two-tower retrieval model Each tower of the retrieval model follows the architecture and hyper-parameters of the 12-layer BERT-base model. For both towers, the final embedding is generated by applying a linear layer to the hidden state of the [CLS] token, and the embedding dimension is 512. The sequence lengths for the query encoder and document encoder are set to 64 and 288, respectively. We pre-train the model on 32 TPU v3 chips for 100K steps with a batch size of 8192; this process takes about 2.5 days. We use the Adam optimizer with an initial learning rate of $1 \times 10^{-4}$ and a warm-up ratio of 0.1, followed by linear learning rate decay. For fine-tuning, the learning rate of Adam is set to $5 \times 10^{-5}$ with 2000 training steps and a batch size of 512.
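
The retrieval step reduces to a linear projection of each tower's [CLS] state followed by an inner-product search. The NumPy sketch below uses toy random weights in place of the pre-trained BERT towers; inner-product scoring between the two embeddings is the standard two-tower choice assumed here:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, emb_dim = 768, 512               # BERT-base hidden size; embedding dim

def embed(cls_state, W, b):
    """Project the [CLS] hidden state to the final embedding with a linear layer."""
    return cls_state @ W + b

# Separate projection weights for the query tower and the document tower (toy values).
W_q, b_q = rng.normal(0, 0.02, (hidden, emb_dim)), np.zeros(emb_dim)
W_d, b_d = rng.normal(0, 0.02, (hidden, emb_dim)), np.zeros(emb_dim)

q_cls = rng.normal(size=hidden)          # query tower's [CLS] state
d_cls = rng.normal(size=(100, hidden))   # [CLS] states for 100 candidate documents

scores = embed(d_cls, W_d, b_d) @ embed(q_cls, W_q, b_q)  # inner-product scores
top10 = np.argsort(-scores)[:10]         # retrieve the top-10 candidates
```

Because the document embeddings do not depend on the query, they can be pre-computed and indexed for fast maximum inner product search at serving time.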
+
+Pre-training tasks We compare the token-level pre-training task MLM with the three paragraph-level pre-training tasks: ICT, BFS and WLP. The data for ICT, BFS and WLP are generated from the Wikipedia corpus; the data statistics are reported in Table 1. Note that #tokens represents the number of sub-words tokenized by WordPiece (Wu et al., 2016). The pre-training tasks define the positive $(q, d)$ pairs for learning the two-tower Transformer models. For ICT, the document $d$ is the pair of article title and passage, separated by the [SEP] symbol, as input to the doc-tower.
+
+We propose to pre-train the two-tower Transformer models jointly with all three paragraph-level pre-training tasks, hence the name ICT+BFS+WLP. Here the model is pre-trained on one combined set of $(q,d)$ pairs, where each pair is uniformly sampled from the three pre-training tasks in Table 1. See Sections 4.2 and 4.3 for its outstanding performance over other baselines.
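
Reading "uniformly sampled from the three pre-training tasks" as uniform over tasks, a joint ICT+BFS+WLP batch can be assembled as in this sketch:

```python
import random

def sample_joint_batch(task_pairs, batch_size, rng=random):
    """Assemble an ICT+BFS+WLP batch: for each slot, first pick one of
    the tasks uniformly, then pick a (q, d) pair from that task's data."""
    tasks = sorted(task_pairs)
    return [rng.choice(task_pairs[rng.choice(tasks)]) for _ in range(batch_size)]
```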
+
+Downstream tasks We consider the Retrieval Question-Answering (ReQA) benchmark proposed by Ahmad et al. (2019). The two QA datasets we consider are SQuAD and Natural Questions. Note that each entry of a QA dataset is a tuple $(\pmb{q},\pmb{a},\pmb{p})$ , where $\pmb{q}$ is the question, $\pmb{a}$ is the answer span, and $\pmb{p}$ is the evidence passage containing $\pmb{a}$ . Following Ahmad et al. (2019), we split a passage into sentences, $\pmb{p} = s_1 s_2 \ldots s_n$ , and transform the original entry $(\pmb{q},\pmb{a},\pmb{p})$ into a new tuple $(\pmb{q},s_i,\pmb{p})$ , where $s_i$ is the sentence containing the answer span $\pmb{a}$ .
+
+The retrieval problem is: given a question $q$ , retrieve the correct sentence and evidence passage pair $(s, p)$ from all candidates. For each passage $p$ , we create a set of candidate pairs $(s_i, p)$ , where $i = 1, \dots, n$ , and the retrieval candidate set is built by combining such pairs over all passages. This problem is more challenging than retrieving only the evidence passage, due to the larger number of candidates to be retrieved. The data statistics of the downstream ReQA benchmark are shown in Table 2. Note that, as in Ahmad et al. (2019), the ReQA benchmark is not entirely open-domain QA retrieval, since the candidates $(s, p)$ only cover the training set of the QA dataset rather than all Wikipedia articles. For the open-domain retrieval experiment, see Section 4.4.
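
The construction above can be sketched as follows, with `split_sentences` standing in for whatever sentence splitter is used:

```python
def build_reqa(entries, split_sentences):
    """Turn (question, answer_span, passage) entries into training tuples
    (q, s_i, p), where s_i is the sentence containing the answer, and
    collect the global candidate set of (sentence, passage) pairs."""
    tuples, candidates = [], []
    for q, a, p in entries:
        sentences = split_sentences(p)
        for s in sentences:
            if (s, p) not in candidates:
                candidates.append((s, p))
        s_i = next(s for s in sentences if a in s)
        tuples.append((q, s_i, p))
    return tuples, candidates
```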
+
+Evaluation For each dataset, we consider different train/test splits of the data (1%/99%, 5%/95%, and 80%/20%) in the fine-tuning stage, and 10% of the training set is held out as the validation set for hyper-parameter tuning. The split is created assuming a cold-start retrieval scenario where the queries in the test (query, document) pairs are not seen in training.
+
+| ReQA Dataset | #query | #candidate | #tuples | avg. #query tokens | avg. #doc tokens |
+| --- | --- | --- | --- | --- | --- |
+| SQuAD | 97,888 | 101,951 | 99,024 | 11.55 | 291.35 |
+| Natural Questions | 74,097 | 239,008 | 74,097 | 9.29 | 352.67 |
+
+Table 2: Data statistics of the ReQA benchmark. #candidate represents all (sentence, passage) pairs.
+
+| train/test ratio | Encoder | Pre-training task | R@1 | R@5 | R@10 | R@50 | R@100 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 1%/99% | BM-25 | No Pretraining | 41.86 | 58.00 | 63.64 | 74.15 | 77.91 |
+| | BoW-MLP | No Pretraining | 0.14 | 0.35 | 0.49 | 1.13 | 1.72 |
+| | BoW-MLP | ICT+BFS+WLP | 22.55 | 41.03 | 49.93 | 69.70 | 77.01 |
+| | Transformer | No Pretraining | 0.02 | 0.06 | 0.08 | 0.31 | 0.54 |
+| | Transformer | MLM | 0.18 | 0.51 | 0.82 | 2.46 | 3.93 |
+| | Transformer | ICT+BFS+WLP | 37.43 | 61.48 | 70.18 | 85.37 | 89.85 |
+| 5%/95% | BM-25 | No Pretraining | 41.87 | 57.98 | 63.63 | 74.17 | 77.91 |
+| | BoW-MLP | No Pretraining | 1.13 | 2.68 | 3.62 | 7.16 | 9.55 |
+| | BoW-MLP | ICT+BFS+WLP | 26.23 | 46.49 | 55.68 | 75.28 | 81.89 |
+| | Transformer | No Pretraining | 0.17 | 0.36 | 0.54 | 1.43 | 2.17 |
+| | Transformer | MLM | 1.19 | 3.59 | 5.40 | 12.52 | 17.41 |
+| | Transformer | ICT+BFS+WLP | 45.90 | 70.89 | 78.47 | 90.49 | 93.64 |
+| 80%/20% | BM-25 | No Pretraining | 41.77 | 57.95 | 63.55 | 73.94 | 77.49 |
+| | BoW-MLP | No Pretraining | 19.65 | 36.31 | 44.19 | 62.40 | 69.19 |
+| | BoW-MLP | ICT+BFS+WLP | 32.24 | 55.26 | 65.49 | 83.37 | 88.50 |
+| | Transformer | No Pretraining | 12.32 | 26.88 | 34.46 | 53.74 | 61.53 |
+| | Transformer | MLM | 27.34 | 49.59 | 58.17 | 74.89 | 80.33 |
+| | Transformer | ICT+BFS+WLP | 58.35 | 82.76 | 88.44 | 95.87 | 97.49 |
+
+Table 3: Recall@k on SQuAD. Numbers are in percentage (%).
+
+For the evaluation metric, we focus on recall@k $^3$ because the goal of the retrieval phase is to capture the positives in the top-k results. By measuring recall at different k, the retrieval performance can be understood independently of the scoring model used downstream. In fact, in the extreme cases where the scoring model is either oracle or random, the final precision metric is proportional to recall@k.
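
Concretely, recall@k over a set of queries can be computed as:

```python
def recall_at_k(rankings, golds, k):
    """recall@k: the fraction of queries whose gold (sentence, passage)
    pair appears among the top-k retrieved candidates.

    rankings[i] is the ranked candidate list for query i; golds[i] is
    that query's gold candidate."""
    hits = sum(gold in ranked[:k] for ranked, gold in zip(rankings, golds))
    return hits / len(golds)
```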
+
+# 4.2 MAIN RESULTS
+
+Table 3 and Table 4 compare the proposed combination of pre-training tasks, ICT+BFS+WLP, to various baselines on SQuAD and Natural Questions, respectively. In both benchmarks, ICT+BFS+WLP notably outperforms all other methods. This suggests that one should use a two-tower Transformer model with properly designed pre-training tasks in the retrieval stage to replace the widely used BM-25 algorithm. We present some of the detailed findings below.
+
+The BM-25 baseline In retrieval, BM-25 is a simple but tough-to-beat unsupervised baseline that uses token matching with TF-IDF weights as the scoring function. BM-25 performs especially well on the SQuAD benchmark, as the data collection process and human annotations of this dataset are biased towards question-answer pairs with overlapping tokens (Rajpurkar et al., 2016; Kwiatkowski et al., 2019). For instance, in the limited fine-tuning data scenario (e.g., $1\%$ and $5\%$ ), BM-25 outperforms the two-tower Transformer models with no pre-training (No Pretraining) or with less effective pre-training tasks (MLM). This result verifies that BM-25 is a robust retrieval model, which explains its wide use in recent works (Chen et al., 2017; Yang et al., 2017; Lee et al., 2019) $^4$ .
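
For reference, the Okapi BM25 scoring function (Robertson et al., 2009) can be sketched as below, with the free parameters $k_1$ and $b$ at common default values (the exact variant and parameters used by the baseline are not specified here):

```python
import math
from collections import Counter

def bm25_score(query, doc, corpus, k1=1.2, b=0.75):
    """Okapi BM25 score of a tokenized document for a tokenized query.
    `corpus` is the list of all tokenized documents, used for the IDF
    statistics and the average document length."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc)
    score = 0.0
    for term in query:
        df = sum(term in d for d in corpus)          # document frequency
        idf = math.log(1.0 + (N - df + 0.5) / (df + 0.5))
        num = tf[term] * (k1 + 1)
        den = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
        score += idf * num / den
    return score
```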
+
+Encoder architecture We justify the use of Transformers as encoders by comparing them with a shallow bag-of-words MLP model (BoW-MLP). Specifically, BoW-MLP looks up uni-grams from the embedding table $^5$ , aggregates the embeddings with average pooling, and passes them through a shallow two-layer MLP network with tanh activation to generate the final 512-dimensional query/document embeddings. For a fair comparison, the BoW-MLP encoder has a model size comparable to the Transformer encoder (i.e., 128M vs. 110M parameters, slightly favoring the BoW-MLP encoder).
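
A minimal NumPy sketch of the BoW-MLP encoder follows (toy sizes; whether tanh is also applied after the output layer is not specified, so only the hidden layer uses it here):

```python
import numpy as np

def bow_mlp_encode(token_ids, emb_table, W1, b1, W2, b2):
    """BoW-MLP: look up uni-gram embeddings, average-pool them, then
    apply a two-layer MLP with tanh to get the final embedding."""
    pooled = emb_table[token_ids].mean(axis=0)
    hidden = np.tanh(pooled @ W1 + b1)
    return hidden @ W2 + b2

rng = np.random.default_rng(0)
vocab, d_emb, d_hid, d_out = 1000, 256, 512, 512   # toy sizes
emb_table = rng.normal(0, 0.1, (vocab, d_emb))
W1, b1 = rng.normal(0, 0.05, (d_emb, d_hid)), np.zeros(d_hid)
W2, b2 = rng.normal(0, 0.05, (d_hid, d_out)), np.zeros(d_out)

vec = bow_mlp_encode([3, 17, 42], emb_table, W1, b1, W2, b2)
```

Note that, unlike the Transformer towers, average pooling makes this encoder invariant to token order.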
+
+With a properly designed pre-training task (e.g., ICT+BFS+WLP), the Transformer encoder considerably outperforms its shallow counterpart (BoW-MLP), suggesting that the former benefits more from the unsupervised pre-training tasks. On the other hand, without any pre-training, the Transformer encoder performs worse than the BoW-MLP encoder, possibly because the former over-fits the limited amount of labeled fine-tuning data.
+
+Pre-training tasks When pre-training the two-tower Transformer model, we compare the pre-training tasks to two baselines: No Pretraining and MLM. No Pretraining refers to randomly initializing the model, and MLM is the token-level masked-LM task introduced in Section 3.
+
+On both datasets, the token-level pre-training task MLM only marginally improves over the no-pretraining baseline (No Pretraining). In contrast, combining the paragraph-level pre-training tasks (ICT+BFS+WLP) provides a huge boost in performance. This verifies our assumption that the design of task-related pre-training tasks is crucial. The performance of individual pre-training tasks is presented in the next section.
+
+| train/test ratio | Encoder | Pre-training task | R@1 | R@5 | R@10 | R@50 | R@100 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 1%/99% | BM-25 | No Pretraining | 4.99 | 11.91 | 15.41 | 24.00 | 27.97 |
+| | BoW-MLP | No Pretraining | 0.28 | 0.80 | 1.08 | 2.02 | 2.66 |
+| | BoW-MLP | ICT+BFS+WLP | 9.22 | 24.98 | 33.36 | 53.67 | 61.30 |
+| | Transformer | No Pretraining | 0.07 | 0.19 | 0.28 | 0.56 | 0.85 |
+| | Transformer | MLM | 0.18 | 0.56 | 0.81 | 1.95 | 2.98 |
+| | Transformer | ICT+BFS+WLP | 17.31 | 43.62 | 55.00 | 76.59 | 82.84 |
+| 5%/95% | BM-25 | No Pretraining | 5.03 | 11.96 | 15.47 | 24.04 | 28.00 |
+| | BoW-MLP | No Pretraining | 1.36 | 3.77 | 4.98 | 8.56 | 10.77 |
+| | BoW-MLP | ICT+BFS+WLP | 11.40 | 30.64 | 40.63 | 62.95 | 70.85 |
+| | Transformer | No Pretraining | 0.37 | 1.07 | 1.40 | 2.73 | 3.82 |
+| | Transformer | MLM | 1.10 | 3.42 | 4.89 | 10.49 | 14.37 |
+| | Transformer | ICT+BFS+WLP | 21.46 | 51.03 | 62.99 | 83.04 | 88.05 |
+| 80%/20% | BM-25 | No Pretraining | 4.93 | 11.52 | 14.96 | 23.64 | 27.77 |
+| | BoW-MLP | No Pretraining | 9.78 | 26.76 | 34.16 | 50.34 | 56.44 |
+| | BoW-MLP | ICT+BFS+WLP | 13.58 | 37.78 | 50.40 | 76.11 | 82.98 |
+| | Transformer | No Pretraining | 7.49 | 20.11 | 25.40 | 38.26 | 43.75 |
+| | Transformer | MLM | 16.74 | 40.48 | 49.53 | 67.91 | 73.91 |
+| | Transformer | ICT+BFS+WLP | 30.27 | 63.97 | 75.85 | 91.84 | 94.60 |
+
+Table 4: Recall@k on Natural Questions. Numbers are in percentage (%).
+
+# 4.3 ABLATION STUDY
+
+We conduct a more thorough ablation study on Natural Questions involving (1) the number of layers in Transformer; (2) different pre-training tasks; and (3) dimension of the embedding space. The result is presented in Table 5.
+
+Index 1, 2, and 3 show the individual performance of the three pre-training tasks. All of these tasks are much more effective than MLM. Among them, ICT has the best performance, followed by BFS and then WLP. This suggests that the (query, document) pairs defined by local context within a passage are well suited to the ReQA task.
+
+| Index | #layer | Pre-training task | emb-dim | R@100 (1%) | R@100 (5%) | R@100 (10%) | R@100 (80%) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 | 4 | ICT | 128 | 77.13 | 82.03 | 84.22 | 91.88 |
+| 2 | 4 | BFS | 128 | 72.99 | 78.34 | 80.47 | 89.82 |
+| 3 | 4 | WLP | 128 | 56.94 | 68.08 | 72.51 | 86.15 |
+| 4 | 12 | No Pretraining | 128 | 0.72 | 3.88 | 6.94 | 38.94 |
+| 5 | 12 | MLM | 128 | 2.99 | 12.21 | 22.97 | 71.12 |
+| 6 | 12 | ICT | 128 | 79.80 | 85.97 | 88.13 | 93.91 |
+| 7 | 12 | ICT+BFS+WLP | 128 | 81.31 | 87.08 | 89.06 | 94.37 |
+| 8 | 12 | ICT+BFS+WLP | 256 | 81.48 | 87.74 | 89.54 | 94.73 |
+| 9 | 12 | ICT+BFS+WLP | 512 | 82.84 | 88.05 | 90.03 | 94.60 |
+
+Also note from Index 6 and 7 that ICT+BFS+WLP pre-training is better than ICT alone, with a $1.5\%$ absolute improvement in the low-data regime. This reflects that, when downstream training data is insufficient, more global pre-training tasks are beneficial, as they encode multi-hop reasoning priors such as relations between different passages within the same article (BFS), or even across different articles linked by the same entities (WLP).
+
+Finally, the advantage of increasing the number of layers is evident from comparing Index 1 and Index 6, while Index 7, 8, and 9 show the benefit of increasing the dimension of the embedding space.
+
+# 4.4 EVALUATION OF OPEN-DOMAIN RETRIEVAL
+
+We consider the open-domain retrieval setting by augmenting the candidate set of the ReQA benchmark with large-scale (sentence, evidence passage) pairs extracted from general Wikipedia articles. In particular, we preprocess and sub-sample the open-domain Wikipedia retrieval set of the DrQA paper (Chen et al., 2017) into one million (sentence, evidence passage) pairs, and add these 1M external candidate pairs to the existing retrieval candidate set of the ReQA benchmark.
+
+Table 5: Ablation study on Natural Questions based on Recall@100. Index 9 represents the proposed method in Table 4.
+
+| train/test ratio | Pre-training task | R@1 | R@5 | R@10 | R@50 | R@100 |
+| --- | --- | --- | --- | --- | --- | --- |
+| 1%/99% | BM-25 | 3.70 | 9.58 | 12.69 | 20.27 | 23.83 |
+| | ICT | 14.18 | 37.36 | 48.08 | 69.23 | 76.01 |
+| | ICT+BFS+WLP | 13.19 | 37.61 | 48.77 | 70.43 | 77.20 |
+| 5%/95% | BM-25 | 3.21 | 8.62 | 11.50 | 18.59 | 21.78 |
+| | ICT | 17.94 | 45.65 | 57.11 | 76.87 | 82.60 |
+| | ICT+BFS+WLP | 17.62 | 45.92 | 57.75 | 78.14 | 83.78 |
+| 80%/20% | BM-25 | 3.12 | 8.45 | 11.18 | 18.05 | 21.30 |
+| | ICT | 24.89 | 57.89 | 69.86 | 87.67 | 91.29 |
+| | ICT+BFS+WLP | 25.41 | 59.36 | 71.12 | 88.25 | 91.71 |
+
+Table 6: Open-domain retrieval results on the Natural Questions dataset, where the existing candidates are augmented with an additional 1M retrieval candidates (i.e., 1M $(s,p)$ candidate pairs) extracted from open-domain Wikipedia articles.
+
+The results of open-domain retrieval on Natural Questions are presented in Table 6. Firstly, we see that the two-tower Transformer models pre-trained with ICT+BFS+WLP and ICT substantially outperform the BM-25 baseline. Secondly, the ICT+BFS+WLP pre-training method consistently improves over the ICT pre-training method in most cases. Interestingly, the improvements are more noticeable at R@50 and R@100, possibly because the distant multi-hop pre-training supervision induces better retrieval quality in the lower part of the ranked list. Finally, we conclude that the evaluation results of the 1M open-domain retrieval are consistent with our previous empirical evaluation on the ReQA benchmark with smaller retrieval candidate sets (Section 4.2).
+
+# 5 CONCLUSION
+
+We conducted a comprehensive study on how various pre-training tasks help in large-scale retrieval problems such as evidence retrieval for question answering. We showed that the two-tower Transformer models with random initialization (No Pretraining) or the unsuitable token-level pre-training task (MLM) are no better than the robust IR baseline BM-25 in most cases. With properly designed paragraph-level pre-training tasks, including ICT, BFS and WLP, the two-tower Transformer models can considerably improve over the widely used BM-25 algorithm.
+
+For future work, we plan to study how the pre-training tasks apply to other types of encoder architectures, how to generate pre-training data from corpora other than Wikipedia, and how pre-training compares with different types of regularization.
+
+# REFERENCES
+
+Amin Ahmad, Noah Constant, Yinfei Yang, and Daniel Cer. ReQA: An evaluation for end-to-end answer retrieval models. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 137-146, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-5819. URL https://www.aclweb.org/anthology/D19-5819.
+Yoshua Bengio and Jean-Sébastien Senécal. Adaptive importance sampling to accelerate training of a neural probabilistic language model. IEEE Transactions on Neural Networks, 19(4):713-722, 2008.
+Guy Blanc and Steffen Rendle. Adaptive sampled softmax with kernel based sampling. In Proceedings of the 35th International Conference on Machine Learning (ICML), pp. 590-599, 2018.
+Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 632-642, 2015.
+Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. Universal sentence encoder. In ACL, 2018.
+Wei-Cheng Chang, Hsiang-Fu Yu, Kai Zhong, Yiming Yang, and Inderjit Dhillon. X-BERT: eXtreme multi-label text classification with BERT. arXiv preprint arXiv:1905.02331, 2019.
+Olivier Chapelle and Yi Chang. Yahoo! learning to rank challenge overview. In Proceedings of the learning to rank challenge, pp. 1-24, 2011.
+Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL) (Volume 1: Long Papers), pp. 1870-1879, 2017.
+Welin Chen, David Grangier, and Michael Auli. Strategies for training large vocabulary neural language models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pp. 1975-1985, 2016.
+Arpita Das, Harish Yenala, Manoj Chinnakotla, and Manish Shrivastava. Together we stand: Siamese networks for similar question retrieval. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL) (Volume 1: Long Papers), pp. 378-387, 2016.
+Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. Multi-step retriever-reader interaction for scalable open-domain question answering. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2019.
+
+Edouard Grave, Armand Joulin, Moustapha Cisse, Hervé Jégou, et al. Efficient softmax approximation for gpus. In Proceedings of the 34th International Conference on Machine Learning (ICML), pp. 1302-1310. JMLR.org, 2017.
+Ruiqi Guo, Sanjiv Kumar, Krzysztof Choromanski, and David Simcha. Quantization based fast inner product search. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 482-490, 2016.
+Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, László Lukács, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, and Ray Kurzweil. Efficient natural language response suggestion for smart reply. arXiv preprint arXiv:1705.00652, 2017.
+Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
+Minghao Hu, Yuxing Peng, Zhen Huang, and Dongsheng Li. Retrieve, read, rerank: Towards end-to-end multi-document reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), 2019.
+Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring. arXiv preprint arXiv:1905.01969, 2019.
+Himanshu Jain, Venkatesh Balasubramanian, Bhanu Chanduri, and Manik Varma. Slice: Scalable linear extreme classifiers trained on 100 million labels for related searches. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp. 528-536. ACM, 2019.
+Yacine Jernite, Samuel R Bowman, and David Sontag. Discourse-based objectives for fast unsupervised sentence representation learning. arXiv preprint arXiv:1705.00557, 2017.
+Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In Advances in Neural Information Processing Systems (NIPS), pp. 3294-3302, 2015.
+Walid Krichene, Nicolas Mayoraz, Steffen Rendle, Li Zhang, Xinyang Yi, Lichan Hong, Ed Chi, and John Anderson. Efficient training on very large corpora via gramian estimation. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.
+Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics (TACL), 7:453-466, 2019.
+Quoc Le and Tomas Mikolov. Distributed representations of sentences and documents. In International conference on machine learning (ICML), pp. 1188-1196, 2014.
+Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), July 2019.
+Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
+Lajanugen Logeswaran and Honglak Lee. An efficient framework for learning sentence representations. In Proceedings of the International Conference on Learning Representations (ICLR), 2018.
+Pierre-Emmanuel Mazare, Samuel Humeau, Martin Raison, and Antoine Bordes. Training millions of personalized dialogue agents. In EMNLP, 2018.
+Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. MS MARCO: A human-generated machine reading comprehension dataset. 2016.
+
+Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 2383-2392, 2016.
+Sashank J Reddi, Satyen Kale, Felix Yu, Dan Holtmann-Rice, Jiecao Chen, and Sanjiv Kumar. Stochastic negative mining for learning with large output spaces. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), 2019.
+Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019.
+Stephen Robertson, Hugo Zaragoza, et al. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333-389, 2009.
+Anshumali Shrivastava and Ping Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (mips). In Advances in Neural Information Processing Systems (NIPS), pp. 2321-2329, 2014.
+Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. Distilling task-specific knowledge from BERT into simple neural networks. arXiv preprint arXiv:1903.12136, 2019.
+Eleni Triantafillou, Richard Zemel, and Raquel Urtasun. Few-shot learning through an information retrieval lens. In Advances in Neural Information Processing Systems, pp. 2255-2265, 2017.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.
+Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2018), Volume 1 (Long Papers), June 2018.
+Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
+Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Enabling the use of lucene for information retrieval research. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1253-1256. ACM, 2017.
+Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. End-to-end open-domain question answering with BERTserini. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019): Demonstrations, 2019a.
+Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. XLNet: Generalized autoregressive pretraining for language understanding. In NIPS, 2019b.
+Hsiang-Fu Yu, Mikhail Bilenko, and Chih-Jen Lin. Selection of negative samples for one-class matrix factorization. In Proceedings of the 2017 SIAM International Conference on Data Mining, pp. 363-371. SIAM, 2017.
\ No newline at end of file
diff --git a/pretrainingtasksforembeddingbasedlargescaleretrieval/images.zip b/pretrainingtasksforembeddingbasedlargescaleretrieval/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6d7d9e083c73193887425884814dd800c55f7409
--- /dev/null
+++ b/pretrainingtasksforembeddingbasedlargescaleretrieval/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b795524027d13575f94e9e4d5971d386a813c578c7bcfba2bd2ffce8ab33bf95
+size 642007
diff --git a/pretrainingtasksforembeddingbasedlargescaleretrieval/layout.json b/pretrainingtasksforembeddingbasedlargescaleretrieval/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..1cadfc8a4592acae46904d81c8ef9ea9cefecd23
--- /dev/null
+++ b/pretrainingtasksforembeddingbasedlargescaleretrieval/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b6eb9be35b0283b5e0137cce02b388bde7e96fb48ed5fefba084ef85287f9ae7
+size 335095
diff --git a/principledweightinitializationforhypernetworks/7c1f6962-43b7-4711-9872-4e5c9a432dcc_content_list.json b/principledweightinitializationforhypernetworks/7c1f6962-43b7-4711-9872-4e5c9a432dcc_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e0760942187720ba0a13c86637187066ff8fe18e
--- /dev/null
+++ b/principledweightinitializationforhypernetworks/7c1f6962-43b7-4711-9872-4e5c9a432dcc_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:91b1235a335bccc2badc617888ee0d77ce42ab65ad900599bc2285285316b5d1
+size 113406
diff --git a/principledweightinitializationforhypernetworks/7c1f6962-43b7-4711-9872-4e5c9a432dcc_model.json b/principledweightinitializationforhypernetworks/7c1f6962-43b7-4711-9872-4e5c9a432dcc_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..ad53d1021a269619cf8f7ed666b5411c58a34010
--- /dev/null
+++ b/principledweightinitializationforhypernetworks/7c1f6962-43b7-4711-9872-4e5c9a432dcc_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e56e8388a954d243feb051554141c62f43054baf59a9c5d8e89ea96a85cbc1c4
+size 128715
diff --git a/principledweightinitializationforhypernetworks/7c1f6962-43b7-4711-9872-4e5c9a432dcc_origin.pdf b/principledweightinitializationforhypernetworks/7c1f6962-43b7-4711-9872-4e5c9a432dcc_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..78a509df3352fcd14094b04cc61b8986df78816e
--- /dev/null
+++ b/principledweightinitializationforhypernetworks/7c1f6962-43b7-4711-9872-4e5c9a432dcc_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8cc3f521b10cfc86993b29df7acf338a3c49dc8fe6282585b7452583f778e35f
+size 2072915
diff --git a/principledweightinitializationforhypernetworks/full.md b/principledweightinitializationforhypernetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..10cbdcbf20d67ae3f61eb0ef2fbfd916b527c1ea
--- /dev/null
+++ b/principledweightinitializationforhypernetworks/full.md
@@ -0,0 +1,514 @@
+# PRINCIPLED WEIGHT INITIALIZATION FOR HYPERNETWORKS
+
+Oscar Chang, Lampros Flokas, Hod Lipson
+
+Columbia University
+
+New York, NY 10027
+
+{oscar.chang,lf2540,hod.lipson}@columbia.edu
+
+# ABSTRACT
+
+Hypernetworks are meta neural networks that generate weights for a main neural network in an end-to-end differentiable manner. Despite extensive applications ranging from multi-task learning to Bayesian deep learning, the problem of optimizing hypernetworks has not been studied to date. We observe that classical weight initialization methods like Glorot & Bengio (2010) and He et al. (2015), when applied directly on a hypernet, fail to produce weights for the mainnet in the correct scale. We develop principled techniques for weight initialization in hypernets, and show that they lead to more stable mainnet weights, lower training loss, and faster convergence.
+
+# 1 INTRODUCTION
+
+Meta-learning describes a broad family of techniques in machine learning that deals with the problem of learning to learn. An emerging branch of meta-learning involves the use of hypernetworks, which are meta neural networks that generate the weights of a main neural network to solve a given task in an end-to-end differentiable manner. Hypernetworks were originally introduced by Ha et al. (2016) as a way to induce weight-sharing and achieve model compression by training the same meta network to learn the weights belonging to different layers in the main network. Since then, hypernetworks have found numerous applications including but not limited to: weight pruning (Liu et al., 2019), neural architecture search (Brock et al., 2017; Zhang et al., 2018), Bayesian neural networks (Krueger et al., 2017; Ukai et al., 2018; Pawlowski et al., 2017; Henning et al., 2018; Deutsch et al., 2019), multi-task learning (Pan et al., 2018; Shen et al., 2017; Klocek et al., 2019; Serrà et al., 2019; Meyerson & Miikkulainen, 2019), continual learning (von Oswald et al., 2019), generative models (Suarez, 2017; Ratzlaff & Fuxin, 2019), ensemble learning (Kristiadi & Fischer, 2019), hyperparameter optimization (Lorraine & Duvenaud, 2018), and adversarial defense (Sun et al., 2017).
+
+Despite the intensified study of applications of hypernetworks, the problem of optimizing them to this day remains significantly understudied. In fact, even the problem of initializing hypernetworks has not been studied. Given the lack of principled approaches, prior work in the area is mostly limited to ad-hoc approaches based on trial and error (c.f. Section 3). For example, it is common to initialize the weights of a hypernetwork by sampling a "small" random number. Nonetheless, these ad-hoc methods do lead to successful hypernetwork training primarily due to the use of the Adam optimizer (Kingma & Ba, 2014), which has the desirable property of being invariant to the scale of the gradients. However, even Adam will not work if the loss diverges (i.e. overflow) at initialization, which will happen in sufficiently big models. The normalization of badly scaled gradients also results in noisy training dynamics where the loss function suffers from bigger fluctuations during training compared to vanilla stochastic gradient descent (SGD). Wilson et al. (2017); Reddi et al. (2018) showed that while adaptive optimizers like Adam may exhibit lower training error, they fail to generalize as well to the test set as non-adaptive gradient methods. Moreover, Adam incurs a computational overhead and requires 3X the amount of memory for the gradients compared to vanilla SGD.
+
+Small random number sampling is reminiscent of early neural network research (Rumelhart et al., 1986) before the advent of classical weight initialization methods like Xavier init (Glorot & Bengio, 2010) and Kaiming init (He et al., 2015). Since then, a big lesson learned by the neural network optimization community is that architecture-specific initialization schemes are important to the robust training of deep networks, as shown recently in the case of residual networks (Zhang et al., 2019). In fact, weight initialization for hypernetworks was recognized as an outstanding open problem by prior work (Deutsch et al., 2019) that had questioned the suitability of classical initialization methods for hypernetworks.
+
+Our results We show that when classical methods are used to initialize the weights of hypernetworks, they fail to produce mainnet weights in the correct scale, leading to exploding activations and losses. This is because classical network weights transform one layer's activations into another, while hypernet weights have the added function of transforming the hypernet's activations into the mainnet's weights. Our solution is to develop principled techniques for weight initialization in hypernetworks based on variance analysis. The hypernet case poses unique challenges. For example, in contrast to variance analysis for classical networks, the case for hypernetworks can be asymmetrical between the forward and backward pass. The asymmetry arises when the gradient flow from the mainnet into the hypernet is affected by the biases, whereas in general, this does not occur for gradient flow in the mainnet. This underscores again why architecture-specific initialization schemes are essential. We show both theoretically and experimentally that our methods produce hypernet weights in the correct scale. Proper initialization mitigates exploding activations and gradients and removes the need to depend on Adam. Our experiments reveal that it leads to more stable mainnet weights, lower training loss, and faster convergence.
+
+Section 2 briefly covers the relevant technical preliminaries, and Section 3 reviews problems with the ad-hoc methods currently deployed by hypernetwork practitioners. We derive novel weight initialization formulae for hypernetworks in Section 4, empirically evaluate our proposed methods in Section 5, and finally conclude in Section 6.
+
+# 2 PRELIMINARIES
+
+Definition. A hypernetwork is a meta neural network $H$ with its own parameters $\phi$ that generates the weights of a main network $\theta$ from some embedding $e$ in a differentiable manner: $\theta = H_{\phi}(e)$ . Unlike a classical network, in a hypernetwork, the weights of the main network are not model parameters. Thus the gradients $\Delta \theta$ have to be further backpropagated to the weights of the hypernetwork $\Delta \phi$ , which is then trained via gradient descent $\phi_{t+1} = \phi_t - \lambda \Delta \phi_t$ .
+
+This fundamental difference suggests that conventional knowledge about neural networks may not apply directly to hypernetworks and novel ways of thinking about weight initialization, optimization dynamics and architecture design for hypernetworks are sorely needed.
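To make the definition concrete, here is a minimal sketch of a one-layer linear hypernetwork (all sizes and variable names are ours, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, d_embed = 64, 32, 8

# Hypernet parameters phi: here a single linear map from the embedding e
# to the flattened mainnet weight matrix, i.e. theta = H_phi(e).
H = rng.normal(0.0, 0.05, size=(d_out * d_in, d_embed))
e = rng.normal(0.0, 1.0, size=d_embed)

# The generated mainnet weights are a function of phi, not parameters:
# gradients w.r.t. W must be backpropagated further into H.
W = (H @ e).reshape(d_out, d_in)

x = rng.normal(0.0, 1.0, size=d_in)
y = W @ x  # mainnet forward pass with the generated weights
```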
+
+# 2.1 RICCI CALCULUS
+
+We propose the use of Ricci calculus, as opposed to the more commonly used matrix calculus, as a suitable mathematical language for thinking about hypernetworks. Ricci calculus is useful because it allows us to reason about the derivatives of higher-order tensors with notational ease. For readers not familiar with the index-based notation of Ricci calculus, please refer to Laue et al. (2018) for a good introduction to the topic written from a machine learning perspective.
+
+For a general nth-order tensor $T^{i_1,\dots ,i_k,\dots ,i_n}$ , we use $\mathrm{d}_{i_k}$ to refer to the dimension of the index set that $i_{k}$ is drawn from. We include explicit summations where the relevant expressions might be ambiguous, and use Einstein summation convention otherwise. We use square brackets to denote different layers for added clarity, so for example $W[t]$ denotes the $t$ -th weight layer.
+
+# 2.2 XAVIER INITIALIZATION
+
+Glorot & Bengio (2010) derived weight initialization formulae for a feedforward neural network by conducting a variance analysis over activations and gradients. For a linear layer $y^{i} = W_{j}^{i}x^{j} + b^{i}$ , suppose we make the following Xavier Assumptions at initialization: (1) The $W_{j}^{i}$ , $x^{j}$ , and $b^{i}$ are all independent of each other. (2) $\forall i, j : \mathbb{E}[W_{j}^{i}] = 0$ . (3) $\forall j : \mathbb{E}[x^{j}] = 0$ . (4) $\forall i : b^{i} = 0$ .
+
+Then, $\mathbb{E}[y^i] = 0$ and $\operatorname{Var}(y^i) = \mathrm{d}_j\operatorname{Var}(W_j^i)\operatorname{Var}(x^j)$ . To keep the variance of the output and input activations the same, i.e. $\operatorname{Var}(y^i) = \operatorname{Var}(x^j)$ , we have to sample $W_j^i$ from a distribution whose variance is equal to the reciprocal of the fan-in: $\operatorname{Var}(W_j^i) = \frac{1}{\mathrm{d}_j}$ .
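This derivation can be checked numerically (a sketch with arbitrary layer sizes of our choosing, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out, n = 512, 256, 10000

# Fan-in init: Var(W) = 1/fan_in; inputs are zero-mean with unit variance.
W = rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(fan_out, fan_in))
x = rng.normal(0.0, 1.0, size=(fan_in, n))
y = W @ x

# Var(y) = fan_in * Var(W) * Var(x) = 1, matching Var(x).
print(y.var())  # close to 1.0
```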
+
+If analogous assumptions hold for the backward pass, then to keep the variance of the output and input gradients the same, we have to sample $W_{j}^{i}$ from a distribution whose variance is equal to the reciprocal of the fan-out: $\mathrm{Var}(W_{j}^{i}) = \frac{1}{\mathrm{d}_{i}}$ .
+
+Thus, the forward pass and backward pass result in symmetrical formulae. Glorot & Bengio (2010) proposed an initialization based on their harmonic mean: $\mathrm{Var}(W_j^i) = \frac{2}{\mathrm{d}_j + \mathrm{d}_i}$ .
+
+In general, a feedforward network is non-linear, so these assumptions are strictly invalid. But odd activation functions with a unit derivative at 0 result in a roughly linear regime at initialization.
+
+# 2.3 KAIMING INITIALIZATION
+
+He et al. (2015) extended Glorot & Bengio (2010)'s analysis by looking at the case of ReLU activation functions, i.e. $y^{i} = W_{j}^{i}\mathrm{ReLU}(x^{j}) + b^{i}$ . We can write $z^{j} = \mathrm{ReLU}(x^{j})$ to get
+
+$$
+\operatorname{Var}(y^{i}) = \sum_{j} \mathbb{E}[(z^{j})^{2}] \operatorname{Var}(W_{j}^{i}) = \sum_{j} \frac{1}{2} \mathbb{E}[(x^{j})^{2}] \operatorname{Var}(W_{j}^{i}) = \frac{1}{2} \mathrm{d}_{j} \operatorname{Var}(W_{j}^{i}) \operatorname{Var}(x^{j}).
+$$
+
+This results in an extra factor of 2 in the variance formula. The $W_{j}^{i}$ have to be initialized symmetrically around 0 to enforce Xavier Assumption 3 as the activations and gradients propagate through the layers. He et al. (2015) argued that either the forward or the backward version of the formula can be adopted, since the activations or gradients will only be scaled by a depth-independent factor. For convolutional layers, we have to further divide the variance by the size of the receptive field.
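The factor of 2 can likewise be verified numerically (again a sketch with arbitrary sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out, n = 512, 256, 10000

x = rng.normal(0.0, 1.0, size=(fan_in, n))
z = np.maximum(x, 0.0)  # ReLU: E[z^2] = Var(x)/2 for symmetric zero-mean x

# Kaiming fan-in init: Var(W) = 2/fan_in compensates for the halving.
W = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))
y = W @ z

print(y.var())  # close to 1.0
```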
+
+'Xavier init' and 'Kaiming init' are terms that are sometimes used interchangeably. Where there might be confusion, we will refer to the forward version as fan-in init, the backward version as fan-out init, and the harmonic mean version as harmonic init.
+
+# 3 REVIEW OF CURRENT METHODS
+
+In the seminal Ha et al. (2016) paper, the authors identified two distinct classes of hypernetworks: dynamic (for recurrent networks) and static (for convolutional networks). They proposed Orthogonal init (Saxe et al., 2013) for the dynamic class, but omitted discussion of initialization for the static class. The static class has since proven to be the dominant variant, covering all kinds of non-recurrent networks (not just convolutional), and thus will be the central object of our investigation.
+
+Through an extensive literature and code review, we found that hypernet practitioners mostly depend on the Adam optimizer, which is invariant to and normalizes the scale of gradients, for training and resort to one of four weight initialization methods:
+
+M1 Xavier or Kaiming init (as found in Pawlowski et al. (2017); Balazevic et al. (2018); Serrà et al. (2019); von Oswald et al. (2019)).
+M2 Small random values (as found in Krueger et al. (2017); Lorraine & Duvenaud (2018)).
+M3 Kaiming init, but with the output layer scaled by $\frac{1}{10}$ (as found in Ukai et al. (2018)).
+M4 Kaiming init, but with the hypernet embedding set to be a suitably scaled constant (as found in Meyerson & Miikkulainen (2019)).
+
+M1 uses classical neural network initialization methods to initialize hypernetworks. This fails to produce weights for the main network in the correct scale. Consider the following illustrative example of a one-layer linear hypernet generating a linear mainnet with $T + 1$ layers, given embeddings sampled from a standard normal distribution and weights sampled entry-wise from a zero-mean distribution. We leave the biases out for now, and assume the input data $x[1]$ is standardized.
+
+$$
+\begin{array}{l}
+x[t+1]^{i_{t+1}} = W[t]_{i_t}^{i_{t+1}} x[t]^{i_t}, \qquad W[t]_{i_t}^{i_{t+1}} = H[t]_{i_t k_t}^{i_{t+1}} e[t]^{k_t}, \qquad 1 \leq t \leq T. \\
+\operatorname{Var}\left(x[T+1]^{i_{T+1}}\right) = \operatorname{Var}\left(x[1]^{i_1}\right) \prod_{t=1}^{T} \mathrm{d}_{i_t} \operatorname{Var}\left(W[t]_{i_t}^{i_{t+1}}\right) = \operatorname{Var}\left(x[1]^{i_1}\right) \prod_{t=1}^{T} \mathrm{d}_{i_t} \mathrm{d}_{k_t} \operatorname{Var}\left(H[t]_{i_t k_t}^{i_{t+1}}\right). \tag{1}
+\end{array}
+$$
+
+In this case, if the variance of the weights in the hypernet $\mathrm{Var}(H[t]_{i_t k_t}^{i_{t+1}})$ is equal to the reciprocal of the fan-in $\mathrm{d}_{k_t}$, then the variance of the activations $\mathrm{Var}(x[T + 1]^{i_{T+1}}) = \prod_{t=1}^T \mathrm{d}_{i_t}$ explodes. If it is equal to the reciprocal of the fan-out $\mathrm{d}_{i_t} \mathrm{d}_{i_{t+1}}$, then the activation variance $\mathrm{Var}(x[T + 1]^{i_{T+1}}) = \prod_{t=1}^T \frac{\mathrm{d}_{k_t}}{\mathrm{d}_{i_{t+1}}}$ is likely to vanish, since the size of the embedding vector is typically small relative to the width of the mainnet weight layer being generated.
+
+Where the fan-in is of a different scale than the fan-out, the harmonic mean has a scale close to that of the smaller number. Therefore, the fan-in, fan-out, and harmonic variants of Xavier and Kaiming init will all result in activations and gradients that scale exponentially with the depth of the mainnet.
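The exponential blow-up under M1 is easy to reproduce in simulation (a sketch; the widths, embedding size, and depth below are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_embed, T = 256, 64, 6  # mainnet width, embedding size, mainnet depth

x = rng.normal(0.0, 1.0, size=(d, 1000))
for t in range(T):
    # M1: classical fan-in init on the hypernet output layer,
    # Var(H) = 1/d_embed -- ignoring that H emits *weights*, not activations.
    H = rng.normal(0.0, np.sqrt(1.0 / d_embed), size=(d * d, d_embed))
    e = rng.normal(0.0, 1.0, size=d_embed)
    W = (H @ e).reshape(d, d)
    x = W @ x

print(x.var())  # grows roughly like d**T: exploding activations
```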
+
+M2 and M3 introduce additional hyperparameters into the model, and the ad-hoc manner in which they work is reminiscent of pre-deep-learning neural network research, before the introduction of classical initialization methods like Xavier and Kaiming init. This trial-and-error approach is not only inelegant and compute-hungry, but will likely fail for deeper and more complex hypernetworks.
+
+M4 proposes to set the embeddings $e[t]^{k_t}$ to a suitable constant ($\mathrm{d}_{i_t}^{-1/2}$ in this case), such that both $W[t]_{i_t}^{i_{t+1}}$ and $H[t]_{i_t k_t}^{i_{t+1}}$ appear to be initialized with the same variance as Kaiming init. This ensures that the variance of the activations in the mainnet is preserved through the layers, but the restriction it places on the embeddings might not be desirable in many applications.
+
+Luckily, the fix is simple: set $\mathrm{Var}(H[t]_{i_t k_t}^{i_{t+1}}) = \frac{1}{\mathrm{d}_{i_t} \mathrm{d}_{k_t}}$. This results in the variance of the generated weights in the mainnet $\mathrm{Var}(W[t]_{i_t}^{i_{t+1}}) = \frac{1}{\mathrm{d}_{i_t}}$ resembling conventional neural networks initialized with fan-in init. This suggests a general hypernet weight initialization strategy: initialize the weights of the hypernet such that the generated mainnet weights approximate classical neural network initialization. We elaborate on and generalize this intuition in Section 4.
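Rerunning the simulation from Section 3 with this scaling confirms the fix (same hypothetical sizes as before; a sketch, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_embed, T = 256, 64, 6  # mainnet width, embedding size, mainnet depth

x = rng.normal(0.0, 1.0, size=(d, 1000))
for t in range(T):
    # Var(H) = 1/(d * d_embed), so the generated W has Var(W) = 1/d,
    # i.e. the mainnet looks as if it had been given fan-in init.
    H = rng.normal(0.0, np.sqrt(1.0 / (d * d_embed)), size=(d * d, d_embed))
    e = rng.normal(0.0, 1.0, size=d_embed)
    W = (H @ e).reshape(d, d)
    x = W @ x

print(x.var())  # stays on the order of 1 through all T layers
```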
+
+# 4 HYPERFAN INITIALIZATION
+
+Most hypernetwork architectures use a linear output layer so that gradients can pass from the mainnet into the hypernet directly without any non-linearities. We make use of this fact in developing methods called hyperfan-in init and hyperfan-out init for hypernetwork weight initialization based on the principle of variance analysis.
+
+# 4.1 HYPERFAN-IN
+
+Proposition. Suppose a hypernetwork comprises a linear output layer. Then, the variance of the input and output activations of a linear layer in the mainnet $y^{i} = W_{j}^{i}x^{j} + b^{i}$ can be matched using fan-in init in the hypernetwork with appropriately scaled output layers.
+
+Case 1. The hypernet generates the weights but not the biases of the mainnet. The bias in the mainnet is initialized to zero. We can write the weight generation in the form $W_{j}^{i} = H_{jk}^{i}h(e)^{k} + \beta_{j}^{i}$ where $h$ computes all but the last layer of the hypernet and $(H,\beta)$ form the output layer. We make the following Hyperfan Assumptions at initialization: (1) Xavier assumptions hold for all the layers in the hypernet. (2) The $H_{jk}^{i}, h(e)^{k}, \beta_{j}^{i}, x^{j}$ , and $b^{i}$ are all independent of each other. (3) $\forall i,j,k:\mathbb{E}[H_{jk}^{i}] = 0$ . (4) $\mathbb{E}[x^j] = 0$ . (5) $\forall i:b^{i} = 0$ .
+
+Use fan-in init to initialize the weights for $h$. Then, $\mathrm{Var}(h(e)^k) = \mathrm{Var}(e^l)$. If we initialize $H$ with the formula $\mathrm{Var}(H_{jk}^i) = \frac{1}{\mathrm{d}_j\mathrm{d}_k\mathrm{Var}(e^l)}$ and $\beta$ with zeros, we arrive at $\mathrm{Var}(W_j^i) = \frac{1}{\mathrm{d}_j}$, which is the formula for fan-in init in the mainnet. The Hyperfan Assumptions imply that the Xavier Assumptions hold in the mainnet, thus preserving the variance of the input and output activations.
+
+$$
+\begin{array}{l}
+\operatorname{Var}\left(y^{i}\right) = \sum_{j} \operatorname{Var}\left(W_{j}^{i}\right) \operatorname{Var}\left(x^{j}\right) = \sum_{j} \sum_{k} \operatorname{Var}\left(H_{jk}^{i}\right) \operatorname{Var}\left(h(e)^{k}\right) \operatorname{Var}\left(x^{j}\right) \\
+= \sum_{j} \sum_{k} \frac{1}{\mathrm{d}_{j} \mathrm{d}_{k} \operatorname{Var}\left(e^{l}\right)} \operatorname{Var}\left(e^{l}\right) \operatorname{Var}\left(x^{j}\right) = \operatorname{Var}\left(x^{j}\right). \tag{2}
+\end{array}
+$$
+
+Case 2. The hypernet generates both the weights and biases of the mainnet. We can write the weight and bias generation in the form $W_{j}^{i} = H_{jk}^{i}h(e[1])^{k} + \beta_{j}^{i}$ and $b^{i} = G_{l}^{i}g(e[2])^{l} + \gamma^{i}$ respectively, where $h$ and $g$ compute all but the last layer of the hypernet, and $(H,\beta)$ and $(G,\gamma)$ form the output layers. We modify Hyperfan Assumption 2 so it includes $G_{l}^{i}$ , $g(e[2])^{l}$ , and $\gamma^{i}$ , and further assume $\mathrm{Var}(x^j) = 1$ , which holds at initialization with the common practice of data standardization.
+
+Use fan-in init to initialize the weights for $h$ and $g$ . Then, $\mathrm{Var}(h(e[1])^k) = \mathrm{Var}(e[1]^m)$ and $\mathrm{Var}(g(e[2])^l) = \mathrm{Var}(e[2]^n)$ . If we initialize $H$ with the formula $\mathrm{Var}(H_{jk}^i) = \frac{1}{2\mathrm{d}_j\mathrm{d}_k\mathrm{Var}(e[1]^m)}$ , $G$ with the formula $\mathrm{Var}(G_l^i) = \frac{1}{2\mathrm{d}_l\mathrm{Var}(e[2]^n)}$ , and $\beta, \gamma$ with zeros, then the input and output activations in the mainnet can be preserved.
+
+$$
+\begin{array}{l}
+\operatorname{Var}\left(y^{i}\right) = \sum_{j} \left[\operatorname{Var}\left(W_{j}^{i}\right) \operatorname{Var}\left(x^{j}\right)\right] + \operatorname{Var}\left(b^{i}\right) \\
+= \sum_{j} \left[\sum_{k} \operatorname{Var}\left(H_{jk}^{i}\right) \operatorname{Var}\left(h(e[1])^{k}\right) \operatorname{Var}\left(x^{j}\right)\right] + \sum_{l} \operatorname{Var}\left(G_{l}^{i}\right) \operatorname{Var}\left(g(e[2])^{l}\right) \tag{3} \\
+= \sum_{j} \left[\sum_{k} \frac{1}{2 \mathrm{d}_{j} \mathrm{d}_{k} \operatorname{Var}(e[1]^{m})} \operatorname{Var}(e[1]^{m}) \operatorname{Var}(x^{j})\right] + \sum_{l} \frac{1}{2 \mathrm{d}_{l} \operatorname{Var}(e[2]^{n})} \operatorname{Var}(e[2]^{n}) \\
+= \frac{1}{2} \operatorname{Var}\left(x^{j}\right) + \frac{1}{2} = \operatorname{Var}\left(x^{j}\right).
+\end{array}
+$$
+
+If we initialize $G_{l}^{i}$ to zeros, then its contribution to the variance will increase during training, causing exploding activations in the mainnet. Hence, we prefer to introduce a factor of $1/2$ to divide the variance between the weight and bias generation, where the variance of each component is allowed to either decrease or increase during training. This becomes a problem if the variance of the activations in the mainnet deviates too far from 1, but we found that it works well in practice.
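A numerical sketch of the $1/2$-split between weight and bias generation (sizes are hypothetical; $h(e[1])$ and $g(e[2])$ are stood in for by unit-variance vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
d_j = d_i = 256  # mainnet fan-in / fan-out
d_k = d_l = 64   # widths of the hypernet's last hidden layers
n = 20000

h_e1 = rng.normal(0.0, 1.0, size=d_k)  # stand-in for h(e[1])
g_e2 = rng.normal(0.0, 1.0, size=d_l)  # stand-in for g(e[2])

# Hyperfan-in, Case 2: the 1/2 factors split the output variance
# equally between the generated weights and the generated biases.
H = rng.normal(0.0, np.sqrt(1.0 / (2 * d_j * d_k)), size=(d_i * d_j, d_k))
G = rng.normal(0.0, np.sqrt(1.0 / (2 * d_l)), size=(d_i, d_l))
W = (H @ h_e1).reshape(d_i, d_j)
b = G @ g_e2

x = rng.normal(0.0, 1.0, size=(d_j, n))  # standardized input, Var(x) = 1
y = W @ x + b[:, None]
print(y.var())  # approximately 1: one half from Wx, one half from b
```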
+
+# 4.2 HYPERFAN-OUT
+
+Case 1. The hypernet generates the weights but not the biases of the mainnet. A similar derivation can be done for the backward pass using analogous assumptions on the gradients flowing in the mainnet, through the mainnet weights, and through the mainnet biases:
+
+$$
+\begin{array}{l}
+\frac{\partial L}{\partial x[t]^{i_t}} = \frac{\partial L}{\partial x[t+1]^{i_{t+1}}} W[t]_{i_t}^{i_{t+1}}, \\
+\frac{\partial L}{\partial W[t]_{i_t}^{i_{t+1}}} = \frac{\partial L}{\partial x[t+1]^{i_{t+1}}} x[t]^{i_t}, \qquad \frac{\partial L}{\partial h[t](e)^{k_t}} = \frac{\partial L}{\partial W[t]_{i_t}^{i_{t+1}}} H[t]_{i_t k_t}^{i_{t+1}}, \\
+\frac{\partial L}{\partial b[t]^{i_{t+1}}} = \frac{\partial L}{\partial x[t+1]^{i_{t+1}}}, \qquad \frac{\partial L}{\partial g[t](e)^{l_t}} = \frac{\partial L}{\partial b[t]^{i_{t+1}}} G[t]_{l_t}^{i_{t+1}}. \tag{4}
+\end{array}
+$$
+
+If we initialize the output layer $H$ with the analogous hyperfan-out formula $\mathrm{Var}(H[t]_{i_t k_t}^{i_{t+1}}) = \frac{1}{\mathrm{d}_{i_{t+1}} \mathrm{d}_{k_t} \mathrm{Var}(e^{k_t})}$ and the rest of the hypernet with fan-in init, then we can preserve input and output gradients on the mainnet: $\mathrm{Var}\left(\frac{\partial L}{\partial x[t]^{i_t}}\right) = \mathrm{Var}\left(\frac{\partial L}{\partial x[t+1]^{i_{t+1}}}\right)$. However, note that the gradients will shrink when flowing from the mainnet into the hypernet: $\mathrm{Var}\left(\frac{\partial L}{\partial h[t](e)^{k_t}}\right) = \frac{\mathrm{d}_{i_t}}{\mathrm{d}_{k_t} \mathrm{Var}(e^{k_t})} \mathrm{Var}\left(\frac{\partial L}{\partial W[t]_{i_t}^{i_{t+1}}}\right)$, and will additionally be scaled by a depth-independent factor due to the use of fan-in rather than fan-out init in the rest of the hypernet.
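A sketch of the backward-pass claim for a single generated layer (hypothetical sizes; the incoming gradient is stood in for by unit-variance noise):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, d_k, n = 128, 512, 64, 20000

e = rng.normal(0.0, 1.0, size=d_k)
# Hyperfan-out: Var(H) = 1/(d_out * d_k * Var(e)), so Var(W) = 1/d_out.
H = rng.normal(0.0, np.sqrt(1.0 / (d_out * d_k)), size=(d_out * d_in, d_k))
W = (H @ e).reshape(d_out, d_in)

g_out = rng.normal(0.0, 1.0, size=(d_out, n))  # dL/dy with unit variance
g_in = W.T @ g_out                             # dL/dx = W^T dL/dy
print(g_in.var())  # approximately 1: gradient variance preserved
```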
+
+Case 2. The hypernet generates both the weights and biases of the mainnet. In the classical case, the forward version (fan-in init) and the backward version (fan-out init) are symmetrical. This remains true for hypernets if they only generate the weights of the mainnet. However, if they also generate the biases, then the symmetry no longer holds, since the biases do not affect the gradient flow in the mainnet but they do in the hypernet (c.f. Equation 4). Nevertheless, we can initialize $G$ so that hyperfan-out init preserves the activation variance on the forward pass as much as possible (keeping the assumption that $\operatorname{Var}(x^j) = 1$ as before):
+
+$$
+\begin{array}{l}
+\operatorname{Var}\left(y^{i}\right) = \sum_{j} \left[\operatorname{Var}\left(W_{j}^{i} x^{j}\right)\right] + \operatorname{Var}\left(b^{i}\right) \\
+= \mathrm{d}_{j} \mathrm{d}_{k} \operatorname{Var}(e[1]^{m}) \operatorname{Var}(H[\text{hyperfan-out}]_{jk}^{i}) \operatorname{Var}(x^{j}) + \mathrm{d}_{l} \operatorname{Var}(e[2]^{n}) \operatorname{Var}(G_{l}^{i}) \\
+= \mathrm{d}_{j} \mathrm{d}_{k} \operatorname{Var}\left(e[1]^{m}\right) \operatorname{Var}\left(H[\text{hyperfan-in}]_{jk}^{i}\right) \operatorname{Var}\left(x^{j}\right) \tag{5}
+\end{array}
+$$
+
+Plugging in the formulae for Hyperfan-in and Hyperfan-out from above, we get
+
+$$
+\Longrightarrow \operatorname{Var}(G_{l}^{i}) = \frac{1 - \frac{\mathrm{d}_{j}}{\mathrm{d}_{i}}}{\mathrm{d}_{l} \operatorname{Var}(e[2]^{n})}.
+$$
+
+We summarize the variance formulae for hyperfan-in and hyperfan-out init in Table 1. It is not uncommon to re-use the same hypernet to generate different parts of the mainnet, as was originally done in Ha et al. (2016). We discuss this case in more detail in Appendix Section A.
+
+Table 1: Hyperfan-in and Hyperfan-out Variance Formulae for $W_{j}^{i} = H_{jk}^{i}h(e[1])^{k} + \beta_{j}^{i}$ . If $y^{i} = \mathrm{ReLU}(W_{j}^{i}x^{j} + b^{i})$ , then $\mathbb{1}_{\mathrm{ReLU}} = 1$ , else if $y^{i} = W_{j}^{i}x^{j} + b^{i}$ , then $\mathbb{1}_{\mathrm{ReLU}} = 0$ . If $b^{i} = G_{l}^{i}g(e[2])^{l} + \gamma^{i}$ , then $\mathbb{1}_{\mathrm{HBias}} = 1$ , else if $b^{i} = 0$ , then $\mathbb{1}_{\mathrm{HBias}} = 0$ . We initialize $h$ and $g$ with fan-in init, and $\beta_{j}^{i}, \gamma^{i} = 0$ . For convolutional layers, we have to further divide $\operatorname{Var}(H_{jk}^{i})$ by the size of the receptive field. Uniform init: $X \sim \mathcal{U}(-\sqrt{3\operatorname{Var}(X)},\sqrt{3\operatorname{Var}(X)})$ . Normal init: $X \sim \mathcal{N}(0,\operatorname{Var}(X))$ .
+
+| Initialization | Variance Formula | Initialization | Variance Formula |
+| --- | --- | --- | --- |
+| Hyperfan-in | $\operatorname{Var}(H_{jk}^{i}) = \frac{2^{\mathbb{1}_{\mathrm{ReLU}}}}{2^{\mathbb{1}_{\mathrm{HBias}}} \mathrm{d}_{j} \mathrm{d}_{k} \operatorname{Var}(e[1]^{m})}$ | Hyperfan-out | $\operatorname{Var}(H_{jk}^{i}) = \frac{2^{\mathbb{1}_{\mathrm{ReLU}}}}{\mathrm{d}_{i} \mathrm{d}_{k} \operatorname{Var}(e[1]^{m})}$ |
+| Hyperfan-in | $\operatorname{Var}(G_{l}^{i}) = \frac{2^{\mathbb{1}_{\mathrm{ReLU}}}}{2 \mathrm{d}_{l} \operatorname{Var}(e[2]^{n})}$ | Hyperfan-out | $\operatorname{Var}(G_{l}^{i}) = \frac{\max\left(2^{\mathbb{1}_{\mathrm{ReLU}}}\left(1 - \frac{\mathrm{d}_{j}}{\mathrm{d}_{i}}\right), 0\right)}{\mathrm{d}_{l} \operatorname{Var}(e[2]^{n})}$ |
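The variance formulae of Table 1 can be collected into a small helper (our own sketch; the function names and signatures are not from any released implementation):

```python
import numpy as np

def hyperfan_in_H_var(d_j, d_k, var_e1, relu=False, hbias=False):
    """Variance for the hypernet output layer H generating mainnet weights.
    For conv layers, divide further by the receptive field size."""
    return (2.0 if relu else 1.0) / ((2.0 if hbias else 1.0) * d_j * d_k * var_e1)

def hyperfan_out_H_var(d_i, d_k, var_e1, relu=False):
    """Hyperfan-out variance for H (mainnet fan-out d_i replaces fan-in d_j)."""
    return (2.0 if relu else 1.0) / (d_i * d_k * var_e1)

def hyperfan_in_G_var(d_l, var_e2, relu=False):
    """Variance for the output layer G generating mainnet biases (hyperfan-in)."""
    return (2.0 if relu else 1.0) / (2.0 * d_l * var_e2)

def hyperfan_out_G_var(d_i, d_j, d_l, var_e2, relu=False):
    """Hyperfan-out variance for G; clipped at 0 when d_j >= d_i."""
    num = max((2.0 if relu else 1.0) * (1.0 - d_j / d_i), 0.0)
    return num / (d_l * var_e2)

def uniform_init(shape, var, rng):
    # Caption's uniform init: X ~ U(-sqrt(3 Var(X)), sqrt(3 Var(X))).
    bound = np.sqrt(3.0 * var)
    return rng.uniform(-bound, bound, size=shape)
```

For instance, `hyperfan_in_H_var(d_j, d_k, 1.0)` recovers the Case 1 formula $\frac{1}{\mathrm{d}_j \mathrm{d}_k \operatorname{Var}(e^l)}$, and setting `relu=True, hbias=True` reproduces the full hyperfan-in entry of the table.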
+
+# 5 EXPERIMENTS
+
+We evaluated our proposed methods on four sets of experiments involving different use cases of hypernetworks: feedforward networks, continual learning, convolutional networks, and Bayesian neural networks. In all cases, we optimize with vanilla SGD and sample from the uniform distribution according to the variance formula given by the init method. More experimental details can be found in Appendix Section B.
+
+# 5.1 FEEDFORWARD NETWORKS ON MNIST
+
+As an illustrative first experiment, we train a feedforward network with five hidden layers (500 hidden units), a hyperbolic tangent activation function, and a softmax output layer, on MNIST across four different settings: (1) a classical network with Xavier init, (2) a hypernet with Xavier init that generates the weights of the mainnet, (3) a hypernet with hyperfan-in init that generates the weights of the mainnet, (4) and a hypernet with hyperfan-out init that generates the weights of the mainnet.
+
+The use of hyperfan init methods on a hypernetwork reproduces mainnet weights similar to those that have been trained from Xavier init on a classical network, while the use of Xavier init on a hypernetwork causes exploding activations right at the beginning of training (see Figure 1). Observe in Figure 2 that when the hypernetwork is initialized in the proper scale, the magnitude of generated weights stabilizes quickly. This in turn leads to a more stable training regime, as seen in Figure 3. More visualizations of the activations and gradients of both the mainnet and hypernet can be viewed in Appendix Section B.1. Qualitatively similar observations were made when we replaced the activation function with ReLU and Xavier with Kaiming init, with Kaiming init leading to even bigger activations at initialization.
+
+Suppose now the hypernet generates both the weights and biases of the mainnet instead of just the weights. We found that this architectural change leads the hyperfan init methods to take more time (but still less than Xavier init) to generate stable mainnet weights (c.f. Figure 25 in the Appendix).
+
+
+Figure 1: Mainnet Activations before the Start of Training on MNIST.
+
+Figure 2: Evolution of Hypernet Output Layer Activations during Training on MNIST. Xavier init results in unstable mainnet weights throughout training, while hyperfan-in and hyperfan-out init result in mainnet weights that stabilize quickly.
+
+Figure 3: Loss and Test Accuracy Plots on MNIST.
+
+# 5.2 CONTINUAL LEARNING ON REGRESSION TASKS
+
+Continual learning solves the problem of learning tasks in sequence without forgetting prior tasks. von Oswald et al. (2019) used a hypernetwork to learn embeddings for each task as a way to efficiently regularize the training process to prevent catastrophic forgetting. We compare different initialization schemes on their hypernetwork implementation, which generates the weights and biases of a ReLU mainnet with two hidden layers to solve a sequence of three regression tasks.
+
+In Figure 4, we plot the training loss averaged over 15 different runs, with the shaded area showing the standard error. We observe that the hyperfan methods produce smaller training losses at initialization and during training, eventually converging to a smaller loss for each task.
+
+
+Figure 4: Continual Learning Loss on a Sequence of Regression Tasks.
+
+# 5.3 CONVOLUTIONAL NETWORKS ON CIFAR-10
+
+Ha et al. (2016) applied a hypernetwork on a convolutional network for image classification on CIFAR-10. We note that our initialization methods do not handle residual connections, which were present in their chosen mainnet architecture and remain an important topic for future study. Instead, we implemented their hypernetwork architecture on a mainnet with the All Convolutional Net architecture (Springenberg et al., 2014), which is composed of convolutional layers and ReLU activation functions.
+
+After searching through a dense grid of learning rates, we failed to get the fan-in version of Kaiming init to train, even with very small learning rates. The fan-out version managed to begin delayed training, starting from around epoch 270 (see Figure 5). By contrast, both hyperfan-in and hyperfan-out init led to successful training immediately. This shows that a good init can make it possible to train models that would otherwise have been unamenable to training under a bad init.
+
+
+Figure 5: Loss and Test Accuracy Plots on CIFAR-10.
+
+# 5.4 BAYESIAN NEURAL NETWORKS ON IMAGENET
+
+Bayesian neural networks improve model calibration and provide uncertainty estimation, which guard against the pitfalls of overconfident networks. Ukai et al. (2018) developed a Bayesian neural network by using a hypernetwork to simulate an expressive prior distribution. We trained a similar hypernetwork by applying Ukai et al. (2018)'s methods on ImageNet, but differed in our choice of MobileNet (Howard et al., 2017) as a mainnet architecture that does not have residual connections.
+
+In the work of Ukai et al. (2018), it was noticed that even with the use of batch normalization in the mainnet, classical initialization approaches still led to diverging losses (due to exploding activations, c.f. Section 3). We observe similar results in our experiment (see Figure 6) — the fan-in version of Kaiming init, which is the default initialization in popular deep learning libraries like PyTorch and Chainer, resulted in substantially higher initial losses and led to slower training than the hyperfan methods. We found that the observation still stands even when the last layer of the mainnet is not generated by the hypernet. This shows that while batch normalization helps, it is not the solution for a bad init that causes exploding activations. Our approach solves this problem in a principled way, and is preferable to the trial-and-error based heuristics that Ukai et al. (2018) had to resort to in order to train their model.
+
+Surprisingly, the fan-out version of Kaiming init led to similar results as the hyperfan methods, suggesting that batch normalization might be sufficient to correct the bad initializations that result in vanishing activations. That being said, hypernet practitioners should not expect batch normalization to be the panacea for problems caused by bad initialization, especially in memory-constrained scenarios. In a Bayesian neural network application (especially in hypernet architectures without relaxed weight-sharing), the blowup in the number of parameters limits the use of big batch sizes, which is essential to the performance of batch normalization (Wu & He, 2018). For example, in this experiment, our hypernet model requires 32 times as many parameters as a classical MobileNet.
+
+To the best of our knowledge, the interaction between batch normalization and initialization is not well-understood, even in the classical case, and thus, our findings prompt an interesting direction for future research.
+
+
+Figure 6: Loss and Test Accuracy Plots on ImageNet.
+
+In all our experiments, hyperfan-in and hyperfan-out both led to successful hypernetwork training with SGD. We did not find a good reason to prefer one over the other (similar to He et al. (2015)'s observation in the classical case for fan-in and fan-out init).
+
+# 6 CONCLUSION
+
+For a long time, the promise of deep nets to learn rich representations of the world was left unfulfilled due to the inability to train these models. The discovery of greedy layer-wise pre-training (Hinton et al., 2006; Bengio et al., 2007) and later, Xavier and Kaiming init, as weight initialization strategies to enable such training was a pivotal achievement that kickstarted the deep learning revolution. This underscores the importance of model initialization as a fundamental step in learning complex representations.
+
+In this work, we developed the first principled weight initialization methods for hypernetworks, a rapidly growing branch of meta-learning. We hope our work will spur momentum towards the development of principled techniques for building and training hypernetworks, and eventually lead to significant progress in learning meta representations. Other non-hypernetwork methods of neural network generation (Stanley et al., 2009; Koutnik et al., 2010) can also be improved by considering whether their generated weights result in exploding activations and how to avoid that if so.
+
+# 7 ACKNOWLEDGEMENTS
+
+This research was supported in part by the US Defense Advanced Research Project Agency (DARPA) Lifelong Learning Machines Program, grant HR0011-18-2-0020. We thank Dan Martin and Yawei Li for helpful discussions, and the ICLR reviewers for their constructive feedback.
+
+# REFERENCES
+
+Ivana Balazevic, Carl Allen, and Timothy M Hospedales. Hypernetwork knowledge graph embeddings. arXiv preprint arXiv:1808.07018, 2018.
+Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep networks. In Advances in neural information processing systems, pp. 153-160, 2007.
+Andrew Brock, Theodore Lim, James M Ritchie, and Nick Weston. Smash: one-shot model architecture search through hypernetworks. arXiv preprint arXiv:1708.05344, 2017.
+Lior Deutsch, Erik Nijkamp, and Yu Yang. A generative model for sampling high-performance and diverse weights for neural networks. arXiv preprint arXiv:1905.02898, 2019.
+Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249-256, 2010.
+David Ha, Andrew Dai, and Quoc V Le. Hypernetworks. arXiv preprint arXiv:1609.09106, 2016.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pp. 1026-1034, 2015.
+Christian Henning, Johannes von Oswald, João Sacramento, Simone Carlo Surace, Jean-Pascal Pfister, and Benjamin F Grewe. Approximating the predictive distribution via adversarially-trained hypernetworks. In *Bayesian Deep Learning Workshop*, NeurIPS (Spotlight), volume 2018, 2018.
+Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural computation, 18(7):1527-1554, 2006.
+Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+Sylwester Klocek, Łukasz Maziarka, Maciej Wołczyk, Jacek Tabor, Marek Śmieja, and Jakub Nowak. Hypernetwork functional image representation. arXiv preprint arXiv:1902.10404, 2019.
+Jan Koutnik, Faustino Gomez, and Jürgen Schmidhuber. Evolving neural networks in compressed weight space. In Proceedings of the 12th annual conference on Genetic and evolutionary computation, pp. 619-626. ACM, 2010.
+Agustinus Kristiadi and Asja Fischer. Predictive uncertainty quantification with compound density networks. arXiv preprint arXiv:1902.01080, 2019.
+David Krueger, Chin-Wei Huang, Riashat Islam, Ryan Turner, Alexandre Lacoste, and Aaron Courville. Bayesian hypernetworks. arXiv preprint arXiv:1710.04759, 2017.
+Soren Laue, Matthias Mitterreiter, and Joachim Giesen. Computing higher order derivatives of matrix and tensor expressions. In Advances in Neural Information Processing Systems, pp. 2755-2764, 2018.
+Zechun Liu, Haoyuan Mu, Xiangyu Zhang, Zichao Guo, Xin Yang, Tim Kwang-Ting Cheng, and Jian Sun. Metapruning: Meta learning for automatic neural network channel pruning. arXiv preprint arXiv:1903.10258, 2019.
+Jonathan Lorraine and David Duvenaud. Stochastic hyperparameter optimization through hypernetworks. arXiv preprint arXiv:1802.09419, 2018.
+Elliot Meyerson and Risto Miikkulainen. Modular universal reparameterization: Deep multi-task learning across diverse domains. arXiv preprint arXiv:1906.00097, 2019.
+
+Zheyi Pan, Yuxuan Liang, Junbo Zhang, Xiuwen Yi, Yong Yu, and Yu Zheng. Hyperst-net: Hypernetworks for spatio-temporal forecasting. arXiv preprint arXiv:1809.10889, 2018.
+Nick Pawlowski, Andrew Brock, Matthew CH Lee, Martin Rajchl, and Ben Glocker. Implicit weight uncertainty in neural networks. arXiv preprint arXiv:1711.01297, 2017.
+Neale Ratzlaff and Li Fuxin. Hypergan: A generative model for diverse, performant neural networks. arXiv preprint arXiv:1901.11058, 2019.
+Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=ryQu7f-RZ.
+David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Nature, 323:533-536, 1986.
+Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
+Joan Serrà, Santiago Pascual, and Carlos Segura. Blow: a single-scale hyperconditioned flow for non-parallel raw audio voice conversion. arXiv preprint arXiv:1906.00794, 2019.
+Falong Shen, Shuicheng Yan, and Gang Zeng. Meta networks for neural style transfer. arXiv preprint arXiv:1709.04111, 2017.
+Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
+Kenneth O Stanley, David B D'Ambrosio, and Jason Gauci. A hypercube-based encoding for evolving large-scale neural networks. Artificial life, 15(2):185-212, 2009.
+Joseph Suarez. Language modeling with recurrent highway hypernetworks. In Advances in neural information processing systems, pp. 3267-3276, 2017.
+Zhun Sun, Mete Ozay, and Takayuki Okatani. Hypernetworks with statistical filtering for defending adversarial examples. arXiv preprint arXiv:1711.01791, 2017.
+Kenya Ukai, Takashi Matsubara, and Kuniaki Uehara. Hypernetwork-based implicit posterior estimation and model averaging of cnn. In *Asian Conference on Machine Learning*, pp. 176-191, 2018.
+Johannes von Oswald, Christian Henning, João Sacramento, and Benjamin F Grewe. Continual learning with hypernetworks. arXiv preprint arXiv:1906.00695, 2019.
+Ashia C Wilson, Rebecca Roelofs, Mitchell Stern, Nati Srebro, and Benjamin Recht. The marginal value of adaptive gradient methods in machine learning. In Advances in Neural Information Processing Systems, pp. 4148-4158, 2017.
+Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 3-19, 2018.
+Chris Zhang, Mengye Ren, and Raquel Urtasun. Graph hypernetworks for neural architecture search. arXiv preprint arXiv:1810.05749, 2018.
+Hongyi Zhang, Yann N Dauphin, and Tengyu Ma. Fixup initialization: Residual learning without normalization. arXiv preprint arXiv:1901.09321, 2019.
+
+# APPENDIX
+
+# A RE-USING HYPERNET WEIGHTS
+
+# A.1 FOR MAINNET WEIGHTS OF THE SAME SIZE
+
+For model compression or weight-sharing purposes, different parts of the mainnet might be generated by the same hypernet function. This invalidates some of the independence assumptions in our analysis. Consider the example of the same hypernet being used to generate multiple mainnet weight layers of the same size, i.e. $H[t]_{i_t k}^{i_{t+1}} = H[t + 1]_{i_{t+1}k}^{i_{t+2}}$ with $\mathrm{d}_{i_{t+1}} = \mathrm{d}_{i_{t+2}} = \mathrm{d}_{i_t}$. Then $x[t + 1]^{i_{t+1}} = H[t]_{i_t k_t}^{i_{t+1}} e[t]^{k_t} x[t]^{i_t}$ is no longer independent of $W[t + 1]_{i_{t+1}k_{t+1}}^{i_{t+2}} = H[t + 1]_{i_{t+1}k_{t+1}}^{i_{t+2}} e[t + 1]^{k_{t+1}}$.
+
+The relaxation of some of these independence assumptions does not always prove to be a big problem in practice, because the correlations introduced by repeated use of $H$ can be minimized with the use of flat distributions like the uniform distribution. It can even be helpful, since the re-use of the same hypernet for different layers causes the gradient flowing through the hypernet output layer to be the sum of the gradients from the weights of these layers: $\frac{\partial L}{\partial h(e)^k} = \sum_t \frac{\partial L}{\partial W[t]_{i_t}^{i_{t+1}}} H_{i_t k}^{i_{t+1}}$, thus combating the shrinking effect.
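As a toy numerical check of this summing effect (a scalar sketch with made-up values, not the paper's code), a single shared hypernet output $h$ below generates two mainnet weights through fixed embeddings; the analytic gradient, which adds the two per-layer contributions, matches a finite-difference estimate:

```python
# Sketch: one shared hypernet output h generates two mainnet weights
# W1 = h * e1 and W2 = h * e2 (scalar toy case, values made up).
# dL/dh is then the SUM of the per-layer gradient contributions.

def loss(W1, W2):
    return W1 ** 2 + 3.0 * W2          # arbitrary smooth loss

def grad_wrt_h(h, e1, e2):
    W1, W2 = h * e1, h * e2
    dL_dW1 = 2.0 * W1                  # dL/dW1
    dL_dW2 = 3.0                       # dL/dW2
    return dL_dW1 * e1 + dL_dW2 * e2   # chain rule: contributions add up

h, e1, e2 = 0.7, 1.3, -0.4
analytic = grad_wrt_h(h, e1, e2)

# Central finite-difference check of the summed gradient.
eps = 1e-6
numeric = (loss((h + eps) * e1, (h + eps) * e2)
           - loss((h - eps) * e1, (h - eps) * e2)) / (2 * eps)
```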
+
+# A.2 FOR MAINNET WEIGHTS OF DIFFERENT SIZES
+
+Similar reasoning applies if the same hypernet was used to generate differently sized subsets of weights in the mainnet. However, we encourage avoiding this kind of hypernet architecture design if not otherwise essential, since it will complicate the initialization formulae listed in Table 1.
+
+Consider Ha et al. (2016)'s hypernetwork architecture. Their two-layer hypernet generated weight chunks of size $(K,n,n)$ for a main convolutional network where $K = 16$ was found to be the highest common factor among the size of mainnet layers, and $n^2 = 9$ was the size of the receptive field. We simplify the presentation by writing $i$ for $i_t$ , $j$ for $j_t$ , $k$ for $k_{t,m}$ , and $l$ for $l_{t,m}$ .
+
+$$
+\begin{array}{l}
+W[t]_j^i = \begin{cases}
+H_k^{\,i\,(\bmod K)}\, \alpha[t]\big[\,j + \lfloor \tfrac{i}{K} \rfloor \mathrm{d}_j\,\big]^k + \beta^{\,i\,(\bmod K)} & \text{if } i \text{ is divisible by } K \\[4pt]
+\delta_{\,i\,(\bmod K)\; j\,(\bmod K)} \Big[ H_k^{\,j\,(\bmod K)}\, \alpha[t]\big[\,i + \lfloor \tfrac{j}{K} \rfloor \mathrm{d}_i\,\big]^k + \beta^{\,j\,(\bmod K)} \Big] & \text{if } j \text{ is divisible by } K
+\end{cases} \\[4pt]
+\alpha[t][m_t]^k = G[t][m_t]_l^k\, e[t][m_t]^l + \gamma[t][m_t]^k
+\end{array} \tag{6}
+$$
+
+Because the output layer $(H, \beta)$ in the hypernet was re-used to generate mainnet weight matrices of different sizes (i.e. in general, $i_t \neq i_{t+1}, j_t \neq j_{t+1}$), $G$ effectively becomes the output layer that we should consider for hyperfan-in and hyperfan-out initialization.
+
+Hence, to achieve fan-in in the mainnet, $\operatorname{Var}(W[t]_j^i) = \frac{1}{\mathrm{d}_j}$, we have to use fan-in init for $H$ (i.e. $\operatorname{Var}(H_k^{\,i\,(\bmod K)}) = \frac{1}{\mathrm{d}_k}$, not $\frac{1}{\mathrm{d}_j \mathrm{d}_k \operatorname{Var}(e[t][m_t]^l)}$), and hyperfan-in init for $G$ (i.e. $\operatorname{Var}(G[t][m_t]_l^k) = \frac{1}{\mathrm{d}_j \mathrm{d}_l \operatorname{Var}(e[t][m_t]^l)}$).
+
+Analogously, to achieve fan-out in the mainnet, $\operatorname{Var}(W[t]_j^i) = \frac{1}{\mathrm{d}_i}$, we have to use fan-in init for $H$ (i.e. $\operatorname{Var}(H_k^{\,i\,(\bmod K)}) = \frac{1}{\mathrm{d}_k}$, not $\frac{1}{\mathrm{d}_i \mathrm{d}_k \operatorname{Var}(e[t][m_t]^l)}$), and hyperfan-out init for $G$ (i.e. $\operatorname{Var}(G[t][m_t]_l^k) = \frac{1}{\mathrm{d}_i \mathrm{d}_l \operatorname{Var}(e[t][m_t]^l)}$).
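The resulting variance formulae for this chunked architecture can be written down directly. A minimal sketch (the dimensions are made-up examples; $\operatorname{Var}(e) = 1$ corresponds to embeddings drawn from $\mathcal{U}(-\sqrt{3}, \sqrt{3})$):

```python
# Initialization variances for the chunked hypernet of Eq. 6 (sketch).
# H gets classical fan-in init; G gets hyperfan-in or hyperfan-out init.

def var_H_fanin(d_k):
    return 1.0 / d_k                       # Var(H) = 1/d_k

def var_G_hyperfan_in(d_j, d_l, var_e=1.0):
    return 1.0 / (d_j * d_l * var_e)       # Var(G) = 1/(d_j d_l Var(e))

def var_G_hyperfan_out(d_i, d_l, var_e=1.0):
    return 1.0 / (d_i * d_l * var_e)       # Var(G) = 1/(d_i d_l Var(e))

# Example dimensions (made up): mainnet fan-in d_j = 9 * 64,
# embedding size d_l = 50, hypernet hidden size d_k = 50.
v_H = var_H_fanin(50)
v_G_in = var_G_hyperfan_in(9 * 64, 50)
```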
+
+# B MORE EXPERIMENTAL DETAILS
+
+# B.1 FEEDFORWARD NETWORKS ON MNIST
+
+The networks were trained on MNIST for 30 epochs with batch size 10 using a learning rate of 0.0005 for the hypernets and 0.01 for the classical network. The hypernets had one linear layer with embeddings of size 50 and different hidden layers in the mainnet were all generated by the same hypernet output layer with a different embedding, which was randomly sampled from $\mathcal{U}(-\sqrt{3},\sqrt{3})$ and fixed. We use the mean cross entropy loss for training, but the summed cross entropy loss for testing.
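A minimal simulation of this setup (an illustrative sketch, not the experiment code; the sizes below are assumptions) checks that hyperfan-in initialization of a one-layer hypernet yields generated mainnet weights with variance close to $1/\text{fan-in}$:

```python
import numpy as np

rng = np.random.default_rng(0)

fan_in, emb_dim, n_weights = 784, 50, 200_000
var_e = 1.0                                   # variance of U(-sqrt(3), sqrt(3))

# Hyperfan-in: Var(H) = 1 / (fan_in * emb_dim * Var(e)).
H = rng.normal(0.0, np.sqrt(1.0 / (fan_in * emb_dim * var_e)),
               size=(n_weights, emb_dim))
e = rng.uniform(-np.sqrt(3), np.sqrt(3), size=emb_dim)  # fixed embedding

W = H @ e                                     # generated mainnet weights
ratio = W.var() / (1.0 / fan_in)              # should be close to 1
```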
+
+We show activation and gradient plots for two cases: (i) the hypernet generates only the weights of the mainnet, and (ii) the hypernet generates both the weights and biases of the mainnet. (i) covers Figures 3, 1, 7, 8, 9, 10, 11, 12, 2, 13, 14, 15, and 16. (ii) covers Figures 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, and 29.
+
+The activations and gradients in our plots were calculated by averaging across a fixed held-out set of 300 examples drawn randomly from the test set.
+
+In Figures 1, 8, 9, 11, 12, 13, 14, 16, 18, 20, 21, 23, 24, 26, 27, and 29, the y axis shows the number of activations/gradients, while the x axis shows the value of the activations/gradients. The values of activations/gradients from the hypernet output layer correspond to the values of mainnet weights.
+
+In Figures 2, 7, 10, 15, 19, 22, 25, and 28, the y axis shows the mean value of the activations/gradients, while each increment on the x axis corresponds to a measurement that was taken every 1000 training batches, with the bars denoting one standard deviation away from the mean.
+
+# B.1.1 HYPERNET GENERATES ONLY THE MAINNET WEIGHTS
+
+Figure 7: Evolution of Mainnet Activations during Training on MNIST.
+
+Figure 8: Mainnet Activations at the End of Training on MNIST.
+
+Figure 9: Mainnet Gradients before the Start of Training on MNIST.
+
+Figure 10: Evolution of Mainnet Gradients during Training on MNIST.
+
+Figure 11: Mainnet Gradients at the End of Training on MNIST.
+
+Figure 12: Hypernet Output Layer Activations before the Start of Training on MNIST.
+
+Figure 13: Hypernet Output Layer Activations at the End of Training on MNIST.
+
+Figure 14: Hypernet Output Layer Gradients before the Start of Training on MNIST.
+
+Figure 15: Evolution of Hypernet Output Layer Gradients during Training on MNIST.
+
+Figure 16: Hypernet Output Layer Gradients at the End of Training on MNIST.
+
+# B.1.2 HYPERNET GENERATES BOTH MAINNET WEIGHTS AND BIASES
+
+Figure 17: Loss and Test Accuracy Plots on MNIST.
+
+Figure 18: Mainnet Activations before the Start of Training on MNIST.
+
+Figure 19: Evolution of Mainnet Activations during Training on MNIST.
+
+Figure 20: Mainnet Activations at the End of Training on MNIST.
+
+Figure 21: Mainnet Gradients before the Start of Training on MNIST.
+
+Figure 22: Evolution of Mainnet Gradients during Training on MNIST.
+
+Figure 23: Mainnet Gradients at the End of Training on MNIST.
+
+Figure 24: Hypernet Output Layer Activations before the Start of Training on MNIST.
+
+Figure 25: Evolution of Hypernet Output Layer Activations during Training on MNIST.
+
+Figure 26: Hypernet Output Layer Activations at the End of Training on MNIST.
+
+Figure 27: Hypernet Output Layer Gradients before the Start of Training on MNIST.
+
+Figure 28: Evolution of Hypernet Output Layer Gradients during Training on MNIST.
+
+Figure 29: Hypernet Output Layer Gradients at the End of Training on MNIST.
+
+# B.1.3 REMARK ON THE COMBINATION OF FAN-IN AND FAN-OUT INIT
+
+Glorot & Bengio (2010) proposed to use the harmonic mean of the two different initialization formulae derived from the forward and backward pass. He et al. (2015) commented that either version suffices for convergence, and that it does not really matter given that the difference between the two will be a depth-independent factor.
+
+We experimented with the harmonic, geometric, and arithmetic means of the two different formulae in both the classical and the hypernet case. There was no indication of any significant benefit from taking any of the three different means in both cases. Thus, we confirm and concur with He et al. (2015)'s original observation that either the fan-in or the fan-out version suffices.
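For concreteness, the three combinations we compared can be sketched as follows (the fan sizes are made-up examples):

```python
import math

def fan_variances(fan_in, fan_out):
    """Candidate init variances combining the forward (fan-in) and
    backward (fan-out) derivations, as in Glorot & Bengio (2010)."""
    v_in, v_out = 1.0 / fan_in, 1.0 / fan_out
    return {
        "harmonic":   2.0 / (fan_in + fan_out),   # harmonic mean of v_in, v_out
        "geometric":  math.sqrt(v_in * v_out),
        "arithmetic": (v_in + v_out) / 2.0,
    }

v = fan_variances(300, 100)
```

Note that the three means differ only by a depth-independent factor, consistent with the observation of He et al. (2015).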
+
+# B.2 CONTINUAL LEARNING ON REGRESSION TASKS
+
+The mainnet is a feedforward network with two hidden layers (10 hidden units) and the ReLU activation function. The weights and biases of the mainnet are generated from a hypernet with two hidden layers (10 hidden units) and trainable embeddings of size 2 sampled from $\mathcal{U}(-\sqrt{3},\sqrt{3})$ . We keep the same continual learning hyperparameter $\beta_{output}$ value of 0.005 and pick the best learning rate for each initialization method from $\{10^{-2},10^{-3},10^{-4},10^{-5}\}$ . Notably, Kaiming (fan-in) could only be trained from learning rate $10^{-5}$ , with losses diverging soon after initialization using the other learning rates. Each task was trained for 6000 training iterations using batch size 32, with Figure 4 plotted from losses measured at every 100 iterations.
+
+# B.3 CONVOLUTIONAL NETWORKS ON CIFAR-10
+
+The networks were trained on CIFAR-10 for 500 epochs starting with an initial learning rate of 0.0005 using batch size 100, and decaying with $\gamma = 0.1$ at epochs 350 and 450. The hypernet is composed of two layers (50 hidden units) with separate embeddings and separate input layers but shared output layers. The weight generation happens in blocks of (96,3,3) where $K = 96$ is the highest common factor between the different sizes of the convolutional layers in the mainnet and $n = 3$ is the size of the convolutional filters (see Appendix Section A.2 for a more detailed explanation on the hypernet architecture). The embeddings are size 50 and fixed after random sampling from $\mathcal{U}(-\sqrt{3},\sqrt{3})$ . We use the mean cross entropy loss for training, but the summed cross entropy loss for testing.
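The block size $K$ is simply the greatest common divisor of the mainnet channel widths. A sketch (the channel list is a hypothetical example consistent with $K = 96$):

```python
import math
from functools import reduce

# Hypothetical channel widths of the mainnet's convolutional layers.
channel_widths = [96, 96, 192, 192, 192, 192]

K = reduce(math.gcd, channel_widths)   # highest common factor
n = 3                                  # spatial size of the conv filters
block_shape = (K, n, n)                # weights are generated in (96, 3, 3) blocks
```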
+
+# B.4 BAYESIAN NEURAL NETWORK ON IMAGENET
+
+Ukai et al. (2018) showed that a Bayesian neural network can be developed by using a hypernetwork to express a prior distribution without substantial changes to the vanilla hypernetwork setting. Their method simply requires putting $\mathcal{L}_2$ -regularization on the model parameters and sampling from stochastic embeddings. We trained a linear hypernet to generate the weights of a MobileNet mainnet architecture (excluding the batch normalization layers), using the block-wise sampling strategy described in Ukai et al. (2018), with a factor of 0.0005 for the $\mathcal{L}_2$ -regularization. We initialize fixed embeddings of size 32 sampled from $\mathcal{U}(-\sqrt{3}, \sqrt{3})$, and sample additive stochastic noise from $\mathcal{U}(-0.1, 0.1)$ at the beginning of every mini-batch of training. The training was done on ImageNet with batch size 256 and learning rate 0.1 for 25 epochs, or equivalently, 125125 iterations. The testing was done with 10 Monte Carlo samples. We omit the test loss plots due to the computational expense of doing 10 forward passes after every mini-batch instead of every epoch.
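The sampling scheme can be sketched as follows (toy sizes; `sample_weights` is a hypothetical helper, and the real model generates MobileNet weights rather than a flat vector):

```python
import numpy as np

rng = np.random.default_rng(0)

emb_dim, n_weights = 32, 1000

# Fixed embedding from U(-sqrt(3), sqrt(3)); a linear hypernet H maps it
# to mainnet weights.
e = rng.uniform(-np.sqrt(3), np.sqrt(3), size=emb_dim)
H = rng.normal(0.0, 0.05, size=(n_weights, emb_dim))

def sample_weights():
    """One stochastic draw: additive noise U(-0.1, 0.1) on the embedding."""
    noise = rng.uniform(-0.1, 0.1, size=emb_dim)
    return H @ (e + noise)

# Test-time prediction: average over 10 Monte Carlo weight samples.
mc_weights = np.mean([sample_weights() for _ in range(10)], axis=0)
```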
\ No newline at end of file
diff --git a/probabilisticconnectionimportanceinferenceandlosslesscompressionofdeepneuralnetworks/full.md b/probabilisticconnectionimportanceinferenceandlosslesscompressionofdeepneuralnetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a3f6f2c84e7c4b9af1beba2da4d4eb204d8e1ea4
--- /dev/null
+++ b/probabilisticconnectionimportanceinferenceandlosslesscompressionofdeepneuralnetworks/full.md
@@ -0,0 +1,373 @@
+# PROBABILISTIC CONNECTION IMPORTANCE INFERENCE AND LOSSLESS COMPRESSION OF DEEP NEURAL NETWORKS
+
+Xin Xing
+
+Harvard University
+
+Long Sha
+
+Brandeis University
+
+Pengyu Hong
+
+Brandeis University
+
+Zuofeng Shang
+
+New Jersey Institute of Technology
+
+Jun S. Liu
+
+Harvard University
+
+# ABSTRACT
+
+Deep neural networks (DNNs) can be huge in size, requiring a considerable amount of energy and computational resources to operate, which limits their applications in numerous scenarios. It is thus of interest to compress DNNs while maintaining their performance levels. We here propose a probabilistic importance inference approach for pruning DNNs. Specifically, we test the significance of the relevance of a connection in a DNN to the DNN's outputs using a nonparametric score test and keep only the significant ones. Experimental results show that the proposed approach achieves better lossless compression rates than existing techniques.
+
+# 1 INTRODUCTION
+
+Deep neural networks (DNNs) have many successful applications ranging from computer vision and natural language processing to computational biology. However, large DNNs usually require significant memory and storage overhead and sometimes a large network bandwidth, which hinders their use on mobile devices. Running large-size neural networks also consumes a considerable amount of energy, making their deployment on battery-constrained devices difficult. Furthermore, over-parameterization in DNN architectures can impair their performance. Recent works (Han et al. (2015); Ullrich et al. (2017); Louizos et al. (2017) and references therein) show ways to reduce network complexity by using proper regularization or network pruning to significantly reduce the number of parameters. One way to convert a dense DNN into a sparse one is by applying $L_0 / L_1$ regularization on the model parameters. The $L_1$ penalty is computationally efficient, but it also introduces more bias on the large weights and may lead to significant degradation in model performance (Han et al., 2015). In contrast, $L_0$ regularization shows better performance, but incurs much higher computational complexity due to its combinatorial nature. Pruning, as shown in Han et al. (2015) and Tartaglione et al. (2018), is another promising strategy to sparsify DNNs by only keeping network connections more relevant to the final output. The importance of a network connection (i.e., the connection between two network nodes) can be approximated by the magnitudes or gradients of its weights. However, such an approximation may not be accurate since it does not consider the highly nonlinear relationships between network connections and the final output induced by the multi-layer convolutions of DNNs.
+
+Some available network compression methods improve the computational performance with moderate to high losses in accuracy, which can be highly undesirable in many critical missions (such as autonomous driving). In order to achieve lossless compression, we need to correctly decipher the relationships between the network connections and the final output. This is a challenging task because the structural nature of DNNs makes the dependence between a network connection and the network output highly nonlinear. In this paper, we propose a probabilistic connection importance inference (PCII) method for testing whether a connection in a DNN is relevant to the DNN output. Specifically, we introduce a new technique called probabilistic tensor product decomposition to decompose the association of two connected network nodes into two components: one related to the DNN output and the other independent of the DNN output. If the strength of the first component is high, we keep the
+
+network connection. Otherwise, we delete it. The inference is carried out by a nonparametric score test, which is based on modeling the log-transformed joint density of the connected nodes and the final output in a tensor product reproducing kernel Hilbert space (RKHS). We further derive the asymptotic distribution of the proposed test statistics, thus avoiding the computationally intensive resampling step encountered in most nonparametric inference settings. We implemented the method and tested it on image classification tasks, in which our approach achieved the highest lossless compression rates.
+
+Section 2 reviews relevant literature; Section 3 introduces the PCII method and algorithm; Section 4 establishes theoretical properties for using the method to infer dependence between a network connection and the DNN output; and Section 5 shows the experimental results and concludes with a short discussion.
+
+# 2 RELATED WORKS
+
+Zhu & Gupta (2017) found that a DNN can be greatly sparsified with minimal loss in accuracy. One strategy for sparsifying DNNs is to shrink small weights to zero. Along this line of thinking, Louizos et al. (2017) introduced a smoothed version of $L_{0}$ regularization aiming to be both computationally feasible and beneficial to generalizability. There are also some regularization methods based on Bayesian formulations. Ullrich et al. (2017) proposed placing a Gaussian mixture prior on the weights; sparsity is achieved by concentrating weights around cluster centers. Molchanov et al. (2017) proposed variational dropout, which learns individual dropout rates based on the empirical Bayes principle. Also, as shown in Han et al. (2015), the performance of pruning and retraining is related to choosing the correct regularization, such as $L_{1}$ or $L_{2}$ regularization.
+
+PCII offers a means to prune a DNN by keeping the network connections that are most relevant to the DNN output. This idea goes back to the optimal brain damage work by LeCun et al. (1990), which showed that many of the parameters in a network are redundant and do not contribute significantly to the output. Later, Hassibi et al. (1993) proposed the optimal brain surgeon method, which leverages a second-order Taylor expansion to select parameters for deletion. More recently, Han et al. (2015) proposed a three-step pruning method, and Tartaglione et al. (2018) proposed to prune a DNN based on a sensitivity score. There are also several approaches targeting the sparsification of convolution layers. For example, Anwar et al. (2017) proposed to prune feature maps and kernels in convolution layers.
+
+Compared with existing pruning methods based on the magnitude or gradient of weights, our approach directly models the relationship between a network connection and the network output in a nonparametric way. In addition, our inference is based on statistical hypothesis testing, which outputs $p$ -values to quantify the dependence strength of each network connection on the DNN output. The use of $p$ -values allows network connections to be treated in a unified way and alleviates the need for the ad hoc weight normalization required by some existing approaches.
+
+# 3 PROBABILISTIC CONNECTION IMPORTANCE INFERENCE
+
+In this section, we establish a general procedure for building the probabilistic connection structure, in which the connections are inferred by the nonparametric score test in tensor product RKHS. As shown in our experiments (see Section 5), our technique not only sparsifies the network but improves its generalization ability.
+
+# 3.1 PCII ON FULLY CONNECTED LAYERS
+
+Without loss of generality, we let a feed-forward acyclic DNN (Figure 1) be composed of $T$ layers with $X_{t}$ being the input of the $t$ -th network layer, where $t = 0,1,\ldots,T$ . Let $t = 0$ and $t = T$ indicate the input and output layers, respectively, and let $0 < t < T$ index the hidden layers. The collection of all the nodes is denoted as $\mathcal{G}$ and the collection of all network connections is denoted as $\mathcal{E}$ . We use a pair of nodes to denote a connection in $\mathcal{E}$ . For example, $(X_{t,1},X_{t + 1,1})$ denotes an edge from the $1^{st}$ node in the $t$ -th layer to the $1^{st}$ node in the $(t + 1)$ -th layer. For simplicity, we use $Y$ to denote the output layer, which takes categorical values for a classification task and continuous values for regression.
+
+
+Figure 1: Flowchart for probabilistic connection inference. We have two nodes $\alpha$ and $\beta$ from a fully connected neural network. The connection of $\alpha$ and $\beta$ is expressed as $\eta_{\alpha,\beta} + \eta_{\alpha,\beta,y}$ . The importance of the connection is inferred by testing whether the three way interaction $\eta_{\alpha,\beta,y}$ is zero or not.
+
+The output of the $(t + 1)$ -th layer can be described as
+
+$$
+X_{t+1,j} = g_{t+1}\left(\sum_{i=1}^{m_t} w_{t,t+1}^{i,j} X_{t,i}\right), \quad \text{for } j = 1, \dots, m_{t+1},
+$$
+
+where $m_{t}$ denotes the number of nodes in the $t$ -th layer, and $g_{t+1}$ is the activation function. It should be noted that the magnitude of the weights is not necessarily the best indication of the impact of the corresponding network connection. A preferable way is to directly model the relationship between a network connection and the final output of a DNN. For simplicity, we use $(\alpha, \beta)$ to denote an arbitrary network connection. Due to the high non-linearity of a DNN, we use a nonparametric function to model the relationship among $\alpha$ , $\beta$ , and $Y$ .
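The layer map above can be sketched directly (toy sizes, with ReLU as a stand-in activation):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def next_layer(X_t, W, g=relu):
    """X_{t+1,j} = g(sum_i w^{i,j} X_{t,i}) for a fully connected layer."""
    return g(X_t @ W)

X_t = np.array([1.0, -2.0])        # m_t = 2 nodes in the current layer
W = np.array([[2.0, 0.0],          # w^{i,j}: row i = input node, column j = output node
              [1.0, -1.0]])
X_next = next_layer(X_t, W)        # -> [0., 2.]
```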
+
+Let us denote the joint density of $\alpha$ , $\beta$ , and $Y$ as $f(\alpha, \beta, y)$ and the log-transformed density as $\eta(\alpha, \beta, y)$ . We assume that $\eta(\alpha, \beta, y)$ can be decomposed as
+
+$$
+\eta (\alpha , \beta , y) = \eta_{\alpha} + \eta_{\beta} + \eta_{y} + \eta_{\alpha , y} + \eta_{\beta , y} + \underbrace{\eta_{\alpha , \beta} + \eta_{\alpha , \beta , y}}_{\text{interaction between } \alpha \text{ and } \beta}, \tag{1}
+$$
+
+where $\eta_{\alpha}, \eta_{\beta}$ , and $\eta_{y}$ are marginal effects, $\eta_{\alpha, y}, \eta_{\beta, y}$ , and $\eta_{\alpha, \beta}$ are the two-way interaction terms, and $\eta_{\alpha, \beta, y}$ is the three-way interaction term. Here, we interpret the connection as the interaction of $\alpha$ and $\beta$ , i.e. $\eta_{\alpha, \beta} + \eta_{\alpha, \beta, y}$ . Specifically, $\eta_{\alpha, \beta}$ is the interaction effect of $\alpha$ and $\beta$ without the impact of $y$ , and $\eta_{\alpha, \beta, y}$ characterizes the interaction of $\alpha, \beta$ impacted by $y$ . Our aim is to measure the significance of the connection by how much it is related to $Y$ . To model this problem mathematically, we measure the association between the connection and $Y$ by the significance of the three-way interaction $\eta_{\alpha, \beta, y}$ . Therefore, inferring whether the connection is related to the final output $Y$ is equivalent to testing whether $\eta_{\alpha, \beta, y} = 0$ . As shown in Figure 1, the connection is important for the network model if and only if the three-way interaction $\eta_{\alpha, \beta, y} \neq 0$ . We propose a score test to quantify the significance of this term. The detailed construction of the statistical test is explained in Section 4.2.
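On a discrete grid, the decomposition in (1) can be computed by successive mean-subtraction. The following functional-ANOVA sketch (under a uniform averaging measure, not the paper's RKHS estimator) illustrates that a connection unrelated to $Y$ has a vanishing three-way term:

```python
import numpy as np

def three_way(eta):
    """Extract the three-way interaction eta_{alpha,beta,y} from a
    discretized table eta[a, b, y] by removing the mean, main effects,
    and two-way interactions (means as side conditions)."""
    mu = eta.mean()
    A = eta.mean(axis=(1, 2)) - mu
    B = eta.mean(axis=(0, 2)) - mu
    Y = eta.mean(axis=(0, 1)) - mu
    AB = eta.mean(axis=2) - mu - A[:, None] - B[None, :]
    AY = eta.mean(axis=1) - mu - A[:, None] - Y[None, :]
    BY = eta.mean(axis=0) - mu - B[:, None] - Y[None, :]
    return (eta - mu
            - A[:, None, None] - B[None, :, None] - Y[None, None, :]
            - AB[:, :, None] - AY[:, None, :] - BY[None, :, :])

a = b = y = np.array([-1.0, 0.0, 1.0])   # centred toy grids

# Additive effects plus an alpha-beta interaction only: the connection
# exists but is unrelated to Y, so the three-way term vanishes.
eta_null = (a[:, None, None] + b[None, :, None] ** 2
            + a[:, None, None] * b[None, :, None] + y[None, None, :])

# Adding a genuine alpha*beta*y term makes the three-way term nonzero.
eta_alt = eta_null + a[:, None, None] * b[None, :, None] * y[None, None, :]
```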
+
+# 3.2 PCII ON CONVOLUTIONAL LAYERS
+
+The PCII test can be adapted to different activation functions and layer types. Here, we generalize it to handle convolutional layers, which are critical in many modern neural network architectures such as VGG16 (Simonyan & Zisserman, 2014). As demonstrated in Li et al. (2016), sparsifying the filter weights has little effect on reducing the computation cost, whereas reducing the volume of the filters can greatly increase computing efficiency. For example, a filter of volume $3 \times 3 \times 4$ consists of four $3 \times 3$ slices. PCII can be generalized to infer the importance of these slices, each of which can be treated as the connection between one channel in the current layer and one channel in the next layer.
+
+
+Figure 2: Flowchart of probabilistic connection inference for convolutional layers. Without loss of generality, we use the same notation $\alpha$ and $\beta$ to denote two channels of a convolutional neural network. The connection between $\alpha$ and $\beta$ corresponds to the convolutional operator $h_{t,t+1}^{1,m_{t+1}}$ (shown as one slice with the red border). The importance of this connection is inferred by testing whether the three-way interaction $\eta_{\alpha,\beta,y}$ is zero or not.
+
+Convolutional filters transform channels in the $t$-th layer into channels in the $(t + 1)$-th layer. We denote the $j$-th channel in the $t$-th layer as $\mathbf{X}_{t,j}$ for $j = 1, \dots, m_t$, where $m_t$ is the number of channels in the $t$-th layer and $t = 0, \dots, T$. Each filter connects all channels in the $t$-th layer to one channel in the $(t + 1)$-th layer. Let $h_{t,t + 1}^{i,j}$ denote the convolution operator that connects the two channels $\mathbf{X}_{t,i}$ and $\mathbf{X}_{t + 1,j}$. Then, the filter connected to $\mathbf{X}_{t + 1,j}$ is $\{h_{t,t + 1}^{i,j}\}_{i = 1}^{m_t}$, where we call each $h_{t,t + 1}^{i,j}$ a slice of the filter. For example, when we choose a $3 \times 3 \times 4$ filter and set the stride to one, this operator slides the filter across the width and height of the input $\{\mathbf{X}_{t,i}\}_{i = 1}^{m_t}$ and offsets the result with the bias. For one channel in the $(t + 1)$-th layer, we can write its connection with the channels in the previous layer as
+
+$$
+\boldsymbol {X} _ {t + 1, j} = g _ {t + 1} \left(\sum_ {i = 1} ^ {m _ {t}} h _ {t, t + 1} ^ {i, j} \left(\boldsymbol {X} _ {t, i}\right) + b _ {t, t + 1} ^ {j}\right)
+$$
+
+where $g_{t+1}$ is an activation function and $b_{t,t+1}^j$ is the bias for the $j$-th filter. As illustrated in Figure 2, the red arrow from $\mathbf{X}_{t,1}$ to $\mathbf{X}_{t+1,m_{t+1}}$ represents $h_{t,t+1}^{1,m_{t+1}}$, which is one slice of the filter $\{h_{t,t+1}^{i,m_{t+1}}\}_{i=1}^{m_t}$ connecting the channels in the $t$-th layer to the channel $\mathbf{X}_{t+1,m_{t+1}}$ in the $(t+1)$-th layer. For simplicity, we denote by $\alpha$ one channel in the current layer and by $\beta$ one channel in the next layer. Since the relationship among the triplet $(\alpha, \beta, Y)$ is highly nonlinear, we model its log-transformed joint density as a nonparametric function $\eta(\alpha, \beta, y)$. As for the fully connected layers, we assume that $\eta(\alpha, \beta, y)$ admits the decomposition in (1). As shown in Figure 2, the connection between $\alpha$ and $\beta$ is decomposed into two parts: one unrelated to the output $Y$, $\eta_{\alpha,\beta}$, and one related to the output $Y$, $\eta_{\alpha,\beta,y}$. We aim to select connections that contribute more to $Y$, which translates mathematically into assessing the significance of the three-way interaction term $\eta_{\alpha,\beta,y}$ against the null hypothesis $\eta_{\alpha,\beta,y} = 0$.
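
The per-channel composition above can be sketched in plain NumPy; `conv2d_valid` and `next_channel` are hypothetical helper names, and the ReLU activation and "valid" padding are illustrative choices, not prescribed by the paper.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid' 2-D correlation of one channel x with one 2-D slice k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(x[r:r + kh, c:c + kw] * k)
    return out

def next_channel(X_t, filt, bias, g=lambda z: np.maximum(z, 0.0)):
    """X_{t+1,j} = g(sum_i h^{i,j}(X_{t,i}) + b): sum the per-slice
    convolutions over input channels, add the bias, then activate."""
    return g(sum(conv2d_valid(X_t[i], filt[i]) for i in range(len(filt))) + bias)

rng = np.random.default_rng(0)
X_t = rng.normal(size=(4, 8, 8))    # m_t = 4 input channels of size 8x8
filt = rng.normal(size=(4, 3, 3))   # one filter = four 3x3 slices
out = next_channel(X_t, filt, bias=0.1)
print(out.shape)
```

Each slice `filt[i]` plays the role of $h_{t,t+1}^{i,j}$; zeroing one slice removes exactly one channel-to-channel connection.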
+
+# 3.3 ALGORITHM
+
+In real applications, both fully connected layers and convolutional layers are used in complex neural network architectures. We integrate the PCII procedures described in the previous two subsections to simultaneously infer the connections in both fully connected and convolutional layers, as summarized in Algorithm 1. We use $p_{ij}^{(t)}$ to denote the $p$-value for testing the connection between the $i$-th node in the $t$-th layer and the $j$-th node in the $(t+1)$-th layer for fully connected layers. For convolutional layers, $p_{ij}^{(t)}$ denotes the $p$-value for inferring the importance of the filter slice connecting the $i$-th channel in the $t$-th layer and the $j$-th channel in the $(t+1)$-th layer. The calculation of the $p$-values is given in Section 4 based on our proposed score test.
+
+# Algorithm 1 PCII: Probabilistic Connection Importance Inference for Lossless Neural Network Compression
+
+Input: A training dataset, a DNN architecture, and the desired model compression rate
+
+Step 1: Use the training dataset to train a DNN of the given architecture.
+
+Step 2: (a). Importance inference of connections in fully connected layers: infer the three-way dependence effect of a network connection by testing the hypothesis $H_0: \eta_{\alpha, \beta, y} = 0$ vs. $H_1: \eta_{\alpha, \beta, y} \neq 0$, and calculate the test statistics (or $p$-values) for all network connections as $\pmb{p}^f = \{p_{i,j}^{t,t+1}, i = 1, \dots, m_t, j = 1, \dots, m_{t+1} \mid t \text{ and } t+1 \text{ layers are fully connected}\}$.
+
+(b). Importance inference of connections in convolutional layers: infer the three-way dependence effect of a network connection by testing the hypothesis $H_0 : \eta_{\alpha, \beta, y} = 0$ vs. $H_1 : \eta_{\alpha, \beta, y} \neq 0$, and calculate the test statistics (or $p$-values) for all network connections as $\pmb{p}^c = \{p_{i,j}^{t,t+1}, i = 1, \dots, m_t, j = 1, \dots, m_{t+1} \mid t \text{ and } t + 1 \text{ layers are convolutionally connected}\}$.
+
+Step 3: Rank all network connections by their test statistics (or $p$ -values), and select a threshold $\rho^f$ for deleting network connections in fully connected layers and $\rho^c$ in convolutional layers to achieve the desired model compression rate (Strategies for choosing $\rho^f$ and $\rho^c$ are given in Section 5).
+
+Step 4: Set the same initial value for non-zero weights and filters. Retrain the sparsified DNN.
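
A minimal sketch of the thresholding in Steps 3-4, assuming the $p$-values have already been computed; `prune_mask` and `keep_fraction` are hypothetical names, and ties among $p$-values are ignored.

```python
import numpy as np

def prune_mask(p_values, keep_fraction):
    """Keep the connections with the smallest p-values (most significant
    three-way interaction) and zero out the rest."""
    flat = p_values.ravel()
    k = max(1, int(round(keep_fraction * flat.size)))
    thresh = np.partition(flat, k - 1)[k - 1]   # p-value threshold rho
    return (p_values <= thresh).astype(float)

rng = np.random.default_rng(1)
p = rng.uniform(size=(300, 100))                # p-values for a 300x100 FC layer
mask = prune_mask(p, keep_fraction=0.1)         # roughly 10x compression
print(mask.mean())
```

The retained weights would then be re-initialized and the sparsified network retrained, as in Step 4.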
+
+# 4 SCORE TEST AND THEORETICAL PROPERTIES
+
+# 4.1 BACKGROUND ON TENSOR PRODUCT RKHS
+
+Consider two random variables $\alpha$ and $\beta$ for fully connected layers, or two random vectors $\alpha$ and $\beta$ for convolutional layers. Let $Y$ be a random variable representing the final output. The domains of $\alpha$, $\beta$, and $Y$ are $\mathcal{A}$, $\mathcal{B}$, and $\mathcal{Y}$, respectively. Here, we assume that the log-transformed joint density function $\eta$ belongs to a tensor product RKHS $\mathcal{H} = \mathcal{H}^{\langle \alpha \rangle} \otimes \mathcal{H}^{\langle \beta \rangle} \otimes \mathcal{H}^{\langle Y \rangle}$, where $\otimes$ denotes the tensor product of two linear spaces.
+
+For each marginal RKHS $\mathcal{H}^{\langle l\rangle}$ with inner product $\langle \cdot ,\cdot \rangle_{\mathcal{H}^{\langle l\rangle}}$, $l = \alpha, \beta, Y$, there exists a symmetric and square integrable function $K_{l}$ such that
+
+$$
+\langle f, K_{l}(x, \cdot) \rangle_{\mathcal{H}^{\langle l \rangle}} = f(x), \quad \text{for all } f \in \mathcal{H}^{\langle l \rangle} \tag{2}
+$$
+
+where $K_{l}$ is called the reproducing kernel of $\mathcal{H}^{\langle l \rangle}$ for $l = \alpha, \beta, Y$. By Mercer's theorem, any continuous kernel admits the decomposition
+
+$$
+K (x, y) = \sum_ {\nu = 0} ^ {\infty} \mu_ {\nu} \phi_ {\nu} (x) \phi_ {\nu} (y) \tag {3}
+$$
+
+where the $\mu_{\nu}$'s are non-negative eigenvalues in descending order and the $\phi_{\nu}$'s are eigenfunctions. For the discrete domain $\{1, \ldots, a\}$, we define the kernel as $K(x, y) = \mathbb{1}\{x = y\}$ for $x, y \in \{1, \ldots, a\}$. For a continuous domain, there are different choices of kernels, such as Gaussian kernels and Sobolev kernels. Note that the eigenvalues of different kernels on a continuous domain have different decay rates. For example, the eigenvalues of the Gaussian kernel decay exponentially, i.e., there exists some $c > 0$ such that $\mu_{\nu} \asymp \exp(-c\nu)$ (Sollich & Williams, 2004), while the eigenvalues of the $m$-th order Sobolev kernel decay polynomially, i.e., $\mu_{\nu} \asymp \nu^{-2m}$ (Gu, 2013).
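
The decay can also be observed numerically: the spectrum of a Gaussian-kernel Gram matrix built from random one-dimensional data drops off rapidly, consistent with the exponential rate quoted above. This is an illustrative sketch, not part of the PCII procedure; the bandwidth and sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
K = np.exp(-((x - x.T) ** 2) / 2.0)        # Gaussian kernel, unit bandwidth
mu = np.linalg.eigvalsh(K)[::-1] / 200     # descending, normalized by n
print(mu[:5])
```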
+
+# 4.2 PROBABILISTIC DECOMPOSITION OF TENSOR PRODUCT RKHS
+
+Next, we propose the probabilistic tensor sum decomposition for each marginal RKHS $\mathcal{H}^{\langle l\rangle}$, $l = \alpha, \beta, Y$. We first use Euclidean space as a simple example to illustrate the basic idea of the tensor sum decomposition. Recall that the tensor sum decomposition is often called the ANOVA decomposition in linear models. For the $d$-dimensional Euclidean space, we regard $f$ as a vector and let $f(x)$ be the $x$-th entry of the vector for $x = 1, \ldots, d$. We define an averaging operator $\mathcal{A}$ by $\mathcal{A}f = \left\langle \frac{1}{d}\mathbf{1}, f\right\rangle$. The tensor sum decomposition of the Euclidean space $\mathbb{R}^d$ is
+
+$$
+\mathbb{R}^{d} = \mathbb{R}_{0}^{d} \oplus \mathbb{R}_{1}^{d} := \operatorname{span}\left\{\frac{1}{d}\mathbf{1}\right\} \oplus \left\{f \in \mathbb{R}^{d} \,\Big|\, \sum_{i=1}^{d} f(i) = 0\right\} \tag{4}
+$$
+
+where the first space is called the grand mean and the second space is called the main effect. Then, we construct the kernel for $\mathbb{R}_0^d$ and $\mathbb{R}_1^d$ in the following lemma.
+
+Lemma 1. For an RKHS $\mathcal{H}$, there corresponds a unique reproducing kernel $K$, which is non-negative definite. Based on the tensor sum decomposition $\mathcal{H} = \mathcal{H}_0 \oplus \mathcal{H}_1$, where $\mathcal{H}_0 = \operatorname{span}\{\frac{1}{d}\mathbf{1}\}$ and $\mathcal{H}_1 = \{g \in \mathcal{H} : E_X(g(X)) = 0\}$, we have the kernel for $\mathcal{H}_0$ as
+
+$$
+k _ {0} (x, y) = 1 / d \tag {5}
+$$
+
+and kernel for $\mathcal{H}_1$ as
+
+$$
+k _ {1} (x, y) = \mathbb {1} _ {\{x = y \}} - 1 / d
+$$
+
+where $\mathbb{1}$ denotes the indicator function.
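
Lemma 1 is easy to verify numerically on a discrete domain: with $k(x, y) = \mathbb{1}\{x = y\}$, the two sub-kernels add back to $k$, and $k_1$ sums to zero in each argument. A minimal check:

```python
import numpy as np

d = 5
I = np.eye(d)                  # k(x, y) = 1{x = y} on {1, ..., d}
K0 = np.full((d, d), 1.0 / d)  # grand-mean kernel (5)
K1 = I - 1.0 / d               # main-effect kernel
print(np.allclose(K0 + K1, I), np.allclose(K1.sum(axis=0), 0.0))
```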
+
+However, in an infinite-dimensional RKHS, the grand mean is not spanned by a single fixed vector. Here, we define the averaging operator $\mathcal{A}$ by $\mathcal{A}f := E_x f(x) = E_x \langle k_x, f \rangle_{\mathcal{H}} = \langle E_x k_x, f \rangle_{\mathcal{H}}$, where $k$ is the kernel function of $\mathcal{H}$ and the first equality is due to the reproducing property. $E_x k_x$ plays the same role as $\frac{1}{d} \mathbf{1}$ in Euclidean space. Then, we have the tensor sum decomposition of the functional space defined as
+
+$$
+\mathcal{H} = \mathcal{H}_{0} \oplus \mathcal{H}_{1} := \operatorname{span}\left\{E_{x} k_{x}\right\} \oplus \left\{f \in \mathcal{H} : \mathcal{A} f = 0 \right\}. \tag{6}
+$$
+
+In the same fashion, we call $\mathcal{H}_0$ the grand mean space and $\mathcal{H}_1$ the main effect space. Notice that $E_{x}k_{x}$ is also known as the kernel mean embedding, which is well established in the statistics literature (Berlinet & Thomas-Agnan, 2011). We then introduce the following lemma to construct the kernel functions for $\mathcal{H}_0$ and $\mathcal{H}_1$.
+
+Lemma 2. For an RKHS $\mathcal{H}$, there corresponds a unique reproducing kernel $K$, which is non-negative definite. Based on the tensor sum decomposition $\mathcal{H} = \mathcal{H}_0 \oplus \mathcal{H}_1$, where $\mathcal{H}_0 = \operatorname{span}\{E_x k_x\}$ and $\mathcal{H}_1 = \{g \in \mathcal{H} : E_x(g(x)) = 0\}$, we have the kernel for $\mathcal{H}_0$ as
+
+$$
+k _ {0} (x, y) = E _ {x} [ k (x, y) ] + E _ {y} [ k (x, y) ] - E _ {x, y} k (x, y) \tag {7}
+$$
+
+and the kernel for $\mathcal{H}_1$ as
+
+$$
+\begin{aligned}
+k_{1}(x, y) &= \langle k_{x} - E k_{x}, k_{y} - E k_{y} \rangle_{\mathcal{H}} \\
+&= k(x, y) - E_{x}[k(x, y)] - E_{y}[k(x, y)] + E_{x, y} k(x, y).
+\end{aligned}
+$$
+
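With expectations replaced by sample averages (as done explicitly in Section 4.4), the main-effect kernel of Lemma 2 becomes the doubly centered Gram matrix $HKH$. A quick numerical check of this identity, using a Gaussian kernel purely for concreteness:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 1))
K = np.exp(-((x - x.T) ** 2))        # any positive-definite kernel works here

n = len(x)
H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
K1 = H @ K @ H                       # empirical version of Lemma 2's k_1
# explicit form: k - row mean - column mean + grand mean
K1_explicit = K - K.mean(0, keepdims=True) - K.mean(1, keepdims=True) + K.mean()
print(np.allclose(K1, K1_explicit))
```
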
+In neural networks, $\mathcal{A}$ and $\mathcal{B}$ are continuous domains. The domain $\mathcal{Y}$ of the final output can be either continuous or discrete, depending on the task. Here, we construct the tensor sum decompositions for the discrete domain and the continuous domain based on Lemma 1 and Lemma 2, respectively. Specifically, the spaces $\mathcal{H}^{\langle \alpha \rangle}$, $\mathcal{H}^{\langle \beta \rangle}$ and $\mathcal{H}^{\langle Y\rangle}$ are decomposed as tensor sums of subspaces: $\mathcal{H}^{\langle \alpha \rangle} = \mathcal{H}_0^{\langle \alpha \rangle} \oplus \mathcal{H}_1^{\langle \alpha \rangle}$, $\mathcal{H}^{\langle \beta \rangle} = \mathcal{H}_0^{\langle \beta \rangle} \oplus \mathcal{H}_1^{\langle \beta \rangle}$, and $\mathcal{H}^{\langle Y\rangle} = \mathcal{H}_0^{\langle Y\rangle} \oplus \mathcal{H}_1^{\langle Y\rangle}$. Following Gu (2013), we apply the distributive law and obtain the decomposition of $\mathcal{H}$ as
+
+$$
+\begin{array}{l} \mathcal {H} = \left(\mathcal {H} _ {0} ^ {\langle \alpha \rangle} \oplus \mathcal {H} _ {1} ^ {\langle \alpha \rangle}\right) \otimes \left(\mathcal {H} _ {0} ^ {\langle \beta \rangle} \oplus \mathcal {H} _ {1} ^ {\langle \beta \rangle}\right) \otimes \left(\mathcal {H} _ {0} ^ {\langle Y \rangle} \oplus \mathcal {H} _ {1} ^ {\langle Y \rangle}\right) \\ \equiv \mathcal {H} _ {0 0 0} \oplus \mathcal {H} _ {1 0 0} \oplus \mathcal {H} _ {0 1 0} \oplus \mathcal {H} _ {0 0 1} \oplus \mathcal {H} _ {1 1 0} \oplus \mathcal {H} _ {1 0 1} \oplus \mathcal {H} _ {0 1 1} \oplus \mathcal {H} _ {1 1 1}. \tag {8} \\ \end{array}
+$$
+
+where $\mathcal{H}_{ijk} = \mathcal{H}_i^{\langle \alpha \rangle} \otimes \mathcal{H}_j^{\langle \beta \rangle} \otimes \mathcal{H}_k^{\langle Y\rangle}$.
+
+Lemma 3. Suppose $K^{\langle 1\rangle}$ is the reproducing kernel of $\mathcal{H}^{\langle 1\rangle}$ on $\mathcal{X}_{1}$, and $K^{\langle 2\rangle}$ is the reproducing kernel of $\mathcal{H}^{\langle 2\rangle}$ on $\mathcal{X}_{2}$. Then the reproducing kernel of $\mathcal{H}^{\langle 1\rangle} \otimes \mathcal{H}^{\langle 2\rangle}$ on $\mathcal{X} = \mathcal{X}_{1} \times \mathcal{X}_{2}$ is $K(\pmb{x},\pmb{y}) = K^{\langle 1\rangle}(x^{\langle 1\rangle},y^{\langle 1\rangle})K^{\langle 2\rangle}(x^{\langle 2\rangle},y^{\langle 2\rangle})$ with $\pmb{x} = (x^{\langle 1\rangle},x^{\langle 2\rangle})$ and $\pmb{y} = (y^{\langle 1\rangle},y^{\langle 2\rangle})$.
+
+Lemma 3 can be easily proved based on Theorem 2.6 in Gu (2013). Lemma 3 implies that the reproducing kernel of a tensor product space is the product of the reproducing kernels of its factors. Based on these three lemmas, we can construct the kernel for each subspace defined in (8).
+
+# 4.3 SCORE TEST
+
+Based on (8), the log-transformed density function $\eta \in \mathcal{H}$ can be correspondingly decomposed as in (1). Thus, $\eta_{\alpha,\beta,y} = 0$ if and only if $\eta^{*} \in \mathcal{H}_{0} := \mathcal{H}_{000} \oplus \mathcal{H}_{100} \oplus \mathcal{H}_{010} \oplus \mathcal{H}_{001} \oplus \mathcal{H}_{110} \oplus \mathcal{H}_{101} \oplus \mathcal{H}_{011}$, where $\eta^{*}$ is the underlying truth. Hence, we focus on the following hypothesis testing problem:
+
+$$
+H_{0}: \eta^{*} \in \mathcal{H}_{0} \quad \text{vs.} \quad H_{1}: \eta^{*} \in \mathcal{H} \backslash \mathcal{H}_{0}, \tag{9}
+$$
+
+where $\mathcal{H} \backslash \mathcal{H}_0$ denotes the set difference of $\mathcal{H}$ and $\mathcal{H}_0$. We next propose a likelihood-ratio-based procedure to test (9). Suppose that $\pmb{t} = (\alpha, \beta, y)$ and that $\pmb{t}_i = (\alpha_i, \beta_i, y_i)$, $i = 1, \dots, n$, are i.i.d. observations on $\mathcal{A} \times \mathcal{B} \times \mathcal{Y}$. Let $LR_n$ be the likelihood ratio functional defined as
+
+$$
+L R _ {n} (\eta) = \ell_ {n} (\eta) - \ell_ {n} \left(P _ {\mathcal {H} _ {0}} \eta\right) = - \frac {1}{n} \sum_ {i = 1} ^ {n} \left\{\eta \left(\boldsymbol {t} _ {i}\right) - P _ {\mathcal {H} _ {0}} \eta \left(\boldsymbol {t} _ {i}\right) \right\}, \eta \in \mathcal {H}, \tag {10}
+$$
+
+where $P_{\mathcal{H}_0}$ is a projection operator from $\mathcal{H}$ to $\mathcal{H}_0$ . Using the reproducing property, we rewrite (10) as
+
+$$
+L R _ {n} (\eta) = - \frac {1}{n} \sum_ {i = 1} ^ {n} \left\{\left\langle K _ {\boldsymbol {t} _ {i}}, \eta \right\rangle_ {\mathcal {H}} - \left\langle K _ {\boldsymbol {t} _ {i}} ^ {0}, \eta \right\rangle_ {\mathcal {H}} \right\} \tag {11}
+$$
+
+where $K$ is the kernel for $\mathcal{H}$ and $K^0$ is the kernel for $\mathcal{H}_0$ .
+
+Then we calculate the Fréchet derivative of the likelihood ratio functional as
+
+$$
+D L R_{n}(\eta) \Delta \eta = \left\langle \frac{1}{n} \sum_{i=1}^{n} \left(K_{\boldsymbol{t}_{i}} - K_{\boldsymbol{t}_{i}}^{0}\right), \Delta \eta \right\rangle_{\mathcal{H}} = \left\langle \frac{1}{n} \sum_{i=1}^{n} K_{\boldsymbol{t}_{i}}^{1}, \Delta \eta \right\rangle_{\mathcal{H}} \tag{12}
+$$
+
+where $K^1$ is the kernel for $\mathcal{H}_{111}$. We define our test statistic as the squared norm of the score function of the likelihood ratio functional:
+
+$$
+S _ {n} ^ {2} = \left\| \frac {1}{n} \sum_ {i = 1} ^ {n} K _ {\boldsymbol {t} _ {i}} ^ {1} \right\| _ {\mathcal {H}} ^ {2} \tag {13}
+$$
+
+By the reproducing property, we can expand the squared norm in (13) as
+
+$$
+S _ {n} ^ {2} = \frac {1}{n ^ {2}} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} K ^ {1} \left(\boldsymbol {t} _ {i}, \boldsymbol {t} _ {j}\right) \tag {14}
+$$
+
+We observe an interesting phenomenon: $S_{n}^{2}$ has an expression similar to the MMD (Gretton et al., 2012) when $Y$ is a binary variable. When $Y \in \{0,1\}$, the kernel on $\mathcal{H}_1^{\langle Y\rangle}$ is $K_1^{\langle Y\rangle}(y_i,y_j) = \mathbb{1}\{y_i = y_j\} - 1/2$. Assume that the kernel for $\mathcal{H}_1^{\langle \alpha \rangle}\otimes \mathcal{H}_1^{\langle \beta \rangle}$ is $K_{11}^{\langle \alpha ,\beta \rangle}(\pmb{x}, \pmb{x}')$ for $\pmb{x}, \pmb{x}' \in \mathcal{A}\times \mathcal{B}$. By Lemma 3, we have $K^{1}(\pmb{t}_{i},\pmb{t}_{j}) = K_{1}^{\langle Y\rangle}(y_{i},y_{j})K_{11}^{\langle \alpha ,\beta \rangle}(\pmb{x}_{i},\pmb{x}_{j})$. Then $8S_{n}^{2}$ can be written as
+
+$$
+8 S _ {n} ^ {2} = \frac {4}{n ^ {2}} \left(\sum_ {\{i, j \mid y _ {i} = y _ {j} = 0 \}} K _ {1 1} ^ {\langle \alpha , \beta \rangle} \left(\boldsymbol {x} _ {i}, \boldsymbol {x} _ {j}\right) - 2 \sum_ {\{i, j \mid y _ {i} \neq y _ {j} \}} K _ {1 1} ^ {\langle \alpha , \beta \rangle} \left(\boldsymbol {x} _ {i}, \boldsymbol {x} _ {j}\right) + \sum_ {\{i, j \mid y _ {i} = y _ {j} = 1 \}} K _ {1 1} ^ {\langle \alpha , \beta \rangle} \left(\boldsymbol {x} _ {i}, \boldsymbol {x} _ {j}\right)\right). \tag {15}
+$$
+
+If we replace $K_{11}^{\langle \alpha, \beta \rangle}$ with $K^{\langle \alpha, \beta \rangle}$, i.e., the kernel on $\mathcal{H}^{\langle \alpha \rangle} \otimes \mathcal{H}^{\langle \beta \rangle}$, the right-hand side of (15) becomes the MMD measuring the distance between the joint distribution of $(\alpha, \beta)$ in the group with $y = 0$ and that in the group with $y = 1$:
+
+$$
+\mathrm {M M D} ^ {2} [ (\alpha , \beta), Y ] = \frac {4}{n ^ {2}} \left(\sum_ {\{i, j \mid y _ {i} = y _ {j} = 0 \}} K ^ {\langle \alpha , \beta \rangle} \left(\boldsymbol {x} _ {i}, \boldsymbol {x} _ {j}\right) - 2 \sum_ {\{i, j \mid y _ {i} \neq y _ {j} \}} K ^ {\langle \alpha , \beta \rangle} \left(\boldsymbol {x} _ {i}, \boldsymbol {x} _ {j}\right) + \sum_ {\{i, j \mid y _ {i} = y _ {j} = 1 \}} K ^ {\langle \alpha , \beta \rangle} \left(\boldsymbol {x} _ {i}, \boldsymbol {x} _ {j}\right)\right) \tag {16}
+$$
+
+Since we want to infer the importance of the connection between $\alpha$ and $\beta$, we are only interested in whether the interaction effect between $\alpha$ and $\beta$ changes between the two groups. The main effects of $\alpha$ and $\beta$ only contribute to the importance of the nodes or slices and are not relevant to the connection between $\alpha$ and $\beta$.
+
+# 4.4 CALCULATION OF THE TEST STATISTICS
+
+We introduce a matrix form of the squared norm of the score function to streamline the computation. In (14), $S_{n}^{2}$ is determined by the kernel on $\mathcal{H}_1^{\langle \alpha \rangle} \otimes \mathcal{H}_1^{\langle \beta \rangle} \otimes \mathcal{H}_1^{\langle Y \rangle}$. By Lemma 1 and Lemma 2, we replace the expectations by sample averages and obtain the kernel function for $\mathcal{H}_1^{\langle l \rangle}$ as
+
+$$
+k _ {1} ^ {l} (x, y) = k ^ {l} (x, y) - \frac {1}{n} \sum_ {i = 1} ^ {n} k ^ {l} (x _ {i}, y) - \frac {1}{n} \sum_ {i = 1} ^ {n} k ^ {l} (x, y _ {i}) + \frac {1}{n ^ {2}} \sum_ {i = 1} ^ {n} \sum_ {j = 1} ^ {n} k ^ {l} (x _ {i}, y _ {j})
+$$
+
+where $k^l$ is the kernel function for $\mathcal{H}^{\langle l\rangle}$, $l = \alpha, \beta, Y$. Popular choices of kernel functions include the Gaussian kernel, the Laplace kernel, and the polynomial kernel. Let $K^l$ be the corresponding empirical kernel matrix. We can then rewrite (14) as
+
+$$
+S _ {n} ^ {2} = \frac {1}{n ^ {2}} \left[ \left(H K ^ {\alpha} H\right) \circ \left(H K ^ {\beta} H\right) \circ \left(H K ^ {Y} H\right) \right] _ {+ +} \tag {17}
+$$
+
+where $H = I_n - \frac{1}{n} \mathbf{1}\mathbf{1}^T$, $I_n$ is the $n \times n$ identity matrix, $\mathbf{1}$ is the $n \times 1$ vector of ones, $\circ$ denotes the entrywise (Hadamard) product, and $[A]_{++} = \sum_{i=1}^{n} \sum_{j=1}^{n} A_{ij}$. This test statistic is also related to the three-way Lancaster interaction measure (Lancaster, 1969).
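
Equation (17) translates directly into a few lines of NumPy. This is a sketch under illustrative assumptions: `score_stat` is a hypothetical helper name, and the Gaussian kernels for $\alpha$, $\beta$ and the discrete kernel for a binary $Y$ are example choices.

```python
import numpy as np

def score_stat(Ka, Kb, Ky):
    """S_n^2 from (17): doubly center each Gram matrix, take the entrywise
    (Hadamard) product, and sum all entries divided by n^2."""
    n = Ka.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    M = (H @ Ka @ H) * (H @ Kb @ H) * (H @ Ky @ H)
    return M.sum() / n ** 2

rng = np.random.default_rng(0)
n = 100
a = rng.normal(size=(n, 1))
b = rng.normal(size=(n, 1))
y = (rng.uniform(size=n) < 0.5).astype(float)
gauss = lambda z: np.exp(-((z - z.T) ** 2))
Ky = (y[:, None] == y[None, :]).astype(float)   # discrete kernel 1{y_i = y_j}
s2 = score_stat(gauss(a), gauss(b), Ky)
print(s2)
```

By the Schur product theorem, the Hadamard product of the three centered positive semi-definite matrices is itself positive semi-definite, so the statistic is always non-negative.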
+
+Since the computation of (17) only involves matrix multiplications, the procedure can be executed in parallel on a GPU. In our experiments, the highly parallelized matrix operations tremendously reduced the computing time, essentially from quadratic to almost linear in practice. In addition, we can reduce the sample size by sub-sampling $r \ll n$ data points. Sub-sampling is a popular technique to reduce the computational burden and can efficiently approximate the full-data likelihood in a regression setting (Ma et al., 2015; Drineas et al., 2011). However, for very large datasets, sub-sampling alone is not sufficient. We therefore consider a mini-batch algorithm that calculates the score in each batch and uses the averaged score as the final test statistic. This is also related to the divide-and-conquer method widely used in kernel-based learning (Zhang et al., 2013; Shang & Cheng, 2017; Liu et al., 2018).
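
The mini-batch variant can be sketched as follows: compute the (17)-style statistic on disjoint batches and average the results. The function name, kernels, and batch size below are illustrative assumptions.

```python
import numpy as np

def batch_score(a, b, y, batch, rng):
    """Average the (17)-style statistic over disjoint mini-batches instead of
    forming one n x n Gram matrix; cost drops from O(n^2) to O(n * batch)."""
    idx = rng.permutation(len(a))
    g = lambda z: np.exp(-((z[:, None] - z[None, :]) ** 2))
    H = np.eye(batch) - np.ones((batch, batch)) / batch
    stats = []
    for s in range(0, len(a) - batch + 1, batch):
        sel = idx[s:s + batch]
        Ky = (y[sel][:, None] == y[sel][None, :]).astype(float)
        M = (H @ g(a[sel]) @ H) * (H @ g(b[sel]) @ H) * (H @ Ky @ H)
        stats.append(M.sum() / batch ** 2)
    return float(np.mean(stats))

rng = np.random.default_rng(0)
n = 1024
a, b = rng.normal(size=n), rng.normal(size=n)
y = (rng.uniform(size=n) < 0.5).astype(float)
stat = batch_score(a, b, y, batch=128, rng=rng)
print(stat)
```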
+
+# 4.5 ASYMPTOTIC DISTRIBUTION
+
+The asymptotic distribution of $S_{n}^{2}$ depends on the decay rate of the eigenvalues of the kernel of the tensor product RKHS. Suppose that $\{\mu_{\nu}^{\langle \alpha \rangle},\phi_{\nu}^{\langle \alpha \rangle}\}_{\nu = 1}^{\infty}$ is the sequence of eigenvalue-eigenfunction pairs for $\mathcal{H}_1^{\langle \alpha \rangle}$ and $\{\mu_{\nu}^{\langle \beta \rangle},\phi_{\nu}^{\langle \beta \rangle}\}_{\nu = 1}^{\infty}$ is the corresponding sequence for $\mathcal{H}_1^{\langle \beta \rangle}$. If $Y$ is continuous, we suppose that $\mathcal{H}_1^{\langle Y\rangle}$ has the eigensystem $\{\mu_{\nu}^{\langle Y\rangle},\phi_{\nu}^{\langle Y \rangle}\}_{\nu = 1}^{\infty}$; if $Y$ is categorical with $a$ categories, we suppose that $\mathcal{H}_1^{\langle Y\rangle}$ has the eigensystem $\{\mu_{\nu}^{\langle Y\rangle},\phi_{\nu}^{\langle Y \rangle}\}_{\nu = 1}^{a - 1}$. Then the eigenvalue-eigenfunction pairs for the tensor product space $\mathcal{H}_1^{\langle \alpha \rangle}\otimes \mathcal{H}_1^{\langle \beta \rangle}\otimes \mathcal{H}_1^{\langle Y\rangle}$ are
+
+$$
+\{\mu_ {\nu_ {\alpha}} ^ {\langle \alpha \rangle} \mu_ {\nu_ {\beta}} ^ {\langle \beta \rangle} \mu_ {\nu_ {Y}} ^ {\langle Y \rangle}, \phi_ {\nu_ {\alpha}} ^ {\langle \alpha \rangle} \phi_ {\nu_ {\beta}} ^ {\langle \beta \rangle} \phi_ {\nu_ {Y}} ^ {\langle Y \rangle} \}
+$$
+
+where $\nu_{\alpha} = 1,\dots ,\infty$, $\nu_{\beta} = 1,\dots ,\infty$, and $\nu_{Y} = 1,\dots ,\infty$ if $Y$ is continuous, or $\nu_{Y} = 1,\ldots ,a - 1$ if $Y$ is categorical with $a$ categories. For simplicity, we order the pairs in decreasing order of $\mu_{\rho}$, $\rho = 1,\ldots ,\infty$. The null hypothesis can be interpreted as a factorization hypothesis: it holds if $(\alpha,\beta)\perp Y$, $(\alpha,Y)\perp \beta$, $(\beta,Y)\perp \alpha$, or $\alpha$, $\beta$, $Y$ are mutually independent.
+
+Theorem 1. Suppose the kernel on $\mathcal{H}_{111}$ is square integrable. If $Y$ is a continuous variable, then under $H_0$ we have
+
+$$
+n S _ {n} ^ {2} \xrightarrow {d} \sum_ {\rho = 1} ^ {\infty} \mu_ {\rho} \epsilon_ {\rho} ^ {2} \tag {18}
+$$
+
+where $\epsilon_{\rho}$ are i.i.d. standard Gaussian random variables.
+
+The proof of this theorem is given in Supplementary Materials A.1. The asymptotic distribution of $nS_{n}^{2}$ depends only on the eigenvalues of the kernel. Theorem 1 is related to the Wilks phenomenon demonstrated in the classic nonparametric/semiparametric regression framework (Fan et al., 2001; Shang & Cheng, 2013), i.e., the asymptotic distribution is independent of the nuisance parameters. In practice, we fix one kernel for all fully connected layers and one kernel for all convolutional layers. This provides a unified importance measure for all connections in the same type of layer, which avoids the scaling problem faced by pruning methods that use weight magnitudes as an importance measure. In addition, we can use the value of the test statistic itself as an importance measure for pruning and bypass the calculation of the $p$-values, since the two measures induce the same ordering.
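
In practice the limiting law in (18) can be approximated by Monte Carlo from the leading empirical eigenvalues. A sketch under stated assumptions: `null_pvalue` is a hypothetical helper, and the exponentially decaying eigenvalue sequence is purely illustrative.

```python
import numpy as np

def null_pvalue(mu, stat, n_mc=10000, rng=None):
    """Monte-Carlo p-value for n * S_n^2 under (18): draw sum_rho mu_rho * eps_rho^2
    with eps_rho i.i.d. standard normal, truncated to the leading eigenvalues mu."""
    if rng is None:
        rng = np.random.default_rng(0)
    eps2 = rng.standard_normal((n_mc, len(mu))) ** 2
    null = eps2 @ mu                  # n_mc draws from the truncated limit law
    return float(np.mean(null >= stat))

mu = 0.5 ** np.arange(1, 21)          # e.g. exponentially decaying eigenvalues
p = null_pvalue(mu, stat=3.0)
print(p)
```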
+
+# 5 RESULTS
+
+We conducted experiments to evaluate the PCII method on two supervised image classification tasks: MNIST and CIFAR10. We used TensorFlow to train the DNNs. The back-end of PCII was implemented in Fortran and R, with programming interfaces connecting the back-end calculations to TensorFlow. The experiments were run on a computer with one Nvidia Titan V GPU and 48 CPU cores.
+
+PCII offers a convenient way to adjust the compression rate by changing the $p$-value threshold $\rho$. For the other compression methods in consideration, the compression rate is usually controlled by hyper-parameters that must be tuned via an ad hoc trial-and-error strategy. Two types of comparisons were carried out. In the first, we adjust the compression rate of a method while requiring its compressed DNN to achieve the test accuracy of the original uncompressed DNN; we term the resulting rate the lossless compression rate (LCR). Since it is very time-consuming to match a test accuracy exactly, we allow a 0.01% deviation from the desired test error rate. In the second, we compared the minimum test error (MTE) of the compressed DNNs produced by different model compression methods. The MTE shows how a compression method can help increase the generalizability and robustness of a DNN. We only included tested methods for which we could obtain working code that reproduces the results reported in their original papers. In addition, we did not include methods that are unable to achieve the LCR on the corresponding test dataset.
+
+# 5.1 MNIST DATASET
+
+We tried two network architectures for MNIST (60k training images and 10k test images): a multilayer perceptron (MLP) with two hidden layers of sizes 300 and 100 (LeNet-300-100), and the convolutional neural network LeNet-5 (LeCun et al., 1998). We trained LeNet-300-100 for 100 epochs to achieve a test error of $1.69\%$. We compared PCII with two regularization-based methods in Louizos et al. (2017), as well as several pruning-based methods in Han et al. (2015) and Guo et al. (2016). The results are summarized in Table 1. PCII achieved the lowest MTE when the compression rate was 10x. We then further increased the compression rate until the error rate reached $1.70\%$. The results show that PCII not only compressed a medium-sized MLP better than existing methods but also improved the generalizability of an MLP via compression (i.e., the MTE is better than the error at the LCR).
+
+Dataset: MNIST, Network: LeNet-300-100
+
+| Criterion | Method | Error % | Compression Rate |
+| --- | --- | --- | --- |
+| — | Original | 1.69% | 1x |
+| LCR | PCII | 1.70% | 26x |
+| LCR | Han et al. (2015) | 1.69% | 15.1x |
+| LCR | Louizos et al. (2017) | 1.70% | 3.3x |
+| LCR | Guo et al. (2016) | 1.70% | 15x |
+| MTE | PCII | 1.58% | 10x |
+| MTE | Han et al. (2015) | 1.59% | 12.2x |
+| MTE | Louizos et al. (2017) | 1.70% | 3.3x |
+| MTE | Guo et al. (2016) | 1.70% | 15x |
+
+Table 1: Experimental results for LeNet-300-100 on MNIST dataset.
+
+
+Figure 3: Heatmap showing the PCII test results for the $784 \times 300$ connections between the input layer and the first FC layer in LeNet-300-100. Each pixel represents the $p$-value of the corresponding pair; a brighter color represents a smaller $p$-value. The width and height of the heatmap correspond to the input dimension (784) and the size (300) of the first FC layer.
+
+Interestingly, our inference results can also help interpret the fitted neural network. For example, by inferring the importance of the connections between the input layer and the first hidden layer, we can visualize the importance of the features in the input layer. Figure 3 plots the $p$-values of the associations between the network connections in the first layer and the final output. The heatmap shows a banded structure repeated 28 times, in which the central region tends to have smaller $p$-values. The left and right margins of the heatmap show that the connections on the first and last few input pixels of each band are less relevant to the final output (i.e., have large $p$-values). This is expected because a written digit is usually located in the central part of an image.
+
+Dataset: MNIST, Network: LeNet-5
+
+| Criterion | Method | Error % | Compression Rate |
+| --- | --- | --- | --- |
+| — | Original | 0.70% | 1x |
+| LCR | PCII | 0.69% | 38x |
+| LCR | Han et al. (2015) | 0.71% | 12.1x |
+| LCR | Louizos et al. (2017) | 0.69% | 1.4x |
+| LCR | Guo et al. (2016) | 0.70% | 32x |
+| MTE | PCII | 0.65% | 10.7x |
+| MTE | Han et al. (2015) | 0.70% | 6x |
+| MTE | Louizos et al. (2017) | 0.68% | 1.1x |
+| MTE | Guo et al. (2016) | 0.70% | 32x |
+
+The LeNet-5 model (https://goo.gl/4yI3dL) is a modified version of LeCun et al. (1998). It includes two convolutional layers with 20 and 50 filters, respectively, and a fully connected layer with 500 nodes. The results are summarized in Table 2. PCII again achieved both the lowest MTE and the highest LCR for this model, demonstrating the broad applicability of the PCII strategy across neural network architectures.
+
+# 5.2 CIFAR10 DATASET
+
+To demonstrate the applicability of PCII to complex DNNs, we applied it to VGG16 (Zagoruyko, 2015) adapted for the CIFAR-10 dataset (Krizhevsky & Hinton, 2009). The network consists of 13 convolutional layers and two fully connected layers. The results are summarized in Table 3. The test error gradually decreases as the compression rate increases from 1x to 3x, reaching the MTE of $6.01\%$. When the compression rate is further increased, the test error begins to rise; at a compression rate of 10x, the test error reaches $7.56\%$, which is comparable to the test error of the uncompressed VGG16. For this dataset, we only report results for PCII due to limited computing resources: we could not obtain results for the other methods in comparison within three days of computing time.
+
+Table 2: Experimental results for LeNet-5-caffe on MNIST dataset.
+
+Dataset: CIFAR10, Network: VGG16
+
+| Criterion | Method | Error | Compression Rate |
+| --- | --- | --- | --- |
+| — | Original | 7.55% | 1x |
+| LCR | PCII | 7.56% | 10x |
+| MTE | PCII | 6.01% | 3x |
+
+Table 3: Experimental results for VGG16 on CIFAR10 dataset.
+
+# 6 DISCUSSION
+
+We propose a statistically principled strategy to directly infer the importance of a connection in a neural network via PCII, a hypothesis testing framework. The proposed PCII test provides a $p$-value-based measure of the importance of a connection through the connection's association with the final output. This type of direct quantification cannot be easily accomplished by magnitude-based pruning methods. Although the two examples are relatively small in size, they demonstrate the broad applicability of the PCII method and its improved power in network compression. Last but not least, we note that the PCII test can be easily generalized to a broad class of connection types, including skip-layer connections in RNNs.
+
+# 7 ACKNOWLEDGEMENT
+
+XX and JL would like to acknowledge NSF and NIH for providing partial support of this work. PH and LS would like to acknowledge NSF (NSF OAC 1920147) for providing partial support of this work.
+
+# REFERENCES
+
+Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. Structured pruning of deep convolutional neural networks. ACM Journal on Emerging Technologies in Computing Systems (JETC), 13(3):32, 2017.
+Alain Berlinet and Christine Thomas-Agnan. Reproducing kernel Hilbert spaces in probability and statistics. Springer Science & Business Media, 2011.
+Petros Drineas, Michael W Mahoney, S Muthukrishnan, and Tamás Sarlós. Faster least squares approximation. Numerische mathematik, 117(2):219-249, 2011.
+Jianqing Fan, Chunming Zhang, and Jian Zhang. Generalized likelihood ratio statistics and Wilks phenomenon. Annals of Statistics, 29:153-193, 2001.
+Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(Mar):723-773, 2012.
+Chong Gu. Smoothing spline ANOVA models, volume 297. Springer Science & Business Media, 2013.
+Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient dnns. In Advances In Neural Information Processing Systems, pp. 1379-1387, 2016.
+Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pp. 1135-1143, 2015.
+Babak Hassibi, David G Stork, and Gregory J Wolff. Optimal brain surgeon and general network pruning. In Neural Networks, 1993., IEEE International Conference on, pp. 293-299. IEEE, 1993.
+Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
+H. O. Lancaster. The chi-squared distribution. Wiley, New York, 1969.
+Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in neural information processing systems, pp. 598-605, 1990.
+Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
+Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.
+Meimei Liu, Zuofeng Shang, and Guang Cheng. How many machines can we use in parallel computing for kernel ridge regression? arXiv preprint arXiv:1805.09948, 2018.
+Christos Louizos, Max Welling, and Diederik P Kingma. Learning sparse neural networks through $l\_0$ regularization. arXiv preprint arXiv:1712.01312, 2017.
+Ping Ma, Michael W Mahoney, and Bin Yu. A statistical perspective on algorithmic leveraging. The Journal of Machine Learning Research, 16(1):861-911, 2015.
+Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep neural networks. arXiv preprint arXiv:1701.05369, 2017.
+Zuofeng Shang and Guang Cheng. Local and global asymptotic inference in smoothing spline models. The Annals of Statistics, 41(5):2608-2638, 2013.
+
+Zuofeng Shang and Guang Cheng. Computational limits of a distributed algorithm for smoothing spline. The Journal of Machine Learning Research, 18(1):3809-3845, 2017.
+Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
+Peter Sollich and Christopher KI Williams. Understanding gaussian process regression using the equivalent kernel. In International Workshop on Deterministic and Statistical Methods in Machine Learning, pp. 211-228. Springer, 2004.
+Enzo Tartaglione, Skjalg Lepsøy, Attilio Fiandrotti, and Gianluca Francini. Learning sparse neural networks via sensitivity-driven regularization. In Advances in Neural Information Processing Systems, pp. 3882-3892, 2018.
+Karen Ullrich, Edward Meeds, and Max Welling. Soft weight-sharing for neural network compression. arXiv preprint arXiv:1702.04008, 2017.
+Sergey Zagoruyko. 92.45 on cifar-10 in torch. Technical report, 2015.
+Yuchen Zhang, John Duchi, and Martin Wainwright. Divide and conquer kernel ridge regression. In Conference on Learning Theory, pp. 592-617, 2013.
+Michael Zhu and Suyog Gupta. To prune, or not to prune: exploring the efficacy of pruning for model compression. arXiv preprint arXiv:1710.01878, 2017.
+
+# A SUPPLEMENTARY MATERIALS
+
+# A.1 PROOF OF THEOREM 1
+
+Let $\{\mu_{\rho},\phi_{\rho}\}_{\rho = 1}^{\infty}$ be the eigenvalues and eigenfunctions for the tensor product space $\mathcal{H}_1^{\langle \alpha \rangle}\otimes \mathcal{H}_1^{\langle \beta \rangle}\otimes \mathcal{H}_1^{\langle Y\rangle}$ . By the decomposition defined in (8), we have
+
+$$
+E _ {t} \left[ \phi_ {\rho} (t) \right] = 0, \tag {19}
+$$
+
+for $\rho = 1,\ldots ,\infty$ , i.e., the mean of the eigenfunction $\phi_{\rho}$ is zero. By simple calculation, we have
+
+$$
+\sum_ {\rho = 1} ^ {\infty} \mu_ {\rho} < \infty \tag {20}
+$$
+
+for exponentially decaying kernels and for polynomially decaying kernels with decay rate $i^{-m}$ for $m > 1$ . Thus, commonly used kernels such as the Gaussian kernel, the Laplace kernel, and the linear or quadratic Sobolev kernels satisfy this requirement.
+
+By Mercer's theorem, we have the decomposition as
+
+$$
+\begin{aligned} n S_{n}^{2} &= \frac{1}{n} \sum_{i = 1}^{n} \sum_{j = 1}^{n} K^{1}(\boldsymbol{t}_{i}, \boldsymbol{t}_{j}) \\ &= \frac{1}{n} \sum_{i = 1}^{n} \sum_{j = 1}^{n} \sum_{\rho = 1}^{\infty} \mu_{\rho} \phi_{\rho}(\boldsymbol{t}_{i}) \phi_{\rho}(\boldsymbol{t}_{j}) \\ &= \sum_{\rho = 1}^{\infty} \mu_{\rho} \left( \frac{1}{\sqrt{n}} \sum_{i = 1}^{n} \phi_{\rho}(\boldsymbol{t}_{i}) \right)^{2} \\ &\stackrel{d}{\rightarrow} \sum_{\rho = 1}^{\infty} \mu_{\rho} \epsilon_{\rho}^{2} \end{aligned}
+$$
+
+where $\epsilon_{\rho}$ are i.i.d. standard normal random variables. The last line follows by applying the Lindeberg-Lévy CLT to obtain $\frac{1}{\sqrt{n}}\sum_{i=1}^{n} \phi_{\rho}(\pmb{t}_i) \xrightarrow{d} \epsilon_{\rho}$ , which holds since (19) and (20) hold. Then, by Kolmogorov's inequality and $\sum_{\rho=1}^{\infty} \mu_{\rho} < \infty$ , the convergence of the full series follows.
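A quick numerical illustration of this limit (a toy sketch under an assumed linear kernel, not part of the proof):

```python
import numpy as np

# Toy check of the limiting distribution: for the linear kernel
# K(x, y) = x * y with x ~ N(0, 1), the only eigenfunction is phi(x) = x
# with eigenvalue mu_1 = 1 and E[phi(x)] = 0, so
# n * S_n^2 = (1 / n) * (sum_i x_i)^2 should converge to mu_1 * eps^2,
# a chi-squared variable with one degree of freedom (mean 1, variance 2).
rng = np.random.default_rng(0)
n, reps = 500, 5000
x = rng.standard_normal((reps, n))
stat = x.sum(axis=1) ** 2 / n  # one realization of n * S_n^2 per replicate

print(stat.mean(), stat.var())  # should be close to 1 and 2, respectively
```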
\ No newline at end of file
diff --git a/probabilisticconnectionimportanceinferenceandlosslesscompressionofdeepneuralnetworks/images.zip b/probabilisticconnectionimportanceinferenceandlosslesscompressionofdeepneuralnetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..5c4427c1463d882699ddfc033858bf481953a5a4
--- /dev/null
+++ b/probabilisticconnectionimportanceinferenceandlosslesscompressionofdeepneuralnetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:31a40c0c9514075762a2da554cc98706f1f81b52a57aa84a19ca8769777f4e61
+size 411023
diff --git a/probabilisticconnectionimportanceinferenceandlosslesscompressionofdeepneuralnetworks/layout.json b/probabilisticconnectionimportanceinferenceandlosslesscompressionofdeepneuralnetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8b4b4c6fa6d768850b2103e875fbdbf1cb1257c2
--- /dev/null
+++ b/probabilisticconnectionimportanceinferenceandlosslesscompressionofdeepneuralnetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:23960f618eb6a39d9020ce2ab53c9bdf587f505fcf3552fb121ef8082ae5bbcb
+size 593840
diff --git a/probabilitycalibrationforknowledgegraphembeddingmodels/68b08972-e088-4869-ac0d-925cc9fe435b_content_list.json b/probabilitycalibrationforknowledgegraphembeddingmodels/68b08972-e088-4869-ac0d-925cc9fe435b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e91b3dad7b943c2002b5228c69354d34afaadb1c
--- /dev/null
+++ b/probabilitycalibrationforknowledgegraphembeddingmodels/68b08972-e088-4869-ac0d-925cc9fe435b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b5f289c1caf5af90c6240cff16b46008bf767694b29b540376be772438368aed
+size 89485
diff --git a/probabilitycalibrationforknowledgegraphembeddingmodels/68b08972-e088-4869-ac0d-925cc9fe435b_model.json b/probabilitycalibrationforknowledgegraphembeddingmodels/68b08972-e088-4869-ac0d-925cc9fe435b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..713738ef059b04d5c8c2fd67b86532f1d75c2e41
--- /dev/null
+++ b/probabilitycalibrationforknowledgegraphembeddingmodels/68b08972-e088-4869-ac0d-925cc9fe435b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:184d1dbe3d1acf6f482db515a39df964bdf1593372d1b8065184b4709bcf3067
+size 104080
diff --git a/probabilitycalibrationforknowledgegraphembeddingmodels/68b08972-e088-4869-ac0d-925cc9fe435b_origin.pdf b/probabilitycalibrationforknowledgegraphembeddingmodels/68b08972-e088-4869-ac0d-925cc9fe435b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b424f921dcdf2112549617591abafa173bf30496
--- /dev/null
+++ b/probabilitycalibrationforknowledgegraphembeddingmodels/68b08972-e088-4869-ac0d-925cc9fe435b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b0ee03864d38d8aafe3cd99964f0b1facdd0f8ee5a2be4f951bc07eb760ac200
+size 672707
diff --git a/probabilitycalibrationforknowledgegraphembeddingmodels/full.md b/probabilitycalibrationforknowledgegraphembeddingmodels/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..46f5432ec06c038bf2baf5bd3b43effa90c0f26a
--- /dev/null
+++ b/probabilitycalibrationforknowledgegraphembeddingmodels/full.md
@@ -0,0 +1,342 @@
+# PROBABILITY CALIBRATION FOR KNOWLEDGE GRAPH EMBEDDING MODELS
+
+Pedro Tabacof, Luca Costabello
+
+Accenture Labs
+
+Dublin, Ireland
+
+{pedro.tabacof, luca.costabello}@accenture.com
+
+# ABSTRACT
+
+Knowledge graph embedding research has overlooked the problem of probability calibration. We show popular embedding models are indeed uncalibrated. That means probability estimates associated to predicted triples are unreliable. We present a novel method to calibrate a model when ground truth negatives are not available, which is the usual case in knowledge graphs. We propose to use Platt scaling and isotonic regression alongside our method. Experiments on three datasets with ground truth negatives show our contribution leads to well-calibrated models when compared to the gold standard of using negatives. All calibration methods yield significantly better results than the uncalibrated models. We show isotonic regression offers the best performance overall, though not without trade-offs. We also show that calibrated models reach state-of-the-art accuracy without the need to define relation-specific decision thresholds.
+
+# 1 INTRODUCTION
+
+Knowledge graph embedding models are neural architectures that learn vector representations (i.e. embeddings) of nodes and edges of a knowledge graph. Such knowledge graph embeddings have applications in knowledge graph completion, knowledge discovery, entity resolution, and link-based clustering, just to cite a few (Nickel et al., 2016a).
+
+Despite burgeoning research, the problem of calibrating such models has been overlooked, and existing knowledge graph embedding models do not offer any guarantee on the probability estimates they assign to predicted facts. Probability calibration is important whenever you need the predictions to make probabilistic sense, i.e., if the model predicts a fact is true with $80\%$ confidence, it should be correct $80\%$ of the time. Prior art suggests to use a sigmoid layer to turn logits returned by models into probabilities (Nickel et al., 2016a) (also called the exit transform), but we show that this provides poor calibration. Figure 1 shows reliability diagrams for off-the-shelf TransE and ComplEx. The identity function represents perfect calibration. Both models are miscalibrated: all TransE combinations in Figure 1a under-forecast the probabilities (i.e. probabilities are too small), whereas ComplEx under-forecasts or over-forecasts depending on which loss is used (Figure 1b).
+
+Calibration is crucial in high-stakes scenarios such as drug-target discovery from biological networks, where end-users need trustworthy and interpretable decisions. Moreover, since probabilities are not calibrated, when classifying triples (i.e. facts) as true or false, users must define relation-specific thresholds, which can be awkward for graphs with a great number of relation types.
+
+To the best of our knowledge, this is the first work to focus on calibration for knowledge embeddings. Our contribution is two-fold: First, we use Platt scaling and isotonic regression to calibrate knowledge graph embedding models on datasets that include ground truth negatives. One peculiar feature of knowledge graphs is that they usually rely on the open world assumption (facts not present are not necessarily false, they are simply unknown). This makes calibration troublesome because of the lack of ground truth negatives. For this reason, our second and main contribution is a calibration heuristic that combines Platt scaling or isotonic regression with synthetically generated negatives.
+
+Experimental results show that we obtain better-calibrated models and that it is possible to calibrate knowledge graph embedding models even when ground truth negatives are not present. We
+
+
+Figure 1: Reliability diagrams of uncalibrated models. Probabilities are generated by a logistic sigmoid layer. The larger the deviation from the diagonal, the more miscalibrated the model. We present four different common loss functions used to train knowledge graph embedding models. (a) Uncalibrated TransE on WN11. (b) Uncalibrated ComplEx on FB13. Best viewed in colors.
+
+
+
+also experiment with triple classification, and we show that calibrated models reach state-of-the-art accuracy without the need to define relation-specific decision thresholds.
+
+# 2 RELATED WORK
+
+A comprehensive survey of knowledge graph embedding models is out of the scope of this paper. Recent surveys such as (Nickel et al., 2016a) and (Cai et al., 2017) summarize recent literature.
+
+TransE (Bordes et al., 2013) is the forerunner of distance-based methods, and spawned a number of models commonly referred to as TransX. The intuition behind the symmetric bilinear-diagonal model DistMult (Yang et al., 2015) paved the way for its asymmetric evolutions in the complex space, RotatE (Sun et al., 2019) and ComplEx (Trouillon et al., 2016) (a generalization of which uses hypercomplex representations (Zhang et al., 2019)). HolE relies instead on circular correlation (Nickel et al., 2016b). The recent TorusE (Ebisu & Ichise, 2018) operates on a Lie group and not in the Euclidean space. While the above models can be interpreted as multilayer perceptrons, others such as ConvE (Dettmers et al., 2018) or ConvKB (Nguyen et al., 2018) include convolutional layers. More recent works adopt capsule network architectures (Nguyen et al., 2019). Adversarial learning is used by KBGAN (Cai & Wang, 2018), whereas attention mechanisms are instead used by (Nathani et al., 2019). Some models such as RESCAL (Nickel et al., 2011), TuckER (Balažević et al., 2019), and SimplE (Kazemi & Poole, 2018) rely on tensor decomposition techniques. More recently, ANALOGY adopts a differentiable version of analogical reasoning (Liu et al., 2017). In this paper we limit our analysis to four popular models: TransE, DistMult, ComplEx and HolE. None of these works address the problem of assessing the reliability of predictions, let alone calibrating probabilities.
+
+Besides well-established techniques such as Platt scaling (Platt et al., 1999) and isotonic regression (Zadrozny & Elkan, 2002), recent work on calibrating neural architectures shows that modern neural networks are poorly calibrated and that calibration can be improved with novel methods. For example, (Guo et al., 2017) successfully proposes to use temperature scaling for calibrating modern neural networks in classification problems. On the same line, (Kuleshov et al., 2018) proposes a procedure based on Platt scaling to calibrate deep neural networks in regression problems.
+
+The Knowledge Vault pipeline in (Dong et al., 2014) extracts triples from unstructured knowledge and is equipped with Platt scaling calibration, but this is not applied to knowledge graph embedding models. KG2E (He et al., 2015) proposes to use normally-distributed embeddings to account for the uncertainty, but their model does not provide the probability of a triple being true, so KG2E would also benefit from the output calibration we propose here. To the best of our knowledge, the only work that adopts probability calibration to knowledge graph embedding models is Krompaß & Tresp (2015). The authors propose to use ensembles in order to improve the results of knowledge graph embedding tasks. For that, they propose to calibrate the models with Platt scaling, so they
+
|  | WN11 | FB13 | YAGO39K | FB15K-237 | WN18RR |
| --- | --- | --- | --- | --- | --- |
| Training | 112,581 | 316,232 | 354,996 | 272,115 | 86,835 |
| Validation | 5,218 | 11,816 | 18,682 | 17,535 | 3,034 |
| Test | 21,088 | 47,466 | 18,728 | 20,466 | 3,134 |
| Entities | 38,696 | 75,043 | 39,374 | 14,541 | 40,943 |
| Relations | 11 | 13 | 39 | 237 | 11 |
+
+(a)
+
+| Model | Scoring Function $f_m$ |
| --- | --- |
| TransE | $-\Vert \mathbf{e}_s + \mathbf{r}_p - \mathbf{e}_o \Vert_n$ |
| DistMult | $\langle \mathbf{e}_s, \mathbf{r}_p, \mathbf{e}_o \rangle$ |
| ComplEx | $\mathrm{Re}(\langle \mathbf{e}_s, \mathbf{r}_p, \overline{\mathbf{e}}_o \rangle)$ |
| HolE | $\langle \mathbf{e}_s, \mathbf{r}_p \otimes \mathbf{e}_o \rangle$ |
+
+(b)
+
+Table 1: (a) Triple classification datasets used in experiments (left); link prediction datasets used for positive base rate experiments (right); (b) Scoring functions of models used in experiments.
+
+operate on the same scale. No further details on the calibration procedure are provided. Besides, there is no explanation on how to handle the lack of negatives.
+
+# 3 PRELIMINARIES
+
+Knowledge Graph. Formally, a knowledge graph $\mathcal{G} = \{(s,p,o)\} \subseteq \mathcal{E}\times \mathcal{R}\times \mathcal{E}$ is a set of triples $t = (s,p,o)$ , each including a subject $s\in \mathcal{E}$ , a predicate $p\in \mathcal{R}$ , and an object $o\in \mathcal{E}$ . $\mathcal{E}$ and $\mathcal{R}$ are the sets of all entities and relation types of $\mathcal{G}$ .
+
+Triple Classification. Binary classification task where $\mathcal{G}$ (which includes only positive triples) is used as training set, and $\mathcal{T} = \{(s,p,o)\} \subseteq \mathcal{E}\times \mathcal{R}\times \mathcal{E}$ is a disjoint test set of labeled triples to classify. Note $\mathcal{T}$ includes positives and negatives. Since the learned models are not calibrated, multiple decision thresholds $\tau_{i}$ must be picked, one for each relation type, i.e. $i = 1, \ldots, |\mathcal{R}|$ . This is done using a validation set (Bordes et al., 2013). Classification metrics apply (e.g. accuracy).
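The per-relation threshold search can be sketched as follows; `scores` and `labels` stand in for a trained model's validation scores and ground truth labels (hypothetical names, not the paper's code):

```python
import numpy as np

def best_threshold(scores, labels):
    """Pick the decision threshold that maximizes accuracy for one relation."""
    order = np.argsort(scores)
    # Candidate thresholds: below the minimum score, plus every observed score.
    candidates = np.concatenate([[scores.min() - 1.0], scores[order]])
    accs = [((scores >= t) == labels).mean() for t in candidates]
    return candidates[int(np.argmax(accs))]

def per_relation_thresholds(triples, scores, labels):
    """One threshold tau_i per relation type, tuned on a validation set.

    triples: list of (s, p, o); scores/labels: arrays aligned with triples.
    """
    thresholds = {}
    for rel in {p for _, p, _ in triples}:
        idx = np.array([i for i, (_, p, _) in enumerate(triples) if p == rel])
        thresholds[rel] = best_threshold(scores[idx], labels[idx])
    return thresholds
```

A triple is then classified as true whenever its score meets the threshold of its relation type.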
+
+Link Prediction. Given a training set $\mathcal{G}$ that includes only positive triples, the goal is assigning a score $f(t)\in \mathbb{R}$ proportional to the likelihood that each unlabeled triple $t$ included in a held-out set $S$ is true. Note $S$ does not have ground truth positives or negatives. This task is cast as a learning to rank problem, and uses metrics such as mean rank (MR), mean reciprocal rank (MRR) or Hits@N.
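Given the 1-based rank of each true test triple among its corruptions, the ranking metrics above reduce to a few lines (a sketch, not a library implementation):

```python
import numpy as np

def ranking_metrics(ranks, n=10):
    """Mean rank, mean reciprocal rank, and Hits@N from 1-based ranks."""
    ranks = np.asarray(ranks, dtype=float)
    return {
        "MR": ranks.mean(),            # lower is better
        "MRR": (1.0 / ranks).mean(),   # higher is better, in (0, 1]
        f"Hits@{n}": (ranks <= n).mean(),
    }

# Example: four test triples ranked 1st, 2nd, 10th and 100th.
print(ranking_metrics([1, 2, 10, 100]))
```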
+
+Knowledge Graph Embeddings. Knowledge graph embedding models are neural architectures that encode concepts from a knowledge graph $\mathcal{G}$ (i.e. entities $\mathcal{E}$ and relation types $\mathcal{R}$ ) into low-dimensional, continuous vectors $\in \mathbb{R}^k$ (i.e., the embeddings). Embeddings are learned by training a neural architecture over $\mathcal{G}$ . Although such architectures vary, the training phase always consists in minimizing a loss function $\mathcal{L}$ that includes a scoring function $f_{m}(t)$ , i.e., a model-specific function that assigns a score to a triple $t = (s,p,o)$ (more precisely, the inputs of $f_{m}$ are the embeddings of the subject $\mathbf{e}_s$ , the predicate $\mathbf{r}_p$ , and the object $\mathbf{e}_o$ ). The goal of the optimization procedure is learning optimal embeddings, such that the scoring function $f_{m}$ assigns high scores to positive triples $t^{+}$ and low scores to triples unlikely to be true $t^{-}$ . Existing models propose scoring functions that combine the embeddings $\mathbf{e}_s, \mathbf{r}_p, \mathbf{e}_o \in \mathbb{R}^k$ using different intuitions. Table 1b lists the scoring functions of the most common models. For example, the scoring function of TransE computes a similarity between the embedding of the subject $\mathbf{e}_s$ translated by the embedding of the predicate $\mathbf{r}_p$ and the embedding of the object $\mathbf{e}_o$ , using the $L_1$ or $L_2$ norm $\|\cdot\|$ . This scoring function is then used on positive and negative triples $t^{+} \in \mathcal{G}, t^{-} \in \mathcal{N}$ in the loss function. This is usually a pairwise margin-based loss (Bordes et al., 2013), negative log-likelihood, or multi-class log-likelihood (Lacroix et al., 2018). Since the training set usually includes only positive statements, we generate the synthetic negatives $t^{-} \in \mathcal{N}$ required for training. We do so by corrupting one side of the triple at a time (i.e., either the subject or the object), following the protocol proposed by (Bordes et al., 2013).
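A minimal sketch of this corruption protocol (uniform entity sampling; the common extra step of filtering out corruptions that happen to be positives elsewhere in the graph is omitted here):

```python
import random

def generate_corruptions(triple, entities, eta, rng=None):
    """Generate eta synthetic negatives for one positive triple by
    replacing either the subject or the object with a random entity."""
    rng = rng or random.Random(0)
    s, p, o = triple
    negatives = []
    while len(negatives) < eta:
        e = rng.choice(entities)
        # Corrupt one side of the triple at a time.
        corrupted = (e, p, o) if rng.random() < 0.5 else (s, p, e)
        if corrupted != triple:  # do not re-create the positive itself
            negatives.append(corrupted)
    return negatives
```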
+
+Calibration. Given a knowledge graph embedding model identified by its scoring function $f_{m}$ with $f_{m}(t) = \hat{p}$ , where $\hat{p}$ is the estimated confidence level that a triple $t = (s,p,o)$ is true, we define $f_{m}$ to be calibrated if $\hat{p}$ represents a true probability. For example, if $f_{m}(\cdot)$ predicts 100 triples all with confidence $\hat{p} = 0.7$ , we expect exactly 70 to be actually true. Calibrating a model requires reliable metrics to detect miscalibration, and effective techniques to fix such distortion. Appendix A.1 includes definitions and background on the calibration metrics adopted in the paper.
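This definition is exactly what reliability diagrams (and the metrics in Appendix A.1) probe: bin the predicted probabilities and compare each bin's mean prediction with the observed fraction of true triples. A minimal NumPy sketch:

```python
import numpy as np

def reliability_curve(y_true, p_hat, n_bins=10):
    """Mean predicted probability vs. observed frequency per bin.

    A perfectly calibrated model yields points on the identity line.
    """
    y_true = np.asarray(y_true, dtype=float)
    p_hat = np.asarray(p_hat, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(p_hat, bins) - 1, 0, n_bins - 1)
    mean_pred, obs_freq = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            mean_pred.append(p_hat[mask].mean())
            obs_freq.append(y_true[mask].mean())
    return np.array(mean_pred), np.array(obs_freq)
```

Plotting `obs_freq` against `mean_pred` reproduces a reliability diagram such as those in Figure 1.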
+
+# 4 CALIBRATING KNOWLEDGE GRAPH EMBEDDING MODELS PREDICTIONS
+
+We propose two scenario-dependent calibration techniques: we first address the case with ground truth negatives $t^{-} \in \mathcal{N}$ . The second deals with the absence of ground truth negatives.
+
+Calibration with Ground Truth Negatives. We propose to use off-the-shelf Platt scaling and isotonic regression, techniques proved to be effective in the literature. It is worth reiterating that to calibrate a model, negative triples $\mathcal{N}$ are required from a held-out dataset (which could be the validation set). Such negatives are usually available in triple classification datasets (FB13, WN11, YAGO39K).
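With ground truth negatives available, both techniques can be applied off the shelf; a minimal scikit-learn sketch on synthetic scores (the score distributions here are hypothetical stand-ins for a trained model's held-out outputs, not the paper's pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
# Hypothetical held-out scores: positives score higher on average.
pos = rng.normal(1.0, 1.0, 500)
neg = rng.normal(-1.0, 1.0, 500)
scores = np.concatenate([pos, neg])
labels = np.concatenate([np.ones(500), np.zeros(500)])

# Platt scaling: a 1-d logistic regression on the raw scores.
platt = LogisticRegression().fit(scores.reshape(-1, 1), labels)
p_platt = platt.predict_proba(scores.reshape(-1, 1))[:, 1]

# Isotonic regression: a monotone, piecewise-constant map into [0, 1].
iso = IsotonicRegression(out_of_bounds="clip").fit(scores, labels)
p_iso = iso.predict(scores)
```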
+
+Calibration with Synthetic Negatives. Our main contribution is for the case where no ground truth negatives are provided at all, which is in fact the usual scenario for link prediction tasks.
+
+We propose to adopt Platt scaling or isotonic regression and to synthetically generate corrupted triples as negatives, while using sample weights to guarantee that the frequencies adhere to the positive base rate of the population (which is problem-dependent and must be user-specified). It is worth noting that it is not possible to calibrate a model without an implicit or explicit base rate. If it is not implicit in the dataset (the ratio of positives to total examples), it must be explicitly provided.
+
+We generate synthetic negatives $\mathcal{N}$ following the standard protocol proposed by (Bordes et al., 2013): for every positive triple $t = (s, p, o)$ , we corrupt one side of the triple at a time (i.e. either the subject $s$ or the object $o$ ) by replacing it with other entities in $\mathcal{E}$ . The number of corruptions generated per positive is defined by the user-defined corruption rate $\eta \in \mathbb{N}$ . Since the number of negatives $N = |\mathcal{N}|$ can be much greater than the number of positive triples $P = |\mathcal{G}|$ , when dealing with calibration with synthetically generated corruptions, we weight the positive and negative triples to make the calibrated model match the population base rate $\alpha = P / (P + N) \in [0, 1]$ , otherwise the base rate would depend on the arbitrary choice of $\eta$ .
+
+Given a positive base rate $\alpha$ , we propose the following weighting scheme:
+
+$$
+\omega_ {+} = \eta \quad \text {for positive triples } \mathcal {G}
+$$
+
+$$
+\omega_ {-} = \frac {1}{\alpha} - 1 \quad \text {for negative triples } \mathcal {N} \tag {1}
+$$
+
+where $\omega_{+} \in \mathbb{R}$ is the weight associated to the positive triples and $\omega_{-} \in \mathbb{R}$ to the negatives. The $\omega_{+}$ weight removes the imbalance determined by having a higher number of corruptions than positive triples in each batch. The $\omega_{-}$ weight guarantees that the given positive base rate $\alpha$ is respected.
+
+The above can be verified as follows. For the unweighted problem, the positive base rate is simply the ratio of positive examples to the total number of examples:
+
+$$
+\alpha = \frac {P}{P + N} \tag {2}
+$$
+
+If we add uniform weights to each class, we have:
+
+$$
+\alpha = \frac {\omega_ {+} P}{\omega_ {-} N + \omega_ {+} P} \tag {3}
+$$
+
+By defining $\omega_{+} = \eta$ , i.e. adopting the ratio of negatives to positives (corruption rate), we then have:
+
+$$
+\alpha = \frac {P \frac {N}{P}}{\omega_ {-} N + P \frac {N}{P}} = \frac {N}{\omega_ {-} N + N} = \frac {1}{\omega_ {-} + 1} \tag {4}
+$$
+
+Thus, the negative weight is:
+
+$$
+\omega_ {-} = \frac {1}{\alpha} - 1 \tag {5}
+$$
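The weighting scheme above can be plugged directly into Platt scaling via per-sample weights; a sketch assuming `pos_scores` holds model scores for the positive triples and `neg_scores` the scores of their synthetic corruptions (both hypothetical names):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def platt_with_synthetic_negatives(pos_scores, neg_scores, alpha):
    """Platt scaling against synthetic corruptions, weighted so that the
    calibrated model matches the positive base rate alpha (Eqs. 1 and 5)."""
    eta = len(neg_scores) / len(pos_scores)              # corruption rate
    w_pos = np.full(len(pos_scores), eta)                # omega_+ = eta
    w_neg = np.full(len(neg_scores), 1.0 / alpha - 1.0)  # omega_- = 1/alpha - 1
    x = np.concatenate([pos_scores, neg_scores]).reshape(-1, 1)
    y = np.concatenate([np.ones(len(pos_scores)), np.zeros(len(neg_scores))])
    w = np.concatenate([w_pos, w_neg])
    return LogisticRegression().fit(x, y, sample_weight=w)
```

Higher scores then map to higher calibrated probabilities, and the weighted fit respects the user-specified base rate rather than the arbitrary ratio of corruptions to positives.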
+
+# 5 RESULTS
+
+We compute the calibration quality of our heuristics, showing that we achieve calibrated predictions even when ground truth negative triples are not available. We then show the impact of calibrated predictions on the task of triple classification.
+
+Datasets. We run experiments on triple classification datasets that include ground truth negatives (Table 1). We train on the training set, calibrate on the validation set, and evaluate on the test set.
+
+- WN11 (Socher et al., 2013). A subset of Wordnet (Miller, 1995), it includes a large number of hyponym and hypernym relations thus including hierarchical structures.
+- FB13 (Socher et al., 2013). A subset of Freebase (Bollacker et al., 2008), it includes facts on famous people (place of birth and/or death, profession, nationality, etc).
+- YAGO39K (Lv et al., 2018). This recently released dataset has been carved out of YAGO3 (Mahdisoltani et al., 2013), and includes a mixture of facts about famous people, events, places, and sports teams.
+
+We also use two standard link prediction benchmark datasets, WN18RR (Dettmers et al., 2018) (a subset of Wordnet) and FB15K-237 (Toutanova et al., 2015) (a subset of Freebase). Their test sets do not include ground truth negatives.
+
+Implementation Details. The knowledge graph embedding models are implemented with the AmpliGraph library (Costabello et al., 2019) version 1.1, using TensorFlow 1.13 (Abadi et al., 2016) and Python 3.6 on the backend. All experiments were run under Ubuntu 16.04 on an Intel Xeon Gold 6142, 64 GB, equipped with a Tesla V100 16GB. Code and experiments are available at https://github.com/Accenture/AmpliGraph.
+
+Hyperparameter Tuning. For each dataset in Table 1a, we train a TransE, DistMult, and a ComplEx knowledge graph embedding model. We rely on typical hyperparameter values: we train the embeddings with dimensionality $k = 100$ , Adam optimizer, initial learning rate $\alpha_0 = 1\mathrm{e - }4$ , negatives per positive ratio $\eta = 20$ , epochs $= 1000$ . We train all models on four different loss functions: Self-adversarial (Sun et al., 2019), pairwise (Bordes et al., 2013), NLL, and Multiclass-NLL (Lacroix et al., 2018). Different losses are used in different experiments.
+
+# 5.1 CALIBRATION RESULTS
+
+Calibration Success. Table 2 reports Brier scores and log losses for all our calibration methods, grouped by the type of negative triples they deal with (ground truth or synthetic). All calibration methods show better-calibrated results than the uncalibrated case, by a considerable margin and for all datasets. In particular, to put the results of the synthetic strategy in perspective, if we suppose to predict the positive base rate as a baseline, for each of the cases in Table 2 (the three datasets share the same positive base rate $\alpha = 0.5$ ), we would get Brier score $B = 0.25$ and log loss $L_{log} = 0.69$ , results that are always worse than our methods. There is considerable variance of results between models given a dataset, which also happens when varying losses given a particular combination of model and dataset (Table 3). TransE provides the best results for WN11 and FB13, while DistMult works best for YAGO39K. We later propose that this variance comes from the quality of the embeddings themselves, that is, better embeddings allow for better calibration.
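The baseline figures quoted above are easy to verify: predicting $\hat{p} = \alpha = 0.5$ for every triple yields $B = 0.25$ and $L_{log} = \ln 2 \approx 0.69$ regardless of the labels:

```python
import numpy as np

# Sanity check of the base-rate baseline: always predicting alpha = 0.5.
alpha = 0.5
y = np.array([1, 0, 1, 0])          # any labels with base rate 0.5
p = np.full(y.shape, alpha)

brier = np.mean((p - y) ** 2)
log_loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
print(brier, log_loss)  # 0.25 and ln(2) ~= 0.693
```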
+
+In Figure 2, we also evaluate just the frequencies themselves, ignoring sharpness (i.e. whether probabilities are close to 0 or 1), using reliability diagrams for a single model-loss combination, for all datasets (ComplEx+NLL). Calibration plots show a remarkable difference between the uncalibrated baseline (s-shaped blue line on the left-hand side) and all calibrated models (curves closer to the identity function are better). A visual comparison of uncalibrated curves in Figure 1 with those in Figure 2 also gives a sense of the effectiveness of calibration.
+
+Ground Truth vs Synthetic. As expected, the ground truth method generally performs better than the synthetic calibration, since it has more data in both quantity (twice as much) and quality (two classes instead of one). Even so, the synthetic method is much closer to the ground truth than to the uncalibrated scores, as highlighted by the calibration plots in Figure 2. For WN11, it is actually as good as the calibration with the ground truth. This shows that our proposed method works as intended and could be used in situations where we do not have access to the ground truth, as is the case for most knowledge graph datasets.
+
+Isotonic vs Platt. Isotonic regression performs better than Platt scaling in general, but in practice isotonic regression has the disadvantage of not being a convex or differentiable algorithm (Zadrozny & Elkan, 2002). This is particularly problematic for the synthetic calibration, as it requires the generation of the synthetic corruptions, which can only be made to scale via a mini-batch based
+
+
+
+
+
+
+Figure 2: Calibration plots for the best calibrated model-loss combinations. Isotonic regression delivers the best results, getting very close to the perfectly calibrated line, both when used with the ground truth method or our proposed synthetic method. Best viewed in colors.
+
| Dataset | Model | Brier Uncalib | Brier GT Platt | Brier GT Iso | Brier Synth Platt | Brier Synth Iso | Log Loss Uncalib | Log Loss GT Platt | Log Loss GT Iso | Log Loss Synth Platt | Log Loss Synth Iso |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| WN11 | TransE | .443 | .089 | .087 | .092 | .088 | 1.959 | .302 | .295 | .311 | .296 |
| WN11 | DistMult | .488 | .213 | .208 | .214 | .208 | 5.625 | .618 | .604 | .618 | .601 |
| WN11 | ComplEx | .490 | .240 | .227 | .240 | .228 | 6.061 | .674 | .651 | .674 | .650 |
| WN11 | HolE | .474 | .235 | .235 | .235 | .236 | 2.731 | .663 | .661 | .663 | .668 |
| FB13 | TransE | .446 | .124 | .124 | .148 | .141 | 1.534 | .390 | .391 | .459 | .442 |
| FB13 | DistMult | .473 | .178 | .170 | .185 | .192 | 2.177 | .533 | .518 | .549 | .567 |
| FB13 | ComplEx | .481 | .177 | .170 | .182 | .189 | 2.393 | .534 | .516 | .544 | .565 |
| FB13 | HolE | .452 | .229 | .228 | .242 | .263 | 1.681 | .650 | .651 | .677 | .725 |
| YAGO39K | TransE | .363 | .095 | .093 | .106 | .110 | 1.062 | .319 | .309 | .370 | .376 |
| YAGO39K | DistMult | .284 | .081 | .079 | .093 | .089 | 1.043 | .279 | .266 | .311 | .308 |
| YAGO39K | ComplEx | .264 | .089 | .084 | .097 | .095 | 1.199 | .305 | .278 | .323 | .313 |
| YAGO39K | HolE | .345 | .141 | .140 | .166 | .162 | 1.065 | .444 | .438 | .581 | .537 |
+
+Table 2: Calibration test results (self-adversarial loss (Sun et al., 2019)). Low score = better. Best results in bold for each combination of dataset and metric.
+
+optimization procedure. Platt scaling, given that it is a convex and differentiable loss, can be made part of a computational graph and optimized with mini-batches, thus it can rely on the modern computational infrastructure designed to train deep neural networks.
+
+Influence of Loss Function. We experiment with different losses, to assess how calibration affects each of them (Table 3). We choose to work with TransE, which is reported as a strong baseline in (Hamaguchi et al., 2017). Self-adversarial loss obtains the best calibration results for all calibra
+
| Loss | Brier GT Platt | Brier GT Iso | Brier Synth Platt | Brier Synth Iso | Log Loss GT Platt | Log Loss GT Iso | Log Loss Synth Platt | Log Loss Synth Iso | MRR (filtered) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Pairwise | .202 | .198 | .209 | .200 | .591 | .585 | .606 | .589 | .058 |
| NLL | .093 | .088 | .094 | .088 | .342 | .299 | .344 | .301 | .134 |
| Multiclass-NLL | .204 | .189 | .204 | .189 | .599 | .550 | .599 | .551 | .108 |
| Self-adversarial | .089 | .087 | .092 | .088 | .302 | .295 | .311 | .296 | .155 |
+
+(a) WN11
+
+| $\mathcal{L}$ | Brier (Ground Truth) Platt | Brier (Ground Truth) Iso | Brier (Synthetic) Platt | Brier (Synthetic) Iso | Log Loss (Ground Truth) Platt | Log Loss (Ground Truth) Iso | Log Loss (Synthetic) Platt | Log Loss (Synthetic) Iso | MRR (filtered) |
|---|---|---|---|---|---|---|---|---|---|
| Pairwise | .225 | .203 | .225 | .208 | .636 | .582 | .637 | .594 | .282 |
| NLL | .209 | .203 | .240 | .244 | .614 | .592 | .676 | .685 | .202 |
| Multiclass-NLL | .146 | .146 | .162 | .159 | .455 | .454 | .500 | .490 | .402 |
| Self-adversarial | .124 | .124 | .142 | .141 | .390 | .390 | .446 | .442 | .296 |
+
+(b) FB13
+
+| $\mathcal{L}$ | Brier (Ground Truth) Platt | Brier (Ground Truth) Iso | Brier (Synthetic) Platt | Brier (Synthetic) Iso | Log Loss (Ground Truth) Platt | Log Loss (Ground Truth) Iso | Log Loss (Synthetic) Platt | Log Loss (Synthetic) Iso | MRR (filtered) |
|---|---|---|---|---|---|---|---|---|---|
| Pairwise | .123 | .103 | .147 | .113 | .445 | .352 | .477 | .393 | .371 |
| NLL | .187 | .170 | .260 | .200 | .577 | .518 | .756 | .622 | .063 |
| Multiclass-NLL | .111 | .104 | .128 | .116 | .392 | .350 | .431 | .440 | .325 |
| Self-adversarial | .095 | .093 | .113 | .109 | .319 | .308 | .399 | .376 | .169 |
+
+(c) YAGO39K
+
+Table 3: Calibration test results using different losses $\mathcal{L}$ with TransE. Lower calibration metrics are better. We compute MRR only on positive test triples. Self-adversarial loss achieves better-calibrated results across the board. Results show no correlation between MRR and calibration performance, i.e. embeddings that yield higher MRR are not necessarily easier to calibrate. Best results in bold.
+
+tion methods, across all datasets. Experiments also show that the choice of loss has a large impact, greater than that of the calibration method or the embedding model. We assess whether such variability is determined by the quality of the embeddings. To verify whether better embeddings lead to better calibration, we report the mean reciprocal rank (MRR), which ranks each true test triple against its synthetic corruptions and averages the inverse ranks (Table 3). We observe no correlation between calibration results and MRR. In other words, the embeddings with the best predictive power are not necessarily the best calibrated.
+
+Positive Base Rate. We apply our synthetic calibration method to two link prediction benchmark datasets, FB15K-237 and WN18RR. As they only provide positive examples, we apply our method with varying base rates $\alpha_{i}$, linearly spaced from 0.05 to 0.95. We evaluate results under the closed-world assumption, i.e. triples not present in the training, validation or test sets are considered negative. For each $\alpha_{i}$ we calibrate the model using the synthetic method with both isotonic regression and Platt scaling. We sample negatives from the negative set under the implied negative rate, and compute a baseline that simply predicts the probability $\alpha_{i}$ for every triple. Figure 3 shows that isotonic regression and Platt scaling perform similarly and always considerably below the baseline. As expected from the previous results, the uncalibrated scores perform poorly, only reaching acceptable levels around some particular base rates.
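The constant-prediction baseline has a closed-form expected Brier score: predicting $\alpha$ for every triple when the true positive rate is $\alpha$ gives $\alpha(1-\alpha)^2 + (1-\alpha)\alpha^2 = \alpha(1-\alpha)$, which peaks at 0.25 for $\alpha = 0.5$. A short sketch (the 19 linearly spaced rates are our assumption of a 0.05 step; the paper only states the endpoints):

```python
import numpy as np

# Positive base rates linearly spaced from 0.05 to 0.95, as in the experiment.
alphas = np.linspace(0.05, 0.95, 19)

# Baseline: predict probability alpha for every triple. With true positive
# rate alpha, the expected Brier score is
#   alpha * (1 - alpha)^2 + (1 - alpha) * alpha^2 = alpha * (1 - alpha).
baseline_brier = alphas * (1 - alphas)
```

This reproduces the symmetric bump shape of the baseline curve: hardest (0.25) at a balanced base rate, easy near the extremes.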
+
+Triple Classification and Decision Threshold. To overcome the need to learn $|\mathcal{R}|$ decision thresholds $\tau_{i}$ from the validation set, we propose to rely on calibrated probabilities and use the natural threshold $\tau = 0.5$. Table 4 shows how calibration affects the triple classification task, compared with the literature standard of per-relation thresholds (last column). Note that, for simplicity, we use the same self-adversarial loss in Table 2 and Table 4. We learn thresholds $\tau_{i}$ on the validation sets, resulting in 11, 7, and 33 thresholds for WN11, FB13 and YAGO39K respectively.
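For concreteness, the per-relation baseline can be sketched as a grid search over the observed validation scores; this is an illustrative reconstruction (function names and the toy data are ours), not necessarily the exact search procedure used in the literature:

```python
import numpy as np

def learn_per_relation_thresholds(scores, labels, relations):
    """For each relation, pick the raw-score threshold that maximizes
    validation accuracy. This is the per-relation search that a single
    calibrated threshold of tau = 0.5 makes unnecessary."""
    thresholds = {}
    for r in np.unique(relations):
        mask = relations == r
        s, y = scores[mask], labels[mask]
        best_tau, best_acc = None, -1.0
        for tau in np.sort(s):  # candidate thresholds: the observed scores
            acc = float(np.mean((s >= tau) == y))
            if acc > best_acc:
                best_tau, best_acc = tau, acc
        thresholds[r] = best_tau
    return thresholds

# Toy validation set with two relations (raw scores, as in Appendix A.5).
scores = np.array([-6.1, -5.2, -3.9, -3.1])
labels = np.array([False, True, False, True])
relations = np.array([0, 0, 1, 1])
taus = learn_per_relation_thresholds(scores, labels, relations)
```

The search must be repeated for every relation and redone whenever the model changes, whereas the calibrated threshold is fixed at 0.5 once and for all.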
+
+Using a single $\tau = 0.5$ and calibration provides competitive results compared to multiple learned thresholds (note uncalibrated results with $\tau = 0.5$ are poor, as expected). It is worth mentioning that
+
+
+Figure 3: Synthetic calibration on FB15K-237 and WN18RR, with varying positive base rates. The baseline predicts the positive base rate as the probability for every triple. Results are evaluated under the closed-world assumption, using the same positive base rate used to calibrate the models.
+
+
+
+| Dataset | Model | Ground Truth, Platt (τ = .5) | Ground Truth, Iso (τ = .5) | Synthetic, Platt (τ = .5) | Synthetic, Iso (τ = .5) | Uncalib. (τ = .5) | Uncalib., per-relation τ (reproduced) | Uncalib., per-relation τ (literature) |
|---|---|---|---|---|---|---|---|---|
| WN11 | TransE | 88.8 | 88.9 | 88.9 | 88.9 | 50.7 | 88.2 | |
| WN11 | DistMult | 66.5 | 67.2 | 66.4 | 67.1 | 50.8 | 67.2 | 88.9* |
| WN11 | ComplEx | 60.6 | 62.4 | 60.0 | 62.4 | 50.8 | 59.6 | |
| WN11 | HolE | 59.3 | 59.0 | 59.3 | 59.0 | 50.9 | 60.8 | |
| FB13 | TransE | 82.4 | 82.4 | 80.7 | 80.2 | 50.0 | 82.1 | |
| FB13 | DistMult | 72.5 | 73.2 | 72.1 | 70.2 | 50.1 | 80.8 | 89.1* |
| FB13 | ComplEx | 73.8 | 74.2 | 74.2 | 72.4 | 50.1 | 83.6 | |
| FB13 | HolE | 60.3 | 60.6 | 57.8 | 54.3 | 50.0 | 62.6 | |
| YAGO39K | TransE | 87.2 | 87.8 | 85.3 | 84.9 | 50.2 | 88.8 | |
| YAGO39K | DistMult | 88.9 | 89.3 | 88.1 | 88.5 | 56.7 | 90.2 | 93.8† |
| YAGO39K | ComplEx | 87.3 | 88.2 | 86.9 | 87.2 | 61.1 | 89.4 | |
| YAGO39K | HolE | 80.4 | 80.4 | 78.4 | 78.5 | 50.6 | 81.5 | |
+
+Table 4: Effect of calibration on triple classification accuracy. Best results in bold. All calibration methods use a single threshold, $\tau = 0.5$. For the per-relation $\tau$, we learned multiple thresholds from the validation sets (Appendix A.5). We did not carry out additional model selection, and used the Table 2 hyperparameters instead. Isotonic regression reaches state-of-the-art results for WN11. Results marked * are from (Zhang et al., 2018); $\star$ from (Ji et al., 2016); $\dagger$ from (Lv et al., 2018).
+
+we are on par with state-of-the-art results for WN11. Isotonic regression is again the best method, but there is more variance across model choices. Our proposed calibration method with synthetic negatives performs well overall, even though calibration is performed using only half of the validation set (negative examples are replaced by synthetic negatives).
+
+# 6 CONCLUSION
+
+We propose a method to calibrate knowledge graph embedding models. We target datasets with and without ground truth negatives. We experiment on triple classification datasets and apply Platt scaling and isotonic regression, with and without synthetic negatives controlled by our heuristics. All calibration methods perform significantly better than uncalibrated scores. We show that isotonic regression brings better calibration performance, but is computationally more expensive. Additional experiments on triple classification show that calibration allows us to use a single decision threshold, reaching state-of-the-art results without the need to learn per-relation thresholds.
+
+Future work will evaluate additional calibration algorithms, such as beta calibration (Kull et al., 2017) or Bayesian binning (Naeini et al., 2015). We will also experiment on ensembling of knowledge graph embedding models, inspired by (Krompaß & Tresp, 2015). The rationale is that different models operate on different scales, but calibrating brings them all to the same probability scale, so their output can be easily combined.
+
+# REFERENCES
+
+Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265-283, 2016.
+Ivana Balážević, Carl Allen, and Timothy M Hospedales. Tucker: Tensor factorization for knowledge graph completion. arXiv preprint arXiv:1901.09590, 2019.
+Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pp. 1247-1250. ACM, 2008.
+Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In NIPS, pp. 2787-2795, 2013.
+Glenn W Brier. Verification of forecasts expressed in terms of probability. Monthly weather review, 78(1):1-3, 1950.
+Hongyun Cai, Vincent W Zheng, and Kevin Chen-Chuan Chang. A comprehensive survey of graph embedding: Problems, techniques and applications. arXiv preprint arXiv:1709.07604, 2017.
+Liwei Cai and William Yang Wang. Kbgan: Adversarial learning for knowledge graph embeddings. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1470-1480, 2018.
+Luca Costabello, Sumit Pai, Chan Le Van, Rory McGrath, Nicholas McCarthy, and Pedro Tabacof. AmpliGraph: a Library for Representation Learning on Knowledge Graphs, March 2019. URL https://doi.org/10.5281/zenodo.2595043.
+Morris H DeGroot and Stephen E Fienberg. The comparison and evaluation of forecasters. Journal of the Royal Statistical Society: Series D (The Statistician), 32(1-2):12-22, 1983.
+Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. Convolutional 2d knowledge graph embeddings. In Procs of AAAI, 2018. URL https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17366.
+Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 601-610. ACM, 2014.
+Takuma Ebisu and Ryutaro Ichise. Toruse: Knowledge graph embedding on a lie group. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
+Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1321-1330. JMLR.org, 2017.
+Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, and Yuji Matsumoto. Knowledge transfer for out-of-knowledge-base entities: a graph neural network approach. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pp. 1802-1808. AAAI Press, 2017.
+Shizhu He, Kang Liu, Guoliang Ji, and Jun Zhao. Learning to represent knowledge graphs with gaussian embedding. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pp. 623-632. ACM, 2015.
+Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. Knowledge graph completion with adaptive sparse transfer matrix. In Thirtieth AAAI Conference on Artificial Intelligence, 2016.
+
+Seyed Mehran Kazemi and David Poole. Simple embedding for link prediction in knowledge graphs. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 4284-4295. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7682-simple-embedding-for-link-prediction-in-knowledge-graphs.pdf.
+Bhushan Kotnis and Vivi Nastase. Analysis of the impact of negative sampling on link prediction in knowledge graphs. arXiv preprint arXiv:1708.06816, 2017.
+Denis Krompaß and Volker Tresp. Ensemble solutions for link-prediction in knowledge graphs. In 2nd Workshop, 2015.
+Volodymyr Kuleshov, Nathan Fenner, and Stefano Ermon. Accurate uncertainties for deep learning using calibrated regression. arXiv preprint arXiv:1807.00263, 2018.
+Meelis Kull, Telmo M Silva Filho, Peter Flach, et al. Beyond sigmoids: How to obtain well-calibrated probabilities from binary classifiers with beta calibration. Electronic Journal of Statistics, 11(2):5052-5080, 2017.
+Timothee Lacroix, Nicolas Usunier, and Guillaume Obozinski. Canonical tensor decomposition for knowledge base completion. In International Conference on Machine Learning, pp. 2869-2878, 2018.
+Hanxiao Liu, Yuexin Wu, and Yiming Yang. Analogical inference for multi-relational embeddings. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2168-2178. JMLR.org, 2017.
+Xin Lv, Lei Hou, Juanzi Li, and Zhiyuan Liu. Differentiating concepts and instances for knowledge graph embedding. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 1971-1979, 2018.
+Farzaneh Mahdisoltani, Joanna Biega, and Fabian M Suchanek. Yago3: A knowledge base from multilingual wikipedias. 2013.
+George A Miller. Wordnet: a lexical database for english. Communications of the ACM, 38(11): 39-41, 1995.
+Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
+Deepak Nathani, Jatin Chauhan, Charu Sharma, and Manohar Kaul. Learning attention-based embeddings for relation prediction in knowledge graphs. arXiv preprint arXiv:1906.01195, 2019.
+Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. A novel embedding model for knowledge base completion based on convolutional neural network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pp. 327-333, 2018.
+Dai Quoc Nguyen, Thanh Vu, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. A capsule network-based embedding model for knowledge graph completion and search personalization. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2180-2189, 2019.
+Maximilian Nickel, Volker Tresp, and Hans-Peter Kriegel. A three-way model for collective learning on multi-relational data. In ICML, volume 11, pp. 809-816, 2011.
+Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A review of relational machine learning for knowledge graphs. Procs of the IEEE, 104(1):11-33, 2016a.
+Maximilian Nickel, Lorenzo Rosasco, Tomaso A Poggio, et al. Holographic embeddings of knowledge graphs. In AAAI, pp. 1955-1961, 2016b.
+
+Alexandru Niculescu-Mizil and Rich Caruana. Predicting good probabilities with supervised learning. In Proceedings of the 22nd international conference on Machine learning, pp. 625-632. ACM, 2005.
+John Platt et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers, 10(3):61-74, 1999.
+Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems (NIPS), 2013.
+Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. Rotate: Knowledge graph embedding by relational rotation in complex space. arXiv preprint arXiv:1902.10197, 2019.
+Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 1499-1509, 2015.
+Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex embeddings for simple link prediction. In Procs of ICML, pp. 2071-2080, 2016.
+Bishan Yang, Scott Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. In Procs of ICLR, 2015.
+Bianca Zadrozny and Charles Elkan. Transforming classifier scores into accurate multiclass probability estimates. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 694-699. ACM, 2002.
+Shuai Zhang, Yi Tay, Lina Yao, and Qi Liu. Quaternion knowledge graph embedding. arXiv preprint arXiv:1904.10281, 2019.
+Zhao Zhang, Fuzhen Zhuang, Meng Qu, Fen Lin, and Qing He. Knowledge graph embedding with hierarchical relation structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3198-3207, 2018.
+
+# A APPENDIX
+
+# A.1 CALIBRATION METRICS
+
+Reliability Diagram (DeGroot & Fienberg, 1983; Niculescu-Mizil & Caruana, 2005). Also known as a calibration plot, this diagram is a visual depiction of the calibration of a model (see Figure 1 for an example). It shows the expected sample accuracy as a function of the estimated confidence. A hypothetical perfectly calibrated model is represented by the diagonal line (i.e. the identity function). Divergence from this diagonal indicates calibration issues (Guo et al., 2017).
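The two axes of such a diagram are straightforward to compute: bin predictions by confidence, then compare mean confidence with empirical accuracy in each bin. A minimal sketch (binning scheme and names are our own):

```python
import numpy as np

def reliability_curve(p_hat, y, n_bins=10):
    """Mean confidence vs. empirical accuracy per confidence bin: the two
    axes of a reliability diagram. Bins without samples are skipped."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    conf, acc = [], []
    for i in range(n_bins):
        lo, hi = edges[i], edges[i + 1]
        # Last bin is closed on the right so that p_hat = 1.0 is counted.
        mask = (p_hat >= lo) & ((p_hat < hi) if i < n_bins - 1 else (p_hat <= hi))
        if mask.any():
            conf.append(float(p_hat[mask].mean()))
            acc.append(float(y[mask].mean()))
    return conf, acc

p_hat = np.array([0.05, 0.08, 0.93, 0.96])
y = np.array([0, 0, 1, 1])
conf, acc = reliability_curve(p_hat, y)
```

Plotting `acc` against `conf` and comparing with the diagonal gives the calibration plot described above.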
+
+Brier Score (Brier, 1950). A popular metric used to measure how well a binary classifier is calibrated. It is defined as the mean squared error between $n$ probability estimates $\hat{p}$ and the corresponding actual outcomes $y \in \{0,1\}$. The smaller the Brier score, the better calibrated the model. Note that the Brier score $B \in [0,1]$.
+
+$$
+B = \frac{1}{n} \sum_{i=1}^{n} \left(y_{i} - \hat{p}_{i}\right)^{2} \tag{6}
+$$
+
+Log Loss is another effective and popular metric to measure the reliability of the probabilities returned by a classifier. The logarithmic loss measures the uncertainty of the probability estimates produced by the model relative to the corresponding true labels.
+
+$$
+L_{log} = -\left(y \log(\hat{p}) + (1 - y) \log(1 - \hat{p})\right) \tag{7}
+$$
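Both metrics are a few lines of NumPy; the toy probabilities and labels below are illustrative, and Eq. (7) is averaged over samples as is standard:

```python
import numpy as np

def brier_score(p_hat, y):
    # Eq. (6): mean squared error between estimates and binary outcomes.
    return float(np.mean((y - p_hat) ** 2))

def log_loss(p_hat, y, eps=1e-15):
    # Eq. (7), averaged over samples; clipping avoids log(0).
    p = np.clip(p_hat, eps, 1 - eps)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

y = np.array([1.0, 0.0, 1.0, 0.0])
p_hat = np.array([0.9, 0.1, 0.8, 0.3])
```

Note the different behavior near the extremes: log loss penalizes a confident wrong prediction much more harshly than the Brier score does.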
+
+Platt Scaling. Proposed by Platt et al. (1999) for support vector machines, Platt scaling is a popular parametric calibration technique for binary classifiers. The method consists in fitting a logistic regression model to the scores returned by a binary classifier, such that $\hat{q} = \sigma(a\hat{p} + b)$, where $\hat{p} \in \mathbb{R}$ is the uncalibrated score of the classifier, $a, b \in \mathbb{R}$ are trained scalar weights, and $\hat{q}$ is the calibrated probability returned as output. Such a model can be trained by optimizing the NLL loss with non-binary targets derived by the Bayes rule under an uninformative prior, resulting in a maximum a posteriori estimate.
+
+Isotonic Regression (Zadrozny & Elkan, 2002). This popular non-parametric calibration technique consists in fitting a non-decreasing piecewise-constant function to the output of an uncalibrated classifier. As with Platt scaling, the goal is to learn a function $\hat{q} = g(\hat{p})$ such that $\hat{q}$ is a calibrated probability. Isotonic regression learns $g$ by minimizing the square loss $\sum_{i=1}^{n} (\hat{q}_i - y_i)^2$ under the constraint that $g$ must be piecewise constant (Guo et al., 2017).
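The standard solver for this constrained least-squares problem is the pool-adjacent-violators (PAV) algorithm; the compact sketch below is our own illustration, not the implementation used in the experiments:

```python
def pav(y):
    """Pool Adjacent Violators: the non-decreasing sequence closest to y in
    squared error. Applied to binary labels sorted by increasing classifier
    score, the block means are the isotonic calibrated probabilities."""
    stack = []  # entries: [block_mean, block_size]
    for v in y:
        stack.append([float(v), 1])
        # Merge adjacent blocks while they violate monotonicity.
        while len(stack) > 1 and stack[-2][0] > stack[-1][0]:
            m2, n2 = stack.pop()
            m1, n1 = stack.pop()
            stack.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    fitted = []
    for m, n in stack:
        fitted.extend([m] * n)
    return fitted

# Labels of validation triples, sorted by increasing uncalibrated score.
calibrated = pav([0, 1, 0, 1, 1])
```

The violating pair (1 followed by 0) is pooled into a block with mean 0.5, yielding a non-decreasing step function, which is exactly the piecewise-constant $g$ described above.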
+
+# A.2 CALIBRATION DIAGRAMS: INSTANCES PER BIN
+
+We present in Figure 4 the total count of instances for each bin used in the calibration plots included in Figure 2. As expected, calibration considerably helps spread instances across bins, whereas in uncalibrated scenarios instances are squeezed into the first or last bins.
+
+# A.3 IMPACT OF MODEL HYPERPARAMETERS: $\eta$ AND EMBEDDING DIMENSIONALITY
+
+In Figure 5 we report the impact of the negative/positive ratio $\eta$ and the embedding dimensionality $k$. Results show that the embedding size $k$ has a higher impact than the negative/positive ratio $\eta$. We observe that calibrated and uncalibrated low-dimensional embeddings have worse Brier scores. Results also show that $k > 50$ brings no further calibration improvement. The negative/positive ratio $\eta$ follows a similar pattern: choosing $\eta > 10$ has no effect on the calibration score.
+
+# A.4 POSITIVE BASE RATE EXPERIMENTS: LINK PREDICTION PERFORMANCE
+
+In Table 5, we present the traditional knowledge graph embedding ranking metrics: MRR (mean reciprocal rank), MR (mean rank) and Hits@10 (fraction of true triples ranked in the top 10). We report the results for all datasets and models used in the main text, which appear in Table 2, Table 4 and Figure 3.
+
+Figure 4: Histograms showing the total count of instances for each bin used by the calibration plots presented in Figure 2. Best viewed in color.
+
+Figure 5: Impact of $\eta$ (negative/positive ratio) and $k$ (embedding size) on the Brier score. We used TransE and the self-adversarial loss for all datasets. Best viewed in color.
+
+
+
+| Dataset | Model | MR | MRR | Hits@10 |
|---|---|---|---|---|
| WN11 | TransE | 2289 | .155 | .309 |
| WN11 | DistMult | 10000 | .045 | .081 |
| WN11 | ComplEx | 13815 | .054 | .094 |
| WN11 | HolE | 13355 | .017 | .035 |
| FB13 | TransE | 3431 | .296 | .394 |
| FB13 | DistMult | 6667 | .183 | .337 |
| FB13 | ComplEx | 8937 | .018 | .039 |
| FB13 | HolE | 8937 | .018 | .039 |
| YAGO39K | TransE | 244 | .169 | .319 |
| YAGO39K | DistMult | 635 | .306 | .620 |
| YAGO39K | ComplEx | 1074 | .531 | .753 |
| YAGO39K | HolE | 922 | .101 | .189 |
| WN18RR | ComplEx | 4111 | .506 | .583 |
| FB15K-237 | ComplEx | 183 | .320 | .499 |
+
+Table 5: Standard filtered metrics for knowledge graph embedding models. The models are implemented in the same codebase and share the same evaluation protocol. Note that we do not include results from reciprocal evaluation protocols.
+
+# A.5 PER-RELATION DECISION THRESHOLDS
+
+We report in Table 6 the per-relation decision thresholds $\tau$ used in Table 4, under the 'Reproduced' column. Note that the thresholds reported here are not probabilities, as they have been applied to the raw scores returned by the model-dependent scoring function $f_{m}(t)$ .
+
+| Relation | τ |
|---|---|
| _domain_region | -6.0069733 |
| _domainTopic | -5.5207396 |
| _has_instance | -6.2901406 |
| _has_part | -5.673306 |
| _member_holonym | -6.3117476 |
| _member_meronym | -5.982978 |
| _part_of | -5.798244 |
| _similar_to | -6.852225 |
| _subordinate_instance_of | -5.4750223 |
| _synset_domainTopic | -6.6392403 |
| _type_of | -6.743014 |
+
+WN11
+
+| Relation | τ |
|---|---|
| cause_of_death | -3.5680597 |
| ethnicity | -3.4997067 |
| gender | -3.4051323 |
| institution | -3.547462 |
| nationality | -3.8507419 |
| profession | -3.7040129 |
| religion | -3.5918012 |
+
+FB13
+
+| Relation | τ | Relation | τ |
|---|---|---|---|
| 0 | -3.9869666 | 16 | -1.8443029 |
| 1 | -3.6161883 | 17 | -3.4323683 |
| 2 | -2.9660778 | 18 | -1.6325312 |
| 3 | -2.9241138 | 19 | -4.2211304 |
| 4 | -3.8640308 | 20 | -4.101904 |
| 5 | -3.685308 | 21 | -3.840962 |
| 6 | -2.861393 | 22 | -1.832546 |
| 7 | -3.3280334 | 23 | -2.0101485 |
| 8 | -3.0741293 | 24 | -3.1512089 |
| 9 | -3.1950998 | 25 | -2.4524217 |
| 10 | -2.951118 | 27 | -3.4848583 |
| 11 | -1.8720441 | 29 | -2.4269128 |
| 12 | -2.4230814 | 31 | -2.209188 |
| 13 | -1.542841 | 32 | -1.3310984 |
| 14 | -2.6944544 | 33 | -2.3231838 |
| 15 | -3.381497 | 35 | -2.0017974 |
| 36 | -1.3954651 | | |
+
+YAGO39K
+
+Table 6: Relation-specific decision thresholds learned on uncalibrated raw scores (see also Table 4 for triple classification results).
\ No newline at end of file
diff --git a/probabilitycalibrationforknowledgegraphembeddingmodels/images.zip b/probabilitycalibrationforknowledgegraphembeddingmodels/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..223b0d7a975d5ec0238b7163d8b8ef9fb9e38554
--- /dev/null
+++ b/probabilitycalibrationforknowledgegraphembeddingmodels/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6fd0fe61c839a330a09bc94e79c145c504540a0c43aba51d12b3774c70a5f2b7
+size 829570
diff --git a/probabilitycalibrationforknowledgegraphembeddingmodels/layout.json b/probabilitycalibrationforknowledgegraphembeddingmodels/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..582127dc4b6e4ffa3bbe2d3d3b80a8b1a2e12987
--- /dev/null
+++ b/probabilitycalibrationforknowledgegraphembeddingmodels/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b58176f4880e34241bccc16184af25646ef229550eaa606c9b6b916eb4e2e785
+size 450578
diff --git a/progressivememorybanksforincrementaldomainadaptation/01631af8-9674-4ac7-8a86-f4ddfe5d317b_content_list.json b/progressivememorybanksforincrementaldomainadaptation/01631af8-9674-4ac7-8a86-f4ddfe5d317b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b9d3e00ed5e8aed115a3384b0b8bdc675743ff87
--- /dev/null
+++ b/progressivememorybanksforincrementaldomainadaptation/01631af8-9674-4ac7-8a86-f4ddfe5d317b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15d8f80663abce8ecc92bf86cb7e8eee149b3079f9e15742408e20c5607249f0
+size 98688
diff --git a/progressivememorybanksforincrementaldomainadaptation/01631af8-9674-4ac7-8a86-f4ddfe5d317b_model.json b/progressivememorybanksforincrementaldomainadaptation/01631af8-9674-4ac7-8a86-f4ddfe5d317b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1f91061b68013393fecefb7f5720b6a81bed6694
--- /dev/null
+++ b/progressivememorybanksforincrementaldomainadaptation/01631af8-9674-4ac7-8a86-f4ddfe5d317b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:80312d054b1d2236da8e2f33c3f962d40797de91715d1af4d264767eeaa5bbc0
+size 118877
diff --git a/progressivememorybanksforincrementaldomainadaptation/01631af8-9674-4ac7-8a86-f4ddfe5d317b_origin.pdf b/progressivememorybanksforincrementaldomainadaptation/01631af8-9674-4ac7-8a86-f4ddfe5d317b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9ccac42df9a9a62e75b12afe6742716c0c07e296
--- /dev/null
+++ b/progressivememorybanksforincrementaldomainadaptation/01631af8-9674-4ac7-8a86-f4ddfe5d317b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e437a57860cf8ba973362326bf13a1b38677471bf5f90e26485a8d196c7a2250
+size 406023
diff --git a/progressivememorybanksforincrementaldomainadaptation/full.md b/progressivememorybanksforincrementaldomainadaptation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..8afd731ed38f547b23d58eb2aac2714886b53bc0
--- /dev/null
+++ b/progressivememorybanksforincrementaldomainadaptation/full.md
@@ -0,0 +1,365 @@
+# PROGRESSIVE MEMORY BANKS FOR INCREMENTAL DOMAIN ADAPTATION
+
+Nabiha Asghar$^{\dagger\circ}$, Lili Mou$^{\ddagger}$, Kira A. Selby$^{\dagger\circ}$, Kevin D. Pantasdo$^{\circ}$, Pascal Poupart$^{\dagger\circ}$, Xin Jiang$^{\diamond}$
+
+†Vector Institute for AI, Toronto, Canada
+$^{\circ}$ Cheriton School of Computer Science, University of Waterloo, Canada
+{nasghar, kaselby, kevin.pantasdo, ppoupart}@uwaterloo.ca
+$^{\ddagger}$ Dept. Computing Science, University of Alberta; Alberta Machine Intelligence Institute (AMII) doublepower.mou@gmail.com
+$\diamond$ Noah's Ark Lab, Huawei Technologies, Hong Kong
+
+jiang.xin@huawei.com
+
+# ABSTRACT
+
+This paper addresses the problem of incremental domain adaptation (IDA) in natural language processing (NLP). We assume each domain comes one after another, and that we can only access data in the current domain. The goal of IDA is to build a unified model performing well on all the domains that we have encountered. We adopt the recurrent neural network (RNN) widely used in NLP, but augment it with a directly parameterized memory bank, which is retrieved by an attention mechanism at each step of RNN transition. The memory bank provides a natural way of performing IDA: when adapting our model to a new domain, we progressively add new slots to the memory bank, which increases the number of parameters, and thus the model capacity. We learn the new memory slots and fine-tune existing parameters by back-propagation. Experimental results show that our approach achieves significantly better performance than fine-tuning alone. Compared with expanding hidden states, our approach is more robust for old domains, as shown by both empirical and theoretical results. Our model also outperforms previous IDA methods, including elastic weight consolidation and progressive neural networks, in the experiments.1
+
+# 1 INTRODUCTION
+
+Domain adaptation aims to transfer knowledge from one domain (called the source domain) to another (called the target domain) in a machine learning system. If the data of the target domain are not large enough, using data from the source domain typically helps to improve model performance in the target domain. This is important for neural networks, which are data-hungry and prone to overfitting. In this paper, we especially focus on incremental domain adaptation (IDA), where we assume different domains come sequentially one after another. We only have access to the data in the current domain, but hope to build a unified model that performs well on all the domains that we have encountered (Xu et al., 2014; Rusu et al., 2016; Kirkpatrick et al., 2017).
+
+Incremental domain adaptation is useful in scenarios where data are proprietary, or available only for a short period of time (Li & Hoiem, 2018). It is desirable to preserve as much knowledge as possible in the learned model and not to rely on the availability of the data. Another application of IDA is quick adaptation to new domains. If the environment of a deployed machine learning system changes frequently, traditional methods like jointly training all domains require the learning machine to be re-trained from scratch every time a new domain comes. Fine-tuning a neural network by a
+
+few steps of gradient updates does transfer quickly, but it suffers from the catastrophic forgetting problem (Kirkpatrick et al., 2017). If, during prediction, a data point is not labeled with its domain, the (single) fine-tuned model cannot predict well for samples from previous domains, as it tends to "forget" them quickly during fine-tuning.
+
+A recent trend of domain adaptation in the deep learning regime is the progressive neural network (Rusu et al., 2016), which progressively grows the network capacity when a new domain arrives. Typically, this is done by enlarging the model with new hidden states and a new predictor (Figure 1a). To avoid interfering with existing knowledge, the newly added hidden states are not fed back to the previously trained states. During training, all existing parameters are frozen, and only the newly added ones are trained. For inference, the new predictor is used for all domains, which is sometimes undesired as the new predictor is trained with only the last domain.
+
+In this paper, we propose a progressive memory bank for incremental domain adaptation in natural language processing (NLP). Our model augments a recurrent neural network (RNN) with a memory bank, which is a set of distributed, real-valued vectors capturing domain knowledge. The memory is retrieved by an attention mechanism during RNN information processing. When our model is adapted to new domains, we progressively increase the number of slots in the memory bank. But different from Rusu et al. (2016), we fine-tune all the parameters, including the RNN and the previous memory bank. Empirically, when the model capacity increases, the RNN does not forget much even if the entire network is fine-tuned. Compared with expanding RNN hidden states, the newly added memory slots cause less contamination of the existing knowledge in RNN states, as will be shown by a theorem.
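A minimal sketch of the mechanism (NumPy; the tanh cell, dot-product attention keyed by the hidden state, and the initialization scale are illustrative assumptions, not the paper's exact architecture):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def rnn_step_with_memory(x, h, M, Wx, Wh, Wc):
    """One RNN transition augmented with an attention read over the memory
    bank M (k slots x d dims)."""
    attn = softmax(M @ h)      # attend over slots, keyed by the hidden state
    c = attn @ M               # retrieved memory vector
    return np.tanh(Wx @ x + Wh @ h + Wc @ c)

def expand_memory(M, n_new, rng, scale=0.1):
    """Progressive IDA: append freshly initialized slots for a new domain.
    Unlike progressive networks, the old slots stay trainable afterwards."""
    new_slots = scale * rng.standard_normal((n_new, M.shape[1]))
    return np.vstack([M, new_slots])

rng = np.random.default_rng(0)
d = 4
M = 0.1 * rng.standard_normal((3, d))        # memory for the first domain
W = [0.1 * rng.standard_normal((d, d)) for _ in range(3)]
h1 = rnn_step_with_memory(np.ones(d), np.zeros(d), M, *W)
M2 = expand_memory(M, 2, rng)                # a second domain arrives
```

Growing `M` adds parameters without touching the RNN weight shapes, which is why expanding the memory disrupts the trained recurrent dynamics less than expanding the hidden state would.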
+
+In our paper, we evaluate the proposed approach on a classification task known as multi-genre natural language inference (MultiNLI). Appendix C provides additional evidence when our approach is applied to text generation. Experiments consistently support our hypothesis that the proposed approach adapts well to target domains without catastrophic forgetting of the source. Our model outperforms the naïve fine-tuning method, the original progressive neural network, as well as other IDA techniques including elastic weight consolidation (EWC, Kirkpatrick et al., 2017).
+
+# 2 RELATED WORK
+
+Domain Adaptation. Domain adaptation has been widely studied in machine learning, including the NLP domain. For neural NLP applications, Mou et al. (2016) analyze two straightforward settings, namely, multi-task learning (jointly training all domains) and fine-tuning (training one domain and fine-tuning on the other). A recent advance of domain adaptation is adversarial learning, where the neural features are trained not to classify the domain (Ganin et al., 2016; Liu et al., 2017). However, all these approaches (except fine-tuning) require all domains to be available simultaneously, and thus are not IDA approaches.
+
+Kirkpatrick et al. (2017) address the catastrophic forgetting problem of fine-tuning neural networks, and propose a regularization term based on the Fisher information matrix; they call the method elastic weight consolidation (EWC). While some follow-up studies report EWC achieves high performance in their scenarios (Zenke et al., 2017; Lee et al., 2017; Thompson et al., 2019), others show that EWC is less effective (Wen & Itti, 2018; Yoon et al., 2018; Wu et al., 2018). Lee et al. (2017) propose incremental moment matching between the posteriors of the old model and the new model, achieving similar performance to EWC. Schwarz et al. (2018) augment EWC with knowledge distillation, making it more memory-efficient.
+
+Rusu et al. (2016) propose a progressive neural network that progressively increases the number of hidden states (Figure 1a). To avoid overriding existing information, they freeze the weights of the learned network and do not feed new states to old ones. This results in multiple predictors, requiring that a data sample be labeled with its domain at test time. If we instead use the last predictor for samples from all domains, its performance may be low on previous domains, as the predictor is only trained with the last domain.
+
+Yoon et al. (2018) propose an extension of the progressive network. They identify which existing hidden units are relevant for the new task (with their sparse penalty), and fine-tune only the corresponding subnetwork. However, sparsity is not common for RNNs in NLP applications, as sparse recurrent connections are harmful. A similar phenomenon is that dropout of recurrent connections
+
+
+Figure 1: (a) Progressive neural network (Rusu et al., 2016). (b) One step of RNN transition in our progressive memory network. Colors indicate different domains.
+
+yields poor performance (Bayer et al., 2013). Xu & Zhu (2018) deal with new domains by adaptively adding nodes to the network via reinforcement learning. This approach may require a very large number of trials to identify the right number of nodes to be added to each layer (Yoon et al., 2019).
+
+Li & Hoiem (2018) address IDA with a knowledge distillation approach, where they preserve a set of outputs of the old network on pseudo-training data. Then they jointly optimize for the accuracy on the new training domain as well as the pseudo-training data. Kim et al. (2019)'s variant of this approach uses maximum-entropy regularization to control the transfer of distilled knowledge. However, in NLP applications, it is non-trivial to obtain pseudo-training data for distillation.
+
+Memory-Based Neural Networks. Our work is related to memory-based neural networks. Sukhbaatar et al. (2015) propose an end-to-end memory network that assigns each memory slot to an entity and aggregates information by multiple attention-based layers. They design their architecture for the bAbI question answering task, assigning a slot to each sentence. This idea can be extended to various scenarios, for example, assigning slots to external knowledge for question answering (Das et al., 2017) or to dialogue history for a conversation system (Madotto et al., 2018).
+
+A related idea is to use episodic memory, which stores data samples from all previously seen domains (thus it is not an IDA approach). This is used for experience replay while training on subsequent domains (Lopez-Paz & Ranzato, 2017; Rebuffi et al., 2017; Chaudhry et al., 2018; d'Autume et al., 2019).
+
+Another type of memory in the neural network regime is the neural Turing machine (NTM, Graves et al., 2016). This memory is not directly parameterized, but is read or written by a neural controller. Therefore, it serves as temporary scratch paper and does not store knowledge itself. Zhang et al. (2018b) combine the above two styles of memory for task-oriented dialogue systems, where they have both slot-value memory and read-and-write memory.
+
+Different from the work above, our memory bank stores knowledge in a distributed fashion, where each slot does not correspond to a concrete entity or data sample. Our memory is directly parameterized, interacting in a different way from RNN weights and providing a natural way of incremental domain adaptation.
+
+# 3 PROPOSED APPROACH
+
+Our model is based on a recurrent neural network (RNN). At each step, the RNN takes the embedding of the current input (e.g., a word) and updates its state accordingly. This is represented by $\pmb{h}_i = \mathrm{RNN}(\pmb{h}_{i-1}, \pmb{x}_i)$, where $\pmb{h}_i$ and $\pmb{h}_{i-1}$ are the hidden states at time steps $i$ and $i-1$, respectively, and $\pmb{x}_i$ is the input at the $i$th step. Typically, long short-term memory (LSTM, Hochreiter & Schmidhuber, 1997) or gated recurrent units (GRU, Cho et al., 2014) are used as RNN transitions. In the rest of this section, we describe a memory-augmented RNN and how it is used for incremental domain adaptation (IDA).
+
+# 3.1 AUGMENTING RNN WITH MEMORY BANKS
+
+We enhance the RNN with an external memory bank, as shown in Figure 1b. The memory bank augments the overall model capacity by storing additional parameters in memory slots. At each time step, our model computes an attention probability to retrieve memory content, which is then fed to the computation of RNN transition.
+
+Particularly, we adopt a key-value memory bank, inspired by Miller et al. (2016). Each memory slot contains a key vector and a value vector. The former is used to compute the attention weight for memory retrieval, whereas the latter is the value of memory content.
+
+For the $i$ th step, the memory mechanism computes an attention probability $\alpha_{i}$ by
+
+$$
+\widetilde{\alpha}_{i,j} = \exp\left\{\boldsymbol{h}_{i-1}^{\top}\,\boldsymbol{m}_{j}^{(\mathrm{key})}\right\}, \quad \alpha_{i,j} = \frac{\widetilde{\alpha}_{i,j}}{\sum_{j'=1}^{N}\widetilde{\alpha}_{i,j'}} \tag{1}
+$$
+
+where $m_{j}^{(\mathrm{key})}$ is the key vector of the $j$ th slot of the memory (among $N$ slots in total). Then the model retrieves memory content by a weighted sum of all memory values, where the weight is the attention probability, given by
+
+$$
+\boldsymbol{c}_{i} = \sum_{j=1}^{N}\alpha_{i,j}\,\boldsymbol{m}_{j}^{(\mathrm{val})} \tag{2}
+$$
+
+Here, $m_j^{(\mathrm{val})}$ is the value vector of the $j$ th memory slot. We call $c_i$ the memory content. Then, $c_i$ is concatenated with the current word $x_i$ , and fed to the RNN as input for state transition.
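The retrieval step in Equations (1)-(2) can be sketched in a few lines of plain Python (function and variable names are ours, and the toy vectors stand in for trained parameters):

```python
import math
import random

def retrieve(h_prev, keys, values):
    """Key-value memory retrieval: softmax over key scores (Eq. 1),
    then an attention-weighted sum of the value vectors (Eq. 2)."""
    scores = [math.exp(sum(h * k for h, k in zip(h_prev, key))) for key in keys]
    z = sum(scores)
    alphas = [s / z for s in scores]          # attention probabilities
    dim = len(values[0])
    c = [sum(a * v[t] for a, v in zip(alphas, values)) for t in range(dim)]
    return alphas, c

rng = random.Random(0)
h_prev = [0.1, -0.2]                                                 # previous RNN state
keys   = [[rng.gauss(0, 1) for _ in range(2)] for _ in range(3)]     # N = 3 slots
values = [[rng.gauss(0, 1) for _ in range(2)] for _ in range(3)]
alphas, c = retrieve(h_prev, keys, values)

x_i = [0.5, 0.3]          # current word embedding
rnn_input = x_i + c       # [x_i; c_i] is fed to the RNN transition
assert abs(sum(alphas) - 1.0) < 1e-9
```

In practice one would subtract the maximum score before exponentiating for numerical stability; we omit this for brevity.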
+
+Using the key-value memory bank allows separate (thus more flexible) computation of memory retrieval weights and memory content, compared with traditional attention where a candidate vector is used to compute both attention probability and attention content.
+
+It should be emphasized that the memory bank in our model captures distributed knowledge, which is different from other work where the memory slots correspond to specific entities (Eric et al., 2017). The attention mechanism accomplishes memory retrieval in a "soft" manner, which means the retrieval strength is a real-valued probability. This enables us to train both memory content and its retrieval end-to-end, along with other neural parameters.
+
+We would also like to point out that the memory bank alone does not help the RNN much. However, it is natural to use a memory-augmented RNN for incremental domain adaptation, as described below.
+
+# 3.2 PROGRESSIVELY INCREASING MEMORY FOR INCREMENTAL DOMAIN ADAPTATION
+
+The memory bank in Subsection 3.1 can be progressively expanded to adapt a model in a source domain to new domains. This is done by adding new memory slots to the bank which are learned exclusively from the target data.
+
+Suppose the memory bank is expanded with another $M$ slots in a new domain, in addition to the previous $N$ slots; we then have $N + M$ slots in total. The model computes the attention probability over the expanded memory and obtains the attention vector in the same way as in Equations (1)-(2), except that the summation runs from 1 to $N + M$. This is given by
+
+$$
+\alpha_{i,j}^{(\mathrm{expand})} = \frac{\widetilde{\alpha}_{i,j}}{\sum_{j'=1}^{N+M}\widetilde{\alpha}_{i,j'}}, \quad \boldsymbol{c}_{i}^{(\mathrm{expand})} = \sum_{j=1}^{N+M}\alpha_{i,j}^{(\mathrm{expand})}\,\boldsymbol{m}_{j}^{(\mathrm{val})} \tag{3}
+$$
+
+To initialize the expanded model, we load all previous parameters, including the RNN weights and the learned $N$ slots, but randomly initialize the progressively expanded $M$ slots. During training, we update all parameters by gradient descent. That is to say, new parameters are learned from their random initializations, whereas old parameters are fine-tuned during IDA. This process is applied whenever a new domain arrives, as shown in Algorithm 1.
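To make the expansion step concrete, here is a minimal pure-Python sketch (the function name, the nested-list slot representation, and the 0.1 initialization scale are our own assumptions, not the paper's): loading the model amounts to keeping the learned $N$ slots verbatim and appending $M$ freshly initialized ones; Equation (3) then simply renormalizes attention over all $N + M$ slots.

```python
import copy
import random

def expand_memory(keys, values, num_new, dim, rng):
    """Progressive expansion: append `num_new` randomly initialized slots
    while keeping the learned slots untouched; all parameters (old and new)
    are then fine-tuned on the incoming domain."""
    for _ in range(num_new):
        keys.append([rng.gauss(0.0, 0.1) for _ in range(dim)])
        values.append([rng.gauss(0.0, 0.1) for _ in range(dim)])
    return keys, values

rng = random.Random(0)
N, M, dim = 3, 2, 4
keys = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(N)]
values = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(N)]
old_keys = copy.deepcopy(keys)

keys, values = expand_memory(keys, values, M, dim, rng)
# the learned N slots are loaded unchanged; only the new M slots are fresh
assert len(keys) == N + M and keys[:N] == old_keys
```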
+
+We would like to discuss the following issues.
+
+Freezing vs. Fine-tuning learned parameters. Inspired by the progressive neural network (Rusu et al., 2016), we find it tempting to freeze RNN parameters and the learned memory but only tune new memory for IDA. However, our preliminary results show that if we freeze all existing parameters, the increased memory does not add much to the model capacity, and that its performance is worse than fine-tuning all parameters.
+
+Fine-tuning vs. Fine-tuning while increasing memory slots. It is reported that fine-tuning a model (without increasing model capacity) suffers from the problem of catastrophic forgetting (Kirkpatrick et al., 2017). We wish to investigate whether our approach suffers from the same problem, since we fine-tune learned parameters when progressively increasing memory slots. Our intuition is that the increased model capacity helps to learn the new domain with less overriding of the previously learned model. Experiments confirm our conjecture, as the memory-augmented RNN tends to forget more if the memory size is not increased.
+
+Expanding hidden states vs. Expanding memory. Another way of progressively increasing model capacity is to expand the size of RNN layers. This setting is similar to the progressive neural network, except that all weights are fine-tuned and new states are connected to existing states.
+
+However, we hereby show a theorem, indicating that the expanded memory results in less contamination/overriding of the learned knowledge in the RNN, compared with the expanded hidden states. The main idea is to measure the effect of model expansion quantitatively by the expected square difference on $h_i$ before and after expansion, where the expectation reflects the average effect of model expansion in different scenarios.
+
+Theorem 1. Let RNN have vanilla transition with the linear activation function, and let the RNN state at the last step $\pmb{h}_{i-1}$ be fixed. For a particular data point, if the memory attention satisfies $\sum_{j=N+1}^{N+M} \widetilde{\alpha}_{i,j} \leq \sum_{j=1}^{N} \widetilde{\alpha}_{i,j}$ , then memory expansion yields a lower expected mean squared difference in $\pmb{h}_i$ than RNN state expansion. That is,
+
+$$
+\mathbb{E}\left[\left\|\boldsymbol{h}_{i}^{(\mathrm{m})} - \boldsymbol{h}_{i}\right\|^{2}\right] \leq \mathbb{E}\left[\left\|\boldsymbol{h}_{i}^{(\mathrm{s})} - \boldsymbol{h}_{i}\right\|^{2}\right] \tag{4}
+$$
+
+where $\pmb{h}_i^{(\mathrm{m})}$ refers to the hidden states if the memory is expanded. $\pmb{h}_i^{(\mathrm{s})}$ refers to the original dimensions of the RNN states, if we expand the size of RNN states themselves. Here, we compute the expectation by assuming weights and hidden states are iid from a zero-mean Gaussian distribution (with variance $\sigma^2$ ).
+
+Proof Sketch: We focus on one step of RNN transition and assume that $h_{i-1}$ is the same when the model capacity is increased. Further, we assume that $h_i$ is $D$ -dimensional, that each memory slot $m_j$ is $d$ -dimensional, and that the additional RNN units (when we expand the hidden state) are also $d$ -dimensional.
+
+We compute how the original dimensions in the hidden state are changed if we expand RNN. We denote the expanded hidden states by $\widetilde{h}_{i-1}$ and $\widetilde{h}_i$ for the two time steps. We denote the weights connecting from $\widetilde{h}_{i-1}$ to $h_i$ by $\widetilde{W} \in \mathbb{R}^{D \times d}$ . We focus on the original $D$ -dimensional space, denoted as $h_i^{(s)}$ . The connection is shown in Figure 2a, Appendix A. We have
+
+$$
+\begin{aligned}
+\mathbb{E}\left[\|\boldsymbol{h}_{i}^{(\mathrm{s})} - \boldsymbol{h}_{i}\|^{2}\right] &= \mathbb{E}\left[\|\widetilde{W}\,\widetilde{\boldsymbol{h}}_{i-1}\|^{2}\right] = \sum_{j=1}^{D}\mathbb{E}\left[\big(\widetilde{\boldsymbol{w}}_{j}^{\top}\widetilde{\boldsymbol{h}}_{i-1}\big)^{2}\right] && (5)\\
+&= \sum_{j=1}^{D}\mathbb{E}\left[\Big(\sum_{k=1}^{d}\widetilde{w}_{jk}\,\widetilde{h}_{i-1}[k]\Big)^{2}\right] = \sum_{j=1}^{D}\sum_{k=1}^{d}\mathbb{E}\left[\widetilde{w}_{jk}^{2}\right]\mathbb{E}\left[\widetilde{h}_{i-1}[k]^{2}\right] && (6)\\
+&= D \cdot d \cdot \operatorname{Var}(w) \cdot \operatorname{Var}(h) = D d \sigma^{2} \sigma^{2} && (7)
+\end{aligned}
+$$
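The expectation in Equations (5)-(7) can be checked numerically. The sketch below draws weights and hidden units i.i.d. from $\mathcal{N}(0, \sigma^2)$ and verifies by Monte Carlo that $\mathbb{E}\big[\|\widetilde{W}\widetilde{\boldsymbol{h}}_{i-1}\|^2\big] \approx D d \sigma^4$ (the dimensions and trial count are arbitrary choices of ours):

```python
import random

# Monte Carlo check of Eqs. (5)-(7): with entries of W (D x d) and h (d-dim)
# drawn i.i.d. from N(0, sigma^2), the expected squared norm of W h is D*d*sigma^4.
random.seed(1)
D, d, sigma, trials = 4, 3, 1.0, 20000
acc = 0.0
for _ in range(trials):
    W = [[random.gauss(0.0, sigma) for _ in range(d)] for _ in range(D)]
    h = [random.gauss(0.0, sigma) for _ in range(d)]
    acc += sum(sum(W[j][k] * h[k] for k in range(d)) ** 2 for j in range(D))
estimate = acc / trials
expected = D * d * sigma ** 4   # = 12.0 for these settings
```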
+
+Table 1: Corpus statistics and the baseline performance (\% accuracy) of our BiLSTM model (without domain adaptation) and results reported in previous work.
+
+| | Fic | Gov | Slate | Tel | Travel |
+| --- | --- | --- | --- | --- | --- |
+| # training samples | 77k | 77k | 77k | 83k | 77k |
+| Our implementation | 65.0 | 66.5 | 56.2 | 64.5 | 62.7 |
+| Yu et al. (2018) | 64.7 | 69.2 | 57.9 | 64.4 | 65.8 |
+
+Table 2: Results on two-domain adaptation. F: Fine-tuning. V: Expanding vocabulary. H: Expanding RNN hidden states. M: Our proposed method of expanding memory. We also compare with previous work: elastic weight consolidation (EWC, Kirkpatrick et al., 2017) and the progressive neural network (Rusu et al., 2016). For the statistical test (compared with Line 8), $\uparrow$, $\downarrow$: $p < 0.05$; $\Uparrow$, $\Downarrow$: $p < 0.01$. The absence of an arrow indicates that the performance difference compared with Line 8 is statistically insignificant ($p \geq 0.05$).
+
+| # Line | Model | Trained on/by | Acc. on S (%) | Acc. on T (%) |
+| --- | --- | --- | --- | --- |
+| 1 | RNN | S | 65.01↓ | 61.23↓ |
+| 2 | | T | 56.46↓ | 66.49↓ |
+| 3 | RNN + Mem | S | 65.41↓ | 60.87↓ |
+| 4 | | T | 56.77↓ | 67.01↓ |
+| 5 | | S+T | 66.02↓ | 70.00 |
+| 6 | RNN + Mem | S→T (F) | 65.62↓ | 69.90↓ |
+| 7 | | S→T (F+M) | 66.23 | 70.21 |
+| 8 | | S→T (F+M+V) | 67.55 | 70.82 |
+| 9 | | S→T (F+H) | 64.09↓ | 68.35↓ |
+| 10 | | S→T (F+H+V) | 63.68↓ | 68.02↓ |
+| 11 | | S→T (EWC) | 66.02↓ | 64.10↓ |
+| 12 | | S→T (Progressive) | 64.47↓ | 68.25↓ |
+
+Similarly,
+
+$$
+\mathbb{E}\left[\left\|\boldsymbol{h}_{i}^{(\mathrm{m})} - \boldsymbol{h}_{i}\right\|^{2}\right] = \mathbb{E}\left[\left\|W_{(\mathrm{c})}\,\Delta\boldsymbol{c}\right\|^{2}\right] = D d \sigma^{2} \operatorname{Var}\left(\Delta c_{k}\right) \tag{8}
+$$
+
+where $\Delta c \stackrel{\mathrm{def}}{=} c' - c$ . The vectors $c$ and $c'$ are the current step's attention content before and after memory expansion, respectively, shown in Figure 2b, Appendix A. (We omit the time step in the notation for simplicity.) $W_{(\mathrm{c})}$ is the weight matrix connecting attention content to RNN states.
+
+To prove the theorem, it remains to show that $\mathrm{Var}(\Delta c_k) \leq \sigma^2$ . We do this by analyzing how attention is computed at each time step, and bounding each attention weight. For details, see the complete proof in Appendix A.
+
+In the theorem, we make the assumption $\sum_{j=N+1}^{N+M} \widetilde{\alpha}_{i,j} \leq \sum_{j=1}^{N} \widetilde{\alpha}_{i,j}$, requiring that the total attention to the existing memory slots be larger than that to the progressively added slots. This is fairly reasonable because: (1) during training, attention is learned to retrieve relevant information from the existing slots, so some $\widetilde{\alpha}_{i,j}$ for $1 \leq j \leq N$ is likely to be larger than that of a randomly initialized slot; and (2) for a new domain, we do not add a huge number of slots, and thus $\sum_{j=N+1}^{N+M} \widetilde{\alpha}_{i,j}$ will not dominate.
+
+Note that our theorem is not intended to provide an explicit optimization/generalization bound for IDA; rather, it shows that expanding the memory is more stable than expanding the hidden states. This is particularly important in the early steps of IDA, as the progressively grown parameters are randomly initialized and are essentially noise. Although our theoretical analysis assumes a restricted setting (i.e., a vanilla RNN transition with linear activation), it provides the key insight that our approach is appropriate for IDA.
+
+# 4 EXPERIMENTS
+
+In this section, we evaluate our approach on an NLP classification task. In particular, we choose the multi-genre natural language inference (MultiNLI), due to its large number of samples in various domains. The task is to determine the relationship between two sentences among target labels: entailment, contradiction, and neutral. In Appendix C, we conduct supplementary experiments on text generation with our memory-augmented RNN for IDA.
+
+Dataset and Setup. The MultiNLI corpus (Williams et al., 2018) is particularly suitable for IDA, as it contains training samples for 5 genres: Slate, Fiction (Fic), Telephone (Tel), Government (Gov), and Travel. In total, we have 390k training samples. The corpus also contains held-out (non-training) labeled data in these domains, which we split into two parts for validation and test.
+
+| Training domains | Fic | Gov | Slate | Tel | Travel |
+| --- | --- | --- | --- | --- | --- |
+| Fic | 65.41 | 58.87 | 55.83 | 61.39 | 57.35 |
+| Fic → Gov | 67.55 | 70.82 | 61.04 | 65.07 | 61.90 |
+| Fic → Gov → Slate | 67.04 | 71.55 | 63.29 | 64.66 | 63.53 |
+| Fic → Gov → Slate → Tel | 68.46 | 71.10 | 63.39 | 71.60 | 61.50 |
+| Fic → Gov → Slate → Tel → Travel | 69.36 | 72.47 | 63.96 | 69.74 | 68.39 |
+
+Table 3: Dynamics of the progressive memory network for IDA with 5 domains. Upper-triangular values in gray are out-of-domain (zero-shot) performance.
+
+| Group | Setting | Fic | Gov | Slate | Tel | Travel |
+| --- | --- | --- | --- | --- | --- | --- |
+| Non-IDA | In-domain training | 65.41↓ | 67.01↓ | 59.30↓ | 67.20↓ | 64.70↓ |
+| | Fic + Gov + Slate + Tel + Travel (multi-task) | 70.60↑ | 73.30 | 63.80 | 69.15 | 67.07↓ |
+| IDA | Fic → Gov → Slate → Tel → Travel (F+V) | 67.24↓ | 70.82↓ | 62.41↓ | 67.62↓ | 68.39 |
+| | Fic → Gov → Slate → Tel → Travel (F+V+M) | *69.36* | *72.47* | *63.96* | *69.74* | *68.39* |
+| | Fic → Gov → Slate → Tel → Travel (EWC) | 67.12↓ | 68.71↓ | 59.90↓ | 66.09↓ | 65.70↓ |
+| | Fic → Gov → Slate → Tel → Travel (Progressive) | 65.22↓ | 67.87↓ | 61.13↓ | 66.96↓ | 67.90 |
+
+Table 4: Comparing our approach with variants and previous work in the multi-domain setting. In this experiment, we use the memory-augmented RNN as the neural architecture. Italics represent the best results in the IDA group. $\uparrow$, $\downarrow$: $p < 0.05$; $\Uparrow$, $\Downarrow$: $p < 0.01$ (compared with F+V+M).
+
+The first row in Table 1 shows the size of the training set in each domain. As seen, the corpus is mostly balanced across domains, although Tel has slightly more examples.
+
+For the base model, we train a bi-directional LSTM (BiLSTM). The details of the network architecture, training, and hyper-parameter tuning are given in Appendix B. We see in Table 1 that we achieve similar performance to Yu et al. (2018). Furthermore, our BiLSTM achieves an accuracy of 68.37 on the official MultiNLI test set, which is better than the 67.51 reported in the original MultiNLI paper (Williams et al., 2018) using a BiLSTM. This shows that our implementation and tuning are fair for the basic BiLSTM, and that our model is ready for the study of IDA.
+
+Transfer between Two Domains. We would like to compare our approach with a large number of baselines and variants. Thus, we randomly choose two domains as a testbed: Fic as the source domain and Gov as the target domain. We show results in Table 2.
+
+First, we analyze the performance of the RNN and the memory-augmented RNN in the non-transfer setting (Lines 1-2 vs. Lines 3-4). As seen, the memory-augmented RNN achieves slightly better but generally similar performance compared with the RNN (both with LSTM units). This shows that, in the non-transfer setting, the memory bank does not help the RNN much; it also confirms that the performance improvements reported later are indeed due to our IDA technique, rather than simply a better neural architecture.
+
+We then apply two straightforward methods of domain adaptation: multi-task learning (Line 5) and fine-tuning (Line 6). Multi-task learning jointly optimizes the source and target objectives, denoted by "S+T." The fine-tuning approach, on the other hand, trains the model on the source first and then fine-tunes it on the target. In our experiments, these two methods perform similarly on the target domain, which is consistent with Mou et al. (2016). On the source domain, fine-tuning performs significantly worse than multi-task learning, as it suffers from the catastrophic forgetting problem. We notice that, in terms of source performance, the fine-tuning approach (Line 6) is slightly better than training on the source domain only (Line 3). This is probably because our domains are more correlated than those of Kirkpatrick et al. (2017), and thus training with more data on the target slightly improves performance on the source. However, fine-tuning still achieves the worst source performance among the domain adaptation approaches (Lines 5-8). Thus, we nevertheless use the terminology "catastrophic forgetting," and our research goal is still to improve IDA performance.
+
+The main results of our approach are Lines 7 and 8. We apply the proposed progressive memory network to IDA and we fine-tune all weights. We see that on both source and target domains,
+
+our approach outperforms the fine-tuning method alone where the memory size is not increased (comparing Lines 7 and 6). This verifies our conjecture that, if the model capacity is increased, the new domain results in less overriding of the learned knowledge in the neural network. Our proposed approach is also "orthogonal" to the expansion of the vocabulary size, where target-specific words are randomly initialized and learned on the target domain. As seen, this combines well with our memory expansion and yields the best performance on both source and target (Line 8).
+
+We now compare an alternative way of increasing model capacity, i.e., expanding hidden states (Lines 9 and 10). For fair comparison, we ensure that the total number of model parameters after memory expansion is equal to the number of model parameters after hidden state expansion. We see that the performance of hidden state expansion is poor especially on the source domain, even if we fine-tune all parameters. This experiment provides empirical evidence to our theorem that expanding memory is more robust than expanding hidden states.
+
+We also compare the results with previous work on IDA. We re-implement EWC (Kirkpatrick et al., 2017), but it does not achieve satisfactory results in our application. We investigate other published papers using the same method and find inconsistent results: EWC works well in some applications (Zenke et al., 2017; Lee et al., 2017) but performs poorly in others (Yoon et al., 2018; Wu et al., 2018); Wen & Itti (2018) even report near-random performance with EWC. We also re-implement the progressive neural network (Rusu et al., 2016), using the target predictor to do inference for both source and target domains. The progressive neural network yields low performance, particularly on the source domain, probably because the predictor is trained with only the target domain.
+
+We measure the statistical significance of the results against Line 8 with the one-tailed Wilcoxon signed-rank test (Wilcoxon, 1945), bootstrapping a subset of 200 samples 10 times with replacement. The test shows that our approach is significantly better than the others on both the source and target domains.
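The significance test above can be sketched in a few lines; for $n = 10$ bootstrap replicates, the exact null distribution of the signed-rank statistic is small enough to enumerate. The accuracy values below are hypothetical placeholders, not the paper's results:

```python
from itertools import product

def wilcoxon_one_tailed(diffs):
    """Exact one-tailed Wilcoxon signed-rank test for small n.
    Returns P(W+ >= observed) under the null that paired differences
    are symmetric around zero; zero differences are dropped."""
    d = [x for x in diffs if x != 0.0]
    n = len(d)
    abs_sorted = sorted(abs(x) for x in d)
    def rank(v):  # average rank, handling ties in |diff|
        idx = [i for i, a in enumerate(abs_sorted) if a == v]
        return sum(i + 1 for i in idx) / len(idx)
    ranks = [rank(abs(x)) for x in d]
    w_plus = sum(r for r, x in zip(ranks, d) if x > 0)
    # enumerate all 2^n sign assignments for the exact null distribution
    count = sum(
        1
        for signs in product([0, 1], repeat=n)
        if sum(r for r, s in zip(ranks, signs) if s) >= w_plus
    )
    return count / 2 ** n

# hypothetical paired bootstrap accuracies over 10 replicates
ours = [67.9, 67.2, 68.1, 66.8, 67.6, 68.4, 67.1, 67.8, 66.9, 68.0]
base = [65.5, 66.0, 64.8, 65.9, 65.2, 66.3, 64.9, 65.7, 66.1, 65.0]
p = wilcoxon_one_tailed([a - b for a, b in zip(ours, base)])
# all 10 differences are positive, so p = 1/2^10
```

With all differences favoring one system, the one-tailed $p$-value is $1/2^{10} \approx 0.001$, i.e., significant at $p < 0.01$.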
+
+IDA with All Domains. Having analyzed our approach, baselines, and variants on two domains in detail, we are now ready to test the performance of IDA with multiple domains, namely, Fic, Gov, Slate, Tel, and Travel. In this experiment, we assume these domains come one after another, and our goal is to achieve high performance on all domains.
+
+Table 3 shows the dynamics of IDA with our progressive memory network. Comparing the upper-triangular values (in gray, showing out-of-domain performance) with the diagonal values, we see that our approach can be quickly adapted to a new domain in an incremental fashion. Comparing the lower-triangular values with the diagonal, we see that our approach does not suffer from the catastrophic forgetting problem, as the performance on previous domains gradually increases when the model is trained with more domains. After all data are observed, our model achieves the best performance in most domains (last row in Table 3), despite the incremental nature of our approach.
+
+We now compare our approach with other baselines and variants in the multi-domain setting, shown in Table 4. Due to the large number of settings, we only choose a selected subset of variants from Table 2 for the comparison.
+
+As seen, our approach of progressively growing memory achieves the same performance as fine-tuning on the last domain (both with vocabulary expansion), but for all previous 4 domains, we achieve significantly better performance than fine-tuning. Our model is comparable to multi-task learning on all domains. It should also be mentioned that multi-task learning requires training the model when data from all domains are available simultaneously. It is not an incremental approach for domain adaptation, and thus cannot be applied to the scenarios introduced in Section 1. We include this setting mainly because we are curious about the performance of non-incremental domain adaptation.
+
+We also compare with previous methods for IDA in Table 4. Our method outperforms EWC and the progressive neural network in all domains; the results are consistent with Table 2.
+
+# 5 CONCLUSION
+
+In this paper, we propose a progressive memory network for incremental domain adaptation (IDA). We augment an RNN with an attention-based memory bank. During IDA, we add new slots to the memory bank and tune all parameters by back-propagation. Empirically, the progressive memory network does not suffer from the catastrophic forgetting problem as in naïve fine-tuning. Our intuition is that the new memory slots increase the neural network's model capacity, and thus, the new knowledge causes significantly less overriding of the existing network. Compared with expanding hidden states, our progressive memory bank provides a more robust way of increasing model capacity, shown by both a theorem and experiments. We also outperform previous work for IDA, including elastic weight consolidation (EWC) and the original progressive neural network.
+
+# ACKNOWLEDGMENTS
+
+This work was funded by Huawei Technologies, Hong Kong. Lili Mou is supported by the Amii Fellow Program, and the CIFAR AI Chair Program.
+
+# REFERENCES
+
+Justin Bayer, Christian Osendorfer, Daniela Korhammer, Nutan Chen, Sebastian Urban, and Patrick van der Smagt. On fast dropout and its applicability to recurrent networks. arXiv preprint arXiv:1311.0701, 2013.
+Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with A-GEM. arXiv preprint arXiv:1812.00420, 2018.
+Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, pp. 1724-1734, 2014.
+Cristian Danescu-Niculescu-Mizil and Lillian Lee. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proc. Workshop on Cognitive Modeling and Computational Linguistics, pp. 76-87, 2011.
+Rajarshi Das, Manzil Zaheer, Siva Reddy, and Andrew McCallum. Question answering on knowledge bases and text using universal schema and memory networks. In ACL, pp. 358-365, 2017.
+Cyprien de Masson d'Autume, Sebastian Ruder, Lingpeng Kong, and Dani Yogatama. Episodic memory in lifelong language learning. arXiv preprint arXiv:1906.01076, 2019.
+Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. Key-value retrieval networks for task-oriented dialogue. In SIGDIAL, pp. 37-49, 2017.
+Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. JMLR, 17(1):2096-2030, 2016.
+Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538 (7626):471-476, 2016.
+Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8): 1735-1780, 1997.
+Dahyun Kim, Jihwan Bae, Yeonsik Jo, and Jonghyun Choi. Incremental learning with maximum entropy regularization: Rethinking forgetting and intransigence. arXiv preprint arXiv:1902.00829, 2019.
+
+James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521-3526, 2017.
+Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha, and Byoung-Tak Zhang. Overcoming catastrophic forgetting by incremental moment matching. In NeurIPS, pp. 4652-4662, 2017.
+Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE TPAMI, 40(12):2935-2947, 2018.
+Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In EMNLP, pp. 2122-2132, 2016.
+Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. Adversarial multi-task learning for text classification. In ACL, pp. 1-10, 2017.
+David Lopez-Paz and Marc'Aurelio Ranzato. Gradient episodic memory for continual learning. In NeurIPS, pp. 6467-6476, 2017.
+Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In SIGDIAL, pp. 285-294, 2015.
+Andrea Madotto, Chien-Sheng Wu, and Pascale Fung. Mem2seq: Effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. In ACL, pp. 1468-1478, 2018.
+Alexander Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. In EMNLP, pp. 1400-1409, 2016.
+Lili Mou, Zhao Meng, Rui Yan, Ge Li, Yan Xu, Lu Zhang, and Zhi Jin. How transferable are neural networks in NLP applications? In EMNLP, pp. 479-489, 2016.
+Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In EMNLP, pp. 1532-1543, 2014.
+Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. iCaRL: Incremental classifier and representation learning. In CVPR, pp. 2001-2010, 2017.
+Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu, and Raia Hadsell. Progressive neural networks. arXiv preprint arXiv:1606.04671, 2016.
+Jonathan Schwarz, Wojciech Czarnecki, Jelena Luketina, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. Progress & compress: A scalable framework for continual learning. In ICML, pp. 4535-4544, 2018.
+Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron C Courville, and Yoshua Bengio. A hierarchical latent variable encoder-decoder model for generating dialogues. In AAAI, pp. 3295-3301, 2017.
+Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In NIPS, pp. 2440-2448, 2015.
+Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In NIPS, pp. 3104-3112, 2014.
+Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, and Philipp Koehn. Overcoming catastrophic forgetting during domain adaptation of neural machine translation. In NAACL, pp. 2062-2068, 2019.
+Shixian Wen and Laurent Itti. Overcoming catastrophic forgetting problem by weight consolidation and long-term memory. arXiv preprint arXiv:1805.07441, 2018.
+
+Frank Wilcoxon. Individual comparisons by ranking methods. Biometrics Bulletin, 1(6):80-83, 1945.
+Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL-HLT, pp. 1112-1122, 2018.
+Chenshen Wu, Luis Herranz, Xialei Liu, Yaxing Wang, Joost van de Weijer, and Bogdan Raducanu. Memory replay GANs: learning to generate images from new categories without forgetting. arXiv preprint arXiv:1809.02058, 2018.
+Jiaolong Xu, Sebastian Ramos, David Vázquez, Antonio M López, and D Ponsa. Incremental domain adaptation of deformable part-based models. In BMVC, 2014.
+Ju Xu and Zhanxing Zhu. Reinforced continual learning. In NeurIPS, pp. 899-908, 2018.
+Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. Lifelong learning with dynamically expandable networks. In ICLR, 2018.
+Jaehong Yoon, Saehoon Kim, Eunho Yang, and Sung Ju Hwang. Oracle: Order robust adaptive continual learning. arXiv preprint arXiv:1902.09432, 2019.
+Jianfei Yu, Minghui Qiu, Jing Jiang, Jun Huang, Shuangyong Song, Wei Chu, and Haiqing Chen. Modelling domain relationships for transfer learning on retrieval-based question answering systems in e-commerce. In WSDM, pp. 682-690, 2018.
+Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In ICML, pp. 3987-3995, 2017.
+Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. Generating informative and diverse conversational responses via adversarial information maximization. arXiv preprint arXiv:1809.05972, 2018a.
+Zheng Zhang, Minlie Huang, Zhongzhou Zhao, Feng Ji, Haiqing Chen, and Xiaoyan Zhu. Memory-augmented dialogue management for task-oriented dialogue systems. arXiv preprint arXiv:1805.00150, 2018b.
+
+
+Figure 2: Hidden state expansion vs. memory expansion at step $t$ . (a) Expand RNN states; (b) expand memory.
+
+# A PROOF OF THEOREM 1
+
+Theorem 1. Let RNN have vanilla transition with the linear activation function, and let the RNN state at the last step $\pmb{h}_{i-1}$ be fixed. For a particular data point, if the memory attention satisfies $\sum_{j=N+1}^{N+M} \widetilde{\alpha}_{i,j} \leq \sum_{j=1}^{N} \widetilde{\alpha}_{i,j}$ , then memory expansion yields a lower expected mean squared difference in $\pmb{h}_i$ than RNN state expansion. That is,
+
+$$
+\mathbb {E} \left[ \left\| \boldsymbol {h} _ {i} ^ {\left(\mathrm {m}\right)} - \boldsymbol {h} _ {i} \right\| ^ {2} \right] \leq \mathbb {E} \left[ \left\| \boldsymbol {h} _ {i} ^ {\left(\mathrm {s}\right)} - \boldsymbol {h} _ {i} \right\| ^ {2} \right] \tag {9}
+$$
+
+where $\pmb{h}_i^{(\mathrm{m})}$ denotes the hidden state if the memory is expanded, and $\pmb{h}_i^{(\mathrm{s})}$ denotes the original dimensions of the hidden state if we instead expand the size of the RNN state itself. Here, we compute the expectation by assuming that the weights and hidden states are iid samples from a zero-mean Gaussian distribution with variance $\sigma^2$ .
+
+Proof: Let $h_{i-1}$ be the hidden state of the last step. We focus on one step of transition and assume that $h_{i-1}$ is the same when the model capacity is increased. We consider a simplified case where the RNN has vanilla transition with the linear activation function. We measure the effect of model expansion quantitatively by the expected square difference on $h_i$ before and after model expansion.
+
+Suppose the original hidden state $h_i$ is $D$ -dimensional. We assume each memory slot is $d$ -dimensional, and that the additional RNN units when expanding the hidden state are also $d$ -dimensional. We further assume each variable in the expanded memory and expanded weights $(\widetilde{W}$ in Figure 2) are iid with zero mean and variance $\sigma^2$ . This assumption is reasonable as it enables a fair comparison of expanding memory and expanding hidden states. Finally, we assume every variable in the learned memory slots, i.e., $m_{jk}$ , follows the same distribution (zero mean, variance $\sigma^2$ ).
+
+We first compute how the original dimensions of the hidden state change if we expand the RNN states. We denote the expanded hidden states by $\widetilde{h}_{i-1}$ and $\widetilde{h}_i$ for the two time steps, and the weights connecting $\widetilde{h}_{i-1}$ to $h_i$ by $\widetilde{W} \in \mathbb{R}^{D \times d}$ . We focus on the original $D$ -dimensional space, denoted as $h_i^{(s)}$ . The connection is shown in Figure 2a. We have
+
+$$
+\begin{aligned}
+\mathbb{E}\left[\left\|h_{i}^{(\mathrm{s})} - h_{i}\right\|^{2}\right] &= \mathbb{E}\left[\left\|\widetilde{W}\widetilde{h}_{i-1}\right\|^{2}\right] &&(10)\\
+&= \mathbb{E}\left[\sum_{j=1}^{D}\left(\widetilde{w}_{j}^{\top}\widetilde{h}_{i-1}\right)^{2}\right] &&(11)\\
+&= \sum_{j=1}^{D}\mathbb{E}\left[\left(\widetilde{w}_{j}^{\top}\widetilde{h}_{i-1}\right)^{2}\right] &&(12)\\
+&= \sum_{j=1}^{D}\mathbb{E}\left[\left(\sum_{k=1}^{d}\widetilde{w}_{jk}\,\widetilde{h}_{i-1}[k]\right)^{2}\right] &&(13)\\
+&= \sum_{j=1}^{D}\sum_{k=1}^{d}\mathbb{E}\left[\left(\widetilde{w}_{jk}\,\widetilde{h}_{i-1}[k]\right)^{2}\right] &&(14)\\
+&= \sum_{j=1}^{D}\sum_{k=1}^{d}\mathbb{E}\left[\widetilde{w}_{jk}^{2}\right]\mathbb{E}\left[\widetilde{h}_{i-1}[k]^{2}\right] &&(15)\\
+&= D \cdot d \cdot \operatorname{Var}(w) \cdot \operatorname{Var}(h) &&(16)\\
+&= D d \sigma^{4} &&(17)
+\end{aligned}
+$$
+
+where (14) is due to the independence and zero-mean assumptions on the elements of $\widetilde{W}$ and $\widetilde{h}_{i-1}$ , and (15) is due to the independence between $\widetilde{W}$ and $\widetilde{h}_{i-1}$ .
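As a quick numerical sanity check of (17) (our illustration, not part of the original analysis), the expectation can be estimated by simulation; the sizes below are arbitrary:

```python
# Monte Carlo check of Eq. (17): with iid zero-mean Gaussian entries of
# variance sigma^2 in both the expanded weights W~ (D x d) and the expanded
# hidden units h~ (d,), we expect E[||W~ h~||^2] = D * d * sigma^4.
import numpy as np

rng = np.random.default_rng(0)
D, d, sigma = 30, 10, 0.5
trials = 20000

total = 0.0
for _ in range(trials):
    W = rng.normal(0.0, sigma, size=(D, d))   # expanded weights W~
    h = rng.normal(0.0, sigma, size=d)        # expanded hidden units h~_{i-1}
    total += np.sum((W @ h) ** 2)

empirical = total / trials
theoretical = D * d * sigma ** 4
print(empirical, theoretical)  # the two estimates should closely agree
```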
+
+Next, we compute the effect of expanding the memory. Notice that $h_i^{(\mathrm{m})} - h_i = W_{(\mathrm{c})}\Delta c$ . Here, $h_i^{(\mathrm{m})}$ is the RNN hidden state after memory expansion. $\Delta c \stackrel{\mathrm{def}}{=} c' - c$ , where $c$ and $c'$ are the attention content vectors before and after memory expansion, respectively, at the current time step. $W_{(\mathrm{c})}$ is the weight matrix connecting the attention content to the RNN state. The connection is shown in Figure 2b. Reusing the result of (16), we immediately obtain
+
+$$
+\begin{aligned}
+&\mathbb{E}\left[\left\|h_{i}^{(\mathrm{m})} - h_{i}\right\|^{2}\right] &&(18)\\
+&= \mathbb{E}\left[\left\|W_{(\mathrm{c})}\,\Delta c\right\|^{2}\right] &&(19)\\
+&= D d \sigma^{2}\operatorname{Var}(\Delta c_{k}) &&(20)
+\end{aligned}
+$$
+
+where $\Delta c_{k}$ is an element of the vector $\Delta c$ .
+
+To prove inequality (9), it remains to show that $\mathrm{Var}(\Delta c_k) \leq \sigma^2$ . We now analyze how the attention is computed.
+
+Let $\widetilde{\alpha}_1, \dots, \widetilde{\alpha}_{N + M}$ be the unnormalized attention weights over the $N + M$ memory slots. We notice that $\widetilde{\alpha}_1, \dots, \widetilde{\alpha}_N$ remain the same after memory expansion. Then, the original attention probability is given by $\alpha_j = \widetilde{\alpha}_j / (\widetilde{\alpha}_1 + \dots + \widetilde{\alpha}_N)$ for $j = 1, \dots, N$ . After memory expansion, the attention probability becomes $\alpha_j' = \widetilde{\alpha}_j / (\widetilde{\alpha}_1 + \dots + \widetilde{\alpha}_{N + M})$ , illustrated in Figure 3. We have
+
+$$
+\begin{aligned}
+\Delta c &= c' - c &&(21)\\
+&= \sum_{j=1}^{N}\left(\alpha_{j}' - \alpha_{j}\right)m_{j} + \sum_{j=N+1}^{N+M}\alpha_{j}'\,m_{j} &&(22)\\
+&= \sum_{j=1}^{N}\left(\frac{\widetilde{\alpha}_{j}}{\widetilde{\alpha}_{1}+\cdots+\widetilde{\alpha}_{N+M}} - \frac{\widetilde{\alpha}_{j}}{\widetilde{\alpha}_{1}+\cdots+\widetilde{\alpha}_{N}}\right)m_{j} + \sum_{j=N+1}^{N+M}\frac{\widetilde{\alpha}_{j}}{\widetilde{\alpha}_{1}+\cdots+\widetilde{\alpha}_{N+M}}\,m_{j}\\
+&= \sum_{j=1}^{N}\frac{-\widetilde{\alpha}_{j}\left(\widetilde{\alpha}_{N+1}+\cdots+\widetilde{\alpha}_{N+M}\right)}{\left(\widetilde{\alpha}_{1}+\cdots+\widetilde{\alpha}_{N}\right)\left(\widetilde{\alpha}_{1}+\cdots+\widetilde{\alpha}_{N+M}\right)}\,m_{j} + \sum_{j=N+1}^{N+M}\frac{\widetilde{\alpha}_{j}}{\widetilde{\alpha}_{1}+\cdots+\widetilde{\alpha}_{N+M}}\,m_{j} &&(23)\\
+&\stackrel{\text{def}}{=} \sum_{j=1}^{N+M}\beta_{j}\,m_{j} &&(24)
+\end{aligned}
+$$
+
+where
+
+$$
+\beta_{j} \stackrel{\text{def}}{=}
+\begin{cases}
+-\dfrac{\widetilde{\alpha}_{j}}{\widetilde{\alpha}_{1}+\cdots+\widetilde{\alpha}_{N+M}}\cdot\dfrac{\widetilde{\alpha}_{N+1}+\cdots+\widetilde{\alpha}_{N+M}}{\widetilde{\alpha}_{1}+\cdots+\widetilde{\alpha}_{N}}, & \text{if } 1 \leq j \leq N,\\[2ex]
+\dfrac{\widetilde{\alpha}_{j}}{\widetilde{\alpha}_{1}+\cdots+\widetilde{\alpha}_{N+M}}, & \text{if } N+1 \leq j \leq N+M.
+\end{cases}
+\tag{25}
+$$
+
+
+Figure 3: Attention probabilities before and after memory expansion.
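Numerically, appending slots rescales each original attention probability by the common factor $S_N/(S_N+S_M)$ , as depicted in Figure 3; the unnormalized weights below are made up for illustration:

```python
import numpy as np

# Hypothetical unnormalized attention weights: N = 4 original memory slots
# followed by M = 2 newly added slots (all values invented).
alpha_tilde = np.array([2.0, 1.0, 0.5, 0.5, 0.3, 0.2])
N, M = 4, 2

alpha_old = alpha_tilde[:N] / alpha_tilde[:N].sum()  # before expansion
alpha_new = alpha_tilde / alpha_tilde.sum()          # after expansion

# Every original probability shrinks by the same factor S_N / (S_N + S_M),
# so attention over the old slots is rescaled but never reordered.
shrink = alpha_tilde[:N].sum() / alpha_tilde.sum()
print(alpha_new[:N] / alpha_old, shrink)
```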
+
+By our assumption on the total attention, $\sum_{j = N + 1}^{N + M}\widetilde{\alpha}_j\leq \sum_{j = 1}^{N}\widetilde{\alpha}_j$ , the ratio $\frac{\widetilde{\alpha}_{N+1}+\cdots+\widetilde{\alpha}_{N+M}}{\widetilde{\alpha}_{1}+\cdots+\widetilde{\alpha}_{N}}$ is at most $1$ , and hence
+
+$$
+\left| \beta_ {j} \right| \leq \left| \alpha_ {j} ^ {\prime} \right|, \quad \forall 1 \leq j \leq N + M \tag {26}
+$$
+
+Then, we have
+
+$$
+\begin{aligned}
+\operatorname{Var}(\Delta c_{k}) &= \mathbb{E}\left[\left(c_{k}' - c_{k}\right)^{2}\right] \quad \forall\, 1 \leq k \leq d &&(27)\\
+&= \frac{1}{d}\,\mathbb{E}\left[\left\|c' - c\right\|^{2}\right] &&(28)\\
+&= \frac{1}{d}\,\mathbb{E}\left[\sum_{k=1}^{d}\left(\sum_{j=1}^{N+M}\beta_{j} m_{jk}\right)^{2}\right] &&(29)\\
+&= \frac{1}{d}\sum_{k=1}^{d}\mathbb{E}\left[\left(\sum_{j=1}^{N+M}\beta_{j} m_{jk}\right)^{2}\right] &&(30)\\
+&= \frac{1}{d}\sum_{k=1}^{d}\sum_{j=1}^{N+M}\mathbb{E}\left[\left(\beta_{j} m_{jk}\right)^{2}\right] &&(31)\\
+&= \frac{1}{d}\sum_{k=1}^{d}\sum_{j=1}^{N+M}\mathbb{E}\left[\beta_{j}^{2}\right]\mathbb{E}\left[m_{jk}^{2}\right] &&(32)\\
+&= \frac{1}{d}\sum_{k=1}^{d}\sum_{j=1}^{N+M}\mathbb{E}\left[\beta_{j}^{2}\right]\sigma^{2} &&(33)\\
+&= \sigma^{2}\,\mathbb{E}\left[\sum_{j=1}^{N+M}\beta_{j}^{2}\right] &&(34)\\
+&\leq \sigma^{2}\,\mathbb{E}\left[\sum_{j=1}^{N+M}\left(\alpha_{j}'\right)^{2}\right] &&(35)\\
+&\leq \sigma^{2} &&(36)
+\end{aligned}
+$$
+
+Here, (31) is due to the assumption that the $m_{jk}$ are independent and zero-mean, (32) is due to the independence between $\beta_{j}$ and $m_{jk}$ , and (35) follows from (26). To obtain (36), we notice that $\sum_{j=1}^{N+M} \alpha_{j}' = 1$ with $0 \leq \alpha_{j}' \leq 1$ ( $\forall 1 \leq j \leq N + M$ ); thus, $\sum_{j=1}^{N+M} (\alpha_{j}')^{2} \leq 1$ , concluding our proof.
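As an illustrative simulation of this conclusion (ours, not part of the original proof), fix unnormalized attention weights that satisfy the total-attention condition, draw Gaussian memory slots, and compare the estimate of $\mathrm{Var}(\Delta c_k)$ against $\sigma^2$ :

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, d, sigma = 5, 3, 16, 0.8
# Made-up unnormalized attention weights; the new slots' total (2.1) is
# below the old slots' total (3.3), so the theorem's condition holds.
alpha_tilde = np.array([1.0, 0.8, 0.6, 0.5, 0.4,
                        0.9, 0.7, 0.5])
assert alpha_tilde[N:].sum() <= alpha_tilde[:N].sum()

alpha_old = alpha_tilde[:N] / alpha_tilde[:N].sum()  # probabilities before
alpha_new = alpha_tilde / alpha_tilde.sum()          # probabilities after

sq_norms = []
for _ in range(20000):
    mem = rng.normal(0.0, sigma, size=(N + M, d))  # memory slots m_j
    c_old = alpha_old @ mem[:N]                    # content before expansion
    c_new = alpha_new @ mem                        # content after expansion
    sq_norms.append(np.sum((c_new - c_old) ** 2))

var_dc = np.mean(sq_norms) / d                     # estimate of Var(dc_k)
print(var_dc, sigma ** 2)  # var_dc stays below sigma^2, as the proof predicts
```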
+
+# B HYPERPARAMETERS
+
+We choose the base model and most of its settings by following the original MultiNLI paper (Williams et al., 2018): 300D RNN hidden states, 300D pretrained GloVe embeddings (Pennington et al., 2014) for initialization, a batch size of 32, and the Adam optimizer for training. The initial learning rate for Adam is tuned over the set $\{0.3, 0.03, 0.003, 0.0003, 0.00003\}$ and is set to 0.0003 based on validation performance.
+
+
+Figure 4: Tuning the number of memory slots to be added per domain in the MultiNLI experiment. The two graphs, (a) and (b), show validation performance of our IDA model $\mathrm{S} \rightarrow \mathrm{T}$ (F+M+V).
+
+For the memory, we set each slot to be 300-dimensional, which is the same as the RNN and embedding size.
+
+We tune the number of progressive memory slots in Figure 4, which shows the validation performance on the source (Fic) and target (Gov) domains. We see that the performance is close to fine-tuning alone if only one memory slot is added. It improves quickly between 1 and 200 slots, and tapers off around 500. We thus choose to add 500 slots for each domain.
+
+# C ADDITIONAL EXPERIMENT ON DIALOGUE GENERATION
+
+We further evaluate our approach on the task of dialogue response generation. Given an input text sequence, the task is to generate an appropriate output text sequence as a response in human-computer dialogue. This supplementary experiment provides additional evidence for the effectiveness of our approach on generation tasks.
+
+Datasets, Setup, and Metrics. We use the Cornell Movie Dialogs Corpus (Danescu-Niculescu-Mizil & Lee, 2011) as the source. It contains $\sim 220\mathrm{k}$ message-response pairs from movie transcripts. We use a $200\mathrm{k}-10\mathrm{k}-10\mathrm{k}$ training-validation-test split.
+
+For the target domain, we manually construct a very small dataset from the Ubuntu Dialogue Corpus (Lowe et al., 2015) to mimic the scenario where quick adaptation has to be done to a new domain with little training data. In particular, we choose a random subset of $15\mathrm{k}$ message-response pairs, and use a $9\mathrm{k}-3\mathrm{k}-3\mathrm{k}$ split.
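The subset construction can be sketched as follows; the `pairs` list is a stand-in for the parsed Ubuntu corpus (hypothetical placeholder data):

```python
import random

# Placeholder message-response pairs standing in for the Ubuntu corpus.
pairs = [(f"msg-{i}", f"resp-{i}") for i in range(100_000)]

random.seed(42)
subset = random.sample(pairs, 15_000)  # random 15k subset, no duplicates
train = subset[:9_000]
valid = subset[9_000:12_000]
test = subset[12_000:]
print(len(train), len(valid), len(test))  # 9000 3000 3000
```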
+
+The base model is a sequence-to-sequence (Seq2Seq) neural network (Sutskever et al., 2014) with attention from the decoder to the encoder. We use a single-layer RNN encoder and a single-layer RNN decoder, each containing 1024 cells, following Sutskever et al. (2014). We use GRUs instead of LSTM units due to efficiency concerns. We have separate memory banks for the encoder and decoder, since they are essentially different RNNs. The source and target vocabularies are $27\mathrm{k}$ and $10\mathrm{k}$ , respectively. Each memory slot is 1024D, because the RNN states are 1024D in this experiment. For each domain, we progressively add 1024 slots; the number of slots is tuned in a manner similar to the MultiNLI experiment. As before, we use Adam with an initial learning rate of 0.0003 and other default parameters.
+
+Following previous work, we use BLEU-2 (Eric et al., 2017; Madotto et al., 2018) and average Word2Vec embedding similarity (W2V-Sim, Serban et al., 2017; Zhang et al., 2018a) as the evaluation metrics. BLEU-2 is the geometric mean of unigram and bigram word precision penalized by length, and correlates with human satisfaction to some extent (Liu et al., 2016). W2V-Sim is defined as the cosine similarity between the averaged Word2Vec embeddings of the model outputs
+
+| Line | Model | Trained on/by | BLEU-2 (S) | BLEU-2 (T) | W2V-Sim (S) | W2V-Sim (T) |
+| --- | --- | --- | --- | --- | --- | --- |
+| 1 | RNN | S | 2.842↑ | 0.738↓ | 0.480↓ | 0.456↓ |
+| 2 | RNN | T | 0.795↓ | 1.265↓ | 0.454↓ | 0.480↓ |
+| 3 | RNN+Mem | S | 3.074↑ | 0.712↓ | 0.498↓ | 0.471↓ |
+| 4 | RNN+Mem | T | 0.920↓ | 1.287↓ | 0.462↓ | 0.487↓ |
+| 5 | RNN+Mem | S+T | 2.650↑ | 0.889↓ | 0.471↓ | 0.462↓ |
+| 6 | RNN+Mem | S→T (F) | 1.210↓ | 1.101↓ | 0.509↓ | 0.514↓ |
+| 7 | RNN+Mem | S→T (F+M) | 1.435↓ | 1.207↓ | 0.526 | 0.522 |
+| 8 | RNN+Mem | S→T (F+M+V) | 1.637 | 1.652 | 0.522 | 0.525 |
+| 9 | RNN+Mem | S→T (F+H) | 1.036↓ | 1.606↓ | 0.503↓ | 0.495↓ |
+| 10 | RNN+Mem | S→T (F+H+V) | 1.257↓ | 1.419↓ | 0.504↓ | 0.492↓ |
+| 11 | RNN+Mem | S→T (EWC) | 1.397↓ | 1.382↓ | 0.513↓ | 0.514↓ |
+| 12 | RNN+Mem | S→T (Progressive) | 1.299↓ | 1.408↓ | 0.502↓ | 0.503↓ |
+
+Table 5: Results on two-domain adaptation for dialogue response generation. F: Fine-tuning. V: Expanding vocabulary. H: Expanding RNN hidden states. M: Our proposed method of expanding memory. We also compare with EWC (Kirkpatrick et al., 2017) and the progressive neural network (Rusu et al., 2016). $\uparrow, \downarrow$ : $p < 0.05$ and $\Uparrow, \Downarrow$ : $p < 0.01$ (compared with Line 8).
+
+| Prompt | Response |
+| --- | --- |
+| i'm steve. what's your name? | my name is opera |
+| i love you man | i love you too |
+| so how's it going with your new roomie? | it should be fine |
+| i didn't do anything wrong did i | we don't have much experience |
+| i just got a messed up stomach | i know, you had to do some things |
+| what are you trying to handicap me? | this is not really who i am |
+| im scared | what do we do with it? |
+| i kinda forgot the fix i tried a year ago | can you try the output ? |
+| how would i remaster the ubuntu install | could you have a link both access to the network connection |
+| my terminal is stuck | what version is it? |
+| i am rebooting now | so your issue is a great work cd, sometimes i get some |
+| i get an installation error | i am getting an expert for it |
+| what does the kernel-package do? | you will need to get it to see if there is a setting |
+| what directory should i be in? | you should be able to install the grub cd at the drive |
+Table 6: Sample outputs of our IDA model $\mathrm{S} \rightarrow \mathrm{T}\left( {\mathrm{F} + \mathrm{M} + \mathrm{V}}\right)$ from Table 5.
+
+and the ground truths. Intuitively, BLEU measures hard word-level overlap between two sequences, whereas W2V-Sim measures soft similarity in a distributed semantic space.
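For concreteness, here are minimal single-reference sketches of the two metrics; actual evaluations rely on standard tooling, and the tiny embedding table below is invented purely for illustration:

```python
import math
from collections import Counter

def bleu2(hyp, ref):
    """Geometric mean of 1- and 2-gram precision with a brevity penalty."""
    precisions = []
    for n in (1, 2):
        h = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        r = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((h & r).values())          # clipped n-gram matches
        precisions.append(overlap / max(sum(h.values()), 1))
    if min(precisions) == 0:
        return 0.0
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 2)

def w2v_sim(hyp, ref, emb):
    """Cosine similarity of averaged word vectors."""
    avg = lambda ws: [sum(v) / len(ws) for v in zip(*(emb[w] for w in ws))]
    a, b = avg(hyp), avg(ref)
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# Toy 2D embeddings, made up for this example.
emb = {"my": [1.0, 0.0], "terminal": [0.2, 0.9], "is": [0.5, 0.5],
       "stuck": [0.1, 0.8], "frozen": [0.2, 0.7]}
hyp = ["my", "terminal", "is", "frozen"]
ref = ["my", "terminal", "is", "stuck"]
print(bleu2(hyp, ref), w2v_sim(hyp, ref, emb))
```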
+
+Results. The results for dialogue response generation are shown in Table 5. We see that BLEU-2 and W2V similarity are not necessarily consistent. For example, the memory-augmented RNN trained solely on the source domain achieves the best source BLEU-2, whereas the proposed progressive memory has the highest W2V cosine similarity on S. However, our model variants achieve the best performance on most metrics (Lines 7 and 8), and they consistently outperform all other IDA approaches. Following the previous experiment, we conduct a statistical comparison against Line 8; the test shows that our method is significantly better than the other IDA methods.
+
+In general, the evaluation of dialogue systems is noisy due to the lack of appropriate metrics (Liu et al., 2016). Nevertheless, our experiment provides additional evidence of the effectiveness of our approach. It also highlights our model's viability for both classification and generation tasks.
+
+Case Study. Table 6 shows sample outputs of our IDA model on test prompts from the Cornell Movie Corpus (source) and the Ubuntu Dialogue Corpus (target). We see that casual prompts from the movie domain result in casual responses, whereas Ubuntu queries result in Ubuntu-related responses. With the expansion of vocabulary, our model is able to learn new words like "grub"; with progressive memory, it learns Ubuntu jargon like "network connection." This shows evidence of the success of incremental domain adaptation.
\ No newline at end of file
diff --git a/progressivememorybanksforincrementaldomainadaptation/images.zip b/progressivememorybanksforincrementaldomainadaptation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..829d7bc0fbebe3845075ad3ee2969a5a9636babd
--- /dev/null
+++ b/progressivememorybanksforincrementaldomainadaptation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:907ad56b0fc865384ae458f483f951d54c800c5d2ba3f274047301e9b5f80aa9
+size 580158
diff --git a/progressivememorybanksforincrementaldomainadaptation/layout.json b/progressivememorybanksforincrementaldomainadaptation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..513a1b4b505394672cc28e567726af6e9814f2ae
--- /dev/null
+++ b/progressivememorybanksforincrementaldomainadaptation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e2019331679391a77da31af70a6b343ea59d83f05f1e48941f670d2fc24183a8
+size 478987
diff --git a/projectionbasedconstrainedpolicyoptimization/0d19ee29-7449-4b46-978d-f949a106455d_content_list.json b/projectionbasedconstrainedpolicyoptimization/0d19ee29-7449-4b46-978d-f949a106455d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..fe30f6321e2508fa9c8056c3c276af81fb5beb4d
--- /dev/null
+++ b/projectionbasedconstrainedpolicyoptimization/0d19ee29-7449-4b46-978d-f949a106455d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:70ee05f50e6613277a82e04b375f2d140c73b4d920043f79efe18a1432567ff6
+size 169506
diff --git a/projectionbasedconstrainedpolicyoptimization/0d19ee29-7449-4b46-978d-f949a106455d_model.json b/projectionbasedconstrainedpolicyoptimization/0d19ee29-7449-4b46-978d-f949a106455d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b00c52e14e5fd416c0b30a58ad92126afb59d093
--- /dev/null
+++ b/projectionbasedconstrainedpolicyoptimization/0d19ee29-7449-4b46-978d-f949a106455d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b88fedcd4c5d08a174b50b8cebdd4e4d28f7c9b58c29b9f2442e4f9f7e27a4d8
+size 192718
diff --git a/projectionbasedconstrainedpolicyoptimization/0d19ee29-7449-4b46-978d-f949a106455d_origin.pdf b/projectionbasedconstrainedpolicyoptimization/0d19ee29-7449-4b46-978d-f949a106455d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c0068eca31f92a23fc265683478e4347f22f62b9
--- /dev/null
+++ b/projectionbasedconstrainedpolicyoptimization/0d19ee29-7449-4b46-978d-f949a106455d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:51180557d5295aa7f01d7990c3424d69170562d6e69cbc1faab3a495b9c4403f
+size 16759516
diff --git a/projectionbasedconstrainedpolicyoptimization/full.md b/projectionbasedconstrainedpolicyoptimization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..058ada0f575f432a5f9fb7fc30c522b198af0345
--- /dev/null
+++ b/projectionbasedconstrainedpolicyoptimization/full.md
@@ -0,0 +1,850 @@
+# PROJECTION-BASED CONSTRAINED POLICY OPTIMIZATION
+
+Tsung-Yen Yang
+
+Princeton University
+
+ty3@princeton.edu
+
+Justinian Rosca
+
+Siemens Corporation, Corporate Technology
+
+justinian.rosca@siemens.com
+
+Karthik Narasimhan
+
+Princeton University
+
+karthikn@princeton.edu
+
+Peter J. Ramadge
+
+Princeton University
+
+ramadge@princeton.edu
+
+# ABSTRACT
+
+We consider the problem of learning control policies that optimize a reward function while satisfying constraints due to considerations of safety, fairness, or other costs. We propose a new algorithm, Projection-Based Constrained Policy Optimization (PCPO). This is an iterative method for optimizing policies in a two-step process: the first step performs a local reward improvement update, while the second step reconciles any constraint violation by projecting the policy back onto the constraint set. We theoretically analyze PCPO and provide a lower bound on reward improvement, and an upper bound on constraint violation, for each policy update. We further characterize the convergence of PCPO based on two different metrics: $L^2$ norm and Kullback-Leibler divergence. Our empirical results over several control tasks demonstrate that PCPO achieves superior performance, averaging more than 3.5 times fewer constraint violations and around $15\%$ higher reward compared to state-of-the-art methods. $^1$
+
+# 1 INTRODUCTION
+
+Recent advances in deep reinforcement learning (RL) have demonstrated excellent performance on several domains ranging from games like Go (Silver et al., 2017) and StarCraft (AlphaStar, 2019) to robotic control (Levine et al., 2016). In these settings, agents are allowed to explore the entire state space and experiment with all possible actions during training. However, in many real-world applications such as self-driving cars and unmanned aerial vehicles, considerations of safety, fairness and other costs prevent the agent from having complete freedom to explore. For instance, an autonomous car, while optimizing its driving policies, must not take any actions that could cause harm to pedestrians or property (including itself). In effect, the agent is constrained to take actions that do not violate a specified set of constraints on state-action pairs. In this work, we address the problem of learning control policies that optimize a reward function while satisfying predefined constraints.
+
+The problem of policy learning with constraints is more challenging, since directly optimizing for the reward, as in Q-learning (Mnih et al., 2013) or policy gradient (Sutton et al., 2000), will usually violate the constraints. One approach is to incorporate the constraints into the learning process by forming a constrained optimization problem, and then to perform policy updates using conditional gradient descent with a line search to ensure constraint satisfaction (Achiam et al., 2017). However, the base optimization problem can become infeasible if the current policy violates the constraints. Another approach is to add a hyperparameter-weighted copy of the constraints to the objective function (Tessler et al., 2018); however, this incurs the cost of extensive hyperparameter tuning.
+
+To address the above issues, we propose projection-based constrained policy optimization (PCPO). This is an iterative algorithm that performs policy updates in two stages. The first stage maximizes reward using a trust region optimization method (e.g., TRPO (Schulman et al., 2015a)) without
+
+constraints. This might result in a new intermediate policy that does not satisfy the constraints. The second stage reconciles the constraint violation (if any) by projecting the policy back onto the constraint set, i.e., choosing the policy in the constraint set that is closest to the selected intermediate policy. This allows efficient updates to ensure constraint satisfaction without requiring a line search (Achiam et al., 2017) or adjusting a weight (Tessler et al., 2018). Further, due to the projection step, PCPO offers efficient recovery from infeasible (i.e., constraint-violating) states (e.g., due to approximation errors), which existing methods do not handle well.
+
+We analyze PCPO theoretically and derive performance bounds for the algorithm. Specifically, based on information geometry and policy optimization theory, we construct a lower bound on reward improvement, and an upper bound on constraint violations for each policy update. We find that with a relatively small step size for each policy update, the worst-case constraint violation and reward degradation are tolerable. We further analyze two distance measures for the projection step onto the constraint set. We find that the convergence of PCPO is affected by the smallest and largest singular values of the Fisher information matrix used during training. By observing these singular values, we can choose the appropriate projection best suited to the problem.
+
+Empirically, we compare PCPO with state-of-the-art algorithms on four different control tasks, including two Mujoco environments with safety constraints introduced by Achiam et al. (2017) and two traffic management tasks with fairness constraints introduced by Vinitsky et al. (2018). In all cases, the proposed algorithm achieves comparable or superior performance to prior approaches, averaging more reward with fewer cumulative constraint violations. For instance, across the above tasks, PCPO achieves 3.5 times fewer constraint violations and around $15\%$ more reward. This demonstrates the ability of PCPO to robustly learn constraint-satisfying policies, and represents a step toward the reliable deployment of RL in real problems.
+
+# 2 PRELIMINARIES
+
+We frame our policy learning as a constrained Markov Decision Process (CMDP) (Altman, 1999), in which the policy directs the agent to maximize the reward while minimizing the cost. We define a CMDP as the tuple $< S, A, T, R, C >$ , where $S$ is the set of states, $A$ is the set of actions that the agent can take, $T: S \times A \times S \to [0,1]$ is the transition probability of the CMDP, $R: S \times A \to \mathbb{R}$ is the reward function, and $C: S \times A \to \mathbb{R}$ is the cost function. Given the agent's current state $s$ , the policy $\pi(a|s): S \to A$ selects an action $a$ for the agent to take. Based on $s$ and $a$ , the agent transitions to the next state (denoted by $s'$ ) according to the state transition model $T(s'|s, a)$ , and receives the reward and pays the cost, denoted by $R(s, a)$ and $C(s, a)$ , respectively.
+
+We aim to learn a policy $\pi$ that maximizes a cumulative discounted reward, denoted by
+
+$$
+J ^ {R} (\pi) \doteq \mathbb {E} _ {\tau \sim \pi} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} R (s _ {t}, a _ {t}) \right],
+$$
+
+while satisfying constraints, i.e., keeping the cumulative discounted cost below a desired threshold $h$ , denoted by
+
+$$
+J ^ {C} (\pi) \doteq \mathbb {E} _ {\tau \sim \pi} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} C (s _ {t}, a _ {t}) \right] \leq h,
+$$
+
+where $\gamma$ is the discount factor, $\tau$ is the trajectory $(\tau = (s_0, a_0, s_1, \dots))$ , and $\tau \sim \pi$ is shorthand for showing that the distribution over the trajectory depends on $\pi: s_0 \sim \mu, a_t \sim \pi(a_t | s_t)$ , $s_{t+1} \sim T(s_{t+1} | s_t, a_t)$ , where $\mu$ is the initial state distribution.
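Both quantities can be estimated by rolling out trajectories under a fixed policy. The toy CMDP below (dynamics, reward, cost, policy, and threshold $h$ ) is entirely invented for illustration:

```python
# Sketch: Monte Carlo estimates of the discounted return J^R and discounted
# cost J^C of a fixed policy on a made-up two-state CMDP.
import random

gamma, horizon, episodes = 0.9, 50, 2000
random.seed(0)

def step(s, a):
    s2 = (s + a) % 2                      # toy deterministic transition
    reward = 1.0 if s2 == 1 else 0.0      # toy reward R(s, a)
    cost = 1.0 if a == 1 else 0.0         # toy cost C(s, a)
    return s2, reward, cost

def policy(s):
    return 1 if random.random() < 0.3 else 0   # toy stochastic policy

JR, JC = 0.0, 0.0
for _ in range(episodes):
    s, ret, cst = 0, 0.0, 0.0
    for t in range(horizon):
        a = policy(s)
        s, r, c = step(s, a)
        ret += gamma ** t * r
        cst += gamma ** t * c
    JR += ret / episodes
    JC += cst / episodes

h = 4.0                                   # hypothetical cost threshold
print(JR, JC, JC <= h)
```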
+
+Kakade & Langford (2002) give an identity to express the performance of policy $\pi^{\prime}$ in terms of the advantage function over another policy $\pi$ :
+
+$$
+J ^ {R} \left(\pi^ {\prime}\right) - J ^ {R} (\pi) = \frac {1}{1 - \gamma} \mathbb {E} _ {\substack {s \sim d ^ {\pi^ {\prime}} \\ a \sim \pi^ {\prime}}} [ A _ {R} ^ {\pi} (s, a) ], \tag{1}
+$$
+
+where $d^{\pi}$ is the discounted future state distribution, denoted by $d^{\pi}(s) \doteq (1 - \gamma)\sum_{t=0}^{\infty}\gamma^{t}P(s_{t} = s|\pi)$ , and $A_R^\pi(s,a)$ is the reward advantage function, denoted by $A_R^\pi(s,a) \doteq Q_R^\pi(s,a) - V_R^\pi(s)$ . Here $Q_R^\pi(s,a) \doteq \mathbb{E}_{\tau \sim \pi}\left[\sum_{t=0}^{\infty}\gamma^{t}R(s_{t},a_{t})|s_{0} = s,a_{0} = a\right]$ is the discounted cumulative reward obtained by the policy $\pi$ given the initial state $s$ and action $a$ , and $V_R^\pi(s) \doteq$
+
+$\mathbb{E}_{\tau \sim \pi}\big[\sum_{t = 0}^{\infty}\gamma^{t}R(s_{t},a_{t})|s_{0} = s\big]$ is the discounted cumulative reward obtained by the policy $\pi$ given the initial state $s$ . Similarly, we have the cost advantage function $A_C^\pi (s,a) = Q_C^\pi (s,a) - V_C^\pi (s)$ , where $Q_{C}^{\pi}(s,a)\doteq \mathbb{E}_{\tau \sim \pi}\big[\sum_{t = 0}^{\infty}\gamma^{t}C(s_{t},a_{t})|s_{0} = s,a_{0} = a\big]$ , and $V_{C}^{\pi}(s)\doteq \mathbb{E}_{\tau \sim \pi}\big[\sum_{t = 0}^{\infty}\gamma^{t}C(s_{t},a_{t})|s_{0} = s\big]$ .
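These definitions can be made concrete on a tiny tabular MDP by solving the Bellman equation exactly with fixed-point iteration (the two-state MDP below is invented for illustration). Note that the advantage averages to zero under the behavior policy itself, which is what makes the identity in Eq. (1) informative:

```python
gamma = 0.9
S, A = [0, 1], [0, 1]
T = {(s, a): (s + a) % 2 for s in S for a in A}         # deterministic moves
R = {(s, a): float((s + a) % 2) for s in S for a in A}  # reward 1 iff next state is 1
pi = {0: [0.5, 0.5], 1: [0.5, 0.5]}                     # uniform policy

# Fixed-point iteration on V(s) = sum_a pi(a|s) (R(s,a) + gamma V(T(s,a))).
V = {s: 0.0 for s in S}
for _ in range(500):
    V = {s: sum(pi[s][a] * (R[s, a] + gamma * V[T[s, a]]) for a in A)
         for s in S}

Q = {(s, a): R[s, a] + gamma * V[T[s, a]] for s in S for a in A}
Adv = {(s, a): Q[s, a] - V[s] for s in S for a in A}

# The expected advantage under the behavior policy itself is zero.
for s in S:
    assert abs(sum(pi[s][a] * Adv[s, a] for a in A)) < 1e-9
print(V, Adv)
```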
+
+# 3 PROJECTION-BASED CONSTRAINED POLICY OPTIMIZATION
+
+To robustly learn constraint-satisfying policies, we develop PCPO, a trust region method that performs policy updates corresponding to reward improvement, followed by projections onto the constraint set. Inspired by projected gradient descent, PCPO is composed of two steps for each update: a reward improvement step and a projection step (illustrated in Fig. 1).
+
+Reward Improvement Step. First, we optimize the reward function by maximizing the reward advantage function $A_R^\pi (s,a)$ subject to a Kullback-Leibler (KL) divergence constraint. This constrains the intermediate policy $\pi^{k + \frac{1}{2}}$ to be within a $\delta$ -neighbourhood of $\pi^k$ :
+
+
+Figure 1: Update procedures for PCPO. In step one (red arrow), PCPO follows the reward improvement direction in the trust region (light green). In step two (blue arrow), PCPO projects the policy onto the constraint set (light orange).
+
+$$
+\pi^{k + \frac{1}{2}} = \underset{\pi}{\arg\max}\ \mathbb{E}_{\substack{s\sim d^{\pi^{k}}\\ a\sim \pi}}\left[A_{R}^{\pi^{k}}(s,a)\right]
+$$
+
+$$
+\text{s.t.} \quad \mathbb{E}_{s \sim d^{\pi^{k}}}\left[D_{\mathrm{KL}}\left(\pi || \pi^{k}\right)[s]\right] \leq \delta. \tag{2}
+$$
+
+This update rule with the trust region, $\{\pi : \mathbb{E}_{s \sim d^{\pi^k}}[D_{\mathrm{KL}}(\pi || \pi^k)[s]] \leq \delta\}$, is called Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a). It constrains the policy change to a divergence neighborhood and guarantees reward improvement.
+
+Projection Step. Second, we project the intermediate policy $\pi^{k + \frac{1}{2}}$ onto the constraint set by minimizing a distance measure $D$ between $\pi^{k + \frac{1}{2}}$ and $\pi$ :
+
+$$
+\pi^{k + 1} = \underset{\pi}{\arg\min}\ D\left(\pi, \pi^{k + \frac{1}{2}}\right)
+$$
+
+$$
+\text{s.t.} \quad J^{C}(\pi^{k}) + \mathbb{E}_{\substack{s \sim d^{\pi^{k}} \\ a \sim \pi}}\left[A_{C}^{\pi^{k}}(s, a)\right] \leq h. \tag{3}
+$$
+
+The projection step ensures that the constraint-satisfying policy $\pi^{k + 1}$ is close to $\pi^{k + \frac{1}{2}}$. We consider two distance measures $D$: the $L^2$ norm and the KL divergence. In particular, using the KL divergence projection in the probability distribution space allows us to provide provable guarantees for PCPO.
+
+# 3.1 PERFORMANCE BOUND FOR PCPO WITH KL DIVERGENCE PROJECTION
+
+In safety-critical applications such as autonomous cars, one cares about how much worse a system's performance may become when applying a learning algorithm. To this end, for PCPO with KL divergence projection, we analyze the worst-case performance degradation for each policy update when the current policy $\pi^k$ satisfies the constraint. The following theorem provides a lower bound on reward improvement and an upper bound on constraint violation for each policy update.
+
+Theorem 3.1 (Worst-case Bound on Updating Constraint-satisfying Policies). Define $\epsilon_R^{\pi^{k+1}} \doteq \max_s \left| \mathbb{E}_{a \sim \pi^{k+1}}[A_R^{\pi^k}(s, a)] \right|$, and $\epsilon_C^{\pi^{k+1}} \doteq \max_s \left| \mathbb{E}_{a \sim \pi^{k+1}}[A_C^{\pi^k}(s, a)] \right|$. If the current policy $\pi^k$ satisfies the constraint, then under KL divergence projection, the lower bound on reward improvement and the upper bound on constraint violation for each policy update are
+
+$$
+J^{R}\left(\pi^{k + 1}\right) - J^{R}\left(\pi^{k}\right) \geq -\frac{\sqrt{2\delta}\,\gamma\,\epsilon_{R}^{\pi^{k + 1}}}{(1 - \gamma)^{2}}, \quad \text{and} \quad J^{C}\left(\pi^{k + 1}\right) \leq h + \frac{\sqrt{2\delta}\,\gamma\,\epsilon_{C}^{\pi^{k + 1}}}{(1 - \gamma)^{2}},
+$$
+
+where $\delta$ is the step size in the reward improvement step.
+
+Proof. See the supplemental material.
+
+
+
+Theorem 3.1 indicates that if $\delta$ is small, the worst-case performance degradation is tolerable.
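As a worked example, the two bounds in Theorem 3.1 can be evaluated for illustrative constants; the values of $\delta$, $\gamma$, $\epsilon$, and $h$ passed in are assumptions for illustration, not numbers from the paper.

```python
import math

def worst_case_bounds(delta, gamma, eps_R, eps_C, h):
    """Evaluate the Theorem 3.1 bounds for given (assumed) constants."""
    degradation = math.sqrt(2 * delta) * gamma / (1 - gamma) ** 2
    reward_lower = -degradation * eps_R   # J^R(pi^{k+1}) - J^R(pi^k) >= this
    cost_upper = h + degradation * eps_C  # J^C(pi^{k+1}) <= this
    return reward_lower, cost_upper
```

Shrinking $\delta$ tightens both bounds: the reward lower bound approaches zero degradation, and the cost upper bound approaches the threshold $h$, since both scale with $\sqrt{2\delta}$.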
+
+Due to approximation errors or the random initialization of policies, PCPO may have a constraint-violating update. Theorem 3.1 does not provide a guarantee for updating a constraint-violating policy. Hence we analyze the worst-case performance degradation for each policy update when the current policy $\pi^k$ violates the constraint. The following theorem provides a lower bound on reward improvement and an upper bound on constraint violation for each policy update.
+
+Theorem 3.2 (Worst-case Bound on Updating Constraint-violating Policies). Define $\epsilon_R^{\pi^{k+1}} \doteq \max_s \left| \mathbb{E}_{a \sim \pi^{k+1}}[A_R^{\pi^k}(s, a)] \right|$ , $\epsilon_C^{\pi^{k+1}} \doteq \max_s \left| \mathbb{E}_{a \sim \pi^{k+1}}[A_C^{\pi^k}(s, a)] \right|$ , $b^+ \doteq \max(0, J^C(\pi^k) - h)$ , and $\alpha_{\mathrm{KL}} \doteq \frac{1}{2a^T H^{-1} a}$ , where $a$ is the gradient of the cost advantage function and $H$ is the Hessian of the KL divergence constraint. If the current policy $\pi^k$ violates the constraint, then under KL divergence projection, the lower bound on reward improvement and the upper bound on constraint violation for each policy update are
+
+$$
+J^{R}(\pi^{k + 1}) - J^{R}(\pi^{k}) \geq -\frac{\sqrt{2\left(\delta + {b^{+}}^{2}\alpha_{\mathrm{KL}}\right)}\,\gamma\,\epsilon_{R}^{\pi^{k + 1}}}{(1 - \gamma)^{2}},
+$$
+
+$$
+\text{and} \quad J^{C}(\pi^{k + 1}) \leq h + \frac{\sqrt{2\left(\delta + {b^{+}}^{2}\alpha_{\mathrm{KL}}\right)}\,\gamma\,\epsilon_{C}^{\pi^{k + 1}}}{(1 - \gamma)^{2}},
+$$
+
+where $\delta$ is the step size in the reward improvement step.
+
+Proof. See the supplemental material.
+
+
+
+Theorem 3.2 indicates that when the policy has greater constraint violation ($b^{+}$ increases), its worst-case performance degradation increases. Note that Theorem 3.2 reduces to Theorem 3.1 if the current policy $\pi^k$ satisfies the constraint ($b^{+} = 0$). The proofs of Theorem 3.1 and Theorem 3.2 follow from the fact that the projection of the policy is non-expansive, i.e., the distance between the projected policies is no greater than the distance between the unprojected policies. This allows us to bound the KL divergence between the current policy and the new policy.
+
+# 4 PCPO UPDATES
+
+For a large neural network policy with many parameters, it is impractical to directly solve the PCPO updates in Problem (2) and Problem (3) due to the computational cost. However, with a small step size $\delta$, we can approximate the reward function and the constraints with a first-order expansion, and approximate the KL divergence constraint in the reward improvement step and the KL divergence measure in the projection step with a second-order expansion. We now make several definitions:
+
+$\pmb{g} \doteq \nabla_{\pmb{\theta}} \mathbb{E}_{s \sim d^{\pi^k},\, a \sim \pi}[A_R^{\pi^k}(s, a)]$ is the gradient of the reward advantage function,
+
+$\pmb{a} \doteq \nabla_{\pmb{\theta}} \mathbb{E}_{s \sim d^{\pi^k},\, a \sim \pi}[A_C^{\pi^k}(s, a)]$ is the gradient of the cost advantage function,
+
+$\pmb{H}_{i,j} \doteq \frac{\partial^2 \mathbb{E}_{s \sim d^{\pi^k}}\left[D_{\mathrm{KL}}(\pi||\pi^k)[s]\right]}{\partial\pmb{\theta}_i\,\partial\pmb{\theta}_j}$ is the Hessian of the KL divergence constraint ($\pmb{H}$ is also called the Fisher information matrix; it is symmetric positive semi-definite), $b \doteq J^{C}(\pi^{k}) - h$ is the constraint violation of the policy $\pi^k$, and $\pmb{\theta}$ denotes the policy parameters.
+
+Reward Improvement Step. We linearize the objective function at $\pi^k$ subject to second order approximation of the KL divergence constraint in order to obtain the following updates:
+
+$$
+\begin{array}{l}
+\boldsymbol{\theta}^{k + \frac{1}{2}} = \underset{\boldsymbol{\theta}}{\arg\max}\ \boldsymbol{g}^{T}(\boldsymbol{\theta} - \boldsymbol{\theta}^{k}) \\
+\text{s.t.} \quad \frac{1}{2}\left(\boldsymbol{\theta} - \boldsymbol{\theta}^{k}\right)^{T}\boldsymbol{H}\left(\boldsymbol{\theta} - \boldsymbol{\theta}^{k}\right) \leq \delta. \tag{4}
+\end{array}
+$$
+
+Algorithm 1 Projection-Based Constrained Policy Optimization (PCPO)
+Initialize policy $\pi^0 = \pi (\pmb{\theta}^0)$
+for $k = 0, 1, 2, \dots$ do
+    Run $\pi^k = \pi (\pmb{\theta}^k)$ and store trajectories in $\mathcal{D}$
+    Compute $\pmb{g}$, $\pmb{a}$, $\pmb{H}$, and $b$ using $\mathcal{D}$
+    Obtain $\pmb{\theta}^{k + 1}$ using the update in Eq. (6)
+    Empty $\mathcal{D}$
+end for
+
+Projection Step. If the projection is defined in the parameter space, we can directly use the $L^2$ norm projection. On the other hand, if the projection is defined in the probability space, we can use the KL divergence projection, which can be approximated through a second-order expansion. Again, we linearize the cost constraint at $\pi^k$. This gives the following update for the projection step:
+
+$$
+\boldsymbol{\theta}^{k + 1} = \underset{\boldsymbol{\theta}}{\arg\min}\ \frac{1}{2}\left(\boldsymbol{\theta} - \boldsymbol{\theta}^{k + \frac{1}{2}}\right)^{T}\boldsymbol{L}\left(\boldsymbol{\theta} - \boldsymbol{\theta}^{k + \frac{1}{2}}\right)
+$$
+
+$$
+\text{s.t.} \quad \boldsymbol{a}^{T}\left(\boldsymbol{\theta} - \boldsymbol{\theta}^{k}\right) + b \leq 0, \tag{5}
+$$
+
+where $\pmb{L} = \pmb{I}$ for the $L^2$ norm projection, and $\pmb{L} = \pmb{H}$ for the KL divergence projection. One may argue that a linear approximation to the constraint set is not enough to ensure constraint satisfaction, since the true constraint set may be non-convex. However, if the step size $\delta$ is small, the linearization is accurate enough to approximate the constraint set locally.
+
+We solve Problem (4) and Problem (5) using convex programming (See the supplemental material for the derivation). For each policy update, we have
+
+$$
+\boldsymbol {\theta} ^ {k + 1} = \boldsymbol {\theta} ^ {k} + \sqrt {\frac {2 \delta}{\boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g}}} \boldsymbol {H} ^ {- 1} \boldsymbol {g} - \max \left(0, \frac {\sqrt {\frac {2 \delta}{\boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g}}} \boldsymbol {a} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g} + b}{\boldsymbol {a} ^ {T} \boldsymbol {L} ^ {- 1} \boldsymbol {a}}\right) \boldsymbol {L} ^ {- 1} \boldsymbol {a}. \tag {6}
+$$
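For small parameter vectors, the update in Eq. (6) can be transcribed directly in NumPy, computing $\pmb{H}^{-1}\pmb{g}$ via a dense linear solve rather than the conjugate gradient method needed at scale; `g`, `a`, `H`, and `b` are assumed to be precomputed as defined above. This is a sketch, not the paper's implementation:

```python
import numpy as np

def pcpo_update(theta, g, a, H, b, delta, projection="kl"):
    """One PCPO step (Eq. 6): a TRPO-style reward step, then a projection
    onto the linearized constraint a^T (theta - theta^k) + b <= 0."""
    L = H if projection == "kl" else np.eye(len(theta))
    Hinv_g = np.linalg.solve(H, g)
    step = np.sqrt(2 * delta / (g @ Hinv_g))  # trust-region step scale
    Linv_a = np.linalg.solve(L, a)
    correction = max(0.0, (step * (a @ Hinv_g) + b) / (a @ Linv_a))
    return theta + step * Hinv_g - correction * Linv_a
```

Note that when the intermediate policy already satisfies the linearized constraint, the `max(0, ...)` term vanishes and the step reduces to the pure reward improvement step.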
+
+We assume that $\pmb{H}$ does not have 0 as an eigenvalue and hence is invertible. PCPO requires inverting $\pmb{H}$, which is impractical for huge neural network policies. Hence we use the conjugate gradient method (Schulman et al., 2015a). Algorithm 1 shows the pseudocode. (See the supplemental material for a discussion of the tradeoff between the approximation error and the computational efficiency of the conjugate gradient method.)
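The conjugate gradient method avoids forming $\pmb{H}$ explicitly by requiring only matrix-vector products $\pmb{H}v$ (e.g., Fisher-vector products). A generic sketch of the standard algorithm, not PCPO-specific code:

```python
import numpy as np

def conjugate_gradient(hvp, g, iters=10, tol=1e-10):
    """Approximately solve H x = g given only the product function hvp(v) = H v."""
    x = np.zeros_like(g)
    r = g.copy()  # residual g - H x (x = 0 initially)
    p = r.copy()
    rdotr = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rdotr / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        new_rdotr = r @ r
        if new_rdotr < tol:
            break
        p = r + (new_rdotr / rdotr) * p
        rdotr = new_rdotr
    return x
```

In practice the product `hvp` is computed by differentiating the KL divergence twice against a vector, so $\pmb{H}$ is never materialized.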
+
+Analysis of PCPO Update Rule. For a problem with multiple constraints, we can extend the update in Eq. (6) by using alternating projections. This approach finds a solution in the intersection of multiple constraint sets by sequentially projecting onto each of the sets. The update rule in Eq. (6) shows that the difference between PCPO with KL divergence and $L^2$ norm projections is the cost update direction, leading to a difference in reward improvement. These two projections converge to different stationary points with different convergence rates, related to the smallest and largest singular values of the Fisher information matrix, as shown in Theorem 4.1. For our analysis, we make the following assumptions: we minimize the negative reward objective function $f: \mathbb{R}^n \to \mathbb{R}$ (following the convention in the optimization literature of minimizing the objective), and $f$ is $L$-smooth and twice continuously differentiable over the closed and convex constraint set $\mathcal{C}$.
+
+Theorem 4.1 (Reward Improvement Under $L^2$ Norm and KL Divergence Projections). Let $\eta \doteq \sqrt{\frac{2\delta}{g^T H^{-1} g}}$ in Eq. (6), where $\delta$ is the step size for reward improvement, $g$ is the gradient of $f$ , and $H$ is the Fisher information matrix. Let $\sigma_{\max}(H)$ be the largest singular value of $H$ , and $a$ be the gradient of cost advantage function in Eq. (6). Then PCPO with KL divergence projection converges to a stationary point either inside the constraint set or in the boundary of the constraint set. In the latter case, the Lagrangian constraint $g = -\alpha a, \alpha \geq 0$ holds. Moreover, at step $k + 1$ the objective value satisfies
+
+$$
+f(\boldsymbol{\theta}^{k + 1}) \leq f(\boldsymbol{\theta}^{k}) + \left\|\boldsymbol{\theta}^{k + 1} - \boldsymbol{\theta}^{k}\right\|^{2}_{-\frac{1}{\eta}\boldsymbol{H} + \frac{L}{2}\boldsymbol{I}}.
+$$
+
+PCPO with $L^2$ norm projection converges to a stationary point either inside the constraint set or in the boundary of the constraint set. In the latter case, the Lagrangian constraint $\pmb{H}^{-1}\pmb{g} = -\alpha \pmb{a},\alpha \geq$
+
+0 holds. If $\sigma_{\max}(\pmb{H}) \leq 1$, then at step $k + 1$ the objective value satisfies
+
+$$
+f (\boldsymbol {\theta} ^ {k + 1}) \leq f (\boldsymbol {\theta} ^ {k}) + \left(\frac {L}{2} - \frac {1}{\eta}\right) | | \boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k} | | _ {2} ^ {2}.
+$$
+
+Proof. See the supplemental material.
+
+
+
+Theorem 4.1 shows that at a stationary point, $\pmb{g}$ points in the direction opposite to $\pmb{a}$. Further, the improvement of the objective value is affected by the singular values of the Fisher information matrix. Specifically, the objective under KL divergence projection decreases when $\frac{L\eta}{2}\pmb{I} \prec \pmb{H}$, implying that $\sigma_{\min}(\pmb{H}) > \frac{L\eta}{2}$. The objective under $L^2$ norm projection decreases when $\eta < \frac{2}{L}$, implying that the condition number of $\pmb{H}$ is upper bounded: $\frac{\sigma_{\max}(\pmb{H})}{\sigma_{\min}(\pmb{H})} < \frac{2||\pmb{g}||_2^2}{L^2\delta}$. Observing the singular values of the Fisher information matrix allows us to adaptively choose the appropriate projection and hence achieve objective improvement. In the supplemental material, we further use an example to compare the optimization trajectories and stationary points of the KL divergence and $L^2$ norm projections.
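The adaptive choice of projection can be sketched as a spectrum check on (an estimate of) the Fisher information matrix; the decision logic below is an illustrative reading of Theorem 4.1's decrease conditions, not a procedure prescribed by the paper, and the inputs `eta`, `L_smooth`, and `delta` are assumed to be known:

```python
import numpy as np

def choose_projection(H, eta, L_smooth, g, delta):
    """Heuristically pick a projection based on Theorem 4.1's decrease conditions."""
    sigma = np.linalg.svd(H, compute_uv=False)
    kl_ok = sigma.min() > L_smooth * eta / 2  # KL projection objective decreases
    cond_bound = 2 * (g @ g) / (L_smooth ** 2 * delta)
    l2_ok = eta < 2 / L_smooth and sigma.max() / sigma.min() < cond_bound
    if kl_ok:
        return "kl"
    return "l2" if l2_ok else "kl"
```

A well-conditioned Fisher matrix favors the KL projection, while an ill-conditioned one with a small smallest singular value pushes the check toward the $L^2$ projection.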
+
+# 5 RELATED WORK
+
+Policy Learning with Constraints. Learning constraint-satisfying policies has been explored in the context of safe RL (Garcia & Fernandez, 2015). The agent learns policies either (1) by exploration of the environment (Achiam et al., 2017; Tessler et al., 2018; Chow et al., 2017) or (2) through expert demonstrations (Ross et al., 2011; Rajeswaran et al., 2017; Gao et al., 2018). However, using expert demonstrations requires humans to label the constraint-satisfying behavior for every possible situation, and the scalability of such rule-based approaches is an issue since many real autonomous systems, such as self-driving cars and industrial robots, are inherently complex. To overcome this issue, PCPO uses the first approach, in which the agent learns by trial and error. To prevent the agent from exhibiting constraint-violating behavior while exploring the environment, PCPO uses the projection onto the constraint set to ensure constraint satisfaction throughout learning.
+
+Constraint Satisfaction by Projections. Using a projection onto a constraint set has been explored for general constrained optimization in other contexts. For example, Akrour et al. (2019) project the policy from a parameter space onto the constraint set, which ensures that the updated policy stays close to the previous policy. In contrast, we examine constraints that are defined in terms of states and actions. Similarly, Chow et al. (2019) propose $\theta$-projection, which projects the policy parameters $\theta$ onto the constraint set. However, no provable guarantees are provided. Moreover, the
+
+
+Figure 2: Update procedures for CPO (Achiam et al., 2017). CPO computes the update by simultaneously considering the trust region (light green) and the constraint set (light orange). CPO becomes infeasible when these two sets do not intersect.
+
+problem is formulated by adding the weighted constraint to the reward objective function. Since the weight must be tuned, this incurs the cost of hyperparameter tuning. In contrast, PCPO eliminates the cost of the hyperparameter tuning, and provides provable guarantees on learning constraint-satisfying policies.
+
+Comparison to CPO (Achiam et al., 2017). Perhaps the closest work to ours is that of Achiam et al. (2017), who propose the constrained policy optimization (CPO) algorithm to solve the following:
+
+$$
+\boldsymbol{\theta}^{k + 1} = \underset{\boldsymbol{\theta}}{\arg\max}\ \boldsymbol{g}^{T}(\boldsymbol{\theta} - \boldsymbol{\theta}^{k}) \quad \text{s.t.} \quad \frac{1}{2}(\boldsymbol{\theta} - \boldsymbol{\theta}^{k})^{T}\boldsymbol{H}(\boldsymbol{\theta} - \boldsymbol{\theta}^{k}) \leq \delta,\ \boldsymbol{a}^{T}(\boldsymbol{\theta} - \boldsymbol{\theta}^{k}) + b \leq 0. \tag{7}
+$$
+
+CPO simultaneously considers the trust region and the constraint, and uses a line search to select a step size (illustrated in Fig. 2). The update rule of CPO becomes infeasible when the current policy violates the constraint ($b > 0$). CPO recovers by replacing Problem (7) with an update that purely decreases the constraint value: $\theta^{k + 1} = \theta^k -\sqrt{\frac{2\delta}{a^TH^{-1}a}} H^{-1}a$. This update rule may lead
+
+
+(a) Gather (b) Circle (c) Grid (d) Bottleneck
+Figure 3: The gather, circle, grid and bottleneck tasks. (a) Gather task: the agent is rewarded for gathering green apples but is constrained to collect a limited number of red fruit (Achiam et al., 2017). (b) Circle task: the agent is rewarded for moving in a specified wide circle, but is constrained to stay within a safe region smaller than the radius of the circle (Achiam et al., 2017). (c) Grid task: the agent controls the traffic lights in a grid road network and is rewarded for high throughput but constrained to let lights stay red for at most 7 consecutive seconds (Vinitsky et al., 2018). (d) Bottleneck task: the agent controls a set of autonomous vehicles (shown in red) in a traffic merge situation and is rewarded for achieving high throughput but constrained to ensure that human-driven vehicles (shown in white) have low speed for no more than 10 seconds (Vinitsky et al., 2018).
+
+to slow progress in learning constraint-satisfying policies. In contrast, PCPO first optimizes the reward and then uses the projection to satisfy the constraint. This ensures a feasible solution, allowing the agent to improve the reward and ensure constraint satisfaction simultaneously.
+
+# 6 EXPERIMENTS
+
+Tasks. We compare the proposed algorithm with existing approaches on four control tasks in total: two tasks with safety constraints ((a) and (b) in Fig. 3), and two tasks with fairness constraints ((c) and (d) in Fig. 3). These tasks are briefly described in the caption of Fig. 3. The first two tasks, Gather and Circle, are MuJoCo environments with state space constraints introduced by Achiam et al. (2017). The other two tasks, Grid and Bottleneck, are traffic management problems where the agent controls either a traffic light or a fleet of autonomous vehicles. These are especially challenging since the dimensions of the state and action spaces are larger, and the dynamics of the environment are inherently complex.
+
+Baselines. We compare PCPO with four baselines outlined below.
+
+(1) Constrained Policy Optimization (CPO) (Achiam et al., 2017).
+(2) Primal-dual Optimization (PDO) (Chow et al., 2017). In PDO, the weight (dual variables) is learned based on the current constraint satisfaction. A PDO policy update solves:
+
+$$
+\boldsymbol {\theta} ^ {k + 1} = \underset {\boldsymbol {\theta}} {\arg \max } \quad \boldsymbol {g} ^ {T} \left(\boldsymbol {\theta} - \boldsymbol {\theta} ^ {k}\right) + \lambda^ {k} \boldsymbol {a} ^ {T} \left(\boldsymbol {\theta} - \boldsymbol {\theta} ^ {k}\right), \tag {8}
+$$
+
+where $\lambda^k$ is updated using $\lambda^{k + 1} = \lambda^k +\beta (J^C (\pi^k) - h)$ . Here $\beta$ is a fixed learning rate.
+
+(3) Fixed-point Policy Optimization (FPO). A variant of PDO that solves Eq. (8) using a constant $\lambda$ .
+(4) Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a). The TRPO policy update is an unconstrained one:
+
+$$
+\boldsymbol {\theta} ^ {k + 1} = \boldsymbol {\theta} ^ {k} + \sqrt {\frac {2 \delta}{g ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g}}} \boldsymbol {H} ^ {- 1} \boldsymbol {g}.
+$$
+
+Note that TRPO ignores any constraints. We include it to serve as an upper bound baseline on the reward performance.
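Among these baselines, the PDO dual update $\lambda^{k+1} = \lambda^k + \beta(J^C(\pi^k) - h)$ amounts to gradient ascent on the dual variable; a minimal sketch (the clipping at zero is a common safeguard we add, not part of the stated rule):

```python
def pdo_dual_update(lmbda, J_C, h, beta):
    """Raise the penalty weight when the constraint is violated (J^C > h);
    lower it, keeping it nonnegative, when the constraint is satisfied."""
    return max(0.0, lmbda + beta * (J_C - h))
```

FPO corresponds to skipping this update entirely and keeping $\lambda$ fixed, which is why its performance hinges on the initial choice of $\lambda$.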
+
+Since our main focus is to compare PCPO with the state-of-the-art algorithm, CPO, the PDO and FPO baselines are not shown in the ant circle, ant gather, grid and bottleneck tasks for clarity.
+
+Experimental Details. For the gather and circle tasks we test two distinct agents: a point-mass $(S \subseteq \mathbb{R}^9, A \subseteq \mathbb{R}^2)$ and an ant robot $(S \subseteq \mathbb{R}^{32}, A \subseteq \mathbb{R}^8)$. The agent in the grid task has $S \subseteq \mathbb{R}^{156}$, $A \subseteq \mathbb{R}^4$, and the agent in the bottleneck task has $S \subseteq \mathbb{R}^{141}$, $A \subseteq \mathbb{R}^{20}$. For the simulations in the gather and circle tasks, we use a neural network with two hidden layers of size (64, 32) to represent
+
+(a) Point circle (b) Point gather (c) Ant circle (d) Ant gather (e) Grid (f) Bottleneck
+Figure 4: The values of the discounted reward and the undiscounted constraint value (the total number of constraint violations) along policy updates for the tested algorithms and task pairs. The solid line is the mean and the shaded area is the standard deviation over five runs. The dashed line in the cost constraint plot is the cost constraint threshold $h$. The curves for the baseline oracle, TRPO, indicate the reward and constraint violation values when the constraint is ignored. (Best viewed in color; the legend is shared across all the figures.)
+
+Gaussian policies. For the simulations in the grid and bottleneck tasks, we use neural networks with two hidden layers of size (16, 16) and (50, 25), respectively, to represent Gaussian policies. In the experiments, since the step size is small, we reuse the Fisher information matrix of the reward improvement step in the KL projection step to reduce the computational cost. The step size $\delta$ is set to $10^{-4}$ for all tasks and all tested algorithms. For each task, we conduct 5 runs to obtain the mean and standard deviation of both the reward and the constraint value over the policy updates. The experiments are implemented in rllab (Duan et al., 2016), a tool for developing and evaluating RL algorithms. See the supplemental material for further details of the experiments.
+
+
+(a) Point circle
+
+
+(b) Point gather
+Figure 5: The value of the discounted reward versus the cumulative constraint value for the tested algorithms and task pairs. See the supplemental material for learning curves in the other tasks. PCPO achieves less constraint violation under the same reward improvement compared to the other algorithms.
+
+Overall Performance. The learning curves of the discounted reward and the undiscounted constraint value (the total number of constraint violations) over policy updates are shown for all tested algorithms and tasks in Fig. 4. The dashed line in the constraint figure is the cost constraint threshold $h$. The curves for the baseline oracle, TRPO, indicate the reward and constraint value when the constraint is ignored. Overall, we find that PCPO improves the reward while achieving the fastest constraint satisfaction in all tasks. In particular, PCPO is the only algorithm that learns constraint-satisfying policies across all the tasks. Moreover, we observe that (1) CPO has more constraint violation than PCPO, (2) PDO is too conservative in optimizing the reward, and (3) FPO requires a significant effort to select a good value of $\lambda$.
+
+We also observe that the Grid and Bottleneck tasks exhibit slightly more constraint violation than easier tasks such as point circle and point gather. This is due to the complexity of the policy behavior and the non-convexity of the constraint set. However, even with a linear approximation of the constraint set, PCPO still outperforms CPO, with $85.15\%$ less and 5.42 times less constraint violation in the Grid and Bottleneck tasks, respectively.
+
+These observations suggest that the projection step in PCPO drives the agent to learn constraint-satisfying policies within a few policy updates, giving PCPO an advantage in applications. To show that PCPO achieves the same reward with less constraint violation, we examine the reward versus the cumulative constraint value for the tested algorithms in the point circle and point gather tasks, shown in Fig. 5. We observe that PCPO significantly outperforms CPO, with 66 times and 15 times less constraint violation under the same reward improvement in the point circle and point gather tasks, respectively. This observation suggests that PCPO enables the agent to cautiously explore the environment under the constraints.
+
+Comparison of PCPO with KL Divergence vs. $L^2$ Norm Projections. We observe that PCPO with $L^2$ norm projection satisfies the constraint more reliably than PCPO with KL divergence projection. In addition, PCPO with $L^2$ norm projection tends to exhibit reward fluctuation (point circle, ant circle, and ant gather tasks), while PCPO with KL divergence projection tends to have more stable reward improvement (all the tasks).
+
+These observations indicate that under the $L^2$ norm projection, since the gradient of the constraint is not preconditioned by the Fisher information matrix, it is not aligned with the gradient of the reward, which reduces the reward improvement. However, when the Fisher information matrix is ill-conditioned or not well-estimated, especially in a high-dimensional policy space, a bad constraint update direction may hinder constraint satisfaction (ant circle, ant gather, grid and bottleneck tasks). In addition, since the stationary points of the KL divergence and $L^2$ norm projections differ, they converge to policies with different rewards (observe that PCPO with $L^2$ norm projection has a higher reward than PCPO with KL divergence projection around 2250 iterations in the ant circle task, and a lower reward in the point gather task).
+
+Discussion of PDO and FPO. For the PDO baseline, we see that its constraint values fluctuate, especially in the point circle task. This phenomenon suggests that PDO cannot adjust the weight $\lambda^k$ quickly enough to meet the constraint threshold, which hinders the efficiency of learning constraint-satisfying policies. If the learning rate $\beta$ is too big, the agent becomes too conservative in improving the reward. For FPO, we see that it learns near constraint-satisfying policies with slightly larger reward improvement than PDO. However, in practice FPO requires substantial engineering effort to select a good value of $\lambda$. Since PCPO requires no hyperparameter tuning, it has the advantage over PDO and FPO of robustly learning constraint-satisfying policies.
+
+# 7 CONCLUSION
+
+We address the problem of finding constraint-satisfying policies. The proposed algorithm, projection-based constrained policy optimization (PCPO), optimizes the reward function while using projections to ensure constraint satisfaction. This update rule keeps the optimization problem of each update feasible, addressing an infeasibility issue of state-of-the-art approaches. The algorithm achieves comparable or superior performance to state-of-the-art approaches in terms of reward improvement and constraint satisfaction in all cases. We further analyze the convergence of PCPO, and find that certain tasks may prefer either the KL divergence projection or the $L^2$ norm projection. Future work will consider: (1) examining the Fisher information matrix to iteratively prescribe the choice of projection for each policy update, and hence robustly learn constraint-satisfying policies with more reward improvement, and (2) using expert demonstrations or other domain knowledge to reduce the sample complexity.
+
+# ACKNOWLEDGMENTS
+
+The authors would like to thank the anonymous reviewers and the area chair for their comments. Tsung-Yen Yang thanks Siemens Corporation, Corporate Technology for their support.
+
+# REFERENCES
+
+Joshua Achiam, David Held, Aviv Tamar, and Pieter Abbeel. Constrained policy optimization. In Proceedings of International Conference on Machine Learning, pp. 22-31, 2017.
+Riad Akrour, Joni Pajarinen, Gerhard Neumann, and Jan Peters. Projections for approximate policy iteration algorithms. In Proceedings of International Conference on Machine Learning, pp. 181-190, 2019.
+AlphaStar. Alphastar: Mastering the real-time strategy game starcraft ii, 2019. URL https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii.
+Eitan Altman. Constrained Markov decision processes, volume 7. CRC Press, 1999.
+Yinlam Chow, Mohammad Ghavamzadeh, Lucas Janson, and Marco Pavone. Risk-constrained reinforcement learning with percentile risk criteria. Journal of Machine Learning Research, 18(1): 6070-6120, 2017.
+Yinlam Chow, Ofir Nachum, Aleksandra Faust, Mohammad Ghavamzadeh, and Edgar Duenez-Guzman. Lyapunov-based safe policy optimization for continuous control. arXiv preprint arXiv:1901.10031, 2019.
+Yan Duan, Xi Chen, Rein Houthooft, John Schulman, and Pieter Abbeel. Benchmarking deep reinforcement learning for continuous control. In Proceedings of International Conference on Machine Learning, pp. 1329-1338, 2016.
+Yang Gao, Ji Lin, Fisher Yu, Sergey Levine, and Trevor Darrell. Reinforcement learning from imperfect demonstrations. arXiv preprint arXiv:1802.05313, 2018.
+Javier Garcia and Fernando Fernandez. A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research, 16(1):1437-1480, 2015.
+
+Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In Proceedings of International Conference on Machine Learning, pp. 267-274, 2002.
+Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(1):1334-1373, 2016.
+Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
+Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, and Sergey Levine. Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. arXiv preprint arXiv:1709.10087, 2017.
+Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of International Conference on Artificial Intelligence and Statistics, pp. 627-635, 2011.
+John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of International Conference on Machine Learning, pp. 1889-1897, 2015a.
+John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015b.
+Jonathan R. Shewchuk. An introduction to the conjugate gradient method without the agonizing pain, 1994.
+David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017.
+Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, pp. 1057-1063, 2000.
+Chen Tessler, Daniel J. Mankowitz, and Shie Mannor. Reward constrained policy optimization. arXiv preprint arXiv:1805.11074, 2018.
+Eugene Vinitsky, Aboudy Kreidieh, Luc Le Flem, Nishant Kheterpal, Kathy Jang, Fangyu Wu, Richard Liaw, Eric Liang, and Alexandre M. Bayen. Benchmarks for reinforcement learning in mixed-autonomy traffic. In Proceedings of Conference on Robot Learning, pp. 399-409, 2018.
+
+# S SUPPLEMENTARY MATERIALS
+
+# S.1 PROOF OF THEOREM 3.1: PERFORMANCE BOUND ON UPDATING THE CONSTRAINT-SATISFYING POLICY
+
+To prove the policy performance bound when the current policy is feasible (i.e., constraint-satisfying), we first bound the KL divergence between $\pi^k$ and $\pi^{k + 1}$ under the KL divergence projection. We then prove our main theorem on the worst-case performance degradation.
+
+Lemma S.1. If the current policy $\pi^k$ satisfies the constraint, the constraint set is closed and convex, and the KL divergence constraint for the first step is $\mathbb{E}_{s\sim d^{\pi^k}}\left[D_{\mathrm{KL}}(\pi^{k + \frac{1}{2}}||\pi^k)[s]\right]\leq \delta$, where $\delta$ is the step size in the reward improvement step, then under the KL divergence projection, we have
+
+$$
+\mathbb{E}_{s \sim d^{\pi^{k}}}\left[ D_{\mathrm{KL}}\left(\pi^{k + 1} || \pi^{k}\right)[s] \right] \leq \delta .
+$$
+
+Proof. By the Bregman divergence projection inequality, with $\pi^k$ in the constraint set and $\pi^{k + 1}$ the projection of $\pi^{k + \frac{1}{2}}$ onto the constraint set, we have
+
+$$
+\begin{array}{l} \mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \left[ D _ {\mathrm {K L}} \left(\pi^ {k} | | \pi^ {k + \frac {1}{2}}\right) [ s ] \right] \geq \mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \left[ D _ {\mathrm {K L}} \left(\pi^ {k} | | \pi^ {k + 1}\right) [ s ] \right] + \mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \left[ D _ {\mathrm {K L}} \left(\pi^ {k + 1} | | \pi^ {k + \frac {1}{2}}\right) [ s ] \right] \\ \Rightarrow \delta \geq \mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \left[ D _ {\mathrm {K L}} (\pi^ {k} | | \pi^ {k + \frac {1}{2}}) [ s ] \right] \geq \mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \left[ D _ {\mathrm {K L}} (\pi^ {k} | | \pi^ {k + 1}) [ s ] \right]. \\ \end{array}
+$$
+
+The derivation uses the fact that the KL divergence is non-negative. Since the KL divergence is asymptotically symmetric when updating the policy within a local neighbourhood, we have
+
+$$
+\delta \geq \mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \left[ D _ {\mathrm {K L}} (\pi^ {k + \frac {1}{2}} | | \pi^ {k}) [ s ] \right] \geq \mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \left[ D _ {\mathrm {K L}} (\pi^ {k + 1} | | \pi^ {k}) [ s ] \right].
+$$
+
+
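The Bregman (generalized Pythagorean) projection inequality used in the first line of the proof can be checked numerically. The sketch below works with categorical distributions and a constraint set that caps the first coordinate, for which the KL projection has a closed form; the particular distributions and threshold are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

def kl(p, q):
    # D_KL(p || q) for categorical distributions.
    return float(np.sum(p * np.log(p / q)))

def kl_project(p, c):
    # KL projection of p onto the closed convex set C = {q : q[0] <= c},
    # i.e., argmin_{q in C} D_KL(q || p). For this set the solution caps
    # the first coordinate at c and rescales the remaining mass.
    if p[0] <= c:
        return p.copy()
    q = p * (1.0 - c) / (1.0 - p[0])
    q[0] = c
    return q

pi_half = np.array([0.5, 0.3, 0.2])   # plays the role of pi^{k+1/2}
pi_next = kl_project(pi_half, 0.2)    # pi^{k+1}, the projection onto C
pi_k = np.array([0.1, 0.45, 0.45])    # a feasible pi^k already in C

# Bregman projection inequality:
# D(pi^k || pi^{k+1/2}) >= D(pi^k || pi^{k+1}) + D(pi^{k+1} || pi^{k+1/2})
lhs = kl(pi_k, pi_half)
rhs = kl(pi_k, pi_next) + kl(pi_next, pi_half)
```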
+
+Now we use Lemma S.1 to prove our main theorem.
+
+Theorem S.2. Define $\epsilon_R^{\pi^{k + 1}} \doteq \max_s\left|\mathbb{E}_{a\sim \pi^{k + 1}}[A_R^{\pi^k}(s,a)]\right|$ and $\epsilon_C^{\pi^{k + 1}} \doteq \max_{s}\left|\mathbb{E}_{a\sim \pi^{k + 1}}[A_C^{\pi^k}(s,a)]\right|$. If the current policy $\pi^k$ satisfies the constraint, then under the KL divergence projection, the lower bound on reward improvement and the upper bound on constraint violation for each policy update are
+
+$$
+J^{R}\left(\pi^{k + 1}\right) - J^{R}\left(\pi^{k}\right) \geq -\frac{\sqrt{2\delta}\gamma \epsilon_{R}^{\pi^{k + 1}}}{(1 - \gamma)^{2}}, \quad \text{and} \quad J^{C}\left(\pi^{k + 1}\right) \leq h + \frac{\sqrt{2\delta}\gamma \epsilon_{C}^{\pi^{k + 1}}}{(1 - \gamma)^{2}},
+$$
+
+where $\delta$ is the step size in the reward improvement step.
+
+Proof. By the theorem in Achiam et al. (2017) and Lemma S.1, we have the following reward degradation bound for each policy update:
+
+$$
+\begin{array}{l} J^{R}(\pi^{k + 1}) - J^{R}(\pi^{k})\geq \frac{1}{1 - \gamma}\mathbb{E}_{\substack{s\sim d^{\pi^{k}}\\ a\sim \pi^{k + 1}}}\Bigl[A_{R}^{\pi^{k}}(s,a) - \frac{2\gamma\epsilon_{R}^{\pi^{k + 1}}}{1 - \gamma}\sqrt{\frac{1}{2}D_{\mathrm{KL}}(\pi^{k + 1}||\pi^{k})[s]} \Bigr] \\ \geq \frac{1}{1 - \gamma}\mathbb{E}_{\substack{s\sim d^{\pi^{k}}\\ a\sim \pi^{k + 1}}}\Bigl[ - \frac{2\gamma\epsilon_{R}^{\pi^{k + 1}}}{1 - \gamma}\sqrt{\frac{1}{2}D_{\mathrm{KL}}(\pi^{k + 1}||\pi^{k})[s]} \Bigr] \\ \geq - \frac {\sqrt {2 \delta} \gamma \epsilon_ {R} ^ {\pi^ {k + 1}}}{(1 - \gamma) ^ {2}}. \\ \end{array}
+$$
+
+Again, we have the following constraint violation bound for each policy update:
+
+$$
+J^{C}\left(\pi^{k}\right) + \frac{1}{1 - \gamma} \mathbb{E}_{\substack{s \sim d^{\pi^{k}} \\ a \sim \pi^{k + 1}}}\left[ A_{C}^{\pi^{k}}(s, a) \right] \leq h, \tag{9}
+$$
+
+and
+
+$$
+J ^ {C} \left(\pi^ {k + 1}\right) - J ^ {C} \left(\pi^ {k}\right) \leq \frac {1}{1 - \gamma} \mathbb {E} _ {\substack {s \sim d ^ {\pi^ {k}} \\ a \sim \pi^ {k + 1}}} \left[ A _ {C} ^ {\pi^ {k}} (s, a) + \frac {2 \gamma \epsilon_ {C} ^ {\pi^ {k + 1}}}{1 - \gamma} \sqrt {\frac {1}{2} D _ {\mathrm {K L}} \left(\pi^ {k + 1} \mid \mid \pi^ {k}\right) [ s ]} \right]. \tag{10}
+$$
+
+Combining Eq. (9) and Eq. (10), we have
+
+$$
+\begin{array}{l} J^{C}(\pi^{k + 1})\leq h + \frac{1}{1 - \gamma}\mathbb{E}_{\substack{s\sim d^{\pi^{k}}\\ a\sim \pi^{k + 1}}}\left[\frac{2\gamma\epsilon_{C}^{\pi^{k + 1}}}{1 - \gamma}\sqrt{\frac{1}{2}D_{\mathrm{KL}}(\pi^{k + 1}||\pi^{k})[s]} \right] \\ \leq h + \frac {\sqrt {2 \delta} \gamma \epsilon_ {C} ^ {\pi^ {k + 1}}}{(1 - \gamma) ^ {2}}. \\ \end{array}
+$$
+
+
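To get a feel for the magnitudes in Theorem S.2, the two bounds can be evaluated directly. The numbers below (step size, discount, advantage ranges, threshold) are illustrative assumptions, not values from the paper's experiments.

```python
import math

def pcpo_feasible_bounds(delta, gamma, eps_R, eps_C, h):
    # Theorem S.2: worst-case reward degradation and constraint violation
    # per update when the current policy satisfies the constraint.
    slack = math.sqrt(2 * delta) * gamma / (1 - gamma) ** 2
    return -slack * eps_R, h + slack * eps_C

reward_lb, constraint_ub = pcpo_feasible_bounds(
    delta=1e-4, gamma=0.9, eps_R=1.0, eps_C=1.0, h=0.1)
# reward_lb bounds the drop in J^R; constraint_ub bounds J^C(pi^{k+1}).
```

Note the $(1-\gamma)^{-2}$ factor: the guarantee loosens quickly as $\gamma \to 1$, so it is a worst-case bound rather than a typical-case estimate.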
+
+# S.2 PROOF OF THEOREM 3.2: PERFORMANCE BOUND ON UPDATING THE CONSTRAINT-VIOLATING POLICY
+
+To prove the policy performance bound when the current policy is infeasible (i.e., constraint-violating), we first bound the KL divergence between $\pi^k$ and $\pi^{k + 1}$ under the KL divergence projection. We then prove our main theorem on the worst-case performance degradation.
+
+Lemma S.3. If the current policy $\pi^k$ violates the constraint, the constraint set is closed and convex, and the KL divergence constraint for the first step is $\mathbb{E}_{s\sim d^{\pi^k}}\left[D_{\mathrm{KL}}(\pi^{k + \frac{1}{2}}||\pi^k)[s]\right] \leq \delta$, where $\delta$ is the step size in the reward improvement step, then under the KL divergence projection, we have
+
+$$
+\mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \left[ D _ {\mathrm {K L}} (\pi^ {k + 1} | | \pi^ {k}) [ s ] \right] \leq \delta + b ^ {+ 2} \alpha_ {\mathrm {K L}},
+$$
+
+where $\alpha_{\mathrm{KL}} \doteq \frac{1}{2a^T H^{-1} a}$ , $a$ is the gradient of the cost advantage function, $H$ is the Hessian of the KL divergence constraint, and $b^{+} \doteq \max(0, J^{C}(\pi^{k}) - h)$ .
+
+Proof. We define the sublevel set of the cost constraint function for the current infeasible policy $\pi^k$:
+
+$$
+L ^ {\pi^ {k}} = \{\pi \mid J ^ {C} (\pi^ {k}) + \mathbb {E} _ {\substack {s \sim d ^ {\pi^ {k}} \\ a \sim \pi}} [ A _ {C} ^ {\pi^ {k}} (s, a) ] \leq J ^ {C} (\pi^ {k}) \}.
+$$
+
+This implies that the current policy $\pi^k$ lies in $L^{\pi^k}$ , and $\pi^{k + \frac{1}{2}}$ is projected onto the constraint set: $\{\pi \mid J^{C}(\pi^{k}) + \mathbb{E}_{\substack{s\sim d^{\pi^{k}}\\ a\sim \pi}}[A_{C}^{\pi^{k}}(s,a)]\leq h\}$ . Next, we define the policy $\pi_l^{k + 1}$ as the projection of $\pi^{k + \frac{1}{2}}$ onto $L^{\pi^k}$ .
+
+By the Three-point Lemma, for the three policies $\pi^k,\pi^{k + 1}$, and $\pi_l^{k + 1}$, with $\varphi (\pmb {x})\doteq \sum_{i}x_{i}\log x_{i}$ (this is illustrated in Fig. 6), we have
+
+$$
+\begin{array}{l} \delta \geq \mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \left[ D _ {\mathrm {K L}} (\pi_ {l} ^ {k + 1} | | \pi^ {k}) [ s ] \right] = \mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \left[ D _ {\mathrm {K L}} (\pi^ {k + 1} | | \pi^ {k}) [ s ] \right] \\ - \mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \left[ D _ {\mathrm {K L}} (\pi^ {k + 1} | | \pi_ {l} ^ {k + 1}) [ s ] \right] \\ + \mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \left[ \left(\nabla \varphi (\pi^ {k}) - \nabla \varphi (\pi_ {l} ^ {k + 1})\right) ^ {T} (\pi^ {k + 1} - \pi_ {l} ^ {k + 1}) [ s ] \right] \\ \Rightarrow \mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \left[ D _ {\mathrm {K L}} (\pi^ {k + 1} | | \pi^ {k}) [ s ] \right] \leq \delta + \mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \left[ D _ {\mathrm {K L}} (\pi^ {k + 1} | | \pi_ {l} ^ {k + 1}) [ s ] \right] \\ - \mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \left[ \left(\nabla \varphi (\pi^ {k}) - \nabla \varphi \left(\pi_ {l} ^ {k + 1}\right)\right) ^ {T} \left(\pi^ {k + 1} - \pi_ {l} ^ {k + 1}\right) [ s ] \right]. \tag {11} \\ \end{array}
+$$
+
+The inequality $\mathbb{E}_{s\sim d^{\pi^k}}\left[D_{\mathrm{KL}}(\pi_l^{k + 1}||\pi^k)[s]\right] \leq \delta$ follows from the fact that $\pi^k$ and $\pi_l^{k + 1}$ are both in $L^{\pi^k}$, together with Lemma S.1.
+
+Figure 6: Update procedures for PCPO when the current policy $\pi^k$ is infeasible. $\pi_l^{k + 1}$ is the projection of $\pi^{k + \frac{1}{2}}$ onto the sublevel set of the constraint set. We bound the KL divergence between $\pi^k$ and $\pi^{k + 1}$.
+
+If the constraint violation of the current policy $\pi^k$ is small, i.e., $b^{+}$ is small, then $\mathbb{E}_{s\sim d^{\pi^k}}\bigl [D_{\mathrm{KL}}(\pi^{k + 1}||\pi_l^{k + 1})[s]\bigr ]$ can be approximated by a second-order expansion. By the update rule in Eq. (6), we have
+
+$$
+\begin{array}{l} \mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \left[ D _ {\mathrm {K L}} (\pi^ {k + 1} | | \pi_ {l} ^ {k + 1}) [ s ] \right] \approx \frac {1}{2} (\pmb {\theta} ^ {k + 1} - \pmb {\theta} _ {l} ^ {k + 1}) ^ {T} \pmb {H} (\pmb {\theta} ^ {k + 1} - \pmb {\theta} _ {l} ^ {k + 1}) \\ = \frac {1}{2} \left(\frac {b ^ {+}}{\boldsymbol {a} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {a}} \boldsymbol {H} ^ {- 1} \boldsymbol {a}\right) ^ {T} \boldsymbol {H} \left(\frac {b ^ {+}}{\boldsymbol {a} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {a}} \boldsymbol {H} ^ {- 1} \boldsymbol {a}\right) \\ = \frac {b ^ {+ 2}}{2 \boldsymbol {a} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {a}} \\ = b ^ {+ 2} \alpha_ {\mathrm {K L}}, \tag {12} \\ \end{array}
+$$
+
+where $\alpha_{\mathrm{KL}}\doteq \frac{1}{2\pmb{a}^T\pmb{H}^{-1}\pmb{a}}$.
+
+Since $\delta$ is small, we have $\nabla \varphi (\pi^k) - \nabla \varphi (\pi_l^{k + 1})\approx \mathbf{0}$ given $s$. Thus, the third term in Eq. (11) can be eliminated.
+
+Combining Eq. (11) and Eq. (12), we have
+
+$$
+\mathbb {E} _ {s \sim d ^ {\pi^ {k}}} \big [ D _ {\mathrm {K L}} (\pi^ {k + 1} | | \pi^ {k}) [ s ] \big ] \leq \delta + b ^ {+ 2} \alpha_ {\mathrm {K L}}.
+$$
+
+
+
+Now we use Lemma S.3 to prove our main theorem.
+
+Theorem S.4. Define $\epsilon_R^{\pi^{k+1}} \doteq \max_s \left| \mathbb{E}_{a \sim \pi^{k+1}}[A_R^{\pi^k}(s, a)] \right|$ , $\epsilon_C^{\pi^{k+1}} \doteq \max_s \left| \mathbb{E}_{a \sim \pi^{k+1}}[A_C^{\pi^k}(s, a)] \right|$ , $b^+ \doteq \max(0, J^C(\pi^k) - h)$ , and $\alpha_{\mathrm{KL}} \doteq \frac{1}{2a^TH^{-1}a}$ , where $a$ is the gradient of the cost advantage function and $H$ is the Hessian of the KL divergence constraint. If the current policy $\pi^k$ violates the constraint, then under the KL divergence projection, the lower bound on reward improvement and the upper bound on constraint violation for each policy update are
+
+$$
+J ^ {R} (\pi^ {k + 1}) - J ^ {R} (\pi^ {k}) \geq - \frac {\sqrt {2 (\delta + b ^ {+ 2} \alpha_ {\mathrm {K L}})} \gamma \epsilon_ {R} ^ {\pi^ {k + 1}}}{(1 - \gamma) ^ {2}},
+$$
+
+$$
+\text{and} \quad J^{C}(\pi^{k + 1}) \leq h + \frac{\sqrt{2(\delta + b^{+2}\alpha_{\mathrm{KL}})}\gamma \epsilon_{C}^{\pi^{k + 1}}}{(1 - \gamma)^{2}},
+$$
+
+where $\delta$ is the step size in the reward improvement step.
+
+Proof. The proof follows the same reasoning as that of Theorem S.2, with Lemma S.3 in place of Lemma S.1.
+
+
+
+Note that, to the best of our knowledge, the bounds we obtain for the infeasible case are new results.
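Lemma S.3 and Theorem S.4 differ from the feasible case only in that the effective KL radius grows from $\delta$ to $\delta + b^{+2}\alpha_{\mathrm{KL}}$. A minimal numeric sketch, with an assumed toy $H$ and $a$:

```python
import numpy as np

def enlarged_kl_radius(delta, a, H, JC, h):
    # Lemma S.3: when the current policy is infeasible, the KL radius grows
    # to delta + (b^+)^2 * alpha_KL, with alpha_KL = 1 / (2 a^T H^{-1} a)
    # and b^+ = max(0, J^C(pi^k) - h).
    b_plus = max(0.0, JC - h)
    alpha_kl = 1.0 / (2.0 * float(a @ np.linalg.solve(H, a)))
    return delta + b_plus ** 2 * alpha_kl

H = np.diag([2.0, 1.0])   # stand-in for the Hessian of the KL constraint
a = np.array([1.0, 1.0])  # stand-in for the cost-advantage gradient
radius = enlarged_kl_radius(delta=1e-4, a=a, H=H, JC=0.15, h=0.1)
# When the policy is feasible (b^+ = 0), the radius reduces to delta.
```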
+
+# S.3 PROOF OF ANALYTICAL SOLUTION TO PCPO
+
+Theorem S.5. Consider the PCPO problem. In the first step, we optimize the reward:
+
+$$
+\boldsymbol {\theta} ^ {k + \frac {1}{2}} = \underset {\boldsymbol {\theta}} {\arg \max } \quad \boldsymbol {g} ^ {T} (\boldsymbol {\theta} - \boldsymbol {\theta} ^ {k})
+$$
+
+$$
+\mathrm{s.t.} \quad \frac {1}{2} (\boldsymbol {\theta} - \boldsymbol {\theta} ^ {k}) ^ {T} \boldsymbol {H} (\boldsymbol {\theta} - \boldsymbol {\theta} ^ {k}) \leq \delta ,
+$$
+
+and in the second step, we project the policy onto the constraint set:
+
+$$
+\boldsymbol {\theta} ^ {k + 1} = \underset {\boldsymbol {\theta}} {\arg \min } \quad \frac {1}{2} \left(\boldsymbol {\theta} - \boldsymbol {\theta} ^ {k + \frac {1}{2}}\right) ^ {T} \boldsymbol {L} \left(\boldsymbol {\theta} - \boldsymbol {\theta} ^ {k + \frac {1}{2}}\right)
+$$
+
+$$
+\mathrm{s.t.} \quad \boldsymbol {a} ^ {T} (\boldsymbol {\theta} - \boldsymbol {\theta} ^ {k}) + b \leq 0,
+$$
+
+where $\pmb{g},\pmb{a},\pmb{\theta} \in \mathbb{R}^n, b,\delta \in \mathbb{R},\delta > 0$ , and $H,L \in \mathbb{R}^{n\times n}$ , $L = H$ if using the KL divergence projection, and $L = I$ if using the $L^2$ norm projection. When there is at least one strictly feasible point, the optimal solution satisfies
+
+$$
+\boldsymbol {\theta} ^ {k + 1} = \boldsymbol {\theta} ^ {k} + \sqrt {\frac {2 \delta}{\boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g}}} \boldsymbol {H} ^ {- 1} \boldsymbol {g} - \max (0, \frac {\sqrt {\frac {2 \delta}{\boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g}}} \boldsymbol {a} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g} + b}{\boldsymbol {a} ^ {T} \boldsymbol {L} ^ {- 1} \boldsymbol {a}}) \boldsymbol {L} ^ {- 1} \boldsymbol {a},
+$$
+
+assuming that $\pmb{H}$ is invertible to get a unique solution.
+
+Proof. For the first problem, since $\pmb{H}$ is the Fisher information matrix, it is guaranteed to be positive semi-definite, so the problem is a convex program with a quadratic inequality constraint. Hence if the primal problem has a feasible point, then Slater's condition is satisfied and strong duality holds. Let $\theta^{*}$ and $\lambda^{*}$ denote the solutions to the primal and dual problems, respectively. In addition, the primal objective function is continuously differentiable. Hence the Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient for the optimality of $\theta^{*}$ and $\lambda^{*}$. We now form the Lagrangian:
+
+$$
+\mathcal {L} (\boldsymbol {\theta}, \lambda) = - \boldsymbol {g} ^ {T} (\boldsymbol {\theta} - \boldsymbol {\theta} ^ {k}) + \lambda \left(\frac {1}{2} (\boldsymbol {\theta} - \boldsymbol {\theta} ^ {k}) ^ {T} \boldsymbol {H} (\boldsymbol {\theta} - \boldsymbol {\theta} ^ {k}) - \delta\right).
+$$
+
+And we have the following KKT conditions:
+
+$$
+- \boldsymbol {g} + \lambda^ {*} \boldsymbol {H} \boldsymbol {\theta} ^ {*} - \lambda^ {*} \boldsymbol {H} \boldsymbol {\theta} ^ {k} = 0 \quad \nabla_ {\boldsymbol {\theta}} \mathcal {L} \left(\boldsymbol {\theta} ^ {*}, \lambda^ {*}\right) = 0 \tag {13}
+$$
+
+$$
+\frac {1}{2} \left(\boldsymbol {\theta} ^ {*} - \boldsymbol {\theta} ^ {k}\right) ^ {T} \boldsymbol {H} \left(\boldsymbol {\theta} ^ {*} - \boldsymbol {\theta} ^ {k}\right) - \delta = 0 \quad \nabla_ {\lambda} \mathcal {L} \left(\boldsymbol {\theta} ^ {*}, \lambda^ {*}\right) = 0 \tag {14}
+$$
+
+$$
+\frac {1}{2} \left(\boldsymbol {\theta} ^ {*} - \boldsymbol {\theta} ^ {k}\right) ^ {T} \boldsymbol {H} \left(\boldsymbol {\theta} ^ {*} - \boldsymbol {\theta} ^ {k}\right) - \delta \leq 0 \quad \text{primal constraints} \tag{15}
+$$
+
+$$
+\lambda^ {*} \geq 0 \quad \text{dual constraints} \tag{16}
+$$
+
+$$
+\lambda^ {*} \left(\frac {1}{2} \left(\boldsymbol {\theta} ^ {*} - \boldsymbol {\theta} ^ {k}\right) ^ {T} \boldsymbol {H} \left(\boldsymbol {\theta} ^ {*} - \boldsymbol {\theta} ^ {k}\right) - \delta\right) = 0 \quad \text{complementary slackness} \tag{17}
+$$
+
+By Eq. (13), we have $\pmb{\theta}^{*} = \pmb{\theta}^{k} + \frac{1}{\lambda^{*}}\pmb{H}^{-1}\pmb{g}$ . And by plugging Eq. (13) into Eq. (14), we have $\lambda^{*} = \sqrt{\frac{\pmb{g}^{T}\pmb{H}^{-1}\pmb{g}}{2\delta}}$ . Hence we have our optimal solution:
+
+$$
+\boldsymbol {\theta} ^ {k + \frac {1}{2}} = \boldsymbol {\theta} ^ {*} = \boldsymbol {\theta} ^ {k} + \sqrt {\frac {2 \delta}{\boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g}}} \boldsymbol {H} ^ {- 1} \boldsymbol {g}, \tag {18}
+$$
+
+which also satisfies Eq. (15), Eq. (16), and Eq. (17).
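As a sanity check on Eq. (18), the reward step should land exactly on the trust-region boundary, since $\lambda^{*} > 0$ forces the constraint in Eq. (14) to be active. The sketch below verifies this numerically with an arbitrary positive-definite $\pmb{H}$ (all values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))
H = A @ A.T + n * np.eye(n)   # symmetric positive definite, like a Fisher matrix
g = rng.normal(size=n)
delta = 0.01

# Eq. (18): theta^{k+1/2} - theta^k = sqrt(2*delta / g^T H^{-1} g) * H^{-1} g
Hinv_g = np.linalg.solve(H, g)
step = np.sqrt(2 * delta / float(g @ Hinv_g)) * Hinv_g

# The constraint of Eq. (14) is active: (1/2) step^T H step equals delta.
quad = 0.5 * float(step @ H @ step)
```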
+
+Following the same reasoning, we now form the Lagrangian of the second problem:
+
+$$
+\mathcal {L} (\boldsymbol {\theta}, \lambda) = \frac {1}{2} \left(\boldsymbol {\theta} - \boldsymbol {\theta} ^ {k + \frac {1}{2}}\right) ^ {T} \boldsymbol {L} \left(\boldsymbol {\theta} - \boldsymbol {\theta} ^ {k + \frac {1}{2}}\right) + \lambda \left(\boldsymbol {a} ^ {T} \left(\boldsymbol {\theta} - \boldsymbol {\theta} ^ {k}\right) + b\right).
+$$
+
+
+Figure 7: The projection onto the convex set with $\pmb{\theta}' \in \mathcal{C}$ and $\pmb{\theta}^{*} = \mathrm{Proj}_{\mathcal{C}}^{\pmb{L}}(\pmb{\theta})$ .
+
+And we have the following KKT conditions:
+
+$$
+\boldsymbol {L} \boldsymbol {\theta} ^ {*} - \boldsymbol {L} \boldsymbol {\theta} ^ {k + \frac {1}{2}} + \lambda^ {*} \boldsymbol {a} = 0 \quad \nabla_ {\boldsymbol {\theta}} \mathcal {L} \left(\boldsymbol {\theta} ^ {*}, \lambda^ {*}\right) = 0 \tag {19}
+$$
+
+$$
+\boldsymbol {a} ^ {T} \left(\boldsymbol {\theta} ^ {*} - \boldsymbol {\theta} ^ {k}\right) + b = 0 \quad \nabla_ {\lambda} \mathcal {L} \left(\boldsymbol {\theta} ^ {*}, \lambda^ {*}\right) = 0 \tag {20}
+$$
+
+$$
+\boldsymbol {a} ^ {T} \left(\boldsymbol {\theta} ^ {*} - \boldsymbol {\theta} ^ {k}\right) + b \leq 0 \quad \text{primal constraints} \tag{21}
+$$
+
+$$
+\lambda^ {*} \geq 0 \quad \text{dual constraints} \tag{22}
+$$
+
+$$
+\lambda^ {*} \left(\boldsymbol {a} ^ {T} \left(\boldsymbol {\theta} ^ {*} - \boldsymbol {\theta} ^ {k}\right) + b\right) = 0 \quad \text{complementary slackness} \tag{23}
+$$
+
+By Eq. (19), we have $\pmb{\theta}^{*} = \pmb{\theta}^{k + \frac{1}{2}} - \lambda^{*}\pmb{L}^{-1}\pmb{a}$. And by plugging Eq. (19) into Eq. (20) and Eq. (22), we have $\lambda^{*} = \max(0, \frac{\pmb{a}^{T}(\pmb{\theta}^{k + \frac{1}{2}} - \pmb{\theta}^{k}) + b}{\pmb{a}^{T}\pmb{L}^{-1}\pmb{a}})$. Hence we have our optimal solution:
+
+$$
+\boldsymbol {\theta} ^ {k + 1} = \boldsymbol {\theta} ^ {*} = \boldsymbol {\theta} ^ {k + \frac {1}{2}} - \max \left(0, \frac {\boldsymbol {a} ^ {T} \left(\boldsymbol {\theta} ^ {k + \frac {1}{2}} - \boldsymbol {\theta} ^ {k}\right) + b}{\boldsymbol {a} ^ {T} \boldsymbol {L} ^ {- 1} \boldsymbol {a}}\right) \boldsymbol {L} ^ {- 1} \boldsymbol {a}, \tag{24}
+$$
+
+which also satisfies Eq. (21) and Eq. (23). Hence by Eq. (18) and Eq. (24), we have
+
+$$
+\boldsymbol {\theta} ^ {k + 1} = \boldsymbol {\theta} ^ {k} + \sqrt {\frac {2 \delta}{\boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g}}} \boldsymbol {H} ^ {- 1} \boldsymbol {g} - \max (0, \frac {\sqrt {\frac {2 \delta}{\boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g}}} \boldsymbol {a} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g} + b}{\boldsymbol {a} ^ {T} \boldsymbol {L} ^ {- 1} \boldsymbol {a}}) \boldsymbol {L} ^ {- 1} \boldsymbol {a}.
+$$
+
+
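Putting Eq. (18) and Eq. (24) together, the full PCPO update is a few lines of linear algebra. The following sketch implements the analytical solution of Theorem S.5 on the linear/quadratic approximation; the sampled estimates of $\pmb{g}$, $\pmb{a}$, $b$, and $\pmb{H}$ used in practice are assumed given, and the toy values below are illustrative.

```python
import numpy as np

def pcpo_update(theta_k, g, a, b, H, delta, projection="kl"):
    # One PCPO update (Theorem S.5): a reward step inside the KL trust
    # region, then a projection onto the linearized constraint
    # a^T (theta - theta_k) + b <= 0. `projection` selects L = H (KL
    # divergence) or L = I (L2 norm).
    Hinv_g = np.linalg.solve(H, g)
    eta = np.sqrt(2 * delta / float(g @ Hinv_g))
    theta_half = theta_k + eta * Hinv_g                  # Eq. (18)

    L = H if projection == "kl" else np.eye(len(g))
    Linv_a = np.linalg.solve(L, a)
    lam = max(0.0, (float(a @ (theta_half - theta_k)) + b) / float(a @ Linv_a))
    return theta_half - lam * Linv_a                     # Eq. (24)

rng = np.random.default_rng(1)
n = 3
A = rng.normal(size=(n, n))
H = A @ A.T + n * np.eye(n)
g, a = rng.normal(size=n), rng.normal(size=n)
theta_k = np.zeros(n)
b = 0.05  # positive b: the linearized constraint is currently violated

theta_next = pcpo_update(theta_k, g, a, b, H, delta=0.01, projection="kl")
# After the projection, the linearized constraint holds (up to numerics).
```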
+
+# S.4 PROOF OF THEOREM 4.1: STATIONARY POINTS OF PCPO WITH THE KL DIVERGENCE AND $L^2$ NORM PROJECTIONS
+
+For our analysis, we make the following assumptions: we minimize the negative reward objective function $f: \mathbb{R}^n \to \mathbb{R}$ (we follow the convention in the literature of minimizing the objective function). The function $f$ is $L$-smooth and twice continuously differentiable over the closed and convex constraint set $\mathcal{C}$. We have the following lemma, which characterizes the projection and is used in the proof of Theorem S.7. (See Fig. 7 for an illustration.)
+
+Lemma S.6. For any $\pmb{\theta}$ , $\pmb{\theta}^{*} = \mathrm{Proj}_{\mathcal{C}}^{\pmb{L}}(\pmb{\theta})$ if and only if $(\pmb{\theta} - \pmb{\theta}^{*})^{T}\pmb{L}(\pmb{\theta}^{\prime} - \pmb{\theta}^{*}) \leq 0, \forall \pmb{\theta}^{\prime} \in \mathcal{C}$ , where $\mathrm{Proj}_{\mathcal{C}}^{\pmb{L}}(\pmb{\theta}) \doteq \underset{\pmb{\theta}^{\prime} \in \mathcal{C}}{\arg \min} ||\pmb{\theta} - \pmb{\theta}^{\prime}||_{\pmb{L}}^{2}$ , and $\pmb{L} = \pmb{H}$ if using the KL divergence projection, and $\pmb{L} = \pmb{I}$ if using the $L^{2}$ norm projection.
+
+Proof. $(\Rightarrow)$ Let $\pmb{\theta}^{*} = \mathrm{Proj}_{\mathcal{C}}^{\pmb{L}}(\pmb{\theta})$ for a given $\pmb{\theta} \notin \mathcal{C}$ , $\pmb{\theta}' \in \mathcal{C}$ be such that $\pmb{\theta}' \neq \pmb{\theta}^{*}$ , and $\alpha \in (0,1)$ . Then we have
+
+$$
+\begin{array}{l} \left| \left| \boldsymbol {\theta} - \boldsymbol {\theta} ^ {*} \right| \right| _ {L} ^ {2} \leq \left| \left| \boldsymbol {\theta} - \left(\boldsymbol {\theta} ^ {*} + \alpha \left(\boldsymbol {\theta} ^ {\prime} - \boldsymbol {\theta} ^ {*}\right)\right) \right| \right| _ {L} ^ {2} \\ = \left\| \boldsymbol {\theta} - \boldsymbol {\theta} ^ {*} \right\| _ {L} ^ {2} + \alpha^ {2} \left\| \boldsymbol {\theta} ^ {\prime} - \boldsymbol {\theta} ^ {*} \right\| _ {L} ^ {2} - 2 \alpha (\boldsymbol {\theta} - \boldsymbol {\theta} ^ {*}) ^ {T} \boldsymbol {L} \left(\boldsymbol {\theta} ^ {\prime} - \boldsymbol {\theta} ^ {*}\right) \\ \Rightarrow \left(\boldsymbol {\theta} - \boldsymbol {\theta} ^ {*}\right) ^ {T} \boldsymbol {L} \left(\boldsymbol {\theta} ^ {\prime} - \boldsymbol {\theta} ^ {*}\right) \leq \frac {\alpha}{2} \left\| \boldsymbol {\theta} ^ {\prime} - \boldsymbol {\theta} ^ {*} \right\| _ {\boldsymbol {L}} ^ {2}. \tag {25} \\ \end{array}
+$$
+
+Since $\alpha$ can be made arbitrarily small, the right hand side of Eq. (25) can be made arbitrarily small, and hence we have:
+
+$$
+\left(\boldsymbol {\theta} - \boldsymbol {\theta} ^ {*}\right) ^ {T} \boldsymbol {L} \left(\boldsymbol {\theta} ^ {\prime} - \boldsymbol {\theta} ^ {*}\right) \leq 0, \forall \theta^ {\prime} \in \mathcal {C}.
+$$
+
+$(\Leftarrow)$ Let $\pmb{\theta}^{*}\in \mathcal{C}$ be such that $(\pmb {\theta} - \pmb{\theta}^{*})^{T}\pmb {L}(\pmb{\theta}^{\prime} - \pmb{\theta}^{*})\leq 0,\forall \theta^{\prime}\in \mathcal{C}$ . We show that $\pmb{\theta}^{*}$ must be the optimal solution. Let $\pmb{\theta}'\in \mathcal{C}$ and $\pmb{\theta}^{\prime}\neq \pmb{\theta}^{*}$ . Then we have
+
+$$
+\begin{array}{l} \left| \left| \boldsymbol {\theta} - \boldsymbol {\theta} ^ {\prime} \right| \right| _ {L} ^ {2} - \left| \left| \boldsymbol {\theta} - \boldsymbol {\theta} ^ {*} \right| \right| _ {L} ^ {2} = \left| \left| \boldsymbol {\theta} - \boldsymbol {\theta} ^ {*} + \boldsymbol {\theta} ^ {*} - \boldsymbol {\theta} ^ {\prime} \right| \right| _ {L} ^ {2} - \left| \left| \boldsymbol {\theta} - \boldsymbol {\theta} ^ {*} \right| \right| _ {L} ^ {2} \\ = \left\| \boldsymbol {\theta} - \boldsymbol {\theta} ^ {*} \right\| _ {\boldsymbol {L}} ^ {2} + \left\| \boldsymbol {\theta} ^ {\prime} - \boldsymbol {\theta} ^ {*} \right\| _ {\boldsymbol {L}} ^ {2} - 2 \left(\boldsymbol {\theta} - \boldsymbol {\theta} ^ {*}\right) ^ {T} \boldsymbol {L} \left(\boldsymbol {\theta} ^ {\prime} - \boldsymbol {\theta} ^ {*}\right) - \left\| \boldsymbol {\theta} - \boldsymbol {\theta} ^ {*} \right\| _ {\boldsymbol {L}} ^ {2} \\ > 0 \\ \Rightarrow | | \boldsymbol {\theta} - \boldsymbol {\theta} ^ {\prime} | | _ {L} ^ {2} > | | \boldsymbol {\theta} - \boldsymbol {\theta} ^ {*} | | _ {L} ^ {2}. \\ \end{array}
+$$
+
+Hence, $\pmb{\theta}^{*}$ is the optimal solution to the optimization problem, and $\pmb{\theta}^{*} = \mathrm{Proj}_{C}^{\pmb{L}}(\pmb{\theta})$ .
+
+
+
+Based on Lemma S.6, we have the following theorem.
+
+Theorem S.7. Let $\eta \doteq \sqrt{\frac{2\delta}{g^T H^{-1} g}}$ in Eq. (6), where $\delta$ is the step size for reward improvement, $g$ is the gradient of $f$, and $H$ is the Fisher information matrix. Let $\sigma_{\max}(H)$ be the largest singular value of $H$, and $a$ be the gradient of the cost advantage function in Eq. (6). Then PCPO with the KL divergence projection converges to stationary points with $g \in -a$ (i.e., the gradient of $f$ belongs to the negative gradient of the cost advantage function). The objective value changes by
+
+$$
+f \left(\boldsymbol {\theta} ^ {k + 1}\right) \leq f \left(\boldsymbol {\theta} ^ {k}\right) + \left\| \boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k} \right\| _ {- \frac {1}{\eta} \boldsymbol {H} + \frac {L}{2} \boldsymbol {I}} ^ {2}. \tag {26}
+$$
+
+PCPO with the $L^2$ norm projection converges to stationary points with $\pmb{H}^{-1}\pmb{g} \in -\pmb{a}$ (i.e., the product of the inverse of $\pmb{H}$ and gradient of $f$ belongs to the negative gradient of the cost advantage function). If $\sigma_{\max}(\pmb{H}) \leq 1$ , then the objective value changes by
+
+$$
+f \left(\boldsymbol {\theta} ^ {k + 1}\right) \leq f \left(\boldsymbol {\theta} ^ {k}\right) + \left(\frac {L}{2} - \frac {1}{\eta}\right) \| \boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k} \| _ {2} ^ {2}. \tag {27}
+$$
+
+Proof. The proof of the theorem is based on working in a Hilbert space and the non-expansive property of the projection. We first prove stationary points for PCPO with the KL divergence and $L^2$ norm projections, and then prove the change of the objective value.
+
+At a stationary point $\pmb{\theta}^{*}$, we have
+
+$$
+\begin{array}{l} \boldsymbol {\theta} ^ {*} = \boldsymbol {\theta} ^ {*} - \sqrt {\frac {2 \delta}{\boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g}}} \boldsymbol {H} ^ {- 1} \boldsymbol {g} - \max (0, \frac {\sqrt {\frac {2 \delta}{\boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g}}} \boldsymbol {a} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g} + b}{\boldsymbol {a} ^ {T} \boldsymbol {L} ^ {- 1} \boldsymbol {a}}) \boldsymbol {L} ^ {- 1} \boldsymbol {a}. \\ \Leftrightarrow \sqrt {\frac {2 \delta}{\boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g}}} \boldsymbol {H} ^ {- 1} \boldsymbol {g} = - \max (0, \frac {\sqrt {\frac {2 \delta}{\boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g}}} \boldsymbol {a} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g} + b}{\boldsymbol {a} ^ {T} \boldsymbol {L} ^ {- 1} \boldsymbol {a}}) \boldsymbol {L} ^ {- 1} \boldsymbol {a} \\ \Leftrightarrow \boldsymbol {H} ^ {- 1} \boldsymbol {g} \in - \boldsymbol {L} ^ {- 1} \boldsymbol {a}. \tag {28} \\ \end{array}
+$$
+
+For the KL divergence projection $(\pmb{L} = \pmb{H})$ , Eq. (28) boils down to $\pmb{g} \in -\pmb{a}$ , and for the $L^2$ norm projection $(\pmb{L} = \pmb{I})$ , Eq. (28) is equivalent to $\pmb{H}^{-1}\pmb{g} \in -\pmb{a}$ .
+
+Now we prove the second part of the theorem. Based on Lemma S.6, for the KL divergence projection, we have
+
+$$
+\begin{array}{l} \left(\boldsymbol {\theta} ^ {k} - \boldsymbol {\theta} ^ {k + 1}\right) ^ {T} \boldsymbol {H} \left(\boldsymbol {\theta} ^ {k} - \eta \boldsymbol {H} ^ {- 1} \boldsymbol {g} - \boldsymbol {\theta} ^ {k + 1}\right) \leq 0 \\ \Rightarrow \boldsymbol {g} ^ {T} \left(\boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k}\right) \leq - \frac {1}{\eta} \left\| \boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k} \right\| _ {\boldsymbol {H}} ^ {2}. \tag {29} \\ \end{array}
+$$
+
+By Eq. (29) and the $L$-smoothness of $f$, we have
+
+$$
+\begin{array}{l} f (\boldsymbol {\theta} ^ {k + 1}) \leq f (\boldsymbol {\theta} ^ {k}) + \boldsymbol {g} ^ {T} (\boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k}) + \frac {L}{2} | | \boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k} | | _ {2} ^ {2} \\ \leq f \left(\boldsymbol {\theta} ^ {k}\right) - \frac {1}{\eta} \left| \left| \boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k} \right| \right| _ {\boldsymbol {H}} ^ {2} + \frac {L}{2} \left| \left| \boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k} \right| \right| _ {2} ^ {2} \\ = f \left(\boldsymbol {\theta} ^ {k}\right) + \left(\boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k}\right) ^ {T} \left(- \frac {1}{\eta} \boldsymbol {H} + \frac {L}{2} \boldsymbol {I}\right) \left(\boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k}\right) \\ = f \left(\boldsymbol {\theta} ^ {k}\right) + \left\| \boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k} \right\| _ {- \frac {1}{\eta} \boldsymbol {H} + \frac {L}{2} \boldsymbol {I}} ^ {2}. \\ \end{array}
+$$
+
+For the $L^2$ norm projection, we have
+
+$$
+\begin{array}{l} \left(\boldsymbol {\theta} ^ {k} - \boldsymbol {\theta} ^ {k + 1}\right) ^ {T} \left(\boldsymbol {\theta} ^ {k} - \eta \boldsymbol {H} ^ {- 1} \boldsymbol {g} - \boldsymbol {\theta} ^ {k + 1}\right) \leq 0 \\ \Rightarrow \boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \left(\boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k}\right) \leq - \frac {1}{\eta} \left\| \boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k} \right\| _ {2} ^ {2}. \tag {30} \\ \end{array}
+$$
+
+By Eq. (30), the $L$-smoothness of $f$, and the assumption $\sigma_{\max}(\pmb{H}) \leq 1$, we have
+
+$$
+\begin{array}{l} f (\boldsymbol {\theta} ^ {k + 1}) \leq f (\boldsymbol {\theta} ^ {k}) + \boldsymbol {g} ^ {T} (\boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k}) + \frac {L}{2} | | \boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k} | | _ {2} ^ {2} \\ \leq f \left(\boldsymbol {\theta} ^ {k}\right) + \left(\frac {L}{2} - \frac {1}{\eta}\right) \left| \left| \boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k} \right| \right| _ {2} ^ {2}. \\ \end{array}
+$$
+
+To see why we need the assumption of $\sigma_{\max}(\pmb{H}) \leq 1$ , we define $\pmb{H} = \pmb{U}\pmb{\Sigma}\pmb{U}^T$ as the singular value decomposition of $\pmb{H}$ with $\pmb{u}_i$ being the column vector of $\pmb{U}$ . Then we have
+
+$$
+\begin{array}{l} \boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \left(\boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k}\right) = \boldsymbol {g} ^ {T} \boldsymbol {U} \boldsymbol {\Sigma} ^ {- 1} \boldsymbol {U} ^ {T} \left(\boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k}\right) \\ = \boldsymbol {g} ^ {T} \left(\sum_ {i} \frac {1}{\sigma_ {i} (\boldsymbol {H})} \boldsymbol {u} _ {i} \boldsymbol {u} _ {i} ^ {T}\right) \left(\boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k}\right) \\ = \sum_ {i} \frac {1}{\sigma_ {i} (\boldsymbol {H})} \boldsymbol {g} ^ {T} \left(\boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k}\right). \\ \end{array}
+$$
+
+If we want to have
+
+$$
+\boldsymbol {g} ^ {T} \left(\boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k}\right) \leq \boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \left(\boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k}\right) \leq - \frac {1}{\eta} | | \boldsymbol {\theta} ^ {k + 1} - \boldsymbol {\theta} ^ {k} | | _ {2} ^ {2},
+$$
+
+then every singular value $\sigma_{i}(\pmb{H})$ of $\pmb{H}$ needs to be at most 1, and hence $\sigma_{\max}(\pmb{H}) \leq 1$, which justifies the assumption we use to prove the bound.
+
+For the objective value of PCPO with the KL divergence projection to improve, the right hand side of Eq. (26) needs to be negative. Hence we have $\frac{L\eta}{2}\pmb {I}\prec \pmb{H}$, implying that $\sigma_{\mathrm{min}}(\pmb{H}) > \frac{L\eta}{2}$. And for the objective value of PCPO with the $L^2$ norm projection to improve, the right hand side of Eq. (27) needs to be negative. Hence we have $\eta < \frac{2}{L}$, implying that
+
+$$
+\begin{array}{l} \eta = \sqrt {\frac {2 \delta}{\boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g}}} < \frac {2}{L} \\ \Rightarrow \frac {2 \delta}{\boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g}} < \frac {4}{L ^ {2}} \\ \Rightarrow \frac {\boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g}}{2 \delta} > \frac {L ^ {2}}{4} \\ \Rightarrow \frac {L ^ {2} \delta}{2} < \boldsymbol {g} ^ {T} \boldsymbol {H} ^ {- 1} \boldsymbol {g} \\ \leq | | \boldsymbol {g} | | _ {2} | | \boldsymbol {H} ^ {- 1} \boldsymbol {g} | | _ {2} \\ \leq | | \boldsymbol {g} | | _ {2} | | \boldsymbol {H} ^ {- 1} | | _ {2} | | \boldsymbol {g} | | _ {2} \\ = \sigma_ {\max } (\boldsymbol {H} ^ {- 1}) | | \boldsymbol {g} | | _ {2} ^ {2} \\ = \sigma_ {\min } (\boldsymbol {H}) | | \boldsymbol {g} | | _ {2} ^ {2} \\ \Rightarrow \sigma_ {\min } (\boldsymbol {H}) > \frac {L ^ {2} \delta}{2 \| \boldsymbol {g} \| _ {2} ^ {2}}. \tag {31} \\ \end{array}
+$$
+
+By the definition of the condition number and Eq. (31), we have
+
+$$
+\begin{array}{l} \frac {1}{\sigma_ {\min} (\boldsymbol {H})} < \frac {2 | | \boldsymbol {g} | | _ {2} ^ {2}}{L ^ {2} \delta} \\ \Rightarrow \frac {\sigma_ {\max } (\boldsymbol {H})}{\sigma_ {\min } (\boldsymbol {H})} < \frac {2 | | \boldsymbol {g} | | _ {2} ^ {2} \sigma_ {\max } (\boldsymbol {H})}{L ^ {2} \delta} \\ \leq \frac {2 \| \boldsymbol {g} \| _ {2} ^ {2}}{L ^ {2} \delta}, \\ \end{array}
+$$
+
+which justifies what we discuss.
+
+# S.5 ADDITIONAL COMPUTATIONAL EXPERIMENTS
+
+# S.5.1 IMPLEMENTATION DETAILS
+
+For a detailed explanation of the tasks in Achiam et al. (2017), please refer to the appendix of Achiam et al. (2017). For a detailed explanation of the tasks in Vinitsky et al. (2018), please refer to Vinitsky et al. (2018).
+
+In all experiments, we use neural networks that take the state as input and output the mean and variance of a Gaussian policy. For the simulations in the gather and circle tasks, we use a neural network with two hidden layers of sizes (64, 32). For the simulations in the grid and bottleneck tasks, we use neural networks with two hidden layers of sizes (16, 16) and (50, 25), respectively. We use tanh as the activation function of the neural network.
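
As an illustration, a minimal numpy sketch of such a Gaussian policy network (the class name, head layout, and initialization scale are our own choices; the text only specifies the hidden sizes and the tanh activation):

```python
import numpy as np

class GaussianPolicy:
    """Sketch of the policy described above: an MLP with two tanh hidden
    layers mapping a state to the mean and log-variance of a diagonal
    Gaussian over actions."""

    def __init__(self, state_dim, action_dim, hidden=(64, 32), seed=0):
        rng = np.random.default_rng(seed)
        sizes = [state_dim, *hidden]
        self.layers = [rng.normal(0, 1 / np.sqrt(m), size=(m, n))
                       for m, n in zip(sizes[:-1], sizes[1:])]
        # Separate linear heads for the mean and the log-variance.
        self.mean_head = rng.normal(0, 1 / np.sqrt(hidden[-1]),
                                    size=(hidden[-1], action_dim))
        self.logvar_head = np.zeros((hidden[-1], action_dim))

    def forward(self, state):
        h = np.asarray(state, dtype=float)
        for W in self.layers:
            h = np.tanh(h @ W)          # tanh activations, as in the text
        return h @ self.mean_head, h @ self.logvar_head

    def sample(self, state, rng):
        mean, logvar = self.forward(state)
        return mean + np.exp(0.5 * logvar) * rng.standard_normal(mean.shape)
```

The `(16, 16)` and `(50, 25)` variants are obtained by passing different `hidden` sizes.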
+
+We use the GAE-$\lambda$ approach (Schulman et al., 2015b) to estimate $A_R^\pi (s,a)$ and $A_C^\pi (s,a)$. For the simulations in the gather and circle tasks, we use neural network baselines with the same architectures and activation functions as the policy networks. For the simulations in the grid and bottleneck tasks, we use linear baselines.
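
For concreteness, a minimal sketch of the GAE-$\lambda$ computation (the function name and calling convention are ours; the same routine estimates $A_C^\pi$ when applied to the cost signal instead of the reward):

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.995, lam=0.95):
    """Generalized Advantage Estimation (Schulman et al., 2015b).

    `values` holds V(s_0), ..., V(s_T): one more entry than `rewards`,
    so the final state's value bootstraps the last TD residual."""
    rewards = np.asarray(rewards, dtype=float)
    values = np.asarray(values, dtype=float)
    deltas = rewards + gamma * values[1:] - values[:-1]   # TD residuals
    adv = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):  # discounted sum of future residuals
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv
```

With $\lambda = 1$ this reduces to Monte Carlo advantage estimates; with $\lambda = 0$ it reduces to one-step TD residuals.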
+
+The hyperparameters of each task for all algorithms are as follows (PC: point circle, PG: point gather, AC: ant circle, AG: ant gather, Gr: grid, and BN: bottleneck tasks):
+
+| Parameter | PC | PG | AC | AG | Gr | BN |
+| --- | --- | --- | --- | --- | --- | --- |
+| Discount factor $\gamma$ | 0.995 | 0.995 | 0.995 | 0.995 | 0.999 | 0.999 |
+| Step size $\delta$ | $10^{-4}$ | $10^{-4}$ | $10^{-4}$ | $10^{-4}$ | $10^{-4}$ | $10^{-4}$ |
+| $\lambda^{\text{GAE}}$ (reward) | 0.95 | 0.95 | 0.95 | 0.95 | 0.97 | 0.97 |
+| $\lambda^{\text{GAE}}$ (cost) | 1.0 | 1.0 | 0.5 | 0.5 | 0.5 | 1.0 |
+| Batch size | 50,000 | 50,000 | 100,000 | 100,000 | 10,000 | 25,000 |
+| Rollout length | 50 | 15 | 500 | 500 | 400 | 500 |
+| Cost constraint threshold $h$ | 5 | 0.1 | 10 | 0.2 | 0 | 0 |
+
+Note that we do not use a learned model to predict the probability of entering an undesirable state within a fixed time horizon as CPO did for cost shaping.
+
+# S.5.2 EXPERIMENT RESULTS
+
+To examine the performance of the algorithms under different metrics, we provide the learning curves of the cumulative constraint value over policy updates, and the reward versus the cumulative constraint value, for the tested algorithm and task pairs in Section 6, shown in Fig. 8. The second metric enables us to compare the reward difference under the same number of cumulative constraint violations.
+
+Overall, we find that:
+
+(a) CPO has more cumulative constraint violations than PCPO.
+(b) PCPO with $L^2$ norm projection has fewer cumulative constraint violations than PCPO with KL divergence projection, except in the point circle and point gather tasks. This observation suggests that the Fisher information matrix is not well estimated in the high-dimensional policy space, leading to more constraint violations.
+(c) PCPO achieves more reward improvement than CPO under the same number of cumulative constraint violations in the point circle, point gather, ant circle, ant gather, and bottleneck tasks.
+
+# S.5.3 CPO WITHOUT LINE SEARCH
+
+Due to approximation errors, CPO performs a line search to check whether the updated policy satisfies the trust region and cost constraints. To understand the necessity of the line search in CPO, we conducted experiments with and without it, shown in Fig. 9. The step size $\delta$ is set to 0.01. We find that CPO without line search tends to (1) have larger reward variance, especially in the point circle task, and (2) learn constraint-satisfying policies slightly faster. These observations
+
+(a) Point circle
+
+(b) Point gather
+
+(c) Ant circle
+
+(d) Ant gather
+
+(e) Grid
+
+(f) Bottleneck
+
+Figure 8: The values of the cumulative constraint value over policy updates, and the reward versus the cumulative constraint value, for the tested algorithm and task pairs. The solid line is the mean and the shaded area is the standard deviation over five runs. The curves for the baseline oracle, TRPO, indicate the performance when the constraint is ignored. (Best viewed in color; the legend is shared across all the figures.)
+
+suggest that the line search makes the optimization more conservative, since it usually takes smaller steps. However, we conjecture that with a smaller $\delta$, the effect of the line search would not be significant.
+
+# S.5.4 THE TASKS WITH HARDER CONSTRAINTS
+
+To understand the stability of PCPO and CPO when deployed in more constraint-critical tasks, we increase the difficulty of the tasks by setting the constraint threshold to zero and reducing the safe area. The learning curves of the discounted reward and the constraint value over policy updates are shown in Fig. 10.
+
+We observe that even with the more difficult constraint, PCPO still achieves more reward improvement and better constraint satisfaction than CPO, whereas CPO needs more feasibility recovery steps to satisfy the constraint. In addition, we observe that PCPO with $L^2$ norm projection has high constraint variance
+
+(a) Point circle
+
+(b) Point gather
+
+Figure 9: The values of the reward and the constraint value for the tested algorithms and task pairs. The solid line is the mean and the shaded area is the standard deviation over five runs. The dashed line in the cost constraint plot is the cost constraint threshold $h$. Line search helps to stabilize training. (Best viewed in color.)
+
+(a) Point circle
+
+(b) Point gather
+
+Figure 10: The values of the reward and the constraint value for the tested algorithms and task pairs. The solid line is the mean and the shaded area is the standard deviation over five runs. The dashed line in the cost constraint plot is the cost constraint threshold $h$. PCPO with KL divergence projection is the only algorithm that satisfies the constraint while achieving the highest reward. (Best viewed in color.)
+
+in the point circle task, suggesting that the reward update direction is not well aligned with the cost update direction. We also observe that PCPO with $L^2$ norm projection converges to a bad local
+
+(a) Point circle
+
+(b) Point gather
+
+Figure 11: The values of the reward and the constraint value for the tested algorithms and task pairs. The solid line is the mean and the shaded area is the standard deviation over five runs. The dashed line in the cost constraint plot is the cost constraint threshold $h$. The curves for the baseline oracle, TRPO, indicate the reward and constraint values when the constraint is ignored. We use only $1\%$ of the samples of the previous simulations for each policy update. PCPO still satisfies the constraints quickly even when the constraint set is not well estimated. (Best viewed in color.)
+
+optimum in terms of reward in the point gather task, suggesting that, in order to satisfy the constraint, the cost update direction destroys the reward update direction.
+
+# S.5.5 SMALLER BATCH SAMPLES
+
+To learn policies under constraints, PCPO and CPO require a good estimate of the constraint set. However, PCPO may project the policy onto a region that violates the constraint, because it approximates the constraint set by a linear half-space constraint. To understand whether the estimation accuracy of the constraint set affects performance, we conducted experiments with the batch size reduced to $1\%$ of the previous experiments (only 500 samples for each policy update), shown in Fig. 11.
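
Since the linearized constraint set is a half-space, the projection step has a simple closed form. A sketch of the Euclidean case (the function name is ours; PCPO's KL-divergence projection would measure distance in the metric induced by $H$ instead):

```python
import numpy as np

def project_halfspace_l2(theta, a, b):
    """L2 projection of `theta` onto the half-space {x : a^T x <= b},
    the linearized constraint set used in the projection step."""
    violation = a @ theta - b
    if violation <= 0:              # already feasible: projection is the identity
        return theta.copy()
    # Move against the constraint normal just far enough to reach the boundary.
    return theta - (violation / (a @ a)) * a
```

If the half-space is a poor approximation of the true constraint set (e.g., when it is estimated from few samples), the projected point can still violate the true constraint, which is the failure mode discussed above.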
+
+We find that the smaller number of training samples affects the performance of the algorithms, creating more reward and cost fluctuation. However, we observe that even with fewer training samples, PCPO still achieves more reward improvement and better constraint satisfaction than CPO.
+
+# S.6 ANALYSIS OF THE APPROXIMATION ERROR AND THE COMPUTATIONAL COST OF THE CONJUGATE GRADIENT METHOD
+
+In the grid task, we observe that PCPO with KL divergence projection achieves less reward than TRPO, which is expected since TRPO ignores the constraint. However, TRPO also outperforms PCPO with KL divergence projection in terms of constraint satisfaction, which is unexpected: by trying to enforce the constraint, PCPO with KL divergence projection ends up violating it more.
+
+The reason for this observation is that the Fisher information matrix is ill-conditioned, i.e., the condition number $\lambda_{\mathrm{max}}(H) / \lambda_{\mathrm{min}}(H)$ (where $\lambda_{\mathrm{max}}$ and $\lambda_{\mathrm{min}}$ are the largest and smallest eigenvalues of the matrix) of the Fisher information matrix is large. This causes the conjugate gradient method, which computes the constraint update direction $H^{-1}a$ with a small number of iterations, to output an inaccurate approximation. Hence the inaccurate
+
+Figure 12: (1) The values of the reward and the constraint, (2) the condition number of the Fisher information matrix, and (3) the approximation error of the constraint update direction over training epochs, with the conjugate gradient method run for 10 and 20 iterations, respectively. The run with more iterations achieves better constraint satisfaction since its approximation is more accurate. (Best viewed in color.)
+
+approximation of $H^{-1}a$ causes PCPO with KL divergence projection to have more constraint violation than TRPO.
+
+To resolve this issue, one can run more iterations of the conjugate gradient method. This is because the convergence of the conjugate gradient method is controlled by the condition number (Shewchuk, 1994): the larger the condition number, the more iterations the algorithm needs to obtain an accurate approximation. In our experiments, we set the number of conjugate gradient iterations to 10 to trade off computational efficiency against accuracy, across all tested algorithms and task pairs.
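
This effect is easy to reproduce. The sketch below (our own illustration, not the paper's code) runs a fixed budget of 10 conjugate gradient iterations on a well-conditioned and an ill-conditioned symmetric positive definite matrix:

```python
import numpy as np

def conjugate_gradient(H, g, iters):
    """Truncated conjugate gradient for H x = g, the routine used to
    approximate the update direction H^{-1} g."""
    x = np.zeros_like(g)
    r = g.copy()                     # residual g - H x with x = 0
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Hp = H @ p
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
g = rng.standard_normal(50)
residuals = {}
for cond in (1e1, 1e6):
    H = (Q * np.geomspace(1.0, cond, 50)) @ Q.T   # eigenvalues from 1 to cond
    x = conjugate_gradient(H, g, iters=10)
    residuals[cond] = np.linalg.norm(H @ x - g) / np.linalg.norm(g)
    print(f"condition number {cond:.0e}: relative residual {residuals[cond]:.2e}")
```

With the same 10-iteration budget, the relative residual is orders of magnitude larger in the ill-conditioned case, mirroring the approximation error plotted in Fig. 12.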
+
+To verify our observation, we compare the condition number of the Fisher information matrix and the approximation error of the constraint update direction over training epochs for different numbers of conjugate gradient iterations, shown in Fig. 12.
+
+We observe that the Fisher information matrix is ill-conditioned, and that the run with more iterations has less error and better constraint satisfaction. This observation confirms our discussion.
+
+# S.7 COMPARISON OF OPTIMIZATION PATHS OF PCPO WITH KL DIVERGENCE AND $L^2$ NORM PROJECTIONS
+
+Theorem 4.1 states that a stationary point of PCPO with KL divergence projection can differ from one of PCPO with $L^2$ norm projection. See Fig. 13 for an illustration. To compare the two kinds of stationary points, we consider the example shown in Fig. 14. We maximize a non-convex function $f(\pmb{x}) = \pmb{x}^T\mathrm{diag}(\pmb{y})\pmb{x}$ subject to the constraint $\pmb{x}^T\mathbf{1}\leq -1$, where $\pmb{y} = [5, - 1]^T$ and $\mathbf{1}$ is the all-one vector. The optimal value of this constrained optimization problem is unbounded (the objective grows without limit within the feasible set). Fig. 14(a) shows the update direction that combines the objective and cost constraint update directions for both projections. It shows that PCPO with KL divergence projection has stationary points, with $\pmb{g}$ pointing along $-\pmb{a}$, on the boundary of the constraint set (observe that the update direction is zero for PCPO with KL divergence projection at $\pmb{x} = [0.75, - 1.75]^T$, $[0.25, - 1.25]^T$, and $[-0.25, - 0.75]^T$), whereas PCPO with $L^2$ norm projection has no stationary points on the boundary of the constraint set. Furthermore, Fig. 14(b) shows the optimization paths for both projections from the same initial point $[0.5, - 2.0]^T$: PCPO with KL divergence projection converges to a local optimum, whereas PCPO with $L^2$ norm projection converges to
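
The $L^2$-projection path in this example can be reproduced with a few lines of projected gradient ascent (the step size and iteration count below are our own illustrative choices):

```python
import numpy as np

# Toy problem from Fig. 14: maximize f(x) = x^T diag(y) x, y = [5, -1],
# subject to x^T 1 <= -1, starting from x = [0.5, -2.0].
y = np.array([5.0, -1.0])
a = np.ones(2)                       # constraint: a^T x <= -1

def l2_project(x):
    """L2 projection onto the half-space {x : a^T x <= -1}."""
    v = a @ x + 1.0
    return x - (v / (a @ a)) * a if v > 0 else x

x = np.array([0.5, -2.0])
eta = 0.01
for _ in range(500):
    x = l2_project(x + eta * 2 * y * x)   # ascent step on f, then project

print("final point:", x, "objective:", x @ (y * x))
```

The iterate slides along the boundary $x_1 + x_2 = -1$ with $\|\pmb{x}\|$ and $f(\pmb{x})$ growing without bound, matching the divergence to infinity of the $L^2$ norm projection path in Fig. 14(b).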
+
+
+Figure 13: A schematic overview of the stationary points of PCPO. The red dashed lines are the negative directions of the normal cones, and the green dashed lines are the objective update directions. At a stationary point, the objective update direction belongs to the negative normal cone.
+
+
+(a) Update direction
+
+
+(b) Optimization path
+Figure 14: The policy update direction combining the objective and constraint update directions at each point (top), and the optimization paths of PCPO with KL divergence and $L^2$ norm projections from the initial point $[0.5, -2.0]^T$ (bottom). The red star is the initial point, the red arrows are the optimization paths, and the region below the black line is the constraint set. The two projections converge to different solutions.
+
+infinity. However, the above example does not necessarily mean that PCPO with $L^2$ norm projection always finds a better optimum. For example, if the gradient of the objective is zero in the interior of the constraint set or on its boundary, then both projections may converge to the same stationary point.
\ No newline at end of file
diff --git a/projectionbasedconstrainedpolicyoptimization/images.zip b/projectionbasedconstrainedpolicyoptimization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4ae9c8ceb1f817ae0f2e2395f69a742f0ee3782c
--- /dev/null
+++ b/projectionbasedconstrainedpolicyoptimization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8277ddda09014294af2ae5e07db0f2a84535bb9d0c250f6e74228b31e7f852e9
+size 1623134
diff --git a/projectionbasedconstrainedpolicyoptimization/layout.json b/projectionbasedconstrainedpolicyoptimization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..05006a2a60eef2890969e7a31492d2ea5e581338
--- /dev/null
+++ b/projectionbasedconstrainedpolicyoptimization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bcb51fb145471d5b0ab375dbbbac7fdac8c6171dbccf5d3fbebb355ac6a2b3fe
+size 954678
diff --git a/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/56ca4142-b205-4563-9b46-f7932c0fd45b_content_list.json b/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/56ca4142-b205-4563-9b46-f7932c0fd45b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5d98bba2c288b1fd534c0b6e26014a778b324b78
--- /dev/null
+++ b/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/56ca4142-b205-4563-9b46-f7932c0fd45b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:66122c9d3187f124e2aaabdd8044cb04d27c208b0f36de7d06753ff8589a7bfa
+size 109806
diff --git a/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/56ca4142-b205-4563-9b46-f7932c0fd45b_model.json b/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/56ca4142-b205-4563-9b46-f7932c0fd45b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..81b4f133ca1fa86a6cf42b30748c6ee5d7a73f16
--- /dev/null
+++ b/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/56ca4142-b205-4563-9b46-f7932c0fd45b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c29966f6685df360eef6b335ceeafa703eff93d7ed8cf4a773ae2e55249cba4
+size 129831
diff --git a/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/56ca4142-b205-4563-9b46-f7932c0fd45b_origin.pdf b/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/56ca4142-b205-4563-9b46-f7932c0fd45b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a6a5db3c298ca02f479af37fb89a9aa5893485ba
--- /dev/null
+++ b/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/56ca4142-b205-4563-9b46-f7932c0fd45b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5d0fdae0e8c8fd2e2e787d4ad7788aa4975c78a552cfa6bdb57d59382846596b
+size 491123
diff --git a/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/full.md b/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..896891a6b9d72b853b79cf982890d13751c1fa57
--- /dev/null
+++ b/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/full.md
@@ -0,0 +1,560 @@
+# PROVABLE BENEFIT OF ORTHOGONAL INITIALIZATION IN OPTIMIZING DEEP LINEAR NETWORKS
+
+Wei Hu
+
+Princeton University
+
+huwei@cs.princeton.edu
+
+Lechao Xiao
+
+Google Brain
+
+xlc@google.com
+
+Jeffrey Pennington
+
+Google Brain
+
+jpennin@google.com
+
+# ABSTRACT
+
+The selection of initial parameter values for gradient-based optimization of deep neural networks is one of the most impactful hyperparameter choices in deep learning systems, affecting both convergence times and model performance. Yet despite significant empirical and theoretical analysis, relatively little has been proved about the concrete effects of different initialization schemes. In this work, we analyze the effect of initialization in deep linear networks, and provide for the first time a rigorous proof that drawing the initial weights from the orthogonal group speeds up convergence relative to the standard Gaussian initialization with iid weights. We show that for deep networks, the width needed for efficient convergence to a global minimum with orthogonal initializations is independent of the depth, whereas the width needed for efficient convergence with Gaussian initializations scales linearly in the depth. Our results demonstrate how the benefits of a good initialization can persist throughout learning, suggesting an explanation for the recent empirical successes found by initializing very deep non-linear networks according to the principle of dynamical isometry.
+
+# 1 INTRODUCTION
+
+Through their myriad successful applications across a wide range of disciplines, it is now well established that deep neural networks possess an unprecedented ability to model complex real-world datasets, and in many cases they can do so with minimal overfitting. Indeed, the list of practical achievements of deep learning has grown at an astonishing rate, and includes models capable of human-level performance in tasks such as image recognition (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012), and machine translation (Wu et al., 2016).
+
+Yet to each of these deep learning triumphs corresponds a large engineering effort to produce such a high-performing model. Part of the practical difficulty in designing good models stems from a proliferation of hyperparameters and a poor understanding of the general guidelines for their selection. Given a candidate network architecture, some of the most impactful hyperparameters are those governing the choice of the model's initial weights. Although considerable study has been devoted to the selection of initial weights, relatively little has been proved about how these choices affect important quantities such as rate of convergence of gradient descent.
+
+In this work, we examine the effect of initialization on the rate of convergence of gradient descent in deep linear networks. We provide for the first time a rigorous proof that drawing the initial weights from the orthogonal group speeds up convergence relative to the standard Gaussian initialization with iid weights. In particular, we show that for deep networks, the width needed for efficient convergence for orthogonal initializations is independent of the depth, whereas the width needed for efficient convergence of Gaussian networks scales linearly in the depth.
+
+Orthogonal weight initializations have been the subject of a significant amount of prior theoretical and empirical investigation. For example, in a line of work focusing on dynamical isometry, it was found that orthogonal weights can speed up convergence for deep linear networks (Saxe et al., 2014; Advani & Saxe, 2017) and for deep non-linear networks (Pennington et al., 2018; Xiao et al., 2018; Gilboa et al., 2019; Chen et al., 2018; Pennington et al., 2017; Tarnowski et al., 2019; Ling & Qiu, 2019) when they operate in the linear regime. In the context of recurrent neural networks, orthogonality can help improve the system's stability. A main limitation of prior work is that it
+
+has focused almost exclusively on the model's properties at initialization. In contrast, our analysis focuses on the benefit of orthogonal initialization on the entire training process, thereby establishing a provable benefit for optimization.
+
+The paper is organized as follows. After reviewing related work in Section 2 and establishing some preliminaries in Section 3, we present our main positive result on efficient convergence from orthogonal initialization in Section 4. In Section 5, we show that Gaussian initialization leads to exponentially long convergence time if the width is too small compared with the depth. In Section 6, we perform experiments to support our theoretical results.
+
+# 2 RELATED WORK
+
+Deep linear networks. Despite the simplicity of their input-output maps, deep linear networks define high-dimensional non-convex optimization landscapes whose properties closely reflect those of their non-linear counterparts. For this reason, deep linear networks have been the subject of extensive theoretical analysis. A line of work (Kawaguchi, 2016; Hardt & Ma, 2016; Lu & Kawaguchi, 2017; Yun et al., 2017; Zhou & Liang, 2018; Laurent & von Brecht, 2018) studied the landscape properties of deep linear networks. Although it was established that all local minima are global under certain assumptions, these properties alone are still not sufficient to guarantee global convergence or to provide a concrete rate of convergence for gradient-based optimization algorithms.
+
+Another line of work directly analyzed the trajectory taken by gradient descent and established conditions that guarantee convergence to global minimum (Bartlett et al., 2018; Arora et al., 2018; Du & Hu, 2019). Most relevant to our work is the result of Du & Hu (2019), which shows that if the width of hidden layers is larger than the depth, gradient descent with Gaussian initialization can efficiently converge to a global minimum. Our result establishes that for Gaussian initialization, this linear dependence between width and depth is necessary, while for orthogonal initialization, the width can be independent of depth. Our negative result for Gaussian initialization also significantly generalizes the result of Shamir (2018), who proved a similar negative result for 1-dimensional linear networks.
+
+Orthogonal weight initializations. Orthogonal weight initializations have also found significant success in non-linear networks. In the context of feedforward models, the spectral properties of a network's input-output Jacobian have been empirically linked to convergence speed (Saxe et al., 2014; Pennington et al., 2017; 2018; Xiao et al., 2018). It was found that when this spectrum concentrates around 1 at initialization, a property dubbed dynamical isometry, convergence times improved by orders of magnitude. The conditions for attaining dynamical isometry in the infinite-width limit were established by Pennington et al. (2017; 2018) and basically require the input-output map to be approximately linear and the weight matrices to be orthogonal. Therefore the training time benefits of dynamical isometry are likely rooted in the benefits of orthogonality for deep linear networks, which we establish in this work.
+
+Orthogonal matrices are also frequently used in the context of recurrent neural networks, for which the stability of the state-to-state transition operator is determined by the spectrum of its Jacobian (Haber & Ruthotto, 2017; Laurent & von Brecht, 2016). Orthogonal matrices can improve the conditioning, leading to an ability to learn over long time horizons (Le et al., 2015; Henaff et al., 2016; Chen et al., 2018; Gilboa et al., 2019). While the benefits of orthogonality can be quite large at initialization, little is known about whether or in what contexts these benefits persist during training, a scenario that has led to the development of efficient methods for constraining the optimization to the orthogonal group (Wisdom et al., 2016; Vorontsov et al., 2017; Mhammedi et al., 2017). Although we do not study the recurrent setting in this work, an extension of our analysis might help determine when orthogonality is beneficial in that setting.
+
+# 3 PRELIMINARIES
+
+# 3.1 NOTATION
+
+Let $[n] = \{1,2,\dots ,n\}$ . Denote by $I_{d}$ the $d\times d$ identity matrix, and by $I$ an identity matrix when its dimension is clear from context. Denote by $\mathcal{N}(\mu ,\sigma^2)$ the Gaussian distribution with mean $\mu$ and variance $\sigma^2$ , and by $\chi_k^2$ the chi-squared distribution with $k$ degrees of freedom.
+
+Denote by $\| \cdot \|$ the $\ell_2$ norm of a vector or the spectral norm of a matrix. Denote by $\| \cdot \|_F$ the Frobenius norm of a matrix. For a symmetric matrix $A$ , let $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ be its maximum and minimum eigenvalues, and let $\lambda_i(A)$ be its $i$ -th largest eigenvalue. For a matrix $B \in \mathbb{R}^{m \times n}$ , let $\sigma_i(B)$ be its $i$ -th largest singular value ( $i = 1, 2, \ldots, \min\{m, n\}$ ), and let $\sigma_{\max}(B) = \sigma_1(B)$ , $\sigma_{\min}(B) = \sigma_{\min\{m, n\}}(B)$ . Denote by $\operatorname{vec}(A)$ the vectorization of a matrix $A$ in column-first order. The Kronecker product between two matrices $A \in \mathbb{R}^{m_1 \times n_1}$ and $B \in \mathbb{R}^{m_2 \times n_2}$ is defined as
+
+$$
+A \otimes B = \left( \begin{array}{c c c} a _ {1, 1} B & \dots & a _ {1, n _ {1}} B \\ \vdots & \ddots & \vdots \\ a _ {m _ {1}, 1} B & \dots & a _ {m _ {1}, n _ {1}} B \end{array} \right) \in \mathbb {R} ^ {m _ {1} m _ {2} \times n _ {1} n _ {2}},
+$$
+
+where $a_{i,j}$ is the element in the $(i,j)$ -th entry of $A$ .
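
As a quick sanity check of this definition, and of the standard identity $\operatorname{vec}(AXB) = (B^\top \otimes A)\operatorname{vec}(X)$ (for column-first $\operatorname{vec}$), which is how Kronecker products enter update equations such as Eq. (6) below:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))
B = rng.standard_normal((4, 5))
X = rng.standard_normal((2, 4))

vec = lambda M: M.reshape(-1, order="F")   # column-first vectorization

# np.kron implements the block definition above.
K = np.kron(A, B)
assert K.shape == (3 * 4, 2 * 5)

# vec(A X B) = (B^T kron A) vec(X)
lhs = vec(A @ X @ B)
rhs = np.kron(B.T, A) @ vec(X)
assert np.allclose(lhs, rhs)
```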
+
+We use the standard $O(\cdot), \Omega(\cdot)$ and $\Theta(\cdot)$ notation to hide universal constant factors. We also use $C$ to represent a sufficiently large universal constant whose specific value can differ from line to line.
+
+# 3.2 PROBLEM SETUP
+
+Suppose that there are $n$ training examples $\{(x_k, y_k)\}_{k=1}^n \subset \mathbb{R}^{d_x} \times \mathbb{R}^{d_y}$ . Denote by $X = (x_1, \ldots, x_n) \in \mathbb{R}^{d_x \times n}$ the input data matrix and by $Y = (y_1, \ldots, y_n) \in \mathbb{R}^{d_y \times n}$ the target matrix. Consider an $L$ -layer linear neural network with weight matrices $W_1, \ldots, W_L$ , which given an input $x \in \mathbb{R}^{d_x}$ computes
+
+$$
+f (x; W _ {1}, \dots , W _ {L}) = \alpha W _ {L} W _ {L - 1} \dots W _ {1} x, \tag {1}
+$$
+
+where $W_{i} \in \mathbb{R}^{d_{i} \times d_{i-1}} (i = 1, \dots, L)$ , $d_{0} = d_{x}$ , $d_{L} = d_{y}$ , and $\alpha$ is a normalization constant which will be specified later according to the initialization scheme. We study the problem of training the deep linear network by minimizing the $\ell_{2}$ loss over training data:
+
+$$
+\ell \left(W _ {1}, \dots , W _ {L}\right) = \frac {1}{2} \sum_ {k = 1} ^ {n} \| f \left(x _ {k}; W _ {1}, \dots , W _ {L}\right) - y _ {k} \| ^ {2} = \frac {1}{2} \| \alpha W _ {L} \dots W _ {1} X - Y \| _ {F} ^ {2}. \tag {2}
+$$
+
+The algorithm we consider to minimize the objective (2) is gradient descent with random initialization, which first randomly samples the initial weight matrices $\{W_{i}(0)\}_{i = 1}^{L}$ from a certain distribution, and then updates the weights using gradient descent: for time $t = 0,1,2,\ldots$
+
+$$
+W _ {i} (t + 1) = W _ {i} (t) - \eta \frac {\partial \ell}{\partial W _ {i}} \left(W _ {1} (t), \dots , W _ {L} (t)\right), \quad i \in [ L ], \tag {3}
+$$
+
+where $\eta > 0$ is the learning rate.
+
+For convenience, we denote $W_{j:i} = W_jW_{j-1}\cdots W_i$ ( $1 \leq i \leq j \leq L$ ) and $W_{i-1:i} = I$ ( $i \in [L]$ ). The time index $t$ is used on any variable that depends on $W_1, \ldots, W_L$ to represent its value at time $t$ , e.g., $W_{j:i}(t) = W_j(t)\cdots W_i(t)$ , $\ell(t) = \ell(W_1(t), \ldots, W_L(t))$ , etc.
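
As a concrete sketch of the setup (1)-(3), the following numpy code runs gradient descent on a small deep linear network, with the gradient of (2) written out in closed form; the dimensions, step size, and initialization scale here are our own illustrative choices, not the paper's:

```python
import numpy as np

def forward(Ws, X, alpha):
    out = X
    for W in Ws:                       # alpha * W_L ... W_1 X, as in Eq. (1)
        out = W @ out
    return alpha * out

def loss(Ws, X, Y, alpha):
    return 0.5 * np.linalg.norm(forward(Ws, X, alpha) - Y) ** 2

def gradient_step(Ws, X, Y, alpha, eta):
    """One step of (3); dl/dW_i = alpha * W_{L:i+1}^T R (W_{i-1:1} X)^T,
    where R = alpha * W_{L:1} X - Y."""
    R = forward(Ws, X, alpha) - Y
    prefixes = [X]                     # W_{i-1:1} X for i = 1, ..., L
    for W in Ws[:-1]:
        prefixes.append(W @ prefixes[-1])
    grads, suffix = [], np.eye(Ws[-1].shape[0])
    for i in reversed(range(len(Ws))):
        grads.append(alpha * suffix.T @ R @ prefixes[i].T)
        suffix = suffix @ Ws[i]        # build W_{L:i+1} from the top down
    grads.reverse()
    return [W - eta * G for W, G in zip(Ws, grads)]

rng = np.random.default_rng(0)
d_x, m, d_y, n, L = 3, 16, 2, 5, 4
dims = [d_x] + [m] * (L - 1) + [d_y]
Ws = [rng.normal(0, 1 / np.sqrt(dims[i]), size=(dims[i + 1], dims[i]))
      for i in range(L)]
X = rng.standard_normal((d_x, n))
Y = rng.standard_normal((d_y, d_x)) @ X       # realizable targets Y = W* X
alpha = 1.0
loss0 = loss(Ws, X, Y, alpha)
for _ in range(2000):
    Ws = gradient_step(Ws, X, Y, alpha, eta=0.005)
final_loss = loss(Ws, X, Y, alpha)
print(f"loss: {loss0:.4f} -> {final_loss:.6f}")
```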
+
+# 4 EFFICIENT CONVERGENCE USING ORTHOGONAL INITIALIZATION
+
+In this section we present our main positive result for orthogonal initialization. We show that orthogonal initialization enables efficient convergence of gradient descent to a global minimum provided that the hidden width is not too small.
+
+In order to properly define orthogonal weights, we let the widths of all hidden layers be equal: $d_{1} = d_{2} = \dots = d_{L - 1} = m$ , and let $m \geq \max \{d_x,d_y\}$ . Note that all intermediate matrices
+
+$W_{2},\ldots ,W_{L - 1}$ are $m\times m$ square matrices, while $W_{1}\in \mathbb{R}^{m\times d_{x}}$ and $W_{L}\in \mathbb{R}^{d_y\times m}$. We sample each initial weight matrix $W_{i}(0)$ independently from the uniform distribution over scaled orthogonal matrices satisfying
+
+$$
+W _ {1} ^ {\top} (0) W _ {1} (0) = m I _ {d _ {x}},
+$$
+
+$$
+W _ {i} ^ {\top} (0) W _ {i} (0) = W _ {i} (0) W _ {i} ^ {\top} (0) = m I _ {m}, \quad 2 \leq i \leq L - 1, \tag {4}
+$$
+
+$$
+W _ {L} (0) W _ {L} ^ {\top} (0) = m I _ {d _ {y}}.
+$$
+
+In accordance with such initialization, the scaling factor $\alpha$ in (1) is set as $\alpha = \frac{1}{\sqrt{m^{L - 1}d_y}}$, which ensures $\mathbb{E}\left[\| f(x;W_1(0),\ldots ,W_L(0))\|^2\right] = \| x\|^2$ for any $x\in \mathbb{R}^{d_x}$, i.e., the expected squared $\ell_{2}$ norm of any input is preserved. The same scaling factor was adopted in Du & Hu (2019).
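
A numpy sketch of this initialization scheme (QR-based sampling; for exact Haar uniformity one would additionally fix the signs of R's diagonal, which we omit here):

```python
import numpy as np

def scaled_orthogonal(rows, cols, rng):
    """Samples W with W^T W = rows * I_cols (for rows >= cols), matching
    the scaling in Eq. (4); transpose the output for the last layer."""
    Q, _ = np.linalg.qr(rng.standard_normal((rows, cols)))
    return np.sqrt(rows) * Q           # Q has orthonormal columns

rng = np.random.default_rng(0)
d_x, d_y, m, L = 3, 2, 16, 6
Ws = [scaled_orthogonal(m, d_x, rng)]                               # W_1
Ws += [scaled_orthogonal(m, m, rng) for _ in range(L - 2)]          # W_2..W_{L-1}
Ws.append(scaled_orthogonal(m, d_y, rng).T)                         # W_L

# Check the initialization conditions (4).
assert np.allclose(Ws[0].T @ Ws[0], m * np.eye(d_x))
assert np.allclose(Ws[-1] @ Ws[-1].T, m * np.eye(d_y))

# Up to the last layer the map is an exact isometry times sqrt(m)^{L-1}.
x = rng.standard_normal(d_x)
h = x
for W in Ws[:-1]:
    h = W @ h
assert np.allclose(np.linalg.norm(h), np.sqrt(m ** (L - 1)) * np.linalg.norm(x))
```

The last layer is only an isometry in expectation over its random orientation, which is why the norm-preservation property $\mathbb{E}[\|f(x)\|^2] = \|x\|^2$ holds in expectation rather than exactly.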
+
+Let $W^{*}\in \arg \min_{W\in \mathbb{R}^{d_y\times d_x}}\| WX - Y\| _F$ and $\ell^* = \frac{1}{2}\| W^* X - Y\| _F^2$ . Then $\ell^{*}$ is the minimum value for the objective (2). Denote $r = \mathrm{rank}(X)$ , $\kappa = \frac{\lambda_{\max}(X^\top X)}{\lambda_r(X^\top X)}$ , and $\tilde{r} = \frac{\|X\|_F^2}{\|X\|^2}$ . Our main theorem in this section is the following:
+
+Theorem 4.1. Suppose
+
+$$
+m \geq C \cdot \tilde {r} \kappa^ {2} \left(d _ {y} \left(1 + \| W ^ {*} \| ^ {2}\right) + \log (r / \delta)\right) a n d m \geq d _ {x}, \tag {5}
+$$
+
+for some $\delta \in (0,1)$ and a sufficiently large universal constant $C > 0$ . Set the learning rate $\eta \leq \frac{d_y}{2L\|X\|^2}$ . Then with probability at least $1 - \delta$ over the random initialization, we have
+
+$$
+\ell (0) - \ell^ {*} \leq O \left(1 + \frac {\log (r / \delta)}{d _ {y}} + \| W ^ {*} \| ^ {2}\right) \| X \| _ {F} ^ {2},
+$$
+
+$$
+\ell (t) - \ell^ {*} \leq \left(1 - \frac {1}{2} \eta L \lambda_ {r} (X ^ {\top} X) / d _ {y}\right) ^ {t} (\ell (0) - \ell^ {*}), \quad t = 0, 1, 2, \dots ,
+$$
+
+where $\ell(t)$ is the objective value at iteration $t$ .
+
+Notably, in Theorem 4.1, the width $m$ need not depend on the depth $L$ . This is in sharp contrast with the result of Du & Hu (2019) for Gaussian initialization, which requires $m \geq \tilde{\Omega} (Lr\kappa^3 d_y)$ . It turns out that a near-linear dependence between $m$ and $L$ is necessary for Gaussian initialization to have efficient convergence, as we will show in Section 5. Therefore the requirement in Du & Hu (2019) is nearly tight in terms of the dependence on $L$ . These results together rigorously establish the benefit of orthogonal initialization in optimizing very deep linear networks.
+
+If we set the learning rate optimally according to Theorem 4.1 to $\eta = \Theta\left(\frac{d_y}{L\|X\|^2}\right)$ , we obtain that $\ell(t) - \ell^*$ decreases by a ratio of $1 - \Theta(\kappa^{-1})$ after every iteration. This matches the convergence rate of gradient descent on the (1-layer) linear regression problem $\min_{W \in \mathbb{R}^{d_y \times d_x}} \frac{1}{2} \|WX - Y\|_F^2$ .
+
+# 4.1 PROOF OF THEOREM 4.1
+
+The proof uses the high-level framework from Du & Hu (2019), which tracks the evolution of the network's output during optimization. This evolution is closely related to a time-varying positive semidefinite (PSD) matrix (defined in (7)), and the proof relies on carefully upper and lower bounding the eigenvalues of this matrix throughout training, which in turn implies the desired convergence result.
+
+First, we can make the following simplifying assumption without loss of generality. See Appendix B in Du & Hu (2019) for justification.
+
+Assumption 4.1. (Without loss of generality) $X \in \mathbb{R}^{d_x \times r}$ , $\operatorname{rank}(X) = r$ , $Y = W^* X$ , and $\ell^* = 0$ .
+
+Now we briefly review Du & Hu (2019)'s framework. The key idea is to look at the network's output, defined as
+
+$$
+U = \alpha W _ {L: 1} X \in \mathbb {R} ^ {d _ {y} \times r}.
+$$
+
+We also write $U(t) = \alpha W_{L:1}(t)X$ as the output at time $t$ . Note that $\ell(t) = \frac{1}{2} \|U(t) - Y\|_F^2$ . According to the gradient descent update rule, we write
+
+$$
+W _ {L: 1} (t + 1) = \prod_ {i} \left(W _ {i} (t) - \eta \frac {\partial \ell}{\partial W _ {i}} (t)\right) = W _ {L: 1} (t) - \sum_ {i = 1} ^ {L} \eta W _ {L: i + 1} (t) \frac {\partial \ell}{\partial W _ {i}} (t) W _ {i - 1: 1} (t) + E (t),
+$$
+
+where $E(t)$ contains all the high-order terms (i.e., those with $\eta^2$ or higher). With this definition, the evolution of $U(t)$ can be written as the following equation:
+
+$$
+\operatorname {v e c} (U (t + 1) - U (t)) = - \eta P (t) \cdot \operatorname {v e c} (U (t) - Y) + \alpha \cdot \operatorname {v e c} (E (t) X), \tag {6}
+$$
+
+where
+
+$$
+P (t) = \alpha^ {2} \sum_ {i = 1} ^ {L} \left[ \left(\left(W _ {i - 1: 1} (t) X\right) ^ {\top} \left(W _ {i - 1: 1} (t) X\right)\right) \otimes \left(W _ {L: i + 1} (t) W _ {L: i + 1} ^ {\top} (t)\right) \right]. \tag {7}
+$$
+
+Notice that $P(t)$ is always PSD since it is the sum of $L$ PSD matrices. Therefore, in order to establish convergence, we only need to (i) show that the higher-order term $E(t)$ is small and (ii) prove upper and lower bounds on $P(t)$ 's eigenvalues. For the second task, it suffices to control the singular values of $W_{i-1:1}(t)$ and $W_{L:i+1}(t)$ ( $i \in [L]$ ). Under orthogonal initialization, these matrices are perfectly isometric at initialization, and we will show that they stay close to isometry during training, thus enabling efficient convergence.
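+
+The identity (6)-(7) is easy to check numerically. The following sketch (illustrative only; shapes and seed are arbitrary) builds $P(t)$ from the Kronecker formula, verifies that it is PSD, and confirms that one gradient descent step satisfies (6) up to the higher-order term $E(t)$, which carries an extra factor of $\eta$:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+
+def scaled_orth(rows, cols, m):
+    g = rng.standard_normal((max(rows, cols), min(rows, cols)))
+    q, _ = np.linalg.qr(g)
+    return np.sqrt(m) * (q if rows >= cols else q.T)
+
+d_x, d_y, m, L, r = 4, 3, 8, 6, 4
+X = rng.standard_normal((d_x, r))
+Y = rng.standard_normal((d_y, d_x)) @ X
+alpha = 1.0 / np.sqrt(float(m) ** (L - 1) * d_y)
+W = [scaled_orth(m, d_x, m)] + [scaled_orth(m, m, m) for _ in range(L - 2)] \
+    + [scaled_orth(d_y, m, m)]
+
+fwd = [X]                          # fwd[i] = W_{i:1} X
+for Wi in W[:-1]:
+    fwd.append(Wi @ fwd[-1])
+bwd = [np.eye(d_y)] * L            # bwd[i] = W_L ... W_{i+2}  (0-indexed layers)
+for i in range(L - 2, -1, -1):
+    bwd[i] = bwd[i + 1] @ W[i + 1]
+U = alpha * (W[-1] @ fwd[-1])
+R = U - Y
+
+# P from (7); note vec(B M C) = (C kron B) vec(M) for symmetric C, column-major vec
+P = alpha ** 2 * sum(np.kron(fwd[i].T @ fwd[i], bwd[i] @ bwd[i].T) for i in range(L))
+
+eta = 1e-5
+W_new = [W[i] - eta * alpha * bwd[i].T @ R @ fwd[i].T for i in range(L)]
+prod = X
+for Wi in W_new:
+    prod = Wi @ prod
+U_new = alpha * prod
+
+lhs = (U_new - U).flatten(order='F')
+rhs = -eta * P @ R.flatten(order='F')
+rel_residual = np.linalg.norm(lhs - rhs) / np.linalg.norm(rhs)   # the alpha*vec(E X) term
+min_eig = np.linalg.eigvalsh((P + P.T) / 2).min()                # PSD up to round-off
+```
+
+Here `rel_residual` is small (it is $O(\eta)$ relative to the first-order term), and `min_eig` is nonnegative up to floating-point error.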
+
+The following lemma summarizes some properties at initialization.
+
+Lemma 4.2. At initialization, we have
+
+$$
+\sigma_ {\max } \left(W _ {j: i} (0)\right) = \sigma_ {\min } \left(W _ {j: i} (0)\right) = m ^ {\frac {j - i + 1}{2}}, \quad \forall 1 \leq i \leq j \leq L, (i, j) \neq (1, L). \tag {8}
+$$
+
+Furthermore, with probability at least $1 - \delta$ , the loss at initialization satisfies
+
+$$
+\ell (0) \leq O \left(1 + \frac {\log (r / \delta)}{d _ {y}} + \| W ^ {*} \| ^ {2}\right) \| X \| _ {F} ^ {2}. \tag {9}
+$$
+
+Proof sketch. The spectral property (8) follows directly from (4).
+
+To prove (9), we essentially need to upper bound the magnitude of the network's initial output. This turns out to be equivalent to studying the magnitude of the projection of a vector onto a random low-dimensional subspace, which we can bound using standard concentration inequalities. The details are given in Appendix A.1.
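+
+The exact isometry (8) can also be verified directly: any sub-product of the initial weights other than the full product $W_{L:1}(0)$ has all singular values equal to $m^{\frac{j-i+1}{2}}$. A minimal check (with arbitrary illustrative shapes and seed):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+
+def scaled_orth(rows, cols, m):
+    g = rng.standard_normal((max(rows, cols), min(rows, cols)))
+    q, _ = np.linalg.qr(g)
+    return np.sqrt(m) * (q if rows >= cols else q.T)
+
+d_x, d_y, m, L = 4, 3, 16, 10
+W = [scaled_orth(m, d_x, m)] + [scaled_orth(m, m, m) for _ in range(L - 2)] \
+    + [scaled_orth(d_y, m, m)]
+
+def sub_product(W, i, j):          # W_{j:i} = W_j ... W_i, with the paper's 1-indexing
+    P = W[i - 1]
+    for k in range(i, j):
+        P = W[k] @ P
+    return P
+
+for (i, j) in [(1, 5), (3, 8), (2, L), (4, 4)]:    # any (i, j) != (1, L)
+    s = np.linalg.svd(sub_product(W, i, j), compute_uv=False)
+    assert np.allclose(s, m ** ((j - i + 1) / 2))  # perfect scaled isometry
+```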
+
+Now we proceed to prove Theorem 4.1. We define $B = O\left(1 + \frac{\log(r / \delta)}{d_y} + \| W^*\|^2\right) \| X\|_F^2$ which is the upper bound on $\ell(0)$ from (9). Conditioned on (9) being satisfied, we will use induction on $t$ to prove the following three properties $\mathcal{A}(t), \mathcal{B}(t)$ and $\mathcal{C}(t)$ for all $t = 0, 1, \ldots$ :
+
+- $\mathcal{A}(t)$ : $\ell(t) \leq \left(1 - \frac{1}{2}\eta L\sigma_{\min}^2(X) / d_y\right)^t\ell(0) \leq \left(1 - \frac{1}{2}\eta L\sigma_{\min}^2(X) / d_y\right)^tB.$
+- $\mathcal{B}(t)$ : $\sigma_{\max}(W_{j:i}(t)) \leq 1.1m^{\frac{j - i + 1}{2}}, \sigma_{\min}(W_{j:i}(t)) \geq 0.9m^{\frac{j - i + 1}{2}}$ , $\forall 1 \leq i \leq j \leq L, (i,j) \neq (1,L)$ .
+- $\mathcal{C}(t)$ : $\| W_i(t) - W_i(0)\| _F\leq \frac{8\sqrt{Bd_y}\|X\|}{L\sigma_{\min}^2(X)},\quad \forall 1\leq i\leq L.$
+
+$\mathcal{A}(0)$ and $\mathcal{B}(0)$ are true according to Lemma 4.2, and $\mathcal{C}(0)$ is trivially true. In order to prove $\mathcal{A}(t)$ , $\mathcal{B}(t)$ and $\mathcal{C}(t)$ for all $t$ , we will prove the following claims for all $t \geq 0$ :
+
+Claim 4.3. $\mathcal{A}(0),\ldots ,\mathcal{A}(t),\mathcal{B}(0),\ldots ,\mathcal{B}(t)\Longrightarrow \mathcal{C}(t + 1).$
+
+Claim 4.4. $\mathcal{C}(t)\Longrightarrow \mathcal{B}(t).$
+
+Claim 4.5. $\mathcal{A}(t),\mathcal{B}(t)\Longrightarrow \mathcal{A}(t + 1).$
+
+The proofs of these claims are given in Appendix A. Notice that we finish the proof of Theorem 4.1 once we prove $\mathcal{A}(t)$ for all $t\geq 0$.
+
+# 5 EXPONENTIAL CURSE OF GAUSSIAN INITIALIZATION
+
+In this section, we show that gradient descent with Gaussian random initialization necessarily suffers from a running time that scales exponentially with the depth of the network, unless the width becomes nearly linear in the depth. Since we mostly focus on the dependence of width and running time on depth, we will assume the depth $L$ to be sufficiently large.
+
+Recall that we want to minimize the objective $\ell(W_1, \ldots, W_L) = \frac{1}{2} \|\alpha W_{L:1} X - Y\|_F^2$ by gradient descent. We assume $Y = W^*X$ for some $W^* \in \mathbb{R}^{d_y \times d_x}$ , so that the optimal objective value is 0. For convenience, we assume $\|X\|_F = \Theta(1)$ and $\|Y\|_F = \Theta(1)$ .
+
+Suppose that at layer $i \in [L]$ , every entry of $W_{i}(0)$ is sampled from $\mathcal{N}(0, \sigma_{i}^{2})$ , and all weights in the network are independent. We set the scaling factor $\alpha$ such that the initial output of the network does not blow up exponentially (in expectation):
+
+$$
+\mathbb {E} \left[ \| f (x; W _ {1} (0), \dots , W _ {L} (0)) \| ^ {2} \right] \leq L ^ {O (1)} \cdot \| x \| ^ {2}, \quad \forall x \in \mathbb {R} ^ {d _ {x}}. \tag {10}
+$$
+
+Note that $\mathbb{E}\left[\| f(x;W_1(0),\ldots ,W_L(0))\|^2\right] = \alpha^2\prod_{i = 1}^{L}(d_i\sigma_i^2)\| x\|^2$ . Thus (10) means
+
+$$
+\alpha^ {2} \prod_ {i = 1} ^ {L} \left(d _ {i} \sigma_ {i} ^ {2}\right) \leq L ^ {O (1)}.
+$$
+
+We also assume that the magnitude of initialization at each layer cannot vanish with depth:
+
+$$
+d _ {i} \sigma_ {i} ^ {2} \geq \frac {1}{L ^ {O (1)}}, \quad \forall i \in [ L ]. \tag {11}
+$$
+
+Note that the assumptions (10) and (11) are just sanity checks to rule out the obvious pathological cases – they are easily satisfied by all the commonly used initialization schemes in practice.
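+
+For instance, for the standard "1/fan-in" Gaussian initialization $\sigma_i^2 = 1/d_{i-1}$ with $\alpha = 1$, the product in (10) telescopes to $d_L/d_0$ and each factor in (11) is a ratio of consecutive widths; a short check (with hypothetical widths chosen for illustration):
+
+```python
+import numpy as np
+
+# hypothetical widths d_0, ..., d_L and 1/fan-in Gaussian init: sigma_i^2 = 1 / d_{i-1}
+d = [50, 64, 64, 64, 32]
+sigma2 = [1.0 / d[i - 1] for i in range(1, len(d))]
+alpha = 1.0
+
+# (10): alpha^2 * prod_i d_i sigma_i^2 telescopes to d_L / d_0 -- polynomial, not exponential
+scale = alpha ** 2 * np.prod([d[i] * sigma2[i - 1] for i in range(1, len(d))])
+assert np.isclose(scale, d[-1] / d[0])
+# (11): each d_i sigma_i^2 = d_i / d_{i-1} is bounded away from zero
+assert min(d[i] * sigma2[i - 1] for i in range(1, len(d))) >= 0.5
+```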
+
+Now we formally state our main theorem in this section.
+
+Theorem 5.1. Suppose $\max \{d_0, d_1, \ldots, d_L\} \leq O(L^{1 - \gamma})$ for some universal constant $0 < \gamma \leq 1$ . Then there exists a universal constant $c > 0$ such that, if gradient descent is run with learning rate $\eta \leq e^{cL^{\gamma}}$ , then with probability at least 0.9 over the random initialization, for the first $e^{\Omega (L^{\gamma})}$ iterations, the objective value is stuck between $0.4\| Y\| _F^2$ and $0.6\| Y\| _F^2$ .
+
+Theorem 5.1 establishes that efficient convergence from Gaussian initialization is impossible at large depth unless the width becomes nearly linear in the depth. This nearly linear dependence is the best we can hope for, since Du & Hu (2019) proved a positive result when the width grows faster than linearly in the depth. Therefore, a phase transition from untrainable to trainable happens when the width and depth are in a nearly linear relation. Furthermore, Theorem 5.1 generalizes the result of Shamir (2018), which only treats the special case of $d_0 = \dots = d_L = 1$ .
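+
+The plateau predicted by Theorem 5.1 is easy to reproduce in a small simulation (a sketch with arbitrary choices of width, depth, and seed; this is not the experiment of Section 6). With width 4 and depth 200, working directly with the rescaled matrices $A_i$ (entries $\mathcal{N}(0, 1/d)$, so $\beta = 1$), the initial output is exponentially small, the loss starts at essentially $\frac{1}{2}\|Y\|_F^2$, and gradient descent leaves it there:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+
+d, L = 4, 200                      # width far below the O(L^{1 - gamma}) threshold
+X, _ = np.linalg.qr(rng.standard_normal((d, d)))
+Y = rng.standard_normal((d, d)) @ X
+A = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(L)]   # entries N(0, 1/d)
+
+def output(A):
+    P = X
+    for Ai in A:
+        P = Ai @ P
+    return P                       # beta * A_{L:1} X with beta = 1
+
+def loss(A):
+    return 0.5 * np.linalg.norm(output(A) - Y, 'fro') ** 2
+
+target = 0.5 * np.linalg.norm(Y, 'fro') ** 2   # loss value when the output is ~ 0
+l0 = loss(A)
+for _ in range(100):               # plain gradient descent, step size 1.0
+    fwd = [X]
+    for Ai in A[:-1]:
+        fwd.append(Ai @ fwd[-1])
+    bwd = [np.eye(d)] * L
+    for i in range(L - 2, -1, -1):
+        bwd[i] = bwd[i + 1] @ A[i + 1]
+    R = A[-1] @ fwd[-1] - Y
+    A = [A[i] - 1.0 * bwd[i].T @ R @ fwd[i].T for i in range(L)]
+lT = loss(A)
+```
+
+Both `l0` and `lT` sit within a fraction of a percent of `target`: the gradients are exponentially small in the depth, so the iterates cannot leave the initial neighborhood in any reasonable number of steps.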
+
+# 5.1 PROOF OF THEOREM 5.1
+
+For convenience, we define a scaled version of $W_{i}$ : let $A_{i} = W_{i} / (\sqrt{d_{i}}\sigma_{i})$ and $\beta = \alpha \prod_{i=1}^{L}(\sqrt{d_{i}}\sigma_{i})$ . Then we know $\beta \leq L^{O(1)}$ and $\alpha W_{L:1} = \beta A_{L:1}$ , where $A_{j:i} = A_{j}\dots A_{i}$ .
+
+We first give a simple upper bound on $\| A_{j:i}(0)\|$ for all $1\leq i\leq j\leq L$ .
+
+Lemma 5.2. With probability at least $1 - \delta$ , we have $\|A_{j:i}(0)\| \leq O\left(\frac{L^3}{\delta}\right)$ for all $1 \leq i \leq j \leq L$ .
+
+The proof of Lemma 5.2 is given in Appendix B.1. It simply uses Markov's inequality and a union bound.
+
+Furthermore, a key property at initialization is that if $j - i$ is large enough, $\| A_{j:i}(0)\|$ will become exponentially small.
+
+Lemma 5.3. With probability at least $1 - e^{-\Omega (L^{\gamma})}$ , for all $1 \leq i \leq j \leq L$ such that $j - i \geq \frac{L}{10}$ , we have $\| A_{j:i}(0)\| \leq e^{-\Omega (L^{\gamma})}$ .
+
+Proof. We first consider a fixed pair $(i,j)$ such that $j - i \geq \frac{L}{10}$ . In order to bound $\|A_{j:i}(0)\|$ , we first take an arbitrary unit vector $v \in \mathbb{R}^{d_{i-1}}$ and bound $\|A_{j:i}(0)v\|$ . We can write $\|A_{j:i}(0)v\|^2 = \prod_{k=i}^{j}Z_k$ , where $Z_k = \frac{\|A_{k:i}(0)v\|^2}{\|A_{k-1:i}(0)v\|^2}$ . Note that for any nonzero $v' \in \mathbb{R}^{d_{k-1}}$ independent of $A_k(0)$ , the distribution of $d_k \cdot \frac{\|A_k(0)v'\|^2}{\|v'\|^2}$ is $\chi_{d_k}^2$ . Therefore, $Z_i, \ldots, Z_j$ are independent, and $d_kZ_k \sim \chi_{d_k}^2$ ( $k = i, i+1, \ldots, j$ ). Recall the expression for the moments of chi-squared random variables: $\mathbb{E}\left[Z_k^\lambda\right] = \frac{2^\lambda\Gamma(d_k/2+\lambda)}{d_k^\lambda\Gamma(d_k/2)}$ ( $\forall \lambda > 0$ ). Taking $\lambda = \frac{1}{2}$ and using the bound $\frac{\Gamma(a + \frac{1}{2})}{\Gamma(a)} \leq \sqrt{a - 0.1}$ ( $\forall a \geq \frac{1}{2}$ ) (Qi & Luo, 2012), we get $\mathbb{E}\left[\sqrt{Z_k}\right] \leq \sqrt{\frac{2(d_k/2 - 0.1)}{d_k}} = \sqrt{1 - \frac{0.2}{d_k}} \leq 1 - \frac{0.1}{d_k}$ . Therefore we have
+
+$$
+\mathbb {E} \left[ \sqrt {\prod_ {k = i} ^ {j} Z _ {k}} \right] \leq \prod_ {k = i} ^ {j} \left(1 - \frac {0 . 1}{d _ {k}}\right) \leq \left(1 - \frac {0 . 1}{O (L ^ {1 - \gamma})}\right) ^ {j - i + 1} \leq \left(1 - \Omega (L ^ {\gamma - 1})\right) ^ {\frac {L}{1 0}} = e ^ {- \Omega (L ^ {\gamma})}.
+$$
+
+Choose a sufficiently small constant $c' > 0$ . By Markov inequality we have $\operatorname*{Pr}\left[\sqrt{\prod_{k=i}^{j}Z_k} > e^{-c'L^{\gamma}}\right] \leq e^{c'L^{\gamma}}\mathbb{E}\left[\sqrt{\prod_{k=i}^{j}Z_k}\right] \leq e^{c'L^{\gamma}}e^{-\Omega(L^{\gamma})} = e^{-\Omega(L^{\gamma})}$ . Therefore we have shown that for any fixed unit vector $v \in \mathbb{R}^{d_{i-1}}$ , with probability at least $1 - e^{-\Omega(L^{\gamma})}$ we have $\|A_{j:i}(0)v\| \leq e^{-\Omega(L^{\gamma})}$ .
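+
+The per-layer contraction factor used above can be checked exactly from the Gamma-function formula, with no sampling (the helper name below is ours):
+
+```python
+import math
+
+def mean_sqrt_normalized_chisq(d):
+    # E[sqrt(Z)] for d * Z ~ chi^2_d, i.e. 2^(1/2) Gamma(d/2 + 1/2) / (d^(1/2) Gamma(d/2))
+    return math.exp(0.5 * math.log(2.0 / d) + math.lgamma(d / 2 + 0.5) - math.lgamma(d / 2))
+
+# the bound E[sqrt(Z_k)] <= 1 - 0.1 / d_k from the proof of Lemma 5.3
+for d in range(1, 201):
+    assert mean_sqrt_normalized_chisq(d) <= 1 - 0.1 / d
+```
+
+Multiplying this factor over the $\geq L/10$ layers of width $O(L^{1-\gamma})$ gives the $e^{-\Omega(L^{\gamma})}$ decay claimed above.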
+
+Next, we use this to bound $\| A_{j:i}(0) \|$ via an $\epsilon$ -net argument. We partition the index set $[d_{i-1}]$ into $[d_{i-1}] = S_1 \cup S_2 \cup \dots \cup S_q$ such that $|S_l| \leq L^{\gamma/2} (\forall l \in [q])$ and $q = O\left(\frac{d_{i-1}}{L^{\gamma/2}}\right)$ . For each $l \in [q]$ , let $\mathcal{N}_l$ be a $\frac{1}{2}$ -net for all the unit vectors in $\mathbb{R}^{d_{i-1}}$ with support in $S_l$ . Note that we can choose $\mathcal{N}_l$ such that $|\mathcal{N}_l| = e^{O(|S_l|)} = e^{O(L^{\gamma/2})}$ . Taking a union bound over $\cup_{l=1}^q \mathcal{N}_l$ , we know that $\| A_{j:i}(0)v \| \leq e^{-\Omega(L^\gamma)} \| v \|$ simultaneously for all $v \in \cup_{l=1}^q \mathcal{N}_l$ with probability at least $1 - (\sum_{l=1}^q |\mathcal{N}_l|)e^{-\Omega(L^\gamma)} \geq 1 - q \cdot e^{O(L^{\gamma/2})}e^{-\Omega(L^\gamma)} = 1 - e^{-\Omega(L^\gamma)}$ .
+
+Now, for any $u \in \mathbb{R}^{d_{i-1}}$ , we write it as $u = \sum_{l=1}^q a_l u_l$ where $a_l$ is a scalar and $u_l$ is a unit vector supported on $S_l$ . By the definition of a $\frac{1}{2}$ -net, for each $l \in [q]$ there exists $v_l \in \mathcal{N}_l$ such that $\| v_l - u_l \| \leq \frac{1}{2}$ . We know that $\| A_{j:i}(0) v_l \| \leq e^{-\Omega(L^\gamma)} \| v_l \|$ for all $l \in [q]$ . Let $v = \sum_{l=1}^q a_l v_l$ . We have
+
+$$
+\begin{array}{l} \| A _ {j: i} (0) v \| \leq \sum_ {l = 1} ^ {q} | a _ {l} | \cdot \| A _ {j: i} (0) v _ {l} \| \leq \sum_ {l = 1} ^ {q} | a _ {l} | \cdot e ^ {- \Omega (L ^ {\gamma})} \| v _ {l} \| \leq e ^ {- \Omega (L ^ {\gamma})} \sqrt {q \cdot \sum_ {l = 1} ^ {q} a _ {l} ^ {2} \| v _ {l} \| ^ {2}} \\ = \sqrt {q} e ^ {- \Omega (L ^ {\gamma})} \| v \| = e ^ {- \Omega (L ^ {\gamma})} \| v \|. \\ \end{array}
+$$
+
+Note that $\| u - v\| = \| \sum_{l = 1}^{q}a_{l}(u_{l} - v_{l})\| = \sqrt{\sum_{l = 1}^{q}a_{l}^{2}\|u_{l} - v_{l}\|^{2}}\leq \sqrt{\frac{1}{4}\sum_{l = 1}^{q}a_{l}^{2}} = \frac{1}{2}\| u\|$ , which implies $\| v\| \leq \frac{3}{2}\| u\|$ . Therefore we have
+
+$$
+\begin{array}{l} \left\| A _ {j: i} (0) u \right\| \leq \left\| A _ {j: i} (0) v \right\| + \left\| A _ {j: i} (0) (u - v) \right\| \leq e ^ {- \Omega \left(L ^ {\gamma}\right)} \| v \| + \left\| A _ {j: i} (0) \right\| \cdot \| u - v \| \\ \leq e ^ {- \Omega (L ^ {\gamma})} \cdot \frac {3}{2} \| u \| + \| A _ {j: i} (0) \| \cdot \frac {1}{2} \| u \| = e ^ {- \Omega (L ^ {\gamma})} \| u \| + \| A _ {j: i} (0) \| \cdot \frac {1}{2} \| u \|. \\ \end{array}
+$$
+
+The above inequality is valid for any $u \in \mathbb{R}^{d_{i-1}}$ . Thus we can take the unit vector $u$ that maximizes $\| A_{j:i}(0)u \|$ . This gives us $\| A_{j:i}(0) \| \leq e^{-\Omega(L^{\gamma})} + \frac{1}{2} \| A_{j:i}(0) \|$ , which implies $\| A_{j:i}(0) \| \leq e^{-\Omega(L^{\gamma})}$ .
+
+Finally, we take a union bound over all possible $(i,j)$ . The failure probability is at most $L^2 e^{-\Omega (L^\gamma)} = e^{-\Omega (L^\gamma)}$ .
+
+The following lemma shows that the properties in Lemmas 5.2 and 5.3 are still to some extent preserved after applying small perturbations on all the weight matrices.
+
+Lemma 5.4. Suppose that the initial weights satisfy $\| A_{j:i}(0)\| \leq O(L^3)$ for all $1\leq i\leq j\leq L$ , and $\| A_{j:i}(0)\| \leq e^{-c_1L^\gamma}$ if $j - i\geq \frac{L}{10}$ , where $c_{1} > 0$ is a universal constant. Then for another set of matrices $A_{1},\ldots ,A_{L}$ satisfying $\| A_{i} - A_{i}(0)\| \leq e^{-0.6c_{1}L^{\gamma}}$ for all $i\in [L]$ , we must have
+
+$$
+\begin{array}{l} \left\| A _ {j: i} \right\| \leq O \left(L ^ {3}\right), \quad \forall 1 \leq i \leq j \leq L, \\ \left\| A _ {j: i} \right\| \leq O \left(e ^ {- c _ {1} L ^ {\gamma}}\right), \quad \forall 1 \leq i \leq j \leq L, j - i \geq \frac {L}{4}. \tag {12} \\ \end{array}
+$$
+
+Proof. It suffices to show that the difference $A_{j:i} - A_{j:i}(0)$ is tiny. Let $\Delta_i = A_i - A_i(0)$ . We have $A_{j:i} = (A_j(0) + \Delta_j)\dots (A_{i+1}(0) + \Delta_{i+1})(A_i(0) + \Delta_i)$ . Expanding this product, except for the one term corresponding to $A_{j:i}(0)$ , every other term has the form $A_{j:(k_s+1)}(0)\cdot \Delta_{k_s}\cdot A_{(k_s-1):(k_{s-1}+1)}(0)\cdot \Delta_{k_{s-1}}\dots \Delta_{k_1}\cdot A_{(k_1-1):i}(0)$ , where $i\leq k_1 < \dots < k_s\leq j$ . By assumption, each $\Delta_k$ has spectral norm at most $e^{-0.6c_1L^\gamma}$ , and each $A_{j':i'}(0)$ has spectral norm at most $O(L^3)$ , so we have $\| A_{j:(k_s+1)}(0)\cdot \Delta_{k_s}\cdot A_{(k_s-1):(k_{s-1}+1)}(0)\cdot \Delta_{k_{s-1}}\dots \Delta_{k_1}\cdot A_{(k_1-1):i}(0)\| \leq (e^{-0.6c_1L^\gamma})^s (O(L^3))^{s+1}$ . Therefore we have
+
+$$
+\begin{array}{l} \| A_{j:i} - A_{j:i}(0)\| \leq \sum_{s = 1}^{j - i + 1}\binom {j - i + 1}{s}\left(e^{-0.6c_{1}L^{\gamma}}\right)^{s}\left(O(L^{3})\right)^{s + 1} \\ \leq \sum_ {s = 1} ^ {j - i + 1} L ^ {s} \left(e ^ {- 0. 6 c _ {1} L ^ {\gamma}}\right) ^ {s} \left(O (L ^ {3})\right) ^ {s + 1} \leq O (L ^ {3}) \sum_ {s = 1} ^ {\infty} \left(O (L ^ {4}) e ^ {- 0. 6 c _ {1} L ^ {\gamma}}\right) ^ {s} \leq O (L ^ {3}) \sum_ {s = 1} ^ {\infty} (1 / 2) ^ {s} = O (L ^ {3}), \\ \end{array}
+$$
+
+which implies $\| A_{j:i}\| \leq O(L^3)$ for all $1\leq i\leq j\leq L$ .
+
+The proof of the second part of the lemma is postponed to Appendix B.2.
+
+
+
+As a consequence of Lemma 5.4, we can control the objective value and the gradient at any point sufficiently close to the random initialization.
+
+Lemma 5.5. For a set of weight matrices $W_{1},\ldots ,W_{L}$ with $A_{i} = W_{i} / (\sqrt{d_{i}}\sigma_{i})$ that satisfy (12), the objective and the gradient satisfy
+
+$$
+\begin{array}{l} 0. 4 \left\| Y \right\| _ {F} ^ {2} < \ell \left(W _ {1}, \dots , W _ {L}\right) < 0. 6 \left\| Y \right\| _ {F} ^ {2}, \\ \left\| \nabla_ {W _ {i}} \ell \left(W _ {1}, \dots , W _ {L}\right) \right\| \leq \left(\sqrt {d _ {i}} \sigma_ {i}\right) ^ {- 1} e ^ {- 0. 9 c _ {1} L ^ {\gamma}}, \quad \forall i \in [ L ]. \\ \end{array}
+$$
+
+The proof of Lemma 5.5 is given in Appendix B.3.
+
+Finally, we can finish the proof of Theorem 5.1 using the above lemmas.
+
+Proof of Theorem 5.1. From Lemmas 5.2 and 5.3, we know that with probability at least 0.9, we have (i) $\| A_{j:i}(0)\| \leq O(L^3)$ for all $1\leq i\leq j\leq L$ , and (ii) $\| A_{j:i}(0)\| \leq e^{-c_1L^\gamma}$ if $(i,j)$ further satisfies $j - i\geq \frac{L}{10}$ . Here $c_{1} > 0$ is a universal constant. From now on we are conditioned on these properties being satisfied. We suppose that the learning rate $\eta$ is at most $e^{0.2c_1L^\gamma}$ .
+
+We say that a set of weight matrices $W_{1},\ldots ,W_{L}$ are in the "initial neighborhood" if $\| A_{i} - A_{i}(0)\| \leq e^{-0.6c_{1}L^{\gamma}}$ for all $i\in [L]$ . From Lemmas 5.4 and 5.5 we know that in the "initial neighborhood" the objective value is always between $0.4\| Y\| _F^2$ and $0.6\| Y\| _F^2$ . Therefore we have to escape the "initial neighborhood" in order to get the objective value out of this interval.
+
+Now we calculate how many iterations are necessary to escape the "initial neighborhood." According to Lemma 5.5, inside the "initial neighborhood" each $W_{i}$ can move at most $\eta (\sqrt{d_i}\sigma_i)^{-1}e^{-0.9c_1L^\gamma}$ in one iteration by definition of the gradient descent algorithm. In order to leave the "initial neighborhood," some $W_{i}$ must satisfy $\| W_{i} - W_{i}(0)\| = \sqrt{d_{i}}\sigma_{i}\| A_{i} - A_{i}(0)\| > \sqrt{d_{i}}\sigma_{i}e^{-0.6c_{1}L^{\gamma}}$ . In order to move this amount, the number of iterations has to be at least
+
+$$
+\frac {\sqrt {d _ {i}} \sigma_ {i} e ^ {- 0 . 6 c _ {1} L ^ {\gamma}}}{\eta (\sqrt {d _ {i}} \sigma_ {i}) ^ {- 1} e ^ {- 0 . 9 c _ {1} L ^ {\gamma}}} = \frac {d _ {i} \sigma_ {i} ^ {2} e ^ {0 . 3 c _ {1} L ^ {\gamma}}}{\eta} \geq \frac {1}{L ^ {O (1)}} \cdot \frac {e ^ {0 . 3 c _ {1} L ^ {\gamma}}}{e ^ {0 . 2 c _ {1} L ^ {\gamma}}} \geq e ^ {\Omega (L ^ {\gamma})}.
+$$
+
+This finishes the proof.
+
+
+Figure 1: $\log \frac{\ell(t)}{\ell(0)}$ at $t = 1258$ and $t = 10000$ , for different depth-width configurations and different initialization schemes: (a) Gaussian, steps=1258; (b) Gaussian, steps=10000; (c) Orthogonal, steps=1258; (d) Orthogonal, steps=10000. Darker color means smaller loss.
+
+# 6 EXPERIMENTS
+
+In this section, we provide empirical evidence to support the results in Sections 4 and 5. To study how depth and width affect the convergence speed of gradient descent under orthogonal and Gaussian initialization schemes, we train a family of linear networks with widths ranging from 10 to 1000 and depths from 1 to 700, on a fixed synthetic dataset $(X,Y)$ . Each network is trained using gradient descent starting from both Gaussian and orthogonal initializations. In Figure 1, we lay out the logarithm of the relative training loss $\frac{\ell(t)}{\ell(0)}$ , using heat-maps, at steps $t = 1258$ and $t = 10000$ . In each heat-map, every point represents the relative training loss of one experiment; the darker the color, the smaller the loss. Figure 1 clearly demonstrates a sharp transition from untrainable to trainable (i.e., from red to black) as we increase the width of the network:
+
+- for Gaussian initialization, this transition occurs across a contour characterized by a linear relation between width and depth;
+- for orthogonal initialization, the transition occurs at a width that is approximately independent of the depth.
+
+These observations corroborate the theory developed in Sections 4 and 5.
+
+To take a closer look at the training dynamics, we also plot relative loss vs. training time for a variety of depth-width configurations; see Figure 2. There again we can clearly see that orthogonal initialization enables fast training at small width (independent of the depth), whereas the width required by Gaussian initialization grows with the depth.
+
+
+Figure 2: Relative loss vs. training time at depth 50 (a), depth 200 (b), and depth 400 (c). For each plot, we vary the width from 50 (yellow) to 1200 (purple). Solid and dashed lines represent Gaussian (GS) and orthogonal (OT) initializations.
+
+# 7 CONCLUSION
+
+In this work, we studied the effect of the initialization scheme of deep linear neural networks on the convergence time of gradient descent. We found that when the initial weights are iid Gaussian, the convergence time grows exponentially in the depth unless the width grows nearly linearly with the depth. In contrast, when the initial weight matrices are drawn from the orthogonal group, the width needed to guarantee efficient convergence is independent of the depth. These results provide the first rigorous proof that orthogonal initialization is superior to Gaussian initialization in terms of convergence time.
+
+# REFERENCES
+
+Madhu S Advani and Andrew M Saxe. High-dimensional dynamics of generalization error in neural networks. arXiv preprint arXiv:1710.03667, 2017.
+Sanjeev Arora, Nadav Cohen, Noah Golowich, and Wei Hu. A convergence analysis of gradient descent for deep linear neural networks. arXiv preprint arXiv:1810.02281, 2018.
+Peter Bartlett, Dave Helmbold, and Phil Long. Gradient descent with identity initialization efficiently learns positive definite linear transformations. In International Conference on Machine Learning, pp. 520-529, 2018.
+Minmin Chen, Jeffrey Pennington, and Samuel S Schoenholz. Dynamical isometry and a mean field theory of rnns: Gating enables signal propagation in recurrent neural networks. arXiv preprint arXiv:1806.05394, 2018.
+Simon Du and Wei Hu. Width provably matters in optimization for deep linear neural networks. In International Conference on Machine Learning, pp. 1655-1664, 2019.
+Dar Gilboa, Bo Chang, Minmin Chen, Greg Yang, Samuel S Schoenholz, Ed H Chi, and Jeffrey Pennington. Dynamical isometry and a mean field theory of lstms and grus. arXiv preprint arXiv:1901.08987, 2019.
+Eldad Haber and Lars Ruthotto. Stable architectures for deep neural networks. Inverse Problems, 34(1):014004, 2017.
+Moritz Hardt and Tengyu Ma. Identity matters in deep learning. International Conference on Learning Representations, 2016.
+Mikael Henaff, Arthur Szlam, and Yann LeCun. Recurrent orthogonal networks and long-memory tasks. arXiv preprint arXiv:1602.06662, 2016.
+Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, 2012.
+
+Kenji Kawaguchi. Deep learning without poor local minima. In Advances in Neural Information Processing Systems, pp. 586-594, 2016.
+Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012.
+Thomas Laurent and James von Brecht. A recurrent neural network without chaos. arXiv preprint arXiv:1612.06212, 2016.
+Thomas Laurent and James von Brecht. Deep linear networks with arbitrary loss: All local minima are global. In International Conference on Machine Learning, pp. 2908-2913, 2018.
+Quoc V Le, Navdeep Jaitly, and Geoffrey E Hinton. A simple way to initialize recurrent networks of rectified linear units. arXiv preprint arXiv:1504.00941, 2015.
+Zenan Ling and Robert C Qiu. Spectrum concentration in deep residual learning: a free probability approach. IEEE Access, 7:105212-105223, 2019.
+Haihao Lu and Kenji Kawaguchi. Depth creates no bad local minima. arXiv preprint arXiv:1702.08580, 2017.
+Zakaria Mhammedi, Andrew Hellicar, Ashfaqur Rahman, and James Bailey. Efficient orthogonal parametrisation of recurrent neural networks using householder reflections. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2401-2409. JMLR.org, 2017.
+Jeffrey Pennington, Samuel Schoenholz, and Surya Ganguli. Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice. In Advances in neural information processing systems, pp. 4785-4795, 2017.
+Jeffrey Pennington, Samuel S Schoenholz, and Surya Ganguli. The emergence of spectral universality in deep networks. arXiv preprint arXiv:1802.09979, 2018.
+Feng Qi and Qiu-Ming Luo. Bounds for the ratio of two gamma functions--from wendel's and related inequalities to logarithmically completely monotonic functions. Banach Journal of Mathematical Analysis, 6(2):132-158, 2012.
+Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. International Conference on Learning Representations, 2014.
+Ohad Shamir. Exponential convergence time of gradient descent for one-dimensional deep linear neural networks. arXiv preprint arXiv:1809.08587, 2018.
+Wojciech Tarnowski, Piotr Warchoł, Stanisław Jastrzębski, Jacek Tabor, and Maciej Nowak. Dynamical isometry is achieved in residual networks in a universal way for any activation function. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 2221-2230, 2019.
+Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. On orthogonality and learning recurrent networks with long term dependencies. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 3570-3578. JMLR.org, 2017.
+Scott Wisdom, Thomas Powers, John Hershey, Jonathan Le Roux, and Les Atlas. Full-capacity unitary recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 4880-4888, 2016.
+Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
+
+Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel Schoenholz, and Jeffrey Pennington. Dynamical isometry and a mean field theory of cnns: How to train 10,000-layer vanilla convolutional neural networks. In International Conference on Machine Learning, pp. 5389-5398, 2018.
+Chulhee Yun, Suvrit Sra, and Ali Jadbabaie. Global optimality conditions for deep neural networks. arXiv preprint arXiv:1707.02444, 2017.
+Yi Zhou and Yingbin Liang. Critical points of linear neural networks: Analytical forms and landscape properties. 2018.
+
+# A PROOFS FOR SECTION 4
+
+# A.1 PROOF OF LEMMA 4.2
+
+Proof of Lemma 4.2. We only need to prove (9). We first upper bound the magnitude of the network's initial output on any given input $x \in \mathbb{R}^{d_x}$ . Let $z = \frac{1}{\sqrt{m^{L - 1}}} W_{L - 1:1}(0) \cdot x \in \mathbb{R}^m$ . Then we have $\| z \| = \| x \|$ , and $f(x; W_1(0), \ldots, W_L(0)) = \frac{1}{\sqrt{d_y}} W_L(0) \cdot z = \sqrt{\frac{m}{d_y}} \cdot \frac{1}{\sqrt{m}} W_L(0) \cdot z$ . Note that $\frac{1}{\sqrt{m}} W_L(0) \cdot z$ is the (signed) projection of $z$ onto a random subspace in $\mathbb{R}^m$ of dimension $d_y$ . Therefore $\left\| \frac{1}{\sqrt{m}} W_L(0) \cdot z \right\|^2 / \| z \|^2$ has the same distribution as $\frac{g_1^2 + \cdots + g_{d_y}^2}{g_1^2 + \cdots + g_m^2}$ , where $g_1, \ldots, g_m$ are i.i.d. samples from $\mathcal{N}(0, 1)$ . By the standard tail bounds for chi-squared distributions we have
+
+$$
+\begin{array}{l} \Pr \left[ g _ {1} ^ {2} + \dots + g _ {d _ {y}} ^ {2} \leq d _ {y} + 2 \sqrt {d _ {y} \log (1 / \delta^ {\prime})} + 2 \log (1 / \delta^ {\prime}) \right] \geq 1 - \delta^ {\prime}, \\ \operatorname * {P r} \left[ g _ {1} ^ {2} + \dots + g _ {m} ^ {2} \geq m - 2 \sqrt {m \log (1 / \delta^ {\prime})} \right] \geq 1 - \delta^ {\prime}. \\ \end{array}
+$$
+
+Let $\delta' = \frac{\delta}{2r}$ . Note that $m > C \cdot \log(r / \delta)$ . We know that with probability at least $1 - \frac{\delta}{r}$ we have
+
+$$
+\left\| \frac {1}{\sqrt {m}} W _ {L} (0) \cdot z \right\| ^ {2} / \| z \| ^ {2} \leq \frac {d _ {y} + 2 \sqrt {d _ {y} \log (2 r / \delta)} + 2 \log (2 r / \delta)}{m - 2 \sqrt {m \log (2 r / \delta)}} = \frac {O (d _ {y} + \log (r / \delta))}{\Omega (m)},
+$$
+
+which implies
+
+$$
+\begin{array}{l} \left\| f (x; W _ {1} (0), \dots , W _ {L} (0)) \right\| ^ {2} = \frac {m}{d _ {y}} \left\| \frac {1}{\sqrt {m}} W _ {L} (0) \cdot z \right\| ^ {2} = \frac {m}{d _ {y}} \cdot O \left(\frac {d _ {y} + \log (r / \delta)}{m}\right) \| z \| ^ {2} \tag {13} \\ = O \left(1 + \frac {\log (r / \delta)}{d _ {y}}\right) \| x \| ^ {2}. \\ \end{array}
+$$
+
+Finally, taking a union bound, we know that with probability at least $1 - \delta$ , the inequality (13) holds for every $x \in \{x_1, \ldots, x_r\}$ , which implies
+
+$$
+\begin{array}{l} \ell (0) = \frac {1}{2} \sum_ {k = 1} ^ {r} \| f (x _ {k}; W _ {1} (0), \dots , W _ {L} (0)) - y _ {k} \| ^ {2} \leq \sum_ {k = 1} ^ {r} \left(\| f (x _ {k}; W _ {1} (0), \dots , W _ {L} (0)) \| ^ {2} + \| y _ {k} \| ^ {2}\right) \\ \leq O \left(1 + \frac {\log (r / \delta)}{d _ {y}}\right) \sum_ {k = 1} ^ {r} \| x _ {k} \| ^ {2} + \sum_ {k = 1} ^ {r} \| y _ {k} \| ^ {2} = O \left(1 + \frac {\log (r / \delta)}{d _ {y}}\right) \| X \| _ {F} ^ {2} + \| Y \| _ {F} ^ {2} \\ \leq O \left(1 + \frac {\log (r / \delta)}{d _ {y}} + \| W ^ {*} \| ^ {2}\right) \| X \| _ {F} ^ {2}. \\ \end{array}
+$$
+
+# A.2 PROOF OF CLAIM 4.3
+
+Proof of Claim 4.3. Let $\gamma = \frac{1}{2} L\sigma_{\min}^2(X) / d_y$ . From $\mathcal{A}(0), \ldots, \mathcal{A}(t)$ we have $\ell(s) \leq (1 - \eta\gamma)^s B$ for all $0 \leq s \leq t$ . The gradient of the objective function (2) is $\frac{\partial\ell}{\partial W_i} = \alpha W_{L:i+1}^\top(U - Y)(W_{i-1:1}X)^\top$ . Thus we can bound the gradient norm as follows for all $0 \leq s \leq t$ and all $i \in [L]$ :
+
+$$
+\begin{array}{l} \left\| \frac {\partial \ell}{\partial W _ {i}} (s) \right\| _ {F} \leq \alpha \| W _ {L: i + 1} (s) \| \| U (s) - Y \| _ {F} \| W _ {i - 1: 1} (s) \| \| X \| \\ \leq \frac {1}{\sqrt {m ^ {L - 1} d _ {y}}} \cdot 1. 1 m ^ {\frac {L - i}{2}} \cdot \sqrt {2 \ell (s)} \cdot 1. 1 m ^ {\frac {i - 1}{2}} \| X \| \leq \frac {2 \sqrt {(1 - \eta \gamma) ^ {s} B}}{\sqrt {d _ {y}}} \| X \|, \tag {14} \\ \end{array}
+$$
+
+where we have used $\mathcal{B}(s)$ . Then for all $i\in [L]$ we have:
+
+$$
+\left\| W _ {i} (t + 1) - W _ {i} (0) \right\| _ {F} \leq \sum_ {s = 0} ^ {t} \left\| W _ {i} (s + 1) - W _ {i} (s) \right\| _ {F} = \sum_ {s = 0} ^ {t} \left\| \eta \frac {\partial \ell}{\partial W _ {i}} (s) \right\| _ {F}
+$$
+
+$$
+\begin{array}{l} \leq \eta \sum_ {s = 0} ^ {t} \frac {2 \sqrt {(1 - \eta \gamma) ^ {s} B}}{\sqrt {d _ {y}}} \| X \| \leq \frac {2 \eta \sqrt {B}}{\sqrt {d _ {y}}} \| X \| \sum_ {s = 0} ^ {\infty} (1 - \eta \gamma / 2) ^ {s} \leq \frac {2 \eta \sqrt {B}}{\sqrt {d _ {y}}} \| X \| \cdot \frac {2}{\eta \gamma} \\ = \frac {8 \sqrt {B d _ {y}} \| X \|}{L \sigma_ {\mathrm {m i n}} ^ {2} (X)}. \\ \end{array}
+$$
+
+This proves $\mathcal{C}(t + 1)$ .
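+
+The gradient formula $\frac{\partial \ell}{\partial W_i} = \alpha W_{L:i+1}^\top (U - Y)(W_{i-1:1}X)^\top$ used above can be checked against central finite differences (an illustrative sketch with arbitrary shapes and seed; since $\ell$ is quadratic in each $W_i$ separately, the central difference is exact up to round-off):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(4)
+d_x, d_y, m, L, r = 3, 2, 5, 4, 3
+X = rng.standard_normal((d_x, r))
+Y = rng.standard_normal((d_y, r))
+alpha = 1.0 / np.sqrt(float(m) ** (L - 1) * d_y)
+dims = [d_x] + [m] * (L - 1) + [d_y]
+W = [rng.standard_normal((dims[k + 1], dims[k])) for k in range(L)]
+
+def loss(W):
+    P = X
+    for Wk in W:
+        P = Wk @ P
+    return 0.5 * np.linalg.norm(alpha * P - Y, 'fro') ** 2
+
+def analytic_grad(W, k):           # 0-indexed layer k = paper's layer i = k + 1
+    fwd = X                        # W_{i-1:1} X
+    for Wj in W[:k]:
+        fwd = Wj @ fwd
+    bwd = np.eye(d_y)              # W_{L:i+1}
+    for Wj in reversed(W[k + 1:]):
+        bwd = bwd @ Wj
+    P = X
+    for Wj in W:
+        P = Wj @ P
+    return alpha * bwd.T @ (alpha * P - Y) @ fwd.T
+
+k, eps = 2, 1e-6
+g = analytic_grad(W, k)
+g_num = np.zeros_like(g)
+for a in range(g.shape[0]):
+    for b in range(g.shape[1]):
+        Wp = [w.copy() for w in W]; Wp[k][a, b] += eps
+        Wm = [w.copy() for w in W]; Wm[k][a, b] -= eps
+        g_num[a, b] = (loss(Wp) - loss(Wm)) / (2 * eps)
+```
+
+The analytic and numerical gradients agree to floating-point accuracy.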
+
+
+
+# A.3 PROOF OF CLAIM 4.4
+
+Proof of Claim 4.4. Let $R = \frac{8\sqrt{B d_y} \| X \|}{L \sigma_{\min}^2(X)}$ and $\Delta_i = W_i(t) - W_i(0)$ ( $i \in [L]$ ). Then $\mathcal{C}(t)$ means $\| \Delta_i \|_F \leq R$ ( $\forall i \in [L]$ ).
+
+For $1 \leq i \leq j \leq L$ , we have
+
+$$
+W _ {j: i} (t) = \left(W _ {j} (0) + \Delta_ {j}\right) \dots \left(W _ {i} (0) + \Delta_ {i}\right).
+$$
+
+Expanding this product, each term except $W_{j:i}(0)$ has the form:
+
+$$
+W _ {j: \left(k _ {s} + 1\right)} (0) \cdot \Delta_ {k _ {s}} \cdot W _ {\left(k _ {s} - 1\right): \left(k _ {s - 1} + 1\right)} (0) \cdot \Delta_ {k _ {s - 1}} \dots \Delta_ {k _ {1}} \cdot W _ {\left(k _ {1} - 1\right): i} (0), \tag {15}
+$$
+
+where $i \leq k_1 < \dots < k_s \leq j$ are locations where terms like $\Delta_{k_l}$ are taken out. Note that every factor in (15) of the form $W_{j':i'}(0)$ satisfies $\| W_{j':i'}(0)\| = m^{\frac{j' - i' + 1}{2}}$ according to (8). Thus, we can bound the sum of all terms of the form (15) as
+
+$$
+\begin{array}{l} \|W_{j:i}(t) - W_{j:i}(0)\| \leq \sum_{s=1}^{j-i+1} \binom{j-i+1}{s} R^s m^{\frac{j-i+1-s}{2}} = (\sqrt{m}+R)^{j-i+1} - (\sqrt{m})^{j-i+1} \\ = (\sqrt{m})^{j-i+1} \left( \left(1 + R/\sqrt{m}\right)^{j-i+1} - 1 \right) \leq (\sqrt{m})^{j-i+1} \left( \left(1 + R/\sqrt{m}\right)^{L} - 1 \right) \leq 0.1 (\sqrt{m})^{j-i+1}. \\ \end{array}
+$$
+
+Here the last step uses $m > C(LR)^2$ which is implied by (5). Combined with (8), this proves $\mathcal{B}(t)$ .
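The binomial-expansion bound above is easy to check numerically. The sketch below (dimensions, depth, and $R$ are my own illustrative choices) draws $\sqrt{m}$-scaled orthogonal factors $W_i$, whose sub-products have spectral norm $m^{(j-i+1)/2}$, perturbs each by a Frobenius-norm-$R$ matrix $\Delta_i$, and verifies $\|\prod_i (W_i + \Delta_i) - \prod_i W_i\| \leq (\sqrt{m}+R)^L - (\sqrt{m})^L$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, L, R = 32, 4, 0.5  # illustrative choices

# sqrt(m)-scaled orthogonal factors: any sub-product has spectral norm m^{(#factors)/2}
Ws = [np.sqrt(m) * np.linalg.qr(rng.standard_normal((m, m)))[0] for _ in range(L)]
Ds = []
for _ in range(L):
    D = rng.standard_normal((m, m))
    Ds.append(R * D / np.linalg.norm(D))  # ||Delta_i||_F = R exactly

def chain(mats):
    out = np.eye(m)
    for A in mats:
        out = A @ out
    return out

lhs = np.linalg.norm(chain([W + D for W, D in zip(Ws, Ds)]) - chain(Ws), 2)
rhs = (np.sqrt(m) + R) ** L - np.sqrt(m) ** L
assert lhs <= rhs
```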
+
+# A.4 PROOF OF CLAIM 4.5
+
+Proof of Claim 4.5. Recall that we have the dynamics (6) for $U(t)$ . In order to establish convergence from (6) we need to prove upper and lower bounds on the eigenvalues of $P(t)$ , as well as show that the high-order term $E(t)$ is small. We will prove these using $\mathcal{B}(t)$ .
+
+Using the definition (7) and property $\mathcal{B}(t)$ , we have
+
+$$
+\begin{array}{l} \lambda_{\max}(P(t)) \leq \alpha^2 \sum_{i=1}^{L} \lambda_{\max}\left( \left(W_{i-1:1}(t)X\right)^\top \left(W_{i-1:1}(t)X\right) \right) \cdot \lambda_{\max}\left( W_{L:i+1}(t) W_{L:i+1}^\top(t) \right) \\ \leq \frac{1}{m^{L-1} d_y} \sum_{i=1}^{L} \left( 1.1\, m^{\frac{i-1}{2}} \sigma_{\max}(X) \right)^2 \left( 1.1\, m^{\frac{L-i}{2}} \right)^2 \leq 2L\sigma_{\max}^2(X)/d_y, \\ \end{array}
+$$
+
+$$
+\begin{array}{l} \lambda_{\min}(P(t)) \geq \alpha^2 \sum_{i=1}^{L} \lambda_{\min}\left( \left(W_{i-1:1}(t)X\right)^\top \left(W_{i-1:1}(t)X\right) \right) \cdot \lambda_{\min}\left( W_{L:i+1}(t) W_{L:i+1}^\top(t) \right) \\ \geq \frac{1}{m^{L-1} d_y} \sum_{i=1}^{L} \left( 0.9\, m^{\frac{i-1}{2}} \sigma_{\min}(X) \right)^2 \left( 0.9\, m^{\frac{L-i}{2}} \right)^2 \geq \frac{3}{5} L\sigma_{\min}^2(X)/d_y. \\ \end{array}
+$$
+
+In the lower bound above, we make use of the following relation on dimensions: $m \geq d_x \geq r$ , which enables the inequality $\lambda_{\min}\left((W_{i-1:1}(t)X)^\top(W_{i-1:1}(t)X)\right) = \sigma_{\min}^2(W_{i-1:1}(t)X) \geq \sigma_{\min}^2(W_{i-1:1}(t)) \cdot \sigma_{\min}^2(X)$ .
+
+Next, we will prove the following bound on the high-order term $E(t)$ :
+
+$$
+\frac{1}{\sqrt{m^{L-1} d_y}} \|E(t)X\|_F \leq \frac{1}{6} \eta \lambda_{\min}(P(t)) \|U(t) - Y\|_F.
+$$
+
+Recall that $E(t)$ is the sum of all high-order terms in the product
+
+$$
+W _ {L: 1} (t + 1) = \prod_ {i} \left(W _ {i} (t) - \eta \frac {\partial \ell}{\partial W _ {i}} (t)\right).
+$$
+
+Same as (14), we have $\left\| \frac{\partial \ell}{\partial W_i}(t) \right\|_F \leq \frac{2 \sqrt{\ell(t)} \| X \|}{\sqrt{d_y}} (\forall i \in [L])$ . Then we have
+
+$$
+\begin{array}{l} \frac{1}{\sqrt{m^{L-1} d_y}} \left\| E(t)X \right\|_F \\ \leq \frac{1}{\sqrt{m^{L-1} d_y}} \sum_{s=2}^{L} \binom{L}{s} \left( \eta \cdot \frac{2\sqrt{\ell(t)}\|X\|}{\sqrt{d_y}} \right)^s m^{\frac{L-s}{2}} \|X\| \\ \leq \sqrt{\frac{m}{d_y}} \|X\| \sum_{s=2}^{L} L^s \left( \eta \cdot \frac{2\sqrt{\ell(t)}\|X\|}{\sqrt{d_y}} \right)^s m^{-\frac{s}{2}} \\ = \sqrt{\frac{m}{d_y}} \|X\| \sum_{s=2}^{L} \left( \frac{2\eta L\sqrt{\ell(t)}\|X\|}{\sqrt{m d_y}} \right)^s. \\ \end{array}
+$$
+
+From $\eta \leq \frac{d_y}{2L\|X\|^2}$ , we have $\frac{2\eta L\sqrt{\ell(t)}\|X\|}{\sqrt{md_y}} \leq \frac{\sqrt{d_y \cdot \ell(t)}}{\sqrt{m}\,\|X\|}$ . Note that $m > C \cdot \frac{d_y B}{\|X\|^2} \geq C \cdot \frac{d_y \ell(t)}{\|X\|^2}$ . Thus we have
+
+$$
+\begin{array}{l} \frac{1}{\sqrt{m^{L-1} d_y}} \|E(t)X\|_F \leq \sqrt{\frac{m}{d_y}} \|X\| \left( \frac{2\eta L\sqrt{\ell(t)}\|X\|}{\sqrt{md_y}} \right)^2 \sum_{s=2}^{L} 0.5^{s-2} \\ \leq 2\sqrt{\frac{m}{d_y}} \|X\| \left( \frac{2\eta L\sqrt{\ell(t)}\|X\|}{\sqrt{md_y}} \right)^2 \leq 2\sqrt{\frac{m}{d_y}} \|X\| \cdot \frac{2\eta L\sqrt{\ell(t)}\|X\|}{\sqrt{md_y}} \cdot \frac{\sqrt{d_y \cdot \ell(t)}}{\sqrt{m}\,\|X\|} \\ = \frac{4\eta L\|X\| \cdot \ell(t)}{\sqrt{m d_y}}. \\ \end{array}
+$$
+
+It suffices to show that the above bound is at most $\frac{1}{6}\eta\lambda_{\min}(P(t))\|U(t)-Y\|_F = \frac{1}{6}\eta\lambda_{\min}(P(t))\sqrt{2\ell(t)}$ . Since $\lambda_{\min}(P(t)) \geq \frac{3}{5}L\sigma_{\min}^2(X)/d_y$ , it suffices to have
+
+$$
+\frac{4\eta L\|X\| \cdot \ell(t)}{\sqrt{m d_y}} \leq \frac{1}{6}\eta \cdot \frac{3L\sigma_{\min}^2(X)\sqrt{2\ell(t)}}{5 d_y},
+$$
+
+which is true since $m > C\cdot \frac{d_yB\|X\|^2}{\sigma_{\min}^4(X)}\geq C\cdot \frac{d_y\ell(t)\|X\|^2}{\sigma_{\min}^4(X)}$ .
+
+Finally, from (6) and $\eta \leq \frac{d_y}{2L\|X\|^2} \leq \frac{1}{\lambda_{\max}(P(t))}$ we have
+
+$$
+\begin{array}{l} \left\| U(t+1) - Y \right\|_F = \left\| \operatorname{vec}\left( U(t+1) - Y \right) \right\| \\ = \left\| (I - \eta P(t)) \cdot \operatorname{vec}(U(t) - Y) + \frac{1}{\sqrt{m^{L-1} d_y}} \operatorname{vec}(E(t)X) \right\| \\ \leq \left( 1 - \eta\lambda_{\min}(P(t)) \right) \|\operatorname{vec}(U(t) - Y)\| + \frac{1}{\sqrt{m^{L-1} d_y}} \|E(t)X\|_F \\ \leq \left( 1 - \eta\lambda_{\min}(P(t)) \right) \|U(t) - Y\|_F + \frac{1}{6}\eta\lambda_{\min}(P(t)) \|U(t) - Y\|_F \\ = \left( 1 - \frac{5}{6}\eta\lambda_{\min}(P(t)) \right) \|U(t) - Y\|_F \\ \leq \left( 1 - \frac{1}{2}\eta L\sigma_{\min}^2(X)/d_y \right) \|U(t) - Y\|_F. \\ \end{array}
+$$
+
+Therefore $\ell(t + 1) \leq \left(1 - \frac{1}{2}\eta L\sigma_{\min}^2(X) / d_y\right)^2\ell(t) \leq \left(1 - \frac{1}{2}\eta L\sigma_{\min}^2(X) / d_y\right)\ell(t)$ . Combined with $\mathcal{A}(t)$ , this proves $\mathcal{A}(t + 1)$ .
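To see Claims 4.3-4.5 at work end to end, here is a small self-contained simulation of gradient descent on objective (2) with scaled orthogonal initialization and the step size $\eta = \frac{d_y}{2L\|X\|^2}$ from the analysis. The dimensions, seed, and realizable targets $Y$ are my own illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
dx = dy = 4
m, L, n = 32, 3, 10
alpha = 1.0 / np.sqrt(m ** (L - 1) * dy)

X = rng.standard_normal((dx, n))
Y = rng.standard_normal((dy, dx)) @ X            # realizable targets, so zero loss is attainable

def scaled_ortho(rows, cols):
    # sqrt(m) times an orthonormal-factor matrix, mimicking the scaled orthogonal init
    U, _, Vt = np.linalg.svd(rng.standard_normal((rows, cols)), full_matrices=False)
    return np.sqrt(m) * (U @ Vt)

Ws = [scaled_ortho(m, dx)] + [scaled_ortho(m, m) for _ in range(L - 2)] + [scaled_ortho(dy, m)]
eta = dy / (2 * L * np.linalg.norm(X, 2) ** 2)   # step size used in the analysis

def products(Ws):
    below, P = [], np.eye(dx)                    # below[i] = W_{i-1:1} (identity for i = 1)
    for W in Ws:
        below.append(P)
        P = W @ P
    above, Q = [None] * L, np.eye(dy)            # above[i] = W_{L:i+1} (identity for i = L)
    for i in range(L - 1, -1, -1):
        above[i] = Q
        Q = Q @ Ws[i]
    return below, above, P

losses = []
for _ in range(5000):
    below, above, P = products(Ws)
    resid = alpha * P @ X - Y                    # U(t) - Y
    losses.append(0.5 * np.linalg.norm(resid) ** 2)
    # gradient of (2): dl/dW_i = alpha * W_{L:i+1}^T (U - Y) (W_{i-1:1} X)^T
    grads = [alpha * above[i].T @ resid @ (below[i] @ X).T for i in range(L)]
    Ws = [W - eta * G for W, G in zip(Ws, grads)]

assert losses[-1] < 0.01 * losses[0]             # geometric loss decay, as Claim 4.5 predicts
```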
+
+# B PROOFS FOR SECTION 5
+
+# B.1 PROOF OF LEMMA 5.2
+
+Proof of Lemma 5.2. Notice that for any $1 \leq i \leq j \leq L$ we have $\mathbb{E}\left[\|A_{j:i}(0)\|_F^2\right] = d_{i-1}$ . Then by Markov's inequality we have $\operatorname*{Pr}\left[\|A_{j:i}(0)\|_F^2 \geq \frac{d_{i-1}}{\delta/L^2}\right] \leq \delta/L^2$ . Taking a union bound, we know that with probability at least $1 - \delta$ , for all $1 \leq i \leq j \leq L$ simultaneously we have $\|A_{j:i}(0)\| \leq \|A_{j:i}(0)\|_F \leq \sqrt{\frac{d_{i-1}}{\delta/L^2}} \leq O(L^3/\delta)$ (note that $d_{i-1} \leq O(L^{1-\gamma}) = O(L)$ ).
+
+# B.2 PROOF OF LEMMA 5.4
+
+Proof of Lemma 5.4 (continued). For the second part of the lemma $(j - i \geq \frac{L}{4})$ , we need to bound the terms of the form $A_{j:k + 1}(0) \cdot \Delta_k \cdot A_{k - 1:i}(0)$ more carefully. In fact, if $j - i \geq \frac{L}{4}$ , then $\max \{j - k - 1, k - 1 - i\} \geq \frac{L}{10}$ , which by assumption means either $A_{j:k + 1}(0)$ or $A_{k - 1:i}(0)$ has spectral norm bounded by $e^{-c_1L^\gamma}$ . This implies $\|A_{j:k + 1}(0) \cdot \Delta_k \cdot A_{k - 1:i}(0)\| \leq e^{-c_1L^\gamma}e^{-0.6c_1L^\gamma} \cdot O(L^3) = e^{-1.6c_1L^\gamma} \cdot O(L^3)$ . Therefore we have
+
+$$
+\begin{array}{l} \|A_{j:i} - A_{j:i}(0)\| \leq (j-i+1) e^{-1.6c_1 L^\gamma} \cdot O(L^3) + \sum_{s=2}^{j-i+1} \binom{j-i+1}{s} \left( e^{-0.6c_1 L^\gamma} \right)^s \left( O(L^3) \right)^{s+1} \\ \leq e^{-c_1 L^\gamma} + \sum_{s=2}^{\infty} L^s \left( e^{-0.6c_1 L^\gamma} \right)^s \left( O(L^3) \right)^{s+1} \leq e^{-c_1 L^\gamma} + \sum_{s=2}^{\infty} \left( e^{-0.5c_1 L^\gamma} \right)^s = O\left( e^{-c_1 L^\gamma} \right). \\ \end{array}
+$$
+
+This implies $\| A_{j:i}\| \leq O\left(e^{-c_1 L^\gamma}\right)$ .
+
+
+
+# B.3 PROOF OF LEMMA 5.5
+
+Proof of Lemma 5.5. We can bound the network's output as
+
+$$
+\left\| \alpha W _ {L: 1} (0) X \right\| _ {F} = \left\| \beta A _ {L: 1} (0) X \right\| _ {F} \leq L ^ {O (1)} \cdot e ^ {- \Omega (L ^ {\gamma})} \| X \| _ {F} = e ^ {- \Omega (L ^ {\gamma})}.
+$$
+
+Thus the objective value $\ell(W_1, \ldots, W_L) = \frac{1}{2} \|\alpha W_{L:1}(0)X - Y\|_F^2$ must be extremely close to $\frac{1}{2} \|Y\|_F^2$ for large $L$ , so $0.4 \|Y\|_F^2 < \ell(W_1, \ldots, W_L) < 0.6 \|Y\|_F^2$ .
+
+As for the gradient, for any $i \in [L]$ we have
+
+$$
+\begin{array}{l} \left\| \nabla_{W_i} \ell\left( W_1, \dots, W_L \right) \right\| = \left\| \alpha W_{L:i+1}^\top \left( \alpha W_{L:1} X - Y \right) X^\top W_{i-1:1}^\top \right\| \\ = \left\| \beta/(\sqrt{d_i}\sigma_i) \cdot A_{L:i+1}^\top (\alpha W_{L:1} X - Y) X^\top A_{i-1:1}^\top \right\| \leq \frac{L^{O(1)}}{\sqrt{d_i}\sigma_i} \|A_{L:i+1}\| \cdot O(1) \cdot \|A_{i-1:1}\|. \\ \end{array}
+$$
+
+Using (12), and noting that at least one of $L - i - 1$ and $i - 1$ is at least $\frac{L}{4}$ , we have
+
+$$
+\left\| \nabla_{W_i} \ell\left( W_1, \dots, W_L \right) \right\| \leq \left( \sqrt{d_i}\sigma_i \right)^{-1} L^{O(1)} \cdot O\left( e^{-c_1 L^\gamma} \right) \cdot O\left( L^3 \right) \leq \left( \sqrt{d_i}\sigma_i \right)^{-1} e^{-0.9 c_1 L^\gamma}.
+$$
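The contraction driving this bound is easy to observe numerically. In the toy experiment below (width, depths, and trial count are my own choices, not the paper's), products of i.i.d. Gaussian matrices with Xavier-style variance $1/d$ at a fixed narrow width shrink exponentially with depth, which is the mechanism behind the vanishing gradients above:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4  # narrow fixed width, mimicking the d_i <= O(L^{1 - gamma}) regime

def avg_log_norm(k, trials=20):
    """Average of log ||W_k ... W_1||_2 over independent trials at depth k."""
    total = 0.0
    for _ in range(trials):
        A = np.eye(d)
        for _ in range(k):
            A = (rng.standard_normal((d, d)) / np.sqrt(d)) @ A  # entries ~ N(0, 1/d)
        total += np.log(np.linalg.norm(A, 2))
    return total / trials

l10, l40, l160 = avg_log_norm(10), avg_log_norm(40), avg_log_norm(160)
assert l10 > l40 > l160   # the product norm shrinks with depth
assert l160 < -10.0       # exponentially small by depth 160
```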
\ No newline at end of file
diff --git a/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/images.zip b/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e059a0f457b3dabce28808a49c54b71e5539dc28
--- /dev/null
+++ b/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:49ff273ac9b4fac15c9e9493e5111a138533e011554ab12375c751545733551c
+size 716695
diff --git a/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/layout.json b/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..28c515dc4736691fedb7a6d40241af626d9b9f03
--- /dev/null
+++ b/provablebenefitoforthogonalinitializationinoptimizingdeeplinearnetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da920800a41cd7743fe78694b1d1a62f6ce5f04b4928a050d5a477c7cf9dd2bc
+size 748565
diff --git a/provablefilterpruningforefficientneuralnetworks/9ca5af54-4567-4a2f-a32e-0c4e0d1a2008_content_list.json b/provablefilterpruningforefficientneuralnetworks/9ca5af54-4567-4a2f-a32e-0c4e0d1a2008_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e84440c2c10d9c2677a2298354e7026b9122d3ba
--- /dev/null
+++ b/provablefilterpruningforefficientneuralnetworks/9ca5af54-4567-4a2f-a32e-0c4e0d1a2008_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0cbdb3a80fe4ac11a6c4c9797b88ba0af16f454bec6a2e9611124049830406e7
+size 186377
diff --git a/provablefilterpruningforefficientneuralnetworks/9ca5af54-4567-4a2f-a32e-0c4e0d1a2008_model.json b/provablefilterpruningforefficientneuralnetworks/9ca5af54-4567-4a2f-a32e-0c4e0d1a2008_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..54c3ec5de43ecffcb60caa5327e02b1ab27994e5
--- /dev/null
+++ b/provablefilterpruningforefficientneuralnetworks/9ca5af54-4567-4a2f-a32e-0c4e0d1a2008_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2965731572df6b0b05bb56803869b6e20ffc2f6a6738982bca8c72f3a7b01b7f
+size 223770
diff --git a/provablefilterpruningforefficientneuralnetworks/9ca5af54-4567-4a2f-a32e-0c4e0d1a2008_origin.pdf b/provablefilterpruningforefficientneuralnetworks/9ca5af54-4567-4a2f-a32e-0c4e0d1a2008_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f469c3335d3722fa0d140861f41265e627a67cc7
--- /dev/null
+++ b/provablefilterpruningforefficientneuralnetworks/9ca5af54-4567-4a2f-a32e-0c4e0d1a2008_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:faa341886e69f83d681acfa6cbf8ee253930628b8aea14a04a18053066a54215
+size 1466113
diff --git a/provablefilterpruningforefficientneuralnetworks/full.md b/provablefilterpruningforefficientneuralnetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..beb44da01ffa987751ae2d5ec3868f553cb2de42
--- /dev/null
+++ b/provablefilterpruningforefficientneuralnetworks/full.md
@@ -0,0 +1,798 @@
+# PROVABLE FILTER PRUNING FOR EFFICIENT NEURAL NETWORKS
+
+Lucas Liebenwein*
+
+CSAIL, MIT
+
+lucas1@mit.edu
+
+Cenk Baykal*
+
+CSAIL, MIT
+
+baykal@mit.edu
+
+Harry Lang
+
+CSAIL, MIT
+
+harry1@mit.edu
+
+Dan Feldman
+
+University of Haifa
+
+djangof.post@gmail.com
+
+Daniela Rus
+
+CSAIL, MIT
+
+rus@csail.mit.edu
+
+# ABSTRACT
+
+We present a provable, sampling-based approach for generating compact Convolutional Neural Networks (CNNs) by identifying and removing redundant filters from an over-parameterized network. Our algorithm uses a small batch of input data points to assign a saliency score to each filter and constructs an importance sampling distribution where filters that highly affect the output are sampled with correspondingly high probability. In contrast to existing filter pruning approaches, our method is simultaneously data-informed, exhibits provable guarantees on the size and performance of the pruned network, and is widely applicable to varying network architectures and data sets. Our analytical bounds bridge the notions of compressibility and importance of network structures, which gives rise to a fully-automated procedure for identifying and preserving filters in layers that are essential to the network's performance. Our experimental evaluations on popular architectures and data sets show that our algorithm consistently generates sparser and more efficient models than those constructed by existing filter pruning approaches.
+
+# 1 INTRODUCTION
+
+Despite widespread empirical success, modern networks with millions of parameters require excessive amounts of memory and computational resources to store and conduct inference. These stringent requirements make it challenging and prohibitive to deploy large neural networks on resource-limited platforms. A popular approach to alleviate these practical concerns is to utilize a pruning algorithm to remove redundant parameters from the original, over-parameterized network. The objective of network pruning is to generate a sparse, efficient model that achieves minimal loss in predictive power relative to that of the original network.
+
+A common practice to obtain small, efficient network architectures is to train an over-parameterized network, prune it by removing the least significant weights, and re-train the pruned network (Gale et al., 2019; Frankle & Carbin, 2019; Han et al., 2015; Baykal et al., 2019b). This prune-retrain cycle is often repeated iteratively until the network cannot be pruned any further without incurring a significant loss in predictive accuracy relative to that of the original model. The computational complexity of this iterative procedure depends greatly on the effectiveness of the pruning algorithm used in identifying and preserving the essential structures of the original network. To this end, a diverse set of smart pruning strategies have been proposed in order to generate compact, accurate neural network models in a computationally efficient way.
+
+However, modern pruning approaches[1] are generally based on heuristics (Han et al., 2015; Ullrich et al., 2017; He et al., 2018; Luo et al., 2017; Li et al., 2016; Lee et al., 2019; Yu et al., 2017a) that lack guarantees on the size and performance of the pruned network, require cumbersome ablation studies (Li et al., 2016; He et al., 2018) or manual hyper-parameter tuning (Luo et al., 2017), or
+
+
+Figure 1: Overview of our pruning method. We use a small batch of data points to quantify the relative importance $s_j^\ell$ of each filter $W_j^\ell$ in layer $\ell$ by considering the importance of the corresponding feature map $a_j^\ell = \phi(z_j^\ell)$ in computing the output $z^{\ell+1}$ of layer $\ell+1$ , where $\phi(\cdot)$ is the non-linear activation function. We then prune filters by sampling each filter $j$ with probability proportional to $s_j^\ell$ and removing the filters that were not sampled. We invoke the filter pruning procedure for each layer to obtain the pruned network (the prune step); we then retrain the pruned network (the retrain step), and repeat the prune-retrain cycle iteratively.
+
+heavily rely on the assumption that parameters with large weight magnitudes are more important, which does not hold in general (Ye et al., 2018; Li et al., 2016; Yu et al., 2017a; Han et al., 2015).
+
+In this paper, we introduce a data-informed algorithm for pruning redundant filters in Convolutional Neural Networks while incurring minimal loss in the network's accuracy (see Fig. 1 for an overview). At the heart of our method lies a novel definition of filter importance, i.e., filter sensitivity, that is computed by using a small batch of input points. We prove that by empirically evaluating the relative contribution of each filter to the output of the layer, we can accurately capture its importance with respect to the other filters in the network. We show that sampling filters with probabilities proportional to their sensitivities leads to an importance sampling scheme with low variance, which enables us to establish rigorous theoretical guarantees on the size and performance of the resulting pruned network. Our analysis helps bridge the notions of compressibility and importance of each network layer: layers that are more compressible are less important for preserving the output of the original network, and vice-versa. Hence, we obtain and introduce a fully-automated sample size allocation procedure for properly identifying and preserving critical network structures as a corollary.
+
+Unlike weight pruning approaches that lead to irregular sparsity patterns – requiring specialized libraries or hardware to enable computational speedups – our approach compresses the original network to a slimmer subnetwork by pruning filters, which enables accelerated inference with any off-the-shelf deep learning library and hardware. We evaluate and compare the effectiveness of our approach in pruning a diverse set of network architectures trained on real-world data sets. Our empirical results show that our approach generates sparser and more efficient models with minimal loss in accuracy when compared to those generated by state-of-the-art filter pruning approaches.[2]
+
+# 2 SAMPLING-BASED FILTER PRUNING
+
+In this section, we introduce the network pruning problem and outline our sampling-based filter pruning procedure and its theoretical properties. We extend the notion of empirical sensitivity (Baykal et al., 2019a) to quantify the importance of each filter using a small set of input points. We show that our importance criterion enables us to construct a low-variance importance sampling distribution over the filters in each layer. We conclude by showing that our approach can eliminate a large fraction of filters while ensuring that the output of each layer is approximately preserved.
+
+# 2.1 PRELIMINARIES
+
+Consider a trained $L$ -layer network with parameters $\theta = (W^1, \ldots, W^L)$ , where $W^\ell$ denotes the 4-dimensional weight tensor of layer $\ell \in [L]$ , $W_j^\ell$ denotes filter $j \in [\eta^\ell]$ , and $\eta^\ell$ denotes the number of filters in layer $\ell$ . Moreover, let $W_{:j}^{\ell + 1}$ be channel $j$ of tensor $W^{\ell + 1}$ that corresponds to filter $W_j^\ell$ . We let $\mathcal{X} \subset \mathbb{R}^d$ and $\mathcal{Y} \subset \mathbb{R}^k$ denote the input and output space, respectively. The marginal distribution over the input space is given by $\mathcal{D}$ . For an input $x \in \mathcal{X}$ to the network, we let $z^\ell(x)$ and $a^\ell(x) = \phi(z^\ell(x))$ denote the pre-activation and activation of layer $\ell$ , where $\phi$ is the activation function (applied entry-wise). The $j^{\mathrm{th}}$ feature map of layer $\ell$ is given by $a_j^\ell(x) = \phi(z_j^\ell(x))$ (see Fig. 1). For a given input $x \in \mathcal{X}$ , the output of the neural network with parameters $\theta$ is given by $f_\theta(x)$ .
+
+Our overarching goal is to prune filters from each layer $\ell \in [L]$ by random sampling to generate a compact reparameterization of $\theta$ , $\hat{\theta} = (\hat{W}^1, \dots, \hat{W}^L)$ , where the number of filters in the pruned weight tensor $\hat{W}^\ell$ is a small fraction of the number of filters in the original (uncompressed) tensor $W^\ell$ . Let $\mathrm{size}(\theta)$ denote the total number of parameters in the network, i.e., the sum of the number of weights over each $W^\ell \in (W^1, \dots, W^L)$ .
+
+Pruning Objective For a given $\varepsilon, \delta \in (0,1)$ , our objective is to generate a compressed network with parameters $\hat{\theta}$ such that $\mathrm{size}(\hat{\theta}) \ll \mathrm{size}(\theta)$ and $\mathbb{P}_{x \sim \mathcal{D}, \hat{\theta}}(f_{\hat{\theta}}(x) \in (1 \pm \varepsilon)f_{\theta}(x)) \geq 1 - \delta$ , where $f_{\hat{\theta}}(x) \in (1 \pm \varepsilon)f_{\theta}(x)$ denotes an entry-wise guarantee over the output neurons $f_{\hat{\theta}}(x), f_{\theta}(x) \in \mathcal{Y}$ .
+
+# 2.2 SAMPLING-BASED PRUNING
+
+Our sampling-based filter pruning algorithm for an arbitrary layer $\ell \in [L]$ is depicted as Alg. 1. The sampling procedure takes as input the set of $\eta^{\ell}$ channels in layer $\ell + 1$ that constitute the weight tensor $W^{\ell + 1}$ , i.e., $W^{\ell + 1} = [W_{:1}^{\ell + 1}, \dots, W_{:\eta^{\ell}}^{\ell + 1}]$ , as well as the desired relative error and failure probability, $\varepsilon, \delta \in (0, 1)$ , respectively. In Line 2 we construct the importance sampling distribution over the feature maps corresponding to the channels by leveraging the empirical sensitivity of each feature map $j \in [\eta^{\ell}]$ as defined in (1) and explained in detail in the following subsections. Note that although we nominally prune channels from $W^{\ell + 1}$ , each pruned channel of $W^{\ell + 1}$ allows us to simultaneously prune the corresponding filter in $W^{\ell}$ .
+
+We subsequently set the sample complexity $m^{\ell}$ as a function of the given error $(\varepsilon)$ and failure probability $(\delta)$ parameters in order to ensure that, after the pruning (i.e., sampling) procedure, the approximate output of the layer - with respect to the sampled channels $\hat{W}^{\ell +1}$ - will approximate the true output of the layer - with respect to the original tensor - up to a multiplicative factor of $(1\pm \varepsilon)$ , with probability at least $1 - \delta$ . Intuitively, more samples are required to achieve a low specified error $\varepsilon$ with low failure probability $\delta$ , and vice-versa. We then proceed to sample $m^{\ell}$ times with replacement according to the distribution $p^\ell$ (Lines 5-8) and reweigh each sample by a factor that is inversely proportional to its sampling probability to obtain an unbiased estimator for the layer's output (see below). The unsampled channels in $W^{\ell +1}$ - and the corresponding filters in $W^{\ell}$ - are subsequently discarded, leading to a reduction in the layer's size.
+
+# 2.3 A TIGHTLY-CONCENTRATED ESTIMATOR
+
+We now turn our attention to analyzing the influence of the sampled channels $\hat{W}^{\ell +1}$ (as in Alg. 1) on layer $\ell +1$ . For ease of presentation, we will henceforth assume that the layer is linear[3] and will omit
+
+# Algorithm 1 PRUNECHANNELS $(W^{\ell +1},\varepsilon ,\delta ,s^{\ell})$
+
+Input: $W^{\ell +1} = [W_{:1}^{\ell +1},\dots ,W_{:\eta^{\ell}}^{\ell +1}]$ : original channels; $\varepsilon$ : relative error; $\delta$ : failure probability; $s^\ell$ : feature map sensitivities as in (1)
+
+Output: $\hat{W}^{\ell +1}$ : pruned channels
+
+1: $S^{\ell}\gets \sum_{j\in [\eta^{\ell}]}s_{j}^{\ell}$ {where $s_j^\ell$ is as in (1)}
+2: $p_j^\ell \gets s_j^\ell / S^\ell \quad \forall j \in [\eta^\ell]$
+3: $m^{\ell}\gets \left\lceil (6 + 2\varepsilon)S^{\ell}K\log (4\eta_{*} / \delta)\varepsilon^{-2}\right\rceil$
+4: $\hat{W}^{\ell +1}\gets [0,\dots ,0]$ {same dimensions as $W^{\ell +1}$ }
+5: for $k \in [m^{\ell}]$ do
+6: $c(k)\gets$ random draw from $p^\ell = (p_1^\ell ,\dots ,p_{\eta^\ell}^\ell)$
+7: $\hat{W}_{:c(k)}^{\ell +1}\gets \hat{W}_{:c(k)}^{\ell +1} + W_{:c(k)}^{\ell +1} / (m^{\ell}\, p_{c(k)}^{\ell})$
+8: end for
+9: return $\hat{W}^{\ell +1} = [\hat{W}_{:1}^{\ell +1},\dots ,\hat{W}_{:\eta^{\ell}}^{\ell +1}]$
+
+explicit references to the input $x$ whenever appropriate. Note that the true pre-activation of layer $\ell + 1$ is given by $z^{\ell + 1} = W^{\ell + 1}a^{\ell}$ , and the approximate pre-activation with respect to $\hat{W}^{\ell + 1}$ is given by $\hat{z}^{\ell + 1} = \hat{W}^{\ell + 1}a^{\ell}$ . By construction of $\hat{W}^{\ell + 1}$ in Alg. 1, we equivalently have for each entry $i \in [\eta^{\ell + 1}]$
+
+$$
+\hat{z}_i^{\ell+1} = \frac{1}{m} \sum_{k=1}^{m} Y_{ik}, \quad \text{where} \quad Y_{ik} = w_{i c(k)}^{\ell+1} \frac{a_{c(k)}^{\ell}}{p_{c(k)}^{\ell}}, \quad c(k) \sim p^{\ell} \quad \forall k.
+$$
+
+By reweighing our samples, we obtain an unbiased estimator for each entry $i$ of the true pre-activation output, i.e., $\mathbb{E}\left[\hat{z}_i^{\ell +1}\right] = z_i^{\ell +1}$ (which follows from the linearity of expectation and the fact that $\mathbb{E}\left[Y_{ik}\right] = z_i^{\ell +1}$ for each $k\in [m]$ ), and so for the entire vector we have $\mathbb{E}_{\hat{W}^{\ell +1}}[\hat{z}^{\ell +1}] = z^{\ell +1}$ . So far, we have shown that in expectation, our channel sampling procedure incurs zero error owing to its unbiasedness. However, our objective is to obtain a high-probability bound on the entry-wise deviation $\left|\hat{z}_i^{\ell +1} - z_i^{\ell +1}\right|$ for each entry $i$ , which means that we must show that our estimator $\hat{z}_i^{\ell +1}$ is tightly concentrated around its mean $z_i^{\ell +1}$ . To do so, we leverage the following standard result.
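For concreteness, here is a minimal NumPy sketch of Alg. 1 together with an empirical check of this unbiasedness property. It assumes nonnegative weights and activations, flattens $W^{\ell+1}$ to a 2-D matrix with one column per channel, and treats the constant $K$ as given; all names and dimensions below are my own illustrative choices:

```python
import math
import numpy as np

def prune_channels(W_next, s, eps, delta, K=1.0, eta_star=None, rng=None):
    """Sample channels of W_next proportionally to sensitivity and reweigh (Alg. 1)."""
    rng = np.random.default_rng() if rng is None else rng
    eta = W_next.shape[1]
    eta_star = eta if eta_star is None else eta_star
    S = s.sum()                                     # line 1: sum of sensitivities
    p = s / S                                       # line 2: importance distribution
    m = math.ceil((6 + 2 * eps) * S * K * math.log(4 * eta_star / delta) / eps ** 2)
    W_hat = np.zeros_like(W_next, dtype=float)      # line 4
    for c in rng.choice(eta, size=m, p=p):          # lines 5-8: sample with replacement
        W_hat[:, c] += W_next[:, c] / (m * p[c])    # reweigh for unbiasedness
    return W_hat

# Unbiasedness check: averaging the pruned layer's output over many independent
# runs recovers the true output W a, matching E[z_hat] = z.
rng = np.random.default_rng(3)
W = np.abs(rng.standard_normal((6, 50)))            # nonnegative weights
a = np.abs(rng.standard_normal(50))                 # nonnegative activations
s = np.max((W * a) / (W @ a)[:, None], axis=0)      # sensitivities (1) with |S| = 1
avg = np.mean([prune_channels(W, s, 0.5, 0.1, rng=rng) @ a for _ in range(200)], axis=0)
assert np.allclose(avg, W @ a, rtol=0.05)
```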
+
+Theorem 1 (Bernstein's inequality (Vershynin, 2016)). Let $Y_{1}, \ldots, Y_{m}$ be a sequence of $m$ i.i.d. random variables satisfying $\max_{k \in [m]} |Y_k - \mathbb{E}[Y_k]| \leq R$ , and let $Y = \sum_{k=1}^{m} Y_k$ denote their sum. Then, for every $\varepsilon \geq 0$ , $\delta \in (0,1)$ , we have that $\mathbb{P}\left(|Y/m - \mathbb{E}[Y_k]| \geq \varepsilon \mathbb{E}[Y_k]\right) \leq \delta$ for
+
+$$
+m \geq \frac{\log(2/\delta)}{(\varepsilon\, \mathbb{E}[Y_k])^2} \left( \operatorname{Var}(Y_k) + \frac{2}{3}\, \varepsilon\, \mathbb{E}[Y_k]\, R \right).
+$$
+
+Letting $i \in [\eta^{\ell + 1}]$ be arbitrary and applying Theorem 1 to the mean of the random variables $(Y_{ik})_{k \in [m]}$ , i.e., to $\hat{z}_i^{\ell + 1}$ , we observe that the number of samples required for a sufficiently high concentration around the mean is highly dependent on the magnitude and variance of the random variables $(Y_{ik})_k$ . By definition of $Y_{ik}$ , observe that these expressions are explicit functions of the sampling distribution $p^\ell$ . Thus, to minimize[4] the number of samples required to achieve high concentration we require a judiciously defined sampling distribution that simultaneously minimizes both the magnitude bound $R_i$ (the value of $R$ in Theorem 1 for the variables $(Y_{ik})_k$ ) and $\mathrm{Var}(Y_{ik})$ . For example, the naive approach of uniform sampling, i.e., $p_j^\ell = 1 / \eta^\ell$ for each $j \in [\eta^\ell]$ , also leads to an unbiased estimator; however, for uniform sampling we have $\mathrm{Var}(Y_{ik}) \approx \eta^\ell\, \mathbb{E}[Y_{ik}]^2$ and $R_i \approx \eta^\ell \max_k (w_{ik}^{\ell + 1}a_k^\ell)$ , and so $\mathrm{Var}(Y_{ik}), R_i \in \Omega(\eta^\ell)$ in the general case, leading to a linear sample complexity $m \in \Omega(\eta^\ell)$ by Theorem 1.
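The gap between uniform and importance sampling is stark even in a toy one-entry example (my own construction, with a few artificially dominant channels): sampling proportionally to the contributions $w_j a_j$ drives the one-sample estimator variance to essentially zero, whereas uniform sampling pays a factor on the order of $\eta^\ell$ :

```python
import numpy as np

rng = np.random.default_rng(4)
eta = 200
w = np.abs(rng.standard_normal(eta))
a = np.abs(rng.standard_normal(eta))
a[:3] *= 100.0                          # a few dominant feature maps
contrib = w * a                          # contribution of channel j to z = sum_j w_j a_j
z = contrib.sum()

def one_sample_var(p):
    # Var of Y = contrib_j / p_j with j ~ p; note E[Y] = z for any valid p
    return np.sum(p * (contrib / p) ** 2) - z ** 2

p_unif = np.full(eta, 1.0 / eta)
p_prop = contrib / z                     # proportional to relative contribution
assert one_sample_var(p_prop) <= 1e-6 * one_sample_var(p_unif)
```

For a single entry $i$, contribution-proportional sampling is in fact variance-free; a distribution that works for all entries simultaneously must instead control the worst entry, which is what the sensitivity-based construction of the next subsection achieves.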
+
+# 2.4 EMPIRICAL SENSITIVITY (ES)
+
+To obtain a better sampling distribution, we extend the notion of Empirical Sensitivity (ES) introduced by Baykal et al. (2019a) to prune channels. Specifically, for $W^{\ell + 1} \geq 0$ (the generalization can be found in Appendix B) we let the sensitivity $s_j^\ell$ of feature map $j$ in $\ell$ be defined as
+
+$$
+s _ {j} ^ {\ell} = \max _ {x \in S} \max _ {i \in [ \eta^ {\ell + 1} ]} \frac {w _ {i j} ^ {\ell + 1} a _ {j} ^ {\ell} (x)}{\sum_ {k \in [ \eta^ {\ell} ]} w _ {i k} ^ {\ell + 1} a _ {k} ^ {\ell} (x)}, \tag {1}
+$$
+
+where $S$ is a set of $t$ independent and identically distributed (i.i.d.) points drawn from $\mathcal{D}$ . Intuitively, the sensitivity of feature map $j \in [\eta^{\ell}]$ is the maximum (over $i \in [\eta^{\ell + 1}]$ ) relative impact that feature map $j$ had on any pre-activation in the next layer $z_i^{\ell + 1}$ . We then define the probability of sampling each channel $j \in [\eta^{\ell}]$ as $p_j^\ell = s_j^\ell / S^\ell$ (as in Alg. 1), where $S^\ell = \sum_j s_j^\ell$ is the sum of sensitivities. Under a mild assumption on the distribution of activations - one that is satisfied by a wide class of distributions, such as the Uniform, Gaussian, and Exponential, among others (Asm. 1 in Sec. B of the supplementary) - ES enables us to leverage the inherent stochasticity in the draw $x \sim \mathcal{D}$ and establish (see Lemmas 5, 6, and 7 in Sec. B) that with high probability (over the randomness in $S$ and $x$ )
+
+$$
+\operatorname{Var}(Y_{ik}) \in \Theta\left( S^{\ell}\, \mathbb{E}[Y_{ik}]^2 \right) \quad \text{and} \quad R \in \Theta\left( S^{\ell}\, \mathbb{E}[Y_{ik}] \right) \quad \forall i \in \left[ \eta^{\ell+1} \right]
+$$
+
+and that the sampling complexity is given by $m \in \Theta(S^{\ell} \log(2/\delta) \varepsilon^{-2})$ by Theorem 1.
+
+We note that ES does not require knowledge of the data distribution $\mathcal{D}$ and is easy to compute in practice by randomly drawing a small set of input points $S$ from the validation set and passing
+
+the points in $S$ through the network. This stands in contrast with the sensitivity framework used in state-of-the-art coreset constructions (Braverman et al., 2016; Bachem et al., 2017), where the sensitivity is defined with respect to the supremum over all $x \in \operatorname{supp}(\mathcal{D})$ in (1) instead of a maximum over $x \in S$ . As also noted by Baykal et al. (2019a), ES inherently considers data points that are likely to be drawn from the distribution $\mathcal{D}$ in practice, leading to a more practical and informed sampling distribution with lower sampling complexity.
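Computing the empirical sensitivities (1) takes only a few lines of code. The sketch below handles a linear layer with nonnegative weights; the array names and shapes are my own illustrative choices:

```python
import numpy as np

def empirical_sensitivity(W_next, A):
    """W_next: (eta_next, eta) nonnegative weights; A: (|S|, eta) nonnegative
    activations a^l(x), one row per point x in S. Returns s with
    s_j = max_{x in S} max_i  w_ij a_j(x) / sum_k w_ik a_k(x)."""
    s = np.zeros(W_next.shape[1])
    for a in A:
        Z = W_next @ a                          # pre-activations z^{l+1}
        ratios = (W_next * a) / Z[:, None]      # relative contribution of channel j to entry i
        s = np.maximum(s, ratios.max(axis=0))   # max over entries i, running max over x
    return s

rng = np.random.default_rng(5)
W = np.abs(rng.standard_normal((8, 30)))
A = np.abs(rng.standard_normal((16, 30)))
s = empirical_sensitivity(W, A)

# each s_j is a relative contribution, so it lies in (0, 1]; since the contributions
# to any fixed entry i sum to 1, the total S = s.sum() is at least 1
assert np.all(s > 0) and np.all(s <= 1.0 + 1e-12)
assert s.sum() >= 0.999
```

The sum $S^\ell = \sum_j s_j^\ell$ is at least 1 by construction, and it is exactly this quantity that drives the sample complexity in Theorem 2.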
+
+Our insights from the discussion in this section culminate in the core theorem below (Thm. 2), which establishes that the pruned channels $\hat{W}^{\ell +1}$ (corresponding to pruned filters in $W^{\ell}$ ) generated by Alg. 1 is such that the output of layer $\ell +1$ is well-approximated for each entry.
+
+Theorem 2. Let $\varepsilon, \delta \in (0,1), \ell \in [L]$ , and let $\mathcal{S}$ be a set of $\Theta (\log (\eta_{*} / \delta))$ i.i.d. samples drawn from $\mathcal{D}$ . Then, $\hat{W}^{\ell +1}$ contains at most $\mathcal{O}(S^{\ell}\log (\eta_{*} / \delta)\varepsilon^{-2})$ channels and for $x\sim \mathcal{D}$ , with probability at least $1 - \delta$ , we have $\hat{z}^{\ell +1}\in (1\pm \varepsilon)z^{\ell +1}$ (entry-wise), where $\eta_{*} = \max_{\ell \in [L]}\eta^{\ell}$ .
+
+Theorem 2 can be generalized to hold for all weights and applied iteratively to obtain layer-wise approximation guarantees for the output of each layer. The resulting layer-wise error can then be propagated through the layers to obtain a guarantee on the final output of the compressed network. In particular, applying the error propagation bounds of Baykal et al. (2019a), we establish our main compression theorem below. The proofs and additional details can be found in the appendix (Sec. B).
+
+Theorem 3. Let $\varepsilon, \delta \in (0,1)$ be arbitrary, let $\mathcal{S} \subset \mathcal{X}$ denote the set of $\lceil K' \log (4\eta/\delta) \rceil$ i.i.d. points drawn from $\mathcal{D}$ , and suppose we are given a network with parameters $\theta = (W^1, \ldots, W^L)$ . Consider the set of parameters $\hat{\theta} = (\hat{W}^1, \ldots, \hat{W}^L)$ generated by pruning channels of $\theta$ according to Alg. 2 for each $\ell \in [L]$ . Then, $\hat{\theta}$ satisfies $\mathbb{P}_{\hat{\theta}, x \sim \mathcal{D}}(f_{\hat{\theta}}(x) \in (1 \pm \varepsilon)f_{\theta}(x)) \geq 1 - \delta$ , and the number of filters in $\hat{\theta}$ is bounded by $\mathcal{O}\left(\sum_{\ell=1}^{L} \frac{L^2 (\Delta^{\ell \rightarrow})^2 S^\ell \log (\eta/\delta)}{\varepsilon^2}\right)$ .
+
+# 3 RELATIVE LAYER IMPORTANCE
+
+
+(a) VGG16 architecture (b) Budget allocation for VGG16
+
+Figure 2: Early layers of VGG are relatively harder to approximate due to their large spatial dimensions, as shown in (a). Our error bounds naturally bridge layer compressibility and importance, enabling us to automatically allocate relatively more samples to early layers and fewer to later layers, as shown in (b). The final layer, due to its immediate influence on the output, is also automatically assigned a large portion of the sampling budget.
+
+In the previous sections, we established the sampling complexity of our filter pruning scheme for any user-specified $\varepsilon$ and $\delta$ . However, in practice, it is more common for the practitioner to specify a desired pruning ratio, which determines the resulting size of the pruned model. Given this sampling budget, a practical question that arises is how to optimally ration the sampling budget across the network's layers to minimize the error of the pruned model. A naive approach would be to uniformly allocate the sampling budget $N$ so that the same ratio of filters is kept in each layer. However, this allocation scheme implicitly assumes that each layer of the network is of equal importance to preserving the output, which is virtually never the case in practice, as exemplified by Fig. 2(a).
+
+It turns out that our analytical bounds on the sample complexity per layer $(m^{\ell}$ in Alg. 1) naturally capture the importance of each layer. The key insight lies in bridging the compressibility and
+
+importance of each layer: if a layer is not very important, i.e., it does not heavily influence the output of the network, then we expect it to be highly compressible, and vice versa. This intuition is precisely captured by our sampling complexity bounds, which quantify the difficulty of compressing each layer.
+
+We leverage this insight to formulate a simple binary search procedure for judiciously allocating the sampling budget $N$ . Let $\delta \in (0,1)$ be user-specified, pick an initial $\varepsilon > 0$ , and compute the sampling complexity $m^{\ell}$ as in Alg. 1 together with the resulting layer size $n^{\ell}$ . If $\sum_{\ell} n^{\ell} = N$ , we are done; otherwise, we continue searching for an appropriate $\varepsilon$ on a smaller interval depending on whether $\sum_{\ell} n^{\ell}$ is greater or less than $N$ . The allocation generated by this procedure (see Fig. 2(b) for an example) ensures that the maximum layer-wise error incurred by pruning is at most $\varepsilon$ .
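The search above can be sketched as follows. The constants inside `sample_complexity` are placeholders for the $\Theta(S^{\ell}\log(\eta_{*}/\delta)\varepsilon^{-2})$ bound of Alg. 1, and all function names are ours; this is an illustrative sketch, not the paper's implementation.

```python
import math

def sample_complexity(eps, sens_sum, eta_star, delta):
    """Placeholder for the per-layer bound m^ell of Alg. 1; the true bound
    is Theta(S^ell * log(eta_star / delta) / eps^2) up to constants."""
    return math.ceil(sens_sum * math.log(eta_star / delta) / eps ** 2)

def allocate_budget(N, sens_sums, eta_star, delta, iters=100):
    """Bisect over eps so the total per-layer allocation meets budget N.

    sens_sums holds S^ell for each layer. Allocations are monotonically
    decreasing in eps, which is what makes the bisection valid.
    """
    lo, hi = 1e-6, 1e6  # hi is assumed large enough that total(hi) <= N
    for _ in range(iters):
        eps = math.sqrt(lo * hi)  # geometric midpoint suits the wide range
        total = sum(sample_complexity(eps, s, eta_star, delta)
                    for s in sens_sums)
        if total > N:
            lo = eps  # over budget: permit more error per layer
        else:
            hi = eps  # under budget: tighten the error
    return hi, [sample_complexity(hi, s, eta_star, delta) for s in sens_sums]

eps, alloc = allocate_budget(N=1000, sens_sums=[3.0, 5.0, 2.0],
                             eta_star=512, delta=0.05)
assert sum(alloc) <= 1000  # the total allocation respects the budget
```

Layers with a larger sensitivity sum $S^\ell$ (harder to compress) automatically receive more of the budget, mirroring Fig. 2(b).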
+
+# 4 RESULTS
+
+In this section, we evaluate and compare our algorithm's performance to that of state-of-the-art pruning schemes in generating compact networks that retain the predictive accuracy of the original model. Our evaluations show that our approach generates significantly smaller and more efficient models compared to those generated by competing methods. Our results demonstrate the practicality and widespread applicability of our proposed approach: across all of our experiments, our algorithm took on the order of a minute to prune a given network$^5$, required no manual tuning of its hyperparameters, and performed consistently well across a diverse set of pruning scenarios. Additional results, comparisons, and experimental details can be found in Sec. E of the appendix.
+
+# 4.1 EXPERIMENTAL SETUP
+
+We compare our algorithm to the following filter pruning algorithms, which we implemented and ran alongside our own: Filter Thresholding (FT, Li et al. (2016)), SoftNet (He et al., 2018), and ThiNet (Luo et al., 2017). We note that FT and SoftNet are both (weight) magnitude-based filter pruning algorithms, a class of pruning schemes that has recently been reported to be state-of-the-art (Gale et al., 2019; Pitas et al., 2019; Yu et al., 2018) (see Sec. E.1 of the appendix for details of the compared methods). Additional comparisons to other state-of-the-art channel and filter pruning methods can be found in Tables 6 and 8 in Appendix E.4 and E.6, respectively.
+
+Our algorithm only requires two inputs in practice: the desired pruning ratio (PR) and failure probability $\delta \in (0,1)$ , since the number of samples in each layer is automatically assigned by our allocation procedure described in Sec. 3. Following the conventional data partitioning ratio, we reserve $90\%$ of the training data set for training and the remaining $10\%$ for the validation set (Lee et al., 2019).
+
+For each scenario, we prune the original (pre-trained) network to a target prune ratio using the respective pruning algorithm and fine-tune the network by retraining for a specified number of epochs. We repeat this procedure iteratively to obtain various target prune ratios and report the percentage of parameters pruned (PR) and the percent reduction in FLOPs (FR) for each target prune ratio. The target prune ratios follow a hyperharmonic sequence in which the $i^{\mathrm{th}}$ PR is given by $1 - \frac{1}{(i + 1)^{\alpha}}$ , where $\alpha$ is an experiment-dependent tuning parameter. We conduct the prune-retrain cycle for a range of $10 - 20$ target prune ratios, and report the highest PR and FR for which the compressed network achieves commensurate accuracy, i.e., when the pruned model's test accuracy is within $0.5\%$ of the original model's. The quantities reported are averaged over 3 trained models for each scenario, unless stated otherwise. The full details of our experimental setup and the hyper-parameters used can be found in the appendix (Sec. E).
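For instance, the hyperharmonic schedule of target prune ratios can be generated as follows ($\alpha = 0.5$ and the index starting at $i = 1$ are our own illustrative choices):

```python
def prune_ratio_schedule(n, alpha):
    """Hyperharmonic schedule: the i-th target prune ratio (i = 1..n)
    is 1 - 1/(i + 1)^alpha, increasing monotonically toward 1."""
    return [1.0 - 1.0 / (i + 1) ** alpha for i in range(1, n + 1)]

ratios = prune_ratio_schedule(10, alpha=0.5)
# Each prune-retrain cycle removes additional parameters from the
# previously pruned model, so the schedule is strictly increasing.
assert all(a < b for a, b in zip(ratios, ratios[1:]))
```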
+
+# 4.2 LENET ARCHITECTURES ON MNIST
+
+As our first experiment, we evaluated the performance of our pruning algorithm and the comparison methods on LeNet-300-100 (LeCun et al., 1998), a fully-connected network with two hidden layers of 300 and 100 units, respectively, and its convolutional counterpart, LeNet-5 (LeCun et al., 1998), which consists of two convolutional layers and two fully-connected layers. Both networks were trained on MNIST using the hyper-parameters specified in Sec. E.
+
+Table 1 depicts the performance of each pruning algorithm in attaining the sparsest possible network that achieves commensurate accuracy for the LeNet architectures. In both scenarios, our algorithm generates significantly sparser networks compared to those generated by the competing filter pruning approaches. In fact, the pruned LeNet-5 model generated by our algorithm by removing filters achieves a prune ratio of $\approx 90\%$ , which is even competitive with the accuracy of the sparse models generated by state-of-the-art weight pruning algorithms (Lee et al., 2019) $^6$ . In addition to evaluating the sparsity of the generated models subject to the commensurate accuracy constraint, we also investigated the performance of the pruning algorithms at extreme sparsity levels, i.e., with only around $5\%$ of the parameters retained (see Fig. 3(a)). We see that our algorithm's performance relative to those of competing algorithms is strictly better for a wide range of target prune ratios. For LeNet-5, Fig. 3(a) shows that our algorithm's favorable performance is even more pronounced at extreme sparsity levels (at $\approx 95\%$ prune ratio).
+
+| [%] | Method | Err. | PR |
+| --- | --- | --- | --- |
+| LeNet-300-100 | Unpruned | 1.59 | |
+| | Ours | +0.41 | 84.32 |
+| | FT | +0.35 | 81.68 |
+| | SoftNet | +0.41 | 81.69 |
+| | ThiNet | +10.58 | 75.01 |
+| LeNet-5 | Unpruned | 0.72 | |
+| | Ours | +0.35 | 92.37 |
+| | FT | +0.47 | 85.04 |
+| | SoftNet | +0.40 | 80.57 |
+| | ThiNet | +0.12 | 58.17 |
+
+# 4.3 CONVOLUTIONAL NEURAL NETWORKS ON CIFAR-10
+
+Next, we evaluated the performance of each pruning algorithm on significantly larger and deeper convolutional neural networks trained on the CIFAR-10 data set: VGG16 with BatchNorm (Simonyan & Zisserman, 2015), ResNet20, ResNet56, ResNet110 (He et al., 2016), DenseNet22 (Huang et al., 2017), and WideResNet16-8 (Zagoruyko & Komodakis, 2016). For the CIFAR-10 experiments, we use the standard data augmentation techniques: padding 4 pixels on each side, random crop to $32 \times 32$ pixels, and random horizontal flip. Our results are summarized in Table 2 and Figure 3. Similar to the results reported in Table 1 in the previous subsection, Table 2 shows that our method achieves the sparsest model with minimal loss in predictive power relative to the original network. Furthermore, by inspecting the values reported for the ratio of FLOPs pruned (FR), we observe that the models generated by our approach are not only sparser in terms of the total number of parameters, but also more efficient in terms of inference time complexity.
+
+Table 1: The prune ratio (PR) and the corresponding test error (Err.) of the sparsest network - with commensurate accuracy - generated by each algorithm.
+
+| [%] | Orig. Err. | Ours Err. | PR | FR | FT Err. | PR | FR | SoftNet Err. | PR | FR | ThiNet Err. | PR | FR |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ResNet20 | 8.60 | +0.49 | 62.67 | 45.46 | +0.43 | 42.65 | 44.59 | +0.50 | 46.42 | 49.40 | +2.10 | 32.90 | 32.73 |
+| ResNet56 | 7.05 | +0.28 | 88.98 | 84.42 | +0.48 | 81.46 | 82.73 | +0.36 | 81.46 | 82.73 | +1.28 | 50.08 | 50.06 |
+| ResNet110 | 6.43 | +0.36 | 92.07 | 89.76 | +0.17 | 86.38 | 87.39 | +0.34 | 86.38 | 87.39 | +0.92 | 49.70 | 50.39 |
+| VGG16 | 7.11 | +0.50 | 94.32 | 85.03 | +1.11 | 80.09 | 80.14 | +0.81 | 63.95 | 63.91 | +2.13 | 63.95 | 64.02 |
+| DenseNet22 | 10.07 | +0.46 | 56.44 | 62.66 | +0.32 | 29.31 | 30.23 | +0.21 | 29.31 | 30.23 | +4.36 | 50.76 | 51.06 |
+| WRN16-8 | 4.83 | +0.46 | 66.22 | 64.57 | +0.40 | 24.88 | 24.74 | +0.14 | 16.93 | 16.77 | +0.35 | 14.18 | 14.09 |
+
+Table 2: Overview of the pruning performance of each algorithm for various CNN architectures. For each algorithm and network architecture, the table reports the prune ratio (PR, %) and pruned FLOPs ratio (FR, %) of pruned models when achieving test accuracy within $0.5\%$ of the original network's test accuracy (or the closest result when the desired test accuracy was not achieved for the range of tested PRs). Our results indicate that our pruning algorithm generates smaller and more efficient networks with minimal loss in accuracy, when compared to competing approaches.
+
+Fig. 3 depicts the performance of the evaluated algorithms for various levels of prune ratios. Once again, we see the consistently better performance of our algorithm in generating sparser models that approximately match or exceed the predictive accuracy of the original uncompressed network. In addition, Table 8 (see Appendix E.6 for more details) provides further comparisons to state-of-the-art
+
+
+(a) LeNet5 (b) ResNet56 (c) ResNet110 (d) VGG16 (e) DenseNet22 (f) WRN16-8
+
+Figure 3: The accuracy of the generated pruned models for the evaluated pruning schemes for various target prune ratios. Note that the $x$ -axis is the percentage of parameters retained, i.e., $(1 - \text{prune ratio})$ . ThiNet was omitted from the plots for better readability. Our results show that our approach generates pruned networks with minimal loss in accuracy even for high prune ratios. Shaded regions correspond to values within one standard deviation of the mean.
+
+filter pruning methods where we compare the performance of our approach to the results for various ResNets and VGG16 reported directly in the respective papers. The comparisons in Table 8 reaffirm that our algorithm can consistently generate simultaneously sparser and more accurate networks compared to competing methods.
+
+In view of our results from the previous subsection, the results shown in Table 2, Fig. 3, and Table 8 highlight the versatility and broad applicability of our method, and seem to suggest that our approach fares better relative to the compared algorithms on more challenging pruning tasks that involve large-scale networks. We suspect that these favorable properties are explained by the data-informed evaluations of filter importance and the corresponding theoretical guarantees of our algorithm – which enable robustness to variations in network architecture and data distribution.
+
+# 4.4 CONVOLUTIONAL NEURAL NETWORKS ON IMAGENET
+
+We consider pruning convolutional neural networks of varying size - ResNet18, ResNet50, and ResNet101 - trained on the ImageNet (Russakovsky et al., 2015) data set. For this data set, we considered two scenarios: (i) iterative pruning without retraining and (ii) iterative prune-retrain with a limited number of iterations, given the resource-intensive nature of the experiments. The results of these experiments are reported in Section E.4 of the appendix. Our results on the ImageNet data set follow a similar trend as those in the previous subsections and indicate that our method readily scales to larger data sets without the need for manual hyperparameter tuning. This improves upon existing approaches (such as those in He et al. (2018); Li et al. (2016)) that generally require tedious, task-specific intervention or manual parameter tuning by the practitioner.
+
+# 4.5 APPLICATION TO REAL-TIME REGRESSION TASKS
+
+Real-time applications of neural networks, such as their use in autonomous driving scenarios, require network models that are not only highly accurate, but also highly efficient, i.e., fast, when it comes to inference time complexity (Amini et al., 2018). Model compression, and in particular filter pruning, has the potential to generate compressed networks capable of achieving both of these objectives. To evaluate and compare the effectiveness of our method on pruning networks intended for regression tasks and real-time systems, we evaluated the various pruning approaches on the DeepKnight network (Amini et al., 2018), a regression network deployed on an autonomous vehicle in real time to predict the steering angle of the human driver (see Sec. E.5 in the appendix for experimental details).
+
+Fig. 4 depicts the results of our evaluations and comparisons on the DeepKnight network without the fine-tuning step. We omitted the iterative fine-tuning step for this scenario and instead evaluated the test loss for various prune ratios because (i) the evaluated algorithms were able to generate highly accurate models without the retraining step, and (ii) we sought to evaluate and compare the performance of the core pruning procedure alone. Similar to the results obtained in the preceding pruning scenarios, Fig. 4 shows that our method consistently outperforms competing approaches for all of the specified prune ratios.
+
+
+Figure 4: The performance of the compared algorithms on pruning a lightweight network for a real-time regression task (Amini et al., 2018).
+
+# 4.6 DISCUSSION
+
+In addition to the favorable empirical results of our algorithm, our approach exhibits various advantages over competing methods that manifest themselves in our empirical evaluations. For one, our algorithm does not require any additional hyper-parameters other than the pruning ratio and the desired failure probability. Given just these two parameters, our approach automatically allocates the number of filters to sample for each layer. This alleviates the need to perform time-intensive ablation studies (He et al., 2018) or to resort to uninformed (i.e., uniform) sample allocation strategies, e.g., removing the same percentage of filters in each layer (Li et al., 2016), which fails to consider the non-uniform influence of each layer on the network's output (see Sec. 3). Moreover, our algorithm is simple to implement and computationally efficient both in theory and practice: the computational complexity is dominated by the $|S|$ forward passes required to compute the sensitivities ($|S| \leq 256$ in practical settings), and in practice, our algorithm takes on the order of a minute to prune the network.
+
+# 5 CONCLUSION
+
+We presented - to the best of our knowledge - the first filter pruning algorithm that generates a pruned network with theoretical guarantees on the size and performance of the generated network. Our method is data-informed, simple-to-implement, and efficient both in theory and practice. Our approach can also be broadly applied to varying network architectures and data sets with minimal hyper-parameter tuning necessary. This stands in contrast to existing filter pruning approaches that are generally data-oblivious, rely on heuristics for evaluating the parameter importance, or require tedious hyper-parameter tuning. Our empirical evaluations on popular network architectures and data sets reaffirm the favorable theoretical properties of our method and demonstrate its practical effectiveness in obtaining sparse, efficient networks. We envision that besides its immediate use for pruning state-of-the-art models, our approach can also be used as a sub-procedure in other deep learning applications, e.g., for identifying winning lottery tickets (Frankle & Carbin, 2019) and for efficient architecture search (Liu et al., 2019b).
+
+# ACKNOWLEDGMENTS
+
+This research was supported in part by the U.S. National Science Foundation (NSF) under Awards 1723943 and 1526815, Office of Naval Research (ONR) Grant N00014-18-1-2830, Microsoft, and JP Morgan Chase.
+
+# REFERENCES
+
+Dimitris Achlioptas, Zohar Karnin, and Edo Liberty. Matrix entry-wise sampling: Simple is best. Submitted to KDD, 2013(1.1):1-4, 2013.
+
+Jose M Alvarez and Mathieu Salzmann. Compression-aware training of deep networks. In Advances in Neural Information Processing Systems, pp. 856-867, 2017.
+Alexander Amini, Liam Paull, Thomas Balch, Sertac Karaman, and Daniela Rus. Learning steering bounds for parallel autonomous systems. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1-8. IEEE, 2018.
+Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. In International Conference on Machine Learning, pp. 254-263, 2018.
+Olivier Bachem, Mario Lucic, and Andreas Krause. Practical coreset constructions for machine learning. arXiv preprint arXiv:1703.06476, 2017.
+Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, and Daniela Rus. Data-dependent coresets for compressing neural networks with applications to generalization bounds. In International Conference on Learning Representations, 2019a. URL https://openreview.net/forum?id=HJfwJ2A5KX.
+Cenk Baykal, Lucas Liebenwein, Igor Gilitschenski, Dan Feldman, and Daniela Rus. Sipping neural networks: Sensitivity-informed provable pruning of neural networks. arXiv preprint arXiv:1910.05422, 2019b.
+Vladimir Braverman, Dan Feldman, and Harry Lang. New frameworks for offline and streaming coreset constructions. arXiv preprint arXiv:1612.00889, 2016.
+Wenlin Chen, James Wilson, Stephen Tyree, Kilian Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. In International conference on machine learning, pp. 2285-2294, 2015.
+Anna Choromanska, Krzysztof Choromanski, Mariusz Bojarski, Tony Jebara, Sanjiv Kumar, and Yann LeCun. Binary embeddings with structured hashed projections. In International Conference on Machine Learning, pp. 344-353, 2016.
+Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in neural information processing systems, pp. 1269-1277, 2014.
+Xuanyi Dong, Junshi Huang, Yi Yang, and Shuicheng Yan. More is less: A more complicated network with less inference complexity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5840-5848, 2017.
+Petros Drineas and Anastasios Zouzias. A note on element-wise matrix sparsification via a matrix-valued bernstein inequality. Information Processing Letters, 111(8):385-389, 2011.
+Dan Feldman and Michael Langberg. A unified framework for approximating and clustering data. In Proceedings of the forty-third annual ACM symposium on Theory of computing, pp. 569-578. ACM, 2011.
+Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJ1-b3RcF7.
+Trevor Gale, Erich Elsen, and Sara Hooker. The state of sparsity in deep neural networks. arXiv preprint arXiv:1902.09574, 2019.
+Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient dnns. In Advances In Neural Information Processing Systems, pp. 1379-1387, 2016.
+Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. CoRR, abs/1510.00149, 2015. URL http://arxiv.org/abs/1510.00149.
+
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
+Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, and Yi Yang. Soft filter pruning for accelerating deep convolutional neural networks. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pp. 2234-2240. AAAI Press, 2018.
+Yang He, Ping Liu, Ziwei Wang, Zhilan Hu, and Yi Yang. Filter pruning via geometric median for deep convolutional neural networks acceleration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4340-4349, 2019.
+Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1389-1397, 2017.
+Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700-4708, 2017.
+Qiangui Huang, Kevin Zhou, Suya You, and Ulrich Neumann. Learning to prune filters in convolutional neural networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 709-718. IEEE, 2018.
+Yani Ioannou, Duncan Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi. Training cnns with low-rank filters for efficient image classification. arXiv preprint arXiv:1511.06744, 2015.
+Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. In Proceedings of the British Machine Vision Conference. BMVA Press, 2014.
+Abhisek Kundu and Petros Drineas. A note on randomized element-wise matrix sparsification. arXiv preprint arXiv:1404.0320, 2014.
+Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in neural information processing systems, pp. 598-605, 1990.
+Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
+Namhoon Lee, Thalaiyasingam Ajanthan, and Philip Torr. SNIP: Single-shot network pruning based on connection sensitivity. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=B1VZqqjAcYX.
+Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.
+Yawei Li, Shuhang Gu, Luc Van Gool, and Radu Timofte. Learning filter basis for convolutional neural network compression. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5623-5632, 2019.
+Tao Lin, Sebastian U. Stich, Luis Barba, Daniil Dmitriev, and Martin Jaggi. Dynamic model pruning with feedback. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SJem81SFwB.
+Zechun Liu, Haoyuan Mu, Xiangyu Zhang, Zichao Guo, Xin Yang, Kwang-Ting Cheng, and Jian Sun. Metapruning: Meta learning for automatic neural network channel pruning. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3296-3305, 2019a.
+Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. In International Conference on Learning Representations, 2019b. URL https://openreview.net/forum?id=rJlnB3C5Ym.
+
+Jian-Hao Luo and Jianxin Wu. Autopruner: An end-to-end trainable filter pruning method for efficient deep model inference. arXiv preprint arXiv:1805.08941, 2018.
+Jian-Hao Luo, Jianxin Wu, and Weiyao Lin. Thinet: A filter level pruning method for deep neural network compression. In Proceedings of the IEEE international conference on computer vision, pp. 5058-5066, 2017.
+Shannon McCurdy. Ridge regression and provable deterministic ridge leverage score sampling. In Advances in Neural Information Processing Systems, pp. 2463-2472, 2018.
+Dimitris Papailiopoulos, Anastasios Kyrillidis, and Christos Boutsidis. Provable deterministic leverage score sampling. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 997-1006. ACM, 2014.
+Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. In NIPS-W, 2017.
+Konstantinos Pitas, Mike Davies, and Pierre Vandergheynst. Revisiting hard thresholding for dnn pruning. arXiv preprint arXiv:1905.08793, 2019.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115 (3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.
+Qinfeng Shi, James Petterson, Gideon Dror, John Langford, Alex Smola, and SVN Vishwanathan. Hash kernels for structured data. Journal of Machine Learning Research, 10(Nov):2615-2637, 2009.
+Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
+Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
+Joel A Tropp et al. An introduction to matrix concentration inequalities. Foundations and Trends in Machine Learning, 8(1-2):1-230, 2015.
+Karen Ullrich, Edward Meeds, and Max Welling. Soft weight-sharing for neural network compression. arXiv preprint arXiv:1702.04008, 2017.
+Ramon van Handel. Probability in high dimension. Technical report, PRINCETON UNIV NJ, 2014.
+Roman Vershynin. High-dimensional probability. An Introduction with Applications, 2016.
+Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. In Proceedings of the 26th annual international conference on machine learning, pp. 1113-1120, 2009.
+Jianbo Ye, Xin Lu, Zhe Lin, and James Z. Wang. Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=HJ94fqApW.
+Ruichi Yu, Ang Li, Chun-Fu Chen, Jui-Hsin Lai, Vlad I Morariu, Xintong Han, Mingfei Gao, Ching-Yung Lin, and Larry S Davis. Nisp: Pruning networks using neuron importance score propagation. Preprint at https://arxiv.org/abs/1711.05908, 2017a.
+Ruichi Yu, Ang Li, Chun-Fu Chen, Jui-Hsin Lai, Vlad I Morariu, Xintong Han, Mingfei Gao, Ching-Yung Lin, and Larry S Davis. Nisp: Pruning networks using neuron importance score propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9194-9203, 2018.
+
+Xiyu Yu, Tongliang Liu, Xinchao Wang, and Dacheng Tao. On compressing deep models by low rank and sparse decomposition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7370-7379, 2017b.
+Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
+Liang Zhao, Siyu Liao, Yanzhi Wang, Zhe Li, Jian Tang, and Bo Yuan. Theoretical properties for neural networks with weight matrices of low displacement rank. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 4082-4090. JMLR.org, 2017.
+
+# A RELATED WORK
+
+General network compression The need to tame the excessive storage requirements and costly inference associated with large, over-parameterized networks has led to a rich body of work in network pruning and compression. These approaches range from those inspired by classical tensor decompositions (Yu et al., 2017b; Jaderberg et al., 2014; Denton et al., 2014), and random projections and hashing (Arora et al., 2018; Ullrich et al., 2017; Chen et al., 2015; Weinberger et al., 2009; Shi et al., 2009) that compress a pre-trained network, to those approaches that embed sparsity as an objective directly in the training process (Ioannou et al., 2015; Alvarez & Salzmann, 2017) or exploit tensor structure to induce sparsity (Choromanska et al., 2016; Zhao et al., 2017). Overall, the predominant drawbacks of these methods are that they require laborious hyperparameter tuning, lack rigorous theoretical guarantees on the size and performance of the resulting compressed network, and/or conduct compression in a data-oblivious manner.
+
+Weight-based pruning A large subset of modern pruning algorithms fall under the general approach of pruning individual weights of the network by assigning each weight a saliency score, e.g., its magnitude (Han et al., 2015), and subsequently inducing sparsity by deterministically removing those weights below a certain saliency score threshold (Guo et al., 2016; Han et al., 2015; Lee et al., 2019; LeCun et al., 1990). These approaches are heuristics that do not provide any theoretical performance guarantees and generally require – with the exception of (Lee et al., 2019) – computationally expensive train-prune-retrain cycles and tedious hyper-parameter tuning. Unlike our approach that enables accelerated inference (i.e., reduction in FLOPS) on any hardware and with any deep learning library by generating a smaller subnetwork, weight-based pruning generates a model with non-structured sparsity that requires specialized hardware and sparse linear algebra libraries in order to speed up inference.
+
+Neuron pruning Pruning entire neurons in FNNs and filters in CNNs is particularly appealing as it shrinks the network into its slimmer counterpart, which leads to alleviated storage requirements and improved inference-time performance on any hardware. Similar to the weight-based approaches, approaches in this domain assign an importance score to each neuron or filter and remove those with a score below a certain threshold (He et al., 2018; Li et al., 2016; Yu et al., 2017a). These approaches generally take the $\ell_p$ norm of the filters (with $p \in \{1, 2\}$ as popular choices) to assign filter importance and subsequently prune unimportant filters. These methods are data-oblivious heuristics that heavily rely on the assumption that filters with large weight magnitudes are more important, which may not hold in general (Ye et al., 2018).
+
+In general, prior work on neuron and filter pruning has focused on approaches that lack theoretical guarantees and a principled approach to allocating the sampling budget across layers, requiring tedious ablation studies or settling for naive uniform allocation across the layers. In contrast to prior approaches, our algorithm assigns data-informed saliency scores to filters, guarantees an error bound, and leverages our theoretical error bounds to automatically identify important layers and allocate the user-specified sampling budget (i.e., pruning ratio) across the layers.
+
+Our work is most similar to that of (Baykal et al., 2019a;b), which proposed a weight pruning algorithm with provable guarantees that samples weights of the network in accordance with an empirical notion of parameter importance. The main drawbacks of their approach are its limited applicability to fully-connected networks only and the lack of inference-time acceleration due to the non-structured sparsity caused by removing individual weights. Our method is also sampling-based and relies on a data-informed notion of importance; however, unlike (Baykal et al., 2019a;b), our approach can be applied to both FNNs and CNNs and generates sparse, efficient subnetworks that accelerate inference.
+
+# B ALGORITHMIC AND ANALYTICAL DETAILS
+
+Algorithm 2 is the full algorithm for pruning features, i.e., neurons in fully-connected layers and channels in convolutional layers. For notational simplicity, we will derive our theoretical results for linear layers, i.e., neuron pruning. We remind the reader that this result also applies to CNNs by taking channels of a weight tensor in place of neurons. The pseudocode is organized for clarity of exposition rather than computational efficiency. Recall that $\theta$ is the full parameter set of the net,
+
+where $W^{\ell} \in \mathbb{R}^{\eta^{\ell} \times \eta^{\ell - 1}}$ is the weight matrix between layers $\ell - 1$ and $\ell$ . $W_k^\ell$ refers to the $k^{\text{th}}$ neuron of $W^\ell$ .
+
+Algorithm 2 PRUNECHANNELS $(\theta ,\ell ,S,\varepsilon ,\delta)$ - extended version
+
+Input: $\theta$ : trained net; $\ell \in [L]$ : layer; $S \subset \operatorname{supp}(\mathcal{D})$ : sample of inputs; $\varepsilon \in (0,1)$ : accuracy; $\delta \in (0,1)$ : failure probability
+
+Output: $\hat{W}^{\ell}$ : filter-reduced weight tensor for layer $\ell$ ; $\hat{W}^{\ell+1}$ : channel-reduced weight tensor for layer $\ell+1$
+
+1: for $j\in [\eta^{\ell}]$ do
+2: for $i\in [\eta^{\ell +1}]$ and $\mathbf{x}\in S$ do
+3: $I^{+}\gets \{j\in [\eta^{\ell}]:w_{ij}^{\ell +1}a_{j}^{\ell}(\mathbf{x})\geq 0\}$
+4: $I^{-}\gets [\eta^{\ell}]\setminus I^{+}$
+5: $g_{ij}^{\ell +1}(\mathbf{x})\gets \max_{I\in \{I^{+},I^{-}\}}\frac{w_{ij}^{\ell + 1}a_{j}^{\ell}(\mathbf{x})}{\sum_{k\in I}w_{ik}^{\ell + 1}a_{k}^{\ell}(\mathbf{x})}$
+6: end for
+7: $s_j^\ell \gets \max_{\mathbf{x} \in \mathcal{S}} \max_{i \in [\eta^{\ell + 1}]} g_{ij}^{\ell + 1}(\mathbf{x})$
+8: end for
+9: $S^{\ell}\gets \sum_{j\in [\eta^{\ell}]}s_{j}^{\ell}$
+10: for $j\in [\eta^{\ell}]$ do
+11: $p_j^\ell \gets s_j^\ell /S^\ell$
+12: end for
+13: $K\gets$ value from Assumption 1
+14: $m\gets \left\lceil (6 + 2\varepsilon)S^{\ell}K\log (2\eta^{\ell +1} / \delta)\varepsilon^{-2}\right\rceil$
+15: $\mathcal{H}\gets$ distribution on $[\eta^{\ell}]$ assigning probability $p_j^\ell$ to index $j$
+16: $\hat{W}^{\ell}\gets (0,\dots ,0)$ {same dimensions as $W^{\ell}$ }
+17: $\hat{W}^{\ell + 1} \gets (0, \dots, 0)$ {same dimensions as $W^{\ell + 1}$ }
+18: for $k \in [m]$ do
+19: $c(k)\gets$ random draw from $\mathcal{H}$
+20: $\hat{W}_{c(k)}^{\ell} \gets W_{c(k)}^{\ell}$ {no reweighing or considering multiplicity of drawing index $c(k)$ multiple times}
+21: $\hat{W}_{:c(k)}^{\ell + 1} \gets \hat{W}_{:c(k)}^{\ell + 1} + \frac{W_{:c(k)}^{\ell + 1}}{mp_{c(k)}}$ {reweighing for unbiasedness of pre-activation in layer $\ell + 1$ }
+22: end for
+23: return $\hat{W}^{\ell} = [\hat{W}_{1}^{\ell},\dots ,\hat{W}_{\eta^{\ell}}^{\ell}]$ ; $\hat{W}^{\ell +1} = [\hat{W}_{:1}^{\ell +1},\dots ,\hat{W}_{:\eta^{\ell}}^{\ell +1}]$
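For concreteness, the pseudocode above can be sketched in plain Python for a single linear layer (an illustrative reimplementation under simplifying assumptions — dense lists instead of tensors, a caller-supplied $K$ from Assumption 1, and no convolutional reshaping — not the authors' implementation):

```python
import math
import random

def relu(v):
    return [max(0.0, x) for x in v]

def prune_channels(act_fn, W_next, inputs, eps, delta, K, seed=0):
    """Sketch of PRUNECHANNELS for one linear layer.
    W_next[i][j]: weight from neuron j of layer l to neuron i of layer l+1;
    act_fn(x): activations a^l(x) of layer l for input x.
    Returns kept channel indices, the reweighed next-layer matrix, and
    the neuron sensitivities s."""
    n_out, n_in = len(W_next), len(W_next[0])
    # Lines 1-8: empirical sensitivities s_j over the sample of inputs.
    s = [0.0] * n_in
    for x in inputs:
        a = act_fn(x)
        for i in range(n_out):
            for sign in (1.0, -1.0):  # sums over I^+ and I^- separately
                denom = sum(W_next[i][k] * a[k] for k in range(n_in)
                            if sign * W_next[i][k] * a[k] >= 0)
                if denom == 0.0:
                    continue
                for j in range(n_in):
                    term = W_next[i][j] * a[j]
                    if sign * term >= 0:
                        s[j] = max(s[j], term / denom)
    # Lines 9-12: sampling probabilities proportional to sensitivity.
    S_sum = sum(s)
    p = [sj / S_sum for sj in s]
    # Line 14: sample size m (ceiling).
    m = math.ceil((6 + 2 * eps) * S_sum * K * math.log(2 * n_out / delta) / eps ** 2)
    # Lines 16-22: sample with replacement and reweigh next layer's columns.
    rng = random.Random(seed)
    W_hat = [[0.0] * n_in for _ in range(n_out)]
    kept = set()
    for _ in range(m):
        c = rng.choices(range(n_in), weights=p, k=1)[0]
        kept.add(c)
        for i in range(n_out):
            W_hat[i][c] += W_next[i][c] / (m * p[c])
    return kept, W_hat, s
```

The reweighing on Line 21 is what makes the pre-activations of layer $\ell + 1$ unbiased, as established in Lemma 4 below.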
+
+Recall that $z_{i}^{\ell + 1}(\mathbf{x})$ denotes the pre-activation of the $i^{\text{th}}$ neuron in layer $\ell + 1$ given input $\mathbf{x}$ , and the activation is $a_{j}^{\ell}(\mathbf{x}) = \max \{0, z_{j}^{\ell}(\mathbf{x})\}$ .
+
+Definition 1 (Edge Sensitivity (Baykal et al., 2019a)). Fixing a layer $\ell \in [L]$ , let $w_{ij}^{\ell + 1}$ be the weight of edge $(j, i) \in [\eta^\ell] \times [\eta^{\ell + 1}]$ . The empirical sensitivity of weight entry $w_{ij}^{\ell + 1}$ with respect to input $\mathbf{x} \in \mathcal{X}$ is defined to be
+
+$$
+g _ {i j} ^ {\ell + 1} (\mathbf {x}) = \max _ {I \in \{I ^ {+}, I ^ {-} \}} \frac {w _ {i j} ^ {\ell + 1} a _ {j} ^ {\ell} (\mathbf {x})}{\sum_ {k \in I} w _ {i k} ^ {\ell + 1} a _ {k} ^ {\ell} (\mathbf {x})}, \tag {2}
+$$
+
+where $I^{+} = \{j\in [\eta^{\ell}]:w_{ij}^{\ell +1}a_{j}^{\ell}(\mathbf{x})\geq 0\}$ and $I^{-} = [\eta^{\ell}]\setminus I^{+}$ denote the set of positive and negative edges, respectively.
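As a toy illustration of Def. 1 (numbers invented for this example), fix a neuron $i$ with three incoming terms

$$
w_{i1}^{\ell+1}a_{1}^{\ell}(\mathbf{x}) = 3, \qquad w_{i2}^{\ell+1}a_{2}^{\ell}(\mathbf{x}) = 1, \qquad w_{i3}^{\ell+1}a_{3}^{\ell}(\mathbf{x}) = -2,
$$

so that $I^{+} = \{1, 2\}$ , $I^{-} = \{3\}$ , and $\sum_{k \in I^{+}} w_{ik}^{\ell+1}a_{k}^{\ell}(\mathbf{x}) = 4$ . Then

$$
g_{i1}^{\ell+1}(\mathbf{x}) = \max\left\{\tfrac{3}{4}, \tfrac{3}{-2}\right\} = \tfrac{3}{4}, \qquad g_{i2}^{\ell+1}(\mathbf{x}) = \max\left\{\tfrac{1}{4}, \tfrac{1}{-2}\right\} = \tfrac{1}{4}, \qquad g_{i3}^{\ell+1}(\mathbf{x}) = \max\left\{\tfrac{-2}{4}, \tfrac{-2}{-2}\right\} = 1,
$$

i.e., each term is effectively measured as the fraction it contributes to the pool (positive or negative) of matching sign.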
+
+Algorithm 2 uses empirical sensitivity to compute the sensitivity of each neuron (Lines 1-8) and the corresponding sampling probabilities (Lines 9-12).
+
+Definition 2 (Neuron Sensitivity). The sensitivity of a neuron $j \in [\eta^{\ell}]$ in layer $\ell$ is defined as
+
+$$
+s _ {j} ^ {\ell} = \max _ {\mathbf {x} \in S} \max _ {i \in [ \eta^ {\ell + 1} ]} g _ {i j} ^ {\ell + 1} (\mathbf {x}) \tag {3}
+$$
+
+In this section, we prove that Algorithm 2 yields a good approximation of the original net. We begin with a mild assumption to ensure that the distribution of our input is not pathological.
+
+Assumption 1. There exist universal constants $K, K' > 0$ such that for any layer $\ell$ and all $j \in [\eta^{\ell}]$ , the CDF of the random variable $\max_{i \in [\eta^{\ell + 1}]} g_{ij}^{\ell + 1}(x)$ for $x \sim \mathcal{D}$ , denoted by $F_j(\cdot)$ , satisfies
+
+$$
+F _ {j} \left(M _ {j} / K\right) \leq \exp \left(- 1 / K ^ {\prime}\right),
+$$
+
+where $M_{j} = \min \{y\in [0,1]:F_{j}(y) = 1\}$ .
+
+Note that the analysis is carried out separately for the positive and negative elements of $W^{\ell + 1}$ , as reflected in the definition of sensitivity (Def. 1). For ease of exposition, we will therefore assume throughout this section that $W^{\ell + 1} \geq 0$ (element-wise), i.e., $I^{+} = [\eta^{\ell}]$ , and derive the results for this case. We note that we could equivalently assume $W^{\ell + 1} \leq 0$ and the analysis would hold regardless. By considering both the positive and negative parts of $W^{\ell + 1}$ in Def. 1, we can carry out the analysis for weight tensors with both positive and negative elements.
+
+Theorem 2. Let $\varepsilon, \delta \in (0,1), \ell \in [L]$ , and let $S$ be a set of $\Theta(\log(\eta_{*} / \delta))$ i.i.d. samples drawn from $\mathcal{D}$ . Then, $\hat{W}^{\ell + 1}$ contains at most $\mathcal{O}(S^{\ell} \log(\eta_{*} / \delta) \varepsilon^{-2})$ channels and for $x \sim \mathcal{D}$ , with probability at least $1 - \delta$ , we have $\hat{z}^{\ell + 1} \in (1 \pm \varepsilon) z^{\ell + 1}$ (entry-wise), where $\eta_{*} = \max_{\ell \in [L]} \eta^{\ell}$ .
+
+The remainder of this section builds towards proving Theorem 2. We begin by fixing a layer $\ell \in [L]$ and a neuron $i\in [\eta^{\ell +1}]$ . Consider the random variables $\{Y_k\}_{k\in [m]}$ defined by $Y_{k}(\mathbf{x}) = \frac{1}{mp_{j}} w_{ij}^{\ell +1}a_{j}^{\ell}(\mathbf{x})$ , where $j\in [\eta^{\ell}]$ is the index selected by Algorithm 2 on the $k^{\mathrm{th}}$ iteration of Line 19. Note that $z_{i}^{\ell +1}(\mathbf{x}) = \sum_{j\in [\eta^{\ell}]}w_{ij}^{\ell +1}a_{j}^{\ell}(\mathbf{x})$ , and so we may also write $g_{ij}^{\ell +1}(\mathbf{x}) = w_{ij}^{\ell +1}a_{j}^{\ell}(\mathbf{x}) / z_{i}^{\ell +1}(\mathbf{x})$ when it is more convenient.
+
+Lemma 4. For each $\mathbf{x} \in \mathcal{X}$ and $k \in [m]$ , $\mathbb{E}[Y_k(\mathbf{x})] = z_i^{\ell + 1}(\mathbf{x}) / m$ .
+
+Proof. The index selected on the $k^{\mathrm{th}}$ iteration of Line 19 is drawn from the distribution $\mathcal{H}$ defined on Line 15, so we can compute the expectation directly:
+
+$$
+\begin{array}{l} \mathbb{E}\left[Y_{k}(\mathbf{x})\right] = \sum_{j \in [\eta^{\ell}]} \frac{w_{ij}^{\ell + 1} a_{j}^{\ell}(\mathbf{x})}{m p_{j}} \cdot p_{j} \\ = \frac{1}{m} \sum_{j \in [\eta^{\ell}]} w_{ij}^{\ell + 1} a_{j}^{\ell}(\mathbf{x}) \\ = \frac{z_{i}^{\ell + 1}(\mathbf{x})}{m}. \\ \end{array}
+$$
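This unbiasedness is easy to check empirically. Below is a toy Monte Carlo sanity check, with the contributions $w_{ij} a_{j}(\mathbf{x})$ and the sampling probabilities invented for illustration (not tied to any trained network):

```python
import random

# Toy setup: fixed contributions w_ij * a_j(x) for one neuron i, and a
# sensitivity-style sampling distribution p (any positive p summing to 1 works).
contrib = [3.0, 1.0, 0.5, 2.5]   # w_{ij} a_j(x) for j = 1..4
z = sum(contrib)                 # true pre-activation z_i(x) = 7.0
p = [0.4, 0.2, 0.1, 0.3]         # sampling probabilities
m = 10                           # samples per estimate

def one_estimate(rng):
    """z-hat = sum of m reweighed draws Y_k = contrib[j] / (m p_j)."""
    return sum(contrib[j] / (m * p[j])
               for j in rng.choices(range(4), weights=p, k=m))

rng = random.Random(42)
trials = 20000
avg = sum(one_estimate(rng) for _ in range(trials)) / trials
# Since E[z-hat] = z, the empirical mean should be close to 7.0.
```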
+
+
+
+To bound the variance, we use an approach inspired by Baykal et al. (2019a): the main idea is to use the notion of empirical sensitivity to establish that a particularly useful inequality holds with high probability over the randomness of the input point $\mathbf{x} \sim \mathcal{D}$. Given that the inequality holds, we can establish favorable bounds on the variance and magnitude of the random variables, which lead to a low sampling complexity.
+
+For a random input point $\mathbf{x} \sim \mathcal{D}$ , let $\mathcal{G}$ denote the event that the following inequality holds (for all neurons):
+
+$$
+\max _ {i \in [ \eta^ {\ell + 1} ]} g _ {i j} ^ {\ell + 1} (\mathbf {x}) \leq C s _ {j} \quad \forall j \in [ \eta^ {\ell} ]
+$$
+
+where $C = \max \{3K, 1\}$ and $K$ is defined as in Assumption 1. We now prove that under Assumption 1, event $\mathcal{G}$ occurs with high probability. From now on, to ease notation, we will drop certain superscripts and subscripts when the meaning is clear; for example, $z(\mathbf{x})$ will refer to $z_i^{\ell + 1}(\mathbf{x})$ .
+
+Lemma 5. If Assumption 1 holds, then $\mathbb{P}(\mathcal{G}) \geq 1 - \delta /(2\eta^{\ell +1})$ , where the probability is over the randomness of drawing $\mathbf{x}\sim \mathcal{D}$ .
+
+Proof. Since $\max_{i\in [\eta^{\ell +1}]}g_{ij}(\mathbf{x})$ is just a function of the random variable $\mathbf{x}\sim \mathcal{D}$ , for any $j\in [\eta^{\ell}]$ we can let $D$ be the distribution of $\max_{i\in [\eta^{\ell +1}]}g_{ij}(\mathbf{x})$ and observe that, since $s_j = \max_{\mathbf{x}\in S}\max_{i\in [\eta^{\ell +1}]}g_{ij}(\mathbf{x})$ , the negation of event $\mathcal{G}$ for a single neuron $j\in [\eta^{\ell}]$ can be expressed as the event
+
+$$
+X > C\max_{k\in [|S|]}X_{k},
+$$
+
+where $X \sim D$ and $X_{1},\ldots ,X_{|\mathcal{S}|}\stackrel {i.i.d.}{\sim}D$ since the points in $\mathcal{S}$ were drawn i.i.d. from $\mathcal{D}$ . Invoking Lemma 8 from Baykal et al. (2019a) in conjunction with Assumption 1, we obtain for any arbitrary $j$
+
+$$
+\mathbb {P} \left(\max _ {i \in [ \eta^ {\ell + 1} ]} g _ {i j} (x) > C s _ {j}\right) = \mathbb {P} \left(X > C \max _ {k \in [ | \mathcal {S} | ]} X _ {k}\right) \leq \exp \left(- | \mathcal {S} | / K ^ {\prime}\right)
+$$
+
+with the $K'$ from Assumption 1. Since our choice of neuron $j$ was arbitrary, the inequality above holds for all neurons, therefore we can apply the union bound to obtain:
+
+$$
+\begin{array}{l} \underset {x \sim \mathcal {D}} {\mathbb {P}} (\mathcal {G}) = 1 - \mathbb {P} (\exists j \in [ \eta^ {\ell} ]: \max _ {i \in [ \eta^ {\ell + 1} ]} g _ {i j} (x) > C s _ {j}) \\ \geq 1 - \sum_ {j \in [ \eta^ {\ell} ]} \mathbb {P} \left(\max _ {i \in [ \eta^ {\ell + 1} ]} g _ {i j} (x) > C s _ {j}\right) \\ \geq 1 - \eta^ {\ell} \exp (- | S | / K ^ {\prime}) \\ \geq 1 - \frac {\delta}{2 \eta^ {\ell + 1}} \\ \end{array}
+$$
+
+where the last line follows from the fact that $|S| \geq \left\lceil K' \log \left(2\eta^\ell \eta^{\ell + 1} / \delta \right) \right\rceil$ .
+
+
+
+Lemma 6. For any $\mathbf{x}$ such that event $\mathcal{G}$ occurs, we have $|Y_{k}(\mathbf{x}) - \mathbb{E}[Y_{k}(\mathbf{x})]|\leq CSz / m$ , where the expectation is over the randomness of Algorithm 2.
+
+Proof. Recall that $S = \sum_{j \in [\eta^{\ell}]} s_j$ . Let neuron $j \in [\eta^{\ell}]$ be selected on iteration $k$ of Line 19. For any $k \in [m]$ we have:
+
+$$
+\begin{array}{l} Y _ {k} (\mathbf {x}) = \frac {w _ {i j} a _ {j} (\mathbf {x})}{m p _ {j}} \\ = S \frac {w _ {i j} a _ {j} (\mathbf {x})}{m s _ {j}} \\ \leq C S \frac {w _ {i j} a _ {j} (\mathbf {x})}{m \max _ {i ^ {\prime}} g _ {i ^ {\prime} j} (\mathbf {x})} \\ \leq C S \frac {w _ {i j} a _ {j} (\mathbf {x})}{m g _ {i j} (\mathbf {x})} \\ = \frac {C S z}{m}, \\ \end{array}
+$$
+
+where the first inequality follows from the inequality defining event $\mathcal{G}$ , the second from the fact that $\max_{i'} g_{i'j}(\mathbf{x}) \geq g_{ij}(\mathbf{x})$ for any $i$ , and the final equality from the definition $g_{ij}(\mathbf{x}) = w_{ij} a_j(\mathbf{x}) / z(\mathbf{x})$ . This implies that $|Y_k - \mathbb{E}[Y_k]| = |Y_k - \frac{z}{m}| \in [-z/m, CSz/m]$ by Lemma 4 and since $Y_k \geq 0$ . The result follows since $C, S \geq 1$ .
+
+Lemma 7. For any $\mathbf{x}$ such that event $\mathcal{G}$ occurs, we have $\mathrm{Var}(Y_k(\mathbf{x}))\leq CSz^2 /m^2$ , where the variance is over the randomness of Algorithm 2.
+
+Proof. We can use the same inequality obtained by conditioning on $\mathcal{G}$ to bound the variance of our estimator.
+
+$$
+\begin{array}{l} \operatorname{Var}\left(Y_{k}(\mathbf{x})\right) = \mathbb{E}\left[Y_{k}^{2}(\mathbf{x})\right] - \left(\mathbb{E}\left[Y_{k}(\mathbf{x})\right]\right)^{2} \\ \leq \mathbb{E}\left[Y_{k}^{2}(\mathbf{x})\right] \\ = \sum_{j \in [\eta^{\ell}]} \left(\frac{w_{ij} a_{j}(\mathbf{x})}{m p_{j}}\right)^{2} \cdot p_{j} \quad \text{by the definition of } Y_{k} \\ = \frac{S}{m^{2}} \sum_{j \in [\eta^{\ell}]} \frac{\left(w_{ij} a_{j}(\mathbf{x})\right)^{2}}{s_{j}} \quad \text{since } p_{j} = s_{j} / S \\ \leq \frac{CS}{m^{2}} \sum_{j \in [\eta^{\ell}]} \frac{\left(w_{ij} a_{j}(\mathbf{x})\right)^{2}}{\max_{i' \in [\eta^{\ell + 1}]} g_{i'j}(\mathbf{x})} \quad \text{by event } \mathcal{G} \\ \leq \frac{CSz}{m^{2}} \sum_{j \in [\eta^{\ell}]} w_{ij} a_{j}(\mathbf{x}) \quad \text{since } g_{ij}(\mathbf{x}) = w_{ij} a_{j}(\mathbf{x}) / z(\mathbf{x}) \\ = \frac{CSz^{2}}{m^{2}}. \\ \end{array}
+$$
+
+
+
+We are now ready to prove Theorem 2.
+
+Proof of Theorem 2. Recall the following form of Bernstein's inequality: given random variables $X_{1},\ldots ,X_{m}$ such that for each $k\in [m]$ we have $\mathbb{E}[X_k] = 0$ and $|X_k|\leq M$ almost surely, then
+
+$$
+\mathbb {P} \left(\sum_ {k \in [ m ]} X _ {k} \geq t\right) \leq \exp \left(\frac {- t ^ {2} / 2}{\sum_ {k \in [ m ]} \mathbb {E} \left[ X _ {k} ^ {2} \right] + M t / 3}\right)
+$$
+
+We apply this with $X_{k} = Y_{k} - \frac{z}{m}$ . We must take the probability with respect to the randomness of both drawing $\mathbf{x} \sim \mathcal{D}$ and Algorithm 2. By Lemma 4, $E[X_k] = 0$ . Let us assume that event $\mathcal{G}$ occurs. By Lemma 6, we may set $M = CSz / m$ . By Lemma 7, $\sum_{k \in [m]} \mathbb{E}[X_k^2] \leq CSz^2 / m$ . We will apply the inequality with $t = \varepsilon z$ .
+
+Observe that $\sum_{k\in [m]}X_k = \hat{z} - z$ . Plugging in these values, and taking both tails of the inequality, we obtain:
+
+$$
+\begin{array}{l} \mathbb{P}(|\hat{z} - z| \geq \varepsilon z \mid \mathcal{G}) \leq 2 \exp\left(\frac{-\varepsilon^{2} z^{2} / 2}{CSz^{2}/m + CS\varepsilon z^{2}/3m}\right) \\ \leq 2 \exp\left(-\frac{\varepsilon^{2} m}{SK(6 + 2\varepsilon)}\right) \quad \text{since } C \leq 3K \\ \leq \frac{\delta}{2\eta^{\ell + 1}} \quad \text{by our choice of } m \\ \end{array}
+$$
+
+Removing the conditioning on event $\mathcal{G}$ , we write:
+
+$$
+\begin{array}{l} \mathbb{P}(|\hat{z} - z| < \varepsilon z) \geq \mathbb{P}(|\hat{z} - z| < \varepsilon z \mid \mathcal{G})\, \mathbb{P}(\mathcal{G}) \geq \left(1 - \frac{\delta}{2\eta^{\ell + 1}}\right) \left(1 - \frac{\delta}{2\eta^{\ell + 1}}\right) \\ \geq 1 - \frac{\delta}{\eta^{\ell + 1}}, \\ \end{array}
+$$
+
+where we have applied Lemma 5. This implies the result for any single neuron, and the theorem follows by application of the union bound over all $\eta^{\ell +1}$ neurons in layer $\ell$ .
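As an aside, the form of Bernstein's inequality used above is straightforward to sanity-check numerically on a toy example (bounded, zero-mean uniform variables; unrelated to the network setting):

```python
import math
import random

def bernstein_bound(t, var_sum, M):
    """RHS of Bernstein's inequality for P(sum_k X_k >= t), with
    var_sum = sum_k E[X_k^2] and |X_k| <= M almost surely."""
    return math.exp(-(t ** 2 / 2) / (var_sum + M * t / 3))

# X_k uniform on [-1, 1]: zero mean, E[X_k^2] = 1/3, M = 1.
m, t = 100, 15.0
var_sum = m / 3
bound = bernstein_bound(t, var_sum, 1.0)

rng = random.Random(0)
trials = 20000
hits = sum(1 for _ in range(trials)
           if sum(rng.uniform(-1, 1) for _ in range(m)) >= t)
freq = hits / trials
# The empirical tail frequency should stay below the Bernstein bound.
```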
+
+# B.1 BOOSTING SAMPLING VIA DETERMINISTIC CHOICES
+
+Importance sampling schemes, such as the one described above, are powerful tools with numerous applications in Big Data settings, ranging from sparsifying matrices (Baykal et al., 2019a; Achlioptas et al., 2013; Drineas & Zouzias, 2011; Kundu & Drineas, 2014; Tropp et al., 2015) to constructing coresets for machine learning problems (Braverman et al., 2016; Feldman & Langberg, 2011; Bachem et al., 2017). However, owing to the exponential decay in probability associated with importance sampling schemes (see Theorem 1), such schemes perform truly well only when the sampling pool and the number of samples are sufficiently large (Tropp et al., 2015). Under certain conditions on the sampling distribution, the size of the sampling pool, and the size of the desired sample $m$ , it has been observed that deterministically picking the $m$ samples corresponding to the $m$ highest probabilities may yield an estimator that incurs lower error (McCurdy, 2018; Papailiopoulos et al., 2014).
+
+To this end, consider a hybrid scheme that picks $k$ indices deterministically (without reweighing) and samples $m'$ indices. More formally, let $\mathcal{C}_{\mathrm{det}} \subseteq [n]$ be the set of $k$ unique indices (corresponding to weights) that are picked deterministically, and define
+
+$$
+\hat {z} _ {\mathrm {d e t}} = \sum_ {j \in \mathcal {C} _ {\mathrm {d e t}}} w _ {i j} a _ {j},
+$$
+
+where we note that the weights are not reweighed. Now let $\mathcal{C}_{\mathrm{rand}}$ be a set of $m^{\prime}$ indices sampled from the remaining indices i.e., sampled from $[n]\setminus \mathcal{C}_{\mathrm{det}}$ , with probability distribution $q = (q_{1},\dots ,q_{n})$ .
+
+To define the distribution $q$ , recall that the original distribution $p$ is defined to be $p_i = s_i / S$ for each $i \in [n]$ . Now, $q$ is simply the normalized distribution resulting from setting the probabilities associated with indices in $\mathcal{C}_{\mathrm{det}}$ to be 0, i.e.,
+
+$$
+q_{i} = \begin{cases} \dfrac{s_{i}}{S - S_{k}} & \text{if } i \notin \mathcal{C}_{\mathrm{det}}, \\ 0 & \text{otherwise}, \end{cases}
+$$
+
+where $S_{k} = \sum_{j\in \mathcal{C}_{\mathrm{det}}}s_{j}$ is the sum of sensitivities of the entries that were deterministically picked.
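A minimal sketch of the hybrid scheme — top-$k$ deterministic choices plus importance sampling from the renormalized remainder — might look as follows (illustrative helper names, assuming the sensitivities $s$ and the per-neuron contributions are already computed):

```python
import random

def hybrid_distribution(s, k):
    """Pick the top-k sensitivity indices deterministically and renormalize
    the remaining sensitivities into the sampling distribution q."""
    n = len(s)
    det = set(sorted(range(n), key=lambda j: s[j], reverse=True)[:k])
    S, S_k = sum(s), sum(s[j] for j in det)
    q = [0.0 if j in det else s[j] / (S - S_k) for j in range(n)]
    return det, q

def hybrid_estimate(contrib, s, k, m_prime, rng):
    """z-hat' = deterministic part (not reweighed) + importance-sampled remainder."""
    det, q = hybrid_distribution(s, k)
    z_det = sum(contrib[j] for j in det)
    pool = [j for j in range(len(s)) if j not in det]
    weights = [q[j] for j in pool]
    z_rand = sum(contrib[j] / (m_prime * q[j])
                 for j in rng.choices(pool, weights=weights, k=m_prime))
    return z_det + z_rand
```

The reweighing by $1/(m' q_j)$ in `hybrid_estimate` mirrors the unbiasedness argument in the proof of Theorem 8.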
+
+Instead of performing a combinatorial search over all $\binom{n}{k}$ choices of the deterministic set $\mathcal{C}_{\mathrm{det}}$ , for computational efficiency we found that setting $\mathcal{C}_{\mathrm{det}}$ to the indices with the top $k$ sensitivities was the choice most likely to satisfy the condition of Theorem 8 below.
+
+We state the general theorem below.
+
+Theorem 8. It is better to keep $k$ feature maps, $\mathcal{C}_{\mathrm{det}} \subseteq [\eta^{\ell}]$ , $|\mathcal{C}_{\mathrm{det}}| = k$ , deterministically and sample $m' = \left\lceil (6 + 2\varepsilon)(S^{\ell} - S_k^{\ell})K\log (8\eta_* / \delta)\varepsilon^{-2}\right\rceil$ features from $[\eta^{\ell}] \setminus \mathcal{C}_{\mathrm{det}}$ if
+
+$$
+\sum_ {j \notin \mathcal {C} _ {\mathrm {d e t}}} \left(1 - \frac {s _ {j}}{S - S _ {k}}\right) ^ {m ^ {\prime}} > \sum_ {j = 1} ^ {\eta^ {\ell}} \left(1 - \frac {s _ {j}}{S}\right) ^ {m} + \sqrt {\frac {\log (2 / \delta) (m + m ^ {\prime})}{2}},
+$$
+
+where $m = \left\lceil (6 + 2\varepsilon)S^{\ell}K\log (4\eta_{*} / \delta)\varepsilon^{-2}\right\rceil$ , $S_{k} = \sum_{j\in \mathcal{C}_{\mathrm{det}}}s_{j}$ , and $\eta_{*} = \max_{\ell}\eta^{\ell}$ .
+
+Proof. Let $m \geq \left\lceil (6 + 2\varepsilon)S^{\ell}K\log (4\eta_{*} / \delta)\varepsilon^{-2}\right\rceil$ as in Lemma 2, and note that from Lemma 2 we know that if $\hat{z}$ is our approximation with respect to the sampled set of indices $\mathcal{C}$ , we have
+
+$$
+\mathbb {P} (\mathcal {E}) \geq 1 - \delta
+$$
+
+where $\mathcal{E}$ is the event that the inequality
+
+$$
+\left| \hat {z} _ {i} ^ {\ell + 1} (x) - z _ {i} ^ {\ell + 1} (x) \right| \leq \varepsilon z _ {i} ^ {\ell + 1} (x) \quad \forall i \in [ \eta^ {\ell + 1} ]
+$$
+
+holds. Henceforth, we will let $i \in [\eta^{\ell + 1}]$ be an arbitrary neuron and, as before, consider the problem of approximating the neuron's value $z_i^{\ell + 1}(x)$ (subsequently denoted by $z$ ) by our approximation $\hat{z}_i^{\ell + 1}(x)$ (subsequently denoted by $\hat{z}$ ).
+
+Similar to our previous analysis of the importance sampling scheme, we let $\mathcal{C}_{\mathrm{rand}} = \{c_1,\dots ,c_{m'}\}$ denote the multiset of $m'$ neuron indices that are sampled with respect to distribution $q$ , and for each $j\in [m']$ we define $Y_{j} = \hat{w}_{ic_{j}}a_{c_{j}}$ and let $Y = \sum_{j\in [m']}Y_{j}$ . For clarity of exposition, we define $\hat{z}_{\mathrm{rand}} = Y$ to be our approximation with respect to the random sampling procedure, i.e.,
+
+$$
+\hat {z} _ {\text {r a n d}} = \sum_ {j \in \mathcal {C} _ {\text {r a n d}}} \hat {w} _ {i j} a _ {j} = Y.
+$$
+
+Thus, our estimator under this scheme is given by
+
+$$
+\hat {z} ^ {\prime} = \hat {z} _ {\mathrm {d e t}} + \hat {z} _ {\mathrm {r a n d}}
+$$
+
+Now we want to analyze the sampling complexity of our new estimator $\hat{z}^{\prime}$ so that
+
+$$
+\mathbb {P} \left(| \hat {z} ^ {\prime} - z | \geq \varepsilon z\right) \leq \delta / 2.
+$$
+
+Establishing the sampling complexity for sampling with respect to distribution $q$ is almost identical to the proof of Theorem 2. First, note that $\mathbb{E}[\hat{z}' \mid \mathbf{x}] = \hat{z}_{\mathrm{det}} + \mathbb{E}[\hat{z}_{\mathrm{rand}} \mid \mathbf{x}]$ since $\hat{z}_{\mathrm{det}}$ is a constant (conditioned on a realization $\mathbf{x}$ of $x \sim \mathcal{D}$ ). Now note that for any $j \in [m']$
+
+$$
+\begin{array}{l} \mathbb {E} \left[ Y _ {j} \mid \mathbf {x} \right] = \sum_ {k \in [ \eta^ {\ell} ] \backslash \mathcal {C} _ {\mathrm {d e t}}} \hat {w} _ {i k} a _ {k} \cdot q _ {k} \\ = \frac {1}{m ^ {\prime}} \sum_ {k \in [ \eta^ {\ell} ] \backslash \mathcal {C} _ {\mathrm {d e t}}} w _ {i k} a _ {k} \\ \end{array}
+$$
+
+$$
+= \frac {z - \hat {z} _ {\mathrm {d e t}}}{m ^ {\prime}},
+$$
+
+and so $\mathbb{E}\left[\hat{z}_{\mathrm{rand}}\mid \mathbf{x}\right] = \mathbb{E}\left[Y\mid \mathbf{x}\right] = z - \hat{z}_{\mathrm{det}}$ .
+
+This implies that $\mathbb{E}\left[\hat{z}^{\prime}\right] = \hat{z}_{\mathrm{det}} + (z - \hat{z}_{\mathrm{det}}) = z$ , and so our estimator remains unbiased. This also yields
+
+$$
+\begin{array}{l} | Y - \mathbb {E} [ Y | \mathbf {x} ] | = | \hat {z} _ {\text {r a n d}} - \mathbb {E} [ \hat {z} _ {\text {r a n d}} ] | = | \hat {z} _ {\text {r a n d}} + \hat {z} _ {\text {d e t}} - z | \\ = | \hat {z} ^ {\prime} - z |, \\ \end{array}
+$$
+
+which implies that all we have to do to bound the failure probability of the event $|\hat{z}' - z| \geq \varepsilon z$ is to apply Bernstein's inequality to our estimator $\hat{z}_{\mathrm{rand}} = Y$ , just as we did in the proof of Theorem 2. The only minor change is in the variance and magnitude of the random variables $Y_{k}$ for $k \in [m']$ , since the distribution is now $q$ rather than $p$ . Proceeding as in the proof of Lemma 6, we have
+
+$$
+\begin{array}{l} \hat {w} _ {i j} a _ {j} (\mathbf {x}) = \frac {w _ {i j} a _ {j} (\mathbf {x})}{m ^ {\prime} q _ {j}} = (S - S _ {k}) \frac {w _ {i j} a _ {j} (\mathbf {x})}{m ^ {\prime} s _ {j}} \\ \leq \frac {(S - S _ {k}) C z}{m ^ {\prime}}. \\ \end{array}
+$$
+
+Now, to bound the magnitude of the random variables note that
+
+$$
+\mathbb {E} \left[ Y _ {j} \mid \mathbf {x} \right] = \frac {z - \hat {z} _ {\mathrm {d e t}}}{m ^ {\prime}} = \frac {1}{m ^ {\prime}} \sum_ {j \notin \mathcal {C} _ {\mathrm {d e t}}} w _ {i j} a _ {j} \leq \frac {(S - S _ {k}) C z}{m ^ {\prime}}.
+$$
+
+The result above combined with this fact yields for the magnitude of the random variables
+
+$$
+R ^ {\prime} = \max _ {j \in [ m ^ {\prime} ]} | Y _ {j} - \mathbb {E} [ Y _ {j} \mid \textbf {x} ] | \leq \frac {(S - S _ {k}) C z}{m ^ {\prime}},
+$$
+
+where we observe that the only difference relative to the bound of Lemma 6 is that the term $S - S_{k}$ , where $S_{k} = \sum_{j\in \mathcal{C}_{\mathrm{det}}}s_{j}$ , appears in place of $S$ .
+
+Similarly, for the variance of a single $Y_{j}$
+
+$$
+\begin{array}{l} \operatorname {Var} \left(Y _ {j} \mid \mathbf {x}, \mathcal {G}\right) \leq \sum_ {k \in \left[ \eta^ {\ell} \right] \backslash \mathcal {C} _ {\mathrm {d e t}}} \frac {\left(w _ {i k} a _ {k} (\mathbf {x})\right) ^ {2}}{m ^ {\prime 2} q _ {k}} \\ = \frac {S - S _ {k}}{m ^ {\prime 2}} \sum_ {k \in [ \eta^ {\ell} ] \backslash \mathcal {C} _ {\mathrm {d e t}}} \frac {(w _ {i k} a _ {k} (\mathbf {x})) ^ {2}}{s _ {k}} \\ \leq \frac {C (S - S _ {k}) z}{m ^ {\prime 2}} \sum_ {k \in [ \eta^ {\ell} ] \backslash \mathcal {C} _ {\mathrm {d e t}}} w _ {i k} a _ {k} (\mathbf {x}) \\ \leq \frac {C (S - S _ {k}) z ^ {2} \min \{1 , C (S - S _ {k}) \}}{m ^ {\prime 2}}, \\ \end{array}
+$$
+
+where the last inequality follows from the fact that $\sum_{k\in [\eta^{\ell}]\setminus \mathcal{C}_{\mathrm{det}}}w_{ik}a_k(\mathbf{x})\leq z$ and from the sensitivity inequality established in the proof of Lemma 7:
+
+$$
+\sum_ {k \in [ \eta^ {\ell} ] \backslash \mathcal {C} _ {\det}} w _ {i k} a _ {k} (\mathbf {x}) \leq C z \sum_ {j \in [ \eta^ {\ell} ] \backslash \mathcal {C} _ {\det}} s _ {j} = C z (S - S _ {k}).
+$$
+
+This implies by Bernstein's inequality and the argument in proof of Theorem 2 that if we sample
+
+$$
+m' = \left\lceil (6 + 2\varepsilon) \left(S^{\ell} - S_{k}^{\ell}\right) K \log\left(8\eta_{*} / \delta\right) \varepsilon^{-2} \right\rceil
+$$
+
+times from the distribution $q$ , then we have
+
+$$
+\mathbb {P} \left(| \hat {z} ^ {\prime} - z | \geq \varepsilon z\right) \leq \delta / 2.
+$$
+
+Now let $p = (p_1, \ldots, p_n)$ be the original sensitivity-based probability distribution and let $\mathcal{C}$ denote the multiset of indices that results when $m$ samples are drawn from $[n]$ with respect to $p$ . For each index $j \in [n]$ let $U_j(m, p) = \mathbb{1}[j \in \mathcal{C}]$ be the indicator random variable of the event that index $j$ is sampled at least once, and let $U(m, p) = \sum_{j=1}^{n} U_j(m, p)$ . Note that $U$ is a random variable that denotes the number of unique samples resulting from the sampling process described above, and its expectation is given by
+
+$$
+\begin{array}{l} \mathbb{E}\left[U(m, p)\right] = \sum_{j = 1}^{n} \mathbb{E}\left[U_{j}(m, p)\right] = \sum_{j = 1}^{n} \mathbb{P}(j \in \mathcal{C}) \\ = \sum_{j = 1}^{n} \mathbb{P}(j \text{ is sampled at least once}) \\ = \sum_{j = 1}^{n} \left(1 - \mathbb{P}(j \text{ is not sampled})\right) \\ = n - \sum_{j = 1}^{n} (1 - p_{j})^{m}. \\ \end{array}
+$$
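This closed form is easy to check against simulation (a toy example with made-up probabilities):

```python
import random

def expected_unique(m, p):
    """Closed form E[U(m, p)] = n - sum_j (1 - p_j)^m for sampling with
    replacement m times from distribution p over n indices."""
    return len(p) - sum((1 - pj) ** m for pj in p)

p = [0.4, 0.3, 0.2, 0.1]
m = 5
exact = expected_unique(m, p)

rng = random.Random(1)
trials = 20000
sim = sum(len(set(rng.choices(range(4), weights=p, k=m)))
          for _ in range(trials)) / trials
# The simulated mean number of unique indices should match the closed form.
```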
+
+Now we want to establish the condition under which $U(m', q) < U(m, p)$ , which, if it holds, would imply that the deterministic + sampling approach retains fewer distinct weights while achieving the same error and failure probability guarantees, making it the overall better approach. To apply a strong concentration inequality, let $\mathcal{C}' = \mathcal{C}_{\mathrm{det}} \cup \mathcal{C}_{\mathrm{rand}} = \{c_1', \ldots, c_k', c_{k+1}', \ldots, c_{m'}'\}$ denote the set of indices obtained from the deterministic + sampling (with distribution $q$ ) approach, and let $\mathcal{C} = \{c_1, \ldots, c_m\}$ be the indices of the random samples obtained by sampling from distribution $p$ . Let $f(c_1', \ldots, c_{m'}', c_1, \ldots, c_m)$ denote the difference $U(m', q) - U(m, p)$ in the number of unique samples in $\mathcal{C}'$ and $\mathcal{C}$ . Note that $f$ satisfies the bounded difference inequality with Lipschitz constant 1, since changing the index of any single sample in $\mathcal{C} \cup \mathcal{C}'$ can change $f$ by at most 1. Moreover, there are $m' + m$ random variables in total; thus, applying McDiarmid's inequality (van Handel, 2014), we obtain
+
+$$
+\mathbb{P}\left(\mathbb{E}\left[U(m, p) - U(m', q)\right] - \left(U(m, p) - U(m', q)\right) \geq t\right) \leq \exp\left(-\frac{2t^{2}}{m + m'}\right),
+$$
+
+this implies that for $t = \sqrt{\frac{\log(2 / \delta)(m + m')}{2}}$ ,
+
+$$
+\mathbb {E} \left[ U (m, p) - U \left(m ^ {\prime}, q\right) \right] \leq U (m, p) - U \left(m ^ {\prime}, q\right) + t
+$$
+
+with probability at least $1 - \delta / 2$. Thus, if $\mathbb{E}[U(m, p)] - \mathbb{E}[U(m', q)] > t$ , then $U(m, p) > U(m', q)$ .
+
+More specifically, recall that
+
+$$
+\mathbb {E} \left[ U (m, p) \right] = n - \sum_ {j = 1} ^ {n} (1 - p _ {j}) ^ {m} = n - \sum_ {j = 1} ^ {n} \left(1 - \frac {s _ {j}}{S}\right) ^ {m}
+$$
+
+and
+
+$$
+\begin{array}{l} \mathbb {E} \left[ U \left(m ^ {\prime}, q\right) \right] = k + \sum_ {j: q _ {j} > 0} \left(1 - \left(1 - q _ {j}\right) ^ {m ^ {\prime}}\right) \\ = k + (n - k) - \sum_ {j: q _ {j} > 0} \left(1 - q _ {j}\right) ^ {m ^ {\prime}} \\ = n - \sum_ {j: q _ {j} > 0} (1 - q _ {j}) ^ {m ^ {\prime}} \\ = n - \sum_ {j \notin \mathcal {C} _ {\mathrm {d e t}}} \left(1 - \frac {s _ {j}}{S - S _ {k}}\right) ^ {m ^ {\prime}} \\ \end{array}
+$$
+
+
+Figure 5: The performance of our approach on a LeNet300-100 architecture trained on MNIST with no derandomization (denoted by "rand"), with partial derandomization (denoted by "partial"), and with complete derandomization (denoted by "derand"). Panels (a) and (b) show the resulting test accuracy for various percentages of retained parameters, $1 - (\text{prune ratio})$, before and after retraining, respectively. The additional error of the derandomized algorithm is negligible in practical settings, especially after retraining.
+
+Thus, rearranging terms, we conclude that it is better to conduct the deterministic + sampling scheme if
+
+$$
+\sum_ {j \notin \mathcal {C} _ {\mathrm {d e t}}} \left(1 - \frac {s _ {j}}{S - S _ {k}}\right) ^ {m ^ {\prime}} > \sum_ {j = 1} ^ {n} \left(1 - \frac {s _ {j}}{S}\right) ^ {m} + \sqrt {\frac {\log (2 / \delta) (m + m ^ {\prime})}{2}}.
+$$
+
+Putting it all together, and conditioning on the above inequality holding, we have by the union bound
+
+$$
+\mathbb{P}\left(\left\{|\hat{z}' - z| \geq \varepsilon z\right\} \cup \left\{U(m', q) \geq U(m, p)\right\}\right) \leq \delta ,
+$$
+
+this implies that with probability at least $1 - \delta$ : (i) $\hat{z}' \in (1 \pm \varepsilon)z$ and (ii) $U(m', q) < U(m, p)$ , implying that the deterministic + sampling approach ensures the error guarantee holds with a smaller number of unique samples, leading to better compression.
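The resulting decision rule of Theorem 8 can be packaged as a small predicate (a sketch; $m$ , $m'$ , and $\delta$ are taken as inputs here rather than derived from $\varepsilon$ and $K$ ):

```python
import math

def prefer_hybrid(s, det_indices, m, m_prime, delta):
    """Return True when the Theorem-8 condition favors keeping `det_indices`
    deterministically and sampling m' times from the remaining indices."""
    S = sum(s)
    S_k = sum(s[j] for j in det_indices)
    lhs = sum((1 - s[j] / (S - S_k)) ** m_prime
              for j in range(len(s)) if j not in det_indices)
    rhs = (sum((1 - sj / S) ** m for sj in s)
           + math.sqrt(math.log(2 / delta) * (m + m_prime) / 2))
    return lhs > rhs

# With one dominant sensitivity, removing it from the sampling pool sharply
# reduces how many random samples the remaining indices require.
s = [10.0] + [0.1] * 50
```

Intuitively, the left-hand side grows when few random samples suffice after the deterministic picks, which is exactly when the hybrid scheme keeps fewer unique weights.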
+
+# B.1.1 EXPERIMENTAL EVALUATION OF DERANDOMIZATION
+
+To evaluate our theoretical results of derandomization, we tested the performance of our algorithm with respect to three different variations of sampling:
+
+1. No derandomization ("rand"): We apply Alg. 2 and sample channels with probability proportional to their sensitivity.
+2. Partial derandomization ("partial"): We apply Theorem 8 as a preprocessing step to keep the top $k$ channels and then sample from the rest according to Alg. 2.
+3. Complete derandomization ("derand"): We simply keep the top channels until our sampling budget is exhausted.
+
+The results of our evaluations on a LeNet300-100 architecture trained on MNIST can be seen in Fig. 5. As seen in Fig. 5(a), partial derandomization does not impact the performance of our algorithm, while complete derandomization has a slightly detrimental effect on performance. This is in accordance with Theorem 8, which predicts that it is best to only partially derandomize the sampling procedure. However, after we retrain the network, the additional error incurred by complete derandomization is negligible, as shown in Fig. 5(b). Moreover, especially in the extremely low sampling regime, the completely derandomized approach appears to enjoy a slight performance boost relative to the other approaches. We suspect that simply keeping the top channels may have a positive side effect on the optimization landscape during retraining, which we would like to investigate further in future research.
+
+# C MAIN COMPRESSION THEOREM
+
+Having established layer-wise approximation guarantees as in Sec. B, all that remains to establish guarantees on the output of the entire network is to carefully propagate the error through the layers as was done in Baykal et al. (2019a). For each $i \in [\eta^{\ell + 1}]$ and $\ell \in [L]$ , define
+
+$$
+\tilde {\Delta} _ {i} ^ {\ell} (x) = \left(z _ {i} ^ {+} (x) + z _ {i} ^ {-} (x)\right) / | z _ {i} ^ {\ell + 1} (x) |,
+$$
+
+where $z_{i}^{+}(x) = \sum_{k\in I^{+}}w_{ik}^{\ell +1}a_{k}^{\ell}(x)$ and $z_{i}^{-}(x) = \sum_{k\in I^{-}}w_{ik}^{\ell +1}a_{k}^{\ell}(x)$ are the positive and negative components of $z_{i}^{\ell +1}(x)$ , respectively, with $I^{+}$ and $I^{-}$ as in Alg. 2. For each $\ell \in [L]$ , let $\Delta^{\ell}$ be a constant defined as a function of the input distribution $\mathcal{D}$ , such that with high probability over $x\sim \mathcal{D}$ , $\Delta^{\ell}\geq \max_{i\in [\eta^{\ell +1}]}\tilde{\Delta}_{i}^{\ell}(x)$ . Finally, let $\Delta^{\ell \rightarrow} = \prod_{k = \ell}^{L}\Delta^{k}$ .
+
+Generalizing Theorem 2 to obtain a layer-wise bound and applying error propagation bounds of Baykal et al. (2019a), we establish our main compression theorem below.
+
+Theorem 3. Let $\varepsilon, \delta \in (0,1)$ be arbitrary, let $\mathcal{S} \subset \mathcal{X}$ denote the set of $\lceil K' \log (4\eta/\delta) \rceil$ i.i.d. points drawn from $\mathcal{D}$ , and suppose we are given a network with parameters $\theta = (W^1, \ldots, W^L)$ . Consider the set of parameters $\hat{\theta} = (\hat{W}^1, \ldots, \hat{W}^L)$ generated by pruning channels of $\theta$ according to Alg. 2 for each $\ell \in [L]$ . Then, $\hat{\theta}$ satisfies $\mathbb{P}_{\hat{\theta}, x \sim \mathcal{D}}(f_{\hat{\theta}}(x) \in (1 \pm \varepsilon)f_{\theta}(x)) \geq 1 - \delta$ , and the number of filters in $\hat{\theta}$ is bounded by $\mathcal{O}\left(\sum_{\ell=1}^{L} \frac{L^2 (\Delta^{\ell \rightarrow})^2 S^\ell \log (\eta/\delta)}{\varepsilon^2}\right)$ .
+
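As an aside, the asymptotic filter budget of Theorem 3 is easy to evaluate numerically. The sketch below drops the constants hidden in the $\mathcal{O}(\cdot)$ and uses hypothetical per-layer values; it is only meant to make the scaling behavior concrete.

```python
import math

def filter_budget(L, delta_fwd, S, eps, delta, eta):
    """Order of the filter count in Theorem 3 (hidden constants dropped).

    delta_fwd[l]: the product Delta^{l->} for layer l; S[l]: the layer's sum
    of sensitivities S^l; eta: total number of neurons; eps, delta: desired
    accuracy and failure probability.
    """
    return sum(
        L ** 2 * delta_fwd[l] ** 2 * S[l] * math.log(eta / delta) / eps ** 2
        for l in range(L)
    )
```

As expected from the $1/\varepsilon^2$ dependence, halving $\varepsilon$ quadruples the budget.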
+# D EXTENSION TO CNNS
+
+To extend our algorithm to CNNs, we need to account for the implicit weight sharing in the definition of the CNN filters. Intuitively speaking, to measure the importance of a feature map (i.e., neuron) in the case of FNNs, we consider the maximum impact it has on the pre-activation $z^{\ell +1}(x)$ . In the case of CNNs the same intuition holds; that is, we want to capture the maximum contribution of a feature map $a_{j}^{\ell}(x)$ , which is now a two-dimensional image instead of a scalar neuron, to the pre-activation $z^{\ell +1}(x)$ in layer $\ell +1$ . Thus, to adapt our algorithm to prune channels in CNNs, we modify the definition of sensitivity slightly by also taking the maximum over the patches $p\in \mathcal{P}$ (i.e., the sliding windows created by convolutions). In this context, each activation $a_{j}^{\ell}(x)$ is also associated with a patch $p\in \mathcal{P}$ , which we denote by $a_{jp}^{\ell}$ . In particular, the slight change is the following:
+
+$$
+s _ {j} ^ {\ell} = \max _ {x \in \mathcal {S}} \max _ {i \in [ \eta^ {\ell + 1} ]} \max _ {p \in \mathcal {P}} \frac {w _ {i j} ^ {\ell + 1} a _ {j p} ^ {\ell} (x)}{\sum_ {k \in [ \eta^ {\ell} ]} w _ {i k} ^ {\ell + 1} a _ {k p} ^ {\ell} (x)},
+$$
+
+where $a_{\cdot p}$ corresponds to the activation window associated with patch $p \in \mathcal{P}$ . Everything else remains the same and the proofs are analogous.
+
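The patch-wise maximum can be made concrete with a small NumPy sketch (stride 1, no padding; the tiny denominator guard is our addition to avoid division by zero and is not part of the definition):

```python
import numpy as np

def cnn_sensitivity(acts, weight):
    """Empirical sensitivity s_j for each input channel of a conv layer.

    acts:   (N, C_in, H, W) activations a^l(x) over the sample set S
    weight: (C_out, C_in, kH, kW) filters W^{l+1}
    Returns (C_in,): max over inputs x, output units i, and patches p of
    w_ij a_jp(x) / sum_k w_ik a_kp(x).
    """
    N, C_in, H, W = acts.shape
    C_out, _, kH, kW = weight.shape
    s = np.zeros(C_in)
    for n in range(N):
        for y in range(H - kH + 1):          # enumerate patches p
            for x in range(W - kW + 1):
                patch = acts[n, :, y:y + kH, x:x + kW]       # (C_in, kH, kW)
                # contribution of input channel j to each output unit i
                contrib = (weight * patch).sum(axis=(2, 3))  # (C_out, C_in)
                denom = contrib.sum(axis=1, keepdims=True)   # sum over k
                s = np.maximum(s, (contrib / (denom + 1e-12)).max(axis=0))
    return s
```

In practice one would vectorize the patch extraction (e.g. via an unfold operation), but the triple loop mirrors the three maxima in the definition above.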
+# E EXPERIMENTAL DETAILS AND ADDITIONAL EVALUATIONS
+
+For our experimental evaluations, we considered a variety of data sets (MNIST, CIFAR-10, ImageNet) and neural network architectures (LeNet, VGG, ResNet, WideResNet, DenseNet) and compared against several state-of-the-art filter pruning methods. We conducted all experiments on either a single NVIDIA RTX 2080Ti with 11GB RAM or an NVIDIA Tesla V100 with 16GB RAM and implemented them in PyTorch (Paszke et al., 2017). Retraining with ImageNet was conducted on a cluster of 8 NVIDIA Tesla V100 GPUs.
+
+In the following, we summarize our hyperparameters for training and give an overview of the comparison methods. All reported experimental quantities are averaged over three separately trained and pruned networks.
+
+# E.1 COMPARISON METHODS
+
+We further evaluated the performance of our algorithm against a variety of state-of-the-art methods in filter pruning as listed below. These methods were re-implemented for our own experiments to ensure an objective comparison, and we deployed the same iterative pruning and fine-tuning strategy as used in our method. Moreover, we considered a fixed pruning ratio of filters in each layer, as none of the competing methods provides an automatic procedure to detect relative layer importance and allocate samples accordingly. Thus, the differentiating factor between the competing methods is their respective pruning step, which we elaborate upon below.
+
+Filter Thresholding (FT, Li et al. (2016)) Consider the set of filters $W^{\ell} = [W_{1}^{\ell},\dots ,W_{\eta^{\ell}}^{\ell}]$ in layer $\ell$ and let $\left\| W_j^\ell \right\|_{2,2}$ denote the entry-wise $\ell_2$ -norm (or Frobenius norm) of $W_{j}^{\ell}$ . Given a desired sparsity level of $t\%$ , i.e., we want to keep only $t\%$ of the filters, we simply keep the filters with the largest norm until the desired level of sparsity is satisfied.
+
+SoftNet (He et al., 2018) The pruning procedure of He et al. (2018) is similar in nature to the work of Li et al. (2016), except that the saliency score used is the entry-wise $\ell_1$ -norm $\left\| W_j^\ell \right\|_{1,1}$ of a filter map $W_j^\ell$ . During their fine-tuning scheme they allow pruned filters to become non-zero again and then repeat the pruning procedure. As for the other comparisons, however, we only employ a one-shot prune and fine-tune scheme.
+
+ThiNet (Luo et al., 2017) Unlike the previous two approaches, which compute the saliency score of the filter $W_{j}^{\ell}$ by looking at its entry-wise norm, the method of Luo et al. (2017) iteratively and greedily chooses the feature map (and thus the corresponding filter) that incurs the least error, in an absolute sense, in the pre-activation of the next layer. That is, initially, the method picks filter $j^{*}$ such that $j^{*} = \operatorname{argmin}_{j\in [\eta^{\ell}]} \max_{x\in S}\left|z^{\ell +1}(x) - z_{[j]}^{\ell +1}(x)\right|$ , where $z^{\ell +1}(x)$ denotes the pre-activation of layer $\ell + 1$ for some input data point $x$ , $z_{[j]}^{\ell +1}(x)$ the pre-activation when only considering feature map $j$ in layer $\ell$ , and $S$ a set of input data points. We note that this greedy approach is quadratic in both the size $\eta^{\ell}$ of layer $\ell$ and the size $|S|$ of the set of data points $S$ , thus rendering it very slow in practice. In particular, we only use a set $S$ of cardinality comparable to our own method, i.e., around 100 data points in total. On the other hand, Luo et al. (2017) report using 100 data points per output class, resulting in 1000 data points for CIFAR-10.
+
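For reference, the three comparison criteria reduce to a few lines each. The sketch below is our own illustrative NumPy rendering (function and variable names are ours), with ThiNet shown only for its first greedy selection step:

```python
import numpy as np

def ft_keep(filters, t):
    """FT (Li et al., 2016): keep the t% of filters with largest l2 norm."""
    norms = np.sqrt((filters.reshape(len(filters), -1) ** 2).sum(axis=1))
    k = int(np.ceil(t / 100 * len(filters)))
    return np.sort(np.argsort(norms)[::-1][:k])

def softnet_keep(filters, t):
    """SoftNet (He et al., 2018): same scheme with the entry-wise l1 norm."""
    norms = np.abs(filters.reshape(len(filters), -1)).sum(axis=1)
    k = int(np.ceil(t / 100 * len(filters)))
    return np.sort(np.argsort(norms)[::-1][:k])

def thinet_first_pick(z_full, z_per_channel):
    """ThiNet (Luo et al., 2017), first greedy step: pick the channel whose
    lone contribution best matches the full pre-activation.

    z_full:        (|S|, d) pre-activations z^{l+1}(x) over the data set S
    z_per_channel: (C, |S|, d) pre-activations z^{l+1}_{[j]}(x) using only
                   feature map j of layer l
    """
    err = np.abs(z_full[None] - z_per_channel).max(axis=(1, 2))  # max over S
    return int(np.argmin(err))
```

FT and SoftNet differ only in the norm used, which is why their kept sets often coincide; ThiNet instead requires forward passes over $S$ for every candidate channel.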
+# E.2 LENET ARCHITECTURES ON MNIST
+
+We evaluated the performance of our pruning algorithm and the comparison methods on LeNet300-100 (LeCun et al., 1998), a fully-connected network with two hidden layers of 300 and 100 hidden units, respectively, and its convolutional counterpart, LeNet-5 (LeCun et al., 1998), which consists of two convolutional layers and two fully-connected layers. Both networks were trained on MNIST using the hyperparameters specified in Table 3. We trained on a single GPU, and during retraining (fine-tuning) we maintained the same hyperparameters and only adapted the ones specifically mentioned in Table 3.
+
+| | | LeNet-300-100 | LeNet-5 |
+| --- | --- | --- | --- |
+| Train | test error | 1.59 | 0.72 |
+| | loss | cross-entropy | cross-entropy |
+| | optimizer | SGD | SGD |
+| | epochs | 40 | 40 |
+| | batch size | 64 | 64 |
+| | LR | 0.01 | 0.01 |
+| | LR decay | 0.1@{30} | 0.1@{25, 35} |
+| | momentum | 0.9 | 0.9 |
+| | weight decay | 1.0e-4 | 1.0e-4 |
+| Prune | δ | 1.0e-12 | 1.0e-12 |
+| | α | not iterative | 1.18 |
+| Fine-tune | epochs | 30 | 40 |
+| | LR decay | 0.1@{20, 28} | 0.1@{25, 35} |
+
+Table 3: We report the hyperparameters used during MNIST training, pruning, and fine-tuning for the LeNet architectures. LR hereby denotes the learning rate and LR decay denotes the learning rate decay that we deploy after a certain number of epochs. During fine-tuning we used the same hyperparameters except for the ones indicated in the lower part of the table.
+
+# E.3 CONVOLUTIONAL NEURAL NETWORKS ON CIFAR-10
+
+We further evaluated the performance of our algorithm on a variety of convolutional neural network architectures trained on CIFAR-10. Specifically, we tested it on VGG16 with batch norm (Simonyan & Zisserman, 2014), ResNet20 (He et al., 2016), DenseNet22 (Huang et al., 2017), and WideResNet16-8 (Zagoruyko & Komodakis, 2016). For residual networks with skip connections, we model the interdependencies between the feature maps and only prune a feature map if it does not get used as an input in the subsequent layers. We performed the training on a single GPU using the same hyperparameters specified in the respective papers for CIFAR training. During fine-tuning we kept the number of epochs and also did not adjust the learning rate schedule. We summarize the set of hyperparameters for the various networks in Table 4.
+
+| | | VGG16 | ResNet20/56/110 | DenseNet22 | WRN-16-8 |
+| --- | --- | --- | --- | --- | --- |
+| Train | test error | 7.11 | 8.59/7.05/6.43 | 10.07 | 4.81 |
+| | loss | cross-entropy | cross-entropy | cross-entropy | cross-entropy |
+| | optimizer | SGD | SGD | SGD | SGD |
+| | epochs | 300 | 182 | 300 | 200 |
+| | batch size | 256 | 128 | 64 | 128 |
+| | LR | 0.05 | 0.1 | 0.1 | 0.1 |
+| | LR decay | 0.5@{30,...} | 0.1@{91,136} | 0.1@{150,225} | 0.2@{60,...} |
+| | momentum | 0.9 | 0.9 | 0.9 | 0.9 |
+| | Nesterov | X | X | ✓ | ✓ |
+| | weight decay | 5.0e-4 | 1.0e-4 | 1.0e-4 | 5.0e-4 |
+| Prune | δ | 1.0e-16 | 1.0e-16 | 1.0e-16 | 1.0e-16 |
+| | α | 1.50 | 0.50/0.79/0.79 | 0.40 | 0.36 |
+| Fine-tune | epochs | 150 | 182 | 300 | 200 |
+
+# E.4 CONVOLUTIONAL NEURAL NETWORKS ON IMAGENET
+
+We consider pruning convolutional neural networks of varying size - ResNet18, ResNet50, and ResNet101 - trained on the ImageNet (Russakovsky et al., 2015) data set. The hyperparameters used for training and for our pruning algorithm are shown in Table 5. For this dataset, we considered two scenarios: (i) iterative pruning without retraining and (ii) iterative prune-retrain with a limited amount of iterations given the resource-intensive nature of the experiments.
+
+In the first scenario, we evaluate the baseline effectiveness of each pruning algorithm without fine-tuning by applying the same iterative prune scheme, but without the retraining step. The results of these evaluations can be seen in Fig. 6, which shows that our algorithm outperforms the competing approaches in generating compact, more accurate networks. We suspect that by reevaluating the data-informed filter importance (empirical sensitivity) after each iteration, our approach is capable of more precisely capturing the inter-dependency between layers that alters the relative importance of filters and layers with each pruning step. This is in contrast to competing approaches, which predominantly rely on weight-based criteria of filter importance, and thus can only capture this inter-dependency after retraining (which subsequently alters the magnitude of the weights).
+
+Table 4: We report the hyperparameters used during training, pruning, and fine-tuning for various convolutional architectures on CIFAR-10. LR hereby denotes the learning rate and LR decay denotes the learning rate decay that we deploy after a certain number of epochs. During fine-tuning we used the same hyperparameters except for the ones indicated in the lower part of the table. $\{30,\dots\}$ denotes that the learning rate is decayed every 30 epochs.
+
+| | | ResNet18/50/101 |
+| --- | --- | --- |
+| Train | top-1 test error | 30.26/23.87/22.63 |
+| | top-5 test error | 10.93/7.13/6.45 |
+| | loss | cross-entropy |
+| | optimizer | SGD |
+| | epochs | 90 |
+| | batch size | 256 |
+| | LR | 0.1 |
+| | LR decay | 0.1@{30, 60} |
+| | momentum | 0.9 |
+| | Nesterov | X |
+| | weight decay | 1.0e-4 |
+| Prune | δ | 1.0e-16 |
+| | α | 0.43/0.50/0.50 |
+| Fine-tune | epochs | 90 |
+
+Table 5: The hyper-parameters used for training and pruning residual networks trained on the ImageNet data set.
+
+
+Figure 6: The results of our evaluations of the algorithms in the prune-only scenario, where the network is iteratively pruned down to a specified target prune ratio and the fine-tuning step is omitted. Panels: (a) ResNet18, (b) ResNet50, (c) ResNet101. Note that the $x$ -axis is the percentage of parameters retained, i.e., (1 - pruning ratio).
+
+| Model | Method | Top-1 Orig. (%) | Top-1 Pruned (%) | Top-1 Diff. (%) | Top-5 Orig. (%) | Top-5 Pruned (%) | Top-5 Diff. (%) | PR (%) | FR (%) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ResNet18 | Ours (within 4.0% top-1) | 30.26 | 34.35 | +4.09 | 10.93 | 13.25 | +2.32 | 60.48 | 43.12 |
+| | Ours (within 2.0% top-1) | 30.26 | 32.62 | +2.36 | 10.93 | 12.09 | +1.16 | 43.80 | 29.30 |
+| | Ours (lowest top-1 err.) | 30.26 | 31.34 | +1.08 | 10.93 | 11.43 | +0.50 | 31.03 | 19.99 |
+| | He et al. (2018) (SoftNet) | 29.72 | 32.90 | +3.18 | 10.37 | 12.22 | +1.85 | N/A | 41.80 |
+| | He et al. (2019) | 29.72 | 31.59 | +1.87 | 10.37 | 11.52 | +1.15 | N/A | 41.80 |
+| | Dong et al. (2017) | 30.02 | 33.67 | +3.65 | 10.76 | 13.06 | +2.30 | N/A | 33.30 |
+| ResNet50 | Ours (within 1.0% top-1) | 23.87 | 24.79 | +0.92 | 7.13 | 7.57 | +0.45 | 44.04 | 30.05 |
+| | Ours (lowest top-1 err.) | 23.87 | 24.09 | +0.22 | 7.13 | 7.19 | +0.06 | 18.01 | 10.82 |
+| | He et al. (2018) (SoftNet) | 23.85 | 25.39 | +1.54 | 7.13 | 7.94 | +0.81 | N/A | 41.80 |
+| | Luo et al. (2017) (ThiNet) | 27.12 | 27.96 | +0.84 | 8.86 | 9.33 | +0.47 | 33.72 | 36.39 |
+| | He et al. (2019) | 23.85 | 25.17 | +1.32 | 7.13 | 7.68 | +0.55 | N/A | 53.50 |
+| | He et al. (2017) | N/A | N/A | N/A | 7.80 | 9.20 | +1.40 | N/A | 50.00 |
+| | Luo & Wu (2018) | 23.85 | 25.24 | +1.39 | 7.13 | 7.85 | +0.72 | N/A | 48.70 |
+| | Liu et al. (2019a) | 23.40 | 24.60 | +1.20 | N/A | N/A | N/A | N/A | 51.22 |
+| ResNet101 | Ours (within 1.0% top-1) | 22.63 | 23.57 | +0.94 | 6.45 | 6.89 | +0.44 | 50.45 | 45.08 |
+| | Ours (lowest top-1 err.) | 22.63 | 23.22 | +0.59 | 6.45 | 6.74 | +0.29 | 33.04 | 29.38 |
+| | He et al. (2018) (SoftNet) | 22.63 | 22.49 | -0.14 | 6.44 | 6.29 | -0.15 | N/A | 42.20 |
+| | He et al. (2019) | 22.63 | 22.68 | +0.05 | 6.44 | 6.44 | +0.00 | N/A | 42.20 |
+| | Ye et al. (2018) | 23.60 | 24.73 | +1.13 | N/A | N/A | N/A | 47.20 | 42.69 |
+
+Table 6: Comparisons of the performance of various pruning algorithms on ResNets trained on ImageNet (Russakovsky et al., 2015). The reported results for the competing algorithms were taken directly from the corresponding papers. For each network architecture, the best performing algorithm for each evaluation metric, i.e., Pruned Err., Err. Diff, PR, and FR, is shown in bold.
+
+Next, we consider pruning the networks using the standard iterative prune-retrain procedure as before (see Sec. E) with only a limited number of iterations (2-3 iterations per reported experiment). The results of our evaluations are reported in Table 6 with respect to the following metrics: the resulting error of the pruned network (Pruned Err.), the difference in model classification error (Err. Diff), the percentage of parameters pruned (PR), and the FLOP Reduction (FR). We would like to highlight that - despite the limited resources used during the experiments - our method is able to produce compressed networks that are as accurate and compact as the models generated by competing approaches (obtained by significantly more prune-retrain iterations than allotted to our algorithm).
+
+# E.5 APPLICATION TO REAL-TIME REGRESSION TASKS
+
+In the context of autonomous driving and other real-time applications of neural network inference, fast inference times while maintaining high levels of accuracy are paramount to the successful deployment of such systems (Amini et al., 2018). The particular challenge of real-time applications stems from the fact that, in addition to the conventional trade-off between accuracy and model efficiency, inference has to be conducted in real time. In other words, there is a hard upper bound on the allotted computation time before an answer needs to be generated by the model. Model compression, and in particular filter compression, can provide a principled approach to generating high-accuracy outputs without incurring high computational cost. Moreover, the provable nature of our approach is particularly favorable for real-time applications, e.g., autonomous driving tasks, as they usually require extensive performance guarantees before being deployed.
+
+To evaluate the empirical performance of our filter pruning method on real-time systems, we implemented and tested the neural network of Amini et al. (2018), which is a regression neural network deployed on an autonomous vehicle in real time to predict the steering angle of the human driver. We trained the network of Amini et al. (2018), denoted by Deepknight, with the driving data set provided alongside, using the hyperparameters summarized in Table 7.
+
+The results of our compression can be found in Fig. 7, where we evaluated and compared the performance of our algorithm to those of other SOTA methods (see Sec. E.6). We note that these results were achieved without retraining as our experimental evaluations have shown that even without retraining we can achieve significant pruning ratios that lead to computational speed-ups in practice. As apparent from Fig. 7, we can again outperform other SOTA methods in terms of performance vs. prune ratio. Note that since this is a regression task, we used test loss (mean-squared error on the test data set) as performance criterion.
+
+| | | Deepknight |
+| --- | --- | --- |
+| Train | test loss | 4.9e-5 |
+| | loss | MSE |
+| | optimizer | Adam |
+| | epochs | 100 |
+| | batch size | 32 |
+| | LR | 1e-4 |
+| | LR decay | 0.1@{50, 90} |
+| | momentum | 0.9 |
+| | weight decay | 1.0e-4 |
+| Prune | δ | 1.0e-32 |
+| | α | not iterative |
+| Fine-tune | epochs | 0 |
+
+Table 7: We report the hyperparameters used for training and pruning the driving network of Amini et al. (2018) together with the provided data set. No fine-tuning was conducted for this architecture. LR hereby denotes the learning rate, LR decay denotes the learning rate decay that we deploy after a certain number of epochs, and MSE denotes the mean-squared error.
+
+
+Figure 7: The performance of our approach on a regression task used to infer the steering angle for an autonomous driving task (Amini et al., 2018). (a) An exemplary image taken from the data set. (b) The performance of our pruning procedure before retraining, evaluated on the test loss and compared to competing filter pruning methods. Note that the $x$ -axis is the percentage of parameters retained, i.e., $1 -$ (prune ratio).
+
+Finally, we would like to highlight that our filter pruning method may serve as a principled subprocedure during the design of neural network architectures for real-time applications. In particular, given an inference time budget $\mathcal{T}$ , one can design and train much larger architectures with favorable performance which, however, violate the given budget $\mathcal{T}$ . Our filter pruning method can then be leveraged to compress the network until the given budget $\mathcal{T}$ is satisfied, thus reducing the burden on the practitioner to design a simultaneously accurate and computationally-efficient neural network architecture.
+
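The budget-driven design pattern described above amounts to a simple loop. In the sketch below, `measure_time` and `prune_step` are hypothetical callbacks standing in for a real latency profiler and for one prune(-and-fine-tune) iteration:

```python
def compress_to_budget(model, budget, measure_time, prune_step, ratio=0.9,
                       max_iters=50):
    """Prune `model` until its inference time fits the budget T.

    measure_time(model) -> seconds per inference; prune_step(model, r)
    returns a model with roughly a fraction r of the parameters kept.
    """
    for _ in range(max_iters):
        if measure_time(model) <= budget:
            return model
        model = prune_step(model, ratio)  # prune, optionally fine-tune
    raise RuntimeError("budget not reached within max_iters")
```

The geometric schedule (a fixed keep-ratio per iteration) is one simple choice; any decreasing sequence of prune ratios works as long as the loop terminates.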
+# E.6 COMPARISONS TO ADDITIONAL METHODS ON CIFAR-10
+
+We evaluate the performance of our algorithm on pruning modern convolutional and residual benchmark network architectures - ResNet20, ResNet56, ResNet110, and VGG16 - trained on CIFAR-10, and compare it to the results reported by state-of-the-art filter pruning approaches. The results are obtained by iteratively pruning and fine-tuning the resulting model (starting from the original pre-trained network) with a hyperharmonic sequence of prune ratios as described in Sec. 4.1. For our algorithm, in addition to reporting the pruned model that achieves commensurate accuracy ("within $0.5\%$ err.") as in Sec. 4, we report the model that (i) most closely matches the accuracy of the original network ("orig. err.") and (ii) achieves the lowest classification error possible ("lowest err."), which, for nearly all of the models considered, is lower than that of the original model.
+
+Table 8 summarizes our results and depicts the performance of each pruning algorithm with respect to various metrics: the resulting error of the pruned network (Pruned Err.), the difference in model classification error (Err. Diff), the percentage of parameters pruned (PR), and the FLOP Reduction (FR). Our results show that our algorithm outperforms competing approaches in virtually all of the considered models and pertinent metrics, especially when the overall quality of the pruned model is taken into account.
+
+For ResNet20, for instance, our algorithm generates a model that is simultaneously the sparsest ( $>43\%$ PR) and the most accurate ( $8.64\%$ Err., $0.04\%$ Err. Diff) despite starting from a pre-trained model with the highest error ( $8.60\%$ Orig. Err.) among the reported results. The method of He et al. (2019) does achieve a higher FR than our method; however, this comes at the cost of nearly $2\%$ degradation in classification accuracy (compared to $0.04\%$ for ours).
+
+For larger networks such as ResNet110, our algorithm's favorable performance is even more pronounced: the models generated by our algorithm are not only the sparsest (PR) and most efficient (FR), but they are also the most accurate. A nearly identical trend holds for the results pertaining to VGG16 and ResNet56: the models generated by our method tend to be the overall sparsest and most accurate, even when starting from pre-trained models with higher classification error.
+
+| Model | Method | Orig. Err. (%) | Pruned Err. (%) | Err. Diff. (%) | PR (%) | FR (%) |
+| --- | --- | --- | --- | --- | --- | --- |
+| ResNet20 | Ours (within 0.5% err.) | 8.60 | 9.09 | +0.49 | 62.67 | 45.46 |
+| | Ours (orig. err.) | 8.60 | 8.64 | +0.04 | 43.16 | 32.10 |
+| | Ours (lowest err.) | 8.60 | 8.64 | +0.04 | 43.16 | 32.10 |
+| | He et al. (2018) (SoftNet) | 7.80 | 8.80 | +1.00 | N/A | 29.30 |
+| | He et al. (2019) | 7.80 | 9.56 | +1.76 | N/A | 54.00 |
+| | Ye et al. (2018) | 8.00 | 9.10 | +1.10 | 37.22 | N/A |
+| | Lin et al. (2020) | 7.52 | 9.72 | +2.20 | 40.00 | N/A |
+| ResNet56 | Ours (within 0.5% err.) | 7.05 | 7.33 | +0.28 | 88.98 | 84.42 |
+| | Ours (orig. err.) | 7.05 | 7.02 | -0.03 | 86.00 | 80.76 |
+| | Ours (lowest err.) | 7.05 | 6.36 | -0.69 | 72.10 | 67.41 |
+| | Li et al. (2016) (FT) | 6.96 | 6.94 | -0.02 | 13.70 | 27.60 |
+| | He et al. (2018) (SoftNet) | 6.41 | 6.65 | +0.24 | N/A | 52.60 |
+| | He et al. (2019) | 6.41 | 6.51 | +0.10 | N/A | 52.60 |
+| | He et al. (2017) | 7.20 | 8.20 | +1.00 | N/A | 50.00 |
+| | Li et al. (2019) | 6.28 | 6.60 | +0.32 | 78.10 | 50.00 |
+| | Lin et al. (2020) | 5.49 | 5.97 | +0.48 | 40.00 | N/A |
+| ResNet110 | Ours (within 0.5% err.) | 6.43 | 6.79 | +0.36 | 92.07 | 89.76 |
+| | Ours (orig. err.) | 6.43 | 6.35 | -0.08 | 89.15 | 86.97 |
+| | Ours (lowest err.) | 6.43 | 5.42 | -1.01 | 71.98 | 68.94 |
+| | Li et al. (2016) (FT) | 6.47 | 6.70 | +0.23 | 32.40 | 38.60 |
+| | He et al. (2018) (SoftNet) | 6.32 | 6.14 | -0.18 | N/A | 40.80 |
+| | He et al. (2019) | 6.32 | 6.16 | -0.16 | N/A | 52.30 |
+| | Dong et al. (2017) | 6.37 | 6.56 | +0.19 | N/A | 34.21 |
+| VGG16 | Ours (within 0.5% err.) | 7.28 | 7.78 | +0.50 | 94.32 | 85.03 |
+| | Ours (orig. err.) | 7.28 | 7.17 | -0.11 | 87.06 | 70.32 |
+| | Ours (lowest err.) | 7.28 | 7.06 | -0.22 | 80.02 | 59.21 |
+| | Li et al. (2016) (FT) | 6.75 | 6.60 | -0.15 | 64.00 | 34.20 |
+| | Huang et al. (2018) | 7.23 | 7.83 | +0.60 | 83.30 | 45.00 |
+| | He et al. (2019) | 6.42 | 6.77 | +0.35 | N/A | 35.90 |
+| | Li et al. (2019) | 5.98 | 6.18 | +0.20 | 78.20 | 76.50 |
+
+Table 8: The performance of our algorithm and that of state-of-the-art filter pruning algorithms on modern CNN architectures trained on CIFAR-10. The reported results for the competing algorithms were taken directly from the corresponding papers. For each network architecture, the best performing algorithm for each evaluation metric, i.e., Pruned Err., Err. Diff, PR, and FR, is shown in bold. The results show that our algorithm consistently outperforms state-of-the-art pruning approaches in nearly all of the relevant pruning metrics.
\ No newline at end of file
diff --git a/provablefilterpruningforefficientneuralnetworks/images.zip b/provablefilterpruningforefficientneuralnetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9ebb6b3fc074e5e65675ba5b65694a53977a698a
--- /dev/null
+++ b/provablefilterpruningforefficientneuralnetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f4a8c093e55461159d2d368a88209d982537c63f5b1bd6604d73d601e48778e
+size 1261872
diff --git a/provablefilterpruningforefficientneuralnetworks/layout.json b/provablefilterpruningforefficientneuralnetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ab3fc26520fd7bed13391fb1a0b26e2e95e64cee
--- /dev/null
+++ b/provablefilterpruningforefficientneuralnetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:54e2bf2f90ad2dfa82578634d73dabc63f0f974c1ba042ca7c1ebb2e260d19d1
+size 1171508
diff --git a/provablerobustnessagainstalladversariallpperturbationsforpgeq1/cfe5bd14-a24b-4f8e-a26d-6111f87eea0b_content_list.json b/provablerobustnessagainstalladversariallpperturbationsforpgeq1/cfe5bd14-a24b-4f8e-a26d-6111f87eea0b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..612a460a51092f70dd2ce96c3cd030cf97c67714
--- /dev/null
+++ b/provablerobustnessagainstalladversariallpperturbationsforpgeq1/cfe5bd14-a24b-4f8e-a26d-6111f87eea0b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:84c4faa42547c02dbef435a806f6791dd89e9dc75172248b7cd54e4b1e45967a
+size 138623
diff --git a/provablerobustnessagainstalladversariallpperturbationsforpgeq1/cfe5bd14-a24b-4f8e-a26d-6111f87eea0b_model.json b/provablerobustnessagainstalladversariallpperturbationsforpgeq1/cfe5bd14-a24b-4f8e-a26d-6111f87eea0b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..da948885c2a2d6e76673cf75648ad0f8d6198bb3
--- /dev/null
+++ b/provablerobustnessagainstalladversariallpperturbationsforpgeq1/cfe5bd14-a24b-4f8e-a26d-6111f87eea0b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d4e7d2065fad5346eae94e6e63765901ab6ddc87b63bc1a6feef6d4663bccc3
+size 159716
diff --git a/provablerobustnessagainstalladversariallpperturbationsforpgeq1/cfe5bd14-a24b-4f8e-a26d-6111f87eea0b_origin.pdf b/provablerobustnessagainstalladversariallpperturbationsforpgeq1/cfe5bd14-a24b-4f8e-a26d-6111f87eea0b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c714a159a6431e887fad562793530ee6677c8a92
--- /dev/null
+++ b/provablerobustnessagainstalladversariallpperturbationsforpgeq1/cfe5bd14-a24b-4f8e-a26d-6111f87eea0b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b279f03dcae4b9b0605bb9ae5f860b3442c701cf9dcc7e2828b9baa73f9318a4
+size 1030210
diff --git a/provablerobustnessagainstalladversariallpperturbationsforpgeq1/full.md b/provablerobustnessagainstalladversariallpperturbationsforpgeq1/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3a5122381d6d02bc67848d0a016556ce21292ac3
--- /dev/null
+++ b/provablerobustnessagainstalladversariallpperturbationsforpgeq1/full.md
@@ -0,0 +1,591 @@
+# PROVABLE ROBUSTNESS AGAINST ALL ADVERSARIAL $l_{p}$ -PERTURBATIONS FOR $p \geq 1$
+
+Francesco Croce
+
+University of Tübingen, Germany
+
+Matthias Hein
+
+University of Tübingen, Germany
+
+# ABSTRACT
+
+In recent years several adversarial attacks and defenses have been proposed. Often seemingly robust models turn out to be non-robust when more sophisticated attacks are used. One way out of this dilemma is provable robustness guarantees. While provably robust models for specific $l_{p}$ -perturbation models have been developed, we show that they do not come with any guarantee against other $l_{q}$ -perturbations. We propose a new regularization scheme, MMR-Universal, for ReLU networks which enforces robustness wrt $l_{1}$ - and $l_{\infty}$ -perturbations, and we show how that leads to the first provably robust models wrt any $l_{p}$ -norm for $p \geq 1$ .
+
+# 1 INTRODUCTION
+
+The vulnerability of neural networks against adversarial manipulations (Szegedy et al., 2014; Goodfellow et al., 2015) is a problem for their deployment in safety critical systems such as autonomous driving and medical applications. In fact, small perturbations of the input which appear irrelevant or are even imperceivable to humans change the decisions of neural networks. This questions their reliability and makes them a target of adversarial attacks.
+
+To mitigate the non-robustness of neural networks many empirical defenses have been proposed, e.g. by Gu & Rigazio (2015); Zheng et al. (2016); Papernot et al. (2016); Huang et al. (2016); Bastani et al. (2016); Madry et al. (2018), but at the same time more sophisticated attacks have proven these defenses to be ineffective (Carlini & Wagner, 2017; Athalye et al., 2018; Mosbach et al., 2018), with the exception of the adversarial training of Madry et al. (2018). However, even these $l_{\infty}$ -adversarially trained models are not more robust than normal ones when attacked with perturbations of small $l_{p}$ -norms with $p \neq \infty$ (Sharma & Chen, 2019; Schott et al., 2019; Croce et al., 2019b; Kang et al., 2019). The situation becomes even more complicated if one extends the attack models beyond $l_{p}$ -balls to other sets of perturbations (Brown et al., 2017; Engstrom et al., 2017; Hendrycks & Dietterich, 2019; Geirhos et al., 2019).
+
+Another approach, which fixes the problem of overestimating the robustness of a model, is provable guarantees, which means that one certifies that the decision of the network does not change in a certain $l_{p}$ -ball around the target point. Along this line, current state-of-the-art methods compute either the norm of the minimal perturbation changing the decision at a point (e.g. Katz et al. (2017); Tjeng et al. (2019)) or lower bounds on it (Hein & Andriushchenko, 2017; Raghunathan et al., 2018; Wong & Kolter, 2018). Several new training schemes like (Hein & Andriushchenko, 2017; Raghunathan et al., 2018; Wong & Kolter, 2018; Mirman et al., 2018; Croce et al., 2019a; Xiao et al., 2019; Gowal et al., 2018) aim at both enhancing the robustness of networks and producing models more amenable to verification techniques. However, all of them are only able to prove robustness against a single kind of perturbations, typically either $l_{2}$ - or $l_{\infty}$ -bounded, and not wrt all the $l_{p}$ -norms simultaneously, as shown in Section 5. Some are also designed to work for a specific $p$ (Mirman et al., 2018; Gowal et al., 2018), and it is not clear if they can be extended to other norms.
+
+The only two papers which have shown, with some limitations, non-trivial empirical robustness against multiple types of adversarial examples are Schott et al. (2019) and Tramèr & Boneh (2019), which resist $l_0$ - resp. $l_1$ -, $l_2$ - and $l_{\infty}$ -attacks. However, they come without provable guarantees and Schott et al. (2019) is restricted to MNIST.
+
+In this paper we aim at robustness against all $l_{p}$ -bounded attacks for $p \geq 1$ . We study the non-trivial case where none of the $l_{p}$ -balls is contained in another. If $\epsilon_{p}$ is the radius of the $l_{p}$ -ball for which we want to be provably robust, this requires $d^{\frac{1}{p} - \frac{1}{q}} \epsilon_{q} > \epsilon_{p} > \epsilon_{q}$ for $p < q$ , with $d$ being the input dimension. We show that, for normally trained models and the $l_{1}$ - and $l_{\infty}$ -balls we use in the experiments, none of the adversarial examples constrained to be in the $l_{1}$ -ball (i.e. results of an $l_{1}$ -attack) belongs to the $l_{\infty}$ -ball, and vice versa. This shows that certifying the union of such balls is significantly more complicated than becoming robust in only one of them, as in the case of the union the attacker has a much larger variety of manipulations available to fool the classifier.
+
+We propose a technique to train piecewise affine models (such as ReLU networks) which are simultaneously provably robust wrt all the $l_{p}$ -norms with $p \in [1, \infty]$ . First, we show that having guarantees on the $l_{1}$ - and $l_{\infty}$ -distance to the decision boundary and to the region boundaries (the borders of the polytopes on which the classifier is affine) is sufficient to derive meaningful certificates on the robustness wrt all $l_{p}$ -norms for $p \in (1, \infty)$ . In particular, our guarantees are independent of the dimension of the input space and thus go beyond a naive approach which just exploits that all $l_{p}$ -metrics can be upper- and lower-bounded wrt any other $l_{q}$ -metric. Then, we extend the regularizer introduced in Croce et al. (2019a) so that we can directly maximize these bounds at training time. Finally, we show the effectiveness of our technique with experiments on four datasets, where the networks trained with our method are the first to have non-trivial provable robustness wrt $l_{1}$ -, $l_{2}$ - and $l_{\infty}$ -perturbations.
+
+# 2 LOCAL PROPERTIES AND ROBUSTNESS GUARANTEES OF RELU NETWORKS
+
+It is well known that feedforward neural networks (fully connected, CNNs, residual networks, DenseNets etc.) with piecewise affine activation functions, e.g. ReLU, leaky ReLU, yield continuous piecewise affine functions (see e.g. Arora et al. (2018); Croce & Hein (2018)). Croce et al. (2019a) exploit this property to derive bounds on the robustness of such networks against adversarial manipulations. In the following we recall the guarantees of Croce et al. (2019a) wrt a single $l_{p}$ -perturbation which we extend in this paper to simultaneous guarantees wrt all the $l_{p}$ -perturbations for $p$ in $[1,\infty]$ .
+
+# 2.1 RELU NETWORKS AS PIECEWISE AFFINE FUNCTIONS
+
+Let $f: \mathbb{R}^d \to \mathbb{R}^K$ be a classifier with $d$ being the dimension of the input space and $K$ the number of classes. The classifier decision at a point $x$ is given by $\operatorname*{argmax}_{r=1,\dots,K} f_r(x)$ . In this paper we deal with ReLU networks, that is, networks using the ReLU activation function (in fact our approach can easily be extended to any piecewise affine activation function, e.g. leaky ReLU, or other forms of layers leading to a piecewise affine classifier as in Croce et al. (2019b)).
+
+Definition 2.1 A function $f: \mathbb{R}^d \to \mathbb{R}$ is called piecewise affine if there exists a finite set of polytopes $\{Q_r\}_{r=1}^M$ (referred to as linear regions of $f$ ) such that $\cup_{r=1}^M Q_r = \mathbb{R}^d$ and $f$ is an affine function when restricted to every $Q_r$ .
+
+Denoting the activation function as $\sigma$ ( $\sigma(t) = \max\{0, t\}$ if ReLU is used) and assuming $L$ hidden layers, we have the usual recursive definition of $f$ as
+
+$$
+g ^ {(l)} (x) = W ^ {(l)} f ^ {(l - 1)} (x) + b ^ {(l)}, \quad f ^ {(l)} (x) = \sigma (g ^ {(l)} (x)), \quad l = 1, \ldots , L,
+$$
+
+with $f^{(0)}(x) \equiv x$ and $f(x) = W^{(L + 1)}f^{(L)}(x) + b^{(L + 1)}$ the output of $f$ . Moreover, $W^{(l)} \in \mathbb{R}^{n_l \times n_{l - 1}}$ and $b^{(l)} \in \mathbb{R}^{n_l}$ , where $n_l$ is the number of units in the $l$ -th layer ( $n_0 = d$ , $n_{L + 1} = K$ ).
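As a concrete illustration, the recursion above can be sketched in a few lines of numpy; the layer sizes and random weights below are hypothetical placeholders, not the architectures used in the experiments.

```python
import numpy as np

# Hypothetical toy network: d = 3 inputs, one hidden layer (L = 1), K = 2 classes.
rng = np.random.default_rng(0)
W = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
b = [rng.standard_normal(4), rng.standard_normal(2)]

def forward(x):
    """g^(l) = W^(l) f^(l-1) + b^(l), f^(l) = sigma(g^(l)), last layer affine."""
    f = x
    for Wl, bl in zip(W[:-1], b[:-1]):
        f = np.maximum(0.0, Wl @ f + bl)   # sigma(t) = max(0, t) for ReLU
    return W[-1] @ f + b[-1]               # f(x) = W^(L+1) f^(L)(x) + b^(L+1)

x = rng.standard_normal(3)
logits = forward(x)
pred = int(np.argmax(logits))              # decision: argmax_r f_r(x)
```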
+
+For the convenience of the reader we summarize from Croce & Hein (2018) the description of the polytope $Q(x)$ containing $x$ and the affine form of the classifier $f$ when restricted to $Q(x)$ . We assume that $x$ does not lie on the boundary between polytopes (this holds almost always, as faces shared between polytopes are of lower dimension). Let $\Delta^{(l)}, \Sigma^{(l)} \in \mathbb{R}^{n_l \times n_l}$ for $l = 1, \ldots, L$ be diagonal matrices defined elementwise as
+
+$$
+\Delta^{(l)}(x)_{ij} = \begin{cases} \operatorname{sign}\big(f_i^{(l)}(x)\big) & \text{if } i = j, \\ 0 & \text{else,} \end{cases} \qquad \Sigma^{(l)}(x)_{ij} = \begin{cases} 1 & \text{if } i = j \text{ and } f_i^{(l)}(x) > 0, \\ 0 & \text{else.} \end{cases}
+$$
+
+This allows us to write $f^{(l)}(x)$ as composition of affine functions, that is
+
+$$
+f ^ {(l)} (x) = W ^ {(l)} \Sigma^ {(l - 1)} (x) \Big (W ^ {(l - 1)} \Sigma^ {(l - 2)} (x) \times \Big (\dots \Big (W ^ {(1)} x + b ^ {(1)} \Big) \dots \Big) + b ^ {(l - 1)} \Big) + b ^ {(l)},
+$$
+
+which we simplify as $f^{(l)}(x) = V^{(l)}x + a^{(l)}$ , with $V^{(l)}\in \mathbb{R}^{n_l\times d}$ and $a^{(l)}\in \mathbb{R}^{n_l}$ given by
+
+$$
+V^{(l)} = W^{(l)} \Big( \prod_{j=1}^{l-1} \Sigma^{(l-j)}(x) W^{(l-j)} \Big) \quad \text{and} \quad a^{(l)} = b^{(l)} + \sum_{j=1}^{l-1} \Big( \prod_{m=1}^{l-j} W^{(l+1-m)} \Sigma^{(l-m)}(x) \Big) b^{(j)}.
+$$
+
+A forward pass through the network is sufficient to compute $V^{(l)}$ and $a^{(l)}$ for every $l$ . The polytope $Q(x)$ is given as the intersection of $N = \sum_{l=1}^{L} n_l$ half-spaces defined by
+
+$$
+Q(x) = \bigcap_{l = 1, \dots, L} \; \bigcap_{i = 1, \dots, n_l} \left\{ z \in \mathbb{R}^d \,\Big|\, \Delta^{(l)}(x)_{ii} \big( V_i^{(l)} z + a_i^{(l)} \big) \geq 0 \right\}.
+$$
+
+Finally, the affine restriction of $f$ to $Q(x)$ is $f(z)|_{Q(x)} = f^{(L + 1)}|_{Q(x)}(z) = V^{(L + 1)}z + a^{(L + 1)}$ . Let $q$ be defined via $\frac{1}{p} + \frac{1}{q} = 1$ and $c$ the correct class of $x$ . We introduce
+
+$$
+d_{p,l,j}^{B}(x) = \frac{\left| \left\langle V_j^{(l)}, x \right\rangle + a_j^{(l)} \right|}{\left\| V_j^{(l)} \right\|_q} \quad \text{and} \quad d_{p,s}^{D}(x) = \frac{f_c(x) - f_s(x)}{\left\| V_c^{(L+1)} - V_s^{(L+1)} \right\|_q}, \tag{1}
+$$
+
+for every $l = 1, \dots, L$ , $j = 1, \dots, n_l$ and $s = 1, \dots, K$ with $s \neq c$ , which represent the $N$ $l_p$ -distances of $x$ to the hyperplanes defining the polytope $Q(x)$ and the $K - 1$ $l_p$ -distances of $x$ to the hyperplanes defining the decision boundaries in $Q(x)$ . Finally, we define
+
+$$
+d_p^B(x) = \min_{l = 1, \dots, L} \; \min_{j = 1, \dots, n_l} d_{p,l,j}^B(x) \quad \text{and} \quad d_p^D(x) = \min_{\substack{s = 1, \dots, K \\ s \neq c}} d_{p,s}^D(x) \tag{2}
+$$
+
+as the minimum values of these two sets of distances (note that $d_p^D(x) < 0$ if $x$ is misclassified).
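The quantities of this subsection can be illustrated numerically. The following numpy sketch uses a toy network with hypothetical sizes and random weights, and takes the predicted class in place of the correct class $c$: it accumulates $V^{(l)}, a^{(l)}$ along a forward pass, checks the affine identity on $Q(x)$, and evaluates the distances (1)-(2) for $p \in \{1, \infty\}$.

```python
import numpy as np

# Toy ReLU net with hypothetical sizes; random weights stand in for a trained model.
rng = np.random.default_rng(1)
sizes = [3, 5, 4, 3]                        # n_0 = d, n_1, n_2, n_{L+1} = K
W = [rng.standard_normal((sizes[i + 1], sizes[i])) for i in range(3)]
b = [rng.standard_normal(sizes[i + 1]) for i in range(3)]

def forward(z):                             # plain recursive forward pass
    f = z
    for l in range(2):
        f = np.maximum(0.0, W[l] @ f + b[l])
    return W[2] @ f + b[2]

x = rng.standard_normal(3)

# Accumulate the affine maps valid on the linear region Q(x).
V, a = np.eye(3), np.zeros(3)               # f^(0)(z) = z
hyperplanes = []                            # rows (V_j^(l), a_j^(l)) for l = 1..L
for l in range(2):
    V, a = W[l] @ V, W[l] @ a + b[l]        # pre-activation g^(l) as affine map
    hyperplanes.append((V.copy(), a.copy()))
    sig = (V @ x + a > 0).astype(float)     # diagonal of Sigma^(l)(x)
    V, a = sig[:, None] * V, sig * a        # f^(l) = Sigma^(l)(x) g^(l)
V, a = W[2] @ V, W[2] @ a + b[2]            # f(z)|_{Q(x)} = V z + a

out = V @ x + a
c = int(np.argmax(out))                     # predicted class used as c

def distances(p):
    """d_p^B(x) and d_p^D(x) from (1)-(2), with q the dual exponent of p."""
    q = np.inf if p == 1 else (1.0 if p == np.inf else p / (p - 1))
    dB = min(np.min(np.abs(Vl @ x + al) / np.linalg.norm(Vl, ord=q, axis=1))
             for Vl, al in hyperplanes)
    dD = min((out[c] - out[s]) / np.linalg.norm(V[c] - V[s], ord=q)
             for s in range(3) if s != c)
    return dB, dD

d1B, d1D = distances(1)
diB, diD = distances(np.inf)
```

As a sanity check, the $l_1$-distances dominate the $l_\infty$-distances to each hyperplane, so the same holds after taking minima.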
+
+# 2.2 ROBUSTNESS GUARANTEES INSIDE LINEAR REGIONS
+
+The $l_{p}$ -robustness $\mathbf{r}_p(x)$ of a classifier $f$ at a point $x$ , belonging to class $c$ , wrt the $l_{p}$ -norm is defined as the optimal value of the following optimization problem
+
+$$
+\mathbf{r}_p(x) = \min_{\delta \in \mathbb{R}^d} \| \delta \|_p, \quad \text{s.t.} \quad \max_{l \neq c} f_l(x + \delta) \geq f_c(x + \delta), \quad x + \delta \in S, \tag{3}
+$$
+
+where $S$ is a set of constraints on the input, e.g. pixel values of images have to be in $[0, 1]$ . The $l_{p}$ -robustness $\mathbf{r}_p(x)$ is the smallest $l_{p}$ -distance to $x$ of a point which is classified differently from $c$ . Thus, $\mathbf{r}_p(x) = 0$ for misclassified points. The following theorem from Croce et al. (2019a), rephrased to fit the current notation, provides guarantees on $\mathbf{r}_p(x)$ .
+
+Theorem 2.1 (Croce et al. (2019a)) If $d_p^B(x) < d_p^D(x)$ , then $\mathbf{r}_p(x) \geq d_p^B(x)$ , while if $|d_p^D(x)| \leq d_p^B(x)$ , then $\mathbf{r}_p(x) = \max\{d_p^D(x), 0\}$ .
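The case distinction of Theorem 2.1 can be turned into a small helper; a sketch, whose inputs are the two distances of (2):

```python
def certified_radius(d_B, d_D):
    """Certified l_p-robustness at x per Theorem 2.1.

    d_B = d_p^B(x): distance to the closest face of Q(x);
    d_D = d_p^D(x): signed distance to the closest decision hyperplane in Q(x)."""
    if d_B < d_D:
        return d_B          # only a lower bound: the decision plane lies outside Q(x)
    return max(d_D, 0.0)    # exact robustness; 0 for misclassified points
```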
+
+Although Theorem 2.1 holds for any $l_{p}$ -norm with $p \geq 1$ , it requires to compute $d_p^B (x)$ and $d_p^D (x)$ for every $p$ individually. In this paper, exploiting this result and the geometrical arguments presented in Section 3, we show that it is possible to derive bounds on the robustness $\mathbf{r}_p(x)$ for any $p \in (1,\infty)$ using only information on $\mathbf{r}_1(x)$ and $\mathbf{r}_{\infty}(x)$ .
+
+In the next section, we show that the straightforward usage of standard $l_{p}$ -norms inequalities does not yield meaningful bounds on the $l_{p}$ -robustness inside the union of the $l_{1}$ - and $l_{\infty}$ -ball, since these bounds depend on the dimension of the input space of the network.
+
+
+Figure 1: Visualization of the $l_{2}$ -ball contained in the union resp. the convex hull of the union of $l_{1}$ - and $l_{\infty}$ -balls in $\mathbb{R}^3$ . First column: co-centric $l_{1}$ -ball (blue) and $l_{\infty}$ -ball (black). Second: in red the largest $l_{2}$ -ball completely contained in the union of $l_{1}$ - and $l_{\infty}$ -ball. Third: in green the convex hull of the union of the $l_{1}$ - and $l_{\infty}$ -ball. Fourth: the largest $l_{2}$ -ball (red) contained in the convex hull. The $l_{2}$ -ball contained in the convex hull is significantly larger than that contained in the union of $l_{1}$ - and $l_{\infty}$ -ball.
+
+
+
+
+
+
+
+# 3 MINIMAL $l_{p}$ -NORM OF THE COMPLEMENT OF THE UNION OF $l_{1}$ - AND $l_{\infty}$ -BALL AND ITS CONVEX HULL
+
+Let $B_{1} = \{x\in \mathbb{R}^{d}:||x||_{1}\leq \epsilon_{1}\}$ and $B_{\infty} = \{x\in \mathbb{R}^d:\| x\|_{\infty}\leq \epsilon_{\infty}\}$ be the $l_{1}$ -ball of radius $\epsilon_1 > 0$ and the $l_{\infty}$ -ball of radius $\epsilon_{\infty} > 0$ respectively, both centered at the origin in $\mathbb{R}^d$ . We also assume $\epsilon_1\in (\epsilon_\infty ,d\epsilon_\infty)$ , so that $B_{1}\nsubseteq B_{\infty}$ and $B_{\infty}\nsubseteq B_{1}$ .
+
+Suppose we can guarantee that the classifier does not change its label in $U_{1,\infty} = B_1 \cup B_\infty$ . Which guarantee does this imply for all intermediate $l_p$ -norms? This question can be answered by computing the minimal $l_p$ -norm over $\mathbb{R}^d \setminus U_{1,\infty}$ , namely $\min_{x \in \mathbb{R}^d \setminus U_{1,\infty}} \| x \|_p$ . By standard norm inequalities it holds, for every $x \in \mathbb{R}^d$ , that
+
+$$
+\| x \|_p \geq \| x \|_\infty \quad \text{and} \quad \| x \|_p \geq \| x \|_1 \, d^{\frac{1 - p}{p}},
+$$
+
+and thus a naive application of these inequalities yields the bound
+
+$$
+\min _ {x \in \mathbb {R} ^ {d} \backslash U _ {1, \infty}} \| x \| _ {p} \geq \max \left\{\epsilon_ {\infty}, \epsilon_ {1} d ^ {\frac {1 - p}{p}} \right\}. \tag {4}
+$$
+
+However, this naive bound does not take into account that we know that $\| x\| _1\geq \epsilon_1$ and $\| x\|_{\infty}\geq \epsilon_{\infty}$ . Our first result yields the exact value taking advantage of this information.
+
+Proposition 3.1 If $d \geq 2$ and $\epsilon_1 \in (\epsilon_{\infty}, d \epsilon_{\infty})$ , then
+
+$$
+\min _ {x \in \mathbb {R} ^ {d} \backslash U _ {1, \infty}} \| x \| _ {p} = \left(\epsilon_ {\infty} ^ {p} + \frac {\left(\epsilon_ {1} - \epsilon_ {\infty}\right) ^ {p}}{(d - 1) ^ {p - 1}}\right) ^ {\frac {1}{p}}. \tag {5}
+$$
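The closed form (5) can be sanity-checked numerically: the minimum is attained at a point with one coordinate equal to $\epsilon_\infty$ and the remaining $l_1$-mass spread evenly over the other $d-1$ coordinates. A sketch with hypothetical radii:

```python
import numpy as np

d, eps1, epsinf, p = 50, 4.0, 1.0, 2.0    # hypothetical radii, eps1 in (epsinf, d*epsinf)

exact = (epsinf**p + (eps1 - epsinf)**p / (d - 1)**(p - 1))**(1 / p)   # equation (5)
naive = max(epsinf, eps1 * d**((1 - p) / p))                           # naive bound (4)

# Witness attaining (5): one coordinate at epsinf, the remaining l_1-mass
# spread evenly over the other d - 1 coordinates.
w = np.full(d, (eps1 - epsinf) / (d - 1))
w[0] = epsinf
```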
+
+Thus a guarantee both for $l_{1}$ - and $l_{\infty}$ -ball yields a guarantee for all intermediate $l_{p}$ -norms. However, for affine classifiers a guarantee for $B_{1}$ and $B_{\infty}$ implies a guarantee wrt the convex hull $C$ of their union $B_{1} \cup B_{\infty}$ . This can be seen by the fact that an affine classifier generates two half-spaces, and the convex hull of a set $A$ is the intersection of all half-spaces containing $A$ . Thus, inside $C$ the decision of the affine classifier cannot change if it is guaranteed not to change in $B_{1}$ and $B_{\infty}$ , as $C$ is completely contained in one of the half-spaces generated by the classifier (see Figure 1 for illustrations of $B_{1}$ , $B_{\infty}$ , their union and their convex hull).
+
+With the following theorem, we characterize, for any $p \geq 1$ , the minimal $l_p$ -norm over $\mathbb{R}^d \setminus C$ .
+
+Theorem 3.1 Let $C$ be the convex hull of $B_{1} \cup B_{\infty}$ . If $d \geq 2$ and $\epsilon_{1} \in (\epsilon_{\infty}, d\epsilon_{\infty})$ , then
+
+$$
+\min _ {x \in \mathbb {R} ^ {d} \backslash C} \| x \| _ {p} = \frac {\epsilon_ {1}}{\left(\epsilon_ {1} / \epsilon_ {\infty} - \alpha + \alpha^ {q}\right) ^ {1 / q}}, \tag {6}
+$$
+
+where $\alpha = \frac{\epsilon_1}{\epsilon_\infty} - \left\lfloor \frac{\epsilon_1}{\epsilon_\infty} \right\rfloor$ and $\frac{1}{p} + \frac{1}{q} = 1$ .
+
+
+Figure 2: Comparison of the minimal $l_{2}$ -norm over $\mathbb{R}^d\backslash C$ (6) (blue), $\mathbb{R}^d\backslash U_{1,\infty}$ (5) (red) and its naive lower bound (4) (green). We fix $\epsilon_{\infty} = 1$ and show the results varying $\epsilon_{1}\in (1,d)$ , for $d = 784$ and $d = 3072$ . We plot the value (or a lower bound in case of (4)) of the minimal $\| x\| _2$ , depending on $\epsilon_{1}$ , given by the different approaches (first and third plots). The red curves are almost completely hidden by the green ones, as they mostly overlap, but can be seen for small values of $\epsilon_{1}$ . Moreover, we report (second and fourth plots) the ratios of the minimal $\| x\| _2$ for $\mathbb{R}^d\backslash \mathrm{conv}(B_1\cup B_\infty)$ and $\mathbb{R}^d\backslash (B_1\cup B_\infty)$ . The values provided by (6) are much larger than those of (5).
+
+
+
+
+
+
+
+Note that our expression in Theorem 3.1 is exact and not just a lower bound. Moreover, the minimal $l_{p}$ -distance of $\mathbb{R}^d \setminus C$ to the origin in Equation (6) is independent of the dimension $d$ , in contrast to the expression for the minimal $l_{p}$ -norm over $\mathbb{R}^d \setminus U_{1,\infty}$ in (5) and its naive lower bound in (4), which are both decreasing for increasing $d$ and $p > 1$ . In Figure 1 we compare visually the largest $l_{2}$ -balls (in red) fitting inside either $U_{1,\infty}$ or the convex hull $C$ in $\mathbb{R}^3$ , showing that the one in $C$ is clearly larger. In Figure 2 we provide a quantitative comparison in high dimensions. We plot the minimal $l_{2}$ -norm over $\mathbb{R}^d \setminus C$ (6) (blue) and over $\mathbb{R}^d \setminus U_{1,\infty}$ (5) (red) as well as its naive lower bound (4) (green). We fix $\epsilon_{\infty} = 1$ and vary $\epsilon_1 \in (1, d)$ , with either $d = 784$ (left) or $d = 3072$ (right), i.e. the dimensions of the input spaces of MNIST and CIFAR-10. One sees clearly that the blue line corresponding to (6) is significantly higher than the other two. In the second and fourth plots of Figure 2 we show, for each $\epsilon_1$ , the ratio of the $l_{2}$ -distances given by (6) and (5). The maximal ratio is about 3.8 for $d = 784$ and 5.3 for $d = 3072$ , meaning that the advantage of (6) increases with $d$ (for a more detailed analysis see A.3).
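These ratios can be reproduced by evaluating the closed forms (5) and (6) on a grid; a sketch for $p = 2$, $\epsilon_\infty = 1$, where the maxima come out at roughly 3.8 and 5.3:

```python
import numpy as np

def min_norm_union(eps1, epsinf, d, p):   # closed form (5)
    return (epsinf**p + (eps1 - epsinf)**p / (d - 1)**(p - 1))**(1 / p)

def min_norm_hull(eps1, epsinf, q):       # closed form (6); for p = 2, q = 2
    t = eps1 / epsinf
    alpha = t - np.floor(t)
    return eps1 / (t - alpha + alpha**q)**(1 / q)

max_ratio = {}
for d in (784, 3072):                     # input dimensions of MNIST and CIFAR-10
    eps1 = np.linspace(1.001, d - 1e-3, 200_000)
    max_ratio[d] = np.max(min_norm_hull(eps1, 1.0, 2.0)
                          / min_norm_union(eps1, 1.0, d, 2.0))
```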
+
+These two examples indicate that the $l_{p}$ -balls contained in $C$ can be a few times larger than those in $U_{1,\infty}$ . Recall that we deal with piecewise affine networks. If we could enlarge the linear regions on which the classifier is affine so that it contains the $l_{1}$ - and $l_{\infty}$ -ball of some desired radii, we would automatically get the $l_{p}$ -balls of radii given by Theorem 3.1 to fit in the linear regions. The next section formalizes the resulting robustness guarantees.
+
+# 4 UNIVERSAL PROVABLE ROBUSTNESS WITH RESPECT TO ALL $l_{p}$ -NORMS
+
+Combining the results of Theorems 2.1 and 3.1, in the next theorem we derive lower bounds on the robustness of a continuous piecewise affine classifier $f$ , e.g. a ReLU network, at a point $x$ wrt any $l_p$ -norm with $p \geq 1$ using only $d_1^B(x)$ , $d_1^D(x)$ , $d_\infty^B(x)$ and $d_\infty^D(x)$ (see (2)).
+
+Theorem 4.1 Let $d_p^B (x)$ , $d_p^D (x)$ be defined as in (2) and define $\rho_{1} = \min \{d_{1}^{B}(x),|d_{1}^{D}(x)|\}$ and $\rho_{\infty} = \min \{d_{\infty}^{B}(x),|d_{\infty}^{D}(x)|\}$ . If $d\geq 2$ and $x$ is correctly classified, then
+
+$$
+\mathbf {r} _ {p} (x) \geq \frac {\rho_ {1}}{\left(\rho_ {1} / \rho_ {\infty} - \alpha + \alpha^ {q}\right) ^ {1 / q}}, \tag {7}
+$$
+
+for any $p \in (1, \infty)$ , with $\alpha = \frac{\rho_1}{\rho_\infty} - \left\lfloor \frac{\rho_1}{\rho_\infty} \right\rfloor$ and $\frac{1}{p} + \frac{1}{q} = 1$ .
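The bound (7) is straightforward to evaluate; as a sanity check, it recovers $\rho_1$ as $p \to 1$ and $\rho_\infty$ as $p \to \infty$. A sketch with hypothetical margins:

```python
import numpy as np

def universal_lower_bound(rho1, rhoinf, p):
    """Evaluate the bound (7) on r_p(x) for p in (1, inf), q the dual exponent."""
    q = p / (p - 1)
    t = rho1 / rhoinf
    alpha = t - np.floor(t)
    return rho1 / (t - alpha + alpha**q)**(1 / q)

rho1, rhoinf = 1.0, 0.1       # hypothetical l_1- and l_inf-margins
r2 = universal_lower_bound(rho1, rhoinf, 2.0)
```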
+
+Croce et al. (2019a) add a regularization term to the training objective in order to enlarge the values of $d_p^B(x)$ and $d_p^D(x)$ for a fixed $p$ , with $x$ being the training points (note that they optimize $d_p^D(x)$ and not $|d_p^D(x)|$ to encourage correct classification).
+
+Sorting $d_{p,l,j}^{B}$ and $d_{p,s}^{D}$ (see (1)), that is the $l_{p}$ -distances to the hyperplanes defining $Q(x)$ and to the decision hyperplanes, in increasing order and denoting them as $d_{p,\pi_i^B}^B$ and $d_{p,\pi_i^D}^D$ respectively, the Maximum Margin Regularizer (MMR) of Croce et al. (2019a) is defined as
+
+$$
+\text{MMR-}l_p(x) = \frac{1}{k_B} \sum_{i=1}^{k_B} \max\left(0, 1 - \frac{d_{p, \pi_i^B}^{B}(x)}{\gamma_B}\right) + \frac{1}{k_D} \sum_{i=1}^{k_D} \max\left(0, 1 - \frac{d_{p, \pi_i^D}^{D}(x)}{\gamma_D}\right). \tag{8}
+$$
+
+It tries to push the $k_{B}$ closest hyperplanes defining $Q(x)$ farther than $\gamma_{B}$ from $x$ and the $k_{D}$ closest decision hyperplanes farther than $\gamma_{D}$ from $x$ both wrt the $l_{p}$ -metric. In other words, MMR- $l_{p}$ aims at widening the linear regions around the training points so that they contain $l_{p}$ -balls of radius either $\gamma_{B}$ or $\gamma_{D}$ centered in the training points. Using MMR- $l_{p}$ wrt a fixed $l_{p}$ -norm, possibly in combination with the adversarial training of Madry et al. (2018), leads to classifiers which are empirically resistant wrt $l_{p}$ -adversarial attacks and are easily verifiable by state-of-the-art methods to provide lower bounds on the true robustness.
+
+For our goal of simultaneous $l_{p}$ -robustness guarantees for all $p \geq 1$ , we use the insights obtained from Theorem 4.1 to propose a combination of MMR- $l_{1}$ and MMR- $l_{\infty}$ , called MMR-Universal. It implicitly enhances robustness wrt every $l_{p}$ -norm without actually computing and modifying separately all the distances $d_{p}^{B}(x)$ and $d_{p}^{D}(x)$ for the different values of $p$ .
+
+Definition 4.1 (MMR-Universal) Let $x$ be a training point. We define the regularizer
+
+$$
+\begin{array}{l} \text{MMR-Universal}(x) = \frac{1}{k_B} \sum_{i=1}^{k_B} \lambda_1 \max\left(0, 1 - \frac{d_{1, \pi_{1,i}^B}^{B}(x)}{\gamma_1}\right) + \lambda_\infty \max\left(0, 1 - \frac{d_{\infty, \pi_{\infty,i}^B}^{B}(x)}{\gamma_\infty}\right) \\ + \frac{1}{K-1} \sum_{i=1}^{K-1} \lambda_1 \max\left(0, 1 - \frac{d_{1, \pi_{1,i}^D}^{D}(x)}{\gamma_1}\right) + \lambda_\infty \max\left(0, 1 - \frac{d_{\infty, \pi_{\infty,i}^D}^{D}(x)}{\gamma_\infty}\right), \tag{9} \end{array}
+$$
+
+where $k_B \in \{1, \dots, N\}$ and $\lambda_1, \lambda_\infty, \gamma_1, \gamma_\infty > 0$ .
+
+We stress that, even if the formulation of MMR-Universal is based on MMR- $l_p$ , it is just thanks to the novel geometrical motivation provided by Theorem 3.1 and its interpretation in terms of robustness guarantees of Theorem 4.1 that we have a theoretical justification of MMR-Universal. Moreover, we are not aware of any other approach which can enforce simultaneously $l_1$ - and $l_{\infty}$ -guarantees, which is the key property of MMR-Universal.
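On precomputed distances, (9) reduces to averaged hinge terms over the $k_B$ (resp. $K-1$) smallest distances per norm. A minimal numpy sketch with illustrative inputs (in actual training the distances are differentiable functions of the network weights, so the penalty is backpropagated):

```python
import numpy as np

def hinge(d, gamma):
    """Elementwise max(0, 1 - d / gamma)."""
    return np.maximum(0.0, 1.0 - d / gamma)

def mmr_universal(d1_B, dinf_B, d1_D, dinf_D, kB, lam1, laminf, gamma1, gammainf):
    """Penalty (9) on precomputed distance arrays; each norm sorts independently,
    matching the separate orderings pi_{1,i} and pi_{inf,i}."""
    pen = lam1 * hinge(np.sort(d1_B)[:kB], gamma1).mean()
    pen += laminf * hinge(np.sort(dinf_B)[:kB], gammainf).mean()
    pen += lam1 * hinge(np.sort(d1_D), gamma1).mean()        # all K - 1 decision planes
    pen += laminf * hinge(np.sort(dinf_D), gammainf).mean()
    return pen

pen = mmr_universal(np.array([0.5, 2.0]), np.array([0.05, 0.2]),
                    np.array([0.5]), np.array([0.05]),
                    kB=2, lam1=1.0, laminf=1.0, gamma1=1.0, gammainf=0.1)
```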
+
+The loss function which is minimized while training the classifier $f$ is then, with $\{(x_i,y_i)\}_{i = 1}^T$ being the training set and CE the cross-entropy loss,
+
+$$
+L\left(\{(x_i, y_i)\}_{i=1}^T\right) = \frac{1}{T} \sum_{i=1}^T \operatorname{CE}(f(x_i), y_i) + \text{MMR-Universal}(x_i).
+$$
+
+During the optimization our regularizer aims at pushing both the polytope boundaries and the decision hyperplanes farther than $\gamma_{1}$ in $l_{1}$ -distance and farther than $\gamma_{\infty}$ in $l_{\infty}$ -distance from the training point $x$ , in order to achieve robustness close to or better than $\gamma_{1}$ and $\gamma_{\infty}$ respectively. According to Theorem 4.1, this also enhances the $l_{p}$ -robustness for $p \in (1,\infty)$ . Note that if the projection of $x$ onto a decision hyperplane does not lie inside $Q(x)$ , then $d_p^D(x)$ is just an approximation of the signed distance to the true decision surface, in which case Croce et al. (2019a) argue that it approximates the local Cross-Lipschitz constant, which is also associated with robustness (see Hein & Andriushchenko (2017)). The regularization parameters $\lambda_{1}$ and $\lambda_{\infty}$ are used to balance the weight of the $l_{1}$ - and $l_{\infty}$ -terms in the regularizer, and also wrt the cross-entropy loss. Note that the terms of MMR-Universal involving the quantities $d_{p,\pi_{p,i}^D}^{D}(x)$ penalize misclassification, as they take negative values in this case.
+
+Moreover, we take into account the $k_{B}$ closest hyperplanes and not just the closest one as done in Theorems 2.1 and 4.1. This has two reasons: first, in this way the regularizer enlarges the size of the linear regions around the training points more quickly and effectively, given the large number of hyperplanes defining each polytope. Second, pushing many hyperplanes also influences the neighboring linear regions of $Q(x)$ . This comes into play when, in order to get better bounds on the robustness at $x$ , one wants to explore also a portion of the input space outside of the linear region $Q(x)$ , which is where Theorem 4.1 holds. As noted in Raghunathan et al. (2018); Croce et al. (2019a); Xiao et al. (2019), established methods to compute lower bounds on the robustness are loose or completely fail when applied to normally trained models. In fact, their effectiveness is mostly related to how many ReLU units have stable sign when perturbing the input $x$ within a given $l_{p}$ -ball. This is almost equivalent to having the hyperplanes far from $x$ in $l_{p}$ -distance, which is what MMR-Universal tries to accomplish. This explains why in Section 5 we can certify the models trained with MMR-Universal with the methods of Wong & Kolter (2018) and Tjeng et al. (2019).
+
+# 5 EXPERIMENTS
+
+We compare the models obtained via our MMR-Universal regularizer to state-of-the-art methods for provable robustness and adversarial training. As evaluation criterion we use the robust test error, defined as the largest classification error when every image of the test set can be perturbed within a fixed set (e.g. an $l_p$ -ball of radius $\epsilon_p$ ). We focus on the $l_p$ -balls with $p \in \{1, 2, \infty\}$ . Since computing the robust test error is in general an NP-hard problem, we evaluate lower and upper bounds on it. The lower bound is the fraction of points for which an attack can change the decision with perturbations in the $l_p$ -ball of radius $\epsilon_p$ (adversarial examples), that is with $l_p$ -norm smaller than $\epsilon_p$ . For this task we use the PGD-attack (Kurakin et al. (2017); Madry et al. (2018); Tramèr & Boneh (2019)) and the FAB-attack (Croce & Hein (2019)) for $l_1$ , $l_2$ and $l_\infty$ , MIP (Tjeng et al. (2019)) for $l_\infty$ and the Linear Region Attack (Croce et al. (2019b)) for $l_2$ , and apply all of them (see C.3 for details). The upper bound is the fraction of test points for which we cannot certify, using the methods of Tjeng et al. (2019) and Wong & Kolter (2018), that no $l_p$ -perturbation smaller than $\epsilon_p$ can change the correct class of the original input.
+
+Smaller values of the upper bounds on the robust test error indicate models with better provable robustness. While lower bounds give an empirical estimate of the true robustness, it has been shown that they can heavily underestimate the vulnerability of classifiers (e.g. by Athalye et al. (2018); Mosbach et al. (2018)).
+
+# 5.1 CHOICE OF $\epsilon_{p}$
+
+In choosing the values of $\epsilon_{p}$ for $p\in \{1,2,\infty \}$ , we try to be consistent with previous literature (e.g. Wong & Kolter (2018); Croce et al. (2019a)) for the values of $\epsilon_{\infty}$ and $\epsilon_{2}$ . Equation (6) provides, given $\epsilon_{1}$ and $\epsilon_{\infty}$ , a value at which one can expect $l_{2}$ -robustness (approximately $\epsilon_{2} = \sqrt{\epsilon_{1}\epsilon_{\infty}}$ ). Then we fix $\epsilon_{1}$ such that this approximation is slightly larger than the desired $\epsilon_{2}$ . We show in Table 1 the values chosen for $\epsilon_{p}$ , $p\in \{1,2,\infty \}$ , and used to compute the robust test error in Table 2. Notice that for these values no $l_{p}$ -ball is contained in the others.
+
+Table 1: The values chosen for $\epsilon_{p}$ on the different datasets and the expected $l_{2}$ -robustness level (last column) given $\epsilon_{1}$ and $\epsilon_{\infty}$ , computed according to (6).
+
+| dataset | ε1 | ε∞ | ε2 | ε2 by (6) |
+| --- | --- | --- | --- | --- |
+| MNIST / F-MNIST | 1 | 0.1 | 0.3 | 0.3162 |
+| GTS | 3 | 4/255 | 0.2 | 0.2170 |
+| CIFAR-10 | 2 | 2/255 | 0.1 | 0.1252 |
+
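The last column of Table 1 can be reproduced by evaluating (6) with $p = q = 2$; a sketch, which also confirms the $\sqrt{\epsilon_1 \epsilon_\infty}$ approximation:

```python
import math

def expected_l2(eps1, epsinf):
    """l_2-radius implied by (6) with p = q = 2, given the l_1- and l_inf-radii."""
    t = eps1 / epsinf
    alpha = t - math.floor(t)
    return eps1 / math.sqrt(t - alpha + alpha**2)

vals = {name: expected_l2(e1, einf)
        for name, e1, einf in [("MNIST/F-MNIST", 1.0, 0.1),
                               ("GTS", 3.0, 4 / 255),
                               ("CIFAR-10", 2.0, 2 / 255)]}
# matches the last column of Table 1 and is close to sqrt(eps1 * epsinf)
```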
+Moreover, we compute for the plain models the percentage of adversarial examples given by an $l_{1}$ -attack (we use the PGD-attack) with budget $\epsilon_{1}$ which have also $l_{\infty}$ -norm smaller than or equal to $\epsilon_{\infty}$ , and vice versa. These percentages are zero for all the datasets, meaning that being (provably) robust in the union of these $l_{p}$ -balls is much more difficult than in just one of them (see also C.1).
+
+Table 2: We report, for the different datasets and training schemes, the test error (TE) and lower (LB) and upper (UB) bounds on the robust test error (in percentage) wrt the union of $l_{p}$ -norms for $p \in \{1,2,\infty\}$ denoted as $l_{1} + l_{2} + l_{\infty}$ (that is the largest test error possible if any perturbation in the union $l_{1} + l_{2} + l_{\infty}$ is allowed). The training schemes compared are plain training, adversarial training of Madry et al. (2018); Tramèr & Boneh (2019) (AT), robust training of Wong & Kolter (2018); Wong et al. (2018) (KW), MMR regularization of Croce et al. (2019a), MMR combined with AT (MMR+AT) and our MMR-Universal regularization. The models of our MMR-Universal are the only ones which have non-trivial upper bounds on the robust test error for all datasets.
+
+
+| model | MNIST: TE | LB | UB | F-MNIST: TE | LB | UB |
+| --- | --- | --- | --- | --- | --- | --- |
+| plain | 0.85 | 88.5 | 100 | 9.32 | 100 | 100 |
+| AT-l∞ | 0.82 | 4.7 | 100 | 11.54 | 26.3 | 100 |
+| AT-l2 | 0.87 | 25.9 | 100 | 8.10 | 98.8 | 100 |
+| AT-(l1,l2,l∞) | 0.80 | 4.9 | 100 | 14.13 | 29.6 | 100 |
+| KW-l∞ | 1.21 | 4.8 | 100 | 21.73 | 43.6 | 100 |
+| KW-l2 | 1.11 | 10.3 | 100 | 13.08 | 66.7 | 86.8 |
+| MMR-l∞ | 1.65 | 10.4 | 100 | 14.51 | 36.7 | 100 |
+| MMR-l2 | 2.57 | 78.6 | 99.9 | 12.85 | 95.8 | 100 |
+| MMR+AT-l∞ | 1.19 | 4.1 | 100 | 14.52 | 31.8 | 100 |
+| MMR+AT-l2 | 1.73 | 15.3 | 99.9 | 13.40 | 66.5 | 99.1 |
+| MMR-Universal | 3.04 | 12.4 | 20.8 | 18.57 | 43.5 | 52.9 |
+
+| model | GTS: TE | LB | UB | CIFAR-10: TE | LB | UB |
+| --- | --- | --- | --- | --- | --- | --- |
+| plain | 6.77 | 71.5 | 100 | 23.29 | 88.6 | 100 |
+| AT-l∞ | 6.83 | 64.0 | 100 | 27.06 | 52.5 | 100 |
+| AT-l2 | 8.76 | 59.0 | 100 | 25.84 | 62.1 | 100 |
+| AT-(l1,l2,l∞) | 8.80 | 45.2 | 100 | 35.41 | 57.1 | 100 |
+| KW-l∞ | 15.57 | 87.8 | 100 | 38.91 | 51.9 | 100 |
+| KW-l2 | 14.35 | 57.6 | 100 | 40.24 | 54.0 | 100 |
+| MMR-l∞ | 13.32 | 71.3 | 99.6 | 34.61 | 58.7 | 100 |
+| MMR-l2 | 14.21 | 62.6 | 80.9 | 40.93 | 72.9 | 98.0 |
+| MMR+AT-l∞ | 14.89 | 82.8 | 100 | 35.38 | 50.8 | 100 |
+| MMR+AT-l2 | 15.34 | 58.1 | 84.8 | 37.78 | 61.3 | 99.9 |
+| MMR-Universal | 15.98 | 51.6 | 52.4 | 46.96 | 63.8 | 64.6 |
+
+# 5.2 MAIN RESULTS
+
+We train CNNs on MNIST, Fashion-MNIST (Xiao et al. (2017)), German Traffic Sign (GTS) (Stallkamp et al. (2012)) and CIFAR-10 (Krizhevsky et al. (2014)). We consider several training schemes: plain training, the PGD-based adversarial training (AT) of Madry et al. (2018) and its extension to multiple $l_{p}$ -balls in Tramèr & Boneh (2019), the robust training (KW) of Wong & Kolter (2018); Wong et al. (2018), the MMR-regularized training (MMR) of Croce et al. (2019a), either alone or combined with adversarial training (MMR+AT), and the training with our regularizer MMR-Universal. We use AT, KW, MMR and MMR+AT wrt $l_{2}$ and $l_{\infty}$ , as these are the norms for which such methods have been used in the original papers. More details about the architecture and the models are given in C.3.
+
+In Table 2 we report test error (TE) computed on the whole test set and lower (LB) and upper (UB) bounds on the robust test error obtained considering the union of the three $l_{p}$ -balls, indicated by $l_{1} + l_{2} + l_{\infty}$ (these statistics are on the first 1000 points of the test set). The lower bounds $l_{1} + l_{2} + l_{\infty}$ -LB are given by the fraction of test points for which one of the adversarial attacks wrt $l_{1}$ , $l_{2}$ and $l_{\infty}$ is successful. The upper bounds $l_{1} + l_{2} + l_{\infty}$ -UB are computed as the percentage of points for which at least one of the three $l_{p}$ -balls is not certified to be free of adversarial examples (lower is better). This last one is the metric of main interest, since we aim at universally provably robust models. In C.2 we report the lower and upper bounds for the individual norms for every model.
+
+MMR-Universal is the only method which can give non-trivial upper bounds on the robust test error for all datasets, while almost all other methods aiming at provable robustness have $l_{1} + l_{2} + l_{\infty}$ -UB close to or at $100\%$ . Notably, on GTS the upper bound on the robust test error of MMR-Universal is lower than the lower bound of all other methods except AT- $(l_{1}, l_{2}, l_{\infty})$ , showing that MMR-Universal provably outperforms existing methods which provide guarantees wrt individual $l_{p}$ -balls, either $l_{2}$ or $l_{\infty}$ , when certifying the union $l_{1} + l_{2} + l_{\infty}$ . The test error is slightly increased wrt the other methods giving provable robustness, but the same holds true for combined adversarial training AT- $(l_{1}, l_{2}, l_{\infty})$ compared to standard adversarial training AT- $l_{2} / l_{\infty}$ . We conclude that MMR-Universal is the only method so far being able to provide non-trivial robustness guarantees for multiple $l_{p}$ -balls in the case that none of them contains any other.
+
+# 6 CONCLUSION
+
+With MMR-Universal we have proposed the first method providing provable robustness guarantees for all $l_p$ -balls with $p \geq 1$ . Compared to existing works guaranteeing robustness wrt either $l_2$ or $l_{\infty}$ , providing guarantees wrt the union of different $l_p$ -balls turns out to be considerably harder. It is an interesting open question if the ideas developed in this paper can be integrated into other approaches towards provable robustness.
+
+# ACKNOWLEDGEMENTS
+
+We would like to thank Maksym Andriushchenko for helping us to set up and adapt the original code for MMR. We acknowledge support from the German Federal Ministry of Education and Research (BMBF) through the Tübingen AI Center (FKZ: 01IS18039A). This work was also supported by the DFG Cluster of Excellence "Machine Learning - New Perspectives for Science", EXC 2064/1, project number 390727645, and by DFG grant 389792660 as part of TRR 248.
+
+# REFERENCES
+
+R. Arora, A. Basu, P. Mianjy, and A. Mukherjee. Understanding deep neural networks with rectified linear units. In ICLR, 2018.
+A. Athalye, N. Carlini, and D. A. Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In ICML, 2018.
+O. Bastani, Y. Ioannou, L. Lampropoulos, D. Vytiniotis, A. Nori, and A. Criminisi. Measuring neural net robustness with constraints. In NIPS, 2016.
+T. B. Brown, D. Mané, A. Roy, M. Abadi, and J. Gilmer. Adversarial patch. In NIPS 2017 Workshop on Machine Learning and Computer Security, 2017.
+N. Carlini and D. Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In ACM Workshop on Artificial Intelligence and Security, 2017.
+F. Croce and M. Hein. A randomized gradient-free attack on relu networks. In GCPR, 2018.
+F. Croce and M. Hein. Minimally distorted adversarial examples with a fast adaptive boundary attack. preprint, arXiv:1907.02044, 2019.
+F. Croce, M. Andriushchenko, and M. Hein. Provable robustness of relu networks via maximization of linear regions. In AISTATS, 2019a.
+F. Croce, J. Rauber, and M. Hein. Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks. International Journal of Computer Vision, 1-19, 2019b.
+L. Engstrom, B. Tran, D. Tsipras, L. Schmidt, and A. Madry. A rotation and a translation suffice: Fooling CNNs with simple transformations. In NIPS 2017 Workshop on Machine Learning and Computer Security, 2017.
+
+R. Geirhos, P. Rubisch, C. Michaelis, M. Bethge, F. A. Wichmann, and W. Brendel. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. In ICLR, 2019.
+I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.
+S. Gowal, K. Dvijotham, R. Stanforth, R. Bunel, C. Qin, J. Uesato, R. Arandjelovic, T. A. Mann, and P. Kohli. On the effectiveness of interval bound propagation for training verifiably robust models. preprint, arXiv:1810.12715v3, 2018.
+S. Gu and L. Rigazio. Towards deep neural network architectures robust to adversarial examples. In ICLR Workshop, 2015.
+M. Hein and M. Andriushchenko. Formal guarantees on the robustness of a classifier against adversarial manipulation. In NIPS, 2017.
+D. Hendrycks and T. Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In ICLR, 2019.
+R. Huang, B. Xu, D. Schuurmans, and C. Szepesvari. Learning with a strong adversary. In ICLR, 2016.
+D. Kang, Y. Sun, T. Brown, D. Hendrycks, and J. Steinhardt. Transfer of adversarial robustness between perturbation types. preprint, arXiv:1905.01034, 2019.
+G. Katz, C. Barrett, D. Dill, K. Julian, and M. Kochenderfer. Reluplex: An efficient smt solver for verifying deep neural networks. In CAV, 2017.
+D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. preprint, arXiv:1412.6980, 2014.
+A. Krizhevsky, V. Nair, and G. Hinton. Cifar-10 (canadian institute for advanced research). 2014. URL http://www.cs.toronto.edu/~kriz/cifar.html.
+A. Kurakin, I. J. Goodfellow, and S. Bengio. Adversarial examples in the physical world. In ICLR Workshop, 2017.
+A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR, 2018.
+M. Mirman, T. Gehr, and M. Vechev. Differentiable abstract interpretation for provably robust neural networks. In ICML, 2018.
+M. Mosbach, M. Andriushchenko, T. Trost, M. Hein, and D. Klakow. Logit pairing methods can fool gradient-based attacks. In NeurIPS 2018 Workshop on Security in Machine Learning, 2018.
+N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami. Distillation as a defense to adversarial perturbations against deep networks. In IEEE Symposium on Security & Privacy, 2016.
+A. Raghunathan, J. Steinhardt, and P. Liang. Certified defenses against adversarial examples. In ICLR, 2018.
+L. Schott, J. Rauber, M. Bethge, and W. Brendel. Towards the first adversarially robust neural network model on MNIST. In ICLR, 2019.
+Y. Sharma and P. Chen. Attacking the madry defense model with $l_{1}$ -based adversarial examples. In ICLR Workshop, 2019.
+J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Networks, 32:323-332, 2012.
+
+C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In ICLR, 2014.
+V. Tjeng, K. Xiao, and R. Tedrake. Evaluating robustness of neural networks with mixed integer programming. In ICLR, 2019.
+F. Tramèr and D. Boneh. Adversarial training and robustness for multiple perturbations. In NeurIPS, 2019.
+E. Wong and J. Z. Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In ICML, 2018.
+E. Wong, F. Schmidt, J. H. Metzen, and J. Z. Kolter. Scaling provable adversarial defenses. In NeurIPS, 2018.
+H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. preprint, arXiv:1708.07747, 2017.
+K. Y. Xiao, V. Tjeng, N. M. Shafiullah, and A. Madry. Training for faster adversarial robustness verification via inducing relu stability. In ICLR, 2019.
+S. Zheng, Y. Song, T. Leung, and I. J. Goodfellow. Improving the robustness of deep neural networks via stability training. In CVPR, 2016.
+
+A MINIMAL $l_{p}$ -NORM OF THE COMPLEMENT OF THE UNION OF $l_{1}$ - AND $l_{\infty}$ -BALL AND ITS CONVEX HULL
+
+# A.1 PROOF OF PROPOSITION 3.1
+
+Proof. We first note that for $\epsilon_1 < \epsilon_{\infty}$ it holds $B_{1} \subset B_{\infty}$ and the proof follows from the standard inequality $\|x\|_p \geq \|x\|_{\infty}$ where equality is attained for $x = \epsilon_{\infty}e_i$ , where $e_i$ are standard basis vectors. Moreover, if $\epsilon_1 > d\epsilon_{\infty}$ it holds $B_{\infty} \subset B_{1}$ as $\max_{x \in B_{\infty}} \|x\|_1 = d\epsilon_{\infty}$ and the result follows by $\|x\|_p \geq \|x\|_1 d^{\frac{1 - p}{p}}$ . The equality is realized by the vector with all the entries equal to $\frac{\epsilon_1}{d}$ .
+
+For the second case we first note that using Hölder inequality $|\langle u,v\rangle |\leq \| u\| _p\| v\| _q$ where $\frac{1}{p} +\frac{1}{q} = 1$ , it holds
+
+$$
+\sum_ {i = 1} ^ {k} | x _ {i} | \leq \left(\sum_ {i = 1} ^ {k} | x _ {i} | ^ {p}\right) ^ {\frac {1}{p}} k ^ {\frac {1}{q}}.
+$$
+
+Let $x \in \mathbb{R}^d$ . Without loss of generality after a potential permutation of the coordinates it holds $|x_d| = \| x\|_{\infty}$ . Then we get
+
+$$
+\| x \| _ {p} ^ {p} = \sum_ {i = 1} ^ {d} | x _ {i} | ^ {p} = | x _ {d} | ^ {p} + \sum_ {i = 1} ^ {d - 1} | x _ {i} | ^ {p} \geq | x _ {d} | ^ {p} + \frac {\left(\sum_ {i = 1} ^ {d - 1} | x _ {i} |\right) ^ {p}}{\left(d - 1\right) ^ {\frac {p}{q}}}.
+$$
+
+We have
+
+$$
+\min _ {\| x \| _ {\infty} \geq \epsilon_ {\infty}, \| x \| _ {1} \geq \epsilon_ {1}} | x _ {d} | ^ {p} + \frac {\left(\sum_ {i = 1} ^ {d - 1} | x _ {i} |\right) ^ {p}}{(d - 1) ^ {\frac {p}{q}}} = \epsilon_ {\infty} ^ {p} + \frac {(\epsilon_ {1} - \epsilon_ {\infty}) ^ {p}}{(d - 1) ^ {p - 1}},
+$$
+
+noting that $|x_{d}| = \| x\|_{\infty}$ and $\sum_{i = 1}^{d - 1}|x_i|\geq \epsilon_1 - \epsilon_\infty$ . Finally, we note that the vector
+
+$$
+v = \sum_ {i = 1} ^ {d - 1} \frac {\epsilon_ {1} - \epsilon_ {\infty}}{d - 1} e _ {i} + \epsilon_ {\infty} e _ {d},
+$$
+
+realizes equality. Indeed, $\| v\| _p^p = (d - 1)\frac{(\epsilon_1 - \epsilon_\infty)^p}{(d - 1)^p} +\epsilon_\infty^p$ , which finishes the proof.
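
In the intermediate case $\epsilon_\infty \leq \epsilon_1 \leq d\,\epsilon_\infty$, the closed-form bound and its minimizer can be cross-checked numerically. A minimal sketch (the dimension and thresholds are chosen only for illustration):

```python
import numpy as np

def union_bound(d, eps1, epsinf, p):
    """Minimal l_p-norm over R^d \\ (B_1 U B_inf) from Proposition 3.1,
    valid for eps_inf <= eps1 <= d * eps_inf."""
    return (epsinf**p + (eps1 - epsinf)**p / (d - 1)**(p - 1)) ** (1 / p)

d, eps1, epsinf, p = 784, 1.0, 0.1, 2
# the minimizer: d-1 coordinates share the remaining l_1-mass,
# one coordinate attains the l_inf-threshold
v = np.full(d, (eps1 - epsinf) / (d - 1))
v[-1] = epsinf
assert np.isclose(np.linalg.norm(v, 1), eps1)          # on the boundary of B_1
assert np.isclose(np.linalg.norm(v, np.inf), epsinf)   # on the boundary of B_inf
assert np.isclose(np.linalg.norm(v, p), union_bound(d, eps1, epsinf, p))
```

For these MNIST-like values the guaranteed $l_2$-radius is only slightly larger than $\epsilon_\infty$, which illustrates why the convex hull analyzed in Appendix A.3 is preferable.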
+
+
+
+# A.2 PROOF OF THEOREM 3.1
+
+Proof. We first note that the minimum of the $l_p$ -norm over $\mathbb{R}^d \setminus C$ lies on the boundary of $C$ (otherwise any point on the segment joining the origin and $y$ and outside $C$ would have $l_p$ -norm smaller than $y$ ). Moreover, the faces of $C$ are contained in hyperplanes constructed as the affine hull of a subset of $d$ points from the union of the vertices of $B_1$ and $B_\infty$ .
+
+The vertices of $B_{1}$ are $V_{1} = \{\epsilon_{1}e_{i}, -\epsilon_{1}e_{i} \mid i = 1, \dots, d\}$ , where $e_{i}$ is the $i$ -th element of the standard basis of $\mathbb{R}^d$ , and that of $B_{\infty}$ are $V_{\infty}$ , consisting of the $2^{d}$ vectors whose components are elements of $\{\epsilon_{\infty}, -\epsilon_{\infty}\}$ . Note that $V_{1} \cap V_{\infty} = \emptyset$ . Any subset of $d$ vertices from $V_{1} \cup V_{\infty}$ defines a hyperplane which contains a face of $C$ if it does not contain any point of the interior of $C$ .
+
+Let $S$ be a set of vertices defining a hyperplane containing a face of $C$ . We first derive conditions on the vertices contained in $S$ . Let $k = \left\lfloor \frac{\epsilon_1}{\epsilon_\infty} \right\rfloor \in \mathbb{N}$ and $\alpha = \frac{\epsilon_1}{\epsilon_\infty} - k \in [0,1)$ . Note that $k + 1 > \frac{\epsilon_1}{\epsilon_\infty}$ . Then no more than $k$ vertices of $B_1$ belong to $S$ , that is to a face of $C$ . In fact, if we consider $k + 1$ vertices of $B_1$ , namely wlog $\{\epsilon_1 e_1, \dots, \epsilon_1 e_{k+1}\}$ , and consider their convex combination $z = \epsilon_1 \sum_{i=1}^{k+1} \frac{1}{k+1} e_i$ then $\|z\|_\infty = \frac{\epsilon_1}{k+1} < \epsilon_\infty$ by the definition of $k$ . Thus $S$ cannot contain more than $k$ vertices of $B_1$ .
+
+Second, assume $\epsilon_{1}e_{j}$ is in $S$ . If any vertex $v$ of $B_{\infty}$ with $v_{j} = -\epsilon_{\infty}$ is also in $S$ then, with $\alpha' = \frac{\epsilon_{\infty}}{\epsilon_{1} + \epsilon_{\infty}} \in (0,1)$ , we get
+
+$$
+\left\| \alpha^ {\prime} \epsilon_ {1} e _ {j} + (1 - \alpha^ {\prime}) v \right\| _ {\infty} = \max \left\{\left| \alpha^ {\prime} \epsilon_ {1} - (1 - \alpha^ {\prime}) \epsilon_ {\infty} \right|, (1 - \alpha^ {\prime}) \epsilon_ {\infty} \right\},
+$$
+
+where $(1 - \alpha^{\prime})\epsilon_{\infty} < \epsilon_{\infty}$ and
+
+$$
+\left| \alpha^ {\prime} \epsilon_ {1} - \left(1 - \alpha^ {\prime}\right) \epsilon_ {\infty} \right| = \left| \alpha^ {\prime} \left(\epsilon_ {1} + \epsilon_ {\infty}\right) - \epsilon_ {\infty} \right| = 0 < \epsilon_ {\infty}.
+$$
+
+Thus $S$ would not span a face as a convex combination intersects the interior of $C$ . This implies that if $\epsilon_1 e_j$ is in $S$ then all the vertices $v$ of $B_{\infty}$ in $S$ need to have $v_j = \epsilon_{\infty}$ , otherwise $S$ would not define a face of $C$ . Analogously, if $-\epsilon_1 e_j \in S$ then any vertex $v$ of $B_{\infty}$ in $S$ has $v_j = -\epsilon_{\infty}$ . However, we note that out of symmetry reasons we can just consider faces of $C$ in the positive orthant and thus we consider in the following just sets $S$ which contain vertices of "positive type" $\epsilon_1 e_j$ .
+
+Let now $S$ be a set (not necessarily defining a face of $C$ ) containing $h \leq k$ vertices of $B_{1}$ and $d - h$ vertices of $B_{\infty}$ and $P$ the matrix whose columns are these points. The matrix $P$ has the form
+
+$$
+P = \left( \begin{array}{cccc|ccc} \epsilon_1 & 0 & \ldots & 0 & \epsilon_\infty & \ldots & \epsilon_\infty \\ 0 & \epsilon_1 & \ldots & 0 & \epsilon_\infty & \ldots & \epsilon_\infty \\ \vdots & & \ddots & \vdots & \vdots & & \vdots \\ 0 & \ldots & 0 & \epsilon_1 & \epsilon_\infty & \ldots & \epsilon_\infty \\ \hline 0 & \ldots & & 0 & & & \\ \vdots & & & \vdots & & A & \\ 0 & \ldots & & 0 & & & \end{array} \right)
+$$
+
+where $A \in \mathbb{R}^{d - h, d - h}$ is a matrix whose entries are either $\epsilon_{\infty}$ or $-\epsilon_{\infty}$ . If the matrix $P$ does not have full rank then the origin belongs to any hyperplane containing $S$ , which means it cannot be a face of $C$ . This also implies $A$ has full rank if $S$ spans a face of $C$ .
+
+We denote by $\pi$ the hyperplane generated by the affine hull of $S$ (the columns of $P$ ) assuming that $A$ has full rank. Every point $b$ belonging to the hyperplane $\pi$ generated by $S$ is such that there exists a unique $a \in \mathbb{R}^d$ which satisfies
+
+$$
+P' a = \left( \begin{array}{c} \mathbf{1}_{1, d} \\ P \end{array} \right) a = \left( \begin{array}{cccc|ccc} 1 & 1 & \ldots & 1 & 1 & \ldots & 1 \\ \hline \epsilon_1 & 0 & \ldots & 0 & \epsilon_\infty & \ldots & \epsilon_\infty \\ 0 & \epsilon_1 & \ldots & 0 & \epsilon_\infty & \ldots & \epsilon_\infty \\ \vdots & & \ddots & \vdots & \vdots & & \vdots \\ 0 & \ldots & 0 & \epsilon_1 & \epsilon_\infty & \ldots & \epsilon_\infty \\ \hline 0 & \ldots & & 0 & & & \\ \vdots & & & \vdots & & A & \\ 0 & \ldots & & 0 & & & \end{array} \right) a = \left( \begin{array}{c} 1 \\ b \end{array} \right) = b',
+$$
+
+where $\mathbf{1}_{d_1,d_2}$ is the matrix of size $d_{1} \times d_{2}$ whose entries are 1.
+
+Since $b$ belongs to $\pi$, the vector $b'$ is a linear combination of the columns of $P'$, so the matrix $(P', b') \in \mathbb{R}^{d+1, d+1}$ does not have full rank. Hence
+
+$$
+\operatorname{rank} P' = \operatorname{rank} \left(P', b'\right) = d,
+$$
+
+and then the linear system $P'a = b'$ has a unique solution $a \in \mathbb{R}^d$.
+
+We define the vector $v \in \mathbb{R}^d$ as solution of $P^T v = \mathbf{1}_{d,1}$ , which is unique as $P$ has full rank.
+
+From their definitions we have $P a = b$ and $\mathbf{1}^T a = 1$ , so that
+
+$$
+1 = \mathbf {1} ^ {T} a = (P ^ {T} v) ^ {T} a = v ^ {T} P a = v ^ {T} b,
+$$
+
+and thus
+
+$$
+\langle b, v \rangle = 1, \tag {10}
+$$
+
+noticing that this also implies that any vector $b \in \mathbb{R}^d$ such that $\langle b, v \rangle = 1$ belongs to $\pi$ (suppose that $\exists q \notin \pi$ with $\langle q, v \rangle = 1$ , then define $c$ as the solution of $Pc = q$ and then $1 = \langle q, v \rangle = \langle Pc, v \rangle = \langle c, P^T v \rangle = \langle c, 1 \rangle$ which contradicts that $q \notin \pi$ ).
+
+Applying Hölder inequality to (10) we get for any $b \in \pi$ ,
+
+$$
+\left\| b \right\| _ {p} \geq \frac {1}{\left\| v \right\| _ {q}}, \tag {11}
+$$
+
+where $\frac{1}{p} + \frac{1}{q} = 1$. Moreover, as $p \in (1, \infty)$, there always exists a point $b^{*}$ for which (11) holds with equality.
+
+In the rest of the proof we compute $\| v\| _q$ for any $q > 1$ when $S$ is a face of $C$ and then (11) yields the desired minimal value of $\| b\| _p$ over all $b$ lying in faces of $C$ .
+
+Let $v = (v_{1}, v_{2})$ , $v_{1} \in \mathbb{R}^{h}$ , $v_{2} \in \mathbb{R}^{d - h}$ and $I_{h}$ denotes the identity matrix of $\mathbb{R}^{h,h}$ . It holds
+
+$$
+P ^ {T} v = \left( \begin{array}{c c} \epsilon_ {1} I _ {h} & 0 \\ \epsilon_ {\infty} \mathbf {1} _ {d - h, h} & A ^ {T} \end{array} \right) \left( \begin{array}{c} v _ {1} \\ v _ {2} \end{array} \right) = \left( \begin{array}{c} \mathbf {1} _ {h, 1} \\ \mathbf {1} _ {d - h, 1} \end{array} \right) = \mathbf {1} _ {d, 1},
+$$
+
+which implies
+
+$$
+v _ {1} = \left(\frac {1}{\epsilon_ {1}}, \dots , \frac {1}{\epsilon_ {1}}\right) \quad \text {a n d} \quad A ^ {T} v _ {2} = \mathbf {1} _ {d - h, 1} - \frac {h \epsilon_ {\infty}}{\epsilon_ {1}} \mathbf {1} _ {d - h, 1} = \left(1 - h \frac {\epsilon_ {\infty}}{\epsilon_ {1}}\right) \mathbf {1} _ {d - h, 1}.
+$$
+
+Moreover, we have
+
+$$
+\left\| v \right\| _ {1} = \left\| v _ {1} \right\| _ {1} + \left\| v _ {2} \right\| _ {1} = \frac {h}{\epsilon_ {1}} + \left\| v _ {2} \right\| _ {1}, \quad \left\| v \right\| _ {\infty} = \max \left\{\frac {1}{\epsilon_ {1}}, \left\| v _ {2} \right\| _ {\infty} \right\}. \tag {12}
+$$
+
+If $S$ generates a face, then by definition $\pi$ does not intersect the interior of $C$ and thus for all $b \in \pi$ it holds $\|b\|_1 \geq \epsilon_1$ and $\|b\|_{\infty} \geq \epsilon_{\infty}$. Suppose $\|v\|_1 = c > \frac{1}{\epsilon_{\infty}}$. Then there exists $b^* \in \pi$ for which equality in Hölder's inequality is attained, that is $1 = \langle b^*, v \rangle = \|b^*\|_{\infty} \|v\|_1$, and thus $\|b^*\|_{\infty} = \frac{1}{c} < \epsilon_{\infty}$, which contradicts $\|b\|_{\infty} \geq \epsilon_{\infty}$ for all $b \in \pi$; hence it must hold $\|v\|_1 \leq \frac{1}{\epsilon_{\infty}}$. Similarly, one can derive $\|v\|_{\infty} \leq \frac{1}{\epsilon_1}$. Combining (12) with the just derived inequalities we get upper bounds on the norms of $v_2$,
+
+$$
+\left\| v _ {2} \right\| _ {1} \leq \frac {1}{\epsilon_ {\infty}} - \frac {h}{\epsilon_ {1}} \quad \text {a n d} \quad \left\| v _ {2} \right\| _ {\infty} \leq \frac {1}{\epsilon_ {1}}. \tag {13}
+$$
+
+Furthermore $v_{2}$ is defined as the solution of
+
+$$
+\frac {A ^ {T}}{\epsilon_ {\infty}} v _ {2} = \left(\frac {1}{\epsilon_ {\infty}} - \frac {h}{\epsilon_ {1}}\right) \mathbf {1} _ {d - h, 1}.
+$$
+
+We note that all the entries of $\frac{A^T}{\epsilon_\infty}$ are either 1 or -1, so that the inner product between each row of $A^T$ and $v_{2}$ is a lower bound on the $l_{1}$ -norm of $v_{2}$ . Since every entry of the r.h.s. of the linear system is $\frac{1}{\epsilon_\infty} - \frac{h}{\epsilon_1}$ we get $\| v_{2}\|_{1} \geq \frac{1}{\epsilon_{\infty}} - \frac{h}{\epsilon_{1}}$ , which combined with (13) leads to $\| v_{2}\|_{1} = \frac{1}{\epsilon_{\infty}} - \frac{h}{\epsilon_{1}}$ .
+
+This implies that $\frac{A^T}{\epsilon_\infty} v_2 = \|v_2\|_1 \mathbf{1}_{d-h,1}$, that is each row of $\frac{A^T}{\epsilon_\infty}$ attains equality $\langle u, v_2 \rangle = \|v_2\|_1$. For a vector $u$ with entries in $\{-1, 1\}$ this requires $u_i = \operatorname{sgn}((v_2)_i)$ for every $(v_2)_i \neq 0$. If at least two components of $v_2$ were non-zero, the corresponding columns of $A^T$ would be constant vectors with entries $\pm\epsilon_\infty$ and hence linearly dependent, which contradicts the fact that $A^T$ has full rank. Thus $v_2$ can only have one non-zero component, which in absolute value equals $\frac{1}{\epsilon_\infty} - \frac{h}{\epsilon_1}$. Thus, after a potential reordering of the components, $v$ has the form
+
+$$
+v = \left(\underbrace {\frac {1}{\epsilon_ {1}} , \ldots , \frac {1}{\epsilon_ {1}}} _ {h \text {t i m e s}}, \frac {1}{\epsilon_ {\infty}} - \frac {h}{\epsilon_ {1}}, 0, \ldots , 0\right).
+$$
+
+From the second condition in (13), we have $\frac{1}{\epsilon_{\infty}} - \frac{h}{\epsilon_1} \leq \frac{1}{\epsilon_1}$ and $h + 1 \geq \frac{\epsilon_1}{\epsilon_{\infty}} = k + \alpha$ . Recalling $h \leq k$ , we have
+
+$$
+h \in [ k + \alpha - 1, k ] \cap \mathbb {N}.
+$$
+
+This means that, in order for $S$ to define a face of $C$ , we need $h = k$ if $\alpha > 0$ , $h \in \{k - 1, k\}$ if $\alpha = 0$ (in this case choosing $h = k - 1$ or $h = k$ leads to the same $v$ , so in practice it is possible to use simply $h = k$ for any $\alpha$ ).
+
+Once we have determined $v$ , we can use again (10) and (11) to see that
+
+$$
+\left\| b \right\| _ {p} \geq \frac {1}{\left\| v \right\| _ {q}} = \frac {1}{\left(\frac {k}{\epsilon_ {1} ^ {q}} + \left(\frac {1}{\epsilon_ {\infty}} - \frac {k}{\epsilon_ {1}}\right) ^ {q}\right) ^ {\frac {1}{q}}} = \frac {\epsilon_ {1}}{\left(\epsilon_ {1} / \epsilon_ {\infty} - \alpha + \alpha^ {q}\right) ^ {1 / q}}. \tag {14}
+$$
+
+Finally, for any $v$ there exists $b^{*} \in \pi$ for which equality is achieved in (14). Suppose that this $b^{*}$ does not lie in a face of $C$ . Then one could just consider the line segment from the origin to $b^{*}$ and the point intersecting the boundary of $C$ would have smaller $l_{p}$ -norm contradicting the just derived inequality. Thus the $b^{*}$ realizing equality in (14) lies in a face of $C$ .
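
The construction above can be sanity-checked numerically on a small example: assemble $P^T$ in the block form used in the proof with an illustrative full-rank choice of $A$ (one row of $A$ all $\epsilon_\infty$, matching the sign pattern required for $v_2$), solve $P^T v = \mathbf{1}$, and compare with the closed-form $v$ and the bound (14). The specific numbers below are arbitrary:

```python
import numpy as np

eps1, epsinf, d = 1.0, 0.3, 6
k = int(np.floor(eps1 / epsinf))    # k = 3
alpha = eps1 / epsinf - k           # alpha = 1/3
h, m = k, d - k                     # h vertices of B_1, m = d - h of B_inf

# A^T with entries +-eps_inf: first column all eps_inf (so that the single
# non-zero entry of v_2 works out), remaining signs chosen to keep full rank
AT = epsinf * np.array([[1.0, 1, 1],
                        [1, -1, 1],
                        [1, 1, -1]])
PT = np.block([[eps1 * np.eye(h), np.zeros((h, m))],
               [epsinf * np.ones((m, h)), AT]])

v = np.linalg.solve(PT, np.ones(d))
v_closed = np.concatenate([np.full(h, 1 / eps1),
                           [1 / epsinf - h / eps1],
                           np.zeros(m - 1)])
assert np.allclose(v, v_closed)

q = 2  # Hoelder conjugate of p = 2
bound_14 = eps1 / (eps1 / epsinf - alpha + alpha**q) ** (1 / q)
assert np.isclose(1 / np.linalg.norm(v, q), bound_14)
```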
+
+# A.3 COMPARISON OF THE ROBUSTNESS GUARANTEE FOR THE UNION OF $B_{1}$ AND $B_{\infty}$ IN (5) AND THE CONVEX HULL OF $B_{1}$ AND $B_{\infty}$ IN (6)
+
+We compare the robustness guarantees obtained by considering the union of $B_{1}$ and $B_{\infty}$ (denoted by $b_{U}$ in the following), see (5), and the convex hull of $B_{1}$ and $B_{\infty}$ (denoted by $b_{C}$ in the following), see (6). In particular, we want to determine the ratio $\delta = \frac{\epsilon_1}{\epsilon_\infty} \in [1,d]$ for which the gain in the robustness guarantee $b_{C}$ for the convex hull is maximized compared to just considering the robustness guarantee $b_{U}$ as a function of the dimension $d$ of the input space. We restrict here the analysis to the case of $p = 2$ , that is computing the radius of the largest $l_{2}$ -ball fitting inside $U_{1,\infty}$ or its convex hull $C$ . Let us denote
+
+$$
+b _ {U} = \min _ {x \in \mathbb {R} ^ {d} \backslash U _ {1, \infty}} \| x \| _ {p} = \left(\epsilon_ {\infty} ^ {p} + \frac {(\epsilon_ {1} - \epsilon_ {\infty}) ^ {p}}{(d - 1) ^ {p - 1}}\right) ^ {\frac {1}{p}}, b _ {C} = \min _ {x \in \mathbb {R} ^ {d} \backslash C} \| x \| _ {p} = \frac {\epsilon_ {1}}{(\epsilon_ {1} / \epsilon_ {\infty} - \alpha + \alpha^ {q}) ^ {1 / q}}
+$$
+
+the two bounds from (5) and (6) respectively, which can be rewritten as
+
+$$
+b_U(\delta) = \epsilon_\infty \left(1 + \frac{(\delta - 1)^p}{(d - 1)^{p - 1}}\right)^{\frac{1}{p}}, \qquad b_C(\delta) = \frac{\epsilon_\infty \delta}{(\delta - \alpha + \alpha^q)^{1/q}},
+$$
+
+where $\alpha = \delta - \lfloor \delta \rfloor$. We note that
+
+$$
+\delta - 1 \leq \delta - \alpha + \alpha^ {q} = \lfloor \delta \rfloor + (\delta - \lfloor \delta \rfloor) ^ {q} \leq \delta .
+$$
+
+As the differences are very small, we use instead the lower bound
+
+$$
+b _ {C} ^ {*} (\delta) = \frac {\epsilon_ {\infty} \delta}{(\delta) ^ {1 / q}} = \epsilon_ {\infty} \delta^ {\frac {1}{p}}.
+$$
+
+We want to find, as a function of the input dimension $d$, the value $\delta^{*}$ which maximizes $\frac{b_C^*}{b_U} (\delta)$ (a numerical evaluation is presented in Figure 2). Notice first that $\delta^{*}$ also maximizes
+
+$$
+\frac {\left(b _ {C} ^ {*} (\delta)\right) ^ {p}}{\left(b _ {U} (\delta)\right) ^ {p}} = \delta \left(1 + \frac {(\delta - 1) ^ {p}}{(d - 1) ^ {p - 1}}\right) ^ {- 1} \geq 1
+$$
+
+and can be found as the solution of
+
+$$
+\begin{array}{l} \frac{\partial}{\partial \delta} \frac{\left(b_C^*(\delta)\right)^p}{\left(b_U(\delta)\right)^p} = \frac{\partial}{\partial \delta} \left[ \delta \left(1 + \frac{(\delta - 1)^p}{(d - 1)^{p - 1}}\right)^{-1} \right] \\ = \left(1 + \frac{(\delta - 1)^p}{(d - 1)^{p - 1}}\right)^{-1} - \delta \left(1 + \frac{(\delta - 1)^p}{(d - 1)^{p - 1}}\right)^{-2} \frac{p (\delta - 1)^{p - 1}}{(d - 1)^{p - 1}} = 0, \end{array}
+$$
+
+which is equivalent to
+
+$$
+(d - 1) ^ {p - 1} + (\delta - 1) ^ {p} - p \delta (\delta - 1) ^ {p - 1} = 0.
+$$
+
+Restricting the analysis to $p = 2$ for simplicity, we get
+
+$$
+d - 1 + (\delta - 1) ^ {2} - 2 \delta (\delta - 1) = - \delta^ {2} + d = 0 \quad \Longrightarrow \quad \delta^ {*} = \sqrt {d},
+$$
+
+and one can check that $\delta^{*}$ is indeed a maximizer. Moreover, at $\delta^{*}$ we have a ratio between the two bounds
+
+$$
+\left. \frac {b _ {C}}{b _ {U}} \right| _ {\delta^ {*}} \geq \left. \frac {b _ {C} ^ {*}}{b _ {U}} \right| _ {\delta^ {*}} = d ^ {\frac {1}{4}} \left(1 + \frac {(\sqrt {d} - 1) ^ {2}}{(d - 1)}\right) ^ {- \frac {1}{2}} \sim \frac {d ^ {\frac {1}{4}}}{\sqrt {2}}.
+$$
+
+We observe that the improvement of the robustness guarantee by considering the convex hull instead of the union is increasing with dimension and is $\approx 3.8$ for $d = 784$ and $\approx 5.3$ for $d = 3072$ . Thus in high dimensions there is a considerable gain by considering the convex hull.
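
The claims above are easy to verify numerically for $p = 2$. A minimal sketch (the grid resolution is chosen arbitrarily):

```python
import numpy as np

def ratio_sq(delta, d, p=2):
    """(b_C^* / b_U)^p as a function of delta = eps_1 / eps_inf."""
    return delta / (1 + (delta - 1) ** p / (d - 1) ** (p - 1))

for d in (784, 3072):  # input dimensions of MNIST and CIFAR-10
    deltas = np.linspace(1, d, 200000)
    delta_star = deltas[np.argmax(ratio_sq(deltas, d))]
    assert abs(delta_star - np.sqrt(d)) < 0.1    # maximizer is ~ sqrt(d)
    gain = np.sqrt(ratio_sq(np.sqrt(d), d))      # b_C^* / b_U at delta^*
    print(d, round(gain, 1))                     # prints 784 3.8, then 3072 5.3
```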
+
+# B UNIVERSAL PROVABLE ROBUSTNESS WITH RESPECT TO ALL $l_{p}$ -NORMS
+
+# B.1 PROOF OF THEOREM 4.1
+
+Proof. From the definition of $d_p^B (x)$ and $d_p^D (x)$ we know that none of the hyperplanes $\{\pi_j\}_j$ (either boundaries of the polytope $Q(x)$ or decision hyperplanes) identified by $V^{(l)}$ and $v^{(l)}$ , $l = 1,\ldots ,L + 1$ , is closer to $x$ than $\min \{d_p^B (x),|d_p^D (x)|\}$ in $l_{p}$ -distance. Therefore the interiors of the $l_{1}$ -ball of radius $\rho_{1}$ (namely, $B_{1}(x,\rho_{1})$ ) and of the $l_{\infty}$ -ball of radius $\rho_{\infty}$ ( $B_{\infty}(x,\rho_{\infty})$ ) centered at $x$ do not intersect any of those hyperplanes. This implies that the hyperplanes $\{\pi_j\}_j$ do not intersect the interior of $\mathrm{conv}(B_1(x,\rho_1)\cup B_\infty (x,\rho_\infty))$ either, that is they are contained in the closure of $\mathbb{R}^d\setminus \mathrm{conv}(B_1(x,\rho_1)\cup B_\infty (x,\rho_\infty))$ . Then, from Theorem 3.1 we get
+
+$$
+\min \{d _ {p} ^ {B} (x), | d _ {p} ^ {D} (x) | \} \geq \frac {\rho_ {1}}{\left(\rho_ {1} / \rho_ {\infty} - \alpha + \alpha^ {q}\right) ^ {1 / q}}.
+$$
+
+Finally, exploiting Theorem 2.1, $\mathbf{r}_p(x) \geq \min \{d_p^B(x), |d_p^D(x)|\}$ holds.
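
In practice the theorem turns the two certified radii $\rho_1$ and $\rho_\infty$ into a certified radius for every intermediate norm. A hypothetical helper (not part of the paper's code) implementing the right-hand side for $1 < p < \infty$:

```python
import numpy as np

def universal_lp_radius(rho1, rhoinf, p):
    """Certified l_p-radius implied by an l_1-radius rho1 and an
    l_inf-radius rhoinf via the bound above (sketch; assumes
    1 < p < inf and rhoinf <= rho1 <= d * rhoinf)."""
    q = p / (p - 1)  # Hoelder conjugate exponent of p
    alpha = rho1 / rhoinf - np.floor(rho1 / rhoinf)
    return rho1 / (rho1 / rhoinf - alpha + alpha**q) ** (1 / q)

# illustrative values in the range of the MNIST thresholds
r2 = universal_lp_radius(1.0, 0.1, p=2)
assert 0.1 < r2 < 1.0  # lies strictly between the two given radii
```

For $\rho_1 = 1$ and $\rho_\infty = 0.1$ this gives a certified $l_2$-radius of $1/\sqrt{10} \approx 0.316$.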
+
+
+
+# C EXPERIMENTS
+
+# C.1 CHOICE OF $\epsilon_{p}$
+
+In Table 3 we compute the percentage of adversarial perturbations given by the PGD-attack wrt $l_{p}$ with budget $\epsilon_{p}$ which have $l_{q}$ -norm smaller than $\epsilon_{q}$ , for $q \neq p$ (the values of $\epsilon_{p}$ and $\epsilon_{q}$ used are those from Table 1). We used the plain model of each dataset.
+
+The most relevant statistics of Table 3 concern the relation between the $l_{1}$ - and $l_{\infty}$ -perturbations (first two rows). In fact, none of the adversarial examples wrt $l_{1}$ is contained in the $l_{\infty}$ -ball, and vice versa. This means that, although in high dimension the volume of the $l_{1}$ -ball is much smaller than that of the $l_{\infty}$ -ball (also because of the intersection with the box constraints $[0,1]^d$ ) and most of it is actually contained in the $l_{\infty}$ -ball, the adversarial examples found by $l_{1}$ -attacks are nevertheless very different from those found by $l_{\infty}$ -attacks. The choice of such $\epsilon_p$ is then meaningful, as the adversarial perturbations we are trying to prevent wrt the various norms are non-overlapping and in practice exploit significantly different regions of the input space.
+
+Moreover, one can see that the adversarial manipulations wrt $l_{1}$ and $l_{2}$ also hardly overlap. Regarding the case of $l_{2}$ and $l_{\infty}$ , for MNIST and F-MNIST the adversarial examples wrt $l_{2}$ are contained in the $l_{\infty}$ -ball. However, as one observes in Table 4, being able to certify the $l_{\infty}$ -ball is not sufficient to get non-trivial guarantees wrt $l_{2}$ . In fact, all the models trained on these datasets to be provably robust wrt the $l_{\infty}$ -norm, that is KW- $l_{\infty}$ , MMR- $l_{\infty}$ and MMR+AT- $l_{\infty}$ , have upper bounds on the robust test error in the $l_{2}$ -ball larger than $99\%$ , even though the lower bounds are small (which means that the attacks could not find adversarial perturbations for many points).
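
The $100\%$ containment of $l_2$-perturbations in the $l_\infty$-ball on MNIST and F-MNIST is geometrically plausible: with $\epsilon_2 = 0.3$ and $\epsilon_\infty = 0.1$ in $d = 784$, an $l_2$-perturbation whose mass is spread over many pixels has a tiny $l_\infty$-norm. A rough illustration with random directions on the $l_2$-sphere, which are only a crude stand-in for actual PGD perturbations:

```python
import numpy as np

rng = np.random.default_rng(0)
d, eps1, eps2, epsinf = 784, 1.0, 0.3, 0.1  # MNIST thresholds

# 1000 random perturbations of l_2-norm exactly eps2
x = rng.standard_normal((1000, d))
x *= eps2 / np.linalg.norm(x, axis=1, keepdims=True)

frac_in_linf = np.mean(np.abs(x).max(axis=1) <= epsinf)
frac_in_l1 = np.mean(np.abs(x).sum(axis=1) <= eps1)
print(frac_in_linf, frac_in_l1)  # prints 1.0 0.0
```

This matches the corresponding MNIST entries of Table 3 (100.0 and 0.0); concentrated attack perturbations can of course behave differently.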
+
+This analysis confirms that empirical and provable robustness are two distinct problems, and that the interaction of different kinds of perturbations changes according to which of the two scenarios one considers.
+
+Table 3: Percentage of the $l_{p}$ -adversarial examples found on the plain models whose $l_{q}$ -norm is smaller than $\epsilon_{q}$ , for $q \neq p$ .
+
+| | MNIST | F-MNIST | GTS | CIFAR-10 |
+|---|---|---|---|---|
+| $l_1$-perturbations with $l_\infty$-norm $\leq \epsilon_\infty$ (%) | 0.0 | 0.0 | 0.0 | 0.0 |
+| $l_\infty$-perturbations with $l_1$-norm $\leq \epsilon_1$ (%) | 0.0 | 0.0 | 0.0 | 0.0 |
+| $l_1$-perturbations with $l_2$-norm $\leq \epsilon_2$ (%) | 4.5 | 7.4 | 0.0 | 0.0 |
+| $l_2$-perturbations with $l_1$-norm $\leq \epsilon_1$ (%) | 0.0 | 0.0 | 0.0 | 0.0 |
+| $l_2$-perturbations with $l_\infty$-norm $\leq \epsilon_\infty$ (%) | 100.0 | 100.0 | 0.3 | 16.0 |
+| $l_\infty$-perturbations with $l_2$-norm $\leq \epsilon_2$ (%) | 0.0 | 0.0 | 0.0 | 0.0 |
+
+# C.2 MAIN RESULTS
+
+In Table 4 we report, for each dataset, the test error and upper and lower bounds on the robust test error, together with the $\epsilon_p$ used, for each norm individually. It is clear that training for provable $l_{p}$ -robustness (expressed by the upper bounds) does not, in general, yield provable $l_{q}$ -robustness for $q \neq p$ , even in the case where the lower bounds are small for both $p$ and $q$ .
+
+In order to compute the upper bounds on the robust test error in Tables 2 and 4 we use the method of Wong & Kolter (2018) for all three $l_{p}$ -norms and that of Tjeng et al. (2019) only for the $l_{\infty}$ -norm. The latter exploits a reformulation of problem (3) in terms of mixed-integer programming (MIP), which can compute the solution of (3) exactly for $p \in \{1, 2, \infty\}$ . However, this technique is strongly limited by its high computational cost: it is practical only thanks to presolvers which reduce the complexity of the MIP, and these presolvers are effective only wrt $l_{\infty}$ . The method of Wong & Kolter (2018), on the other hand, applies directly to every $l_{p}$ -norm. This explains why the bounds provided for $l_{\infty}$ are tighter than those for $l_{1}$ and $l_{2}$ .
+
+# C.3 EXPERIMENTAL DETAILS
+
+The convolutional architecture that we use is identical to that of Wong & Kolter (2018): two convolutional layers with 16 and 32 filters of size $4 \times 4$ and stride 2, followed by a fully connected layer with 100 hidden units. The AT- $l_{\infty}$ , AT- $l_{2}$ , KW, MMR and MMR+AT models are those presented in Croce et al. (2019a) and available at https://github.com/max-andr/provable-robustness-max-linear-regions. We trained AT- $(l_{1}, l_{2}, l_{\infty})$ by performing, for each batch of 128 images, the PGD-attack wrt the three norms (40 steps for MNIST and F-MNIST, 10 steps for GTS and CIFAR-10) and then training on the points realizing the maximal cross-entropy loss, for 100 epochs. For all experiments with MMR-Universal we use batch size 128 and train the models for 100 epochs. Moreover, we use the Adam optimizer of Kingma & Ba (2014) with a learning rate of $5 \times 10^{-4}$ for MNIST and F-MNIST and 0.001 for the other datasets, reduced by a factor of 10 for the last 10 epochs. On the CIFAR-10 dataset we apply random crops and random mirroring of the images as data augmentation.
+
+For training we use MMR-Universal as in (9) with $k_{B}$ linearly (wrt the epoch) decreasing from $20\%$ to $5\%$ of the total number of hidden units of the network architecture. We also use a training schedule for $\lambda_{p}$ , linearly increasing it from $\lambda_{p} / 10$ to $\lambda_{p}$ during the first 10 epochs. We employ both schemes since they increase the stability of training with MMR. In order to determine the best set of hyperparameters $\lambda_{1}$ , $\lambda_{\infty}$ , $\gamma_{1}$ and $\gamma_{\infty}$ of MMR, we perform a grid search over them for every dataset. In particular, we empirically found that the optimal values of $\gamma_{p}$ are usually between 1 and 2 times the $\epsilon_{p}$ used for the evaluation of the robust test error, while the values of $\lambda_{p}$ are more diverse across the different datasets. Specifically, for the models reported in Table 4 the following values of $(\lambda_{1}, \lambda_{\infty})$ were used: (3.0, 12.0) for MNIST, (3.0, 40.0) for F-MNIST, (3.0, 12.0) for GTS and (1.0, 6.0) for CIFAR-10.
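
The two schedules described above can be sketched as follows; the function name, the number of hidden units and the final $\lambda_p$ value are placeholder assumptions, not taken from the released code:

```python
def mmr_schedules(epoch, n_epochs=100, n_hidden=1000, lam_final=3.0):
    """Illustrative sketch of the k_B and lambda_p training schedules
    (all default values are hypothetical)."""
    t = epoch / (n_epochs - 1)
    # k_B: linearly decrease from 20% to 5% of the hidden units
    k_b = int(round((0.20 - 0.15 * t) * n_hidden))
    # lambda_p: linear warm-up from lam_final / 10 to lam_final
    # over the first 10 epochs
    warm = min(epoch / 10.0, 1.0)
    lam = lam_final * (0.1 + 0.9 * warm)
    return k_b, lam

assert mmr_schedules(0)[0] == 200 and mmr_schedules(99)[0] == 50
assert abs(mmr_schedules(0)[1] - 0.3) < 1e-9
```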
+
+In Tables 2 and 4, while the test error is computed on the full test set, the statistics regarding upper and lower bounds on the robust test error are computed on the first 1000 points of the respective test sets. For the lower bounds we use the FAB-attack with the original parameters, 100 iterations and 10 restarts. For PGD we also use 100 iterations and 10 restarts; the directions for the update step are the sign of the gradient for $l_{\infty}$ , the normalized gradient for $l_{2}$ , and the normalized sparse gradient suggested by Tramèr & Boneh (2019) for $l_{1}$ , with sparsity level $1\%$ for MNIST and F-MNIST and $10\%$ for GTS and CIFAR-10. Finally, we use the Linear Region Attack as in the original code. For the MIP of Tjeng et al. (2019) we use a timeout of 120s: if no guarantee is obtained by that time, the algorithm stops verifying that point.
+
+Table 4: We report, for the different datasets and training schemes, the test error (TE) and lower (LB) and upper (UB) bounds on the robust test error (in percentage) wrt the $l_{p}$ -norms at thresholds $\epsilon_{p}$ , with $p = 1,2,\infty$ (that is, the largest test error possible if any perturbation of $l_{p}$ -norm up to $\epsilon_{p}$ is allowed). Moreover, we show the $l_{1} + l_{2} + l_{\infty}$ -UB, that is, the upper bound on the robust test error when the attacker is allowed to use the union of the three $l_{p}$ -balls. The training schemes compared are plain training, adversarial training of Madry et al. (2018); Tramèr & Boneh (2019) (AT), robust training of Wong & Kolter (2018); Wong et al. (2018) (KW), MMR regularization of Croce et al. (2019a), MMR combined with AT (MMR+AT), and our MMR-Universal regularization. One can clearly see that our MMR-Universal models are the only ones which have non-trivial upper bounds on the robust test error wrt all the considered norms.
+
+provable robustness against multiple perturbations
+
+| model | TE | l1 LB | l1 UB | l2 LB | l2 UB | l∞ LB | l∞ UB | l1+l2+l∞ LB | l1+l2+l∞ UB |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| **MNIST** (ε1=1, ε2=0.3, ε∞=0.1) | | | | | | | | | |
+| plain | 0.85 | 2.3 | 100 | 3.1 | 100 | 88.5 | 100 | 88.5 | 100 |
+| AT-l∞ | 0.82 | 1.8 | 100 | 1.7 | 100 | 4.7 | 100 | 4.7 | 100 |
+| AT-l2 | 0.87 | 2.1 | 100 | 2.2 | 100 | 25.9 | 100 | 25.9 | 100 |
+| AT-(l1,l2,l∞) | 0.80 | 2.1 | 100 | 1.7 | 100 | 4.9 | 100 | 4.9 | 100 |
+| KW-l∞ | 1.21 | 3.6 | 100 | 2.8 | 100 | 4.4 | 4.4 | 4.8 | 100 |
+| KW-l2 | 1.11 | 2.4 | 100 | 2.3 | 6.6 | 10.3 | 10.3 | 10.3 | 100 |
+| MMR-l∞ | 1.65 | 10.0 | 100 | 5.2 | 100 | 6.0 | 6.0 | 10.4 | 100 |
+| MMR-l2 | 2.57 | 4.5 | 62.3 | 6.7 | 14.3 | 78.6 | 99.9 | 78.6 | 99.9 |
+| MMR+AT-l∞ | 1.19 | 3.6 | 100 | 2.4 | 100 | 3.6 | 3.6 | 4.1 | 100 |
+| MMR+AT-l2 | 1.73 | 3.6 | 99.9 | 3.7 | 12.1 | 15.3 | 76.8 | 15.3 | 99.9 |
+| MMR-Universal | 3.04 | 6.4 | 20.8 | 6.2 | 10.4 | 12.4 | 12.4 | 12.4 | 20.8 |
+| **F-MNIST** (ε1=1, ε2=0.3, ε∞=0.1) | | | | | | | | | |
+| plain | 9.32 | 31.3 | 100 | 65.8 | 100 | 100 | 100 | 100 | 100 |
+| AT-l∞ | 11.54 | 19.0 | 100 | 17.1 | 100 | 25.4 | 73.0 | 26.3 | 100 |
+| AT-l2 | 8.10 | 15.9 | 100 | 20.6 | 100 | 98.8 | 100 | 98.8 | 100 |
+| AT-(l1,l2,l∞) | 14.13 | 22.2 | 100 | 20.3 | 100 | 28.3 | 98.6 | 29.6 | 100 |
+| KW-l∞ | 21.73 | 42.7 | 100 | 30.5 | 99.2 | 32.4 | 32.4 | 43.6 | 100 |
+| KW-l2 | 13.08 | 15.8 | 19.8 | 15.9 | 19.9 | 66.7 | 86.8 | 66.7 | 86.8 |
+| MMR-l∞ | 14.51 | 28.5 | 100 | 23.5 | 100 | 33.2 | 33.6 | 36.7 | 100 |
+| MMR-l2 | 12.85 | 18.2 | 39.4 | 24.8 | 33.2 | 95.8 | 100 | 95.8 | 100 |
+| MMR+AT-l∞ | 14.52 | 27.3 | 100 | 22.9 | 100 | 27.5 | 30.7 | 31.8 | 100 |
+| MMR+AT-l2 | 13.40 | 17.2 | 55.4 | 20.2 | 37.8 | 66.5 | 99.1 | 66.5 | 99.1 |
+| MMR-Universal | 18.57 | 25.0 | 52.4 | 24.3 | 37.4 | 43.5 | 44.3 | 43.5 | 52.9 |
+| **GTS** (ε1=3, ε2=0.2, ε∞=4/255) | | | | | | | | | |
+| plain | 6.77 | 60.5 | 100 | 38.4 | 99.3 | 71.1 | 98.4 | 71.5 | 100 |
+| AT-l∞ | 6.83 | 64.0 | 100 | 24.9 | 99.2 | 31.7 | 82.3 | 64.0 | 100 |
+| AT-l2 | 8.76 | 44.0 | 100 | 27.2 | 98.4 | 58.9 | 97.1 | 59.0 | 100 |
+| AT-(l1,l2,l∞) | 8.80 | 41.8 | 100 | 24.0 | 93.7 | 41.2 | 79.4 | 45.2 | 100 |
+| KW-l∞ | 15.57 | 87.8 | 100 | 41.1 | 77.7 | 36.1 | 36.6 | 87.8 | 100 |
+| KW-l2 | 14.35 | 46.5 | 100 | 30.8 | 35.3 | 57.0 | 63.0 | 57.6 | 100 |
+| MMR-l∞ | 13.32 | 71.3 | 99.6 | 40.9 | 41.7 | 49.5 | 49.6 | 71.3 | 99.6 |
+| MMR-l2 | 14.21 | 54.6 | 80.4 | 36.3 | 36.6 | 62.3 | 63.6 | 62.6 | 80.9 |
+| MMR+AT-l∞ | 14.89 | 82.8 | 100 | 39.9 | 44.7 | 38.3 | 38.4 | 82.8 | 100 |
+| MMR+AT-l2 | 15.34 | 49.4 | 84.3 | 33.2 | 33.8 | 57.2 | 60.2 | 58.1 | 84.8 |
+| MMR-Universal | 15.98 | 49.7 | 51.5 | 34.3 | 34.6 | 47.0 | 47.0 | 51.6 | 52.4 |
+| **CIFAR-10** (ε1=2, ε2=0.1, ε∞=2/255) | | | | | | | | | |
+| plain | 23.29 | 61.0 | 100 | 48.9 | 100 | 88.6 | 100 | 88.6 | 100 |
+| AT-l∞ | 27.06 | 39.6 | 100 | 33.3 | 99.2 | 52.5 | 88.5 | 52.5 | 100 |
+| AT-l2 | 25.84 | 41.9 | 100 | 35.3 | 99.9 | 62.1 | 99.4 | 62.1 | 100 |
+| AT-(l1,l2,l∞) | 35.41 | 47.7 | 100 | 41.7 | 88.2 | 57.0 | 76.8 | 57.1 | 100 |
+| KW-l∞ | 38.91 | 51.9 | 100 | 39.9 | 66.1 | 46.6 | 48.0 | 51.9 | 100 |
+| KW-l2 | 40.24 | 47.3 | 100 | 44.6 | 49.3 | 53.6 | 54.7 | 54.0 | 100 |
+| MMR-l∞ | 34.61 | 54.1 | 100 | 42.3 | 68.4 | 57.7 | 61.0 | 58.7 | 100 |
+| MMR-l2 | 40.93 | 58.9 | 98.0 | 50.4 | 56.3 | 72.9 | 86.1 | 72.9 | 98.0 |
+| MMR+AT-l∞ | 35.38 | 50.6 | 100 | 41.2 | 84.7 | 48.7 | 54.2 | 50.8 | 100 |
+| MMR+AT-l2 | 37.78 | 50.4 | 99.9 | 46.1 | 54.2 | 61.3 | 74.1 | 61.3 | 99.9 |
+| MMR-Universal | 46.96 | 56.4 | 63.4 | 51.9 | 53.6 | 63.8 | 63.8 | 63.8 | 64.6 |
+
+
+
+
+
+
+Figure 3: We show, for each dataset, the evolution of the test error (red) and of the upper bound (UB) on the robust test error wrt $l_{1}$ (black), $l_{2}$ (cyan) and $l_{\infty}$ (blue) during training. Moreover, we report in green the upper bounds on the robust test error when the attacker is allowed to exploit the union of the three $l_{p}$ -balls. The statistics on the robustness are computed at epochs 1, 2, 5, 10 and then every 10 epochs on 1000 points with the method of Wong & Kolter (2018), using the models trained with MMR-Universal.
+
+
+
+# C.4 EVOLUTION OF ROBUSTNESS DURING TRAINING
+
+We show in Figure 3 the clean test error (red) and the upper bounds on the robust test error wrt $l_{1}$ (black), $l_{2}$ (cyan), $l_{\infty}$ (blue) and wrt the union of the three $l_{p}$ -balls (green), evaluated at epochs 1, 2, 5, 10 and then every 10 epochs (each model is trained for 100 epochs) for the models trained with our regularizer MMR-Universal. For each dataset used in Section 5 the test error is computed on the whole test set, while the upper bound on the robust test error is evaluated on the first 1000 points of the test set using the method introduced in Wong & Kolter (2018) (the thresholds $\epsilon_{1}, \epsilon_{2}, \epsilon_{\infty}$ are those provided in Table 1). Note that, unlike the results in the main paper, the statistics wrt $l_{\infty}$ are not additionally evaluated with the MIP formulation of Tjeng et al. (2019), which would improve the upper bounds wrt $l_{\infty}$ . For all the datasets the test error keeps decreasing across epochs. The values of all the upper bounds generally improve during training, showing the effectiveness of MMR-Universal.
+
+# C.5 OTHER COMBINATIONS OF MMR AND AT
+
+We here report the robustness obtained by training with MMR- $l_{p} + \mathrm{AT}-l_{q}$ with $p \neq q$ on MNIST. This means that MMR is used wrt $l_{p}$ , while adversarial training is performed wrt $l_{q}$ . In particular we test $p, q \in \{1, \infty\}$ . In Table 5 we report the test error (TE) and lower (LB) and upper (UB) bounds on the robust test error for these models, evaluated wrt $l_{1}, l_{2}, l_{\infty}$ and $l_{1} + l_{2} + l_{\infty}$ as done in Section 5. It is clear that training with MMR wrt a single norm does not suffice to obtain provable guarantees wrt all the other norms, despite the addition of adversarial training. In fact, for both models analysed the UB equals $100\%$ for at least one norm. Note that the statistics wrt $l_{\infty}$ do not include the results of the MIP formulation of Tjeng et al. (2019).
+
+Table 5: Robustness of other combinations of MMR and AT.
+
+| model | TE | l1 LB | l1 UB | l2 LB | l2 UB | l∞ LB | l∞ UB | l1+l2+l∞ LB | l1+l2+l∞ UB |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| MMR-l1+AT-l∞ | 0.99 | 2.5 | 15.7 | 2.6 | 25.4 | 9.4 | 100 | 9.4 | 100 |
+| MMR-l∞+AT-l1 | 1.43 | 2.7 | 100 | 2.3 | 100 | 5.6 | 30.2 | 5.6 | 100 |
+
+# C.6 LARGER MODELS
+
+We also trained models with MMR-Universal on the "Large" architecture from Wong et al. (2018), but we could not achieve a significant improvement compared to the smaller networks reported in the main paper. Note that verification becomes more expensive as the network gets larger. Thus for the results in Table 6 we use only the method of Wong & Kolter (2018) to compute the UB on the robust test error (we do not additionally use Tjeng et al. (2019), which yields tighter upper bounds, for the statistics relative to the $l_{\infty}$ -robustness), which explains why the results appear worse than those in Table 2.
+
+Table 6: MMR-Universal models trained on the "Large" architecture from Wong et al. (2018) with the same $\epsilon_p$ as in the experiments in Section 5. * For the $l_{\infty}$ -robustness in this case the method of Tjeng et al. (2019) is not used.
+
+MMR-Universal models with larger architecture
+
+| dataset | TE | l1 LB | l1 UB | l2 LB | l2 UB | l∞ LB | l∞ UB | l1+l2+l∞ LB | l1+l2+l∞ UB |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| MNIST | 2.20 | 4.6 | 11.2 | 4.2 | 6.9 | 8.8 | 48.8* | 8.8 | 48.8 |
+| F-MNIST | 19.20 | 29.8 | 47.6 | 25.5 | 34.2 | 43.6 | 64.4* | 43.7 | 64.5 |
+| GTS | 17.40 | 45.8 | 54.8 | 32.1 | 36.9 | 47.5 | 58.3* | 49.8 | 60.1 |
+| CIFAR-10 | 45.09 | 48.6 | 63.3 | 46.0 | 48.9 | 57.1 | 69.6* | 57.1 | 69.9 |
\ No newline at end of file
diff --git a/provablerobustnessagainstalladversariallpperturbationsforpgeq1/images.zip b/provablerobustnessagainstalladversariallpperturbationsforpgeq1/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ec49177df3885e5f8dcec9d88c6f8e667f1af964
--- /dev/null
+++ b/provablerobustnessagainstalladversariallpperturbationsforpgeq1/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:444ccc0cf41940c74e07e8cd1df757e4a3b44b53cf5dd1b6ca62f3cfa7392c1b
+size 968655
diff --git a/provablerobustnessagainstalladversariallpperturbationsforpgeq1/layout.json b/provablerobustnessagainstalladversariallpperturbationsforpgeq1/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6515eb2afa5d37a40b6a0b7960f4de76b65e4e7d
--- /dev/null
+++ b/provablerobustnessagainstalladversariallpperturbationsforpgeq1/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2aa53356be1e8f4f20302a55a3a7fb2e13731101da031341db8671597845d272
+size 1142490
diff --git a/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/8bd9918e-02a8-4ec9-9b1d-24ed5d062a6b_content_list.json b/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/8bd9918e-02a8-4ec9-9b1d-24ed5d062a6b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7ffd29a6204a75e451bf29a9a81be5415a032ca9
--- /dev/null
+++ b/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/8bd9918e-02a8-4ec9-9b1d-24ed5d062a6b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e03dfe8b0e499469b556eabf8469b0b19065af98e32be7115e9bc2f6c1418ecf
+size 113470
diff --git a/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/8bd9918e-02a8-4ec9-9b1d-24ed5d062a6b_model.json b/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/8bd9918e-02a8-4ec9-9b1d-24ed5d062a6b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..fadabf447832049358c9c749eb83d50566114566
--- /dev/null
+++ b/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/8bd9918e-02a8-4ec9-9b1d-24ed5d062a6b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c6344d51aff101dbfebd9bb00602a961bac51da87334628e5477711ac850176d
+size 132323
diff --git a/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/8bd9918e-02a8-4ec9-9b1d-24ed5d062a6b_origin.pdf b/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/8bd9918e-02a8-4ec9-9b1d-24ed5d062a6b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e769fe7cef914528ca2479bf986a8bc6b2d61c4c
--- /dev/null
+++ b/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/8bd9918e-02a8-4ec9-9b1d-24ed5d062a6b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:25151a99806b5d2a24e76945261b329e9fe849b5cb84fd1320be0b17f95b44b4
+size 676438
diff --git a/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/full.md b/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..41984cb694749ee8f0b1f9fcf225effcdc1f9573
--- /dev/null
+++ b/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/full.md
@@ -0,0 +1,584 @@
+# PROXSGD: TRAINING STRUCTURED NEURAL NETWORKS UNDER REGULARIZATION AND CONSTRAINTS
+
+Yang Yang
+
+Fraunhofer ITWM
+
+Fraunhofer Center Machine Learning
+
+yang.yang@itwm.fraunhofer.de
+
+Yaxiong Yuan
+
+University of Luxembourg
+
+yaxiong.yuan@uni.lu
+
+Avraam Chatzimichailidis
+
+Fraunhofer ITWM
+
+TU Kaiserslautern
+
+avraam.chatzimichailidis@itwm.fraunhofer.de
+
+Ruud JG van Sloun
+
+Eindhoven University of Technology
+
+r.j.g.v.sloun@tue.nl
+
+Lei Lei, Symeon Chatzinotas
+
+University of Luxembourg {lei.lei, symeon.chatzinotas}@uni.lu
+
+# ABSTRACT
+
+In this paper, we consider the problem of training structured neural networks (NN) with nonsmooth regularization (e.g. $\ell_1$ -norm) and constraints (e.g. interval constraints). We formulate training as a constrained nonsmooth nonconvex optimization problem, and propose a convergent proximal-type stochastic gradient descent (ProxSGD) algorithm. We show that under properly selected learning rates, with probability 1, every limit point of the sequence generated by the proposed ProxSGD algorithm is a stationary point. Finally, to support the theoretical analysis and demonstrate the flexibility of ProxSGD, we show by extensive numerical tests how ProxSGD can be used to train either sparse or binary neural networks through an adequate selection of the regularization function and constraint set.
+
+# 1 INTRODUCTION
+
+In this paper, we consider the problem of training neural networks (NN) under constraints and regularization. It is formulated as an optimization problem
+
+$$
+\underset{\boldsymbol{x} \in \mathbb{X} \subseteq \mathbb{R}^{n}}{\text{minimize}} \quad \underbrace{\frac{1}{m} \sum_{i=1}^{m} f_{i}(\boldsymbol{x}, \boldsymbol{y}_{i})}_{\triangleq f(\boldsymbol{x})} + r(\boldsymbol{x}), \tag{1}
+$$
+
+where $\pmb{x}$ is the parameter vector to optimize, $\pmb{y}_i$ is the $i$ -th training example which consists of the training input and desired output, and $m$ is the number of training examples. The training loss $f$ is assumed to be smooth (but nonconvex) with respect to $\pmb{x}$ , the regularization $r$ is assumed to be convex (but nonsmooth), proper and lower semicontinuous, and the constraint set $\mathbb{X}$ is convex and compact (closed and bounded).
+
+When $r(\pmb{x}) = 0$ and $\mathbb{X} = \mathbb{R}^n$ , stochastic gradient descent (SGD) has been used to solve the optimization problem (1). At each iteration, a minibatch of the $m$ training examples is drawn randomly, and the resulting gradient is an unbiased estimate of the true gradient. Therefore SGD generally moves along a descent direction, see Bertsekas & Tsitsiklis (2000). SGD can be accelerated by replacing the instantaneous gradient estimates with a momentum term aggregating all gradients from past iterations. Despite the success and popularity of SGD with momentum, its convergence had long been an open problem. Assuming $f$ is convex, analyzing the convergence was first attempted in Kingma & Ba (2015) and later concluded in Reddi et al. (2018). The proof for a nonconvex $f$ was later given in Chen et al. (2019); Lei et al. (2019).
+
+In machine learning, the regularization function $r$ is typically used to promote a certain structure in the optimal solution, for example sparsity as in, e.g., feature selection and compressed sensing, or a zero-mean-Gaussian prior on the parameters (Bach et al., 2011; Boyd et al., 2010). It can be interpreted as a penalty function since at the optimal point $\pmb{x}^{\star}$ of problem (1), the value $r(\pmb{x}^{\star})$ will be small. One prominent example is the Tikhonov regularization $r(\pmb{x}) = \mu \| \pmb{x} \|_2^2$ for some predefined constant $\mu$ , which can be used to alleviate ill-conditioning and to ensure that the magnitude of the weights does not become exceedingly large. Another commonly used regularization, the $\ell_1$ -norm $r(\pmb{x}) = \mu \| \pmb{x} \|_1 = \mu \sum_{j=1}^{n} |x_j|$ (the convex surrogate of the $\ell_0$ -norm), encourages a sparse solution. In the context of NN, it is used to (i) promote a sparse neural network (SNN) to alleviate overfitting and allow better generalization, (ii) accelerate the training process, and (iii) prune the network to reduce its complexity, see Louizos et al. (2018) and Gale et al. (2019).
+
+Technically, it is difficult to analyze the regularizations as some commonly used convex regularizers are nonsmooth, for example, $\ell_1$ -norm. In current implementations of TensorFlow, the gradient of $|x|$ is simply set to 0 when $x = 0$ . This amounts to the stochastic subgradient descent method and usually exhibits slow convergence. Other techniques to promote a SNN include magnitude pruning and variational dropout, see Gale et al. (2019).
+
+Although regularization can be interpreted as a constraint from the duality theory, sometimes it may still be more desirable to use explicit constraints, for example, $\sum x_{j}^{2}\leq \alpha$ , where the summation is over the weights on the same layer. This is useful when we already know how to choose $\alpha$ . Another example is the lower and upper bound on the weights, that is, $l\leq w\leq u$ for some predefined $l$ and $u$ . Compared with regularization, constraints do not encourage the weights to stay in a small neighborhood of the initial weight, see Chapter 7.2 of Goodfellow et al. (2016) for more details.
+
+The set $\mathbb{X}$ models such explicit constraints, but it poses an additional challenge for stochastic gradient algorithms as the new weight obtained from the SGD method (with or without momentum) must be projected back to the set $\mathbb{X}$ to maintain its feasibility. However, projection is a nonlinear operator, so the unbiasedness of the random gradient would be lost. Therefore the convergence analysis for constrained problems is much more involved than unconstrained problems.
+
+In this paper, we propose a convergent proximal-type stochastic gradient algorithm (ProxSGD) to train neural networks under nonsmooth regularization and constraints. It turns out momentum plays a central role in the convergence analysis. We establish that with probability (w.p.) 1, every limit point of the sequence generated by ProxSGD is a stationary point of the nonsmooth nonconvex problem (1). This is in sharp contrast to unconstrained optimization, where the convergence of the vanilla SGD method has long been well understood while the convergence of the SGD method with momentum was only settled recently. Nevertheless, the convergence rate of ProxSGD is not derived in the current work and is worth further investigating.
+
+To test the proposed algorithm, we consider two applications. The first application is to train a SNN, and we leverage $\ell_1$ -regularization, that is,
+
+$$
+\underset{\boldsymbol{x}}{\text{minimize}} \quad \frac{1}{m} \sum_{i=1}^{m} f_{i}(\boldsymbol{x}, \boldsymbol{y}_{i}) + \mu \|\boldsymbol{x}\|_{1}. \tag{2}
+$$
+
+The second application is to train a binary neural network (BNN) where the weights (and activations) are either 1 or -1 (see Courbariaux et al. (2015; 2016); Hou et al. (2017); Yin et al. (2018); Bai et al. (2019) for more details). To achieve this, we augment the loss function with a term that penalizes the weights if they are not +1 or -1:
+
+$$
+\underset{\boldsymbol{x}, \boldsymbol{a}}{\text{minimize}} \quad \frac{1}{m} \sum_{i=1}^{m} f_{i}(\boldsymbol{x}, \boldsymbol{y}_{i}) + \frac{\mu}{4} \sum_{j=1}^{n} \left( a_{j}(x_{j} + 1)^{2} + (1 - a_{j})(x_{j} - 1)^{2} \right)
+$$
+
+$$
+\text{subject to} \quad a_{j} \in \{0, 1\}, \quad j = 1, \dots, n,
+$$
+
+where $\mu$ is a given penalty parameter. The binary variable $a_{j}$ can be interpreted as a switch for weight $x_{j}$ : when $a_{j} = 0$ , $(1 - a_{j})(x_{j} - 1)^{2}$ is activated, and there is a strong incentive for $x_{j}$ to be 1 (the analysis for $a_{j} = 1$ is similar). Since integer variables are difficult to optimize, we relax $a_{j}$ to be a continuous variable between 0 and 1. To summarize, a BNN can be obtained by solving the
+
+following regularized optimization problem under constraints with respect to $\pmb{x}$ and $\pmb{a}$
+
+$$
+\underset{\boldsymbol{x}, \boldsymbol{a}}{\text{minimize}} \quad \frac{1}{m} \sum_{i=1}^{m} f_{i}(\boldsymbol{x}, \boldsymbol{y}_{i}) + \frac{\mu}{4} \sum_{j=1}^{n} \left( a_{j}(x_{j} + 1)^{2} + (1 - a_{j})(x_{j} - 1)^{2} \right)
+$$
+
+$$
+\text{subject to} \quad 0 \leq a_{j} \leq 1, \quad j = 1, \dots, n. \tag{3}
+$$
+
+If $\mu$ is properly selected (or sufficiently large), the optimal $a_{j}$ will be exactly or close to 0 or 1. Consequently, regularization and constraints offer interpretability and flexibility, which allows us to use more accurate models to promote structures in the neural networks, and the proposed convergent ProxSGD algorithm ensures efficient training of such models.
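To see why the relaxation still produces near-binary switches: for fixed $\pmb{x}$ the penalty is linear in each $a_j$ (with slope proportional to $x_j$), so the relaxed optimum sits at a boundary of $[0,1]$. A small numerical sketch (the helper names `bnn_penalty` and `optimal_a` are ours):

```python
import numpy as np

def bnn_penalty(x, a, mu):
    """Penalty term of problem (3): (mu/4) * sum_j a_j(x_j+1)^2 + (1-a_j)(x_j-1)^2."""
    return mu / 4.0 * np.sum(a * (x + 1) ** 2 + (1 - a) * (x - 1) ** 2)

def optimal_a(x):
    """For fixed x the penalty is linear in a_j (coefficient mu*x_j), so the
    relaxed optimum is at a boundary: a_j = 0 if x_j > 0 (pulling x_j toward +1),
    else a_j = 1 (pulling x_j toward -1); the tie at x_j = 0 is broken arbitrarily."""
    return (x <= 0).astype(float)
```

For instance, with $x = (0.9, -0.8)$ the optimal switches are $a = (0, 1)$, and the penalty then pulls the weights toward $(+1, -1)$.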
+
+# 2 THE PROPOSED PROXSGD ALGORITHM
+
+In this section, we describe the ProxSGD algorithm to solve (1).
+
+Background and setup. We make the following blanket assumptions on problem (1).
+
+- $f_{i}(\pmb{x}, \pmb{y}^{(i)})$ is smooth (continuously differentiable) but not necessarily convex.
+- $\nabla_{\boldsymbol{x}}f_{i}(\boldsymbol{x},\boldsymbol{y}^{(i)})$ is Lipschitz continuous with a finite constant $L_{i}$ for any $\boldsymbol{y}_i$ . Thus $\nabla f(\boldsymbol{x})$ is Lipschitz continuous with constant $L\triangleq \frac{1}{m}\sum_{i = 1}^{m}L_{i}$ .
+- $r(\pmb{x})$ is convex, proper and lower semicontinuous (not necessarily smooth).
+- $\mathbb{X}$ is convex and compact.
+
+We are interested in algorithms that can find a stationary point of (1). A stationary point $\pmb{x}^{\star}$ satisfies the optimality condition: at $\pmb{x} = \pmb{x}^{\star}$ ,
+
+$$
+\left(\boldsymbol {x} - \boldsymbol {x} ^ {\star}\right) ^ {T} \nabla f \left(\boldsymbol {x} ^ {\star}\right) + r (\boldsymbol {x}) - r \left(\boldsymbol {x} ^ {\star}\right) \geq 0, \forall \boldsymbol {x} \in \mathbb {X}. \tag {4}
+$$
+
+When $r(\pmb{x}) = 0$ and $\mathbb{X} = \mathbb{R}^n$ , the deterministic optimization problem 1 can be solved by the (batch) gradient descent method. When $m$ , the number of training examples, is large, it is computationally expensive to calculate the gradient. Instead, we estimate the gradient by a minibatch of $m(t)$ training examples. We denote the minibatch by $\mathbb{M}(t)$ : its elements are drawn uniformly from $\{1,2,\dots,m\}$ and there are $m(t)$ elements. Then the estimated gradient is
+
+$$
+\boldsymbol {g} (t) \triangleq \frac {1}{m (t)} \sum_ {i \in \mathbb {M} (t)} \nabla f _ {i} (\boldsymbol {x} (t), \boldsymbol {y} ^ {(i)}) \tag {5}
+$$
+
+and it is an unbiased estimate of the true gradient.
+
+The proposed algorithm. The instantaneous gradient $\pmb{g}(t)$ is used to form an aggregate gradient (momentum) $\pmb{v}(t)$ , which is updated recursively as follows
+
+$$
+\boldsymbol {v} (t) = (1 - \rho (t)) \boldsymbol {v} (t - 1) + \rho (t) \boldsymbol {g} (t), \tag {6}
+$$
+
+where $\rho(t)$ is the stepsize (learning rate) for the momentum and $\rho(t) \in (0,1]$ .
+
+At iteration $t$ , we propose to solve an approximation subproblem and denote its solution as $\widehat{\pmb{x}}(\pmb{x}(t), \pmb{v}(t), \pmb{\tau}(t))$ , or simply $\widehat{\pmb{x}}(t)$
+
+$$
+\widehat{\boldsymbol{x}}(t) \triangleq \underset{\boldsymbol{x} \in \mathbb{X}}{\arg\min} \left\{ (\boldsymbol{x} - \boldsymbol{x}(t))^{T} \boldsymbol{v}(t) + \frac{1}{2} (\boldsymbol{x} - \boldsymbol{x}(t))^{T} \operatorname{diag}(\boldsymbol{\tau}(t)) (\boldsymbol{x} - \boldsymbol{x}(t)) + r(\boldsymbol{x}) \right\}. \tag{7}
+$$
+
+A quadratic regularization term is incorporated so that the subproblem (7) is strongly convex; its modulus is the minimum element of the vector $\pmb{\tau}(t)$ , denoted $\underline{\tau}(t) = \min_{j=1,\dots,n} \tau_j(t)$ . Note that $\underline{\tau}(t)$ should be bounded away from zero by a positive constant, so that the quadratic regularization in (7) does not vanish.
+
+The difference between two vectors $\widehat{\pmb{x}}(t)$ and $\pmb{x}(t)$ specifies a direction starting at $\pmb{x}(t)$ and ending at $\widehat{\pmb{x}}(t)$ . This update direction is used to refine the weight vector
+
+$$
+\boldsymbol {x} (t + 1) = \boldsymbol {x} (t) + \epsilon (t) (\widehat {\boldsymbol {x}} (t) - \boldsymbol {x} (t)), \tag {8}
+$$
+
+| algorithm | momentum | weight | quadratic gain in subproblem | regularization | constraint set |
+| --- | --- | --- | --- | --- | --- |
+| ProxSGD | ρ(t) | ε(t) | τ(t) | convex | convex, compact |
+| SGD (w. momentum) | 1 (ρ) | ε | 1 | 0 | Rn |
+| AdaGrad | 1 | ε | √r(t) + δ1† | 0 | Rn |
+| RMSProp | 1 | ε | √r(t) + δ1‡ | 0 | Rn |
+| ADAM | ρ | ε/(1-ρ^t) | √(r(t)/(1-β^t)) + δ1‡ | 0 | Rn |
+| AMSGrad | ρ | ε | √r̂(t), r̂(t) = max(r̂(t-1), r(t))‡ | 0 | Rn |
+| ADABound | ρ | 1 | 1/clip(ε(t)/√r(t), ηl(t), ηu(t))‡ | 0 | Rn |
+
+Table 1: Connection between the proposed framework and existing methods, where $\rho, \beta, \epsilon$ and $\delta$ are some predefined constants. ${}^{\dagger}\mathbf{r}(t) = \mathbf{r}(t - 1) + \mathbf{g}(t) \odot \mathbf{g}(t)$ , ${}^{\ddagger}\mathbf{r}(t) = \beta \mathbf{r}(t - 1) + (1 - \beta) \mathbf{g}(t) \odot \mathbf{g}(t)$ .
+
+where $\epsilon(t)$ is a stepsize (learning rate) for the weight and $\epsilon(t) \in (0,1]$ . Note that $\pmb{x}(t + 1)$ is feasible as long as $\pmb{x}(t)$ is feasible, as it is the convex combination of two feasible points $\pmb{x}(t)$ and $\widehat{\pmb{x}}(t)$ while the set $\mathbb{X}$ is convex.
+
+The above steps (5)-(8) are summarized in Algorithm 1, which is termed proximal-type Stochastic Gradient Descent (ProxSGD), for the reason that the explicit constraint $x \in \mathbb{X}$ in (7) can also be formulated implicitly as a regularization function, more specifically, the indicator function $\delta_{\mathbb{X}}(\boldsymbol{x})$ . If all elements of $\tau(t)$ are equal, then $\widehat{\boldsymbol{x}}(t)$ is exactly the proximal operator
+
+$$
+\begin{aligned} \widehat{\boldsymbol{x}}(t) &= \underset{\boldsymbol{x}}{\arg\min} \left\{ \frac{1}{2} \left\| \boldsymbol{x} - \left( \boldsymbol{x}(t) - \frac{1}{\tau(t)} \boldsymbol{v}(t) \right) \right\|_{2}^{2} + \frac{1}{\tau(t)} r(\boldsymbol{x}) + \delta_{\mathbb{X}}(\boldsymbol{x}) \right\} \\ &\triangleq \operatorname{Prox}_{\frac{1}{\tau(t)} r(\boldsymbol{x}) + \delta_{\mathbb{X}}(\boldsymbol{x})} \left( \boldsymbol{x}(t) - \frac{1}{\tau(t)} \boldsymbol{v}(t) \right). \end{aligned}
+$$
+
+# Algorithm 1 Proximal-type Stochastic Gradient Descent (ProxSGD) Method
+
+Input: $\mathbf{x}(0) \in \mathbb{X}$ , $\mathbf{v}(-1) = \mathbf{0}$ , $t = 0$ , $T$ , $\{\rho(t)\}_{t=0}^{T}$ , $\{\epsilon(t)\}_{t=0}^{T}$ .
+
+for $t = 0:1:T$ do
+
+1. Compute the instantaneous gradient $g(t)$ based on the minibatch $\mathbb{M}(t)$ :
+
+$$
+\boldsymbol {g} (t) = \frac {1}{m (t)} \sum_ {i \in \mathbb {M} (t)} \nabla_ {\boldsymbol {x}} f _ {i} (\boldsymbol {x} (t), \boldsymbol {y} ^ {(i)}).
+$$
+
+2. Update the momentum: $\pmb{v}(t) = (1 - \rho(t))\pmb{v}(t-1) + \rho(t)\pmb{g}(t)$ .
+3. Compute $\widehat{\pmb{x}} (t)$ by solving the approximation subproblem:
+
+$$
+\widehat{\boldsymbol{x}}(t) = \underset{\boldsymbol{x} \in \mathbb{X}}{\arg\min} \left\{ (\boldsymbol{x} - \boldsymbol{x}(t))^{T} \boldsymbol{v}(t) + \frac{1}{2} (\boldsymbol{x} - \boldsymbol{x}(t))^{T} \operatorname{diag}(\boldsymbol{\tau}(t)) (\boldsymbol{x} - \boldsymbol{x}(t)) + r(\boldsymbol{x}) \right\}.
+$$
+
+4. Update the weight: $\pmb{x}(t + 1) = \pmb{x}(t) + \epsilon(t)(\widehat{\pmb{x}}(t) - \pmb{x}(t))$ .
+
+# end for
+
+ProxSGD in Algorithm 1 bears a similar structure to several SGD algorithms, with and without momentum, see Table 1, and it allows us to interpret some existing algorithms as special cases of the proposed framework. For example, no momentum is used in SGD, which amounts to setting $\rho(t) = 1$ in Algorithm 1. In ADAM, the learning rate for the momentum is a constant $\rho$ and the learning rate for the weight vector is $\epsilon/(1 - \rho^t)$ for some $\epsilon$ , which amounts to setting $\rho(t) = \rho$ and $\epsilon(t) = \epsilon/(1 - \rho^t)$ in Algorithm 1. This interpretation also implies that the convergence conditions proposed shortly are sufficient for these existing algorithms (although they are not meant to be the weakest conditions available in the literature).
+
+Solving the approximation subproblem (7). Since (7) is strongly convex, $\widehat{\pmb{x}}(t)$ is unique. Generally $\widehat{\pmb{x}}(t)$ in (7) does not admit a closed-form expression and should be solved by a generic solver. However, some important special cases that are frequently used in practice can be solved efficiently.
+
+- The trivial case is $\mathbb{X} = \mathbb{R}^n$ and $r = 0$ , where
+
+$$
+\widehat {\boldsymbol {x}} (t) = \boldsymbol {x} (t) - \frac {\boldsymbol {v} (t)}{\boldsymbol {\tau} (t)}, \tag {9}
+$$
+
+where the vector division is understood to be element-wise. When $\mathbb{X} = \mathbb{R}^n$ and $r(\pmb{x}) = \mu \| \pmb{x}\|_1$ , $\widehat{\pmb{x}}(t)$ has a closed-form expression that is known as the soft-thresholding operator
+
+$$
+\widehat{\boldsymbol{x}}(t) = S_{\mu \mathbf{1} / \boldsymbol{\tau}(t)} \left( \boldsymbol{x}(t) - \frac{\boldsymbol{v}(t)}{\boldsymbol{\tau}(t)} \right), \tag{10}
+$$
+
+where $S_{\pmb{a}}(\pmb {b})\triangleq \max (\pmb {b} - \pmb {a},\pmb {0}) - \max (-\pmb {b} - \pmb {a},\pmb {0})$ (Bach et al., 2011).
+
+- If $\mathbb{X} = \mathbb{R}^n$ , $r(\pmb{x}) = \mu \| \pmb{x}\|_2$ and $\pmb{\tau}(t) = \tau \mathbf{1}$ for some $\tau$ , then (Parikh & Boyd, 2014)
+
+$$
+\widehat {\boldsymbol {x}} (t) = \left\{ \begin{array}{l l} \left(1 - \mu / \| \tau \boldsymbol {x} (t) - \boldsymbol {v} (t) \| _ {2}\right) (\boldsymbol {x} (t) - \boldsymbol {v} (t) / \tau), & \text {i f} \| \tau \boldsymbol {x} (t) - \boldsymbol {v} (t) \| _ {2} \geq \mu , \\ 0, & \text {o t h e r w i s e .} \end{array} \right. \tag {11}
+$$
+
+If $\pmb{x}$ is divided into blocks $x_{1}, x_{2}, \ldots$ , the $\ell_{2}$ -regularization is commonly used to promote block sparsity (rather than element sparsity by $\ell_{1}$ -regularization).
+
+- When there is a bound constraint $l \leq x \leq u$ , $\widehat{x}(t)$ can simply be obtained by first solving the approximation subproblem (7) without the bound constraint and then projecting the optimal point onto the interval $[l, u]$ . For example, when $\mathbb{X} = \mathbb{R}^n$ and $r = 0$ ,
+
+$$
+\widehat {\boldsymbol {x}} (t) = \left[ \boldsymbol {x} (t) - \frac {\boldsymbol {v} (t)}{\boldsymbol {\tau} (t)} \right] _ {l} ^ {\boldsymbol {u}}, \tag {12}
+$$
+
+with $[\pmb{x}]_l^{\pmb{u}} = \mathrm{clip}(\pmb{x},\pmb{l},\pmb{u})\triangleq \min (\max (\pmb{x},\pmb{l}),\pmb{u})$
+
+- If the constraint function is quadratic: $\mathbb{X} = \{\pmb{x} : \| \pmb{x} \|_2^2 \leq 1\}$ , $\widehat{\pmb{x}}(t)$ has a semi-analytical expression (up to a scalar Lagrange multiplier which can be found efficiently by the bisection method).
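The closed-form cases above can be sketched in NumPy (a simplified illustration with our own function names; the $l_1$-plus-box case combines the soft-thresholding of (10) with the clipping of (12), which is valid here because the subproblem is separable across coordinates):

```python
import numpy as np

def soft_threshold(b, a):
    """S_a(b) = max(b - a, 0) - max(-b - a, 0), applied element-wise; gives (10)."""
    return np.maximum(b - a, 0.0) - np.maximum(-b - a, 0.0)

def prox_l2(x, v, tau, mu):
    """Closed form (11) for r(x) = mu * ||x||_2 with tau(t) = tau * 1."""
    z = tau * x - v
    nz = np.linalg.norm(z)
    return (1.0 - mu / nz) * (x - v / tau) if nz >= mu else np.zeros_like(x)

def proxsgd_step_l1_box(x, v, g, rho, eps, tau, mu, l, u):
    """One ProxSGD iteration, steps (6)-(8), for r(x) = mu*||x||_1 and X = [l, u]."""
    v = (1.0 - rho) * v + rho * g                  # momentum update (6)
    z = x - v / tau                                # unconstrained minimizer (9)
    x_hat = np.clip(soft_threshold(z, mu / tau), l, u)  # subproblem solution (7)
    x = x + eps * (x_hat - x)                      # convex combination (8)
    return x, v
```

Since `x_hat` lies in `[l, u]` and the update is a convex combination, the new iterate stays feasible, mirroring the remark after (8).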
+
+Approximation subproblem. We explain why we update the weights by solving the approximation subproblem (7). First, we denote by $\widetilde{f}$ the smooth part of the objective function in (7). Clearly it depends on $\pmb{x}(t)$ and $\pmb{v}(t)$ (and thus $\mathbb{M}(t)$ ), while $\pmb{x}(t)$ and $\pmb{v}(t)$ depend on the old weights $\pmb{x}(0),\dots,\pmb{x}(t-1)$ and momenta $\pmb{v}(0),\dots,\pmb{v}(t-1)$ . Define $\mathbb{F}(t) \triangleq \{\pmb{x}(0),\dots,\pmb{x}(t),\mathbb{M}(0),\dots,\mathbb{M}(t)\}$ as a shorthand notation for the trajectory generated by ProxSGD. We formally write $\widetilde{f}$ as
+
+$$
+\widetilde{f}(\boldsymbol{x}; \mathbb{F}(t)) \triangleq (\boldsymbol{x} - \boldsymbol{x}(t))^{T} \boldsymbol{v}(t) + \frac{1}{2} (\boldsymbol{x} - \boldsymbol{x}(t))^{T} \operatorname{diag}(\boldsymbol{\tau}(t)) (\boldsymbol{x} - \boldsymbol{x}(t)). \tag{13}
+$$
+
+It follows from the optimality of $\widehat{\pmb{x}}(t)$ that
+
+$$
+\widetilde {f} (\boldsymbol {x} (t); \mathbb {F} (t)) + r (\boldsymbol {x} (t)) \geq \widetilde {f} (\widehat {\boldsymbol {x}} (t); \mathbb {F} (t)) + r (\widehat {\boldsymbol {x}} (t)).
+$$
+
+After inserting (13) and reorganizing the terms, the above inequality becomes
+
+$$
+\left(\widehat {\boldsymbol {x}} (t) - \boldsymbol {x} (t)\right) ^ {T} \boldsymbol {v} (t) + r \left(\widehat {\boldsymbol {x}} (t)\right) - r (\boldsymbol {x} (t)) \leq - \tau (t) \| \widehat {\boldsymbol {x}} (t) - \boldsymbol {x} (t) \| _ {2} ^ {2}. \tag {14}
+$$
+
+Since $\nabla f(\pmb{x})$ is Lipschitz continuous with constant $L$ , we have
+
+$$
+\begin{array}{l} f(\boldsymbol{x}(t+1)) + r(\boldsymbol{x}(t+1)) - (f(\boldsymbol{x}(t)) + r(\boldsymbol{x}(t))) \\ \quad \leq (\boldsymbol{x}(t+1) - \boldsymbol{x}(t))^{T}\nabla f(\boldsymbol{x}(t)) + \frac{L}{2}\|\boldsymbol{x}(t+1) - \boldsymbol{x}(t)\|_{2}^{2} + r(\boldsymbol{x}(t+1)) - r(\boldsymbol{x}(t)) \quad (15) \\ \quad \leq \epsilon(t)\left((\widehat{\boldsymbol{x}}(t) - \boldsymbol{x}(t))^{T}\nabla f(\boldsymbol{x}(t)) + r(\widehat{\boldsymbol{x}}(t)) - r(\boldsymbol{x}(t)) + \frac{L}{2}\epsilon(t)\|\widehat{\boldsymbol{x}}(t) - \boldsymbol{x}(t)\|_{2}^{2}\right), \quad (16) \end{array}
+$$
+
+where the first inequality follows from the descent lemma (applied to $f$ ) and the second follows from Jensen's inequality applied to the convex function $r$ , together with the update rule (8).
+
+If $\pmb{v}(t) = \nabla f(\pmb{x}(t))$ (which holds asymptotically, as we show shortly), then replacing $\nabla f(\pmb{x}(t))$ in (16) by $\pmb{v}(t)$ and inserting (14) into (16) yields
+
+$$
+f (\boldsymbol {x} (t + 1)) + r (\boldsymbol {x} (t + 1)) - \left(f (\boldsymbol {x} (t)) + r (\boldsymbol {x} (t))\right) \leq \epsilon (t) \left(\frac {L}{2} \epsilon (t) - \tau (t)\right) \| \widehat {\boldsymbol {x}} (t) - \boldsymbol {x} (t) \| _ {2} ^ {2}. \tag {17}
+$$
+
+The right hand side (RHS) is negative when $\epsilon(t) < \frac{2\tau(t)}{L}$ : this will eventually be satisfied, as we use a decaying $\epsilon(t)$ . This implies that the proposed update (8) decreases the objective value of (1) in each iteration.
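+To make the mechanics concrete, here is a small self-contained sketch of the update steps (7)-(8) with exact gradients (so $\pmb{v}(t) = \nabla f(\pmb{x}(t))$ ) on an $\ell_1$-regularized least-squares toy problem; the data, constants, and constant step size are ours, for illustration only:

```python
import numpy as np

# Toy check of the descent property (17): with v(t) the exact gradient and
# eps(t) < 2*tau/L, the objective f(x) + r(x) decreases at every iteration.
# Here f(x) = 0.5*||A x - b||^2 and r(x) = mu*||x||_1 (illustrative data).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
mu, tau = 0.1, 1.0
L = np.linalg.eigvalsh(A.T @ A).max()        # Lipschitz constant of grad f

def obj(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + mu * np.sum(np.abs(x))

def soft(z, thr):                            # prox of the l1-norm
    return np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)

x = np.zeros(10)
vals = [obj(x)]
for t in range(50):
    g = A.T @ (A @ x - b)                    # v(t) = exact gradient here
    x_hat = soft(x - g / tau, mu / tau)      # approximation subproblem (7)
    eps = min(0.9, tau / L)                  # satisfies eps < 2*tau/L
    x = x + eps * (x_hat - x)                # update rule (8)
    vals.append(obj(x))
```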
+
+Momentum and algorithm convergence. It turns out that the momentum (gradient-averaging step) in (6) is essential for the convergence of ProxSGD. Under some mild technical assumptions, which we outline now, the aggregate gradient $\pmb{v}(t)$ converges to the true (unknown) gradient $\nabla f(\pmb{x}(t))$ . This remark is made rigorous in the following theorem.
+
+Theorem 1. Assume that the unbiased gradient $\pmb{g}(t)$ has a bounded second moment
+
+$$
+\mathbb {E} \left[ \| \boldsymbol {g} (t) \| _ {2} ^ {2} | \mathbb {F} (t) \right] \leq C, \tag {18}
+$$
+
+for some finite and positive constant $C$ , and the sequences of step sizes $\{\rho(t)\}$ and $\{\epsilon(t)\}$ satisfy
+
+$$
+\sum_ {t = 0} ^ {\infty} \rho (t) = \infty , \sum_ {t = 0} ^ {\infty} \rho (t) ^ {2} < \infty , \sum_ {t = 0} ^ {\infty} \epsilon (t) = \infty , \sum_ {t = 0} ^ {\infty} \epsilon (t) ^ {2} < \infty , \lim _ {t \rightarrow \infty} \frac {\epsilon (t)}{\rho (t)} = 0. \tag {19}
+$$
+
+Then $\lim_{t\to \infty}\| \pmb {v}(t) - \nabla f(\pmb {x}(t))\| = 0$ , and every limit point of the sequence $\{\pmb {x}(t)\}$ is a stationary point of (1) w.p.1.
+
+Proof. Under the assumptions (18) and (19), it follows from Lemma 1 of Ruszczyński (1980) that $\pmb{v}(t) \to \nabla f(\pmb{x}(t))$ . Since $\widehat{\pmb{x}}(t) - \pmb{x}(t)$ is a descent direction in view of (14), the convergence of the ProxSGD algorithm can be obtained by generalizing the line of analysis in Theorem 1 of Yang et al. (2016) for smooth optimization problems. The detailed proof is included in the appendix to make the paper self-contained.
+
+We make some comments on the convergence analysis in Theorem 1.
+
+The bounded second moment assumption on the gradient $\pmb{g}$ in (18) and the decreasing step sizes in (19) are standard assumptions in stochastic optimization and SGD. What is noteworthy is that $\epsilon(t)$ should decrease faster than $\rho(t)$ to ensure that $\pmb{v}(t) \to \nabla f(\pmb{x}(t))$ . But this is mostly of theoretical interest; in practice, we observe that setting $\epsilon(t) / \rho(t) = a$ for some constant $a < 1$ usually yields satisfactory performance, as we show numerically in the next section.
+
+According to Theorem 1, the momentum $\pmb{v}(t)$ converges to the (unknown) true gradient $\nabla f(\pmb{x}(t))$ , so the ProxSGD algorithm eventually behaves similarly to the (deterministic) gradient descent algorithm. This property is essential to guarantee the convergence of the ProxSGD algorithm.
+
+To guarantee the theoretical convergence, the quadratic gain $\pmb{\tau}(t)$ in the approximation subproblem (7) should be lower bounded by some positive constant (it does not even have to be time-varying). In practice, there are various rationales to define it (see Table 1), and they lead to different empirical convergence speed and generalization performance.
+
+The technical assumptions in Theorem 1 may not always be fully satisfied by the neural networks deployed in practice, due to, e.g., the nonsmooth ReLU activation function, batch normalization and dropout. Nevertheless, Theorem 1 still provides valuable guidance on the algorithm's practical performance and the choice of the hyperparameters.
+
+# 3 SIMULATION RESULTS
+
+In this section, we perform numerical experiments to test the proposed ProxSGD algorithm. In particular, we first train two SNNs to compare ProxSGD with ADAM (Kingma & Ba, 2015), AMSGrad (Reddi et al., 2018), ADABound (Luo et al., 2019) and SGD with momentum. Then we train a BNN to illustrate the merit of regularization and constraints. To ensure a fair comparison, the hyperparameters of all algorithms are chosen according to either the inventors' recommendations or a hyperparameter search. Furthermore, in all simulations, the quadratic gain $\pmb{\tau}(t)$ in ProxSGD is updated in the same way as in ADAM, with $\beta = 0.999$ (see Table 1).
+
+# 3.1 SPARSE NEURAL NETWORK: TRAINING CONVOLUTIONAL NEURAL NETWORKS ON CIFAR-10
+
+We first consider the multiclass classification problem on the CIFAR-10 dataset (Krizhevsky, 2009) with a convolutional neural network (CNN). The network has 6 convolutional layers, each followed by a batch normalization layer; the exact settings are shown in Table 2.
+
+Table 2: CNN Settings
+
+| parameter | value |
+| --- | --- |
+| dataset | CIFAR-10 |
+| number of convolutional layers | 6 |
+| size of convolution kernels | 3×3 |
+| number of output filters in convolutional layers 1-2, 3-4, 5-6 | 32, 64, 128 |
+| operations after convolutional layers 1-2, 3-4, 5-6 | max pooling, dropout (rate=0.2) |
+| kernel size, stride, padding of max pooling | 2×2, 2, valid |
+| activation function for convolutional/output layer | elu/softmax |
+| loss function and regularization function | cross entropy and ℓ1-norm |
+
+Following the parameter configurations of ADAM in Kingma & Ba (2015), AMSGrad in Reddi et al. (2018), and ADABound in Luo et al. (2019), we set $\rho = 0.1$ , $\beta = 0.999$ and $\epsilon = 0.001$ (see Table 1), which are uniform across all algorithms and commonly used in practice. Note that we have also activated $\ell_1$ -regularization for these algorithms via the built-in functions in TensorFlow/PyTorch, which amounts to adding the subgradient of the $\ell_1$ -norm to the gradient of the loss function. For the proposed ProxSGD, $\epsilon(t)$ and $\rho(t)$ decrease over the iterations as follows,
+
+$$
+\epsilon (t) = \frac {0 . 0 6}{(t + 4) ^ {0 . 5}}, \rho (t) = \frac {0 . 9}{(t + 4) ^ {0 . 5}}. \tag {20}
+$$
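+As a sanity check on (20): these schedules keep the ratio $\epsilon(t) / \rho(t)$ at the constant $0.06 / 0.9 = 1/15 < 1$, in line with the practical remark after Theorem 1 (even though the theory asks for a vanishing ratio):

```python
# Step-size schedules from (20); the ratio eps(t)/rho(t) is the constant 1/15.
def eps(t):
    return 0.06 / (t + 4) ** 0.5

def rho(t):
    return 0.90 / (t + 4) ** 0.5

ratios = [eps(t) / rho(t) for t in range(1000)]
```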
+
+Recall that the $\ell_1$ -norm in the approximation subproblem naturally leads to the soft-thresholding proximal mapping, see (10). The regularization parameter $\mu$ in the soft-thresholding then permits controlling the sparsity of the parameter variable $\pmb{x}$ ; in this experiment we set $\mu = 5 \cdot 10^{-5}$ .
+
+Figure 1: Performance comparison for CNN on CIFAR-10. (a) Training Loss; (b) Test Accuracy; (c) CDF of Weights.
+
+In Figure 1, we compare the four algorithms (ProxSGD, ADAM, AMSGrad, ADABound) in terms of three metrics, namely, the training loss, the test accuracy and the achieved sparsity. On the one hand, Figure 1(a) shows that ProxSGD outperforms ADAM, AMSGrad and ADABound in the achieved loss value. On the other hand, the achieved accuracy is comparable, see Figure 1(b).
+
+The sparsity of the trained model is measured by the cumulative distribution function (CDF) of the weights' values, which specifies the percentage of weights below any given value. For the proposed ProxSGD in Figure 1(c), we observe an abrupt jump of the CDF at 0 on the x-axis, which implies that more than $90\%$ of the weights are exactly zero. By comparison, only $40\% - 50\%$ of the weights are exactly zero for the other algorithms. What is more, for this experiment, the soft-thresholding proximal operator in ProxSGD does not increase the computation time: ADAM 17.24s (per epoch), AMSGrad 17.44s, ADABound 16.38s, ProxSGD 16.04s. Therefore, in this experiment, the proposed ProxSGD with the soft-thresholding proximal mapping has a clear and substantial advantage over the other stochastic subgradient-based algorithms.
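+The sparsity level quoted above can be read off programmatically: the CDF of the weights jumps at 0 by exactly the fraction of weights that are identically zero. A minimal sketch (the weight vector is hypothetical):

```python
import numpy as np

# Fraction of weights that are exactly zero, i.e. the height of the CDF jump at 0.
def zero_fraction(w):
    return float(np.mean(w == 0.0))

w = np.array([0.0, 0.0, 0.0, -0.2, 0.0, 0.31, 0.0, 0.0, 0.0, 0.0])  # hypothetical
```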
+
+# 3.2 SPARSE NEURAL NETWORK: TRAINING DENSENET-201 ON CIFAR-100
+
+In this subsection, the performance of ProxSGD is evaluated on a much more complex network and dataset. In particular, we train the DenseNet-201 network (Huang et al., 2017) on CIFAR-100 (Krizhevsky, 2009). DenseNet-201 is the deepest topology of the DenseNet family and belongs to the state-of-the-art networks in image classification tasks. We train the network using ProxSGD, ADAM and SGD with momentum. To ensure a fair comparison among these algorithms, the learning rate is not explicitly decayed during training for any algorithm. Furthermore, the ideal hyperparameters for each algorithm were found by grid search, and the curves are averaged over five runs. A batch size of 128 is adopted. For ProxSGD, the regularization parameter is $\mu = 10^{-5}$ , and the learning rates for the weights and the momentum are, respectively,
+
+$$
+\epsilon (t) = \frac {0 . 1 5}{(t + 4) ^ {0 . 5}}, \rho (t) = \frac {0 . 9}{(t + 4) ^ {0 . 5}}.
+$$
+
+For ADAM, $\epsilon = 6\cdot 10^{-4}$ and $\rho = 0.1$ . SGD with momentum uses a learning rate of $\epsilon = 6\cdot 10^{-3}$ and a momentum of 0.9 (so $\rho = 0.1$ ). The regularization parameter for both ADAM and SGD with momentum is $\mu = 10^{-4}$ .
+
+Figure 2: Performance comparison for DenseNet-201 on CIFAR-100. (a) Training Loss; (b) Test Accuracy; (c) CDF of Weights.
+
+Figure 2 shows the performance of ProxSGD and the other algorithms for DenseNet-201 trained on CIFAR-100. We see from Figure 2(a) that ProxSGD achieves the lowest training loss after Epoch 10. The test accuracy in Figure 2(b) shows that all algorithms achieve similar accuracy and ProxSGD outperforms the other two during the early stage of training. We remark that this is achieved with a much sparser network, as shown in Figure 2(c). In particular, we can see from the zoomed-in part of Figure 2(c) that SGD with momentum has approximately $70\%$ of its weights at zero, while most weights learned by ADAM are not exactly zero (although they are very small). In contrast, ProxSGD reaches a sparsity of $92 - 94\%$ .
+
+In Figure 3, we demonstrate that ProxSGD is much more efficient in generating a SNN, irrespective of the hyperparameters (related to the learning rate). In particular, we try many different initial learning rates of the weight vector, $\epsilon(0)$ , for ProxSGD and test their performance. From Figure 3(a)-(b) we see that, as expected, the hyperparameters affect the achieved training loss and test accuracy, and many lead to a worse training loss and/or test accuracy than ADAM and SGD with momentum. However, Figure 3(c) shows that most of them (except when they are too small: $\epsilon(0) = 0.01$ and $0.001$ ) generate a much sparser NN than both ADAM and SGD with momentum. These observations are also consistent with the theoretical framework in Section 2: interpreting ADAM and SGD with
+
+Figure 3: Hyperparameters and sparsity for DenseNet-201 on CIFAR-100. (a) Training Loss; (b) Test Accuracy; (c) CDF of Weights.
+
+momentum as special cases of ProxSGD implies that they have the same convergence rate, and the sparsity is due to the explicit use of the nonsmooth $\ell_1$ -norm regularization.
+
+For this experiment, the soft-thresholding proximal operator in ProxSGD increases the training time: the average time per epoch for ProxSGD is $3.5\mathrm{min}$ , SGD with momentum $2.8\mathrm{min}$ and ADAM 2.9 min. In view of the higher level of sparsity achieved by ProxSGD, this increase in computation time is reasonable and affordable.
+
+# 3.3 BINARY NEURAL NETWORKS: TRAINING DEEP NEURAL NETWORKS ON MNIST
+
+In this subsection, we evaluate the proposed ProxSGD algorithm in training a BNN by solving problem (3). We train a 6-layer fully-connected deep neural network (DNN) on the MNIST dataset, and we use the tanh activation function to promote a binary activation output; see Table 3. The algorithm parameters are the same as in Sec. 3.1, except that $\mu = 2 \cdot 10^{-4}$ . The chosen setup is particularly suited to evaluating the merit of the proposed method, since MNIST is a simple dataset and allows us to isolate the effect of the proposed model and training algorithm.
+
+Table 3: DNN Settings
+
+| parameter | value |
+| --- | --- |
+| dataset | MNIST |
+| number of hidden layers | 6 |
+| number of nodes per hidden layer | 200 |
+| activation function in hidden/output layer | tanh/softmax |
+| loss function | cross entropy |
+
+After customizing Algorithm 1 to problem (3), the approximation subproblem is
+
+$$
+(\widehat{\boldsymbol{x}}(t),\widehat{\boldsymbol{a}}(t)) = \operatorname*{arg\,min}_{\boldsymbol{x},\, \boldsymbol{0}\leq \boldsymbol{a}\leq \boldsymbol{1}}\left\{ \begin{array}{l}(\boldsymbol{x} - \boldsymbol{x}(t))^{T}\boldsymbol{v}_{x}(t) + \frac{1}{2}(\boldsymbol{x} - \boldsymbol{x}(t))^{T}\operatorname{diag}(\boldsymbol{\tau}_{x}(t))(\boldsymbol{x} - \boldsymbol{x}(t))\\ +\,(\boldsymbol{a} - \boldsymbol{a}(t))^{T}\boldsymbol{v}_{a}(t) + \frac{1}{2}(\boldsymbol{a} - \boldsymbol{a}(t))^{T}\operatorname{diag}(\boldsymbol{\tau}_{a}(t))(\boldsymbol{a} - \boldsymbol{a}(t)) \end{array} \right\}.
+$$
+
+Both $\widehat{\pmb{x}}(t)$ and $\widehat{\pmb{a}}(t)$ have closed-form expressions (cf. (9) and (12))
+
+$$
+\widehat{\boldsymbol{x}}(t) = \boldsymbol{x}(t) - \frac{\boldsymbol{v}_{x}(t)}{\boldsymbol{\tau}_{x}(t)}, \quad \text{and} \quad \widehat{\boldsymbol{a}}(t) = \left[\boldsymbol{a}(t) - \frac{\boldsymbol{v}_{a}(t)}{\boldsymbol{\tau}_{a}(t)}\right]_{\mathbf{0}}^{\mathbf{1}}, \tag{21}
+$$
+
+where $\pmb{v}_x(t)$ and $\pmb{v}_a(t)$ are the momentum updated in the spirit of (6), with the gradients given by
+
+$$
+\boldsymbol{g}_{x}(t) = \frac{1}{m(t)}\sum_{i \in \mathbb{M}(t)}\nabla f_{i}(\boldsymbol{x}(t), \boldsymbol{y}^{(i)}) + \frac{\mu}{2}(\boldsymbol{x}(t) + 2\boldsymbol{a}(t) - \mathbf{1}), \quad \text{and} \quad \boldsymbol{g}_{a}(t) = \mu \boldsymbol{x}(t).
+$$
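+A compact sketch of the closed-form updates in (21) (variable names are ours; the gradients $\boldsymbol{g}_x, \boldsymbol{g}_a$ above would be fed into the momentum recursion (6) to produce $\boldsymbol{v}_x, \boldsymbol{v}_a$):

```python
import numpy as np

# Closed-form solutions of the BNN approximation subproblem, cf. (21):
# an unconstrained scaled step for x and a step clipped to [0, 1] for a.
def bnn_step(x, a, v_x, v_a, tau_x, tau_a):
    x_hat = x - v_x / tau_x
    a_hat = np.clip(a - v_a / tau_a, 0.0, 1.0)
    return x_hat, a_hat
```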
+
+The training loss is shown in Figure 4(a). We remark that during the training process of ProxSGD, the weights are not binarized, because the penalty should regularize the problem in such a way that the optimal weights (to which ProxSGD converges) are exactly or close to 1 or -1. After training is completed, the CDF of the learned weights is summarized in Figure 4(c), and then the
+
+Figure 4: Performance comparison for BNN on MNIST. (a) Training Loss; (b) Test Accuracy; (c) CDF of Weights.
+
+learned weights are binarized to generate a full BNN, whose test accuracy is shown in Figure 4(b). On the one hand, we see from Figure 4(a)-(b) that the achieved training loss and test accuracy of the BNN are worse than those of the standard full-precision DNN (possibly with soft-thresholding). This is expected, as the BNN imposes regularization and constraints on the optimization problem and reduces the search space. However, the difference in test accuracy is quite small. On the other hand, we see from Figure 4(c) that the regularization in the proposed formulation (3) is very effective in promoting binary weights: $15\%$ of the weights are in the range $(-1,-0.5)$ , $15\%$ are in the range $(0.5,1)$ , and all the other weights are exactly -1 or 1. As all weights are exactly or close to 1 or -1, we can binarize the weights to exactly 1 or -1 by hard thresholding only once, after the training is completed, and the incurred performance loss is small ( $98\%$ versus $95\%$ test accuracy). In contrast, the weights generated by the full-precision DNN (that is, without regularization) are smoothly distributed in $[-2,2]$ .
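+The one-shot hard thresholding described above can be sketched as follows (the example weights are hypothetical):

```python
import numpy as np

# After training, snap each weight to the nearest value in {-1, +1}.
def binarize(w):
    return np.where(w >= 0.0, 1.0, -1.0)

w = np.array([0.97, -1.0, 0.62, -0.71, 1.0])  # hypothetical trained weights
```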
+
+Even though the proposed formulation (3) doubles the number of parameters to optimize (from $\pmb{x}$ in the full-precision DNN to $(\pmb{x},\pmb{a})$ in the BNN), the convergence speed is equally fast in terms of the number of iterations. The computation time is also roughly the same: full-precision DNN 13.06s (per epoch) and ProxSGD 12.21s. We remark that $\pmb{g}_{a}(t)$ , the batch gradient w.r.t. $\pmb{a}$ , has a closed-form expression and does not involve back-propagation. In comparison with the algorithm in Courbariaux et al. (2016), the proposed ProxSGD converges much faster and achieves a much better training loss and test accuracy (95% versus 89%; the computation time per epoch for Courbariaux et al. (2016) is 13.56s). The notable performance improvement is due to the regularization and constraints. Naturally we must make an effort to search for a proper regularization parameter $\mu$ , but this effort pays off very well. Furthermore, we observe in the simulations that the performance is not sensitive to the exact value of $\mu$ , as long as it is in an appropriate range.
+
+# 4 CONCLUDING REMARKS
+
+In this paper, we proposed ProxSGD, a proximal-type stochastic gradient descent algorithm with momentum, for constrained optimization problems where the smooth loss function is augmented by a nonsmooth and convex regularization. We considered two applications, namely the stochastic training of SNNs and BNNs, to show that regularization and constraints can effectively promote structure in the learned network. More generally, incorporating regularization and constraints allows us to use a more accurate and interpretable model for the problem at hand, and the proposed convergent ProxSGD algorithm ensures efficient training. Numerical tests showed that ProxSGD outperforms state-of-the-art algorithms in terms of convergence speed, achieved training loss and/or the desired structure in the learned neural networks.
+
+# ACKNOWLEDGEMENT
+
+The work of Yang is supported by DFG Project DeTol. The work of van Sloun is part of the research programme Rubicon ENW 2018-3 with project number 019.183EN.014, which is financed by the Dutch Research Council (NWO). The work of Yuan, Lei and Chatzinotas is supported by ERC AGNOSTIC, FNR CORE ASWELL and FNR-AFR LARGOS.
+
+# REFERENCES
+
+Francis Bach, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. Optimization with Sparsity-Inducing Penalties. Foundations and Trends in Machine Learning, 4(1):1-106, 2011. doi: 10.1561/2200000015. URL http://arxiv.org/abs/1108.0775.
+Yu Bai, Yu-Xiang Wang, and Edo Liberty. Proxquant: Quantized neural networks via proximal operators. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HyzMyhCcK7.
+Dimitri P. Bertsekas and John N. Tsitsiklis. Gradient convergence in gradient methods with errors. SIAM Journal on Optimization, 10(3):627-642, 2000. doi: 10.1137/S1052623497331063. URL http://link.aip.org/link/SJOPE8/v10/i3/p627/s1&Agg=doi.
+Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1), 2010. doi: 10.1561/2200000016. URL http://www-nowpublishers.com/product.aspx?product=MAL&doi=2200000016.
+Xiangyi Chen, Sijia Liu, Ruoyu Sun, and Mingyi Hong. On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization. In International Conference on Learning Representations, 2019. URL http://arxiv.org/abs/1808.02941.
+Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in Neural Information Processing Systems 28, pp. 3123-3131, 2015.
+Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized Neural Networks. In Advances in Neural Information Processing Systems 29, 2016. URL https://papers.nips.cc/paper/6573-binarized-neural-networks.pdf.
+Trevor Gale, Erich Elsen, and Sara Hooker. The State of Sparsity in Deep Neural Networks. 2019. URL http://arxiv.org/abs/1902.09574.
+Ian Goodfellow, Yoshua Bengio, Aaron Courville, and Yoshua Bengio. Deep learning, volume 1. MIT Press, 2016.
+Lu Hou, Quanming Yao, and James T. Kwok. Loss-aware binarization of deep networks. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=S1owlN9ll.
+Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In The Conference on Computer Vision and Pattern Recognition, 2017. URL https://arxiv.org/abs/1608.06993.
+Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations, 2015. URL http://arxiv.org/abs/1412.6980.
+Alex Krizhevsky. Learning multiple layers of features from tiny images, 2009. URL https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf.
+Yunwen Lei, Ting Hu, and Ke Tang. Stochastic Gradient Descent for Nonconvex Learning without Bounded Gradient Assumptions. to appear in IEEE Transactions on Neural Networks and Learning Systems, 2019. URL https://ieeexplore.ieee.org/document/8930994.
+Christos Louizos, Max Welling, and Diederik P. Kingma. Learning Sparse Neural Networks through $L_0$ Regularization. In International Conference on Learning Representations, 2018. URL http://arxiv.org/abs/1712.01312.
+Liangchen Luo, Yuanhao Xiong, Yan Liu, and Xu Sun. Adaptive Gradient Method with Dynamic Bound of Learning Rate. In International Conference on Learning Representations, 2019. URL http://arxiv.org/abs/1902.09843v1.
+
+Neal Parikh and Stephen Boyd. Proximal algorithms. Foundations and Trends in Optimization, 1 (3):127-239, 2014. doi: 10.1561/2400000003. URL http://www-nowpublishers.com/articles/foundations-and-trends-in-optimization/OPT-003.
+Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of ADAM and beyond. In International Conference on Learning Representations, 2018. URL http://arxiv.org/abs/1904.09237.
+Andrzej Ruszczyński. Feasible direction methods for stochastic programming problems. Mathematical Programming, 19(1):220-229, December 1980. doi: 10.1007/BF01581643. URL http://www.springerlink.com/index/10.1007/BF01581643.
+Yang Yang, Gesualdo Scutari, Daniel P Palomar, and Marius Pesavento. A parallel decomposition method for nonconvex stochastic multi-agent optimization problems. IEEE Transactions on Signal Processing, 64(11):2949-2964, June 2016. doi: 10.1109/TSP.2016.2531627. URL http://ieeexplore.ieee.org/document/7412752/.
+Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, and Jack Xin. BinaryRelax: A Relaxation Approach for Training Deep Neural Networks with Quantized Weights. SIAM Journal on Imaging Sciences, 11(4):2205-2223, January 2018. URL https://epubs.siam.org/doi/10.1137/18M1166134.
+
+# A APPENDIX: PROOF OF THEOREM 1
+
+Proof. The claim $\lim_{t\to \infty}\| \pmb {v}(t) - \nabla f(\pmb {x}(t))\| = 0$ is a consequence of (Ruszczyński, 1980, Lemma 1). To see this, we just need to verify that all the technical conditions therein are satisfied by the problem at hand. Specifically, Condition (a) of (Ruszczyński, 1980, Lemma 1) is satisfied because $\mathbb{X}$ is closed and bounded. Condition (b) of (Ruszczyński, 1980, Lemma 1) is exactly (18). Conditions (c)-(d) of (Ruszczyński, 1980, Lemma 1) come from the stepsize rules in (19) of Theorem 1. Condition (e) of (Ruszczyński, 1980, Lemma 1) comes from the Lipschitz property of $\nabla f$ and stepsize rule in (19) of Theorem 1.
+
+We need the following intermediate result to prove that every limit point of the sequence $\{\pmb{x}(t)\}$ is a stationary point of (1).
+
+Lemma 1. There exists a constant $\widehat{L}$ such that
+
+$$
+\left\| \widehat {\boldsymbol {x}} (\boldsymbol {x} (t _ {1}), \boldsymbol {\xi} (t _ {1})) - \widehat {\boldsymbol {x}} (\boldsymbol {x} (t _ {2}), \boldsymbol {\xi} (t _ {2})) \right\| \leq \widehat {L} \left\| \boldsymbol {x} (t _ {1}) - \boldsymbol {x} (t _ {2}) \right\| + e (t _ {1}, t _ {2}),
+$$
+
+and $\lim_{t_1,t_2\to \infty}e(t_1,t_2) = 0$ w.p.1.
+
+Proof. We assume without loss of generality (w.l.o.g.) that $\pmb{\tau}(t) = \pmb{\tau}\mathbf{1}$ , and the approximation subproblem (7) reduces to
+
+$$
+\widehat{\boldsymbol{x}}(t) \triangleq \operatorname*{arg\,min}_{\boldsymbol{x}\in\mathbb{X}}\left\{(\boldsymbol{x} - \boldsymbol{x}(t))^{T}\boldsymbol{v}(t) + \frac{\tau}{2}\|\boldsymbol{x} - \boldsymbol{x}(t)\|_{2}^{2} + r(\boldsymbol{x})\right\}.
+$$
+
+It is further equivalent to
+
+$$
+\min _ {\boldsymbol {x} \in \mathbb {X}, r (\boldsymbol {x}) \leq y} \left\{\left(\boldsymbol {x} - \boldsymbol {x} (t)\right) ^ {T} \boldsymbol {v} (t) + \frac {\tau}{2} \| \boldsymbol {x} - \boldsymbol {x} (t) \| _ {2} ^ {2} + y \right\}, \tag {22}
+$$
+
+where the (unique) optimal $\pmb{x}$ and $y$ are $\widehat{\pmb{x}}(t)$ and $r(\widehat{\pmb{x}}(t))$ , respectively.
+
+We assume w.l.o.g. that $t_2 > t_1$ . It follows from the first-order optimality conditions that
+
+$$
+\left(\boldsymbol{x} - \widehat{\boldsymbol{x}}(t_{1})\right)^{T}\left(\boldsymbol{v}(t_{1}) + \tau\left(\widehat{\boldsymbol{x}}(t_{1}) - \boldsymbol{x}(t_{1})\right)\right) + y - r\left(\widehat{\boldsymbol{x}}(t_{1})\right) \geq 0, \quad \forall \boldsymbol{x}, y \text{ such that } r(\boldsymbol{x}) \leq y, \tag{23a}
+$$
+
+$$
+\left(\boldsymbol{x} - \widehat{\boldsymbol{x}}(t_{2})\right)^{T}\left(\boldsymbol{v}(t_{2}) + \tau\left(\widehat{\boldsymbol{x}}(t_{2}) - \boldsymbol{x}(t_{2})\right)\right) + y - r\left(\widehat{\boldsymbol{x}}(t_{2})\right) \geq 0, \quad \forall \boldsymbol{x}, y \text{ such that } r(\boldsymbol{x}) \leq y. \tag{23b}
+$$
+
+Setting $(\pmb{x},y) = (\widehat{\pmb{x}}(t_2), r(\widehat{\pmb{x}}(t_2)))$ in (23a) and $(\pmb{x},y) = (\widehat{\pmb{x}}(t_1), r(\widehat{\pmb{x}}(t_1)))$ in (23b), and adding them up, we obtain
+
+$$
+\left(\widehat {\boldsymbol {x}} \left(t _ {1}\right) - \widehat {\boldsymbol {x}} \left(t _ {2}\right)\right) ^ {T} \left(\boldsymbol {v} \left(t _ {1}\right) - \boldsymbol {v} \left(t _ {2}\right)\right) - \tau \left(\boldsymbol {x} \left(t _ {1}\right) - \boldsymbol {x} \left(t _ {2}\right)\right) ^ {T} \left(\widehat {\boldsymbol {x}} \left(t _ {1}\right) - \widehat {\boldsymbol {x}} \left(t _ {2}\right)\right) \leq - \tau \| \widehat {\boldsymbol {x}} \left(t _ {1}\right) - \widehat {\boldsymbol {x}} \left(t _ {2}\right) \| _ {2} ^ {2}. \tag {24}
+$$
+
+The term on the left hand side can be lower bounded as follows:
+
+$$
+\begin{array}{l} \left\langle \widehat {\boldsymbol {x}} (t _ {1}) - \widehat {\boldsymbol {x}} (t _ {2}), \boldsymbol {v} (t _ {1}) - \nabla f (\boldsymbol {x} (t _ {1})) - \boldsymbol {v} (t _ {2}) + \nabla f (\boldsymbol {x} (t _ {2})) \right\rangle \\ + \left\langle \widehat {\boldsymbol {x}} (t _ {1}) - \widehat {\boldsymbol {x}} (t _ {2}), \nabla f (\boldsymbol {x} (t _ {1})) - \nabla f (\boldsymbol {x} (t _ {2})) \right\rangle - \tau \left\langle \widehat {\boldsymbol {x}} (t _ {1}) - \widehat {\boldsymbol {x}} (t _ {2}), \boldsymbol {x} (t _ {1}) - \boldsymbol {x} (t _ {2}) \right\rangle \\ \geq - \left\| \widehat {\boldsymbol {x}} \left(t _ {1}\right) - \widehat {\boldsymbol {x}} \left(t _ {2}\right) \right\| \left(\varepsilon \left(t _ {1}\right) + \varepsilon \left(t _ {2}\right)\right) - (L + \tau) \| \widehat {\boldsymbol {x}} \left(t _ {1}\right) - \widehat {\boldsymbol {x}} \left(t _ {2}\right) \| \| \boldsymbol {x} \left(t _ {1}\right) - \boldsymbol {x} \left(t _ {2}\right) \| \tag {25} \\ \end{array}
+$$
+
+where the inequality comes from the Lipschitz continuity of $\nabla f(\pmb{x})$ , with $\varepsilon(t) \triangleq \|\pmb{v}(t) - \nabla f(\pmb{x}(t))\|$ .
+
+Combining the inequalities (24) and (25), we have
+
+$$
+\left\| \widehat {\boldsymbol {x}} (t _ {1}) - \widehat {\boldsymbol {x}} (t _ {2}) \right\| \leq (L + \tau) \tau^ {- 1} \left\| \boldsymbol {x} (t _ {1}) - \boldsymbol {x} (t _ {2}) \right\| + \tau^ {- 1} (\varepsilon (t _ {1}) + \varepsilon (t _ {2})),
+$$
+
+which leads to the desired (asymptotic) Lipschitz property:
+
+$$
+\left\| \widehat {\boldsymbol {x}} (t _ {1}) - \widehat {\boldsymbol {x}} (t _ {2}) \right\| \leq \widehat {L} \left\| \boldsymbol {x} (t _ {1}) - \boldsymbol {x} (t _ {2}) \right\| + e (t _ {1}, t _ {2}),
+$$
+
+with $\widehat{L} \triangleq \tau^{-1}(L + \tau)$ and $e(t_1, t_2) \triangleq \tau^{-1}(\varepsilon(t_1) + \varepsilon(t_2))$ , and $\lim_{t_1 \to \infty, t_2 \to \infty} e(t_1, t_2) = 0$ w.p.1.
+
+Define $U(\pmb{x}) \triangleq f(\pmb{x}) + r(\pmb{x})$ . Following the line of analysis from (15) to (16), we obtain
+
+$$
+U(\boldsymbol{x}(t+1)) - U(\boldsymbol{x}(t)) \tag{26}
+$$
+
+$$
+\begin{array}{l} \leq \epsilon(t)\left((\widehat{\boldsymbol{x}}(t) - \boldsymbol{x}(t))^{T}\nabla f(\boldsymbol{x}(t)) + r(\widehat{\boldsymbol{x}}(t)) - r(\boldsymbol{x}(t))\right) + \frac{L}{2}\epsilon(t)^{2}\|\widehat{\boldsymbol{x}}(t) - \boldsymbol{x}(t)\|^{2} \\ = \epsilon(t)\left((\widehat{\boldsymbol{x}}(t) - \boldsymbol{x}(t))^{T}(\nabla f(\boldsymbol{x}(t)) - \boldsymbol{v}(t)) + (\widehat{\boldsymbol{x}}(t) - \boldsymbol{x}(t))^{T}\boldsymbol{v}(t) + r(\widehat{\boldsymbol{x}}(t)) - r(\boldsymbol{x}(t))\right) + \frac{L}{2}\epsilon(t)^{2}\|\widehat{\boldsymbol{x}}(t) - \boldsymbol{x}(t)\|^{2} \\ \leq -\epsilon(t)\left(\tau - \frac{L}{2}\epsilon(t)\right)\|\widehat{\boldsymbol{x}}(t) - \boldsymbol{x}(t)\|^{2} + \epsilon(t)\|\widehat{\boldsymbol{x}}(t) - \boldsymbol{x}(t)\|\|\nabla f(\boldsymbol{x}(t)) - \boldsymbol{v}(t)\|, \tag{27} \end{array}
+$$
+
+where in the last inequality we used (14) and the Cauchy-Schwarz inequality.
+
+Let us show by contradiction that $\liminf_{t\to \infty}\| \widehat{\pmb{x}}(t) - \pmb{x}(t)\| = 0$ w.p.1. Suppose $\liminf_{t\to \infty}\| \widehat{\pmb{x}}(t) - \pmb{x}(t)\| \geq \chi > 0$ with a positive probability. Then we can find a realization such that at the same time $\| \widehat{\pmb{x}}(t) - \pmb{x}(t)\| \geq \chi > 0$ for all $t$ and $\lim_{t\to \infty}\| \nabla f(\pmb{x}(t)) - \pmb{v}(t)\| = 0$ ; we focus next on such a realization. Using $\| \widehat{\pmb{x}}(t) - \pmb{x}(t)\| \geq \chi > 0$ , the inequality (27) implies
+
+$$
+U (\boldsymbol {x} (t + 1)) - U (\boldsymbol {x} (t)) \leq - \epsilon (t) \left(\tau - \frac {L}{2} \epsilon (t) - \frac {1}{\chi} \| \nabla f (\boldsymbol {x} (t)) - \boldsymbol {v} (t) \|\right) \| \widehat {\boldsymbol {x}} (t) - \boldsymbol {x} (t) \| ^ {2}. \tag {28}
+$$
+
+Since $\lim_{t\to \infty}\left\| \nabla f(\pmb {x}(t)) - \pmb {v}(t)\right\| = 0$ , there exists a $t_0$ sufficiently large such that
+
+$$
+\tau - \frac {L}{2} \epsilon (t) - \frac {1}{\chi} \| \nabla f (\boldsymbol {x} (t)) - \boldsymbol {v} (t) \| \geq \bar {\tau} > 0, \quad \forall t \geq t _ {0}. \tag {29}
+$$
+
+Therefore, it follows from (28) and (29) that
+
+$$
+U(\boldsymbol{x}(t)) - U(\boldsymbol{x}(t_{0})) \leq -\bar{\tau}\chi^{2}\sum_{n = t_{0}}^{t - 1}\epsilon(n), \tag{30}
+$$
+
+which, in view of $\sum_{n = t_0}^{\infty}\epsilon(n) = \infty$ , contradicts the boundedness of $\{U(\pmb{x}(t))\}$ . Therefore it must be that $\liminf_{t\to \infty}\| \widehat{\pmb{x}}(t) - \pmb{x}(t)\| = 0$ w.p.1.
+
+Let us show by contradiction that $\limsup_{t\to \infty}\| \widehat{\pmb{x}} (t) - \pmb {x}(t)\| = 0$ w.p.1. Suppose $\limsup_{t\to \infty}\| \widehat{\pmb{x}} (t) - \pmb {x}(t)\| >0$ with a positive probability. We focus next on a realization along which $\limsup_{t\to \infty}\| \widehat{\pmb{x}} (t) - \pmb {x}(t)\| >0$ , $\lim_{t\to \infty}\bigl \| \nabla f(\pmb {x}(t)) - \pmb {v}(t)\bigr \| = 0$ , $\liminf_{t\to \infty}\bigl \| \widehat{\pmb{x}} (t) - \pmb {x}(t)\bigr \| = 0$ , and $\lim_{t_1,t_2\to \infty}e(t_1,t_2) = 0$ , where $e(t_{1},t_{2})$ is defined in Lemma 1. It follows from $\limsup_{t\to \infty}\| \widehat{\pmb{x}} (t) - \pmb {x}(t)\| >0$ and $\liminf_{t\to \infty}\bigl \| \widehat{\pmb{x}} (t) - \pmb {x}(t)\bigr \| = 0$ that there exists a $\delta >0$ such that $\| \triangle \pmb{x}(t)\| \geq 2\delta$ (with $\triangle \pmb{x}(t)\triangleq \widehat{\pmb{x}} (t) - \pmb{x}(t)$ ) for infinitely many $t$ , and also $\| \triangle \pmb{x}(t)\| < \delta$ for infinitely many $t$ . Therefore, one can always find an infinite set of indices, say $\mathcal{T}$ , with the following property: for any $t\in \mathcal{T}$ , there exists an integer $i_t > t$ such that
+
+$$
+\left\| \triangle \boldsymbol {x} (t) \right\| < \delta , \quad \left\| \triangle \boldsymbol {x} \left(i _ {t}\right) \right\| > 2 \delta , \quad \delta \leq \left\| \triangle \boldsymbol {x} (n) \right\| \leq 2 \delta , t < n < i _ {t}. \tag {31}
+$$
+
+Given the above bounds, the following holds: for all $t \in \mathcal{T}$ ,
+
+$$
+\begin{array}{l} \delta \leq \| \triangle \boldsymbol {x} (i _ {t}) \| - \| \triangle \boldsymbol {x} (t) \| \\ \leq \| \triangle \boldsymbol {x} (i _ {t}) - \triangle \boldsymbol {x} (t) \| = \| (\widehat {\boldsymbol {x}} (i _ {t}) - \boldsymbol {x} (i _ {t})) - (\widehat {\boldsymbol {x}} (t) - \boldsymbol {x} (t)) \| \\ \leq \left\| \widehat {\boldsymbol {x}} \left(i _ {t}\right) - \widehat {\boldsymbol {x}} (t) \right\| + \left\| \boldsymbol {x} \left(i _ {t}\right) - \boldsymbol {x} (t) \right\| \\ \leq (1 + \widehat {L}) \| \boldsymbol {x} (i _ {t}) - \boldsymbol {x} (t) \| + e (i _ {t}, t) \\ \leq (1 + \widehat {L}) \sum_ {n = t} ^ {i _ {t} - 1} \epsilon (n) \| \triangle \boldsymbol {x} (n) \| + e (i _ {t}, t) \\ \leq 2 \delta (1 + \widehat {L}) \sum_ {n = t} ^ {i _ {t} - 1} \epsilon (n) + e (i _ {t}, t), \tag {32} \\ \end{array}
+$$
+
+implying that
+
+$$
+\liminf _ {\mathcal {T} \ni t \rightarrow \infty} \sum_ {n = t} ^ {i _ {t} - 1} \epsilon (n) \geq \bar {\delta} _ {1} \triangleq \frac {1}{2 (1 + \widehat {L})} > 0. \tag {33}
+$$
+
+Proceeding as in (32), we also have, for all $t \in \mathcal{T}$ ,
+
+$$
+\left\| \triangle \boldsymbol {x} (t + 1) \right\| - \left\| \triangle \boldsymbol {x} (t) \right\| \leq \left\| \triangle \boldsymbol {x} (t + 1) - \triangle \boldsymbol {x} (t) \right\| \leq (1 + \widehat {L}) \epsilon (t) \left\| \triangle \boldsymbol {x} (t) \right\| + e (t, t + 1),
+$$
+
+which leads to
+
+$$
+(1 + (1 + \widehat {L}) \epsilon (t)) \| \triangle \boldsymbol {x} (t) \| + e (t, t + 1) \geq \| \triangle \boldsymbol {x} (t + 1) \| \geq \delta , \tag {34}
+$$
+
+where the second inequality follows from (31). It follows from (34) that there exists a $\bar{\delta}_2 > 0$ such that for sufficiently large $t \in \mathcal{T}$ ,
+
+$$
+\left\| \triangle \boldsymbol {x} (t) \right\| \geq \frac {\delta - e (t , t + 1)}{1 + (1 + \widehat {L}) \epsilon (t)} \geq \bar {\delta} _ {2} > 0. \tag {35}
+$$
+
+Hereafter we assume w.l.o.g. that (35) holds for all $t \in \mathcal{T}$ (in fact, one can always restrict $\{\pmb{x}(t)\}_{t \in \mathcal{T}}$ to a proper subsequence).
+
+We show now that (33) is in contradiction with the convergence of $\{U(\pmb{x}(t))\}$ . Invoking (27), we have for all $t \in \mathcal{T}$ ,
+
+$$
+\begin{array}{l} U (\boldsymbol {x} (t + 1)) - U (\boldsymbol {x} (t)) \leq - \epsilon (t) \left(\tau - \frac {L}{2} \epsilon (t)\right) \left\| \widehat {\boldsymbol {x}} (t) - \boldsymbol {x} (t) \right\| ^ {2} + \epsilon (t) \delta \left\| \nabla f (\boldsymbol {x} (t)) - \boldsymbol {v} (t) \right\| \\ \leq - \epsilon (t) \left(\tau - \frac {L}{2} \epsilon (t) - \frac {\left\| \nabla f (\boldsymbol {x} (t)) - \boldsymbol {v} (t) \right\|}{\delta}\right) \left\| \widehat {\boldsymbol {x}} (t) - \boldsymbol {x} (t) \right\| ^ {2} \\ + \epsilon (t) \delta \| \nabla f (\boldsymbol {x} (t)) - \boldsymbol {v} (t) \| ^ {2}, \tag {36} \\ \end{array}
+$$
+
+and for $t < n < i_t$
+
+$$
+\begin{array}{l} U (\pmb {x} (n + 1)) - U (\pmb {x} (n)) \leq - \epsilon (n) \left(\tau - \frac {L}{2} \epsilon (n) - \frac {\left\| \nabla f (\pmb {x} (n)) - \pmb {v} (n) \right\|}{\left\| \widehat {\pmb {x}} (n) - \pmb {x} (n) \right\|}\right) \left\| \widehat {\pmb {x}} (n) - \pmb {x} (n) \right\| ^ {2} \\ \leq - \epsilon (n) \left(\tau - \frac {L}{2} \epsilon (n) - \frac {\left\| \nabla f (\boldsymbol {x} (n)) - \boldsymbol {v} (n) \right\|}{\delta}\right) \left\| \widehat {\boldsymbol {x}} (n) - \boldsymbol {x} (n) \right\| ^ {2}, \tag {37} \\ \end{array}
+$$
+
+where the last inequality follows from (31). Adding (36) to the sum of (37) over $n = t + 1, \ldots, i_t - 1$ , and taking $t \in \mathcal{T}$ sufficiently large (so that $\tau - L\epsilon(t)/2 - \delta^{-1} \left\| \nabla f(\boldsymbol{x}(n)) - \boldsymbol{v}(n) \right\| \geq \widehat{\tau} > 0$ and $\left\| \nabla f(\boldsymbol{x}(t)) - \boldsymbol{v}(t) \right\| < \widehat{\tau} \bar{\delta}_2^2/\delta$ ), we have
+
+$$
+\begin{array}{l} U (\boldsymbol {x} (i _ {t})) - U (\boldsymbol {x} (t)) \stackrel {(a)} {\leq} - \widehat {\tau} \sum_ {n = t} ^ {i _ {t} - 1} \epsilon (n) \left\| \widehat {\boldsymbol {x}} (n) - \boldsymbol {x} (n) \right\| ^ {2} + \epsilon (t) \delta \left\| \nabla f (\boldsymbol {x} (t)) - \boldsymbol {v} (t) \right\| \\ \stackrel {(b)} {\leq} - \widehat {\tau} \bar {\delta} _ {2} ^ {2} \sum_ {n = t + 1} ^ {i _ {t} - 1} \epsilon (n) - \epsilon (t) \left(\widehat {\tau} \bar {\delta} _ {2} ^ {2} - \delta \left\| \nabla f (\boldsymbol {x} (t)) - \boldsymbol {v} (t) \right\|\right) \\ \stackrel {(c)} {\leq} - \widehat {\tau} \bar {\delta} _ {2} ^ {2} \sum_ {n = t + 1} ^ {i _ {t} - 1} \epsilon (n), \tag {38} \\ \end{array}
+$$
+
+where (a) follows from $\tau - L\epsilon(t)/2 - \delta^{-1}\|\nabla f(\pmb{x}(n)) - \pmb{v}(n)\| \geq \widehat{\tau} > 0$ ; (b) is due to (35); and in (c) we used $\|\nabla f(\pmb{x}(t)) - \pmb{v}(t)\| < \widehat{\tau}\bar{\delta}_2^2/\delta$ . Since $\{U(\pmb{x}(t))\}$ converges, it must be that $\lim_{\mathcal{T} \ni t \to \infty} \sum_{n=t+1}^{i_t-1} \epsilon(n) = 0$ , which contradicts (33). Therefore, it must be that $\limsup_{t \to \infty} \|\widehat{\pmb{x}}(t) - \pmb{x}(t)\| = 0$ w.p.1.
+
+Finally, let us prove that every limit point of the sequence $\{\pmb{x}(t)\}$ is a stationary solution of (1). Let $\pmb{x}^{\star}$ be the limit point of the convergent subsequence $\{\pmb{x}(t)\}_{t\in \mathcal{T}}$ . Taking the limit of (23a) over the index set $\mathcal{T}$ (and replacing w.l.o.g. $y$ by $r(x)$ ), we have
+
+$$
+\begin{array}{l} \lim _ {\mathcal {T} \ni t \rightarrow \infty} (\boldsymbol {x} - \widehat {\boldsymbol {x}} (t)) ^ {T} (\boldsymbol {v} (t) + \tau (\widehat {\boldsymbol {x}} (t) - \boldsymbol {x} (t))) + r (\boldsymbol {x}) - r (\widehat {\boldsymbol {x}} (t)) \\ = (\boldsymbol {x} - \boldsymbol {x} ^ {\star}) ^ {T} \nabla f (\boldsymbol {x} ^ {\star}) + r (\boldsymbol {x}) - r (\boldsymbol {x} ^ {\star}) \geq 0, \forall \boldsymbol {x} \in \mathbb {X}, \\ \end{array}
+$$
+
+where the last equality follows from: i) $\lim_{t\to \infty}\left\| \nabla f(\pmb {x}(t)) - \pmb {v}(t)\right\| = 0$ , and ii) $\lim_{t\to \infty}\left\| \widehat{\pmb{x}} (t) - \pmb {x}(t)\right\| = 0$ . This is the desired first-order optimality condition, so $\pmb{x}^{\star}$ is a stationary solution of (1).
\ No newline at end of file
diff --git a/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/images.zip b/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..93caff1ebb181cefc7d47ba3e2bcd5c6d2a9f00f
--- /dev/null
+++ b/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f63f385f7180c51e6deaec71e41d9c600d712efdd0e6d115a2c2935c65feab6
+size 853301
diff --git a/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/layout.json b/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..52665511b0f181fa7cf24cb446e230bb42450213
--- /dev/null
+++ b/proxsgdtrainingstructuredneuralnetworksunderregularizationandconstraints/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c1f12627a21e7a024f547816219450bb521066c78d14f16fcb5cf34c1a5d84bf
+size 681702
diff --git a/prunedgraphscatteringtransforms/c899b190-ad62-4dec-b5d8-1fa661c0e50d_content_list.json b/prunedgraphscatteringtransforms/c899b190-ad62-4dec-b5d8-1fa661c0e50d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..30d9dcc357ce35bfffcdc241834f44a7fe492192
--- /dev/null
+++ b/prunedgraphscatteringtransforms/c899b190-ad62-4dec-b5d8-1fa661c0e50d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:454a7e16b0b394ebe42d129836e61551627a1de8c67060967a289523300e0795
+size 96056
diff --git a/prunedgraphscatteringtransforms/c899b190-ad62-4dec-b5d8-1fa661c0e50d_model.json b/prunedgraphscatteringtransforms/c899b190-ad62-4dec-b5d8-1fa661c0e50d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..68faa97e14516e1fc894dee095a3c5abce7eb23e
--- /dev/null
+++ b/prunedgraphscatteringtransforms/c899b190-ad62-4dec-b5d8-1fa661c0e50d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43513271164702399981d331375ddacd2c4b40bdecd77a108687fc45f8e20364
+size 114561
diff --git a/prunedgraphscatteringtransforms/c899b190-ad62-4dec-b5d8-1fa661c0e50d_origin.pdf b/prunedgraphscatteringtransforms/c899b190-ad62-4dec-b5d8-1fa661c0e50d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..34df0aed4cc97007b13a90a80c1c64800551698b
--- /dev/null
+++ b/prunedgraphscatteringtransforms/c899b190-ad62-4dec-b5d8-1fa661c0e50d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7a9966fb8a0a64dfbfc22bcec75c4504b526d631920cbdb9a47102e99b86ea9c
+size 1335991
diff --git a/prunedgraphscatteringtransforms/full.md b/prunedgraphscatteringtransforms/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..832859ab18086a202dcad97c6fee98e102d9e225
--- /dev/null
+++ b/prunedgraphscatteringtransforms/full.md
@@ -0,0 +1,467 @@
+# PRUNED GRAPH SCATTERING TRANSFORMS
+
+Vassilis N. Ioannidis *
+
+Dpt. of Electrical and Computer Engineering
+
+Univ. of Minnesota
+
+Minneapolis, MN, USA
+
+ioann006@umn.edu
+
+Siheng Chen
+
+Mitsubishi Electric Research Laboratories
+
+Cambridge, MA, USA
+
+schen@merl.com
+
+Georgios B. Giannakis
+
+Dpt. of Electrical and Computer Engineering
+
+Univ. of Minnesota
+
+Minneapolis, MN, USA
+
+georgios@umn.edu
+
+# ABSTRACT
+
+Graph convolutional networks (GCNs) have achieved remarkable performance in a variety of network science learning tasks. However, theoretical analysis of such approaches is still in its infancy. Graph scattering transforms (GSTs) are non-trainable deep GCN models that are amenable to generalization and stability analyses. The present work addresses some limitations of GSTs by introducing a novel so-termed pruned (p)GST approach. The resultant pruning algorithm is guided by a graph-spectrum-inspired criterion, and retains informative scattering features on-the-fly while bypassing the exponential complexity associated with GSTs. It is further established that pGSTs are stable to perturbations of the input graph signals with bounded energy. Experiments showcase that i) pGST performs comparably to the baseline GST that uses all scattering features, while achieving significant computational savings; ii) pGST achieves comparable performance to state-of-the-art GCNs; and iii) graph data from various domains lead to different scattering patterns, suggesting domain-adaptive pGST network architectures.
+
+# 1 INTRODUCTION
+
+The abundance of graph-structured data calls for advanced learning techniques that nicely complement standard machine learning tools, which cannot be directly applied to irregular data domains. Permeating the benefits of deep learning to the graph domain, graph convolutional networks (GCNs) provide a versatile and powerful framework to learn from complex graph data (Bronstein et al., 2017). GCNs and variants thereof have attained remarkable success in social network analysis, 3D point cloud processing, recommender systems and action recognition. However, researchers have recently reported inconsistent perspectives on the appropriate designs for GCN architectures. For example, experiments in social network analysis have suggested that deeper GCNs only marginally increase learning performance (Wu et al., 2019), whereas a method for 3D point cloud segmentation achieves state-of-the-art performance with a 56-layer GCN network (Li et al., 2019). These 'controversial' empirical findings motivate theoretical analysis to understand the fundamental performance factors and the architecture design choices for GCNs.
+
+Aiming to bestow GCNs with theoretical guarantees, one promising research direction is to study graph scattering transforms (GSTs). GSTs are non-trainable GCNs comprising a cascade of graph filter banks followed by nonlinear activation functions. The graph filter banks are mathematically designed and are adopted to scatter an input graph signal into multiple channels. GSTs extract scattering features that can be utilized towards graph learning tasks (Gao et al., 2019), with competitive performance especially when the number of training examples is small. Under certain conditions on the graph filter banks, GSTs are endowed with energy conservation properties (Zou & Lerman,
+
+
+Figure 1: Illustration of the same pGST applied to different graph datasets: (a) academic collaboration; (b) protein-protein network; (c) 3D point cloud. Notice that for the social network (a) most of the GST branches are pruned, suggesting that most information is captured by local interactions.
+
+2019), as well as stability, meaning robustness to graph topology deformations (Gama et al., 2019a). However, GSTs are associated with exponential complexity in space and time that increases with the number of layers. This discourages deployment of GSTs when a deep architecture is needed. Furthermore, stability should not come at odds with sensitivity: a filter's output should be sensitive to, and "detect," perturbations of large magnitude. Lastly, graph data in different domains (social networks, 3D point clouds) have distinct properties, which motivates GSTs with domain-adaptive architectures.
+
+The present paper develops a data-adaptive pruning framework for the GST to systematically retain important features. Specifically, the contribution of this work is threefold.
+
+C1. We put forth a pruning approach to select informative GST features that we naturally term pruned graph scattering transform (pGST). The pruning decisions are guided by a criterion promoting alignment (matching) of the input graph spectrum with that of the graph filters. The optimal pruning decisions are provided on-the-fly, and alleviate the exponential complexity of GSTs.
+C2. We prove that the pGST is stable to perturbations of the input graph signals. Under certain conditions on the energy of the perturbations, the resulting pruning patterns before and after the perturbations are identical and the overall pGST is stable.
+C3. We showcase with extensive experiments that: i) the proposed pGSTs perform similarly and in certain cases better than the baseline GSTs that use all scattering features, while achieving significant computational savings; ii) The extracted features from pGSTs can be utilized towards graph classification and 3D point cloud recognition. Even without any training on the feature extraction step, the performance is comparable to state-of-the-art deep supervised learning approaches, particularly when training data are scarce; and iii) By analyzing the pruning patterns of the pGST, we deduce that graph signals in different domains call for different network architectures; see Fig. 1.
+
+# 2 RELATED WORK
+
+GCNs rely on a layered processing architecture comprising trainable graph convolution operations to linearly combine features per graph neighborhood, followed by pointwise nonlinear functions applied to the linearly transformed features (Bronstein et al., 2017). Complex GCNs and their variants have shown remarkable success in graph semi-supervised learning (Kipf & Welling, 2017; Velicković et al., 2018) and graph classification (Ying et al., 2018). To simplify GCNs, (Wu et al., 2019) has shown that by employing a single-layer linear GCN the performance in certain social network learning tasks degrades only slightly. On the other hand, (Li et al., 2019) has developed a 56-layer GCN that achieves state-of-the-art performance in 3D point cloud segmentation. Hence, designing GCN architectures guided by properties of the graph data is a highly motivated research question.
+
+Towards theoretically explaining the success of GCNs, recent works study the stability properties of GSTs with respect to metric deformations of the domain (Gama et al., 2019b;a; Zou & Lerman, 2019). GSTs generalize scattering transforms (Bruna & Mallat, 2013; Mallat, 2012) to non-Euclidean
+
+domains. GSTs are a cascade of graph filter banks and nonlinear operations organized in a tree-structured architecture. The number of extracted scattering features of a GST grows exponentially with the number of layers. Theoretical guarantees for GSTs are obtained after fixing the graph filter banks to implement a set of graph wavelets. The work in (Zou & Lerman, 2019) establishes energy conservation properties for GSTs given that certain energy-preserving graph wavelets are employed, and also proves that GSTs are stable to graph structure perturbations; see also (Gama et al., 2019b), which focuses on diffusion wavelets. On the other hand, (Gama et al., 2019a) proves stability to relative metric deformations for a wide class of graph wavelet families. These contemporary works shed light on the stability and generalization capabilities of GCNs. However, stable transforms are not necessarily informative, and a principled approach to selecting informative GST features, albeit highly desirable, remains uncharted.
+
+# 3 BACKGROUND
+
+Consider a graph $\mathcal{G} := \{\mathcal{V}, \mathcal{E}\}$ with node set $\mathcal{V} := \{v_i\}_{i=1}^N$ , and edge set $\mathcal{E} := \{e_i\}_{i=1}^E$ . Its connectivity is described by the graph shift matrix $\mathbf{S} \in \mathbb{R}^{N \times N}$ , whose $(n, n')$ th entry $S_{nn'}$ is nonzero if $(n, n') \in \mathcal{E}$ or if $n = n'$ . A typical choice for $\mathbf{S}$ is the adjacency or the Laplacian matrix. Further, each node can be also associated with a few attributes. Collect attributes across all nodes in the matrix $\mathbf{X} := [\mathbf{x}_1, \ldots, \mathbf{x}_F] \in \mathbb{R}^{N \times F}$ , where each column $\mathbf{x}_f \in \mathbb{R}^N$ is a 'graph signal.'
+
+Graph Fourier transform. A Fourier transform corresponds to the expansion of a signal over bases that are invariant to filtering; here, this graph frequency basis is the eigenbasis of the graph shift matrix $\mathbf{S}$ . Henceforth, $\mathbf{S}$ is assumed normal with $\mathbf{S} = \mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^{\top}$ , where $\mathbf{V} \in \mathbb{R}^{N \times N}$ forms the graph Fourier basis, and $\boldsymbol{\Lambda} \in \mathbb{R}^{N \times N}$ is the diagonal matrix of corresponding eigenvalues $\lambda_0, \ldots, \lambda_{N-1}$ . These eigenvalues represent graph frequencies. The graph Fourier transform (GFT) of $\mathbf{x} \in \mathbb{R}^N$ is $\widehat{\mathbf{x}} = \mathbf{V}^{\top}\mathbf{x} \in \mathbb{R}^{N}$ , while the inverse transform is $\mathbf{x} = \mathbf{V}\widehat{\mathbf{x}}$ . The vector $\widehat{\mathbf{x}}$ represents the signal's expansion in the eigenvector basis and describes the graph spectrum of $\mathbf{x}$ . The inverse GFT reconstructs the graph signal from its graph spectrum by combining graph frequency components weighted by the coefficients of the signal's graph Fourier transform. GFT is a tool that has been popular for analyzing graph signals in the graph spectral domain.
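+
+The GFT machinery above can be sketched in a few lines of NumPy; this toy example (a 4-node path graph with the combinatorial Laplacian as $\mathbf{S}$ , an illustrative choice) verifies that the inverse GFT reconstructs the signal exactly:
+
+```python
+import numpy as np
+
+# Toy undirected graph: a 4-node path; S is taken to be the combinatorial
+# Laplacian, one of the common choices for the graph shift matrix.
+A = np.array([[0, 1, 0, 0],
+              [1, 0, 1, 0],
+              [0, 1, 0, 1],
+              [0, 0, 1, 0]], dtype=float)
+S = np.diag(A.sum(axis=1)) - A
+
+# S = V diag(lam) V^T: the eigenvectors form the graph Fourier basis and
+# the eigenvalues lam are the graph frequencies.
+lam, V = np.linalg.eigh(S)
+
+x = np.array([1.0, 2.0, 3.0, 4.0])        # a graph signal
+x_hat = V.T @ x                           # GFT: graph spectrum of x
+x_rec = V @ x_hat                         # inverse GFT
+assert np.allclose(x_rec, x)              # perfect reconstruction
+```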
+
+Graph convolution neural networks. GCNs permeate the benefits of CNNs from processing Euclidean data to modeling graph structured data. GCNs model graph data through a succession of layers, each of which consists of a graph convolution operation (graph filter), a pointwise nonlinear function $\sigma(\cdot)$ , and oftentimes also a pooling operation. Given a graph signal $\mathbf{x} \in \mathbb{R}^N$ , the graph convolution operation diffuses each node's information to its neighbors according to the graph shift matrix $\mathbf{S}$ , as $\mathbf{Sx}$ . The $n$ th entry $[\mathbf{Sx}]_n = \sum_{n' \in \mathcal{N}_n} S_{nn'} x_{n'}$ is a weighted average of the one-hop neighboring features. Successive applications of $\mathbf{S}$ increase the receptive field, spreading information across the network. Hence, a $K$ th order graph convolution operation (graph filtering) is
+
+$$
+h (\mathbf {S}) \mathbf {x} := \sum_ {k = 0} ^ {K} w _ {k} \mathbf {S} ^ {k} \mathbf {x} = \mathbf {V} \widehat {h} (\boldsymbol {\Lambda}) \widehat {\mathbf {x}} \tag {1}
+$$
+
+where the graph filter $h(\cdot)$ is parameterized by the learnable weights $\{w_k\}_{k=0}^K$ , and the graph filter in the graph spectral domain is $\widehat{h}(\Lambda) = \sum_{k=0}^{K} w_k \Lambda^k$ . In the graph vertex domain, the learnable weights reflect the influences from various orders of neighbors; and in the graph spectral domain, those weights adaptively adjust the focus and emphasize certain graph frequency bands. GCNs employ various graph filter banks per layer, and learn the parameters that minimize a predefined learning objective, such as classification, or regression.
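+
+As a quick illustration of (1), the following sketch (reusing a toy path-graph Laplacian, an assumption for concreteness) checks that vertex-domain filtering $\sum_k w_k \mathbf{S}^k \mathbf{x}$ matches the spectral form $\mathbf{V}\widehat{h}(\boldsymbol{\Lambda})\widehat{\mathbf{x}}$ :
+
+```python
+import numpy as np
+
+def graph_filter(S, w, x):
+    """h(S) x = sum_k w[k] S^k x, computed without forming S^k explicitly."""
+    out = np.zeros_like(x)
+    Skx = x.copy()                        # S^0 x
+    for wk in w:
+        out += wk * Skx
+        Skx = S @ Skx                     # next power of S applied to x
+    return out
+
+# Toy path-graph Laplacian.
+A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
+S = np.diag(A.sum(1)) - A
+lam, V = np.linalg.eigh(S)
+
+w = [0.5, -0.2, 0.1]                      # taps w_0, w_1, w_2 (order K = 2)
+x = np.array([1.0, -1.0, 2.0, 0.5])
+
+y_vertex = graph_filter(S, w, x)          # vertex-domain filtering
+h_hat = sum(wk * lam ** k for k, wk in enumerate(w))
+y_spectral = V @ (h_hat * (V.T @ x))      # V h^(Lambda) x^, as in (1)
+assert np.allclose(y_vertex, y_spectral)
+```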
+
+Graph scattering transforms. GSTs are the nontrainable counterparts of GCNs, where the parameters of the graph convolutions are selected based on mathematical designs. GSTs process the input at each layer by a sequential application of graph filter banks $\{h_j(\mathbf{S})\}_{j=1}^J$ , an elementwise nonlinear function $\sigma(\cdot)$ , and a pooling operator $U$ . At the first layer, the input graph signal $\mathbf{x} \in \mathbb{R}^N$ constitutes the first scattering feature vector $\mathbf{z}_{(0)} := \mathbf{x}$ . Next, $\mathbf{z}_{(0)}$ is processed by the graph filter banks and $\sigma(\cdot)$ to generate $\{\mathbf{z}_{(j)}\}_{j=1}^J$ with $\mathbf{z}_{(j)} := \sigma(h_j(\mathbf{S})\mathbf{z}_{(0)})$ . At the second layer, the same operation is repeated per $j$ . The resulting computation structure is a tree with $J$ branches at each non-leaf node; see also Fig. 2. The $\ell$ th layer of the tree includes $J^\ell$ nodes. Each tree node at layer $\ell$ in the scattering transform is indexed by the path $p^{(\ell)}$ of the sequence of $\ell$ graph convolutions applied to the input graph signal $\mathbf{x}$ , i.e. $p^{(\ell)} := (j^{(1)}, j^{(2)}, \ldots, j^{(\ell)})$ .
+
+Figure 2: Scattering pattern associated with a pGST with $J = 3$ and $L = 3$ . The dashed lines represent the pruned branches. The example of a graph signal and the GFTs of $\mathbf{x}$ and the filter banks are included as well. Note that the third filter $j = 3$ at $\ell = 1$ generates no output, i.e. $\mathbf{z}_{(3)} = \mathbf{0}$ , and hence is pruned.
+
+The scattering feature vector at the tree node indexed by $(p^{(\ell)}, j)$ at layer $\ell + 1$ is
+
+$$
+\mathbf {z} _ {(p ^ {(\ell)}, j)} = \sigma (h _ {j} (\mathbf {S}) \mathbf {z} _ {(p ^ {(\ell)})}) \tag {2}
+$$
+
+where the variable $p^{(\ell)}$ holds the list of indices of the parent nodes ordered by ancestry, and all paths $p^{(\ell)}$ in the tree with length $\ell$ are included in the path set $\mathcal{P}^{(\ell)}$ with $|\mathcal{P}^{(\ell)}| = J^{\ell}$ . The nonlinear transformation function $\sigma(\cdot)$ disperses the graph frequency representation through the spectrum, and endows the GST with increased discriminating power (Gama et al., 2019a). By exploiting the sparsity of the graph, the computational complexity of (2) is $\mathcal{O}(KE)$ , where $E = |\mathcal{E}|$ is the number of edges in $\mathcal{G}$ . Each scattering feature vector $\mathbf{z}_{(p^{(\ell)})}$ is summarized by an aggregation operator $U(\cdot)$ to obtain a scalar scattering coefficient as $\phi_{(p^{(\ell)})} := U(\mathbf{z}_{(p^{(\ell)})})$ , where $U(\cdot)$ is typically an average or sum operator that effects dimensionality reduction of the extracted features. The scattering coefficient at each tree node reflects the activation level at a certain graph frequency band.
+
+These scattering coefficients are collected across all tree nodes to form a scattering feature map
+
+$$
+\boldsymbol {\Phi} (\mathbf {x}) := \left\{\left\{\phi_ {(p ^ {(\ell)})} \right\} _ {p ^ {(\ell)} \in \mathcal {P} ^ {(\ell)}} \right\} _ {\ell = 0} ^ {L} \tag {3}
+$$
+
+where $|\Phi(\mathbf{x})| = \sum_{\ell=0}^{L} J^{\ell}$ . The GST operation resembles a forward pass of a trained GCN. This is why several works study GST stability under perturbations of S in order to understand the working mechanism of GCNs (Zou & Lerman, 2019; Gama et al., 2019a;b).
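+
+The tree-structured computation of (2)-(3) can be sketched as follows; the filter bank, nonlinearity ($|\cdot|$), and aggregation (mean) are illustrative stand-ins rather than the mathematically designed wavelets the paper assumes:
+
+```python
+import numpy as np
+
+def gst(x, S, filter_banks, L):
+    """Full GST: breadth-first expansion of the J-ary scattering tree.
+
+    filter_banks: list of J callables, each mapping a signal z to h_j(S) z.
+    Returns the scattering coefficients phi_(p) of all sum_l J^l tree nodes.
+    """
+    sigma, U = np.abs, np.mean            # nonlinearity and aggregation
+    layer, phis = [x], [np.mean(x)]       # z_(0) = x at layer 0
+    for _ in range(L):
+        nxt = []
+        for z in layer:                   # expand every tree node by J branches
+            for h in filter_banks:
+                z_child = sigma(h(z))     # z_(p, j) = sigma(h_j(S) z_(p)), eq. (2)
+                nxt.append(z_child)
+                phis.append(U(z_child))
+        layer = nxt
+    return np.array(phis)                 # feature map Phi(x) of eq. (3)
+
+# Toy path-graph Laplacian and a J = 2 filter bank (hypothetical filters).
+A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
+S = np.diag(A.sum(1)) - A
+fb = [lambda z: S @ z, lambda z: z - 0.5 * (S @ z)]
+x = np.array([1.0, -1.0, 2.0, 0.5])
+phi = gst(x, S, fb, L=2)                  # 1 + J + J^2 = 7 coefficients
+```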
+
+# 4 PRUNED GRAPH SCATTERING TRANSFORMS
+
+While the representation power of GST increases with the number of layers, the computational and space complexity of the transform also increase exponentially with the number of layers due to its scattering nature. Hence, even if informative features are available at deeper GST layers, the associated exponential complexity of extracting such features is prohibitive with the existing GST architectures. On the other hand, various input data (social networks, 3D point clouds) may have distinct properties, leading to different GST feature maps. In some cases, only a few tree nodes in deep layers are informative; and in other cases, tree nodes in shallow layers carry significant information; see Fig. 1. This requires a customized GST to adaptively choose significant tree nodes.
+
+Alleviating GST limitations, we introduce a pruned graph scattering transform (pGST) to systematically retain informative tree nodes without additional complexity. Our novel pGST alleviates the exponential complexity and adapts the GST to different input data. Furthermore, pGSTs offer a practical mechanism to understand the architecture of GCNs. Based on the pruning patterns, the proposed pGST suggests when a deeper GCN is desirable, and when a shallow one will suffice. Pruning the wavelet packets has been traditionally employed for compression in image processing applications (Xiong et al., 2002), where the pruning is guided by a rate-distortion optimality criterion. In this work, we consider a graph-spectrum-inspired criterion. Intuitively, each tree node in the GST reflects a unique subband in the graph spectrum. When the subband of a tree node does not sufficiently overlap with the graph spectrum of a graph signal, this tree node cannot capture the properties of this graph signal, and should be pruned.
+
+For example, consider a smooth graph signal $\mathbf{x}$ , i.e., one where connected nodes have similar signal values, that has a sparse representation in the graph spectral domain; that is, $\widehat{\mathbf{x}} \coloneqq \mathbf{V}^{\top} \mathbf{x} \in \mathbb{R}^{N}$ with $[\widehat{\mathbf{x}}]_n = 0$ for $n \geq b$ . The graph spectrum of the $j$ th graph filtered output is then
+
+$$
+\mathbf {V} ^ {\top} h _ {j} (\mathbf {S}) \mathbf {x} = \mathrm {d i a g} \left(\widehat {h} _ {j} (\boldsymbol {\lambda})\right) \widehat {\mathbf {x}} = \left[ \widehat {h} _ {j} (\lambda_ {1}) \widehat {x} _ {1}, \widehat {h} _ {j} (\lambda_ {2}) \widehat {x} _ {2}, \dots , \widehat {h} _ {j} (\lambda_ {N}) \widehat {x} _ {N} \right] ^ {\top}
+$$
+
+where $\lambda_{n}$ is the $n$ th eigenvalue of $\mathbf{S}$ and each frequency component $\widehat{x}_n$ is weighted by the corresponding filter response $\widehat{h}_j(\lambda_n)$ . Hence, if the support of the filter's spectral response $\{\widehat{h}_j(\lambda_n)\}_n$ does not overlap with the support of $\widehat{\mathbf{x}}$ , then the $j$ th graph filter output will not capture any information; that is, $h_j(\mathbf{S})\mathbf{x} = \mathbf{0}_N$ ; see Fig. 2. Thus, identifying such graph filters and pruning the corresponding tree nodes will result in a parsimonious and thus computationally efficient GST.
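+
+A small numeric check of this observation (with the filter specified directly by its spectral response, for illustration, rather than as a polynomial in $\mathbf{S}$ ):
+
+```python
+import numpy as np
+
+A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
+S = np.diag(A.sum(1)) - A                 # path-graph Laplacian, S = V diag(lam) V^T
+lam, V = np.linalg.eigh(S)
+
+# Bandlimited signal: spectrum supported only on the b = 2 lowest frequencies.
+x_hat = np.array([1.0, 0.5, 0.0, 0.0])
+x = V @ x_hat
+
+# Filter whose spectral response lives entirely where x_hat vanishes.
+h_hat = np.array([0.0, 0.0, 1.0, 1.0])
+y = V @ (h_hat * (V.T @ x))               # h_j(S) x in the spectral form
+assert np.allclose(y, 0.0)                # the branch carries no information
+```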
+
+Pruning criterion. Motivated by the aforementioned observation, we introduce a pruning criterion to select the scattering branches per tree node by maximizing the alignment between the graph spectrum of the graph filters and the scattering feature. At the tree node $p$ , the optimization problem is
+
+$$
+\max _ {\left\{f _ {(p, j)} \right\} _ {j = 1} ^ {J}} \quad \sum_ {j = 1} ^ {J} \left(\sum_ {n = 1} ^ {N} \left(\widehat {h} _ {j} \left(\lambda_ {n}\right) ^ {2} - \tau\right) \left[ \widehat {\mathbf {z}} _ {(p)} \right] _ {n} ^ {2}\right) f _ {(p, j)} \tag {4}
+$$
+
+$$
+\mathrm {s. t.} \quad f _ {(p, j)} \in \{0, 1 \}, \quad j = 1, \dots , J
+$$
+
+where $\widehat{\mathbf{z}}_{(p)}\coloneqq \mathbf{V}^{\top}\mathbf{z}_{(p)}$ is the graph spectrum of the scattering feature vector $\mathbf{z}_{(p)}$ ; $\tau$ is a user-specified threshold; and $f_{(p,j)}$ stands for the pruning assignment variable indicating whether node $(p,j)$ is active $(f_{(p,j)} = 1)$ or should be pruned $(f_{(p,j)} = 0)$ . The objective in (4) promotes retaining tree nodes that maximize the alignment of the graph spectrum of $\widehat{\mathbf{z}}_{(p)}$ with that of $\widehat{h}_j(\lambda)$ . The threshold $\tau$ introduces a minimum spectral value to locate those tree nodes whose corresponding graph spectral response is small, i.e. $\widehat{h}_j(\lambda_n)^2\ll \tau$ . Note that criterion (4) is evaluated per tree node $p$ , thus allowing for a flexible and scalable design.
+
+The optimization problem in (4) is nonconvex since $f_{(p,j)}$ is a discrete variable. Furthermore, recovering $\widehat{\mathbf{z}}_{(p)}$ requires an eigendecomposition of the Laplacian matrix that incurs $\mathcal{O}(N^3)$ complexity. Nevertheless, by exploiting the structure in (4), we develop an efficient pruning algorithm that achieves the maximum of (4), as summarized in the following theorem.
+
+Theorem 1. The optimal pruning assignment variables $\left\{f_{(p,j)}^{*}\right\}_{j}$ of (4) are given as follows
+
+$$
+f _ {(p, j)} ^ {*} = \begin{cases} 1 & \text {if } \frac {\| \mathbf {z} _ {(p , j)} \| ^ {2}}{\| \mathbf {z} _ {(p)} \| ^ {2}} > \tau , \\ 0 & \text {if } \frac {\| \mathbf {z} _ {(p , j)} \| ^ {2}}{\| \mathbf {z} _ {(p)} \| ^ {2}} < \tau , \end{cases} \quad j = 1, \dots , J \tag {5}
+$$
+
+The optimal variables $f_{(p,j)}^{*}$ are obtained by comparing the energy of the input $\mathbf{z}_{(p)}$ to that of the output $\mathbf{z}_{(p,j)}$ per graph filter $j$ , a comparison that can be evaluated at a low complexity of $\mathcal{O}(N)$ . Our pruning criterion leads to a principled and scalable algorithm for selecting the GST tree nodes to be pruned. The pruning objective is evaluated at each tree node $p$ , and pruning decisions are made on-the-fly. Hence, when $f_{(p)}^{*} = 1$ , tree node $p$ is active and the graph filter bank will be applied to $\mathbf{z}_{(p)}$ , expanding the tree to the next layer; otherwise, the GST will not be expanded further at tree node $p$ , which can result in exponential savings in computations. An example of such a pruned tree is depicted in Fig. 2. Evidently, the hyperparameter $\tau$ controls the input-to-output energy ratio. A large $\tau$ corresponds to an aggressively pruned scattering tree, while a small $\tau$ amounts to a minimally pruned scattering tree. The pGST is then defined as
+
+$$
+\Psi (\mathbf {x}) := \left\{\phi_ {(p)} \right\} _ {p \in \mathcal {T}}
+$$
+
+where $\mathcal{T}$ is the set of active tree nodes $\mathcal{T} := \{p \in \mathcal{P} | f_{(p)}^* = 1\}$ .
+
+Our pruning approach provides a concise version of GSTs and effects savings in computations as well as memory. Although the worst-case complexity of pGST is still exponential, a desirable complexity can be effected by properly selecting $\tau$ . As a byproduct, the scattering patterns of pGSTs reveal the appropriate depths and widths of the GSTs for different input data; see also Fig. 1. The pruning approach so far is an unsupervised one, since no input data labels are assumed available.
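The on-the-fly expansion and pruning described above can be sketched in a few lines. This is a minimal illustration, not the authors' reference implementation: the filters are assumed to be given as precomputed vertex-domain matrices $h_j(\mathbf{S})$, the nonlinearity is taken as the elementwise modulus, and `U` is the averaging operator.

```python
import numpy as np

def pgst(x, filters, tau, L):
    """Sketch of the pruned graph scattering transform (pGST).

    x: (N,) input graph signal; filters: list of J matrices h_j(S);
    tau: pruning threshold; L: number of layers.
    Returns the scattering features phi_(p) of the active tree nodes.
    """
    N = x.shape[0]
    U = np.ones(N) / N                  # averaging operator, ||U|| = 1
    features = []

    def expand(z, layer):
        features.append(U @ z)          # phi_(p) of the current active node
        if layer == L:
            return
        for h in filters:
            z_child = np.abs(h @ z)     # sigma(.) = elementwise modulus
            # criterion (5): expand branch j only if the input-to-output
            # energy ratio exceeds tau
            if np.sum(z_child ** 2) > tau * np.sum(z ** 2):
                expand(z_child, layer + 1)

    expand(x, 0)
    return np.array(features)
```

As $\tau \to 0$ the full tree with $\sum_{\ell=0}^{L} J^{\ell}$ nodes is recovered, while a very large $\tau$ keeps only the root, matching the discussion above.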
+
+# 5 STABILITY AND SENSITIVITY OF PGST
+
+In this section, we prove the stability of pGST to perturbations of the input graph signal. To establish the ensuing results, we consider graph wavelets that form a frame with frame bounds $A$ and $B$ (Hammond et al., 2011). Specifically, for any graph signal $\mathbf{x} \in \mathbb{R}^N$ , it holds that, $A^2\| \mathbf{x}\|^2 \leq \sum_{j=1}^{J} \|h_j(\mathbf{S})\mathbf{x}\|^2 \leq B^2\| \mathbf{x}\|^2$ . In the graph vertex domain, the scalar frame bounds $A$ and $B$ characterize the numerical stability of recovering a graph signal $\mathbf{x}$ from $\{h_j(\mathbf{S})\mathbf{x}\}_j$ . In the graph spectral domain, they reflect the ability of the graph filter bank to amplify $\mathbf{x}$ along each graph frequency. Tight frame bounds, satisfying $A^2 = B^2$ , are of particular interest because such wavelets lead to enhanced numerical stability and faster computations (Shuman et al., 2015). The frame property of the graph wavelet plays an instrumental role in proving GST stability to perturbations of the underlying graph structure (Gama et al., 2019a,b; Zou & Lerman, 2019).
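The frame bounds can be read off the spectral responses, since $\sum_{j=1}^{J} \|h_j(\mathbf{S})\mathbf{x}\|^2 = \sum_n \big(\sum_j \widehat{h}_j(\lambda_n)^2\big)\,\widehat{x}_n^2$, so that $A^2 = \min_n \sum_j \widehat{h}_j(\lambda_n)^2$ and $B^2 = \max_n \sum_j \widehat{h}_j(\lambda_n)^2$. The sketch below checks this numerically on a toy filter pair (the names and filters are illustrative, not from the paper):

```python
import numpy as np

def frame_bounds(h_hats, lam):
    """Frame bounds of a graph filter bank from its spectral responses.

    h_hats: list of J callables lam -> hat h_j(lam); lam: graph frequencies.
    Returns (A, B) with A^2 ||x||^2 <= sum_j ||h_j(S) x||^2 <= B^2 ||x||^2.
    """
    g = np.sum([h(lam) ** 2 for h in h_hats], axis=0)  # sum_j hat h_j(lam)^2
    return np.sqrt(g.min()), np.sqrt(g.max())

# Toy check on a path graph: the pair exp(-lam), 1 - exp(-lam) forms a
# (non-tight) frame since g(lam) is bounded away from zero.
N = 6
adj = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
lap = np.diag(adj.sum(1)) - adj                       # graph Laplacian as S
lam, V = np.linalg.eigh(lap)
h_hats = [lambda l: np.exp(-l), lambda l: 1.0 - np.exp(-l)]
A, B = frame_bounds(h_hats, lam)

x = np.random.default_rng(1).standard_normal(N)
# vertex-domain energy sum_j ||h_j(S) x||^2, with h_j(S) = V diag(.) V^T
energy = sum(np.linalg.norm(V @ np.diag(h(lam)) @ V.T @ x) ** 2 for h in h_hats)
assert (A ** 2) * (x @ x) - 1e-9 <= energy <= (B ** 2) * (x @ x) + 1e-9
```

A tight frame corresponds to $g(\lambda)$ being constant, in which case $A = B$.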
+
+Consider a perturbed graph signal $\tilde{\mathbf{x}}$ given by
+
+$$
+\tilde {\mathbf {x}} := \mathbf {x} + \boldsymbol {\delta} \in \mathbb {R} ^ {N} \tag {6}
+$$
+
+where $\delta \in \mathbb{R}^N$ is the perturbation vector. Such an additive model (6) may represent noise in the observed feature or adversarial perturbations. We are interested in studying how and under which conditions our pGST is affected by such perturbations. A stable transformation should have a similar output under small input perturbations.
+
+Before establishing that our pGST is stable, we first show that the GST is stable to small perturbations of the input graph signal$^{3}$ .
+
+Lemma 1. Consider the GST $\Phi (\cdot)$ with $L$ layers and $J$ graph filters; and suppose that the graph filter bank forms a frame with bound $B$ , while $\mathbf{x}$ and $\tilde{\mathbf{x}}$ are related via (6). It then holds that
+
+$$
+\frac {\left\| \boldsymbol {\Phi} (\mathbf {x}) - \boldsymbol {\Phi} (\tilde {\mathbf {x}}) \right\| _ {2}}{\sqrt {\left| \boldsymbol {\Phi} (\mathbf {x}) \right|}} \leq \sqrt {\frac {\sum_ {\ell = 0} ^ {L} \left(B ^ {2} J\right) ^ {\ell}}{\sum_ {\ell = 0} ^ {L} J ^ {\ell}}} \| \boldsymbol {\delta} \| _ {2}. \tag {7}
+$$
+
+The squared difference of the GSTs is normalized by the number of scattering features in $\Phi(\cdot)$ , that is, $|\Phi(\mathbf{x})| = \sum_{\ell=0}^{L} J^{\ell}$ . The bound in (7) relates to the frame bound of the wavelet filter bank. Notice that for tight frames with $B = 1$ , the normalized stability bound in (7) reduces to $\|\boldsymbol{\delta}\|_2$ . Let $\tilde{\mathcal{T}}$ be the structure of the pruned tree for $\Psi(\tilde{\mathbf{x}})$ . The following lemma asserts that, under a perturbation-size condition, the pGST yields the same pruned tree for the original and the perturbed inputs.
+
+Lemma 2. Let $\tilde{\mathbf{z}}_p$ denote the perturbed scattering feature at the tree node $p$ and $\delta_p \coloneqq \mathbf{z}_p - \tilde{\mathbf{z}}_p$ . If for all $p \in \mathcal{P}$ and $j = 1, \dots, J$ , it holds that
+
+$$
+\left| \left\| h _ {j} (\mathbf {S}) \mathbf {z} _ {p} \right\| ^ {2} - \tau \left\| \mathbf {z} _ {p} \right\| ^ {2} \right| > \left\| h _ {j} (\mathbf {S}) \boldsymbol {\delta} _ {p} \right\| ^ {2} + \tau \left| \left\| \mathbf {z} _ {p} \right\| ^ {2} - \left\| \tilde {\mathbf {z}} _ {p} \right\| ^ {2} \right|. \tag {8}
+$$
+
+Then, the following hold:
+
+1. The pruning transform will output the same tree for $\Psi (\mathbf{x})$ and $\Psi (\tilde{\mathbf{x}})$ ; that is, $\mathcal{T} = \tilde{\mathcal{T}}$ ; and,
+2. With $g(\mathbf{x})\coloneqq \| h_j(\mathbf{S})\mathbf{x}\|^2 -\tau \| \mathbf{x}\|^2$ , a necessary condition for (8) is
+
+$$
+\left| g \left(\mathbf {z} _ {p}\right) \right| > g \left(\boldsymbol {\delta} _ {p}\right). \tag {9}
+$$
+
+According to (9), Lemma 2 can be interpreted as a signal-to-noise ratio (SNR) condition because under $g(\delta_p) > 0$ , it is possible to write (9) as $|g(\mathbf{z}_p)| / g(\delta_p) > 1$ . Lemma 2 provides a per-layer and branch condition for pGST to output the same scattering tree for the original or the perturbed signal. By combining Lemmas 1 and 2, we arrive at the following stability result for the pGST network.
+
+
+Figure 3: Classification accuracy against the number of training samples for authorship attribution (a) and against SNR in dB for source localization on the Facebook subnetwork (b); runtime comparison in seconds of the scattering transforms (c); overlap between the features retained after pruning and the most important GST features (d).
+
+Theorem 2. Consider the pGST transform $\Psi(\cdot)$ with $L$ layers and $J$ graph filters; and suppose that the graph filter bank forms a frame with bound $B$ , while $\mathbf{x}$ and $\tilde{\mathbf{x}}$ are related via (6). The pGST is stable to bounded perturbations $\boldsymbol{\delta}$ , in the sense that
+
+$$
+\frac {\| \Psi (\mathbf {x}) - \Psi (\tilde {\mathbf {x}}) \| _ {2}}{\sqrt {| \Psi (\mathbf {x}) |}} \leq \sqrt {\frac {\sum_ {\ell = 0} ^ {L} F _ {\ell} B ^ {2 \ell}}{\sum_ {\ell = 0} ^ {L} F _ {\ell}}} \| \boldsymbol {\delta} \| _ {2}
+$$
+
+where $F_{\ell} \coloneqq |\mathcal{P}^{(\ell)} \cap \mathcal{T}|$ is the number of active scattering features at layer $\ell$ , and $|\Psi(\mathbf{x})| = \sum_{\ell=0}^{L} F_{\ell}$ is the total number of retained scattering features.
+
+# 6 EXPERIMENTS
+
+This section evaluates the performance of our pGST in various graph classification tasks. Graph classification amounts to predicting a label $y_{i}$ given $\mathbf{x}_i$ and $\mathbf{S}_i$ for the $i$ th graph. Our pGST extracts $\Psi(\mathbf{x}_i)$ , which is utilized as a feature vector for predicting $y_{i}$ . During training, the structure of the pGST $\mathcal{T}$ is determined, which is kept fixed during validation and testing. The parameter $\tau$ is selected via cross-validation. Our goal is to provide tangible answers to the following research questions.
+
+RQ1 How does the proposed pGST compare to GST?
+RQ2 How does pGST compare to state-of-the-art GCN approaches for graph classification?
+RQ3 What are the appropriate scattering patterns for various graph data?
+
+Appendix A includes additional experiments, with ablation studies on the effect of the parameters $J$ , $L$ , and $\tau$ .
+
+pGST vs. GST. To address RQ1, we reproduce the experiments of two tasks in (Gama et al., 2019a): authorship attribution and source localization. For the scattering transforms, we consider three implementations of graph filter banks: the diffusion wavelets (DS) in (Gama et al., 2019b), the monic cubic wavelets (MCS) in (Hammond et al., 2011) and the tight Hann wavelets (THS) in (Shuman et al., 2015). The scattering transforms use $J = 5$ filters, $L = 5$ layers, and $\tau = 0.01$ . The extracted features from GSTs are subsequently utilized by a linear support vector machine (SVM) classifier.
+
+Authorship attribution amounts to determining whether a certain text was written by a specific author. Each text is represented by a graph with $N = 244$ , where words (nodes) are connected based on their relative positions in the text, and $\mathbf{x}$ is a bag-of-words representation of the text; see also (Gama et al., 2019b). Fig. 3 (a) reports the classification accuracy for the authorship attribution task as the number of training samples (texts) increases. GSTs utilize $\sum_{\ell=0}^{4} 5^{\ell} = 781$ scattering coefficients, while pGSTs rely only on $|\mathcal{T}| = 61$ for PDS, $|\mathcal{T}| = 30$ for PMCS, and $|\mathcal{T}| = 80$ for PTHS. Evidently, the proposed pGST achieves performance comparable to the baseline GST while using only a subset of the features (12.8%, 3.8%, and 10.2%, respectively). The SVM classifier provides a coefficient that weighs each scattering scalar, and the magnitude of each coefficient indicates the importance of the corresponding scattering feature for classification. Fig. 3 (d) depicts the percentage of features
+
+
+(a) Tr./test : 9843/2468
+(b) Tr./test : 615/11703
+Figure 4: 3D point cloud classification.
+
+| Category | Method | ENZYMES | D&D | COLLAB | PROTEINS |
+| --- | --- | --- | --- | --- | --- |
+| Kernels | SHORTEST-PATH | 42.32 | 78.86 | 59.10 | 76.43 |
+| | WL-OA | 60.13 | 79.04 | 80.74 | 75.26 |
+| GNNs | PATCHYSAN | - | 76.27 | 72.60 | 75.00 |
+| | GRAPHSAGE | 54.25 | 75.42 | 68.25 | 70.48 |
+| | ECC | 53.50 | 74.10 | 67.79 | 72.65 |
+| | SET2SET | 60.15 | 78.12 | 71.75 | 74.29 |
+| | SORTPOOL | 57.12 | 79.37 | 73.76 | 75.54 |
+| | DIFFPOOL-DET | 58.33 | 75.47 | 82.13 | 75.62 |
+| | DIFFPOOL-NOLP | 62.67 | 79.98 | 75.63 | 77.42 |
+| | DIFFPOOL | 64.23 | 81.15 | 75.50 | 78.10 |
+| Scattering | GSC | 53.88 | 76.57 | 76.88 | 74.03 |
+| | GST | 59.84 | 79.28 | 77.32 | 76.23 |
+| | PGST (Ours) | 60.25 | 81.27 | 78.40 | 78.57 |
+
+Table 1: Graph classification accuracy.
+
+after pruning that are among the top $2|\mathcal{T}|$ most important GST features given by the SVM classifier. It is observed that, although pGST does not take the labels into account, the retained features are indeed informative for classification.
+
+Source localization amounts to recovering the source of a rumor given a diffused signal over a Facebook subnetwork with $N = 234$ ; see the detailed settings in (Gama et al., 2019b). Fig. 3 (b) shows the classification accuracy of the scattering transforms for increasing SNR in dB. In accordance with Lemma 1 and Theorem 2, both pGST and GST are stable for a wide range of SNRs. Furthermore, the performance of pGST matches that of the corresponding GST, while pGST uses only a subset of the features. Fig. 3 (c) depicts the runtime of the different scattering approaches, where the computational advantage of the pruned methods is evident.
+
+Graph classification. Towards answering RQ2, the proposed pGST is compared with the following state-of-the-art approaches. $^{5}$ The kernel methods shortest-path (Borgwardt & Kriegel, 2005), and Weisfeiler-Lehman optimal assignment (WL-OA) (Kriege et al., 2016); the deep learning approaches PatchySan (Niepert et al., 2016), GraphSage (Hamilton et al., 2017), edge-conditioned filters in CNNs (ECC) (Simonovsky & Komodakis, 2017), Set2Set (Vinyals et al., 2015), SortPool (Zhang et al., 2018), and DiffPool (Ying et al., 2018); and the geometric scattering classifier (GSC) (Gao et al., 2019). Results are presented with the protein data sets D&D, Enzymes, and Proteins, and the scientific collaboration data set Collab. A detailed description of the datasets is included in the Appendix. We perform 10-fold cross validation and report the classification accuracy averaged over the 10 folds. The gradient boosting classifier is employed for pGST and GST, with parameters chosen based on performance on the validation set. The graph scattering transforms use the MC wavelet with $L = 5$ , $J = 5$ and $\tau = 0.01$ . Table 1 lists the classification accuracy of the proposed and competing approaches. Even without any training on the feature extraction step, the performance of pGST is comparable to the state-of-the-art deep supervised learning approaches across all datasets. GST and pGST also outperform GSC, since the latter uses a linear SVM to classify the scattering features.
+
+Point cloud classification. We further test pGST in classifying 3D point clouds. Given a point cloud, a graph can be created by connecting points (nodes) to their nearest neighbors based on their Euclidean distance. Each node is also associated with 6 scalars denoting its x-y-z coordinates and RGB colors. For this experiment, GSTs are compared against PointNet++ (Qi et al., 2017a;b), 3dShapeNets (Wu et al., 2015) and VoxNet (Maturana & Scherer, 2015), which are state-of-the-art deep learning approaches. Fig. 4 reports the classification accuracy for the ModelNet40 dataset (Wu et al., 2015) for increasing $L$ . In Fig. 4 (a) 9,843 clouds are used for training and 2,468 for testing using the gradient boosting classifier; whereas, in Fig. 4 (b) only 615 clouds are used for training and the rest for testing using a fully connected neural network classifier with 3 layers. The scattering transforms use an MC wavelet with $J = 5$ for Fig. 4 (a) and $J = 9$ for Fig. 4 (b). Fig. 4 showcases that scattering transforms are competitive with state-of-the-art approaches, while pGST outperforms GST. This may be attributed to overfitting effects, since a large number of GST features is not informative. Furthermore, the exponential complexity associated with GSTs prevents their application for $L = 6$ . Fig. 4 (b) shows that when the training data are scarce, GST and pGST outperform PointNet++, which requires a large amount of training data to optimize its network parameters.
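The graph construction step just described, connecting each point to its nearest Euclidean neighbors, can be sketched as follows. This is an illustrative, numpy-only sketch; the neighbor count and any edge weighting used in the experiments above may differ.

```python
import numpy as np

def knn_adjacency(points, k):
    """Symmetric k-nearest-neighbor adjacency for a 3D point cloud.

    points: (N, 3) array of x-y-z coordinates. Each point is linked to its
    k closest points in Euclidean distance; edges are then symmetrized.
    """
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)            # no self-loops
    nbrs = np.argsort(dist, axis=1)[:, :k]    # indices of k closest points
    A = np.zeros_like(dist)
    rows = np.repeat(np.arange(len(points)), k)
    A[rows, nbrs.ravel()] = 1.0
    return np.maximum(A, A.T)                 # keep the graph undirected
```

The resulting adjacency (or a Laplacian derived from it) then serves as the graph shift operator $\mathbf{S}$ over which the scattering filters operate.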
+
+Scattering patterns. Towards answering RQ3, we depict the scattering structures of pGSTs, with an MC wavelet, $J = 3$ and $L = 5$ , for the Collab, Proteins, and ModelNet40 datasets in Fig. 1. Evidently, graph data from various domains require an adaptive scattering architecture. Specifically, most tree nodes for the academic collaboration dataset are pruned, and hence most informative features are in the shallow layers. This is consistent with the study in (Wu et al., 2019), which experimentally shows that deeper GCNs do not contribute as much for social network data. These findings are further supported by the small-world phenomenon in social networks, which suggests the diameter of social networks is small (Watts & Strogatz, 1998). On the other hand, the tree nodes for a 3D point cloud are minimally pruned, which is in line with the work in (Li et al., 2019) that showcases the advantage of deep GCNs in 3D point cloud classification. For the protein datasets, additional experiments are performed in Appendix A.4 that corroborate the pGST insights regarding the required number of GCN layers.
+
+# 7 CONCLUSIONS
+
+This paper developed a novel approach to pruning the graph scattering transform. The proposed pGST relies on a graph-spectrum-based, data-adaptive criterion to prune non-informative features on-the-fly, effectively reducing the computational complexity of GSTs. Furthermore, the stability of pGST under perturbations of the input signal is established. Experiments demonstrate that: i) pGSTs achieve performance gains relative to GSTs; ii) pGST is competitive in a variety of graph classification tasks; and iii) graph data from different domains exhibit distinct pruned scattering patterns, which calls for adaptive network architectures.
+
+Acknowledgments. This work was supported by Mitsubishi Electric Research Laboratories, the Doctoral Dissertation Fellowship of the University of Minnesota, and the NSF grants 171141, and 1500713.
+
+# REFERENCES
+
+Karsten M Borgwardt and Hans-Peter Kriegel. Shortest-path kernels on graphs. In IEEE Intl. Conf. on Data Mining (ICDM), 2005.
+Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Sig. Process. Mag., 34(4):18-42, 2017.
+Joan Bruna and Stephane Mallat. Invariant scattering convolution networks. IEEE Trans. Pattern Anal. Mach. Intel., 35(8):1872-1886, 2013.
+Fernando Gama, Joan Bruna, and Alejandro Ribeiro. Stability of graph scattering transforms. In Proc. Advances Neural Inf. Process. Syst. (NeurIPS), 2019a.
+Fernando Gama, Alejandro Ribeiro, and Joan Bruna. Diffusion scattering transforms on graphs. In Proc. Intl. Conf. on Learn. Representations (ICLR), 2019b.
+Feng Gao, Guy Wolf, and Matthew Hirn. Geometric scattering for graph data analysis. In Proc. Intl. Conf. Mach. Learn. (ICML), pp. 2122-2131, 2019.
+William L Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. IEEE Data Engineering Bulletin, 2017.
+David K Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129-150, 2011.
+Roger A Horn and Charles R Johnson. Matrix Analysis. Cambridge university press, 2012.
+Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In Proc. Intl. Conf. on Learn. Representations (ICLR), Toulon, France, Apr. 2017.
+Nils M Kriege, Pierre-Louis Giscard, and Richard Wilson. On valid optimal assignment kernels and applications to graph classification. In Proc. Advances Neural Inf. Process. Syst. (NeurIPS), 2016.
+Guohao Li, Matthias Müller, Ali Thabet, and Bernard Ghanem. Can gcns go as deep as cnns? In Proc. Int. Conf. Comput. Vis., 2019.
+
+Stéphane Mallat. Group invariant scattering. Communications on Pure and Applied Mathematics, 65 (10):1331-1398, 2012.
+Daniel Maturana and Sebastian Scherer. Voxnet: A 3d convolutional neural network for real-time object recognition. In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015.
+Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In Proc. Intl. Conf. Mach. Learn. (ICML), 2016.
+Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In IEEE Conf. on Comp. Vis. and Pat. Rec., 2017a.
+Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Proc. Advances Neural Inf. Process. Syst. (NeurIPS), 2017b.
+David I Shuman, Christoph Wiesmeyr, Nicki Holighaus, and Pierre Vandergheynst. Spectrum-adapted tight graph wavelet and vertex-frequency frames. IEEE Trans. Sig. Process., 63(16):4223-4235, 2015.
+Martin Simonovsky and Nikos Komodakis. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In IEEE Conf. on Comp. Vis. and Pat. Rec., 2017.
+Petar Velicković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. In Proc. Intl. Conf. on Learn. Representations (ICLR), 2018.
+Oriol Vinyals, Samy Bengio, and Manjunath Kudlur. Order matters: Sequence to sequence for sets. Proc. Intl. Conf. on Learn. Representations (ICLR), 2015.
+Duncan J Watts and Steven H Strogatz. Collective dynamics of 'small-world' networks. Nature, 393 (6684):440, 1998.
+Felix Wu, Tianyi Zhang, Amauri Holanda de Souza Jr, Christopher Fifty, Tao Yu, and Kilian Q Weinberger. Simplifying graph convolutional networks. In Proc. Intl. Conf. Mach. Learn. (ICML), 2019.
+Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In IEEE Conf. on Comp. Vis. and Pat. Rec., 2015.
+Zixiang Xiong, Kannan Ramchandran, and Michael T Orchard. Space-frequency quantization for wavelet image coding. 2002.
+Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. Hierarchical graph representation learning with differentiable pooling. In Proc. Advances Neural Inf. Process. Syst. (NeurIPS), 2018.
+Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In AAAI Conference on Artificial Intelligence, 2018.
+Dongmian Zou and Gilad Lerman. Graph convolutional neural networks via scattering. Applied and Computational Harmonic Analysis, 2019.
+
+| Dataset | Graphs | Features F | Max number of nodes per graph |
+| --- | --- | --- | --- |
+| Collab | 5000 | 1 | 492 |
+| D&D | 1178 | 89 | 5748 |
+| Enzymes | 600 | 3 | 126 |
+| Proteins | 1113 | 3 | 620 |
+
+Table 2: Dataset characteristics
+
+
+Figure 5: Performance of pGSTs for varying $\tau$ : (a) classification accuracy; (b) number of active features $|\mathcal{T}|$ ; (c) runtime in seconds.
+
+# A ADDITIONAL EXPERIMENTS
+
+Dataset characteristics. The characteristics of the datasets used in Table 1 are shown in Table 2. Notice that the nodes in the Collab dataset do not have any features, and hence $\mathbf{x}$ was selected as the vector that holds the node degrees.
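For featureless graphs such as Collab, the degree signal mentioned above is trivial to compute from the adjacency matrix. A one-line sketch (`degree_signal` is an illustrative name, not from the paper's code):

```python
import numpy as np

def degree_signal(adj):
    """Input graph signal x as the node-degree vector (cf. Collab)."""
    return np.asarray(adj, dtype=float).sum(axis=1)
```
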
+
+
+Figure 6: Classification accuracy over different parameters.
+
+
+
+# A.1 ABLATION STUDY
+
+Fig. 5 reports how the pGST is affected by varying the threshold $\tau$ in the task of source localization, with $J = 6$ and $L = 5$ . Fig. 5 (a) shows the classification accuracy, which generally decreases as $\tau$ increases since the number of active features $|\mathcal{T}|$ decreases; cf. Fig. 5 (b). Fig. 5 (c) reports the runtime in seconds of the approaches. Fig. 6 showcases the classification performance of the pGST with $\tau = 0.01$ for varying $L$ with $J = 3$ on the left and for varying $J$ with $L = 3$ on the right. It is observed that the classification performance generally increases with $L$ and $J$ .
+
+# B PROOF OF THEOREM 1
+
+First, the objective in (4) is rewritten as
+
+$$
+\sum_ {j = 1} ^ {J} \left(\sum_ {n = 1} ^ {N} \left(\widehat {h} _ {j} \left(\lambda_ {n}\right) ^ {2} - \tau\right) \left[ \widehat {\mathbf {z}} _ {(p)} \right] _ {n} ^ {2}\right) f _ {j} = \sum_ {j = 1} ^ {J} \widehat {\mathbf {z}} _ {(p)} ^ {\top} \left(\operatorname {d i a g} \left(\widehat {h} _ {j} (\boldsymbol {\lambda})\right) ^ {2} - \tau \mathbf {I}\right) \widehat {\mathbf {z}} _ {(p)} f _ {j} \tag {10}
+$$
+
+which follows by definition. By introducing the scalars $\alpha_{j} := \widehat{\mathbf{z}}_{(p)}^{\top}(\mathrm{diag}\left(\widehat{h}_{j}(\boldsymbol {\lambda})\right)^{2} - \tau \mathbf{I})\widehat{\mathbf{z}}_{(p)}$ for $j = 1,\ldots ,J$ , (4) can be rewritten as
+
+$$
+\max _ {f _ {j}} \quad \sum_ {j = 1} ^ {J} \alpha_ {j} f _ {j} \tag {11}
+$$
+
+$$
+\begin{array}{l} \text {s . t .} \quad f _ {j} \in \{0, 1 \}, j = 1, \dots , J. \end{array}
+$$
+
+The optimization problem in (11) is nonconvex since $f_{j}$ is a discrete variable. However, maximizing the sum in (11) amounts to setting $f_{j} = 1$ exactly for those $j$ with positive $\alpha_{j}$ . This leads to the optimal pruning assignment variables as follows
+
+$$
+f _ {j} ^ {*} = \begin{cases} 1 & \text {if } \alpha_ {j} > 0, \\ 0 & \text {if } \alpha_ {j} < 0, \end{cases} \qquad j = 1, \dots , J \tag {12}
+$$
+
+The rest of the proof focuses on rewriting $\alpha_{j}$ as follows
+
+$$
+\alpha_ {j} = \widehat {\mathbf {z}} _ {(p)} ^ {\top} (\operatorname {d i a g} (\widehat {h} _ {j} (\boldsymbol {\lambda})) ^ {2} - \tau \mathbf {I}) \widehat {\mathbf {z}} _ {(p)} \tag {13}
+$$
+
+$$
+= \left\| \operatorname {d i a g} \left(\widehat {h} _ {j} (\boldsymbol {\lambda})\right) \widehat {\mathbf {z}} _ {(p)} \right\| ^ {2} - \tau \left\| \widehat {\mathbf {z}} _ {(p)} \right\| ^ {2} \tag {14}
+$$
+
+Furthermore, since $\mathbf{V}$ is an orthogonal matrix, it holds that $\| \widehat{\mathbf{z}}_{(p)}\| ^2 = \| \mathbf{V}^\top \mathbf{z}_{(p)}\| ^2 = \| \mathbf{z}_{(p)}\| ^2$ , and it follows that
+
+$$
+\left\| \operatorname {d i a g} \left(\widehat {h} _ {j} (\boldsymbol {\lambda})\right) \widehat {\mathbf {z}} _ {(p)} \right\| ^ {2} = \left\| h _ {j} (\mathbf {S}) \mathbf {z} _ {(p)} \right\| ^ {2} \tag {15}
+$$
+
+$$
+\left\| h _ {j} (\mathbf {S}) \mathbf {z} _ {(p)} \right\| ^ {2} = \left\| \sigma \left(h _ {j} (\mathbf {S}) \mathbf {z} _ {(p)}\right) \right\| ^ {2} = \left\| \mathbf {z} _ {(p, j)} \right\| ^ {2} \tag {16}
+$$
+
+where (16) follows since $\sigma (\cdot)$ is applied elementwise and does not change the norm.
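The chain (13)-(16) is easy to verify numerically: with $\mathbf{V}$ the eigenvectors of a symmetric $\mathbf{S}$ and $\sigma(\cdot)$ the elementwise modulus, the spectral quantity $\alpha_j$ coincides with the energy difference $\|\mathbf{z}_{(p,j)}\|^2 - \tau\|\mathbf{z}_{(p)}\|^2$. A toy check (the filter response and all values below are arbitrary, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
N, tau = 10, 0.3
S = rng.standard_normal((N, N)); S = (S + S.T) / 2   # symmetric graph shift
lam, V = np.linalg.eigh(S)
h_hat = np.tanh(lam)                                  # toy spectral response
H = V @ np.diag(h_hat) @ V.T                          # h_j(S) in vertex domain

z = rng.standard_normal(N)                            # scattering feature z_(p)
z_hat = V.T @ z                                       # its graph spectrum
# alpha_j from the spectral side, eqs. (13)-(14)
alpha = z_hat @ (np.diag(h_hat) ** 2 - tau * np.eye(N)) @ z_hat
# energy side, eqs. (15)-(16): sigma = |.| preserves the norm
z_child = np.abs(H @ z)
assert np.isclose(alpha, np.sum(z_child ** 2) - tau * np.sum(z ** 2))
```
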
+
+# C PROOF OF LEMMA 1
+
+By definition (3) it holds
+
+$$
+\left\| \Phi (\mathbf {x}) - \Phi (\tilde {\mathbf {x}}) \right\| ^ {2} = \sum_ {\ell = 0} ^ {L} \sum_ {p ^ {(\ell)} \in \mathcal {P} ^ {(\ell)}} \left| \phi_ {(p ^ {(\ell)})} - \tilde {\phi} _ {(p ^ {(\ell)})} \right| ^ {2} \tag {17}
+$$
+
+Hence, it is well motivated to bound each term of the sum in (17) as follows
+
+$$
+\left| \phi_ {\left(p ^ {(\ell)}\right)} - \tilde {\phi} _ {\left(p ^ {(\ell)}\right)} \right| = \left| U \left(\mathbf {z} _ {\left(p ^ {(\ell)}\right)}\right) - U \left(\tilde {\mathbf {z}} _ {\left(p ^ {(\ell)}\right)}\right) \right| \tag {18}
+$$
+
+$$
+\left| \phi_ {\left(p ^ {(\ell)}\right)} - \tilde {\phi} _ {\left(p ^ {(\ell)}\right)} \right| \leq \| U \| \left\| \mathbf {z} _ {(p ^ {(\ell)})} - \tilde {\mathbf {z}} _ {(p ^ {(\ell)})} \right\| \tag {19}
+$$
+
+where (19) follows since the operator norm is sub-multiplicative. Next, we will show the following recursive bound
+
+$$
+\left\| \mathbf {z} _ {(p ^ {(\ell)})} - \tilde {\mathbf {z}} _ {(p ^ {(\ell)})} \right\| = \| \sigma \left(h _ {j ^ {(\ell)}} (\mathbf {S}) \mathbf {z} _ {(p ^ {(\ell - 1)})}\right) - \sigma \left(h _ {j ^ {(\ell)}} (\mathbf {S}) \tilde {\mathbf {z}} _ {(p ^ {(\ell - 1)})}\right) \| \tag {20}
+$$
+
+$$
+\leq \| \sigma (\cdot) \| \| h _ {j ^ {(\ell)}} (\mathbf {S}) \mathbf {z} _ {(p ^ {(\ell - 1)})} - h _ {j ^ {(\ell)}} (\mathbf {S}) \tilde {\mathbf {z}} _ {(p ^ {(\ell - 1)})} \| \tag {21}
+$$
+
+$$
+\leq \left\| h _ {j ^ {(\ell)}} (\mathbf {S}) \mathbf {z} _ {\left(p ^ {(\ell - 1)}\right)} - h _ {j ^ {(\ell)}} (\mathbf {S}) \tilde {\mathbf {z}} _ {\left(p ^ {(\ell - 1)}\right)} \right\| \tag {22}
+$$
+
+$$
+\leq \left\| h _ {j ^ {(\ell)}} (\mathbf {S}) \right\| \left\| \mathbf {z} _ {\left(p ^ {(\ell - 1)}\right)} - \tilde {\mathbf {z}} _ {\left(p ^ {(\ell - 1)}\right)} \right\| \tag {23}
+$$
+
+where (21) and (23) follow since the operator norm is sub-multiplicative, and (22) follows since the nonlinearity is nonexpansive, i.e., $\| \sigma (\cdot)\| \leq 1$ . Hence, by applying (23) $\ell - 1$ times, the following condition holds
+
+$$
+\left\| \mathbf {z} _ {\left(p ^ {(\ell)}\right)} - \tilde {\mathbf {z}} _ {\left(p ^ {(\ell)}\right)} \right\| \leq \| h _ {j ^ {(\ell)}} (\mathbf {S}) \| \| h _ {j ^ {(\ell - 1)}} (\mathbf {S}) \| \dots \| h _ {j ^ {(1)}} (\mathbf {S}) \| \| \mathbf {x} - \tilde {\mathbf {x}} \| \tag {24}
+$$
+
+and by further applying the frame bound and (6) it follows that
+
+$$
+\left\| \mathbf {z} _ {\left(p ^ {(\ell)}\right)} - \tilde {\mathbf {z}} _ {\left(p ^ {(\ell)}\right)} \right\| \leq B ^ {\ell} \| \boldsymbol {\delta} \| \tag {25}
+$$
+
+Combining (19), (25) and the average operator property $\| U\| = 1$ it holds that
+
+$$
+\left| \phi_ {\left(p ^ {(\ell)}\right)} - \tilde {\phi} _ {\left(p ^ {(\ell)}\right)} \right| \leq B ^ {\ell} \| \boldsymbol {\delta} \| \tag {26}
+$$
+
+By applying the bound (26) for all entries in the right hand side of (17) it follows that
+
+$$
+\left\| \boldsymbol {\Phi} (\mathbf {x}) - \boldsymbol {\Phi} (\tilde {\mathbf {x}}) \right\| ^ {2} \leq \sum_ {\ell = 0} ^ {L} \sum_ {p ^ {(\ell)} \in \mathcal {P} ^ {(\ell)}} B ^ {2 \ell} \| \boldsymbol {\delta} \| ^ {2} \tag {27}
+$$
+
+By factoring out $\| \boldsymbol{\delta} \| ^{2}$ and observing that the inner sum on the right side of (27) does not depend on the path index $p^{(\ell)}$ , it follows that
+
+$$
+\left\| \boldsymbol {\Phi} (\mathbf {x}) - \boldsymbol {\Phi} (\tilde {\mathbf {x}}) \right\| ^ {2} \leq \left(\sum_ {\ell = 0} ^ {L} | \mathcal {P} ^ {(\ell)} | B ^ {2 \ell}\right) \| \boldsymbol {\delta} \| ^ {2} \tag {28}
+$$
+
+Finally, since the number of paths at layer $\ell$ is $|\mathcal{P}^{(\ell)}| = J^{\ell}$ and, for $B^{2}J \neq 1$ , the geometric sum evaluates to $\sum_{\ell=0}^{L}(B^{2}J)^{\ell} = ((B^{2}J)^{L + 1} - 1) / (B^{2}J - 1)$ , it holds
+
+$$
+\left\| \boldsymbol {\Phi} (\mathbf {x}) - \boldsymbol {\Phi} (\tilde {\mathbf {x}}) \right\| \leq \sqrt {\frac {(B ^ {2} J) ^ {L + 1} - 1}{B ^ {2} J - 1}} \| \boldsymbol {\delta} \| \tag {29}
+$$
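The nonexpansiveness of the nonlinearity used in (22) is just the elementwise reverse triangle inequality: for $\sigma(\cdot) = |\cdot|$, $\|\sigma(\mathbf{a}) - \sigma(\mathbf{b})\| \leq \|\mathbf{a} - \mathbf{b}\|$. A quick numerical sanity check (illustrative random vectors):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b = rng.standard_normal(50), rng.standard_normal(50)
lhs = np.linalg.norm(np.abs(a) - np.abs(b))   # ||sigma(a) - sigma(b)||
rhs = np.linalg.norm(a - b)
assert lhs <= rhs                             # nonexpansiveness of |.|
```
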
+
+# D PROOF OF LEMMA 2
+
+We will prove the case for $\ell = 0$ , where $\mathbf{z}_{p^{(0)}} = \mathbf{x}$ , since the same proof holds for any $\ell$ . First, we adapt (8) to the following
+
+$$
+\left| \left\| h _ {j} (\mathbf {S}) \mathbf {x} \right\| ^ {2} - \tau \left\| \mathbf {x} \right\| ^ {2} \right| > \left\| h _ {j} (\mathbf {S}) \boldsymbol {\delta} \right\| ^ {2} + \tau \left| \left\| \mathbf {x} \right\| ^ {2} - \left\| \tilde {\mathbf {x}} \right\| ^ {2} \right|. \tag {30}
+$$
+
+The proof will examine two cases and proceeds by contradiction. For the first case, consider that branch $j$ is pruned in $\Psi(\mathbf{x})$ but not in $\Psi(\tilde{\mathbf{x}})$ , i.e., $(j) \notin \mathcal{T}$ and $(j) \in \tilde{\mathcal{T}}$ . By applying (5) for $\mathbf{z}_{(j)} = \sigma(h_j(\mathbf{S})\mathbf{x})$ , there exists $C \geq 0$ such that
+
+$$
+\frac {\left\| h _ {j} (\mathbf {S}) \mathbf {x} \right\| ^ {2}}{\left\| \mathbf {x} \right\| ^ {2}} \leq \tau - C \tag {31}
+$$
+
+$$
+\left\| h _ {j} (\mathbf {S}) \mathbf {x} \right\| ^ {2} \leq \tau \left\| \mathbf {x} \right\| ^ {2} - C \| \mathbf {x} \| ^ {2} \tag {32}
+$$
+
+Furthermore, from (5) it holds for $\tilde{\mathbf{z}}_{(j)} = \sigma(h_j(\mathbf{S})\tilde{\mathbf{x}})$ that
+
+$$
+\frac {\left\| h _ {j} (\mathbf {S}) \tilde {\mathbf {x}} \right\| ^ {2}}{\left\| \tilde {\mathbf {x}} \right\| ^ {2}} > \tau \tag {33}
+$$
+
+By applying (6) to (33) and using the triangle inequality, it follows that
+
+$$
+\left\| h _ {j} (\mathbf {S}) \mathbf {x} \right\| ^ {2} + \left\| h _ {j} (\mathbf {S}) \boldsymbol {\delta} \right\| ^ {2} \geq \tau \| \tilde {\mathbf {x}} \| ^ {2} \tag {34}
+$$
+
+Next, by applying (32) it holds that
+
+$$
+\tau \| \mathbf {x} \| ^ {2} - C \| \mathbf {x} \| ^ {2} + \| h _ {j} (\mathbf {S}) \boldsymbol {\delta} \| ^ {2} \geq \tau \| \tilde {\mathbf {x}} \| ^ {2} \tag {35}
+$$
+
+$$
+\tau \left(\| \mathbf {x} \| ^ {2} - \| \tilde {\mathbf {x}} \| ^ {2}\right) + \| h _ {j} (\mathbf {S}) \delta \| ^ {2} \geq C \| \mathbf {x} \| ^ {2}. \tag {36}
+$$
+
+Next, by utilizing (30) and the absolute value property $|a| \geq a$ to upper-bound the left side of (36) it follows that
+
+$$
+\left\| h _ {j} (\mathbf {S}) \mathbf {x} \right\| ^ {2} - \tau \left\| \mathbf {x} \right\| ^ {2} > C \| \mathbf {x} \| ^ {2}. \tag {37}
+$$
+
+Finally, by applying (32) the following is obtained
+
+$$
+0 > 2 C \| \mathbf {x} \| ^ {2} \tag {38}
+$$
+
+which implies that $C < 0$ . However, this contradicts (31) since $C \geq 0$ . Following a symmetric argument we can complete the proof for the other case.
\ No newline at end of file
diff --git a/prunedgraphscatteringtransforms/images.zip b/prunedgraphscatteringtransforms/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..636e06d83ee50f710fdc07e1d412f8f5c763fd34
--- /dev/null
+++ b/prunedgraphscatteringtransforms/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:954978d5fee102ca3d0d6a216178b1878e16de7bba2600ff27d6fd646ba21823
+size 476625
diff --git a/prunedgraphscatteringtransforms/layout.json b/prunedgraphscatteringtransforms/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7c80076a5fe363ce5091ffa03e62596e81799e08
--- /dev/null
+++ b/prunedgraphscatteringtransforms/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fa00e49d7233b7d2efdf034111fbe946e7b6939ae3fe26785669232f4d3c8233
+size 590804
diff --git a/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/7d7c4cfb-b180-449a-974d-340e972cd4ad_content_list.json b/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/7d7c4cfb-b180-449a-974d-340e972cd4ad_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..59400399641b1cfaa81d572c2f498656b63f0005
--- /dev/null
+++ b/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/7d7c4cfb-b180-449a-974d-340e972cd4ad_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:531f3f417dc8c01cb6042606678db4dba808af264ae1171cec9fe42083276e70
+size 127168
diff --git a/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/7d7c4cfb-b180-449a-974d-340e972cd4ad_model.json b/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/7d7c4cfb-b180-449a-974d-340e972cd4ad_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..96f022ed3f8795afe37fbc8daafdee2aa0f1ec02
--- /dev/null
+++ b/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/7d7c4cfb-b180-449a-974d-340e972cd4ad_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d909f2dd973174a7f50f66d1560d929e69d7e1a91289eb2ee7d8e9d0ac8d347
+size 153623
diff --git a/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/7d7c4cfb-b180-449a-974d-340e972cd4ad_origin.pdf b/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/7d7c4cfb-b180-449a-974d-340e972cd4ad_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..abccd4f5f1868d2ec023ddbd31550e09e234a240
--- /dev/null
+++ b/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/7d7c4cfb-b180-449a-974d-340e972cd4ad_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:82e71d64ddda96912248abc7a21e7c54281f14994d8602ea72d2f357b4db480d
+size 6915345
diff --git a/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/full.md b/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4c5fb883dfcffafdc160f5ae2a9796a9ac1b86a2
--- /dev/null
+++ b/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/full.md
@@ -0,0 +1,494 @@
+# PSEUDO-LIDAR++: ACCURATE DEPTH FOR 3D OBJECT DETECTION IN AUTONOMOUS DRIVING
+
+Yurong You $^{1}$ , Yan Wang $^{1}$ , Wei-Lun Chao $^{2}$ , Divyansh Garg $^{1}$ , Geoff Pleiss $^{1}$ , Bharath Hariharan $^{1}$ , Mark Campbell $^{1}$ , and Kilian Q. Weinberger $^{1}$
+
+$^1$Cornell University, Ithaca, NY; $^2$The Ohio State University, Columbus, OH. {yy785, yw763, dg595, gp346, bh497, mc288, kqw4}@cornell.edu, chao.209@osu.edu
+
+# ABSTRACT
+
+Detecting objects such as cars and pedestrians in 3D plays an indispensable role in autonomous driving. Existing approaches largely rely on expensive LiDAR sensors for accurate depth information. While pseudo-LiDAR has recently been introduced as a promising alternative based solely on stereo images, at a much lower cost, there is still a notable performance gap. In this paper we provide substantial advances to the pseudo-LiDAR framework through improvements in stereo depth estimation. Concretely, we adapt the stereo network architecture and loss function to be more aligned with accurate depth estimation of faraway objects — currently the primary weakness of pseudo-LiDAR. Further, we explore the idea of leveraging cheaper but extremely sparse LiDAR sensors, which alone provide insufficient information for 3D detection, to de-bias our depth estimation. We propose a depth-propagation algorithm, guided by the initial depth estimates, to diffuse these few exact measurements across the entire depth map. We show on the KITTI object detection benchmark that our combined approach yields substantial improvements in depth estimation and stereo-based 3D object detection — outperforming the previous state-of-the-art detection accuracy for faraway objects by $40\%$ . Our code is available at https://github.com/mileyan/Pseudo_Lidar_V2.
+
+# 1 INTRODUCTION
+
+Safe driving in autonomous cars requires accurate 3D detection and localization of cars, pedestrians and other objects. This in turn requires accurate depth information, which can be obtained from LiDAR (Light Detection And Ranging) sensors. Although highly precise and reliable, LiDAR sensors are notoriously expensive: a 64-beam model can cost around $75,000 (USD) $^1$ . The alternative is to measure depth through inexpensive commodity cameras. However, in spite of recent dramatic progress in stereo-based 3D object detection brought by pseudo-LiDAR (Wang et al., 2019a), a significant performance gap remains especially for faraway objects (which we want to detect early to allow time for reaction). The trade-off between affordability and safety creates an ethical dilemma.
+
+
+Figure 1: An illustration of our proposed depth estimation and correction method. The green box is the ground truth location of the car in the KITTI dataset. The red points are obtained with a stereo disparity network. Purple points, obtained with our stereo depth network (SDN), are much closer to the truth. After depth propagation (blue points) with a few (yellow) LiDAR measurements the car is squarely inside the green box. (One floor square is $1\mathrm{m} \times 1\mathrm{m}$ .)
+
+In this paper we propose a possible solution to this remaining challenge that combines insights from both perspectives. We observe that the higher 3D object localization error of stereo-based systems, compared to LiDAR-based ones, stems entirely from the higher error in depth estimation (after the 3D point cloud is obtained the two approaches are identical (Wang et al., 2019a)). Importantly, this error is not random but systematic: we observe that stereo methods do indeed detect objects with high reliability, yet they estimate the depth of the entire object as either too far or too close. See Figure 1 for an illustration: the red stereo points capture the car but are shifted by about $2\mathrm{m}$ completely outside the ground-truth location (green box). If we can de-bias these depth estimates it should be possible to obtain accurate 3D localization even for distant objects without exorbitant costs.
+
+We start by revisiting the depth estimation routine embedded at the heart of the state-of-the-art stereo-based 3D detection approach (Wang et al., 2019a). A major contributor to the systematic depth bias comes from the fact that depth is typically not computed directly. Instead, one first estimates the disparity — the horizontal shift of a pixel between the left and right images — and then inverts it to obtain pixel-wise depth. While the use of deep neural networks has largely improved disparity estimation (Chang & Chen, 2018; Cheng et al., 2018; Mayer et al., 2016; Wang et al., 2019b), designing and learning the networks to optimize the accuracy of disparity estimation simply overemphasizes nearby objects due to the reciprocal transformation. For instance, a unit disparity error (in pixels) for a 5-meter-away object means a $10\mathrm{cm}$ error in depth: the length of a side mirror. The same disparity error for a 50-meter-away object, however, becomes a $5.8\mathrm{m}$ error in depth: the length of an entire car. Penalizing both errors equally means that the network spends more time correcting subtle errors on nearby objects than gross errors on faraway objects, resulting in degraded depth estimates and ultimately poor detection and localization for faraway objects. We thus propose to adapt the stereo network architecture and loss function for direct depth estimation. Concretely, the cost volume that fuses the left-right images and the subsequent 3D convolutions are the key components in stereo networks. Taking the central assumption of convolutions — all neighborhoods can be operated upon in an identical manner — we propose to construct the cost volume on the grid of depth rather than disparity, enabling 3D convolutions and the loss function to perform exactly on the right scale for depth estimation. We refer to our network as stereo depth network (SDN). See Figure 1 for a comparison of 3D points obtained with SDN (purple) and disparity estimation (red).
+
+Although our SDN improves the depth estimates significantly, stereo images are still inherently 2D and it is unclear if they can ever match the accuracy and reliability of a true 3D LiDAR sensor. Although LiDAR sensors with 32 or 64 beams are expensive, LiDAR sensors with only 4 beams are two orders of magnitude cheaper $^{2}$ and thus easily affordable. The 4 laser beams are very sparse and ill-suited to capture 3D object shapes by themselves, but if paired with stereo images they become the ideal tool to de-bias our dense stereo depth estimates: a single high-precision laser beam may inform us how to correct the depth of an entire car or pedestrian in its path. To this end, we present a novel depth-propagation algorithm, inspired by graph-based manifold learning (Weinberger et al., 2005; Roweis & Saul, 2000; Xiaojin & Zoubin, 2002). In a nutshell, we connect our estimated 3D stereo point cloud locally by a nearest neighbor graph, such that points corresponding to the same object will share many local paths with each other. We match the few but exact LiDAR measurements first with pixels (irrespective of depth) and then with their corresponding 3D points to obtain accurate depth estimates for several nodes in the graph. Finally, we propagate this exact depth information along the graph using a label diffusion mechanism — resulting in a dense and accurate depth map at negligible cost. In Figure 1 we see that the few (yellow) LiDAR measurements are sufficient to position almost all final (blue) points of the entire car within the green ground truth box.
+
+We conduct extensive empirical studies of our approaches on the KITTI object detection benchmark (Geiger et al., 2012; 2013) and achieve remarkable results. With solely stereo images, we outperform the previous state of the art (Wang et al., 2019a) by $10\%$ . Further adding a cheap 4-beam LiDAR brings another $27\%$ relative improvement — on some metrics, our approach is nearly on par with those based on a 64-beam LiDAR but can potentially save $95\%$ in cost.
+
+# 2 BACKGROUND
+
+3D object detection. Most work on 3D object detection operates on 3D point clouds from LiDAR as input (Li, 2017; Li et al., 2016; Meyer et al., 2019b; Yang et al., 2018a; Du et al., 2018; Shi et al., 2019; Engelcke et al., 2017; Yan et al., 2018; Lang et al., 2019). Frustum PointNet (Qi et al., 2018) applies PointNet (Qi et al., 2017a;b) to the points directly, while Voxelnet (Zhou & Tuzel, 2018) quantizes them into 3D grids. For street scenes, several works find that processing points from the bird's-eye view can already capture object contours and locations (Chen et al., 2017; Yang et al., 2018b; Ku et al., 2018). Images have also been used, but mainly to supplement LiDAR (Meyer et al., 2019a; Xu et al., 2018; Liang et al., 2018; Chen et al., 2017; Ku et al., 2018). Early work based solely on images — mostly built on the 2D frontal-view detection pipeline (Ren et al., 2015; He et al., 2017; Lin et al., 2017) — fell far behind in localizing objects in 3D (Li et al., 2019a; Xiang et al., 2015; 2017; Chabot et al., 2017; Mousavian et al., 2017; Chen et al., 2015; Xu & Chen, 2018; Chen et al., 2016; Pham & Jeon, 2017; Chen et al., 2018) $^3$ .
+
+Pseudo-LiDAR. This gap has been reduced significantly recently with the introduction of the pseudo-LiDAR framework proposed in (Wang et al., 2019a). This framework applies a drastically different approach from previous image-based 3D object detectors. Instead of directly detecting the 3D bounding boxes from the frontal view of a scene, pseudo-LiDAR begins with image-based depth estimation, predicting the depth $Z(u,v)$ of each image pixel $(u,v)$ . The resulting depth map $Z$ is then back-projected into a 3D point cloud: a pixel $(u,v)$ will be transformed to $(x,y,z)$ in 3D by
+
+$$
+z = Z (u, v), \quad x = \frac {(u - c _ {U}) \times z}{f _ {U}}, \quad y = \frac {(v - c _ {V}) \times z}{f _ {V}}, \tag {1}
+$$
+
+where $(c_{U}, c_{V})$ is the camera center and $f_{U}$ and $f_{V}$ are the horizontal and vertical focal length. The 3D point cloud is then treated exactly as LiDAR signal — any LiDAR-based 3D detector can be applied seamlessly. By taking the state-of-the-art algorithms from both ends (Chang & Chen, 2018; Ku et al., 2018; Qi et al., 2018), pseudo-LiDAR obtains the highest image-based performance on the KITTI object detection benchmark (Geiger et al., 2012; 2013). Our work builds upon this framework.
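A minimal NumPy sketch of this back-projection (Equation 1); the intrinsics below are illustrative placeholders, not actual KITTI calibration:

```python
import numpy as np

def depth_to_point_cloud(Z, f_u, f_v, c_u, c_v):
    """Back-project a depth map Z (H x W, meters) into an (H*W, 3)
    point cloud of (x, y, z) coordinates via Equation 1."""
    H, W = Z.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinates
    x = (u - c_u) * Z / f_u  # horizontal offset from the optical axis
    y = (v - c_v) * Z / f_v  # vertical offset
    return np.stack([x, y, Z], axis=-1).reshape(-1, 3)

# Toy example: a flat plane 10 m away, camera center at pixel (3, 2).
Z = np.full((4, 6), 10.0)
pts = depth_to_point_cloud(Z, f_u=721.0, f_v=721.0, c_u=3.0, c_v=2.0)
```

The resulting array can then be fed to any LiDAR-based detector, exactly as the framework prescribes.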
+
+Stereo disparity estimation. Pseudo-LiDAR relies heavily on the quality of depth estimation. Essentially, if the estimated pixel depths match those provided by LiDAR, pseudo-LiDAR with any LiDAR-based detector should be able to achieve the same performance as that obtained by applying the same detector to the LiDAR signal. According to (Wang et al., 2019a), depth estimation from stereo pairs of images (Mayer et al., 2016; Yamaguchi et al., 2014; Chang & Chen, 2018) is more accurate than that from monocular (i.e., single) images (Fu et al., 2018; Godard et al., 2017) for 3D object detection. We therefore focus on stereo depth estimation, which is routinely obtained from estimating disparity between images.
+
+A disparity estimation algorithm takes a pair of left-right images $I_{l}$ and $I_{r}$ as input, captured from a pair of cameras with a horizontal offset (i.e., baseline) $b$ . Without loss of generality, we assume that the algorithm treats the left image, $I_{l}$ , as reference and outputs a disparity map $D$ recording the horizontal disparity to $I_{r}$ for each pixel $(u,v)$ . Ideally, $I_{l}(u,v)$ and $I_{r}(u,v + D(u,v))$ will picture the same 3D location. We can therefore derive the depth map $Z$ via the following transform,
+
+$$
+Z(u, v) = \frac{f_U \times b}{D(u, v)} \quad \left(f_U: \text{horizontal focal length}\right). \tag{2}
+$$
+
+A common pipeline of disparity estimation is to first construct a 4D disparity cost volume $C_{\mathrm{disp}}$ , in which $C_{\mathrm{disp}}(u,v,d,:)$ is a feature vector that captures the pixel difference between $I_l(u,v)$ and $I_r(u,v + d)$ . It then estimates the disparity $D(u,v)$ for each pixel $(u,v)$ according to the cost volume $C_{\mathrm{disp}}$ . One basic algorithm is to build a 3D cost volume with $C_{\mathrm{disp}}(u,v,d) = \|I_l(u,v) - I_r(u,v + d)\|_2$ and determine $D(u,v)$ as $\arg \min_d C_{\mathrm{disp}}(u,v,d)$ . Advanced algorithms exploit more robust features in constructing $C_{\mathrm{disp}}$ and perform structured prediction for $D$ . In what follows, we give a brief introduction to PSMNet (Chang & Chen, 2018), a state-of-the-art algorithm used in (Wang et al., 2019a).
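The basic arg-min algorithm above can be sketched directly; this toy version uses grayscale images and follows the text's indexing convention in which $I_l(u,v)$ matches $I_r(u,v+d)$:

```python
import numpy as np

def disparity_from_cost_volume(I_l, I_r, max_disp):
    """Build C_disp(u, v, d) = |I_l(u, v) - I_r(u, v + d)| and take the
    per-pixel arg min over d (out-of-range shifts keep infinite cost)."""
    H, W = I_l.shape
    C = np.full((H, W, max_disp), np.inf)
    C[:, :, 0] = np.abs(I_l - I_r)
    for d in range(1, max_disp):
        C[:, :-d, d] = np.abs(I_l[:, :-d] - I_r[:, d:])
    return np.argmin(C, axis=2)

# A single bright dot, shifted 3 pixels between the two views.
I_r = np.zeros((5, 20)); I_r[:, 10] = 1.0
I_l = np.zeros((5, 20)); I_l[:, 7] = 1.0
D = disparity_from_cost_volume(I_l, I_r, max_disp=8)  # D[2, 7] == 3
```

Depth then follows from Equation 2 wherever $D > 0$; real systems replace the absolute difference with learned features, as PSMNet does below.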
+
+PSMNet begins with extracting deep feature maps $h_l$ and $h_r$ from $I_l$ and $I_r$ , respectively. It then constructs $C_{\mathrm{disp}}(u, v, d,:)$ by concatenating features of $h_l(u, v)$ and $h_r(u, v + d)$ , followed by layers
+
+
+Figure 3: Disparity cost volume (left) vs. depth cost volume (right). The figure shows the 3D points obtained from LiDAR (yellow) and stereo (purple) corresponding to a car in KITTI, seen from the bird's eye view (BEV). Points from the disparity cost volume are stretched out and noisy; while points from the depth cost volume capture the car contour faithfully.
+
+
+
+
+Figure 4: Depth estimation errors. We compare depth estimation error on 3,769 KITTI validation images, taking 64-beam LiDAR depths as ground truths. We separate pixels according to their true depths (z). See the text and appendix for details.
+
+of 3D convolutions. The resulting 3D tensor $S_{\mathrm{disp}}$ , whose feature channel size is reduced to one, is then used to derive the pixel disparity via the following weighted combination,
+
+$$
+D(u, v) = \sum_{d} \operatorname{softmax}(-S_{\mathrm{disp}}(u, v, d)) \times d, \tag{3}
+$$
+
+where softmax is performed along the $3^{\mathrm{rd}}$ dimension of $S_{\mathrm{disp}}$ . PSMNet can be learned end-to-end, including the image feature extractor and 3D convolution kernels, to minimize the disparity error
+
+$$
+\sum_ {(u, v) \in \mathcal {A}} \ell (D (u, v) - D ^ {\star} (u, v)), \tag {4}
+$$
+
+where $\ell$ is the smooth L1 loss, $D^{\star}$ is the ground truth map, and $\mathcal{A}$ contains pixels with ground truths.
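The weighted combination in Equation 3 is a soft arg-min over the candidate grid; a minimal sketch (the toy scores below are illustrative):

```python
import numpy as np

def soft_argmin(S, grid):
    """Equation 3: softmax over the negated scores along the last axis,
    then the expectation over the candidate grid (disparities or depths)."""
    e = np.exp(-S - (-S).max(axis=-1, keepdims=True))  # stable softmax(-S)
    p = e / e.sum(axis=-1, keepdims=True)
    return (p * grid).sum(axis=-1)

# One pixel whose cost strongly favors disparity 4 on the grid [0..7].
grid = np.arange(8, dtype=float)
S = np.full((1, 1, 8), 10.0)
S[0, 0, 4] = 0.0  # low score = low cost at d = 4
D = soft_argmin(S, grid)  # close to 4.0
```

Because the output is an expectation rather than a hard arg min, it is differentiable, which is what allows PSMNet to be trained end-to-end with the loss in Equation 4.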
+
+# 3 STEREO DEPTH NETWORK (SDN)
+
+A stereo network designed and learned to minimize the disparity error (cf. Equation 4) may over-emphasize nearby objects with smaller depths and therefore perform poorly in estimating depths for faraway objects. To see this, note that Equation 2 implies that for a given error in disparity $\delta D$ , the error in depth $\delta Z$ increases quadratically with depth:
+
+$$
+Z \propto \frac {1}{D} \Rightarrow \delta Z \propto \frac {1}{D ^ {2}} \delta D \Rightarrow \delta Z \propto Z ^ {2} \delta D. \tag {5}
+$$
+
+The middle term is obtained by differentiating $Z(D)$ w.r.t. $D$ . In particular, using the settings on the KITTI dataset (Geiger et al., 2012; 2013), a single pixel error in disparity implies only a $0.1\mathrm{m}$ error in depth at a depth of 5 meters, but a $5.8\mathrm{m}$ error at a depth of 50 meters. See Figure 2 for a mapping from disparity to depth.
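For concreteness, the asymmetry can be reproduced by inverting Equation 2 with a one-pixel disparity error; a small sketch with KITTI-like calibration (the exact numbers shift slightly with the calibration constants used):

```python
# Depth error caused by a one-pixel disparity error, via Equation 2.
f_u, b = 721.0, 0.54  # KITTI-like horizontal focal length (px), baseline (m)

def depth_error(z, delta_d=1.0):
    d = f_u * b / z                      # true disparity at depth z
    return z - f_u * b / (d + delta_d)   # depth error after a +1 px error

near = depth_error(5.0)   # a few centimeters
far = depth_error(50.0)   # several meters
```

The error at 50 m is roughly two orders of magnitude larger than at 5 m, which is exactly the quadratic growth predicted by Equation 5.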
+
+Depth Loss. We propose two changes to adapt stereo networks for direct depth estimation. First, we learn stereo networks to directly optimize the depth loss
+
+$$
+\sum_ {(u, v) \in \mathcal {A}} \ell (Z (u, v) - Z ^ {\star} (u, v)). \tag {6}
+$$
+
+$Z$ and $Z^{\star}$ can be obtained from $D$ and $D^{\star}$ using Equation 2. The change from the disparity loss to the depth loss corrects the disproportionately strong emphasis on tiny depth errors of nearby objects — a necessary but still insufficient change to overcome the problems of disparity estimation.
+
+Depth Cost Volume. To facilitate accurate depth learning (rather than disparity) we need to further address the internals of the depth estimation pipeline. A crucial source of error is the 3D convolutions within the 4D disparity cost volume, where the same kernels are applied for the entire cost volume. This is highly problematic as it implicitly assumes that the effect of a convolution is homogeneous
+
+
+Figure 2: The disparity-to-depth transform. We set $f_{U} = 721$ (in pixels) and $b = 0.54$ (in meters) in Equation 2, which are the typical values used in the KITTI dataset.
+
+
+Figure 5: The whole pipeline of improved stereo depth estimation: (top) the stereo depth network (SDN) constructs a depth cost volume from left-right images and is optimized for direct depth estimation; (bottom) the graph-based depth correction algorithm (GDC) refines the depth map by leveraging sparser LiDAR signal. The gray arrows indicate the observer's view point. We superimpose the (green) ground-truth 3D box of a car, the same one in Figure 1. The corrected points (blue; bottom right) are perfectly located inside the ground truth box.
+
+throughout — which is clearly violated by the reciprocal depth to disparity relation (Figure 2). For example, it may be completely appropriate to locally smooth two neighboring pixels with disparity 85 and 86 (changing the depth by a few cm to smooth out a surface), whereas applying the same kernel for two pixels with disparity 5 and 6 could easily move the 3D points by $10\mathrm{m}$ or more.
+
+Taking this insight and the central assumption of convolutions — all neighborhoods can be operated upon in an identical manner — into account, we propose to instead construct the depth cost volume $C_{\mathrm{depth}}$ , in which $C_{\mathrm{depth}}(u,v,z,:)$ will encode features describing how likely the depth $Z(u,v)$ of pixel $(u,v)$ is $z$ . The subsequent 3D convolutions will then operate on the grid of depth, rather than disparity, affecting neighboring depths identically, independent of their location. The resulting 3D tensor $S_{\mathrm{depth}}$ is then used to predict the pixel depth similar to Equation 3
+
+$$
+Z(u, v) = \sum_{z} \operatorname{softmax}(-S_{\mathrm{depth}}(u, v, z)) \times z.
+$$
+
+We construct the new depth volume, $C_{\mathrm{depth}}$ , based on the intuition that $C_{\mathrm{depth}}(u,v,z,:)$ and $C_{\mathrm{disp}}\left(u,v,\frac{f_U\times b}{z},:\right)$ should lead to equivalent "cost". To this end, we apply a bilinear interpolation to construct $C_{\mathrm{depth}}$ from $C_{\mathrm{disp}}$ using the depth-to-disparity transform in Equation 2. Specifically, we consider disparity in the range of [0, 191] following PSMNet (Chang & Chen, 2018), consider depth in the range of [1m, 80m], and set the grid of depth in $C_{\mathrm{depth}}$ to be 1m. Figure 5 (top) depicts our stereo depth network (SDN) pipeline. Crucially, all convolutions operate exclusively on $C_{\mathrm{depth}}$ . Figure 4 compares the median values of absolute depth estimation errors using the disparity cost volume (i.e., PSMNet) and the depth cost volume (SDN) (see subsection D.5 for detailed numbers). As expected, for faraway depth, SDN leads to drastically smaller errors with only marginal increases in the very near range (which disparity-based methods over-optimize). See the appendix for the detailed setup and more discussions.
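A minimal sketch of this resampling step, assuming interpolation along the disparity axis only and using random costs as stand-ins for real network features:

```python
import numpy as np

def depth_cost_volume(C_disp, f_u, b, z_grid):
    """Resample a disparity cost volume onto a depth grid:
    C_depth(u, v, z) = C_disp(u, v, f_u * b / z), linearly interpolating
    between the two nearest disparity bins."""
    D = C_disp.shape[-1]
    d = np.clip(f_u * b / z_grid, 0.0, D - 1)   # fractional disparities
    lo = np.floor(d).astype(int)
    hi = np.minimum(lo + 1, D - 1)
    w = d - lo                                   # interpolation weight
    return (1 - w) * C_disp[..., lo] + w * C_disp[..., hi]

z_grid = np.arange(1.0, 81.0)       # 1 m depth grid over [1 m, 80 m]
C_disp = np.random.rand(2, 3, 192)  # disparity range [0, 191], as in PSMNet
C_depth = depth_cost_volume(C_disp, 721.0, 0.54, z_grid)
```

Note how the uniform depth grid compresses the large-disparity (near) range and stretches the small-disparity (far) range, so subsequent 3D convolutions act on equally spaced depths.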
+
+# 4 DEPTH CORRECTION
+
+Our SDN significantly improves depth estimation and more precisely renders the object contours (see Figure 3). However, there is a fundamental limitation in stereo because of the discrete nature of pixels: the disparity, being the difference in the horizontal coordinate between corresponding pixels, has to be quantized at the level of individual pixels while the depth is continuous. Although the quantization error can be alleviated with higher resolution images, the computational cost of depth prediction scales cubically with the image resolution—pushing the limits of GPUs in autonomous vehicles.
+
+We therefore explore a hybrid approach by leveraging a cheap LiDAR with extremely sparse (e.g., 4 beams) but accurate depth measurements to correct this bias. We note that such sensors are too
+
+sparse to capture object shapes and cannot be used alone for detection. However, by projecting the LiDAR points into the image plane we obtain exact depths on a small portion of "landmark" pixels.
+
+We present a graph-based depth correction (GDC) algorithm that effectively combines the dense stereo depth that has rendered object shapes and the sparse accurate LiDAR measurements. Conceptually, we expect the corrected depth map to have the following properties: globally, landmark pixels associated with LiDAR points should possess the exact depths; locally, object shapes captured by neighboring 3D points, back-projected from the input depth map (cf. Equation 1), should be preserved. Figure 5 (bottom) illustrates the algorithm.
+
+Input Matching. We take as input the two point clouds from LiDAR (L) and Pseudo-LiDAR (PL) by stereo depth estimation. The latter is obtained by converting pixels $(u, v)$ with depth $z$ to 3D points $(x_u, y_v, z)$ . First, we characterize the local shapes by the directed K-nearest-neighbor (KNN) graph in the PL point cloud (using accelerated KD-Trees (Shevtsov et al., 2007)) that connects each 3D point to its KNNs with appropriate weights. Similarly, we can project the 3D LiDAR points onto pixel locations $(u, v)$ and match them to corresponding 3D stereo points. Without loss of generality, we assume that we are given "ground truth" LiDAR depth for the first $n$ points and no ground truth for the remaining $m$ points. We refer to the 3D stereo depth estimates as $Z \in \mathbb{R}^{n+m}$ and the LiDAR depth ground-truth as $G \in \mathbb{R}^n$ .
+
+Edge weights. To construct the KNN graph in 3D we ignore the LiDAR information on the first $n$ points and only use their predicted stereo depth in $Z$ . Let $\mathcal{N}_i$ denote the set of $k$ neighbors of the $i^{th}$ point. Further, let $W \in \mathbb{R}^{(n + m) \times (n + m)}$ denote the weight matrix, where $W_{ij}$ denotes the edge-weight between points $i$ and $j$ . Inspired by prior work in manifold learning (Roweis & Saul, 2000; Weinberger et al., 2005) we choose the weights to be the coefficients that reconstruct the depth of any point from the depths of its neighbors in $\mathcal{N}_i$ . We can solve for these weights with the following constrained quadratic optimization problem:
+
+$$
+W = \arg \min_{W} \|Z - W Z\|_2^2, \quad \text{s.t. } W\mathbf{1} = \mathbf{1} \text{ and } W_{ij} = 0 \text{ if } j \notin \mathcal{N}_i. \tag{7}
+$$
+
+Here $\mathbf{1} \in \mathbb{R}^{n + m}$ denotes the all-ones vector. As long as we pick $k > 3$ and the points are in general position there are infinitely many solutions that satisfy $Z = WZ$ , and we pick the solution with the minimum $L_{2}$ norm (obtained with slight $L_{2}$ regularization).
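Equation 7 decomposes into one small equality-constrained least-squares problem per point over its $k$ neighbor weights; a sketch of one such solve (the regularization weight `lam` is an illustrative choice, and the neighbor depths are toy values):

```python
import numpy as np

def reconstruction_weights(z_i, z_nbrs, lam=1e-3):
    """Weights over one point's k neighbors that reconstruct its depth and
    sum to one, with slight L2 regularization to pick the minimum-norm
    solution. Solved via the KKT system of
    min (z_i - w.z)^2 + lam*|w|^2  s.t.  sum(w) = 1."""
    k = len(z_nbrs)
    A = np.zeros((k + 1, k + 1))
    A[:k, :k] = np.outer(z_nbrs, z_nbrs) + lam * np.eye(k)
    A[:k, k] = 1.0   # Lagrange multiplier column for the constraint
    A[k, :k] = 1.0   # constraint row: weights sum to one
    rhs = np.concatenate([z_i * z_nbrs, [1.0]])
    return np.linalg.solve(A, rhs)[:k]

# A point at 10 m reconstructed from four toy neighbor depths.
w = reconstruction_weights(10.0, np.array([9.5, 10.5, 10.0, 11.0]))
```

Stacking these per-point weight vectors row by row yields the sparse matrix $W$ used below.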
+
+Depth Correction. Let us denote the corrected depth values as $Z' \in \mathbb{R}^{n+m}$ , with $Z' = [Z_L'; Z_{PL}']$ and $Z_L' \in \mathbb{R}^n$ and $Z_{PL}' \in \mathbb{R}^m$ , where $Z_L'$ are the depth values of points with LiDAR ground-truth and $Z_{PL}'$ otherwise. For the $n$ points with LiDAR measurements we update the depth to the (ground truth) values $Z_L' = G$ . We then solve for $Z_{PL}'$ given $G$ and the weighted KNN graph encoded in $W$ . Concretely, we update the remaining depths $Z_{PL}'$ such that the depth of any point $i$ can still be reconstructed with high fidelity as a weighted sum of its KNNs' depths using the learned weights $W$ ; i.e., if point $i: 1 \leq i \leq n$ is moved to its new depth $G_i$ , then its neighbors in $\mathcal{N}_i$ must also be corrected such that $G_i \approx \sum_{j \in \mathcal{N}_i} W_{ij} Z_j'$ . Further, the neighbors' neighbors must be corrected, and the depth information from the $n$ landmark points propagates across the entire graph. We can solve for the final $Z'$ directly with another quadratic optimization:
+
+$$
+Z^{\prime} = \arg \min_{Z^{\prime}} \|Z^{\prime} - W Z^{\prime}\|^2, \quad \text{s.t. } Z^{\prime}_{1:n} = G. \tag{8}
+$$
+
+To illustrate the correction process, imagine the simplest case where the depth of only a single point $(n = 1)$ is updated to $G_{1} = Z_{1} + \delta$ . An optimal solution of Equation 8 is then to move all the remaining points by the same amount, i.e. $Z' = Z + \mathbf{1}\delta$ : as $Z = WZ$ and $W\mathbf{1} = \mathbf{1}$ we must have $W(Z + \mathbf{1}\delta) = Z + \mathbf{1}\delta$ . In the setting with $n > 1$ , the least-squares loss ensures a soft diffusion between the different LiDAR depth estimates. Both optimization problems in Equation 7 and Equation 8 can be solved exactly and efficiently with sparse matrix solvers. We summarize the procedure as an algorithm in the appendix.
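A dense-matrix sketch of Equation 8 (the actual implementation uses sparse solvers, as noted above; the toy `W` below encodes four collinear points):

```python
import numpy as np

def correct_depth(W, G):
    """Clamp the first n depths to the LiDAR values G, then least-squares
    solve for the remaining depths so that every point is still
    reconstructed by its neighbors through W (rows of W sum to 1)."""
    n = len(G)
    A = np.eye(W.shape[0]) - W
    # minimize ||A[:, :n] @ G + A[:, n:] @ Z_pl||^2 over the free depths
    Z_pl = np.linalg.lstsq(A[:, n:], -A[:, :n] @ G, rcond=None)[0]
    return np.concatenate([G, Z_pl])

# Four collinear points at depths 1..4 m; each point is an exact affine
# combination of its neighbors, so W Z = Z and W 1 = 1 both hold.
W = np.array([[0.0, 2.0, -1.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, -1.0, 2.0, 0.0]])
# LiDAR says the first two points sit 0.5 m deeper; the rest follow.
Z_new = correct_depth(W, G=np.array([1.5, 2.5]))  # -> [1.5, 2.5, 3.5, 4.5]
```

The two unconstrained depths shift by the same 0.5 m as the landmarks, which is exactly the diffusion behavior described in the text.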
+
+From the view of graph-based manifold learning, our GDC algorithm is reminiscent of locally linear embeddings (Roweis & Saul, 2000) with landmarks to guide the final solution (Weinberger et al., 2005). Figure 1 illustrates vividly how the initial 3D point cloud from SDN (purple) of a car in the KITTI dataset is corrected with a few sparse LiDAR measurements (yellow). The resulting points (blue) are right inside the ground-truth box and clearly show the contour of the car. Figure 4 shows the additional improvement from the GDC (blue) over the pure SDN depth estimates (see subsection D.5 for detailed numbers). The error (calculated only on non-landmark pixels) is corrected over the entire image, even though many regions have no LiDAR measurements. This is because the pseudo-LiDAR point cloud is sufficiently dense and we choose $k$ to be large enough (in practice, we use $k = 10$ )
+
+Table 1: 3D object detection results on KITTI validation. We report $\mathrm{AP}_{\mathrm{BEV}} / \mathrm{AP}_{\mathrm{3D}}$ (in %) of the car category, corresponding to average precision of the bird's-eye view and 3D object detection. We arrange methods according to the input signals: M for monocular images, S for stereo images, L for 64-beam LiDAR, and L# for sparse 4-beam LiDAR. PL stands for PSEUDO-LiDAR. Our PSEUDO-LiDAR ++ (PL++) with enhanced depth estimation — SDN and GDC— are in blue. Methods with 64-beam LiDAR are in gray. Best viewed in color.
+
+| Detection algo | Input | Easy (IoU = 0.5) | Moderate (IoU = 0.5) | Hard (IoU = 0.5) | Easy (IoU = 0.7) | Moderate (IoU = 0.7) | Hard (IoU = 0.7) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 3DOP | S | 55.0 / 46.0 | 41.3 / 34.6 | 34.6 / 30.1 | 12.6 / 6.6 | 9.5 / 5.1 | 7.6 / 4.1 |
+| MLF-STEREO | S | - | 53.7 / 47.4 | - | - | 19.5 / 9.8 | - |
+| S-RCNN | S | 87.1 / 85.8 | 74.1 / 66.3 | 58.9 / 57.2 | 68.5 / 54.1 | 48.3 / 36.7 | 41.5 / 31.1 |
+| PL: AVOD | S | 89.0 / 88.5 | 77.5 / 76.4 | 68.7 / 61.2 | 74.9 / 61.9 | 56.8 / 45.3 | 49.0 / 39.0 |
+| PL: PIXOR* | S | 89.0 / - | 75.2 / - | 67.3 / - | 73.9 / - | 54.0 / - | 46.9 / - |
+| PL: P-RCNN | S | 88.4 / 88.0 | 76.6 / 73.7 | 69.0 / 67.8 | 73.4 / 62.3 | 56.0 / 44.9 | 52.7 / 41.6 |
+| PL++: AVOD | S | 89.4 / 89.0 | 79.0 / 77.8 | 70.1 / 69.1 | 77.0 / 63.2 | 63.7 / 46.8 | 56.0 / 39.8 |
+| PL++: PIXOR* | S | 89.9 / - | 78.4 / - | 74.7 / - | 79.7 / - | 61.1 / - | 54.5 / - |
+| PL++: P-RCNN | S | 89.8 / 89.7 | 83.8 / 78.6 | 77.5 / 75.1 | 82.0 / 67.9 | 64.0 / 50.1 | 57.3 / 45.3 |
+| PL++: AVOD | L# + S | 90.2 / 90.1 | 87.7 / 86.9 | 79.8 / 79.2 | 86.8 / 70.7 | 76.6 / 56.2 | 68.7 / 53.4 |
+| PL++: PIXOR* | L# + S | 95.1 / - | 85.1 / - | 78.3 / - | 84.0 / - | 71.0 / - | 65.2 / - |
+| PL++: P-RCNN | L# + S | 90.3 / 90.3 | 87.7 / 86.9 | 84.6 / 84.2 | 88.2 / 75.1 | 76.9 / 63.8 | 73.4 / 57.4 |
+| AVOD | L + M | 90.5 / 90.5 | 89.4 / 89.2 | 88.5 / 88.2 | 89.4 / 82.8 | 86.5 / 73.5 | 79.3 / 67.1 |
+| PIXOR* | L + M | 94.2 / - | 86.7 / - | 86.1 / - | 85.2 / - | 81.2 / - | 76.1 / - |
+| P-RCNN | L | 97.3 / 97.3 | 89.9 / 89.8 | 89.4 / 89.3 | 90.2 / 89.2 | 87.9 / 78.9 | 85.5 / 77.9 |
+
+such that the KNN graph is typically connected (or consists of a few large connected components). See subsection D.6 for more analysis. For objects such as cars the improvements through GDC are far more pronounced, as these are typically hit by the four LiDAR beams and can be corrected effectively.
+
+# 5 EXPERIMENTS
+
+# 5.1 SETUP
+
+We refer to our combined method (SDN and GDC) for 3D object detection as PSEUDO-LiDAR++ (PL++ in short). To analyze the contribution of each component, we evaluate SDN and GDC independently and jointly across several settings. For GDC we set $k = 10$ and consider adding signal from a (simulated) 4-beam LiDAR, unless stated otherwise.
+
+Dataset, Metrics, and Baselines. We evaluate on the KITTI dataset (Geiger et al., 2013; 2012), which contains 7,481 and 7,518 images for training and testing. We follow (Chen et al., 2015) to separate the 7,481 images into 3,712 for training and 3,769 for validation. For each (left) image, KITTI provides the corresponding right image, the 64-beam Velodyne LiDAR point cloud, the camera calibration matrices, and the bounding boxes. We focus on 3D object detection and bird's-eye-view (BEV) localization and report results on the validation set. Specifically, we focus on the "car" category, following Chen et al. (2017) and Xu et al. (2018). We report average precision (AP) with IoU (Intersection over Union) thresholds at 0.5 and 0.7. We denote AP for the 3D and BEV tasks by $\mathrm{AP}_{\mathrm{3D}}$ and $\mathrm{AP}_{\mathrm{BEV}}$ . KITTI defines the easy, moderate, and hard settings, in which objects with 2D box heights smaller than or occlusion/truncation levels larger than certain thresholds are disregarded. We compare to four stereo-based detectors: PSEUDO-LiDAR (PL in short) (Wang et al., 2019a), 3DOP (Chen et al., 2015), S-RCNN (Li et al., 2019b), and MLF-STEREO (Xu & Chen, 2018).
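For intuition on the IoU thresholds, here is a toy axis-aligned IoU computation in the BEV plane; note that the KITTI benchmark evaluates rotated boxes, so this is purely illustrative:

```python
def iou_axis_aligned(a, b):
    """IoU of two axis-aligned BEV rectangles given as (x1, z1, x2, z2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap in x
    iz = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap in z (depth)
    inter = ix * iz
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A 2 m x 4 m ground-truth car vs. a prediction shifted 1 m in depth.
gt = (0.0, 10.0, 2.0, 14.0)
pred = (0.0, 11.0, 2.0, 15.0)
iou = iou_axis_aligned(gt, pred)  # 0.6: passes the 0.5 threshold, fails 0.7
```

A 1 m depth error is thus already enough to drop a detection at IoU = 0.7, which is why de-biased depth matters so much for these metrics.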
+
+Stereo depth network (SDN). We use PSMNET (Chang & Chen, 2018) as the backbone for our stereo depth estimation network (SDN). We follow Wang et al. (2019a) to pre-train SDN on the synthetic Scene Flow dataset (Mayer et al., 2016) and fine-tune it on the 3,712 training images of KITTI. We obtain the depth ground truth by projecting the corresponding LiDAR points onto images. We also train a PSMNET in the same way for comparison, which minimizes disparity error.
+
+3D object detection. We apply three algorithms: AVOD (Ku et al., 2018), PIXOR (Yang et al., 2018b), and P-RCNN (Shi et al., 2019). All utilize information from LiDAR and/or monocular images. We use the released implementations of AVOD (specifically, AVOD-FPN) and P-RCNN. We implement PIXOR ourselves with a slight modification to include visual information (denoted
+
+Table 2: Results on the car category on the test set. We compare PL++ (blue) and 64-beam LiDAR (gray), using P-RCNN, and report $\mathrm{AP}_{\mathrm{BEV}} / \mathrm{AP}_{\mathrm{3D}}$ at IoU=0.7.
+
+| Input signal | Easy | Moderate | Hard |
+| --- | --- | --- | --- |
+| PL++ (SDN) | 75.5 / 60.4 | 57.2 / 44.6 | 53.4 / 38.5 |
+| PL++ (SDN + GDC) | 83.8 / 68.5 | 73.5 / 54.7 | 66.5 / 51.2 |
+| LiDAR | 89.5 / 85.9 | 85.7 / 75.8 | 79.1 / 68.3 |
+
+Table 4: Ablation study on leveraging sparse LiDAR. We report $\mathrm{AP}_{\mathrm{BEV}} / \mathrm{AP}_{\mathrm{3D}}$ (in %) of the car category at IoU = 0.7 on KITTI validation. L#: 4-beam LiDAR signal alone. SDN + L#: pseudo-LiDAR with depths of landmark pixels replaced by 4-beam LiDAR. The best result of each column is in bold font.
+
+| Stereo depth | Easy | Moderate | Hard |
+| --- | --- | --- | --- |
+| SDN | 82.0 / 67.9 | 64.0 / 50.1 | 57.3 / 45.3 |
+| L# | 73.2 / 56.1 | 71.3 / 53.1 | 70.5 / 51.5 |
+| SDN + L# | 86.3 / 72.0 | 73.0 / 56.1 | 67.4 / 54.1 |
+| SDN + GDC | **88.2 / 75.1** | **76.9 / 63.8** | **73.4 / 57.4** |
+
+Table 3: Ablation study on depth estimation. We report $\mathrm{AP}_{\mathrm{BEV}} / \mathrm{AP}_{\mathrm{3D}}$ (in %) of the car category at IoU = 0.7 on KITTI validation. DL: depth loss.
+
+| Stereo depth | Easy | Moderate | Hard |
+| --- | --- | --- | --- |
+| PSMNET | 73.4 / 62.3 | 56.0 / 44.9 | 52.7 / 41.6 |
+| PSMNET + DL | 80.1 / 65.5 | 61.9 / 46.8 | 56.0 / 43.0 |
+| SDN | 82.0 / 67.9 | 64.0 / 50.1 | 57.3 / 45.3 |
+
+Table 5: Results on pedestrians (top) and cyclists (bottom) on KITTI validation. We apply F-POINTNET (Qi et al., 2018) and report $\mathrm{AP}_{\mathrm{BEV}} / \mathrm{AP}_{\mathrm{3D}}$ (in %) at IoU = 0.5, following Wang et al. (2019a).
+
+| Stereo depth (pedestrians) | Easy | Moderate | Hard |
+| --- | --- | --- | --- |
+| PSMNET | 41.3 / 33.8 | 34.9 / 27.4 | 30.1 / 24.0 |
+| SDN | 48.7 / 40.9 | 40.4 / 32.9 | 34.9 / 28.8 |
+| SDN + GDC | 63.7 / 53.6 | 53.8 / 44.4 | 46.8 / 38.1 |
+
+| Stereo depth (cyclists) | Easy | Moderate | Hard |
+| --- | --- | --- | --- |
+| PSMNET | 47.6 / 41.3 | 29.9 / 25.2 | 27.0 / 24.9 |
+| SDN | 49.3 / 44.6 | 30.4 / 28.7 | 28.6 / 26.4 |
+| SDN + GDC | 65.7 / 60.8 | 45.8 / 40.8 | 42.8 / 38.0 |
+
+as $\mathrm{PIXOR}^{\star}$ ). We train all models from scratch on the 3,712 training images, replacing the LiDAR points with pseudo-LiDAR data generated from stereo depth estimation. See the appendix for details.
+
+Sparser LiDAR. We simulate sparser LiDAR signals with fewer beams by first projecting the 64-beam LiDAR points onto a 2D plane of horizontal and vertical angles. We quantize the vertical angles into 64 levels with an interval of $0.4^{\circ}$ , which is close to the specification of the 64-beam LiDAR. We keep the points falling into a subset of beams to mimic the sparser signal. See the appendix for details.
+
+# 5.2 EXPERIMENTAL RESULTS
+
+Results on the KITTI val set. We summarize the main results on KITTI object detection in Table 1. Several important trends can be observed: 1) Our $\mathrm{PL}++$ with depth estimates enhanced by SDN and GDC yields consistent improvements over PL across all settings. 2) $\mathrm{PL}++$ with GDC refinement by 4-beam LiDAR (input: $\mathrm{L}\# +\mathrm{S}$ ) performs significantly better than $\mathrm{PL}++$ with only stereo inputs (input: S). 3) PL experiences a substantial drop in accuracy from IoU at 0.5 to 0.7 in the hard setting. This suggests that while PL detects faraway objects, it mislocalizes them, likely placing them at the wrong depth, so that they count as missed detections at the higher overlap threshold. Interestingly, this is where we observe the largest gain — from PL: P-RCNN ( $\mathrm{AP}_{\mathrm{BEV}} = 52.7$ ) to $\mathrm{PL}++$ : P-RCNN ( $\mathrm{AP}_{\mathrm{BEV}} = 73.4$ ) with input $\mathrm{L}\# +\mathrm{S}$ . Note that the majority of the gain comes from GDC: the stereo-only version of $\mathrm{PL}++$ improves the score only to $57.3\,\mathrm{AP}_{\mathrm{BEV}}$ . 4) The gap between $\mathrm{PL}++$ and LiDAR is at most $13\%$ $\mathrm{AP}_{\mathrm{BEV}}$ , even in the hard setting at IoU 0.7. 5) At IoU 0.5, with the aid of only 4 LiDAR beams, $\mathrm{PL}++$ ( $\mathrm{SDN} + \mathrm{GDC}$ ) achieves results comparable to models using 64-beam LiDAR signals.
+
+Results on the KITTI test set. Table 2 summarizes results on the car category on the KITTI test set. We see a gap between our methods and LiDAR similar to that on the validation set, suggesting that our improvement is not particular to the validation data. Our approach without LiDAR refinement (pure SDN) ranks first among all image-based algorithms on the KITTI leaderboard.
+
+In the following, we conduct a series of experiments to analyze the performance gains of our approaches and discuss several key observations. We mainly experiment with P-RCNN; the results with AVOD and $\mathrm{PIXOR}^*$ follow similar trends, so we include them in the appendix.
+
+Depth loss and depth cost volume. To turn a disparity network (e.g., PSMNET) into SDN, we make two changes: 1) replace the disparity loss with the depth loss; 2) replace the disparity cost volume with the depth cost volume. In Table 3, we study the effect of these two changes separately. On the $\mathrm{AP}_{\mathrm{BEV}} / \mathrm{AP}_{\mathrm{3D}}$ (moderate) metric, the depth loss gives a $6\% / 2\%$ improvement and the depth cost volume brings another $2 \sim 3\%$ gain.
+
+Impact of sparse LiDAR beams. We leverage 4-beam LiDAR to correct stereo depth using GDC. However, it is possible that the gains in 3D object detection come entirely from the new LiDAR sensor and that the stereo estimates are immaterial. In Table 4, we study this question by comparing the detection results against those of models using 1) the 4-beam LiDAR point clouds alone and 2) pseudo-LiDAR point clouds in which the depths of landmark pixels are replaced by 4-beam LiDAR, i.e., depth correction without propagation. It can be seen that 4-beam LiDAR by itself locates faraway objects fairly well but cannot capture nearby objects precisely, while simply replacing pseudo-LiDAR with LiDAR at the landmark pixels prevents the model from detecting faraway objects accurately. In contrast, our proposed GDC method effectively combines the merits of the two signals, achieving performance superior to either signal alone.
+
+Pedestrian and cyclist detection. For a fair comparison to Wang et al. (2019a), we apply F-POINTNET (Qi et al., 2018) to detect pedestrians and cyclists. Table 5 shows the results: our methods significantly boost the performance.
+
+(Figure 6 panels, left to right: LiDAR; Pseudo-LiDAR; Pseudo-LiDAR++ (SDN); Pseudo-LiDAR++ (SDN + GDC).)
+
+Figure 6: Qualitative Comparison. We show the detection results on a KITTI validation scene by P-RCNN with different input point clouds. We visualize them from both frontal-view images and bird's-eye view (BEV) point maps. Ground-truth boxes are in green and predicted bounding boxes are in red. The observer is at the left-hand side of the BEV map looking to the right. In other words, ground truth boxes on the right are more faraway (i.e., deeper) from the observer, and hence hard to localize. Best viewed in color.
+
+Qualitative visualization. In Figure 6, we show a qualitative comparison of detection results on a randomly chosen scene from the KITTI object validation set, using P-RCNN (with confidence $>0.95$ ) on different input signals. Specifically, we show the results on the frontal-view images and the bird's-eye-view (BEV) point clouds. In the BEV map, the observer is on the left-hand side looking to the right. The point clouds generated by PSEUDO-LIDAR ++ (SDN alone or SDN + GDC) align better with LiDAR than those generated by PSEUDO-LIDAR (PSMNET). For nearby objects (i.e., bounding boxes close to the left in the BEV map), P-RCNN localizes fairly well with any input point cloud. However, for faraway objects (i.e., bounding boxes close to the right), PSEUDO-LIDAR with depths estimated by PSMNET predicts objects (red boxes) that deviate from the ground truths (green boxes). Moreover, the noisy PSMNET points also lead to false negatives. In contrast, the boxes detected by our PSEUDO-LIDAR ++, either with SDN alone or with SDN + GDC, align well with the ground-truth boxes, justifying our targeted improvement in estimating faraway depths.
+
+Additional results, analyses, qualitative visualization and discussions. We provide results of PSEUDO-LIDAR ++ with fewer LiDAR beams, comparisons to depth completion methods, analysis on depth quality and detection accuracy, run time, failure cases, and more qualitative results in the appendix. With simple optimizations, GDC runs in 90 ms/frame using a single GPU (7.7 ms for KD-tree construction and search).
+
+# 6 CONCLUSION
+
+In this paper we made two contributions to improve 3D object detection for autonomous vehicles without expensive LiDAR. First, we identified disparity estimation as a main source of error for stereo-based systems and proposed a novel approach that learns depth directly, end-to-end, instead of through disparity estimates. Second, we advocated that one should not use expensive LiDAR sensors to learn the local structure and depth of objects. Instead, one can use commodity stereo cameras for the former and a cheap, sparse LiDAR to correct the systematic bias in the resulting depth estimates. We provided a novel graph propagation algorithm that integrates the two data modalities and propagates the sparse yet accurate depth estimates using two sparse matrix solvers. The resulting system, PSEUDO-LIDAR ++ (SDN + GDC), performs almost on par with $75,000 64-beam LiDAR systems while requiring only 4 beams and two commodity cameras, which together can be obtained for less than $1,000.
+
+# ACKNOWLEDGMENTS
+
+This research is supported by grants from the National Science Foundation NSF (III-1618134, III-1526012, IIS-1149882, IIS-1724282, and TRIPODS-1740822), the Office of Naval Research DOD (N00014-17-1-2175), the Bill and Melinda Gates Foundation, and the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875). We are thankful for generous support by Zillow and SAP America Inc.
+
+# REFERENCES
+
+Florian Chabot, Mohamed Chaouch, Jaonary Rabarisoa, Céline Teulière, and Thierry Chateau. Deep manta: A coarse-to-fine many-task network for joint 2d and 3d vehicle analysis from monocular image. In CVPR, 2017.
+Jia-Ren Chang and Yong-Sheng Chen. Pyramid stereo matching network. In CVPR, 2018.
+Xiaozhi Chen, Kaustav Kundu, Yukun Zhu, Andrew G Berneshawi, Huimin Ma, Sanja Fidler, and Raquel Urtasun. 3d object proposals for accurate object class detection. In NIPS, 2015.
+Xiaozhi Chen, Kaustav Kundu, Ziyu Zhang, Huimin Ma, Sanja Fidler, and Raquel Urtasun. Monocular 3d object detection for autonomous driving. In CVPR, 2016.
+Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, and Tian Xia. Multi-view 3d object detection network for autonomous driving. In CVPR, 2017.
+Xiaozhi Chen, Kaustav Kundu, Yukun Zhu, Huimin Ma, Sanja Fidler, and Raquel Urtasun. 3d object proposals using stereo imagery for accurate object class detection. IEEE TPAMI, 40(5):1259-1272, 2018.
+
+Xinjing Cheng, Peng Wang, and Ruigang Yang. Depth estimation via affinity learned with convolutional spatial propagation network. In ECCV, 2018.
+Xinxin Du, Marcelo H Ang Jr, Sertac Karaman, and Daniela Rus. A general pipeline for 3d detection of vehicles. In ICRA, 2018.
+Martin Engelcke, Dushyant Rao, Dominic Zeng Wang, Chi Hay Tong, and Ingmar Posner. Vote3deep: Fast object detection in 3d point clouds using efficient convolutional neural networks. In ICRA, 2017.
+Huan Fu, Mingming Gong, Chaohui Wang, Kayhan Batmanghelich, and Dacheng Tao. Deep ordinal regression network for monocular depth estimation. In CVPR, pp. 2002-2011, 2018.
+Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In CVPR, 2012.
+Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The kitti dataset. The International Journal of Robotics Research, 32(11):1231-1237, 2013.
+Clément Godard, Oisin Mac Aodha, and Gabriel J Brostow. Unsupervised monocular depth estimation with left-right consistency. In CVPR, 2017.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
+Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In ICCV, 2017.
+Jason Ku, Melissa Mozifian, Jungwook Lee, Ali Harakeh, and Steven Waslander. Joint 3d proposal generation and object detection from view aggregation. In IROS, 2018.
+Alex H Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, and Oscar Beijbom. Pointpillars: Fast encoders for object detection from point clouds. In CVPR, 2019.
+Bo Li. 3d fully convolutional network for vehicle detection in point cloud. In IROS, 2017.
+Bo Li, Tianlei Zhang, and Tian Xia. Vehicle detection from 3d lidar using fully convolutional network. In Robotics: Science and Systems, 2016.
+Buyu Li, Wanli Ouyang, Lu Sheng, Xingyu Zeng, and Xiaogang Wang. Gs3d: An efficient 3d object detection framework for autonomous driving. In CVPR, 2019a.
+Peiliang Li, Xiaozhi Chen, and Shaojie Shen. Stereo r-cnn based 3d object detection for autonomous driving. In CVPR, 2019b.
+Ming Liang, Bin Yang, Shenlong Wang, and Raquel Urtasun. Deep continuous fusion for multi-sensor 3d object detection. In ECCV, 2018.
+Tsung-Yi Lin, Piotr Dollár, Ross B Girshick, Kaiming He, Bharath Hariharan, and Serge J Belongie. Feature pyramid networks for object detection. In CVPR, 2017.
+Fangchang Ma, Guilherme Venturelli Cavalheiro, and Sertac Karaman. Self-supervised sparse-todense: self-supervised depth completion from lidar and monocular camera. In ICRA, 2019.
+Nikolaus Mayer, Eddy Ilg, Philip Hausser, Philipp Fischer, Daniel Cremers, Alexey Dosovitskiy, and Thomas Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In CVPR, 2016.
+Gregory P Meyer, Jake Charland, Darshan Hegde, Ankit Laddha, and Carlos Vallespi-Gonzalez. Sensor fusion for joint 3d object detection and semantic segmentation. arXiv preprint arXiv:1904.11466, 2019a.
+Gregory P Meyer, Ankit Laddha, Eric Kee, Carlos Vallespi-Gonzalez, and Carl K Wellington. Lasernet: An efficient probabilistic 3d object detector for autonomous driving. In CVPR, 2019b.
+
+Arsalan Mousavian, Dragomir Anguelov, John Flynn, and Jana Košecká. 3d bounding box estimation using deep learning and geometry. In CVPR, 2017.
+Cuong Cao Pham and Jae Wook Jeon. Robust object proposals re-ranking for object detection in autonomous driving using convolutional neural networks. Signal Processing: Image Communication, 53:110-122, 2017.
+Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In CVPR, 2017a.
+Charles R Qi, Wei Liu, Chenxia Wu, Hao Su, and Leonidas J Guibas. Frustum pointnets for 3d object detection from rgb-d data. In CVPR, 2018.
+Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In NIPS, 2017b.
+Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS, 2015.
+Sam T Roweis and Lawrence K Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 2000.
+Maxim Shevtsov, Alexei Soupikov, and Alexander Kapustin. Highly parallel fast kd-tree construction for interactive ray tracing of dynamic scenes. In Computer Graphics Forum, volume 26, pp. 395-404. Wiley Online Library, 2007.
+Shaoshuai Shi, Xiaogang Wang, and Hongsheng Li. Pointrcnn: 3d object proposal generation and detection from point cloud. In CVPR, 2019.
+Siddharth Srivastava, Frederic Jurie, and Gaurav Sharma. Learning 2d to 3d lifting for object detection in 3d for autonomous vehicles. In IROS, 2019.
+Paden Tomasello, Sammy Sidhu, Anting Shen, Matthew W Moskewicz, Nobie Redmon, Gayatri Joshi, Romi Phadte, Paras Jain, and Forrest Iandola. Dscnet: Replicating lidar point clouds with deep sensor cloning. arXiv preprint arXiv:1811.07070, 2018.
+Luz Abril Torres-Mendez and Gregory Dudek. Statistical inference and synthesis in the image domain for mobile robot environment modeling. In IROS, 2004.
+Tsun-Hsuan Wang, Fu-En Wang, Juan-Ting Lin, Yi-Hsuan Tsai, Wei-Chen Chiu, and Min Sun. Plug-and-play: Improve depth estimation via sparse data propagation. arXiv preprint arXiv:1812.08350, 2018.
+Yan Wang, Wei-Lun Chao, Divyansh Garg, Bharath Hariharan, Mark Campbell, and Kilian Q. Weinberger. Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomous driving. In CVPR, 2019a.
+Yan Wang, Zihang Lai, Gao Huang, Brian H Wang, Laurens van der Maaten, Mark Campbell, and Kilian Q Weinberger. Anytime stereo image depth estimation on mobile devices. In ICRA, 2019b.
+Kilian Q. Weinberger, Benjamin Packer, and Lawrence K. Saul. Nonlinear dimensionality reduction by semidefinite programming and kernel matrix factorization. In AISTATS, 2005.
+Yu Xiang, Wongun Choi, Yuanqing Lin, and Silvio Savarese. Data-driven 3d voxel patterns for object category recognition. In CVPR, 2015.
+Yu Xiang, Wongun Choi, Yuanqing Lin, and Silvio Savarese. Subcategory-aware convolutional neural networks for object proposals and detection. In WACV, 2017.
+Xiaojin Zhu and Zoubin Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002.
+Bin Xu and Zhenzhong Chen. Multi-level fusion based 3d object detection from monocular images. In CVPR, 2018.
+
+Danfei Xu, Dragomir Anguelov, and Ashesh Jain. Pointfusion: Deep sensor fusion for 3d bounding box estimation. In CVPR, 2018.
+Koichiro Yamaguchi, David McAllester, and Raquel Urtasun. Efficient joint segmentation, occlusion labeling, stereo and flow estimation. In ECCV, 2014.
+Yan Yan, Yuxing Mao, and Bo Li. Second: Sparsely embedded convolutional detection. Sensors, 18 (10):3337, 2018.
+Bin Yang, Ming Liang, and Raquel Urtasun. Hdnet: Exploiting hd maps for 3d object detection. In Conference on Robot Learning, pp. 146-155, 2018a.
+Bin Yang, Wenjie Luo, and Raquel Urtasun. Pixor: Real-time 3d object detection from point clouds. In CVPR, 2018b.
+Yanchao Yang, Alex Wong, and Stefano Soatto. Dense depth posterior (ddp) from single image and sparse range. In CVPR, 2019.
+Yin Zhou and Oncel Tuzel. Voxelnet: End-to-end learning for point cloud based 3d object detection. In CVPR, 2018.
+
+# Appendix
+
+We provide details omitted in the main text.
+
+- Appendix A: details on constructing the depth cost volume (section 3 of the main paper).
+- Appendix B: detailed implementation of the GDC algorithm (section 4 of the main paper).
+- Appendix C: additional details of experimental setups (subsection 5.1 of the main paper).
+- Appendix D: additional results, analyses, and discussions (subsection 5.2 of the main paper).
+
+# A DEPTH COST VOLUME
+
+With Equation 2, we know where each grid point $(u,v,z)$ in $C_{\mathrm{depth}}$ corresponds to in $C_{\mathrm{disp}}$ (the location may not lie on a grid). We then obtain the features of each grid point in $C_{\mathrm{depth}}$ (i.e., $C_{\mathrm{depth}}(u,v,z,:)$ ) by bilinear interpolation over the features at the grid points of $C_{\mathrm{disp}}$ around the non-grid location (i.e., $\left(u,v,\frac{f_U\times b}{z}\right)$ ). We apply the "grid_sample" function in PyTorch for the bilinear interpolation.
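As a minimal illustration of this resampling (not the actual implementation, which uses PyTorch's bilinear `grid_sample` over 4D cost volumes), the hypothetical NumPy sketch below interpolates a disparity cost volume along the disparity axis only, at the non-integer location $d = f_U b / z$ for each target depth $z$:

```python
import numpy as np

def disp_to_depth_volume(c_disp, f_u, baseline, depth_grid):
    """Resample a (D, H, W) disparity cost volume onto a grid of depths.

    Each target depth z maps to disparity d = f_u * baseline / z, which is
    generally not an integer, so we linearly interpolate between the two
    nearest disparity planes.
    """
    n_disp = c_disp.shape[0]
    c_depth = np.empty((len(depth_grid),) + c_disp.shape[1:])
    for k, z in enumerate(depth_grid):
        d = np.clip(f_u * baseline / z, 0.0, n_disp - 1.0)
        lo = int(np.floor(d))
        hi = min(lo + 1, n_disp - 1)
        w = d - lo  # interpolation weight toward the upper disparity plane
        c_depth[k] = (1.0 - w) * c_disp[lo] + w * c_disp[hi]
    return c_depth
```

The full depth cost volume additionally interpolates over $(u, v)$, which is what `grid_sample` provides in one call.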
+
+We use PSMNET (Chang & Chen, 2018) as the backbone for our stereo depth estimation network (SDN). The only change is to construct the depth cost volume before performing 3D convolutions.
+
+# B GRAPH-BASED DEPTH CORRECTION (GDC) ALGORITHM
+
+Here we present the GDC algorithm in detail (see Algorithm 1). The two steps described in the main paper can easily be turned into two (sparse) linear systems and solved using Lagrange multipliers. For the first step (i.e., Equation 7), we solve the same problem as in the main text but switch the objective to minimizing the $L_{2}$ -norm of $W$ and set $Z - WZ = 0$ as a constraint. For the second step (i.e., Equation 8), we use Conjugate Gradient (CG) to iteratively solve the sparse linear system.
+
+Algorithm 1: Graph-based depth correction (GDC). ";" stands for column-wise concatenation.
+
+Input: Stereo depth map $Z\in \mathbb{R}^{(n + m)\times 1}$ , the corresponding pseudo-LiDAR (PL) point cloud $P\in \mathbb{R}^{(n + m)\times 3}$ , and LiDAR depths $G\in \mathbb{R}^{n\times 1}$ on the first $n$ pixels.
+Output: Corrected depth map $Z^{\prime}\in \mathbb{R}^{(n + m)\times 1}$ .
+
+function GDC(Z, P, G, K)
+  Solve: $W = \arg \min_{W\in \mathbb{R}^{(n + m)\times (n + m)}}\| W\| ^2$
+    s.t. $Z - WZ = 0$ ;
+    $W_{ij} = 0$ if $j\notin \mathcal{N}_i$ (the set of $K$ nearest neighbors of the $i^{th}$ point according to $P$ );
+    $\sum_{j}W_{ij} = 1$ for all $i = 1,\dots ,n + m$ .
+  Solve: $Z_{PL}^{\prime} = \arg \min_{Z_{PL}^{\prime}\in \mathbb{R}^{m\times 1}}\| [G;Z_{PL}^{\prime}] - W[G;Z_{PL}^{\prime}]\|^{2}$
+  return $[G;Z_{PL}^{\prime}]$
+end
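For intuition, here is a toy dense NumPy sketch of the two steps (`gdc_sketch` is a hypothetical name). It brute-forces the k-nearest-neighbor graph instead of using a KD-tree, solves small dense systems instead of sparse ones, and uses a tiny ridge term in place of the exact Lagrange-multiplier solution, so it is a conceptual sketch rather than the paper's implementation:

```python
import numpy as np

def gdc_sketch(Z, P, G, k=2):
    """Toy dense Graph-based Depth Correction (GDC).

    Z : (n+m,) stereo depths; P : (n+m, 3) pseudo-LiDAR points;
    G : (n,) accurate LiDAR depths for the first n pixels.
    """
    n, total = len(G), len(Z)
    # kNN graph in 3D (brute force; the paper uses a KD-tree)
    dist = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    W = np.zeros((total, total))
    for i in range(total):
        nbrs = np.argsort(dist[i])[:k]
        # Step 1: minimum-norm weights with Z_i = w . Z_nbrs and sum(w) = 1
        B = np.vstack([Z[nbrs], np.ones(k)])  # 2 x k constraint matrix
        c = np.array([Z[i], 1.0])
        w = B.T @ np.linalg.solve(B @ B.T + 1e-9 * np.eye(2), c)
        W[i, nbrs] = w
    # Step 2: fix the first n depths to G and solve
    #   min_{Z'} || [G; Z'] - W [G; Z'] ||^2  (a small least-squares problem)
    A = np.eye(total) - W
    Z_pl, *_ = np.linalg.lstsq(A[:, n:], -A[:, :n] @ G, rcond=None)
    return np.concatenate([G, Z_pl])
```

In this toy setting a constant bias in the stereo depths is removed exactly: the weights make constant depth maps lie in the null space of $I - W$, so fixing the first $n$ depths to the LiDAR values propagates the correction to the remaining points.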
+
+# C EXPERIMENTAL SETUP
+
+# C.1 SPARSE LIDAR GENERATION
+
+In this section, we explain in detail how we generate sparser LiDAR signals with fewer beams from the 64-beam LiDAR point clouds of the KITTI dataset. For every point $(x_{i},y_{i},z_{i})\in \mathbb{R}^{3}$ of the point cloud in one
+
+scene (in LiDAR coordinate system $(x$ : front, $y$ : left, $z$ : up, and $(0,0,0)$ is the location of the LiDAR sensor)), we compute the elevation angle $\theta_{i}$ to the LiDAR sensor as
+
+$$
+\theta_{i} = \arccos\left(\frac{\sqrt{x_{i}^{2} + y_{i}^{2}}}{\sqrt{x_{i}^{2} + y_{i}^{2} + z_{i}^{2}}}\right).
+$$
+
+We order the points by their elevation angles and slice them into separate lines with a step of $0.4^{\circ}$ , starting from $-23.6^{\circ}$ (close to the Velodyne 64-beam LiDAR specification). We select LiDAR points whose elevation angles fall within $[-2.4^{\circ}, -2.0^{\circ}) \cup [-0.8^{\circ}, -0.4^{\circ})$ as the 2-beam LiDAR signal, and similarly $[-2.4^{\circ}, -2.0^{\circ}) \cup [-1.6^{\circ}, -1.2^{\circ}) \cup [-0.8^{\circ}, -0.4^{\circ}) \cup [0.0^{\circ}, 0.4^{\circ})$ as the 4-beam LiDAR signal. We choose them such that consecutive lines have a $0.8^{\circ}$ interval, following the specification of the "cheap" 4-beam LiDAR ScaLa. We visualize these sparsified LiDAR point clouds from the bird's-eye view on one example scene in Figure 7.
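The slicing above can be sketched as follows. `slice_beams` is a hypothetical helper; it computes the signed elevation via `arctan2`, which agrees in magnitude with the arccos formula above, and the beam indices for the 4-beam subset ({53, 55, 57, 59} under the $-23.6^{\circ}$ start and $0.4^{\circ}$ step) are derived from the intervals listed in the text:

```python
import numpy as np

def slice_beams(points, keep_beams, start=-23.6, step=0.4):
    """Keep LiDAR points whose elevation angle falls into the given beams.

    points : (N, 3) array in the LiDAR frame (x: front, y: left, z: up).
    Beam b covers elevations [start + b*step, start + (b+1)*step) degrees;
    e.g. the 4-beam subset in the text corresponds to beams {53, 55, 57, 59}.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.degrees(np.arctan2(z, np.sqrt(x**2 + y**2)))  # signed elevation
    beam = np.floor((theta - start) / step).astype(int)
    return points[np.isin(beam, np.asarray(keep_beams))]
```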
+
+(Figure 7 panels: (a) 2-beam; (b) 4-beam; (c) 64-beam (full).)
+
+Figure 7: Bird's-eye views of sparsified LiDAR on an example scene. The observer is on the bottom side looking up. We filter out points invisible from the left image. (One floor square is $10\mathrm{m} \times 10\mathrm{m}$ .)
+
+# C.2 3D OBJECT DETECTION ALGORITHMS
+
+In this section, we provide more details on how we train 3D object detection models on pseudo-LiDAR point clouds. For AVOD, we use the same model as in (Wang et al., 2019a). For P-RCNN, we use the implementation provided by the authors. Since the P-RCNN model exploits the sparse nature of LiDAR point clouds, when training it with pseudo-LiDAR input we first sparsify the point clouds into 64 beams using the method described in subsection C.1. For $\mathrm{PIXOR}^{\star}$ , we implement the same base model structure and data augmentation specified by Yang et al. (2018b), but without the "decode fine-tune" step and focal loss. Inspired by the trick in (Liang et al., 2018), we add an image feature branch (ResNet-18; He et al., 2016) alongside the LiDAR branch and concatenate the corresponding image features onto the LiDAR branch at each stage. We train $\mathrm{PIXOR}^{\star}$ using RMSProp with momentum 0.9 and learning rate $10^{-5}$ (decayed by 10 after 50 and 80 epochs) for 90 epochs. The BEV evaluation results are similar to the reported results (see Table 1).
+
+# D ADDITIONAL RESULTS, ANALYSES, AND DISCUSSIONS
+
+# D.1 ABLATION STUDY
+
+In Table 6 and Table 7 we provide more experimental results aligned with experiments in subsection 5.2 of the main paper. We conduct the same experiments on two other models, AVOD and $\mathrm{PIXOR}^{\star}$ , and observe similar trends of improvements brought by learning with the depth loss (from PSMNET to PSMNET +DL), constructing the depth cost volume (from PSMNET +DL to SDN), and applying GDC to correct the bias in stereo depth estimation (comparing SDN +GDC with SDN).
+
+We note that, in Table 7, the results of AVOD (or $\mathrm{PIXOR}^{\star}$ ) with $\mathrm{SDN} + \mathrm{L}\#$ are worse than those with $\mathrm{L}\#$ in the moderate and hard settings. This observation differs from Table 4, where P-RCNN with $\mathrm{SDN} + \mathrm{L}\#$ outperforms P-RCNN with $\mathrm{L}\#$ in 5 out of 6 comparisons. We hypothesize that this is because P-RCNN takes sparsified inputs (see subsection C.2) while AVOD and $\mathrm{PIXOR}^{\star}$ take dense inputs. In the latter case, the four replaced LiDAR beams in $\mathrm{SDN} + \mathrm{L}\#$ are dominated by the dense stereo depths, so that $\mathrm{SDN} + \mathrm{L}\#$ is worse than $\mathrm{L}\#$ .
+
+# D.2 USING FEWER LIDAR BEAMS
+
+In $\mathrm{PL}++$ (i.e., SDN + GDC), we use 4-beam LiDAR to correct the predicted point cloud. In Table 8, we investigate using fewer (and also potentially cheaper) LiDAR beams for depth correction. We observe that even with 2 beams, GDC can already manage to combine the two signals and yield a better performance than using 2-beam LiDAR or pseudo-LiDAR alone.
+
+Table 6: Ablation study on stereo depth estimation. We report $\mathrm{AP}_{\mathrm{BEV}} / \mathrm{AP}_{\mathrm{3D}}$ (in %) of the car category at IoU = 0.7 on the KITTI validation set. DL stands for depth loss.
+
+| Depth Estimation | PIXOR* Easy | PIXOR* Moderate | PIXOR* Hard | AVOD Easy | AVOD Moderate | AVOD Hard |
+| --- | --- | --- | --- | --- | --- | --- |
+| PSMNET | 73.9 / - | 54.0 / - | 46.9 / - | 74.9 / 61.9 | 56.8 / 45.3 | 49.0 / 39.0 |
+| PSMNET + DL | 75.8 / - | 56.2 / - | 51.9 / - | 75.7 / 60.5 | 57.1 / 44.8 | 49.2 / 38.4 |
+| SDN | 79.7 / - | 61.1 / - | 54.5 / - | 77.0 / 63.2 | 63.7 / 46.8 | 56.0 / 39.8 |
+
+Table 7: Ablation study on leveraging sparse LiDAR. We report $\mathrm{AP}_{\mathrm{BEV}} / \mathrm{AP}_{\mathrm{3D}}$ (in %) of the car category at IoU = 0.7 on the KITTI validation set. L# stands for 4-beam LiDAR signal. SDN +L# means we replace the depth of a portion of pseudo-LiDAR points (i.e., landmark pixels) by L#.
+
+| Depth Estimation | PIXOR* Easy | PIXOR* Moderate | PIXOR* Hard | AVOD Easy | AVOD Moderate | AVOD Hard |
+| --- | --- | --- | --- | --- | --- | --- |
+| SDN | 79.7 / - | 61.1 / - | 54.5 / - | 77.0 / 63.2 | 63.7 / 46.8 | 56.0 / 39.8 |
+| L# | 72.0 / - | 64.7 / - | 63.6 / - | 77.0 / 62.1 | 68.8 / 54.7 | 67.1 / 53.0 |
+| SDN + L# | 75.6 / - | 59.4 / - | 53.2 / - | 84.1 / 66.0 | 67.0 / 53.1 | 58.8 / 46.4 |
+| SDN + GDC | 84.0 / - | 71.0 / - | 65.2 / - | 86.8 / 70.7 | 76.6 / 56.2 | 68.7 / 53.4 |
+
+Table 8: Ablation study on the sparsity of LiDAR. We report $\mathrm{AP}_{\mathrm{BEV}} / \mathrm{AP}_{\mathrm{3D}}$ (in %) of the car category at IoU= 0.7 on the KITTI validation set. L# stands for using sparse LiDAR signal alone. The number in brackets indicates the number of beams in use.
+
+| Depth Estimation | P-RCNN Easy | P-RCNN Moderate | P-RCNN Hard | PIXOR* Easy | PIXOR* Moderate | PIXOR* Hard |
+| --- | --- | --- | --- | --- | --- | --- |
+| SDN | 82.0 / 67.9 | 64.0 / 50.1 | 57.3 / 45.3 | 79.7 / - | 61.1 / - | 54.5 / - |
+| L# (2) | 69.2 / 46.3 | 62.8 / 41.9 | 61.3 / 40.0 | 66.8 / - | 55.5 / - | 53.3 / - |
+| L# (4) | 73.2 / 56.1 | 71.3 / 53.1 | 70.5 / 51.5 | 72.0 / - | 64.7 / - | 63.6 / - |
+| SDN + GDC (2) | 87.2 / 73.3 | 72.0 / 56.6 | 67.1 / 54.1 | 82.0 / - | 65.3 / - | 61.7 / - |
+| SDN + GDC (4) | 88.2 / 75.1 | 76.9 / 63.8 | 73.4 / 57.4 | 84.0 / - | 71.0 / - | 65.2 / - |
+
+Table 9: Comparison of GDC and PNP for 3D object detection. We report $\mathrm{AP}_{\mathrm{BEV}} / \mathrm{AP}_{\mathrm{3D}}$ (in %) of the car category at IoU = 0.7 on the KITTI validation set, using SDN + PNP or SDN + GDC for depth estimation and P-RCNN or $\mathrm{PIXOR}^{\star}$ for detection.
+
+| Input signal | P-RCNN Easy | P-RCNN Moderate | P-RCNN Hard | PIXOR* Easy | PIXOR* Moderate | PIXOR* Hard |
+| --- | --- | --- | --- | --- | --- | --- |
+| SDN + PNP | 86.3 / 72.1 | 73.3 / 58.9 | 67.2 / 54.2 | 79.1 / - | 64.2 / - | 54.0 / - |
+| SDN + GDC | 88.2 / 75.1 | 76.9 / 63.8 | 73.4 / 57.4 | 84.0 / - | 71.0 / - | 65.2 / - |
+
+# D.3 DEPTH CORRECTION VS. DEPTH COMPLETION
+
+We compare our GDC algorithm for depth correction to depth completion algorithms, which aim to "densify" LiDAR data beyond the beam lines (Wang et al., 2018; Tomasello et al., 2018; Ma et al., 2019; Yang et al., 2019; Cheng et al., 2018; Torres-Mendez & Dudek, 2004). We note that most depth completion approaches take as input a 64-beam LiDAR and a single image, while our focus is on fusing a much sparser 4-beam LiDAR and stereo depths. As such, the two problems are not
+
+
+Figure 8: Comparison of GDC and PNP for depth correction. We report the median of absolute errors on the KITTI validation set. See text for details.
+
+
+Figure 9: Median depth estimation errors w.r.t. the shortest distances to 4-beam LiDAR points on KITTI validation set.
+
+Table 10: Comparison of 3D object detection using the naive and optimized implementation of GDC. We report $\mathrm{AP}_{\mathrm{BEV}} / \mathrm{AP}_{\mathrm{3D}}$ (in %) of the car category at IoU = 0.7 on the KITTI validation set, using P-RCNN for detection.
+
+| GDC implementation | Easy | Moderate | Hard |
+| --- | --- | --- | --- |
+| Naive | 88.2 / 75.1 | 76.9 / 63.8 | 73.4 / 57.4 |
+| Optimized | 87.6 / 75.0 | 76.3 / 63.4 | 73.1 / 57.0 |
+
+commensurate. Also, our GDC algorithm is a general, simple, inference-time approach that requires no training, unlike prior learning-based approaches to depth completion.
+
+Here we empirically compare to PNP (Wang et al., 2018), a recently proposed depth completion algorithm compatible with any (even stereo) depth estimation network, similar to GDC. We use SDN for initial depth estimation, and evaluate GDC and PNP by randomly selecting a fraction of LiDAR points as provided ground truths and calculating the median absolute depth errors on the remaining LiDAR points. As shown in Figure 8, GDC outperforms PNP by a large margin. Table 9 shows a further comparison to PNP on 3D object detection. We apply PNP and GDC respectively to correct the depth estimates obtained from SDN, train a P-RCNN or $\mathrm{PIXOR}^{\star}$ using the resulting pseudo-LiDAR points on the KITTI training set, and compare the detection results on the KITTI validation set. In either case, $\mathrm{SDN} + \mathrm{GDC}$ outperforms $\mathrm{SDN} + \mathrm{PNP}$ by a notable margin.
+
+# D.4 RUN TIME
+
+With the following implementation optimizations,
+
+1. Sub-sampling pseudo-LiDAR points: keeping at most one point within each cube of side length $0.1\mathrm{m}$
+2. Limiting the pseudo-LiDAR points for depth correction: keeping only those whose elevation angles are within $[-3.0^{\circ}, 0.4^{\circ}]$ (the range of 4-beam LiDAR plus $0.6^{\circ}$ ; see subsection C.1 for details)
+3. After performing GDC for depth correction, combining the corrected pseudo-LiDAR points with those outside the elevation angles of $[-3.0^{\circ}, 0.4^{\circ})$
+
+GDC runs in 90 ms/frame using a single GPU (7.7 ms for KD-tree construction and search, 46.5 ms for solving $W$ , and 26.9 ms for solving $Z_{PL}^{\prime}$ ) with negligible performance difference (see Table 10). For consistency, all results reported in the main paper are based on the naive implementation. Further speedups can be achieved by CUDA programming for GPUs.
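Optimization 1 above (voxel sub-sampling) can be sketched in a few lines; `voxel_subsample` is a hypothetical name:

```python
import numpy as np

def voxel_subsample(points, voxel=0.1):
    """Keep at most one point per (voxel x voxel x voxel) cube.

    Assigns each point to an integer voxel key and keeps the first point
    encountered in each occupied voxel.
    """
    keys = np.floor(points[:, :3] / voxel).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]
```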
+
+# D.5 STEREO DEPTH VS. DETECTION
+
+We quantitatively evaluate the stereo depths by median errors in Figure 4 of the main text (numerical values are listed in Table 11). In Table 12 we further show mean errors with standard deviation (the large standard deviation likely results from outliers such as occluded pixels around object boundaries).
+
+Table 11: Median depth estimation errors over various depth ranges (numerical values of Figure 4).
+
+| Signal | 0-10 m | 10-20 m | 20-30 m | 30-40 m | 40-50 m | 50-60 m | 60-70 m |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| PSMNET | 0.04 | 0.11 | 0.36 | 0.83 | 1.24 | 1.98 | 2.43 |
+| SDN | 0.07 | 0.12 | 0.30 | 0.60 | 0.89 | 1.31 | 1.73 |
+| SDN + GDC | 0.07 | 0.12 | 0.27 | 0.51 | 0.74 | 1.03 | 1.53 |
+
+Table 12: Mean depth estimation errors (with standard deviation) over various depth ranges.
+
+| Signal | 0-10 m | 10-20 m | 20-30 m | 30-40 m | 40-50 m | 50-60 m | 60-70 m |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| PSMNet | 0.18±0.93 | 0.36±1.20 | 0.97±2.32 | 2.02±4.05 | 2.94±5.64 | 4.61±8.03 | 6.03±10.32 |
+| SDN | 0.21±0.89 | 0.35±1.16 | 0.87±2.31 | 1.80±4.22 | 2.67±6.00 | 4.27±8.78 | 5.82±11.23 |
+| SDN + GDC | 0.21±0.90 | 0.35±1.17 | 0.84±2.34 | 1.74±4.27 | 2.59±6.06 | 4.14±8.85 | 5.72±11.29 |
+
+For both tables, we divide pixels into beams according to their ground-truth depths, and evaluate on pixels not on the 4-beam LiDAR. The improvement of SDN (+ GDC) over PSMNET becomes larger as we consider pixels farther away. Table 13 further demonstrates the relationship between depth quality and detection accuracy: SDN (+ GDC) significantly outperforms PSMNET for detecting faraway cars. We note that, for very faraway cars (i.e., $50 - 70\mathrm{m}$), the number of training object instances is extremely small, which suggests that the very poor performance might be partially caused by over-fitting.
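The per-range evaluation protocol (binning pixels by ground-truth depth, reporting median and mean absolute error per bin) can be sketched as below. The synthetic depths are illustrative, not KITTI data; the noise model, which grows with distance, only mimics typical stereo error.

```python
import numpy as np

def depth_errors_by_range(pred, gt, edges=(0, 10, 20, 30, 40, 50, 60, 70)):
    """Median and mean absolute depth error within each ground-truth depth bin."""
    err = np.abs(pred - gt)
    stats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (gt >= lo) & (gt < hi)
        stats.append((float(np.median(err[mask])), float(err[mask].mean())))
    return stats

# Synthetic example: estimation noise proportional to distance.
rng = np.random.default_rng(0)
gt = rng.uniform(0, 70, 10000)
pred = gt + rng.normal(0, 0.02, 10000) * gt
for (lo, hi), (med, mean) in zip(zip((0, 10, 20, 30, 40, 50, 60),
                                     (10, 20, 30, 40, 50, 60, 70)),
                                depth_errors_by_range(pred, gt)):
    print(f"{lo}-{hi} m: median {med:.2f} m, mean {mean:.2f} m")
```

As in Tables 11 and 12, the errors grow with the depth range because the noise scales with distance.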
+
+Further, we apply the same evaluation procedure but group the errors by the shortest distance between each PSEUDO-LIDAR point and the 4-beam LiDAR points in Figure 9. We can see that the closer the PSEUDO-LIDAR points are to the 4-beam LiDAR points, the larger the improvement GDC brings.
+
+# D.6 CONNECTED COMPONENTS IN KNN GRAPHS OF PSEUDO-LIDAR POINTS BY SDN
+
+Here, we provide an empirical analysis of the relationship between the $k$ chosen when building the k-nearest-neighbor graph of PSEUDO-LIDAR points by SDN and the number of connected components of that graph. We show the results on the KITTI validation set in Figure 11. It can be seen that with $k \geq 9$, the average number of connected components in the graph is smaller than 2.
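This k-NN connectivity analysis can be reproduced in outline with SciPy; the uniform random points below stand in for actual pseudo-LiDAR point clouds, so the component counts are only illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def num_components_knn(points, k):
    """Number of connected components of the (symmetrized) k-NN graph."""
    tree = cKDTree(points)
    # query returns each point itself as its own nearest neighbor, so ask for k+1.
    _, idx = tree.query(points, k=k + 1)
    rows = np.repeat(np.arange(len(points)), k)
    cols = idx[:, 1:].ravel()
    adj = coo_matrix((np.ones(rows.size), (rows, cols)),
                     shape=(len(points),) * 2)
    n, _ = connected_components(adj, directed=False)
    return n

rng = np.random.default_rng(0)
pts = rng.uniform(size=(500, 3))
print(num_components_knn(pts, k=9))
```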
+
+# D.7 FAILURE CASES AND WEAKNESS
+
+There is still a gap between our approach and LiDAR for faraway objects (see Table 13). We further analyze $\mathrm{AP}_{\mathrm{BEV}}$ at different IoU thresholds in Figure 10. For low IoU (0.2-0.5), SDN (+ GDC) is on par with LiDAR, but the gap increases significantly at high IoU thresholds. This suggests that the predominant gap between our approach and LiDAR is due to mislocalization, perhaps caused by residual inaccuracies in depth estimation.
+
+
+Figure 10: IoU vs. $\mathbf{AP}_{\mathrm{BEV}}$ on KITTI validation set on the car category (moderate).
+
+
+Figure 11: $k$ vs. average number of connected components in KNN graphs of PSEUDO-LiDAR points by SDN.
+
+Table 13: 3D object detection at various depth ranges. We compare different input signals. We report $\mathrm{AP}_{\mathrm{BEV}} / \mathrm{AP}_{\mathrm{3D}}$ (in %) of the car category at IoU = 0.7 on the KITTI validation set, using P-RCNN for detection. In the last two rows we show the number of car objects in KITTI object train and validation sets within different ranges.
+
+| Input signal | 0-30 m | 30-50 m | 50-70 m |
+| --- | --- | --- | --- |
+| PSMNET | 65.6 / 54.0 | 15.8 / 6.9 | 0.0 / 0.0 |
+| SDN | 68.6 / 56.7 | 27.4 / 11.3 | 0.7 / 0.0 |
+| SDN + GDC | 84.7 / 67.8 | 49.9 / 31.5 | 2.5 / 1.0 |
+| LiDAR | 88.5 / 84.0 | 69.9 / 51.5 | 8.9 / 3.4 |
+| # objects (train) | 6903 | 3768 | 76 |
+| # objects (val) | 7379 | 3542 | 39 |
+
+# D.8 QUALITATIVE RESULTS
+
+In Figures 6, 12, 13, and 14, we show detection results using P-RCNN (with confidence $>0.95$) with different input signals on four randomly chosen scenes in the KITTI object validation set. Specifically, we show the results from the frontal-view images and the bird's-eye view (BEV) point clouds. In the BEV map, the observer is on the left-hand side looking to the right. It can be seen that the point clouds generated by PSEUDO-LIDAR++ (SDN alone or SDN + GDC) align better with LiDAR than those generated by PSEUDO-LIDAR (PSMNET). For nearby objects (i.e., bounding boxes close to the left in the BEV map), we see that P-RCNN with any point cloud performs fairly well in localization. However, for faraway objects (i.e., bounding boxes close to the right), PSEUDO-LIDAR with depth estimated from PSMNET predicts objects (red boxes) that deviate from the ground truths (green boxes). Moreover, the noisy PSMNET points also lead to several false positives and false negatives. In contrast, the boxes detected by our PSEUDO-LIDAR++, either with SDN alone or with SDN + GDC, align well with the ground-truth boxes, justifying our targeted improvement in estimating faraway depths. In Figure 12, we see one failure case for both PSEUDO-LIDAR and PSEUDO-LIDAR++: the most faraway car is missed, while the LiDAR signal can still detect it, suggesting that for very faraway objects stereo-based methods may still have limitations.
+
+
+
+
+LiDAR
+
+
+
+
+Pseudo-LiDAR
+
+
+
+
+Pseudo-LiDAR++ (SDN)
+
+
+
+
+Pseudo-LiDAR++ (SDN + GDC)
+Figure 12: Qualitative Comparison. We show the detection results on a KITTI validation scene by P-RCNN with different input point clouds. We visualize them from both frontal-view images and bird's-eye view (BEV) point maps. Ground-truth boxes are in green and predicted bounding boxes are in red. The observer is at the left-hand side of the BEV map looking to the right. In other words, ground truth boxes on the right are more faraway (i.e., deeper) from the observer, and hence hard to localize. Best viewed in color.
+
+
+
+
+
+
+LiDAR
+
+
+Pseudo-LiDAR
+
+
+
+
+
+
+Pseudo-LiDAR++ (SDN)
+
+
+Pseudo-LiDAR++ (SDN + GDC)
+Figure 13: Qualitative Comparison - another example. The same setup as in Figure 12
+
+
+
+
+
+
+LiDAR
+
+
+Pseudo-LiDAR
+
+
+
+
+
+
+Pseudo-LiDAR++ (SDN)
+
+
+Pseudo-LiDAR++ (SDN + GDC)
+Figure 14: Qualitative Comparison - another example. The same setup as in Figure 12
\ No newline at end of file
diff --git a/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/images.zip b/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6a81492c8b998295bddbca706fb7af572ef57cde
--- /dev/null
+++ b/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05b07a26a8bee8c5d9433617b9346c5ccaba24f477c74082062c3477d1b1fe4b
+size 1231314
diff --git a/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/layout.json b/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c98f9466d2c764732cf7284d4b6c9c260e0ab6e9
--- /dev/null
+++ b/pseudolidaraccuratedepthfor3dobjectdetectioninautonomousdriving/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9fd8421cd69f14399eb49a5f4efaf5824be5b96bcea193f1a466489acdada6f8
+size 746576
diff --git a/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/ae1ee5bc-a663-4c20-b279-f7ac2e10a7fd_content_list.json b/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/ae1ee5bc-a663-4c20-b279-f7ac2e10a7fd_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c4e5a7ad01d39322767d0ae99464ea06c75cdc83
--- /dev/null
+++ b/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/ae1ee5bc-a663-4c20-b279-f7ac2e10a7fd_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:315b59b52355f744ffec0edee1e184379f8e22410cb943304b8d2946e9fa2d9b
+size 203967
diff --git a/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/ae1ee5bc-a663-4c20-b279-f7ac2e10a7fd_model.json b/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/ae1ee5bc-a663-4c20-b279-f7ac2e10a7fd_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..37f191db2feb456a9be585078574474fde21dbdc
--- /dev/null
+++ b/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/ae1ee5bc-a663-4c20-b279-f7ac2e10a7fd_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7d9df7056b9aefe263557e41d5b2e66c60b1a89570fc675c68080f6d357aebe
+size 232500
diff --git a/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/ae1ee5bc-a663-4c20-b279-f7ac2e10a7fd_origin.pdf b/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/ae1ee5bc-a663-4c20-b279-f7ac2e10a7fd_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a22b479bdcbba2161a4067f3e1467029ef299694
--- /dev/null
+++ b/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/ae1ee5bc-a663-4c20-b279-f7ac2e10a7fd_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4878775d32124f374dd67f8a0527a34e2eced9289cf9ad8b7ef733b74eccca75
+size 515537
diff --git a/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/full.md b/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..50cf5f474fd6c7c05b39a9a3c4f931a82ff3873a
--- /dev/null
+++ b/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/full.md
@@ -0,0 +1,803 @@
+# PURE AND SPURIOUS CRITICAL POINTS: A GEOMETRIC STUDY OF LINEAR NETWORKS
+
+Matthew Trager* New York University
+
+Kathlén Kohn*
+KTH Stockholm
+
+Joan Bruna
+New York University
+
+# ABSTRACT
+
+The critical locus of the loss function of a neural network is determined by the geometry of the functional space and by the parameterization of this space by the network's weights. We introduce a natural distinction between pure critical points, which only depend on the functional space, and spurious critical points, which arise from the parameterization. We apply this perspective to revisit and extend the literature on the loss function of linear neural networks. For this type of network, the functional space is either the set of all linear maps from input to output space, or a determinantal variety, i.e., a set of linear maps with bounded rank. We use geometric properties of determinantal varieties to derive new results on the landscape of linear networks with different loss functions and different parameterizations. Our analysis clearly illustrates that the absence of "bad" local minima in the loss landscape of linear networks is due to two distinct phenomena that apply in different settings: it is true for arbitrary smooth convex losses in the case of architectures that can express all linear maps ("filling architectures") but it holds only for the quadratic loss when the functional space is a determinantal variety ("non-filling architectures"). Without any assumption on the architecture, smooth convex losses may lead to landscapes with many bad minima.
+
+# 1 INTRODUCTION
+
+A fundamental goal in the theory of deep learning is to explain why the optimization of the nonconvex loss function of a neural network does not seem to be affected by the presence of nonglobal local minima. Many papers have addressed this issue by studying the landscape of the loss function (Baldi & Hornik, 1989; Choromanska et al., 2015; Kawaguchi, 2016; Venturi et al., 2018). These papers have shown that, in certain situations, any local minimum for the loss is in fact always a global minimum. Unfortunately, it is also known that this property does not apply in more general realistic settings (Yun et al., 2018; Venturi et al., 2018). More recently, researchers have begun to search for explanations based on the dynamics of optimization. For example, in certain limit situations, the gradient flow of over-parameterized networks will avoid local minimizers (Chizat & Bach, 2018; Mei et al., 2018). We believe however that the study of the static properties of the loss function (the structure of its critical locus) is not settled. Even in the case of linear networks, the existing literature paints a purely analytical picture of the loss, and provides no sort of explanation as to "why" such architectures exhibit no bad local minima. A complete understanding of the critical locus should be a prerequisite for investigating the dynamics of the optimization.
+
+The goal of this paper is to revisit the loss function of neural networks from a geometric perspective, focusing on the relationship between the functional space of the network and its parameterization. In particular, we view the loss as a composition
+
+$$
+\{\text{parameter space}\} \stackrel{\mu}{\to} \{\text{functional space}\} \stackrel{\ell}{\to} \mathbb{R}.
+$$
+
+In this setting, the function $\ell$ is almost always convex, however the composition $L = \ell \circ \mu$ is not. Critical points for $L$ can in fact arise for two distinct reasons: either because we are applying $\ell$ to a non-convex functional space, or because the parameterizing map $\mu$ is locally degenerate. We distinguish these two types of critical points by referring to them, respectively, as pure and spurious.
+
+Table 1: Bad local minima in loss landscapes for linear networks
+
+| | quadratic loss | other smooth convex loss |
+| --- | --- | --- |
+| filling | no bad minima | no bad minima |
+| non-filling | no bad minima* | bad minima exist |
+
+*A special property of determinantal varieties.
+
+Intuitively, pure critical points actually reflect the geometry of the functional space associated with the network, while spurious critical points arise as "artifacts" from the parameterization. After defining pure and spurious critical points for arbitrary networks, we investigate in detail the classification of critical points in the case of linear networks. The functional space for such networks can be identified with a family of linear maps, and we can describe its geometry using algebraic tools. Many of our statements rely on a careful analysis of the differential of the matrix multiplication map. In particular, we prove that non-global local minima are necessarily pure critical points for convex losses, which means that many properties of the loss landscape can be read from the functional space. On the other hand, we emphasize that even for linear networks it is possible to find many smooth convex losses with non-global local minima. This happens when the functional space is a determinantal variety, i.e., a (non-smooth and non-convex) family of matrices with bounded rank. In this setting, the absence of non-global minima actually holds in the particular case of the quadratic loss, because of very special geometric properties of determinantal varieties that we discuss.
+
+Related Work. Baldi & Hornik (1989) first proved the absence of non-global ("bad") local minima for linear networks with one hidden layer (autoencoders). Their result was generalized to the case of deep linear networks by Kawaguchi (2016). Many papers have since then studied the loss landscape of linear networks under different assumptions (Hardt & Ma, 2016; Yun et al., 2017; Zhou & Liang, 2017; Laurent & von Brecht, 2017; Lu & Kawaguchi, 2017; Zhang, 2019). In particular, Laurent & von Brecht (2017) showed that linear networks with "no bottlenecks" have no bad local minima for arbitrary smooth loss functions. Lu & Kawaguchi (2017) and Zhang (2019) argued that "depth does not create local minima", meaning that the absence of local minima of deep linear networks is implied by the same property of shallow linear networks. Our study of pure and spurious critical points can be used as a framework for explaining all these results in a unified way. The optimization dynamics of linear networks are also an active area of research (Arora et al., 2019; 2018), and our analysis of the landscape in function space sets the stage for studying gradient dynamics on determinantal varieties, as in Bah et al. (2019). Our work is also closely related to objects of study in applied algebraic geometry, particularly determinantal varieties and ED discriminants (Draisma et al., 2013; Ottaviani et al., 2013). Finally, we mention other recent works that study neural networks using algebraic-geometric tools (Mehta et al., 2018; Kileel et al., 2019; Jaffali & Oeding, 2019).
+
+# Main contributions.
+
+- We introduce a natural distinction between "pure" and "spurious" critical points for the loss function of networks. These notions provide an intuitive and useful language for studying a central aspect in the theory of neural networks, namely the (over)parameterization of the functional space and its effect on the optimization landscape. While most of the paper focuses on linear networks, this viewpoint applies to more general settings as well (see also our discussion in Appendix A.3).
+- We study the pure and spurious critical locus for linear networks and arbitrary loss functions. We show that non-global local minima are always pure for convex losses, unifying many known properties of the landscape of linear networks.
+- We explain that the absence of "bad" local minima in the loss landscape of linear networks is due to two distinct phenomena and does not hold in general: it is true for arbitrary smooth convex losses in the case of architectures that can express all linear maps ("filling architectures") and it holds for the quadratic loss when the functional space is a determinantal variety ("non-filling architectures"). Without any assumption on the architecture, smooth convex losses may lead to many local minima. See Table 1.
+
+
+Figure 1: Pure and spurious critical points: $\theta_{1}$ is a pure critical point, while $\theta_{2}$ is a spurious critical point (the level curves on the manifold $\mathcal{M}_{\Phi}$ describe the landscape in functional space). Note that $\theta_{3}$ is mapped to the same function as $\theta_{2}$ , but it is not a critical point.
+
+- We provide a precise description of the number of topologically connected components of the set of global minima. This relates to recent work on "mode connectivity" in loss landscapes of neural networks (Garipov et al., 2018).
+- We spell out connections between the loss landscape and classical geometric objects such as caustics and ED discriminants. We believe that these concepts may be useful in the study of more general functional spaces.
+
+Differential notation. Our functional spaces will be manifolds with singularities, so we will make use of elementary notions from differential geometry. If $\mathcal{M}$ and $\mathcal{N}$ are manifolds and $g: \mathcal{M} \to \mathcal{N}$ is a smooth map, then we write $dg(x)$ for the differential of $g$ at the point $x$ . This means that $dg(x): T_x\mathcal{M} \to T_{g(x)}\mathcal{N}$ is the first order linear approximation of $g$ at the point $x \in \mathcal{M}$ . If $\mathcal{M}$ and $\mathcal{N}$ have singularities, then the same definitions apply if we restrict $g$ to smooth points in $\mathcal{M}$ whose image is also smooth in $\mathcal{N}$ . For most of our analysis, manifolds will be embedded in Euclidean spaces, say $\mathcal{M} \subset \mathbb{R}^m$ and $\mathcal{N} \subset \mathbb{R}^n$ , so we can view the tangent spaces $T_x\mathcal{M}$ and $T_{g(x)}\mathcal{N}$ as also embedded in $\mathbb{R}^m$ and $\mathbb{R}^n$ . When $\mathcal{N} = \mathbb{R}$ , the critical locus of a map $g: \mathcal{M} \to \mathbb{R}$ is defined as $Crit(g) = \{x \in Smooth(\mathcal{M}) \mid dg(x) = 0\}$ .
+
+# 2 PRELIMINARIES
+
+# 2.1 PURE AND SPURIOUS CRITICAL POINTS
+
+A neural network (or any general "parametric learning model") is defined by a continuous mapping $\Phi : \mathbb{R}^{d_{\theta}} \times \mathbb{R}^{d_x} \to \mathbb{R}^{d_y}$ that associates an input vector $x \in \mathbb{R}^{d_x}$ and a set of parameters $\theta \in \mathbb{R}^{d_{\theta}}$ to an output vector $y = \Phi(\theta, x) \in \mathbb{R}^{d_y}$ . In other words, $\Phi$ determines a family of continuous functions parameterized by $\theta \in \mathbb{R}^{d_{\theta}}$ :
+
+$$
+\mathcal {M} _ {\Phi} = \left\{f _ {\theta}: \mathbb {R} ^ {d _ {x}} \rightarrow \mathbb {R} ^ {d _ {y}} \mid f _ {\theta} = \Phi (\theta , \cdot) \right\} \subset C \left(\mathbb {R} ^ {d _ {x}}, \mathbb {R} ^ {d _ {y}}\right).
+$$
+
+Even though $\mathcal{M}_{\Phi}$ is naturally embedded in an infinite-dimensional functional space, it is itself finite dimensional. In fact, if the mapping $\Phi$ is smooth, then $\mathcal{M}_{\Phi}$ is a finite-dimensional manifold with singularities, and its intrinsic dimension is upper bounded by $d_{\theta}$ . It is also important to note that neural networks are often non-identifiable models, which means that different parameters can represent the same function (i.e., $f_{\theta} = f_{\theta'}$ does not imply $\theta = \theta'$ ). The manifold $\mathcal{M}_{\Phi}$ is sometimes known as a neuromanifold (Amari, 2016). We now consider a general loss function of the form $L = \ell \circ \mu$ , where $\mu: \mathbb{R}^{d_{\theta}} \to \mathcal{M}_{\Phi}$ is the (over)parameterization of $\mathcal{M}_{\Phi}$ by $\theta$ and $\ell$ is a functional defined on a subset of $C(\mathbb{R}^{d_x}, \mathbb{R}^{d_y})$ containing $\mathcal{M}_{\Phi}$ :
+
+$$
+L: \mathbb {R} ^ {d _ {\theta}} \xrightarrow {\mu} \mathcal {M} _ {\Phi} \xrightarrow {\ell | _ {\mathcal {M} _ {\Phi}}} \mathbb {R}. \tag {1}
+$$
+
+Definition 1. A critical point $\theta^{*} \in \operatorname{Crit}(L)$ is a pure critical point if $\mu(\theta^{*})$ is a critical point for the restriction $\ell|_{\mathcal{M}_{\Phi}}$ (note that this implicitly requires $\mu(\theta^{*})$ to be a smooth point of $\mathcal{M}_{\Phi}$ ). If $\theta^{*} \in \operatorname{Crit}(L)$ but $\mu(\theta^{*}) \notin \operatorname{Crit}(\ell|_{\mathcal{M}_{\Phi}})$ , we say that $\theta^{*}$ is a spurious critical point.
+
+It is clear from this definition that pure critical points reflect the geometry of the functional space, while spurious critical points do not have an intrinsic functional interpretation. For example, if $\theta^{*} \in \operatorname{Crit}(L)$ is a spurious critical point, then it may be possible to find another parameter $\theta'$ that represents the same function $f_{\theta^{*}} = f_{\theta'}$ and is not a critical point for $L$ (see Figure 1). In contrast, if $\theta^{*}$ is a pure critical point, then all parameters $\theta'$ such that $\mu(\theta') = \mu(\theta^{*})$ are automatically in $\operatorname{Crit}(L)$ , simply because $dL(\theta') = d\ell|_{\mathcal{M}_{\Phi}}(\mu(\theta')) \circ d\mu(\theta')$ . This will motivate us to study the fiber $\{\theta \mid \mu(\theta) = f\}$ of all parameters mapped to the same function $f$ (particularly when the function $f$ is a critical point of $\ell|_{\mathcal{M}_{\Phi}}$ ).
+
+We note that a sufficient condition for $\theta^{*} \in \text{Crit}(L)$ to be a pure critical point is that the differential $d\mu(\theta^{*})$ at $\theta^{*}$ has maximal rank (namely $\dim \mathcal{M}_{\Phi}$ ), i.e., that $\mu$ is locally a submersion at $\theta^{*}$ . Indeed, we have in this case
+
+$$
+0 = d L \left(\theta^ {*}\right) = d \ell | _ {\mathcal {M} _ {\Phi}} \left(\mu \left(\theta^ {*}\right)\right) \circ d \mu \left(\theta^ {*}\right) \Rightarrow d \ell | _ {\mathcal {M} _ {\Phi}} \left(\mu \left(\theta^ {*}\right)\right) = 0,
+$$
+
+so $\mu(\theta^{*})$ is critical for the restriction of $\ell$ to $\mathcal{M}_{\Phi}$ . We also point out a special situation when $\mathcal{M}_{\Phi}$ is a convex set (as a subset of $C(\mathbb{R}^{d_x}, \mathbb{R}^{d_y}))$ and $\ell$ is a smooth convex functional. In this case, the only critical points of $\ell|_{\mathcal{M}_{\Phi}}$ are global minima, so we deduce that any critical point of $L = \ell \circ \mu$ is either a global minimum or a spurious critical point. The following simple observation gives a sufficient condition for critical points to be saddles (i.e., they are not local minima or local maxima).
+
+Lemma 2. Let $\theta^{*} \in \operatorname{Crit}(L)$ be a (necessarily spurious) critical point with the following property: for any open neighborhood $U$ of $\theta^{*}$, there exists $\theta'$ in $U$ such that $\mu(\theta') = \mu(\theta^{*})$ and $\theta' \notin \operatorname{Crit}(L)$. Then $\theta^{*}$ is a saddle for $L$.
+
+Proof. Assume that $\theta^{*}$ is a local minimum (the reasoning is analogous if $\theta^{*}$ is a local maximum). This means that there exists a neighborhood $U$ of $\theta^{*}$ such that $L(\theta) \geq L(\theta^{*})$ for all $\theta \in U$. In particular, if $\theta' \in U$ is such that $\mu(\theta') = \mu(\theta^{*})$, then $L(\theta') = L(\theta^{*})$, so $\theta'$ must also be a local minimum and hence a critical point. This contradicts $\theta' \notin \operatorname{Crit}(L)$.
+
+This general discussion on pure and spurious critical points applies to any smooth network map $\Phi$ (with possible extensions to the case of piece-wise smooth mappings), and we believe that the distinction can be a useful tool in the study of the optimization landscape of general networks. In the remaining part of the paper, we use this perspective for an in-depth study of the critical points of linear networks. For this type of network, the functional set $\mathcal{M}_{\Phi}$ can be embedded in a finite dimensional ambient space, namely the space of all linear maps $\mathbb{R}^{d_x} \to \mathbb{R}^{d_y}$ . Furthermore, $\mathcal{M}_{\Phi}$ is an algebraic variety (a manifold that can have singularities and that can be described by algebraic equations). We will use basic tools from algebraic geometry to provide a complete description of pure and spurious critical points, and to prove new results on the landscape of linear networks.
+
+# 2.2 LINEAR NETWORKS AND DETERMINANTAL VARIETIES
+
+A linear network is a map $\Phi : \mathbb{R}^{d_{\theta}} \times \mathbb{R}^{d_x} \to \mathbb{R}^{d_y}$ of the form
+
+$$
+\Phi (\theta , x) = W _ {h} \dots W _ {1} x, \quad \theta = \left(W _ {h}, \dots , W _ {1}\right) \in \mathbb {R} ^ {d _ {\theta}}, \tag {2}
+$$
+
+where $W_{i} \in \mathbb{R}^{d_{i} \times d_{i-1}}$ are matrices (so $d_{0} = d_{x}, d_{h} = d_{y}$ , and $d_{\theta} = d_{0}d_{1} + d_{1}d_{2} + \ldots + d_{h-1}d_{h}$ ). The functional space is in this case a subset of the space of all linear maps $\mathbb{R}^{d_{0}} \to \mathbb{R}^{d_{h}}$ . As in (1), we can decompose a loss function $L$ for a linear network $\Phi$ as
+
+$$
+\begin{aligned}
+\mathbb{R}^{d_h \times d_{h-1}} \times \dots \times \mathbb{R}^{d_1 \times d_0} &\xrightarrow{\mu_{\boldsymbol{d}}} \mathbb{R}^{d_h \times d_0} \xrightarrow{\ell} \mathbb{R} \\
+\left(W_h, \dots, W_1\right) &\longmapsto \overline{W} = W_h \dots W_1 \longmapsto \ell(\overline{W})
+\end{aligned} \tag{3}
+$$
+
+Here $\mu_{\pmb{d}}$ is the matrix multiplication map for the sequence of widths $\pmb {d} = (d_h,\dots ,d_0)$ , and $\ell$ is a functional on the space of $(d_h\times d_0)$ -matrices. In practice, it is typically a functional that depends on the training data, e.g. $\ell (\overline{W}) = \| \overline{W} X - Y\| ^2$ for fixed matrices $X,Y$ . Note that even if $\ell$ is a convex functional, the set $\mathcal{M}_{\Phi}$ will often not be a convex set. In fact, it is easy to see that the image of $\mu_{\pmb{d}}$ is the space $\mathcal{M}_r$ of $(d_h\times d_0)$ -matrices of rank at most $r = \min \{d_0,\ldots ,d_h\}$ . If $r < \min (d_0,d_h)$ , this
+
+set is known as a determinantal variety, a classical object of study in algebraic geometry (Harris, 1995). It is in fact an algebraic variety, i.e., it is described by polynomial equations in the matrix entries (namely, it is the zero-set of all $(r + 1) \times (r + 1)$-minors), and it is well known that the dimension of $\mathcal{M}_r$, viewed as a variety of $(m \times n)$-matrices, is $r(m + n - r)$. Furthermore, for $r > 0$, the variety $\mathcal{M}_r$ has many singularities: its singular locus is exactly $\mathcal{M}_{r-1} \subset \mathcal{M}_r$, the set of all matrices with rank strictly smaller than $r$. We refer the reader to Appendix A.1 for more details on determinantal varieties.
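As a quick numerical illustration (not from the paper), a generic product through a width-2 bottleneck has rank exactly 2, and all of its $(r+1) \times (r+1) = 3 \times 3$ minors vanish; the widths below are arbitrary choices.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
d_h, d_1, d_0 = 6, 2, 7               # bottleneck r = d_1 = 2
W2 = rng.standard_normal((d_h, d_1))
W1 = rng.standard_normal((d_1, d_0))
Wbar = W2 @ W1                         # a point of M_2 inside R^{6x7}

print(np.linalg.matrix_rank(Wbar))     # 2: rank bounded by the bottleneck
# Every 3x3 minor of Wbar vanishes (up to floating-point error).
minors = [np.linalg.det(Wbar[np.ix_(rows, cols)])
          for rows in combinations(range(d_h), 3)
          for cols in combinations(range(d_0), 3)]
print(max(abs(m) for m in minors) < 1e-9)  # True
```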
+
+# 3 MAIN RESULTS
+
+In this section, we investigate the critical locus $\text{Crit}(L)$ of general functions $L: \mathbb{R}^{d_{\theta}} \to \mathbb{R}$ of the form $L = \ell \circ \mu_{\pmb{d}}$, where $\ell: \mathbb{R}^{d_h \times d_0} \to \mathbb{R}$ is an (often convex) smooth map and $\mu_{\pmb{d}}$ is the matrix multiplication map introduced in (3). By studying the differential of $\mu_{\pmb{d}}$, we will characterize pure and spurious critical points of $L$. As previously noted, the image of $\mu_{\pmb{d}}$ is $\mathcal{M}_r \subset \mathbb{R}^{d_h \times d_0}$ where $r = \min\{d_i\}$. In particular, we distinguish between two cases:
+
+- We say that the map $\mu_d$ is filling if $r = \min \{d_0, d_h\}$ , so $\mathcal{M}_r = \mathbb{R}^{d_h \times d_0}$ . In this case, the functional space is smooth and convex.
+- We say that the map $\mu_d$ is non-filling if $r < \min\{d_0, d_h\}$ , so $\mathcal{M}_r \subsetneq \mathbb{R}^{d_h \times d_0}$ is a determinantal variety. In this case, the functional space is non-smooth and non-convex.
+
+# 3.1 PROPERTIES OF THE MATRIX MULTIPLICATION MAP
+
+We present some general results on the matrix multiplication map $\mu_{d}$ , which we will apply to linear networks in the next subsection. These facts may also be useful in other settings, for example, to study the piece-wise linear behavior of ReLU networks.
+
+We begin by noting that the differential map of $\mu_d$ can be written explicitly as
+
+$$
+d \mu_ {d} (\theta) \left(\dot {W} _ {h}, \dots , \dot {W} _ {1}\right) = \dot {W} _ {h} W _ {h - 1} \dots W _ {1} + W _ {h} \dot {W} _ {h - 1} \dots W _ {1} + \dots + W _ {h} \dots W _ {2} \dot {W} _ {1}. \tag {4}
+$$
+
+Given a matrix $M \in \mathbb{R}^{m \times n}$, we denote by $\operatorname{Row}(M) \subset \mathbb{R}^n$ and $\operatorname{Col}(M) \subset \mathbb{R}^m$ the vector spaces spanned by the rows and columns of $M$, respectively. Writing $W_{>i} = W_h W_{h-1} \ldots W_{i+1}$ and $W_{<i} = W_{i-1} \ldots W_1$, the image of the differential in (4) is the sum of the images of its terms:
+
+$$
+\operatorname{Im}\left(d \mu_{\boldsymbol{d}}(\theta)\right) = \mathbb{R}^{d_h} \otimes \operatorname{Row}\left(W_{<h}\right) + \dots + \operatorname{Col}\left(W_{>i}\right) \otimes \operatorname{Row}\left(W_{<i}\right) + \dots + \operatorname{Col}\left(W_{>1}\right) \otimes \mathbb{R}^{d_0}. \tag{5}
+$$
+
+From this expression, we deduce the following useful fact.
+
+Lemma 3. The dimension of the image of the differential $d\mu_{d}$ at $\theta = (W_h,\dots ,W_1)$ is given by
+
+$$
+\operatorname {r k} \left(d \mu_ {\boldsymbol {d}} (\theta)\right) = \sum_ {i = 1} ^ {h} \operatorname {r k} \left(W _ {> i}\right) \cdot \operatorname {r k} \left(W _ {< i}\right) - \sum_ {i = 1} ^ {h - 1} \operatorname {r k} \left(W _ {> i}\right) \cdot \operatorname {r k} \left(W _ {< i + 1}\right),
+$$
+
+where we use the convention that $W_{<1} = I_{d_0}$ , $W_{>h} = I_{d_h}$ are the identity matrices of size $d_0$ , $d_h$ .
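Lemma 3 can be checked numerically: the sketch below builds the image of the differential from the terms of formula (4) and compares its rank against the formula of the lemma, for an illustrative non-filling architecture with widths $(d_3, d_2, d_1, d_0) = (4, 2, 3, 5)$. This is a verification sketch, not part of the proof.

```python
import numpy as np

def mu(Ws):
    """Matrix multiplication map: mu([W_h, ..., W_1]) = W_h W_{h-1} ... W_1."""
    P = Ws[0]
    for W in Ws[1:]:
        P = P @ W
    return P

def differential_rank(Ws):
    """Rank of d(mu_d) at theta, from the terms of formula (4)."""
    cols = []
    for p, W in enumerate(Ws):        # Ws = [W_h, ..., W_1]
        left = mu(Ws[:p]) if p > 0 else np.eye(Ws[0].shape[0])
        right = mu(Ws[p + 1:]) if p + 1 < len(Ws) else np.eye(Ws[-1].shape[1])
        for a in range(W.shape[0]):   # perturb one entry of W at a time
            for b in range(W.shape[1]):
                cols.append(np.outer(left[:, a], right[b, :]).ravel())
    return np.linalg.matrix_rank(np.stack(cols, axis=1))

def lemma3_rank(Ws):
    """Rank predicted by Lemma 3 (W_{<1} and W_{>h} are identity matrices)."""
    h, rk = len(Ws), np.linalg.matrix_rank
    W_gt = lambda i: mu(Ws[:h - i]) if i < h else np.eye(Ws[0].shape[0])
    W_lt = lambda i: mu(Ws[h - i + 1:]) if i > 1 else np.eye(Ws[-1].shape[1])
    return (sum(rk(W_gt(i)) * rk(W_lt(i)) for i in range(1, h + 1))
            - sum(rk(W_gt(i)) * rk(W_lt(i + 1)) for i in range(1, h)))

rng = np.random.default_rng(0)
Ws = [rng.standard_normal(s) for s in [(4, 2), (2, 3), (3, 5)]]  # r = 2
print(differential_rank(Ws), lemma3_rank(Ws))  # both 14 = 2 * (4 + 5 - 2)
```

For generic weights both quantities equal $\dim \mathcal{M}_r = r(d_h + d_0 - r) = 14$, consistent with the non-filling case discussed next.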
+
+We can use Lemma 3 to characterize all cases when the differential $d\mu_{\pmb{d}}$ at $\theta = (W_h,\dots ,W_1)$ has full rank (i.e., when the matrix multiplication map is a local submersion onto $\mathcal{M}_r$ ).
+
+Theorem 4. Let $r = \min \{d_i\}$ , $\theta = (W_h, \dots, W_1)$ , and $\overline{W} = \mu_d(\theta)$ .
+
+- (Filling case) If $r = \min \{d_h, d_0\}$, the differential $d\mu_{\boldsymbol{d}}(\theta)$ has maximal rank equal to $\dim \mathcal{M}_r = d_h d_0$ if and only if, for every $i \in \{1, 2, \dots, h - 1\}$, either $\mathrm{rk}(W_{>i}) = d_h$ or $\mathrm{rk}(W_{<i+1}) = d_0$.
+- (Non-filling case) If $r < \min \{d_h, d_0\}$, the differential $d\mu_{\boldsymbol{d}}(\theta)$ has maximal rank equal to $\dim \mathcal{M}_r = r(d_h + d_0 - r)$ if and only if $\mathrm{rk}(\overline{W}) = r$.
+
+Theorem 12. Let $Q_0 \in \mathbb{R}^{d_y \times d_x}$ have pairwise distinct and positive singular values $\sigma_1 > \dots > \sigma_m > 0$, where $m = \min\{d_y, d_x\}$, and write its singular value decomposition as $Q_0 = \sum_{i=1}^{m} \sigma_i u_i v_i^T$. The critical points of the quadratic loss $h_{Q_0}(W) = \|W - Q_0\|^2$ restricted to $\mathcal{M}_r$ are exactly the $\binom{m}{r}$ matrices $Q_{\mathcal{I}} = \sum_{i \in \mathcal{I}} \sigma_i u_i v_i^T$ with $\mathcal{I} \subset \{1, \dots, m\}$ and $|\mathcal{I}| = r$. The unique local (and global) minimum is $Q_{\{1, \dots, r\}}$, and the index of the critical point $Q_{\mathcal{I}}$ is
+
+$$
+\# \left\{ (i, j) \in \mathcal{I}^c \times \mathcal{I} \mid j > i \right\}, \quad \text{where } \mathcal{I}^c = \{1, \dots, m\} \setminus \mathcal{I}.
+$$
+
+In the appendix we present a more general version of this statement without the assumption that the singular values of $Q_{0}$ are pairwise distinct and positive. The surprising aspect of this result is that the structure of the critical points is the same for almost all choices of $Q_{0}$ . We want to emphasize that this is a special behavior of determinantal varieties with respect to the Euclidean distance, and the situation changes drastically if we apply even infinitesimal changes to the quadratic loss function. More precisely, any linear perturbation of the Euclidean norm will result in a totally different landscape, as the following example shows (more details are given in Appendix A.2).
+
+Example 13. Let us consider the variety $\mathcal{M}_1 \subseteq \mathbb{R}^{3 \times 3}$ of rank-one $(3 \times 3)$-matrices. By Theorem 12, for almost all $Q_0$, the function $h_{Q_0} |_{\mathcal{M}_1}$ has three (real) critical points. Applying a linear change of coordinates to $\mathbb{R}^{d_y \times d_x} \cong \mathbb{R}^{d_y d_x}$ yields a different quadratic loss $\tilde{h}_{Q_0}$. Using tools from algebraic geometry, it is possible to show that for almost all linear coordinate changes (an open dense set), the function $\tilde{h}_{Q_0}|_{\mathcal{M}_1}$ has 39 critical points over the complex numbers. The number of real critical points however varies, depending on whether $Q_{0}$ belongs to different open regions separated by a caustic hypersurface in $\mathbb{R}^{3\times 3}$. Furthermore, the number of local minima varies as well; in particular, it is no longer true that all $Q_{0}$ admit a unique local minimum. Figure 3 presents some simple computational experiments illustrating this behavior.
+
+Figure 2: Left: If $\mathcal{V} \subset \mathbb{R}^2$ is an ellipse, the distance function $h_u(p) = \| p - u\|^2$ restricted to $\mathcal{V}$ generally has 2 or 4 real critical points, depending on whether $u$ lies inside or outside the diamond-shaped region bounded by the caustic curve. Right: If $\mathcal{V} \subset \mathbb{R}^2$ is a circle, then the caustic curve degenerates to a point and the distance function generically always has 2 real critical points.
+
+For all determinantal varieties, the situation is similar to the description in Example 13. More generally, given an algebraic variety $\mathcal{V} \subset \mathbb{R}^n$ and a point $u \in \mathbb{R}^n$ , the number of (real) critical points of the distance function $h_u(p) = \| p - u\|^2$ restricted to $\mathcal{V}$ is usually not constant as $u$ varies: the behavior changes when $u$ crosses the caustic hypersurface, or ED (Euclidean distance) discriminant, of $\mathcal{V}$ ; see Figure 2. In the case of determinantal varieties with the standard Euclidean distance, this caustic hypersurface (more precisely its real locus) degenerates to a set of codimension 2, which does not partition the space into different regions. This is analogous to the case of the circle in Figure 2.
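The ellipse picture can be reproduced symbolically. The sketch below (SymPy; a toy setup of ours, not code from the paper) counts the real critical points of $h_u$ on the ellipse $x^2/4 + y^2 = 1$, whose caustic (the evolute) meets the $x$-axis at $\pm 3/2$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
ellipse = x**2 / 4 + y**2 - 1

def n_real_critical_points(ux, uy):
    h = (x - ux)**2 + (y - uy)**2
    # Lagrange condition on the ellipse: grad h parallel to grad(ellipse)
    cross = sp.diff(h, x) * sp.diff(ellipse, y) - sp.diff(h, y) * sp.diff(ellipse, x)
    return len(sp.solve([sp.Eq(ellipse, 0), sp.Eq(cross, 0)], [x, y]))

print(n_real_critical_points(3, 0))                  # 2: u outside the caustic
print(n_real_critical_points(sp.Rational(1, 2), 0))  # 4: u inside the caustic
```

Since the symbols are declared real, complex solutions of the polynomial system are discarded, so the counts change exactly when $u$ crosses the caustic.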
+
+
+Figure 3: Real critical points and local minima for random choices of $\tilde{h}_{Q_0}|_{\mathcal{M}_1}$ as defined in Example 13. The size of each disk is proportional to the number of instances we found with that number of critical points and local minima. This shows that linear networks with a convex loss may indeed have multiple non-global local minima. More details in Appendix A.2 (Table 2 and Experiment 1).
+
+# 3.4 USING DIFFERENT PARAMETERIZATIONS: NORMALIZED NETWORKS
+
+In the simple linear network model (2), the functional space $\mathcal{M}_r\subset \mathbb{R}^{d_h\times d_0}$ is parameterized using the matrix multiplication map $\mu_{d}$. On the other hand, one can envision many variations of this model that are network architectures with the same functional space but parameterized differently. Examples include linear networks with skip connections, or convolutional linear networks. In this subsection, we take a look at a model for normalized linear networks: these are maps of the form
+
+$$
+\Psi (\theta , x) = W _ {h} \frac {W _ {h - 1}}{\| W _ {h - 1} \|} \dots \frac {W _ {1}}{\| W _ {1} \|} x, \quad \theta = (W _ {h}, \dots , W _ {1}), \tag {6}
+$$
+
+where $W_{i} \in \mathbb{R}^{d_{i} \times d_{i-1}}$ as before. This is a simple model for different types of weight normalization schemes often used in practice. It is easy to see that the difference between (6) and our previous linear network lies only in the parameterization of linear maps, since for normalized networks the matrix multiplication map is replaced by
+
+$$
+\nu_ {\boldsymbol {d}}: \Omega \to \mathbb {R} ^ {d _ {h} \times d _ {0}}, \quad (W _ {h}, \ldots , W _ {1}) \mapsto \overline {{W}} = W _ {h} \frac {W _ {h - 1}}{\| W _ {h - 1} \|} \ldots \frac {W _ {1}}{\| W _ {1} \|},
+$$
+
+where $\Omega = \{(W_h,\dots ,W_1)\mid W_i\neq 0,i = 1,\dots ,h - 1\} \subset \mathbb{R}^{d_\theta}$ . According to our definitions, if $L = \ell \circ \mu_{\pmb{d}}$ and $L^{\prime} = \ell \circ \nu_{\pmb{d}}$ are losses respectively for linear networks and normalized linear networks, then the pure critical loci of $L$ and $L^{\prime}$ will correspond to each other (since these only depend on the functional space), but a priori the spurious critical loci induced by the two parameterizations may be different. In this particular setting, however, we show that this is not the case: the new parameterization effectively does not introduce different critical points, and in fact makes the critical locus slightly smaller.
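One way to see numerically that $\nu_{\boldsymbol{d}}$ parameterizes the same functional space is that it is invariant under positive rescaling of the inner factors. A NumPy sketch (the dimensions are illustrative assumptions of ours, using the Frobenius norm for $\|\cdot\|$):

```python
import numpy as np

def nu(Ws):
    """nu_d for Ws = [W_1, ..., W_h]: normalize every factor except the last, W_h."""
    out = Ws[0] / np.linalg.norm(Ws[0])
    for W in Ws[1:-1]:
        out = (W / np.linalg.norm(W)) @ out
    return Ws[-1] @ out

rng = np.random.default_rng(1)
Ws = [rng.standard_normal((3, 4)), rng.standard_normal((2, 3)), rng.standard_normal((5, 2))]
rescaled = [2.7 * Ws[0], 0.5 * Ws[1], Ws[2]]  # positive rescaling of the inner factors
print(np.allclose(nu(Ws), nu(rescaled)))       # True
```

This scale invariance is exactly the redundancy removed by the normalization, which is why the parameterization can only shrink the critical locus.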
+
+Proposition 14. If $L' = \ell \circ \nu_d$ and $L = \ell \circ \mu_d$ , then the critical locus $\text{Crit}(L')$ is in "correspondence" with $\text{Crit}(L) \cap \Omega$ , meaning that
+
+$$
+\{\nu_{\boldsymbol{d}}(\theta^{\prime}) \mid \theta^{\prime} \in \mathrm{Crit}(L^{\prime})\} = \{\mu_{\boldsymbol{d}}(\theta) \mid \theta \in \mathrm{Crit}(L) \cap \Omega\}.
+$$
+
+# 4 CONCLUSIONS
+
+We have introduced the notions of pure and spurious critical points as general tools for a geometric investigation of the landscape of neural networks. In particular, they provide a basic language for describing the interplay between a convex loss function and an overparameterized, non-convex functional space. In this paper, we have focused on the landscape of linear networks. This simple model is useful for illustrating our geometric perspective, but also exhibits several interesting (and surprisingly subtle) features. For example, the absence of non-global minima in the loss landscape is a rather general property when the architecture is "filling", while in the "non-filling" setting it is a special property that holds for the quadratic loss. Furthermore, we have observed that even in this simple framework global minima can have (possibly exponentially) many disconnected components.
+
+In the future, we hope to extend our analysis to different network models. For example, we can use our framework to study networks with polynomial activations (Kileel et al., 2019), which are a direct generalization of the linear model. We expect that an analysis of pure and spurious critical points in this context can be used to address a conjecture in Venturi et al. (2018) regarding the gap between "upper" and "lower" dimensions in functional space. A geometric investigation of networks with smooth non-polynomial activations is also possible; in that setting, the parameter space and the functional space are usually of the same dimension (i.e., $d_{\theta} = \dim(\mathcal{M}_{\Phi})$); however, there is still an interesting stratification of singular loci, as explained for example in (Amari, 2016, Section 12.2.2). General "discriminant hypersurfaces" can also be used to describe qualitative changes in the landscape as the data distribution varies. Finally, extending our analysis to networks with ReLU activations will require some care because of the non-differentiable setting. On the other hand, it is clear that ReLU networks behave as linear networks when restricted to appropriate regions of input space: this suggests that our study of ranks of differentials may be a useful building block for pursuing this important direction.
+
+Acknowledgements. We thank James Mathews for many helpful discussions in the beginning of this project. We are grateful to ICERM (NSF DMS-1439786 and the Simons Foundation grant 507536) for the hospitality during the academic year 2018/2019, when many ideas for this project were developed. MT and JB were partially supported by the Alfred P. Sloan Foundation, NSF RI-1816753, NSF CAREER CIF 1845360, and Samsung Electronics. KK was partially supported by the Knut and Alice Wallenberg Foundation within their WASP AI/Math initiative.
+
+# REFERENCES
+
+Shun-ichi Amari. Information Geometry and Its Applications, volume 194 of Applied Mathematical Sciences. Springer Japan, Tokyo, 2016. ISBN 978-4-431-55977-1 978-4-431-55978-8. doi: 10.1007/978-4-431-55978-8.
+Sanjeev Arora, Nadav Cohen, Noah Golowich, and Wei Hu. A convergence analysis of gradient descent for deep linear neural networks. arXiv preprint arXiv:1810.02281, 2018.
+Sanjeev Arora, Nadav Cohen, Wei Hu, and Yuping Luo. Implicit regularization in deep matrix factorization. arXiv preprint arXiv:1905.13655, 2019.
+Bubacarr Bah, Holger Rauhut, Ulrich Terstiege, and Michael Westdickenberg. Learning deep linear neural networks: Riemannian gradient flows and convergence to global minimizers. arXiv:1910.05505 [cs, math], November 2019.
+Pierre Baldi and Kurt Hornik. Neural networks and principal component analysis: Learning from examples without local minima. *Neural Networks*, 2(1):53-58, January 1989. ISSN 08936080. doi: 10.1016/0893-6080(89)90014-2.
+Lenaic Chizat and Francis Bach. On the Global Convergence of Gradient Descent for Overparameterized Models using Optimal Transport. arXiv:1805.09545 [cs, math, stat], May 2018.
+Anna Choromanska, Mikael Henaff, Michael Mathieu, Gerard Ben Arous, and Yann LeCun. The Loss Surfaces of Multilayer Networks. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, AISTATS 2015, San Diego, California, USA, May 9-12, 2015, 2015.
+Jan Draisma and Emil Horobet. The average number of critical rank-one approximations to a tensor. arXiv:1408.3507 [math], August 2014.
+Jan Draisma, Emil Horobet, Giorgio Ottaviani, Bernd Sturmfels, and Rekha R. Thomas. The Euclidean distance degree of an algebraic variety. arXiv:1309.0049 [math], August 2013.
+Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, and Andrew Gordon Wilson. Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs. arXiv:1802.10026 [cs, stat], October 2018.
+Daniel R. Grayson and Michael E. Stillman. Macaulay2, a software system for research in algebraic geometry. Available at http://www.math.uic.edu/Macaulay2/, 2019.
+Moritz Hardt and Tengyu Ma. Identity Matters in Deep Learning. arXiv:1611.04231 [cs, stat], November 2016.
+Joe Harris. *Algebraic Geometry: A First Course*. Number 133 in Graduate Texts in Mathematics. Springer, New York, corr. 3rd print edition, 1995. ISBN 978-0-387-97716-4.
+Hamza Jaffali and Luke Oeding. Learning algebraic models of quantum entanglement. arXiv preprint arXiv:1908.10247, 2019.
+Kenji Kawaguchi. Deep Learning without Poor Local Minima. CoRR, abs/1605.07110, 2016.
+Joe Kileel, Matthew Trager, and Joan Bruna. On the expressive power of deep polynomial neural networks. arXiv preprint arXiv:1905.12207, 2019.
+Thomas Laurent and James von Brecht. Deep linear neural networks with arbitrary loss: All local minima are global. arXiv:1712.01473 [cs, stat], December 2017.
+John M. Lee. Introduction to Smooth Manifolds. Number 218 in Graduate Texts in Mathematics. Springer, New York, 2003. ISBN 978-0-387-95495-0 978-0-387-95448-6.
+Haihao Lu and Kenji Kawaguchi. Depth Creates No Bad Local Minima. arXiv:1702.08580 [cs, math, stat], February 2017.
+
+Dhagash Mehta, Tianran Chen, Tingting Tang, and Jonathan D. Hauenstein. The loss surface of deep linear networks viewed through the algebraic geometry lens. arXiv:1810.07716, 2018. URL http://arxiv.org/abs/1810.07716.
+Song Mei, Andrea Montanari, and Phan-Minh Nguyen. A Mean Field View of the Landscape of Two-Layers Neural Networks. arXiv:1804.06561 [cond-mat, stat], April 2018.
+Giorgio Ottaviani, Pierre-Jean Spaenlehauer, and Bernd Sturmfels. Exact Solutions in Structured Low-Rank Approximation. arXiv:1311.2376 [cs, math, stat], November 2013.
+Luca Venturi, Afonso S Bandeira, and Joan Bruna. Spurious valleys in two-layer neural network optimization landscapes. arXiv preprint arXiv:1802.06384, 2018.
+Chulhee Yun, Suvrit Sra, and Ali Jadbabaie. Global optimality conditions for deep neural networks. arXiv preprint arXiv:1707.02444, 2017.
+Chulhee Yun, Suvrit Sra, and Ali Jadbabaie. Small nonlinearities in activation functions create bad local minima in neural networks. arXiv preprint arXiv:1802.03487, 2018.
+Li Zhang. Depth creates no more spurious local minima. arXiv preprint arXiv:1901.09827, 2019.
+Yi Zhou and Yingbin Liang. Critical Points of Neural Networks: Analytical Forms and Landscape Properties. arXiv:1710.11205 [cs, stat], October 2017.
+
+# A APPENDIX
+
+# A.1 DETERMINANTAL VARIETIES
+
+We present some additional properties of determinantal varieties. For proofs and more details, we refer the reader to Harris (1995). Given $r < \min(m,n)$ , the $r$ -th determinantal variety $\mathcal{M}_r \subset \mathbb{R}^{m \times n}$ is defined as the set of matrices with rank at most $r$ :
+
+$$
+\mathcal {M} _ {r} = \{P \in \mathbb {R} ^ {m \times n} \mid \operatorname {r k} (P) \leq r \} \subset \mathbb {R} ^ {m \times n}.
+$$
+
+As mentioned in the main part of the paper, $\mathcal{M}_r$ is an algebraic variety of dimension $r(m + n - r)$ that can be described as the zero-set of all $(r + 1) \times (r + 1)$ minors. For $r > 0$, the singular locus of $\mathcal{M}_r$ is exactly $\mathcal{M}_{r-1} \subset \mathcal{M}_r$. Some of our proofs will rely on the following explicit characterization of the tangent spaces of determinantal varieties: given a matrix $P \in \mathbb{R}^{m \times n}$ of rank exactly $r$ (so that $P$ is a smooth point of $\mathcal{M}_r$), we have that
+
+$$
+T_{P} \mathcal{M}_{r} = \mathbb{R}^{m} \otimes \operatorname{Row}(P) + \operatorname{Col}(P) \otimes \mathbb{R}^{n} \subset \mathbb{R}^{m \times n}.
+$$
+
+We will also make use of the normal space to the tangent space $T_P\mathcal{M}_r$ at $P$ , with respect to the Frobenius inner product. This is given by
+
+$$
+\left(T_{P} \mathcal{M}_{r}\right)^{\perp} = \operatorname{Col}(P)^{\perp} \otimes \operatorname{Row}(P)^{\perp},
+$$
+
+where $\operatorname{Col}(P)^\perp$ and $\operatorname{Row}(P)^\perp$ are the orthogonal spaces to $\operatorname{Col}(P)$ and $\operatorname{Row}(P)$ , respectively.
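The tangent space description can be checked numerically. The NumPy sketch below (our own illustrative setup) verifies that the span $\mathbb{R}^m \otimes \operatorname{Row}(P) + \operatorname{Col}(P) \otimes \mathbb{R}^n$ at a random rank-$r$ point has dimension $r(m + n - r) = \dim \mathcal{M}_r$:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r = 5, 4, 2
P = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r point of M_r

U, s, Vt = np.linalg.svd(P)
C, R = U[:, :r], Vt[:r, :]          # orthonormal bases of Col(P) and Row(P)

# Generators of R^m (x) Row(P) + Col(P) (x) R^n, flattened to vectors
gens = [np.outer(np.eye(m)[i], R[k]) for i in range(m) for k in range(r)]
gens += [np.outer(C[:, k], np.eye(n)[j]) for k in range(r) for j in range(n)]
T = np.stack([G.ravel() for G in gens])
print(np.linalg.matrix_rank(T), r * (m + n - r))  # 14 14
```

The two families overlap exactly in $\operatorname{Col}(P) \otimes \operatorname{Row}(P)$ (dimension $r^2$), which accounts for the count $mr + rn - r^2 = r(m + n - r)$.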
+
+# A.2 EUCLIDEAN DISTANCE DEGREES AND DISCRIMINANTS
+
+In this section, we informally discuss some algebraic notions related to ED (Euclidean distance) degrees and discriminants. A detailed presentation can be found in Draisma et al. (2013). Given an algebraic variety $\mathcal{V} \subset \mathbb{R}^n$ and a point $u \in \mathbb{R}^n$ , the number of real critical points of the distance function $h_u(p) = \|p - u\|^2$ restricted to $\mathcal{V}$ is only locally constant as $u$ varies. In general, the behavior changes when $u$ crosses the caustic hypersurface, or ED (Euclidean distance) discriminant, of $\mathcal{V}$ . The ED discriminant can be defined over the complex numbers, and in this setting it is indeed always a hypersurface (i.e., it has codimension one), however it can have higher codimension over the real numbers. For instance, for a circle in the complex plane with the origin as its center, a point $(u_1, u_2) \in \mathbb{C}^2$ is on the ED discriminant if and only if $u_1^2 + u_2^2 = 0$ . This defines a curve in the complex plane whose real locus is a point (see right side of Figure 2). By the Eckart-Young Theorem (Theorem 12), the ED discriminant of the determinantal variety $\mathcal{M}_r$ is the locus of all matrices $Q_0$
+
+with at least two coinciding singular values, so it is defined by the discriminant of $Q_0Q_0^T$ . As in the case of the circle, the ED discriminant of $\mathcal{M}_r$ has codimension two in $\mathbb{R}^{d_y\times d_x}$ .
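The normal-space description from A.1 makes the Eckart–Young criticality easy to verify numerically: for each rank-one truncation $\sigma_i u_i v_i^T$ of $Q_0$, the residual lies in $\operatorname{Col}(Q)^{\perp} \otimes \operatorname{Row}(Q)^{\perp}$. A NumPy sketch (our own check, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
Q0 = rng.standard_normal((3, 3))
U, s, Vt = np.linalg.svd(Q0)

for i in range(3):
    Q = s[i] * np.outer(U[:, i], Vt[i])  # candidate critical point on M_1
    resid = Q0 - Q
    # resid in (T_Q M_1)^perp  <=>  u_i^T resid = 0 and resid v_i = 0
    print(np.allclose(U[:, i] @ resid, 0), np.allclose(resid @ Vt[i], 0))
```

Each truncation passes the test because the residual $\sum_{j \neq i} \sigma_j u_j v_j^T$ is orthogonal to $u_i$ on the left and to $v_i$ on the right.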
+
+Over the complex numbers, the number of critical points of the distance function $h_u$ restricted to $\mathcal{V}$ is actually the same for every point $u \in \mathbb{C}^n$ not on the ED discriminant of $\mathcal{V}$ . This quantity is known as the ED degree of the variety $\mathcal{V}$ . For instance, a circle has ED degree two whereas an ellipse has ED degree four (on the left side of Figure 2, points $u$ outside of the caustic curve yield two real critical points and two imaginary critical points). The Eckart-Young Theorem (Theorem 12) tells us that the ED degree of the determinantal variety $\mathcal{M}_r \subset \mathbb{R}^{d_y \times d_x}$ is $\binom{m}{r}$ where $m = \min(d_x, d_y)$ . As argued in the main part of the paper, this does not hold any longer after perturbing either the determinantal variety or the Euclidean distance slightly, even using only a linear change of coordinates. For an algebraic variety $\mathcal{V} \subset \mathbb{C}^n$ , a linear change of coordinates is given by an automorphism $\varphi : \mathbb{C}^n \to \mathbb{C}^n$ . For almost all such automorphisms (i.e., for all $\varphi$ except those lying in some subvariety of $\mathrm{GL}(n, \mathbb{C})$ ) the ED degree of $\varphi(\mathcal{V})$ is the same; see Theorem 5.4 in Draisma et al. (2013). This quantity is known as the general ED degree of $\mathcal{V}$ . For instance, almost all linear coordinate changes will deform a circle into an ellipse, such that the general ED degree of the circle is four.
+
+In the above definition of the general ED degree, we fixed the standard Euclidean distance and perturbed the variety. Alternatively, we can fix the variety and change the standard Euclidean distance $\|\cdot\|$ to $\mathrm{dist}_{\varphi} = \|\varphi(\cdot)\|$ . The new distance function $h_{\varphi,u}(p) = \mathrm{dist}_{\varphi}(p - u)^2$ from $u$ satisfies $h_{\varphi(u)}(\varphi(p)) = h_{\mathrm{id},\varphi(u)}(\varphi(p)) = h_{\varphi,u}(p)$ . Hence, the ED degree of $\varphi(\mathcal{V})$ with respect to the standard Euclidean distance $\mathrm{dist}_{\mathrm{id}} = \|\cdot\|$ equals the ED degree of $\mathcal{V}$ with respect to the perturbed Euclidean distance $\mathrm{dist}_{\varphi}$ . In particular, the general ED degree of $\mathcal{V}$ can be obtained by computing the ED degree after applying a sufficiently random linear change of coordinates on either the Euclidean distance or the variety $\mathcal{V}$ itself.
+
+As in the case of a circle, the general ED degree of the determinantal variety $\mathcal{M}_r$ is not equal to the ED degree of $\mathcal{M}_r$ . Furthermore, there is no known closed formula for the general ED degree of $\mathcal{M}_r$ only involving the parameters $d_x$ , $d_y$ and $r$ . In the special case of rank-one matrices, one can derive a closed expression from the Catanese-Trifogli formula (Theorem 7.8 in Draisma et al. (2013)): the general ED degree of $\mathcal{M}_1$ is
+
+$$
+\sum_{s = 0}^{d_{x} + d_{y} - 2}(-1)^{s}\left(2^{d_{x} + d_{y} - 1 - s} - 1\right)(d_{x} + d_{y} - 2 - s)! \left[\sum_{\substack{i + j = s\\ i\leq d_{x} - 1,\ j\leq d_{y} - 1}}\frac{\binom{d_{x}}{i}\binom{d_{y}}{j}}{(d_{x} - 1 - i)!\,(d_{y} - 1 - j)!}\right].
+$$
+
+This expression yields 39 for $d_x = d_y = 3$ , as mentioned in Example 13. For general $r$ , formulas for the general ED degree of $\mathcal{M}_r$ involving Chern and polar classes can be found in Ottaviani et al. (2013); Draisma et al. (2013). A short algorithm to compute the general ED degree of $\mathcal{M}_r$ is given in Example 7.11 of Draisma et al. (2013); it uses a package for advanced intersection theory in the algebro-geometric software Macaulay2 (Grayson & Stillman, 2019).
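The sum is easy to evaluate directly. The sketch below (our own helper, written in terms of the projective dimensions $n_x = d_x - 1$, $n_y = d_y - 1$ of the Segre factors) reproduces the value 39 quoted for $d_x = d_y = 3$:

```python
from math import comb, factorial

def general_ed_degree_rank_one(dx, dy):
    # Catanese-Trifogli count for rank-one matrices in R^{dx x dy},
    # with nx = dx - 1, ny = dy - 1 the projective dimensions.
    nx, ny = dx - 1, dy - 1
    total = 0
    for s in range(nx + ny + 1):
        inner = sum(comb(nx + 1, i) * comb(ny + 1, s - i)
                    / (factorial(nx - i) * factorial(ny - (s - i)))
                    for i in range(max(0, s - ny), min(nx, s) + 1))
        total += (-1)**s * (2**(nx + ny + 1 - s) - 1) * factorial(nx + ny - s) * inner
    return round(total)

print(general_ed_degree_rank_one(3, 3))  # 39
```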
+
+This discussion shows that the Eckart-Young Theorem is indeed very special. The intrinsic reason for this is that the determinantal variety $\mathcal{M}_r$ intersects the "isotropic quadric" associated with the standard Euclidean distance (i.e., zero locus of $X_{1,1}^2 +\ldots +X_{d_x,d_y}^2$ in $\mathbb{C}^{d_x\times d_y}$ ) in a particular way (i.e., non-transversely). Performing a random linear change of coordinates on either $\mathcal{M}_r$ or the isotropic quadric makes the intersection transverse. So the ED degree after the linear change of coordinates is the general ED degree of $\mathcal{M}_r$ , and the Eckart-Young Theorem does not apply.
+
+In summary, we have observed that the degeneration from an ellipse to a circle is analogous to the degeneration from a determinantal variety with a perturbed Euclidean distance to the determinantal variety with the standard Euclidean distance: in both cases, the ED degree drops because the situation becomes degenerate. Moreover, the ED discriminant drops dimension, which causes the special phenomenon that the number of real critical points is almost everywhere the same.
+
+Experiment 1. In general, it is very difficult to describe the open regions in $\mathbb{R}^n$ that are separated by the ED discriminant of a variety $\mathcal{V}\subset \mathbb{R}^n$. Finding the "typical" number of real critical points of the distance function $h_u$ restricted to $\mathcal{V}$ requires computing the volumes of these open regions. In the current state of the art in real algebraic geometry, this is only possible for very particular varieties $\mathcal{V}$. For these reasons, and to gain more insight into the typical number of real critical points of determinantal varieties with a perturbed Euclidean distance, we performed computational experiments with Macaulay2 (Grayson & Stillman, 2019) in the situation of Example 13. We fixed the determinantal variety $\mathcal{M}_1 \subseteq \mathbb{R}^{3 \times 3}$ of rank-one $(3 \times 3)$-matrices. In each iteration of the experiment, we picked a random automorphism $\varphi: \mathbb{R}^{3 \times 3} \to \mathbb{R}^{3 \times 3}$ and a random matrix $Q_0 \in \mathbb{R}^{3 \times 3}$. We first verified that the number of complex critical points of the perturbed quadratic distance function $h_{\varphi, Q_0}$ restricted to $\mathcal{M}_1$ is the expected number 39. After that, we computed the number of real critical points and the number of local minima among them. Our results for 2000 iterations can be found in Table 2 and Figure 3. Although this is a very rudimentary experiment in an extremely simple setting, it provides clear evidence that the number of local minima of the perturbed distance function is generally not one.
+
+Table 2: Number of critical points (columns) and number of local minima (rows) in our experiments
+
+| #(local minima) \ #(critical points) | 1 | 3 | 5 | 7 | 9 | 11 | 13 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 | 0 | 476 | 120 | 1 | 0 | 0 | 0 |
+| 2 | 0 | 0 | 805 | 190 | 10 | 0 | 0 |
+| 3 | 0 | 0 | 0 | 228 | 116 | 21 | 0 |
+| 4 | 0 | 0 | 0 | 0 | 16 | 12 | 5 |
+
+Implementation details: We note that our computations of real critical points and local minima involved numerical methods and might thus be affected by numerical errors. In our implementation we used several basic tests to rule out numerically bad iterations, so that we can report our results with high confidence. The entries of the random matrix $Q_{0}$ are independently and uniformly chosen among the integers in $Z = \{-10, -9, \dots, 9, 10\}$ . The random automorphism $\varphi$ is given by a matrix in $Z^{9 \times 9}$ whose entries are also chosen independently and uniformly at random.
+
+# A.3 PURE AND SPURIOUS CRITICAL POINTS IN PREDICTOR SPACE
+
+We illustrate a variation of our functional setting where the notions of pure and spurious can also be naturally applied. We consider a training sample $x_{1},\ldots ,x_{N}\in \mathbb{R}^{d_{x}},y_{1},\ldots ,y_{N}\in \mathbb{R}$ (for notational simplicity we use $d_{y} = 1$ but this is not necessary). We then write an empirical risk of the form
+
+$$
+L (\theta) = g (\hat {Y} (\theta), Y),
+$$
+
+where $\hat{Y}(\theta) = (\Phi(\theta, x_1), \ldots, \Phi(\theta, x_N)) \in \mathbb{R}^N$ , $Y = (y_1, \ldots, y_N) \in \mathbb{R}^N$ and $g: \mathbb{R}^N \times \mathbb{R}^N \to \mathbb{R}$ is a convex function. As $\theta$ varies, $\hat{Y}(\theta)$ defines a "predictor manifold" $\mathcal{V} \subset \mathbb{R}^N$ , which depends only on the input data $x_1, \ldots, x_N$ , but not on $\theta$ . The function $L(\theta)$ can be naturally seen as a composition
+
+$$
+\mathbb{R}^{d_{\theta}} \xrightarrow{\eta} \mathcal{V} \xrightarrow{g} \mathbb{R},
+$$
+
+where $\eta(\theta) = \hat{Y}(\theta) \in \mathcal{V}$ . We may now distinguish again between "pure" and "spurious" critical points for $L$ . In an underparameterized regime $d_{\theta} < N$ , or if the input data $x_1, \ldots, x_N$ is in some way special, then $\mathcal{V} \subsetneq \mathbb{R}^N$ is a submanifold (with singularities), and critical points may arise because we are restricting $g$ to $\mathcal{V}$ (pure), or because of the parameterization map $\eta$ (spurious). In a highly overparameterized regime $d_{\theta} \gg N$ (which is usually the case in practice), we expect $\mathcal{V} = \mathbb{R}^N$ . This can be viewed as analogous to the "filling" situation described for linear networks in this paper. In particular, all critical points that are not global minima for $L$ are necessarily spurious, since $g|_{\mathcal{V}} = g$ is convex.
+
+# A.4 PROOF OF THEOREM 4
+
+We first show Lemma 3 with the help of the following general observation:
+
+Proposition 15. Let $V_{1}^{+} \subseteq V_{2}^{+} \subseteq \ldots \subseteq V_{h}^{+}$ and $V_{1}^{-} \supseteq V_{2}^{-} \supseteq \ldots \supseteq V_{h}^{-}$ be vector spaces with dimensions $r_{i}^{+} := \dim(V_{i}^{+})$ and $r_{i}^{-} := \dim(V_{i}^{-})$ for $i = 1, \ldots, h$ . Then we have
+
+$$
+\dim \left(\left(V _ {1} ^ {+} \otimes V _ {1} ^ {-}\right) + \left(V _ {2} ^ {+} \otimes V _ {2} ^ {-}\right) + \dots + \left(V _ {h} ^ {+} \otimes V _ {h} ^ {-}\right)\right) = \sum_ {i = 1} ^ {h} r _ {i} ^ {+} r _ {i} ^ {-} - \sum_ {i = 1} ^ {h - 1} r _ {i} ^ {+} r _ {i + 1} ^ {-}.
+$$
+
+Proof. We prove this assertion by induction on $h$ . The base case ( $h = 1$ ) is clear: $\dim(V_1^+ \otimes V_1^-) = r_1^+ r_1^-$ . For the induction step, we set $V := (V_1^+ \otimes V_1^-) + \ldots + (V_{h-1}^+ \otimes V_{h-1}^-)$ . The key observation is that the inclusions $V_1^+ \subseteq V_2^+ \subseteq \ldots \subseteq V_h^+$ and $V_1^- \supseteq V_2^- \supseteq \ldots \supseteq V_h^-$ imply that $V \cap (V_h^+ \otimes V_h^-) = V_{h-1}^+ \otimes V_h^-$ . Hence, applying the induction hypothesis to $V$ , we derive
+
+$$
+\begin{array}{l} \dim \left(V + \left(V _ {h} ^ {+} \otimes V _ {h} ^ {-}\right)\right) = \dim (V) + \dim \left(V _ {h} ^ {+} \otimes V _ {h} ^ {-}\right) - \dim \left(V _ {h - 1} ^ {+} \otimes V _ {h} ^ {-}\right) \\ = \left(\sum_ {i = 1} ^ {h - 1} r _ {i} ^ {+} r _ {i} ^ {-} - \sum_ {i = 1} ^ {h - 2} r _ {i} ^ {+} r _ {i + 1} ^ {-}\right) + r _ {h} ^ {+} r _ {h} ^ {-} - r _ {h - 1} ^ {+} r _ {h} ^ {-} \\ = \sum_ {i = 1} ^ {h} r _ {i} ^ {+} r _ {i} ^ {-} - \sum_ {i = 1} ^ {h - 1} r _ {i} ^ {+} r _ {i + 1} ^ {-}. \\ \end{array}
+$$
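Proposition 15 can also be verified numerically for a random nested chain of subspaces. A NumPy sketch (the dimensions $r^+ = (1,2,4)$, $r^- = (3,2,1)$ are arbitrary illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(5)
# Chains V_1^+ subset V_2^+ subset V_3^+ in R^5 and V_1^- supset V_2^- supset V_3^- in R^4
B = rng.standard_normal((5, 4))
Vp = [B[:, :1], B[:, :2], B[:, :4]]   # r+ = (1, 2, 4)
C = rng.standard_normal((4, 3))
Vm = [C[:, :3], C[:, :2], C[:, :1]]   # r- = (3, 2, 1)

# The columns of kron(P, M) span V_i^+ tensor V_i^-
gens = np.hstack([np.kron(P, M) for P, M in zip(Vp, Vm)])
rp, rm = [1, 2, 4], [3, 2, 1]
formula = sum(a * b for a, b in zip(rp, rm)) - sum(rp[i] * rm[i + 1] for i in range(2))
print(np.linalg.matrix_rank(gens), formula)  # 7 7
```

Here the formula gives $(1\cdot 3 + 2\cdot 2 + 4\cdot 1) - (1\cdot 2 + 2\cdot 1) = 7$, matching the numerical dimension of the sum of tensor products.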
+
+Lemma 3. The dimension of the image of the differential $d\mu_{\pmb{d}}$ at $\theta = (W_h,\dots ,W_1)$ is given by
+
+$$
+\operatorname {r k} \left(d \mu_ {\boldsymbol {d}} (\theta)\right) = \sum_ {i = 1} ^ {h} \operatorname {r k} \left(W _ {> i}\right) \cdot \operatorname {r k} \left(W _ {< i}\right) - \sum_ {i = 1} ^ {h - 1} \operatorname {r k} \left(W _ {> i}\right) \cdot \operatorname {r k} \left(W _ {< i + 1}\right),
+$$
+
+where we use the convention that $W_{<1} = I_{d_0}$ , $W_{>h} = I_{d_h}$ are the identity matrices of size $d_0, d_h$ .
+
+Proof. The image of the differential $d\mu_{\pmb{d}}(\theta)$ is given in (5). Due to
+
+$$
+\begin{array}{l} \operatorname{Col}(\overline{W}) \subseteq \operatorname{Col}(W_{>1}) \subseteq \dots \subseteq \operatorname{Col}(W_{>h-1}) = \operatorname{Col}(W_{h}), \\ \operatorname{Row}(\overline{W}) \subseteq \operatorname{Row}(W_{<h}) \subseteq \dots \subseteq \operatorname{Row}(W_{<2}) = \operatorname{Row}(W_{1}), \tag{7} \\ \end{array}
+$$
+
+we can apply Proposition 15, which concludes the proof.
+
+
+
+Now we provide a proof for Theorem 4, starting from a refinement of the last statement.
+
+Proposition 16. Let $r = \min \{d_i\}$ , $\theta = (W_h, \dots, W_1)$ , $\overline{W} = \mu_d(\theta)$ , and $e = \mathrm{rk}\overline{W}$ . The image of the differential $d\mu_{\mathbf{d}}$ at $\theta$ contains the tangent space $T_{\overline{W}}\mathcal{M}_e$ of $\mathcal{M}_e$ at $\overline{W}$ . Furthermore, for every $\overline{W} \in \mathcal{M}_e \setminus \mathcal{M}_{e-1}$ there exists $\theta'$ such that $\mu_{\mathbf{d}}(\theta') = \overline{W}$ and the image of $d\mu_{\mathbf{d}}(\theta')$ is exactly $T_{\overline{W}}\mathcal{M}_e$ .
+
+Proof. Due to (7) the image (5) of $d\mu_{\pmb{d}}(\theta)$ always contains $\mathbb{R}^{d_h}\otimes \operatorname{Row}(\overline{W}) + \operatorname{Col}(\overline{W})\otimes \mathbb{R}^{d_0} = T_{\overline{W}}\mathcal{M}_e$. Furthermore, there always exists $(W_{h},\ldots ,W_{1})\in \mu_{\pmb{d}}^{-1}(\overline{W})$ such that each $W_{i}$ has rank exactly $e$ and the containments in (7) are all equalities. For example, one way to achieve this is to consider any decomposition $\overline{W} = UV^T$ where $U\in \mathbb{R}^{d_h\times e}$ and $V \in \mathbb{R}^{d_0\times e}$, and then set $W_{1} = [V|0]^{T}$, $W_{h} = [U|0]$, and $W_{i} = \begin{bmatrix} I_{e} & 0\\ 0 & 0 \end{bmatrix}$ for $2\leq i\leq h - 1$, where $I_{e}$ is the $(e\times e)$-identity matrix and the zeros fill in the dimensions $(d_i\times d_{i - 1})$ of $W_{i}$.
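The explicit factorization used in the proof of Proposition 16 is easy to reproduce numerically. The NumPy sketch below (illustrative dimensions of ours, target rank $e = 2$) checks that the recipe indeed recovers $\overline{W}$:

```python
import numpy as np

d = [4, 3, 3, 5]   # (d_0, d_1, d_2, d_h) with h = 3
e = 2              # target rank, e <= min(d)
rng = np.random.default_rng(4)
U = rng.standard_normal((d[-1], e))
V = rng.standard_normal((d[0], e))
Wbar = U @ V.T

def pad(M, shape):  # place M in the top-left corner of a zero matrix
    out = np.zeros(shape)
    out[:M.shape[0], :M.shape[1]] = M
    return out

W1 = pad(V.T, (d[1], d[0]))                                    # [V | 0]^T
Wmid = [pad(np.eye(e), (d[i + 1], d[i])) for i in range(1, len(d) - 2)]
Wh = pad(U, (d[-1], d[-2]))                                    # [U | 0]
prod = Wh
for W in reversed([W1] + Wmid):
    prod = prod @ W
print(np.allclose(prod, Wbar))  # True: the recipe recovers Wbar
```

By construction every factor has rank exactly $e$, so all containments in (7) become equalities.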
+
+The next two propositions discuss the first part of Theorem 4, which distinguishes between the filling and the non-filling case.
+
+Proposition 17. Let $r = \min \{d_i\}$ and $\theta = (W_h, \dots, W_1)$ . In the non-filling case (i.e., if $r < \min \{d_h, d_0\}$ ) we have that $\mathrm{rk}(d\mu_d(\theta)) < \dim \mathcal{M}_r$ if and only if $\mathrm{rk}(\mu_d(\theta)) < r$ .
+
+Proof. If $\mathrm{rk}(\mu_d(\theta)) = r$ , then Proposition 16 implies that the image of the differential $d\mu_{d}(\theta)$ is the whole tangent space of $\mathcal{M}_r$ at $\mu_d(\theta)$ . To prove the other direction of the assertion, we assume that $\mathrm{rk}(\mu_d(\theta)) < r$ . Since $r < \min\{d_h, d_0\}$ , there is some $i \in \{1, \dots, h-1\}$ such that $d_i = r$ . We view $\mu_d$ as the following concatenation of the matrix multiplication maps:
+
+$$
+\mathbb {R} ^ {d _ {h} \times d _ {h - 1}} \times \dots \times \mathbb {R} ^ {d _ {1} \times d _ {0}} \xrightarrow {\mu_ {i , 1}} \mathbb {R} ^ {d _ {h} \times d _ {i}} \times \mathbb {R} ^ {d _ {i} \times d _ {0}} \xrightarrow {\mu_ {i , 2}} \mathbb {R} ^ {d _ {h} \times d _ {0}}, \tag {8}
+$$
+
+where $\mu_{i,1} = \mu_{(d_h,\dots,d_i)}\times \mu_{(d_i,\dots,d_0)}$ and $\mu_{i,2} = \mu_{(d_h,d_i,d_0)}$ . Since $\mathrm{rk}(\mu_d(\theta)) < r$ , we have that $\mathrm{rk}(W_{>i}) < r$ or $\mathrm{rk}(W_{< i + 1}) < r$ . Without loss of generality, we may assume the latter. So applying Lemma 3 to $\mu_{i,2}$ and $\theta^{\prime}\coloneqq \mu_{i,1}(\theta)$ yields
+
+$$
+\begin{array}{l} \operatorname {r k} \left(d \mu_ {\boldsymbol {d}} (\theta)\right) \leq \operatorname {r k} \left(d \mu_ {i, 2} \left(\theta^ {\prime}\right)\right) = \operatorname {r k} \left(W _ {< i + 1}\right) \left(d _ {h} - \operatorname {r k} \left(W _ {> i}\right)\right) + \operatorname {r k} \left(W _ {> i}\right) d _ {0} \\ < r \left(d _ {h} - \operatorname {r k} \left(W _ {> i}\right)\right) + \operatorname {r k} \left(W _ {> i}\right) d _ {0} = \operatorname {r k} \left(W _ {> i}\right) \left(d _ {0} - r\right) + r d _ {h} \\ \leq r \left(d _ {0} - r\right) + r d _ {h} = \dim \left(\mathcal {M} _ {r}\right). \\ \end{array}
+$$
+
+Proposition 18. Let $r = \min \{d_i\}$ and $\theta = (W_h, \ldots, W_1)$. In the filling case (i.e., if $r = \min \{d_h, d_0\}$) we have that $\mathrm{rk}(d\mu_{\boldsymbol{d}}(\theta)) < d_h d_0$ if and only if there is some $i \in \{1, \ldots, h - 1\}$ with $\mathrm{rk}(W_{>i}) < d_h$ and $\mathrm{rk}(W_{<i+1}) < d_0$.
+
+Proof. First, we assume that $\mathrm{rk}(W_{>i}) < d_h$ and $\mathrm{rk}(W_{<i+1}) < d_0$ for some $i\in \{1,\ldots ,h - 1\}$. We view $\mu_{\boldsymbol{d}}$ as the concatenation of the matrix multiplication maps in (8). Applying Lemma 3 to $\mu_{i,2}$ and $\theta^{\prime} := \mu_{i,1}(\theta)$ yields
+
+$$
+\begin{array}{l} \operatorname {r k} \left(d \mu_ {\boldsymbol {d}} (\theta)\right) \leq \operatorname {r k} \left(d \mu_ {i, 2} \left(\theta^ {\prime}\right)\right) = \operatorname {r k} \left(W _ {< i + 1}\right) \left(d _ {h} - \operatorname {r k} \left(W _ {> i}\right)\right) + \operatorname {r k} \left(W _ {> i}\right) d _ {0} \\ < d _ {0} \left(d _ {h} - \operatorname {r k} \left(W _ {> i}\right)\right) + \operatorname {r k} \left(W _ {> i}\right) d _ {0} = d _ {h} d _ {0}. \\ \end{array}
+$$
+
+Secondly, we assume the contrary, i.e., that every $i \in \{1, \dots, h - 1\}$ satisfies $\operatorname{rk}(W_{>i}) = d_h$ or $\operatorname{rk}(W_{<i+1}) = d_0$. We first observe that
+
+$$
+\begin{array}{l} \operatorname{rk}(W_{>i}) = d_{h} \quad \Rightarrow \quad \forall j \geq i: \operatorname{rk}(W_{>j}) = d_{h}, \\ \operatorname{rk}\left(W_{<i+1}\right) = d_{0} \quad \Rightarrow \quad \forall j \leq i: \operatorname{rk}\left(W_{<j+1}\right) = d_{0}. \tag{9} \\ \end{array}
+$$
+
We consider the index set $\mathcal{I} \coloneqq \{i \in \{1, \dots, h - 1\} \mid \mathrm{rk}(W_{<i+1}) = d_0\}$ . If $\mathcal{I} = \emptyset$ , then $\mathrm{rk}(W_{>i}) = d_h$ for every $i \in \{1, \dots, h\}$ . So due to Lemma 3 we have
+
+$$
+\operatorname {r k} \left(d \mu_ {\boldsymbol {d}} (\theta)\right) = d _ {h} \sum_ {i = 1} ^ {h} \operatorname {r k} \left(W _ {< i}\right) - d _ {h} \sum_ {i = 1} ^ {h - 1} \operatorname {r k} \left(W _ {< i + 1}\right) = d _ {h} \operatorname {r k} \left(W _ {< 1}\right) = d _ {h} d _ {0}.
+$$
+
If $\mathcal{I} \neq \emptyset$ , we define $k := \max \mathcal{I}$ . So for every $i \in \{k + 1, \ldots, h - 1\}$ we have $\mathrm{rk}(W_{>i}) = d_h$ by our assumption. Moreover, due to (9), every $j \in \{0, \ldots, k\}$ satisfies $\mathrm{rk}(W_{<j+1}) = d_0$ . Hence, Lemma 3 yields

$$
\begin{array}{l} \operatorname {rk} \left(d \mu_ {\boldsymbol {d}} (\theta)\right) = \sum_ {j = 1} ^ {k} \operatorname {rk} (W _ {> j}) \, d _ {0} + d _ {h} d _ {0} + \sum_ {i = k + 2} ^ {h} d _ {h} \operatorname {rk} \left(W _ {< i}\right) \\ \quad - \sum_ {j = 1} ^ {k} \operatorname {rk} (W _ {> j}) \, d _ {0} - \sum_ {i = k + 1} ^ {h - 1} d _ {h} \operatorname {rk} (W _ {< i + 1}) = d _ {h} d _ {0}. \\ \end{array}
$$
+
Example 19. According to Proposition 18, the differential of the matrix multiplication map is surjective whenever $\mathrm{rk}(\overline{W}) = r$ , but also for certain $\theta$ when $\mathrm{rk}(\overline{W}) < r$ . For example, let us consider the map $\mu_{(2,2,2)}: \mathbb{R}^{2 \times 2} \times \mathbb{R}^{2 \times 2} \to \mathbb{R}^{2 \times 2}$ and the two factorizations $\theta = \left( \left[ \begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array} \right], \left[ \begin{array}{cc} 1 & 1 \\ 0 & 0 \end{array} \right] \right)$ and $\theta' = \left( \left[ \begin{array}{cc} 1 & 1 \\ 1 & 1 \end{array} \right], \left[ \begin{array}{cc} 1 & 1 \\ 0 & 0 \end{array} \right] \right)$ of the rank-one matrix $\left[ \begin{array}{cc} 1 & 1 \\ 1 & 1 \end{array} \right]$ . According to Proposition 18, the differential $d\mu_{(2,2,2)}(\theta)$ has maximal rank 4, since the first factor of $\theta$ has rank $2 = d_h$ . So it is surjective, whereas $d\mu_{(2,2,2)}(\theta')$ is not, since both factors of $\theta'$ have rank 1. In fact, by Lemma 3, we have $\mathrm{rk}(d\mu_{(2,2,2)}(\theta')) = 1 \cdot (2 - 1) + 1 \cdot 2 = 3$ .
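The ranks appearing in this example can be verified numerically. The following sketch is an illustration of ours, not part of the paper: it assembles the Jacobian of $(W_2, W_1) \mapsto W_2 W_1$ from Kronecker products via the identity $\operatorname{vec}(AXB) = (B^T \otimes A)\operatorname{vec}(X)$, and compares a factorization with a full-rank factor against one where both factors have rank 1.

```python
import numpy as np

def jacobian_mu(W2, W1):
    """Jacobian of (W2, W1) -> W2 @ W1 in vectorized (column-major) coordinates.

    vec(dW2 @ W1) = kron(W1.T, I) vec(dW2) and vec(W2 @ dW1) = kron(I, W2) vec(dW1).
    """
    dh = W2.shape[0]
    d0 = W1.shape[1]
    return np.hstack([np.kron(W1.T, np.eye(dh)), np.kron(np.eye(d0), W2)])

W1 = np.array([[1., 1.], [0., 0.]])        # rank 1
W2_full = np.array([[1., 1.], [1., 0.]])   # rank 2 = d_h
W2_low = np.array([[1., 1.], [1., 1.]])    # rank 1

# Both factorizations yield the same rank-one matrix [[1, 1], [1, 1]].
assert np.allclose(W2_full @ W1, W2_low @ W1)

r_full = np.linalg.matrix_rank(jacobian_mu(W2_full, W1))  # maximal rank 4
r_low = np.linalg.matrix_rank(jacobian_mu(W2_low, W1))    # rank 3, as in Lemma 3
print(r_full, r_low)
```

The ranks match Lemma 3: $\mathrm{rk}(W_1)(d_h - \mathrm{rk}(W_2)) + \mathrm{rk}(W_2)\, d_0$ gives $1 \cdot 0 + 2 \cdot 2 = 4$ and $1 \cdot 1 + 1 \cdot 2 = 3$, respectively.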
+
+Theorem 4. Let $r = \min \{d_i\}$ , $\theta = (W_h, \dots, W_1)$ , and $\overline{W} = \mu_d(\theta)$ .
+
- (Filling case) If $r = \min \{d_h, d_0\}$ , the differential $d\mu_{\mathbf{d}}(\theta)$ has maximal rank equal to $\dim \mathcal{M}_r = d_h d_0$ if and only if, for every $i \in \{1, 2, \dots, h - 1\}$ , either $\mathrm{rk}(W_{>i}) = d_h$ or $\mathrm{rk}(W_{<i+1}) = d_0$ .
- (Non-filling case) If $r < \min \{d_h, d_0\}$ , the differential $d\mu_{\mathbf{d}}(\theta)$ has maximal rank equal to $\dim \mathcal{M}_r = r(d_h + d_0 - r)$ if and only if $\mathrm{rk}(\overline{W}) = r$ .

# A.5 PROOF OF THEOREM 5

We first show that paths in the image of the matrix multiplication map can be lifted to paths in parameter space with prescribed endpoints.

Proposition 20. Assume that $d_i \geq \min \{d_y, d_x\}$ holds for all $i = 1, \ldots, h - 1$ , and let $f: [0,1] \to \mathbb{R}^{d_y \times d_x}$ be a continuous function. For all $\theta \in \mu_{\pmb{d}}^{-1}(f(0))$ and $\theta' \in \mu_{\pmb{d}}^{-1}(f(1))$ , there is a continuous function $F: [0,1] \to \mathbb{R}^{d_y \times d_{h-1}} \times \ldots \times \mathbb{R}^{d_1 \times d_x}$ with $F(0) = \theta$ , $F(1) = \theta'$ and $\mu_{\pmb{d}}(F(t)) = f(t)$ for all $t \in [0,1]$ .

Proof. Without loss of generality (after transposing all matrices, if necessary), we may assume that $d_y \leq d_x$ , so that $d_i \geq d_y$ for all $i = 1, \ldots, h - 1$ .
+
+We prove the assertion by induction on $h$ . For the induction beginning, we consider the cases $h = 1$ and $h = 2$ . If $h = 1$ , then $\mu_{\pmb{d}}$ is the identity and Proposition 20 is trivial. For $h = 2$ , we construct explicit lifts of the given paths. We first show that there is a path in $\mu_{\pmb{d}}^{-1}(W)$ from $\theta = (W_2, W_1)$ to some $(B_2, B_1)$ such that $B_2$ has full rank.
+
+Claim 1. Let $h = 2$ , $W \in \mathbb{R}^{d_y \times d_x}$ and $(W_2, W_1) \in \mu_d^{-1}(W)$ . Then there is $(B_2, B_1) \in \mu_d^{-1}(W)$ with $\mathrm{rk}(B_2) = d_y$ and a continuous function $g: [0,1] \to \mu_d^{-1}(W)$ such that $g(0) = (W_2, W_1)$ and $g(1) = (B_2, B_1)$ .
+
+Proof. If $\mathrm{rk}(W_2) = d_y$ , we have nothing to show. So we assume that $s \coloneqq \mathrm{rk}(W_2) < d_y$ . Without loss of generality, we may further assume that the first $s$ rows of $W_2$ have rank $s$ . As $s < d_1$ , we find a matrix $G \in \mathrm{GL}^+(d_1)$ such that $W_2 G = \left[ \begin{array}{cc} I_s & 0 \\ M & 0 \end{array} \right]$ , where $I_s \in \mathbb{R}^{s \times s}$ is the identity matrix and $M \in \mathbb{R}^{(d_y - s) \times s}$ . Since $\mathrm{GL}^+(d_1)$ is path-connected, there is a continuous function $g_1': [0,1] \to \mathrm{GL}^+(d_1)$ with $g_1'(0) = I_{d_1}$ and $g_1'(1) = G$ . Concatenation with $\mathrm{GL}(d_1) \to \mu_d^{-1}(W)$ , $H \mapsto (W_2 H, H^{-1} W_1)$ yields a continuous path $g_1$ in $\mu_d^{-1}(W)$ from $(W_2, W_1)$ to $(W_2 G, G^{-1} W_1)$ .
+
+Since $(W_{2}G)(G^{-1}W_{1}) = W$ , we see that $G^{-1}W_{1} = \left[ \begin{array}{c}W_{(s)}\\ N \end{array} \right]$ , where $W_{(s)}\in \mathbb{R}^{s\times d_x}$ is the first $s$ rows of $W$ and $N\in \mathbb{R}^{(d_y - s)\times d_x}$ . Replacing $N$ by an arbitrary matrix $N^{\prime}$ still yields that $W_{2}G\left[ \begin{array}{c}W_{(s)}\\ N^{\prime} \end{array} \right] = W$ . Hence, we find a continuous path $g_{2}$ in $\mu_{\pmb{d}}^{-1}(W)$ from $(W_{2}G,G^{-1}W_{1})$ to $(W_{2}G,B_{1}:= \left[ \begin{array}{c}W_{(s)}\\ 0 \end{array} \right])$ .
+
+Finally, we can replace the 0-columns in $W_{2}G$ by arbitrary matrices $M_1 \in \mathbb{R}^{s \times (d_1 - s)}$ and $M_2 \in \mathbb{R}^{(d_y - s) \times (d_1 - s)}$ such that $\left[ \begin{array}{cc} I_s & M_1 \\ M & M_2 \end{array} \right] B_1 = W$ still holds. In particular, we can pick $M_1 = 0$ and $M_2 = \left[ \begin{array}{cc} I_{d_y - s} & 0 \end{array} \right]$ such that $B_2 := \left[ \begin{array}{cc} I_s & 0 \\ M & M_2 \end{array} \right]$ has full rank $d_y$ , and we find a continuous path $g_3$ in $\mu_d^{-1}(W)$ from $(W_{2}G, B_{1})$ to $(B_2, B_1)$ . Putting $g_1, g_2$ and $g_3$ together yields a path $g$ as desired in Claim 1.
+
As $B_{2}$ has full rank, we find a matrix $H \in \mathrm{GL}^{+}(d_{1})$ such that $B_{2}H = [I_{d_{y}} \; 0]$ . As in the proof of Claim 1, we construct a continuous path in $\mu_{\pmb{d}}^{-1}(W)$ from $(B_{2},B_{1})$ to $(B_{2}H,H^{-1}B_{1})$ . Since $H^{-1}B_{1} = \left[ \begin{array}{l}W\\ N \end{array} \right]$ for some $N \in \mathbb{R}^{(d_1 - d_y)\times d_x}$ , we also find a continuous path in $\mu_{\pmb{d}}^{-1}(W)$ from $(B_{2}H,H^{-1}B_{1})$ to $(B_{2}H,\left[ \begin{array}{l}W\\ 0 \end{array} \right])$ . All in all, we have constructed a continuous path $F_{1}$ in $\mu_{\pmb{d}}^{-1}(W)$ from $(W_{2},W_{1})$ to $([I_{d_y} \; 0],\left[ \begin{array}{l}W\\ 0 \end{array} \right])$ . Analogously, we find a continuous path $F_{3}$ in $\mu_{\pmb{d}}^{-1}(W^{\prime})$ between $(W_2^{\prime},W_1^{\prime})$ and $([I_{d_y} \; 0],\left[ \begin{array}{l}W^{\prime}\\ 0 \end{array} \right])$ . Finally, we define $F_{2}:[0,1]\to \mathbb{R}^{d_y\times d_1}\times \mathbb{R}^{d_1\times d_x}$ , $t\mapsto ([I_{d_y} \; 0],\left[ \begin{array}{l}f(t)\\ 0 \end{array} \right])$ such that putting $F_{1},F_{2}$ and $F_{3}$ together yields a path $F$ as desired in Proposition 20.
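The common endpoint of the two constructed paths is the canonical factorization $([I_{d_y} \; 0], \left[ \begin{array}{l} W \\ 0 \end{array} \right])$, which exists whenever $d_1 \geq d_y$. A minimal numerical illustration of ours, with arbitrary dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
d_y, d_1, d_x = 3, 5, 4          # requires d_1 >= d_y

W = rng.standard_normal((d_y, d_x))

# Canonical factorization used as the common endpoint in the h = 2 case:
B2 = np.hstack([np.eye(d_y), np.zeros((d_y, d_1 - d_y))])   # [I_{d_y} 0]
B1 = np.vstack([W, np.zeros((d_1 - d_y, d_x))])             # [W; 0]

assert B2.shape == (d_y, d_1) and B1.shape == (d_1, d_x)
assert np.allclose(B2 @ B1, W)                               # a factorization of W
assert np.linalg.matrix_rank(B2) == d_y                      # first factor has full rank
```

Since this endpoint depends continuously on $W$, substituting $f(t)$ for $W$ gives exactly the middle path $F_2$ of the proof.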
+
+For the induction step, we view $\mu_d$ as the concatenation of the following two matrix multiplication maps:
+
+$$
\mathbb {R} ^ {d _ {y} \times d _ {h - 1}} \times \ldots \times \mathbb {R} ^ {d _ {1} \times d _ {x}} \xrightarrow {\mu_ {(d _ {h}, \ldots , d _ {1})} \times \mathrm {id}} \mathbb {R} ^ {d _ {y} \times d _ {1}} \times \mathbb {R} ^ {d _ {1} \times d _ {x}} \xrightarrow {\mu_ {(d _ {y}, d _ {1}, d _ {x})}} \mathbb {R} ^ {d _ {y} \times d _ {x}}.
+$$
+
We consider $\theta = (W_h, \ldots, W_1)$ and $\theta' = (W_h', \ldots, W_1')$ , as well as $W_{>1} = W_h \cdots W_2$ and $W_{>1}' = W_h' \cdots W_2'$ . Given a path $f: [0, 1] \to \mathbb{R}^{d_y \times d_x}$ from $W = W_{>1}W_1$ to $W' = W_{>1}'W_1'$ , we apply the induction beginning $(h = 2)$ to $\mu_{(d_y, d_1, d_x)}$ to get a path $F_2: [-0.5, 1.5] \to \mathbb{R}^{d_y \times d_1} \times \mathbb{R}^{d_1 \times d_x}$ such that $F_2(-0.5) = (W_{>1}, W_1)$ , $F_2(1.5) = (W_{>1}', W_1')$ , $\mu_{(d_y, d_1, d_x)}(F_2(t)) = W$ for all $t \in [-0.5, 0]$ , $\mu_{(d_y, d_1, d_x)}(F_2(t)) = W'$ for all $t \in [1, 1.5]$ and $\mu_{(d_y, d_1, d_x)}(F_2(t)) = f(t)$ for all $t \in [0, 1]$ . Now we apply the induction hypothesis on $\mu_{(d_h, \ldots, d_1)}$ and the path from $W_{>1}$ to $W_{>1}'$ given by the first factor of $F_2$ . This yields a path $F_1: [-1, 2] \to \mathbb{R}^{d_y \times d_{h-1}} \times \ldots \times \mathbb{R}^{d_2 \times d_1}$ with $F_1(-1) = (W_h, \ldots, W_2)$ , $F_1(2) = (W_h', \ldots, W_2')$ , $\mu_{(d_h, \ldots, d_1)}(F_1(t)) = W_{>1}$ for all $t \in [-1, -0.5]$ , $\mu_{(d_h, \ldots, d_1)}(F_1(t)) = W_{>1}'$ for all $t \in [1.5, 2]$ and $\mu_{(d_h, \ldots, d_1)}(F_1(t))$ is the first factor of $F_2(t)$ for all $t \in [-0.5, 1.5]$ . This allows us to define a continuous path $F: [-1, 2] \to \mathbb{R}^{d_y \times d_{h-1}} \times \ldots \times \mathbb{R}^{d_1 \times d_x}$ from $\theta$ to $\theta'$ by setting $F(t) = (F_1(t), W_1)$ for all $t \in [-1, -0.5]$ , $F(t) = (F_1(t), W_1')$ for all $t \in [1.5, 2]$ , and for all $t \in [-0.5, 1.5]$ we let $F(t)$ consist of $F_1(t)$ and the second factor of $F_2(t)$ .
+
+Corollary 21. If $b = 0$ , then $\mu_{d}^{-1}(\overline{W})$ is path-connected for each $\overline{W} \in \mathbb{R}^{d_y \times d_x}$ .
+
+Proof. Apply Proposition 20 to the constant function $f:[0,1]\to \mathbb{R}^{d_y\times d_x}$ , $t\mapsto \overline{W}$ .
+
+
+
+Now we study the case $b > 0$ . We write $0 < i_1 < \ldots < i_b < h$ for those indices $i_j$ such that $d_{i_j} = r$ . Then we view $\mu_d$ as the concatenation of the following two matrix multiplication maps:
+
+$$
+\mathbb {R} ^ {d _ {y} \times d _ {h - 1}} \times \dots \times \mathbb {R} ^ {d _ {1} \times d _ {x}} \xrightarrow {\mu_ {1}} \mathbb {R} ^ {d _ {y} \times d _ {i _ {b}}} \times \mathbb {R} ^ {d _ {i _ {b}} \times d _ {i _ {b - 1}}} \times \dots \times \mathbb {R} ^ {d _ {i _ {1}} \times d _ {x}} \xrightarrow {\mu_ {2}} \mathbb {R} ^ {d _ {y} \times d _ {x}}, \tag {10}
+$$
+
where $\mu_{1} = \mu_{(d_{h},\dots,d_{i_{b}})}\times \mu_{(d_{i_{b}},\dots,d_{i_{b - 1}})}\times \dots \times \mu_{(d_{i_{1}},\dots,d_{0})}$ and $\mu_{2} = \mu_{(d_{y},d_{i_{b}},\dots,d_{i_{1}},d_{x})}$ . Applying the path lifting property described above to the map $\mu_{1}$ , we will show in Proposition 26 that $\mu_2^{-1}(\overline{W})$ and $\mu_d^{-1}(\overline{W})$ have the same number of (path-)connected components. So it remains to study the connected components of $\mu_2^{-1}(\overline{W})$ . Concisely, we can write the map $\mu_{2}$ as
+
+$$
+\mu_ {2}: \mathbb {R} ^ {d _ {y} \times r} \times \left(\mathbb {R} ^ {r \times r}\right) ^ {b - 1} \times \mathbb {R} ^ {r \times d _ {x}} \longrightarrow \mathbb {R} ^ {d _ {y} \times d _ {x}}.
+$$
+
+In the case that $\mathrm{rk}(\overline{W}) = r$ , we use the following natural action of $\mathrm{GL}(r)^b$ on $\mu_2^{-1}(\overline{W})$ :
+
+$$
+\operatorname {G L} (r) ^ {b} \times \mu_ {2} ^ {- 1} (\bar {W}) \longrightarrow \mu_ {2} ^ {- 1} (\bar {W}),
+$$
+
+$$
+\left(\left(G _ {b}, \dots , G _ {1}\right), \left(A _ {b + 1}, \dots , A _ {1}\right)\right) \longmapsto \left(A _ {b + 1} G _ {b}, G _ {b} ^ {- 1} A _ {b} G _ {b - 1}, \dots , G _ {1} ^ {- 1} A _ {1}\right). \tag {11}
+$$
+
+In fact, we show now that $\mu_2^{-1}(\overline{W})$ is the orbit of any element under this action if $\mathrm{rk}(\overline{W}) = r$ . From this we will deduce in Corollaries 23 and 24 that $\mu_2^{-1}(\overline{W})$ is homeomorphic to $\mathrm{GL}(r)^b$ and thus has $2^b$ (path-)connected components if the matrix $\overline{W}$ has maximal rank $r$ .
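As a quick numerical sanity check (ours, not from the paper), one can verify that the action in (11) indeed maps the fiber $\mu_2^{-1}(\overline{W})$ to itself: the group factors cancel telescopically in the product.

```python
import numpy as np

rng = np.random.default_rng(0)
d_y, d_x, r, b = 5, 4, 2, 3   # b interior dimensions equal to r

# A random point theta = (A_{b+1}, ..., A_1) in the domain of mu_2,
# stored here as the list [A_1, A_2, ..., A_{b+1}].
A = [rng.standard_normal((r, d_x))]                       # A_1
A += [rng.standard_normal((r, r)) for _ in range(b - 1)]  # A_2, ..., A_b
A += [rng.standard_normal((d_y, r))]                      # A_{b+1}
W_bar = np.linalg.multi_dot(A[::-1])                      # A_{b+1} ... A_1

# A random group element (G_b, ..., G_1); Gaussian matrices are
# invertible with probability one. Apply the action (11).
G = [rng.standard_normal((r, r)) for _ in range(b)]
acted = [np.linalg.inv(G[0]) @ A[0]]                      # G_1^{-1} A_1
for j in range(1, b):
    acted.append(np.linalg.inv(G[j]) @ A[j] @ G[j - 1])   # G_{j+1}^{-1} A_{j+1} G_j
acted.append(A[b] @ G[b - 1])                             # A_{b+1} G_b

# The acted tuple factorizes the same matrix, i.e., it lies in the same fiber.
assert np.allclose(np.linalg.multi_dot(acted[::-1]), W_bar)
```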
+
+Proposition 22. Let $b > 0$ and $\theta = (A_{b+1}, \ldots, A_1) \in \mathbb{R}^{d_y \times r} \times (\mathbb{R}^{r \times r})^{b-1} \times \mathbb{R}^{r \times d_x}$ such that $\overline{W} = \mu_2(\theta)$ has maximal rank $r$ . Then $\mu_2^{-1}(\overline{W})$ is the orbit of $\theta$ under the action defined in (11), i.e.,
+
+$$
+\mu_ {2} ^ {- 1} (\overline {{W}}) = \left\{\left(A _ {b + 1} G _ {b}, G _ {b} ^ {- 1} A _ {b} G _ {b - 1}, \dots , G _ {1} ^ {- 1} A _ {1}\right) \mid G _ {1}, \dots , G _ {b} \in \operatorname {G L} (r) \right\}.
+$$
+
+Proof. One inclusion, namely “ $\supseteq$ ”, is trivial. We prove the other inclusion “ $\subseteq$ ” by induction on $b$ . For the induction beginning $(b = 1)$ , we write $\overline{W} = \left[ \begin{array}{cc} W_{11} & W_{12} \\ W_{21} & W_{22} \end{array} \right]$ where $W_{11} \in \mathbb{R}^{r \times r}$ . Without loss of generality, we may assume that $\mathrm{rk}(W_{11}) = r$ . Similarly, we write $A_2 = \left[ \begin{array}{c} A_{21} \\ A_{22} \end{array} \right]$ and $A_1 = \left[ \begin{array}{cc} A_{11} & A_{12} \end{array} \right]$ where $A_{i1} \in \mathbb{R}^{r \times r}$ for $i = 1, 2$ . For $(A_2', A_1') \in \mu_2^{-1}(\overline{W})$ , we write analogously $A_2' = \left[ \begin{array}{c} A_{21}' \\ A_{22}' \end{array} \right]$ and $A_1' = [A_{11}', A_{12}']$ . Due to $\mathrm{rk}(W_{11}) = r$ , we have that $\mathrm{rk}(A_{21}) = r = \mathrm{rk}(A_{21}')$ . Hence, there is a matrix $G \in \mathrm{GL}(r)$ such that $A_{21}' = A_{21}G$ . This implies that $A_{21}GA_{11}' = W_{11} = A_{21}A_{11}$ , so $A_{11}' = G^{-1}A_{11}$ . Due to $A_{21}A_{12} = W_{12} = A_{21}GA_{12}'$ , we get that $A_{12}' = G^{-1}A_{12}$ . Finally,
+
+$\operatorname{rk}(W_{11}) = r$ implies that $\operatorname{rk}(A_{11}) = r$ , so $A_{22}A_{11} = W_{21} = A_{22}'G^{-1}A_{11}$ shows $A_{22}' = A_{22}G$ . Thus we have shown that $A_2' = A_2G$ and $A_1' = G^{-1}A_1$ .
+
For the induction step $(b > 1)$ , we consider $(A_{b + 1}^{\prime},\ldots ,A_{1}^{\prime})\in \mu_{2}^{-1}(\overline{W})$ and set $A_{>1} = A_{b + 1}\cdots A_2$ , $A_{>1}^{\prime} = A_{b + 1}^{\prime}\cdots A_2^{\prime}\in \mathbb{R}^{d_y\times r}$ . Now we can apply the induction beginning to find $G_{1}\in \mathrm{GL}(r)$ such that $A_{>1}^{\prime} = A_{>1}G_{1}$ and $A_1^\prime = G_1^{-1}A_1$ . As $A_{>1}^{\prime}$ has rank $r$ and $A_{b + 1}^{\prime}\cdots A_{2}^{\prime} = A_{>1}^{\prime} = A_{b + 1}\cdots (A_{2}G_{1})$ , we can apply the induction hypothesis on the map which multiplies the left-most $b$ matrices. This yields $G_{b},\ldots ,G_{2}\in \mathrm{GL}(r)$ such that $A_{b + 1}^{\prime} = A_{b + 1}G_{b},\ldots ,A_{3}^{\prime} = G_{3}^{-1}A_{3}G_{2},A_{2}^{\prime} = G_{2}^{-1}(A_{2}G_{1})$ .
+
+Corollary 23. If $b > 0$ and $\overline{W} \in \mathbb{R}^{d_y \times d_x}$ has maximal rank $r$ , then $\mu_2^{-1}(\overline{W})$ is homeomorphic to $\mathrm{GL}(r)^b$ .
+
+Proof. We fix $\theta = (A_{b + 1},\ldots ,A_1)\in \mu_2^{-1}(\overline{W})$ . The map $\varphi :\mathrm{GL}(r)^b\to \mu_2^{-1}(\overline{W})$ given by the action (11) on $\theta$ is continuous. We now construct its inverse. As $\operatorname {rk}(\overline{W}) = r$ , we have that $\operatorname {rk}(A_i) = r$ for all $i = 1,\dots ,b + 1$ . Without loss of generality, we may assume that the first $r$ rows of $A_{b + 1}$ have rank $r$ . We write $\pi :\mathbb{R}^{d_y\times r}\to \mathbb{R}^{r\times r}$ for the projection which forgets the last $d_{y} - r$ rows of a given matrix. For $\theta^{\prime} = (A_{b + 1}^{\prime},\ldots ,A_{1}^{\prime})\in \mu_{2}^{-1}(\overline{W})$ , Proposition 22 shows that $\theta^{\prime} = \varphi (G_{b},\ldots ,G_{1})$ for some $(G_{b},\ldots ,G_{1})\in \mathrm{GL}(r)^{b}$ . So we have that $G_{b} = \pi (A_{b + 1})^{-1}\pi (A_{b + 1}^{\prime}),G_{b - 1} = A_{b}^{-1}G_{b}A_{b}^{\prime},\ldots ,G_{1} = A_{2}^{-1}G_{2}A_{2}^{\prime}$ , which defines a map
+
+$$
+\begin{array}{l} \psi : \mu_ {2} ^ {- 1} (\bar {W}) \longrightarrow \operatorname {G L} (r) ^ {b}, \\ \left(A _ {b + 1} ^ {\prime}, \dots , A _ {1} ^ {\prime}\right) \longmapsto \left(G (A _ {b + 1} ^ {\prime}), A _ {b} ^ {- 1} G (A _ {b + 1} ^ {\prime}) A _ {b} ^ {\prime}, \dots , A _ {2} ^ {- 1} \dots A _ {b} ^ {- 1} G (A _ {b + 1} ^ {\prime}) A _ {b} ^ {\prime} \dots A _ {2} ^ {\prime}\right), \\ \end{array}
+$$
+
+where $G(A_{b + 1}^{\prime})\coloneqq \pi (A_{b + 1})^{-1}\pi (A_{b + 1}^{\prime})$ . By construction, $\psi$ is the inverse of $\varphi$ . Since $\psi$ is continuous, it is a homeomorphism between $\mu_2^{-1}(\overline{W})$ and $\mathrm{GL}(r)^b$ .
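The explicit inverse $\psi$ can be exercised numerically: act on a fixed $\theta$ by a random group element and recover it from the formulas above. A sketch of ours (NumPy; $\pi$ is the projection onto the first $r$ rows, as in the proof, and all random matrices are generically of full rank):

```python
import numpy as np

rng = np.random.default_rng(1)
d_y, d_x, r = 4, 4, 2   # b = 2

# theta = (A_3, A_2, A_1) with every factor of full rank r (generic).
A1 = rng.standard_normal((r, d_x))
A2 = rng.standard_normal((r, r))
A3 = rng.standard_normal((d_y, r))

# theta' = phi(G_2, G_1): the action (11) applied to theta.
G1, G2 = rng.standard_normal((r, r)), rng.standard_normal((r, r))
A1p = np.linalg.inv(G1) @ A1          # G_1^{-1} A_1
A2p = np.linalg.inv(G2) @ A2 @ G1     # G_2^{-1} A_2 G_1
A3p = A3 @ G2                         # A_3 G_2

pi = lambda M: M[:r, :]               # forget the last d_y - r rows
G2_rec = np.linalg.inv(pi(A3)) @ pi(A3p)   # recovers G_2
G1_rec = np.linalg.inv(A2) @ G2_rec @ A2p  # recovers G_1

assert np.allclose(G2_rec, G2)
assert np.allclose(G1_rec, G1)
```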
+
+Corollary 24. If $b > 0$ and $\overline{W} \in \mathbb{R}^{d_y \times d_x}$ has maximal rank $r$ , then $\mu_2^{-1}(\overline{W})$ has $2^b$ connected components. Each of these components is path-connected.
+
+Proof. The group $\mathrm{GL}(r)$ has two connected components, namely $\mathrm{GL}^+(r)$ and $\mathrm{GL}^-(r)$ . Both components are path-connected. Hence, $\mathrm{GL}(r)^b$ has $2^b$ connected components, each of them path-connected. By Corollary 23, the same holds for $\mu_2^{-1}(\overline{W})$ .
+
To complete our understanding of the connected components of $\mu_2^{-1}(\overline{W})$ , we consider the case that the matrix $\overline{W} \in \mathbb{R}^{d_y \times d_x}$ does not have maximal rank $r$ . In that case, it turns out that $\mu_2^{-1}(\overline{W})$ is path-connected, which we show by constructing explicit paths between any two elements of $\mu_2^{-1}(\overline{W})$ .
+
+Proposition 25. Let $\overline{W} \in \mathbb{R}^{d_y \times d_x}$ . If $b > 0$ and $\mathrm{rk}(\overline{W}) < r$ , then $\mu_2^{-1}(\overline{W})$ is path-connected.
+
Proof. We write $\overline{W} = \left[ \begin{array}{c} \overline{W}_1 \\ \overline{W}_2 \end{array} \right]$ where $\overline{W}_1 \in \mathbb{R}^{r \times d_x}$ , and denote by $e$ the rank of $\overline{W}$ . If $\mathrm{rk}(\overline{W}_1) = e$ , then $\overline{W}_2 = M\overline{W}_1$ for some $M \in \mathbb{R}^{(d_y - r) \times r}$ .
+
+Claim 2. If $b = 1$ , $(A_2, A_1) \in \mu_2^{-1}(\overline{W})$ , $\mathrm{rk}(\overline{W}_1) = e$ and $\overline{W}_2 = M\overline{W}_1$ , then there is a continuous function $f:[0,1] \to \mu_2^{-1}(\overline{W})$ with $f(0) = (A_2, A_1)$ and $f(1) = (\left[ \begin{array}{c} I_r \\ M \end{array} \right], \overline{W}_1)$ .
+
+Proof. Since $\mathrm{rk}(\overline{W}) < r$ , we have that $\mathrm{rk}(A_2) < r$ or $\mathrm{rk}(A_1) < r$ . If $\mathrm{rk}(A_2) < r$ , we proceed as in the proof of Claim 1 to find a path in $\mu_2^{-1}(\overline{W})$ from $(A_2, A_1)$ to $(A_2', A_1')$ such that $\mathrm{rk}(A_2') = r$ .
+
+Hence, we may assume that $\mathrm{rk}(A_2) = r$ . This implies that $\mathrm{rk}(A_1) = e$ . So $K \coloneqq \ker (A_1^T) \subseteq \mathbb{R}^r$ has positive dimension $r - e$ . We write $A_2 = \left[ \begin{array}{c} A_{21} \\ A_{22} \end{array} \right]$ where $A_{21} \in \mathbb{R}^{r \times r}$ , and denote by $r_2$ the rank of $A_{21}$ . So the rowspace $R \subseteq \mathbb{R}^r$ of $A_{21}$ has dimension $r_2$ . We now show that $K + R = \mathbb{R}^r$ . To see this we set $\delta \coloneqq \dim (K \cap R)$ . Without loss of generality, we may assume that the first $e$ rows of $\overline{W}_1$ are linearly independent. Then the first $e$ rows of $A_{21}$ must also be linearly independent, so we might further assume that the first $r_2$ rows of $A_{21}$ are linearly independent. We denote by $A_{211}$ and $\overline{W}_{11}$ the matrices formed by the first $r_2$ rows of $A_{21}$ and $\overline{W}_1$ , respectively. In particular, we have
+
+that $A_{211}A_1 = \overline{W}_{11}$ . Now we choose a basis $(b_1, \ldots, b_{r_2})$ for $R$ such that $(b_1, \ldots, b_\delta)$ is a basis for $K \cap R$ . Since $R$ is the rowspace of $A_{211}$ , there is some $G \in \mathrm{GL}(r_2)$ such that the $i$ -th row of $GA_{211}$ is $b_i$ . So the first $\delta$ rows of $G\overline{W}_{11} = GA_{211}A_1$ are zero, which shows that $e = \mathrm{rk}(G\overline{W}_{11}) \leq r_2 - \delta$ . Thus, $\dim(K + R) = (r - e) + r_2 - \delta \geq r$ , which proves that $K + R = \mathbb{R}^r$ .
+
If $r_2 < r$ , we now show that there is a path in $\mu_2^{-1}(\overline{W})$ from $(A_2, A_1)$ to $(A_2'' = \left[ \begin{array}{c} A_{21}'' \\ A_{22} \end{array} \right], A_1)$ such that $\mathrm{rk}(A_{21}'') = r$ . We may assume again that the first $r_2$ rows $a_1, \ldots, a_{r_2}$ of $A_{21}$ are linearly independent, i.e., that they form a basis for $R$ . Since $K + R = \mathbb{R}^r$ , we can extend this basis to a basis $(a_1, \ldots, a_r)$ for $\mathbb{R}^r$ such that $a_i \in K$ for all $i > r_2$ . We define $A_{21}''$ such that its first $r_2$ rows are $a_1, \ldots, a_{r_2}$ and such that its $i$ -th row, for $r_2 < i \leq r$ , is the sum of $a_i \in K$ and the $i$ -th row of $A_{21}$ . Then $A_2'' = \left[ \begin{array}{c} A_{21}'' \\ A_{22} \end{array} \right]$ satisfies $A_2''A_1 = \overline{W}$ . Moreover, the straight line from $(A_2, A_1)$ to $(A_2'', A_1)$ is a path in $\mu_2^{-1}(\overline{W})$ . Since the last $r - r_2$ rows of $A_{21}$ are contained in the linear span $R$ of the first $r_2$ rows of $A_{21}$ , the multilinearity of the determinant in the rows implies that $\operatorname*{det}(A_{21}'')$ equals the determinant of the matrix with rows $a_1, \ldots, a_r$ , which is nonzero.
+
Thus, we may assume that $r_2 = r$ . If $\operatorname{det}(A_{21}) < 0$ , we now construct a path in $\mu_2^{-1}(\overline{W})$ from $(A_2, A_1)$ to $(A_2'' = \left[ \begin{array}{c} A_{21}'' \\ A_{22} \end{array} \right], A_1)$ such that $\operatorname{det}(A_{21}'') > 0$ . For this, we pick a vector $v \in K \setminus \{0\}$ . Since the rows of $A_{21}$ form a basis for $\mathbb{R}^r$ , there is an index $i \in \{1, \ldots, r\}$ such that the matrix $D \in \mathbb{R}^{r \times r}$ obtained from $A_{21}$ by replacing its $i$ -th row with $v$ has full rank. We pick $\mu \in \mathbb{R}$ such that $\operatorname{det}(A_{21}) + \mu \operatorname{det}(D) > 0$ and define $A_{21}''$ to be the matrix obtained from $A_{21}$ by adding $\mu v$ to its $i$ -th row. Then $\operatorname{det}(A_{21}'') = \operatorname{det}(A_{21}) + \mu \operatorname{det}(D) > 0$ and $A_2'' = \left[ \begin{array}{c} A_{21}'' \\ A_{22} \end{array} \right]$ satisfies $A_2''A_1 = \overline{W}$ . Moreover, the straight line from $(A_2, A_1)$ to $(A_2'', A_1)$ is a path in $\mu_2^{-1}(\overline{W})$ .
+
Therefore, we may assume that $\operatorname*{det}(A_{21}) > 0$ , so $G := A_{21}^{-1} \in \mathrm{GL}^{+}(r)$ . Any path in $\mathrm{GL}^{+}(r)$ from $I_r$ to $G$ yields a path in $\mu_2^{-1}(\overline{W})$ from $(A_2, A_1)$ to $(A_2G, G^{-1}A_1) = (\left[ \begin{array}{c} I_r \\ A_{22}G \end{array} \right], \overline{W}_1)$ . Since $(A_{22}G)\overline{W}_1 = \overline{W}_2 = M\overline{W}_1$ , the straight line from $(\left[ \begin{array}{c} I_r \\ A_{22}G \end{array} \right], \overline{W}_1)$ to $(\left[ \begin{array}{c} I_r \\ M \end{array} \right], \overline{W}_1)$ is a path in $\mu_2^{-1}(\overline{W})$ .
+
Claim 3. If $\theta = (A_{b + 1},\ldots ,A_1)\in \mu_2^{-1}(\overline{W})$ , $\mathrm{rk}(\overline{W}_1) = e$ and $\overline{W}_2 = M\overline{W}_1$ , then there is a continuous function $F:[0,1]\to \mu_2^{-1}(\overline{W})$ with $F(0) = \theta$ and $F(1) = (\left[ \begin{array}{l}I_r\\ M \end{array} \right],I_r,\dots ,I_r,\overline{W}_1)$ .
+
Proof. As $e < r$ , at least one of the $A_{i}$ has rank smaller than $r$ . We set $k := \min \{i \in \{1, \ldots, b + 1\} \mid \operatorname{rk}(A_{i}) < r\}$ . If $k < b + 1$ , we first show that there is a path in $\mu_{2}^{-1}(\overline{W})$ from $\theta$ to $(A_{b + 1}', \ldots, A_{1}')$ such that $\min \{i \in \{1, \ldots, b + 1\} \mid \operatorname{rk}(A_{i}') < r\} = b + 1$ . Since $\operatorname{rk}(A_{k}) < r$ , the rank of $\overline{W}' := A_{k + 1}A_{k}$ is smaller than $r$ . We write $\overline{W}' = [\overline{W}_{1}' \; \overline{W}_{2}']$ where $\overline{W}_{1}'$ has $r$ columns. Without loss of generality, we may assume that $\operatorname{rk}(\overline{W}_{1}') = \operatorname{rk}(\overline{W}')$ . Then there is a matrix $N$ such that $\overline{W}_{2}' = \overline{W}_{1}'N$ . Hence, we can apply the transposed version of Claim 2, which yields a path from $(A_{k + 1}, A_{k})$ to $(\overline{W}_{1}', [I_{r} \; N])$ in the set of factorizations of $\overline{W}'$ . Defining $\tilde{A}_{k + 1} := \overline{W}_{1}'$ , $\tilde{A}_{k} := [I_{r} \; N]$ and $\tilde{A}_{i} := A_{i}$ for all $i \in \{1, \ldots, b + 1\} \setminus \{k, k + 1\}$ extends this path to a path in $\mu_{2}^{-1}(\overline{W})$ from $\theta$ to $(\tilde{A}_{b + 1}, \ldots, \tilde{A}_{1})$ such that $\min \{i \in \{1, \ldots, b + 1\} \mid \operatorname{rk}(\tilde{A}_{i}) < r\} = k + 1$ . We note that this construction increased the number $k$ . So we can repeat the construction until we reach $(A_{b + 1}', \ldots, A_{1}')$ as desired.
+
Hence, we may assume that $k = b + 1$ . Since $\mathrm{rk}(A_{b + 1}) < r$ , the rank of $\overline{W}'' \coloneqq A_{b + 1}A_b$ is smaller than $r$ . We write $\overline{W}'' = \left[ \begin{array}{c} \overline{W}_1'' \\ \overline{W}_2'' \end{array} \right]$ where $\overline{W}_1''$ has $r$ rows. Since $\overline{W}_1 = \overline{W}_1''A_{b - 1}\dots A_1$ and the matrices $A_{b - 1},\ldots ,A_1$ have rank $r$ , we have that $\mathrm{rk}(\overline{W}_1'') = \mathrm{rk}(\overline{W}_1) = e$ . Analogously, $\mathrm{rk}(\overline{W}'') = e$ . So there is a matrix $M'$ such that $\overline{W}_2'' = M'\overline{W}_1''$ . Applying Claim 2 yields a path from $(A_{b + 1},A_b)$ to $(\left[ \begin{array}{c}I_r\\ M' \end{array} \right],\overline{W}_1'')$ in the set of factorizations of $\overline{W}''$ . This path can be extended to a path in $\mu_2^{-1}(\overline{W})$ from $\theta$ to $\theta^{\prime} := (\left[ \begin{array}{c}I_r\\ M' \end{array} \right],\overline{W}_1'',A_{b - 1},\ldots ,A_1)$ . Applying the same construction on $\overline{W}''' := \overline{W}_1''A_{b - 1}$ yields a path in $\mu_2^{-1}(\overline{W})$ from $\theta^{\prime}$ to $\theta'' := (\left[ \begin{array}{c}I_r\\ M' \end{array} \right],I_r,\overline{W}_1''A_{b - 1},A_{b - 2},\ldots ,A_1)$ . We repeat the construction until $(\left[ \begin{array}{c}I_r\\ M' \end{array} \right],I_r,\ldots ,I_r,\overline{W}_1)$ is reached. Since $M'\overline{W}_1 = \overline{W}_2 = M\overline{W}_1$ , the straight line from $(\left[ \begin{array}{c}I_r\\ M' \end{array} \right],I_r,\ldots ,I_r,\overline{W}_1)$ to $(\left[ \begin{array}{c}I_r\\ M \end{array} \right],I_r,\ldots ,I_r,\overline{W}_1)$ is a path in $\mu_2^{-1}(\overline{W})$ .
+
+Now we finally show that $\mu_2^{-1}(\overline{W})$ is path-connected. Without loss of generality, we may assume that $\mathrm{rk}(\overline{W}_1) = e$ , and we write $\overline{W}_2 = M\overline{W}_1$ . For $\theta, \theta' \in \mu_2^{-1}(\overline{W})$ , there are paths in $\mu_2^{-1}(\overline{W})$ from $\theta$ resp. $\theta'$ to $(\left[ \begin{array}{c} I_r \\ M \end{array} \right], I_r, \ldots, I_r, \overline{W}_1)$ , so there is a path from $\theta$ to $\theta'$ in $\mu_2^{-1}(\overline{W})$ .
+
+To settle the proof of Theorem 5, it is left to show that $\mu_2^{-1}(\overline{W})$ and $\mu_{\pmb{d}}^{-1}(\overline{W})$ have indeed the same number of (path-)connected components, as we promised earlier.
+
+Proposition 26. Let $b > 0$ and $\overline{W} \in \mathbb{R}^{d_y \times d_x}$ . Then $\mu_{\mathbf{d}}^{-1}(\overline{W})$ and $\mu_{2}^{-1}(\overline{W})$ have the same number of connected components. Moreover, each of these components is path-connected.
+
Proof. Let us denote the connected components of $\mu_2^{-1}(\overline{W})$ by $C_1, \ldots, C_k$ . By Corollary 24 and Proposition 25, each of these components is path-connected. Using the notation in (10), we have that $\mu_{\pmb{d}}^{-1}(\overline{W}) = \bigcup_{i=1}^{k} \mu_1^{-1}(C_i)$ . Since the finitely many components $C_i$ are open and closed in $\mu_2^{-1}(\overline{W})$ , the preimages $\mu_1^{-1}(C_1), \ldots, \mu_1^{-1}(C_k)$ are pairwise disjoint, open and closed in $\mu_{\pmb{d}}^{-1}(\overline{W})$ , so $\mu_{\pmb{d}}^{-1}(\overline{W})$ has at least $k$ connected components. It is left to show that each $\mu_1^{-1}(C_i)$ is path-connected. For this, let $\theta, \theta' \in \mu_1^{-1}(C_i)$ and $\sigma := \mu_1(\theta), \sigma' := \mu_1(\theta') \in C_i$ . As $C_i$ is path-connected, there is a path in $C_i$ from $\sigma$ to $\sigma'$ . The map $\mu_1$ is a direct product of $b + 1$ matrix multiplication maps. To each factor we can apply Proposition 20, which yields a path in $\mu_1^{-1}(C_i)$ from $\theta$ to $\theta'$ .
+
Corollary 27. Let $b > 0$ and $\overline{W} \in \mathbb{R}^{d_y \times d_x}$ . If $\mathrm{rk}(\overline{W}) = r$ , then $\mu_d^{-1}(\overline{W})$ has $2^b$ connected components, and each of these components is path-connected. If $\mathrm{rk}(\overline{W}) < r$ , then $\mu_d^{-1}(\overline{W})$ is path-connected.
+
+Proof. Combine Corollary 24 and Propositions 25 and 26.
+
+Proof of Theorem 5. Theorem 5 is an amalgamation of Corollaries 21 and 27.
+
+# A.6 PROOFS OF PROPOSITIONS 6, 7, 9, 10 AND 14
+
+Proposition 6. If $\theta$ is such that $d\mu_{\mathbf{d}}(\theta)$ has maximal rank (see Theorem 4), then $\theta \in \operatorname{Crit}(L)$ if and only if $\overline{W} \in \operatorname{Crit}(\ell|_{\mathcal{M}_r})$ , and $\theta$ is a minimum (resp., saddle, maximum) for $L$ if and only if $\overline{W}$ is a minimum (resp., saddle, maximum) for $\ell|_{\mathcal{M}_r}$ . If $\operatorname{rk}(\overline{W}) = r$ (which implies that $d\mu_{\mathbf{d}}(\theta)$ has maximal rank) and $\theta \in \operatorname{Crit}(L)$ , then all $\pmb{d}$ -factorizations of $\overline{W}$ also belong to $\operatorname{Crit}(L)$ .
+
+Proof. If $\mu_{d}$ is a local submersion at $\theta$ onto $\mathcal{M}_r$ , then there exists an open neighborhood $U$ of $\overline{W}$ in $\mathcal{M}_r$ and an open neighborhood $V$ of $\theta$ with the property that $\mu_{d}$ acts as a projection from $V$ onto $U$ (see, e.g., (Lee, 2003, Theorem 7.3)). From this, we easily deduce that $\theta$ is a minimum (resp. saddle, maximum) for $L$ if and only if $\overline{W} = \mu_d(\theta)$ is a minimum (resp. saddle, maximum) for $\ell|_{\mathcal{M}_r}$ . Finally, if $\mathrm{rk}(\overline{W}) = r$ , then $d\mu_{d}(\theta)$ has maximal rank for all $\theta \in \mu_{d}^{-1}(\overline{W})$ by Theorem 4.
+
+Proposition 7. If $\theta \in \operatorname{Crit}(L)$ with $\mathrm{rk}(\overline{W}) = e \leq r$ , then $\overline{W} \in \operatorname{Crit}(\ell|_{\mathcal{M}_e})$ . In other words, if $\mathrm{rk}(\overline{W}) < r$ , then $\theta \in \operatorname{Crit}(L)$ implies that $\overline{W}$ is a critical point for the restriction of $\ell$ to a smaller determinantal variety $\mathcal{M}_e$ (which is in the singular locus of the functional space $\mathcal{M}_r$ in the non-filling case).
+
Proof. According to Theorem 4, if $\mu_{\pmb{d}}(\theta) = \overline{W}$ with $\mathrm{rk}(\overline{W}) = e$ , then $\operatorname{Im}(d\mu_{\pmb{d}}(\theta)) \supseteq T_{\overline{W}}\mathcal{M}_e$ . This means that $\theta \in \operatorname{Crit}(L)$ implies that $\overline{W}$ is critical for $\ell|_{\mathcal{M}_e}$ .
+
+Proposition 9. Let $\theta \in \operatorname{Crit}(L)$ be such that $\mathrm{rk}(\overline{W}) < r$ , and assume that $d\ell(\overline{W}) \neq 0$ . Then, for any neighborhood $U$ of $\theta$ , there exists $\theta'$ in $U$ such that $\mu_{d}(\theta') = \overline{W}$ but $\theta' \notin \operatorname{Crit}(L)$ . In particular, $\theta$ is a saddle point.
+
+Proof. Our proof is a modification of an argument used in Zhang (2019). Let us first consider the case that $\mu_d$ is filling, so $r = \min\{d_h, d_0\}$ . Without loss of generality, we assume $r = d_0$ . Recall that the image of $d\mu_d(\theta)$ is given by
+
+$$
+\mathbb {R} ^ {d _ {h}} \otimes R o w (W _ {< h}) + \dots + C o l (W _ {> i}) \otimes R o w (W _ {< i}) + \dots + C o l (W _ {> 1}) \otimes \mathbb {R} ^ {d _ {0}}.
+$$
+
We first note that $\operatorname{Row}(W_{<1}) = \mathbb{R}^{d_0}$ , whereas $\operatorname{Row}(\overline{W}) \subsetneq \mathbb{R}^{d_0}$ because $\operatorname{rk}(\overline{W}) < r = d_0$ . Hence, there is a maximal index $i$ such that $\operatorname{Row}(W_{<i}) = \mathbb{R}^{d_0}$ ; note that $i < h$ , since $\operatorname{Row}(W_{<h}) = \mathbb{R}^{d_0}$ together with $\theta \in \operatorname{Crit}(L)$ would imply $d\ell(\overline{W})(\mathbb{R}^{d_h} \otimes \mathbb{R}^{d_0}) = 0$ , contradicting $d\ell(\overline{W}) \neq 0$ . Since $\theta \in \operatorname{Crit}(L)$ , the differential $d\ell(\overline{W})$ vanishes on the image of $d\mu_{\pmb{d}}(\theta)$ , so in particular

$$
d \ell (\bar {W}) \left(\operatorname {Col} \left(W _ {> i}\right) \otimes \operatorname {Row} \left(W _ {< i}\right)\right) = d \ell (\bar {W}) \left(\operatorname {Col} \left(W _ {> i}\right) \otimes \mathbb {R} ^ {d _ {0}}\right) = 0. \tag {12}
$$
+
Since $\operatorname{Row}(W_{<i+1}) \subsetneq \mathbb{R}^{d_0}$ by the maximality of $i$ , we have $\operatorname{rk}(W_{<i+1}) < d_0 \leq d_i$ , so we may pick some $w_i \in \mathbb{R}^{d_i}$ with $w_i \neq 0$ and $w_i^T W_i \ldots W_1 = 0$ . We fix $\epsilon > 0$ and $v_{i+1} \in \mathbb{R}^{d_{i+1}}$ arbitrarily, and define $\tilde{W}_{i+1} = W_{i+1} + \epsilon (v_{i+1} \otimes w_i)$ . Clearly, we have that $\tilde{W}_{i+1} W_i \ldots W_1 = W_{i+1} W_i \ldots W_1$ . If $(W_h, \ldots, \tilde{W}_{i+1}, W_i, \ldots, W_1)$ is also a critical point of $L$ , then
+
+$$
d \ell (\bar {W}) \left(\operatorname {Col} \left(W _ {h} \cdots \tilde {W} _ {i + 1}\right) \otimes \mathbb {R} ^ {d _ {0}}\right) = 0. \tag {13}
+$$
+
+Combining (12) and (13), we have that
+
+$$
d \ell (\overline {{W}}) \left(\operatorname {Col} \left(W _ {h} \cdots W _ {i + 2} (v _ {i + 1} \otimes w _ {i})\right) \otimes \mathbb {R} ^ {d _ {0}}\right) = 0.
+$$
+
+If this were true for all $v_{i+1}$ , then it would imply
+
+$$
d \ell (\bar {W}) \left(\operatorname {Col} \left(W _ {h} \cdots W _ {i + 2}\right) \otimes \mathbb {R} ^ {d _ {0}}\right) = 0. \tag {14}
+$$
+
+Hence, we have either found an arbitrarily small perturbation $\theta'$ of $\theta$ as required in Proposition 9, or (14) must hold. In the latter case, we reapply the same argument for $\tilde{W}_{i+2} = W_{i+2} + \epsilon(v_{i+2} \otimes w_{i+1})$ where $w_{i+1} \neq 0$ and $w_{i+1}^T W_{i+1} \ldots W_1 = 0$ . Again, we can either construct an arbitrarily small perturbation $\theta'$ of $\theta$ as required in Proposition 9, or we have $d\ell(\overline{W})(\text{Col}(W_{>i+2}) \otimes \mathbb{R}^{d_0}) = 0$ . Proceeding this way we eventually arrive at $d\ell(\overline{W})(\mathbb{R}^{d_h} \otimes \mathbb{R}^{d_0}) = 0$ so $d\ell(\overline{W}) = 0$ , which contradicts the hypothesis. Thus, at some point we must find an arbitrarily small perturbation $\theta'$ of $\theta$ as required in Proposition 9, which concludes the proof in the case that $\mu_d$ is filling.
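The invariance exploited repeatedly in this argument — adding a multiple of $v \otimes w$ to a factor, where $w$ lies in the left kernel of the product of the factors below it, does not change the end-to-end matrix — can be checked numerically. A small sketch of ours (random matrices, NumPy; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
d0, d1, d2, d3 = 3, 3, 4, 4     # W1: d1 x d0, W2: d2 x d1, W3: d3 x d2

# Make the partial product W2 @ W1 rank-deficient so a left kernel exists.
W1 = rng.standard_normal((d1, d0))
W2 = rng.standard_normal((d2, 2)) @ rng.standard_normal((2, d1))  # rank 2 < d2
W3 = rng.standard_normal((d3, d2))

# w in the left kernel of W2 @ W1, i.e., w^T (W2 @ W1) = 0,
# taken as a left-singular vector for a zero singular value.
U, s, Vt = np.linalg.svd(W2 @ W1)
w = U[:, -1]
assert np.allclose(w @ (W2 @ W1), 0)

# Perturb W3 by eps * (v ⊗ w): the end-to-end product is unchanged.
eps, v = 0.1, rng.standard_normal(d3)
W3_tilde = W3 + eps * np.outer(v, w)
assert np.allclose(W3_tilde @ W2 @ W1, W3 @ W2 @ W1)
```

The perturbed tuple is therefore again a factorization of the same $\overline{W}$, which is exactly how the candidate perturbations $\theta'$ in the proof are produced.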
+
+We now consider the case that $\mu_{\pmb{d}}$ is not filling. We pick $i \in \{1, \dots, h - 1\}$ such that $d_i = r$ , and write for simplicity $A = W_h \dots W_{i+1}$ and $B = W_i \dots W_1$ . The assumption $\mathrm{rk}(\overline{W}) < r$ implies that $\mathrm{rk}(A) < r$ or $\mathrm{rk}(B) < r$ , and we assume without loss of generality that $\mathrm{rk}(A) < r$ . We define the map $L_B(W_h', \dots, W_{i+1}') = \ell(W_h' \dots W_{i+1}' B)$ . We also introduce the map $\ell_B(A') = \ell(A'B)$ and the matrix multiplication map $\mu_{\pmb{d}_A}$ where $\pmb{d}_A = (d_h, \dots, d_{i+1})$ , so that $L_B = \ell_B \circ \mu_{\pmb{d}_A}$ . If $\theta$ is a critical point for $L$ , then $\theta_A = (W_h, \dots, W_{i+1})$ must be a critical point for $L_B$ . We are thus in a position to apply the analysis carried out in the filling case. In particular, we have that either $\theta_A$ can be perturbed to $\tilde{\theta}_A$ such that $\mu_{\pmb{d}_A}(\theta_A) = \mu_{\pmb{d}_A}(\tilde{\theta}_A)$ but $\tilde{\theta}_A \notin \text{Crit}(L_B)$ , or $d\ell_B(A) = 0$ . In the former case, we have that $\theta' = (\tilde{\theta}_A, \theta_B)$ is not a critical point for $L$ , and we are done. If instead $d\ell_B(A) = 0$ , then we have that
+
+$$
+d \ell (\overline {W}) (\mathbb {R} ^ {d _ {h}} \otimes \operatorname {Row} (B)) = 0,
+$$
+
+because the image of the differential of the map $A' \mapsto A'B$ is given by $\mathbb{R}^{d_h} \otimes \operatorname{Row}(B)$ . We now proceed in a similar manner as before. We have that $\operatorname{Col}(W_{>i}) \subsetneq \mathbb{R}^{d_h}$ , because we assumed that $W_{>i} = A$ had rank less than $r \leq d_h$ . Thus, we may find $w_{i+1} \in \mathbb{R}^{d_i}$ , $w_{i+1} \neq 0$ such that $W_h \ldots W_{i+1} w_{i+1} = 0$ . We fix $\epsilon > 0$ and $v_i$ arbitrarily, and define $\tilde{W}_i = W_i + \epsilon (w_{i+1} \otimes v_i)$ . We have that $W_h \ldots W_{i+1} \tilde{W}_i = W_h \ldots W_{i+1} W_i$ . If for all choices of $v_i$ we have that $(W_h, \ldots, \tilde{W}_i, \ldots, W_1)$ is still a critical point for $L$ , then we can deduce that
+
+$$
+d \ell (\bar {W}) (\mathbb {R} ^ {d _ {h}} \otimes \operatorname {Row} (W _ {i - 1} \dots W _ {1})) = 0.
+$$
+
+Repeating this reasoning, we obtain our result as before.
+
+
+
+Proposition 10. Let $\ell$ be any smooth convex function, and let $L = \ell \circ \mu_{\mathbf{d}}$ . If $\theta$ is a non-global local minimum for $L$ , then necessarily $\mathrm{rk}(\overline{W}) = r$ (so $\theta$ is a pure critical point). In particular, $L$ has non-global minima if and only if $\ell|_{\mathcal{M}_r}$ has non-global minima.
+
+Proof. The first statement follows immediately from Proposition 9: if $\theta \in \operatorname{Crit}(L)$ is a non-global local minimum, then necessarily $d\ell(\overline{W}) \neq 0$ , and we conclude that $\mathrm{rk}(\overline{W}) = r$ . For the second statement, we observe that if $\ell$ is a convex function, then $\theta$ is a local minimum for $L$ if and only if $\overline{W} = \mu_d(\theta)$ is a local minimum for $\ell|_{\mathcal{M}_{r}}$ . Indeed, if $\overline{W} = \mu_d(\theta)$ is a local minimum for $\ell|_{\mathcal{M}_{r}}$ , then it is always true that any $\theta \in \mu_d^{-1}(\overline{W})$ is a local minimum. Conversely, if $\theta$ is a local minimum,
+
+then from Proposition 9 we see that either $d\ell(\overline{W}) = 0$ , in which case $\overline{W}$ is a (global) minimum because $\ell$ is convex, or $\overline{W}$ must have maximal rank. In the latter case, $d\mu_{d}(\theta)$ would be surjective (by Theorem 4), so $\overline{W}$ would also be a local minimum for $\ell|_{\mathcal{M}_r}$ (see Proposition 6). Finally, it is clear that $\theta$ is also a global minimum for $L$ if and only if $\overline{W}$ is a global minimum for $\ell|_{\mathcal{M}_r}$ .
+
+Proposition 14. If $L' = \ell \circ \nu_d$ and $L = \ell \circ \mu_d$ , then the critical locus $\text{Crit}(L')$ is in "correspondence" with $\text{Crit}(L) \cap \Omega$ , meaning that
+
+$$
+\{\nu_ {\boldsymbol {d}} (\theta^ {\prime}) \mid \theta^ {\prime} \in \operatorname {Crit} (L ^ {\prime}) \} = \{\mu_ {\boldsymbol {d}} (\theta) \mid \theta \in \operatorname {Crit} (L) \cap \Omega \}.
+$$
+
+Proof. Let us define
+
+$$
+p: \Omega \to \mathbb {R} ^ {d _ {\theta}}, (W _ {h}, \dots , W _ {1}) \mapsto \left(W _ {h}, \frac {W _ {h - 1}}{\| W _ {h - 1} \|}, \dots , \frac {W _ {1}}{\| W _ {1} \|}\right),
+$$
+
+$$
+q: \Omega \to \mathbb {R} ^ {d _ {\theta}}, (W _ {h}, \dots , W _ {1}) \mapsto \left(W _ {h} \| W _ {h - 1} \| \dots \| W _ {1} \|, \frac {W _ {h - 1}}{\| W _ {h - 1} \|}, \dots , \frac {W _ {1}}{\| W _ {1} \|}\right).
+$$
+
+The image of both of these maps is $N = \{(W_{h},\ldots ,W_{1})|\| W_{i}\| = 1,i = 1,\ldots ,h - 1\}$ . In fact, both maps are submersions onto $N$ . Since $\nu_{d} = \mu_{d}\circ p$ and $\mu_d\circ q = \mu_d|_{\Omega} = \nu_d\circ q$ , it is enough to show the following two assertions: 1) $\theta^{\prime}\in Crit(L^{\prime})$ if and only if $p(\theta^{\prime})\in Crit(L)$ ; and 2) $\theta \in Crit(L)\cap \Omega$ if and only if $q(\theta)\in Crit(L^{\prime})$ .
+
+For 1), we deduce from $L' = L \circ p$ that $dL'(\theta') = dL(p(\theta')) \circ dp(\theta') = 0$ if $dL(p(\theta')) = 0$ , but this also holds conversely: if $dL'(\theta') = 0$ , then $Im(dp(\theta'))$ is contained in $Ker(dL(p(\theta')))$ . Since $q \circ p = p$ and both maps $p$ and $q$ are submersions, we have that $Im(dp(\theta')) = T_{p(\theta')} N = Im(dq(p(\theta')))$ . Now it follows from $L \circ q = L|_{\Omega}$ that $dL(p(\theta')) = dL(p(\theta')) \circ dq(p(\theta')) = 0$ . For 2), we can argue analogously, exchanging the roles of $L$ and $L'$ as well as $p$ and $q$ .
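The identities used above, namely $\mu_{\boldsymbol d} \circ q = \mu_{\boldsymbol d}|_{\Omega}$ and the idempotence of $q$ on its image $N$, are easy to check numerically. The following sketch (not part of the paper; the dimensions, seed, and NumPy dependency are illustrative assumptions) verifies them for a random three-layer factorization:

```python
import numpy as np

rng = np.random.default_rng(1)
dims = [2, 3, 3, 2]                  # (d_0, d_1, d_2, d_h) for h = 3 layers
Ws = [rng.standard_normal((dims[k + 1], dims[k])) for k in range(3)]  # (W1, W2, W3)

def mu(ws):
    # matrix multiplication map: W_h ... W_2 W_1
    out = ws[0]
    for W in ws[1:]:
        out = W @ out
    return out

def q(ws):
    # normalize W_1, ..., W_{h-1} and absorb their norms into W_h
    *lower, Wh = ws
    scale = np.prod([np.linalg.norm(W) for W in lower])
    return [W / np.linalg.norm(W) for W in lower] + [Wh * scale]

print(bool(np.allclose(mu(q(Ws)), mu(Ws))))                      # mu_d o q = mu_d on Omega
print(all(np.allclose(X, Y) for X, Y in zip(q(q(Ws)), q(Ws))))   # q is idempotent on N
```

Both checks pass because the norms removed from the lower factors are exactly the scalar absorbed into $W_h$, leaving the product unchanged.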
+
+# A.7 PROOF OF THEOREM 12
+
+We consider a fixed matrix $Q_0 \in \mathbb{R}^{d_y \times d_x}$ and a singular value decomposition (SVD) $Q_0 = U\Sigma V^T$ . Here $U \in \mathbb{R}^{d_y \times d_y}$ and $V \in \mathbb{R}^{d_x \times d_x}$ are orthogonal and $\Sigma \in \mathbb{R}^{d_y \times d_x}$ is diagonal with decreasing diagonal entries $\sigma_1, \sigma_2, \ldots, \sigma_m$ where $m = \min(d_x, d_y)$ . We also write shortly $[m] = \{1, 2, \ldots, m\}$ and denote by $[m]_r$ the set of all subsets of $[m]$ of cardinality $r$ . For $\mathcal{I} \in [m]_r$ , we define $\Sigma_{\mathcal{I}} \in \mathbb{R}^{d_y \times d_x}$ to be the diagonal matrix with entries $\sigma_{\mathcal{I},1}, \sigma_{\mathcal{I},2}, \ldots, \sigma_{\mathcal{I},m}$ where $\sigma_{\mathcal{I},i} = \sigma_i$ if $i \in \mathcal{I}$ and $\sigma_{\mathcal{I},i} = 0$ otherwise. These matrices yield the critical points of the function $h_{Q_0}(P) = \| P - Q_0\|^2$ restricted to the determinantal variety $\mathcal{M}_r$ .
+
+Theorem 28. If $Q_0 \notin \mathcal{M}_r$ , the critical points of $h_{Q_0}|_{\mathcal{M}_r}$ are all matrices of the form $U\Sigma_{\mathcal{I}}V^T$ where $Q_0 = U\Sigma V^T$ is an SVD and $\mathcal{I} \in [m]_r$ . The local minima are the critical points with $\mathcal{I} = [r]$ . They are all global minima.
+
+Proof. A matrix $P \in \mathcal{M}_r$ is a critical point if and only if $(Q_0 - P) \in T_P \mathcal{M}_r^\perp = \text{Col}(P)^\perp \otimes \text{Row}(P)^\perp$ . If $P = \sum_{i=1}^{r} \sigma_i'(u_i' \otimes v_i')$ and $Q_0 - P = \sum_{j=1}^{e} \sigma_j''(u_j'' \otimes v_j'')$ are SVD decompositions with $\sigma_i' \neq 0$ and $\sigma_j'' \neq 0$ , the column spaces of $P$ and $Q_0 - P$ are spanned by the $u_i'$ and $u_j''$ , respectively. Similarly, the row spaces of $P$ and $Q_0 - P$ are spanned by the $v_i'$ and $v_j''$ , respectively. So $P$ is a critical point if and only if the vectors $u_i', u_j''$ and $v_i', v_j''$ are orthonormal, i.e., if
+
+$$
+Q _ {0} = P + \left(Q _ {0} - P\right) = \sum_ {i = 1} ^ {r} \sigma_ {i} ^ {\prime} \left(u _ {i} ^ {\prime} \otimes v _ {i} ^ {\prime}\right) + \sum_ {j = 1} ^ {e} \sigma_ {j} ^ {\prime \prime} \left(u _ {j} ^ {\prime \prime} \otimes v _ {j} ^ {\prime \prime}\right)
+$$
+
+is an SVD of $Q_0$ . This proves that the critical points are of the form $U\Sigma_{\mathcal{I}}V^T$ where $Q_0 = U\Sigma V^T$ is an SVD and $\mathcal{I} \in [m]_r$ .
+
+Since $h_{Q_0}(U\Sigma_{\mathcal{I}}V^T) = \| U\Sigma_{[m]\setminus \mathcal{I}}V^T\| ^2 = \| \Sigma_{[m]\setminus \mathcal{I}}\| ^2 = \sum_{i\notin \mathcal{I}}\sigma_i^2$ , we see that the global minima are exactly the critical points selecting $r$ of the largest singular values of $Q_{0}$ , i.e., with $\mathcal{I} = [r]$ . It is left to show that there are no other local minima. For this, we consider a critical point $P = U\Sigma_{\mathcal{I}}V^T$ such that at least one selected singular value $\sigma_{i}$ for $i\in \mathcal{I}$ is strictly smaller than $\sigma_r$ . We will show now that $P$ cannot be a local minimum. Since $\sigma_{i} < \sigma_{r}$ , there is some $j\in [r]$ such that $j\notin \mathcal{I}$ . As
+
+above, we write $u_{k}$ and $v_{k}$ for the columns of $U$ and $V$ such that $Q_{0} = \sum_{k=1}^{m} \sigma_{k}(u_{k} \otimes v_{k})$ and $P = \sum_{k \in \mathcal{I}} \sigma_{k}(u_{k} \otimes v_{k})$ . We consider rotations in the planes spanned by $u_{i}, u_{j}$ and $v_{i}, v_{j}$ , respectively: for $\alpha \in [0, \frac{\pi}{2}]$ , we set $u^{(\alpha)} = \cos(\alpha) u_{i} + \sin(\alpha) u_{j}$ and $v^{(\alpha)} = \cos(\alpha) v_{i} + \sin(\alpha) v_{j}$ . Note that $u^{(0)} = u_{i}$ and $u^{(\frac{\pi}{2})} = u_{j}$ ; analogously for $v^{(\alpha)}$ . Next we define $\sigma^{(\alpha)} = \cos^2(\alpha) \sigma_{i} + \sin^2(\alpha) \sigma_{j}$ and
+
+$$
+P _ {\alpha} = \sum_ {k \in \mathcal {I} \backslash \{i \}} \sigma_ {k} (u _ {k} \otimes v _ {k}) + \sigma^ {(\alpha)} \left(u ^ {(\alpha)} \otimes v ^ {(\alpha)}\right) \in \mathcal {M} _ {r}.
+$$
+
+We note that $P_0 = P$ and $P_{\frac{\pi}{2}} = U\Sigma_{(\mathcal{I}\backslash \{i\}) \cup \{j\}}V^T$ are both critical points of $h_{Q_0}|_{\mathcal{M}_r}$ .
+
+It remains to show that $h_{Q_0}(P_\alpha)$ as a function in $\alpha$ is strictly decreasing on the interval $[0, \frac{\pi}{2}]$ . From
+
+$$
+h _ {Q _ {0}} (P _ {\alpha}) = \left\| \sum_ {k \notin \mathcal {I}} \sigma_ {k} (u _ {k} \otimes v _ {k}) + \sigma_ {i} (u _ {i} \otimes v _ {i}) - \sigma^ {(\alpha)} (u ^ {(\alpha)} \otimes v ^ {(\alpha)}) \right\| ^ {2}
+$$
+
+and $u^{(\alpha)}\otimes v^{(\alpha)} = \cos^2 (\alpha)(u_i\otimes v_i) + \cos (\alpha)\sin (\alpha)(u_i\otimes v_j + u_j\otimes v_i) + \sin^2 (\alpha)(u_j\otimes v_j)$ , we deduce that
+
+$$
+\begin{array}{l} h _ {Q _ {0}} (P _ {\alpha}) = \sum_ {k \notin \mathcal {I}, k \neq j} \sigma_ {k} ^ {2} + \left(\sigma_ {i} - \sigma^ {(\alpha)} \cos^ {2} (\alpha)\right) ^ {2} + 2 \left(\sigma^ {(\alpha)} \cos (\alpha) \sin (\alpha)\right) ^ {2} + \left(\sigma_ {j} - \sigma^ {(\alpha)} \sin^ {2} (\alpha)\right) ^ {2} \\ = \sum_ {k \notin \mathcal {I}, k \neq j} \sigma_ {k} ^ {2} + \sigma_ {i} ^ {2} + 2 \sigma_ {j} (\sigma_ {j} - \sigma_ {i}) \cos^ {2} (\alpha) - (\sigma_ {j} - \sigma_ {i}) ^ {2} \cos^ {4} (\alpha). \\ \end{array}
+$$
+
+The graph of the function $f(x) = \sigma_i^2 + 2\sigma_j(\sigma_j - \sigma_i)x - (\sigma_j - \sigma_i)^2x^2$ , for $x \in \mathbb{R}$ , is a parabola with a unique local and global maximum at $x_0 = \frac{\sigma_j}{\sigma_j - \sigma_i}$ . Since $x_0 \geq 1$ , the function $f$ is strictly increasing on the interval [0, 1]. Hence, $h_{Q_0}(P_\alpha) = \sum_{k \notin \mathcal{I}, k \neq j} \sigma_k^2 + f(\cos^2(\alpha))$ is strictly decreasing on $[0, \frac{\pi}{2}]$ , which concludes the proof.
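The strict decrease of $h_{Q_0}(P_\alpha)$ along the rotation path can be confirmed numerically. The sketch below is illustrative and not from the paper: it assumes NumPy and a concrete diagonal $Q_0$ with singular values $3 > 2 > 1$, takes $r = 1$ with $\mathcal{I} = \{2\}$ (so $\sigma_i = 2$ is rotated toward $\sigma_j = 3$), and samples $h_{Q_0}(P_\alpha)$ on $[0, \pi/2]$:

```python
import numpy as np

# Q0 diagonal with singular values 3 > 2 > 1; I = {2} selects sigma_2,
# which is rotated toward the larger value sigma_1 (i = 2, j = 1 in the text).
sigma = np.array([3.0, 2.0, 1.0])
Q0 = np.diag(sigma)
e = np.eye(3)
i, j = 1, 0                                # 0-based positions of sigma_2, sigma_1

def P(alpha):
    u = np.cos(alpha) * e[:, i] + np.sin(alpha) * e[:, j]
    v = np.cos(alpha) * e[:, i] + np.sin(alpha) * e[:, j]
    s = np.cos(alpha) ** 2 * sigma[i] + np.sin(alpha) ** 2 * sigma[j]
    return s * np.outer(u, v)              # rank-1 path P_alpha in M_1

alphas = np.linspace(0.0, np.pi / 2, 50)
h_vals = [np.sum((P(a) - Q0) ** 2) for a in alphas]
print(h_vals[0], h_vals[-1])               # 10.0 and 5.0, up to rounding
print(bool(np.all(np.diff(h_vals) < 0)))   # strictly decreasing along the path
```

The endpoint values match the closed form $\sum_{k \notin \mathcal{I}} \sigma_k^2$: dropping $\sigma_2$ costs $9 + 1 = 10$, while the rotated endpoint selecting $\sigma_1$ costs $4 + 1 = 5$.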
+
+If the singular values of $Q_{0}$ are pairwise distinct and positive, the singular vectors of $Q_{0}$ are unique up to sign. So for each index set $\mathcal{I} \in [m]_r$ the matrix $Q_{\mathcal{I}} = U\Sigma_{\mathcal{I}}V^T$ is the unique critical point of $h_{Q_0}|_{\mathcal{M}_r}$ whose singular values are the $\sigma_{i}$ for $i \in \mathcal{I}$ . Hence, Theorem 28 implies immediately the following:
+
+Corollary 29. If the singular values of $Q_0$ are pairwise distinct and positive, $h_{Q_0}|_{\mathcal{M}_r}$ has exactly $\binom{m}{r}$ critical points, namely the $Q_{\mathcal{I}} = U\Sigma_{\mathcal{I}}V^T$ for $\mathcal{I} \in [m]_r$ . Moreover, its unique local and global minimum is $Q_{[r]}$ .
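Corollary 29 is easy to spot-check on a random instance. The following sketch (NumPy assumed; sizes and the seed are arbitrary choices, not from the paper) enumerates all $\binom{m}{r}$ candidates $Q_{\mathcal{I}}$ and confirms that the distance to $Q_0$ is minimized by $\mathcal{I} = [r]$, i.e., by the truncated SVD:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
d_y, d_x, r = 3, 4, 2
Q0 = rng.standard_normal((d_y, d_x))
U, s, Vt = np.linalg.svd(Q0)       # s is sorted decreasingly; distinct a.s.
m = min(d_y, d_x)

def Q_of(I):
    # candidate critical point U Sigma_I V^T keeping the singular values in I
    S = np.zeros((d_y, d_x))
    for k in I:
        S[k, k] = s[k]
    return U @ S @ Vt

dists = {I: np.sum((Q_of(I) - Q0) ** 2) for I in combinations(range(m), r)}
best = min(dists, key=dists.get)
print(len(dists), best)            # C(3, 2) = 3 candidates; best is (0, 1) = [r]
```

Since $h_{Q_0}(Q_{\mathcal{I}}) = \sum_{k \notin \mathcal{I}} \sigma_k^2$, the minimizer necessarily drops the smallest singular value.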
+
+We can strengthen this result by explicitly calculating the index of each critical point, i.e., the number of negative eigenvalues of the Hessian matrix.
+
+Theorem 30. If the singular values of $Q_0$ are pairwise distinct and positive, the index of $Q_{\mathcal{I}}$ as a critical point of $h_{Q_0}|_{\mathcal{M}_r}$ is
+
+$$
+\operatorname {index} \left(Q _ {\mathcal {I}}\right) = \# \left\{\left(j, i\right) \in \mathcal {I} \times \left([ m ] \backslash \mathcal {I}\right) \mid j > i \right\}.
+$$
+
+To prove this assertion, we may assume without loss of generality that $d_y \leq d_x$ , so $m = d_y$ . We may further assume that $Q_0$ is a diagonal matrix, so $Q_0 = \Sigma$ . Let $\mu_{(d_y,r,d_x)}: \mathbb{R}^{d_y \times r} \times \mathbb{R}^{r \times d_x} \to \mathbb{R}^{d_y \times d_x}$ be the matrix multiplication map, and $L = h_\Sigma \circ \mu_{(d_y,r,d_x)}$ . For $(A,B) \in \mu_{(d_y,r,d_x)}^{-1}(\Sigma_{\mathcal{I}})$ , Theorem 4 implies that the condition $\Sigma_{\mathcal{I}} \in \text{Crit}(h_\Sigma|_{\mathcal{M}_r})$ is equivalent to $dL(A,B) = 0$ . Moreover, the number of negative eigenvalues of the Hessian of $L$ at any such factorization $(A,B)$ of $\Sigma_{\mathcal{I}}$ is the same. This number is the index of $\Sigma_{\mathcal{I}}$ . So we can compute it by fixing one specific factorization $(A,B)$ of $\Sigma_{\mathcal{I}}$ .
+
+To compute the Hessian of $L$ at $(A, B)$ , we compute the partial derivatives of first and second order of $L$ :
+
+$$
+\frac {\partial L}{\partial a _ {i j}} = 2 \left[ (A B - \Sigma) B ^ {T} \right] _ {i j}, \quad \frac {\partial L}{\partial b _ {i j}} = 2 \left[ A ^ {T} (A B - \Sigma) \right] _ {i j},
+$$
+
+$$
+\frac {\partial^ {2} L}{\partial a _ {i j} \partial a _ {k l}} = \left\{ \begin{array}{l l} 0 & \text {if } i \neq k \\ 2 \left[ B B ^ {T} \right] _ {j l} & \text {if } i = k \end{array} \right., \tag {15}
+$$
+
+$$
+\frac {\partial^ {2} L}{\partial b _ {i j} \partial b _ {k l}} = \left\{ \begin{array}{l l} 0 & \text {if } j \neq l \\ 2 \left[ A ^ {T} A \right] _ {i k} & \text {if } j = l \end{array} \right., \tag {16}
+$$
+
+$$
+\frac {\partial^ {2} L}{\partial a _ {i j} \partial b _ {k l}} = \left\{ \begin{array}{l l} 2 a _ {i k} b _ {j l} & \text {if } j \neq k \\ 2 \left(a _ {i k} b _ {j l} + [ A B - \Sigma ] _ {i l}\right) & \text {if } j = k \end{array} \right.. \tag {17}
+$$
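The first-order formulas above can be validated against finite differences. The sketch below (illustrative dimensions and seed; NumPy assumed, not part of the paper) compares the closed forms $\partial L / \partial A = 2(AB - \Sigma)B^T$ and $\partial L / \partial B = 2A^T(AB - \Sigma)$ with numerical gradients at a random point:

```python
import numpy as np

rng = np.random.default_rng(2)
d_y, d_x, r = 3, 4, 2
Sigma = np.zeros((d_y, d_x)); np.fill_diagonal(Sigma, [3.0, 2.0, 1.0])
A = rng.standard_normal((d_y, r))
B = rng.standard_normal((r, d_x))

L = lambda A, B: np.sum((A @ B - Sigma) ** 2)
grad_A = 2.0 * (A @ B - Sigma) @ B.T           # closed-form dL/dA
grad_B = 2.0 * A.T @ (A @ B - Sigma)           # closed-form dL/dB

# Central finite differences, entry by entry.
h = 1e-6
num_A = np.zeros_like(A)
for i in range(d_y):
    for j in range(r):
        Ap, Am = A.copy(), A.copy()
        Ap[i, j] += h; Am[i, j] -= h
        num_A[i, j] = (L(Ap, B) - L(Am, B)) / (2 * h)
num_B = np.zeros_like(B)
for i in range(r):
    for j in range(d_x):
        Bp, Bm = B.copy(), B.copy()
        Bp[i, j] += h; Bm[i, j] -= h
        num_B[i, j] = (L(A, Bp) - L(A, Bm)) / (2 * h)

print(bool(np.allclose(grad_A, num_A, atol=1e-4)),
      bool(np.allclose(grad_B, num_B, atol=1e-4)))
```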
+
+To assemble these second order partial derivatives into a matrix, we choose the following order of the entries of $(A,B)$ :
+
+$$
+a _ {1 1}, \dots , a _ {1 r}, a _ {2 1}, \dots , a _ {2 r}, \dots , a _ {d _ {y} 1}, \dots , a _ {d _ {y} r}, b _ {1 1}, \dots , b _ {r 1}, b _ {1 2}, \dots , b _ {r 2}, \dots , b _ {1 d _ {x}}, \dots , b _ {r d _ {x}}.
+$$
+
+We denote by $H$ the Hessian matrix of $L$ with respect to this ordering at the following specifically chosen matrices $(A_0, B_0)$ : denoting by $i_1, i_2, \ldots, i_r$ the entries of $\mathcal{I}$ in decreasing order, we pick the $j$ -th column of $A_0$ to be the $i_j$ -th standard basis vector in $\mathbb{R}^{d_y}$ and the $j$ -th row of $B_0$ to be the $\sigma_{i_j}$ -multiple of the $i_j$ -th standard basis vector in $\mathbb{R}^{d_x}$ . Note that $A_0B_0 = \Sigma_{\mathcal{I}}$ , $A_0^T A_0 = I_r$ is the $r \times r$ -identity matrix, and $B_0B_0^T$ is the $r \times r$ -diagonal matrix with entries $\sigma_{i_1}^2, \sigma_{i_2}^2, \ldots, \sigma_{i_r}^2$ . We write
+
+$$
+H = \left[ \begin{array}{c c} D & M \\ M ^ {T} & N \end{array} \right], \quad \text {where } D \in \mathbb {R} ^ {r d _ {y} \times r d _ {y}}, N \in \mathbb {R} ^ {r d _ {x} \times r d _ {x}}, M \in \mathbb {R} ^ {r d _ {y} \times r d _ {x}}.
+$$
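The factorization $(A_0, B_0)$ chosen above is easy to instantiate and check. The following sketch (NumPy assumed, with illustrative values $d_y = 3$, $d_x = 4$, $r = 2$, $\mathcal{I} = \{1, 3\}$ that are not from the paper) verifies $A_0 B_0 = \Sigma_{\mathcal{I}}$, $A_0^T A_0 = I_r$, and the stated diagonal form of $B_0 B_0^T$:

```python
import numpy as np

d_y, d_x, r = 3, 4, 2
sigma = np.array([3.0, 2.0, 1.0])
I = [3, 1]                            # entries i_1 > i_2 of I in decreasing order

A0 = np.zeros((d_y, r)); B0 = np.zeros((r, d_x))
for j, ij in enumerate(I):
    A0[ij - 1, j] = 1.0               # j-th column of A0: i_j-th basis vector
    B0[j, ij - 1] = sigma[ij - 1]     # j-th row of B0: sigma_{i_j} times basis vector

SigmaI = np.zeros((d_y, d_x))
for ij in I:
    SigmaI[ij - 1, ij - 1] = sigma[ij - 1]

print(np.array_equal(A0 @ B0, SigmaI))                       # A0 B0 = Sigma_I
print(np.array_equal(A0.T @ A0, np.eye(r)))                  # A0^T A0 = I_r
print(np.array_equal(B0 @ B0.T,
                     np.diag([sigma[ij - 1] ** 2 for ij in I])))  # diag(sigma_{i_j}^2)
```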
+
+Our first observation is that $N$ , whose entries are described by (16), is twice the identity matrix, so $N = 2I_{rd_x}$ . Similarly, we see from (15) that $D$ is a diagonal matrix. According to our fixed ordering, the entries of $D$ are indexed by pairs $(ij,kl)$ of integers $i,k\in [d_y]$ and $j,l\in [r]$ . With this, the diagonal entries of $D$ are $D_{ij,ij} = 2\sigma_{i_j}^2$ . Analogously, the entries of $M$ are indexed by pairs $(ij,kl)$ of integers $i\in [d_y]$ , $j,k\in [r]$ and $l\in [d_x]$ .
+
+Lemma 31. Let $i \in [d_y]$ and $j \in [r]$ . The $ij$ -th row of $M$ has exactly one non-zero entry. If $i \in \mathcal{I}$ , there is some $k \in [r]$ with $i = i_k$ and the non-zero entry is $M_{ij,ki_j} = 2\sigma_{i_j}$ . Otherwise, so if $i \notin \mathcal{I}$ , the non-zero entry is $M_{ij,ji} = -2\sigma_i$ .
+
+Proof. The entries of $M$ are given by (17). We first observe that $(A_0)_{ik}(B_0)_{jl}$ is non-zero if and only if $i = i_k$ and $l = i_j$ . Moreover, we have that $(A_0)_{i_k k}(B_0)_{j i_j} = \sigma_{i_j}$ . Similarly, $[A_0B_0 - \Sigma]_{il}$ is non-zero if and only if $i = l \notin \mathcal{I}$ . For $i \notin \mathcal{I}$ we have that $[A_0B_0 - \Sigma]_{ii} = -\sigma_i$ .
+
+Now we fix $i$ and $j$ and consider the $ij$ -th row of $M$ . We apply our observations above to the following three cases.
+
+If $i = i_j$ , then $M_{ij,kl}$ is non-zero if and only if $k = j$ and $l = i$ . In that case, $M_{ij,ji} = 2\sigma_i$ .
+
+If $i \in \mathcal{I}$ , but $i \neq i_j$ , then there is some $n \neq j$ such that $i = i_n$ . Now $M_{ij,kl}$ is non-zero if and only if $k = n$ and $l = i_j$ . In that case, $M_{ij,ki_j} = 2\sigma_{i_j}$ .
+
+Finally, if $i \notin \mathcal{I}$ , then $M_{ij,kl}$ is non-zero if and only if $k = j$ and $l = i$ . In that case, we have that $M_{ij,ji} = -2\sigma_i$ .
+
+Corollary 32. The square matrix $\Delta := MM^T \in \mathbb{R}^{r d_y \times r d_y}$ is a diagonal matrix. For $i \in [d_y]$ and $j \in [r]$ , its $ij$ -th diagonal entry is $\Delta_{ij,ij} = 4\sigma_{i_j}^2$ if $i \in \mathcal{I}$ and $\Delta_{ij,ij} = 4\sigma_i^2$ if $i \notin \mathcal{I}$ .
+
+Proof. The computation of the diagonal entries follows directly from Lemma 31. To see that all other entries of $\Delta$ are zero, we need to show that no column of $M$ has more than one non-zero entry. So let us assume for contradiction that the $kl$ -th column of $M$ has non-zero entries in the $ij$ -th row and in the $\overline{ij}$ -th row for $(i,j) \neq (\overline{i},\overline{j})$ .
+
+If $i, \bar{i} \in \mathcal{I}$ , then Lemma 31 implies $i = i_k = \bar{i}$ and $i_j = l = i_{\overline{j}}$ , which contradicts $(i,j) \neq (\bar{i},\bar{j})$ .
+
+If $i,\bar{i}\notin \mathcal{I}$ , we see from Lemma 31 that $j = k = \overline{j}$ and $i = l = \overline{i}$ , which contradicts $(i,j)\neq (\bar{i},\bar{j})$ .
+
+Finally, if $i \in \mathcal{I}$ and $\bar{i} \notin \mathcal{I}$ , then Lemma 31 yields that $\bar{i} = l = i_j \in \mathcal{I}$ ; a contradiction.
+
+Corollary 33. The characteristic polynomial of $H$ is
+
+$$
+\left(t - 2\right) ^ {r \left| d _ {x} - d _ {y} \right|} \cdot t ^ {r ^ {2}} \cdot \prod_ {k \in \mathcal {I}} \left(t - 2 \left(\sigma_ {k} ^ {2} + 1\right)\right) ^ {r} \cdot \prod_ {i \in [ m ] \backslash \mathcal {I}} \prod_ {j \in \mathcal {I}} \left(t ^ {2} - 2 t \left(\sigma_ {j} ^ {2} + 1\right) + 4 \left(\sigma_ {j} ^ {2} - \sigma_ {i} ^ {2}\right)\right). \tag {18}
+$$
+
+Proof. Using Schur complements, we can compute the characteristic polynomial of $H$ as follows:
+
+$$
+\begin{array}{l} \chi_ {H} (t) = \det \left(t I _ {r \left(d _ {x} + d _ {y}\right)} - H\right) \\ = \det \left(t I _ {r d _ {x}} - 2 I _ {r d _ {x}}\right) \det \left(\left(t I _ {r d _ {y}} - D\right) - M \left(t I _ {r d _ {x}} - 2 I _ {r d _ {x}}\right) ^ {- 1} M ^ {T}\right) \\ = (t - 2) ^ {r d _ {x}} \det \left((t I _ {r d _ {y}} - D) - (t - 2) ^ {- 1} \Delta\right) \\ = (t - 2) ^ {r \left(d _ {x} - d _ {y}\right)} \det \left(\left(t - 2\right) \left(t I _ {r d _ {y}} - D\right) - \Delta\right). \\ \end{array}
+$$
+
+By Corollary 32, the matrix $(t - 2)(tI_{rd_y} - D) - \Delta$ is a diagonal matrix whose $ij$ -th diagonal entry is $(t - 2)(t - D_{ij,ij}) - \Delta_{ij,ij}$ . We write $\delta_{ij} \coloneqq \Delta_{ij,ij}$ for short and use the identity $D_{ij,ij} = 2\sigma_{i_j}^2$ to further derive
+
+$$
+\begin{array}{l} \chi_ {H} (t) = (t - 2) ^ {r \left(d _ {x} - d _ {y}\right)} \prod_ {i = 1} ^ {d _ {y}} \prod_ {j = 1} ^ {r} \Big ((t - 2) (t - 2 \sigma_ {i _ {j}} ^ {2}) - \delta_ {i j} \Big) \\ = (t - 2) ^ {r \left(d _ {x} - d _ {y}\right)} \prod_ {i = 1} ^ {d _ {y}} \prod_ {j = 1} ^ {r} \left(t ^ {2} - 2 t \left(\sigma_ {i _ {j}} ^ {2} + 1\right) + \left(4 \sigma_ {i _ {j}} ^ {2} - \delta_ {i j}\right)\right) \\ = (t - 2) ^ {r \left(d _ {x} - d _ {y}\right)} \left(\prod_ {i \in \mathcal {I}} \prod_ {j = 1} ^ {r} \left(t \left(t - 2 \left(\sigma_ {i _ {j}} ^ {2} + 1\right)\right)\right)\right) \\ \cdot \left(\prod_ {i \in [ d _ {y} ] \backslash \mathcal {I}} \prod_ {j = 1} ^ {r} \left(t ^ {2} - 2 t \left(\sigma_ {i _ {j}} ^ {2} + 1\right) + 4 \left(\sigma_ {i _ {j}} ^ {2} - \sigma_ {i} ^ {2}\right)\right)\right). \\ \end{array}
+$$
+
+The latter equality was derived by substituting specific values into the $\delta_{ij}$ according to Corollary 32. Rearranging the terms of this last expression of $\chi_H(t)$ yields (18).
+
+Lemma 34. Let $x, y > 0$ . The polynomial $f(t) = t^2 - 2t(x + 1) + 4(x - y)$ has two real roots and at least one of them is positive. Moreover, $f(t)$ has a negative root if and only if $x < y$ .
+
+Proof. The roots of $f(t)$ are $x + 1 \pm \sqrt{(x + 1)^2 - 4(x - y)} = x + 1 \pm \sqrt{(x - 1)^2 + 4y}$ . So the discriminant is positive and $f(t)$ has two real roots. Clearly, one of these is positive. The other one is negative if and only if $x + 1 < \sqrt{(x - 1)^2 + 4y}$ , which is equivalent to $(x + 1)^2 < (x - 1)^2 + 4y$ and thus to $x < y$ .
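The root formula derived in this proof can be spot-checked numerically. In the sketch below (illustrative values, standard library only), the case $x < y$ produces exactly one negative root while $x > y$ yields two positive roots:

```python
import math

def roots_f(x, y):
    # f(t) = t^2 - 2t(x+1) + 4(x - y); discriminant (x-1)^2 + 4y > 0 for y > 0
    disc = (x - 1.0) ** 2 + 4.0 * y
    return (x + 1.0 - math.sqrt(disc), x + 1.0 + math.sqrt(disc))

print(roots_f(1.0, 4.0))   # x < y: (-2.0, 6.0), one negative root
print(roots_f(4.0, 1.0))   # x > y: both roots positive
```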
+
+Proof of Theorem 30. It is left to count the number of negative roots of the univariate polynomial (18). All the linear factors of (18) have non-negative roots. The $ij$ -th quadratic factor of (18), for $i \in [d_y] \setminus \mathcal{I}$ and $j \in \mathcal{I}$ , has at most one negative root due to Lemma 34. Moreover, it has exactly one negative root if and only if $\sigma_j^2 < \sigma_i^2$ , which is equivalent to $j > i$ . Hence, the polynomial (18) has exactly $\# \{(j,i) \in \mathcal{I} \times ([d_y] \setminus \mathcal{I}) \mid j > i\}$ negative roots.
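As a numerical sanity check of the index formula (a sketch under assumed toy dimensions, not part of the paper; NumPy is required), one can build the Hessian of $L$ at the factorization $(A_0, B_0)$ by finite differences and compare its number of negative eigenvalues with the combinatorial count:

```python
import numpy as np

d_y, d_x, r = 3, 4, 2
sigma = np.array([3.0, 2.0, 1.0])              # distinct positive singular values
Sigma = np.zeros((d_y, d_x)); np.fill_diagonal(Sigma, sigma)
I = [1, 3]                                     # selected index set (1-based)

# Factorization (A0, B0) of Sigma_I as chosen above.
A0 = np.zeros((d_y, r)); B0 = np.zeros((r, d_x))
for j, ij in enumerate(sorted(I, reverse=True)):   # i_1 > i_2 > ...
    A0[ij - 1, j] = 1.0
    B0[j, ij - 1] = sigma[ij - 1]

def L(theta):
    A = theta[:d_y * r].reshape(d_y, r)
    B = theta[d_y * r:].reshape(r, d_x)
    return np.sum((A @ B - Sigma) ** 2)

# Hessian of L at (A0, B0) by central finite differences.
theta0 = np.concatenate([A0.ravel(), B0.ravel()])
n, h = theta0.size, 1e-4
H = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        t = [theta0.copy() for _ in range(4)]
        t[0][a] += h; t[0][b] += h
        t[1][a] += h; t[1][b] -= h
        t[2][a] -= h; t[2][b] += h
        t[3][a] -= h; t[3][b] -= h
        H[a, b] = (L(t[0]) - L(t[1]) - L(t[2]) + L(t[3])) / (4 * h * h)

num_negative = int(np.sum(np.linalg.eigvalsh((H + H.T) / 2) < -1e-3))
predicted = sum(1 for j in I for i in range(1, d_y + 1)
                if i not in I and j > i)       # #{(j,i) in I x I^c : j > i}
print(num_negative, predicted)                 # both equal 1 for this choice of I
```

For $\mathcal{I} = \{1, 3\}$ and $m = 3$, the only pair with $j > i$ is $(3, 2)$, matching the single negative eigenvalue coming from the quadratic factor $(t - 6)(t + 2)$ of (18).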
+
+Theorem 12. If the singular values of $Q_{0}$ are pairwise distinct and positive, $h_{Q_0}|_{\mathcal{M}_r}$ has exactly $\binom{m}{r}$ critical points, namely the matrices $Q_{\mathcal{I}} = U\Sigma_{\mathcal{I}}V^{T}$ with $\#(\mathcal{I}) = r$ . Moreover, its unique local and global minimum is $Q_{\{1,\dots,r\}}$ . More precisely, the index of $Q_{\mathcal{I}}$ as a critical point of $h_{Q_0}|_{\mathcal{M}_r}$ (i.e., the number of negative eigenvalues of the Hessian matrix for any local parameterization) is
+
+$$
+\operatorname {index} \left(Q _ {\mathcal {I}}\right) = \# \left\{\left(j, i\right) \in \mathcal {I} \times \mathcal {I} ^ {c} \mid j > i \right\}, \quad \text {where } \mathcal {I} ^ {c} = \{1, \dots , m \} \backslash \mathcal {I}.
+$$
+
+Proof. This is an amalgamation of Corollary 29 and Theorem 30. $\square$
\ No newline at end of file
diff --git a/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/images.zip b/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..dbbbdd22bd1534e6896de926b45c1024653add02
--- /dev/null
+++ b/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e70c8ffa3f57a5f4d7ba951ccb5818c15b462039194dcd905eb4e7164e02b67
+size 657169
diff --git a/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/layout.json b/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..18e4b704d78ca1497d4ddfc4f9397642a941ed3e
--- /dev/null
+++ b/pureandspuriouscriticalpointsageometricstudyoflinearnetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fc1448fce68f6a78e147ab80e1a0cb4bd1b9c8d3cbdb2cdecc317d1feb65f173
+size 2029187
diff --git a/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/70de4301-ce89-4c6a-a3fb-3adb83da965f_content_list.json b/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/70de4301-ce89-4c6a-a3fb-3adb83da965f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..fb292fadc55cd2f79991359372a8fb2d9da21739
--- /dev/null
+++ b/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/70de4301-ce89-4c6a-a3fb-3adb83da965f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:50b73c4830c981c9e8ba10987a56606d15dc835dc2533ab14ead12fd0468a3d9
+size 130644
diff --git a/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/70de4301-ce89-4c6a-a3fb-3adb83da965f_model.json b/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/70de4301-ce89-4c6a-a3fb-3adb83da965f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1e19cc9e149880f61e8c63da0a2e4d2a3c6c94b6
--- /dev/null
+++ b/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/70de4301-ce89-4c6a-a3fb-3adb83da965f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c7e30422ca17271b048b3a101aca53e3313662a514449081da0b9597ed98ba70
+size 150727
diff --git a/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/70de4301-ce89-4c6a-a3fb-3adb83da965f_origin.pdf b/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/70de4301-ce89-4c6a-a3fb-3adb83da965f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0d16e62cf50cd4cb7c44ebf94738341de8a1286e
--- /dev/null
+++ b/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/70de4301-ce89-4c6a-a3fb-3adb83da965f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5151cde119b6bdcc876575a3d44439c9e2be9b1a336267a52578fdd66998c7ef
+size 380704
diff --git a/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/full.md b/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..19d7010af32e66254bdd819374ed5ad096a5cf5c
--- /dev/null
+++ b/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/full.md
@@ -0,0 +1,716 @@
+# Q-LEARNING WITH UCB EXPLORATION IS SAMPLE EFFICIENT FOR INFINITE-HORIZON MDP
+
+Kefan Dong*, Yuanhao Wang*
+
+Institute for Interdisciplinary Information Sciences,
+
+Tsinghua University
+
+{dkf16,yuanhao-16}@mails.tsinghua.edu.cn
+
+Xiaoyu Chen
+
+Key Laboratory of Machine Perception, MOE, School of EECS,
+
+Peking University
+
+cxy30@pku.edu.cn
+
+Liwei Wang
+
+Key Laboratory of Machine Perception, MOE, School of EECS
+
+Center for Data Science, Peking University
+
+wanglw@cis.pku.edu.cn
+
+# ABSTRACT
+
+A fundamental question in reinforcement learning is whether model-free algorithms are sample efficient. Recently, Jin et al. (2018) proposed a Q-learning algorithm with a UCB exploration policy, and proved that it achieves a nearly optimal regret bound for finite-horizon episodic MDPs. In this paper, we adapt Q-learning with a UCB exploration bonus to infinite-horizon MDPs with discounted rewards, without access to a generative model. We show that the sample complexity of exploration of our algorithm is bounded by $\tilde{O}\left(\frac{SA}{\epsilon^2(1 - \gamma)^7}\right)$ . This improves the previously best known result of $\tilde{O}\left(\frac{SA}{\epsilon^4(1 - \gamma)^8}\right)$ in this setting, achieved by delayed Q-learning (Strehl et al., 2006), and matches the lower bound in terms of $\epsilon$ as well as $S$ and $A$ up to logarithmic factors.
+
+# 1 INTRODUCTION
+
+The goal of reinforcement learning (RL) is to construct efficient algorithms that learn and plan in sequential decision-making tasks when the underlying system dynamics are unknown. A typical model in RL is the Markov Decision Process (MDP). At each time step, the environment is in a state $s$ . The agent takes an action $a$ , obtains a reward $r$ , and then the environment transitions to another state. In reinforcement learning, the transition probability distribution is unknown. The algorithm needs to learn the transition dynamics of the MDP while aiming to maximize the cumulative reward. This poses the exploration-exploitation dilemma: whether to act to gain new information (explore) or to act consistently with past experience to maximize reward (exploit).
+
+Theoretical analyses of reinforcement learning fall into two broad categories: those assuming a simulator (a.k.a. generative model), and those without a simulator. In the first category, the algorithm is allowed to query the outcome of any state-action pair from an oracle. The emphasis is on the number of calls needed to estimate the $Q$ value or to output a near-optimal policy. There has been extensive research in the literature along this line, the majority of which focuses on discounted infinite-horizon MDPs (Azar et al., 2011; Even-Dar & Mansour, 2003; Sidford et al., 2018b). The current results have achieved near-optimal time and sample complexities (Sidford et al., 2018b;a).
+
+Without a simulator, there is a dichotomy between finite-horizon and infinite-horizon settings. In finite-horizon settings, there are straightforward definitions for both regret and sample complexity; the latter is defined as the number of samples needed before the policy becomes near optimal. In this setting, extensive research in the past decade (Jin et al., 2018; Azar et al., 2017; Jaksch et al., 2010; Dann et al., 2017) has achieved great progress, and established nearly-tight bounds for both regret and sample complexity.
+
+The infinite-horizon setting is a very different matter. First of all, the performance measure cannot be a straightforward extension of the sample complexity defined above (See Strehl & Littman (2008) for detailed discussion). Instead, the measure of sample efficiency we adopt is the so-called sample complexity of exploration (Kakade et al., 2003), which is also a widely-accepted definition. This measure counts the number of times that the algorithm "makes mistakes" along the whole trajectory. See also (Strehl & Littman, 2008) for further discussions regarding this issue.
+
+Several model-based algorithms have been proposed for infinite-horizon MDPs, for example Rmax (Brafman & Tennenholtz, 2003), MoRmax (Szita & Szepesvári, 2010) and UCRL-γ (Lattimore & Hutter, 2012). It is noteworthy that there still exists a considerable gap between the state-of-the-art algorithm and the theoretical lower bound (Lattimore & Hutter, 2012) in terms of the $1 / (1 - \gamma)$ factor.
+
+Though model-based algorithms have been proven to be sample efficient in various MDP settings, most state-of-the-art RL algorithms are developed in the model-free paradigm (Schulman et al., 2015; Mnih et al., 2013; 2016). Model-free algorithms are more flexible and require less space, and they have achieved remarkable performance on benchmarks such as Atari games and simulated robot control problems.
+
+For infinite-horizon MDPs without access to a simulator, the best model-free algorithm has a sample complexity of exploration of $\tilde{\mathcal{O}}\left(\frac{SA}{\epsilon^4(1 - \gamma)^8}\right)$ , achieved by delayed Q-learning (Strehl et al., 2006). The authors introduce a novel proof strategy for bounding the sample complexity of exploration: they identify a sufficient condition for optimality, and then bound the number of times that this condition is violated.
+
+However, the result of delayed Q-learning still leaves a quadratic gap in $1 / \epsilon$ from the best-known lower bound. This is partly because the updates to the Q-values are made in an over-conservative way. In fact, the loose sample complexity bound is a consequence of both the delayed Q-learning algorithm itself and mathematical artifacts in its analysis. To illustrate this, we construct a hard instance showing that delayed Q-learning incurs $\Omega (1 / \epsilon^3)$ sample complexity. This observation, as well as the success of the Q-learning with UCB algorithm (Jin et al., 2018) in achieving a regret bound in finite-horizon settings, motivates us to incorporate a UCB-like exploration term into our algorithm.
+
+In this work, we propose a Q-learning algorithm with UCB exploration policy. We show the sample complexity of exploration bound of our algorithm is $\tilde{\mathcal{O}}\left(\frac{SA}{\epsilon^2(1 - \gamma)^7}\right)$ . This strictly improves the previous best known result due to Delayed Q-learning. It also matches the lower bound in the dependence on $\epsilon$ , $S$ and $A$ up to logarithmic factors.
+
+We point out that the infinite-horizon setting cannot be solved by reducing it to the finite-horizon setting. There are key technical differences between the two settings: the definition of sample complexity of exploration, time-invariant policies, and the error propagation structure in Q-learning. In particular, the analysis techniques developed in Jin et al. (2018) do not directly apply here. We refer the reader to Section 3.2 for detailed explanations and a concrete example.
+
+The rest of the paper is organized as follows. After introducing the notation used in the paper in Section 2, we describe our infinite-horizon Q-learning with UCB algorithm in Section 3 and state our main theoretical results, which take the form of PAC sample complexity bounds. In Section 4 we present some interesting properties beyond the sample complexity bound. Finally, we conclude the paper in Section 5.
+
+# 2 PRELIMINARY
+
+We consider a Markov Decision Process defined by the five-tuple $\langle \mathcal{S}, \mathcal{A}, p, r, \gamma \rangle$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $p(s'|s, a)$ is the transition function, $r: \mathcal{S} \times \mathcal{A} \to [0,1]$ is the deterministic reward function, and $0 \leq \gamma < 1$ is the discount factor for rewards. Let $S = |\mathcal{S}|$ and $A = |\mathcal{A}|$ denote the number of states and the number of actions respectively.
+
+Starting from a state $s_1$, the agent interacts with the environment for an infinite number of time steps. At each time step, the agent observes state $s_t \in \mathcal{S}$, picks action $a_t \in \mathcal{A}$, and receives reward $r_t$; the system then transits to the next state $s_{t+1}$.
+
+Following the notation of Strehl et al. (2006), a policy $\pi_t$ refers to the non-stationary control policy of the algorithm from step $t$ on. We use $V^{\pi_t}(s)$ to denote the value function under policy $\pi_t$, which is defined as $V^{\pi_t}(s) = \mathbb{E}[\sum_{i=1}^{\infty} \gamma^{i-1} r(s_i, \pi_{t+i-1}(s_i)) | s_1 = s]$. We also use $V^*(s) = \sup_{\pi} V^{\pi}(s)$ to denote the value function of the optimal policy. Accordingly, we define $Q^{\pi_t}(s, a) = r(s, a) + \mathbb{E}[\sum_{i=2}^{\infty} \gamma^{i-1} r(s_i, \pi_{t+i-1}(s_i)) | s_1 = s, a_1 = a]$ as the Q function under policy $\pi_t$; $Q^*(s, a)$ is the Q function under the optimal policy $\pi^*$.
+
+We use the sample complexity of exploration defined in Kakade (2003) to measure the learning efficiency of our algorithm. This definition of sample complexity has been widely used in previous works (Strehl et al., 2006; Lattimore & Hutter, 2012; Strehl & Littman, 2008).
+
+Definition 1. The sample complexity of exploration of an algorithm $\mathcal{ALG}$ is defined as the number of time steps $t$ such that the non-stationary policy $\pi_t$ at time $t$ is not $\epsilon$-optimal for the current state $s_t$, i.e. $V^{\pi_t}(s_t) < V^*(s_t) - \epsilon$.
+
+Roughly speaking, this measure counts the number of mistakes made along the whole trajectory. We use the following definition of PAC-MDP from Strehl et al. (2006).
+
+Definition 2. An algorithm $\mathcal{ALG}$ is said to be PAC-MDP (Probably Approximately Correct in Markov Decision Processes) if, for any $\epsilon$ and $\delta$ , the sample complexity of $\mathcal{ALG}$ is less than some polynomial in the relevant quantities $(S, A, 1/\epsilon, 1/\delta, 1/(1 - \gamma))$ , with probability at least $1 - \delta$ .
+
+Finally, recall that the Bellman equations are the following:
+
+$$
+\left\{ \begin{array}{l} V^{\pi_t}(s) = Q^{\pi_t}\left(s, \pi_t(s)\right) \\ Q^{\pi_t}(s, a) := \left(r + \gamma \mathbb{P} V^{\pi_{t+1}}\right)(s, a), \end{array} \right. \quad \left\{ \begin{array}{l} V^*(s) = Q^*\left(s, \pi^*(s)\right) \\ Q^*(s, a) := \left(r + \gamma \mathbb{P} V^*\right)(s, a), \end{array} \right.
+$$
+
+which is frequently used in our analysis. Here we denote $[\mathbb{P}V^{\pi_t}](s,a) := \mathbb{E}_{s' \sim p(\cdot | s, a)} V^{\pi_{t+1}}(s')$ .
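+
+As a concrete illustration, both Bellman systems can be checked numerically on a small synthetic MDP. The sketch below is ours, not from the paper (it assumes NumPy, and the random MDP and iteration count are arbitrary choices): it computes $Q^*$ by value iteration, verifies the optimality equation at the fixed point, and solves the evaluation equation exactly for the stationary greedy policy.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+S, A, gamma = 5, 3, 0.9
+
+# Random tabular MDP: p[s, a] is a distribution over next states,
+# r[s, a] are deterministic rewards in [0, 1].
+p = rng.random((S, A, S))
+p /= p.sum(axis=2, keepdims=True)
+r = rng.random((S, A))
+
+# Value iteration: iterate Q <- r + gamma * P (max_a' Q) to convergence.
+Q = np.zeros((S, A))
+for _ in range(2000):
+    Q = r + gamma * p @ Q.max(axis=1)
+V_star = Q.max(axis=1)
+
+# Bellman optimality equation holds at the fixed point.
+assert np.allclose(Q, r + gamma * p @ V_star, atol=1e-8)
+
+# For the stationary greedy policy, solve V = r_pi + gamma * P_pi V exactly.
+pi = Q.argmax(axis=1)
+P_pi = p[np.arange(S), pi]
+r_pi = r[np.arange(S), pi]
+V_pi = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
+
+# The greedy policy with respect to Q* is optimal: V^pi = V*.
+assert np.allclose(V_pi, V_star, atol=1e-6)
+```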
+
+# 3 MAIN RESULTS
+
+In this section, we present the UCB Q-learning algorithm and the sample complexity bound.
+
+# 3.1 ALGORITHM
+
+Algorithm 1 Infinite Q-learning with UCB
+1: Parameters: $\epsilon, \gamma, \delta$
+2: Initialize $Q(s,a), \hat{Q}(s,a) \gets \frac{1}{1 - \gamma}$, $N(s,a) \gets 0$, $\epsilon_1 \gets \frac{\epsilon}{24RM\ln\frac{1}{1 - \gamma}}$, $H \gets \frac{\ln 1 / ((1 - \gamma)\epsilon_1)}{\ln 1 / \gamma}$
+3: Define $\iota(k) = \ln (SA(k + 1)(k + 2) / \delta)$, $\alpha_k = \frac{H + 1}{H + k}$
+4: for $t = 1,2,\ldots$ do
+5: Take action $a_t \gets \arg \max_{a'} \hat{Q}(s_t,a')$
+6: Receive reward $r_t$ and transit to $s_{t+1}$
+7: $N(s_t,a_t) \gets N(s_t,a_t) + 1$
+8: $k \gets N(s_t,a_t)$, $b_k \gets \frac{c_2}{1 - \gamma}\sqrt{\frac{H\iota(k)}{k}}$
+9: $\hat{V}(s_{t+1}) \gets \max_{a \in \mathcal{A}} \hat{Q}(s_{t+1}, a)$
+10: $Q(s_t,a_t) \gets (1 - \alpha_k)Q(s_t,a_t) + \alpha_k\left[r(s_t,a_t) + b_k + \gamma \hat{V}(s_{t+1})\right]$
+11: $\hat{Q}(s_t,a_t) \gets \min(\hat{Q}(s_t,a_t), Q(s_t,a_t))$
+12: end for
+
+Here $c_2 = 4\sqrt{2}$ is a constant and $R = \lceil \ln \frac{3}{\epsilon(1 - \gamma)} / (1 - \gamma) \rceil$; the choice of $M$ can be found in Section 3.3 ($M = \mathcal{O}(\ln 1 / ((1 - \gamma)\epsilon))$). The learning rate is $\alpha_k = (H + 1) / (H + k)$, and $H$ is chosen as $\frac{\ln 1 / ((1 - \gamma)\epsilon_1)}{\ln 1 / \gamma}$, which satisfies $H \leq \frac{\ln 1 / ((1 - \gamma)\epsilon_1)}{1 - \gamma}$.
+
+Our UCB Q-learning algorithm (Algorithm 1) maintains an optimistic estimation of action value function $Q(s, a)$ and its historical minimum value $\hat{Q}(s, a)$ . $N_{t}(s, a)$ denotes the number of times that $(s, a)$ is experienced before time step $t$ ; $\tau(s, a, k)$ denotes the time step $t$ at which $(s_{t}, a_{t}) = (s, a)$ for the $k$ -th time; if this state-action pair is not visited that many times, $\tau(s, a, k) = \infty$ . $Q_{t}(s, a)$ and $\hat{Q}_{t}(s, a)$ denote the $Q$ and $\hat{Q}$ value of $(s, a)$ that the algorithm maintains when arriving at $s_{t}$ respectively.
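+
+For concreteness, the main loop can be sketched in Python. This is a minimal simulation of ours, not the authors' implementation: NumPy, the step cap `T`, the demo MDP, and the surrogate for $M$ (we plug in its stated order $\mathcal{O}(\ln 1/((1-\gamma)\epsilon))$ from Section 3.3) are all our assumptions.
+
+```python
+import math
+import numpy as np
+
+def ucb_q_learning(p, r, gamma, eps, delta, T, seed=0):
+    """Sketch of Algorithm 1 on a tabular MDP with transition tensor
+    p[s, a, s'] and deterministic rewards r[s, a]; runs for T steps."""
+    rng = np.random.default_rng(seed)
+    S, A = r.shape
+    c2 = 4 * math.sqrt(2)
+    # R as in Section 3.1; M replaced by a surrogate of its stated order.
+    R = math.ceil(math.log(3 / (eps * (1 - gamma))) / (1 - gamma))
+    M = max(math.ceil(2 * math.log(1 / ((1 - gamma) * eps))), 10)
+    eps1 = eps / (24 * R * M * math.log(1 / (1 - gamma)))
+    H = math.log(1 / ((1 - gamma) * eps1)) / math.log(1 / gamma)
+
+    Q = np.full((S, A), 1 / (1 - gamma))   # optimistic Q estimate
+    Q_hat = Q.copy()                       # historical minimum of Q
+    N = np.zeros((S, A), dtype=int)        # visit counts
+
+    s = 0
+    for _ in range(T):
+        a = int(np.argmax(Q_hat[s]))                      # greedy w.r.t. Q_hat
+        s_next = int(rng.choice(S, p=p[s, a]))
+        N[s, a] += 1
+        k = N[s, a]
+        iota = math.log(S * A * (k + 1) * (k + 2) / delta)
+        b_k = c2 / (1 - gamma) * math.sqrt(H * iota / k)  # UCB bonus
+        alpha = (H + 1) / (H + k)                         # learning rate
+        v_next = Q_hat[s_next].max()
+        Q[s, a] = (1 - alpha) * Q[s, a] + alpha * (r[s, a] + b_k + gamma * v_next)
+        Q_hat[s, a] = min(Q_hat[s, a], Q[s, a])
+        s = s_next
+    return Q_hat
+
+# Demo on a small random MDP (our choice, for illustration only).
+rng = np.random.default_rng(1)
+S, A, gamma = 4, 2, 0.8
+p = rng.random((S, A, S))
+p /= p.sum(axis=2, keepdims=True)
+r = rng.random((S, A))
+Q_hat = ucb_q_learning(p, r, gamma, eps=0.5, delta=0.1, T=2000)
+```
+
+Since $\hat{Q}$ only ever decreases from its optimistic initialization, it always stays in $[0, 1/(1-\gamma)]$.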
+
+# 3.2 SAMPLE COMPLEXITY OF EXPLORATION
+
+Our main result is the following sample complexity of exploration bound.
+
+Theorem 1. For any $\epsilon > 0$, $\delta > 0$ and $1/2 < \gamma < 1$, with probability $1 - \delta$, the sample complexity of exploration (i.e., the number of time steps $t$ such that $\pi_t$ is not $\epsilon$-optimal at $s_t$) of Algorithm 1 is at most
+
+$$
+\tilde {\mathcal {O}} \left(\frac {S A \ln 1 / \delta}{\epsilon^ {2} (1 - \gamma) ^ {7}}\right),
+$$
+
+where $\tilde{\mathcal{O}}$ suppresses logarithmic factors of $1 / \epsilon, 1 / (1 - \gamma)$ and $SA$ .
+
+We first point out the obstacles for proving the theorem and reasons why the techniques in Jin et al. (2018) do not directly apply here. We then give a high level description of the ideas of our approach.
+
+One important issue is caused by the difference between the definitions of sample complexity for finite and infinite horizon MDPs. In finite-horizon settings, sample complexity (and regret) is determined by the first $T$ timesteps, and only measures the performance at the initial state $s_1$ (i.e. $(V^{*} - V^{\pi})(s_{1})$). However, in the infinite-horizon setting, the agent may enter under-explored regions at any time, and the sample complexity of exploration characterizes the performance at every state the agent enters.
+
+The following example clearly illustrates the key difference between infinite-horizon and finite-horizon. Consider an MDP with a starting state $s_1$ where the probability of leaving $s_1$ is $o(T^{-1})$ . In this case, with high probability, it would take more than $T$ timesteps to leave $s_1$ . Hence, guarantees about the learning in the first $T$ timesteps or about the performance at $s_1$ imply almost nothing about the number of mistakes the algorithm would make in the rest of the MDP (i.e. the sample complexity of exploration of the algorithm). As a result, the analysis for finite horizon MDPs cannot be directly applied to infinite horizon setting.
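+
+The effect is easy to quantify. The sketch below uses illustrative numbers of our choosing (a per-step leaving probability of $1/(10T)$, in the spirit of the $o(T^{-1})$ construction): within the first $T$ steps the agent leaves $s_1$ only with probability about $1 - e^{-1/10} \approx 0.095$, while the expected time to leave is $10T$ steps.
+
+```python
+# Illustrative numbers (our choice): per-step leaving probability 1/(10*T).
+T = 10_000
+p_leave = 1 / (10 * T)
+
+# Probability of leaving s_1 at least once within the first T steps.
+prob_leave_within_T = 1 - (1 - p_leave) ** T
+assert prob_leave_within_T < 0.1            # roughly 1 - exp(-1/10)
+
+# Expected number of steps before leaving (geometric distribution).
+expected_leave_time = 1 / p_leave
+assert abs(expected_leave_time - 10 * T) < 1e-6
+```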
+
+This calls for techniques for counting mistakes along the entire trajectory, such as those employed by Strehl et al. (2006). In particular, we need to establish convenient sufficient conditions for being $\epsilon$ -optimal at timestep $t$ and state $s_t$ , i.e. $V^{*}(s_{t}) - V^{\pi_{t}}(s_{t}) \leq \epsilon$ . Then, bounding the number of violations of such conditions gives a bound on sample complexity.
+
+Another technical reason why the proof in Jin et al. (2018) cannot be directly applied to our problem is the following. In finite-horizon settings, Jin et al. (2018) decomposed the learning error at episode $k$ and time $h$ as errors from a set of consecutive episodes before $k$ at time $h + 1$, using a clever design of the learning rate. However, in the infinite-horizon setting, this property does not hold. Suppose at time $t$ the agent is at state $s_t$ and takes action $a_t$. Then the learning error at $t$ only depends on those previous time steps at which the agent encountered the same state as $s_t$ and took the same action as $a_t$. Thus the learning error at time $t$ cannot be decomposed into errors from a set of consecutive time steps before $t$, but only into errors from a set of non-consecutive time steps without any structure. Therefore, we have to control the sum of learning errors over an unstructured set of time steps, which makes the analysis more challenging.
+
+Now we give a brief road map of the proof of Theorem 1. Our first goal is to establish a sufficient condition so that $\pi_t$ learned at step $t$ is $\epsilon$ -optimal for state $s_t$ . As an intermediate step we show that a sufficient condition for $V^*(s_t) - V^{\pi_t}(s_t) \leq \epsilon$ is that $V^*(s_{t'}) - Q^*(s_{t'}, a_{t'})$ is small for a few time steps $t'$ within an interval $[t, t + R]$ for a carefully chosen $R$ (Condition 1). Then we show the desired sufficient condition (Condition 2) implies Condition 1. We then bound the total number of bad time steps on which $V^*(s_t) - Q^*(s_t, a_t)$ is large for the whole MDP; this implies a bound on the number of violations of Condition 2. This in turn relies on a key technical lemma (Lemma 2).
+
+The remaining part of this section is organized as follows. We establish the sufficient condition for $\epsilon$ -optimality in Section 3.3. The key lemma is presented in Section 3.4. Finally we prove Theorem 1 in Section 3.5.
+
+# 3.3 SUFFICIENT CONDITION FOR $\epsilon$ -OPTIMALITY
+
+In this section, we establish a sufficient condition (Condition 2) for $\epsilon$ -optimality at time step $t$ .
+
+For a fixed $s_t$ , let $\mathrm{TRAJ}(R)$ be the set of length- $R$ trajectories starting from $s_t$ . Our goal is to give a sufficient condition so that $\pi_t$ , the policy learned at step $t$ , is $\epsilon$ -optimal. For any $\epsilon_2 > 0$ , define $R := \lceil \ln \frac{1}{\epsilon_2(1 - \gamma)} / (1 - \gamma) \rceil$ . Denote $V^*(s_t) - Q^*(s_t, a_t)$ by $\Delta_t$ . We have
+
+$$
+\begin{array}{l} V^*(s_t) - V^{\pi_t}(s_t) \\ = V^*\left(s_t\right) - Q^*\left(s_t, a_t\right) + Q^*\left(s_t, a_t\right) - V^{\pi_t}\left(s_t\right) \\ = V^*\left(s_t\right) - Q^*\left(s_t, a_t\right) + \gamma \mathbb{P}\left(V^* - V^{\pi_t}\right)\left(s_t, \pi_t\left(s_t\right)\right) \\ = V^*\left(s_t\right) - Q^*\left(s_t, a_t\right) + \gamma \sum_{s_{t+1}} p\left(s_{t+1} \mid s_t, \pi_t\left(s_t\right)\right) \left[V^*\left(s_{t+1}\right) - Q^*\left(s_{t+1}, a_{t+1}\right)\right] \\ \quad + \gamma^2 \sum_{s_{t+1}, s_{t+2}} p\left(s_{t+1} \mid s_t, \pi_t\left(s_t\right)\right) p\left(s_{t+2} \mid s_{t+1}, \pi_{t+1}\left(s_{t+1}\right)\right) \left[V^*\left(s_{t+2}\right) - Q^*\left(s_{t+2}, a_{t+2}\right)\right] + \cdots \\ \leq \epsilon_2 + \sum_{traj \in \operatorname{TRAJ}(R)} p(traj) \cdot \left[\sum_{j=0}^{R-1} \gamma^j \Delta_{t+j}\right], \tag{1} \end{array}
+$$
+
+where the last inequality holds because $\frac{\gamma^R}{1 - \gamma} \leq \epsilon_2$ , which follows from the definition of $R$ .
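+
+Indeed, $\gamma \leq e^{-(1-\gamma)}$ gives $\gamma^R \leq e^{-(1-\gamma)R} \leq \epsilon_2(1-\gamma)$. A quick numerical check over an arbitrary grid of parameters (our choice) confirms the inequality:
+
+```python
+import math
+
+# Check gamma^R / (1 - gamma) <= eps2 for
+# R = ceil(ln(1/(eps2*(1-gamma))) / (1-gamma)), over a grid of our choosing.
+for gamma in (0.6, 0.9, 0.99):
+    for eps2 in (0.3, 0.01):
+        R = math.ceil(math.log(1 / (eps2 * (1 - gamma))) / (1 - gamma))
+        assert gamma ** R / (1 - gamma) <= eps2
+```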
+
+For any fixed trajectory of length $R$ starting from $s_t$ , consider the sequence $(\Delta_{t'})_{t \leq t' < t + R}$ . Let $X_t^{(i)}$ be the $i$ -th largest item of $(\Delta_{t'})_{t \leq t' < t + R}$ . Rearranging Eq. (1), we obtain
+
+$$
+V^*\left(s_t\right) - V^{\pi_t}\left(s_t\right) \leq \epsilon_2 + E_{traj}\left[\sum_{i=1}^{R} \gamma^{i-1} X_t^{(i)}\right]. \tag{2}
+$$
+
+We first prove that Condition 1 implies $\epsilon$ -optimality at time step $t$ when $\epsilon_2 = \epsilon / 3$ .
+
+Condition 1. Let $\xi_{i}:= \frac{1}{2^{i + 2}}\epsilon_{2}\left(\ln \frac{1}{1 - \gamma}\right)^{-1}$. For all $0\leq i\leq \lfloor \log_2 R\rfloor$,
+
+$$
+E \left[ X _ {t} ^ {\left(2 ^ {i}\right)} \right] \leq \xi_ {i}. \tag {3}
+$$
+
+Claim 1. If Condition 1 is satisfied at time step $t$ , the policy $\pi_t$ is $\epsilon$ -optimal at state $s_t$ , i.e. $V^{*}(s_{t}) - V^{\pi_{t}}(s_{t}) \leq \epsilon$ .
+
+Proof. Note that $X_{t}^{(i)}$ is monotonically decreasing with respect to $i$ . Therefore, $E[X_{t}^{(i)}] \leq E[X_{t}^{(2^{\lfloor \log_2 i \rfloor})}]$ . Eq. (3) implies that for $1/2 < \gamma < 1$ ,
+
+$$
+\begin{array}{l} E \left[ \sum_ {i = 1} ^ {R} \gamma^ {i - 1} X _ {t} ^ {(i)} \right] = \sum_ {i = 1} ^ {R} \gamma^ {i - 1} E [ X _ {t} ^ {(i)} ] \leq \sum_ {i = 1} ^ {R} \gamma^ {i - 1} E [ X _ {t} ^ {(2 ^ {\lfloor \log_ {2} i \rfloor})} ] \\ \leq \sum_ {i = 1} ^ {R} \gamma^ {i - 1} 2 ^ {- \lfloor \log_ {2} i \rfloor - 2} \epsilon_ {2} \left(\ln \frac {1}{1 - \gamma}\right) ^ {- 1} \leq \sum_ {i = 1} ^ {R} \frac {\gamma^ {i - 1}}{i} \epsilon_ {2} \left(\ln \frac {1}{1 - \gamma}\right) ^ {- 1} \leq 2 \epsilon_ {2}, \\ \end{array}
+$$
+
+where the last inequality follows from the fact that $\sum_{i=1}^{\infty} \frac{\gamma^{i-1}}{i} = \frac{1}{\gamma} \ln \frac{1}{1-\gamma}$ and $\gamma > 1/2$ .
+
+Combining with Eq. 2, we have, $V^{*}(s_{t}) - V^{\pi_{t}}(s_{t}) \leq \epsilon_{2} + E\left[\sum_{i = 1}^{R}\gamma^{i - 1}X_{t}^{(i)}\right] \leq 3\epsilon_{2} = \epsilon$ .
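+
+The series identity used in the last step follows from the Taylor expansion $-\ln(1-x) = \sum_{i\geq 1} x^i/i$; a quick numerical sanity check (with arbitrary values of $\gamma$ of our choosing):
+
+```python
+import math
+
+# Check sum_{i >= 1} gamma^(i-1) / i = (1/gamma) * ln(1/(1 - gamma))
+# by comparing a long partial sum against the closed form.
+for gamma in (0.51, 0.9, 0.99):
+    partial = sum(gamma ** (i - 1) / i for i in range(1, 20000))
+    closed = math.log(1 / (1 - gamma)) / gamma
+    assert abs(partial - closed) < 1e-6
+```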
+
+Next we show that given $i,t$ , Condition 2 implies Eq. (3).
+
+Condition 2. Define $L = \lfloor \log_2 R \rfloor$ . Let $M = \max \left\{\lceil 2 \log_2 \frac{1}{\xi_L(1 - \gamma)} \rceil, 10\right\}$ , and $\eta_j = \frac{\xi_i}{M} \cdot 2^{j-1}$ . For all $2 \leq j \leq M$ , $\eta_j \Pr[X_t^{(2^i)} > \eta_{j-1}] \leq \frac{\xi_i}{M}$ .
+
+Claim 2. Given $i$ , $t$ , Eq. (3) holds if Condition 2 is satisfied.
+
+Proof. The choice of $M$ ensures that $\eta_M > 1 / (1 - \gamma)$. Assuming Condition 2 holds, it follows that
+
+$$
+E \left[ X _ {t} ^ {(2 ^ {i})} \right] = \int_ {0} ^ {1 / (1 - \gamma)} \Pr \left[ X _ {t} ^ {(2 ^ {i})} > x \right] d x \leq \eta_ {1} + \sum_ {j = 2} ^ {M} \eta_ {j} \Pr \left[ X _ {t} ^ {(2 ^ {i})} > \eta_ {j - 1} \right] \leq \xi_ {i}.
+$$
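+
+The discretization behind this step is the layer-cake formula with a geometric grid: since $\eta_1 + \cdots + \eta_j \geq \eta_j$, every $x \in [0, \eta_M]$ satisfies $x \leq \eta_1 + \sum_{j=2}^{M} \eta_j I[x > \eta_{j-1}]$ pointwise, so the bound holds for any distribution. A small empirical check (assuming NumPy; the grid size and sample data are our choices):
+
+```python
+import numpy as np
+
+# Layer-cake check: for nonnegative X bounded by eta_M, with the geometric
+# grid eta_j = eta_1 * 2^(j-1),
+#     E[X] <= eta_1 + sum_{j=2..M} eta_j * Pr[X > eta_{j-1}].
+rng = np.random.default_rng(0)
+M, eta1 = 10, 0.01
+etas = eta1 * 2.0 ** np.arange(M)            # eta_1, ..., eta_M
+X = rng.random(100_000) * etas[-1]           # samples in [0, eta_M]
+bound = eta1 + sum(etas[j] * (X > etas[j - 1]).mean() for j in range(1, M))
+assert X.mean() <= bound
+```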
+
+
+
+Therefore, if a time step $t$ is not $\epsilon$-optimal, there exist $0 \leq i \leq \lfloor \log_2 R \rfloor$ and $2 \leq j \leq M$ such that
+
+$$
+\eta_ {j} \Pr \left[ X _ {t} ^ {(2 ^ {i})} > \eta_ {j - 1} \right] > \frac {\xi_ {i}}{M}. \tag {4}
+$$
+
+Now, the sample complexity can be bounded by the number of $(t,i,j)$ triples for which Eq. (4) holds. Following the approach of Strehl et al. (2006), for a fixed $(i,j)$ pair, instead of directly counting the number of time steps $t$ such that $\operatorname*{Pr}[X_t^{(2^i)} > \eta_{j-1}] > \frac{\xi_i}{M\eta_j}$, we count the number of time steps at which $X_t^{(2^i)} > \eta_{j-1}$. Lemma 1 provides an upper bound on the number of such $t$.
+
+# 3.4 KEY LEMMAS
+
+In this section, we present two key lemmas. Lemma 1 bounds the number of sub-optimal actions, which in turn bounds the sample complexity of our algorithm. Lemma 2 bounds a weighted sum of learning errors, i.e. of $(\hat{Q}_t - Q^*)(s,a)$, in terms of the sum and the maximum of the weights. We then show that Lemma 1 follows from Lemma 2.
+
+Lemma 1. For fixed $t$ and $\eta > 0$ , let $B_{\eta}^{(t)}$ be the event that $V^{*}(s_{t}) - Q^{*}(s_{t}, a_{t}) > \frac{\eta}{1 - \gamma}$ in step $t$ . If $\eta > 2\epsilon_{1}$ , then with probability at least $1 - \delta / 2$ ,
+
+$$
+\sum_{t=1}^{\infty} I\left[B_{\eta}^{(t)}\right] \leq \frac{SA \ln SA \ln 1/\delta}{\eta^2 (1 - \gamma)^3} \cdot \mathrm{polylog}\left(\frac{1}{\epsilon_1}, \frac{1}{1 - \gamma}\right), \tag{5}
+$$
+
+where $I[\cdot ]$ is the indicator function.
+
+Before presenting Lemma 2, we define a class of sequences that occurs in the proof.
+
+Definition 3. A sequence $(w_{t})_{t\geq 1}$ is said to be a $(C,w)$ -sequence for $C, w > 0$ , if $0\leq w_{t}\leq w$ for all $t\geq 1$ , and $\sum_{t\geq 1}w_t\leq C$ .
+
+Lemma 2. For every $(C, w)$ -sequence $(w_{t})_{t \geq 1}$ , with probability $1 - \delta / 2$ , the following holds:
+
+$$
+\sum_ {t \geq 1} w _ {t} (\hat {Q} _ {t} - Q ^ {*}) (s _ {t}, a _ {t}) \leq \frac {C \epsilon_ {1}}{1 - \gamma} + \mathcal {O} \left(\frac {\sqrt {w S A C \ell (C)}}{(1 - \gamma) ^ {2 . 5}} + \frac {w S A \ln C}{(1 - \gamma) ^ {3}} \ln \frac {1}{(1 - \gamma) \epsilon_ {1}}\right).
+$$
+
+where $\ell (C) = \iota (C)\ln \frac{1}{(1 - \gamma)\epsilon_1}$ is a log-factor.
+
+The proof of Lemma 2 is quite technical and is therefore deferred to the supplementary materials.
+
+Now we briefly explain how to prove Lemma 1 using Lemma 2; the full proof can be found in the supplementary materials. Note that since $\hat{Q}_t \geq Q^*$ and $a_t = \arg \max_a \hat{Q}_t(s_t, a)$,
+
+$$
+V ^ {*} (s _ {t}) - Q ^ {*} (s _ {t}, a _ {t}) \leq \hat {Q} _ {t} (s _ {t}, a _ {t}) - Q ^ {*} (s _ {t}, a _ {t}).
+$$
+
+We now consider the set $J = \{t : V^{*}(s_{t}) - Q^{*}(s_{t},a_{t}) > \eta (1 - \gamma)^{-1}\}$ and the $(|J|,1)$-sequence defined by $w_{t} = I[t\in J]$. We can then apply Lemma 2 to the weighted sum $\sum_{t\geq 1}w_t[V^* (s_t) - Q^* (s_t,a_t)]$. On the one hand, this quantity is at least $|J|\eta (1 - \gamma)^{-1}$. On the other hand, by Lemma 2, it is upper bounded by the weighted sum of $(\hat{Q}_t - Q^{*})(s_{t},a_{t})$. Thus we get
+
+$$
+|J| \eta (1 - \gamma)^{-1} \leq \frac{|J| \epsilon_1}{1 - \gamma} + \mathcal{O}\left(\frac{\sqrt{SA |J| \ell(|J|)}}{(1 - \gamma)^{2.5}} + \frac{SA \ln |J|}{(1 - \gamma)^3} \ln \frac{1}{(1 - \gamma)\epsilon_1}\right).
+$$
+
+Now focus on the dependence on $|J|$. The left-hand side is linear in $|J|$, whereas the right-hand side has a $\tilde{\mathcal{O}}\left(\sqrt{|J|}\right)$ dependence. This allows us to solve for an upper bound on $|J|$ with quadratic dependence on $1 / \eta$.
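+
+Setting aside the logarithmic term (which is handled by the $k \leq C'(2 + \ln k)$ argument in the appendix), this self-bounding step amounts to solving $a x \leq b\sqrt{x} + c$ as a quadratic in $\sqrt{x}$, giving $x = \mathcal{O}((b/a)^2 + c/a)$. A small sketch with abstract constants of our choosing:
+
+```python
+import math
+
+# If a*x <= b*sqrt(x) + c with a, b, c > 0, then solving the quadratic in
+# sqrt(x) gives sqrt(x) <= (b + sqrt(b^2 + 4*a*c)) / (2*a).
+def x_upper_bound(a, b, c):
+    root = (b + math.sqrt(b * b + 4 * a * c)) / (2 * a)
+    return root * root
+
+a, b, c = 2.0, 5.0, 7.0                       # abstract constants (our choice)
+x_max = x_upper_bound(a, b, c)
+assert abs(a * x_max - (b * math.sqrt(x_max) + c)) < 1e-9    # tight at x_max
+assert a * (1.01 * x_max) > b * math.sqrt(1.01 * x_max) + c  # fails just above
+```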
+
+# 3.5 PROOF FOR THEOREM 1
+
+We prove the theorem by combining Lemma 1 with Condition 2.
+
+Proof (of Theorem 1).
+
+By Lemma 1, for any $2 \leq j \leq M$, $\sum_{t=1}^{\infty} I\left[V^{*}(s_{t}) - Q^{*}(s_{t}, a_{t}) > \eta_{j-1}\right] \leq C$, where
+
+$$
+C = \frac {S A \ln S A \ln 1 / \delta}{\eta_ {j - 1} ^ {2} (1 - \gamma) ^ {5}} \cdot \tilde {P}. \tag {6}
+$$
+
+Here $\tilde{P}$ is a shorthand for polylog $\left(\frac{1}{\epsilon_1},\frac{1}{1 - \gamma}\right)$ .
+
+Let $A_{t} = I[X_{t}^{(2^{i})} \geq \eta_{j-1}]$ be a Bernoulli random variable, and let $\{\mathcal{F}_{t}\}_{t \geq 1}$ be the filtration generated by the random variables $\{(s_{\tau}, a_{\tau}): 1 \leq \tau \leq t\}$. Since $A_{t}$ is $\mathcal{F}_{t+R}$-measurable, for any $0 \leq k < R$, $\{A_{k+tR} - E[A_{k+tR} \mid \mathcal{F}_{k+tR}]\}_{t \geq 0}$ is a martingale difference sequence. For now, consider a fixed $0 \leq k < R$. By the Azuma-Hoeffding inequality, after $T = \mathcal{O}\left(\frac{C}{2^{i}} \cdot \frac{M\eta_{j}}{\xi_{i}} \ln(RML)\right)$ time steps (if that many occur) at which
+
+$$
+\Pr \left[ X _ {k + t R} ^ {(2 ^ {i})} \geq \eta_ {j - 1} \right] = \mathbb {E} [ A _ {k + t R} ] > \frac {\xi_ {i}}{M \eta_ {j}}, \tag {7}
+$$
+
+we have $\sum_{t}A_{k + tR}\geq C / 2^{i}$ with probability at least $1 - \delta /(2MRL)$.
+
+On the other hand, if $A_{k + tR} = 1$, then within $[k + tR, k + tR + R - 1]$ there must be at least $2^i$ time steps at which $V^{*}(s_{t}) - Q^{*}(s_{t}, a_{t}) > \eta_{j - 1}$. The latter event happens at most $C$ times, and the intervals $[k + tR, k + tR + R - 1]$ are disjoint. Therefore, $\sum_{t = 0}^{\infty} A_{k + tR} \leq C / 2^i$. This implies that the event described by (7) happens at most $T$ times for fixed $i$ and $j$. Via a union bound over $0 \leq k < R$, we can show that with probability $1 - \delta / (2ML)$, there are at most $RT$ time steps at which $\operatorname{Pr}\left[X_t^{(2^i)} \geq \eta_{j - 1}\right] > \xi_i / (M\eta_j)$. Thus, the number of sub-optimal steps is bounded by
+
+$$
+\begin{array}{l} \sum_{t=1}^{\infty} I\big[V^*(s_t) - V^{\pi_t}(s_t) > \epsilon\big] \\ \leq \sum_{t=1}^{\infty} \sum_{i=0}^{L} \sum_{j=2}^{M} I\left[\eta_j \operatorname*{Pr}[X_t^{(2^i)} > \eta_{j-1}] > \frac{\xi_i}{M}\right] = \sum_{i=0}^{L} \sum_{j=2}^{M} \sum_{t=1}^{\infty} I\left[\operatorname*{Pr}[X_t^{(2^i)} > \eta_{j-1}] > \frac{\xi_i}{\eta_j M}\right] \\ \leq \sum_{i=0}^{L} \sum_{j=2}^{M} \frac{SAMR \ln 1/\delta \ln SA}{\eta_j \xi_i \cdot 2^i (1 - \gamma)^5} \tilde{P} \leq \sum_{i=0}^{L} \frac{SA \cdot 2^{i+4} \ln SA \ln 1/\delta}{\epsilon_2^2 (1 - \gamma)^6} \tilde{P} \quad \text{(by definition of } \xi_i \text{ and } \eta_j\text{)} \\ \leq \frac{SAR \ln SA \ln 1/\delta}{\epsilon_2^2 (1 - \gamma)^6} \tilde{P} \leq \frac{SA \ln SA \ln 1/\delta}{\epsilon_2^2 (1 - \gamma)^7} \tilde{P}. \quad \text{(by definition of } R\text{)} \end{array}
+$$
+
+It should be stressed that throughout these derivations, $\tilde{P}$ is shorthand for an asymptotic expression rather than an exact value. Our final choices of $\epsilon_{2}$ and $\epsilon_{1}$ are $\epsilon_{2} = \frac{\epsilon}{3}$ and $\epsilon_{1} = \frac{\epsilon}{24RM\ln\frac{1}{1 - \gamma}}$. It is not hard to see that $\ln 1 / \epsilon_1 = \mathrm{poly}(\ln \frac{1}{\epsilon},\ln \frac{1}{1 - \gamma})$. This immediately implies that with probability $1 - \delta$, the number of time steps such that $(V^{*} - V^{\pi_t})(s_{t}) > \epsilon$ is
+
+$$
+\tilde {\mathcal {O}} \left(\frac {S A \ln 1 / \delta}{\epsilon^ {2} (1 - \gamma) ^ {7}}\right),
+$$
+
+where the hidden factors are polynomial in $\ln \frac{1}{\epsilon}$, $\ln \frac{1}{1 - \gamma}$ and $\ln SA$.
+
+
+
+# 4 DISCUSSION
+
+In this section, we discuss the implication of our results, and present some interesting properties of our algorithm beyond its sample complexity bound.
+
+# 4.1 COMPARISON WITH PREVIOUS RESULTS
+
+Lower bound To the best of our knowledge, the current best lower bound for worst-case sample complexity is $\Omega\left(\frac{SA}{\epsilon^2(1 - \gamma)^3}\ln 1 / \delta\right)$ due to Lattimore & Hutter (2012). The gap between our results and this lower bound lies only in the dependence on $1 / (1 - \gamma)$ and logarithmic terms of $SA$ , $1 / (1 - \gamma)$ and $1 / \epsilon$ .
+
+Model-free algorithms Previously, the best sample complexity bound for a model-free algorithm was $\tilde{\mathcal{O}}\left(\frac{SA}{\epsilon^4(1 - \gamma)^8}\right)$ (suppressing all logarithmic terms), achieved by Delayed Q-learning (Strehl et al., 2006). Our result improves this upper bound by a factor of $\frac{1}{\epsilon^2(1 - \gamma)}$ and closes the quadratic gap in $1 / \epsilon$ between Delayed Q-learning's bound and the lower bound. In fact, the following theorem shows that UCB Q-learning can indeed outperform Delayed Q-learning.
+
+Theorem 2. There exists a family of MDPs with constant $S$ and $A$, in which, with probability $1 - \delta$, Delayed Q-learning incurs a sample complexity of exploration of $\Omega \left( \frac{\epsilon^{-3}}{\ln(1 / \delta)} \right)$, assuming that $\ln(1 / \delta) < \epsilon^{-2}$.
+
+The construction of this hard MDP family is given in the supplementary material.
+
+Model-based algorithms For model-based algorithms, better sample complexity results in infinite-horizon settings have been claimed (Szita & Szepesvári, 2010). To the best of our knowledge, the best published result without further restrictions on MDPs is $\tilde{\mathcal{O}}\left(\frac{SA}{\epsilon^2(1 - \gamma)^6}\right)$, claimed by Szita & Szepesvári (2010), which is a factor of $(1 - \gamma)$ smaller than our upper bound. From the space complexity point of view, however, our algorithm is much more memory-efficient: it stores $O(SA)$ values, whereas the algorithm of Szita & Szepesvári (2010) needs $\Omega(S^2 A)$ memory to store the transition model.
+
+# 4.2 EXTENSION TO OTHER SETTINGS
+
+Due to length limits, the detailed discussion of this section is deferred to the supplementary materials.
+
+Finite horizon MDP The sample complexity of exploration bound of UCB Q-learning implies an $\tilde{\mathcal{O}}\left(\epsilon^{-2}\right)$ PAC sample complexity and an $\tilde{\mathcal{O}}\left(T^{1/2}\right)$ regret bound in finite-horizon MDPs. That is, our algorithm implies a PAC algorithm for finite-horizon MDPs. We are not aware of reductions in the opposite direction (from finite-horizon sample complexity to infinite-horizon sample complexity of exploration).
+
+Regret The reason our results imply an $\tilde{\mathcal{O}} (\sqrt{T})$ regret bound is that, after choosing $\epsilon_{1}$, it follows from the argument of Theorem 1 that with probability $1 - \delta$, for all $\epsilon_{2} > \tilde{\mathcal{O}} (\epsilon_{1} / (1 - \gamma))$, the number of $\epsilon_{2}$-suboptimal steps is bounded by
+
+$$
+\mathcal {O} \left(\frac {S A \ln S A \ln 1 / \delta}{\epsilon_ {2} ^ {2} (1 - \gamma) ^ {7}} \mathrm {p o l y l o g} \left(\frac {1}{\epsilon_ {1}}, \frac {1}{1 - \gamma}\right)\right).
+$$
+
+In contrast, Delayed Q-learning (Strehl et al., 2006) can only give an upper bound on the number of $\epsilon_{1}$-suboptimal steps after the parameter $\epsilon_{1}$ has been set.
+
+# 5 CONCLUSION
+
+Infinite-horizon MDPs with discounted reward form a setting that is arguably more difficult than other popular settings such as finite-horizon MDPs. Previously, the best sample complexity bound achieved by model-free reinforcement learning algorithms in this setting was $\tilde{\mathcal{O}}\left(\frac{SA}{\epsilon^4(1 - \gamma)^8}\right)$, due to Delayed Q-learning (Strehl et al., 2006). In this paper, we propose a variant of Q-learning that incorporates an upper confidence bound, and show that it has a sample complexity of $\tilde{\mathcal{O}}\left(\frac{SA}{\epsilon^2(1 - \gamma)^7}\right)$. This matches the best known lower bound except in the dependence on $1 / (1 - \gamma)$ and logarithmic factors.
+
+# 6 ACKNOWLEDGEMENTS
+
+The authors thank Chi Jin and Chongjie Zhang for helpful discussions. This work is supported by the National Basic Research Program of China (973 Program) (grant no. 2015CB352502), NSFC (61573026), BJNSF (L172037) and the Beijing Academy of Artificial Intelligence.
+
+# REFERENCES
+
+Mohammad Gheshlaghi Azar, Rémi Munos, Mohammad Ghavamzadeh, and Hilbert Kappen. Speedy Q-learning. In Advances in Neural Information Processing Systems, 2011.
+Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds for reinforcement learning. arXiv preprint arXiv:1703.05449, 2017.
+Ronen I. Brafman and Moshe Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3:213-231, March 2003.
+Christoph Dann, Tor Lattimore, and Emma Brunskill. Unifying PAC and regret: Uniform PAC bounds for episodic reinforcement learning. In Advances in Neural Information Processing Systems, pp. 5713-5723, 2017.
+Eyal Even-Dar and Yishay Mansour. Learning rates for Q-learning. Journal of Machine Learning Research, 5(Dec):1-25, 2003.
+Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(Apr):1563-1600, 2010.
+Chi Jin, Zeyuan Allen-Zhu, Sebastien Bubeck, and Michael I. Jordan. Is Q-learning provably efficient? In Advances in Neural Information Processing Systems, pp. 4864-4874, 2018.
+Sham Machandranath Kakade. On the sample complexity of reinforcement learning. PhD thesis, University College London, 2003.
+Tor Lattimore and Marcus Hutter. PAC bounds for discounted MDPs. In International Conference on Algorithmic Learning Theory, pp. 320-334. Springer, 2012.
+Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
+Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pp. 1928-1937, 2016.
+John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, pp. 1889-1897, 2015.
+
+Aaron Sidford, Mengdi Wang, Xian Wu, Lin Yang, and Yinyu Ye. Near-optimal time and sample complexities for solving Markov decision processes with a generative model. In Advances in Neural Information Processing Systems, pp. 5186-5196, 2018a.
+Aaron Sidford, Mengdi Wang, Xian Wu, and Yinyu Ye. Variance reduced value iteration and faster algorithms for solving Markov decision processes. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 770-787. SIAM, 2018b.
+Alexander L. Strehl and Michael L. Littman. An analysis of model-based interval estimation for Markov decision processes. Journal of Computer and System Sciences, 74(8):1309-1331, 2008.
+Alexander L. Strehl, Lihong Li, Eric Wiewiora, John Langford, and Michael L. Littman. PAC model-free reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, pp. 881-888. ACM, 2006.
+István Szita and Csaba Szepesvári. Model-based reinforcement learning with nearly tight exploration complexity bounds. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 1031-1038, 2010.
+
+# A PROOF OF LEMMA 1
+
+Lemma 1. For fixed $t$ and $\eta > 0$ , let $B_{\eta}^{(t)}$ be the event that $V^{*}(s_{t}) - Q^{*}(s_{t}, a_{t}) > \frac{\eta}{1 - \gamma}$ in step $t$ . If $\eta > 2\epsilon_{1}$ , then with probability at least $1 - \delta / 2$ ,
+
+$$
+\sum_{t=1}^{\infty} I\left[B_{\eta}^{(t)}\right] \leq \frac{SA \ln SA \ln 1/\delta}{\eta^2 (1 - \gamma)^3} \cdot \mathrm{polylog}\left(\frac{1}{\epsilon_1}, \frac{1}{1 - \gamma}\right), \tag{8}
+$$
+
+where $I[\cdot ]$ is the indicator function.
+
+Proof. When $\eta > 1$ the lemma holds trivially. Now consider the case that $\eta \leq 1$ .
+
+Let $I = \{t: V^{*}(s_{t}) - Q^{*}(s_{t}, a_{t}) > \frac{\eta}{1 - \gamma}\}$. By Lemma 2, with probability $1 - \delta / 2$,
+
+$$
+\begin{array}{l} \frac {\eta | I |}{1 - \gamma} \leq \sum_ {t \in I} \left(V ^ {*} \left(s _ {t}\right) - Q ^ {*} \left(s _ {t}, a _ {t}\right)\right) \leq \sum_ {t \in I} \left[ \left(\hat {Q} _ {t} - Q ^ {*}\right) \left(s _ {t}, a _ {t}\right) \right] \\ \leq \frac {| I | \epsilon_ {1}}{1 - \gamma} + \mathcal {O} \left(\frac {1}{(1 - \gamma) ^ {5 / 2}} \sqrt {S A | I | \ell (| I |)} + \frac {S A}{(1 - \gamma) ^ {3}} \ln | I | \ln \frac {1}{\epsilon_ {1} (1 - \gamma)}\right) \\ \leq \frac {| I | \epsilon_ {1}}{1 - \gamma} + \mathcal {O} \left(\ln \frac {1}{\epsilon_ {1} (1 - \gamma)} \cdot \left(\frac {\sqrt {S A | I | \ln \frac {S A | I |}{\delta}}}{(1 - \gamma) ^ {5 / 2}} + \frac {S A \ln | I |}{(1 - \gamma) ^ {3}}\right)\right) \\ \leq \frac {| I | \epsilon_ {1}}{1 - \gamma} + \mathcal {O} \left(\sqrt {\ln \frac {1}{\delta}} \ln \frac {1}{\epsilon_ {1} (1 - \gamma)} \cdot \left(\frac {\sqrt {S A | I | \ln S A | I |}}{(1 - \gamma) ^ {5 / 2}} + \frac {S A \ln | I |}{(1 - \gamma) ^ {3}}\right)\right) \\ \end{array}
+$$
+
+Suppose that $|I| = \frac{SAk^2}{\eta^2(1 - \gamma)^3} \ln SA$ , for some $k > 1$ . Then it follows that for some constant $C_1$
+
+$$
+\begin{array}{l} \frac {\eta | I |}{1 - \gamma} = \frac {k ^ {2} S A \ln S A}{(1 - \gamma) ^ {4} \eta} \leq 2 \frac {(\eta - \epsilon_ {1}) | I |}{1 - \gamma} \\ \leq C _ {1} \sqrt {\ln \frac {1}{\delta}} \ln \frac {1}{\epsilon_ {1} (1 - \gamma)} \left(\frac {\sqrt {S A | I | \ln (S A | I |)}}{(1 - \gamma) ^ {5 / 2}} + \frac {S A \ln | I |}{(1 - \gamma) ^ {3}}\right) \\ \leq C _ {1} \sqrt {\ln \frac {1}{\delta}} \ln \frac {1}{\epsilon_ {1} (1 - \gamma)} \left(\frac {S A k}{\eta (1 - \gamma) ^ {4}} \sqrt {\ln S A \cdot (\ln S A + \ln | I |)} + \frac {S A \ln | I |}{(1 - \gamma) ^ {3}}\right). \\ \end{array}
+$$
+
+Therefore
+
+$$
+\begin{array}{l} k ^ {2} \ln (S A) \leq C _ {1} \sqrt {\ln \frac {1}{\delta}} \ln \frac {1}{\epsilon_ {1} (1 - \gamma)} \left(k (\ln S A + \ln | I |) + \eta (1 - \gamma) \ln | I |\right) \\ \leq k C _ {1} \sqrt {\ln \frac {1}{\delta}} \ln \frac {1}{\epsilon_ {1} (1 - \gamma)} \cdot (\ln S A + 2 \ln | I |) \\ \leq k C _ {1} \sqrt {\ln \frac {1}{\delta}} \ln \frac {1}{\epsilon_ {1} (1 - \gamma)} \cdot \left(3 \ln S A + 4 \ln k + 6 \ln \frac {1}{\eta (1 - \gamma)}\right) \\ \leq 6 k C _ {1} \sqrt {\ln \frac {1}{\delta}} \ln^ {2} \frac {1}{\epsilon_ {1} (1 - \gamma)} (\ln S A + \ln e k). \\ \end{array}
+$$
+
+Let $C^\prime = \max \{2,6C_1\sqrt{\ln\frac{1}{\delta}}\ln^2\frac{1}{\epsilon_1(1 - \gamma)}\}$ . Then
+
+$$
+k \leq C ^ {\prime} (2 + \ln k). \tag {9}
+$$
+
+If $k \geq 10C' \ln C'$ , then
+
+$$
+\begin{array}{l} k - C ^ {\prime} (2 + \ln k) \geq 8 C ^ {\prime} \ln C ^ {\prime} - (2 + \ln 10) C ^ {\prime} \\ \geq 4 C ^ {\prime} (2 \ln C ^ {\prime} - 4) \geq 0, \\ \end{array}
+$$
+
+which contradicts (9). Therefore, since $C' \geq 2$ ,
+
+$$
+k \leq 10 C ^ {\prime} \ln C ^ {\prime} \leq 360 C _ {1} ^ {2} \max \left\{\ln^ {4} \frac {1}{\epsilon_ {1} (1 - \gamma)}, 20 \ln 2 \right\}. \tag {10}
+$$
+
+It immediately follows that
+
+$$
+\begin{array}{l} \left| I \right| = \frac {S A k ^ {2}}{\eta^ {2} (1 - \gamma) ^ {3}} \ln S A \qquad (11) \\ \leq \frac {S A \ln S A}{\eta^ {2} (1 - \gamma) ^ {5}} \cdot \ln \frac {1}{\delta} \cdot \mathcal {O} \left(\ln^ {8} \frac {1}{\epsilon_ {1} (1 - \gamma)}\right). \qquad (12) \\ \end{array}
+$$
+
+
+
+# B PROOF OF LEMMA 2
+
+Lemma 2. For every $(C, w)$ -sequence $(w_{t})_{t \geq 1}$ , with probability $1 - \delta / 2$ , the following holds:
+
+$$
+\sum_ {t \geq 1} w _ {t} (\hat {Q} _ {t} - Q ^ {*}) (s _ {t}, a _ {t}) \leq \frac {C \epsilon_ {1}}{1 - \gamma} + \mathcal {O} \left(\frac {\sqrt {w S A C \ell (C)}}{(1 - \gamma) ^ {2.5}} + \frac {w S A \ln C}{(1 - \gamma) ^ {3}} \ln \frac {1}{(1 - \gamma) \epsilon_ {1}}\right).
+$$
+
+where $\ell (C) = \iota (C)\ln \frac{1}{(1 - \gamma)\epsilon_1}$ is a log-factor.
+
+Fact 1. (1) The following statement holds throughout the algorithm,
+
+$$
+\hat {Q} _ {p + 1} (s, a) \leq Q _ {p + 1} (s, a).
+$$
+
+(2) For any $p$ , there exists $p' \leq p$ such that
+
+$$
+\hat {Q} _ {p + 1} (s, a) \geq Q _ {p ^ {\prime} + 1} (s, a).
+$$
+
+Proof. Both properties are results of the update rule at line 11 of Algorithm 1.
+
+
+
+Before proving lemma 2, we will prove two auxiliary lemmas.
+
+Lemma 3. The following properties hold for $\alpha_{t}^{i}$ :
+
+1. $\sqrt{\frac{1}{t}}\leq \sum_{i = 1}^{t}\alpha_{t}^{i}\sqrt{\frac{1}{i}}\leq 2\sqrt{\frac{1}{t}}$ for every $t\geq 1$
+2. $\max_{i\in [t]}\alpha_t^i\leq \frac{2H}{t}$ and $\sum_{i = 1}^{t}(\alpha_{t}^{i})^{2}\leq \frac{2H}{t}$ for every $t\geq 1$
+3. $\sum_{t = i}^{\infty}\alpha_t^i = 1 + 1 / H$ for every $i\geq 1$.
+4. $\sqrt{\frac{\iota(t)}{t}} \leq \sum_{i=1}^{t} \alpha_t^i \sqrt{\frac{\iota(i)}{i}} \leq 2\sqrt{\frac{\iota(t)}{t}}$ where $\iota(t) = \ln(c(t+1)(t+2))$ , for every $t \geq 1, c \geq 1$ .
+
+Proof. Recall that
+
+$$
+\alpha_ {t} = \frac {H + 1}{H + t}, \quad \alpha_ {t} ^ {0} = \prod_ {j = 1} ^ {t} (1 - \alpha_ {j}), \quad \alpha_ {t} ^ {i} = \alpha_ {i} \prod_ {j = i + 1} ^ {t} (1 - \alpha_ {j}).
+$$
+
+Properties 1-3 are proven by Jin et al. (2018). Now we prove the last property.
+
+On the one hand,
+
+$$
+\sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \sqrt {\frac {\iota (i)}{i}} \leq \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \sqrt {\frac {\iota (t)}{i}} \leq 2 \sqrt {\frac {\iota (t)}{t}},
+$$
+
+where the last inequality follows from property 1.
+
+The left-hand inequality is proven by induction on $t$ . For the base case, when $t = 1$ , $\alpha_{1}^{1} = 1$ and the claim holds with equality. For $t \geq 2$ , we have $\alpha_{t}^{i} = (1 - \alpha_{t})\alpha_{t - 1}^{i}$ for $1 \leq i \leq t - 1$ . It follows that
+
+$$
+\sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \sqrt {\frac {\iota (i)}{i}} = \alpha_ {t} \sqrt {\frac {\iota (t)}{t}} + (1 - \alpha_ {t}) \sum_ {i = 1} ^ {t - 1} \alpha_ {t - 1} ^ {i} \sqrt {\frac {\iota (i)}{i}} \geq \alpha_ {t} \sqrt {\frac {\iota (t)}{t}} + (1 - \alpha_ {t}) \sqrt {\frac {\iota (t - 1)}{t - 1}}.
+$$
+
+Since the function $f(t) = \iota(t) / t$ is monotonically decreasing for $t \geq 1$ and $c \geq 1$ , we have
+
+$$
+\alpha_ {t} \sqrt {\frac {\iota (t)}{t}} + (1 - \alpha_ {t}) \sqrt {\frac {\iota (t - 1)}{t - 1}} \geq \alpha_ {t} \sqrt {\frac {\iota (t)}{t}} + (1 - \alpha_ {t}) \sqrt {\frac {\iota (t)}{t}} \geq \sqrt {\frac {\iota (t)}{t}}.
+$$
+
+□
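+
+As a quick sanity check (not part of the proof), the properties of $\alpha_t^i$ above can be verified numerically; the values of $H$ and $t$ in the sketch below are arbitrary illustrative choices.
+
+```python
+# Numerical sanity check of the learning-rate weights in Lemma 3.
+import math
+
+def alphas(t, H):
+    """Return (alpha_t^0, [alpha_t^1, ..., alpha_t^t])."""
+    a = [(H + 1) / (H + j) for j in range(1, t + 1)]   # alpha_j for j = 1..t
+    weights = []
+    for i in range(1, t + 1):
+        w_i = a[i - 1]                                 # alpha_i ...
+        for j in range(i + 1, t + 1):                  # ... times prod_{j>i} (1 - alpha_j)
+            w_i *= 1.0 - a[j - 1]
+        weights.append(w_i)
+    a0 = math.prod(1.0 - x for x in a)                 # alpha_t^0
+    return a0, weights
+
+H, t = 5, 40
+a0, w = alphas(t, H)
+assert abs(a0 + sum(w) - 1.0) < 1e-9                   # the weights sum to one
+assert max(w) <= 2 * H / t and sum(x * x for x in w) <= 2 * H / t   # property 2
+s = sum(w[i - 1] * math.sqrt(1.0 / i) for i in range(1, t + 1))
+assert math.sqrt(1.0 / t) <= s <= 2 * math.sqrt(1.0 / t)            # property 1
+```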
+
+Lemma 4. With probability at least $1 - \delta /2$ , for all $p\geq 0$ and $(s,a)$ -pair,
+
+$$
+0 \leq \left(Q _ {p} - Q ^ {*}\right) (s, a) \leq \frac {\alpha_ {t} ^ {0}}{1 - \gamma} + \sum_ {i = 1} ^ {t} \gamma \alpha_ {t} ^ {i} \left(\hat {V} _ {t _ {i}} - V ^ {*}\right) \left(s _ {t _ {i} + 1}\right) + \beta_ {t}, \tag {13}
+$$
+
+$$
+0 \leq \left(\hat {Q} _ {p} - Q ^ {*}\right) (s, a), \tag {14}
+$$
+
+where $t = N_{p}(s,a)$ , $t_{i} = \tau (s,a,i)$ , and $\beta_{t} = c_{3}\sqrt{H\iota(t) / ((1 - \gamma)^{2}t)}$ .
+
+Proof. Recall that
+
+$$
+\alpha_ {t} ^ {0} = \prod_ {j = 1} ^ {t} (1 - \alpha_ {j}), \quad \alpha_ {t} ^ {i} = \alpha_ {i} \prod_ {j = i + 1} ^ {t} (1 - \alpha_ {j}).
+$$
+
+From the update rule, it can be seen that our algorithm maintains the following $Q(s,a)$ :
+
+$$
+Q _ {p} (s, a) = \alpha_ {t} ^ {0} \frac {1}{1 - \gamma} + \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \left[ r (s, a) + b _ {i} + \gamma \hat {V} _ {t _ {i}} \left(s _ {t _ {i} + 1}\right) \right].
+$$
+
+Bellman optimality equation gives:
+
+$$
+Q ^ {*} (s, a) = r (s, a) + \gamma \mathbb {P} V ^ {*} (s, a) = \alpha_ {t} ^ {0} Q ^ {*} (s, a) + \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} [ r (s, a) + \gamma \mathbb {P} V ^ {*} (s, a) ].
+$$
+
+Subtracting the two equations gives
+
+$$
+(Q _ {p} - Q ^ {*}) (s, a) = \alpha_ {t} ^ {0} \left(\frac {1}{1 - \gamma} - Q ^ {*} (s, a)\right) + \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \left[ b _ {i} + \gamma \left(\hat {V} _ {t _ {i}} - V ^ {*}\right) \left(s _ {t _ {i} + 1}\right) + \gamma \left(V ^ {*} \left(s _ {t _ {i} + 1}\right) - \mathbb {P} V ^ {*} (s, a)\right) \right].
+$$
+
+The identity above holds for arbitrary $p, s$ and $a$ . Now fix $s \in S$ , $a \in A$ and $p \in \mathbb{N}$ . Let $t = N_p(s, a)$ , $t_i = \tau(s, a, i)$ . The $t = 0$ case is trivial; we assume $t \geq 1$ below. Now consider an arbitrary fixed $k$ . Define
+
+$$
+\Delta_ {i} = \alpha_ {k} ^ {i} \cdot I [ t _ {i} < \infty ] \cdot \left(\mathbb {P} V ^ {*} - \hat {\mathbb {P}} _ {t _ {i}} V ^ {*}\right) (s, a).
+$$
+
+Let $F_{i}$ be the $\sigma$-field generated by the random variables $(s_{1},a_{1},\dots,s_{t_{i}},a_{t_{i}})$. It can be seen that $\mathbb{E}[\Delta_i|F_i] = 0$, while $\Delta_{i}$ is measurable in $F_{i + 1}$. Also, since $0\leq V^{*}(s)\leq \frac{1}{1 - \gamma}$, we have $|\Delta_i|\leq \frac{2}{1 - \gamma}$. Therefore, $(\Delta_{i})_{i}$ is a martingale difference sequence; by the Azuma-Hoeffding inequality,
+
+$$
+\Pr \left[ \left| \sum_ {i = 1} ^ {k} \Delta_ {i} \right| > \eta \right] \leq 2 \exp \left\{- \frac {\eta^ {2}}{8 (1 - \gamma) ^ {- 2} \sum_ {i = 1} ^ {k} \left(\alpha_ {k} ^ {i}\right) ^ {2}} \right\}. \tag {15}
+$$
+
+By choosing $\eta$ appropriately, we can show that with probability at least $1 - \delta / [SA(k + 1)(k + 2)]$ ,
+
+$$
+\left| \sum_ {i = 1} ^ {k} \Delta_ {i} \right| \leq \frac {2 \sqrt {2}}{1 - \gamma} \cdot \sqrt {\sum_ {i = 1} ^ {k} \left(\alpha_ {k} ^ {i}\right) ^ {2} \cdot \ln \frac {2 (k + 1) (k + 2) S A}{\delta}} \leq \frac {c _ {2}}{1 - \gamma} \sqrt {\frac {H \iota (k)}{k}}. \tag {16}
+$$
+
+Here $c_2 = 4\sqrt{2}$ , $\iota(k) = \ln \frac{(k+1)(k+2)SA}{\delta}$ . By a union bound for all $k$ , this holds for arbitrary $k > 0$ , arbitrary $s \in S$ , $a \in A$ simultaneously with probability
+
+$$
+1 - \sum_ {s ^ {\prime} \in S, a ^ {\prime} \in A} \sum_ {k = 1} ^ {\infty} \frac {\delta}{S A (k + 1) (k + 2)} = 1 - \frac {\delta}{2}.
+$$
+
+Therefore, we conclude that (16) holds for the random variable $t = N_{p}(s,a)$ and for all $p$ , with probability $1 - \delta /2$ as well.
+
+Proof of the right-hand side of (13): By the choice $b_{k} = \frac{c_{2}}{1 - \gamma}\sqrt{\frac{H\iota(k)}{k}}$ , we also know that
+
+$$
+\frac {c _ {2}}{1 - \gamma} \sqrt {\frac {H \iota (k)}{k}} \leq \sum_ {i = 1} ^ {k} \alpha_ {k} ^ {i} b _ {i} \leq \frac {2 c _ {2}}{1 - \gamma} \sqrt {\frac {H \iota (k)}{k}}.
+$$
+
+It is implied by (16) that
+
+$$
+(Q _ {p} - Q ^ {*}) (s, a) \leq \frac {\alpha_ {t} ^ {0}}{1 - \gamma} + \gamma \left| \sum_ {i = 1} ^ {t} \Delta_ {i} \right| + \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \left[ \gamma (\hat {V} _ {t _ {i}} - V ^ {*}) (s _ {t _ {i} + 1}) + b _ {i} \right]
+$$
+
+$$
+\leq \frac {\alpha_ {t} ^ {0}}{1 - \gamma} + \frac {3 c _ {2}}{1 - \gamma} \sqrt {\frac {H \iota (t)}{t}} + \sum_ {i = 1} ^ {t} \gamma \alpha_ {t} ^ {i} (\hat {V} _ {t _ {i}} - V ^ {*}) (s _ {t _ {i} + 1})
+$$
+
+(Property 4 of lemma 3)
+
+$$
+\leq \frac {\alpha_ {t} ^ {0}}{1 - \gamma} + \sum_ {i = 1} ^ {t} \gamma \alpha_ {t} ^ {i} (\hat {V} _ {t _ {i}} - V ^ {*}) (s _ {t _ {i} + 1}) + \beta_ {t}.
+$$
+
+Note that $\beta_{t} = c_{3}(1 - \gamma)^{-1}\sqrt{H\iota(t) / t}$ , where $c_{3} = 3c_{2} = 12\sqrt{2}$ .
+
+Proof of the left-hand side of (13): Now assume the event that (16) holds. We assert that $Q_{p} \geq Q^{*}$ for all $(s, a)$ and $p \leq p'$. This assertion is trivially true when $p' = 0$. Assuming it holds for $p'$, for $p = p' + 1$ we have
+
+$$
+\begin{array}{l} (Q _ {p} - Q ^ {*}) (s, a) \geq - \gamma \left| \sum_ {i = 1} ^ {t} \Delta_ {i} \right| + \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} \left[ \gamma (\hat {V} _ {t _ {i}} - V ^ {*}) (x _ {t _ {i} + 1}) + b _ {i} \right] \\ \geq \sum_ {i = 1} ^ {t} \alpha_ {t} ^ {i} b _ {i} - \gamma \left| \sum_ {i = 1} ^ {t} \Delta_ {i} \right| \geq 0. \\ \end{array}
+$$
+
+Therefore the assertion holds for $p' + 1$ as well. By induction, it holds for all $p$ .
+
+We now see that (13) holds for probability $1 - \delta / 2$ for all $p, s, a$ . Since $\hat{Q}_p(s, a)$ is always greater than $Q_{p'}(s, a)$ for some $p' \leq p$ , we know that $\hat{Q}_p(s, a) \geq Q_{p'}(s, a) \geq Q^*(s, a)$ , thus proving (14).
+
+
+
+We now give a proof for lemma 2. Recall the definition for a $(C, w)$ -sequence. A sequence $(w_{t})_{t \geq 1}$ is said to be a $(C, w)$ -sequence for $C, w > 0$ , if $0 \leq w_{t} \leq w$ for all $t \geq 1$ , and $\sum_{t \geq 1} w_{t} \leq C$ .
+
+Proof. Let $n_t = N_t(s_t, a_t)$ for simplicity; we have
+
+$$
+\begin{array}{l} \sum_ {t \geq 1} w _ {t} (\hat {Q} _ {t} - Q ^ {*}) (s _ {t}, a _ {t}) \\ \leq \sum_ {t \geq 1} w _ {t} \left(Q _ {t} - Q ^ {*}\right) \left(s _ {t}, a _ {t}\right) \\ \leq \sum_ {t \geq 1} w _ {t} \left[ \frac {\alpha_ {n _ {t}} ^ {0}}{1 - \gamma} + \beta_ {n _ {t}} + \gamma \sum_ {i = 1} ^ {n _ {t}} \alpha_ {n _ {t}} ^ {i} \left(\hat {V} _ {\tau \left(s _ {t}, a _ {t}, i\right)} - V ^ {*}\right) \left(s _ {\tau \left(s _ {t}, a _ {t}, i\right) + 1}\right) \right] \tag {17} \\ \end{array}
+$$
+
+The last inequality is due to lemma 4. Since $\alpha_{n_t}^0 = \mathbb{I}[n_t = 0]$ , the first term in the summation can be bounded by
+
+$$
+\sum_ {t \geq 1} w _ {t} \frac {\alpha_ {n _ {t}} ^ {0}}{1 - \gamma} \leq \frac {S A w}{1 - \gamma}. \tag {18}
+$$
+
+For the second term, define $u(s,a) = \sup_t N_t(s,a)$ . It follows that,
+
+$$
+\begin{array}{l} \sum_ {t \geq 1} w _ {t} \beta_ {n _ {t}} = \sum_ {s, a} \sum_ {i = 1} ^ {u (s, a)} w _ {\tau (s, a, i)} \beta_ {i} \\ \leq \sum_ {s, a} (1 - \gamma) ^ {- 1} c _ {3} \sum_ {i = 1} ^ {C _ {s, a} / w} \sqrt {\frac {H \iota (i)}{i}} \, w \qquad (19) \\ \leq 2 \sum_ {s, a} (1 - \gamma) ^ {- 1} c _ {3} \sqrt {\iota (C) H C _ {s, a} w} \qquad (20) \\ \leq 2 c _ {3} (1 - \gamma) ^ {- 1} \sqrt {w S A H C \iota (C)}. \qquad (21) \\ \end{array}
+$$
+
+where $C_{s,a} = \sum_{t\geq 1,(s_t,a_t) = (s,a)}w_t$ . Inequality (19) follows from the rearrangement inequality, since $\iota (x) / x$ is monotonically decreasing. Inequality (21) follows from Jensen's inequality.
+
+For the third term of the summation, we have
+
+$$
+\begin{array}{l} \sum_ {t \geq 1} w _ {t} \sum_ {i = 1} ^ {n _ {t}} \alpha_ {n _ {t}} ^ {i} \left(\hat {V} _ {\tau \left(s _ {t}, a _ {t}, i\right)} - V ^ {*}\right) \left(s _ {\tau \left(s _ {t}, a _ {t}, i\right) + 1}\right) \\ \leq \sum_ {t ^ {\prime} \geq 1} \left(\hat {V} _ {t ^ {\prime}} - V ^ {*}\right) \left(s _ {t ^ {\prime} + 1}\right) \left(\sum_ {\substack {t = t ^ {\prime} + 1 \\ \left(s _ {t}, a _ {t}\right) = \left(s _ {t ^ {\prime}}, a _ {t ^ {\prime}}\right)}} ^ {\infty} \alpha_ {n _ {t}} ^ {n _ {t ^ {\prime}}} w _ {t}\right). \tag{23} \\ \end{array}
+$$
+
+Define
+
+$$
+w^{\prime}_{t^{\prime} + 1} = \sum_{\substack{t = t^{\prime} + 1\\ (s_{t},a_{t}) = (s_{t^{\prime}},a_{t^{\prime}})}}^{\infty}\alpha^{n_{t^{\prime}}}_{n_{t}}w_{t}.
+$$
+
+We claim that $(w^{\prime}_{t})_{t \geq 1}$ is a $(C, (1 + \frac{1}{H})w)$ -sequence. We now prove this claim. By lemma 3, for any $t' \geq 0$ ,
+
+$$
+w _ {t ^ {\prime} + 1} ^ {\prime} \leq w \sum_ {j = n _ {t ^ {\prime}}} ^ {\infty} \alpha_ {j} ^ {n _ {t ^ {\prime}}} = (1 + 1 / H) w.
+$$
+
+By $\sum_{j=0}^{i} \alpha_{i}^{j} = 1$ , we have $\sum_{t' \geq 1} w_{t' + 1}' \leq \sum_{t \geq 1} w_{t} \leq C$ . This proves the claim. It follows from (23) that
+
+$$
+\begin{array}{l} \sum_ {t \geq 1} w _ {t + 1} ^ {\prime} \left(\hat {V} _ {t} - V ^ {*}\right) (s _ {t + 1}) \\ = \sum_ {t \geq 1} w _ {t + 1} ^ {\prime} \left(\hat {V} _ {t + 1} - V ^ {*}\right) \left(s _ {t + 1}\right) + \sum_ {t \geq 1} w _ {t + 1} ^ {\prime} \left(\hat {V} _ {t} - \hat {V} _ {t + 1}\right) \left(s _ {t + 1}\right) \qquad (24) \\ \leq \sum_ {t \geq 1} w _ {t + 1} ^ {\prime} \left(\hat {V} _ {t + 1} - V ^ {*}\right) \left(s _ {t + 1}\right) + \sum_ {t \geq 1} w _ {t + 1} ^ {\prime} \left(2 \alpha_ {n _ {t} + 1} \frac {1}{1 - \gamma}\right) \qquad (25) \\ \leq \sum_ {t \geq 1} w _ {t + 1} ^ {\prime} \left(\hat {V} _ {t + 1} - V ^ {*}\right) \left(s _ {t + 1}\right) + \mathcal {O} \left(\frac {w S A H}{1 - \gamma} \ln C\right) \qquad (26) \\ \leq \sum_ {t \geq 1} w _ {t + 1} ^ {\prime} \left(\hat {Q} _ {t + 1} - Q ^ {*}\right) \left(s _ {t + 1}, a _ {t + 1}\right) + \mathcal {O} \left(\frac {w S A H}{1 - \gamma} \ln C\right) \qquad (27) \\ \end{array}
+$$
+
+Inequality (25) comes from the update rule of our algorithm. Inequality (26) comes from the fact that $\alpha_{t} = (H + 1) / (H + t)\leq H / t$ and Jensen's inequality. More specifically, let $C_{s,a}^{\prime} = \sum_{t\geq 1,(s_t,a_t) = (s,a)}w_{t + 1}^{\prime}$ and $w^{\prime} = w(1 + 1 / H)$ . Then
+
+$$
+\sum_ {t \geq 1} w _ {t + 1} ^ {\prime} \alpha_ {n _ {t} + 1} \leq \sum_ {s, a} \sum_ {n = 1} ^ {C _ {s, a} ^ {\prime} / w ^ {\prime}} w ^ {\prime} \frac {H}{n} \leq \sum_ {s, a} H w ^ {\prime} \ln \left(C _ {s, a} ^ {\prime} / w\right) \leq 2 S A H w \ln C.
+$$
+
+Putting (18), (21) and (27) together, we have,
+
+$$
+\begin{array}{l} \sum_ {t \geq 1} w _ {t} (\hat {Q} _ {t} - Q ^ {*}) (s _ {t}, a _ {t}) \\ \leq 2 c _ {3} \frac {\sqrt {w S A H C \iota (C)}}{1 - \gamma} + \mathcal {O} \left(\frac {w S A H}{1 - \gamma} \ln C\right) + \gamma \sum_ {t \geq 1} w _ {t + 1} ^ {\prime} \left(\hat {Q} _ {t + 1} - Q ^ {*}\right) \left(s _ {t + 1}, a _ {t + 1}\right). \tag {28} \\ \end{array}
+$$
+
+Observe that the third term is another weighted sum of the same form as (17). Therefore, we can unroll this term repeatedly with changing weight sequences. Denote the original weight sequence by $\{w_{t}^{(0)}\}_{t\geq 1}$ , and let $\{w_{t}^{(k)}\}_{t\geq 1}$ denote the weight sequence after unrolling $k$ times. Let $w^{(k)} = w\cdot (1 + 1 / H)^k$ . Then $\{w_t^{(k)}\}_{t\geq 1}$ is a $(C,w^{(k)})$ -sequence. Suppose that we unroll $H$ times. Then
+
+$$
+\begin{array}{l} \sum_ {t \geq 1} w _ {t} (\hat {Q} _ {t} - Q ^ {*}) (s _ {t}, a _ {t}) \\ \leq 2 c _ {3} \frac {\sqrt {w ^ {(H)} S A H C \iota (C)}}{(1 - \gamma) ^ {2}} + \mathcal {O} \left(\frac {w ^ {(H)} S A H}{(1 - \gamma) ^ {2}} \ln C\right) + \gamma^ {H} \sum_ {t \geq 1} w _ {t} ^ {(H)} \left(\hat {Q} _ {t} - Q ^ {*}\right) (s _ {t}, a _ {t}) \\ \leq 2 c _ {3} \frac {\sqrt {w ^ {(H)} S A H C \iota (C)}}{(1 - \gamma) ^ {2}} + \mathcal {O} \left(\frac {w ^ {(H)} S A H}{(1 - \gamma) ^ {2}} \ln C\right) + \gamma^ {H} \frac {C}{1 - \gamma}. \\ \end{array}
+$$
+
+We set $H = \frac{\ln 1 / ((1 - \gamma)\epsilon_1)}{\ln 1 / \gamma} \leq \frac{\ln 1 / ((1 - \gamma)\epsilon_1)}{1 - \gamma}$ . It follows that $w^{(H)} = (1 + 1 / H)^H w^{(0)} \leq ew^{(0)}$ , and that $\gamma^H\frac{C}{1 - \gamma} \leq C\epsilon_1$ . Also, let $\ell(C) = \iota(C)\ln((1 - \gamma)^{-1}\epsilon_1^{-1})$ . Therefore,
+
+$$
+\sum_ {t \geq 1} w _ {t} \left(\hat {Q} _ {t} - Q ^ {*}\right) \left(s _ {t}, a _ {t}\right) \leq \frac {C \epsilon_ {1}}{1 - \gamma} + \mathcal {O} \left(\frac {\sqrt {w S A C \ell (C)}}{(1 - \gamma) ^ {2.5}} + \frac {w S A}{(1 - \gamma) ^ {3}} \ln C \ln \frac {1}{(1 - \gamma) \epsilon_ {1}}\right). \tag {29}
+$$
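+
+For completeness, the two facts used about this choice of $H$ can be verified directly (ignoring the rounding of $H$ to an integer):
+
+$$
+\gamma^{H} = \exp \left(- H \ln \frac{1}{\gamma}\right) = \exp \left(- \ln \frac{1}{(1 - \gamma)\epsilon_{1}}\right) = (1 - \gamma)\epsilon_{1}, \qquad \text{hence} \qquad \gamma^{H} \frac{C}{1 - \gamma} \leq C\epsilon_{1},
+$$
+
+while $(1 + 1/H)^{H} \leq e$ gives $w^{(H)} \leq e\, w^{(0)}$ .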
+
+
+
+# C EXTENSION TO OTHER SETTINGS
+
+First we define a mapping from a finite horizon MDP to an infinite horizon MDP so that our algorithm can be applied. For an arbitrary finite horizon MDP $\mathcal{M} = (S, A, H, r_h(s, a), p_h(s' \mid s, a))$ , where $H$ is the length of an episode, the corresponding infinite horizon MDP $\bar{\mathcal{M}} = (\bar{S}, \bar{A}, \gamma, \bar{r}(\bar{s}, \bar{a}), \bar{p}(\bar{s}' \mid \bar{s}, \bar{a}))$ is defined as follows:
+
+- $\bar{S} = S \times [H], \bar{A} = A$ ;
+- $\gamma = (1 - 1 / H)$
+- for a state $s$ at step $h$ , let $\bar{s}_{s,h}$ be the corresponding state. For any action $a$ and next state $s'$ , define $\bar{r}(\bar{s}_{s,h},a) = \gamma^{H - h + 1}r_h(s,a)$ and $\bar{p}(\bar{s}_{s',h + 1} \mid \bar{s}_{s,h},a) = p_h(s' \mid s,a)$ . And for $h = H$ , set $\bar{r}(\bar{s}_{s,h},a) = 0$ and $\bar{p}(\bar{s}_{s',1} \mid \bar{s}_{s,h},a) = I[s' = s_1]$ for a fixed starting state $s_1$ .
+
+Let $\bar{V}_t$ be the value function in $\bar{\mathcal{M}}$ at time $t$ and $V_h^k$ the value function in $\mathcal{M}$ at episode $k$ , step $h$ . It follows that $\bar{V}^{*}(\bar{s}_{s_{1},1}) = \frac{\gamma^{H}}{1 - \gamma^{H}} V_{1}^{*}(s_{1})$ . The policy mapping is defined as $\pi_h(s) = \bar{\pi} (\bar{s}_{s,h})$ for a policy $\bar{\pi}$ in $\bar{\mathcal{M}}$ . Value functions in the MDPs $\mathcal{M}$ and $\bar{\mathcal{M}}$ are closely related in the sense that any $\epsilon$ -optimal policy $\bar{\pi}$ of $\bar{\mathcal{M}}$ corresponds to an $(\epsilon/\gamma^{H})$ -optimal policy $\pi$ in $\mathcal{M}$ (see section C.1 for a proof). Note that here $\gamma^{H} = (1 - 1 / H)^{H} = \mathcal{O}(1)$ is a constant.
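+
+The relation $\bar{V}^{*}(\bar{s}_{s_1,1}) = \frac{\gamma^H}{1-\gamma^H}V_1^*(s_1)$ can be checked numerically on a toy instance. The sketch below is not from the paper: the sizes $S$, $A$, $H$ and the random rewards and transitions are arbitrary illustrative choices, with the step- $H$ reward set to zero to match the mapping's convention.
+
+```python
+# Toy numerical check of the finite-to-infinite-horizon mapping.
+import random
+
+random.seed(0)
+S, A, H = 2, 2, 4
+gamma = 1.0 - 1.0 / H
+
+# r[h][s][a] and p[h][s][a][s'] for h = 0..H-1 (h shifted down by one)
+r = [[[random.random() for _ in range(A)] for _ in range(S)] for _ in range(H)]
+for s in range(S):
+    for a in range(A):
+        r[H - 1][s][a] = 0.0                       # reward at step H is zero
+p = [[[None] * A for _ in range(S)] for _ in range(H)]
+for h in range(H):
+    for s in range(S):
+        for a in range(A):
+            w = [random.random() for _ in range(S)]
+            p[h][s][a] = [x / sum(w) for x in w]
+
+# Finite-horizon optimal value V_1^*(s_1) by backward induction.
+V = [0.0] * S
+for h in reversed(range(H)):
+    V = [max(r[h][s][a] + sum(p[h][s][a][s2] * V[s2] for s2 in range(S))
+             for a in range(A)) for s in range(S)]
+V1_star = V[0]                                     # starting state s_1 = 0
+
+# Infinite-horizon bar-MDP over states (s, h): r_bar = gamma^(H-h) * r_h
+# (with the shifted h), and step H loops back to (s_1, 1).
+Vb = {(s, h): 0.0 for s in range(S) for h in range(H)}
+for _ in range(2000):                              # value iteration
+    Vb = {(s, h): max(
+        (gamma ** (H - h)) * r[h][s][a] + gamma * (
+            sum(p[h][s][a][s2] * Vb[(s2, h + 1)] for s2 in range(S))
+            if h < H - 1 else Vb[(0, 0)])
+        for a in range(A)) for (s, h) in Vb}
+
+lhs = Vb[(0, 0)]
+rhs = gamma ** H / (1.0 - gamma ** H) * V1_star
+assert abs(lhs - rhs) < 1e-6       # V_bar^* = gamma^H/(1-gamma^H) * V_1^*
+```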
+
+For any $\epsilon > 0$ , by running our algorithm on $\bar{\mathcal{M}}$ for $\tilde{\mathcal{O}}\left(\frac{3SAH^9}{\epsilon^2}\right)$ time steps, the starting state $s_1$ is visited at least $\tilde{\mathcal{O}}\left(\frac{3SAH^8}{\epsilon^2}\right)$ times, and at most $1/3$ of these visits are not $\epsilon$ -optimal. If we select a policy uniformly at random from the policies $\pi^{tH+1}$ for $0 \leq t < T/H$ (where $T$ is the total number of time steps), then with probability at least $2/3$ we obtain an $\epsilon$ -optimal policy. Therefore the PAC sample complexity is $\tilde{\mathcal{O}}\left(\epsilon^{-2}\right)$ after hiding $S, A, H$ terms.
+
+On the other hand, we want to show that over $K = T/H$ episodes,
+
+$$
+\mathrm{Regret} (T) = \sum_ {k = 1} ^ {T / H} \left[ V ^ {*} \left(s _ {1}\right) - V _ {1} ^ {k} \left(s _ {1}\right) \right] \propto T ^ {1 / 2}.
+$$
+
+The reason why our algorithm can have a better reduction from regret to PAC is that, after choosing $\epsilon_{1}$ , it follows from the argument of theorem 1 that for all $\epsilon_{2} > \tilde{\mathcal{O}} (\epsilon_{1} / (1 - \gamma))$ , the number of $\epsilon_{2}$ -suboptimal steps is bounded by
+
+$$
+\mathcal {O} \left(\frac {S A \ln S A \ln 1 / \delta}{\epsilon_ {2} ^ {2} (1 - \gamma) ^ {7}} \mathrm{polylog} \left(\frac {1}{\epsilon_ {1}}, \frac {1}{1 - \gamma}\right)\right)
+$$
+
+with probability $1 - \delta$ . In contrast, delayed Q-learning can only give an upper bound on $\epsilon_{1}$ -suboptimal steps after setting parameter $\epsilon_{1}$ .
+
+Formally, let $X_{k} = V^{*}(s_{1}) - V_{1}^{k}(s_{1})$ be the regret of the $k$ -th episode. For any $T$ , set $\epsilon_{1} = \sqrt{SA / T}$ and $\epsilon_{2} = \tilde{\mathcal{O}}(\epsilon_{1} / (1 - \gamma))$ . Let $M = \lceil \log_2\frac{1}{\epsilon_2(1 - \gamma)}\rceil$ . It follows that
+
+$$
+\begin{array}{l} \mathrm{Regret} (T) \leq T \epsilon_ {2} + \sum_ {i = 1} ^ {M} \left| \left\{ k: X _ {k} \geq \epsilon_ {2} \cdot 2 ^ {i - 1} \right\} \right| \epsilon_ {2} \cdot 2 ^ {i} \\ \leq \tilde {\mathcal {O}} \left(T \epsilon_ {2} + \sum_ {i = 1} ^ {M} \frac {S A \ln 1 / \delta}{\epsilon_ {2} \cdot 2 ^ {i - 2}}\right) \\ \leq \tilde {\mathcal {O}} \left(\sqrt {S A T} \ln 1 / \delta\right) \\ \end{array}
+$$
+
+with probability $1 - \delta$ . Note that the $\tilde{\mathcal{O}}$ notation hides $\mathrm{poly}(1 / (1 - \gamma), \log (1 / \epsilon_1))$ factors, which are, by our reduction, $\mathrm{poly}(H, \log T, \log S, \log A)$ .
+
+# C.1 CONNECTION BETWEEN VALUE FUNCTIONS
+
+Recall that our MDP mapping from $\mathcal{M} = (S, A, H, r_h(s, a), p_h(s' \mid s, a))$ to $\bar{\mathcal{M}} = (\bar{S}, \bar{A}, \gamma, \bar{r}(\bar{s}, \bar{a}), \bar{p}(\bar{s}' \mid \bar{s}, \bar{a}))$ is defined as,
+
+- $\bar{S} = S \times [H], \bar{A} = A$ ;
+- $\gamma = (1 - 1 / H)$
+- for a state $s$ at step $h$ , let $\bar{s}_{s,h}$ be the corresponding state. For any action $a$ and next state $s'$ , define $\bar{r}(\bar{s}_{s,h}, a) = \gamma^{H - h + 1} r_h(s, a)$ and $\bar{p}(\bar{s}_{s',h + 1} \mid \bar{s}_{s,h}, a) = p_h(s' \mid s, a)$ . And for $h = H$ , set $\bar{r}(\bar{s}_{s,h}, a) = 0$ and $\bar{p}(\bar{s}_{s',1} \mid \bar{s}_{s,h}, a) = I[s' = s_1]$ for a fixed starting state $s_1$ .
+
+For a trajectory $\{(\bar{s}_{s_1,1},\bar{a}_1),(\bar{s}_{s_2,2},\bar{a}_2),\dots \}$ in $\bar{\mathcal{M}}$ , let $\{(s_1,a_1),(s_2,a_2),\dots \}$ be the corresponding trajectory in $\mathcal{M}$ . Note that $\mathcal{M}$ has a unique fixed starting state $s_1$ , which means that $s_{tH + 1} = s_{1}$ for all $t\geq 0$ . Denote the corresponding policy of $\bar{\pi}^t$ by $\pi^t$ (which may be non-stationary); then we have
+
+$$
+\begin{array}{l} \bar {V} ^ {\bar {\pi} ^ {t}} \left(\bar {s} _ {s _ {1}, 1}\right) = \mathbb {E} \left[ \bar {r} \left(\bar {s} _ {s _ {1}, 1}, \bar {a} _ {1}\right) + \gamma \bar {r} \left(\bar {s} _ {s _ {2}, 2}, \bar {a} _ {2}\right) + \dots + \gamma^ {H - 2} \bar {r} \left(\bar {s} _ {s _ {H - 1}, H - 1}, \bar {a} _ {H - 1}\right) + \gamma^ {H} \bar {V} ^ {\bar {\pi} ^ {t + H}} \left(\bar {s} _ {s _ {H + 1}, 1}\right) \right] \\ = \gamma^ {H} \mathbb {E} \left[ r _ {1} (s _ {1}, a _ {1}) + r _ {2} (s _ {2}, a _ {2}) + \dots + r _ {H - 1} (s _ {H - 1}, a _ {H - 1}) + \bar {V} ^ {\bar {\pi} ^ {t + H}} (\bar {s} _ {s _ {H + 1}, 1}) \right] \\ = \gamma^ {H} V ^ {\pi^ {t}} (s _ {1}) + \gamma^ {H} \bar {V} ^ {\bar {\pi} ^ {t + H}} (\bar {s} _ {s _ {1}, 1}). \\ \end{array}
+$$
+
+Then for a stationary policy $\bar{\pi}$ , we can conclude $\bar{V}^{\bar{\pi}}(\bar{s}_{s_1,1}) = \frac{\gamma^H}{1 - \gamma^H} V^{\pi}(s_1)$ . Since the optimal policy $\bar{\pi}^*$ is stationary, we have $\bar{V}^{*}(\bar{s}_{s_{1},1}) = \frac{\gamma^{H}}{1 - \gamma^{H}} V^{*}(s_{1})$ .
+
+By definition, $\bar{\pi}^t$ being $\epsilon$ -optimal at time step $t$ means that
+
+$$
+\bar {V} ^ {\bar {\pi} ^ {t}} (\bar {s} _ {s _ {1}, 1}) \geq \bar {V} ^ {*} (\bar {s} _ {s _ {1}, 1}) - \epsilon .
+$$
+
+It follows that
+
+$$
+\gamma^ {H} V ^ {\pi^ {t}} (s _ {1}) + \gamma^ {H} \bar {V} ^ {\bar {\pi} ^ {t + H}} (\bar {s} _ {s _ {1}, 1}) = \bar {V} ^ {\bar {\pi} ^ {t}} (\bar {s} _ {s _ {1}, 1}) \geq \bar {V} ^ {*} (\bar {s} _ {s _ {1}, 1}) - \epsilon ,
+$$
+
+hence
+
+$$
+\gamma^ {H} V ^ {\pi^ {t}} (s _ {1}) \geq (1 - \gamma^ {H}) \bar {V} ^ {*} (\bar {s} _ {s _ {1}, 1}) + \gamma^ {H} \left(\bar {V} ^ {*} (\bar {s} _ {s _ {1}, 1}) - \bar {V} ^ {\bar {\pi} ^ {t + H}} (\bar {s} _ {s _ {1}, 1})\right) - \epsilon \geq (1 - \gamma^ {H}) \bar {V} ^ {*} (\bar {s} _ {s _ {1}, 1}) - \epsilon .
+$$
+
+Therefore we have
+
+$$
+V ^ {\pi^ {t}} (s _ {1}) \geq \frac {1 - \gamma^ {H}}{\gamma^ {H}} \bar {V} ^ {*} (\bar {s} _ {s _ {1}, 1}) - \epsilon / \gamma^ {H} = V ^ {*} (s _ {1}) - \epsilon / \gamma^ {H},
+$$
+
+which means that $\pi^t$ is an $(\epsilon/\gamma^H)$ -optimal policy.
+
+# D A HARD INSTANCE FOR DELAYED Q-LEARNING
+
+In this section, we prove Theorem 2 regarding the performance of Delayed Q-learning.
+
+Theorem 2. There exists a family of MDPs with constant $S$ and $A$ , in which with probability $1 - \delta$ , Delayed $Q$ -learning incurs sample complexity of exploration of $\Omega \left( \frac{\epsilon^{-3}}{\ln(1 / \delta)} \right)$ , assuming that $\ln(1 / \delta) < \epsilon^{-2}$ .
+
+
+Figure 1: The MDP family. Actions are denoted by arrows. Actions shown in red have reward 1; all others have reward 0.
+
+Proof. For each $0 < \epsilon < \frac{1}{10}$ , consider the following MDP (see also Fig. 1): state space is $\mathcal{S} = \{a, b, c\}$ while action set is $\mathcal{A} = \{x, y\}$ ; transition probabilities are $P(b|a, y) = 1 - 10\epsilon$ , $P(c|a, y) = 10\epsilon$ , $P(b|a, x) = 1$ , $P(a|b, \cdot) = P(a|c, \cdot) = 1$ . Rewards are all 1, except $R(c, \cdot) = 0$ .
+
+Assume that Delayed Q-learning is run on this MDP starting from state $a$ , with discount $\gamma > \frac{1}{2}$ and precision set to $\epsilon$ . Denote the $Q$ value maintained by the algorithm by $\hat{Q}$ . Without loss of generality, assume that the initial tie-breaking favors action $y$ when comparing $\hat{Q}(a, x)$ and $\hat{Q}(a, y)$ . In that case, unless $\hat{Q}(a, y)$ is updated, the agent will always choose $y$ in state $a$ . Since $Q^{*}(a, x) - Q^{*}(a, y) = 10\epsilon \gamma > \epsilon$ , choosing $y$ at state $a$ implies that the timestep is not $\epsilon$ -optimal. In other words, the sample complexity of exploration is at least the number of times the agent visits $a$ before the first update of $\hat{Q}(a, y)$ .
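+
+The optimality gap $Q^{*}(a,x) - Q^{*}(a,y) = 10\epsilon\gamma$ can be verified by value iteration on this three-state instance; the sketch below uses arbitrary illustrative values of $\gamma$ and $\epsilon$ satisfying $\gamma > 1/2$ and $0 < \epsilon < 1/10$ .
+
+```python
+# Value-iteration check of the optimality gap on the three-state instance.
+gamma, eps = 0.9, 0.05
+
+def q(s, act, V):
+    """One-step Bellman backup Q(s, act) under value estimates V."""
+    if s == 'a':
+        if act == 'x':
+            return 1.0 + gamma * V['b']                    # a --x--> b w.p. 1
+        return 1.0 + gamma * ((1 - 10 * eps) * V['b'] + 10 * eps * V['c'])
+    return (1.0 if s == 'b' else 0.0) + gamma * V['a']     # R(b,.)=1, R(c,.)=0
+
+V = {s: 0.0 for s in 'abc'}
+for _ in range(2000):                                      # value iteration
+    V = {s: max(q(s, 'x', V), q(s, 'y', V)) for s in 'abc'}
+
+gap = q('a', 'x', V) - q('a', 'y', V)
+assert abs(gap - 10 * eps * gamma) < 1e-9                  # gap = 10*eps*gamma
+```
+
+Since $V^{*}(b) - V^{*}(c) = 1$ (both states transit to $a$ , differing only in the immediate reward), the gap is exactly $10\epsilon\gamma$ , which exceeds $\epsilon$ whenever $\gamma > 1/10$ .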
+
+In the Delayed Q-learning algorithm, $\hat{Q} (\cdot ,\cdot)$ is initialized to $1 / (1 - \gamma)$ . Therefore, $\hat{Q} (a,y)$ can only be updated after $\max_{a'} \hat{Q} (c,a')$ is updated (and becomes smaller than $1 / (1 - \gamma)$ ). According to the algorithm, this can only happen if $c$ is visited $m = \Omega \left(\frac{1}{\epsilon^2}\right)$ times.
+
+However, each time the agent visits $a$ , there is probability at most $10\epsilon$ of transiting to $c$ . Let $t_0 = m / (10\epsilon C)$ , where $C = 3\ln \frac{1}{\delta} + 1$ ; $\delta$ is chosen such that $C \leq m$ . In the first $2t_0$ timesteps, $a$ is visited $t_0$ times. By the Chernoff bound, with probability $1 - \delta$ , state $c$ is visited fewer than $m$ times. In that case, $\hat{Q}(a, y)$ is not updated in the first $2t_0$ timesteps. Therefore, with probability $1 - \delta$ , the sample complexity of exploration is at least
+
+$$
+t _ {0} = \Omega \left(\frac {1}{\epsilon^ {3} (\ln 1 / \delta)}\right).
+$$
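+
+The Chernoff step can also be illustrated by simulation. In the standalone sketch below, $\epsilon$ and $\delta$ are arbitrary illustrative values, and $m = 1/\epsilon^2$ stands in for the $\Omega(1/\epsilon^2)$ update threshold of Delayed Q-learning; each visit to $a$ is a Bernoulli $(10\epsilon)$ trial for reaching $c$ .
+
+```python
+# Monte Carlo illustration of the Chernoff step (illustrative eps, delta).
+import math
+import random
+
+random.seed(1)
+eps, delta = 0.01, 0.1
+m = int(1 / eps ** 2)                 # visits to c needed before an update
+C = 3 * math.log(1 / delta) + 1
+t0 = int(m / (10 * eps * C))          # visits to a in the first 2*t0 steps
+
+for _ in range(200):                  # repeat the 2*t0-step experiment
+    visits_to_c = sum(random.random() < 10 * eps for _ in range(t0))
+    assert visits_to_c < m            # stays far below the threshold m
+```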
+
+When $\ln (1 / \delta) < \epsilon^{-2}$ , it can be seen that $C = 3\ln \frac{1}{\delta} + 1 < \frac{4}{\epsilon^2} < m$ .
\ No newline at end of file
diff --git a/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/images.zip b/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f648d2c91b09afb991369d71fbce8db94b7c4475
--- /dev/null
+++ b/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dc11d2e49e32ed0db0ca769055174f3d0cc20552847a5a32eb5b2c4edb2ceac0
+size 799377
diff --git a/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/layout.json b/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b028d1c4c8a6db60301a47ada4726d02dfd03d1d
--- /dev/null
+++ b/qlearningwithucbexplorationissampleefficientforinfinitehorizonmdp/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3a74340db1a94f384262ace41b13711dfe70d895779a47475be829b4c2f1a3ec
+size 990241
diff --git a/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/5438c4ce-ac10-40f8-9c1c-64a3262feb79_content_list.json b/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/5438c4ce-ac10-40f8-9c1c-64a3262feb79_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d61ea95ecacca5de6b044838e611aabebb0d38f6
--- /dev/null
+++ b/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/5438c4ce-ac10-40f8-9c1c-64a3262feb79_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a82b3cfdbc5686e1283a0743fc80163047824f3a927b248ff66069df1197618
+size 237038
diff --git a/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/5438c4ce-ac10-40f8-9c1c-64a3262feb79_model.json b/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/5438c4ce-ac10-40f8-9c1c-64a3262feb79_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..544c66c1bf5f1050bfae2048c0a5923f5e68d77b
--- /dev/null
+++ b/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/5438c4ce-ac10-40f8-9c1c-64a3262feb79_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3dfdac6b4a45f03a453cb3c7b1db5713b060a9e9e570d796e5f76c390b2d0d64
+size 256496
diff --git a/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/5438c4ce-ac10-40f8-9c1c-64a3262feb79_origin.pdf b/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/5438c4ce-ac10-40f8-9c1c-64a3262feb79_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..15042b50ef85b4e46d94db64b7e3b4d73148575a
--- /dev/null
+++ b/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/5438c4ce-ac10-40f8-9c1c-64a3262feb79_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c45e89ee41ad1c8a34d46096eccbd56d8e7de285f2bc5a2b4dedd91694978ff1
+size 8773790
diff --git a/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/full.md b/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3394ca065be016a74842b8af1e454c40492cd064
--- /dev/null
+++ b/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/full.md
@@ -0,0 +1,853 @@
+# QUANTIFYING POINT-PREDICTION UNCERTAINTY IN NEURAL NETWORKS VIA RESIDUAL ESTIMATION WITH AN I/O KERNEL
+
+Xin Qiu
+
+Cognizant
+
+qiuxin.nju@gmail.com
+
+Elliot Meyerson
+
+Cognizant
+
+elliot.meyerson@cognizant.com
+
+Risto Miikkulainen
+
+Cognizant
+
+The University of Texas at Austin
+
+risto@cognizant.com
+
+# ABSTRACT
+
+Neural Networks (NNs) have been extensively used for a wide spectrum of real-world regression tasks, where the goal is to predict a numerical outcome such as revenue, effectiveness, or a quantitative result. In many such tasks, the point prediction is not enough: the uncertainty (i.e. risk or confidence) of that prediction must also be estimated. Standard NNs, which are most often used in such tasks, do not provide uncertainty information. Existing approaches address this issue by combining Bayesian models with NNs, but these models are hard to implement, more expensive to train, and usually do not predict as accurately as standard NNs. In this paper, a new framework (RIO) is developed that makes it possible to estimate uncertainty in any pretrained standard NN. The behavior of the NN is captured by modeling its prediction residuals with a Gaussian Process, whose kernel includes both the NN's input and its output. The framework is evaluated in twelve real-world datasets, where it is found to (1) provide reliable estimates of uncertainty, (2) reduce the error of the point predictions, and (3) scale well to large datasets. Given that RIO can be applied to any standard NN without modifications to model architecture or training pipeline, it provides an important ingredient for building real-world NN applications.
+
+# 1 INTRODUCTION
+
+Nowadays, Neural Networks (NNs) are arguably the most popular machine learning tool in the Artificial Intelligence (AI) community. Researchers and practitioners have applied NNs to a wide variety of fields, including manufacturing (Bergmann et al., 2014), bioinformatics (LeCun et al., 2015), physics (Baldi et al., 2014), finance (Niaki & Hoseinzade, 2013), chemistry (Anjos et al., 2015), healthcare (Shahid et al., 2019), etc. Although standard NNs are good at making a point prediction (a single outcome) for supervised learning tasks, they are unable to provide uncertainty information about their predictions. For real-world decision makers, representing prediction uncertainty is of crucial importance (Krzywinski & Altman, 2013; Ghahramani, 2015). For example, in the case of regression, providing a $95\%$ confidence interval around the prediction allows the decision maker to anticipate the possible outcomes with explicit probability. In contrast, simply returning a single point prediction imposes increased risks on decision making: e.g., a medical treatment that is predicted to be effective but is actually risky may be adopted overconfidently if no uncertainty information accompanies the prediction.
+
+Conventional Bayesian models such as Gaussian Processes (GP) (Rasmussen & Williams, 2006) offer a mathematically grounded approach to reason about the predictive uncertainty, but often come with a prohibitive computational cost and lower prediction performance compared to NNs (Gal & Ghahramani, 2016). As a potential solution, considerable research has been devoted to the combination of Bayesian models and NNs (see Section 2 for a detailed review of such approaches),
+
+aiming to overcome the downsides of each. However, from the classical Bayesian Neural Network (Neal, 1996), in which a distribution over weights is learned, to the recent Neural Processes (Garnelo et al., 2018a;b; Kim et al., 2019), in which a distribution over functions is defined, all such methods require significant modifications to the model infrastructure and training pipeline. Compared to standard (non-Bayesian) NNs, these new models are often computationally slower to train and harder to implement (Gal & Ghahramani, 2016; Lakshminarayanan et al., 2017), creating tremendous difficulty for practical use. Gal & Ghahramani (2016) derived a theoretical tool to extract uncertainty information from dropout training; however, the method can only be applied to dropout models, and requires changes to their internal inference pipeline. Quantifying point-prediction uncertainty in standard NNs, which are overwhelmingly popular in practical applications, still remains a challenging problem with significant potential impact.
+
+To circumvent the above issues, this paper presents a new framework that can quantitatively estimate the point-prediction uncertainty of standard NNs without any modifications to the model structure or training pipeline. The proposed approach works as a supporting tool that can augment any pretrained NN without retraining them. The idea is to capture the behavior of the NN by estimating its prediction residuals using a modified GP, which uses a new composite (I/O) kernel that makes use of both inputs and outputs of the NNs. The framework is referred to as RIO (for Residual estimation with an I/O kernel). In addition to providing valuable uncertainty estimation, RIO has an interesting side-effect: It provides a way to reduce the error of the NN predictions. Moreover, with the help of recent sparse GP models, RIO scales well to large datasets. Since classification problems can be treated as regression on class labels (Lee et al., 2018), this paper focuses on regression tasks. In this setting, empirical studies are conducted with twelve real-world datasets, and an initial theoretical investigation is presented that characterizes the benefits of RIO. The results show that RIO exhibits reliable uncertainty estimations, more accurate point predictions, and better scalability compared to alternative approaches.
+
+# 2 RELATED WORK
+
+There has been significant interest in combining NNs with probabilistic Bayesian models. An early approach was Bayesian Neural Networks, in which a prior distribution is defined on the weights and biases of a NN, and a posterior distribution is then inferred from the training data (MacKay, 1992; Neal, 1996). Notice that unlike these methods, RIO is concerned only with uncertainty over NN predictions, not over NN weights. Traditional variational inference techniques have been applied to the learning procedure of Bayesian NN, but with limited success (Hinton & van Camp, 1993; Barber & Bishop, 1998; Graves, 2011). By using a more advanced variational inference method, new approximations for Bayesian NNs were achieved that provided similar prediction performance as dropout NNs (Blundell et al., 2015). However, the main drawbacks of Bayesian NNs remain: prohibitive computational cost and difficult implementation procedure compared to standard NNs.
+
+Alternatives to Bayesian NNs have been developed recently. One such approach introduces a training pipeline that incorporates ensembles of NNs and adversarial training (Lakshminarayanan et al., 2017). Another approach, NNGP, considers a theoretical connection between NNs and GPs to develop a model approximating the Bayesian inference process of wide deep neural networks (Lee et al., 2018). Deep kernel learning (DKL) combines NNs with GPs by using a deep NN embedding as the input to the GP kernel (Wilson et al., 2016). In Iwata & Ghahramani (2017), NNs are used for the mean functions of GPs, and the parameters of both the NNs and the GP kernels are simultaneously estimated by stochastic gradient descent. Conditional Neural Processes (CNPs) combine the benefits of NNs and GPs by defining conditional distributions over functions given data, and parameterizing this dependence with a NN (Garnelo et al., 2018a). Neural Processes (NPs) generalize deterministic CNPs by incorporating a latent variable, strengthening the connection to approximate Bayesian and latent variable approaches (Garnelo et al., 2018b). Attentive Neural Processes (ANPs) further extend NPs by incorporating attention to overcome underfitting issues (Kim et al., 2019). The above models all require significant modifications to the original NN model and training pipeline. Compared to standard NNs, they are also less computationally efficient and more difficult for practitioners to implement. In the approach that shares the most motivation with RIO, Monte Carlo dropout was used to estimate the predictive uncertainty of dropout NNs (Gal & Ghahramani, 2016). However, this method is restricted to dropout NNs, and also requires modifications to the NN inference process. Different from all above-mentioned methods, which are independent alternatives to standard NNs,
+
+
+Figure 1: Complete model-building process. Given a dataset, first a standard NN model is constructed and trained by a data scientist. The RIO method takes this pretrained model and trains a GP to estimate the residuals of the NN using both the output of the NN and the original input. Blue pathways are only active during the training phase. In the deployment phase, the GP provides uncertainty estimates for the predictions, while calibrating them, i.e., making point predictions more accurate. Overall, RIO transforms the standard NN regression model into a more practical probabilistic estimator.
+
+RIO is designed as a supporting tool that can be applied on top of any pretrained NN. RIO augments standard NNs without retraining or modifying any component of them.
+
+# 3 THE RIO FRAMEWORK
+
+This section gives the general problem statement, develops the RIO framework, and discusses its scalability. For background introductions of NNs, GP, and its more efficient approximation, SVGP (Hensman et al., 2013; 2015), see Appendix B.
+
+# 3.1 PROBLEM STATEMENT
+
+Consider a training dataset $\mathcal{D} = (\mathcal{X},\mathbf{y}) = \{(\mathbf{x}_i,y_i)\}_{i = 1}^n$ , and a pretrained standard NN that outputs a point prediction $\hat{y}_i$ given $\mathbf{x}_i$ . The problem is two-fold: (1) Quantify the uncertainty in the predictions of the NN, and (2) calibrate the point predictions of the NN (i.e. make them more accurate).
+
+# 3.2 FRAMEWORK OVERVIEW
+
+RIO solves this problem by modeling the residuals between observed outcomes $y$ and NN predictions $\hat{y}$ using GP with a composite kernel. The framework can be divided into two phases: the training phase and the deployment phase.
+
+In the training phase, the residuals between observed outcomes and NN predictions on the training dataset are calculated as
+
+$$
+r_{i} = y_{i} - \hat{y}_{i}, \quad \text{for } i = 1, 2, \dots, n. \tag{1}
+$$
+
+Let $\mathbf{r}$ denote the vector of all residuals and $\hat{\mathbf{y}}$ denote the vector of all NN predictions. A GP with a composite kernel is trained assuming $\mathbf{r} \sim \mathcal{N}(0, \mathbf{K}_c((\mathcal{X}, \hat{\mathbf{y}}), (\mathcal{X}, \hat{\mathbf{y}})) + \sigma_n^2 \mathbf{I})$ , where $\mathbf{K}_c((\mathcal{X}, \hat{\mathbf{y}}), (\mathcal{X}, \hat{\mathbf{y}}))$ denotes an $n \times n$ covariance matrix at all pairs of training points based on a composite kernel
+
+$$
+k_{c}\left(\left(\mathbf{x}_{i}, \hat{y}_{i}\right), \left(\mathbf{x}_{j}, \hat{y}_{j}\right)\right) = k_{\mathrm{in}}\left(\mathbf{x}_{i}, \mathbf{x}_{j}\right) + k_{\mathrm{out}}\left(\hat{y}_{i}, \hat{y}_{j}\right), \quad \text{for } i, j = 1, 2, \dots, n. \tag{2}
+$$
+
+Suppose a radial basis function (RBF) kernel is used for both $k_{\mathrm{in}}$ and $k_{\mathrm{out}}$ . Then,
+
+$$
+k _ {c} \left(\left(\mathbf {x} _ {i}, \hat {y} _ {i}\right), \left(\mathbf {x} _ {j}, \hat {y} _ {j}\right)\right) = \sigma_ {\mathrm {i n}} ^ {2} \exp \left(- \frac {1}{2 l _ {\mathrm {i n}} ^ {2}} \left\| \mathbf {x} _ {i} - \mathbf {x} _ {j} \right\| ^ {2}\right) + \sigma_ {\mathrm {o u t}} ^ {2} \exp \left(- \frac {1}{2 l _ {\mathrm {o u t}} ^ {2}} \left\| \hat {y} _ {i} - \hat {y} _ {j} \right\| ^ {2}\right). \tag {3}
+$$
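
For concreteness, the composite I/O kernel of Eq. 3 can be sketched in a few lines of NumPy. This is an illustrative re-implementation with assumed hyperparameter names, not the authors' code (their experiments use a TensorFlow/SVGP implementation):

```python
import numpy as np

def rbf(a, b, variance, lengthscale):
    # Squared-exponential (RBF) kernel between the rows of a and b.
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return variance * np.exp(-0.5 * np.maximum(d2, 0.0) / lengthscale**2)

def io_kernel(X1, yhat1, X2, yhat2, p):
    # Eq. 3: sum of an RBF on the NN inputs and an RBF on the NN outputs.
    # X1, X2 are (n, d) input matrices; yhat1, yhat2 are (n, 1) NN predictions.
    return (rbf(X1, X2, p["var_in"], p["len_in"])
            + rbf(yhat1, yhat2, p["var_out"], p["len_out"]))
```

Because a sum of valid kernels is a valid kernel, the resulting Gram matrix remains symmetric positive semi-definite and is therefore a legitimate GP covariance.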
+
+The training process of GP learns the hyperparameters $\sigma_{\mathrm{in}}^2, l_{\mathrm{in}}, \sigma_{\mathrm{out}}^2, l_{\mathrm{out}}$ , and $\sigma_n^2$ by maximizing the log marginal likelihood $\log p(\mathbf{r}|\mathcal{X}, \hat{\mathbf{y}})$ given by
+
+$$
+\log p(\mathbf{r} \mid \mathcal{X}, \hat{\mathbf{y}}) = -\frac{1}{2} \mathbf{r}^{\top} \left(\mathbf{K}_{c}\left(\left(\mathcal{X}, \hat{\mathbf{y}}\right), \left(\mathcal{X}, \hat{\mathbf{y}}\right)\right) + \sigma_{n}^{2} \mathbf{I}\right)^{-1} \mathbf{r} - \frac{1}{2} \log \left| \mathbf{K}_{c}\left(\left(\mathcal{X}, \hat{\mathbf{y}}\right), \left(\mathcal{X}, \hat{\mathbf{y}}\right)\right) + \sigma_{n}^{2} \mathbf{I} \right| - \frac{n}{2} \log 2\pi. \tag{4}
+$$
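
In practice, Eq. 4 is evaluated through a Cholesky factorization rather than an explicit inverse and determinant. A minimal sketch (the optimizer that maximizes it over the hyperparameters, e.g. L-BFGS-B, is omitted):

```python
import numpy as np

def log_marginal_likelihood(r, K_c, noise_var):
    # Eq. 4: log p(r | X, yhat), with K_c the composite-kernel Gram matrix.
    n = len(r)
    L = np.linalg.cholesky(K_c + noise_var * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, r))
    return (-0.5 * r @ alpha                  # data-fit term
            - np.sum(np.log(np.diag(L)))      # 0.5 * log-determinant
            - 0.5 * n * np.log(2 * np.pi))    # normalization constant
```

The sum of the log-diagonal of the Cholesky factor equals half the log-determinant, which avoids the numerical overflow of computing the determinant directly.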
+
+
+Figure 2: Capturing uncertainty of more and less accurate NNs. These figures illustrate the behavior of RIO in two cases: (left) The neural network has discovered true complex structure in the labels, so the residuals have low variance and are easy for the GP to fit with high confidence; (right) The ineffective neural network has introduced unnecessary complexity, so the residuals are modeled with high uncertainty. In both cases, RIO matches the intuition for how uncertain the NN really is.
+
+
+
+In the deployment phase, a test point $\mathbf{x}_{*}$ is input to the NN to get an output $\hat{y}_{*}$ . The trained GP predicts the distribution of the residual as $\hat{r}_{*}\mid\mathcal{X},\hat{\mathbf{y}},\mathbf{r},\mathbf{x}_{*},\hat{y}_{*}\sim \mathcal{N}(\bar{\hat{r}}_{*},\operatorname{var}(\hat{r}_{*}))$ , where
+
+$$
+\bar {\hat {r}} _ {*} = \mathbf {k} _ {*} ^ {\top} \left(\mathbf {K} _ {c} \left(\left(\mathcal {X}, \hat {\mathbf {y}}\right), \left(\mathcal {X}, \hat {\mathbf {y}}\right)\right) + \sigma_ {n} ^ {2} \mathbf {I}\right) ^ {- 1} \mathbf {r}, \tag {5}
+$$
+
+$$
+\operatorname {v a r} \left(\hat {r} _ {*}\right) = k _ {c} \left(\left(\mathbf {x} _ {*}, \hat {y} _ {*}\right), \left(\mathbf {x} _ {*}, \hat {y} _ {*}\right)\right) - \mathbf {k} _ {*} ^ {\top} \left(\mathbf {K} _ {c} \left(\left(\mathcal {X}, \hat {\mathbf {y}}\right), \left(\mathcal {X}, \hat {\mathbf {y}}\right)\right) + \sigma_ {n} ^ {2} \mathbf {I}\right) ^ {- 1} \mathbf {k} _ {*}, \tag {6}
+$$
+
+where $\mathbf{k}_{*}$ denotes the vector of kernel-based covariances (i.e., $k_{c}((\mathbf{x}_{*},\hat{y}_{*}),(\mathbf{x}_{i},\hat{y}_{i}))$ ) between $(\mathbf{x}_{*},\hat{y}_{*})$ and the training points.
+
+Interestingly, note that the predicted residuals can also be used to calibrate the point predictions of the NN, so that the final calibrated prediction with uncertainty information is given by
+
+$$
+\hat {y} _ {*} ^ {\prime} \sim \mathcal {N} \left(\hat {y} _ {*} + \bar {\hat {r}} _ {*}, \operatorname {v a r} \left(\hat {r} _ {*}\right)\right). \tag {7}
+$$
+
+In other words, RIO not only adds uncertainty estimation to a standard NN—it also provides a way to calibrate NN predictions, without any modification to its architecture or training. Figure 1 shows the overall procedure when applying the proposed framework in real-world applications. Figure 2 shows example behavior of RIO that illustrates the intuition of the approach. An algorithmic description of RIO is provided in Appendix C.
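
Both phases can be condensed into a short exact-GP sketch in NumPy. This is for exposition only: the hyperparameters are assumed fixed here rather than learned by maximizing Eq. 4, and the paper's experiments use SVGP rather than an exact GP for scalability:

```python
import numpy as np

def rbf(a, b, var, ls):
    # Squared-exponential kernel between the rows of a and b.
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return var * np.exp(-0.5 * np.maximum(d2, 0.0) / ls**2)

def k_c(X1, yh1, X2, yh2, p):
    # Composite I/O kernel of Eq. 3; inputs are (n, d), NN outputs are (n, 1).
    return (rbf(X1, X2, p["var_in"], p["len_in"])
            + rbf(yh1, yh2, p["var_out"], p["len_out"]))

def rio_predict(X, y, yhat, X_s, yhat_s, p, noise_var):
    # Training phase: residuals (Eq. 1) and the Gram matrix of Eq. 4.
    r = (y - yhat).ravel()
    K = k_c(X, yhat, X, yhat, p) + noise_var * np.eye(len(r))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, r))
    # Deployment phase: predictive residual mean (Eq. 5) and variance (Eq. 6).
    k_s = k_c(X_s, yhat_s, X, yhat, p)
    mean_r = k_s @ alpha
    v = np.linalg.solve(L, k_s.T)
    var_r = np.diag(k_c(X_s, yhat_s, X_s, yhat_s, p)) - np.sum(v**2, axis=0)
    # Eq. 7: calibrated predictive Gaussian N(yhat_s + mean_r, var_r).
    return yhat_s.ravel() + mean_r, var_r
```

Given a test batch `X_s` with NN outputs `yhat_s`, the returned pair is the mean and variance of the calibrated predictive distribution of Eq. 7.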
+
+# 3.3 SCALABILITY
+
+RIO is scalable to large datasets (No. of data points $\times$ No. of features $>1\mathrm{M}$ ) by applying sparse GP methods, e.g., SVGP (Hensman et al., 2013; 2015). All the conclusions in previous sections still remain valid since sparse GP is simply an approximation of the original GP. In the case of applying SVGP with a traditional optimizer, e.g., L-BFGS-B (Byrd et al., 1995; Zhu et al., 1997), the computational complexity is $\mathcal{O}(nm^2)$ , and space complexity is $\mathcal{O}(nm)$ , where $n$ is the number of data points and $m$ is the number of inducing variables, compared to $\mathcal{O}(n^3)$ and $\mathcal{O}(n^2)$ for traditional GP. Experiments verify that the computational cost of RIO with SVGP is significantly cheaper than other state-of-the-art approaches.
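
The source of the $\mathcal{O}(nm^2)$ cost can be seen in a minimal inducing-point sketch. Note this shows a subset-of-regressors predictive mean, included only to illustrate the complexity argument; SVGP instead optimizes a variational bound, but with the same asymptotic cost:

```python
import numpy as np

def rbf(a, b, variance=1.0, lengthscale=1.0):
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return variance * np.exp(-0.5 * np.maximum(d2, 0.0) / lengthscale**2)

def inducing_point_mean(X, r, Z, X_s, noise_var):
    # Predictive mean with m inducing points Z: only an (m, m) system is
    # solved, so the cost is O(n m^2) instead of the O(n^3) of an exact GP.
    K_nm = rbf(X, Z)                        # (n, m) cross-covariances
    K_mm = rbf(Z, Z)                        # (m, m) inducing covariances
    A = noise_var * K_mm + K_nm.T @ K_nm    # the only matrix factorized
    return rbf(X_s, Z) @ np.linalg.solve(A, K_nm.T @ r)
```

With `Z` equal to the full training set, this reduces algebraically to the exact GP predictive mean; with $m \ll n$ (e.g. the 50 inducing points used in the experiments) it becomes the cheap approximation.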
+
+# 4 EMPIRICAL EVALUATION
+
+Experiments in this section compare nine algorithms on 12 real-world datasets. The algorithms include standard NN, the proposed RIO framework, four ablated variants of RIO, and three state-of-the-art models that provide predictive uncertainty: SVGP (Hensman et al., 2013), Neural Network Gaussian Process (NNGP) (Lee et al., 2018), and Attentive Neural Processes (ANP) (Kim et al., 2019). In naming the RIO variants, "R" means estimating NN residuals then correcting NN outputs, "Y" means directly estimating outcomes, "I" means only using input kernel, "O" means only using output kernel, and "IO" means using I/O kernel. For all RIO variants (including full RIO), SVGP is used as the GP component, but using the appropriate kernel and prediction target. Therefore, "Y+I" amounts to original SVGP, and it is denoted as "SVGP" in all the experimental results. All 12 datasets
+
+Table 1: Summary of experimental results
+
+| Dataset n × d | Method | RMSE mean±std | NLPD mean±std | Noise Variance | Time (sec) | Dataset n × d | Method | RMSE mean±std | NLPD mean±std | Noise Variance | Time (sec) |
+| yacht | NN | 2.30±0.93†‡ | - | - | 4.02 | ENB/h | NN | 1.03±0.51†‡ | - | - | 6.65 |
+| | RIO | 1.46±0.49 | 2.039±0.762 | 0.82 | 7.16 | | RIO | 0.70±0.38 | 1.038±0.355 | 0.46 | 8.18 |
+| | R+I | 2.03±0.73†‡ | 2.341±0.516†‡ | 2.54 | 4.30 | | R+I | 0.79±0.46†‡ | 1.147±0.405†‡ | 0.63 | 7.52 |
+| 308 | R+O | 1.88±0.66†‡ | 2.305±0.614†‡ | 1.60 | 6.27 | 768 | R+O | 0.80±0.43†‡ | 1.169±0.388†‡ | 0.59 | 7.61 |
+| | Y+O | 1.86±0.64†‡ | 2.305±0.639†‡ | 1.89 | 9.93 | × | Y+O | 0.88±0.48†‡ | 1.248±0.405†‡ | 0.75 | 11.06 |
+| 6 | Y+IO | 1.58±0.52†‡ | 2.160±0.773†‡ | 1.18 | 9.44 | | Y+IO | 0.76±0.41†‡ | 1.124±0.368†‡ | 0.56 | 10.64 |
+| | SVGP | 4.42±0.62†‡ | 2.888±0.102†‡ | 18.56 | 8.96 | | SVGP | 2.13±0.18†‡ | 2.200±0.074†‡ | 4.70 | 10.16 |
+| | NNGP | 12.40±1.45†‡ | 35.18±0.534†‡ | - | 7347 | | NNGP | 4.97±0.29†‡ | 32.40±0.638†‡ | - | 7374 |
+| | ANP | 7.59±3.20†‡ | 1.793±0.887†‡ | - | 40.82 | | ANP | 4.08±2.27†‡ | 2.475±0.559†‡ | - | 102.3 |
+| ENB/c | NN | 1.88±0.44†‡ | - | - | 6.45 | airfoil | NN | 4.82±0.43†‡ | - | - | 6.48 |
+| | RIO | 1.48±0.33 | 1.816±0.191 | 1.58 | 8.07 | | RIO | 3.07±0.18 | 2.554±0.053 | 9.48 | 17.63 |
+| | R+I | 1.71±0.44†‡ | 1.969±0.236†‡ | 2.22 | 5.02 | | R+I | 3.16±0.18†‡ | 2.583±0.051†‡ | 10.07 | 15.90 |
+| 768 | R+O | 1.75±0.43†‡ | 2.000±0.229†‡ | 2.25 | 4.57 | 1505 | R+O | 4.17±0.26†‡ | 2.849±0.066†‡ | 16.64 | 9.97 |
+| | Y+O | 1.76±0.43†‡ | 2.000±0.231†‡ | 2.32 | 10.99 | | Y+O | 4.24±0.28†‡ | 2.869±0.075†‡ | 17.81 | 22.72 |
+| 8 | Y+IO | 1.64±0.36†‡ | 1.936±0.210†‡ | 1.96 | 10.56 | 5 | Y+IO | 3.64±0.53†‡ | 2.712±0.150†‡ | 14.40 | 24.51 |
+| | SVGP | 2.63±0.23†‡ | 2.403±0.078†‡ | 6.81 | 10.28 | | SVGP | 3.59±0.20†‡ | 2.699±0.053†‡ | 12.67 | 21.74 |
+| | NNGP | 4.91±0.32†‡ | 30.14±0.886†‡ | - | 7704 | | NNGP | 6.54±0.23†‡ | 33.60±0.420†‡ | - | 3355 |
+| | ANP | 4.81±2.15†‡ | 2.698±0.548†‡ | - | 64.11 | | ANP | 21.17±30.72†‡ | 5.399±6.316†‡ | - | 231.7 |
+| CCS | NN | 6.23±0.53†‡ | - | - | 9.46 | wine/r | NN | 0.691±0.041†‡ | - | - | 3.61 |
+| | RIO | 5.97±0.48 | 3.241±0.109 | 24.74 | 13.71 | | RIO | 0.672±0.036 | 1.094±0.100 | 0.28 | 9.25 |
+| 1030 | R+I | 6.01±0.50†‡ | 3.248±0.112†‡ | 25.40 | 9.52 | 1599 | R+I | 0.669±0.036†‡ | 1.085±0.097†‡ | 0.28 | 8.34 |
+| | R+O | 6.17±0.54†‡ | 3.283±0.120†‡ | 26.31 | 9.54 | | R+O | 0.676±0.035†‡ | 1.099±0.094†‡ | 0.29 | 5.02 |
+| 8 | Y+O | 6.15±0.52†‡ | 3.279±0.117†‡ | 26.53 | 21.35 | 11 | Y+O | 0.676±0.034†‡ | 1.096±0.092 | 0.29 | 12.71 |
+| | Y+IO | 6.06±0.49†‡ | 3.261±0.110†‡ | 25.82 | 23.15 | | Y+IO | 0.672±0.036†‡ | 1.094±0.098 | 0.28 | 12.48 |
+| | SVGP | 6.87±0.39†‡ | 3.336±0.048†‡ | 44.55 | 19.85 | | SVGP | 0.642±0.028†‡ | 0.974±0.042†‡ | 0.40 | 12.17 |
+| wine/w | NN | 0.721±0.023†‡ | - | - | 7.17 | CCPP | NN | 4.96±0.53†‡ | - | - | 14.52 |
+| | RIO | 0.704±0.018 | 1.090±0.038 | 0.37 | 16.74 | | RIO | 4.05±0.128 | 2.818±0.031 | 16.30 | 42.65 |
+| 4898 | R+I | 0.699±0.018†‡ | 1.081±0.037†‡ | 0.38 | 13.5 | 9568 | R+I | 4.06±0.13†‡ | 2.822±0.031†‡ | 16.39 | 39.88 |
+| | R+O | 0.710±0.019†‡ | 1.098±0.038†‡ | 0.39 | 6.19 | | R+O | 4.32±0.15†‡ | 2.883±0.035†‡ | 18.50 | 18.48 |
+| 11 | Y+O | 0.710±0.019†‡ | 1.096±0.038†‡ | 0.39 | 18.39 | | Y+O | 4.37±0.20†‡ | 2.914±0.122†‡ | 23.98 | 48.27 |
+| | Y+IO | 0.705±0.019†‡ | 1.090±0.038 | 0.38 | 20.06 | | Y+IO | 4.56±1.00†‡ | 2.958±0.216†‡ | 31.06 | 46.8 |
+| | SVGP | 0.719±0.018†‡ | 1.081±0.022†‡ | 0.50 | 18.18 | | SVGP | 4.36±0.13†‡ | 2.893±0.031†‡ | 19.04 | 46.43 |
+| protein | NN | 4.21±0.07†‡ | - | - | 151.8 | SC | NN | 12.23±0.77†‡ | - | - | 51.9 |
+| | RIO | 4.08±0.06 | 2.826±0.014 | 15.71 | 149.4 | | RIO | 11.28±0.46 | 3.853±0.042 | 105.83 | 53.39 |
+| 45730 | R+I | 4.11±0.06†‡ | 2.834±0.037†‡ | 15.99 | 141.2 | | R+I | 11.33±0.45†‡ | 3.858±0.041†‡ | 107.35 | 47.72 |
+| | R+O | 4.14±0.06†‡ | 2.840±0.015†‡ | 16.18 | 115.1 | × | R+O | 11.63±0.52†‡ | 3.881±0.046†‡ | 112.91 | |
+| 9 | Y+O | 4.14±0.06†‡ | 2.840±0.015†‡ | 16.17 | 138.4 | 81 | Y+O | 11.64±0.53†‡ | 3.882±0.046†‡ | 113.61 | |
+| | Y+IO | 4.08±0.06 | 2.826±0.014 | 15.72 | 155.5 | | Y+IO | 11.32±0.45†‡ | 3.856±0.041†‡ | 106.93 | 57.74 |
+| | SVGP | 4.68±0.04†‡ | 2.963±0.007†‡ | 22.54 | 149.5 | | SVGP | 14.66±0.25†‡ | 4.136±0.014†‡ | 239.28 | 50.89 |
+| CT | NN | 1.17±0.34†‡ | - | - | 194.5 | MSD | NN | 12.42±2.97†‡ | - | - | 1136 |
+| | RIO | 0.88±0.15 | 1.284±0.219 | 1.02 | 516.4 | | RIO | 9.26±0.21 | 3.639±0.022 | 84.28 | 1993 |
+| 53500 | R+I | 1.17±0.34†‡ | 1.538±0.289†‡ | 1.71 | 19.80 | | R+I | 10.92±1.30†‡ | 3.811±0.128†‡ | 135.34 | 282.0 |
+| | R+O | 0.88±0.15 | 1.283±0.219†‡ | 1.02 | 159.4 | × | R+O | 9.25±0.20 | 3.638±0.021 | 84.05 | |
+| 384 | Y+O | 0.99±0.42†‡ | 1.365±0.385†‡ | 2.45 | 168.2 | 90 | Y+O | 10.00±0.86†‡ | 3.768±0.148†‡ | 169.90 | |
+| | Y+IO | 0.91±0.16†‡ | 1.280±0.177† | 0.62 | 578.6 | | Y+IO | 9.43±0.52†‡ | 3.644±0.025†‡ | 85.66 | 2605 |
+| | SVGP | 52.07±0.19†‡ | 5.372±0.004†‡ | 2712 | 27.56 | | SVGP | 9.57±0.00†‡ | 3.677±0.000†‡ | 92.21 | 2276 |
+
+The symbols $\dagger$ and $\ddagger$ indicate that the difference between the marked entry and RIO is statistically significant at the $5\%$ significance level using paired t-test and Wilcoxon test, respectively. The best entries that are significantly better than all the others under at least one statistical test are marked in boldface (ties are allowed).
+
+are real-world regression problems (Dua & Graff, 2017), and cover a wide variety of dataset sizes and feature dimensionalities. Except for the "MSD" dataset, all datasets are tested over 100 independent runs. During each run, the dataset is randomly split into training, validation, and test sets, and all algorithms are trained on the same split. All RIO variants that involve an output kernel or residual estimation are based on the NN trained in the same run. For "MSD", since the dataset split is strictly predefined by the provider, only 10 independent runs are conducted. NNGP and ANP are only tested on the four smallest datasets (based on the product of dataset size and feature dimensionality) because they do not scale well to larger datasets. Notably, no extensive hyperparameter tuning is conducted for any of the RIO variants; the same default setup is used in all experiments, i.e., a standard RBF kernel and 50 inducing points. See Appendix D.1 for additional details of the experimental setups and a link to downloadable source code. Table 1 summarizes the numerical results from these experiments. The main conclusions in terms of point-prediction error, uncertainty estimation, computational requirements, and ablation studies are summarized below.
+
+**Point-Prediction Error** The errors between the point predictions of the models and the true outcomes of test points are measured using Root Mean Square Error (RMSE); the mean and standard deviation of RMSEs over multiple experimental runs are shown in Table 1. For models that return a probabilistic distribution, the mean of the distribution is taken as the point prediction. Although the main motivation of RIO is to enhance a pretrained NN rather than to construct a new state-of-the-art prediction model from scratch, RIO performs best or equals the best method (based on statistical tests) in 10 out of 12 datasets. RIO significantly outperforms the original NN in all 12 datasets, while the original SVGP performs significantly worse than the NN in 7 datasets. For the CT dataset, which has 386 input features, SVGP fails severely since the input kernel cannot capture the implicit correlation information. ANP is unstable on the airfoil dataset because it scales poorly with dataset size. Figure 3 compares NN, RIO
+
+
+Figure 3: Comparison among NN, RIO, and SVGP. The horizontal axis denotes the prediction RMSE of the NN, and the vertical axis the prediction RMSE of RIO (blue dots) and SVGP (yellow dots). Each dot represents an independent experimental run. Since the scales are different, the solid blue line indicates where NN and RIO/SVGP have same prediction RMSE. Thus, a dot below the line means that the method (RIO or SVGP) performs better than the NN, and vice versa. Results of SVGP on the CT dataset are not plotted because its prediction RMSE exceeded the visible scale (i.e. they were $>50$ ). RIO consistently reduces the error of the NN, and outperforms SVGP in most cases.
+
+and SVGP in terms of prediction RMSE. RIO improves the NN predictions consistently, regardless of how the dataset is split and how well the NN is trained. Even in situations where the original NN is much worse than SVGP, RIO still improves the NN's performance to a level comparable to or better than SVGP. Appendix D.3 provides additional investigation into the behavior of RIO output corrections. RIO exhibits diverse behaviors that generally move the original NN predictions closer to the ground truth. To conclude, applying RIO to NNs not only provides additional uncertainty information, but also reliably reduces the point-prediction error.
+
+**Uncertainty Estimation** Average negative log predictive density (NLPD) is used to measure the quality of uncertainty estimation; NLPD favours conservative models, but also effectively penalizes both over- and under-confident predictions (Quinonero-Candela et al., 2006). The mean and standard deviation of NLPDs over multiple experimental runs are shown in Table 1 (lower is better). RIO performs best or equals the best method (based on statistical tests) in 8 out of 12 datasets. NNGP always yields a high NLPD; it returns over-confident predictions because it does not include noise estimation in its original implementation. For the yacht dataset, ANP achieves the best NLPD, but with high RMSE. This is because ANP correctly returns high predictive variance when its prediction error is high. On all other tested datasets, RIO variants consistently outperform ANP. Among all RIO variants, the full RIO provides the most reliable overall predictive uncertainty. The conclusion is that RIO successfully extracts useful uncertainty information from NN predictions. Appendix D.2 evaluates the uncertainty estimates on an additional metric: the true coverage probabilities of estimated confidence intervals. Although RIO also provides reasonable estimates for this metric in most cases, comparison among algorithms using this metric is discouraged due to its unreliability (see Appendix D.2 for examples).
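
For reference, with Gaussian predictive distributions (as produced by RIO via Eq. 7), NLPD has the closed form sketched below; the quadratic term punishes over-confidence (small variance with large error) and the log term punishes under-confidence (needlessly large variance):

```python
import numpy as np

def nlpd(y, mean, var):
    # Average negative log predictive density of outcomes y under
    # per-point Gaussian predictions N(mean, var); lower is better.
    return np.mean(0.5 * np.log(2 * np.pi * var) + 0.5 * (y - mean)**2 / var)
```
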
+
+**Computation Time** Table 1 shows the average wall-clock time of each algorithm. All algorithms are implemented in Tensorflow under the same running environment (see Appendix D.1 for implementation details). The RIO variants scale well with increasing dataset size and feature dimensionality. L-BFGS-B converges especially quickly for $\mathrm{R + I}$ on the three highest-dimensional datasets, presumably because the residuals are very well-behaved compared to the raw targets or NN output. In contrast, ANP's computation time increases significantly with the scale of the dataset, and NNGP always requires a very expensive computational budget due to its costly grid search over hyperparameters.
+
+**Ablation Study** The RIO variants with residual estimation generally perform better than their counterparts in both point-prediction error and uncertainty estimation. This result confirms the effectiveness of residual estimation, as suggested in Section 5.1. Another important result is that $\mathrm{Y + IO}$ outperforms both $\mathrm{Y + I}$ (SVGP) and $\mathrm{Y + O}$ in most cases across all performance metrics, and RIO generally provides better performance than $\mathrm{R + I}$ and $\mathrm{R + O}$ in all respects. This, in turn, confirms that the I/O kernel provides additional robustness, as suggested by the analysis in Section 5.2. In sum, both residual estimation and the I/O kernel contribute substantially to the performance of the framework.
+
+Table 2: Spearman's correlation between RMSE and ${\sigma }_{n}^{2}$ across RIO variants (including SVGP)
+
+| | yacht | ENB/h | ENB/c | airfoil | CCS | wine/r | wine/w | CCPP | protein | SC | CT | MSD |
+| correlation | 0.943 | 0.943 | 1.0 | 1.0 | 0.943 | -0.09 | 0.886 | 1.0 | 0.943 | 1.0 | 0.771 | 0.943 |
+| p-value | 0.005 | 0.005 | 0.0 | 0.0 | 0.005 | 0.872 | 0.02 | 0.0 | 0.005 | 0.0 | 0.072 | 0.005 |
+
+Entries that are considered to indicate very strong positive monotonic correlation are marked in boldface.
+
+**Correlation between error and noise variance** Table 2 shows the results of Spearman's rank correlation between RMSE and noise variance $\sigma_{n}^{2}$ . For each dataset, there are six pairs of data points, each containing the mean RMSE and the noise variance of one RIO variant. A correlation value larger than 0.8 with a $p$ -value less than 0.05 indicates a very strong positive monotonic correlation. Such a correlation between RMSE and noise variance was observed in 10 out of 12 datasets. This empirical result is in accordance with the theoretical prediction in Section 5.1.
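
Spearman's rank correlation is simply the Pearson correlation of the ranks; a minimal NumPy version (no tie correction, which suffices for the distinct RMSE means compared here):

```python
import numpy as np

def spearman(a, b):
    # Rank the values via a double argsort, then take the Pearson
    # correlation of the ranks. Ties are not handled.
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return (ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb))
```

Any monotonically increasing relationship, linear or not, yields a coefficient of 1.0, which is why this statistic is appropriate for testing the monotonic RMSE-vs-noise-variance relationship.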
+
+**Application to a large-scale vision problem** To show RIO's off-the-shelf applicability to modern deep convolutional architectures, it was applied to a recent pretrained NN for age estimation (Yang et al., 2018). The pretrained NN and all data preprocessing were taken exactly from the official code release. The model is a variant of DenseNet-121 (Huang et al., 2017), and uses the IMDB age estimation dataset, which contains $\approx 172\mathrm{K}$ RGB images (Rothe et al., 2016). The goal is to predict the age of the individual in each image. The features for the GP input kernel were simply a global max pool of the CNN's first stage output. From Table 3, RIO substantially improves upon the mean absolute error (MAE) of the pretrained NN, outperforms SVGP in terms of both MAE and NLPD, and yields realistic confidence intervals (CIs). SVGP effectively learns nothing, so it estimates almost all variance as noise, while RIO effectively augments the pretrained model.
+
+Table 3: IMDB Age Estimation Results with DenseNet
+
+| Method | MAE | NLPD | 95% CI Coverage | 90% CI Coverage | 68% CI Coverage |
+| Pretrained DenseNet (Yang et al., 2018) | 7.43 | - | - | - | - |
+| SVGP | 36.45 | 5.06 | 0.99 | 0.96 | 0.62 |
+| RIO | 6.35 | 3.59 | 0.94 | 0.91 | 0.75 |
+
+CI coverage means the percentage of testing outcomes that are within the estimated CI.
+
+# 5 ANALYSIS OF RIO
+
+This section presents an initial theoretical investigation into what makes RIO work, including why it is useful to fit residuals, and why it is beneficial to use an I/O kernel.
+
+# 5.1 BENEFITS OF FITTING NEURAL NETWORK RESIDUALS
+
+Given a pretrained NN, this section provides a theoretical perspective on how fitting its residuals with a GP can yield useful uncertainty information, while leading to prediction error lower than the NN or GP alone. The idea is that, due to its high expressivity, a pretrained NN may have learned complex structure that a GP would model poorly, when selecting a kernel that would capture this complexity is infeasible. Fitting a GP to the residuals of the NN is easier, since this complex structure has been removed, and takes advantage of the predictive power of the NN, while providing useful uncertainty estimates.
+
+Given a pretrained NN, when is fitting its residuals with a GP a good idea? To produce a model with uncertainty information, one could simply discard the NN and train a GP directly on $\mathcal{D}$ . However, for problems with enough complexity, GP model selection (i.e., specifying an appropriate class of kernels) is challenging. Fortunately, the pretrained NN may have learned a useful representation of this complexity, which can then be exploited. Suppose instead that a GP tries to model this complexity, and this attempt leads to worse generalization. This poor performance could come from many dimensions of the kernel selection process. As an initial theoretical investigation, an ideal case is considered first: suppose the GP optimizer optimally avoids modeling this complexity incorrectly by estimating the variance of such complexity as noise. Such an estimate leads to the following decomposition:
+
+$$
+y _ {i} = h \left(\mathbf {x} _ {i}\right) + \xi_ {i} = f \left(\mathbf {x} _ {i}\right) + g \left(\mathbf {x} _ {i}\right) + \xi_ {i}, \tag {8}
+$$
+
+where $h(\cdot)$ is the true signal, $f(\cdot)$ is the apparent signal of the GP, $g(\cdot)$ is its apparent (spurious) noise, and $\xi_{i} \sim \mathcal{N}(0, \sigma_{n}^{2})$ is real noise. Intuitively, due to its high expressivity, it is possible for a pretrained NN $h_{\mathrm{NN}}$ to correctly model part of $g$ , along with part of $f$ , resulting in the residuals
+
+$$
+y _ {i} - \bar {h} _ {\mathrm {N N}} (\mathbf {x} _ {i}) = r (\mathbf {x} _ {i}) + \xi_ {i} = r _ {f} (\mathbf {x} _ {i}) + r _ {g} (\mathbf {x} _ {i}) + \xi_ {i}, \tag {9}
+$$
+
+where $r_f(\cdot)$ is now the apparent signal of the GP, and $r_g(\cdot)$ is now its apparent noise. In such a case, it will be easier for the GP to learn the residuals than to learn $h$ directly. The uncertainty estimates of the GP will also be more precise, since the NN removes spurious noise. Thus, Eq. 8 immediately predicts that the noise estimates for GP will be higher when fitting $h$ directly than when fitting NN residuals. This prediction is confirmed in experiments (Table 1).
+
+To analyze these behaviors, the above decomposition is used to identify a class of scenarios for which fitting residuals leads to improvements (see Appendix A for additional details and proofs of following theorems). For simplicity, suppose GP kernels are parameterized only by their signal variance $\beta$ , i.e., kernels are of the form $\beta k(\cdot, \cdot)$ for some kernel $k$ . Then, the following two assumptions make Eq. 8 concrete: (1) GP is well-suited for $f$ , i.e., $f(\cdot) \sim \mathcal{GP}(0, k(\cdot, \cdot))$ ; (2) GP is ill-suited for $g$ , i.e., $g$ is independent of $f$ and $\varepsilon$ -indistinguishable for GP on $\mathcal{D}$ (see Lemma A.2 for an example of such a function). Intuitively, this means the predictions of GP change by no more than $\varepsilon$ in the presence of $g$ .
+
+Consider the residuals $y_{i} - \bar{h}_{\mathrm{NN}}(\mathbf{x}_{i}) = r(\mathbf{x}_{i}) + \xi_{i} = r_{f}(\mathbf{x}_{i}) + r_{g}(\mathbf{x}_{i}) + \xi_{i}$ , where $r_{f}$ is the remaining GP component, and $r_{g}$ is the remainder of $g$ . The following assumption ensures that $r_{g}$ captures any spurious complexity added by an imperfectly trained NN: $r_{f} \sim \mathcal{GP}(0, \alpha k(\cdot, \cdot))$ , for some $\alpha \in (0, 1]$ .
+
+Let $\bar{h}_{\mathrm{GP}}$ be the GP predictor trained directly on $\mathcal{D}$ , and $\bar{h}_{\mathrm{GP + NN}} = \bar{h}_{\mathrm{NN}} + \bar{r}_{\mathrm{GP}}$ be the final predictor after fitting residuals; let $E_{\mathrm{GP}}^{h}$ , $E_{\mathrm{NN}}^{h}$ , and $E_{\mathrm{GP + NN}}^{h}$ be the expected generalization errors of $\bar{h}_{\mathrm{GP}}$ , $\bar{h}_{\mathrm{NN}}$ , and $\bar{h}_{\mathrm{GP + NN}}$ , resp. The advantage of $\bar{h}_{\mathrm{GP + NN}}$ becomes clear as $g$ becomes complex:
+
+Theorem 5.1 (Advantage of fitting residuals).
+
+$$
+\lim_{\varepsilon \to 0} \left(E_{\mathrm{GP}}^{h} - E_{\mathrm{GP+NN}}^{h}\right) \geq \mathbb{E}[g^{2}(\mathbf{x})] - \mathbb{E}[r_{g}^{2}(\mathbf{x})] \quad \text{and} \quad \lim_{\varepsilon \to 0} \left(E_{\mathrm{NN}}^{h} - E_{\mathrm{GP+NN}}^{h}\right) > 0.
+$$
+
+The experimental results in Section 4 support this result (Table 1). First, in all experiments, $\mathrm{GP + NN}$ does indeed improve over the underlying NN. Second, notice that the improvement of $\mathrm{GP + NN}$ over GP is only guaranteed when $\mathbb{E}[g^2 (\mathbf{x})] - \mathbb{E}[r_g^2 (\mathbf{x})]$ is positive, i.e., when the NN successfully captures some underlying complexity. In experiments, when the NN is deficient, $\mathrm{GP + NN}$ often outperforms GP, but not always. For example, if the NN is overfit to zero training loss, and uses the same training data as the GP, the GP cannot provide benefit. However, when the NN is well-trained, using proper overfitting prevention, the improvements are consistent and significant.
+
+The prediction of Theorem 5.1 that lower apparent noise corresponds to improved error is confirmed in experiments (Table 2). Note that an optimal model would learn $h$ exactly, and have a predictive variance of $\sigma_n^2$ . So, the improvement in error of $\bar{h}_{\mathrm{GP + NN}}$ over $\bar{h}_{\mathrm{GP}}$ also corresponds to a predictive variance that is closer to that of the optimal model, since some spurious noise has been removed by $\bar{h}_{\mathrm{NN}}$ . Complementing this theoretical improvement in uncertainty estimation, the above setting also leads to a key practical property, which is illustrated in Figure 2:
+
+Theorem 5.2. The uncertainty of $\bar{r}_{\mathrm{GP}}$ is positively correlated with the variance of NN residuals.
+
+This property matches the intuition that the GP's variance should generally be higher for unstable NNs, i.e., NNs with high residual variance, than for stable NNs. Such a property is crucial to measuring the confidence of NN predictions in practice.
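+
+This correlation can be illustrated with a toy simulation (a sketch assuming scikit-learn; the pure-noise vectors below are hypothetical stand-ins for the residuals of a stable and an unstable NN):
+
+```python
+# A GP fit to higher-variance residuals reports higher predictive
+# uncertainty, matching the practical property of Theorem 5.2.
+import numpy as np
+from sklearn.gaussian_process import GaussianProcessRegressor
+from sklearn.gaussian_process.kernels import RBF, WhiteKernel
+
+rng = np.random.default_rng(0)
+X = rng.uniform(-3, 3, size=(100, 1))
+X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
+
+def mean_predictive_std(residuals):
+    # Signal variance and noise level are both optimized during fitting.
+    gp = GaussianProcessRegressor(kernel=1.0 * RBF() + WhiteKernel(),
+                                  random_state=0).fit(X, residuals)
+    return gp.predict(X_test, return_std=True)[1].mean()
+
+std_stable = mean_predictive_std(0.1 * rng.standard_normal(100))
+std_unstable = mean_predictive_std(1.0 * rng.standard_normal(100))
+# std_unstable comes out larger: higher residual variance, higher uncertainty.
+```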
+
+Overall, the initial theoretical results in this section (and the appendix) show that if a problem is well-suited to be learned by an NN, and a practitioner has access to a pretrained NN for that problem, then training a GP to fit the residuals of the NN can provide potential benefits in uncertainty estimates, without sacrificing prediction performance. Further theoretical investigation regarding more complicated situations, e.g., misspecification of GP kernels, is considered as one interesting direction for future work.
+
+# 5.2 ROBUSTNESS OF THE I/O KERNEL
+
+This section provides a justification for why a GP using the proposed I/O kernel is more robust than the standard GP, i.e., using the input kernel alone. The key assumption is that the output of an NN can contain valuable information about its behavior, and, consequently, the structure of the target function. Consider the setup in Section 5.1, but now with $y_{i} = f_{\mathrm{in}}(\mathbf{x}_{i}) + f_{\mathrm{out}}(\mathbf{x}_{i}) + \xi_{i}$, where $f_{\mathrm{in}} \sim \mathcal{GP}(0, k_{\mathrm{in}})$ and $f_{\mathrm{out}} \sim \mathcal{GP}(0, k_{\mathrm{out}})$, with non-trivial RBF kernels $k_{\mathrm{in}}$, $k_{\mathrm{out}}$ (as in Equation 3). Let $\bar{h}_{\mathrm{NN}}$ be an NN of sufficient complexity to be nontrivially non-affine, in that there exists a positive-measure set of triples $(\mathbf{x},\mathbf{x}',\mathbf{x}'')$ s.t. $\|\mathbf{x} - \mathbf{x}'\| = \|\mathbf{x} - \mathbf{x}''\|$, but $\|\bar{h}_{\mathrm{NN}}(\mathbf{x}) - \bar{h}_{\mathrm{NN}}(\mathbf{x}')\| \neq \|\bar{h}_{\mathrm{NN}}(\mathbf{x}) - \bar{h}_{\mathrm{NN}}(\mathbf{x}'')\|$. Denote the generalization errors of the standard GP, the GP with output kernel only, and the GP with I/O kernel by $E_{\mathrm{I}}^{h}$, $E_{\mathrm{O}}^{h}$, and $E_{\mathrm{I/O}}^{h}$, respectively. The expected result follows (proof in Appendix A).
+
+Theorem 5.3 (Advantage of I/O kernel). $E_{\mathrm{I/O}}^{h} < E_{\mathrm{I}}^{h}$ and $E_{\mathrm{I/O}}^{h} < E_{\mathrm{O}}^{h}$ .
+
+The optimizer associated with the GP simultaneously optimizes the hyperparameters of both kernels, so the less useful kernel usually receives a smaller signal variance. As a result, the I/O kernel is resilient to failures of either kernel. In particular, the GP using the I/O kernel improves performance even when the problem is so complex that Euclidean distance in the input space provides no useful correlation information, or when the input space contains some noisy features. Conversely, when the NN is a bad predictor, and $h_{\mathrm{NN}}$ is no better than noise, the standard GP with input kernel alone is recovered. In other words, the I/O kernel is never worse than using the input kernel alone, and in practice it is often better. This conclusion is confirmed in the experiments in Section 4.
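+
+The structure that makes this trade-off possible can be sketched in plain NumPy (an illustration, not the paper's implementation; `nn_predict`, `beta_in`, and `beta_out` are hypothetical names): the Gram matrix is a weighted sum of an RBF over the raw inputs and an RBF over the NN outputs, so driving either signal variance toward zero recovers the corresponding single-kernel GP.
+
+```python
+import numpy as np
+
+def rbf(A, B, lengthscale=1.0):
+    # Squared-exponential kernel matrix between row sets A and B.
+    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
+    return np.exp(-0.5 * d2 / lengthscale**2)
+
+def io_kernel(X1, X2, nn_predict, beta_in=1.0, beta_out=1.0):
+    # I/O kernel: RBF on inputs plus RBF on NN outputs, each with its own
+    # signal variance (the quantities the GP optimizer trades off).
+    H1 = np.asarray(nn_predict(X1)).reshape(len(X1), -1)
+    H2 = np.asarray(nn_predict(X2)).reshape(len(X2), -1)
+    return beta_in * rbf(X1, X2) + beta_out * rbf(H1, H2)
+
+# Hypothetical stand-in for a trained NN's prediction function.
+nn_predict = lambda X: np.tanh(X @ np.array([1.5, -0.7]))
+
+X = np.random.default_rng(0).standard_normal((30, 2))
+K = io_kernel(X, X, nn_predict)
+# K is a valid covariance matrix: symmetric with nonnegative eigenvalues.
+```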
+
+# 6 DISCUSSION AND FUTURE DIRECTIONS
+
+In addition to the reliable uncertainty estimation, accurate point prediction, and good scalability demonstrated in Section 4, RIO provides other important benefits.
+
+RIO can be directly applied to any standard NN without modification to the model architecture or training pipeline. Moreover, neither retraining of the NN nor changes to the inference process are required. The framework simply requires the outputs of an NN; it does not need to access any internal structure. This feature makes the framework more accessible to practitioners in real-world applications, e.g., data scientists can train NNs using traditional pipelines, then directly apply RIO to the trained NNs.
+
+RIO also provides robustness to a type of adversarial attack. Consider a worst-case scenario, in which an adversary can arbitrarily alter the output of the NN with minuscule changes to the input. It is well-known that there are NNs for which this is possible (Goodfellow et al., 2015). In this case, with the help of the I/O kernel, the model becomes highly uncertain with respect to the output kernel. A confident prediction then requires both input and output to be reasonable. In the real world, a high degree of uncertainty may meet a threshold for disqualifying the prediction as outside the scope of the model's ability.
+
+There are several promising future directions for extending RIO: First, applying RIO to reinforcement learning (RL) algorithms, which usually use standard NNs for reward predictions, would allow uncertainty estimation of the future rewards. Agents can then directly employ efficient exploration strategies, e.g., bandit algorithms (Thompson, 1933), rather than traditional stochastic approaches like $\varepsilon$ -greedy. Second, RIO applied to Bayesian optimization (BO) (Močkus, 1975) would make it possible to use standard NNs in surrogate modeling. This approach can potentially improve the expressivity of the surrogate model and the scalability of BO. Third, since RIO only requires access to the inputs and outputs of NNs, it could be directly applied to any existing prediction models, including hybrid and ensemble models. For example, experiments in Appendix D.6 show that RIO achieves good results when directly applied to random forests. This general applicability makes RIO a more general tool for real-world practitioners.
+
+# 7 CONCLUSION
+
+The RIO framework both provides estimates of predictive uncertainty of neural networks, and reduces their point-prediction errors. The approach captures NN behavior by estimating their residuals with an I/O kernel. RIO is theoretically grounded, performs well on several real-world problems, and, by using a sparse GP approximation, scales well to large datasets. Remarkably, it can be applied directly to any pretrained NN without modifications to the model architecture or training pipeline. Thus, RIO can be used to make NN regression practical and powerful in many real-world applications.
+
+# REFERENCES
+
+Emmanuel Abbe and Colin Sandon. Provable limitations of deep learning. CoRR, abs/1812.06369, 2018. URL http://arxiv.org/abs/1812.06369.
+Ofelia Anjos, Carla Iglesias, Fátima Peres, Javier Martínez, Ángela García, and Javier Taboada. Neural networks applied to discriminate botanical origin of honeys. Food Chemistry, 175:128 - 136, 2015. ISSN 0308-8146. doi: https://doi.org/10.1016/j.foodchem.2014.11.121. URL http://www.sciencedirect.com/science/article/pii/S0308814614018597.
+Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature communications, 5:4308, 07 2014. doi: 10.1038/ncomms5308.
+D. Barber and Christopher Bishop. Ensemble learning in bayesian neural networks. In Generalization in Neural Networks and Machine Learning, pp. 215-237. Springer Verlag, January 1998. URL https://www.microsoft.com/en-us/research/publication/ensemble-learning-in-bayesian-neural-networks/.
+S. Bergmann, S. Stelzer, and S. Strassburger. On the use of artificial neural networks in simulation-based manufacturing control. Journal of Simulation, 8(1):76-90, Feb 2014. ISSN 1747-7786. doi: 10.1057/jos.2013.6. URL https://doi.org/10.1057/jos.2013.6.
+Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, pp. 1613-1622. JMLR.org, 2015. URL http://dl.acm.org/citation.cfm?id=3045118.3045290.
+Leo Breiman. Random forests. *Machine Learning*, 45(1):5-32, Oct 2001. ISSN 1573-0565. doi: 10.1023/A:1010933404324. URL https://doi.org/10.1023/A:1010933404324.
+R. Byrd, P. Lu, J. Nocedal, and C. Zhu. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5):1190-1208, 1995. doi: 10.1137/0916069.
+Lehel Csato and Manfred Opper. Sparse on-line gaussian processes. Neural computation, 14:641-68, 04 2002. doi: 10.1162/089976602317250933.
+Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.
+Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML'16, pp. 1050-1059. JMLR.org, 2016. URL http://dl.acm.org/citation.cfm?id=3045390.3045502.
+Marta Garnelo, Dan Rosenbaum, Christopher Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo Rezende, and S. M. Ali Eslami. Conditional neural processes. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 1704-1713. PMLR, 10-15 Jul 2018a. URL http://proceedings.mlr.press/v80/garnelo18a.html.
+Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S. M. Ali Eslami, and Yee Whye Teh. Neural processes. CoRR, abs/1807.01622, 2018b. URL http://arxiv.org/abs/1807.01622.
+Zoubin Ghahramani. Probabilistic machine learning and artificial intelligence. Nature, 521:452 EP -, 05 2015. URL https://doi.org/10.1038/nature14541.
+Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015. URL http://arxiv.org/abs/1412.6572.
+
+Alex Graves. Practical variational inference for neural networks. In Proceedings of the 24th International Conference on Neural Information Processing Systems, NIPS'11, pp. 2348-2356, USA, 2011. Curran Associates Inc. ISBN 978-1-61839-599-3. URL http://dl.acm.org/citation.cfm?id=2986459.2986721.
+James Hensman, Nicolò Fusi, and Neil D. Lawrence. Gaussian processes for big data. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, UAI'13, pp. 282-290, Arlington, Virginia, United States, 2013. AUAI Press. URL http://dl.acm.org/citation.cfm?id=3023638.3023667.
+James Hensman, Alexander Matthews, and Zoubin Ghahramani. Scalable Variational Gaussian Process Classification. In Guy Lebanon and S. V. N. Vishwanathan (eds.), Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, volume 38 of Proceedings of Machine Learning Research, pp. 351-360, San Diego, California, USA, 09-12 May 2015. PMLR. URL http://proceedings.mlr.press/v38/hensman15.html.
+Geoffrey E. Hinton and Drew van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, COLT '93, pp. 5-13, New York, NY, USA, 1993. ACM. ISBN 0-89791-611-5. doi: 10.1145/168304.168306. URL http://doi.acm.org/10.1145/168304.168306.
+Tin Kam Ho. Random decision forests. In Proceedings of the Third International Conference on Document Analysis and Recognition (Volume 1) - Volume 1, ICDAR '95, pp. 278-, Washington, DC, USA, 1995. IEEE Computer Society. ISBN 0-8186-7128-9. URL http://dl.acm.org/citation.cfm?id=844379.844681.
+Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proc. of CVPR, pp. 4700-4708, 2017.
+Tomoharu Iwata and Zoubin Ghahramani. Improving Output Uncertainty Estimation and Generalization in Deep Learning via Neural Network Gaussian Processes. arXiv e-prints, art. arXiv:1707.05922, Jul 2017.
+Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, S. M. Ali Eslami, Dan Rosenbaum, Oriol Vinyals, and Yee Whye Teh. Attentive neural processes. CoRR, abs/1901.05761, 2019. URL http://arxiv.org/abs/1901.05761.
+Martin Krzywinski and Naomi Altman. Importance of being uncertain. Nature Methods, 10:809 EP -, 08 2013. URL https://doi.org/10.1038/nmeth.2613.
+Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 6402-6413. Curran Associates, Inc., 2017.
+Quoc V. Le, Jiquan Ngiam, Adam Coates, Abhik Lahiri, Bobby Prochnow, and Andrew Y. Ng. On optimization methods for deep learning. In Proceedings of the 28th International Conference on International Conference on Machine Learning, ICML'11, pp. 265-272, USA, 2011. Omnipress. ISBN 978-1-4503-0619-5. URL http://dl.acm.org/citation.cfm?id=3104482.3104516.
+Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521:436 EP -, 05 2015. URL https://doi.org/10.1038/nature14539.
+Jaehoon Lee, Yasaman Bahri, Roman Novak, Sam Schoenholz, Jeffrey Pennington, and Jascha Sohl-dickstein. Deep neural networks as gaussian processes. International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1EA-M-0Z.
+David J. C. MacKay. A practical bayesian framework for backpropagation networks. *Neural Comput.*, 4(3):448-472, May 1992. ISSN 0899-7667. doi: 10.1162/neco.1992.4.3.448. URL http://dx.doi.org/10.1162/neco.1992.4.3.448.
+
+J. Močkus. On bayesian methods for seeking the extremum. In G. I. Marchuk (ed.), Optimization Techniques IFIP Technical Conference Novosibirsk, July 1-7, 1974, pp. 400-404, Berlin, Heidelberg, 1975. Springer Berlin Heidelberg. ISBN 978-3-540-37497-8.
+Radford M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag, Berlin, Heidelberg, 1996. ISBN 0387947248.
+Seyed Taghi Akhavan Niaki and Saeid Hoseinzade. Forecasting s&p 500 index using artificial neural networks and design of experiments. Journal of Industrial Engineering International, 9(1):1, Feb 2013. ISSN 2251-712X. doi: 10.1186/2251-712X-9-1. URL https://doi.org/10.1186/2251-712X-9-1.
+Manfred Opper and Francesco Vivarelli. General bounds on bayes errors for regression with gaussian processes. In Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems II, pp. 302-308, Cambridge, MA, USA, 1999. MIT Press. ISBN 0-262-11245-0. URL http://dl.acm.org/citation.cfm?id=340534.340656.
+Joaquin Quiñonero-Candela and Carl Edward Rasmussen. A unifying view of sparse approximate gaussian process regression. J. Mach. Learn. Res., 6:1939-1959, December 2005. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1046920.1194909.
+Joaquin Quiñonero-Candela, Carl Edward Rasmussen, Fabian Sinz, Olivier Bousquet, and Bernhard Schölkopf. Evaluating predictive uncertainty challenge. In Joaquin Quiñonero-Candela, Ido Dagan, Bernardo Magnini, and Florence d'Alché-Buc (eds.), Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Textual Entailment, pp. 1-27, Berlin, Heidelberg, 2006. Springer Berlin Heidelberg. ISBN 978-3-540-33428-6.
+CE. Rasmussen and CKI. Williams. Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning. MIT Press, January 2006.
+Rasmus Rothe, Radu Timofte, and Luc Van Gool. Deep expectation of real and apparent age from a single image without facial landmarks. International Journal of Computer Vision (IJCV), July 2016.
+Matthias Seeger, Christopher K. I. Williams, and Neil D. Lawrence. Fast forward selection to speed up sparse gaussian process regression. In Workshop on AI and Statistics 9, 2003.
+Nida Shahid, Tim Rappon, and Whitney Berta. Applications of artificial neural networks in health care organizational decision-making: A scoping review. PLOS ONE, 14(2):1-22, 02 2019. doi: 10.1371/journal.pone.0212356. URL https://doi.org/10.1371/journal.pone.0212356.
+Bernard W Silverman et al. Spline smoothing: the equivalent variable kernel method. The Annals of Statistics, 12(3):898-916, 1984.
+Peter Sollich. Learning curves for gaussian processes. In Proceedings of the 1998 Conference on Advances in Neural Information Processing Systems II, pp. 344-350, Cambridge, MA, USA, 1999. MIT Press. ISBN 0-262-11245-0. URL http://dl.acm.org/citation.cfm?id=340534.340667.
+Peter Sollich. Gaussian process regression with mismatched models. In Advances in Neural Information Processing Systems, pp. 519-526, 2002.
+William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285-294, 1933. ISSN 00063444. URL http://www.jstor.org/stable/2332286.
+Michalis K. Titsias. Variational learning of inducing variables in sparse gaussian processes. In Artificial Intelligence and Statistics 12, pp. 567-574, 2009.
+Andrew Gordon Wilson, Zhiting Hu, Ruslan Salakhutdinov, and Eric P. Xing. Deep kernel learning. In Arthur Gretton and Christian C. Robert (eds.), Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, volume 51 of Proceedings of Machine Learning Research, pp. 370-378, Cadiz, Spain, 09-11 May 2016. PMLR. URL http://proceedings.mlr.press/v51/wilson16.html.
+
+Tsun-Yi Yang, Yi-Hsuan Huang, Yen-Yu Lin, Pi-Cheng Hsiu, and Yung-Yu Chuang. Ssr-net: A compact soft stagewise regression network for age estimation. In Proc. of IJCAI, pp. 1078-1084, 2018.
+
+Ciyou Zhu, Richard H. Byrd, Peihuang Lu, and Jorge Nocedal. Algorithm 778: L-bfgs-b: Fortran subroutines for large-scale bound-constrained optimization. ACM Trans. Math. Softw., 23(4): 550-560, December 1997. ISSN 0098-3500. doi: 10.1145/279232.279236. URL http://doi.acm.org/10.1145/279232.279236.
+
+# A ADDITIONAL DETAILS AND PROOFS FOR SECTIONS 5.1 AND 5.2
+
+Section 5.1 provides a theoretical perspective on how using GP to fit NN residuals can provide advantages over GP or NN alone. The idea is to decompose the target function into two independent parts: one that is well-suited for a GP to learn, and the other that is difficult for it to capture. The following definition provides a formalization of this notion of difficulty.
+
+Definition A.1 ($\varepsilon$-indistinguishable). Let $\bar{f}_A$ be the predictor of a learner $A$ trained on $\{(\mathbf{x}_i, y_i)\}_{i=1}^n$, and $\bar{h}_A$ that of $A$ trained on $\mathcal{D} = \{(\mathbf{x}_i, y_i + g(\mathbf{x}_i))\}_{i=1}^n$. Then, if $|\bar{f}_A(\mathbf{x}) - \bar{h}_A(\mathbf{x})| < \varepsilon \; \forall \mathbf{x}$, we say $g$ is $\varepsilon$-indistinguishable for $A$ on $\mathcal{D}$.
+
+In other words, the predictions of $A$ change by no more than $\varepsilon$ in the presence of $g$ . If $\varepsilon$ is small compared to $\mathbb{E}[g^2 (\mathbf{x})]$ , then $g$ contains substantial complexity that $A$ cannot capture.
+
+For notation, let $\bar{\psi}_{\mathrm{A}}$ denote the predictor of a learner $A$ trained on $\{(\mathbf{x}_i,\psi (\mathbf{x}_i) + \xi_i)\}_{i = 1}^n$ , with $\xi_{i}\sim \mathcal{N}(0,\sigma_{n}^{2})$ . Let $E_A^\psi = \mathbb{E}[(\psi (\mathbf{x}) - \bar{\psi}_{\mathrm{A}}(\mathbf{x}))^2 ]$ denote the expected generalization error of $A$ on $\psi$ given training locations $\{\mathbf{x}_i\}_{i = 1}^n$ . Now, consider a dataset $\mathcal{D} = \{(\mathbf{x}_i,y_i)\}_{i = 1}^n$ with
+
+$$
+y_{i} = h(\mathbf{x}_{i}) + \xi_{i} = f(\mathbf{x}_{i}) + g(\mathbf{x}_{i}) + \xi_{i},
+$$
+
+where $f(\cdot) \sim \mathcal{GP}(0, k(\cdot, \cdot))$ , $\xi_i \sim \mathcal{N}(0, \sigma_n^2)$ , and $\mathbb{E}[g^2(\mathbf{x})] = \sigma_g^2$ . For simplicity, assume the GP learner selects optimal hyperparameters for $k$ and $\sigma_n^2$ . Suppose $g$ is independent of $f$ and $\varepsilon$ -indistinguishable for GP on $\mathcal{D}$ , but a neural network $\bar{h}_{\mathrm{NN}}$ may have successfully learned some of $g$ 's structure. These assumptions induce a concrete class of examples of the decomposition in Eq. 8. To demonstrate the existence of such decompositions, Lemma A.2 provides a constructive example of such a function $g$ .
+
+Lemma A.2 (Construction of independent $\varepsilon$ -indistinguishable functions for GP). Given a dataset $\{(\mathbf{x}_i, f(\mathbf{x}_i) + \xi_i)\}_{i=1}^n$ , for any $\sigma_g^2$ and $\varepsilon > 0$ , there exists $g$ with $\mathbb{E}[g^2(\mathbf{x})] = \sigma_g^2$ , and $\{\mathbf{x}_i\}_{i=n+1}^{2n}$ , s.t. $g$ is $\varepsilon$ -indistinguishable on $\{(\mathbf{x}_i, f(\mathbf{x}_i) + g(\mathbf{x}_i) + \xi_i)\}_{i=1}^{2n}$ for $GP$ with a continuous kernel $k$ .
+
+Proof. Let $\mathbf{x}_{n+i} = \mathbf{x}_i + \mathbf{v} \; \forall i = 1, \ldots, n$, for a fixed vector $\mathbf{v}$. Consider the linear smoother view of GP prediction (Rasmussen & Williams, 2006): $\bar{f}_{\mathrm{GP}}(\mathbf{x}_*) = \mathbf{w}(\mathbf{x}_*)^\top \mathbf{y}$, where $\mathbf{w}(\mathbf{x}_*) = (K + \sigma_n^2 I)^{-1}\mathbf{k}(\mathbf{x}_*)$ is the weight function (Silverman et al., 1984), and $\mathbf{y}$ is the target vector. Since $k$ and $\mathbf{w}(\mathbf{x}_*)$ are continuous around each $\mathbf{x}_i$, we can choose $\mathbf{v}$ such that $\forall \mathbf{x}_*$
+
+$$
+\left| \mathbf{w}(\mathbf{x}_{*})_{i} - \mathbf{w}(\mathbf{x}_{*})_{n+i} \right| < \frac{\varepsilon}{n \sigma_{g}^{2}} \quad \forall i = 1, \ldots, n.
+$$
+
+Let $\bar{f}_{\mathrm{GP}}$ be the predictor of GP trained on $\{(\mathbf{x}_i, f(\mathbf{x}_i) + \xi_i)\}_{i=1}^{2n}$ , and $\bar{h}_{\mathrm{GP}}$ the predictor trained on $\{(\mathbf{x}_i, f(\mathbf{x}_i) + g(\mathbf{x}_i) + \xi_i)\}_{i=1}^{2n}$ . Now, choose $g$ so that
+
+$$
+g(\mathbf{x}_{i}) = \begin{cases} \sigma_{g}^{2} & \text{if } i \leq n, \\ -\sigma_{g}^{2} & \text{if } i > n. \end{cases}
+$$
+
+Then,
+
+$$
+\begin{aligned} \bar{h}_{\mathrm{GP}}(\mathbf{x}_{*}) &= \mathbf{w}(\mathbf{x}_{*})^{\top}\left(f(\mathbf{x}_{i}) + g(\mathbf{x}_{i}) + \xi_{i}\right)_{i=1}^{2n} = \mathbf{w}(\mathbf{x}_{*})^{\top}\left(f(\mathbf{x}_{i}) + \xi_{i}\right)_{i=1}^{2n} + \mathbf{w}(\mathbf{x}_{*})^{\top}\left(g(\mathbf{x}_{i})\right)_{i=1}^{2n} \\ &= \bar{f}_{\mathrm{GP}}(\mathbf{x}_{*}) + \sum_{i=1}^{n} \sigma_{g}^{2} \mathbf{w}(\mathbf{x}_{*})_{i} - \sum_{i=1}^{n} \sigma_{g}^{2} \mathbf{w}(\mathbf{x}_{*})_{n+i} = \bar{f}_{\mathrm{GP}}(\mathbf{x}_{*}) + \sigma_{g}^{2} \sum_{i=1}^{n} \left(\mathbf{w}(\mathbf{x}_{*})_{i} - \mathbf{w}(\mathbf{x}_{*})_{n+i}\right). \end{aligned}
+$$
+
+By the choice of $\mathbf{v}$,
+
+$$
+\left| \sigma_{g}^{2} \sum_{i=1}^{n} \left(\mathbf{w}(\mathbf{x}_{*})_{i} - \mathbf{w}(\mathbf{x}_{*})_{n+i}\right) \right| < \sigma_{g}^{2} \, n \left(\frac{\varepsilon}{n \sigma_{g}^{2}}\right) = \varepsilon \;\Rightarrow\; \left| \bar{f}_{\mathrm{GP}}(\mathbf{x}_{*}) - \bar{h}_{\mathrm{GP}}(\mathbf{x}_{*}) \right| < \varepsilon.
+$$
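+
+The construction in this proof can be checked numerically. A small sketch in plain NumPy (illustrative sizes and hyperparameters): build the paired design $\mathbf{x}_{n+i} = \mathbf{x}_i + \mathbf{v}$ with tiny $\mathbf{v}$, compute the smoother weights $\mathbf{w}(\mathbf{x}_*) = (K + \sigma_n^2 I)^{-1}\mathbf{k}(\mathbf{x}_*)$, and observe that adding the alternating $g$ barely changes the GP predictions even though $\mathbb{E}[g^2(\mathbf{x})]$ is large:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+def rbf(A, B, ls=1.0):
+    # Squared-exponential kernel matrix between row sets A and B.
+    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
+    return np.exp(-0.5 * d2 / ls**2)
+
+n, sigma_n, c = 20, 0.3, 5.0                 # c plays the role of sigma_g^2
+X = rng.uniform(-3, 3, size=(n, 1))
+X2 = np.vstack([X, X + 1e-8])                # paired inputs x_{n+i} = x_i + v
+y = np.sin(X2[:, 0]) + sigma_n * rng.standard_normal(2 * n)
+g = np.concatenate([c * np.ones(n), -c * np.ones(n)])  # alternating +/- c
+
+K = rbf(X2, X2) + sigma_n**2 * np.eye(2 * n)
+X_star = np.linspace(-3, 3, 50).reshape(-1, 1)
+W = np.linalg.solve(K, rbf(X2, X_star))      # columns are the weights w(x_*)
+
+pred_clean = W.T @ y                         # linear-smoother GP mean on y
+pred_shifted = W.T @ (y + g)                 # same GP trained on y + g
+max_change = np.abs(pred_clean - pred_shifted).max()
+# max_change is tiny although E[g^2] = c^2: g is nearly indistinguishable.
+```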
+
+
+
+Consider the residuals $y_{i} - \bar{h}_{\mathrm{NN}}(\mathbf{x}_{i}) = r(\mathbf{x}_{i}) + \xi_{i} = r_{f}(\mathbf{x}_{i}) + r_{g}(\mathbf{x}_{i}) + \xi_{i}$. Here, $r_{f}$ is the remaining GP component, i.e., $r_{f} \sim \mathcal{GP}(0, \alpha k(\cdot, \cdot))$, for $0 < \alpha \leq 1$. Similarly, $r_{g}$ is the remainder of $g$, assumed $\varepsilon$-indistinguishable for GP on $\{(\mathbf{x}_i, r(\mathbf{x}_i) + \xi_i)\}_{i=1}^n$, with $\sigma_g^2 - \mathbb{E}[r_g^2(\mathbf{x})] = \delta$. These two assumptions about $r_{f}$ and $r_{g}$ ensure that any new spurious complexity added by the NN is collected in $r_{g}$, since the biggest advantage of the NN (its expressivity) is also its greatest source of risk. The final predictor after fitting residuals is then $\bar{h}_{\mathrm{GP+NN}} = \bar{h}_{\mathrm{NN}} + \bar{r}_{\mathrm{GP}}$. The following sequence of results captures the improvement due to residual estimation.
+
+Lemma A.3 (Generalization of GP on $h$ ).
+
+$$
+E_{\mathrm{GP}}^{f} + \sigma_{g}^{2} - 2\varepsilon (E_{\mathrm{GP}}^{f})^{\frac{1}{2}} - 2\varepsilon \sigma_{g} < E_{\mathrm{GP}}^{h} < E_{\mathrm{GP}}^{f} + \sigma_{g}^{2} + 2\varepsilon (E_{\mathrm{GP}}^{f})^{\frac{1}{2}} + 2\varepsilon \sigma_{g} + \varepsilon^{2}.
+$$
+
+Proof. Let $\Delta \bar{f}_{\mathrm{GP}}(\mathbf{x}) = \bar{h}_{\mathrm{GP}}(\mathbf{x}) - \bar{f}_{\mathrm{GP}}(\mathbf{x})$ denote the change in GP prediction due to $g$ . Then,
+
+$$
+\begin{aligned} E_{\mathrm{GP}}^{h} &= \mathbb{E}\left[\left(h(\mathbf{x}) - \bar{h}_{\mathrm{GP}}(\mathbf{x})\right)^{2}\right] = \mathbb{E}\left[\left(\left(f(\mathbf{x}) + g(\mathbf{x})\right) - \left(\bar{f}_{\mathrm{GP}}(\mathbf{x}) + \Delta\bar{f}_{\mathrm{GP}}(\mathbf{x})\right)\right)^{2}\right] \\ &= \mathbb{E}\left[\left(\left(f(\mathbf{x}) - \bar{f}_{\mathrm{GP}}(\mathbf{x})\right) + \left(g(\mathbf{x}) - \Delta\bar{f}_{\mathrm{GP}}(\mathbf{x})\right)\right)^{2}\right] \\ &= E_{\mathrm{GP}}^{f} + 2\mathbb{E}\left[\left(f(\mathbf{x}) - \bar{f}_{\mathrm{GP}}(\mathbf{x})\right)\left(g(\mathbf{x}) - \Delta\bar{f}_{\mathrm{GP}}(\mathbf{x})\right)\right] + \mathbb{E}\left[\left(g(\mathbf{x}) - \Delta\bar{f}_{\mathrm{GP}}(\mathbf{x})\right)^{2}\right] \\ &= E_{\mathrm{GP}}^{f} - 2\mathbb{E}\left[\left(f(\mathbf{x}) - \bar{f}_{\mathrm{GP}}(\mathbf{x})\right)\Delta\bar{f}_{\mathrm{GP}}(\mathbf{x})\right] + \mathbb{E}\left[\left(g(\mathbf{x}) - \Delta\bar{f}_{\mathrm{GP}}(\mathbf{x})\right)^{2}\right], \end{aligned}
+$$
+
+where the last line makes use of the fact that $f$ and $g$ are independent. Now,
+
+$$
+\left| 2\mathbb{E}\left[\left(f(\mathbf{x}) - \bar{f}_{\mathrm{GP}}(\mathbf{x})\right) \Delta\bar{f}_{\mathrm{GP}}(\mathbf{x})\right] \right| < 2\varepsilon \left(E_{\mathrm{GP}}^{f}\right)^{\frac{1}{2}},
+$$
+
+and
+
+$$
+\mathbb{E}\left[\left(g(\mathbf{x}) - \Delta\bar{f}_{\mathrm{GP}}(\mathbf{x})\right)^{2}\right] = \sigma_{g}^{2} - 2\mathbb{E}\left[g(\mathbf{x}) \Delta\bar{f}_{\mathrm{GP}}(\mathbf{x})\right] + \mathbb{E}\left[\Delta\bar{f}_{\mathrm{GP}}(\mathbf{x})^{2}\right],
+$$
+
+where
+
+$$
+\left| 2\mathbb{E}\left[g(\mathbf{x}) \Delta\bar{f}_{\mathrm{GP}}(\mathbf{x})\right] \right| < 2\varepsilon \sigma_{g} \quad \text{and} \quad 0 \leq \mathbb{E}\left[\Delta\bar{f}_{\mathrm{GP}}(\mathbf{x})^{2}\right] < \varepsilon^{2}.
+$$
+
+So, $E_{\mathrm{GP}}^{f} + \sigma_{g}^{2} - 2\varepsilon (E_{\mathrm{GP}}^{f})^{\frac{1}{2}} - 2\varepsilon \sigma_{g} < E_{\mathrm{GP}}^{h} < E_{\mathrm{GP}}^{f} + \sigma_{g}^{2} + 2\varepsilon (E_{\mathrm{GP}}^{f})^{\frac{1}{2}} + 2\varepsilon \sigma_{g} + \varepsilon^{2}$ .
+
+
+
+Lemma A.4 (Generalization of NN). $E_{\mathrm{NN}}^{h} = \alpha \mathbb{E}[f^{2}(\mathbf{x})] + \sigma_{g}^{2} - \delta$.
+
+Proof. Making use of the fact that $r_f$ and $r_g$ are independent, with $\mathbb{E}[r_f(\mathbf{x})] = 0$,
+
+$$
+E_{\mathrm{NN}}^{h} = \mathbb{E}[r^{2}(\mathbf{x})] = \mathbb{E}[(r_{f}(\mathbf{x}) + r_{g}(\mathbf{x}))^{2}] = \mathbb{E}[r_{f}^{2}(\mathbf{x})] + \mathbb{E}[r_{g}^{2}(\mathbf{x})].
+$$
+
+Now, $r_f \sim \mathcal{GP}(0, \alpha k) \implies \mathbb{E}[r_f^2(\mathbf{x})] = \alpha \mathbb{E}[f^2(\mathbf{x})]$, and $\sigma_g^2 - \mathbb{E}[r_g^2(\mathbf{x})] = \delta \implies \mathbb{E}[r_g^2(\mathbf{x})] = \sigma_g^2 - \delta$. So, $E_{\mathrm{NN}}^{h} = \alpha \mathbb{E}[f^{2}(\mathbf{x})] + \sigma_{g}^{2} - \delta$.
+
+Lemma A.5 (Generalization of GP fitting residuals).
+
+$$
+E_{\mathrm{GP}}^{r_{f}} + \sigma_{g}^{2} - \delta - 2\varepsilon (E_{\mathrm{GP}}^{r_{f}})^{\frac{1}{2}} - 2\varepsilon (\sigma_{g}^{2} - \delta)^{\frac{1}{2}} < E_{\mathrm{GP+NN}}^{h} < E_{\mathrm{GP}}^{r_{f}} + \sigma_{g}^{2} - \delta + 2\varepsilon (E_{\mathrm{GP}}^{r_{f}})^{\frac{1}{2}} + 2\varepsilon (\sigma_{g}^{2} - \delta)^{\frac{1}{2}} + \varepsilon^{2}.
+$$
+
+Proof. Let $\Delta \bar{r}_{f\mathrm{GP}}(\mathbf{x}) = \bar{r}_{\mathrm{GP}}(\mathbf{x}) - \bar{r}_{f\mathrm{GP}}(\mathbf{x})$ denote the change in GP prediction due to $r_g$ . Then,
+
+$$
+\begin{aligned} E_{\mathrm{GP+NN}}^{h} &= \mathbb{E}\left[\left(h(\mathbf{x}) - \bar{h}_{\mathrm{GP+NN}}(\mathbf{x})\right)^{2}\right] = \mathbb{E}\left[\left(h(\mathbf{x}) - \bar{h}_{\mathrm{NN}}(\mathbf{x}) - \bar{r}_{\mathrm{GP}}(\mathbf{x})\right)^{2}\right] \\ &= \mathbb{E}\left[\left(\left(r_{f}(\mathbf{x}) + r_{g}(\mathbf{x})\right) - \left(\bar{r}_{f\mathrm{GP}}(\mathbf{x}) + \Delta\bar{r}_{f\mathrm{GP}}(\mathbf{x})\right)\right)^{2}\right] \\ &= \mathbb{E}\left[\left(\left(r_{f}(\mathbf{x}) - \bar{r}_{f\mathrm{GP}}(\mathbf{x})\right) + \left(r_{g}(\mathbf{x}) - \Delta\bar{r}_{f\mathrm{GP}}(\mathbf{x})\right)\right)^{2}\right] \\ &= E_{\mathrm{GP}}^{r_{f}} + 2\mathbb{E}\left[\left(r_{f}(\mathbf{x}) - \bar{r}_{f\mathrm{GP}}(\mathbf{x})\right)\left(r_{g}(\mathbf{x}) - \Delta\bar{r}_{f\mathrm{GP}}(\mathbf{x})\right)\right] + \mathbb{E}\left[\left(r_{g}(\mathbf{x}) - \Delta\bar{r}_{f\mathrm{GP}}(\mathbf{x})\right)^{2}\right] \\ &= E_{\mathrm{GP}}^{r_{f}} - 2\mathbb{E}\left[\left(r_{f}(\mathbf{x}) - \bar{r}_{f\mathrm{GP}}(\mathbf{x})\right)\Delta\bar{r}_{f\mathrm{GP}}(\mathbf{x})\right] + \mathbb{E}\left[\left(r_{g}(\mathbf{x}) - \Delta\bar{r}_{f\mathrm{GP}}(\mathbf{x})\right)^{2}\right]. \end{aligned}
+$$
+
+Similar to the case of Lemma A.3,
+
+$$
+\left| 2\,\mathbb{E}\left[\left(r_{f}(\mathbf{x}) - \bar{r}_{f\mathrm{GP}}(\mathbf{x})\right)\left(r_{g}(\mathbf{x}) - \Delta\bar{r}_{f\mathrm{GP}}(\mathbf{x})\right)\right]\right| < 2\varepsilon\left(E_{\mathrm{GP}}^{r_{f}}\right)^{\frac{1}{2}},
+$$
+
+and
+
+$$
+\mathbb {E} \big [ (r _ {g} (\mathbf {x}) - \Delta \bar {r} _ {f \mathrm {G P}} (\mathbf {x})) ^ {2} \big ] = \sigma_ {g} ^ {2} - \delta - 2 \mathbb {E} [ r _ {g} (\mathbf {x}) \Delta \bar {r} _ {f \mathrm {G P}} (\mathbf {x}) ] + \mathbb {E} [ \Delta \bar {r} _ {f \mathrm {G P}} (\mathbf {x}) ^ {2} ],
+$$
+
+where
+
+$$
+\left| 2\,\mathbb{E}\left[ r_{g}(\mathbf{x})\,\Delta\bar{r}_{f\mathrm{GP}}(\mathbf{x}) \right] \right| < 2\varepsilon\left(\sigma_{g}^{2} - \delta\right)^{\frac{1}{2}} \quad \text{and} \quad 0 \leq \mathbb{E}\left[\Delta\bar{r}_{f\mathrm{GP}}(\mathbf{x})^{2}\right] < \varepsilon^{2}.
+$$
+
+So,
+
+$$
+E _ {\mathrm {G P}} ^ {r _ {f}} + \sigma_ {g} ^ {2} - \delta - 2 \varepsilon (E _ {\mathrm {G P}} ^ {r _ {f}}) ^ {\frac {1}{2}} - 2 \varepsilon (\sigma_ {g} ^ {2} - \delta) ^ {\frac {1}{2}} < E _ {\mathrm {G P + N N}} ^ {h} < E _ {\mathrm {G P}} ^ {r _ {f}} + \sigma_ {g} ^ {2} - \delta + 2 \varepsilon (E _ {\mathrm {G P}} ^ {r _ {f}}) ^ {\frac {1}{2}} + 2 \varepsilon (\sigma_ {g} ^ {2} - \delta) ^ {\frac {1}{2}} + \varepsilon^ {2}.
+$$
+
+
+
+Lemma A.6 (Generalization of GP on $f$ and $r_f$ ). From the classic result (Opper & Vivarelli, 1999; Sollich, 1999; Rasmussen & Williams, 2006): Consider the eigenfunction expansion $k(\mathbf{x},\mathbf{x}^{\prime}) = \sum_{j}\lambda_{j}\phi_{j}(\mathbf{x})\phi_{j}(\mathbf{x}^{\prime})$ with $\int k(\mathbf{x},\mathbf{x}^{\prime})\phi_{i}(\mathbf{x})p(\mathbf{x})d\mathbf{x} = \lambda_{i}\phi_{i}(\mathbf{x}^{\prime})$ . Let $\Lambda$ be the diagonal matrix of the eigenvalues $\lambda_{j}$ , and $\Phi$ be the design matrix, i.e., $\Phi_{ji} = \phi_j(\mathbf{x}_i)$ . Then,
+
+$$
+E_{\mathrm{GP}}^{f} = \operatorname{tr}\left(\Lambda^{-1} + \sigma_{n}^{-2}\Phi\Phi^{\top}\right)^{-1} \quad \text{and} \quad E_{\mathrm{GP}}^{r_{f}} = \operatorname{tr}\left(\alpha^{-1}\Lambda^{-1} + \sigma_{n}^{-2}\Phi\Phi^{\top}\right)^{-1}.
+$$
+
+Theorem 5.1. $\lim_{\varepsilon \to 0}\left(E_{\mathrm{GP}}^{h} - E_{\mathrm{GP + NN}}^{h}\right)\geq \delta$ and $\lim_{\varepsilon \to 0}\left(E_{\mathrm{NN}}^{h} - E_{\mathrm{GP + NN}}^{h}\right) > 0.$
+
+Proof. From Lemmas A.3, A.4, and A.5, respectively, we have:
+
+$$
+\lim_{\varepsilon \to 0} E_{\mathrm{GP}}^{h} = E_{\mathrm{GP}}^{f} + \sigma_{g}^{2}, \quad \lim_{\varepsilon \to 0} E_{\mathrm{NN}}^{h} = \alpha\,\mathbb{E}[f^{2}(\mathbf{x})] + \sigma_{g}^{2} - \delta, \quad \text{and} \quad \lim_{\varepsilon \to 0} E_{\mathrm{GP+NN}}^{h} = E_{\mathrm{GP}}^{r_{f}} + \sigma_{g}^{2} - \delta.
+$$
+
+From Lemma A.6, we have
+
+$$
+E _ {\mathrm {G P}} ^ {f} - E _ {\mathrm {G P}} ^ {r _ {f}} = \operatorname {t r} (\Lambda^ {- 1} + \sigma_ {n} ^ {- 2} \Phi \Phi^ {\top}) ^ {- 1} - \operatorname {t r} (\alpha^ {- 1} \Lambda^ {- 1} + \sigma_ {n} ^ {- 2} \Phi \Phi^ {\top}) ^ {- 1} \geq 0,
+$$
+
+and
+
+$$
+\alpha \mathbb {E} [ f ^ {2} (\mathbf {x}) ] - E _ {\mathrm {G P}} ^ {r _ {f}} = \alpha \mathbb {E} [ f ^ {2} (\mathbf {x}) ] - \operatorname {t r} \left(\alpha^ {- 1} \Lambda^ {- 1} + \sigma_ {n} ^ {- 2} \Phi \Phi^ {\top}\right) ^ {- 1} > 0.
+$$
+
+So,
+
+$$
+\lim_{\varepsilon \to 0}\left(E_{\mathrm{GP}}^{h} - E_{\mathrm{GP+NN}}^{h}\right) \geq \delta \quad \text{and} \quad \lim_{\varepsilon \to 0}\left(E_{\mathrm{NN}}^{h} - E_{\mathrm{GP+NN}}^{h}\right) > 0.
+$$
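For intuition, the trace inequality from Lemma A.6 can be checked numerically. The sketch below assumes $0 < \alpha < 1$ (so that $\alpha^{-1}\Lambda^{-1} \succeq \Lambda^{-1}$) and draws a random diagonal $\Lambda$ and design matrix $\Phi$; all concrete values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 8           # number of training points / eigenfunctions kept
alpha = 0.3            # assumed residual-to-signal variance ratio, in (0, 1)
sigma_n2 = 0.1         # noise variance

lam = rng.uniform(0.1, 2.0, size=p)   # kernel eigenvalues (diagonal of Lambda)
Phi = rng.normal(size=(p, n))         # design matrix, Phi[j, i] = phi_j(x_i)

# E_GP^f = tr(Lambda^{-1} + sigma_n^{-2} Phi Phi^T)^{-1}
E_f = np.trace(np.linalg.inv(np.diag(1.0 / lam) + Phi @ Phi.T / sigma_n2))
# E_GP^{r_f} = tr(alpha^{-1} Lambda^{-1} + sigma_n^{-2} Phi Phi^T)^{-1}
E_rf = np.trace(np.linalg.inv(np.diag(1.0 / (alpha * lam)) + Phi @ Phi.T / sigma_n2))

# alpha < 1 enlarges the inverse-eigenvalue term, so the trace shrinks
assert E_f >= E_rf > 0
```

Since $\alpha^{-1}\Lambda^{-1} - \Lambda^{-1}$ is positive semi-definite for $\alpha < 1$, the second matrix dominates the first and its inverse has the smaller trace, which is exactly the inequality used in the proof above.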
+
+
+
+Theorem 5.2. The variance of NN residuals is positively correlated with the uncertainty of $r_{\mathrm{GP}}$ .
+
+Proof. Increases in $\mathbb{E}[r_f^2 (\mathbf{x})]$ lead to increases in $\alpha$ ; increases in $\mathbb{E}[r_g^2 (\mathbf{x})]$ lead to decreases in $\delta$ , and thus to increases in the estimated noise level $\hat{\sigma}_n^2$ . So, an increase in either $\mathbb{E}[r_f^2 (\mathbf{x})]$ or $\mathbb{E}[r_g^2 (\mathbf{x})]$ leads to an increase in $\alpha k((\mathbf{x}_*,\hat{y}_*),(\mathbf{x}_*,\hat{y}_*)) - \alpha \mathbf{k}_*^\top (\alpha \mathbf{K}((\mathcal{X},\hat{\mathbf{y}}),(\mathcal{X},\hat{\mathbf{y}})) + \hat{\sigma}_n^2\mathbf{I})^{-1}\alpha \mathbf{k}_*$ , which is the predictive variance of $r_{\mathrm{GP}}$ .
+
+Theorem 5.3. $E_{\mathrm{I / O}}^h < E_{\mathrm{I}}^h$ and $E_{\mathrm{I / O}}^h < E_{\mathrm{O}}^h$ .
+
+Proof.
+
+$$
+\| \mathbf {x} - \mathbf {x} ^ {\prime} \| = \| \mathbf {x} - \mathbf {x} ^ {\prime \prime} \| \quad \Longrightarrow \quad k _ {\mathrm {i n}} (\mathbf {x}, \mathbf {x} ^ {\prime}) = k _ {\mathrm {i n}} (\mathbf {x}, \mathbf {x} ^ {\prime \prime}).
+$$
+
+$$
+\| \bar {h} _ {\mathrm {N N}} (\mathbf {x}) - \bar {h} _ {\mathrm {N N}} \left(\mathbf {x} ^ {\prime}\right) \| \neq \| \bar {h} _ {\mathrm {N N}} (\mathbf {x}) - \bar {h} _ {\mathrm {N N}} \left(\mathbf {x} ^ {\prime \prime}\right) \| \Longrightarrow k _ {\text {o u t}} (\mathbf {x}, \mathbf {x} ^ {\prime}) \neq k _ {\text {o u t}} (\mathbf {x}, \mathbf {x} ^ {\prime \prime}).
+$$
+
+These are true for all hyperparameter settings of $k_{\mathrm{in}}$ or $k_{\mathrm{out}}$ , since both are RBF kernels. So, there is no hyperparameter setting of $k_{\mathrm{in}}$ that yields $k_{\mathrm{in}}^{\prime}(\mathbf{x}_1,\mathbf{x}_2) = k_{\mathrm{in}}(\mathbf{x}_1,\mathbf{x}_2) + k_{\mathrm{out}}(\mathbf{x}_1,\mathbf{x}_2)\forall \mathbf{x}_1,\mathbf{x}_2$ . Similarly, there is no hyperparameter setting of $k_{\mathrm{out}}$ that yields $k_{\mathrm{out}}^{\prime}(\mathbf{x}_1,\mathbf{x}_2) = k_{\mathrm{in}}(\mathbf{x}_1,\mathbf{x}_2) + k_{\mathrm{out}}(\mathbf{x}_1,\mathbf{x}_2)\forall \mathbf{x}_1,\mathbf{x}_2$ . Since neither the input kernel nor the output kernel alone can correctly specify the kernel over all sets of positive measure, their generalization error is greater than the Bayes error, which is achieved by the correct kernel (Sollich, 2002), i.e., $k_{\mathrm{in}}(\mathbf{x}_1,\mathbf{x}_2) + k_{\mathrm{out}}(\mathbf{x}_1,\mathbf{x}_2)$ .
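The argument can be made concrete with a toy example: for a hypothetical prediction surface standing in for $\bar{h}_{\mathrm{NN}}$, two points equidistant from $\mathbf{x}$ in input space are indistinguishable to $k_{\mathrm{in}}$ but are distinguished by the composite I/O kernel:

```python
import numpy as np

def rbf(a, b, var=1.0, ls=1.0):
    # RBF kernel on arbitrary (possibly scalar) feature vectors
    return var * np.exp(-np.sum((np.atleast_1d(a) - np.atleast_1d(b)) ** 2)
                        / (2 * ls ** 2))

# x' and x'' are equidistant from x in input space
x, x1, x2 = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
nn = lambda z: z[0] ** 2   # hypothetical NN prediction surface (illustrative)

# the input kernel alone cannot tell x' from x''...
assert rbf(x, x1) == rbf(x, x2)

# ...but the output kernel on NN predictions can, so the sum kernel differs
k_io_1 = rbf(x, x1) + rbf(nn(x), nn(x1))
k_io_2 = rbf(x, x2) + rbf(nn(x), nn(x2))
assert k_io_1 != k_io_2
```

No input-only RBF can reproduce these two distinct values from identical input distances, which is the core of the proof.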
+
+# B BACKGROUND
+
+This section reviews notation for Neural Networks, Gaussian Processes, and SVGP, a more efficient GP approximation. The RIO method, introduced in Section 3 of the main paper, uses Gaussian Processes to estimate the uncertainty in neural network predictions and to reduce their point-prediction errors.
+
+# B.1 NEURAL NETWORKS
+
+Neural Networks (NNs) learn a nonlinear transformation from input to output space based on a number of training examples. Let $\mathcal{D} \subseteq \mathbb{R}^{d_{\mathrm{in}}} \times \mathbb{R}^{d_{\mathrm{out}}}$ denote the training dataset with size $n$ , and $\mathcal{X} = \{\mathbf{x}_i : (\mathbf{x}_i, \mathbf{y}_i) \in \mathcal{D}, \mathbf{x}_i = [x_i^1, x_i^2, \ldots, x_i^{d_{\mathrm{in}}}] \mid i = 1, 2, \ldots, n\}$ and $\mathcal{Y} = \{\mathbf{y}_i : (\mathbf{x}_i, \mathbf{y}_i) \in \mathcal{D}, \mathbf{y}_i = [y_i^1, y_i^2, \ldots, y_i^{d_{\mathrm{out}}}] \mid i = 1, 2, \ldots, n\}$ denote the inputs and outputs (i.e., targets). A fully-connected feed-forward neural network with $L$ hidden layers of width $N_l$ (for layer $l = 1, 2, \ldots, L$ ) performs the following computations: Let $z_l^j$ denote the output value of the $j$ th node in the $l$ th hidden layer given input $\mathbf{x}_i$ . Then $z_l^j = \phi(\sum_{k=1}^{d_{\mathrm{in}}} w_l^{j,k} x_i^k + b_l^j)$ for $l = 1$ , and $z_l^j = \phi(\sum_{k=1}^{N_{l-1}} w_l^{j,k} z_{l-1}^k + b_l^j)$ for $l = 2, \ldots, L$ , where $w_l^{j,k}$ denotes the weight on the connection from the $k$ th node in the previous layer to the $j$ th node in the $l$ th hidden layer, $b_l^j$ denotes the bias of the $j$ th node in the $l$ th hidden layer, and $\phi$ is a nonlinear activation function. The output value of the $j$ th node in the output layer is then given by $\hat{y}_i^j = \sum_{k=1}^{N_L} w_{\mathrm{out}}^{j,k} z_L^k + b_{\mathrm{out}}^j$ , where $w_{\mathrm{out}}^{j,k}$ denotes the weight on the connection from the $k$ th node in the last hidden layer to the $j$ th node in the output layer, and $b_{\mathrm{out}}^j$ denotes the bias of the $j$ th node in the output layer.
+
+A gradient-based optimizer is usually used to learn the weights and biases given a pre-defined loss function, e.g., the squared loss $\mathcal{L} = \frac{1}{n}\sum_{i=1}^{n}(\mathbf{y}_i - \hat{\mathbf{y}}_i)^2$ . For a standard NN, the learned parameters are fixed, so the NN output $\hat{\mathbf{y}}_i$ is also a fixed point. For a Bayesian NN, a distribution over the parameters is learned, so the NN output is a distribution over $\hat{\mathbf{y}}_i$ . A pretrained standard NN, however, needs to be augmented, e.g., with a Gaussian Process, to achieve the same result.
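The forward pass described above can be sketched in a few lines of NumPy (a minimal illustration with random weights, not the implementation used in the paper):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, weights, biases):
    """Feed-forward pass: hidden layers apply the activation phi (ReLU here),
    the output layer is linear, matching the equations above."""
    z = x
    for W, b in zip(weights[:-1], biases[:-1]):
        z = relu(W @ z + b)                  # z_l = phi(W_l z_{l-1} + b_l)
    return weights[-1] @ z + biases[-1]      # y_hat = W_out z_L + b_out

# tiny example: d_in = 3, one hidden layer of width 4, d_out = 2
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases = [np.zeros(4), np.zeros(2)]
y_hat = forward(rng.normal(size=3), weights, biases)
assert y_hat.shape == (2,)
```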
+
+# B.2 GAUSSIAN PROCESS
+
+A Gaussian Process (GP) is a collection of random variables, such that any finite collection of these variables follows a joint multivariate Gaussian distribution (Rasmussen & Williams, 2006). Given a training dataset $\mathcal{X} = \{\mathbf{x}_i \mid i = 1, 2, \dots, n\}$ and $\mathbf{y} = \{y_i = f(\mathbf{x}_i) + \epsilon \mid i = 1, 2, \dots, n\}$ , where $\epsilon$ denotes additive independent identically distributed Gaussian noise, the first step is to fit the GP to the training data assuming $\mathbf{y} \sim \mathcal{N}(0, \mathbf{K}(\mathcal{X}, \mathcal{X}) + \sigma_n^2 \mathbf{I})$ , where $\mathcal{N}$ denotes a multivariate Gaussian distribution with mean 0 and covariance matrix $\mathbf{K}(\mathcal{X}, \mathcal{X}) + \sigma_n^2 \mathbf{I}$ . $\mathbf{K}(\mathcal{X}, \mathcal{X})$ denotes the kernel-based covariance matrix at all pairs of training points, with each entry $k_{i,j} = k(\mathbf{x}_i, \mathbf{x}_j)$ , and $\sigma_n^2$ denotes the noise variance of the observations. One commonly used kernel is the radial basis function (RBF) kernel, defined as $k(\mathbf{x}_i, \mathbf{x}_j) = \sigma_f^2 \exp\left(-\frac{1}{2l_f^2} \| \mathbf{x}_i - \mathbf{x}_j \|^2\right)$ . The signal variance $\sigma_f^2$ , length scale $l_f$ , and noise variance $\sigma_n^2$ are trainable hyperparameters, optimized during the learning process to maximize the log marginal likelihood $\log p(\mathbf{y}|\mathcal{X})$ .
+
+After the fitting phase, the GP is used to predict the distribution of the label $y_{*}$ given a test point $\mathbf{x}_{*}$ . This prediction is given by $y_{*}|\mathcal{X},\mathbf{y},\mathbf{x}_{*}\sim \mathcal{N}(\bar{y}_{*},\mathrm{var}(y_{*}))$ with $\bar{y}_{*} = \mathbf{k}_{*}^{\top}(\mathbf{K}(\mathcal{X},\mathcal{X}) + \sigma_{n}^{2}\mathbf{I})^{-1}\mathbf{y}$ and $\mathrm{var}(y_{*}) = k(\mathbf{x}_{*},\mathbf{x}_{*}) - \mathbf{k}_{*}^{\top}(\mathbf{K}(\mathcal{X},\mathcal{X}) + \sigma_{n}^{2}\mathbf{I})^{-1}\mathbf{k}_{*}$ , where $\mathbf{k}_{*}$ denotes the vector of kernel-based covariances (i.e., $k(\mathbf{x}_{*},\mathbf{x}_{i})$ ) between $\mathbf{x}_{*}$ and all the training points, and $\mathbf{y}$ denotes the vector of all training labels. Unlike with an NN, the uncertainty of a GP prediction is therefore explicitly quantified.
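These predictive equations can be sketched directly in NumPy; the toy example below fixes the kernel hyperparameters rather than optimizing the marginal likelihood:

```python
import numpy as np

def rbf_kernel(A, B, sigma_f2=1.0, ls=1.0):
    # RBF kernel matrix between row-wise point sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sigma_f2 * np.exp(-d2 / (2 * ls ** 2))

def gp_predict(X, y, X_star, sigma_n2=0.1):
    """Predictive mean and variance of an exact GP, matching the
    equations above (hyperparameters fixed for illustration)."""
    K = rbf_kernel(X, X) + sigma_n2 * np.eye(len(X))
    K_star = rbf_kernel(X, X_star)             # k_* for each test point
    mean = K_star.T @ np.linalg.solve(K, y)    # k_*^T (K + s^2 I)^{-1} y
    v = np.linalg.solve(K, K_star)
    var = rbf_kernel(X_star, X_star).diagonal() - (K_star * v).sum(0)
    return mean, var

X = np.linspace(0, 1, 8)[:, None]
y = np.sin(4 * X[:, 0])
mean, var = gp_predict(X, y, X)
assert np.all(var > 0) and np.all(var < 1.0)  # uncertainty shrinks near data
```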
+
+# B.3 SVGP
+
+The main limitation of the standard GP, as defined above, is its excessive computation and storage cost. For a dataset with $n$ data points, inference in a standard GP has time complexity $\mathcal{O}(n^3)$ and space complexity $\mathcal{O}(n^2)$ . To circumvent this issue, sparse GP methods were developed that approximate the original GP by introducing inducing variables (Csato & Opper, 2002; Seeger et al., 2003; Quinonero Candela & Rasmussen, 2005; Titsias, 2009). These approximations lead to a computational complexity of $\mathcal{O}(nm^2)$ and a space complexity of $\mathcal{O}(nm)$ , where $m$ is the number of inducing variables. Following this line of work, SVGP (Hensman et al., 2013; 2015) further improves scalability by applying the Stochastic Variational Inference (SVI) technique, as follows:
+
+Consider the same training dataset and GP as in Section B.2, and assume a set of inducing variables $\mathcal{Z} = \{\mathbf{z}_i\mid i = 1,2,\ldots ,m\}$ and $\mathcal{U} = \{u_i = f(\mathbf{z}_i) + \epsilon \mid i = 1,2,\dots,m\}$ (where $f(\cdot)$ and $\epsilon$ are unknown).
+
+Algorithm 1 Procedure of RIO
+
+Require: $(\mathcal{X},\mathbf{y}) = \{(\mathbf{x}_i,y_i)\}_{i = 1}^n$ : training data; $\hat{\mathbf{y}} = \{\hat{y}_i\}_{i = 1}^n$ : NN predictions on training data; $\mathbf{x}_{*}$ : data point to be predicted; $\hat{y}_{*}$ : NN prediction on $\mathbf{x}_{*}$
+
+Ensure: $\hat{y}_{*}^{\prime}\sim \mathcal{N}(\hat{y}_{*} + \bar{\hat{r}}_{*},\mathrm{var}(\hat{r}_{*}))$ : a distribution of calibrated predictions
+
+Training Phase:
+
+1: calculate residuals $\mathbf{r} = \{r_i = y_i - \hat{y}_i\}_{i = 1}^n$
+
+2: for each optimizer step do
+
+3: calculate covariance matrix $\mathbf{K}_c((\mathcal{X},\hat{\mathbf{y}}),(\mathcal{X},\hat{\mathbf{y}}))$ , where each entry is given by $k_{c}((\mathbf{x}_{i},\hat{y}_{i}),(\mathbf{x}_{j},\hat{y}_{j})) = k_{\mathrm{in}}(\mathbf{x}_{i},\mathbf{x}_{j}) + k_{\mathrm{out}}(\hat{y}_{i},\hat{y}_{j})$ for $i,j = 1,2,\ldots ,n$
+
+4: optimize GP hyperparameters by maximizing the log marginal likelihood $\log p(\mathbf{r}|\mathcal{X},\hat{\mathbf{y}}) = -\frac{1}{2}\mathbf{r}^{\top}(\mathbf{K}_{c}((\mathcal{X},\hat{\mathbf{y}}),(\mathcal{X},\hat{\mathbf{y}})) + \sigma_{n}^{2}\mathbf{I})^{-1}\mathbf{r} - \frac{1}{2}\log |\mathbf{K}_{c}((\mathcal{X},\hat{\mathbf{y}}),(\mathcal{X},\hat{\mathbf{y}})) + \sigma_{n}^{2}\mathbf{I}| - \frac{n}{2}\log 2\pi$
+
+Deployment Phase:
+
+5: calculate residual mean $\bar{\hat{r}}_{*} = \mathbf{k}_{*}^{\top}(\mathbf{K}_{c}((\mathcal{X},\hat{\mathbf{y}}),(\mathcal{X},\hat{\mathbf{y}})) + \sigma_{n}^{2}\mathbf{I})^{-1}\mathbf{r}$ and residual variance $\operatorname {var}(\hat{r}_*) = k_c((\mathbf{x}_*,\hat{y}_*),(\mathbf{x}_*,\hat{y}_*)) - \mathbf{k}_*^\top (\mathbf{K}_c((\mathcal{X},\hat{\mathbf{y}}),(\mathcal{X},\hat{\mathbf{y}})) + \sigma_n^2\mathbf{I})^{-1}\mathbf{k}_*$
+
+6: return distribution of calibrated prediction $\hat{y}_*^\prime \sim \mathcal{N}(\hat{y}_* + \bar{\hat{r}}_*,\mathrm{var}(\hat{r}_*))$
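The procedure in Algorithm 1 can be sketched with an exact GP and fixed kernel hyperparameters (the marginal-likelihood optimization of step 4 is omitted for brevity; the systematic bias of the toy "NN" below is purely illustrative):

```python
import numpy as np

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def rio_calibrate(X, y, y_hat, x_star, y_hat_star, sigma_n2=0.1):
    """Sketch of Algorithm 1: fit a GP to NN residuals under the I/O
    kernel, then return the calibrated predictive mean and variance."""
    r = y - y_hat                                        # step 1: residuals
    # step 3: I/O kernel = input RBF + output RBF on NN predictions
    Kc = rbf(X, X) + rbf(y_hat[:, None], y_hat[:, None])
    k_star = rbf(X, x_star) + rbf(y_hat[:, None], y_hat_star[:, None])
    K = Kc + sigma_n2 * np.eye(len(X))
    r_mean = k_star.T @ np.linalg.solve(K, r)            # step 5: residual mean
    v = np.linalg.solve(K, k_star)
    r_var = 2.0 - (k_star * v).sum(0)                    # k_c(s, s) = 1 + 1 = 2
    return y_hat_star + r_mean, r_var                    # step 6: calibrated output

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))
y = X[:, 0] + 0.3                  # ground truth
y_hat = X[:, 0]                    # toy "NN" with a systematic bias of 0.3
mean, var = rio_calibrate(X, y, y_hat, X, y_hat)
assert np.abs(mean - y).mean() < np.abs(y_hat - y).mean()  # correction helps
```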
+
+SVGP learns a variational distribution $q(\mathcal{U})$ by maximizing a lower bound of $\log p(\mathbf{y}|\mathcal{X})$ , where $\log p(\mathbf{y}|\mathcal{X}) = \log \int p(\mathbf{y}|\mathcal{U},\mathcal{X})p(\mathcal{U})\mathrm{d}\mathcal{U}$ and $p(\cdot)$ denotes the probability density under the original GP. Trainable hyperparameters during the learning process include the values of $\mathbf{z}_i$ and the hyperparameters of the covariance function of the original GP. Given a test point $\mathbf{x}_{*}$ , the predictive distribution is then given by $p(y_{*}|\mathbf{x}_{*}) = \int p(y_{*}|\mathcal{U},\mathbf{x}_{*})q(\mathcal{U})\mathrm{d}\mathcal{U}$ , which still follows a Gaussian distribution. One advantage of SVGP is that minibatch training methods (Le et al., 2011) can be applied in the case of very large datasets. Suppose the minibatch size is $m'$ with $m' \ll n$ ; then for each training step/iteration, the computational complexity is $\mathcal{O}(m'm^2)$ , and the space complexity is $\mathcal{O}(m'm)$ . For full details about SVGP, see Hensman et al. (2013). Since NNs are typically trained on relatively large datasets, SVGP makes it practical to implement uncertainty estimates on NNs.
+
+# C PROCEDURE OF RIO
+
+This section provides an algorithmic description of RIO (see Algorithm 1).
+
+# D EMPIRICAL STUDY
+
+# D.1 EXPERIMENTAL SETUPS
+
+Dataset Description In total, 12 real-world regression datasets from the UCI machine learning repository (Dua & Graff, 2017) are tested. Table 4 summarizes the basic information of these datasets. For all the datasets except MSD, $20\%$ of the whole dataset is used as the test set and $80\%$ as the training set, and this split is randomly generated in each independent run. For MSD, the first 463715 samples are used for training and the last 51630 samples for testing, following the provider's guideline. All the datasets except MSD are tested for 100 independent runs, and MSD is tested for 10 independent runs. In each independent run, the dataset is randomly split into training, validation, and test sets (except for MSD, for which the split is strictly predefined by the provider), and the same random split is used by all the tested algorithms to ensure fair comparisons. All source code for reproducing the experimental results is provided at: https://github.com/leaf-ai/rio-paper.
+
+# Parametric Setup for Algorithms
+
+- NN: For SC dataset, a fully connected feed-forward NN with 2 hidden layers, each with 128 hidden neurons, is used. For CT dataset, a fully connected feed-forward NN with 2 hidden layers, each with 256 hidden neurons, is used. For MSD dataset, a fully connected
+
+Table 4: Summary of the testing datasets
+
+| abbreviation | full name in UCI ML repository | dataset size | dimension | note |
| yacht | Yacht Hydrodynamics Data Set | 308 | 6 | - |
| ENB/h | Energy efficiency | 768 | 8 | Heating Load as target |
| ENB/c | Energy efficiency | 768 | 8 | Cooling Load as target |
| airfoil | Airfoil Self-Noise | 1505 | 5 | - |
| CCS | Concrete Compressive Strength | 1030 | 8 | - |
| wine/r | Wine Quality | 1599 | 11 | only use winequality-red data |
| wine/w | Wine Quality | 4898 | 11 | only use winequality-white data |
| CCPP | Combined Cycle Power Plant | 9568 | 4 | - |
| CASP | Physicochemical Properties of Protein Tertiary Structure | 54730 | 9 | - |
| SC | Superconductivity Data | 21263 | 81 | - |
| CT | Relative location of CT slices on axial axis | 53500 | 384 | - |
| MSD | YearPredictionMSD | 515345 | 90 | train: first 463715, test: last 51630 |
+
+feed-forward NN with 4 hidden layers, each with 64 hidden neurons, is used. For all the remaining datasets, a fully connected feed-forward NN with 2 hidden layers, each with 64 hidden neurons, is used. The inputs to the NN are normalized to have mean 0 and standard deviation 1. The activation function is ReLU for all the hidden layers. The maximum number of epochs for training is 1000. $20\%$ of the training data is used as validation data, and the split is random in each independent run. An early stop is triggered if the loss on the validation data has not improved for 10 epochs. The optimizer is RMSprop with learning rate 0.001, and the loss function is mean squared error (MSE).
+
+- RIO, RIO variants and SVGP (Hensman et al., 2013): SVGP is used as an approximation to the original GP in RIO and all the RIO variants. For RIO, the RIO variants, and SVGP, the number of inducing points is 50 in all the experiments. An RBF kernel is used for both the input and output kernels. For RIO, the RIO variants, and SVGP, the signal variances and length scales of all the kernels plus the noise variance are the trainable hyperparameters. The optimizer is L-BFGS-B with default parameters as in the SciPy documentation (https://docs.scipy.org/doc/scipy/reference/optimize-minimize-lbfgsb.html), and the maximum number of iterations is set to 1000. The training process runs until the L-BFGS-B optimizer decides to stop.
+- NNGP (Lee et al., 2018): For the NNGP kernel, the depth is 2, and the activation function is ReLU. $n_g = 101$ , $n_v = 151$ , and $n_c = 131$ . Following the learning process in the original paper, a grid search is performed for the best values of $\sigma_w^2$ and $\sigma_b^2$ : as in the original paper, a grid of 30 points evenly spaced from 0.1 to 5.0 (for $\sigma_w^2$ ) and 30 points evenly spaced from 0 to 2.0 (for $\sigma_b^2$ ) is evaluated. The noise variance $\sigma_\epsilon^2$ is fixed at 0.01. The grid search stops when Cholesky decomposition fails or all 900 points have been evaluated. The best values found during the grid search are used in the experiments. No pre-computed lookup tables are utilized.
+- ANP (Kim et al., 2019): The parametric setup of ANP follows the recommendations in the original paper. The attention type is multihead, the hidden size is 64, the maximum number of context points is 50, the context ratio is 0.8, and the random kernel hyperparameters option is on. The size of the latent encoder is $64 \times 64 \times 64 \times 64$ , the number of latents is 64, the size of the deterministic encoder is $64 \times 64 \times 64 \times 64$ , the size of the decoder is $64 \times 64 \times 64 \times 64 \times 2$ , and the deterministic path option is on. The Adam optimizer with learning rate $10^{-4}$ is used, and the maximum number of training iterations is 2000.
+
+# Performance Metrics
+
+- To measure the point-prediction error, the Root Mean Square Error (RMSE) between the method's predictions and the true outcomes on the test set is calculated for each independent experimental run. The mean and standard deviation of these RMSEs are then used to measure the performance of the algorithms.
+- To quantitatively measure the quality of uncertainty estimation, the average negative log predictive density (NLPD) (Quinonero-Candela et al., 2006) is used. NLPD is given by
+
+$$
+L = - \frac {1}{n} \sum_ {i = 1} ^ {n} \log p \left(\hat {\mathbf {y}} _ {i} = \mathbf {y} _ {i} \mid \mathbf {x} _ {i}\right) \tag {10}
+$$
+
+where $\hat{\mathbf{y}}_i$ indicates the prediction result, $\mathbf{x}_i$ is the input with true associated outcome $\mathbf{y}_i$ , and $p(\cdot)$ is the probability density function (PDF) of the returned distribution given input $\mathbf{x}_i$ .
+
+- To investigate the behaviors of the RIO variants during learning, the mean of the estimated noise variance $\sigma_{n}^{2}$ over all the independent runs is calculated.
+- To compare the computation time of the algorithms, the training time (wall-clock time) of NN, RIO, all the ablated RIO variants, SVGP, and ANP is averaged over all the independent runs. Note that the computation time of the RIO variants does not include the training time of the associated NN, because the NN is considered pretrained. For NNGP, the wall-clock time of the grid search is used. In case the grid search stops due to Cholesky decomposition failures, the computation time of NNGP is estimated as the average running time of all the successful evaluations $\times 900$ , the intended number of evaluations. All the algorithms are implemented in TensorFlow and tested in exactly the same Python environment. All the experiments were run on a machine with 16 Intel(R) Xeon(R) E5-2623 v4 @ 2.60GHz CPUs and 128GB memory.
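For Gaussian predictive distributions, the NLPD of Eq. 10 reduces to a closed form; the sketch below uses illustrative toy values:

```python
import numpy as np

def nlpd(y, mean, var):
    """Average negative log predictive density (Eq. 10) for Gaussian
    predictive distributions with the given means and variances."""
    return np.mean(0.5 * np.log(2 * np.pi * var) + (y - mean) ** 2 / (2 * var))

y = np.array([1.0, 2.0, 3.0])
good = nlpd(y, y + 0.1, np.full(3, 0.25))   # accurate, well-calibrated
bad = nlpd(y, y + 2.0, np.full(3, 0.25))    # same variance, biased mean
assert good < bad                           # NLPD penalizes the biased predictor
```

Note that NLPD rewards both accurate means and honest variances: an over-confident (too-small) variance is penalized through the quadratic term, an under-confident one through the log term.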
+
+# D.2 RESULTS ON ESTIMATED CONFIDENCE INTERVALS
+
+Confidence intervals (CIs) are a useful tool for estimating the distribution of outcomes with explicit probabilities in real-world applications. To provide more insight for practitioners, the percentages of testing outcomes that fall within the $95\% / 90\% / 68\%$ CIs estimated by each algorithm are calculated. Ideally, these percentages should be as close to the estimated confidence levels as possible, e.g., a perfect uncertainty estimator would have exactly $95\%$ of testing outcomes within its estimated $95\%$ CIs. In practice, when two models have similar-quality CI estimates, the more conservative one is favored. Figures 4 and 5 show the distribution of the percentages of testing outcomes that fall within the estimated $95\% / 90\% / 68\%$ CIs over all the independent runs for all the datasets and algorithms. Figures 6, 7, and 8 compare the distributions of coverage percentages of the $1\% - 99\%$ CIs for RIO and SVGP. The experimental data are extracted from the experiments that generated Table 1 in the main paper. In Figures 4 through 8, the box extends from the lower to the upper quartile values of the data (each data point represents an independent experimental run), with a line at the median. The whiskers extend from the box to show the range of the data. Flier points are those past the end of the whiskers, indicating outliers.
+
+According to the results, RIO is able to provide reasonable CI estimates in most cases. However, using this metric to compare algorithms on uncertainty estimation quality can be noisy and misleading: a method that simply learns the distribution of the labels would score well on CI coverage, yet it could not make any meaningful point-wise prediction. More detailed examples follow.
+
+For the CT dataset, SVGP has an extremely high RMSE of around 52, while the RIO variants have RMSEs around 1. However, SVGP still shows acceptable $95\%$ and $90\%$ CI coverage, and even has over-confident coverage for the $68\%$ CI. Upon investigation, it was found that this happened only by chance, not because of accurate CI estimation by SVGP. Since SVGP is not able to extract any useful information from the high-dimensional input space, it simply treats all outcomes as noise. As a result, SVGP shows a very large RMSE compared to the other algorithms, and the mean of its predicted outcome distribution is always around 0. Since SVGP treats everything as noise, the estimated noise variance is very high, and the estimated $95\%$ CI based on this noise variance is overly wide and covers all the test outcomes in most cases. When the estimated $90\%$ CI is evaluated, the large error in the mean estimate and the large error in the noise variance estimate mostly cancel each other out by chance, i.e., the estimated $90\%$ CI is mistakenly shifted by the erroneous mean, but the overly wide noise variance fortunately covers slightly more than $90\%$ of test outcomes. The case of the estimated $68\%$ CI is similar, but now the error in the noise variance cannot fully compensate for the error in the mean, so the coverage percentages fall below $68\%$ , indicating over-confidence.
+
+One interesting observation is that SVGP tends to be more "conservative" at high confidence levels $(>90\%)$ , even in cases where it is "optimistic" at low confidence levels. Upon investigation,
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 4: Quality of estimated CIs. These figures show the distribution of the percentages that testing outcomes are within the estimated $95\% / 90\% / 68\%$ CIs over all the independent runs.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 5: Quality of estimated CIs. These figures show the distribution of the percentages that testing outcomes are within the estimated $95\% / 90\% / 68\%$ CIs over all the independent runs.
+
+
+
+
+
+
+
+
+
+
+
+
+Figure 6: Distribution of CI Coverage. These figures compare the distributions of empirical coverage percentages of estimated $1\% -99\%$ CIs over all the independent runs for RIO and SVGP.
+
+
+
+
+
+
+
+
+Figure 7: Distribution of CI Coverage. These figures compare the distributions of empirical coverage percentages of estimated $1\% -99\%$ CIs over all the independent runs for RIO and SVGP.
+
+
+
+
+
+
+
+
+Figure 8: Distribution of CI Coverage. These figures compare the distributions of empirical coverage percentages of estimated $1\% -99\%$ CIs over all the independent runs for RIO and SVGP.
+
+this is because SVGP normally has an overly high noise variance estimate (which also comes with a higher prediction RMSE in most cases), so it has a higher chance of covering more points when the increase in CI width (influenced by the noise variance) surpasses the erroneous shift of the mean (which depends on the prediction errors).
+
+# D.3 ADDITIONAL RESULTS ON OUTPUT CORRECTION
+
+This section provides additional demonstrations of the detailed behavior of RIO output correction. The experimental data are extracted from the experiments that generated Table 1 in the main paper.
+
+Figure 9 plots the distributions of ground truth labels (outcomes), original NN predictions, and predictions corrected by RIO for a randomly picked run on each dataset. Figure 10 shows the point-wise comparisons between NN outputs and RIO-corrected outputs for the same experimental runs as in Figure 9. Based on the results, it is clear that RIO tends to calibrate each NN prediction individually. The distribution of outputs after RIO calibration may be a shift, shrinkage, expansion, or an even more complex modification of the original NN predictions, depending on how different the NN predictions are from the ground truth. As a result, the distribution of RIO-calibrated outputs is closer to the distribution of the ground truth. One interesting behavior can be observed in Figure 9 for the "protein" dataset: after applying RIO, the range of the whiskers shrank and the outliers disappeared, but the box (indicating the 25th to 75th percentile of the data) expanded. This behavior shows that RIO can customize its calibration to each point. Another interesting behavior is that for the "wine/r" dataset (see both Figures 9 and 10), RIO shifts all the original NN outputs to lower values, which are closer to the distribution of the ground truth.
+
+To better study the behaviors of RIO, a new performance metric is defined, called the improvement ratio (IR): the ratio between the number of successful corrections (i.e., corrections that reduce the prediction error) and the total number of data points. For each run on each dataset, this IR value is calculated, and the distribution of IR values over 100 independent runs (random dataset split except for MSD, random NN initialization and training) on each dataset is plotted in Figure 11. According to the results, the IR values for RIO are above 0.5 in most cases. For 7 datasets, the IR values are above 0.5 in all 100 independent runs. For some runs on yacht, ENB/h, CT, and MSD, the IR values are above 0.8 or even above 0.9. All these observations show that RIO is making meaningful corrections instead of random perturbations. The results in Figure 11 also provide useful information for practitioners: although not all RIO calibrations improve the result, most of them do.
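The IR metric can be computed in a few lines; the toy predictions below are illustrative:

```python
import numpy as np

def improvement_ratio(y, y_nn, y_rio):
    """Fraction of points where the RIO-corrected prediction is strictly
    closer to the ground truth than the raw NN prediction."""
    return np.mean(np.abs(y - y_rio) < np.abs(y - y_nn))

y = np.array([1.0, 2.0, 3.0, 4.0])
y_nn = np.array([1.5, 2.5, 3.5, 4.5])    # NN over-predicts by 0.5 everywhere
y_rio = np.array([1.1, 2.1, 3.1, 4.9])   # correction helps on 3 of 4 points
assert improvement_ratio(y, y_nn, y_rio) == 0.75
```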
+
+The same empirical analysis is also conducted for two RIO variants, namely $\mathrm{R + I}$ (predicting residuals with only the input kernel) and $\mathrm{R + O}$ (predicting residuals with only the output kernel). Figures 12, 13, and 14 show results for $\mathrm{R + I}$ , and Figures 15, 16, and 17 show results for $\mathrm{R + O}$ . From the results, the output kernel is helpful in problems where the input kernel does not work well (CT and MSD), and it also shows more robust performance in terms of improvement ratio (IR) on most datasets. However, it is still generally worse than full RIO. More specifically, $\mathrm{R + I}$ shows an extremely low IR on the CT dataset (see Figure 14). Upon investigation, this is because the input kernel by itself is not able to learn anything from the complex high-dimensional input space, so it treats everything as noise. As a result, it keeps the NN output unchanged during correction in most cases. Applying the output kernel instead solves this issue. Comparing Figures 10, 13, and 16 shows that the behavior of RIO is either a mixture of or a selection between $\mathrm{R + I}$ and $\mathrm{R + O}$ . This means that RIO with the I/O kernel is able to choose the better of the two kernels, or to combine both if needed.
+
+# D.4 EXPERIMENTAL RESULTS WITH ARD
+
+The experimental results shown in the main paper are based on the setup in which all RIO variants use a standard RBF kernel without Automatic Relevance Determination (ARD). To investigate the performance of RIO under more sophisticated setups, the same experiments were run for all RIO variants with the ARD mechanism turned on (all other experimental setups are identical to Section D.1, except that the NN depth for the MSD dataset is reduced from 4 hidden layers to 2, due to a computation failure during Cholesky decomposition). Table 5 shows the summarized experimental results. RIO still performs best or equals the best method (based on statistical tests) on 9 of 12 datasets for both the RMSE and NLPD metrics. This clearly shows the effectiveness and robustness of RIO.
+
+Figure 9: Distribution of Ground Truths and NN/RIO-corrected predictions. The box extends from the lower to upper quartile values of the data points, with a line at the median. The whiskers extend from the box to show the range of the data. Flier points are those past the end of the whiskers, indicating the outliers.
+
+Figure 10: Comparison between NN and RIO-corrected outputs. Each dot represents a data point. The horizontal axis denotes the original NN predictions, and the vertical axis the corresponding predictions after RIO corrections. The solid line indicates where NN predictions are the same as RIO-corrected predictions (i.e., no change in output). This figure shows that RIO exhibits diverse behavior in how it calibrates output.
+
+Table 5: Summary of experimental results (with ARD)
+
+| Dataset n × d | Method | RMSE mean±std | NLPD mean±std | Noise Variance | Time (sec) | Dataset n × d | Method | RMSE mean±std | NLPD mean±std | Noise Variance | Time (sec) |
+|---|---|---|---|---|---|---|---|---|---|---|---|
+| yacht | NN | 2.35±1.16†‡ | - | - | 3.41 | ENB/h | NN | 1.01±0.38†‡ | - | - | 8.47 |
+| | RIO | 1.22±0.44 | 1.718±0.531 | 0.46 | 7.73 | | RIO | 0.59±0.10 | 0.937±0.186 | 0.22 | 11.98 |
+| | R+I | 1.36±0.39†‡ | 1.893±0.482†‡ | 0.83 | 6.89 | | R+I | 0.61±0.12†‡ | 0.971±0.197†‡ | 0.25 | 11.07 |
+| 308 | R+O | 1.88±0.65†‡ | 2.208±0.470†‡ | 1.62 | 6.01 | 768 | R+O | 0.77±0.32†‡ | 1.150±0.328†‡ | 0.49 | 10.51 |
+| × | Y+O | 1.88±0.62†‡ | 2.227±0.439†‡ | 1.82 | 9.56 | × | Y+O | 0.83±0.37†‡ | 1.226±0.345†‡ | 0.61 | 15.31 |
+| 6 | Y+IO | 1.09±0.30†‡ | 1.591±0.425†‡ | 0.84 | 9.85 | 8 | Y+IO | 0.61±0.08†‡ | 0.974±0.164†‡ | 0.25 | 15.96 |
+| | SVGP | 0.91±0.21†‡ | 1.359±0.208†‡ | 0.77 | 8.93 | | SVGP | 0.77±0.05†‡ | 1.156±0.049†‡ | 0.59 | 14.21 |
+| | NNGP | 12.40±1.45†‡ | 35.18±0.534†‡ | - | 7347 | | NNGP | 4.97±0.29†‡ | 32.40±0.638†‡ | - | 7374 |
+| | ANP | 7.59±3.20†‡ | 1.793±0.887†‡ | - | 40.82 | | ANP | 4.08±2.27†‡ | 2.475±0.559†‡ | - | 102.3 |
+| ENB/c | NN | 1.88±0.46†‡ | - | - | 11.64 | airfoil | NN | 4.84±0.74†‡ | - | - | 8.83 |
+| | RIO | 1.42±0.18 | 1.825±0.155 | 1.31 | 15.27 | | RIO | 2.49±0.15 | 2.349±0.048 | 6.56 | 17.92 |
+| | R+I | 1.45±0.17†‡ | 1.837±0.134†‡ | 1.45 | 14.02 | | R+I | 2.56±0.16†‡ | 2.376±0.054†‡ | 6.92 | 16.64 |
+| 768 | R+O | 1.77±0.46†‡ | 2.021±0.232†‡ | 2.34 | 9.73 | 1505 | R+O | 4.14±0.27†‡ | 2.844±0.071†‡ | 16.31 | 10.97 |
+| × | Y+O | 1.77±0.45†‡ | 2.020±0.231†‡ | 2.56 | 20.54 | × | Y+O | 4.39±1.64†‡ | 2.889±0.196†‡ | 20.08 | 22.90 |
+| 8 | Y+IO | 1.46±0.22†‡ | 1.847±0.157†‡ | 1.42 | 21.65 | 5 | Y+IO | 3.73±0.35†‡ | 2.746±0.127†‡ | 14.88 | 25.08 |
+| | SVGP | 1.73±0.25†‡ | 1.969±0.130†‡ | 3.12 | 18.73 | | SVGP | 3.36±0.23†‡ | 2.632±0.065†‡ | 11.11 | 21.80 |
+| | NNGP | 4.91±0.32†‡ | 30.14±0.886†‡ | - | 7704 | | NNGP | 6.54±0.23†‡ | 33.60±0.420†‡ | - | 3355 |
+| | ANP | 4.81±2.15†‡ | 2.698±0.548†‡ | - | 64.11 | | ANP | 21.17±30.72†‡ | 5.399±6.316†‡ | - | 231.7 |
+| CCS | NN | 6.29±0.54†‡ | - | - | 6.53 | wine/r | NN | 0.689±0.037†‡ | - | - | 3.24 |
+| | RIO | 5.81±0.50 | 3.210±0.119 | 23.00 | 9.20 | | RIO | 0.674±0.033 | 1.091±0.082 | 0.28 | 7.55 |
+| 1030 | R+I | 5.80±0.49 | 3.206±0.118 | 23.19 | 8.80 | 1599 | R+I | 0.670±0.033†‡ | 1.085±0.084†‡ | 0.28 | 9.52 |
+| × | R+O | 6.21±0.53†‡ | 3.286±0.121†‡ | 27.02 | 3.77 | × | R+O | 0.676±0.034†‡ | 1.093±0.083 | 0.29 | 5.18 |
+| 8 | Y+O | 6.18±0.51†‡ | 3.278±0.114†‡ | 27.22 | 11.74 | 11 | Y+O | 0.675±0.033 | 1.088±0.081†‡ | 0.29 | 13.56 |
+| | Y+IO | 5.91±0.46†‡ | 3.228±0.108†‡ | 24.50 | 12.09 | | Y+IO | 0.672±0.033†‡ | 1.085±0.081†‡ | 0.29 | 14.23 |
+| | SVGP | 6.20±0.40†‡ | 3.233±0.060†‡ | 34.66 | 11.34 | | SVGP | 0.640±0.028†‡ | 0.969±0.043†‡ | 0.39 | 13.32 |
+| wine/w | NN | 0.725±0.026†‡ | - | - | 7.01 | CCPP | NN | 4.97±0.53†‡ | - | - | 10.11 |
+| | RIO | 0.707±0.017 | 1.093±0.035 | 0.38 | 11.8 | | RIO | 3.99±0.13 | 2.796±0.026 | 15.82 | 26.27 |
+| 4898 | R+I | 0.702±0.018†‡ | 1.084±0.035†‡ | 0.38 | 14.76 | 9568 | R+I | 3.99±0.13 | 2.797±0.025†‡ | 15.87 | 24.47 |
+| × | R+O | 0.711±0.019†‡ | 1.098±0.037†‡ | 0.39 | 6.46 | × | R+O | 4.33±0.13†‡ | 2.879±0.027†‡ | 18.51 | 9.99 |
+| 11 | Y+O | 0.710±0.019†‡ | 1.096±0.037†‡ | 0.39 | 19.77 | 4 | Y+O | 8.94±44.78†‡ | 2.974±0.484†‡ | 2095 | 27.24 |
+| | Y+IO | 0.708±0.018† | 1.093±0.036 | 0.39 | 20.15 | | Y+IO | 4.57±0.97†‡ | 2.968±0.255†‡ | 36.48 | 27.65 |
+| | SVGP | 0.714±0.017†‡ | 1.074±0.022†‡ | 0.50 | 19.42 | | SVGP | 8.80±44.83†‡ | 2.917±0.468†‡ | 1455 | 26.61 |
+| protein | NN | 4.23±0.08†‡ | - | - | 147.4 | SC | NN | 12.41±0.84†‡ | - | - | 73.11 |
+| | RIO | 4.08±0.05 | 2.826±0.013 | 15.75 | 149.6 | | RIO | 11.24±0.33 | 3.844±0.030 | 104.6 | 99.91 |
+| 45730 | R+I | 4.11±0.05†‡ | 2.834±0.013†‡ | 16.01 | 130.9 | 21263 | R+I | 11.27±0.32†‡ | 3.847±0.029†‡ | 105.5 | 95.33 |
+| | R+O | 4.15±0.06†‡ | 2.843±0.015†‡ | 16.31 | 98.88 | | R+O | 11.68±0.42†‡ | 3.881±0.037†‡ | 113.9 | 47.61 |
+| | Y+O | 4.15±0.06†‡ | 2.843±0.015†‡ | 16.31 | 120.4 | | Y+O | 11.68±0.42†‡ | 3.882±0.036†‡ | 114.2 | 72.08 |
+| | Y+IO | 4.08±0.05 | 2.826±0.013 | 15.75 | 148.6 | | Y+IO | 11.29±0.38†‡ | 3.848±0.034†‡ | 105.9 | 103.4 |
+| | SVGP | 4.64±0.04†‡ | 2.955±0.008†‡ | 22.15 | 128.1 | | SVGP | 14.12±0.27†‡ | 4.090±0.015†‡ | 217.8 | 95.78 |
+| CT | NN | 1.12±0.29†‡ | - | - | 196.6 | MSD | NN | 12.54±1.16†‡ | - | - | 777.8 |
+| | RIO | 0.86±0.12 | 1.261±0.202 | 0.95 | 519.6 | | RIO | 9.79±0.22 | 3.698±0.023 | 94.90 | 3059 |
+| 53500 | R+I | 1.12±0.29†‡ | 1.505±0.260†‡ | 1.55 | 20.03 | | R+I | 12.54±1.16†‡ | 3.943±0.092†‡ | 156.6 | 91.01 |
+| × | R+O | 0.86±0.12 | 1.261±0.202 | 0.95 | 159.4 | × | R+O | 9.82±0.19 | 3.701±0.020 | 96.11 | 2739 |
+| 384 | Y+O | 0.98±0.77‡ | 1.303±0.336 | 2.02 | 180.6 | 90 | Y+O | 209.8±596.2‡ | 7.010±9.68‡ | 236.4 | 1491 |
+| | Y+IO | 0.88±0.13†‡ | 1.256±0.145 | 0.55 | 594.5 | | Y+IO | 208.8±596.6‡ | 7.055±9.66‡ | 197.2 | 3347 |
+| | SVGP | 52.07±0.19†‡ | 5.372±0.004†‡ | 2712 | 29.69 | | SVGP | 10.97±0.0†‡ | 3.981±0.0†‡ | 199.7 | 2590 |
+
+The symbols $\dagger$ and $\ddagger$ indicate that the difference between the marked entry and RIO is statistically significant at the $5\%$ significance level using paired t-test and Wilcoxon test, respectively. The best entries that are significantly better than all the others under at least one statistical test are marked in boldface (ties are allowed).
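The two significance tests behind the † and ‡ markers are standard and can be reproduced with SciPy; the sketch below (with a hypothetical helper name and made-up per-run score arrays) compares a baseline's per-run scores against RIO's paired runs:

```python
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

def compare_to_rio(rio_scores, other_scores, alpha=0.05):
    # paired t-test (dagger) and Wilcoxon signed-rank test (double dagger),
    # both applied to per-run paired scores at the 5% significance level
    t_sig = ttest_rel(rio_scores, other_scores).pvalue < alpha
    w_sig = wilcoxon(rio_scores, other_scores).pvalue < alpha
    return t_sig, w_sig
```

Both tests require the same number of paired runs per method, which is why the experiments repeat every method on identical train/test splits.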
+
+
+Figure 11: Distribution of Improvement Ratio. The box extends from the lower to upper quartile values of the data (each data point represents an independent experimental run), with a line at the median. The whiskers extend from the box to show the range of the data. Flier points are those past the end of the whiskers, indicating the outliers.
+
+# D.5 EXPERIMENTAL RESULTS WITH 10,000 MAXIMUM OPTIMIZER ITERATIONS
+
+The experimental results shown in the main paper are based on a setup in which all RIO variants use the L-BFGS-B optimizer with the maximum number of iterations set to 1,000. To investigate the performance of RIO under a larger computational budget, the same experiments are run for all RIO variants with the maximum number of optimizer iterations set to 10,000 (all other experimental setups are identical to those in Section D.1). Table 6 shows the summarized experimental results for the 8 smallest datasets. According to the results, the rankings of the methods are very similar to those in Table 1 (in the main paper), and RIO still performs best in all metrics.
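With SciPy's L-BFGS-B, this budget change amounts to raising the `maxiter` option of `scipy.optimize.minimize`; the objective below is a hypothetical stand-in (in RIO the objective is the GP negative log marginal likelihood, optimized through the GP library's wrapper):

```python
import numpy as np
from scipy.optimize import minimize

def objective(theta):
    # stand-in for the GP negative log marginal likelihood optimized in RIO
    return np.sum((theta - 3.0) ** 2)

result = minimize(objective, x0=np.zeros(4), method="L-BFGS-B",
                  options={"maxiter": 10000})  # was 1,000 in the main paper
```

On well-behaved objectives L-BFGS-B typically terminates on its gradient tolerance long before the iteration cap, which is consistent with the rankings barely changing under the larger budget.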
+
+# D.6 RESULTS ON RANDOM FORESTS
+
+In principle, RIO can be applied to any prediction model since it treats the pretrained model as a black box. To demonstrate this extensibility, RIO is applied to another classical prediction model — Random Forests (RF) (Ho, 1995; Breiman, 2001). The same experimental setups as in Section D.1 are used. For RF, the number of estimators is set to 100 for all datasets. To avoid overfitting, the minimum number of samples required at a leaf node is 10 for all datasets, and the maximum tree depth is 7 for MSD and 5 for all other datasets. Table 7 shows the experimental results. From Table 7, RIO performs the best or equals the best method (based on statistical tests) in 9 out of 12 datasets in terms of both RMSE and NLPD. In addition, RIO significantly improves the performance of the original RF in 11 out of 12 datasets. These results demonstrate the robustness and broad applicability of RIO.
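The RF configuration described above maps directly onto scikit-learn's regressor (assumed here for illustration; the text does not name the implementation used):

```python
from sklearn.ensemble import RandomForestRegressor

# Hyperparameters as described above; max_depth would be 7 for MSD
rf = RandomForestRegressor(
    n_estimators=100,     # number of estimators for all datasets
    min_samples_leaf=10,  # guards against overfitting
    max_depth=5,          # 5 for all datasets except MSD
)
```

RIO then treats `rf.predict` exactly as it treats a trained NN's forward pass: only the model's point predictions on the training set are needed to fit the residual GP.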
+
+Figure 12: Distribution of Ground Truths and NN/R+I-corrected predictions. The box extends from the lower to upper quartile values of the data points, with a line at the median. The whiskers extend from the box to show the range of the data. Flier points are those past the end of the whiskers, indicating the outliers.
+
+Figure 13: Comparison between NN and R+I-corrected outputs. Each dot represents a data point. The horizontal axis denotes the original NN predictions, and the vertical axis the corresponding predictions after R+I corrections. The solid line indicates where NN predictions are the same as R+I-corrected predictions (i.e., no change in output).
+
+Figure 14: Distribution of Improvement Ratio. The box extends from the lower to upper quartile values of the data (each data point represents an independent experimental run), with a line at the median. The whiskers extend from the box to show the range of the data. Flier points are those past the end of the whiskers, indicating the outliers.
+
+Table 6: Summary of experimental results (with 10,000 maximum optimizer iterations)
+
+| Dataset n × d | Method | RMSE mean±std | NLPD mean±std | Noise Variance | Time (sec) | Dataset n × d | Method | RMSE mean±std | NLPD mean±std | Noise Variance | Time (sec) |
+|---|---|---|---|---|---|---|---|---|---|---|---|
+| yacht | NN | 2.20±0.93†‡ | - | - | 3.33 | ENB/h | NN | 0.94±0.37†‡ | - | - | 10.14 |
+| | RIO | 1.40±0.50 | 1.883±0.568 | 0.74 | 25.67 | | RIO | 0.64±0.26 | 0.968±0.273 | 0.31 | 45.04 |
+| | R+I | 1.93±0.65†‡ | 2.266±0.484†‡ | 2.19 | 5.59 | | R+I | 0.70±0.33†‡ | 1.043±0.317†‡ | 0.41 | 22.22 |
+| 308 | R+O | 1.78±0.57†‡ | 2.176±0.525†‡ | 1.39 | 6.76 | 768 | R+O | 0.72±0.31†‡ | 1.084±0.309†‡ | 0.41 | 20.49 |
+| × | Y+O | 1.78±0.56†‡ | 2.204±0.509†‡ | 1.60 | 23.99 | × | Y+O | 0.78±0.35†‡ | 1.163±0.328†‡ | 0.55 | 55.23 |
+| 6 | Y+IO | 1.40±0.44 | 1.919±0.567 | 0.82 | 45.69 | 8 | Y+IO | 0.66±0.26†‡ | 1.013±0.280†‡ | 0.34 | 82.82 |
+| | SVGP | 3.67±0.60†‡ | 2.689±0.111†‡ | 12.07 | 42.59 | | SVGP | 2.01±0.17†‡ | 2.145±0.071†‡ | 4.24 | 79.46 |
+| | NNGP | 12.40±1.45†‡ | 35.18±0.534†‡ | - | 7347 | | NNGP | 4.97±0.29†‡ | 32.40±0.638†‡ | - | 7374 |
+| | ANP | 7.59±3.20†‡ | 1.793±0.887†‡ | - | 40.82 | | ANP | 4.08±2.27†‡ | 2.475±0.559†‡ | - | 102.3 |
+| ENB/c | NN | 1.87±0.42†‡ | - | - | 11.79 | airfoil | NN | 4.84±0.47†‡ | - | - | 8.96 |
+| | RIO | 1.51±0.35 | 1.852±0.198 | 1.59 | 48.53 | | RIO | 3.06±0.20 | 2.551±0.058 | 9.44 | 104.0 |
+| | R+I | 1.70±0.41†‡ | 1.98±0.21†‡ | 2.17 | 10.96 | | R+I | 3.13±0.21†‡ | 2.573±0.059†‡ | 9.91 | 73.22 |
+| 768 | R+O | 1.75±0.41†‡ | 2.011±0.211†‡ | 2.21 | 10.85 | 1505 | R+O | 4.16±0.27†‡ | 2.848±0.068†‡ | 16.58 | 10.92 |
+| × | Y+O | 1.75±0.41†‡ | 2.012±0.210†‡ | 2.27 | 39.52 | × | Y+O | 4.21±0.27†‡ | 2.862±0.082†‡ | 17.89 | 38.69 |
+| 8 | Y+IO | 1.62±0.35†‡ | 1.936±0.197†‡ | 1.86 | 70.94 | 5 | Y+IO | 3.19±0.30†‡ | 2.583±0.087†‡ | 10.24 | 119.95 |
+| | SVGP | 2.52±0.21†‡ | 2.363±0.072†‡ | 6.31 | 84.32 | | SVGP | 3.27±0.20†‡ | 2.608±0.056†‡ | 10.56 | 106.0 |
+| | NNGP | 4.91±0.32†‡ | 30.14±0.886†‡ | - | 7704 | | NNGP | 6.54±0.23†‡ | 33.60±0.420†‡ | - | 3355 |
+| | ANP | 4.81±2.15†‡ | 2.698±0.548†‡ | - | 64.11 | | ANP | 21.17±30.72†‡ | 5.399±6.316†‡ | - | 231.7 |
+| CCS | NN | 6.25±0.49†‡ | - | - | 6.54 | wine/r | NN | 0.688±0.039†‡ | - | - | 3.26 |
+| | RIO | 5.96±0.47 | 3.230±0.108 | 25.37 | 14.31 | | RIO | 0.671±0.033 | 1.088±0.087 | 0.28 | 20.12 |
+| 1030 | R+I | 5.99±0.47†‡ | 3.235±0.107 | 26.04 | 4.91 | 1599 | R+I | 0.668±0.033†‡ | 1.080±0.085†‡ | 0.28 | 12.61 |
+| × | R+O | 6.19±0.49†‡ | 3.280±0.112†‡ | 27.43 | 4.04 | × | R+O | 0.675±0.033†‡ | 1.094±0.088†‡ | 0.29 | 4.96 |
+| 8 | Y+O | 6.18±0.48†‡ | 3.276±0.109†‡ | 27.65 | 17.55 | 11 | Y+O | 0.674±0.033†‡ | 1.089±0.086†‡ | 0.29 | 17.01 |
+| | Y+IO | 6.03±0.47†‡ | 3.246±0.107†‡ | 26.24 | 41.89 | | Y+IO | 0.671±0.032 | 1.087±0.086 | 0.28 | 34.45 |
+| | SVGP | 6.62±0.37†‡ | 3.297±0.045†‡ | 41.15 | 73.66 | | SVGP | 0.642±0.028†‡ | 0.973±0.042†‡ | 0.39 | 70.53 |
+| wine/w | NN | 0.723±0.027†‡ | - | - | 8.51 | CCPP | NN | 4.94±0.49†‡ | - | - | 17.38 |
+| | RIO | 0.704±0.018 | 1.088±0.034 | 0.38 | 49.96 | | RIO | 4.03±0.13 | 2.808±0.025 | 16.21 | 151.3 |
+| 4898 | R+I | 0.700±0.017†‡ | 1.079±0.033†‡ | 0.38 | 26.43 | 9568 | R+I | 4.04±0.13†‡ | 2.810±0.026†‡ | 16.28 | 116.0 |
+| × | R+O | 0.710±0.021†‡ | 1.095±0.037†‡ | 0.39 | 8.87 | × | R+O | 4.33±0.14†‡ | 2.880±0.029†‡ | 18.56 | 19.02 |
+| 11 | Y+O | 0.710±0.020†‡ | 1.093±0.037†‡ | 0.39 | 29.97 | 4 | Y+O | 13.40±63.01‡ | 3.012±0.663†‡ | 4161 | 100.8 |
+| | Y+IO | 0.704±0.018‡ | 1.088±0.034 | 0.38 | 66.79 | | Y+IO | 4.71±1.51†‡ | 2.969±0.271†‡ | 33.58 | 267.1 |
+| | SVGP | 0.713±0.016†‡ | 1.076±0.022†‡ | 0.5 | 158.1 | | SVGP | 4.25±0.13†‡ | 2.859±0.028†‡ | 17.94 | 334.6 |
+
+The symbols $\dagger$ and $\ddagger$ indicate that the difference between the marked entry and RIO is statistically significant at the $5\%$ significance level using paired t-test and Wilcoxon test, respectively. The best entries that are significantly better than all the others under at least one statistical test are marked in boldface (ties are allowed).
+
+Figure 15: Distribution of Ground Truths and NN/R+O-corrected predictions. The box extends from the lower to upper quartile values of the data points, with a line at the median. The whiskers extend from the box to show the range of the data. Flier points are those past the end of the whiskers, indicating the outliers.
+
+Figure 16: Comparison between NN and R+O-corrected outputs. Each dot represents a data point. The horizontal axis denotes the original NN predictions, and the vertical axis the corresponding predictions after R+O corrections. The solid line indicates where NN predictions are the same as R+O-corrected predictions (i.e., no change in output).
+
+Figure 17: Distribution of Improvement Ratio. The box extends from the lower to upper quartile values of the data (each data point represents an independent experimental run), with a line at the median. The whiskers extend from the box to show the range of the data. Flier points are those past the end of the whiskers, indicating the outliers.
+
+Table 7: Summary of experimental results with Random Forests (RF)
+
+| Dataset n × d | Method | RMSE mean±std | NLPD mean±std | Noise Variance | Time (sec) | Dataset n × d | Method | RMSE mean±std | NLPD mean±std | Noise Variance | Time (sec) |
+|---|---|---|---|---|---|---|---|---|---|---|---|
+| yacht | RF | 1.95±0.59†‡ | - | - | 0.33 | ENB/h | RF | 1.44±0.16†‡ | - | - | 0.35 |
+| | RIO | 0.80±0.18 | 1.177±0.187 | 0.49 | 14.93 | | RIO | 0.79±0.13 | 1.150±0.138 | 0.61 | 16.66 |
+| | R+I | 1.62±0.67†‡ | 1.833±0.453†‡ | 1.91 | 12.41 | | R+I | 0.91±0.12†‡ | 1.306±0.128†‡ | 0.74 | 14.13 |
+| 252 × 6 | R+O | 1.29±0.33†‡ | 1.658±0.267†‡ | 1.31 | 12.00 | 768 | R+O | 1.02±0.20†‡ | 1.401±0.187†‡ | 0.92 | 13.81 |
+| | Y+O | 1.47±0.33†‡ | 1.841±0.243†‡ | 2.29 | 16.43 | | Y+O | 1.43±0.17†‡ | 1.792±0.146†‡ | 1.88 | 17.57 |
+| | Y+IO | 0.87±0.20†‡ | 1.268±0.193†‡ | 0.65 | 18.40 | | Y+IO | 0.88±0.11†‡ | 1.280±0.111†‡ | 0.73 | 20.51 |
+| | SVGP | 4.41±0.60†‡ | 2.886±0.096†‡ | 18.55 | 15.44 | | SVGP | 2.13±0.18†‡ | 2.199±0.074†‡ | 4.70 | 16.90 |
+| | NNGP | 12.40±1.45†‡ | 35.18±0.534†‡ | - | 7347 | | NNGP | 4.97±0.29†‡ | 32.40±0.638†‡ | - | 7374 |
+| | ANP | 7.59±3.20†‡ | 1.793±0.887†‡ | - | 40.82 | | ANP | 4.08±2.27†‡ | 2.475±0.559†‡ | - | 102.3 |
+| ENB/c | RF | 1.93±0.15†‡ | - | - | 0.31 | airfoil | RF | 3.68±0.17†‡ | - | - | 0.36 |
+| | RIO | 1.55±0.15 | 1.854±0.087 | 2.13 | 16.31 | | RIO | 2.90±0.17 | 2.487±0.063 | 7.37 | 19.80 |
+| | R+I | 1.53±0.12 | 1.845±0.072 | 2.08 | 12.95 | | R+I | 2.95±0.17†‡ | 2.505±0.059†‡ | 7.88 | 17.19 |
+| 768 × 8 | R+O | 1.85±0.14†‡ | 2.037±0.089†‡ | 2.93 | 10.62 | 1505 | R+O | 3.54±0.17†‡ | 2.693±0.059†‡ | 10.50 | 8.26 |
+| | Y+O | 1.92±0.15†‡ | 2.081±0.091†‡ | 3.20 | 19.82 | | Y+O | 3.58±0.18†‡ | 2.714±0.085†‡ | 12.05 | 21.22 |
+| | Y+IO | 1.85±0.14†‡ | 2.039±0.087†‡ | 2.99 | 21.50 | | Y+IO | 3.31±0.41†‡ | 2.623±0.131†‡ | 10.44 | 23.61 |
+| | SVGP | 2.63±0.23†‡ | 2.403±0.079†‡ | 6.81 | 18.63 | | SVGP | 3.59±0.20†‡ | 2.698±0.054†‡ | 12.66 | 20.52 |
+| | NNGP | 4.91±0.32†‡ | 30.14±0.886†‡ | - | 7704 | | NNGP | 6.54±0.23†‡ | 33.60±0.420†‡ | - | 3355 |
+| | ANP | 4.81±2.15†‡ | 2.698±0.548†‡ | - | 64.11 | | ANP | 21.17±30.72†‡ | 5.399±6.316†‡ | - | 231.7 |
+| CCS | RF | 7.17±0.43†‡ | - | - | 0.49 | wine/r | RF | 0.625±0.028†‡ | - | - | 0.75 |
+| | RIO | 5.84±0.34 | 3.180±0.068 | 24.84 | 17.72 | | RIO | 0.642±0.028 | 1.019±0.070 | 0.27 | 21.13 |
+| 1030 × 8 | R+I | 5.89±0.35†‡ | 3.190±0.069†‡ | 25.34 | 14.62 | 1599 | R+I | 0.625±0.028†‡ | 0.966±0.059†‡ | 0.31 | 7.57 |
+| | R+O | 6.90±0.41†‡ | 3.380±0.079†‡ | 34.20 | 9.77 | | R+O | 0.627±0.028†‡ | 0.974±0.061†‡ | 0.30 | 9.26 |
+| | Y+O | 7.03±0.78†‡ | 3.401±0.154†‡ | 35.75 | 18.72 | | Y+O | 0.628±0.028†‡ | 0.975±0.062†‡ | 0.30 | 20.74 |
+| | Y+IO | 5.88±0.33†‡ | 3.192±0.068†‡ | 25.78 | 22.01 | | Y+IO | 0.635±0.031†‡ | 0.999±0.074†‡ | 0.28 | 24.88 |
+| | SVGP | 6.88±0.40†‡ | 3.337±0.048†‡ | 44.62 | 18.09 | | SVGP | 0.642±0.028 | 0.974±0.042†‡ | 0.40 | 20.41 |
+| wine/w | RF | 0.714±0.016†‡ | - | - | 1.17 | CCPP | RF | 4.21±0.13†‡ | - | - | 1.47 |
+| | RIO | 0.702±0.015 | 1.068±0.025 | 0.43 | 36.60 | | RIO | 4.02±0.14 | 2.801±0.029 | 15.18 | 44.81 |
+| 4898 × 11 | R+I | 0.697±0.016†‡ | 1.059±0.025†‡ | 0.44 | 33.17 | | R+I | 4.03±0.14†‡ | 2.803±0.029†‡ | 15.22 | 39.09 |
+| | R+O | 0.715±0.016†‡ | 1.087±0.026†‡ | 0.46 | 13.70 | | R+O | 4.17±0.13†‡ | 2.839±0.028†‡ | 16.19 | 17.67 |
+| | Y+O | 0.715±0.016†‡ | 1.087±0.026†‡ | 0.46 | 34.16 | | Y+O | 13.26±63.05†‡ | 3.274±3.329†‡ | 2095 | 44.63 |
+| | Y+IO | 0.710±0.017†‡ | 1.080±0.028†‡ | 0.45 | 41.52 | | Y+IO | 4.63±0.95†‡ | 2.975±0.253†‡ | 35.54 | 45.63 |
+| | SVGP | 0.719±0.018†‡ | 1.081±0.022†‡ | 0.50 | 36.62 | | SVGP | 4.36±0.13†‡ | 2.887±0.026†‡ | 18.94 | 48.83 |
+| protein | RF | 5.05±0.03†‡ | - | - | 19.76 | SC | RF | 15.26±0.25†‡ | - | - | 56.98 |
+| | RIO | 4.55±0.03 | 2.935±0.007 | 20.92 | 298.6 | | RIO | 14.18±0.75 | 4.072±0.051 | 196.1 | 170.6 |
+| 45730 × 9 | R+I | 4.57±0.04†‡ | 2.939±0.008†‡ | 21.06 | 278.1 | | R+I | 14.38±0.76† | 4.089±0.047†‡ | 204.1 | 145.0 |
+| | R+O | 5.02±0.04†‡ | 3.033±0.007†‡ | 24.79 | 242.9 | | R+O | 14.98±0.40†‡ | 4.126±0.034†‡ | 213.9 | 72.33 |
+| | Y+O | 5.04±0.04†‡ | 3.037±0.007†‡ | 25.02 | 206.4 | | Y+O | 14.80±0.30†‡ | 4.121±0.059†‡ | 204.8 | 167.6 |
+| | Y+IO | 4.56±0.03†‡ | 2.937±0.007†‡ | 21.00 | 300.2 | | Y+IO | 14.39±0.29 | 4.079±0.020 | 196.9 | 234.4 |
+| | SVGP | 4.68±0.04†‡ | 2.963±0.008†‡ | 22.54 | 281.1 | | SVGP | 14.66±0.25†‡ | 4.135±0.013†‡ | 239.2 | 221.3 |
+| CT | RF | 9.77±0.16†‡ | - | - | 302.0 | MSD | RF | 9.93±0.01†‡ | - | - | 2112 |
+| | RIO | 9.32±0.49 | 3.651±0.053 | 86.67 | 1024 | | RIO | 9.90±0.01 | 3.711±0.001 | 97.59 | 2133 |
+| 53500 × 384 | R+I | 9.77±0.16†‡ | 3.698±0.017†‡ | 94.04 | 47.04 | | R+I | 9.93±0.01†‡ | 3.714±0.001†‡ | 98.11 | 136.8 |
+| | R+O | 9.35±0.46 | 3.654±0.049 | 86.94 | 228.1 | | R+O | 9.92±0.01†‡ | 3.714±0.001†‡ | 97.78 | 1747 |
+| | Y+O | 9.46±0.36†‡ | 3.668±0.042†‡ | 88.00 | 448.7 | | Y+O | 9.94±0.01†‡ | 3.715±0.001†‡ | 98.29 | 2198 |
+| | Y+IO | 9.58±0.66†‡ | 3.676±0.094†‡ | 91.09 | 1439 | | Y+IO | 9.94±0.01†‡ | 3.715±0.001†‡ | 96.73 | 5638 |
+| | SVGP | 52.07±0.17†‡ | 5.372±0.003†‡ | 2712 | 61.70 | | SVGP | 9.57±0.00†‡ | 3.677±0.000†‡ | 92.21 | 2276 |
+
+The symbols $\dagger$ and $\ddagger$ indicate that the difference between the marked entry and RIO is statistically significant at the $5\%$ significance level using the paired $t$-test and the Wilcoxon test, respectively. The best entries that are significantly better than all the others under at least one statistical test are marked in boldface (ties are allowed).
\ No newline at end of file
diff --git a/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/images.zip b/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f3310a822c8ad1035acb427e3adabff8ed1b350f
--- /dev/null
+++ b/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a885a624e475c9b491481494d557fa2d25f79a763578005576e3bfb910c73ad0
+size 3228095
diff --git a/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/layout.json b/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7757b3c74d735b2186dbcc3946301b111f93b61e
--- /dev/null
+++ b/quantifyingpointpredictionuncertaintyinneuralnetworksviaresidualestimationwithaniokernel/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e0496ca8a999176f9df02d590f62abc04ce6b4183babbc542cdbbfe7fc0b85ca
+size 1173099
diff --git a/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/31dba084-c7ed-4ce4-a2c0-c9e2ca65effd_content_list.json b/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/31dba084-c7ed-4ce4-a2c0-c9e2ca65effd_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b47b8a2a27cdd7bb3e7b972d20e1d466b7544f1a
--- /dev/null
+++ b/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/31dba084-c7ed-4ce4-a2c0-c9e2ca65effd_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:86157a90efe695149483594bafed094d42604b70447456244d1e0435a22d07cc
+size 136705
diff --git a/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/31dba084-c7ed-4ce4-a2c0-c9e2ca65effd_model.json b/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/31dba084-c7ed-4ce4-a2c0-c9e2ca65effd_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d3244587cdf35ca393bb0ce172951781af8e58e9
--- /dev/null
+++ b/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/31dba084-c7ed-4ce4-a2c0-c9e2ca65effd_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:daf2166d08ef7f259b8193c66d05eb1a50ea64a6654cd75f39531cfc2cfc25fe
+size 152651
diff --git a/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/31dba084-c7ed-4ce4-a2c0-c9e2ca65effd_origin.pdf b/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/31dba084-c7ed-4ce4-a2c0-c9e2ca65effd_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..bf4be5210130c28871a69079c7db620963f7951c
--- /dev/null
+++ b/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/31dba084-c7ed-4ce4-a2c0-c9e2ca65effd_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:307b7fa4f3d6a4301b16735bdcb839e5ec518843c2f5168095b317700dfa567c
+size 23162740
diff --git a/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/full.md b/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..56c335cce2af1a49e211b09bbcf8b1412abe4284
--- /dev/null
+++ b/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/full.md
@@ -0,0 +1,590 @@
+# QUANTIFYING THE COST OF RELIABLE PHOTO AUTHENTICATION VIA HIGH-PERFORMANCE LEARNED LOSSY REPRESENTATIONS
+
+Pawel Korus$^{1,2}$ & Nasir Memon$^{1}$
+
+$^{1}$ Tandon School of Engineering, New York University, USA
+
+$^{2}$ Department of Telecommunications, AGH University of Science and Technology, Poland {pkorus,memon}@nyu.edu
+
+# ABSTRACT
+
+Detection of photo manipulation relies on subtle statistical traces, notoriously removed by the aggressive lossy compression employed online. We demonstrate that end-to-end modeling of complex photo dissemination channels allows for codec optimization with explicit provenance objectives. We design a lightweight trainable lossy image codec that delivers good rate-distortion performance, comparable with the popular hand-crafted BPG, and has a low computational footprint on modern GPU-enabled platforms. Our results show that significant improvements in manipulation detection accuracy are possible at fractional costs in bandwidth/storage. Our codec improved the accuracy from $37\%$ to $86\%$ even at very low bit-rates, below the practical operating range of JPEG (QF 20).
+
+# 1 INTRODUCTION
+
+Increasing adoption of machine learning in computer graphics has rapidly decreased the time-frame and skill set needed for convincing photo manipulation. Point-and-click solutions are readily available for plausible object insertion (Portenier et al., 2019), removal (Xiong et al., 2019), sky replacement (Tsai et al., 2016), face editing (Portenier et al., 2018) and many other popular operations. While often performed with humorous or artistic intent, they can wreak havoc by altering medical records (Mirsky et al., 2019), concealing scientific misconduct (Gilbert, 2009; Bik et al., 2016; Bucci, 2018) or even interfering with democratic elections (Chesney & Citron, 2019).
+
+Reasoning about photo integrity and origin relies on subtle statistical traces, e.g., fingerprints of imaging sensors (Chen et al., 2008), color interpolation artifacts (Popescu & Farid, 2005), or pixel co-occurrence patterns (Marra et al., 2019b; Mayer & Stamm, 2019). Unfortunately, such traces are commonly destroyed during online dissemination, since social networks are forced to aggressively compress digital media to optimize storage and bandwidth expenditures - especially on mobile devices (Cabral & Kandrot, 2015). As a result, detection of photo manipulations online is notoriously unreliable. Some platforms perform forensic photo analysis at the ingress (Truepic, 2019), but it may already be too late. Existing photo compression standards, like JPEG, optimize for human perception alone and aggressively remove weak micro-signals already at the device.
+
+We demonstrate that huge gains in photo manipulation detection accuracy are possible at low cost by carefully optimizing lossy compression. Thanks to explicit optimization, fractional increase in bitrate is sufficient to significantly increase the detection accuracy. We build upon the work of Korus & Memon (2019) and use their toolbox for end-to-end modeling of photo dissemination channels. We design a lightweight and high-performance lossy image codec, and optimize for reliable manipulation detection - a backbone of modern forensic analysis (Wu et al., 2019; Mayer & Stamm, 2019). Interestingly, the model learns complex frequency attenuation patterns as simple inclusion of high-frequency information turns out to be insufficient. This suggests new directions in ongoing efforts to revisit the standard rate-distortion paradigm (Blau & Michaeli, 2019).
+
+We believe such solutions could be useful for social media platforms, photo attestation services, or insurance companies, which may exploit asymmetric compression setups and acquire photos from smart-phones in analysis-friendly formats. In terms of rate-distortion, our model is comparable with modern hand-engineered codecs, like BPG (Bellard, 2014), which delivers only slightly better results. On GPU-enabled platforms, our codec can be faster, even without low-level optimization.
+
+
+Figure 1: A generic end-to-end trainable model of photo acquisition and dissemination: camera ISP is modeled by a neural imaging pipeline (NIP); manipulation detection is performed by a forensic analysis network (FAN); the channel may use either JPEG or a trainable deep compression network (DCN). Potentially trainable elements are shown in yellow.
+
+# 2 RELATED WORK
+
+Learned Compression: Rapid progress in deep learning has rekindled interest in lossy image compression. While some studies consider fully end-to-end solutions dispensing with conventional entropy coding (Toderici et al., 2017), the most successful solutions tend to be variations of auto-encoders combined with context-adaptive arithmetic coding. Such codecs have recently surpassed state-of-the-art hand-crafted solutions (Rippel & Bourdev, 2017; Mentzer et al., 2018). Adoption of generative models makes it possible to hallucinate unimportant details and reach extreme compression rates while maintaining good perceptual quality (Agustsson et al., 2018). This research direction makes explicit provenance objectives increasingly pressing.
+
+Compression vs High-level Vision: JPEG compression is commonly used for data augmentation to retain high machine vision performance on compressed images. Despite this, severe compression is known to degrade accuracy (Dodge & Karam, 2016), and restoration techniques are often used to mitigate the problem (Wang et al., 2016). Some studies optimize JPEG compression to encode semantically salient regions with better quality in a format-compliant way (Prakash et al., 2017). Researchers also explore trainable variations of the JPEG codec optimized for minimal performance degradation and low power use in IoT devices (Liu et al., 2018). In high-volume applications, computational footprint can be reduced by running high-level vision directly on the DCT coefficients (Gueguen et al., 2018). Adoption of trainable latent representations gives more flexibility and allows for end-to-end training (Torfason et al., 2018).
+
+**Optimization of Photo Dissemination Channels:** Large volume of photos shared online spawned the need to aggressively optimize all steps of photo dissemination (uplink, downlink and storage). Social media platforms already rely on in-house solutions (Facebook, 2018), and employ extreme measures, like header transplantation, to minimize overhead and improve user experience (Cabral & Kandrot, 2015). The platforms actively engage in research and development of image compression, including optimization of the standard JPEG codec (Google, 2016), development of new backward-compatible standards like JPEG-XL (Rhatushnyak et al., 2019), and development of entirely new codecs - both hand-engineered (e.g., WebP) and end-to-end trained (Toderici et al., 2017).
+
+# 3 END-TO-END TRAINABLE PHOTO DISSEMINATION MODEL
+
+We build upon a recently published end-to-end trainable model of photo acquisition and dissemination (Korus & Memon, 2019). The model uses a forensic analysis network (FAN) for photo manipulation detection, and allows for joint optimization of the FAN and the camera ISP, leading to distinct imaging artifacts that facilitate authentication. The published toolbox included only standard JPEG compression, and we extended it to support trainable codecs. We show a generic version of the updated model in Fig. 1, with potentially trainable elements highlighted. In this study, we fix the camera model and jointly optimize the FAN and a deep compression network (DCN). We describe the design of our DCN codec and its pre-training protocol below.
+
+
+Figure 2: Architecture of our deep compression network: an auto-encoder with 3 sub-sampling stages and residual units in between. (Empty arrows: no activation; filled arrows: leaky ReLU.)
+
+# 3.1 BASELINE DCN ARCHITECTURE
+
+Our DCN model follows the general auto-encoder architecture proposed by Theis et al. (2017), but uses different quantization, entropy estimation and entropy coding schemes (Section 3.2). The model is fully convolutional, and consists of 3 sub-sampling (stride-2) convolutional layers with 3 residual blocks in between (Fig. 2). We do not use any normalization layers (such as GDN), and rely solely on a single trainable scaling factor. Distribution shaping occurs organically thanks to entropy regularization (see Fig. A.3b in the appendix). The decoder mirrors the encoder, and implements up-sampling using sub-pixel convolutions (a combination of convolutional and depth-to-space layers).
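The depth-to-space rearrangement behind sub-pixel up-sampling can be illustrated in NumPy (a sketch of the rearrangement only; the actual model uses the framework's built-in op after a convolution):

```python
import numpy as np

def depth_to_space(x, r):
    """Rearrange a (H, W, C*r*r) tensor into (H*r, W*r, C)."""
    H, W, Cr2 = x.shape
    C = Cr2 // (r * r)
    x = x.reshape(H, W, r, r, C)
    x = x.transpose(0, 2, 1, 3, 4)  # interleave each r×r block spatially
    return x.reshape(H * r, W * r, C)
```

A preceding convolution produces `C*r*r` channels, so each spatial position carries the `r×r` block of output pixels it expands into; this avoids the checkerboard artifacts of transposed convolutions.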
+
+We experimented with different variants of latent representation quantization, eventually converging on soft-quantization with a fixed codebook of integers with a given maximal number of bits per feature (bpf). We used a 5-bpf uniform codebook ( $M = 32$ values from -15 to 16). We show the impact of codebook size in the appendix (Fig. A.3a).
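The latent geometry implies a simple upper bound on the bitrate: three stride-2 stages shrink each spatial dimension 8x, and the 5-bpf codebook caps the payload per latent feature. A minimal sketch of this arithmetic (the helper below is illustrative, not part of the released toolbox):

```python
def latent_shape(h, w, channels):
    """Latent tensor shape after the 3 stride-2 stages of the DCN encoder."""
    return (h // 8, w // 8, channels)

# 32-C (medium quality) model on a 128x128 input: a 16x16x32 latent tensor.
h, w, c = latent_shape(128, 128, 32)

# With a 5-bpf codebook, the payload before entropy coding is bounded by:
bpp_upper_bound = (h * w * c * 5) / (128 * 128)
print((h, w, c), bpp_upper_bound)  # (16, 16, 32) 2.5
```

Entropy coding then brings the actual bitrate well below this bound, which is precisely what the entropy regularizer in equation 1 encourages.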
+
+The model is trained to minimize distortion between the input and reconstructed images regularized by entropy of the latent representation:
+
+$$
+\mathcal{L}_{\mathrm{dcn}} = \underset{\mathbf{X}}{\mathbb{E}} \left[ d(\mathbf{X}, \mathcal{D} \circ \mathcal{Q} \circ \mathcal{E}(\mathbf{X})) + \lambda_{H} H(\mathcal{Q} \circ \mathcal{E}(\mathbf{X})) \right], \tag{1}
+$$
+
+where $\mathbf{X}$ is the input image, and $\mathcal{E},\mathcal{Q}$ , and $\mathcal{D}$ denote the encoder, quantization, and decoder, respectively. We used a simple $L_{2}$ loss in the RGB domain as the distortion measure $d(\cdot ,\cdot)$ , a differentiable soft estimate of entropy $H$ (Section 3.2), and SSIM as the validation metric.
+
+# 3.2 SOFT QUANTIZATION AND ENTROPY ESTIMATION
+
+We developed our own quantization and entropy estimation mechanism, because we found existing approaches unnecessarily complicated and/or lacking in accuracy. Some of the most recent solutions include: (1) addition of uniform random noise to quantized samples and non-parametric entropy modeling by a fitted piece-wise linear model (Balle et al., 2016); (2) differentiable entropy upper bound with a uniform random noise component (Theis et al., 2017); (3) regularization by penalizing norm of quantized coefficients and differences between spatial neighbors (Rippel & Bourdev, 2017); (4) PixelCNN for entropy estimation and context modeling (Mentzer et al., 2018). Our approach builds upon the soft quantization used by Mentzer et al. (2018), but is extended to address numerical stability problems, and allow for accurate entropy estimation.
+
+Let $\mathbf{z}$ be a vectorized latent representation $\mathbf{Z}$ of $N$ images, i.e., $z_{k} = z_{n,i,j,c}$ where $n,i,j,c$ advance sequentially along an arbitrary memory layout (here image, width, height, channel). Let $\mathbf{c}$ denote a quantization codebook with $M$ centers $[c_1,\dots,c_M]$ (code-words). Then, given a weight matrix $\mathbf{W} \in [0,1]^{N \times M}$ with $\forall_n \sum_m w_{n,m} = 1$, we can define hard quantization as $\hat{\mathbf{z}} = [\hat{z}_n = c_{\mathrm{argmax}_m w_{n,m}}]$, and soft quantization as $\tilde{\mathbf{z}} = \mathbf{W}\mathbf{c}$. Hard quantization replaces an input value with the closest available code-word, and corresponds to the rounding operation performed by the image codec. Soft quantization is a differentiable relaxation, which uses a linear combination of all code-words, as specified by the weight matrix. A detailed comparison of both quantization modes, along with an illustration of potential numerical pitfalls, can be observed in the top row of Fig. A.1 in the appendix. The hard and soft quantization are used in the forward and backward passes, respectively. In Tensorflow, this can be implemented as $\mathbf{z} = \mathrm{tf.stop\_gradient}(\hat{\mathbf{z}} - \tilde{\mathbf{z}}) + \tilde{\mathbf{z}}$.
+
+Figure 3: Entropy estimation error for a Laplacian distribution with varying scale and for the latent space of $128 \times 128$ px images. The t-Student kernel is significantly more accurate, especially for wide distributions overflowing the codebook range.
+
+The weights for individual code-words in the mixture are computed by applying a kernel $\kappa$ to the distances between the values and the code-words, which can be organized into a distance matrix $\mathbf{D}$ :
+
+$$
+\mathbf {D} = \mathbf {z} - \mathbf {c} ^ {\intercal} = \left[ d _ {n, m} = z _ {n} - c _ {m} \right], \tag {2}
+$$
+
+$$
+\mathbf {W} = \kappa (\mathbf {D}) = \left[ w _ {n, m} = \kappa \left(d _ {n, m}\right) \right]. \tag {3}
+$$
+
+The most commonly used implementations use a Gaussian kernel:
+
+$$
+\kappa_{\gamma} = e^{-\gamma d_{n,m}^{2}}, \tag{4}
+$$
+
+which suffers from numerical problems for edge cases overflowing the codebook range (see Fig. A.1 top row in the 4-th and 5-th columns). To alleviate these problems, we adopt a t-Student kernel:
+
+$$
+\kappa_{\gamma,v} = \left(1 + \frac{\gamma d_{n,m}^{2}}{v}\right)^{-(v+1)/2}, \tag{5}
+$$
+
+which behaves much better in practice. We do not normalize the kernels, and ensure correct proportions of the weights by numerically normalizing rows of the weight matrix.
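The full quantization scheme (distance matrix, t-Student kernel weights, row normalization, and the hard/soft outputs) can be sketched in a few lines of numpy; kernel parameter values follow the text:

```python
import numpy as np

def soft_quantize(z, codebook, gamma=25.0, v=50.0):
    """Hard and soft quantization of latent values `z` against `codebook`.

    Returns the hard-quantized values (used in the forward pass), their
    differentiable relaxation (used in the backward pass), and the
    row-normalized weight matrix W.
    """
    D = z[:, None] - codebook[None, :]                   # distance matrix (eq. 2)
    W = (1.0 + gamma * D**2 / v) ** (-(v + 1.0) / 2.0)   # t-Student kernel (eq. 5)
    W /= W.sum(axis=1, keepdims=True)                    # numerically normalize rows
    hard = codebook[np.argmax(W, axis=1)]                # nearest code-word
    soft = W @ codebook                                  # linear combination of code-words
    return hard, soft, W

codebook = np.arange(-15, 17, dtype=np.float64)          # 5-bpf codebook, M = 32
z = np.array([-0.4, 2.7, 14.9, 20.0])                    # last value overflows the range
hard, soft, W = soft_quantize(z, codebook)
print(hard)  # [ 0.  3. 15. 16.]
```

With this pairing, the Tensorflow one-liner quoted above passes the `hard` values forward while gradients flow through `soft`.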
+
+We estimate the entropy of the quantized values by summing the weight matrix along the sample dimension, which yields an estimate of the (normalized) histogram w.r.t. codebook entries (a comparison with an actual histogram is shown in Fig. A.3):
+
+$$
+\tilde{\mathbf{h}} = \left[ \tilde{h}_{m} = \frac{1}{N} \sum_{n} w_{n,m} \right]. \tag{6}
+$$
+
+This allows us to estimate the entropy of the latent representation simply as:
+
+$$
+\hat{H} = -\sum_{m} \tilde{h}_{m} \log_{2} \tilde{h}_{m}. \tag{7}
+$$
+
+We assess the quality of the estimate both for synthetic random numbers (1,000 numbers sampled from Laplace distributions of various scales) and for an actual latent representation of $128 \times 128$ px RGB image patches sampled from the clic test set (see Section 3.5 and examples in Fig. 4a). For the random sample, we fixed the quantization codebook to integers from -5 to 5, and performed the experiment numerically. For the real patches, we fed the images through a pre-trained DCN model (a medium-quality model with 32 feature channels; 32-C) and used the codebook embedded in the model (integers from -15 to 16).
+
+Fig. 3 shows the entropy estimation error (both absolute and relative) and scatter plots of real entropies vs. their soft estimates using the Gaussian and t-Student kernels. It can be observed that the t-Student kernel consistently outperforms the commonly used Gaussian. The impact of the kernels' hyperparameters on the relative estimation error is shown in Fig. A.2. The best combination of kernel parameters $(v = 50, \gamma = 25)$ is highlighted in red and used in all subsequent experiments.
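The estimator of equations 6-7 can be checked numerically against a hard-quantized histogram. A small self-contained sketch mirroring the synthetic Laplace experiment (sample size, codebook, and kernel hyper-parameters as in the text; exact error values will naturally differ from Fig. 3):

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = np.arange(-5, 6, dtype=np.float64)   # integers from -5 to 5
z = rng.laplace(scale=1.5, size=1000)

# t-Student kernel weights with the selected hyper-parameters (v=50, gamma=25)
D = z[:, None] - codebook[None, :]
W = (1.0 + 25.0 * D**2 / 50.0) ** (-51.0 / 2.0)
W /= W.sum(axis=1, keepdims=True)

# Soft histogram (eq. 6) and soft entropy estimate (eq. 7)
h_soft = W.sum(axis=0) / len(z)
H_soft = -np.sum(h_soft * np.log2(h_soft + 1e-12))

# Reference: entropy of the hard-quantized sample
hard = codebook[np.argmax(W, axis=1)]
h_hard = np.array([(hard == c).mean() for c in codebook])
h_hard = h_hard[h_hard > 0]
H_hard = -np.sum(h_hard * np.log2(h_hard))
print(round(float(H_soft), 3), round(float(H_hard), 3))
```

The two numbers printed at the end should agree closely, which is exactly the property evaluated in Fig. 3.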
+
+
+Figure 4: Example images from the considered clic, kodak and raw test sets ($512 \times 512$ px).
+
+# 3.3 ENTROPY CODING AND BIT-STREAM STRUCTURE
+
+We used a state-of-the-art entropy coder based on asymmetric numeral systems (Duda, 2013; Duda et al., 2015) and its finite state entropy (FSE) implementation (Collet, 2013). For simplicity and computational efficiency, we did not employ a context model, and instead encode individual feature channels separately (channel EC). Bitrate savings w.r.t. global entropy coding (global EC) vary with the model, image size, and content. For $512 \times 512$ px images, we observed average savings of $\approx 12\%$, but for very small patches (e.g., $128$ px), per-channel coding may actually result in overhead (Tab. A.2). This can easily be addressed with a flag that switches between compression modes, but we leave the practical design of the format container for future work. We use a simple bit-stream structure, which enables variable-length, per-channel entropy coding with random channel access (Tab. A.1). Such an approach offers flexibility and scalability benefits, e.g.: (1) it allows for rapid analysis of selected feature channels (Torfason et al., 2018); (2) it enables trivial parallel processing of the channels to speed up encoding/decoding on modern multi-core platforms.
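The per-channel idea can be illustrated with a toy container: a length table followed by independently compressed channel payloads, so any channel can be decoded without touching the others. This is only a sketch of the concept; the actual bit-stream structure is given in Tab. A.1, and zlib stands in here for the FSE coder:

```python
import struct
import zlib

def pack_channels(channels):
    """Toy per-channel container: a length table followed by
    independently compressed channel payloads (zlib stands in for FSE)."""
    payloads = [zlib.compress(bytes(c)) for c in channels]
    header = struct.pack(f"<H{len(payloads)}I", len(payloads), *map(len, payloads))
    return header + b"".join(payloads)

def unpack_channel(blob, k):
    """Random access to channel k: skip other payloads via the length table."""
    (n,) = struct.unpack_from("<H", blob, 0)
    lengths = struct.unpack_from(f"<{n}I", blob, 2)
    offset = 2 + 4 * n + sum(lengths[:k])
    return list(zlib.decompress(blob[offset:offset + lengths[k]]))

blob = pack_channels([[1, 2, 3], [7, 7, 7, 7], [0]])
print(unpack_channel(blob, 1))  # [7, 7, 7, 7]
```

Random access of this kind is what enables channel-selective analysis and parallel decoding on multi-core platforms.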
+
+# 3.4 TRAINING PROTOCOL AND DATA
+
+We pre-trained the DCN model in isolation, minimizing the entropy-regularized $L_{2}$ loss (equation 1) on mixed natural images (MNI) from 6 sources: (1) native camera output from the RAISE and MIT-5k datasets (Dang-Nguyen et al., 2015; Bychkovsky et al., 2011); (2) photos from the Waterloo exploration database (Ma et al., 2016); (3) HDR images (Hasinoff et al., 2016); (4) computer game footage (Richter et al., 2016); (5) Cityscapes images (Cordts et al., 2016); and (6) the training sub-set of the CLIC professional dataset (CLIC, 2019). In total, we collected 32,000 square crops ranging from $512 \times 512$ to $1024 \times 1024$ px, which were subsequently down-sampled to $256 \times 256$ px and randomly split into training and validation subsets.
+
+We used three augmentation strategies: (1) we train on $128 \times 128$ px patches randomly sampled in each step; (2) we flip the patches vertically and/or horizontally with probability 0.5; and (3) we apply random gamma correction with probability 0.5. This allowed us to reduce the training set to $\approx 10$k images, at which point performance saturates. We used batches of 50 images, and a learning rate starting at $10^{-4}$ and decaying by a factor of 0.5 every 1,000 epochs. The optimization algorithm was Adam with default settings (as of Tensorflow 1.12). We train by minimizing MSE ($L_{2}$ loss) until convergence of SSIM on a validation set with 1,000 images.
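The three augmentations can be sketched as follows; the gamma range below is our own illustrative assumption, since the exact correction parameters are not specified in the text:

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(img):
    """Sketch of the three augmentations: random 128x128 crop, random
    flips (p=0.5 each), and random gamma correction (p=0.5).
    `img` is an HxWx3 array with values in [0, 1]."""
    h, w, _ = img.shape
    y, x = rng.integers(h - 128 + 1), rng.integers(w - 128 + 1)
    patch = img[y:y + 128, x:x + 128]
    if rng.random() < 0.5:
        patch = patch[::-1]                        # vertical flip
    if rng.random() < 0.5:
        patch = patch[:, ::-1]                     # horizontal flip
    if rng.random() < 0.5:
        patch = patch ** rng.uniform(0.8, 1.2)     # gamma range is an assumption
    return patch

patch = augment(rng.random((256, 256, 3)))
print(patch.shape)  # (128, 128, 3)
```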
+
+# 3.5 BASELINE MODELS AND EVALUATION
+
+We control image quality by changing the number of feature channels. We consider three configurations for low, medium, and high quality with 16, 32, and 64 channels, respectively.
+
+Standard Codecs: As hand-crafted baselines, we consider three codecs: JPEG from the libJPEG library via the imageio interface, JPEG 2000 from the OpenJPEG library via the Glymur interface, and BPG from its reference implementation (Bellard, 2014). Since our model uses full-resolution RGB channels as input, we also use full-resolution chrominance channels whenever possible (JPEG and BPG). To make the comparison as fair as possible, we measure the effective payload of the codecs. For the JPEG codec, we manually seek byte markers and include only the Huffman tables and Huffman-coded image data. For JPEG 2000, we add up the lengths of tile-parts, as reported by jpylyzer. For BPG, we seek the picture_data_length marker.
+
+Figure 5: Rate-distortion trade-offs on the clic, kodak and raw test sets. (See Fig. A.5 for MS-SSIM.)
+
+Rate-distortion Trade-off: We used 3 datasets for the final evaluation (Fig. 4): (raw) 39 images with native camera output from 4 different cameras (Dang-Nguyen et al., 2015; Bychkovsky et al., 2011); (clic) 39 images from the professional validation subset of CLIC (2019); (kodak) 24 images from the standard Kodak dataset. All test images are of size $512 \times 512$ px, and were obtained either by cropping directly without re-sampling (raw, kodak) or by resizing a central square crop (clic).
+
+We measured PSNR and SSIM using the scikit-image package and MS-SSIM using sewar. We used the default settings and only set the data range to [0, 1]. The values are computed as the average over RGB channels. The bit-rate for the hand-crafted codecs was computed using the effective payload, as explained above. For the DCN codec, we completely encoded and decoded the images (Section A.1).
+
+Fig. 5 shows rate-distortion curves (SSIM vs. effective bpp) for the clic and raw datasets (see appendix for additional results). We show 4 individual images (Fig. 4) and averages over the respective datasets. Since standard quality control (e.g., quality level in JPEG, or number of channels in DCN) leads to a wide variety of bpps, we fit individual images to a parametric curve $f(x) = (1 + e^{-\alpha x^{\beta} + \gamma})^{-1} - \delta$ and show the averaged fits. It can be observed that our DCN model delivers significantly better results than JPEG and JPEG2000, and approaches BPG.
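The parametric curve used for averaging can be evaluated directly; with illustrative parameter values (not fitted to our data) it produces the expected monotone, saturating rate-distortion shape:

```python
import numpy as np

def rd_curve(x, alpha, beta, gamma, delta):
    """Parametric fit f(x) = (1 + exp(-alpha * x**beta + gamma))**-1 - delta."""
    return 1.0 / (1.0 + np.exp(-alpha * x**beta + gamma)) - delta

bpp = np.linspace(0.1, 3.0, 50)
ssim = rd_curve(bpp, alpha=4.0, beta=0.7, gamma=1.0, delta=0.02)  # illustrative values
print(round(float(ssim[0]), 3), round(float(ssim[-1]), 3))
```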
+
+Processing Time: We collected DCN processing times for various platforms (Table 1), including desktops, servers, and low-power edge AI devices. We report network inference and complete encoding/decoding times on $512 \times 512$ px and $1920 \times 1080$ px images, averaged over the `clic` dataset (separate runs with batch size 1) and obtained using an unoptimized Python 3 implementation. On GPU-enabled platforms, the inference time becomes negligible (over 100 fps for $512 \times 512$ px images, and over 20 fps for $1920 \times 1080$ px images on a GeForce 1080 with a 2.6 GHz Xeon CPU), and entropy coding starts to become the bottleneck (down to 13 and 5 fps, respectively). We emphasize that the adopted FSE codec is one of the fastest available, and significantly outperforms commonly used arithmetic coding (Duda, 2013). If needed, channel EC can easily be parallelized, and the ANS codec could be re-implemented to run on a GPU as well (Weißenberger & Schmidt, 2019).
+
+As a reference, we measured the processing times for $1920 \times 1080$ px images with the standard codecs on an i7-7700 CPU @ 3.60 GHz. JPEG coding with 1 thread takes between 0.061 s (Q=30) and 0.075 s (Q=90) (inclusive of writing time to a RAM disk; using the pillow library). JPEG 2000 with 1 thread takes 0.61 s regardless of the quality level (inclusive of writing time to a RAM disk; Glymur library). BPG with 4 parallel threads takes 2.4 s (Q=1), 1.25 s (Q=20) and 0.72 s (Q=30) (inclusive of PNG reading time from a RAM disk; bpgenc command line tool). While not directly comparable and also not optimized, some state-of-the-art learned codecs require minutes to process even small images, e.g., 5-6 min for $768 \times 512$ px images from the Kodak dataset, as reported by Mentzer et al. (2018). The fastest state-of-the-art learned codec is reported to run at $\approx 100$ fps on images of that size on a GPU-enabled desktop computer (Rippel & Bourdev, 2017).
+
+Table 1: Average compression/decompression time on different platforms (in seconds) with breakdown into NN inference and complete processing; in-memory processing using the 32-C model.
+
+| GPU | CPU / Platform | 512×512: inf. enc | 512×512: inf. dec | 512×512: codec enc | 512×512: codec dec | 1080p: inf. enc | 1080p: inf. dec | 1080p: codec enc | 1080p: codec dec |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Maxwell | ARM A57 (nVidia Jetson Nano) | 0.2076 | 0.5333 | 0.6507 | 0.6721 | 1.6348 | 3.6057 | 4.4978 | 4.9722 |
+| - | i7-7700 @ 3.60 GHz | 0.2165 | 0.3330 | 0.2272 | 0.3317 | 1.8052 | 2.7678 | 1.8753 | 2.7901 |
+| - | i7-9770 @ 3.60 GHz | 0.0648 | 0.1396 | 0.0762 | 0.1397 | 0.5197 | 1.1685 | 0.6080 | 1.1728 |
+| GF 1080 | Xeon E5-2690 @ 2.60 GHz | 0.0083 | 0.0173 | 0.0742 | 0.0498 | 0.0597 | 0.1244 | 0.1805 | 0.1714 |
+| P40 | Xeon E5-2680 @ 2.40 GHz | 0.0093 | 0.0160 | 0.0720 | 0.0375 | 0.0558 | 0.1123 | 0.1895 | 0.1684 |
+| V100 | Xeon E5-2680 @ 2.40 GHz | 0.0065 | 0.0071 | 0.0604 | 0.0209 | 0.0416 | 0.0489 | 0.1735 | 0.0979 |
+| GF 2080S | i7-9770 @ 3.60 GHz | 0.0059 | 0.0132 | 0.0421 | 0.0244 | 0.0399 | 0.0953 | 0.1343 | 0.1320 |
+
+
+Figure 6: Comparison of our DCN codec with low-quality settings (16-C) against JPEG with matching SSIM and matching bpp. Samples from the clic, kodak, and raw test sets. (JPEG settings per sample: Q=52 → ssim 0.92, bpp 0.92; Q=10 → ssim 0.78, bpp 0.38; Q=35 → ssim 0.85, bpp 0.79; Q=14 → ssim 0.74, bpp 0.44; Q=54 → ssim 0.92, bpp 0.55; Q=1 → ssim 0.72, bpp 0.24.)
+
+# 4 OPTIMIZATION FOR MANIPULATION DETECTION
+
+We consider the standard photo manipulation detection setup, where an adversary uses content-preserving post-processing, and a forensic analysis network (FAN) needs to identify the applied operation or confirm that the image is unprocessed. We use a challenging real-world setup, where the FAN can analyze images only after transmission through a lossy dissemination channel (Fig. 1). In such conditions, forensic analysis is known to fail (Korus & Memon, 2019). We consider several versions of the channel, including: standard JPEG compression, pre-trained DCN codecs, and trainable DCN codecs jointly optimized along with the FAN. We analyze $128 \times 128$ px patches, and do not use down-sampling, in order to isolate the impact of the codec.
+
+# 4.1 PHOTO MANIPULATION AND DETECTION STRATEGY
+
+We consider 6 benign post-processing operations which preserve image content, but change low-level traces that can reveal a forgery. Such operations are commonly used either during photo manipulation or as a masking step afterwards. We include: (a) sharpening - implemented as an unsharp mask operator applied to the luminance channel in the HSV color space; (b) resampling involving successive down-sampling and up-sampling using bilinear interpolation and scaling factors 1:2 and 2:1; (c) Gaussian filtering with a $5 \times 5$ filter and standard deviation 0.83; (d) JPEG compression using a differentiable dJPEG model with quality level 80; (e) AWGN noise with standard deviation 0.02; and (f) median filtering with a $3 \times 3$ kernel. The operations are difficult to distinguish visually from native camera output - even without lossy compression (Fig. 7).
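Two of the manipulations are easy to sketch numerically; the noise parameters follow the text, while nearest-neighbor interpolation stands in for the bilinear resampling used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def awgn(img, sigma=0.02):
    """Manipulation (e): additive white Gaussian noise, sigma = 0.02."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def resample(img):
    """Manipulation (b), roughly: 1:2 down-sampling followed by 2:1
    up-sampling (nearest-neighbor stands in for bilinear interpolation)."""
    small = img[::2, ::2]
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

x = rng.random((128, 128, 3))
print(resample(x).shape)  # (128, 128, 3)
```

Both operations preserve the content and the image size, which is why they are visually hard to distinguish from native camera output.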
+
+The FAN is a state-of-the-art image forensics CNN with a constrained residual layer (Bayar & Stamm, 2018). We used the model provided in the toolbox (Korus & Memon, 2019), and optimized it for classification (native camera output + 6 post-processing classes) of RGB image patches. In total, the model has 1.3 million parameters.
+
+
+Figure 7: Subtle photo manipulation: (left) $128 \times 128$ px native patch; (rest) various post-processing.
+
+# 4.2 TRAINING PROTOCOL
+
+We jointly train the entire workflow and optimize both the FAN and DCN models. Let $\mathcal{M}_c$ denote the $c$ -th manipulation (including identity for native camera output), and $\mathcal{F}$ denote the output of the FAN with $\mathcal{F}_c$ being the probability of the corresponding manipulation class $c$ . Let also $\mathcal{C}$ denote the adopted lossy compression model, e.g., $\mathcal{D} \circ \mathcal{Q} \circ \mathcal{E}$ for the DCN. We denote sRGB images rendered by the camera ISP as $\mathbf{X}$ . The FAN model is trained to minimize a cross-entropy loss:
+
+$$
+\mathcal{L}_{\mathrm{ce}} = -\underset{\mathbf{X}}{\mathbb{E}} \left[ \sum_{c=1}^{7} \log \left(\mathcal{F}_{c} \circ \mathcal{C} \circ \mathcal{M}_{c}(\mathbf{X})\right) \right], \tag{8}
+$$
+
+and the DCN to minimize its combination with the original fidelity/entropy loss (equation 1):
+
+$$
+\mathcal{L} = \mathcal{L}_{\mathrm{ce}} + \lambda_{c} \mathcal{L}_{\mathrm{dcn}}, \tag{9}
+$$
+
+where $\lambda_{c}$ is used to control the balance between the objectives (we consider values from $10^{-3}$ to 1). We start from pre-trained DCN models (Section 3.4). The FAN model is trained from scratch.
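Equations 8-9 reduce to a few lines. In this schematic sketch, the cross-entropy term carries the conventional negative sign so that perfect predictions yield zero loss, and `correct_probs[c]` stands for the probability $\mathcal{F}_c \circ \mathcal{C} \circ \mathcal{M}_c(\mathbf{X})$:

```python
import numpy as np

def manipulation_ce(correct_probs):
    """Cross-entropy over the 7 classes: correct_probs[c] is the FAN
    probability of the true class after manipulation M_c and compression."""
    return -float(np.sum(np.log(np.clip(correct_probs, 1e-12, 1.0))))

def joint_loss(correct_probs, dcn_loss, lambda_c):
    """Combined DCN objective (eq. 9): forensic term plus the
    lambda_c-weighted fidelity/entropy loss of eq. 1."""
    return manipulation_ce(correct_probs) + lambda_c * dcn_loss

print(joint_loss(np.ones(7), dcn_loss=2.0, lambda_c=0.5))  # 1.0
```

Decreasing `lambda_c` shifts the balance from compression performance towards manipulation detection accuracy, which is exactly the sweep analyzed in Section 4.3.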
+
+When JPEG compression was used in the channel, we used the differentiable dJPEG model from the original study (Korus & Memon, 2019), but modified it to use hard quantization in the forward pass to ensure results equivalent to libJPEG. We used quality levels from 10 to 95 with a step of 5.
+
+We followed the same training protocol as Korus & Memon (2019), and trained on native camera output (NCO). We used the DNet pipeline for Nikon D90, and randomly sampled $128 \times 128$ px RGB patches from 120 full-resolution images. The remaining 30 images were used for validation (we sampled 4 patches per image to increase diversity). We used batches of 20 images, and trained for 2,500 epochs with learning rate starting at $10^{-4}$ and decaying by $10\%$ every 100 epochs. For each training configuration, we repeated the experiment 3-5 times to validate training stability.
+
+# 4.3 QUANTITATIVE ANALYSIS
+
+We summarize the obtained results in Fig. 8 which shows the trade-off between effective bpp (rate), SSIM (distortion), and manipulation detection accuracy. The figure compares standard JPEG compression (diamond markers), pre-trained basic DCN models (connected circles with black border), and fine-tuned DCN models for various regularization strengths $\lambda_{c}$ (loose circles with gray border). Fine-tuned models are labeled with a delta in the auxiliary metric (also encoded as marker size and color), and the text is colored in red/green to indicate deterioration or improvement.
+
+Fig. 8a shows how the manipulation detection capability changes with the effective bitrate of the codec. We can make the following observations. Firstly, JPEG delivers the worst trade-off and exhibits irregular behavior. The performance gap may be attributed to: (a) better image fidelity for the DCN codec, which retains more information at any bitrate budget; (b) the presence of JPEG compression as one of the manipulations. The latter factor also explains the irregular drops in accuracy, which coincide with the quality level of the manipulation (80) and unfavorable multiples of the quantization tables (see also Fig. B.1). Secondly, we observe that fine-tuning the DCN model leads to a sudden increase in payload requirements, a minor improvement in quality, and a gradual increase in manipulation detection accuracy (as $\lambda_{c} \to 0$). We obtain significant improvements in accuracy even for the lowest quality level ($37\% \to 85\%$; at such low bitrates JPEG stays below $30\%$). Interestingly, we do not see major differences in payload between the fine-tuned models, which suggests that qualitative differences in encoding may be expected beyond simple inclusion of more information.
+
+Figure 8: The rate-distortion-accuracy trade-off (raw test set) reveals significant improvements in manipulation detection accuracy while maintaining similar rate-distortion performance.
+
+Figure 9: Visualization of frequency attenuation/amplification patterns in the FFT domain for the fine-tuned DCN codec (low-quality, 16-C model).
+
+Fig. 8b shows the same results from a different perspective, and depicts the standard rate-distortion trade-off supplemented with auxiliary accuracy information. We can observe that DCN fine-tuning moves the model to a different point (greater payload, better quality), but doesn't seem to visibly deteriorate the rate-distortion trade-off (with the exception of the smallest regularization $\lambda_{c} = 0.001$ which consistently shows better accuracy and worse fidelity).
+
+# 4.4 QUALITATIVE ANALYSIS
+
+To explain the behavior of the models, we examine frequency attenuation patterns. We compute FFT spectra of the compressed images, and divide them by the corresponding spectra of uncompressed images. We repeat this procedure for all images in our raw test set, and average them to show consistent trends. The results are shown in Fig. 9 for the pre-trained DCN model (1st column) and fine-tuned models with decreasing $\lambda_{c}$ (increasing emphasis on accuracy). The plots are calibrated to show unaffected frequencies as gray, and attenuated/emphasized frequencies as dark/bright areas.
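The analysis boils down to an element-wise spectral ratio averaged over the test set. A numpy sketch, with a trivial identity "codec" standing in as a sanity check:

```python
import numpy as np

def attenuation_pattern(originals, compressed, eps=1e-8):
    """Average ratio of FFT magnitude spectra over a set of images:
    1.0 = unaffected frequency, <1 = attenuated, >1 = emphasized."""
    ratios = []
    for x, y in zip(originals, compressed):
        fx = np.abs(np.fft.fftshift(np.fft.fft2(x)))
        fy = np.abs(np.fft.fftshift(np.fft.fft2(y)))
        ratios.append(fy / (fx + eps))
    return np.mean(ratios, axis=0)

# Sanity check: an identity "codec" leaves all frequencies untouched.
rng = np.random.default_rng(3)
imgs = [rng.random((32, 32)) for _ in range(4)]
pattern = attenuation_pattern(imgs, imgs)
```

Applied to real compressed/original pairs, dark and bright regions of this map correspond directly to the attenuated and emphasized bands visible in Fig. 9.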
+
+The pre-trained models reveal clear and gradual attenuation of high frequencies. Once the models are plugged into the dissemination workflow, high frequencies start to be retained, but this alone does not suffice to improve manipulation detection. Increasing the importance of the cross-entropy loss gradually changes the attenuation patterns. The selection of frequencies becomes more irregular, and some bands are actually emphasized by the codec. The right-most column shows the most extreme configuration, where the trend is clearly visible (the outlier identified in the quantitative analysis in Section 4.3).
+
+The changes in codec behavior generally do not introduce visible differences in compressed images (as long as the data distribution is similar, see Section 5). We show an example image (raw test set), its compressed variants (low-quality, 16-C), and their corresponding spectra in Fig. 10. The spectra follow the identified attenuation pattern (Fig. 9). The compressed images do not reveal any obvious artifacts, and the most visible change seems to be the jump in entropy, as predicted in Section 4.3.
+
+
+Figure 10: Compression results for various versions of the low-quality DCN: (1st column) original image; (2nd) pre-trained model; (3rd-6th) fine-tuned models with decreasing $\lambda_{c}$ .
+
+# 5 DISCUSSION, LIMITATIONS AND FUTURE WORK
+
+While the proposed approach can successfully facilitate pre-screening of photographs shared online, further research is needed to improve model generalization. We observed that the fine-tuning procedure tends to bias the DCN/FAN models towards the secondary image dataset, in our case the native camera output (NCO). The baseline DCN was pre-trained on mixed natural images (MNI) with extensive augmentation, leading to competitive results on all test sets. However, fine-tuning was performed on NCO only. Characteristic pixel correlations, e.g., due to color interpolation, bias the codec and lead to occasional artifacts on MNIs (mostly in the clic test set; see Appendix B), and to deterioration of the rate-distortion trade-off. The problem is present regardless of $\lambda_{c}$, which suggests issues with the fine-tuning protocol (data diversity) rather than with the forensic optimization objective.
+
+We ran additional experiments by skipping photo acquisition and fine-tuning directly on MNI from the original training set (subset of 2,500 RGB images). We observed the same behavior (see Appendix C), and the optimized codec was artifact-free on all test sets. (Although, due to a smaller training set, the model loses some of its performance; cf. MNI results in Fig. A.6.) However, the FANs generalized well only to clic and kodak images. The originally trained FANs generalized reasonably well to different NCO images (including images from other 3 cameras) but not to clic or kodak. This confirms that existing forensics models are sensitive to data distribution, and that further work will be needed to establish more universal training protocols (see detailed discussion in Appendix D). Short fine-tuning is known to help (Cozzolino et al., 2018), and we leave this aspect for future work. We are also planning to explore new transfer learning protocols (Li & Hoiem, 2017).
+
+Generalization should also consider other forensic tasks. We optimized for manipulation detection, which serves as a building block for more complex problems, like processing history analysis or tampering localization (Korus, 2017; Mayer & Stamm, 2019; Wu et al., 2019; Marra et al., 2019a). However, additional pre-screening may also be needed, e.g., analysis of sensor fingerprints (Chen et al., 2008), or identification of computer graphics or synthetic content (Marra et al., 2019b).
+
+# 6 CONCLUSIONS
+
+Our study shows that lossy image codecs can be explicitly optimized to retain subtle low-level traces that are useful for photo manipulation detection. Interestingly, simple inclusion of high frequencies in the signal is insufficient, and the models learn more complex frequency attenuation/amplification patterns. This allows for reliable authentication even at very low bit-rates, where standard JPEG compression is no longer practical, e.g., at bit-rates around 0.4 bpp where our DCN codec with low-quality settings improved manipulation detection accuracy from $37\%$ to $86\%$ . We believe the proposed approach is particularly valuable for online media platforms (e.g., Truepic, or Facebook), who need to pre-screen content upon reception, but need to aggressively optimize bandwidth/storage.
+
+Source Code: github.com/pkorus/neural-imaging.
+
+Acknowledgements: This work was supported by the NSF award number 1909488.
+
+# REFERENCES
+
+E. Agustsson, M. Tschannen, F. Mentzer, R. Timofte, and L. Van Gool. Generative Adversarial Networks for Extreme Learned Image Compression. arXiv:1804.02958, 2018.
+J. Balle, V. Laparra, and E. Simoncelli. End-to-end Optimized Image Compression. arXiv:1611.01704, 2016.
+B. Bayar and M. Stamm. Constrained convolutional neural networks: A new approach towards general purpose image manipulation detection. IEEE Trans. Information Forensics and Security, 13(11), 2018.
+F. Bellard. Better Portable Graphics. https://bellard.org/bpg/, 2014.
+E. M. Bik, A. Casadevall, and F. Fang. The prevalence of inappropriate image duplication in biomedical research publications. MBio, 7(3), 2016.
+Y. Blau and T. Michaeli. Rethinking lossy compression: The rate-distortion-perception tradeoff. arXiv:1901.07821, 2019.
+E. Bucci. Automatic detection of image manipulations in the biomedical literature. Cell death & disease, 9(3):400, 2018.
+V. Bychkovsky, S. Paris, E. Chan, and F. Durand. Learning photographic global tonal adjustment with a database of input/output image pairs. In IEEE Conf. computer vision and pattern recognition, 2011.
+B. Cabral and E. Kandrot. The technology behind preview photos. https://engineering.fb.com/android/the-technology-behind-preview-photos/, 2015.
+M. Chen, J. Fridrich, M. Goljan, and J. Lukás. Determining image origin and integrity using sensor noise. IEEE Trans. Information Forensics and Security, 3(1), 2008.
+R. Chesney and D. Citron. Deepfakes and the new disinformation war: The coming age of post-truth geopolitics. Foreign Aff., 98, 2019.
+CLIC. Image compression challenge. http://www.compression.cc/challenge/, 2019.
+Y. Collet. FSE Codec. https://github.com/Cyan4973/FiniteStateEntropy, 2013.
+M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In IEEE Conf. computer vision and pattern recognition, 2016.
+D. Cozzolino, J. Thies, A. Rössler, C. Riess, M. Nießner, and L. Verdoliva. Forensictransfer: Weakly-supervised domain adaptation for forgery detection. arXiv:1812.02510, 2018.
+D.-T. Dang-Nguyen, C. Pasquini, V. Conotter, and G. Boato. RAISE: a raw images dataset for digital image forensics. In ACM Multimedia Systems Conference, 2015.
+S. Dodge and L. Karam. Understanding how image quality affects deep neural networks. In IEEE Int. Conf. Quality of Multimedia Experience, 2016.
+J. Duda. Asymmetric numeral systems: entropy coding combining speed of huffman coding with compression rate of arithmetic coding. arXiv:1311.2540, 2013.
+J. Duda, K. Tahboub, N. Gadgil, and E. Delp. The use of asymmetric numeral systems as an accurate replacement for Huffman coding. In Picture Coding Symposium, 2015. doi: 10.1109/PCS.2015.7170048.
+Facebook. Spectrum, image transcoding library. https://libspectrum.io/, 2018.
+N. Gilbert. Science journals crack down on image manipulation. https://www.nature.com/news/2009/091009/full/news.2009.991.html, 2009.
+Google. Guetzli perceptual JPEG encoder. https://github.com/google/Guetzli, 2016.
+
+L. Gueguen, A. Sergeev, B. Kadlec, R. Liu, and J. Yosinski. Faster neural networks straight from JPEG. In Advances in Neural Information Processing Systems, 2018.
+S. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. Barron, F. Kainz, J. Chen, and M. Levoy. Burst photography for high dynamic range and low-light imaging on mobile cameras. ACM Transactions on Graphics, 35(6), 2016.
+P. Korus. Digital image integrity-a survey of protection and verification techniques. Digital Signal Processing, 71, 2017.
+P. Korus and N. Memon. Content authentication for neural imaging pipelines: End-to-end optimization of photo provenance in complex distribution channels. In IEEE Conf. Computer Vision and Pattern Recognition, 2019.
+Z. Li and D. Hoiem. Learning without forgetting. IEEE Trans. pattern analysis and machine intelligence, 40(12), 2017.
+Z. Liu, T. Liu, W. Wen, L. Jiang, J. Xu, Y. Wang, and G. Quan. DeepN-JPEG: A deep neural network favorable JPEG-based image compression framework. In ACM Annual Design Automation Conference, 2018.
+K. Ma, Z. Duanmu, Q. Wu, Z. Wang, H. Yong, H. Li, and L. Zhang. Waterloo exploration database: New challenges for image quality assessment models. IEEE Trans. on Image Processing, 26(2), 2016.
+F. Marra, D. Gragnaniello, L. Verdoliva, and G. Poggi. A full-image full-resolution end-to-end-trainable CNN framework for image forgery detection. arXiv:1909.06751, 2019a.
+F. Marra, D. Gragnaniello, L. Verdoliva, and G. Poggi. Do GANs leave artificial fingerprints? In IEEE Conf. Multimedia Information Processing and Retrieval, pp. 506-511, 2019b.
+O. Mayer and M. Stamm. Forensic similarity for digital images. arXiv:1902.04684, 2019.
+F. Mentzer, E. Agustsson, M. Tschannen, R. Timofte, and L. Van Gool. Conditional Probability Models for Deep Image Compression. IEEE Conf. Computer Vision and Pattern Recognition, 2018. doi: 10.1109/CVPR.2018.00462.
+Y. Mirsky, T. Mahler, I. Shelef, and Y. Elovici. CT-GAN: Malicious Tampering of 3D Medical Imagery using Deep Learning. arXiv:1901.03597, 2019.
+A. Popescu and H. Farid. Exposing digital forgeries in color filter array interpolated images. IEEE Trans. Signal Processing, 53(10), 2005.
+T. Portenier, Q. Hu, A. Szabo, S. Bigdeli, P. Favaro, and M. Zwicker. Faceshop: Deep sketch-based face image editing. ACM Trans. Graph., 37(4):99, 2018.
+T. Portenier, Q. Hu, P. Favaro, and M. Zwicker. Smart, deep copy-paste. arXiv:1903.06763, 2019.
+A. Prakash, N. Moran, S. Garber, A. DiLillo, and J. Storer. Semantic perceptual image compression using deep convolution networks. In Data Compression Conference, 2017.
+A. Rhatushnyak, J. Wassenberg, J. Sneyers, J. Alakuijala, L. Vandevenne, L. Versari, R. Obryk, Z. Szabadka, A. Deymo, E. Kliuchnikov, et al. Committee Draft of JPEG XL Image Coding System. arXiv:1908.03565, 2019.
+S. Richter, V. Vineet, S. Roth, and V. Koltun. Playing for data: Ground truth from computer games. In European conference on computer vision, 2016.
+O. Rippel and L. Bourdev. Real-Time Adaptive Image Compression. arXiv:1705.05823, 2017.
+L. Theis, W. Shi, A. Cunningham, and F. Huszár. Lossy Image Compression with Compressive Autoencoders. arXiv:1703.00395, 2017.
+
+G. Toderici, D. Vincent, N. Johnston, S. Jin Hwang, D. Minnen, J. Shor, and M. Covell. Full resolution image compression with recurrent neural networks. In IEEE Conf. Computer Vision and Pattern Recognition, 2017.
+R. Torfason, F. Mentzer, E. Agustsson, M. Tschannen, R. Timofte, and L. Van Gool. Towards Image Understanding from Deep Compression without Decoding. arXiv:1803.06131, 2018.
+Truepic. https://truepic.com, 2019.
+YH. Tsai, X. Shen, Z. Lin, K. Sunkavalli, and MH. Yang. Sky is not the limit: semantic-aware sky replacement. ACM Trans. Graph., 35(4):149-1, 2016.
+Z. Wang, D. Liu, S. Chang, Q. Ling, Y. Yang, and T. Huang. D3: Deep dual-domain based fast restoration of JPEG-compressed images. In IEEE Conf. Computer Vision and Pattern Recognition, 2016.
+A. Weißenberger and B. Schmidt. Massively parallel ANS decoding on GPUs. In ACM Int. Conf. Parallel Processing, 2019.
+Y. Wu, W. AbdAlmageed, and P. Natarajan. Mantra-net: Manipulation tracing network for detection and localization of image forgeries with anomalous features. In IEEE Conf. Computer Vision and Pattern Recognition, 2019.
+W. Xiong, J. Yu, Z. Lin, J. Yang, X. Lu, C. Barnes, and J. Luo. Foreground-aware image inpainting. In IEEE Conf. Computer Vision and Pattern Recognition, pp. 5840-5848, 2019.
+
+Table A.1: Structure of the bit-stream describing a DCN-compressed image
+
+| Section | Content | Data Type | Bytes |
+| --- | --- | --- | --- |
+| Basic meta-data | Latent shape H × W × N | uint8 | 3 |
+| | Length of coded channel sizes | uint16 | 2 |
+| Channel sizes (shorter of a/b) | (a) FSE-encoded channel sizes1 | uint16 | var |
+| | (b) raw bytes | uint16 | 2N |
+| Image data (N × shorter of a/b) | (a) FSE-encoded latent channel1 | uint8 | var |
+| | (b) RLE-encoded latent channel (#repetitions + byte) | uint16 + uint8 | 3 |
+
+1 - inclusive of both ANS probability tables and entropy-coded data
+
+Table A.2: Bit-stream length of channel entropy coding (EC) relative to global EC for different quality levels and image patches of various size.
+
+| DCN model | Avg. size (128 px) | Avg. size (256 px) | Avg. size (512 px) | Range (128 px) | Range (256 px) | Range (512 px) |
+| --- | --- | --- | --- | --- | --- | --- |
+| low quality (16-C) | 1.03 | 0.934 | 0.882 | 0.917 - 1.098 | 0.829 - 1.005 | 0.755 - 0.980 |
+| medium quality (32-C) | 1.05 | 0.933 | 0.874 | 0.961 - 1.108 | 0.821 - 0.998 | 0.742 - 0.968 |
+| high quality (64-C) | 1.07 | 0.948 | 0.887 | 0.977 - 1.119 | 0.833 - 0.998 | 0.773 - 0.964 |
+
+# A DCN CODEC DETAILS
+
+# A.1 QUANTIZATION AND ENTROPY REGULARIZATION
+
+The standard soft quantization with a Gaussian kernel (Mentzer et al., 2018) works well for rounding to arbitrary integers, but leads to numerical issues for smaller codebooks. Values significantly exceeding the codebook endpoints have zero affinity to any of the entries, and collapse to the mean (i.e., $\approx 0$ in our case; Fig. A.1a). Such issues can be addressed by increasing numerical precision, sacrificing accuracy (due to a larger kernel bandwidth), or adding explicit conditional statements in the code. The latter approach is infeasible or cumbersome in graph-based machine learning frameworks like TensorFlow. We instead used a t-Student kernel and increased the precision of the computation to 64 bits. This does not solve the problem entirely, but it eliminated all issues that we came across in our experiments, and further improved our entropy estimation accuracy. Fig. A.2 shows the entropy estimation error for Laplace-distributed random values and different hyper-parameters of the kernels. We observed the best results for a t-Student kernel with 50 degrees of freedom and bandwidth $\gamma = 25$ (marked in red). This kernel is used in all subsequent experiments.
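The failure mode can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the training code: the codebook, $\gamma = 25$ and 50 degrees of freedom follow the values above, while the function names and the uniform fallback on underflow are our own choices.

```python
import numpy as np

# 5-bpf codebook with M = 32 entries, values -15 .. 16
codebook = np.arange(-15, 17, dtype=np.float64)

def soft_quantize(x, kernel="tstudent", gamma=25.0, df=50.0):
    """Soft (differentiable) quantization: each value becomes a
    kernel-weighted average of the codebook entries."""
    d2 = (np.asarray(x, dtype=np.float64)[..., None] - codebook) ** 2
    if kernel == "gaussian":
        w = np.exp(-gamma * d2)  # underflows to 0 for strong outliers
    else:
        # heavy-tailed t-Student kernel keeps non-zero affinities
        w = (1.0 + gamma * d2 / df) ** (-(df + 1.0) / 2.0)
    w_sum = w.sum(axis=-1, keepdims=True)
    # if every weight underflows, the value collapses to the codebook mean
    w = np.where(w_sum > 0, w / np.maximum(w_sum, 1e-300), 1.0 / codebook.size)
    return (w * codebook).sum(axis=-1)
```

For an outlier such as x = 40, all Gaussian weights underflow and the value collapses to the codebook mean (0.5 here), whereas the t-Student kernel keeps it near the endpoint 16.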
+
+We experimented with different codebooks and entropy regularization strengths. Fig. A.3a shows how the quantized latent representation (QLR) changes with the size of the codebook. The figures also compare the actual histogram with its soft estimate (equation 6). We observed that the binary codebook is sub-optimal and significantly limits the achievable image quality, especially as the number of feature channels grows. Adding more entries steadily improved quality, and the codebook with $M = 32$ entries (values from -15 to 16) seemed to be the point of diminishing returns.
+
+Our entropy-based regularization turned out to be very effective at shaping the QLR (Fig. A.3b) and dispensed with the need to use other normalization techniques (e.g., GDN). We used only a single scalar multiplication factor responsible for scaling the distribution. All baseline and finetuned models use $\lambda_{H} = 250$ (last column). Fig. A.4 visually compares the QLRs of our baseline low-quality codec (16 feature channels) with weak and strong regularization.
+
+# A.2 ENTROPY CODING AND BIT-STREAM STRUCTURE
+
+We used a rudimentary bit-stream structure with the essential meta-data and markers that allow for successful decoding (Tab. A.1). Feature channels are entropy-coded independently (we refer to this approach as channel $EC$ ), and can be accessed randomly after decoding a lookup table of their sizes. This simple approach can yield considerable savings w.r.t. global entropy coding (global $EC$ ), especially as the image size increases (Tab. A.2).
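A minimal sketch of this bit-stream layout, using zlib as a stand-in for the FSE/ANS entropy coder (the function names and the zlib choice are ours; the header and per-channel size table follow the layout of Tab. A.1, which is what enables random access to individual channels):

```python
import struct
import zlib
import numpy as np

def pack_latent(latent):
    """Serialize an H x W x N uint8 latent tensor: a 3-byte shape header,
    a uint16 size table, then independently coded channels."""
    H, W, N = latent.shape
    blobs = [zlib.compress(latent[:, :, n].tobytes()) for n in range(N)]
    sizes = struct.pack(f"<{N}H", *[len(b) for b in blobs])
    return struct.pack("<3B", H, W, N) + sizes + b"".join(blobs)

def unpack_channel(stream, n):
    """Randomly access channel n after reading only the header and size table."""
    H, W, N = struct.unpack_from("<3B", stream, 0)
    sizes = struct.unpack_from(f"<{N}H", stream, 3)
    offset = 3 + 2 * N + sum(sizes[:n])
    raw = zlib.decompress(stream[offset:offset + sizes[n]])
    return np.frombuffer(raw, dtype=np.uint8).reshape(H, W)
```

Each channel is decodable on its own, mirroring the channel EC approach described above.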
+
+Figure A.1: Comparison of soft quantization with Gaussian and t-Student kernels for a Laplace distribution of increasing scale: the t-Student kernel is more accurate and robust to outliers.
+
+Figure A.2: Entropy estimation error for a Laplacian distribution with varying scale and various hyper-parameters of the kernels: the t-Student kernel (2nd-6th column) is more accurate than the Gaussian (1st column) - especially for wide distributions overflowing the codebook range.
+
+(a) impact of quantization codebook size
+
+
+(b) impact of entropy regularization given a fixed 5-bpf codebook
+Figure A.3: Comparison of the distributions of the learned quantized latent representations (QLRs): (a) impact of codebook size; (b) impact of entropy regularization.
+
+
+Figure A.4: Comparison of the latent representations (16-C model) of an example $512 \times 512$ px image (thong-vo-428) learned with weak and strong entropy regularization (5-bpf codebook).
+
+Figure A.5: Rate-distortion trade-offs of the baseline DCNs on the clic, kodak and raw test sets: (left) average over all images; (2nd-5th columns) sample images from Fig. 4; (top section) linear-scale SSIM; (bottom section) MS-SSIM in dB scale $f(x) = -10 \cdot \log_{10}(1 - x)$ .
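The dB mapping used on the MS-SSIM axes is a one-liner (helper name ours); it maps 0.9 to 10 dB and 0.99 to 20 dB, spreading out the near-1 similarity scores:

```python
import math

def msssim_db(x):
    """Map a similarity score in [0, 1) to the dB scale used in the plots."""
    return -10.0 * math.log10(1.0 - x)
```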
+
+Figure A.6: Rate-distortion performance for standard CODECs and all DCN versions, including the pre-trained baselines (b) and 3 fine-tuned models with the weakest $(\lambda_c = 1)$ and the strongest emphasis on manipulation detection $(\lambda_c = 0.005$ and 0.001). The latter incur only a fractional cost in payload/quality but bring significant benefits for manipulation detection. The top (bottom) rows show results for SSIM (MS-SSIM), respectively, and include DCN models fine-tuned on native camera output (raw) and mixed natural images (clic).
+
+Table B.1: Example confusion matrices for the baseline and fine-tuned low-quality DCN models.
+
+(a) pre-trained 16-C DCN, accuracy 36.9%:
+
+| True\Predicted | native | sharpen | resample | gaussian | jpeg | awgn | median |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| native | 8 | 10 | 10 | 24 | 7 | 18 | 23 |
+| sharpen | 8 | 46 | 8 | 3 | 5 | 18 | 12 |
+| resample | * | | 58 | 25 | * | 5 | 8 |
+| gaussian | * | | 12 | 65 | * | 7 | 15 |
+| jpeg | 12 | 9 | 10 | 22 | 11 | 16 | 20 |
+| awgn | 12 | 12 | 11 | 9 | 5 | 41 | 10 |
+| median | 4 | * | 8 | 45 | 3 | 9 | 29 |
+
+(b) fine-tuned with λc = 0.0010, accuracy 87.0%:
+
+| True\Predicted | native | sharpen | resample | gaussian | jpeg | awgn | median |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| native | 92 | * | | * | * | * | * |
+| sharpen | 7 | 86 | | | 8 | | |
+| resample | | | 88 | 11 | | * | |
+| gaussian | | | * | 95 | * | | * |
+| jpeg | * | * | | 3 | 91 | | * |
+| awgn | * | 4 | * | | | 94 | |
+| median | * | | 6 | 28 | | * | 63 |
+
+# B FINE-TUNING ON NATIVE CAMERA OUTPUT
+
+Controlling Detection Accuracy: Fig. 8 visualizes the trade-offs in image compression and forensic analysis performance. Here we show the direct impact of the image compression and fine-tuning settings on the achievable manipulation detection accuracy and its variation (Fig. B.1). For the JPEG codec, we observe nearly perfect manipulation detection for the highest quality level (QF=95), and a steady decline starting immediately below. The sudden drop in accuracy corresponds to the quality level of JPEG as one of the manipulations (QF=80). For DCN models, we clearly see a steady improvement of the fine-tuned models w.r.t. the baselines (on the right). Interestingly, the high-quality model shows a slight decline at first.
+
+Qualitative Analysis: The learned frequency attenuation/amplification patterns for all of the considered quality levels are shown in Fig. B.2. The visualizations were computed in the FFT domain and show the relative magnitude of individual frequencies w.r.t. original uncompressed images (averaged over all test images). In all cases, we observe complex behavior beyond simple inclusion of high-frequency content. The patterns seem to follow a stable trajectory, despite independent runs of the experiment with different regularization strengths $\lambda_{c}$ . The same patterns are also visible in individual image spectra (Fig. B.4 - Fig. B.6).
+
+Generalization and Imaging Artifacts: While our baseline DCN models were pre-trained on a large and diverse training set, the fine-tuning procedure relied on the complete model of photo acquisition and dissemination. Photo acquisition with digital cameras yields characteristic imaging artifacts, e.g., due to color filtering and interpolation. This leads to a characteristic distribution of native camera output (NCO), and ultimately biases the codec. Fig. B.3 shows the differences in SSIM between the baseline models and the models fine-tuned with a very weak cross-entropy objective (leading to no improvement in manipulation detection accuracy). For NCO (raw test set), we observe an improvement in image quality (and a corresponding increase in bitrate). For the kodak set, the quality remains mostly unaffected (with an increased bitrate). On the clic set, we observe minor quality loss, and occasional artifacts (see examples in Fig. B.4).
+
+In Fig. B.4 - Fig. B.6 we collected example images from all test sets (clic, kodak, and raw) and compressed them with the baseline and fine-tuned models. The images are ordered by SSIM deterioration due to weak fine-tuning (quality loss without gains in accuracy; Fig. B.3) - the worst cases are shown at the top. (Note that some of the artifacts are caused by JPEG encoding of the images embedded in the PDF, and some geometric distortions were introduced by imperfect scaling in matplotlib.) In the first clic image (1st row in Fig. B.4), we can see color artifacts along the high-contrast wires. In the second image, we see distortions in the door blinds, and a subtle shift in the hue of the bike. For the remaining two images, SSIM remains the same or better and we do not see any artifacts. In the kodak set, the worst image quality was observed for kodim05 (1st row in Fig. B.5), but we don't see any artifacts.
+
+
+Figure B.1: Impact of compression quality and fine-tuning regularization on the achievable detection accuracy and its variation. Sudden drops for JPEG are caused by inclusion of this compression as one of the manipulations, and correspond to the manipulation quality level (80).
+
+Figure B.2: Visualization of frequency attenuation/amplification patterns in the FFT domain for the fine-tuned DCN CODECs (on native camera output). From the top: 16-C, 32-C, and 64-C models.
+
+
+Figure B.3: Difference in image fidelity (SSIM) after fine-tuning the low-quality DCN model within the dissemination workflow (weak CE objective with little to no improvement in detection accuracy): raw, kodak and clic datasets.
+
+Figure B.4: Changes in the image compressed with various versions of the medium-quality DCN codec: (1st column) sample image from the clic dataset; (2nd) pre-trained DCN model; (3rd-6th) fine-tuned models with decreasing $\lambda_{c}$ . Images are ordered by SSIM fidelity loss of $\lambda_{c} = 1.0$ w.r.t. the pre-trained model.
+
+
+Figure B.5: Changes in the image compressed with various versions of the medium-quality DCN codec: (1st column) sample image from the kodak dataset; (2nd) pre-trained DCN model; (3rd-6th) fine-tuned models with decreasing $\lambda_{c}$ . Images are ordered by SSIM fidelity loss of $\lambda_{c} = 1.0$ w.r.t. the pre-trained model.
+
+
+Figure B.6: Changes in the image compressed with various versions of the medium-quality DCN codec: (1st column) sample image from the raw dataset; (2nd) pre-trained DCN model; (3rd-6th) fine-tuned models with decreasing $\lambda_{c}$ . Images are ordered by SSIM fidelity loss of $\lambda_{c} = 1.0$ w.r.t. the pre-trained model.
+
+
+Figure C.1: Visualization of the rate-distortion-accuracy trade-off on the clic dataset after fine-tuning on mixed natural images.
+
+
+
+# C FINE-TUNING ON MIXED NATURAL IMAGES
+
+As discussed in Section 5, we ran additional experiments by skipping photo acquisition and fine-tuning directly on mixed natural images (MNI) - a subset of the original DCN training set (2,500 images). Images in this dataset tend to have more details and depict objects at a coarser scale, since they were all down-sampled to $256 \times 256$ px (from various original sizes). This required adjusting manipulation strength to maintain visual similarity between photo variations. In particular, we used weaker sharpening, Gaussian filtering with a smaller standard deviation (0.5), down&up-sampling via $75\%$ of the image size (instead of $50\%$ ), Gaussian noise with standard deviation 0.012, and JPEG quality level 90. We fine-tuned for 600 epochs.
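The adjusted manipulation settings can be illustrated with a numpy-only sketch. The helper names are ours, nearest-neighbor resizing and a 3-tap blur stand in for the actual resampling and Gaussian filters, and the sharpening and JPEG manipulations are omitted; only the parameter values (75% resampling, noise sigma 0.012) come from the text above.

```python
import numpy as np

def resize_nn(img, out_h, out_w):
    """Nearest-neighbor resize (stand-in for the resampling used in the paper)."""
    H, W = img.shape
    return img[np.ix_(np.arange(out_h) * H // out_h,
                      np.arange(out_w) * W // out_w)]

def blur3(img):
    """Separable 3-tap blur approximating a small-sigma Gaussian filter."""
    k = np.array([0.25, 0.5, 0.25])
    p = np.pad(img, 1, mode="edge")
    v = k[0] * p[:-2, :] + k[1] * p[1:-1, :] + k[2] * p[2:, :]
    return k[0] * v[:, :-2] + k[1] * v[:, 1:-1] + k[2] * v[:, 2:]

def manipulate(img, rng, kind):
    """Apply one of the (approximated) manipulations to a float image in [0, 1]."""
    if kind == "awgn":        # additive Gaussian noise, sigma = 0.012
        return np.clip(img + rng.normal(0.0, 0.012, img.shape), 0.0, 1.0)
    if kind == "resample":    # down&up-sampling via 75% of the image size
        H, W = img.shape
        return resize_nn(resize_nn(img, int(0.75 * H), int(0.75 * W)), H, W)
    if kind == "gaussian":    # mild blur, cf. sigma = 0.5 in the text
        return blur3(img)
    return img
```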
+
+We summarize the obtained results in Fig. C.1 using images from the clic test set. In this experiment, the gap in manipulation detection accuracy between JPEG and baseline DCN has disappeared, except for remaining sudden drops at selected JPEG quality levels (corresponding to the manipulation quality factor 90). We still observe significant improvement for fine-tuned DCN models, but here it tends to saturate around $86\%$ , which might explain the negligible improvement of the high-quality 64-C model. By inspecting confusion matrices, we see most of the confusion between the native, sharpen and awgn classes, where the differences are extremely subtle.
+
+The fine-tuned DCN models remain close to the baseline rate-distortion behavior. Interestingly, except for the weakest regularization $(\lambda_{c} = 0.001)$ , all fine-tuned models seem to be equivalent (w.r.t. the trade-off). We did not observe any obvious artifacts, even in the most aggressive models. The only image with deteriorated SSIM is the alejandro-escamilla-6 image from clic, which was consistently the most affected outlier in nearly all fine-tuned models for both NCO and MNI (1st row in Fig. C.2). In some replications it actually managed to improve, e.g., for the shown cases with $\lambda_{c} = 0.005$ and 0.001. However, we don't see any major differences between these variations.
+
+Visualization of frequency attenuation patterns (Fig. C.3) shows analogous behavior, but the changes are more subtle on MNI. We included additional difference plots w.r.t. baseline and weakly finetuned models, where the changes are better visible. On NCO, due to less intrinsic high-frequency content, the behavior is still very clear (cf. bottom part of Fig. C.3).
+
+
+Figure C.2: Changes in images compressed with various versions of the medium-quality DCN codec fine-tuned on MNI: (1st column) sample image from the clic dataset; (2nd) pre-trained DCN model; (3rd-6th) fine-tuned models with decreasing $\lambda_{c}$ . The top image corresponds to the most consistent outlier with the worst SSIM degradation.
+
+
+Figure C.3: Visualization of frequency attenuation/amplification patterns for DCN CODECs fine-tuned on MNI: (top) low-quality codec tested on clic images; (bottom) the same codec tested on raw images. Difference plots show changes w.r.t. the baseline and weakly fine-tuned models.
+
+
+Figure D.1: Transferability of the trained FAN models to different data distributions: (top) models trained on native camera output generalize to raw test images from 4 different cameras; (bottom) models trained on diverse images generalize well to other datasets, except native camera output.
+
+# D TRANSFERABILITY OF THE FAN
+
+In this study, we considered two broad classes of images: native camera output (NCO) and mixed natural images (MNI), which exhibit significant differences in pixel distribution. For DCN pre-training, we relied on a large MNI dataset with images down-sampled to $256 \times 256$ px (Section 3.4). Fine-tuning was then performed on either NCO from a single camera (Nikon D90; Section 4) or a smaller sub-set of the original training MNI (2,500 images; Appendix C). Finally, we considered three test sets: raw with NCO from 4 different cameras; clic and kodak with MNI.
+
+We observed that the FAN models tend to have limited generalization capabilities to images with a different pixel distribution. We ran additional experiments to quantify this phenomenon, and show the obtained results in Fig. D.1, where we compare test vs validation accuracy for various test sets (we also included a version of the clic set resized to $256 \times 256$ px). In the top row, we show results for models trained on NCO from a single camera. We can see imperfect, but reasonable generalization to the output of various cameras (red markers). Once the data distribution changes, the performance drops significantly. Analogously, FAN models trained on MNI generalize well to other MNI datasets (clic and kodak), but fail on NCO. We see an additional bias towards images down-sampled to the same resolution as the training data (compare clic vs clic-256 images), but here the difference is small and consistent - between 5.2 and 6% based on a linear fit to the data.
\ No newline at end of file
diff --git a/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/images.zip b/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..23b25de24578796eaf797a1f968b75730f954359
--- /dev/null
+++ b/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bfcd1d01a9a94d61a3636c143fe98b262b31bea5235154c37e4b88f6fce5f36a
+size 2558566
diff --git a/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/layout.json b/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..125c14fde40d5689937e9fc0683c78f68a27a52f
--- /dev/null
+++ b/quantifyingthecostofreliablephotoauthenticationviahighperformancelearnedlossyrepresentations/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dab71c4f7ed857dbbebecc3eae5e030297408dbb237adb36724b5670b091cc19
+size 713508
diff --git a/quantumalgorithmsfordeepconvolutionalneuralnetworks/d102b7eb-d233-4817-975b-e8ee4c402758_content_list.json b/quantumalgorithmsfordeepconvolutionalneuralnetworks/d102b7eb-d233-4817-975b-e8ee4c402758_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f43d994474d690733d40f5df01aada9fb901c7eb
--- /dev/null
+++ b/quantumalgorithmsfordeepconvolutionalneuralnetworks/d102b7eb-d233-4817-975b-e8ee4c402758_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b116a12c2004ab3f12b3f05130870bece0398748857efabbade9500eacfc2f8
+size 246253
diff --git a/quantumalgorithmsfordeepconvolutionalneuralnetworks/d102b7eb-d233-4817-975b-e8ee4c402758_model.json b/quantumalgorithmsfordeepconvolutionalneuralnetworks/d102b7eb-d233-4817-975b-e8ee4c402758_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..728fe24b439fee4eee94bcefd1e1b12908b1b79d
--- /dev/null
+++ b/quantumalgorithmsfordeepconvolutionalneuralnetworks/d102b7eb-d233-4817-975b-e8ee4c402758_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea8921eb4715bcaeef8298b4e9276492150a09221fce27edf3a81dcba49f3712
+size 284901
diff --git a/quantumalgorithmsfordeepconvolutionalneuralnetworks/d102b7eb-d233-4817-975b-e8ee4c402758_origin.pdf b/quantumalgorithmsfordeepconvolutionalneuralnetworks/d102b7eb-d233-4817-975b-e8ee4c402758_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ac7d5cb64995b26eb2857f524a4c4aaf8ef8c66a
--- /dev/null
+++ b/quantumalgorithmsfordeepconvolutionalneuralnetworks/d102b7eb-d233-4817-975b-e8ee4c402758_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f54b8e753e5c6e203caec227c02917f27924543a50b0a1e97559b65779dac43a
+size 2032719
diff --git a/quantumalgorithmsfordeepconvolutionalneuralnetworks/full.md b/quantumalgorithmsfordeepconvolutionalneuralnetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ff77fbcf0d1c40e3a2f87fee742b5fcd2004efaa
--- /dev/null
+++ b/quantumalgorithmsfordeepconvolutionalneuralnetworks/full.md
@@ -0,0 +1,1014 @@
+# QUANTUM ALGORITHMS FOR DEEP CONVOLUTIONAL NEURAL NETWORK
+
+Iordanis Kerenidis, Jonas Landman & Anupam Prakash
+
+Institut de Recherche en Informatique Fondamentale (IRIF)
+
+Université de Paris, CNRS
+
+Paris, France
+
+landman@irif.fr
+
+# ABSTRACT
+
+Quantum computing is a powerful computational paradigm with applications in several fields, including machine learning. In the last decade, deep learning, and in particular Convolutional Neural Networks (CNN), have become essential for applications in signal processing and image recognition. Quantum deep learning, however, remains a challenging problem, as it is difficult to implement non-linearities with quantum unitaries Schuld et al. (2014). In this paper we propose a quantum algorithm for evaluating and training deep convolutional neural networks with potential speedups over classical CNNs for both the forward and backward passes. The quantum CNN (QCNN) reproduces completely the outputs of the classical CNN and allows for non-linearities and pooling operations. The QCNN is particularly interesting for deep networks and could open new frontiers in the image recognition domain, by allowing for many more convolution kernels, larger kernels, high dimensional inputs and high depth input channels. We also present numerical simulations for the classification of the MNIST dataset to provide practical evidence for the efficiency of the QCNN.
+
+# 1 INTRODUCTION
+
+The growing importance of deep learning in research, in industry and in our society will require extreme computational power, as dataset sizes and the complexity of these algorithms are expected to increase. Quantum computers are a good candidate to answer this challenge. The recent progress in the physical realization of quantum processors and the advances in quantum algorithms increase the importance of understanding their capabilities and limitations. In particular, the field of quantum machine learning has witnessed many innovative algorithms that offer speedups over their classical counterparts Kerenidis et al. (2019); Lloyd et al. (2013; 2014); Kerenidis & Prakash (2017b); Wiebe et al. (2014a).
+
+Quantum deep learning refers to the problem of creating quantum circuits that mimic and enhance the operations of neural networks. It has been studied in several works Allcock et al. (2018); Rebentrost et al. (2018); Wiebe et al. (2014b) but remains challenging, as it is difficult to implement non-linearities with quantum unitaries Schuld et al. (2014). In this work we propose a quantum algorithm for convolutional neural networks (CNN), a type of deep neural network designed for visual recognition, signal processing and time series. We also provide results of numerical simulations to evaluate the running time and accuracy of the quantum convolutional neural network (QCNN). Note that our algorithm is theoretical and could be compiled on any type of quantum computer (trapped ions, superconducting qubits, cold atoms, photons, etc.).
+
+CNNs were originally developed in the 1980's LeCun et al. (1998). They have achieved great practical success over the last decade Krizhevsky et al. (2012) and have been used in cutting-edge domains like autonomous cars Bojarski et al. (2016) and gravitational wave detection George & Huerta (2018). Despite these successes, CNNs suffer from computational bottlenecks due to the size of the optimization space and the complexity of the inner operations; these bottlenecks make deep CNNs resource-intensive.
+
+The growing interest in quantum machine learning has led researchers to develop different variants of Quantum Neural Networks (QNN). The quest for designing quantum analogs of neural networks is challenging due to the modular layer architecture of the neural networks and the presence of non-linearities, pooling, and other non-unitary operations, as explained in Schuld et al. (2014). Several strategies have been tried in order to implement some features of neural networks Allcock et al. (2018); Wiebe et al. (2014b); Beer et al. (2019) in the quantum setting.
+
+Variational quantum circuits provide another path to the design of QNNs; this approach has been developed in Farhi & Neven (2018); Henderson et al. (2019); Killoran et al. (2018). A quantum convolutional neural network architecture using variational circuits was recently proposed Cong et al. (2018). However, further work is required to provide evidence that such techniques can outperform classical neural networks in machine learning settings.
+
+# 2 PRELIMINARIES
+
+# 2.1 CONVOLUTION PRODUCT AS MATRIX MULTIPLICATION
+
+We briefly introduce the formalism and notation concerning classical convolution product and its equivalence with matrix multiplication. More details can be found in Appendix (Section C). A single layer $\ell$ of the classical CNN does the following operations: from an input image $X^{\ell} \in \mathbb{R}^{H^{\ell} \times W^{\ell} \times D^{\ell}}$ seen as a 3D tensor, and a kernel $K^{\ell} \in \mathbb{R}^{H \times W \times D^{\ell} \times D^{\ell + 1}}$ seen as a 4D tensor, it performs a convolution product and outputs $X^{\ell + 1} = X^{\ell} * K^{\ell}$ , with $X^{\ell + 1} \in \mathbb{R}^{H^{\ell + 1} \times W^{\ell + 1} \times D^{\ell + 1}}$ . This convolution operation is equivalent to the matrix multiplication $A^{\ell} F^{\ell} = Y^{\ell + 1}$ where $A^{\ell}, F^{\ell}$ and $Y^{\ell + 1}$ are suitably vectorized versions of $X^{\ell}, K^{\ell}$ and $X^{\ell + 1}$ respectively. The output of the layer $\ell$ of the CNN is $f(X^{\ell + 1})$ where $f$ is a non-linear function.
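The equivalence above can be made concrete with a short numpy sketch of the standard im2col construction (an illustration, not code from the paper): $A^{\ell}$ is built from the sliding patches of $X^{\ell}$, $F^{\ell}$ from the flattened kernels, and the convolution becomes a single matrix product.

```python
import numpy as np

def conv_as_matmul(X, K):
    """Valid-mode convolution X * K expressed as the matrix product A F = Y.
    X: (H, W, D) input; K: (h, w, D, D') kernel; returns (H-h+1, W-w+1, D')."""
    H, W, D = X.shape
    h, w, _, Dp = K.shape
    Ho, Wo = H - h + 1, W - w + 1
    # A: one row per output pixel, each row a flattened h x w x D patch
    A = np.stack([X[i:i + h, j:j + w, :].ravel()
                  for i in range(Ho) for j in range(Wo)])
    F = K.reshape(h * w * D, Dp)          # one column per output channel
    return (A @ F).reshape(Ho, Wo, Dp)
```

The patch and kernel axes are flattened in the same order, so each entry of $A F$ equals the direct sum $\sum_{d_i, d_j, d} X_{i+d_i, j+d_j, d} \, K_{d_i, d_j, d, d'}$.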
+
+# 2.2 QUANTUM COMPUTING
+
+For a detailed introduction to quantum computing and its applications to machine learning in the context of this work, we invite the reader to look at Appendix F. We also refer to Nielsen & Chuang (2002b) for a more complete overview of quantum computing.
+
+In this part we only briefly discuss the core notions of quantum computing. Like a classical bit, a quantum bit (qubit) can be in the state $|0\rangle$ or $|1\rangle$ , but it can also be in a superposition state $\alpha |0\rangle + \beta |1\rangle$ with amplitudes $\alpha, \beta \in \mathbb{C}$ such that $|\alpha|^2 + |\beta|^2 = 1$ . With $n$ qubits it is then possible to construct a superposition of the $2^n$ possible binary combinations, each with a specific amplitude. We will denote the $i^{th}$ combination (e.g. $|01 \cdots 110\rangle$ ) as $|i\rangle$ . A vector $v \in \mathbb{R}^d$ can be encoded in a quantum state made of $\lceil \log(d) \rceil$ qubits. This encoding is a quantum superposition, where the components $(v_1, \dots, v_d)$ of $v$ are used as the amplitudes of the $d$ binary combinations. We denote this state $|v\rangle := \frac{1}{\|v\|} \sum_{i \in [d]} v_i |i\rangle$ , where $|i\rangle$ is a register representing the $i^{th}$ vector in the standard basis.
+
+Quantum computation proceeds by applying quantum gates which are defined to be unitary matrices acting on 1 or 2 qubits, for example the Hadamard gate that maps $|0\rangle \mapsto \frac{1}{\sqrt{2}} (|0\rangle + |1\rangle)$ and $|1\rangle \mapsto \frac{1}{\sqrt{2}} (|0\rangle - |1\rangle)$ . The output of the computation is a quantum state that can be measured to obtain classical information. The measurement of a qubit $\alpha |0\rangle + \beta |1\rangle$ yields either 0 or 1, with probability equal to the square of the respective amplitude. A detailed discussion of the results from quantum machine learning and linear algebra used in this work can be found in Appendix (Section F).
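A short numpy illustration of amplitude encoding and of the Hadamard gate (a classical simulation only; the helper name is ours):

```python
import numpy as np

def amplitude_encode(v):
    """Return the amplitude vector of |v> = (1/||v||) sum_i v_i |i>,
    zero-padded to the next power of two (i.e. ceil(log2(d)) qubits)."""
    v = np.asarray(v, dtype=float)
    n_qubits = int(np.ceil(np.log2(len(v)))) if len(v) > 1 else 1
    state = np.zeros(2 ** n_qubits)
    state[:len(v)] = v / np.linalg.norm(v)
    return state

# Hadamard gate: |0> -> (|0> + |1>)/sqrt(2), |1> -> (|0> - |1>)/sqrt(2)
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

state = amplitude_encode([3.0, 4.0])   # the single-qubit state 0.6|0> + 0.8|1>
probs = state ** 2                     # measurement probabilities 0.36 and 0.64
```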
+
+# 3 MAIN RESULTS
+
+In this paper, we design a quantum convolutional neural network (QCNN) algorithm with a modular architecture that allows for any number of layers, any number and size of kernels, and that can support a large variety of non-linearity and pooling methods. Our main technical contributions include a new notion of a quantum convolution product, the development of a quantum sampling technique well suited for information recovery in the context of CNNs and a proposal for a quantum backpropagation algorithm for efficient training of the QCNN.
+
+The QCNN can be directly compared to the classical CNN as it has the same inputs and outputs. We show that it offers a speedup compared to certain cases of classical CNN for both the forward pass and for training using backpropagation. For each layer, on the forward pass (Algorithm 1), the speedup is exponential in the size of the layer (number of kernels) and almost quadratic on the spatial dimension of the input. We next state informally the speedup for the forward pass, the formal version appears as Theorem D.1.
+
+# Result 1 (Quantum Convolution Layer)
+
+Let $X^{\ell}$ be the input and $K^{\ell}$ be the kernel for layer $\ell$ of a convolutional neural network, and $f: \mathbb{R} \mapsto [0, C]$ with $C > 0$ be a non-linear function so that $f(X^{\ell + 1}) := f(X^{\ell} * K^{\ell})$ is the output for layer $\ell$ . Given $X^{\ell}$ and $K^{\ell}$ stored in QRAM (Quantum Random Access Memory), there is a quantum algorithm that, for precision parameters $\epsilon > 0$ and $\eta > 0$ , creates quantum state $|f(\overline{X}^{\ell + 1}) \rangle$ such that $\left\| f(\overline{X}^{\ell + 1}) - f(X^{\ell + 1}) \right\|_{\infty} \leq 2\epsilon$ and retrieves classical tensor $\mathcal{X}^{\ell + 1}$ such that for each pixel $j$ ,
+
+$$
+\left\{ \begin{array}{ll} \left| \mathcal{X}_j^{\ell+1} - f\left(X_j^{\ell+1}\right) \right| \leq 2\epsilon & \text{if } f\left(\overline{X}_j^{\ell+1}\right) \geq \eta \\ \mathcal{X}_j^{\ell+1} = 0 & \text{if } f\left(\overline{X}_j^{\ell+1}\right) < \eta \end{array} \right. \tag{1}
+$$
+
+The running time of the algorithm is $\widetilde{O}\left(\frac{1}{\epsilon\eta^2} \cdot \frac{M\sqrt{C}}{\sqrt{\mathbb{E}(f(X^{\ell + 1}))}}\right)$ where $\mathbb{E}(\cdot)$ represents the average value, $\widetilde{O}$ hides factors poly-logarithmic in the size of $X^{\ell}$ and $K^{\ell}$ and the parameter $M = \max_{p,q} \| A_p \| \| F_q \|$ is the maximum product of norms from subregions of $X^{\ell}$ and $K^{\ell}$ .
+
+We see that the number of elements in the input and the kernels appears only poly-logarithmically in the running time. This is one of the main advantages of our algorithm, as it allows us to use larger, and even exponentially deeper, kernels. The number of elements in the input is instead hidden in the precision parameter $\eta$ of the running time: a sufficiently large fraction of pixels must be sampled from the output of the quantum convolution to retrieve the meaningful information. In the Numerical Simulations (Section 6) we provide empirical estimates for $\eta$. For details about the QRAM, see Appendix F.2.
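+The effect of the retrieval rule of Result 1 (Eq. 1) can be sketched classically. The following is a minimal NumPy simulation under our own assumptions (the helper name `thresholded_readout` and the uniform noise model are ours, chosen only to stay within the stated $2\epsilon$ error bound):

```python
import numpy as np

def thresholded_readout(f_x, eta, eps, rng=None):
    """Classical sketch of the retrieval rule of Result 1 (Eq. 1):
    a pixel whose noisy estimate is at least eta is recovered up to
    additive error 2*eps; every other pixel is set to 0."""
    rng = np.random.default_rng(0) if rng is None else rng
    # simulate the amplitude-estimation error of the forward pass
    noisy = f_x + rng.uniform(-2 * eps, 2 * eps, size=f_x.shape)
    return np.where(noisy >= eta, noisy, 0.0)

f_x = np.array([0.9, 0.05, 0.6, 0.01])   # toy activations f(X^{l+1})
out = thresholded_readout(f_x, eta=0.1, eps=0.01)
```

+Pixels well above $\eta$ survive with a small perturbation, while pixels below the threshold are zeroed, which is the behaviour the running time of the theorem depends on.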
+
+Following the forward pass, a loss function $\mathcal{L}$ is computed for the output of a classical CNN. The backpropagation algorithm is then used to calculate, layer by layer, the gradient of this loss with respect to the elements of the kernels $K^{\ell}$, in order to update them through gradient descent. We state our quantum backpropagation algorithm next; the formal version of this result appears as Theorem E.1.
+
+# Result 2 (Quantum Backpropagation for Quantum CNN)
+
+Given the forward pass quantum algorithm in Result 1, and given the kernel matrix $F^{\ell}$ , input matrices $A^{\ell}$ and $Y^{\ell}$ , stored in the QRAM for each layer $\ell$ , and a loss function $\mathcal{L}$ , there is a quantum backpropagation algorithm that estimates each element of the gradient tensor $\frac{\partial\mathcal{L}}{\partial F^{\ell}}$ within additive error $\delta \left\| \frac{\partial\mathcal{L}}{\partial F^{\ell}} \right\|$ , and updates $F^{\ell}$ according to a gradient descent update rule. The running time of a single layer $\ell$ for quantum backpropagation is given by
+
+$$
+O\left(\left(\left(\mu(A^{\ell}) + \mu\left(\frac{\partial\mathcal{L}}{\partial Y^{\ell+1}}\right)\right) \kappa\left(\frac{\partial\mathcal{L}}{\partial F^{\ell}}\right) + \left(\mu\left(\frac{\partial\mathcal{L}}{\partial Y^{\ell+1}}\right) + \mu(F^{\ell})\right) \kappa\left(\frac{\partial\mathcal{L}}{\partial Y^{\ell}}\right)\right) \frac{\log(1/\delta)}{\delta^{2}}\right) \tag{2}
+$$
+
+where for a matrix $V \in \mathbb{R}^{n \times n}$ , $\kappa(V)$ is the condition number and $\mu(V) \leq \sqrt{n}$ is a matrix dependent parameter defined in Equation (5).
+
+For the quantum back-propagation algorithm, we introduce a quantum tomography algorithm with $\ell_{\infty}$ norm guarantees, that could be of independent interest. It is exponentially faster than tomography with $\ell_{2}$ norm guarantees and is given as Theorem G.1 in Section G. Numerical simulations on classifying the MNIST dataset show that our quantum CNN achieves a similar classification accuracy as the classical CNN.
+
+# 4 FORWARD PASS FOR QCNN
+
+The forward pass algorithm for the QCNN implements the quantum analog of a single quantum convolutional layer. It includes a convolution product between an input and a kernel, followed by
+
+the application of a non-linear function and pooling operations to prepare the next layer's input. We provide an overview of the main ideas of the algorithm here; the complete technical details are given in the Appendix (Section D).
+
+# Algorithm 1 QCNN Layer
+
+Require: Matrix $A^\ell$ representing input to layer $\ell$ and kernel matrix $F^\ell$ stored in QRAM. Precision parameters $\epsilon$ and $\eta$ , a boolean circuit for a non-linear function $f:\mathbb{R}\mapsto [0,C]$ .
+
+Ensure: Outputs the data matrix $A^{\ell + 1}$ for the next layer which is the result of the convolution between the input and the kernel, followed by a non linearity and pooling.
+
+# 1: Step 1: Quantum Convolution
+
+# 1.1: Inner product estimation
+
+Perform the following mapping, using QRAM queries on rows $A_{p}^{\ell}$ and columns $F_{q}^{\ell}$ , followed by quantum inner product estimation (Theorems F.2 and F.4) to implement the mapping $\frac{1}{K}\sum_{p,q}|p\rangle |q\rangle \mapsto \frac{1}{K}\sum_{p,q}|p\rangle |q\rangle |\overline{P}_{pq}\rangle |g_{pq}\rangle$
+
+where $\overline{P}_{pq}$ is $\epsilon$-close to $P_{pq} = \frac{1 + \langle A_p^\ell | F_q^\ell \rangle}{2}$, $K = \sqrt{H^{\ell+1} W^{\ell+1} D^{\ell+1}}$ is a normalisation factor, and $|g_{pq}\rangle$ is some garbage quantum state that can be ignored.
+
+# 1.2: Non linearity
+
+Use an arithmetic circuit and two QRAM queries to obtain $\overline{Y}^{\ell +1}$, an $\epsilon$-approximation of the convolution output $Y_{p,q}^{\ell +1} = \langle A_p^\ell, F_q^\ell \rangle$, and apply the non-linear function $f$ as a boolean circuit to obtain $\frac{1}{K}\sum_{p,q}|p\rangle |q\rangle |f(\overline{Y}_{p,q}^{\ell +1})\rangle |g_{pq}\rangle$.
+
+# 2: Step 2: Quantum Sampling
+
+Use Conditional Rotation and Amplitude Amplification to encode the values $\alpha_{pq}^{\prime} := \frac{f(\overline{Y}_{pq}^{\ell+1})}{C}$ into the amplitudes to obtain $\frac{1}{K} \sum_{p,q} \alpha_{pq}^{\prime} |p\rangle |q\rangle |f(\overline{Y}_{pq}^{\ell+1})\rangle |g_{pq}\rangle$. Perform $\ell_{\infty}$ tomography from Theorem G.1 with precision $\eta$, and obtain classically all positions and values $(p, q, f(\overline{Y}_{pq}^{\ell+1}))$ such that, with high probability, values above $\eta$ are known exactly, while others are set to 0.
+
+# 3: Step 3: QRAM Update and Pooling
+
+Update the QRAM for the next layer $A^{\ell + 1}$ while sampling. The implementation of pooling (Max, Average, etc.) can be done by a specific update to the QRAM data structure described in Section D.2.2.
+
+In this algorithm, we propose the first quantum algorithm for performing the convolution product. Our algorithm is based on the observation that the convolution product can be regarded as a matrix product between reshaped matrices. The reshaped input's rows $A_{p}^{\ell}$ and the reshaped kernel's columns $F_{q}^{\ell}$ are loaded as quantum states, in superposition. Then the entries of the convolution $\langle A_{p}^{\ell}|F_{q}^{\ell}\rangle$ are estimated using a simple quantum circuit for inner product estimation and stored in an auxiliary register as in Step 1.1 of Algorithm 1.
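+The reshaping observation above can be sketched classically with a minimal NumPy im2col (the helper name `im2col_conv` is ours; single channel and single kernel, for illustration only):

```python
import numpy as np

def im2col_conv(X, K):
    """Valid 2-D convolution (cross-correlation) computed as a matrix
    product between the reshaped input A (rows = flattened patches)
    and the reshaped kernel F (columns = flattened filters), mirroring
    the A_p, F_q view used by the quantum convolution."""
    H, W = X.shape
    h, w = K.shape
    Ho, Wo = H - h + 1, W - w + 1
    A = np.stack([X[i:i + h, j:j + w].ravel()
                  for i in range(Ho) for j in range(Wo)])  # (Ho*Wo, h*w)
    F = K.ravel()[:, None]                                 # (h*w, 1)
    return (A @ F).reshape(Ho, Wo)

rng = np.random.default_rng(1)
X, K = rng.standard_normal((5, 5)), rng.standard_normal((3, 3))
Y = im2col_conv(X, K)
```

+Each entry of the product $A F$ is exactly one inner product $\langle A_p, F_q \rangle$, which is what the quantum circuit estimates in superposition.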
+
+One of the difficulties in the design of quantum neural networks is that non-linear functions are hard to implement as unitary operations. We get around this difficulty by applying the non-linear function $f$ as a boolean circuit to the output of the quantum inner product estimation circuit in Step 1.2 of Algorithm 1. Most of the non-linear functions in the machine learning literature can be implemented using small boolean circuits; our algorithm thus allows for many possible choices of the non-linear function $f$ (see Appendix F.1 for details on non-linear boolean circuits in quantum circuits).
+
+Step 2 of Algorithm 1 develops a quantum importance sampling procedure wherein the pixels with high values of $f(\overline{Y}_{pq}^{\ell +1})$ are read out with higher probability. This is done by encoding these values into the amplitudes of the quantum state using the well-known Amplitude Amplification algorithm of Brassard et al. (2002). This kind of importance sampling is a task that can be performed easily in the quantum setting and has no direct classical analog. Although it does not lead to asymptotic improvements in the algorithm's running time, it could lead to improvements that are significant in practice.
+
+More precisely, when a quantum register in superposition is measured, only one of its values appears, with probability equal to the squared magnitude of its amplitude. This implies that the output pixels measured with the highest probability are the ones with the highest value $f(Y_{p,q}^{\ell +1})$. Once a pixel is measured, we read directly from the registers its position $p,q$ and its value. Thus we measure only a fraction of the quantum convolution product's output, and the set of measured pixels collects most of the meaningful information for the CNN, the other pixels being set to 0.
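+This sampling behaviour can be reproduced classically. The sketch below (the helper name `quantum_sampling` is ours, and a real run measures one pixel per shot rather than drawing a batch) draws a fraction of the pixels with probability proportional to the squared amplitude-encoded values and zeroes the rest:

```python
import numpy as np

def quantum_sampling(f_y, ratio, rng=None):
    """Sketch of the importance sampling of Step 2: a fraction `ratio`
    of the pixels is drawn without replacement with probability
    proportional to the squared values; sampled pixels are read
    exactly, unsampled pixels are set to 0."""
    rng = np.random.default_rng(42) if rng is None else rng
    flat = f_y.ravel()
    probs = flat ** 2 / np.sum(flat ** 2)  # P(p,q) = |alpha'_{pq}|^2
    n = max(1, int(ratio * flat.size))
    drawn = rng.choice(flat.size, size=n, replace=False, p=probs)
    out = np.zeros_like(flat)
    out[drawn] = flat[drawn]
    return out.reshape(f_y.shape)

f_y = np.array([[10.0, 0.1], [9.0, 0.2]])
sampled = quantum_sampling(f_y, ratio=0.5)
```

+With a sampling ratio of 0.5 on this toy tensor, the two large entries are overwhelmingly likely to be the ones retrieved.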
+
+After being measured, each pixel's value and position are stored in the QRAM to be used as the next layer's input. During this phase, it is possible to discard or aggregate some values to perform pooling operations, as described in Step 3 of Algorithm 1. The forward pass for the QCNN thus includes the convolution product, the non-linearity $f$ and the pooling operation, in time poly-logarithmic in the kernel's dimensions. In comparison, the classical CNN layer is linear in both the kernel and input dimensions.
+
+Note finally that the quantum importance sampling in Step 2 requires the non-linear function $f$ to be bounded by a parameter $C > 1$. In our experiments we use the capReLU function, a modified ReLU that becomes constant above $C$.
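+The capReLU is a one-line modification of the usual ReLU; a minimal NumPy sketch (the helper name `cap_relu` is ours):

```python
import numpy as np

def cap_relu(x, cap):
    """capReLU: a ReLU that saturates at the cap C, so that the
    activations stay in [0, C] as required by the amplitude
    encoding of Step 2."""
    return np.clip(x, 0.0, cap)

y = cap_relu(np.array([-1.0, 0.5, 3.0, 12.0]), cap=10.0)
```

+Negative inputs are zeroed as in a standard ReLU, while values above the cap $C$ are clipped to it.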
+
+# 5 QUANTUM BACK PROPAGATION ALGORITHM
+
+The second algorithm required for the QCNN is the quantum backpropagation algorithm, given as Algorithm 2. Like the classical backpropagation algorithm, it updates all kernel weights according to the derivatives of a given loss function $\mathcal{L}$. In this section, we explain the main ideas and compare it to the classical backpropagation algorithm; the complete details are given in the Appendix (Section E).
+
+# Algorithm 2 Quantum Backpropagation
+
+Require: Precision parameter $\delta$ . Data matrices $A^{\ell}$ and kernel matrices $F^{\ell}$ stored in QRAM for each layer $\ell$ .
+
+Ensure: Outputs gradient matrices $\frac{\partial\mathcal{L}}{\partial F^{\ell}}$ and $\frac{\partial\mathcal{L}}{\partial Y^{\ell}}$ for each layer $\ell$
+
+1: Calculate the gradient for the last layer $L$ using the outputs and the true labels: $\frac{\partial \mathcal{L}}{\partial Y^L}$
+2: for $\ell = L - 1, \dots, 0$ do
+3: Step 1 : Modify the gradient
+
+With $\frac{\partial\mathcal{L}}{\partial Y^{\ell + 1}}$ stored in QRAM, set some of its values to 0 to take into account the pooling, tomography and non-linearity that occurred in the forward pass of layer $\ell$. These values correspond to positions that were neither sampled nor selected during pooling, and hence have no impact on the final loss.
+
+4: Step 2 : Matrix-matrix multiplications
+
+With the modified values of $\frac{\partial\mathcal{L}}{\partial Y^{\ell + 1}}$, use the quantum linear algebra algorithm (Theorem F.7) to perform the matrix-matrix multiplications $(A^{\ell})^{T} \cdot \frac{\partial \mathcal{L}}{\partial Y^{\ell + 1}}$ and $\frac{\partial\mathcal{L}}{\partial Y^{\ell + 1}} \cdot (F^{\ell})^{T}$, obtaining quantum states corresponding to $\frac{\partial\mathcal{L}}{\partial F^{\ell}}$ and $\frac{\partial\mathcal{L}}{\partial Y^{\ell}}$.
+
+5: Step 3: $\ell_{\infty}$ tomography
+
+Measure the previous outputs, as in Algorithm 3. This allows us to estimate each entry of $\frac{\partial\mathcal{L}}{\partial F^{\ell}}$ and $\frac{\partial\mathcal{L}}{\partial Y^{\ell}}$ with errors $\delta \left\| \frac{\partial\mathcal{L}}{\partial F^{\ell}}\right\|$ and $\delta \left\| \frac{\partial\mathcal{L}}{\partial Y^{\ell}}\right\|$ respectively, using the $\ell_{\infty}$ tomography from Theorem G.1. Store all elements of $\frac{\partial\mathcal{L}}{\partial F^{\ell}}$ in QRAM.
+
+6: Step 4 : Gradient descent
+
+From the previous tomography, perform the gradient descent to update the values of $F^{\ell}$ in QRAM: $F_{s,q}^{\ell} \gets F_{s,q}^{\ell} - \lambda \left( \frac{\partial \mathcal{L}}{\partial F_{s,q}^{\ell}} \pm 2\delta \left\| \frac{\partial \mathcal{L}}{\partial F^{\ell}} \right\|_2 \right)$ .
+
+7: end for
+
+We now briefly detail the implementation of quantum backpropagation at layer $\ell$. The algorithm assumes that $\frac{\partial\mathcal{L}}{\partial Y^{\ell + 1}}$ is known. First, the backpropagation of the quantum convolution product is equivalent to the classical one, and we use the matrix-matrix multiplication formulation to obtain the derivatives $\frac{\partial\mathcal{L}}{\partial F^{\ell}}$ and $\frac{\partial\mathcal{L}}{\partial Y^{\ell}}$. The first is the desired result and the second is needed for layer $\ell - 1$. This matrix-matrix multiplication can be implemented as a quantum circuit by decomposing it into several matrix-vector multiplications, which are known to be efficient, with a running time depending on the ranks and Frobenius norms of the matrices. We obtain a quantum state corresponding to a superposition of all derivatives. We use again the $\ell_{\infty}$ tomography to retrieve each derivative with precision $\delta > 0$, such that for every kernel weight $F_{s,q}^{\ell}$ we obtain an approximation $\overline{\frac{\partial\mathcal{L}}{\partial F_{s,q}^{\ell}}}$ of its loss derivative with error bounded by $\left|\frac{\partial\mathcal{L}}{\partial F_{s,q}^{\ell}} - \overline{\frac{\partial\mathcal{L}}{\partial F_{s,q}^{\ell}}}\right| \leq 2\delta \left\| \frac{\partial\mathcal{L}}{\partial F^{\ell}} \right\|_2$. This implies that the gradient descent rule is perturbed by at most $2\delta \left\| \frac{\partial\mathcal{L}}{\partial F^{\ell}} \right\|_2$; see Appendix (Section E.4).
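+The classical counterpart of these two matrix products can be written in a few lines (the helper name `conv_backprop_matrices` is ours; the quantum algorithm prepares states proportional to the same matrices before the $\ell_{\infty}$ tomography):

```python
import numpy as np

def conv_backprop_matrices(A, F, dL_dY_next):
    """With the convolution written as Y^{l+1} = A^l @ F^l, both
    gradients of Step 2 of Algorithm 2 are plain matrix products."""
    dL_dF = A.T @ dL_dY_next   # gradient w.r.t. the reshaped kernel F^l
    dL_dY = dL_dY_next @ F.T   # gradient propagated to layer l-1
    return dL_dF, dL_dY

A = np.array([[1.0, 2.0], [3.0, 4.0]])    # reshaped input A^l
F = np.array([[1.0], [0.0]])              # reshaped kernel F^l
G = np.array([[1.0], [1.0]])              # stands for dL/dY^{l+1}
dF, dY = conv_backprop_matrices(A, F, G)
```

+On this toy example, $(A^{\ell})^T G$ gives the kernel gradient and $G (F^{\ell})^T$ the gradient passed to the previous layer.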
+
+We also take into account the effects of the quantum non-linearity, quantum measurement and pooling. The quantum pooling operation is equivalent to the classical one, where pixels that were not selected during pooling have their derivative set to 0. Quantum measurement is similar, since pixels that haven't been measured don't contribute to the gradient. For the non-linearity, as in the classical case, pixels with negative values were set to zero and hence should not contribute to the gradient. Additionally, because we use the capReLU function, pixels above the threshold $C$ must also have null derivatives. These two rules can be implemented by combining them with the measurement rules, a difference compared to classical backpropagation; see Appendix (Section E.2.2) for details.
+
+# 6 NUMERICAL SIMULATIONS
+
+As described above, adapting CNNs to the quantum setting implies some modifications that could alter the efficiency of the learning or classification phases. We now present experiments showing that such modified CNNs can converge correctly, like the original ones.
+
+The experiment, using the PyTorch library developed by Paszke et al. (2017), consists of classically training a small convolutional neural network to which we have added a "quantum" sampling after each convolution. Instead of parametrising it with the precision $\eta$, we have chosen to use the sampling ratio $\sigma$, which represents the fraction of pixels drawn during tomography. These two definitions are equivalent, as shown in the Appendix (Section D.1.5), but the second is more intuitive with regard to the running time and the simulations.
+
+We also add noise simulating the amplitude estimation (parameter $\epsilon$), followed by a capReLU instead of the usual ReLU (parameter $C$), and noise during the backpropagation (parameter $\delta$). In the following results, we observe that our quantum CNN is able to learn and classify visual data from the widely used MNIST dataset. This dataset consists of 60,000 training images and 10,000 testing images of handwritten digits. Each image is 28×28 grayscale pixels with values between 0 and 255 (8-bit encoding), before normalization.
+
+Let us first observe the "quantum" effects on an image from the dataset: in particular, the effect of the capped non-linearity, the introduction of noise, and the quantum sampling.
+
+We now present the full simulation of our quantum CNN. In the following, we use a simple network made of 2 convolution layers and compare our quantum CNN to the classical one. The first and second layers are made of 5 and 10 kernels respectively, both of size $7 \times 7$. A three-layer fully connected network is applied at the end, and a softmax activation function is applied on the last layer to predict the outcome over 10 classes (the ten possible digits). Note that we didn't introduce pooling, as it is equivalent between the quantum and classical algorithms and did not improve the results of our CNN. The objective of the learning phase is to minimize the loss function, defined as the negative log likelihood of the classification on the training set. The optimizer used was PyTorch's built-in Stochastic Gradient Descent.
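+A minimal PyTorch sketch of this architecture follows (the class name and the hidden widths 128 and 64 of the fully connected head are our assumptions; the text only specifies two convolution layers of 5 and 10 kernels of size $7 \times 7$, a three-layer fully connected network, and a softmax output over 10 classes):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallQCNNBaseline(nn.Module):
    """Classical skeleton of the Section 6 network: two convolution
    layers (no pooling) followed by a three-layer fully connected
    head with a log-softmax output over the 10 digit classes."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 5, kernel_size=7)    # 28x28 -> 22x22
        self.conv2 = nn.Conv2d(5, 10, kernel_size=7)   # 22x22 -> 16x16
        self.fc1 = nn.Linear(10 * 16 * 16, 128)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return F.log_softmax(self.fc3(x), dim=1)  # pairs with NLL loss

out = SmallQCNNBaseline()(torch.zeros(2, 1, 28, 28))
```

+In the quantum version, each `F.relu` would be replaced by a capReLU and each convolution output passed through the noisy sampling described below.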
+
+Using PyTorch, we have been able to implement the following quantum effects (the first three points are shown in Figure 1):
+
+- The addition of noise, to simulate the approximation of amplitude estimation during the forward quantum convolution layer, by adding Gaussian noise centered at 0 with standard deviation $2M\epsilon$, where $M = \max_{p,q}\|A_p\|\|F_q\|$.
+- A modification of the non-linearity: a ReLU function which is constant above the cap value $C$.
+- A sampling procedure to apply on a tensor with a probability distribution proportional to the tensor itself, reproducing the quantum sampling with ratio $\sigma$ .
+- The addition of noise during the gradient descent, to simulate the quantum backpropagation, by adding Gaussian noise centered at 0 with standard deviation $\delta$ multiplied by the norm of the gradient, as given by Equation (28).
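+The last point can be sketched as a perturbed gradient descent step (the helper name `noisy_gradient_step` is ours, and the exact Gaussian noise model is our reading of the simulation, not a quotation of the paper's code):

```python
import numpy as np

def noisy_gradient_step(F, grad, lr, delta, rng=None):
    """Gradient descent where each gradient entry is perturbed by
    Gaussian noise of standard deviation delta * ||grad||, mimicking
    the l_inf tomography error of the quantum backpropagation."""
    rng = np.random.default_rng(7) if rng is None else rng
    noise = rng.normal(0.0, delta, size=grad.shape) * np.linalg.norm(grad)
    return F - lr * (grad + noise)

F = np.ones((2, 2))
exact = noisy_gradient_step(F, np.ones((2, 2)), lr=0.1, delta=0.0)
```

+With $\delta = 0$ the update reduces to the exact classical step, which is a convenient sanity check when running the simulation.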
+
+
+Figure 1: Effects of the QCNN on a $28 \times 28$ input image. From left to right: original image; image after applying a capReLU activation function with a cap $C$ of 2.0; introduction of a strong noise during amplitude estimation with $\epsilon = 0.5$; quantum sampling with ratio $\sigma = 0.4$, which samples the highest values in priority. The useful information tends to be conserved in this example. The gray scale on the side indicates the value of each pixel. Note that during a QCNN layer a convolution is supposed to happen before the last image, but we chose not to perform it for visualisation purposes.
+
+The CNN used for this simulation may seem "small" compared to standards such as AlexNet developed by Krizhevsky et al. (2012) or VGG-16 by Simonyan & Zisserman (2014), or those used in industry. However, simulating even this small QCNN on a classical computer was already very computationally intensive and time consuming, due to the "quantum" sampling task, which is apparently not well suited to a classical implementation in PyTorch. Every single training curve shown in Figure 9 could take 4 to 8 hours to produce, hence adding more convolutional layers wasn't practical. Similarly, we didn't compute the loss on the whole testing set (10,000 images) during training to plot the testing curve. However, we computed the test losses and accuracies once the model was trained (see Table 4), in order to detect potential overfitting.
+
+We now present the results of the training phase for a quantum version of this CNN, where partial quantum sampling is applied, for different sampling ratios (the fraction of samples taken from the resulting convolution). Since the quantum sampling gives higher probability to observing high-value pixels, we expect to be able to learn correctly even with small ratios $(\sigma \leq 0.5)$. We compare these training curves to the classical one. The learning was done over two epochs, meaning that the whole dataset is used twice. The following plots show the evolution of the loss $\mathcal{L}$ over the iterations on batches. This is the standard indicator of good convergence of a neural network's learning phase. We can compare the evolution of the loss between a classical CNN and our QCNN for different parameters. Most results are presented in the Appendix (Section H).
+
+Our simulations show that the QCNN is able to learn despite the introduction of noise, tensor sampling and other modifications. In particular, they show that only a fraction of the information is meaningful for the neural network, and that the quantum algorithm captures this information in priority. This learning can be more or less efficient depending on the choice of the key parameters. For reasonable values of these parameters, the QCNN is able to converge during the training phase. It can then classify correctly on both the training and testing sets, indicating neither overfitting nor underfitting.
+
+We notice that the learning curves sometimes present a late start before convergence begins, in particular for small sampling ratios. This late start may be due to the random initialization of the kernel weights, which performs a meaningless convolution, a case where quantum sampling of the output is of no interest. However, it is very interesting to see that, despite this late start, the kernels start converging once they have found a good combination.
+
+Overall, it is possible that the QCNN presents some behaviors that have no classical equivalence. Understanding their potential effects, positive or negative, is an open question, all the more so as the effects of the classical CNN's hyperparameters are already a topic of active research, see the work of Samek et al. (2017) for details. Note also that the neural network used in this simulation is
+
+
+Figure 2: Training curves comparing the classical CNN and the Quantum CNN (QCNN) for $\epsilon = 0.01$, $C = 10$, $\delta = 0.01$ and sampling ratios $\sigma$ from 0.1 to 0.5. We can observe a learning phase similar to the classical one, even for a weak sampling of $20\%$ or $30\%$ of each convolution output, which tends to show that the meaningful information is distributed only at certain locations of the images, consistent with the purpose of the convolution layer. Even for a very low sampling ratio of $10\%$, we observe convergence despite a late start.
+
+rather small. A follow-up experiment would be to simulate a quantum version of a standard deeper CNN (AlexNet or VGG-16), possibly on a more complex dataset, such as CIFAR-10 developed by Krizhevsky & Hinton (2009) or Fashion-MNIST by Xiao et al. (2017).
+
+# 7 CONCLUSIONS
+
+We have presented a quantum algorithm for evaluating and training convolutional neural networks (CNNs). At the core of this algorithm, we have developed a novel quantum algorithm for computing a convolution product between two tensors, with a substantial speedup. This technique could be reused in other signal processing tasks that could benefit from enhancement by a quantum computer. Layer by layer, convolutional neural networks process and extract meaningful information. Following this idea of learning the most important features first, we have proposed a new approach to quantum tomography where the most meaningful information is sampled with higher probability, hence reducing the complexity of our algorithm.
+
+Our QCNN is complete in the sense that almost all classical architectures can be implemented in a quantum fashion: any (non-negative and upper-bounded) non-linearity, pooling, number of layers and kernel size are available. Our circuit is shallow and could be run on relatively small quantum computers: one could repeat the main loop many times on the same shallow circuit, since performing the convolution product is simple and is similar for all layers. The pooling and non-linearity are included in the loop. Our building-block approach, layer by layer, allows high modularity, and can be combined with work on quantum feedforward neural networks developed by Allcock et al. (2018).
+
+The running time presents a speedup compared to the classical algorithm, due to fast linear algebra when computing the convolution product and to sampling only the important values from the resulting quantum state. This speedup can be highly significant in cases where the number of channels $D^{\ell}$ in the input tensor is high (high-dimensional time series, video sequences, game play) or when the number of kernels $D^{\ell + 1}$ is large, allowing deep architectures for CNNs, as was the case in the recent breakthrough of DeepMind's AlphaGo algorithm of Silver et al. (2016). The quantum CNN also allows larger kernels, which could be used for larger input images, since the size of the kernels must be a constant fraction of the input in order to recognize patterns. However, despite our new techniques to reduce the complexity, applying a non-linearity and reusing the result of a layer for the next layer make register encoding and state tomography mandatory, hence preventing an exponential speedup in the number of input parameters.
+
+Finally, we have presented a backpropagation algorithm that can also be implemented as a quantum circuit. The numerical simulations on a small CNN show that, despite the introduction of noise and sampling, the QCNN can efficiently learn to classify visual data from the MNIST dataset, achieving a classification accuracy similar to that of the classical CNN.
+
+# REFERENCES
+
+Jonathan Allcock, Chang-Yu Hsieh, Jordanis Kerenidis, and Shengyu Zhang. Quantum algorithms for feedforward neural networks. arXiv preprint arXiv:1812.03089, 2018.
+Kerstin Beer, Dmytro Bondarenko, Terry Farrelly, Tobias J Osborne, Robert Salzmann, and Ramona Wolf. Efficient learning for deep quantum neural networks. arXiv preprint arXiv:1902.10445, 2019.
+Mariusz Bojarski, Anna Choromanska, Krzysztof Choromanski, Bernhard Firner, Larry Jackel, Urs Muller, and Karol Zieba. Visualbackprop: efficient visualization of cnns. arXiv preprint arXiv:1611.05418, 2016.
+Gilles Brassard, Peter Hoyer, Michele Mosca, and Alain Tapp. Quantum amplitude amplification and estimation. Contemporary Mathematics, 305:53-74, 2002.
+Shantanav Chakraborty, András Gilyén, and Stacey Jeffery. The power of block-encoded matrix powers: improved regression techniques via faster Hamiltonian simulation. arXiv preprint arXiv:1804.01973, 2018.
+Iris Cong, Soonwon Choi, and Mikhail D Lukin. Quantum convolutional neural networks. arXiv preprint arXiv:1810.03787, 2018.
+Edward Farhi and Hartmut Neven. Classification with quantum neural networks on near term processors. arXiv preprint arXiv:1802.06002, 2018.
+Daniel George and EA Huerta. Deep learning for real-time gravitational wave detection and parameter estimation: Results with advanced ligo data. Physics Letters B, 778:64-70, 2018.
+Maxwell Henderson, Samriddhi Shakya, Shashindra Pradhan, and Tristan Cook. Quanvolutional neural networks: Powering image recognition with quantum circuits. arXiv preprint arXiv:1904.04767, 2019.
+Iordanis Kerenidis and Anupam Prakash. Quantum recommendation systems. Proceedings of the 8th Innovations in Theoretical Computer Science Conference, 2017a.
+Iordanis Kerenidis and Anupam Prakash. Quantum gradient descent for linear systems and least squares. arXiv:1704.04992, 2017b.
+Iordanis Kerenidis and Anupam Prakash. A quantum interior point method for LPs and SDPs. arXiv:1808.09266, 2018.
+Iordanis Kerenidis, Jonas Landman, Alessandro Luongo, and Anupam Prakash. q-means: A quantum algorithm for unsupervised machine learning. Neural Information Processing systems (NeurIPS), 2019.
+Nathan Killoran, Thomas R Bromley, Juan Miguel Arrazola, Maria Schuld, Nicolas Quesada, and Seth Lloyd. Continuous-variable quantum neural networks. arXiv preprint arXiv:1806.06871, 2018.
+Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
+Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012.
+Yann LeCun, L Bottou, Yoshua Bengio, and P Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
+Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum algorithms for supervised and unsupervised machine learning. arXiv, 1307.0411:1-11, 7 2013. URL http://arxiv.org/abs/1307.0411.
+
+Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum principal component analysis. Nature Physics, 10(9):631, 2014.
+Michael A Nielsen and Isaac Chuang. Quantum computation and quantum information, 2002a.
+Michael A Nielsen and Isaac Chuang. Quantum computation and quantum information, 2002b.
+Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.
+Patrick Rebentrost, Thomas R Bromley, Christian Weedbrook, and Seth Lloyd. Quantum hopfield neural network. Physical Review A, 98(4):042308, 2018.
+Wojciech Samek, Thomas Wiegand, and Klaus-Robert Müller. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296, 2017.
+Maria Schuld, Ilya Sinayskiy, and Francesco Petruccione. The quest for a quantum neural network. Quantum Information Processing, 13(11):2567-2586, 2014.
+David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484, 2016.
+Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
+Nathan Wiebe, Ashish Kapoor, and Krysta M Svore. Quantum Algorithms for Nearest-Neighbor Methods for Supervised and Unsupervised Learning. arXiv:1401.2142v2, 2014a. URL https://arxiv.org/pdf/1401.2142.pdf.
+Nathan Wiebe, Ashish Kapoor, and Krysta M Svore. Quantum deep learning. arXiv preprint arXiv:1412.3489, 2014b.
+J Wu. Introduction to convolutional neural networks. https://pdfs.semanticscholar.org/450c/a19932fcef1ca6d0442cbf52fec38fb9d1e5.pdf, 2017.
+Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
+
+# APPENDIX A VARIABLE SUMMARY
+
+We recall the most important variables for layer $\ell$ . They represent tensors, their approximations, and their reshaped versions.
+
+| Data | Variable | Dimensions | Indices |
+| --- | --- | --- | --- |
+| Input | $X^{\ell}$ | $H^{\ell} \times W^{\ell} \times D^{\ell}$ | $(i^{\ell}, j^{\ell}, d^{\ell})$ |
+| | $Y^{\ell}$ | $(H^{\ell}W^{\ell}) \times D^{\ell}$ | - |
+| | $A^{\ell}$ | $(H^{\ell+1}W^{\ell+1}) \times (HWD^{\ell})$ | $(p,r)$ |
+| Kernel | $K^{\ell}$ | $H \times W \times D^{\ell} \times D^{\ell+1}$ | $(i,j,d,d')$ |
+| | $F^{\ell}$ | $(HWD^{\ell}) \times D^{\ell+1}$ | $(s,q)$ |
+
+Table 1: Summary of input variables for the $\ell^{th}$ layer, along with their meaning, dimensions and corresponding notations. These variables are common to both the quantum and classical algorithms. We have omitted the indices for $Y^{\ell}$, which don't appear in our work.
+
+| Data | Variable | Dimensions | Indices |
+| --- | --- | --- | --- |
+| Output of Quantum Convolution | $f(\overline{Y}^{\ell+1})$ | $(H^{\ell+1}W^{\ell+1}) \times D^{\ell+1}$ | $(p,q)$ |
+| | $f(\overline{X}^{\ell+1})$ | $H^{\ell+1} \times W^{\ell+1} \times D^{\ell+1}$ | $(i^{\ell+1}, j^{\ell+1}, d^{\ell+1})$ |
+| Output of Quantum Tomography | $\mathcal{X}^{\ell+1}$ | $H^{\ell+1} \times W^{\ell+1} \times D^{\ell+1}$ | $(i^{\ell+1}, j^{\ell+1}, d^{\ell+1})$ |
+| Output of Quantum Pooling | $\tilde{\mathcal{X}}^{\ell+1}$ | $\frac{H^{\ell+1}}{P} \times \frac{W^{\ell+1}}{P} \times D^{\ell+1}$ | $(i^{\ell+1}, j^{\ell+1}, d^{\ell+1})$ |
+
+Table 2: Summary of variables describing the outputs of layer $\ell$, with the quantum algorithm.
+
+| Data | Variable | Dimensions | Indices |
+| --- | --- | --- | --- |
+| Output of Classical Convolution | $f(Y^{\ell+1})$ | $(H^{\ell+1}W^{\ell+1}) \times D^{\ell+1}$ | $(p,q)$ |
+| | $f(X^{\ell+1})$ | $H^{\ell+1} \times W^{\ell+1} \times D^{\ell+1}$ | $(i^{\ell+1}, j^{\ell+1}, d^{\ell+1})$ |
+| Output of Classical Pooling | $\tilde{X}^{\ell+1}$ | $\frac{H^{\ell+1}}{P} \times \frac{W^{\ell+1}}{P} \times D^{\ell+1}$ | $(i^{\ell+1}, j^{\ell+1}, d^{\ell+1})$ |
+
+Table 3: Summary of variables describing the outputs of layer $\ell$, with the classical algorithm.
+
+Classical and quantum algorithms can be compared with these two diagrams:
+
+$$
+\left\{\begin{array}{l}\text{Quantum convolution layer}: X^{\ell} \rightarrow |\bar{X}^{\ell+1}\rangle \rightarrow |f(\bar{X}^{\ell+1})\rangle \rightarrow \mathcal{X}^{\ell+1} \rightarrow \tilde{\mathcal{X}}^{\ell+1} \\ \text{Classical convolution layer}: X^{\ell} \rightarrow X^{\ell+1} \rightarrow f(X^{\ell+1}) \rightarrow \tilde{X}^{\ell+1}\end{array}\right. \tag{3}
+$$
+
+Finally, we provide some remarks to clarify possible notational ambiguities:
+
+- Formally, the output of the quantum algorithm is $\tilde{\mathcal{X}}^{\ell +1}$ . It is used as input for the next layer $\ell +1$ . But we consider that all variables' names are reset when starting a new layer: $X^{\ell +1}\gets \tilde{\mathcal{X}}^{\ell +1}$
+- For simplicity, we have sometimes replaced the indices $(i^{\ell +1},j^{\ell +1},d^{\ell +1})$ by $n$ to index the elements of the output.
+- In Section D.2.2, the input for layer $\ell + 1$ is stored as $A^{\ell + 1}$ , for which the elements are indexed by $(p', r')$ .
+
+# APPENDIX B PRELIMINARIES IN QUANTUM INFORMATION
+
+We introduce a basic and broad-audience quantum information background necessary for this work. For a more detailed introduction we recommend Nielsen & Chuang (2002a).
+
+# B.1 QUANTUM INFORMATION
+
+Quantum Bits and Quantum Registers: The bit is the most basic unit of classical information. It can be either in state 0 or 1. Similarly a quantum bit or qubit, is a quantum system that can be in state $|0\rangle$ , $|1\rangle$ (the braket notation $|\cdot \rangle$ is a reminder that the bit considered is a quantum system) or in superposition of both states $\alpha |0\rangle + \beta |1\rangle$ with coefficients $\alpha, \beta \in \mathbb{C}$ such that $|\alpha|^2 + |\beta|^2 = 1$ . The amplitudes $\alpha$ and $\beta$ are linked to the probabilities of observing either 0 or 1 when measuring the qubit, since $P(0) = |\alpha|^2$ and $P(1) = |\beta|^2$ .
+
+Before the measurement, any superposition is possible, which gives quantum information special abilities in terms of computation. With $n$ qubits, the $2^n$ possible binary combinations can exist simultaneously, each with a specific amplitude. For instance, we can consider the uniform superposition $\frac{1}{\sqrt{2^n}} \sum_{i=0}^{2^n-1} |i\rangle$ , where $|i\rangle$ represents the $i^{th}$ binary combination (e.g. $|01 \cdots 1001\rangle$ ). Multiple qubits together are often called a quantum register.
+
+In its most general formulation, a quantum state of $n$ qubits can be seen as a vector in a complex Hilbert space of dimension $2^n$ . This vector must be normalized under the $\ell_2$ -norm, to guarantee that the squared amplitudes sum to 1.
+
+Quantum Computation: To process qubits and therefore quantum registers, we use quantum gates. These gates are unitary operators on the Hilbert space, as they must map unit-norm vectors to unit-norm vectors. Formally, we can see a quantum gate acting on $n$ qubits as a matrix $U \in \mathbb{C}^{2^n \times 2^n}$ such that $U U^{\dagger} = U^{\dagger} U = I$ , where $U^{\dagger}$ is the conjugate transpose of $U$ . Some basic single-qubit gates include the NOT gate $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ that swaps $|0\rangle$ and $|1\rangle$ , or the Hadamard gate $\frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$ that maps $|0\rangle \mapsto \frac{1}{\sqrt{2}} (|0\rangle + |1\rangle)$ and $|1\rangle \mapsto \frac{1}{\sqrt{2}} (|0\rangle - |1\rangle)$ , creating a quantum superposition.
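These gates can be checked numerically as small matrices (an illustrative NumPy sketch; the variable names are ours, not from the paper):

```python
import numpy as np

# The NOT and Hadamard gates as explicit 2 x 2 unitaries, acting on the
# basis states |0> = (1, 0) and |1> = (0, 1). (Variable names are ours.)
NOT = np.array([[0, 1],
                [1, 0]], dtype=complex)
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Unitarity U U^dagger = I guarantees unit-norm states stay unit-norm.
assert np.allclose(NOT @ NOT.conj().T, np.eye(2))
assert np.allclose(H @ H.conj().T, np.eye(2))

# NOT swaps the basis states; H creates the uniform superposition.
assert np.allclose(NOT @ ket0, ket1)
assert np.allclose(H @ ket0, (ket0 + ket1) / np.sqrt(2))
```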
+
+Finally, multi-qubit gates exist, such as the Controlled-NOT, which applies a NOT gate on a target qubit conditioned on the state of a control qubit.
+
+The main advantage of quantum gates is their ability to be applied to a superposition of inputs. Indeed, given a gate $U$ such that $U|x\rangle \mapsto |f(x)\rangle$ , we can apply it to all possible combinations of $x$ at once: $U\left(\frac{1}{C}\sum_{x}|x\rangle\right) \mapsto \frac{1}{C}\sum_{x}|f(x)\rangle$ , where $C$ is a normalization factor.
+
+We now state some primitive quantum circuits that we will use in our algorithm. For two integers $i$ and $j$ , we can check their equality with the mapping $|i\rangle |j\rangle |0\rangle \mapsto |i\rangle |j\rangle |[i = j]\rangle$ . For two real-valued numbers $a > 0$ and $\delta > 0$ , we can compare them using $|a\rangle |\delta\rangle |0\rangle \mapsto |a\rangle |\delta\rangle |[a\leq \delta]\rangle$ . Finally, for a real-valued number $a > 0$ , we can obtain its square $|a\rangle |0\rangle \mapsto |a\rangle |a^2\rangle$ . Note that these circuits are essentially reversible versions of the classical ones, and their size is linear in the number of qubits used to encode the input values.
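These reversible circuits can be illustrated classically (a sketch in Python; `equality_oracle` is our name): XOR-ing the result into an ancilla makes the map a bijection on basis states, and hence compatible with a unitary implementation.

```python
# Sketch (ours): the equality test |i>|j>|b> -> |i>|j>|b XOR [i = j]>
# viewed as a map on classical basis states.
def equality_oracle(i, j, b):
    return i, j, b ^ int(i == j)

# The map is a bijection on basis states (so it extends to a unitary) ...
states = [(i, j, b) for i in range(8) for j in range(8) for b in range(2)]
images = [equality_oracle(*s) for s in states]
assert sorted(images) == sorted(states)
# ... and, like all XOR-into-ancilla oracles, it is its own inverse.
assert all(equality_oracle(*equality_oracle(*s)) == s for s in states)
```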
+
+Any function computed by a classical boolean circuit can be implemented as a quantum unitary, even though this seems at first contradictory with the requirements of unitaries (reversibility, linearity). Let $\sigma : \mathbb{R} \mapsto \mathbb{R}$ be a classical function; we define $U_{\sigma}$ as the unitary that acts as $U_{\sigma}|x\rangle |0\rangle \mapsto |x\rangle |\sigma(x)\rangle$ . By using a second quantum register to encode the result of the function, the properties of quantum unitaries are respected.
+
+# B.2 QUANTUM SUBROUTINES FOR DATA ENCODING
+
+Knowing some basic principles of quantum information, the next step is to understand how data can be efficiently encoded using quantum states. While several approaches exist, we present the most common one, called amplitude encoding, which leads to interesting and efficient applications.
+
+Let $x \in \mathbb{R}^d$ be a vector with components $(x_0, \dots, x_{d-1})$ . Using only $\lceil \log(d) \rceil$ qubits, we can form $|x\rangle$ , the quantum state encoding $x$ , given by $|x\rangle = \frac{1}{\|x\|} \sum_{j=0}^{d-1} x_j |j\rangle$ . We see that the $j^{th}$ component $x_j$ becomes the amplitude of $|j\rangle$ , the $j^{th}$ binary combination (or equivalently the $j^{th}$ vector of the standard basis). Each amplitude is divided by $\|x\|$ to preserve the unit $\ell_2$ -norm of $|x\rangle$ .
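As an illustration, amplitude encoding can be simulated classically (a NumPy sketch; `amplitude_encode` is our name, not part of the paper's algorithm): the vector is normalized and zero-padded to the next power of two.

```python
import numpy as np

# Illustrative classical simulation of amplitude encoding: x in R^d
# becomes the amplitudes of a ceil(log2(d))-qubit state, normalized
# and zero-padded to length 2^n. (Function name is ours.)
def amplitude_encode(x):
    x = np.asarray(x, dtype=float)
    n_qubits = max(1, int(np.ceil(np.log2(len(x)))))
    state = np.zeros(2 ** n_qubits)
    state[: len(x)] = x / np.linalg.norm(x)
    return n_qubits, state

n, psi = amplitude_encode([3.0, 4.0])        # d = 2 needs a single qubit
assert n == 1
assert np.allclose(psi, [0.6, 0.8])          # amplitudes x_j / ||x||
assert np.isclose(np.sum(psi ** 2), 1.0)     # unit l2-norm
```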
+
+Similarly, for a matrix $A \in \mathbb{R}^{n \times d}$ , or equivalently for $n$ vectors $A_i$ with $i \in [n]$ , we can express each row of $A$ as $|A_i\rangle = \frac{1}{\|A_i\|}\sum_{j=0}^{d-1}A_{ij}|j\rangle$ .
+
+We can now state an important definition: the ability to have quantum access to a matrix. This will be a requirement for many algorithms.
+
+# Definition 1 [Quantum Access to Data]
+
+We say that we have quantum access to a matrix $A \in \mathbb{R}^{n \times d}$ if there exists a procedure to perform the following mappings, for $i \in [n]$ , in time $T$ :
+
+- $|i\rangle \left|0\right\rangle \mapsto \left| i\right\rangle \left| {A}_{i}\right\rangle$
+- $\left|0\right\rangle \mapsto \frac{1}{\left\|A\right\|_F}\sum_i\left\| A_i\right\|\left|i\right\rangle$
+
+By using appropriate data structures, the first mapping can be reduced to the ability to perform a mapping of the form $|i\rangle |j\rangle |0\rangle \mapsto |i\rangle |j\rangle |A_{ij}\rangle$ . The second requirement can be replaced by the ability to perform $|i\rangle |0\rangle \mapsto |i\rangle \left|\|A_i\|\right\rangle$ , or simply by knowledge of each norm. Therefore, using matrices whose rows $A_{i}$ all have the same norm makes it simpler to obtain quantum access.
+
+The time or complexity $T$ necessary for quantum access can be reduced to a polylogarithmic dependence in $n$ and $d$ if we assume access to a Quantum Memory, or QRAM. The QRAM (Kerenidis & Prakash, 2017a) is a specific data structure from which a quantum circuit can provide quantum access to data in time $O(\log (nd))$ .
+
+Theorem B.1 (QRAM data structure, see Kerenidis & Prakash (2017a)) Let $A \in \mathbb{R}^{n \times d}$ , there is a data structure to store the rows of $A$ such that,
+
+1. The time to insert, update or delete a single entry $A_{ij}$ is $O(\log^2 (n))$ .
+2. A quantum algorithm with access to the data structure can perform the following unitaries in time $T = O(\log^2 n)$ .
+
+(a) $|i\rangle |0\rangle \rightarrow |i\rangle |A_i\rangle$ for $i\in [n]$ .
+(b) $|0\rangle \rightarrow \frac{1}{\|A\|_F}\sum_{i\in [n]}\| A_i\| |i\rangle$ .
+
+We now state important methods for processing quantum information. Their goal is to store some information alternatively in the quantum state's amplitudes or in a quantum register as a bitstring.
+
+Theorem B.2 [Amplitude Amplification and Estimation Brassard et al. (2002)] Given a unitary operator $U$ such that $U:|0\rangle \mapsto \sqrt{p} |y\rangle |0\rangle +\sqrt{1 - p} |y^{\perp}\rangle |1\rangle$ in time $T$ , where $p > 0$ is the probability of measuring "0", it is possible to obtain the state $|y\rangle |0\rangle$ using $O(\frac{T}{\sqrt{p}})$ queries to $U$ , or to estimate $p$ with relative error $\delta$ using $O(\frac{T}{\delta\sqrt{p}})$ queries to $U$ .
+
+Theorem B.3 [Conditional Rotation] Given the quantum state $|a\rangle$ , with $a\in [-1,1]$ , it is possible to perform $|a\rangle |0\rangle \mapsto |a\rangle (a|0\rangle +\sqrt{1 - a^2} |1\rangle)$ with complexity $\widetilde{O}(1)$ .
+
+Using Theorem B.3 followed by Theorem B.2, it is then possible to transform the state $\frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}|x_j\rangle$ into $\frac{1}{\|x\|}\sum_{j=0}^{d-1}x_j|x_j\rangle$ .
+
+In addition to amplitude estimation, we will make use of a tool developed in Wiebe et al. (2014a) to boost the probability of obtaining a good estimate of the inner product required for the quantum convolution algorithm. At a high level, we take multiple copies of the estimator from the amplitude estimation procedure, compute their median, and reverse the circuit to get rid of the garbage. Here we provide a theorem with respect to time rather than query complexity.
+
+Theorem B.4 (Median Evaluation, see Wiebe et al. (2014a)) Let $\mathcal{U}$ be a unitary operation that maps
+
+$$
+\mathcal {U}: | 0 ^ {\otimes n} \rangle \mapsto \sqrt {a} | x, 1 \rangle + \sqrt {1 - a} | G, 0 \rangle
+$$
+
+for some $1/2 < a \leq 1$ in time $T$ . Then there exists a quantum algorithm that, for any $\Delta > 0$ and for any $1/2 < a_0 \leq a$ , produces a state $|\Psi\rangle$ such that $\| |\Psi\rangle - |0^{\otimes nL}\rangle |x\rangle \| \leq \sqrt{2\Delta}$ for some integer
+
+$L$ , in time
+
+$$
+2 T \left\lceil \frac {\ln (1 / \Delta)}{2 \left(| a _ {0} | - \frac {1}{2}\right) ^ {2}} \right\rceil .
+$$
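The effect of the median evaluation can be illustrated with a classical Monte Carlo sketch (ours, not from the paper; we assume a per-run success probability of $0.75$ for illustration): taking the median of $L$ independent runs drives the failure probability down exponentially in $L$.

```python
import numpy as np

# Classical Monte Carlo sketch of the median trick (ours, not from the
# paper). Assumption: each run of the estimator returns the correct value
# with probability a > 1/2; the median of L runs is wrong only if at
# least half of the runs fail, which is exponentially unlikely in L.
rng = np.random.default_rng(3)
a = 0.75                                  # assumed per-run success probability
true_val, bad_val = 1.0, 0.0

def median_estimate(L):
    runs = np.where(rng.random(L) < a, true_val, bad_val)
    return np.median(runs)

# With L = 21 runs, the empirical failure rate drops well below 1 - a = 0.25.
failures = sum(median_estimate(21) != true_val for _ in range(2000))
assert failures / 2000 < 0.05
```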
+
+# B.3 QUANTUM SUBROUTINES FOR LINEAR ALGEBRA
+
+In recent years, as the field of quantum machine learning has grown, its "toolkit" of linear algebra subroutines has become rich enough to allow the development of many quantum machine learning algorithms. We introduce here the subroutines important for this work, without detailing the circuits or the algorithms.
+
+Definition 2 For a matrix $A$ , the parameter $\mu(A)$ is defined by $\mu(A) = \min_{p \in [0,1]} \left( \| A \|_F, \sqrt{s_{2p}(A)s_{2(1 - p)}(A^T)} \right)$ where $s_p(A) = \max_i(\| A_i \|_p^p)$ .
+
+The next theorems make it possible to compute the distance between vectors encoded as quantum states, and to use this idea to perform the $k$ -means algorithm.
+
+Theorem B.5 [Quantum Distance Estimation Wiebe et al. (2014b); Kerenidis et al. (2019)] Given quantum access in time $T$ to two matrices $U$ and $V$ with rows $u_{i}$ and $v_{j}$ of dimension $d$ , there is a quantum algorithm that, for any pair $(i,j)$ , performs the following mapping $|i\rangle |j\rangle |0\rangle \mapsto |i\rangle |j\rangle |\overline{d^2(u_i,v_j)}\rangle$ , estimating the Euclidean distance between $u_{i}$ and $v_{j}$ with precision $|\overline{d^2(u_i,v_j)} - d^2(u_i,v_j)| \leq \epsilon$ for any $\epsilon > 0$ . The algorithm has a running time of $\widetilde{O}(T\eta/\epsilon)$ , where $\eta = \max_{ij}(\| u_i\| \| v_j\|)$ , assuming that $\min_i(\| u_i\|) = \min_i(\| v_i\|) = 1$ .
+
+Theorem B.6 [Quantum $k$ -means clustering Kerenidis et al. (2019)]
+
+Given quantum access in time $T$ to a dataset $V \in \mathbb{R}^{n \times d}$ , there is a quantum algorithm that outputs with high probability $k$ centroids $c_1, \dots, c_k$ that are consistent with the output of the $k$ -means algorithm with noise $\delta > 0$ , in time $\widetilde{O}(T \times (kd\frac{\eta(V)}{\delta^2}\kappa(V)(\mu(V) + k\frac{\eta(V)}{\delta}) + k^2\frac{\eta(V)^{1.5}}{\delta^2}\kappa(V)\mu(V)))$ per iteration.
+
+Definition 3 For a matrix $V \in \mathbb{R}^{n \times d}$ , its parameter $\eta(V)$ is defined as $\frac{\max_i(\|v_i\|^2)}{\min_i(\|v_i\|^2)}$ , or as $\max_i(\|v_i\|^2)$ assuming $\min_i(\|v_i\|) = 1$ .
+
+In Theorem B.6, the other parameters in the running time can be interpreted as follows: $\delta$ is the precision in the estimation of the distances, as well as in the estimation of the position of the centroids; $\kappa (V)$ is the condition number of $V$ ; and $\mu (V)$ is defined above (Definition 2). Finally, in the case of well-clusterable datasets, which should be the case when we apply $k$ -means during spectral clustering, the running time simplifies to $\widetilde{O} (T\times (k^2 d\frac{\eta(V)^{2.5}}{\delta^3} +k^{2.5}\frac{\eta(V)^2}{\delta^3}))$ .
+
+Note that the dependence in $n$ is hidden in the time $T$ to load the data. This dependence becomes polylogarithmic in $n$ if we assume access to a QRAM.
+
+Theorem B.7 (Quantum Matrix Operations, Chakraborty et al. (2018)) Let $M \in \mathbb{R}^{d \times d}$ and $x \in \mathbb{R}^d$ . Let $\delta_1, \delta_2 > 0$ . If $M$ is stored in appropriate QRAM data structures and the time to prepare $|x\rangle$ is $T_x$ , then there exist quantum algorithms that with probability at least $1 - 1/poly(d)$ return
+
+1. A state $|z\rangle$ such that $\| |z\rangle - |Mx\rangle \|_2 \leq \delta_1$ in time $\widetilde{O}((\kappa(M)\mu(M) + T_x\kappa(M))\log(1/\delta_1))$ . Note that this also implies $\| |z\rangle - |Mx\rangle \|_{\infty} \leq \delta_1$ .
+2. Norm estimate $z \in (1 \pm \delta_2) \| Mx \|_2$ , with relative error $\delta_2$ , in time $\widetilde{O}(T_x \frac{\kappa(M)\mu(M)}{\delta_2} \log(1 / \delta_1))$ .
+
+The linear algebra procedures above can also be applied to any rectangular matrix $V \in \mathbb{R}^{n \times d}$ by considering instead the symmetric matrix $\overline{V} = \begin{pmatrix} 0 & V \\ V^T & 0 \end{pmatrix}$ .
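The dilation trick can be verified numerically (an illustrative NumPy sketch, not from the paper): the nonzero eigenvalues of $\overline{V}$ are $\pm$ the singular values of $V$, which is why routines for square symmetric matrices extend to rectangular ones.

```python
import numpy as np

# Sketch of the symmetric dilation above: Vbar = [[0, V], [V^T, 0]] is
# symmetric, and its nonzero eigenvalues are +/- the singular values of V.
rng = np.random.default_rng(0)
V = rng.standard_normal((3, 5))
n, d = V.shape

Vbar = np.block([[np.zeros((n, n)), V],
                 [V.T, np.zeros((d, d))]])
assert np.allclose(Vbar, Vbar.T)                  # symmetric by construction

sing = np.linalg.svd(V, compute_uv=False)         # singular values of V
eig = np.linalg.eigvalsh(Vbar)                    # eigenvalues of Vbar
# The largest n eigenvalues of Vbar coincide with the singular values of V.
assert np.allclose(np.sort(eig)[-n:], np.sort(sing))
```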
+
+# APPENDIX C CLASSICAL CONVOLUTIONAL NEURAL NETWORK (CNN)
+
+A CNN is a specific type of neural network designed in particular for image processing or time series. It uses the convolution product as the main procedure of each layer. We will focus on image processing, with a tensor framework for all elements of the network. Our goal is to explicitly describe the CNN procedures in a form that can be translated into the context of quantum algorithms.
+
+As a regular neural network, a CNN should learn how to classify any input, in our case images. The training consists of optimizing a series of parameters, learned on the inputs and their corresponding labels.
+
+# C.1 TENSOR REPRESENTATION
+
+Images, or more generally layers of the network, can be seen as tensors. A tensor is a generalization of a matrix to higher dimensions. For instance, an image of height $H$ and width $W$ can be seen as a matrix in $\mathbb{R}^{H \times W}$ , where every pixel is a greyscale value between 0 and 255 (8 bits). However, the three color channels (RGB: Red Green Blue) must be taken into account, by stacking three such matrices, one per color. The whole image is then seen as a 3-dimensional tensor in $\mathbb{R}^{H \times W \times D}$ where $D$ is the number of channels. We will see that the convolution product in the CNN can be expressed between 3-tensors (input) and 4-tensors (convolution filters or kernels), the output being a 3-tensor of different dimensions (spatial size and number of channels).
+
+
+Figure 3: RGB decomposition, a colored image is a 3-tensor.
+
+# C.2 ARCHITECTURE
+
+A CNN is composed of 4 main procedures, combined and repeated in any order: convolution layers, most often followed by an activation function, pooling layers, and some fully connected layers at the end. We will denote by $\ell$ the current layer.
+
+Convolution Layer: The $\ell^{th}$ layer is convolved by a set of filters called kernels. The output of this operation is the $(\ell + 1)^{th}$ layer. A convolution by a single kernel can be seen as a feature detector, that will screen over all regions of the input. If the feature represented by the kernel, for instance a vertical edge, is present in some part of the input, there will be a high value at the corresponding position of the output. The output is commonly called the feature map of this convolution.
+
+Activation Function : As in regular neural networks, we insert some non-linearities, also called activation functions. These are mandatory for a neural network to be able to learn arbitrary functions. In the case of a CNN, each convolution is often followed by a Rectified Linear Unit, or ReLU. This simple function sets all negative values of the output to zero and leaves the positive values unchanged.
+
+Pooling Layer : This downsampling technique reduces the dimensionality of the layer, in order to reduce the computational cost. Moreover, it gives the CNN the ability to learn a representation invariant to small translations. Most of the time, we apply a Maximum Pooling or an Average Pooling. The first replaces each subregion of $P \times P$ elements by its maximum value; the second replaces it by the average of its values. Recall that the value of a pixel corresponds to how much a particular feature was present in the previous convolution layer.
+
+Fully Connected Layer : After a certain number of convolution layers, the input has been sufficiently processed for us to apply a fully connected network. Weights connect each input to each output, where the inputs are all the elements of the previous layer. The last layer should have one node per possible label. Each node's value can be interpreted as the probability that the initial image belongs to the corresponding class.
+
+# C.3 CONVOLUTION PRODUCT AS A TENSOR OPERATION
+
+Most of the following mathematical formulations have been very well detailed by Wu (2017). At layer $\ell$ , we consider the convolution of a multi-channel image, seen as a 3-tensor $X^{\ell} \in \mathbb{R}^{H^{\ell} \times W^{\ell} \times D^{\ell}}$ . Let us first consider a single kernel in $\mathbb{R}^{H \times W \times D^{\ell}}$ . Note that its third dimension must match the number of channels of the input, as in Figure 4. The kernel passes over all possible regions of the input and outputs a value for each region, stored in the corresponding element of the output. Therefore the output is 2-dimensional, in $\mathbb{R}^{H^{\ell + 1} \times W^{\ell + 1}}$ .
+
+
+Figure 4: Convolution of a 3-tensor input (Left) by one 3-tensor kernel (Center). The output (Right) is a matrix in which each entry is an inner product between the kernel and the corresponding overlapping region of the input.
+
+In a CNN, the most general case is to apply several convolution products to the input, each one with a different 3-tensor kernel. Let us consider an input convolved by $D^{\ell +1}$ kernels. We can see this process as a whole, represented by one 4-tensor kernel $K^{\ell}\in \mathbb{R}^{H\times W\times D^{\ell}\times D^{\ell +1}}$ . As $D^{\ell +1}$ convolutions are applied, there are $D^{\ell +1}$ two-dimensional outputs, equivalent to a 3-tensor $X^{\ell +1}\in \mathbb{R}^{H^{\ell +1}\times W^{\ell +1}\times D^{\ell +1}}$ .
+
+We can see in Figure 5 that the output's dimensions are modified according to the following rule:
+
+$$
+\left\{ \begin{array}{l} H ^ {\ell + 1} = H ^ {\ell} - H + 1 \\ W ^ {\ell + 1} = W ^ {\ell} - W + 1 \end{array} \right. \tag {4}
+$$
+
+We omit the details of Padding and Stride, two parameters that control how the kernel moves across the input, but these can easily be incorporated into the algorithms.
+
+An element of $X^{\ell}$ is determined by 3 indices $(i^{\ell}, j^{\ell}, d^{\ell})$ , while an element of the kernel $K^{\ell}$ is determined by 4 indices $(i, j, d, d')$ . For an element of $X^{\ell+1}$ we use 3 indices $(i^{\ell+1}, j^{\ell+1}, d^{\ell+1})$ . We can express the value of each element of the output $X^{\ell+1}$ with the relation
+
+$$
+X _ {i ^ {\ell + 1}, j ^ {\ell + 1}, d ^ {\ell + 1}} ^ {\ell + 1} = \sum_ {i = 0} ^ {H - 1} \sum_ {j = 0} ^ {W - 1} \sum_ {d = 0} ^ {D ^ {\ell} - 1} K _ {i, j, d, d ^ {\ell + 1}} ^ {\ell} X _ {i ^ {\ell + 1} + i, j ^ {\ell + 1} + j, d} ^ {\ell} \tag {5}
+$$
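For concreteness, Equation (5) can be implemented directly as nested loops (an illustrative NumPy sketch without padding or stride; `conv_layer` is our name), with the output dimensions following Equation (4):

```python
import numpy as np

# Direct implementation of Equation (5) as nested loops (illustrative
# sketch only; no padding or stride, so dimensions follow Equation (4)).
def conv_layer(X, K):
    Hl, Wl, Dl = X.shape          # input  H^l x W^l x D^l
    H, W, _, Dnext = K.shape      # kernel H x W x D^l x D^{l+1}
    Hout, Wout = Hl - H + 1, Wl - W + 1
    Y = np.zeros((Hout, Wout, Dnext))
    for i1 in range(Hout):
        for j1 in range(Wout):
            for dn in range(Dnext):
                # inner product between one kernel and one input region
                Y[i1, j1, dn] = np.sum(X[i1:i1 + H, j1:j1 + W, :] * K[:, :, :, dn])
    return Y

X = np.arange(6, dtype=float).reshape(2, 3, 1)   # tiny 2 x 3 single-channel input
K = np.ones((2, 2, 1, 1))                        # one 2 x 2 summing kernel
out = conv_layer(X, K)
assert out.shape == (1, 2, 1)     # (H^l - H + 1) x (W^l - W + 1) x D^{l+1}
assert np.allclose(out[:, :, 0], [[8.0, 12.0]])
```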
+
+
+Figure 5: Convolutions of the 3-tensor input $X^{\ell}$ (Left) by one 4-tensor kernel $K^{\ell}$ (Center). Each channel of the output $X^{\ell +1}$ (Right) corresponds to the output matrix of the convolution with one of the 3-tensor kernel.
+
+
+
+# C.4 MATRIX EXPRESSION
+
+
+Figure 6: A convolution product is equivalent to a matrix-matrix multiplication.
+
+It is possible to reformulate Equation (5) as a matrix product. For this we have to reshape our objects. We expand the input $X^{\ell}$ into a matrix $A^{\ell} \in \mathbb{R}^{(H^{\ell + 1}W^{\ell + 1}) \times (HWD^{\ell})}$ . Each row of $A^{\ell}$ is a vectorized version of a subregion of $X^{\ell}$ . This subregion is a volume of the same size as a single kernel volume $H \times W \times D^{\ell}$ . Hence each of the $H^{\ell + 1} \times W^{\ell + 1}$ rows of $A^{\ell}$ is used for creating one value in $X^{\ell + 1}$ . Given such a subregion of $X^{\ell}$ , the rule for creating the row of $A^{\ell}$ is to stack, channel by channel, a column first vectorized form of each matrix. Then, we reshape the kernel tensor $K^{\ell}$ into a matrix $F^{\ell} \in \mathbb{R}^{(HWD^{\ell}) \times D^{\ell + 1}}$ , such that each column of $F^{\ell}$ is a column first vectorized version of one of the $D^{\ell + 1}$ kernels.
+
+The convolution operation $X^{\ell} * K^{\ell} = X^{\ell + 1}$ is equivalent to the following matrix multiplication
+
+$$
+A ^ {\ell} F ^ {\ell} = Y ^ {\ell + 1}, \tag {6}
+$$
+
+where each column of $Y^{\ell + 1} \in \mathbb{R}^{(H^{\ell + 1}W^{\ell + 1}) \times D^{\ell + 1}}$ is a column first vectorized form of one of the $D^{\ell + 1}$ channels of $X^{\ell + 1}$ . Note that an element $Y_{p,q}^{\ell + 1}$ is the inner product between the $p^{th}$ row of $A^\ell$ and the $q^{th}$ column of $F^\ell$ . It is then simple to convert $Y^{\ell + 1}$ into $X^{\ell + 1}$ . The indices relation between the elements $Y_{p,q}^{\ell + 1}$ and $X_{i^{\ell + 1},j^{\ell + 1},d^{\ell + 1}}^{\ell + 1}$ is given by:
+
+$$
+\left\{ \begin{array}{l} d ^ {\ell + 1} = q \\ j ^ {\ell + 1} = \left\lfloor \frac {p}{H ^ {\ell + 1}} \right\rfloor \\ i ^ {\ell + 1} = p - H ^ {\ell + 1} \left\lfloor \frac {p}{H ^ {\ell + 1}} \right\rfloor \end{array} \right. \tag {7}
+$$
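The reshaping of this section can be checked numerically (an illustrative NumPy sketch; `im2col` is our name for the expansion of $X^{\ell}$ into $A^{\ell}$): rows of $A^{\ell}$ are indexed by $p = i^{\ell+1} + H^{\ell+1} j^{\ell+1}$, matching the index relation of Equation (7), and the product $A^{\ell}F^{\ell}$ reproduces the direct convolution.

```python
import numpy as np

# Sketch of Section C.4 (im2col is our name): expand X^l into A^l so that
# the convolution becomes the matrix product A^l F^l = Y^{l+1} (Equation (6)).
def im2col(X, H, W):
    Hl, Wl, Dl = X.shape
    Hout, Wout = Hl - H + 1, Wl - W + 1
    A = np.zeros((Hout * Wout, H * W * Dl))
    for j1 in range(Wout):
        for i1 in range(Hout):
            p = i1 + Hout * j1                  # row index of Equation (7)
            region = X[i1:i1 + H, j1:j1 + W, :]
            # stack, channel by channel, a column-first vectorization
            A[p] = np.concatenate([region[:, :, d].flatten(order="F")
                                   for d in range(Dl)])
    return A

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4, 2))              # H^l = W^l = 4, D^l = 2
K = rng.standard_normal((2, 2, 2, 3))           # H = W = 2, D^{l+1} = 3

A = im2col(X, 2, 2)
# Column q of F^l is the column-first vectorization of the q-th kernel.
F = np.stack([np.concatenate([K[:, :, d, q].flatten(order="F")
                              for d in range(2)]) for q in range(3)], axis=1)
Y = A @ F                                       # Equation (6)

# Entry (p, q) of Y equals the direct tensor convolution of Equation (5).
i1, j1, q = 1, 2, 0
direct = np.sum(X[i1:i1 + 2, j1:j1 + 2, :] * K[:, :, :, q])
assert np.isclose(Y[i1 + 3 * j1, q], direct)    # H^{l+1} = 3 here
```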
+
+A summary of all variables along with their meaning and dimensions is given in Section A.
+
+# APPENDIX D QUANTUM CONVOLUTIONAL NEURAL NETWORK
+
+In this section we will design quantum procedures for the usual operations in a CNN layer. We start by describing the main ideas before providing the details. Steps are gathered in Algorithm 1.
+
+First, to perform a convolution product between an input and a kernel, we will use the mapping between convolution of tensors and matrix multiplication from Section C.3, that can further be reduced to inner product estimation between vectors, in order to use quantum linear algebra procedures to perform these computations faster. The output will be a quantum state representing the result of the convolution product, from which we can sample to retrieve classical information to feed the next layer. This is stated by the following Theorem:
+
+# Theorem D.1 (Quantum Convolution Layer)
+
+Given the 3D tensor input $X^{\ell} \in \mathbb{R}^{H^{\ell} \times W^{\ell} \times D^{\ell}}$ and the 4D tensor kernel $K^{\ell} \in \mathbb{R}^{H \times W \times D^{\ell} \times D^{\ell + 1}}$ stored in the QRAM, there is a quantum algorithm that computes a quantum state $\Delta$ -close to $|f(\overline{X}^{\ell + 1})\rangle$ for an arbitrarily small parameter $\Delta > 0$ . $|f(\overline{X}^{\ell + 1})\rangle$ is close to the result of the convolution product $X^{\ell + 1} = X^{\ell} * K^{\ell}$ followed by any non-linear function $f: \mathbb{R} \mapsto \mathbb{R}^{+}$ , with an error bounded by $\left\| f(\overline{X}^{\ell + 1}) - f(X^{\ell + 1}) \right\|_{\infty} \leq 2M\epsilon$ for any precision $\epsilon > 0$ , where $M$ is the maximum product between the norm of one of the $D^{\ell + 1}$ kernels and the norm of one of the regions of $X^{\ell}$ of size $HWD^{\ell}$ . The time complexity of this procedure is $\widetilde{O}(1/\epsilon)$ , where $\widetilde{O}$ hides factors poly-logarithmic in $\Delta$ and in the size of $X^{\ell}$ and $K^{\ell}$ .
+
+In a second step, we efficiently retrieve classical information from the output. Recall that a convolution can be seen as pattern detection on the input image, where the pattern is the kernel. The output values correspond to "how much" the pattern was present in the corresponding region of the input. Low-value pixels in the output indicate the absence of the pattern in the input at the corresponding regions. Therefore, by sampling according to these output values, where high-value pixels are sampled with higher probability, we retrieve less, but more meaningful, information for the neural network to learn. While sampling, we update the QRAM data structure with the new information (see Section D.2). We also perform the pooling operation during this phase (see Section D.2.2). This is an interesting use case where the amplitudes of a quantum state are proportional to the importance of the information they carry, giving a new utility to the probabilistic nature of quantum sampling. Numerical simulations are presented in Section 6 to estimate empirically how many samples from the output state are necessary.
+
+# D.1 SINGLE QUANTUM CONVOLUTION LAYER
+
+In order to develop a quantum algorithm to perform the convolution as described above, we will make use of quantum linear algebra procedures. We will use quantum states proportional to the rows of $A^{\ell}$ , noted $|A_{p}\rangle$ , and the columns of $F^{\ell}$ , noted $|F_{q}\rangle$ (we omit the $\ell$ exponent in the quantum states to simplify the notation). These states are given by $|A_{p}\rangle = \frac{1}{\|A_{p}\|}\sum_{r = 0}^{HWD^{\ell} - 1}A_{pr}|r\rangle$ and $|F_{q}\rangle = \frac{1}{\|F_{q}\|}\sum_{s = 0}^{HWD^{\ell} - 1}F_{sq}|s\rangle$ . We suppose we can load these vectors into quantum states by performing the following queries:
+
+$$
+\left\{ \begin{array}{l} | p \rangle | 0 \rangle \mapsto | p \rangle | A _ {p} \rangle \\ | q \rangle | 0 \rangle \mapsto | q \rangle | F _ {q} \rangle \end{array} \right. \tag {8}
+$$
+
+Such queries, in time poly-logarithmic in the dimension of the vector, can be implemented with a Quantum Random Access Memory (QRAM). See Section D.2 for more details on the QRAM update rules and its integration layer by layer.
+
+# D.1.1 INNER PRODUCT ESTIMATION
+
+The following method to estimate inner products is derived from previous work by Kerenidis et al. (2019). Starting from the state $|p\rangle |q\rangle \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)|0\rangle$ , we apply the queries detailed above in a controlled fashion, followed by a Hadamard gate, to extract the inner product $\langle A_p|F_q\rangle$ into an amplitude: $\frac{1}{\sqrt{2}}(|p\rangle |q\rangle |0\rangle |0\rangle + |p\rangle |q\rangle |1\rangle |0\rangle) \mapsto \frac{1}{\sqrt{2}}(|p\rangle |q\rangle |0\rangle |A_p\rangle + |p\rangle |q\rangle |1\rangle |F_q\rangle)$ . By applying a Hadamard gate on the third register we obtain the state $\frac{1}{2} |p\rangle |q\rangle \left(|0\rangle (|A_p\rangle + |F_q\rangle) + |1\rangle (|A_p\rangle - |F_q\rangle)\right)$ . The probability of measuring 0 in the third register is given by $P_{pq} = \frac{1 + \langle A_p|F_q\rangle}{2}$ . Thus we can rewrite the previous state as $|p\rangle |q\rangle \left(\sqrt{P_{pq}} |0,y_{pq}\rangle + \sqrt{1 - P_{pq}} |1,y_{pq}'\rangle\right)$ , where $|y_{pq}\rangle$ and $|y_{pq}'\rangle$ are garbage states. We can perform the previous circuit in superposition. Since $A^\ell$ has $H^{\ell+1}W^{\ell+1}$ rows and $F^\ell$ has $D^{\ell+1}$ columns, we obtain the state $|u\rangle = \frac{1}{\sqrt{H^{\ell+1}W^{\ell+1}D^{\ell+1}}} \sum_p \sum_q |p\rangle |q\rangle \left(\sqrt{P_{pq}} |0,y_{pq}\rangle + \sqrt{1 - P_{pq}} |1,y_{pq}'\rangle\right)$ . Therefore the probability of measuring the triplet $(p,q,0)$ in the first three registers is given by $P_0(p,q) = \frac{P_{pq}}{H^{\ell+1}W^{\ell+1}D^{\ell+1}} = \frac{1 + \langle A_p|F_q\rangle}{2H^{\ell+1}W^{\ell+1}D^{\ell+1}}$ . We can now relate this to the convolution product. Indeed, the triplets $(p,q,0)$ most likely to be measured are those for which the value $\langle A_p|F_q\rangle$ is the highest.
+Recall that each element of $Y^{\ell+1}$ is given by $Y_{pq}^{\ell+1} = (A_p,F_q)$ , where " $(\cdot,\cdot)$ " denotes the inner product. We see here that we will most probably sample the positions $(p,q)$ corresponding to the highest values of $Y^{\ell+1}$ , that is, the most important points of $X^{\ell+1}$ , by Equation (7). Note that the values of $Y^{\ell+1}$ can be either positive or negative, which is not an issue thanks to the positivity of $P_0(p,q)$ .
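The inner-product extraction above can be simulated classically on small vectors (an illustrative NumPy sketch; `hadamard_test_p0` is our name): starting from $(|0\rangle|A_p\rangle + |1\rangle|F_q\rangle)/\sqrt{2}$, a Hadamard on the control qubit gives $P(0) = (1+\langle A_p|F_q\rangle)/2$ for real unit vectors.

```python
import numpy as np

# Classical simulation (ours) of the inner-product circuit above.
def hadamard_test_p0(a, f):
    # Normalize: |A_p> and |F_q> are unit vectors by construction.
    a = a / np.linalg.norm(a)
    f = f / np.linalg.norm(f)
    # Joint state (|0>|A_p> + |1>|F_q>)/sqrt(2), stored as two blocks.
    block0, block1 = a / np.sqrt(2), f / np.sqrt(2)
    # A Hadamard on the control qubit maps the blocks to (b0 +/- b1)/sqrt(2);
    # the probability of outcome 0 is the squared norm of the "+" block.
    top = (block0 + block1) / np.sqrt(2)
    return float(np.sum(top ** 2))

a = np.array([1.0, 0.0])
f = np.array([1.0, 1.0])
p0 = hadamard_test_p0(a, f)
assert np.isclose(p0, (1 + 1 / np.sqrt(2)) / 2)   # P(0) = (1 + <a|f>)/2
```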
+
+A first approach would be to measure the indices $(p,q)$ and rely on the fact that pixels with high values, hence high amplitudes, have a higher probability of being measured. However, this does not yet give the final result, since $\langle A_p|F_q\rangle \neq (A_p,F_q) = \| A_p\| \| F_q\| \langle A_p|F_q\rangle$ . Most importantly, we then want to apply a non-linearity $f(Y_{pq}^{\ell +1})$ to each pixel, for instance the ReLU function, which does not seem possible with unitary quantum gates if the data is encoded in the amplitudes only. Moreover, due to the normalization of the quantum amplitudes and the high dimension of the input's Hilbert space, the probability of measuring each pixel is roughly the same, making the sampling inefficient. Given these facts, we have added steps to the circuit in order to measure $(p,q,f(Y_{pq}^{\ell +1}))$ , and therefore know the value of a pixel when measuring it, while still measuring the most important points in priority.
+
+# D.1.2 ENCODING THE AMPLITUDE IN A REGISTER
+
+Let $\mathcal{U}$ be the unitary that maps $|0\rangle$ to $|u\rangle$ : $|u\rangle = \frac{1}{\sqrt{H^{\ell + 1}W^{\ell + 1}D^{\ell + 1}}}\sum_{p,q}|p\rangle |q\rangle\left(\sqrt{P_{pq}} |0,y_{pq}\rangle + \sqrt{1 - P_{pq}} |1,y_{pq}'\rangle\right)$ . The amplitude $\sqrt{P_{pq}}$ can be encoded in an ancillary register by using Amplitude Estimation (Theorem B.2) followed by a Median Evaluation (Theorem B.4). For any $\Delta > 0$ and $\epsilon > 0$ , we can obtain a state $\Delta$ -close to $|u'\rangle = \frac{1}{\sqrt{H^{\ell + 1}W^{\ell + 1}D^{\ell + 1}}}\sum_{p,q}|p\rangle |q\rangle |0\rangle |\overline{P}_{pq}\rangle |g_{pq}\rangle$ with probability at least $1 - 2\Delta$ , where $|P_{pq} - \overline{P}_{pq}|\leq \epsilon$ and $|g_{pq}\rangle$ is a garbage state. This requires $O(\frac{\ln(1 / \Delta)}{\epsilon})$ queries to $\mathcal{U}$ . In the following we discard the third register $|0\rangle$ for simplicity.
+
+The benefit of having $\overline{P}_{pq}$ in a register is the ability to perform operations on it (arithmetic, or even non-linear). Therefore we can simply obtain a state corresponding to the exact value of the convolution product. Since we have built a circuit such that $P_{pq} = \frac{1 + \langle A_p|F_q\rangle}{2}$ , with two QRAM calls we can retrieve the norms of the vectors by applying the following unitary: $|p\rangle |q\rangle |\overline{P}_{pq}\rangle |g_{pq}\rangle |0\rangle |0\rangle \mapsto |p\rangle |q\rangle |\overline{P}_{pq}\rangle |g_{pq}\rangle \left|\|A_p\|\right\rangle \left|\|F_q\|\right\rangle$ . On the fourth register, we can then compute $\overline{Y}_{pq}^{\ell +1} = (2\overline{P}_{pq} - 1)\| A_p\| \| F_q\|$ , an estimate of $Y_{pq}^{\ell +1} = \| A_p\| \| F_q\| \langle A_p|F_q\rangle$ , using some arithmetic circuits (addition, multiplication by a scalar, multiplication between registers). We then apply a boolean circuit that implements the ReLU function on the same register, in order to obtain an estimate of $f(Y_{pq}^{\ell +1})$ in the fourth register. We finish by inverting the previous computations and obtain the final state:
+
+$$
+| f (\bar {Y} ^ {\ell + 1}) \rangle = \frac {1}{\sqrt {H ^ {\ell + 1} W ^ {\ell + 1} D ^ {\ell + 1}}} \sum_ {p, q} | p \rangle | q \rangle | f (\bar {Y} _ {p q} ^ {\ell + 1}) \rangle | g _ {p q} \rangle \tag {9}
+$$
+
+Because of the precision $\epsilon$ on $|\overline{P}_{pq}\rangle$ , our estimate $\overline{Y}_{pq}^{\ell + 1} = (2\overline{P}_{pq} - 1)\| A_p\| \| F_q\|$ is obtained with an error satisfying $|\overline{Y}_{pq}^{\ell + 1} - Y_{pq}^{\ell + 1}| \leq 2\epsilon \| A_p\| \| F_q\|$ .
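+
+As a sanity check, the recovery of $\overline{Y}_{pq}^{\ell+1}$ from the estimated probability $\overline{P}_{pq}$ can be mimicked classically. The sketch below is an illustration of the register arithmetic, not the quantum circuit itself: it injects an additive error of magnitude at most $\epsilon$ into $P_{pq}$ and recovers a value obeying the bound $2\epsilon\|A_p\|\|F_q\|$ .
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+def estimate_conv_value(a_p, f_q, eps):
+    # P_pq = (1 + <a_p/||a_p||, f_q/||f_q||>) / 2, known to additive error eps
+    na, nf = np.linalg.norm(a_p), np.linalg.norm(f_q)
+    p_exact = (1 + (a_p @ f_q) / (na * nf)) / 2
+    p_bar = p_exact + eps * rng.uniform(-1, 1)   # amplitude-estimation noise
+    # register arithmetic: Y_bar = (2 * P_bar - 1) * ||a_p|| * ||f_q||
+    return (2 * p_bar - 1) * na * nf, na * nf
+```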
+
+In superposition, we can bound this error by $\left|\overline{Y}_{pq}^{\ell +1} - Y_{pq}^{\ell +1}\right| \leq 2M\epsilon$ where we define
+
+$$
+M = \max _ {p, q} \| A _ {p} \| \| F _ {q} \| \tag {10}
+$$
+
+$M$ is the maximum product between the norm of one of the $D^{\ell +1}$ kernels and the norm of one of the regions of $X^{\ell}$ of size $HWD^{\ell}$ . Finally, since the previous error estimate is valid for all pairs $(p,q)$ , the overall error committed on the convolution product can be bounded by $\left\| \overline{Y}^{\ell +1} - Y^{\ell +1}\right\|_{\infty}\leq 2M\epsilon$ , where $\| .\|_{\infty}$ denotes the $\ell_{\infty}$ norm. Recall that $Y^{\ell +1}$ is just a reshaped version of $X^{\ell +1}$ . Since the non linearity adds no approximation, we can bound the final error committed for a layer of our QCNN:
+
+$$
+\left\| f \left(\bar {X} ^ {\ell + 1}\right) - f \left(X ^ {\ell + 1}\right) \right\| _ {\infty} \leq 2 M \epsilon \tag {11}
+$$
+
+At this point, we have established Theorem D.1, as we have created the quantum state (9), with the given precision guarantees, in time poly-logarithmic in $1/\Delta$ and in the sizes of $X^{\ell}$ and $K^{\ell}$ .
+
+We now aim to retrieve classical information from this quantum state. Note that $|Y_{pq}^{\ell +1}\rangle$ represents a scalar encoded in as many qubits as needed for the precision, whereas $|A_p\rangle$ represented a vector as a quantum state in superposition, where each element $A_{p,r}$ is encoded in one amplitude (see Section F). The next step can be seen as a way to retrieve both encodings at the same time, which will allow an efficient tomography focused on the values of high magnitude.
+
+# D.1.3 CONDITIONAL ROTATION
+
+In the following sections, we omit the $\ell + 1$ exponent for simplicity. Garbage states are also removed, as they will not perturb the final measurement. We now aim to modify the amplitudes so that the highest values of $|f(\overline{Y})\rangle$ are measured with higher probability. A way to do so consists in applying a conditional rotation on an ancillary qubit, proportionally to $f(\overline{Y}_{pq})$ . We detail the calculation since, in the general case, $f(\overline{Y}_{pq})$ can be greater than 1. To simplify the notation, we write $x = f(\overline{Y}_{pq})$ . This step consists in applying the following rotation on an ancillary qubit: $|x\rangle |0\rangle \mapsto |x\rangle \left(\sqrt{\frac{x}{\max x}} |0\rangle + \beta |1\rangle\right)$ , where $\max x = \max_{p,q} f(\overline{Y}_{pq})$ and $\beta = \sqrt{1 - \frac{x}{\max x}}$ . Note that in practice it is not possible to have access to $\max x$ from the state (9), but we will present in Section D.1.6 a method to know this value, or an upper bound on it, a priori. Let us write $\alpha_{pq} = \sqrt{\frac{f(\overline{Y}_{pq})}{\max_{p,q}(f(\overline{Y}_{pq}))}}$ . The output of this conditional rotation applied in superposition on state (9) is then $\frac{1}{\sqrt{HWD}} \sum_{p,q} |p\rangle |q\rangle |f(\overline{Y}_{pq})\rangle (\alpha_{pq}|0\rangle + \sqrt{1 - \alpha_{pq}^2}|1\rangle)$ .
+
+# D.1.4 AMPLITUDE AMPLIFICATION
+
+In order to measure $(p,q,f(\overline{Y}_{pq}))$ with higher probability where $f(\overline{Y}_{pq})$ has a high value, we could post-select on the measurement of $|0\rangle$ on the last register. Alternatively, we can perform amplitude amplification on this ancillary qubit. Let us rewrite the previous state as $\frac{1}{\sqrt{HWD}}\sum_{p,q}\alpha_{pq}|p\rangle |q\rangle |f(\overline{Y}_{pq})\rangle |0\rangle +\sqrt{1 - \alpha_{pq}^2} |g_{pq}'\rangle |1\rangle$ , where $|g_{pq}'\rangle$ is another garbage state. The overall probability of measuring $|0\rangle$ on the last register is $P(0) = \frac{1}{HWD}\sum_{pq}|\alpha_{pq}|^2$ . The number of queries required to amplify the state $|0\rangle$ is $O\left(\frac{1}{\sqrt{P(0)}}\right)$ , as shown by Brassard et al. (2002). Since $f(\overline{Y}_{pq}) \in \mathbb{R}^{+}$ , we have $\alpha_{pq}^2 = \frac{f(\overline{Y}_{pq})}{\max_{p,q}(f(\overline{Y}_{pq}))}$ . Therefore the number of queries is $O\left(\sqrt{\max_{p,q}(f(\overline{Y}_{pq}))}\frac{1}{\sqrt{\frac{1}{HWD}\sum_{p,q}f(\overline{Y}_{pq})}}\right) = O\left(\frac{\sqrt{\max_{p,q}(f(\overline{Y}_{pq}))}}{\sqrt{\mathbb{E}_{p,q}(f(\overline{Y}_{pq}))}}\right)$ , where $\mathbb{E}_{p,q}(f(\overline{Y}_{pq})) = \frac{1}{HWD}\sum_{p,q}f(\overline{Y}_{pq})$ denotes the average value of the matrix $f(\overline{Y})$ ; it can also be written $\mathbb{E}(f(\overline{X}))$ as in Result 1. At the end of these iterations, we have, with high probability, transformed the state into the following:
+
+$$
+| f (\bar {Y}) \rangle = \frac {1}{\sqrt {H W D}} \sum_ {p, q} \alpha_ {p q} ^ {\prime} | p \rangle | q \rangle | f (\bar {Y} _ {p q}) \rangle \tag {12}
+$$
+
+where, to preserve the normalization of the quantum state, $\alpha_{pq}^{\prime} = \frac{\alpha_{pq}}{\sqrt{\sum_{p,q}\frac{\alpha_{pq}^{2}}{HWD}}}$ . Eventually, the probability of measuring $(p,q,f(\overline{Y}_{pq}))$ is given by $p(p,q,f(\overline{Y}_{pq})) = \frac{(\alpha_{pq}^{\prime})^{2}}{HWD} = \frac{f(\overline{Y}_{pq})}{\sum_{p,q}f(\overline{Y}_{pq})}$ . Note that we have used the same name $|f(\overline{Y})\rangle$ for both state (9) and state (12). From now on, this name will refer only to the latter (12).
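+
+The measurement distribution induced by state (12) can be checked with a small classical simulation. This sketch (illustrative only) draws samples from $p(p,q) = f(\overline{Y}_{pq})/\sum_{p,q} f(\overline{Y}_{pq})$ and confirms that high-value pixels dominate:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+f_y = np.array([[4.0, 1.0], [0.0, 3.0]])     # non-negative values f(Y_bar)
+probs = (f_y / f_y.sum()).ravel()            # p(p, q) = f(Y_pq) / sum f
+samples = rng.choice(probs.size, size=20000, p=probs)
+freq = np.bincount(samples, minlength=probs.size) / samples.size
+# empirical frequencies approach probs = [0.5, 0.125, 0.0, 0.375]
+```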
+
+# D.1.5 $\ell_{\infty}$ TOMOGRAPHY AND PROBABILISTIC SAMPLING
+
+We can rewrite the final quantum state obtained in (12) as
+
+$$
+| f (\bar {Y} ^ {\ell + 1}) \rangle = \frac {1}{\sqrt {\sum_ {p , q} f (\bar {Y} _ {p q} ^ {\ell + 1})}} \sum_ {p, q} \sqrt {f (\bar {Y} _ {p q} ^ {\ell + 1})} | p \rangle | q \rangle | f (\bar {Y} _ {p q} ^ {\ell + 1}) \rangle \tag {13}
+$$
+
+We see here that $f(\overline{Y}_{pq}^{\ell +1})$ , the value of each pixel, is encoded in both the last register and the amplitude. We will use this property to efficiently extract the exact values of the high-magnitude pixels. For simplicity, we will instead use the notation $f(\overline{X}_n^{\ell +1})$ to denote a pixel's value, with $n\in [H^{\ell +1}W^{\ell +1}D^{\ell +1}]$ . Recall that $Y^{\ell +1}$ and $X^{\ell +1}$ are reshaped versions of the same object.
+
+The pixels with high values have a higher probability of being sampled. Specifically, we perform a tomography with $\ell_{\infty}$ guarantee and precision parameter $\eta > 0$ (see Theorem G.1 and Section G for details). The $\ell_{\infty}$ guarantee allows us to obtain each pixel with error at most $\eta$ , and requires $\widetilde{O}(1/\eta^2)$ samples from the state (13). Pixels with low values $f(\overline{X}_n^{\ell+1}) < \eta$ will probably not be sampled due to their low amplitude. The error committed on them could therefore be significant, so we adopt the rule of setting them to 0. Pixels with higher values $f(\overline{X}_n^{\ell+1}) \geq \eta$ will be sampled with high probability, and a single appearance is enough to get the exact register value $f(\overline{X}_n^{\ell+1})$ of the pixel, as it is also written in the last register.
+
+To conclude, let us denote by $\mathcal{X}_n^{\ell +1}$ the resulting pixel values after the tomography, and compare them to the true classical outputs $f(X_{n}^{\ell +1})$ . Recall that the measured values $f(\overline{X}_n^{\ell +1})$ are approximated with error at most $2M\epsilon$ , with $M = \max_{p,q}\| A_p\| \| F_q\|$ . The algorithm described above implements the following rules:
+
+$$
+\left\{ \begin{array}{l l} \left| \mathcal {X} _ {n} ^ {\ell + 1} - f \left(X _ {n} ^ {\ell + 1}\right) \right| \leq 2 M \epsilon & \text {i f} \quad f \left(\bar {X} _ {n} ^ {\ell + 1}\right) \geq \eta \\ \mathcal {X} _ {n} ^ {\ell + 1} = 0 & \text {i f} \quad f \left(\bar {X} _ {n} ^ {\ell + 1}\right) < \eta \end{array} \right. \tag {14}
+$$
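+
+The rule (14) can be emulated classically as follows. In this hypothetical sketch, a sampled pixel contributes its exact register value while unsampled pixels default to 0, reproducing the behaviour of the $\ell_{\infty}$ tomography:
+
+```python
+import numpy as np
+
+def linf_tomography(f_x, n_samples, rng):
+    # draw measurements of state (13); probability of pixel n is f_x[n] / sum(f_x)
+    probs = (f_x / f_x.sum()).ravel()
+    seen = np.unique(rng.choice(f_x.size, size=n_samples, p=probs))
+    out = np.zeros(f_x.size)
+    out[seen] = f_x.ravel()[seen]            # one appearance reveals the exact value
+    return out.reshape(f_x.shape)
+```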
+
+Concerning the running time, one could ask which values of $\eta$ are sufficient to obtain enough meaningful pixels. Obviously this highly depends on the output's size $H^{\ell +1}W^{\ell +1}D^{\ell +1}$ and on the output's content itself. But we can view this question from another perspective, by considering that we sample a constant fraction of the pixels, $\sigma \cdot (H^{\ell +1}W^{\ell +1}D^{\ell +1})$ , where $\sigma \in [0,1]$ is a sampling ratio. Because of the particular amplitudes of state (13), the high-value pixels will be measured and known with higher probability. The points that are not sampled are set to 0. This approach is equivalent to the $\ell_{\infty}$ tomography, therefore we have $\frac{1}{\eta^2} = \sigma \cdot H^{\ell +1}W^{\ell +1}D^{\ell +1}$ .
+
+We will use this analogy in the numerical simulations (Section 6) to estimate, for a particular QCNN architecture and a particular dataset of images, which values of $\sigma$ are enough to allow the neural network to learn.
+
+# D.1.6 REGULARIZATION OF THE NON LINEARITY
+
+In the previous steps, the parameter $\max_{p,q}(f(\overline{Y}_{pq}^{\ell +1}))$ appears several times. First, for the conditional rotation preprocessing, we need to know this value or an upper bound on it. Then, for the running time, we would like to bound this parameter. Both problems can be solved by replacing the usual ReLu non linearity by a particular activation function that we call capReLu. This function is simply a parametrized ReLu with an upper threshold, the cap $C$ , after which the function remains constant. The choice of $C$ will be tuned for each particular QCNN, as a tradeoff between accuracy and speed. Otherwise, the only other requirement on the QCNN activation function is that it not produce negative values, which is already the case for most classical CNNs. In practice, we expect the capReLu to perform as well as a usual ReLu for convenient values of the cap $C$ ( $\leq 10$ ). We performed numerical simulations to compare the learning curves of the same CNN with several values of $C$ ; see the numerical experiments presented in Section 6 for more details.
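+
+For reference, the capReLu activation is a one-liner; the sketch below assumes the cap $C = 5$ used in Figure 7:
+
+```python
+import numpy as np
+
+def cap_relu(x, cap=5.0):
+    # ReLu with an upper threshold C: 0 for x < 0, x on [0, C], C beyond
+    return np.clip(x, 0.0, cap)
+```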
+
+
+Figure 7: Activation functions: ReLu (Left) and capReLu (Right) with a cap $C$ at 5.
+
+
+
+# D.2 QRAM UPDATE
+
+We now detail the use of the QRAM between two consecutive quantum convolution layers, and present how the pooling operation can happen during this phase. General results about the QRAM are given in Theorem F.1. Implementation details can be found in the work of Kerenidis & Prakash (2017a). In this section, we show how to store samples from the output of layer $\ell$ to create the input of layer $\ell + 1$ .
+
+# D.2.1 STORING THE OUTPUT VALUES DURING THE SAMPLING
+
+At the beginning of layer $\ell + 1$ , the QRAM must store $A^{\ell + 1}$ , a matrix where each element is indexed by $(p', r')$ , and perform $|p'\rangle |0\rangle \mapsto |p'\rangle |A_{p'}^{\ell + 1}\rangle$ . The data is stored in the QRAM as the tree structure described by Kerenidis & Prakash (2017b). Each row $A_{p'}^{\ell + 1}$ is stored in such a tree $T_{p'}^{\ell + 1}$ . Each leaf $A_{p'r'}^{\ell + 1}$ corresponds to a value sampled from the previous quantum state $|f(\overline{Y}^{\ell + 1})\rangle$ , the output of layer $\ell$ . The question is to know where to store a sample from $|f(\overline{Y}^{\ell + 1})\rangle$ in the tree $T_{p'}^{\ell + 1}$ .
+
+When a point is sampled from the final state of the quantum convolution at layer $\ell$ , as described in Section D.1.4, we obtain a triplet corresponding to the two positions and the value of a point in the matrix $f(\overline{Y}^{\ell + 1})$ . We can tell where this point belongs in the input of layer $\ell + 1$ , the tensor $X^{\ell + 1}$ , by Equation (7), since $Y^{\ell + 1}$ is a reshaped version of $X^{\ell + 1}$ .
+
+The position in $X^{\ell +1}$ , noted $(i^{\ell +1}, j^{\ell +1}, d^{\ell +1})$ , is then matched to several positions $(p', r')$ in $A^{\ell +1}$ . For each $p'$ , we write the sampled value at leaf $r'$ of the tree $T_{p'}^{\ell +1}$ and update its parent nodes, as required in the work of Kerenidis & Prakash (2017b). Note that leaves that weren't updated are considered to be zeros, corresponding to pixels whose values were too low, or that were not selected during pooling (see next section).
+
+Having stored pixels in this way, we can then implement the query $|p' \rangle |0 \rangle \mapsto |p' \rangle |A_{p'}^{\ell + 1} \rangle$ using the quantum circuit developed by Kerenidis & Prakash (2017b), where we correctly have $|A_{p'}^{\ell + 1} \rangle = \frac{1}{\left\| A_{p'}^{\ell + 1} \right\|} \sum_{r'} A_{p'r'}^{\ell + 1} |r' \rangle$ . Note that each tree has logarithmic depth in the number of leaves, hence writing the output of the quantum convolution layer into the QRAM adds only a marginal multiplicative overhead, poly-logarithmic in the number of points sampled from $|f(\overline{Y}^{\ell + 1}) \rangle$ , namely $O(\log(1 / \eta^2))$ .
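+
+The tree structure can be illustrated with a toy classical analogue. In this sketch (our own simplification, not the exact Kerenidis & Prakash layout), leaves store the signed entries of a row $A_{p'}$ and each internal node stores the sum of squares of its subtree, so a single-leaf update only touches a logarithmic number of nodes:
+
+```python
+import math
+
+class KPTree:
+    def __init__(self, n):
+        self.n = 1 << max(1, math.ceil(math.log2(n)))   # pad to a power of two
+        self.node = [0.0] * (2 * self.n)                # node[i] = sum of squares below i
+        self.leaf = [0.0] * self.n                      # signed leaf values
+
+    def update(self, r, value):
+        self.leaf[r] = value
+        i = self.n + r
+        self.node[i] = value * value
+        while i > 1:                                    # climb to the root: log-depth
+            i //= 2
+            self.node[i] = self.node[2 * i] + self.node[2 * i + 1]
+
+    def squared_norm(self):
+        return self.node[1]                             # ||A_p'||^2 read at the root
+```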
+
+# D.2.2 QUANTUM POOLING
+
+As for the classical CNN, a QCNN should be able to perform pooling operations. We first detail the notations for classical pooling. At the end of layer $\ell$ , we wish to apply a pooling operation of size $\mathbf{P}$ on the output $f(X^{\ell +1})$ . We note $\tilde{X}^{\ell +1}$ the tensor after the pooling operation. For a point in $f(X^{\ell +1})$ at position $(i^{\ell +1},j^{\ell +1},d^{\ell +1})$ , we know to which pooling region it belongs, corresponding to a position $(\tilde{i}^{\ell +1},\tilde{j}^{\ell +1},\tilde{d}^{\ell +1})$ in $\tilde{X}^{\ell +1}$ :
+
+$$
+\left\{ \begin{array}{l} \tilde {d} ^ {\ell + 1} = d ^ {\ell + 1} \\ \tilde {j} ^ {\ell + 1} = \left\lfloor \frac {j ^ {\ell + 1}}{P} \right\rfloor \\ \tilde {i} ^ {\ell + 1} = \left\lfloor \frac {i ^ {\ell + 1}}{P} \right\rfloor \end{array} \right. \tag {15}
+$$
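+
+Equation (15) is just integer division on the spatial indices; a minimal helper:
+
+```python
+def pooling_position(i, j, d, P):
+    # Equation (15): pooling region of a point (i, j, d) of f(X^{l+1})
+    return i // P, j // P, d
+```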
+
+
+Figure 8: A $2 \times 2$ tensor pooling. A point in $f(X^{\ell +1})$ (left) is given by its position $(i^{\ell +1},j^{\ell +1},d^{\ell +1})$ . A point in $\tilde{X}^{\ell +1}$ (right) is given by its position $(\tilde{i}^{\ell +1},\tilde{j}^{\ell +1},\tilde{d}^{\ell +1})$ . Different pooling regions in $f(X^{\ell +1})$ have separate colours, and each one corresponds to a unique point in $\tilde{X}^{\ell +1}$ .
+
+We now show how any kind of pooling can be efficiently integrated to our QCNN structure. Indeed the pooling operation will occur during the QRAM update described above, at the end of a convolution layer. At this moment we will store sampled values according to the pooling rules.
+
+In the quantum setting, the output of layer $\ell$ after tomography is noted $\mathcal{X}^{\ell +1}$ . After pooling, we will describe it by $\tilde{\mathcal{X}}^{\ell +1}$ , which has dimensions $\frac{H^{\ell + 1}}{P} \times \frac{W^{\ell + 1}}{P} \times D^{\ell +1}$ . $\tilde{\mathcal{X}}^{\ell +1}$ will be effectively used as input for layer $\ell +1$ and its values should be stored in the QRAM to form the trees $\tilde{T}_{p'}^{\ell +1}$ , related to the matrix expansion $\tilde{A}^{\ell +1}$ .
+
+However $\mathcal{X}^{\ell +1}$ is not known before the tomography is over. Therefore we have to modify the update rule of the QRAM to implement the pooling in an online fashion, each time a sample from $|f(\overline{X}^{\ell +1})\rangle$ is drawn. Since several sampled values of $|f(\overline{X}^{\ell +1})\rangle$ can correspond to the same leaf $\tilde{A}_{p'r'}^{\ell +1}$ (points in the same pooling region), we need an overwrite rule that depends on the type of pooling. In the case of Max Pooling, we simply update the leaf and its parent nodes if the newly sampled value is higher than the one already written. In the case of Average Pooling, we replace the current value by the new running average.
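+
+The two overwrite rules can be sketched with a plain dictionary standing in for the QRAM leaves (the names here are illustrative, not part of the construction):
+
+```python
+def online_pool_update(store, key, value, mode="max", counts=None):
+    # merge a freshly sampled value into the leaf of its pooling region
+    if mode == "max":
+        store[key] = max(store.get(key, 0.0), value)   # Max Pooling: keep the largest
+    elif mode == "average":
+        counts[key] = counts.get(key, 0) + 1           # Average Pooling: running mean
+        prev = store.get(key, 0.0)
+        store[key] = prev + (value - prev) / counts[key]
+```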
+
+In the end, any pooling can be included in the already existing QRAM update. In the worst case, the running time is increased by $\widetilde{O}(P / \eta^2)$ , an overhead corresponding to the number of times we need to overwrite existing leaves, with $P$ being a small constant in most cases.
+
+As we will see in Section E, the final positions $(p,q)$ that were sampled from $|f(\overline{X}^{\ell +1})\rangle$ and selected after pooling must be stored for further use during the backpropagation phase.
+
+# D.3 RUNNING TIME
+
+We now summarise the running time of one forward pass through convolution layer $\ell$ . With $\widetilde{O}$ we hide the poly-logarithmic factors. We first write the running time of the classical CNN layer, which is given by $\widetilde{O}\left(H^{\ell +1}W^{\ell +1}D^{\ell +1}\cdot HWD^{\ell}\right)$ . For the QCNN, the previous steps prove Result 1 and can be implemented in time $\widetilde{O}\left(\frac{1}{\epsilon\eta^2}\cdot \frac{M\sqrt{C}}{\sqrt{\mathbb{E}(f(\overline{X}^{\ell + 1}))}}\right)$ . Note that, as explained in Section D.1.5, the quantum running time can also be written $\widetilde{O}\left(\sigma H^{\ell +1}W^{\ell +1}D^{\ell +1}\cdot \frac{M\sqrt{C}}{\epsilon\sqrt{\mathbb{E}(f(\overline{X}^{\ell + 1}))}}\right)$ , with $\sigma \in [0,1]$ being the fraction of sampled elements among the $H^{\ell +1}W^{\ell +1}D^{\ell +1}$ of them.
+
+It is interesting to notice that one quantum convolution layer can also include the ReLu operation and the pooling operation in the same circuit, for no significant increase in the running time, whereas in a classical CNN each operation must be applied to the whole data again.
+
+# APPENDIX E QUANTUM BACKPROPAGATION
+
+The entire QCNN is made of multiple layers. For the last layer's output, we expect only one possible outcome, or a few of them in the case of a classification task, which means that the dimension of the quantum output is very small. A full tomography can therefore be performed on the last layer's output in order to calculate the outcome. The loss $\mathcal{L}$ is then calculated, as a measure of the correctness of the predictions compared to the ground truth. As in the classical CNN, our QCNN should be able to optimize its weights (the elements of the kernels) to minimize the loss by an iterative method.
+
+# Theorem E.1 (Quantum Backpropagation for Quantum CNN)
+
+Given the forward pass quantum algorithm in Algorithm 1, the input matrix $A^{\ell}$ and the kernel matrix $F^{\ell}$ stored in the QRAM for each layer $\ell$ , and a loss function $\mathcal{L}$ , there is a quantum backpropagation algorithm that estimates, for any precision $\delta > 0$ , the gradient tensor $\frac{\partial\mathcal{L}}{\partial F^{\ell}}$ and updates each element to perform gradient descent such that $\forall (s,q), \left|\frac{\partial\mathcal{L}}{\partial F_{s,q}^{\ell}} -\overline{\frac{\partial\mathcal{L}}{\partial F_{s,q}^{\ell}}}\right| \leq 2\delta \left\| \frac{\partial\mathcal{L}}{\partial F^{\ell}}\right\|_{2}$ . Let $\frac{\partial\mathcal{L}}{\partial Y^{\ell}}$ be the gradient with respect to the $\ell^{th}$ layer. The running time of a single layer $\ell$ of quantum backpropagation is given by
+
+$$
+O \left(\left(\left(\mu (A ^ {\ell}) + \mu (\frac {\partial \mathcal {L}}{\partial Y ^ {\ell + 1}})\right) \kappa (\frac {\partial \mathcal {L}}{\partial F ^ {\ell}}) + \left(\mu (\frac {\partial \mathcal {L}}{\partial Y ^ {\ell + 1}}) + \mu (F ^ {\ell})\right) \kappa (\frac {\partial \mathcal {L}}{\partial Y ^ {\ell}})\right) \frac {\log 1 / \delta}{\delta^ {2}}\right) \tag {16}
+$$
+
+where for a matrix $V$ , $\kappa(V)$ is the condition number and $\mu(V)$ is defined in Equation (5).
+
+# E.1 CLASSICAL BACK PROPAGATION
+
+After each forward pass, the outcome is compared to the true labels, which defines a loss. We can update the weights by gradient descent to minimize this loss, and iterate. The main idea behind backpropagation is to compute the derivatives of the loss $\mathcal{L}$ , layer by layer, starting from the last one.
+
+At layer $\ell$ , the derivatives needed to perform the gradient descent are $\frac{\partial\mathcal{L}}{\partial F^{\ell}}$ and $\frac{\partial\mathcal{L}}{\partial Y^{\ell}}$ . The first one represents the gradient of the final loss $\mathcal{L}$ with respect to each kernel element, a matrix of values that we will use to update the kernel weights $F_{s,q}^{\ell}$ . The second one is the gradient of $\mathcal{L}$ with respect to the layer itself and is only needed to calculate the gradient $\frac{\partial\mathcal{L}}{\partial F^{\ell-1}}$ at layer $\ell-1$ .
+
+# E.1.1 CONVOLUTION PRODUCT
+
+We first consider a classical convolution layer without non-linearity or pooling. Thus the output of layer $\ell$ is the same tensor as the input of layer $\ell + 1$ , namely $X^{\ell + 1}$ or equivalently $Y^{\ell + 1}$ . Assuming we know $\frac{\partial \mathcal{L}}{\partial X^{\ell + 1}}$ or equivalently $\frac{\partial \mathcal{L}}{\partial Y^{\ell + 1}}$ , both corresponding to the derivatives of the $(\ell + 1)^{th}$ layer's
+
+input, we will show how to calculate $\frac{\partial\mathcal{L}}{\partial F^{\ell}}$ , the matrix of derivatives with respect to the elements of the previous kernel matrix $F^{\ell}$ . This is the main goal in order to optimize the kernel's weights.
+
+The details of the following calculations can be found in the work of Wu (2017). We will use the notation $vec(X)$ to represent the vectorized form of any tensor $X$ .
+
+Recall that $A^\ell$ is the matrix expansion of the tensor $X^\ell$ , whereas $Y^\ell$ is a matrix reshaping of $X^\ell$ . By applying the chain rule $\frac{\partial\mathcal{L}}{\partial vec(F^\ell)^T} = \frac{\partial\mathcal{L}}{\partial vec(X^{\ell + 1})^T}\frac{\partial vec(X^{\ell + 1})}{\partial vec(F^\ell)^T}$ , we can obtain:
+
+$$
+\frac {\partial \mathcal {L}}{\partial F ^ {\ell}} = (A ^ {\ell}) ^ {T} \frac {\partial \mathcal {L}}{\partial Y ^ {\ell + 1}} \tag {17}
+$$
+
+See the calculation details in the work of Wu (2017). Equation (17) shows that, to obtain the desired gradient, we can simply perform a matrix-matrix multiplication between the transposed layer itself $(A^{\ell})^{T}$ and the gradient with respect to the $(\ell+1)^{th}$ layer $\left(\frac{\partial \mathcal{L}}{\partial Y^{\ell + 1}}\right)$ .
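+
+Equation (17) can be verified numerically on random matrices, using the linear loss $\mathcal{L} = \langle A F, G\rangle$ with an arbitrary upstream gradient $G = \partial\mathcal{L}/\partial Y^{\ell+1}$ (a sketch, independent of any specific CNN):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(3)
+A = rng.standard_normal((6, 4))   # matrix expansion A^l of the input
+F = rng.standard_normal((4, 3))   # kernel matrix F^l
+G = rng.standard_normal((6, 3))   # upstream gradient dL/dY^{l+1}
+
+dF = A.T @ G                      # Equation (17)
+
+# finite-difference check of one entry for the loss L(F) = sum((A @ F) * G)
+eps = 1e-6
+F2 = F.copy()
+F2[1, 2] += eps
+numeric = (((A @ F2) * G).sum() - ((A @ F) * G).sum()) / eps
+```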
+
+Equation (17) also explains why we will need to calculate $\frac{\partial\mathcal{L}}{\partial Y^{\ell}}$ in order to backpropagate through layer $\ell - 1$ . To calculate it, we use the chain rule again: $\frac{\partial\mathcal{L}}{\partial vec(X^{\ell})^{T}} = \frac{\partial\mathcal{L}}{\partial vec(X^{\ell + 1})^{T}}\frac{\partial vec(X^{\ell + 1})}{\partial vec(X^{\ell})^{T}}$ . Recall that a point in $A^\ell$ , indexed by the pair $(p,r)$ , can correspond to several triplets $(i^\ell,j^\ell,d^\ell)$ in $X^{\ell}$ . We will use the notation $(p,r)\leftrightarrow (i^{\ell},j^{\ell},d^{\ell})$ to express this relation formally. One can show that $\frac{\partial\mathcal{L}}{\partial Y^{\ell + 1}}(F^{\ell})^{T}$ is a matrix of the same shape as $A^\ell$ , and that the chain rule leads to a simple relation for $\frac{\partial\mathcal{L}}{\partial Y^{\ell}}$ :
+
+$$
+\left[ \frac {\partial \mathcal {L}}{\partial X ^ {\ell}} \right] _ {i ^ {\ell}, j ^ {\ell}, d ^ {\ell}} = \sum_ {(p, r) \leftrightarrow (i ^ {\ell}, j ^ {\ell}, d ^ {\ell})} \left[ \frac {\partial \mathcal {L}}{\partial Y ^ {\ell + 1}} \left(F ^ {\ell}\right) ^ {T} \right] _ {p, r} \tag {18}
+$$
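+
+Equation (18) is a scatter-add: each entry $(p,r)$ of $\frac{\partial\mathcal{L}}{\partial Y^{\ell+1}}(F^{\ell})^{T}$ is accumulated into the tensor position it was copied from. A sketch, where `index_map` is a hypothetical record of the correspondence $(p,r)\leftrightarrow(i,j,d)$ built during the expansion of $X^{\ell}$ into $A^{\ell}$ :
+
+```python
+import numpy as np
+
+def backprop_to_input(dL_dA, index_map, x_shape):
+    # Equation (18): scatter-add entries of dL/dY^{l+1} (F^l)^T (shape of A^l)
+    # back into the positions (i, j, d) of X^l they were copied from
+    dX = np.zeros(x_shape)
+    for (p, r), (i, j, d) in index_map.items():
+        dX[i, j, d] += dL_dA[p, r]
+    return dX
+```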
+
+We have shown how to obtain the gradients with respect to the kernels $F^{\ell}$ and to the layer itself $Y^{\ell}$ (or equivalently $X^{\ell}$ ).
+
+# E.1.2 NON LINEARITY
+
+The activation function also has an impact on the gradient. In the case of the ReLu, we only cancel the gradient for points with negative values. For points with positive values, the derivatives remain unchanged since the function is the identity there. A formal relation is given by
+
+$$
+\left[ \frac {\partial \mathcal {L}}{\partial X ^ {\ell + 1}} \right] _ {i ^ {\ell + 1}, j ^ {\ell + 1}, d ^ {\ell + 1}} = \left\{ \begin{array}{l l} \left[ \frac {\partial \mathcal {L}}{\partial f (X ^ {\ell + 1})} \right] _ {i ^ {\ell + 1}, j ^ {\ell + 1}, d ^ {\ell + 1}} & \text {if } X _ {i ^ {\ell + 1}, j ^ {\ell + 1}, d ^ {\ell + 1}} ^ {\ell + 1} \geq 0 \\ 0 & \text {otherwise} \end{array} \right. \tag {19}
+$$
+
+# E.1.3 POOLING
+
+If we take the pooling operation into account, we must change some of the gradients. Indeed, a pixel that hasn't been selected during pooling has no impact on the final loss, and thus should have a gradient equal to 0. We will focus on the case of Max Pooling (Average Pooling relies on a similar idea). To state a formal relation, we use the notations of Section D.2.2: an element in the output of the layer, the tensor $f(X^{\ell +1})$ , is located by the triplet $(i^{\ell +1},j^{\ell +1},d^{\ell +1})$ . The tensor after pooling is noted $\tilde{X}^{\ell +1}$ and its points are located by the triplet $(\tilde{i}^{\ell +1},\tilde{j}^{\ell +1},\tilde{d}^{\ell +1})$ . During backpropagation, after the calculation of $\frac{\partial\mathcal{L}}{\partial\tilde{X}^{\ell + 1}}$ , some of the derivatives of $f(X^{\ell +1})$ should be set to zero with the following rule:
+
+$$
+\left[ \frac {\partial \mathcal {L}}{\partial f \left(X ^ {\ell + 1}\right)} \right] _ {i ^ {\ell + 1}, j ^ {\ell + 1}, d ^ {\ell + 1}} = \left\{ \begin{array}{l l} \left[ \frac {\partial \mathcal {L}}{\partial \tilde {X} ^ {\ell + 1}} \right] _ {\tilde {i} ^ {\ell + 1}, \tilde {j} ^ {\ell + 1}, \tilde {d} ^ {\ell + 1}} & \text {if } (i ^ {\ell + 1}, j ^ {\ell + 1}, d ^ {\ell + 1}) \text { was selected during pooling} \\ 0 & \text {otherwise} \end{array} \right. \tag {20}
+$$
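+
+For Max Pooling, rule (20) routes the upstream gradient to the argmax of each pooling region and zeroes everything else. A 2-D sketch (single channel, stride $P$ , assuming the side lengths are divisible by $P$ ):
+
+```python
+import numpy as np
+
+def maxpool_backward(dL_dXt, x, P):
+    # Equation (20): gradient flows only to the argmax of each P x P region
+    dX = np.zeros_like(x)
+    H, W = x.shape
+    for ti in range(H // P):
+        for tj in range(W // P):
+            block = x[ti*P:(ti+1)*P, tj*P:(tj+1)*P]
+            i, j = np.unravel_index(np.argmax(block), block.shape)
+            dX[ti*P + i, tj*P + j] = dL_dXt[ti, tj]
+    return dX
+```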
+
+# E.2 QUANTUM ALGORITHM FOR BACK PROPAGATION
+
+In this section, we want to give a quantum algorithm to perform backpropagation on a layer $\ell$ , and detail the impact on the derivatives, given by the following diagram:
+
+$$
+\left\{ \begin{array}{l} \frac {\partial \mathcal {L}}{\partial X ^ {\ell}} \\ \frac {\partial \mathcal {L}}{\partial F ^ {\ell}} \end{array} \right. \leftarrow \frac {\partial \mathcal {L}}{\partial \bar {X} ^ {\ell + 1}} \leftarrow \frac {\partial \mathcal {L}}{\partial f \left(\bar {X} ^ {\ell + 1}\right)} \leftarrow \frac {\partial \mathcal {L}}{\partial \mathcal {X} ^ {\ell + 1}} \leftarrow \frac {\partial \mathcal {L}}{\partial \tilde {\mathcal {X}} ^ {\ell + 1}} = \frac {\partial \mathcal {L}}{\partial X ^ {\ell + 1}} \tag {21}
+$$
+
+We assume that backpropagation has already been performed on layer $\ell + 1$ . This means in particular that $\frac{\partial \mathcal{L}}{\partial X^{\ell + 1}}$ is stored in the QRAM. However, as shown in Diagram (21), $\frac{\partial \mathcal{L}}{\partial X^{\ell + 1}}$ corresponds formally to $\frac{\partial \mathcal{L}}{\partial \tilde{\mathcal{X}}^{\ell + 1}}$ , and not to $\frac{\partial \mathcal{L}}{\partial \overline{X}^{\ell + 1}}$ . Therefore, we will have to modify the values stored in the QRAM to take into account the non-linearity, the tomography and the pooling. We will first consider how to compute $\frac{\partial \mathcal{L}}{\partial X^{\ell}}$ and $\frac{\partial \mathcal{L}}{\partial F^{\ell}}$ through backpropagation, considering only the convolution product, as if $\frac{\partial \mathcal{L}}{\partial \overline{X}^{\ell + 1}}$ and $\frac{\partial \mathcal{L}}{\partial X^{\ell + 1}}$ were the same. Then we will detail how to simply modify $\frac{\partial \mathcal{L}}{\partial X^{\ell + 1}}$ a priori, by setting some of its values to 0.
+
+# E.2.1 QUANTUM CONVOLUTION PRODUCT
+
+In this section we consider only the quantum convolution product, without non-linearity, tomography, or pooling, hence we write its output directly as $X^{\ell +1}$ . Regarding derivatives, the quantum convolution product is equivalent to the classical one: the gradient relations (17) and (18) remain the same. Note that the $\epsilon$ -approximation from Section D.1.2 plays no role in these gradient considerations.
+
+The gradient relations being the same, we still have to specify the quantum algorithm that implements the backpropagation and outputs a classical description of $\frac{\partial\mathcal{L}}{\partial X^{\ell}}$ and $\frac{\partial\mathcal{L}}{\partial F^{\ell}}$ . We have seen that the two main calculations, (17) and (18), are in fact matrix-matrix multiplications, both involving $\frac{\partial\mathcal{L}}{\partial Y^{\ell + 1}}$ , the reshaped form of $\frac{\partial\mathcal{L}}{\partial X^{\ell + 1}}$ . For each of them, the classical running time is $O(H^{\ell + 1}W^{\ell + 1}D^{\ell + 1}HWD^{\ell})$ . From Theorem F.7 and Theorem G.1 we know a quantum algorithm that efficiently performs a matrix-vector multiplication and returns a classical description with $\ell_{\infty}$ norm guarantees. For a matrix $V$ and a vector $b$ , both accessible from the QRAM, the running time of this operation is $O\left(\frac{\mu(V)\kappa(V)\log 1 / \delta}{\delta^2}\right)$ , where $\kappa (V)$ is the condition number of the matrix and $\mu (V)$ is a matrix parameter defined in Equation (5). The precision parameter $\delta >0$ is the error committed in the approximation, for both Theorems F.7 and G.1.
+
+We can therefore apply these theorems to perform the matrix-matrix multiplications, by simply decomposing them into several matrix-vector multiplications. For instance, in Equation (17), the matrix would be $(A^{\ell})^{T}$ and the different vectors would be the columns of $\frac{\partial\mathcal{L}}{\partial Y^{\ell + 1}}$ . The global running time to perform Equation (17) quantumly is obtained by replacing $\mu (V)$ by $\mu (\frac{\partial\mathcal{L}}{\partial Y^{\ell + 1}}) + \mu (A^{\ell})$ and $\kappa (V)$ by $\kappa ((A^{\ell})^{T}\cdot \frac{\partial\mathcal{L}}{\partial Y^{\ell + 1}})$ . Likewise, for Equation (18), we have $\mu (\frac{\partial\mathcal{L}}{\partial Y^{\ell + 1}}) + \mu (F^{\ell})$ and $\kappa (\frac{\partial\mathcal{L}}{\partial Y^{\ell + 1}}\cdot (F^{\ell})^{T})$ .
+
+Note that the dimension of the matrix doesn't appear in the running time, since we tolerate an $\ell_{\infty}$ norm guarantee for the error instead of an $\ell_{2}$ guarantee (see Section G for details). The reason why the $\ell_{\infty}$ tomography is the right approximation here is that the results of these linear algebra operations are rows of the gradient matrices, which are not vectors in a Euclidean space, but series of numbers for which we want to be $\delta$ -close to the exact values. See the next section for more details.
+
+It is an open question whether one can apply the same sub-sampling technique as in the forward pass (Section D.1) and sample only the highest derivatives of $\frac{\partial\mathcal{L}}{\partial X^{\ell}}$ , to reduce the computation cost while maintaining a good optimization. We now have to understand which elements of $\frac{\partial\mathcal{L}}{\partial X^{\ell + 1}}$ must be set to zero to take into account the effects of the non linearity, the tomography and the pooling.
+
+# E.2.2 QUANTUM NON LINEARITY AND TOMOGRAPHY
+
+To include the impact of the non linearity, one could apply the same rule as in (19), and simply replace $\mathrm{ReLu}$ by capReLu. After the non linearity, we obtain $f(\overline{X}^{\ell +1})$ , and the gradient relation would be given by
+
+$$
+\left[ \frac {\partial \mathcal {L}}{\partial \bar {X} ^ {\ell + 1}} \right] _ {i ^ {\ell + 1}, j ^ {\ell + 1}, d ^ {\ell + 1}} = \left\{ \begin{array}{l l} \left[ \frac {\partial \mathcal {L}}{\partial f (\bar {X} ^ {\ell + 1})} \right] _ {i ^ {\ell + 1}, j ^ {\ell + 1}, d ^ {\ell + 1}} & \text {if } 0 \leq \bar {X} _ {i ^ {\ell + 1}, j ^ {\ell + 1}, d ^ {\ell + 1}} ^ {\ell + 1} \leq C \\ 0 & \text {otherwise} \end{array} \right. \tag {22}
+$$
+
+If an element of $\overline{X}^{\ell +1}$ was negative or bigger than the cap $C$ , its derivative should be zero during the backpropagation. However, this operation was performed in quantum superposition: in the quantum algorithm, one cannot record at which positions $(i^{\ell +1},j^{\ell +1},d^{\ell +1})$ the activation function was selective. The gradient relation (22) therefore cannot be implemented a posteriori. We provide a partial solution to this problem, using the fact that the quantum tomography must also be taken into account for some derivatives: only the points $(i^{\ell +1},j^{\ell +1},d^{\ell +1})$ that have been sampled should have an impact on the gradient of the loss. Therefore we replace the previous relation by
+
+$$
+\left[ \frac{\partial\mathcal{L}}{\partial \bar{X}^{\ell+1}} \right]_{i^{\ell+1}, j^{\ell+1}, d^{\ell+1}} = \begin{cases} \left[ \frac{\partial\mathcal{L}}{\partial \mathcal{X}^{\ell+1}} \right]_{i^{\ell+1}, j^{\ell+1}, d^{\ell+1}} & \text{if } (i^{\ell+1}, j^{\ell+1}, d^{\ell+1}) \text{ was sampled} \\ 0 & \text{otherwise} \end{cases} \tag{23}
+$$
+
+Nonetheless, we can argue that this approximation is tolerable. In the first case, where $\overline{X}_{i^{\ell +1},j^{\ell +1},d^{\ell +1}}^{\ell +1} < 0$ , the derivatives cannot be set to zero as they should be. But in practice their values will be zero after the activation function, so such points have almost no chance of being sampled; in conclusion, their derivatives will be zero as required. In the other case, where $\overline{X}_{i^{\ell +1},j^{\ell +1},d^{\ell +1}}^{\ell +1} > C$ , the derivatives cannot be set to zero either, but the points have a high probability of being sampled. Therefore their derivatives will remain unchanged, as if we were using a ReLu instead of a capReLu. However, when the cap $C$ is high enough, this shouldn't be a disadvantage in practice.
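+As a classical sketch (not the paper's code), relation (23) amounts to keeping only the derivatives at the sampled positions; the argument above says this also approximates the capReLU mask of relation (22). Array names below are illustrative.
+
+```python
+import numpy as np
+
+def quantum_backprop_mask(dL_dX_next, sampled):
+    """Classical sketch of relation (23): keep the derivative only at
+    positions that were sampled during tomography, zero it elsewhere.
+    `sampled` is a boolean array marking observed positions."""
+    return np.where(sampled, dL_dX_next, 0.0)
+```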
+
+# E.2.3 QUANTUM POOLING
+
+From relation (23), we can take into account the impact of quantum pooling (see Section D.2.2) on the derivatives. This case is easier since one can record the selected positions during the QRAM update. Therefore, applying the backpropagation is similar to the classical setting with Equation (20).
+
+$$
+\left[ \frac{\partial\mathcal{L}}{\partial \mathcal{X}^{\ell+1}} \right]_{i^{\ell+1}, j^{\ell+1}, d^{\ell+1}} = \begin{cases} \left[ \frac{\partial\mathcal{L}}{\partial \tilde{\mathcal{X}}^{\ell+1}} \right]_{\tilde{i}^{\ell+1}, \tilde{j}^{\ell+1}, \tilde{d}^{\ell+1}} & \text{if } (i^{\ell+1}, j^{\ell+1}, d^{\ell+1}) \text{ was selected during pooling} \\ 0 & \text{otherwise} \end{cases} \tag{24}
+$$
+
+Note that we know $\frac{\partial\mathcal{L}}{\partial\tilde{\mathcal{X}}^{\ell + 1}}$ as it is equal to $\frac{\partial\mathcal{L}}{\partial X^{\ell + 1}}$ , the gradient with respect to the input of layer $\ell + 1$ , known by assumption and stored in the QRAM.
+
+# E.3 CONCLUSION AND RUNNING TIME
+
+In conclusion, given $\frac{\partial\mathcal{L}}{\partial Y^{\ell + 1}}$ in the QRAM, the quantum backpropagation first consists in applying relation (24) followed by (23). The effective gradient now takes into account the non-linearity, tomography, and pooling that occurred during layer $\ell$ . We can now apply the quantum algorithm for matrix-matrix multiplication that implements relations (18) and (17).
+
+Note that the steps in Algorithm 2 could also be reversed: during backpropagation of layer $\ell + 1$ , when storing values for each element of $\frac{\partial \mathcal{L}}{\partial Y^{\ell + 1}}$ in the QRAM, one can already take into account (24) and (23) of layer $\ell$ . In this case we directly store $\frac{\partial \mathcal{L}}{\partial X^{\ell + 1}}$ , at no supplementary cost.
+
+Therefore, the running time of the quantum backpropagation for one layer $\ell$ , given as Algorithm 2, corresponds to the sum of the running times of the circuits for implementing relations (17) and (18). We finally obtain
+
+$$
+O\left( \left( \left( \mu(A^{\ell}) + \mu\left(\frac{\partial\mathcal{L}}{\partial Y^{\ell+1}}\right) \right) \kappa\left( (A^{\ell})^{T} \cdot \frac{\partial\mathcal{L}}{\partial Y^{\ell+1}} \right) + \left( \mu\left(\frac{\partial\mathcal{L}}{\partial Y^{\ell+1}}\right) + \mu(F^{\ell}) \right) \kappa\left( \frac{\partial\mathcal{L}}{\partial Y^{\ell+1}} \cdot (F^{\ell})^{T} \right) \right) \frac{\log 1/\delta}{\delta^{2}} \right),
+$$
+
+which can be rewritten as
+
+$$
+O\left( \left( \left( \mu(A^{\ell}) + \mu\left(\frac{\partial\mathcal{L}}{\partial Y^{\ell+1}}\right) \right) \kappa\left( \frac{\partial\mathcal{L}}{\partial F^{\ell}} \right) + \left( \mu\left(\frac{\partial\mathcal{L}}{\partial Y^{\ell+1}}\right) + \mu(F^{\ell}) \right) \kappa\left( \frac{\partial\mathcal{L}}{\partial X^{\ell}} \right) \right) \frac{\log 1/\delta}{\delta^{2}} \right) \tag{25}
+$$
+
+Besides storing $\frac{\partial\mathcal{L}}{\partial X^{\ell}}$ , the main output is a classical description of $\frac{\partial\mathcal{L}}{\partial F^{\ell}}$ , necessary to perform gradient descent on the parameters of $F^{\ell}$ . Section E.4 details the impact of the quantum backpropagation compared to the classical case, which can be reduced to a simple noise addition during gradient descent.
+
+# E.4 QUANTUM GRADIENT DESCENT AND CLASSICAL EQUIVALENCE
+
+In this part we detail the impact of the quantum backpropagation compared to the classical case, which can be reduced to a simple noise addition during gradient descent. Recall that gradient descent, in our case, consists in applying the update rule $F^{\ell} \gets F^{\ell} - \lambda \frac{\partial \mathcal{L}}{\partial F^{\ell}}$ with learning rate $\lambda$ .
+
+Let us write $x = \frac{\partial\mathcal{L}}{\partial F^{\ell}}$ with elements $x_{s,q} = \frac{\partial\mathcal{L}}{\partial F_{s,q}^{\ell}}$ . From the first result of Theorem F.7 with error $\delta > 0$ , and the tomography procedure from Theorem G.1 with the same error $\delta$ , we can obtain a classical description of $\frac{\overline{x}}{\|\overline{x}\|_2}$ with an $\ell_{\infty}$ norm guarantee, such that:
+
+$$
+\left\| \frac {\overline {{x}}}{\| \overline {{x}} \| _ {2}} - \frac {x}{\| x \| _ {2}} \right\| _ {\infty} \leq \delta
+$$
+
+in time $\widetilde{O} (\frac{\kappa(V)\mu(V)\log(1/\delta)}{\delta^2})$ , where $V$ denotes the matrix stored in the QRAM from which $x$ is obtained, as explained in Section E.2. The $\ell_{\infty}$ norm tomography is used so that the error is at most $\delta$ for each component:
+
+$$
+\forall (s, q), \left| \frac {\overline {{x _ {s , q}}}}{\| \overline {{x}} \| _ {2}} - \frac {x _ {s , q}}{\| x \| _ {2}} \right| \leq \delta
+$$
+
+From the second result of the Theorem F.7 we can also obtain an estimate $\| \overline{x} \|_2$ of the norm, for the same error $\delta$ , such that
+
+$$
+\left| \, \|\overline{x}\|_2 - \|x\|_2 \, \right| \leq \delta \|x\|_2
+$$
+
+in time $\widetilde{O} (\frac{\kappa(V)\mu(V)}{\delta}\log (1/\delta))$ (which does not affect the overall asymptotic running time). Using both results we can obtain an unnormalized vector close to $x$ such that, by the triangle inequality,
+
+$$
+\begin{array}{r l} \|\overline{x} - x\|_{\infty} & = \left\| \frac{\overline{x}}{\|\overline{x}\|_2}\|\overline{x}\|_2 - \frac{x}{\|x\|_2}\|x\|_2 \right\|_{\infty} \\ & \leq \left\| \frac{\overline{x}}{\|\overline{x}\|_2}\|\overline{x}\|_2 - \frac{\overline{x}}{\|\overline{x}\|_2}\|x\|_2 \right\|_{\infty} + \left\| \frac{\overline{x}}{\|\overline{x}\|_2}\|x\|_2 - \frac{x}{\|x\|_2}\|x\|_2 \right\|_{\infty} \\ & \leq 1 \cdot \left| \|\overline{x}\|_2 - \|x\|_2 \right| + \|x\|_2 \cdot \left\| \frac{\overline{x}}{\|\overline{x}\|_2} - \frac{x}{\|x\|_2} \right\|_{\infty} \\ & \leq \delta\|x\|_2 + \|x\|_2 \delta = 2\delta\|x\|_2 \end{array}
+$$
+
+in time $\widetilde{O} \left( \frac{\kappa(V) \mu(V) \log(\delta)}{\delta^2} \right)$ . In conclusion, with $\ell_{\infty}$ norm guarantee, having also access to the norm of the result is costless.
+
+Finally, the noisy gradient descent update rule, expressed as $F_{s,q}^{\ell} \gets F_{s,q}^{\ell} - \lambda \overline{\frac{\partial\mathcal{L}}{\partial F_{s,q}^{\ell}}}$ , can be written in the worst case as
+
+$$
+\overline{\frac{\partial\mathcal{L}}{\partial F_{s,q}^{\ell}}} = \frac{\partial\mathcal{L}}{\partial F_{s,q}^{\ell}} \pm 2\delta \left\| \frac{\partial\mathcal{L}}{\partial F^{\ell}} \right\|_{2} \tag{26}
+$$
+
+To summarize, using the quantum linear algebra from Theorem F.7 with $\ell_{\infty}$ norm tomography from Theorem G.1, both with error $\delta$ , along with norm estimation with relative error $\delta$ too, we can obtain classically the unnormalized values $\overline{\frac{\partial\mathcal{L}}{\partial F^{\ell}}}$ such that $\left\| \overline{\frac{\partial\mathcal{L}}{\partial F^{\ell}}} - \frac{\partial\mathcal{L}}{\partial F^{\ell}}\right\|_{\infty} \leq 2\delta \left\| \frac{\partial\mathcal{L}}{\partial F^{\ell}}\right\|_{2}$ or equivalently
+
+$$
+\forall (s, q), \quad \left| \overline{\frac{\partial\mathcal{L}}{\partial F_{s,q}^{\ell}}} - \frac{\partial\mathcal{L}}{\partial F_{s,q}^{\ell}} \right| \leq 2\delta \left\| \frac{\partial\mathcal{L}}{\partial F^{\ell}} \right\|_{2} \tag{27}
+$$
+
+Therefore the gradient descent update rule in the quantum case becomes $F_{s,q}^{\ell} \gets F_{s,q}^{\ell} - \lambda \overline{\frac{\partial\mathcal{L}}{\partial F_{s,q}^{\ell}}}$ , which in the worst case becomes
+
+$$
+F _ {s, q} ^ {\ell} \leftarrow F _ {s, q} ^ {\ell} - \lambda \left(\frac {\partial \mathcal {L}}{\partial F _ {s , q} ^ {\ell}} \pm 2 \delta \left\| \frac {\partial \mathcal {L}}{\partial F ^ {\ell}} \right\| _ {2}\right) \tag {28}
+$$
+
+This proves Theorem E.1. This update rule can be simulated by the addition of a random relative noise, drawn from a Gaussian centered at 0 with standard deviation $\delta$ . This is how we simulate quantum backpropagation in the numerical simulations.
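+As a minimal sketch of this simulation (assuming numpy; the function name, learning rate, and seed are illustrative choices, not from the paper's code):
+
+```python
+import numpy as np
+
+def noisy_update(F, grad, lr=0.01, delta=0.01, rng=None):
+    """Simulate update rule (28): a classical gradient step plus Gaussian
+    noise centered at 0 with standard deviation delta, scaled by the
+    l2-norm of the gradient, as our model of Theorem E.1."""
+    rng = rng or np.random.default_rng(0)
+    noise = delta * np.linalg.norm(grad) * rng.standard_normal(grad.shape)
+    return F - lr * (grad + noise)
+```
+
+With $\delta = 0$ this reduces exactly to the classical update rule.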
+
+Compared to the classical update rule, this corresponds to the addition of noise during the optimization step. This noise decreases with $\left\| \frac{\partial \mathcal{L}}{\partial F^{\ell}} \right\|_{2}$ , which is expected to shrink as the training converges. Recall that gradient descent is already a stochastic process. Therefore, we expect that such noise, for acceptable values of $\delta$ , will not disturb the convergence, as the following numerical simulations tend to confirm.
+
+# APPENDIX F PRELIMINARIES IN QUANTUM INFORMATION
+
+We introduce a basic and broad-audience quantum information background necessary for this work. For a more detailed introduction we recommend Nielsen & Chuang (2002a).
+
+# F.1 QUANTUM INFORMATION
+
+Quantum Bits and Quantum Registers: The bit is the most basic unit of classical information. It can be either in state 0 or 1. Similarly, a quantum bit, or qubit, is a quantum system that can be in state $|0\rangle$ , $|1\rangle$ (the braket notation $|\cdot \rangle$ is a reminder that the bit considered is a quantum system) or in a superposition of both states $\alpha |0\rangle +\beta |1\rangle$ with coefficients $\alpha ,\beta \in \mathbb{C}$ such that $|\alpha |^2 +|\beta |^2 = 1$ . The amplitudes $\alpha$ and $\beta$ are linked to the probabilities of observing either 0 or 1 when measuring the qubit, since $P(0) = |\alpha |^2$ and $P(1) = |\beta |^2$ .
+
+Before the measurement, any superposition is possible, which gives quantum information special abilities in terms of computation. With $n$ qubits, the $2^n$ possible binary combinations can exist simultaneously, each with a specific amplitude. For instance we can consider the uniform superposition $\frac{1}{\sqrt{2^n}} \sum_{i=0}^{2^n-1} |i\rangle$ where $|i\rangle$ represents the $i^{th}$ binary combination (e.g. $|01 \cdots 1001\rangle$ ). Multiple qubits together are often called a quantum register.
+
+In its most general formulation, a quantum state with $n$ qubits can be seen as a vector in a complex Hilbert space of dimension $2^n$ . This vector must be normalized in $\ell_2$ -norm, to guarantee that the squared amplitudes sum to 1.
+
+Quantum Computation: To process qubits and therefore quantum registers, we use quantum gates. These gates are unitary operators in the Hilbert space, as they must map unit-norm vectors to unit-norm vectors. Formally, we can see a quantum gate acting on $n$ qubits as a matrix $U \in \mathbb{C}^{2^n \times 2^n}$ such that $U U^{\dagger} = U^{\dagger} U = I$ , where $U^{\dagger}$ is the conjugate transpose of $U$ . Some basic single qubit gates include the NOT gate $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ that inverts $|0\rangle$ and $|1\rangle$ , or the Hadamard gate $\frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$ that maps $|0\rangle \mapsto \frac{1}{\sqrt{2}} (|0\rangle + |1\rangle)$ and $|1\rangle \mapsto \frac{1}{\sqrt{2}} (|0\rangle - |1\rangle)$ , creating a quantum superposition.
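+These single-qubit definitions can be checked numerically; a small sketch using plain matrices (not a quantum simulator):
+
+```python
+import numpy as np
+
+# Standard single-qubit gates as unitary matrices.
+NOT = np.array([[0, 1], [1, 0]], dtype=complex)
+H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
+
+ket0 = np.array([1, 0], dtype=complex)
+
+# The Hadamard gate maps |0> to the uniform superposition (|0> + |1>)/sqrt(2).
+plus = H @ ket0
+assert np.allclose(plus, np.array([1, 1]) / np.sqrt(2))
+
+# Unitarity: U U^dagger = I, so gates preserve the l2-norm (probabilities).
+assert np.allclose(H @ H.conj().T, np.eye(2))
+assert np.allclose(NOT @ NOT.conj().T, np.eye(2))
+```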
+
+Finally, multiple qubits gates exist, such as the Controlled-NOT that applies a NOT gate on a target qubit conditioned on the state of a control qubit.
+
+The main advantage of quantum gates is their ability to be applied to a superposition of inputs. Indeed, given a gate $U$ such that $U|x\rangle \mapsto |f(x)\rangle$ , we can apply it to all possible combinations of $x$ at once $U(\frac{1}{C}\sum_{x}|x\rangle) \mapsto \frac{1}{C}\sum_{x}|f(x)\rangle$ .
+
+We now state some primitive quantum circuits, which we will use in our algorithm:
+
+For two integers $i$ and $j$ , we can check their equality with the mapping $|i\rangle |j\rangle |0\rangle \mapsto |i\rangle |j\rangle |[i = j]\rangle$ . For two real-valued numbers $a > 0$ and $\delta > 0$ , we can compare them using $|a\rangle |\delta \rangle |0\rangle \mapsto |a\rangle |\delta \rangle |[a \leq \delta]\rangle$ . Finally, for a real-valued number $a > 0$ , we can obtain its square $|a\rangle |0\rangle \mapsto |a\rangle |a^2\rangle$ .
+
+Note that these circuits are essentially reversible versions of the classical ones, and their size is linear in the number of qubits used to encode the input values.
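+A classical sketch of these reversible primitives: each mapping keeps its inputs and XORs (or writes) the answer into a fresh output slot, so applying a mapping twice uncomputes it. Function names are ours, for illustration only.
+
+```python
+def check_equal(i, j, out=0):
+    """|i>|j>|out> -> |i>|j>|out XOR [i = j]>"""
+    return (i, j, out ^ int(i == j))
+
+def compare(a, delta, out=0):
+    """|a>|delta>|out> -> |a>|delta>|out XOR [a <= delta]>"""
+    return (a, delta, out ^ int(a <= delta))
+
+def square(a):
+    """|a>|0> -> |a>|a^2>"""
+    return (a, a * a)
+```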
+
+# F.2 QUANTUM SUBROUTINES FOR DATA ENCODING
+
+Knowing some basic principles of quantum information, the next step is to understand how data can be efficiently encoded using quantum states. While several approaches exist, we present the most common one, called amplitude encoding, which leads to interesting and efficient applications.
+
+Let $x \in \mathbb{R}^d$ be a vector with components $(x_0, \dots, x_{d-1})$ . Using only $\lceil \log(d) \rceil$ qubits, we can form $|x\rangle$ , the quantum state encoding $x$ , given by $|x\rangle = \frac{1}{\|x\|} \sum_{j=0}^{d-1} x_j |j\rangle$ . We see that the $j^{th}$ component $x_j$ becomes the amplitude of $|j\rangle$ , the $j^{th}$ binary combination (or equivalently the $j^{th}$ vector in the standard basis). Each amplitude must be divided by $\|x\|$ to preserve the unit $\ell_2$ -norm of $|x\rangle$ .
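+Classically, amplitude encoding is nothing more than $\ell_2$ normalization; a minimal sketch (function name is ours):
+
+```python
+import numpy as np
+
+def amplitude_encode(x):
+    """Return the amplitudes of |x>: x_j / ||x|| becomes the amplitude of
+    basis state |j>, so ceil(log2(d)) qubits suffice for d entries."""
+    x = np.asarray(x, dtype=float)
+    return x / np.linalg.norm(x)
+
+state = amplitude_encode([3.0, 4.0])
+# Squared amplitudes are the measurement probabilities and sum to 1.
+assert np.isclose(np.sum(state**2), 1.0)
+assert np.allclose(state, [0.6, 0.8])
+```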
+
+Similarly, for a matrix $A \in \mathbb{R}^{n \times d}$ or equivalently for $n$ vectors $A_i$ for $i \in [n]$ , we can express each row of $A$ as $|A_i\rangle = \frac{1}{\|\boldsymbol{A}_i\|} \sum_{j=0}^{d-1} A_{ij} |j\rangle$ .
+
+We can now state an important definition, the ability to have quantum access to a matrix. This will be a requirement for many algorithms.
+
+# Definition 4 [Quantum Access to Data]
+
+We say that we have quantum access to a matrix $A \in \mathbb{R}^{n \times d}$ if there exists a procedure to perform the following mappings, for $i \in [n]$ , in time $T$ :
+
+- $|i\rangle |0\rangle \mapsto |i\rangle |A_i\rangle$
+- $\left|0\right\rangle \mapsto \frac{1}{\left\|A\right\|_F}\sum_i\left\| A_i\right\|\left|i\right\rangle$
+
+By using appropriate data structures the first mapping can be reduced to the ability to perform a mapping of the form $|i\rangle |j\rangle |0\rangle \mapsto |i\rangle |j\rangle |A_{ij}\rangle$ . The second requirement can be replaced by the ability to perform $|i\rangle |0\rangle \mapsto |i\rangle |\| A_i\| \rangle$ , or by simply knowing each norm. Therefore, using matrices whose rows $A_{i}$ all have the same norm makes it simpler to obtain quantum access.
+
+The time or complexity $T$ necessary for quantum access can be reduced to a polylogarithmic dependence on $n$ and $d$ if we assume access to a quantum memory, or QRAM. The QRAM Kerenidis & Prakash (2017a) is a specific data structure with which a quantum circuit can provide quantum access to data in time $O(\log (nd))$ .
+
+Theorem F.1 (QRAM data structure, see Kerenidis & Prakash (2017a)) Let $A \in \mathbb{R}^{n \times d}$ , there is a data structure to store the rows of $A$ such that,
+
+1. The time to insert, update or delete a single entry $A_{ij}$ is $O(\log^2 (n))$ .
+2. A quantum algorithm with access to the data structure can perform the following unitaries in time $T = O(\log^2 n)$ .
+
+(a) $|i\rangle |0\rangle \rightarrow |i\rangle |A_i\rangle$ for $i\in [n]$ .
+(b) $|0\rangle \rightarrow \frac{1}{\| A\|_F}\sum_{i\in [n]}\| A_i\| |i\rangle .$
+
+We now state important methods for processing the quantum information. Their goal is to store some information alternatively in the quantum state's amplitude or in the quantum register as a bitstring.
+
+Theorem F.2 [Amplitude Amplification and Estimation Brassard et al. (2002)] Given a unitary operator $U$ such that $U:|0\rangle \mapsto \sqrt{p} |y\rangle |0\rangle +\sqrt{1 - p} |y^{\perp}\rangle |1\rangle$ in time $T$ , where $p > 0$ is the probability of measuring "0", it is possible to obtain the state $|y\rangle |0\rangle$ using $O(\frac{T}{\sqrt{p}})$ queries to $U$ , or to estimate $p$ with relative error $\delta$ using $O(\frac{T}{\delta\sqrt{p}})$ queries to $U$ .
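+The quadratic speedup of Theorem F.2 can be illustrated numerically: each amplification round rotates the state by an angle $2\theta$ with $\theta = \arcsin(\sqrt{p})$ , so about $\pi/4\theta = O(1/\sqrt{p})$ applications of $U$ suffice (a standard textbook computation, shown here only as a sketch):
+
+```python
+import numpy as np
+
+p = 1e-4                          # initial success probability
+theta = np.arcsin(np.sqrt(p))     # rotation angle per amplification round
+k = int(np.pi / (4 * theta))      # O(1/sqrt(p)) rounds vs O(1/p) classical retries
+amplified = np.sin((2 * k + 1) * theta) ** 2
+assert amplified > 0.99           # success probability after k rounds
+```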
+
+Theorem F.3 [Conditional Rotation] Given the quantum state $|a\rangle$ , with $a\in [-1,1]$ , it is possible to perform $|a\rangle |0\rangle \mapsto |a\rangle (a|0\rangle +\sqrt{1 - a^2} |1\rangle)$ with complexity $\widetilde{O}(1)$ .
+
+Using Theorem F.3 followed by Theorem F.2, it is then possible to transform the state $\frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}|x_j\rangle$ into $\frac{1}{\|x\|}\sum_{j=0}^{d-1}x_j|x_j\rangle$ .
+
+In addition to amplitude estimation, we will make use of a tool developed in Wiebe et al. (2014a) to boost the probability of getting a good estimate for the inner product required by the quantum convolution algorithm. At a high level, we take multiple copies of the estimator from the amplitude estimation procedure, compute the median, and reverse the circuit to get rid of the garbage. Here we provide a theorem with respect to time and not query complexity.
+
+Theorem F.4 (Median Evaluation, see Wiebe et al. (2014a)) Let $\mathcal{U}$ be a unitary operation that maps
+
+$$
+\mathcal {U}: | 0 ^ {\otimes n} \rangle \mapsto \sqrt {a} | x, 1 \rangle + \sqrt {1 - a} | G, 0 \rangle
+$$
+
+for some $1/2 < a \leq 1$ in time $T$ . Then there exists a quantum algorithm that, for any $\Delta > 0$ and for any $1/2 < a_0 \leq a$ , produces a state $|\Psi\rangle$ such that $\| |\Psi\rangle - |0^{\otimes nL}\rangle |x\rangle \| \leq \sqrt{2\Delta}$ for some integer $L$ , in time
+
+$$
+2 T \left\lceil \frac{\ln(1/\Delta)}{2\left(|a_0| - \frac{1}{2}\right)^{2}} \right\rceil.
+$$
+
+# F.3 QUANTUM SUBROUTINES FOR LINEAR ALGEBRA
+
+In recent years, as the field of quantum machine learning has grown, its "toolkit" of linear algebra subroutines has become rich enough to enable the development of many quantum machine learning algorithms. We introduce here the subroutines important for this work, without detailing the circuits or the algorithms.
+
+Definition 5 For a matrix $A$ , the parameter $\mu(A)$ is defined by $\mu(A) = \min_{p \in [0,1]} \left( \| A \|_F, \sqrt{s_{2p}(A)s_{2(1 - p)}(A^T)} \right)$ where $s_p(A) = \max_i (\| A_i \|_p^p)$ .
+
+The next theorems allow computing the distance between vectors encoded as quantum states, and use this idea to perform the $k$ -means algorithm.
+
+Theorem F.5 [Quantum Distance Estimation Wiebe et al. (2014b); Kerenidis et al. (2019)] Given quantum access in time $T$ to two matrices $U$ and $V$ with rows $u_{i}$ and $v_{j}$ of dimension $d$ , there is a quantum algorithm that, for any pair $(i,j)$ , performs the following mapping $|i\rangle |j\rangle |0\rangle \mapsto |i\rangle |j\rangle |\overline{d^2(u_i,v_j)}\rangle$ , estimating the Euclidean distance between $u_{i}$ and $v_{j}$ with precision $|\overline{d^{2}(u_{i},v_{j})} - d^{2}(u_{i},v_{j})| \leq \epsilon$ for any $\epsilon > 0$ . The algorithm has a running time given by $\widetilde{O}(T\eta/\epsilon)$ , where $\eta = \max_{ij}(\| u_i\| \| v_j\|)$ , assuming that $\min_{i}(\| u_{i}\|) = \min_{i}(\| v_{i}\|) = 1$ .
+
+Theorem F.6 [Quantum $k$ -means clustering Kerenidis et al. (2019)]
+
+Given quantum access in time $T$ to a dataset $V \in \mathbb{R}^{n \times d}$ , there is a quantum algorithm that outputs with high probability $k$ centroids $c_1, \dots, c_k$ that are consistent with the output of the $k$ -means algorithm with noise $\delta > 0$ , in time $\widetilde{O}(T \times (kd\frac{\eta(V)}{\delta^2}\kappa(V)(\mu(V) + k\frac{\eta(V)}{\delta}) + k^2\frac{\eta(V)^{1.5}}{\delta^2}\kappa(V)\mu(V)))$ per iteration.
+
+Definition 6 For a matrix $V \in \mathbb{R}^{n \times d}$ , its parameter $\eta(V)$ is defined as $\frac{\max_i (\|v_i\|^2)}{\min_i (\|v_i\|^2)}$ , or as $\max_i (\|v_i\|^2)$ assuming $\min_i (\|v_i\|) = 1$ .
+
+In Theorem F.6, the other parameters in the running time can be interpreted as follows: $\delta$ is the precision in the estimation of the distances, as well as in the estimation of the position of the centroids. $\kappa (V)$ is the condition number of $V$ and $\mu (V)$ is defined above (Definition 5). Finally, in the case of well-clusterable datasets, which should be the case when we apply $k$ -means during spectral clustering, the running time simplifies to $\widetilde{O} (T\times (k^2 d\frac{\eta(V)^{2.5}}{\delta^3} +k^{2.5}\frac{\eta(V)^2}{\delta^3}))$ .
+
+Note that the dependence on $n$ is hidden in the time $T$ to load the data. This dependence becomes polylogarithmic in $n$ if we assume access to a QRAM.
+
+Theorem F.7 (Quantum Matrix Operations, Chakraborty et al. (2018)) Let $M \in \mathbb{R}^{d \times d}$ and $x \in \mathbb{R}^d$ . Let $\delta_1, \delta_2 > 0$ . If $M$ is stored in appropriate QRAM data structures and the time to prepare $|x\rangle$ is $T_x$ , then there exist quantum algorithms that with probability at least $1 - 1/poly(d)$ return
+
+1. A state $|z\rangle$ such that $\left\| |z\rangle - |Mx\rangle \right\|_2 \leq \delta_1$ in time $\widetilde{O}((\kappa(M)\mu(M) + T_x\kappa(M))\log(1/\delta_1))$ . Note that this also implies $\left\| |z\rangle - |Mx\rangle \right\|_{\infty} \leq \delta_1$ .
+2. A norm estimate $z \in (1 \pm \delta_2) \| Mx \|_2$ , with relative error $\delta_2$ , in time $\widetilde{O}(T_x \frac{\kappa(M)\mu(M)}{\delta_2} \log(1 / \delta_1))$ .
+
+The linear algebra procedures above can also be applied to any rectangular matrix $V \in \mathbb{R}^{n \times d}$ by considering instead the symmetric matrix $\overline{V} = \begin{pmatrix} 0 & V \\ V^T & 0 \end{pmatrix}$ .
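+This standard dilation can be written explicitly; the sketch below also checks the well-known fact that the singular values of $V$ reappear as eigenvalues of $\overline{V}$ (illustrative code, not from the paper):
+
+```python
+import numpy as np
+
+def dilate(V):
+    """Embed a rectangular V into the symmetric matrix [[0, V], [V^T, 0]]."""
+    n, d = V.shape
+    return np.block([[np.zeros((n, n)), V],
+                     [V.T, np.zeros((d, d))]])
+
+V = np.array([[1.0, 2.0, 0.0],
+              [0.0, 1.0, 3.0]])
+S = dilate(V)
+assert np.allclose(S, S.T)
+# Largest eigenvalue of the dilation equals the largest singular value of V.
+assert np.isclose(np.max(np.linalg.eigvalsh(S)),
+                  np.linalg.svd(V, compute_uv=False)[0])
+```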
+
+# APPENDIX G ALGORITHM AND PROOF FOR $\ell_{\infty}$ NORM TOMOGRAPHY
+
+Finally, we present a logarithmic time algorithm for vector state tomography that will be used to recover classical information from the quantum states with an $\ell_{\infty}$ norm guarantee. Given a unitary $U$ that produces a quantum state $|x\rangle = \frac{1}{\|x\|_2}\sum_{j=0}^{d-1}x_j|j\rangle$ , by calling $U$ $O(\log d/\delta^2)$ times, the tomography algorithm is able to reconstruct a vector $\widetilde{X}$ that approximates $|x\rangle$ with an $\ell_{\infty}$ norm guarantee, such that $\left\|\left|\widetilde{X}\right\rangle - |x\rangle\right\|_{\infty} \leq \delta$ , or equivalently that $\forall i \in [d], |x_i - \widetilde{X}_i| \leq \delta$ . Such a tomography is of interest when the components $x_i$ of a quantum state are not the coordinates of a meaningful vector in some linear space, but just a series of values, so we don't want an overall guarantee on the vector (which is the case with the usual $\ell_2$ tomography) but rather a similar error guarantee for each component of the estimation.
+
+Theorem G.1 ( $\ell_{\infty}$ Vector state tomography) Given access to unitary $U$ such that $U|0\rangle = |x\rangle$ and its controlled version in time $T(U)$ , there is a tomography algorithm with time complexity $O(T(U)\frac{\log d}{\delta^2})$ that produces unit vector $\widetilde{X} \in \mathbb{R}^d$ such that $\left\| \widetilde{X} - x \right\|_{\infty} \leq \delta$ with probability at least $(1 - 1/poly(d))$ .
+
+The proof of this theorem is similar to the proof of the $\ell_2$ -norm tomography by Kerenidis & Prakash (2018). However, the $\ell_{\infty}$ norm tomography introduced in this paper depends only logarithmically, and not linearly, on the dimension $d$ . Note that in our case, $T(U)$ will be logarithmic in the dimension.
+
+Theorem G.2 $[\ell_2$ Vector state tomography Kerenidis & Prakash (2018)] Given access to unitary $U$ such that $U|0\rangle = |x\rangle$ and its controlled version in time $T(U)$ , there is an algorithm that outputs a classical vector $\widetilde{X} \in \mathbb{R}^d$ with $\ell_2$ -norm guarantee $\left\| \widetilde{X} - x \right\|_2 \leq \delta$ for any $\delta > 0$ , in time $O(T(U) \times \frac{d\log(d)}{\delta^2})$ .
+
+In the following we consider a quantum state $|x\rangle = \sum_{i\in [d]}x_i|i\rangle$ , with $x\in \mathbb{R}^d$ and $\| x\| _2 = 1$ .
+
+The following version of the Chernoff bound will be used for the analysis of Algorithm 3.
+
+Theorem G.3 (Chernoff Bound) Let $X_{j}$ , for $j \in [N]$ , be independent random variables such that $X_{j} \in [0,1]$ and let $X = \sum_{j \in [N]} X_{j}$ . We have the three following inequalities:
+
+1. For $0 < \beta < 1$ , $\mathbb{P}[X < (1 - \beta)\mathbb{E}[X]] \leq e^{-\beta^2 \mathbb{E}[X] / 2}$
+2. For $\beta > 0, \mathbb{P}[X > (1 + \beta)\mathbb{E}[X]] \leq e^{-\frac{\beta^2}{2 + \beta}\mathbb{E}[X]}$
+3. For $0 < \beta < 1$ , $\mathbb{P}[|X - \mathbb{E}[X]| \geq \beta \mathbb{E}[X]] \leq e^{-\beta^2 \mathbb{E}[X] / 3}$ , by composing 1. and 2.
+
+# Algorithm 3 $\ell_{\infty}$ norm tomography
+
+Require: Error $\delta > 0$ , access to unitary $U: |0\rangle \mapsto |x\rangle = \sum_{i \in [d]} x_i |i\rangle$ , the controlled version of $U$ , QRAM access.
+
+Ensure: Classical vector $\widetilde{X} \in \mathbb{R}^d$ , such that $\left\| \widetilde{X} \right\| = 1$ and $\left\| \widetilde{X} - x \right\|_{\infty} < \delta$ .
+
+1: Measure $N = \frac{36\ln d}{\delta^2}$ copies of $|x\rangle$ in the standard basis and count $n_i$ , the number of times the outcome $i$ is observed. Store $\sqrt{p_i} = \sqrt{n_i / N}$ in QRAM data structure.
+2: Create $N = \frac{36\ln d}{\delta^2}$ copies of the state $\frac{1}{\sqrt{2}} |0\rangle \sum_{i\in [d]}x_i|i\rangle +\frac{1}{\sqrt{2}} |1\rangle \sum_{i\in [d]}\sqrt{p_i} |i\rangle .$
+3: Apply a Hadamard gate on the first qubit of each copy to obtain
+
+$$
+| \phi \rangle = \frac {1}{2} \sum_ {i \in [ d ]} \left((x _ {i} + \sqrt {p _ {i}}) | 0, i \rangle + (x _ {i} - \sqrt {p _ {i}}) | 1, i \rangle\right)
+$$
+
+4: Measure both registers of each copy in the standard basis, and count $n(0,i)$ , the number of times the outcome $(0,i)$ is observed.
+5: Set $\sigma(i) = +1$ if $n(0,i) > 0.41Np_i$ and $\sigma(i) = -1$ otherwise.
+6: Output the unit vector $\widetilde{X}$ such that $\forall i\in [d],\widetilde{X}_i = \sigma(i)\sqrt{p_i}$ .
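+Algorithm 3 can be simulated classically by sampling measurement outcomes from the exact quantum distributions; a Monte Carlo sketch (function and variable names are ours) for a real unit vector:
+
+```python
+import numpy as np
+
+def linf_tomography(x, delta, seed=7):
+    """Classical simulation of Algorithm 3 for a real vector x."""
+    rng = np.random.default_rng(seed)
+    x = np.asarray(x, dtype=float)
+    x = x / np.linalg.norm(x)
+    d = len(x)
+    N = int(np.ceil(36 * np.log(d) / delta**2))
+    # Step 1: estimate squared amplitudes from N measurements of |x>.
+    p = rng.multinomial(N, x**2 / np.sum(x**2)) / N
+    # Steps 2-4: outcome (0, i) of |phi> has probability (x_i + sqrt(p_i))^2 / 4,
+    # outcome (1, i) has probability (x_i - sqrt(p_i))^2 / 4.
+    probs = np.concatenate([(x + np.sqrt(p)) ** 2, (x - np.sqrt(p)) ** 2]) / 4
+    n0 = rng.multinomial(N, probs / probs.sum())[:d]
+    # Step 5: the interference counts reveal the sign of each x_i.
+    sigma = np.where(n0 > 0.4 * N * p, 1.0, -1.0)
+    return sigma * np.sqrt(p)
+
+x = np.array([3.0, -1.0, 2.0, -2.0]) / np.sqrt(18)
+est = linf_tomography(x, delta=0.1)
+assert np.max(np.abs(est - x)) < (1 + np.sqrt(2)) * 0.1
+```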
+
+Theorem G.4 Algorithm 3 produces an estimate $\widetilde{X} \in \mathbb{R}^d$ such that $\left\| \widetilde{X} - x \right\|_{\infty} < (1 + \sqrt{2})\delta$ with probability at least $1 - \frac{1}{d^{0.83}}$ .
+
+Proving $\left\| x - \widetilde{X}\right\|_{\infty} \leq O(\delta)$ is equivalent to showing that for all $i \in [d]$ , we have $|x_i - \widetilde{X}_i| = |x_i - \sigma(i)\sqrt{p_i}| \leq O(\delta)$ . Let $S$ be the set of indices defined by $S = \{i \in [d]; |x_i| > \delta\}$ . We will separate the proof into the two cases $i \in S$ and $i \notin S$ .
+
+# Case 1: $i \in S$ .
+
+We will show that if $i \in S$ , we correctly have $\sigma(i) = \operatorname{sgn}(x_i)$ with high probability. Therefore we will need to bound $|x_i - \sigma(i)\sqrt{p_i}| = ||x_i| - \sqrt{p_i}|$ .
+
+We suppose that $x_{i} > 0$ . The value of $\sigma(i)$ correctly determines $\operatorname{sgn}(x_{i})$ if the number of times we have measured $(0, i)$ at Step 4 is more than half of its expectation, i.e. $n(0, i) > \frac{1}{2}\mathbb{E}[n(0, i)]$ . If $x_{i} < 0$ , the same arguments hold for $n(1, i)$ . We consider the random variable that represents the outcome of a measurement on state $|\phi\rangle$ . The Chernoff bound, part 1, with $\beta = 1/2$ gives
+
+$$
+\mathbb {P} [ n (0, i) \leq \frac {1}{2} \mathbb {E} [ n (0, i) ] ] \leq e ^ {- \mathbb {E} [ n (0, i) ] / 8} \tag {29}
+$$
+
+From the definition of $|\phi \rangle$ we have $\mathbb{E}[n(0,i)] = \frac{N}{4} (x_i + \sqrt{p_i})^2$ . We will lower bound this value with the following argument.
+
+For the $k^{th}$ measurement of $|x\rangle$ , with $k \in [N]$ , let $X_{k}$ be a random variable such that $X_{k} = 1$ if the outcome is $i$ , and 0 otherwise. We define $X = \sum_{k \in [N]} X_{k}$ . Note that $X = n_{i} = N p_{i}$ and $\mathbb{E}[X] = N x_{i}^{2}$ . We can apply the Chernoff Bound, part 3 on $X$ for $\beta = 1/2$ to obtain,
+
+$$
+\mathbb{P}\left[ |X - \mathbb{E}[X]| \geq \mathbb{E}[X]/2 \right] \leq e^{-\mathbb{E}[X]/12} \tag{30}
+$$
+
+$$
+\mathbb{P}\left[ \left| x_i^2 - p_i \right| \geq x_i^2/2 \right] \leq e^{-N x_i^2/12}
+$$
+
+We have $N = \frac{36\ln d}{\delta^2}$ and by assumption $x_{i}^{2} > \delta^{2}$ (since $i\in S$ ). Therefore,
+
+$$
+\mathbb{P}\left[ \left| x_i^2 - p_i \right| \geq x_i^2/2 \right] \leq e^{-36\ln d/12} = 1/d^{3}
+$$
+
+This proves that the event $|x_i^2 - p_i| \leq x_i^2 / 2$ occurs with probability at least $1 - \frac{1}{d^3}$ if $i \in S$ . This inequality is equivalent to $\sqrt{2p_i / 3} \leq |x_i| \leq \sqrt{2p_i}$ . Thus, with high probability we have $\mathbb{E}[n(0,i)] = \frac{N}{4} (x_i + \sqrt{p_i})^2 \geq 0.82Np_i$ , since $\sqrt{2p_i / 3} \leq |x_i|$ . Moreover, since $p_i \geq x_i^2 / 2$ , $\mathbb{E}[n(0,i)] \geq 0.82Nx_i^2 / 2 \geq 14.7\ln d$ . Therefore, Equation (29) becomes
+
+$$
+\mathbb{P}\left[ n(0,i) \leq 0.41 N p_i \right] \leq e^{-1.83\ln d} = 1/d^{1.83}
+$$
+
+We conclude that for $i \in S$ , if $n(0,i) > 0.41Np_i$ , the sign of $x_{i}$ is correctly determined by $\sigma (i)$ with high probability $1 - \frac{1}{d^{1.83}}$ , as indicated in Step 5.
+
+We finally show that $|x_{i} - \sigma (i)\sqrt{p_{i}}| = ||x_{i}| - \sqrt{p_{i}}|$ is bounded. Again by the Chernoff bound (part 3) we have, for $0 < \beta < 1$ :
+
+$$
+\mathbb{P}\left[ |x_i^2 - p_i| \geq \beta x_i^2 \right] \leq e^{-\beta^2 N x_i^2/3}
+$$
+
+By the identity $|x_i^2 - p_i| = (|x_i| - \sqrt{p_i})(|x_i| + \sqrt{p_i})$ we have
+
+$$
+\mathbb{P}\left[ \left| |x_i| - \sqrt{p_i} \right| \geq \beta \frac{x_i^2}{|x_i| + \sqrt{p_i}} \right] \leq e^{-\beta^2 N x_i^2/3}
+$$
+
+Since $\sqrt{p_i} > 0$ , we have $\beta \frac{x_i^2}{|x_i| + \sqrt{p_i}} \leq \beta \frac{x_i^2}{|x_i|} = \beta |x_i|$ , therefore $\mathbb{P}\left[\left|\left|x_i\right| - \sqrt{p_i}\right| \geq \beta |x_i|\right] \leq e^{-\beta^2 N x_i^2 / 3}$ . Finally, by choosing $\beta = \delta / |x_i| < 1$ we have
+
+$$
+\mathbb{P}\left[ \left| |x_i| - \sqrt{p_i} \right| \geq \delta \right] \leq e^{-36\ln d/3} = 1/d^{12}
+$$
+
+We conclude that, if $i \in S$ , we have $|x_i - \tilde{X}_i| \leq \delta$ with high probability.
+
+Since $|S| \leq d$ , this result holds simultaneously for all $i \in S$ with probability at least $1 - \frac{1}{d^{0.83}}$ , by a union bound on the correctness of $\sigma(i)$ .
+
+# Case 2: $i \notin S$ .
+
+If $i \notin S$ , we again need to separate into two cases. When the estimated sign is wrong, i.e. $\sigma(i) = -\operatorname{sgn}(x_i)$ , we have to bound $|x_i - \sigma(i)\sqrt{p_i}| = ||x_i| + \sqrt{p_i}|$ . On the contrary, if it is correct, i.e. $\sigma(i) = \operatorname{sgn}(x_i)$ , we have to bound $|x_i - \sigma(i)\sqrt{p_i}| = ||x_i| - \sqrt{p_i}| \leq ||x_i| + \sqrt{p_i}|$ . Therefore only one bound is necessary.
+
+We use Chernoff Bound (2.) on the random variable $X$ with $\beta > 0$ to obtain
+
+$$
+\mathbb {P} [ p _ {i} > (1 + \beta) x _ {i} ^ {2} ] \leq e ^ {- \frac {\beta ^ {2}}{2 + \beta} N x _ {i} ^ {2}}
+$$
+
+We choose $\beta = \delta^2 / x_i^2$ and obtain $\mathbb{P}[p_i > x_i^2 + \delta^2] \leq e^{-\frac{\delta^4}{3\delta^2} N} = 1 / d^{12}$ . Therefore, if $i \notin S$ , with very high probability $1 - \frac{1}{d^{12}}$ we have $p_i \leq x_i^2 + \delta^2 \leq 2\delta^2$ . We can now bound the error:
+
+$$
+\left| x _ {i} - \tilde {X} _ {i} \right| \leq \left| \left| x _ {i} \right| + \sqrt {p _ {i}} \right| \leq \delta + \sqrt {2} \delta = (1 + \sqrt {2}) \delta
+$$
+
+Since $|\overline{S}| \leq d$ , by the Union Bound on the event $p_i > x_i^2 + \delta^2$ , this result holds for all $i \notin S$ simultaneously with probability at least $1 - \frac{1}{d^{11}}$ .
+
+# APPENDIX H ADDITIONAL NUMERICAL SIMULATIONS
+
+Figure 9: Numerical simulations of the training of the QCNN. These training curves represent the evolution of the Loss $\mathcal{L}$ as we iterate through the MNIST dataset. For each graph, the amplitude estimation error $\epsilon$ (0.1, 0.01), the non-linearity cap $C$ (2, 10), and the backpropagation error $\delta$ (0.1, 0.01) are fixed, whereas the quantum sampling ratio $\sigma$ varies from 0.1 to 0.5. Each training curve can be compared to the classical learning curve (CNN). Note that these training curves are smoothed over windows of 12 steps for readability.
+
+
+
+In the following we report the classification results of the QCNN applied to the test set (10,000 images). We distinguish two use cases: in Table 4 the QCNN has been trained quantumly as described in this paper, whereas in Table 5 we first trained the classical CNN and then transferred the weights to the QCNN for classification only. This second use case has a worse global running time than the first one, but we see it as another concrete application: quantum machine learning could be used solely for faster classification from a classically generated model, which could be relevant for high-rate classification tasks (e.g., autonomous systems, classification over many simultaneous inputs). We report the test loss and accuracy for different values of the sampling ratio $\sigma$ , the amplitude estimation error $\epsilon$ , and, in the first case, the backpropagation noise $\delta$ . The cap $C$ is fixed at 10. These values should be compared to the classical CNN classification metrics, for which the loss is 0.129 and the accuracy is $96.1\%$ . Note that we used a relatively small CNN, hence the accuracy is just over $96\%$ , lower than the best possible accuracy with larger CNNs.
+
+**QCNN Test - Classification**
+
+| σ | Metric | ε = 0.01, δ = 0.01 | ε = 0.01, δ = 0.1 | ε = 0.1, δ = 0.01 | ε = 0.1, δ = 0.1 |
+| --- | --- | --- | --- | --- | --- |
+| 0.1 | Loss | 0.519 | 0.773 | 2.30 | 2.30 |
+| 0.1 | Accuracy | 82.8% | 74.8% | 11.5% | 11.7% |
+| 0.2 | Loss | 0.334 | 0.348 | 0.439 | 1.367 |
+| 0.2 | Accuracy | 89.5% | 89.0% | 86.2% | 54.1% |
+| 0.3 | Loss | 0.213 | 0.314 | 0.381 | 0.762 |
+| 0.3 | Accuracy | 93.4% | 90.3% | 87.9% | 76.8% |
+| 0.4 | Loss | 0.177 | 0.215 | 0.263 | 1.798 |
+| 0.4 | Accuracy | 94.7% | 93.3% | 91.8% | 34.9% |
+| 0.5 | Loss | 0.142 | 0.211 | 0.337 | 1.457 |
+| 0.5 | Accuracy | 95.4% | 93.5% | 89.2% | 52.8% |
+
+Table 4: QCNN trained with quantum backpropagation on the MNIST dataset, with $C = 10$ fixed.
+
+**QCNN Test - Classification**
+
+| σ | Metric | ε = 0.01 | ε = 0.1 |
+| --- | --- | --- | --- |
+| 0.1 | Loss | 1.07 | 1.33 |
+| 0.1 | Accuracy | 86.1% | 78.6% |
+| 0.2 | Loss | 0.552 | 0.840 |
+| 0.2 | Accuracy | 92.8% | 86.5% |
+| 0.3 | Loss | 0.391 | 0.706 |
+| 0.3 | Accuracy | 94.3% | 85.8% |
+| 0.4 | Loss | 0.327 | 0.670 |
+| 0.4 | Accuracy | 94.4% | 84.0% |
+| 0.5 | Loss | 0.163 | 0.292 |
+| 0.5 | Accuracy | 95.9% | 93.5% |
+
+Table 5: QCNN created from a classical CNN trained on the MNIST dataset, with $\delta = 0.01$ and $C = 10$ fixed.
\ No newline at end of file
diff --git a/quantumalgorithmsfordeepconvolutionalneuralnetworks/images.zip b/quantumalgorithmsfordeepconvolutionalneuralnetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..c559f25f760c3feec2573b7ea2f24a65a1e88246
--- /dev/null
+++ b/quantumalgorithmsfordeepconvolutionalneuralnetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b091c8eeb723bb69202ecac554c6106fe959f04b4b2cde8d49f74232fc5e8631
+size 957494
diff --git a/quantumalgorithmsfordeepconvolutionalneuralnetworks/layout.json b/quantumalgorithmsfordeepconvolutionalneuralnetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..bd15d835091c0a6ee47713ca4c15287d44c67445
--- /dev/null
+++ b/quantumalgorithmsfordeepconvolutionalneuralnetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:73f1d7ebb76211f243685450fedf72c668ca9f1e3d917d11c45bf856fe4d7417
+size 1830329
diff --git a/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/317c6b55-261c-4774-813a-93e9719e57f2_content_list.json b/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/317c6b55-261c-4774-813a-93e9719e57f2_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c8a86f5881f2160409cf11293fcb8ccc6a2f8e79
--- /dev/null
+++ b/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/317c6b55-261c-4774-813a-93e9719e57f2_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e0ba03b1a98dc9b2e60d608a80a4a4bc8141f55cb0ab63ff25f24c8f02564313
+size 114621
diff --git a/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/317c6b55-261c-4774-813a-93e9719e57f2_model.json b/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/317c6b55-261c-4774-813a-93e9719e57f2_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b921e02c4c08bc506e755334d834271934725c78
--- /dev/null
+++ b/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/317c6b55-261c-4774-813a-93e9719e57f2_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:681b9191a34c938933d136b8a66a83ec57e579ac886cfede12d95035d65b5100
+size 134145
diff --git a/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/317c6b55-261c-4774-813a-93e9719e57f2_origin.pdf b/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/317c6b55-261c-4774-813a-93e9719e57f2_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..42b52993e1f153796aad7fe4845879d60ee1d1eb
--- /dev/null
+++ b/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/317c6b55-261c-4774-813a-93e9719e57f2_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9fedbd65ba759f6c632c5c91389afa18b6733af6b6b75b0126bb7fa1f2c45567
+size 537969
diff --git a/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/full.md b/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..0064fdd0e3d753645238d97e9ce237c6f754c164
--- /dev/null
+++ b/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/full.md
@@ -0,0 +1,398 @@
+# QUERY2BOX: REASONING OVER KNOWLEDGE GRAPHS IN VECTOR SPACE USING BOX EMBEDDINGS
+
+Hongyu Ren*, Weihua Hu*, Jure Leskovec
+
+Department of Computer Science, Stanford University
+
+{hyren, weihuahu, jure}@cs.stanford.edu
+
+# ABSTRACT
+
+Answering complex logical queries on large-scale incomplete knowledge graphs (KGs) is a fundamental yet challenging task. Recently, a promising approach to this problem has been to embed KG entities as well as the query into a vector space such that entities that answer the query are embedded close to the query. However, prior work models queries as single points in the vector space, which is problematic because a complex query represents a potentially large set of its answer entities, but it is unclear how such a set can be represented as a single point. Furthermore, prior work can only handle queries that use conjunctions $(\land)$ and existential quantifiers $(\exists)$ . Handling queries with logical disjunctions $(\lor)$ remains an open problem. Here we propose QUERY2BOX, an embedding-based framework for reasoning over arbitrary queries with $\land, \lor$ , and $\exists$ operators in massive and incomplete KGs. Our main insight is that queries can be embedded as boxes (i.e., hyper-rectangles), where a set of points inside the box corresponds to a set of answer entities of the query. We show that conjunctions can be naturally represented as intersections of boxes and also prove a negative result that handling disjunctions would require embedding with dimension proportional to the number of KG entities. However, we show that by transforming queries into a Disjunctive Normal Form, QUERY2BOX is capable of handling arbitrary logical queries with $\land, \lor, \exists$ in a scalable manner. We demonstrate the effectiveness of QUERY2BOX on three large KGs and show that QUERY2BOX achieves up to $25\%$ relative improvement over the state of the art.
+
+# 1 INTRODUCTION
+
+Knowledge graphs (KGs) capture different types of relationships between entities, e.g., Canada $\xrightarrow{\text{citizen}}$ Hinton. Answering arbitrary logical queries, such as "where did Canadian citizens with Turing Award graduate?", over such KGs is a fundamental task in question answering, knowledge base reasoning, as well as AI more broadly.
+
+First-order logical queries can be represented as Directed Acyclic Graphs (DAGs) (Fig. 1(A)) and reasoned over according to the DAGs to obtain a set of answers (Fig. 1(C)). While simple and intuitive, such an approach has many drawbacks: (1) Computational complexity of subgraph matching is exponential in the query size, and thus cannot scale to modern KGs; (2) Subgraph matching is very sensitive to missing relations, as it cannot correctly answer queries when relations are absent from the KG. To remedy (2) one could impute missing relations (Koller et al., 2007; Džeroski, 2009; De Raedt, 2008; Nickel et al., 2016), but that would only make the KG denser, which would further exacerbate issue (1) (Dalvi & Suciu, 2007; Krompaß et al., 2014).
+
+Recently, a promising alternative approach has emerged, where logical queries as well as KG entities are embedded into a low-dimensional vector space such that entities that answer the query are embedded close to the query (Guu et al., 2015; Hamilton et al., 2018; Das et al., 2017). Such approach robustly handles missing relations (Hamilton et al., 2018) and is also orders of magnitude faster, as answering an arbitrary logical query is reduced to simply identifying entities nearest to the embedding of the query in the vector space.
+
+
+$q = V_{?}\, .\, \exists V: \text{Win}(\text{TuringAward}, V) \wedge \text{Citizen}(\text{Canada}, V) \wedge \text{Graduate}(V, V_{?})$
+
+
+(A) Query $q$ and Its Dependency Graph
+(B) Computation Graph
+
+
+(C) Knowledge Graph Space
+
+
+(D) Vector Space
+Figure 1: Query2Box reasoning framework. (A) A given conjunctive query “Where did Canadian citizens with Turing Award graduate?” can be represented with a dependency graph. (B) Computation graph specifies the reasoning procedure to obtain a set of answers for the query in (A). (C) Example knowledge graph, where green nodes/entities denote answers to the query. Bold arrows indicate subgraphs that match the query graph in (A). (D) In QUERY2BOX, nodes of the KG are embedded as points in the vector space. We then obtain query embedding according to the computation graph (B) as a sequence of box operations: start with two nodes TuringAward and Canada and apply Win and Citizen projection operators, followed by an intersection operator (denoted as a shaded intersection of yellow and orange boxes) and another projection operator. The final embedding of the query is a green box and query’s answers are the entities inside the box.
+
+However, prior work embeds a query into a single point in the vector space. This is problematic because answering a logical query requires modeling a set of active entities while traversing the KG (Fig. 1(C)), and how to effectively model a set with a single point is unclear. Furthermore, it is also unnatural to define logical operators (e.g., set intersection) of two points in the vector space. Another fundamental limitation of prior work is that it can only handle conjunctive queries, a subset of first-order logic that only involves conjunction $(\wedge)$ and existential quantifier $(\exists)$ , but not disjunction $(\lor)$ . It remains an open question how to handle disjunction effectively in the vector space.
+
+Here we present QUERY2BOX, an embedding-based framework for reasoning over KGs that is capable of handling arbitrary Existential Positive First-order (EPFO) logical queries (i.e., queries that include any set of $\land$ , $\lor$ , and $\exists$ ) in a scalable manner. First, to accurately model a set of entities, our key idea is to use a closed region rather than a single point in the vector space. Specifically, we use a box (axis-aligned hyper-rectangle) to represent a query (Fig. 1(D)). This provides three important benefits: (1) Boxes naturally model sets of entities they enclose; (2) Logical operators (e.g., set intersection) can naturally be defined over boxes similarly as in Venn diagrams (Venn, 1880); (3) Executing logical operators over boxes results in new boxes, which means that the operations are closed; thus, logical reasoning can be efficiently performed in QUERY2BOX by iteratively updating boxes according to the query computation graph (Fig. 1(B)(D)).
+
+We show that QUERY2BOX can naturally handle conjunctive queries. We first prove a negative result that embedding EPFO queries to only single points or boxes is intractable as it would require embedding dimension proportional to the number of KG entities. However, we provide an elegant solution, where we transform a given EPFO logical query into a Disjunctive Normal Form (DNF) (Davey & Priestley, 2002), i.e., disjunction of conjunctive queries. Given any EPFO query, QUERY2BOX represents it as a set of individual boxes, where each box is obtained for each conjunctive query in the DNF. We then return nearest neighbor entities to any of the boxes as the answers to the query. This means that to answer any EPFO query we first answer individual conjunctive queries and then take the union of the answer entities.
+
+We evaluate QUERY2BOX on three standard KG benchmarks and show: (1) QUERY2BOX provides strong generalization as it can answer complex queries; (2) QUERY2BOX can generalize to new
+
+logical query structures that it has never seen during training; (3) QUERY2BOX is able to implicitly impute missing relations as it can answer any EPFO query with high accuracy even when relations involving answering the query are missing in the KG; (4) QUERY2BOX provides up to $25\%$ relative improvement in accuracy of answering EPFO queries over state-of-the-art baselines.
+
+# 2 FURTHER RELATED WORK
+
+Most related to our work are embedding approaches for multi-hop reasoning over KGs (Bordes et al., 2013; Das et al., 2017; Guu et al., 2015; Hamilton et al., 2018). The crucial difference is that we provide a way to tractably handle a larger subset of the first-order logic (EPFO queries vs. conjunctive queries) and that we embed queries as boxes, which provides better accuracy and generalization.
+
+A second line of related work is on structured embeddings, which associate images, words, sentences, or knowledge base concepts with geometric objects such as regions (Erk, 2009; Vilnis et al., 2018; Li et al., 2019), densities (Vilnis & McCallum, 2014; He et al., 2015; Athiwaratkun & Wilson, 2018), and orderings (Vendrov et al., 2016; Lai & Hockenmaier, 2017; Li et al., 2017). While the above work uses geometric objects to model individual entities and their pairwise relations, we use the geometric objects to model sets of entities and reason over those sets. In this sense our work is also related to classical Venn Diagrams (Venn, 1880): our boxes are essentially Venn Diagrams in vector space, but our boxes and entity embeddings are jointly learned, which allows us to reason over incomplete KGs.
+
+Box embeddings have also been used to model hierarchical nature of concepts in an ontology with uncertainty (Vilnis et al., 2018; Li et al., 2019). While our work is also based on box embeddings we employ them for logical reasoning in massive heterogeneous knowledge graphs.
+
+# 3 QUERY2BOX: LOGICAL REASONING OVER KGS IN VECTOR SPACE
+
+Here we present QUERY2BOX, where we define an objective function that allows us to learn embeddings of entities in the KG, and at the same time also learn parameterized geometric logical operators over boxes. Then, given an arbitrary EPFO query $q$ (Fig. 1(A)), we identify its computation graph (Fig. 1(B)) and embed the query by executing a set of geometric operators over boxes (Fig. 1(D)). Entities that are enclosed in the final box embedding are returned as answers to the query (Fig. 1(D)).
+
+In order to train our system, we generate a set of queries together with their answers at training time and then learn entity embeddings and geometric operators such that queries can be accurately answered. We show in the following sections that our approach is able to generalize to queries and logical structures never seen during training. Furthermore, as we show in experiments, our approach is able to implicitly impute missing relations and answer queries that would be impossible to answer with traditional graph traversal methods.
+
+In the following we first only consider conjunctive queries (conjunction and existential operator) and then we extend our method to also include disjunction.
+
+# 3.1 KNOWLEDGE GRAPHS AND CONJUNCTIVE QUERIES
+
+We denote a KG as $\mathcal{G} = (\mathcal{V},\mathcal{R})$ , where $v\in \mathcal{V}$ represents an entity, and $r\in \mathcal{R}$ is a binary function $r:\mathcal{V}\times \mathcal{V}\to \{\mathrm{True},\mathrm{False}\}$ , indicating whether the relation $r$ holds between a pair of entities or not. In the KG, such binary output indicates the existence of the directed edge between a pair of entities, i.e., $v\stackrel {r}{\rightarrow}v^{\prime}$ iff $r(v,v^{\prime}) = \mathrm{True}$ .
+
+Conjunctive queries are a subclass of first-order logical queries that use existential quantification ($\exists$) and conjunction ($\wedge$) operations. They are formally defined as follows.
+
+$$
+q \left[ V _ {?} \right] = V _ {?} \cdot \exists V _ {1}, \dots , V _ {k}: e _ {1} \wedge e _ {2} \wedge \dots \wedge e _ {n}, \tag {1}
+$$
+
+$$
+\text {where } e _ {i} = r \left(v _ {a}, V\right), \quad V \in \left\{V _ {?}, V _ {1}, \dots , V _ {k} \right\}, v _ {a} \in \mathcal {V}, r \in \mathcal {R},
+$$
+
+$$
+\text {or } e _ {i} = r \left(V, V ^ {\prime}\right), \quad V, V ^ {\prime} \in \left\{V _ {?}, V _ {1}, \dots , V _ {k} \right\}, V \neq V ^ {\prime}, r \in \mathcal {R},
+$$
+
+where $v_{a}$ represents non-variable anchor entity, $V_{1},\ldots ,V_{k}$ are existentially quantified bound variables, $V_{?}$ is the target variable. The goal of answering the logical query $q$ is to find a set of entities $[[q]]\subseteq \mathcal{V}$ such that $v\in [[q]]$ iff $q[v] = \mathrm{True}$ . We call $[[q]]$ the denotation set (i.e., answer set) of query $q$ .
+
+As shown in Fig. 1(A), the dependency graph is a graphical representation of conjunctive query $q$ , where nodes correspond to variable or non-variable entities in $q$ and edges correspond to relations in $q$ . In order for the query to be valid, the corresponding dependency graph needs to be a Directed Acyclic Graph (DAG), with the anchor entities as the source nodes of the DAG and the query target $V_{?}$ as the unique sink node (Hamilton et al., 2018).
+
+From the dependency graph of query $q$ , one can also derive the computation graph, which consists of two types of directed edges that represent operators over sets of entities:
+
+- Projection: Given a set of entities $S \subseteq \mathcal{V}$ and a relation $r \in \mathcal{R}$ , this operator obtains $\cup_{v \in S} A_{r}(v)$ , where $A_{r}(v) \equiv \{v^{\prime} \in \mathcal{V} : r(v, v^{\prime}) = \text{True}\}$ .
+- Intersection: Given a set of entity sets $\{S_1, S_2, \ldots, S_n\}$ , this operator obtains $\cap_{i=1}^{n} S_i$ .
+
+For a given query $q$ , the computation graph specifies the procedure of reasoning to obtain a set of answer entities, i.e., starting from a set of anchor nodes, the above two operators are applied iteratively until the unique sink target node is reached. The entire procedure is analogous to traversing KGs following the computation graph (Guu et al., 2015).
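+On a toy KG these two operators reduce to plain set algebra; the entities and edges below are made up for illustration, following the query of Fig. 1:

```python
# Minimal sketch of the two set operators on a toy KG, executed exactly as
# the computation graph prescribes (entities and edges are illustrative).
edges = {
    ("TuringAward", "Win"): {"Hinton", "Bengio"},
    ("Canada", "Citizen"): {"Hinton", "Trudeau"},
    ("Hinton", "Graduate"): {"Edinburgh"},
}

def project(S, r):
    # Projection: union of the r-neighbor sets A_r(v) over all v in S.
    return set().union(*(edges.get((v, r), set()) for v in S))

def intersect(*sets):
    # Intersection of entity sets.
    return set.intersection(*sets)

winners = project({"TuringAward"}, "Win")
citizens = project({"Canada"}, "Citizen")
answers = project(intersect(winners, citizens), "Graduate")
print(answers)  # {'Edinburgh'}
```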
+
+# 3.2 REASONING OVER SETS OF ENTITIES USING BOX EMBEDDINGS
+
+So far we have defined conjunctive queries as computation graphs that can be executed directly over the nodes and edges in the KG. Now, we define logical reasoning in the vector space. Our intuition follows Fig. 1: Given a complex query, we shall decompose it into a sequence of logical operations, and then execute these operations in the vector space. This way we will obtain the embedding of the query, and answers to the query will be entities that are enclosed in the final query embedding box.
+
+In the following, we detail our two methodological advances: (1) the use of box embeddings to efficiently model and reason over sets of entities in the vector space, and (2) how to tractably handle disjunction operator $(\vee)$ , expanding the class of first-order logic that can be modeled in the vector space (Section 3.3).
+
+Box embeddings. To efficiently model a set of entities in the vector space, we use boxes (i.e., axis-aligned hyper-rectangles). The benefit is that unlike a single point, the box has the interior; thus, if an entity is in a set, it is natural to model the entity embedding to be a point inside the box. Formally, we operate on $\mathbb{R}^d$ , and define a box in $\mathbb{R}^d$ by $\mathbf{p} = (\mathrm{Cen}(\mathbf{p}),\mathrm{Off}(\mathbf{p}))\in \mathbb{R}^{2d}$ as:
+
+$$
+\mathrm {Box} _ {\mathbf {p}} \equiv \left\{\mathbf {v} \in \mathbb {R} ^ {d}: \mathrm {Cen} (\mathbf {p}) - \mathrm {Off} (\mathbf {p}) \preceq \mathbf {v} \preceq \mathrm {Cen} (\mathbf {p}) + \mathrm {Off} (\mathbf {p}) \right\}, \tag {2}
+$$
+
+where $\preceq$ is element-wise inequality, $\mathrm{Cen}(\mathbf{p})\in \mathbb{R}^d$ is the center of the box, and $\mathrm{Off}(\mathbf{p})\in \mathbb{R}_{\geq 0}^{d}$ is the positive offset of the box, modeling the size of the box. Each entity $v\in \mathcal{V}$ in KG is assigned a single vector $\mathbf{v}\in \mathbb{R}^d$ (i.e., a zero-size box), and the box embedding $\mathbf{p}$ models $\{v\in \mathcal{V}:\mathbf{v}\in \mathrm{Box}_{\mathbf{p}}\}$ , i.e., a set of entities whose vectors are inside the box. For the rest of the paper, we use the bold face to denote the embedding, e.g., embedding of $v$ is denoted by $\mathbf{v}$ .
+
+Our framework reasons over KGs in the vector space following the computation graph of the query, as shown in Fig. 1(D): we start from the initial box embeddings of the source nodes (anchor entities) and sequentially update the embeddings according to the logical operators. Below, we describe how we set initial box embeddings for the source nodes, as well as how we model projection and intersection operators (defined in Sec. 3.1) as geometric operators that operate over boxes. After that, we describe our entity-to-box distance function and the overall objective that learns embeddings as well as the geometric operators.
+
+Initial boxes for source nodes. Each source node represents an anchor entity $v \in \mathcal{V}$ , which we can regard as a set that only contains the single entity. Such a single-element set can be naturally modeled by a box of size/offset zero centered at $\mathbf{v}$ . Formally, we set the initial box embedding as $(\mathbf{v}, \mathbf{0})$ , where $\mathbf{v} \in \mathbb{R}^d$ is the anchor entity vector and $\mathbf{0}$ is a $d$ -dimensional all-zero vector.
+
+Geometric projection operator. We associate each relation $r \in \mathcal{R}$ with relation embedding $\mathbf{r} = (\mathrm{Cen}(\mathbf{r}), \mathrm{Off}(\mathbf{r})) \in \mathbb{R}^{2d}$ with $\mathrm{Off}(\mathbf{r}) \succeq \mathbf{0}$ . Given an input box embedding $\mathbf{p}$ , we model the projection by $\mathbf{p} + \mathbf{r}$ , where we sum the centers and sum the offsets. This gives us a new box with the translated center and larger offset because $\mathrm{Off}(\mathbf{r}) \succeq \mathbf{0}$ , as illustrated in Fig. 2(A). The adaptive box size effectively models a different number of entities/vectors in the set.
+
+Geometric intersection operator. We model the intersection of a set of box embeddings $\{\mathbf{p}_1, \ldots, \mathbf{p}_n\}$ as $\mathbf{p}_{\mathrm{inter}} = (\mathrm{Cen}(\mathbf{p}_{\mathrm{inter}}), \mathrm{Off}(\mathbf{p}_{\mathrm{inter}}))$ , which is calculated by performing attention
+
+
+Figure 2: The geometric intuition of the two operations and distance function in QUERY2BOX. (A) Projection generates a larger box with a translated center. (B) Intersection generates a smaller box lying inside the given set of boxes. (C) Distance $\mathrm{dist}_{\mathrm{box}}$ is the weighted sum of $\mathrm{dist}_{\mathrm{outside}}$ and $\mathrm{dist}_{\mathrm{inside}}$ , where the latter is weighted less.
+
+
+
+
+
+over the box centers (Bahdanau et al., 2015) and shrinking the box offset using the sigmoid function:
+
+$$
+\mathrm {Cen} \left(\mathbf {p} _ {\mathrm {inter}}\right) = \sum_ {i} \mathbf {a} _ {i} \odot \mathrm {Cen} \left(\mathbf {p} _ {i}\right), \quad \mathbf {a} _ {i} = \frac {\exp \left(\mathrm {MLP} \left(\mathbf {p} _ {i}\right)\right)}{\sum_ {j} \exp \left(\mathrm {MLP} \left(\mathbf {p} _ {j}\right)\right)},
+$$
+
+$$
+\mathrm {Off} \left(\mathbf {p} _ {\mathrm {inter}}\right) = \mathrm {Min} \left(\left\{\mathrm {Off} \left(\mathbf {p} _ {1}\right), \dots , \mathrm {Off} \left(\mathbf {p} _ {n}\right) \right\}\right) \odot \sigma \left(\mathrm {DeepSets} \left(\left\{\mathbf {p} _ {1}, \dots , \mathbf {p} _ {n} \right\}\right)\right),
+$$
+
+where $\odot$ is the dimension-wise product, $\mathrm{MLP}(\cdot):\mathbb{R}^{2d}\to \mathbb{R}^d$ is the Multi-Layer Perceptron, $\sigma (\cdot)$ is the sigmoid function, DeepSets() is the permutation-invariant deep architecture (Zaheer et al., 2017), and both $\mathrm{Min}(\cdot)$ and $\exp (\cdot)$ are applied in a dimension-wise manner. Following Hamilton et al. (2018), we model all the deep sets by DeepSets( $\{\mathbf{x_1},\dots,\mathbf{x_N}\}$ ) = MLP((1/N) $\cdot \sum_{i=1}^{N}\mathrm{MLP}(\mathbf{x_i}))$ , where all the hidden dimensionalities of the two MLPs are the same as the input dimensionality. The intuition behind our geometric intersection is to generate a smaller box that lies inside a set of boxes, as illustrated in Fig. 2(B). Different from the generic deep sets to model the intersection (Hamilton et al., 2018), our geometric intersection operator effectively constrains the center position and models the shrinking set size.
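+The structure of the intersection operator can be sketched as follows; single random linear maps stand in for the learned attention MLP and DeepSets networks, so only the shapes and the shrinking behavior are meaningful:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # illustrative embedding dimension

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Untrained stand-ins for the learned networks (assumptions, not the paper's MLPs).
W_att = rng.normal(size=(2 * d, d))      # attention "MLP": R^{2d} -> R^d
W_in = rng.normal(size=(2 * d, 2 * d))   # inner DeepSets "MLP"
W_out = rng.normal(size=(2 * d, d))      # outer DeepSets "MLP": -> R^d gate

def intersect_boxes(boxes):
    """boxes: list of (center, offset) pairs, each a vector in R^d."""
    P = np.stack([np.concatenate([c, o]) for c, o in boxes])   # (n, 2d)
    scores = np.exp(P @ W_att)                                 # (n, d)
    a = scores / scores.sum(axis=0)                            # dimension-wise attention
    cen = (a * np.stack([c for c, _ in boxes])).sum(axis=0)
    gate = sigmoid((P @ W_in).mean(axis=0) @ W_out)            # in (0, 1)^d
    off = np.stack([o for _, o in boxes]).min(axis=0) * gate   # shrunk offset
    return cen, off

b1 = (np.zeros(d), np.ones(d))
b2 = (np.full(d, 0.5), np.ones(d))
cen, off = intersect_boxes([b1, b2])
print(np.all(off < np.minimum(b1[1], b2[1])))  # True: the offset always shrinks
```

The sigmoid gate lies strictly in $(0,1)$ and is applied to the dimension-wise minimum of the offsets, so the resulting box is always smaller than every input box, matching the intuition of Fig. 2(B).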
+
+Entity-to-box distance. Given a query box $\mathbf{q} \in \mathbb{R}^{2d}$ and an entity vector $\mathbf{v} \in \mathbb{R}^d$ , we define their distance as
+
+$$
+\mathrm {dist} _ {\mathrm {box}} (\mathbf {v}; \mathbf {q}) = \mathrm {dist} _ {\mathrm {outside}} (\mathbf {v}; \mathbf {q}) + \alpha \cdot \mathrm {dist} _ {\mathrm {inside}} (\mathbf {v}; \mathbf {q}), \tag {3}
+$$
+
+where $\mathbf{q}_{\mathrm{max}} = \mathrm{Cen}(\mathbf{q}) + \mathrm{Off}(\mathbf{q}) \in \mathbb{R}^d$ , $\mathbf{q}_{\mathrm{min}} = \mathrm{Cen}(\mathbf{q}) - \mathrm{Off}(\mathbf{q}) \in \mathbb{R}^d$ and $0 < \alpha < 1$ is a fixed scalar, and
+
+$$
+\mathrm {dist} _ {\mathrm {outside}} (\mathbf {v}; \mathbf {q}) = \| \mathrm {Max} (\mathbf {v} - \mathbf {q} _ {\max }, \mathbf {0}) + \mathrm {Max} (\mathbf {q} _ {\min } - \mathbf {v}, \mathbf {0}) \| _ {1},
+$$
+
+$$
+\mathrm {dist} _ {\mathrm {inside}} (\mathbf {v}; \mathbf {q}) = \| \mathrm {Cen} (\mathbf {q}) - \mathrm {Min} \left(\mathbf {q} _ {\max }, \mathrm {Max} \left(\mathbf {q} _ {\min }, \mathbf {v}\right)\right) \| _ {1}.
+$$
+
+As illustrated in Fig. 2(C), $\mathrm{dist}_{\mathrm{outside}}$ corresponds to the distance between the entity and the closest corner/side of the box. Analogously, $\mathrm{dist}_{\mathrm{inside}}$ corresponds to the distance between the center of the box and its side/corner (or the entity itself if the entity is inside the box).
+
+The key here is to downweight the distance inside the box by using $0 < \alpha < 1$ . This means that as long as entity vectors are inside the box, we regard them as "close enough" to the query center (i.e., $\mathrm{dist}_{\mathrm{outside}}$ is 0, and $\mathrm{dist}_{\mathrm{inside}}$ is scaled by $\alpha$ ). When $\alpha = 1$ , $\mathrm{dist}_{\mathrm{box}}$ reduces to the ordinary $L_{1}$ distance, i.e., $\| \mathsf{Cen}(\mathbf{q}) - \mathbf{v}\| _1$ , which is used by the conventional TransE (Bordes et al., 2013) as well as prior query embedding methods (Guu et al., 2015; Hamilton et al., 2018).
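+A direct transcription of Eq. (3), with an assumed $\alpha = 0.2$ (the paper only requires a fixed scalar $0 < \alpha < 1$) and illustrative boxes:

```python
import numpy as np

# dist_box per Eq. (3): outside distance plus alpha-discounted inside distance.
# alpha = 0.2 is an assumed value for illustration.
def dist_box(v, cen, off, alpha=0.2):
    q_max, q_min = cen + off, cen - off
    outside = np.maximum(v - q_max, 0).sum() + np.maximum(q_min - v, 0).sum()
    # Clamp v into the box, then take L1 distance to the center.
    inside = np.abs(cen - np.minimum(q_max, np.maximum(q_min, v))).sum()
    return outside + alpha * inside

cen, off = np.zeros(2), np.ones(2)   # box [-1, 1]^2
print(dist_box(np.array([0.5, 0.0]), cen, off))  # inside:  0.2 * 0.5 = 0.1
print(dist_box(np.array([2.0, 0.0]), cen, off))  # outside: 1.0 + 0.2 * 1.0 = 1.2
```

With $\alpha = 1$ the clamp has no discount and the function degenerates to the ordinary $L_1$ distance to the center, as noted above.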
+
+Training objective. Our next goal is to learn entity embeddings as well as geometric projection and intersection operators.
+
+Given a training set of queries and their answers, we optimize a negative sampling loss (Mikolov et al., 2013) to effectively optimize our distance-based model (Sun et al., 2019):
+
+$$
+L = - \log \sigma \left(\gamma - \mathrm {dist} _ {\mathrm {box}} (\mathbf {v}; \mathbf {q})\right) - \sum_ {i = 1} ^ {k} \frac {1}{k} \log \sigma \left(\mathrm {dist} _ {\mathrm {box}} \left(\mathbf {v} _ {i} ^ {\prime}; \mathbf {q}\right) - \gamma\right), \tag {4}
+$$
+
+where $\gamma$ represents a fixed scalar margin, $v\in [[q]]$ is a positive entity (i.e., answer to the query $q$ ), and $v_{i}^{\prime}\notin [[q]]$ is the $i$ -th negative entity (non-answer to the query $q$ ) and $k$ is the number of negative entities.
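+Eq. (4) applied to precomputed distances can be sketched as follows; the margin $\gamma = 2$ and the distance values are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Negative sampling loss of Eq. (4) on precomputed entity-to-box distances.
# gamma = 2.0 is an assumed margin for illustration.
def negative_sampling_loss(pos_dist, neg_dists, gamma=2.0):
    k = len(neg_dists)
    loss = -np.log(sigmoid(gamma - pos_dist))                  # pull answer inside
    loss -= sum(np.log(sigmoid(nd - gamma)) for nd in neg_dists) / k  # push negatives out
    return loss

# A close positive with far negatives should cost less than the reverse.
print(negative_sampling_loss(0.1, [3.0, 4.0])
      < negative_sampling_loss(2.5, [1.0, 0.5]))  # True
```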
+
+# 3.3 TRACTABLE HANDLING OF DISJUNCTION USING DISJUNCTIVE NORMAL FORM
+
+So far we have focused on conjunctive queries, and our aim here is to tractably handle in the vector space a wider class of logical queries, called Existential Positive First-order (EPFO) queries (Dalvi & Suciu, 2012) that involve $\lor$ in addition to $\exists$ and $\land$ . We specifically focus on EPFO queries whose computation graphs are a DAG, same as that of conjunctive queries (Section 3.1), except that we now have an additional type of directed edge, called union defined as follows:
+
+- Union: Given a set of entity sets $\{S_1, S_2, \ldots, S_n\}$ , this operator obtains $\cup_{i=1}^{n} S_i$ .
+
+A straightforward approach here would be to define another geometric operator for union and embed the query as we did in the previous sections. An immediate challenge for our box embeddings is that boxes can be located anywhere in the vector space, so their union would no longer be a simple box. In other words, union operation over boxes is not closed.
+
+Theoretically, we prove a general negative result that holds for any embedding-based method that embeds query $q$ into $\mathbf{q}$ and uses some distance function to retrieve entities, i.e., $\mathrm{dist}(\mathbf{v};\mathbf{q}) \leq \beta$ iff $v \in [[q]]$ . Here, $\mathrm{dist}(\mathbf{v};\mathbf{q})$ is the distance between entity and query embeddings, e.g., $\mathrm{dist}_{\mathrm{box}}(\mathbf{v};\mathbf{q})$ or $\| \mathbf{v} - \mathbf{q}\|_1$ , and $\beta$ is a fixed threshold.
+
+Theorem 1. Consider any $M$ conjunctive queries $q_{1},\ldots ,q_{M}$ whose denotation sets $\llbracket q_1\rrbracket ,\dots ,\llbracket q_M\rrbracket$ are disjoint with each other: $\forall i\neq j,\llbracket q_i\rrbracket \cap \llbracket q_j\rrbracket = \emptyset$ . Let $D$ be the VC dimension of the function class $\{\mathrm{sign}(\beta -\mathrm{dist}(\cdot ;\mathbf{q})): \mathbf{q}\in \Xi \}$ , where $\Xi$ represents the query embedding space and $\mathrm{sign}(\cdot)$ is the sign function. Then, we need $D\geq M$ to model any EPFO query, i.e., so that $\mathrm{dist}(\mathbf{v};\mathbf{q})\leq \beta \Leftrightarrow v\in \llbracket q\rrbracket$ is satisfied for every EPFO query $q$ .
+
+The proof is provided in Appendix A, where the key is that with the introduction of the union operation any subset of denotation sets can be the answer, which forces us to model the powerset $\{\bigcup_{q_i\in S}[[q_i]]:S\subseteq \{q_1,\ldots ,q_M\} \}$ in a vector space.
+
+For a real-world KG, there are $M \approx |\mathcal{V}|$ conjunctive queries with non-overlapping answers. For example, in the commonly-used FB15k dataset (Bordes et al., 2013), derived from Freebase (Bollacker et al., 2008), we find $M = 13,365$ , while $|\mathcal{V}| = 14,951$ (see Appendix B for details).
+
+Theorem 1 shows that in order to accurately model any EPFO query with the existing framework, the complexity of the distance function measured by the VC dimension needs to be as large as the number of KG entities. This implies that if we use common distance functions based on hyper-plane, Euclidean sphere, or axis-aligned rectangle, their parameter dimensionality needs to be $\Theta(M)$ , which is $\Theta(|\mathcal{V}|)$ for real KGs we are interested in. In other words, the dimensionality of the logical query embeddings needs to be $\Theta(|\mathcal{V}|)$ , which is not low-dimensional; thus not scalable to large KGs and not generalizable in the presence of unobserved KG edges.
+
+To rectify this issue, our key idea is to transform a given EPFO query into a Disjunctive Normal Form (DNF) (Davey & Priestley, 2002), i.e., disjunction of conjunctive queries, so that union operation only appears in the last step. Each of the conjunctive queries can then be reasoned in the low-dimensional space, after which we can aggregate the results by a simple and intuitive procedure. In the following, we describe the transformation to DNF and the aggregation procedure.
+
+Transformation to DNF. Any first-order logic formula can be transformed into an equivalent DNF (Davey & Priestley, 2002). We perform this transformation directly in the space of computation graphs, i.e., by moving all edges of type "union" to the last step of the computation graph. Let $G_{q} = (V_{q}, E_{q})$ be the computation graph for a given EPFO query $q$ , and let $V_{\text{union}} \subset V_{q}$ be the set of nodes whose in-coming edges are of type "union". For each $v \in V_{\text{union}}$ , define $P_{v} \subset V_{q}$ as the set of its parent nodes. We first generate $N = \prod_{v \in V_{\text{union}}} |P_{v}|$ different computation graphs $G_{q^{(1)}}, \ldots, G_{q^{(N)}}$ as follows, each with a different choice of $v_{\text{parent}}$ in the first step.
+
+1. For every $v \in V_{\text{union}}$ , select one parent node $v_{\text{parent}} \in P_v$ .
+
+
+Figure 3: Illustration of converting a computation graph of an EPFO query into an equivalent computation graph of the Disjunctive Normal Form.
+
+2. Remove all the edges of type "union".
+3. Merge $v$ and $v_{\text{parent}}$ , while retaining all other edge connections.
+
+We then combine the obtained computation graphs $G_{q^{(1)}}, \ldots, G_{q^{(N)}}$ as follows to give the final equivalent computation graph.
+
+1. Convert the target sink nodes of all the obtained computation graphs into existentially quantified bound variable nodes.
+2. Create a new target sink node $V_{?}$ , and draw directed edges of type "union" from all the above variable nodes to the new target node.
+
+An example of the entire transformation procedure is illustrated in Fig. 3. By the definition of the union operation, our procedure yields a computation graph equivalent to the original one. Furthermore, as all the union operators are removed from $G_{q^{(1)}}, \ldots, G_{q^{(N)}}$ , all of these computation graphs represent conjunctive queries, which we denote as $q^{(1)}, \ldots, q^{(N)}$ . We can then apply the existing framework to obtain a set of embeddings for these conjunctive queries as $\mathbf{q}^{(1)}, \ldots, \mathbf{q}^{(N)}$ .
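The branch-enumeration step above can be sketched in a few lines. The toy graph encoding (`{node: (operator, parents)}`) and the node names are our own illustrative assumptions, not the paper's data structures; the sketch only shows how choosing one parent per "union" node yields the $N = \prod_{v \in V_{\text{union}}} |P_v|$ conjunctive branches.

```python
from itertools import product

# Hypothetical toy encoding of a computation graph: node -> (operator, parents).
# "t" is the target sink; "u" is a union node with parent set P_u = {a1, a2}.
graph = {
    "a1": ("anchor", []),
    "a2": ("anchor", []),
    "u":  ("union",  ["a1", "a2"]),
    "t":  ("project", ["u"]),  # a relation projection applied after the union
}

def dnf_branches(graph):
    """Enumerate the N = prod_v |P_v| conjunctive computation graphs obtained
    by choosing one parent per union node and merging it with that node."""
    union_nodes = [v for v, (op, ps) in graph.items() if op == "union"]
    parent_sets = [graph[v][1] for v in union_nodes]
    branches = []
    for pick in product(*parent_sets):        # one parent choice per union node
        merge = dict(zip(union_nodes, pick))  # union node -> chosen parent
        branch = {}
        for v, (op, ps) in graph.items():
            if op == "union":
                continue                      # the union node is merged away
            # redirect edges that pointed at a union node to the chosen parent
            branch[v] = (op, [merge.get(p, p) for p in ps])
        branches.append(branch)
    return branches

branches = dnf_branches(graph)
# Two conjunctive branches: one projects from a1, the other from a2.
```

Each returned branch is a union-free computation graph, so it can be embedded with the conjunctive-query machinery of the previous sections.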
+
+Aggregation. Next we define the distance function between the given EPFO query $q$ and an entity $v \in \mathcal{V}$ . Since $q$ is logically equivalent to $q^{(1)} \lor \dots \lor q^{(N)}$ , we can naturally define the aggregated distance function using the box distance $\mathrm{dist}_{\mathrm{box}}$ :
+
+$$
+\operatorname{dist}_{\mathrm{agg}}(\mathbf{v}; q) = \operatorname{Min}\left(\left\{\operatorname{dist}_{\mathrm{box}}(\mathbf{v}; \mathbf{q}^{(1)}), \dots, \operatorname{dist}_{\mathrm{box}}(\mathbf{v}; \mathbf{q}^{(N)})\right\}\right), \tag{5}
+$$
+
+where $\mathrm{dist}_{\mathrm{agg}}$ is parameterized by the EPFO query $q$ . When $q$ is a conjunctive query, i.e., $N = 1$ , $\mathrm{dist}_{\mathrm{agg}}(\mathbf{v}; q) = \mathrm{dist}_{\mathrm{box}}(\mathbf{v}; \mathbf{q})$ . For $N > 1$ , $\mathrm{dist}_{\mathrm{agg}}$ takes the distance to the closest box as the distance to an entity. This modeling aligns well with the union operation: an entity is inside the union of sets as long as it is in one of the sets. Note that our DNF-query rewriting scheme is general and can extend any method that works for conjunctive queries (e.g., Hamilton et al. (2018)) to the more general class of EPFO queries.
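A numerical sketch of Eq. 5, assuming the box distance of Section 3.2 ($L_1$ distance to the box, plus an $\alpha$-down-weighted $L_1$ distance from the center once inside the box); the `ALPHA` value and the toy boxes are illustrative, not tuned values from the paper.

```python
import numpy as np

ALPHA = 0.2  # inside/outside trade-off of dist_box (illustrative value)

def dist_box(v, center, offset):
    """Box distance: L1 distance to the box from outside, plus a
    down-weighted L1 distance from the box center inside the box."""
    q_max, q_min = center + offset, center - offset
    outside = np.maximum(v - q_max, 0.0) + np.maximum(q_min - v, 0.0)
    inside = center - np.minimum(q_max, np.maximum(q_min, v))
    return np.abs(outside).sum() + ALPHA * np.abs(inside).sum()

def dist_agg(v, branch_boxes):
    """Eq. 5: the distance to an EPFO query is the minimum box distance
    over the boxes of its N conjunctive branches."""
    return min(dist_box(v, c, o) for c, o in branch_boxes)

# An entity sitting inside one branch's box gets aggregated distance 0,
# even when the other branch's box is far away.
v = np.zeros(2)
boxes = [(np.array([5.0, 5.0]), np.array([1.0, 1.0])),
         (np.array([0.0, 0.0]), np.array([1.0, 1.0]))]
# dist_agg(v, boxes) -> 0.0
```

The `min` over branches is exactly the union semantics: membership in any one conjunctive branch suffices.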
+
+Computational complexity. The computational complexity of answering an EPFO query with our framework equals that of answering the $N$ conjunctive queries. In practice, $N$ is often small, and all $N$ computations can be parallelized. Furthermore, answering each conjunctive query is very fast, as it requires executing a sequence of simple box operations (each of which takes constant time) and then performing a range search (Bentley & Friedman, 1979) in the embedding space, which can also be done in constant time using techniques based on Locality Sensitive Hashing (Indyk & Motwani, 1998).
+
+# 4 EXPERIMENTS
+
+Our goal in the experiment section is to evaluate the performance of QUERY2BOX on discovering answers to complex logical queries that cannot be obtained by traversing the incomplete KG. That is, we focus on answering queries where one or more missing edges in the KG have to be successfully predicted in order to obtain the additional answers.
+
+# 4.1 KNOWLEDGE GRAPHS AND QUERY GENERATION
+
+We perform experiments on three standard KG benchmarks, FB15k (Bordes et al., 2013), FB15k-237 (Toutanova & Chen, 2015), and NELL995 (Xiong et al., 2017) (see Appendix E for NELL995 pre-processing details). Dataset statistics are summarized in Table 5 in Appendix F.
+
+
+Figure 4: Query structures considered in the experiments, where anchor entities and relations are to be specified to instantiate logical queries. Naming for each query structure is provided under each subfigure, where 'p', 'i', and 'u' stand for 'projection', 'intersection', and 'union', respectively. Models are trained on the first 5 query structures, and evaluated on all 9 query structures. For example, "3p" is a path query of length three, and "2i" is an intersection of cardinality two.
+
+| Dataset | 1p | 2p | 3p | 2i | 3i | ip | pi | 2u | up |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FB15k | 10.8 | 255.6 | 250.0 | 90.3 | 64.1 | 593.8 | 190.1 | 27.8 | 227.0 |
| FB15k-237 | 13.3 | 131.4 | 215.3 | 69.0 | 48.9 | 593.8 | 257.7 | 35.6 | 127.7 |
| NELL995 | 8.5 | 56.6 | 65.3 | 30.3 | 15.9 | 310.0 | 144.9 | 14.4 | 62.5 |
+
+Table 1: Average number of answer entities of test queries with missing edges, grouped by query structure (for a KG with $10\%$ of edges missing).
+
+We follow the standard evaluation protocol in the KG literature. Given the standard split of edges into training, validation, and test sets, we first augment the KG to also include inverse relations, effectively doubling the number of edges in the graph. We then create three graphs: $\mathcal{G}_{\mathrm{train}}$ , which contains only the training edges and is used to train node embeddings as well as box operators; $\mathcal{G}_{\mathrm{valid}}$ , which contains $\mathcal{G}_{\mathrm{train}}$ plus the validation edges; and $\mathcal{G}_{\mathrm{test}}$ , which includes $\mathcal{G}_{\mathrm{valid}}$ as well as the test edges.
+
+We consider 9 kinds of diverse query structures shown and named in Fig. 4. We use 5 query structures for training and then evaluate on all 9 query structures. We refer the reader to Appendix D for full details on query generation and to Table 6 in Appendix F for statistics of the generated logical queries. Given a query $q$ , let $[[q]]_{\text{train}}$ , $[[q]]_{\text{val}}$ , and $[[q]]_{\text{test}}$ denote the sets of answer entities obtained by running subgraph matching of $q$ on $\mathcal{G}_{\text{train}}$ , $\mathcal{G}_{\text{valid}}$ , and $\mathcal{G}_{\text{test}}$ , respectively. At training time, we use $[[q]]_{\text{train}}$ as positive examples for the query and other random entities as negative examples. At test/validation time, however, we proceed differently. Note that we focus on answering queries where generalization performance is crucial, i.e., at least one edge needs to be imputed in order to answer the query. Thus, rather than evaluating a given query on the full validation (or test) set $[[q]]_{\text{val}}$ ( $[[q]]_{\text{test}}$ ) of answers, we evaluate only on answers that require predicting missing relations. Given how we constructed $\mathcal{G}_{\text{train}} \subseteq \mathcal{G}_{\text{valid}} \subseteq \mathcal{G}_{\text{test}}$ , we have $[[q]]_{\text{train}} \subseteq [[q]]_{\text{val}} \subseteq [[q]]_{\text{test}}$ , and thus we evaluate on $[[q]]_{\text{val}} \backslash [[q]]_{\text{train}}$ to tune hyper-parameters and then report results on identifying answer entities in $[[q]]_{\text{test}} \backslash [[q]]_{\text{val}}$ . This means we always evaluate on queries/entities that were not part of the training set and that the method has not seen before. Furthermore, for these queries, traditional graph traversal techniques would not be able to find the answers (due to the missing relations).
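The evaluation split reduces to plain set differences over the nested answer sets; the entity names below are hypothetical placeholders.

```python
# Hypothetical answer sets of one query q from subgraph matching on the
# three nested graphs G_train ⊆ G_valid ⊆ G_test.
ans_train = {"e1", "e2"}
ans_val   = {"e1", "e2", "e3"}        # adds answers reachable via validation edges
ans_test  = {"e1", "e2", "e3", "e5"}  # adds answers reachable via test edges

# Only answers that require imputing at least one missing edge are evaluated:
val_eval  = ans_val - ans_train   # used to tune hyper-parameters
test_eval = ans_test - ans_val    # used to report final results
# val_eval == {"e3"}, test_eval == {"e5"}
```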
+
+Table 1 shows the average number of answer entities for different query structures. We observe that complex logical queries (especially 2p, 3p, ip, pi, up) indeed require modeling a much larger number of answer entities (often more than 10 times) than the simple 1p queries do. Therefore, we expect our box embeddings to work particularly well in handling complex queries with many answer entities.
+
+# 4.2 EVALUATION PROTOCOL
+
+Given a test query $q$ , for each of its non-trivial answers $v \in [[q]]_{\mathrm{test}} \backslash [[q]]_{\mathrm{val}}$ , we use $\mathrm{dist}_{\mathrm{box}}$ in Eq. 3 to rank $v$ among $\mathcal{V} \backslash [[q]]_{\mathrm{test}}$ . Denoting the rank of $v$ by $\mathrm{Rank}(v)$ , we then calculate evaluation metrics for answering query $q$ , such as Mean Reciprocal Rank (MRR) and Hits at $K$ ( $H@K$ ):
+
+$$
+\operatorname{Metrics}(q) = \frac{1}{|\llbracket q\rrbracket_{\text{test}} \backslash \llbracket q\rrbracket_{\text{val}}|} \sum_{v \in \llbracket q\rrbracket_{\text{test}} \backslash \llbracket q\rrbracket_{\text{val}}} f_{\text{metrics}}(\operatorname{Rank}(v)), \tag{6}
+$$
+
+| Method | Avg | 1p | 2p | 3p | 2i | 3i | ip | pi | 2u | up |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FB15k | | | | | | | | | | |
| Q2B | 0.484 | 0.786 | 0.413 | 0.303 | 0.593 | 0.712 | 0.211 | 0.397 | 0.608 | 0.330 |
| GQE | 0.386 | 0.636 | 0.345 | 0.248 | 0.515 | 0.624 | 0.151 | 0.310 | 0.376 | 0.273 |
| GQE-DOUBLE | 0.384 | 0.630 | 0.346 | 0.250 | 0.515 | 0.611 | 0.153 | 0.320 | 0.362 | 0.271 |
| FB15k-237 | | | | | | | | | | |
| Q2B | 0.268 | 0.467 | 0.240 | 0.186 | 0.324 | 0.453 | 0.108 | 0.205 | 0.239 | 0.193 |
| GQE | 0.228 | 0.402 | 0.213 | 0.155 | 0.292 | 0.406 | 0.083 | 0.170 | 0.169 | 0.163 |
| GQE-DOUBLE | 0.230 | 0.405 | 0.213 | 0.153 | 0.298 | 0.411 | 0.085 | 0.182 | 0.167 | 0.160 |
| NELL995 | | | | | | | | | | |
| Q2B | 0.306 | 0.555 | 0.266 | 0.233 | 0.343 | 0.480 | 0.132 | 0.212 | 0.369 | 0.163 |
| GQE | 0.247 | 0.418 | 0.228 | 0.205 | 0.316 | 0.447 | 0.081 | 0.186 | 0.199 | 0.139 |
| GQE-DOUBLE | 0.248 | 0.417 | 0.231 | 0.203 | 0.318 | 0.454 | 0.081 | 0.188 | 0.200 | 0.139 |
+
+Table 2: H@3 results of QUERY2BOX vs. GQE on FB15k, FB15k-237 and NELL995.
+
+where $f_{\mathrm{metrics}}(x) = \frac{1}{x}$ for MRR, and $f_{\mathrm{metrics}}(x) = 1[x \leq K]$ for H@K.
+
+We then average Eq. 6 over all the queries within the same query structure, and report the results separately for different query structures. The same evaluation protocol is applied to the validation stage except that we evaluate on $\llbracket q\rrbracket_{\mathrm{val}}\backslash \llbracket q\rrbracket_{\mathrm{train}}$ rather than $\llbracket q\rrbracket_{\mathrm{test}}\backslash \llbracket q\rrbracket_{\mathrm{val}}$ .
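Eq. 6 is a simple average of a rank-based function over the non-trivial answers; a minimal sketch with made-up ranks (the rank values are illustrative only):

```python
def metrics(ranks, f_metric):
    """Eq. 6: average a rank-based metric over the non-trivial answers
    of a query, given their ranks among the candidate entities."""
    return sum(f_metric(r) for r in ranks) / len(ranks)

def mrr(rank):
    """Reciprocal rank, f_metrics(x) = 1/x."""
    return 1.0 / rank

def hits_at(K):
    """Indicator f_metrics(x) = 1[x <= K] for H@K."""
    return lambda rank: 1.0 if rank <= K else 0.0

# Illustrative ranks of the answers in [[q]]_test \ [[q]]_val
ranks = [1, 2, 4]
# MRR = (1 + 1/2 + 1/4) / 3 ; H@3 = 2/3
```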
+
+# 4.3 BASELINE AND MODEL VARIANTS
+
+We compare our framework QUERY2BOX against the state-of-the-art GQE (Hamilton et al., 2018). GQE embeds a query into a single vector, and models the projection and intersection operators as translation and deep sets (Zaheer et al., 2017), respectively. The $L_{1}$ distance is used as the distance between query and entity vectors. For a fair comparison, we also compare with GQE-DOUBLE (GQE with doubled embedding dimensionality), so that QUERY2BOX and GQE-DOUBLE have the same number of parameters. Refer to Appendix G for the model hyper-parameters used in our experiments. Although the original GQE cannot handle EPFO queries, in our evaluation we apply our DNF-query rewriting strategy to extend GQE to general EPFO queries as well. Furthermore, we perform an extensive ablation study by considering several variants of QUERY2BOX (abbreviated as Q2B). We list our method as well as its variants below.
+
+- Q2B (our method): The box embeddings are used to model queries, and the attention mechanism is used for the intersection operator.
+- Q2B-AVG: The attention mechanism for intersection is replaced with averaging.
+- Q2B-DEEPSETS: The attention mechanism for intersection is replaced with deep sets.
+- Q2B-AVG-1P: The variant of Q2B-AVG that is trained with only 1p queries (see Fig. 4); thus, logical operators are not explicitly trained.
+- Q2B-SHAREOFFSET: The box offset is shared across all queries (every query is represented by a box with the same trainable size).
+
+# 4.4 MAIN RESULTS
+
+We start by comparing our Q2B with the state-of-the-art query embedding method GQE (Hamilton et al., 2018) on FB15k, FB15k-237, and NELL995. As shown in Table 2, our method significantly and consistently outperforms the state-of-the-art baseline across all query structures, including those not seen during training as well as those with union operations. On average, we obtain $9.8\%$ ( $25\%$ relative), $3.8\%$ ( $15\%$ relative), and $5.9\%$ ( $24\%$ relative) higher H@3 than the best baselines on FB15k, FB15k-237, and NELL995, respectively. Notice that naively increasing the embedding dimensionality in GQE yields limited performance improvement. Our Q2B is able to effectively model a large set of entities by using the box embedding, and achieves a significant performance gain over GQE-DOUBLE (with the same number of parameters), which represents queries as point vectors. Also notice
+
+| Method | Avg | 1p | 2p | 3p | 2i | 3i | ip | pi | 2u | up |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FB15k | | | | | | | | | | |
| Q2B | 0.484 | 0.786 | 0.413 | 0.303 | 0.593 | 0.712 | 0.211 | 0.397 | 0.608 | 0.330 |
| Q2B-AVG | 0.468 | 0.779 | 0.407 | 0.300 | 0.577 | 0.673 | 0.199 | 0.345 | 0.607 | 0.326 |
| Q2B-DEEPSETS | 0.467 | 0.755 | 0.407 | 0.294 | 0.588 | 0.699 | 0.197 | 0.378 | 0.562 | 0.324 |
| Q2B-AVG-1P | 0.385 | 0.812 | 0.262 | 0.173 | 0.463 | 0.529 | 0.126 | 0.263 | 0.653 | 0.187 |
| Q2B-SHAREOFFSET | 0.372 | 0.684 | 0.335 | 0.232 | 0.442 | 0.559 | 0.144 | 0.282 | 0.417 | 0.252 |
| FB15k-237 | | | | | | | | | | |
| Q2B | 0.268 | 0.467 | 0.240 | 0.186 | 0.324 | 0.453 | 0.108 | 0.205 | 0.239 | 0.193 |
| Q2B-AVG | 0.249 | 0.462 | 0.242 | 0.182 | 0.278 | 0.391 | 0.101 | 0.158 | 0.236 | 0.189 |
| Q2B-DEEPSETS | 0.259 | 0.458 | 0.243 | 0.186 | 0.303 | 0.432 | 0.104 | 0.187 | 0.231 | 0.190 |
| Q2B-AVG-1P | 0.219 | 0.457 | 0.193 | 0.132 | 0.251 | 0.319 | 0.083 | 0.142 | 0.241 | 0.152 |
| Q2B-SHAREOFFSET | 0.207 | 0.391 | 0.199 | 0.139 | 0.251 | 0.354 | 0.082 | 0.154 | 0.150 | 0.142 |
| NELL995 | | | | | | | | | | |
| Q2B | 0.306 | 0.555 | 0.266 | 0.233 | 0.343 | 0.480 | 0.132 | 0.212 | 0.369 | 0.163 |
| Q2B-AVG | 0.283 | 0.543 | 0.250 | 0.228 | 0.300 | 0.403 | 0.116 | 0.188 | 0.360 | 0.161 |
| Q2B-DEEPSETS | 0.293 | 0.539 | 0.260 | 0.231 | 0.317 | 0.467 | 0.110 | 0.202 | 0.349 | 0.160 |
| Q2B-AVG-1P | 0.274 | 0.607 | 0.229 | 0.182 | 0.277 | 0.315 | 0.097 | 0.180 | 0.443 | 0.133 |
| Q2B-SHAREOFFSET | 0.237 | 0.436 | 0.219 | 0.201 | 0.278 | 0.379 | 0.096 | 0.174 | 0.217 | 0.137 |
+
+Table 3: H@3 results of QUERY2BOX vs. several variants on FB15k, FB15k-237 and NELL995.
+
+that Q2B performs well on new queries with the same structure as the training queries as well as on new query structures never seen during training, which demonstrates that Q2B generalizes well within and beyond query structures.
+
+We also conduct extensive ablation studies (Table 3). We summarize the results as follows:
+
+Importance of attention mechanism. First, we show that our modeling of intersection using the attention mechanism is important. Given a set of box embeddings $\{\mathbf{p}_1,\dots ,\mathbf{p}_n\}$ , Q2B-AVG computes the center of the resulting box embedding $\mathbf{p}_{\mathrm{inter}}$ in the most naive way, as a plain average, while Q2B-DEEPSETS is too flexible and neglects the fact that the center should be a weighted average of $\mathrm{Cen}(\mathbf{p}_1),\ldots ,\mathrm{Cen}(\mathbf{p}_n)$ . Compared with these two variants, Q2B achieves better performance in answering queries that involve the intersection operation, e.g., 2i, 3i, pi, ip. Specifically, on FB15k-237, Q2B obtains more than $4\%$ and $2\%$ absolute gain in H@3 compared to Q2B-AVG and Q2B-DEEPSETS, respectively.
+
+Necessity of training on complex queries. Second, we observe that explicitly training on complex logical queries beyond one-hop path queries (1p in Fig. 4) improves the reasoning performance. Although Q2B-AVG-1P achieves strong performance on 1p and 2u, where answering 2u essentially amounts to answering two 1p queries with an additional minimum operation (see Eq. 5 in Section 3.3), Q2B-AVG-1P fails miserably in answering other types of queries involving logical operators. In contrast, the other methods (Q2B, Q2B-AVG, and Q2B-DEEPSETS), which are explicitly trained on the logical queries, achieve much higher accuracy, with up to $10\%$ absolute average improvement in H@3 on FB15k.
+
+Adaptive box size for different queries. Third, we investigate the importance of learning adaptive offsets (box sizes) for different queries. Q2B-SHAREOFFSET is a variant of our Q2B in which all the box embeddings share the same learnable offset. Q2B-SHAREOFFSET performs worse across all types of queries. This is most likely because different queries have different numbers of answer entities, and the adaptive box size enables us to model this variability better. In fact, we find that the box offset varies significantly across different relations, and one-to-many relations tend to have larger offset embeddings (see Appendix H for details).
+
+# 5 CONCLUSION
+
+In this paper we proposed a reasoning framework called QUERY2BOX that can effectively model and reason over sets of entities as well as handle EPFO queries in a vector space. Given a logical query, we first transform it into DNF, embed each conjunctive query into a box, and rank entities by their distance to the nearest box. Our approach is capable of handling all types of EPFO queries scalably and accurately. Experimental results on standard KGs demonstrate that QUERY2BOX significantly outperforms existing work in answering diverse logical queries.
+
+# ACKNOWLEDGMENTS
+
+We thank William Hamilton, Rex Ying, and Jiaxuan You for their helpful discussion. W.H. is supported by the Funai Overseas Scholarship and the Masason Foundation Fellowship. J.L. is a Chan Zuckerberg Biohub investigator. We gratefully acknowledge the support of DARPA under Nos. FA865018C7880 (ASED), N660011924033 (MCS); ARO under Nos. W911NF-16-1-0342 (MURI), W911NF-16-1-0171 (DURIP); NSF under Nos. OAC-1835598 (CINES), OAC-1934578 (HDR); Stanford Data Science Initiative, Wu Tsai Neurosciences Institute, Chan Zuckerberg Biohub, JD.com, Amazon, Boeing, Docomo, Huawei, Hitachi, Observe, Siemens, UST Global.
+The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements, either expressed or implied, of DARPA, NIH, ARO, or the U.S. Government.
+
+# REFERENCES
+
+Carl Allen, Ivana Balazevic, and Timothy M Hospedales. On understanding knowledge graph representation. arXiv preprint arXiv:1909.11611, 2019.
+Ben Athiwaratkun and Andrew Gordon Wilson. Hierarchical density order embeddings. In International Conference on Learning Representations (ICLR), 2018.
+Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR), 2015.
+Jon Louis Bentley and Jerome H Friedman. Data structures for range searching. ACM Computing Surveys (CSUR), 11(4):397-409, 1979.
+Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In ACM SIGMOD International Conference on Management of Data (SIGMOD), pp. 1247-1250. ACM, 2008.
+Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems (NeurIPS), pp. 2787-2795, 2013.
+Nilesh Dalvi and Dan Suciu. Efficient query evaluation on probabilistic databases. VLDB, 16(4): 523-544, 2007.
+Nilesh Dalvi and Dan Suciu. The dichotomy of probabilistic inference for unions of conjunctive queries. Journal of the ACM (JACM), 59(6):30, 2012.
+Rajarshi Das, Arvind Neelakantan, David Belanger, and Andrew McCallum. Chains of reasoning over entities, relations, and text using recurrent neural networks. In European Chapter of the Association for Computational Linguistics (EACL), pp. 132-141, 2017.
+Brian A Davey and Hilary A Priestley. Introduction to lattices and order. Cambridge university press, 2002.
+Luc De Raedt. Logical and relational learning. Springer Science & Business Media, 2008.
+Saso Džeroski. Relational data mining. In Data Mining and Knowledge Discovery Handbook, pp. 887-911. Springer, 2009.
+Katrin Erk. Representing words as regions in vector space. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning, pp. 57-65. Annual Meeting of the Association for Computational Linguistics (ACL), 2009.
+Kelvin Guu, John Miller, and Percy Liang. Traversing knowledge graphs in vector space. In Empirical Methods in Natural Language Processing (EMNLP), pp. 318-327, 2015.
+
+Will Hamilton, Payal Bajaj, Marinka Zitnik, Dan Jurafsky, and Jure Leskovec. Embedding logical queries on knowledge graphs. In Advances in Neural Information Processing Systems (NeurIPS), pp. 2027-2038, 2018.
+Shizhu He, Kang Liu, Guoliang Ji, and Jun Zhao. Learning to represent knowledge graphs with gaussian embedding. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pp. 623-632. ACM, 2015.
+Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the thirtieth annual ACM symposium on Theory of computing, pp. 604-613. ACM, 1998.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
+Daphne Koller, Nir Friedman, Sašo Džeroski, Charles Sutton, Andrew McCallum, Avi Pfeffer, Pieter Abbeel, Ming-Fai Wong, David Heckerman, Chris Meek, et al. Introduction to statistical relational learning. MIT press, 2007.
+Denis Krompaß, Maximilian Nickel, and Volker Tresp. Querying factorized probabilistic triple databases. In International Semantic Web Conference, pp. 114-129. Springer, 2014.
+Alice Lai and Julia Hockenmaier. Learning to predict denotational probabilities for modeling entailment. In Annual Meeting of the Association for Computational Linguistics (ACL), pp. 721-730, 2017.
+Xiang Li, Luke Vilnis, and Andrew McCallum. Improved representation learning for predicting commonsense ontologies. arXiv preprint arXiv:1708.00549, 2017.
+Xiang Li, Luke Vilnis, Dongxu Zhang, Michael Boratko, and Andrew McCallum. Smoothing the geometry of probabilistic box embeddings. In International Conference on Learning Representations (ICLR), 2019.
+Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In International Conference on Learning Representations (ICLR), 2013.
+Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11-33, 2016.
+Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. Rotate: Knowledge graph embedding by relational rotation in complex space. In International Conference on Learning Representations (ICLR), 2019.
+Kristina Toutanova and Danqi Chen. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pp. 57-66, 2015.
+Vladimir Vapnik. The nature of statistical learning theory. Springer science & business media, 2013.
+Ivan Vendrov, Ryan Kiros, Sanja Fidler, and Raquel Urtasun. Order-embeddings of images and language. In International Conference on Learning Representations (ICLR), 2016.
+John Venn. I. On the diagrammatic and mechanical representation of propositions and reasonings. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 10(59):1-18, 1880.
+Luke Vilnis and Andrew McCallum. Word representations via gaussian embedding. In International Conference on Learning Representations (ICLR), 2014.
+Luke Vilnis, Xiang Li, Shikhar Murty, and Andrew McCallum. Probabilistic embedding of knowledge graphs with box lattice measures. In Annual Meeting of the Association for Computational Linguistics (ACL), 2018.
+
+Wenhan Xiong, Thien Hoang, and William Yang Wang. Deeppath: A reinforcement learning method for knowledge graph reasoning. In Empirical Methods in Natural Language Processing (EMNLP), 2017.
+Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan R Salakhutdinov, and Alexander J Smola. Deep sets. In Advances in Neural Information Processing Systems (NeurIPS), pp. 3391-3401, 2017.
+
+# A PROOF OF THEOREM 1
+
+Proof. To model any EPFO query, we need to at least model a subset of EPFO queries $\mathcal{Q} = \{\bigvee_{q_i\in S}q_i: S\subseteq \{q_1,\ldots ,q_M\} \}$ , where the corresponding denotation sets are $\{\cup_{q_i\in S}[[q_i]]:S\subseteq \{q_1,\dots ,q_M\} \}$ . For the sake of modeling $\mathcal{Q}$ , without loss of generality, we consider assigning a single entity embedding $\mathbf{v}_{\mathbf{q}_i}$ to all $v\in [[q_i]]$ , so there are $M$ kinds of entity vectors, $\mathbf{v}_{\mathbf{q}_1},\ldots ,\mathbf{v}_{\mathbf{q}_M}$ . To model all queries in $\mathcal{Q}$ , it is necessary to satisfy the following.
+
+$$
+\exists \mathbf{v}_{\mathbf{q}_1}, \dots, \exists \mathbf{v}_{\mathbf{q}_M}, \ \forall S \subseteq \{q_1, \dots, q_M\}, \ \exists \mathbf{q}_S \in \Xi, \ \text{such that} \ \operatorname{dist}(\mathbf{v}_{\mathbf{q}_i}; \mathbf{q}_S) \begin{cases} \leq \beta & \text{if } q_i \in S, \\ > \beta & \text{if } q_i \notin S, \end{cases} \tag{7}
+$$
+
+where $\mathbf{q}_S$ is the embedding of the query $\vee_{q_i\in S}q_i$ . Eq. 7 means that we can learn the $M$ kinds of entity vectors such that for every query in $\mathcal{Q}$ , we can obtain its embedding to model the corresponding set using the distance function. Notice that this is agnostic to the specific algorithm used to embed the query $\vee_{q_i\in S}q_i$ into $\mathbf{q}_S$ ; thus, our result is generally applicable to any method that embeds the query into a single vector.
+
+Crucially, satisfying Eq. 7 is equivalent to $\{\mathrm{sign}(\beta -\mathrm{dist}(\cdot ;\mathbf{q})): \mathbf{q}\in \Xi \}$ being able to shatter $\{\mathbf{v}_{\mathbf{q}_1},\dots ,\mathbf{v}_{\mathbf{q}_M}\}$ , i.e., any binary labeling of the points can be perfectly fit by some classifier in the function class. To sum up, in order to model any EPFO query, we need to at least model any query in $\mathcal{Q}$ , which requires the VC dimension of the distance function to be at least $M$ .
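As a concrete illustration of the shattering requirement (our own toy example, not from the paper): with the fixed-threshold 1-D classifier $\mathrm{sign}(\beta - |x - q|)$, three well-separated points cannot be shattered, because a labeling such as the one selecting the union of the first and third denotation sets is unrealizable by any single center $q$.

```python
from itertools import product

# Three 1-D entity embeddings, one per pairwise-disjoint denotation set
points = [0.0, 1.0, 2.0]
BETA = 0.4  # fixed distance threshold

def realizable(labels, centers):
    """Is there a query embedding q whose classifier sign(BETA - |x - q|)
    reproduces the given True/False labeling of the points?"""
    return any(all((abs(x - q) <= BETA) == want
                   for x, want in zip(points, labels))
               for q in centers)

centers = [i / 100 for i in range(-100, 301)]  # dense grid of candidate q
shattered = all(realizable(p, centers)
                for p in product([False, True], repeat=3))
# The labeling (True, False, True) — the union of the first and third sets —
# has no realizing q, so the points are not shattered (VC dimension < M = 3).
```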
+
+# B DETAILS ABOUT COMPUTING $M$ IN THEOREM 1
+
+Given the full KG $\mathcal{G}_{\mathrm{test}}$ for the FB15k dataset, our goal is to find conjunctive queries $q_{1},\ldots ,q_{M}$ such that $\llbracket q_1\rrbracket ,\dots ,\llbracket q_M\rrbracket$ are pairwise disjoint. For the conjunctive queries, we use two types of queries: '1p' and '2i', whose query structures are shown in Figure 4. On FB15k, we instantiate 308,006 queries of type '1p', which we denote by $S_{1\mathrm{p}}$ . Out of all the queries in $S_{1\mathrm{p}}$ , 129,717 queries have more than one answer entity, and we denote this set of queries by $S_{1\mathrm{p}}^{\prime}$ . We then generate a set of queries of type '2i' by first randomly sampling two queries from $S_{1\mathrm{p}}^{\prime}$ and then taking their conjunction; we denote the resulting set of queries by $S_{2\mathrm{i}}$ .
+
+Now, we use $S_{1\mathrm{p}}$ and $S_{2\mathrm{i}}$ to generate a set of conjunctive queries whose denotation sets are pairwise disjoint. First, we prepare two empty sets, $\mathcal{V}_{\mathrm{seen}} = \emptyset$ and $\mathcal{Q} = \emptyset$ . Then, for every $q \in S_{1\mathrm{p}}$ , if $\mathcal{V}_{\mathrm{seen}} \cap [[q]] = \emptyset$ holds, we let $\mathcal{Q} \gets \mathcal{Q} \cup \{q\}$ and $\mathcal{V}_{\mathrm{seen}} \gets \mathcal{V}_{\mathrm{seen}} \cup [[q]]$ . This procedure alone gives us a $\mathcal{Q}$ containing 10,812 conjunctive queries with pairwise-disjoint denotation sets. Applying the analogous procedure to $S_{2\mathrm{i}}$ further grows $\mathcal{Q}$ to 13,365 such queries. Therefore, we get $M = 13,365$ .
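The filtering step above is a one-pass greedy scan; a minimal sketch with toy denotation sets (the query names and answer sets are hypothetical):

```python
def greedy_disjoint(queries, denotation):
    """One-pass greedy selection of queries with pairwise-disjoint
    denotation sets, mirroring the procedure described above."""
    seen, picked = set(), []
    for q in queries:
        answers = denotation[q]
        if seen.isdisjoint(answers):  # V_seen ∩ [[q]] = ∅
            picked.append(q)
            seen |= answers           # V_seen <- V_seen ∪ [[q]]
    return picked

denotation = {"q1": {1, 2}, "q2": {2, 3}, "q3": {4}}
M = len(greedy_disjoint(["q1", "q2", "q3"], denotation))
# q2 overlaps q1 on entity 2, so only q1 and q3 survive: M == 2
```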
+
+# C EXPERIMENTS ON LINK PREDICTION
+
+| Method | FB15k H@3 | FB15k MRR | FB15k-237 H@3 | FB15k-237 MRR | NELL995 H@3 | NELL995 MRR |
| --- | --- | --- | --- | --- | --- | --- |
| query2box | 0.613 | 0.516 | 0.331 | 0.295 | 0.382 | 0.303 |
| query2box-1p | 0.633 | 0.531 | 0.323 | 0.292 | 0.415 | 0.320 |
| TransE | 0.611 | 0.522 | 0.318 | 0.289 | 0.413 | 0.320 |
+
+Table 4: Performance comparison on the simple link prediction task on the three datasets.
+
+In Table 4, we report link prediction performance (no multi-hop logical reasoning required), following the conventional metrics (averaged over the triples of head, relation, and tail). Here, query2box is trained on all five query structures shown in Figure 4, while query2box-1p is trained only on simple 1p queries. We find that our query2box is comparable to or slightly better than TransE on simple link prediction. Note that in the case of simple link prediction, we do not expect a huge performance gain from box embeddings, as link prediction involves neither logical reasoning nor handling a large set of answer entities. Also, we see that even when query2box is trained over diverse queries, its link prediction performance remains comparable to TransE and query2box-1p, which are trained solely on the link prediction task.
+
+
+Figure 5: Example of the degenerated queries, including (1) $r$ and $r^{-1}$ appear along one path and (2) same anchor node and relation in intersections.
+
+
+
+# D DETAILS ON QUERY GENERATION
+
+Given $\mathcal{G}_{\mathrm{train}}$ , $\mathcal{G}_{\mathrm{valid}}$ , and $\mathcal{G}_{\mathrm{test}}$ as defined in Section 4.1, we generate training, validation, and test queries of different query structures. During training, we consider the first 5 query structures. For evaluation, we consider all 9 query structures in Fig. 4, including query structures that are both seen and unseen during training. We instantiate queries in the following way.
+
+Given a KG and a query structure (which is a DAG), we use pre-order traversal to assign an entity and a relation to each node and edge of the DAG to instantiate a query. Namely, we start from the root of the DAG (the target node): we sample an entity $e$ uniformly from the KG to be the root; then, for every node connected to the root in the DAG, we choose a relation $r$ uniformly from the in-coming relations of $e$ in the KG, and a new entity $e'$ from the set of entities that reach $e$ via $r$ in the KG. We assign the relation $r$ to the edge and $e'$ to the node, and continue the process following the pre-order traversal. This iterative process stops once every node and edge in the DAG has been assigned an entity and a relation. The leaf nodes of the DAG serve as the anchor nodes. Note that during the entity and relation assignment, we specifically filter out all the degenerate queries, as shown in Fig. 5. Then we perform a post-order traversal of the DAG on the KG, starting from the anchor nodes, to obtain the set of answer entities to this query.
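For the simple chain-structured ("p") queries, the backward pre-order instantiation described above can be sketched as follows (a hypothetical minimal sketch; full DAG structures with intersections additionally branch at each node and must filter the degenerate cases):

```python
import random

def instantiate_path_query(kg_incoming, entities, length, rng=random):
    """Instantiate a chain query by pre-order traversal: sample the target
    (root) entity uniformly, then repeatedly walk backward along a uniformly
    chosen incoming edge, recording the relation used.
    kg_incoming[e] is a list of (r, e') pairs meaning edge (e', r, e)."""
    target = rng.choice(entities)
    e, relations = target, []
    for _ in range(length):
        incoming = kg_incoming.get(e, [])
        if not incoming:
            return None          # dead end; in practice, resample the query
        r, e_prev = rng.choice(incoming)
        relations.append(r)
        e = e_prev
    # the final entity is the anchor; relations are ordered anchor -> target
    return e, list(reversed(relations)), target
```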
+
+When generating validation/test queries, we explicitly filter out trivial queries that can be fully answered by subgraph matching on $\mathcal{G}_{\mathrm{train}} / \mathcal{G}_{\mathrm{valid}}$ .
+
+# E DETAILS OF NELL995 DATASET
+
+Here we detail our pre-processing of the NELL995 dataset, which is originally presented by Xiong et al. (2017). Following Allen et al. (2019), we first combine the validation and test sets with the training set to create the whole knowledge graph for NELL995. Then we create new validation and test set splits by randomly selecting 20,000 triples each from the whole knowledge graph. Note that we filter out all the entities that only appear in the validation and test sets but not in the training set.
+
+| Dataset | Entities | Relations | Training Edges | Validation Edges | Test Edges | Total Edges |
+| --- | --- | --- | --- | --- | --- | --- |
+| FB15k | 14,951 | 1,345 | 483,142 | 50,000 | 59,071 | 592,213 |
+| FB15k-237 | 14,505 | 237 | 272,115 | 17,526 | 20,438 | 310,079 |
+| NELL995 | 63,361 | 200 | 114,213 | 14,324 | 14,267 | 142,804 |
+
+Table 5: Knowledge graph dataset statistics as well as the split into training, validation, and test sets.
+
+| Dataset | Training 1p | Training others | Validation 1p | Validation others | Test 1p | Test others |
+| --- | --- | --- | --- | --- | --- | --- |
+| FB15k | 273,710 | 273,710 | 59,097 | 8,000 | 67,016 | 8,000 |
+| FB15k-237 | 149,689 | 149,689 | 20,101 | 5,000 | 22,812 | 5,000 |
+| NELL995 | 107,982 | 107,982 | 16,927 | 4,000 | 17,034 | 4,000 |
+
+# F DATASET STATISTICS
+
+Table 5 summarizes the basic statistics of the three datasets used in our experiments. Table 6 summarizes the basic statistics of the generated logical queries.
+
+# G HYPER-PARAMETERS
+
+We use an embedding dimensionality of $d = 400$ and set $\gamma = 24$ , $\alpha = 0.2$ for the loss in Eq. 4. We train on all types of training queries jointly. In every iteration, we sample a minibatch of 512 queries for each query structure (details in Appendix D), and for each query we sample 1 answer entity and 128 negative entities. We optimize the loss in Eq. 4 using the Adam optimizer (Kingma & Ba, 2015) with learning rate $= 0.0001$ . We train all models for 250 epochs, monitor performance on the validation set, and report the test performance.
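Eq. 4 itself is not reproduced in this appendix; as a hedged illustration only, a negative-sampling margin loss of this general form (one positive answer, several negatives, margin $\gamma$) can be sketched as below. `dist_pos` and `dist_negs` stand in for the query-to-entity box distances, and the role of $\alpha$ (weighting the inside-box distance term) is omitted here:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def negative_sampling_loss(dist_pos, dist_negs, gamma=24.0):
    """Margin-based negative-sampling loss: push the positive answer's
    distance below the margin gamma and the negatives' distances above it."""
    loss = -math.log(sigmoid(gamma - dist_pos))
    loss -= sum(math.log(sigmoid(d - gamma)) for d in dist_negs) / len(dist_negs)
    return loss
```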
+
+# H ANALYSIS OF LEARNED BOX OFFSET SIZE
+
+Here we study the correlation between the box size (measured by the L1 norm of the box offset) and the average number of entities that are contained in 1p queries using the corresponding relation. Table 7 shows the top 10 relations with smallest/largest box sizes. We observe a clear trend that the size of the box has a strong correlation with the number of entities the box encloses. Specifically, we see that one-to-many relations tend to have larger offset embeddings, which demonstrates that larger boxes are indeed used to model sets of more points (entities).
+
+Table 6: Number of training, validation, and test queries generated for different query structures.
+
+| Top 10 relations with smallest box size | #Ent | Box size |
+| --- | --- | --- |
+| /architecture/../owner | 1.0 | 2.3 |
+| /base/../dog_breeds | 2.0 | 4.0 |
+| /education/../campuses | 1.0 | 4.3 |
+| /education/../educational_institution | 1.0 | 4.6 |
+| /base/../collective | 1.0 | 5.1 |
+| /base/../member | 1.0 | 5.1 |
+| /people/../appointed_by | 1.0 | 5.2 |
+| /base/../fashion_models_with_this_hair_color | 2.0 | 5.2 |
+| /fictional_universe/../parents | 1.0 | 5.5 |
+| /american_football/../team | 2.0 | 6.7 |
+
+| Top 10 relations with largest box size | #Ent | Box size |
+| --- | --- | --- |
+| /common/../topic | 3616.0 | 147.0 |
+| /user/../taxonomy | 1.0 | 137.2 |
+| /common/../category | 1.3 | 125.6 |
+| /base/../administrative_area_type | 1.0 | 123.6 |
+| /medicine/../legal_status | 1.5 | 114.9 |
+| /people/../spouse | 889.8 | 114.3 |
+| /sports/../team | 397.9 | 113.9 |
+| /people/../location_of_ceremony | 132.0 | 108.4 |
+| /sports/../team | 83.1 | 104.5 |
+| /user/../subject | 495.0 | 104.2 |
+
+Table 7: Top 10 relations with smallest/largest box size in FB15k.
+
+# I MRR RESULTS
+
+| Method | Avg | 1p | 2p | 3p | 2i | 3i | ip | pi | 2u | up |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| FB15k | | | | | | | | | | |
+| Q2B | 0.41 | 0.654 | 0.373 | 0.274 | 0.488 | 0.602 | 0.194 | 0.339 | 0.468 | 0.301 |
+| GQE | 0.328 | 0.505 | 0.320 | 0.218 | 0.439 | 0.536 | 0.139 | 0.272 | 0.3 | 0.244 |
+| GQE-DOUBLE | 0.326 | 0.49 | 0.3 | 0.222 | 0.438 | 0.532 | 0.142 | 0.28 | 0.285 | 0.242 |
+| FB15k-237 | | | | | | | | | | |
+| Q2B | 0.235 | 0.4 | 0.225 | 0.173 | 0.275 | 0.378 | 0.105 | 0.18 | 0.198 | 0.178 |
+| GQE | 0.203 | 0.346 | 0.193 | 0.145 | 0.25 | 0.355 | 0.086 | 0.156 | 0.145 | 0.151 |
+| GQE-DOUBLE | 0.205 | 0.346 | 0.191 | 0.144 | 0.258 | 0.361 | 0.087 | 0.164 | 0.144 | 0.149 |
+| NELL995 | | | | | | | | | | |
+| Q2B | 0.254 | 0.413 | 0.227 | 0.208 | 0.288 | 0.414 | 0.125 | 0.193 | 0.266 | 0.155 |
+| GQE | 0.21 | 0.311 | 0.193 | 0.175 | 0.273 | 0.399 | 0.078 | 0.168 | 0.159 | 0.13 |
+| GQE-DOUBLE | 0.211 | 0.309 | 0.192 | 0.174 | 0.275 | 0.408 | 0.08 | 0.17 | 0.156 | 0.129 |
+
+Table 8: MRR results of QUERY2BOX vs. GQE on FB15k, FB15k-237 and NELL995.
+
+| Method | Avg | 1p | 2p | 3p | 2i | 3i | ip | pi | 2u | up |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| FB15k | | | | | | | | | | |
+| Q2B | 0.41 | 0.654 | 0.373 | 0.274 | 0.488 | 0.602 | 0.194 | 0.339 | 0.468 | 0.301 |
+| Q2B-AVG | 0.396 | 0.648 | 0.368 | 0.27 | 0.476 | 0.564 | 0.182 | 0.295 | 0.465 | 0.3 |
+| Q2B-DEEPSETS | 0.402 | 0.631 | 0.371 | 0.269 | 0.499 | 0.605 | 0.181 | 0.325 | 0.437 | 0.298 |
+| Q2B-AVG-1P | 0.324 | 0.688 | 0.236 | 0.159 | 0.378 | 0.435 | 0.122 | 0.225 | 0.498 | 0.178 |
+| Q2B-SHAREDOFFSET | 0.296 | 0.511 | 0.273 | 0.199 | 0.351 | 0.444 | 0.132 | 0.233 | 0.311 | 0.213 |
+| FB15k-237 | | | | | | | | | | |
+| Q2B | 0.235 | 0.4 | 0.225 | 0.173 | 0.275 | 0.378 | 0.105 | 0.18 | 0.198 | 0.178 |
+| Q2B-AVG | 0.219 | 0.398 | 0.222 | 0.171 | 0.236 | 0.328 | 0.1 | 0.145 | 0.193 | 0.177 |
+| Q2B-DEEPSETS | 0.23 | 0.395 | 0.224 | 0.172 | 0.264 | 0.372 | 0.101 | 0.168 | 0.194 | 0.176 |
+| Q2B-AVG-1P | 0.196 | 0.41 | 0.18 | 0.122 | 0.217 | 0.274 | 0.085 | 0.127 | 0.209 | 0.145 |
+| Q2B-SHAREDOFFSET | 0.18 | 0.328 | 0.18 | 0.131 | 0.207 | 0.289 | 0.083 | 0.136 | 0.135 | 0.132 |
+| NELL995 | | | | | | | | | | |
+| Q2B | 0.254 | 0.413 | 0.227 | 0.208 | 0.288 | 0.414 | 0.125 | 0.193 | 0.266 | 0.155 |
+| Q2B-AVG | 0.235 | 0.406 | 0.219 | 0.2 | 0.251 | 0.342 | 0.114 | 0.174 | 0.259 | 0.149 |
+| Q2B-DEEPSETS | 0.246 | 0.405 | 0.226 | 0.207 | 0.275 | 0.403 | 0.107 | 0.182 | 0.256 | 0.153 |
+| Q2B-AVG-1P | 0.227 | 0.468 | 0.191 | 0.16 | 0.234 | 0.275 | 0.094 | 0.162 | 0.332 | 0.125 |
+| Q2B-SHAREDOFFSET | 0.196 | 0.318 | 0.187 | 0.172 | 0.228 | 0.312 | 0.098 | 0.156 | 0.169 | 0.127 |
+
+Table 9: MRR results of QUERY2BOX vs. several variants on FB15k, FB15k-237 and NELL995.
\ No newline at end of file
diff --git a/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/images.zip b/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4946cc58ea054f9cee151db3e7aaf9f533db8637
--- /dev/null
+++ b/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d4b3fed7488bba8edb6d85d7c2d03664d97bf394e724bbaa35f8faf73dc7cd41
+size 868686
diff --git a/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/layout.json b/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..cba5083a93a06fceceaa74e5dc5d07c2542798f4
--- /dev/null
+++ b/query2boxreasoningoverknowledgegraphsinvectorspaceusingboxembeddings/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d00f79436bb3fe3c187b67f0fff748a65facab5c4bd6b22a25e934e3d125a65f
+size 640436
diff --git a/queryefficientmetaattacktodeepneuralnetworks/e1a2c63d-703e-4881-b497-1e4dca6655b7_content_list.json b/queryefficientmetaattacktodeepneuralnetworks/e1a2c63d-703e-4881-b497-1e4dca6655b7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..3e88606684953d47b80cb099e91f34d27962f760
--- /dev/null
+++ b/queryefficientmetaattacktodeepneuralnetworks/e1a2c63d-703e-4881-b497-1e4dca6655b7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dca38384e72b7651e9453a4cb3d57b686f2b7bcde1cae2f6596c6d8ec88c70f9
+size 78165
diff --git a/queryefficientmetaattacktodeepneuralnetworks/e1a2c63d-703e-4881-b497-1e4dca6655b7_model.json b/queryefficientmetaattacktodeepneuralnetworks/e1a2c63d-703e-4881-b497-1e4dca6655b7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..01fb1ce897a4b77d79cde77f6c8d8e64844f5656
--- /dev/null
+++ b/queryefficientmetaattacktodeepneuralnetworks/e1a2c63d-703e-4881-b497-1e4dca6655b7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:99d59efaba49828a3bcd7f5c380a172e178aec61a01e15f3498f0c72b2fc761d
+size 91814
diff --git a/queryefficientmetaattacktodeepneuralnetworks/e1a2c63d-703e-4881-b497-1e4dca6655b7_origin.pdf b/queryefficientmetaattacktodeepneuralnetworks/e1a2c63d-703e-4881-b497-1e4dca6655b7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..620079f4a653bd9f46445b83dd765f4bd7193029
--- /dev/null
+++ b/queryefficientmetaattacktodeepneuralnetworks/e1a2c63d-703e-4881-b497-1e4dca6655b7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a5b78a31059dbd64684626a68758c24e321ca39fe0815f96495a1df4ece5568a
+size 1611249
diff --git a/queryefficientmetaattacktodeepneuralnetworks/full.md b/queryefficientmetaattacktodeepneuralnetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3b728d1e6cb2f1769fd84666cd7275cd95963dae
--- /dev/null
+++ b/queryefficientmetaattacktodeepneuralnetworks/full.md
@@ -0,0 +1,299 @@
+# QUERY-EFFICIENT META ATTACK TO DEEP NEURAL NETWORKS
+
+Jiawei Du $^{1,3*}$ , Hu Zhang $^{2*}$ , Joey Tianyi Zhou $^{3}$ , Yi Yang $^{2}$ , Jiashi Feng $^{1}$
+
+$^{1}$ Dept. ECE, National University of Singapore, Singapore
+
+$^{2}$ ReLER, University of Technology Sydney, Australia
+
+$^{3}$ Institute of High Performance Computing, A*STAR, Singapore
+
+dujiawei@u.nus.edu, Hu.Zhang-1@student.uts.edu.au
+
+joey.tianyi.zhou@gmail.com, Yi.Yang@uts.edu.au
+
+elefjia@nus.edu.sg
+
+# ABSTRACT
+
+Black-box attack methods aim to infer suitable attack patterns for a targeted DNN model by using only the model's output feedback to the corresponding input queries. However, due to the lack of prior knowledge and inefficiency in leveraging the query and feedback information, existing methods are mostly query-intensive in obtaining effective attack patterns. In this work, we propose a meta attack approach that is capable of attacking a targeted model with far fewer queries. Its high query-efficiency stems from using meta learning to learn a generalizable prior abstraction from previously observed attack patterns and exploiting this prior to help infer attack patterns from only a few queries and outputs. Extensive experiments on MNIST, CIFAR10 and tiny-Imagenet demonstrate that our meta attack method can remarkably reduce the number of model queries without sacrificing attack performance. Besides, the obtained meta attacker is not restricted to a particular model but can be easily used, with fast adaptation, to attack a variety of models. The code of our work is available at https://github.com/dydjw9/MetaAttack_ICLR2020/.
+
+# 1 INTRODUCTION
+
+Despite the great success in various tasks, deep neural networks (DNNs) are found to be susceptible to adversarial attacks and often suffer dramatic performance degradation in front of adversarial examples, even if only tiny and invisible noise is imposed on the input (Szegedy et al., 2014). To investigate the safety and robustness of DNNs, many adversarial attack methods have been developed, which apply to either a white-box (Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016; Carlini & Wagner, 2017; Madry et al., 2018) or a black-box setting (Papernot et al., 2017; Brendel et al., 2018; Narodytska & Kasiviswanathan, 2017). In the white-box attack setting, the target model is transparent to the attacker and imperceptible adversarial noise can be easily crafted to mislead this model by leveraging its gradient information (Goodfellow et al., 2015). In contrast, in the black-box setting, the structure and parameters of the target DNN model are invisible, and the adversary can only access the input-output pair in each query. With a sufficient number of queries, black-box methods utilize the returned information to attack the target model generally by estimating gradient (Chen et al., 2017; Ilyas et al., 2018a; Narodytska & Kasiviswanathan, 2017; Cheng et al., 2019).
+
+Black-box attack is more feasible than white-box attack in realistic scenarios, but it is much more query-intensive. This drawback is largely attributed to the fact that the information returned for each queried example is sparse and limited. When inferring attack patterns, existing black-box methods crudely combine the information from two sequential iterations and ignore the implicit but profound message it carries, thus failing to fully exploit the returned information. Although query-efficient algorithms for generating attack examples are very meaningful in practice (Ilyas et al., 2018a), how to enhance query-efficiency for black-box attack remains underexplored.
+
+In this work, we address the problem of query-efficient black-box attack. In particular, we consider a setting where only the top- $k$ probability scores of the target black-box model are accessible. In this practical but challenging scenario, we aim at three important objectives: a lower query number, a higher success rate and a smaller noise magnitude. We develop a meta-learning based attack method, which applies meta learning to obtain prior information from successful attack patterns and uses this prior for efficient optimization. Specifically, we propose to train a meta attacker model through meta learning (Nichol et al., 2018), inspired by its success in solving few-shot learning problems. We first deploy several existing classification models to obtain pairs of (images, gradients) with the max-margin logit classification loss. Then we use the data pairs of each classification model to train the meta attacker. After obtaining the attacker, we use it to attack a new black-box model, accelerating the search for adversarial examples by optimizing it with coordinate-wise gradient estimation. Different from previous methods, we use the estimated gradient not only to update the adversarial noise but also to fine-tune the well-trained attacker. After few-shot fine-tuning, the attacker is able to simulate the gradient distribution of the target model.
+
+We evaluate our method on the MNIST, CIFAR10 and tiny-ImageNet datasets by comparing it with state-of-the-art black-box attack methods including Zoo (Chen et al., 2017), Decision-Boundary (Brendel et al., 2018), AutoZoom (Tu et al., 2019), Opt-attack (Cheng et al., 2019) and Bandits (Ilyas et al., 2018b). In both targeted and untargeted settings, our proposed method achieves an attack success rate and adversarial perturbation comparable to all baselines, but with a significantly reduced query number. The detailed experiment results demonstrate our superior query-efficiency.
+
+# 2 RELATED WORK
+
+Classical white-box attack methods include the Fast-Gradient Sign Method (FGSM) (Goodfellow et al., 2015), IFGSM (Madry et al., 2018), DeepFool (Moosavi-Dezfooli et al., 2016) and the C&W attack (Carlini & Wagner, 2017), following a setting where detailed information about the target model (gradients and losses) is provided. Comparatively, the black-box setting better accords with real-world scenarios, in that little information about the target model is visible to the attacker. The pioneering work on black-box attack (Papernot et al., 2017) tries to construct a substitute model with augmented data and transfer the black-box attack problem to a white-box one. However, its attack performance is very poor due to the limited transferability of adversarial examples between two different models. (Brendel et al., 2018) considers a more restricted case where only top-1 prediction classes are returned and proposes a random-walk based attack method around the decision boundary. It dispenses with class prediction scores and hence requires extensive model queries. Zoo (Chen et al., 2017) is a black-box version of the C&W attack, achieving an attack success rate and visual quality comparable to many white-box attack methods. However, its coordinate-wise gradient estimation requires extensive model evaluations. More recently, (Ilyas et al., 2018a) proposes a query-limited setting with $L_{\infty}$ noise and uses a natural evolution strategy (NES) to enhance query efficiency. Though this method successfully controls the query number, the noise imposed is larger than average. (Narodytska & Kasiviswanathan, 2017) proposes a novel local-search based technique to construct a numerical approximation to the network gradient, which is then carefully used to construct a small set of pixels in an image to perturb. It suffers from a similar problem as (Chen et al., 2017) for pixel-wise attack. (Cheng et al., 2019) considers a hard-label black-box setting and formulates the problem as real-valued optimization solved by a zeroth-order optimization algorithm. Ilyas et al. (2018b) reduce the queries by introducing two gradient priors, a time-dependent prior and a data-dependent prior, and reformulating the optimization problem.
+
+We then briefly introduce some work on meta-learning related to ours. Meta-learning is the process of learning how to learn. A meta-learning algorithm takes in a distribution of tasks, each being a learning problem, and produces a quick learner that can generalize from a small number of examples. Meta-learning has recently become very popular for its fast adaptation ability. MAML (Finn et al., 2017) was the first to propose this idea. Recently, Reptile (Nichol et al., 2018), a simplified algorithm that approximates first-order MAML, was proposed, achieving higher computational efficiency and consuming less memory. With these superior properties, meta learning has been applied to adversarial attack methods (Zügner & Günnemann, 2019; Edmunds et al., 2017). Zügner & Günnemann (2019) attack the structure of a graph model during training to decrease the model's generalization performance. Edmunds et al. (2017) investigate the susceptibility of MAML to adversarial attacks and the transferability of the obtained meta model to a specific task.
+
+# 3 METHOD
+
+# 3.1 PRELIMINARIES: BLACK-BOX ATTACK SCHEMES
+
+We first formulate the black-box attack problem and introduce the widely used solutions. We use $(\pmb{x},t)$ to denote the pair of a natural image and its true label, and $\hat{\pmb{x}}$ and $\hat{t}$ to denote the adversarially perturbed version of $\pmb{x}$ and the label returned by the target classification model $\mathcal{M}_{tar}$ . The black-box attack aims to find an adversarial example $\hat{\pmb{x}}$ with an imperceptible difference from $\pmb{x}$ that fails the target model, i.e., $\hat{t} \neq t$ , by querying the target model multiple times. It can be formulated as
+
+$$
+\min _ {\hat {\pmb {x}}} \ell (\hat {\pmb {x}}, \mathcal {M} _ {t a r} (\hat {\pmb {x}}), t) \quad \text {s.t.} \quad \| \hat {\pmb {x}} - \pmb {x} \| _ {p} \leq \rho , \quad \# \text {queries} \leq Q. \tag {1}
+$$
+
+Here $\| \cdot \| _p$ denotes the $\ell_p$ norm that measures how much perturbation is imposed. $\mathcal{M}_{tar}(\hat{\boldsymbol{x}})$ is the logit or probability returned by the target model $\mathcal{M}_{tar}$ . The loss function $\ell (\hat{\boldsymbol{x}},\mathcal{M}_{tar}(\hat{\boldsymbol{x}}),t)$ measures the degree of certainty with which model $\mathcal{M}_{tar}$ assigns the input $\hat{\boldsymbol{x}}$ to class $t$ . One commonly used adversarial loss is the probability of class $t$ : $\ell (\hat{\boldsymbol{x}},\mathcal{M}_{tar}(\hat{\boldsymbol{x}}),t) = p_{\mathcal{M}_{tar}}(t|\hat{\boldsymbol{x}})$ . The first constraint enforces high similarity between the clean image $\boldsymbol{x}$ and the adversarial one $\hat{\boldsymbol{x}}$ , and the second imposes a fixed budget $Q$ on the number of queries allowed in the optimization.
+
+In the white-box attack setting, the adversary can access the true gradient $\nabla_{\hat{\boldsymbol{x}}_t}\ell (\hat{\boldsymbol{x}}_t)$ and perform gradient descent $\hat{\boldsymbol{x}}_{t + 1} = \hat{\boldsymbol{x}}_t - \nabla_{\hat{\boldsymbol{x}}_t}\ell (\hat{\boldsymbol{x}}_t)$ . But in the black-box setting, the gradient information $\nabla_{\hat{\boldsymbol{x}}_t}\ell (\hat{\boldsymbol{x}}_t)$ is not attainable. In this case, the attacker can estimate the gradient using only the information returned by model evaluations, such as hard labels, logits and probability scores. This kind of estimator is the backbone of so-called zeroth-order optimization approaches (Chen et al., 2017; Narodytska & Kasiviswanathan, 2017; Tu et al., 2019; Ilyas et al., 2018a;b). The estimation is done via the finite difference method (Chen et al., 2017; Narodytska & Kasiviswanathan, 2017; Tu et al., 2019), which finds the $k$ components of the gradient by estimating the inner products of the gradient with the standard basis vectors $e_1,\ldots ,e_k$ :
+
+$$
+\nabla \ell (\boldsymbol {x}) \approx \sum_ {i = 1} ^ {k} \frac {\ell \left(\boldsymbol {x} + h \boldsymbol {e} _ {i}\right) - \ell \left(\boldsymbol {x} - h \boldsymbol {e} _ {i}\right)}{2 h} \boldsymbol {e} _ {i}, \tag {2}
+$$
+
+where the step size $h$ controls the quality of the estimated gradient. Another strategy is to reformulate the loss function (Ilyas et al., 2018a;b). Instead of computing the gradient of $\ell(\boldsymbol{x})$ itself, the expected value of the loss function $\ell(\boldsymbol{x})$ under a search distribution is minimized; when a search distribution of random Gaussian noise is adopted, the gradient estimation problem turns into a zeroth-order estimation problem,
+
+$$
+\nabla \mathbb {E} [ \ell (\boldsymbol {x}) ] \approx \frac {1}{\sigma n} \sum_ {i = 1} ^ {n} \ell (\boldsymbol {x} + \sigma \boldsymbol {\delta} _ {i}) \boldsymbol {\delta} _ {i} \tag {3}
+$$
+
+where $n$ is the number of noise vectors sampled from the distribution. After obtaining the estimated gradient, classical optimization algorithms (Nesterov, 2013; Johnson & Zhang, 2013) can be used to infer the adversarial examples. Though the estimated gradient may not be accurate, it still proves useful enough for adversarial attack. The convergence of these zeroth-order methods is guaranteed under mild assumptions (Ghadimi & Lan, 2013; Nesterov & Spokoiny, 2017; Hazan et al., 2016).
+
+Since each model evaluation consumes a query, naively applying the above gradient estimation to black-box attack is quite query-expensive due to its coordinate- or noise-sampling nature. Take the first strategy on the tiny-Imagenet dataset as an example: it consumes more than 20,000 queries per image to obtain a full gradient estimate, which is not affordable in practice. In this work, we address this limitation by developing a query-efficient meta-learning based attack model.
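The per-coordinate query cost of Eq. 2 can be made concrete with a short sketch (each call to `loss` stands for one model query, so estimating a coordinate subset `coords` costs `2 * len(coords)` queries):

```python
def fd_gradient(loss, x, h=1e-3, coords=None):
    """Coordinate-wise finite-difference gradient estimate (Eq. 2).
    Each estimated coordinate costs two loss evaluations, i.e. two model
    queries; a full estimate over p coordinates costs 2*p queries."""
    coords = range(len(x)) if coords is None else coords
    g = [0.0] * len(x)
    for i in coords:
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g[i] = (loss(xp) - loss(xm)) / (2 * h)
    return g
```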
+
+# 3.2 LEARNING OF META ATTACKER
+
+To reduce the query cost for black-box attack, we apply meta learning to training a meta attacker model, inspired by its recent success in few-shot learning problems (Finn et al., 2017; Nichol et al., 2018). The meta attacker learns to extract useful prior information of the gradient of a variety of models w.r.t. specific input samples. It can infer the gradient for a new target model using only a
+
+Algorithm 1 Meta Attacker Training
+Input: Input images $\mathbb{X}$ , groundtruth gradients $\mathbb{G}_i$ generated from classification models $\mathcal{M}_i$ to serve as task $\mathcal{T}_i$
+1: Randomly initialize $\pmb{\theta}$
+2: while not done do
+3: for all $\mathcal{T}_i$ do
+4: Sample $K$ samples from $(\mathbb{X},\mathbb{G}_i)$ for training, denoted as $(\mathbb{X}_s,\mathbb{G}_i^s)$
+5: Evaluate $\nabla_{\theta}\mathcal{L}_i(\mathcal{A}_{\theta}) = \nabla_{\theta}\| \mathcal{A}_{\theta}(\mathbb{X}_s) - \mathbb{G}_i^s\| _2^2$ with respect to $(\mathbb{X}_s,\mathbb{G}_i^s)$
+6: Update $\theta_i^\prime \coloneqq \theta -\alpha \nabla_\theta \mathcal{L}_i(\mathcal{A}_\theta)$
+7: end for
+8: Update $\pmb {\theta}\coloneqq \pmb {\theta} + \epsilon \frac{1}{n}\sum_{i = 1}^{n}(\pmb{\theta}_i^{\prime} - \pmb {\theta})$
+9: end while
+Output: Parameters $\pmb{\theta}$ of meta model A.
+
+few queries. After obtaining such a meta attacker, we replace the zeroth-order gradient estimation in traditional black box attack methods with it to directly estimate the gradient.
+
+We collect a set of existing classification models $\mathcal{M}_1, \dots, \mathcal{M}_n$ to generate gradient information for universal meta attacker training. Specifically, we feed each image $\pmb{x}$ into the models $\mathcal{M}_1, \dots, \mathcal{M}_n$ respectively and compute the losses $\ell_1, \dots, \ell_n$ using the following max-margin logit classification loss:
+
+$$
+\ell_ {i} (\boldsymbol {x}) = \max \left[ \log \left[ \mathcal {M} _ {i} (\boldsymbol {x}) \right] _ {t} - \max _ {j \neq t} \log \left[ \mathcal {M} _ {i} (\boldsymbol {x}) \right] _ {j}, 0 \right]. \tag {4}
+$$
+
+Here $t$ is the groundtruth label and $j$ indexes the other classes. $[\mathcal{M}_i(\pmb {x})]_t$ is the probability score of the true label predicted by model $\mathcal{M}_i$ , and $[\mathcal{M}_i(\pmb {x})]_j$ denotes the probability scores of the other classes. By performing one step of back-propagation of the losses $\ell_1,\ldots ,\ell_n$ w.r.t. the input images $\pmb{x}$ , the corresponding gradients $\pmb {g}_i = \nabla_{\pmb{x}}\ell_i(\pmb {x})$ , $i = 1,\dots,n$ , are obtained. Finally, we collect $n$ groups of data $\mathbb{X} = \{\pmb {x}\}$ , $\mathbb{G}_i = \{\pmb {g}_i\}$ , $i = 1,\dots,n$ , to train the universal meta attacker.
+
+We design a meta attacker $\mathcal{A}$ with a structure similar to an autoencoder: it consists of symmetric convolution and de-convolution layers and outputs a gradient map of the same size as the input. The meta attacker model $\mathcal{A}$ is parameterized by $\theta$ .
+
+Due to the intrinsic differences between the selected classification models, each obtained set $(\mathbb{X},\mathbb{G}_i)$ is treated as a task $\mathcal{T}_i$ in meta attacker training. During training, in each iteration we draw only $K$ samples from task $\mathcal{T}_i$ and use the loss $\mathcal{L}_i$ to update the model parameters from $\pmb{\theta}$ to $\pmb{\theta}_i'$ , where $\pmb{\theta}_i'$ is computed through one or multiple gradient descent steps: $\pmb{\theta}_i' := \pmb{\theta} - \alpha \nabla_\pmb{\theta} \mathcal{L}_i(\mathcal{A}_\pmb{\theta})$ . To obtain a well-positioned initialization for the meta attacker, its parameters are then optimized by combining the $\pmb{\theta}_i'$ across all tasks $\{\mathcal{T}_i\}_{i=1,\dots,n}$ , following the update strategy of Reptile (Nichol et al., 2018) in meta learning,
+
+$$
+\boldsymbol {\theta} := \boldsymbol {\theta} + \epsilon \frac {1}{n} \sum_ {i = 1} ^ {n} \left(\boldsymbol {\theta} _ {i} ^ {\prime} - \boldsymbol {\theta}\right). \tag {5}
+$$
+
+We adopt mean-squared error (MSE) as the training loss in the inner update,
+
+$$
+\mathcal {L} _ {i} \left(\mathcal {A} _ {\boldsymbol {\theta}}\right) = \left\| \mathcal {A} _ {\boldsymbol {\theta}} \left(\mathbb {X} _ {s}\right) - \mathbb {G} _ {i} ^ {s} \right\| _ {2} ^ {2}. \tag {6}
+$$
+
+The set $(\mathbb{X}_s,\mathbb{G}_i^s)$ denotes the $K$ samples used for each inner update from $\theta$ to $\theta_{i}^{\prime}$ . Since the number of samples $K$ drawn each time is very small, the update strategy above tries to find good meta attacker parameters $\theta$ as an initial point, from which the meta attacker model can quickly adapt to a new data distribution through gradient-descent based fine-tuning on limited samples. This characteristic can be naturally leveraged in attacking new black-box models by estimating their gradient information through a few queries. The detailed training process of our meta attacker is described in Algorithm 1.
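Under the simplifying assumption that each task's inner update is a single gradient step on the MSE loss of Eq. 6, one outer iteration of Algorithm 1 reduces to the following sketch (`inner_grads` stands in for the per-task gradients $\nabla_\theta \mathcal{L}_i(\mathcal{A}_\theta)$, treated here as plain lists for illustration):

```python
def reptile_update(theta, inner_grads, alpha=0.01, eps=0.1):
    """One outer Reptile step (Eq. 5): take an inner gradient step per task
    (theta_i' = theta - alpha * grad_i), then move theta toward the mean of
    the adapted parameters."""
    n = len(inner_grads)
    adapted = [[t - alpha * g for t, g in zip(theta, grad)]
               for grad in inner_grads]
    return [t + eps * (sum(a[k] for a in adapted) / n - t)
            for k, t in enumerate(theta)]
```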
+
+# 3.3 QUERY-EFFICIENT ATTACK VIA META ATTACKER
+
+An effective adversarial attack relies on optimizing the loss function in Eq. 1 w.r.t. the input image to find an adversarial example for the target model $\mathcal{M}_{tar}$ . In contrast, our proposed method applies the meta attacker $\mathcal{A}$ to directly predict the gradient map of a test image.
+
+Algorithm 2 Adversarial Meta Attack Algorithm
+Input: Test image $x_0$ with label $t$ , meta attacker $A_{\theta}$ , target model $\mathcal{M}_{tar}$ , iteration interval $m$ , selected top- $q$ coordinates;
+1: for $t = 0,1,2,\ldots$ do
+2: if $(t + 1) \mod m = 0$ then
+3: Perform zeroth-order gradient estimation on top $q$ coordinates, denoted as $I_t$ and obtain $g_t$ ;
+4: Fine-tune meta attacker $A$ with $(\pmb{x}_t, g_t)$ on $I_t$ by loss $L = \| [\mathcal{A}_{\theta}(\pmb{x}_t)]_{I_t} - [g_t]_{I_t} \|_2^2$ ;
+5: else
+6: Generate the gradient map $g_t$ directly from meta attacker $A$ with $x_t$ , select coordinates $I_t$ ;
+7: end if
+8: Update $[\pmb{x}']_{I_t} = [\pmb{x}_t]_{I_t} + \beta [\pmb{g}_t]_{I_t}$ ;
+9: if $\mathcal{M}_{tar}(\pmb{x}') \neq t$ then
+10: $x_{adv} = x'$ ;
+11: break;
+12: else
+13: $x_{t+1} = x'$ ;
+14: end if
+15: end for
+Output: adversarial example $x_{adv}$ .
+
+We use the obtained meta attacker model $\mathcal{A}$ to predict a useful gradient map for attacking; it must be fine-tuned to adapt to the gradient distribution of the new target model. In particular, instead of fine-tuning once, for each given image $\pmb{x}$ we update $\mathcal{A}$ by leveraging query information with the following periodic scheme. Suppose the given image has been perturbed to $\pmb{x}_t \in \mathbb{R}^p$ at iteration $t$ . If $(t + 1) \mod m = 0$ , our method performs zeroth-order gradient estimation to obtain a gradient map $\pmb{g}_t$ for fine-tuning. Since each estimated pixel of the gradient map consumes two queries, to further save queries we estimate only $q$ of the $p$ coordinates, $q \ll p$ , instead of the full gradient map over all $p$ coordinates. The chosen coordinates are determined by the gradient map $\pmb{g}_{t-1}$ obtained in iteration $t - 1$ : we sort the coordinate indexes by the values of $\pmb{g}_{t-1}$ and select the top- $q$ indexes, denoting this index set as $I_t$ . We feed the image $\pmb{x}_t$ at iteration $t$ into the meta attacker $\mathcal{A}$ and compute the MSE loss on the indexes $I_t$ , i.e., $L = \| [\mathcal{A}_{\theta}(\pmb{x}_t)]_{I_t} - [\pmb{g}_t]_{I_t} \|_2^2$ . Then we perform a few steps of gradient descent on this MSE loss to update the parameters $\pmb{\theta}$ of the meta attacker $\mathcal{A}$ . For the remaining iterations, we simply use the periodically updated attacker $\mathcal{A}_{\theta}$ to directly generate the gradient $\pmb{g}_t = \mathcal{A}_{\theta}(\pmb{x}_t)$ . Given the estimated gradient map $\pmb{g}_t$ at iteration $t$ , we obtain the adversarial sample $\pmb{x}_t'$ by $[\pmb{x}_t']_{I_t} = [\pmb{x}_t]_{I_t} + \beta [\pmb{g}_t]_{I_t}$ , where $\beta$ is a hyperparameter to be tuned. The details are summarized in Algorithm 2.
+
+In our method, the following operations contribute to reducing the number of queries needed by the attacker. First, although we use only $q$ coordinates to fine-tune our meta attacker $\mathcal{A}$ every $m$ iterations, the meta attacker is trained so that it can abstract the gradient distribution of different $x_{t}$ and learn to predict the gradient from a few samples with simple fine-tuning. Second, the most query-consuming part is zeroth-order gradient estimation, due to its coordinate-wise nature. Our algorithm performs it only every $m$ iterations; when the fine-tuned meta attacker $\mathcal{A}$ is used directly, no queries are spent on gradient estimation in those iterations. Intuitively, a larger $m$ implies less gradient-estimation computation and fewer queries. Moreover, as mentioned above, even during zeroth-order gradient estimation only the top- $q$ coordinates are estimated, and $q$ is normally much smaller than the input dimension $p$ .
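
To make the saving concrete, a back-of-the-envelope comparison using the CIFAR10 settings from Section 4.1 ($m = 5$, $q = 500$); the number of attack iterations `T` is illustrative, and the "full" baseline assumes naively estimating all $p$ coordinates every iteration.

```python
# Hypothetical query budget comparison (illustrative T, CIFAR10-like p, m, q).
p = 3 * 32 * 32        # input dimension of a CIFAR10 image
m, q = 5, 500          # fine-tuning period and top-q coordinates (Section 4.1)
T = 50                 # attack iterations (illustrative)

full_estimation = T * 2 * p        # coordinate-wise estimation every iteration
ours = (T // m) * 2 * q            # estimation only every m-th step, top-q coords
print(full_estimation, ours)       # 307200 10000
```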
+
+# 4 EXPERIMENTS
+
+We compare our meta attacker with state-of-the-art black-box attack methods including Zoo (Chen et al., 2017), Decision-Boundary (Brendel et al., 2018), AutoZoom (Tu et al., 2019), Opt-attack (Cheng et al., 2019), FW-black (Chen et al., 2018) and Bandits (Ilyas et al., 2018b) to evaluate its query efficiency. We also study its generalizability and transferability through a Meta transfer attacker, as detailed in the following sections.
+
+# 4.1 SETTINGS
+
+Datasets and Target Models We evaluate the attack performance on MNIST (LeCun, 1998) for handwritten digit recognition, and on CIFAR10 (Krizhevsky & Hinton, 2009) and tiny-Imagenet (Russakovsky et al., 2015) for object classification. The architecture details of the meta attack models on MNIST, CIFAR10 and tiny-Imagenet are given in Table 6. For MNIST, we train a separate meta attacker model since its images have a different channel number from the other natural-image datasets. For CIFAR10 and tiny-Imagenet, we use a common meta attacker model. On CIFAR10, we choose ResNet18 (He et al., 2016) as the target model $\mathcal{M}_{tar}$ and use VGG13, VGG16 (Simonyan & Zisserman, 2014) and GoogleNet (Szegedy et al., 2015) for training our meta attacker. On tiny-Imagenet, we choose VGG19 and ResNet34 as two separate target models, and jointly use VGG13, VGG16 and ResNet18 to train the meta attacker.
+
+Attack Protocols For a target black-box model $\mathcal{M}_{tar}$ , obtaining one (input, output) pair is counted as one query. We use the mis-classification rate as the attack success rate, and randomly select 1000 images from each dataset as test images. To evaluate the overall noise added by an attack method, we use the mean $L_{2}$ distance across all samples, noise $(\mathcal{M}_{tar}) = \frac{1}{n}\sum_{i=1}^{n}\|\pmb{x}_{i,\mathcal{A},\mathcal{M}_{tar}}^{adv} - \pmb{x}_{i}\|_{2}$ , where $\pmb{x}_{i,\mathcal{A},\mathcal{M}_{tar}}^{adv}$ denotes the adversarial version of the authentic sample $\pmb{x}_{i}$ .
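
The distortion metric above is straightforward to compute; a minimal sketch (array shapes and values are illustrative):

```python
import numpy as np

def mean_l2_noise(x_adv, x):
    """Average L2 distortion over n samples: (1/n) * sum_i ||x_adv_i - x_i||_2."""
    diffs = x_adv.reshape(len(x), -1) - x.reshape(len(x), -1)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

x = np.zeros((4, 3, 8, 8))          # four toy 3x8x8 images
x_adv = x + 0.5                     # uniform perturbation of 0.5 per pixel
print(mean_l2_noise(x_adv, x))      # sqrt(3*8*8*0.25) = sqrt(48) ~ 6.93
```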
+
+Meta-training Details For all experiments, we use the same architecture for the meta attacker $\mathcal{A}$ , shown in Table 6. We use Reptile (Nichol et al., 2018) with a learning rate of 0.01 to train the meta attackers. We use 10000 randomly selected images from the training set to train the meta attackers on the three datasets; the proportion of selected images relative to the whole training set is $16\%$ , $20\%$ , and $10\%$ , respectively. The fine-tuning period is set to $m = 5$ for MNIST and CIFAR10, and $m = 3$ for tiny-Imagenet. The top $q = 128$ coordinates are selected for attacker fine-tuning and model attacking on MNIST, and $q = 500$ on CIFAR10 and tiny-Imagenet.
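
For readers unfamiliar with Reptile, its meta-update is simple: run a few inner SGD steps on one task, then move the meta-parameters a small step toward the adapted parameters. A toy sketch follows, where simple quadratic losses stand in for the per-model gradient-regression tasks (all values are illustrative, only the 0.01 meta learning rate matches the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(0)
meta_lr, inner_lr, inner_steps = 0.01, 0.1, 5
theta = rng.normal(size=3)                       # meta attacker parameters
# one quadratic "task" per known classification model (toy stand-ins)
task_optima = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]

for _ in range(1000):
    for opt in task_optima:
        phi = theta.copy()
        for _ in range(inner_steps):             # inner SGD on this task's loss
            phi -= inner_lr * (phi - opt)        # grad of 0.5 * ||phi - opt||^2
        theta += meta_lr * (phi - theta)         # Reptile meta-update
# theta converges toward a point between the task optima: here [0.5, 0.5, 0]
```

The resulting initialization sits between the task optima, which is what lets a few fine-tuning steps adapt it quickly to any one task.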
+
+# 4.2 COMPARISON WITH BASELINES
+
+We compare our meta attacker with the baselines for both untargeted and targeted black-box attacks on the three datasets. The results are reported in detail below.
+
+Untargeted Attack Untargeted attack aims to generate adversarial examples that are misclassified by the attacked model into any category different from the ground-truth one. The overall results are shown in Table 1, in which Meta transfer denotes a meta attacker trained on one dataset being used to attack target models on another dataset. Our method is competitive with the baselines in terms of adversarial perturbation and success rate, while requiring substantially fewer queries.
+
+We also compare our method with Zoo (Chen et al., 2017) and AutoZoom (Tu et al., 2019) from a query-efficiency perspective. We use these methods to conduct untargeted attacks on CIFAR10 and tiny-Imagenet while limiting the maximum number of queries per adversarial example, and compare their success rates. The results are shown in Fig. 1. For every query threshold, the success rate of our method is higher than that of Zoo and AutoZoom. This is possibly because the testing samples have different $L_{2}$ distances to the decision boundary. The higher success rate of our method indicates that our meta attacker can predict correct gradients even when query information is limited. These results give strong evidence of the effectiveness of our proposed method in enhancing query efficiency.
+
+
+Figure 1: Comparison with limited queries.
+
+
+
+
+Figure 2: Top- $q$ and $\beta$ selection.
+
+
+
+Targeted Attack Targeted attack aims to generate adversarial noise such that the perturbed sample is mis-classified into a pre-specified category. It is a stricter setting than the untargeted
+
+Table 1: MNIST, CIFAR10 and tiny-ImageNet untargeted attack comparison: Meta attacker attains comparable success rate and $L_{2}$ distortion as baselines, and significantly reduces query numbers.
+
+| Dataset / Target model | Method | Success Rate | Avg. \( L_2 \) | Avg. Queries |
+| --- | --- | --- | --- | --- |
+| MNIST / Net4 | Zoo (Chen et al., 2017) | 1.00 | 1.61 | 21,760 |
+| | Decision Boundary (Brendel et al., 2018) | 1.00 | 1.85 | 13,630 |
+| | Opt-attack (Cheng et al., 2019) | 1.00 | 1.85 | 12,925 |
+| | AutoZoom (Tu et al., 2019) | 1.00 | 1.86 | 2,412 |
+| | Bandits (Ilyas et al., 2018b) | 0.73 | 1.99 | 3,771 |
+| | Meta attack (ours) | 1.00 | 1.77 | 749 |
+| CIFAR10 / Resnet18 | Zoo (Chen et al., 2017) | 1.00 | 0.30 | 8,192 |
+| | Decision Boundary (Brendel et al., 2018) | 1.00 | 0.30 | 17,010 |
+| | Opt-attack (Cheng et al., 2019) | 1.00 | 0.33 | 20,407 |
+| | AutoZoom (Tu et al., 2019) | 1.00 | 0.28 | 3,112 |
+| | Bandits (Ilyas et al., 2018b) | 0.91 | 0.33 | 4,491 |
+| | FW-black (Chen et al., 2018) | 1.00 | 0.43 | 5,021 |
+| | Meta transfer (ours) | 0.92 | 0.35 | 1,765 |
+| | Meta attack (ours) | 0.94 | 0.34 | 1,583 |
+| tiny-ImageNet / VGG19 | Zoo (Chen et al., 2017) | 1.00 | 0.52 | 27,827 |
+| | Decision Boundary (Brendel et al., 2018) | 1.00 | 0.52 | 49,942 |
+| | Opt-attack (Cheng et al., 2019) | 1.00 | 0.53 | 71,016 |
+| | AutoZoom (Tu et al., 2019) | 1.00 | 0.54 | 8,904 |
+| | Bandits (Ilyas et al., 2018b) | 0.78 | 0.54 | 9,159 |
+| | Meta transfer (ours) | 0.99 | 0.56 | 3,624 |
+| | Meta attack (ours) | 0.99 | 0.53 | 3,278 |
+| tiny-ImageNet / Resnet34 | Zoo (Chen et al., 2017) | 1.00 | 0.47 | 25,344 |
+| | Decision Boundary (Brendel et al., 2018) | 1.00 | 0.48 | 49,982 |
+| | AutoZoom (Tu et al., 2019) | 1.00 | 0.45 | 9,770 |
+| | Opt-attack (Cheng et al., 2019) | 1.00 | 0.52 | 60,437 |
+| | Bandits (Ilyas et al., 2018b) | 0.73 | 0.49 | 9,978 |
+| | Meta transfer (ours) | 0.99 | 0.56 | 3,540 |
+| | Meta attack (ours) | 0.99 | 0.53 | 3,268 |
+
+one. For fair comparison, we define the target label for each sample: a sample with label $\ell$ gets the target label $(\ell + 1) \mod \# \text{classes}$ . We deploy our meta attacker in the same way as above. The results on MNIST, CIFAR10 and tiny-ImageNet are shown in Table 2. Similar to the untargeted results, we achieve noise and success rates comparable to the baselines with reduced query numbers.
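
The target-label rule is a one-liner; a minimal sketch (the `num_classes` value here is illustrative for a 10-class dataset):

```python
num_classes = 10

def target_label(label):
    # "a sample with label l gets the target label (l + 1) mod #classes"
    return (label + 1) % num_classes

print([target_label(l) for l in (0, 3, 9)])  # [1, 4, 0]
```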
+
+# 4.3 MODEL ANALYSIS
+
+Meta Training We first test the benefits of meta training by comparing the performance of a meta-trained attacker with that of a Gaussian randomly initialized attacker (no meta training) on the three datasets. Fig. 3 shows their success rate, $L_{2}$ distortion and query counts until the first success. The meta pre-trained attacker achieves on average a $7\%$ higher success rate, $16\%$ lower $L_{2}$ distortion, and $30\%$ fewer queries than the randomly initialized one. This confirms the contribution of meta training to query efficiency as well as attack performance.
+
+Thanks to fine-tuning, the randomly initialized attacker still succeeds on many testing samples. The fine-tuning works like the inner training loop of meta training: with sufficient fine-tuning iterations, the randomly initialized attacker eventually functions like a well-trained meta attacker. This explains its effectiveness on many testing samples, achieved at the cost of more queries. However, it cannot predict gradients as accurately as the well-trained meta attacker during early iterations, and this inaccuracy leads to larger $L_{2}$ distortion at the beginning. In contrast, the meta training process enables the well-trained meta attacker to fast-adapt to the current testing samples, since it is already familiar with the gradient patterns of various models. These results highlight the significant advantages of our meta model for black-box attack.
+
+Generalizability Here we show that a meta attacker trained on one dataset can be transferred to other datasets. We conduct this experiment between CIFAR10 and tiny-Imagenet, denoted as Meta transfer in Tables 1 and 2. We first apply the meta attacker trained on CIFAR10 to attack VGG19 and ResNet34 on tiny-Imagenet, respectively, which differ from the models used for training the meta attacker. Note that this CIFAR10-trained meta attacker has no privileged prior and is not familiar with
+
+Table 2: MNIST, CIFAR10 and tiny-ImageNet targeted attack comparison: Meta attack significantly outperforms other black-box methods in query numbers.
+
+| Dataset / Target model | Method | Success Rate | Avg. \( L_2 \) | Avg. Queries |
+| --- | --- | --- | --- | --- |
+| MNIST / Net4 | Zoo (Chen et al., 2017) | 1.00 | 2.63 | 23,552 |
+| | Decision Boundary (Brendel et al., 2018) | 0.64 | 2.71 | 19,951 |
+| | AutoZoom (Tu et al., 2019) | 0.95 | 2.52 | 6,174 |
+| | Opt-attack (Cheng et al., 2019) | 1.00 | 2.33 | 99,661 |
+| | Meta attack (ours) | 1.00 | 2.66 | 1,299 |
+| CIFAR10 / Resnet18 | Zoo (Chen et al., 2017) | 1.00 | 0.55 | 66,400 |
+| | Decision Boundary (Brendel et al., 2018) | 0.58 | 0.53 | 16,250 |
+| | AutoZoom (Tu et al., 2019) | 1.00 | 0.51 | 9,082 |
+| | Opt-attack (Cheng et al., 2019) | 1.00 | 0.50 | 121,810 |
+| | FW-black (Chen et al., 2018) | 0.90 | 0.73 | 6,987 |
+| | Meta transfer (ours) | 0.92 | 0.74 | 3,899 |
+| | Meta attack (ours) | 0.93 | 0.77 | 3,667 |
+| tiny-ImageNet / VGG19 | Zoo (Chen et al., 2017) | 0.74 | 1.26 | 119,648 |
+| | AutoZoom (Tu et al., 2019) | 0.87 | 1.45 | 53,778 |
+| | Opt-attack (Cheng et al., 2019) | 0.66 | 1.14 | 252,009 |
+| | Meta transfer (ours) | 0.55 | 1.37 | 12,275 |
+| | Meta attack (ours) | 0.54 | 1.24 | 11,498 |
+| tiny-ImageNet / Resnet34 | Zoo (Chen et al., 2017) | 0.60 | 1.03 | 88,966 |
+| | AutoZoom (Tu et al., 2019) | 0.95 | 1.15 | 52,174 |
+| | Opt-attack (Cheng et al., 2019) | 0.78 | 1.00 | 214,015 |
+| | Meta transfer (ours) | 0.69 | 1.40 | 13,435 |
+| | Meta attack (ours) | 0.54 | 1.21 | 12,897 |
+
+
+Figure 3: Comparison of randomly initialized and well-trained meta attackers.
+
+
+
+
+
+either the tiny-Imagenet dataset or the corresponding classification models. Similarly, we use the meta attacker trained on tiny-Imagenet to attack the target ResNet18 model on CIFAR10. The results demonstrate the good generalizability and robustness of our proposed meta attacker.
+
+Parameter Selection We test the selection of top- $q$ and $\beta$ on CIFAR10. We choose $q$ ranging from 350 to 600 and report results with $\beta$ ranging from 3e-3 to 5e-3, as shown in Fig. 2. As $q$ increases, query number, success rate and $L_{2}$ distortion all increase. To balance the overall result, we set $q$ to 500. As $\beta$ increases, success rate and $L_{2}$ distortion increase while query number decreases. To balance success rate and query count, we set $\beta$ to 4e-3 in the experiments.
+
+# 5 CONCLUSION
+
+We propose a meta-based black-box attack method that largely reduces the number of required queries without compromising attack success rate or distortion. We train a meta attacker to learn useful prior information about gradients and incorporate it into the optimization process to decrease the number of queries. Specifically, the meta attacker is fine-tuned to fit the gradient distribution of the target model, and each update is based on the output of the fine-tuned meta attacker. Extensive experimental results confirm the superior query efficiency of our method over the baselines.
+
+# ACKNOWLEDGEMENT
+
+This research is partially supported by Programmatic grant no. A1687b0033 from the Singapore government's Research, Innovation and Enterprise 2020 plan (Advanced Manufacturing and Engineering domain).
+
+Hu Zhang (No. 201706340188) is partially supported by the Chinese Scholarship Council.
+
+Jiashi Feng was partially supported by NUS IDS R-263-000-C67-646, ECRA R-263-000-C87-133, MOE Tier-II R-263-000-D17-112 and AI.SG R-263-000-D97-490.
+
+# REFERENCES
+
+W. Brendel, J. Rauber, and M. Bethge. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. In International Conference on Learning Representations, 2018. URL https://arxiv.org/abs/1712.04248.
+Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39-57. IEEE, 2017.
+Jinghui Chen, Jinfeng Yi, and Quanquan Gu. A frank-wolfe framework for efficient and effective adversarial attacks. arXiv preprint arXiv:1811.10828, 2018.
+Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 15-26. ACM, 2017.
+Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, JinFeng Yi, and Cho-Jui Hsieh. Query-efficient hard-label black-box attack: An optimization-based approach. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJlk6iRqKX.
+Riley Edmunds, Noah Golmant, Vinay Ramasesh, Phillip Kuznetsov, Piyush Patil, and Raul Puri. Transferability of adversarial attacks in model-agnostic meta-learning. Deep Learning and Security Workshop (DLSW) in Singapore, 2017.
+Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1126-1135. JMLR.org, 2017.
+Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341-2368, 2013.
+Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015. URL http://arxiv.org/abs/1412.6572.
+Elad Hazan et al. Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3-4):157-325, 2016.
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
+Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, July 2018a. URL https://arxiv.org/abs/1804.08598.
+Andrew Ilyas, Logan Engstrom, and Aleksander Madry. Prior convictions: Black-box adversarial attacks with bandits and priors. arXiv preprint arXiv:1807.07978, 2018b.
+Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in neural information processing systems, pp. 315-323, 2013.
+Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
+Yann LeCun. The mnist database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
+Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJzIBfZAb.
+
+Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2574-2582, 2016.
+Nina Narodytska and Shiva Kasiviswanathan. Simple black-box adversarial attacks on deep neural networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1310-1318. IEEE, 2017.
+Yurii Nesterov. Introductory lectures on convex optimization: A basic course, volume 87. Springer Science & Business Media, 2013.
+Yurii Nesterov and Vladimir Spokoiny. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, 17(2):527-566, 2017.
+Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999, 2018.
+Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pp. 506-519. ACM, 2017.
+Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. doi: 10.1007/s11263-015-0816-y.
+Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
+Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014. URL http://arxiv.org/abs/1312.6199.
+Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1-9, 2015.
+Chun-Chen Tu, Paishun Ting, Pin-Yu Chen, Sijia Liu, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, and Shin-Ming Cheng. Autozoom: Autoencoder-based zeroth order optimization method for attacking black-box neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 742-749, 2019.
+Daniel Zügner and Stephan Günnemann. Adversarial attacks on graph neural networks via meta learning. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bylnx209YX.
+
+# 6 APPENDIX
+
+# 6.1 MORE EXPERIMENTAL RESULTS
+
+# 6.1.1 COSINE SIMILARITY BETWEEN ESTIMATED GRADIENTS AND WHITE-BOX GRADIENTS
+
+To demonstrate the gradient-estimation ability of our meta attacker, we conducted experiments with the Resnet-34 model on the tiny-Imagenet dataset, comparing the cosine similarity between the gradients estimated by our proposed meta-attacker and the accurate white-box gradients. As a reference, we also report the cosine similarity between ZOO (Chen et al., 2017) estimated gradients and the white-box gradients. The cosine similarities and required numbers of queries are shown in Table 3. We observe that our meta-attacker does fast-adapt to the target model and generates accurate gradients: not only does it estimate gradients with positive cosine similarity to the true gradients, but it also performs close to the ZOO estimates, with a similarly small standard deviation.
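
The metric used here is the standard cosine similarity between flattened gradient vectors; a minimal sketch (the example vectors are illustrative):

```python
import numpy as np

def cosine_similarity(g_est, g_true, eps=1e-12):
    """Cosine similarity between a flattened estimated and white-box gradient."""
    a, b = g_est.ravel(), g_true.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

g_true = np.array([1.0, 2.0, -1.0])
print(cosine_similarity(0.5 * g_true, g_true))   # ~1.0: only direction matters
print(cosine_similarity(-g_true, g_true))        # ~-1.0: opposite direction
```

A positive value means the estimated gradient points into the same half-space as the true gradient, so a gradient-ascent step on the estimate still increases the attack objective.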
+
+Table 3: Cosine similarity between estimated gradients and white-box gradients.
+
+| Task Type | Our meta-attacker: Similarity | Our meta-attacker: Queries | ZOO (Chen et al., 2017): Similarity | ZOO (Chen et al., 2017): Queries |
+| --- | --- | --- | --- | --- |
+| Untargeted | 0.356 ± 0.074 | 3,268 | 0.395 ± 0.079 | 25,344 |
+| Targeted | 0.225 ± 0.090 | 12,897 | 0.363 ± 0.069 | 88,966 |
+
+# 6.1.2 VANILLA TRAINING AUTOENCODER AND META-ATTACKER LEARNING FROM ESTIMATED GRADIENTS
+
+We conducted experiments in Section 4.3 to demonstrate the benefits of meta training, but only compared the performance of a meta-trained attacker with that of a Gaussian randomly initialized attacker. Here we conduct two further experiments to investigate the benefits of meta training. First, we compare our meta-trained attacker with a vanilla autoencoder that learns to map images to the gradients of a single white-box model; the vanilla autoencoder has the same architecture as our meta-attacker, but is trained on one white-box model only. Second, we train a new meta-attacker on four black-box models, i.e., the gradients used for training are estimated via ZOO (Chen et al., 2017). The experimental results are presented in Tables 4 and 5.
+
+Table 4: MNIST untargeted attack comparison.
+
+| Method | Success Rate | Avg. \( L_2 \) | Avg. Queries |
+| --- | --- | --- | --- |
+| Ours (reported) | 1.000 | 1.78 | 1,103 |
+| ZOO estimated | 1.000 | 1.77 | 1,130 |
+| Vanilla Autoencoder | 1.000 | 1.93 | 1,899 |
+| Initialized attacker | 0.912 | 1.96 | 1,721 |
+
+Table 5: MNIST targeted attack comparison.
+
+| Method | Success Rate | Avg. \( L_2 \) | Avg. Queries |
+| --- | --- | --- | --- |
+| Ours (reported) | 1.000 | 2.66 | 1,971 |
+| ZOO estimated | 1.000 | 2.42 | 2,105 |
+| Vanilla Autoencoder | 1.000 | 2.80 | 2,905 |
+| Initialized attacker | 0.895 | 2.81 | 3,040 |
+
+The two experiments are conducted on 1000 randomly selected images from the MNIST testing set. As for the four settings: "Ours (reported)" is the result reported in our paper; "ZOO estimated" is the meta-attacker trained on ZOO-estimated gradients; "Vanilla Autoencoder" is the autoencoder that maps images to gradients, trained on a different MNIST classification model; "Initialized attacker" is the meta-attacker with randomly initialized weights. The "ZOO estimated" model performs close to our reported meta-attacker. However, the "Vanilla Autoencoder" performs much worse (larger $L_2$ distortion and more queries), similarly to the randomly initialized meta-attacker. These two experiments verify the effectiveness of our first meta-training phase.
+
+# 6.2 STRUCTURE OF META ATTACKER
+
+Table 6: Structure of meta attacker. Conv: convolutional layer, Convt: de-convolutional layer.
+
+| Meta attacker (MNIST) | Meta attacker (CIFAR10, tiny-ImageNet) |
+| --- | --- |
+| Conv(16, 3, 3, 1) + ReLU + bn | Conv(32, 3, 3, 1) + ReLU + bn |
+| Conv(32, 4, 4, 2) + ReLU + bn | Conv(64, 4, 4, 2) + ReLU + bn |
+| Conv(64, 4, 4, 2) + ReLU + bn | Conv(128, 4, 4, 2) + ReLU + bn |
+| Conv(64, 4, 4, 2) + ReLU + bn | Conv(256, 4, 4, 2) + ReLU + bn |
+| Convt(64, 4, 4, 2) + ReLU + bn | Convt(256, 4, 4, 2) + ReLU + bn |
+| Convt(32, 4, 4, 2) + ReLU + bn | Convt(128, 4, 4, 2) + ReLU + bn |
+| Convt(16, 4, 4, 2) + ReLU + bn | Convt(64, 4, 4, 2) + ReLU + bn |
+| Convt(8, 3, 3, 1) + ReLU + bn | Convt(32, 3, 3, 1) + ReLU + bn |
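
Reading Table 6, each entry is Conv/Convt(out_channels, kernel_h, kernel_w, stride). Table 6 does not state the padding, so the sketch below assumes padding 1 throughout (an assumption, chosen because it makes the stride-2 4x4 convolutions halve even spatial sizes); it traces the downsampling path of the CIFAR10 attacker with the standard convolution output-size formula.

```python
def conv_out(h, k, s, p):
    """Spatial output size of a convolution: floor((h + 2p - k) / s) + 1."""
    return (h + 2 * p - k) // s + 1

# Assuming padding 1 (not stated in Table 6), the stride-1 3x3 layer keeps
# the 32x32 input size, and the three stride-2 4x4 convs halve it each time:
h = conv_out(32, 3, 1, 1)          # 32
for _ in range(3):
    h = conv_out(h, 4, 2, 1)       # 32 -> 16 -> 8 -> 4
print(h)  # 4
```

The Convt (de-convolution) half of the table mirrors this path, upsampling back toward the input resolution so the attacker can emit a per-pixel gradient map.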
+
+# 6.3 STRUCTURE OF TARGET MODEL USED IN MNIST
+
+Table 7: Neural network architecture used on MNIST.
+
+| MNIST Model (Conv: convolutional layer, FC: fully connected layer) |
+| --- |
+| Conv(128, 3, 3) + Tanh |
+| MaxPool(2, 2) |
+| Conv(64, 3, 3) + Tanh |
+| MaxPool(2, 2) |
+| FC(128) + ReLU |
+| FC(10) + Softmax |
+
+# 6.4 ACCURACY OF TARGET MODELS ON ORIGINAL DATASETS
+
+Table 8: Accuracy of each target model on each dataset
+
+| Dataset | MNIST | CIFAR10 | tiny-ImageNet | tiny-ImageNet |
+| --- | --- | --- | --- | --- |
+| Model | MNIST Model | Resnet18 | VGG19 | Resnet34 |
+| Accuracy | 0.9911 | 0.9501 | 0.6481 | 0.6972 |
+
+# 6.5 ADVERSARIAL EXAMPLES GENERATED BY OUR METHOD
+
+
+Figure 4: Adversarial examples generated by our method on MNIST. The ground-truth images are shown on the diagonal and the rest are adversarial examples misclassified into the target class shown at the top.
+
+
+Figure 5: Adversarial examples generated by our method on CIFAR10. The ground-truth images are shown on the diagonal and the rest are adversarial examples misclassified into the target class shown at the top.
\ No newline at end of file
diff --git a/queryefficientmetaattacktodeepneuralnetworks/images.zip b/queryefficientmetaattacktodeepneuralnetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..3c55eed3d3767fe747bb81e1ab583ab7e922bef4
--- /dev/null
+++ b/queryefficientmetaattacktodeepneuralnetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:449f90b13c07d8c83e0943a1f57895f1170363882bbe15cce35963b5a93ab266
+size 695996
diff --git a/queryefficientmetaattacktodeepneuralnetworks/layout.json b/queryefficientmetaattacktodeepneuralnetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..76b8205b7e4251a961bb2660f90503c702455629
--- /dev/null
+++ b/queryefficientmetaattacktodeepneuralnetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8be6672eabf8a8d252393fc6ffeb0065fd3e04c619b71b2b93775c04d3f2e33a
+size 453542
diff --git a/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/0c96217f-71fe-4cdc-a9e6-187758fd4652_content_list.json b/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/0c96217f-71fe-4cdc-a9e6-187758fd4652_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..9ef8d2fcad8e4197434458ea4498ee45d0cb4281
--- /dev/null
+++ b/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/0c96217f-71fe-4cdc-a9e6-187758fd4652_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fa2e75612b80b0b4b942d0bfc06a92d6ade4cadfa34295192274fc974b127694
+size 113388
diff --git a/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/0c96217f-71fe-4cdc-a9e6-187758fd4652_model.json b/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/0c96217f-71fe-4cdc-a9e6-187758fd4652_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f3dcb2d83d0a807b5d246f47d6291ab4e5216615
--- /dev/null
+++ b/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/0c96217f-71fe-4cdc-a9e6-187758fd4652_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e1815a8812d47604233c2627dea36d8222b0e9e83e1fad8f949e1d5531cb07ec
+size 140622
diff --git a/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/0c96217f-71fe-4cdc-a9e6-187758fd4652_origin.pdf b/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/0c96217f-71fe-4cdc-a9e6-187758fd4652_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..43495306c759ea93b95096a2a1905de2cbdc70a9
--- /dev/null
+++ b/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/0c96217f-71fe-4cdc-a9e6-187758fd4652_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6cca433c72dbe92f13ad9914f7623d8a755c13303be5c55e50bf9b8a64c026c5
+size 1897628
diff --git a/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/full.md b/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..23a0f015b8f218d2156bba60d81dfa2340c07a4f
--- /dev/null
+++ b/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/full.md
@@ -0,0 +1,449 @@
+# RACT: TOWARDS AMORTIZED RANKING-CRITICAL TRAINING FOR COLLABORATIVE FILTERING
+
+Sam Lobel $^{1\dagger}$ , Chunyuan Li $^{2\dagger*}$ , Jianfeng Gao $^{2}$ , Lawrence Carin $^{3}$ $^{1}$ Brown University $^{2}$ Microsoft Research, Redmond $^{3}$ Duke University samuel_lobel@brown.edu chunyl@microsoft.com
+
+# ABSTRACT
+
+We investigate new methods for training collaborative filtering models based on actor-critic reinforcement learning, to more directly maximize ranking-based objective functions. Specifically, we train a critic network to approximate ranking-based metrics, and then update the actor network to directly optimize against the learned metrics. In contrast to traditional learning-to-rank methods that require re-running the optimization procedure for new lists, our critic-based method amortizes the scoring process with a neural network, and can directly provide the (approximate) ranking scores for new lists. We demonstrate the actor-critic's ability to significantly improve the performance of a variety of prediction models, and achieve better or comparable performance to a variety of strong baselines on three large-scale datasets.
+
+# 1 INTRODUCTION
+
+Recommender systems are an important means of improving a user's web experience. Collaborative filtering is a widely-applied technique in recommender systems (Ricci et al., 2015), in which patterns across similar users and items are leveraged to predict user preferences (Su & Khoshgoftaar, 2009). This naturally fits within the learning paradigm of latent variable models (LVMs) (Bishop, 2006), where latent representations capture the shared patterns. Due to their simplicity and effectiveness, LVMs are still a dominant approach. Traditional LVMs employ linear mappings of limited modeling capacity (Paterek, 2007; Mnih & Salakhutdinov, 2008), and a growing body of literature involves applying deep neural networks (DNNs) to collaborative filtering to create more expressive models (He et al., 2017; Wu et al., 2016; Liang et al., 2018). Among them, variational autoencoders (VAEs) (Kingma & Welling, 2013; Rezende et al., 2014) have been proposed as non-linear extensions of LVMs (Liang et al., 2018). Empirically, VAEs significantly outperform many competing LVM-based methods. One essential contribution to the improved performance is the use of the multinomial likelihood, which is argued by Liang et al. (2018) to be a close proxy to ranking loss.
+
+This property is desirable, because in recommender systems we generally care more about the ranking of predictions than an individual item's score. Hence, prediction results are often evaluated using top- $N$ ranking-based metrics, such as Normalized Discounted Cumulative Gain (NDCG) (Jarvelin & Kekalainen, 2002). The VAE is trained to maximize the likelihood of observations; as shown below, this does not necessarily result in higher ranking-based scores. A natural question concerns whether one may directly optimize against ranking-based metrics, which are by nature non-differentiable and piecewise-constant. Previous work on learning-to-rank has been explored this question in the information-retrieval community, where relaxations/approximations of ranking loss are considered (Weimer et al., 2008; Liu et al., 2009; Li, 2014; Weston et al., 2013).
+
+In this paper, we borrow the actor-critic idea from reinforcement learning (RL) (Sutton et al., 1998) to propose an efficient and scalable learning-to-rank algorithm. The critic is trained to approximate the ranking metric, while the actor is trained to optimize against this learned metric. Specifically, with the goal of making the actor-critic approach practical for recommender systems, we introduce a novel feature-based critic architecture. Instead of treating raw predictions as the critic input, and hoping the neural network will discover the metric's structure from massive data, we consider
+
+engineering sufficient statistics for efficient critic learning. Experimental results on three large-scale datasets demonstrate the actor-critic's ability to significantly improve the performance of a variety of latent-variable models, and achieve better or comparable performance to strong baseline methods.
+
+# 2 BACKGROUND: VAES FOR COLLABORATIVE FILTERING
+
+Vectors are denoted as bold lower-case letters $\pmb{x}$ , matrices as bold upper-case letters $\mathbf{X}$ , and scalars as non-bold lower-case letters $x$ . We use $\circ$ for function composition, $\odot$ for element-wise multiplication, and $|\cdot|$ for the cardinality of a set. $\delta(\cdot)$ is the indicator function.
+
+We use $n \in \{1, \dots, N\}$ to index users, and $m \in \{1, \dots, M\}$ to index items. The user-item interaction matrix $\mathbf{X} \in \{0, 1\}^{N \times M}$ collected from the users' implicit feedback is defined as:
+
+$$
+x_{nm} = \begin{cases} 1, & \text{if interaction of user } n \text{ with item } m \text{ is observed;} \\ 0, & \text{otherwise.} \end{cases} \tag{1}
+$$
+
+Note that $x_{nm} = 0$ does not necessarily mean user $n$ dislikes item $m$ ; they may simply be unaware of the item. Further, $x_{nm} = 1$ does not mean user $n$ likes item $m$ , only that there is at least some interest.
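As a concrete illustration, the matrix in equation 1 can be assembled from a list of observed (user, item) pairs. The sketch below is a minimal numpy version; the pairs are hypothetical implicit-feedback events, not data from the paper.

```python
import numpy as np

def build_interaction_matrix(pairs, num_users, num_items):
    """Binary user-item interaction matrix X in {0,1}^{N x M} (equation 1)."""
    X = np.zeros((num_users, num_items), dtype=np.int8)
    for n, m in pairs:
        X[n, m] = 1  # interaction of user n with item m is observed
    return X

# Hypothetical implicit feedback: 3 users, 4 items
pairs = [(0, 0), (0, 2), (1, 1), (2, 3)]
X = build_interaction_matrix(pairs, num_users=3, num_items=4)
```

In practice such matrices are extremely sparse and would be stored in a sparse format, but the dense version suffices to illustrate the definition.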
+
+VAE model VAEs have been investigated for collaborative filtering (Liang et al., 2018), where this principled Bayesian approach is shown to achieve strong performance on large-scale datasets. Given the user's interaction history $\pmb{x} = [x_1, \dots, x_M]^\top \in \{0, 1\}^M$ , our goal is to predict the full interaction behavior with all remaining items. To simulate this process during training, a random binary mask $\pmb{b} \in \{0, 1\}^M$ is introduced, where an entry of 1 denotes un-masked and 0 denotes masked. Thus, $\pmb{x}_h = \pmb{x} \odot \pmb{b}$ is the user's partial interaction history. The goal becomes recovering the masked interactions $\pmb{x}_p = \pmb{x} \odot (1 - \pmb{x}_h)$ , which is equivalent to recovering the full $\pmb{x}$ , as $\pmb{x}_h$ is known.
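The masking step described above can be sketched in a few lines of numpy. Note that because $\pmb{x}$ is binary, the two parts always recompose exactly: $\pmb{x}_h + \pmb{x}_p = \pmb{x}$. The toy vector is illustrative only.

```python
import numpy as np

def split_history(x, alpha, rng):
    """Split interactions x into a partial history x_h = x ⊙ b and the masked
    target x_p = x ⊙ (1 - x_h), with b drawn element-wise from Ber(alpha)."""
    b = (rng.random(x.shape) < alpha).astype(x.dtype)  # 1 = un-masked
    x_h = x * b
    x_p = x * (1 - x_h)  # equals x ⊙ (1 - b) for binary x
    return x_h, x_p

rng = np.random.default_rng(0)
x = np.array([1, 0, 1, 1, 0, 1])
x_h, x_p = split_history(x, alpha=0.5, rng=rng)
```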
+
+In LVMs, each user's binary interaction behavior is assumed to be controlled by a $K$ -dimensional user-dependent latent representation $\pmb{z} \in \mathbb{R}^{K}$ . When applying VAEs to collaborative filtering (Liang et al., 2018), the user's latent feature $\pmb{z}$ is represented as a distribution $q(\pmb{z}|\pmb{x})$ , obtained from some partial history $\pmb{x}_h$ of $\pmb{x}$ . With the assumption that $q(\pmb{z}|\pmb{x})$ follows a Gaussian form, the inference of $\pmb{z}$ for the corresponding $\pmb{x}$ is performed as:
+
+$$
+q_{\phi}(\boldsymbol{z} | \boldsymbol{x}) = \mathcal{N}(\boldsymbol{\mu}, \operatorname{diag}(\boldsymbol{\sigma}^{2})), \ \text{with} \ \boldsymbol{\mu}, \boldsymbol{\sigma}^{2} = f_{\phi}(\boldsymbol{x}_{h}), \ \boldsymbol{x}_{h} = \boldsymbol{x} \odot \boldsymbol{b}, \quad \boldsymbol{b} \sim \operatorname{Ber}(\alpha), \tag{2}
+$$
+
+where $\alpha$ is the hyper-parameter of the Bernoulli distribution, and $f_{\phi}$ is a $\phi$ -parameterized neural network that outputs the mean $\mu$ and variance $\sigma^2$ of the Gaussian distribution.
+
+After obtaining a user's latent representation $z$ , we use the generative process to make predictions. In Liang et al. (2018) a multinomial distribution is used to model the likelihood of items. Specifically, to construct $p_{\theta}(x|z)$ , $z$ is transformed to produce a probability distribution $\pi$ over $M$ items, from which the interaction vector $x$ is assumed to have been drawn:
+
+$$
+\boldsymbol{x} \sim \operatorname{Mult}(\boldsymbol{\pi}), \ \text{with} \ \boldsymbol{\pi} = \operatorname{Softmax}(g_{\theta}(\boldsymbol{z})), \tag{3}
+$$
+
+where $g_{\theta}$ is a $\pmb{\theta}$ -parameterized neural network. The output of $g_{\theta}$ is normalized via the softmax function to produce a probability vector $\pi \in \Delta^{M-1}$ (the $(M-1)$ -simplex) over the entire item set.
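The multinomial likelihood can be computed stably from the decoder logits via a log-softmax, avoiding an explicit (and potentially underflowing) softmax. A minimal numpy sketch, with illustrative toy vectors:

```python
import numpy as np

def log_softmax(logits):
    """Numerically stable log of the softmax over the item dimension."""
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def multinomial_nll(x, logits):
    """Negative log-likelihood -x · log(pi), with pi = Softmax(logits) (eq. 3)."""
    return -(x * log_softmax(logits)).sum(axis=-1)

x = np.array([1.0, 1.0, 0.0, 0.0])
logits = np.zeros(4)               # uniform pi = [0.25, 0.25, 0.25, 0.25]
nll = multinomial_nll(x, logits)   # = -2 * log(0.25)
```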
+
+Training Objective Learning the VAE parameters $\{\phi, \theta\}$ yields the following generalized objective:
+
+$$
+\mathcal{L}_{\beta}(\boldsymbol{x}; \boldsymbol{\theta}, \phi) = \mathcal{L}_E + \beta \mathcal{L}_R, \ \text{with} \ \mathcal{L}_E = -\mathbb{E}_{q_{\phi}(\boldsymbol{z}|\boldsymbol{x})}[\log p_{\boldsymbol{\theta}}(\boldsymbol{x}|\boldsymbol{z})] \ \text{and} \ \mathcal{L}_R = \mathrm{KL}(q_{\phi}(\boldsymbol{z}|\boldsymbol{x}) \,\|\, p(\boldsymbol{z})), \tag{4}
+$$
+
+where $\mathcal{L}_E$ is the negative log likelihood (NLL) term, $\mathcal{L}_R$ is the KL regularization term with standard normal prior $p(z)$ , and $\beta$ is a weighting hyper-parameter. When $\beta = 1$ , we can lower-bound the log marginal likelihood of the data using equation 4 as $-\mathcal{L}_{\beta=1}(\boldsymbol{x}; \boldsymbol{\theta}, \phi) \leq \log p(\boldsymbol{x})$ . This is commonly known as the evidence lower bound (ELBO) in variational inference (Blei et al., 2017). Thus equation 4 is the negative $\beta$ -regularized ELBO. To improve the optimization efficiency, the reparametrization trick (Kingma & Welling, 2013; Rezende et al., 2014) is used to draw samples $\boldsymbol{z} \sim q_{\phi}(\boldsymbol{z}|\boldsymbol{x})$ to obtain an unbiased estimate of the ELBO, which is further optimized via stochastic optimization. We call this procedure maximum likelihood estimate (MLE)-based training, as it effectively maximizes the (regularized) ELBO. The testing stage of VAEs for collaborative filtering is detailed in Section A of the Supplement.
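For concreteness, a single-sample estimate of the objective in equation 4 with the reparametrization trick might be sketched as follows. This is a simplified stand-in, not the paper's implementation: the latent dimension, the linear decoder `z @ W`, and all toy values are assumptions for illustration.

```python
import numpy as np

def log_softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def beta_elbo_loss(x, mu, log_var, decode, beta, rng):
    """L_beta = L_E + beta * L_R (equation 4), estimated with one sample of z."""
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps                        # reparametrization trick
    L_E = -(x * log_softmax(decode(z))).sum()                   # multinomial NLL
    L_R = 0.5 * (np.exp(log_var) + mu**2 - 1 - log_var).sum()   # KL(q || N(0, I))
    return L_E + beta * L_R

rng = np.random.default_rng(1)
x = np.array([1.0, 0.0, 1.0, 0.0])
W = rng.standard_normal((2, 4))          # stand-in for the decoder g_theta
mu, log_var = np.zeros(2), np.zeros(2)
loss = beta_vae = beta_elbo_loss(x, mu, log_var, lambda z: z @ W, beta=1.0, rng=rng)
```

With $\mu = 0$ and $\log\sigma^2 = 0$ the KL term vanishes, so the loss reduces to the NLL term regardless of $\beta$.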
+
+Advantages of VAEs The VAE framework successfully scales to relatively large datasets by making use of amortized inference (Gershman & Goodman, 2014): predictions for all users share the same procedure, which effectively requires evaluating only two functions, the encoder $f_{\phi}(\cdot)$ and the decoder $g_{\theta}(\cdot)$ . Crucially, as all users share the same encoder/decoder, the number of parameters required for an autoencoder is independent of the number of users. This is in contrast to some traditional latent-factor collaborative filtering models (Paterek, 2007; Hu et al., 2008; Mnih & Salakhutdinov, 2008), where a unique latent vector is learned for each user. The reuse of the encoder/decoder for all users is well-aligned with collaborative filtering, where user preferences are analyzed by exploiting similar patterns inferred from past experiences (Liang et al., 2018). VAEs thus have two advantages simultaneously: the expressive representation power of a non-linear model, and a number of parameters that is independent of the number of users.
+
+Pitfalls of VAEs Among various likelihood forms, it was argued in Liang et al. (2018) that multinomial likelihoods are a closer proxy to the ranking loss than the traditional Gaussian or logistic likelihoods. Though simple and effective, the MLE procedure may still diverge from the ultimate goal in recommendation of correctly suggesting the top-ranked items. To illustrate the divergence between MLE-based training and ranking-based evaluation, consider the example in Figure 1. For the target $\pmb{x} = \{1,1,0,0\}$ , two different predictions $A$ and $B$ are provided. In MLE, the training loss is the multinomial NLL $-\pmb{x}^{\top}\log \pmb{\pi}$ , where $\pmb{\pi}$ is the predicted probability. From the NLL point of view, $B$ is a better prediction than $A$ , because $B$ shows a lower loss than $A$ . However, $B$ ranks an incorrect item highest, and therefore would return a worse recommendation than $A$ . Fortunately, NDCG is calculated directly from the ranking, and so captures this difference. This inspires us to directly use ranking-based evaluation metrics to guide training. For details on calculating NDCG, refer to Section E.2 of the Supplement.
+
+| Target | 1 | 1 | 0 | 0 |
+| Prediction A | 0.8 | 0.1 | 0.05 | 0.05 |
+| Prediction B | 0.3 | 0.3 | 0.35 | 0.05 |
+
+Figure 1: Difference between MLE-based training loss and ranking-based evaluation. For A, $-1 \times \log 0.8 - 1 \times \log 0.1 = -\log 0.08$ ; for B, $-1 \times \log 0.3 - 1 \times \log 0.3 = -\log 0.09$ . NLL assigns a better (lower) value to the misranked prediction than to the properly-ranked one, whereas NDCG always assigns the maximum value to properly-ranked scorings.
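The Figure 1 example can be checked numerically. The sketch below implements binary-relevance NDCG with the standard $1/\log_2(\mathrm{rank}+1)$ discount and reproduces the mismatch: $B$ attains the lower NLL, yet $A$ is ranked perfectly.

```python
import numpy as np

def ndcg_binary(scores, relevance):
    """NDCG with binary relevance: sort by score, discount gain by log2(rank+1)."""
    order = np.argsort(-scores)
    discounts = 1.0 / np.log2(np.arange(2, scores.size + 2))
    dcg = (relevance[order] * discounts).sum()
    idcg = (np.sort(relevance)[::-1] * discounts).sum()
    return dcg / idcg

x = np.array([1.0, 1.0, 0.0, 0.0])        # target from Figure 1
A = np.array([0.8, 0.1, 0.05, 0.05])
B = np.array([0.3, 0.3, 0.35, 0.05])
nll = lambda pi: -(x * np.log(pi)).sum()

# B has the lower NLL (-log 0.09 < -log 0.08), yet A achieves NDCG = 1
```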
+
+# 3 RANKING-CRITICAL TRAINING
+
+We introduce a novel algorithm for recommender system training, which we call Ranking-Critical Training (RaCT). RaCT learns a differentiable approximation to the ranking metric, which the prediction network then leverages as a target for optimization through gradient ascent. This is in contrast to existing methods in collaborative filtering, which define an objective relaxation ahead of time. This methodology of learning approximations to functions which cannot be optimized directly stems from the actor-critic paradigm of RL, which we adapt for collaborative filtering.
+
+Any ranking-based evaluation metric can be considered as a "black box" function $\omega : \{\pmb{\pi}; \pmb{x}, \pmb{b}\} \mapsto y \in [0,1]$ , which takes in the prediction $\pmb{\pi}$ to compare with the ground-truth $\pmb{x}$ (conditioned on the mask $\pmb{b}$ ), and outputs a scalar $y$ to rate the prediction quality. As in equation 2, $\pmb{b}$ partitions a user's interactions into those that are "observed" and "unobserved" during inference. As we are only interested in recovering the unobserved items in recommendation, we compute the ranking score of predicted items $\pi_{p} = \pi \odot (1 - x_{h})$ based on the ground-truth items $\pmb{x}_{p}$ .
+
+One salient component of a ranking-based Oracle metric $\omega^{*}$ is sorting $\pi_{p}$ . The sorting operation is non-differentiable, rendering it impossible to directly use $\omega^{*}$ as the critic. While REINFORCE (Williams, 1992) may appear to be suited to tackle this non-differentiability, it suffers from high estimator variance (Silver et al., 2014), especially in the collaborative filtering problem, which has a very large prediction space. This motivates consideration of a differentiable neural network to approximate the mapping executed by the Oracle. In the actor-critic framework, the prediction network is called the actor, and the network which approximates the Oracle is called the critic. The actor begins by making a prediction (action) given the user's interaction history as the state. The critic learns to estimate the value of each action, which we define as the task-specific reward, i.e., the Oracle's output. The value predicted by the critic is then used to train the actor. Under the assumption that the critic produces the exact values, the actor is trained on an unbiased estimate of the gradient of the prediction value in terms of relevant ranking-quality metrics. Figure 2 illustrates the actor-critic paradigm in (b); the traditional auto-encoder shown in (a) can be used as the actor in our paradigm.
+
+Figure 2: Illustration of learning parameters $\{\phi, \theta\}$ in the two different paradigms. (a) The traditional auto-encoder paradigm: learning with MLE, as in VAEs. (b) The proposed actor-critic paradigm: learning with a learned ranking-critic. The actor can be viewed as the composition of the encoder $f_{\phi}(\cdot)$ and decoder $g_{\theta}(\cdot)$ in VAEs. The critic mimics the ranking-based evaluation scores, so that it can provide ranking-sensitive feedback in actor learning.
+
+Naive critic Conventionally, one may concatenate the vectors $[\pmb{\pi}_p, \pmb{x}_p]$ as input to a neural network, and train the network to output the measured ranking scores $y$ . However, this naive critic is impractical, and failed in our experiments. Our hypothesis is that, since this architecture has a huge number of parameters to train (the input layer is of length $2M$ , where $M > 10\text{k}$ ), it would require rich data for training. Unfortunately, this is impractical: $\{\pmb{\pi}, \pmb{x}\} \in \mathbb{R}^M$ are very high-dimensional, and the implicit feedback used in collaborative filtering is naturally sparse.
+
+Feature-based critic The naive critic hopes a deep network can discover structure from massive data by itself, leaving much valuable domain knowledge unused. We propose a more efficient critic that takes into account the structure implied by the assumed likelihood in MLE (Miyato & Koyama, 2018). We describe our intuition and method below, and provide a justification from the perspective of adversarial learning in Section D of the Supplement.
+
+Consider the computation procedure of the evaluation metric as a function decomposition $\omega = \omega_\psi \circ \omega_0$ comprising two steps:
+
+- $\omega_0: \pi \mapsto h$ , feature engineering of prediction $\pi$ into the sufficient statistics $h$ ;
+- $\omega_{\psi}: h \mapsto \hat{y}$ , neural approximation of the mapping from the statistics $h$ to the estimated ranking score $\hat{y}$ , using a $\psi$ -parameterized neural network.
+
+The success of this two-step critic largely depends on the effectiveness of the feature $\pmb{h}$ . We want the feature $\pmb{h}$ to be (i) compact, so that fewer parameters in the critic $\omega_{\psi}$ simplify training; (ii) easy to compute, so that training and testing are efficient; and (iii) informative, so that the necessary information is preserved. We propose using a 3-dimensional vector as the feature, and leave more complicated feature engineering as future work. In summary, our feature is
+
+$$
+\boldsymbol {h} = \left[ \mathcal {L} _ {E}, \left| \mathcal {H} _ {0} \right|, \left| \mathcal {H} _ {1} \right| \right], \tag {5}
+$$
+
+where (i) $\mathcal{L}_E$ is the negative log-likelihood in equation 4, defined as the MLE training loss; (ii) $|\mathcal{H}_0|$ is the number of unobserved items that a user will interact with, where $\mathcal{H}_0 = \{m \,|\, x_m = 1 \text{ and } b_{m} = 0\}$ ; and (iii) $|\mathcal{H}_1|$ is the number of observed items that a user has interacted with, where $\mathcal{H}_1 = \{m \,|\, x_m = 1 \text{ and } b_{m} = 1\}$ .
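A minimal sketch of computing $\pmb{h}$ from a user's interaction vector, mask, and predicted log-probabilities (all toy inputs are hypothetical):

```python
import numpy as np

def critic_features(x, b, log_pi):
    """Critic feature h = [L_E, |H_0|, |H_1|] from equation 5."""
    L_E = -(x * log_pi).sum()               # multinomial NLL (equation 4)
    H0 = int(np.sum((x == 1) & (b == 0)))   # interacted items hidden from the actor
    H1 = int(np.sum((x == 1) & (b == 1)))   # interacted items the actor observed
    return np.array([L_E, H0, H1])

x = np.array([1, 1, 0, 1, 0])
b = np.array([1, 0, 1, 0, 0])
log_pi = np.log(np.full(5, 0.2))            # uniform prediction over 5 items
h = critic_features(x, b, log_pi)
```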
+
+The NLL characterizes the prediction quality of the actor's output $\pi$ against the ground-truth $x$ in an item-to-item comparison manner, e.g., the inner product $-\pmb{x}^{\top}\log \pmb{\pi}$ in the multinomial NLL (Liang et al., 2018). Ranking is made easier when there are many acceptable items to rank highly (e.g., when $|\mathcal{H}_0|$ is large), and made more difficult when predicting from very few interactions (e.g., when $|\mathcal{H}_1|$ is small), motivating these two count features. Including all three features allows the critic to guide training by weighting the NLL's relation to ranking given this context about the user. Interestingly, this idea of accounting for user behavior statistics coincides with the scaling trick in SVD (Nikolakopoulos et al., 2019b).
+
+Note that $|\mathcal{H}_0|$ and $|\mathcal{H}_1|$ are user-specific, indicating the user's frequency of interaction with the system, which can be viewed as side-information about the user. They are only used as features in training the critic to better approximate the ranking scores, and not in training the actor. Hence, we do not use additional information in the testing stage.
+
+Actor Pre-training In order to be a helpful feature for the critic, the NLL must hold some relationship to the ranking-based objective function. But for the high-dimensional datasets common to collaborative filtering, the ranking score is near-uniformly zero for a randomly-initialized actor. In this situation, a trained critic will not propagate useful derivatives to the actor, and therefore the actor will not improve. We mitigate this problem by using a pre-trained actor, such as a VAE trained via MLE.
+
+Critic Pre-training Training a generic critic to approximate the ranking scores for all possible predictions is difficult and cumbersome. Furthermore, it is unnecessary. In practice, a critic only needs to estimate the ranking scores on the restricted domain of the current actor's outputs. Therefore, we train the critic offline on top of the pre-trained MLE-based actor. To train the critic, we minimize the Mean Square Error (MSE) between the critic output and true ranking score $y$ from the Oracle:
+
+$$
+\mathcal {L} _ {C} (\boldsymbol {h}, y; \psi) = \| \omega_ {\psi} (\boldsymbol {h}) - y \| ^ {2}, \tag {6}
+$$
+
+where the target $y$ is generated using its non-differentiable definition, which plays the role of a ground-truth simulator in training.
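As a simplified stand-in for the neural critic $\omega_{\psi}$, the sketch below fits a *linear* critic in closed form by minimizing the MSE of equation 6 on synthetic (feature, score) pairs. The data-generating function for the "Oracle" scores is purely illustrative; the paper's critic is a neural network trained by stochastic optimization.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training pairs: features h = [L_E, |H0|, |H1|] collected from a
# frozen pre-trained actor, with y the Oracle's ranking score for each user.
H = rng.random((256, 3))
y = 0.5 - 0.2 * H[:, 0] + 0.1 * H[:, 1]      # illustrative "Oracle" scores

# Minimize equation 6 in closed form for a linear critic omega_psi(h) = w·h + c
Phi = np.hstack([H, np.ones((256, 1))])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
mse = float(np.mean((Phi @ w - y) ** 2))
```

Because the synthetic scores are exactly linear in the features, the fitted critic drives the MSE of equation 6 to (numerical) zero; a neural $\omega_{\psi}$ plays the same role for the non-linear Oracle.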
+
+Actor-critic Training Once the critic is well trained, we fix its parameters $\psi$ and update the actor parameters $\{\phi, \theta\}$ to maximize the estimated ranking score
+
+$$
+\mathcal {L} _ {A} (\boldsymbol {h}; \phi , \boldsymbol {\theta}) = \omega_ {\psi} (\boldsymbol {h}), \tag {7}
+$$
+
+where $\pmb{h}$ is defined in equation 5, including the NLL feature $\mathcal{L}_E$ from equation 4 computed on the actor's prediction, together with the count features. During back-propagation, the gradient of $\mathcal{L}_A$ w.r.t. the prediction $\pi$ is $\frac{\partial \mathcal{L}_A}{\partial \pi} = \frac{\partial \mathcal{L}_A}{\partial h} \frac{\partial h}{\partial \pi}$ . It further updates the actor parameters, with the encoder gradient $\frac{\partial \mathcal{L}_A}{\partial \phi} = \frac{\partial \mathcal{L}_A}{\partial \pi} \frac{\partial \pi}{\partial \phi}$ and the decoder gradient $\frac{\partial \mathcal{L}_A}{\partial \theta} = \frac{\partial \mathcal{L}_A}{\partial \pi} \frac{\partial \pi}{\partial \theta}$ . Updating the actor changes its predictions, so we must update the critic to produce the correct ranking scores for its new input domain.
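The actor update can be sketched generically: given any chain from actor parameters through the prediction and features to the critic's score, take a gradient-ascent step on equation 7. The sketch below uses central finite differences as a stand-in for back-propagation, with toy identity stand-ins for the actor and features and a critic peaked at $h = 1$; none of these stand-ins come from the paper.

```python
import numpy as np

def actor_step(theta, predict, features, critic, lr=0.1, eps=1e-5):
    """One gradient-ascent step on L_A(theta) = omega_psi(h(pi(theta))) (eq. 7)."""
    def L_A(t):
        return critic(features(predict(t)))
    grad = np.zeros_like(theta)
    for i in range(theta.size):          # finite differences in place of autodiff
        d = np.zeros_like(theta)
        d[i] = eps
        grad[i] = (L_A(theta + d) - L_A(theta - d)) / (2 * eps)
    return theta + lr * grad             # ascend the estimated ranking score

# Toy chain: identity actor/features, a critic maximized at h = 1
theta = np.zeros(3)
critic = lambda h: -np.sum((h - 1.0) ** 2)
theta_new = actor_step(theta, predict=lambda t: t, features=lambda p: p, critic=critic)
```

In the actual algorithm the same chain rule is evaluated by automatic differentiation through $\omega_{\psi}$, the feature map, and the decoder/encoder, rather than by finite differences.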
+
+The full RaCT training procedure is summarized in Algorithm 1 in the Supplement. Stochastic optimization is used, where a batch of users $\mathcal{U} = \{\pmb{x}_i | i \in \mathcal{B}\}$ is drawn at each iteration, with $\mathcal{B}$ a random subset of the user indices $\{1, \dots, N\}$ . The pre-training of the actor in Stage 1 and the critic in Stage 2 is important; it provides a good initialization for the actor-critic training in Stage 3, enabling fast convergence. Further, we provide an alternative interpretation of our actor-critic approach in equation 6 and equation 7 from the perspective of adversarial learning (Goodfellow et al., 2014) in the Supplement. This partially justifies our choice of feature engineering.
+
+# 4 RELATED WORK
+
+Deep Learning for Collaborative Filtering There are many recent efforts focused on developing deep learning models for collaborative filtering (Sedhain et al., 2015; Xue et al., 2017; He et al., 2018a,b; Zhang et al., 2017; Chen et al., 2017). Early work on DNNs focused on explicit feedback settings (Georgiev & Nakov, 2013; Salakhutdinov et al., 2007; Zheng et al., 2016), such as rating predictions. Recent research has gradually recognized the importance of implicit feedback (Wu et al., 2016; He et al., 2017; Liang et al., 2018), where the user's preference is not explicitly presented (Hu et al., 2008). This setting is more practical but challenging, and is the focus of our work. The proposed actor-critic method belongs to the general family of two-level architectures for recommendation systems, where a coarse-to-fine prediction procedure is used. For a systematic method comparison on top-N recommendation tasks, we suggest referring to Dacrema et al. (2019). Our method is closely related to three papers, on VAEs (Liang et al., 2018), collaborative denoising autoencoders (CDAE) (Wu et al., 2016) and neural collaborative filtering (NCF) (He et al., 2017). CDAE and NCF may suffer from scalability issues: the model size grows linearly with both the number of users and the number of items. The VAE (Liang et al., 2018) alleviates this problem via amortized inference. Our work builds on top of the VAE, and improves it by optimizing against the ranking-based metric.
+
+Learned Metrics in Vision & Languages Recent research in computer vision and natural language processing has generated excellent results using learned instead of hand-crafted metrics. Among the rich literature on generating realistic images via generative adversarial networks (GANs) (Goodfellow et al., 2014; Radford et al., 2015; Karras et al., 2018), our work is most similar to Larsen et al. (2016), where the VAE objective (Kingma & Welling, 2013) is augmented with the learned representations in the GAN discriminator (Goodfellow et al., 2014) to better measure image similarities. For language generation, the discrepancy between word-level MLE training and sequence-level semantic evaluation has been alleviated with GANs or RL techniques (Bahdanau et al., 2016; Ren et al., 2017; Lin et al., 2017). The RL approach directly optimizes the metric used at test time, and has shown improvement on various applications, including dialogue (Li et al., 2016), image captioning (Rennie et al., 2017) and translation (Ranzato et al., 2015). Despite the significant successes in other domains, there has been little if any research reported on directly learning metrics with deep neural networks for collaborative filtering. Our work fills this gap, and we hope it inspires more research in this direction.
+
+Figure 3: Performance improvement (NDCG@100) with RaCT over the VAE baseline on the (a) ML-20M, (b) Netflix, and (c) MSD datasets.
+
+Learning to Rank (L2R) The idea of L2R has existed for two decades in the information-retrieval community. The goal is to maximize a given ranking-based evaluation metric (Liu et al., 2009; Li, 2014), generally through optimizing objective relaxations (Weimer et al., 2008). Many L2R methods used in recommendation, such as the popular pairwise L2R methods BPR (Rendle et al., 2009) and WARP (Weston et al., 2011), are trained by optimizing a pairwise classification function that penalizes mis-ranked pairs of items. Through negative sampling (Hu et al., 2008), these methods can scale to extremely high-dimensional output spaces. However, it is computationally expensive to compute low-variance updates to a model when the number of items is large.
+
+An alternative to the pairwise approach is listwise loss functions, which minimize a loss calculated from a user's entire interaction history. By considering the entire interaction history, these methods can more closely model ranking, and generally perform better than their pairwise counterparts (Xia et al., 2008). Furthermore, compared to methods that calculate a relative ranking for each pair (Weston et al., 2011), the rank calculation amortized per user can be computed more efficiently. NLL is an example of a listwise loss function, as it is calculated over a user's entire interaction history. Interestingly, NLL is also the loss function of ListNet (Cao et al., 2007), a classic listwise L2R method designed to probabilistically maximize Top-1 Recall. The VAE framework under NLL can be seen as a principled extension of this method to Top-N collaborative filtering. Our ranking-critical training further extends this methodology by explicitly learning the relationship between a differentiable listwise loss function and the desired ranking-based evaluation function.
+
+# 5 EXPERIMENTS
+
+Experimental Settings We implemented our algorithm in TensorFlow. The source code to reproduce the experimental results and plots is included as Supplementary Material. We conduct experiments on three publicly available large-scale datasets, which represent different item recommendation scenarios, including user-movie ratings and user-song play counts. This is the same set of user-item consumption datasets used in Liang et al. (2018), and we keep the same pre-processing steps for fair comparison. The statistics of the datasets, evaluation protocols and hyper-parameters are summarized in the Supplement. The VAE (Liang et al., 2018) is used as the baseline, which plays the role of our actor pre-training. The NDCG@100 ranking metric is used as the critic's target in training.
+
+**Baseline Methods** We use ranking-critical training to improve the three MLE-based methods described in Section 2.1: VAE, DAE, and MF. We also adapt traditional L2R methods as the actors in our framework, where the L2R loss is used to replace $\mathcal{L}_E$ in equation 5 to construct the feature. We consider WARP and LambdaRank, two pairwise loss functions designed for optimizing NDCG, for these experiments. We also compare our approaches with four representative baseline methods in collaborative filtering. CDAE (Wu et al., 2016) is a strongly-performing neural-network based method, weighted MF (Hu et al., 2008) is a linear latent-factor model, and SLIM (Ning & Karypis, 2011) and EASE (Steck, 2019) are item-to-item similarity models. We additionally compared with Bayesian Personalized Ranking (BPR) (Rendle et al., 2009), but as this method did not yield competitive performance on these datasets, we omit the results.
+
+Table 1: Comparison on three large datasets. The best testing set performance is reported. The results below the line are from Liang et al. (2018), and $\mathrm{VAE}^{\ddagger}$ shows the VAE results based on our runs. Blue indicates improvement over the VAE baseline, and bold indicates overall best.
+
+| Dataset | ML-20M | Netflix | MSD |
+| Metric | R@20 | R@50 | NDCG@100 | R@20 | R@50 | NDCG@100 | R@20 | R@50 | NDCG@100 |
+| RaCT | 0.403 | 0.543 | 0.434 | 0.357 | 0.450 | 0.392 | 0.268 | 0.364 | 0.319 |
+| VAE‡ | 0.396 | 0.536 | 0.426 | 0.350 | 0.443 | 0.385 | 0.260 | 0.356 | 0.310 |
+| WARP | 0.310 | 0.448 | 0.348 | 0.273 | 0.360 | 0.312 | 0.162 | 0.253 | 0.210 |
+| LambdaRank | 0.395 | 0.534 | 0.427 | 0.352 | 0.441 | 0.386 | 0.259 | 0.355 | 0.308 |
+| EASE | 0.391 | 0.521 | 0.420 | 0.362 | 0.445 | 0.393 | 0.333 | 0.428 | 0.389 |
+| VAE | 0.395 | 0.537 | 0.426 | 0.351 | 0.444 | 0.386 | 0.266 | 0.364 | 0.316 |
+| CDAE | 0.391 | 0.523 | 0.418 | 0.343 | 0.428 | 0.376 | 0.188 | 0.283 | 0.237 |
+| WMF | 0.360 | 0.498 | 0.386 | 0.316 | 0.404 | 0.351 | 0.211 | 0.312 | 0.257 |
+| SLIM | 0.370 | 0.495 | 0.401 | 0.347 | 0.428 | 0.379 | - | - | - |
+
+# 5.1 OVERALL PERFORMANCE OF RACT
+
+Improvement over VAE In Figure 3, we show the learning curves of RaCT and the VAE on the validation set. The VAE converges to a plateau by the time RaCT finishes its actor pre-training stage (e.g., 150 epochs on the ML-20M dataset), after which the VAE's performance stops improving. By contrast, when RaCT is plugged in, performance shows a significant immediate boost. Moreover, RaCT achieves a larger gain in half the number of epochs: RaCT takes 50 epochs (from 150 to 200) to achieve an improvement of $0.44 - 0.43 = 0.01$ , while the VAE takes 100 epochs (from 50 to 150) to achieve an improvement of $0.43 - 0.424 = 0.006$ .
+
+Training/Evaluation Correlation We visualize scatter plots between the learning objectives and the evaluation metric for all users on the ML-20M dataset in Figure 4. More details and an enlarged visualization are shown in Figure 6 of the Supplement. The Pearson correlation $r$ is computed. NLL exhibits low correlation with the target NDCG ( $r$ is close to zero), while the learned metric in RaCT shows a much higher positive correlation. This strongly indicates that RaCT optimizes a more direct objective than an MLE approach. Further, NLL should in theory have a negative correlation with the target NDCG, as we wish that minimizing NLL maximizes NDCG. However, in practice it yields a positive correlation. We hypothesize that this is because the number of interactions for each user may dominate the NLL values. That partially motivates us to consider the number of user interactions as features.
+
+Figure 4: Correlation between the learning objectives ((a) MLE or (b) RaCT) and the evaluation metric during training.
+
+Comparison with traditional L2R methods As examples of traditional L2R methods, we compare our method with WARP (Weston et al., 2011) and LambdaRank (Burges et al., 2007) used as the ranking-critical objectives. We use implementations of both methods designed specifically to maximize NDCG. We observe that WARP and LambdaRank are roughly 2 and 10 times more computationally expensive than RaCT per epoch, respectively. Table 1 shows the results of RaCT, WARP and LambdaRank, using the same amount of wall-clock training time. We observe that WARP degrades performance, while LambdaRank performs roughly on par with the VAE. WARP's poor performance is perhaps due to a poor approximation of the ranking when the number of items is large.
+
+Comparison with existing methods In Table 1, we report our RaCT performance, and compare with competing methods in terms of three evaluation metrics: NDCG@100, Recall@20, and Recall@50. We use the published code of Liang et al. (2018), and reproduce the VAE as our actor pre-training. We further use their reported values for the classic collaborative filtering methods CDAE, WMF, and SLIM. Our reproduced VAE results are very close to those of Liang et al. (2018) on the ML-20M and Netflix datasets, but slightly lower on the MSD dataset. The RaCT is built on top of our VAE runs, and consistently improves its baseline actor for all the evaluation metrics and datasets, as seen by comparing the rows RaCT and $\mathrm{VAE}^{\ddagger}$ . The proposed RaCT also significantly outperforms competing LVMs, including VAE, CDAE, and WMF.
+
+| Actor | Before | After | Gain |
+| VAE | 0.4258 | 0.4339 | 8.09 |
+| VAE (Gaussian) | 0.4202 | 0.4224 | 2.21 |
+| VAE (β = 0) | 0.4203 | 0.4255 | 5.17 |
+| VAE (Linear) | 0.4156 | 0.4162 | 0.53 |
+| DAE (Liang et al., 2018) | 0.4205 | 0.4214 | 0.87 |
+| MF (Liang et al., 2018) | 0.4159 | 0.4172 | 1.37 |
+| WARP | 0.3123 | 0.3439 | 31.63 |
+
+Table 2: Performance gain $\left( {\times {10}^{-3}}\right)$ for various actors.
+
+Figure 5: Ablation study on features.
+
+When comparing to EASE (Steck, 2019), our method performs substantially better on ML-20M, comparably on Netflix, and is substantially outperformed on MSD. We observe a similar trend when comparing SLIM (an item-to-item similarity method) and CDAE (a latent-variable method). As SLIM and EASE rely on recreating the Gram matrix $\mathbf{G} = \mathbf{X}^\top\mathbf{X}$ , their performance should improve with the number of users (Steck, 2019). However, this performance may come at a computational cost, as inference requires multiplication with an unfactored $M\times M$ matrix. EASE requires computing a dense item-to-item similarity matrix, making its inference on MSD roughly 30 times more expensive than for VAE or RaCT. A practitioner's choice between these two classes of methods should be informed by the specifics of the dataset as well as the demands of the system.
+
+In the Supplement, we study the generalization of RaCT trained with different ranking metrics in Section F.1, and break down the performance improvement for different cut-off values of NDCG in Section F.3 and for different numbers of interactions in $\mathbf{X}$ in Section F.4.
+
+# 5.2 WHAT ACTOR CAN BE IMPROVED BY RACT?
+
+In RL, the choice of policy plays a crucial role in the agent's performance. Similarly, we would like to study how different actor designs impact RaCT performance. Table 2 shows the performance of various policies before and after applying RaCT. The results on NDCG@100 are reported. The VAE, DAE and MF models follow the setup in Liang et al. (2018).
+
+We modify one component of the VAE at a time, and examine how the performance improvement provided by RaCT changes. (1) VAE (Gaussian): we change the likelihood from multinomial to Gaussian, and observe a smaller performance improvement. This shows the importance of having a closer proxy of the ranking-based loss. (2) VAE ( $\beta = 0$ ): we remove the KL regularization by setting $\beta = 0$ , and replace the posterior sampling with a delta distribution. We see a marginally smaller performance improvement. This compares a stochastic and a deterministic policy. The stochastic policy (i.e., posterior sampling) provides higher exploration ability for the actor, allowing more diverse samples to be generated for the critic's training. This is essential for better critic learning. (3) VAE (Linear): we limit the expressive ability of the actor by using a linear encoder and decoder. This significantly degrades performance, and RaCT cannot help much in this case. RaCT shows improvements for all MLE-based methods, including DAE and MF from Liang et al. (2018). It also shows significant improvement over WARP. See the detailed discussion in Section F.5 of the Supplement.
+
+# 5.3 ABLATION STUDY ON FEATURE-BASED CRITIC
+
+In Figure 5, we investigate the importance of the features we designed in equation 5, using results from the ML-20M dataset. The full feature vector consists of three elements: $\pmb{h} = [\mathcal{L}_E, |\mathcal{H}_0|, |\mathcal{H}_1|]$ . $\mathcal{L}_E$ is mandatory, because it links the actor to the critic; removing it would break the back-propagation used to train the actor. We remove either $|\mathcal{H}_0|$ or $|\mathcal{H}_1|$ from $\pmb{h}$ , one at a time, and observe that each removal leads to performance degradation. In particular, removing $|\mathcal{H}_0|$ results in severe over-fitting. When both counts are removed, we observe an immediate performance drop, as depicted by the orange curve. Overall, the results indicate that all three features are necessary for our performance improvement.
+
+# 6 CONCLUSION & DISCUSSION
+
+We have proposed an actor-critic framework for collaborative filtering on implicit data. The critic learns to approximate the ranking scores, which in turn improves the traditional MLE-based nonlinear LVMs with the learned ranking-critical objectives. To make it practical and efficient, we introduce a few techniques: a feature-based critic to reduce the number of learnable parameters, posterior sampling as exploration for better critic estimates, and pre-training of actor and critic for fast convergence. The experimental results on three large-scale datasets demonstrate the actor-critic's ability to significantly improve the results of a variety of latent-variable models, and achieve better or comparable performance to strong baseline methods.
+
+Though RaCT improves VAEs, it does not start from the best performing actor model. The very recent work by Dacrema et al. (2019) conducts a systematic analysis of algorithmic proposals for top-N recommendation tasks. There are other simple and efficient methods that perform better than VAEs, such as pure SVD-based models (Cremonesi et al., 2010; Nikolakopoulos et al., 2019b), RecWalk (Nikolakopoulos & Karypis, 2019) and Personalized Diffusions (Nikolakopoulos et al., 2019a). One interesting future research direction is to explore learning-to-rank techniques for them.
+
+# REFERENCES
+
+Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. An actor-critic algorithm for sequence prediction. arXiv preprint arXiv:1607.07086, 2016.
+Thierry Bertin-Mahieux, Daniel PW Ellis, Brian Whitman, and Paul Lamere. The million song dataset. In ISMIR, 2011.
+Christopher Bishop. Pattern recognition and machine learning. Springer, 2006.
+David M Blei, Alp Kucukelbir, and Jon D McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 2017.
+Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
+Christopher J Burges, Robert Ragno, and Quoc V Le. Learning to rank with nonsmooth cost functions. In NIPS, 2007.
+Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. Learning to rank: From pairwise approach to listwise approach. In International conference on machine learning, 2007.
+Jingyuan Chen, Hanwang Zhang, Xiangnan He, Liqiang Nie, Wei Liu, and Tat-Seng Chua. Attentive collaborative filtering: Multimedia recommendation with item-and component-level attention. In Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval, pp. 335-344. ACM, 2017.
+Paolo Cremonesi, Yehuda Koren, and Roberto Turrin. Performance of recommender algorithms on top-N recommendation tasks. In ACM Conference on Recommender systems, 2010.
+Maurizio Ferrari Dacrema, Paolo Cremonesi, and Dietmar Jannach. Are we really making much progress? a worrying analysis of recent neural recommendation approaches. In ACM Conference on Recommender Systems, 2019.
+Chelsea Finn, Paul Christiano, Pieter Abbeel, and Sergey Levine. A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models. arXiv preprint arXiv:1611.03852, 2016.
+Kostadin Georgiev and Preslav Nakov. A non-iid framework for collaborative filtering with restricted boltzmann machines. In International conference on machine learning, pp. 1148-1156, 2013.
+Samuel Gershman and Noah Goodman. Amortized inference in probabilistic reasoning. In Proceedings of the Annual Meeting of the Cognitive Science Society, 2014.
+Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, pp. 2672-2680, 2014.
+
+Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning, volume 1. MIT Press, Cambridge, 2016.
+Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web, pp. 173-182. International World Wide Web Conferences Steering Committee, 2017.
+Xiangnan He, Xiaoyu Du, Xiang Wang, Feng Tian, Jinhui Tang, and Tat-Seng Chua. Outer product-based neural collaborative filtering. IJCAI, 2018a.
+Xiangnan He, Zhankui He, Xiaoyu Du, and Tat-Seng Chua. Adversarial personalized ranking for recommendation. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pp. 355-364. ACM, 2018b.
+Yifan Hu, Yehuda Koren, and Chris Volinsky. Collaborative filtering for implicit feedback datasets. In Data Mining, 2008. ICDM'08. Eighth IEEE International Conference on, pp. 263-272. IEEE, 2008.
+Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML, 2015.
+Kalervo Järvelin and Jaana Kekäläinen. Cumulated gain-based evaluation of ir techniques. ACM Transactions on Information Systems (TOIS), 2002.
+Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. ICLR, 2018.
+Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2014.
+Diederik P Kingma and Max Welling. Auto-encoding variational bayes. ICLR, 2013.
+Maciej Kula. Metadata embeddings for user and item cold-start recommendations. arXiv preprint arXiv:1507.08439, 2015.
+Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. In International Conference on Machine Learning, pp. 1558-1566, 2016.
+Chunyuan Li, Hao Liu, Changyou Chen, Yunchen Pu, Liqun Chen, Ricardo Henao, and Lawrence Carin. Alice: Towards understanding adversarial learning for joint distribution matching. In NIPS, pp. 5495-5503, 2017.
+Hang Li. Learning to rank for information retrieval and natural language processing. Synthesis Lectures on Human Language Technologies, 7(3):1-121, 2014.
+Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. Deep reinforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541, 2016.
+Dawen Liang, Minshu Zhan, and Daniel PW Ellis. Content-aware collaborative music recommendation using pre-trained neural networks. In ISMIR, 2015.
+Dawen Liang, Rahul G Krishnan, Matthew D Hoffman, and Tony Jebara. Variational autoencoders for collaborative filtering. WWW, 2018.
+Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-Ting Sun. Adversarial ranking for language generation. In NIPS, pp. 3155-3165, 2017.
+Tie-Yan Liu et al. Learning to rank for information retrieval. Foundations and Trends® in Information Retrieval, 3(3):225-331, 2009.
+Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
+Takeru Miyato and Masanori Koyama. cgans with projection discriminator. ICLR, 2018.
+Andriy Mnih and Ruslan R Salakhutdinov. Probabilistic matrix factorization. In NIPS, 2008.
+Athanasios N Nikolakopoulos and George Karypis. RecWalk: Nearly uncoupled random walks for top-N recommendation. In ACM International Conference on Web Search and Data Mining, 2019.
+Athanasios N Nikolakopoulos, Dimitris Berberidis, George Karypis, and Georgios B Giannakis. Personalized diffusions for top-N recommendation. In ACM Conference on Recommender Systems, 2019a.
+
+Athanasios N Nikolakopoulos, Vassilis Kalantzis, Efstratios Gallopoulos, and John D Garofalakis. EigenRec: generalizing pureSVD for effective and efficient top-N recommendations. Knowledge and Information Systems, 2019b.
+Xia Ning and George Karypis. Slim: Sparse linear methods for top-n recommender systems. In 2011 11th IEEE International Conference on Data Mining, pp. 497-506. IEEE, 2011.
+Arkadiusz Paterek. Improving regularized singular value decomposition for collaborative filtering. In Proceedings of KDD cup and workshop, volume 2007, pp. 5-8, 2007.
+David Pfau and Oriol Vinyals. Connecting generative adversarial networks and actor-critic methods. arXiv preprint arXiv:1610.01945, 2016.
+Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
+Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. ICLR, 2015.
+Zhou Ren, Xiaoyu Wang, Ning Zhang, Xutao Lv, and Li-Jia Li. Deep reinforcement learning-based image captioning with embedding reward. CVPR, 2017.
+Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. Bpr: Bayesian personalized ranking from implicit feedback. In Proceedings of the twenty-fifth conference on uncertainty in artificial intelligence, pp. 452-461. AUAI Press, 2009.
+Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. Self-critical sequence training for image captioning. In CVPR, volume 1, pp. 3, 2017.
+Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. ICML, 2014.
+Francesco Ricci, Lior Rokach, and Bracha Shapira. Recommender systems: introduction and challenges. In Recommender systems handbook, pp. 1-34. Springer, 2015.
+Ruslan Salakhutdinov, Andriy Mnih, and Geoffrey Hinton. Restricted boltzmann machines for collaborative filtering. In Proceedings of the 24th international conference on Machine learning, pp. 791-798. ACM, 2007.
+Suvash Sedhain, Aditya Krishna Menon, Scott Sanner, and Lexing Xie. Autorec: Autoencoders meet collaborative filtering. In Proceedings of the 24th International Conference on World Wide Web, pp. 111-112. ACM, 2015.
+Suvash Sedhain, Aditya Krishna Menon, Scott Sanner, and Darius Braziunas. On the effectiveness of linear models for one-class collaborative filtering. In AAAI, 2016.
+David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In ICML, 2014.
+Harald Steck. Embarrassingly shallow autoencoders for sparse data. In WWW, 2019.
+Xiaoyuan Su and Taghi M Khoshgoftaar. A survey of collaborative filtering techniques. Advances in artificial intelligence, 2009, 2009.
+Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT Press, 1998.
+Markus Weimer, Alexandros Karatzoglou, Quoc V Le, and Alex J Smola. Cofi rank-maximum margin matrix factorization for collaborative ranking. In NIPS, pp. 1593-1600, 2008.
+Jason Weston, Samy Bengio, and Nicolas Usunier. Wsabie: Scaling up to large vocabulary image annotation. In IJCAI, volume 11, pp. 2764-2770, 2011.
+Jason Weston, Hector Yee, and Ron J Weiss. Learning to rank recommendations with the k-order statistic loss. In Proceedings of the 7th ACM conference on Recommender systems, pp. 245-248. ACM, 2013.
+Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256, 1992.
+
+Yao Wu, Christopher DuBois, Alice X Zheng, and Martin Ester. Collaborative denoising auto-encoders for top-n recommender systems. In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining, 2016.
+Fen Xia, Tie-Yan Liu, Jue Wang, Wensheng Zhang, and Hang Li. Listwise approach to learning to rank: theory and algorithm. In ICML, 2008.
+Hong-Jian Xue, Xinyu Dai, Jianbing Zhang, Shujian Huang, and Jiajun Chen. Deep matrix factorization models for recommender systems. In IJCAI, pp. 3203-3209, 2017.
+Shuai Zhang, Lina Yao, and Aixin Sun. Deep learning based recommender system: A survey and new perspectives. arXiv preprint arXiv:1707.07435, 2017.
+Yin Zheng, Bangsheng Tang, Wenkui Ding, and Hanning Zhou. A neural autoregressive approach to collaborative filtering. arXiv preprint arXiv:1605.09477, 2016.
+
+Summary of contributions: Sam and Chunyuan conceptualized learning-to-rank for VAEs. Sam created and implemented the current algorithm, made the model work, and ran all experiments. Chunyuan set up the experiments, led and completed the manuscript writing. Lawrence edited every version of the manuscript. Jianfeng proofread an early version of the manuscript.
+
+# A TESTING STAGE OF VAES FOR COLLABORATIVE FILTERING
+
+We focus on studying the performance of various models under strong generalization (Liang et al., 2015), as in Liang et al. (2018). All users are split into training/validation/test sets. The models are learned using the entire interaction history of the users in the training set. To evaluate, we use part of the interaction history from held-out (validation and test) users to infer user-level representations, and compute quality metrics by quantifying how well the model ranks the rest of the unseen interaction history of the held-out users. Specifically, for a held-out user with full history $\mathbf{x}$ , we compute $\mathbf{x}_h = \mathbf{x} \odot \mathbf{b}$ offline using a randomly generated binary mask $\mathbf{b}$ . $\mathbf{x}_h$ is then frozen as the testing input, and is fed into each trained model during the evaluation stage to obtain the prediction $\hat{\pi}$ . The recovered interactions $\bar{\mathbf{x}} = \hat{\pi} \odot (1 - \mathbf{x}_h)$ for the masked-out part are then evaluated with ranking-based metrics.
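+
+A minimal sketch of this evaluation protocol (NumPy; the 80% keep-ratio, the toy history, and the random stand-in for the model's prediction are hypothetical):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+x = np.array([1, 1, 0, 1, 0, 1])        # full interaction history of a held-out user
+b = (rng.random(x.shape) < 0.8) * 1     # random binary keep-mask (hypothetical ratio)
+x_h = x * b                              # frozen testing input fed to the trained model
+pi_hat = rng.random(x.shape)             # stand-in for the model's prediction pi-hat
+x_bar = pi_hat * (1 - x_h)               # score only items absent from the input
+ranking = np.argsort(-x_bar)             # ranked list, evaluated against x * (1 - b)
+```
+
+The resulting ranking is then scored with the ranking-based metrics of Section E.2 against the masked-out interactions.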
+
+# B BACKGROUND ON TRADITIONAL LEARNING-TO-RANK METHODS
+
+Formally, the Bayesian Personalized Ranking (BPR) (Rendle et al., 2009) loss for the $n$ -th user is
+
+$$
+\mathcal{L}_{\mathrm{BPR}} = \sum_{i \in \mathcal{K}_{+}} \sum_{j \in \mathcal{K}_{-}} \sigma\left(\pi_{nj} - \pi_{ni}\right), \tag{8}
+$$
+
+where $\sigma(\cdot)$ is the sigmoid function, $\mathcal{K}_{+}$ denotes the set of items that the user has interacted with before, and $\mathcal{K}_{-}$ denotes the complement item set.
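+
+As a concrete sketch of equation 8 (NumPy; the toy scores and the positive-item set are hypothetical):
+
+```python
+import numpy as np
+
+def sigmoid(z):
+    return 1.0 / (1.0 + np.exp(-z))
+
+def bpr_loss(scores, positives):
+    # scores: predicted scores pi_n over all M items for one user n
+    # positives: indices of the interacted items K_+; the rest form K_-
+    pos_mask = np.zeros(len(scores), dtype=bool)
+    pos_mask[positives] = True
+    pos, neg = scores[pos_mask], scores[~pos_mask]
+    # sum over all (i, j) pairs of sigma(pi_nj - pi_ni)
+    return float(sigmoid(neg[None, :] - pos[:, None]).sum())
+
+loss = bpr_loss(np.array([2.0, 0.5, 1.5, -1.0]), positives=[0, 2])
+```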
+
+The Weighted Approximate-Rank Pairwise (WARP) model (Weston et al., 2011) has been shown to perform better than BPR for implicit feedback (Kula, 2015):
+
+$$
+\mathcal{L}_{\mathrm{WARP}} = \sum_{i \in \mathcal{K}_{+}} \sum_{j \in \mathcal{K}_{-}} w\left(r_{i}\right) \max\left(0, 1 + \pi_{nj} - \pi_{ni}\right), \tag{9}
+$$
+
+where $w(\cdot)$ is a weighting function over ranks, and $r_i$ is the rank of the $i$ -th item for the $n$ -th user. A common choice of $w(\cdot)$ for optimizing NDCG is $w(r) = \sum_{i=1}^{r} \alpha_i$ , with $\alpha_i = 1 / i$ . WARP improves on BPR via the weights $w(\cdot)$ and the margin between positive and negative items.
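+
+A direct, quadratic-time sketch of equation 9 (NumPy; the toy inputs are hypothetical, and a practical implementation would use an approximation such as the quasilinear one discussed in Section F.5):
+
+```python
+import numpy as np
+
+def warp_weight(r):
+    # w(r) = sum_{i=1}^{r} alpha_i with alpha_i = 1/i (NDCG-motivated weights)
+    return sum(1.0 / i for i in range(1, r + 1))
+
+def warp_loss(scores, positives):
+    pos_mask = np.zeros(len(scores), dtype=bool)
+    pos_mask[positives] = True
+    order = np.argsort(-scores)                  # items by descending score
+    rank = np.empty(len(scores), dtype=int)
+    rank[order] = np.arange(1, len(scores) + 1)  # rank r_i of every item
+    loss = 0.0
+    for i in np.where(pos_mask)[0]:              # positives K_+
+        for j in np.where(~pos_mask)[0]:         # negatives K_-
+            loss += warp_weight(rank[i]) * max(0.0, 1.0 + scores[j] - scores[i])
+    return loss
+
+loss = warp_loss(np.array([1.2, 1.0, 0.5, -1.0]), positives=[0, 2])
+```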
+
+# C PSEUDO-CODE FOR RACT
+
+We summarize the full training procedure of RaCT in Algorithm 1.
+
+# D INTERPRETATION WITH GANS
+
+We can view our actor-critic approach in equation 6 and equation 7 from the perspective of Generative Adversarial Networks (GANs). GANs constitute a framework to construct a generator $G$ that can mimic a target distribution, and have achieved significant success in generating realistic images (Goodfellow et al., 2014; Radford et al., 2015; Karras et al., 2018; Brock et al., 2018). The most distinctive feature of GANs is the discriminator $D$ that evaluates the divergence between the current generator distribution and the target distribution (Goodfellow et al., 2014; Li et al., 2017). The GAN learning procedure performs iterative training between the discriminator and generator, with the discriminator acting as an increasingly meticulous critic to refine the generator. In our work, the actor can be interpreted as the generator, while the critic can be viewed as the discriminator.
+
+Note that both GANs and actor-critic models learn metric functions (Finn et al., 2016), and it has been shown in Pfau & Vinyals (2016) that GANs can be viewed as actor-critic methods in an environment where the actor cannot affect the reward. This is exactly our setup. One key difference is that we know the Oracle metric, and the critic is trained to mimic the Oracle's behaviour.
+
+Algorithm 1: Our full ranking-critical training with stochastic optimization.
+Input: Interaction matrix X; actor parameters (encoder $\phi$ and decoder $\theta$ ); critic parameters $\psi$ .
+1 Initialize: Randomly initialize weights $\phi, \theta$ and $\psi$
+2 /* Stage 1: Pre-train the actor via MLE */
+3 while not converged do
+4 Sample a batch of users $\mathcal{U}$ ;
+5 Update $\{\theta, \phi\}$ with gradients $\frac{\partial \mathcal{L}_{\beta}}{\partial \theta}$ and $\frac{\partial \mathcal{L}_{\beta}}{\partial \phi}$ in equation 4;
+6 end
+7 /* Stage 2: Pre-train the critic via MSE */
+8 while not converged do
+9 Sample a batch of users $\mathcal{U}$ ;
+10 Construct features $h$ in equation 5 and target $y$ from the Oracle;
+11 Update $\psi$ with gradient $\frac{\partial \mathcal{L}_C}{\partial \psi}$ in equation 6;
+12 end
+13 /* Stage 3: Alternating training of actor and critic */
+14 for $t = 1, 2, \ldots, T$ do
+15 Sample a batch of users $\mathcal{U}$ ;
+16 /* Actor step */
+17 Update $\{\theta, \phi\}$ with gradients $\frac{\partial \mathcal{L}_A}{\partial \theta}$ and $\frac{\partial \mathcal{L}_A}{\partial \phi}$ in equation 7;
+18 /* Critic step */
+19 Construct features $h$ in equation 5 and target $y$ from the Oracle;
+20 Update $\psi$ with gradient $\frac{\partial \mathcal{L}_C}{\partial \psi}$ in equation 6;
+21 end
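+
+The stage structure of Algorithm 1 can be sketched as follows, where each `*_step` argument is a hypothetical callable performing one mini-batch update (equations 4, 6 and 7), not the paper's actual code:
+
+```python
+def ract_training(pretrain_actor_step, pretrain_critic_step,
+                  actor_step, critic_step, n_pre_actor, n_pre_critic, T):
+    """Run the three stages of Algorithm 1, recording the update order."""
+    trace = []
+    for _ in range(n_pre_actor):       # Stage 1: pre-train actor via MLE (eq. 4)
+        pretrain_actor_step()
+        trace.append("A0")
+    for _ in range(n_pre_critic):      # Stage 2: pre-train critic via MSE (eq. 6)
+        pretrain_critic_step()
+        trace.append("C0")
+    for _ in range(T):                 # Stage 3: alternating updates
+        actor_step()                   # actor step (eq. 7)
+        trace.append("A")
+        critic_step()                  # critic step (eq. 6)
+        trace.append("C")
+    return trace
+```
+
+The alternating stage uses equal update frequency for actor and critic, matching the schedule described in Section E.3.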
+
+Conditioned on the interaction history $\pmb{x}_h$ corrupted from $\pmb{x}$ , the actor predicts the distribution parameter $\pi$ over items, which in turn defines the likelihood $p(\pmb{x}|\pi)$ . We use $q$ to denote the empirical data distribution, so the target conditional is $q(\pmb{x}|\pi)$ . Matching $p(\pmb{x}|\pi)$ to $q(\pmb{x}|\pi)$ can be formulated as the standard adversarial loss of a conditional GAN (Mirza & Osindero, 2014). It has been shown (Goodfellow et al., 2014; Li et al., 2017) that the optimal critic for a conditional GAN can be represented as the log-likelihood ratio
+
+$$
+R^{*}(\boldsymbol{\pi}, \boldsymbol{x}) = \log \frac{q(\boldsymbol{x} | \boldsymbol{\pi})}{p(\boldsymbol{x} | \boldsymbol{\pi})} \tag{10}
+$$
+
+In the collaborative filtering setup, we often assume that $p(\pmb{x}|\pmb{\pi})$ is a simple distribution, such as a multinomial in VAEs (Liang et al., 2018) or a Gaussian in MF. This simplification allows the critic to be parameterized in the following form (Miyato & Koyama, 2018):
+
+$$
+R^{*}(\boldsymbol{\pi}, \boldsymbol{x}) = \boldsymbol{x}^{\top} \mathbf{V} \nu(\boldsymbol{\pi}) + \mathbf{C} \tag{11}
+$$
+
+where $\pmb{x}$ is the target, $\nu(\pi)$ is a layer of the critic with input $\pi$ , and $\mathbf{V}$ and $\mathbf{C}$ are the parameters to learn. Most notably, this formulation incorporates the prediction information via an inner product, as opposed to concatenation. The form in equation 11 is indeed the form we proposed for the NLL feature $\pmb{x}^{\top} \log \pi$ , with $\mathbf{V} = \mathbf{I}$ and $\nu(\cdot) = \log(\cdot)$ . $\mathbf{C}$ includes the normalizer for the prediction probability (Miyato & Koyama, 2018), which is related to the count features in equation 5.
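+
+A minimal sketch of a feature-based critic in this spirit (NumPy; the input standardization stands in for the BN layer of Table 4, and the random weights and feature values are illustrative, not the trained model):
+
+```python
+import numpy as np
+
+def critic(h, params):
+    # h = [L_E, |H_0|, |H_1|]: standardize the 3-dim feature vector,
+    # pass it through ReLU hidden layers, and squash with a sigmoid
+    # so the output lies in (0, 1) like the NDCG target.
+    z = (h - h.mean()) / h.std()
+    for W, b in params[:-1]:
+        z = np.maximum(0.0, z @ W + b)       # ReLU hidden layers
+    W, b = params[-1]
+    return 1.0 / (1.0 + np.exp(-(z @ W + b)))  # sigmoid output
+
+rng = np.random.default_rng(1)
+dims = [3, 100, 100, 10, 1]                  # layer widths following Table 4
+params = [(0.1 * rng.standard_normal((n_in, n_out)), np.zeros(n_out))
+          for n_in, n_out in zip(dims[:-1], dims[1:])]
+h = np.array([120.0, 80.0, 20.0])            # hypothetical [L_E, |H_0|, |H_1|]
+score = critic(h, params)
+```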
+
+# E EXPERIMENTAL SETUP
+
+# E.1 DATASETS
+
+We conduct experiments on three publicly available datasets. Table 3 summarizes the statistics of the data. These three ten-million-scale datasets represent different item-recommendation scenarios, including user-movie ratings and user-song play counts. This is the same set of medium- to large-scale user-item consumption datasets used in Liang et al. (2018), and we keep the same pre-processing steps for a fair comparison.
+
+Table 3: Summary statistics of the datasets after all pre-processing steps. Interaction# is the number of non-zero entries. Sparsity% is the percentage of zero entries in the user-item interaction matrix $X$ . Item# is the number of items, and User# the total number of users. HO# is the number of held-out validation/test users out of User#.
+
+| Dataset | Interaction# | Sparsity% | Item# | User# | HO# |
+| ML-20M | 10.0M | 99.64% | 20,108 | 136,677 | 10K |
+| Netflix | 56.9M | 99.31% | 17,769 | 463,435 | 40K |
+| MSD | 33.6M | 99.86% | 41,140 | 571,355 | 50K |
+
+1. MovieLens-20M (ML-20M): This is the user-movie rating data collected from a movie recommendation service. The data is binarized by keeping ratings of four or higher and setting other entries as unobserved. Only users who have watched at least five movies are considered.
+2. Netflix Prize (Netflix): This is the user-movie rating data from the Netflix Prize. Similarly to ML-20M, the data is binarized by keeping ratings of four or higher, and only users who have watched at least five movies are kept.
+3. Million Song Dataset (MSD): This is the user-song play count data from the Million Song Dataset (Bertin-Mahieux et al., 2011). We binarize play counts, and keep users who have listened to at least 20 songs as well as songs that are listened to by at least 200 users.
+
+# E.2 EVALUATION PROTOCOL
+
+In the testing stage, we obtain the predicted ranking by sorting the multinomial probability $\pi_p$ . For each user, we compare the predicted ranking of the held-out items with their true ranking. Two ranking-based metrics are considered, Recall@R and the truncated NDCG (NDCG@R), where $R$ is the cut-off hyper-parameter. While Recall@R considers all items ranked within the first $R$ equally important, NDCG@R uses a monotonically increasing discount to emphasize the importance of higher ranks over lower ones.
+
+Formally, we define $m(r)$ as the item at rank $r$ , and $\mathcal{H}_0$ as the set of held-out unobserved items that a user will interact with. DCG@R is then defined as
+
+$$
+\mathrm{DCG@R} = \sum_{r=1}^{R} \frac{2^{\delta[m(r) \in \mathcal{H}_{0}]} - 1}{\log(r + 1)}. \tag{12}
+$$
+
+By dividing DCG@R by its best possible value, we obtain NDCG@R in [0, 1].
+
+$$
+\mathrm{Recall@R} = \sum_{r=1}^{R} \frac{\delta[m(r) \in \mathcal{H}_{0}]}{\min\left(R, |\mathcal{H}_{0}|\right)}. \tag{13}
+$$
+
+The denominator normalizes Recall@R to [0, 1], with the maximum value 1 corresponding to the case where all relevant items are ranked in the top $R$ positions.
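+
+A minimal sketch of equations 12 and 13 (NumPy; a base-2 logarithm is assumed for the DCG discount, as the base is unspecified above):
+
+```python
+import numpy as np
+
+def dcg_at_r(ranked_items, heldout, R):
+    # equation 12; delta[...] is 1 when the item at rank r is held out
+    return sum((2.0 ** (ranked_items[r - 1] in heldout) - 1.0) / np.log2(r + 1.0)
+               for r in range(1, R + 1))
+
+def ndcg_at_r(ranked_items, heldout, R):
+    # the best possible DCG@R places all held-out items at the top ranks
+    ideal = sum(1.0 / np.log2(r + 1.0) for r in range(1, min(R, len(heldout)) + 1))
+    return dcg_at_r(ranked_items, heldout, R) / ideal
+
+def recall_at_r(ranked_items, heldout, R):
+    # equation 13; normalized so the best achievable value is 1
+    hits = sum(item in heldout for item in ranked_items[:R])
+    return hits / min(R, len(heldout))
+```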
+
+# E.3 EXPERIMENT HYPER-PARAMETERS
+
+We set hyper-parameters following Liang et al. (2018) for fair comparison. For the VAE, the dimension of the latent representation is 200. When the KL regularization is removed $(\beta = 0)$ , i.e., for DAE and MF, we instead apply $\ell_2$ regularization (0.01) on the weights to prevent overfitting. The Adam optimizer (Kingma & Ba, 2014) is used, with a batch size of $|\mathcal{B}| = 500$ users. For ML-20M, the actor is pre-trained for 150 epochs, followed by 50 epochs of alternating training. On the other two datasets, the actor is pre-trained for 75 epochs, followed by 25 epochs of alternating training. The critic is pre-trained for 50 epochs on all three datasets. The alternating training uses equal update frequency for actor and critic. This schedule ensures the same total number of actor training epochs as Liang et al. (2018): 200 epochs for ML-20M, and 100 epochs for the other two datasets.
+
+Table 4: Network architectures. The arrow indicates the flow between two layers. For each layer, we show the number of units followed by its activation function. BN indicates Batch Normalization.
+
+| Networks | Architectures |
+| Actor (Encoder) | M Linear → 600 Tanh → 200 Linear & Exp |
+| Actor (Decoder) | 200 Linear → 600 Tanh → M Softmax |
+| Critic | 3 BN → 100 ReLU → 100 ReLU → 10 ReLU → Sigmoid |
+
+Table 5: Summary of training-schedule hyper-parameters. $\beta_{\mathrm{max}}$ indicates the maximum value of $\beta$ . In the actor pre-training stage, the numbers of epochs used for increasing (annealing) and fixing $\beta$ are shown in the corresponding rows.
+
+| Dataset | ML-20M | Netflix | MSD |
+| $\beta_{\mathrm{max}}$ | 0.2 | 0.2 | 0.1 |
+| # epochs for annealing | 100 | 75 | 75 |
+| # epochs for fixing | 50 | 0 | 0 |
+| # epochs for actor pre-training | 150 | 75 | 75 |
+| # epochs for critic pre-training | 50 | 50 | 50 |
+| # epochs for alternating training | 50 | 25 | 25 |
+
+Table 6: Performance when trained against different ranking metrics (ML-20M). Columns report testing metrics.
+
+| Training | Recall@20 | Recall@50 | NDCG@100 |
+| RaCT (Recall@100) | 0.40316 | 0.54317 | 0.43392 |
+| RaCT (NDCG@100) | 0.40269 | 0.54304 | 0.43395 |
+| VAE | 0.39623 | 0.53632 | 0.42586 |
+
+A fully-connected (FC) architecture is used for all networks, as detailed in Table 4. Please refer to Goodfellow et al. (2016) for the activation functions. Batch Normalization (Ioffe & Szegedy, 2015) is used to normalize the input features, because the magnitude of the inputs (NLL) changes as training progresses. The encoder outputs the mean and variance of the variational distribution; the variance is implemented via an exponential function.
+
+# F ADDITIONAL EXPERIMENTAL RESULTS
+
+# F.1 GENERALIZATION ACROSS RANKING METRICS
+
+To study the generalization ability of RaCT, we consider training the critic against Recall@100, in addition to NDCG@100. The only difference is that Recall treats each item as equally important, while NDCG treats higher-ranked items as more important. The results are shown in Table 6. Indeed, RaCT achieves slightly better testing Recall values when trained against the Recall metric, and the reverse holds for NDCG. More importantly, RaCT generalizes across ranking metrics: all testing metric values improve significantly when trained against either Recall or NDCG.
+
+Following Liang et al. (2018), we compare with NCF on two small datasets, ML-1M (6,040 users, 3,704 items) and Pinterest (55,187 users, 9,916 items). We restrict this comparison to small datasets because the prediction stage of NCF is slow, as it lacks the amortized inference of the VAE. We use their publicly available datasets and metrics for fair comparison. The results are evaluated with a small cut-off value $R$ (NDCG@10 and Recall@10) to study only the highly ranked items. The performance is compared in Table 7. Our observation that DAE performs better than VAE on these two datasets is consistent with Liang et al. (2018). In general, RaCT shows a larger improvement when a larger dataset (Pinterest) or a stochastic actor (VAE) is considered: since both datasets are relatively small, the critic is better trained when more samples are observed. On the larger Pinterest dataset, the auto-encoder variants outperform NCF by a large margin, and our RaCT further boosts the performance.
+
+Figure 6: Correlation between the learning objectives (NLL or RaCT) and the evaluation metric NDCG. Panels: (a) training NLL, (b) training NDCG, (c) testing NLL, (d) testing NDCG.
+
+Table 7: Comparison between our RaCT and NCF on two small datasets. NCF results are from Liang et al. (2018). The two RaCT columns apply RaCT to the DAE and the VAE actor, respectively.
+
+| Dataset | Metric | NCF | DAE | RaCT (DAE) | VAE | RaCT (VAE) |
+| ML-1M | Recall@10 | 0.705 | 0.722 | 0.722 | 0.704 | 0.706 |
+| ML-1M | NDCG@10 | 0.426 | 0.446 | 0.446 | 0.433 | 0.434 |
+| Pinterest | Recall@10 | 0.872 | 0.886 | 0.887 | 0.873 | 0.878 |
+| Pinterest | NDCG@10 | 0.551 | 0.580 | 0.581 | 0.564 | 0.568 |
+
+# F.2 CORRELATION BETWEEN TRAINING METRICS
+
+Figure 6 explores the relationship between NLL, the learned RaCT metric, and NDCG. We ensure that the best model for each method is used: the model after actor pre-training (Stage 1) is used for the NLL plots, and the model after the actor-critic alternating training (Stage 3) is used for the RaCT plots. The bottom row of plots displays the output of these models on the testing data, demonstrating that RaCT's connection to the ranking metric generalizes to unseen data.
+
+
+Figure 7: The improvement at various cut-off values R in evaluation. For each R, the dashed line shows the VAE and the square dot shows the RaCT.
+
+
+(a) Scatter plot
+
+
+(b) NDCG mean
+Figure 8: Improvement breakdown over different numbers of user interactions. (a) Scatter plot of NDCG@100 against activity level. Note that only # interactions $\leq 1000$ is visualized; there is a long tail ( $>1000$ ) in the distribution. (b) Comparison of the mean NDCG@100 values for four user groups.
+
+# F.3 BREAKDOWN ANALYSIS FOR DIFFERENT CUT-OFF VALUES
+
+NDCG@100 only reflects the ranking quality at the cut-off value $R = 100$ , i.e., the top-100 ranked items. To study the ranking quality over different ranges of the predicted list, we consider $R = 5, 20, 50, 100, 200$ , and report the corresponding NDCG values in Figure 7. The NDCG@R values improve for all $R$ , even though the critic is trained against NDCG@100. This is because the NDCG metrics at different $R$ are highly correlated, so RaCT can generalize across them.
+
+# F.4 BREAKDOWN ANALYSIS FOR DIFFERENT NUMBER OF INTERACTIONS
+
+In Figure 8, we show the performance improvement across increasing user interactions, using the ML-20M dataset as a case study. The # interactions is the number of items each user interacts with (ground-truth), indicating the user's activity level. Figure 8(a) shows scatter plots of NDCG@100 against the number of interactions on the testing dataset, for both VAE and our RaCT method. RaCT generally improves over VAE across a large range of user interactions. We further categorize users into four groups according to their number of interactions: $< 250$ , $250 - 500$ , $501 - 750$ , $>750$ , and plot the mean NDCG@100 values for the two methods in Figure 8(b). RaCT improves over VAE except for users with a high activity level ( $>750$ ). This is probably because the number of the most active users is small, as observed in Figure 8(a); the resulting lack of training data for critic learning potentially hurts performance.
+
+# F.5 ON THE PERFORMANCE IMPROVEMENT OF ACTORS VIA RACT.
+
+We also consider the two other auto-encoder variants used in Liang et al. (2018) as the actor. (1) The DAE in Liang et al. (2018) uses a smaller architecture, $M \to 600 \to M$ , which achieves better performance than the larger architecture of our VAE ( $\beta = 0$ ) by preventing over-fitting. While we observe the same result, it is interesting to note that the VAE ( $\beta = 0$ ) shows a much larger improvement gain than the DAE (Liang et al., 2018) when trained with our RaCT technique, and eventually significantly outperforms the latter. This shows that the additional modeling capacity is necessary to capture the more complex relationships in prediction when the goal is ranking rather than MLE. (2) The MF in Liang et al. (2018) employs a Gaussian likelihood, and also gains a slight improvement with RaCT. Overall, we conclude that the RaCT method improves all the MLE-based variants.
+
+We also use the ranking-loss-based WARP as the actor. For the large datasets considered in this paper, calculating the full WARP loss for each user is impractically slow, so we derive a simple approximation to WARP that runs in time quasilinear in the number of items. Even so, it takes around 30 minutes per epoch on the ML-20M dataset, roughly 30 times slower than the VAE. WARP yields a score of 0.312, lower than the other baseline methods, which is consistent with the studies in Liang et al. (2018); Sedhain et al. (2016). However, when RaCT is applied, WARP obtains a significant improvement; in fact, the largest improvement gain of all the actors. This indicates that RaCT is a more direct and effective approach to learning to rank on large datasets.
\ No newline at end of file
diff --git a/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/images.zip b/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..584b10ab5a9ea89bcff1da87705a68af9d4906b6
--- /dev/null
+++ b/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb0c907ef3bd42039e6100d817097d140f13f83cb7bcead4d9b2242688606d18
+size 561059
diff --git a/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/layout.json b/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b4756938a43a0fe1a6c1a47e81db077279322b5f
--- /dev/null
+++ b/racttowardamortizedrankingcriticaltrainingforcollaborativefiltering/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6e27d4ac0c0ee3272c16d2f0344b0e3dce6c7b3e61b6c37145f79413403edfb3
+size 667680
diff --git a/rappnoveltydetectionwithreconstructionalongprojectionpathway/8bb42a5b-16e5-4f33-b232-90b208a61371_content_list.json b/rappnoveltydetectionwithreconstructionalongprojectionpathway/8bb42a5b-16e5-4f33-b232-90b208a61371_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a2e6fb25680ae7e7fdf5abf46ff4c62ee2b9553e
--- /dev/null
+++ b/rappnoveltydetectionwithreconstructionalongprojectionpathway/8bb42a5b-16e5-4f33-b232-90b208a61371_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f559105b3fafe9ad5a656036eff74d3524a8a2d08de4c18d128988231a1628d2
+size 78834
diff --git a/rappnoveltydetectionwithreconstructionalongprojectionpathway/8bb42a5b-16e5-4f33-b232-90b208a61371_model.json b/rappnoveltydetectionwithreconstructionalongprojectionpathway/8bb42a5b-16e5-4f33-b232-90b208a61371_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..0bbb9867081788eb0264dc56dd7335e1f89fc38d
--- /dev/null
+++ b/rappnoveltydetectionwithreconstructionalongprojectionpathway/8bb42a5b-16e5-4f33-b232-90b208a61371_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:638b6c3e5b9780982f6aa870b1edc11505fb4eaa3c5694e84de500af32821ab5
+size 95325
diff --git a/rappnoveltydetectionwithreconstructionalongprojectionpathway/8bb42a5b-16e5-4f33-b232-90b208a61371_origin.pdf b/rappnoveltydetectionwithreconstructionalongprojectionpathway/8bb42a5b-16e5-4f33-b232-90b208a61371_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1b78dee44aeb71c575f21d3240324ee383b2eb18
--- /dev/null
+++ b/rappnoveltydetectionwithreconstructionalongprojectionpathway/8bb42a5b-16e5-4f33-b232-90b208a61371_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c386c5cd52eecdfccebe902d6f411649dbb32fcb98379f19ca62f78d3a95aa97
+size 339774
diff --git a/rappnoveltydetectionwithreconstructionalongprojectionpathway/full.md b/rappnoveltydetectionwithreconstructionalongprojectionpathway/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..8b5d5480038cf7ab2594d3e133b0097f44903f86
--- /dev/null
+++ b/rappnoveltydetectionwithreconstructionalongprojectionpathway/full.md
@@ -0,0 +1,333 @@
+# RAPP: NOVELTY DETECTION WITH RECONSTRUCTION ALONG PROJECTION PATHWAY
+
+Ki Hyun Kim, Sangwoo Shim, Yongsub Lim, Jongseob Jeon, Jeongwoo Choi, Byungchan Kim, Andre S. Yoon
+
+MakinaRocks
+
+{khkim, sangwoo, yongsub, jongseob.jeon}@makinarocks.ai
+
+{jeongwoo, kbc8894, andre}@makinarocks.ai
+
+# ABSTRACT
+
+We propose RAPP, a new methodology for novelty detection that utilizes hidden space activation values obtained from a deep autoencoder. Precisely, RAPP compares an input and its autoencoder reconstruction not only in the input space but also in the hidden spaces. We show that if we feed a reconstructed input to the same autoencoder again, its activation values in a hidden space are equivalent to the corresponding reconstruction in that hidden space given the original input. We devise two metrics aggregating those hidden activation values to quantify the novelty of the input. Through extensive experiments using diverse datasets, we validate that RAPP improves the novelty detection performance of autoencoder-based approaches. Besides, we show that RAPP outperforms recent novelty detection methods evaluated on popular benchmarks.
+
+# 1 INTRODUCTION
+
+How can we characterize novelty when only normality information is given? Novelty detection is the mechanism to decide whether a data sample is an outlier with respect to the training data. This mechanism is especially useful in situations where a proportion of detection targets is inherently small. Examples are fraudulent transaction detection (Pawar et al., 2014; Porwal & Mukund, 2018), intrusion detection (Lee, 2017; Aoudi et al., 2018), video surveillance (Ravanbakhsh et al., 2017; Xu et al., 2015b), medical diagnosis (Schlegl et al., 2017; Baur et al., 2018) and equipment failure detection (Kuzin & Borovicka, 2016; Zhao et al., 2017; Beghi et al., 2014). Recently, deep autoencoders and their variants have shown outstanding performances in finding compact representations from complex data, and the reconstruction error has been chosen as a popular metric for detecting novelty (An & Cho, 2015; Vasilev et al., 2018). However, this approach has a limitation of measuring reconstruction quality only in an input space, which does not fully utilize hierarchical representations in hidden spaces identified by the deep autoencoder.
+
+In this paper, we propose RAPP, a new method of detecting novelty samples exploiting hidden activation values in addition to the input values and their autoencoder reconstruction values. While ordinary reconstruction-based methods carry out novelty detection by comparing differences between input data before the input layer and reconstructed data at the output layer, RAPP extends these comparisons to hidden spaces. We first collect a set of hidden activation values by feeding the original input to the autoencoder. Subsequently, we feed the autoencoder reconstructed input to the autoencoder to calculate another set of activation values in the hidden layers. This procedure does not need additional training of the autoencoder. In turn, we quantify the novelty of the input by aggregating these two sets of hidden activation values. To this end, we devise two metrics. The first metric measures the total amount of reconstruction errors in input and hidden spaces. The second metric normalizes the reconstruction errors before summing up. Note that RAPP falls back to the ordinary reconstruction-based method if we only aggregate input values before the input layer and the reconstructed values at the output layer.
+
+Also, we explain the motivations that facilitated the development of RAPP. We show that activation values in a hidden space obtained by feeding a reconstructed input to the autoencoder are equivalent to the corresponding reconstruction in that hidden space for the original input. We refer to the latter quantity as a hidden reconstruction of the input. Note that this is a natural extension of reconstruction to the hidden space. Unfortunately, we cannot directly compute the hidden reconstruction as we do the ordinary reconstruction, because the autoencoder does not impose any correspondence between encoding-decoding pairs of hidden layers during training. Nevertheless, we show that it can be computed by feeding a reconstructed input to the autoencoder again. Consequently, RAPP incorporates hidden reconstruction errors as well as the ordinary reconstruction error in detecting novelty.
+
+With extensive experiments, we demonstrate using diverse datasets that our method effectively improves autoencoder-based novelty detection methods. In addition, we show by evaluating on popular benchmark datasets that RAPP outperforms competing methods recently developed.
+
+Our contributions are summarized as follows.
+
+- We propose a new novelty detection method by utilizing hidden activation values of an input and its autoencoder reconstruction, and provide aggregation functions for them to quantify novelty of the input.
+- We provide motivation that RAPP extends the reconstruction concept in the input space into the hidden spaces. Precisely, we show that hidden activation values of a reconstructed input are equivalent to the corresponding hidden reconstruction of the original input.
+- We demonstrate that RAPP improves autoencoder-based novelty detection methods in diverse datasets. Moreover, we validate that RAPP outperforms recent novelty detection methods on popular benchmark datasets.
+
+# 2 RELATED WORK
+
+Various novelty detection methods with deep neural networks rely on the reconstruction error (Sakurada & Yairi, 2014; Hoffmann, 2007; An & Cho, 2015), because discriminative learning schemes are not suitable for highly class-imbalanced data which is common in practice. Unsupervised and semi-supervised learning approaches handle such imbalance by focusing on the characterization of normality and detecting samples out of the normality.
+
+Variational Autoencoders (VAE) (Kingma & Welling, 2014) were reported to outperform vanilla autoencoders for novelty detection based on reconstruction error (An & Cho, 2015). To carry out the novelty detection outlined in this approach, an autoencoder needs to be trained only with normal data. The autoencoder encodes the training data, which comprises only normal data in this case, into a lower-dimensional space and decodes it back to the input space. To test novelty, an input value is fed to the autoencoder to produce a reconstructed value, and the distance between the input and reconstructed values is calculated. This distance is the reconstruction error. A higher reconstruction error means that the input value cannot be encoded onto the lower-dimensional space that represents normal data. Therefore, the input value is marked as a novelty if its reconstruction error exceeds a certain threshold.
+
+Instead of autoencoders, Generative Adversarial Networks (GAN) have also been suggested to model a distribution of normal data (Sabokrou et al., 2018; Schlegl et al., 2017). Despite the same purpose of discovering a simpler, lower-dimensional representation, the training criterion of a GAN focuses on the quality of data generation rather than the reconstruction quality of training data. Recently, several works have combined autoencoders and adversarial learning to meet both criteria of dimension reduction and data generation (Haloui et al., 2018; Pidhorskyi et al., 2018; Zenati et al., 2018). One limitation of these methods based on the ordinary reconstruction error is that they do not exploit all the information available along the projection pathway of deep autoencoders. We explain how to leverage this information for novelty detection in the next section.
+
+From the viewpoint of the diversity and ratio of normal data in novelty detection, two cases arise. The first is when a small fraction of classes are normal. This case has been studied in a one-class classification context, and is usually evaluated by organizing training data into a collection of samples belonging to a small number of normal classes (Ruff et al., 2018; Perera & Patel, 2018; Sabokrou et al., 2018; Golan & El-Yaniv, 2018). The second is when a majority of classes are assigned as normal (An & Cho, 2015; Schlegl et al., 2017; Haloui et al., 2018; Zenati et al., 2018). In this case, normal data is more diverse, and the training data consists of samples from a relatively large number of normal classes: e.g., nine digits of MNIST. Neither setup dominates the other; depending on the application, either can be more suitable. Different methods may perform differently in the two cases. In this paper, we evaluate RAPP and other competing methods with experiments in both setups.
+
+# 3 PROPOSED METHOD: RAPP
+
+In this section, we describe the proposed novelty detection method RAPP based on an autoencoder. The main idea is to compare hidden activations of an input and its hidden reconstructions along the projection pathway of the autoencoder. To be precise, we project the input and its autoencoder reconstruction onto the hidden spaces to obtain pairs of activation values, and aggregate them to quantify the novelty of the input. For the aggregation, we present two metrics to measure the total amount of difference within each pair.
+
+# 3.1 RECONSTRUCTION BASED NOVELTY DETECTION
+
+An autoencoder $A$ is a neural network consisting of an encoder $g$ and a decoder $f$ , responsible for dimension reduction and its inverse mapping back to the original input space, respectively: i.e., $A = f \circ g$ . To this end, the autoencoder is trained to minimize the difference between its input $x$ and output $A(x)$ . The space the encoder $g$ maps into is called the latent space, and it provides a more concise representation of the data than the input space.
+
+Due to this unsupervised representation learning property, the autoencoder has been widely used for novelty detection. Specifically, after training an autoencoder on normal data samples, the novelty of a test sample $x$ is measured by the following reconstruction error $\epsilon(x)$ :
+
+$$
+\epsilon (x) = \| x - A (x) \| _ {2}.
+$$
+
+The test sample $x$ is more likely to be novel as the error $\epsilon(x)$ becomes larger, because it means that $x$ is farther from the manifold that the autoencoder describes.
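This scoring rule can be sketched with a toy linear "autoencoder" (a PCA-style projection standing in for a trained $A$; the data, subspace, and dimensions are all illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "normal" data lying near a 2-D subspace of R^5.
B = rng.standard_normal((5, 2))
normal = rng.standard_normal((200, 2)) @ B.T + 0.01 * rng.standard_normal((200, 5))

# Stand-in for a trained autoencoder A = f . g: encode by projecting onto
# the top-2 principal directions of the normal data, decode by mapping back.
mu = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
V = Vt[:2].T                          # 5 x 2 orthonormal basis

def A(x):
    return mu + (x - mu) @ V @ V.T

def epsilon(x):
    return np.linalg.norm(x - A(x))   # epsilon(x) = ||x - A(x)||_2

normal_scores = [epsilon(x) for x in normal]
novel = 3.0 * rng.standard_normal(5)  # generic off-manifold sample

# A novel sample cannot be reconstructed well, so its error is far larger.
assert epsilon(novel) > max(normal_scores)
```

A threshold on `epsilon` then separates normal from novel samples, exactly as described above.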
+
+Although this approach has shown promising results in novelty detection, the reconstruction error alone does not fully exploit information provided by a trained autoencoder especially when its architecture is deep. In other words, hierarchical information identified by the deep architecture is being ignored. This is rather unfortunate because hierarchical representation learning is one of the most successfully proven capabilities of deep neural networks.
+
+To fully leverage that capability, below we will describe the way to exploit hidden spaces to capture the difference between normal and novel samples in more detail.
+
+# 3.2 RECONSTRUCTION ERROR IN HIDDEN SPACES
+
+Let $A = f \circ g$ be a trained autoencoder where $g$ and $f$ are an encoder and a decoder, and $\ell$ be the number of hidden layers of $g$ . Namely, $g = g_{\ell} \circ \dots \circ g_{1}$ . We define partial computation of $g$ as follows:
+
+$$
+g _ {: i} = g _ {i} \circ \dots \circ g _ {1},
+$$
+
+for $1\leq i\leq \ell$ .
+
+Let $x$ be an input vector, and $\hat{x}$ be its reconstruction by $A$ : i.e., $\hat{x} = A(x)$ . In addition to comparing $x$ and $\hat{x}$ in the input space, as the ordinary approach does, we examine them in hidden spaces along a projection pathway of $A$ . More precisely, feeding $x$ and $\hat{x}$ into $A$ , we obtain pairs $(h_i, \hat{h}_i)$ of their hidden representations where
+
+$$
+h _ {i} (x) = g _ {: i} (x),
+$$
+
+$$
+\hat {h} _ {i} (x) = g _ {: i} (\hat {x}) = g _ {: i} (A (x)).
+$$
+
+Figure 1a illustrates the procedure of computing $h_i$ and $\hat{h}_i$ . As a result, novelty of the sample $x$ is quantified by aggregating $H(x) = \{(h_i(x), \hat{h}_i(x)): 1 \leq i \leq \ell\}$ .
+
+The overall procedure of RAPP is summarized in Algorithm 1. To clearly state the required variables to construct $H$ , we write the algorithm with the for loop in Lines 3-5, but in practice, all of them
+
+
+Figure 1: (a) Computation of $h_i(x)$ and $\hat{h}_i(x)$ : the reconstruction $\hat{x}$ is fed to the same autoencoder that produced it. (b) Motivation of RAPP: the quantity RAPP computes, the hidden activation of the reconstructed input, is equivalent to the hidden reconstruction of the input. If $\tilde{f} = f$ , computing $\hat{h}_2'(x) = \hat{h}_2(x)$ does not require explicitly evaluating $\tilde{f}_i$ , but only $g_i$ and $f = \tilde{f}$ .
+
+Algorithm 1: RAPP to compute a novelty score.
+Input: Sample $x$ , trained autoencoder $A = f \circ g$ , the number of layers $\ell$ , and aggregation function $s$ .
+Output: Novelty score $S$ .
+1: $\hat{x} = A(x)$
+2: $H = \varnothing$
+3: for $i$ in $1$ to $\ell$ do
+4: $\quad H = H \cup \{(g_{:i}(x), g_{:i}(\hat{x}))\}$
+5: end for
+6: $S = s(H)$
+
+can be computed by feed-forwarding each of $x$ and $\hat{x}$ through $g$ once. Note that RAPP is indeed a generalization of the ordinary reconstruction method, obtained by defining $g_{0}$ as the identity function and $s_{ord}$ as follows.
+
+$$
+s _ {o r d} (H (x)) = \left\| h _ {0} (x) - \hat {h} _ {0} (x) \right\| _ {2} ^ {2},
+$$
+
+where $h_0(x) = g_0(x) = x$ and $\hat{h}_0(x) = g_0(\hat{x}) = \hat{x}$ .
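Algorithm 1 amounts to one reconstruction followed by two encoder passes. A sketch with an illustrative two-layer encoder and linear decoder (the random weights and shapes stand in for a trained autoencoder, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 2-layer encoder g = g_2 . g_1 and linear decoder f.
W1 = rng.standard_normal((8, 6))
W2 = rng.standard_normal((6, 4))
Wd = rng.standard_normal((4, 8))

def relu(z):
    return np.maximum(z, 0.0)

def pathway(x):
    """[g_{:1}(x), g_{:2}(x)]: hidden activations along the projection pathway."""
    h1 = relu(x @ W1)
    h2 = relu(h1 @ W2)
    return [h1, h2]

def A(x):
    return pathway(x)[-1] @ Wd

x = rng.standard_normal(8)
x_hat = A(x)                                   # line 1 of Algorithm 1
H = list(zip(pathway(x), pathway(x_hat)))      # lines 3-5: pairs (h_i, h_hat_i)

# The ordinary method is the special case i = 0 (g_0 = identity):
s_ord = np.sum((x - x_hat) ** 2)
assert len(H) == 2
assert all(h.shape == hh.shape for h, hh in H)
```

Each of `pathway(x)` and `pathway(x_hat)` is a single forward pass through $g$, matching the remark above.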
+
+In this paper, we provide two metrics $s_{SAP}$ and $s_{NAP}$ which more extensively utilize $H$ than $s_{ord}$ . Those are especially suited when no prior knowledge exists for the selection of layers to derive a novelty metric, which commonly happens when modeling with deep neural networks. Note that, however, more elaborate metrics can be designed if we have knowledge on or can characterize the spaces.
+
+# 3.2.1 SIMPLE AGGREGATION ALONG PATHWAY (SAP)
+
+This is the most straightforward metric that one can define on $H$ . For a data sample $x$ , SAP is defined by summing the square of Euclidean distances for all pairs in $H$ :
+
+$$
+s _ {S A P} (x) = \sum_ {i = 0} ^ {\ell} \| h _ {i} (x) - \hat {h} _ {i} (x) \| _ {2} ^ {2} = \| \mathbf {h} (x) - \hat {\mathbf {h}} (x) \| _ {2} ^ {2},
+$$
+
+where $\mathbf{h}(x)$ and $\hat{\mathbf{h}}(x)$ are the concatenations $[h_0(x); \dots; h_\ell(x)]$ and $[\hat{h}_0(x); \dots; \hat{h}_\ell(x)]$ , respectively.
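Assuming the pairs in $H$ are already available (hard-coded toy values below, including the $i = 0$ input pair), $s_{SAP}$ reduces to one squared norm over the concatenated differences:

```python
import numpy as np

# Hypothetical pathway activations for one sample: pairs (h_i, h_hat_i),
# with the i = 0 entry being (x, x_hat) itself.
H = [(np.array([1.0, 2.0]), np.array([1.5, 2.0])),   # i = 0
     (np.array([0.5]), np.array([0.0]))]             # i = 1

# Sum of squared distances per layer ...
s_sap = sum(np.sum((h - hh) ** 2) for h, hh in H)

# ... equals the squared norm of the concatenated difference vector.
d = np.concatenate([h - hh for h, hh in H])
assert np.isclose(s_sap, np.sum(d ** 2))
assert np.isclose(s_sap, 0.5)   # 0.5^2 + 0 + 0.5^2
```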
+
+# 3.2.2 NORMALIZED AGGREGATION ALONG PATHWAY (NAP)
+
+Although SAP is intuitive, it does not consider properties of hidden spaces; distance distributions of pairs in $H$ may be different depending on the individual hidden spaces. For instance, the magnitude
+
+of distances can depend on layers, or there may exist correlated neurons even across layers which are unintentionally emphasized in SAP. To capture clearer patterns, we propose to normalize the distances via two steps: orthogonalization and scaling.
+
+Let $\mathbf{d}(x) = \mathbf{h}(x) - \hat{\mathbf{h}}(x)$ . Given a training set $X$ , let $\mathbf{D}$ be the matrix whose $i$ -th row is $\mathbf{d}(x_i)$ for $x_i \in X$ , and let $\bar{\mathbf{D}}$ be the column-wise centered version of $\mathbf{D}$ . For the normalization, we compute the SVD $\bar{\mathbf{D}} = U\Sigma V^\top$ to obtain the singular values $\Sigma$ and right singular vectors $V$ . For a given data sample $x$ , we define $s_{NAP}$ as follows:
+
+$$
+s _ {N A P} (x) = \left\| \left(\mathbf {d} (x) - \mu_ {X}\right) ^ {\top} V \Sigma^ {- 1} \right\| _ {2} ^ {2},
+$$
+
+where $\mu_{X}$ is the column-wise mean of $\mathbf{D}$ , and $\mathbf{d}(x)$ is expressed as a column vector. Note that $s_{NAP}$ is equal to the Mahalanobis distance with the covariance matrix $V\Sigma^{2} V^{\top}$ . Although the SVD computation time is quadratic in the number of columns of the target matrix, we observe that its impact is relatively small in practical setups. See Appendix A for more details.
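The NAP normalization can be sketched directly from the definitions above (the difference matrix `D` here is random stand-in data with deliberately mismatched per-dimension scales, not pathway differences from a real model):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training-set difference matrix D: one row d(x_i) per sample.
D = rng.standard_normal((100, 5)) * np.array([1.0, 10.0, 0.1, 1.0, 1.0])

mu = D.mean(axis=0)                       # column-wise mean
Dbar = D - mu                             # column-wise centering
_, S, Vt = np.linalg.svd(Dbar, full_matrices=False)

def s_nap(d):
    # ||(d - mu)^T V Sigma^{-1}||_2^2: squared distance after
    # orthogonalization (V) and per-direction scaling (Sigma^{-1}).
    return np.sum(((d - mu) @ Vt.T / S) ** 2)

scores = np.array([s_nap(d) for d in D])

# On the training rows, (d - mu) V Sigma^{-1} recovers the rows of U, whose
# squared entries sum to the number of columns: the badly scaled raw
# differences have been whitened.
assert np.isclose(scores.sum(), D.shape[1])
```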
+
+# 4 MOTIVATION OF RAPP
+
+One natural question about the ordinary reconstruction method is the following: why do we investigate only the input space? Or, why do we not use information in hidden spaces? While the reconstruction error in the input space is extensively employed, no similar concept exists for hidden spaces. One reason is that the corresponding encoding and decoding layers are not guaranteed to express the same space: e.g., dimensions may be permuted. This is because the autoencoder objective does not have any term involving activations from intermediate hidden layers. As a result, $f_{\ell :i + 1}(g(x))$ cannot be considered a reconstruction of $g_{:i}(x)$ , except for $i = 0$ , for which they become the ordinary reconstruction of, and the input to, the autoencoder, respectively.
+
+Nevertheless, in this section, we will show that there is an indirect way to compute the hidden reconstruction. Precisely, we will show that $\hat{h}_i(x) = g_{:i}(A(x))$ is indeed equivalent to a reconstruction of $g_{:i}(x)$ . The overall mechanism is depicted in Figure 1b.
+
+# 4.1 COMPUTATION OF HIDDEN RECONSTRUCTION
+
+Let $A = f \circ g$ be a trained autoencoder, and $M_0 = \{A(x) : x \in \mathbb{R}^n\}$ be the low dimensional manifold that $A$ describes (Pidhorskyi et al., 2018): i.e.,
+
+$$
+\forall x \in M _ {0}, x = A (x).
+$$
+
+Defining $M_{i} = \{g_{:i}(x):x\in M_{0}\}$ as the low dimensional image of $M_0$ under $g_{:i}$ , the restrictions of $g$ and $f$ to $M_{0}$ and $M_{\ell}$ , respectively, are inverse functions of each other.
+
+Quantifying Hidden Reconstruction We first assume that there exists a decoder $\tilde{f} = \tilde{f}_1\circ \dots \circ \tilde{f}_\ell$ such that
+
+$$
+\forall x \in M _ {\ell}, \tilde {f} (x) = f (x), \tag {1}
+$$
+
+$$
+\forall a \in M _ {i}, a = \left(g _ {i} \circ \tilde {f} _ {i}\right) (a). \tag {2}
+$$
+
+The second condition makes $\tilde{f}_{\ell :i + 1}$ a proper decoder corresponding to $g_{i + 1:}$ , and thus $\tilde{f}$ enables us to define the $i$ -th hidden reconstruction $\hat{h}_i'(x)$ as follows:
+
+$$
+\hat {h} _ {i} ^ {\prime} (x) = (\tilde {f} _ {\ell : i + 1} \circ g _ {i + 1:}) (h _ {i} (x)).
+$$
+
+Finally, we conclude that $\hat{h}_i'(x)$ is equal to $\hat{h}_i(x)$ for $x \in M_0$ as follows.
+
+$$
+\begin{aligned} \hat {h} _ {i} ^ {\prime} (x) &= \left(\tilde {f} _ {\ell : i + 1} \circ g _ {i + 1:}\right) \left(h _ {i} (x)\right) = \left(\tilde {f} _ {\ell : i + 1} \circ g\right) (x) \\ &= (g _ {: i} \circ \tilde {f} \circ g) (x) \quad \text {(by Eq. 2)} \\ &= (g _ {: i} \circ A) (x) = h _ {i} (\hat {x}) = \hat {h} _ {i} (x), \quad \text {(by Eq. 1)} \end{aligned}
+$$
+
+where we do not need $\tilde{f}_i$ to compute $\hat{h}_i'(x)$ , but only $g_i$ and $f$ . Note that for $x \in M_0$ , i.e., already on the manifold, the $i$ -th hidden reconstruction $\hat{h}_i'(x)$ equals the corresponding hidden activation $h_i(x)$ for every $1 \leq i \leq \ell$ , since $x = A(x)$ . For $x \notin M_0$ , the hidden reconstruction $\hat{h}_i'(x)$ will differ from $h_i(x)$ .
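The on-manifold identity can be checked numerically with a toy map that is idempotent by construction (coordinate-wise clipping; `g1` is a stand-in hidden layer, not a trained encoder; none of this is the paper's model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy idempotent "autoencoder": clip each coordinate to [-1, 1].
# Its manifold M_0 = [-1, 1]^4 is exactly the fixed-point set x = A(x).
A = lambda x: np.clip(x, -1.0, 1.0)
g1 = lambda x: np.tanh(x)                 # stand-in first hidden layer

x_on = rng.uniform(-1.0, 1.0, size=4)     # a point in M_0
x_off = np.array([3.0, -2.0, 0.5, 0.1])   # a point outside M_0

# On M_0: x = A(x), so every hidden activation of x_hat matches that of x.
assert np.allclose(A(x_on), x_on)
assert np.allclose(g1(A(x_on)), g1(x_on))

# Off M_0: the reconstruction moves the point, and the gap survives in the
# hidden space -- this per-layer gap is what RAPP aggregates.
assert not np.allclose(g1(A(x_off)), g1(x_off))
```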
+
+Table 1: Description of datasets used in our evaluation.
+
+| Name | # Samples | # Features | # Classes | Domain | Novelty Target |
+| --- | --- | --- | --- | --- | --- |
+| MI-F | 25,286 | 58 | 2 | CNC milling | Machine not completed |
+| MI-V | 23,125 | 58 | 2 | CNC milling | Workpiece out-of-spec |
+| EOPT | 90,515 | 20 | 2 | Storage system | System failures |
+| NASA | 4,687 | 33 | 2 | Astronomy | Hazardous asteroids |
+| RARM | 20,221 | 6 | 2 | Robotics | Malfunctions |
+| STL | 1,941 | 27 | 7 | Steel | Surface defects |
+| OTTO | 61,878 | 93 | 9 | E-commerce | Types of products |
+| SNSR | 58,509 | 48 | 11 | Electric currents | Defective conditions |
+| MNIST | 70,000 | 784 | 10 | Handwritten digits | Digits |
+| F-MNIST | 70,000 | 784 | 10 | Fashion articles | Articles |
+
+Existence of $\tilde{f}$ Since $x = A(x)$ for $x\in M_0$ , $g_{i}$ and $f_{i}$ are one-to-one functions on $M_{i - 1}$ and $M_{i}$ , respectively. Let us define $\tilde{f}_i = g_i^{-1}$ on $M_{i}$ ; then $\tilde{f} = g^{-1}$ also holds on $M_{\ell}$ . This implies $x = (\tilde{f}\circ g)(x)$ for $x\in M_0$ , and consequently, $\tilde{f} = f$ on $M_{\ell}$ . This definition of $\tilde{f}_i$ satisfies the two conditions above, and as discussed, we are able to compute hidden reconstructions of an input $x$ by computing the $i$ -th hidden activation of the reconstructed input: i.e., $\hat{h}_i'(x) = (g_{:i}\circ A)(x) = \hat{h}_i(x)$ .
+
+Existence of $\tilde{f}$ with Neural Networks Given $g_{i}$ , if a symmetric architecture for $\tilde{f}_{i}$ is used, we may not be able to learn $\tilde{f}_{i} = g_{i}^{-1}$ . Neural networks are, however, highly flexible frameworks in which we can deal with models of arbitrary function forms by adjusting the network architecture. This property enables us to design a layer capable of representing $\tilde{f}_{i}$ . For instance, even if $\tilde{f}_{i}$ is too complicated to be represented with a single fully connected layer, we can still approximate $\tilde{f}_{i}$ by stacking multiple layers. Hence, given $g_{i}$ , $\tilde{f}_{i}$ can be represented by neural networks.
+
+# 5 EVALUATION
+
+In this section, we evaluate RAPP in comparison to existing methods. To this end, we tested the methods on several benchmarks and diverse datasets collected from Kaggle and the UCI repository which are suitable for evaluating novelty detection methods.
+
+# 5.1 DATASETS AND PROBLEM SETUPS
+
+The datasets from Kaggle and the UCI repository are chosen from problem sets of anomaly detection and multi-class classification, summarized in Table 1. We note that MI-F and MI-V share the same feature matrix, but are considered to be different datasets because their labels normal and abnormal are assigned by different columns: i.e. machine completed and pass visual inspection, respectively. We use these datasets to compare RAPP with standard autoencoder-based methods described in Section 5.2.
+
+To compare RAPP with novelty detection methods in the recent literature, we also use popular benchmark datasets for evaluating deep learning techniques: MNIST (LeCun & Cortes, 2010) and F-MNIST (Xiao et al., 2017). For these datasets, we do not take the pre-split training and test sets, but instead merge them for post-processing.
+
+Novelty detection detects novel patterns by focusing on deviations from model-learned normal patterns. Thus, training sets contain only normal samples and test sets contain both normal and anomaly samples in our evaluation setups. Precisely, if a dataset contains an anomaly label, we assign all samples with that label to the test set for detection. If a dataset does not have any anomaly labels, we consider the following two setups.
+
+- Multimodal Normality: A single class is chosen to be the novelty class and the remaining classes are assigned as the normal class. This setup is repeated to produce sub-datasets with all possible novelty assignments. For instance, MNIST results in a set of datasets with 10 different novelty classes.
+
+- Unimodal Normality: In contrast to the multimodal normality setup, we take one class for normality, and the others for novelty. For instance, MNIST results in a set of datasets with 10 different normal classes.
+
+We applied these two setups to STL, OTTO, SNSR, MNIST, and F-MNIST datasets.
+
+# 5.2 COMPARISON METHOD
+
+We compare RAPP and the other methods using the Area Under the Receiver Operating Characteristic curve (AUROC). Note that we do not employ thresholding-based metrics such as the $F_1$ score because access to abnormal samples is allowed only at testing time. Hence, we focus on the separability between normal and novel samples that a model achieves, as measured by AUROC.
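AUROC itself needs no threshold: it is the probability that a randomly chosen novel sample receives a higher novelty score than a randomly chosen normal one (the Mann-Whitney formulation). A minimal sketch with toy scores (not the paper's results):

```python
import numpy as np

def auroc(scores_normal, scores_novel):
    """P(random novel score > random normal score), ties counted as 1/2."""
    s_n = np.asarray(scores_normal, dtype=float)[:, None]
    s_a = np.asarray(scores_novel, dtype=float)[None, :]
    return np.mean(s_a > s_n) + 0.5 * np.mean(s_a == s_n)

# Perfect separation -> 1.0; indistinguishable scores -> 0.5.
assert auroc([0.1, 0.2], [0.8, 0.9]) == 1.0
assert auroc([0.5, 0.5], [0.5, 0.5]) == 0.5
```

This pairwise formulation also makes the remark below plausible: the expected AUROC does not depend on how many novel samples are placed in the test set.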
+
+For the datasets in Table 1, we compare the effectiveness of the reconstruction error, SAP and NAP for three models: Autoencoder (AE), Variational Autoencoder (VAE), and Adversarial Autoencoder (AAE) (Makhzani et al., 2016). For the benchmark datasets, recent approaches including OCNN (Chalapathy et al., 2018), GPND (Pidhorskyi et al., 2018), DSVDD (Ruff et al., 2018) and GT (Golan & El-Yaniv, 2018) are available. To obtain the performances of these existing approaches, we downloaded their code and applied them to our problem setups.
+
+For MNIST and F-MNIST, we create test sets with novelty ratios of $35\%$ for the multimodal setup and $50\%$ for the unimodal setup, where the novelty samples in the test sets are randomly selected from the given novelty classes. For the other datasets, we take all samples in the given novelty classes to create test sets. Note that the expected value of AUROC is invariant to the novelty ratio.
+
+# 5.3 IMPLEMENTATION DETAILS
+
+We use symmetric architectures with fully-connected layers for the three base models, AE, VAE, and AAE. Each encoder and decoder has 10 layers, with bottleneck sizes that vary by dataset. For the Kaggle and UCI datasets, we first carry out PCA for each dataset; the minimum number of principal components that explain at least $90\%$ of the variance is selected as the bottleneck size of the autoencoders. We set the bottleneck size to 20 for the benchmark datasets. Leaky ReLU (Xu et al., 2015a) activations and batch normalization (Ioffe & Szegedy, 2015) layers are appended to all layers except the last.
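The bottleneck-size rule (smallest number of principal components explaining at least 90% of the variance) can be sketched as follows, on random stand-in data rather than an actual Kaggle/UCI feature matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy tabular data with a few dominant directions.
X = rng.standard_normal((500, 10)) * np.array([5, 4, 3, 1, 1, .5, .5, .2, .2, .1])

Xc = X - X.mean(axis=0)
_, S, _ = np.linalg.svd(Xc, full_matrices=False)
explained = (S ** 2) / np.sum(S ** 2)             # per-component variance ratio
cum = np.cumsum(explained)
bottleneck = int(np.searchsorted(cum, 0.90) + 1)  # min #components >= 90%

assert cum[bottleneck - 1] >= 0.90                # enough variance explained
assert bottleneck == 1 or cum[bottleneck - 2] < 0.90  # and minimal
```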
+
+We train AE, VAE and AAE with the Adam optimizer (Kingma & Ba, 2015), and select the model with the lowest validation loss as the best model. For training stability of the VAE, 10 Monte Carlo samples were averaged in the reparameterization trick (Kingma & Welling, 2014) to obtain the reconstruction from the decoder. In the calculation of SAP and NAP, we excluded reconstructions in the input space for MNIST and F-MNIST.
+
+# 5.4 RESULTS
+
+Each AUROC score is obtained by averaging AUROC scores from multiple trials to reduce the random errors in training neural networks: 5 trials for MNIST and F-MNIST, and 20 trials for the other datasets. More results are provided in the Appendix: standard deviations in Appendix B, comparisons to baselines other than autoencoder variants in Appendix C, and the effect of varying the hidden layers involved in the RAPP computation in Appendix D.
+
+# 5.4.1 COMPARISON WITH BASELINES
+
+Table 2 summarizes the results of our performance evaluation; the best score for each model is in bold, and the best score for each dataset is underlined. Since STL, OTTO, SNSR, MNIST, and F-MNIST do not have anomaly labels, their scores are averaged over all possible anomaly class assignments. For instance, the AUROC value for OTTO in the unimodal normality setup is the average of 9 AUROC values with different novelty class assignments.
+
+In Table 2, RAPP shows the highest AUROC scores for most of the cases. If we examine the performance for each dataset, RAPP achieves the best for 10 cases out of 15 (see the underlines).
+
+Table 2: AUROC of RAPP and the baselines.
+
+| Dataset | AE Recon | AE SAP | AE NAP | VAE Recon | VAE SAP | VAE NAP | AAE Recon | AAE SAP | AAE NAP |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| *Multimodal Normality* | | | | | | | | | |
+| STL | 0.723 | 0.703 | 0.711 | 0.700 | 0.675 | 0.711 | 0.726 | 0.711 | 0.724 |
+| OTTO | 0.617 | 0.616 | 0.665 | 0.630 | 0.631 | 0.649 | 0.617 | 0.618 | 0.665 |
+| SNSR | 0.613 | 0.611 | 0.614 | 0.608 | 0.620 | 0.658 | 0.612 | 0.608 | 0.609 |
+| MNIST | 0.825 | 0.881 | 0.899 | 0.900 | 0.965 | 0.965 | 0.847 | 0.911 | 0.929 |
+| F-MNIST | 0.712 | 0.725 | 0.734 | 0.725 | 0.728 | 0.755 | 0.721 | 0.710 | 0.727 |
+| *Unimodal Normality* | | | | | | | | | |
+| MI-F | 0.607 | 0.670 | 0.705 | 0.591 | 0.572 | 0.678 | 0.632 | 0.692 | 0.704 |
+| MI-V | 0.897 | 0.898 | 0.907 | 0.845 | 0.833 | 0.903 | 0.895 | 0.891 | 0.904 |
+| EOPT | 0.610 | 0.607 | 0.625 | 0.675 | 0.634 | 0.596 | 0.606 | 0.603 | 0.634 |
+| NASA | 0.719 | 0.702 | 0.692 | 0.738 | 0.714 | 0.719 | 0.712 | 0.695 | 0.688 |
+| RARM | 0.687 | 0.674 | 0.686 | 0.587 | 0.565 | 0.648 | 0.664 | 0.675 | 0.675 |
+| STL | 0.870 | 0.856 | 0.830 | 0.850 | 0.824 | 0.792 | 0.875 | 0.861 | 0.830 |
+| OTTO | 0.829 | 0.829 | 0.832 | 0.831 | 0.835 | 0.833 | 0.828 | 0.828 | 0.831 |
+| SNSR | 0.979 | 0.982 | 0.990 | 0.975 | 0.979 | 0.964 | 0.979 | 0.983 | 0.991 |
+| MNIST | 0.972 | 0.980 | 0.979 | 0.980 | 0.987 | 0.989 | 0.972 | 0.966 | 0.977 |
+| F-MNIST | 0.924 | 0.928 | 0.933 | 0.926 | 0.926 | 0.942 | 0.922 | 0.905 | 0.928 |
+
+Table 3: AUROC on benchmark datasets.
+
+| Dataset | OCNN | GPND | DSVDD | GT | \(NAP_{AE}\) | \(NAP_{VAE}\) | \(NAP_{AAE}\) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| *Multimodal Normality (Novelty Ratio: 35%)* | | | | | | | |
+| MNIST | 0.600 | 0.501 | 0.622 | 0.893 | 0.899 | 0.965 | 0.929 |
+| F-MNIST | 0.609 | 0.691 | 0.610 | 0.725 | 0.734 | 0.755 | 0.727 |
+| *Unimodal Normality (Novelty Ratio: 50%)* | | | | | | | |
+| MNIST | 0.927 | 0.971 | 0.922 | 0.974 | 0.979 | 0.989 | 0.977 |
+| F-MNIST | 0.915 | 0.917 | 0.923 | 0.935 | 0.933 | 0.942 | 0.928 |
+
+# 5.4.2 COMPARISON WITH COMPETITORS
+
+Table 3 summarizes the comparison of RAPP to recent novelty detection methods. As in Table 2, AUROC values are calculated by averaging results from 10 cases with different anomaly class assignments for both datasets.
+
+Except for the unimodal F-MNIST setup, NAP outperforms all competing methods regardless of the base model choice. Notably, NAP combined with VAE shows the best performance in all cases, even surpassing GT, which relies on image-specific data transformations.
+
+# 6 CONCLUSION
+
+In this paper, we propose a novelty detection method which utilizes hidden reconstructions along a projection pathway of deep autoencoders. To this end, we extend the concept of reconstruction in the input space to hidden spaces found by an autoencoder and present a tractable way to compute the hidden reconstructions, which requires neither modifying nor retraining the autoencoder. Our experimental results show that the proposed method outperforms other competing methods in terms of AUROC for diverse datasets including popular benchmarks.
+
+# REFERENCES
+
+Jinwon An and Sungzoon Cho. Variational autoencoder based anomaly detection using reconstruction probability. SNUDM-TR, Mar 2015.
+Wissam Aoudi, Mikel Iturbe, and Magnus Almgren. Truth will out: Departure-based process-level detection of stealthy attacks on control systems. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2018.
+Christoph Baur, Benedikt Wiestler, Shadi Albarqouni, and Nassir Navab. Deep autoencoding models for unsupervised anomaly segmentation in brain mr images. MICCAI, 2018.
+Alessandro Beghi, Luca Cecchinato, C Corazzol, Mirco Rampazzo, F Simmini, and Gian Antonio Susto. A one-class svm based tool for machine learning novelty detection in HVAC chiller systems. IFAC Proceedings Volumes (IFAC-PapersOnline), 19:1953-1958, 2014.
+Raghavendra Chalapathy, Aditya Krishna Menon, and Sanjay Chawla. Anomaly detection using one-class neural networks. arXiv preprint arXiv:1802.06360, 2018.
+EOPT. https://www.kaggle.com/init-owl/high-storage-system-data-for-energy-optimization.
+F-MNIST. https://github.com/zalandoresearch/fashion-mnist.
+Izhak Golan and Ran El-Yaniv. Deep anomaly detection using geometric transformations. NIPS, 2018.
+Ilyass Haloui, Jayant Sen Gupta, and Vincent Feuillard. Anomaly detection with Wasserstein GAN. arXiv e-prints, art. arXiv:1812.02463, Dec 2018.
+Heiko Hoffmann. Kernel pca for novelty detection. Pattern recognition, 40(3):863-874, 2007.
+Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, volume 37, pp. 448-456, 2015.
+Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
+Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2014.
+Tomás Kuzin and Tomás Borovicka. Early failure detection for predictive maintenance of sensor parts. In ITAT, 2016.
+Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010. URL http://yann.lecun.com/exdb/mnist/.
+D. Lee. Anomaly detection in multivariate non-stationary time series for automatic dbms diagnosis. In ICMLA, 2017.
+Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. In International Conference on Learning Representations, 2016. URL http://arxiv.org/abs/1511.05644.
+MI. https://www.kaggle.com/shasun/tool-wear-detection-in-cnc-mill.
+MNIST. http://yann.lecun.com/exdb/mnist/.
+NASA. https://www.kaggle.com/shrutimehta/nasa-asteroids-classification.
+OTTO. https://www.kaggle.com/c/otto-group-product-classification-challenge.
+Anuradha Vijay Pawar, Prakash N. Kalavadekar, and Ms. Swapnali N. Tambe. A survey on outlier detection techniques for credit card fraud detection. IOSR Journal of Computer Engineering, 16: 44-48, 2014.
+Pramuditha Perera and Vishal M. Patel. Learning deep features for one-class classification. CoRR, abs/1801.05365, 2018.
+
+Stanislav Pidhorskyi, Ranya Almohsen, and Gianfranco Doretto. Generative probabilistic novelty detection with adversarial autoencoders. In NeurIPS, pp. 6823-6834, 2018.
+Utkarsh Porwal and Smruthi Mukund. Credit card fraud detection in e-commerce: An outlier detection approach. CoRR, abs/1811.02196, 2018.
+RARM. https://github.com/narayave/mh5_anomaly_detector.
+M. Ravanbakhsh, M. Nabi, E. Sangineto, L. Marcenaro, C. Regazzoni, and N. Sebe. Abnormal event detection in videos using generative adversarial nets. In 2017 IEEE International Conference on Image Processing (ICIP), 2017.
+Lukas Ruff, Robert Vandermeulen, Nico Goernitz, Lucas Deecke, Shoaib Ahmed Siddiqui, Alexander Binder, Emmanuel Müller, and Marius Kloft. Deep one-class classification. In ICML, 2018.
+Mohammad Sabokrou, Mohammad Khalooei, Mahmood Fathy, and Ehsan Adeli. Adversarially learned one-class classifier for novelty detection. In CVPR, pp. 3379-3388, 2018.
+Mayu Sakurada and Takehisa Yairi. Anomaly detection using autoencoders with nonlinear dimensionality reduction. In MLSDA, 2014.
+Thomas Schlegl, Philipp Seeböck, Sebastian M. Waldstein, Ursula Schmidt-Erfurth, and Georg Langs. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In IPMI, 2017.
+SNSR. https://archive.ics.uci.edu/ml/datasets/dataset+for+sensorless+drive+diagnosis.
+STL. https://www.kaggle.com/uciml/faulty-steel-plates.
+Aleksei Vasilev, Vladimir Golkov, Marc Meissner, Ilona Lipp, Eleonora Sgarlata, Valentina Tomassini, Derek K. Jones, and Daniel Cremers. q-Space Novelty Detection with Variational Autoencoders. arXiv, Jun 2018.
+Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. CoRR, abs/1708.07747, 2017. URL http://arxiv.org/abs/1708.07747.
+Bing Xu, Naiyan Wang, Tianqi Chen, and Mu Li. Empirical evaluation of rectified activations in convolutional network. arXiv, abs/1505.00853, 2015a.
+Dan Xu, Elisa Ricci, Yan Yan, Jingkuan Song, and Nicu Sebe. Learning deep representations of appearance and motion for anomalous event detection. In BMVC, 2015b.
+Houssam Zenati, Chuan Sheng Foo, Bruno Lecouat, Gaurav Manek, and Vijay Ramaseshan Chandrasekhar. Efficient GAN-based anomaly detection. arXiv, abs/1802.06222, 2018.
+Pushe Zhao, Masaru Kurihara, Junichi Tanaka, Tojiro Noda, Shigeyoshi Chikuma, and Tadashi Suzuki. Advanced correlation-based anomaly detection method for predictive maintenance. In ICPHM, 2017.
+
+# A SVD COMPUTATION TIME
+
+We compare the running time of training an autoencoder against that of computing the SVD for NAP. We choose two packages for the SVD computation: PyTorch SVD and fbpca (https://fbpca.readthedocs.io/en/latest/).
+
+Since the time complexity of SVD is linear in the number of data samples, we mainly focus on the performance of SVD while varying the number of columns of the input matrix to which SVD is applied. To obtain various column sizes, we vary the depth and bottleneck size of the autoencoders.
+
+The result is shown below. Notably, PyTorch SVD utilizing the GPU is at least $47\times$ faster than training the neural networks. Even fbpca, running only on the CPU, achieves at least a $2.4\times$ speedup.
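The comparison can be reproduced in outline as below. This sketch times NumPy's dense SVD as a stand-in (the paper uses PyTorch SVD and fbpca); the matrix sizes are illustrative, not the paper's setups.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# Matrix of hidden activations: rows = data samples, columns = concatenated
# hidden-layer dimensions (varied in the paper via depth / bottleneck size).
for n_cols in (100, 300, 500):
    X = rng.standard_normal((5000, n_cols))
    t0 = time.perf_counter()
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    elapsed = time.perf_counter() - t0
    # sanity check: the SVD factors reconstruct X
    err = np.linalg.norm(X - (U * s) @ Vt) / np.linalg.norm(X)
    print(f"{n_cols:4d} columns: {elapsed:.3f}s, relative error {err:.2e}")
```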
+
+The detailed setups to obtain the matrices for the experiment are given in the table below:
+
+| Setup | MNIST Depth | MNIST Bottleneck Size | OTTO Depth | OTTO Bottleneck Size | SNSR Depth | SNSR Bottleneck Size |
+| --- | --- | --- | --- | --- | --- | --- |
+| 1 | 20 | 100 | 20 | 40 | 20 | 90 |
+| 2 | 18 | 100 | 18 | 40 | 18 | 90 |
+| 3 | 16 | 100 | 16 | 40 | 16 | 90 |
+| 4 | 14 | 100 | 14 | 40 | 14 | 90 |
+| 5 | 12 | 100 | 12 | 40 | 12 | 90 |
+| 6 | 10 | 100 | 10 | 40 | 10 | 90 |
+| 7 | 8 | 100 | 8 | 40 | 8 | 90 |
+| 8 | 6 | 100 | 6 | 40 | 6 | 90 |
+| 9 | 4 | 100 | 4 | 40 | 4 | 90 |
+| 10 | 2 | 100 | 2 | 40 | 2 | 90 |
+| 11 | 2 | 80 | 2 | 30 | 2 | 70 |
+| 12 | 2 | 60 | 2 | 20 | 2 | 50 |
+| 13 | 2 | 40 | 2 | 10 | 2 | 30 |
+| 14 | 2 | 20 | | | 2 | 10 |
+
+# B STANDARD DEVIATIONS OF EXPERIMENTAL RESULTS
+
+We provide the standard deviations of the results in Table 2. Given a dataset and a setup, multimodal or unimodal, AUROC values are first averaged over multiple cases with different novelty class assignments; then, the standard deviations are calculated for those averaged values over multiple trials. The number of cases and the number of trials depend on the dataset; we refer to Section 5 for details.
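The two-stage computation can be sketched as follows; the AUROC values are toy numbers, not the paper's results.

```python
import numpy as np

# AUROC values: one row per trial, one column per novelty-class assignment.
auc = np.array([
    [0.71, 0.69, 0.73, 0.70],
    [0.70, 0.70, 0.72, 0.72],
    [0.72, 0.68, 0.74, 0.70],
])

per_trial_mean = auc.mean(axis=1)       # first, average over assignments
reported_mean = per_trial_mean.mean()   # value reported in Table 2
reported_std = per_trial_mean.std()     # value reported in this appendix
```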
+
+| Dataset | AE Recon | AE SAP | AE NAP | VAE Recon | VAE SAP | VAE NAP | AAE Recon | AAE SAP | AAE NAP |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| **Multimodal Normality** | | | | | | | | | |
+| STL | 0.723(0.005) | 0.703(0.007) | 0.711(0.006) | 0.700(0.012) | 0.675(0.018) | 0.711(0.012) | 0.726(0.009) | 0.711(0.011) | 0.724(0.016) |
+| OTTO | 0.617(0.002) | 0.616(0.001) | 0.665(0.004) | 0.630(0.003) | 0.631(0.005) | 0.649(0.003) | 0.617(0.001) | 0.618(0.003) | 0.665(0.004) |
+| SNSR | 0.613(0.002) | 0.611(0.003) | 0.614(0.004) | 0.608(0.001) | 0.620(0.002) | 0.658(0.003) | 0.612(0.005) | 0.608(0.005) | 0.609(0.005) |
+| MNIST | 0.825(0.001) | 0.881(0.007) | 0.899(0.008) | 0.900(0.004) | 0.965(0.005) | 0.965(0.003) | 0.847(0.005) | 0.911(0.006) | 0.929(0.005) |
+| F-MNIST | 0.712(0.001) | 0.725(0.004) | 0.734(0.004) | 0.725(0.003) | 0.728(0.002) | 0.755(0.006) | 0.721(0.001) | 0.710(0.005) | 0.727(0.005) |
+| **Unimodal Normality** | | | | | | | | | |
+| MI-F | 0.607(0.023) | 0.670(0.041) | 0.705(0.018) | 0.591(0.013) | 0.572(0.019) | 0.678(0.047) | 0.632(0.028) | 0.692(0.042) | 0.704(0.024) |
+| MI-V | 0.897(0.010) | 0.898(0.005) | 0.907(0.005) | 0.845(0.021) | 0.833(0.032) | 0.903(0.010) | 0.895(0.005) | 0.891(0.006) | 0.904(0.005) |
+| EOPT | 0.610(0.015) | 0.607(0.014) | 0.625(0.008) | 0.675(0.017) | 0.634(0.007) | 0.596(0.002) | 0.606(0.011) | 0.603(0.013) | 0.634(0.010) |
+| NASA | 0.719(0.009) | 0.702(0.015) | 0.692(0.016) | 0.738(0.009) | 0.714(0.015) | 0.719(0.009) | 0.712(0.016) | 0.695(0.027) | 0.688(0.024) |
+| RARM | 0.687(0.038) | 0.674(0.031) | 0.686(0.030) | 0.587(0.021) | 0.565(0.029) | 0.648(0.022) | 0.664(0.015) | 0.675(0.023) | 0.675(0.040) |
+| STL | 0.870(0.007) | 0.856(0.007) | 0.830(0.007) | 0.850(0.006) | 0.824(0.003) | 0.792(0.034) | 0.875(0.003) | 0.861(0.004) | 0.830(0.008) |
+| OTTO | 0.829(0.001) | 0.829(0.001) | 0.832(0.001) | 0.831(0.002) | 0.835(0.003) | 0.833(0.004) | 0.828(0.002) | 0.828(0.001) | 0.831(0.002) |
+| SNSR | 0.979(0.001) | 0.982(0.001) | 0.990(0.001) | 0.975(0.003) | 0.979(0.002) | 0.964(0.043) | 0.979(0.001) | 0.983(0.002) | 0.997(0.001) |
+| MNIST | 0.972(0.001) | 0.980(0.001) | 0.979(0.000) | 0.980(0.002) | 0.987(0.002) | 0.989(0.001) | 0.972(0.001) | 0.966(0.004) | 0.977(0.001) |
+| F-MNIST | 0.924(0.002) | 0.928(0.002) | 0.933(0.002) | 0.926(0.001) | 0.926(0.001) | 0.942(0.000) | 0.922(0.001) | 0.905(0.002) | 0.928(0.002) |
+
+# C COMPARING RAPP TO BASELINES OTHER THAN NEURAL NETWORKS
+
+The table below shows the result of comparing RAPP and baselines not based on neural networks. Note that the baselines are run with hyperparameters or hyperparameter search spaces provided in Ruff et al. (2018), while NAP is not further tuned.
+
+| Dataset | OCSVM | ISOF | PCA | kPCA | \(NAP_{AE}\) | \(NAP_{VAE}\) | \(NAP_{AAE}\) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| **Multimodal Normality** | | | | | | | |
+| STL | 0.608 | 0.635 | 0.700 | 0.693 | 0.711 | 0.711 | 0.724 |
+| OTTO | 0.628 | 0.548 | 0.572 | 0.644 | 0.665 | 0.649 | 0.665 |
+| SNSR | 0.524 | 0.526 | 0.536 | 0.580 | 0.614 | 0.658 | 0.606 |
+| MNIST | 0.566 | 0.562 | 0.698 | 0.701 | 0.899 | 0.965 | 0.929 |
+| F-MNIST | 0.571 | 0.650 | 0.670 | 0.676 | 0.734 | 0.755 | 0.727 |
+| **Unimodal Normality** | | | | | | | |
+| MI-F | 0.777 | 0.826 | 0.546 | 0.838 | 0.705 | 0.678 | 0.704 |
+| MI-V | 0.840 | 0.839 | 0.874 | 0.888 | 0.907 | 0.903 | 0.904 |
+| EOPT | 0.597 | 0.595 | 0.552 | 0.693 | 0.625 | 0.596 | 0.634 |
+| NASA | 0.573 | 0.664 | 0.765 | 0.699 | 0.692 | 0.719 | 0.688 |
+| RARM | 0.749 | 0.752 | 0.587 | 0.734 | 0.734 | 0.648 | 0.675 |
+| STL | 0.879 | 0.871 | 0.883 | 0.880 | 0.880 | 0.792 | 0.830 |
+| OTTO | 0.821 | 0.699 | 0.799 | 0.823 | 0.823 | 0.833 | 0.831 |
+| SNSR | 0.959 | 0.893 | 0.894 | 0.970 | 0.970 | 0.964 | 0.991 |
+| MNIST | 0.894 | 0.919 | 0.903 | 0.906 | 0.979 | 0.989 | 0.977 |
+| F-MNIST | 0.884 | 0.854 | 0.950 | 0.943 | 0.933 | 0.942 | 0.928 |
+
+# D NAP PERFORMANCE OVER INCREASING INVOLVED HIDDEN LAYERS
+
+We investigate the performance of NAP while increasing the number of hidden layers involved in the NAP computation. Specifically, we consider two orders of addition: 1) adding hidden layers one by one starting from the input layer (forward addition), and 2) adding hidden layers one by one starting from the bottleneck layer (backward addition). Experimental results on three datasets are shown below. In most cases, involving more hidden layers tends to yield higher performance. The values are obtained from a single trial without averaging over multiple trials.
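The two addition orders amount to nested subsets of the hidden layers; a minimal sketch (the layer names are illustrative):

```python
# Hidden layers ordered from the input side to the bottleneck.
layers = ["h1", "h2", "h3", "h4", "h5"]

# Forward addition: grow the set from the input layer inward.
forward = [layers[:k] for k in range(1, len(layers) + 1)]
# Backward addition: grow the set from the bottleneck outward.
backward = [layers[-k:] for k in range(1, len(layers) + 1)]

# The final step of either order involves all hidden layers.
assert forward[-1] == backward[-1] == layers
```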
+
+
+(a) MNIST (b) OTTO (c) STEEL
\ No newline at end of file
diff --git a/rappnoveltydetectionwithreconstructionalongprojectionpathway/images.zip b/rappnoveltydetectionwithreconstructionalongprojectionpathway/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..27dac6f7d3af6f6a079f54bdc8d96ae4876b5931
--- /dev/null
+++ b/rappnoveltydetectionwithreconstructionalongprojectionpathway/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a678cd1413db398568b75274e0f5f0cf11b5b9a3913bf7a0401f7e7042e58f2
+size 894138
diff --git a/rappnoveltydetectionwithreconstructionalongprojectionpathway/layout.json b/rappnoveltydetectionwithreconstructionalongprojectionpathway/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..4259eb5cc89226a0aebedf519240c852d5622774
--- /dev/null
+++ b/rappnoveltydetectionwithreconstructionalongprojectionpathway/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3931bf07cb91f0520d42f2934ded3a4c033743c3fee0ebd11fc57ba762b57563
+size 452571
diff --git a/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/93e4a613-69e7-4784-ab4f-24953f8580f8_content_list.json b/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/93e4a613-69e7-4784-ab4f-24953f8580f8_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..077f34a856058988bb86076d3b78a543d7fc118f
--- /dev/null
+++ b/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/93e4a613-69e7-4784-ab4f-24953f8580f8_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:65eb0c78f72f4b55dc2bc61c5f6f16aab0cf5e2cdcc038b5ac352614940a7473
+size 83993
diff --git a/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/93e4a613-69e7-4784-ab4f-24953f8580f8_model.json b/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/93e4a613-69e7-4784-ab4f-24953f8580f8_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..62ebc5eb8c2d3f949906f0a1cbc17f12b1bd4d95
--- /dev/null
+++ b/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/93e4a613-69e7-4784-ab4f-24953f8580f8_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d8daa395844de754245f17771b7125d8e3d29ebaa7aaee0830283701b78cb10
+size 102946
diff --git a/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/93e4a613-69e7-4784-ab4f-24953f8580f8_origin.pdf b/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/93e4a613-69e7-4784-ab4f-24953f8580f8_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..07f4bf848c4fd8a3a58b613aa9ee39eb5fa1c208
--- /dev/null
+++ b/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/93e4a613-69e7-4784-ab4f-24953f8580f8_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:81b79c420d1524a3b94502b8c9b7abe1d9ff326c7582917066a903481ab5ac0b
+size 7943670
diff --git a/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/full.md b/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..91b58ae3e4226459c562cb423da1e48dc1bda552
--- /dev/null
+++ b/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/full.md
@@ -0,0 +1,343 @@
+# RGBD-GAN: UNSUPERVISED 3D REPRESENTATION LEARNING FROM NATURAL IMAGE DATASETS VIA RGBD IMAGE SYNTHESIS
+
+Atsuhiro Noguchi1 & Tatsuya Harada1,2
+
+1The University of Tokyo, 2RIKEN {noguchi, harada}@mi.t.u-tokyo.ac.jp
+
+# ABSTRACT
+
+Understanding three-dimensional (3D) geometries from two-dimensional (2D) images without any labeled information is promising for understanding the real world without incurring annotation cost. We herein propose a novel generative model, RGBD-GAN, which achieves unsupervised 3D representation learning from 2D images. The proposed method enables camera parameter-conditional image generation and depth image generation without any 3D annotations, such as camera poses or depth. We use an explicit 3D consistency loss for two RGBD images generated from different camera parameters, in addition to the ordinary GAN objective. The loss is simple yet effective for conditioning any type of image generator, such as DCGAN and StyleGAN, on camera parameters. Through experiments, we demonstrate that the proposed method can learn 3D representations from 2D images with various generator architectures.
+
+# 1 INTRODUCTION
+
+
+Figure 1: Generated face images from PGGAN and car images from StyleGAN. Images in even rows are generated depth images with colormaps. Though the models are trained on unlabeled RGB image datasets, they achieve RGBD image generation as well as explicit control over the camera poses.
+
+Understanding three-dimensional (3D) geometries from two-dimensional (2D) images is important in computer vision. An image of real-world objects comprises two independent components: object identity and camera pose. Object identity represents the shape and texture of an object, and camera pose comprises camera rotation, translation, and intrinsics such as focal length. Learning the representation of these two components independently facilitates understanding the real 3D world. For example, camera pose invariant feature extraction can facilitate object identification problems, and camera pose variant feature representations are beneficial for the pose estimation of objects. These tasks are easy for humans but difficult for machines.
+
+Recently, 3D representation learning through 3D object generation has been actively researched. Many techniques are available for learning the relationship between 2D images and 3D objects. Typically used 3D representations are voxel grids (Yan et al., 2016; Wu et al., 2016; Choy et al., 2016; Henzler et al., 2019), point clouds (Fan et al., 2017), and meshes (Rezende et al., 2016; Kato et al., 2018; Wang et al., 2018; Kato & Harada, 2019). For most of the methods, 3D annotations such
+
+as ground truth 3D models (Choy et al., 2016; Fan et al., 2017; Wang et al., 2018), multiple-view images (Yan et al., 2016), or silhouette annotations of objects (Yan et al., 2016; Kato et al., 2018; Kato & Harada, 2019) must be used to reconstruct 3D shape from 2D images. Although these methods achieve 3D object generation by controlling the object identity and camera poses independently, the construction of such datasets requires considerable time and effort. Therefore, a method that can learn 3D representations without any labeled information must be developed. Though some research tackles this problem, their performance is limited when applied to natural images. Rezende et al. (2016) proposed unsupervised single-view 3D mesh reconstruction, but it can only be applied to a primitive dataset. Henzler et al. (2019) recently proposed unsupervised single-view voxel reconstruction on natural images, but the resolution is limited due to the memory constraint.
+
+To realize unsupervised 3D object generation, we employ a different approach, i.e., RGB-Depth (RGBD) image generation. RGBD images comprise the color and depth information of each pixel. The proposed RGBD image generation can be achieved through a simple extension of recently developed image generation models. We propose RGBD Generative Adversarial Networks (RGBD-GAN), which learns to generate RGBD images from natural RGB image datasets without the need for any annotations such as camera poses, depth, or multiple viewpoints of single objects. The proposed model uses an explicit 3D consistency loss for the generated images; the model generates two RGBD images with different camera parameters and learns them to be consistent with the 3D world. This training pipeline is simple yet effective for generating depth images without supervision and for disentangling a camera pose from the image content. Because the proposed model does not restrict the generator architecture, we can condition any type of image generator (e.g., PGGAN (Karras et al., 2018), StyleGAN (Karras et al., 2019)) on camera parameters. Figure 1 shows the generation results from the proposed models. Our model can generate RGBD images from arbitrary viewpoints without any supervision; therefore, we regard this as "unsupervised 3D representation learning," though a single output cannot represent a full 3D scene.
+
+Our contributions are as follows.
+
+- We propose a new image generation technique, i.e., RGBD image generation, which can be achieved from RGB images without any labeled information such as annotations of camera parameters, depth, or multiple viewpoints for single objects.
+- The proposed method can disentangle camera parameters from the image content without any supervision.
+- Our method can be used to condition any type of generator on camera parameters because the proposed loss function does not restrict the generator architecture.
+
+# 2 RELATED WORKS
+
+Recently, image generation models have shown significant progress, especially generative adversarial networks (GANs) (Goodfellow et al., 2014). A GAN trains a discriminator that estimates the distribution distance between generated and real images, and a generator that minimizes the estimated distance. As such, the distribution of training images can be estimated precisely without supervision. Recent interest in generative models pertains to their training stability (Arjovsky et al., 2017; Gulrajani et al., 2017; Miyato et al., 2018) and improvements in quality and diversity (Karras et al., 2018; Brock et al., 2019; Karras et al., 2019). Furthermore, methods to learn 3D morphable generative models from 2D images have been proposed (Tran et al., 2017; Shen et al., 2018; Sitzmann et al., 2019; Nguyen-Phuoc et al., 2019). Tran et al. (2017) and Shen et al. (2018) learned to generate images by controlling camera poses using camera pose annotations or images captured from multiple viewpoints. Although these methods can successfully control an object pose, their scalability is limited owing to the annotation costs. Nguyen-Phuoc et al. (2019) recently proposed a method to disentangle object identity and camera poses without any annotations. This method uses latent 3D features and learns to generate images from the feature projected from the 3D feature with rigid-body transformations. That is, this method uses strong inductive biases regarding the 3D world to learn the relationship between camera poses and images. These image generation models cannot output explicit 3D representations, thus limiting the comprehensibility of the output. Sitzmann et al. (2019) achieved RGB and depth image synthesis from 2D image datasets by unsupervisedly learning an occlusion-aware projection from a 3D latent feature to 2D. The model, however, requires multiple viewpoints for a single object and camera pose annotations, thus limiting the scalability.
+
+Similarly, Rajeswar et al. (2019) proposed a depth image generator trained on unlabeled RGB images. The model can control the pose of generated images without supervision. However, it only works on a synthetic dataset, where the surface normal of each pixel is easily estimated from the color and location.
+
+# 3 METHOD
+
+In this study, unsupervised 3D representation learning is achieved via RGBD image synthesis. In this section, we first describe the motivation for using the RGBD representation in Section 3.1 and then provide the details of our method in Section 3.2.
+
+# 3.1 MOTIVATION
+
+A goal of this research is to construct a model that can generate images $I$ conditioned on camera parameters $c$ . However, it is impossible to perfectly model the relationship between $c$ and $I$ without any annotations. Therefore, we alleviate the problem by considering optical flow consistency. Although optical flow is typically used for two different frames in a movie, we used it for images captured with different camera parameters. Optical flow consistency is expressed as the pixel movement between two images.
+
+$$
+I(x, y, c) = I(x + \Delta x, y + \Delta y, c + \Delta c) \quad \text{for} \quad \forall x, y, c \tag{1}
+$$
+
+Here, $x$ and $y$ are pixel coordinates in the image. Considering a small $\Delta c$ , this equation can be written as the following partial differential equation.
+
+$$
+\frac{\partial I}{\partial x} \frac{\mathrm{d} x}{\mathrm{d} c} + \frac{\partial I}{\partial y} \frac{\mathrm{d} y}{\mathrm{d} c} + \frac{\partial I}{\partial c} = 0 \quad \text{for} \quad \forall x, y, c \tag{2}
+$$
+
+$\frac{\partial I}{\partial x}$ and $\frac{\partial I}{\partial y}$ can be estimated using ordinary image generation models. Therefore, if $\frac{\mathrm{d}x}{\mathrm{d}c}$ and $\frac{\mathrm{d}y}{\mathrm{d}c}$ are known, then $\frac{\partial I}{\partial c}$ can be calculated. This term can be helpful for conditioning the generator on the camera parameters when optimizing the GAN objective. As $\frac{\mathrm{d}x}{\mathrm{d}c}$ and $\frac{\mathrm{d}y}{\mathrm{d}c}$ remain unknown, we consider a geometric constraint on a homogeneous coordinate. Let $\mathcal{D}$ be the depth, $p = (x,y,1)$ the homogeneous coordinate of the pixel, $p_{world}$ the world coordinate of the pixel, $R$ the rotation matrix, $t$ the translation vector, and $K$ the camera intrinsics. The camera parameters $c$ are represented herein as $\{K,R,t\}$ . $p_{world}$ is constant to $c$ . Then, we can calculate the position on an image and the depth from the world coordinate $p_{world}$ .
+
+$$
+\mathcal{D} p = K R p_{\mathrm{world}} + K t \tag{3}
+$$
+
+This enables calculating $\frac{\mathrm{d}x}{\mathrm{d}c}$ and $\frac{\mathrm{d}y}{\mathrm{d}c}$ by estimating the depth $\mathcal{D}$. Hence, we use the RGBD representation for camera parameter conditioning. The depth image $\mathcal{D}$ also obeys an optical flow consistency under camera parameter changes, just as an RGB image does, which helps estimate $\mathcal{D}$.
+
+$$
+\mathcal{D}(x, y, c) = \mathcal{D}(x + \Delta x, y + \Delta y, c + \Delta c) + \Delta \mathcal{D} \quad \text{for} \quad \forall x, y, c \tag{4}
+$$
+
+Here, $\Delta \mathcal{D}$ can be calculated from Equation 3.
+
+Briefly, training a GAN with the constraints in Equations 1, 3, and 4 is beneficial for learning $\frac{\partial I}{\partial c}$, which benefits camera parameter-conditional synthesis. Additionally, learning a camera parameter-conditional image generation model facilitates learning depth distributions via the constraints from Equations 1 and 3. The details of each module are explained below.
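The geometric relation in Equation 3 can be sketched as below. This is a minimal NumPy illustration, assuming the reference camera has identity pose; the function name and interface are ours, not the paper's code.

```python
import numpy as np

def project(depth, K, R, t):
    """Re-project the pixels of a depth map into a camera with pose (R, t)
    and intrinsics K, following D * p = K R p_world + K t (Eq. 3)."""
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # homogeneous pixel coordinates p = (x, y, 1), one column per pixel
    p = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(float)
    # back-project each pixel to world coordinates (reference camera pose = I, 0)
    p_world = np.linalg.inv(K) @ (p * depth.reshape(1, -1))
    # project into the target camera
    q = K @ (R @ p_world + t.reshape(3, 1))
    new_depth = q[2]
    new_xy = q[:2] / new_depth
    return new_xy.reshape(2, H, W), new_depth.reshape(H, W)
```

With the identity transformation (R = I, t = 0), each pixel maps to itself with unchanged depth, which is a useful sanity check.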
+
+# 3.2 PROPOSED PIPELINE
+
+The proposed model comprises three components: an RGBD image generator conditioned on camera parameters, RGB image discriminator for adversarial training, and self-supervised RGBD consistency loss. The overview of the pipeline is shown in Figure 2.
+
+
+Figure 2: Proposed pipeline. We train the RGBD image generator with the self-supervised 3D consistency loss and adversarial loss for RGB channels. The model generates two RGBD images with different camera parameters and learns them to be consistent with the 3D world.
+
+# 3.2.1 RGBD IMAGE GENERATOR
+
+Considering the success in image generation, the generator of a GAN can estimate complicated distributions. Therefore, we use ordinary RGB image generators such as DCGAN (Radford et al., 2016) or StyleGAN for RGBD synthesis. RGBD synthesis is achieved by adding one channel to the final layer of the RGB generator. Moreover, as described in the experimental section, we can use image generation models with 3D latent representations, such as HoloGAN (Nguyen-Phuoc et al., 2019) or DeepVoxels (Sitzmann et al., 2019), which model the 3D world more naturally.
+
+In the proposed pipeline, the generator is conditioned on camera parameters and trained with gradient descent to minimize the self-supervised consistency loss and the adversarial loss described below. Because no constraint exists for the generator architecture, any type of generator architecture can be used for RGBD image synthesis, thus resulting in the high applicability of our method.
+
+# 3.2.2 SELF-SUPERVISED RGBD CONSISTENCY LOSS
+
+In Section 3.1, we showed that the optical flow consistency for RGB and depth can facilitate learning camera parameter-conditional image generation. We approximate the constraints in Equations 1 and 4 by sampling two camera parameters $c_{1}$ and $c_{2}$ and minimizing the difference between both sides of the equations for two generated images conditioned on those camera parameters, where $c = c_{1}$ and $c + \Delta c = c_{2}$. In this study, the camera parameters are sampled from a predefined distribution $p(c)$ according to the dataset, similarly to HoloGAN. The detailed settings are explained in Section 4.2 and the appendix. We limit the maximum value of $\Delta c$ to $30^{\circ}$ to avoid large occlusions.
+
+The objective function for Equation 1 is similar to the loss used in monocular video depth estimation (Zhou et al., 2017). Using Equation 3, we can calculate the 3D position of each pixel when an RGBD image is viewed from a different viewpoint. Therefore, images captured from $c_{1}$ can be rendered by sampling the pixel values from RGBD images captured from $c_{2}$. This operation is typically called the "warp" operation and is implemented with bilinear sampling. We apply this loss to the generated RGBD images conditioned on $c_{1}$ and $c_{2}$. The main difference between depth estimation (Zhou et al., 2017) and the proposed method is that our method optimizes both the RGB and depth image generator, whereas depth estimation only optimizes the depth estimator.
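The bilinear-sampling core of the warp can be sketched as follows. This is a single-channel simplification without occlusion handling; the helper name is ours.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a single-channel image at continuous coordinates (x, y)
    with bilinear interpolation; coordinates are clamped to the image."""
    H, W = img.shape
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    dx = np.clip(x - x0, 0.0, 1.0)
    dy = np.clip(y - y0, 0.0, 1.0)
    return ((1 - dx) * (1 - dy) * img[y0, x0]
            + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0]
            + dx * dy * img[y0 + 1, x0 + 1])
```

Rendering the view from $c_{1}$ then amounts to sampling the image generated under $c_{2}$ at the coordinates obtained by projecting each pixel with the relative transformation $c_{1\rightarrow 2}$.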
+
+Moreover, for the constraints of the depth map in Equation 4, we define a consistency loss on the generated depth maps. This loss, which is similar to the left-right disparity consistency loss in (Godard et al., 2017), attempts to equate the depth map generated from $c_{1}$ to that generated from $c_{2}$ in 3D space. The overall proposed 3D loss function can be written as Equation 5.
+
+$$
+\begin{aligned} \mathcal{L}_{3D} = \mathbb{E}_{z \sim p(z),\, c_{1,2} \sim p(c)} \Big[ & \frac{1}{3HW} \left\| G_{RGB}(z, c_{1}) - \mathrm{warp}\left(G_{RGB}(z, c_{2}), c_{1 \rightarrow 2}\right) \right\|_{1} \\ & + \frac{1}{HW} \left\| \mathrm{projection}\left(G_{D}(z, c_{1}), c_{1 \rightarrow 2}\right) - \mathrm{warp}\left(G_{D}(z, c_{2}), c_{1 \rightarrow 2}\right) \right\|_{1} \Big] \tag{5} \end{aligned}
+$$
+
+Here, $G_{RGB}(z,c)$ and $G_{D}(z,c)$ denote the generated RGB and depth image from a latent vector $z$ and camera parameters $c$ respectively, $W$ and $H$ denote the width and height of the images respectively, and $c_{1\rightarrow 2}$ is a relative transformation matrix from $c_{1}$ to $c_{2}$ . The "projection" operation calculates the depth value viewed from different viewpoints from the input depth map using Equation 3. For simplification, we omit the loss for the inverse transformation $c_{2\rightarrow 1}$ in the equation. The detailed explanations of "warp" and "projection" are provided in the appendix.
+
+This loss function produces inaccurate gradients for the pixels occluded during the transformation $c_{1\rightarrow 2}$ because it does not account for those regions. Therefore, we used the technique proposed in (Gordon et al., 2019), which propagates gradients only to pixels whose projected depth is smaller than the depth of the other viewpoint's image. This prevents inaccurate gradients at pixels that move behind other pixels during projection.
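The masking rule can be sketched as a simple comparison; the helper below is an illustrative assumption (function name, tolerance, and boolean-mask interface are ours), not the exact implementation of Gordon et al. (2019):

```python
import numpy as np

def gradient_mask(projected_depth, warped_depth, eps=1e-3):
    """Pixels that should receive gradients from the 3D loss.

    A pixel contributes only where the depth projected from view c1 is
    not larger than the depth sampled from view c2, i.e. the surface is
    not hidden behind another surface after the viewpoint change.
    """
    return projected_depth <= warped_depth + eps
```

The mask would multiply the per-pixel loss terms before averaging, so occluded pixels contribute zero gradient.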
+
+Finally, we add a depth constraint term to stabilize the training. The loss above can trivially be minimized to 0 when the generated depth is extremely small. Therefore, we set a minimum limit $\mathcal{D}_{min}$ for the depth value and add a regularization term for depth values smaller than $\mathcal{D}_{min}$.
+
+$$
+\mathcal{L}_{\text{depth}} = \frac{1}{HW} \sum_{x, y} \max\left(0, \mathcal{D}_{\min} - \mathcal{D}(x, y)\right)^{2} \tag{6}
+$$
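Equation 6 is a mean squared hinge penalty and is one line of code; the sketch below is an illustrative NumPy version (the function name is our own):

```python
import numpy as np

def depth_floor_penalty(depth, d_min):
    """Equation 6: mean squared hinge penalty on depth values below d_min,
    discouraging the degenerate solution of near-zero depth."""
    return float(np.mean(np.maximum(0.0, d_min - depth) ** 2))
```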
+
+# 3.2.3 RGB IMAGE DISCRIMINATOR
+
+To enable training an RGBD generator from unlabeled RGB images, we apply the adversarial loss only to the RGB channels of the generated images. Although this loss alone can only improve the realism of the RGB images, it also benefits the learning of depth images and camera parameter conditioning through the optimization of the loss in Equation 5.
+
+Based on the above, the final objective for the generator $\mathcal{L}_G$ is as follows.
+
+$$
+\mathcal{L}_{G} = \mathcal{L}_{GAN} + \lambda_{3D} \mathcal{L}_{3D} + \lambda_{\text{depth}} \mathcal{L}_{\text{depth}} \tag{7}
+$$
+
+Here, $\mathcal{L}_{GAN}$ is an adversarial loss function, and $\lambda_{3D}$ and $\lambda_{depth}$ are hyperparameters.
+
+# 4 EXPERIMENTS
+
+# 4.1 MODEL ARCHITECTURES
+
+The proposed method does not restrict the generator architecture: any type of image generator can be conditioned on camera parameters. To demonstrate the effectiveness of our method, we tested three types of image generation models: PGGAN, StyleGAN, and DeepVoxels. The model architectures are shown in Figure 3. Because perspective information is difficult to obtain from a single image, the camera intrinsics $K$ are fixed during training in this experiment. We controlled only the azimuth $\theta_{a}$ (left-right rotation) and elevation $\theta_{e}$ (up-down rotation) parameters, following the training setting of HoloGAN. In the following, we provide the details of each model architecture.
+
+PGGAN: PGGAN (Karras et al., 2018) is a state-of-the-art progressively grown convolutional GAN. In this experiment, we conditioned the model on two camera parameters, azimuth and elevation, as follows: each angle is passed through cos and sin functions, and the four outputs are concatenated into a single four-dimensional vector $c_{cyclic}$. Subsequently, $c_{cyclic}$ is concatenated with the latent vector $z$, which is input to the generator. This encoding allows the generated images to change continuously over a full $360^{\circ}$ rotation. We start with a resolution of $32 \times 32$ and increase it progressively to $128 \times 128$.
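The cyclic encoding above can be sketched in a few lines; the function name is an illustrative assumption:

```python
import numpy as np

def cyclic_camera_code(azimuth, elevation):
    """Encode two angles (radians) as the 4-d vector c_cyclic.

    Using cos/sin of each angle makes the conditioning continuous and
    periodic over a full 360-degree rotation.
    """
    return np.array([np.cos(azimuth), np.sin(azimuth),
                     np.cos(elevation), np.sin(elevation)])
```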
+
+StyleGAN: StyleGAN (Karras et al., 2019) is a state-of-the-art GAN that controls the output "style" of each convolutional layer via adaptive instance normalization (AdaIN) (Huang & Belongie, 2017) and acquires hierarchical latent representations. We used $c_{cyclic}$ to control only the styles of the features at resolutions $4 \times 4$ and $8 \times 8$, as styles at low-resolution layers are known to control global features such as the pose and shape of an object. More concretely, we concatenated $c_{cyclic}$ with the output $w$ of the mapping network, and converted the result to $w'$ with a multilayer perceptron (see Figure 3). The image resolution is the same as for the PGGAN-based model.
+
+Deep Voxels: HoloGAN enables the disentanglement of camera parameters by using 3D latent feature representations. This is a more natural model of the 3D world than the two models above because it considers explicit transformations in 3D space. However, HoloGAN cannot consider depth information, as its projection unit only calculates a weighted sum of the features along the depth dimension. Therefore, we used a model inspired by DeepVoxels (Sitzmann et al., 2019) to apply the proposed method. DeepVoxels learns a 3D latent voxel representation of an object from images of that single object taken from multiple viewpoints and can generate novel-view images. It uses an occlusion-aware projection module that learns, without supervision, which voxels along the depth axis are visible from the camera viewpoint. A depth image can therefore be acquired from the model, which makes it well suited to combining with our method. In this experiment, we combined DeepVoxels with a voxel feature generator that produces features from a random latent vector $z$ for the random image generation task; similarly to HoloGAN, the voxel feature generator uses 3D convolutions and AdaIN. DeepVoxels uses an explicit camera model to acquire the features visible in the camera frustum, whereas HoloGAN uses rigid-body transformations; DeepVoxels therefore enables more accurate reasoning about the 3D world.
+
+We compare three settings for the models using 3D feature representations. The first model uses a weighted sum along the depth dimension instead of the occlusion-aware projection module, similarly to HoloGAN. The second uses the occlusion-aware projection module but not the proposed 3D loss. The final model uses DeepVoxels with the proposed 3D loss. In the figures and tables, these are called "HoloGAN-like," "DeepVoxels," and "DeepVoxels + 3D loss," respectively. Note that "HoloGAN-like" is not identical to the original HoloGAN because it is based on DeepVoxels' network structure.
+
+Figure 3: Generator architectures tested. PGGAN-based model (left), StyleGAN-based model (middle), and DeepVoxels-based model (right).
+
+# 4.2 DATASETS
+
+We trained our model using FFHQ (Karras et al., 2019), cars from ShapeNet (Chang et al., 2015), car images (Krause et al., 2013), and the LSUN bedroom dataset (Yu et al., 2015). We used $128 \times 128$ images for the PGGAN and StyleGAN, and $64 \times 64$ images for the models using 3D latent feature representations owing to memory constraints. We used a $35^{\circ}$ elevation angle range for all experiments, a $120^{\circ}$ azimuth range for the FFHQ and bedroom datasets, and a $360^{\circ}$ azimuth range for the ShapeNet car and car image datasets. For the ShapeNet car and car image datasets, we used a new occlusion reasoning algorithm for the DeepVoxels-based models to stabilize the training; the details are explained in the appendix.
+
+# 4.3 RESULTS
+
+Qualitative results The generated results from each model, controlling the camera parameters, on the FFHQ and ShapeNet car datasets are shown in Figures 4, 5, 10, and 11. In the figures, images with colormaps show the generated depth images. The depth is normalized (shifted by the minimum value and divided by the range) and visualized with colormaps. All models using the proposed loss (the top three in each figure) can generate images by controlling the camera parameters while preserving identity. Moreover, the models can generate depth images that do not exist in the training samples. To confirm the depth consistency, we show normal maps and rotated images for the generated results from each model in Figure 6. The white regions of the
+
+
+
+
+Figure 4: Visualization of comparison for the generated images from each model on FFHQ dataset. Images in each row are generated from the same latent vector $z$ but different azimuth or elevation angles. The images with colormaps are the generated depth images.
+Figure 5: Visualization of comparison for the generated images from each model on ShapeNet car images. Images in each row are generated from the same latent vector $z$ but different azimuth or elevation angles. The images with colormaps are the generated depth images.
+
+ShapeNet car dataset are omitted in the point cloud visualization. As shown in the figure, the models can generate the convex shape of a face and the rectangular shape of a car without any annotations about the 3D world. In particular, although PGGAN and StyleGAN use a 2D CNN, consistent rotation and depth estimation are achieved, which is impossible with previous methods. This implies that the proposed method generalizes well across generator architectures. The DeepVoxels-based method with the proposed loss performs well on both the FFHQ and ShapeNet car datasets. It acquires more consistent rotations and generates more consistent depth images than the 2D CNN-based models, owing to its explicit modeling of 3D space, though at a substantial memory and computational cost: in our experiments, although the output images of the DeepVoxels-based models were half the size of those of the StyleGAN-based models, they required 2.5 times more memory and 2.9 times longer computation per iteration.
+
+
+Figure 6: Normal map and point cloud visualization for FFHQ and ShapeNet car datasets. Point clouds in occluded region are not visualized in the figure.
+
+
+
+Table 1: Performance comparison of unconditional generation models and proposed camera parameter-conditional models. We report FID, $V_{depth}$ , and $V_{color}$ (lower is better) for each model.
+
+| METRICS | FID (FFHQ) | $V_{depth}$ (FFHQ) | $V_{color}$ (FFHQ) | FID (ShapeNet Car) | $V_{depth}$ (ShapeNet Car) | $V_{color}$ (ShapeNet Car) |
+| --- | --- | --- | --- | --- | --- | --- |
+| PGGAN | 28.5 | - | - | 16.7 | - | - |
+| PGGAN + 3D loss | 30.3 | 0.00141 | 0.0142 | 14.5 | 0.00043 | 0.0444 |
+| StyleGAN | 20.9 | - | - | 15.5 | - | - |
+| StyleGAN + 3D loss | 24.2 | 0.00077 | 0.0092 | 13.5 | 0.00027 | 0.0469 |
+| HoloGAN-like | 23.4 | - | - | 33.5 | - | - |
+| DeepVoxels | 19.4 | 0.00487 | 0.0153 | 28.6 | 0.00045 | 0.0398 |
+| DeepVoxels + 3D loss | 21.1 | 0.00067 | 0.0072 | 31.2 | 0.00030 | 0.0283 |
+
+For the ShapeNet car dataset in Figure 5, the PGGAN- and StyleGAN-based methods can generate consistently rotated images. However, the PGGAN only acquires a $180^{\circ}$ azimuth change: the model cannot distinguish between the front and back of a car, which is difficult to resolve with unsupervised learning alone. Meanwhile, the StyleGAN-based method learns consistent azimuth and elevation changes, as its training is stabilized by its hierarchical latent representation.
+
+Here, we compare the three 3D-latent-feature-based methods. In our training settings, the "HoloGAN-like" method works well on the FFHQ dataset but cannot acquire consistent $360^{\circ}$ rotation on the ShapeNet car dataset. The DeepVoxels-based methods, on the other hand, can control $360^{\circ}$ object rotation on that dataset, realizing depth map generation without any supervision. This result shows that depth reasoning helps to generate images that respect the 3D geometry of the objects. Moreover, the DeepVoxels-based method with the proposed loss generates more consistent images on the FFHQ dataset. For example, with "DeepVoxels" alone, the depth of the background is smaller than that of the face, so the background pixels hide the foreground when the generated images are rotated, as shown in Figure 6. This does not occur with the proposed loss, as our method considers warped images from different viewpoints, which facilitates learning the 3D world accurately.
+
+Moreover, additional generative results on the car image and bedroom datasets are shown in Figure 7. These datasets are more difficult to train on than the other two because of the imbalanced distribution of camera poses and the diversity of object layouts. Although the models learn reasonably consistent rotation for both datasets, they cannot generate consistent depth maps, which illustrates the difficulty of such unstructured datasets.
+
+
+Figure 7: Generated car and bedroom images changing the azimuth angle (panels, in order: "PGGAN + 3D loss", "StyleGAN + 3D loss", "DeepVoxels + 3D loss", "DeepVoxels", "StyleGAN + 3D loss", "DeepVoxels + 3D loss").
+
+As a result, the proposed method effectively helps various generators to learn both depth information and explicit controls on camera poses. These are achieved without the need for 3D latent representations as required in HoloGAN. Moreover, the proposed method further improves the results for the models using 3D latent representations.
+
+Quantitative evaluation on RGB images We compared the Fréchet inception distance (FID) (Heusel et al., 2017) between models with and without the proposed method for each generator architecture. FID is a standard metric for the quality and diversity of generated RGB images. As shown in Table 1, the proposed camera parameter-conditional models achieve comparable or even better FIDs than the unconditional RGB image generation models for all generator architecture types. Note that this comparison even favors the baselines: the models without the 3D loss were trained solely to minimize the distribution distance between the appearance of real and generated images, whereas the models with the 3D loss must additionally learn camera parameter conditioning. These results show the robustness and effectiveness of our method across generator architectures.
+
+Quantitative evaluation on RGBD images Evaluating the generated RGB and depth in 3D space is difficult, as ground-truth color or depth for generated images cannot be obtained. A possible approach for evaluating RGBD images without ground truth is to calculate the inception score (IS) (Salimans et al., 2016) or FID on the generated images. However, this is inappropriate: IS and FID are estimated in the feature space of a pre-trained CNN and cannot account for 3D geometry. Instead, we evaluated the color and depth consistency across views to quantitatively compare the RGBD images generated by different methods. For point clouds generated from the same latent vector $z$ but different camera parameters $c$, all points should lie on a single surface in 3D space. Therefore, the 3D consistency can be quantified by the variation of the generated RGBD across views. We report the variance of the point clouds for generated images as $V_{depth}$ and $V_{color}$ in Table 1; the details are provided in the appendix. Note that these values cannot be calculated for PGGAN and StyleGAN without the 3D loss, nor for the HoloGAN-like method.
+
+The results show that DeepVoxels with the 3D loss achieved the best scores, owing to the strong inductive bias of its 3D latent representations. Compared with DeepVoxels without the 3D loss, it generated more consistent color and depth, demonstrating the effectiveness of the proposed loss.
+
+# 5 CONCLUSION
+
+We herein proposed an RGBD image synthesis technique for camera parameter-conditional image generation. Although the proposed method does not require any labeled data, it can explicitly control the camera parameters of generated images and generate consistent depth images. The method does not restrict the generator architecture and can be used to condition any type of image generator on camera parameters. As the proposed method learns the relationship between camera parameters and images, future work includes extending it to unsupervised camera pose estimation and unsupervised camera-pose-invariant feature extraction from images.
+
+# 6 ACKNOWLEDGEMENT
+
+This work was partially supported by JST CREST Grant Number JPMJCR1403, and partially supported by JSPS KAKENHI Grant Number JP19H01115. We would like to thank Antonio Tejero de Pablos, Dexuan Zhang, Hiroaki Yamane, James Borg, Mikihiro Tanaka, Sho Maeoki, Shunya Wakasugi, Takayuki Hara, Takuhiro Kaneko, and Toshihiko Matsuura for helpful discussions.
+
+# REFERENCES
+
+Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In ICML, 2017.
+Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. In ICLR, 2019.
+Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015.
+Christopher B Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, and Silvio Savarese. 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In ECCV. Springer, 2016.
+Haoqiang Fan, Hao Su, and Leonidas J Guibas. A point set generation network for 3d object reconstruction from a single image. In CVPR, 2017.
+Clément Godard, Oisin Mac Aodha, and Gabriel J Brostow. Unsupervised monocular depth estimation with left-right consistency. In CVPR, 2017.
+Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative Adversarial Nets. In NIPS, 2014.
+Ariel Gordon, Hanhan Li, Rico Jonschkowski, and Anelia Angelova. Depth from videos in the wild: Unsupervised monocular depth learning from unknown cameras. In ICCV, 2019.
+Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In NIPS, 2017.
+Philipp Henzler, Niloy J. Mitra, and Tobias Ritschel. Escaping plato's cave: 3d shape from adversarial rendering. In ICCV, 2019.
+Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In NIPS, 2017.
+Xun Huang and Serge J Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017.
+Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive Growing of GANs for Improved Quality, Stability, and Variation. In ICLR, 2018.
+Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In CVPR, 2019.
+Hiroharu Kato and Tatsuya Harada. Learning view priors for single-view 3d reconstruction. In CVPR, 2019.
+Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. Neural 3d mesh renderer. In CVPR, 2018.
+
+Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013.
+Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for gans do actually converge? In ICML, 2018.
+Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In ICLR, 2018.
+Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, and Yong-Liang Yang. Hologan: Unsupervised learning of 3d representations from natural images. In ICCV, 2019.
+Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
+Sai Rajeswar, Fahim Mannan, Florian Golemo, David Vazquez, Derek Nowrouzezahrai, and Aaron Courville. Pix2scene: Learning implicit 3d representations from images, 2019. URL https://openreview.net/forum?id=BJeem3C9F7.
+Danilo Jimenez Rezende, SM Ali Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, and Nicolas Heess. Unsupervised learning of 3d structure from images. In NIPS, 2016.
+Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NIPS, 2016.
+Yujun Shen, Ping Luo, Junjie Yan, Xiaogang Wang, and Xiaoou Tang. Faceid-gan: Learning a symmetry three-player gan for identity-preserving face synthesis. In CVPR, 2018.
+Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, and Michael Zollhofer. Deepvoxels: Learning persistent 3d feature embeddings. In CVPR, 2019.
+Luan Tran, Xi Yin, and Xiaoming Liu. Disentangled representation learning gan for pose-invariant face recognition. In CVPR, 2017.
+Nanyang Wang, Yinda Zhang, Zhuwen Li, Yanwei Fu, Wei Liu, and Yu-Gang Jiang. Pixel2mesh: Generating 3d mesh models from single rgb images. In ECCV, 2018.
+Jiajun Wu, Chengkai Zhang, Tianfan Xue, Bill Freeman, and Josh Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In NIPS, 2016.
+Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, and Honglak Lee. Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision. In NIPS, 2016.
+Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015.
+Tinghui Zhou, Matthew Brown, Noah Snavely, and David G Lowe. Unsupervised learning of depth and ego-motion from video. In CVPR, 2017.
+
+# A “WARP” AND “PROJECTION”
+
+Here, we explain the "warp" and "projection" operations in Equation 5.
+
+Warp When we warp $G_{RGB}(z, c_2)$ to $G_{RGB}(z, c_1)$, we first calculate where each pixel $p_{c_1}$ of $G_{RGB}(z, c_1)$ appears when viewed from $c_2$.
+
+$$
+\mathcal{D}_{c_{2}} p_{c_{2}} = K R_{1 \rightarrow 2} K^{-1} \mathcal{D}_{c_{1}} p_{c_{1}} + K t_{1 \rightarrow 2} \tag{8}
+$$
+
+Here, $R_{1\rightarrow 2}$ and $t_{1\rightarrow 2}$ are the relative rotation matrix and translation vector, respectively. We warp $G_{RGB}(z,c_2)$ according to the calculated positions $p_{c_2}$. The operation is implemented with bilinear sampling among the four pixels neighboring each warped coordinate, so that the operation is differentiable. The same operation is performed to warp the generated depth images.
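Equation 8 applied to every pixel of a view can be sketched as follows. This is an illustrative NumPy version (the function name and array conventions are our own assumptions); it returns both the warped pixel positions $p_{c_2}$ and the projected depths $\mathcal{D}_{c_2}$:

```python
import numpy as np

def warped_pixel_coords(depth1, K, R12, t12):
    """Apply Equation 8 to every pixel of view c1.

    depth1: (H, W) depth map of view c1; K: 3x3 intrinsics; R12, t12:
    relative rotation (3x3) and translation (3,). Returns the (H, W, 2)
    pixel positions p_{c2} used by the warp and the (H, W) projected
    depths D_{c2} used by the "projection" operation.
    """
    H, W = depth1.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Homogeneous pixel coordinates p_{c1}.
    p1 = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)
    # Back-project to camera space: K^{-1} D_{c1} p_{c1}.
    cam1 = depth1[..., None] * np.einsum("ij,hwj->hwi", np.linalg.inv(K), p1)
    # Change of viewpoint, then re-project: K (R12 cam1 + t12).
    cam2 = np.einsum("ij,hwj->hwi", R12, cam1) + t12
    proj = np.einsum("ij,hwj->hwi", K, cam2)
    depth2 = proj[..., 2]
    p2 = proj[..., :2] / depth2[..., None]
    return p2, depth2
```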
+
+
+Figure 8: Comparison between softmax weighting (left) and our occlusion reasoning algorithm (right). The values denote the voxel weights, and the orange regions are the voxels visible to the camera. With softmax weighting, the occlusion network must change the voxel weights according to the camera location. Our method accumulates the weights along the camera ray and ignores voxels once the accumulated value exceeds one, so the occlusion network does not need to change the weights according to the camera location.
+
+Projection As the depth values in $\operatorname{warp}(G_D(z, c_2), c_{1 \to 2})$ are sampled from $G_D(z, c_2)$, which is viewed from $c_2$, comparing $G_D(z, c_1)$ with $\operatorname{warp}(G_D(z, c_2), c_{1 \to 2})$ requires projecting the depth value of each pixel in $G_D(z, c_1)$ to the viewpoint $c_2$. This operation is the same as Equation 8, with $\mathcal{D}_{c_2}$ used as the projected depth value. We denote it as "projection".
+
+# B $V_{\text {depth}}$ AND $V_{\text {color}}$
+
+These metrics evaluate the depth and color consistency across views. First, images are generated from the same $z$ but different $c$. Second, a point cloud is plotted for each image in real-world coordinates. Third, the coordinates are converted to polar coordinates around the origin, and the angular coordinates (azimuth and elevation) are quantized. Fourth, for each image, the points are aggregated into the quantized coordinates, which we call "cells"; within each cell, the point with the smallest radial coordinate is kept, which prevents inaccurate variances for cells containing multiple surfaces. Fifth, the variance of the depth and color in each cell, across different $c$, is calculated. Sixth, the variance is averaged across cells and across different $z$. We randomly sampled 100 values of $z$ and, for each $z$, 100 values of $c$ to calculate the metrics.
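The steps above can be sketched for the depth part of the metric. This is a simplified, illustrative NumPy version of $V_{depth}$ under our own assumptions (function name, bin count, and restriction to cells observed in every view are ours; the color variance would be computed analogously per cell):

```python
import numpy as np

def view_consistency_variance(point_clouds, n_bins=32):
    """Average per-cell variance of the surface radius across views.

    point_clouds: one (N, 3) point array per camera parameter c, all
    generated from the same latent z. Angles are quantized into
    n_bins x n_bins cells; per view and cell, only the point closest to
    the origin is kept, and the variance of that radius across views is
    averaged over the cells observed in every view.
    """
    per_view = []
    for pts in point_clouds:
        r = np.linalg.norm(pts, axis=1)
        az = np.arctan2(pts[:, 0], pts[:, 2])
        el = np.arcsin(np.clip(pts[:, 1] / np.maximum(r, 1e-8), -1.0, 1.0))
        ai = np.clip(((az + np.pi) / (2 * np.pi) * n_bins).astype(int), 0, n_bins - 1)
        ei = np.clip(((el + np.pi / 2) / np.pi * n_bins).astype(int), 0, n_bins - 1)
        cells = {}
        for a, e, rad in zip(ai, ei, r):
            if (a, e) not in cells or rad < cells[(a, e)]:
                cells[(a, e)] = rad  # keep the point nearest the origin
        per_view.append(cells)
    common = set(per_view[0]).intersection(*map(set, per_view[1:]))
    if not common:
        return 0.0
    return float(np.mean([np.var([v[c] for v in per_view]) for c in common]))
```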
+
+For face images, the origin was set to $(0, 0, -0.5)$ (behind the head), and $-23^{\circ}$ to $23^{\circ}$ was used for the angular coordinates. For the ShapeNet car dataset, the origin was set to $(0, 0, 0)$, with $-180^{\circ}$ to $180^{\circ}$ for the azimuth angle and $-90^{\circ}$ to $90^{\circ}$ for the elevation angle. The white regions were ignored in the evaluation of the ShapeNet car images.
+
+# C CAMERA PARAMETER DISTRIBUTIONS
+
+To randomly sample two similar camera parameters $c_{1}$ and $c_{2}$, we sampled $c_{1}$ from a uniform distribution and $c_{2}$ from an area near $c_{1}$ within the angle range. We limited the maximum distance between $c_{1}$ and $c_{2}$ to $30^{\circ}$ to avoid large occlusions. Since it is difficult to obtain the camera intrinsics $K$ from a single image, we fixed $K$ and the camera distance during training. We used $K$ as,
+
+$$
+K = \left( \begin{array}{ccc} 2s & 0 & \frac{s}{2} \\ 0 & 2s & \frac{s}{2} \\ 0 & 0 & 1 \end{array} \right), \tag{9}
+$$
+
+where $s$ is the size of the images. We fixed the distance between the camera and the origin of the coordinate system to 1.
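The fixed intrinsics of Equation 9 can be written directly; the helper name below is an illustrative assumption:

```python
import numpy as np

def intrinsics(s):
    """The fixed camera intrinsics of Equation 9 for an s x s image:
    focal length 2s in pixels, principal point at (s/2, s/2)."""
    return np.array([[2.0 * s, 0.0, s / 2.0],
                     [0.0, 2.0 * s, s / 2.0],
                     [0.0, 0.0, 1.0]])
```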
+
+# D IMPLEMENTATION DETAILS FOR DEEPVOXELS
+
+The DeepVoxels-based methods were implemented using the projection layer, occlusion module, and rendering module proposed in (Sitzmann et al., 2019). We implemented them with simpler structures than those in the original implementation to reduce computational and memory expense: we used fewer 3D convolutional layers for the occlusion module and a U-Net-like network with AdaIN for the rendering module. Moreover, for simplicity, we did not use the identity regularizer or style discriminator proposed in (Nguyen-Phuoc et al., 2019).
+
+Figure 9: Comparison of the occlusion reasoning algorithms ("Ours" vs. "Softmax weighting"). The proposed method acquires more consistent rotation and generates more consistent depth maps than softmax weighting.
+
+We employed a different occlusion reasoning algorithm to enable consistent depth on the ShapeNet car and car image datasets. The occlusion reasoning used to obtain image features in DeepVoxels is softmax weighting along the depth axis, visualized on the left side of Figure 8. This requires the occlusion module to compute voxel weights that depend on the camera pose, which is difficult to learn without supervision. Therefore, to reduce the burden on the occlusion network, we employ an explicit reasoning algorithm. The network estimates the probability that each voxel lies on the surface of the object, i.e., the opacity of each voxel; this is implemented with a sigmoid activation function. The weights are then accumulated along each ray from the camera; once the accumulated value exceeds 1, the later voxels along the ray are ignored by setting their weights to 0. As a result, the occlusion module does not need to change the voxel weights according to the camera pose. The algorithm is shown on the right side of Figure 8.
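The accumulation rule can be sketched per ray as below. This is an illustrative NumPy version (the function name and the exact cutoff convention, ignoring a voxel once the opacity of the voxels in front of it exceeds 1, are our own reading of the description above):

```python
import numpy as np

def visible_weights(opacity):
    """Explicit occlusion reasoning along camera rays.

    opacity: (..., D) per-voxel surface probabilities along a ray,
    ordered front-to-back. A voxel keeps its opacity as weight unless
    the accumulated opacity of the voxels in front of it already
    exceeds 1, in which case it is hidden and gets weight 0.
    """
    before = np.cumsum(opacity, axis=-1) - opacity  # sum of voxels in front
    return np.where(before > 1.0, 0.0, opacity)
```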
+
+The image generation results on the ShapeNet car dataset using "DeepVoxels + 3D loss" with each algorithm are depicted in Figure 9. The proposed occlusion reasoning acquires consistent $360^{\circ}$ rotation, whereas softmax weighting does not. Moreover, the proposed algorithm generates more consistent depth maps than the softmax weighting method. These results show the effectiveness of the proposed algorithm for unsupervised learning.
+
+# E TRAINING DETAILS
+
+We trained the PGGAN- and StyleGAN-based models for 250,000 iterations with a batch size of 32, and the 3D-latent-feature-based models for 65,000 iterations with a batch size of 10. All models were trained with the Adam optimizer using equalized learning rates of 0.001 for the generators, 0.00001 for the mapping networks, and 0.003 for the discriminators. In the experiments, we used a ResNet-based discriminator and the non-saturating loss (Goodfellow et al., 2014) with a gradient penalty (Mescheder et al., 2018). Training with a single NVIDIA P100 GPU required 30, 40, and 50 hours for the DeepVoxels-, StyleGAN-, and PGGAN-based methods, respectively.
+
+# F ADDITIONAL RESULTS
+
+
+Figure 10: Additional results on FFHQ dataset. Images in each row are generated from the same latent vector $z$ but different azimuth or elevation angles. The images with colormaps are the generated depth images.
+
+
+Figure 11: Additional results on ShapeNet car dataset. Images in each row are generated from the same latent vector $z$ but different azimuth or elevation angles. The images with colormaps are the generated depth images.
+
+
+Figure 12: Randomly generated RGB images on the FFHQ dataset from PGGAN, with and without the proposed loss.
+
+Figure 13: Randomly generated RGB images on the ShapeNet car dataset from PGGAN, with and without the proposed loss.
+
+Figure 14: Randomly generated RGB images on the FFHQ dataset from StyleGAN, with and without the proposed loss.
+
+Figure 15: Randomly generated RGB images on the ShapeNet car dataset from StyleGAN, with and without the proposed loss.
+
+Figure 16: Randomly generated RGB images on the FFHQ dataset from DeepVoxels, with and without the proposed loss.
+
+Figure 17: Randomly generated RGB images on the ShapeNet car dataset from DeepVoxels, with and without the proposed loss.
\ No newline at end of file
diff --git a/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/images.zip b/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f848d2d28cfe4cc1c96f07259c2e7a894beb593f
--- /dev/null
+++ b/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4628bf1ecb26dc2c507a96d0560f0ef4b56dc308919fc84eeba256807431cda
+size 2317292
diff --git a/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/layout.json b/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..9e9a5319917bc7c2b7bf23ca38fc1903632e792a
--- /dev/null
+++ b/rgbdganunsupervised3drepresentationlearningfromnaturalimagedatasetsviargbdimagesynthesis/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dce7013f586411082a3ac7c64832316bc95e0035051bcabcf2c42cc6b731a157
+size 489345
diff --git a/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/da438397-db9d-46b6-88be-cda19f3d9d84_content_list.json b/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/da438397-db9d-46b6-88be-cda19f3d9d84_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4c3e85b4f0f48f808b4a4c4258adba2e7e226e26
--- /dev/null
+++ b/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/da438397-db9d-46b6-88be-cda19f3d9d84_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1a71a04eb16377d3cc7858ff8d84f8f1820a544a50ef68a9f82286f7c8997018
+size 116914
diff --git a/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/da438397-db9d-46b6-88be-cda19f3d9d84_model.json b/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/da438397-db9d-46b6-88be-cda19f3d9d84_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..aa8b3c3894a620dd4a9ef296789586941363e4e9
--- /dev/null
+++ b/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/da438397-db9d-46b6-88be-cda19f3d9d84_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1d462e5e6e6c1494a9bf24e2b6bc8bd8765c43042533b8e62cf50011a9cbf7fd
+size 138468
diff --git a/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/da438397-db9d-46b6-88be-cda19f3d9d84_origin.pdf b/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/da438397-db9d-46b6-88be-cda19f3d9d84_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a80162b810c0673b9e1a2e2a4b86dd4f3efd35c4
--- /dev/null
+++ b/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/da438397-db9d-46b6-88be-cda19f3d9d84_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7bd3bd45a850204b9346cbbae321eb903c32e963bc956cf15bdc622363a6ff64
+size 2541336
diff --git a/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/full.md b/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..909f90b4cc0955def27e4e18166f594a50df299d
--- /dev/null
+++ b/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/full.md
@@ -0,0 +1,423 @@
+# RIDE: REWARDING IMPACT-DRIVEN EXPLORATION FOR PROCEDURALLY-GENERATED ENVIRONMENTS
+
+Roberta Raileanu*
+
+Facebook AI Research
+
+New York University
+
+raileanu@cs.nyu.edu
+
+Tim Rocktäschel
+
+Facebook AI Research
+
+University College London
+
+rockt@fb.com
+
+# ABSTRACT
+
+Exploration in sparse reward environments remains one of the key challenges of model-free reinforcement learning. Instead of solely relying on extrinsic rewards provided by the environment, many state-of-the-art methods use intrinsic rewards to encourage exploration. However, we show that existing methods fall short in procedurally-generated environments where an agent is unlikely to visit a state more than once. We propose a novel type of intrinsic reward which encourages the agent to take actions that lead to significant changes in its learned state representation. We evaluate our method on multiple challenging procedurally-generated tasks in MiniGrid, as well as on tasks with high-dimensional observations used in prior work. Our experiments demonstrate that this approach is more sample efficient than existing exploration methods, particularly for procedurally-generated MiniGrid environments. Furthermore, we analyze the learned behavior as well as the intrinsic reward received by our agent. In contrast to previous approaches, our intrinsic reward does not diminish during the course of training and it rewards the agent substantially more for interacting with objects that it can control.
+
+# 1 INTRODUCTION
+
+Deep reinforcement learning (RL) is one of the most popular frameworks for developing agents that can solve a wide range of complex tasks (Mnih et al., 2016; Silver et al., 2016; 2017). RL agents learn to act in new environments through trial and error, in an attempt to maximize their cumulative reward. However, many environments of interest, particularly those closer to real-world problems, do not provide a steady stream of rewards for agents to learn from. In such settings, agents require many episodes to come across any reward, often rendering standard RL methods inapplicable.
+
+Inspired by human learning, the use of intrinsic motivation has been proposed to encourage agents to learn about their environments even when extrinsic feedback is rarely provided (Schmidhuber, 1991b; 2010; Oudeyer et al., 2007; Oudeyer & Kaplan, 2009). This type of exploration bonus emboldens the agent to visit new states (Bellemare et al., 2016; Burda et al., 2019b; Ecoffet et al., 2019) or to improve its knowledge and forward prediction of the world dynamics (Pathak et al., 2017; Burda et al., 2019a), and can be highly effective for learning in hard exploration games such as Montezuma's Revenge (Mnih et al., 2016). However, most hard exploration environments used in previous work have either a limited state space or an easy way to measure similarity between states (Ecoffet et al., 2019) and generally use the same "singleton" environment for training and evaluation (Mnih et al., 2016; Burda et al., 2019a). Deep RL agents trained in this way are prone to overfitting to a specific environment and often struggle to generalize to even slightly different settings (Rajeswaran et al., 2017; Zhang et al., 2018a,b). As a first step towards addressing this problem, a number of procedurally-generated environments have been recently released, for example DeepMind Lab (Beattie et al., 2016), Sokoban (Racanière et al., 2017), Malmö (Johnson et al., 2016), CraftAssist (Jernite et al., 2019), Sonic (Nichol et al., 2018), CoinRun (Cobbe et al., 2019), Obstacle Tower (Juliani et al., 2019), or Capture the Flag (Jaderberg et al., 2019).
+
+In this paper, we investigate exploration in procedurally-generated sparse-reward environments. Throughout the paper, we will refer to the general problem that needs to be solved as the task
+
+
+Figure 1: RIDE rewards the agent for actions that have an impact on the state representation $(R_{IDE})$ , which is learned using both a forward $(L_{fw})$ and an inverse dynamics $(L_{inv})$ model.
+
+(e.g. find a goal inside a maze) and to the particular instantiation of this task as the environment (e.g. maze layout, colors, textures, locations of the objects, environment dynamics etc.). The environment can be singleton or procedurally-generated. Singleton environments are those in which the agent has to solve the same task in the same environment in every episode, i.e., the environment does not change between episodes. A popular example of a hard exploration environment that falls into that category is Montezuma's Revenge. In procedurally-generated environments, the agent needs to solve the same task, but in every episode the environment is constructed differently (e.g. resulting in a different maze layout), making it unlikely for an agent to ever visit the same state twice. Thus, agents in such environments have to learn policies that generalize well across a very large state space. We demonstrate that current exploration methods fall short in such environments as they (i) make strong assumptions about the environment (deterministic or resettable to previous states) (Ecoffet et al., 2019; Aytar et al., 2018), (ii) make strong assumptions about the state space (small number of different states or easy to determine if two states are similar) (Ecoffet et al., 2019; Burda et al., 2019b; Bellemare et al., 2016; Ostrovski et al., 2017; Machado et al., 2018a), or (iii) provide intrinsic rewards that can diminish quickly during training (Pathak et al., 2017; Burda et al., 2019a).
+
+To overcome these limitations, we propose Rewarding Impact-Driven Exploration (RIDE), a novel intrinsic reward for exploration in RL that encourages the agent to take actions which result in impactful changes to its representation of the environment state (see Figure 1 for an illustration). We compare against state-of-the-art intrinsic reward methods on singleton environments with high-dimensional observations (i.e. visual inputs), as well as on hard-exploration tasks in procedurally-generated grid-world environments. Our experiments show that RIDE outperforms state-of-the-art exploration methods, particularly in procedurally-generated environments. Furthermore, we present a qualitative analysis demonstrating that RIDE, in contrast to prior work, does not suffer from diminishing intrinsic rewards during training and encourages agents substantially more to interact with objects that they can control (relative to other state-action pairs).
+
+# 2 RELATED WORK
+
+The problem of exploration in reinforcement learning has been extensively studied. Exploration methods encourage RL agents to visit novel states in various ways, for example by rewarding surprise (Schmidhuber, 1991b;a; 2010; 2006; Achiam & Sastry, 2017), information gain (Little & Sommer, 2013; Still & Precup, 2012; Houthooft et al., 2016), curiosity (Pathak et al., 2017; Burda et al., 2019b), empowerment (Klyubin et al., 2005; Rezende & Mohamed, 2015; Gregor et al., 2017), diversity (Eysenbach et al., 2019), feature control (Jaderberg et al., 2017; Dilokthanakul et al., 2019), or decision states (Goyal et al., 2019; Modhe et al., 2019). Another class of exploration methods applies Thompson sampling heuristics (Osband et al., 2016; Ostrovski et al., 2017; O'Donoghue et al., 2018; Tang et al., 2017). Osband et al. (2016) use a family of randomized Q-functions trained on bootstrapped data to select actions, while Fortunato et al. (2018) add noise in parameter space to encourage exploration. Here, we focus on intrinsic motivation methods, which are widely used and have proven effective for various hard-exploration tasks (Mnih et al., 2016; Pathak et al., 2017; Bellemare et al., 2016; Burda et al., 2019b).
+
+Intrinsic motivation can be useful in guiding the exploration of RL agents, particularly in environments where the extrinsic feedback is sparse or missing altogether (Oudeyer et al., 2007; 2008; Oudeyer & Kaplan, 2009; Schmidhuber, 1991b; 2010). The most popular and effective kinds of intrinsic motivation can be split into two broad classes: count-based methods that encourage the agent to visit novel states and curiosity-based methods that encourage the agent to learn about the environment dynamics.
+
+Count-Based Exploration. Strehl & Littman (2008) proposed the use of state visitation counts as an exploration bonus in tabular settings. More recently, such methods were extended to high-dimensional state spaces (Bellemare et al., 2016; Ostrovski et al., 2017; Martin et al., 2017; Tang et al., 2017; Machado et al., 2018a). Bellemare et al. (2016) use a Context-Tree Switching (CTS) density model to derive a state pseudo-count, while Ostrovski et al. (2017) use PixelCNN as a state density estimator. Burda et al. (2019b) employ the prediction error of a random network as exploration bonus with the aim of rewarding novel states more than previously seen ones. However, one can expect count-based exploration methods to be less effective in procedurally-generated environments with sparse reward. In these settings, the agent is likely to characterize two states as being different even when they only differ by features that are irrelevant for the task (e.g. the texture of the walls). If the agent considers most states to be "novel", the feedback signal will not be distinctive or varied enough to guide the agent.
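
Count-based bonuses of this form are easy to sketch. The snippet below is a toy illustration (not any cited paper's implementation), using exact tabular counts over hashable states and the bonus $1/\sqrt{N(s)}$; in high-dimensional spaces one would substitute pseudo-counts derived from a density model such as CTS or PixelCNN:

```python
import math
from collections import defaultdict

class CountBonus:
    """Count-based exploration bonus: r_i(s) = 1 / sqrt(N(s)),
    where N(s) is the number of visits to state s across training."""

    def __init__(self):
        self.counts = defaultdict(int)

    def bonus(self, state):
        self.counts[state] += 1
        return 1.0 / math.sqrt(self.counts[state])

cb = CountBonus()
first = cb.bonus((0, 0))   # novel state: full bonus
again = cb.bonus((0, 0))   # repeat visit: bonus decays as 1/sqrt(N)
```

In a procedurally-generated environment nearly every state is novel, so such a bonus saturates near its maximum everywhere and loses its guidance signal, which is exactly the failure mode described above.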
+
+Curiosity-Driven Exploration. Curiosity-based bonuses encourage the agent to explore the environment to learn about its dynamics. Curiosity can be formulated as the error or uncertainty in predicting the consequences of the agent's actions (Stadie et al., 2015; Pathak et al., 2017; Burda et al., 2019b). For example, Pathak et al. (2017) learn a latent representation of the state and design an intrinsic reward based on the error of predicting the next state in the learned latent space. While we use a similar mechanism for learning state embeddings, our exploration bonus is very different and builds upon the difference between the latent representations of two consecutive states. As we will see in the following sections, one problem with their approach is that the intrinsic reward can vanish during training, leaving the agent with no incentive to further explore the environment and reducing its feedback to extrinsic reward only.
+
+Generalization in Deep RL. Most of the existing exploration methods that have achieved impressive results on difficult tasks (Ecoffet et al., 2019; Pathak et al., 2017; Burda et al., 2019b; Bellemare et al., 2016; Choi et al., 2019; Aytar et al., 2018), have been trained and tested on the same environment and thus do not generalize to new instances. Several recent papers (Rajeswaran et al., 2017; Zhang et al., 2018a;b; Machado et al., 2018b; Foley et al., 2018) demonstrate that deep RL is susceptible to severe overfitting. As a result, a number of benchmarks have been recently released for testing generalization in RL (Beattie et al., 2016; Cobbe et al., 2019; Packer et al., 2018; Justesen et al., 2018; Leike et al., 2017; Nichol et al., 2018; Juliani et al., 2019). Here, we make another step towards developing exploration methods that can generalize to unseen scenarios by evaluating them on procedurally-generated environments. We opted for MiniGrid (Chevalier-Boisvert et al., 2018) because it is fast to run, provides a standard set of tasks with varied difficulty levels, focuses on single-agent, and does not use visual inputs, thereby allowing us to better isolate the exploration problem.
+
+More closely related to our work are the papers of Marino et al. (2019) and Zhang et al. (2019). Marino et al. (2019) use a reward that encourages changing the values of the non-proprioceptive features for training low-level policies on locomotion tasks. Their work assumes that the agent has access to a decomposition of the observation state into internal and external parts, an assumption which may not hold in many cases and may not be trivial to obtain even if it exists. Zhang et al. (2019) use the difference between the successor features of consecutive states as intrinsic reward. In this framework, a state is characterized through the features of all its successor states. While both of these papers use fixed (i.e. not learned) state representations to define the intrinsic reward, we use forward and inverse dynamics models to learn a state representation constrained to only capture elements in the environment that can be influenced by the agent. Lesort et al. (2018) emphasize the benefits of using a learned state representation for control as opposed to a fixed one (which may not contain information relevant for acting in the environment). In the case of Zhang et al. (2019), constructing a temporally extended state representation for aiding exploration is not trivial. Such a feature space may add extra noise to the intrinsic reward due to the uncertainty of future states. This is particularly problematic when the environment is highly stochastic or the agent often encounters novel states (as is the case in procedurally-generated environments).
+
+# 3 BACKGROUND: CURIOSITY-DRIVEN EXPLORATION
+
+We use the standard formalism of a single agent Markov Decision Process (MDP) defined by a set of states $S$ , a set of actions $A$ , and a transition function $\mathcal{T} : S \times \mathcal{A} \to \mathcal{P}(S)$ providing the probability distribution of the next state given a current state and action. The agent chooses actions by sampling from a stochastic policy $\pi : S \to \mathcal{P}(A)$ , and receives reward $r : S \times \mathcal{A} \to \mathbb{R}$ at every time step. The agent's goal is to learn a policy which maximizes its discounted expected return $R_{t} = \mathbb{E}\left[\sum_{k=0}^{T} \gamma^{k} r_{t+k+1}\right]$ where $r_{t}$ is the sum of the intrinsic and extrinsic reward received by the agent at time $t$ , $\gamma \in [0,1]$ is the discount factor, and the expectation is taken with respect to both the policy and the environment. Here, we consider the case of episodic RL in which the agent maximizes the reward received within a finite time horizon.
+
+In this paper we consider that, along with the extrinsic reward $r_t^e$ , the agent also receives some intrinsic reward $r_t^i$ , which can be computed for any $(s_t, a_t, s_{t+1})$ tuple. Consequently, the agent tries to maximize the weighted sum of the intrinsic and extrinsic reward: $r_t = r_t^e + \omega_{ir} r_t^i$ where $\omega_{ir}$ is a hyperparameter to weight the importance of both rewards.
+
+We build upon the work of Pathak et al. (2017), who note that some parts of the observation may have no influence on the agent's state. Thus, Pathak et al. propose learning a state representation that disregards those parts of the observation and only models (i) the elements that the agent can control, as well as (ii) those that can affect the agent, even if the agent cannot have an effect on them. Concretely, Pathak et al. learn a state representation $\phi(s) = f_{emb}(s; \theta_{emb})$ of a state $s$ using an inverse and a forward dynamics model (see Figure 1). The forward dynamics model is a neural network parameterized by $\theta_{fw}$ that takes as inputs $\phi(s_t)$ and $a_t$, predicts the next state representation $\hat{\phi}(s_{t+1}) = f_{fw}(\phi_t, a_t; \theta_{fw})$, and is trained to minimize $L_{fw}(\theta_{fw}, \theta_{emb}) = \|\hat{\phi}(s_{t+1}) - \phi(s_{t+1})\|_2^2$. The inverse dynamics model is a neural network parameterized by $\theta_{inv}$ that takes as inputs $\phi(s_t)$ and $\phi(s_{t+1})$, predicts the agent's action $\hat{a}_t = f_{inv}(\phi_t, \phi_{t+1}; \theta_{inv})$, and is trained to minimize $L_{inv}(\theta_{inv}, \theta_{emb}) = CrossEntropy(\hat{a}_t, a_t)$ when the action space is discrete. Pathak et al.'s curiosity-based intrinsic reward is proportional to the squared Euclidean distance between the actual embedding of the next state $\phi(s_{t+1})$ and the one predicted by the forward model, $\hat{\phi}(s_{t+1})$.
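
As a concrete illustration of these two losses, the sketch below uses random linear maps as hypothetical stand-ins for the trained networks $f_{emb}$, $f_{fw}$, and $f_{inv}$; only the loss computations mirror the definitions above:

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, emb_dim, n_actions = 8, 4, 7

# Toy stand-ins for the learned networks (random linear maps here;
# in the paper these are neural networks trained jointly).
W_emb = rng.normal(size=(emb_dim, state_dim))           # phi(s) = W_emb @ s
W_fw = rng.normal(size=(emb_dim, emb_dim + n_actions))  # forward model
W_inv = rng.normal(size=(n_actions, 2 * emb_dim))       # inverse model

def phi(s):
    return W_emb @ s

def one_hot(a, n):
    v = np.zeros(n)
    v[a] = 1.0
    return v

s_t = rng.normal(size=state_dim)
s_tp1 = rng.normal(size=state_dim)
a_t = 3

# Forward loss: squared L2 error of the predicted next-state embedding.
phi_hat = W_fw @ np.concatenate([phi(s_t), one_hot(a_t, n_actions)])
L_fw = float(np.sum((phi_hat - phi(s_tp1)) ** 2))

# Inverse loss: cross-entropy of the predicted action (discrete case).
logits = W_inv @ np.concatenate([phi(s_t), phi(s_tp1)])
log_probs = logits - np.log(np.sum(np.exp(logits)))
L_inv = float(-log_probs[a_t])
```

ICM's curiosity bonus is the forward error `L_fw` itself (up to a scale factor), which is why it vanishes once the forward model becomes accurate.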
+
+# 4 IMPACT-DRIVEN EXPLORATION
+
+Our main contribution is a novel intrinsic reward based on the change in the state representation produced by the agent's action. The proposed method encourages the agent to try out actions that have a significant impact on the environment. We demonstrate that this approach can promote effective exploration strategies when the feedback from the environment is sparse.
+
+We train a forward and an inverse dynamics model to learn a latent state representation $\phi(s)$ as proposed by Pathak et al. (2017). However, instead of using the Euclidean distance between the predicted and actual next state representations as intrinsic reward ($R_{cur}$ in Figure 1), we define the impact-driven reward as the Euclidean distance between consecutive state representations ($R_{IDE}$ in Figure 1). Compared to curiosity-driven exploration, impact-driven exploration rewards the agent for very different state-action pairs, leading to distinct agent behaviors, which we analyze in Section 6.1.1.
+
+Stanton & Clune (2018) categorize exploration into: across-training and intra-life and argue they are complementary. Popular methods such as count-based exploration (Bellemare et al., 2016) encourage agents to visit novel states in relation to all prior training episodes (i.e. across-training novelty), but they do not consider whether an agent visits novel states within some episode (i.e. intra-life novelty). As we will see, RIDE combines both types of exploration.
+
+Formally, RIDE is computed as the $L_2$-norm $\|\phi(s_{t+1}) - \phi(s_t)\|_2$ of the difference in the learned state representations of consecutive states. However, to ensure that the agent does not go back and forth between a sequence of states (with a large difference in their embeddings) purely to gain intrinsic reward, we discount RIDE by episodic state visitation counts. Concretely, we divide the impact-driven reward by $\sqrt{N_{ep}(s_{t+1})}$, where $N_{ep}(s_{t+1})$ is the number of times that state has been visited during the current episode; the counts are reset at the beginning of each episode. In high-dimensional regimes, one can use episodic pseudo-counts instead (Bellemare et al., 2016; Ostrovski et al., 2017). Thus, the overall intrinsic reward provided by RIDE is calculated as:
+
+$$
+R_{IDE}(s_t, a_t) \equiv r_t^i(s_t, a_t) = \frac{\|\phi(s_{t+1}) - \phi(s_t)\|_2}{\sqrt{N_{ep}(s_{t+1})}}
+$$
+
+where $\phi(s_{t+1})$ and $\phi(s_t)$ are the learned representations of consecutive states, resulting from the agent transitioning to state $s_{t+1}$ after taking action $a_t$ in state $s_t$ . The state is projected into a latent space using a neural network with parameters $\theta_{emb}$ .
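
A minimal sketch of this bonus, assuming a toy identity embedding over grid coordinates in place of the learned $\phi$, and exact episodic counts over hashable states:

```python
import math
from collections import defaultdict
import numpy as np

def phi(state):
    # Stand-in embedding: identity over grid coordinates. In the paper,
    # phi is learned with the forward and inverse dynamics models.
    return np.array(state, dtype=float)

class RIDEReward:
    """RIDE bonus: ||phi(s_{t+1}) - phi(s_t)||_2 / sqrt(N_ep(s_{t+1}))."""

    def __init__(self, phi):
        self.phi = phi
        self.new_episode()

    def new_episode(self):
        self.ep_counts = defaultdict(int)  # N_ep, reset every episode

    def bonus(self, s_t, s_tp1):
        self.ep_counts[s_tp1] += 1
        impact = float(np.linalg.norm(self.phi(s_tp1) - self.phi(s_t)))
        return impact / math.sqrt(self.ep_counts[s_tp1])

ride = RIDEReward(phi)
r1 = ride.bonus((0, 0), (0, 1))  # first visit: undiscounted impact
ride.bonus((0, 1), (0, 0))
r2 = ride.bonus((0, 0), (0, 1))  # revisit: discounted by sqrt(N_ep)
```

Oscillating between the same two states yields a shrinking bonus (`r2 < r1`), which is precisely what the episodic discount is meant to enforce.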
+
+The overall optimization problem that is solved for training the agent is
+
+$$
+\min_{\theta_\pi, \theta_{inv}, \theta_{fw}, \theta_{emb}} \left[ \omega_\pi L_{RL}(\theta_\pi) + \omega_{fw} L_{fw}(\theta_{fw}, \theta_{emb}) + \omega_{inv} L_{inv}(\theta_{inv}, \theta_{emb}) \right]
+$$
+
+where $\theta_{\pi}$ are the parameters of the policy and value network $(a_t \sim \pi(s_t; \theta_{\pi}))$ , and $\omega_{\pi}$ , $\omega_{inv}$ and $\omega_{fw}$ are scalars that weigh the relative importance of the reinforcement learning (RL) loss to that of the inverse and forward dynamics losses which are used for learning the intrinsic reward signal. Note that we never update the parameters of the inverse $(\theta_{inv})$ , forward $(\theta_{fw})$ , or embedding networks $(\theta_{emb})$ using the signal from the intrinsic or extrinsic reward (i.e. the RL loss); we only use these learned state embeddings for constructing the exploration bonus and never as part of the agent's policy (Figure 1 highlights that the policy learns its own internal representation of the state $\psi_t$ , which is only used for control and never for computing the intrinsic reward). Otherwise, the agent can artificially maximize its intrinsic reward by constructing state representations with large distances among themselves, without grounding them in environment observations.
+
+Note that there is no incentive for the learned state representations to encode features of the environment that cannot be influenced by the agent's actions. Thus, our agent will not receive rewards for reaching states that are inherently unpredictable, making exploration robust with respect to distractor objects or other inconsequential sources of variation in the environment. As we will later show, RIDE is robust to the well-known noisy-TV problem in which an agent, that is rewarded for errors in the prediction of its forward model (such as the one proposed in Pathak et al. (2017)), gets attracted to local sources of entropy in the environment. Furthermore, the difference of consecutive state representations is unlikely to go to zero during learning as they are representations of actual states visited by the agent and constrained by the forward and inverse model. This is in contrast to Pathak et al. (2017) and Burda et al. (2019b) where the intrinsic reward goes to zero as soon as the forward model becomes sufficiently accurate or the agent's policy only explores well known parts of the state space.
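
The noisy-TV robustness can be seen in a toy example, under the assumption (motivated by the inverse-dynamics training) that the learned embedding drops features the agent cannot influence:

```python
import numpy as np

rng = np.random.default_rng(1)

def step(pos):
    # Toy "noisy TV": the observation is (agent position, TV color),
    # and the color changes randomly regardless of the agent's action.
    return (pos, int(rng.integers(0, 256)))

def phi(obs):
    # An embedding shaped by inverse dynamics has no reason to encode
    # the uncontrollable color; assume it keeps only the position.
    return np.array([obs[0]], dtype=float)

obs_t, obs_tp1 = step(5), step(5)  # agent stays put while the TV flickers
ride_bonus = float(np.linalg.norm(phi(obs_tp1) - phi(obs_t)))  # 0.0
```

A prediction-error bonus over raw observations would keep paying the agent for the flickering color, whereas the impact bonus is zero because the embedding does not move.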
+
+# 5 EXPERIMENTS
+
+We evaluate RIDE on procedurally-generated environments from MiniGrid, as well as on two existing singleton environments with high-dimensional observations used in prior work, and compare it against both standard RL and three commonly used intrinsic reward methods for exploration. For all our experiments, we show the mean and standard deviation of the average return across 5 different seeds for each model. The average return is computed as the rolling mean over the past 100 episodes.
+
+# 5.1 ENVIRONMENTS
+
+The first set of environments are procedurally-generated grid-worlds in MiniGrid (Chevalier-Boisvert et al., 2018). We consider three types of hard exploration tasks: MultiRoomNXSY, KeyCorridorS3R3, and ObstructedMaze2Dlh.
+
+In MiniGrid, the world is a partially observable grid of size $N \times N$. Each tile in the grid contains at most one of the following objects: wall, door, key, ball, box, and goal. The agent can take one of seven actions: turn left or right, move forward, pick up or drop an object, toggle, or done. More details about the MiniGrid environment and tasks can be found in A.3.
+
+
+Figure 2: Rendering of a procedurally-generated environment from MiniGrid's MultiRoomN12S10 task.
+
+Figure 3: Performance of RIDE, Count, RND, ICM and IMPALA on a variety of hard exploration problems in MiniGrid. Note that RIDE is the only method that can solve the hardest tasks.
+
+To compare fairly with the curiosity-driven exploration work of Pathak et al. (2017), we ran a one-off experiment on their (singleton) Mario environment (Kauten, 2018). We train our model with and without extrinsic reward on the first level of the game.
+
+The last (singleton) environment we evaluate on is VizDoom (Kempka et al., 2016). Details about the environment can be found in A.4.
+
+# 5.2 BASELINES
+
+For all our experiments, we use IMPALA (Espeholt et al., 2018) following the implementation of Küttler et al. (2019) as the base RL algorithm, and RMSProp (Tieleman & Hinton, 2012) for optimization. All models use the same basic RL algorithm and network architecture for the policy and value functions (see Appendix A.2 and Appendix A.1 for details regarding the hyperparameters and network architectures), differing only in how intrinsic rewards are defined. In our experiments we compare with the following baselines: Count: Count-Based Exploration by Bellemare et al. (2016), which uses state visitation counts to give higher rewards for new or rarely seen states. RND: Random Network Distillation Exploration by Burda et al. (2019b), which uses the prediction error of a random network as exploration bonus with the aim of rewarding novel states more than previously encountered ones. ICM: Intrinsic Curiosity Module by Pathak et al. (2017) (see Section 3). IMPALA: Standard RL approach by Espeholt et al. (2018) that uses only extrinsic reward and encourages random exploration via entropy regularization of the policy.
+
+# 6 RESULTS AND DISCUSSION
+
+We present the results of RIDE in comparison to popular exploration methods, as well as an analysis of the learned policies and properties of the intrinsic reward generated by different methods.
+
+# 6.1 MINIGRID
+
+Figure 3 summarizes our results on various hard MiniGrid tasks. Note that the standard RL approach IMPALA (purple) is not able to learn in any of the environments since the extrinsic reward is too sparse. Furthermore, our results reveal that RIDE is more sample efficient than all the other exploration methods across all MiniGrid tasks considered here. While other exploration bonuses are effective on easier tasks and learn optimal policies where IMPALA fails, the gap between our approach and the others increases with the difficulty of the task. Furthermore, RIDE manages to solve some very challenging tasks on which the other methods fail to get any reward even after training on over 100M frames (Figure 3).
+
+
+Figure 4: Intrinsic reward heatmaps for RND, ICM, and RIDE (from left to right) for opening doors (green), moving forward (blue), or turning left or right (red) on a random environment from the MultiRoomN7S4 task. A is the agent's starting position, G is the goal position and D are doors that have to be opened on the way.
+
+| Model | Open Door (Mean) | Open Door (Std) | Turn Left / Right (Mean) | Turn Left / Right (Std) | Move Forward (Mean) | Move Forward (Std) |
+| --- | --- | --- | --- | --- | --- | --- |
+| RIDE | 0.0490 | 0.0019 | 0.0071 | 0.0034 | 0.0181 | 0.0116 |
+| RND | 0.0032 | 0.0018 | 0.0031 | 0.0028 | 0.0026 | 0.0017 |
+| ICM | 0.0055 | 0.0003 | 0.0052 | 0.0003 | 0.0056 | 0.0003 |
+
+Table 1: Mean intrinsic reward per action over 100 episodes on a random maze in MultiRoomN7S4.
+
+In addition to existing MiniGrid tasks, we also tested the model's ability to deal with stochasticity in the environment by adding a "noisy TV" to the MultiRoomN7S4 task, resulting in the new MultiRoomN7S4NoisyTV task (left-center plot in the top row of Figure 3). The noisy TV is implemented as a ball that changes its color to a randomly picked one whenever the agent takes a particular action. As expected, the performance of ICM drops as the agent becomes attracted to the ball, obtaining intrinsic reward for failing to predict the ball's next color. The Count model also needs more time to train, likely because of the increasing number of rare and novel states (due to the changing color of the ball).
+
+We include results for ablations to our model in Appendix A.5, highlighting the importance of combining impact-driven exploration with episodic state visitation discounting.
+
+# 6.1.1 ANALYSIS OF THE INTRINSIC REWARD
+
+To better understand the effectiveness of different exploration methods, we investigate the intrinsic reward an agent receives for certain trajectories in the environment.
+
+Figure 4 shows a heatmap of the intrinsic reward received by RND, ICM, and RIDE on a sampled environment after having been trained on procedurally-generated environments from the MultiRoomN7S4 task. While all three methods can solve this task, the intrinsic rewards they receive differ. Specifically, the RIDE agent is rewarded in a much more structured manner for opening doors, entering new rooms, and turning at decision points. Table 1 quantifies this phenomenon: we record the intrinsic reward received for each type of action, averaged over 100 episodes. RIDE rewards interactions with the door substantially more than moving forward or turning left or right, while the other methods reward all actions roughly uniformly.
+
+Figure 12 and Table 3 in A.6.2 show a similar pattern for the intrinsic rewards for agents trained on the MultiRoomN12S10 task, while Figure 13 and Table 4 in A.6.3 contain the equivalent analysis for agents trained on ObstructedMaze2Dlh. As emphasized there, RIDE is rewarding the agent more for interactions with objects as opposed to actions for moving around in the maze, a characteristic which is not as prevalent in the other models.
+
+Figure 5 shows the mean intrinsic reward of all models while training on the MultiRoomN12S10 task. While the ICM, RND, and Count intrinsic rewards converge to very low values quite early in training, the RIDE bonus keeps changing and retains a higher value even after training on 100M frames. Hence, RIDE constantly encourages the agent to take actions that change the local environment. In contrast, Count, RND, and ICM may not consider certain states to be "novel" or "surprising" after longer periods of training, as they have seen similar states in the past or have learned to almost perfectly predict the next state in a subset of the environment states. Consequently, their intrinsic rewards diminish during training and the agent struggles to distinguish actions that lead to novel or surprising states from those that do not, thereby getting trapped in some parts of the state space (see Figure 12).
+
+
+Figure 5: Mean intrinsic reward for models trained on Multi-RoomN12S10.
+
+# 6.1.2 SINGLETON VERSUS PROCEDURALLY-GENERATED ENVIRONMENTS
+
+It is important to understand and quantify how much harder it is to train existing deep RL exploration methods on tasks in procedurally-generated environments compared to a singleton environment.
+
+To investigate this dependency, we trained the models on a singleton environment of the ObstructedMaze2Dlh task, so that at the beginning of every episode the agent is spawned in exactly the same maze with all objects located in the same positions. In this setting, we see that Count, RND, and IMPALA are also able to solve the task (see Figure 6 and compare with the center-right plot in the bottom row of Figure 3 for procedurally-generated environments of the same task). As expected, this emphasizes that training an agent in procedurally-generated environments creates significant challenges over training on a singleton environment for the same task. Moreover, it highlights the importance of training on a variety of environments to avoid overfitting to the idiosyncrasies of a particular environment.
+
+
+Figure 6: Training on a singleton instance of ObstructedMaze2Dlh.
+
+# 6.1.3 NO EXTRINSIC REWARD
+
+To analyze the way different methods explore environments without depending on the chance of running into extrinsic reward (which can dramatically change the agent's policy), we analyze agents that are trained without any extrinsic reward on both singleton and procedurally-generated environments.
+
+The top row of Figure 7 shows state visitation heatmaps for all the models in a singleton environment of the MultiRoomN10S6 task, after training all of them for 50M frames with intrinsic reward only. The agents are allowed to take 200 steps in every episode. The figure indicates that all models have effective exploration strategies when trained on a singleton maze: the 10th, 9th, and 6th rooms are reached by RIDE, Count/RND, and ICM, respectively. The Random policy fully explores the first room but does not get to the second room within the time limit.
+
+When trained on procedurally-generated mazes, the existing models explore much less efficiently, as can be seen in the bottom row of Figure 7. Here, Count, RND, and ICM only make it to the 4th, 3rd, and 2nd rooms, respectively, within an episode, while RIDE is able to explore all rooms. This
+
+
+Figure 7: State visitation heatmaps for Count, RND, ICM, Random, and RIDE models (from left to right) trained for $50\mathrm{m}$ frames without any extrinsic reward on a singleton maze (top row) and on procedurally-generated mazes (bottom row) in MultiRoomN10S6.
+
+
+Figure 8: Performance on Mario with intrinsic reward only (a), with intrinsic and extrinsic reward (b), and on VizDoom (c). Note that IMPALA is trained with extrinsic reward only in all cases.
+
+further supports that RIDE learns a state representation that allows generalization across different mazes and is not as distracted by less important details that change from one procedurally-generated environment to another.
+
+# 6.2 MARIO AND VIZDOOM
+
+In order to compare to Pathak et al. (2017), we evaluate RIDE on the first level of the Mario environment. Our results (see Figures 8a and 8b) suggest that this environment may not be as challenging as previously believed, given that all the methods evaluated here, including vanilla IMPALA, can learn similarly good policies after training on only $1\mathrm{m}$ frames even without any intrinsic reward (Figure 8a). Note that we are able to reproduce the results reported in the original ICM paper (Pathak et al., 2017). However, when training with both intrinsic and extrinsic reward (Figure 8b), the curiosity-based exploration bonus (ICM) hurts learning, converging later and to a lower value than the other methods evaluated here.
+
+For VizDoom (see Figure 8c), we observe that RIDE performs as well as ICM, while all the other baselines fail to learn effective policies given the same amount of training. Note that our ICM implementation reproduces the results in the original paper on this task, achieving a $100\%$ success rate after training on approximately 60m frames (Pathak et al., 2017).
+
+# 7 CONCLUSION AND FUTURE WORK
+
+In this work, we propose Rewarding Impact-Driven Exploration (RIDE), an intrinsic reward bonus that encourages agents to explore actions that substantially change the state of the environment, as measured in a learned latent space. RIDE has a number of desirable properties: it attracts agents to states where they can affect the environment, it provides a signal to agents even after training for a long time, and it is conceptually simple as well as compatible with other intrinsic or extrinsic rewards and any deep RL algorithm.
+
+Our approach is particularly effective in procedurally-generated sparse-reward environments where it significantly outperforms IMPALA (Espeholt et al., 2018), as well as some of the most popular exploration methods such as Count (Bellemare et al., 2016), RND (Burda et al., 2019b), and ICM (Pathak et al., 2017). Furthermore, RIDE explores procedurally-generated environments more efficiently than other exploration methods.
+
+However, there are still many ways to improve upon RIDE. For example, one can make use of symbolic information to measure or characterize the agent's impact, consider longer-term effects of the agent's actions, or promote diversity among the kinds of changes the agent makes to the environment. Another interesting avenue for future research is to develop algorithms that can distinguish between desirable and undesirable types of impact the agent can have in the environment, thus constraining the agent to act safely and avoid distractions (i.e. actions that lead to large changes in the environment but that are not useful for a given task). The different kinds of impact might correspond to distinctive skills or low-level policies that a hierarchical controller could use to learn more complex policies or better exploration strategies.
+
+# ACKNOWLEDGMENTS
+
+We would like to thank Heinrich Kuttler, Edward Grefenstette, Nantas Nardelli, Jakob Foerster, Kyunghyun Cho, Arthur Szlam, Rob Fergus, Victor Zhong and Léon Bottou for insightful discussions and valuable feedback on this work.
+
+# REFERENCES
+
+Joshua Achiam and Shankar Sastry. Surprise-based intrinsic motivation for deep reinforcement learning. CoRR, abs/1703.01732, 2017. URL http://arxiv.org/abs/1703.01732.
+Yusuf Aytar, Tobias Pfaff, David Budden, Thomas Paine, Ziyu Wang, and Nando de Freitas. Playing hard exploration games by watching youtube. In Advances in Neural Information Processing Systems, pp. 2930-2941, 2018.
+Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Kuttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, and Stig Petersen. Deepmind lab. CoRR, abs/1612.03801, 2016. URL http://arxiv.org/abs/1612.03801.
+Marc Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Remi Munos. Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, pp. 1471-1479, 2016.
+Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
+Yuri Burda, Harrison Edwards, Deepak Pathak, Amos J. Storkey, Trevor Darrell, and Alexei A. Efros. Large-scale study of curiosity-driven learning. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, 2019a. URL https://openreview.net/forum?id=rJNwDjAqYX.
+Yuri Burda, Harrison Edwards, Amos J. Storkey, and Oleg Klimov. Exploration by random network distillation. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, 2019b. URL https://openreview.net/forum?id=H11JJnR5Ym.
+
+Maxime Chevalier-Boisvert, Lucas Willems, and Suman Pal. Minimalistic gridworld environment for openai gym. https://github.com/maximecb/gym-minigrid, 2018.
+Jongwook Choi, Yijie Guo, Marcin Moczulski, Junhyuk Oh, Neal Wu, Mohammad Norouzi, and Honglak Lee. Contingency-aware exploration in reinforcement learning. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, 2019. URL https://openreview.net/forum?id=HyxGB2AcY7.
+Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016. URL http://arxiv.org/abs/1511.07289.
+Karl Cobbe, Oleg Klimov, Christopher Hesse, Taehoon Kim, and John Schulman. Quantifying generalization in reinforcement learning. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pp. 1282-1289, 2019. URL http://proceedings.mlr.press/v97/cobbe19a.html.
+Nat Dilokthanakul, Christos Kaplanis, Nick Pawlowski, and Murray Shanahan. Feature control as intrinsic motivation for hierarchical reinforcement learning. IEEE transactions on neural networks and learning systems, 2019.
+Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O Stanley, and Jeff Clune. Go-explore: a new approach for hard-exploration problems. arXiv preprint arXiv:1901.10995, 2019.
+Lasse Espeholt, Hubert Soyer, Rémi Munos, Karen Simonyan, Volodymyr Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. IMPALA: scalable distributed deep-rl with importance weighted actor-learner architectures. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, pp. 1406-1415, 2018. URL http://proceedings.mlr.press/v80/espeholt18a.html.
+Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, 2019. URL https://openreview.net/forum?id=SJx63jRqFm.
+John Foley, Emma Tosch, Raleigh Clary, and David Jensen. Toybox: Better atari environments for testing reinforcement learning agents. CoRR, abs/1812.02850, 2018. URL http://arxiv.org/abs/1812.02850.
+Meire Fortunato, Mohammad Gheshlaghi Azar, Bilal Piot, Jacob Menick, Matteo Hessel, Ian Osband, Alex Graves, Volodymyr Mnih, Rémi Munos, Demis Hassabis, Olivier Pietquin, Charles Blundell, and Shane Legg. Noisy networks for exploration. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, 2018. URL https://openreview.net/forum?id=rywHCPkAW.
+Anirudh Goyal, Riashat Islam, Daniel Strouse, Zafarali Ahmed, Hugo Larochelle, Matthew Botvinick, Yoshua Bengio, and Sergey Levine. Infobot: Transfer and exploration via the information bottleneck. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, 2019. URL https://openreview.net/forum?id=rJg8yhAqKm.
+Karol Gregor, Danilo Jimenez Rezende, and Daan Wierstra. Variational intrinsic control. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings, 2017. URL https://openreview.net/forum?id=Skc-Fo4Yg.
+Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck, and Pieter Abbeel. Vime: Variational information maximizing exploration. In Advances in Neural Information Processing Systems, pp. 1109-1117, 2016.
+
+Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, 2017. URL https://openreview.net/forum?id=SJ6yPD5xg.
+Max Jaderberg, Wojciech M Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castaneda, Charles Beattie, Neil C Rabinowitz, Ari S Morcos, Avraham Ruderman, et al. Human-level performance in 3d multiplayer games with population-based reinforcement learning. Science, 364(6443):859-865, 2019.
+Yacine Jernite, Kavya Srinet, Jonathan Gray, and Arthur Szlam. Craftassist instruction parsing: Semantic parsing for a minecraft assistant. CoRR, abs/1905.01978, 2019. URL http://arxiv.org/abs/1905.01978.
+Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell. The malmo platform for artificial intelligence experimentation. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pp. 4246-4247, 2016. URL http://www.ijcai.org/Abstract/16/643.
+Arthur Juliani, Ahmed Khalifa, Vincent-Pierre Berges, Jonathan Harper, Ervin Teng, Hunter Henry, Adam Crespi, Julian Togelius, and Danny Lange. Obstacle tower: A generalization challenge in vision, control, and planning. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019, pp. 2684-2691, 2019. doi: 10.24963/ijcai.2019/373. URL https://doi.org/10.24963/ijcai.2019/373.
+Niels Justesen, Ruben Rodriguez Torrado, Philip Bontrager, Ahmed Khalifa, Julian Togelius, and Sebastian Risi. Illuminating generalization in deep reinforcement learning through procedural level generation. arXiv preprint arXiv:1806.10729, 2018.
+Christian Kauten. Super Mario Bros for OpenAI Gym. GitHub, 2018. URL https://github.com/Kautenja/gym-super-mario-bros.
+Michal Kempka, Marek Wydmuch, Grzegorz Runc, Jakub Toczek, and Wojciech Jaskowski. Vizdoom: A doom-based ai research platform for visual reinforcement learning. In 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1-8. IEEE, 2016.
+Alexander S Klyubin, Daniel Polani, and Chrystopher L Nehaniv. All else being equal be empowered. In European Conference on Artificial Life, pp. 744-753. Springer, 2005.
+Heinrich Kuttler, Nantas Nardelli, Thibaut Lavril, Marco Selvatici, Viswanath Sivakumar, Tim Rocktäschel, and Edward Grefenstette. TorchBeast: A PyTorch Platform for Distributed RL. arXiv preprint arXiv:1910.03552, 2019. URL https://github.com/facebookresearch/torchbeast.
+Jan Leike, Miljan Martic, Victoria Krakovna, Pedro A Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, and Shane Legg. Ai safety gridworlds. arXiv preprint arXiv:1711.09883, 2017.
+Timothee Lesort, Natalia Diaz Rodríguez, Jean-François Goudou, and David Filliat. State representation learning for control: An overview. Neural Networks, 108:379-392, 2018. doi: 10.1016/j.neunet.2018.07.006. URL https://doi.org/10.1016/j.neunet.2018.07.006.
+Daniel Ying-Jeh Little and Friedrich Tobias Sommer. Learning and exploration in action-perception loops. Frontiers in neural circuits, 7:37, 2013.
+Marlos C Machado, Marc G Bellemare, and Michael Bowling. Count-based exploration with the successor representation. arXiv preprint arXiv:1807.11622, 2018a.
+Marlos C Machado, Marc G Bellemare, Erik Talvitie, Joel Veness, Matthew Hausknecht, and Michael Bowling. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. Journal of Artificial Intelligence Research, 61:523-562, 2018b.
+
+Kenneth Marino, Abhinav Gupta, Rob Fergus, and Arthur Szlam. Hierarchical RL using an ensemble of proprioceptive periodic policies. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, 2019. URL https://openreview.net/forum?id=SJz1x20cFQ.
+Jarryd Martin, Suraj Narayanan Sasikumar, Tom Everitt, and Marcus Hutter. Count-based exploration in feature space for reinforcement learning. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pp. 2471-2478, 2017. doi: 10.24963/ijcai.2017/344. URL https://doi.org/10.24963/ijcai.2017/344.
+Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International conference on machine learning, pp. 1928-1937, 2016.
+Nirbhay Modhe, Prithvijit Chattopadhyay, Mohit Sharma, Abhishek Das, Devi Parikh, Dhruv Batra, and Ramakrishna Vedantam. Unsupervised discovery of decision states for transfer in reinforcement learning. CoRR, abs/1907.10580, 2019. URL http://arxiv.org/abs/1907.10580.
+Alex Nichol, Vicki Pfau, Christopher Hesse, Oleg Klimov, and John Schulman. Gotta learn fast: A new benchmark for generalization in rl. arXiv preprint arXiv:1804.03720, 2018.
+Brendan O'Donoghue, Ian Osband, Rémi Munos, and Volodymyr Mnih. The uncertainty bellman equation and exploration. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, pp. 3836-3845, 2018. URL http://proceedings.mlr.press/v80/o-donoghue18a.html.
+Ian Osband, Charles Blundell, Alexander Pritzel, and Benjamin Van Roy. Deep exploration via bootstrapped dqn. In Advances in neural information processing systems, pp. 4026-4034, 2016.
+Georg Ostrovski, Marc G Bellemare, Aäron van den Oord, and Rémi Munos. Count-based exploration with neural density models. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2721-2730. JMLR.org, 2017.
+Pierre-Yves Oudeyer and Frederic Kaplan. What is intrinsic motivation? a typology of computational approaches. Frontiers in neurorobotics, 1:6, 2009.
+Pierre-Yves Oudeyer, Frédéric Kaplan, and Verena V Hafner. Intrinsic motivation systems for autonomous mental development. IEEE transactions on evolutionary computation, 11(2):265-286, 2007.
+Pierre-Yves Oudeyer, Frederic Kaplan, et al. How can we define intrinsic motivation. In Proc. of the 8th Conf. on Epigenetic Robotics, volume 5, pp. 29-31, 2008.
+Charles Packer, Katelyn Gao, Jernej Kos, Philipp Krahenbuhl, Vladlen Koltun, and Dawn Song. Assessing generalization in deep reinforcement learning. arXiv preprint arXiv:1810.12282, 2018.
+Deepak Pathak, Pulkit Agrawal, Alexei A Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 16-17, 2017.
+Sebastien Racanière, Theophane Weber, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adrià Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter W. Battaglia, Demis Hassabis, David Silver, and Daan Wierstra. Imagination-augmented agents for deep reinforcement learning. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 5690-5701, 2017. URL http://papers.nips.cc/paper/7152-imagination-augmented-agents-for-deep-reinforcement-learning.
+Aravind Rajeswaran, Kendall Lowrey, Emanuel V Todorov, and Sham M Kakade. Towards generalization and simplicity in continuous control. In Advances in Neural Information Processing Systems, pp. 6550-6561, 2017.
+
+Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pp. 1530-1538, 2015. URL http://proceedings.mlr.press/v37/rezende15.html.
+Jürgen Schmidhuber. Curious model-building control systems. In Proc. international joint conference on neural networks, pp. 1458-1463, 1991a.
+Jürgen Schmidhuber. A possibility for implementing curiosity and boredom in model-building neural controllers. In Proc. of the international conference on simulation of adaptive behavior: From animals to animals, pp. 222-227, 1991b.
+Jürgen Schmidhuber. Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts. Connection Science, 18(2):173-187, 2006.
+Jürgen Schmidhuber. Formal theory of creativity, fun, and intrinsic motivation (1990-2010). IEEE Transactions on Autonomous Mental Development, 2(3):230-247, 2010.
+David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484, 2016.
+David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017.
+Bradly C Stadie, Sergey Levine, and Pieter Abbeel. Incentivizing exploration in reinforcement learning with deep predictive models. arXiv preprint arXiv:1507.00814, 2015.
+Christopher Stanton and Jeff Clune. Deep curiosity search: Intra-life exploration can improve performance on challenging deep reinforcement learning problems. arXiv preprint arXiv:1806.00553, 2018.
+Susanne Still and Doina Precup. An information-theoretic approach to curiosity-driven reinforcement learning. Theory in Biosciences, 131(3):139-148, 2012.
+Alexander L Strehl and Michael L Littman. An analysis of model-based interval estimation for markov decision processes. Journal of Computer and System Sciences, 74(8):1309-1331, 2008.
+Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, OpenAI Xi Chen, Yan Duan, John Schulman, Filip DeTurck, and Pieter Abbeel. # exploration: A study of count-based exploration for deep reinforcement learning. In Advances in neural information processing systems, pp. 2753-2762, 2017.
+T. Tieleman and G. Hinton. RMSprop: Divide the gradient by a running average of its recent magnitude. Coursera: Neural Networks for Machine Learning, Technical report, 2012.
+Amy Zhang, Nicolas Ballas, and Joelle Pineau. A dissection of overfitting and generalization in continuous reinforcement learning. arXiv preprint arXiv:1806.07937, 2018a.
+Chiyuan Zhang, Oriol Vinyals, Remi Munos, and Samy Bengio. A study on overfitting in deep reinforcement learning. arXiv preprint arXiv:1804.06893, 2018b.
+Jingwei Zhang, Niklas Wetzel, Nicolai Dorka, Joschka Boedecker, and Wolfram Burgard. Scheduled intrinsic drive: A hierarchical take on intrinsically motivated exploration. arXiv preprint arXiv:1903.07400, 2019.
+
+# A APPENDIX
+
+# A.1 NETWORK ARCHITECTURES
+
+All our models use the same network architecture for the policy and value networks. The input is passed through a sequence of three (for MiniGrid) or four (for the environments used by Pathak et al. (2017)) convolutional layers with 32 filters each, a kernel size of $3 \times 3$, a stride of 2, and padding of 1. An exponential linear unit (ELU; Clevert et al., 2016) is applied after each convolutional layer. The output of the last convolutional layer is fed into an LSTM with 256 units. Two separate fully connected layers are used to predict the value function and the action from the LSTM feature representation.
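
With kernel size 3, stride 2, and padding 1, each layer roughly halves the spatial resolution. The helper below (illustrative only, not the paper's code) traces the resulting feature map sizes using the standard output-size formula.

```python
# Trace conv feature-map sizes for kernel 3, stride 2, padding 1:
#   out = floor((in + 2*padding - kernel) / stride) + 1
def conv_out(size, kernel=3, stride=2, padding=1):
    return (size + 2 * padding - kernel) // stride + 1

def trace_shapes(size, num_layers):
    sizes = [size]
    for _ in range(num_layers):
        sizes.append(conv_out(sizes[-1]))
    return sizes

mini = trace_shapes(7, 3)    # MiniGrid inputs through three layers
pix = trace_shapes(42, 4)    # 42x42 pixel inputs through four layers
```

Assuming a 7x7 MiniGrid input, three layers reduce it to 1x1; a 42x42 pixel input passes four layers down to 3x3, so 3 · 3 · 32 = 288 features would feed the LSTM under these assumptions.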
+
+For the singleton environments used in prior work, the agents are trained using visual inputs that are pre-processed similarly to Mnih et al. (2016). The RGB images are converted to gray-scale and resized to $42 \times 42$. The input given to both the policy and the state representation networks consists of the current frame concatenated with the previous three frames. In order to reduce overfitting, we use an action repeat of four during training. At inference time, we sample the policy without any action repeats.
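
The frame stacking and action repeat described above can be sketched as follows; this is a minimal stand-in (summing rewards across repeated steps is our assumption, a common convention, not something the paper states).

```python
# Sketch of the preprocessing pipeline: stack the current frame with the
# previous three, and repeat each action four times during training.
# Frames are arbitrary objects here; in practice, 42x42 grayscale arrays.
from collections import deque

class FrameStack:
    def __init__(self, k=4):
        self.frames = deque(maxlen=k)

    def reset(self, frame):
        # Fill the buffer with copies of the first frame of the episode.
        for _ in range(self.frames.maxlen):
            self.frames.append(frame)
        return list(self.frames)

    def step(self, frame):
        self.frames.append(frame)
        return list(self.frames)

def repeated_step(env_step, action, repeat=4):
    """Apply `action` `repeat` times; summing rewards is our assumption."""
    total, frame, done = 0.0, None, False
    for _ in range(repeat):
        frame, reward, done = env_step(action)
        total += reward
        if done:
            break
    return frame, total, done
```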
+
+# A.2 HYPERPARAMETERS
+
+We ran grid searches over the learning rate $\in [0.0001, 0.0005, 0.001]$ , batch size $\in [8, 32]$ and unroll length $\in [20, 40, 100, 200]$ . The best values for all models can be found in Table 2. The learning rate is linearly annealed to 0 in all experiments.
+
+| Parameter | Value |
+| --- | --- |
+| Learning Rate | 0.0001 |
+| Batch Size | 32 |
+| Unroll Length | 100 |
+| Discount | 0.99 |
+| RMSProp Momentum | 0.0 |
+| RMSProp ε | 0.01 |
+| Clip Gradient Norm | 40.0 |
+
+Table 2: Hyperparameters common to all experiments.
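
The grid search over these ranges can be sketched with `itertools.product`; `train_and_eval` below is a hypothetical stand-in for launching a full training run and returning its final score.

```python
# Enumerate all hyperparameter combinations from the grid above and keep
# the one with the highest score under a caller-supplied evaluation function.
import itertools

grid = {
    "learning_rate": [0.0001, 0.0005, 0.001],
    "batch_size": [8, 32],
    "unroll_length": [20, 40, 100, 200],
}

def best_config(train_and_eval):
    keys = list(grid)
    candidates = [dict(zip(keys, vals))
                  for vals in itertools.product(*grid.values())]
    return max(candidates, key=train_and_eval)

# Toy scoring function favoring the values reported in Table 2;
# a real run would train a model per candidate instead.
chosen = best_config(lambda c: (c["learning_rate"] == 0.0001)
                     + (c["batch_size"] == 32)
                     + (c["unroll_length"] == 100))
```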
+
+We also ran grid searches over the intrinsic reward coefficient $\in [1.0, 0.5, 0.1, 0.05, 0.01, 0.005, 0.001]$ and the entropy coefficient $\in [0.01, 0.005, 0.001, 0.0005, 0.0001, 0.00005]$ for all the models on all environments. The best intrinsic reward coefficient was 0.1 for ICM and RND, and 0.005 for Count, on all environments. The best entropy coefficient was 0.0001 for ICM, RND, and Count on all environments. For RIDE, we used an intrinsic reward coefficient of 0.1 and an entropy coefficient of 0.0005 for MultiRoomN7S4, MultiRoomNoisyTVN7S4, MultiRoomN10S4, and KeyCorridorS3R3, and an intrinsic reward coefficient of 0.5 with an entropy coefficient of 0.001 for MultiRoomN7S8, MultiRoomN10S10, MultiRoomN12S10, and ObstructedMaze2Dlh. The ablations use the same hyperparameters as RIDE. In all experiments presented here, we use the best values found for each model.
+
+# A.3 MINIGRID ENVIRONMENT
+
+In MiniGrid, the world is a partially observable grid of size NxN. Each tile in the grid contains at most one object. The possible object types are wall, door, key, ball, box, and goal.
+
+Each object in MiniGrid has an associated discrete color, which can be one of red, green, blue, purple, yellow or grey. By default, walls are always grey and goal squares are always green. Rewards are sparse for all MiniGrid environments.
+
+There are seven actions in MiniGrid: turn left, turn right, move forward, pick up an object, drop an object, toggle and done. The agent can use the turn left and turn right action to rotate and face one of 4 possible directions (north, south, east, west). The move forward action makes the agent move from its current tile onto the tile in the direction it is currently facing, provided there is nothing on
+
+that tile, or that the tile contains an open door. The agent can open doors if they are right in front of it by using the toggle action.
+
+Observations in MiniGrid are partial and egocentric. By default, the agent sees a square of 7x7 tiles in the direction it is facing. These include the tile the agent is standing on. The agent cannot see through walls or closed doors. The observations are provided as a tensor of shape 7x7x3. However, note that these are not RGB images. Each tile is encoded using 3 integer values: one describing the type of object contained in the cell, one describing its color, and a flag indicating whether doors are open or closed. This compact encoding was chosen for space efficiency and to enable faster training. For all tasks, the agent gets an egocentric view of its surroundings, consisting of $3 \times 3$ pixels. A neural network parameterized as a CNN is used to process the visual observation.
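
To make the compact encoding concrete, the snippet below decodes a single tile triple into a human-readable description. The integer-to-name tables are abbreviated, illustrative versions of gym-minigrid's conventions, not the library's exact constants.

```python
# Illustrative decoding of one MiniGrid tile: (object type, color, state).
# The mappings below are a hypothetical abbreviation for demonstration.
OBJECTS = {0: "unseen", 1: "empty", 2: "wall", 4: "door", 5: "key",
           6: "ball", 7: "box", 8: "goal"}
COLORS = {0: "red", 1: "green", 2: "blue", 3: "purple", 4: "yellow", 5: "grey"}

def describe_tile(tile):
    obj, color, state = tile
    desc = f"{COLORS.get(color, 'unknown')} {OBJECTS.get(obj, 'unknown')}"
    # The third channel is only meaningful for doors (open vs. closed/locked).
    if OBJECTS.get(obj) == "door":
        desc += " (open)" if state == 0 else " (closed/locked)"
    return desc
```

A full observation would be a 7x7 array of such triples, one per visible tile.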
+
+The MultiRoomNXSY environment consists of X rooms, each of size at most Y, connected in random orientations. The agent is placed in the first room and must navigate to a green goal square in the room most distant from it. The agent receives an egocentric view of its surroundings, consisting of $3 \times 3$ pixels. The task increases in difficulty with X and Y. Episodes finish with a positive reward when the agent reaches the green goal square. Otherwise, episodes are terminated with zero reward after a maximum of $20 \times N$ steps.
+
+In the KeyCorridorS3R3 environment, the agent has to pick up an object which is behind a locked door. The key is hidden in another room, and the agent has to explore the environment to find it. Episodes finish with a positive reward when the agent picks up the ball behind the locked door or after a maximum of 270 steps.
+
+In the ObstructedMaze2Dlh environment, the agent has to pick up a box which is placed in a corner of a 3x3 maze. The doors are locked, the keys are hidden in boxes and the doors are obstructed by balls. Episodes finish with a positive reward when the agent picks up the ball behind the locked door or after a maximum of 576 steps.
+
+In the DynamicObstacles environment, the agent has to navigate to a fixed goal location while avoiding moving obstacles. In our experiments, the agent is randomly initialized to a location in the grid. If the agent collides with an obstacle, it receives a penalty of -1 and the episode ends.
+
+# A.4 VIZDOOM ENVIRONMENT
+
+We consider the Doom 3D navigation task where the action space of the agent consists of four discrete actions: move forward, move left, move right, and no-action. Our testing setup in all the experiments is the *DoomMyWayHome-v0* environment, which is available as part of OpenAI Gym (Brockman et al., 2016). Episodes are terminated either when the agent finds the vest or after a maximum of 2100 time steps. The map consists of 9 rooms connected by corridors, and the agent is tasked with reaching a fixed goal location from its spawning location. The agent is always spawned in Room-13, which is 270 steps away from the goal under an optimal policy. A long sequence of actions is required to reach the goal, making this setting a hard exploration problem. The agent is only provided a sparse terminal reward of +1 if it finds the vest and 0 otherwise. While this environment has sparse reward, it is not procedurally-generated, so the agent finds itself in exactly the same environment in each episode and does not need to generalize to different environment instantiations. This environment is identical to the "sparse" setting used in Pathak et al. (2017).
+
+# A.5 ABLATIONS
+
+In this section, we aim to better understand the effect of using episodic discounting as part of the intrinsic reward, as well as that of using entropy regularization as part of the IMPALA loss.
+
+Figure 9 compares the performance of our model on different MiniGrid tasks with that of three ablations. The first one only uses episodic state counts as exploration bonus without multiplying it by the impact-driven intrinsic reward (OnlyEpisodicCounts), the second one only uses the impact-driven exploration bonus without multiplying it by the episodic state count term (NoEpisodicCounts), while the third one is the NoEpisodicCounts model without the entropy regularization term in the IMPALA loss (NoEntropyNoEpisodicCounts).
+
+OnlyEpisodicCounts does not solve any of the tasks. NoEntropyNoEpisodicCounts either converges to a suboptimal policy or completely fails. In contrast, NoEpisodicCounts can solve the easier tasks, but it requires more interactions than RIDE and fails to learn on the hardest domain. During training, NoEpisodicCounts can get stuck cycling between two states (far apart in the embedding space), but due to entropy regularization it can sometimes escape such local optima (unlike NoEntropyNoEpisodicCounts) if it finds extrinsic reward. However, when the reward is too sparse, NoEpisodicCounts is insufficient while RIDE still succeeds, indicating the effectiveness of augmenting the impact-driven intrinsic reward with the episodic count term.
+
+Figure 9: Comparison between the performance of RIDE and three ablations: OnlyEpisodicCounts, NoEpisodicCounts, and NoEntropyNoEpisodicCounts.
+
+Figure 10 shows the average number of states visited during an episode of MultiRoomN12S10, measured at different training stages for our full RIDE model and the NoEpisodicCounts ablation. While the NoEpisodicCounts ablation always visits a low number of different states each episode $(\leq 10)$ , RIDE visits an increasing number of states throughout training (converging to $\sim 100$ for an optimal policy). Hence, it can be inferred that NoEpisodicCounts revisits some of the states. This claim can be further verified by visualizing the agents' behaviors. After training, NoEpisodicCounts goes back and forth between two states, while RIDE visits each state once on its path to the goal. Consistent with our intuition, discounting the intrinsic reward by the episodic state-count term does help to avoid this failure mode.
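
This failure mode can be reproduced numerically: an agent cycling between two distant embeddings keeps earning a constant impact bonus, while the episodic-count discount makes each revisit pay less. The toy sketch below is our own, with two hand-picked embeddings at L2 distance 5.

```python
# Compare intrinsic rewards for an agent ping-ponging between two states,
# with and without the episodic state-count discount used by RIDE.
import math
from collections import defaultdict

def cycle_rewards(steps, discounted):
    phis = {"a": [0.0, 0.0], "b": [3.0, 4.0]}  # toy embeddings, distance 5
    counts = defaultdict(int)                  # episodic visit counts
    rewards, cur = [], "a"
    for _ in range(steps):
        nxt = "b" if cur == "a" else "a"
        counts[nxt] += 1
        impact = math.dist(phis[cur], phis[nxt])
        rewards.append(impact / math.sqrt(counts[nxt]) if discounted else impact)
        cur = nxt
    return rewards

flat = cycle_rewards(6, discounted=False)    # without the count term: constant
decayed = cycle_rewards(6, discounted=True)  # with it: decays as 1/sqrt(n)
```

Without the count term the two-state cycle pays the same bonus forever, so the agent has no incentive to leave it; with the discount, repeated revisits within an episode become progressively less rewarding.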
+
+# A.6 ANALYSIS
+
+# A.6.1 STATE VISITATION IN MULTIROOMN12S10
+
+In this section, we analyze the behavior learned by the agents. Figure 11 shows the state visitation heatmaps for all models trained on $100\mathrm{m}$ frames of MultiRoomN12S10, which has a very sparse reward. Note that while our model has already reached the goal in the farthest room of the maze, Count has explored only about half of it, and RND and ICM are still in the first two rooms.
+
+
+Figure 10: Average number of states visited during an episode of MultiRoomN12S10, measured at different training stages for our full RIDE model (blue) and the NoEpisodicCounts ablation (orange).
+
+
+Figure 11: State visitation heatmaps for Count, RND, ICM, and RIDE (from left to right) trained for $100\mathrm{m}$ frames on MultiRoomN12S10.
+
+
+
+
+
+
+
+
+Figure 12: Intrinsic reward heatmaps for RND, ICM, and RIDE (from left to right) on MultiRoomN12S10.
+
+
+
+
+
+# A.6.2 INTRINSIC REWARD IN MULTIROOMN12S10
+
+Figure 12 shows a heatmap of the intrinsic reward received by RIDE, RND, and ICM agents trained on the procedurally-generated MultiRoomN12S10 environment. Table 3 shows the corresponding intrinsic rewards received for each type of action, averaged over 100 episodes, for the trained models. This environment is very challenging since the chance of randomly stumbling upon extrinsic reward is extremely low. Thus, we see that while the intrinsic reward provided by RIDE is still effective at exploring the maze and finding extrinsic reward, the exploration bonuses used by RND and ICM are less useful, leading to agents that do not go beyond the second room, even after training on 100m frames.
+
+| Model | Open Door (Mean) | Open Door (Std) | Turn Left / Right (Mean) | Turn Left / Right (Std) | Move Forward (Mean) | Move Forward (Std) |
+| --- | --- | --- | --- | --- | --- | --- |
+| RIDE | 0.0116 | 0.0011 | 0.0042 | 0.0020 | 0.0032 | 0.0016 |
+| RND | 0.0041 | 0.0016 | 0.0035 | 0.0013 | 0.0034 | 0.0012 |
+| ICM | 0.0082 | 0.0003 | 0.0074 | 0.0005 | 0.0086 | 0.0002 |
+
+Table 3: Mean intrinsic reward per action computed over 100 episodes on a random map from MultiRoomN12S10.
+
+
+
+
+
+
+
+
+
+
+Figure 13: Intrinsic reward heatmaps for RND (left) and RIDE (right) for interacting with objects (i.e. open doors, pick up / drop keys or balls) (green), moving forward (blue), or turning left or right (red) on a random map from ObstructedMaze2Dlh. A is the agent's starting position, K are the keys hidden inside boxes (that need to be opened in order to see their colors), D are colored doors that can only be opened by keys with the same color, and B is the ball that the agent needs to pick up in order to win the game. After passing through the door the agent also needs to drop the key in order to be able to pick up the ball since it can only hold one object at a time.
+
+
+
+# A.6.3 INTRINSIC REWARD IN OBSTRUCTEDMAZE2DLH
+
+In order to understand how various interactions with objects are rewarded by the different exploration methods, we also looked at the intrinsic reward in the ObstructedMaze2Dlh environment which contains multiple objects. However, the rooms are connected by locked doors and the keys for unlocking the doors are hidden inside boxes. The agent does not know in which room the ball is located and it needs the color of the key to match that of the door in order to open it. Moreover, the agent cannot hold more than one object so it needs to drop one in order to pick up another.
+
+Figure 13 and Table 4 indicate that RIDE rewards the agent significantly more for interacting with various objects (e.g. opening the box, picking up the key, opening the door, dropping the key, picking up the ball) than for other actions such as moving forward or turning left and right. In contrast, RND again rewards all actions much more uniformly and often, within an episode, it rewards the interactions with objects less than the ones for moving around inside the maze.
+
+| Model | Open Door (Mean) | Open Door (Std) | Pick Ball (Mean) | Pick Ball (Std) | Pick Key (Mean) | Pick Key (Std) | Drop Key (Mean) | Drop Key (Std) | Other (Mean) | Other (Std) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| RIDE | 0.0005 | 0.0002 | 0.0004 | 0.0001 | 0.0004 | 0.00001 | 0.0004 | 0.00007 | 0.0003 | 0.00001 |
+| RND | 0.0034 | 0.0015 | 0.0027 | 0.0006 | 0.0026 | 0.0060 | 0.0030 | 0.0010 | 0.0025 | 0.0006 |
+
+Table 4: Mean intrinsic reward per action computed over 100 episodes on a random map from ObstructedMaze2Dlh.
+
+
+Figure 14: Performance on DynamicObstacles with varying degrees of difficulty.
+
+
+
+
+
+
+
+
+Figure 15: Performance on a version of MultiRoomN7S4 in which the colors of the walls and goals are randomly picked from a set of 4 colors at the beginning of each episode.
+
+# A.7 DYNAMIC OBSTACLES ENVIRONMENT
+
+One potential limitation of RIDE is that it may be drawn to take actions that significantly change the environment, even when those actions are undesirable. In order to test the limits of RIDE, we ran experiments on the DynamicObstacles environment in MiniGrid. As seen in Figure 14, RIDE learns to solve the task of avoiding the moving obstacles in the environment, even if chasing them provides large intrinsic rewards. Hence, RIDE is still able to learn effectively in certain scenarios in which high-impact actions are detrimental to solving the task.
+
+# A.8 GENERALIZATION TO UNSEEN COLORS
+
+In order to test generalization to unseen colors, we also ran experiments on a version of MultiRoomN7S4 in which the colors of the walls and the goal change at each episode. The models are trained on a set of 4 colors and tested on a held-out set of 2 colors. As seen in Figure 15 and Table 5, RIDE and Count learn to solve this task and can generalize to unseen colors at test time without any extra fine-tuning. RND and ICM perform slightly worse on the test environments, and only one out of five seeds of ICM converges to the optimal policy on the train environments. The best seed for each model was used to evaluate on the test set.
+
+| Model | Test Return (Mean) | Test Return (Std) |
+| --- | --- | --- |
+| RIDE | 0.77 | 0.02 |
+| Count | 0.77 | 0.02 |
+| RND | 0.76 | 0.11 |
+| ICM | 0.73 | 0.03 |
+| IMPALA | 0.00 | 0.00 |
+
+Table 5: Average return over 100 episodes on a version of MultiRoomN7S4 in which the colors of the walls and goals change with each episode. The models were trained until convergence on a set of 4 colors and tested on a held-out set of 2 colors.
+
+# A.9 OTHER PRACTICAL INSIGHTS
+
+While developing this work, we also experimented with a few other variations of RIDE that did not work. First, we tried to use observations instead of learned state embeddings for computing the RIDE reward, but this was not able to solve any of the tasks. Using a common state representation for both the policy and the embeddings also proved to be ineffective.
\ No newline at end of file
diff --git a/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/images.zip b/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..dffd1930ed9cbb3772e63070ad63cdbf1d52931d
--- /dev/null
+++ b/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0bac1492ff25be2c5b6dcd881765cd57e0c4f5c18c0784a52fc554a6ebf3fd00
+size 833990
diff --git a/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/layout.json b/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..0716bb03674f701bbfd9939131e9a29311375d1e
--- /dev/null
+++ b/riderewardingimpactdrivenexplorationforprocedurallygeneratedenvironments/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33f62bea9587e22c326daf7e272c1afc3c57ef4d0083c4d9026f3d2e859e02d4
+size 521057
diff --git a/rnasecondarystructurepredictionbylearningunrolledalgorithms/07390216-b704-4c34-83f3-19665db33263_content_list.json b/rnasecondarystructurepredictionbylearningunrolledalgorithms/07390216-b704-4c34-83f3-19665db33263_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..53f56236a1dad88440dcb249e705916ab5ecf580
--- /dev/null
+++ b/rnasecondarystructurepredictionbylearningunrolledalgorithms/07390216-b704-4c34-83f3-19665db33263_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f8ed885e2b28781b22e6c2a6ff5233aab3f2a82fa888f1d5333c6e70883516a8
+size 112934
diff --git a/rnasecondarystructurepredictionbylearningunrolledalgorithms/07390216-b704-4c34-83f3-19665db33263_model.json b/rnasecondarystructurepredictionbylearningunrolledalgorithms/07390216-b704-4c34-83f3-19665db33263_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..4aab434b49449b1b81bc55efd25db14320d55104
--- /dev/null
+++ b/rnasecondarystructurepredictionbylearningunrolledalgorithms/07390216-b704-4c34-83f3-19665db33263_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b4492e4fa3adc97a83ecb5e80740007fb454dcecb34fa425d068047a521b49a5
+size 137376
diff --git a/rnasecondarystructurepredictionbylearningunrolledalgorithms/07390216-b704-4c34-83f3-19665db33263_origin.pdf b/rnasecondarystructurepredictionbylearningunrolledalgorithms/07390216-b704-4c34-83f3-19665db33263_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2c32f5cc983b3c8b52df4b84e89d923be8a9d065
--- /dev/null
+++ b/rnasecondarystructurepredictionbylearningunrolledalgorithms/07390216-b704-4c34-83f3-19665db33263_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:40640452835f9848bc4f1dea3c20de74b165dbc290ba7bcb84ed7bad7b0339bb
+size 9600167
diff --git a/rnasecondarystructurepredictionbylearningunrolledalgorithms/full.md b/rnasecondarystructurepredictionbylearningunrolledalgorithms/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d2cd3bd2ec397f4aaa493e37694c8d3a7e6d6161
--- /dev/null
+++ b/rnasecondarystructurepredictionbylearningunrolledalgorithms/full.md
@@ -0,0 +1,495 @@
+# RNA SECONDARY STRUCTURE PREDICTION BY LEARNING UNROLLED ALGORITHMS
+
+Xinshi Chen $^{1*}$ , Yu Li $^{2*}$ , Ramzan Umarov $^{2}$ , Xin Gao $^{2,\dagger}$ , Le Song $^{1,3,\dagger}$
+
+1Georgia Tech 2KAUST 3Ant Financial
+
+xinshi.chen@gatech.edu
+
+{yu.li;ramzan.umarov;xin.gao}@kaust.edu.sa
+
+lsong@cc.gatech.edu
+
+# ABSTRACT
+
+In this paper, we propose an end-to-end deep learning model, called E2Efold, for RNA secondary structure prediction which can effectively take into account the inherent constraints in the problem. The key idea of E2Efold is to directly predict the RNA base-pairing matrix, and use an unrolled algorithm for constrained programming as the template for deep architectures to enforce constraints. With comprehensive experiments on benchmark datasets, we demonstrate the superior performance of E2Efold: it predicts significantly better structures compared to previous SOTA (especially for pseudoknotted structures), while being as efficient as the fastest algorithms in terms of inference time.
+
+# 1 INTRODUCTION
+
+Ribonucleic acid (RNA) is a molecule playing essential roles in numerous cellular processes and regulating expression of genes (Crick, 1970). It consists of an ordered sequence of nucleotides, with each nucleotide containing one of four bases: Adenine (A), Guanine (G), Cytosine (C) and Uracil (U). This sequence of bases can be represented as
+
+$$
+\boldsymbol{x} := (x_{1}, \dots, x_{L}) \quad \text{where } x_{i} \in \{A, G, C, U\},
+$$
+
+which is known as the primary structure of RNA. The bases can bond with one another to form a set of base-pairs, which defines the secondary structure. A secondary structure can be represented by a binary matrix $A^{*}$ where $A_{ij}^{*} = 1$ if the $i,j$ -th bases are paired (Fig 1). Discovering the secondary structure of RNA is important for understanding functions of RNA since the structure essentially affects the interaction and reaction between RNA and other cellular components. Although secondary structure can be determined by experimental assays (e.g. X-ray diffraction), this is slow, expensive and technically challenging. Therefore, computational prediction of RNA secondary structure becomes an important task in RNA research and is useful in many applications such as drug design (Iorns et al., 2007).
+
+
+Figure 1: Graph and matrix representations of RNA secondary structure.
+
+Research on computational prediction of RNA secondary structure from knowledge of primary structure has been carried out for decades. Most existing methods assume the secondary structure is a result of energy minimization, i.e., $A^{*} = \arg \min_{A}E_{x}(A)$ . The energy function is either estimated by physics-based thermodynamic experiments (Lorenz et al., 2011; Bellaousov et al., 2013; Markham & Zuker, 2008) or learned from data (Do et al., 2006). These approaches are faced with a common problem: the search space of all valid secondary structures is exponentially large with respect to the length $L$ of the sequence. To make the minimization tractable, it is often assumed that the base-pairing has a nested structure (Fig 2 left) and that the energy function factorizes pairwise. With this assumption, dynamic programming (DP) based algorithms can iteratively find the optimal structure for subsequences and thus consider an enormous number of structures in time $\mathcal{O}(L^3)$.
+
+
+Figure 2: Nested and non-nested structures.
+
+Although DP-based algorithms have dominated RNA structure prediction, it is notable that they restrict the search space to nested structures, which excludes some valid yet biologically important RNA secondary structures that contain 'pseudoknots', i.e., elements with at least two non-nested base-pairs (Fig 2 right). Pseudoknots make up roughly $1.4\%$ of base-pairs (Mathews & Turner, 2006), and are overrepresented in functionally important regions (Hajdin et al., 2013; Staple & Butcher, 2005). Furthermore, pseudoknots are present in around $40\%$ of the RNAs. They also assist folding into 3D structures (Fechter et al., 2001) and thus should not be ignored. To predict RNA structures with pseudoknots, energy-based methods need to run more computationally intensive algorithms to decode the structures.
+
+In summary, in the presence of more complex structured output (i.e., pseudoknots), it is challenging for energy-based approaches to simultaneously take into account the complex constraints while being efficient. In this paper, we adopt a different viewpoint by assuming that the secondary structure is the output of a feed-forward function, i.e., $A^{*} = \mathcal{F}_{\theta}(\boldsymbol{x})$ , and propose to learn $\theta$ from data in an end-to-end fashion. It avoids the second minimization step needed in energy function based approach, and does not require the output structure to be nested. Furthermore, the feed-forward model can be fitted by directly optimizing the loss that one is interested in.
+
+Despite the above advantages of using a feed-forward model, the architecture design is challenging. To be more concrete, in the RNA case, $\mathcal{F}_{\theta}$ is difficult to design for the following reasons:
+
+(i) RNA secondary structure needs to obey certain hard constraints (see details in Section 3), which means certain kinds of pairings cannot occur at all (Steeg, 1993). Ideally, the output of $\mathcal{F}_{\theta}$ needs to satisfy these constraints.
+(ii) The number of RNA data points is limited, so we cannot expect that a naive fully connected network can learn the predictive information and constraints directly from data. Thus, inductive biases need to be encoded into the network architecture.
+(iii) One may take a two-step approach, where a post-processing step can be carried out to enforce the constraints when $\mathcal{F}_{\theta}$ predicts an invalid structure. However, in this design, the deep network trained in the first stage is unaware of the post-processing stage, making less effective use of the potential prior knowledge encoded in the constraints.
+
+In this paper, we present an end-to-end deep learning solution which integrates the two stages. The first part of the architecture is a transformer-based deep model called Deep Score Network which represents sequence information useful for structure prediction. The second part is a multilayer network called Post-Processing Network which gradually enforces the constraints and restricts the output space. It is designed based on an unrolled algorithm for solving a constrained optimization. These two networks are coupled together and learned jointly in an end-to-end fashion. Therefore, we call our model E2Efold.
+
+
+Figure 3: Output space of E2Efold.
+
+By using an unrolled algorithm as the inductive bias to design Post-Processing Network, the output space of E2Efold is constrained (illustrated in Fig 3), which makes it easier to learn a good model in the case of limited data and also reduces the overfitting issue. Yet, the constraints encoded in E2Efold are flexible enough such that pseudoknots are not excluded. In summary, E2Efold strikes a nice balance between model biases for learning and expressiveness for valid RNA structures.
+
+We conduct extensive experiments to compare E2Efold with state-of-the-art (SOTA) methods on several RNA benchmark datasets, showing superior performance of E2Efold including:
+
+- being able to predict valid RNA secondary structures including pseudoknots;
+- running as efficient as the fastest algorithm in terms of inference time;
+- producing structures that are visually close to the true structure;
+- better than previous SOTA in terms of F1 score, precision and recall.
+
+Although in this paper we focus on RNA secondary structure prediction, which presents an important and concrete problem where E2Efold leads to significant improvements, our method is generic
+
+and can be applied to other problems where constraints need to be enforced or prior knowledge is provided. We imagine that our design idea of learning unrolled algorithm to enforce constraints can also be transferred to problems such as protein folding and natural language understanding problems (e.g., building correspondence structure between different parts in a document).
+
+# 2 RELATED WORK
+
+Classical RNA folding methods identify candidate structures for an RNA sequence via energy minimization through DP and rely on thousands of experimentally-measured thermodynamic parameters. A few widely used methods such as RNAstructure (Bellaousov et al., 2013), Vienna RNAfold (Lorenz et al., 2011) and UNAFold (Markham & Zuker, 2008) adopted this approach. These methods typically scale as $\mathcal{O}(L^3)$ in time and $\mathcal{O}(L^2)$ in storage (Mathews, 2006), making them slow for long sequences. A recent advance called LinearFold (Huang et al., 2019) achieved linear run time $\mathcal{O}(L)$ by applying beam search, but it cannot handle pseudoknots in RNA structures. The prediction of lowest free energy structures with pseudoknots is NP-complete (Lyngsø & Pedersen, 2000), so pseudoknots are not considered in most algorithms. Heuristic algorithms such as HotKnots (Andronescu et al., 2010) and Probknots (Bellaousov & Mathews, 2010) have been developed to predict structures with pseudoknots, but their predictive accuracy and efficiency still need to be improved.
+
+Learning-based RNA folding methods such as ContraFold (Do et al., 2006) and ContextFold (Zakov et al., 2011) have been proposed for energy parameter estimation due to the increasing availability of known RNA structures, resulting in higher prediction accuracies, but these methods still rely on the above DP-based algorithms for energy minimization. A recent deep learning model, CDPfold (Zhang et al., 2019), applied convolutional neural networks to predict base-pairings, but it adopts the dot-bracket representation for RNA secondary structure, which cannot represent pseudoknotted structures. Moreover, it requires a DP-based post-processing step whose computational complexity is prohibitive for sequences longer than a few hundred bases.
+
+Learning with differentiable algorithms is a useful idea that inspires a series of works (Hershey et al., 2014; Belanger et al., 2017; Ingraham et al., 2018; Chen et al., 2018; Shrivastava et al., 2019), which share the idea of using differentiable unrolled algorithms as building blocks in neural architectures. Some models are also applied to structured prediction problems (Hershey et al., 2014; Pillutla et al., 2018; Ingraham et al., 2018), but they did not consider the challenging RNA secondary structure problem or discuss how to properly incorporate constraints into the architecture. OptNet (Amos & Kolter, 2017) integrates constraints by differentiating KKT conditions, but it has cubic complexity in the number of variables and constraints, which is prohibitive for the RNA case.
+
+Dependency parsing in NLP is a different but related problem to RNA folding. It predicts the dependency between the words in a sentence. Similar to nested/non-nested structures, the corresponding terms in NLP are projective/non-projective parsing, where most works focus on the former and DP-based inference algorithms are commonly used (McDonald et al., 2005). Deep learning models (Dozat & Manning, 2016; Kiperwasser & Goldberg, 2016) have been proposed to score the dependency between words, which has a similar flavor to the Deep Score Network in our work.
+
+# 3 RNA SECONDARY STRUCTURE PREDICTION PROBLEM
+
+In the RNA secondary structure prediction problem, the input is the ordered sequence of bases $\pmb{x} = (x_{1},\dots,x_{L})$ and the output is the RNA secondary structure represented by a matrix $A^{*}\in \{0,1\}^{L\times L}$ . Hard constraints on the forming of an RNA secondary structure dictate that certain kinds of pairings cannot occur at all (Steeg, 1993). Formally, these constraints are:
+
+(i) Only three types of nucleotide combinations, $\mathcal{B} := \{AU, UA\} \cup \{GC, CG\} \cup \{GU, UG\}$ , can form base-pairs: $\forall i, j$ , if $x_i x_j \notin \mathcal{B}$ , then $A_{ij} = 0$ .
+(ii) No sharp loops are allowed: $\forall |i - j| < 4$ , $A_{ij} = 0$ .
+(iii) There is no overlap of pairs, i.e., it is a matching: $\forall i$ , $\sum_{j=1}^{L} A_{ij} \leq 1$ .
+
+(i) and (ii) prevent pairing of certain base-pairs based on their types and relative locations. Incorporating these two constraints can help the model exclude lots of illegal pairs. (iii) is a global constraint among the entries of $A^{*}$ .
+
+The space of all valid secondary structures contains all symmetric matrices $A \in \{0,1\}^{L \times L}$ that satisfy the above three constraints. This space is much smaller than the space of all binary matrices $\{0,1\}^{L \times L}$ . Therefore, if we could incorporate these constraints in our deep model, the reduced output space could help us train a better predictive model with less training data. We do this by using an unrolled algorithm as the inductive bias to design deep architecture.
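For concreteness, the three constraints can be checked mechanically. The sketch below (illustrative names, not code from the paper) validates a candidate binary pairing matrix against a sequence:

```python
# Check a candidate pairing matrix A against constraints (i)-(iii) plus symmetry.
ALLOWED = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def is_valid_structure(seq, A):
    L = len(seq)
    for i in range(L):
        if sum(A[i]) > 1:                        # (iii) a matching: <= 1 partner
            return False
        for j in range(L):
            if A[i][j] != A[j][i]:               # A must be symmetric
                return False
            if A[i][j] == 1:
                if (seq[i], seq[j]) not in ALLOWED:  # (i) allowed base combination
                    return False
                if abs(i - j) < 4:                   # (ii) no sharp loops
                    return False
    return True

seq = "GCAAAGC"
A = [[0] * 7 for _ in range(7)]
A[0][6] = A[6][0] = 1    # G-C pair with |i - j| = 6 >= 4: valid
```

Enumerating all matrices that pass this check would be exponential in $L$, which is exactly why the reduced, constraint-respecting output space matters for learning.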
+
+# 4 E2EFOLD: DEEP LEARNING MODEL BASED ON UNROLLED ALGORITHM
+
+In the literature on feed-forward networks for structured prediction, most models are designed using traditional deep learning architectures. However, for RNA secondary structure prediction, directly using these architectures does not work well due to the limited amount of RNA data points and the hard constraints on forming an RNA secondary structure. These challenges motivate the design of our E2Efold deep model, which combines a Deep Score Network with a Post-Processing Network based on an unrolled algorithm for solving a constrained optimization problem.
+
+# 4.1 DEEP SCORE NETWORK
+
+The first part of E2Efold is a Deep Score Network $U_{\theta}(\pmb{x})$ whose output is an $L \times L$ symmetric matrix. Each entry of this matrix, i.e., $U_{\theta}(\pmb{x})_{ij}$ , indicates the score of nucleotides $x_{i}$ and $x_{j}$ being paired. The $\pmb{x}$ input to the network here is the $L \times 4$ dimensional one-hot embedding. The specific architecture of $U_{\theta}$ is shown in Fig 4. It mainly consists of
+
+- a position embedding matrix $\pmb{P}$ which distinguishes $\{x_{i}\}_{i = 1}^{L}$ by their exact and relative positions: $P_{i} = \mathrm{MLP}\big(\psi_{1}(i),\dots ,\psi_{\ell}(i),\psi_{\ell +1}(i / L),\dots ,\psi_{n}(i / L)\big)$ , where $\{\psi_j\}$ is a set of $n$ feature maps such as $\sin (\cdot)$ , poly(·), sigmoid(·), etc., and $\mathrm{MLP}(\cdot)$ denotes a multi-layer perceptron. Such a position embedding idea has been used in natural language modeling such as BERT (Devlin et al., 2018), but we adapt it for RNA sequence representation;
+- a stack of Transformer Encoders (Vaswani et al., 2017) which encode the sequence information and the global dependency between nucleotides;
+- 2D convolution layers (Wang et al., 2017) for outputting the pairwise scores.
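As an illustration of the position-embedding bullet above, the feature maps $\psi_j$ could be instantiated as follows (the specific choices are assumptions; the text only names $\sin(\cdot)$ , poly(·) and sigmoid(·) as examples):

```python
import math

def position_features(i, L):
    """Features of absolute position i and relative position i/L that an MLP
    would map to the position embedding P_i."""
    t = i / L
    return [
        math.sin(i), math.cos(i),        # periodic features of i
        float(i), float(i) ** 2,         # polynomial features of i
        1.0 / (1.0 + math.exp(-t)),      # sigmoid of relative position
        t, t ** 2,                       # polynomial features of i / L
    ]
```

Feeding both absolute and relative position lets the embedding distinguish nucleotides in sequences of different lengths.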
+
+With the representation power of neural networks, the hope is that we can learn an informative $U_{\theta}$ such that higher scoring entries in $U_{\theta}(\pmb{x})$ correspond well to actual paired bases in RNA structure. Once the score matrix $U_{\theta}(\pmb{x})$ is computed, a naive approach to use it is to choose an offset term $s \in \mathbb{R}$ (e.g., $s = 0$ ) and let $A_{ij} = 1$ if $U_{\theta}(\pmb{x})_{ij} > s$ . However, such entry-wise independent predictions of $A_{ij}$ may result in a matrix $A$ that violates the constraints for a valid RNA secondary structure. Therefore, a post-processing step is needed to make sure the predicted $A$ is valid. This step could be carried out separately after $U_{\theta}$ is learned. But such decoupling of base-pair scoring and post-processing for constraints may lead to sub-optimal results, where the errors in these two stages can not be considered together and tuned together. Instead, we will introduce a Post-Processing Network which can be trained end-to-end together with $U_{\theta}$ to enforce the constraints.
+
+# 4.2 POST-PROCESSING NETWORK
+
+The second part of E2Efold is a Post-Processing Network $\mathrm{PP}_{\phi}$ which is an unrolled and parameterized algorithm for solving a constrained optimization problem. We first present how we formulate the post-processing step as a constrained optimization problem and the algorithm for solving it. After that, we show how we use the algorithm as a template to design the deep architecture $\mathrm{PP}_{\phi}$ .
+
+
+Figure 4: Architecture of Deep Score Network.
+
+# 4.2.1 POST-PROCESSING WITH CONSTRAINED OPTIMIZATION
+
+Formulation of constrained optimization. Given the scores predicted by $U_{\theta}(\pmb{x})$ , we define the total score $\frac{1}{2}\sum_{i,j}(U_{\theta}(\pmb{x})_{ij} - s)A_{ij}$ as the objective to maximize, where $s$ is an offset term. Clearly, without structure constraints, the optimal solution is to take $A_{ij} = 1$ when $U_{\theta}(\pmb{x})_{ij} > s$ . Intuitively, the objective measures the covariation between the entries in the scoring matrix and the $A$ matrix. With constraints, the exact maximization becomes intractable. To make it tractable, we consider a convex relaxation of this discrete optimization to a continuous one by allowing $A_{ij} \in [0,1]$ . Consequently, the solution space that we consider to optimize over is $\mathcal{A}(\pmb{x}) := \{A \in [0,1]^{L \times L} \mid A$ is symmetric and satisfies constraints (i)-(iii) in Section 3\}.
+
+To further simplify the search space, we define a nonlinear transformation $\mathcal{T}$ on $\mathbb{R}^{L\times L}$ as $\mathcal{T}(\hat{A})\coloneqq \frac{1}{2}\big(\hat{A}\circ \hat{A} +(\hat{A}\circ \hat{A})^\top \big)\circ M(\pmb {x})$ , where $\circ$ denotes element-wise multiplication. Matrix $M$ is defined as $M(\pmb {x})_{ij}\coloneqq 1$ if $x_{i}x_{j}\in \mathcal{B}$ and also $|i - j|\geq 4$ , and $M(\pmb {x})_{ij}\coloneqq 0$ otherwise. From this definition we can see that $M(\pmb {x})$ encodes both constraint (i) and (ii). With transformation $\mathcal{T}$ , the resulting matrix is non-negative, symmetric, and satisfies constraint (i) and (ii). Hence, by defining $A\coloneqq \mathcal{T}(\hat{A})$ , the solution space is simplified as $\mathcal{A}(\pmb {x}) = \{A = \mathcal{T}(\hat{A})\mid \hat{A}\in \mathbb{R}^{L\times L},A\mathbf{1}\leq \mathbf{1}\}$
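The mask $M(\pmb{x})$ and the transform $\mathcal{T}$ defined above translate directly into NumPy (assumed input layout: a Python string over {A, G, C, U}; an illustration, not the released implementation):

```python
import numpy as np

ALLOWED = {"AU", "UA", "GC", "CG", "GU", "UG"}

def build_mask(x):
    """M(x)_ij = 1 iff x_i x_j is an allowed pair and |i - j| >= 4."""
    L = len(x)
    M = np.zeros((L, L))
    for i in range(L):
        for j in range(L):
            if x[i] + x[j] in ALLOWED and abs(i - j) >= 4:
                M[i, j] = 1.0
    return M

def transform_T(A_hat, M):
    """T(A_hat) = 1/2 (A_hat∘A_hat + (A_hat∘A_hat)^T) ∘ M: the result is
    nonnegative, symmetric, and zero wherever (i)-(ii) forbid a pair."""
    sq = A_hat * A_hat
    return 0.5 * (sq + sq.T) * M
```

Because `ALLOWED` is symmetric under swapping the two bases, `M` is symmetric, so the output of `transform_T` is symmetric for any real-valued input.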
+
+Finally, we introduce a $\ell_1$ penalty term $\| \hat{A}\| _1\coloneqq \sum_{i,j}|\hat{A}_{ij}|$ to make $A$ sparse and formulate the post-processing step as: $(\langle \cdot ,\cdot \rangle$ denotes matrix inner product, i.e., sum of entry-wise multiplication)
+
+$$
+\max _ {\hat {A} \in \mathbb {R} ^ {L \times L}} \frac {1}{2} \left\langle U _ {\theta} (\boldsymbol {x}) - s, A := \mathcal {T} (\hat {A}) \right\rangle - \rho \| \hat {A} \| _ {1} \quad \text {s.t.}\ A \mathbf {1} \leq \mathbf {1} \tag {1}
+$$
+
+The advantages of this formulation are that the variables $\hat{A}_{ij}$ are free variables in $\mathbb{R}$ and there are only $L$ inequality constraints $A\mathbf{1} \leq \mathbf{1}$ . This system of linear inequalities can be replaced by a set of nonlinear equalities $\mathrm{relu}(A\mathbf{1} - \mathbf{1}) = \mathbf{0}$ so that the constrained problem can be easily transformed into an unconstrained problem by introducing a Lagrange multiplier $\lambda \in \mathbb{R}_+^L$ :
+
+$$
+\min _ {\boldsymbol {\lambda} \geq \mathbf {0}} \max _ {\hat {A} \in \mathbb {R} ^ {L \times L}} \underbrace {\frac {1}{2} \langle U _ {\theta} (\boldsymbol {x}) - s , A \rangle - \langle \boldsymbol {\lambda} , \operatorname {relu} (A \mathbf {1} - \mathbf {1}) \rangle} _ {f} - \rho \| \hat {A} \| _ {1}. \tag {2}
+$$
+
+Algorithm for solving it. We use a primal-dual method for solving Eq. 2 (derived in Appendix B). In each iteration, $\hat{A}$ and $\lambda$ are updated alternatively by:
+
+(primal) gradient step: $\dot{A}_{t + 1}\gets \hat{A}_t + \alpha \cdot \gamma_\alpha^t\cdot \hat{A}_t\circ M(\pmb {x})\circ \left(\partial f / \partial A_t + (\partial f / \partial A_t)^\top\right),$ (3)
+
+where $\left\{ \begin{array}{ll} \frac{\partial f}{\partial A_t} = \frac{1}{2} (U_\theta (\pmb{x}) - s) - (\pmb{\lambda} \circ \mathrm{sign}(A_t\mathbf{1} - \mathbf{1}))\mathbf{1}^\top, \\ \mathrm{sign}(c) := 1 \text{ when } c > 0 \text{ and } 0 \text{ otherwise}, \end{array} \right.$ (4)
+
+(primal) soft threshold: $\hat{A}_{t + 1}\gets \mathrm{relu}(|\dot{A}_{t + 1}| - \rho \cdot \alpha \cdot \gamma_{\alpha}^{t}),\quad A_{t + 1}\gets \mathcal{T}(\hat{A}_{t + 1}),$ (5)
+
+(dual) gradient step: $\lambda_{t + 1}\gets \lambda_{t} + \beta \cdot \gamma_{\beta}^{t}\cdot \mathrm{relu}(A_{t + 1}\mathbf{1} - \mathbf{1}),$ (6)
+
+where $\alpha, \beta$ are step sizes and $\gamma_{\alpha}, \gamma_{\beta}$ are decaying coefficients. When the algorithm converges at step $T$ , an approximate solution $\mathrm{Round}(A_T)$ with $A_T = \mathcal{T}(\hat{A}_T)$ is obtained. With this algorithm operated on the learned $U_{\theta}(\boldsymbol{x})$ , even if this step is disconnected from the training phase of $U_{\theta}(\boldsymbol{x})$ , the final prediction works much better than many other existing methods (as reported in Section 6). Next, we introduce how to couple this post-processing step with the training of $U_{\theta}(\boldsymbol{x})$ to further improve the performance.
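The updates in Eqs. 3-6 can be sketched in NumPy as follows (hand-picked fixed hyperparameters; the hard sign of Eq. 4 is kept here, whereas the learned network later replaces it with a smoothed version):

```python
import numpy as np

def T(A_hat, M):
    # T(A_hat) = 1/2 (A_hat∘A_hat + (A_hat∘A_hat)^T) ∘ M
    sq = A_hat * A_hat
    return 0.5 * (sq + sq.T) * M

def primal_dual(U, M, s=0.0, T_steps=20, alpha=0.01, beta=0.1,
                g_a=0.99, g_b=0.99, rho=0.01):
    L = U.shape[0]
    A_hat = np.full((L, L), 0.5)
    lam = np.zeros(L)
    for t in range(T_steps):
        A = T(A_hat, M)
        sign = (A.sum(axis=1) - 1 > 0).astype(float)                  # Eq. (4)
        grad = 0.5 * (U - s) - np.outer(lam * sign, np.ones(L))
        A_dot = A_hat + alpha * g_a**t * A_hat * M * (grad + grad.T)  # Eq. (3)
        A_hat = np.maximum(np.abs(A_dot) - rho * alpha * g_a**t, 0.0) # Eq. (5)
        A = T(A_hat, M)
        lam = lam + beta * g_b**t * np.maximum(A.sum(axis=1) - 1, 0.0)  # Eq. (6)
    return (A > 0.5).astype(int)   # Round(A_T)
```

Multiplying the primal gradient by `M` keeps forbidden entries frozen, and the dual variable only grows for rows whose pairing mass exceeds one, which is the mechanism enforcing constraint (iii).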
+
+# 4.2.2 POST-PROCESSING NETWORK VIA AN UNROLLED ALGORITHM
+
+We design a Post-Processing Network, denoted by $\mathrm{PP}_{\phi}$ , based on the above algorithm. After it is defined, we can connect it with the deep score network $U_{\theta}$ and train them jointly in an end-to-end fashion, so that the training phase of $U_{\theta}(\pmb{x})$ is aware of the post-processing step.
+
+Algorithm 1: Post-Processing Network $\mathrm{PP}_{\phi}(U,M)$
+
+Parameters: $\phi := \{w,s,\alpha,\beta,\gamma_{\alpha},\gamma_{\beta},\rho\}$
+$U \leftarrow \mathrm{softsign}(U - s) \circ U$
+$\hat{A}_0 \leftarrow \mathrm{softsign}(U - s) \circ \mathrm{sigmoid}(U)$
+$A_0 \leftarrow \mathcal{T}(\hat{A}_0); \quad \lambda_0 \leftarrow w \cdot \mathrm{relu}(A_0\mathbf{1} - \mathbf{1})$
+For $t = 0, \dots, T - 1$ do
+ $\lambda_{t+1}, A_{t+1}, \hat{A}_{t+1} = \mathrm{PPcell}_{\phi}(U, M, \lambda_t, A_t, \hat{A}_t, t)$
+Return $\{A_t\}_{t=1}^{T}$
+
+Algorithm 2: Neural Cell $\mathrm{PPcell}_{\phi}$
+
+Function $\mathrm{PPcell}_{\phi}(U, M, \lambda, A, \hat{A}, t)$ :
+ $G \leftarrow \frac{1}{2} U - (\lambda \circ \mathrm{softsign}(A\mathbf{1} - \mathbf{1}))\mathbf{1}^{\top}$
+ $\dot{A} \leftarrow \hat{A} + \alpha \cdot \gamma_{\alpha}^{t} \cdot \hat{A} \circ M \circ (G + G^{\top})$
+ $\hat{A} \leftarrow \mathrm{relu}(|\dot{A}| - \rho \cdot \alpha \cdot \gamma_{\alpha}^{t})$
+ $\hat{A} \leftarrow 1 - \mathrm{relu}(1 - \hat{A})$ [i.e., $\min(\hat{A}, 1)$ ]
+ $A \leftarrow \mathcal{T}(\hat{A}); \quad \lambda \leftarrow \lambda + \beta \cdot \gamma_{\beta}^{t} \cdot \mathrm{relu}(A\mathbf{1} - \mathbf{1})$
+ Return $\lambda, A, \hat{A}$
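As a concrete illustration, Algorithm 2 can be sketched in PyTorch as below. This is a minimal sketch, not the released implementation: we assume $\mathcal{T}(\hat{A})$ squares $\hat{A}$ elementwise, applies the mask $M$, and symmetrizes (its exact form is defined earlier in the paper), and we fold the learnable parameters into plain floats.

```python
import torch

def softsign(c, k=10.0):
    # smoothed sign: 1 / (1 + exp(-k c)), with temperature k
    return torch.sigmoid(k * c)

def t_map(a_hat, m):
    # assumed form of T(Â): square elementwise, mask with M, symmetrize
    b = a_hat * a_hat * m
    return 0.5 * (b + b.transpose(-1, -2))

def pp_cell(u, m, lam, a, a_hat, t,
            alpha=0.01, beta=0.1, gamma_a=0.99, gamma_b=0.99, rho=1.0):
    """One recurrent step of PPcell_phi (Algorithm 2)."""
    one = torch.ones(u.shape[-1], 1)
    g = 0.5 * u - (lam * softsign(a @ one - 1)) @ one.T           # gradient term
    a_dot = a_hat + alpha * gamma_a ** t * a_hat * m * (g + g.T)  # ascent step
    a_hat = torch.relu(a_dot.abs() - rho * alpha * gamma_a ** t)  # soft threshold
    a_hat = 1 - torch.relu(1 - a_hat)                             # clip: min(Â, 1)
    a = t_map(a_hat, m)
    lam = lam + beta * gamma_b ** t * torch.relu(a @ one - 1)     # dual update
    return lam, a, a_hat
```

Unrolling this cell for a fixed $T$ steps and collecting each $A_t$ gives $\mathrm{PP}_{\phi}$; in training, the hyperparameters would instead be registered as `nn.Parameter`s so they are learned from data.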
+
+The specific computation graph of $\mathrm{PP}_{\phi}$ is given in Algorithm 1, whose main component is a recurrent cell which we call $\mathrm{PPcell}_{\phi}$ . The computation graph is almost the same as the iterative update from Eq. 3 to Eq. 6, except for several modifications:
+
+- (learnable hyperparameters) The hyperparameters including step sizes $\alpha$ , $\beta$ , decaying rate $\gamma_{\alpha}$ , $\gamma_{\beta}$ , sparsity coefficient $\rho$ and the offset term $s$ are treated as learnable parameters in $\phi$ , so that there is no need to tune the hyperparameters by hand but automatically learn them from data instead.
+- (fixed # iterations) Instead of running the iterative updates until convergence, $\mathrm{PPcell}_{\phi}$ is applied recursively for $T$ iterations where $T$ is a manually fixed number. This is why in Fig 3 the output space of E2Efold is slightly larger than the true solution space.
+- (smoothed sign function) Resulting from the gradient of $\mathrm{relu}(\cdot)$ , the update step in Eq. 4 contains a $\mathrm{sign}(\cdot)$ function. However, to push gradients through $\mathrm{PP}_{\phi}$ , we require a differentiable update step. Therefore, we use a smoothed sign function defined as $\mathrm{softsign}(c) := 1 / (1 + \exp(-kc))$ , where $k$ is a temperature.
+- (clip $\hat{A}$ ) An additional step, $\hat{A} \gets \min(\hat{A}, 1)$ , is included to make the output $A_{t}$ at each iteration stay in the range $[0, 1]^{L \times L}$ . This is useful for computing the loss over intermediate results $\{A_{t}\}_{t=1}^{T}$ , for which we will explain more in Section 5.
+
+With these modifications, the Post-Processing Network $\mathrm{PP}_{\phi}$ is a tuning-free and differentiable unrolled algorithm with meaningful intermediate outputs. Combining it with the deep score network, the final deep model is
+
+$$
+\mathbf{E2Efold}: \quad \{A_t\}_{t=1}^{T} = \overbrace{\mathrm{PP}_{\phi}\Big(\underbrace{U_{\theta}(\boldsymbol{x})}_{\text{Deep Score Network}},\, M(\boldsymbol{x})\Big)}^{\text{Post-Processing Network}}. \tag{7}
+$$
+
+# 5 END-TO-END TRAINING ALGORITHM
+
+Given a dataset $\mathcal{D}$ containing examples of input-output pairs $(x, A^{*})$ , the training procedure of E2Efold is similar to standard gradient-based supervised learning. However, for RNA secondary structure prediction problems, commonly used metrics for evaluating predictive performances are F1 score, precision and recall, which are non-differentiable.
+
+Differentiable F1 Loss. To directly optimize these metrics, we mimic true positive (TP), false positive (FP), true negative (TN) and false negative (FN) by defining continuous functions on $[0, 1]^{L \times L}$ :
+
+$$
+\mathrm{TP} = \langle A, A^{*}\rangle, \quad \mathrm{FP} = \langle A, 1 - A^{*}\rangle, \quad \mathrm{FN} = \langle 1 - A, A^{*}\rangle, \quad \mathrm{TN} = \langle 1 - A, 1 - A^{*}\rangle.
+$$
+
+Since $\mathrm{F1} = 2\mathrm{TP} / (2\mathrm{TP} + \mathrm{FP} + \mathrm{FN})$ , we define a loss function to mimic the negative of F1 score as:
+
+$$
+\mathcal{L}_{-\mathrm{F1}}(A, A^{*}) := -2\langle A, A^{*}\rangle \big/ \left(2\langle A, A^{*}\rangle + \langle A, 1 - A^{*}\rangle + \langle 1 - A, A^{*}\rangle\right). \tag{8}
+$$
+
+Assuming that $\sum_{ij}A_{ij}^{*}\neq 0$ , this loss is well-defined and differentiable on $[0,1]^{L\times L}$ . Precision and recall losses can be defined in a similar way, but we optimize F1 score in this paper.
+
+It is notable that this F1 loss has advantages over other differentiable losses, including $\ell_2$ and cross-entropy losses, because there are many more negative samples (i.e., $A_{ij} = 0$ ) than positive samples (i.e., $A_{ij} = 1$ ). A hand-tuned weight is needed to balance them when using $\ell_2$ or cross-entropy losses, but the F1 loss handles this issue automatically, which can be useful for a number of problems (Wang et al., 2016; Li et al., 2017).
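A minimal PyTorch sketch of this loss (an illustration of Eq. 8, not the released code; the small `eps` is our addition to guard against division by zero):

```python
import torch

def neg_f1_loss(a, a_star, eps=1e-8):
    """Differentiable negative F1 (Eq. 8); a, a_star in [0, 1]^{L x L}."""
    tp = (a * a_star).sum()
    fp = (a * (1 - a_star)).sum()
    fn = ((1 - a) * a_star).sum()
    return -2 * tp / (2 * tp + fp + fn + eps)
```

Since every operation is differentiable, gradients flow back through `a` to both $U_{\theta}$ and $\mathrm{PP}_{\phi}$, with no hand-tuned class weight required.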
+
+Overall Loss Function. As noted earlier, E2Efold outputs a matrix $A_{t} \in [0,1]^{L \times L}$ in each iteration. This allows us to add auxiliary losses to regularize the intermediate results, guiding it to learn parameters which can generate a smooth solution trajectory. More specifically, we use an objective that depends on the entire trajectory of optimization:
+
+$$
+\min_{\theta, \phi} \frac{1}{|\mathcal{D}|} \sum_{(\boldsymbol{x}, A^{*}) \in \mathcal{D}} \frac{1}{T} \sum_{t=1}^{T} \gamma^{T-t} \mathcal{L}_{-\mathrm{F1}}\left(A_t, A^{*}\right), \tag{9}
+$$
+
+where $\{A_t\}_{t=1}^T = \mathrm{PP}_\phi(U_\theta(\boldsymbol{x}), M(\boldsymbol{x}))$ and $\gamma \leq 1$ is a discounting factor. Empirically, we find it very useful to pre-train $U_\theta$ using logistic regression loss. Also, it is helpful to add this additional loss to Eq. 9 as a regularization.
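For one training example, the trajectory objective of Eq. 9 can be sketched as follows (illustrative only; `neg_f1` restates Eq. 8, and $\gamma$ is a hyperparameter):

```python
import torch

def trajectory_loss(a_traj, a_star, gamma=1.0, eps=1e-8):
    """Eq. 9 for one example: discounted mean of the F1 loss over {A_t}."""
    def neg_f1(a):
        # Eq. 8; note 2TP + FP + FN simplifies to sum(A) + sum(A*)
        return -2 * (a * a_star).sum() / ((a + a_star).sum() + eps)
    T = len(a_traj)
    return sum(gamma ** (T - t) * neg_f1(a_t)
               for t, a_t in enumerate(a_traj, start=1)) / T
```

Averaging this over the dataset and minimizing with respect to $(\theta, \phi)$ gives the full objective; later iterates are discounted less, pushing the unrolled algorithm toward a smooth, improving trajectory.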
+
+# 6 EXPERIMENTS
+
+We compare E2Efold with the SOTA and the most commonly used methods in the RNA secondary structure prediction field on two benchmark datasets. The experimental results show that E2Efold achieves a $29.7\%$ improvement in terms of F1 score on the RNAstralign dataset, and that it infers RNA secondary structure as fast as the most efficient existing algorithm (LinearFold). An ablation study is also conducted to show the necessity of pushing gradients through the post-processing step. The code for reproducing the experimental results is released. $^{1}$
+
+Dataset. We use two benchmark datasets: (i) ArchiveII (Sloma & Mathews, 2016), containing 3975 RNA structures from 10 RNA types, is a widely used benchmark dataset for classical RNA folding methods. (ii) RNAStralign (Tan et al., 2017), composed of 37149 structures from 8 RNA types, is one of the most comprehensive collections of RNA structures available. After removing redundant sequences and structures, 30451 structures remain. See Table 1 for statistics about these two datasets.
+
+Experiments On RNAStralign. We divide RNAStralign dataset into training, testing and validation sets by stratified sampling (see details in Table 7 and Fig 6), so that
+
+each set contains all RNA types. We compare the performance of E2Efold to six methods: CDPfold, LinearFold, Mfold, RNAstructure (ProbKnot), RNAfold and CONTRAfold. Both E2Efold and CDPfold are learned from the same training/validation sets. For the other methods, we directly use the provided packages or web servers to generate predicted structures. We evaluate the F1 score, precision and recall for each sequence in the test set. Averaged values are reported in Table 2. As suggested by Mathews (2019), for a base pair $(i,j)$ , the predictions $(i + 1,j)$ , $(i - 1,j)$ , $(i,j + 1)$ and $(i,j - 1)$ are also considered correct, so we also report the metrics when one-position shift is allowed.
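The one-position-shift evaluation can be sketched as follows (our own illustrative helper over sets of base pairs, not the paper's evaluation script):

```python
def shift_metrics(pred, gold):
    """Precision/recall/F1 over base-pair sets, counting (i±1, j) and
    (i, j±1) as correct matches, following Mathews (2019)."""
    shifts = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    def hit(p, ref):
        # a pair counts as correct if it, or a one-position shift
        # of one of its indices, appears in the reference set
        return any((p[0] + di, p[1] + dj) in ref for di, dj in shifts)
    prec = sum(hit(p, gold) for p in pred) / len(pred) if pred else 0.0
    rec = sum(hit(g, pred) for g in gold) / len(gold) if gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

With the `(0, 0)` entry removed from `shifts`... rather, keeping only `(0, 0)` recovers the exact (unshifted) metrics reported in the "Prec/Rec/F1" columns.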
+
+Table 1: Dataset Statistics
+
+| Type | ArchiveII length | ArchiveII #samples | RNAStralign length | RNAStralign #samples |
+|---|---|---|---|---|
+| All | 28~2968 | 3975 | 30~1851 | 30451 |
+| 16SrRNA | 73~1995 | 110 | 54~1851 | 11620 |
+| 5SrRNA | 102~135 | 1283 | 104~132 | 9385 |
+| tRNA | 54~93 | 557 | 59~95 | 6443 |
+| grp1 | 210~736 | 98 | 163~615 | 1502 |
+| SRP | 28~533 | 928 | 30~553 | 468 |
+| tmRNA | 102~437 | 462 | 102~437 | 572 |
+| RNaseP | 120~486 | 454 | 189~486 | 434 |
+| telomerase | 382~559 | 37 | 382~559 | 37 |
+| 23SrRNA | 242~2968 | 35 | - | - |
+| grp2 | 619~780 | 11 | - | - |
+
+Table 2: Results on RNAStralign test set. "(S)" indicates the results when one-position shift is allowed.
+
+| Method | Prec | Rec | F1 | Prec(S) | Rec(S) | F1(S) |
+|---|---|---|---|---|---|---|
+| E2Efold | 0.866 | 0.788 | 0.821 | 0.880 | 0.798 | 0.833 |
+| CDPfold | 0.633 | 0.597 | 0.614 | 0.720 | 0.677 | 0.697 |
+| LinearFold | 0.620 | 0.606 | 0.609 | 0.635 | 0.622 | 0.624 |
+| Mfold | 0.450 | 0.398 | 0.420 | 0.463 | 0.409 | 0.433 |
+| RNAstructure | 0.537 | 0.568 | 0.550 | 0.559 | 0.592 | 0.573 |
+| RNAfold | 0.516 | 0.568 | 0.540 | 0.533 | 0.587 | 0.558 |
+| CONTRAfold | 0.608 | 0.663 | 0.633 | 0.624 | 0.681 | 0.650 |
+
+
+Figure 5: Distribution of F1 score.
+
+As shown in Table 2, traditional methods achieve F1 scores ranging from 0.433 to 0.624, which is consistent with the performance reported in their original papers. The two learning-based methods, CONTRAfold and CDPfold, outperform classical methods by a reasonable margin on
+
+some criteria. E2Efold, on the other hand, significantly outperforms all previous methods across all criteria, with at least $20\%$ improvement. Notice that, for almost all other methods, the recall is usually higher than the precision, while for E2Efold the precision is higher than the recall. This may result from incorporating constraints during neural network training. Fig 5 shows the distribution of F1 scores for each method. It suggests that E2Efold has consistently good performance.
+
+To estimate the performance of E2Efold on long sequences, we also compute the F1 scores weighted by the length of sequences, such that the results are more dominated by longer sequences. Detailed results are given in Appendix D.3.
+
+Table 3: Performance comparison on ArchiveII
+
+| Method | Prec | Rec | F1 | Prec(S) | Rec(S) | F1(S) |
+|---|---|---|---|---|---|---|
+| E2Efold | 0.734 | 0.66 | 0.686 | 0.758 | 0.676 | 0.704 |
+| CDPfold | 0.557 | 0.535 | 0.545 | 0.612 | 0.585 | 0.597 |
+| LinearFold | 0.641 | 0.617 | 0.621 | 0.668 | 0.644 | 0.647 |
+| Mfold | 0.428 | 0.383 | 0.401 | 0.450 | 0.403 | 0.421 |
+| RNAstructure | 0.563 | 0.615 | 0.585 | 0.590 | 0.645 | 0.613 |
+| RNAfold | 0.565 | 0.627 | 0.592 | 0.586 | 0.652 | 0.615 |
+| CONTRAfold | 0.607 | 0.679 | 0.638 | 0.629 | 0.705 | 0.662 |
+
+Table 4: Inference time on RNAStralign
+
+| Method | total run time | time per seq |
+|---|---|---|
+| E2Efold (Pytorch) | 19m (GPU) | 0.40s |
+| CDPfold (Pytorch) | 440m*32 threads | 300.107s |
+| LinearFold (C) | 20m | 0.43s |
+| Mfold (C) | 360m | 7.65s |
+| RNAstructure (C) | 3 days | 142.02s |
+| RNAfold (C) | 26m | 0.55s |
+| CONTRAfold (C) | 1 day | 30.58s |
+
+Test On ArchiveII Without Re-training. To mimic the real-world scenario where users want to predict the structures of newly discovered RNAs, which may follow a distribution different from the training dataset, we directly test the model learned from the RNAStralign training set on the ArchiveII dataset, without re-training. To make the comparison fair, we exclude sequences that overlap with the RNAStralign dataset. We then test the model on sequences in ArchiveII that have overlapping RNA types (5SrRNA, 16SrRNA, etc.) with the RNAStralign dataset. Results are shown in Table 3. As expected, the performance of the classical methods, which are not learning-based, is consistent with their performance on RNAStralign. The performance of E2Efold, though not as good as on RNAStralign, is still better than all the other methods across different evaluation criteria. In addition, since the original ArchiveII dataset contains domain sequences (subsequences), we remove the domains and report the results in Appendix D.4, which are similar to the results in Table 3.
+
+Inference Time Comparison. We record the running time of all algorithms for predicting RNA secondary structures on the RNAStralign test set, summarized in Table 4. LinearFold is the most efficient among the baselines because it uses a beam-pruning heuristic to accelerate DP. CDPfold, which achieves a higher F1 score than the other baselines, is extremely slow due to its DP post-processing step. Since the Post-Processing Network is built on a simple gradient-based algorithm, E2Efold is fast. On GPU, E2Efold has inference time similar to LinearFold.
+
+Pseudoknot Prediction. Even though E2Efold does not exclude pseudoknots, it is unclear whether it actually generates pseudoknotted structures. Therefore, we pick all sequences containing pseudoknots and compute the averaged F1 score only on this set. In addition, we count the number of pseudoknotted sequences that are
+
+Table 5: Evaluation of pseudoknot prediction
+
+| Method | Set F1 | TP | FP | TN | FN |
+|---|---|---|---|---|---|
+| E2Efold | 0.710 | 1312 | 242 | 1271 | 0 |
+| RNAstructure | 0.472 | 1248 | 307 | 983 | 286 |
+
+predicted as pseudoknotted and report this count as true positive (TP). Similarly, we report TN, FP and FN in Table 5, along with the F1 score. Most tools exclude pseudoknots; among those that can predict pseudoknots, RNAstructure is the best known, so we choose it for comparison.
+
+
+
+Visualization. We visualize predicted structures of three RNA sequences in the main text. More examples are provided in the appendix (Fig 8 to 14). In these figures, purple lines indicate edges of pseudoknotted elements. Although CDPfold has a higher F1 score than other baselines, its predictions are visually far from the ground truth. Instead, RNAstructure and CONTRAfold produce comparatively more reasonable visualizations among all baselines, so we compare with them. These two methods can capture a rough sketch of the structure, but not accurately enough. For most cases, E2Efold produces structures most similar to the ground truth. Moreover, it works surprisingly well for some RNA sequences that are long and very difficult to predict.
+
+Ablation Study. To examine whether integrating the two stages by pushing gradients through the post-processing step is necessary for the performance of E2Efold, we conduct an ablation study (Table 6). We test the performance when the post-processing step is disconnected from the training of the Deep Score Network $U_{\theta}$ . We apply the post-processing step (i.e., solving the augmented Lagrangian) after $U_{\theta}$ is learned (hence the notation " $U_{\theta} + \mathrm{PP}$ " in Table 6). Although " $U_{\theta} + \mathrm{PP}$ " performs decently well, E2Efold, with constraints incorporated into training, still has significant advantages over it.
+
+Table 6: Ablation study (RNAStralign test set)
+
+| Method | Prec | Rec | F1 | Prec(S) | Rec(S) | F1(S) |
+|---|---|---|---|---|---|---|
+| E2Efold | 0.866 | 0.788 | 0.821 | 0.880 | 0.798 | 0.833 |
+| $U_{\theta} + \mathrm{PP}$ | 0.755 | 0.712 | 0.721 | 0.782 | 0.737 | 0.752 |
+
+Discussion. To better estimate the performance of E2Efold on different RNA types, we include the per-family F1 scores in Appendix D.5. E2Efold performs significantly better than other methods in 16S rRNA, tRNA, 5S RNA, tmRNA, and telomerase. These results are from a single model. In the future, we can view it as multi-task learning and further improve the performance by learning multiple models for different RNA families and learning an additional classifier to predict which model to use for the input sequence.
+
+# 7 CONCLUSION
+
+We propose a novel DL model, E2Efold, for RNA secondary structure prediction, which incorporates hard constraints in its architecture design. Comprehensive experiments show the superior performance of E2Efold in terms of quantitative criteria, running time, and visualization. Further studies are needed to deal with RNA types that have fewer samples. Finally, we believe the idea of unrolling constrained programming and pushing gradients through post-processing can be generic and useful for other constrained structured prediction problems.
+
+# ACKNOWLEDGEMENT
+
+We would like to thank the anonymous reviewers for providing constructive feedback. This work is supported in part by NSF grants CDS&E-1900017 D3SC, CCF-1836936 FMitF, IIS-1841351, CAREER IIS-1350983 to L.S. and grants from King Abdullah University of Science and Technology, under award numbers BAS/1/1624-01, FCC/1/1976-18-01, FCC/1/1976-23-01, FCC/1/1976-25-01, FCC/1/1976-26-01, REI/1/0018-01-01, and URF/1/4098-01-01.
+
+# REFERENCES
+
+Brandon Amos and J Zico Kolter. Optnet: Differentiable optimization as a layer in neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 136-145. JMLR.org, 2017.
+Mirela S Andronescu, Cristina Pop, and Anne E Condon. Improved free energy parameters for RNA pseudoknotted secondary structure prediction. RNA, 16(1):26-42, 2010.
+David Belanger, Bishan Yang, and Andrew McCallum. End-to-end learning for structured prediction energy networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 429-439. JMLR.org, 2017.
+Stanislav Bellaousov and David H Mathews. Probknot: fast prediction of RNA secondary structure including pseudoknots. RNA, 16(10):1870-1880, 2010.
+Stanislav Bellaousov, Jessica S Reuter, Matthew G Seetin, and David H Mathews. RNAstructure: web servers for RNA secondary structure prediction and analysis. Nucleic acids research, 41 (W1):W471-W474, 2013.
+Xiaohan Chen, Jialin Liu, Zhangyang Wang, and Wotao Yin. Theoretical linear convergence of unfolded ista and its practical weights and thresholds. In Advances in Neural Information Processing Systems, pp. 9061-9071, 2018.
+
+Francis Crick. Central dogma of molecular biology. Nature, 227(5258):561, 1970.
+Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
+Chuong B Do, Daniel A Woods, and Serafim Batzoglou. Contrafold: RNA secondary structure prediction without physics-based models. Bioinformatics, 22(14):e90-e98, 2006.
+Timothy Dozat and Christopher D Manning. Deep biaffine attention for neural dependency parsing. arXiv preprint arXiv:1611.01734, 2016.
+P Fechter, J Rudinger-Thirion, C Florentz, and R Giege. Novel features in the tRNA-like world of plant viral RNAs. Cellular and Molecular Life Sciences CMLS, 58(11):1547-1561, 2001.
+Christine E Hajdin, Stanislav Bellaousov, Wayne Huggins, Christopher W Leonard, David H Mathews, and Kevin M Weeks. Accurate shape-directed RNA secondary structure modeling, including pseudoknots. Proceedings of the National Academy of Sciences, 110(14):5498-5503, 2013.
+John R Hershey, Jonathan Le Roux, and Felix Weninger. Deep unfolding: Model-based inspiration of novel deep architectures. arXiv preprint arXiv:1409.2574, 2014.
+Liang Huang, He Zhang, Dezhong Deng, Kai Zhao, Kaibo Liu, David A Hendrix, and David H Mathews. Linearfold: linear-time approximate RNA folding by 5'-to-3'dynamic programming and beam search. Bioinformatics, 35(14):i295-i304, 2019.
+John Ingraham, Adam Riesselman, Chris Sander, and Debora Marks. Learning protein structure with a differentiable simulator. 2018.
+Elizabeth Iorns, Christopher J Lord, Nicholas Turner, and Alan Ashworth. Utilizing RNA interference to enhance cancer drug discovery. Nature reviews Drug discovery, 6(7):556, 2007.
+Eliyahu Kiperwasser and Yoav Goldberg. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313-327, 2016.
+Yu Li, Sheng Wang, Ramzan Umarov, Bingqing Xie, Ming Fan, Lihua Li, and Xin Gao. Deep learning: sequence-based enzyme ec number prediction by deep learning. Bioinformatics, 34(5):760-769, 2017.
+Ronny Lorenz, Stephan H Bernhart, Christian Honer Zu Siederdissen, Hakim Tafer, Christoph Flamm, Peter F Stadler, and Ivo L Hofacker. ViennaRNA package 2.0. Algorithms for molecular biology, 6(1):26, 2011.
+Rune B Lyngsø and Christian NS Pedersen. RNA pseudoknot prediction in energy-based models. Journal of computational biology, 7(3-4):409-427, 2000.
+NR Markham and M Zuker. Unafold: software for nucleic acid folding and hybridization. In Keith JM (ed.), Bioinformatics Methods in Molecular Biology, vol. 453, 2008.
+David H Mathews. Predicting RNA secondary structure by free energy minimization. Theoretical Chemistry Accounts, 116(1-3):160-168, 2006.
+David H Mathews. How to benchmark RNA secondary structure prediction accuracy. Methods, 2019.
+David H Mathews and Douglas H Turner. Prediction of RNA secondary structure by free energy minimization. Current opinion in structural biology, 16(3):270-278, 2006.
+Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pp. 523-530. Association for Computational Linguistics, 2005.
+
+Venkata Krishna Pillutla, Vincent Roulet, Sham M Kakade, and Zaid Harchaoui. A smoother way to train structured prediction models. In Advances in Neural Information Processing Systems, pp. 4766-4778, 2018.
+Harsh Shrivastava, Xinshi Chen, Binghong Chen, Guanghui Lan, Srinvas Aluru, and Le Song. Glad: Learning sparse graph recovery. arXiv preprint arXiv:1906.00271, 2019.
+Michael F Sloma and David H Mathews. Exact calculation of loop formation probability identifies folding motifs in RNA secondary structures. RNA, 22(12):1808-1818, 2016.
+David W Staple and Samuel E Butcher. Pseudoknots: RNA structures with diverse functions. PLoS biology, 3(6):e213, 2005.
+Evan W Steeg. Neural networks, adaptive optimization, and RNA secondary structure prediction. Artificial intelligence and molecular biology, pp. 121-160, 1993.
+Zhen Tan, Yinghan Fu, Gaurav Sharma, and David H Mathews. Turbofold ii: RNA structural alignment and secondary structure prediction informed by multiple homologs. Nucleic acids research, 45(20):11570-11581, 2017.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017.
+Sheng Wang, Siqi Sun, and Jinbo Xu. Auc-maximized deep convolutional neural fields for protein sequence labeling. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 1-16. Springer, 2016.
+Sheng Wang, Siqi Sun, Zhen Li, Renyu Zhang, and Jinbo Xu. Accurate de novo prediction of protein contact map by ultra-deep learning model. PLoS computational biology, 13(1):e1005324, 2017.
+Shay Zakov, Yoav Goldberg, Michael Elhadad, and Michal Ziv-Ukelson. Rich parameterization improves RNA structure prediction. Journal of Computational Biology, 18(11):1525-1542, 2011.
+Hao Zhang, Chunhe Zhang, Zhi Li, Cong Li, Xu Wei, Borui Zhang, and Yuanning Liu. A new method of RNA secondary structure prediction based on convolutional neural network and dynamic programming. Frontiers in genetics, 10, 2019.
+
+# A MORE DISCUSSION ON RELATED WORKS
+
+Here we explain the difference between our approach and other works on unrolling optimization problems.
+
+First, our view of incorporating constraints to reduce the output space and thus the sample complexity is novel. Previous works (Hershey et al., 2014; Belanger et al., 2017; Ingraham et al., 2018) did not discuss these aspects. The most related work which also integrates constraints is OptNet (Amos & Kolter, 2017), but it is very expensive and cannot scale to the RNA problem. Therefore, our proposed approach is a simple and effective one.
+
+Second, compared to (Chen et al., 2018; Shrivastava et al., 2019), our approach uses the unrolled algorithm for a different purpose. Their goal is to learn a better algorithm, so they deliberately make their architectures more flexible than the original algorithm to leave room for improvement. In contrast, we aim at enforcing constraints. To ensure that the constraints are nicely incorporated, we keep the original structure of the algorithm and only make the hyperparameters learnable.
+
+Finally, although all these works consider end-to-end training, none of them can directly optimize the F1 score. We propose a differentiable loss function to mimic the F1 score/precision/recall, which is effective and also very useful when positive samples are far fewer than negative samples (or vice versa).
+
+# B DERIVATION OF THE PROXIMAL GRADIENT STEP
+
+The maximization step in Eq. 1 can be written as the following minimization:
+
+$$
+\min_{\hat{A} \in \mathbb{R}^{L \times L}} \underbrace{-\frac{1}{2}\left\langle U_{\theta}(\boldsymbol{x}) - s,\, A\right\rangle + \left\langle \boldsymbol{\lambda},\, \operatorname{relu}(A\mathbf{1} - \mathbf{1})\right\rangle}_{-f(\hat{A})} + \rho\|\hat{A}\|_{1}. \tag{10}
+$$
+
+Consider the quadratic approximation of $-f(\hat{A})$ centered at $\hat{A}_t$ :
+
+$$
+\begin{aligned} -\tilde{f}_{\alpha}(\hat{A}) &:= -f(\hat{A}_t) + \left\langle -\frac{\partial f}{\partial \hat{A}_t},\, \hat{A} - \hat{A}_t \right\rangle + \frac{1}{2\alpha}\left\|\hat{A} - \hat{A}_t\right\|_F^2 \quad (11) \\ &= -f(\hat{A}_t) + \frac{1}{2\alpha}\left\|\hat{A} - \left(\hat{A}_t + \alpha\frac{\partial f}{\partial \hat{A}_t}\right)\right\|_F^2, \quad (12) \end{aligned}
+$$
+
+and rewrite the optimization in Eq. 10 as
+
+$$
+\begin{aligned} &\min_{\hat{A} \in \mathbb{R}^{L \times L}} -f(\hat{A}_t) + \frac{1}{2\alpha}\left\|\hat{A} - \dot{A}_{t+1}\right\|_F^2 + \rho\|\hat{A}\|_1 \quad (13) \\ \equiv\; &\min_{\hat{A} \in \mathbb{R}^{L \times L}} \frac{1}{2\alpha}\left\|\hat{A} - \dot{A}_{t+1}\right\|_F^2 + \rho\|\hat{A}\|_1, \quad (14) \end{aligned}
+$$
+
+where
+
+$$
+\dot {A} _ {t + 1} := \hat {A} _ {t} + \alpha \frac {\partial f}{\partial \hat {A} _ {t}}. \tag {15}
+$$
+
+Next, we define proximal mapping as a function depending on $\alpha$ as follows:
+
+$$
+\begin{aligned} \operatorname{prox}_{\alpha}\left(\dot{A}_{t+1}\right) &= \underset{\hat{A} \in \mathbb{R}^{L \times L}}{\arg\min}\; \frac{1}{2\alpha}\left\|\hat{A} - \dot{A}_{t+1}\right\|_F^2 + \rho\|\hat{A}\|_1 \quad (16) \\ &= \underset{\hat{A} \in \mathbb{R}^{L \times L}}{\arg\min}\; \frac{1}{2}\left\|\hat{A} - \dot{A}_{t+1}\right\|_F^2 + \alpha\rho\|\hat{A}\|_1 \quad (17) \\ &= \operatorname{sign}\left(\dot{A}_{t+1}\right) \max\left(\left|\dot{A}_{t+1}\right| - \alpha\rho,\, 0\right) \quad (18) \\ &= \operatorname{sign}\left(\dot{A}_{t+1}\right) \operatorname{relu}\left(\left|\dot{A}_{t+1}\right| - \alpha\rho\right). \quad (19) \end{aligned}
+$$
+
+Since we always use $\hat{A} \circ \hat{A}$ instead of $\hat{A}$ in our problem, we can take the absolute value $|\operatorname{prox}_{\alpha}(\dot{A}_{t+1})| = \operatorname{relu}(|\dot{A}_{t+1}| - \alpha \rho)$ without loss of generality. Therefore, the proximal gradient
+
+step is
+
+$$
+\dot{A}_{t+1} \leftarrow \hat{A}_t + \alpha\frac{\partial f}{\partial \hat{A}_t} \quad (\text{corresponds to Eq. 3}) \tag{20}
+$$
+
+$$
+\hat{A}_{t+1} \leftarrow \operatorname{relu}\left(\left|\dot{A}_{t+1}\right| - \alpha\rho\right) \quad (\text{corresponds to Eq. 5}). \tag{21}
+$$
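The resulting proximal step is the standard soft-thresholding operator. A quick numeric sketch (using a scalar threshold $\alpha\rho$; illustrative, not the released code):

```python
import torch

def prox_l1(a_dot, thresh):
    # soft-thresholding: sign(x) * relu(|x| - thresh), as in Eq. 19
    return torch.sign(a_dot) * torch.relu(a_dot.abs() - thresh)
```

Since only $\hat{A} \circ \hat{A}$ appears downstream, the sign factor can be dropped, which gives exactly the $\operatorname{relu}(|\dot{A}_{t+1}| - \alpha\rho)$ update of Eq. 21.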
+
+More specifically, in the main text, we write $\frac{\partial f}{\partial\hat{A}_t}$ as
+
+$$
+\begin{aligned} \frac{\partial f}{\partial \hat{A}_t} &= \frac{1}{2}\left(\frac{\partial f}{\partial A_t} + \frac{\partial f}{\partial A_t}^{\top}\right) \circ \frac{\partial A_t}{\partial \hat{A}_t} \quad (22) \\ &= \left(\frac{1}{2}\frac{\partial A_t}{\partial \hat{A}_t}\right) \circ \left(\frac{\partial f}{\partial A_t} + \frac{\partial f}{\partial A_t}^{\top}\right) \quad (23) \\ &= \left(\frac{1}{4}\, M \circ \left(2\hat{A}_t + 2\hat{A}_t^{\top}\right)\right) \circ \left(\frac{\partial f}{\partial A_t} + \frac{\partial f}{\partial A_t}^{\top}\right) \quad (24) \\ &= M \circ \hat{A}_t \circ \left(\frac{\partial f}{\partial A_t} + \frac{\partial f}{\partial A_t}^{\top}\right). \quad (25) \end{aligned}
+$$
+
+The last equation holds since $\hat{A}_t$ will remain symmetric in our algorithm if the initial $\hat{A}_0$ is symmetric. Moreover, in the main text, $\alpha$ is replaced by $\alpha \cdot \gamma_{\alpha}^{t}$ .
+
+# C IMPLEMENTATION AND TRAINING DETAILS
+
+We used Pytorch to implement the whole package of E2Efold.
+
+Deep Score Network. In the deep score network, we use a hyper-parameter $d$ , set to 10 in the final model, to control the model capacity. In the transformer encoder layers, we set the number of heads to 2, the dimension of the feed-forward network to 2048, and the dropout rate to 0.1. As for the position encoding, we use 58 basis functions to form the position feature map, which goes through a 3-layer fully-connected neural network (with $5d$ hidden neurons) to generate the final position embedding, whose dimension is $L$ by $d$ . In the final output layer, the pairwise concatenation is carried out in the following way: let $X \in \mathbb{R}^{L \times 3d}$ be the input to the final output layers in Figure 4 (the concatenation of the sequence embedding and position embedding). The pairwise concatenation results in a tensor $Y \in \mathbb{R}^{L \times L \times 6d}$ defined as
+
+$$
+Y (i, j,:) = [ X (i,:), X (j,:) ], \tag {27}
+$$
+
+where $Y(i,j,:) \in \mathbb{R}^{6d}$ , $X(i,:) \in \mathbb{R}^{3d}$ , and $X(j,:) \in \mathbb{R}^{3d}$ .
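This pairwise concatenation (Eq. 27) can be sketched with broadcasting; a minimal version, assuming `x` holds the $L \times 3d$ concatenated embeddings:

```python
import torch

def pairwise_concat(x):
    # x: (L, D) -> y: (L, L, 2D) with y[i, j] = [x[i], x[j]]
    L, D = x.shape
    xi = x.unsqueeze(1).expand(L, L, D)  # row index i broadcast over j
    xj = x.unsqueeze(0).expand(L, L, D)  # column index j broadcast over i
    return torch.cat([xi, xj], dim=-1)
```

The `expand` calls create views without copying, so the only materialized tensor is the final $L \times L \times 2D$ concatenation.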
+
+In the 2D convolution layers, the number of channels of the feature map gradually changes from $6d$ to $d$ , and finally to 1. We set the kernel size to 1 to translate the feature map into the final score matrix. Each 2D convolution layer is followed by a batch normalization layer. We use ReLU as the activation function throughout the score network.
+
+Post-Processing Network. In the PP network, we initialized $w$ as 1, $s$ as $\log(9)$ , $\alpha$ as 0.01, $\beta$ as 0.1, $\gamma_{\alpha}$ as 0.99, $\gamma_{\beta}$ as 0.99, and $\rho$ as 1. We set $T$ as 20.
+
+Training details. During training, we first pre-trained a deep score network and then fine-tuned the score network and the PP network together. To pre-train the score network, we used the binary cross-entropy loss and the Adam optimizer. Since most entries in the contact map are 0, we used a weighted loss and set the positive-sample weight to 300. The batch size was set to fully use the GPU memory, which was 20 for the Titan Xp card. We pre-trained the score network for 100 epochs. As for the fine-tuning, we used the binary cross-entropy loss for the score network and the F1 loss for the PP network, and summed these two losses as the final loss. The user can also choose to only use the F1 loss or
+
+use another coefficient to weight the loss estimated on the score network $U_{\theta}$ . Due to the limitation of GPU memory, we set the batch size to 8. However, we updated the model's parameters every 30 steps to stabilize the training process. We fine-tuned the whole model for 20 epochs. Also, since the data for different RNA families are imbalanced, we up-sampled the data in the small RNA families based on their size. The training of the score network $U_{\theta}$ in the ablation study is exactly the same as the process described above, except that during fine-tuning the number of unrolled iterations is set to 0.
+
+# D MORE EXPERIMENTAL DETAILS
+
+# D.1 DATASET STATISTICS
+
+
+Figure 6: The RNAStralign length distribution.
+
+Table 7: RNAStralign dataset splits statistics
+
+| RNA type | All | Training | Validation | Testing |
+|---|---|---|---|---|
+| 16SrRNA | 11620 | 9325 | 1145 | 1150 |
+| 5SrRNA | 9385 | 7687 | 819 | 879 |
+| tRNA | 6443 | 5412 | 527 | 504 |
+| grp1 | 1502 | 1243 | 123 | 136 |
+| SRP | 468 | 379 | 36 | 53 |
+| tmRNA | 572 | 461 | 50 | 61 |
+| RNaseP | 434 | 360 | 37 | 37 |
+| telomerase | 37 | 28 | 4 | 5 |
+| RNAStralign | 30451 | 24895 | 2702 | 2854 |
+
+# D.2 TWO-SAMPLE HYPOTHESIS TESTING
+
+To better understand the data distribution in different datasets, we provide statistical hypothesis test results in this section.
+
+We can assume that
+
+(i) Samples in RNAStralign training set are i.i.d. from the distribution $\mathcal{P}(\mathrm{RNAStr}_{\mathrm{train}})$ ;
+(ii) Samples in RNAStralign testing set are i.i.d. from the distribution $\mathcal{P}(\mathrm{RNAStr}_{\mathrm{test}})$ ;
+(iii) Samples in the ArchiveII dataset are i.i.d. from the distribution $\mathcal{P}(\mathrm{ArchiveII})$ .
+
+To compare the differences among these data distributions, we can test the following hypothesis:
+
+(a) $\mathcal{P}(\mathrm{RNAStr}_{\mathrm{train}}) = \mathcal{P}(\mathrm{RNAStr}_{\mathrm{test}})$
+(b) $\mathcal{P}(\mathrm{RNAStr}_{\mathrm{train}}) = \mathcal{P}(\mathrm{ArchiveII})$
+
+The approach that we adopted is the permutation test on the unbiased empirical Maximum Mean Discrepancy (MMD) estimator:
+
+$$
+\operatorname{MMD}_{u}(X, Y) := \left(\frac{1}{N(N-1)} \sum_{i = 1}^{N} \sum_{j \neq i}^{N} k\left(x_{i}, x_{j}\right) + \frac{1}{M(M-1)} \sum_{i = 1}^{M} \sum_{j \neq i}^{M} k\left(y_{i}, y_{j}\right) - \frac{2}{N M} \sum_{i = 1}^{N} \sum_{j = 1}^{M} k\left(x_{i}, y_{j}\right)\right)^{\frac{1}{2}}, \tag {28}
+$$
+
+where $X = \{x_{i}\}_{i = 1}^{N}$ contains $N$ i.i.d. samples from a distribution $\mathcal{P}_1$ , $Y = \{y_{i}\}_{i = 1}^{M}$ contains $M$ i.i.d. samples from a distribution $\mathcal{P}_2$ , and $k(\cdot ,\cdot)$ is a string kernel.
+
+Since we conduct stratified sampling to split the training and testing dataset, when we perform permutation test, we use stratified re-sampling as well (for both Hypotheses (a) and (b)). The result of the permutation test (permuted 1000 times) is reported in Figure 7.
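The test can be sketched as follows (an illustrative NumPy version; we substitute an RBF kernel on toy vectors for the paper's string kernel, and plain rather than stratified resampling):

```python
import numpy as np

def mmd_u_sq(Kxx, Kyy, Kxy):
    """Unbiased squared-MMD estimate from precomputed kernel blocks
    (the quantity inside the square root of Eq. 28)."""
    n, m = Kxx.shape[0], Kyy.shape[0]
    tx = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
    ty = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return tx + ty - 2.0 * Kxy.mean()

def perm_pvalue(K, n, n_perm=200, seed=0):
    """Permutation test on the pooled kernel matrix K; the first n
    pooled indices belong to sample X, the rest to Y."""
    rng = np.random.default_rng(seed)
    obs = mmd_u_sq(K[:n, :n], K[n:, n:], K[:n, n:])
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(K.shape[0])
        x, y = p[:n], p[n:]
        hits += mmd_u_sq(K[np.ix_(x, x)], K[np.ix_(y, y)],
                         K[np.ix_(x, y)]) >= obs
    return (hits + 1) / (n_perm + 1)

def rbf_gram(Z, gamma=0.5):
    """RBF kernel stand-in for the string kernel used in the paper."""
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)
```

When the two samples come from well-separated distributions, the observed statistic exceeds nearly every permuted one, and the p-value sits near its floor of $1/(\text{n\_perm}+1)$.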
+
+
+Figure 7: Left: Distribution of $\mathrm{MMD}_u$ under Hypothesis $\mathcal{P}(\mathrm{RNAStr}_{\mathrm{train}}) = \mathcal{P}(\mathrm{RNAStr}_{\mathrm{test}})$ . Right: Distribution of $\mathrm{MMD}_u$ under Hypothesis $\mathcal{P}(\mathrm{RNAStr}_{\mathrm{train}}) = \mathcal{P}(\mathrm{ArchiveII})$ .
+
+
+
+The result shows
+
+(a) Hypothesis $\mathcal{P}(\mathrm{RNAStr}_{\mathrm{train}}) = \mathcal{P}(\mathrm{RNAStr}_{\mathrm{test}})$ cannot be rejected at significance level 0.1.
+(b) Hypothesis $\mathcal{P}(\mathrm{RNAStr}_{\mathrm{train}}) = \mathcal{P}(\mathrm{ArchiveII})$ is rejected, since the p-value is 0.
+
+Therefore, the data distribution in ArchiveII is very different from the RNAStralign training set. A good performance on ArchiveII shows a significant generalization power of E2Efold.
+
+# D.3 PERFORMANCE ON LONG SEQUENCES: WEIGHTED F1 SCORE
+
+For long sequences, E2Efold still performs better than other methods. We compute F1 scores weighted by sequence length (Table 8), so that the results are dominated by longer sequences.
+
+Table 8: RNAStralign: F1 after a weighted average by sequence length.
+
+| Method | E2Efold | CDPfold | LinearFold | Mfold | RNAstructure | RNAfold | CONTRAfold |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| non-weighted | 0.821 | 0.614 | 0.609 | 0.420 | 0.550 | 0.540 | 0.633 |
+| weighted | 0.720 | 0.691 | 0.509 | 0.366 | 0.471 | 0.444 | 0.542 |
+| change | -12.3% | +12.5% | -16.4% | -12.8% | -14.3% | -17.7% | -14.3% |
+
+The third row reports how much F1 score drops after reweighting.
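Concretely, the reweighting is a length-weighted average of per-sequence F1 scores (a minimal sketch; the function name is ours):

```python
def length_weighted_f1(f1_scores, lengths):
    """Average per-sequence F1 weighted by sequence length, so longer
    sequences dominate the aggregate (the 'weighted' row of Table 8)."""
    total = sum(lengths)
    return sum(f * n for f, n in zip(f1_scores, lengths)) / total

# Example: a long, poorly predicted sequence drags the weighted score
# down even when the plain average looks fine.
plain = (1.0 + 0.0) / 2                                 # 0.5
weighted = length_weighted_f1([1.0, 0.0], [100, 300])   # 0.25
```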
+
+# D.4 ARCHIVEII RESULTS AFTER DOMAIN SEQUENCES ARE REMOVED
+
+Since domain sequences (subsequences) in ArchiveII are explicitly labeled, we filtered them out of ArchiveII and recomputed the F1 scores (Table 9).
+
+The results change little after filtering out subsequences.
+
+Table 9: ArchiveII: F1 after subsequences are filtered out.
+
+| Method | E2Efold | CDPfold | LinearFold | Mfold | RNAstructure | RNAfold | CONTRAfold |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| original | 0.704 | 0.597 | 0.647 | 0.421 | 0.613 | 0.615 | 0.662 |
+| filtered | 0.723 | 0.605 | 0.645 | 0.419 | 0.611 | 0.615 | 0.659 |
+
+# D.5 PER-FAMILY PERFORMANCES
+
+To balance performance across families, during the training phase we conducted weighted sampling of the data based on family size. With weighted sampling, the overall F1 score (S) is 0.83, the same as with equal-weighted sampling. The per-family results are shown in Table 10.
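One simple way to realize such family-balanced sampling (a sketch under our own naming; the exact scheme is not specified in the text) is to give each sample a probability inversely proportional to its family's size, so every family receives equal total mass:

```python
def per_sample_probs(family_sizes):
    """Sampling probability for one sample of each family, chosen so
    that every family receives equal total probability mass."""
    n_fam = len(family_sizes)
    return {fam: 1.0 / (n_fam * n) for fam, n in family_sizes.items()}

# Training-split sizes from Table 7: small families get up-sampled.
sizes = {"16SrRNA": 9325, "5SrRNA": 7687, "tRNA": 5412, "grp1": 1243,
         "SRP": 379, "tmRNA": 461, "RNaseP": 360, "telomerase": 28}
probs = per_sample_probs(sizes)
```

Under these weights, a telomerase sequence (28 training examples) is drawn far more often than a 16S rRNA sequence (9325 examples), which is the up-sampling effect described above.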
+
+Table 10: RNAStralign: per-family performances
+
+| Method | 16S rRNA F1 | 16S rRNA F1(S) | tRNA F1 | tRNA F1(S) | 5S rRNA F1 | 5S rRNA F1(S) | SRP F1 | SRP F1(S) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| E2Efold | 0.783 | 0.795 | 0.917 | 0.939 | 0.906 | 0.936 | 0.550 | 0.614 |
+| LinearFold | 0.493 | 0.504 | 0.734 | 0.739 | 0.713 | 0.738 | 0.618 | 0.648 |
+| Mfold | 0.362 | 0.373 | 0.662 | 0.675 | 0.356 | 0.367 | 0.350 | 0.378 |
+| RNAstructure | 0.464 | 0.485 | 0.709 | 0.736 | 0.578 | 0.597 | 0.579 | 0.617 |
+| RNAfold | 0.430 | 0.449 | 0.695 | 0.706 | 0.592 | 0.612 | 0.617 | 0.651 |
+| CONTRAfold | 0.529 | 0.546 | 0.758 | 0.765 | 0.717 | 0.740 | 0.563 | 0.596 |
+
+| Method | tmRNA F1 | tmRNA F1(S) | Group I intron F1 | Group I intron F1(S) | RNaseP F1 | RNaseP F1(S) | telomerase F1 | telomerase F1(S) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| E2Efold | 0.588 | 0.653 | 0.387 | 0.428 | 0.565 | 0.604 | 0.954 | 0.961 |
+| LinearFold | 0.393 | 0.412 | 0.565 | 0.579 | 0.567 | 0.578 | 0.515 | 0.531 |
+| Mfold | 0.290 | 0.308 | 0.483 | 0.498 | 0.562 | 0.579 | 0.403 | 0.531 |
+| RNAstructure | 0.400 | 0.423 | 0.566 | 0.599 | 0.589 | 0.616 | 0.512 | 0.545 |
+| RNAfold | 0.411 | 0.430 | 0.589 | 0.599 | 0.544 | 0.563 | 0.471 | 0.496 |
+| CONTRAfold | 0.463 | 0.482 | 0.603 | 0.620 | 0.645 | 0.662 | 0.529 | 0.548 |
+
+# D.6 MORE VISUALIZATION RESULTS
+
+
+Figure 8: Visualization of 5S rRNA, B01865.
+
+
+Figure 9: Visualization of 16S rRNA, DQ170870.
+
+
+Figure 10: Visualization of Group I intron, IC3, Kaf.c.trnL.
+
+
+Figure 11: Visualization of RNaseP, A.salinestris-184.
+
+
+Figure 12: Visualization of SRP, Homo.sapi._BU56690.
+
+
+Figure 13: Visualization of tmRNA, uncu.bact._AF389956.
+
+
+Figure 14: Visualization of tRNA, tdbD00012019.
\ No newline at end of file
diff --git a/rnasecondarystructurepredictionbylearningunrolledalgorithms/images.zip b/rnasecondarystructurepredictionbylearningunrolledalgorithms/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a831df65348205616a47a6d4010a3cbd33ce17d4
--- /dev/null
+++ b/rnasecondarystructurepredictionbylearningunrolledalgorithms/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:922009f3505f141d18b80216fb258ee500fd1e8e22604c892b60bba919ce1159
+size 1357133
diff --git a/rnasecondarystructurepredictionbylearningunrolledalgorithms/layout.json b/rnasecondarystructurepredictionbylearningunrolledalgorithms/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b084456c2f6e12a596658b40070c5092752b6f8d
--- /dev/null
+++ b/rnasecondarystructurepredictionbylearningunrolledalgorithms/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a4adfded2afff462904d369a8dba6c3dffa3bbd500079159cbed16a32d8b083e
+size 652778
diff --git a/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/91b170ad-f380-4cb2-a186-c24c63573be1_content_list.json b/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/91b170ad-f380-4cb2-a186-c24c63573be1_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..600d5484186d2cb4103f915277841bc217ed528d
--- /dev/null
+++ b/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/91b170ad-f380-4cb2-a186-c24c63573be1_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c5931752fee7e71099531ad7dc73634778c3a75caacf462b6eb28a2d1d1d79e5
+size 148318
diff --git a/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/91b170ad-f380-4cb2-a186-c24c63573be1_model.json b/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/91b170ad-f380-4cb2-a186-c24c63573be1_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..3c3cba1c8f5411b0e0be17eda2706e689bbe9dae
--- /dev/null
+++ b/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/91b170ad-f380-4cb2-a186-c24c63573be1_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:218ef6d693f83c9d8c56bdeb6b8049fe85123548769aeaa161f6c720afde93bc
+size 177935
diff --git a/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/91b170ad-f380-4cb2-a186-c24c63573be1_origin.pdf b/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/91b170ad-f380-4cb2-a186-c24c63573be1_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..515b1404b97c7cbc91f40c701347659a01eeb7ce
--- /dev/null
+++ b/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/91b170ad-f380-4cb2-a186-c24c63573be1_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:28ed525aafeded6adea85315c586f4787b6c174bcee13f0efa3ffed58fc15bdf
+size 992808
diff --git a/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/full.md b/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..368676ee690c0a1debc6194e25e536b52e0b4c4e
--- /dev/null
+++ b/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/full.md
@@ -0,0 +1,633 @@
+# RNNS INCREMENTALLY EVOLVING ON AN EQUILIBRIUM MANIFOLD: A PANACEA FOR VANISHING AND EXPLODING GRADIENTS?
+
+Anil Kag
+
+ECE Department
+
+Boston University
+
+Boston, MA 02215, USA
+
+anilkag@bu.edu
+
+Ziming Zhang
+
+ECE Department
+
+Worcester Polytechnic Institute
+
+Worcester, MA 01609, USA
+
+zhang15@wpi.edu
+
+Venkatesh Saligrama
+
+ECE Department
+
+Boston University
+
+Boston, MA 02215, USA
+
+srv@bu.edu
+
+# ABSTRACT
+
+Recurrent neural networks (RNNs) are particularly well-suited for modeling long-term dependencies in sequential data, but are notoriously hard to train because the error backpropagated in time either vanishes or explodes at an exponential rate. While a number of works attempt to mitigate this effect through gated recurrent units, skip-connections, parametric constraints and design choices, we propose a novel incremental RNN (iRNN), where hidden state vectors keep track of incremental changes, and as such approximate state-vector increments of Rosenblatt's (1962) continuous-time RNNs. iRNN exhibits identity gradients and is able to account for long-term dependencies (LTD). We show that our method is computationally efficient overcoming overheads of many existing methods that attempt to improve RNN training, while suffering no performance degradation. We demonstrate the utility of our approach with extensive experiments and show competitive performance against standard LSTMs on LTD and other non-LTD tasks.
+
+# 1 INTRODUCTION
+
+Recurrent neural networks (RNNs) in each round store a hidden state vector, $h_m \in \mathbb{R}^D$ , and upon receiving the input vector, $x_{m+1} \in \mathbb{R}^d$ , linearly transform the tuple $(h_m, x_{m+1})$ and pass it through a memoryless non-linearity to update the state over $T$ rounds. Subsequently, RNNs output an affine function of the hidden states as its prediction. The model parameters (state/input/prediction parameters) are learnt by minimizing an empirical loss. This seemingly simple update rule has had significant success in learning complex patterns for sequential input data.
+
+Nevertheless, that training RNNs can be challenging, and that performance can be uneven on tasks that require long-term-dependency (LTD), was first noted by Hochreiter (1991), Bengio et al. (1994) and later by other researchers. Pascanu et al. (2013b) attributed this to the fact that the error gradient back-propagated in time (BPTT), for the time-step $m$ , is dominated by product of partials of hidden-state vectors, $\prod_{j=m}^{T-1} \frac{\partial h_{j+1}}{\partial h_j}$ , and these products typically exhibit exponentially vanishing decay or explosion, resulting in incorrect credit assignment during training and test-time.
+
+Rosenblatt (1962), on whose work we draw inspiration from, introduced continuous-time RNN (CTRNN) to mimic activation propagation in neural circuitry. CTRNN dynamics evolves as follows:
+
+$$
+\tau \dot {g} (t) = - \alpha g (t) + \phi (U g (t) + W x (t) + b), t \geq t _ {0}. \tag {1}
+$$
+
+Here, $x(t) \in \mathbb{R}^d$ is the input signal, $g(t) \in \mathbb{R}^D$ is the hidden state vector of $D$ neurons, $\dot{g}_i(t)$ is the rate of change of the $i$ -th state component; $\tau, \alpha \in \mathbb{R}^+$ , referred to as the post-synaptic time-constant, impacts the rate of a neuron's response to the instantaneous activation $\phi(Ug(t) + Wx(t) + b)$ ; and $U \in \mathbb{R}^{D \times D}$ , $W \in \mathbb{R}^{D \times d}$ , $b \in \mathbb{R}^D$ are model parameters. In passing, note that recent RNN works that draw inspiration from ODE's (Chang et al., 2019) are special cases of CTRNN ( $\tau = 1$ , $\alpha = 0$ ).
+
+Vanishing Gradients. The qualitative aspects of the CTRNN dynamics is transparent in its integral form:
+
+$$
+g (t) = e ^ {- \alpha \frac {t - t _ {0}}{\tau}} g (t _ {0}) + \frac {1}{\tau} \int_ {t _ {0}} ^ {t} e ^ {- \alpha \frac {t - s}{\tau}} \phi (U g (s) + W x (s) + b) d s \tag {2}
+$$
+
+This integral form reveals that the partials of hidden-state vector with respect to the initial condition, $\frac{\partial g(t)}{\partial g(t_0)}$ , gets attenuated rapidly (first term in RHS), and so we face a vanishing gradient problem. We will address this issue later but we note that this is not an artifact of CTRNN but is exhibited by ODEs that have motivated other RNNs (see Sec. 2).
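This attenuation can be observed numerically. The following NumPy sketch (toy parameters of our own choosing, not from the paper) integrates Eq. 1 with Euler steps and checks that a perturbation of the initial condition $g(t_0)$ is strongly damped:

```python
import numpy as np

def ctrnn(g0, x, U, W, b, tau=1.0, alpha=1.0, dt=0.01, steps=800):
    """Euler integration of Eq. 1 with a constant input x(t) = x."""
    g = g0.copy()
    for _ in range(steps):
        g = g + (dt / tau) * (-alpha * g + np.tanh(U @ g + W @ x + b))
    return g

rng = np.random.default_rng(0)
D, d = 4, 2
U = 0.1 * rng.normal(size=(D, D))    # small norm, ||U|| < alpha
W = rng.normal(size=(D, d))
b = rng.normal(size=D)
x = rng.normal(size=d)
g0 = rng.normal(size=D)
delta = 1e-4 * rng.normal(size=D)    # perturb the initial condition

gap = np.linalg.norm(ctrnn(g0 + delta, x, U, W, b) - ctrnn(g0, x, U, W, b))
shrink = gap / np.linalg.norm(delta)  # decays roughly like e^{-alpha t / tau}
```

Here the integration runs to $t = 8$ with $\alpha = \tau = 1$, so the sensitivity to $g(t_0)$ has already shrunk by orders of magnitude, which is exactly the vanishing-gradient effect described above.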
+
+Shannon-Nyquist Sampling. A key property of CTRNN is that the time-constant $\tau$, together with the first term $-\alpha g(t)$, acts in effect as a low-pass filter with bandwidth $\alpha \tau^{-1}$, suppressing high-frequency components of the activation signal $\phi(Ug(s) + Wx(s) + b)$. This is good because, by virtue of the Shannon-Nyquist sampling theorem, we can now maintain fidelity of discrete samples with respect to the continuous-time dynamics, in contrast to conventional ODEs ($\alpha = 0$). Additionally, since high frequencies are already suppressed, we may in effect assume that the input signal $x(t)$ is slowly varying relative to the post-synaptic time constant $\tau$.
+
+Equilibrium. The combination of low pass filtering and slowly time varying input has a significant bearing. The state vector as well as the discrete samples evolve close to the equilibrium state, i.e., $g(t) \approx \phi(Ug(t) + Wx(t) + b)$ under general conditions (Sec. 3).
+
+Incremental Updates. Whether or not system is in equilibrium, the integral form in Eq. 2 points to gradient attenuation as a fundamental issue. To overcome this situation, we store and process increments rather than the cumulative values $g(t)$ and propose dynamic evolution in terms of increments. Let us denote hidden state sequence as $h_m \in \mathbb{R}^D$ and input sequence $x_m \in \mathbb{R}^d$ . For $m = 1, 2, \ldots, T$ , and a suitable $\beta > 0$
+
+$$
+\tau \dot {g} (t) = - \alpha (g (t) \pm h _ {m - 1}) + \phi (U (g (t) \pm h _ {m - 1}) + W x _ {m} + b), g (0) = 0, t \geq 0 \tag {3}
+$$
+
+$$
+h _ {m} \triangleq h _ {m} ^ {\beta \cdot \tau} \triangleq g (\beta \cdot \tau)
+$$
+
+Intuitively, suppose the system is in equilibrium, i.e., $-\alpha \mu (x_m,h_{m - 1}) + \phi (U\mu (x_m,h_{m - 1}) + Wx_m + b) = 0$. State transitions are then marginal changes from previous states, namely $h_m = \mu (x_m,h_{m - 1}) - h_{m - 1}$. For a fixed input $x_{m}$, which equilibrium is reached depends on $h_{m - 1}$, but the equilibria are nevertheless finitely many. Encoding these marginal changes as states leads to an "identity" gradient.
+
+Incremental RNN (iRNN) achieves Identity Gradient. We propose to discretize Eq. 3 to realize iRNN (see Sec. 3). At time $m$ , it takes the previous state $h_{m-1} \in \mathbb{R}^D$ and input $x_m \in \mathbb{R}^d$ and outputs $h_m \in \mathbb{R}^D$ after simulating the CTRNN evolution in discrete-time, for a suitable number of discrete steps. We show that the proposed RNN approximates the continuous dynamics and solves the vanishing/exploding gradient issue by ensuring identity gradient. In general, we consider two options, SiRNN, whose state is updated with a single CTRNN sample, similar to vanilla RNNs, and, iRNN, with many intermediate samples. SiRNN is well-suited for slowly varying inputs.
+
+Contributions. To summarize, we list our main contributions:
+
+(A) iRNN converges to equilibrium for typical activation functions. The partial gradients of hidden-state vectors for iRNNs converge to identity, thus solving vanishing/exploding gradient problem!
+(B) iRNN converges rapidly, at an exponential rate in the number of discrete samplings of Eq. 1. SiRNN, the single-step iRNN, is efficient and can be leveraged for slowly varying input sequences. It exhibits fast training time, has fewer parameters and better accuracy relative to standard LSTMs.
+(C) Extensive experiments on LTD datasets show that we improve upon standard LSTM accuracy as well as other recent proposals that are based on designing transition matrices and/or skip connections. iRNNs/SiRNNs are robust to time-series distortions such as noise paddings
+(D) While our method extends directly (see Appendix A.1) to Deep RNNs, we deem these extensions complementary, and focus on single-layer to highlight our incremental perspective.
+
+# 2 RELATED WORK
+
+Gated Architectures. Long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) is widely used in RNNs to model long-term dependency in sequential data. Gated recurrent unit (GRU) (Cho et al., 2014) is another gating mechanism that has been demonstrated to achieve similar
+
+performance of LSTM with fewer parameters. Some recent gated RNNs include UGRNN (Collins et al., 2016), and FastGRNN (Kusupati et al., 2018). While mitigating vanishing/exploding gradients, they do not eliminate it. Often, these models incur increased inference, training costs, and model size.
+
+Unitary RNNs. Arjovsky et al. (2016); Jing et al. (2017); Zhang et al. (2018); Mhammedi et al. (2016) focus on designing well-conditioned state transition matrices, attempting to enforce the unitary property during training. The unitary property does not in general circumvent vanishing gradients (Pennington et al., 2017). It also limits expressive power and prediction accuracy while increasing training time.
+
+Deep RNNs. These are nonlinear transition functions incorporated into RNNs for performance improvement. For instance, Pascanu et al. (2013a) empirically analyzed the problem of how to construct deep RNNs. Zilly et al. (2017) proposed extending the LSTM architecture to allow step-to-step transition depths larger than one. Mujika et al. (2017) proposed incorporating the strengths of both multiscale RNNs and deep transition RNNs to learn complex transition functions. While Deep RNNs offer richer representations relative to single-layers, it is complementary to iRNNs.
+
+Residual/Skip Connections. Jaeger et al. (2007); Bengio et al. (2013); Chang et al. (2017); Campos et al. (2017); Kusupati et al. (2018) feed-forward state vectors to induce skip or residual connections, to serve as a middle ground between feed-forward and recurrent models, and to mitigate gradient decay. Nevertheless, these connections cannot entirely eliminate gradient explosion/decay. For instance, Kusupati et al. (2018) suggest $h_m = \alpha_m h_{m-1} + \beta_m \phi(Uh_{m-1} + Wx_m + b)$, and learn parameters so that $\alpha_m \approx 1$ and $\beta_m \approx 0$. While this setting can lead to an identity gradient, observe that $\beta_m \approx 0$ implies little contribution from the inputs, which can conflict with good accuracy, as also observed in our experiments.
+
+Linear RNNs. (Bradbury et al., 2016; Lei et al., 2018; Balduzzi & Ghifary, 2016) focus on speeding up RNNs by replacing recurrent connections, such as hidden-to-hidden interactions, with light weight linear components. This reduces training time, but results in significantly increased model size. For example, Lei et al. (2018) requires twice the number of cells for LSTM level performance.
+
+ODE/Dynamical Perspective. Few ODE inspired architectures attempt to address stability, but do not end up eliminating vanishing/exploding gradients. Talathi & Vartak (2015) proposed a modified weight initialization strategy based on a dynamical system perspective on weight initialization process to successfully train RNNs composed of ReLUs. Niu et al. (2019) analyzed RNN architectures using numerical methods of ODE and propose a family of ODE-RNNs. Chang et al. (2019), propose Antisymmetric-RNN. Their key idea is to express the transition matrix in Eq. 1, for the special case $\alpha = 0, \tau = 1$ , as a difference: $U = V - V^T$ and note that the eigenspectrum is imaginary. Nevertheless, Euler discretization, in this context leads to instability, necessitating damping of the system. As such vanishing gradient cannot be completely eliminated. Its behavior is analogous to FastRNN Kusupati et al. (2018), in that, identity gradient conflicts with high accuracy. In summary, we are the first to propose evolution over the equilibrium manifold, and demonstrating identity gradients. Neural ODEs (Chen et al., 2018; Rubanova et al., 2019) have also been proposed for time-series prediction to deal with irregularly sampled inputs. They parameterize the derivative of the hidden-state in terms of an autonomous differential equation and let the ODE evolve in continuous time until the next input arrives. As such, this is not our goal, our ODE explicitly depends on the input, and evolves until equilibrium for that input is reached. We introduce incremental updates to bypass vanishing/exploding gradient issues, which is not of specific concern for these works.
+
+# 3 METHOD
+
+We use Euler's method to discretize Eq. 3 in steps $\delta = \eta \tau$ . Denoting the kth step as $g_{k} = g(k\delta)$
+
+$$
+\tau \frac {g _ {k} - g _ {k - 1}}{\delta} = - \alpha \left(g _ {k - 1} + h _ {m - 1}\right) + \phi \left(U \left(g _ {k - 1} + h _ {m - 1}\right) + W x _ {m} + b\right), k \in [ K ] \tag {4}
+$$
+
+Rearranging terms we get a compact form for iRNN (see Fig. 1). In addition we introduce a learnable parameter $\eta_m^k$ and let it be a function of time $m$ and the recursion-step $k$ .
+
+$$
+g _ {k} = g _ {k - 1} + \eta_ {m} ^ {k} \left(\phi \left(U \left(g _ {k - 1} + h _ {m - 1}\right) + W x _ {m} + b\right) - \alpha \left(g _ {k - 1} + h _ {m - 1}\right)\right), k \in [ K ] \tag {5}
+$$
+
+$$
+h _ {m} ^ {K} = g _ {K}
+$$
+
+We run the recursion for $k \in [K]$ with some suitable initial condition. This could be $g_0 = 0$ or initialized to the previous state, i.e., $g_0 = h_{m-1}$ at time $m$ .
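As a concrete sketch (our own NumPy rendering with a tanh activation and a constant step size; the learnable $\eta_m^k$ is replaced by a fixed $\eta$), one transition of this recursion is:

```python
import numpy as np

def irnn_transition(h_prev, x, U, W, b, alpha=1.0, eta=0.3, K=50):
    """Run the K-step fixed-point recursion of Eq. 5 for one input x_m,
    starting from g_0 = h_{m-1}, and return h_m = g_K."""
    g = h_prev.copy()
    for _ in range(K):
        resid = np.tanh(U @ (g + h_prev) + W @ x + b) - alpha * (g + h_prev)
        g = g + eta * resid  # move toward the equilibrium increment
    return g
```

At convergence the residual $\phi(U(g_K + h_{m-1}) + Wx_m + b) - \alpha(g_K + h_{m-1})$ is near zero, i.e. $g_K$ sits on the equilibrium manifold.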
+
+
+Figure 1: iRNN depicted by unfolding into $K$ recursions for one transition from $g_0 = h_{m-1}$ to $h_m = g_K$ . Here, $\varphi(x, g, h) = \phi(U(g + h) + Wx + b) - \alpha(g + h)$ . See Sec. A.2 for implementation and pseudo-code. This resembles Graves (2016), who propose to vary $K$ with $m$ as a way to attend to important input transitions. However, the transition functions used are gated units, unlike our conventional ungated functions. As such, while this is not their concern, equilibrium may not even exist and identity gradients are not guaranteed in their setup.
+
+In many of our examples, we find the input sequence is slowly varying, and $K = 1$ can also realize good empirical performance. We refer to this as single-step-incremental-RNN (SiRNN).
+
+$$
+h _ {m} ^ {1} = g _ {0} + \eta_ {m} \left(\phi \left(U \left(g _ {0} + h _ {m - 1}\right) + W x _ {m} + b\right) - \alpha \left(h _ {m - 1} + g _ {0}\right)\right) \tag {6}
+$$
+
+For both iRNN and SiRNN we drop the superscript whenever it is clear from the context.
+
+Root Finding and Transitions. The two indices $k$ and $m$ should not be confused. The index $m \in [T]$ refers to the time index, and indexes input, $x_{m}$ and hidden state $h_m$ over time horizon $T$ . The index $k \in [K]$ is a fixed-point recursion for converging to the equilibrium solution at each time $m$ , given input $x_{m}$ and the hidden state $h_{m - 1}$ . We iterate over $k$ so that at $k = K$ , $g_{K}$ satisfies,
+
+$$
+\phi (U (g _ {K} + h _ {m - 1}) + W x _ {m} + b) - \alpha (g _ {K} + h _ {m - 1}) \approx 0
+$$
+
+The recursion (Eq. 5) at time $m$ runs for $K$ rounds, terminates, and recursion is reset for the new input, $x_{m + 1}$ . Indeed, Eq. 5 is a standard root-finding recursion, with $g_{k - 1}$ serving as the previous solution, plus a correction term, which is the error, $\phi (U(g_{k - 1} + h_{m - 1}) + Wx_m + b) - \alpha (g_{k - 1} + h_{m - 1})$ . If the sequence converges, the resulting solution is the equilibrium point. Proposition 2 guarantees a geometric rate of convergence.
+
+Identity Gradient. We will informally (see Theorem 1) show here that partial gradients are identity. Say we have for sufficiently large $K$ , $h_m = g_K$ is the equilibrium solution. It follows that,
+
+$$
+\phi \left(U \left(h _ {m} + h _ {m - 1}\right) + W x _ {m} + b\right) - \alpha \left(h _ {m} + h _ {m - 1}\right) = 0
+$$
+
+Taking derivatives, we have,
+
+$$
+\nabla \phi (\cdot) U \left(\frac {\partial h _ {m}}{\partial h _ {m - 1}} + I\right) - \alpha \left(\frac {\partial h _ {m}}{\partial h _ {m - 1}} + I\right) = 0 \Rightarrow (\nabla \phi (\cdot) U - \alpha I) \left(\frac {\partial h _ {m}}{\partial h _ {m - 1}} + I\right) = 0. \tag {7}
+$$
+
+Thus if the matrix $(\nabla \phi(\cdot)U - \alpha I)$ is not singular, it follows that $\left(\frac{\partial h_m}{\partial h_{m-1}} + I\right) = 0$ .
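This identity-gradient property can be verified by finite differences: at equilibrium $g + h_{m-1}$ is pinned down by $x_m$ alone, so the converged state should respond to a perturbation of $h_{m-1}$ with Jacobian close to $-I$. A sketch with toy parameters of our own choosing:

```python
import numpy as np

def equilibrium_increment(h_prev, x, U, W, b, alpha=1.0, eta=0.3, K=400):
    """Fixed-point recursion of Eq. 5 run (nearly) to convergence."""
    g = h_prev.copy()
    for _ in range(K):
        g = g + eta * (np.tanh(U @ (g + h_prev) + W @ x + b)
                       - alpha * (g + h_prev))
    return g

rng = np.random.default_rng(1)
D, d = 4, 2
U = 0.1 * rng.normal(size=(D, D))
W = rng.normal(size=(D, d))
b = rng.normal(size=D)
x = rng.normal(size=d)
h = rng.normal(size=D)

base = equilibrium_increment(h, x, U, W, b)
J = np.empty((D, D))
eps = 1e-6
for j in range(D):
    e = np.zeros(D)
    e[j] = eps
    J[:, j] = (equilibrium_increment(h + e, x, U, W, b) - base) / eps

gap = np.abs(J + np.eye(D)).max()  # Jacobian should be close to -I
```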
+
+SiRNN vs. iRNN. SiRNN approximates iRNN. In particular, say $x_{m}$ is constant over the segment $m \in [m_0, m_0 + K]$; then the SiRNN trajectory of hidden states, denoted $h_{m_0 + K}^1$, is equal to the iRNN hidden state $h_{m_0}^K$, when both SiRNN and iRNN are initialized with $g_0 = h_{m - 1}$. Thus, for slowly time-varying inputs we can expect SiRNN to closely approximate iRNN.
+
+Residual Connections vs. iRNN/SiRNN. As such, our architecture is a special case of skip/residual connections. Nevertheless, unlike skip connections, our connections are structured, and the dynamics driven by the error term ensures that the hidden state is associated with equilibrium and leads to identity gradient. No such guarantees are possible with unstructured skip connections. Note that for slowly varying inputs, after a certain transition-time period, we should expect SiRNN to be close to equilibrium as well. Without this imposed structure, general residual architectures can learn patterns that can be dramatically different (see Fig. 2).
+
+# 3.1 IDENTITY GRADIENT PROPERTY AND CONVERGENCE GUARANTEES.
+
+Let us now collect a few properties of Eq. 3 and Eq. 5. First, denote the equilibrium solutions for an arbitrary input $x \in \mathbb{R}^d$ , arbitrary state-vector $\nu \in \mathbb{R}^D$ , in an arbitrary round:
+
+$$
+\mathcal {M} _ {e q} (x, \nu) = \left\{\mu \in \mathbb {R} ^ {D} \mid \alpha (\mu + \nu) = \phi (U (\mu + \nu) + W x + b) \right\}
+$$
+
+Whenever the equilibrium set is a singleton, we denote it as a function $h_{eq}(x,\nu)$. For simplicity, we assume below that $\eta_m^k$ is a positive constant $\eta$, independent of $m$ and $k$.
+
+Proposition 1. Suppose $\phi (\cdot)$ is a 1-Lipschitz function in the norm induced by $\| \cdot \|$, and $\| U\| < \alpha$. Then for any $x_{m}\in \mathbb{R}^{d}$ and $h_{m - 1}\in \mathbb{R}^D$, $\mathcal{M}_{eq}(x_m,h_{m - 1})$ is a singleton, and as $K\to \infty$ the iRNN recursions converge to this solution, namely, $h_m = \lim_{K\to \infty}g_K = h_{eq}(x_m,h_{m - 1})$.
+
+Proof. Define $T: \mathbb{R}^D \to \mathbb{R}^D$, with $T(g) = (1 - \eta \alpha)g + \eta (\phi(U(g + h_{m-1}) + Wx_m + b) - \alpha h_{m-1})$. It follows that $T(\cdot)$ is a contraction:
+
+$$
+\begin{array}{l} \| T (g) - T \left(g ^ {\prime}\right) \| \leq (1 - \eta \alpha) \| g - g ^ {\prime} \| + \eta \| \phi \left(U \left(g + h _ {m - 1}\right) + W x _ {m} + b\right) - \phi \left(U \left(g ^ {\prime} + h _ {m - 1}\right) + W x _ {m} + b\right) \| \\ \leq \left(1 - \eta \alpha + \| U \| \eta\right) \| g - g ^ {\prime} \| < \| g - g ^ {\prime} \|. \\ \end{array}
+$$
+
+We now invoke the Banach fixed point theorem, which asserts that a contractive operator on a complete metric space converges to a unique fixed point, namely, $T^{K}(g) \to g_{*}$. Upon substitution, we see that this point $g_{*}$ must satisfy $\phi(U(g_{*} + h_{m-1}) + Wx_{m} + b) - \alpha(g_{*} + h_{m-1}) = 0$. Thus the equilibrium point exists and is unique. The result follows by setting $h_{m} \triangleq h_{eq}(x_{m}, h_{m-1})$.
+
+Handling $\| U \| < \alpha$. In experiments, we set $\alpha = 1$ and do not enforce the $\| U \| < \alpha$ constraint explicitly. Instead, we initialize $U$ as a Gaussian matrix with i.i.d. zero-mean, small-variance components, so that its norm is smaller than 1. Empirically, the learnt $U$ matrix does not violate this condition.
+
+Next we show for $\eta > 0$ , iRNN converges at a linear rate, which follows directly from Proposition 1.
+
+Proposition 2. Under the setup in Proposition 1, it follows that,
+
+$$
+\| h _ {m} ^ {K} - h _ {e q} (x _ {m}, h _ {m - 1}) \| \triangleq \| g _ {K} - h _ {e q} (x _ {m}, h _ {m - 1}) \| \leq (1 - \alpha \eta + \eta \| U \|) ^ {K} \| g _ {1} - h _ {e q} (x _ {m}, h _ {m - 1}) \|
+$$
+
+Remark. Proposition 1 accounts for typical activation functions (ReLU, tanh, sigmoid), as well as for deep RNNs (Appendix A.1).
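The geometric rate of Proposition 2 can be checked numerically; in the sketch below (toy parameters of our own choosing), tanh is 1-Lipschitz and $U$ is drawn small enough that $\|U\|_2 < \alpha = 1$:

```python
import numpy as np

rng = np.random.default_rng(2)
D, d = 5, 2
U = 0.1 * rng.normal(size=(D, D))   # spectral norm well below alpha = 1
W = rng.normal(size=(D, d))
b = rng.normal(size=D)
x = rng.normal(size=d)
h_prev = rng.normal(size=D)
alpha, eta = 1.0, 0.5

def step(g):
    """One recursion of Eq. 5."""
    return g + eta * (np.tanh(U @ (g + h_prev) + W @ x + b)
                      - alpha * (g + h_prev))

# run long enough to treat the result as the equilibrium h_eq
g = np.zeros(D)
for _ in range(2000):
    g = step(g)
h_eq = g

rho = 1 - alpha * eta + eta * np.linalg.norm(U, 2)  # rate from Prop. 2
g = step(np.zeros(D))                # this is g_1
err0 = np.linalg.norm(g - h_eq)
errs = []
for _ in range(25):
    g = step(g)
    errs.append(np.linalg.norm(g - h_eq))
```

Each recorded error should fall under the envelope $\rho^{K}\,\|g_1 - h_{eq}\|$, confirming the geometric contraction.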
+
+In passing, we point out that in our experiments we learn the parameters $\eta_{m}^{k}$, and a result that accounts for this case is desirable; we describe this case in Appendix A.3. A fundamental result, described below, is that the partials of hidden-state vectors on the equilibrium surface have unit norm. For technical simplicity, we assume a continuously differentiable activation, which appears to exclude ReLU activations. Nevertheless, we can overcome this issue, although it requires more technical arguments. The main difficulty stems from ensuring that derivatives along the equilibrium surface exist, and this can be realized by invoking the implicit function theorem (IFT). IFT requires continuous differentiability, which ReLUs violate. Nevertheless, recent results$^{1}$ suggest that one can state an implicit function theorem for everywhere differentiable functions, which includes ReLUs.
+
+Theorem 1. Suppose $\phi(\cdot)$ is a continuously differentiable, 1-Lipschitz function, with $\|U\| < \alpha$. Then as $K \to \infty$, $\frac{\partial h_m}{\partial h_{m-1}} \to \frac{\partial h_{eq}(x_m, h_{m-1})}{\partial h_{m-1}} = -I$. Furthermore, as $K \to \infty$, the partial gradients over an arbitrary number of rounds of iRNN are identity.
+
+$$
+\frac {\partial h _ {r}}{\partial h _ {s}} = \prod_ {r \geq m > s} \frac {\partial h _ {m}}{\partial h _ {m - 1}} = (- 1) ^ {r - s} \mathbf {I} \Rightarrow \left\| \frac {\partial h _ {r}}{\partial h _ {s}} \right\| = 1. \tag {8}
+$$
+
+Proof. Define $\psi(g, h_{m-1}) = \phi(U(g + h_{m-1}) + Wx_m + b) - \alpha(g + h_{m-1})$. We overload notation and view the equilibrium point as a function of $h_{m-1}$, i.e., $g_*(h_{m-1}) = h_{eq}(x_m, h_{m-1})$. Invoking standard results² in ODEs, it follows that $g_*(h_{m-1})$ is a smooth function, so long as the Jacobian $\nabla_g \psi(g_*, h_{m-1})$ with respect to the first coordinate, $g_*$, is non-singular. Upon computation, we see that $\nabla_g \psi(g_*, h_{m-1}) = \nabla \phi(g_*, h_{m-1})U - \alpha I$ is non-singular, since $\| \nabla \phi(g_*, h_{m-1})U \| \leq \| U \| < \alpha$. It follows that we can take partials of the state-vectors. By taking the partial derivatives w.r.t. $h_{m-1}$ at the equilibrium points, we have $[\nabla \phi(g_*, h_{m-1})U - \alpha \mathbf{I}][\frac{\partial g_*}{\partial h_{m-1}} + \mathbf{I}] = \mathbf{0}$ (see Eq. 7). The rest of the proof follows by observing that the first term is non-singular.
+
+Remark. We notice that replacing $h_{m-1}$ with $-h_{m-1}$ in Eq. 12 leads to $\frac{\partial h_{eq}}{\partial h_{m-1}} = \mathbf{I}$ , which likewise preserves gradient magnitudes. As a result, both choices are suitable for circumventing vanishing or exploding gradients during training, but they may converge to different local minima and thus result in different test-time performance. Furthermore, notice that the norm-preserving property is somewhat insensitive to the choice of $\alpha$ , so long as the non-singularity condition is satisfied.
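As an illustrative sanity check on Theorem 1 (this NumPy sketch is ours, not the paper's code; the dimensions, $\alpha = 1$, and tanh activation are assumptions), we can run the Euler recursion of Eq. 5 to near-equilibrium and estimate $\partial h_m / \partial h_{m-1}$ by finite differences; the estimate should be close to $-\mathbf{I}$:

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 8, 3          # hidden and input dimensions (illustrative)
alpha = 1.0
U = rng.standard_normal((D, D))
U *= 0.5 * alpha / np.linalg.norm(U, 2)   # enforce ||U|| < alpha
W = rng.standard_normal((D, d))
b = rng.standard_normal(D)
x = rng.standard_normal(d)

def irnn_step(h_prev, K=2000, eta=0.1):
    """One iRNN transition: Euler recursion toward the equilibrium of Eq. 5."""
    g = np.zeros(D)
    for _ in range(K):
        s = g + h_prev
        g = g + eta * (np.tanh(U @ s + W @ x + b) - alpha * s)
    return g  # h_m = g_K

h_prev = rng.standard_normal(D)
h = irnn_step(h_prev)

# Finite-difference estimate of dh_m / dh_{m-1}; Theorem 1 predicts -I.
eps = 1e-5
J = np.column_stack([(irnn_step(h_prev + eps * e) - h) / eps for e in np.eye(D)])
err = np.abs(J + np.eye(D)).max()
print(err)  # close to 0
```

Intuitively, once $s = g + h_{m-1}$ is substituted, the equilibrium condition no longer involves $h_{m-1}$; the map $h_{m-1} \mapsto h_m$ is affine with slope $-\mathbf{I}$, which is exactly what the finite-difference Jacobian recovers.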
+
+# 3.2 IRNN DESIGN IMPLICATIONS: LOW-RANK MODEL PARAMETRIZATION
+
+Fig. 2 depicts phase portraits and illustrates salient differences between RNN, FastRNN (RNN with skip connection), and iRNN $(\mathrm{K} = 5)$ . RNN and FastRNN exhibit complex trajectories, while the iRNN trajectory is smooth, projecting the initial point (black circle) onto the equilibrium surface (blue) and moving within it (green). This suggests that the iRNN trajectory belongs to a low-dimensional manifold.
+
+
+Figure 2: Phase-space trajectory with tanh activation of RNN, FastRNN, and iRNN. The X-axis denotes the 1st dimension and the Y-axis the 2nd dimension of a 2D hidden state subject to random-walk input with variance 10 for 1000 time steps. Parameters $U, W, b$ are randomly initialized. RNN states are scaled to fit the plot since FastRNN states are not required to lie in the unit cube.
+
+Variation of Equilibrium w.r.t. Input. As before, let $h_{eq}$ be an equilibrium solution for some tuple $(h_{m-1}, x_m)$ . It follows that,
+
+$$
+(\alpha \mathbf {I} - \nabla \phi (U (h _ {e q} + \mathbf {h} _ {m - 1}) + W x _ {m} + b) U) \partial h _ {e q} = \nabla \phi (U (h _ {e q} + h _ {m - 1}) + W x _ {m} + b) W \partial x _ {m}
+$$
+
+This suggests that, whenever the input undergoes a slow variation, the equilibrium point moves in such a way that $U\partial h_{eq}$ must lie in a transformed span of $W$ . Now $W \in \mathbb{R}^{D\times d}$ with $d \ll D$ , which implies that $(\alpha \mathbf{I} - \nabla \phi (U(h_{eq} + h_{m - 1}) + Wx_m + b)U)$ is rank-deficient.
+
+Low Rank Matrix Parameterization. For typical activation functions, note that whenever the argument is in the unsaturated regime, $\nabla \phi (\cdot)\approx \mathbf{I}$ . We then approximately get $\mathrm{span}(\alpha \mathbf{I} - U)\approx \mathrm{span}(W)$ . We can express this constraint as $U = \alpha \mathbf{I} + VH$ with low-rank matrices $V\in \mathbb{R}^{D\times d_1}, H\in \mathbb{R}^{d_1\times D}$ , and further map both $Uh_{m}$ and $Wx_{m}$ onto a shared space. Since the signal vectors we encounter in our experiments are low-dimensional, and sequential inputs vary slowly over time, we enforce this restriction in all our experiments. In particular, we consider,
+
+$$
+\phi \left(P \left[ U \left(h _ {m} + h _ {m - 1}\right) + W x _ {m} + b \right]\right) - \left(h _ {m} + h _ {m - 1}\right) = \mathbf {0}. \tag {9}
+$$
+
+The parameter matrix $P \in \mathbb{R}^{D \times D}$ maps the contributions from the input and hidden states onto the same space. To decrease model size we tie $P = U = (\mathbf{I} + VH)$ and learn these parameters.
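A minimal sketch of this low-rank update (our illustration; the sizes $D, d, d_1$, the tanh activation, the 0.01 initialization scale, and the fixed recursion depth are assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(1)
D, d, d1 = 64, 8, 4          # hidden dim, input dim, rank (illustrative)
V = 0.01 * rng.standard_normal((D, d1))
H = 0.01 * rng.standard_normal((d1, D))
W = rng.standard_normal((D, d)) / np.sqrt(d)
b = np.zeros(D)

def apply_U(v):
    # (I + V H) v costs O(D * d1) instead of the dense O(D^2)
    return v + V @ (H @ v)

def low_rank_step(h_prev, x, K=5, eta=0.1):
    """Euler recursion toward the equilibrium of Eq. 9 with P = U = I + VH."""
    g = np.zeros(D)
    for _ in range(K):
        s = g + h_prev
        pre = apply_U(apply_U(s) + W @ x + b)   # P [ U s + W x + b ]
        g = g + eta * (np.tanh(pre) - s)
    return g

h = low_rank_step(np.zeros(D), rng.standard_normal(d))
# transition parameters: 2 * D * d1 = 512 floats vs D * D = 4096 dense
```

The design choice here is that the transition matrix is never materialized: both multiplications by $P = U$ go through the rank-$d_1$ factors, which is what makes the small parameter counts in the experiment tables possible.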
+
+# 4 EXPERIMENTS
+
+We organize this section as follows. First, we describe the experimental setup and competing algorithms. Then we present an ablative analysis to highlight salient aspects of iRNN and to justify some of our experimental choices. Finally, we plot and tabulate experimental results on benchmark datasets.
+
+# 4.1 EXPERIMENTAL SETUP AND BASELINES
+
+Choice of Competing Methods: We choose competing methods based on the following criteria: (a) methods that are devoid of additional application- or dataset-specific heuristics, (b) methods that leverage only a single cell/block/layer, and (c) methods without the benefit of complementary add-ons (such as gating, advanced regularization, or model compression). Requiring (a) is not controversial since our goal is methodological. Conditions (b) and (c) are justifiable since such add-ons are not germane to any particular method and we could leverage them as well3. We benchmark iRNN against the standard RNN, LSTM (Hochreiter & Schmidhuber, 1997), (ungated) AntisymmetricRNN (Chang et al., 2019), and (ungated) FastRNN (Kusupati et al., 2018).
+
+
+Figure 3: Exploratory experiments for the Add task. (a) Convergence with varying $K$ ; (b) the ratio $\left\| \frac{\partial h_T}{\partial h_1} \right\| / \left\| \frac{\partial h_T}{\partial h_{T-1}} \right\|$ illustrates vanishing/exploding gradients ( $\left\| \frac{\partial h_T}{\partial h_{T-1}} \right\|$ and loss gradients are omitted here but displayed in A.7.8). For iRNN, (a) and (b) together show a strong correlation of gradient with accuracy, in contrast to other methods.
+
+Unitary RNN Variants. Results for methods based on unitary transitions (such as Arjovsky et al. (2016); Wisdom et al. (2016); Vorontsov et al. (2017); Zhang et al. (2018)) are not reported in the main paper (where available, they are reported in the appendix) for the following reasons: (a) they are substantially more expensive and require large model sizes; (b) apart from the benchmark copy and add tasks, results tabulated by the FastRNN and Antisymmetric authors (see Zhang et al. (2018); Chang et al. (2019)) show that they are well below SOTA; (c) iRNN dominates unitary-RNN variants on the add task (see Sec. 4.3.1); (d) on the copy task, while unitary variants are superior, Vorontsov et al. (2017) attribute this to modReLU or leaky ReLU activations. Leaky ReLUs allow for linear transitions, and the copy task, being a memory task, benefits from them. With a hard non-linear activation, unitary RNN variants can take thousands of epochs for even 100-length sequences (Vorontsov et al. (2017)).
+
+Implementation. For all our experiments, we used the parametrized update formulation in Eq. 9 for iRNN. We used the TensorFlow framework. Code is publicly available for most competing methods; we implemented AntisymmetricRNN ourselves. All experiments were run on an Nvidia GTX 1080 GPU with CUDA 9 and cuDNN 7.0 on a machine with a 20-core Intel Xeon 2.60 GHz CPU.
+
+Datasets. Pre-processing and feature-extraction details for all publicly available datasets are in Appendix A.4. We replicate the benchmark test/train splits, holding out $20\%$ of the training data for validation to tune hyperparameters. Reported results are obtained by training on the full training set and evaluating on the publicly available test set. Table 4 (Appendix) and A.4 describe all the datasets.
+
+Hyperparameters. We set the hyperparameters of each algorithm via grid search and fine-grained validation wherever possible, or according to the settings published in (Kusupati et al., 2018; Arjovsky et al., 2016) (e.g., number of hidden states). Both the learning rate and the $\eta$ ’s were initialized to $10^{-2}$ . A batch size of 128 worked well across all the datasets. We used ReLU as the non-linearity and Adam (Kingma & Ba (2015)) as the optimizer for all experiments.
+
+# 4.2 ABLATIVE ANALYSIS
+
+We perform an ablative analysis on the benchmark add task (Sec. 4.3.1) with sequence length 200 for 1000 iterations, using mean-squared error as the metric. Fig. 3 depicts the salient results.
+
+(a) Identity Gradients & Accuracy: iRNN accuracy is correlated with identity gradients. Increasing $K$ improves gradients, which correlates with increased accuracy (Fig. 3). While other models, $h_t = \alpha h_{t-1} + \beta \phi((U - \gamma I)h_{t-1} + Wx_t)$ , can realize identity gradients for suitable choices (linear: $\alpha = 1$ , $\beta = 1$ , $\gamma = 0$ , $U = 0$ ; FastRNN: $\alpha \approx 1$ , $\beta \approx 0$ , $\gamma = 0$ ; Antisymmetric: $\alpha = 1$ , $\beta = 1$ , $U = V - V^T$ , $\|U\| \leq \gamma$ ), this goal may not be correlated with improved test accuracy. FastRNN ( $\eta = 0.001$ ) and Antisymmetric ( $\gamma = 0.01$ , $\epsilon = 0.001$ ) have good gradients but poorer test accuracy relative to FastRNN ( $\eta = 0.01$ ) and Antisymmetric ( $\gamma = 0.01$ , $\epsilon = 0.1$ ), which have poorer gradients.
+(b) Identity gradients imply faster convergence: An identity gradient, whenever effective, must be capable of assigning credit to the informative parts of the sequence, which in turn results in larger loss gradients and significantly faster convergence with the number of iterations. This is borne out in Fig. 3(a). iRNN for larger $K$ is closer to an identity gradient with fewer (unstable) spikes ( $K = 1, 5, 10$ ). With $K = 10$ , iRNN converges within 300 iterations while competing methods take about twice as long (baselines not included here exhibited poorer performance than the ones plotted).
+
+
+Figure 4: Following Arjovsky et al. (2016), we display the average cross entropy for the Copy task, with sequence length (and baseline memoryless strategy) of (a) 200 (0.09) and (b) 500 (0.039), and the mean squared error for the Add task, with baseline performance 0.167, for sequence lengths (c) 200 and (d) 750. For both tasks, iRNN runs with $K = 5$ .
+
+(c) SiRNN (iRNN with $K = 1$ ) delivers good performance in some cases. Fig. 3(a) illustrates that iRNN with $K = \{5,10\}$ achieves faster convergence than SiRNN, but the computational overhead per iteration roughly doubles or triples. SiRNN is fast relative to competitors. For this reason, we sometimes tabulate only SiRNN whenever it is SOTA in benchmark experiments, since accuracy improves with $K$ but at higher overhead.
+
+# 4.3 LONG-TERM DEPENDENCY AND OTHER TASKS
+
+We consider five types of datasets, all of which in some way require effective gradient propagation: (1) conventional benchmark LTD tasks (Add & Copy tasks), which illustrate that iRNN can rapidly learn long-term dependence; (2) benchmark vision tasks (pixel-MNIST, permuted-MNIST), which may not require long-term memory, but nevertheless demonstrate that iRNN achieves SOTA for short-term dependencies with fewer resources; (3) noise-padded (LTD) vision tasks (Noisy-MNIST, Noisy-CIFAR), where a large noise time segment separates the informative segments from the terminal state, so the learner must extract the informative parts while rejecting the noisy parts; (4) short-duration activities embedded in a larger time window (HAR-2, Google-30 in Appendix Table 4, and many others in A.7), which usually arise in the context of smart IoT applications and require a small model-size footprint; Chang et al. (2019) further justify (3) and (4) as LTD tasks, because for these datasets only a small unknown segment of a longer sequence is informative; and (5) sequence-to-sequence prediction tasks (PTB language modeling), which differ from terminal prediction (reported in Appendix A.7).
+
+# 4.3.1 STANDARD BENCHMARK LTD TASKS : ADDITION & COPY MEMORY
+
+Addition and Copy tasks (Hochreiter & Schmidhuber, 1997) have long been used as benchmarks in the literature to evaluate LTD (Hori et al., 2017; Zhang et al., 2018; Arjovsky et al., 2016; Martens & Sutskever, 2011). We follow the setup described in Arjovsky et al. (2016) to create the adding and copying tasks; see Appendix A.4 for a detailed description. For both tasks we run iRNN with $K = 5$ .
+
+Figure 4 shows the average performance of various methods on these tasks. For the copy task we observe that iRNN converges rapidly to the naive baseline and is the only method to achieve zero average cross entropy. For the add task, both FastRNN and iRNN solve the task, but FastRNN takes twice the number of iterations to reach the desired 0 MSE. In both tasks, iRNN's performance is much more stable across the number of online training samples. In contrast, other methods either take many samples to match iRNN's performance or exhibit high variance in the evaluation metric. This shows that iRNN converges faster than the baselines (to the desired error) and easily and quickly learns long-term dependencies. We omitted unitary RNN variants for the Add and Copy tasks; see Sec. 4.1 for the copy task. On the add task our performance is superior: for the longer $T = 750$ length, Arjovsky et al. (2016) point out that MSE does not reach zero and uRNN is noisy, while others either do not report the add task (Wisdom et al., 2016) or report it only for shorter lengths (Zhang et al., 2018).
+
+Table 1: Results for the Pixel-by-Pixel MNIST and Permuted MNIST datasets. $K$ denotes the pre-defined number of recursions embedded in the graph to reach equilibrium.
+
+| Data set | Algorithm | Accuracy (%) | Train Time (hr) | #Params |
| --- | --- | --- | --- | --- |
| Pixel-MNIST | FastRNN | 96.44 | 15.10 | 33k |
| | RNN | 94.10 | 45.56 | 14k |
| | LSTM | 97.81 | 26.57 | 53k |
| | Antisymmetric | 98.01 | 8.61 | 14k |
| | iRNN (K=1) | 97.73 | 2.83 | 4k |
| | iRNN (K=3) | 98.13 | 2.93 | 4k |
| Permute-MNIST | FastRNN | 92.68 | 9.32 | 8.75k |
| | LSTM | 92.61 | 19.31 | 35k |
| | Antisymmetric | 93.59 | 4.75 | 14k |
| | iRNN (K=1) | 95.62 | 2.41 | 8k |
+
+# 4.3.2 NON-LTD VISION TASKS: PIXEL-MNIST, PERMUTED-MNIST
+
+Next, we perform experiments on sequential vision tasks: (a) classification of MNIST images from a pixel-by-pixel sequence; (b) a fixed, randomly permuted MNIST sequence (Lecun et al., 1998). These tasks typically do not fall into the LTD category (Chang et al., 2019), but are useful to demonstrate faster training, which can be attributed to better gradients.
+
+For the pixel-MNIST task, Kusupati et al. (2018) report that existing (LSTM, unitary, gated, spectral) RNNs take significantly longer to converge to reasonable performance; in contrast, FastRNN trains at least $2x$ faster than LSTMs. Our results (Table 1) show a $9x$ speedup for iRNN relative to LSTMs, and a $2x$ speedup in comparison to Antisymmetric. In terms of test accuracy, iRNN matches the performance of Antisymmetric, but with at least $3x$ fewer parameters. We did not gain much with increased $K$ values5. For the permuted version of this task, we outperform the existing baselines6. In both tasks, iRNN trained at least $2x$ faster than the strongest baselines. These results demonstrate that iRNN converges much faster than the baselines with fewer parameters.
+
+# 4.3.3 NOISE PADDING TASKS: NOISY-MNIST, NOISY-CIFAR
+
+Additionally, as in Chang et al. (2019), we induce LTD by padding CIFAR-10 with noise, exactly replicating their setup, resulting in Noisy-CIFAR. We extend this setting to the MNIST dataset, resulting in Noisy-MNIST. Intuitively, we expect our model to be resilient to such perturbations, and we attribute iRNN's superior performance to its ability to suppress noise. For example, say noise is padded at $t > \tau$ and this results in $Wx_{t}$ being zero on average. For iRNN the resulting states cease to be updated, so iRNN recalls the last informative state $h_\tau$ (modulo a constant), unlike RNNs and their variants. Information from the signal component is thus possibly better preserved.
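This noise-suppression argument can be illustrated with a toy simulation (ours, with $\alpha = 1$, a zero input standing in for $Wx_t \approx 0$ on average, and tanh in place of ReLU): once the input term vanishes, the equilibrium condition $\phi(Us + b) = \alpha s$ pins $s = h_m + h_{m-1}$ to a constant $s^*$, so the state alternates between $s^* - h_\tau$ and $h_\tau$, and the last informative state is recalled.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 8
alpha = 1.0
U = rng.standard_normal((D, D))
U *= 0.5 / np.linalg.norm(U, 2)   # ||U|| < alpha
b = rng.standard_normal(D)

def step(h_prev, wx, K=500, eta=0.1):
    # iRNN transition with the input term W x_t passed in directly
    g = np.zeros(D)
    for _ in range(K):
        s = g + h_prev
        g = g + eta * (np.tanh(U @ s + wx + b) - alpha * s)
    return g

h_tau = rng.standard_normal(D)    # last informative state
h, states = h_tau, []
for _ in range(10):               # noise segment: W x_t ~ 0 on average
    h = step(h, np.zeros(D))
    states.append(h.copy())

# At equilibrium h_m + h_{m-1} = s* (a constant), so the state oscillates
# with period 2 and h_tau reappears at every other step.
recall_err = np.linalg.norm(states[1] - h_tau)
print(recall_err)  # near zero
```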
+
+Results for Noisy-MNIST and Noisy-CIFAR are shown in Table 2. Note that almost all time steps contain noise in these datasets. LSTMs perform poorly on these tasks due to vanishing gradients, which is consistent with earlier observations (Chang et al., 2019).
+
+Table 2: Results for the noise-padded CIFAR-10 and MNIST datasets. Since the equilibrium surface is smooth and resilient to small perturbations, iRNN achieves better performance than the baselines with faster convergence.
+
+| Data set | Algorithm | Accuracy (%) | Train Time (hr) | #Params |
| --- | --- | --- | --- | --- |
| Noisy-MNIST | FastRNN | 98.12 | 8.93 | 11k |
| | LSTM | 10.31 | 19.43 | 44k |
| | Antisymmetric | 97.76 | 5.21 | 10k |
| | iRNN (K=1) | 98.48 | 2.39 | 6k |
| Noisy-CIFAR | FastRNN | 45.76 | 11.61 | 16k |
| | LSTM | 11.60 | 23.47 | 64k |
| | Antisymmetric | 48.63 | 5.81 | 16k |
| | iRNN (K=1) | 54.50 | 2.47 | 11.5k |
+
+iRNN outperforms the baselines comprehensively on CIFAR-10, while on MNIST the gains are smaller, as it is a relatively easier task. These results show that iRNN is more resilient to noise and can account for longer dependencies.
+
+Table 3: Results for activity-recognition datasets. iRNN outperforms the baselines on all metrics even with $K = 1$ . It is worth noting that although $K = 5$ increases test time, it remains well within the LSTM's numbers, while the overall train time and resulting performance are better than with $K = 1$ .
+
+| Data set | Algorithm | Accuracy (%) | Train Time (hr) | #Params | Test Time (ms) |
| --- | --- | --- | --- | --- | --- |
| HAR-2 | FastRNN | 94.50 | 0.063 | 7.5k | 0.01 |
| | RNN | 91.31 | 0.114 | 7.5k | 0.01 |
| | LSTM | 93.65 | 0.183 | 16k | 0.04 |
| | Antisymmetric | 93.15 | 0.087 | 7.5k | 0.01 |
| | iRNN (K=1) | 95.32 | 0.061 | 4k | 0.01 |
| | iRNN (K=5) | 96.30 | 0.018 | 4k | 0.03 |
| Google-30 | FastRNN | 91.60 | 1.30 | 18k | 0.01 |
| | RNN | 80.05 | 2.13 | 12k | 0.01 |
| | LSTM | 90.31 | 2.63 | 41k | 0.05 |
| | Antisymmetric | 90.91 | 0.54 | 12k | 0.01 |
| | iRNN (K=1) | 93.77 | 0.44 | 8.5k | 0.01 |
| | iRNN (K=5) | 94.23 | 0.44 | 8.5k | 0.05 |
+
+# 4.3.4 SHORT DURATION EMBEDDED ACTIVITY RECOGNITION TASKS: HAR-2, GOOGLE-30
+
+We are interested in detecting activity embedded in a longer sequence with small footprint RNNs (Kusupati et al. (2018)): (a) Google-30 (Warden, 2018), i.e. detection of utterances of 30 commands plus background noise and silence, and (b) HAR-2 (Anguita et al., 2012), i.e. Human Activity Recognition from an accelerometer and gyroscope on a Samsung Galaxy S3 smartphone.
+
+Table 3 shows accuracy, training time, number of parameters, and prediction time. Even with $K = 1$ , iRNN compares well against competing methods, and its accuracy improves with larger $K$ . Interestingly, higher $K$ yields faster training as well as moderate prediction time, despite the overhead of additional recursions. These results show that iRNN outperforms the baselines on activity-recognition tasks and fits within IoT/edge-device budgets.
+
+# 5 CONCLUSION
+
+Drawing inspiration from Rosenblatt's continuous RNNs, we developed the discrete-time incremental RNN (iRNN). Leveraging equilibrium properties of CTRNNs, iRNN solves the exploding/vanishing gradient problem. We show that iRNN's improved gradients are directly correlated with improved test accuracy. A number of experiments demonstrate iRNN's responsiveness to long-term dependency tasks. In addition, due to its smooth low-dimensional trajectories, it has a lightweight footprint that can be leveraged for IoT applications.
+
+# ACKNOWLEDGMENTS
+
+The authors would like to thank the Area Chair and the reviewers for their constructive comments. This work was supported partly by the National Science Foundation Grant 1527618, the Office of Naval Research Grant N0014-18-1-2257 and by a gift from ARM corporation.
+
+# REFERENCES
+
+Kerem Altun, Billur Barshan, and Orkun Tuncel. Comparative study on classifying human activities with miniature inertial and magnetic sensors. Pattern Recogn., 43(10):3605-3620, October 2010. ISSN 0031-3203. doi: 10.1016/j.patcog.2010.04.019. URL http://dx.doi.org/10.1016/j.patcog.2010.04.019.
+Davide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra, and Jorge L. Reyes-Ortiz. Human activity recognition on smartphones using a multiclass hardware-friendly support vector machine. In Proceedings of the 4th International Conference on Ambient Assisted Living and Home Care, IWAAL'12, pp. 216-223, Berlin, Heidelberg, 2012. Springer-Verlag. ISBN 978-3-642-35394-9. doi: 10.1007/978-3-642-35395-6_30. URL http://dx.doi.org/10.1007/978-3-642-35395-6_30.
+Martin Arjovsky, Amar Shah, and Yoshua Bengio. Unitary evolution recurrent neural networks. In International Conference on Machine Learning, pp. 1120-1128, 2016.
+David Balduzzi and Muhammad Ghifary. Strongly-typed recurrent neural networks. In Proceedings of The 33rd International Conference on Machine Learning, pp. 1292-1300, 2016.
+Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. Trans. Neur. Netw., 5(2):157-166, March 1994. ISSN 1045-9227. doi: 10.1109/72.279181. URL http://dx.doi.org/10.1109/72.279181.
+Yoshua Bengio, Nicolas Boulanger-Lewandowski, and Razvan Pascanu. Advances in optimizing recurrent networks. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 8624-8628, 2013.
+James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. Quasi-recurrent neural networks. CoRR, abs/1611.01576, 2016. URL http://arxiv.org/abs/1611.01576.
+Víctor Campos, Brendan Jou, Xavier Giró-i Nieto, Jordi Torres, and Shih-Fu Chang. Skip rnn: Learning to skip state updates in recurrent neural networks. arXiv preprint arXiv:1708.06834, 2017.
+Bo Chang, Minmin Chen, Eldad Haber, and Ed H. Chi. AntisymmetricRNN: A dynamical system view on recurrent neural networks. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=ryxepo0cFX.
+Shiyu Chang, Yang Zhang, Wei Han, Mo Yu, Xiaoxiao Guo, Wei Tan, Xiaodong Cui, Michael Witbrock, Mark A Hasegawa-Johnson, and Thomas S Huang. Dilated recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 77-87, 2017.
+Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems, pp. 6571-6583, 2018.
+Kyunghyun Cho, Bart Van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
+Jasmine Collins, Jascha Sohl-Dickstein, and David Sussillo. Capacity and Trainability in Recurrent Neural Networks. arXiv e-prints, art. arXiv:1611.09913, November 2016.
+Tim Cooijmans, Nicolas Ballas, César Laurent, Caglar Gülçehre, and Aaron Courville. Recurrent batch normalization. arXiv preprint arXiv:1603.09025, 2016.
+Ron S Dembo, Stanley C Eisenstat, and Trond Steihaug. Inexact newton methods. SIAM Journal on Numerical analysis, 19(2):400-408, 1982.
+
+Chengyue Gong, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tie-Yan Liu. Frage: frequency-agnostic word representation. In Advances in Neural Information Processing Systems, pp. 1334–1345, 2018.
+Alex Graves. Adaptive computation time for recurrent neural networks. CoRR, abs/1603.08983, 2016. URL http://arxiv.org/abs/1603.08983.
+Michiel Hermans and Benjamin Schrauwen. Training and analysing deep recurrent neural networks. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (eds.), Advances in Neural Information Processing Systems 26, pp. 190-198. Curran Associates, Inc., 2013. URL http://papers.nips.cc/paper/5166-training-and-analysing-deep-recurrent-neural-networks.pdf.
+Josef Hochreiter. Untersuchungen zu dynamischen neuronalen netzen. 1991. URL http://people.idsia.ch/~juergen/ SeppHochreiter1991ThesisAdvisorSchmidhuber.pdf.
+Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735-1780, 1997.
+Chiori Hori, Takaaki Hori, Teng-Yok Lee, Ziming Zhang, Bret Harsham, John R Hershey, Tim K Marks, and Kazuhiko Sumi. Attention-based multimodal fusion for video description. In ICCV, pp. 4203-4212, 2017.
+Herbert Jaeger, Mantas Lukosevicius, Dan Popovici, and Udo Siewert. Optimization and applications of echo state networks with leaky-integrator neurons. *Neural networks: the official journal of the International Neural Network Society*, 20:335–52, 05 2007. doi: 10.1016/j.neunet.2007.04.016.
+Li Jing, Yichen Shen, Tena Dubcek, John Peurifoy, Scott Skirlo, Yann LeCun, Max Tegmark, and Marin Soljacic. Tunable efficient unitary neural networks (eunn) and their application to rnns. In International Conference on Machine Learning, pp. 1733-1741, 2017.
+Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICML, 2015.
+Aditya Kusupati, Manish Singh, Kush Bhatia, Ashish Kumar, Prateek Jain, and Manik Varma. Fastgrnn: A fast, accurate, stable and tiny kilobyte sized gated recurrent neural network. In Advances in Neural Information Processing Systems, 2018.
+Yann Lecun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pp. 2278-2324, 1998.
+Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, and Yoav Artzi. Simple recurrent units for highly parallelizable recurrence. In Empirical Methods in Natural Language Processing (EMNLP), 2018.
+James Martens and Ilya Sutskever. Learning recurrent neural networks with hessian-free optimization. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 1033-1040, 2011.
+Zakaria Mhammedi, Andrew D. Hellicar, Ashfaqur Rahman, and James Bailey. Efficient orthogonal parametrisation of recurrent neural networks using householder reflections. CoRR, abs/1612.00188, 2016. URL http://arxiv.org/abs/1612.00188.
+Asier Mujika, Florian Meier, and Angelika Steger. Fast-slow recurrent neural networks. In Advances in Neural Information Processing Systems, pp. 5915-5924, 2017.
+Murphy Yuezhen Niu, Lior Horesh, and Isaac Chuang. Recurrent neural networks in the eye of differential equations. arXiv preprint arXiv:1904.12933, 2019.
+Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. arXiv preprint arXiv:1312.6026, 2013a.
+Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pp. 1310-1318, 2013b.
+Razvan Pascanu, Caglar Güçehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. CoRR, abs/1312.6026, 2013c.
+
+Jeffrey Pennington, Samuel Schoenholz, and Surya Ganguli. Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 4785-4795. 2017.
+F. Rosenblatt. Principles of neurodynamics. Spartan Books, Washington, D.C., 1962.
+Yulia Rubanova, Ricky T. Q. Chen, and David Duvenaud. Latent odes for irregularly-sampled time series. CoRR, abs/1907.03907, 2019. URL http://arxiv.org/abs/1907.03907.
+Sachin S Talathi and Aniket Vartak. Improving performance of recurrent neural network with relunonlinearity. arXiv preprint arXiv:1511.03771, 2015.
+Eugene Vorontsov, Chiheb Trabelsi, Samuel Kadoury, and Chris Pal. On orthogonality and learning recurrent networks with long term dependencies. In ICML, pp. 3570-3578, 2017. URL http://proceedings.mlr.press/v70/vorontsov17a.html.
+Pete Warden. Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition. arXiv e-prints, art. arXiv:1804.03209, April 2018.
+Scott Wisdom, Thomas Powers, John Hershey, Jonathan Le Roux, and Les Atlas. Full-capacity unitary recurrent neural networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 4880-4888. Curran Associates, Inc., 2016. URL http://papers.nips.cc/paper/6327-full-capacity-unitary-recurrent-neural-networks.pdf.
+Yelp Inc. Yelp dataset challenge. 2017. URL https://www.yelp.com/dataset/challenge.
+Jiong Zhang, Qi Lei, and Inderjit S. Dhillon. Stabilizing gradients for deep neural networks via efficient svd parameterization. In ICML, 2018.
+Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutnik, and Jurgen Schmidhuber. Recurrent highway networks. In ICML, pp. 4189-4198. JMLR.org, 2017.
+
+# A APPENDIX
+
+# A.1 MULTI-LAYER DEEP RNN NETWORKS.
+
+We point out in passing that our framework readily admits deep multi-layered networks within a single time step. Indeed, our setup is general: it applies to shallow and deep nets, and to small and large numbers of time steps. As a case in point, the Deep Transition RNN of Pascanu et al. (2013c):
+
+$$
+h _ {m + 1} = f _ {h} \left(h _ {m}, x _ {m + 1}\right) = \phi_ {h} \left(W _ {L} \phi_ {L - 1} \left(W _ {L - 1} \dots W _ {1} \phi_ {1} \left(U h _ {m} + W x _ {m + 1}\right)\right)\right)
+$$
+
+is readily accounted by Theorem 1 in an implicit form:
+
+$$
+h _ {m + 1} = f _ {h} \left(h _ {m + 1} + h _ {m}, x _ {m + 1}\right) - h _ {m}.
+$$
+
+So is the Deep-RNN of Hermans & Schrauwen (2013). The trick is to transform $h_m \to h_m + h_{m+1}$ and $h_{m+1} \to h_m + h_{m+1}$ . As such, all we need is smoothness of $f_h$ , which places no restriction on the number of layers. On the other hand, the point of Theorem 1 is that we also need not limit the number of time steps: it asserts that the partial differential of hidden states (the primary source of vanishing/exploding gradients, Pascanu et al. (2013b)) is the identity.
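To make the implicit form concrete, here is a toy NumPy sketch (our illustration; the two-layer cell, weight scales, and Euler recursion are assumptions, not the paper's code) that solves $h_{m+1} = f_h(h_{m+1} + h_m, x_{m+1}) - h_m$ for a deep-transition cell:

```python
import numpy as np

rng = np.random.default_rng(4)
D, d = 8, 3
# Two-layer deep-transition cell in the spirit of Pascanu et al. (2013c);
# weights are scaled so the recursion is well behaved in this toy example.
W1 = rng.standard_normal((D, D)); W1 *= 0.4 / np.linalg.norm(W1, 2)
W2 = rng.standard_normal((D, D)); W2 *= 0.4 / np.linalg.norm(W2, 2)
Win = rng.standard_normal((D, d))

def f_h(h, x):
    return np.tanh(W2 @ np.tanh(W1 @ h + Win @ x))

def implicit_step(h_prev, x, K=200, eta=0.2):
    """Euler recursion toward h_{m+1} = f_h(h_{m+1} + h_m, x_{m+1}) - h_m."""
    h = np.zeros(D)
    for _ in range(K):
        h = h + eta * (f_h(h + h_prev, x) - h_prev - h)
    return h

h_prev = rng.standard_normal(D)
x = rng.standard_normal(d)
h = implicit_step(h_prev, x)
residual = np.linalg.norm(f_h(h + h_prev, x) - h_prev - h)  # near zero
```

Only smoothness of `f_h` is used; the same recursion applies regardless of the number of layers inside the cell.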
+
+# A.2 PSEUDO CODE AND IMPLEMENTATION
+
+Given an input sequence and iRNN model parameters, the hidden states can be generated with the help of subroutine 1. This routine can be plugged into standard deep learning frameworks such as Tensorflow/PyTorch to learn the model parameters via back-propagation.
+
+Algorithm 1: Pseudo Code for computing iRNN hidden states for one input sequence
+Data: Input sequence $\{x_{m}\}_{m = 1}^{T}$
+Require: Number of recursion steps $K$ , Model parameters $(U,W,b,\alpha ,\{\eta_m^k\})$
+1 Initialize hidden state $h_0 = 0$
+2 for $m = 1$ to $T$ do
+3 $\quad$ Initialize $g_{0}$ to zero or $h_{m - 1}$
+4 $\quad$ for $k = 1$ to $K$ do
+5 $\quad\quad$ $g_{k} = g_{k - 1} + \eta_{m}^{k}\bigl(\phi (U(g_{k - 1} + h_{m - 1}) + Wx_{m} + b) - \alpha (g_{k - 1} + h_{m - 1})\bigr)$
+6 $\quad$ $h_m = g_K$
+Output: hidden states $\{h_m\}_{m = 1}^T$
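Algorithm 1 can be transcribed into NumPy as follows (a sketch, not the paper's TensorFlow implementation; tanh stands in for the generic $\phi$ and the learned $\eta_m^k$ are held constant):

```python
import numpy as np

def irnn_hidden_states(X, U, W, b, alpha, eta, K):
    """Algorithm 1: iRNN hidden states for one input sequence.

    X: (T, d) inputs; eta: (T, K) step sizes (learned in the paper,
    held constant in this sketch)."""
    T = X.shape[0]
    D = U.shape[0]
    h_prev = np.zeros(D)
    states = []
    for m in range(T):
        g = np.zeros(D)                     # or h_prev.copy()
        for k in range(K):
            s = g + h_prev
            g = g + eta[m, k] * (np.tanh(U @ s + W @ X[m] + b) - alpha * s)
        h_prev = g
        states.append(g)
    return np.stack(states)                 # (T, D)

# Toy usage with randomly initialized parameters
rng = np.random.default_rng(0)
T, d, D, K = 6, 3, 5, 5
U = rng.standard_normal((D, D))
U *= 0.5 / np.linalg.norm(U, 2)
W = rng.standard_normal((D, d))
b = np.zeros(D)
eta = np.full((T, K), 0.1)
Hs = irnn_hidden_states(rng.standard_normal((T, d)), U, W, b, 1.0, eta, K)
```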
+
+Table 4: Dataset Statistics & Long Term Dependence
+
+| Dataset | Avg. Activity Time | Input Time | Sequence Ratio | #Train | #Fts | #Steps | #Test |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Google-30 | 25ms | 1000ms | 3/99 | 51,088 | 32 | 99 | 6,835 |
| HAR-2 | 256ms | 2560ms | 13/128 | 7,352 | 9 | 128 | 2,947 |
| Noisy-MNIST | 28 | 1000 | 7/250 | 60,000 | 28 | 1000 | 10,000 |
| Noisy-CIFAR | 32 | 1000 | 4/125 | 60,000 | 96 | 1000 | 10,000 |
| Pixel-MNIST | | | | 60,000 | 1 | 784 | 10,000 |
| Permuted-MNIST | | | | 60,000 | 1 | 784 | 10,000 |
+
+# A.3 CONVERGENCE GUARANTEES FOR GENERAL LEARNING RATES.
+
+Theorem 2 (Local Convergence with Linear Rate). Assume that the function $F(g_{i}) \triangleq \phi(U(g_{i} + h_{k-1}) + Wx_{k} + b) - (g_{i} + h_{k-1})$ and the parameter $\eta_{k}^{(i)}$ in Eq. 5 satisfy
+
+$$
+\left[ \eta_ {k} ^ {(i)} \right] ^ {2} \| \nabla F (g _ {i}) F (g _ {i}) \| ^ {2} + 2 \eta_ {k} ^ {(i)} F (g _ {i}) ^ {\top} \nabla F (g _ {i}) F (g _ {i}) < 0, \forall k, \forall i. \tag {10}
+$$
+
+Then there exists $\epsilon > 0$ such that if $\| g_0 - h_{eq}\| \leq \epsilon$ where $h_{eq}$ denotes the fixed point, the sequence $g_i$ generated by the Euler method converges to the equilibrium solution in $\mathcal{M}_{eq}(h_{k-1},x_k)$ locally with linear rate.
+
+The proof draws a connection between the Euler method and inexact Newton methods, and leverages Thm. 2.3 in Dembo et al. (1982). See appendix Sec. A.8.1, Thm. 3, and Sec. A.7.5 for the proof and empirical verification.
+
+Corollary 1. If $\| \mathbf{I} + \eta_k^{(i)}\nabla F(g_i)\| < 1, \forall k, \forall i$ , the forward propagation (Eq. 13) is stable and the sequence $\{g_i\}$ converges locally at a linear rate.
+
+The proof is based on Thm. 2.3 in Dembo et al. (1982), Thm. 2, and Prop. 2 in Chang et al. (2019). See appendix A.8.1, Corollary 2.
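Corollary 1's condition can be checked numerically on a toy instance (our sketch; the dimensions, tanh activation, and $\eta = 0.5$ are assumptions): with $\|U\| < 1$, the iteration matrix $\mathbf{I} + \eta \nabla F(g_i)$ stays strictly inside the unit ball, and the Euler iterates drive $F(g)$ to zero.

```python
import numpy as np

rng = np.random.default_rng(3)
D = 6
U = rng.standard_normal((D, D))
U *= 0.5 / np.linalg.norm(U, 2)   # ||U|| < 1
h_prev = rng.standard_normal(D)
c = rng.standard_normal(D)        # stands in for W x_k + b

def F(g):
    s = g + h_prev
    return np.tanh(U @ s + c) - s

def jac_F(g):
    s = g + h_prev
    Dphi = np.diag(1.0 - np.tanh(U @ s + c) ** 2)   # gradient of tanh
    return Dphi @ U - np.eye(D)

eta = 0.5
g = np.zeros(D)
norms = []
for _ in range(200):
    norms.append(np.linalg.norm(np.eye(D) + eta * jac_F(g), 2))
    g = g + eta * F(g)

stable = max(norms) < 1.0          # Corollary 1's condition holds
residual = np.linalg.norm(F(g))    # near zero: g converged to the fixed point
```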
+
+# A.4 DATASET DETAILS
+
+Tables 4 and 6 list the statistics of all the datasets described below.
+
+Google-12 & Google-30: The Google Speech Commands dataset contains 1-second-long utterances of 30 short words (30 classes) sampled at $16\mathrm{kHz}$ . Standard log Mel-filter-bank featurization with 32 filters over a window size of $25\mathrm{ms}$ and stride of $10\mathrm{ms}$ gives 99 time steps of 32 filter responses for a 1-second audio clip. For the 12-class version, the 10 classes used in Kaggle's Tensorflow Speech Recognition challenge were used, and the remaining two classes were noise and background sounds (taken randomly from the remaining 20 short-word utterances). Both datasets were zero-mean, unit-variance normalized during training and prediction.
+
+HAR-2: The Human Activity Recognition (HAR) dataset was collected from an accelerometer and gyroscope on a Samsung Galaxy S3 smartphone. The features available on the repository were directly used for experiments. The 6 activities were merged to get the binarized version: the classes {Sitting, Laying, Walking_Upstairs} and {Standing, Walking, Walking_Downstairs} were merged to obtain the two classes. The dataset was zero-mean, unit-variance normalized during training and prediction.
+
+Penn Treebank: 300 length word sequences were used for word level language modeling task using Penn Treebank (PTB) corpus. The vocabulary consisted of 10,000 words and the size of trainable word embeddings was kept the same as the number of hidden units of architecture. This is the setup used in (Kusupati et al., 2018; Zhang et al., 2018).
+
+Pixel-MNIST: Pixel-by-pixel version of the standard MNIST-10 dataset. The dataset was zero-mean, unit-variance normalized during training and prediction.
+
+Permuted-MNIST: This is similar to Pixel-MNIST, except it is made harder by shuffling the pixels with a fixed permutation. We keep the random seed at 42 to generate the permutation of 784 pixels.
+
+Noisy-MNIST: To introduce more long-range dependencies to the Pixel-MNIST task, we define a more challenging task called Noisy-MNIST, inspired by the noise-padded experiments in Chang et al. (2019). Instead of feeding in one pixel at a time, we input one row of an MNIST image at every time step. After the first 28 time steps, we input independent standard Gaussian noise for the remaining time steps. Since an MNIST image has 28 rows with a single channel, the input dimension is $m = 28$ . The total number of time steps is set to $T = 1000$ . In other words, only the first 28 time steps of input contain salient information; all remaining 972 time steps are merely random noise. For a model to correctly classify an input image, it has to remember information from a long time ago. This task is conceptually more difficult than pixel-by-pixel MNIST, although the total amount of signal in the input sequence is the same.
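
The construction above can be sketched in a few lines; the image values here are a random stand-in for an actual MNIST digit:

```python
import numpy as np

T, H, W = 1000, 28, 28
rng = np.random.default_rng(0)
image = rng.random((H, W))                 # stand-in for a 28x28 MNIST digit

# Steps 1..28: the image rows; steps 29..1000: standard Gaussian noise.
seq = np.concatenate([image, rng.standard_normal((T - H, W))], axis=0)

print(seq.shape)                           # (1000, 28): T steps of dimension m = 28
```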
+
+Noisy-CIFAR: This is an exact replica of the noise-padded CIFAR task in Chang et al. (2019). Instead of feeding in one pixel at a time, we input one row of a CIFAR-10 image at every time step. After the first 32 time steps, we input independent standard Gaussian noise for the remaining time steps. Since a CIFAR-10 image has 32 rows with three RGB channels, the input dimension is $m = 96$ . The total number of time steps is set to $T = 1000$ . In other words, only the first 32 time steps of input contain salient information; all remaining 968 time steps are merely random noise. For a model to correctly classify an input image, it has to remember information from a long time ago. This task is conceptually more difficult than pixel-by-pixel CIFAR-10, although the total amount of signal in the input sequence is the same.
+
+Addition Task: We closely follow the adding problem defined in (Arjovsky et al., 2016; Hochreiter & Schmidhuber, 1997) to explain the task at hand. Each input consists of two sequences of length $T$ . The first sequence, which we denote $x$ , consists of numbers sampled uniformly at random from $\mathcal{U}[0,1]$ . The second sequence is an indicator sequence consisting of exactly two entries of 1 and remaining entries 0. The first 1 entry is located uniformly at random in the first half of the sequence, whilst the second 1 entry is located uniformly at random in the second half. The output is the sum of the two entries of the first sequence, corresponding to where the 1 entries are located in the second sequence. A naive strategy of predicting 1 as the output regardless of the input sequence gives an expected mean squared error of 0.167, the variance of the sum of two independent uniform distributions.
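
A minimal generator for this task, together with a Monte Carlo check of the 0.167 baseline (the function and variable names are ours, not from any released code):

```python
import numpy as np

def adding_example(T, rng):
    """One instance of the adding problem with sequence length T."""
    x = rng.random(T)                      # values ~ U[0, 1]
    ind = np.zeros(T)
    ind[rng.integers(0, T // 2)] = 1.0     # one marker in the first half
    ind[rng.integers(T // 2, T)] = 1.0     # one marker in the second half
    return x, ind, float(x @ ind)          # target: sum of the two marked values

rng = np.random.default_rng(0)
# Baseline of always predicting 1: MSE = Var(U1 + U2) = 2/12 = 1/6 ~ 0.167,
# since E[U1 + U2] = 1 and the marked values are independent U[0, 1].
errs = [(adding_example(100, rng)[2] - 1.0) ** 2 for _ in range(20000)]
print(np.mean(errs))  # ~0.167
```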
+
+Copying Task: Following a similar setup to (Arjovsky et al., 2016; Hochreiter & Schmidhuber, 1997), we outline the copy memory task. Consider 10 categories, $\{a_i\}_{i=0}^9$ . The input takes the form of a $T + 20$ length vector of categories, where we test over a range of values of T. The first 10 entries are sampled uniformly, independently and with replacement from $\{a_i\}_{i=0}^7$ , and represent the sequence which will need to be remembered. The next $T - 1$ entries are set to $a_8$ , which can be thought of as the 'blank' category. The next single entry is $a_9$ , which represents a delimiter, which should indicate to the algorithm that it is now required to reproduce the initial 10 categories in the output. The remaining 10 entries are set to $a_8$ . The required output sequence consists of $T + 10$ repeated entries of $a_8$ , followed by the first 10 categories of the input sequence in exactly the same order. The goal is to minimize the average cross entropy of category predictions at each time step of
+
+the sequence. The task amounts to having to remember a categorical sequence of length 10, for T time steps.
+
+A simple baseline can be established by considering an optimal strategy when no memory is available, which we deem the memoryless strategy. The memoryless strategy is to predict $a_8$ for the first $T + 10$ entries and then predict each of the final 10 categories from the set $\{a_i\}_{i=0}^7$ independently and uniformly at random. The categorical cross entropy of this strategy is $\frac{10 \log(8)}{T + 20}$ .
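
A small generator for this task, with the memoryless baseline evaluated for $T = 100$ (names are ours; categories $a_0, \dots, a_9$ are coded as integers $0, \dots, 9$):

```python
import numpy as np

def copy_example(T, rng):
    """Input/target pair for the copy task with categories a_0..a_9 (coded 0..9)."""
    seq = rng.integers(0, 8, size=10)                 # 10 symbols to remember
    x = np.concatenate([seq, np.full(T - 1, 8), [9], np.full(10, 8)])
    y = np.concatenate([np.full(T + 10, 8), seq])     # blanks, then the copy
    return x, y

rng = np.random.default_rng(0)
T = 100
x, y = copy_example(T, rng)
print(x.shape, y.shape)                # both (T + 20,)

# Memoryless baseline: predict the blank a_8 everywhere (zero loss on the T+10
# blank targets), guess uniformly over 8 categories on the final 10 positions.
print(10 * np.log(8) / (T + 20))       # ~0.1733 for T = 100
```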
+
+DSA-19: This dataset is based on Daily and Sports Activity (DSA) detection from a resource-constrained IoT wearable device with 5 Xsens MTx sensors having accelerometers, gyroscopes and magnetometers on the torso and four limbs. The features available on the repository were used for experiments. The dataset was zero-mean, unit-variance normalized during training and prediction.
+
+Yelp-5: Sentiment classification dataset based on text reviews. The data consists of 500,000 train points and 500,000 test points drawn from the first 1 million reviews. Each review was clipped or padded to be 300 words long. The vocabulary consisted of 20,000 words, and 128-dimensional word embeddings were jointly trained with the network.
+
+# A.5 BASELINE JUSTIFICATION
+
+In the experiments section, we stated that some potential baselines were excluded due to the experimental conditions enforced in our setup. Here we justify that choice. The main reasoning is to avoid comparing complementary add-ons and instead compare the bare-bones cells.
+
+- Cooijmans et al. (2016) is removed since it is an add-on that can be applied to any method. Besides, its Pixel-MNIST results involve dataset-specific heuristics.
+- Gong et al. (2018) is also an add-on and hence can be applied to any method.
+- Zilly et al. (2017); Pascanu et al. (2013a); Mujika et al. (2017) denote deep-transitioning methods. They are add-ons for any single recurrent block and hence can be applied to any recurrent cell.
+- Gating variants of single recurrent cells (Chang et al., 2019; Kusupati et al., 2018) have also been removed, since iRNN can be extended to a gating variant and hence gating is just an add-on.
+
+Table 5: Various hyper-parameters to reproduce results
+
+| Dataset | Hidden Units |
| Google-30 | 80 |
| HAR-2 | 80 |
| Pixel-MNIST | 128 |
| Permuted-MNIST | 128 |
| Noisy-MNIST | 128 |
| Noisy-CIFAR | 128 |
| Addition Task | 128 |
| Copying Task | 128 |
| PTB | 256 |
+
+# A.6 HYPER-PARAMETERS FOR REPRODUCIBILITY
+
+We report the various hyper-parameters used in our experiments for reproducibility. As mentioned earlier, we mainly use ReLU as the non-linearity and Adam as the optimizer. The remaining hyper-parameters are listed in Table 5.
+
+Table 6: Other Dataset Statistics & Long Term Dependence
+
+| Dataset | Avg. Activity Time | Input Time | Sequence Ratio | #Train | #Fts | #Steps | #Test |
| Google-12 | 25ms | 1000ms | 3/99 | 22,246 | 32 | 99 | 3,081 |
| DSA-19 | 500ms | 5000ms | 13/125 | 4,560 | 45 | 125 | 4,560 |
| Yelp-5 | 20 | 300 | 1/15 | 500,000 | 128 | 300 | 500,000 |
| PTB | | | | 929,589 | 300 | 300 | 82,430 |
+
+
+Figure 5: Mean Squared Error for the Add Task, for sequence lengths (a) 100 and (b) 400.
+
+# A.7 ADDITIONAL EXPERIMENTS
+
+# A.7.1 COPYING AND ADDITION TASKS
+
+Figure 5 shows the results for the remaining addition-task experiments, for sequence lengths 100 and 400.
+
+Table 7: Results for Pixel-by-Pixel MNIST and Permuted MNIST datasets. $K$ denotes the number of pre-defined recursions embedded in the graph to reach equilibrium.
+
+| Data set | Algorithm | Accuracy (%) | Train Time (hr) | #Params |
| Pixel-MNIST | FastRNN | 96.44 | 15.10 | 33k |
| FastGRNN-LSQ | 98.72 | 12.57 | 14k |
| RNN | 94.10 | 45.56 | 14k |
| SpectralRNN | 97.7 | | 6k |
| LSTM | 97.81 | 26.57 | 53k |
| URNN | 95.1 | | 16k |
| Antisymmetric | 98.01 | 8.61 | 14k |
| iRNN (K=1) | 97.73 | 2.83 | 4k |
| iRNN (K=2) | 98.13 | 3.11 | 4k |
| iRNN (K=3) | 98.13 | 2.93 | 4k |
| Permute-MNIST | FastRNN | 92.68 | 9.32 | 8.75k |
| SpectralRNN | 92.7 | | 8.5k |
| LSTM | 92.61 | 19.31 | 35k |
| URNN | 91.4 | | 12k |
| Antisymmetric | 93.59 | 4.75 | 14k |
| iRNN (K=1) | 95.62 | 2.41 | 8k |
+
+# A.7.2 TRADITIONAL DATASETS
+
+Table 7 shows the results, including the left-out baselines, for the Pixel-MNIST and Permuted-MNIST tasks. Here we also include star-rating prediction, on a scale of 1 to 5, of Yelp reviews Yelp (2017). Table 8 shows the results for this dataset.
+
+Table 8: Results for Yelp Dataset.
+
+| Data set | Algorithm | Accuracy (%) | Model Size (KB) | Train Time (hr) | Test Time (ms) | #Params |
| Yelp-5 | FastRNN | 55.38 | 130 | 3.61 | 0.4 | 32.5k |
| FastGRNN-LSQ | 59.51 | 130 | 3.91 | 0.7 | 32.5k |
| FastGRNN | 59.43 | 8 | 4.62 | | |
| RNN | 47.59 | 130 | 3.33 | 0.4 | 32.5k |
| SpectralRNN | 56.56 | 89 | 4.92 | 0.3 | 22k |
| EURNN | 59.01 | 122 | 72.00 | | |
| LSTM | 59.49 | 516 | 8.61 | 1.2 | 129k |
| GRU | 59.02 | 388 | 8.12 | 0.8 | 97k |
| Antisymmetric | 54.14 | 130 | 2.61 | 0.4 | 32.5k |
| UGRNN | 58.67 | 258 | 4.34 | | |
| iRNN (K=1) | 58.16 | 97.67 | 0.31 | 0.4 | 25k |
| iRNN (K=2) | 59.01 | 98.84 | 0.31 | 0.7 | 25k |
| iRNN (K=3) | 59.34 | 100 | 1.16 | 1.0 | 25k |
+
+# A.7.3 ACTIVITY RECOGNITION DATASETS
+
+We also include activity recognition tasks: (a) Google-12 Warden (2018), i.e. detection of utterances of 10 commands plus background noise and silence and (b) DSA-19 Altun et al. (2010), Daily and Sports Activity (DSA) detection from a resource-constrained IoT wearable device with 5 Xsens MTx sensors having accelerometers, gyroscopes and magnetometers on the torso and four limbs. Table 9 shows results for these activities along with some other baselines for activity recognition tasks mentioned in Sec. 4.3.4 and described in Sec. A.4.
+
+# A.7.4 PTB LANGUAGE MODELLING
+
+We follow (Kusupati et al., 2018; Zhang et al., 2018) to setup our PTB experiments. We only pursue one layer language modelling, but with more difficult sequence length (300). Table 10 reports all the evaluation metrics for the PTB Language modelling task with 1 layer as setup by Kusupati et al. (2018), including test time and number of parameters (which we omitted from the main paper due to lack of space).
+
+# A.7.5 LINEAR RATE OF CONVERGENCE TO FIXED POINT
+
+Empirically, we verify local convergence to a fixed point at a linear rate by comparing the Euclidean distance between the approximate solutions $\mathbf{h}_t^{(k)}$ , computed using Eq. 11 with $g_0 = 0$ , and the fixed points $\mathbf{h}_t$ , computed using FSOLVE from SCIPY. The learnable parameters are initialized suitably and then fixed. We illustrate our results in Fig. 6, which clearly demonstrates that the approximate solutions converge at a linear rate.
+
+
+Figure 6: Linear convergence in $iRNN$ .
+
+$$
+g _ {i} = g _ {i - 1} + \eta_ {t} ^ {i} \left(\phi \left(U \left(g _ {i - 1} + h _ {t - 1}\right) + W x _ {t} + b\right) - \alpha \left(g _ {i - 1} + h _ {t - 1}\right)\right) \tag {11}
+$$
+
+$$
+h _ {t} ^ {K} = g _ {K}
+$$
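
This comparison can be sketched as follows. The choices here ($\phi = \tanh$, $\alpha = 1$, a fixed step size $\eta$, and small random weights so that the update is a contraction) are our own illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(0)
n, m = 8, 4
U = 0.1 * rng.standard_normal((n, n))   # small spectral norm so the update contracts
W = rng.standard_normal((n, m))
b = rng.standard_normal(n)
h_prev = rng.standard_normal(n)         # h_{t-1}
x_t = rng.standard_normal(m)

def F(g):
    # Residual whose root is the equilibrium (Eq. 11 with alpha = 1, phi = tanh)
    return np.tanh(U @ (g + h_prev) + W @ x_t + b) - (g + h_prev)

g_star = fsolve(F, np.zeros(n))         # reference fixed point from SciPy
g, eta = np.zeros(n), 0.5
dists = []
for _ in range(60):                     # Euler updates of Eq. 11
    g = g + eta * F(g)
    dists.append(np.linalg.norm(g - g_star))

print(dists[0], dists[-1])              # the gap to the fixed point shrinks geometrically
```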
+
+# A.7.6 THEORETICAL VERIFICATION
+
+Here we include some experiments to show that our theoretical assumptions hold true.
+
+
+Figure 7: Histogram of the eigenvalues of $\nabla \phi \mathbf{U} - \mathbf{I}$ for $iRNN$ on HAR-2 dataset.
+
+Non-Singularity of the matrix $D$. For our iRNN parametrization to satisfy the conditions for equilibrium points to be locally asymptotically stable, the eigenvalues of the matrix $D = (\nabla \phi(\cdot)U - \gamma I)$ should be negative. We plot a histogram of the eigenvalues of $D$ for all the points in the HAR-2 dataset. As illustrated in Figure 7, all the eigenvalues are negative.
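
A small numerical illustration of why this holds whenever $\|U\|_2 < \gamma$: the ReLU derivative is elementwise in $\{0, 1\}$, so the spectral radius of $\nabla\phi\, U$ is at most $\|U\|_2$, pushing every eigenvalue of $D$ strictly into the left half-plane. The construction below (random weights, random activation pattern) is our own sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n, gamma = 16, 1.0
U = rng.standard_normal((n, n))
U *= 0.9 * gamma / np.linalg.norm(U, 2)     # enforce ||U||_2 < gamma

# ReLU derivative is 0 or 1 elementwise; pick a random activation pattern.
phi_prime = np.diag(rng.integers(0, 2, size=n).astype(float))

D = phi_prime @ U - gamma * np.eye(n)
eig_real = np.linalg.eigvals(D).real
print(eig_real.max())                       # strictly negative: at most ||U||_2 - gamma
```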
+
+# A.7.7 IDENTITY GRADIENT COMPARISON iRNN VS RNN
+
+To verify Theorem 1 empirically, we train RNN and iRNN on the HAR-2 data set (see more details in Sec. 4), respectively, and plot in Fig. 8 the magnitude of gradient of the last layer $\mathbf{h}_T$ w.r.t. the first
+
+
+(a)
+Figure 8: Comparison between RNN and iRNN on the magnitudes of gradients.
+
+layer $\mathbf{h}_1$ in log scale to confirm that our approach leads to neither vanishing nor exploding gradients when the error is back-propagated through time. We also conducted experiments to verify that the gradient of iRNN is norm-preserving (see Sec. A.7.8 and Figure 3). As can be seen clearly, RNN suffers from a serious vanishing-gradient issue in training, while iRNN's backpropagated gradient magnitude stays close to 1; the variance arises mainly from our approximation of fixed points and the stochastic behavior of network training, demonstrating the much better training stability of iRNN.
+
+# A.7.8 GRADIENT NORM W.R.T. LOSS $\left\| \frac{\partial L}{\partial h_1}\right\|$
+
+In addition to the gradient ratio plotted in Sec. 4.2, we also show in Figure 9 the more popular quantity reported in earlier works (Arjovsky et al., 2016; Zhang et al., 2018), i.e. the gradient norm of the loss, $\| \frac{\partial L}{\partial h_1}\|$ . We emphasize that this quantity alone is misleading in the context of diagnosing vanishing/exploding gradients, since $\| \frac{\partial L}{\partial h_1}\| \leq \| \frac{\partial L}{\partial h_T}\| \cdot \| \frac{\partial h_T}{\partial h_1}\|$ . The long-term component controlling the gradients is $\| \frac{\partial h_T}{\partial h_1}\|$ , but the other component, $\| \frac{\partial L}{\partial h_T}\|$ , can become zero simply because the loss is nearly zero. This is what happens in our addition-task experiment: because the MSE is close to zero, this quantity is also nearly 0. Also note that none of our graphs use a log scale, unlike earlier works. The conclusion that can be drawn from the loss gradient is that it is fairly stable and can inform us about the quality of convergence.
+
+We also plot $\left\| \frac{\partial h_T}{\partial h_{T - 1}}\right\|$ in figure 9 in order to show that indeed iRNN achieves identity gradients everywhere in the time horizon, since fig. 3 had shown that the ratio of $\left\| \frac{\partial h_T}{\partial h_1}\right\|$ and $\left\| \frac{\partial h_T}{\partial h_{T - 1}}\right\|$ equals 1 for iRNN.
+
+# A.7.9 DIFFERENT ACTIVATION FUNCTION
+
+We also performed experiments with the sigmoid activation on the HAR-2 dataset (Table 11). The results for this variant follow a similar pattern to the ReLU variant.
+
+# A.8 PROOFS
+
+# A.8.1 LOCAL CONVERGENCE WITH LINEAR RATE
+
+Recall that we rewrite the fixed-point constraints in our $iRNN$ as the following ODE:
+
+$$
+g _ {k} ^ {\prime} (t) = F \left(g _ {i}\right) \stackrel {\text {d e f}} {=} \phi \left(U \left(g _ {i} + h _ {k - 1}\right) + W x _ {k} + b\right) - \left(g _ {i} + h _ {k - 1}\right); g (0) = 0. \tag {12}
+$$
+
+Then based on the Euler method, we have the following update rule for solving fixed-points:
+
+$$
+g _ {i + 1} = g _ {i} + \eta_ {k} ^ {(i)} F (g _ {i}) \tag {13}
+$$
+
+$$
+g _ {i + 1} = g _ {i} + \eta_ {k} ^ {(i)} \left[ \phi \left(U \left(g _ {i} + h _ {k - 1}\right) + W x _ {k} + b\right) - \left(g _ {i} + h _ {k - 1}\right) \right]. \tag {14}
+$$
+
+Inexact Newton methods Dembo et al. (1982) refer to a family of algorithms that aim to solve the equation system $F(z) = 0$ approximately at each iteration using the following rule:
+
+$$
+z _ {i + 1} = z _ {i} + s _ {i}, r _ {i} = F \left(z _ {i}\right) + \nabla F \left(z _ {i}\right) s _ {i}, \tag {15}
+$$
+
+where $\nabla F$ denotes the (sub)gradient of function $F$ , and $r_i$ denotes the error at the $i$ -th iteration between $F(z_i)$ and 0.
+
+By drawing the connection between Eq. 13 and Eq. 15, we can set $z_{i} \equiv g_{i}$ and $s_i \equiv \eta_k^{(i)} F(g_i)$ . Then based on Eq. 15 we have
+
+$$
+r _ {i} = F \left(g _ {i}\right) + \eta_ {k} ^ {(i)} \nabla F \left(g _ {i}\right) F \left(g _ {i}\right). \tag {16}
+$$
+
+Lemma 1 (Thm. 2.3 in Dembo et al. (1982)). Assume that
+
+$$
+\frac {\left\| r _ {i} \right\|}{\left\| F \left(z _ {i}\right) \right\|} \leq \tau < 1, \forall i, \tag {17}
+$$
+
+where $\| \cdot \|$ denotes an arbitrary norm and the induced operator norm. There exists $\varepsilon > 0$ such that, if $\| z_0 - z_*\| \leq \varepsilon$ , then the sequence of inexact Newton iterates $\{z_i\}$ converges to $z_*$ . Moreover, the convergence is linear in the sense that $\| z_{i + 1} - z_*\|_* \leq \tau \| z_i - z_*\|_*$ , where $\| y\|_* = \| \nabla F(z_*)y\|$ .
+
+
+Figure 9: Exploratory experiments for the Add task: (a) gradient norms of the loss, $\| \frac{\partial L}{\partial h_1}\|$ ; (b) gradient norms $\| \frac{\partial h_T}{\partial h_{T - 1}}\|$ . Together with Figure 3, this shows that the gradients are identity everywhere for $K = 10$ .
+
+Theorem 3 (Local Convergence with Linear Rate). Assume that the function $F$ in Eq. 12 and the parameter $\eta_k^{(i)}$ in Eq. 13 satisfy
+
+$$
+\left[ \eta_ {k} ^ {(i)} \right] ^ {2} \| \nabla F (g _ {i}) F (g _ {i}) \| ^ {2} + 2 \eta_ {k} ^ {(i)} F (g _ {i}) ^ {\top} \nabla F (g _ {i}) F (g _ {i}) < 0, \forall i, \forall k. \tag {18}
+$$
+
+Then there exists $\epsilon > 0$ such that if $\| g_0 - h_{eq} \| \leq \epsilon$ where $h_{eq}$ denotes the fixed point, the sequence $\{g_i\}$ generated by the Euler method converges to the equilibrium solution in $\mathcal{M}_{eq}(h_{k-1}, x_k)$ locally with linear rate.
+
+Proof. By substituting Eq. 16 into Eq. 17, to prove local convergence we need to guarantee
+
+$$
+\left\| F \left(g _ {i}\right) + \eta_ {k} ^ {(i)} \nabla F \left(g _ {i}\right) F \left(g _ {i}\right) \right\| < \| F \left(g _ {i}\right) \|. \tag {19}
+$$
+
+By squaring both sides of Eq. 19, one sees that Eq. 19 is equivalent to Eq. 18. This completes the proof.
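
The squaring step rests on the algebraic identity $\|F + \eta \nabla F\, F\|^2 = \|F\|^2 + \eta^2 \|\nabla F\, F\|^2 + 2\eta F^\top \nabla F\, F$, which can be checked numerically for random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
F_g = rng.standard_normal(n)        # stand-in for F(g_i)
J = rng.standard_normal((n, n))     # stand-in for nabla F(g_i)
eta = 0.3

lhs_sq = np.linalg.norm(F_g + eta * J @ F_g) ** 2   # squared left side of Eq. 19
rhs_sq = np.linalg.norm(F_g) ** 2                   # squared right side of Eq. 19
gap = eta**2 * np.linalg.norm(J @ F_g) ** 2 + 2 * eta * F_g @ (J @ F_g)

# Eq. 19 holds iff lhs_sq - rhs_sq < 0, i.e. iff the Eq. 18 quantity is negative.
print(np.isclose(lhs_sq - rhs_sq, gap))  # True
```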
+
+
+
+Corollary 2. Assume that $\| \mathbf{I} + \eta_k^{(i)}\nabla F(g_i)\| < 1, \forall i, \forall k$ holds. Then the forward propagation using Eq. 13 is stable and our sequence $\{g_i\}$ converges locally with linear rate.
+
+Proof. By substituting Eq. 16 into Eq. 17 and based on the assumption in the corollary, we have
+
+$$
+\begin{array}{l} \frac {\left\| r _ {i} \right\|}{\left\| F (g _ {i}) \right\|} = \frac {\left\| F (g _ {i}) + \eta_ {k} ^ {(i)} \nabla F (g _ {i}) F (g _ {i}) \right\|}{\left\| F (g _ {i}) \right\|} \\ \leq \frac {\| \mathbf {I} + \eta_ {k} ^ {(i)} \nabla F (g _ {i}) \| \| F (g _ {i}) \|}{\| F (g _ {i}) \|} < 1. \tag {20} \\ \end{array}
+$$
+
+Further, based on Prop. 2 in Chang et al. (2019) and Thm. 2, this completes the proof.
+
+
+
+Table 9: Results for Activity Recognition Datasets.
+
+| Data set | Algorithm | Accuracy (%) | Model Size (KB) | Train Time (hr) | Test Time (ms) | #Params |
| HAR-2 | FastRNN | 94.50 | 29 | 0.063 | 0.01 | 7.5k |
| FastGRNN-LSQ | 95.38 | 29 | 0.081 | 0.03 | 7.5k |
| FastGRNN | 95.59 | 3 | 0.10 | | |
| RNN | 91.31 | 29 | 0.114 | 0.01 | 7.5k |
| SpectralRNN | 95.48 | 525 | 0.730 | 0.04 | 134k |
| EURNN | 93.11 | 12 | 0.740 | | |
| LSTM | 93.65 | 74 | 0.183 | 0.04 | 16k |
| GRU | 93.62 | 71 | 0.130 | 0.02 | 16k |
| Antisymmetric | 93.15 | 29 | 0.087 | 0.01 | 7.5k |
| UGRNN | 94.53 | 37 | 0.120 | | |
| iRNN (K=1) | 95.32 | 17 | 0.061 | 0.01 | 4k |
| iRNN (K=3) | 95.52 | 17 | 0.081 | 0.02 | 4k |
| iRNN (K=5) | 96.30 | 18 | 0.018 | 0.03 | 4k |
| DSA-19 | FastRNN | 84.14 | 97 | 0.032 | 0.01 | 17.5k |
| FastGRNN-LSQ | 85.00 | 208 | 0.036 | 0.03 | 35k |
| FastGRNN | 83.73 | 3.25 | 2.10m | | |
| RNN | 71.68 | 20 | 0.019 | 0.01 | 3.5k |
| SpectralRNN | 80.37 | 50 | 0.038 | 0.02 | 8.8k |
| LSTM | 84.84 | 526 | 0.043 | 0.06 | 92k |
| GRU | 84.84 | 270 | 0.039 | 0.03 | 47k |
| Antisymmetric | 85.37 | 32 | 0.031 | 0.01 | 8.3k |
| UGRNN | 84.74 | 399 | 0.039 | | |
| iRNN (K=1) | 88.11 | 19 | 0.015 | 0.01 | 3.5k |
| iRNN (K=3) | 85.20 | 19 | 0.020 | 0.02 | 3.5k |
| iRNN (K=5) | 87.37 | 20 | 0.005 | 0.03 | 3.5k |
| Google-12 | FastRNN | 92.21 | 56 | 0.61 | 0.01 | 12k |
| FastGRNN-LSQ | 93.18 | 57 | 0.63 | 0.03 | 12k |
| FastGRNN | 92.10 | 5.5 | 0.75 | | |
| RNN | 73.25 | 56 | 1.11 | 0.01 | 12k |
| SpectralRNN | 91.59 | 228 | 19.0 | 0.05 | 49k |
| EURNN | 76.79 | 210 | 120.00 | | |
| LSTM | 92.30 | 212 | 1.36 | 0.05 | 45k |
| GRU | 93.15 | 248 | 1.23 | 0.05 | 53k |
| Antisymmetric | 89.91 | 57 | 0.71 | 0.01 | 12k |
| UGRNN | 92.63 | 75 | 0.78 | | |
| iRNN (K=1) | 93.93 | 36 | 0.20 | 0.01 | 8.1k |
| iRNN (K=3) | 94.16 | 37 | 0.33 | 0.03 | 8.1k |
| iRNN (K=5) | 94.71 | 38 | 0.17 | 0.05 | 8.1k |
| Google-30 | FastRNN | 91.60 | 96 | 1.30 | 0.01 | 18k |
| FastGRNN-LSQ | 92.03 | 45 | 1.41 | 0.01 | 8.5k |
| FastGRNN | 90.78 | 6.25 | 1.77 | | |
| RNN | 80.05 | 63 | 2.13 | 0.01 | 12k |
| SpectralRNN | 88.73 | 128 | 11.0 | 0.03 | 24k |
| EURNN | 56.35 | 135 | 19.00 | | |
| LSTM | 90.31 | 219 | 2.63 | 0.05 | 41k |
| GRU | 91.41 | 257 | 2.70 | 0.05 | 48.5k |
| Antisymmetric | 90.91 | 64 | 0.54 | 0.01 | 12k |
| UGRNN | 90.54 | 260 | 2.11 | | |
| iRNN (K=1) | 93.77 | 44 | 0.44 | 0.01 | 8.5k |
| iRNN (K=3) | 91.30 | 44 | 0.44 | 0.03 | 8.5k |
| iRNN (K=5) | 94.23 | 45 | 0.44 | 0.05 | 8.5k |
| Algorithm | Test Perplexity | Model Size (KB) | Train Time (min) | Test Time (ms) | #Params |
| FastRNN | 127.76 | 513 | 11.20 | 1.2 | 52.5k |
| FastGRNN-LSQ | 115.92 | 513 | 12.53 | 1.5 | 52.5k |
| FastGRNN | 116.11 | 39 | 13.75 | | |
| RNN | 144.71 | 129 | 9.11 | 0.3 | 13.2k |
| SpectralRNN | 130.20 | 242 | - | 0.6 | 24.8k |
| LSTM | 117.41 | 2052 | 13.52 | 4.8 | 210k |
| UGRNN | 119.71 | 256 | 11.12 | 0.6 | 26.3k |
| iRNN (K=1) | 115.71 | 288 | 7.11 | 0.6 | 29.5k |
+
+Table 10: PTB Language Modelling with 1 layer. To be consistent with our other experiments, we used a low-dimensional U; at this size our results did not significantly improve with $K$ . This is the dataset of Kusupati et al. (2018), which uses sequence length 300 as opposed to 30 in the conventional PTB setup.
+
+| Data set | Algorithm | Accuracy (%) | Model Size (KB) | Train Time (hr) | Activation | #Params |
| HAR-2 | iRNN (K=1) | 95.32 | 17 | 0.061 | ReLU | 4k |
| iRNN (K=3) | 95.52 | 17 | 0.081 | ReLU | 4k |
| iRNN (K=5) | 96.30 | 18 | 0.018 | ReLU | 4k |
| iRNN (K=1) | 92.16 | 17 | 0.065 | Sigmoid | 4k |
| iRNN (K=3) | 93.35 | 17 | 0.078 | Sigmoid | 4k |
| iRNN (K=5) | 95.30 | 18 | 0.020 | Sigmoid | 4k |
+
+Table 11: HAR-2 dataset (Sigmoid and ReLU activations). $K$ denotes the number of pre-defined recursions embedded in the graph to reach equilibrium.
\ No newline at end of file
diff --git a/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/images.zip b/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..63bccd260bc41bb23331edb379c73d4c342ba811
--- /dev/null
+++ b/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4e56fbbf524890524cdf8906f6443681ecf8dc9c0af950c50432bcdcbb1ee222
+size 1290214
diff --git a/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/layout.json b/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..0d300cb26d8766025f4772c2ae7df2cd97c9eef7
--- /dev/null
+++ b/rnnsincrementallyevolvingonanequilibriummanifoldapanaceaforvanishingandexplodinggradients/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8fb669b1834a617c79603f55d0e0cc6376b9a0b677d0a44f4634b5310b9dce67
+size 845053
diff --git a/rtfmgeneralisingtonewenvironmentdynamicsviareading/6b758b46-a2ad-4738-9a8c-5fe2eba94e50_content_list.json b/rtfmgeneralisingtonewenvironmentdynamicsviareading/6b758b46-a2ad-4738-9a8c-5fe2eba94e50_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0d30d737bf66f6e62b4fb12eb41d8cc586d968e0
--- /dev/null
+++ b/rtfmgeneralisingtonewenvironmentdynamicsviareading/6b758b46-a2ad-4738-9a8c-5fe2eba94e50_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f87348bed7d924a48fee80f7e0cba58b5daf1f79f59f4c9a3ca230a012f0299
+size 96115
diff --git a/rtfmgeneralisingtonewenvironmentdynamicsviareading/6b758b46-a2ad-4738-9a8c-5fe2eba94e50_model.json b/rtfmgeneralisingtonewenvironmentdynamicsviareading/6b758b46-a2ad-4738-9a8c-5fe2eba94e50_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..eb45d3ee40603ac7ef9ccc87f3c25ef22e9c48b6
--- /dev/null
+++ b/rtfmgeneralisingtonewenvironmentdynamicsviareading/6b758b46-a2ad-4738-9a8c-5fe2eba94e50_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d8461cb3a013318b80160796d9a9581d30fbef0f6a69ae12ff10cb151c5b1a7a
+size 112727
diff --git a/rtfmgeneralisingtonewenvironmentdynamicsviareading/6b758b46-a2ad-4738-9a8c-5fe2eba94e50_origin.pdf b/rtfmgeneralisingtonewenvironmentdynamicsviareading/6b758b46-a2ad-4738-9a8c-5fe2eba94e50_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..73b997e5402c36cb9e907dc6600d06f395ca1c56
--- /dev/null
+++ b/rtfmgeneralisingtonewenvironmentdynamicsviareading/6b758b46-a2ad-4738-9a8c-5fe2eba94e50_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:86ea7067a9014ba7764c090da506b1233a162df8b9ba92a15acc4c87458b8b62
+size 2107820
diff --git a/rtfmgeneralisingtonewenvironmentdynamicsviareading/full.md b/rtfmgeneralisingtonewenvironmentdynamicsviareading/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e7e9baab3d2056a2e7f09c70c2177608cb621021
--- /dev/null
+++ b/rtfmgeneralisingtonewenvironmentdynamicsviareading/full.md
@@ -0,0 +1,425 @@
+# RTFM: GENERALISING TO NOVEL ENVIRONMENT DYNAMICS VIA READING
+
+Victor Zhong*
+
+Paul G. Allen School of Computer Science & Engineering
+University of Washington
+
+vzhong@cs.washington.edu
+
+Tim Rocktäschel
+
+Facebook AI Research & University College London rockt@fb.com
+
+Edward Grefenstette
+
+Facebook AI Research & University College London egrefen@fb.com
+
+# ABSTRACT
+
+Obtaining policies that can generalise to new environments in reinforcement learning is challenging. In this work, we demonstrate that language understanding via a reading policy learner is a promising vehicle for generalisation to new environments. We propose a grounded policy learning problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason over a language goal, relevant dynamics described in a document, and environment observations. We procedurally generate environment dynamics and corresponding language descriptions of the dynamics, such that agents must read to understand new environment dynamics instead of memorising any particular information. In addition, we propose $\texttt{txt2}\pi$ , a model that captures three-way interactions between the goal, document, and observations. On RTFM, $\texttt{txt2}\pi$ generalises to new environments with dynamics not seen during training via reading. Furthermore, our model outperforms baselines such as FiLM and language-conditioned CNNs on RTFM. Through curriculum learning, $\texttt{txt2}\pi$ produces policies that excel on complex RTFM tasks requiring several reasoning and coreference steps.
+
+# 1 INTRODUCTION
+
+Reinforcement learning (RL) has been successful in a variety of areas such as continuous control (Lillicrap et al., 2015), dialogue systems (Li et al., 2016), and game-playing (Mnih et al., 2013). However, RL adoption in real-world problems is limited due to poor sample efficiency and failure to generalise to environments even slightly different from those seen during training. We explore language-conditioned policy learning, where agents use machine reading to discover strategies required to solve a task, thereby leveraging language as a means to generalise to new environments.
+
+Prior work on language grounding and language-based RL (see Luketina et al. (2019) for a recent survey) is limited to scenarios in which language specifies the goal for some fixed environment dynamics (Branavan et al., 2011; Hermann et al., 2017; Bahdanau et al., 2019; Fried et al., 2018; Co-Reyes et al., 2019), or the dynamics of the environment vary and are presented in language for some fixed goal (Branavan et al., 2012). In practice, changes to goals and to environment dynamics tend to occur simultaneously—given some goal, we need to find and interpret relevant information to understand how to achieve the goal. That is, the agent should account for variations in both by selectively reading, thereby generalising to environments with dynamics not seen during training.
+
+Our contributions are two-fold. First, we propose a grounded policy learning problem that we call Read to Fight Monsters (RTFM). In RTFM, the agent must jointly reason over a language goal, a document that specifies environment dynamics, and environment observations. In particular, it must identify relevant information in the document to shape its policy and accomplish the goal. To necessitate reading comprehension, we expose the agent to ever changing environment dynamics and corresponding language descriptions such that it cannot avoid reading by memorising any particular environment dynamics. We procedurally generate environment dynamics and natural language-templated descriptions of dynamics and goals to produce a combinatorially large number of environment dynamics to train and evaluate RTFM.
+
+Second, we propose $\texttt{txt2}\pi$ to model the joint reasoning problem in RTFM. We show that $\texttt{txt2}\pi$ generalises to goals and environment dynamics not seen during training, and outperforms previous language-conditioned models such as language-conditioned CNNs and FiLM (Perez et al., 2018; Bahdanau et al., 2019) both in terms of sample efficiency and final win-rate on RTFM. Through curriculum learning where we adapt $\texttt{txt2}\pi$ trained on simpler tasks to more complex tasks, we obtain agents that generalise to tasks with natural language documents that require five hops of reasoning between the goal, document, and environment observations. Our qualitative analyses show that $\texttt{txt2}\pi$ attends to parts of the document relevant to the goal and environment observations, and that the resulting agents exhibit complex behaviour such as retrieving correct items, engaging correct enemies after acquiring correct items, and avoiding incorrect enemies. Finally, we highlight the complexity of RTFM in scaling to longer documents, richer dynamics, and natural language variations. We show that significant improvement in language-grounded policy learning is needed to solve these problems in the future.
+
+# 2 RELATED WORK
+
+Language-conditioned policy learning. A growing body of research is learning policies that follow imperative instructions. The granularity of instructions varies from high-level instructions for application control (Branavan, 2012) and games (Hermann et al., 2017; Bahdanau et al., 2019) to step-by-step navigation (Fried et al., 2018). In contrast to learning policies for imperative instructions, Branavan et al. (2011; 2012); Narasimhan et al. (2018) infer a policy for a fixed goal using features extracted from high level strategy descriptions and general information about domain dynamics. Unlike prior work, we study the combination of imperative instructions and descriptions of dynamics. Furthermore, we require that the agent learn to filter out irrelevant information to focus on dynamics relevant to accomplishing the goal.
+
+Language grounding. Language grounding refers to interpreting language in a non-linguistic context. Examples of such context include images (Barnard & Forsyth, 2001), games (Chen & Mooney, 2008; Wang et al., 2016), robot control (Kollar et al., 2010; Tellex et al., 2011), and navigation (Anderson et al., 2018). We study language grounding in interactive games similar to Branavan (2012); Hermann et al. (2017) or Co-Reyes et al. (2019), where executable semantics are not provided and the agent must learn through experience. Unlike prior work, we require grounding between an underspecified goal, a document of environment dynamics, and world observations. In addition, we focus on generalisation not only to new goal descriptions but also to new environment dynamics.
+
+# 3 READ TO FIGHT MONSTERS
+
+We consider a scenario where the agent must jointly reason over a language goal, relevant environment dynamics specified in a text document, and environment observations. In reading the document, the agent should identify relevant information key to solving the goal in the environment. A successful agent needs to perform this language grounding to generalise to new environments with dynamics not seen during training.
+
+To study generalisation via reading, the environment dynamics must differ every episode such that the agent cannot avoid reading by memorising a limited set of dynamics. Consequently, we procedurally generate a large number of unique environment dynamics (e.g. effective(blessed items, poison monsters)), along with language descriptions of environment dynamics (e.g. blessed items are effective against poison monsters) and goals (e.g. Defeat the Order of the Forest). We couple a large, customisable ontology inspired by rogue-like games such as NetHack or Diablo, with natural language templates to create a combinatorially rich set of environment dynamics to learn from and evaluate on.
+
+In RTFM, the agent is given a document of environment dynamics, observations of the environment, and an underspecified goal instruction. Figure 1 illustrates an instance of the game. Concretely, we design a set of dynamics that consists of monsters (e.g. wolf, goblin), teams (e.g. Order of the Forest), element types (e.g. fire, poison), item modifiers (e.g. fanatical, arcane), and items (e.g. sword, hammer). When the player is in the same cell as a monster or weapon, the player picks up the weapon or engages the monster in combat. The player can possess one item at a time, and drops existing
+
+# Doc:
+
+The Rebel Enclave consists of jackal, spider, and warg. Arcane, blessed items are useful for poison monsters. Star Alliance contains bat, panther, and wolf. Goblin, jaguar, and lynx are on the same team - they are in the Order of the Forest. Gleaming and mysterious weapons beat cold monsters. Lightning monsters are weak against Grandmaster's and Soldier's weapons. Fire monsters are defeated by fanatical and shimmering weapons.
+
+# Goal:
+
+Defeat the Order of the Forest
+
+
+
+
+
+
+Figure 1: RTFM requires jointly reasoning over the goal, a document describing environment dynamics, and environment observations. This figure shows key snapshots from a trained policy on one randomly sampled environment. Frame 1 shows the initial world. In 4, the agent approaches "fanatical sword", which beats the target "fire goblin". In 5, the agent acquires the sword. In 10, the agent evades the distractor "poison bat" while chasing the target. In 11, the agent engages the target and defeats it, thereby winning the episode. Sprites are used for visualisation — the agent observes cell content in text (shown in white). More examples are in appendix A.
+
+
+
+
+
+weapons if they pick up a new weapon. A monster moves towards the player with $60\%$ probability, and otherwise moves randomly. The dynamics, the agent's inventory, and the underspecified goal are rendered as text. The game world is rendered as a matrix of text in which each cell describes the entity occupying the cell. We use human-written templates for stating which monsters belong to which team, which modifiers are effective against which element, and which team the agent should defeat (see appendix H for details on collection and G for a list of entities in the game). In order to achieve the goal, the agent must cross-reference relevant information in the document as well as in the observations.
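As a concrete reading of the movement rule above, the sketch below moves a monster one cell towards the player with probability 0.6 and uniformly at random otherwise. This is our illustration only: the greedy direction choice and tie-breaking are assumptions, not part of the environment's specification.

```python
import random

def monster_step(monster, player, rng):
    """One movement step for a monster at grid position `monster` (row, col)."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    if rng.random() < 0.6:
        # Chase: pick the step best aligned with the direction to the player
        # (greedy choice; an assumption of this sketch, not the paper's rule).
        dy, dx = player[0] - monster[0], player[1] - monster[1]
        step = max(moves, key=lambda m: m[0] * dy + m[1] * dx)
    else:
        # Otherwise move in a uniformly random direction.
        step = rng.choice(moves)
    return (monster[0] + step[0], monster[1] + step[1])

pos = monster_step((0, 0), (5, 0), random.Random(1))
```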
+
+During every episode, we subsample a set of groups, monsters, modifiers, and elements to use. We randomly generate group assignments of which monsters belong to which team and which modifier is effective against which element. A document that consists of randomly ordered statements corresponding to this group assignment is presented to the agent. We sample one element, one team, and a monster from that team (e.g. "fire goblin" from "Order of the forest") to be the target monster. Additionally, we sample one modifier that beats the element and an item to be the item that defeats the target monster (e.g. "fanatical sword"). Similarly, we sample an element, a team, and a monster from a different team to be the distractor monster (e.g. poison bat), as well as an item that defeats the distractor monster (e.g. arcane hammer).
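The per-episode sampling procedure above can be sketched as follows. This is an illustration only: the entity lists, group sizes, and the `sample_episode` helper are placeholders, not the actual RTFM ontology or generator.

```python
import random

# Placeholder ontology; the real RTFM ontology is larger (see appendix G).
TEAMS = ["Order of the Forest", "Rebel Enclave", "Star Alliance"]
MONSTERS = {"Order of the Forest": ["goblin", "jaguar", "lynx"],
            "Rebel Enclave": ["jackal", "spider", "warg"],
            "Star Alliance": ["bat", "panther", "wolf"]}
ELEMENTS = ["fire", "poison", "cold", "lightning"]
MODIFIERS = ["fanatical", "arcane", "gleaming", "blessed"]
ITEMS = ["sword", "hammer", "axe", "spear"]

def sample_episode(rng):
    # Randomly assign which modifier is effective against which element.
    beats = dict(zip(ELEMENTS, rng.sample(MODIFIERS, len(ELEMENTS))))
    # Sample the target: a team, a monster from that team, and an element.
    target_team = rng.choice(TEAMS)
    target = (rng.choice(ELEMENTS), rng.choice(MONSTERS[target_team]))
    # The correct item carries the modifier that beats the target's element.
    correct_item = (beats[target[0]], rng.choice(ITEMS))
    # Sample a distractor monster from a different team, plus its counter-item.
    other_team = rng.choice([t for t in TEAMS if t != target_team])
    distractor = (rng.choice(ELEMENTS), rng.choice(MONSTERS[other_team]))
    distractor_item = (beats[distractor[0]], rng.choice(ITEMS))
    return {"goal": f"Defeat the {target_team}", "target": target,
            "correct_item": correct_item, "distractor": distractor,
            "distractor_item": distractor_item}

episode = sample_episode(random.Random(0))
```

A document would then be produced by rendering the sampled team rosters and modifier assignments through randomly ordered natural language templates.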
+
+In order to win the game (e.g. Figure 1), the agent must
+
+1. identify the target team from the goal (e.g. Order of the Forest)
+2. identify the monsters that belong to that team (e.g. goblin, jaguar, and lynx)
+3. identify which monster is in the world (e.g. goblin), and its element (e.g. fire)
+4. identify the modifiers that are effective against this element (e.g. fanatical, shimmering)
+5. find which modifier is present (e.g. fanatical), and the item with the modifier (e.g. sword)
+
+6. pick up the correct item (e.g. fanatical sword)
+7. engage the correct monster in combat (e.g. fire goblin).
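The seven steps above reduce to a chain of lookups once the document has been understood. The sketch below is our illustration of that chain, with hand-built dictionaries standing in for information the agent must actually extract from text and observations:

```python
# Hypothetical sketch of the lookup chain; the data structures are
# illustrative, not the environment's internals.
def plan(goal_team, teams, elements, beats, world_entities, world_items):
    # Steps 1-2: identify the target team's monsters from the document.
    candidates = teams[goal_team]
    # Step 3: find which of those monsters is in the world, and its element.
    monster = next(m for m in world_entities if m in candidates)
    element = elements[monster]
    # Step 4: modifiers effective against that element (from the document).
    effective = beats[element]
    # Step 5: find an item in the world carrying such a modifier.
    modifier, item = next((mod, it) for mod, it in world_items
                          if mod in effective)
    # Steps 6-7: pick up that item, then engage the monster.
    return [("pick_up", f"{modifier} {item}"),
            ("engage", f"{element} {monster}")]

actions = plan(
    goal_team="Order of the Forest",
    teams={"Order of the Forest": {"goblin", "jaguar", "lynx"}},
    elements={"goblin": "fire"},
    beats={"fire": {"fanatical", "shimmering"}},
    world_entities=["goblin", "bat"],
    world_items=[("arcane", "hammer"), ("fanatical", "sword")],
)
```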
+
+If the agent deviates from this trajectory (e.g. does not have the correct item before engaging in combat, or engages the distractor monster), it cannot defeat the target monster and therefore will lose the game. The agent receives a reward of $+1$ if it wins the game and $-1$ otherwise.
+
+RTFM presents challenges not found in prior work in that it requires a large number of grounding steps in order to solve a task. In order to perform this grounding, the agent must jointly reason over a language goal and document of dynamics, as well as environment observations. In addition to the environment, the positions of the target and distractor within the document are randomised—the agent cannot memorise ordering patterns in order to solve the grounding problems, and must instead identify information relevant to the goal and environment at hand.
+
+We split environments into train and eval sets. No assignments of monster-team-modifier-element are shared between train and eval to test whether the agent is able to generalise to new environments with dynamics not seen during training via reading. There are more than 2 million train or eval environments without considering the natural language templates, and 200 million otherwise. With random ordering of templates, the number of unique documents exceeds 15 billion.
+
+# 4 MODEL
+
+We propose the $\texttt{txt2}\pi$ model, which builds representations that capture three-way interactions between the goal, the document describing environment dynamics, and environment observations. We begin with the definition of the Bidirectional Feature-wise Linear Modulation (FiLM²) layer, which forms the core of our model.
+
+# 4.1 BIDIRECTIONAL FEATURE-WISE LINEAR MODULATION (FILM²) LAYER
+
+Feature-wise linear modulation (FiLM), which modulates visual inputs using representations of textual instructions, is an effective method for visual reasoning (Perez et al., 2018) and instruction following (Bahdanau et al., 2019). In RTFM, the agent must not only filter concepts in the visual domain using language but also filter concepts in the text domain using visual observations. To support this, $\mathrm{FiLM}^2$ builds
+
+
+Figure 2: The FiLM² layer.
+
+codependent representations of text and visual inputs by further incorporating conditional representations of the text given visual observations. Figure 2 shows the $\mathrm{FiLM}^2$ layer.
+
+We use upper-case bold letters to denote tensors, lower-case bold letters for vectors, and non-bold letters for scalars. Exact dimensions of these variables are shown in Table 4 in appendix B. Let $\mathbf{x}_{\mathrm{text}}$ denote a fixed-length $d_{\mathrm{text}}$ -dimensional representation of the text and $\mathbf{X}_{\mathrm{vis}}$ the representation of visual inputs with height $H$ , width $W$ , and $d_{\mathrm{vis}}$ channels. Let Conv denote a convolution layer. Let the $+$ and $*$ symbols denote element-wise addition and multiplication operations that broadcast over spatial dimensions. We first modulate visual features using text features:
+
+$$
+\boldsymbol{\gamma}_{\mathrm{text}} = \boldsymbol{W}_{\gamma}\boldsymbol{x}_{\mathrm{text}} + \boldsymbol{b}_{\gamma} \tag{1}
+$$
+
+$$
+\boldsymbol{\beta}_{\mathrm{text}} = \boldsymbol{W}_{\beta}\boldsymbol{x}_{\mathrm{text}} + \boldsymbol{b}_{\beta} \tag{2}
+$$
+
+$$
+\boldsymbol{V}_{\mathrm{vis}} = \operatorname{ReLU}\left(\left(1 + \boldsymbol{\gamma}_{\mathrm{text}}\right) * \operatorname{Conv}_{\mathrm{vis}}\left(\boldsymbol{X}_{\mathrm{vis}}\right) + \boldsymbol{\beta}_{\mathrm{text}}\right) \tag{3}
+$$
+
+Unlike FiLM, we additionally modulate text features using visual features:
+
+$$
+\boldsymbol{\Gamma}_{\mathrm{vis}} = \operatorname{Conv}_{\gamma}\left(\boldsymbol{X}_{\mathrm{vis}}\right) \tag{4}
+$$
+
+$$
+\boldsymbol{B}_{\mathrm{vis}} = \operatorname{Conv}_{\beta}\left(\boldsymbol{X}_{\mathrm{vis}}\right) \tag{5}
+$$
+
+$$
+\boldsymbol{V}_{\mathrm{text}} = \operatorname{ReLU}\left(\left(1 + \boldsymbol{\Gamma}_{\mathrm{vis}}\right) * \left(\boldsymbol{W}_{\mathrm{text}}\boldsymbol{x}_{\mathrm{text}} + \boldsymbol{b}_{\mathrm{text}}\right) + \boldsymbol{B}_{\mathrm{vis}}\right) \tag{6}
+$$
+
+
+Figure 3: $\texttt{txt}2\pi$ models interactions between the goal, document, and observations.
+
+The output of the FiLM² layer consists of the sum of the modulated features $V$ , as well as a max-pooled summary $s$ over this sum across spatial dimensions.
+
+$$
+\boldsymbol{V} = \boldsymbol{V}_{\mathrm{vis}} + \boldsymbol{V}_{\mathrm{text}} \tag{7}
+$$
+
+$$
+\boldsymbol{s} = \operatorname{MaxPool}\left(\boldsymbol{V}\right) \tag{8}
+$$
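Putting the six modulation equations and the output sum/summary together, one FiLM² layer can be sketched in a few lines of numpy. This is a simplified illustration, not the paper's implementation: we use 1×1 convolutions (plain channel mixing) in place of real convolutions, and all sizes and weights are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
d_text, d_vis, d, H, W = 8, 6, 5, 4, 4  # illustrative sizes

def relu(x):
    return np.maximum(x, 0.0)

def conv1x1(X, K):
    # 1x1 convolution as channel mixing; a real layer would use wider kernels.
    return np.einsum("oc,chw->ohw", K, X)

def film2(x_text, X_vis, p):
    # Text modulates vision: per-channel scale (1 + gamma) and shift beta.
    gamma_t = p["W_g"] @ x_text + p["b_g"]            # (d,)
    beta_t = p["W_b"] @ x_text + p["b_b"]             # (d,)
    V_vis = relu((1 + gamma_t)[:, None, None] * conv1x1(X_vis, p["K_vis"])
                 + beta_t[:, None, None])
    # Vision modulates text: spatially varying scale and shift.
    Gamma_v = conv1x1(X_vis, p["K_g"])                # (d, H, W)
    B_v = conv1x1(X_vis, p["K_b"])                    # (d, H, W)
    t = p["W_t"] @ x_text + p["b_t"]                  # (d,)
    V_text = relu((1 + Gamma_v) * t[:, None, None] + B_v)
    # Output: sum of modulated features, plus a max-pooled spatial summary.
    V = V_vis + V_text
    s = V.max(axis=(1, 2))                            # (d,)
    return V, s

p = {"W_g": rng.standard_normal((d, d_text)), "b_g": np.zeros(d),
     "W_b": rng.standard_normal((d, d_text)), "b_b": np.zeros(d),
     "W_t": rng.standard_normal((d, d_text)), "b_t": np.zeros(d),
     "K_vis": rng.standard_normal((d, d_vis)),
     "K_g": rng.standard_normal((d, d_vis)),
     "K_b": rng.standard_normal((d, d_vis))}
V, s = film2(rng.standard_normal(d_text), rng.standard_normal((d_vis, H, W)), p)
```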
+
+# 4.2 THE $\texttt{txt2}\pi$ MODEL
+
+We model interactions between observations from the environment, the goal, and the document using $\mathrm{FiLM}^2$ layers. We first encode text inputs using bidirectional LSTMs, then compute summaries using self-attention and conditional summaries using attention. We concatenate text summaries into text features, which, along with visual features, are processed through consecutive $\mathrm{FiLM}^2$ layers. In the case of a textual environment, we consider the grid of word embeddings as the visual features for $\mathrm{FiLM}^2$ . The final $\mathrm{FiLM}^2$ output is further processed by MLPs to compute a policy distribution over actions and a baseline for advantage estimation. Figure 3 shows the $\texttt{txt2}\pi$ model.
+
+Let $E_{\mathrm{obs}}$ denote word embeddings corresponding to the observations from the environment, where $E_{\mathrm{obs}[:, :, i, j]}$ represents the embeddings corresponding to the $l_{\mathrm{obs}}$ -word string that describes the objects in location $(i, j)$ in the grid-world. Let $E_{\mathrm{doc}}$ , $E_{\mathrm{inv}}$ , and $E_{\mathrm{goal}}$ respectively denote the embeddings corresponding to the $l_{\mathrm{doc}}$ -word document, the $l_{\mathrm{inv}}$ -word inventory, and the $l_{\mathrm{goal}}$ -word goal. We first compute a fixed-length summary $c_{\mathrm{goal}}$ of the goal using a bidirectional LSTM (Hochreiter & Schmidhuber, 1997) followed by self-attention (Lee et al., 2017; Zhong et al., 2018).
+
+$$
+\boldsymbol{H}_{\mathrm{goal}} = \operatorname{BiLSTM}_{\mathrm{goal}}\left(\boldsymbol{E}_{\mathrm{goal}}\right) \tag{9}
+$$
+
+$$
+a'_{\mathrm{goal},i} = \boldsymbol{w}_{\mathrm{goal}}\boldsymbol{h}_{\mathrm{goal},i}^{\intercal} + b_{\mathrm{goal}} \tag{10}
+$$
+
+$$
+\boldsymbol{a}_{\mathrm{goal}} = \operatorname{softmax}\left(\boldsymbol{a}'_{\mathrm{goal}}\right) \tag{11}
+$$
+
+$$
+\boldsymbol{c}_{\mathrm{goal}} = \sum_{i} a_{\mathrm{goal},i}\,\boldsymbol{h}_{\mathrm{goal},i} \tag{12}
+$$
+
+We abbreviate self-attention over the goal as $c_{\mathrm{goal}} = \mathrm{selfattn}(H_{\mathrm{goal}})$ . We similarly compute a summary of the inventory as $c_{\mathrm{inv}} = \mathrm{selfattn}(\mathrm{BiLSTM}_{\mathrm{inv}}(E_{\mathrm{inv}}))$ . Next, we represent the document encoding conditioned on the goal using dot-product attention (Luong et al., 2015).
+
+$$
+\boldsymbol{H}_{\mathrm{doc}} = \operatorname{BiLSTM}_{\mathrm{goal\text{-}doc}}\left(\boldsymbol{E}_{\mathrm{doc}}\right) \tag{13}
+$$
+
+$$
+a'_{\mathrm{doc},i} = \boldsymbol{c}_{\mathrm{goal}}^{\intercal}\boldsymbol{h}_{\mathrm{doc},i} \tag{14}
+$$
+
+$$
+\boldsymbol{a}_{\mathrm{doc}} = \operatorname{softmax}\left(\boldsymbol{a}'_{\mathrm{doc}}\right) \tag{15}
+$$
+
+$$
+\boldsymbol{c}_{\mathrm{doc}} = \sum_{i} a_{\mathrm{doc},i}\,\boldsymbol{h}_{\mathrm{doc},i} \tag{16}
+$$
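Both summaries, self-attention over the goal and attention over the document conditioned on the goal, reduce to a score-softmax-weighted-sum pattern. The sketch below is our simplified illustration; shapes are arbitrary and random matrices stand in for the BiLSTM encodings.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def selfattn(H, w, b):
    # Self-attention summary: score each position with a learned vector,
    # softmax, then take the attention-weighted sum of hidden states.
    a = softmax(H @ w + b)          # (T,)
    return a @ H                    # (d,)

def attend(H, c):
    # Dot-product attention conditioned on a query vector c.
    a = softmax(H @ c)              # (T,)
    return a @ H                    # (d,)

rng = np.random.default_rng(0)
H_goal = rng.standard_normal((7, 16))   # 7 goal tokens, 16-dim hidden states
c_goal = selfattn(H_goal, rng.standard_normal(16), 0.0)
H_doc = rng.standard_normal((30, 16))   # 30 document tokens
c_doc = attend(H_doc, c_goal)           # document summary conditioned on goal
```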
+
+We abbreviate attention over the document encoding conditioned on the goal summary as $\pmb{c}_{\mathrm{doc}} = \mathrm{attend}(\pmb{H}_{\mathrm{doc}}, \pmb{c}_{\mathrm{goal}})$ . Next, we build the joint representation of the inputs using successive FiLM² layers. At each layer, the visual input to the FiLM² layer is the concatenation of the output of the previous layer with positional features. For each cell, the positional feature $X_{\mathrm{pos}}$ consists of the $x$ and $y$ distance from the cell to the agent's position respectively, normalized by the width and height of the grid-world. The text input is the concatenation of the goal summary, the inventory summary, the attention over the document given the goal, and the attention over the document given the previous visual summary. Let $[a; b]$ denote the feature-wise concatenation of $a$
+
+
+Figure 4: Ablation training curves on simplest variant of RTFM. Individual runs are in light colours. Average win rates are in bold, dark lines.
+
+| Model | Train | Eval 6 × 6 | Eval 10 × 10 |
+| --- | --- | --- | --- |
+| conv | 24 ± 0 | 25 ± 1 | 13 ± 1 |
+| FiLM | 49 ± 1 | 49 ± 2 | 32 ± 3 |
+| no_task_attn | 49 ± 2 | 49 ± 2 | 35 ± 6 |
+| no_vis_attn | 49 ± 2 | 49 ± 1 | 40 ± 12 |
+| no_text_mod | 49 ± 1 | 49 ± 2 | 35 ± 2 |
+| txt2π | 84 ± 21 | 83 ± 21 | 66 ± 22 |
+
+Table 1: Final win rate on the simplest variant of RTFM. The models are trained on one set of dynamics (i.e. the training set) and evaluated on another set of dynamics (i.e. the evaluation set). "Train" and "Eval" show final win rates on training and eval environments.
+
+and $b$ . For the $i$ th layer, we have
+
+$$
+\boldsymbol{R}^{(i)} = \left[\boldsymbol{V}^{(i-1)}; \boldsymbol{X}_{\mathrm{pos}}\right] \tag{17}
+$$
+
+$$
+\boldsymbol{T}^{(i)} = \left[\boldsymbol{c}_{\mathrm{goal}}; \boldsymbol{c}_{\mathrm{inv}}; \boldsymbol{c}_{\mathrm{doc}}; \operatorname{attend}\left(\operatorname{BiLSTM}_{\mathrm{vis\text{-}doc}}\left(\boldsymbol{E}_{\mathrm{doc}}\right), \boldsymbol{s}^{(i-1)}\right)\right] \tag{18}
+$$
+
+$$
+\boldsymbol{V}^{(i)}, \boldsymbol{s}^{(i)} = \operatorname{FiLM}^{2\,(i)}\left(\boldsymbol{R}^{(i)}, \boldsymbol{T}^{(i)}\right) \tag{19}
+$$
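The positional features described earlier, together with the initial visual features and summary fed into the first FiLM² layer, can be sketched as follows. Grid and embedding sizes are illustrative, and `W_ini` is a random stand-in for the learned transform.

```python
import numpy as np

rng = np.random.default_rng(0)
H_grid, W_grid, d_emb, d = 6, 6, 10, 12
agent_pos = (2, 3)

# Positional features: per-cell (x, y) distance to the agent's position,
# normalised by the grid width and height.
ys, xs = np.mgrid[0:H_grid, 0:W_grid]
X_pos = np.stack([(xs - agent_pos[1]) / W_grid,
                  (ys - agent_pos[0]) / H_grid])          # (2, H, W)

# Initial visual features: bag-of-words sum over the words in each cell,
# concatenated channel-wise with the positional features.
E_obs = rng.standard_normal((3, d_emb, H_grid, W_grid))   # 3 words per cell
V0 = np.concatenate([E_obs.sum(axis=0), X_pos])           # (d_emb + 2, H, W)

# Initial visual summary: max pool a linear transform of V0 over space.
W_ini = rng.standard_normal((d, d_emb + 2))
s0 = np.einsum("dc,chw->dhw", W_ini, V0).max(axis=(1, 2)) # (d,)
```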
+
+BiLSTM$_{\text{vis-doc}}(E_{\text{doc}})$ is another encoding of the document similar to $H_{\text{goal}}$ , produced using a separate LSTM, such that the document is encoded differently for attention with the visual features and with the goal. For $i = 0$ , we concatenate the bag-of-words embeddings of the grid with positional features as the initial visual features $\mathbf{V}^{(0)} = [\sum_{j} \mathbf{E}_{\text{obs},j}; \mathbf{X}_{\text{pos}}]$ . We max pool a linear transform of the initial visual features to compute the initial visual summary $\mathbf{s}^{(0)} = \text{MaxPool}(\mathbf{W}_{\text{ini}} \mathbf{V}^{(0)} + \mathbf{b}_{\text{ini}})$ . Let $\mathbf{s}^{(\text{last})}$ denote the visual summary of the last $\mathrm{FiLM}^2$ layer. We compute the policy $\mathbf{y}_{\text{policy}}$ and baseline $y_{\text{baseline}}$ as
+
+$$
+\boldsymbol{o} = \operatorname{ReLU}\left(\boldsymbol{W}_{o}\boldsymbol{s}^{(\mathrm{last})} + \boldsymbol{b}_{o}\right) \tag{20}
+$$
+
+$$
+\boldsymbol{y}_{\mathrm{policy}} = \operatorname{MLP}_{\mathrm{policy}}(\boldsymbol{o}) \tag{21}
+$$
+
+$$
+y_{\mathrm{baseline}} = \operatorname{MLP}_{\mathrm{baseline}}(\boldsymbol{o}) \tag{22}
+$$
+
+where $\mathrm{MLP}_{\mathrm{policy}}$ and $\mathrm{MLP}_{\mathrm{baseline}}$ are 2-layer multi-layer perceptrons with ReLU activation. We train using TorchBeast (Küttler et al., 2019), an implementation of IMPALA (Espeholt et al., 2018). Please refer to appendix D for details.
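The output heads in equations 20-22 reduce to a few matrix products. The sketch below uses arbitrary sizes and random weights standing in for learned parameters, and applies a softmax to turn the policy logits into a distribution over actions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def mlp2(x, W1, b1, W2, b2):
    # 2-layer MLP with ReLU activation, as used for both heads.
    return W2 @ relu(W1 @ x + b1) + b2

rng = np.random.default_rng(0)
d, h, n_actions = 12, 32, 5                    # illustrative sizes
s_last = rng.standard_normal(d)                # final FiLM^2 visual summary

# Shared ReLU projection, then separate policy and baseline heads.
W_o, b_o = rng.standard_normal((h, d)), np.zeros(h)
o = relu(W_o @ s_last + b_o)
logits = mlp2(o, rng.standard_normal((h, h)), np.zeros(h),
              rng.standard_normal((n_actions, h)), np.zeros(n_actions))
baseline = mlp2(o, rng.standard_normal((h, h)), np.zeros(h),
                rng.standard_normal((1, h)), np.zeros(1))
policy = np.exp(logits - logits.max())
policy /= policy.sum()                         # distribution over actions
```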
+
+# 5 EXPERIMENTS
+
+We consider variants of RTFM by varying the size of the grid-world $(6 \times 6$ vs $10 \times 10)$ , allowing many-to-one group assignments to make disambiguation more difficult (group), allowing dynamic, moving monsters that hunt down the player (dyna), and using natural language templated documents (nl). In the absence of many-to-one assignments, the agent does not need to perform steps 3 and 5 in section 3, as there is no need to disambiguate among many assignees, making it easier to identify relevant information.
+
+We compare $\texttt{txt2}\pi$ to the FiLM model by Bahdanau et al. (2019) and a language-conditioned residual CNN model. We train on one set of dynamics (i.e. group assignments of monsters and modifiers) and evaluate on a held-out set of dynamics. We also study three variants of $\texttt{txt2}\pi$ . In no_task_attn, the document attention conditioned on the goal utterance (equation 16) is removed, and the goal is instead represented through self-attention and concatenated with the rest of the text features. In no_vis_attn, we do not attend over the document given the visual output of the previous layer (equation 18), and the document is instead represented through self-attention.
+
+| Transfer from | 6 × 6 | 6 × 6 dyna | 6 × 6 group | 6 × 6 nl | 6 × 6 dyna group | 6 × 6 group nl | 6 × 6 dyna nl | 6 × 6 dyna group nl |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| random | **84 ± 20** | 26 ± 7 | 25 ± 3 | 45 ± 6 | 23 ± 2 | 25 ± 3 | 23 ± 2 | 23 ± 2 |
+| +6 × 6 | | **85 ± 9** | 82 ± 19 | 78 ± 24 | 64 ± 12 | 52 ± 13 | 53 ± 18 | 40 ± 8 |
+| +dyna | | | | | **77 ± 10** | | 65 ± 16 | 43 ± 4 |
+| +group | | | | | | | | 65 ± 17 |
+
+In no_text_mod, text modulation using visual features (equation 6) is removed. Please see appendix C for model details on our model and baselines, and appendix D for training details.
+
+# 5.1 COMPARISON TO BASELINES AND ABLATIONS
+
+We compare $\texttt{txt2}\pi$ to baselines and ablated variants on a simplified variant of RTFM in which there are one-to-one group assignments (no group), stationary monsters (no dyna), and no natural language templated descriptions (no nl). Figure 4 shows that compared to baselines and ablated variants, $\texttt{txt2}\pi$ is more sample efficient and converges to higher performance. Moreover, no ablated variant is able to solve the tasks; it is the combination of the ablated features that enables $\texttt{txt2}\pi$ to win consistently. Qualitatively, the ablated variants converge to locally optimal policies in which the agent often picks up a random item and then attacks the correct monster, resulting in a $\sim 50\%$ win rate. Table 1 shows that all models, with the exception of the CNN baseline, generalise to new evaluation environments with dynamics and world configurations not seen during training, with $\texttt{txt2}\pi$ outperforming FiLM and the CNN model.
+
+We find similar results for $\texttt{txt2}\pi$ , its ablated variants, and baselines on a separate, language-based rock-paper-scissors task in which the agent needs to deduce cyclic dependencies (which type beats which other type) through reading in order to acquire the correct item and defeat a monster. We observe that the performance of reading models transfers from training environments to new environments with unseen types and unseen dependencies. Compared to ablated variants and baselines, $\texttt{txt2}\pi$ is more sample efficient and achieves higher performance on both training and new environment dynamics. When transferring to new environments, $\texttt{txt2}\pi$ remains more sample efficient than the other models. Details on these experiments are found in appendix E.
+
+# 5.2 CURRICULUM LEARNING FOR COMPLEX ENVIRONMENTS
+
+Due to the long sequence of co-references the agent must perform in order to solve the full RTFM $(10 \times 10$ with moving monsters, many-to-one group assignments, and natural language templated documents) we design a curriculum to facilitate policy learning by starting with simpler variants of RTFM. We start with the simplest variant (no group, no dyna, no nl) and then add in an additional dimension of complexity. We repeatedly add more complexity until we obtain $10 \times 10$ worlds with moving monsters, many-to-one group assignments and natural language templated descriptions. The performance across the curriculum is shown in Table 2
+
+Table 2: Curriculum training results. We keep 5 randomly initialised models through the entire curriculum. A cell in row $i$ and column $j$ shows transfer from the best-performing setting in the previous stage (bold in row $i - 1$ ) to the new setting in column $j$ . Each cell shows final mean and standard deviation of win rate on the training environments. Each experiment trains for 50 million frames, except for the initial stage (first row, 100 million instead). For the last stage (row 4), we also transfer to a ${10} \times {10} + \mathrm{{dyna}} + \mathrm{{group}} + \mathrm{{nl}}$ variant and obtain ${61} \pm {18}$ win rate.
+
+| Train env | Eval env | Train win rate | Eval win rate |
+| --- | --- | --- | --- |
+| 6 × 6 | 6 × 6 | 65 ± 17 | 55 ± 22 |
+| 6 × 6 | 10 × 10 | | 55 ± 27 |
+| 10 × 10 | 10 × 10 | 61 ± 18 | 43 ± 13 |
+
+Table 3: Win rate when evaluating on new dynamics and world configurations for $\mathrm{{txt}}{2\pi }$ on the full RTFM problem.
+
+
+
+
+(a) The entities present are shimmering morning star, mysterious spear, fire jaguar, and lightning ghost.
+(b) The entities present are soldier's axe, shimmering axe, fire shaman, and poison wolf.
+Figure 5: $\texttt{txt2}\pi$ attention on the full RTFM. These include the document attention conditioned on the goal (top) as well as those conditioned on summaries produced by intermediate $\mathrm{FiLM}^2$ layers. Weights are normalised across words (i.e. horizontally). Darker means higher attention weight.
+
+(see Figure 13 in appendix F for training curves of each stage). We see that curriculum learning is crucial to making progress on RTFM: initial policy training (first row of Table 2) with additional complexities in any of the dimensions results in significantly worse performance. We take each of the 5 runs after training through the whole curriculum and evaluate them on dynamics not seen during training. Table 3 shows variants of the last stage of the curriculum in which the model was trained on $6 \times 6$ versions of the full RTFM and in which the model was trained on $10 \times 10$ versions of the full RTFM. We see that models trained on smaller worlds generalise to bigger worlds. Despite curriculum learning, however, the performance of the final model trails that of human players, who can consistently solve RTFM. This highlights the difficulty of the RTFM problem and suggests that there is significant room for improvement in developing better language-grounded policy learners.
+
+Attention maps. Figure 5 shows attention conditioned on the goal and on observation summaries produced by intermediate $\mathrm{FiLM}^2$ layers. Goal-conditioned attention consistently locates the clause that contains the team the agent is supposed to attack. Intermediate layer attentions focus on regions near modifiers and monsters, particularly those that are present in the observations. These results suggest that attention mechanisms in $\texttt{txt2}\pi$ help identify relevant information in the document.
+
+Analysis of trajectories and failure modes. We examine trajectories from well-performing policies (80% win rate) as well as poorly-performing policies (50% win rate) on the full RTFM. We find that well-performing policies exhibit a number of consistent behaviours, such as identifying the correct item to pick up to fight the target monster, avoiding distractors, and engaging target monsters after acquiring the correct item. In contrast, the poorly-performing policies occasionally pick up the wrong item, causing the agent to lose when engaging with a monster. In addition, the agent occasionally gets stuck evading monsters indefinitely, causing it to lose when the time runs out. Replays of both policies can be found in GIFs in the supplementary materials.
+
+# 6 CONCLUSION
+
+We proposed RTFM, a grounded policy learning problem in which the agent must jointly reason over a language goal, relevant dynamics specified in a document, and environment observations. In order to study RTFM, we procedurally generated a combinatorially large number of environment dynamics such that the model cannot memorise a set of environment dynamics and must instead generalise via reading. We proposed $\texttt{txt2}\pi$ , a model that captures three-way interactions between the goal,
+
+document, and observations, and that generalises to new environments with dynamics not seen during training. txt2π outperforms baselines such as FiLM and language-conditioned CNNs. Through curriculum learning, txt2π performs well on complex RTFM tasks that require several reasoning and coreference steps with natural language templated goals and descriptions of the dynamics. Our work suggests that language understanding via reading is a promising way to learn policies that generalise to new environments. Despite curriculum learning, our best models trail performance of human players, suggesting that there is ample room for improvement in grounded policy learning on complex RTFM problems. In addition to jointly learning policies based on external documentation and language goals, we are interested in exploring how to use supporting evidence in external documentation to reason about plans (Andreas et al., 2018) and induce hierarchical policies (Hu et al., 2019; Jiang et al., 2019).
+
+# ACKNOWLEDGEMENT
+
+We thank Heinrich Küttler and Nantas Nardelli for their help in adapting TorchBeast, and the FAIR London team for their feedback and support.
+
+# REFERENCES
+
+Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian D. Reid, Stephen Gould, and Anton van den Hengel. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In CVPR, 2018.
+Jacob Andreas, Dan Klein, and Sergey Levine. Learning with latent language. In NAACL, 2018.
+Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Pushmeet Kohli, and Edward Grefenstette. Learning to follow language instructions with adversarial reward induction. In ICLR, 2019.
+K. Barnard and D. Forsyth. Learning the semantics of words and pictures. In ICCV, 2001.
+S. R. K. Branavan, David Silver, and Regina Barzilay. Learning to win by reading manuals in a monte-carlo framework. In ACL, 2011.
+S. R. K. Branavan, Nate Kushman, Tao Lei, and Regina Barzilay. Learning high-level planning from text. In ACL, 2012.
+S.R.K. Branavan. Grounding Linguistic Analysis in Control Applications. PhD thesis, MIT, 2012.
+David L. Chen and Raymond J. Mooney. Learning to sportscast: A test of grounded language acquisition. In ICML, 2008.
+John D. Co-Reyes, Abhishek Gupta, Suvansh Sanjeev, Nick Altieri, John DeNero, Pieter Abbeel, and Sergey Levine. Guiding policies with language via meta-learning. In ICLR, 2019.
+Lasse Espeholt, Hubert Soyer, Rémi Munos, Karen Simonyan, Volodymyr Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. IMPALA: scalable distributed deep-rl with importance weighted actor-learner architectures. In ICML, 2018.
+Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, and Trevor Darrell. Speaker-follower models for vision-and-language navigation. In NeurIPS, 2018.
+Karl Moritz Hermann, Felix Hill, Simon Green, Fumin Wang, Ryan Faulkner, Hubert Soyer, David Szepesvari, Wojciech Marian Czarnecki, Max Jaderberg, Denis Teplyashin, Marcus Wainwright, Chris Apps, Demis Hassabis, and Phil Blunsom. Grounded language learning in a simulated 3d world. CoRR, abs/1706.06551, 2017.
+Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8), 1997.
+
+Hengyuan Hu, Denis Yarats, Qucheng Gong, Yuandong Tian, and Mike Lewis. Hierarchical decision making by generating and following natural language instructions. CoRR, abs/1906.00744, 2019.
+Yiding Jiang, Shixiang Gu, Kevin Murphy, and Chelsea Finn. Language as an abstraction for hierarchical deep reinforcement learning. CoRR, abs/1906.07343, 2019.
+Thomas Kollar, Stefanie Tellex, Deb Roy, and Nicholas Roy. Toward understanding natural language directions. In HRI, 2010.
+Heinrich Küttler, Nantas Nardelli, Thibaut Lavril, Marco Selvatici, Viswanath Sivakumar, Tim Rocktäschel, and Edward Grefenstette. TorchBeast: A PyTorch Platform for Distributed RL. arXiv preprint arXiv:1910.03552, 2019. URL https://github.com/facebookresearch/torchbeast.
+Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. End-to-end neural coreference resolution. In EMNLP, 2017.
+Jiwei Li, Will Monroe, Alan Ritter, Dan Jurafsky, Michel Galley, and Jianfeng Gao. Deep reinforcement learning for dialogue generation. In EMNLP, 2016.
+Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Manfred Otto Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. CoRR, abs/1509.02971, 2015.
+Jelena Luketina, Nantas Nardelli, Gregory Farquhar, Jakob Foerster, Jacob Andreas, Edward Grefenstette, Shimon Whiteson, and Tim Rocktäschel. A survey of reinforcement learning informed by natural language. In IJCAI, 2019.
+Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attention-based neural machine translation. In ACL, 2015.
+Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin A. Riedmiller. Playing atari with deep reinforcement learning. CoRR, abs/1312.5602, 2013.
+Karthik Narasimhan, Regina Barzilay, and Tommi S. Jaakkola. Deep transfer in reinforcement learning by language grounding. JAIR, 2018.
+Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. Film: Visual reasoning with a general conditioning layer. In AAAI, 2018.
+Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R. Walter, Ashis Gopal Banerjee, Seth Teller, and Nicholas Roy. Understanding natural language commands for robotic navigation and mobile manipulation. In AAAI, 2011.
+T. Tieleman and G. Hinton. Lecture 6.5 - RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
+Sida I. Wang, Percy Liang, and Christopher D. Manning. Learning language games through interaction. In ACL, 2016.
+Victor Zhong, Caiming Xiong, and Richard Socher. Global-locally self-attentive dialogue state tracker. In ACL, 2018.
+
+# A PLAYTHROUGH EXAMPLES
+
+These figures show key snapshots from a trained policy on randomly sampled environments.
+
+# Doc:
+
+You should use arcane and mysterious items to beat lightning monsters. Panther, warg, and wolf are from the Order of the Forest. Blessed and Grandmaster's items beat poison monsters. Cold is not good against fanatical and shimmering weapons. The Star Alliance team is made up of beetle, jackal, and shaman. Imp, jaguar, and lynx make up the Rebel Enclave. Gleaming and Soldier's weapons beat fire monsters.
+
+# Goal:
+
+Defeat the Star Alliance
+
+
+
+
+
+
+Figure 6: The initial world is shown in 1. In 4, the agent avoids the target "lightning shaman" because it does not yet have "arcane spear", which beats the target. In 7 and 8, the agent is cornered by monsters. In 9, the agent is forced to engage in combat and loses.
+
+
+
+
+
+# Doc:
+
+Cold monsters are defeated by gleaming and Soldier's weapons. The Order of the Forest team consists of ant, lynx, and wolf. Mysterious and shimmering weapons are good against lightning monsters. Poison monsters are defeated by blessed and fanatical items. Get arcane and Grandmaster's weapons to slay fire monsters. Beetle, panther, and zombie are Star Alliance. Jackal, jaguar, and ghost are on the Rebel Enclave.
+
+# Goal:
+
+Fight the monster in the Rebel Enclave.
+
+
+
+
+
+
+Figure 7: The initial world is shown in 1. In 5 the agent evades the target "cold ghost" because it does not yet have "soldier's knife", which beats the target. In 11 and 13, the agent obtains "soldier's knife" while evading monsters. In 14, the agent defeats the target and wins.
+
+
+
+
+
+# B VARIABLE DIMENSIONS
+
+Let $\mathbf{x}_{\mathrm{text}} \in \mathbb{R}^{d_{\mathrm{text}}}$ denote a fixed-length $d_{\mathrm{text}}$-dimensional representation of the text and $\mathbf{X}_{\mathrm{vis}} \in \mathbb{R}^{d_{\mathrm{vis}} \times H \times W}$ denote the representation of visual inputs with height $H$, width $W$, and $d_{\mathrm{vis}}$ channels. Table 4 lists the dimensions of all variables.
+
+| Variable | Symbol | Dimension |
+| --- | --- | --- |
+| Text representation | $\mathbf{x}_{\mathrm{text}}$ | $d_{\mathrm{text}}$ |
+| Visual representation with height $H$, width $W$, and $d_{\mathrm{vis}}$ channels | $\mathbf{X}_{\mathrm{vis}}$ | $d_{\mathrm{vis}} \times H \times W$ |
+| Environment observation embeddings | $\mathbf{E}_{\mathrm{obs}}$ | $l_{\mathrm{obs}} \times d_{\mathrm{emb}} \times H \times W$ |
+| Embeddings of the $l_{\mathrm{obs}}$-word string that describes the objects in location $(i, j)$ of the grid-world | $\mathbf{E}_{\mathrm{obs}}[:, :, i, j]$ | $l_{\mathrm{obs}} \times d_{\mathrm{emb}}$ |
+| Document embeddings ($l_{\mathrm{doc}}$ words) | $\mathbf{E}_{\mathrm{doc}}$ | $l_{\mathrm{doc}} \times d_{\mathrm{emb}}$ |
+| Inventory embeddings ($l_{\mathrm{inv}}$ words) | $\mathbf{E}_{\mathrm{inv}}$ | $l_{\mathrm{inv}} \times d_{\mathrm{emb}}$ |
+| Goal embeddings ($l_{\mathrm{goal}}$ words) | $\mathbf{E}_{\mathrm{goal}}$ | $l_{\mathrm{goal}} \times d_{\mathrm{emb}}$ |
+
+Table 4: Variable dimensions
+
+# C MODEL DETAILS
+
+# C.1 $\texttt{txt}2\pi$
+
+Hyperparameters. The $\texttt{txt}2\pi$ model used in our experiments consists of 5 consecutive $\mathrm{FiLM^2}$ layers, each with $3 \times 3$ convolutions and padding and stride sizes of 1. The layers have channels of 16, 32, 64, 64, and 64, with residual connections from the 3rd layer to the 5th layer. The Goal-doc LSTM (see Figure 3) shares weights with the Goal LSTM. The Inventory and Goal LSTMs have a hidden dimension of 10, whereas the Vis-doc LSTM has a hidden dimension of 100. We use a word embedding dimension of 30.
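The layer layout above can be sketched in plain numpy, ignoring the $\mathrm{FiLM^2}$ text conditioning that modulates each layer. `conv3x3`, `conv_stack`, the 8-channel input, and the $6 \times 6$ grid are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

def conv3x3(x, w):
    """Naive 3x3 convolution with padding 1 and stride 1.
    x: (C_in, H, W), w: (C_out, C_in, 3, 3) -> (C_out, H, W)."""
    _, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.empty((w.shape[0], H, W))
    for i in range(H):
        for j in range(W):
            # contract (C_in, 3, 3) patch against each output filter
            out[:, i, j] = np.tensordot(w, xp[:, i:i + 3, j:j + 3], axes=3)
    return out

def conv_stack(x, weights):
    """5-layer stack with channels 16, 32, 64, 64, 64 and a residual
    connection adding the 3rd layer's output to the 5th layer's output."""
    outs, h = [], x
    for w in weights:
        h = np.maximum(conv3x3(h, w), 0.0)  # ReLU activation (assumed)
        outs.append(h)
    return outs[4] + outs[2]

# hypothetical input: 8 observation channels on a 6x6 grid
rng = np.random.default_rng(0)
dims = [8, 16, 32, 64, 64, 64]
weights = [rng.standard_normal((dims[k + 1], dims[k], 3, 3)) * 0.05
           for k in range(5)]
features = conv_stack(rng.standard_normal((8, 6, 6)), weights)
```

The residual connection is well-defined because the 3rd and 5th layers share the same channel count (64).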
+
+# C.2 CNN WITH RESIDUAL CONNECTIONS
+
+
+Figure 8: The convolutional network baseline. The FiLM baseline has the same structure, but with convolutional layers replaced by FiLM layers.
+
+Like $\texttt{txt}2\pi$, the CNN baseline consists of 5 layers of convolutions with channels of 16, 32, 64, 64, and 64. There are residual connections from the 3rd layer to the 5th layer. The input to each layer consists of the output of the previous layer, concatenated with positional features.
+
+The input to the network is the concatenation of the observations $V^{(0)}$ and text representations. The text representations consist of self-attention over bidirectional LSTM-encoded goal, document, and inventory. These attention outputs are replicated over the dimensions of the grid and concatenated feature-wise with the observation embeddings in each cell. Figure 8 illustrates the CNN baseline.
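The replicate-and-concatenate step can be sketched in numpy, assuming the attention outputs have already been pooled into a fixed-length vector; `tile_text_over_grid` and the sizes are illustrative:

```python
import numpy as np

def tile_text_over_grid(obs_emb, text_summary):
    """Replicate a fixed-length text feature over the grid and concatenate
    it feature-wise with the observation embeddings in each cell.
    obs_emb: (d_obs, H, W); text_summary: (d_text,) -> (d_obs + d_text, H, W)."""
    d_text = text_summary.shape[0]
    _, H, W = obs_emb.shape
    tiled = np.broadcast_to(text_summary[:, None, None], (d_text, H, W))
    return np.concatenate([obs_emb, tiled], axis=0)

# hypothetical sizes: 10 observation channels, 4 text features, 6x6 grid
x = tile_text_over_grid(np.zeros((10, 6, 6)), np.arange(4.0))
```

Every cell of the grid sees the same text features, so the convolutional layers can condition local processing on the global text summary.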
+
+# C.3 FILM BASELINE
+
+The FiLM baseline encodes text in the same fashion as the CNN model. However, instead of using convolutional layers, each layer is a FiLM layer from Bahdanau et al. (2019). Note that in our case, the language representation is a self-attention over the LSTM states instead of a concatenation of terminal LSTM states.
+
+# D TRAINING PROCEDURE
+
+We train using an implementation of IMPALA (Espeholt et al., 2018). In particular, we use 20 actors and a batch size of 24. When unrolling actors, we use a maximum unroll length of 80 frames. Each episode lasts for a maximum of 1000 frames. We optimise using RMSProp (Tieleman & Hinton, 2012) with a learning rate of 0.005, which is annealed linearly for 100 million frames. We set $\alpha = 0.99$ and $\epsilon = 0.01$ .
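The linear learning-rate annealing can be expressed as a simple schedule; `annealed_lr` is an illustrative helper under the stated settings, not the paper's code:

```python
def annealed_lr(frame, lr0=0.005, total_frames=100_000_000):
    """Linearly anneal the learning rate from lr0 at frame 0
    down to 0 at total_frames (clamped at 0 afterwards)."""
    return lr0 * max(0.0, 1.0 - frame / total_frames)
```

For example, halfway through training (50 million frames) the learning rate has decayed to half its initial value.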
+
+During training, we apply a small negative reward of $-0.02$ per time step and a discount factor of 0.99 to facilitate convergence. We additionally include an entropy cost to encourage exploration. Let $\boldsymbol{y}_{\text{policy}}$ denote the policy. The entropy loss is calculated as
+
+$$
+L_{\text{policy}} = -\sum_{i} \boldsymbol{y}_{\text{policy},i} \log \boldsymbol{y}_{\text{policy},i} \tag{23}
+$$
+
+In addition to the policy gradient, we add the entropy loss with a weight of 0.005 and the baseline loss with a weight of 0.5. The baseline loss is computed as the root mean square of the advantages (Espeholt et al., 2018).
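A numpy sketch of the two auxiliary losses described above (Eq. 23 for the entropy term, root mean square of the advantages for the baseline term); `entropy_loss`, `baseline_loss`, and the `eps` guard are illustrative assumptions:

```python
import numpy as np

def entropy_loss(policy, eps=1e-8):
    """L_policy = -sum_i y_i log y_i (Eq. 23); eps guards log(0)."""
    p = np.asarray(policy, dtype=float)
    return -np.sum(p * np.log(p + eps))

def baseline_loss(advantages):
    """Root mean square of the advantages."""
    a = np.asarray(advantages, dtype=float)
    return float(np.sqrt(np.mean(a ** 2)))
```

A uniform policy over $n$ actions attains the maximum value $\log n$, so weighting this term in the objective pushes the policy toward exploration.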
+
+When tuning models, we perform a grid search using the training environments to select hyperparameters for each model. We train 5 runs for each configuration in order to report the mean and standard deviation. When transferring, we transfer each of the 5 runs to the new task and once again report the mean and standard deviation.
+
+| Scenario | graphs: train | graphs: dev | graphs: unseen? | edges: train | edges: dev | edges: new? | nodes: train | nodes: dev | nodes: new? |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| permutation | 30 | 30 | y | 20 | 20 | n | 60 | 60 | n |
+| new edge | 20 | 20 | y | 48 | 36 | y | 17 | 13 | n |
+| new edge+nodes | 60 | 60 | y | 20 | 20 | y | 5 | 5 | y |
+
+Table 5: Statistics of the three variations of the Rock-paper-scissors task
+
+
+Figure 10: Performance on the Rock-paper-scissors task across models. Left shows final performance on environments whose goals and dynamics were seen during training. Right shows performance on the environments whose goals and dynamics were not seen during training.
+
+# E ROCK-PAPER-SCISSORS
+
+In addition to the main RTFM tasks, we also study a simpler formulation called Rock-paper-scissors that has a fixed goal. In Rock-paper-scissors, the agent must interpret a document that describes the environment dynamics in order to solve the task. Given a set of characters (e.g. a-z), we sample 3 characters and set up a rock-paper-scissors-like dependency graph between the characters (e.g. "a beats b, b beats c, c beats a"). We then spawn a monster in the world with a randomly assigned type (e.g. "b goblin"), as well as an item corresponding to each type (e.g. "a", "b", and "c"). The attributes of the agent, monster, and items are set up such that the player must obtain the correct item and then engage the monster in order to win. Any other sequence of actions (e.g. engaging the monster without the correct weapon) results in a loss. The winning policy is then to first identify the type of monster present, cross-reference the document to find which item defeats that type, pick up that item, and finally engage the monster in combat. Figure 9 shows an instance of Rock-paper-scissors.
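The environment generation step described above can be sketched as follows; `sample_rps_graph` and `winning_item` are hypothetical helpers, not the released environment code:

```python
import random

def sample_rps_graph(alphabet="abcdefghijklmnopqrstuvwxyz", rng=None):
    """Sample 3 characters and a cyclic 'beats' relation among them,
    e.g. a beats b, b beats c, c beats a. Returns {winner: loser}."""
    rng = rng or random.Random()
    a, b, c = rng.sample(alphabet, 3)
    return {a: b, b: c, c: a}

def winning_item(beats, monster_type):
    """The correct item to pick up is the type that beats the monster's type."""
    return next(t for t, loser in beats.items() if loser == monster_type)

beats = sample_rps_graph(rng=random.Random(1))
```

Because the relation is a 3-cycle, every monster type has exactly one item that defeats it, so the optimal policy is uniquely determined by the document.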
+
+Figure 9: The Rock-paper-scissors task requires jointly reasoning over the game observations and a document describing environment dynamics. The agent observes cell content in the form of text (shown in white).
+
+Doc: e beats d. c beats e. d beats c.
+Inventory: empty
+
+Reading models generalise to new environments. We split environment dynamics by permuting 3-character dependency graphs from an alphabet, which we randomly split into training and held-out sets. This corresponds to the "permutations" setting in Table 5.
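A sketch of this split, assuming a hypothetical 5-character alphabet and an even split so that the counts match the 30/30 graphs in Table 5 (the function and its parameters are illustrative):

```python
import itertools
import random

def split_dependency_graphs(alphabet, k=3, dev_frac=0.5, seed=0):
    """Enumerate all ordered k-character dependency graphs over the
    alphabet and randomly split them into training and held-out sets."""
    graphs = list(itertools.permutations(alphabet, k))
    random.Random(seed).shuffle(graphs)
    n_dev = int(len(graphs) * dev_frac)
    return graphs[n_dev:], graphs[:n_dev]

# P(5, 3) = 60 ordered graphs, split 30 train / 30 dev
train, dev = split_dependency_graphs("abcde")
```

Since the split is over whole graphs, no held-out dependency graph is ever observed during training, even though individual characters are shared.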
+
+
+Figure 11: Learning curve while transferring to the development environments. Win rates of individual runs are shown in light colours. Average win rates are shown in bold, dark lines.
+
+We train models on the $10 \times 10$ worlds from the training set and evaluate them on environments both seen and not seen during training. The left of Figure 10 shows the performance of models on worlds of varying sizes with training environment dynamics. In this case, the dynamics (e.g. dependency graphs) were seen during training. For $9 \times 9$ and $11 \times 11$ worlds, the world configurations were not seen during training. For $10 \times 10$ worlds, there is a $5\%$ chance that the initial frame was seen during training. The right of Figure 10 shows the performance on held-out environments not seen during training. We see that all models generalise to environments not seen during training, both when the world configuration is not seen (left) and when the environment dynamics are not seen (right).
+
+Reading models generalise to new concepts. In addition to splitting via permutations, we devise two additional ways of splitting environment dynamics by introducing new edges and nodes into the held-out set. Table 5 shows the three different settings. For each, we study the transfer behaviour of models on new environments. Figure 11 shows the learning curve when training a model on the held-out environments directly and when transferring the model trained on train environments to held-out environments. We observe that all models are significantly more sample-efficient when transferring from training environments, despite the introduction of new edges and new nodes.
+
+$\texttt{txt}2\pi$ is more sample-efficient and learns better policies. In Figure 10, we see that the FiLM model outperforms the CNN model on both training environment dynamics and held-out environment dynamics. $\texttt{txt}2\pi$ further outperforms FiLM, and does so more consistently in that its final performance has less variance. This behaviour is also observed in Figure 11. When training on the held-out set without transferring, $\texttt{txt}2\pi$ is more sample-efficient than FiLM and the CNN model, and achieves a higher win rate. When transferring to the held-out set, $\texttt{txt}2\pi$ remains more sample-efficient than the other models.
+
+
+Figure 12: Ablation training curves. Win rates of individual runs are shown in light colours. Average win rates are shown in bold, dark lines.
+
+# F CURRICULUM LEARNING TRAINING CURVES
+
+
+
+
+
+
+Figure 13: Curriculum learning results for $\text{txt}2\pi$ on RTFM. Win rates of individual runs are shown in light colours. Average win rates are shown in bold, dark lines.
+
+
+
+# G ENTITIES AND MODIFIERS
+
+Below is a list of entities and modifiers contained in RTFM:
+
+Monsters: wolf, jaguar, panther, goblin, bat, imp, shaman, ghost, zombie
+
+Weapons: sword, axe, morningstar, polearm, knife, katana, cutlass, spear
+
+Elements: cold, fire, lightning, poison
+
+Modifiers: Grandmaster's, blessed, shimmering, gleaming, fanatical, mysterious, Soldier's, arcane
+
+Teams: Star Alliance, Order of the Forest, Rebel Enclave
+
+# H LANGUAGE TEMPLATES
+
+We collect human-written natural language templates for the goal and the dynamics. The goal statements in RTFM describe which team the agent should defeat. We collect 12 language templates for goal statements. The document of environment dynamics consists of two types of statements. The first type describes which monsters are assigned to which team. The second type describes which modifiers, which describe items, are effective against which element types, which are associated with monsters. We collect 10 language templates for each type of statement. The entire document is composed of statements, which are randomly shuffled. We randomly sample a template for each statement, which we fill with the monsters and team for the first type, and with the modifiers and element for the second type.
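The composition procedure can be sketched as follows; the template strings, placeholder names, and `compose_document` helper are illustrative, not the collected templates:

```python
import random

def compose_document(team_assignments, item_effects,
                     team_templates, effect_templates, rng=None):
    """Fill a randomly sampled template for each statement,
    then shuffle all statements to form the document."""
    rng = rng or random.Random(0)
    statements = []
    for monsters, team in team_assignments:
        statements.append(rng.choice(team_templates).format(
            monsters=", ".join(monsters), team=team))
    for modifiers, element in item_effects:
        statements.append(rng.choice(effect_templates).format(
            modifiers=" and ".join(modifiers), element=element))
    rng.shuffle(statements)
    return " ".join(statements)

doc = compose_document(
    [(["imp", "jaguar", "lynx"], "Rebel Enclave")],
    [(["arcane", "mysterious"], "lightning")],
    ["{monsters} make up the {team}."],
    ["{modifiers} items beat {element} monsters."],
)
```

Shuffling the statements and varying the surface form per statement forces the agent to read for content rather than memorise positions in the document.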