SlowGuess committed on
Commit 9222bbf · verified · 1 Parent(s): 22c99cb

Add Batch d2686597-0451-4461-a23f-31a9221b859a

This view is limited to 50 files because it contains too many changes. See raw diff.

Files changed (50)
  1. onceforalltrainonenetworkandspecializeitforefficientdeployment/4a51cd14-2713-41e5-b618-48653a66fdc6_content_list.json +3 -0
  2. onceforalltrainonenetworkandspecializeitforefficientdeployment/4a51cd14-2713-41e5-b618-48653a66fdc6_model.json +3 -0
  3. onceforalltrainonenetworkandspecializeitforefficientdeployment/4a51cd14-2713-41e5-b618-48653a66fdc6_origin.pdf +3 -0
  4. onceforalltrainonenetworkandspecializeitforefficientdeployment/full.md +252 -0
  5. onceforalltrainonenetworkandspecializeitforefficientdeployment/images.zip +3 -0
  6. onceforalltrainonenetworkandspecializeitforefficientdeployment/layout.json +3 -0
  7. oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/f414d865-4e58-4bd4-8107-5e77486b1ba7_content_list.json +3 -0
  8. oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/f414d865-4e58-4bd4-8107-5e77486b1ba7_model.json +3 -0
  9. oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/f414d865-4e58-4bd4-8107-5e77486b1ba7_origin.pdf +3 -0
  10. oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/full.md +340 -0
  11. oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/images.zip +3 -0
  12. oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/layout.json +3 -0
  13. ontheconvergenceoffedavgonnoniiddata/a6fa98c7-5455-427d-bd93-84e9253f15ac_content_list.json +3 -0
  14. ontheconvergenceoffedavgonnoniiddata/a6fa98c7-5455-427d-bd93-84e9253f15ac_model.json +3 -0
  15. ontheconvergenceoffedavgonnoniiddata/a6fa98c7-5455-427d-bd93-84e9253f15ac_origin.pdf +3 -0
  16. ontheconvergenceoffedavgonnoniiddata/full.md +0 -0
  17. ontheconvergenceoffedavgonnoniiddata/images.zip +3 -0
  18. ontheconvergenceoffedavgonnoniiddata/layout.json +3 -0
  19. ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/79265851-6c9f-40c9-9670-1df76f49c641_content_list.json +3 -0
  20. ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/79265851-6c9f-40c9-9670-1df76f49c641_model.json +3 -0
  21. ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/79265851-6c9f-40c9-9670-1df76f49c641_origin.pdf +3 -0
  22. ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/full.md +0 -0
  23. ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/images.zip +3 -0
  24. ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/layout.json +3 -0
  25. ontheglobalconvergenceoftrainingdeeplinearresnets/9ebe3cd7-2a94-45a9-87cd-1cac7a4d4898_content_list.json +3 -0
  26. ontheglobalconvergenceoftrainingdeeplinearresnets/9ebe3cd7-2a94-45a9-87cd-1cac7a4d4898_model.json +3 -0
  27. ontheglobalconvergenceoftrainingdeeplinearresnets/9ebe3cd7-2a94-45a9-87cd-1cac7a4d4898_origin.pdf +3 -0
  28. ontheglobalconvergenceoftrainingdeeplinearresnets/full.md +0 -0
  29. ontheglobalconvergenceoftrainingdeeplinearresnets/images.zip +3 -0
  30. ontheglobalconvergenceoftrainingdeeplinearresnets/layout.json +3 -0
  31. ontheinteractionbetweensupervisionandselfplayinemergentcommunication/1b3901be-621d-4bb3-bcdb-d88a447dac73_content_list.json +3 -0
  32. ontheinteractionbetweensupervisionandselfplayinemergentcommunication/1b3901be-621d-4bb3-bcdb-d88a447dac73_model.json +3 -0
  33. ontheinteractionbetweensupervisionandselfplayinemergentcommunication/1b3901be-621d-4bb3-bcdb-d88a447dac73_origin.pdf +3 -0
  34. ontheinteractionbetweensupervisionandselfplayinemergentcommunication/full.md +274 -0
  35. ontheinteractionbetweensupervisionandselfplayinemergentcommunication/images.zip +3 -0
  36. ontheinteractionbetweensupervisionandselfplayinemergentcommunication/layout.json +3 -0
  37. ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/58912334-d640-40cb-adcc-b945fac97af5_content_list.json +3 -0
  38. ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/58912334-d640-40cb-adcc-b945fac97af5_model.json +3 -0
  39. ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/58912334-d640-40cb-adcc-b945fac97af5_origin.pdf +3 -0
  40. ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/full.md +912 -0
  41. ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/images.zip +3 -0
  42. ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/layout.json +3 -0
  43. ontherelationshipbetweenselfattentionandconvolutionallayers/3e5ce1ef-dba0-48e8-afe3-e1959710950f_content_list.json +3 -0
  44. ontherelationshipbetweenselfattentionandconvolutionallayers/3e5ce1ef-dba0-48e8-afe3-e1959710950f_model.json +3 -0
  45. ontherelationshipbetweenselfattentionandconvolutionallayers/3e5ce1ef-dba0-48e8-afe3-e1959710950f_origin.pdf +3 -0
  46. ontherelationshipbetweenselfattentionandconvolutionallayers/full.md +458 -0
  47. ontherelationshipbetweenselfattentionandconvolutionallayers/images.zip +3 -0
  48. ontherelationshipbetweenselfattentionandconvolutionallayers/layout.json +3 -0
  49. onthesteerabilityofgenerativeadversarialnetworks/a1c0e785-948e-4091-8bae-4fdc1021722e_content_list.json +3 -0
  50. onthesteerabilityofgenerativeadversarialnetworks/a1c0e785-948e-4091-8bae-4fdc1021722e_model.json +3 -0
onceforalltrainonenetworkandspecializeitforefficientdeployment/4a51cd14-2713-41e5-b618-48653a66fdc6_content_list.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3031ec358799ac64d92deab6256599477fd6d744dc6282a4e45e518363c67fbf
size 75756
onceforalltrainonenetworkandspecializeitforefficientdeployment/4a51cd14-2713-41e5-b618-48653a66fdc6_model.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d9f833c0a72fee7d7e2a00be0abc255cc442961d9db4ae2818ded5248e2b7958
size 93098
onceforalltrainonenetworkandspecializeitforefficientdeployment/4a51cd14-2713-41e5-b618-48653a66fdc6_origin.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7a8a9a28af56a54dd8a516291f7503eb3dc24cd12a40639c6fc711ea7315ae3c
size 3471878
onceforalltrainonenetworkandspecializeitforefficientdeployment/full.md ADDED
@@ -0,0 +1,252 @@

# ONCE-FOR-ALL: TRAIN ONE NETWORK AND SPECIALIZE IT FOR EFFICIENT DEPLOYMENT

Han Cai<sup>1</sup>, Chuang Gan<sup>2</sup>, Tianzhe Wang<sup>1</sup>, Zhekai Zhang<sup>1</sup>, Song Han<sup>1</sup>

<sup>1</sup>Massachusetts Institute of Technology, <sup>2</sup>MIT-IBM Watson AI Lab {hancai, chuangg, songhan}@mit.edu

# ABSTRACT

We address the challenging problem of efficient inference across many devices and resource constraints, especially on edge devices. Conventional approaches either manually design or use neural architecture search (NAS) to find a specialized neural network and train it from scratch for each case, which is computationally prohibitive (emitting as much $CO_2$ as five cars over their lifetimes (Strubell et al., 2019)) and thus unscalable. In this work, we propose to train a once-for-all (OFA) network that supports diverse architectural settings by decoupling training and search, to reduce the cost. We can quickly get a specialized sub-network by selecting from the OFA network without additional training. To efficiently train OFA networks, we also propose a novel progressive shrinking algorithm, a generalized pruning method that reduces the model size across many more dimensions than pruning (depth, width, kernel size, and resolution). It can produce a surprisingly large number of sub-networks ($>10^{19}$) that can fit different hardware platforms and latency constraints while maintaining the same level of accuracy as training independently. On diverse edge devices, OFA consistently outperforms state-of-the-art (SOTA) NAS methods (up to $4.0\%$ ImageNet top1 accuracy improvement over MobileNetV3, or the same accuracy but $1.5\times$ faster than MobileNetV3 and $2.6\times$ faster than EfficientNet w.r.t. measured latency) while reducing GPU hours and $CO_2$ emission by many orders of magnitude. In particular, OFA achieves a new SOTA $80.0\%$ ImageNet top1 accuracy under the mobile setting (<600M MACs). OFA is the winning solution of the 3rd Low Power Computer Vision Challenge (LPCVC), DSP classification track, and the 4th LPCVC, both the classification track and the detection track. Code and 50 pre-trained models (for many devices & many latency constraints) are released at https://github.com/mit-han-lab/once-for-all.

# 1 INTRODUCTION

Deep Neural Networks (DNNs) deliver state-of-the-art accuracy in many machine learning applications. However, the explosive growth in model size and computation cost gives rise to new challenges in how to efficiently deploy these deep learning models on diverse hardware platforms, since they have to meet different hardware efficiency constraints (e.g., latency, energy). For instance, one mobile application on the App Store has to support a diverse range of hardware devices, from a high-end Samsung Note10 with a dedicated neural network accelerator to a 5-year-old Samsung S6 with a much slower processor. With different hardware resources (e.g., on-chip memory size, #arithmetic units), the optimal neural network architecture varies significantly. Even running on the same hardware, under different battery conditions or workloads, the best model architecture also differs a lot.

Given different hardware platforms and efficiency constraints (defined as deployment scenarios), researchers either design compact models specialized for mobile (Howard et al., 2017; Sandler et al., 2018; Zhang et al., 2018) or accelerate existing models by compression (Han et al., 2016; He et al., 2018) for efficient deployment. However, designing specialized DNNs for every scenario is engineer-expensive and computationally expensive, whether with human-based methods or NAS, since such methods need to repeat the network design process and retrain the designed network from scratch for each case. Their total cost grows linearly as the number of deployment scenarios increases, which results in excessive energy consumption and $CO_2$ emission (Strubell et al., 2019). It makes them unable to handle the vast number of hardware devices (23.14 billion IoT devices till

![](images/4795181fda638f05f9350d49fd791d6afa83a56aeb438cd64970f82839ea0de3.jpg)
Figure 1: Left: a single once-for-all network is trained to support versatile architectural configurations including depth, width, kernel size, and resolution. Given a deployment scenario, a specialized sub-network is directly selected from the once-for-all network without training. Middle: this approach reduces the cost of specialized deep learning deployment from O(N) to O(1). Right: the once-for-all network followed by model selection can derive many accuracy-latency trade-offs by training only once, compared to conventional methods that require repeated training.

2018<sup>1</sup>) and highly dynamic deployment environments (different battery conditions, different latency requirements, etc.).

This paper introduces a new solution to tackle this challenge – designing a once-for-all network that can be directly deployed under diverse architectural configurations, amortizing the training cost. Inference is performed by selecting only part of the once-for-all network. It flexibly supports different depths, widths, kernel sizes, and resolutions without retraining. A simple example of Once-for-All (OFA) is illustrated in Figure 1 (left). Specifically, we decouple the model training stage and the neural architecture search stage. In the model training stage, we focus on improving the accuracy of all sub-networks that are derived by selecting different parts of the once-for-all network. In the model specialization stage, we sample a subset of sub-networks to train an accuracy predictor and latency predictors. Given the target hardware and constraint, a predictor-guided architecture search (Liu et al., 2018) is conducted to get a specialized sub-network, and the cost is negligible. As such, we reduce the total cost of specialized neural network design from O(N) to O(1) (Figure 1, middle).

However, training the once-for-all network is a non-trivial task, since it requires jointly optimizing the weights to maintain the accuracy of a large number of sub-networks (more than $10^{19}$ in our experiments). It is computationally prohibitive to enumerate all sub-networks to get the exact gradient in each update step, while randomly sampling a few sub-networks in each step leads to significant accuracy drops. The challenge is that different sub-networks interfere with each other, making the training process of the whole once-for-all network inefficient. To address this challenge, we propose a progressive shrinking algorithm for training the once-for-all network. Instead of directly optimizing the once-for-all network from scratch, we propose to first train the largest neural network with maximum depth, width, and kernel size, then progressively fine-tune the once-for-all network to support smaller sub-networks that share weights with the larger ones. As such, it provides better initialization by selecting the most important weights of larger sub-networks, and the opportunity to distill smaller sub-networks, which greatly improves the training efficiency. From this perspective, progressive shrinking can be viewed as a generalized network pruning method that shrinks multiple dimensions (depth, width, kernel size, and resolution) of the full network rather than only the width dimension. Besides, it targets maintaining the accuracy of all sub-networks rather than a single pruned network.

We extensively evaluated the effectiveness of OFA on ImageNet with many hardware platforms (CPU, GPU, mCPU, mGPU, FPGA accelerator) and efficiency constraints. Under all deployment scenarios, OFA consistently improves the ImageNet accuracy by a significant margin compared to SOTA hardware-aware NAS methods while saving GPU hours, dollars, and $CO_2$ emission by orders of magnitude. On the ImageNet mobile setting (less than 600M MACs), OFA achieves a new SOTA $80.0\%$ top1 accuracy with 595M MACs (Figure 2). To the best of our knowledge, this is the first time that the SOTA ImageNet top1 accuracy reaches $80\%$ under the mobile setting.

![](images/d0d208cacade8b073169a94cefb1c2fd4fc238a20fa61eab7436c1bbcbdd83fa.jpg)
Figure 2: Comparison between OFA and state-of-the-art CNN models on ImageNet. OFA provides $80.0\%$ ImageNet top1 accuracy under the mobile setting ($<600\mathrm{M}$ MACs).

# 2 RELATED WORK

Efficient Deep Learning. Many efficient neural network architectures have been proposed to improve hardware efficiency, such as SqueezeNet (Iandola et al., 2016), MobileNets (Howard et al., 2017; Sandler et al., 2018), ShuffleNets (Ma et al., 2018; Zhang et al., 2018), etc. Orthogonal to architecting efficient neural networks, model compression (Han et al., 2016) is another very effective technique for efficient deep learning, including network pruning that removes redundant units (Han et al., 2015) or redundant channels (He et al., 2018; Liu et al., 2017), and quantization that reduces the bit width of the weights and activations (Han et al., 2016; Courbariaux et al., 2015; Zhu et al., 2017).

Neural Architecture Search. Neural architecture search (NAS) focuses on automating the architecture design process (Zoph & Le, 2017; Zoph et al., 2018; Real et al., 2019; Cai et al., 2018a; Liu et al., 2019). Early NAS methods (Zoph et al., 2018; Real et al., 2019; Cai et al., 2018b) search for high-accuracy architectures without taking hardware efficiency into consideration; therefore, the produced architectures (e.g., NASNet, AmoebaNet) are not efficient for inference. Recent hardware-aware NAS methods (Cai et al., 2019; Tan et al., 2019; Wu et al., 2019) directly incorporate hardware feedback into the architecture search. Hardware-DNN co-design techniques (Jiang et al., 2019b;a; Hao et al., 2019) jointly optimize neural network architectures and hardware architectures, and can thereby improve inference efficiency. However, given new inference hardware platforms, these methods need to repeat the architecture search process and retrain the model, leading to prohibitive GPU hours, dollars, and $CO_2$ emission. They are not scalable to a large number of deployment scenarios. The individually trained models do not share any weights, leading to a large total model size and high downloading bandwidth.

Dynamic Neural Networks. To improve the efficiency of a given neural network, some work has explored skipping part of the model based on the input image. For example, Wu et al. (2018); Liu & Deng (2018); Wang et al. (2018) learn a controller or gating modules to adaptively drop layers; Huang et al. (2018) introduce early-exit branches in the computation graph; Lin et al. (2017) adaptively prune channels based on the input feature map; Kuen et al. (2018) introduce stochastic downsampling points to reduce the feature map size adaptively. Recently, Slimmable Nets (Yu et al., 2019; Yu & Huang, 2019b) propose to train a model to support multiple width multipliers (e.g., 4 different global width multipliers), building upon existing human-designed neural networks (e.g., MobileNetV2 0.35, 0.5, 0.75, 1.0). Such methods can adaptively fit different efficiency constraints at runtime; however, they still inherit a pre-designed neural network (e.g., MobileNetV2), which limits the degree of flexibility (e.g., only the width multiplier can adapt) and the ability to handle new deployment scenarios where the pre-designed neural network is not optimal. In this work, in contrast, we enable a much more diverse architecture space (depth, width, kernel size, and resolution) and a significantly larger number of architectural settings ($10^{19}$ vs. 4 (Yu et al., 2019)). Thanks to the diversity and the large design

![](images/ffc5914ed689105b9dab31a61e58266c1288d58d4e85bc0425a432d5473c3758.jpg)
Figure 3: Illustration of the progressive shrinking process to support different depth $D$, width $W$, kernel size $K$ and resolution $R$. It leads to a large space comprising diverse sub-networks ($>10^{19}$).

space, we can derive new specialized neural networks for many different deployment scenarios rather than working on top of an existing neural network that limits the optimization headroom. However, it is more challenging to train the network to achieve this flexibility, which motivates us to design the progressive shrinking algorithm to tackle this challenge.

# 3 METHOD

# 3.1 PROBLEM FORMALIZATION

Denoting the weights of the once-for-all network as $W_{o}$ and the architectural configurations as $\{arch_{i}\}$, we can formalize the problem as

$$
\min_{W_{o}} \sum_{arch_{i}} \mathcal{L}_{\mathrm{val}}\Big(C\big(W_{o}, arch_{i}\big)\Big), \tag{1}
$$

where $C(W_{o}, arch_{i})$ denotes a selection scheme that selects part of the model from the once-for-all network $W_{o}$ to form a sub-network with architectural configuration $arch_{i}$. The overall training objective is to optimize $W_{o}$ so that each supported sub-network maintains the same level of accuracy as independently training a network with the same architectural configuration.
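
As a concrete illustration of Eq. (1), the sketch below shows one optimization step that approximates the full sum over $\{arch_i\}$ by sampling configurations. It is a minimal sketch, not the released implementation: `ofa_net(images, arch=...)` is an assumed interface standing in for the selection scheme $C(W_o, arch_i)$.

```python
import random
import torch.nn.functional as F

def train_step(ofa_net, optimizer, images, labels, arch_space, n_samples=1):
    """One update step approximating Eq. (1): instead of enumerating all
    {arch_i}, sample a few configurations and accumulate their gradients
    on the shared weights W_o."""
    optimizer.zero_grad()
    for _ in range(n_samples):
        arch = random.choice(arch_space)     # one architectural configuration arch_i
        logits = ofa_net(images, arch=arch)  # C(W_o, arch_i): run the selected sub-network
        loss = F.cross_entropy(logits, labels) / n_samples
        loss.backward()                      # gradients accumulate across sampled sub-networks
    optimizer.step()
```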

# 3.2 ARCHITECTURE SPACE

Our once-for-all network provides one model but supports many sub-networks of different sizes, covering four important dimensions of convolutional neural network (CNN) architectures, i.e., depth, width, kernel size, and resolution. Following the common practice of many CNN models (He et al., 2016; Sandler et al., 2018; Huang et al., 2017), we divide a CNN model into a sequence of units with gradually reduced feature map size and increased channel numbers. Each unit consists of a sequence of layers where only the first layer has stride 2 if the feature map size decreases (Sandler et al., 2018). All the other layers in the unit have stride 1.

We allow each unit to use an arbitrary number of layers (denoted as elastic depth); for each layer, we allow an arbitrary number of channels (denoted as elastic width) and arbitrary kernel sizes (denoted as elastic kernel size). In addition, we also allow the CNN model to take arbitrary input image sizes (denoted as elastic resolution). For example, in our experiments, the input image size ranges from 128 to 224 with stride 4; the depth of each unit is chosen from $\{2,3,4\}$; the width expansion ratio in each layer is chosen from $\{3,4,6\}$; the kernel size is chosen from $\{3,5,7\}$. Therefore, with 5 units, we have roughly $\left((3\times 3)^2 + (3\times 3)^3 + (3\times 3)^4\right)^5 \approx 2\times 10^{19}$ different neural network architectures, and each of them can be used under 25 different input resolutions. Since all of these sub-networks share the same weights (i.e., $W_{o}$) (Cheung et al., 2019), we only require 7.7M parameters to store all of them. Without sharing, the total model size would be prohibitive.
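
The sub-network count above can be verified with a few lines of arithmetic:

```python
# Each layer picks one of 3 width ratios and one of 3 kernel sizes (9 options);
# each of the 5 units uses 2, 3, or 4 layers.
per_unit = sum(9 ** depth for depth in (2, 3, 4))  # 9^2 + 9^3 + 9^4 = 7371
print(f"{per_unit ** 5:.1e}")                      # ~2.2e+19 sub-networks
print(len(range(128, 225, 4)))                     # 25 input resolutions
```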

# 3.3 TRAINING THE ONCE-FOR-ALL NETWORK

Naïve Approach. Training the once-for-all network can be cast as a multi-objective problem, where each objective corresponds to one sub-network. From this perspective, a naïve training approach is to directly optimize the once-for-all network from scratch using the exact gradient of the overall objective, which is derived by enumerating all sub-networks in each update step, as shown in Eq. (1). The cost of this approach is linear in the number of sub-networks. Therefore, it is only applicable to scenarios where a limited number of sub-networks are supported (Yu et al., 2019), while in our case it is computationally prohibitive.

Another naïve training approach is to sample a few sub-networks in each update step rather than enumerating all of them, which avoids the prohibitive cost. However, with such a large number of sub-networks that share weights and thus interfere with each other, we find it suffers from

![](images/da6f7d699c01a4204f9e190aca4f369a5cc0e3f133d61f61cdd040715c75b6f6.jpg)
Figure 4: Progressive shrinking can be viewed as a generalized network pruning technique with much higher flexibility. Compared to network pruning, it shrinks more dimensions (not only width) and provides a much more powerful once-for-all network that can fit different deployment scenarios rather than a single pruned network.

![](images/86edce657bcf4ef008b21481f93ae66629e98535a40eb4f5515fe02605978c62.jpg)
Figure 5: Left: Kernel transformation matrix for elastic kernel size. Right: Progressive shrinking for elastic depth. Instead of skipping each layer independently, we keep the first $D$ layers and skip the last $(4 - D)$ layers. The weights of the early layers are shared.

a significant accuracy drop. In the following section, we introduce a solution to address this challenge, i.e., progressive shrinking.

Progressive Shrinking. The once-for-all network comprises many sub-networks of different sizes, where small sub-networks are nested in large sub-networks. To prevent interference between the sub-networks, we propose to enforce a training order from large sub-networks to small sub-networks in a progressive manner. We name this training scheme progressive shrinking (PS). An example of the training process with PS is provided in Figure 3 and Figure 4, where we start with training the largest neural network with the maximum kernel size (e.g., 7), depth (e.g., 4), and width (e.g., 6). Next, we progressively fine-tune the network to support smaller sub-networks by gradually adding them into the sampling space (larger sub-networks may also be sampled). Specifically, after training the largest network, we first support elastic kernel size, which can be chosen from $\{3,5,7\}$ at each layer, while the depth and width remain at their maximum values. Then, we support elastic depth and elastic width sequentially, as shown in Figure 3. The resolution is elastic throughout the whole training process, which is implemented by sampling different image sizes for each batch of training data. We also use the knowledge distillation technique after training the largest neural network (Hinton et al., 2015; Ashok et al., 2018; Yu & Huang, 2019b): the loss combines two terms, using both the soft labels given by the largest neural network and the real labels.
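
The distillation term can be implemented as a standard soft-label loss. Below is a minimal sketch; the weighting `alpha` and temperature `T` are illustrative assumptions, as the text does not specify them.

```python
import torch.nn.functional as F

def ps_distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, T=1.0):
    """Combine the hard-label loss with a soft-label term from the largest
    (teacher) network. alpha and T are assumed values for illustration."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * hard + (1.0 - alpha) * soft
```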

Compared to the naïve approach, PS prevents small sub-networks from interfering with large sub-networks, since the large sub-networks are already well-trained when the once-for-all network is fine-tuned to support small sub-networks. Regarding the small sub-networks, they share weights with the large ones. Therefore, PS allows initializing small sub-networks with the most important weights of well-trained large sub-networks, which expedites the training process. Compared to network pruning (Figure 4), PS also starts with training the full model, but it shrinks not only the width dimension but also the depth, kernel size, and resolution dimensions of the full model. Additionally, PS fine-tunes both large and small sub-networks rather than a single pruned network. As a result, PS provides a much more powerful once-for-all network that can fit diverse hardware platforms and efficiency constraints compared to network pruning. We describe the details of the PS training flow as follows:

![](images/67785c7fd17fbb33d6819d6dbc45757cdee921563e17c130be787d40346292df.jpg)
Figure 6: Progressive shrinking for elastic width. In this example, we progressively support 4, 3, and 2 channel settings. We perform channel sorting and pick the most important channels (with large L1 norm) to initialize the smaller channel settings. The important channels' weights are shared.

- Elastic Kernel Size (Figure 5, left). We let the center of a $7 \times 7$ convolution kernel also serve as a $5 \times 5$ kernel, the center of which can also serve as a $3 \times 3$ kernel. Therefore, the kernel size becomes elastic. The challenge is that the centered sub-kernels (e.g., $3 \times 3$ and $5 \times 5$) are shared and need to play multiple roles (an independent kernel and part of a large kernel). The weights of the centered sub-kernels may need different distributions or magnitudes for these different roles, and forcing them to be the same degrades the performance of some sub-networks. Therefore, we introduce kernel transformation matrices when sharing the kernel weights. We use separate kernel transformation matrices for different layers; within each layer, the kernel transformation matrices are shared among different channels. As such, we only need $25 \times 25 + 9 \times 9 = 706$ extra parameters to store the kernel transformation matrices in each layer, which is negligible.
- Elastic Depth (Figure 5, right). To derive a sub-network that has $D$ layers in a unit that originally has $N$ layers, we keep the first $D$ layers and skip the last $N - D$ layers, rather than keeping any $D$ layers as done in current NAS methods (Cai et al., 2019; Wu et al., 2019). As such, one depth setting corresponds to exactly one combination of layers. In the end, the weights of the first $D$ layers are shared between large and small models.
- Elastic Width (Figure 6). Width means the number of channels. We give each layer the flexibility to choose different channel expansion ratios. Following the progressive shrinking scheme, we first train a full-width model. Then we introduce a channel sorting operation to support partial widths (see the sketch after this list). It reorganizes the channels according to their importance, which is calculated based on the L1 norm of each channel's weights; a larger L1 norm means more important. For example, when shrinking from a 4-channel layer to a 3-channel layer, we select the 3 most important channels, whose weights are shared with the 4-channel layer (Figure 6, left and middle). Thereby, smaller sub-networks are initialized with the most important channels of the once-for-all network, which is already well trained. This channel sorting operation preserves the accuracy of larger sub-networks.
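
The sketch below illustrates the two weight-sharing mechanics described above in PyTorch: extracting a centered sub-kernel and transforming it with a per-layer matrix (elastic kernel size), and ranking output channels by L1 norm (elastic width). It is a simplified sketch of the mechanics, not the released code.

```python
import torch

def centered_sub_kernel(weight, k):
    """Take the centered k x k sub-kernel of a (out_ch, in_ch, K, K) kernel."""
    K = weight.shape[-1]
    start = (K - k) // 2
    return weight[:, :, start:start + k, start:start + k]

def transform_sub_kernel(sub_kernel, matrix):
    """Apply a per-layer kernel transformation matrix of shape (k*k, k*k),
    shared across channels, to the flattened centered sub-kernel."""
    out_ch, in_ch, k, _ = sub_kernel.shape
    flat = sub_kernel.reshape(out_ch, in_ch, k * k)
    return (flat @ matrix.t()).reshape(out_ch, in_ch, k, k)

def channels_by_importance(weight):
    """Elastic width: rank output channels by the L1 norm of their weights;
    the top channels initialize (and are shared with) smaller widths."""
    importance = weight.abs().sum(dim=(1, 2, 3))  # L1 norm per output channel
    return torch.argsort(importance, descending=True)

# Example: a 7x7 kernel serving as a 5x5 kernel via a 25x25 transformation.
w = torch.randn(64, 32, 7, 7)
m = torch.eye(25)  # learned in practice; identity here for illustration
w5 = transform_sub_kernel(centered_sub_kernel(w, 5), m)
```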

# 3.4 SPECIALIZED MODEL DEPLOYMENT WITH ONCE-FOR-ALL NETWORK

Having trained a once-for-all network, the next stage is to derive the specialized sub-network for a given deployment scenario. The goal is to search for a neural network that satisfies the efficiency (e.g., latency, energy) constraints on the target hardware while optimizing accuracy. Since OFA decouples model training from neural architecture search, we do not need any training cost in this stage. Furthermore, we build neural-network-twins to predict the latency and accuracy given a neural network architecture, providing quick feedback on model quality. They eliminate the repeated search cost by substituting predicted accuracy/latency for measured accuracy/latency.

Specifically, we randomly sample 16K sub-networks with different architectures and input image sizes, then measure their accuracy on 10K validation images sampled from the original training set. These [architecture, accuracy] pairs are used to train an accuracy predictor that predicts the accuracy of a model given its architecture and input image size<sup>2</sup>. We also build a latency lookup table (Cai et al., 2019) on each target hardware platform to predict the latency. Given the target hardware and latency constraint, we conduct an evolutionary search (Real et al., 2019) based on the neural-network-twins to get a specialized sub-network. Since the cost of searching with neural-network-twins is negligible,

![](images/b4ca34c09b57512e36838f319ae4ba56ddf8d94c5d526603debf2c968ee9fd74.jpg)
Figure 7: ImageNet top1 accuracy (%) of sub-networks under resolution $224 \times 224$. "$(\mathrm{D} = d, \mathrm{W} = w, \mathrm{K} = k)$" denotes a sub-network with $d$ layers in each unit, where each layer has a width expansion ratio $w$ and kernel size $k$.

we only need 40 GPU hours to collect the data pairs, and the cost stays constant regardless of the number of deployment scenarios.
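
A simplified sketch of the predictor-guided evolutionary search follows. Here `acc_predictor`, `latency_table`, and the `sample`/`mutate` helpers on `arch_space` are assumed interfaces standing in for the neural-network-twins; a fuller implementation would typically also include crossover.

```python
import random

def evolutionary_search(acc_predictor, latency_table, arch_space,
                        latency_limit, generations=100, population_size=100):
    """Predictor-guided evolutionary search (simplified sketch). Fitness is
    the *predicted* accuracy, so no training or measurement happens here;
    candidates violating the latency constraint are re-sampled/re-mutated."""
    def feasible(sampler):
        while True:
            arch = sampler()
            if latency_table.predict(arch) <= latency_limit:
                return arch

    population = [feasible(arch_space.sample) for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=acc_predictor.predict, reverse=True)
        parents = population[: population_size // 4]        # keep the top quarter
        children = [
            feasible(lambda: arch_space.mutate(random.choice(parents)))
            for _ in range(population_size - len(parents))
        ]
        population = parents + children
    return max(population, key=acc_predictor.predict)
```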

# 4 EXPERIMENTS

In this section, we first apply the progressive shrinking algorithm to train the once-for-all network on ImageNet (Deng et al., 2009). Then we demonstrate the effectiveness of our trained once-for-all network on various hardware platforms (Samsung S7 Edge, Note8, Note10, Google Pixel1, Pixel2, LG G8, NVIDIA 1080Ti, V100 GPUs, Jetson TX2, Intel Xeon CPU, Xilinx ZU9EG, and ZU3EG FPGAs) with different latency constraints.

# 4.1 TRAINING THE ONCE-FOR-ALL NETWORK ON IMAGENET

Training Details. We use the same architecture space as MobileNetV3 (Howard et al., 2019). For training the full network, we use the standard SGD optimizer with Nesterov momentum 0.9 and weight decay $3\times 10^{-5}$. The initial learning rate is 2.6, and we use the cosine schedule (Loshchilov & Hutter, 2016) for learning rate decay. The full network is trained for 180 epochs with batch size 2048 on 32 GPUs. Then we follow the schedule described in Figure 3 to further fine-tune the full network<sup>3</sup>. The whole training process takes around 1,200 GPU hours on V100 GPUs. This is a one-time training cost that can be amortized across many deployment scenarios.
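
For reference, the reported recipe for the full network maps directly onto standard PyTorch components (a sketch; `net` and the data loop are omitted):

```python
import torch

# SGD with Nesterov momentum 0.9, weight decay 3e-5, initial LR 2.6
# (for batch size 2048), as reported above.
optimizer = torch.optim.SGD(net.parameters(), lr=2.6, momentum=0.9,
                            nesterov=True, weight_decay=3e-5)
# Cosine learning-rate decay over the 180 full-network epochs.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=180)
```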

Results. Figure 7 reports the top1 accuracy of sub-networks derived from once-for-all networks trained with and without our progressive shrinking (PS) algorithm. Due to space limits, we take 8 sub-networks for comparison, each denoted as "$(\mathrm{D} = d, \mathrm{W} = w, \mathrm{K} = k)$". This represents a sub-network that has $d$ layers in all units, with the expansion ratio and kernel size set to $w$ and $k$ for all layers. PS improves the ImageNet accuracy of sub-networks by a significant margin under all architectural settings. Specifically, without architecture optimization, PS achieves $74.8\%$ top1 accuracy using 226M MACs under the architecture setting $(\mathrm{D} = 4, \mathrm{W} = 3, \mathrm{K} = 3)$, which is on par with MobileNetV3-Large. In contrast, without PS, the same setting only achieves $71.5\%$, which is $3.3\%$ lower.

# 4.2 SPECIALIZED SUB-NETWORKS FOR DIFFERENT HARDWARE AND CONSTRAINTS

We apply our trained once-for-all network to get different specialized sub-networks for diverse hardware platforms: from the cloud to the edge. On cloud devices, the GPU latency is measured with batch size 64 on NVIDIA 1080Ti and V100 with PyTorch 1.0+cuDNN. The CPU latency is measured with batch size 1 on Intel Xeon E5-2690 v4+MKL-DNN. On edge devices, including mobile phones, we use Samsung, Google and LG phones with TF-Lite, batch size 1; for mobile GPU,

| Model | ImageNet Top1 (%) | MACs | Mobile latency | Search cost (GPU hours) | Training cost (GPU hours) | Total cost: GPU hours | Total cost: $CO_2$e (lbs) | Total cost: AWS cost |
|---|---|---|---|---|---|---|---|---|
| MobileNetV2 [31] | 72.0 | 300M | 66ms | 0 | 150N | 6k | 1.7k | $18.4k |
| MobileNetV2 #1200 | 73.5 | 300M | 66ms | 0 | 1200N | 48k | 13.6k | $146.9k |
| NASNet-A [44] | 74.0 | 564M | - | 48,000N | - | 1,920k | 544.5k | $5875.2k |
| DARTS [25] | 73.1 | 595M | - | 96N | 250N | 14k | 4.0k | $42.8k |
| MnasNet [33] | 74.0 | 317M | 70ms | 40,000N | - | 1,600k | 453.8k | $4896.0k |
| FBNet-C [36] | 74.9 | 375M | - | 216N | 360N | 23k | 6.5k | $70.4k |
| ProxylessNAS [4] | 74.6 | 320M | 71ms | 200N | 300N | 20k | 5.7k | $61.2k |
| SinglePathNAS [8] | 74.7 | 328M | - | 288 + 24N | 384N | 17k | 4.8k | $52.0k |
| AutoSlim [38] | 74.2 | 305M | 63ms | 180 | 300N | 12k | 3.4k | $36.7k |
| MobileNetV3-Large [15] | 75.2 | 219M | 58ms | - | 180N | 7.2k | 1.8k | $22.2k |
| OFA w/o PS | 72.4 | 235M | 59ms | 40 | 1200 | 1.2k | 0.34k | $3.7k |
| OFA w/ PS | 76.0 | 230M | 58ms | 40 | 1200 | 1.2k | 0.34k | $3.7k |
| OFA w/ PS #25 | 76.4 | 230M | 58ms | 40 | 1200 + 25N | 2.2k | 0.62k | $6.7k |
| OFA w/ PS #75 | 76.9 | 230M | 58ms | 40 | 1200 + 75N | 4.2k | 1.2k | $13.0k |
| OFA<sub>Large</sub> w/ PS #75 | 80.0 | 595M | - | 40 | 1200 + 75N | 4.2k | 1.2k | $13.0k |

Table 1: Comparison with SOTA hardware-aware NAS methods on the Pixel1 phone. OFA decouples model training from neural architecture search; both the search cost and the training cost stay constant as the number of deployment scenarios grows. The three "Total cost" columns assume $N = 40$ deployment scenarios. "#25" denotes that the specialized sub-networks are fine-tuned for 25 epochs after grabbing weights from the once-for-all network. "$CO_2$e" denotes $CO_2$ emission, calculated based on Strubell et al. (2019). AWS cost is calculated based on the price of on-demand P3.16xlarge instances.
115
+
116
+ ![](images/e3a6ed95ca6c86149fc22414093a73e6e0490cc21d319ba43d73e5a04f140697.jpg)
117
+ Figure 8: OFA saves orders of magnitude design cost compared to NAS methods.
118
+
119
+ we use Jetson TX2 with Pytorch 1.0+cuDNN, batch size of 16; for embedded FPGA, we use Xilinx ZU9EG and ZU3EG FPGAs with Vitis AI<sup>4</sup>, batch size 1.
120
+
121
+ Comparison with NAS on Mobile Devices. Table 1 reports the comparison between OFA and state-of-the-art hardware-aware NAS methods on the mobile phone (Pixel1). OFA is much more efficient than NAS when handling multiple deployment scenarios since the cost of OFA is constant while others are linear to the number of deployment scenarios $(N)$ . With $N = 40$ , the total $CO_2$ emissions of OFA is $16 \times$ fewer than ProxylessNAS, $19 \times$ fewer than FBNet, and $1,300 \times$ fewer than MnasNet (Figure 8). Without retraining, OFA achieves $76.0\%$ top1 accuracy on ImageNet, which is $0.8\%$ higher than MobileNetV3-Large while maintaining similar mobile latency. We can further improve the top1 accuracy to $76.4\%$ by fine-tuning the specialized sub-network for 25 epochs and to $76.9\%$ by fine-tuning for 75 epochs. Besides, we also observe that OFA with PS can achieve $3.6\%$ better accuracy than without PS.
122
+
123
+ OFA under Different Computational Resource Constraints. Figure 9 summarizes the results of OFA under different MACs and Pixel1 latency constraints. OFA achieves $79.1\%$ ImageNet top1 accuracy with 389M MACs, being $2.8\%$ more accurate than EfficientNet-B0 that has similar MACs. With 595M MACs, OFA reaches a new SOTA $80.0\%$ ImageNet top1 accuracy under the mobile setting (<600M MACs), which is $0.2\%$ higher than EfficientNet-B2 while using $1.68 \times$ fewer MACs. More importantly, OFA runs much faster than EfficientNets on hardware. Specifically, with 143ms Pixel1 latency, OFA achieves $80.1\%$ ImageNet top1 accuracy, being $0.3\%$ more accurate and $2.6 \times$ faster than EfficientNet-B2. We also find that training the searched neural architectures from scratch cannot reach the same level of accuracy as OFA, suggesting that not only neural architectures but also pre-trained weights contribute to the superior performances of OFA.
124
+
125
+ Figure 10 reports detailed comparisons between OFA and MobileNetV3 on six mobile devices. Remarkably, OFA can produce the entire trade-off curves with many points over a wide range of latency constraints by training only once (green curve). It is impossible for previous NAS methods (Tan et al., 2019; Cai et al., 2019) due to the prohibitive training cost.
126
+
127
+ ![](images/526938bdfca4160bd33f401995d6569095ad1010c7fb29ddccc86708891379e5.jpg)
128
+ Figure 9: OFA achieves $80.0\%$ top1 accuracy with 595M MACs and $80.1\%$ top1 accuracy with 143ms Pixel1 latency, setting a new SOTA ImageNet top1 accuracy on the mobile setting.
129
+
130
+ ![](images/be24b45494a85b7fb2d340a9d8ec71b0fe190451f977da335b77a8e7ce3b5917.jpg)
131
+
132
+ ![](images/4f189bfc08741420e3aa00a8bc1d8f29740a462806c4645e8580201c6a3cca7e.jpg)
133
+
134
+ ![](images/52f03ff5f084664cee1a06a9e20688730e12d1f5bb48ad5be0804a0fdd9f3d70.jpg)
135
+
136
+ ![](images/3a9ee90679b053823b0464123e62920ca75497614dadca4e339f5167766f5e23.jpg)
137
+
138
+ ![](images/492d1b7b2a3f88c6d705f7b164910324b0b4644a7f12b57dc554d4a5fbf5bb15.jpg)
139
+ Figure 10: OFA consistently outperforms MobileNetV3 on mobile platforms.
140
+
141
+ ![](images/14a046e729b89ecb3be2bd52eec6c4095cb99b33df447b0e297547c05234ba08.jpg)
142
+
143
+ ![](images/37e76a2e669aae1a415cc5748ad6471462b174f8eb49063ddc0f7ed78541ec92.jpg)
144
+
145
+ OFA for Diverse Hardware Platforms. Besides the mobile platforms, we extensively studied the effectiveness of OFA on six additional hardware platforms (Figure 11) using the ProxylessNAS architecture space (Cai et al., 2019). OFA consistently improves the trade-off between accuracy and latency by a significant margin, especially on GPUs which have more parallelism. With similar latency as MobileNetV2 0.35, "OFA #25" improves the ImageNet top1 accuracy from MobileNetV2's $60.3\%$ to $72.6\%$ ( $+12.3\%$ improvement) on the 1080Ti GPU. Detailed architectures of our specialized models are shown in Figure 14. It reveals the insight that using the same model for different deployment scenarios with only the width multiplier modified has a limited impact on efficiency improvement: the accuracy drops quickly as the latency constraint gets tighter.
146
+
147
+ OFA for Specialized Hardware Accelerators. There has been plenty of work on NAS for general-purpose hardware, but little work has been focused on specialized hardware accelerators. We quantitatively analyzed the performance of OFA on two FPGAs accelerators (ZU3EG and ZU9EG) using Xilinx Vitis AI with 8-bit quantization, and discuss two design principles.
148
+
149
+ **Principle 1:** memory access is expensive, computation is cheap. An efficient CNN should do as much as computation with a small amount of memory footprint. The ratio is defined as the arithmetic intensity (OPs/Byte). The higher OPs/Byte, the less memory bounded, the easier to parallelize. Thanks to OFA's diverse choices of sub-network architectures ( $10^{19}$ ) (Section 3.3), and the OFA
150
+
151
+ ![](images/3093de04708b53383cddc57f6133accac2b9d44a8a5b85054044638a2d874fc4.jpg)
152
+ Figure 11: Specialized OFA models consistently achieve significantly higher ImageNet accuracy with similar latency than non-specialized neural networks on CPU, GPU, mGPU, and FPGA. More remarkably, specializing for a new hardware platform does not add training cost using OFA.
153
+
154
+ ![](images/fa455e6aa9ea5525dd82a1414d222af6263b523423884d726609f4fd62abaf2c.jpg)
155
+ Figure 12: OFA models improve the arithmetic intensity (OPS/Byte) and utilization (GOPS/s) compared with the MobileNetV2 and MnasNet (measured results on Xilinx ZU9EG and ZU3EG FPGA).
156
+
157
+ model twin that can quickly give the accuracy-latency feedback (Section 3.4), the evolutionary search can automatically find a CNN architecture that has higher arithmetic intensity. As shown in Figure 12, OFA's arithmetic intensity is $48\% /43\%$ higher than MobileNetV2 and MnasNet (MobileNetV3 is not supported by Xilinx Vitis AI). Removing the memory bottleneck results in higher utilization and GOPS/s by $70\% -90\%$ , pushing the operation point to the upper-right in the roofline model (Williams et al., 2009), as shown in Figure 13. $(70\% -90\%)$ looks small in the log scale but that is significant).
158
+
159
+ Principle 2: the CNN architecture should be co-designed with the hardware accelerator's cost model. The FPGA accelerator has a specialized depth-wise engine that is pipelined with the point-wise engine. The pipeline throughput is perfectly matched for 3x3 kernels. As a result, OFA's searched model only has 3x3 kernel (Figure 14, a) on FPGA, despite 5x5 and 7x7 kernels are also in the search space. Additionally, large kernels sometimes cause "out of BRAM" error on FPGA, giving high cost. On Intel Xeon CPU, however, more than $50\%$ operations are large kernels. Both FPGA and GPU models are wider than CPU, due to the large parallelism of the computation array.
160
+
161
+ # 5 CONCLUSION
162
+
163
+ We proposed Once-for-All (OFA), a new methodology that decouples model training from architecture search for efficient deep learning deployment under a large number of hardware platforms. Unlike
164
+
165
+ ![](images/2848e6cfa77f0b93a8582c6f4dbcc0a7301a995a92ee67c8c6f8fb5c72e8fee0.jpg)
166
+ (a) on Xilinx ZU9EG FPGA
167
+
168
+ ![](images/a4d7ee1bb88ab8b90b3d77b6e426ed51904e0b8e2874686bc7e80567abe2b823.jpg)
169
+ (b) on Xilinx ZU3EG FPGA
170
+
171
+ ![](images/6f291f250d0bef7bbc23e687a884b647054845b48452d4776da8c1ba5c80875e.jpg)
172
+ Figure 13: Quantitative study of OFA's roofline model on Xilinx ZU9EG and ZU3EG FPGAs (log scale). OFA model increased the arithmetic intensity by $33\% /43\%$ and GOPS/s by $72\% /92\%$ on these two FPGAs compared with MnasNet.
173
+ (a) 4.1ms latency on Xilinx ZU3EG (batch size = 1).
174
+
175
+ ![](images/c2c644d569704bcfde66bd77bb807344f696c03e8283e5575122a7b96ba127be.jpg)
176
+ (b) $10.9\mathrm{ms}$ latency on Intel Xeon CPU (batch size $= 1$ -
177
+
178
+ ![](images/519163121dd01792b42819ca35b829ba7616d18b5473ac83b1794abed66b47dc.jpg)
179
+ (c) $14.9\mathrm{ms}$ latency on NVIDIA 1080Ti (batch size $= 64$
180
+ Figure 14: OFA can design specialized models for different hardware and different latency constraint. "MB4 3x3" means "mobile block with expansion ratio 4, kernel size 3x3". FPGA and GPU models are wider than CPU model due to larger parallelism. Different hardware has different cost model, leading to different optimal CNN architectures. OFA provides a unified and efficient design methodology.
181
+
182
+ previous approaches that design and train a neural network for each deployment scenario, we designed a once-for-all network that supports different architectural configurations, including elastic depth, width, kernel size, and resolution. It reduces the training cost (GPU hours, energy consumption, and $CO_2$ emission) by orders of magnitude compared to conventional methods. To prevent sub-networks of different sizes from interference, we proposed a progressive shrinking algorithm that enables a large number of sub-network to achieve the same level of accuracy compared to training them independently. Experiments on a diverse range of hardware platforms and efficiency constraints demonstrated the effectiveness of our approach. OFA provides an automated ecosystem to efficiently design efficient neural networks with the hardware cost model in the loop.
183
+
184
+ # ACKNOWLEDGMENTS
185
+
186
+ We thank NSF Career Award #1943349, MIT-IBM Watson AI Lab, Google-Daydream Research Award, Samsung, Intel, Xilinx, SONY, AWS Machine Learning Research Award for supporting this
187
+
188
+ research. We thank Samsung, Google and LG for donating mobile phones. We thank Shuang Wu and Lei Deng for drawing the Figure 2.
189
+
190
+ # REFERENCES
191
+
192
+ Anubhav Ashok, Nicholas Rhinehart, Fares Beainy, and Kris M Kitani. N2n learning: Network to network compression via policy gradient reinforcement learning. In ICLR, 2018. 5
193
+ Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. Efficient architecture search by network transformation. In AAAI, 2018a. 3
194
+ Han Cai, Jiacheng Yang, Weinan Zhang, Song Han, and Yong Yu. Path-level network transformation for efficient architecture search. In ICML, 2018b. 3
195
+ Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. In ICLR, 2019. URL https://arxiv.org/pdf/1812.00332.pdf. 3, 6, 8, 9
196
+ Brian Cheung, Alex Terekhov, Yubei Chen, Pulkit Agrawal, and Bruno Olshausen. Superposition of many models into one. In NeurIPS, 2019. 4
197
+ Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In NeurIPS, 2015. 3
198
+ Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. 7
199
+ Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. arXiv preprint arXiv:1904.00420, 2019. 8
200
+ Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In NeurIPS, 2015. 3
201
+ Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In ICLR, 2016. 1, 3
202
+ Cong Hao, Xiaofan Zhang, Yuhong Li, Sitao Huang, Jinjun Xiong, Kyle Rupnow, Wen-mei Hwu, and Deming Chen. Fpga/dnn co-design: An efficient design methodology for 1ot intelligence on the edge. In 2019 56th ACM/IEEE Design Automation Conference (DAC), pp. 1-6. IEEE, 2019. 3
203
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. 4
204
+ Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, and Song Han. Amc: Automl for model compression and acceleration on mobile devices. In ECCV, 2018. 1, 3
205
+ Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. 5
206
+ Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for mobilenetv3. In ICCV 2019, 2019. 7, 8
207
+ Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. 1, 3
208
+ Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In CVPR, 2017. 4
209
+ Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, and Kilian Q Weinberger. Multi-scale dense networks for resource efficient image classification. In ICLR, 2018. 3
210
+
211
+ Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. SqueezeNet: Alexnet-level accuracy with 50x fewer parameters and $0.5\mathrm{mb}$ model size. arXiv preprint arXiv:1602.07360, 2016. 3
212
+ Weiwen Jiang, Lei Yang, Edwin Sha, Qingfeng Zhuge, Shouzhen Gu, Yiyu Shi, and Jingtong Hu. Hardware/software co-exploration of neural architectures. arXiv preprint arXiv:1907.04650, 2019a. 3
213
+ Weiwen Jiang, Xinyi Zhang, Edwin H-M Sha, Lei Yang, Qingfeng Zhuge, Yiyu Shi, and Jingtong Hu. Accuracy vs. efficiency: Achieving both through fpga-implementation aware neural architecture search. In Proceedings of the 56th Annual Design Automation Conference 2019, pp. 1-6, 2019b. 3
214
+ Jason Kuen, Xiangfei Kong, Zhe Lin, Gang Wang, Jianxiong Yin, Simon See, and Yap-Peng Tan. Stochastic downsampling for cost-adjustable inference and improved regularization in convolutional networks. In CVPR, 2018. 3
215
+ Ji Lin, Yongming Rao, Jiwen Lu, and Jie Zhou. Runtime neural pruning. In NeurIPS, 2017. 3
216
+ Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In ECCV, 2018. 2
217
+ Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. In ICLR, 2019. 3, 8
218
+ Lanlan Liu and Jia Deng. Dynamic deep neural networks: Optimizing accuracy-efficiency trade-offs by selective execution. In AAAI, 2018. 3
219
+ Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. In ICCV, 2017. 3
220
+ Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. 7
221
+ Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In ECCV, 2018. 3
222
+ Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for image classifier architecture search. In AAAI, 2019. 3, 6
223
+ Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. *Mobilenetv2: Inverted residuals and linear bottlenecks*. In *CVPR*, 2018. 1, 3, 4, 8
224
+ Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in nlp. In ACL, 2019. 1, 8
225
+ Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. Mnasnet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2820-2828, 2019. 3, 8
226
+ Xin Wang, Fisher Yu, Zi-Yi Dou, Trevor Darrell, and Joseph E Gonzalez. Skipnet: Learning dynamic routing in convolutional networks. In ECCV, 2018. 3
227
+ Samuel Williams, Andrew Waterman, and David Patterson. Roofline: An insightful visual performance model for floating-point programs and multicore architectures. Technical report, Lawrence Berkeley National Lab.(LBNL), Berkeley, CA (United States), 2009. 10
228
+ Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search. In CVPR, 2019. 3, 6, 8
229
+ Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S Davis, Kristen Grauman, and Rogerio Feris. Blockdrop: Dynamic inference paths in residual networks. In CVPR, 2018. 3
230
+
231
+ Jiahui Yu and Thomas Huang. Autoslim: Towards one-shot architecture search for channel numbers. arXiv preprint arXiv:1903.11728, 2019a. 8
232
+ Jiahui Yu and Thomas Huang. Universally slimmable networks and improved training techniques. In ICCV, 2019b. 3, 5
233
+ Jiahui Yu, Linjie Yang, Ning Xu, Jianchao Yang, and Thomas Huang. Slimmable neural networks. In ICLR, 2019. 3, 4
234
+ Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In CVPR, 2018. 1, 3
235
+ Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. Trained ternary quantization. In ICLR, 2017. 3
236
+ Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. In ICLR, 2017. 3
237
+ Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In CVPR, 2018. 3, 8
238
+
239

# A DETAILS OF THE ACCURACY PREDICTOR

We use a three-layer feedforward neural network with 400 hidden units in each layer as the accuracy predictor. Given a model, we encode each layer of the neural network into a one-hot vector based on its kernel size and expand ratio, and we assign zero vectors to layers that are skipped. Besides, we have an additional one-hot vector that represents the input image size. We concatenate these vectors into a large vector that represents the whole neural network architecture and input image size, which is then fed to the three-layer feedforward neural network to get the predicted accuracy. In our experiments, this simple accuracy prediction model provides very accurate predictions: at convergence, the root-mean-square error (RMSE) between the predicted and estimated accuracy on the test set is only $0.21\%$. Figure 15 shows the relationship between the RMSE of the accuracy prediction model and the final results (i.e., the accuracy of the selected sub-networks). We find that a lower RMSE typically leads to better final results.
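
A sketch of such a predictor is below. The exact encoding layout is an assumption ("three-layer" is read here as three hidden layers of 400 units); only the ingredients named above (one-hot kernel/expand codes, zero vectors for skipped layers, and a one-hot image-size vector) are taken from the text.

```python
import torch
import torch.nn as nn

KERNELS, RATIOS = (3, 5, 7), (3, 4, 6)
RESOLUTIONS = tuple(range(128, 225, 4))
MAX_LAYERS = 20  # 5 units x up to 4 layers

def encode(arch, resolution):
    """arch: {layer_index: (kernel, expand_ratio)}; skipped layers are absent."""
    vec = []
    for i in range(MAX_LAYERS):
        one_hot = [0.0] * (len(KERNELS) + len(RATIOS))
        if i in arch:  # skipped layers stay all-zero
            k, r = arch[i]
            one_hot[KERNELS.index(k)] = 1.0
            one_hot[len(KERNELS) + RATIOS.index(r)] = 1.0
        vec += one_hot
    res = [0.0] * len(RESOLUTIONS)
    res[RESOLUTIONS.index(resolution)] = 1.0
    return torch.tensor(vec + res)

predictor = nn.Sequential(
    nn.Linear(MAX_LAYERS * 6 + len(RESOLUTIONS), 400), nn.ReLU(),
    nn.Linear(400, 400), nn.ReLU(),
    nn.Linear(400, 400), nn.ReLU(),
    nn.Linear(400, 1),  # predicted top1 accuracy
)
```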

![](images/5845fb60794b117c8ab4c9fedbf2a44353ea366dede20150538bb2acba1a65c1.jpg)
Figure 15: Performance of selected sub-networks using different accuracy prediction models.

# B IMPLEMENTATION DETAILS OF PROGRESSIVE SHRINKING

After training the full network, we first have one stage of fine-tuning to incorporate elastic kernel size. In this stage (i.e., $K \in \{7,5,3\}$), we sample one sub-network in each update step. The network is fine-tuned for 125 epochs with an initial learning rate of 0.96. All other training settings are the same as for training the full network.

Next, we have two stages of fine-tuning to incorporate elastic depth. We sample two sub-networks and aggregate their gradients in each update step. The first stage (i.e., $D \in \{4,3\}$) takes 25 epochs with an initial learning rate of 0.08, while the second stage (i.e., $D \in \{4,3,2\}$) takes 125 epochs with an initial learning rate of 0.24.

Finally, we have two stages of fine-tuning to incorporate elastic width. We sample four sub-networks and aggregate their gradients in each update step. The first stage (i.e., $W \in \{6, 4\}$) takes 25 epochs with an initial learning rate of 0.08, while the second stage (i.e., $W \in \{6, 4, 3\}$) takes 125 epochs with an initial learning rate of 0.24.
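
Collected as a config, the schedule above reads as follows (values are taken straight from the text; the stage names are illustrative):

```python
PS_FINE_TUNE_SCHEDULE = [
    # stage,            sampling space,  subnets/step, epochs, initial LR
    ("elastic_kernel",  {"K": [7, 5, 3]},           1,    125,  0.96),
    ("elastic_depth_1", {"D": [4, 3]},              2,     25,  0.08),
    ("elastic_depth_2", {"D": [4, 3, 2]},           2,    125,  0.24),
    ("elastic_width_1", {"W": [6, 4]},              4,     25,  0.08),
    ("elastic_width_2", {"W": [6, 4, 3]},           4,    125,  0.24),
]
```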
onceforalltrainonenetworkandspecializeitforefficientdeployment/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8c6caf25de8592437e3b8126b4ceba7e48a3bd79ff193b5018509c15546eaba3
3
+ size 840384
onceforalltrainonenetworkandspecializeitforefficientdeployment/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5345b856310860365daf92dc0340dd65928577222a000ab7c1c2a46f9d786553
3
+ size 402018
oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/f414d865-4e58-4bd4-8107-5e77486b1ba7_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8724bc04c6b8a72da30e777032e97c8454d864f2816e34db9f4bfa49ff07ef6b
3
+ size 74954
oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/f414d865-4e58-4bd4-8107-5e77486b1ba7_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5b7cfdb17e41ffe3d21125b560c372cef9eca45c8aa0330f9bc90804e24c7d6a
3
+ size 91426
oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/f414d865-4e58-4bd4-8107-5e77486b1ba7_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e4b5ebdce9975b4384dd5813eabb71c8773bc7055acb3973a18351ff8c3d1b30
3
+ size 483612
oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/full.md ADDED
@@ -0,0 +1,340 @@
1
+ # ONE-SHOT PRUNING OF RECURRENT NEURAL NETWORKS BY JACOBIAN SPECTRUM EVALUATION
2
+
3
+ Matthew Shunshi Zhang
4
+
5
+ University of Toronto
6
+
7
+ matthew.zhang@mail.utoronto.ca
8
+
9
+ Bradly C. Stadie
10
+
11
+ Vector Institute
12
+
13
+ # ABSTRACT
14
+
15
+ Recent advances in the sparse neural network literature have made it possible to prune many large feed-forward and convolutional networks with only a small quantity of data. Yet, these same techniques often falter when applied to the problem of recovering sparse recurrent networks. These failures are quantitative: when pruned with recent techniques, RNNs typically obtain worse performance than they do under a simple random pruning scheme. The failures are also qualitative: the distribution of active weights in a pruned LSTM or GRU network tends to be concentrated in specific neurons and gates, and not well dispersed across the entire architecture. We seek to rectify both the quantitative and qualitative issues with recurrent network pruning by introducing a new recurrent pruning objective derived from the spectrum of the recurrent Jacobian. Our objective is data efficient (requiring only 64 data points to prune the network), easy to implement, and produces 95% sparse GRUs that significantly improve on existing baselines. We evaluate on sequential MNIST, Billion Words, and Wikitext.
16
+
17
+ # 1 INTRODUCTION
18
+
19
+ Within the neural network community, network pruning has been something of an evergreen problem. There are several motivations for pruning a neural network. Theoretically, overparameterization is a well-known but poorly understood quality of many networks. Pruning algorithms provide a link between overparameterized models and appropriately parameterized models. Thus, these algorithms may provide insights into exactly why overparameterized models have so much success. Indeed, recent work has closely linked the efficient utilization of model capacity with generalization results (Arora et al., 2018). From a more practical perspective, overparameterized networks require more storage capacity and are computationally more expensive than their pruned counterparts. Hence, there is an incentive to use pruned networks rather than fully dense networks during deployment.
20
+
21
+ For years, many of the most successful network pruning techniques were iterative, relying on a cycle of pruning and retraining weights to induce sparsity in the network. As identified in Lee et al. (2018), these methods usually either enforce a sparsity-based penalty on the weights (Han et al., 2015; LeCun et al., 1990), or else prune based on some fitness criterion (Carreira-Perpinan & Idelbayev, 2018; Chauvin, 1989). Recent advances in the pruning literature suggest that such costly cycles of pruning and retraining might not always be necessary. For some problems, there exists a small subnetwork within the original larger network such that training this smaller network produces comparable performance to training the original fully dense network. The Lottery Ticket Hypothesis (Frankle & Carbin, 2019) provides a method for recovering these networks, but only after training is complete. SNIP (Lee et al., 2018) and GraSP (Wang et al., 2020) provide a saliency criterion for identifying this small subnetwork using less than 100 data points, no training, and no iterative pruning.
22
+
23
+ Our present work began by asking the question: "How well do these newly discovered pruning techniques, which optimize a network sensitivity objective, work on recurrent neural networks?" Although Lee et al. (2018) does evaluate the SNIP pruning criterion on both GRU and LSTM networks, we found these results to be somewhat incomplete. They did not provide a comparison to random pruning, and the chosen tasks were not extensive enough to draw definitive conclusions. When compared against random pruning, we found that the SNIP and GraSP pruning objectives
24
+
25
+ performed similarly to or worse than random pruning. This left us wondering where those techniques were falling short, and if a better pruning objective could be developed that takes the temporal structure of recurrent networks into account.
26
+
27
+ In this paper, we propose a new pruning objective for recurrent neural networks. This objective is based on recent advances in mean field theory (Gilboa et al., 2019; Chen et al., 2018a), and can be interpreted as forcing the network to preserve weights that propagate information through its temporal depths. Practically, this constraint is imposed by forcing the singular values of the temporal Jacobian with respect to the network weights to be non-degenerate. We provide a discussion about the similarities and differences between our objective and the SNIP and GraSP pruning objectives. It can be shown that these prior objectives fail to ensure that the temporal Jacobian of the recurrent weights is well conditioned. Our method is evaluated with a GRU network on sequential MNIST, Wikitext, and Billion Words. At $95\%$ sparsity, our network achieves better results than fully dense networks, randomly pruned networks, SNIP (Lee et al., 2018) pruned networks, and GraSP (Wang et al., 2020) pruned networks.
28
+
29
+ # 2 PRUNING RECURRENT NETWORKS BY JACOBIAN SPECTRUM EVALUATION
30
+
31
+ # 2.1 NOTATION
32
+
33
+ We denote matrices and vectors by upper- and lower-case bold letters respectively. Vector-valued functions are bolded, whereas scalar-valued functions are not. Distributions over variables are denoted in calligraphic script: $\mathcal{D},\mathcal{P}$. We denote the standard $\ell_p$ norm of a vector by $\| \cdot \| _p$. Let $[\cdot ]_{ij}$ be the $(i,j)$-th element of a matrix, and $[\cdot ]_i$ the $i$-th element of a vector. $\vec{1}$ and $\vec{0}$ denote vectors of 1s and 0s of appropriate length, and $\odot$ denotes a Hadamard product. $I_A$ represents the standard indicator function. For vectors, superscripts are always used for sequence (time) indices while subscripts are reserved for indexing vector elements.
34
+
35
+ # 2.2 PRELIMINARIES
36
+
37
+ # 2.2.1 RECURRENT MODELS
38
+
39
+ Let $\mathbf{X} = \{\mathbf{x}^{(t)}\}_{t=1}^{S}$, with each $\mathbf{x}^{(t)} \in \mathbb{R}^D$. Similarly, let $\mathbf{Y} = \{\mathbf{y}^{(t)}\}_{t=1}^{S}$, where each $\mathbf{y}^{(t)} \in \mathbb{R}^O$ is the associated output, such that each tuple $(\mathbf{X}, \mathbf{Y}) \stackrel{i.i.d.}{\sim} \mathcal{D}$.
40
+
41
+ Let $\mathbf{M}(\mathbf{x};\boldsymbol {\theta}): \mathbb{R}^D \mapsto \mathbb{R}^O$ be a generic model, parameterized by $\boldsymbol{\theta} \in \mathbb{R}^{N}$, that maps $\mathbf{X}$ onto an output sequence. Define a recurrent model as a mapping performed through iterative computation, such that each $(\hat{\mathbf{y}}^{(t)},\mathbf{h}^{(t)}) = \mathbf{M}(\mathbf{x}^{(t)},\mathbf{h}^{(t - 1)};\boldsymbol{\theta})$ depends explicitly only on the current input and a latent state of the model, $\mathbf{h}$.
42
+
43
+ We define the loss over an entire sequence of outputs as the sum of a non-sequential loss function $\tilde{L}$ over the sequence: $L(\mathbf{M},\mathbf{X},\mathbf{Y}) = \sum_{t = 1}^{S}\tilde{L} (\hat{\mathbf{y}}^{(t)},\mathbf{y}^{(t)})$.
44
+
45
+ We define a sparse model as one where the parameters factorize into $\theta = c\odot w$ , with $c\in \{0,1\} ^N$ a binary mask and $w\in \mathbb{R}^N$ the free values, typically trained by gradient descent. We define a $K$ -sparse condition on a sparse model $\mathbf{M}$ as the restriction $\| c\| _0 = K$ during the entire training trajectory. A model is optimally $K$ -sparse if it minimizes the expected loss, $\mathbb{E}_{\mathcal{D}}[L(\mathbf{M},\mathbf{X},\mathbf{Y})]$ after training while also being subject to a $K$ -sparse condition.
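As a minimal sketch of this parameterization (assuming PyTorch, with illustrative sizes), note that gradients only reach the unmasked entries of $w$:

```python
import torch

# Minimal sketch of the sparse parameterization theta = c ⊙ w defined above:
# `mask` (c) is fixed after pruning, `weight` (w) is trained by gradient descent.
weight = torch.randn(256, 256, requires_grad=True)
mask = (torch.rand(256, 256) < 0.05).float()  # e.g. a mask satisfying a 95%-sparse condition

theta = mask * weight  # effective parameters used in the forward pass
loss = theta.sum()     # stand-in for the model loss
loss.backward()        # gradients flow only to unmasked entries of `weight`
assert torch.equal(weight.grad, mask)
```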
46
+
47
+ # 2.2.2 MEMORY HORIZON
48
+
49
+ We introduce the following terms: $N$ is the size of the network hidden state $\mathbf{h}$; $\mathbf{J}_t \in \mathbb{R}^{N \times N}$ is the temporal Jacobian of the hidden state at time $t + 1$ with respect to the previous hidden state, $\frac{\partial \mathbf{h}^{(t + 1)}}{\partial \mathbf{h}^{(t)}}$; and $\sigma_i^{(t)}$ are the singular values of this matrix.
50
+
51
+ To arrive at a one-shot pruning criterion for recurrent neural networks, we consider the impact of the temporal Jacobian on both forward- and backward-propagation.
52
+
53
+ - (Backpropagation) The formula for backpropagation through time (BPTT), from the loss at time $s$ can be given as:
54
+
55
+ $$
56
+ \nabla_ {\boldsymbol {\theta}} \tilde {L} \left(\hat {\mathbf {y}} _ {s}, \mathbf {y} _ {s}\right) = \underbrace {\left[ \tilde {\mathbf {G}} _ {\mathbf {h} ^ {(s)} ; \boldsymbol {\theta}} ^ {T} + \tilde {\mathbf {G}} _ {\mathbf {h} ^ {(s - 1)} ; \boldsymbol {\theta}} ^ {T} \mathbf {J} _ {s - 1} + \dots + \tilde {\mathbf {G}} _ {\mathbf {h} ^ {(1)} ; \boldsymbol {\theta}} ^ {T} \prod_ {t = 1} ^ {s - 1} \mathbf {J} _ {t} \right]} _ {\tilde {\mathbf {G}} _ {s}} \cdot \nabla_ {\mathbf {h} ^ {(s)}} \tilde {L} \left(\hat {\mathbf {y}} _ {s}, \mathbf {y} _ {s}\right) \tag {1}
57
+ $$
58
+
59
+ where $\tilde{\mathbf{G}}_{\mathbf{h}^{(t)};\theta}$ is the Jacobian of $\mathbf{h}^{(t)}$ considering only the explicit dependence on $\theta$ .
60
+
61
+ - (Forward Propagation)
62
+
63
+ A single time-step of the network under small perturbations yields the following:
64
+
65
+ $$
66
+ \mathbf {M} \left(\mathbf {x} ^ {(t)}; \mathbf {h} ^ {(t)} + \boldsymbol {\epsilon}\right) \approx \mathbf {h} ^ {(t + 1)} + \mathbf {J} _ {t} \boldsymbol {\epsilon} \tag {2}
67
+ $$
68
+
69
+ With additional powers of the Jacobian appearing as we observe the entire sequence.
70
+
71
+ From Equation 1, it can easily be seen that increasing the normed singular values of each $\mathbf{J}_{t}$ will on average exponentially increase the gradient signal from later sequence elements, which expedites convergence by mitigating the vanishing gradient problem. From Equation 2, we additionally note that a well-conditioned Jacobian enables the network to preserve the separation of distinct input vectors, by preventing the additive perturbation from vanishing or exploding. Prior works in mean-field theory (Gilboa et al., 2019; Chen et al., 2018a) provide an extensive analysis of a similar objective on the performance of a wide range of recurrent networks.
72
+
73
+ The Frobenius norm of the temporal Jacobian, defined below, is thus key to both forward propagation and backpropagation. Both processes are significantly expedited when the norm is close to 1.
74
+
75
+ $$
76
+ \chi = \frac{1}{N(S - 1)} \sum_{t = 1}^{S - 1} \mathbb{E}\left(\left\| \mathbf{J}_{t} \vec{\mathbf{1}} \right\|_{2}^{2}\right) = \frac{1}{N(S - 1)} \sum_{t = 1}^{S - 1} \mathbb{E}\left(\sum_{i = 1}^{N} \left| \sigma_{i}^{(t)} \right|^{2}\right) \tag{3}
77
+ $$
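One way to estimate a single summand of Equation 3 without ever materializing the full $N \times N$ Jacobian is a Jacobian-vector product. The sketch below is illustrative rather than the authors' implementation, and assumes a generic recurrent cell `cell(x, h) -> h_next`:

```python
import torch
from torch.autograd.functional import jvp

# Illustrative sketch: estimate one summand of Equation 3 via a
# Jacobian-vector product, so the N x N temporal Jacobian is never built.
def chi_term(cell, x_t, h_t):
    ones = torch.ones_like(h_t)
    # jv = J_t @ 1, where J_t = d h^{(t+1)} / d h^{(t)}
    _, jv = jvp(lambda h: cell(x_t, h), h_t, ones)
    # ||J_t 1||_2^2 / N; averaging over t and over data yields chi.
    return jv.pow(2).sum() / h_t.numel()
```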
78
+
79
+ # 2.3 PRUNING CRITERIA
80
+
81
+ Under typical recurrent model initializations, where $\pmb{\theta} \sim \mathcal{N}(\mu_{\theta}, s_{\theta}^{2}\mathbf{I})$ or a similar distribution with $\mu_{\theta} \approx 0$ and $s_{\theta}^{2} \ll 1$, Gilboa et al. (2019) have empirically observed that $\chi < 1$ and that the singular values concentrate towards 0 (see Figure 2 for further evidence). Therefore, we hypothesize that the fastest-converging and best-performing sparse models are those which simply maximize $\chi$.
82
+
83
+ We would like to determine the effect of removing one parameter on the Jacobian during the training trajectory. However, as we restrict ourselves only to information available at initialization, we approximate the effect of each parameter on the Jacobian by a first-order Taylor expansion. This is analogous to the derivations given in Lee et al. (2018); Wang et al. (2020):
84
+
85
+ $$
86
+ d _ {n} \propto | [ \Delta \chi ] _ {n} | = \frac {1}{S - 1} \sum_ {t = 1} ^ {S - 1} \left| \frac {\partial}{\partial \theta_ {n}} \| \mathbf {J} _ {\mathbf {t}} \mathbf {1} \| _ {2} ^ {2} \right| \tag {4}
87
+ $$
88
+
89
+ We call $d_{n}$ the sensitivity score of parameter $\theta_{n}$ .
90
+
91
+ This criterion will not be well-normed across different types of parameters. This is due to numerous factors, including differing activation functions used for each gate, and differing distributions between the input and recurrent state. Consequently, the variance of our objective is not uniform between groups of parameters (see Section 3.3 for empirical confirmation). We compensate for this by dividing our criterion by the expected magnitude of the gradient for each parameter. The normalized sensitivity score becomes:
92
+
93
+ $$
94
+ d_{n} = \left[ \Delta \tilde{\chi} \right]_{n} \approx \frac{\left[ \Delta \chi \right]_{n}}{\left| \gamma_{n} \right|}, \quad \gamma_{n} = \mathbb{E}_{\tilde{\mathcal{D}}} \left[ \sum_{t = 1}^{S} \sum_{i = 1}^{O} \frac{\partial \tilde{h}_{i}^{(t)}}{\partial \theta_{n}} \right] \tag{5}
95
+ $$
96
+
97
+ where $\tilde{\mathcal{D}}$ is either the data distribution or an approximate distribution (since we are only trying to estimate the approximate variance of the gradient distribution), and the sequence $\{\tilde{\mathbf{h}}^{(t)}\}$ is computed
98
+
99
+ on inputs from that distribution. This normalization scheme is similar in motivation to the normalization proposed in Pascanu et al. (2013), and allows us to consider all recurrent models with only one additional computation.
100
+
101
+ For our pruning objective, we simply take the $K$ weights with largest sensitivity scores, as those represent the parameters which most affect the Jacobian objective near the initialization. Formally, we find the $K$ -th largest sensitivity, $\tilde{d}_K$ , and set $c_{n} = I_{A}(d_{n}\geq \tilde{d}_{K})$ . Empirically, we find that the sensitivity score remains an effective metric even if the weights are not restricted to a neighborhood where the Taylor expansion is valid (see Figure 2 for details).
102
+
103
+ This objective is simple to compute, requiring only two backward passes using auto-differentiation. Furthermore, as we only depend on the Jacobian-vector product, it has a memory cost linear in the parameters.
104
+
105
+ Algorithm 1 Pruning Recurrent Networks
106
+ Require: Parameters $\theta$ , Dataset $\mathcal{D}$ , Approximate Dataset $\tilde{\mathcal{D}}$ , Sparsity Level $K$ , Sequence Length $S$ , Number to Sample $P$ , Sequence Horizon $U$
107
+ 1: for all $p = 1 \dots P$ do
108
+ 2: Sample sequence $(\tilde{\mathbf{X}}, \tilde{\mathbf{Y}}) \sim \tilde{\mathcal{D}}$ , $(\mathbf{X}, \mathbf{Y}) \sim \mathcal{D}$
109
+ 3: for all $t = 1 \dots S$ do
110
+ 4: Compute $\{\tilde{\mathbf{h}}^{(t)}\}$ with $(\tilde{\mathbf{X}}, \tilde{\mathbf{Y}})$ , $\{\mathbf{h}^{(t)}\}$ with $(\mathbf{X}, \mathbf{Y})$
111
+ 5: end for
112
+ 6: end for
113
+ 7: Compute $\gamma$ using $\{\tilde{\mathbf{h}}^{(t)}\}$ and Equation 5
114
+ 8: for all $u = 1 \dots U$ do
115
+ 9: Compute $\chi^{(u)} \gets \| \mathbf{J}_{S-u} \mathbf{1} \|_2^2 = \mathbb{E} \left[ \sum_{i,j} \left| \frac{\partial h_i^{(S-u)}}{\partial h_j^{(S-u-1)}} \right|^2 \right]$
116
+ 10: Compute $\Delta \chi^{(u)} \gets |\nabla_\theta \chi^{(u)}|$
117
+ 11: end for
118
+ 12: Compute $\mathbf{d} \gets \frac{\sum_{t} [\Delta \chi^{(t)}]}{|\gamma|}$
119
+ 13: $\tilde{d} \gets \text{SortDescending}(\mathbf{d})$
120
+ 14: $c_n \gets \mathbb{1}[d_n \geq \tilde{d}_K]$ , $\forall n$
121
+ 15: return c
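A rough PyTorch rendering of Algorithm 1 is given below. It is a sketch under stated assumptions, not the reference implementation: `hidden_sum` and `jacobian_norm_sq` are assumed helpers that evaluate the quantities in Equations 5 and 4 on the approximate and real data respectively, and the flat parameter tensor `theta` is an assumption about how the cell is parameterized.

```python
import torch

# Hedged sketch of Algorithm 1 for a cell whose parameters are gathered in a
# single flat tensor `theta` (requires_grad=True). `hidden_sum()` returns the
# sum over time steps and units of the hidden states on the approximate data
# (for gamma, Eq. 5); `jacobian_norm_sq(u)` returns ||J_{S-u} 1||^2 on the
# real data. Both are assumed helpers that rebuild their graphs per call.
def prune_mask(theta, hidden_sum, jacobian_norm_sq, K, U=4):
    # Normalizer gamma: expected hidden-state gradient magnitude (Eq. 5).
    gamma = torch.autograd.grad(hidden_sum(), theta)[0].abs()

    # Unnormalized sensitivity (Eq. 4), summed over the last U transitions.
    sens = torch.zeros_like(theta)
    for u in range(1, U + 1):
        sens += torch.autograd.grad(jacobian_norm_sq(u), theta)[0].abs()

    d = sens / (gamma + 1e-12)           # normalized sensitivity scores
    kth = torch.topk(d, K).values.min()  # K-th largest score
    return (d >= kth).float()            # binary mask c
```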
122
+
123
+ # 2.4 COMPARISON TO EXTANT METHODS
124
+
125
+ There are two recently proposed criteria for pruning at initialization: GraSP (Wang et al., 2020), and SNIP (Lee et al., 2018). They are given by:
126
+
127
+ $$
128
+ \operatorname{GraSP}(\boldsymbol{\theta}) = \boldsymbol{\theta}^{T} \mathbf{H} \mathbf{g} \tag{6}
129
+ $$
130
+
131
+ $$
132
+ \operatorname{SNIP}(\boldsymbol{\theta}) = \left| \boldsymbol{\theta}^{T} \mathbf{g} \right| \tag{7}
133
+ $$
134
+
135
+ where $[\mathbf{H}]_{ij} = \mathbb{E}\left[\frac{\partial^2\mathcal{L}}{\partial\theta_i\partial\theta_j}\right]$ and $\mathbf{g} = \mathbb{E}[\nabla_{\theta}\mathcal{L}]$ are the expected Hessian and gradient, respectively.
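For concreteness, the per-parameter versions of these scores (the elementwise form in which such criteria are typically applied) can be estimated with two backward passes, using the standard Hessian-vector-product trick. The sketch below is an illustration under that reading, not the reference implementations; it assumes `loss` is a scalar minibatch loss computed from the flat parameter tensor `theta`.

```python
import torch

# Illustrative sketch: per-parameter SNIP and GraSP scores for a flat
# parameter tensor `theta`, given a scalar minibatch `loss`.
def snip_and_grasp_scores(loss, theta):
    g = torch.autograd.grad(loss, theta, create_graph=True)[0]  # minibatch estimate of g
    # Hessian-vector product H g via a second backward pass on g^T stop_grad(g):
    hg = torch.autograd.grad((g * g.detach()).sum(), theta)[0]
    snip = (theta * g).abs().detach()  # |theta_n * g_n|   (Eq. 7, per parameter)
    grasp = (theta * hg).detach()      # theta_n * [H g]_n (Eq. 6, per parameter)
    return snip, grasp
```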
136
+
137
+ Both methods rely on the gradient of the loss with respect to the weights, with SNIP being more dependent on this gradient than GraSP. Thus, the main term of interest is $\mathbf{g}$ , which can be decomposed as:
138
+
139
+ $$
140
+ \mathbf{g}_{t} = \tilde{\mathbf{G}}_{t} \nabla_{\mathbf{h}^{(t)}} \tilde{L}_{t} \tag{8}
141
+ $$
142
+
143
+ With $\tilde{\mathbf{G}}_t$ , the Jacobian of $\mathbf{h}^{(t)}$ with respect to $\theta$ , as defined in Equation 1.
144
+
145
+ A consequence of the smaller singular values of $\mathbf{J}$ is that the successive terms of $\tilde{\mathbf{G}}_t$ tend to vanish over time. Thus, loss-based gradient objectives tend to be biased toward the explicit dependence of $\mathbf{h}^{(t)}$ on $\theta$, thus neglecting the long-term dependence between $\mathbf{h}^{(t)}$ and $\mathbf{h}^{(t-1)}$.
146
+
147
+ In certain cases (e.g., when the hidden state is small relative to the input), SNIP and GraSP prune many recurrent connections while leaving the input connections largely untouched (see Section 3). In contrast, our algorithm considers the $\mathbf{J}$ matrix explicitly, which mitigates the problem of pruning too many recurrent connections.
148
+
150
+
151
+ # 3 EVALUATION
152
+
153
+ For the following experiments, we compute the $\ell_2$ norm of $\mathbf{J}\vec{\mathbf{1}}$ using a single minibatch of 64 data samples, and using only the last 4 steps of the sequence.
154
+
155
+ # 3.1 SEQUENTIAL MNIST BENCHMARK
156
+
157
+ We first test our method on the sequential MNIST benchmark (Lee et al., 2018), a relatively small dataset which contains long-term dependencies. We begin by verifying that our algorithm is robust across several common recurrent architectures. The results in Table 1 confirm that our method is not dependent on any specific recurrent architecture choice.
+
+ <table><tr><td>Architecture</td><td># of Parameters</td><td>Random</td><td>Ours</td><td>Dense</td><td>Δ</td></tr><tr><td>Basic RNN Cell</td><td>171k → 8.5k</td><td>9.51±3.98</td><td>7.57±0.20</td><td>7.08±2.08</td><td>+4.39</td></tr><tr><td>Standard LSTM</td><td>684k → 34.2k</td><td>2.17±0.18</td><td>1.66±0.16</td><td>0.80±0.18</td><td>+0.86</td></tr><tr><td>Peephole LSTM</td><td>1.32M → 66.2k</td><td>1.80±0.18</td><td>1.24±0.08</td><td>0.74±0.10</td><td>+0.50</td></tr><tr><td>GRU</td><td>513k → 25.7k</td><td>1.50±0.08</td><td>1.46±0.05</td><td>0.77±0.14</td><td>+0.69</td></tr></table>
+
+ Table 1: Validation Error % of Various 400 Unit RNN Architectures after 50 Epochs of Training on Seq. MNIST; our method works well across all common recurrent architectures. A sparsity of $95\%$ was used in all experiments.
158
+
159
+ Our principal results for the Sequential MNIST benchmark are presented in Table 2. Again, we see that our network's performance improves with network size, with the largest gap between our method and the others appearing when the network grows to 1600 units. We observe that SNIP and GraSP are surprisingly effective at small scales with good initialization, but fail when scaled to larger network sizes. Of the baselines, only random pruning is competitive when scaled, a fact we found quite interesting. For reference, we also provide results for standard L2 pruning (Reed, 1993) (for which the schedule can be found in the appendix) and random pruning. The reader should note that L2 pruning requires an order of magnitude more resources than the other methods due to its prune-retrain cycle; it is considered here only as a lower bound for network compression. Furthermore, while GraSP calls for computing the Hessian gradient across the entire dataset, this is computationally infeasible in our case, so for fairness we instead compute it with a single minibatch.
160
+
162
+
163
+ <table><tr><td>Pruning Scheme</td><td>100 Units</td><td>400 Units</td><td>1600 Units</td></tr><tr><td>Unnorm. SNiP</td><td>88.9±0.1</td><td>88.8±0.1</td><td>89.0±0.1</td></tr><tr><td>Norm. SNiP</td><td>4.09±1.06</td><td>1.52±0.11</td><td>1.10±0.11</td></tr><tr><td>Unnorm. GraSP</td><td>88.6±0.1</td><td>88.7±0.1</td><td>88.6±0.1</td></tr><tr><td>Norm. GraSP</td><td>4.28±0.57</td><td>1.62±0.24</td><td>1.22±0.14</td></tr><tr><td>Random</td><td>2.78±0.25</td><td>1.50±0.08</td><td>1.15±0.12</td></tr><tr><td>Ours</td><td>3.09±0.31</td><td>1.46±0.05</td><td>1.01±0.05</td></tr><tr><td>L2</td><td>1.03±0.05</td><td>0.71±0.03</td><td>0.57±0.02</td></tr></table>
164
+
165
+ Table 2: Benchmarking of Various Pruning Algorithms on $95\%$ Sparse GRUs on seq. MNIST. SNIP, GraSP and Random pruning are competitive for smaller models, but the results tend to diminish as the network size increases. Our method obtains strong results even when the network size is large. Further experimental details can be found in the appendix.
166
+
167
+ In the preceding section, we postulated that normalization of the objective was necessary for strong performance (see Equation 5). This intuition is confirmed in Table 2, where we present both the normalized results (with Glorot (Glorot & Bengio, 2010) and $\gamma$ normalization) and the unnormalized results (without both). Indeed, we see that this normalization is crucial for recurrent architectures: in the unnormalized case, all of the retained network weights are concentrated in a single gate, which proved prohibitive to training.
168
+
169
+ Finally, in Table 3, we examine the performance of our algorithm at various sparsity levels. Our algorithm continues to outperform random pruning, even at high sparsity levels.
170
+
171
+ <table><tr><td>Sparsity Level (%)</td><td># of Parameters</td><td>Random</td><td>Ours</td><td>Dense</td><td>Δ</td></tr><tr><td>90</td><td>68.4k</td><td>1.12±0.16</td><td>1.05±0.08</td><td>0.63±0.02</td><td>+0.42</td></tr><tr><td>95</td><td>34.2k</td><td>1.50±0.08</td><td>1.46±0.05</td><td>0.77±0.10</td><td>+0.69</td></tr><tr><td>98</td><td>13.7k</td><td>1.82±0.22</td><td>1.77±0.07</td><td>0.67±0.13</td><td>+1.10</td></tr></table>
+
+ Table 3: Sparsity Level vs Validation Error % on 400 Unit GRUs, for seq. MNIST. Our method consistently beats random pruning.
172
+
173
+ # 3.2 LINGUISTIC SEQUENCE PREDICTION
174
+
175
+ We assess our models on three sequence prediction benchmarks: (1) WikiText-2 (wiki2); (2) WikiText-103 (wiki103), an expanded version of (1) with 10 times more tokens; and (3) a truncated version of the One Billion Words (1b) benchmark (Chelba et al., 2013), where only the top 100,000 vocabulary tokens are used. The full experiment parameters are given in the appendix. We report the training and validation perplexities on a random $1\%$ sample of the training set in Table 4.
176
+
178
+
179
+ <table><tr><td>Dataset</td><td>Random</td><td>Ours</td><td>Dense</td><td>Δ</td></tr><tr><td>wiki2</td><td>22.66</td><td>20.54</td><td>10.479</td><td>+10.61</td></tr><tr><td>wiki103</td><td>49.65</td><td>46.65</td><td>35.87</td><td>+10.78</td></tr><tr><td>Trunc. 1b</td><td>59.17</td><td>53.26</td><td>38.98</td><td>+14.28</td></tr><tr><td># of Parameters</td><td>960k</td><td>960k</td><td>19.2M</td><td>-</td></tr></table>
180
+
181
+ Table 4: Training Perplexities of Sparse Models on Large Language Benchmarks. Our method successfully reduces the perplexity score across all benchmarks, often significantly; however, there is still a large gap to the dense performance. Parameters are reported only for the recurrent layer, as other layers were not pruned during training.
182
+
183
+ From the results, it is clear that our algorithm succeeds in decreasing perplexity across all language tasks. Despite their varying difficulties, our algorithm speeds up initial convergence on all tasks and maintains an advantage throughout training.
184
+
185
+ Finally, we perform an ablation experiment on the Penn Treebank Dataset (PTB) with an 800 unit GRU at different sparsity levels. The results are reported in Table 5.
186
+
187
+ <table><tr><td>Sparsity</td><td>0%</td><td>20%</td><td>40%</td><td>60%</td><td>70%</td><td>80%</td><td>90%</td><td>95%</td><td>98%</td></tr><tr><td>Perplexity</td><td>156.16</td><td>160.32</td><td>165.13</td><td>173.51</td><td>178.55</td><td>184.85</td><td>194.79</td><td>208.14</td><td>228.22</td></tr><tr><td>Parameters</td><td>2.88M</td><td>2.30M</td><td>1.72M</td><td>1.15M</td><td>864K</td><td>576K</td><td>288K</td><td>144K</td><td>57.6K</td></tr></table>
188
+
189
+ Table 5: Validation Perplexities of Pruned 800-unit GRU Models on Penn Treebank. For a simple comparison we do not finetune these models, or apply any regularization tricks besides early stopping.
190
+
191
+ The loss from sparsity increases dramatically as the percentage of parameters remaining approaches zero. This trend is similar to that reported in Gale et al. (2019) and other prior works. For reference, a dense 200 unit GRU (360k parameters) achieves 196.31 perplexity while a 100 unit GRU (150k parameters) achieves 202.97 perplexity.
192
+
193
+ # 3.3 QUALITATIVE ANALYSIS
194
+
195
+ The success of our algorithm can be partially attributed to effective distributions across hidden units. Whereas many of the other algorithms are overly concentrated in certain gates and biased towards the input weights, our algorithm effectively distributes sparse connections across the entire weight matrix. We discuss the distribution of remaining connections in a 400 unit GRU in Figure 1. We also give a set of sample connections under each algorithm in Figure 3.
196
+
197
+ Finally, we perform an empirical study of the evolution of the Jacobian spectrum to verify our hypothesis on recurrence preservation. We show a 400-unit GRU trained on sequential MNIST, with a dense network, our pruning scheme, and random pruning respectively. It can be observed from Figure 2 that after 50000 training steps our Jacobian has both a higher mean and far fewer near-zero singular values, which helps to explain our performance and justifies the intuition behind our algorithm. The spectra at initialization also further confirm that the initial singular values of $\mathbf{J}$ are small.
198
+
199
+ ![](images/0c62c034b3164958e020805480a71b380d8d444e72ec585a1c7079f7bf0029e1.jpg)
200
+ (a) SNiP. I/R Ratio: 0.205
201
+
202
+ ![](images/b707bb9f7509b6adb7a0d048fe098e02065615cb80457208f6aac1c8facd5e54.jpg)
203
+ (b) GraSP. I/R Ratio: 0.124
204
+
205
+ ![](images/a0041fd9ba12ea2f626a2389f5a13a71f04a92e28729d2db1a1f35243a242b98.jpg)
206
+ (c) Ours. I/R Ratio: 0.094
207
+
208
+ ![](images/fc8c5cbd8420624cd34fb27902bea047a0b1438a95142768e52f859576b3a9e9.jpg)
209
+ Figure 1: Plot of Remaining Connections by Gate and Type. SNiP and GraSP consistently prune recurrent connections at a much higher ratio than input connections. The ratio of remaining input to recurrent (I/R) connections is given for each method; the dense ratio is 0.07 for comparison. SNiP and GraSP also exhibit severe imbalance between gates, while our imbalance is far milder.
210
+ (a) Initialization, Pre-Pruning
211
+
212
+ ![](images/0dbbee3a51f4c26cd24e07d6b49a4a0d78915fe2e506f22b72ca1cdad20feec4.jpg)
213
+ (b) SNiP
214
+ Figure 2: Singular Value Magnitude Histograms after 50 epochs of Training, for a 400 Unit GRU on seq. MNIST. Compared to SNiP, our method prevents spectral concentration at 0, with a mean singular value magnitude of 0.31 versus SNiP's 0.18. This helps to explain our relative performance gain.
215
+
216
+ ![](images/76b7bf97a514e750cd885a48e08c0ae7d3126ae12af3693af6a9aa22ef69fbaf.jpg)
217
+ (c) Ours
218
+
219
+ # 4 OTHER RELATED WORK
220
+
221
+ Methods for Pruning Recurrent Networks: Our method is the latest in a series of attempts to generate sparse RNNs. Perhaps the most well-known algorithm for sparse network pruning is Narang et al. (2017a). It is a modification to magnitude-based pruning wherein the pruning threshold evolves according to several hyperparameters that have to be tuned by the user. Kliegl et al. (2017) uses iterative trace norm regularization to prune RNNs used for speech recognition. This effectively reduces the sum of the singular values of the weight matrices; however, we found in our experiments that these values were often degenerate near 0. Furthermore, this technique is iterative. Narang et al. (2017b) uses iterative group lasso regularization to induce block sparsity in recurrent neural networks. Wen et al. (2017) alters the structure of LSTMs to decrease their memory requirements. Their intrinsic sparse structures make structural assumptions about the sparsity distribution across the network. Dai et al. (2018) uses magnitude-based pruning coupled with a special RNN structure
222
+
223
+ ![](images/1ea72a697bd64f8905ece70636a8c7861af0dd801160865112f901644d0b3d48.jpg)
224
+ (a) SNiP
225
+ Figure 3: Map of remaining connections, with the x-axis indicating the output size (flattened across gates) and the y-axis indicating the input size. Our method is significantly more spread out across neurons and gates than the others.
226
+
227
+ ![](images/c0fcdff0503e00bc0b9a0864e6346c50ef2fe4a78a500de1e4e616687bec3fc2.jpg)
228
+ (b) GraSP
229
+
230
+ ![](images/fa8c754b77118f920972cf9799caa757b8fc45c51055ff8beb709cdbc572124a.jpg)
231
+ (c) Ours
232
+
233
+ to make RNNs more efficient. The pruning algorithm itself is magnitude based. See et al. (2016) uses iterative pruning and retraining to prune a recurrent model for neural translation. The underlying technique is simple iterative pruning, and the final pruning percentage is only $80\%$ . While fine for their application, we are interested in novel pruning techniques and higher levels of sparsity.
234
+
235
+ In summary, all of the methods discussed above utilize some variant of L1 or L2 pruning to actually sparsify the network. The novel advances all relate to pruning schedules, modifications to recurrent architectures, or small transformations of the L1 or L2 objective.
236
+
237
+ Other Pruning Techniques: Many extant pruning techniques are applicable to recurrent network architectures, even if these methods were not designed from the ground up to work in the recurrent case. Lee et al. (2018) and Wang et al. (2020) both provide a pruning objective that can be used to prune networks before training begins. They are considered extensively in this work. In Frankle & Carbin (2019), it is shown that at initialization networks contain a small sparse set of connections that can achieve similar results to fully dense networks. However, no known method yet exists to recover these sparse networks to the full extent demonstrated in that work. Han et al. (2015) showed impressive results with magnitude based pruning. Follow up work made further use of magnitude-based pruning techniques (Carreira-Perpinan & Idelbayev, 2018; Guo et al., 2016); however, these techniques are primarily iterative.
238
+
239
+ Mean Replacement Pruning (Evci et al., 2018) uses the absolute value of the Taylor expansion of the loss as a criterion for which units in a network should be pruned. This method cannot be used with BatchNorm and achieves results comparable to magnitude-based pruning. Bayesian methods have recently seen some success in pruning neural networks; Ullrich et al. (2017), which is itself an extension of Nowlan & Hinton (1992), is the standard citation here. In essence, this method works by re-training a network while also fitting the weights to a GMM prior via a KL penalty. Molchanov et al. (2017) is another Bayesian pruning technique that learns a dropout rate via variational inference that can subsequently be used to prune the network. Finally, there exist several classical pruning techniques. Ishikawa (1996) and Chauvin (1989) enforced sparsity penalties during the training process. LeCun et al. (1990) and Hassibi et al. (1993) perform Hessian-based pruning, using the Hessian to obtain a sensitivity metric for the network's weights.
240
+
241
+ While many of the above methods are effective in general, they do not explicitly consider the specifics of RNNs and sequential prediction.
242
+
243
+ Other Related Work Several interesting papers have recently taken a critical look at the problem of network pruning (Liu et al., 2018; Crowley et al., 2018). The problem of network compression is closely related to network pruning. It would be impossible to cite all of the relevant papers here, and no good literature survey exists. Some worthwhile references are Gupta et al. (2015); Gong et al. (2014); Courbariaux et al. (2016); Chen et al. (2018b); Howard et al. (2017). Both problems often share a common goal of reducing the size of a network. Some notable papers explicitly consider the problem of recurrent network compression (Ye et al., 2018; Lobacheva et al., 2017; Wang et al., 2018).
244
+
245
+ In the context of the above work, our method is not iterative and can be completed fully before training even begins. The tradeoffs in accuracy can be remedied by scaling up the network, since there is no longer a need to store fully dense weights during training. Furthermore, our objective is specifically adapted to the sequential prediction context in which RNNs are deployed. Ours is the first pruning algorithm to consider the temporal Jacobian spectrum as a key to generating faster-converging and better-performing sparse RNNs. Our method not only performs better in practice compared to other zero-shot methods, but also yields key insight into the factors behind RNN performance. This may aid the development of new architectures and training schemes for sequential prediction.
246
+
247
+ # 5 CLOSING REMARKS
248
+
249
+ In this work, we presented an effective and cheap single-shot pruning algorithm adapted toward recurrent models. Throughout the work, we continually found the importance of the Jacobian spectrum surprising and interesting. Future work could further examine the relationship between network width, the Jacobian spectrum, and generalization.
250
+
251
+ # REFERENCES
252
+
253
+ Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. ICML, 2018.
254
+ Miguel A Carreira-Perpinan and Yerlan Idelbayev. "learning-compression" algorithms for neural net pruning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8532-8541, 2018.
255
+ Yves Chauvin. A back-propagation algorithm with optimal use of hidden units. In Advances in neural information processing systems, pp. 519-526, 1989.
256
+ Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillip Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005, 2013.
257
+ Minmin Chen, Jeffrey Pennington, and Samuel S Schoenholz. Dynamical isometry and a mean field theory of rnns: Gating enables signal propagation in recurrent neural networks. arXiv preprint arXiv:1806.05394, 2018a.
258
+ Patrick Chen, Si Si, Yang Li, Ciprian Chelba, and Cho-Jui Hsieh. Groupreduce: Block-wise low-rank approximation for neural language model shrinking. In Advances in Neural Information Processing Systems, pp. 10988-10998, 2018b.
259
+ Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to+ 1 or-1. arXiv preprint arXiv:1602.02830, 2016.
260
+ Elliot J Crowley, Jack Turner, Amos Storkey, and Michael O'Boyle. Pruning neural networks: is it time to nip it in the bud? arXiv preprint arXiv:1810.04622, 2018.
261
+ Xiaoliang Dai, Hongxu Yin, and Niraj K Jha. Grow and prune compact, fast, and accurate lstms. arXiv preprint arXiv:1805.11797, 2018.
262
+ Utku Evci, Nicolas Le Roux, Pablo Castro, and Leon Bottou. Mean replacement pruning. 2018.
263
+ Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. ICLR, 2019.
264
+ Trevor Gale, Erich Elsen, and Sara Hooker. The state of sparsity in deep neural networks. arXiv preprint arXiv:1902.09574, 2019.
265
+ Dar Gilboa, Bo Chang, Minmin Chen, Greg Yang, Samuel S Schoenholz, Ed H Chi, and Jeffrey Pennington. Dynamical isometry and a mean field theory of lstms and grus. arXiv preprint arXiv:1901.08987, 2019.
266
+ Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249-256, 2010.
267
+ Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
268
+ Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient dnns. In Advances In Neural Information Processing Systems, pp. 1379-1387, 2016.
269
+ Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In International Conference on Machine Learning, pp. 1737-1746, 2015.
270
+ Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pp. 1135-1143, 2015.
271
+
272
+ Babak Hassibi, David G Stork, and Gregory J Wolff. Optimal brain surgeon and general network pruning. In IEEE international conference on neural networks, pp. 293-299. IEEE, 1993.
273
+ Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
274
+ Masumi Ishikawa. Structural learning with forgetting. Neural networks, 9(3):509-521, 1996.
275
+ Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
276
+ Markus Kliegl, Siddharth Goyal, Kexin Zhao, Kavya Srinet, and Mohammad Shoeybi. Trace norm regularization and faster inference for embedded speech recognition rnns. arXiv preprint arXiv:1710.09026, 2017.
277
+ Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in neural information processing systems, pp. 598-605, 1990.
278
+ Namhoon Lee, Thalaiyasingam Ajanthan, and Philip HS Torr. Snip: Single-shot network pruning based on connection sensitivity. arXiv preprint arXiv:1810.02340, 2018.
279
+ Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. arXiv preprint arXiv:1810.05270, 2018.
280
+ Ekaterina Lobacheva, Nadezhda Chirkova, and Dmitry Vetrov. Bayesian sparsification of recurrent neural networks. arXiv preprint arXiv:1708.00077, 2017.
281
+ Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2498-2507. JMLR.org, 2017.
282
+ Sharan Narang, Erich Elsen, Gregory Diamos, and Shubho Sengupta. Exploring sparsity in recurrent neural networks. arXiv preprint arXiv:1704.05119, 2017a.
283
+ Sharan Narang, Eric Undersander, and Gregory Diamos. Block-sparse recurrent neural networks. arXiv preprint arXiv:1711.02782, 2017b.
284
+ Steven J Nowlan and Geoffrey E Hinton. Simplifying neural networks by soft weight-sharing. Neural computation, 4(4):473-493, 1992.
285
+ Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International conference on machine learning, pp. 1310-1318, 2013.
286
+ Russell Reed. Pruning algorithms-a survey. IEEE transactions on Neural Networks, 4(5):740-747, 1993.
287
+ Abigail See, Minh-Thang Luong, and Christopher D Manning. Compression of neural machine translation models via pruning. arXiv preprint arXiv:1606.09274, 2016.
288
+ Karen Ullrich, Edward Meeds, and Max Welling. Soft weight-sharing for neural network compression. arXiv preprint arXiv:1702.04008, 2017.
289
+ Chaoqi Wang, Guodong Zhang, and Roger Grosse. Picking winning tickets before training by preserving gradient flow. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SkgsACVKPH.
290
+ Zhisheng Wang, Jun Lin, and Zhongfeng Wang. Hardware-oriented compression of long short-term memory for efficient inference. IEEE Signal Processing Letters, 25(7):984-988, 2018.
291
+ Wei Wen, Yuxiong He, Samyam Rajbhandari, Minjia Zhang, Wenhan Wang, Fang Liu, Bin Hu, Yiran Chen, and Hai Li. Learning intrinsic sparse structures within long short-term memory. arXiv preprint arXiv:1709.05027, 2017.
292
+ Jinmian Ye, Linnan Wang, Guangxi Li, Di Chen, Shandian Zhe, Xinqi Chu, and Zenglin Xu. Learning compact recurrent neural networks with block-term tensor decomposition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9378-9387, 2018.
293
+
294
+ # 6 APPENDIX A - EXPERIMENT HYPERPARAMETERS
295
+
296
+ Unless otherwise specified, our model consists of a single-layer RNN, followed by an appropriately sized softmax layer with sigmoidal activation. The softmax layer is initialized with standard Xavier initialization. We use a minibatch size of 64 samples during training, and optimize using the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 1e-3. We use an initial hidden state of zeros for all experiments.
297
+
298
+ For all networks, we only prune the recurrent layer while leaving prior and subsequent layers untouched, since we are primarily interested in performance of recurrent layers. We trained all networks with a single Nvidia P100 GPU.
299
+
300
+ # 6.1 SEQUENTIAL MNIST
301
+
302
+ For seq. MNIST, we follow the same process as SNiP, feeding in row-by-row. We used $\mathcal{N}(0,0.1)$ for our own method, and Glorot initialization for SNiP and GraSP. $\gamma$ is computed from data sampled from a $\mathcal{N}(0,0.1)$ distribution. We use only the activations from the last time step. For L2, the density was annealed according to the schedule $\{0.8,0.6,0.4,0.2,0.1,0.05,0.02,0.01\}$ every 10k training steps.
303
+
304
+ # 6.2 LANGUAGE BENCHMARKS
305
+
306
+ We use 2000-unit LSTMs for all language benchmarks. To reduce the variance of our comparison, we freeze the embedding layer before training. We use sampled sequential cross-entropy loss with 1000 tokens for wiki103 and 1b, and standard cross-entropy for wiki2. We use He initialization for all methods.
307
+
308
+ Wiki2 was trained for 20k training steps (13 epochs), while wiki103 was trained for 12k training steps, and 1b was trained for 30k training steps.
309
+
310
+ # 7 APPENDIX B - ADDITIONAL STUDIES
311
+
312
+ # 7.1 INITIALIZATIONS
313
+
314
+ We benchmark the performance of our algorithm against random pruning using 3 additional initializations, seen in Table 6. With high variance, the first-order expansion we use to estimate our objective fails to hold, so we do significantly worse than the random benchmark.
315
+
316
+ <table><tr><td>Initialization Scheme</td><td>Ours</td><td>Random</td></tr><tr><td>Glorot</td><td>1.219</td><td>1.36</td></tr><tr><td>N(0,1)</td><td>3.30</td><td>1.38</td></tr><tr><td>uniform(0,0.1)</td><td>1.73</td><td>1.32</td></tr></table>
319
+
320
+ Table 6: Benchmarking of Validation Error % for Different Initializations, on the Sequential MNIST Task with a 400 Unit GRU. Our algorithm successfully beats random pruning on well-conditioned normal distributions, but fails for the high-variance and uniform distributions.
321
+
322
+ # 7.2 RUNTIME
323
+
324
+ We benchmark the runtimes of SNiP, GraSP and our own algorithm, using only a single batch and time iteration for fairness; the results are shown in Table 7.
325
+
326
+ # 8 APPENDIX C - TRAINING CURVES
327
+
328
+ We present a sample training curve of a 400 unit GRU for sequential MNIST below. As can be seen, random pruning is the only competitive baseline in this instance.
329
+
330
+ <table><tr><td>Pruning Scheme</td><td>Runtime (seconds)</td></tr><tr><td>SNiP</td><td>4.718</td></tr><tr><td>GraSP</td><td>16.406</td></tr><tr><td>Ours</td><td>4.876</td></tr></table>
331
+
332
+ Table 7: Benchmarking of Pruning Algorithm Runtimes; our method is faster than GraSP as the Hessian is larger than the Jacobian, but slower than SNiP for a single time instance. It should be noted that our algorithm works best when iterated across several time steps, while GraSP requires iteration across the entire training set, and SNiP requires only a single computation.
333
+
334
+ ![](images/ac9c45f33b05e371f0dfd9b454cc5f1f626a2cf48d640712191be46cc75ca040.jpg)
335
+ Figure 4: Plot of Log Train Loss for a 400 Unit GRU, trained on Sequential MNIST. GraSP is the worst performing, followed by SNiP and then Random, which is on par with our method. L2 is shown as a lower bound. It is surprising that random is competitive, but it is free from the gate imbalance exhibited by SNiP and GraSP.
336
+
337
+ Subsequently, we present a sample training curve in Figure 5 for the 1b words experiment, detailed in Table 4. Our algorithm provides a significant benefit over random pruning, but still lags behind the dense model.
338
+
339
+ ![](images/9cb68a951ed5f792f0edb0b474e7765b2e7ea50182558eb0a4c6d735b0d10c3c.jpg)
340
+ Figure 5: Plot of Log Train Perplexity on the 1b dataset, with a 2k-unit LSTM network. Our model clearly outperforms random pruning by a significant margin; however, more work is needed before we achieve near-dense performance.
oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8636d6630016e10ccb30b328a347768266e815c3c09d73ec5e34485d7adf7b80
3
+ size 331594
oneshotpruningofrecurrentneuralnetworksbyjacobianspectrumevaluation/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b6807c4e9a98850f10550bb59770a41a264212698bc5367b8a58209946cba07d
3
+ size 401845
ontheconvergenceoffedavgonnoniiddata/a6fa98c7-5455-427d-bd93-84e9253f15ac_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cf10a3f0c873085946a3b95dd02e2a07a0b8a2f999216e4831e80191aad9fb55
3
+ size 191582
ontheconvergenceoffedavgonnoniiddata/a6fa98c7-5455-427d-bd93-84e9253f15ac_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cfc8aacfdad4e0c15ff31db65f308094089c82ec579556b9d4f74a05c1e8d8f9
3
+ size 224756
ontheconvergenceoffedavgonnoniiddata/a6fa98c7-5455-427d-bd93-84e9253f15ac_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7e92359019b7d9dac04c1731157867238c7cedfde5138b96540c6f4eadcec5ce
3
+ size 742569
ontheconvergenceoffedavgonnoniiddata/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ontheconvergenceoffedavgonnoniiddata/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:31d075cd641f4a78691e6c4f1b80731612f73b59b719fbae5d8df3b421f5e219
3
+ size 1069103
ontheconvergenceoffedavgonnoniiddata/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6a07202590671f679d2a045fafae8283dff52b9f8920b6ca6f3e18d29d4c4ba6
3
+ size 1273376
ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/79265851-6c9f-40c9-9670-1df76f49c641_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f75b574a48979c345741d8e81a9ca18b5beca84e44481986a771cf94d3b83462
3
+ size 163949
ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/79265851-6c9f-40c9-9670-1df76f49c641_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:36e64fda0b2eec67f0d25b2e67cd64c797e515186f1f4e837d0dfb3295d8d6d3
3
+ size 197962
ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/79265851-6c9f-40c9-9670-1df76f49c641_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d4916158cef071d643d87b178ee7d93686fcee300a4feb61d988441ea6a9353a
3
+ size 634325
ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b05895595b99fe1134c6a340aedd442b9824cb7bcd8ccb703ccd5514fb63fb57
3
+ size 429756
ontheequivalencebetweenpositionalnodeembeddingsandstructuralgraphrepresentations/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d0e2ab47fa5e5310e639ba8fdb5756e115cd6a1ba47041e998047e45ada0ee8e
3
+ size 1126869
ontheglobalconvergenceoftrainingdeeplinearresnets/9ebe3cd7-2a94-45a9-87cd-1cac7a4d4898_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:791be2aaa0fb61af45a8349b1592fec77b2f819bde9dd1e2784b09f53cfc3aa5
3
+ size 213602
ontheglobalconvergenceoftrainingdeeplinearresnets/9ebe3cd7-2a94-45a9-87cd-1cac7a4d4898_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a935740f77087210c38bc9a736ea899570c7dfffbccef10fb5851fc6f5094d2b
3
+ size 242996
ontheglobalconvergenceoftrainingdeeplinearresnets/9ebe3cd7-2a94-45a9-87cd-1cac7a4d4898_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0c731f4b68f2fc56fea9811dca9c7e76ccf7b3426bc6856d16d7d6590157150e
3
+ size 487655
ontheglobalconvergenceoftrainingdeeplinearresnets/full.md ADDED
The diff for this file is too large to render. See raw diff
 
ontheglobalconvergenceoftrainingdeeplinearresnets/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f12d858da6c5da8de90323d1c22191eabc7d0c37891ff3815db71b6b8395aaf7
3
+ size 1642910
ontheglobalconvergenceoftrainingdeeplinearresnets/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1209abc69b0da5bec5b17d24745899d63fbb170747f1ea6bdffffeda8e67b1cf
3
+ size 1192252
ontheinteractionbetweensupervisionandselfplayinemergentcommunication/1b3901be-621d-4bb3-bcdb-d88a447dac73_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8095a0c0b0ed76ed40ab8cc95816b6dd760e2813f53e52dc55a7d31a42e53f2b
3
+ size 80378
ontheinteractionbetweensupervisionandselfplayinemergentcommunication/1b3901be-621d-4bb3-bcdb-d88a447dac73_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8d136354a1b737269e8eb938279bcf234a0957e8d28676756e50ce677699d351
3
+ size 97132
ontheinteractionbetweensupervisionandselfplayinemergentcommunication/1b3901be-621d-4bb3-bcdb-d88a447dac73_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:870d00049f9ed58addb8dc47a1af6808bbf9867bfb5d60206b92b4ec27c5c2a7
3
+ size 3629334
ontheinteractionbetweensupervisionandselfplayinemergentcommunication/full.md ADDED
@@ -0,0 +1,274 @@
1
+ # ON THE INTERACTION BETWEEN SUPERVISION AND SELF-PLAY IN EMERGENT COMMUNICATION
2
+
3
+ Ryan Lowe,* Abhinav Gupta*
+ MILA
4
+
5
+ Jakob Foerster, Douwe Kiela
6
+ Facebook AI Research
7
+
8
+ Joelle Pineau
9
+ Facebook AI Research
10
+ MILA
11
+
12
+ # ABSTRACT
13
+
14
+ A promising approach for teaching artificial agents to use natural language involves human-in-the-loop training. However, recent work suggests that current machine learning methods are too data-inefficient to be trained in this way from scratch. In this paper, we investigate the relationship between two categories of learning signals with the ultimate goal of improving sample efficiency: imitating human language data via supervised learning, and maximizing reward in a simulated multi-agent environment via self-play (as done in emergent communication). We introduce the term supervised self-play (S2P) for algorithms using both of these signals. We find that first training agents via supervised learning on human data followed by self-play outperforms the converse, suggesting that it is not beneficial to emerge languages from scratch. We then empirically investigate various S2P schedules that begin with supervised learning in two environments: a Lewis signaling game with symbolic inputs, and an image-based referential game with natural language descriptions. Lastly, we introduce population-based approaches to S2P, which further improve performance over single-agent methods.<sup>1</sup>
15
+
16
+ # 1 INTRODUCTION
17
+
18
+ Language is one of the most important aspects of human intelligence; it allows humans to coordinate and share knowledge with each other. It is also crucial for human-machine interaction, as human language is a natural means by which to exchange information, give feedback, and specify goals. A promising approach for training agents to solve problems with natural language is to have a "human in the loop", meaning we collect problem-specific data from humans interacting directly with our agents for learning. However, human-in-the-loop data is expensive and time-consuming to obtain as it requires continuously collecting human data as the agent's policy improves, and recent work suggests that current machine learning methods (e.g. from deep reinforcement learning) are too data-inefficient to be trained in this way from scratch (Chevalier-Boisvert et al., 2019). Thus, an important open problem is: how can we make human-in-the-loop training as data efficient as possible?
19
+
20
+ To maximize data efficiency, it is important to fully leverage all available training signals. In this paper, we study two categories of such training methods: imitating human data via supervised learning, and self-play to maximize reward in a multi-agent environment, both of which provide rich signals for endowing agents with language-using capabilities. However, these are potentially competing objectives, as maximizing environmental reward can lead to the resulting communication protocol drifting from natural language (Lewis et al., 2017; Lee et al., 2019). The crucial question, then, is how do we best combine self-play and supervised updates? This question has received surprisingly little attention from the emergent communication literature, where the question of how to bridge the gap from emergent protocols to natural language is generally left for future work (Mordatch & Abbeel, 2018; Lazaridou et al., 2018; Cao et al., 2018).
21
+
22
+ ![](images/061fccfed32a5d8d2737ea2979b15b89183cb143243792b2d848b91576ba2304.jpg)
23
+ Figure 1: (a) Diagram of the supervised self-play (S2P) procedure (phases 1-3) and the testing procedure considered in this work (phase 4). (b) The environments considered in this paper (Sec. 4).
24
+
25
+ ![](images/6b9d769eec537f94bc3aaef8749b269b1c7669c130ce528713e7d5c1be5c4b1d.jpg)
26
+
27
+ Our goal in this paper is to investigate algorithms for combining supervised learning with self-play — which we call supervised self-play (S2P) algorithms — using two classic emergent communication tasks: a Lewis signaling game with symbolic inputs, and a more complicated image-based referential game with natural language descriptions. Our first finding is that supervised learning followed by self-play outperforms emergent communication with supervised fine-tuning in these environments, and we provide three reasons for why this is the case. We then empirically investigate several supervised-first S2P methods in our environments. Existing approaches in this area have used various ad-hoc schedules for alternating between the two kinds of updates (Lazaridou et al., 2017), but to our knowledge there has been no systematic study that has compared these approaches. Lastly, we propose the use of population-based methods for S2P, and find that it leads to improved performance in the more challenging image-based referential game. Our findings highlight the need for further work in combining supervised learning and self-play to develop more sample-efficient language learners.
28
+
29
+ # 2 RELATED WORK
30
+
31
+ In the past few years, there has been renewed interest in the field of emergent communication (Sukhbaatar et al., 2016; Foerster et al., 2016; Lazaridou et al., 2017; Havrylov & Titov, 2017), culminating in three NeurIPS workshops. Empirical studies have shown that agents deployed in a multi-agent environment can autonomously evolve a communication protocol using discrete symbols, which helps them play cooperative or competitive games (Singh et al., 2019; Cao et al., 2018; Choi et al., 2018; Resnick* et al., 2019; Evtimova et al., 2018).
32
+
33
+ While the idea of promoting coordination among agents through communication sounds promising, recent experiments (Lowe et al., 2019; Chaabouni et al., 2019; Kottur et al., 2017; Jaques et al., 2019) have emphasized the difficulty in learning meaningful emergent communication protocols even with centralized training.
34
+
35
+ Apart from the above advances in emergent communication, there is a long-standing goal of building intelligent conversational agents that can interact with humans. This involves training artificial agents so that they achieve high scores on the task while their language remains interpretable by humans, or close to natural language. Recent works also add another axis orthogonal to communication, where the agent also takes a discrete action in an interactive environment (de Vries et al., 2018; Mul et al., 2019). Lewis et al. (2017) introduced a negotiation task which involves learning linguistic and reasoning skills. They trained models to imitate human utterances using supervised learning and found that the models generated human-like utterances but were poor negotiators. They then performed self-play with these pretrained agents in an interleaved manner and found that performance improved drastically while largely avoiding language drift. Lee et al. (2019) also propose using an auxiliary task for grounding the communication to counter language drift. They use visual grounding to learn the semantics of the language while still generating messages that are close to English.
36
+
37
+ A recent trend of using population-based training for multi-agent communication is a promising avenue for research, drawing inspiration from the language evolution literature (Smith et al., 2003; Kirby, 2014; Raviv & Arnon, 2018). Cultural transmission is one such technique; it focuses on the structure and compression of languages, since a language must be used and learned by all individuals of the culture in which it resides while remaining suitable for a variety of tasks. Harding Graesser et al. (2019) show the emergence of linguistic phenomena when pools of agents come into contact with each other, giving rise to novel creole languages. Li & Bowling (2019); Cogswell et al. (2019); Tieleman et al. (2018) have also tried different ways of imposing cultural pressures on agents, by simulating a large population of them and pairing agents to solve a cooperative game with communication. They train an agent against sampled generations of agents, where each generation corresponds to the language spoken by a different agent at a different point in its training history.
40
+
41
+ Our work is inspired by these works: we aim to formalize, through the lens of emergent communication, the recent advances in using self-play for dialogue modeling.
42
+
43
+ # 3 METHODS
44
+
45
+ # 3.1 PROBLEM DEFINITION
46
+
47
+ Our agents are embedded in a multi-agent environment with $N$ agents where they receive observations $o \in O$ (which are functions of a hidden state $S$) and perform actions $a \in A$. Some actions $A_{L} \subset A$ involve sending a message $m \in A_{L}$ over a discrete, costless communication channel (i.e. a cheap talk channel (Farrell & Rabin, 1996)). The agents receive a reward $r \in R$ for their performance in the environment. We assume throughout that the environment is cooperative, and thus the agents are trained to maximize the sum of rewards $R = \sum_{t=1:T} \sum_{i=1:N} r_{i,t}$ across all agents. This can be thought of as a cooperative partially-observable Markov game (Littman, 1994).
48
+
49
+ We define a target language $L^{*} \in \mathcal{L}$, usually corresponding to natural language, that we want our agents to learn (we further assume $L^{*}$ can be used to achieve high task reward). In this paper, we consider a language $L \in \mathcal{L}$ to be simply a set of valid messages $A_{L}$ and a mapping between observations and messages in the environment, $L: O \times A_{L} \to [0,1]$. For example, in an English image-based referential game (Section 4) this corresponds to the mapping between images and image descriptions in English. We are given a dataset $\mathcal{D}$ consisting of $|\mathcal{D}|$ (observation, action) pairs, corresponding to $N_{e}$ 'experts' (for us, $N_{e} = 2$) playing the game using the target language $L^{*}$. Our goal is to train agents to achieve a high reward in the game while speaking language $L^{*}$ with an 'expert'. Specifically, we want our agents to generalize and perform well on examples that are not contained in $\mathcal{D}$.
50
+
51
+ To summarize, we want agents that can perform well on a collaborative task with English-speaking humans, and we can train them using a supervised dataset $\mathcal{D}$ and via self-play.
52
+
53
+ # 3.2 SUPERVISED SELF-PLAY (S2P)
54
+
55
+ In recent years, there have been several approaches to language learning that have combined supervised or imitation learning with self-play. In this paper, we propose an umbrella term for these algorithms called supervised self-play (S2P). S2P requires two things: (1) a multi-agent environment where at least one agent can send messages over a dedicated communication channel, along with a reward function that measures how well the agents are doing at some task; and (2) a supervised dataset $\mathcal{D}$ of experts acting and speaking language $L^{*}$ in the environment (such that they perform well on the task). Given these ingredients, we define S2P below (see Figure 2).
56
+
57
+ Definition 3.1. Supervised self-play (S2P). Supervised self-play is a class of language learning algorithms that combines: (1) self-play updates in a multi-agent language environment, and (2) supervised updates on an expert dataset $\mathcal{D}$ .
58
+
59
+ S2P algorithms can differ in how they combine self-play and supervised learning updates on $\mathcal{D}$ . When supervised learning is performed before self-play, we refer to the dataset $\mathcal{D}$ as the seed data. Why might we want to train our agents via self-play? Won't their language diverge from $L^{*}$ ? One way to intuitively understand why S2P is beneficial is to think in terms of constraints. In our set-up, there are two known constraints on the target language $L^{*}$ : (1) it is consistent with the samples from the supervised dataset $\mathcal{D}$ , and (2) $L^{*}$ can be used to obtain a high reward in the environment. Thus, finding $L^{*}$ can be loosely viewed as a constrained optimization problem, and enforcing both constraints should clearly lead to better performance.
60
+
61
+ # 3.3 ALGORITHMS FOR S2P
62
+
63
+ Here we describe several methods for S2P training. Our goal is not to exhaustively enumerate all possible optimization strategies, but rather provide a categorization of some well-known ways to combine self-play and supervised learning. To help describe these methods, we further split the seed dataset $\mathcal{D}$ into $\mathcal{D}_{train}$ , which is used for training, and $\mathcal{D}_{val}$ which is used for early-stopping. We also visualize the schedules in Figure 2.
64
+
65
+ Emergent communication with supervised fine-tuning (sp2sup): We first perform self-play updates until task performance converges. This is followed by supervised updates on $\mathcal{D}_{train}$ until the listener's performance converges on the dataset $\mathcal{D}_{val}$.
66
+
67
+ Supervised learning with self-play (sup2sp): This is the converse of the above method: we perform supervised updates until convergence on $\mathcal{D}_{val}$, followed by self-play updates until convergence on task performance.
68
+
69
+ Random updates (rand): This is the method used by Lazaridou et al. (2017). At each time step, we sample a Bernoulli random variable $z \sim \text{Bernoulli}(q)$, where $q$ is fixed. If $z = 1$, we perform one supervised update; otherwise we perform one self-play update. We repeat until both losses converge on $\mathcal{D}_{val}$.
70
+
71
+ ![](images/a56529ae8b0179ff13ce6d62b0afd1e0d34a12ccd393ee0612be6ed6d28bce68.jpg)
72
+ Figure 2: A visual representation of the different S2P methods.
73
+
74
+ Scheduled updates (sched): We first pretrain the listener and the speaker until convergence on $\mathcal{D}_{val}$. Then we create a schedule, where we perform $l$ self-play updates followed by $m$ supervised updates, and repeat until convergence on the dataset.
77
+
78
+ Scheduled updates with speaker freezing (sched_frz): This method is based on the findings of Lewis et al. (2017), who do sched S2P while freezing the parameters of the speaker during self-play to reduce the amount of language drift. In our case, we freeze the parameters of the speaker after the initial supervised learning.
79
+
80
+ Scheduled updates with random speaker freezing (sched_rand_frz): Experimentally, we noticed that sched_frz didn't perform well in self-play. Thus, we introduce a variation: we sample a Bernoulli random variable $z \sim \text{Bernoulli}(r)$, where $r$ is fixed. If $z = 1$, we freeze the parameters of the speaker during both self-play and supervised learning; otherwise, we allow updates to the speaker as well.
81
+
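+ To make the schedules above concrete, the following is a minimal sketch of two of the training loops (rand and sched/sched_frz). This is not the authors' released code: `supervised_update`, `self_play_update`, and `converged` are hypothetical stand-ins for the gradient updates and early-stopping checks, which the text does not spell out.
+
+ ```python
+ import random
+
+ # Hypothetical stand-ins for pieces the paper does not spell out;
+ # replace with real gradient updates and early stopping on D_val.
+ def supervised_update(agents, data): pass
+ def self_play_update(agents, freeze_speaker=False): pass
+ def converged(agents, data): return True  # placeholder so the sketch terminates
+
+ def rand_s2p(agents, D_train, D_val, q=0.75, max_steps=100_000):
+     """rand: at each step, a supervised update with probability q, else self-play."""
+     for _ in range(max_steps):
+         if random.random() < q:              # z ~ Bernoulli(q)
+             supervised_update(agents, D_train)
+         else:
+             self_play_update(agents)
+         if converged(agents, D_val):
+             break
+
+ def sched_s2p(agents, D_train, D_val, l=40, m=40,
+               freeze_speaker=False, max_rounds=1_000):
+     """sched / sched_frz: supervised pretraining, then alternate l self-play
+     updates with m supervised updates until convergence on D_val."""
+     while not converged(agents, D_val):      # phase 1: supervised pretraining
+         supervised_update(agents, D_train)
+     for _ in range(max_rounds):
+         for _ in range(l):
+             self_play_update(agents, freeze_speaker=freeze_speaker)
+         for _ in range(m):
+             supervised_update(agents, D_train)
+         if converged(agents, D_val):
+             break
+ ```
+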
82
+ # 3.4 POPULATION-BASED S2P (POP-S2P)
83
+
84
+ As explained above, the goal of S2P is to produce agents that follow dataset $\mathcal{D}$ while maximizing reward in the environment. However, there are many such policies satisfying these criteria. This results in a large space of possible solutions that grows as the environment grows more complex (but shrinks with increasing $|\mathcal{D}|$). Experimentally, we find that this can result in diverse agent policies. We show this in Figure 3 by training 50 randomly initialized agents on the image-based referential game (defined in Sec. 4): the agents often make diverse predictions for a given image (Figure 3a) and achieve variable performance when playing with other populations, with a slight preference for their own partner (the diagonal in Figure 3b).
85
+
86
+ ![](images/22ac0a2f4d4f8ce64a6f0840d41c17d1e4e95547aa2f026a50c3f92fcda9e490.jpg)
+ Figure 3: Results from training 50 S2P agents on the IBR game with $|\mathcal{D}| = 10000$. (a) The agents have a range of predictions on many images. (b) When playing with each other, the agents exhibit uneven performance (color is mean reward, yellow is higher), indicating policy variability.
+
+ Inspired by these findings, we propose to augment S2P by training a population of $N$ agents and subsequently aggregating them back into a single agent (the 'student'). We call this population-based S2P (Pop-S2P). While there are many feasible ways of doing this, in this paper we train the populations by simply randomizing the initial seed, and we aggregate the populations using a simple form of policy distillation (Rusu et al., 2016). Another simple technique to boost performance is ensembling, where we simply take the majority prediction at each time step.
94
+
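+ The paper describes the aggregation step only as 'a simple form of policy distillation' plus a majority-vote ensemble, so the sketch below is one plausible reading rather than the exact method. `student` and the `teachers` list are assumed to be classifiers returning logits, and the temperature `tau` is a hypothetical knob of our own.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def distill_step(student, teachers, batch, optimizer, tau=1.0):
+     """One policy-distillation step: the student matches the average of the
+     teachers' softened output distributions (one reading of Rusu et al., 2016)."""
+     with torch.no_grad():
+         teacher_probs = torch.stack(
+             [F.softmax(t(batch) / tau, dim=-1) for t in teachers]).mean(0)
+     loss = F.kl_div(F.log_softmax(student(batch) / tau, dim=-1),
+                     teacher_probs, reduction="batchmean")
+     optimizer.zero_grad()
+     loss.backward()
+     optimizer.step()
+     return loss.item()
+
+ def ensemble_predict(teachers, batch):
+     """Ensembling alternative: majority vote over the teachers' argmax predictions."""
+     votes = torch.stack([t(batch).argmax(dim=-1) for t in teachers])  # (N, batch)
+     return votes.mode(dim=0).values
+ ```
+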
95
+ # 4 ENVIRONMENTS & IMPLEMENTATION DETAILS
96
+
97
+ We consider environments based on classical problems in emergent communication. These environments are cooperative and involve the interaction between a speaker, who makes an observation and sends a message, and a listener, who observes the message and makes a prediction (see Figure 1b). Our goal is to train a listener such that it achieves high reward when playing with an expert speaking the target language $L^{*}$ on inputs unseen during training.<sup>2</sup>
98
+
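+ A minimal sketch of the testing protocol just described (phase 4 in Figure 1a): the trained listener is paired with an 'expert' speaking $L^{*}$ on held-out inputs. The names `listener`, `expert_dataset`, and `reward_fn` are illustrative stand-ins, not interfaces from the paper.
+
+ ```python
+ def evaluate_with_expert(listener, expert_dataset, reward_fn):
+     """Average reward of the trained listener when playing with an expert
+     speaking L*, on held-out (observation, expert message) pairs."""
+     total = 0.0
+     for observation, expert_message in expert_dataset:
+         prediction = listener(expert_message)
+         total += reward_fn(prediction, observation)
+     return total / len(expert_dataset)
+ ```
+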
99
+ Environment 1: Object Reconstruction (OR) Our first game is a Lewis signaling game (Lewis, 1969) and a simpler version of the Task & Talk game from Kottur et al. (2017), with a single turn and a much larger input space. The speaker agent observes an object with a certain set of properties, and must describe the object to the listener using a sequence of words. The listener then attempts to reconstruct the object. More specifically, the input space consists of $p$ properties (e.g. shape, color) of $t$ types each (e.g. triangle, square). The speaker observes a symbolic representation of the input, consisting of the concatenation of $p = 6$ one-hot vectors, each of length $t = 10$. The number of possible inputs scales as $t^p$. We define the vocabulary size (length of each one-hot vector sent from the speaker) as $|V| = 60$, and the number of words in the (fixed-length) message to be $T = 6$.
100
+
101
+ For our target language $L^{*}$ for this task, we programmatically generate a perfectly compositional language by assigning each property type a unique word. In other words, to describe a 'blue shaded triangle', we create a language where the output description would be "blue, triangle, shaded", in some arbitrary order. By 'unique', we mean that no two property types are assigned the same word. The speaker and listener policies are parameterized using a 2-layer linear network (results were similar with added non-linearity and significantly worse with 1-layer linear networks) with 200 hidden units. During both supervised learning and self-play, the listener is trained to minimize the cross-entropy loss over property predictions.
102
+
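+ One way to generate such a perfectly compositional target language, consistent with the description above, is sketched below; the paper does not publish its exact generation code, so treat this as an illustration.
+
+ ```python
+ import numpy as np
+
+ P, T_TYPES = 6, 10        # properties and types per property
+ V = P * T_TYPES           # |V| = 60: one unique word per (property, type) pair
+
+ rng = np.random.default_rng(0)
+ # A random bijection from the 60 (property, type) slots onto the 60 words.
+ word_of = rng.permutation(V).reshape(P, T_TYPES)
+
+ def describe(obj):
+     """obj is a length-P vector of type indices; its description is the
+     corresponding P words (any fixed or arbitrary order would do)."""
+     return [int(word_of[prop, typ]) for prop, typ in enumerate(obj)]
+
+ example = rng.integers(0, T_TYPES, size=P)   # a random object
+ print(example, "->", describe(example))
+ ```
+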
103
+ Environment 2: Image-Based Referential game with natural language (IBR) Our second game is the communication task introduced in Lee et al. (2018). The speaker observes a target image $d^{*}$, and must describe the image using a set of words. The listener observes the target image along with $D$ distractor images sampled uniformly at random from the training set (for us, $D = 9$), and the message $y_{d^{*}}$ from the speaker, and is rewarded for correctly selecting the target image. For this game, the target language $L^{*}$ is English: we obtain English image descriptions using caption data from MS COCO and Flickr30k. We set the vocabulary size $|V| = 100$, and filter out any descriptions that contain more than $30\%$ unknown tokens while keeping the maximum message length $T$ at 15.
104
+
105
+ Similar to prior work (Mordatch & Abbeel, 2018; Sukhbaatar et al., 2016), we train our agents end-to-end with backpropagation. Since the speaker sends discrete messages, we use the Straight-Through version of Gumbel-Softmax (Jang et al., 2017; Maddison et al., 2017) to allow gradient flow to the speaker during self-play ($\mathcal{J}_{\text{self-play}}$). The speaker's predictions are trained on the ground-truth English captions $m^{*}$ using the cross-entropy loss $\mathcal{J}_{\text{spk-supervised}}$. The listener is trained using the cross-entropy loss $\mathcal{J}_{\text{lsn-supervised}}$, where the logits are the reciprocal of the mean squared error; this was found to perform better than directly minimizing the MSE loss in Lee et al. (2018). The mean squared error is taken between the listener's image representation $b_{\text{lsn}}$ of each distractor (or target) image and the message representation given as input. The loss functions are defined as:
106
+
107
+ $$
+ \mathcal{J}_{\text{spk-supervised}}(d^{*}) = - \sum_{t=1}^{T} \log p_{\text{spk}}(m_{t} \mid m_{<t}, d^{*})
+ $$
110
+
111
+ $$
+ \mathcal{J}_{\text{lsn-supervised}}(m^{*}, d^{*}, D) = - \sum_{d=1}^{D+1} \log \operatorname{softmax}\left( 1 / \left( p_{\text{lsn}}(m^{*}) - b_{\text{lsn}}(d) \right)^{2} \right)
+ $$
114
+
115
+ ![](images/d376f519c905dbbb3ee0578d1319da621c75b12a714fe5348aacb10050dff4ea.jpg)
+ (a)
+
+ ![](images/73f279a0ac627f9b82ef443de92c953efc4b32398e3d9351977c9e61daaf879d.jpg)
+ (b)
+
+ ![](images/1a5f9d6f5cd1b29f776f3da3fcaea1a13cb3b3f37d6c702085210e034d918137.jpg)
+ (c)
+
+ Figure 4: (a) Left: In the OR game, the best performance (number of total samples required to achieve $95\%$ test accuracy; lower is better) for S2P is achieved when all of the samples are in the seed. 0 on the x-axis corresponds to sp2sup, and Optimal is the actual (minimum) number of samples required to solve this optimization problem (see Appendix B). Right: This is also the case in the IBR game, where performance is measured by the generalization accuracy using 10k total training samples (higher is better). (b) Adding more samples to initial supervised learning in the IBR game improves agents' generalization to $L^{*}$. (c) Even when we learn the perfect distribution with emergent communication in the OR game, it still performs worse than Pop-S2P (using sup2sp S2P).
+
125
+ $$
+ \mathcal{J}_{\text{self-play}}(d^{*}, D) = - \sum_{d=1}^{D+1} \log \operatorname{softmax}\left( 1 / \left( p_{\text{lsn}}(y_{d^{*}}) - b_{\text{lsn}}(d) \right)^{2} \right)
+ $$
128
+
129
+ where $y_{d^*}$ is the concatenation of $T$ one-hot vectors $y_{d^*}^{t} = \text{ST-GumbelSoftmax}(p_{\text{spk}}^{t})$.
130
+
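+ For concreteness, here is a sketch of the two pieces just described: the straight-through Gumbel-Softmax used to sample discrete speaker messages, and the listener's reciprocal-MSE logits. PyTorch's built-in `F.gumbel_softmax(..., hard=True)` already implements the former; the explicit version is shown to expose the trick, and the `eps` term is our own addition for numerical safety.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def st_gumbel_softmax(logits, tau=1.0):
+     """Discrete one-hot sample on the forward pass; continuous
+     Gumbel-Softmax gradient on the backward pass."""
+     y_soft = F.gumbel_softmax(logits, tau=tau, hard=False)
+     index = y_soft.argmax(dim=-1, keepdim=True)
+     y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
+     return y_hard - y_soft.detach() + y_soft  # straight-through estimator
+
+ def listener_logits(msg_repr, image_reprs, eps=1e-8):
+     """Logits are the reciprocal of the MSE between the message
+     representation and each candidate image representation."""
+     mse = ((image_reprs - msg_repr.unsqueeze(0)) ** 2).mean(dim=-1)  # (D+1,)
+     return 1.0 / (mse + eps)
+ ```
+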
131
+ We use the same architecture as described in Lee et al. (2018). The speaker and listener are parameterized by recurrent policies, both using an embedding layer of size 256 followed by a GRU (Cho et al., 2014) of size 512. We provide further hyperparameter details in Table 1 in the Appendix.
132
+
133
+ # 5 DO SUPERVISED LEARNING BEFORE SELF-PLAY
134
+
135
+ A central question in our work is how to combine supervised and self-play updates for effective pre-training of conversational agents. In this section, we study this question by conducting experiments with two schedules: training with emergent communication followed by supervised learning (sp2sup), and training with supervised learning followed by self-play (sup2sp). We also interpolate between these two regimes by performing rand and sched updates on $0 < n < |\mathcal{D}|$ samples, followed by supervised fine-tuning on the remaining $|\mathcal{D}| - n$ samples.
136
+
137
+ Our first finding is that it is best to use all of your samples for supervised learning before doing self-play. This can be seen in Figure 4: when all of the samples are used first for supervised learning, the total number of samples required to solve the OR game drops drastically, and in the IBR game the accuracy for a fixed number of samples is maximized (Figure 4a). While this may seem to be common sense, it in fact runs counter to the prevailing wisdom in some of the emergent communication literature, where languages are emerged from scratch with the ultimate goal of translating them to natural language.
138
+
139
+ To better understand why it is best to do supervised learning first, we now conduct a set of targeted experiments using the environments from Section 4. Results of our experiments suggest three main explanations:
140
+
141
+ (1) Emerging a language is hard. In many environments, it is hard for emergent communication to find an equilibrium where the agents meaningfully communicate. The difficulty of 'emergent language discovery' is well known in emergent communication (Lowe et al., 2017), so we only briefly discuss it here. In short, to discover a useful communication protocol, agents have to coordinate repeatedly over time, which is difficult when agents are randomly initialized, particularly in environments with sparse reward. Compounding the difficulty, if neither agent communicates and both agents act optimally given their lack of knowledge, they converge to a Nash equilibrium called the babbling equilibrium (Farrell & Rabin, 1996). This equilibrium must be overcome to learn a useful communication protocol. In S2P, the initial language supervision can help overcome the discovery problem, as it provides an initial policy for how agents could usefully communicate (Lewis et al., 2017).
+
+ ![](images/d91ed5de9d222dab599590d856dbe848d676d643d967f22e042aad29774bc9e5.jpg)
+ Figure 5: Results from the OR game with 1 property and 10 types. When the supervised updates are performed first (supervised data available for words $0 - 3$), the subsequent self-play updates make sensible predictions for the unknown words $4 - 7$. When the self-play updates are performed first, the subsequent supervised updates merely correct the predictions for words $1 - 4$, without enforcing the constraint that each word should map to a separate type to solve the task.
147
+
148
+ (2) Emergent languages are different from natural language. Even if one does find an equilibrium where agents communicate and perform well on the task, the distribution of languages they find will usually be very different from natural language. This is a problem because, if the languages obtained through self-play are sufficiently different from $L^{*}$, they will not be helpful for learning it. This is seen for the OR game in Figure 4a, where 17 samples are required in the seed before S2P outperforms the supervised learning baseline. We speculate that this is due to the different pressures exerted during the emergence of artificial languages and human languages.
149
+
150
+ Thankfully, we can learn languages closer to $L^{*}$ by simply adding more samples to our initial supervised learning phase. We show this in Figure 4b, where we train populations of 50 agents on the IBR game and use Pop-S2P to produce a single distilled agent. With both 1K and 10K initial supervised samples, the distilled agent generalizes to agents in the validation set of its population. However, the distilled agent trained with 10K samples performs significantly better when playing with an expert agent speaking $L^{*}$, indicating that the training agents from that population speak languages closer to $L^{*}$.
151
+
152
+ (3) Starting with self-play violates constraints. Even if you have 'perfect emergent communication' that learns a distribution over languages under which $L^{*}$ has high probability, current methods of supervised fine-tuning do not properly learn from this distribution. What if we had all the correct learning pressures, such that we emerged a distribution over languages $\mathcal{L}$ with structure identical to $L^{*}$, and then trained a Pop-S2P agent using this distribution? Surprisingly, we find that S2P with all of the samples in the seed performs better than even this optimistic case, in terms of providing useful information for training a Pop-S2P agent. We conduct an experiment in the OR game where we programmatically define a distribution over compositional languages $\mathcal{L}_c$, of which our target language $L^{*}$ is a sample. Each language $L \in \mathcal{L}_c$ has the same structure and is obtained by randomly permuting the mapping between the word IDs and the corresponding type IDs, along with the order of properties in an utterance. Next, we compare two distilled policies using 50 populations: one is distilled from S2P populations (trained with $X$ samples), and the other is distilled from 'perfect emergent communication' and fine-tuned on $X$ samples. As can be seen in Figure 4c, when we train a Pop-S2P agent on 50 of these compositional populations, we still need $3X$ more samples than regular Pop-S2P (trained on 50 S2P agents with all of the samples in the seed) to reach $95\%$ test accuracy<sup>3</sup>.
153
+
154
+ To understand why this happens, we conduct a case study in an even simpler setting: single-agent S2P in the OR game with $p = 1$ , $t = 10$ , $|V| = 10$ . We find that agents trained via emergent communication consistently learn to solve this task. However, as shown in Figure 5, when subsequently trained via supervised learning on $\mathcal{D}$ to learn $L^*$ , the learned language is no longer coherent (it maps different words to the same type) and doesn't solve the task. On the other hand, agents trained first with supervised learning are able to learn a language that both solves the task and is consistent with $\mathcal{D}$ .
155
+
156
+ Intuitively, what's happening is that the samples in $\mathcal{D}$ are also valid for solving the task, since we assume agents speaking $L^{*}$ can solve the task. Thus, self-play after supervised learning simply 'fills in the gaps' for examples not in $\mathcal{D}$ .<sup>4</sup> Emergent languages that start with self-play, on the other hand, contain input-output mappings that are inconsistent with $L^{*}$ , which must be un-learned during subsequent supervised learning.
157
+
158
+ In theory, the above issue could be resolved using Pop-S2P; if the distilled agent could use the population of emergent languages to discover structural rules (e.g. discovering that the languages in the OR game in Figure 4c are compositional), it could use the samples from $\mathcal{D}$ to refine a posterior distribution over target languages that is consistent with these rules (e.g. learning the distribution of compositional languages consistent with $\mathcal{D}$ ). Current approaches to supervised fine-tuning in language, though, do not do this (Lazaridou et al., 2017; Lewis et al., 2017). An interesting direction for future work is examining how to apply Bayesian techniques to S2P.
159
+
160
+ # 6 EXPLORING VARIANTS OF S2P
161
+
162
+ # 6.1 POPULATION-BASED S2P
163
+
164
+ In this section, we aim to show that (1) S2P outperforms the supervised learning baseline, and (2) Pop-S2P outperforms S2P. We conduct our experiments in the more complex IBR game, since the agents must communicate in English, and measure performance by calculating the accuracy at different (fixed) numbers of samples. Our baseline is then the performance of a supervised learner on a fixed number of samples.
165
+
166
+ We show the results in Figure 6. We first note that, when both 1k and 10k samples are used for supervised learning, S2P (sched) outperforms the supervised learning baseline. We can also see that the population-based approach outperforms single agent S2P (sched) by a significant margin. We also compare our distillation method to an ensembling method that keeps all 50 populations at test time, and find that ensembling performs significantly better, although it is much less efficient. This suggests that there is room to push distilled Pop-S2P to even better performance.
167
+
168
+ ![](images/644b3a73d90c67573af4385c41c62cc188ff7cc652a77496a752a75dca7c7ebb.jpg)
169
+ Figure 6: S2P (sched) outperforms the supervised baseline in the IBR game, and is in turn outperformed by Pop-S2P.
170
+
171
+ # 6.2 EXAMINING S2P SCHEDULES
172
+
173
+ In this section, we aim to: (1) evaluate several S2P schedules empirically on the IBR game; and (2) attain a better understanding of S2P through quantitative experiments.
174
+
175
+ Parameter freezing improves S2P We show the results comparing different S2P schedules in Figure 7a. We find that in this more complex game, the sup2sp S2P performs much worse than the other options. We also see that adding freezing slightly improves performance on the target language (Figure 8 in the Appendix also shows that it converges more quickly). We hypothesize that this is because it reduces the language drift that occurs during each round of self-play updates (Lee et al., 2019). Overall, however, the differences between the S2P schedules are relatively small, and it is unclear whether the same ordering would hold in a different domain.
176
+
177
+ ![](images/24fb322bb4063277460a4e1a1cbe32e9f09abfc51a43be7545dfc57bf946c990.jpg)
178
+ (a)
179
+
180
+ ![](images/94706401b663ec6ecc0bdf1a4279898fd3de054f10124e87761a1fcf1644e09a.jpg)
181
+ (b)
182
+
183
+ ![](images/58e3b961ff38b4f785453f498faaaca17ebf1734467b028d1b41ad95b7d4c6a1.jpg)
184
+ (c)
185
+ Figure 7: (a) Comparing test performances of different S2P methods on the IBR game. For each method, we picked the model that gave the best performance on $\mathcal{D}_{val}$ . (b) 2D visualization of S2P (sched) performance over the course of training, in terms of performance on $L^{*}$ (vertical axis) and performance in self-play (horizontal axis). The zig-zag patterns indicate that most self-play updates result in a short-term decrease in target language performance. (c) Visualization of the role of the supervised and self-play updates in sched S2P.
186
+
187
+ Self-play acts as a regularizer What is the role of self-play in S2P? We can start to decipher this by taking a closer look at sched S2P. We plot the training performance of this method in Figure 7b. Interestingly, we notice from the zig-zag pattern that the validation performance usually goes down after every set of self-play updates. However, the overall validation performance goes up after the next round of supervised updates. This is also reflected in the poor performance of the sup2sp S2P in Figure 7a.
188
+
189
+ This phenomenon can be explained by framing self-play as a form of regularization: alternating between supervised and self-play updates is a way to satisfy the parallel constraints of 'is consistent with the dataset $\mathcal{D}$' and 'performs well on the task'. We visualize this pictorially in Figure 7c: while a set of self-play updates results in poor performance on $\mathcal{D}$, eventually the learned language moves closer to satisfying both constraints.
190
+
191
+ # 7 DISCUSSION
192
+
193
+ In this work, we investigated the research question of how to combine supervised and self-play updates, with a focus on training agents to learn a language. However, this research question is not only important for language learning; it is also important for equilibrium selection and for learning social conventions (Lerer & Peysakhovich, 2019) in general games. For example, in robotics there may be a trade-off between performing a task well (moving an object to a certain place) and having your policy be interpretable by humans (so that they will not stumble over you). Examining how to combine supervised and self-play updates in these settings is an exciting direction for future work.
194
+
195
+ There are several axes of complexity not addressed in our environments and problem set-up. First, we consider only single-state environments, and agents don't have to make temporally extended decisions. Second, we do not consider pre-training on large text corpora that are separate from the desired task (Radford et al., 2019; Devlin et al., 2018). Third, we limit our exploration of self-play to the multi-agent setting, which is not the case in works such as instruction following (Andreas & Klein, 2015). Introducing these elements may result in additional practical considerations for S2P learning, which we leave for future work. Our goal in this paper is not to determine the best method of S2P in all of these settings, but rather to inspire others to use the framing of 'supervised self-play algorithms' to make progress on sample efficient human-in-the-loop language learning.
196
+
197
+ # ACKNOWLEDGEMENTS
198
+
199
+ We are very grateful to Angeliki Lazaridou; discussions with her at ICML 2019 and her simultaneous work (Lazaridou et al., 2020) shifted the direction of this work considerably. We also thank Jean Harb, Liam Fedus, Amy Zhang, Evgeny Naumov, Cinjon Resnick, Igor Mordatch, and others at MILA and Facebook AI Research for discussions related to the ideas in this paper. Special thanks to Arthur Szlam and Kavya Srinet for discussing their ongoing work with us. RL is supported in part by a Vanier Scholarship.
200
+
201
+ # REFERENCES
202
+
203
+ Jacob Andreas and Dan Klein. Alignment-based compositional semantics for instruction following. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 1165-1174, Lisbon, Portugal, September 2015. Association for Computational Linguistics. doi: 10.18653/v1/D15-1138. URL https://www.aclweb.org/anthology/D15-1138.
204
+ Kris Cao, Angeliki Lazaridou, Marc Lanctot, Joel Z Leibo, Karl Tuyls, and Stephen Clark. Emergent communication through negotiation. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Hk6WhagRW.
205
+ Rahma Chaabouni, Eugene Kharitonov, Emmanuel Dupoux, and Marco Baroni. Anti-efficient encoding in emergent communication. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 6290-6300. Curran Associates, Inc., 2019. URL http://papers.nips.cc/paper/8859-anti-efficient-encoding-in-emergent-communication.pdf.
206
+ Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, and Yoshua Bengio. BabyAI: First steps towards grounded language learning with a human in the loop. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rJeXCoOcYX.
207
+ Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724-1734, Doha, Qatar, October 2014. Association for Computational Linguistics. doi: 10.3115/v1/D14-1179. URL https://www.aclweb.org/anthology/D14-1179.
208
+ Edward Choi, Angeliki Lazaridou, and Nando de Freitas. Multi-agent compositional communication learning from raw visual input. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rknt2Be0-.
209
+ Michael Cogswell, Jiasen Lu, Stefan Lee, Devi Parikh, and Dhruv Batra. Emergence of Compositional Language with Deep Generational Transmission. arXiv:1904.09067 [cs, stat], April 2019. arXiv: 1904.09067.
210
+ Harm de Vries, Kurt Shuster, Dhruv Batra, Devi Parikh, Jason Weston, and Douwe Kiela. Talk the walk: Navigating new york city through grounded dialogue. arXiv preprint arXiv:1807.03367, 2018.
211
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
212
+ Katrina Evtimova, Andrew Drozdov, Douwe Kiela, and Kyunghyun Cho. Emergent communication in a multi-modal, multi-step referential game. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=rJGZq6g0-.
213
+ Joseph Farrell and Matthew Rabin. Cheap talk. Journal of Economic Perspectives, 10(3): 103-118, September 1996. doi: 10.1257/jep.10.3.103. URL http://www.aeaweb.org/articles?id=10.1257/jep.10.3.103.
214
+
215
+ Jakob Foerster, Ioannis Alexandros Assael, Nando de Freitas, and Shimon Whiteson. Learning to Communicate with Deep Multi-Agent Reinforcement Learning. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 2137-2145. Curran Associates, Inc., 2016.
216
+ Laura Harding Graesser, Kyunghyun Cho, and Douwe Kiela. Emergent linguistic phenomena in multi-agent communication games. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 3691-3701, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1384. URL https://www.aclweb.org/anthology/D19-1384.
217
+ Serhii Havrylov and Ivan Titov. Emergence of Language with Multi-agent Games: Learning to Communicate with Sequences of Symbols. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 2149-2159. Curran Associates, Inc., 2017.
218
+ Eric Jang, Shixiang Gu, and Ben Poole. Categorical Reparameterization with Gumbel-Softmax. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=rkE3y85ee.
219
+ Natasha Jaques, Angeliki Lazaridou, Edward Hughes, Caglar Gulcehre, Pedro Ortega, Dj Strouse, Joel Z. Leibo, and Nando De Freitas. Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning. In International Conference on Machine Learning, pp. 3040-3049, May 2019. URL http://proceedings.mlr.press/v97/jaques19a.html.
220
+ Simon Kirby. Iterated learning and the evolution of language. Current Opinion in Neurobiology, pp. 7, 2014.
221
+ Satwik Kottur, José Moura, Stefan Lee, and Dhruv Batra. Natural language does not emerge 'naturally' in multi-agent dialog. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2962-2967, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1321. URL https://www.aclweb.org/anthology/D17-1321.
222
+ Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. Multi-Agent Cooperation and the Emergence of (Natural) Language. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Hk8N3Sclg.
223
+ Angeliki Lazaridou, Karl Moritz Hermann, Karl Tuyls, and Stephen Clark. Emergence of linguistic communication from referential games with symbolic and pixel input. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=HJGv1Z-AW.
224
+ Angeliki Lazaridou, Anna Potapenko, and Olivier Tieleman. Multi-agent communication meets natural language: Synergies between functional and structural language learning. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 7663-7674, Online, July 2020. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/2020.acl-main.685.
225
+ Jason Lee, Kyunghyun Cho, Jason Weston, and Douwe Kiela. Emergent translation in multiagent communication. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=H1vEXaxA-.
226
+ Jason Lee, Kyunghyun Cho, and Douwe Kiela. Countering language drift via visual grounding. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 4376-4386, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1447. URL https://www.aclweb.org/anthology/D19-1447.
227
+ Adam Lerer and Alexander Peysakhovich. Learning existing social conventions via observationally augmented self-play. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pp. 107-114. ACM, 2019.
228
+
229
+ David Lewis. Convention: A philosophical study. Harvard University Press, 1969.
230
+ Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh, and Dhruv Batra. Deal or no deal? end-to-end learning of negotiation dialogues. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 2443-2453, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1259. URL https://www.aclweb.org/anthology/D17-1259.
231
+ Fushan Li and Michael Bowling. Ease-of-Teaching and Language Structure from Emergent Communication. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 15825-15835. Curran Associates, Inc., 2019. URL http://papers.nips.cc/paper/9714-ease-of-teaching-and-language-structure-from-emergent-communication.pdf.
232
+ Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In International Conference on Machine Learning, volume 157, pp. 157-163, 1994.
233
+ Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, OpenAI Pieter Abbeel, and Igor Mordatch. Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 6379-6390. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7217-multi-agent-actor-critic-for-mixed-cooperative-competitive-environments.pdf.
234
+ Ryan Lowe, Jakob Foerster, Y-Lan Boureau, Joelle Pineau, and Yann Dauphin. On the pitfalls of measuring emergent communication. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '19, pp. 693-701. International Foundation for Autonomous Agents and Multiagent Systems, 2019. ISBN 978-1-4503-6309-9. URL http://www.ifaamas.org/Proceedings/aamas2019/pdfs/p693.pdf.
235
+ Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=S1jE5L5gl.
236
+ Igor Mordatch and Pieter Abbeel. Emergence of grounded compositional language in multi-agent populations. In AAAI Conference on Artificial Intelligence, 2018. URL https://aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17007.
237
+ Mathijs Mul, Diane Bouchacourt, and Elia Bruni. Mastering emergent language: learning to guide in simulated navigation. arXiv:1908.05135 [cs], August 2019. arXiv: 1908.05135.
238
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. URL https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf.
239
+ Limor Raviv and Inbal Arnon. Systematicity, but not compositionality: Examining the emergence of linguistic structure in children and adults using iterated learning. Cognition, 181:160-173, December 2018. ISSN 0010-0277.
240
+ Cinjon Resnick*, Abhinav Gupta*, Jakob N. Foerster, Andrew M. Dai, and Kyunghyun Cho. Capacity, bandwidth, and compositionality in emergent language learning. arXiv preprint arXiv:1910.11424, 2019.
241
+ Andrei A Rusu, Sergio Gomez Colmenarejo, Caglar Gulcehre, Guillaume Desjardins, James Kirkpatrick, Razvan Pascanu, Volodymyr Mnih, Koray Kavukcuoglu, and Raia Hadsell. Policy distillation. In International Conference on Learning Representations, 2016. URL https://arxiv.org/pdf/1511.06295.pdf.
242
+ Amanpreet Singh, Tushar Jain, and Sainbayar Sukhbaatar. Individualized controlled continuous communication model for multiagent cooperative and competitive tasks. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=rye7knCqK7.
243
+
244
+ Kenny Smith, Henry Brighton, and Simon Kirby. Complex Systems In Language Evolution: The Cultural Emergence Of Compositional Structure. Advances in Complex Systems (ACS), 6(04): 537-558, 2003. doi: 10.1142/S0219525903001055. URL https://ideas.repec.org/a/wsi/acsxxx/v06y2003i04ns0219525903001055.html.
245
+ Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. Learning multiagent communication with backpropagation. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 2244-2252. Curran Associates, Inc., 2016. URL http://papers.nips.cc/paper/6398-learning-multiagent-communication-with-backpropagation.pdf.
246
+ Olivier Tieleman, Angeliki Lazaridou, Shibl Mourad, Charles Blundell, and Doina Precup. Shaping representations through communication. 2018. URL https://openreview.net/pdf?id=HkzL4hR9Ym.
247
+
248
+ # A HYPERPARAMETERS
249
+
250
+ We provide hyperparameter details in Table 1.
251
+
252
+ <table><tr><td>Hyperparameter</td><td>Values</td></tr><tr><td>Learning rate</td><td>1e-2, 1e-3, 2e-3, 6e-3, 1e-4, 5e-4, 6e-4</td></tr><tr><td>Model architecture</td><td>Linear, Bilinear, Non-Linear</td></tr><tr><td>Number of encoders (perfect emcomm)</td><td>1, 2, 5, 10, 20, 50, 100, 200, 500, 1000</td></tr><tr><td>Hidden layer size (Linear)</td><td>200, 500, 1000</td></tr><tr><td>Number of encoders (Pop-S2P)</td><td>20, 40, 50, 60, 80, 100</td></tr><tr><td>Number of distractors</td><td>1, 4, 9</td></tr><tr><td>GRU hidden size</td><td>256</td></tr><tr><td>Word embedding size</td><td>512</td></tr><tr><td>Image embedding size (from pretrained Resnet50)</td><td>2048</td></tr><tr><td>Batch size</td><td>1, 512, 1000</td></tr><tr><td>Random seeds</td><td>0, 1, 2, 3, 4</td></tr><tr><td>Optimizer</td><td>Adam, SGD</td></tr><tr><td>Dropout</td><td>0, 0.3</td></tr><tr><td>Gumbel relaxation temperature</td><td>1</td></tr><tr><td>Vocabulary size</td><td>100, 200, 500, 1000, 5000</td></tr><tr><td>Max sentence length</td><td>12, 15, 20, 30, 50</td></tr><tr><td>m in sched</td><td>0, 1, 30, 40, 50, 70</td></tr><tr><td>l in sched</td><td>0, 30, 40, 50</td></tr><tr><td>q in rand</td><td>0.75</td></tr><tr><td>r in sched_rand_frz</td><td>0.5</td></tr><tr><td>Number of initial supervised steps (pretraining)</td><td>0, 1000, 2000, 3000, 5000</td></tr></table>
253
+
254
+ Table 1: Hyperparameters considered in S2P training.
255
+
256
+ # B CALCULATION OF OPTIMAL SAMPLE COMPLEXITY IN OR GAME
257
+
258
+ Here we provide a quick calculation of how a human might learn a new compositional language $L$ in the OR game using as few examples as possible, which we use as a baseline in Figure 4a. We assume an OR game with $p = 6$ properties, $t = 10$ types, $T = 6$ words sent per message (concatenated together), and a vocabulary size of $|V| = 60$. If the language $L$ is compositional, then each word in the vocabulary is assigned to one type, so we need to learn 60 total assignments. In this analysis we assume we can construct (i.e., hand-design) the samples seen by the human, and thus the final number should be considered something like a lower bound.
259
+
260
+ Since $T = 6$, we get information about 6 word–type assignments from every sample. However, this information is entangled, as we don't know which word corresponded to which type. Thus, we (1) divide the problem up by first constructing 9 (word sequence, object) sample pairs where none of the object types overlap between samples. With this information, we are able to narrow down the word–type assignments into 10 groups of 6 (that is, in each group we have 6 words corresponding to 6 types, but we don't know which type belongs to which word). Note that we don't need 10 samples, as the last group can be inferred by exclusion. (2) We then construct 5 more samples where each type belongs to a separate group. We can do this because $t > p$. Because each type belongs to a separate group, cross-referencing the words observed from the samples in (1) and (2) uniquely defines each word–type assignment. Note again that we don't need 6 samples, as the last assignment can be inferred by exclusion. This gives us a total of $9 + 5 = 14$ samples.
261
+
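+ Step (1) of this construction is easy to check mechanically. The sketch below (our illustration, not code from the paper) builds the 9 hand-designed samples and recovers the 10 groups, with the last group inferred by exclusion.
+
+ ```python
+ import numpy as np
+
+ p, t = 6, 10
+ rng = np.random.default_rng(0)
+ word = rng.permutation(p * t).reshape(p, t)  # hidden word for each (property, type)
+
+ # Step (1): sample i is the object whose every property has type i;
+ # the learner observes only the *set* of the 6 words in the message.
+ groups, seen = {}, set()
+ for i in range(t - 1):                        # types 0..8; type 9 by exclusion
+     groups[i] = frozenset(word[k, i] for k in range(p))
+     seen |= groups[i]
+ groups[t - 1] = frozenset(range(p * t)) - seen  # 10th group, no sample needed
+
+ assert all(len(g) == p for g in groups.values())
+ print("9 samples yield", len(groups), "groups of", p, "entangled words each")
+ ```
+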
262
+ # C ADDITIONAL PLOTS
263
+
264
+ We show training curves for various S2P schedules.
265
+
266
+ ![](images/06dd3fc52b2de2ba510ad54c76779778edba00d63c66fab7fdb2c46f1776d957.jpg)
267
+
268
+ ![](images/a5d28cdb339c75ceec1aa0a42ca06c9b502f6ad974dd9794e14b916ba3cd4d18.jpg)
269
+
270
+ ![](images/77f3e02a031a4b340df70fcd91abcb5575d1c462a0a005c0f4fe68a054ce7b31.jpg)
+
+ ![](images/30cb5772e22e597616ae59f96b3669e494df816cfcfd573700e98271219972f6.jpg)
+
+ Figure 8: Training curves for the various S2P methods (sp2sup, sup2sp, rand, sched, sched_frz, sched_rand_frz) in the IBR game described in $\S 4$.
ontheinteractionbetweensupervisionandselfplayinemergentcommunication/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4b22522710439ca7a093720e6504e055de06825a4d62be50e736bdb2afe0353c
3
+ size 460771
ontheinteractionbetweensupervisionandselfplayinemergentcommunication/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:617a7f18af3e6eef888c9fc515393eba368d7b2a4bbbbaf89e2b3c990870de5a
3
+ size 414069
ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/58912334-d640-40cb-adcc-b945fac97af5_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7888af95976ca8e1b55f757bfa51c46e226a5cfe920a9bb0a994f4eda17a8c44
3
+ size 181701
ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/58912334-d640-40cb-adcc-b945fac97af5_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d2aa6ec3097fa83f1f01037484e28819eb2d86d914920f7772682b3097a95e80
3
+ size 222385
ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/58912334-d640-40cb-adcc-b945fac97af5_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d470ca87aed3b3589bf505c7d1a75888eb09c820fe715fdf0e799cca058f9963
3
+ size 1304575
ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/full.md ADDED
@@ -0,0 +1,912 @@
1
+ # ON THE NEED FOR TOPOLOGY-AWARE GENERATIVE MODELS FOR MANIFOLD-BASED DEFENSES
2
+
3
+ Uyeong Jang
4
+
5
+ Department of Computer Sciences
6
+
7
+ University of Wisconsin-Madison
8
+
9
+ Madison, WI, USA
10
+
11
+ wjang@cs.wisc.edu
12
+
13
+ Susmit Jha
14
+
15
+ Computer Science Laboratory
16
+
17
+ SRI International
18
+
19
+ Menlo Park, CA, USA
20
+
21
+ susmit.jha@sri.com
22
+
23
+ Somesh Jha
24
+
25
+ Department of Computer Sciences
26
+
27
+ University of Wisconsin-Madison
28
+
29
+ Madison, WI, USA
30
+
31
+ XaiPient
32
+
33
+ Princeton, NJ, USA
34
+
35
+ jha@cs.wisc.edu
36
+
37
+ # ABSTRACT
38
+
39
+ Machine-learning (ML) algorithms or models, especially deep neural networks (DNNs), have shown significant promise in several areas. However, researchers have recently demonstrated that ML algorithms, especially DNNs, are vulnerable to adversarial examples (slightly perturbed samples that cause misclassification). The existence of adversarial examples has hindered the deployment of ML algorithms in safety-critical sectors, such as security. Several defenses for adversarial examples exist in the literature. One of the important classes of defenses is manifold-based defenses, where a sample is "pulled back" into the data manifold before being classified. These defenses rely on the assumption that data lie in a manifold of lower dimension than the input space. These defenses use a generative model to approximate the input distribution. In this paper, we investigate the following question: do the generative models used in manifold-based defenses need to be topology-aware? We suggest the answer is yes, and we provide theoretical and empirical evidence to support our claim.
40
+
41
+ # 1 INTRODUCTION
42
+
43
+ Machine-learning (ML) algorithms, especially deep neural networks (DNNs), have had resounding success in several domains. However, adversarial examples have hindered their deployment in safety-critical domains, such as autonomous driving and malware detection. Adversarial examples are constructed by an adversary adding a small perturbation to a data point so that it is misclassified. Several algorithms for constructing adversarial examples exist in the literature (Biggio et al., 2013; Szegedy et al., 2013; Goodfellow et al., 2014b; Kurakin et al., 2016a; Carlini & Wagner, 2017; Madry et al., 2017; Papernot et al., 2017). Numerous defenses against adversarial examples have also been explored (Kurakin et al., 2016b; Guo et al., 2017; Sinha et al., 2017; Song et al., 2017; Tramèr et al., 2017; Xie et al., 2017; Dhillon et al., 2018; Raghunathan et al., 2018; Cohen et al., 2019; Dubey et al., 2019).
44
+
45
+ In this paper, we focus on "manifold-based" defenses (Ilyas et al., 2017; Samangouei et al., 2018). The general idea in these defenses is to "pull back" the data point into the data manifold before classification. These defenses leverage the fact that, in several domains, natural data lies in a low-dimensional manifold (henceforth referred to as the manifold assumption) (Zhu & Goldberg, 2009). The data distribution, and hence the actual manifold that the natural data lies in, is usually unknown, so these defenses use a generative model to "approximate" the data distribution. Generative models attempt to learn to generate data according to the underlying data distribution. (The input to a generative model is usually random noise from a known distribution, such as Gaussian or uniform.) There are various types of generative models in the literature, such as the variational autoencoder (VAE) (Kingma & Welling, 2013), the generative adversarial network (GAN) (Goodfellow et al., 2014a), and reversible generative models, e.g., the real-valued non-volume preserving transform (Real NVP) (Dinh et al., 2016).
48
+
49
+ This paper addresses the following question:
50
+
51
+ Do manifold-based defenses need to be aware of the topology of the underlying data manifold?
52
+
53
+ In this paper, we suggest the answer to this question is yes. We demonstrate that if the generative model does not capture the topology of the underlying manifold, it can adversely affect these defenses, since the generative model is being used as an approximation of the underlying manifold. We believe this opens a rich avenue for future work on using topology-aware generative models for defending against adversarial examples.
54
+
55
+ Contributions and Roadmap. We begin with a brief description of related work in Section 2. Section 3 provides the requisite mathematical background. Our main theoretical results are provided in Section 4. Informally, our result says that if the generative model is not topology-aware, there can be a "topological mismatch" between the distribution induced by the generative model and the actual distribution. Section 5 describes our experimental verification of these theoretical results and investigates their ramifications for a manifold-based defense called Invert-and-Classify (INC) (Ilyas et al., 2017; Samangouei et al., 2018).
56
+
57
+ # 2 RELATED WORK
58
+
59
+ # 2.1 GENERATIVE MODELS
60
+
61
+ As a method for sampling high-dimensional data, generative models find applications in various fields of applied math and engineering, e.g., image processing, reinforcement learning, etc. Methods for learning the data-generating distribution with neural networks include the well-known Variational Autoencoders (VAEs) (Kingma & Welling, 2013) and variations of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014a; Radford et al., 2015; Zhao et al., 2016).
62
+
63
+ These generative models learn how to map latent variables into generated samples. The VAE is a variational Bayesian approach: it approximates a posterior distribution over latent vectors (given training samples) by a simpler variational distribution. Like other variational Bayesian methods, the VAE tries to minimize the Kullback-Leibler divergence between the posterior distribution and the variational distribution, by minimizing the reconstruction error of the autoencoder. GANs represent another approach to learning how to transform latent vectors into samples. Unlike other approaches, the GAN learns the target distribution by training two networks, a generator and a discriminator, simultaneously.
64
+
65
+ In addition to generating plausible samples, some generative models construct bijective relations between latent vectors and generated samples, so that the probability density of a generated sample can be estimated. Due to their bijective nature, such generative models are said to be reversible. Some examples are normalizing flow (Rezende & Mohamed, 2015), Masked Autoregressive Flow (MAF) (Papamakarios et al., 2017), Real NVP (Dinh et al., 2016), and Glow (Kingma & Dhariwal, 2018).
66
+
67
+ # 2.2 APPLICATIONS OF GENERATIVE MODELS IN ADVERSARIAL MACHINE LEARNING
68
+
69
+ DNN-based classifiers have been shown to be vulnerable to adversarial attacks (Szegedy et al., 2013; Goodfellow et al., 2014b; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016; Madry et al., 2017). Several hypotheses attempt to explain this vulnerability (Szegedy et al., 2013; Goodfellow et al., 2014b; Tanay & Griffin, 2016; Feinman et al., 2017); one explanation is that adversarial examples lie far away from the data manifold. This idea leads to defenses that make use of the geometry learned from the dataset, by projecting the input to the nearest point on the data manifold.
70
+
71
+ To learn a manifold from a given dataset, generative models can be exploited. The main idea is to approximate the data-generating distribution with a generative model, which facilitates searching over the data manifold by searching over the space of latent vectors. The term Invert-and-Classify (INC) was coined to describe this type of defense (Ilyas et al., 2017), and different types of generative models have been tried to detect adversarial examples (Ilyas et al., 2017; Song et al., 2017; Samangouei et al., 2018). Usually, the projection is done by searching for the latent vector that minimizes the geometric
72
+
73
+ distance (Ilyas et al., 2017; Samangouei et al., 2018). However, despite the promising theoretical background, all of those methods are still vulnerable (Athalye et al., 2018; Ilyas et al., 2017).
74
+
75
+ # 3 BACKGROUND
76
+
77
+ We formally describe data generation, based on the well-known manifold assumption: data lies close to a manifold whose intrinsic dimension is much lower than that of the ambient space. In our model of data generation, we provide a formal definition of the data-generating manifold $M$, on which the data-generating distribution lies, such that $M$ conforms to the manifold assumption.
78
+
79
+ # 3.1 REQUIREMENTS
80
+
81
+ Real-world data tends to be noisy, so the data does not lie exactly on an underlying manifold. We first focus on an ideal case in which data is generated solely from the manifold $M$, without noise.
82
+
83
+ In the setting of classification with $l$ labels, we consider manifolds $M_1, \ldots, M_l \subset \mathbb{R}^n$ that correspond to the generation of data in each class $i \in \{1, \ldots, l\}$, respectively. We assume those manifolds are pair-wise disjoint, i.e., $M_i \cap M_j = \emptyset$ for any $i \neq j$. We set the data-generating manifold $M$ as the disjoint union of those manifolds, $M = \bigcup_{i=1}^{l} M_i$. We assume $M$ to be a compact Riemannian manifold with a volume measure $dM$ induced by its Riemannian metric. When a density function $p_M$ defined on $M$ satisfies some requirements, it is possible to compute probabilities over $M$ via $\int_{\mathbf{x} \in M} p_M(\mathbf{x}) dM(\mathbf{x})$. We call such an $M$, equipped with $p_M$ and $dM$, a data-generating manifold. We refer to Appendix A and Appendix D.1 for details about the definitions and requirements on $p_M$.
84
+
85
+ In practice, data generation is affected by noise, so not all data lie on the data-generating manifold. Therefore, we incorporate the noise as an artifact of data generation and extend the density $p_{M}$ on $M$ to a density $p$ on the entire $\mathbb{R}^n$ by assigning local noise densities on $M$. We consider a procedure that (1) samples a point $\mathbf{x}_o$ from $M$ first, and (2) adds a noise vector $\mathbf{n}$ to get an observed point $\hat{\mathbf{x}} = \mathbf{x}_o + \mathbf{n}$. Here, the noise $\mathbf{n}$ is a random vector sampled from a probability distribution centered at $\mathbf{x}_o$, whose noise density function $\nu_{\mathbf{x}_o}$ satisfies $\nu_{\mathbf{x}_o}(\mathbf{n}) = \nu_{\mathbf{x}_o}(\hat{\mathbf{x}} - \mathbf{x}_o) = p(\hat{\mathbf{x}} \mid \mathbf{x}_o)$.
86
+
87
+ # 3.2 EXTENDING DENSITY
88
+
89
+ When $M$ is equipped with a density function $p_M$ and a measure $dM$ that we can integrate over $M$ , we can compute the density after random noise is added as follows.
90
+
91
+ $$
92
+ p(\hat{\mathbf{x}}) = \int_{\mathbf{x} \in M} \nu_{\mathbf{x}}(\hat{\mathbf{x}} - \mathbf{x}) \, p_M(\mathbf{x}) \, dM(\mathbf{x}) \tag{1}
93
+ $$
94
+
95
+ Since $\nu_{\mathbf{x}}(\hat{\mathbf{x}} - \mathbf{x})$ is a function of $\mathbf{x}$ when $\hat{\mathbf{x}}$ is fixed, computing this integral can be viewed as computing the expectation of a real-valued function defined on $M$. Computing such expectations has been explored in Pennec (1999). A demonstrative example is provided in Appendix B, and this extension is further discussed in Appendix D.2.
96
+
97
+ # 3.3 GENERATIVE MODELS
98
+
99
+ A generative model tries to find a statistical model for the joint density $p(\mathbf{x},y)$ (Ng & Jordan, 2002). We mainly discuss a specific type that learns a transform from one distribution $\mathcal{D}_Z$ to another target distribution $\mathcal{D}_X$. Commonly, a latent vector $\mathbf{z} \sim \mathcal{D}_Z$ is sampled from a simpler distribution, e.g., a Gaussian, and then a pre-trained deterministic function $G$ maps it to a sample $\mathbf{x} = G(\mathbf{z})$.
100
+
101
+ Specifically, we focus on reversible generative models to facilitate the comparison between the density of generated samples and the target density. In this approach, the dimensions of latent vectors are set to be the same as those of the samples to be generated. Also, for a given $\mathbf{x}$ , the density of $\mathbf{x}$ is estimated by the change of variable formula (equation (2) in Section 5.1).
102
+
103
+ # 3.4 INVERT AND CLASSIFY (INC) APPROACH FOR ROBUST CLASSIFICATION
104
+
105
+ As the data-generating manifold $M$ contains class-wise disjoint manifolds, there is a classifier $f$ on $\mathbb{R}^n$ separating these manifolds. If $f$ separates the manifolds of $M$, any misclassified point must lie outside $M$. Therefore, to change a correct classification near a manifold, an adversary must push the sample farther away from the manifold. By projecting misclassified points back to the nearest manifold, we may expect the projection to correct the classification. The INC method (Ilyas et al., 2017; Samangouei et al., 2018) implements this using a generative model.
106
+
107
+ The main idea of INC is to invert the perturbed sample by projecting to the nearest point on the data-generating manifold. Ideally, the data-generating manifold $M$ is accessible. For any point $(\hat{\mathbf{x}}, y)$ with $f(\hat{\mathbf{x}}) \neq y$ , out-of-manifold perturbation is reduced by projecting $\hat{\mathbf{x}}$ to $\mathbf{x}^*$ on $M$ . The manifold $M$
108
+
109
+ is unknown in practice. However, as $M$ is the data-generating manifold of $\mathcal{D}_X$ , a generative model $G$ for $\mathcal{D}_X$ is trained to approximate $M$ . Then, searching over $M$ is replaced by searching over latent vectors of $G$ . More details about INC implementations are described in Section 5.1.
110
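+
+ As an illustration, the following is a minimal sketch of this latent-space projection step, assuming a differentiable generative model `G` and plain gradient descent with Adam on the squared geometric distance; the paper's exact objective and optimizer (cf. equation (11) and Section 5.1) may differ.
+
+ ```python
+ import tensorflow as tf
+
+ def inc_project(G, x_hat, z_init, steps=500, lr=1e-2):
+     # Search over latent vectors: z* = argmin_z ||G(z) - x_hat||_2^2,
+     # then return G(z*), the projection of x_hat onto the learned manifold.
+     z = tf.Variable(tf.constant(z_init, dtype=tf.float32))
+     opt = tf.keras.optimizers.Adam(learning_rate=lr)
+     for _ in range(steps):
+         with tf.GradientTape() as tape:
+             loss = tf.reduce_sum(tf.square(G(z) - x_hat))
+         opt.apply_gradients([(tape.gradient(loss, z), z)])
+     return G(z)
+ ```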
+
111
+ # 4 TOPOLOGICAL PROPERTIES OF DATA FROM GENERATIVE MODELS
112
+
113
+ In this paper, we study the significance of differences between the topological properties of the latent vector distribution and those of the target distribution when learning generative models. Initial information about the topology of the target distribution<sup>1</sup> is crucial to generative model performance. Specifically, if the number of connected components in the superlevel set differs between the target distribution and the distribution of the latent vector, then no continuous generative model $G$ can approximate the target distribution properly (irrespective of the training method). Due to the space limit, all proofs are presented in Appendix C.
114
+
115
+ # 4.1 TOPOLOGY OF DISTRIBUTIONS BASED ON SUPERLEVEL SETS
116
+
117
+ The data-generating manifold is a geometric shape that corresponds to the distribution. However, this manifold is not accessible in most cases and we only have indirect access via the distribution extended from it. Therefore, we consider finding a shape from the extended density so that this "shape" successfully approximates the data-generating manifold.
118
+
119
+ $\lambda$-density superlevel set. We use the concept of a $\lambda$-density superlevel set to capture geometric features of the density function. Simply put, for a density function $p$ and a threshold $\lambda > 0$, the $\lambda$-density superlevel set $L_{p,\lambda}$ is the inverse image $p^{-1}([\lambda, \infty))$. Our theoretical contribution is the existence, under proper conditions on the noise density, of a $\lambda$-density superlevel set reflecting the topology of the data-generating manifold.
120
+
121
+ Assumptions on noise density. For a family of densities $\{\nu_{\mathbf{x}}\}_{\mathbf{x}\in M}$ , we require the noise $\nu_{\mathbf{x}}$ to satisfy a number of assumptions. These assumptions facilitate theoretical discussion about the superlevel set reflecting the data-generating manifold. In the following definition, we denote a Euclidean ball of radius $\delta$ centered at $\mathbf{x}$ by $B_{\delta}(\mathbf{x})$ .
122
+
123
+ Definition 1. Let $\nu_{\mathbf{x}}$ be a family of noise densities.
124
+
125
+ - $\lambda$ is small-enough if $L_{\nu_{\mathbf{x}},\lambda}$ is nonempty for all $\mathbf{x} \in M$.
126
+ - The $\lambda$-bounding radius $\delta_{\mathbf{x},\lambda} \coloneqq \min \{\delta \mid L_{\nu_{\mathbf{x}},\lambda} \subseteq \overline{B_{\delta}(\mathbf{0})}\}$ is the smallest radius such that $\overline{B_{\delta}(\mathbf{0})}$ contains $L_{\nu_{\mathbf{x}},\lambda}$. When $\max_{\mathbf{x} \in M} \delta_{\mathbf{x},\lambda}$ exists for some $\lambda$, we denote the maximum value by $\delta_{\lambda}$.
127
+ - The $\lambda$-guaranteeing radius $\epsilon_{\mathbf{x},\lambda} \coloneqq \max \{\epsilon \mid \overline{B_{\epsilon}(\mathbf{0})} \subseteq L_{\nu_{\mathbf{x}},\lambda}\}$ is the largest radius such that $L_{\nu_{\mathbf{x}},\lambda}$ contains $\overline{B_{\epsilon}(\mathbf{0})}$. When $\min_{\mathbf{x} \in M} \epsilon_{\mathbf{x},\lambda}$ exists for some $\lambda$, we denote the minimum value by $\epsilon_{\lambda}$.
128
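+
+ As a concrete sanity check on these radii (our own worked example, not from the paper), consider isotropic Gaussian noise $\nu_{\mathbf{x}}(\mathbf{n}) = (2\pi\sigma^2)^{-n/2} \exp(-\|\mathbf{n}\|^2 / 2\sigma^2)$, as used in the experiments of Section 5.1. For any small-enough $\lambda \leq (2\pi\sigma^2)^{-n/2}$, the superlevel set $L_{\nu_{\mathbf{x}},\lambda}$ is a closed ball, so both radii coincide and do not depend on $\mathbf{x}$:
+
+ $$
+ \nu_{\mathbf{x}}(\mathbf{n}) \geq \lambda \iff \|\mathbf{n}\|^2 \leq 2\sigma^2 \ln \frac{(2\pi\sigma^2)^{-n/2}}{\lambda}, \qquad \text{so} \qquad \delta_{\mathbf{x},\lambda} = \epsilon_{\mathbf{x},\lambda} = \sigma \sqrt{2 \ln \frac{(2\pi\sigma^2)^{-n/2}}{\lambda}}.
+ $$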
+
129
+ ![](images/31aaca1ae3b77988535ae51d1b9dd293acdcc15e2231ea476914947a359cade6.jpg)
130
+ Figure 1: Example superlevel set $L_{\nu_{\mathbf{x}},\lambda}$ with $\lambda$-bounding radius $\delta_{\mathbf{x},\lambda}$ and $\lambda$-guaranteeing radius $\epsilon_{\mathbf{x},\lambda}$.
131
+
132
+ Sufficient conditions for the existence of these radii are discussed in Appendix D.3. The properties of these radii are summarized in Lemma 1. (The proof follows from Definition 1).
133
+
134
+ Lemma 1. Let $\nu_{\mathbf{x}}$ be a family of noise densities and let $\lambda$ be small-enough. Then,
135
+
136
+ $$
137
+ \left\| \hat {\mathbf {x}} - \mathbf {x} \right\| > \delta_ {\lambda} \Longrightarrow \nu_ {\mathbf {x}} (\hat {\mathbf {x}} - \mathbf {x}) < \lambda
138
+ $$
139
+
140
+ $$
141
+ \| \hat {\mathbf {x}} - \mathbf {x} \| \leq \epsilon_ {\lambda} \Longrightarrow \nu_ {\mathbf {x}} (\hat {\mathbf {x}} - \mathbf {x}) \geq \lambda
142
+ $$
143
+
144
+ whenever $\delta_{\lambda}$ and $\epsilon_{\lambda}$ exist.
145
+
146
+ Figure 1 shows an example of the superlevel set $L_{\nu_{\mathbf{x}},\lambda}$ of the noise density $\nu_{\mathbf{x}}$ at a point $\mathbf{x}$, together with its $\lambda$-bounding radius $\delta_{\mathbf{x},\lambda}$ and $\lambda$-guaranteeing radius $\epsilon_{\mathbf{x},\lambda}$.
147
+
148
+ Finally, we define the continuous variation of noise densities $\nu_{\mathbf{x}}$ over changes of $\mathbf{x} \in M$ . For the continuous variation, we require the continuity of both radii $\delta_{\mathbf{x},\lambda}$ and $\epsilon_{\mathbf{x},\lambda}$ as real-valued functions of $\mathbf{x} \in M$ for any fixed value of $\lambda$ .
149
+
150
+ Definition 2 (Continuously varying radii). Noise densities $\nu_{\mathbf{x}}$ have continuously varying radii if, for a fixed small-enough $\lambda$ , both $\lambda$ -bounding radius $\delta_{\mathbf{x},\lambda}$ and $\lambda$ -guaranteeing radius $\epsilon_{\mathbf{x},\lambda}$ are continuous functions of $\mathbf{x} \in M$ .
151
+
152
+ When noise densities have continuously varying radii, with the compactness of $M$ , we can apply the extreme value theorem to guarantee the existence of both $\delta_{\lambda} = \max_{\mathbf{x}\in M}\delta_{\mathbf{x},\lambda}$ and $\epsilon_{\lambda} = \min_{\mathbf{x}\in M}\epsilon_{\mathbf{x},\lambda}$ .
153
+
154
+ # 4.2 MAIN THEOREM
155
+
156
+ Our main theorem establishes, under the assumptions on noise densities from Section 4.1, the existence of a $\lambda$ such that:
157
+
158
+ - (Inclusion) The $\lambda$ -density superlevel set $L_{p,\lambda}$ includes the data-generating manifold $M$ .
159
+ - (Separation) The $\lambda$ -density superlevel set $L_{p,\lambda}$ consists of connected components such that each component contains at most one manifold $M_i$ .
160
+
161
+ Definition 3. Consider a data-generating manifold $M$ with density function $p_M$ . For a radius $\epsilon > 0$ , we define $\omega_{\epsilon}$ to be the minimum (over $\mathbf{x} \in M$ ) probability of sampling $\mathbf{x}' \in M$ in an $\epsilon$ -ball $B_{\epsilon}(\mathbf{x})$ .
162
+
163
+ $$
164
+ \omega_{\epsilon} := \min_{\mathbf{x} \in M} \Pr_{\mathbf{x}' \sim p_M} \left[ \mathbf{x}' \in B_{\epsilon}(\mathbf{x}) \right]
165
+ $$
166
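+
+ As an illustration of this quantity (our own computation, under the uniform two-moons density of Appendix B with $\Pr[y = i] = \frac{1}{2}$ and $p_i = \frac{1}{\pi}$): each $M_i$ is a unit-speed curve of length $\pi$, and arc length upper-bounds Euclidean distance, so the arc of $M_i$ inside $B_{\epsilon}(\mathbf{x})$ has length at least $\min(\epsilon, \pi)$ for any $\mathbf{x} \in M_i$, giving
+
+ $$
+ \omega_{\epsilon} \geq \Pr[y = i] \cdot \frac{\min(\epsilon, \pi)}{\pi} = \frac{\min(\epsilon, \pi)}{2\pi}.
+ $$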
+
167
+ Definition 4 (Class-wise distance). Let $(X,d)$ be a metric space and let $M = \bigcup_{i=1}^{l} M_i$ be a data-generating manifold in $X$ . The class-wise distance $d_{\mathrm{cw}}$ of $M$ is defined as,
168
+
169
+ $$
170
+ d_{\mathrm{cw}} = \min_{\substack{i,j\in [l]\\ i\neq j}}\min_{\substack{\mathbf{x}\in M_{i}\\ \mathbf{x}^{\prime}\in M_{j}}}d(\mathbf{x},\mathbf{x}^{\prime})
171
+ $$
172
+
173
+ With the definitions above, we proved the following main theorem.
174
+
175
+ Theorem 1. Pick any small-enough threshold $\lambda$ . Fix a value $\lambda^{*} \leq \omega_{\epsilon}\lambda$ and let $\delta^{*} = \delta_{\lambda^{*}}$ be the $\lambda^{*}$ -bounding radius. If $d_{\mathrm{cw}}$ of $M$ is larger than $2\delta^{*}$ , then the superlevel set $L_{p,\lambda^{*}}$ satisfies the following properties.
176
+
177
+ - $L_{p,\lambda^*}$ contains the data-generating manifold $M$ .
178
+ - Each connected component of $L_{p,\lambda^*}$ contains at most one manifold $M_i$ of class $i$.
179
+
180
+ # 4.3 APPLICATION TO THE GENERATIVE MODEL
181
+
182
+ We show an application of Theorem 1. We denote the target distribution by $\mathcal{D}_X$ , the latent distribution by $\mathcal{D}_Z$ , and the distribution of $G(\mathbf{z})$ where $\mathbf{z} \sim \mathcal{D}_Z$ by $\mathcal{D}_{G(Z)}$ . Similarly, we denote the corresponding $\lambda$ -density superlevel sets of densities by $L_{\lambda}^{X}$ , $L_{\lambda}^{Z}$ , and $L_{\lambda}^{G(Z)}$ . We assume the generative model $G$ to be continuous. Then, we get the following theorem regarding the difference between $L_{\lambda}^{X}$ and $L_{\lambda}^{G(Z)}$ in the number of connected components.
183
+
184
+ Theorem 2. Let $\mathcal{D}_Z$ be a mixture of $n_Z$ multivariate Gaussian distributions, and let the data-generating manifold of $\mathcal{D}_X$ contain $n_X$ components. Let $G$ be a continuous generative model for $\mathcal{D}_X$ using latent vectors from $\mathcal{D}_Z$. Let $\lambda^*$ be the threshold value from Theorem 1. If $n_Z < n_X$, then $L_{\lambda^*}^X$ and $L_{\lambda^*}^{G(Z)}$ do not agree on the number of connected components.
185
+
186
+ We can use this theorem to deduce the need for adequate information about the target distribution when training a generative model, especially if it is used for a security-critical application, e.g., INC.
187
+
188
+ Corollary 1. If the conditions of Theorem 2 are satisfied, there is a point $\hat{\mathbf{x}}\in \mathbb{R}^n$ such that $\hat{\mathbf{x}}\notin L_{\lambda^{*}}^{X}$ but $\hat{\mathbf{x}}\in L_{\lambda^{*}}^{G(Z)}$.
189
+
190
+ As a result, with density at least $\lambda^{*}$, $G$ generates a point $\hat{\mathbf{x}}$ that is unlikely to be generated by the target distribution. Since INC searches over points generated by $G$, the INC method can output an out-of-manifold point as a solution of the optimization (12).
191
+
192
+ <sup>2</sup> In Appendix D.4, Theorem 2 is generalized to more topological properties.
193
+
194
+ | two-moons | spirals | circles |
+ | --- | --- | --- |
+ | $M_0$: $x_1 = \cos\theta$, $x_2 = \sin\theta$ | $M_0$: $x_1 = \frac{1}{3}e^t\cos t$, $x_2 = \frac{1}{3}e^t\sin t$ | $M_0$: $x_1 = \cos\theta$, $x_2 = \sin\theta$ |
+ | $M_1$: $x_1 = 1 - \cos\theta$, $x_2 = 1 - \sin\theta + \frac{1}{2}$ | $M_1$: $x_1 = \frac{1}{3}e^t\cos(t + \frac{2}{3}\pi)$, $x_2 = \frac{1}{3}e^t\sin(t + \frac{2}{3}\pi)$ | $M_1$: $x_1 = \frac{1}{2}\cos\theta$, $x_2 = \frac{1}{2}\sin\theta$ |
+ | for $\theta \in [0, \pi]$ | $M_2$: $x_1 = \frac{1}{3}e^t\cos(t + \frac{4}{3}\pi)$, $x_2 = \frac{1}{3}e^t\sin(t + \frac{4}{3}\pi)$ | for $\theta \in [0, 2\pi]$ |
+ |  | for $t \in [0, T]$ where $T = \ln(15/\sqrt{2} + 1)$ |  |
195
+
196
+ Table 1: Parameterizations of the datasets used in the experiments.
197
+
198
+ # 5 EXPERIMENTAL RESULTS
199
+
200
+ In this section, we empirically demonstrate the consequences of the two theorems and explore their implications for the INC defense. Our main goals are to provide (1) empirical support for the applicability of Theorem 2 and Corollary 1 via toy datasets, and (2) evidence that INC performance improves when using a class-aware generative model. The main questions and the corresponding answers are shown below.
201
+
202
+ (Q1) Can we experimentally verify the results of Section 4.3? Specifically, can we find cases in which the superlevel sets of $\mathcal{D}_X$ and $\mathcal{D}_{G(Z)}$ have different numbers of connected components?
203
+ (Q2) How does INC fail when the generative model is ignorant of topology information?
204
+ (Q3) Does the class-aware generative model improve the INC performance?
205
+
206
+ (A1) Theorem 2 and Corollary 1 can be verified by plotting the $\lambda$-density superlevel set. In particular, we visualize $\lambda$-density superlevel sets of $\mathcal{D}_{G(Z)}$ reflecting Theorem 2 and Corollary 1.
207
+ (A2) When the generative model is not trained with topology information, naive INC may fail. We identified two possible reasons for INC failure: (1) the choice of a bad initial point and (2) an out-of-manifold search due to the non-separation of the density superlevel set.
208
+ (A3) The performance of INC is improved by training generative models with topology information about the target distribution. We improved the average INC performance, decreasing the error induced by projection to $30\%$ of that of the class-ignorant counterpart.
209
+
210
+ In the rest of this section, we provide a more detailed description of our experiments. First, we briefly describe the experimental setup in Section 5.1: datasets, latent vector distributions, training method, and INC implementation. Then, Sections 5.2-5.4 describe the experimental results regarding the findings summarized above. Section 5.5 contains an additional experiment illustrating the changes of decision boundaries by INC application.
211
+
212
+ # 5.1 EXPERIMENTAL SETUP
213
+
214
+ Datasets. For all experiments, we use three toy datasets in $\mathbb{R}^2$: two-moons, spirals, and circles. Table 1 summarizes the parameterizations of each data-generating manifold and Figure 2 shows plots of the corresponding data-generating manifolds. To construct the training set, we first sample 1000 points uniformly from each manifold $M_{i}$; each point is then perturbed by isotropic Gaussian noise $\mathcal{N}(0,\sigma^2 I_2)$ with $\sigma = 0.05$. Before training, each training set is standardized using the preprocessing module of the Scikit-learn package.
215
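+
+ For concreteness, a minimal sketch of this sampling and preprocessing for two-moons (parameterizations from Table 1; we sample uniformly in $\theta$, which coincides with uniform sampling on the manifold since both curves have unit speed, cf. Appendix B):
+
+ ```python
+ import numpy as np
+ from sklearn.preprocessing import StandardScaler
+
+ def sample_two_moons(n_per_class=1000, sigma=0.05, seed=0):
+     rng = np.random.default_rng(seed)
+     t0 = rng.uniform(0.0, np.pi, n_per_class)
+     m0 = np.stack([np.cos(t0), np.sin(t0)], axis=1)              # points on M_0
+     t1 = rng.uniform(0.0, np.pi, n_per_class)
+     m1 = np.stack([1.0 - np.cos(t1), 1.0 - np.sin(t1) + 0.5], axis=1)  # M_1
+     X = np.concatenate([m0, m1])
+     y = np.repeat([0, 1], n_per_class)                           # class labels
+     X += rng.normal(scale=sigma, size=X.shape)    # isotropic Gaussian noise
+     return StandardScaler().fit_transform(X), y   # standardization preprocessing
+ ```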
+
216
+ Latent vector distributions. For the latent vector distributions $\mathcal{D}_Z$, we prepared three different mixtures of $n_Z$ Gaussian distributions with $n_Z \in \{1,2,3\}$. When $n_Z = 1$, we simply use $\mathcal{N}(\mathbf{0},I_2)$. When $n_Z = 2,3$, we arranged $n_Z$ Gaussian distributions along a circle of radius $R = 2.5$, so that the $i$-th Gaussian has mean $\mu_i = \left(-R\sin \left(\frac{2\pi i}{n_Z}\right), R\cos \left(\frac{2\pi i}{n_Z}\right)\right)$, with $\sigma = 0.5$ for $n_Z = 2$ and $\sigma = 0.3$ for $n_Z = 3$. Then, the uniform mixture of the arranged Gaussians is used as $\mathcal{D}_Z$. In Figure 3 (top row), we visualize the connected components corresponding to the latent vector distributions.
217
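+
+ A sketch of these latent distributions with TensorFlow Probability (the authors' exact construction is not shown in the paper):
+
+ ```python
+ import numpy as np
+ import tensorflow_probability as tfp
+ tfd = tfp.distributions
+
+ def latent_distribution(n_z, R=2.5):
+     if n_z == 1:
+         return tfd.MultivariateNormalDiag(loc=[0.0, 0.0])        # N(0, I_2)
+     sigma = 0.5 if n_z == 2 else 0.3
+     locs = np.array([[-R * np.sin(2 * np.pi * i / n_z),
+                       R * np.cos(2 * np.pi * i / n_z)] for i in range(n_z)],
+                     dtype=np.float32)
+     # Uniform mixture of n_z isotropic Gaussians arranged on a circle of radius R.
+     return tfd.MixtureSameFamily(
+         mixture_distribution=tfd.Categorical(probs=[1.0 / n_z] * n_z),
+         components_distribution=tfd.MultivariateNormalDiag(
+             loc=locs, scale_diag=np.full((n_z, 2), sigma, dtype=np.float32)))
+ ```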
+
218
+ Training generative models. Our experiments mostly use the Tensorflow Probability (Dillon et al., 2017) library, which contains implementations of reversible generative models. Specifically, the Tensorflow Probability library contains an implementation of the Real NVP coupling layer that we used as a building block of our models. The default template provided by Tensorflow Probability
219
+
220
+ ![](images/68b7bf19d04fdf2dad6fdaa1cca0fd864a63b77e5b4ecc78f3b20928df83582f.jpg)
221
+ (a) two-moons
222
+
223
+ ![](images/16fe0ef16e686279c2ae65d529fc7ccaa73b9d5a3d5ff8f3a452fa3085da6eab.jpg)
224
+ (b) spirals
225
+
226
+ ![](images/4ea7ef9c7a2370bb379fecfbcc534885dbeb52e4e279effd51f7f2689ba19390.jpg)
227
+ (c) circles
228
+ Figure 2: Data-generating manifolds used in the experiments
229
+
230
+ library was used to construct each Real NVP coupling layer with two hidden layers of 128 units. Each model uses eight coupling layers; each coupling layer is followed by a permutation exchanging the two dimensions of $\mathbb{R}^2$, except for the last one.
231
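+
+ A minimal sketch of this architecture with TensorFlow Probability bijectors, using the hyperparameters described above (the authors' exact code is not given in the paper):
+
+ ```python
+ import tensorflow_probability as tfp
+ tfb, tfd = tfp.bijectors, tfp.distributions
+
+ def make_real_nvp(num_coupling=8, hidden=128):
+     bijectors = []
+     for i in range(num_coupling):
+         # Coupling layer: transforms one coordinate of R^2 conditioned on
+         # the other, via a template with two hidden layers of `hidden` units.
+         bijectors.append(tfb.RealNVP(
+             num_masked=1,
+             shift_and_log_scale_fn=tfb.real_nvp_default_template(
+                 hidden_layers=[hidden, hidden])))
+         if i < num_coupling - 1:
+             bijectors.append(tfb.Permute(permutation=[1, 0]))  # swap dimensions
+     return tfb.Chain(list(reversed(bijectors)))
+
+ # D_G(Z): the latent distribution pushed through G. Its log_prob applies
+ # the change of variables formula (2), yielding the loss l_ci defined below.
+ G = make_real_nvp()
+ model = tfd.TransformedDistribution(tfd.MultivariateNormalDiag(loc=[0.0, 0.0]), G)
+ ```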
+
232
+ We describe the details of the training procedure of the generative models used in the experiments. We prepared two different types of generative models: class-ignorant and class-aware.
233
+
234
+ The class-ignorant type is the usual Real NVP model. This model uses the empirical estimate of the negative log-likelihood over a training batch $\{\mathbf{x}_1,\dots ,\mathbf{x}_m\}$ as its training loss.
235
+
236
+ $$
237
+ \ell_ {\mathrm {c i}} = - \frac {1}{m} \sum_ {t = 1} ^ {m} \log \left(p _ {X} \left(\mathbf {x} _ {t}\right)\right)
238
+ $$
239
+
240
+ The density $p_X$ of $\mathcal{D}_X$ is estimated by applying the change of variables formula,
241
+
242
+ $$
243
+ p _ {X} (\mathbf {x}) = p _ {Z} (\mathbf {z}) \left| \det \left(\frac {\partial G (\mathbf {z})}{\partial \mathbf {z} ^ {T}}\right) \right| ^ {- 1} \tag {2}
244
+ $$
245
+
246
+ where $p_Z$ is the density of $\mathcal{D}_Z$ and $\frac{\partial G(\mathbf{z})}{\partial\mathbf{z}^T}$ is the Jacobian of $G$ as a function from $\mathbb{R}^n$ to itself.
247
+
248
+ The class-aware type is the Real NVP model trained with information about the number of connected components, i.e. the number of class labels $l$ . Using the number of labels, the densities $p_{X}$ and $p_{Z}$ can be decomposed as follows.
249
+
250
+ $$
251
+ p _ {X} (\mathbf {x}) = \sum_ {i \in \{1, \dots , l \}} \Pr [ y = i ] p _ {X, i} (\mathbf {x})
252
+ $$
253
+
254
+ $$
255
+ p _ {Z} (\mathbf {z}) = \sum_ {i \in \{1, \dots , l \}} \Pr [ y = i ] p _ {Z, i} (\mathbf {z}) \tag {3}
256
+ $$
257
+
258
+ where $p_{X,i}(\mathbf{x}) = p_X(\mathbf{x}|y = i)$ and each $p_{Z,i}$ is the $i$-th Gaussian component described above. Since $\operatorname{Pr}[y = i]$ is not generally known, the uniform distribution $\operatorname{Pr}[y = i] = \frac{1}{l}$ over the $l$ classification labels is used.
259
+
260
+ The main idea is class-wise training, i.e., training each $p_{X,i}$ from each $p_{Z,i}$ . Applying the change of variables formula for each class $i$ ,
261
+
262
+ $$
263
+ p _ {X, i} (\mathbf {x}) = p _ {Z, i} (\mathbf {z}) \left| \det \left(\frac {\partial G (\mathbf {z})}{\partial \mathbf {z} ^ {T}}\right) \right| ^ {- 1} \tag {4}
264
+ $$
265
+
266
+ Combining equations (3) and (4), we get the change of variables formula (2). We define the class-wise loss function $\ell_i$ for class-wise training as follows.
267
+
268
+ $$
269
+ \ell_ {i} = - \frac {1}{m _ {i}} \sum_ {t = 1} ^ {m} \mathbb {1} [ y _ {t} = i ] \log (p _ {X, i} (\mathbf {x} _ {t}))
270
+ $$
271
+
272
+ where $m_{i}$ is the number of training samples in class $i$ . Then, we train a generative model using the weighted sum of $\ell_{i}$ as the training loss function.
273
+
274
+ $$
275
+ \ell_ {\mathrm {c a}} = \sum_ {i \in \{1, \dots , l \}} \Pr [ y = i ] \ell_ {i}
276
+ $$
277
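+
+ A sketch of this class-aware loss, assuming a shared bijector `G` (as in the Real NVP sketch above) and a list `z_components` holding the $l$ Gaussian components $p_{Z,i}$ of $\mathcal{D}_Z$; `TransformedDistribution.log_prob` computes $\log p_{X,i}$ via the per-class change of variables (4):
+
+ ```python
+ import tensorflow as tf
+ import tensorflow_probability as tfp
+ tfd = tfp.distributions
+
+ def class_aware_loss(G, z_components, x, y):
+     l = len(z_components)
+     losses = []
+     for i, p_z_i in enumerate(z_components):
+         # log p_{X,i}(x): density of the i-th latent component pushed through G.
+         log_p_i = tfd.TransformedDistribution(p_z_i, G).log_prob(x)
+         mask = tf.cast(tf.equal(y, i), log_p_i.dtype)     # indicator 1[y_t = i]
+         m_i = tf.maximum(tf.reduce_sum(mask), 1.0)        # samples in class i
+         losses.append(-tf.reduce_sum(mask * log_p_i) / m_i)  # class-wise l_i
+     return tf.add_n(losses) / l   # l_ca with uniform weights Pr[y = i] = 1/l
+ ```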
+
278
+ Each model was trained for 30,000 iterations. For each iteration, a batch of 200 random samples was chosen from the two-moons and circles datasets, and a batch of 300 random samples from the spirals dataset. For the choice of latent vector distribution, we chose the mixture of $l - 1$ Gaussians for the class-ignorant type, whereas we chose the mixture of $l$ Gaussians for the class-aware type.
279
+
280
+ ![](images/a26525ce5120a4c2600c3b56a73c63fd4576d3415b861c7e99a24b33105f69cc.jpg)
281
+ (a) Isotropic Gaussian
282
+
283
+ ![](images/516470471a9167ed5ebb27c339e7d4647d5302a92245b16349febac7dfea6318.jpg)
284
+ (b) Mixture of 2 Gaussians
285
+
286
+ ![](images/730b4e9f66b07455b23ab37e1f39a15a3010a6b34f22d939939595e53911f6d4.jpg)
287
+ (c) Mixture of 3 Gaussians
288
+
289
+ ![](images/60f011887cf7f8ea234b077058c322223c1fea716ba3f505b587767764800a00.jpg)
290
+ (d) two-moons, class-ignorant
291
+
292
+ ![](images/0daffc613ba36e80d11d13194ab850f561abc2d64365cad5bcb7716c9dddefd9.jpg)
293
+ (e) spirals, class-ignorant
294
+
295
+ ![](images/1b29b792339bc1eb5c1d43510d1bf9c17cff59f607a2e2eba86dacda49bd494d.jpg)
296
+ (f) circles, class-ignorant
297
+
298
+ ![](images/e34bcc3a1871267ed960534e8e1c742bd6b43e60072fe7b75f407dd114c088e8.jpg)
299
+ (g) two-moons, class-aware
300
+
301
+ ![](images/59ca630a384a5dde44b91ec339780e335f16071603ea26018e71800628aea8c3.jpg)
302
+ (h) spirals, class-aware
303
+
304
+ ![](images/5230dbd21ad61ccf82f51bc20e89e78642ff03ffc5b541dddf6d35b2618761c9.jpg)
305
+ (i) circles, class-aware
306
+ Figure 3: $\lambda$ -density superlevel sets of $\mathcal{D}_Z$ and $\mathcal{D}_{G(Z)}$ with $\lambda = 0.01$ . Top row: $\mathcal{D}_Z$ for $n_Z = 1,2,3$ . Middle row: $\mathcal{D}_{G(Z)}$ , class-ignorant model. Bottom row: $\mathcal{D}_{G(Z)}$ , class-aware model.
307
+
308
+ # 5.2 VISUAL VERIFICATION OF THEOREMS
309
+
310
+ The goal of this section is to verify Theorem 2 and Corollary 1 by visualizing superlevel sets reflecting the statements. Figure 3 shows the $\lambda$-density superlevel sets of the densities of $\mathcal{D}_{G(Z)}$ using the same threshold $\lambda = 0.01$. The middle row and the bottom row show the results from the class-ignorant models and from the class-aware models, respectively. Each column corresponds to one dataset. All distributions are scaled according to the standardization preprocessing applied before training. In general, the superlevel set components are separated when the generative model is class-aware. On the contrary, the class-ignorant generative models introduce connections between the components, as anticipated by Corollary 1. Due to these connections, the class-ignorant generative models contain fewer connected components in their superlevel sets; this verifies Theorem 2 for our choice of $\lambda^{*} = 0.01$.
311
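+
+ The superlevel sets in Figure 3 can be reproduced by thresholding the model density on a grid; a sketch (the grid extent is our assumption, and `log_prob` is the model's log density, e.g., from the TFP sketches in Section 5.1):
+
+ ```python
+ import numpy as np
+
+ def superlevel_mask(log_prob, lam=0.01, lo=-3.0, hi=3.0, n=300):
+     xs = np.linspace(lo, hi, n, dtype=np.float32)
+     grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
+     p = np.exp(np.asarray(log_prob(grid)))    # model density p(x) on the grid
+     return p.reshape(n, n) >= lam             # L_{p,lambda} as a boolean mask
+ ```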
+
312
+ # 5.3 INC FAILURE DUE TO THE LACK OF INFORMATION ON THE DISTRIBUTION TOPOLOGY
313
+
314
+ We present how the non-separation of superlevel set components influences the performance of INC, and provide two possible explanations of why INC fails. First, a bad initialization causes a suboptimal solution on a manifold that is not the nearest to the input. Second, an artifact induced by the topological difference produces an out-of-manifold solution.
315
+
316
+ Figure 4 presents three visualized examples of INC with a class-ignorant generative model for two-moons. In each plot, the black dot is the given point $\hat{\mathbf{x}}$, the cyan dot is the initial point obtained by choosing $\mathbf{z}$ randomly from the latent vector distribution $\mathcal{N}(\mathbf{0}, I_2)$, and the magenta dot is the final point output by INC. All intermediate points of the optimization are plotted with dots whose colors change gradually from cyan to magenta. The two-moons training set used in the training procedure is plotted in gray.
317
+
318
+ ![](images/e27ccc95865514d822553fae10b6bedc80909df58588a2e6858c4347e0fe3d8a.jpg)
319
+ (a) INC with an ideal initialization
320
+
321
+ ![](images/7e59719a0b753fc7269fd184dd415d97b33cf9d7554eeb65bda2c73fbfd70515.jpg)
322
+ Figure 4: Successful and failed cases of INC using a class-ignorant generative model of two-moons.
323
+
324
+ ![](images/bacfb83d7e09038b675e988f8154102e003d23115a10f47197821a69c95bceff.jpg)
325
+ (b) INC with a bad initialization
326
+ (c) INC searching out of manifold
327
+
328
+ | Dataset | class-ignorant | class-aware |
+ | --- | --- | --- |
+ | Two-moons | 0.647 (0.666) | 0.148 (0.208) |
+ | Spirals | 1.523 (1.338) | 0.443 (0.440) |
+ | Circles | 0.699 (0.491) | 0.180 (0.259) |
329
+
330
+ Table 2: Comparison of the projection errors of INC based on the class-awareness of the model.
331
+
332
+ Figure 4a shows the INC optimization with an ideal start: the initial point lies on the manifold closest to $\hat{\mathbf{x}}$. The INC optimization then searches along the manifold, converging to a point close to $\hat{\mathbf{x}}$. Figure 4b shows a case in which INC fails because of a bad initialization. The initial point was chosen on a manifold not containing the desired solution, so INC converged to a local optimum on the wrong manifold. Our class-aware INC performs manifold-wise initialization to circumvent this issue. Figure 4c shows a failure of INC due to an out-of-manifold search. The INC converged to a point on the wrong manifold, and a nontrivial number of intermediate points were out of the manifold, resulting in an out-of-manifold solution (see Figure 3d).
333
+
334
+ # 5.4 INC IMPROVEMENT VIA CLASS-AWARE GENERATIVE MODEL
335
+
336
+ We demonstrate that INC performance is improved by using class-aware generative models. To measure the performance of INC, 100 points are chosen uniformly from each manifold $M_{i}$. Then, each point $\mathbf{x}$ is perturbed along the unit normal $\mathbf{n}_{\mathbf{x}}$ to the manifold at $\mathbf{x}$, generating 200 adversarial points $\hat{\mathbf{x}} = \mathbf{x} \pm r \mathbf{n}_{\mathbf{x}}$ per manifold. For all datasets, $r = 0.2$ is used as the perturbation size. We expect both types of INC to map $\hat{\mathbf{x}}$ back to the original point $\mathbf{x}$, as $\mathbf{x}$ is the optimal solution to (11). We define the projection error of INC as $\| \mathrm{INC}(\hat{\mathbf{x}}) - \mathbf{x}\|_2$, and collect the statistics of projection errors over all $\hat{\mathbf{x}}$.
337
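+
+ A sketch of this evaluation protocol, where `inc` is any projection routine (e.g., `inc_project` from the Section 3.4 sketch) and `normals` holds unit normal vectors, which are available analytically for the parameterized manifolds of Table 1:
+
+ ```python
+ import numpy as np
+
+ def projection_errors(inc, points, normals, r=0.2):
+     errs = []
+     for x, n in zip(points, normals):
+         for sign in (1.0, -1.0):
+             x_hat = x + sign * r * n         # perturb along +/- the normal
+             errs.append(np.linalg.norm(np.asarray(inc(x_hat)) - x))
+     return np.mean(errs), np.std(errs)       # statistics reported in Table 2
+ ```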
+
338
+ Table 2 shows the projection error statistics for the two types of INC. Each row shows the results on the indicated dataset; the columns give the errors of the class-ignorant INC and of its class-aware counterpart. Numbers in each cell are averages and standard deviations (in parentheses) of the projection error. For every dataset, the class-aware INC achieves lower projection errors. Histograms of the projection errors are provided in Appendix E.
339
+
340
+ # 5.5 ADDITIONAL EXPERIMENTS FOR THE INC PERFORMANCE.
341
+
342
+ Finally, we present experiments demonstrating the effect of the superlevel set discrepancy on INC performance. First, we train support vector machines (SVMs) performing classification tasks for our target distributions. For training data, we randomly sampled 1000 training points from each data-generating manifold. The baseline SVMs were intentionally ill-trained by using a high kernel coefficient $\gamma = 100$. After training the SVMs, we formed other classifiers by applying INC to the ill-trained SVMs. In total, for each dataset, we have four types of classifiers as follows.
343
+
344
+ (1) Ill-trained SVM: Baseline classifier
345
+ (2) Ideal INC: Classifier with INC using a direct access to the data-generating manifolds
346
+ (3) Class-ignorant INC: Classifier with INC using a topology-ignorant generative model
347
+ (4) Class-aware INC: Classifier with INC using a topology-aware generative model
348
+
349
+ We want to emphasize that direct access to the data-generating manifold is not possible in general. However, applying INC using direct access gives us an INC purely based on the geometry, so it is an ideal form of INC that should be approximated. Also, since the class-ignorant INC is affected by a bad choice of an initial point, we reduced the effect of bad initialization by sampling more initial points and taking the best solution among the projection results. For this number of initial choices, we
350
+
351
+ ![](images/e4dadb4f6791cc635686af0f9af5684eb116ca29c95aa52c1cc999b2d1c74e5c.jpg)
352
+
353
+ ![](images/3933a0603ad2bcdec919523fe814c7dc68d6ae7542a1b8130823c3d9f5acf7e5.jpg)
354
+
355
+ ![](images/1015c05015dca49df3ac0c5304e2371d9550cff49f3390f60209571009d68d46.jpg)
356
+
357
+ ![](images/6977e2698b2d6e8dd8dc9c658210e1ea80c684dc8ab576db497eccce7b8c191d.jpg)
358
+
359
+ ![](images/7882f26c8cc348eb920e041b661ab921139f9a3f3a691b6d4968fd32abc0261c.jpg)
360
+
361
+ ![](images/f909f267912adfa51130cbb6820a49da478e47203c6eec8953b33bdef7aeed6a.jpg)
362
+
363
+ ![](images/0c2a9fa1176eacb6a0dca6d57b764558fefa20b8e00f62fa1faf18a3d8ee3185.jpg)
364
+
365
+ ![](images/804b0d13cb35a09bf69e5862b51c1c7018d9ebed0e45d3956397d45648a2569a.jpg)
366
+
367
+ ![](images/9016b32f9c6490d3a449e2b834104075b7440b483e96f3e19731a7ef44bd9027.jpg)
368
+ (a) Ill-trained SVM
369
+
370
+ ![](images/f18c9c97d785fd757c8757f3ab9d20a352027dca84c9dfdf7d3487fb065a7f00.jpg)
371
+ (b) Ideal INC
372
+
373
+ ![](images/fc3e25eb4fb30581812d5857616a33eb4195cbc570142a8791bcf36e701ff562.jpg)
374
+ (c) Class-ignorant INC
375
+
376
+ ![](images/83f7e83def1416a455deeced9f8599e8155d00abb0201dd56a4a8dca7b780f97.jpg)
377
+ (d) Class-aware INC
378
+ Figure 5: Changes in the decision boundaries of ill-trained SVM after the INC applications.
379
+
380
+ chose as many initial points as the number of manifolds, which was exactly the same as the number of initial points for the topology-aware INC model.
381
+
382
+ To demonstrate the improvement in the robustness of the model, we visualize the effect by depicting the decision boundary of each classifier. Specifically, we form a $300 \times 300$ grid on the domain $[-3, 3] \times [-3, 3]$ and compute the classification result at each grid point. The resulting decision boundaries are presented in Figure 5. Each row corresponds to one dataset: two-moons, spirals, and circles, respectively. The columns correspond to classifiers (1)-(4) described above, from the first column to the fourth. From Figure 5, it is visually evident that the class-aware INC models provide closer approximations to the ideal INC model than the class-ignorant INC models.
383
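+
+ A sketch of the baseline classifier and the decision-boundary grid described above, assuming training arrays as produced by the sampling sketch in Section 5.1:
+
+ ```python
+ import numpy as np
+ from sklearn.svm import SVC
+
+ X_train, y_train = sample_two_moons()  # from the Section 5.1 sketch
+
+ # (1) Ill-trained baseline: RBF SVM with a deliberately high kernel coefficient.
+ svm = SVC(kernel="rbf", gamma=100.0).fit(X_train, y_train)
+
+ # 300 x 300 grid over [-3, 3] x [-3, 3] for the decision-boundary plots.
+ xs = np.linspace(-3.0, 3.0, 300)
+ grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
+ labels = svm.predict(grid).reshape(300, 300)  # classification result per cell
+ # Applying INC first, i.e., svm.predict(inc(x)) for grid points x, yields the
+ # boundaries of classifiers (2)-(4).
+ ```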
+
384
+ # 6 CONCLUSION
385
+
386
+ We theoretically and experimentally discussed the necessity of topology awareness in the training of generative models, especially in security-critical applications. A continuous generative model is sensitive to a topological mismatch between the latent vector distribution and the target distribution. Such a mismatch leads to potential problems with manifold-based adversarial defenses utilizing generative models, such as INC. We described two cases in which INC failed: bad initialization and artifacts from the topological difference. We experimentally verified that topology-aware training effectively prevents these problems, thereby improving the effectiveness of generative models in manifold-based defense. After topology-aware training of the generative models, the INC projection errors were $30\%$ of the errors of the topology-ignorant INC.
387
+
388
+ # 7 ACKNOWLEDGEMENT
389
+
390
+ Dr. Susmit Jha and Uyeong Jang's internship at SRI International were supported in part by U.S. National Science Foundation (NSF) grants #1740079, #1750009, U.S. Army Research Laboratory Cooperative Research Agreement W911NF-17-2-0196, and DARPA Assured Autonomy under contract FA8750-19-C-0089. The views, opinions and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. This work is partially supported by Air Force Grant FA9550-18-1-0166, the National Science Foundation (NSF) Grants CCF-FMitF-1836978, SaTC-Frontiers-1804648 and CCF-1652140 and ARO grant number W911NF-17-1-0405.
391
+
392
+ # REFERENCES
393
+
394
+ Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
395
+ Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 387-402. Springer, 2013.
396
+ Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 39-57. IEEE, 2017.
397
+ Arin Chaudhuri, Deovrat Kakde, Carol Sadek, Laura Gonzalez, and Seunghyun Kong. The mean and median criteria for kernel bandwidth selection for support vector data description. In 2017 IEEE International Conference on Data Mining Workshops (ICDMW), pp. 842-849. IEEE, 2017.
398
+ Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems, pp. 6571-6583, 2018.
399
+ Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. Certified adversarial robustness via randomized smoothing. arXiv preprint arXiv:1902.02918, 2019.
400
+ Guneet S Dhillon, Kamyar Azizzadenesheli, Zachary C Lipton, Jeremy Bernstein, Jean Kossaifi, Aran Khanna, and Anima Anandkumar. Stochastic activation pruning for robust adversarial defense. arXiv preprint arXiv:1803.01442, 2018.
401
+ Joshua V Dillon, Ian Langmore, Dustin Tran, Eugene Brevdo, Srinivas Vasudevan, Dave Moore, Brian Patton, Alex Alemi, Matt Hoffman, and Rif A Saurous. Tensorflow distributions. arXiv preprint arXiv:1711.10604, 2017.
402
+ Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016.
403
+ Abhimanyu Dubey, Laurens van der Maaten, Zeki Yalniz, Yixuan Li, and Dhruv Mahajan. Defense against adversarial images using web-scale nearest-neighbor search. arXiv preprint arXiv:1903.01612, 2019.
404
+ Reuben Feinman, Ryan R Curtin, Saurabh Shintre, and Andrew B Gardner. Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410, 2017.
405
+ Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014a.
406
+ Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014b.
407
+ Will Grathwohl, Ricky TQ Chen, Jesse Bettencourt, Ilya Sutskever, and David Duvenaud. Ffjord: Free-form continuous dynamics for scalable reversible generative models. arXiv preprint arXiv:1810.01367, 2018.
408
+ Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens Van Der Maaten. Countering adversarial images using input transformations. arXiv preprint arXiv:1711.00117, 2017.
409
+ Andrew Ilyas, Ajil Jalal, Eirini Asteri, Constantinos Daskalakis, and Alexandros G Dimakis. The robust manifold defense: Adversarial training using generative models. arXiv preprint arXiv:1712.09196, 2017.
410
+ Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
411
+ Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
412
+
413
+ Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pp. 10215-10224, 2018.
414
+ Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016a.
415
+ Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016b.
416
+ John M Lee. Introduction to smooth manifolds. Graduate Texts in Mathematics, 218, 2003.
417
+ Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
418
+ Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574-2582, 2016.
419
+ James R Munkres. Topology. Prentice Hall, Upper Saddle River, 2000.
420
+ Andrew Y Ng and Michael I Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes. In Advances in Neural Information Processing Systems, pp. 841-848, 2002.
421
+ George Papamakarios, Theo Pavlakou, and Iain Murray. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems, pp. 2338-2347, 2017.
422
+ Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372-387. IEEE, 2016.
423
+ Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506-519. ACM, 2017.
424
+ Xavier Pennec. Probabilities and statistics on riemannian manifolds: Basic tools for geometric measurements. In Nonlinear Signal and Image Processing, pp. 194-198. CiteSeer, 1999.
425
+ Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
426
+ Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. Certified defenses against adversarial examples. arXiv preprint arXiv:1801.09344, 2018.
427
+ Danilo Jimenez Rezende and Shakir Mohamed. Variational inference with normalizing flows. arXiv preprint arXiv:1505.05770, 2015.
428
+ Pouya Samangouei, Maya Kabkab, and Rama Chellappa. Defense-gan: Protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605, 2018.
429
+ Aman Sinha, Hongseok Namkoong, and John Duchi. Certifiable distributional robustness with principled adversarial training. arXiv preprint arXiv:1710.10571, 2, 2017.
430
+ Yang Song, Taesup Kim, Sebastian Nowozin, Stefano Ermon, and Nate Kushman. Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. arXiv preprint arXiv:1710.10766, 2017.
431
+ Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
432
+ Thomas Tanay and Lewis Griffin. A boundary tilting perspective on the phenomenon of adversarial examples. arXiv preprint arXiv:1608.07690, 2016.
433
+
434
+ Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017.
435
+ E Weinan. A proposal on machine learning via dynamical systems. Communications in Mathematics and Statistics, 5(1):1-11, 2017.
436
+ Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan Yuille. Mitigating adversarial effects through randomization. arXiv preprint arXiv:1711.01991, 2017.
437
+ Linfeng Zhang, Lei Wang, et al. Monge-Ampère flow for generative modeling. arXiv preprint arXiv:1809.10188, 2018.
438
+ Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
439
+ Xiaojin Zhu and Andrew B Goldberg. Introduction to semi-supervised learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 3(1):1-130, 2009.
440
+
441
+ # A MATHEMATICAL BACKGROUND
442
+
443
+ # A.1 GENERAL TOPOLOGY
444
+
445
+ We introduce definitions and theorems related to general topology that appear in the paper. For more details, all of the definitions and theorems can be found in Munkres (2000).
446
+
447
+ Definitions in general topology. We first provide precise definitions of the terms we use from general topology.
448
+
449
+ Definition 5 (Topological space). A topology on a set $X$ is a collection $\mathcal{T}$ of subsets of $X$ having the following properties.
450
+
451
+ 1. $\varnothing$ and $X$ are in $\mathcal{T}$
452
+ 2. The union of the elements of any subcollection of $\mathcal{T}$ is in $\mathcal{T}$ .
453
+ 3. The intersection of the elements of any finite subcollection of $\mathcal{T}$ is in $\mathcal{T}$ .
454
+
455
+ A set $X$ for which a topology $\mathcal{T}$ has been specified is called a topological space.
456
+
457
+ For example, the collection of all open sets in $\mathbb{R}^n$ is a topology, thus $\mathbb{R}^n$ is a topological space. If a topology can be constructed by taking arbitrary unions and finite intersections of elements of a smaller collection $\mathcal{B}$ of subsets of $X$, we call $\mathcal{B}$ a basis of the topology.
458
+
459
+ Pick a metric $d$ on $\mathbb{R}^n$ and let $\mathcal{B}$ be the set of all open balls in $\mathbb{R}^n$ under the metric $d$. The topology of $\mathbb{R}^n$ can be constructed by taking $\mathcal{B}$ as a basis. When this construction is possible, the metric $d$ is said to induce the topology.
460
+
461
+ Definition 6 (Metrizable space). If $X$ is a topological space, $X$ is said to be metrizable if there exists a metric $d$ on the set $X$ that induces the topology of $X$ . A metric space is a metrizable space $X$ together with a specific metric $d$ that gives the topology of $X$ .
462
+
463
+ Since $\mathbb{R}^n$ is equipped with Euclidean metric that induces its topology, $\mathbb{R}^n$ is metrizable.
464
+
465
+ Continuity and the extreme value theorem. Let $X$ and $Y$ be topological spaces. In general topology, a function $f: X \to Y$ is said to be continuous if, for any subset $V$ open in $Y$, its inverse image $f^{-1}(V)$ is open in $X$. Moreover, if $f$ is a continuous bijection whose inverse is also continuous, $f$ is called a homeomorphism. The notion of homeomorphism is important as it always preserves topological properties, e.g., connectedness, compactness, etc., and this will be used in the further generalization of Theorem 2.
466
+
467
+ Here, we only introduce the generalized statement of the extreme value theorem.
468
+
469
+ Theorem 3 (Extreme value theorem). Let $f: X \to Y$ be continuous, where $Y$ is an ordered set. If $X$ is compact, then there exist points $\underline{\mathbf{x}}$ and $\overline{\mathbf{x}}$ in $X$ such that $f(\underline{\mathbf{x}}) \leq f(\mathbf{x}) \leq f(\overline{\mathbf{x}})$ for every $\mathbf{x} \in X$ .
470
+
471
+ Specifically, if a manifold $M$ is a compact subset in $\mathbb{R}^n$ , we may use $X = M$ and $Y = \mathbb{R}$ .
472
+
473
+ Normal space and Urysohn's lemma. Urysohn's lemma was used to prove Corollary 1. We first introduce the notion of a normal space.
474
+
475
+ Definition 7 (Normal space). Let $X$ be a topological space in which one-point sets are closed. Then, $X$ is normal if for each pair $A, B$ of disjoint closed sets of $X$, there exist disjoint open sets containing $A$ and $B$, respectively.
476
+
477
+ Urysohn's lemma is another equivalent condition for a space to be normal.
478
+
479
+ Theorem 4 (Urysohn's lemma). Let $X$ be a normal topological space; let $A$ and $B$ be disjoint closed subsets in $X$ . Let $[a, b]$ be a closed interval in the real line. Then there exists a continuous map
480
+
481
+ $$
482
+ f: X \longrightarrow [ a, b ]
483
+ $$
484
+
485
+ such that $f(\mathbf{x}) = a$ for every $\mathbf{x}$ in $A$ , and $f(\mathbf{x}) = b$ for every $\mathbf{x}$ in $B$ .
486
+
487
+ To apply this lemma to $\mathbb{R}^n$ , we only need the following theorem.
488
+
489
+ Theorem 5. Every metrizable space is normal.
490
+
491
+ Since $\mathbb{R}^n$ is metrizable, it is a normal space by Theorem 5. Therefore, we can apply Urysohn's lemma to any pair of disjoint closed subsets in $\mathbb{R}^n$ to show the existence of a continuous map $f:\mathbb{R}^n\to [0,1]$.
492
+
493
+ # A.2 DIFFERENTIAL GEOMETRY
494
+
495
+ We provide the definitions from differential geometry (Lee, 2003) used in the paper.
496
+
497
+ Manifold and tangent space. Formally, topological manifold is defined as follows.
498
+
499
+ Definition 8 (Manifold). Suppose $M$ is a topological space. We say $M$ is a topological manifold of dimension $k$ if it has the following properties.
500
+
501
+ 1. For any pair of distinct points $\mathbf{x}_1, \mathbf{x}_2 \in M$, there are disjoint open subsets $U_1, U_2 \subset M$ such that $\mathbf{x}_1 \in U_1$ and $\mathbf{x}_2 \in U_2$.
502
+ 2. There exists a countable basis for the topology of $M$ .
503
+ 3. Every point has a neighborhood $U$ that is homeomorphic to an open subset $\tilde{U}$ of $\mathbb{R}^k$ .
504
+
505
+ There are different ways to define the tangent space of a $k$-dimensional manifold $M$. Informally, it can be understood as the geometric tangent space to $M \subset \mathbb{R}^n$ at a point $\mathbf{x} \in M$, which is a collection of pairs $(\mathbf{x}, \mathbf{v})$ where $\mathbf{v}$ is a vector tangentially passing through $\mathbf{x}$. Here we give a more formal definition of the tangent space. Consider the vector space $C^\infty(M)$, the set of smooth functions on $M$.
506
+
507
+ Definition 9 (Tangent space). Let $\mathbf{x}$ be a point of a smooth manifold $M$ . A linear map $X:C^{\infty}(M)\to \mathbb{R}$ is called a derivation at $\mathbf{x}$ if it satisfies
508
+
509
+ $$
510
+ X (f g) = f (\mathbf {x}) X g + g (\mathbf {x}) X f
511
+ $$
512
+
513
+ for all $f,g\in C^{\infty}(M)$
514
+
515
+ The set of all derivations of $C^\infty(M)$ at $\mathbf{x}$ forms a vector space called the tangent space to $M$ at $\mathbf{x}$ , and is denoted by $T_{\mathbf{x}}(M)$ .
516
+
517
+ Riemannian metric. As the tangent space $T_{\mathbf{x}}(M)$ is a vector space for each $\mathbf{x} \in M$, we can consider an inner product $g_{\mathbf{x}}$ defined on $T_{\mathbf{x}}(M)$.
518
+
519
+ Definition 10 (Riemannian metric). A Riemannian metric $g$ on a smooth manifold $M$ is a smooth collection of inner products $g_{\mathbf{x}}$ defined for each $T_{\mathbf{x}}(M)$. The condition for smoothness of $g$ is that, for any smooth vector fields $\mathcal{X}, \mathcal{Y}$ on $M$, the mapping $\mathbf{x} \mapsto g_{\mathbf{x}}(\mathcal{X}|_{\mathbf{x}}, \mathcal{Y}|_{\mathbf{x}})$ is smooth.
520
+
521
+ A manifold $M$ equipped with a Riemannian metric $g$ is called a Riemannian manifold.
522
+
523
+ # B EXAMPLES
524
+
525
+ Computing density $p_M$ over a Riemannian manifold $M$. This section presents example computations of the probabilities from Section D.1 and Section 3.2. As a concrete example of computing a density over a manifold, we use the following simple manifolds, the so-called two-moons in $\mathbb{R}^2$.
526
+
527
+ $$
528
+ M_0 = \left\{ (x_1, x_2) \,\middle|\, \begin{array}{l} x_1 = \cos\theta \\ x_2 = \sin\theta \end{array} \ \text{for } \theta \in [0, \pi] \right\}
529
+ $$
530
+
531
+ $$
532
+ M_1 = \left\{ (x_1, x_2) \,\middle|\, \begin{array}{l} x_1 = 1 - \cos\theta \\ x_2 = 1 - \sin\theta + \frac{1}{2} \end{array} \ \text{for } \theta \in [0, \pi] \right\}
533
+ $$
534
+
535
+ We take $M = M_0 \cup M_1$ as our example manifold. Figure 6a shows the manifold of two-moons dataset plotted in different colors: $M_0$ in red and $M_1$ in blue.
536
+
537
+ First recall the following equation (equation (8) from Section D.1).
538
+
539
+ $$
540
+ \int_ {\mathbf {x} \in M} p _ {M} (\mathbf {x}) d M (\mathbf {x}) = \int_ {\mathbf {u} \in D} p _ {M} (X (\mathbf {u})) \sqrt {\left| \det \left[ g _ {X (\mathbf {u})} \right] \right|} d \mathbf {u}
541
+ $$
542
+
543
+ where $[g_{X(\mathbf{u})}]$ is the $k\times k$ matrix representation of the inner product $g_{X(\mathbf{u})}$ at $X(\mathbf{u})\in M$
544
+
545
+ In particular, when a manifold in $\mathbb{R}^n$ is of dimension 1, i.e., a parameterized curve $\gamma :[a,b]\to \mathbb{R}^n$, the integral (8) can be written in a simpler way.
546
+
547
+ $$
548
+ \int_ {\mathbf {x} \in M} p _ {M} (\mathbf {x}) d M (\mathbf {x}) = \int_ {t = a} ^ {b} p _ {M} (\gamma (t)) \| \gamma^ {\prime} (t) \| d t \tag {5}
549
+ $$
550
+
551
+ where $\gamma'(t)$ is the $n$ -dimensional velocity vector at $t \in [a, b]$ .
552
+
553
+ ![](images/96b1d2dd765a14b3a9857dede1c77ee260e1a867a20cd64cf4af1f6c4dff400c.jpg)
554
+ (a) Plot of the two-moons manifold in $\mathbb{R}^2$
555
+ Figure 6: Density extension example from two-moons manifold.
556
+
557
+ ![](images/f2f9a40632027db69d8a49b7a87cc0820d63746307956d4446627dcf2ca79d8e.jpg)
558
+ (b) Extended density function over $\mathbb{R}^2$ from the two-moons dataset
559
+
560
+ Let $p_M$ be a probability density function defined on $M$ . As $M$ is composed of two disjoint manifolds $M_0$ and $M_1$ , we consider conditional densities $p_0, p_1$ as follows.
561
+
562
+ $$
563
+ p_0(\mathbf{x}) = p_M(\mathbf{x} \mid \mathbf{x} \in M_0) = \frac{p_M|_{M_0}(\mathbf{x})}{\Pr[\mathbf{x} \in M_0]} \tag{6}
564
+ $$
565
+
566
+ $$
567
+ p_1(\mathbf{x}) = p_M(\mathbf{x} \mid \mathbf{x} \in M_1) = \frac{p_M|_{M_1}(\mathbf{x})}{\Pr[\mathbf{x} \in M_1]}
568
+ $$
569
+
570
+ Here, $p_{M}|_{M_{0}}$ and $p_{M}|_{M_{1}}$ represent the density function $p_{M}$ with its domain restricted to $M_{0}$ and $M_{1}$, respectively. By our definition of data-generating manifolds, $\operatorname*{Pr}[\mathbf{x} \in M_{i}]$ corresponds to the probability of data generation for class $i$, i.e. $\operatorname*{Pr}[y = i]$. As a concrete example of such a density, the uniform density on each manifold $M_{i}$ can be defined as $p_{i}(\mathbf{x}) = \frac{1}{\pi}$ for all $\mathbf{x} \in M_{i}$, since each moon has arc length $\pi$.
571
+
572
+ Note that each manifold is a parameterized curve in $\mathbb{R}^2$:
573
+
574
+ $$
575
+ \gamma_ {0}: \theta \mapsto (\cos \theta , \sin \theta)
576
+ $$
577
+
578
+ $$
579
+ \gamma_1: \theta \mapsto \left(1 - \cos\theta,\; 1 - \sin\theta + \tfrac{1}{2}\right)
580
+ $$
581
+
582
+ with constant speed $\| \gamma_0'(\theta)\| = \| \gamma_1'(\theta)\| = 1$ at all $\theta \in [0,\pi ]$, since $\gamma_0'(\theta) = (-\sin\theta, \cos\theta)$ and $\gamma_1'(\theta) = (\sin\theta, -\cos\theta)$. Therefore, from equation (5),
583
+
584
+ $$
585
+ \int_ {\mathbf {x} \in M _ {0}} p _ {M} | _ {M _ {0}} (\mathbf {x}) d M _ {0} (\mathbf {x}) = \int_ {\theta = 0} ^ {\pi} p _ {M} \left(\gamma_ {0} (\theta)\right) d \theta \tag {7}
586
+ $$
587
+
588
+ $$
589
+ \int_{\mathbf{x} \in M_1} p_M|_{M_1}(\mathbf{x}) \, dM_1(\mathbf{x}) = \int_{\theta = 0}^{\pi} p_M(\gamma_1(\theta)) \, d\theta
590
+ $$
591
+
592
+ For any measurable subset $A \subseteq M$, the probability of the event that $\mathbf{x}$ is in $A$ can be computed as follows.
593
+
594
+ $$
595
+ \begin{aligned} \Pr[\mathbf{x} \in A] &= \int_{\mathbf{x} \in A \subseteq M} p_M(\mathbf{x}) \, dM(\mathbf{x}) \\ &= \int_{\mathbf{x} \in A \cap M_0} p_M|_{M_0}(\mathbf{x}) \, dM_0(\mathbf{x}) + \int_{\mathbf{x} \in A \cap M_1} p_M|_{M_1}(\mathbf{x}) \, dM_1(\mathbf{x}) \\ &= \int_{\substack{\theta \in [0,\pi] \\ \gamma_0(\theta) \in A}} p_M|_{M_0}(\gamma_0(\theta)) \, d\theta + \int_{\substack{\theta \in [0,\pi] \\ \gamma_1(\theta) \in A}} p_M|_{M_1}(\gamma_1(\theta)) \, d\theta \quad (\because (7)) \\ &= \Pr[\mathbf{x} \in M_0] \int_{\substack{\theta \in [0,\pi] \\ \gamma_0(\theta) \in A}} p_0(\gamma_0(\theta)) \, d\theta + \Pr[\mathbf{x} \in M_1] \int_{\substack{\theta \in [0,\pi] \\ \gamma_1(\theta) \in A}} p_1(\gamma_1(\theta)) \, d\theta \quad (\because (6)) \\ &= \frac{1}{\pi}\left( \Pr[\mathbf{x} \in M_0] \int_{\substack{\theta \in [0,\pi] \\ \gamma_0(\theta) \in A}} 1 \, d\theta + \Pr[\mathbf{x} \in M_1] \int_{\substack{\theta \in [0,\pi] \\ \gamma_1(\theta) \in A}} 1 \, d\theta \right) \end{aligned}
596
+ $$
597
+
598
+ We can briefly check all the requirements (R1), (R2), and (R3). The computation of $\operatorname*{Pr}[\mathbf{x}\in A]$ is based on (R1), so (R1) is satisfied trivially. Also, $p_M$ is a function defined only on $M$, thus (R2) is clear, i.e. $\mathrm{supp}(p_M) = \{\mathbf{x}\in \mathbb{R}^n\mid p_M(\mathbf{x}) > 0\} \subseteq M$. To check (R3), note that when $A = M_i$, the integral evaluates to exactly $\operatorname*{Pr}[\mathbf{x}\in M_i] = \operatorname*{Pr}[y = i]$, so when $A = M$, it evaluates to $\operatorname*{Pr}[y = 0] + \operatorname*{Pr}[y = 1] = 1$, as desired by the requirements.
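+
+ As a quick numerical sanity check of this computation, the arc-length integrals in equation (7) can be discretized over $\theta$. The following sketch (our illustration, not part of the paper's experiments) assumes uniform conditional densities $p_i = 1/\pi$ and equal class probabilities, and verifies that the total probability over $M$ is 1.
+
+ ```python
+ import numpy as np
+
+ # Discretize theta in [0, pi]; both moons are unit-speed curves, so the
+ # arc-length measure is simply d(theta), as in equation (5).
+ theta = np.linspace(0.0, np.pi, 10_001)
+
+ p_uniform = 1.0 / np.pi          # uniform conditional density on each moon
+ pr_m0, pr_m1 = 0.5, 0.5          # assumed equal class probabilities
+
+ # Pr[x in M] = Pr[M0] * int p0 d(theta) + Pr[M1] * int p1 d(theta)
+ total = (pr_m0 * np.trapz(p_uniform * np.ones_like(theta), theta)
+          + pr_m1 * np.trapz(p_uniform * np.ones_like(theta), theta))
+ print(total)  # ~= 1.0
+ ```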
599
+
600
+ Extending density to $\mathbb{R}^n$ . We extend the domain to $\mathbb{R}^n$ for the two-moons example. In Section 3, we defined the noise density function to satisfy the following requirement.
601
+
602
+ (R0) The translated noise density function, $\nu_{\mathbf{x}}(\hat{\mathbf{x}} - \mathbf{x})$, is the density of the noise $\mathbf{n} = \hat{\mathbf{x}} - \mathbf{x}$ being chosen for a given $\mathbf{x}$. Given $\mathbf{x}_o = \mathbf{x}$, since adding noise $\mathbf{n}$ is the only way to generate $\hat{\mathbf{x}}$ by perturbing $\mathbf{x}_o$, $p(\hat{\mathbf{x}} | \mathbf{x}_o = \mathbf{x})$ is equal to $\nu_{\mathbf{x}}(\mathbf{n})$.
603
+
604
+ Under a proper noise density function, we show an example construction of the density extended from $M$ satisfying requirement (R0). For simplicity, we choose an isotropic Gaussian distribution $\mathcal{N}(0,\sigma^2 I)$, with standard deviation $\sigma$ in each dimension, as the noise density function $\nu_{\mathbf{x}}$ for all $\mathbf{x}\in M$. Such a noise density $\nu_{\mathbf{x}}$ defined on $\mathbb{R}^n$ can be written as follows.
605
+
606
+ $$
607
+ \nu_{\mathbf{x}}(\mathbf{n}_{\mathbf{x}}) = \frac{1}{(2\pi \sigma^2)^{n/2}} \exp\left( -\frac{\| \mathbf{n}_{\mathbf{x}} \|_2^2}{2 \sigma^2} \right)
608
+ $$
609
+
610
+ Substituting $\mathbf{n}_{\mathbf{x}} = \hat{\mathbf{x}} - \mathbf{x}$ into the density equation above and integrating over $M$,
611
+
612
+ $$
613
+ p(\hat{\mathbf{x}}) = \int_{\mathbf{x} \in M} \frac{1}{(2\pi \sigma^2)^{n/2}} \exp\left( -\frac{\| \hat{\mathbf{x}} - \mathbf{x} \|_2^2}{2 \sigma^2} \right) p_M(\mathbf{x}) \, dM(\mathbf{x})
614
+ $$
615
+
616
+ Specifically, we assume an isotropic Gaussian distribution with $\sigma = 0.05$ as the noise density $\nu_{\mathbf{x}}$ for all $\mathbf{x} \in M$.
617
+
618
+ By equation (1), we have the following computation of the density at $\hat{\mathbf{x}}$.
619
+
620
+ $$
621
+ \begin{aligned} p(\hat{\mathbf{x}}) &= \int_{\mathbf{x} \in M} \nu_{\mathbf{x}}(\hat{\mathbf{x}} - \mathbf{x}) p_M(\mathbf{x}) \, dM(\mathbf{x}) \\ &= \int_{\mathbf{x} \in M_0} \nu_{\mathbf{x}}(\hat{\mathbf{x}} - \mathbf{x}) p_M|_{M_0}(\mathbf{x}) \, dM_0(\mathbf{x}) + \int_{\mathbf{x} \in M_1} \nu_{\mathbf{x}}(\hat{\mathbf{x}} - \mathbf{x}) p_M|_{M_1}(\mathbf{x}) \, dM_1(\mathbf{x}) \\ &= \int_{\theta = 0}^{\pi} \nu_{\gamma_0(\theta)}(\hat{\mathbf{x}} - \gamma_0(\theta)) p_M|_{M_0}(\gamma_0(\theta)) \, d\theta + \int_{\theta = 0}^{\pi} \nu_{\gamma_1(\theta)}(\hat{\mathbf{x}} - \gamma_1(\theta)) p_M|_{M_1}(\gamma_1(\theta)) \, d\theta \quad (\because (5)) \\ &= \Pr[\mathbf{x} \in M_0] \int_{\theta = 0}^{\pi} \nu_{\gamma_0(\theta)}(\hat{\mathbf{x}} - \gamma_0(\theta)) p_0(\gamma_0(\theta)) \, d\theta + \Pr[\mathbf{x} \in M_1] \int_{\theta = 0}^{\pi} \nu_{\gamma_1(\theta)}(\hat{\mathbf{x}} - \gamma_1(\theta)) p_1(\gamma_1(\theta)) \, d\theta \quad (\because (6)) \\ &= \frac{1}{\pi (2\pi\sigma^2)^{n/2}} \left[ \Pr[\mathbf{x} \in M_0] \int_{\theta = 0}^{\pi} \exp\left( -\frac{\| \hat{\mathbf{x}} - \gamma_0(\theta) \|_2^2}{2\sigma^2} \right) d\theta + \Pr[\mathbf{x} \in M_1] \int_{\theta = 0}^{\pi} \exp\left( -\frac{\| \hat{\mathbf{x}} - \gamma_1(\theta) \|_2^2}{2\sigma^2} \right) d\theta \right] \end{aligned}
622
+ $$
623
+
624
+ We can also check that requirement (R0) is satisfied by the construction, since our construction (equation (1)) is built directly on (R0). The computed density is shown in Figure 6b.
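+
+ The integral above can also be evaluated numerically by discretizing $\theta$, which is how one can reproduce the qualitative shape of Figure 6b. A minimal sketch (our illustration; the uniform densities and equal class probabilities are assumptions):
+
+ ```python
+ import numpy as np
+
+ SIGMA = 0.05
+ theta = np.linspace(0.0, np.pi, 2_001)
+
+ # unit-speed parameterizations of the two moons
+ moon0 = np.stack([np.cos(theta), np.sin(theta)], axis=1)
+ moon1 = np.stack([1.0 - np.cos(theta), 1.0 - np.sin(theta) + 0.5], axis=1)
+
+ def extended_density(x_hat, pr_m0=0.5, pr_m1=0.5):
+     """Numerically evaluate p(x_hat) for uniform densities p_i = 1/pi."""
+     # 1 / (pi * (2*pi*sigma^2)^{n/2}) with n = 2
+     const = 1.0 / (np.pi * 2.0 * np.pi * SIGMA**2)
+     def part(moon):
+         sq = np.sum((moon - x_hat) ** 2, axis=1)
+         return np.trapz(np.exp(-sq / (2.0 * SIGMA**2)), theta)
+     return const * (pr_m0 * part(moon0) + pr_m1 * part(moon1))
+
+ print(extended_density(np.array([1.0, 0.0])))  # high: on the manifold
+ print(extended_density(np.array([0.5, 1.5])))  # low: far from both moons
+ ```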
625
+
626
+ # C PROOFS
627
+
628
+ In this section, we provide the proofs for statements that appeared in Section 4.
629
+
630
+ # C.1 PROOF OF THEOREM 1
631
+
632
+ To begin with, pick a value $\lambda$ such that the $\lambda$-density superlevel set $L_{\nu_{\mathbf{x}},\lambda}$ is nonempty for all $\mathbf{x} \in M$. As we use the noise densities $\nu_{\mathbf{x}}$ described in Section 4.1, it is safe to assume that both the $\lambda$-bounding radius $\delta_{\lambda} = \max_{\mathbf{x} \in M} \delta_{\mathbf{x},\lambda}$ and the $\lambda$-guaranteeing radius $\epsilon_{\lambda} = \min_{\mathbf{x} \in M} \epsilon_{\mathbf{x},\lambda}$ exist.
633
+
634
+ Then, we can prove that, with a proper choice of threshold $\lambda$ , the $\lambda$ -density superlevel set includes the data-generating manifold.
635
+
636
+ Lemma 2. Assume that the noise densities have the radii in Definition 1 for all $\mathbf{x} \in M$ and that $\lambda > 0$ is small enough. Then, for any $\mathbf{x} \in M$, the density $p(\mathbf{x})$ is at least $\omega_{\epsilon}\lambda$, i.e. $p(\mathbf{x}) \geq \omega_{\epsilon}\lambda$, where $\epsilon = \epsilon_{\lambda}$.
637
+
638
+ Proof. By Lemma 1,
639
+
640
+ $$
641
+ \begin{array}{l} \mathbf {x} ^ {\prime} \in B _ {\epsilon} (\mathbf {x}) \Longleftrightarrow \mathbf {x} \in B _ {\epsilon} \left(\mathbf {x} ^ {\prime}\right) = B _ {\epsilon_ {\lambda}} \left(\mathbf {x} ^ {\prime}\right) \quad \left(\because \epsilon = \epsilon_ {\lambda}\right) \\ \Longrightarrow \nu_ {\mathbf {x} ^ {\prime}} \left(\mathbf {x} - \mathbf {x} ^ {\prime}\right) \geq \lambda \\ \end{array}
642
+ $$
643
+
644
+ Then, we can lower bound the density $p_M(\mathbf{x})$ as follows.
645
+
646
+ $$
647
+ \begin{aligned} p(\mathbf{x}) &= \int_{\mathbf{x}' \in M} \nu_{\mathbf{x}'}(\mathbf{x} - \mathbf{x}') p_M(\mathbf{x}') \, dM(\mathbf{x}') \\ &\geq \int_{\mathbf{x}' \in M \cap B_{\epsilon}(\mathbf{x})} \nu_{\mathbf{x}'}(\mathbf{x} - \mathbf{x}') p_M(\mathbf{x}') \, dM(\mathbf{x}') \\ &\geq \lambda \int_{\mathbf{x}' \in M \cap B_{\epsilon}(\mathbf{x})} p_M(\mathbf{x}') \, dM(\mathbf{x}') \\ &= \lambda \Pr_{\mathbf{x}' \in M}\left[ \mathbf{x}' \in B_{\epsilon}(\mathbf{x}) \right] \\ &\geq \omega_{\epsilon} \lambda \end{aligned}
648
+ $$
649
+
650
+
651
+
652
+ This lemma shows that thresholding the extended density $p$ with a threshold $\lambda^{*} \leq \omega_{\epsilon} \lambda$ guarantees that the superlevel set includes the entire manifold $M$.
653
+
654
+ Corollary 2. For any threshold $\lambda^* \leq \omega_{\epsilon} \lambda$ , the corresponding $\lambda^*$ -density superlevel set $L_{p, \lambda^*}$ of the extended density $p$ includes the data-generating manifold $M$ .
655
+
656
+ Similarly, we show that, with a proper choice of threshold $\lambda$ , each connected component of $\lambda$ -density superlevel set contains at most one manifold.
657
+
658
+ Lemma 3. Assume a family of noise densities satisfies the assumptions of Section 4.1. Let $\lambda >0$ be a value such that the $\lambda$ -density superlevel set $L_{\nu_{\mathbf{x}},\lambda}$ is nonempty for any $\mathbf{x}\in M$ . Also, let $\delta = \delta_{\lambda}$ be the maximum $\lambda$ -bounding radius over $M$ . Then, for any $\hat{\mathbf{x}}\notin N_{\delta}(M)$ , the extended density value is smaller than $\lambda$ , i.e. $p(\hat{\mathbf{x}}) < \lambda$ .
659
+
660
+ Proof. By Lemma 1,
661
+
662
+ $$
663
+ \begin{aligned} \hat{\mathbf{x}} \notin N_{\delta}(M) &\iff \hat{\mathbf{x}} \notin B_{\delta}(\mathbf{x}) = B_{\delta_{\lambda}}(\mathbf{x}) \text{ for any } \mathbf{x} \in M \quad (\because \delta = \delta_{\lambda}) \\ &\Longrightarrow \nu_{\mathbf{x}}(\hat{\mathbf{x}} - \mathbf{x}) < \lambda \text{ for any } \mathbf{x} \in M \end{aligned}
664
+ $$
665
+
666
+ Then, we can upper bound the density $p(\hat{\mathbf{x}})$ as follows.
667
+
668
+ $$
669
+ \begin{array}{l} p (\hat {\mathbf {x}}) = \int_ {\mathbf {x} \in M} \nu_ {\mathbf {x}} (\hat {\mathbf {x}} - \mathbf {x}) p _ {M} (\mathbf {x}) d M (\mathbf {x}) \\ < \lambda \int_ {\mathbf {x} \in M} p _ {M} (\mathbf {x}) d M (\mathbf {x}) \quad (\because \hat {\mathbf {x}} \notin N _ {\delta_ {\lambda}} (M)) \\ = \lambda \\ \end{array}
670
+ $$
671
+
672
+
673
+
674
+ This lemma says that the $\lambda$-density superlevel set is included in the $\delta$-neighborhood $N_{\delta}(M)$ of the data-generating manifold $M$.
675
+
676
+ Now, we can deduce the following main result.
677
+
678
+ Theorem 1. Pick any threshold value $\lambda^* \leq \omega_\epsilon \lambda$ satisfying Corollary 2. If the class-wise distance of the data-generating manifold is larger than $2\delta^*$, where $\delta^* = \delta_{\lambda^*}$ is the $\lambda^*$-bounding radius, then the superlevel set $L_{p,\lambda^*}$ satisfies the following.
679
+
680
+ - $L_{p,\lambda^*}$ contains the data-generating manifold $M$ .
681
+ - Each connected component of $L_{p,\lambda^*}$ contains at most one manifold $M_i$ of class $i$ .
682
+
683
+ Proof. The first property is a direct application of Corollary 2 for any $\lambda^{*} \leq \omega_{\epsilon}\lambda$.
684
+
685
+ For the second property, since the class-wise distance of $M$ is larger than $2\delta^{*}$, the $\delta^{*}$-neighborhoods of the manifolds are pairwise disjoint, i.e. $N_{\delta^{*}}(M_{i}) \cap N_{\delta^{*}}(M_{j}) = \emptyset$ for each $i \neq j$. Therefore, $N_{\delta^{*}}(M)$ has exactly $k$ connected components $N_{i} = N_{\delta^{*}}(M_{i})$.
686
+
687
+ By Lemma 3, the $\delta^{*}$-neighborhood $N_{\delta^{*}}(M)$ contains the superlevel set $L_{p,\lambda^{*}}$, thus each connected component of $L_{p,\lambda^{*}}$ lies in exactly one of the $N_{i}$. Since $M$ is contained in $L_{p,\lambda^{*}}$, each $M_{i}$ is contained in some connected component $C$ of $L_{p,\lambda^{*}}$ which lies in $N_{i}$. Then, for any $j \neq i$, $M_{j} \not\subset C \subset N_{i}$, since $M_{j}$ lies in $N_{j}$, which is disjoint from $N_{i}$. Therefore, if a connected component $C$ contains a manifold $M_{i}$, then it cannot contain any other manifold.
688
+
689
+ # C.2 PROOFS FOR SECTION 4.3
690
+
691
+ Theorem 2. Let $\mathcal{D}_Z$ be a mixture of $n_Z$ multivariate Gaussian distributions, and let $\mathcal{D}_X$ be the target distribution from a data-generating manifold with $n_X$ manifolds. Let $G$ be a continuous generative model for $\mathcal{D}_X$ using latent vectors from $\mathcal{D}_Z$. Assume that the conditions of Theorem 1 are satisfied, and let $\lambda^*$ be the threshold value from Theorem 1. If $n_Z < n_X$, then $L_{\lambda^*}^X$ and $L_{\lambda^*}^{G(Z)}$ do not agree on the number of connected components.
692
+
693
+ Proof. Since $L_{\lambda^*}^X$ is the result of Theorem 1, the number of connected components of $L_{\lambda^*}^X$ is at least $n_X$.
694
+
695
+ However, since $\mathcal{D}_Z$ is a mixture of Gaussians, for any value of $\lambda$ (including the special case $\lambda = \lambda^*$ ), $L_{\lambda}^{Z}$ can never have more than $n_Z$ connected components. Since $G$ is continuous, it preserves the number of connected components, thus $L_{\lambda^*}^{G(Z)} = G(L_{\lambda^*}^{Z})$ has at most $n_Z$ connected components. As $n_Z < n_X$ , $L_{\lambda^*}^{X}$ and $L_{\lambda^*}^{G(Z)}$ can never agree on the number of connected components.
696
+
697
+ Corollary 1. If the conditions of Theorem 2 are satisfied, there is a point $\hat{\mathbf{x}}\in \mathbb{R}^n$ such that $\hat{\mathbf{x}}\notin L_{\lambda^{*}}^{X}$ but $\hat{\mathbf{x}}\in L_{\lambda^{*}}^{G(Z)}$.
698
+
699
+ Proof. Since $n_Z < n_X$, there exists a connected component $\hat{C}$ of $L_{\lambda^*}^{G(Z)}$ containing at least two connected components of $L_{\lambda^*}^X$. Without loss of generality, assume $\hat{C}$ contains exactly two connected components $C$ and $C'$. By definition, a $\lambda$-superlevel set is a closed set, so $C$ and $C'$ are disjoint closed sets. In the Euclidean space $\mathbb{R}^n$, Urysohn's lemma tells us that for any disjoint pair of closed sets $A, A'$ in $\mathbb{R}^n$, there is a continuous function $f : \mathbb{R}^n \to [0,1]$ such that $f(\mathbf{x}) = 0$ for all $\mathbf{x} \in A$ and $f(\mathbf{x}) = 1$ for all $\mathbf{x} \in A'$. In particular, when $A = C$ and $A' = C'$, there exists a continuous function $f$ such that,
700
+
701
+ $f(\mathbf{x}) = 0$ for all $\mathbf{x}$ in $C$
702
+ $f(\mathbf{x}) = 1$ for all $\mathbf{x}$ in $C^\prime$
703
+
704
+ Consider $S = f^{-1}\left(\frac{1}{2}\right)$, which separates $C$ and $C'$. If $\hat{C} \cap S = \varnothing$, then $\hat{C} \cap f^{-1}\left[0, \frac{1}{2}\right)$ and $\hat{C} \cap f^{-1}\left(\frac{1}{2}, 1\right]$ are two open sets in the subspace $\hat{C}$ whose union is $\hat{C}$. This implies that $\hat{C}$ is disconnected, which is a contradiction. Therefore, $\hat{C} \cap S$ must be nonempty, and any point $\mathbf{x}$ in $\hat{C} \cap S$ is not in $L_{\lambda^*}^X$.
705
+
706
+ # D FURTHER DISCUSSIONS
707
+
708
+ # D.1 COMPUTING DENSITY OVER A DATA-GENERATING MANIFOLD
709
+
710
+ When $M$ is a Riemannian manifold equipped with a Riemannian metric $g$, we can compute probabilities over $M$. There are two essential components of the probability computation: (a) a density function $p_M$ and (b) a measure $dM$ over $M$. We assume $p_M$ and $dM$ satisfy the following.
711
+
712
+ (R1) For any measurable subset $A \subseteq M$, the probability that $\mathbf{x}$ falls in $A$ is given by $\operatorname{Pr}[\mathbf{x} \in A] = \int_{\mathbf{x} \in A} p_M(\mathbf{x}) \, dM(\mathbf{x})$.
713
+ (R2) $p_M$ is zero everywhere outside of $M$, i.e., $\mathrm{supp}(p_M) = \{\mathbf{x} \in \mathbb{R}^n \mid p_M(\mathbf{x}) > 0\} \subseteq M$.
714
+ (R3) For any $(\mathbf{x},y)$ , $\mathbf{x}$ is sampled from $M_{i}$ if and only if $y = i$ , i.e. $\operatorname*{Pr}[\mathbf{x}\in M_i] = \operatorname*{Pr}[y = i]$ .
715
+
716
+ When equipped with such $p_M$ and $dM$, we call $M$ a data-generating manifold.
717
+
718
+ Probability over a Riemannian manifold. We show how to compute a probability of $\mathbf{x}$ being generated from a Riemannian manifold $M$ . We assume a $k$ -dimensional manifold $M$ equipped with a Riemannian metric $g$ , a family of inner products $g_{\mathbf{x}}$ on tangent spaces $T_{\mathbf{x}}M$ . In this case, $g$ induces the volume measure $dM$ for integration over $M$ . If $M$ is parameterized by $\mathbf{x} = X(\mathbf{u})$ for $\mathbf{u} \in D \subseteq \mathbb{R}^k$ , the integration of a density function $p_M$ on $M$ is as follows.
719
+
720
+ $$
721
+ \int_ {\mathbf {x} \in M} p _ {M} (\mathbf {x}) d M (\mathbf {x}) = \int_ {\mathbf {u} \in D} p _ {M} (X (\mathbf {u})) \sqrt {\left| \det [ g _ {X (\mathbf {u})} ] \right|} d \mathbf {u} \tag {8}
722
+ $$
723
+
724
+ where $[g_{X(\mathbf{u})}]$ is the $k\times k$ matrix representation of the inner product $g_{X(\mathbf{u})}$ at $X(\mathbf{u})\in M$.
725
+
726
+ In Appendix B, a concrete example of this computation will be provided.
727
+
728
+ # D.2 DENSITY EXTENSION OF THE SECTION 3.2
729
+
730
+ This section collects some remaining discussions regarding our data-generating process from a data-generating manifold.
731
+
732
+ Relation to kernel density estimation. While this extension computes the density of a compound distribution, it can be interpreted as computing an expectation over a family of locally defined densities. Such an expected value appears in previous approaches to density estimation. For example, if $\nu_{\mathbf{x}}$ is an isotropic Gaussian for each $\mathbf{x}$, the above integration is equivalent to kernel density estimation, with a Gaussian kernel, over infinitely many points on $M$.
733
+
734
+ Observed property of the extended density. In Figure 6b in Appendix B, we can observe that the extended density achieves higher values near the data-generating manifold. We formalize this observation to discuss its implication for the INC approach.
735
+
736
+ Let $d(\hat{\mathbf{x}}, M)$ be the minimum distance from $\hat{\mathbf{x}}$ to the manifold $M$.
737
+
738
+ (C1) For any given $\hat{\mathbf{x}}$, let $y^{*}$ be the class label whose conditional density $p(\hat{\mathbf{x}} \mid y = y^{*})$ dominates $p(\hat{\mathbf{x}} \mid y = i)$ for $i \neq y^{*}$,
739
+
740
+ $$
741
+ y ^ {*} \in \arg \max _ {i \in [ l ]} p (\hat {\mathbf {x}} | y = i) \tag {9}
742
+ $$
743
+
744
+ and let $M_{y^*}$ be the manifold corresponding to $y^*$ .
745
+
746
+ (C2) For $y^*$ satisfying (C1), we choose $y^*$ such that the distance of $\hat{\mathbf{x}}$ from the manifold $d(\hat{\mathbf{x}}, M_{y^*})$ is the smallest.
747
+
748
+ If there are multiple $y^{*}$ satisfying both (C1) and (C2), we expect the following property to hold for all of those $y^{*}$.
749
+
750
+ (P1) Consider the shortest line from $\hat{\mathbf{x}}$ to the manifold $M_{y^*}$. As $\hat{\mathbf{x}}$ moves closer to $M_{y^*}$ along this line, it should become more likely to be generated, since the influence of noise decreases as one moves away from the manifold. Therefore, we expect the extended density $p$ to have the following property.
751
+
752
+ $$
753
+ \begin{aligned} & \mathbf{x}^{*}\in \arg \min_{\mathbf{x}\in M_{y^{*}}} d(\hat{\mathbf{x}},\mathbf{x}) \\ & \Rightarrow p(\hat{\mathbf{x}}) \leq p((1 - \lambda)\hat{\mathbf{x}} + \lambda \mathbf{x}^{*}) \text{ for all } \lambda \in [0, 1] \end{aligned} \tag{10}
754
+ $$
755
+
756
+ Actually, this provides another justification of INC. In reality, the density conditioned on the label is not available even after running a generative model, so finding $y^{*}$ via (C1) is relatively hard. If we only consider (C2) without filtering $y^{*}$ via (C1), we are finding a point $\mathbf{x} \in M$ achieving the minimum distance to $\hat{\mathbf{x}}$, which is exactly the optimization (11) in Section D.5. Then projecting $\hat{\mathbf{x}}$ to $\mathbf{x}^{*}$, i.e. the solution of the optimization (11), can be explained by (10): when $\lambda = 1$, $p$ is the highest along the shortest line between $\hat{\mathbf{x}}$ and $\mathbf{x}^{*}$.
757
+
758
+ # D.3 SUFFICIENT CONDITIONS FOR THE EXISTENCE OF RADII
759
+
760
+ We discuss the sufficient conditions guaranteeing the existence of radii introduced in Definition 1. Those sufficient conditions are derived from natural intuition about the properties of distributions in most machine-learning contexts.
761
+
762
+ The first intuition is that the influence of noise should diminish as the observed sample $\hat{\mathbf{x}}$ moves away from the source point $\mathbf{x}_o$. Therefore, we consider noise whose density decreases as the noise $\mathbf{n} = \hat{\mathbf{x}} - \mathbf{x}_o$ grows in magnitude. We formalize the boundedness of noise densities via the boundedness of their $\lambda$-density superlevel sets, and the continuity of noise densities via the continuity of each individual $\nu_{\mathbf{x}}$.
763
+
764
+ Definition 11 (Center-peaked noise density). Noise density functions $\nu_{\mathbf{x}}$ are center-peaked, if for any source point $\mathbf{x} \in M$ and any noise vector $\mathbf{n} \in \mathbb{R}^n$ with $\| \mathbf{n} \| > 0$ , $\nu_{\mathbf{x}}(\mathbf{n}) < \nu_{\mathbf{x}}(\lambda \mathbf{n})$ for all $\lambda \in [0,1)$ .
765
+
766
+ Definition 12 (Bounded noise density). Noise density functions $\nu_{\mathbf{x}}$ are bounded if, whenever a $\lambda$-density superlevel set is nonempty, there is a radius $\delta$ by which the $\lambda$-density superlevel set is bounded, i.e., $L_{\nu_{\mathbf{x}},\lambda} \subseteq \overline{B_{\delta}(\mathbf{0})}$, where $\overline{B_{\delta}(\mathbf{0})}$ is the closed ball of radius $\delta$ centered at $\mathbf{0}$.
767
+
768
+ Definition 13 (Continuous noise density). Noise density functions $\nu_{\mathbf{x}}$ are continuous, if $\nu_{\mathbf{x}}$ is continuous in $\mathbb{R}^n$ , for any $\mathbf{x} \in M$ .
769
+
770
+ Under the conditions above, the radii in Definition 1 always exist.
771
+
772
+ Proposition 1. If noise densities $\nu_{\mathbf{x}}$ are center-peaked, bounded, and continuous, any nonempty $\lambda$ -density superlevel set $L_{\nu_{\mathbf{x}},\lambda}$ has both $\lambda$ -bounding radius $\delta_{\mathbf{x},\lambda}$ and $\lambda$ -guaranteeing radius $\epsilon_{\mathbf{x},\lambda}$ .
773
+
774
+ Proof. Let $\nu_{\mathbf{x}}$ be a center-peaked, bounded family of continuous noise densities. Since $\nu_{\mathbf{x}}$ is continuous, the superlevel set $L_{\nu_{\mathbf{x}},\lambda} = \nu_{\mathbf{x}}^{-1}\big[\lambda ,\infty \big)$ is closed as the preimage of a closed set under a continuous function. Therefore, its boundary $\partial L_{\nu_{\mathbf{x}},\lambda}$ is contained in $L_{\nu_{\mathbf{x}},\lambda}$.
775
+
776
+ Because $\nu_{\mathbf{x}}$ is bounded, the superlevel set $L_{\nu_{\mathbf{x}},\lambda}$ is contained in a closed ball $\overline{B_{\delta}(\mathbf{0})}$ of radius $\delta \geq 0$. Since $\nu_{\mathbf{x}}$ is center-peaked, a nonempty superlevel set $L_{\nu_{\mathbf{x}},\lambda}$ always contains $\mathbf{0}$, as the maximum is achieved at $\mathbf{0}$. Moreover, there exists a closed neighborhood ball $\overline{B_{\epsilon}(\mathbf{0})}$ of radius $\epsilon \geq 0$ contained in the superlevel set $L_{\nu_{\mathbf{x}},\lambda}$. It now suffices to show that the minimum such $\delta$ and the maximum such $\epsilon$ exist.
777
+
778
+ Since $L_{\nu_{\mathbf{x}},\lambda}$ is bounded, its boundary $\partial L_{\nu_{\mathbf{x}},\lambda}$ is also bounded. $\partial L_{\nu_{\mathbf{x}},\lambda}$ is closed and bounded, thus it is a compact set. Therefore, the Euclidean norm, as a continuous function, should achieve the maximum $\bar{r}$ and the minimum $\underline{r}$ on $\partial L_{\nu_{\mathbf{x}},\lambda}$ by the extreme value theorem. From the choice of $\delta$ and $\epsilon$ , we can get,
779
+
780
+ $$
781
+ \epsilon \leq \underline {{r}} \leq \overline {{r}} \leq \delta
782
+ $$
783
+
784
+ Therefore, we can find the minimum $\delta_{\mathbf{x},\lambda} = \overline{r}$ and the maximum $\epsilon_{\mathbf{x},\lambda} = \underline{r}$ .
785
+
786
+
787
+
788
+ # D.4 GENERALIZATION OF THEOREM 2
789
+
790
+ We generalize Theorem 2 to handle more concepts in topology. Theorem 2 mainly uses the fact that the number of connected components of a $\lambda$-density superlevel set is preserved by a continuous generative model $G$.
791
+
792
+ In algebraic topology, each connected component corresponds to a generator of the 0-th homology group $H_0$, and continuity of a function is enough to preserve each component. In general, generators of the $i$-th homology group $H_i$ for $i > 0$ are not preserved by a continuous map, so we need to restrict $G$ further. By requiring $G$ to be a homeomorphism, we can safely guarantee that all topological properties are preserved by $G$; therefore, we can generalize Theorem 2 to a homeomorphic generative model $G$.
793
+
794
+ To generalize the proof of Theorem 2, we first provide a sketch of that proof.
795
+
796
+ (1) $\lambda^{*}$ -density superlevel set $L_{\lambda^{*}}^{Z}$ of a mixture of $n_Z$ Gaussian distributions has at most $n_Z$ connected components.
797
+ (2) Since $G$ is continuous, the number of connected components of $L_{\lambda^*}^{G(Z)} = G(L_{\lambda^*}^Z)$ is the same as the number of connected components of $L_{\lambda^*}^Z$, so it is also at most $n_Z$.
798
+ (3) We choose $\lambda^{*}$ so that $L_{\lambda^*}^X$ is included in the $\delta^{*}$-neighborhood of $M$.
799
+ (4) By the assumption on the class-wise distance of $M$, the $\delta^*$-neighborhood of $M$ has exactly the same number of connected components as $M$, i.e., $n_X$. Therefore $L_{\lambda^*}^X$ has at least $n_X$ connected components.
800
+ (5) By (2) and (4), we conclude that $L_{\lambda^*}^{G(Z)}$ and $L_{\lambda^*}^X$ do not agree on the number of connected components as long as $n_Z < n_X$.
801
+
802
+ In this proof, $n_Z$ corresponds to the maximal 0-th Betti number of $L_{\lambda^*}^Z$, i.e., the number of generators of $H_0(L_{\lambda^*}^Z)$. If we keep using a mixture of Gaussians as the latent vector distribution, all components of $L_{\lambda^*}^Z$ are contractible, so we may use 0 as the maximal $i$-th Betti number for all $i > 0$.
803
+
804
+ Also, $n_X$ corresponds to the 0-th Betti number of $M$, and it served as the minimal 0-th Betti number of $L_{\lambda^*}^X$. The condition on the class-wise distance of $M$ is used to ensure that $n_X$ is a lower bound. Combining these observations, we obtain the following generalized statement.
805
+
806
+ Theorem 3. Let $\mathcal{D}_Z$ be a mixture of multivariate Gaussian distributions, and let $\mathcal{D}_X$ be the target distribution from data-generating manifold $M$ . Let $n_i$ be the $i$ -th Betti number of $M$ .
807
+
808
+ Consider a generative model $G$ used to approximate $\mathcal{D}_X$ with latent vectors sampled from $\mathcal{D}_Z$. Assume that $G$ is a homeomorphism from $\mathbb{R}^n$ to itself. Assume that the data-generating manifold satisfies the conditions of Theorem 1, and let $\lambda^{*}$ be the corresponding threshold value, so that $L_{\lambda^{*}}^{X}$ is
809
+
810
+ the corresponding superlevel set. Assume that, for some $j > 0$, the homomorphism $\iota^{*}$ induced by the inclusion $\iota : M \to N_{\delta^{*}}(M)$ is injective.
811
+
812
+ If $0 < n_{j}$, then $L_{\lambda^{*}}^{X}$ and $L_{\lambda^{*}}^{G(Z)}$ do not agree on the $j$-th Betti number.
813
+
814
+ Proof. Since $L_{\lambda^*}^X$ is the result of Theorem 1, it includes $M$ and is included in the $\delta^*$-neighborhood $N_{\delta^*}(M)$ of $M$. Define the inclusions $\iota_1, \iota_2$ as
815
+
816
+ $\iota_{1}:M\to L^{X}_{\lambda^{*}}$
817
+ $\iota_{2}:L_{\lambda^{*}}^{X}\to N_{\delta^{*}}(M)$
818
+
819
+ Clearly, $\iota = \iota_{2} \circ \iota_{1}$ .
820
+
821
+ Let $\iota_1^*$ and $\iota_2^*$ be the homomorphisms induced by $\iota_{1}$ and $\iota_{2}$, respectively.
822
+
823
+ By the assumption, any generator $[a]$ in $H_{j}(M)$ is injectively mapped to a nonzero generator $\iota^{*}([a])$ in $H_{j}(N_{\delta^{*}}(M))$. Therefore, the $j$-th Betti number of $N_{\delta^{*}}(M)$ is at least that of $M$, i.e., $n_j$; recall that the $j$-th Betti number is the rank of the $j$-th homology group. Moreover, since $\iota^{*} = \iota_2^{*} \circ \iota_1^{*}$ is injective, the image of $\iota_1^{*}$ in $H_j(L_{\lambda^*}^X)$ has rank at least $n_j$, and hence $\mathrm{rank}(H_j(L_{\lambda^*}^X)) \geq n_j$. Therefore the $j$-th Betti number of $L_{\lambda^*}^X$ is at least $n_j$.
824
+
825
+ However, since $\mathcal{D}_Z$ is a mixture of Gaussians, for any value of $\lambda$ (including the special case $\lambda = \lambda^*$), $L_{\lambda}^{Z}$ does not have any generator of the $j$-th homology group, so its $j$-th Betti number is 0 for all $j > 0$. Since $G$ is a homeomorphism, it preserves all the Betti numbers, thus $L_{\lambda^*}^{G(Z)} = G(L_{\lambda^*}^{Z})$ also has $j$-th Betti number 0. As $0 < n_j$, $L_{\lambda^*}^{X}$ and $L_{\lambda^*}^{G(Z)}$ can never agree on the $j$-th Betti number.
826
+
827
+ In Section 5.2, Figure 3i from the circles dataset is a remarkable example in which $L_{\lambda}^{G(Z)}$ has the same number of connected components but does not have any loop (non-contractible circle). This is empirical evidence of Theorem 3, so it is explained by mismatches in the topology of the distributions. Each concentric circle has $\mathbb{Z}$ as its first homology group, as a circle contains exactly one generator. However, the latent vector distribution always has a trivial first homology group, as any superlevel set of a mixture of Gaussians is a union of contractible connected components.
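+
+ As a side note, this kind of topological mismatch can be checked numerically with persistent homology. A minimal sketch using the third-party `ripser` package (an assumed dependency; any persistent-homology library would do) estimates the first Betti number of a sampled circle:
+
+ ```python
+ import numpy as np
+ from ripser import ripser  # assumes `pip install ripser`
+
+ rng = np.random.default_rng(0)
+ theta = rng.uniform(0.0, 2.0 * np.pi, 400)
+ circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
+
+ dgms = ripser(circle)['dgms']
+ # dgms[0]: H_0 intervals (components); dgms[1]: H_1 intervals (loops)
+ h1 = dgms[1]
+ persistence = h1[:, 1] - h1[:, 0]
+ print((persistence > 0.5).sum())  # one long-lived loop, i.e., b_1 = 1
+ ```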
828
+
829
+ # D.5 DETAILS OF INC IMPLEMENTATIONS IN THE SECTION 5
830
+
831
+ INC implementation. We start by introducing the optimization for the ideal INC projection when the data-generating manifold $M$ is available.
832
+
833
+ $$
834
+ \mathbf {x} ^ {*} = \underset {\mathbf {x} \in M} {\arg \min } d (\mathbf {x}, \hat {\mathbf {x}}) \tag {11}
835
+ $$
836
+
837
+ where $d$ is a metric defined on the domain $X$. If perfect classification on $M$ is assumed (the model is well-trained on $M$) and $\hat{\mathbf{x}}$ is close enough to the manifold of the correct label, the classification $f(\mathbf{x}^{*})$ is likely to be correct, since $\mathbf{x}^{*}$ is likely to lie on the correct manifold. Since the data-generating manifold $M$ is unknown, the INC approach runs the following optimization before the classification.
838
+
839
+ $$
840
+ \mathbf{x}^* = G(\mathbf{z}^*) \text{ where } \mathbf{z}^* = \underset{\mathbf{z} \sim \mathcal{D}_Z}{\arg\min}\, d(G(\mathbf{z}), \hat{\mathbf{x}}) \tag{12}
841
+ $$
842
+
843
+ where $d$ is a metric defined on the domain $X$ .
844
+
845
+ When INC is implemented with a reversible generative model $G$, for any given $\hat{\mathbf{x}} \in \mathbb{R}^n$ there exists a trivial solution $\mathbf{z}^{*} = G^{-1}(\hat{\mathbf{x}})$ to the optimization (12), achieving $d(G(\mathbf{z}^{*}), \hat{\mathbf{x}}) = 0$. This is true even for $\hat{\mathbf{x}}$ outside the manifold, resulting in the situation that the output $\mathbf{x}^{*} = G(\mathbf{z}^{*}) = \hat{\mathbf{x}}$ is still outside the data-generating manifold.
846
+
847
+ To manage this problem, we add to the objective function another term penalizing a low density of the latent vector. Thus, in our INC implementation, we solve the following optimization problem.
848
+
849
+ $$
850
+ \mathbf{x}^* = G(\mathbf{z}^*) \text{ where } \mathbf{z}^* = \arg\min_{\mathbf{z} \sim \mathcal{D}_Z} \left[ d(G(\mathbf{z}), \hat{\mathbf{x}}) + \alpha \left( M - p_Z(\mathbf{z}) \right) \right] \tag{13}
851
+ $$
852
+
853
+ where $\alpha$ is the regularization factor and $M$ is the maximum possible value of the density $p_Z$ of the latent vector distribution. For the regularization factor, we used the same value $\alpha = 1$ throughout all experiments.
854
+
855
+ To solve each optimization problem, we used the Adam optimizer (Kingma & Ba, 2014) built into the TensorFlow package. For the optimization parameters, we ran 100 iterations of the Adam optimizer with learning rate 0.01, starting from a randomly sampled $\mathbf{z}$.
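+
+ For concreteness, the following is a minimal sketch of this optimization loop in TensorFlow 2. It is our illustration only: the generator `G` below is a toy stand-in for a trained reversible model, and the latent prior is assumed to be a 2-D standard Gaussian, so the maximum density is $M = 1/(2\pi)$.
+
+ ```python
+ import math
+ import tensorflow as tf
+
+ ALPHA = 1.0                        # regularization factor, as in eq. (13)
+ M_MAX = 1.0 / (2.0 * math.pi)      # max density of a 2-D standard Gaussian
+
+ def p_Z(z):
+     # density of the latent prior N(0, I) on R^2
+     return tf.exp(-0.5 * tf.reduce_sum(z ** 2)) / (2.0 * math.pi)
+
+ def G(z):
+     # hypothetical stand-in for a trained reversible generator
+     return z + 0.1 * tf.sin(z)
+
+ def inc_project(x_hat, steps=100, lr=0.01):
+     z = tf.Variable(tf.random.normal([2]))          # random initial z
+     opt = tf.keras.optimizers.Adam(learning_rate=lr)
+     for _ in range(steps):
+         with tf.GradientTape() as tape:
+             loss = tf.norm(G(z) - x_hat) + ALPHA * (M_MAX - p_Z(z))
+         opt.apply_gradients([(tape.gradient(loss, z), z)])
+     return G(z)
+
+ x_star = inc_project(tf.constant([1.5, 0.5]))
+ ```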
856
+
857
+ When implementing INC using a class-aware generative model, we used the following strategy to improve its robustness.
858
+
859
+ - As the class-aware generative model generates each manifold from each Gaussian component, we first sample initial points from each manifold by randomly choosing latent vectors $\mathbf{z}_1,\ldots ,\mathbf{z}_l$ from each Gaussian component.
860
+ - We run INC for $i$ -th manifold by solving the following optimization.
861
+
862
+ $$
863
+ \mathbf{x}_i^* = G(\mathbf{z}_i^*) \text{ where } \mathbf{z}_i^* = \arg\min_{\mathbf{z} \sim \mathcal{D}_Z} \left[ d(G(\mathbf{z}), \hat{\mathbf{x}}) + \alpha \left( M_i - p_{Z,i}(\mathbf{z}) \right) \right]
864
+ $$
865
+
866
+ where $M_{i}$ is the maximum value of the density of the $i$-th Gaussian component. The regularization term is designed to penalize $\mathbf{z}$ that is unlikely to be generated by the $i$-th Gaussian component, so we only search within the range of the $i$-th Gaussian component, i.e., the $i$-th manifold.
867
+
868
+ - We choose the final solution $\mathbf{x}_i^*$ achieving the minimum $d(\mathbf{x}_i^*,\hat{\mathbf{x}})$ , breaking ties randomly.
869
+
870
+ Since each search is performed only on each submanifold, the artifact observed in Section 5.3 never appears during the optimization process. Also, choosing initial points from each manifold prevents the initialization problem mentioned in Section 5.3.
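+
+ A compact sketch of this class-aware search, under the same toy assumptions as the sketch above (unit-variance Gaussian components; `G` is a hypothetical stand-in):
+
+ ```python
+ import math
+ import tensorflow as tf
+
+ ALPHA = 1.0
+
+ def G(z):
+     # hypothetical stand-in for a trained class-aware reversible generator
+     return z + 0.1 * tf.sin(z)
+
+ def component_density(z, mean):
+     # density of the unit-variance Gaussian component centered at `mean`
+     return tf.exp(-0.5 * tf.reduce_sum((z - mean) ** 2)) / (2.0 * math.pi)
+
+ def class_aware_inc(x_hat, means, steps=100, lr=0.01):
+     m_max = 1.0 / (2.0 * math.pi)   # max density of each component
+     candidates = []
+     for mean in means:              # one restricted search per component
+         z = tf.Variable(mean + tf.random.normal(mean.shape))
+         opt = tf.keras.optimizers.Adam(learning_rate=lr)
+         for _ in range(steps):
+             with tf.GradientTape() as tape:
+                 loss = (tf.norm(G(z) - x_hat)
+                         + ALPHA * (m_max - component_density(z, mean)))
+             opt.apply_gradients([(tape.gradient(loss, z), z)])
+         candidates.append(G(z))
+     dists = tf.stack([tf.norm(x - x_hat) for x in candidates])
+     return candidates[int(tf.argmin(dists))]  # ties broken by argmin order
+
+ means = [tf.constant([-3.0, 0.0]), tf.constant([3.0, 0.0])]
+ x_star = class_aware_inc(tf.constant([1.5, 0.5]), means)
+ ```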
871
+
872
+ # D.6 DISCUSSION ABOUT THE LIMITATION OF TOPOLOGICAL INFORMATION
873
+
874
+ Given a sufficient number of connected components in the latent vector distribution, does the class-aware training suggested in this paper result in a generative model that achieves manifold separation? The answer is no: manifold separation also depends on other factors, e.g., the alignment of the latent vector distribution and the choice of training parameters.
875
+
876
+ ![](images/7e21e5f430a460898f4573aaf7b7a7701443b3ed00ae3376299c39b05b76029a.jpg)
877
+ (a) Superlevel set of $\mathcal{D}_Z$
878
+ ![](images/f3818c72f7dd980da89a2e1de2e54c53d7e7b624dee6944e4219d4f4a8c14a50.jpg)
879
+ (b) Superlevel set of $\mathcal{D}_{G(Z)}$
880
+
881
+ Figure 7: Failure cases of class-aware training.
882
+
883
+ Figure 7b shows the superlevel set of $\mathcal{D}_{G(Z)}$ from class-aware training to learn the two-moons dataset when the latent vector distribution is a mixture of two Gaussian distributions aligned horizontally (Figure 7a). It is clear that in this case the generative model induced a connection artifact even though class-aware training was used.
884
+
885
+ We explain this by interpreting reversible generative models as dynamical systems (Weinan, 2017; Chen et al., 2018; Grathwohl et al., 2018; Zhang et al., 2018).
886
+
887
+ To elaborate, a reversible generative model can be viewed as a dynamical system moving the latent vector distribution to the target distribution continuously in time. When the two Gaussian components are aligned vertically, a reversible generative model is likely to learn how to move the upper (and lower) Gaussian distribution toward the upper (and lower, respectively) moon, without being affected by the entanglement of the two moons. However, moving the left (and right) Gaussian distribution toward the left (and right, respectively) moon continuously in time requires avoiding the entanglement of the two moons during the transition. This case suggests that topological information alone may not be enough to learn a generative model that separates manifolds, because it provides no understanding of how the data-generating manifolds are aligned.
888
+
889
+ # E MORE EXPERIMENTAL RESULTS
890
+
891
+ We present more experimental results on the INC performance, comparing the topology-aware generative model to its topology-ignorant counterpart.
892
+
893
+ Histograms of the projection error distributions in Section 5.4. Figure 8 presents histograms of the projection errors, ranging from 0 to the diameter of the distribution. Each row corresponds to a dataset, while the first and second columns show the results from the topology-ignorant and topology-aware models, respectively. All histograms are normalized so that the values sum to 1. That is, the $y$-axis of each histogram is the estimated probability that INC achieves the projection error on the $x$-axis. Not only do the histograms show an improved mean projection error, but they also show a reduced standard deviation, i.e., we get more consistent projection errors near the mean.
894
+
895
+ ![](images/6ef96912656e363d841e6569f7730148374c841cc44b5f12c5b40f14fb540b7e.jpg)
896
+
897
+ ![](images/c9bdf2c1973911c88f51421652b31302c15ae8ad7e4d2ce494149609be4fb7f0.jpg)
898
+ (b) Two-moons, topology-aware
899
+
900
+ ![](images/a0fe4937176225e22a0f519845967fc5ab5b8e16c102484215bf666c1c9b5728.jpg)
901
+ (a) Two-moons, topology-ignorant
902
+
903
+ ![](images/2a3062da46a4cbb41375e1431dc91aed49b270d198daddb27fda77e0c9ba85bc.jpg)
904
+ (d) Spirals, topology-aware
905
+
906
+ ![](images/3487a2f779cd45ddfdb5cdc7d9f4470b2de12f1e4c3152eb1e3c9f14b819b585.jpg)
907
+ (c) Spirals, topology-ignorant
908
+ (e) Circles, topology-ignorant
909
+
910
+ ![](images/57a658a861f9c3a0e70a3244a6bb67bd726d99c3c3b735ee82d9eb941ea25139.jpg)
911
+ (f) Circles, topology-aware
912
+ Figure 8: Histograms of the projection errors of INC. Each $y$-axis represents the estimated probability that INC incurs the projection error on the corresponding $x$-axis.
ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:83b3ea1ec8d6bdc10a4c8138726262ec5b02e527b6fba0c0c752980cdc332b66
3
+ size 649025
ontheneedfortopologyawaregenerativemodelsformanifoldbaseddefenses/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9e78eb1a1f54f8aa61f53fbd3cbd268c32d585464f2a7e5444070a7010181e05
3
+ size 1571205
ontherelationshipbetweenselfattentionandconvolutionallayers/3e5ce1ef-dba0-48e8-afe3-e1959710950f_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4cf7b226a09cb792c00ab74834ccdcf83100a5747e41e7ec18e7696cfb3f3b06
3
+ size 109641
ontherelationshipbetweenselfattentionandconvolutionallayers/3e5ce1ef-dba0-48e8-afe3-e1959710950f_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:74c745218b5a5d5d826a2e357109194e687541f31e561800f3229e1dcc3bbdb3
3
+ size 125972
ontherelationshipbetweenselfattentionandconvolutionallayers/3e5ce1ef-dba0-48e8-afe3-e1959710950f_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fb25b4aa57f846e336d825780fdc03804af7594a4c52606410576f388585e690
3
+ size 3436815
ontherelationshipbetweenselfattentionandconvolutionallayers/full.md ADDED
@@ -0,0 +1,458 @@
 
 
 
 
1
+ # ON THE RELATIONSHIP BETWEEN SELF-ATTENTION AND CONVOLUTIONAL LAYERS
2
+
3
+ Jean-Baptiste Cordonnier, Andreas Loukas & Martin Jaggi
4
+
5
+ École Polytechnique Fédérale de Lausanne (EPFL)
6
+
7
+ {first.last}@epfl.ch
8
+
9
+ # ABSTRACT
10
+
11
+ Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block. Beyond helping CNNs to handle long-range dependencies, Ramachandran et al. (2019) showed that attention can completely replace convolution and achieve state-of-the-art performance on vision tasks. This raises the question: do learned attention layers operate similarly to convolutional layers? This work provides evidence that attention layers can perform convolution and, indeed, they often learn to do so in practice. Specifically, we prove that a multi-head self-attention layer with sufficient number of heads is at least as expressive as any convolutional layer. Our numerical experiments then show that self-attention layers attend to pixel-grid patterns similarly to CNN layers, corroborating our analysis. Our code is publicly available<sup>1</sup>.
12
+
13
+ # 1 INTRODUCTION
14
+
15
+ Recent advances in Natural Language Processing (NLP) are largely attributed to the rise of the transformer (Vaswani et al., 2017). Pre-trained to solve an unsupervised task on large corpora of text, transformer-based architectures, such as GPT-2 (Radford et al., 2018), BERT (Devlin et al., 2018) and Transformer-XL (Dai et al., 2019), seem to possess the capacity to learn the underlying structure of text and, as a consequence, to learn representations that generalize across tasks. The key difference between transformers and previous methods, such as recurrent neural networks (Hochreiter & Schmidhuber, 1997) and convolutional neural networks (CNN), is that the former can simultaneously attend to every word of their input sequence. This is made possible thanks to the attention mechanism—originally introduced in Neural Machine Translation to better handle long-range dependencies (Bahdanau et al., 2015). With self-attention in particular, the similarity of two words in a sequence is captured by an attention score measuring the distance of their representations. The representation of each word is then updated based on those words whose attention score is highest.
16
+
17
+ Inspired by its capacity to learn meaningful inter-dependencies between words, researchers have recently considered utilizing self-attention in vision tasks. Self-attention was first added to CNN by either using channel-based attention (Hu et al., 2018) or non-local relationships across the image (Wang et al., 2018). More recently, Bello et al. (2019) augmented CNNs by replacing some convolutional layers with self-attention layers, leading to improvements on image classification and object detection tasks. Interestingly, Ramachandran et al. (2019) noticed that, even though state-of-the-art results are reached when attention and convolutional features are combined, under same computation and model size constraints, self-attention-only architectures also reach competitive image classification accuracy.
18
+
19
+ These findings raise the question, do self-attention layers process images in a similar manner to convolutional layers? From a theoretical perspective, one could argue that transformers have the capacity to simulate any function—including a CNN. Indeed, Pérez et al. (2019) showed that a multilayer attention-based architecture with additive positional encodings is Turing complete under some strong theoretical assumptions, such as unbounded precision arithmetic. Unfortunately, universality results do not reveal how a machine solves a task, only that it has the capacity to do so. Thus, the question of how self-attention layers actually process images remains open.
20
+
21
+ Contributions. In this work, we put forth theoretical and empirical evidence that self-attention layers can (and do) learn to behave similarly to convolutional layers:
22
+
23
+ I. From a theoretical perspective, we provide a constructive proof showing that self-attention layers can express any convolutional layers.
24
+
25
+ Specifically, we show that a single multi-head self-attention layer using relative positional encoding can be re-parametrized to express any convolutional layer.
26
+
27
+ II. Our experiments show that the first few layers of attention-only architectures (Ramachandran et al., 2019) do learn to attend to grid-like patterns around each query pixel, similar to our theoretical construction.
28
+
29
+ Strikingly, this behavior is confirmed both for our quadratic encoding and for the relative encoding that is learned. Our results seem to suggest that localized convolution is the right inductive bias for the first few layers of an image classifying network. We provide an interactive website<sup>2</sup> to explore how self-attention exploits localized position-based attention in lower layers and content-based attention in deeper layers. For reproducibility purposes, our code is publicly available.
30
+
31
+ # 2 BACKGROUND ON ATTENTION MECHANISMS FOR VISION
32
+
33
+ We here recall the mathematical formulation of self-attention layers and emphasize the role of positional encodings.
34
+
35
+ # 2.1 THE MULTI-HEAD SELF-ATTENTION LAYER
36
+
37
+ Let $\mathbf{X} \in \mathbb{R}^{T \times D_{in}}$ be an input matrix consisting of $T$ tokens of $D_{in}$ dimensions each. While in NLP each token corresponds to a word in a sentence, the same formalism can be applied to any sequence of $T$ discrete objects, e.g. pixels. A self-attention layer maps any query token $t \in [T]$ from $D_{in}$ to $D_{out}$ dimensions as follows:
38
+
39
+ $$
40
+ \operatorname{Self-Attention}(\boldsymbol{X})_{t,:} := \operatorname{softmax}\left(\boldsymbol{A}_{t,:}\right) \boldsymbol{X} \boldsymbol{W}_{\mathrm{val}}, \tag{1}
41
+ $$
42
+
43
+ where we refer to the elements of the $T\times T$ matrix
44
+
45
+ $$
46
+ \boldsymbol{A} := \boldsymbol{X} \boldsymbol{W}_{\mathrm{qry}} \boldsymbol{W}_{\mathrm{key}}^{\top} \boldsymbol{X}^{\top} \tag{2}
47
+ $$
48
+
49
+ as attention scores and the softmax output $^3$ as attention probabilities. The layer is parametrized by a query matrix $\mathbf{W}_{qry} \in \mathbb{R}^{D_{in} \times D_k}$ , a key matrix $\mathbf{W}_{key} \in \mathbb{R}^{D_{in} \times D_k}$ and a value matrix $\mathbf{W}_{val} \in \mathbb{R}^{D_{in} \times D_{out}}$ . For simplicity, we exclude any residual connections, batch normalization and constant factors.
50
+
51
+ A key property of the self-attention model described above is that it is equivariant to reordering, that is, it gives the same output independently of how the $T$ input tokens are shuffled. This is problematic for cases where we expect the order of things to matter. To alleviate the limitation, a positional encoding is learned for each token in the sequence (or pixel in an image) and added to the representation of the token itself before applying self-attention:
52
+
53
+ $$
54
+ \boldsymbol{A} := (\boldsymbol{X} + \boldsymbol{P}) \boldsymbol{W}_{\mathrm{qry}} \boldsymbol{W}_{\mathrm{key}}^{\top} (\boldsymbol{X} + \boldsymbol{P})^{\top}, \tag{3}
55
+ $$
56
+
57
+ where $\pmb{P} \in \mathbb{R}^{T \times D_{in}}$ contains the embedding vectors for each position. More generally, $\pmb{P}$ may be substituted by any function that returns a vector representation of the position.
58
+
59
+ It has been found beneficial in practice to replicate this self-attention mechanism into multiple heads, each being able to focus on different parts of the input by using different query, key and value matrices. In multi-head self-attention, the outputs of the $N_{h}$ heads, each of output dimension $D_{h}$, are concatenated and projected to dimension $D_{out}$ as follows:
60
+
61
+ $$
62
+ \operatorname{MHSA}(\boldsymbol{X}) := \operatorname*{concat}_{h \in [N_h]} \left[ \operatorname{Self-Attention}_h(\boldsymbol{X}) \right] \boldsymbol{W}_{\mathrm{out}} + \boldsymbol{b}_{\mathrm{out}} \tag{4}
63
+ $$
64
+
65
+ and two new parameters are introduced: the projection matrix $\mathbf{W}_{out} \in \mathbb{R}^{N_hD_h \times D_{out}}$ and a bias term $\mathbf{b}_{out} \in \mathbb{R}^{D_{out}}$ .
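+
+ To make the shapes concrete, here is a small NumPy sketch of equations (1), (2) and (4) (our illustration; the dimension names follow the text, and the random initialization is arbitrary):
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ T, D_in, D_k, D_h, N_h, D_out = 10, 16, 8, 8, 4, 16
+
+ def softmax(a, axis=-1):
+     e = np.exp(a - a.max(axis=axis, keepdims=True))
+     return e / e.sum(axis=axis, keepdims=True)
+
+ def self_attention(X, W_qry, W_key, W_val):
+     A = X @ W_qry @ W_key.T @ X.T   # attention scores, eq. (2)
+     return softmax(A) @ X @ W_val   # eq. (1)
+
+ # one (W_qry, W_key, W_val) triple per head
+ heads = [(rng.normal(size=(D_in, D_k)), rng.normal(size=(D_in, D_k)),
+           rng.normal(size=(D_in, D_h))) for _ in range(N_h)]
+ W_out = rng.normal(size=(N_h * D_h, D_out))
+ b_out = np.zeros(D_out)
+
+ X = rng.normal(size=(T, D_in))
+ concat = np.concatenate([self_attention(X, *h) for h in heads], axis=-1)
+ out = concat @ W_out + b_out        # eq. (4)
+ print(out.shape)                    # (10, 16)
+ ```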
66
+
67
+ # 2.2 ATTENTION FOR IMAGES
68
+
69
+ Convolutional layers are the de facto choice for building neural networks that operate on images. We recall that, given an image tensor $\mathbf{X} \in \mathbb{R}^{W \times H \times D_{in}}$ of width $W$ , height $H$ and $D_{in}$ channels, the output of a convolutional layer for pixel $(i,j)$ is given by
70
+
71
+ $$
72
+ \operatorname{Conv}(\boldsymbol{X})_{i,j,:} := \sum_{(\delta_1, \delta_2) \in \mathbb{A}_K} \mathbf{X}_{i+\delta_1, j+\delta_2, :} \mathbf{W}_{\delta_1, \delta_2, :, :} + \boldsymbol{b}, \tag{5}
73
+ $$
74
+
75
+ where $\mathbf{W}$ is the $K\times K\times D_{in}\times D_{out}$ weight tensor $^4$, $\pmb{b}\in \mathbb{R}^{D_{out}}$ is the bias vector, and the set
76
+
77
+ $$
78
+ \mathbb {A} _ {K} := \left[ - \left\lfloor \frac {K}{2} \right\rfloor , \dots , \left\lfloor \frac {K}{2} \right\rfloor \right] \times \left[ - \left\lfloor \frac {K}{2} \right\rfloor , \dots , \left\lfloor \frac {K}{2} \right\rfloor \right]
79
+ $$
80
+
81
+ contains all possible shifts appearing when convolving the image with a $K \times K$ kernel.
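+
+ Equation (5) can be written almost verbatim in code. A minimal NumPy sketch (our illustration; "SAME" zero padding is assumed so the output keeps the input's spatial size):
+
+ ```python
+ import numpy as np
+
+ def conv2d(X, W, b):
+     """Direct implementation of eq. (5) with 'SAME' zero padding."""
+     Wd, H, D_in = X.shape
+     K = W.shape[0]
+     D_out = W.shape[-1]
+     r = K // 2
+     Xp = np.pad(X, ((r, r), (r, r), (0, 0)))
+     out = np.zeros((Wd, H, D_out)) + b
+     for d1 in range(-r, r + 1):
+         for d2 in range(-r, r + 1):
+             # output pixel (i, j) reads input pixel (i + d1, j + d2)
+             patch = Xp[r + d1 : r + d1 + Wd, r + d2 : r + d2 + H, :]
+             out += patch @ W[d1 + r, d2 + r]
+     return out
+
+ rng = np.random.default_rng(0)
+ X = rng.normal(size=(6, 6, 3))          # W x H x D_in
+ W = rng.normal(size=(3, 3, 3, 5))       # K x K x D_in x D_out
+ print(conv2d(X, W, np.zeros(5)).shape)  # (6, 6, 5)
+ ```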
82
+
83
+ In the following, we review how self-attention can be adapted from 1D sequences to images.
84
+
85
+ With images, rather than tokens, we have query and key pixels $\mathbf{q}$ , $\mathbf{k} \in [W] \times [H]$ . Accordingly, the input is a tensor $\mathbf{X}$ of dimension $W \times H \times D_{in}$ and each attention score associates a query and a key pixel.
86
+
87
+ To keep the formulas consistent with the 1D case, we abuse notation and slice tensors by using a 2D index vector: if $\pmb{p} = (i,j)$ , we write $\mathbf{X}_{\pmb{p},\cdot}$ and $\mathbf{A}_{\pmb{p},\cdot}$ to mean $\mathbf{X}_{i,j,\cdot}$ and $\mathbf{A}_{i,j,\cdot,\cdot}$ , respectively. With this notation in place, the multi-head self attention layer output at pixel $\pmb{q}$ can be expressed as follows:
88
+
89
+ $$
90
+ \operatorname{Self-Attention}(\boldsymbol{X})_{\boldsymbol{q},:} = \sum_{\boldsymbol{k}} \operatorname{softmax}\left(\mathbf{A}_{\boldsymbol{q},:}\right)_{\boldsymbol{k}} \mathbf{X}_{\boldsymbol{k},:} \boldsymbol{W}_{\mathrm{val}} \tag{6}
91
+ $$
92
+
93
+ and accordingly for the multi-head case.
94
+
95
+ # 2.3 POSITIONAL ENCODING FOR IMAGES
96
+
97
+ There are two types of positional encoding that have been used in transformer-based architectures: the absolute and the relative encoding (see also Table 3 in the Appendix).
98
+
99
+ With absolute encodings, a (fixed or learned) vector $\mathbf{P}_{p,:}$ is assigned to each pixel $p$ . The computation of the attention scores we saw in eq. (2) can then be decomposed as follows:
100
+
101
+ $$
102
+ \begin{aligned} \mathbf{A}_{\boldsymbol{q},\boldsymbol{k}}^{\mathrm{abs}} &= (\mathbf{X}_{\boldsymbol{q},:} + \mathbf{P}_{\boldsymbol{q},:}) \boldsymbol{W}_{\mathrm{qry}} \boldsymbol{W}_{\mathrm{key}}^{\top} (\mathbf{X}_{\boldsymbol{k},:} + \mathbf{P}_{\boldsymbol{k},:})^{\top} \\ &= \mathbf{X}_{\boldsymbol{q},:} \boldsymbol{W}_{\mathrm{qry}} \boldsymbol{W}_{\mathrm{key}}^{\top} \mathbf{X}_{\boldsymbol{k},:}^{\top} + \mathbf{X}_{\boldsymbol{q},:} \boldsymbol{W}_{\mathrm{qry}} \boldsymbol{W}_{\mathrm{key}}^{\top} \mathbf{P}_{\boldsymbol{k},:}^{\top} + \mathbf{P}_{\boldsymbol{q},:} \boldsymbol{W}_{\mathrm{qry}} \boldsymbol{W}_{\mathrm{key}}^{\top} \mathbf{X}_{\boldsymbol{k},:}^{\top} + \mathbf{P}_{\boldsymbol{q},:} \boldsymbol{W}_{\mathrm{qry}} \boldsymbol{W}_{\mathrm{key}}^{\top} \mathbf{P}_{\boldsymbol{k},:}^{\top} \end{aligned} \tag{7}
103
+ $$
104
+
105
+ where $\mathbf{q}$ and $\mathbf{k}$ correspond to the query and key pixels, respectively.
106
+
107
+ The relative positional encoding was introduced by Dai et al. (2019). The main idea is to only consider the position difference between the query pixel (the pixel whose representation we compute) and the key pixel (the pixel we attend to), instead of the absolute position of the key pixel:
108
+
109
+ $$
110
+ \mathbf{A}_{\boldsymbol{q},\boldsymbol{k}}^{\mathrm{rel}} := \mathbf{X}_{\boldsymbol{q},:}^{\top} \boldsymbol{W}_{\mathrm{qry}}^{\top} \boldsymbol{W}_{\mathrm{key}} \mathbf{X}_{\boldsymbol{k},:} + \mathbf{X}_{\boldsymbol{q},:}^{\top} \boldsymbol{W}_{\mathrm{qry}}^{\top} \widehat{\boldsymbol{W}}_{\mathrm{key}} \boldsymbol{r}_{\boldsymbol{\delta}} + \boldsymbol{u}^{\top} \boldsymbol{W}_{\mathrm{key}} \mathbf{X}_{\boldsymbol{k},:} + \boldsymbol{v}^{\top} \widehat{\boldsymbol{W}}_{\mathrm{key}} \boldsymbol{r}_{\boldsymbol{\delta}} \tag{8}
111
+ $$
112
+
113
+ In this manner, the attention scores only depend on the shift $\delta \coloneqq k - q$ . Above, the learnable vectors $\mathbf{u}$ and $\mathbf{v}$ are unique for each head, whereas for every shift $\delta$ the relative positional encoding $\mathbf{r}_{\delta} \in \mathbb{R}^{D_p}$ is shared by all layers and heads. Moreover, now the key weights are split into two types: $\mathbf{W}_{key}$ pertain to the input and $\widehat{\mathbf{W}}_{key}$ to the relative position of pixels.
114
+
115
+ # 3 SELF-ATTENTION AS A CONVOLUTIONAL LAYER
116
+
117
+ This section derives sufficient conditions such that a multi-head self-attention layer can simulate a convolutional layer. Our main result is the following:
118
+
119
+ Theorem 1. A multi-head self-attention layer with $N_{h}$ heads of dimension $D_{h}$ , output dimension $D_{out}$ and a relative positional encoding of dimension $D_{p} \geq 3$ can express any convolutional layer of kernel size $\sqrt{N_{h}} \times \sqrt{N_{h}}$ and $\min(D_{h}, D_{out})$ output channels.
120
+
121
+ The theorem is proven constructively by selecting the parameters of the multi-head self-attention layer so that the latter acts like a convolutional layer. In the proposed construction, each self-attention head should attend to a different relative shift within the set $\Delta_K = \{-\lfloor K / 2\rfloor ,\ldots ,\lfloor K / 2\rfloor \}^2$ of all pixel shifts in a $K\times K$ kernel. The exact condition can be found in the statement of Lemma 1.
122
+
123
+ Then, Lemma 2 shows that the aforementioned condition is satisfied for the relative positional encoding that we refer to as the quadratic encoding:
124
+
125
+ $$
126
+ \boldsymbol{v}^{(h)} := -\alpha^{(h)} \left(1, -2\boldsymbol{\Delta}_1^{(h)}, -2\boldsymbol{\Delta}_2^{(h)}\right) \quad \boldsymbol{r}_{\boldsymbol{\delta}} := \left(\|\boldsymbol{\delta}\|^2, \delta_1, \delta_2\right) \quad \boldsymbol{W}_{\mathrm{qry}} = \boldsymbol{W}_{\mathrm{key}} := \mathbf{0} \quad \widehat{\boldsymbol{W}}_{\mathrm{key}} := \boldsymbol{I} \tag{9}
127
+ $$
128
+
129
+ The learned parameters $\pmb{\Delta}^{(h)} = (\pmb{\Delta}_1^{(h)},\pmb{\Delta}_2^{(h)})$ and $\alpha^{(h)}$ determine the center and width of attention of each head, respectively. On the other hand, $\delta = (\delta_{1},\delta_{2})$ is fixed and expresses the relative shift between query and key pixels.
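+
+ To see why this encoding concentrates attention, note that with $\boldsymbol{W}_{qry} = \boldsymbol{W}_{key} = \mathbf{0}$ the score of shift $\boldsymbol{\delta}$ reduces to $\boldsymbol{v}^{(h)\top} \boldsymbol{r}_{\boldsymbol{\delta}} = -\alpha^{(h)} (\|\boldsymbol{\delta} - \boldsymbol{\Delta}^{(h)}\|^2 - \|\boldsymbol{\Delta}^{(h)}\|^2)$, a quadratic maximized at $\boldsymbol{\delta} = \boldsymbol{\Delta}^{(h)}$. A short NumPy sketch (our illustration; the values of $\alpha$ and $\boldsymbol{\Delta}$ are arbitrary) shows the softmax saturating on the selected shift as $\alpha$ grows:
+
+ ```python
+ import numpy as np
+
+ def attention_probs(alpha, center, shifts):
+     # softmax of the quadratic scores; the constant alpha * ||center||^2
+     # is dropped since softmax is invariant to constant offsets
+     scores = -alpha * np.sum((shifts - center) ** 2, axis=-1)
+     e = np.exp(scores - scores.max())
+     return e / e.sum()
+
+ # all shifts in a 5x5 neighborhood of the query pixel
+ grid = np.array([(d1, d2) for d1 in range(-2, 3) for d2 in range(-2, 3)])
+ center = np.array([1.0, -1.0])   # Delta^{(h)}: where this head looks
+
+ for alpha in (0.5, 5.0, 50.0):
+     p = attention_probs(alpha, center, grid)
+     print(alpha, p.max())  # mass concentrates on shift (1, -1) as alpha grows
+ ```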
130
+
131
+ It is important to stress that the above encoding is not the only one for which the conditions of Lemma 1 are satisfied. In fact, in our experiments, the relative encoding learned by the neural network also matched the conditions of the lemma (despite being different from the quadratic encoding). Nevertheless, the encoding defined above is very efficient in terms of size, as only $D_{p} = 3$ dimensions suffice to encode the relative position of pixels, while also reaching empirical performance similar to or better than that of the learned encoding.
132
+
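+ To make eq. (9) concrete, the following minimal sketch (ours; variable names are illustrative, not from the released code) computes the attention probabilities induced by the quadratic encoding for one head over a window of shifts, and checks that they peak at the learned center $\Delta^{(h)}$:
+
+ ```python
+ import torch
+
+ # One head's parameters alpha^(h) and Delta^(h) from eq. (9).
+ alpha, center = 5.0, (1.0, -1.0)
+ d = torch.arange(-3, 4, dtype=torch.float32)
+ delta = torch.stack(torch.meshgrid(d, d, indexing="ij"), dim=-1)    # all shifts k - q
+ r = torch.cat([(delta ** 2).sum(-1, keepdim=True), delta], dim=-1)  # r_delta = (||d||^2, d1, d2)
+ v = -alpha * torch.tensor([1.0, -2 * center[0], -2 * center[1]])    # v^(h)
+ probs = torch.softmax((r @ v).flatten(), dim=0)                     # A_{q,k} = v^T r_delta
+ print(delta.reshape(-1, 2)[probs.argmax()])   # tensor([ 1., -1.]): peak at Delta
+ ```
+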
133
+ The theorem covers the general convolution operator as defined in eq. (17). However, machine learning practitioners using differentiable programming frameworks (Paszke et al., 2017; Abadi et al., 2015) might question whether the theorem holds for all hyper-parameters of 2D convolutional layers:
134
+
135
+ - Padding: a multi-head self-attention layer uses by default the "SAME" padding, while a convolutional layer would decrease the image size by $K - 1$ pixels. The correct way to alleviate these boundary effects is to pad the input image with $\lfloor K / 2 \rfloor$ zeros on each side. In this case, the cropped output of an MHSA layer and a convolutional layer are the same.
136
+ - Stride: a strided convolution can be seen as a convolution followed by a fixed pooling operation (with computational optimizations). Theorem 1 is defined for stride 1, but a fixed pooling layer could be appended to the self-attention layer to simulate any stride (see the sketch after this list).
137
+ - Dilation: a multi-head self-attention layer can express any dilated convolution, as each head can attend to a value at any pixel shift and form a (dilated) grid pattern.
138
+
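+ For the stride point above, a quick check (a sketch of ours, not from the paper's code) that a strided convolution equals its stride-1 counterpart followed by fixed subsampling, which is the form Theorem 1 covers:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ x = torch.randn(1, 4, 32, 32)
+ w = torch.randn(8, 4, 3, 3)
+
+ strided = F.conv2d(x, w, stride=2, padding=1)
+ full = F.conv2d(x, w, stride=1, padding=1)  # the stride-1 case of Theorem 1
+ # Subsampling the stride-1 output recovers the strided convolution exactly.
+ print(torch.allclose(strided, full[:, :, ::2, ::2]))  # True
+ ```
+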
139
+ Remark for the 1D case. Convolutional layers acting on sequences are commonly used in the literature for text (Kim, 2014), as well as audio (van den Oord et al., 2016) and time series (Franceschi et al., 2019). Theorem 1 can be straightforwardly extended to show that multi-head self-attention with $N_{h}$ heads can also simulate a 1D convolutional layer with a kernel of size $K = N_{h}$ with $\min(D_{h}, D_{out})$ output channels using a positional encoding of dimension $D_{p} \geq 2$ . Since we have not tested empirically if the preceding construction matches the behavior of 1D self-attention in practice, we cannot claim that it actually learns to convolve an input sequence—only that it has the capacity to do so.
140
+
141
+ # PROOF OF MAIN THEOREM
142
+
143
+ The proof follows directly from Lemmas 1 and 2 stated below:
144
+
145
+ Lemma 1. Consider a multi-head self-attention layer consisting of $N_{h} = K^{2}$ heads, $D_{h} \geq D_{out}$ and let $f: [N_{h}] \to \Delta_K$ be a bijective mapping of heads onto shifts. Further, suppose that for every head the following holds:
146
+
147
+ $$
148
+ \operatorname{softmax}\left(\boldsymbol{A}_{\boldsymbol{q},:}^{(h)}\right)_{\boldsymbol{k}} = \begin{cases} 1 & \text{if } f(h) = \boldsymbol{q} - \boldsymbol{k} \\ 0 & \text{otherwise.} \end{cases} \tag{10}
149
+ $$
150
+
151
+ Then, for any convolutional layer with a $K \times K$ kernel and $D_{out}$ output channels, there exists $\{\mathbf{W}_{val}^{(h)}\}_{h \in [N_h]}$ such that $\mathrm{MHSA}(\mathbf{X}) = \mathrm{Conv}(\mathbf{X})$ for every $\mathbf{X} \in \mathbb{R}^{W \times H \times D_{in}}$ .
152
+
153
+ ![](images/12f936481ac1e9a7cb69008d66ecaf4a460440202e0433909153b75891b0e555.jpg)
154
+ Figure 1: Illustration of a Multi-Head Self-Attention layer applied to a tensor image $\mathbf{X}$ . Each head $h$ attends to pixel values around shift $\Delta^{(h)}$ and learns a filter matrix $\boldsymbol{W}_{val}^{(h)}$ . We show attention maps computed for a query pixel at position $\boldsymbol{q}$ .
155
+
156
+ Proof. Our first step will be to rework the expression of the Multi-Head Self-Attention operator from equation (1) and equation (4) such that the effect of the multiple heads becomes more transparent:
157
+
158
+ $$
159
+ \operatorname{MHSA}(\boldsymbol{X}) = \boldsymbol{b}_{out} + \sum_{h \in [N_h]} \operatorname{softmax}\left(\boldsymbol{A}^{(h)}\right) \boldsymbol{X} \underbrace{\boldsymbol{W}_{val}^{(h)} \boldsymbol{W}_{out}[(h-1)D_h + 1 : h D_h + 1]}_{\boldsymbol{W}^{(h)}} \tag{11}
160
+ $$
161
+
162
+ Note that each head's value matrix $\mathbf{W}_{val}^{(h)} \in \mathbb{R}^{D_{in} \times D_h}$ and each block of the projection matrix $\mathbf{W}_{out}$ of dimension $D_h \times D_{out}$ are learned. Assuming that $D_h \geq D_{out}$ , we can replace each pair of matrices by a learned matrix $\mathbf{W}^{(h)}$ for each head. We consider one output pixel of the multi-head self-attention:
163
+
164
+ $$
165
+ \operatorname{MHSA}(\boldsymbol{X})_{\boldsymbol{q},:} = \sum_{h \in [N_h]} \left( \sum_{\boldsymbol{k}} \operatorname{softmax}\left(\boldsymbol{A}_{\boldsymbol{q},:}^{(h)}\right)_{\boldsymbol{k}} \boldsymbol{X}_{\boldsymbol{k},:} \right) \boldsymbol{W}^{(h)} + \boldsymbol{b}_{out} \tag{12}
166
+ $$
167
+
168
+ Due to the conditions of the Lemma, for the $h$ -th attention head the attention probability is one when $\pmb{k} = \pmb{q} - \pmb{f}(h)$ and zero otherwise. The layer's output at pixel $\pmb{q}$ is thus equal to
169
+
170
+ $$
171
+ \operatorname{MHSA}(\mathbf{X})_{\boldsymbol{q},:} = \sum_{h \in [N_h]} \mathbf{X}_{\boldsymbol{q} - f(h),:} \boldsymbol{W}^{(h)} + \boldsymbol{b}_{out} \tag{13}
172
+ $$
173
+
174
+ For $K = \sqrt{N_h}$ , the above can be seen to be equivalent to a convolutional layer as expressed in eq. (17): there is a one-to-one mapping (implied by the map $f$ ) between the matrices $\mathbf{W}^{(h)}$ for $h \in [N_h]$ and the matrices $\mathbf{W}_{k_1,k_2,:,:}$ for all $(k_{1},k_{2})\in [K]^{2}$ .
175
+
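+ As a sanity check, the construction can be verified numerically. The following sketch (ours, with illustrative variable names) simulates the $K^2 = 9$ hard-attention heads of Lemma 1, sets each $\boldsymbol{W}^{(h)}$ to the corresponding kernel slice, and compares against PyTorch's `F.conv2d` with "SAME" zero padding:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ torch.manual_seed(0)
+ H = W = 6
+ D_in, D_out, K = 2, 3, 3
+ X = torch.randn(H, W, D_in)
+ kernel = torch.randn(D_out, D_in, K, K)        # conv weight in PyTorch layout
+
+ # Convolutional layer with "SAME" zero padding.
+ conv = F.conv2d(X.permute(2, 0, 1)[None], kernel, padding=K // 2)
+ conv = conv[0].permute(1, 2, 0)                # back to (H, W, D_out)
+
+ # Simulated MHSA: one head per shift, hard one-hot attention as in Lemma 1.
+ out = torch.zeros(H, W, D_out)
+ shifts = [(dy, dx) for dy in range(-(K // 2), K // 2 + 1)
+                    for dx in range(-(K // 2), K // 2 + 1)]
+ for dy, dx in shifts:
+     W_h = kernel[:, :, dy + K // 2, dx + K // 2].T   # W^(h): (D_in, D_out)
+     for qy in range(H):
+         for qx in range(W):
+             ky, kx = qy + dy, qx + dx                # attended key pixel
+             if 0 <= ky < H and 0 <= kx < W:          # zeros outside the image
+                 out[qy, qx] += X[ky, kx] @ W_h
+
+ print(torch.allclose(out, conv, atol=1e-5))          # True
+ ```
+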
176
+ Remark about $D_h$ and $D_{out}$ . It is frequent in transformer-based architectures to set $D_h = D_{out} / N_h$ , hence $D_h < D_{out}$ . In that case, $W^{(h)}$ is of rank at most $D_h$ , which does not suffice to express every convolutional layer with $D_{out}$ channels. Nevertheless, it can be seen that any $D_h$ out of $D_{out}$ outputs of $\mathrm{MHSA}(X)$ can express the output of any convolutional layer with $D_h$ output channels. To cover both cases, in the statement of the main theorem we assert that the output channels of the convolutional layer should be $\min(D_h, D_{out})$ . In practice, we advise concatenating heads of dimension $D_h = D_{out}$ instead of splitting the $D_{out}$ dimensions among heads, to have an exact re-parametrization and no "unused" channels.
177
+
178
+ Lemma 2. There exists a relative encoding scheme $\{\pmb{r}_{\delta} \in \mathbb{R}^{D_p}\}_{\delta \in \mathbb{Z}^2}$ with $D_p \geq 3$ and parameters $\pmb{W}_{qry}, \pmb{W}_{key}, \widehat{\pmb{W}}_{key}, \pmb{u}$ with $D_p \leq D_k$ such that, for every $\Delta \in \Delta_K$ there exists some vector $\pmb{v}$ (conditioned on $\Delta$ ) yielding $\operatorname{softmax}(\pmb{A}_{q,:})_k = 1$ if $\pmb{k} - \pmb{q} = \Delta$ and zero otherwise.
179
+
180
+ Proof. We show by construction the existence of a $D_p = 3$ dimensional relative encoding scheme yielding the required attention probabilities.
181
+
182
+ As the attention probabilities are independent of the input tensor $\mathbf{X}$ , we set $\pmb{W}_{key} = \pmb{W}_{qry} = \mathbf{0}$ which leaves only the last term of eq. (8). Setting $\widehat{\pmb{W}}_{key} \in \mathbb{R}^{D_k \times D_p}$ to the identity matrix (with appropriate row padding), yields $\pmb{A}_{q,k} = \pmb{v}^\top \pmb{r}_\delta$ where $\delta \coloneqq \pmb{k} - \pmb{q}$ . Above, we have assumed that $D_p \leq D_k$ such that no information from $\pmb{r}_\delta$ is lost.
183
+
184
+ Now, suppose that we could write:
185
+
186
+ $$
187
+ \mathbf{A}_{\boldsymbol{q},\boldsymbol{k}} = -\alpha \left( \left\| \boldsymbol{\delta} - \boldsymbol{\Delta} \right\|^2 + c \right) \tag{14}
188
+ $$
189
+
190
+ for some constant $c$ . In the above expression, the maximum attention score over $\mathbf{A}_{\boldsymbol{q},:}$ is $-\alpha c$ and it is reached for $\mathbf{A}_{\boldsymbol{q},\boldsymbol{k}}$ with $\delta = \Delta$ . On the other hand, the $\alpha$ coefficient can be used to arbitrarily scale the difference between $\mathbf{A}_{\boldsymbol{q},\boldsymbol{\Delta}}$ and the other attention scores.
191
+
192
+ In this way, for $\delta = \Delta$ , we have
193
+
194
+ $$
195
+ \begin{aligned} \lim_{\alpha \to \infty} \operatorname{softmax}(\mathbf{A}_{\boldsymbol{q},:})_{\boldsymbol{k}} &= \lim_{\alpha \to \infty} \frac{e^{-\alpha \left( \| \boldsymbol{\delta} - \boldsymbol{\Delta} \|^2 + c \right)}}{\sum_{\boldsymbol{k}'} e^{-\alpha \left( \| (\boldsymbol{k}' - \boldsymbol{q}) - \boldsymbol{\Delta} \|^2 + c \right)}} \\ &= \lim_{\alpha \to \infty} \frac{e^{-\alpha \| \boldsymbol{\delta} - \boldsymbol{\Delta} \|^2}}{\sum_{\boldsymbol{k}'} e^{-\alpha \| (\boldsymbol{k}' - \boldsymbol{q}) - \boldsymbol{\Delta} \|^2}} = \frac{1}{1 + \lim_{\alpha \to \infty} \sum_{\boldsymbol{k}' \neq \boldsymbol{k}} e^{-\alpha \| (\boldsymbol{k}' - \boldsymbol{q}) - \boldsymbol{\Delta} \|^2}} = 1 \end{aligned}
196
+ $$
197
+
198
+ and for $\delta \neq \Delta$ , the equation becomes $\lim_{\alpha \to \infty} \operatorname{softmax}(\mathbf{A}_{q,:})_k = 0$ , exactly as needed to satisfy the lemma statement.
199
+
200
+ What remains is to prove that there exist $\pmb{v}$ and $\{\pmb{r}_{\delta}\}_{\delta \in \mathbb{Z}^2}$ for which eq. (14) holds. Expanding the RHS of the equation, we have $-\alpha (\| \pmb {\delta} - \pmb {\Delta}\| ^2 +c) = -\alpha (\| \pmb {\delta}\| ^2 +\| \pmb {\Delta}\| ^2 -2\langle \pmb {\delta},\pmb {\Delta}\rangle +c)$ . Now if we set $\pmb{v} = -\alpha (1, -2\Delta_{1}, -2\Delta_{2})$ and $\pmb{r}_{\delta} = (\| \pmb{\delta}\|^{2}, \pmb{\delta}_{1}, \pmb{\delta}_{2})$ , then
201
+
202
+ $$
203
+ \mathbf {A} _ {\boldsymbol {q}, \boldsymbol {k}} = \boldsymbol {v} ^ {\top} \boldsymbol {r} _ {\boldsymbol {\delta}} = - \alpha (\| \boldsymbol {\delta} \| ^ {2} - 2 \Delta_ {1} \boldsymbol {\delta} _ {1} - 2 \Delta_ {2} \boldsymbol {\delta} _ {2}) = - \alpha (\| \boldsymbol {\delta} \| ^ {2} - 2 \langle \boldsymbol {\delta}, \mathbf {\Delta} \rangle) = - \alpha (\| \boldsymbol {\delta} - \mathbf {\Delta} \| ^ {2} - \| \mathbf {\Delta} \| ^ {2}),
204
+ $$
205
+
206
+ which matches eq. (14) with $c = -\|\pmb{\Delta}\|^2$ and the proof is concluded.
207
+
208
+ Remark on the magnitude of $\alpha$ . The exact representation of one pixel requires $\alpha$ (or the matrices $W_{qry}$ and $W_{key}$ ) to be arbitrarily large, despite the fact that the attention probabilities of all other pixels converge exponentially to 0 as $\alpha$ grows. Nevertheless, practical implementations always rely on finite-precision arithmetic, for which a constant $\alpha$ suffices to satisfy our construction. For instance, since the smallest positive float32 scalar is approximately $10^{-45}$ , setting $\alpha = 46$ would suffice to obtain hard attention.
209
+
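+ As a quick numerical illustration (ours, not from the paper's code): with $\alpha = 46$ and all other pixels at squared distance at least 1 from the head's center, the off-center probabilities are on the order of $10^{-20}$ and the softmax normalizer rounds to exactly 1 in float32, so the attended pixel receives probability 1.0 to machine precision:
+
+ ```python
+ import numpy as np
+
+ # One query row: the attended pixel at squared distance 0 from the center,
+ # 1023 other pixels at squared distance >= 1 (scores -alpha * distance^2).
+ alpha = np.float32(46.0)
+ scores = np.full(1024, -alpha, dtype=np.float32)
+ scores[0] = 0.0
+ probs = np.exp(scores) / np.exp(scores).sum(dtype=np.float32)
+ print(probs[0], probs[1])   # 1.0 (exactly, in float32) and ~1e-20
+ ```
+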
210
+ # 4 EXPERIMENTS
211
+
212
+ The aim of this section is to validate the applicability of our theoretical results—which state that self-attention can perform convolution—and to examine whether self-attention layers in practice do actually learn to operate like convolutional layers when trained on standard image classification tasks. In particular, we study the relationship between self-attention and convolution with quadratic and learned relative positional encodings. We find that, for both cases, the attention probabilities learned tend to respect the conditions of Lemma 1, supporting our hypothesis.
213
+
214
+ # 4.1 IMPLEMENTATION DETAILS
215
+
216
+ We study a fully attentional model consisting of six multi-head self-attention layers. As it has already been shown by Bello et al. (2019) that combining attention features with convolutional features improves performance on CIFAR-100 and ImageNet, we do not focus on attaining state-of-the-art performance. Nevertheless, to validate that our model learns a meaningful classifier, we compare it to the standard ResNet18 (He et al., 2015) on the CIFAR-10 dataset (Krizhevsky et al.). In all experiments, we use a $2 \times 2$ invertible down-sampling (Jacobsen et al., 2018) on the input to reduce the size of the image; as the size of the attention coefficient tensors (stored during the forward pass) scales quadratically with the size of the input image, full attention cannot be applied to bigger images. The fixed-size representation of the input image is computed as the average pooling of the last layer's representations and given to a linear classifier.
217
+
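+ The following is a minimal skeleton (ours, not the released code) of the pipeline described above. `nn.PixelUnshuffle` stands in for the $2 \times 2$ invertible downsampling, and PyTorch's stock `nn.MultiheadAttention` stands in for our self-attention layers: it lacks the relative positional encodings studied in this paper and requires the embedding dimension to be divisible by the number of heads, so the sketch uses 8 heads instead of the 9 of Table 2. The residual connection is also our assumption.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class TinyFullAttention(nn.Module):
+     def __init__(self, dim=400, heads=8, layers=6, classes=10):
+         super().__init__()
+         self.down = nn.PixelUnshuffle(2)              # (B,3,32,32) -> (B,12,16,16)
+         self.embed = nn.Linear(12, dim)               # per-pixel feature embedding
+         self.attn = nn.ModuleList(
+             nn.MultiheadAttention(dim, heads, batch_first=True)
+             for _ in range(layers))
+         self.head = nn.Linear(dim, classes)           # linear classifier
+
+     def forward(self, x):                             # x: (B, 3, 32, 32)
+         x = self.down(x).flatten(2).transpose(1, 2)   # (B, 256 pixels, 12)
+         x = self.embed(x)
+         for attn in self.attn:
+             x = x + attn(x, x, x, need_weights=False)[0]  # residual MHSA block
+         return self.head(x.mean(dim=1))               # average pool over pixels
+
+ print(TinyFullAttention()(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])
+ ```
+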
218
+ ![](images/b0e58f722311fe7868418611d779df353d094b6a30bb95eb9dd9ec357e8acd17.jpg)
219
+ Figure 2: Test accuracy on CIFAR-10.
220
+
221
+ <table><tr><td>Models</td><td>accuracy</td><td># of params</td><td># of FLOPS</td></tr><tr><td>ResNet18</td><td>0.938</td><td>11.2M</td><td>1.1B</td></tr><tr><td>SA quadratic emb.</td><td>0.938</td><td>12.1M</td><td>6.2B</td></tr><tr><td>SA learned emb.</td><td>0.918</td><td>12.3M</td><td>6.2B</td></tr><tr><td>SA learned emb. + content</td><td>0.871</td><td>29.5M</td><td>15B</td></tr></table>
222
+
223
+ Table 1: Test accuracy on CIFAR-10 and model sizes. SA stands for Self-Attention.
224
+
225
+ ![](images/9b02683342b14dd439a343a60099090fa74860b9a80622adec7e059b78fe1840.jpg)
226
+ Figure 3: Centers of attention of each attention head (different colors) at layer 4 during the training with quadratic relative positional encoding. The central black square is the query pixel, whereas solid and dotted circles represent the $50\%$ and $90\%$ percentiles of each Gaussian, respectively.
227
+
228
+ ![](images/03fbc683df382c6c4f873c47ddd028d0e84f767efeaa2903524357324a7e3a84.jpg)
229
+
230
+ ![](images/fe311fe4bbd8c58401441b456f27e405c6076b0d2ec1ea2f7729a5e9d3311c25.jpg)
231
+
232
+ ![](images/d18e2169379dc55619e69303af0c9b2a94a9507539073bd0f841471f9b00c36e.jpg)
233
+
234
+ We used the PyTorch library (Paszke et al., 2017) and based our implementation on PyTorch Transformers<sup>5</sup>. We release our code on Github<sup>6</sup> and hyper-parameters are listed in Table 2 (Appendix).
235
+
236
+ Remark on accuracy. To verify that our self-attention models perform reasonably well, we display in Figure 2 the evolution of the test accuracy on CIFAR-10 over the 300 epochs of training for our self-attention models against a small ResNet (Table 1). The ResNet is faster to converge, but we cannot ascertain whether this corresponds to an inherent property of the architecture or an artifact of the adopted optimization procedures. Our implementation could be optimized to exploit the locality of Gaussian attention probabilities and significantly reduce the number of FLOPS. We observed that learned embeddings with content-based attention were harder to train, probably due to their increased number of parameters. We believe that the performance gap can be bridged to match the ResNet performance, but this is not the focus of this work.
237
+
238
+ # 4.2 QUADRATIC ENCODING
239
+
240
+ As a first step, we aim to verify that, with the relative position encoding introduced in equation (9), attention layers learn to behave like convolutional layers. We train nine attention heads at each layer to be on par with the $3 \times 3$ kernels used predominantly by the ResNet architecture. The center of attention of each head $h$ is initialized to $\pmb{\Delta}^{(h)} \sim \mathcal{N}(\mathbf{0}, 2\mathbf{I}_2)$ .
241
+
242
+ Figure 3 shows how the initial positions of the heads (different colors) at layer 4 changed during training. We can see that after optimization, the heads attend to specific pixels of the image, forming a grid around the query pixel. Our intuition that self-attention applied to images learns convolutional filters around the queried pixel is confirmed.
243
+
244
+ Figure 4 displays all attention heads at each layer of the model at the end of training. It can be seen that in the first few layers the heads tend to focus on local patterns (layers 1 and 2), while deeper layers (layers 3-6) also attend to larger patterns by positioning the center of attention further from the queried pixel position. We also include in the Appendix a plot of the attention positions for a higher number of heads ( $N_{h} = 16$ ): Figure 14 displays both local patterns similar to CNNs and long-range dependencies. Interestingly, attention heads do not overlap and seem to take an arrangement maximizing the coverage of the input space.
245
+
246
+ ![](images/6722180d9611834d14981f99964be987eb4a5b6728a2de1e067e59ce8bea3082.jpg)
247
+ Figure 4: Centers of attention of each attention head (different colors) for the 6 self-attention layers using quadratic positional encoding. The central black square is the query pixel, whereas solid and dotted circles represent the $50\%$ and $90\%$ percentiles of each Gaussian, respectively.
248
+
249
+ ![](images/1239915d9cc98e34adec79c43086ab97f325ffe0934e3df0ea9b86344a64afed.jpg)
250
+
251
+ ![](images/0014b37602bbde2cab5aff85eafbcaa5648b9d56d22cf66edaed3c4a416463e8.jpg)
252
+
253
+ ![](images/025fdc3c9e6aadcaabb9fa82baa4fbdd0e53542beb0dafcda15bc87cf27faa77.jpg)
254
+
255
+ ![](images/c2e30da8f94fca180fcb95ac190f77f35388fe89fb396de1e22c943d9536d1b7.jpg)
256
+
257
+ ![](images/e19ad1721f14c673f95d6f1291cdb84c6fcf52baf5b4b7397c211b758b7b6be6.jpg)
258
+
259
+ # 4.3 LEARNED RELATIVE POSITIONAL ENCODING
260
+
261
+ We move on to study the positional encoding used in practice by fully-attentional models on images.
262
+
263
+ We implemented the 2D relative positional encoding scheme used by Ramachandran et al. (2019) and Bello et al. (2019): we learn a $\lfloor D_p / 2\rfloor$ -dimensional position encoding vector for each row and each column pixel shift. Hence, the relative positional encoding of a key pixel at position $k$ with respect to a query pixel at position $q$ is the concatenation of the embeddings of the row shift $\delta_{1}$ and the column shift $\delta_{2}$ (where $\delta = k - q$ ). We chose $D_{p} = D_{out} = 400$ in the experiment. We differ from their (unpublished) implementation in the following points: (i) we do not use a convolutional stem and ResNet bottlenecks for downsampling, but only a $2\times 2$ invertible downsampling layer (Jacobsen et al., 2018) at the input; (ii) we use $D_h = D_{out}$ instead of $D_{h} = D_{out} / N_{h}$ , backed by our theory that the effective number of learned filters is $\min(D_h,D_{out})$ .
264
+
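+ A sketch (ours; names are hypothetical) of how such row/column shift embeddings can be stored in lookup tables and concatenated into $r_\delta$ :
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ # For an H x W image, row shifts delta_1 range over 2H-1 values and column
+ # shifts delta_2 over 2W-1 values; each gets a floor(D_p/2)-dim embedding.
+ H = W = 16
+ D_p = 400
+ row_emb = nn.Embedding(2 * H - 1, D_p // 2)   # indexed by delta_1 + H - 1
+ col_emb = nn.Embedding(2 * W - 1, D_p // 2)   # indexed by delta_2 + W - 1
+
+ def r(delta1: int, delta2: int) -> torch.Tensor:
+     idx1 = torch.tensor(delta1 + H - 1)
+     idx2 = torch.tensor(delta2 + W - 1)
+     return torch.cat([row_emb(idx1), col_emb(idx2)])  # r_delta of dimension D_p
+
+ print(r(-2, 3).shape)  # torch.Size([400])
+ ```
+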
265
+ At first, we discard the input data and compute the attention scores solely as the last term of eq. (8). The attention probabilities of each head at each layer are displayed on Figure 5. The figure confirms our hypothesis for the first two layers and partially for the third: even when left to learn the positional encoding scheme from randomly initialized vectors, certain self-attention heads (depicted on the left) learn to attend to individual pixels, closely matching the condition of Lemma 1 and thus Theorem 1. At the same time, other heads pay attention to horizontally-symmetric but non-localized patterns, as well as to long-range pixel inter-dependencies.
266
+
267
+ We move on to a more realistic setting where the attention scores are computed using both positional and content-based attention (i.e., $q^{\top}k + q^{\top}r$ in (Ramachandran et al., 2019)) which corresponds to a full-blown standalone self-attention model.
268
+
269
+ The attention probabilities of each head at each layer are displayed in Figure 6. We average the attention probabilities over a batch of 100 test images to outline the focus of each head and remove the dependency on the input image. Our hypothesis is confirmed for some heads of layers 2 and 3: even when left to learn the encoding from the data, certain self-attention heads only exploit position-based attention to attend to distinct pixels at a fixed shift from the query pixel, reproducing the receptive field of a convolutional kernel. Other heads use more content-based attention (see Figures 8 to 10 in the Appendix for non-averaged probabilities), leveraging the advantage of self-attention over CNNs; this does not contradict our theory. In practice, it was shown by Bello et al. (2019) that combining CNN and self-attention features outperforms each taken separately. Our experiments show that such a combination is learned when optimizing an unconstrained fully-attentional model.
270
+
271
+ The similarity between convolution and multi-head self-attention is striking when the query pixel is slid over the image: the localized attention patterns visible in Figure 6 follow the query pixel. This characteristic behavior materializes when comparing Figure 6 with the attention probabilities at a different query pixel (see Figure 7 in the Appendix). Attention patterns in layers 2 and 3 are not only localized but also stand at a constant shift from the query pixel, similar to convolving the receptive field of a convolutional kernel over an image. This phenomenon is made evident on our interactive website<sup>7</sup>. This tool is designed to explore different components of attention for diverse images, with or without content-based attention. We believe that it is a useful instrument to further understand how MHSA learns to process images.
272
+
273
+ ![](images/0a6924a9abde4be97dc4e2345873fd262fc1826aad1559f90d39326dea496690.jpg)
274
+ Figure 5: Attention probabilities of each head (column) at each layer (row) using learned relative positional encoding without content-based attention. The central black square is the query pixel. We reordered the heads for visualization and zoomed on the 7x7 pixels around the query pixel.
275
+
276
+ ![](images/8a56d797695e6824697d0124d57a529b56758a41425da7a472b75dc268258ffd.jpg)
277
+ Figure 6: Attention probabilities for a model with 6 layers (rows) and 9 heads (columns) using learned relative positional encoding and content-content based attention. Attention maps are averaged over 100 test images to display head behavior and remove the dependence on the input content. The black square is the query pixel. More examples are presented in Appendix A.
278
+
279
+ # 5 RELATED WORK
280
+
281
+ In this section, we review the known differences and similarities between CNNs and transformers.
282
+
283
+ The use of CNNs for text, at the word level (Gehring et al., 2017) or the character level (Kim, 2014), is less common than transformers (or RNNs). Transformers and convolutional models have been extensively compared empirically on tasks of Natural Language Processing and Neural Machine Translation. It was observed that transformers have a competitive advantage over convolutional models applied to text (Vaswani et al., 2017). Only recently did Bello et al. (2019) and Ramachandran et al. (2019) use transformers on images and show that they achieve similar accuracy to ResNets. However, their comparison only covers performance, the number of parameters, and FLOPS, but not expressive power.
284
+
285
+ Beyond performance and computational-cost comparisons of transformers and CNNs, the study of the expressiveness of these architectures has focused on their ability to capture long-term dependencies (Dai et al., 2019). Another interesting line of research has demonstrated that transformers are Turing-complete (Dehghani et al., 2018; Pérez et al., 2019), which is an important theoretical result but is not informative for practitioners. To the best of our knowledge, we are the first to show that the class of functions expressed by a layer of self-attention encloses all convolutional filters.
286
+
287
+ The closest work in bridging the gap between attention and convolution is due to Andreoli (2019). They cast attention and convolution into a unified framework leveraging the tensor outer-product. In this framework, the receptive field of a convolution is represented by a "basis" tensor $\mathbf{A} \in \mathbb{R}^{K \times K \times H \times W \times H \times W}$ . For instance, the receptive field of a classical $K \times K$ convolutional kernel would be encoded by $\mathbf{A}_{\Delta, q, k} = \mathbb{1}\{k - q = \Delta\}$ for $\Delta \in \Delta_K$ . The author distinguishes this index-based convolution from content-based convolution, where $\mathbf{A}$ is computed from the value of the input, e.g., using a key/query dot-product attention. Our work goes further and presents sufficient conditions for relative positional encoding injected into the input content (as done in practice) to allow content-based convolution to express any index-based convolution. We further show experimentally that such behavior is learned in practice.
288
+
289
+ # 6 CONCLUSION
290
+
291
+ We showed that self-attention layers applied to images can express any convolutional layer (given sufficiently many heads) and that fully-attentional models learn to combine local behavior (similar to convolution) and global attention based on input content. More generally, fully-attentional models seem to learn a generalization of CNNs where the kernel pattern is learned at the same time as the filters—similar to deformable convolutions (Dai et al., 2017; Zampieri, 2019). Interesting directions for future work include translating existing insights from the rich CNNs literature back to transformers on various data modalities, including images, text and time series.
292
+
293
+ # ACKNOWLEDGMENTS
294
+
295
+ Jean-Baptiste Cordonnier is thankful to the Swiss Data Science Center (SDSC) for funding this work. Andreas Loukas was supported by the Swiss National Science Foundation (project "Deep Learning for Graph Structured Data", grant number PZ00P2 179981).
296
+
297
+ # REFERENCES
298
+
299
+ Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
300
+ Jean-Marc Andreoli. Convolution, attention and structure embedding. NeurIPS 2019 workshop on Graph Representation Learning, Dec 13, 2019, Vancouver, BC, Canada, 2019.
301
+ Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
302
+ Irwan Bello, Barret Zoph, Ashish Vaswani, Jonathon Shlens, and Quoc V. Le. Attention Augmented Convolutional Networks. arXiv:1904.09925 [cs], April 2019.
303
+ Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. CoRR, abs/1703.06211, 2017.
304
+ Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. CoRR, abs/1901.02860, 2019.
305
+ Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. Universal transformers. CoRR, abs/1807.03819, 2018.
306
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018.
307
+ Jean-Yves Franceschi, Aymeric Dieuleveut, and Martin Jaggi. Unsupervised scalable representation learning for multivariate time series. In NeurIPS 2019, 2019.
308
+ Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional sequence to sequence learning. CoRR, abs/1705.03122, 2017.
309
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
310
+ Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8): 1735-1780, 1997.
311
+ Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 7132-7141, 2018.
312
+ Jörn-Henrik Jacobsen, Arnold W.M. Smeulders, and Edouard Oyallon. i-revnet: Deep invertible networks. In International Conference on Learning Representations, 2018.
313
+ Yoon Kim. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1746-1751, Doha, Qatar, October 2014. Association for Computational Linguistics. doi: 10.3115/v1/D14-1181.
314
+ Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. CIFAR-10 (Canadian Institute for Advanced Research).
315
+ Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop, 2017.
316
+
317
+ Jorge Pérez, Javier Marinkovic, and Pablo Barceló. On the turing completeness of modern neural network architectures. CoRR, abs/1901.03429, 2019.
318
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2018.
319
+ Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jonathon Shlens. Stand-alone self-attention in vision models. CoRR, abs/1906.05909, 2019.
320
+ Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alexander Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
321
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017.
322
+ Xiaolong Wang, Ross B. Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 7794-7803, 2018.
323
+ Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. Xlnet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237, 2019.
324
+ Luca Zampieri. Geometric deep learning for volumetric computational fluid dynamics. 2019.
325
+
326
+ # APPENDIX
327
+
328
+ # A MORE EXAMPLES WITH CONTENT-BASED ATTENTION
329
+
330
+ We present more examples of attention probabilities computed by the self-attention model. Figure 7 shows average attention at a different query pixel than Figure 6. Figures 8 to 10 display attention for single images.
331
+
332
+ ![](images/59e0db1960bb155a9e95769226642840dd8db3519ebba841a45f03f7bc60b11b.jpg)
333
+ Figure 7: Attention probabilities for a model with 6 layers (rows) and 9 heads (columns) using learned relative positional encoding and content-content attention. We present the average of 100 test images. The black square is the query pixel.
334
+
335
+ ![](images/08c42f947f82d5721e733c35cc1cee9972ff905f98c9bafa355bd56f428863b3.jpg)
336
+ Figure 8: Attention probabilities for a model with 6 layers (rows) and 9 heads (columns) using learned relative positional encoding and content-content based attention. The query pixel (black square) is on the frog head.
337
+
338
+ ![](images/e934b424ffbc1054011d43cdaee5aa2b0463154b824397230b2d87c8a708dc3a.jpg)
339
+ Figure 9: Attention probabilities for a model with 6 layers (rows) and 9 heads (columns) using learned relative positional encoding and content-content based attention. The query pixel (black square) is on the horse head.
340
+
341
+ ![](images/f7ff2018de83ddc74f8e0c63e1a6ab986b432423319b1a2294cf6fcdae3132c4.jpg)
342
+ Figure 10: Attention probabilities for a model with 6 layers (rows) and 9 heads (columns) using learned relative positional encoding and content-content based attention. The query pixel (black square) is on the building in the background.
343
+
344
+ # B HYPER-PARAMETERS USED IN OUR EXPERIMENTS
345
+
346
+ <table><tr><td colspan="2">Hyper-parameters</td></tr><tr><td>number of layers</td><td>6</td></tr><tr><td>number of heads</td><td>9</td></tr><tr><td>hidden dimension</td><td>400</td></tr><tr><td>intermediate dimension</td><td>512</td></tr><tr><td>invertible pooling width</td><td>2</td></tr><tr><td>dropout probability</td><td>0.1</td></tr><tr><td>layer normalization epsilon</td><td>10<sup>-12</sup></td></tr><tr><td>number of epochs</td><td>300</td></tr><tr><td>batch size</td><td>100</td></tr><tr><td>learning rate</td><td>0.1</td></tr><tr><td>weight decay</td><td>0.0001</td></tr><tr><td>momentum</td><td>0.9</td></tr><tr><td>cosine decay</td><td>✓</td></tr><tr><td>linear warm up ratio</td><td>0.05</td></tr></table>
347
+
348
+ Table 2: Self-attention network parameters
349
+ # C POSITIONAL ENCODING REFERENCES
350
+
351
+ <table><tr><td rowspan="2">Model</td><td colspan="3">type of positional encoding</td><td rowspan="2">relative</td></tr><tr><td>sinusoids</td><td>learned</td><td>quadratic</td></tr><tr><td>Vaswani et al. (2017)</td><td>✓</td><td></td><td></td><td></td></tr><tr><td>Radford et al. (2018)</td><td></td><td>✓</td><td></td><td></td></tr><tr><td>Devlin et al. (2018)</td><td></td><td>✓</td><td></td><td></td></tr><tr><td>Dai et al. (2019)</td><td>✓</td><td></td><td></td><td>✓</td></tr><tr><td>Yang et al. (2019)</td><td>✓</td><td></td><td></td><td>✓</td></tr><tr><td>Bello et al. (2019)</td><td></td><td>✓</td><td></td><td>✓</td></tr><tr><td>Ramachandran et al. (2019)</td><td></td><td>✓</td><td></td><td>✓</td></tr><tr><td>Our work</td><td></td><td>✓</td><td>✓</td><td>✓</td></tr></table>
352
+
353
+ Table 3: Types of positional encoding used by transformers models applied to text (top) and images (bottom). When multiple encoding types have been tried, we report the one advised by the authors.
354
+
355
+ # D GENERALIZED LEMMA 1
356
+
357
+ We present a generalization of Lemma 1 that replaces the necessity of hard attention (to single pixels) with a milder assumption: the attention probabilities should span the receptive-field grid. The conditions of this lemma are still satisfied by the construction of Lemma 2, hence Theorem 1 follows.
358
+
359
+ Lemma 3. Consider a multi-head self-attention layer consisting of $N_{h} \geq K^{2}$ heads, $D_{h} \geq D_{out}$ and let $\omega : [H] \times [W] \to [HW]$ be a pixel indexing. Then, for any convolutional layer with a $K \times K$ kernel and $D_{out}$ output channels, there exists $\{\mathbf{W}_{val}^{(h)}\}_{h \in [N_{h}]}$ and $\mathbf{W}_{out}$ such that $\mathrm{MHSA}(\mathbf{X}) = \mathrm{Conv}(\mathbf{X})$ for every $\mathbf{X} \in \mathbb{R}^{W \times H \times D_{in}}$ if and only if, for all $\mathbf{q} \in [H] \times [W]$ ,
360
+
361
+ $$
362
+ \operatorname{span}\left(\left\{ \boldsymbol{e}_{\omega(\boldsymbol{q} + \boldsymbol{\Delta})} \in \mathbb{R}^{HW} : \boldsymbol{\Delta} \in \Delta_K \right\}\right) \subseteq \operatorname{span}\left(\left\{ \operatorname{vect}\left(\operatorname{softmax}\left(\boldsymbol{A}_{\boldsymbol{q},:}^{(h)}\right)\right) : h \in [N_h] \right\}\right).
363
+ $$
364
+
365
+ ![](images/eefe7f0de537c555a90ea79f159ac488d6365071f3ece37274839aa1fa2ea018.jpg)
366
+ Figure 11: Factorization of the vectorized weight matrices $V_{q}^{\mathrm{conv}}$ and $V_{q}^{\mathrm{SA}}$ used to compute the output at position $q$ for an input image of dimension $H \times W$ . On the left: a convolution of kernel $2 \times 2$ , on the right: a self-attention with $N_{h} = 5$ heads. $D_{in} = 2$ , $D_{out} = 3$ in both cases.
367
+
368
+ ![](images/727a83db170929ab3f99d663a99b3bc5259d8a57c111cb00408f2d975091194f.jpg)
369
+
370
+ Proof. Our first step will be to rework the expression of the Multi-Head Self-Attention operator from equation (1) and equation (4) such that the effect of the multiple heads becomes more transparent:
371
+
372
+ $$
373
+ \operatorname{MHSA}(\mathbf{X}) = \boldsymbol{b}_{out} + \sum_{h \in [N_h]} \operatorname{softmax}\left(\mathbf{A}^{(h)}\right) \mathbf{X} \underbrace{\boldsymbol{W}_{val}^{(h)} \boldsymbol{W}_{out}[(h-1)D_h + 1 : h D_h + 1]}_{\boldsymbol{W}^{(h)}} \tag{15}
374
+ $$
375
+
376
+ Note that each head's value matrix $\mathbf{W}_{val}^{(h)} \in \mathbb{R}^{D_{in} \times D_h}$ and each block of the projection matrix $\mathbf{W}_{out}$ of dimension $D_h \times D_{out}$ are learned. Assuming that $D_h \geq D_{out}$ , we can replace each pair of matrices by a learned matrix $\mathbf{W}^{(h)}$ for each head. We consider one output pixel of the multi-head self-attention and drop the bias term for simplicity:
377
+
378
+ $$
379
+ \operatorname{MHSA}(\mathbf{X})_{\boldsymbol{q},:} = \sum_{h \in [N_h]} \left( \sum_{\boldsymbol{k}} a_{\boldsymbol{q},\boldsymbol{k}}^{(h)} \mathbf{X}_{\boldsymbol{k},:} \right) \boldsymbol{W}^{(h)} = \sum_{\boldsymbol{k}} \mathbf{X}_{\boldsymbol{k},:} \underbrace{\left( \sum_{h \in [N_h]} a_{\boldsymbol{q},\boldsymbol{k}}^{(h)} \boldsymbol{W}^{(h)} \right)}_{\boldsymbol{W}_{\boldsymbol{q},\boldsymbol{k}}^{\mathrm{SA}} \in \mathbb{R}^{D_{in} \times D_{out}}}, \tag{16}
380
+ $$
381
+
382
+ with $a_{\pmb{q},\pmb{k}}^{(h)} = \mathrm{softmax}(\mathbf{A}_{\pmb{q},:}^{(h)})_{\pmb{k}}$ . We rewrite the output of a convolution at pixel $\pmb{q}$ in the same manner:
383
+
384
+ $$
385
+ \operatorname{Conv}(\mathbf{X})_{\boldsymbol{q},:} = \sum_{\boldsymbol{\Delta} \in \Delta_K} \mathbf{X}_{\boldsymbol{q} + \boldsymbol{\Delta},:} \mathbf{W}_{\boldsymbol{\Delta},:,:} = \sum_{\boldsymbol{k} \in [H] \times [W]} \mathbf{X}_{\boldsymbol{k},:} \underbrace{\mathbb{1}_{\{\boldsymbol{k} - \boldsymbol{q} \in \Delta_K\}} \mathbf{W}_{\boldsymbol{k} - \boldsymbol{q},:,:}}_{\mathbf{W}_{\boldsymbol{q},\boldsymbol{k}}^{\mathrm{conv}} \in \mathbb{R}^{D_{in} \times D_{out}}} \tag{17}
386
+ $$
387
+
388
+ Equality between equations (16) and (17) holds for any input $\mathbf{X}$ if and only if the linear transformations for each pair of key/query pixels are equal, i.e. $\pmb{W}_{\pmb{q},\pmb{k}}^{\mathrm{conv}} = \pmb{W}_{\pmb{q},\pmb{k}}^{\mathrm{SA}}$ for all $\pmb{q},\pmb{k}$ . We vectorize the weight matrices into matrices of dimension $D_{in}D_{out}\times HW$ as $\pmb{V}_{\pmb{q}}^{\mathrm{conv}}\coloneqq [\mathrm{vec}(\pmb{W}_{\pmb{q},\pmb{k}}^{\mathrm{conv}})]_{\pmb{k}\in [H]\times [W]}$ and $\pmb{V}_{\pmb{q}}^{\mathrm{SA}}\coloneqq [\mathrm{vec}(\pmb{W}_{\pmb{q},\pmb{k}}^{\mathrm{SA}})]_{\pmb{k}\in [H]\times [W]}$ . Hence, to show that $\mathrm{Conv}(\mathbf{X}) = \mathrm{MHSA}(\mathbf{X})$ for all $\mathbf{X}$ , we must show that $\pmb{V}_{\pmb{q}}^{\mathrm{conv}} = \pmb{V}_{\pmb{q}}^{\mathrm{SA}}$ for all $\pmb{q}$ .
389
+
390
+ The matrix $V_{q}^{\mathrm{conv}}$ has a restricted support: only the columns associated with a pixel shift $\Delta \in \Delta_K$ in the receptive field of pixel $q$ can be non-zero. This leads to the factorization $V_{q}^{\mathrm{conv}} = W^{\mathrm{conv}}E_{q}$ displayed in Figure 11, where $W^{\mathrm{conv}} \in \mathbb{R}^{D_{in}D_{out} \times K^2}$ and $E_{q} \in \mathbb{R}^{K^2 \times HW}$ . Given an ordering of the shifts $\Delta \in \Delta_K$ indexed by $j$ , set $(W^{\mathrm{conv}})_{:,j} = \operatorname{vec}(W_{\Delta,:,:})$ and $(E_q)_{j,:} = e_{\omega (q + \Delta)}$ . On the other hand, we decompose $V_{q}^{\mathrm{SA}} = W^{\mathrm{SA}}A_{q}$ with $(W^{\mathrm{SA}})_{:,h} = \operatorname{vec}(W^{(h)})$ and $(A_q)_{h,i} = a_{q,\omega^{-1}(i)}^{(h)}$ .
391
+
392
+ The proof is concluded by showing that $\operatorname{row}(\pmb{E}_q) \subseteq \operatorname{row}(\pmb{A}_q)$ is a necessary and sufficient condition for the existence of a $W^{\mathrm{SA}}$ such that any $V_q^{\mathrm{conv}} = W^{\mathrm{conv}}\pmb{E}_q$ can be written as $W^{\mathrm{SA}}\pmb{A}_q$ .
393
+
394
+ Sufficient. Given that $\operatorname{row}(\pmb{E}_q) \subseteq \operatorname{row}(\pmb{A}_q)$ , there exists $\Phi \in \mathbb{R}^{K^2 \times N_h}$ such that $\pmb{E}_q = \Phi \pmb{A}_q$ and a valid decomposition is $\pmb{W}^{\mathrm{SA}} = \pmb{W}^{\mathrm{conv}}\Phi$ which gives $\pmb{W}^{\mathrm{SA}}\pmb{A}_q = \pmb{V}_q^{\mathrm{conv}}$ .
395
+
396
+ Necessary. Assume there exists $\pmb{x} \in \mathbb{R}^{HW}$ such that $\pmb{x} \in \mathrm{row}(\pmb{E}_{\pmb{q}})$ and $\pmb{x} \notin \mathrm{row}(\pmb{A}_{\pmb{q}})$ and set $\pmb{x}^{\top}$ to be a row of $V_{q}^{\mathrm{conv}}$ . Then, $W^{\mathrm{SA}} A_{q} \neq V_{q}^{\mathrm{conv}}$ for any $W^{\mathrm{SA}}$ and there is no possible decomposition.
397
+
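+ The span condition of Lemma 3 can also be tested numerically. The following sketch (ours; names are illustrative) checks $\operatorname{row}(\pmb{E}_q) \subseteq \operatorname{row}(\pmb{A}_q)$ via least squares:
+
+ ```python
+ import numpy as np
+
+ # row(E_q) is contained in row(A_q) iff every row of E_q is exactly
+ # recovered by a least-squares combination of the rows of A_q.
+ def spans(A_q: np.ndarray, E_q: np.ndarray, tol: float = 1e-8) -> bool:
+     Phi_T, *_ = np.linalg.lstsq(A_q.T, E_q.T, rcond=None)  # A_q^T Phi^T ~ E_q^T
+     return np.allclose(Phi_T.T @ A_q, E_q, atol=tol)
+
+ # Hard one-hot attention (the condition of Lemma 1) trivially satisfies it:
+ E_q = np.eye(4)[:3]           # 3 receptive-field positions, 4-pixel image
+ A_q = np.eye(4)[[2, 0, 1]]    # heads attend the same pixels, in another order
+ print(spans(A_q, E_q))        # True
+ ```
+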
398
+ # E GENERALIZED QUADRATIC POSITIONAL ENCODING
399
+
400
+ We noticed the similarity of the attention probabilities in the quadratic positional encoding (Section 3) to isotropic bivariate Gaussian distributions with bounded support:
401
+
402
+ $$
403
+ \operatorname{softmax}\left(\mathbf{A}_{\boldsymbol{q},:}\right)_{\boldsymbol{k}} = \frac{e^{-\alpha \left\| (\boldsymbol{k} - \boldsymbol{q}) - \boldsymbol{\Delta} \right\|^2}}{\sum_{\boldsymbol{k}' \in [W] \times [H]} e^{-\alpha \left\| (\boldsymbol{k}' - \boldsymbol{q}) - \boldsymbol{\Delta} \right\|^2}}. \tag{18}
404
+ $$
405
+
406
+ Building on this observation, we further extended our attention mechanism to non-isotropic Gaussian distributions over pixel positions. Each head is parametrized by a center of attention $\Delta$ and a covariance matrix $\Sigma$ to obtain the following attention scores,
407
+
408
+ $$
409
+ \boldsymbol {A} _ {\boldsymbol {q}, \boldsymbol {k}} = - \frac {1}{2} (\boldsymbol {\delta} - \boldsymbol {\Delta}) ^ {\top} \boldsymbol {\Sigma} ^ {- 1} (\boldsymbol {\delta} - \boldsymbol {\Delta}) = - \frac {1}{2} \boldsymbol {\delta} ^ {\top} \boldsymbol {\Sigma} ^ {- 1} \boldsymbol {\delta} + \boldsymbol {\delta} ^ {\top} \boldsymbol {\Sigma} ^ {- 1} \boldsymbol {\Delta} - \frac {1}{2} \boldsymbol {\Delta} ^ {\top} \boldsymbol {\Sigma} ^ {- 1} \boldsymbol {\Delta}, \tag {19}
410
+ $$
411
+
412
+ where, once more, $\delta = k - q$ . The last term can be discarded because the softmax is shift-invariant, and we rewrite the attention coefficient as a dot product between the head target vector $\pmb{v}$ and the relative position encoding $r_{\delta}$ (consisting of the first- and second-order combinations of the shift in pixels $\delta$ ):
413
+
414
+ $$
415
+ \boldsymbol{v} = \frac{1}{2} \left( 2 (\boldsymbol{\Sigma}^{-1} \boldsymbol{\Delta})_1,\ 2 (\boldsymbol{\Sigma}^{-1} \boldsymbol{\Delta})_2,\ -\boldsymbol{\Sigma}_{1,1}^{-1},\ -\boldsymbol{\Sigma}_{2,2}^{-1},\ -2 \boldsymbol{\Sigma}_{1,2}^{-1} \right)^{\top} \quad \text{and} \quad \boldsymbol{r}_{\boldsymbol{\delta}} = \left( \boldsymbol{\delta}_1,\ \boldsymbol{\delta}_2,\ \boldsymbol{\delta}_1^2,\ \boldsymbol{\delta}_2^2,\ \boldsymbol{\delta}_1 \boldsymbol{\delta}_2 \right)^{\top}.
416
+ $$
417
+
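+ A small sketch (ours; names hypothetical) of these non-isotropic scores, with $\Sigma^{-1}$ assembled from the learned square root so that positive semi-definiteness holds by construction:
+
+ ```python
+ import torch
+
+ # Non-isotropic Gaussian attention scores of eq. (19).
+ def gaussian_scores(delta, center, sigma_inv_sqrt):
+     sigma_inv = sigma_inv_sqrt.T @ sigma_inv_sqrt    # positive semi-definite
+     d = delta - center
+     return -0.5 * torch.einsum("...i,ij,...j->...", d, sigma_inv, d)
+
+ delta = torch.tensor([[0.0, 0.0], [1.0, -1.0]])      # two candidate shifts
+ center = torch.tensor([1.0, -1.0])
+ print(gaussian_scores(delta, center, torch.eye(2)))  # highest score at delta == center
+ ```
+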
418
+ Evaluation. We trained our model using this generalized quadratic relative position encoding. We were curious to see whether, using the above encoding, the self-attention model would learn to attend to non-isotropic groups of pixels, thus forming patterns unseen in CNNs. Each head was parametrized by $\Delta \in \mathbb{R}^2$ and $\Sigma^{-1/2} \in \mathbb{R}^{2 \times 2}$ to ensure that the covariance matrix remained positive semi-definite. We initialized the center of attention to $\Delta^{(h)} \sim \mathcal{N}(0, 2I_2)$ and $\Sigma^{-1/2} = I_2 + \mathcal{N}(0, 0.01I_2)$ so that the initial attention probabilities were close to an isotropic Gaussian. Figure 12 shows that the network did learn non-isotropic attention probability patterns, especially in the higher layers. Nevertheless, the fact that we do not obtain any performance improvement seems to suggest that attention non-isotropy is not particularly helpful in practice; the quadratic positional encoding suffices.
419
+
420
+ ![](images/077519739b51f5a15c78230a53567e5f266b1d522b39af83c95fb20beb8aa3ef.jpg)
421
+ Figure 12: Centers of attention of each attention head (different colors) for the 6 self-attention layers using non-isotropic Gaussian parametrization. The central black square is the query pixel, whereas solid and dotted circles represent the $50\%$ and $90\%$ percentiles of each Gaussian, respectively.
422
+
423
+ ![](images/99f580cd7fbbbb3bafc87e7ca9fb88596b39282116db2fb45971d85290a51455.jpg)
424
+
425
+ ![](images/d77d3fbb2d9deabab94d76699e0816d6bd45807d5f2082119d4e624d6097c629.jpg)
426
+
427
+ ![](images/bd2aad7a30d4949ec20b53b45115d222254ea3e2533186170eb320b5d833352d.jpg)
428
+
429
+ ![](images/db31f17489dbbbd2242e3d4a21d587ca4b487194a8f65574eb31527e934508dd.jpg)
430
+
431
+ ![](images/fd6beb3f5227cee750261645301b9fac0327eae9502c0bdd2fbed2261ef7bb72.jpg)
432
+
433
+ Pruning degenerated heads. Some non-isotropic attention heads attend to "non-intuitive" patches of pixels: either attending a very thin stripe of pixels, when $\Sigma^{-1}$ was almost singular, or attending all pixels uniformly, when $\Sigma^{-1}$ was close to 0 (i.e. constant attention scores). We asked ourselves, are such attention patterns indeed useful for the model, or are these heads degenerated and unused? To find out, we pruned all heads having a largest eigenvalue smaller than $10^{-5}$ or a condition number (ratio of the largest and smallest eigenvalues) greater than $10^{5}$ . Specifically, in our model with 6 layers and 9 heads each, we pruned [2, 4, 1, 2, 6, 0] heads from the first to the last layer. This means that these layers cannot express a $3 \times 3$ kernel anymore. As shown in yellow in Figure 13, this ablation initially hurts the performance a bit, probably due to off biases, but after a few epochs of continued training with a smaller learning rate (divided by 10) the accuracy recovers its unpruned value. Hence, without sacrificing performance, we reduce the number of parameters and FLOPS by a fourth.
434
+
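+ A sketch (ours, with hypothetical names) of this pruning rule as a predicate over one head's learned $\Sigma^{-1/2}$ :
+
+ ```python
+ import torch
+
+ # Drop heads whose Sigma^{-1} has largest eigenvalue < 1e-5 (near-uniform
+ # attention) or condition number > 1e5 (near-singular, stripe-like attention).
+ def keep_head(sigma_inv_sqrt: torch.Tensor) -> bool:
+     sigma_inv = sigma_inv_sqrt.T @ sigma_inv_sqrt     # positive semi-definite
+     eig = torch.linalg.eigvalsh(sigma_inv)            # ascending eigenvalues
+     near_uniform = bool(eig[-1] < 1e-5)               # ~constant attention scores
+     near_singular = bool(eig[-1] > 1e5 * eig[0].clamp_min(1e-30))  # thin stripe
+     return not (near_uniform or near_singular)
+
+ print(keep_head(torch.eye(2)))                               # True: well-conditioned
+ print(keep_head(torch.tensor([[1e3, 0.0], [0.0, 1e-3]])))    # False: degenerate
+ ```
+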
435
+ # F INCREASING THE NUMBER OF HEADS
436
+
437
+ For completeness, we also tested increasing the number of heads of our architecture from 9 to 16.
438
+
439
+ ![](images/cedc94f4830884102d43c6886a02ba9436395060109a103e267e0caaa34e6c5e.jpg)
440
+ Figure 13: Evolution of test accuracy on CIFAR-10. Pruned model (yellow) is continued training of the non-isotropic model (orange).
441
+
442
+ <table><tr><td>Models</td><td>accuracy</td><td># of params</td><td># of FLOPS</td></tr><tr><td>ResNet18</td><td>0.938</td><td>11.2M</td><td>1.1B</td></tr><tr><td>SA quadratic emb.</td><td>0.938</td><td>12.1M</td><td>6.2B</td></tr><tr><td>SA quadratic emb. gen.</td><td>0.934</td><td>12.1M</td><td>6.2B</td></tr><tr><td>SA quadratic emb. gen. pruned</td><td>0.934</td><td>9.7M</td><td>4.9B</td></tr><tr><td>SA learned emb.</td><td>0.918</td><td>12.3M</td><td>6.2B</td></tr><tr><td>SA learned emb. + content</td><td>0.871</td><td>29.5M</td><td>15B</td></tr></table>
443
+
444
+ ![](images/14fb177b0e4ab200a7d3b867f5d53adc2c6deac8d83e1011b1eecb146d442e13.jpg)
445
+ Figure 14: Centers of attention for 16 attention heads (different colors) for the 6 self-attention layers using quadratic positional encoding. The central black square is the query pixel, whereas solid and dotted circles represent the $50\%$ and $90\%$ percentiles of each Gaussian, respectively.
446
+
447
+ ![](images/f826a9fa5513ecb48b7a148f616c8fec009733b0709d645f4d41fac9a329e6e9.jpg)
448
+
449
+ ![](images/1502af23f95b23fb61ccdde96a20ba4cc2528c2e159ae86b417bf2aeca55464c.jpg)
450
+ Table 4: Number of parameters and accuracy on CIFAR-10 per model. SA stands for Self-Attention.
451
+
452
+ ![](images/82652e1d4c8c93699e8e9948fb64d3468e6a2baed9273167beaf7937f7448e1d.jpg)
453
+
454
+ ![](images/16ff27a9886089f013bdd55dec0999fadb1c4d64f56c640b30b42587b11653fa.jpg)
455
+
456
+ ![](images/6bc7254c74ce61efde46c169026151c244767d50ed664c16cf2a2fa50ee8c8b4.jpg)
457
+
458
+ Similar to Figure 4, we see that the network distinguishes two main types of attention patterns. Localized heads (i.e., those that attend to nearly individual pixels) appear more frequently in the first few layers. The self-attention layer uses these heads to act in a manner similar to how convolutional layers do. Heads with less-localized attention become more common at higher layers.
ontherelationshipbetweenselfattentionandconvolutionallayers/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3a53ba2f4daf82127ba89779c5844227da7a3b253663335cb8b53404efbdf365
3
+ size 1235265
ontherelationshipbetweenselfattentionandconvolutionallayers/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9d4a49cce58c09767370fb542c403d00430247f992eaa3a218d67eb3c637f62b
3
+ size 647708
onthesteerabilityofgenerativeadversarialnetworks/a1c0e785-948e-4091-8bae-4fdc1021722e_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a37f0727fc23c6394a526a7f0886bc08cb7d7508b01bf070bc119eab8ee81efa
3
+ size 159508
onthesteerabilityofgenerativeadversarialnetworks/a1c0e785-948e-4091-8bae-4fdc1021722e_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1b69ea38d54f2640c4c7ae6f99c38e3dd79ad44af3f4b6e15847a74c99b1fb7f
3
+ size 167016