SlowGuess committed
Commit 1d00f9f · verified · Parent(s): 9222bbf

Add Batch e99821f1-547d-4f1b-bf03-b202a39aa214

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. rankingpolicygradient/a0186d38-b246-4407-9105-34dbf20c6dfb_content_list.json +3 -0
  2. rankingpolicygradient/a0186d38-b246-4407-9105-34dbf20c6dfb_model.json +3 -0
  3. rankingpolicygradient/a0186d38-b246-4407-9105-34dbf20c6dfb_origin.pdf +3 -0
  4. rankingpolicygradient/full.md +0 -0
  5. rankingpolicygradient/images.zip +3 -0
  6. rankingpolicygradient/layout.json +3 -0
  7. rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/cccfbb6f-c2b5-419f-87d0-8e9ed4fc6a5f_content_list.json +3 -0
  8. rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/cccfbb6f-c2b5-419f-87d0-8e9ed4fc6a5f_model.json +3 -0
  9. rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/cccfbb6f-c2b5-419f-87d0-8e9ed4fc6a5f_origin.pdf +3 -0
  10. rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/full.md +467 -0
  11. rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/images.zip +3 -0
  12. rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/layout.json +3 -0
  13. reanalysisofvariancereducedtemporaldifferencelearning/ef365b9e-05dc-4056-ae6a-97d3ae3d68ff_content_list.json +3 -0
  14. reanalysisofvariancereducedtemporaldifferencelearning/ef365b9e-05dc-4056-ae6a-97d3ae3d68ff_model.json +3 -0
  15. reanalysisofvariancereducedtemporaldifferencelearning/ef365b9e-05dc-4056-ae6a-97d3ae3d68ff_origin.pdf +3 -0
  16. reanalysisofvariancereducedtemporaldifferencelearning/full.md +0 -0
  17. reanalysisofvariancereducedtemporaldifferencelearning/images.zip +3 -0
  18. reanalysisofvariancereducedtemporaldifferencelearning/layout.json +3 -0
  19. reclorareadingcomprehensiondatasetrequiringlogicalreasoning/e0bea734-685f-4736-9748-9ef3687b3bf0_content_list.json +3 -0
  20. reclorareadingcomprehensiondatasetrequiringlogicalreasoning/e0bea734-685f-4736-9748-9ef3687b3bf0_model.json +3 -0
  21. reclorareadingcomprehensiondatasetrequiringlogicalreasoning/e0bea734-685f-4736-9748-9ef3687b3bf0_origin.pdf +3 -0
  22. reclorareadingcomprehensiondatasetrequiringlogicalreasoning/full.md +550 -0
  23. reclorareadingcomprehensiondatasetrequiringlogicalreasoning/images.zip +3 -0
  24. reclorareadingcomprehensiondatasetrequiringlogicalreasoning/layout.json +3 -0
  25. recurrentneuralcircuitsforcontourdetection/9e1ea1e6-817b-42db-bc4d-a7adb4bf552f_content_list.json +3 -0
  26. recurrentneuralcircuitsforcontourdetection/9e1ea1e6-817b-42db-bc4d-a7adb4bf552f_model.json +3 -0
  27. recurrentneuralcircuitsforcontourdetection/9e1ea1e6-817b-42db-bc4d-a7adb4bf552f_origin.pdf +3 -0
  28. recurrentneuralcircuitsforcontourdetection/full.md +483 -0
  29. recurrentneuralcircuitsforcontourdetection/images.zip +3 -0
  30. recurrentneuralcircuitsforcontourdetection/layout.json +3 -0
  31. reducingtransformerdepthondemandwithstructureddropout/1c2abfdf-944b-4fa5-8a3f-7586904a5c4a_content_list.json +3 -0
  32. reducingtransformerdepthondemandwithstructureddropout/1c2abfdf-944b-4fa5-8a3f-7586904a5c4a_model.json +3 -0
  33. reducingtransformerdepthondemandwithstructureddropout/1c2abfdf-944b-4fa5-8a3f-7586904a5c4a_origin.pdf +3 -0
  34. reducingtransformerdepthondemandwithstructureddropout/full.md +388 -0
  35. reducingtransformerdepthondemandwithstructureddropout/images.zip +3 -0
  36. reducingtransformerdepthondemandwithstructureddropout/layout.json +3 -0
  37. reformertheefficienttransformer/03b1143d-2161-4439-9449-d2e54d75c122_content_list.json +3 -0
  38. reformertheefficienttransformer/03b1143d-2161-4439-9449-d2e54d75c122_model.json +3 -0
  39. reformertheefficienttransformer/03b1143d-2161-4439-9449-d2e54d75c122_origin.pdf +3 -0
  40. reformertheefficienttransformer/full.md +304 -0
  41. reformertheefficienttransformer/images.zip +3 -0
  42. reformertheefficienttransformer/layout.json +3 -0
  43. regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/6f757651-f26e-4028-9e21-fa728c70a386_content_list.json +3 -0
  44. regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/6f757651-f26e-4028-9e21-fa728c70a386_model.json +3 -0
  45. regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/6f757651-f26e-4028-9e21-fa728c70a386_origin.pdf +3 -0
  46. regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/full.md +311 -0
  47. regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/images.zip +3 -0
  48. regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/layout.json +3 -0
  49. reinforcedactivelearningforimagesegmentation/c0e77528-4ae1-4bf4-918b-4bf47e77a12d_content_list.json +3 -0
  50. reinforcedactivelearningforimagesegmentation/c0e77528-4ae1-4bf4-918b-4bf47e77a12d_model.json +3 -0
rankingpolicygradient/a0186d38-b246-4407-9105-34dbf20c6dfb_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5ddacac375b3589a7e7f529a9086d4fbf641c8f29b089e6a280e609b2511f780
+ size 177944
rankingpolicygradient/a0186d38-b246-4407-9105-34dbf20c6dfb_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20f015c340a83f240cb578aad110cf7cf58c528ea100dfc4e02d29406996c5e8
+ size 206436
rankingpolicygradient/a0186d38-b246-4407-9105-34dbf20c6dfb_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b3ec21e0dc106dcd15fb679a53a7a540f1b55ffce1a0f3e81be6bce17badc82
+ size 1063861
rankingpolicygradient/full.md ADDED
The diff for this file is too large to render. See raw diff
 
rankingpolicygradient/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f16a8821e41aaef9948107b0c2ba4857b14896e38f82f402c54aa6298fa4476
+ size 1128076
rankingpolicygradient/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3612141a2affbdb1fa0b2db735a3ac38c8d1e002e5c670a119d5dc32bebf4f3f
+ size 967965
rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/cccfbb6f-c2b5-419f-87d0-8e9ed4fc6a5f_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d93d54d4093e70b97128882fcf31f7f30af7edaee29ea385b6ce0aab367a6e9
+ size 118331
rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/cccfbb6f-c2b5-419f-87d0-8e9ed4fc6a5f_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df835dd438b3ed892df83b50e42c6c986fc6c2d235ef2d42d58b1016c548998d
+ size 137805
rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/cccfbb6f-c2b5-419f-87d0-8e9ed4fc6a5f_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc34e842d5138821ce602661487847941804753a86301150a69e2890029e7929
+ size 820395
rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/full.md ADDED
@@ -0,0 +1,467 @@
# RAPID LEARNING OR FEATURE REUSE? TOWARDS UNDERSTANDING THE EFFECTIVENESS OF MAML

Aniruddh Raghu* (MIT), araghu@mit.edu
Maithra Raghu* (Cornell University & Google Brain), maithrar@gmail.com
Samy Bengio (Google Brain)
Oriol Vinyals (DeepMind)

# ABSTRACT

An important research direction in machine learning has centered around developing meta-learning algorithms to tackle few-shot learning. An especially successful algorithm has been Model Agnostic Meta-Learning (MAML), a method that consists of two optimization loops, with the outer loop finding a meta-initialization, from which the inner loop can efficiently learn new tasks. Despite MAML's popularity, a fundamental open question remains: is the effectiveness of MAML due to the meta-initialization being primed for rapid learning (large, efficient changes in the representations) or due to feature reuse, with the meta-initialization already containing high quality features? We investigate this question, via ablation studies and analysis of the latent representations, finding that feature reuse is the dominant factor. This leads to the ANIL (Almost No Inner Loop) algorithm, a simplification of MAML where we remove the inner loop for all but the (task-specific) head of the underlying neural network. ANIL matches MAML's performance on benchmark few-shot image classification and RL and offers computational improvements over MAML. We further study the precise contributions of the head and body of the network, showing that performance on the test tasks is entirely determined by the quality of the learned features, and we can remove even the head of the network (the NIL algorithm). We conclude with a discussion of the rapid learning vs feature reuse question for meta-learning algorithms more broadly.
# 1 INTRODUCTION

A central problem in machine learning is few-shot learning, where new tasks must be learned with a very limited number of labelled datapoints. A significant body of work has looked at tackling this challenge using meta-learning approaches (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017; Finn et al., 2017; Santoro et al., 2016; Ravi and Larochelle, 2016; Nichol and Schulman, 2018). Broadly speaking, these approaches define a family of tasks, some of which are used for training and others solely for evaluation. A proposed meta-learning algorithm then looks at learning properties that generalize across the different training tasks, and result in fast and efficient learning of the evaluation tasks.

One highly successful meta-learning algorithm has been Model Agnostic Meta-Learning (MAML) (Finn et al., 2017). At a high level, the MAML algorithm is comprised of two optimization loops. The outer loop (in the spirit of meta-learning) aims to find an effective meta-initialization, from which the inner loop can perform efficient adaptation: optimize parameters to solve new tasks with very few labelled examples. This algorithm, with deep neural networks as the underlying model, has been highly influential, with significant follow-on work, such as first-order variants (Nichol and Schulman, 2018), probabilistic extensions (Finn et al., 2018), augmentation with generative modelling (Rusu et al., 2018), and many others (Hsu et al., 2018; Finn and Levine, 2017; Grant et al., 2018; Triantafillou et al., 2019).

Despite the popularity of MAML, and the numerous followups and extensions, there remains a fundamental open question about the basic algorithm. Does the meta-initialization learned by the outer loop result in rapid learning on unseen test tasks (efficient but significant changes in the representations), or is the success primarily due to feature reuse (with the meta-initialization already providing high quality representations)? In this paper, we explore this question and its many surprising consequences. Our main contributions are:

- We perform layer freezing experiments and latent representational analysis of MAML, finding that feature reuse is the predominant reason for efficient learning.
- Based on these results, we propose the ANIL (Almost No Inner Loop) algorithm, a significant simplification of MAML that removes the inner loop updates for all but the head (final layer) of a neural network during training and inference. ANIL performs identically to MAML on standard benchmark few-shot classification and RL tasks, and offers computational benefits over MAML.
- We study the effect of the head of the network, finding that once training is complete, the head can be removed, and the representations can be used without adaptation to perform unseen tasks, which we call the No Inner Loop (NIL) algorithm.
- We study different training regimes (e.g. multiclass classification, multitask learning), and find that the task specificity of MAML/ANIL at training facilitates the learning of better features. We also find that multitask training, a popular baseline with no task specificity, performs worse than random features.
- We discuss rapid learning and feature reuse in the context of other meta-learning approaches.
# 2 RELATED WORK

MAML (Finn et al., 2017) is a highly popular meta-learning algorithm for few-shot learning, achieving competitive performance on several benchmark few-shot learning problems (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017; Santoro et al., 2016; Ravi and Larochelle, 2016; Nichol and Schulman, 2018). It is part of the family of optimization-based meta-learning algorithms, with other members of this family presenting variations around how to learn the weights of the task-specific classifier. For example, Lee and Choi (2018); Gordon et al. (2018); Bertinetto et al. (2018); Lee et al. (2019); Zhou et al. (2018) first learn functions to embed the support set and target examples of a few-shot learning task, before using the test support set to learn task-specific weights to use on the embedded target examples. Harrison et al. (2018) proceed similarly, using a Bayesian approach. Bao et al. (2019) explore a related approach, focusing on applications in text classification.

Of these optimization-based meta-learning algorithms, MAML has been especially influential, inspiring numerous direct extensions in recent literature (Antoniou et al., 2018; Finn et al., 2018; Grant et al., 2018; Rusu et al., 2018). Most of these extensions critically rely on the core structure of the MAML algorithm, incorporating an outer loop (for meta-training) and an inner loop (for task-specific adaptation), yet there is little prior work analyzing why this central part of the MAML algorithm is practically successful. In this work, we focus on this foundational question, examining how and why MAML leads to effective few-shot learning. To do this, we utilize analytical tools such as Canonical Correlation Analysis (CCA) (Raghu et al., 2017; Morcos et al., 2018) and Centered Kernel Alignment (CKA) (Kornblith et al., 2019) to study the neural network representations learned with the MAML algorithm, which also demonstrates MAML's ability to learn effective features for few-shot learning.

Insights from this analysis lead to a simplified algorithm, ANIL, which almost completely removes the inner optimization loop, with no reduction in performance. Prior works (Zintgraf et al., 2018; Javed and White, 2019) have proposed algorithms where some parameters are only updated in the outer loop and others only in the inner loop. However, these works are motivated by different questions, such as improving MAML's performance or learning better representations, rather than analysing rapid learning vs feature reuse in MAML.
# 3 MAML, RAPID LEARNING, AND FEATURE REUSE

Our goal is to understand whether the MAML algorithm efficiently solves new tasks due to rapid learning or feature reuse. In rapid learning, large representational and parameter changes occur during adaptation to each new task, as a result of favorable weight conditioning from the meta-initialization. In feature reuse, the meta-initialization already contains highly useful features that can mostly be reused as is for new tasks, so little task-specific adaptation occurs. Figure 1 shows a schematic of these two hypotheses.

![](images/81dd4f870420f3af73dccf83577f0d04439ca170544741dcc4bf8228ff5a1b93.jpg)
![](images/ac4afde155ce79765b00c514b1d3fe082a89be47f2ca89b797128927bad25871.jpg)
Figure 1: Rapid learning and feature reuse paradigms. In Rapid Learning, outer loop training leads to a parameter setting that is well-conditioned for fast learning, and inner loop updates result in significant task specialization. In Feature Reuse, the outer loop leads to parameter values corresponding to reusable features, from which the parameters do not move significantly in the inner loop.

We start off by overviewing the details of the MAML algorithm, and then we study the rapid learning vs feature reuse question via layer freezing experiments and analyzing latent representations of models trained with MAML. The results strongly support feature reuse as the predominant factor behind MAML's success. In Section 4, we explore the consequences of this, providing a significant simplification of MAML, the ANIL algorithm, and in Section 6, we outline the connections to meta-learning more broadly.
# 3.1 OVERVIEW OF MAML

The MAML algorithm finds an initialization for a neural network so that new tasks can be learnt with very few examples ($k$ examples from each class for $k$-shot learning) via two optimization loops:

- Outer Loop: Updates the initialization of the neural network parameters (often called the meta-initialization) to a setting that enables fast adaptation to new tasks.
- Inner Loop: Performs adaptation: takes the outer loop initialization, and, separately for each task, performs a few gradient updates over the $k$ labelled examples (the support set) provided for adaptation.

More formally, we first define our base model to be a neural network with meta-initialization parameters $\theta$; let this be represented by $f_{\theta}$. We have a distribution $\mathcal{D}$ over tasks, and draw a batch $\{T_1, \dots, T_B\}$ of $B$ tasks from $\mathcal{D}$. For each task $T_b$, we have a support set of examples $S_{T_b}$, which is used for inner loop updates, and a target set of examples $\mathcal{Z}_{T_b}$, which is used for outer loop updates. Let $\theta_i^{(b)}$ signify $\theta$ after $i$ gradient updates for task $T_b$, and let $\theta_0^{(b)} = \theta$. In the inner loop, during each update, we compute

$$
\theta_m^{(b)} = \theta_{m-1}^{(b)} - \alpha \nabla_{\theta_{m-1}^{(b)}} \mathcal{L}_{S_{T_b}}\left(f_{\theta_{m-1}^{(b)}(\theta)}\right) \tag{1}
$$

for $m$ fixed across all tasks, where $\mathcal{L}_{S_{T_b}}(f_{\theta_{m-1}^{(b)}(\theta)})$ is the loss on the support set of $T_b$ after $m-1$ inner loop updates.

We then define the meta-loss as

$$
\mathcal{L}_{\text{meta}}(\theta) = \sum_{b=1}^{B} \mathcal{L}_{\mathcal{Z}_{T_b}}\left(f_{\theta_m^{(b)}(\theta)}\right)
$$

where $\mathcal{L}_{\mathcal{Z}_{T_b}}(f_{\theta_m^{(b)}(\theta)})$ is the loss on the target set of $T_b$ after $m$ inner loop updates, making clear the dependence of $f_{\theta_m^{(b)}}$ on $\theta$. The outer optimization loop then updates $\theta$ as

$$
\theta = \theta - \eta \nabla_{\theta} \mathcal{L}_{\text{meta}}(\theta)
$$

| Freeze layers | MiniImageNet-5way-1shot | MiniImageNet-5way-5shot |
|---|---|---|
| None | 46.9 ± 0.2 | 63.1 ± 0.4 |
| 1 | 46.5 ± 0.3 | 63.0 ± 0.6 |
| 1,2 | 46.4 ± 0.4 | 62.6 ± 0.6 |
| 1,2,3 | 46.3 ± 0.4 | 61.2 ± 0.5 |
| 1,2,3,4 | 46.3 ± 0.4 | 61.0 ± 0.6 |

Table 1: Freezing successive layers (preventing inner loop adaptation) does not affect accuracy, supporting feature reuse. To test the amount of feature reuse happening in the inner loop adaptation, we test the accuracy of the model when we freeze (prevent inner loop adaptation) a contiguous block of layers at test time. We find that freezing even all four convolutional layers of the network (all layers except the network head) hardly affects accuracy. This strongly supports the feature reuse hypothesis: layers don't have to change rapidly at adaptation time; they already contain good features from the meta-initialization.

At test time, we draw unseen tasks $\{T_1^{(test)}, \dots, T_n^{(test)}\}$ from the task distribution, and evaluate the loss and accuracy on $\mathcal{Z}_{T_i^{(test)}}$ after inner loop adaptation using $S_{T_i^{(test)}}$ (e.g. the loss is $\mathcal{L}_{\mathcal{Z}_{T_i^{(test)}}}\left(f_{\theta_m^{(i)}(\theta)}\right)$).
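For concreteness, the two loops can be sketched in a few lines of PyTorch-style Python. This is an illustrative sketch, not the authors' code: `model` (a function from a parameter list and inputs to logits), `loss_fn`, and the task-batch format are assumed placeholders, and `params` is assumed to be a list of leaf tensors with `requires_grad=True`.

```python
import torch

def inner_adapt(params, support_x, support_y, model, loss_fn, alpha=0.01, steps=5):
    """Inner loop (Eq. 1): a few gradient steps on the support set S_{T_b}."""
    adapted = list(params)
    for _ in range(steps):
        loss = loss_fn(model(adapted, support_x), support_y)
        # create_graph=True keeps the graph of the inner updates, so the
        # outer loop can differentiate through them (second-order MAML).
        grads = torch.autograd.grad(loss, adapted, create_graph=True)
        adapted = [p - alpha * g for p, g in zip(adapted, grads)]
    return adapted

def outer_step(params, task_batch, model, loss_fn, eta=0.001):
    """Outer loop: sum target-set losses of the adapted parameters over
    the task batch, then take one meta-gradient step on theta."""
    meta_loss = sum(
        loss_fn(model(inner_adapt(params, sx, sy, model, loss_fn), tx), ty)
        for sx, sy, tx, ty in task_batch)
    meta_grads = torch.autograd.grad(meta_loss, params)
    with torch.no_grad():
        for p, g in zip(params, meta_grads):
            p -= eta * g  # theta <- theta - eta * grad(L_meta)
```

First-order variants such as FOMAML and Reptile (Nichol and Schulman, 2018) effectively drop the `create_graph=True` term, avoiding the second-order backpropagation through the inner updates.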
# 3.2 RAPID LEARNING OR FEATURE REUSE?

We now turn our attention to the key question: Is MAML's efficacy predominantly due to rapid learning or feature reuse? In investigating this question, there is an important distinction between the head (final layer) of the network and the earlier layers (the body of the network). In each few-shot learning task, there is a different alignment between the output neurons and classes. For instance, in task $\mathcal{T}_1$, the (wlog) five output neurons might correspond, in order, to the classes (dog, cat, frog, cupcake, phone), while for a different task, $\mathcal{T}_2$, they might correspond, in order, to (airplane, frog, boat, car, pumpkin). This means that the head must necessarily change for each task to learn the new alignment, and for the rapid learning vs feature reuse question, we are primarily interested in the behavior of the body of the network. We return to this in more detail in Section 5, where we present an algorithm (NIL) that does not use a head at test time.

To study rapid learning vs feature reuse in the network body, we perform two sets of experiments: (1) we evaluate few-shot learning performance when freezing parameters after MAML training, without test time inner loop adaptation; (2) we use representational similarity tools to directly analyze how much the network features and representations change through the inner loop. We use the MiniImageNet dataset, a popular standard benchmark for few-shot learning, with the standard convolutional architecture in Finn et al. (2017). Results are averaged over three random seeds. Full implementation details are in Appendix B.
# 3.2.1 FREEZING LAYER REPRESENTATIONS

To study the impact of the inner loop adaptation, we freeze a contiguous subset of layers of the network during the inner loop at test time (after using the standard MAML algorithm, incorporating both optimization loops, for training). In particular, the frozen layers are not updated at all for the test time task, and must reuse the features learned by the meta-initialization that the outer loop converges to. We compare the few-shot learning accuracy when freezing to the accuracy when allowing inner loop adaptation.

Results are shown in Table 1. We observe that even when freezing all layers in the network body, performance hardly changes. This suggests that the meta-initialization has already learned good enough features that can be reused as is, without needing to perform any rapid learning for each test time task.
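A sketch of how this ablation might be implemented, under the same placeholder `model`/`loss_fn` assumptions as before, with `frozen` a hypothetical set of indices of parameters excluded from adaptation:

```python
import torch

def inner_adapt_frozen(params, frozen, support_x, support_y,
                       model, loss_fn, alpha=0.01, steps=10):
    """Test-time inner loop that leaves a frozen block of layers untouched."""
    adapted = list(params)
    for _ in range(steps):
        loss = loss_fn(model(adapted, support_x), support_y)
        grads = torch.autograd.grad(loss, adapted)
        # Frozen parameters keep their meta-initialization values;
        # the rest take the usual inner-loop gradient step.
        adapted = [p if i in frozen else p - alpha * g
                   for i, (p, g) in enumerate(zip(adapted, grads))]
    return adapted
```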
# 3.2.2 REPRESENTATIONAL SIMILARITY EXPERIMENTS

We next study how much the latent representations (the latent functions) learned by the neural network change during the inner loop adaptation phase. Following several recent works (Raghu et al., 2017; Saphra and Lopez, 2018; Morcos et al., 2018; Maheswaranathan et al., 2019; Raghu et al., 2019; Gotmare et al., 2018; Bau et al., 2018), we measure this by applying Canonical Correlation Analysis (CCA) to the latent representations of the network. CCA provides a way to compare representations of two (latent) layers $L_1$, $L_2$ of a neural network, outputting a similarity score between 0 (not similar at all) and 1 (identical). For full details, see Raghu et al. (2017); Morcos et al. (2018). In our analysis, we take $L_1$ to be a layer before the inner loop adaptation steps, and $L_2$ after the inner loop adaptation steps. We compute CCA similarity between $L_1$, $L_2$, averaging the similarity score across different random seeds of the model and different test time tasks. Full details are in Appendix B.2.

![](images/58b72d642c7029fe397afd36562c4e74d522e473ba719f65fa66339e0afcb5a7.jpg)
![](images/4d0e9473b33b6e5d482996ba2a569e28619ddfc0708a490f4ce72038a70fac2f.jpg)
Figure 2: High CCA/CKA similarity between representations before and after adaptation for all layers except the head. We compute CCA/CKA similarity between the representation of a layer before the inner loop adaptation and after adaptation. We observe that for all layers except the head, the CCA/CKA similarity is almost 1, indicating perfect similarity. This suggests that these layers do not change much during adaptation, but mostly perform feature reuse. Note that there is a slight dip in similarity in the higher conv layers (e.g. conv3, conv4); this is likely because the slight representational differences in conv1, conv2 have a compounding effect on the representations of conv3, conv4. The head of the network must change significantly during adaptation, and this is reflected in the much lower CCA/CKA similarity.

The result is shown in Figure 2, left pane. Representations in the body of the network (the convolutional layers) are highly similar, with CCA similarity scores of $>0.9$, indicating that the inner loop induces little to no functional change. By contrast, the head of the network, which does change significantly in the inner loop, has a CCA similarity of less than 0.5. To further validate this, we also compute CKA (Centered Kernel Alignment) (Kornblith et al., 2019) (Figure 2, right), another similarity metric for neural network representations, which illustrates the same pattern. These representational analysis results strongly support the feature reuse hypothesis, with further results in the Appendix, Sections B.3 and B.4, providing yet more evidence.
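For intuition, a simplified stand-in for the similarity computation might look like the following (a sketch using scikit-learn's CCA rather than the SVCCA tooling cited in the paper; the activation-matrix shapes and component count are assumptions):

```python
import numpy as np
from sklearn.cross_decomposition import CCA  # simplified stand-in for SVCCA

def mean_cca_similarity(A, B, k=20):
    """A, B: (num_datapoints, num_neurons) activation matrices for two layers.
    Returns the mean of the top-k canonical correlations, a score in [0, 1].
    k must be <= the number of neurons in each of A and B."""
    cca = CCA(n_components=k, max_iter=2000)
    U, V = cca.fit_transform(A, B)  # paired canonical variates
    corrs = [np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(k)]
    return float(np.mean(corrs))
```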
# 3.2.3 FEATURE REUSE HAPPENS EARLY IN LEARNING

Having observed that the inner loop does not significantly affect the learned representations of a fully trained model, we extend our analysis to see whether the inner loop affects representations and features earlier on in training. We take MAML models at 10000, 20000, and 30000 iterations into training, and perform freezing experiments (as in Section 3.2.1) and representational similarity experiments (as in Section 3.2.2).

Results in Figure 3 show the same patterns from early in training: CCA similarity between activations pre and post inner loop update on MiniImageNet-5way-5shot is very high for the body (just as in Figure 2), and, as in Table 1, test accuracy remains approximately the same when freezing contiguous subsets of layers, even when freezing all layers of the network body. This shows that even early on in training, significant feature reuse is taking place, with the inner loop having minimal effect on learned representations and features. Results for 1-shot MiniImageNet are in Appendix B.5, and show very similar trends.
# 4 THE ANIL (ALMOST NO INNER LOOP) ALGORITHM

In the previous section we saw that for all layers except the head of the neural network, the meta-initialization learned by the outer loop of MAML results in very good features that can be reused as is on new tasks. Inner loop adaptation does not significantly change the representations of these layers, even from early on in training. This suggests a natural simplification of the MAML algorithm: the ANIL (Almost No Inner Loop) algorithm.

![](images/8d058a177fd33caa9cbdf725d94642865c8833370eed0218fb0bb18f79a8363c.jpg)
![](images/2245f9174388b1fa1860d7914fca2e898a94da5268f83841e57658c1a103e072.jpg)
Figure 3: Inner loop updates have little effect on learned representations from early on in learning. Left pane: we freeze contiguous blocks of layers (no adaptation at test time), on MiniImageNet-5way-5shot and see almost identical performance. Right pane: representations of all layers except the head are highly similar pre/post adaptation – i.e. features are being reused. This is true from early (iteration 10000) in training.

![](images/aa79b9050b01dd2f49c03b544b4b8fb240248ba701466c8ad112f565d4a5582e.jpg)
![](images/a11a168f1a5458c84576683b9a343fcaf803b0c328370b605835df1d22940654.jpg)
Figure 4: Schematic of MAML and ANIL algorithms. The difference between the MAML and ANIL algorithms: in MAML (left), the inner loop (task-specific) gradient updates are applied to all parameters $\theta$, which are initialized with the meta-initialization from the outer loop. In ANIL (right), only the parameters corresponding to the network head $\theta_{head}$ are updated by the inner loop, during training and testing.

In ANIL, during training and testing, we remove the inner loop updates for the network body, and apply inner loop adaptation only to the head. The head requires the inner loop to allow it to align to the different classes in each task. In Section 5.1 we consider another variant, the NIL (No Inner Loop) algorithm, that removes the head entirely at test time, and uses learned features and cosine similarity to perform effective classification, thus avoiding inner loop updates altogether.

For the ANIL algorithm, mathematically, let $\theta = (\theta_1, \dots, \theta_l)$ be the (meta-initialization) parameters for the $l$ layers of the network. Following the notation of Section 3.1, let $\theta_m^{(b)}$ be the parameters after $m$ inner gradient updates for task $\mathcal{T}_b$. In ANIL, we have:

$$
\theta_m^{(b)} = \left(\theta_1, \dots, (\theta_l)_{m-1}^{(b)} - \alpha \nabla_{(\theta_l)_{m-1}^{(b)}} \mathcal{L}_{S_{T_b}}\left(f_{\theta_{m-1}^{(b)}}\right)\right)
$$

i.e. only the final layer gets the inner loop updates. As before, we then define the meta-loss, and compute the outer loop gradient update. The intuition for ANIL arises from Figure 3, where we observe that inner loop updates have little effect on the network body even early in training, suggesting the possibility of removing them entirely. Note that this is distinct from the freezing experiments, where we only removed the inner loop at inference time. Figure 4 presents the difference between MAML and ANIL, and Appendix C.1 considers a simple example of the gradient update in ANIL, showing how the ANIL update differs from MAML.
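In code, the only change relative to the MAML sketch in Section 3.1 is which parameters the inner loop updates. A minimal sketch under the same assumptions (hypothetical `model` and `loss_fn` placeholders; the head is assumed to be the last `head_size` tensors in the parameter list):

```python
import torch

def anil_inner_adapt(params, support_x, support_y, model, loss_fn,
                     alpha=0.01, steps=5, head_size=2):
    """ANIL inner loop: only the head (here, the last `head_size` tensors,
    e.g. the final-layer weight and bias) receives task-specific updates.
    The body stays at the meta-initialization."""
    body, head = list(params[:-head_size]), list(params[-head_size:])
    for _ in range(steps):
        loss = loss_fn(model(body + head, support_x), support_y)
        grads = torch.autograd.grad(loss, head, create_graph=True)
        head = [p - alpha * g for p, g in zip(head, grads)]
    return body + head
```

The outer loop is unchanged: the meta-loss is still differentiated with respect to all of $\theta$, so the body is learned entirely by the outer loop.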
Computational benefit of ANIL: As ANIL has almost no inner loop, it significantly speeds up both training and inference. We found an average speedup of 1.7x per training iteration over MAML, and an average speedup of 4.1x per inference iteration. In Appendix C.5 we provide the full results.

| Method | Omniglot-20way-1shot | Omniglot-20way-5shot | MiniImageNet-5way-1shot | MiniImageNet-5way-5shot |
|---|---|---|---|---|
| MAML | 93.7 ± 0.7 | 96.4 ± 0.1 | 46.9 ± 0.2 | 63.1 ± 0.4 |
| ANIL | 96.2 ± 0.5 | 98.0 ± 0.3 | 46.7 ± 0.4 | 61.5 ± 0.5 |

| Method | HalfCheetah-Direction | HalfCheetah-Velocity | 2D-Navigation |
|---|---|---|---|
| MAML | 170.4 ± 21.0 | -139.0 ± 18.9 | -20.3 ± 3.2 |
| ANIL | 363.2 ± 14.8 | -120.9 ± 6.3 | -20.1 ± 2.3 |

Table 2: ANIL matches the performance of MAML on few-shot image classification and RL. On benchmark few-shot classification tasks, MAML and ANIL have comparable accuracy, and also comparable average return (higher is better) on standard RL tasks (Finn et al., 2017).

![](images/ee1a5ffe4108766bd52391c8230935e199550e990b193437adc392879ae1e91b.jpg)
Figure 5: MAML and ANIL learn very similarly. Loss and accuracy curves for MAML and ANIL on MiniImageNet-5way-5shot, illustrating how MAML and ANIL behave similarly through the training process.

Results of ANIL on Standard Benchmarks: We evaluate ANIL on few-shot image classification and RL benchmarks, using the same model architectures as the original MAML authors, for both supervised learning and RL. Further implementation details are in Appendix C.4. The results in Table 2 (mean and standard deviation of performance over three random initializations) show that ANIL matches the performance of MAML on both few-shot classification (accuracy) and RL (average return, higher is better), demonstrating that the inner loop adaptation of the body is unnecessary for learning good features.

MAML and ANIL Models Show Similar Behavior: MAML and ANIL perform equally well on few-shot learning benchmarks, illustrating that removing the inner loop during training does not hinder performance. To study the behavior of MAML and ANIL models further, we plot learning curves for both algorithms on MiniImageNet-5way-5shot (Figure 5). We see that loss and accuracy for both algorithms look very similar throughout training. We also look at CCA and CKA scores of the representations learned by both algorithms (Table 3). We observe that MAML-ANIL representations have the same average similarity scores as MAML-MAML and ANIL-ANIL representations, suggesting both algorithms learn comparable features (removing the inner loop does not change the kinds of features learned). Further learning curves and representational similarity results are presented in Appendices C.2 and C.3.
# 5 CONTRIBUTIONS OF THE NETWORK HEAD AND BODY

So far, we have seen that MAML predominantly relies on feature reuse, with the network body (all layers except the last layer) already containing good features at meta-initialization. We also observe that such features can be learned even without inner loop adaptation during training (the ANIL algorithm). The head, however, requires inner loop adaptation to enable task specificity.

| Model Pair | CCA Similarity | CKA Similarity |
|---|---|---|
| MAML-MAML | 0.51 | 0.83 |
| ANIL-ANIL | 0.51 | 0.86 |
| ANIL-MAML | 0.50 | 0.83 |

Table 3: MAML and ANIL models learn comparable representations. Comparing the CCA/CKA similarity scores of MAML-ANIL representations (averaged over the network body) to the MAML-MAML and ANIL-ANIL similarity scores (across different random seeds) shows that the algorithmic difference between MAML and ANIL does not result in vastly different types of features being learned.

| Method | Omniglot-20way-1shot | Omniglot-20way-5shot | MiniImageNet-5way-1shot | MiniImageNet-5way-5shot |
|---|---|---|---|---|
| MAML | 93.7 ± 0.7 | 96.4 ± 0.1 | 46.9 ± 0.2 | 63.1 ± 0.4 |
| ANIL | 96.2 ± 0.5 | 98.0 ± 0.3 | 46.7 ± 0.4 | 61.5 ± 0.5 |
| NIL | 96.7 ± 0.3 | 98.0 ± 0.04 | 48.0 ± 0.7 | 62.2 ± 0.5 |

Table 4: The NIL algorithm performs as well as MAML and ANIL on few-shot image classification. Performance of MAML, ANIL, and NIL on few-shot image classification benchmarks. We see that with no test-time inner loop, and just learned features, NIL performs comparably to MAML and ANIL, indicating the strength of the learned features, and the relative lack of importance of the head at test time.

In this section, we explore the contributions of the network head and body. We first ask: how important is the head at test time, when good features have already been learned? Motivating this question is the observation that the features in the body of the network needed no adaptation at inference time, so perhaps they are themselves sufficient to perform classification, with no head. In Section 5.1, we find that test time performance is entirely determined by the quality of these representations, and that we can use similarity of the frozen meta-initialization representations to perform unseen tasks, removing the head entirely. We call this the NIL (No Inner Loop) algorithm.

Given this result, we next study how useful the head is at training (in ensuring the network body learns good features). We look at multiple different training regimes (some without the head) for the network body, and evaluate the quality of the representations. We find that MAML/ANIL result in the best representations, demonstrating the importance of the head during training for feature learning.
# 5.1 THE HEAD AT TEST TIME AND THE NIL (NO INNER LOOP) ALGORITHM

We study how important the head and task-specific alignment are when good features have already been learned (through training) by the meta-initialization. At test time, we find that the representations can be used directly, with no adaptation, which leads to the No Inner Loop (NIL) algorithm (sketched in code after the steps below):

1. Train a few-shot learning model with the ANIL/MAML algorithm as standard. We use ANIL training.
2. At test time, remove the head of the trained model. For each task, first pass the $k$ labelled examples (the support set) through the body of the network, to get their penultimate layer representations. Then, for a test example, compute cosine similarities between its penultimate layer representation and those of the support set, using these similarities to weight the support set labels, as in Vinyals et al. (2016).
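A sketch of this test-time procedure, assuming `body` is the trained feature extractor with the head removed and `support_onehot` a (k*N, N) one-hot label matrix (names and shapes hypothetical):

```python
import torch
import torch.nn.functional as F

def nil_predict(body, support_x, support_onehot, query_x):
    """NIL test-time classification via cosine similarity to the support set."""
    s = F.normalize(body(support_x), dim=1)   # (k*N, d) unit-norm features
    q = F.normalize(body(query_x), dim=1)     # (Q, d) unit-norm features
    sims = q @ s.t()                          # (Q, k*N) cosine similarities
    scores = sims @ support_onehot            # (Q, N) similarity-weighted labels
    return scores.argmax(dim=1)               # predicted class per query
```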
The results for the NIL algorithm, following ANIL training, on few-shot classification benchmarks are given in Table 4. Despite having no network head and no task-specific adaptation, NIL performs comparably to MAML and ANIL. This demonstrates that the features learned by the network body when training with MAML/ANIL (and reused at test time) are the critical component in tackling these benchmarks.

# 5.2 TRAINING REGIMES FOR THE NETWORK BODY

The NIL algorithm and the results of Section 5.1 lead to the question of how important task alignment and the head are during training for learning good features. Here, we study this question by examining the quality of features arising from different training regimes for the body. We look at (i) MAML and ANIL training; (ii) multiclass classification, where all of the training data and classes (from which training tasks are drawn) are used to perform standard classification; (iii) multitask training, a standard baseline, where no inner loop or task-specific head is used, but the network is trained on all the tasks at the same time; (iv) random features, where the network is not trained at all, and features are frozen after random initialization; and (v) NIL at training time, where there is no head and cosine distance on the representations is used to get the label.

| Method | MiniImageNet-5way-1shot | MiniImageNet-5way-5shot |
|---|---|---|
| MAML training-NIL head | 48.4 ± 0.3 | 61.5 ± 0.8 |
| ANIL training-NIL head | 48.0 ± 0.7 | 62.2 ± 0.5 |
| Multiclass training-NIL head | 39.7 ± 0.3 | 54.4 ± 0.5 |
| Multitask training-NIL head | 26.5 ± 1.1 | 34.2 ± 3.5 |
| Random features-NIL head | 32.9 ± 0.6 | 43.2 ± 0.5 |
| NIL training-NIL head | 38.3 ± 0.6 | 43.0 ± 0.2 |

Table 5: MAML/ANIL training leads to superior learned features, supporting the importance of the head at training. Training with MAML/ANIL leads to superior performance over the other methods, which do not have task-specific heads.

After training, we apply the NIL algorithm to evaluate test performance, and thus the quality of the features learned at training. The results are shown in Table 5. MAML and ANIL training perform best. Multitask training, which has no task-specific head, performs the worst, even worse than random features (adding evidence for the need for task specificity at training to facilitate feature learning). Using NIL during training performs worse than MAML/ANIL. These results suggest that the head is important at training to learn good features in the network body.

In Appendix D.1, we study test-time performance variations from using a MAML/ANIL head instead of NIL, finding (as suggested by Section 5.1) very little performance difference. Additional results on the similarity between the representations of different training regimes are given in Appendix D.2.
# 6 FEATURE REUSE IN OTHER META-LEARNING ALGORITHMS

So far, we have closely examined the MAML algorithm, and have demonstrated empirically that the algorithm's success is primarily due to feature reuse, rather than rapid learning. We now discuss rapid learning vs feature reuse more broadly in meta-learning. By combining our results with an analysis of evidence reported in prior work, we find support for many meta-learning algorithms succeeding via feature reuse, identifying a common theme characterizing the operating regime of much of current meta-learning.

# 6.1 OPTIMIZATION AND MODEL BASED META-LEARNING

MAML falls within the broader class of optimization-based meta-learning algorithms, which, at inference time, directly optimize model parameters for a new task using the support set. MAML has inspired many other optimization-based algorithms, which utilize the same two-loop structure (Lee and Choi, 2018; Rusu et al., 2018; Finn et al., 2018). Our analysis so far has thus yielded insights into the feature reuse vs rapid learning question for this class of algorithms. Another broad class of meta-learning consists of model-based algorithms, which also have notions of rapid learning and feature reuse.

In the model-based setting, the meta-learning model's parameters are not directly optimized for the specific task on the support set. Instead, the model typically conditions its output on some representation of the task definition. One way to achieve this conditioning is to jointly encode the entire support set in the model's latent representation (Vinyals et al., 2016; Sung et al., 2018), enabling it to adapt to the characteristics of each task. This constitutes rapid learning for model-based meta-learning algorithms.

An alternative to joint encoding is to encode each member of the support set independently and apply a cosine similarity rule (as in Vinyals et al. (2016)) to classify an unlabelled example. This mode of operation is purely feature reuse: we do not use information defining the task to directly influence the decision function.

If joint encoding gave a significant test-time improvement over non-joint encoding, this would suggest that rapid learning of the test-time task is taking place, as task-specific information is being utilized to influence the model's decision function. However, on analyzing results in prior literature, this improvement appears to be minimal. Indeed, in Matching Networks (Vinyals et al., 2016), joint encoding reaches 44.2% accuracy on MiniImageNet-5way-1shot, whereas independent encoding obtains 41.2%: a small difference. More refined models suggest the gap is even smaller. For instance, Chen et al. (2019) re-implemented and studied many methods for one-shot learning, and baselines without joint encoding achieved 48.24% accuracy on MiniImageNet-5way-1shot, whilst models using joint encoding, such as Relation Net (Sung et al., 2018), achieve the very similar accuracy of 49.31% (they also report MAML, at 46.47%). As a result, we believe that feature reuse, rather than rapid learning, is currently the dominant mode of operation in both MAML-style optimization-based meta-learning and model-based meta-learning.
# 7 CONCLUSION

In this paper, we studied a fundamental question: whether the highly successful MAML algorithm relies on rapid learning or feature reuse. Through a series of experiments, we found that feature reuse is the dominant component of MAML's efficacy on benchmark datasets. This insight led to the ANIL (Almost No Inner Loop) algorithm, a simplification of MAML that has identical performance on standard image classification and reinforcement learning benchmarks, and provides computational benefits. We further studied the importance of the head (final layer) of a neural network trained with MAML, discovering that the body (lower layers) of a network is sufficient for few-shot classification at test time, allowing us to remove the network head for testing (the NIL algorithm) and still match performance. We connected our results to the broader literature in meta-learning, identifying feature reuse as a common mode of operation for other meta-learning algorithms also. Based on our conclusions, future work could look at developing and analyzing new meta-learning algorithms that perform more rapid learning, which may expand the datasets and problems amenable to these techniques. We note that our study mainly considered benchmark datasets, such as Omniglot and MiniImageNet. It is an interesting future direction to consider rapid learning and feature reuse in MAML on other few-shot learning datasets, such as those from Triantafillou et al. (2019).

# ACKNOWLEDGEMENTS

The authors thank Geoffrey Hinton, Chelsea Finn, Hugo Larochelle, and Chiyuan Zhang for helpful feedback on the methods and results.
# REFERENCES

Antreas Antoniou, Harrison Edwards, and Amos Storkey. How to train your MAML. arXiv preprint arXiv:1810.09502, 2018.
Yujia Bao, Menghua Wu, Shiyu Chang, and Regina Barzilay. Few-shot text classification with distributional signatures. arXiv preprint arXiv:1908.06039, 2019.
Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. Identifying and controlling important neurons in neural machine translation. arXiv preprint arXiv:1811.01157, 2018.
Luca Bertinetto, Joao F Henriques, Philip HS Torr, and Andrea Vedaldi. Meta-learning with differentiable closed-form solvers. arXiv preprint arXiv:1805.08136, 2018.
Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classification. arXiv preprint arXiv:1904.04232, 2019.
Chelsea Finn and Sergey Levine. Meta-learning and universality: Deep representations and gradient descent can approximate any learning algorithm. arXiv preprint arXiv:1710.11622, 2017.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1126-1135. JMLR.org, 2017.
Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic model-agnostic meta-learning. In Advances in Neural Information Processing Systems, pages 9516-9527, 2018.
Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, and Richard E Turner. Meta-learning probabilistic inference for prediction. arXiv preprint arXiv:1805.09921, 2018.
Akhilesh Gotmare, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. A closer look at deep learning heuristics: Learning rate restarts, warmup and distillation. arXiv preprint arXiv:1810.13243, 2018.
Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas Griffiths. Recasting gradient-based meta-learning as hierarchical bayes. arXiv preprint arXiv:1801.08930, 2018.
James Harrison, Apoorva Sharma, and Marco Pavone. Meta-learning priors for efficient online bayesian regression. arXiv preprint arXiv:1807.08912, 2018.
Kyle Hsu, Sergey Levine, and Chelsea Finn. Unsupervised learning via meta-learning. arXiv preprint arXiv:1810.02334, 2018.
Khurram Javed and Martha White. Meta-learning representations for continual learning. arXiv preprint arXiv:1905.12588, 2019.
Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, volume 2, 2015.
Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural network representations revisited. arXiv preprint arXiv:1905.00414, 2019.
Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10657-10665, 2019.
Yoonho Lee and Seungjin Choi. Gradient-based meta-learning with learned layerwise metric and subspace. arXiv preprint arXiv:1801.05558, 2018.
Yixuan Li, Jason Yosinski, Jeff Clune, Hod Lipson, and John E Hopcroft. Convergent learning: Do different neural networks learn the same representations? In FE@NIPS, pages 196-212, 2015.
Niru Maheswaranathan, Alex H. Williams, Matthew D. Golub, Surya Ganguli, and David Sussillo. Universality and individuality in neural dynamics across large populations of recurrent networks. arXiv preprint arXiv:1907.08549, 2019.
Ari S Morcos, Maithra Raghu, and Samy Bengio. Insights on representational similarity in neural networks with canonical correlation. arXiv preprint arXiv:1806.05759, 2018.
Alex Nichol and John Schulman. Reptile: A scalable metalearning algorithm. arXiv preprint arXiv:1803.02999, 2, 2018.
Maithra Raghu. SVCCA Code and Tutorials. https://github.com/google/svcca.
Maithra Raghu, Justin Gilmer, Jason Yosinski, and Jascha Sohl-Dickstein. SVCCA: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. In Advances in Neural Information Processing Systems, pages 6076-6085, 2017.
Maithra Raghu, Chiyuan Zhang, Jon Kleinberg, and Samy Bengio. Transfusion: Understanding transfer learning with applications to medical imaging. arXiv preprint arXiv:1902.07208, 2019.
Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. 2016.
Andrei A Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. arXiv preprint arXiv:1807.05960, 2018.
Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning, pages 1842-1850, 2016.
Naomi Saphra and Adam Lopez. Understanding learning dynamics of language models with SVCCA. arXiv preprint arXiv:1811.00225, 2018.
Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Information Processing Systems, pages 4077-4087, 2017.
Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1199-1208, 2018.
Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, and Hugo Larochelle. Meta-Dataset: A dataset of datasets for learning to learn from few examples. arXiv preprint arXiv:1903.03096, 2019.
Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in Neural Information Processing Systems, pages 3630-3638, 2016.
Liwei Wang, Lunjia Hu, Jiayuan Gu, Zhiqiang Hu, Yue Wu, Kun He, and John E. Hopcroft. To what extent do different neural networks learn the same representation: A neuron activation subspace match approach. In NeurIPS 2018, 2018.
Fengwei Zhou, Bin Wu, and Zhenguo Li. Deep meta-learning: Learning to learn in the concept space. arXiv preprint arXiv:1802.03596, 2018.
Luisa M Zintgraf, Kyriacos Shiarlis, Vitaly Kurin, Katja Hofmann, and Shimon Whiteson. Fast context adaptation via meta-learning. arXiv preprint arXiv:1810.03642, 2018.
# A FEW-SHOT IMAGE CLASSIFICATION DATASETS AND EXPERIMENTAL SETUPS

We consider the few-shot learning paradigm for image classification to evaluate MAML and ANIL. We evaluate on two datasets often used for few-shot multiclass classification: the Omniglot dataset and the MiniImageNet dataset.

Omniglot: The Omniglot dataset consists of over 1600 different handwritten character classes from 50 alphabets. The dataset is split at the character level, so that certain characters are in the training set and others in the validation set. We consider the 20-way 1-shot and 20-way 5-shot tasks on this dataset, where at test time we wish our classifier to discriminate between 20 randomly chosen character classes from the held-out set, given only 1 or 5 labelled example(s) from each of these 20 testing classes respectively. The model architecture used is identical to that in the original MAML paper, namely: 4 modules of $3 \times 3$ convolutions with 64 filters and a stride of 2, each followed by batch normalization and a ReLU nonlinearity. The Omniglot images are downsampled to $28 \times 28$, so the dimensionality of the last hidden layer is 64. The last layer is fed into a 20-way softmax. Our models are trained using a batch size of 16, 5 inner loop updates, and an inner learning rate of 0.1.

MiniImageNet: The MiniImageNet dataset was proposed by Ravi and Larochelle (2016), and consists of 64 training classes, 16 validation classes, and 20 test classes. We consider the 5-way 1-shot and 5-way 5-shot tasks on this dataset, where the test-time task is to classify among 5 different randomly chosen validation classes, given only 1 and 5 labelled examples respectively. The model architecture is again identical to that in the original paper: 4 modules of $3 \times 3$ convolutions with 32 filters, each followed by batch normalization, a ReLU nonlinearity, and $2 \times 2$ max pooling. Our models are trained using a batch size of 4, 5 inner loop update steps, and an inner learning rate of 0.01. 10 inner gradient steps are used for evaluation at test time.
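For reference, episode construction for these N-way k-shot benchmarks can be sketched as follows (`class_to_examples` is a hypothetical mapping from class id to images; dataset split handling is omitted):

```python
import random

def sample_episode(class_to_examples, n_way=5, k_shot=1, q_queries=15):
    """Sample one N-way k-shot task: n_way classes, with k_shot support
    and q_queries query examples per class, relabelled 0..n_way-1."""
    classes = random.sample(list(class_to_examples), n_way)
    support, query = [], []
    for label, c in enumerate(classes):
        picks = random.sample(class_to_examples[c], k_shot + q_queries)
        support += [(x, label) for x in picks[:k_shot]]
        query += [(x, label) for x in picks[k_shot:]]
    return support, query
```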
# B ADDITIONAL DETAILS AND RESULTS: FREEZING AND REPRESENTATIONAL SIMILARITY

In this section, we provide further experimental details and results from the freezing and representational similarity experiments.

# B.1 EXPERIMENTAL DETAILS

We concentrate on MiniImageNet for our experiments in Section 3.2, as it is more complex than Omniglot.

The model architecture used for our experiments is identical to that in the original paper: 4 modules of $3 \times 3$ convolutions with 32 filters, each followed by batch normalization, a ReLU nonlinearity, and $2 \times 2$ max pooling. Our models are trained using a batch size of 4, 5 inner loop update steps, and an inner learning rate of 0.01. 10 inner gradient steps are used for evaluation at test time. We train each model 3 times with different random seeds, for 30000 iterations.
+ # B.2 DETAILS OF REPRESENTATIONAL SIMILARITY
287
+
288
+ CCA takes in as inputs $L_{1} = \{z_{1}^{(1)}, z_{2}^{(1)}, \dots, z_{m}^{(1)}\}$ and $L_{2} = \{z_{1}^{(2)}, z_{2}^{(1)}, \dots, z_{n}^{(2)}\}$ , where $L_{1}, L_{2}$ are layers, and $z_{i}^{(j)}$ is a neuron activation vector: the vector of outputs of neuron $i$ (of layer $L_{j}$ ) over a set of inputs $X$ . It then finds linear combinations of the neurons in $L_{1}$ and neurons in $L_{2}$ so that the resulting activation vectors are maximally correlated, which is summarized in the canonical correlation coefficient. Iteratively repeating this process gives a similarity score (in [0, 1] with 1 identical and 0 completely different) between the representations of $L_{1}$ and $L_{2}$ .
289
+
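+ As a concrete illustration, the following is a minimal numpy sketch of this similarity computation; it is a simplified stand-in for the CCA tooling used in the paper, assuming activation matrices of shape (num_inputs, num_neurons).
+
+ ```python
+ import numpy as np
+
+ def mean_cca_similarity(acts1, acts2):
+     """Mean canonical correlation between two layers' activations."""
+     acts1 = acts1 - acts1.mean(axis=0)  # center each neuron's activations
+     acts2 = acts2 - acts2.mean(axis=0)
+     q1, _ = np.linalg.qr(acts1)  # orthonormal basis for each layer's span
+     q2, _ = np.linalg.qr(acts2)
+     # Singular values of q1^T q2 are the canonical correlation coefficients.
+     coefs = np.linalg.svd(q1.T @ q2, compute_uv=False)
+     return float(np.clip(coefs, 0.0, 1.0).mean())
+ ```
+
+ For convolutional layers, the spatial dimensions are first flattened into the input axis so that channels play the role of neurons, as described in the next paragraph.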
290
+ We apply this to compare corresponding layers of two networks, net1 and net2, where net1 and net2 might differ due to training step, training method (ANIL vs MAML) or the random seed. When comparing convolutional layers, as described in Raghu et al. (2017), we perform the comparison over channels, flattening out over all of the spatial dimensions and then taking the mean CCA coefficient. We average over three random repeats.
+
+ ![](images/5aea06fa6e29d86d53e90e0886e092f9fb22969f3e58399fe3575cb5a5bd2408.jpg)
+ Figure 6: Euclidean distance before and after finetuning for MiniImageNet. We compute the average (across tasks) Euclidean distance between the weights before and after inner loop adaptation, separately for different layers. We observe that all layers except for the final layer show very little difference before and after inner loop adaptation, suggesting significant feature reuse.
+
+ ![](images/e16057cfef7ce8d4bc74e2c3d883e3c8b6fce0e1e8d48f3b99a1ce3c8acbe1d8.jpg)
298
+
299
+ # B.3 SIMILARITY BEFORE AND AFTER INNER LOOP WITH EUCLIDEAN DISTANCE
300
+
301
+ In addition to assessing representational similarity with CCA/CKA, we also consider the simpler measure of Euclidean distance, capturing how much the weights of the network change during the inner loop update (task-specific finetuning). We note that this experiment does not assess functional changes over the inner loop updates as well as the CCA experiments do; however, it serves to provide useful intuition.
302
+
303
+ We plot the per-layer average Euclidean distance between the initialization $\theta$ and the finetuned weights $\theta_{m}^{(b)}$ across different tasks $T_{b}$ , i.e.
304
+
305
+ $$
+ \frac{1}{N} \sum_{b=1}^{N} \left\| \theta_{l} - \left(\theta_{l}\right)_{m}^{(b)} \right\|
+ $$
308
+
309
+ across different layers $l$ , for MiniImageNet in Figure 6. We observe that very quickly after the start of training, all layers except for the last layer show only a small Euclidean distance between their weights before and after finetuning, suggesting significant feature reuse. (Note that this is despite the fact that these layers have more parameters than the final layer.)
310
+
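+ A minimal sketch of this per-layer statistic (framework-agnostic numpy; the parameter dictionaries are hypothetical names for the initialization and the task-adapted weights):
+
+ ```python
+ import numpy as np
+
+ def per_layer_distance(init_params, adapted_params_per_task):
+     """Average over tasks of ||theta_l - (theta_l)_m^(b)|| per layer l.
+     init_params: dict layer_name -> array; adapted_params_per_task:
+     list with one dict per task, sharing the same keys."""
+     n_tasks = len(adapted_params_per_task)
+     return {
+         layer: sum(np.linalg.norm(adapted[layer] - weights)
+                    for adapted in adapted_params_per_task) / n_tasks
+         for layer, weights in init_params.items()
+     }
+ ```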
311
+ # B.4 CCA SIMILARITY ACROSS RANDOM SEEDS
312
+
313
+ The experiment in Section 3.2.2 compared representational similarity of $L_{1}$ and $L_{2}$ at different points in training (before/after inner loop adaptation) but corresponding to the same random seed. To complete the picture, it is useful to study whether representational similarity across different random seeds is also mostly unaffected by the inner loop adaptation. This motivates four natural comparisons: assume layer $L_{1}$ is from the first seed, and layer $L_{2}$ is from the second seed. Then we can compute the representational similarity between $(L_{1} \text{ pre}, L_{2} \text{ pre})$ , $(L_{1} \text{ pre}, L_{2} \text{ post})$ , $(L_{1} \text{ post}, L_{2} \text{ pre})$ and $(L_{1} \text{ post}, L_{2} \text{ post})$ , where pre/post signify whether we take the representation before or after adaptation.
314
+
315
+ Prior work has shown that neural network representations may vary across different random seeds (Raghu et al., 2017; Morcos et al., 2018; Li et al., 2015; Wang et al., 2018), organically resulting in CCA similarity scores much less than 1. So, to isolate the effect of the inner loop on the representation, we plot the CCA similarities of (i) $(L_{1}$ pre, $L_{2}$ pre) against $(L_{1}$ pre, $L_{2}$ post), (ii) $(L_{1}$ pre, $L_{2}$ pre) against $(L_{1}$ post, $L_{2}$ pre), and (iii) $(L_{1}$ pre, $L_{2}$ pre) against $(L_{1}$ post, $L_{2}$ post), separately across the different random seeds and different layers. We then compute the line of best fit for each plot. If the line of best fit fits the data well and is close to $y = x$ , this suggests that the inner loop adaptation doesn't affect the features much: the similarity before adaptation is very close to the similarity after adaptation.
316
+
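+ The line of best fit and $R^2$ used in these plots can be computed as in the following sketch, where x and y hold the paired similarity scores (e.g. $(L_1$ pre, $L_2$ pre) on x and $(L_1$ post, $L_2$ post) on y) gathered across seeds and layers; names are illustrative.
+
+ ```python
+ import numpy as np
+
+ def fit_line(x, y):
+     """Least-squares line y ~ slope * x + intercept, with R^2."""
+     x, y = np.asarray(x), np.asarray(y)
+     slope, intercept = np.polyfit(x, y, deg=1)
+     pred = slope * x + intercept
+     ss_res = np.sum((y - pred) ** 2)
+     ss_tot = np.sum((y - y.mean()) ** 2)
+     return slope, intercept, 1.0 - ss_res / ss_tot
+ ```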
317
+ ![](images/c09c48c4ba3cdb8e8c93271e955b39f3f7aec5e66fe7f2277167e5b60886223d.jpg)
318
+ Figure 7: Computing CCA similarity pre/post adaptation across different random seeds further demonstrates that the inner loop doesn't change representations significantly. We compute CCA similarity of $L_{1}$ from seed 1 and $L_{2}$ from seed 2, varying whether we take the representation pre (before) adaptation or post (after) adaptation. To isolate the effect of adaptation from inherent variation in the network representation across seeds, we plot CCA similarity of the representations before adaptation against representations after adaptation in three different combinations: (i) $(L_{1}$ pre, $L_{2}$ pre) against $(L_{1}$ pre, $L_{2}$ post), (ii) $(L_{1}$ pre, $L_{2}$ pre) against $(L_{1}$ post, $L_{2}$ pre), and (iii) $(L_{1}$ pre, $L_{2}$ pre) against $(L_{1}$ post, $L_{2}$ post). We do this separately across different random seeds and different layers. Then, we compute a line of best fit, finding that in all three plots it is almost identical to $y = x$ , demonstrating that the representation does not change significantly pre/post adaptation. Furthermore, a computation of the coefficient of determination $R^{2}$ gives $R^{2} \approx 1$ , illustrating that the data is well explained by this relation. In Figure 8, we perform this comparison with CKA, observing the same high-level conclusions.
319
+
320
+ ![](images/335fc672187d6adb025cea194f08698102f3a5e80df89e51744185581044cf07.jpg)
321
+
322
+ ![](images/24ec9f04da3ed5f3fb81940b69cc2068ffc40b6c452573a6f5983959b3246bb9.jpg)
323
+
324
+ ![](images/e1c0c0144b596cb281eb1b438bf82394c5c718a02391f69a890f886ce5855749.jpg)
325
+ Figure 8: We perform the same comparison as in Figure 7, but with CKA instead. There is more variation in the similarity scores, but we still see a strong correlation between (Pre, Pre) and (Post, Post) comparisons, showing that representations do not change significantly over the inner loop.
326
+
327
+ The results are shown in Figure 7. In all of the plots, we see that the line of best fit is almost exactly $y = x$ (even for the pre/pre vs post/post plot, which could conceivably be more different as both seeds change) and a computation of the coefficient of determination $R^2$ gives $R^2 \approx 1$ for all three plots. Putting this together with Figure 2, we can conclude that the inner loop adaptation step doesn't affect the representation learned by any layer except the head, and that the learned representations and features are mostly reused as is for the different tasks.
328
+
329
+ # B.5 MINIIMAGENET-5WAY-1SHOT FREEZING AND CCA OVER TRAINING
330
+
331
+ Figure 9 shows that, from early on in training on MiniImageNet-5way-1shot, the CCA similarity between activations pre and post inner loop update is very high for all layers but the head. We further see that the validation set accuracy suffers almost no decrease if we remove the inner loop updates and freeze all layers but the head. This shows that even early on in training, the inner loop appears to have minimal effect on learned representations and features. This supplements the results seen in Figure 3 on MiniImageNet-5way-5shot.
332
+
333
+ ![](images/5c4c00c4497fb7961c8df8e7a7515cdbafbb8552cadb74c72827a667a4780ed7.jpg)
334
+ Figure 9: Inner loop updates have little effect on learned representations from early on in learning. We consider freezing and representational similarity experiments for MiniImageNet-5way-1shot. We see that early on in training (from as few as 10k iterations in), the inner loop updates have little effect on the learned representations and features, and that removing the inner loop updates for all layers but the head has little-to-no impact on the validation set accuracy.
335
+
336
+ ![](images/3dc607408617fab4ac2f6d20b49da8ad559f4d3898f2411ef78f7fe87619e754.jpg)
337
+
338
+ # C ANIL ALGORITHM: MORE DETAILS
339
+
340
+ In this section, we provide more details about the ANIL algorithm, including an example of the ANIL update, implementation details, and further experimental results.
341
+
342
+ # C.1 AN EXAMPLE OF THE ANIL UPDATE
343
+
344
+ Consider a simple, two layer linear network with a single hidden unit in each layer: $\hat{y}(x; \boldsymbol{\theta}) = \theta_2(\theta_1 x)$ . In this example, $\theta_2$ is the head. Consider the 1-shot regression problem, where we have access to examples $\left\{(x_1^{(t)}, y_1^{(t)}), (x_2^{(t)}, y_2^{(t)})\right\}$ for tasks $t = 1, \dots, T$ . Note that $(x_1^{(t)}, y_1^{(t)})$ is the (example, label) pair in the meta-training set (used for inner loop adaptation: the support set), and $(x_2^{(t)}, y_2^{(t)})$ is the pair in the meta-validation set (used for the outer loop update: the target set).
345
+
346
+ In the few-shot learning setting, we first draw a set of $N$ tasks and labelled examples from our meta-training set: $\left\{(x_1^{(1)},y_1^{(1)}),\ldots ,(x_1^{(N)},y_1^{(N)})\right\}$ . Assume for simplicity that we only apply one gradient step in the inner loop. The inner loop updates for each task are thus defined as follows:
347
+
348
+ $$
+ \theta_{1}^{(t)} \leftarrow \theta_{1} - \frac{\partial L\left(\hat{y}\left(x_{1}^{(t)}; \boldsymbol{\theta}\right), y_{1}^{(t)}\right)}{\partial \theta_{1}} \tag{1}
+ $$
+
+ $$
+ \theta_{2}^{(t)} \leftarrow \theta_{2} - \frac{\partial L\left(\hat{y}\left(x_{1}^{(t)}; \boldsymbol{\theta}\right), y_{1}^{(t)}\right)}{\partial \theta_{2}} \tag{2}
+ $$
355
+
356
+ where $L(\cdot, \cdot)$ is the loss function (e.g. mean squared error) and $\theta_i^{(t)}$ refers to a parameter after the inner loop update for task $t$ .
357
+
358
+ The task-adapted parameters for MAML and ANIL are as follows. Note how only the head parameters change per-task in ANIL:
359
+
360
+ $$
+ \boldsymbol{\theta}_{\mathrm{MAML}}^{(t)} = \left[\theta_{1}^{(t)}, \theta_{2}^{(t)}\right] \tag{3}
+ $$
+
+ $$
+ \boldsymbol{\theta}_{\mathrm{ANIL}}^{(t)} = \left[\theta_{1}, \theta_{2}^{(t)}\right] \tag{4}
+ $$
367
+
368
+ In the outer loop update, we then perform the following operations using the data from the meta-validation set:
369
+
370
+ $$
+ \theta_{1} \leftarrow \theta_{1} - \sum_{t=1}^{N} \frac{\partial L\left(\hat{y}\left(x_{2}^{(t)}; \boldsymbol{\theta}^{(t)}\right), y_{2}^{(t)}\right)}{\partial \theta_{1}} \tag{5}
+ $$
+
+ $$
+ \theta_{2} \leftarrow \theta_{2} - \sum_{t=1}^{N} \frac{\partial L\left(\hat{y}\left(x_{2}^{(t)}; \boldsymbol{\theta}^{(t)}\right), y_{2}^{(t)}\right)}{\partial \theta_{2}} \tag{6}
+ $$
377
+
378
+ Considering the update for $\theta_{1}$ in more detail for our simple, two layer, linear network (the case for $\theta_{2}$ is analogous), we have the following update for MAML:
379
+
380
+ $$
+ \theta_{1} \leftarrow \theta_{1} - \sum_{t=1}^{N} \frac{\partial L\left(\hat{y}\left(x_{2}^{(t)}; \boldsymbol{\theta}_{\mathrm{MAML}}^{(t)}\right), y_{2}^{(t)}\right)}{\partial \theta_{1}} \tag{7}
+ $$
+
+ $$
+ \hat{y}\left(x_{2}^{(t)}; \boldsymbol{\theta}_{\mathrm{MAML}}^{(t)}\right) = \left(\left[\theta_{2} - \frac{\partial L\left(\hat{y}\left(x_{1}^{(t)}; \boldsymbol{\theta}\right), y_{1}^{(t)}\right)}{\partial \theta_{2}}\right] \cdot \left[\theta_{1} - \frac{\partial L\left(\hat{y}\left(x_{1}^{(t)}; \boldsymbol{\theta}\right), y_{1}^{(t)}\right)}{\partial \theta_{1}}\right] \cdot x_{2}^{(t)}\right) \tag{8}
+ $$
387
+
388
+ For ANIL, on the other hand, the update will be:
389
+
390
+ $$
+ \theta_{1} \leftarrow \theta_{1} - \sum_{t=1}^{N} \frac{\partial L\left(\hat{y}\left(x_{2}^{(t)}; \boldsymbol{\theta}_{\mathrm{ANIL}}^{(t)}\right), y_{2}^{(t)}\right)}{\partial \theta_{1}} \tag{9}
+ $$
+
+ $$
+ \hat{y}\left(x_{2}^{(t)}; \boldsymbol{\theta}_{\mathrm{ANIL}}^{(t)}\right) = \left(\left[\theta_{2} - \frac{\partial L\left(\hat{y}\left(x_{1}^{(t)}; \boldsymbol{\theta}\right), y_{1}^{(t)}\right)}{\partial \theta_{2}}\right] \cdot \theta_{1} \cdot x_{2}^{(t)}\right) \tag{10}
+ $$
397
+
398
+ Note the lack of an inner loop update for $\theta_{1}$ , and how we do not remove second-order terms in ANIL (unlike in first-order MAML); second-order terms still persist through the derivative of the inner loop update for the head parameters.
399
+
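+ To make the difference concrete, here is a minimal PyTorch sketch of one meta-update for this toy two-parameter network, under the single inner-step assumption above (with the inner learning rate, left implicit in the equations, made explicit); this is an illustration, not the paper's released code.
+
+ ```python
+ import torch
+
+ def meta_update(theta1, theta2, tasks, inner_lr=0.1, outer_lr=0.01, anil=True):
+     """One outer-loop step for y_hat = theta2 * (theta1 * x).
+     tasks: list of ((x1, y1), (x2, y2)) support/target example pairs;
+     theta1, theta2: scalar leaf tensors with requires_grad=True."""
+     outer_loss = 0.0
+     for (x1, y1), (x2, y2) in tasks:
+         inner_loss = (theta2 * (theta1 * x1) - y1) ** 2
+         g1, g2 = torch.autograd.grad(inner_loss, (theta1, theta2),
+                                      create_graph=True)  # keep 2nd-order terms
+         t1 = theta1 if anil else theta1 - inner_lr * g1  # ANIL: body unadapted
+         t2 = theta2 - inner_lr * g2                      # head adapted in both
+         outer_loss = outer_loss + (t2 * (t1 * x2) - y2) ** 2
+     g1, g2 = torch.autograd.grad(outer_loss, (theta1, theta2))
+     with torch.no_grad():  # gradient step on the meta-parameters
+         theta1 -= outer_lr * g1
+         theta2 -= outer_lr * g2
+ ```
+
+ With anil=False this corresponds to the MAML update of equations (7)-(8); with anil=True it corresponds to equations (9)-(10), where second-order terms still flow through the adapted head.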
400
+ # C.2 ANIL LEARNS ALMOST IDENTICALLY TO MAML
401
+
402
+ We implement ANIL on MiniImageNet and Omniglot, and generate learning curves for both algorithms in Figure 10. We find that learning proceeds almost identically for ANIL and MAML, showing that removing the inner loop updates for all layers but the head has little effect on the learning dynamics.
403
+
404
+ # C.3 ANIL AND MAML LEARN SIMILAR REPRESENTATIONS
405
+
406
+ We compute CCA similarities across representations in a MAML seed and an ANIL seed, and then plot these against the same MAML seed representation compared to a different MAML seed (and similarly for ANIL). We find a strong correlation between these similarities (Figure 11), which suggests that MAML and ANIL are learning similar representations, despite their algorithmic differences. (ANIL and MAML are about as similar to each other as two ANILs are to each other, or two MAMLs are to each other.)
407
+
408
+ # C.4 ANIL IMPLEMENTATION DETAILS
409
+
410
+ Supervised Learning Implementation: We used the TensorFlow MAML implementation open-sourced by the original authors (Finn et al., 2017). We used the same model architectures as in the original MAML paper for our experiments, and train models 3 times with different random seeds. All models were trained for 30000 iterations, with a batch size of 4, 5 inner loop update steps, and an inner learning rate of 0.01. 10 inner gradient steps were used for evaluation at test time.
411
+
412
+ Reinforcement Learning Implementation: We used the open source PyTorch implementation of MAML for RL $^{1}$ , due to challenges encountered when running the open-sourced TensorFlow implementation from the original authors. We note that the results for MAML in these RL domains do not exactly match those in the original paper; this may be due to large variance in results, depending on the random initialization. We used the same model architecture as the original paper (two layer MLP with 100 hidden units in each layer), a batch size of 40, 1 inner loop update step with an inner learning rate of 0.1, and 20 trajectories for inner loop adaptation. We trained three MAML and ANIL models with different random initializations, and quote the mean and standard deviation of the results. As in the original MAML paper, for RL experiments, we select the best performing model over 500 iterations of training and evaluate this model at test time on a new set of tasks.
+
+ ![](images/f91997e73959b64f41dca3aa537fb711915162cbb39a6ffe6a375e88120ece4b.jpg)
+
+ ![](images/f43f37a015ac3cd9e276d608c59fa6a303d2d8497e05156d06ff59b315a1f883.jpg)
+
+ ![](images/33ed681e4ac2be282ab89175e79d76629c4cffea3664b20aa6d308210059498e.jpg)
+ Figure 10: ANIL and MAML on MiniImageNet and Omniglot. Loss and accuracy curves for ANIL and MAML on (i) MiniImageNet-5way-1shot (ii) MiniImageNet-5way-5shot (iii) Omniglot-20way-1shot. These illustrate how both algorithms learn very similarly over training.
422
+
423
+ ![](images/cd57cf39370969616204420da073a0cfd72f874ae77de9f8e6e2cba06c5443bc.jpg)
424
+ Figure 11: Computing CCA similarity across different seeds of MAML and ANIL networks suggests these representations are similar. We plot the CCA similarity between an ANIL seed and a MAML seed against (i) the CCA similarity of the same MAML seed compared to a different MAML seed and (ii) the CCA similarity of the same ANIL seed compared to a different ANIL seed. We observe a strong correlation of similarity scores in both (i) and (ii). This tells us that (i) two MAML representations vary about as much as MAML and ANIL representations and (ii) two ANIL representations vary about as much as MAML and ANIL representations. In particular, this suggests that MAML and ANIL learn similar features, despite having significant algorithmic differences.
425
+
426
+ ![](images/589022bfb380a5a7bfa7720826406d7711d2e2b98fb8d5045dfa40b966472656.jpg)
427
+
428
+ <table><tr><td rowspan="2"></td><td colspan="3">Training: 5way-1shot</td><td colspan="3">Training: 5way-5shot</td></tr><tr><td>Mean (s)</td><td>Median (s)</td><td>Speedup</td><td>Mean (s)</td><td>Median (s)</td><td>Speedup</td></tr><tr><td>MAML</td><td>0.15</td><td>0.13</td><td>1</td><td>0.68</td><td>0.67</td><td>1</td></tr><tr><td>First Order MAML</td><td>0.089</td><td>0.083</td><td>1.69</td><td>0.40</td><td>0.39</td><td>1.7</td></tr><tr><td>ANIL</td><td>0.084</td><td>0.072</td><td>1.79</td><td>0.37</td><td>0.36</td><td>1.84</td></tr></table>
429
+
430
+ <table><tr><td rowspan="2"></td><td colspan="3">Inference: 5way-1shot</td><td colspan="3">Inference: 5way-5shot</td></tr><tr><td>Mean (s)</td><td>Median (s)</td><td>Speedup</td><td>Mean (s)</td><td>Median (s)</td><td>Speedup</td></tr><tr><td>MAML</td><td>0.083</td><td>0.078</td><td>1</td><td>0.37</td><td>0.36</td><td>1</td></tr><tr><td>ANIL</td><td>0.020</td><td>0.017</td><td>4.15</td><td>0.076</td><td>0.071</td><td>4.87</td></tr></table>
431
+
432
+ Table 6: ANIL offers significant computational speedup over MAML, during both training and inference. Table comparing execution times and speedups of MAML, First Order MAML, and ANIL during training (above) and inference (below) on MiniImageNet domains. Speedup is calculated relative to MAML's execution time. We see that ANIL offers noticeable speedup over MAML, as a result of removing the inner loop almost completely. This permits faster training and inference.
433
+
434
+ # C.5 ANIL IS COMPUTATIONALLY SIMPLER THAN MAML
435
+
436
+ Table 6 shows results from a comparison of the computation time for MAML, First Order MAML, and ANIL, during training and inference, with the TensorFlow implementation described previously, on both MiniImageNet domains. These results are the average time for executing forward and backward passes during training (above) and a forward pass during inference (below), for a task batch size of 1 and a target set size of 1. Results are averaged over 2000 such batches. Speedup is calculated relative to MAML's execution time. Each batch's images were loaded into memory before running the TensorFlow computation graph, to ensure that data loading time was not captured in the timing. Experiments were run on a single NVIDIA Titan-Xp GPU.
437
+
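+ The timing protocol amounts to averaging the wall-clock time of a step over pre-loaded batches; a minimal sketch is below, with step standing in for the forward (and, during training, backward) pass.
+
+ ```python
+ import time
+
+ def average_step_time(step, batches):
+     """Average seconds per step; batches are already in memory,
+     so data loading is excluded from the measurement."""
+     start = time.perf_counter()
+     for batch in batches:
+         step(batch)
+     return (time.perf_counter() - start) / len(batches)
+ ```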
438
+ During training, we see that ANIL is as fast as First Order MAML (which does not compute second order terms during training), and about $1.7\mathrm{x}$ as fast as MAML. This leads to a significant overall training speedup, especially when coupled with the fact that the rate of learning for ANIL and MAML is very similar; see learning curves in Appendix C.2. Note that unlike First Order MAML, ANIL also performs very comparably to MAML on benchmark tasks (on some tasks, First Order MAML performs worse (Finn et al., 2017)). During inference, ANIL achieves over a 4x speedup over MAML (and thus also 4x over First Order MAML, which is identical to MAML at inference time). Both training and inference speedups illustrate the significant computational benefit of ANIL over MAML.
+
+ <table><tr><td>Method</td><td>MiniImageNet-5way-1shot</td><td>MiniImageNet-5way-5shot</td></tr><tr><td>MAML training-MAML head</td><td>46.9 ± 0.2</td><td>63.1 ± 0.4</td></tr><tr><td>MAML training-NIL head</td><td>48.4 ± 0.3</td><td>61.5 ± 0.8</td></tr><tr><td>ANIL training-ANIL head</td><td>46.7 ± 0.4</td><td>61.5 ± 0.5</td></tr><tr><td>ANIL training-NIL head</td><td>48.0 ± 0.7</td><td>62.2 ± 0.5</td></tr><tr><td>Multiclass pretrain-MAML head</td><td>38.4 ± 0.8</td><td>54.6 ± 0.4</td></tr><tr><td>Multiclass pretrain-NIL head</td><td>39.7 ± 0.3</td><td>54.4 ± 0.5</td></tr><tr><td>Multitask pretrain-MAML head</td><td>26.5 ± 0.8</td><td>32.8 ± 0.6</td></tr><tr><td>Multitask pretrain-NIL head</td><td>26.5 ± 1.1</td><td>34.2 ± 3.5</td></tr><tr><td>Random features-MAML head</td><td>32.1 ± 0.5</td><td>43.1 ± 0.3</td></tr><tr><td>Random features-NIL head</td><td>32.9 ± 0.6</td><td>43.2 ± 0.5</td></tr></table>
+
+ Table 7: Test time performance is dominated by features learned, with no difference between NIL/MAML heads. We see identical performances of MAML/NIL heads at test time, indicating that MAML/ANIL training leads to better learned features.
445
+
446
+ # D FURTHER RESULTS ON THE NETWORK HEAD AND BODY
447
+
448
+ # D.1 TRAINING REGIMES FOR THE NETWORK BODY
449
+
450
+ We add to the results of Section 5.2 in the main text by seeing if training a head and applying that to the representations at test time (instead of using the NIL algorithm) gives any change in the results. As might be predicted by Section 5.1, we find no change in the results.
451
+
452
+ More specifically, we do the following:
453
+
454
+ - We train MAML/ANIL networks as standard, and do standard test time adaptation.
455
+ - For multiclass training, we first (pre)train with multiclass classification, then throw away the head and freeze the body. We initialize a new (e.g. 5-class) head, and train that (on top of the frozen multiclass pretrained features) with MAML. At test time, we perform standard adaptation.
456
+ - The same process is applied to multitask training.
457
+ - A similar process is applied to random features, except the network is initialized and then frozen.
458
+
459
+ The results of this, along with the results from Table 5 in the main text, are shown in Table 7. We observe very little performance difference between using a MAML/ANIL head and a NIL head for each training regime. Specifically, task performance is purely determined by the quality of the features and representations learned during training, with task-specific alignment at test time being (i) unnecessary and (ii) unable to influence the final performance of the model (e.g. multitask training performs the same with a MAML head as it does with a NIL head).
460
+
461
+ # D.2 REPRESENTATIONAL ANALYSIS OF DIFFERENT TRAINING REGIMES
462
+
463
+ In Table 8 we include results from using CCA and CKA on the representations learned by the different training methods. Specifically, we studied how similar the representations of the different training methods were to those from MAML training, finding a direct correlation with performance: training schemes learning representations most similar to MAML's also performed the best. We computed similarity scores by averaging the scores over the first three conv layers in the body of the network.
464
+
465
+ <table><tr><td>Feature pair</td><td>CCA Similarity</td><td>CKA Similarity</td></tr><tr><td>(MAML, MAML)</td><td>0.51</td><td>0.83</td></tr><tr><td>(Multiclass pretrain, MAML)</td><td>0.48</td><td>0.79</td></tr><tr><td>(Random features, MAML)</td><td>0.40</td><td>0.72</td></tr><tr><td>(Multitask pretrain, MAML)</td><td>0.28</td><td>0.65</td></tr></table>
466
+
467
+ Table 8: MAML training most closely resembles multiclass pretraining, as illustrated by CCA and CKA similarities. On analyzing the CCA and CKA similarities between different baseline models and MAML (comparing across different tasks and seeds), we see that multiclass pretraining results in features most similar to MAML training. Multitask pretraining differs quite significantly from MAML-learned features, potentially due to the alignment problem.
rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:68a820e70f19c51a1f773fa075aa5e9039c31163c7db6cc2d235921fd5409cb6
3
+ size 1012001
rapidlearningorfeaturereusetowardsunderstandingtheeffectivenessofmaml/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6aa80476bd705c07b11142a4bd01b88023fa348db2f1535f3a44e9ac013056d6
3
+ size 565535
reanalysisofvariancereducedtemporaldifferencelearning/ef365b9e-05dc-4056-ae6a-97d3ae3d68ff_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b87d42ac5ec8c846f2f8333b5328ed991e4073de01c8226044dfc55dbb149c09
3
+ size 191221
reanalysisofvariancereducedtemporaldifferencelearning/ef365b9e-05dc-4056-ae6a-97d3ae3d68ff_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fb4134c2f9c76104e31ddf1b1edb2458a944ff4a0b2b5664042b6240470eabaf
3
+ size 216443
reanalysisofvariancereducedtemporaldifferencelearning/ef365b9e-05dc-4056-ae6a-97d3ae3d68ff_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b87365b28627eac6a98afc1e7bbfd1e0197bd8b89ece4736d3f7e7c1d3d5dbac
3
+ size 1190144
reanalysisofvariancereducedtemporaldifferencelearning/full.md ADDED
The diff for this file is too large to render. See raw diff
 
reanalysisofvariancereducedtemporaldifferencelearning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:53366ce67b5bdea81eb8788ff79d750229e7efdd73efcd2cbab0dfdf29d018c7
3
+ size 1689801
reanalysisofvariancereducedtemporaldifferencelearning/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4b14c595f5bf8367bcf11c69b3bc5ebd5495ed3bd678ac9d708f2cea5545b65d
3
+ size 971584
reclorareadingcomprehensiondatasetrequiringlogicalreasoning/e0bea734-685f-4736-9748-9ef3687b3bf0_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7ec22f20d7c47624954b5ee496e0ba94fca283c491569212fc31ce9693342b78
3
+ size 126150
reclorareadingcomprehensiondatasetrequiringlogicalreasoning/e0bea734-685f-4736-9748-9ef3687b3bf0_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:412236c4b5a47a1144a10ebb8a569d2d5a9479fb3edf93d923c30ac6906f342a
3
+ size 149900
reclorareadingcomprehensiondatasetrequiringlogicalreasoning/e0bea734-685f-4736-9748-9ef3687b3bf0_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6a3b48b485e6b21fc2fd576c6aeec88bdbbcdbaa24269894ccababc4b0db833b
3
+ size 596218
reclorareadingcomprehensiondatasetrequiringlogicalreasoning/full.md ADDED
@@ -0,0 +1,550 @@
1
+ # RECLOR: A READING COMPREHENSION DATASET REQUIRING LOGICAL REASONING
2
+
3
+ Weihao Yu*, Zihang Jiang*, Yanfei Dong & Jiashi Feng
4
+
5
+ National University of Singapore
6
+
7
+ weihaoyu6@gmail.com, {jzihang, dyanfei}@u.nus.edu,
8
+
9
+ elefjia@nus.edu.sg
10
+
11
+ # ABSTRACT
12
+
13
+ Recent powerful pre-trained language models have achieved remarkable performance on most of the popular datasets for reading comprehension. It is time to introduce more challenging datasets to push the development of this field towards more comprehensive reasoning over text. In this paper, we introduce a new Reading Comprehension dataset requiring logical reasoning (ReClor) extracted from standardized graduate admission examinations. As earlier studies suggest, human-annotated datasets usually contain biases, which are often exploited by models to achieve high accuracy without truly understanding the text. In order to comprehensively evaluate the logical reasoning ability of models on ReClor, we propose to identify biased data points and separate them into an EASY set, with the rest as a HARD set. Empirical results show that state-of-the-art models have an outstanding ability to capture the biases contained in the dataset, with high accuracy on the EASY set. However, they struggle on the HARD set, with poor performance near that of random guessing, indicating that more research is needed to essentially enhance the logical reasoning ability of current models.<sup>1</sup>
14
+
15
+ # 1 INTRODUCTION
16
+
17
+ Machine reading comprehension (MRC) is a fundamental task in Natural Language Processing, which requires models to understand a body of text and answer a particular question related to the context. With the success of unsupervised representation learning in NLP, language pre-training based models such as GPT-2 (Radford et al., 2019), BERT (Devlin et al., 2019), XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019) have achieved nearly saturated performance on most of the popular MRC datasets (Rajpurkar et al., 2016; Lai et al., 2017; Rajpurkar et al., 2018; Wang et al., 2018). It is time to challenge state-of-the-art models with more difficult reading comprehension tasks and move a step forward to more comprehensive analysis and reasoning over text (Dua et al., 2019).
18
+
19
+ In natural language understanding, logical reasoning is an important ability to examine, analyze and critically evaluate arguments as they occur in ordinary language according to the definition from Law School Admission Council (2019a). It is a significant component of human intelligence and is essential in negotiation, debate and writing etc. However, existing reading comprehension datasets have none or merely a small amount of data requiring logical reasoning, e.g., $0\%$ in MCTest dataset (Richardson et al., 2013) and $1.2\%$ in SQuAD (Rajpurkar et al., 2016) according to Sugawara & Aizawa (2016). One related task is natural language inference, which requires models to label the logical relationships of sentence pairs. However, this task only considers three types of simple logical relationships and only needs reasoning at sentence-level. To push the development of models in logical reasoning from simple logical relationship classification to multiple complicated logical reasoning and from sentence-level to passage-level, it is necessary to introduce a reading comprehension dataset targeting logical reasoning.
20
+
21
+ A typical example of logical reasoning questions is shown in Table 1. Similar to the format of multiple-choice reading comprehension datasets (Richardson et al., 2013; Lai et al., 2017), it contains a context, a question and four options with only one right answer. To answer the question in this example, readers need to identify the logical connections between the lines to pinpoint the conflict, then understand each of the options and select the option that resolves the conflict. Human minds need extensive training and practice to get used to complex reasoning, and it would take immense effort for crowdsourcing workers to design such logical reasoning questions. Inspired by the datasets extracted from standardized examinations (Lai et al., 2017; Clark et al., 2018), we build a dataset by selecting such logical reasoning questions from standardized exams such as GMAT $^{2}$ and LSAT $^{3}$ . We finally collect 6,138 logical reasoning questions, which constitute a Reading Comprehension dataset requiring logical reasoning (ReClor).
24
+
25
+ Human-annotated datasets usually contain biases (Schwartz et al., 2017; Cai et al., 2017; Bugert et al., 2017; Poliak et al., 2018; Gururangan et al., 2018; Zellers et al., 2019), which are often exploited by neural network models as shortcut solutions to achieve high testing accuracy. For data points whose options can be selected correctly without knowing the contexts and questions, we classify them as biased ones. In order to fully assess the logical reasoning ability of the models, we propose to identify the biased data points and group them as EASY set, and put the rest into HARD set. Based on our experiments on these separate sets, we find that even the state-of-the-art models can only perform well on EASY set and struggle on HARD set as shown in Figure 1. This phenomenon shows that current models can well capture the biases in the dataset but lack the ability to understand the text and reason based on connections between the lines. On the other hand, human beings perform similarly on both the EASY and HARD set. It is thus observed that there is still a long way to go to equip models with true logical reasoning ability.
26
+
27
+ The contributions of our paper are two-fold. First, we introduce ReClor, a new reading comprehension dataset requiring logical reasoning. We use option-only-input baselines trained with different random seeds to identify the data points with biases in the testing set, and group them as the EASY set, with the rest as the HARD set, to facilitate comprehensive evaluation. Second, we evaluate several state-of-the-art models on ReClor and find these pre-trained language models can perform well on the EASY set but struggle on the HARD set. This indicates that although current models are good at exploiting biases in the dataset, they are far from capable of performing real logical reasoning yet.
28
+
29
+ ![](images/5b009db65ecae0631f1078982d0b8e3e0c2fa78d57fa54818b3bc351919198be.jpg)
30
+ Figure 1: Performance comparison of state-of-the-art models and humans (graduate students) on EASY and HARD set of ReClor testing set.
31
+
32
+ # 2 RELATED WORK
33
+
34
+ Reading Comprehension Datasets. A variety of reading comprehension datasets have been introduced to promote the development of this field. MCTest (Richardson et al., 2013) is a dataset with 2,000 multiple-choice reading comprehension questions about fictional stories, in a format similar to ReClor. Rajpurkar et al. (2016) proposed the SQuAD dataset, which contains 107,785 question-answer pairs on 536 Wikipedia articles. The authors manually labeled 192 examples of the dataset and found that the examples mainly require reasoning of lexical or syntactic variation. In an analysis of the above-mentioned datasets, Sugawara & Aizawa (2016) found that no questions in the MCTest dataset (Richardson et al., 2013) require logical reasoning and only $1.2\%$ of those in the SQuAD dataset (Rajpurkar et al., 2016) do. Lai et al. (2017) introduced the RACE dataset by collecting the English exams for middle and high school Chinese students in the age range between 12 and 18. They hired crowd workers on Amazon Mechanical Turk to label the reasoning type of 500 samples in the dataset and showed that around $70\%$ of the samples are in the category of word matching, paraphrasing or single-sentence reasoning. To encourage progress on deeper comprehension of language, more reading comprehension datasets requiring more complicated reasoning types have been introduced, such as iterative reasoning about the narrative of a story (Kočisky et al., 2018), multi-hop reasoning across multiple sentences (Khashabi et al., 2018) and multiple documents (Welbl et al., 2018), commonsense knowledge reasoning (Mihaylov et al., 2018; Zhang et al., 2018; Huang et al., 2019) and numerical discrete reasoning over paragraphs (Dua et al., 2019). However, to the best of our knowledge, although there are some datasets targeting logical reasoning in other NLP tasks, as mentioned in the next section, there is no dataset targeting the evaluation of logical reasoning in the reading comprehension task. This work introduces a new dataset to fill this gap.
35
+
36
+ # Context:
37
+
38
+ In jurisdictions where use of headlights is optional when visibility is good, drivers who use headlights at all times are less likely to be involved in a collision than are drivers who use headlights only when visibility is poor. Yet Highway Safety Department records show that making use of headlights mandatory at all times does nothing to reduce the overall number of collisions.
39
+
40
+ Question: Which one of the following, if true, most helps to resolve the apparent discrepancy in the information above?
41
+
42
+ # Options:
43
+
44
+ A. In jurisdictions where use of headlights is optional when visibility is good, one driver in four uses headlights for daytime driving in good weather.
45
+ B. Only very careful drivers use headlights when their use is not legally required.
46
+ C. The jurisdictions where use of headlights is mandatory at all times are those where daytime visibility is frequently poor.
47
+ D. A law making use of headlights mandatory at all times is not especially difficult to enforce.
48
+
49
+ # Answer: B
50
+
51
+ Table 1: An example in the ReClor dataset which is modified from the Law School Admission Council (2019b).
52
+
53
54
+
55
+ Logical Reasoning in NLP. There are several tasks and datasets introduced to investigate logical reasoning in NLP. The task of natural language inference, also known as recognizing textual entailment (Fyodorov et al., 2000; Condoravdi et al., 2003; Bos & Markert, 2005; Dagan et al., 2005; MacCartney & Manning, 2009), requires models to take a pair of sentences as input and classify their relationship type, i.e., ENTAILMENT, NEUTRAL, or CONTRADICTION. The SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018) datasets are proposed for this task. However, this task only focuses on sentence-level logical relationship reasoning and the relationships are limited to only a few types. Another task related to logical reasoning in NLP is the argument reasoning comprehension task introduced by Habernal et al. (2018), together with a dataset for this task. Given an argument with a claim and a premise, this task aims to select the correct implicit warrant from two options. Although the task is on passage-level logical reasoning, it is limited to only one logical reasoning type, i.e., identifying warrants. ReClor and the proposed task integrate various logical reasoning types into reading comprehension, with the aim to promote the development of models in logical reasoning not only from sentence-level to passage-level, but also from simple logical reasoning types to complicated and diverse ones.
56
+
57
+ Datasets from Examinations. There have been several datasets extracted from human standardized examinations in NLP, such as RACE dataset (Lai et al., 2017) mentioned above. Besides, NTCIR QA Lab (Shibuki et al., 2014) offers comparative evaluation for solving real-world university entrance exam questions; The dataset of CLEF QA Entrance Exams Task (Rodrigo et al., 2015) is extracted from standardized English examinations for university admission in Japan; ARC dataset (Clark et al., 2018) consists of 7,787 science questions targeting student grade level, ranging from 3rd grade to 9th; The dialogue-based multiple-choice reading comprehension dataset DREAM (Sun et al., 2019) contains 10,197 questions for 6,444 multi-turn multi-party dialogues from English language exams that are designed by human experts to assess the comprehension level of Chinese learners of English. Compared with these datasets, ReClor distinguishes itself by targeting logical reasoning.
58
+
59
+ # 3 RECLOR DATA COLLECTION AND ANALYSIS
60
+
61
+ # 3.1 DATA COLLECTION
62
+
63
+ The format of data in ReClor is similar to that of other multiple-choice reading comprehension datasets (Richardson et al., 2013; Lai et al., 2017), where a data point contains a context, a question and four answer options, among which only one option is right/most suitable. We collect reading comprehension problems that require complicated logical reasoning. However, producing such data requires the ability to perform complex logical reasoning, which makes it hard for crowdsourcing workers to generate such logical questions. Fortunately, we find the reading comprehension problems in some standardized tests, such as GMAT and LSAT, are highly in line with our expectations.
+
+ <table><tr><td></td><td>ReClor</td><td>DREAM</td><td>MCTest</td><td>ARC</td><td>RACE</td></tr><tr><td>construction method</td><td>exams</td><td>exams</td><td>crowd-sourcing</td><td>exams</td><td>exams</td></tr><tr><td>context type</td><td>written text</td><td>dialogues</td><td>child&#x27;s stories</td><td>-</td><td>written text</td></tr><tr><td># of options</td><td>4</td><td>3</td><td>4</td><td>4</td><td>4</td></tr><tr><td># of contexts</td><td>6,138</td><td>6,444</td><td>660</td><td>-</td><td>27,933</td></tr><tr><td># of questions</td><td>6,138</td><td>10,197</td><td>2,640</td><td>7,787</td><td>97,687</td></tr><tr><td>Vocab size</td><td>26,576</td><td>13,037</td><td>8,000</td><td>6,329</td><td>136,629</td></tr><tr><td>Context Len</td><td>73.6</td><td>85.9</td><td>210.1</td><td>-</td><td>321.9</td></tr><tr><td>Question Len</td><td>17.0</td><td>8.6</td><td>7.8</td><td>20.5</td><td>10.0</td></tr><tr><td>Option Len</td><td>20.6</td><td>5.3</td><td>3.4</td><td>4.2</td><td>5.3</td></tr></table>
+
+ Table 2: Statistics of several multiple-choice MRC datasets.
70
+
71
+ We construct a dataset containing 6,138 logical reasoning questions sourced from open websites and books. In the original problems, there are five answer options, of which only one is right. To comply with fair use law $^4$ , we shuffle the order of the answer options and randomly delete one of the wrong options for each data point, which results in four options with one right option and three wrong options. Furthermore, similar to the ImageNet dataset $^5$ , ReClor is available for non-commercial research purposes only. We are also hosting a public evaluation server on EvalAI (Yadav et al., 2019) to benchmark progress on ReClor.
72
+
73
+ # 3.2 DATA ANALYSIS
74
+
75
+ As mentioned above, we collect 6,138 data points, of which $91.22\%$ are from actual exams of GMAT and LSAT while the others are from high-quality practice exams. They are divided into training, validation and testing sets with 4,638, 500 and 1,000 data points respectively. The overall statistics of ReClor and a comparison with other similar multiple-choice MRC datasets are summarized in Table 2. As shown, ReClor is of comparable size and has a relatively large vocabulary. Compared with RACE, the context of ReClor is much shorter. In RACE, the context contains many sentences that are redundant for answering a question. In ReClor, however, every sentence in the context passages is important, which makes this dataset focus on evaluating the logical reasoning ability of models rather than the ability to extract relevant information from a long context. The length of the answer options of ReClor is the largest among these datasets. We analyze and manually annotate the types of questions on the testing set and group them into 17 categories, whose percentages and descriptions are shown in Table 3. The percentages of different types of questions reflect those in the logical reasoning modules of GMAT and LSAT. Some examples of different types of logical reasoning are listed in Figure 2, and more examples are listed in Appendix C. Taking two examples, we further show how humans would solve such questions in Table 4, demonstrating the challenge of ReClor.
76
+
77
+ # 3.3 DATA BIASES IN THE DATASET
78
+
79
+ The dataset is collected from exams devised by experts in logical reasoning, which means it is annotated by humans and biases may be introduced into the dataset. Recent studies have shown that models can utilize the biases in a natural language understanding dataset to perform well on the task without truly understanding the text (Schwartz et al., 2017; Cai et al., 2017; Bugert et al., 2017; Poliak et al., 2018; Gururangan et al., 2018; Zellers et al., 2019). It is necessary to analyze such data biases to help evaluate models. In the ReClor dataset, the common context and question are shared across the four options for each data point, so we focus on analyzing the difference in lexical choice and sentence length between the right and wrong options, without contexts and questions. We first investigate the biases of lexical choice. We lowercase the options and then use the WordPiece tokenization (Wu et al., 2016) of $\mathrm{BERT}_{\mathrm{BASE}}$ (Devlin et al., 2019) to get the tokens. Similar to Poliak et al. (2018), for the tokens in options, we analyze the conditional probability of label $l \in \{\text{right}, \text{wrong}\}$ given token $t$ : $p(l|t) = \frac{\text{count}(t, l)}{\text{count}(t)}$ . The larger this correlation score is for a particular token, the more likely the token contributes to the prediction of the related option. Table 5 reports the tokens in the training set with the highest scores among those occurring at least twenty times, since many of the tokens with the highest scores overall are of low frequency. We further analyze the lengths of right and wrong options (Gururangan et al., 2018) in the training set, shown in Figure 3. We notice a slight difference in the distribution of sentence length for right and wrong options: the average length of wrong options is around 21.82, whereas right options are generally longer, with an average length of 23.06. A small code sketch of this scoring computation is given after Figure 3.
+
+ <table><tr><td>Type</td><td>Description</td></tr><tr><td>Necessary Assumptions (11.4%)</td><td>identify the claim that must be true or is required in order for the argument to work.</td></tr><tr><td>Sufficient Assumptions (3.0%)</td><td>identify a sufficient assumption, that is, an assumption that, if added to the argument, would make it logically valid.</td></tr><tr><td>Strengthen (9.4%)</td><td>identify information that would strengthen an argument</td></tr><tr><td>Weaken (11.3%)</td><td>identify information that would weaken an argument</td></tr><tr><td>Evaluation (1.3%)</td><td>identify information that would be useful to know to evaluate an argument</td></tr><tr><td>Implication (4.6%)</td><td>identify something that follows logically from a set of premises</td></tr><tr><td>Conclusion/Main Point (3.6%)</td><td>identify the conclusion/main point of a line of reasoning</td></tr><tr><td>Most Strongly Supported (5.6%)</td><td>find the choice that is most strongly supported by a stimulus</td></tr><tr><td>Explain or Resolve (8.4%)</td><td>identify information that would explain or resolve a situation</td></tr><tr><td>Principle (6.5%)</td><td>identify the principle, or find a situation that conforms to a principle, or match the principles</td></tr><tr><td>Dispute (3.0%)</td><td>identify or infer an issue in dispute</td></tr><tr><td>Technique (3.6%)</td><td>identify the technique used in the reasoning of an argument</td></tr><tr><td>Role (3.2%)</td><td>describe the individual role that a statement is playing in a larger argument</td></tr><tr><td>Identify a Flaw (11.7%)</td><td>identify a flaw in an argument&#x27;s reasoning</td></tr><tr><td>Match Flaws (3.1%)</td><td>find a choice containing an argument that exhibits the same flaws as the passage&#x27;s argument</td></tr><tr><td>Match the Structure (3.0%)</td><td>match the structure of an argument in a choice to the structure of the argument in the passage</td></tr><tr><td>Others (7.3%)</td><td>other types of questions which are not included by the above</td></tr></table>
+
+ Table 3: The percentage and description of each logical reasoning type. The descriptions are adapted from those specified by Khan Academy (2019).
+
+ <table><tr><td>Token</td><td>Score (%)</td><td>Freq</td></tr><tr><td>motive</td><td>65.2</td><td>23</td></tr><tr><td>##ce</td><td>62.5</td><td>24</td></tr><tr><td>thereby</td><td>56.0</td><td>25</td></tr><tr><td>consequence</td><td>52.4</td><td>21</td></tr><tr><td>warm</td><td>52.4</td><td>21</td></tr><tr><td>interfere</td><td>52.2</td><td>23</td></tr><tr><td>contributes</td><td>52.2</td><td>23</td></tr><tr><td>manufacture</td><td>52.0</td><td>25</td></tr><tr><td>included</td><td>52.0</td><td>25</td></tr><tr><td>preferences</td><td>52.0</td><td>25</td></tr></table>
+
+ Table 5: Top 10 tokens that correlate to right options with more than 20 occurrences.
+
+ ![](images/54de7da77315b0dc0a90e95b3b4de799a984f867710a07b8c915d0466b47ebb5.jpg)
+ Figure 3: The distribution of the option length in ReClor with respect to right and wrong labels.
93
+
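+ A minimal sketch of the correlation score $p(l|t)$ described above, assuming the options are already lowercased and WordPiece-tokenized (names are illustrative):
+
+ ```python
+ from collections import Counter
+
+ def option_token_scores(options, min_freq=20):
+     """options: iterable of (tokens, label), label in {"right", "wrong"}.
+     Returns p(right | token) for tokens seen at least min_freq times."""
+     total, right = Counter(), Counter()
+     for tokens, label in options:
+         for t in tokens:
+             total[t] += 1
+             if label == "right":
+                 right[t] += 1
+     return {t: right[t] / n for t, n in total.items() if n >= min_freq}
+ ```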
94
+ # 4 EXPERIMENTS
95
+
96
+ # 4.1 BASELINE MODELS
97
+
98
+ Many neural network based models such as FastText (Joulin et al., 2017), Bi-LSTM, GPT (Radford et al., 2018), GPT-2 (Radford et al., 2019), BERT (Devlin et al., 2019), XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019) have achieved impressive results in various NLP tasks. We challenge these neural models with ReClor to investigate how well they can perform. Details of the baseline models and implementation are shown in Appendices A and B.
99
+
100
+ <table><tr><td>Context:
101
+ If the purpose of laws is to contribute to people&#x27;s happiness, we have a basis for criticizing existing laws as well as proposing new laws. Hence, if that is not the purpose, then we have no basis for the evaluation of existing laws, from which we must conclude that existing laws acquire legitimacy simply because they are the laws
102
+ Question: The reasoning in the argument is flawed in that the argument
103
+ Options:
104
+ A. takes a sufficient condition for a state of affairs to be a necessary condition for it
105
+ B. draws a conclusion about how the world actually is on the basis of claims about how it should be
106
+ C. infers a causal relationship from the mere presence of a correlation
107
+ D. trades on the use of a term in one sense in a premise and in a different sense in the conclusion
108
+ Answer: A
109
+ Reasoning Process of Humans:
110
+ We may first look at the question to understand the specific task of the question: identify a flaw. We then analyze the argument in the context. The conclusion &#x27;existing laws acquire legitimacy simply because they are the laws&#x27; is based on the argument (purpose is NOT happiness) → (NOT basis for criticizing laws), which is obtained from the first statement: (purpose is happiness) → (basis for criticizing laws). However, we know ¬A → ¬B cannot be obtained from A → B. Therefore, we should choose option A, which describes this flaw.
111
+ The distractors here are different types of reasoning flaws. Prior knowledge of basic logical rules is needed to correctly answer this question.</td></tr><tr><td>Context:
112
+ Psychologist: Phonemic awareness, or the knowledge that spoken language can be broken into component sounds, is essential for learning to read an alphabetic language. But one also needs to learn how sounds are symbolically represented by means of letters; otherwise, phonemic awareness will not translate into the ability to read an alphabetic language. Yet many children who are taught by the whole-language method, which emphasizes the ways words sound, learn to read alphabetic languages.
113
+ Question: Which one of the following can be properly inferred from the psychologist&#x27;s statements?
114
+ Options:
115
+ A. The whole-language method invariably succeeds in teaching awareness of how spoken language can be broken into component sounds.
116
+ B. Some children who are taught by the whole-language method are not prevented from learning how sounds are represented by means of letters.
117
+ C. The whole-language method succeeds in teaching many children how to represent sounds symbolically by means of letters.
118
+ D. When the whole-language method succeeds in teaching someone how to represent sounds by means of letters, that person acquires the ability to read an alphabetic language.
119
+ Answer: B
120
+ Reasoning Process of Humans:
121
+ Looking at the question, we know that it is asking about implication. From the first two sentences in the context, we know that there are two necessary conditions to read an alphabetic language: phonemic awareness and symbolic letters. We also learn [(NOT symbolic letters) AND (phonemic awareness)] ≠ read an alphabetic language (denoted as Formula 1). The last sentence in the context says that many children are taught by the whole-language method to learn a language. As for option A, from the context we only know the whole-language method works for &#x27;many&#x27; children, from which we cannot infer that it &#x27;invariably&#x27; works. As for option B, combining the three sentences in the context, we know that the whole-language method meets the two necessary conditions to learn a language; in particular, the last sentence mentions &#x27;learn to read alphabetic languages&#x27;. That children learn to read alphabetic languages means that they must recognize the symbolic letters that represent sounds, because symbolic letters are a necessary condition for reading an alphabetic language; otherwise, they could not read, by Formula 1 mentioned above. Therefore, option B is correct. As for option C, from the context we only know that the whole-language method teaches phonemic awareness and reading an alphabetic language. Symbolic letters may be taught by other methods, so C is wrong. As for D, similar to C, symbolic letters may be taught by other methods, and we also cannot obtain: symbolic letters → read an alphabetic language.</td></tr></table>
122
+
123
+ Table 4: Two examples to show how humans would solve the questions.
124
+
125
+ ![](images/e435c4ae49cacfff1cb17bee5e559edf96d364aec409ae279963f7b4bcc53400.jpg)
126
+ Figure 2: Examples of some question types. The correct options are marked by $\checkmark$ . More examples are shown in the Appendix C.
127
+
128
129
+
130
+ # 4.2 EXPERIMENTS TO FIND BIASED DATA
131
+
132
+ As mentioned earlier, biases prevalently exist in human-annotated datasets (Poliak et al., 2018; Gururangan et al., 2018; Zellers et al., 2019; Niven & Kao, 2019), and they are often exploited by models to perform well without truly understanding the text. Therefore, it is necessary to find the biased data points in ReClor in order to evaluate models in a more comprehensive manner (Sugawara et al., 2018). To this end, we feed the five strong baseline models (GPT, GPT-2, BERT<sub>BASE</sub>, XLNet<sub>BASE</sub> and RoBERTa<sub>BASE</sub>) with ONLY THE ANSWER OPTIONS for each problem. In other words, we purposely remove the context and question from the inputs. In this way, we are able to identify those problems that can be answered correctly by merely exploiting the biases in answer options, without knowing the relevant context and question. However, the setting of this task is a multiple-choice question with 4 possible options, so even a chance baseline has a $25\%$ probability of getting it right. To eliminate the effect of random guessing, we set four different random seeds for each model and pick the data points that are predicted correctly in all four cases to form the EASY set. Data points that are predicted correctly by a model purely at random are then nearly eliminated, since any data point has only a probability of $(25\%)^4 = 0.39\%$ of being guessed right four times consecutively. Then we unite the sets of data points that are consistently predicted right by each model, because intuitively different models may learn different biases of the dataset. The above process is formulated as the following expression:
135
+
136
+ $$
137
+ \begin{array}{l} \mathbb {C} _ {\text {E A S Y}} = \left(\mathbb {C} _ {\text {G P T}} ^ {\text {s e e d} _ {1}} \cap \mathbb {C} _ {\text {G P T}} ^ {\text {s e e d} _ {2}} \cap \mathbb {C} _ {\text {G P T}} ^ {\text {s e e d} _ {3}} \cap \mathbb {C} _ {\text {G P T}} ^ {\text {s e e d} _ {4}}\right) \\ \cup \left(\mathbb {C} _ {\text {G P T} - 2} ^ {\text {s e e d} _ {1}} \cap \mathbb {C} _ {\text {G P T} - 2} ^ {\text {s e e d} _ {2}} \cap \mathbb {C} _ {\text {G P T} - 2} ^ {\text {s e e d} _ {3}} \cap \mathbb {C} _ {\text {G P T} - 2} ^ {\text {s e e d} _ {4}}\right) \\ \cup \left(\mathbb {C} _ {\text {B E R T}} ^ {\text {s e e d} _ {1}} \cap \mathbb {C} _ {\text {B E R T}} ^ {\text {s e e d} _ {2}} \cap \mathbb {C} _ {\text {B E R T}} ^ {\text {s e e d} _ {3}} \cap \mathbb {C} _ {\text {B E R T}} ^ {\text {s e e d} _ {4}}\right) \\ \cup \left(\mathbb {C} _ {\mathrm {X L N e t}} ^ {\mathrm {s e e d} _ {1}} \cap \mathbb {C} _ {\mathrm {X L N e t}} ^ {\mathrm {s e e d} _ {2}} \cap \mathbb {C} _ {\mathrm {X L N e t}} ^ {\mathrm {s e e d} _ {3}} \cap \mathbb {C} _ {\mathrm {X L N e t}} ^ {\mathrm {s e e d} _ {4}}\right) \tag {1} \\ \cup \left(\mathbb {C} _ {\text {R o B E R T a}} ^ {\text {s e e d} _ {1}} \cap \mathbb {C} _ {\text {R o B E R T a}} ^ {\text {s e e d} _ {2}} \cap \mathbb {C} _ {\text {R o B E R T a}} ^ {\text {s e e d} _ {3}} \cap \mathbb {C} _ {\text {R o B E R T a}} ^ {\text {s e e d} _ {4}}\right), \\ \end{array}
138
+ $$
139
+
140
+ $$
141
+ \mathbb{C}_{\text{HARD}} = \mathbb{C}_{\text{TEST}} - \mathbb{C}_{\text{EASY}},
142
+ $$
143
+
144
+ where $\mathbb{C}_{\mathrm{BERT}}^{\mathrm{seed}_1}$ denotes the set of data points predicted correctly by $\mathrm{BERT}_{\mathrm{BASE}}$ with seed 1, and similarly for the rest. Table 6 shows the average performance of each model trained with four different random seeds, together with the number of data points predicted correctly under all four seeds. In total, we obtain 440 such data points from the testing set $\mathbb{C}_{\mathrm{TEST}}$; we denote this subset as the EASY set $\mathbb{C}_{\mathrm{EASY}}$ and the remainder as the HARD set $\mathbb{C}_{\mathrm{HARD}}$.
145
+
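+ As a concrete sketch, the split can be computed with plain set operations (the variable names below are illustrative, not taken from our released code):
+ 
+ ```python
+ from functools import reduce
+ 
+ def correct_set(preds, labels):
+     """Indices of test points that one (model, seed) run predicts correctly."""
+     return {i for i, (p, y) in enumerate(zip(preds, labels)) if p == y}
+ 
+ def easy_hard_split(preds_by_model, labels):
+     """preds_by_model: {model_name: [preds_seed1, ..., preds_seed4]}."""
+     easy = set()
+     for seed_runs in preds_by_model.values():
+         # Intersect over the four seeds of one model (inner part of Eq. 1),
+         # then unite across models (outer part of Eq. 1).
+         easy |= reduce(set.intersection,
+                        (correct_set(p, labels) for p in seed_runs))
+     hard = set(range(len(labels))) - easy  # Eq. (2)
+     return easy, hard
+ ```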
146
+ <table><tr><td>Model</td><td>Val</td><td>Test</td><td>Number</td></tr><tr><td>Chance</td><td>25.0</td><td>25.0</td><td>3.9</td></tr><tr><td>GPT</td><td>45.8</td><td>42.2</td><td>238</td></tr><tr><td>GPT-2</td><td>46.8</td><td>42.6</td><td>245</td></tr><tr><td>BERTBASE</td><td>47.2</td><td>43.2</td><td>234</td></tr><tr><td>XLNetBASE</td><td>47.5</td><td>43.2</td><td>225</td></tr><tr><td>RoBERTaBASE</td><td>48.8</td><td>41.7</td><td>200</td></tr><tr><td>Union</td><td>-</td><td>-</td><td>440</td></tr></table>
147
+
148
+ Table 6: Average accuracy of each model using four different random seeds with only answer options as input, and the number of their common correct predictions.
149
+
150
+ # 4.3 TRANSFER LEARNING THROUGH FINE-TUNING
151
+
152
+ Among multiple-choice reading comprehension or QA datasets derived from exams, ReClor is comparable in size to ARC (Clark et al., 2018) and DREAM (Sun et al., 2019), but much smaller than RACE (Lai et al., 2017). Recent studies (Min et al., 2017; Howard & Ruder, 2018; Huang et al., 2019; Jin et al., 2019) have shown the effectiveness of first pre-training on similar tasks or datasets and then fine-tuning on the target dataset. Jin et al. (2019) find that by first training on RACE (Lai et al., 2017) and then fine-tuning on the target dataset, the performance of $\mathrm{BERT}_{\mathrm{BASE}}$ on the multiple-choice datasets MC500 (Richardson et al., 2013) and DREAM (Sun et al., 2019) is significantly boosted from $69.5\%$ to $81.2\%$ and from $63.2\%$ to $70.2\%$, respectively. However, they also find that the model obtains no significant improvement, or even performs worse, if it is first fine-tuned on a span-based dataset such as SQuAD (Rajpurkar et al., 2016). Since ReClor is a multiple-choice dataset, we choose RACE for our fine-tuning study.
153
+
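+ The protocol itself is simple: the same fine-tuning loop is run twice, first on the source dataset and then on the target one. A minimal sketch with Hugging Face Transformers (`train_epochs` is a hypothetical stand-in for the standard loop described in Appendix B):
+ 
+ ```python
+ from transformers import RobertaForMultipleChoice
+ 
+ def transfer_finetune(train_epochs, race_data, reclor_data):
+     """Two-stage transfer: fine-tune on RACE, then on ReClor.
+     `train_epochs` stands in for the loop of Appendix B."""
+     model = RobertaForMultipleChoice.from_pretrained("roberta-large")
+     model = train_epochs(model, race_data)    # stage 1: source dataset
+     model = train_epochs(model, reclor_data)  # stage 2: target dataset
+     return model
+ ```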
154
+ # 4.4 RESULTS AND ANALYSIS
155
+
156
+ The performance of all tested models on ReClor is presented in Table 7. Since the dataset is built on questions designed for students applying to graduate schools, we randomly choose 100 samples from the testing set, divide them into ten tests, and distribute them to ten different graduate students at a university. We take the average of their scores as the graduate-student baseline. The data in ReClor are carefully selected and modified from high-quality questions of standardized graduate entrance exams, and ambiguous questions are excluded, so we set the ceiling performance to $100\%$.
157
+
158
+ The performance of fastText is better than random guessing, showing that word correlation can help improve performance to some extent. It is difficult for Bi-LSTM to converge on this
159
+
160
+ <table><tr><td>Model</td><td>Input</td><td>RACE</td><td>Val</td><td>Test</td><td>Test-E</td><td>Test-H</td></tr><tr><td>Chance</td><td>(C, Q, A)</td><td></td><td>25.0</td><td>25.0</td><td>25.0</td><td>25.0</td></tr><tr><td>fastText</td><td></td><td></td><td>32.0</td><td>30.8</td><td>40.2</td><td>23.4</td></tr><tr><td>Bi-LSTM</td><td>(C, Q, A)</td><td></td><td>27.8</td><td>27.0</td><td>26.4</td><td>27.5</td></tr><tr><td>GPT</td><td></td><td></td><td>47.6</td><td>45.4</td><td>73.0</td><td>23.8</td></tr><tr><td>GPT-2</td><td></td><td></td><td>52.6</td><td>47.2</td><td>73.0</td><td>27.0</td></tr><tr><td rowspan="2">BERTBASE</td><td>(C, Q, A)</td><td></td><td>54.6</td><td>47.3</td><td>71.6</td><td>28.2</td></tr><tr><td>(C, Q, A)</td><td>✓</td><td>55.2</td><td>49.5</td><td>68.9</td><td>34.3</td></tr><tr><td rowspan="4">BERTLARGE</td><td>(A)</td><td></td><td>46.4</td><td>42.4</td><td>69.3</td><td>21.3</td></tr><tr><td>(Q, A)</td><td></td><td>48.8</td><td>43.4</td><td>72.7</td><td>20.4</td></tr><tr><td>(C, Q, A)</td><td></td><td>53.8</td><td>49.8</td><td>72.0</td><td>32.3</td></tr><tr><td>(C, Q, A)</td><td>✓</td><td>55.6</td><td>54.5</td><td>73.9</td><td>39.3</td></tr><tr><td rowspan="2">XLNetBASE</td><td>(C, Q, A)</td><td></td><td>55.8</td><td>50.4</td><td>75.2</td><td>30.9</td></tr><tr><td>(C, Q, A)</td><td>✓</td><td>62.0</td><td>55.5</td><td>76.1</td><td>39.3</td></tr><tr><td rowspan="4">XLNetLARGE</td><td>(A)</td><td></td><td>45.0</td><td>42.9</td><td>66.1</td><td>24.6</td></tr><tr><td>(Q, A)</td><td></td><td>47.8</td><td>43.4</td><td>68.6</td><td>23.6</td></tr><tr><td>(C, Q, A)</td><td></td><td>62.0</td><td>56.0</td><td>75.7</td><td>40.5</td></tr><tr><td>(C, Q, A)</td><td>✓</td><td>70.8</td><td>62.4</td><td>77.7</td><td>50.4</td></tr><tr><td rowspan="2">RoBERTaBASE</td><td>(C, Q, A)</td><td></td><td>55.0</td><td>48.5</td><td>71.1</td><td>30.7</td></tr><tr><td>(C, Q, A)</td><td>✓</td><td>56.8</td><td>53.0</td><td>72.5</td><td>37.7</td></tr><tr><td rowspan="4">RoBERTaLARGE</td><td>(A)</td><td></td><td>48.8</td><td>43.2</td><td>69.5</td><td>22.5</td></tr><tr><td>(Q, A)</td><td></td><td>49.8</td><td>45.8</td><td>72.0</td><td>25.2</td></tr><tr><td>(C, Q, A)</td><td></td><td>62.6</td><td>55.6</td><td>75.5</td><td>40.0</td></tr><tr><td>(C, Q, A)</td><td>✓</td><td>68.0</td><td>65.1</td><td>78.9</td><td>54.3</td></tr><tr><td>Graduate Students</td><td>(C, Q, A)</td><td></td><td>-</td><td>63.0</td><td>57.1</td><td>67.2</td></tr><tr><td>Ceiling Performance</td><td>(C, Q, A)</td><td></td><td>-</td><td>100</td><td>100</td><td>100</td></tr></table>
161
+
162
+ Table 7: Accuracy (%) of models and human performance. The Input column indicates which of context (C), question (Q) and answer options (A) are given as input. The RACE column indicates whether the model is first fine-tuned on RACE before training on ReClor.
163
+
164
+ dataset. Transformer-based pre-trained models perform relatively well, close to the performance of graduate students. However, we find that these models only perform well on the EASY set, with around $75\%$ accuracy, showing an outstanding ability to capture the biases of the dataset, while they perform poorly on the HARD set, with only around $30\%$ accuracy. In contrast, humans maintain good performance on the HARD set. We notice a difference between graduate students' testing accuracy on the EASY and HARD sets, but this could be due to the small number of students who participated in the experiment. We therefore conclude that humans perform relatively consistently on both biased and non-biased data.
165
+
166
+ We notice that if the models are first trained on RACE and then fine-tuned on ReClor, they obtain significant improvement, especially on the HARD set. The overall performance of $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ is even better than that of graduate students. A similar phenomenon is observed by Jin et al. (2019) on the DREAM dataset (Sun et al., 2019), which shows the potential of transfer learning for reasoning tasks. However, even after fine-tuning on RACE, the best performance of these strong baselines on the HARD set is around $50\%$, still lower than that of graduate students and far from the ceiling performance.
167
+
168
+ We also run experiments with different input settings. Compared with the setting of answer options only (A), adding the question (Q, A) does not bring significant improvement. This may be because generic questions, e.g., Which one of the following is an assumption required by the argument? or Which one of the following, if true, most strengthens the argument?, are shared by problems of the same reasoning type and thus offer little information. Further adding the context causes a significant boost, showing how informative the context is.
169
+
170
+ We further analyze model performance with respect to the different question types of logical reasoning. Some results are shown in Figure 4 and the full results are shown in Figures 5, 6 and 7 in Appendix E. The three models $\mathrm{BERT}_{\mathrm{LARGE}}$, $\mathrm{XLNet}_{\mathrm{LARGE}}$ and $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ perform well on most types. On the HARD set, the three models perform poorly on certain types such as STRENGTHEN, WEAKEN and ROLE, which require extensive logical reasoning. However, they perform relatively better on certain other types, such as CONCLUSION/MAIN POINT and MATCH STRUCTURES, that
171
+
172
+ are more straightforward. For the transfer-learning results, we analyze $\mathrm{XLNet}_{\mathrm{LARGE}}$ in detail. Although the overall performance is significantly boosted after first fine-tuning on RACE, the histograms at the bottom of Figure 4 show that on the EASY set, the accuracy of the model with RACE fine-tuning is similar to that without it for most question types, while on the HARD set, significant improvement is observed on some question types, such as CONCLUSION/MAIN POINT and MOST STRONGLY SUPPORTED. This may be because these types require somewhat less logical reasoning than others, and similar question types may also be found in the RACE dataset. Thus, pre-training on RACE helps enhance the ability of logical reasoning, especially for relatively simple reasoning types, but further methods are still needed, especially for relatively complex reasoning types.
173
+
174
+ ![](images/94eb3ba1341f7f38877c49cf4094509979a605bef6ab53b12123dac525cec337.jpg)
175
+
176
+ ![](images/b5b3634620f0cbbf61ade377e1282fd4353fb8c0155aa76dd7e90743be51d166.jpg)
177
+
178
+ ![](images/0794324d613abe273f9c00acfcadb066e45095fc73819dc658ba38b19c3f206c.jpg)
179
+ Figure 4: Performance of models on the EASY (left) and HARD (right) testing sets. XLNet$_{\mathrm{LARGE}}$+Fine-Tune means the model is first fine-tuned on RACE before training on ReClor.
180
+
181
+ ![](images/a8b76409ed33b40bad0e461ab47c0128d99222f4da096fae5a6fa586178b4306.jpg)
182
+
183
+ # 5 CONCLUSION
184
+
185
+ In this paper, we introduce ReClor, a reading comprehension dataset requiring logical reasoning, with the aim of pushing research on logical reasoning in NLP forward from the sentence level to the passage level and from simple logical reasoning to multiple complicated types. We propose to identify biased data points and to split the testing set into an EASY and a HARD group, covering biased and non-biased data respectively. We further empirically study the different behaviors of state-of-the-art models on these two testing sets, and find that recent powerful transformer-based pre-trained language models have an excellent ability to exploit the biases in the dataset, but struggle to understand and reason on the non-biased data, where their performance is close to, or only slightly better than, random guessing. These results show that there is still a long way to go to equip deep learning models with real logical reasoning abilities. We hope this work will inspire future research to adopt a similar splitting technique and evaluation scheme when reporting model performance. We also show that by first fine-tuning on the large-scale dataset RACE and then fine-tuning on ReClor, the models obtain significant improvement, demonstrating the potential of transfer learning for reasoning tasks.
186
+
187
+ # ACKNOWLEDGMENTS
188
+
189
+ We would like to thank the anonymous reviewers for their insightful comments and suggestions, and Rishabh Jain from Georgia Tech for helping build the leaderboard of ReClor on EvalAI. Jiashi Feng was partially supported by NUS IDS R-263-000-C67-646, ECRA R-263-000-C87-133, MOE Tier-II R-263-000-D17-112 and AI.SG R-263-000-D97-490. Weihao Yu and Zihang Jiang would like to thank the TFRC program for its support with computational resources.
190
+
191
+ # REFERENCES
192
+
193
+ Common Crawl. http://commoncrawl.org, 2019.
194
+ Khan Academy. https://www.khanacademy.org/test-prep/lsat/lsat-lessons/logical-reasoning/a/logical-reasoning--article--question-type-catalog, 2019. Accessed Sept. 16, 2019.
195
+ Johan Bos and Katja Markert. Recognising textual entailment with logical inference. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pp. 628-635. Association for Computational Linguistics, 2005.
196
+ Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 632-642, 2015.
197
+ Michael Bugert, Yevgeniy Puzikov, Andreas Rückle, Judith Eckle-Kohler, Teresa Martin, Eugenio Martínez-Cármara, Daniil Sorokin, Maxime Peyrard, and Iryna Gurevych. Lsdsem 2017: Exploring data generation methods for the story cloze test. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, pp. 56-61, 2017.
198
+ Zheng Cai, Lifu Tu, and Kevin Gimpel. Pay attention to the ending: Strong neural baselines for the roc story cloze task. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 616-622, 2017.
199
+ Jamie Callan, Mark Hoy, Changkuk Yoo, and Le Zhao. Clueweb09 data set, 2009.
200
+ Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018.
201
+ Cleo Condoravdi, Dick Crouch, Valeria De Paiva, Reinhard Stolle, and Daniel G Bobrow. Entailment, intensionality and text understanding. In Proceedings of the HLT-NAACL 2003 workshop on Text meaning, pp. 38-45, 2003.
202
+ Law School Admission Council. https://www.lsac.org/lsat/taking-lsat/test-format/logical-reasoning, 2019a. Accessed Sept. 16, 2019.
203
+ Law School Admission Council. https://www.lsac.org/lsat/taking-lsat/test-format/logical-reasoning/logical-reasoning-sample-questions, 2019b. Accessed Sept. 16, 2019.
204
+ Ido Dagan, Oren Glickman, and Bernardo Magnini. The pascal recognising textual entailment challenge. In Machine Learning Challenges Workshop, pp. 177-190. Springer, 2005.
205
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171-4186, 2019.
206
+ Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of NAACL-HLT, pp. 2368-2378, 2019.
207
+ Yaroslav Fyodorov, Yoad Winter, and Nissim Francez. A natural logic inference system. In Proceedings of the 2nd Workshop on Inference in Computational Semantics (ICoS-2). CiteSeer, 2000.
208
+
209
+ Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R Bowman, and Noah A Smith. Annotation artifacts in natural language inference data. arXiv preprint arXiv:1803.02324, 2018.
210
+ Ivan Habernal, Henning Wachsmuth, Iryna Gurevych, and Benno Stein. The argument reasoning comprehension task: Identification and reconstruction of implicit warrants. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1930-1940, 2018.
211
+ Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 328-339, 2018.
212
+ Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Cosmos qa: Machine reading comprehension with contextual commonsense reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 2391-2401, 2019.
213
+ Di Jin, Shuyang Gao, Jiun-Yu Kao, Tagyoung Chung, and Dilek Hakkani-tur. Mmm: Multi-stage multi-task learning for multi-choice reading comprehension. arXiv preprint arXiv:1910.00458, 2019.
214
+ Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pp. 427-431. Association for Computational Linguistics, April 2017.
215
+ Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 252-262, 2018.
216
+ Tomáš Kočisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317-328, 2018.
217
+ Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683, 2017.
218
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
219
+ Bill MacCartney and Christopher D Manning. An extended model of natural logic. In Proceedings of the eighth international conference on computational semantics, pp. 140-156. Association for Computational Linguistics, 2009.
220
+ Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018.
221
+ Sewon Min, Minjoon Seo, and Hannaneh Hajishirzi. Question answering through transfer learning from large fine-grained supervision data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 510-517, 2017.
222
+ Timothy Niven and Hung-Yu Kao. Probing neural network comprehension of natural language arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 4658-4664, 2019.
223
+ Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. English gigaword fifth edition, 2011.
224
+
225
+ Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 1532-1543, 2014.
226
+ Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pp. 180-191, 2018.
227
+ Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf, 2018.
228
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 2019.
229
+ Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
230
+ Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don't know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822, 2018.
231
+ Matthew Richardson, Christopher JC Burges, and Erin Renshaw. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 193-203, 2013.
232
+ Alvaro Rodrigo, Anselmo Penas, Yusuke Miyao, Eduard H Hovy, and Noriko Kando. Overview of clef qa entrance exams task 2015. In CLEF (Working Notes), 2015.
233
+ Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila Zilles, Yejin Choi, and Noah A Smith. Story cloze task: Uw nlp system. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, pp. 52-55, 2017.
234
+ Hideyuki Shibuki, Kotaro Sakamoto, Yoshinobu Kano, Teruko Mitamura, Madoka Ishioroshi, Kelly Y Itakura, Di Wang, Tatsunori Mori, and Noriko Kando. Overview of the ntcir-11 qa-lab task. In Ntcir, 2014.
235
+ Saku Sugawara and Akiko Aizawa. An analysis of prerequisite skills for reading comprehension. In Proceedings of the Workshop on Uphill Battles in Language Processing: Scaling Early Achievements to Robust Methods, pp. 1-5, 2016.
236
+ Saku Sugawara, Kentaro Inui, Satoshi Sekine, and Akiko Aizawa. What makes reading comprehension questions easier? In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 4208-4219, 2018.
237
+ Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. Dream: A challenge data set and models for dialogue-based reading comprehension. Transactions of the Association for Computational Linguistics, 7:217-231, 2019.
238
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017.
239
+ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 353-355, 2018.
240
+ Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. Constructing datasets for multi-hop reading comprehension across documents. Transactions of the Association for Computational Linguistics, 6:287-302, 2018.
241
+ Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1112–1122, 2018.
242
+
243
+ Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
244
+ Deshraj Yadav, Rishabh Jain, Harsh Agrawal, Prithvijit Chattopadhyay, Taranjeet Singh, Akash Jain, Shiv Baran Singh, Stefan Lee, and Dhruv Batra. Evalai: Towards better evaluation systems for ai agents. arXiv preprint arXiv:1902.03570, 2019.
245
+ Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019.
246
+ Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019.
247
+ Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. Record: Bridging the gap between human and machine commonsense reading comprehension. arXiv preprint arXiv:1810.12885, 2018.
248
+ Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pp. 19-27, 2015.
249
+
250
+ # A BASELINE MODELS
251
+
252
+ fastText. FastText (Joulin et al., 2017) models sentences as a bag of n-grams, and tries to predict the probability of each answer being correct independently. We choose the answer with the highest score as the prediction for the multiple-choice setting.
253
+
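+ A minimal sketch of this scoring scheme with the fasttext Python library (the training-file name and the binary correct/wrong label scheme are illustrative assumptions):
+ 
+ ```python
+ import fasttext
+ 
+ # Illustrative: each line of train.txt is
+ # "__label__correct <context> <question> <option>" or "__label__wrong ...".
+ model = fasttext.train_supervised("train.txt")
+ 
+ def predict_answer(context, question, options):
+     def p_correct(opt):
+         labels, probs = model.predict(f"{context} {question} {opt}")
+         return probs[0] if labels[0] == "__label__correct" else 1.0 - probs[0]
+     # Score every option independently; return the index of the best one.
+     return max(range(len(options)), key=lambda i: p_correct(options[i]))
+ ```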
254
+ LSTM sentence encoder. A two-layer bi-LSTM is randomly initialized as a sentence encoder with GloVe word embeddings (Pennington et al., 2014). With a span of text as input, the hidden states of the second layer are max-pooled and then fed into a fully connected layer to compute the output score.
255
+
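+ A PyTorch sketch of this baseline (the hidden size is illustrative; `glove_weights` is assumed to be a pre-loaded embedding matrix):
+ 
+ ```python
+ import torch.nn as nn
+ 
+ class BiLSTMScorer(nn.Module):
+     """Sketch: 2-layer Bi-LSTM over GloVe embeddings, max-pooled over
+     time steps, then one fully connected layer giving a scalar score."""
+     def __init__(self, glove_weights, hidden=300):
+         super().__init__()
+         self.embed = nn.Embedding.from_pretrained(glove_weights)
+         self.lstm = nn.LSTM(glove_weights.size(1), hidden, num_layers=2,
+                             bidirectional=True, batch_first=True)
+         self.fc = nn.Linear(2 * hidden, 1)
+ 
+     def forward(self, token_ids):                 # (batch, seq_len)
+         states, _ = self.lstm(self.embed(token_ids))
+         pooled, _ = states.max(dim=1)             # max-pool over time
+         return self.fc(pooled).squeeze(-1)        # one score per sequence
+ ```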
256
+ GPT and GPT-2. GPT (Radford et al., 2018) and GPT-2 (Radford et al., 2019) are both transformer-based (Vaswani et al., 2017) models pre-trained in an unsupervised manner with a standard language modeling objective. GPT is pre-trained on BooksCorpus, while GPT-2 is pre-trained on a larger dataset called WebText. Here we use the smallest model proposed in Radford et al. (2019) as our GPT-2 baseline. To fine-tune on ReClor, the final hidden vector corresponding to the last input token (_classify_) is used as the aggregate representation, followed by an extra fully connected layer to compute the score.
257
+
258
+ BERT. BERT (Devlin et al., 2019) is also a transformer-based (Vaswani et al., 2017) model, trained on BooksCorpus (Zhu et al., 2015) and English Wikipedia with two unsupervised tasks, i.e., Masked LM (MLM) and Next Sentence Prediction (NSP). During fine-tuning, the final hidden vector corresponding to the first input token ([CLS]) is used as the aggregate representation, followed by two extra fully connected layers to compute the score.
259
+
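+ The transformer baselines share this "aggregate vector to score" design; a PyTorch sketch (the hidden size and Tanh activation are illustrative choices, not necessarily those of our implementation):
+ 
+ ```python
+ import torch.nn as nn
+ 
+ class ClsScoringHead(nn.Module):
+     """Sketch: map each [CLS] vector (one per answer option) to a scalar
+     score with two fully connected layers, then group the four options."""
+     def __init__(self, hidden_size=768):
+         super().__init__()
+         self.head = nn.Sequential(nn.Linear(hidden_size, hidden_size),
+                                   nn.Tanh(),
+                                   nn.Linear(hidden_size, 1))
+ 
+     def forward(self, cls_vectors):        # (batch * 4, hidden_size)
+         scores = self.head(cls_vectors)    # one score per option
+         return scores.view(-1, 4)          # (batch, 4) logits over options
+ ```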
260
+ XLNet. XLNet (Yang et al., 2019) is trained with Permutation Language Modeling and without NSP. In addition, besides the BooksCorpus and English Wikipedia used in BERT, it uses Giga5 (Parker et al., 2011), ClueWeb 2012-B (extended from Callan et al. (2009)), and Common Crawl (com, 2019) for pre-training. We use the final hidden vector corresponding to the last input token <cls> as the aggregate representation and introduce two fully connected layers to predict the score.
261
+
262
+ RoBERTa. RoBERTa (Liu et al., 2019) improves the pre-training procedure of BERT by training the model longer, with bigger batches over more data, and by removing the NSP objective. Two extra fully connected layers are added to transform the final hidden vector of the first input token (<s>) into the score.
263
+
264
+ The input format of different models is shown in Table 8.
265
+
266
+ <table><tr><td>Model</td><td>Input Format</td></tr><tr><td>GPT (Radford et al., 2018)</td><td>_start_ Context _delimiter_ Question || Option _classify_</td></tr><tr><td>GPT-2 (Radford et al., 2019)</td><td>_start_ Context _delimiter_ Question || Option _classify_</td></tr><tr><td>BERT (Devlin et al., 2019)</td><td>[CLS] Context [SEP] Question || Option [SEP] [PAD]...</td></tr><tr><td>XLNet (Yang et al., 2019)</td><td>&lt;pad&gt;... Context &lt;sep&gt; Question || Option &lt;sep&gt;&lt;cls&gt;</td></tr><tr><td>RoBERTa (Liu et al., 2019)</td><td>&lt;s&gt; Context &lt;/s&gt;&lt;/s&gt; Question || Option &lt;/s&gt;&lt;pad&gt;...</td></tr></table>
267
+
268
+ Table 8: Input formats of different models. Context, Question and Option represent the token sequences of the context, question and option respectively, and $||$ denotes concatenation.
269
+
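+ For illustration, a sketch of how the BERT row of Table 8 is assembled for one answer option (a real implementation would rely on the tokenizer; recall that || denotes plain concatenation, not a literal token):
+ 
+ ```python
+ def bert_option_input(context, question, option, max_len=256):
+     # [CLS] Context [SEP] Question || Option [SEP] [PAD]...  (Table 8)
+     tokens = (["[CLS]"] + context.split() + ["[SEP]"]
+               + question.split() + option.split() + ["[SEP]"])
+     tokens += ["[PAD]"] * max(0, max_len - len(tokens))  # pad to max length
+     return tokens[:max_len]                              # or truncate
+ ```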
270
+ # B IMPLEMENTATION DETAIL
271
+
272
+ Adam is used as the optimizer for all models. For fastText, we use its Python library<sup>6</sup> by converting ReClor to the required format, and keep the default hyperparameters. For Bi-LSTM, we use a two-layer bidirectional LSTM with 300d GloVe word embeddings (Pennington et al., 2014), followed by max-pooling and a fully connected layer. We train the model for 100 epochs with a batch size of 64 and a learning rate of 0.1, decaying the learning rate by 0.5 every 10 epochs. For the pre-trained models, we modify the Transformers code of Hugging Face<sup>7</sup> to implement them on ReClor. We use a batch size of 24 and fine-tune for 10 epochs. The maximum input sequence length for all models is 256. The detailed hyperparameters are shown in Table 9.
273
+
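+ As an illustrative sketch of the Table 9 setup for BERT<sub>BASE</sub> with Hugging Face Transformers (`train_loader` is assumed to yield the batched multiple-choice examples; this mirrors, but is not verbatim, our training script):
+ 
+ ```python
+ import torch
+ from transformers import (BertForMultipleChoice,
+                           get_linear_schedule_with_warmup)
+ 
+ model = BertForMultipleChoice.from_pretrained("bert-base-uncased")
+ optimizer = torch.optim.Adam(model.parameters(), lr=2e-5,
+                              betas=(0.9, 0.999), eps=1e-6,
+                              weight_decay=0.0)      # BERT column of Table 9
+ num_steps = len(train_loader) * 10                  # 10 epochs
+ scheduler = get_linear_schedule_with_warmup(
+     optimizer,
+     num_warmup_steps=int(0.1 * num_steps),          # warm-up proportion 0.1
+     num_training_steps=num_steps)                   # linear decay
+ ```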
274
+ <table><tr><td>HYPERPARAM</td><td>GPT</td><td>GPT-2</td><td>BERTBASE</td><td>BERTLARGE</td><td>XLNetBASE</td><td>XLNetLARGE</td><td>RoBERTaBASE</td><td>RoBERTaLARGE</td></tr><tr><td>Learning Rate</td><td>6.25e-5</td><td>6.25e-5</td><td>2e-5</td><td>2e-5</td><td>2e-5</td><td>2e-5</td><td>1e-5</td><td>1e-5</td></tr><tr><td>Batch Size</td><td colspan="8">24</td></tr><tr><td>Max Seq Length</td><td colspan="8">256</td></tr><tr><td>Learning Rate Decay</td><td colspan="8">Linear</td></tr><tr><td>Number of Epochs</td><td colspan="8">10</td></tr><tr><td>Warm-up Proportion</td><td colspan="8">0.1</td></tr><tr><td>Weight Decay</td><td>0.01</td><td>0.01</td><td>0.0</td><td>0.0</td><td>0.01</td><td>0.01</td><td>0.01</td><td>0.01</td></tr><tr><td>Adam Epsilon</td><td>1e-8</td><td>1e-8</td><td>1e-6</td><td>1e-6</td><td>1e-6</td><td>1e-6</td><td>1e-6</td><td>1e-6</td></tr><tr><td>Adam Betas</td><td>(0.9, 0.999)</td><td>(0.9, 0.999)</td><td>(0.9, 0.999)</td><td>(0.9, 0.999)</td><td>(0.9, 0.999)</td><td>(0.9, 0.999)</td><td>(0.9, 0.98)</td><td>(0.9, 0.98)</td></tr><tr><td>Clip Grad Norm</td><td colspan="8">Not applied</td></tr></table>
275
+
276
+ Table 9: Hyperparameters for fine-tuning pre-trained language models on ReClor
277
+
278
+ # C EXAMPLES
279
+
280
+ # Type: Necessary Assumptions
281
+
282
+ Definition: identify the claim that must be true or is required in order for the argument to work
283
+
284
+ # Context:
285
+
286
+ Slash-and-burn agriculture involves burning several acres of forest, leaving vegetable ash that provides ample fertilizer for three or four years of bountiful crops. On the cleared land nutrients leach out of the soil, however, and the land becomes too poor to support agriculture. New land is then cleared by burning and the process starts again. Since most farming in the tropics uses this method, forests in this region will eventually be permanently eradicated.
287
+
288
+ Question: The argument depends on the assumption that
289
+
290
+ # Options:
291
+
292
+ A. forests in the tropics do not regenerate well enough to restore themselves once they have been cleared by the slash-and-burn method
293
+ B. some other methods of agriculture are not as destructive to the environment in tropical regions as the slash-and-burn method is
294
+ C. forests in the tropics are naturally deficient in nutrients that are needed to support the growth of plants that are not native to those regions
295
+ D. slash-and-burn agriculture is particularly suitable for farming in tropical areas
296
+
297
+ Answer: A
298
+
299
+ Table 10: The definition and an example of the logical reasoning type - Necessary Assumptions
300
+
301
+ <table><tr><td>Type: Sufficient Assumptions
302
+ Definition: identify a sufficient assumption, that is, an assumption that, if added to the argument, would make it logically valid</td></tr><tr><td>Context:
303
+ Geologist: A new method for forecasting earthquakes has reliably predicted several earthquakes. Unfortunately, this method can predict only that an earthquake will fall somewhere within a range of two and a half points on the Richter scale. Thus, since a difference of two and a half points can be the difference between a marginally perceptible shaking and a quake that causes considerable damage, the new method is unlikely to be useful.
304
+ Question: Which one of the following, if assumed, enables the geologist&#x27;s conclusion to be properly inferred?
305
+ Options:
306
+ A. An earthquake-forecasting method is unlikely to be useful unless its predictions always differentiate earthquakes that are barely noticeable from ones that result in substantial destruction.
307
+ B. Several well-established methods for forecasting earthquakes can predict within much narrower ranges than two and a half points on the Richter scale.
308
+ C. Even if an earthquake-forecasting method makes predictions within a very narrow range on the Richter scale, this method is not likely to be useful unless its predictions are reliable.
309
+ D. An earthquake-forecasting method has not been shown to be useful until it has been used to reliably predict a large number of earthquakes.
310
+ Answer: A</td></tr></table>
311
+
312
+ Table 11: The definition and an example of the logical reasoning type - Sufficient Assumptions
313
+
314
+ <table><tr><td>Type: Strengthen
315
+ Definition: identify information that would strengthen an argument</td></tr><tr><td>Context:
316
+ Financial success does not guarantee happiness. This claim is not mere proverbial wisdom but a fact verified by statistics. In a recently concluded survey, only one-third of the respondents who claimed to have achieved financial success reported that they were happy.
317
+ Question: Which one of the following, if true, most strongly supports the conclusion drawn from the survey results?
318
+ Options:
319
+ A. Most of the respondents who reported they were unhappy were in fact happy.
320
+ B. The respondents who reported financial success were, for the most part, financially successful.
321
+ C. Many of the respondents who claimed not to have achieved financial success reported that they were happy five years ago.
322
+ D. Many of the respondents who failed to report financial success were in fact financially successful.
323
+ Answer: B</td></tr></table>
324
+
325
+ Table 12: The definition and an example of the logical reasoning type - Strengthen
326
+
327
+ <table><tr><td>Type: Weaken
328
+ Definition: identify information that would weaken an argument</td></tr><tr><td>Context:
329
+ “DNA fingerprinting” is a recently-introduced biochemical procedure that uses a pattern derived from a person&#x27;s genetic material to match a suspect&#x27;s genetic material against that of a specimen from a crime scene. Proponents have claimed astronomically high odds against obtaining a match by chance alone. These odds are based on an assumption that there is independence between the different characteristics represented by a single pattern.
330
+ Question: Which one of the following, if true, casts the most doubt on the claim of the proponents of DNA fingerprinting?
331
+ Options:
332
+ A. The skill required of laboratory technicians performing the DNA fingerprinting procedure is not extraordinary.
333
+ B. There is a generally accepted theoretical basis for interpreting the patterns produced by the procedure.
334
+ C. In the whole population there are various different subgroups, within each of which certain sets of genetic characteristics are shared.
335
+ D. In the investigation of certain genetic diseases, the techniques used in DNA fingerprinting have traced the transmission of the diseases among the living members of very large families.
336
+ Answer: C</td></tr></table>
337
+
338
+ Table 13: The definition and an example of the logical reasoning type - Weaken
339
+
340
+ <table><tr><td>Type: Evaluation
341
+ Definition: identify information that would be useful to know to evaluate an argument</td></tr><tr><td>Context:
342
+ George: Some scientists say that global warming will occur because people are releasing large amounts of carbon dioxide into the atmosphere by burning trees and fossil fuels. We can see, though, that the predicted warming is occurring already. In the middle of last winter, we had a month of springlike weather in our area, and this fall, because of unusually mild temperatures, the leaves on our town&#x27;s trees were three weeks late in turning color.
343
+ Question: Which one of the following would it be most relevant to investigate in evaluating the conclusion of George&#x27;s argument?
344
+ Options:
345
+ A. whether air pollution is causing some trees in the area to lose their leaves
346
+ B. what proportion of global emissions of carbon dioxide is due to the burning of trees by humans
347
+ C. whether unusually warm weather is occurring elsewhere on the globe more frequently than before
348
+ D. when leaves on the trees in the town usually change color
349
+ Answer: C</td></tr></table>
350
+
351
+ Table 14: The definition and an example of the logical reasoning type - Evaluation
352
+
353
+ # Type: Implication
354
+
355
+ Definition: identify something that follows logically from a set of premises
356
+
357
+ # Context:
358
+
359
+ To be horrific, a monster must be threatening. Whether or not it presents psychological, moral or social dangers, or triggers enduring infantile fears, if a monster is physically dangerous then it is threatening. In fact, even a physically benign monster is horrific if it inspires revulsion.
360
+
361
+ Question: Which one of the following logically follows from the statements above?
362
+
363
+ # Options:
364
+
365
+ A. Any horror-story monster that is threatening is also horrific.
366
+ B. If a monster triggers infantile fears but is not physically dangerous, then it is not horrific.
367
+ C. All monsters that are not physically dangerous, but that are psychologically dangerous and inspire revulsion, are threatening.
368
+ D. If a monster is both horrific and psychologically threatening, then it does not inspire revulsion.
369
+ Answer: C
370
+
371
+ Table 15: The definition and an example of the logical reasoning type - Implication
372
+
373
+ <table><tr><td>Type: Conclusion/Main Point
374
+ Definition: identify the conclusion/main point of a line of reasoning</td></tr><tr><td>Context:
375
+ Whether or not one can rightfully call a person&#x27;s faithfulness a virtue depends in part on the object of that person&#x27;s faithfulness. Virtues are by definition praiseworthy, which is why no one considers resentment virtuous, even though it is in fact a kind of faithfulness – faithfulness to hatreds or animosities.
376
+ Question: Which one of the following most accurately expresses the overall conclusion drawn in the argument?
377
+ Options:
378
+ A. The object of a person&#x27;s faithfulness partially determines whether or not that faithfulness is virtuous.
379
+ B. Virtuous behavior is praiseworthy by definition.
380
+ C. Resentment should not be considered a virtuous emotion.
381
+ D. Behavior that emerges from hatred or animosity cannot be called virtuous.
382
+ Answer: A</td></tr></table>
383
+
384
+ Table 16: The definition and an example of the logical reasoning type - Conclusion/Main Point
385
+
386
+ <table><tr><td>Type: Most Strongly Supported
387
+ Definition: find the choice that is most strongly supported by a stimulus</td></tr><tr><td>Context:
388
+ After a nuclear power plant accident, researchers found radioactive isotopes of iodine, tellurium, and cesium-but no heavy isotopes-in the atmosphere downwind. This material came either from spent fuel rods or from the plant&#x27;s core. Spent fuel rods never contain significant quantities of tellurium isotopes. Radioactive material ejected into the atmosphere directly from the core would include heavy isotopes. After the accident, steam, which may have been in contact with the core, was released from the plant. The core contains iodine, tellurium, and cesium isotopes, which are easily dissolved by steam.
389
+ Question:
390
+ Of the following statements, which one is most strongly supported by the information above?
391
+ Options:
392
+ A. The nuclear power plant&#x27;s spent fuel rods were not damaged.
393
+ B. Spent fuel rods do not contain heavy isotopes in significant quantities.
394
+ C. The researchers found some radioactive material from spent fuel rods as well as some material that was ejected into the atmosphere directly from the plant&#x27;s core.
395
+ D. The radioactive material detected by the researchers was carried into the atmosphere by the steam that was released from the plant.
396
+ Answer: D</td></tr></table>
397
+
398
+ Table 17: The definition and an example of the logical reasoning type - Most Strongly Supported
399
+
400
+ <table><tr><td>Type: Explain or Resolve
401
+ Definition: identify information that would explain or resolve a situation</td></tr><tr><td>Context:
402
+ To reduce the mosquito population in a resort area, hundreds of trees were planted that bear fruit attractive to birds. Over the years, as the trees matured, they attracted a variety of bird species and greatly increased the summer bird population in the area. As expected, the birds ate many mosquitoes. However, the planting of the fruit trees had the very opposite of its intended effect.
403
+ Question:
404
+ Which one of the following, if true, most helps to explain the apparently paradoxical result?
405
+ Options:
406
+ A. Most of the species of birds that were attracted by the trees that were planted did not eat mosquitoes.
407
+ B. Increases and decreases in mosquito populations tend to follow a cyclical pattern.
408
+ C. The species of birds that were attracted in the greatest number by the fruit of the trees that were planted did not eat mosquitoes.
409
+ D. The birds attracted to the area by the trees ate many more insects that prey on mosquitoes than they did mosquitoes.
410
+ Answer: D</td></tr></table>
411
+
412
+ Table 18: The definition and an example of the logical reasoning type - Explain or Resolve
413
+
414
+ <table><tr><td>Type: Principle
415
+ Definition: identify the principle, or find a situation that conforms to a principle, or match the principles</td></tr><tr><td>Context:
416
+ Buying elaborate screen savers – programs that put moving images on a computer monitor to prevent damage – can cost a company far more in employee time than it saves in electricity and monitor protection.
417
+ Employees cannot resist spending time playing with screen savers that flash interesting graphics across their screens.
418
+ Question:
419
+ Which one of the following most closely conforms to the principle illustrated above?
420
+ Options:
421
+ A. An electronic keyboard may be cheaper to buy than a piano but more expensive to repair.
422
+ B. An energy-efficient insulation system may cost more up front but will ultimately save money over the life of the house.
423
+ C. The time that it takes to have a pizza delivered may be longer than it takes to cook a complete dinner.
424
+ D. A complicated hotel security system may cost more in customer goodwill than it saves in losses by theft.
425
+ Answer: D</td></tr></table>
426
+
427
+ Table 19: The definition and an example of the logical reasoning type - Principle
428
+
429
+ <table><tr><td>Type: Dispute
430
+ Definition: identify or infer an issue in dispute</td></tr><tr><td>Context:
431
+ Raphaela: Forcing people to help others is morally wrong. Therefore, no government has the right to redistribute resources via taxation. Anyone who wants can help others voluntarily. Edward: Governments do have that right, insofar as they give people the freedom to leave and hence not to live under their authority.
432
+ Question:
433
+ Raphaela and Edward disagree about the truth of which one of the following?
434
+ Options:
435
+ A. Any government that forces people to help others should permit emigration.
436
+ B. Any government that permits emigration has the right to redistribute resources via taxation.
437
+ C. Any government that redistributes resources via taxation forces people to help others.
438
+ D. Every government should allow people to help others voluntarily.
439
+ Answer: B</td></tr></table>
440
+
441
+ Table 20: The definition and an example of the logical reasoning type - Dispute
442
+
443
+ <table><tr><td>Type: Technique
444
+ Definition: identify the technique used in the reasoning of an argument</td></tr><tr><td>Context:
445
+ Joanna: The only way for a company to be successful, after emerging from bankruptcy, is to produce the same goods or services that it did before going bankrupt. It is futile for such a company to try to learn a whole new business. Ruth: Wrong. The Kelton Company was a major mining operation that went into bankruptcy. On emerging from bankruptcy, Kelton turned its mines into landfills and is presently a highly successful waste-management concern.
446
+ Question:
447
+ Ruth uses which one of the following argumentative techniques in countering Joanna&#x27;s argument?
448
+ Options:
449
+ A. She undermines a claim by showing that it rests on an ambiguity.
450
+ B. She offers an alternative explanation for a phenomenon.
451
+ C. She presents a counterexample to a claim.
452
+ D. She establishes a conclusion by excluding the only plausible alternative to that conclusion.
453
+ Answer: C</td></tr></table>
454
+
455
+ Table 21: The definition and an example of the logical reasoning type - Technique
456
+
457
+ <table><tr><td>Type: Role
458
+ Definition: describe the individual role that a statement is playing in a larger argument</td></tr><tr><td>Context:
459
+ The position that punishment should be proportional to how serious the offense is but that repeat offenders should receive harsher punishments than first-time offenders is unsustainable. It implies that considerations as remote as what an offender did years ago are relevant to the seriousness of an offense. If such remote considerations were relevant, almost every other consideration would be too. But this would make determining the seriousness of an offense so difficult that it would be impossible to apply the proportionality principle.
460
+ Question:
461
+ The statement that considerations as remote as what an offender did years ago are relevant to the seriousness of an offense plays which one of the following roles in the argument?
462
+ Options:
463
+ A. It is an allegedly untenable consequence of a view rejected in the argument&#x27;s overall conclusion.
464
+ B. It is a statement the argument provides grounds to accept and from which the overall conclusion is inferred.
465
+ C. It is the overall conclusion in favor of which the argument offers evidence.
466
+ D. It is a premise offered in support of an intermediate conclusion of the argument.
467
+ Answer: A</td></tr></table>
468
+
469
+ Table 22: The definition and an example of the logical reasoning type - Role
470
+
471
+ <table><tr><td>Type: Identify a Flaw
472
+ Definition: identify a flaw in an argument&#x27;s reasoning</td></tr><tr><td>Context:
473
+ The tidal range at a particular location is the difference in height between high tide and low tide. Tidal studies have shown that one of the greatest tidal ranges in the world is found in the Bay of Fundy and reaches more than seventeen meters. Since the only forces involved in inducing the tides are the sun&#x27;s and moon&#x27;s gravity, the magnitudes of tidal ranges also must be explained entirely by gravitational forces.
474
+ Question:
475
+ Which one of the following most accurately describes a flaw in the reasoning above?
476
+ Options:
477
+ A. It does not differentiate between the tidal effect of the sun and the tidal effect of the moon.
478
+ B. It fails to consider that the size of a tidal range could be affected by the conditions in which gravitational forces act.
479
+ C. It presumes, without providing warrant, that most activity within the world&#x27;s oceans is a result of an interplay of gravitational forces.
480
+ D. It gives only one example of a tidal range.
481
+ Answer: B</td></tr></table>
482
+
483
+ Table 23: The definition and an example of the logical reasoning type - Identify a Flaw
484
+
485
+ <table><tr><td>Type: Match Flaws
486
+ Definition: find a choice containing an argument that exhibits the same flaws as the passage&#x27;s argument</td></tr><tr><td>Context:
487
+ The museum&#x27;s night security guard maintains that the thieves who stole the portrait did not enter the museum at any point at or above ground level. Therefore, the thieves must have gained access to the museum from below ground level.
488
+ Question:
489
+ The flawed pattern of reasoning in the argument above is most similar to that in which one of the following?
490
+ Options:
491
+ A. As had generally been expected, not all questionnaires were sent in by the official deadline. It follows that plans must have been made for the processing of questionnaires received late.
492
+ B. The store&#x27;s competitors claim that the store, in selling off the shirts at those prices, neither made any profit nor broke even. Consequently, the store&#x27;s customers must have been able to buy shirts there at less than the store&#x27;s cost.
493
+ C. The product label establishes that this insecticide is safe for both humans and pets. Therefore, the insecticide must also be safe for such wild mammals as deer and rabbits.
494
+ D. If the census is to be believed, the percentage of men who are married is higher than the percentage of women who are married. Thus, the census must show a higher number of men than of women overall.
495
+ Answer: B</td></tr></table>
496
+
497
+ Table 24: The definition and an example of the logical reasoning type - Match Flaws
498
+
499
+ <table><tr><td>Type: Match the Structure
500
+ Definition: match the structure of an argument in a choice to the structure of the argument in the passage</td></tr><tr><td>Context:
501
+ It is an absurd idea that whatever artistic endeavor the government refuses to support it does not allow, as one can see by rephrasing the statement to read: No one is allowed to create art without a government subsidy.
502
+ Question:
503
+ The pattern of reasoning in which one of the following is most similar to that in the argument above?
504
+ Options:
505
+ A. The notion that every scientist who has been supported by a government grant will be successful is absurd, as one can see by rewording it:No scientist is allowed to do research without a government grant.
506
+ B. The notion that every scientist who is supported by a government grant will be successful is absurd, as one can see by rewording it:No scientist lacking governmental support will be successful.
507
+ C. The claim that any driver who is not arrested does not break the law is absurd, as one can see by rewording it: Every driver who gets arrested has broken the law.
508
+ D. The claim that any driver who is not arrested does not break the law is absurd, as one can see by rewording it: Every driver who breaks the law gets arrested.
509
+ Answer: D</td></tr></table>
510
+
511
+ Table 25: The definition and an example of the logical reasoning type - Match the Structure
512
+
513
+ <table><tr><td>Type: Others
514
+ Definition: other types of questions which are not included by the above</td></tr><tr><td>Context:
515
+ PhishCo runs a number of farms in the arid province of Nufa, depending largely on irrigation. Now, as part of a plan to efficiently increase the farms&#x27; total production, it plans to drill down to an aquifer containing warm, slightly salty water that will be used to raise fish in ponds. The water from the ponds will later be used to supplement piped-in irrigation water for PhishCo&#x27;s vegetable fields, and the ponds and accompanying vegetation should help reduce the heat in the area of the farms.
516
+ Question:
517
+ Which of the following would, if true, most strongly suggest that the plan, if implemented, would increase the overall efficiency of PhishCo&#x27;s farms?
518
+ Options:
519
+ A. Organic waste from fish in the pond water will help to fertilize fields where it is used for irrigation.
520
+ B. Fish raised on PhishCo&#x27;s farms are likely to be saleable in the nearest urban areas.
521
+ C. Ponds will be located on low-lying land now partially occupied by grain crops.
522
+ D. The government of Nufa will help to arrange loan financing to partially cover the costs of drilling.
523
+ Answer: A</td></tr></table>
524
+
525
+ Table 26: The definition and an example of the logical reasoning type - Others
526
+
527
+ # D CONSISTENCY OF DIFFERENT MODELS
528
+
529
+ <table><tr><td></td><td>GPT</td><td>GPT-2</td><td>BERTBASE</td><td>XLNetBASE</td><td>RoBERTaBASE</td></tr><tr><td>GPT</td><td>245</td><td>164</td><td>152</td><td>142</td><td>116</td></tr><tr><td>GPT-2</td><td></td><td>238</td><td>151</td><td>144</td><td>123</td></tr><tr><td>BERTBASE</td><td></td><td></td><td>234</td><td>138</td><td>124</td></tr><tr><td>XLNetBASE</td><td></td><td></td><td></td><td>225</td><td>125</td></tr><tr><td>RoBERTaBASE</td><td></td><td></td><td></td><td></td><td>200</td></tr></table>
530
+
531
+ Table 27: Overlap of each pair of models after intersection among 4 random seeds.
532
+
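+ Given the per-model sets of data points that are correct under all four seeds (the inner intersections of Eq. (1)), each entry of Table 27 is just the size of a pairwise intersection; a minimal sketch:
+ 
+ ```python
+ # always_right: {model_name: set of test indices correct under all 4 seeds}
+ def pairwise_overlap(always_right):
+     names = list(always_right)
+     return {(a, b): len(always_right[a] & always_right[b])
+             for a in names for b in names}
+ ```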
533
+ # E RESULTS WITH RESPECT TO DIFFERENT QUESTION TYPES
+ 
+ ![](images/e033bd3a65c7fd0f61f96ade401dc4dc0de9feb5a718478ade3b5e0fc193b175.jpg)
+ Figure 5: Accuracy of all baseline models on the overall testing set
536
+
537
+ ![](images/5a817b118038a31be7abff1132e616e11e1d069545bc9b0c40eeb9f26c7bb844.jpg)
538
+ Figure 6: Accuracy of all baseline models on the EASY set of the testing set
539
+
540
+ ![](images/2aa756b598de563d0f5013e8cfa6d56e61cb841e9c250b92fe7eeffb2462fe53.jpg)
541
+ Figure 7: Accuracy of all baseline models on the HARD set of the testing set
542
+
543
+ ![](images/673e7efe4f4a3106f81219b184fa27be738bfffb38ee6bf43eaab71a79ea1291.jpg)
544
+
545
+ ![](images/9ae05e524d454e5145bfa35cd7b5f3815acb61ea5b8f47b0216ca27dfb787e0e.jpg)
546
+
547
+ ![](images/354802a57187569447058b71a7a3e37a7fc4f8ccadc05a575794611232aa6508.jpg)
548
+ Figure 8: Performance of $\mathrm{BERT}_{\mathrm{LARGE}}$ (top) and $\mathrm{RoBERTa}_{\mathrm{LARGE}}$ (bottom) on the EASY (left) and HARD (right) testing sets.
549
+
550
+ ![](images/0b7e82a764092b2ee10568389ef1bf3115784f8e4675e63a9662bc1ba4bab79a.jpg)
reclorareadingcomprehensiondatasetrequiringlogicalreasoning/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c6d7dbd32cc177b3a45d4778b69359e5698cca5b677e5c3b44733b12756f0d04
3
+ size 3261307
reclorareadingcomprehensiondatasetrequiringlogicalreasoning/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:680c6cb849c8e1589ea8f50c6adc3e177cc0254dba5bc9d275bc4a3285671d88
3
+ size 497396
recurrentneuralcircuitsforcontourdetection/9e1ea1e6-817b-42db-bc4d-a7adb4bf552f_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9372dbfdced7a12d76a9f19d0b796f6a73bdf1e3514d47c8dd9ad68992c9f56a
3
+ size 127704
recurrentneuralcircuitsforcontourdetection/9e1ea1e6-817b-42db-bc4d-a7adb4bf552f_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d9da5421f7fa18ab5462648d2f7756937e2adda9f5369524afc161da31660999
3
+ size 150615
recurrentneuralcircuitsforcontourdetection/9e1ea1e6-817b-42db-bc4d-a7adb4bf552f_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ddf56abcfc960980498d459e38ea58fc2b265ea80e6c87071489eb4e0cde54ab
3
+ size 11951037
recurrentneuralcircuitsforcontourdetection/full.md ADDED
@@ -0,0 +1,483 @@
 
 
 
 
1
+ # RECURRENT NEURAL CIRCUITS FOR CONTOUR DETECTION
2
+
3
+ Drew Linsley†, Junkyung Kim†‡, Alekh Ashok & Thomas Serre
4
+
5
+ Department of Cognitive, Linguistic and Psychological Sciences Brown University
6
+
7
+ Providence, RI 02912, USA
8
+
9
+ {drew_linsley, alekh_ashok, thomas_serre}@brown.edu
10
+
11
+ junkyung@google.com
12
+
13
+ # ABSTRACT
14
+
15
+ We introduce a deep recurrent neural network architecture that approximates known visual cortical circuits (Mély et al., 2018). We show that this architecture, which we refer to as the $\gamma$ -Net, learns to solve contour detection tasks with better sample efficiency than state-of-the-art feedforward networks, while also exhibiting a classic perceptual illusion, known as the orientation-tilt illusion. Correcting this illusion significantly reduces $\gamma$ -Net contour detection accuracy by driving it to prefer low-level edges over high-level object boundary contours. Overall, our study suggests that the orientation-tilt illusion is a byproduct of neural circuits that help biological visual systems achieve robust and efficient contour detection, and that incorporating such circuits in artificial neural networks can improve computer vision.
16
+
17
+ # 1 INTRODUCTION
18
+
19
+ An open debate since the inception of vision science concerns why we experience visual illusions. Consider the class of "contextual" illusions, where the perceived qualities of an image region, such as its orientation or color, are biased by the qualities of surrounding image regions. A well-studied contextual illusion is the orientation-tilt illusion depicted in Fig. 1a, where perception of the central grating's orientation is influenced by the orientation of the surrounding grating (O'Toole & Wenderoth, 1977). When the two orientations are similar, the central grating appears tilted slightly away from the surround (Fig. 1a, top). When the two orientations are dissimilar, the central grating appears tilted slightly towards the surround (Fig. 1a, bottom). Is the contextual bias of the orientation-tilt illusion a bug of biology or a byproduct of optimized neural computations?
20
+
21
+ Over the past 50 years, there have been a number of neural circuit mechanisms proposed to explain individual contextual illusions (reviewed in Mély et al., 2018). Recently, Mély et al. (2018) proposed a cortical circuit, constrained by the physiology of primate visual cortex (V1), that offers a unified explanation for contextual illusions across visual domains – from the orientation-tilt illusion to color induction. These illusions arise in the circuit from recurrent interactions between neural populations with receptive fields that tile visual space, leading to contextual (center/surround) effects. For the orientation-tilt illusion, neural populations encoding the surrounding grating can either suppress or facilitate the activity of neural populations encoding the central grating, leading to repulsion vs. attraction, respectively. These surround neural populations compete to influence encodings of the central grating: suppression predominates when center/surround are similar, and facilitation predominates when center/surround are dissimilar.
22
+
23
+ The neural circuit of Mély et al. (2018) explains how contextual illusions might emerge, but it does not explain why. One possibility is that contextual illusions like the orientation-tilt illusion are “bugs”: vestiges of evolution or biological constraints on the neural hardware. Another possibility is that contextual illusions are the by-product of efficient neural routines for scene segmentation (Keemink & van Rossum, 2016; Mély et al., 2018). Here, we provide computational evidence for the latter
24
+
25
+ possibility and demonstrate that the orientation-tilt illusion reflects neural strategies optimized for object contour detection.
26
+
27
+ Contributions We introduce the $\gamma$ -Net, a trainable and hierarchical extension of the neural circuit of Mély et al. (2018), which explains contextual illusions. (i) The $\gamma$ -Net is more sample efficient than state-of-the-art convolutional architectures on two separate contour detection tasks. (ii) Similar to humans but not state-of-the-art contour detection models, the $\gamma$ -Net exhibits an orientation-tilt illusion after being optimized for contour detection. This illusion emerges from its preference for high-level object-boundary contours over low-level edges, indicating that neural circuits involved in contextual illusions also support sample-efficient solutions to contour detection tasks.
28
+
29
+ # 2 RELATED WORK
30
+
31
+ Modeling the visual system Convolutional neural networks (CNNs) are often considered the de facto "standard model" of vision. CNNs and their extensions represent the state of the art for most computer vision applications with performance approaching – and sometimes exceeding – human observers on certain visual recognition tasks (He et al., 2016; Lee et al., 2017; Phillips et al., 2018). CNNs also provide the best fit to rapid neural responses in the visual cortex (see Kriegeskorte 2015; Yamins & DiCarlo 2016 for reviews). Nevertheless, multiple lines of evidence suggest that biological vision is still far more robust and versatile than CNNs (see Serre, 2019, for a recent review). CNNs suffer from occlusions and clutter (Fyall et al., 2017; Rosenfeld et al., 2018; Tang et al., 2018). They are also sample inefficient at learning visual relations (Kim et al., 2018) and solving simple grouping tasks (Linsley et al., 2018c). State-of-the-art CNNs require massive datasets to reach their impressive accuracy (Lake et al., 2015) and their ability to generalize beyond training data is limited (Geirhos et al., 2018; Recht et al., 2018).
32
+
33
+ Cortical feedback contributes to the robustness of biological vision (Hochstein & Ahissar, 2002; Wyatte et al., 2014; Kafaligonul et al., 2015). Feedforward projections in the visual system are almost always matched by feedback projections (Felleman & Van Essen, 1991), and feedback has been implicated in visual "routines" that cannot be implemented through purely feedforward vision, such as incremental grouping or filling-in (O'Reilly et al., 2013; Roelfsema, 2006). There is also a growing body of work demonstrating the potential of recurrent neural networks (RNNs) to account for neural recordings (Fyall et al., 2017; Klink et al., 2017; Siegel et al., 2015; Tang et al., 2018; Nayebi et al., 2018; Kar et al., 2019; Kietzmann et al., 2019).
34
+
35
+ Feedback for computer vision In contrast to CNNs, which build processing depth through a cascade of filtering and pooling stages with unique weights, RNNs process stimuli with filtering stages that reuse weights over "timesteps" of recurrence. On each discrete processing timestep, an RNN updates its hidden state through a nonlinear combination of an input and its hidden state from the previous timestep. RNNs have been extended from their roots in sequence processing (e.g., Mozer 1992) to computer vision by computing the activity of RNN units through convolutional kernels. The common interpretation of these convolutional-RNNs is that the input to each layer functions as a (fixed) feedforward drive, which is combined with layer-specific feedback from an evolving hidden state to dynamically adjust layer activity (Linsley et al., 2018c; George et al., 2017; Lotter et al., 2016; Wen et al., 2018; Liao & Poggio, 2016; Spoerer et al., 2017; Nayebi et al., 2018; Tang et al., 2018). In the current work, we are motivated by a similar convolutional-RNN, the horizontal gated recurrent unit (hGRU, Linsley et al. 2018a), which approximates the recurrent neural circuit model of Mély et al. (2018) for explaining contextual illusions.
36
+
37
+ # 3 RECURRENT NEURAL MODELS
38
+
39
+ We begin by reviewing the dynamical neural circuit of Mély et al. (2018). This model explains contextual illusions by simulating interactions between cortical hypercolumns tiling the visual field (where hypercolumns describe a set of neurons encoding features for multiple visual domains at a single retinotopic position). In the model, hypercolumns are indexed by their 2D coordinate $(x,y)$ and feature channels $k$ . Units in hypercolumns encode idealized responses for a visual domain (e.g., neural responses from the orientation domain were used to simulate an orientation-tilt illusion;
40
+
41
+ ![](images/8546aa0d5d0828f11dda20ae000f116ec89863a009c27a5e86ffaa73bdcd17a2.jpg)
42
+ Figure 1: The orientation tilt-illusion (O'Toole & Wenderoth, 1977) is a contextual illusion where a central grating's perceived orientation is influenced by a surround grating's orientation. (a) When a central grating has a similar orientation as its surround, it is judged as tilting away from the surround (repulsion). When the two gratings have dissimilar orientations, the central grating is judged as tilting towards the surround (attraction). (b) We extend the recurrent circuit proposed by Mély et al. (2018) to explain this and other contextual illusions into a hierarchical model that learns horizontal (within a layer) and top-down (between layer) interactions between units. The circuit simulates dynamical suppressive $(\mathbf{H}_{xyk}^{(S)})$ and facilitative $(\mathbf{H}_{xyk}^{(F)})$ interactions between units in a layer $\ell$ , which receives feedforward drive from a center pathway encoding feature $k$ (e.g., edges oriented at $0^{\circ}$ or $22.5^{\circ}$ ) at position $(x, y)$ in an image. Blocks depict different layers, and arrowed connections denote top-down feedback. (c) A deep network schematic of the circuit diagram in (b), which forms the basis of the $\gamma$ -Net introduced here. Horizontal and top-down connections are implemented with feedback gated recurrent units (fGRUs). Image encodings pass through these blocks on every timestep, from bottom-up (left path) to top-down (right path), and predictions are read out from the fGRU closest to image resolution on the final timestep. This motif can be stacked to create a hierarchical model.
43
+
44
+ ![](images/a4c70486388c05ee13e494bc2323aeaf4c61b8c268c27b12eac5f039e6d388a9.jpg)
45
+ Fig. 1b). Dynamics of a single unit at $xyk$ obey the following equations (we bold activity tensors to distinguish them from learned kernels and parameters):
46
+
47
+ ![](images/cf6e2de9c75d5ad28e2cd44f92d52f6aac01e53a5273d13d6ec1ac83b45a7b24.jpg)
48
+
49
+ $$
50
+ \eta \dot{H}_{xyk}^{(S)} + \epsilon^{2} H_{xyk}^{(S)} = \left[ \xi Z_{xyk} - \left(\alpha H_{xyk}^{(F)} + \mu\right) C_{xyk}^{(S)} \right]_{+} \quad \# \text{Stage 1: Recurrent suppression of } Z
51
+ $$
52
+
53
+ $$
54
+ \tau \dot{H}_{xyk}^{(F)} + \sigma^{2} H_{xyk}^{(F)} = \left[ \nu C_{xyk}^{(F)} \right]_{+}, \quad \# \text{Stage 2: Recurrent facilitation of } \mathbf{H}^{(S)}
55
+ $$
56
+
57
+ where
58
+
59
+ $$
60
+ C_{xyk}^{(S)} = \left(W^{S} * \mathbf{H}^{(F)}\right)_{xyk} \quad \# \text{Compute suppression interactions}
61
+ $$
62
+
63
+ $$
64
+ C_{xyk}^{(F)} = \left(W^{F} * \mathbf{H}^{(S)}\right)_{xyk}. \quad \# \text{Compute facilitation interactions}
65
+ $$
66
+
67
+ Circuit activities consist of a feedforward drive, recurrent suppression, and recurrent facilitation, respectively denoted as $\mathbf{Z}$ , $\mathbf{H}^{(S)}$ , $\mathbf{H}^{(F)} \in \mathbb{R}^{X \times Y \times K}$ ( $X$ is width, $Y$ is height of the tensor, and $K$ is its feature channels)*. The circuit takes its "feedforward" input $\mathbf{Z}$ from hypercolumns (e.g., orientation encodings from hypercolumn units), and introduces recurrent suppressive and facilitatory interactions between units, $\mathbf{C}^{(S)}$ , $\mathbf{C}^{(F)} \in \mathbb{R}^{X \times Y \times K}$ (Fig. 1b). These interactions are implemented with separate kernels for suppression and facilitation, $W^{S}$ , $W^{F} \in \mathbb{R}^{E \times E \times K \times K}$ , where $E$ is the spatial extent of connections on a single timestep (connectivity in this model is constrained by primate physiology).
68
+
69
+ These interactions are implemented through convolutions, allowing them to serially spread over timesteps of processing to connect units positioned at different spatial locations. The circuit outputs $\mathbf{H}^{(F)}$ after reaching steady state.
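+ To make these dynamics concrete, the following is a toy Euler integration of the two-stage equations in Python (a sketch of ours, not the authors' released code; `conv_WS` and `conv_WF` are hypothetical callables standing in for the $W^{S}$ and $W^{F}$ convolutions, and the scalar parameters default to arbitrary values rather than the hand-tuned ones):
+
+ ```python
+ import numpy as np
+
+ def relu(x):
+     return np.maximum(x, 0.0)
+
+ def simulate(Z, conv_WS, conv_WF, eta=1.0, eps=1.0, tau=1.0, sigma=1.0,
+              xi=1.0, alpha=1.0, mu=0.0, nu=1.0, steps=100, dt=0.1):
+     """Euler-integrate the two-stage circuit ODEs toward steady state."""
+     HS = np.zeros_like(Z)  # suppression state H^(S)
+     HF = np.zeros_like(Z)  # facilitation state H^(F)
+     for _ in range(steps):
+         # Stage 1: recurrent suppression of the feedforward drive Z.
+         dHS = (relu(xi * Z - (alpha * HF + mu) * conv_WS(HF)) - eps**2 * HS) / eta
+         # Stage 2: recurrent facilitation of H^(S).
+         dHF = (relu(nu * conv_WF(HS)) - sigma**2 * HF) / tau
+         HS, HF = HS + dt * dHS, HF + dt * dHF
+     return HF  # the circuit output after (approximate) steady state
+ ```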
70
+
71
+ The circuit model of Mély et al. (2018) depends on competition between $\mathbf{H}^{(S)}$ and $\mathbf{H}^{(F)}$ to explain the orientation-tilt illusion. Competition is implemented by (i) computing suppression vs. facilitation in separate stages, and (ii) having non-negative activities, which enforces these functionally distinct processing stages. With these constraints in the circuit model, the strength of recurrent suppression – but not facilitation – multiplicatively increases with the net recurrent output. For the orientation-tilt illusion, suppression predominates when center and surround gratings have similar orientations. This causes encodings of the surround grating to “repulse” encodings of the center grating. On the other hand, facilitation predominates (causing “attraction”) when center and surround gratings have dissimilar orientations because it is additive and not directly scaled by the circuit output.
72
+
73
+ Parameters controlling the circuit's integration, suppression/facilitation, and patterns of horizontal connections between units are tuned by hand. Linear and multiplicative suppression (i.e., shunting inhibition) are controlled by scalars $\mu$ and $\alpha$ , feedforward drive is modulated by the scalar $\xi$ , and linear facilitation is controlled by the scalar $\nu$ . Circuit time constants are scalars denoted by $\eta$ , $\epsilon$ , $\tau$ and $\sigma$ . All activities are non-negative and both stages are linearly rectified (ReLU) $[\cdot]_{+} = \max(\cdot, 0)$ .
74
+
75
+ Feedback gated recurrent units Linsley et al. (2018a) developed a version of this circuit for computer vision applications, called the hGRU. In their formulation, the circuit's connectivity and parameters are fit to image datasets with gradient descent rather than tuned by hand as in the original circuit. The hGRU was designed to learn a difficult synthetic incremental grouping task, and a single layer of the hGRU learned long-range spatial dependencies that CNNs with orders-of-magnitude more weights could not. The hGRU replaced the circuit's time constants with dynamic gates, converted the recurrent state $\mathbf{H}^{(S)}$ for suppression into an instantaneous activity, and introduced a term for quadratic facilitation. The hGRU also relaxed biological constraints from the original circuit, including an assumption of non-negativity, which enforced competition between recurrent suppression vs. facilitation (e.g., guaranteeing that Stage 1 in the circuit model describes suppression of $Z_{xyk}$).
76
+
77
+ We extend the hGRU formulation in two important ways. First, like Mély et al. (2018), we introduce non-negativity. This constraint was critical for Mély et al. (2018) to explain contextual illusions, and as we describe below, was also important for our model. Second, we extend the circuit into a hierarchical model which can learn complex contour detection tasks. Recent neurophysiological work indicates that contextual effects emerge from both horizontal and top-down feedback (Chettih & Harvey, 2019). Motivated by this, we develop versions of the circuit to simulate horizontal connections between units within a layer, and top-down connections between units in different layers.
78
+
79
+ We call our module the feedback gated recurrent unit (fGRU). We describe the evolution of fGRU recurrent units in $\mathbf{H} \in \mathbb{R}^{X \times Y \times K}$ , which are influenced by non-negative feedforward encodings $\mathbf{Z} \in \mathbb{R}^{X \times Y \times K}$ (e.g., a convolutional layer's response to a stimulus) over discrete timesteps $\cdot [t]$ :
80
+
81
+ Stage 1:
82
+
83
+ $$
84
+ \mathbf{G}^{S} = \operatorname{sigmoid}\left(U^{S} * \mathbf{H}[t-1]\right) \quad \# \text{Compute channel-wise selection}
85
+ $$
86
+
87
+ $$
88
+ \mathbf{C}^{S} = W^{S} * (\mathbf{H}[t-1] \odot \mathbf{G}^{S}) \quad \# \text{Compute suppression interactions}
89
+ $$
90
+
91
+ $$
92
+ \mathbf{S} = \left[ \mathbf{Z} - \left[ (\alpha \mathbf{H}[t-1] + \mu)\, \mathbf{C}^{S} \right]_{+} \right]_{+}, \quad \# \text{Suppression of } \mathbf{Z}
93
+ $$
94
+
95
+ Stage 2:
96
+
97
+ $$
98
+ \mathbf{G}^{F} = \operatorname{sigmoid}\left(U^{F} * \mathbf{S}\right) \quad \# \text{Compute channel-wise recurrent updates}
99
+ $$
100
+
101
+ $$
102
+ \mathbf{C}^{F} = W^{F} * \mathbf{S} \quad \# \text{Compute facilitation interactions}
103
+ $$
104
+
105
+ $$
106
+ \tilde{\mathbf{H}} = \left[ \nu (\mathbf{C}^{F} + \mathbf{S}) + \omega (\mathbf{C}^{F} * \mathbf{S}) \right]_{+} \quad \# \text{Facilitation of } \mathbf{S}
107
+ $$
108
+
109
+ $$
110
+ \mathbf{H}[t] = \left(1 - \mathbf{G}^{F}\right) \odot \mathbf{H}[t-1] + \mathbf{G}^{F} \odot \tilde{\mathbf{H}}. \quad \# \text{Update recurrent state}
111
+ $$
112
+
113
+ Like the original circuit, the fGRU has separate stages for suppression (S) and facilitation (H). In the first stage, the feedforward encodings $\mathbf{Z}$ are suppressed by non-negative interactions between units in $\mathbf{H}[t - 1]$ (an fGRU hidden state from the previous timestep). Suppressive interactions are computed with the kernel $W^{S} \in \mathbb{R}^{E \times E \times K \times K}$ , where $E$ describes the spatial extent of horizontal connections on a single timestep. This kernel is convolved with a gated version of the persistent hidden state $\mathbf{H}[t - 1]$ . The gate activity $\mathbf{G}^{S}$ is computed by applying a sigmoid nonlinearity to a convolution of the kernel $U^{S} \in \mathbb{R}^{1 \times 1 \times K \times K}$ with $\mathbf{H}[t - 1]$ , which transforms its activity into the range [0, 1]. Additive and multiplicative forms of suppression are controlled by the parameters $\mu, \alpha \in \mathbb{R}^{K}$ , respectively.
114
+
115
+ In the second stage, additive and multiplicative facilitation is applied to the instantaneous activity $\mathbf{S}$. The kernel $W^{F} \in \mathbb{R}^{E \times E \times K \times K}$ controls facilitation interactions. Additive and multiplicative forms of facilitation are scaled by the parameters $\nu, \omega \in \mathbb{R}^{K}$, respectively. A gate activity is also computed during this stage to update the persistent recurrent activity $\mathbf{H}$. The gate activity $\mathbf{G}^{F}$ is computed by applying a sigmoid to a convolution of the kernel $U^{F} \in \mathbb{R}^{1 \times 1 \times K \times K}$ with $\mathbf{S}$. This gate updates $\mathbf{H}[t]$ by interpolating $\mathbf{H}[t - 1]$ with the candidate activity $\tilde{\mathbf{H}}$. After every timestep of processing, $\mathbf{H}[t]$ is taken as the fGRU output activity. As detailed in the following section, the fGRU output hidden state is either passed to the next convolutional layer (Fig. 1c, fGRU $(\ell) \to \mathrm{conv}(\ell + 1)$), or used to compute top-down connections (Fig. 1c, fGRU $(\ell + 1) \to \mathrm{fGRU}(\ell)$).
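+ For concreteness, here is a minimal single-timestep sketch of the two fGRU stages in Python/PyTorch (our own sketch; the paper's models were trained in TensorFlow). The function and argument names are hypothetical, tensors are laid out NCHW, the per-channel parameters $\alpha, \mu, \nu, \omega$ are assumed to be shaped $(1, K, 1, 1)$ for broadcasting, and the multiplicative facilitation term $\mathbf{C}^{F} * \mathbf{S}$ is interpreted elementwise:
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def fgru_step(Z, H_prev, W_S, W_F, U_S, U_F, alpha, mu, nu, omega):
+     """One fGRU timestep. Z, H_prev: (N, K, X, Y); W_S, W_F: (K, K, E, E);
+     U_S, U_F: (K, K, 1, 1); alpha, mu, nu, omega: (1, K, 1, 1)."""
+     pad = W_S.shape[-1] // 2  # "same" padding for the E x E horizontal kernels
+     # Stage 1: gated, non-negative suppression of the feedforward drive Z.
+     G_S = torch.sigmoid(F.conv2d(H_prev, U_S))           # channel-wise selection gate
+     C_S = F.conv2d(H_prev * G_S, W_S, padding=pad)       # suppression interactions
+     S = F.relu(Z - F.relu((alpha * H_prev + mu) * C_S))  # additive + multiplicative suppression
+     # Stage 2: facilitation of S and gated update of the persistent state.
+     G_F = torch.sigmoid(F.conv2d(S, U_F))                # channel-wise update gate
+     C_F = F.conv2d(S, W_F, padding=pad)                  # facilitation interactions
+     H_tilde = F.relu(nu * (C_F + S) + omega * (C_F * S)) # candidate activity
+     return (1.0 - G_F) * H_prev + G_F * H_tilde          # interpolated hidden state H[t]
+ ```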
116
+
117
+ The fGRU has different configurations for learning horizontal connections between units within a layer or top-down connections between layers (Fig. 1b). These two configurations stem from changing the activities used for an fGRU's feedforward encodings and recurrent hidden state. "Horizontal connections" between units within a layer are learned by setting the feedforward encodings $\mathbf{Z}$ to the activity of a preceding convolutional layer, and setting the hidden state $\mathbf{H}$ to a persistent activity initialized as zeros (Fig. 1c, $\mathrm{conv}^{(\ell)}\rightarrow \mathrm{fGRU}^{(\ell)}$). "Top-down connections" between layers are learned by setting the fGRU feedforward encodings $\mathbf{Z}$ to the persistent hidden state $\mathbf{H}^{(\ell)}$ of an fGRU at layer $\ell$ in a hierarchical model, and the hidden state $\mathbf{H}$ to the persistent activity $\mathbf{H}^{(\ell +1)}$ of an fGRU at a layer one level up in the model hierarchy (Fig. 1c, $\mathrm{fGRU}^{(\ell +1)}\to \mathrm{fGRU}^{(\ell)}$). The functional interpretation of the top-down fGRU is that it first suppresses activity in the lower layer using the higher layer's recurrent horizontal activities, and then applies a kernel to the residue for facilitation, which allows for computations like interpolation, sharpening, or "filling in". Note that an fGRU for top-down connections does not have a persistent state (it mixes high- and low-level persistent states), but an fGRU for horizontal connections does.
118
+
119
+ $\gamma$ -Net Our main objective is to test how a model with the capacity for contextual illusions performs on natural image analysis. We do this by incorporating fGRUs into leading feedforward architectures for contour detection tasks, augmenting their feedforward processing with modules for learning feedback from horizontal and top-down connections (Fig. 1c). We refer to the resulting hierarchical models as $\gamma$ -Nets, because information flows in a loop that resembles a $\gamma$: image encodings make a full bottom-up to top-down cycle through the architecture on every timestep, until dense predictions are read out from the lowest-level recurrent layer of the network (thus, information flows in at the top of the hierarchy, loops through the network, and flows out from the top of the hierarchy). In our experiments we convert leading architectures for two contour detection problems into $\gamma$ -Nets: a VGG16 for BSDS500 (He et al., 2019), and a U-Net for detection of cell membranes in serial electron microscopy images (Lee et al., 2017). See Appendix A for an algorithmic description of the $\gamma$ -Net.
120
+
121
+ # 4 CONTOUR DETECTION EXPERIMENTS
122
+
123
+ Overview We evaluated $\gamma$ -Net performance on two contour detection tasks: object contour detection in natural images (BSDS500 dataset; Arbeláez et al., 2011) and cell membrane detection in serial electron microscopy (SEM) images of mouse cortex (Kasthuri et al., 2015) and mouse retina (Ding et al., 2016). Different $\gamma$ -Net configurations were used on each task, with each building on the leading architecture for its respective dataset. All $\gamma$ -Nets use 8 timesteps of recurrence and instance normalization (normalization controls vanishing gradients in RNN training; Ulyanov et al., 2016; Cooijmans et al., 2017, see Appendix A for details). The $\gamma$ -Nets were trained with Tensorflow and NVIDIA Titan RTX GPUs using single-image batches and the Adam optimizer (Kingma & Ba, 2014; dataset-specific learning rates are detailed below). Models were trained with early stopping, which
124
+
125
+ ![](images/9e6bd6c96c7ffd838670add9f8bccf9eaad7ef2c74d3f9f9ca614c0cd8cfdc7b.jpg)
126
+
127
+ ![](images/f600e423b62b6d109b14fe573b137191ab0b2f26f6583e726cfcbf4edadcdfe6.jpg)
128
+
129
+ ![](images/743cea3eae4e91a066e5127fcff5060d53edca82cbec972b41be6c99d3185f2f.jpg)
130
+
131
+ ![](images/1b6fb4f14a07e0e767fc39be0ff89e15e391e23d2e8a4eef001115e34b50c170.jpg)
132
+ Figure 2: Object contour detection in BSDS500 images. (a) The $\gamma$ -Net is on par with humans and the state-of-the-art for contour detection (BDCN; He et al. 2019) when trained on the entire training dataset with augmentations. In this regime, it also outperforms the published F1 ODS of all other approaches to BSDS500 (LPCB: Deng et al. 2018, RCF: Liu et al. 2019, CED: Wang et al. 2019, DB: Kokkinos 2015, HED: Xie & Tu 2017, and OEF: Hallman & Fowlkes 2015). The $\gamma$ -Net outperforms the BDCN when trained without augmentations on $5\%$, $10\%$, or $100\%$ of the dataset. Performance is reported as F1 ODS (Arbeláez et al., 2011). (b) BDCN and $\gamma$ -Net predictions after training on the different proportions of BSDS500 images. (c) The evolution of $\gamma$ -Net predictions across timesteps of processing. Predictions from a $\gamma$ -Net trained on $100\%$ of BSDS are depicted: its initially coarse detections are refined over processing timesteps to select figural object contours.
133
+
134
+ ![](images/b2f0864e504e53ced6811728bc4737994389c18065b5984c187f13984cceb82b.jpg)
135
+
136
+ ![](images/2e4eb319076a45776cd63291d4b12e2b034d9fe5efff46b1f7f899ccf8412b26.jpg)
137
+
138
+ ![](images/54f282ccbbb0412fb574f0c61179c62e198701a01ba4215aafe33fd732193651.jpg)
139
+ Timestep
140
+
141
+ ![](images/bcbaef685e7af3aec3cb68b1c496d8e53e6e508fac6eedb6866921996bec71eb.jpg)
142
+
143
+ terminated training if the validation loss did not drop for 50 straight epochs. The weights with the best validation-set performance were used for testing.
144
+
145
+ Model evaluation We evaluated models in two ways. First, we validated them against state-of-the-art models for each contour dataset using standard benchmarks. As discussed below, we verified that our implementations of these state-of-the-art models matched published performance. Second, we tested sample-efficiency after training on subsets of the contour datasets without augmentations. Sample-efficiency compares the inductive biases of different architectures, and is critical for understanding how the capacity for exhibiting contextual illusions influences performance. We report model "wall time" in Appendix A; however, the focus of our work is on sample efficiency rather than the hardware/software-level optimizations that influence wall time. Model performance is evaluated as the F-measure at the Optimal Dataset Scale across images after non-maximum suppression post-processing (F1-ODS; Arbeláez et al., 2011), as is standard for contour detection tasks.
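+ As a reference for how F1 ODS aggregates over a dataset, here is a simplified Python sketch (our own; the official BSDS benchmark additionally matches predicted and ground-truth contour pixels with a small spatial tolerance, which this version omits by counting exact-pixel hits):
+
+ ```python
+ import numpy as np
+
+ def f1_ods(preds, labels, thresholds=np.linspace(0.01, 0.99, 99)):
+     """preds/labels: lists of (H, W) arrays; labels are binary contour maps.
+     Assumes predictions were already thinned by non-maximum suppression."""
+     best = 0.0
+     for thr in thresholds:  # one threshold ("dataset scale") for all images
+         tp = fp = fn = 0
+         for p, y in zip(preds, labels):
+             b = p >= thr
+             tp += np.sum(b & (y > 0))
+             fp += np.sum(b & (y == 0))
+             fn += np.sum(~b & (y > 0))
+         prec = tp / max(tp + fp, 1)
+         rec = tp / max(tp + fn, 1)
+         if prec + rec > 0:
+             best = max(best, 2 * prec * rec / (prec + rec))
+     return best  # F-measure at the optimal dataset-wide threshold
+ ```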
146
+
147
+ # 4.1 OBJECT CONTOUR DETECTION IN NATURAL IMAGES
148
+
149
+ Dataset We trained models for object contour detection on the BSDS500 dataset (Arbeláez et al., 2011). The dataset contains object-contour annotations for 500 natural images, which are split into train (200), validation (100), and test (200) sets.
150
+
151
+ Architecture details The leading approach to BSDS500 is the Bi-Directional Cascade Network (BDCN, He et al. 2019), which places multi-layer readout modules at every processing block in a
152
+
153
+ ![](images/6a3c960b2cdb0abba00593481006d849228e629dc3f43027b888bfee8b70c839.jpg)
154
+
155
+ ![](images/87dd398297ea58de25be20e012a2f8354d2777efed50db15c06a1f64e6ad886e.jpg)
156
+
157
+ ![](images/44868ca70ec1b021d0bda80d962de852858f1fcadb030f72bb71e9c8df81eaf0.jpg)
158
+
159
+ ![](images/ed68e6518b775b795f2639ba7863495f633bf9ce3e8fbdf32c5f1f5fd9138d9b.jpg)
160
+ Figure 3: Membrane prediction in serial electron microscopy (SEM) images of neural tissue. (a) The $\gamma$ -Net outperforms a state-of-the-art U-Net (Lee et al., 2017) for membrane detection when trained on SEM image datasets of mouse visual cortex (SNEMI3D) and retina (Ding et al., 2016). Performance is F1 ODS. (b) Network predictions after training on different proportions of each dataset. (c) The evolution of $\gamma$ -Net predictions across timesteps of processing after training on $100\%$ of the datasets. $\gamma$ -Net learns to iteratively suppress contours belonging to internal cell features, such as organelles, which should not be annotated as contours for the purpose of neural tissue reconstruction.
161
+
162
+ ![](images/8add8b67c286da0bb38a34b80f665252eec695d955628823e8a1ff1663c610d3.jpg)
163
+
164
+ ![](images/f086b7efac9fe2fec4e307f8aa4af561dd0c828f9d46689e5bb2b40da9b97089.jpg)
165
+ Timestep
166
+
167
+ ![](images/637f4b7e3f9aa8f74baba05fd9d80c0291bef9c281d5bb870ad43f714c3e6ce0.jpg)
168
+
169
+ ![](images/ad7746994c216d74b41ebf10cb8afe02bcd39a39cef6b0d54d430f78de760b64.jpg)
170
+
171
+ ![](images/1d0cc3a5c254c6138b5b11aa99431dcbc89dcd6dd6ea14c06f8070c0b9a2bf2d.jpg)
172
+
173
+ ILSVRC12-pretrained VGG16, and optimizes a loss that balances contributions from each of these readouts to achieve better scale tolerance in its final prediction. All leading deep learning approaches to BSDS500 begin with a VGG16 pretrained on ILSVRC12 object recognition (He et al., 2019).
174
+
175
+ Our $\gamma$ -Net for BSDS500 begins with the same ILSVRC12-pretrained VGG16 as the BDCN. fGRUs were introduced for learning horizontal (conv2_2, conv3_3, conv4_3, conv5_3) and top-down connections (conv5_3 $\rightarrow$ conv4_3, conv4_3 $\rightarrow$ conv3_3, and conv3_3 $\rightarrow$ conv2_2). To pass top-down activities between layers, higher-level activities were resized to match lower-level ones, then passed through two layers of $1 \times 1$ convolutions with linear rectification, which registered feature representations from higher-to-lower layers. The $\gamma$ -Net was trained with learning rates of $3e^{-4}$ on its randomly initialized fGRU weights and $1e^{-5}$ on its VGG-initialized weights. Training time and parameter counts for this $\gamma$ -Net and the BDCN are in Appendix Table S1.
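+ A minimal sketch of this two-learning-rate setup, assuming a PyTorch-style model whose fGRU parameters are identifiable by name (the grouping predicate is hypothetical):
+
+ ```python
+ import torch
+
+ def make_optimizer(model):
+     # 3e-4 on randomly initialized fGRU weights, 1e-5 on VGG-initialized weights.
+     fgru_params = [p for n, p in model.named_parameters() if "fgru" in n]
+     vgg_params = [p for n, p in model.named_parameters() if "fgru" not in n]
+     return torch.optim.Adam([
+         {"params": fgru_params, "lr": 3e-4},
+         {"params": vgg_params, "lr": 1e-5},
+     ])
+ ```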
176
+
177
+ In contrast to the BDCN (and other recent approaches to BSDS) with multiple read-outs and engineered loss functions, we take $\gamma$ -Net predictions as a linear transformation of the lowest fGRU layer in its feature hierarchy, and optimize the model with binary cross entropy between per-pixel predictions and labels (following the method of Xie & Tu, 2017). This approach works because the $\gamma$ -Net uses feedback to merge high- and low-level image feature representations at the bottom of its feature hierarchy, resembling classic "V1 scratchpad" hypotheses for computation in visual cortex (Gilbert & Sigman, 2007; Lee & Mumford, 2003). We compared $\gamma$ -Nets with a BDCN implementation released by the authors, which was trained using the routine described in He et al. $(2019)^{\dagger}$ .
178
+
179
+ Results We validated the $\gamma$ -Net against the BDCN after training on a full and augmented BSDS training set (Xie & Tu, 2017). The $\gamma$ -Net performed similarly in F1 ODS (0.802) as the BDCN (0.806) and humans (0.803), and outperformed all other approaches to BSDS (Fig. 2a; Deng et al. 2018; Xie & Tu 2017; Hallman & Fowlkes 2015; Kokkinos 2015; Wang et al. 2019; Liu et al. 2019).
180
+
181
+ Our hypothesis is that contextual illusions reflect routines for efficient scene analysis, and that the capacity for exhibiting such illusions improves model sample efficiency. Consistent with this hypothesis, the $\gamma$ -Net was up to an order-of-magnitude more efficient than the BDCN. A $\gamma$ -Net trained on $5\%$ of BSDS performs on par with a BDCN trained on $10\%$ of the BSDS, and a $\gamma$ -Net trained on $10\%$ of the BSDS performs on par with a BDCN trained on $100\%$ of BSDS. Unlike the BDCN, the $\gamma$ -Net trained on $100\%$ of BSDS outperformed the state of the art for non-deep learning based models (Hallman & Fowlkes, 2015), and nearly matched the performance of the popular HED trained with augmentations (Xie & Tu, 2017). We also evaluated lesioned versions of $\gamma$ -Net to measure the importance of its horizontal/top-down connections, recurrence, non-negativity, and different specifications of its fGRU recurrent modules for detecting contours in BSDS500 (Fig. S4).
182
+
183
+ We examined the recurrent feedback strategies learned by the $\gamma$ -Net for object contour detection by visualizing its performance on every timestep of processing. This was done by passing its activity at a timestep through the final linear readout layer. The $\gamma$ -Net iteratively refines its initially coarse contour predictions. For example, the top row of Fig. 2c shows that the $\gamma$ -Net selectively enhances the boundaries around the runners' bodies while suppressing the feature activities created by the crowd. In the next row of predictions, salient zebra stripes are gradually suppressed in favor of body contours (see Fig. S5 for $\gamma$ -Net prediction dynamics and its tendency towards steady state solutions).
184
+
185
+ # 4.2 CELL MEMBRANE DETECTION
186
+
187
+ Datasets "Connectomics" involves extracting the wiring diagrams of neurons from serial electron microscope (SEM) imaging data, and is an important step towards understanding the algorithms of brains (Briggman & Bock, 2012). CNNs can automate this procedure by segmenting neuron membranes in SEM images. Large-scale challenges like SNEMI3D (Kasthuri et al., 2015), which contains annotated images of mouse cortex, have helped drive progress towards automation. Here, we test models on membrane detection in SNEMI3D and a separate SEM dataset of mouse retina ("Ding" from Ding et al. 2016). We split both datasets into training (80 images for SNEMI3D and 307 images for Ding) and test sets (20 images for SNEMI3D and 77 images for Ding). Next, we generated versions of each training dataset with $100\%$ , $10\%$ , or $5\%$ of the images, as well as versions of the full datasets augmented with random left-right and up-down flips $(A + 100\%)$ .
188
+
189
+ Architecture details The state-of-the-art on SNEMI3D is a U-Net variant (Ronneberger et al., 2015), which uses a different depth and number of feature maps at every layer, and introduces new features like residual connections (Lee et al., 2017). We developed a $\gamma$ -Net for cell membrane segmentation which resembled this U-Net variant of Lee et al. (2017) (see Appendix B for details). These $\gamma$ -Nets were trained from a random initialization with a learning rate of $1e^{-2}$ to minimize class-balanced per-pixel binary cross-entropy, and compared to the U-Net from Lee et al. (2017). Training time and parameter counts for these models are in Appendix Table S2.
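+ A minimal sketch of class-balanced per-pixel binary cross-entropy, the loss named above (our own sketch; the weighting follows the class-balancing idea of Xie & Tu (2017), where each class is weighted by the frequency of the opposite class, since membrane pixels are far rarer than background):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def balanced_bce(logits, labels):
+     """logits, labels: (N, 1, H, W); labels are float maps in {0, 1}."""
+     pos = labels.sum()
+     beta = 1.0 - pos / labels.numel()            # fraction of negative pixels
+     weights = torch.where(labels > 0, beta, 1.0 - beta)
+     return F.binary_cross_entropy_with_logits(logits, labels, weight=weights)
+ ```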
190
+
191
+ Results The $\gamma$ -Net and U-Net of Lee et al. (2017) performed similarly when trained on full augmented versions of both the SNEMI3D and Ding datasets (Fig. 3a A+100%). However, $\gamma$ -Nets were consistently more sample efficient than U-Nets on every reduced dataset condition (Fig. 3b).
192
+
193
+ We visualized the recurrent membrane detection strategies of $\gamma$ -Nets trained on $100\%$ of both datasets. Membrane predictions were obtained by passing neural activity at every timestep through the final linear readout. The $\gamma$ -Net prediction timecourse indicates that it learns a complex visual strategy for membrane detection: it gathers a coarse "gist" of membranes in the first timestep of processing, and iteratively refines these predictions by enhancing cell boundaries and clearing out spurious contours of elements like cell organelles (Fig. 3c).
194
+
195
+ # 5 BUGS OR BYPRODUCTS OF OPTIMIZED NEURAL COMPUTATIONS?
196
+
197
+ Orientation-tilt illusion Like the neural circuit of Mély et al. (2018), fGRU modules are designed with an asymmetry in their ability to suppress and facilitate feedforward input (see Section 3). This
198
+
199
+ ![](images/e15a2458ea6d61558cf4c772f544b61b07410233db686a88f4440a1ffadef8c5.jpg)
200
+ Figure 4: Optimizing for contour detection produces an orientation-tilt illusion in the $\gamma$ -Net. The orientation-tilt illusion (O'Toole & Wenderoth, 1977) describes how perception of the center grating's orientation is repulsed from the surround when the two are in similar orientations (e.g., $\approx 30^{\circ}$ ), and attracted to the surround when the two are in dissimilar (but not orthogonal) orientations (e.g., $\approx 60^{\circ}$ ). We test for the orientation-tilt illusion in models trained on BSDS500 contour detection. Model weights were fixed and new layers were trained to decode the orientation of grating stimuli of a single orientation. These models were tested on grating stimuli in which surround orientations were systematically varied w.r.t. the center (exemplars depicted in the left panel). The $\gamma$ -Net, but not the BDCN, exhibited an orientation-tilt illusion. Gray curves depict a fourth-order polynomial fit.
201
+
202
+ ![](images/84ab6fae778e5096d8774debe840c492575ed6d97952d8c2db7bdef2a8178e63.jpg)
203
+
204
+ potentially gives $\gamma$ -Nets (which contain fGRU modules) the capacity to exhibit similar contextual illusions as humans. Here, we tested whether a $\gamma$ -Net trained on contour detection in natural images exhibits an orientation-tilt illusion.
205
+
206
+ We tested for this illusion by training orientation decoders on the outputs of models trained on the full BSDS500 dataset. These decoders were trained on 100K grating images, in which the center and surround orientations were the same (Fig. S2a). These images were sampled from all orientations and spatial frequencies. The decoders had two $1 \times 1$ convolution layers and an intervening linear rectification to map model outputs into the sine and cosine of grating orientation. Both the $\gamma$ -Net and BDCN achieved nearly perfect performance on a held-out validation set of gratings.
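+ A sketch of such a decoder in Python/PyTorch (hypothetical class and dimension names; for simplicity this version regresses the sine and cosine of orientation directly, ignoring the $180^{\circ}$ periodicity of gratings, and reads out the center pixel):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class OrientationDecoder(nn.Module):
+     """Two 1x1 convolutions with an intervening linear rectification."""
+     def __init__(self, in_channels, hidden=64):
+         super().__init__()
+         self.net = nn.Sequential(
+             nn.Conv2d(in_channels, hidden, kernel_size=1),
+             nn.ReLU(),
+             nn.Conv2d(hidden, 2, kernel_size=1),  # per-pixel (sin, cos)
+         )
+
+     def decode_center(self, feats):
+         """feats: (N, C, H, W) model output; returns orientation in degrees."""
+         sc = self.net(feats)
+         h, w = feats.shape[2] // 2, feats.shape[3] // 2
+         return torch.rad2deg(torch.atan2(sc[:, 0, h, w], sc[:, 1, h, w]))
+ ```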
207
+
208
+ We tested these models on 1K grating stimuli generated with different center-surround grating orientations (following the method of O'Toole & Wenderoth 1977, Fig. S2b), and recorded model predictions for the center pixel in these images (detailed in Appendix C). Surprisingly, $\gamma$ -Net encodings of these test images exhibited a similar tilt illusion as found in human perceptual data (Fig. 4b). There was repulsion when the central and surround gratings had similar orientations, and attraction when these gratings were dissimilar. This illusory phenomenon cannot be explained by accidental factors such as aliasing between the center and the surround, which would predict the opposite pattern, suggesting that the illusion emerges from the model's strategy for contour detection. In contrast, the BDCN, which only relies on feedforward processing, did not exhibit the effect (Fig. 4b). We also tested lesioned $\gamma$ -Nets for the orientation-tilt illusion, but only the full $\gamma$ -Net and a version lesioned to have only (spatially broad) horizontal connections replicated the entire illusion (though this lesioned model performed worse on contour detection than the full model; Fig. S4). These findings are consistent with the work of Mély et al. (2018), who used spatially broad horizontal connections to model contextual illusions, as well as the more recent neurophysiological work of Chettih & Harvey (2019), who explained contextual effects through both horizontal and top-down interactions.
209
+
210
+ Correcting the orientation-tilt illusion What visual strategies does the orientation-tilt illusion reflect? We tested this question by taking a $\gamma$ -Net fitted with the orientation decoder described above, and then training it further to decode the central grating orientation of tilt-illusion stimuli (Fig. 5a, "illusion-corrected" in red). Importantly, $\gamma$ -Net weights were optimized during training, but the orientation decoder was not. Thus, improving performance for decoding the orientation of these illusory stimuli comes at the expense of changing $\gamma$ -Net weights that were
211
+
212
+ ![](images/6bae304f6807444bc5e582c24735d1a0ff50796f43e49ef4da16fc3426c9207d.jpg)
213
+ Figure 5: Contour detection performance of the $\gamma$ -Net depends on an orientation-tilt illusion. (a) F1 ODS scores on BSDS500 test images (200 total) from $\gamma$ -Nets after correcting an orientation-tilt illusion ("illusion-corrected") or not ("domain-transfer control"). The domain-transfer control $\gamma$ -Net was trained to decode the orientation of single-grating stimuli (blue), and the illusion-corrected $\gamma$ -Net was trained to decode the orientation of the central grating in illusory grating stimuli (red). Readouts for decoding orientation were fixed, and $\gamma$ -Net weights were allowed to change during training. Per-image F1 ODS was significantly greater for the domain-transfer control $\gamma$ -Net than for the illusion-corrected $\gamma$ -Net. (b) The illusion-corrected $\gamma$ -Net was biased towards low-level contours, whereas the domain-transfer control $\gamma$ -Net was biased towards contours on object boundaries.
214
+
215
+ ![](images/4320aa84d8edd4708254e71d79b60852935c5d98d79a0ba8964ef3c35e310f48.jpg)
216
+
217
+ ![](images/adf85ba3d876d8e6b7ed9345c2f684485d2a6141496913035dab46e1dcb92643.jpg)
218
+
219
+ ![](images/0feba71f5cf5d74e1db6e2654d2ad71d1ea6280750d2be9e8d19ea4f889529ae.jpg)
220
+
221
+ responsible for its orientation-tilt illusion. As a control, another $\gamma$ -Net was trained with the same routine to decode the orientation of full-image gratings, for which there is no illusion (Fig. 5a, "domain-transfer control" in blue; see Fig. S6 for training performance of both models). Both models were tested on the BSDS500 test set.
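+ A sketch of this training asymmetry, assuming PyTorch-style `gamma_net` and `decoder` modules (the learning rate here is an arbitrary placeholder, not a value reported above):
+
+ ```python
+ import torch
+
+ # Freeze the orientation readout; only gamma-Net weights receive gradients.
+ for p in decoder.parameters():
+     p.requires_grad = False
+
+ optimizer = torch.optim.Adam(
+     [p for p in gamma_net.parameters() if p.requires_grad],
+     lr=1e-4,  # placeholder learning rate (assumption)
+ )
+ ```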
222
+
223
+ Correcting the orientation-tilt illusion of a $\gamma$ -Net significantly hurts its object contour detection performance (Fig. 5a; 1-sample $T$ -test of the per-image ODS F1 difference between models, $T(199) = 13.570$ , $p < 0.001$ ). The illusion reflects $\gamma$ -Net strategies for selecting object-boundaries rather than low-level contours (Fig. 5b; Fig. S7 for more examples).
224
+
225
+ # 6 CONCLUSION
226
+
227
+ Why do we experience visual illusions? Our experiments indicate that one representative contextual illusion, the orientation-tilt illusion, is a consequence of neural strategies for efficient scene segmentation. We directly tested whether this contextual illusion is a bug or a byproduct of optimized neural computations using the $\gamma$ -Net: a dense prediction model with recurrent dynamics inspired by neural circuits in visual cortex. On separate contour detection tasks, the $\gamma$ -Net performed on par with state-of-the-art models when trained in typical regimes with full augmented datasets, but was far more efficient than these models when trained on sample-limited versions of the same datasets. At the same time, the $\gamma$ -Net exhibited an orientation-tilt illusion which biased it towards high-level object-boundary contours over low-level edges, and its performance was reduced when it was trained to correct its illusion.
228
+
229
+ While $\gamma$ -Nets are more sample efficient than leading feedforward models for contour detection, they also take much more "wall-time" to train than these feedforward models. Learning algorithms and GPU optimizations for RNNs are lagging behind their feedforward counterparts in computational efficiency, raising the need for more efficient approaches for training hierarchical RNNs like $\gamma$ -Nets to unlock their full potential.
230
+
231
+ More generally, our work demonstrates novel synergy between artificial vision and vision neuroscience: we demonstrated that circuit-level insights from biology can improve the sample efficiency of deep learning models. The neural circuit that inspired the fGRU module explained biological illusions in color, motion, and depth processing (Mély et al., 2018), and we suspect that $\gamma$ -Nets will have similar success in learning sample-efficient strategies – and exhibiting contextual illusions – in these domains.
232
+
233
+ # REFERENCES
234
+
235
+ P. Arbeláez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898-916, May 2011.
236
+ K. L. Briggman and D. D. Bock. Volume electron microscopy for neuronal circuit reconstruction. Curr. Opin. Neurobiol., 22(1):154-161, February 2012.
237
+ S. N. Chettih and C. D. Harvey. Single-neuron perturbations reveal feature-specific competition in V1. Nature, 567(7748):334-340, March 2019.
238
+ T. Cooijmans, N. Ballas, C. Laurent, C. Gulçehre, and A. Courville. Recurrent batch normalization. In International Conference on Learning Representations, 2017.
239
+ R. Deng, C. Shen, S. Liu, H. Wang, and X. Liu. Learning to predict crisp boundaries. In Computer Vision - ECCV 2018, pp. 570-586. Springer International Publishing, 2018.
240
+ H. Ding, R. G. Smith, A. Poleg-Polsky, J. S. Diamond, and K. L. Briggman. Species-specific wiring for direction selectivity in the mammalian retina. Nature, 535(7610):105-110, July 2016.
241
+ D. J. Felleman and D. C. Van Essen. Distributed hierarchical processing in the primate cerebral cortex. Cereb. Cortex, 1(1):1-47, 1991.
242
+ A. M. Fyall, Y. El-Shamayleh, H. Choi, E. Shea-Brown, and A. Pasupathy. Dynamic representation of partially occluded objects in primate prefrontal and visual cortex. eLife, 6, September 2017.
243
+ R. Geirhos, C. R. M. Temme, J. Rauber, H. H. Schütt, M. Bethge, and F. A. Wichmann. Generalisation in humans and deep neural networks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 7549-7561. Curran Associates, Inc., 2018.
244
+ D. George, W. Lehrach, K. Kansky, M. Lázaro-Gredilla, C. Laan, B. Marthi, X. Lou, Z. Meng, Y. Liu, H. Wang, A. Lavin, and D. S. Phoenix. A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs. Science, 358(6368), December 2017.
245
+ C. D. Gilbert and M. Sigman. Brain states: top-down influences in sensory processing. Neuron, 54(5):677-696, June 2007.
246
+ R. H. Hahnloser, R. Sarpeshkar, M. A. Mahowald, R. J. Douglas, and H. S. Seung. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature, 405(6789):947-951, June 2000.
247
+ S. Hallman and C. C. Fowlkes. Oriented edge forests for boundary detection. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1732-1740, June 2015.
248
+ J. He, S. Zhang, M. Yang, Y. Shan, and T. Huang. Bi-Directional cascade network for perceptual edge detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3828-3837, 2019.
249
+ K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
250
+ J. C. Heck and F. M. Salem. Simplified minimal gated unit variations for recurrent neural networks. In 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), pp. 1593-1596, August 2017.
251
+ S. Hochstein and M. Ahissar. View from the top: Hierarchies and reverse hierarchies in the visual system. Neuron, 36(5):791-804, 2002.
252
+ M. Januszewski and V. Jain. Segmentation-Enhanced CycleGAN. February 2019.
253
+ H. Kafaligonul, B. G. Breitmeyer, and H. Öğmen. Feedforward and feedback processes in vision. Front. Psychol., 6:279, March 2015.
254
+
255
+ K. Kar, J. Kubilius, K. Schmidt, E. B. Issa, and J. J. DiCarlo. Evidence that recurrent circuits are critical to the ventral stream's execution of core object recognition behavior. Nat. Neurosci., April 2019.
256
+ N. Kasthuri, K. J. Hayworth, D. R. Berger, R. L. Schalek, J. A. Conchello, S. Knowles-Barley, D. Lee, A. Vázquez-Reina, V. Kaynig, T. R. Jones, M. Roberts, J. L. Morgan, J. C. Tapia, H. S. Seung, W. G. Roncal, J. T. Vogelstein, R. Burns, D. L. Sussman, C. E. Priebe, H. Pfister, and J. W. Lichtman. Saturated reconstruction of a volume of neocortex. Cell, 162(3):648-661, July 2015.
257
+ S. W. Keemink and M. C. W. van Rossum. A unified account of tilt illusions, association fields, and contour detection based on elastica. Vision Res., 126:164-173, September 2016.
258
+ T. C. Kietzmann, C. J. Spoerer, L. Sorensen, and others. Recurrence required to capture the dynamic computations of the human ventral visual stream. arXiv preprint, 2019.
259
+ J. K. Kim, M. Ricci, and T. Serre. Not-So-CLEVR: learning same-different relations strains feedforward neural networks. Interface Focus theme issue on "Understanding images in biological and computer vision", 2018.
260
+ D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
261
+ P. C. Klink, B. Dagnino, M.-A. Gariel-Mathis, and P. R. Roelfsema. Distinct feedforward and feedback effects of microstimulation in visual cortex reveal neural mechanisms of texture segregation. Neuron, June 2017.
262
+ I. Kokkinos. Pushing the boundaries of boundary detection using deep learning. arXiv preprint arXiv:1511.07386, 2015.
263
+ N. Kriegeskorte. Deep neural networks: A new framework for modeling biological vision and brain information processing. Annu Rev Vis Sci, 1:417-446, November 2015.
264
+ B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332-1338, December 2015.
265
+ K. Lee, J. Zung, P. Li, V. Jain, and H. Sebastian Seung. Superhuman accuracy on the SNEMI3D connectomics challenge. In Neural Information Processing Systems, 2017.
266
+ T. S. Lee and D. Mumford. Hierarchical bayesian inference in the visual cortex. J. Opt. Soc. Am. A Opt. Image Sci. Vis., 20(7):1434-1448, July 2003.
267
+ Q. Liao and T. Poggio. Bridging the gaps between residual learning, recurrent neural networks and visual cortex. April 2016.
268
+ D. Linsley, J. K. Kim, V. Veerabadran, C. Windolf, and T. Serre. Learning long-range spatial dependencies with horizontal gated recurrent units. In Neural Information Processing Systems (NIPS), 2018a.
269
+ D. Linsley, J. Kim, D. Berson, and T. Serre. Robust neural circuit reconstruction from serial electron microscopy with convolutional recurrent networks. November 2018b.
270
+ D. Linsley, J. Kim, V. Veerabadran, C. Windolf, and T. Serre. Learning long-range spatial dependencies with horizontal gated recurrent units. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 152-164. Curran Associates, Inc., 2018c.
271
+ D. Linsley, D. Shiebler, S. Eberhardt, and T. Serre. Learning what and where to attend with humans in the loop. In International Conference on Learning Representations, 2019.
272
+ Y. Liu, M.-M. Cheng, X. Hu, J.-W. Bian, L. Zhang, X. Bai, and J. Tang. Richer convolutional features for edge detection. IEEE Trans. Pattern Anal. Mach. Intell., 41(8):1939-1946, August 2019.
273
+ W. Lotter, G. Kreiman, and D. Cox. Deep predictive coding networks for video prediction and unsupervised learning. May 2016.
274
+
275
+ D. A. Mély, D. Linsley, and T. Serre. Complementary surrounds explain diverse contextual phenomena across visual modalities. Psychol. Rev., 2018.
276
+ M. C. Mozer. Induction of multiscale temporal structure. In J. E. Moody, S. J. Hanson, and R. P. Lippmann (eds.), Advances in Neural Information Processing Systems 4, pp. 275-282. Morgan-Kaufmann, 1992.
277
+ A. Nayebi, D. Bear, J. Kubilius, K. Kar, S. Ganguli, D. Sussillo, J. J. DiCarlo, and D. L. Yamins. Task-Driven convolutional recurrent models of the visual system. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 5295-5306. Curran Associates, Inc., 2018.
278
+ J. Nunez-Iglesias, R. Kennedy, T. Parag, J. Shi, and D. B. Chklovskii. Machine learning of hierarchical clustering to segment 2D and 3D images. PLoS One, 8(8):e71715, August 2013.
279
+ R. C. O'Reilly, D. Wyatte, S. Herd, B. Mingus, and D. J. Jilk. Recurrent processing during object recognition. Front. Psychol., 4(April):1-14, 2013.
280
+ B. O'Toole and P. Wenderoth. The tilt illusion: repulsion and attraction effects in the oblique meridian. Vision Res., 17(3):367-374, 1977.
281
+ P. J. Phillips, A. N. Yates, Y. Hu, C. A. Hahn, E. Noyes, K. Jackson, J. G. Cavazos, G. Jeckeln, R. Ranjan, S. Sankaranarayanan, J.-C. Chen, C. D. Castillo, R. Chellappa, D. White, and A. J. O'Toole. Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms. Proc. Natl. Acad. Sci. U. S. A., 115(24):6171-6176, June 2018.
282
+ B. Recht, R. Roelofs, L. Schmidt, and V. Shankar. Do CIFAR-10 classifiers generalize to CIFAR-10? June 2018.
283
+ P. R. Roelfsema. Cortical algorithms for perceptual grouping. Annu. Rev. Neurosci., 29:203-227, January 2006.
284
+ O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pp. 234–241. Springer International Publishing, 2015.
285
+ A. Rosenfeld, R. Zemel, and J. K. Tsotsos. The elephant in the room. August 2018.
286
+ T. Serre. Deep learning: The good, the bad, and the ugly. Annu Rev Vis Sci, 5:399-426, September 2019.
287
+ M. Siegel, T. J. Buschman, and E. K. Miller. Cortical information flow during flexible sensorimotor decisions. Science, 348(6241):1352-1355, June 2015.
288
+ C. J. Spoerer, P. McClure, and N. Kriegeskorte. Recurrent convolutional neural networks: A better model of biological object recognition. Front. Psychol., 8:1551, September 2017.
289
+ C. Tallec and Y. Ollivier. Can recurrent neural networks warp time? In International Conference on Learning Representations, 2018.
290
+ H. Tang, M. Schrimpf, W. Lotter, C. Moerman, A. Paredes, J. Ortega Caro, W. Hardesty, D. Cox, and G. Kreiman. Recurrent computations for visual pattern completion. Proc. Natl. Acad. Sci. U. S. A., 115(35):8835-8840, August 2018.
291
+ D. Ulyanov, A. Vedaldi, and V. Lempitsky. Instance normalization: The missing ingredient for fast stylization. July 2016.
292
+ E. Vorontsov, C. Trabelsi, S. Kadoury, and C. Pal. On orthogonality and learning recurrent networks with long term dependencies. January 2017.
293
+ Y. Wang, X. Zhao, Y. Li, and K. Huang. Deep crisp boundaries: From boundaries to Higher-Level tasks. IEEE Trans. Image Process., 28(3):1285-1298, March 2019.
294
+ H. Wen, K. Han, J. Shi, Y. Zhang, E. Culurciello, and Z. Liu. Deep predictive coding network for object recognition. February 2018.
295
+
296
+ D. Wyatte, D. J. Jilk, and R. C. O'Reilly. Early recurrent feedback facilitates visual object recognition under challenging conditions. Front. Psychol., 5:674, July 2014.
297
+ S. Xie and Z. Tu. Holistically-Nested edge detection. Int. J. Comput. Vis., 125(1-3):3-18, December 2017.
298
+ D. L. K. Yamins and J. J. DiCarlo. Using goal-driven deep learning models to understand sensory cortex. Nat. Neurosci., 19(3):356-365, February 2016.
299
+
300
+ # A $\gamma$ -NET
301
+
302
+ Algorithm 1 A generic $\gamma$ -Net architecture. See Section 3 in the main text for the treatment.
303
+ Require: Image batch I
304
+ 1: $\mathbf{Z}^{0} \gets \mathbf{I}$
305
+ 2: $\mathbf{H}^{\ell}[0] \gets \mathbf{0}$ for all $\ell$
306
+ 3: for $t = 1$ to $T$ do
307
+ 4: for $\ell = 0$ to L do
308
+ 5: $\mathbf{Z}^{\ell} = \mathrm{ReLU}(\mathrm{Conv}(\mathbf{Z}^{\ell}))$
309
+ 6: $\mathbf{H}^{\ell}[t] = fGRU_{H}(Z = \mathbf{Z}^{\ell}, H = \mathbf{H}^{\ell}[t - 1])$
310
+ 7: if $\ell < L$ then
311
+ 8: $\mathbf{Z}^{\ell + 1} = \mathrm{Maxpool}(\mathbf{H}^{\ell}[t])$
312
+ 9: for $\ell = L$ to 0 do
313
+ 10: if $\ell < L$ then
314
+ 11: $\mathbf{H}^{\ell}[t] = \mathrm{fGRU}_{\mathrm{TD}}(Z = \mathbf{H}^{\ell}[t], H = \mathrm{ReLU}(\mathrm{Conv}(\mathrm{Upsample}(\mathbf{H}^{\ell + 1}[t]))))$
315
+ 12: return $\mathbf{H}^{0}[t]$
316
+ ▷ $\mathbf{H}^{\ell}[t]$: hidden state of layer $\ell$
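+ The same loop, written as a Python sketch (our own rendering of Algorithm 1; `conv_block`, `fgru_h`, `fgru_td`, `maxpool`, `upsample`, and `readout` are hypothetical callables standing in for the corresponding operations):
+
+ ```python
+ def gamma_net(image, L, T, conv_block, fgru_h, fgru_td, maxpool, upsample, readout):
+     """One forward pass of a gamma-Net with L+1 layers and T timesteps."""
+     H = [None] * (L + 1)  # persistent horizontal hidden states (zeros on first use)
+     for t in range(T):
+         # Bottom-up pass: feedforward drive plus horizontal recurrence per layer.
+         Z = image
+         for l in range(L + 1):
+             Z = conv_block[l](Z)           # ReLU(Conv(Z))
+             H[l] = fgru_h[l](Z=Z, H=H[l])  # horizontal fGRU updates layer-l state
+             if l < L:
+                 Z = maxpool(H[l])
+         # Top-down pass: each layer is modulated by the layer above it.
+         for l in range(L - 1, -1, -1):
+             H[l] = fgru_td[l](Z=H[l], H=upsample(H[l + 1]))
+     return readout(H[0])  # dense prediction from the lowest-level recurrent layer
+ ```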
317
+
318
+ Connectomics The current standard for computer vision applications in connectomics is to train and test on separate partitions of the same tissue volume (Linsley et al., 2018b; Januszewski & Jain, 2019). This makes it difficult to develop new model architectures without overfitting to any particular dataset. For this reason, we first tuned our connectomics $\gamma$ -Net and its hyperparameters on a synthetic dataset of cell images (data not shown).
319
+
320
+ In our experiments on synthetic data, we noted monotonically improved performance with increasing timesteps, which motivated our choice of building these models with as many timesteps as could fit into GPU memory (we carried this concept over to the design of our $\gamma$ -Nets for BSDS). Thus, we settled on 8 timesteps for the $\gamma$ -Nets. We also compared our use of fGRU modules to learn recurrent connections vs. the classic LSTM and GRU recurrent modules, and found that the $\gamma$ -Net was far more effective on small datasets, which we take as evidence that its recurrent application of suppression separately from facilitation is a better inductive bias for learning contour tasks (see Hahnloser et al. 2000 for a theoretical discussion on how these operations can amount to a digital selection of task-relevant features through inhibition, followed by an analog amplification of the residuals through excitation).
321
+
322
+ We found that $\gamma$-Net cell membrane detection was improved when every bottom-up unit (from a typical convolution) was given a hidden state. As with gated recurrent architectures, these gates enable gradients to effectively skip timesteps of processing where they would otherwise pathologically decay. We do this by converting every convolutional layer (except the first and last) into a "minimal gated unit" (Heck & Salem, 2017). This conversion introduced two additional kernels to each convolutional layer, $U^{F}, W^{H} \in \mathbb{R}^{1 \times 1 \times K \times K}$, where the former was responsible for selecting channels from a persistent activity $\mathbf{H} \in \mathbb{R}^{X \times Y \times K}$ for processing on a given timestep and for updating the persistent activity. The latter kernel transformed a modulated version of the hidden state $\mathbf{H}$. This transformed hidden state was combined with a vanilla convolutional feedforward encoding, $\mathbf{Z} \in \mathbb{R}^{X \times Y \times K}$ (see Eq. 1 for the treatment). Weights in these layers were initialized with random orthogonal initializations, which help in training recurrent networks (Vorontsov et al., 2017).
323
+
324
+ $$
325
+ \begin{aligned} \mathbf{F} &= \sigma\left(\mathbf{Z} + U^{F} * \mathbf{H}[t-1] + \mathbf{b}_{F}\right) \\ \mathbf{H}[t] &= \mathbf{F} \odot \mathbf{H}[t-1] + (1 - \mathbf{F}) \odot \operatorname{ELU}\left(\mathbf{Z} + W^{H} * (\mathbf{F} \odot \mathbf{H}[t-1]) + \mathbf{b}_{H}\right) \end{aligned} \tag{1}
326
+ $$
327
+
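+ As an illustration of Eq. 1, a minimal gated convolutional unit might be sketched in PyTorch as follows; the module layout is our assumption (the convolution biases stand in for $\mathbf{b}_F$ and $\mathbf{b}_H$), not the authors' code:
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class MinimalGatedConv(nn.Module):
+     # Sketch of Eq. 1: U^F gates the persistent activity H, W^H transforms
+     # the gated hidden state, and Z is the feedforward drive for this layer.
+     def __init__(self, k):
+         super().__init__()
+         self.u_f = nn.Conv2d(k, k, kernel_size=1)  # U^F (bias acts as b_F)
+         self.w_h = nn.Conv2d(k, k, kernel_size=1)  # W^H (bias acts as b_H)
+
+     def forward(self, z, h_prev):
+         f = torch.sigmoid(z + self.u_f(h_prev))    # forget gate F
+         h_new = F.elu(z + self.w_h(f * h_prev))    # candidate state
+         return f * h_prev + (1.0 - f) * h_new      # H[t]
+ ```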
328
+ fGRU. Here we describe additional details of the fGRU. fGRU kernels for computing suppressive and facilitative interactions have symmetric weights between channels, similar to the original circuit of Mély et al. (2018). This means that the weight $W_{x_0 + \Delta x,y_0 + \Delta y,k_1,k_2}$ is equal to the weight $W_{x_0 + \Delta x,y_0 + \Delta y,k_2,k_1}$, where $x_0$ and $y_0$ denote the kernel center. This constraint means that there are nearly half as many learnable connections as in a normal convolutional kernel. In our experiments, this constraint improved performance.
329
+
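+ One simple way to realize this constraint (our sketch; the authors' parameterization may differ) is to symmetrize the kernel over its two channel dimensions before each convolution:
+
+ ```python
+ import torch
+
+ def symmetrize_channels(w: torch.Tensor) -> torch.Tensor:
+     # For a kernel stored as (out_channels, in_channels, height, width),
+     # enforce W[k1, k2, dy, dx] == W[k2, k1, dy, dx] by averaging the
+     # kernel with its channel transpose; spatial offsets are untouched.
+     return 0.5 * (w + w.transpose(0, 1))
+
+ w = torch.randn(64, 64, 3, 3, requires_grad=True)
+ w_sym = symmetrize_channels(w)   # use w_sym in place of w in the convolution
+ assert torch.allclose(w_sym[3, 7], w_sym[7, 3])
+ ```
+
+ Because the symmetrized kernel is an average of tied entries, gradients flow to both members of each pair, realizing the roughly twofold reduction in free parameters without changing the convolution itself.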
330
+ BSDS $\gamma$-Net (20M parameters)
331
+
332
+ <table><tr><td>Layer</td><td>Operation</td><td>Output shape</td></tr><tr><td rowspan="3">conv-1-down</td><td>conv 3 × 3 / 1</td><td>320 × 480 × 64</td></tr><tr><td>conv 3 × 3 / 1</td><td>320 × 480 × 64</td></tr><tr><td>maxpool 2 × 2 / 2</td><td>160 × 240 × 64</td></tr><tr><td rowspan="4">conv-2-down</td><td>conv 3 × 3 / 1</td><td>160 × 240 × 128</td></tr><tr><td>conv 3 × 3 / 1</td><td>160 × 240 × 128</td></tr><tr><td>fGRU-horizontal 3 × 3 / 1</td><td>160 × 240 × 128</td></tr><tr><td>maxpool 2 × 2 / 2</td><td>80 × 120 × 128</td></tr><tr><td rowspan="5">conv-3-down</td><td>conv 3 × 3 / 1</td><td>80 × 120 × 256</td></tr><tr><td>conv 3 × 3 / 1</td><td>80 × 120 × 256</td></tr><tr><td>conv 3 × 3 / 1</td><td>80 × 120 × 256</td></tr><tr><td>fGRU-horizontal 3 × 3 / 1</td><td>80 × 120 × 256</td></tr><tr><td>maxpool 2 × 2 / 2</td><td>40 × 60 × 256</td></tr><tr><td rowspan="5">conv-4-down</td><td>conv 3 × 3 / 1</td><td>40 × 60 × 512</td></tr><tr><td>conv 3 × 3 / 1</td><td>40 × 60 × 512</td></tr><tr><td>conv 3 × 3 / 1</td><td>40 × 60 × 512</td></tr><tr><td>fGRU-horizontal 3 × 3 / 1</td><td>40 × 60 × 512</td></tr><tr><td>maxpool 2 × 2 / 2</td><td>20 × 30 × 512</td></tr><tr><td rowspan="4">conv-5-down</td><td>conv 3 × 3 / 1</td><td>20 × 30 × 512</td></tr><tr><td>conv 3 × 3 / 1</td><td>20 × 30 × 512</td></tr><tr><td>conv 3 × 3 / 1</td><td>20 × 30 × 512</td></tr><tr><td>fGRU-horizontal 3 × 3 / 1</td><td>20 × 30 × 512</td></tr><tr><td rowspan="5">conv-4-up</td><td>instance-norm</td><td>20 × 30 × 512</td></tr><tr><td>bilinear-resize</td><td>40 × 60 × 512</td></tr><tr><td>conv 1 × 1 / 1</td><td>40 × 60 × 8</td></tr><tr><td>conv 1 × 1 / 1</td><td>40 × 60 × 512</td></tr><tr><td>fGRU-top-down 1 × 1 / 1</td><td>40 × 60 × 512</td></tr><tr><td rowspan="5">conv-3-up</td><td>instance-norm</td><td>40 × 60 × 512</td></tr><tr><td>bilinear-resize</td><td>80 × 120 × 512</td></tr><tr><td>conv 1 × 1 / 1</td><td>80 × 120 × 16</td></tr><tr><td>conv 1 × 1 / 1</td><td>80 × 120 × 256</td></tr><tr><td>fGRU-top-down 1 × 1 / 1</td><td>80 × 120 × 256</td></tr><tr><td rowspan="5">conv-2-up</td><td>instance-norm</td><td>80 × 120 × 256</td></tr><tr><td>bilinear-resize</td><td>160 × 240 × 256</td></tr><tr><td>conv 1 × 1 / 1</td><td>160 × 240 × 64</td></tr><tr><td>conv 1 × 1 / 1</td><td>160 × 240 × 128</td></tr><tr><td>fGRU-top-down 1 × 1 / 1</td><td>160 × 240 × 128</td></tr><tr><td rowspan="3">Readout</td><td>instance-norm</td><td>160 × 240 × 128</td></tr><tr><td>bilinear-resize</td><td>320 × 480 × 128</td></tr><tr><td>conv 1 × 1 / 1</td><td>320 × 480 × 1</td></tr></table>
333
+
334
+ Table S1: $\gamma$-Net architecture for contour detection in BSDS natural images. For comparison, the BDCN, which is the state of the art on BSDS, contains $\approx 16.3$M parameters. When training on an NVIDIA GeForce RTX, this $\gamma$-Net takes 1.8 seconds per image, whereas the BDCN takes 0.1 seconds per image. "Down" refers to down-sampling layers; "up" refers to up-sampling layers; and "readout" maps model activities into per-pixel decisions. Kernels are described as kernel-height $\times$ kernel-width / stride. All convolutional layers except for the Readout use non-linearities. All non-linearities in this network are linear rectifications. Model predictions come from the fGRU hidden state for conv-2-down, which is resized to match the input image resolution and passed to the linear per-pixel readout.
335
+
336
+ Connectomics $\gamma$ -Net (450K parameters)
337
+
338
+ <table><tr><td>Layer</td><td>Operation</td><td>Output shape</td></tr><tr><td rowspan="4">conv-1-down</td><td>conv 3 × 3 / 1</td><td>384 × 384 × 24</td></tr><tr><td>conv 3 × 3 / 1</td><td>384 × 384 × 24</td></tr><tr><td>fGRU-horizontal 9 × 9 / 1</td><td>384 × 384 × 24</td></tr><tr><td>maxpool 2 × 2 / 2</td><td>192 × 192 × 24</td></tr><tr><td rowspan="3">conv-2-down</td><td>conv 3 × 3 / 1</td><td>192 × 192 × 28</td></tr><tr><td>fGRU-horizontal 7 × 7 / 1</td><td>192 × 192 × 28</td></tr><tr><td>maxpool 2 × 2 / 2</td><td>96 × 96 × 28</td></tr><tr><td rowspan="3">conv-3-down</td><td>conv 3 × 3 / 1</td><td>96 × 96 × 36</td></tr><tr><td>fGRU-horizontal 5 × 5 / 1</td><td>96 × 96 × 36</td></tr><tr><td>maxpool 2 × 2 / 2</td><td>48 × 48 × 36</td></tr><tr><td rowspan="3">conv-4-down</td><td>conv 3 × 3 / 1</td><td>48 × 48 × 48</td></tr><tr><td>fGRU-horizontal 3 × 3 / 1</td><td>48 × 48 × 48</td></tr><tr><td>maxpool 2 × 2 / 2</td><td>24 × 24 × 48</td></tr><tr><td rowspan="2">conv-5-down</td><td>conv 3 × 3 / 1</td><td>24 × 24 × 64</td></tr><tr><td>fGRU-horizontal 1 × 1 / 1</td><td>24 × 24 × 64</td></tr><tr><td rowspan="4">conv-4-up</td><td>transpose-conv 4 × 4 / 2</td><td>48 × 48 × 48</td></tr><tr><td>conv 3 × 3 / 1</td><td>48 × 48 × 48</td></tr><tr><td>instance-norm</td><td>48 × 48 × 48</td></tr><tr><td>fGRU-top-down 1 × 1 / 1</td><td>48 × 48 × 48</td></tr><tr><td rowspan="4">conv-3-up</td><td>transpose-conv 4 × 4 / 2</td><td>96 × 96 × 36</td></tr><tr><td>conv 3 × 3 / 1</td><td>96 × 96 × 36</td></tr><tr><td>instance-norm</td><td>96 × 96 × 36</td></tr><tr><td>fGRU-top-down 1 × 1 / 1</td><td>96 × 96 × 36</td></tr><tr><td rowspan="4">conv-2-up</td><td>transpose-conv 4 × 4 / 2</td><td>192 × 192 × 28</td></tr><tr><td>conv 3 × 3 / 1</td><td>192 × 192 × 28</td></tr><tr><td>instance-norm</td><td>192 × 192 × 28</td></tr><tr><td>fGRU-top-down 1 × 1 / 1</td><td>192 × 192 × 28</td></tr><tr><td rowspan="4">conv-1-up</td><td>transpose-conv 4 × 4 / 2</td><td>384 × 384 × 24</td></tr><tr><td>conv 3 × 3 / 1</td><td>384 × 384 × 24</td></tr><tr><td>instance-norm</td><td>384 × 384 × 24</td></tr><tr><td>fGRU-top-down 1 × 1 / 1</td><td>384 × 384 × 24</td></tr><tr><td rowspan="2">Readout</td><td>instance-norm</td><td>384 × 384 × 24</td></tr><tr><td>conv 5 × 5 / 1</td><td>384 × 384 × 24</td></tr></table>
339
+
340
+ Table S2: $\gamma$-Net architecture for cell membrane detection in SEM images. A 2D version of the U-Net of Lee et al. (2017), which is the state of the art on SNEMI3D, contains $\approx 600\mathrm{K}$ parameters. When training on an NVIDIA GeForce RTX, this $\gamma$-Net takes 0.7 seconds per image, whereas the U-Net takes 0.06 seconds per image. "Down" refers to down-sampling layers; "up" refers to up-sampling layers; and "readout" maps model activities into per-pixel decisions. Kernels are described as kernel-height $\times$ kernel-width / stride. All fGRU non-linearities are linear rectifications, and all convolutional non-linearities are exponential linear units (ELU), as in Lee et al. (2017). All convolutional layers except for the Readout use non-linearities. Model predictions come from the fGRU hidden state for conv-1-down, which is passed to the linear readout.
341
+
342
+ While optimizing $\gamma$-Nets on synthetic cell image datasets, we found that a small modification of the fGRU input gate offered a modest improvement in performance. The input gate in the fGRU is conceptually similar to recently developed models of feedforward self-attention in deep neural networks, specifically the global-and-local attention modules of Linsley et al. (2019), in which a non-linear transformation of a layer's activity is used to modulate the original activity. Taking inspiration from global-and-local attention, we introduced an additional gate into the fGRU, resulting in the following modification of the main equations.
343
+
344
+ Stage 1:
+
+ $$
+ \mathbf{A}^{S} = U^{A} * \mathbf{H}[t-1] \qquad \text{(compute channel-wise selection)}
+ $$
+
+ $$
+ \mathbf{M}^{S} = U^{M} * \mathbf{H}[t-1] \qquad \text{(compute spatial selection)}
+ $$
+
+ $$
+ \mathbf{G}^{S} = \operatorname{sigmoid}\left(IN\left(\mathbf{A}^{S} \odot \mathbf{M}^{S*}\right)\right) \qquad \text{(compute suppression gate)}
+ $$
+
+ $$
+ \mathbf{C}^{S} = IN\left(W^{S} * (\mathbf{H}[t-1] \odot \mathbf{G}^{S})\right) \qquad \text{(compute suppression interactions)}
+ $$
+
+ $$
+ \mathbf{S} = \left[\mathbf{Z} - \left[(\alpha \mathbf{H}[t-1] + \mu)\,\mathbf{C}^{S}\right]_{+}\right]_{+} \qquad \text{(additive and multiplicative suppression of } \mathbf{Z}\text{)}
+ $$
+
+ Stage 2:
+
+ $$
+ \mathbf{G}^{F} = \operatorname{sigmoid}\left(IN\left(U^{F} * \mathbf{S}\right)\right) \qquad \text{(compute channel-wise recurrent updates)}
+ $$
+
+ $$
+ \mathbf{C}^{F} = IN\left(W^{F} * \mathbf{S}\right) \qquad \text{(compute facilitation interactions)}
+ $$
+
+ $$
+ \tilde{\mathbf{H}} = \left[\nu\left(\mathbf{C}^{F} + \mathbf{S}\right) + \omega\left(\mathbf{C}^{F} * \mathbf{S}\right)\right]_{+} \qquad \text{(additive and multiplicative facilitation of } \mathbf{S}\text{)}
+ $$
+
+ $$
+ \mathbf{H}[t] = \left(1 - \mathbf{G}^{F}\right) \odot \mathbf{H}[t-1] + \mathbf{G}^{F} \odot \tilde{\mathbf{H}} \qquad \text{(update recurrent state)}
+ $$
+
+ $$
+ \text{where } IN(\mathbf{r}; \delta, \nu) = \nu + \delta \odot \frac{\mathbf{r} - \widehat{\mathbb{E}}[\mathbf{r}]}{\sqrt{\operatorname{Var}[\mathbf{r}] + \eta}}.
+ $$
405
+
406
+ This yields the global input gate activity $\mathbf{A}^S \in \mathbb{R}^{X \times Y \times K}$ and the local input gate activity $\mathbf{M}^{S*} \in \mathbb{R}^{X \times Y \times K}$, which are computed as filter responses between the previous hidden state $\mathbf{H}[t - 1]$ and the global gate kernel $U^A \in \mathbb{R}^{1 \times 1 \times K \times K}$ and the local gate kernel $U^M \in \mathbb{R}^{3 \times 3 \times K \times 1}$, respectively. Note that the latter filter learns a mapping into a single channel, whose output is therefore first tiled into $K$ channels, yielding $\mathbf{M}^{S*}$, before elementwise multiplication with $\mathbf{A}^S$. All results in the main text use this implementation.
407
+
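+ A sketch of this modified input gate is given below; the module names and the use of `nn.InstanceNorm2d` for $IN$ are our assumptions:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class GlobalLocalGate(nn.Module):
+     # Sketch of the modified fGRU suppression gate: a 1x1 'global' kernel
+     # U^A keeps K channels, a 3x3 'local' kernel U^M maps to one channel
+     # that is tiled back to K channels before the elementwise product.
+     def __init__(self, k):
+         super().__init__()
+         self.u_a = nn.Conv2d(k, k, kernel_size=1)             # U^A
+         self.u_m = nn.Conv2d(k, 1, kernel_size=3, padding=1)  # U^M
+         self.norm = nn.InstanceNorm2d(k, affine=True)         # IN
+
+     def forward(self, h_prev):
+         a = self.u_a(h_prev)                     # A^S: global, channel-wise
+         m = self.u_m(h_prev).expand_as(a)        # M^S tiled to K channels
+         return torch.sigmoid(self.norm(a * m))   # suppression gate G^S
+ ```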
408
+ Following the lead of Linsley et al. (2018a), we incorporated normalizations into the fGRU. Let $\mathbf{r} \in \mathbb{R}^d$ denote the vector of layer activations that will be normalized. We chose instance normalization (Ulyanov et al., 2016) since it is independent of batch size, which was 1 for $\gamma$-Nets in our experiments. Instance normalization introduces two $d$-dimensional learned parameters, $\delta, \nu \in \mathbb{R}^d$, which control the scale and bias of normalized activities and are shared across timesteps of processing. In contrast, means and variances are computed on every timestep, since fGRU activities are not i.i.d. across timesteps. Elementwise multiplication is denoted by $\odot$, and $\eta$ is a regularization hyperparameter.
409
+
410
+ Learnable gates, such as those in the fGRU, are helpful for training RNNs. But there are other heuristics that are also important for optimizing performance. We use several of these with $\gamma$-Nets, such as chrono initialization of fGRU gate biases (Tallec & Ollivier, 2018) and random orthogonal initialization of kernels (Vorontsov et al., 2017). We initialized the learnable scale parameter $\delta$ of fGRU normalizations to 0.1, since values near 0 optimize the dynamic range of gradients passing through its sigmoidal gates (Cooijmans et al., 2017). Similarly, fGRU parameters for learning additive suppression/facilitation $(\mu, \nu)$ were initialized to 0, and parameters for learning multiplicative inhibition/excitation $(\alpha, \omega)$ were initialized to 0.1. Finally, when implementing top-down connections, we incorporated an extra skip connection: the activity of layer $\ell$ was added to the fGRU-computed top-down interactions between layer $\ell$ and layer $\ell + 1$. This additional skip connection improved the stability of training.
411
+
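+ As a sketch of these initialization heuristics (function names are ours; the chrono initialization follows the log-uniform recipe of Tallec & Ollivier (2018), assuming a horizon matching the 8 timesteps used here):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ def chrono_bias_(bias: torch.Tensor, t_max: int = 8) -> None:
+     # Chrono init: b ~ log(Uniform(1, t_max - 1)), so sigmoid gates start
+     # biased toward retaining state over roughly t_max timesteps.
+     with torch.no_grad():
+         bias.uniform_(1.0, float(t_max - 1)).log_()
+
+ def init_fgru_conv(conv: nn.Conv2d, is_gate: bool) -> None:
+     # Random orthogonal kernels (Vorontsov et al., 2017); gate convolutions
+     # also receive chrono-initialized biases.
+     nn.init.orthogonal_(conv.weight)
+     if is_gate and conv.bias is not None:
+         chrono_bias_(conv.bias)
+
+ def init_fgru_norm(norm: nn.InstanceNorm2d) -> None:
+     # Scale (delta) near 0 keeps gradients through sigmoidal gates in
+     # their dynamic range; bias (nu) starts at 0.
+     nn.init.constant_(norm.weight, 0.1)
+     nn.init.constant_(norm.bias, 0.0)
+ ```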
412
+ ![](images/f21efaa376745ae72b59f973ae7365650ce3822d704042ce6b1ab8d77ac20b08.jpg)
413
+ Volume
414
+
415
+ ![](images/4bbd82ad2eb7b077a23d43fe8a9cc05830d914a527b42f843a2849cfb53e0d42.jpg)
416
+ Neurites
417
+
418
+ ![](images/f0cf753b4cc65782a859bd50fd2ed4b95185e99fed3fafa8e761eefd48cae13c.jpg)
419
+ Membrane
420
+
421
+ ![](images/d28730d18158924661223c295cfc10b25f45f95b6ee019080571034e1576d8c7.jpg)
422
+ Segmentations
423
+ Figure S1: We trained the reference 3D U-Net from (Lee et al., 2017) on the SNEMI3D dataset to validate the implementation. Segmentations here are derived by watershedding and agglomeration with GALA (Nunez-Iglesias et al., 2013), resulting in "superhuman" ARAND (evaluated according to the SNEMI3D standard; lower is better) of 0.04, which is below the reported human-performance threshold of 0.06 and on par with the published result (see Table 1 in Lee et al. 2017, mean affinity agglomeration).
424
+
425
+ # B MEMBRANE PREDICTION MODELS
426
+
427
+ Our reference model for membrane prediction is the 3D U-Net of Lee et al. (2017). This architecture consists of four encoder blocks (multiple convolutions and skip connections, pooling and subsampling), followed by four decoder blocks (transpose convolution and convolution). This U-Net uses spatial pooling between each of its encoder blocks to downsample the input, and transpose convolutions between each of its decoder blocks to upsample intermediate activities. We validated our implementation of this U-Net following the authors' training routine, and were able to replicate their reported "superhuman" performance in cell segmentation on SNEMI3D (Fig. S1).
428
+
429
+ The $\gamma$ -Net for connectomics resembles the U-Net architecture of Lee et al. (2017). This $\gamma$ -Net replaces the blocks of convolutions and skip connections of that model with a single layer of convolution followed by an fGRU (as in the high level diagram of Fig. 1c, $\mathrm{Conv}^{(\ell)} \rightarrow \mathrm{fGRU}^{(\ell)}$ ). In the encoder pathway, fGRUs store horizontal interactions between spatially neighboring units of the preceding convolutional layer in their hidden states. In the decoder pathway, $\gamma$ -Net introduces fGRUs that learn top-down connections between layers, and connect recurrent units from higher-feature processing layers to lower-feature processing ones.
430
+
431
+ <table><tr><td>Name</td><td>Tissue</td><td>Imaging</td><td>Resolution</td><td>Voxels (X/Y/Z/Volumes)</td></tr><tr><td>SNEMI3D</td><td>Mouse cortex</td><td>mbSEM</td><td>6 × 6 × 29nm</td><td>1024 × 1024 × 100 × 1</td></tr><tr><td>Ding</td><td>Mouse retina</td><td>SBEM</td><td>13.2 × 13.2 × 26nm</td><td>384 × 384 × 384 × 1</td></tr></table>
432
+
433
+ Table S3: SEM image volumes used in membrane prediction. SNEMI3D images and annotations are publicly available (Kasthuri et al., 2015), whereas the Ding dataset is a volume from (Ding et al., 2016) that we annotated.
434
+
435
+ ![](images/4d9f86fe229f0127920d76c698a78050af81d21201725de9463fa1ff90a38aa5.jpg)
436
+ Figure S2: Examples of tilt-illusion stimuli. (a) For training images, we sample over a range of size and wavelength to generate single oriented grating patches. (b) Test images are obtained by sampling a full range of surround orientation, while fixing all other parameters such as size and frequency of gratings as well as the orientation of the center gratings (at 45 degrees).
437
+
438
+ Key to the approach of Lee et al. (2017) is their use of a large set of random data augmentations applied to SEM image volumes, which simulate common noise and errors in SEM imaging: (i) misalignment between consecutive $z$-locations in each input image volume; (ii) partially or fully missing sections of the input image volumes; and (iii) blurring of portions of the image volume. Augmentations that simulated these types of noise, as well as random flips over the $xyz$-plane, rotations by $90^{\circ}$, and brightness and contrast perturbations, were applied to volumes following the settings of Lee et al. (2017). The model was trained using Adam (Kingma & Ba, 2014) and the learning rate schedule of Lee et al. (2017), in which the optimizer step-size was halved when validation loss stopped decreasing (up to four times). Training involved single-SEM-volume batches of $160 \times 160 \times 18$ (X/Y/Z), normalized to [0, 1]. As in Lee et al. (2017), models were trained to predict nearest-neighbor voxel affinities, as well as 3 other mid- to long-range voxel distances. Only nearest-neighbor affinities were used at test time.
439
+
440
+ # C ORIENTATION-TILT ILLUSION IMAGE DATASET
441
+
442
+ Models were tested for a tilt illusion by first training on grating images of a single orientation, then testing on images in which a center grating had the same/different orientation as a surround grating. Each image in the training dataset consisted of a circular patch of oriented grating on a gray canvas of size $500 \times 500$ pixels. To ensure that the decoder successfully decoded orientation information from model activities, the training dataset incorporated a wide variety of grating stimuli with 4 randomly sampled image parameters: $r$, $\lambda$, $\theta$, and $\phi$. $r$ denotes the radius of the circle in which the oriented grating was rendered, and was sampled from a uniform distribution over the interval between 80 and 240 pixels; $\lambda$ specifies the wavelength of the grating pattern and was sampled from a uniform distribution over the interval between 30 and 90 pixels; $\theta$ specifies the orientation of the gratings and was uniformly sampled from all possible orientations; $\phi$ denotes the phase offset of the oriented gratings and was also uniformly sampled from all possible values. The models' BSDS-trained weights were fixed and
443
+
444
+ ![](images/aa5bbb21dd65af23642e42adfdb121d5afe0646fa4936dfaf2ccbd487085480f.jpg)
445
+ (a) BDCN (BSDS)
446
+
447
+ ![](images/4bd9705c5f872a7e29f4ac0a2b1fb5b36643ce74d0b3f865989463c99ce35556.jpg)
448
+ Figure S3: Searching over learning rates did not rescue the BDCN from overfitting on small BSDS datasets. (a) Training and validation losses for the BDCN on different-sized subsets of BSDS500. (b) Performance after training with three different learning rates on the $5\%$ split. There is little difference in best validation performance between the three learning rates. (c) The full training and validation loss curves for the BDCN trained on $5\%$ of BSDS. The model overfits immediately. The model also overfit on the other dataset sizes, but because there was more data, this happened later in training.
449
+
450
+ ![](images/ff61aaa4387e64aa22fe675ecdec3e8d3bd87ff1ae3aa77d136b32c7c8641300.jpg)
451
+
452
+ ![](images/8758c938e2fe492a4bc09ccc13de029821b96772c96781b323b90c1ba0cfdaae.jpg)
453
+ (b) U-Net (Connectomics)
454
+
455
+ readout layers were trained to decode orientation at the center of each image (procedure described in the main text).
456
+
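+ For concreteness, a training-stimulus generator along these lines might be sketched as follows (our illustration in NumPy; the authors' rendering code may differ in details such as contrast scaling):
+
+ ```python
+ import numpy as np
+
+ def sample_training_grating(size=500, rng=np.random.default_rng(0)):
+     # One oriented grating patch on a gray canvas, with the four randomly
+     # sampled parameters described above: radius r, wavelength lam,
+     # orientation theta, and phase phi.
+     r = rng.uniform(80, 240)
+     lam = rng.uniform(30, 90)
+     theta = rng.uniform(0, np.pi)
+     phi = rng.uniform(0, 2 * np.pi)
+     y, x = np.mgrid[:size, :size] - size // 2
+     grating = np.sin(2 * np.pi * (x * np.cos(theta) + y * np.sin(theta)) / lam + phi)
+     canvas = np.zeros((size, size))            # gray background
+     mask = x ** 2 + y ** 2 <= r ** 2           # circular aperture
+     canvas[mask] = grating[mask]
+     return canvas, theta                       # image and decoding target
+ ```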
457
+ This setup allowed us to tease apart the effects of the surround on the representation of orientation in the center by introducing separate surround regions in each test image, filled with gratings of the same/different orientations as the center (Fig. S2b). Each test image was generated with one additional parameter, $\Delta \theta$, which specified the orientation difference of the surround gratings with respect to the center orientation, $\theta$, and was sampled from a uniform distribution over the interval between $-90$ and $+90$ degrees. The radius of the surround grating is denoted by $r$ and was sampled from the same uniform distribution used for the training dataset. Center gratings were then rendered in a circle whose radius was one half that of the surround gratings.
458
+
459
+ ![](images/40c9020403642b22f4bd3868b9e25a33aeaf992c3c20705fcc68f8afe424e0ff.jpg)
460
+ Figure S4: Performance of lesioned $\gamma$ -Nets. We evaluate the F1 ODS of these models on BSDS500 test images (left) and test for the presence of an orientation-tilt illusion (right). Top row: A comparison of $\gamma$ -Nets with "untied weights" (i.e., unrolled to 8-timesteps with unique weights on every timestep), only top-down connections, only horizontal connections (using the standard $3 \times 3$ horizontal kernels), and only horizontal connections but with larger kernels ( $15 \times 15$ ). The full $\gamma$ -Net outperformed each of these models on all subsets of BSDS500 data. Both horizontal-only models captured the repulsive regime of the orientation-tilt illusion when center and surround gratings were similar, but only the version with larger kernels also showed an attractive regime, when the center and surround orientations were dissimilar. Bottom row: A comparison of $\gamma$ -Nets with no recurrence (i.e., one timestep of processing), no constraint for non-negativity (see fGRU formulation in main text), no parameters for additive feedback ( $\mu$ and $\kappa$ ), and no parameters for multiplicative feedback ( $\gamma$ and $\omega$ ). Once again, the full $\gamma$ -Net outperformed each of these models. While none of these models showed the full orientation-tilt illusion, the $\gamma$ -Net without non-negativity constraints and the $\gamma$ -Net without additive feedback showed the repulsive regime of the orientation-tilt illusion.
461
+
462
+ ![](images/250b6eb24cddd88d75fb0f2422a9c087c29482bbba4ef770a6d0c410c7d6f0ea.jpg)
463
+
464
+ ![](images/d917a4c7e7c6ce11c5a8ca1b60b40c91603bcaff06b0c09fcd1543fa8502b519.jpg)
465
+
466
+ ![](images/1b2d3d272eb5a6182649c05428ec89ec828aff7084992ddea3ce50f65f9dc59d.jpg)
467
+
468
+ ![](images/813914f63507464efe13e5bbc88691465d4181fcd746eab7713c87604a5dcd92.jpg)
469
+
470
+ ![](images/1c40d4948c18dfa0432b41407b5637c7f6261679b055bd1395a65070a5708c5f.jpg)
471
+ Figure S5: $\gamma$ -Nets trained for contour detection learn to approach a steady-state solution. The processing timecourse of $\gamma$ -Net predictions on representative images from the BSDS500 test set. The L2 norm of per-pixel differences between predictions on consecutive timesteps (i.e., timestep 2 - timestep 1) approaches 0, indicating that the model converges towards steady state by the end of its processing timecourse.
472
+
473
+ ![](images/a8afc31d954168df757352d83033353fe8dea967d29c8235e0fd59cdd77d68e3.jpg)
474
+
475
+ ![](images/d6c5e9ddea30bc301267899839f5b2e3e74100ec48d68566200c3fd021e022e1.jpg)
476
+ Figure S6: Performance of $\gamma$-Nets during experiments to correct an orientation-tilt illusion. The illusion-corrected model was trained to have veridical representations of the central grating in tilt-illusion stimuli. To control for potential detrimental effects of the training procedure per se, a control model ("domain-transfer control") was trained to decode orientations of single-grating stimuli. (a) Training causes the contour-detection performance of both models to drop. However, the illusion-corrected model's performance drops significantly more than the biased model's (see main text for hypothesis testing). The losses for both models converge towards 0 across training, indicating that both learned to decode the central orientations of their stimuli. (b) Contour detection examples for biased and bias-corrected models across steps of this training procedure.
477
+
478
+ ![](images/9a217d1cdf1109c82ebfe61530d9ee5193ad542d7cf11acb75e417dfc215c7e8.jpg)
479
+
480
+ ![](images/76c7c467c57c0f030e41c29c62dd812650fa779c24fee8f2e52db9daded604e9.jpg)
481
+
482
+ ![](images/b12365775bb374f60e2ed03a56fcf40fd860f51ced0b3d879fe35f1751944b1d.jpg)
483
+ Figure S7: Differences in contour predictions for the illusion-corrected and domain-transfer control $\gamma$ -Nets on BSDS500.
recurrentneuralcircuitsforcontourdetection/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b9d5966a4b75a6c70ec9341692eabb5377816f340e4e01f5ef87af9409ea3916
3
+ size 1498944
recurrentneuralcircuitsforcontourdetection/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:24476a56f1cef88be1f449217a8a4f8f4dd57031d5a61d535e6758e86b88ef12
3
+ size 742964
reducingtransformerdepthondemandwithstructureddropout/1c2abfdf-944b-4fa5-8a3f-7586904a5c4a_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b5818380980d3ca3ab088f9214a0492bf06d21ffd8646ac0720b9f6296a6a51a
3
+ size 89195
reducingtransformerdepthondemandwithstructureddropout/1c2abfdf-944b-4fa5-8a3f-7586904a5c4a_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f8f3513f14c8f5b13afae310f8748f9617f2cf9f36b3d9f88dd6be3f9df77da3
3
+ size 111283
reducingtransformerdepthondemandwithstructureddropout/1c2abfdf-944b-4fa5-8a3f-7586904a5c4a_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:43aa216ee136004b4d9e78bad3af790d2f9e04664f5db19cdb6906ad07937908
3
+ size 590795
reducingtransformerdepthondemandwithstructureddropout/full.md ADDED
@@ -0,0 +1,388 @@
 
 
 
 
1
+ # REDUCING TRANSFORMER DEPTH ON DEMAND WITH STRUCTURED DROPOUT
2
+
3
+ Angela Fan
4
+
5
+ Facebook AI Research/LORIA
6
+
7
+ angelafan@fb.com
8
+
9
+ Edouard Grave
10
+
11
+ Facebook AI Research
12
+
13
+ egrave@fb.com
14
+
15
+ Armand Joulin
16
+
17
+ Facebook AI Research
18
+
19
+ ajoulin@fb.com
20
+
21
+ # ABSTRACT
22
+
23
+ Overparameterized transformer networks have obtained state of the art results in various natural language processing tasks, such as machine translation, language modeling, and question answering. These models contain hundreds of millions of parameters, necessitating a large amount of computation and making them prone to overfitting. In this work, we explore LayerDrop, a form of structured dropout, which has a regularization effect during training and allows for efficient pruning at inference time. In particular, we show that it is possible to select sub-networks of any depth from one large network without having to finetune them and with limited impact on performance. We demonstrate the effectiveness of our approach by improving the state of the art on machine translation, language modeling, summarization, question answering, and language understanding benchmarks. Moreover, we show that our approach leads to small BERT-like models of higher quality compared to training from scratch or using distillation.
24
+
25
+ # 1 INTRODUCTION
26
+
27
+ Transformer architectures (Vaswani et al., 2017) have become the dominant architecture in natural language processing, with state-of-the-art performance across a variety of tasks, including machine translation (Vaswani et al., 2017; Ott et al., 2018), language modeling (Dai et al., 2019; Baevski & Auli, 2018) and sentence representation (Devlin et al., 2018; Yang et al., 2019). Each of its layers contains millions of parameters accessed during the forward pass, making it computationally demanding in terms of memory and latency during both training and inference. In an ideal situation, we would be able to extract sub-networks — automatically and without finetuning — from this over-parameterized network, for any given memory or latency constraint, while maintaining good performance. In contrast, standard pruning or distillation methods follow a strategy that often includes a finetuning or retraining step, and the process must be repeated for each desired depth.
28
+
29
+ In this work, we propose a novel approach to extract any sub-network without a post-hoc pruning process from over-parameterized networks. The core of our method is to sample small sub-networks from the larger model during training by randomly dropping model weights as in Dropout (Hinton et al., 2012) or DropConnect (Wan et al., 2013). This has the advantage of making the network robust to subsequent pruning. If well-chosen groups of weights are dropped simultaneously, the resulting small sub-networks can be very efficient. In particular, we drop entire layers to extract shallow models at inference time. Previous work (Huang et al., 2016) has shown that dropping layers during training can regularize and reduce the training time of very deep convolutional networks. In contrast, we focus on pruning. As illustrated in Figure 1, an advantage of our layer dropping technique, or LayerDrop, is that from one single deep model, we can extract shallow sub-networks of any desired depth on demand at inference time.
30
+
31
+ We validate our findings on a variety of competitive benchmarks, namely WMT14 English-German for machine translation, WikiText-103 (Merity et al., 2016) for language modeling, CNN-Dailymail (Hermann et al., 2015) for abstractive summarization, ELI5 (Fan et al., 2019) for long form question answering, and several natural language understanding tasks (Wang et al., 2019a) for sentence representation. Our approach achieves state of the art on most of these benchmarks as a result of the regularization effect, which stabilizes the training of larger and deeper networks. We also show that we can prune Transformer architectures to much smaller models while maintaining com
32
+
33
+ ![](images/fe77900d21af09e9edf11ea87c953b1978f8e19bdc41b3a53556a3aae128f267.jpg)
34
+ Figure 1: LayerDrop (right) randomly drops layers at training time. At test time, this allows for sub-network selection to any desired depth as the network has been trained to be robust to pruning. In contrast to standard approaches that must re-train a new model from scratch for each model size (left), our method trains only one network from which multiple shallow models can be extracted.
35
+
36
+ petitive performance, outperforming specific model reduction strategies dedicated to BERT (Devlin et al., 2018; Sanh, 2019) as well as training smaller models from scratch. Overall, applying LayerDrop to Transformer networks provides the following key advantages:
37
+
38
+ - LayerDrop regularizes very deep Transformers and stabilizes their training, leading to state-of-the-art performance across a variety of benchmarks.
39
+ - Small and efficient models of any depth can be extracted automatically at test time from a single large pre-trained model, without the need for finetuning.
40
+ - LayerDrop is as simple to implement as dropout.
41
+
42
+ # 2 RELATED WORK
43
+
44
+ Our approach is a form of Dropout (Srivastava et al., 2014) applied to model weights instead of activations, as in DropConnect (Wan et al., 2013). Different from DropConnect, we drop groups of weights to induce group redundancy to create models suited for pruning to shallow, efficient models at inference time. Gomez et al. (2018) propose a targeted Dropout and DropConnect, where they learn the drop rate of the weights to match a targeted pruning scheme. Instead, we adapt the masks to the structures that we are interested in pruning. Closer to our work, the Stochastic Depth approach of Huang et al. (2016) drops layers randomly during training. As opposed to our work, they are interested in accelerating the training of very deep ResNets (He et al., 2016), so their dropping schedule is adapted to this goal. Concurrently to this work, Pham et al. (2019) applied Stochastic Depth to train very deep Transformers for speech and show the benefits of its regularization effect.
45
+
46
+ More generally, our method is a form of structured pruning (Liu et al., 2018b). As opposed to weight pruning (LeCun et al., 1990), structured pruning removes coherent groups of weights to preserve the original structure of the network. Structured pruning has been used in some NLP applications, such as machine translation (See et al., 2016), text classification (Joulin et al., 2016) and language modeling (Murray & Chiang, 2015). However, it has been more widely adopted in computer vision and applied to convolutional networks to remove filters (Li et al., 2016; Wen et al., 2016), channels (He et al., 2017), or residual blocks (Huang et al., 2018; Huang & Wang, 2018). Similar to Mittal et al. (2018), we take advantage of the plasticity of neural networks to learn models that are resilient to random pruning or skipping connections (Wang et al., 2018; Wu et al., 2018; Liu et al., 2018a), rather than learning the pruning itself. We refer the reader to Liu et al. (2018b) for an exhaustive study of these approaches and their evaluation in the context of convolutional networks.
47
+
48
+ Reducing the memory footprint of Transformer architectures and BERT in particular is an active subject of research. Several works have compressed BERT as a post-processing step using different forms of distillation (Turc et al., 2019; Tang et al., 2019; Shulga, 2019; Sanh, 2019). Similarly, various papers have shown evidence that Transformers are over-parameterized, especially that most self-attention heads can be dropped at test time (Michel et al., 2019; Voita et al., 2019). Different
49
+
50
+ from these, our models are trained to be resilient to pruning, which significantly reduces the performance drop induced by test time pruning. Others have proposed trainable adaptive mechanisms to control their memory footprint (Jernite et al., 2016; Sukhbaatar et al., 2019; Correia et al., 2019). These approaches are complementary to ours and should benefit from each other.
51
+
52
+ # 3 METHOD
53
+
54
+ In this section, we briefly introduce the Transformer, then describe our Structured Dropout technique and its application to layers. We also discuss several inference time pruning strategies.
55
+
56
+ # 3.1 THE TRANSFORMER ARCHITECTURE
57
+
58
+ We succinctly review the Transformer architecture and refer the reader to Vaswani et al. (2017) for additional details. A Transformer is a stack of layers composed of two sub-layers: multi-head self-attention followed by a feedforward sub-layer. The multi-head self-attention sub-layer consists of multiple attention heads applied in parallel. Each attention head takes a matrix $\mathbf{X}$ where each row represents an element of the input sequence and updates their representations by gathering information from their context using an Attention mechanism (Bahdanau et al., 2014):
59
+
60
+ $$
61
+ \mathbf{Y} = \operatorname{Softmax}\left(\mathbf{X}^{T}\mathbf{K}(\mathbf{Q}\mathbf{X} + \mathbf{P})\right)\mathbf{V}\mathbf{X},
62
+ $$
63
+
64
+ where $\mathbf{K},\mathbf{V},\mathbf{Q}$ and $\mathbf{P}$ are matrices of parameters. The outputs of the heads are then concatenated along the time step into a sequence of vectors.
65
+
66
+ The second sub-layer then applies a fully connected feedforward network to each element of this sequence independently, $\mathrm{FFN}(\mathbf{x}) = \mathbf{U}\mathrm{ReLU}(\mathbf{V}\mathbf{x})$, where $\mathbf{V}$ and $\mathbf{U}$ are matrices of parameters. Each sub-layer is followed by an AddNorm operation, that is, a residual connection (He et al., 2016) and a layer normalization (Ba et al., 2016).
67
+
68
+ # 3.2 TRAINING TRANSFORMERS WITH RANDOM STRUCTURED PRUNING
69
+
70
+ We present a regularization approach that makes Transformers robust to subsequent structured pruning at inference time. We focus in particular on the case where the targeted structure is a layer.
71
+
72
+ # 3.2.1 RANDOMLY DROPPING STRUCTURES AT TRAINING TIME
73
+
74
+ Regularizing networks to be robust to pruning can be achieved by randomly removing weights during its training as in DropConnect (Wan et al., 2013). In this approach, each weight is dropped independently following a Bernoulli distribution associated with a parameter $p > 0$ that controls the drop rate. This is equivalent to a pointwise multiplication of the weight matrix $\mathbf{W}$ with a randomly sampled $\{0,1\}$ mask matrix $\mathbf{M}$ :
75
+
76
+ $$
77
+ \mathbf{W}_{d} = \mathbf{M} \odot \mathbf{W}.
78
+ $$
79
+
80
+ DropConnect is a form of random unstructured pruning that leads to smaller, but not necessarily more efficient, models. We propose to add structure to this mechanism to target model efficiency.
81
+
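+ As a brief sketch of this unstructured masking (ours, not the released implementation):
+
+ ```python
+ import torch
+
+ def dropconnect(w: torch.Tensor, p: float) -> torch.Tensor:
+     # W_d = M * W, with each entry of M drawn from Bernoulli(1 - p).
+     mask = torch.bernoulli(torch.full_like(w, 1.0 - p))
+     return mask * w
+ ```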
82
+ Random Structured Dropout. The weights of a Transformer network belong to multiple overlapping structures, such as heads, FFN matrices, or layers. Dropping weights using groups that follow some of these inherent structures potentially leads to a significant reduction of the inference time. This is equivalent to constraining the mask $\mathbf{M}$ to be constant over some predefined groups of weights. More precisely, given a set $\mathcal{G}$ of predefined groups of weights, the $\{0,1\}$ mask matrix $\mathbf{M}$ is randomly sampled over groups instead of weights:
83
+
84
+ $$
85
+ \forall i,\ \mathbf{M}[i] \in \{0, 1\}, \quad \text{and} \quad \forall G \in \mathcal{G},\ \forall (i, j) \in G,\ \mathbf{M}[i] = \mathbf{M}[j].
86
+ $$
87
+
88
+ This structured dropout formulation is general and can be applied to any overlapping groups of weights, whether heads, FFN matrices, or layers. Nonetheless, not all of the structures in a Transformer lead to the same benefits when dropped. For example, dropping attention heads does not reduce runtime as they are usually computed in parallel. For simplicity, we focus on dropping layers, and we name this structured pruning, LayerDrop. This is inspired by the Stochastic Depth approach of Huang et al. (2016) used to train very deep ResNets (He et al., 2015).
89
+
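+ A minimal sketch of LayerDrop over a stack of layers follows; the class and argument names are our assumptions, and the released fairseq implementation may differ in details:
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class LayerDropStack(nn.Module):
+     # Each layer is skipped with probability p during training; at
+     # inference, either all layers run or only the indices in `keep`.
+     def __init__(self, layers: nn.ModuleList, p: float = 0.2):
+         super().__init__()
+         self.layers = layers
+         self.p = p
+
+     def forward(self, x, keep=None):
+         for i, layer in enumerate(self.layers):
+             if self.training and torch.rand(()).item() < self.p:
+                 continue                     # drop the whole layer
+             if not self.training and keep is not None and i not in keep:
+                 continue                     # inference-time pruning
+             x = layer(x)
+         return x
+ ```
+
+ Because the same trained weights serve every depth, the `keep` argument is all that changes between the full model and a pruned sub-network.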
90
+ # 3.2.2 PRUNING AT INFERENCE TIME
91
+
92
+ Selecting Layers to Prune. Training with LayerDrop makes the network more robust to predicting with missing layers. However, LayerDrop does not explicitly provide a way to select which groups to prune. We consider several different pruning strategies, described below:
93
+
94
+ - Every Other: A straightforward strategy is to simply drop every other layer. Pruning with a rate $p$ means dropping the layers at depth $d$ such that $d \equiv 0 \pmod{\lfloor \frac{1}{p} \rfloor}$ (see the sketch after this list). This strategy is intuitive and leads to balanced networks.
95
+ - Search on Valid: Another possibility is to compute various combinations of layers to form shallower networks using the validation set, then select the best performing for test. This is straightforward but computationally intensive and can lead to overfitting on validation.
96
+ - Data Driven Pruning: Finally, we propose data driven pruning where we learn the drop rate of each layer. Given a target drop rate $p$ , we learn an individual drop rate $p_d$ for the layer at depth $d$ such that the average rate over layers is equal to $p$ . More precisely, we parameterize $p_d$ as a non-linear function of the activation of its layer and apply a softmax. At inference time, we forward only the fixed top-k highest scoring layers based on the softmax output (e.g. chosen layers do not depend on the input features).
97
+
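+ A sketch of the Every Other rule referenced in the first bullet (ours; layer indexing conventions may differ from the released code):
+
+ ```python
+ import math
+
+ def every_other_keep(num_layers: int, p: float) -> list:
+     # Drop layers at depths d with d = 0 (mod floor(1/p)); keep the rest.
+     step = math.floor(1.0 / p)
+     return [d for d in range(num_layers) if d % step != 0]
+
+ print(every_other_keep(16, 0.5))   # keeps 8 of 16 layers: [1, 3, ..., 15]
+ ```
+
+ The returned indices can serve directly as the `keep` set in the LayerDrop sketch above.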
98
+ In practice, we observe that the Every Other strategy works surprisingly well across many tasks and configurations. Search on Valid and Data Driven Pruning only offer marginal gains. Note that we do not further finetune any of the pruned networks (see Appendix for analysis of finetuning).
99
+
100
+ Setting the drop rate for optimal pruning. There is a straightforward relationship between the drop rate of groups and the average pruning level that the network should be resilient to. Assuming $N$ groups and a fixed drop ratio $p$ , the average number of groups used by the network during training is $N(1 - p)$ . As a consequence, to target a pruning size of $r$ groups, the optimal drop rate is:
101
+
102
+ $$
103
+ p^{*} = 1 - \frac{r}{N}
104
+ $$
105
+
106
+ In practice, we observe that networks are more robust to pruning than their expected ratio, but higher pruning rates lead to better performance for smaller models. For example, targeting $r = 8$ of $N = 16$ layers gives $p^{*} = 1 - 8/16 = 0.5$. We use a LayerDrop rate of $p = 0.2$ for all our experiments, but we recommend $p = 0.5$ to target very small inference time models.
107
+
108
+ # 4 EXPERIMENTAL SETUP
109
+
110
+ We apply our method to a variety of sequence modeling tasks: neural machine translation, language modeling, summarization, long form question answering, and various natural language understanding tasks. Our models are implemented in PyTorch using fairseq-py (Ott et al., 2019). Additional implementation and training details with hyperparameter settings are in the Appendix.
111
+
112
+ Neural Machine Translation. We experiment on the WMT English-German machine translation benchmark using the Transformer Big architecture. We use the dataset of 4.5M en-de sentence pairs from WMT16 (Vaswani et al., 2017) for training, newstest2013 for validation, and newstest2014 for test. We optimize the dropout value within the range $\{0.1, 0.2, 0.5\}$ on the validation set and set the LayerDrop rate $p$ to 0.2. For generation, we average the last 10 checkpoints, set the length penalty to 0.6, and beam size to 8, following the settings suggested in Wu et al. (2019a), and measure case-sensitive tokenized BLEU. We apply compound splitting, as used in Vaswani et al. (2017).
113
+
114
+ Language Modeling. We experiment on the Wikitext-103 language modeling benchmark (Merity et al., 2016), which contains 100M tokens and a large vocabulary of 260K words. We adopt the 16 layer Transformer used in Baevski & Auli (2018). We set the LayerDrop rate $p$ to 0.2 and tune the standard dropout parameter in $\{0.1, 0.2, 0.3\}$ on the validation set. We report test set perplexity (PPL).
115
+
116
+ <table><tr><td>Model</td><td>Enc Layers</td><td>Dec Layers</td><td>BLEU</td></tr><tr><td>Transformer (Vaswani et al., 2017)</td><td>6</td><td>6</td><td>28.4</td></tr><tr><td>Transformer (Ott et al., 2018)</td><td>6</td><td>6</td><td>29.3</td></tr><tr><td>DynamicConv (Wu et al., 2019a)</td><td>7</td><td>6</td><td>29.7</td></tr><tr><td>Transformer (Ott et al., 2018) + LayerDrop</td><td>6</td><td>6</td><td>29.6</td></tr><tr><td>Transformer (Ott et al., 2018) + LayerDrop</td><td>12</td><td>6</td><td>30.2</td></tr></table>
117
+
118
+ Table 1: Results on WMT en-de Machine Translation (newstest2014 test set)
119
+
120
+ <table><tr><td>Model</td><td>Layers</td><td>Params</td><td>PPL</td></tr><tr><td>Adaptive Inputs (Baevski &amp; Auli, 2018)</td><td>16</td><td>247M</td><td>18.7</td></tr><tr><td>Transformer XL Large (Dai et al., 2019)</td><td>18</td><td>257M</td><td>18.3</td></tr><tr><td>Adaptive Inputs + LayerDrop</td><td>16</td><td>247M</td><td>18.3</td></tr><tr><td>Adaptive Inputs + LayerDrop</td><td>40</td><td>423M</td><td>17.7</td></tr></table>
121
+
122
+ Table 2: Results on Wikitext-103 language modeling benchmark (test set).
123
+
124
+ Summarization. We adopt the Transformer base architecture and training schedule from Edunov et al. (2019) and experiment on the CNN-Dailymail multi-sentence summarization benchmark. The training data contains over 280K full-text news articles paired with multi-sentence summaries (Hermann et al., 2015; See et al., 2017). We tune a generation length in the range $\{40, 50, 60\}$ and use 3-gram blocking. We set the LayerDrop rate $p$ to 0.2. We evaluate using ROUGE (Lin, 2004).
125
+
126
+ Long Form Question Answering. We consider the Long Form Question Answering Dataset ELI5 of Fan et al. (2019), which consists of 272K question answer pairs from the subreddit Explain Like I'm Five along with extracted supporting documents from web search. We follow the Transformer Big architecture and training procedure of Fan et al. (2019). We generate long answers using beam search with beam size 5 and apply 3-gram blocking (Fan et al., 2017). We evaluate with ROUGE.
127
+
128
+ Sentence Representation Pre-training. We train base and large BERT (Devlin et al., 2018) models following the open-source implementation of Liu et al. (2019). We use two datasets: Bookscorpus + Wiki from Liu et al. (2019) and the larger combination of Bookscorpus + OpenWebText + CC-News + Stories (Liu et al., 2019). We evaluate the pretrained models on various natural language understanding tasks. Specifically, we evaluate accuracy on MRPC (Dolan & Brockett, 2005), QNLI (Rajpurkar et al., 2016), MNLI (Williams et al., 2018), and SST2 (Socher et al., 2013).
129
+
130
+ # 5 RESULTS
131
+
132
+ # 5.1 LAYERDROP AS A REGULARIZER
133
+
134
+ Language Modeling. In Table 2, we show the impact of LayerDrop on the performance of a Transformer network trained in the setting of Adaptive Inputs (Baevski & Auli, 2018). Adding LayerDrop to a 16 layer Transformer improves the performance by 0.4 perplexity, matching the state-of-the-art results of Transformer-XL. Our 40 layer Transformer with LayerDrop further improves the state of the art by 0.6 points. Very deep Transformers are typically hard to train because of instability and memory usage, and they are prone to overfitting on a small dataset like Wikitext-103. LayerDrop regularizes the network, reduces the memory usage, and increases training stability as fewer layers are active at each forward pass. These results confirm that this type of approach can be used to efficiently train very deep networks, as shown in Huang et al. (2016) for convolutional networks.
135
+
136
+ Sequence to sequence modeling. Similarly, as shown in Table 1 and Table 3, applying LayerDrop to Transformers on text generation tasks such as neural machine translation, summarization, and long form question answering also boosts performance for all tasks. In these experiments, we take the Transformer architectures that are state-the-art and train them with LayerDrop. In neu
137
+
138
+ <table><tr><td>Model</td><td>Enc</td><td>Dec</td><td>ROUGE-1</td><td>ROUGE-2</td><td>ROUGE-L</td></tr><tr><td colspan="6">Abstractive Summarization</td></tr><tr><td>Transformer (Edunov et al., 2019)</td><td>6</td><td>6</td><td>40.1</td><td>17.6</td><td>36.8</td></tr><tr><td>Transformer + LayerDrop</td><td>6</td><td>6</td><td>40.5</td><td>17.9</td><td>37.1</td></tr><tr><td>Transformer + LayerDrop</td><td>6</td><td>8</td><td>41.1</td><td>18.1</td><td>37.5</td></tr><tr><td colspan="6">Long Form Question Answering</td></tr><tr><td>Transformer Multitask (Fan et al., 2019)</td><td>6</td><td>6</td><td>28.9</td><td>5.4</td><td>23.1</td></tr><tr><td>Transformer Multitask + LayerDrop</td><td>6</td><td>6</td><td>29.4</td><td>5.5</td><td>23.4</td></tr></table>
139
+
140
+ Table 3: Results for CNN-Dailymail Summarization and ELI5 QA (test set).
141
+
142
+ <table><tr><td>Data</td><td>Layers</td><td>Model</td><td>MNLI-m</td><td>MRPC</td><td>QNLI</td><td>SST2</td></tr><tr><td rowspan="2">Books + Wiki</td><td>24</td><td>RoBERTa</td><td>89.0</td><td>90.2</td><td>93.9</td><td>95.3</td></tr><tr><td>24</td><td>RoBERTa + LayerDrop</td><td>89.2</td><td>90.2</td><td>94.2</td><td>95.4</td></tr><tr><td rowspan="3">+ more data</td><td>24</td><td>RoBERTa</td><td>90.2</td><td>90.9</td><td>94.7</td><td>96.4</td></tr><tr><td>24</td><td>RoBERTa + LayerDrop</td><td>90.1</td><td>91.0</td><td>94.7</td><td>96.8</td></tr><tr><td>48</td><td>RoBERTa + LayerDrop</td><td>90.4</td><td>90.9</td><td>94.8</td><td>96.9</td></tr></table>
143
+
144
+ Table 4: Results on Various NLU Tasks for RoBERTa Large trained for 500K updates (dev set).
145
+
146
+ ral machine translation on newstest2014, our 12 encoder layer Transformer model with LayerDrop further improves the state of the art, reaching 30.2 BLEU. In comparison, a standard Transformer trained without LayerDrop diverges with 12 encoder layers. This is a known problem, and techniques such as improved initialization could be used to maintain stability (Junczys-Dowmunt, 2019; Zhang et al., 2019; Wang et al., 2019b; Wu et al., 2019b), but are out of the scope of this work. Similar results are seen in summarization.
147
+
148
+ Bi-Directional Pre-training. In a second set of experiments, we look at the impact of LayerDrop on pre-training for sentence representation models and subsequent finetuning on multiple natural language understanding tasks. We compare our models to a variant of BERT for sentence representations, called RoBERTa (Liu et al., 2019), and analyze the results of finetuning for data adaptation on MNLI, MRPC, QNLI, and SST2. We apply LayerDrop during both pre-training and finetuning.
149
+
150
+ We compare the performance of the large architecture on the BooksCorpus+Wiki dataset used in BERT. We analyze the performance of training on the additional data used in RoBERTa as well as pre-training for even longer. Comparing fixed model size and training data, LayerDrop can improve the performance of RoBERTa on several tasks. LayerDrop can further be used to both enable and stabilize the training (Huang et al., 2016) of models double the size for even stronger performance.
151
+
152
+ # 5.2 PRUNING TRANSFORMER LAYERS TO ON-DEMAND深度WITH LAYERDROP
153
+
154
+ Pruning Generation Tasks. In Figure 2, we investigate the impact of the number of pruned decoder layers on the performance of a Transformer for language modeling, neural machine translation, and summarization. We compare three different settings: standard Transformer models trained without LayerDrop but subsequently pruned, standard Transformer models trained from scratch to each desired depth, and lastly our approach: pruning layers of a Transformer trained with LayerDrop. Our model is trained once with the maximum number of layers and then pruned to the desired depth, without any finetuning in the shallower configuration. Our approach outperforms small models trained from scratch, showing that LayerDrop leads to more accurate small models at a whole range of depths. Further, training with LayerDrop does not incur the computational cost of retraining a new model for each desired depth. For completeness, dropping layers of a deep Transformer trained without LayerDrop performs poorly as it was not trained to be robust to missing layers.
155
+
156
+ Pruning BERT-like Models. In Figure 3 (left), we compare pruning Transformers trained with LayerDrop to different approaches used to create smaller, shallower models. We compare to BERT
157
+
158
+ ![](images/56c0d0ac88a2e9634b5b688534f4f4b7b0014a0dfe03cd9141a858176409717a.jpg)
159
+ Figure 2: Performance as a function of Pruning on various generation tasks (test set), compared to training smaller models from scratch and pruning a Transformer baseline trained without LayerDrop. Pruning networks with LayerDrop performs strongly compared to these alternatives.
160
+
161
+ ![](images/6ed86009376b227fa4375bbc5c9d1c3f48f6494c117d9125b7e91fe3b8eb2773.jpg)
162
+
163
+ ![](images/03eb9ae2a6ca61467235ae1bc1c0d451916667b62985c0d72f06e6612eb12fd7.jpg)
164
+
165
+ ![](images/ca85dd9a3c0eb8e08ed6f090a4fd13800ed3f6f6761ef5e463a6118000f8e63f.jpg)
166
+
167
+ ![](images/70e403aa574822587a226af1e6485d10988d129ee65cd5bc9399ab497eb58310.jpg)
168
+
169
+ ![](images/49e8a6700db496adddac8e5a5c54dec86d54418a7dd7f1384e98a725714a9274.jpg)
170
+
171
+ <table><tr><td></td><td>MNLI</td><td>SST2</td></tr><tr><td colspan="3">6 Layers (50% Pruned)</td></tr><tr><td>RoBERTa</td><td>82.3</td><td>92.1</td></tr><tr><td>+ LayerDrop</td><td>82.9</td><td>92.5</td></tr><tr><td>+ more data</td><td>84.1</td><td>93.2</td></tr><tr><td colspan="3">3 Layers (75% Pruned)</td></tr><tr><td>RoBERTa</td><td>78.1</td><td>90.3</td></tr><tr><td>+ LayerDrop</td><td>78.6</td><td>90.5</td></tr><tr><td>+ more data</td><td>82.2</td><td>92.0</td></tr></table>
172
+
173
+ ![](images/8a2dfc207f17b16f73983408d62eee40ee8431df7aa03384b5abe2a231213142.jpg)
174
+ Figure 3: (left) Performance as a function of Pruning on MNLI and SST2 compared to BERT and RoBERTa trained from scratch and DistilBERT. Pruning one network trained with LayerDrop (blue) outperforms alternatives that require a new network for each point. (right) Performance when Training on More Data shows even stronger results on MNLI and SST2 for pruned models.
175
+
176
+ base and RoBERTa base trained from scratch with 6 and 3 layers as well as recent work on distillation, called DistilBERT (Sanh, 2019). We analyze both BERT and RoBERTa models as the vocabulary is not the same due to differences in subword tokenization, which affects performance.
177
+
178
+ DistilBERT occasionally performs worse than BERT of the same size trained from scratch, which confirms the findings of Liu et al. (2018b) about the performance of pruned models compared to training small models from scratch. Our approach, however, obtains results better than BERT and RoBERTa trained from scratch. Further, our method does not need any post-processing: we simply prune every other layer of our RoBERTa model that has been pre-trained with LayerDrop and finetune the small models on each of the downstream tasks, following standard procedure. When training with additional data, shown in Figure 3 (right), even stronger performance can be achieved.
179
+
180
+ # 6 ABLATION STUDIES
181
+
182
+ Comparison of Structured Dropout. Figure 4 (left) contrasts various forms of structured dropout: dropping attention heads, FFN matrices, and entire Transformer layers. Dropping heads alone is worse than dropping entire sub-layers or layers. It also offers no advantage in terms of running time, as attention heads are computed in parallel for computational efficiency. We observe no large differences between dropping sub-layers and layers, possibly because we are working with relatively shallow networks. In theory, dropping sub-layers should perform better and we expect this to be the
183
+
184
+ ![](images/abfb1a409d3d44bc3439c9417725e1977600e09248ee8172363598afd93f08b7.jpg)
185
+ Figure 4: (left) Impact of Various Structured Dropouts on Wikitext-103 Valid. Dropping Layers is straightforward and has strong performance. (right) Comparison of Pruning Strategies on Wikitext-103 Valid. Marginal gains can be achieved, but dropping every other layer is hard to beat.
186
+
187
+ ![](images/baea7a96bc376a05ff58b03d425d3fc57bba895145970aae9175d1501d2c33de.jpg)
188
+
189
+ ![](images/5bf1cfbadab4c4a381aa64dead87ee2529398eff0052759b27c05a7322948278.jpg)
190
+ Figure 5: Relative Importance of Specific Layers (Wikitext-103 Valid). The full network is pruned into various 8 layer sub-network configurations, and the average perplexity when pruning layer $n$ is displayed above.
191
+
192
+ ![](images/682b65c042c6041d9f7bd2b45b2700d4e0f4d9756c360c37f85fa3c3157c811c.jpg)
193
+ Figure 6: Effect of Train LayerDrop on Inference-time Pruning. (Wikitext-103 Valid) Training with larger LayerDrop is beneficial for significant pruning.
194
+
195
+ case with very deep Transformers. We experiment with overlapping structured groups, such as heads + layers and heads + sub-layers, and find that their beneficial effects can be combined. We focus on layers for simplicity, as dropping more structures introduces more parameters to tune.
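+
+ To make the layer-level mechanism concrete, the following is a minimal sketch (not the authors' code) of LayerDrop applied to entire layers; `layers` is assumed to be any stack of residual Transformer blocks mapping a tensor to a tensor.
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ def layerdrop_forward(x: torch.Tensor, layers: nn.ModuleList,
+                       p_drop: float = 0.2, training: bool = True) -> torch.Tensor:
+     """Skip each layer independently with probability p_drop during training."""
+     for layer in layers:
+         # The residual connection lets x pass through unchanged when a layer
+         # is dropped, so the network stays well-formed at any effective depth.
+         if training and torch.rand(1).item() < p_drop:
+             continue
+         x = layer(x)
+     return x
+ ```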
196
+
197
+ Comparison of Various Pruning Strategies. Figure 4 (right) contrasts various approaches to sub-selecting model layers at inference time.
198
+
199
+ The predominant method used in this paper, the straightforward strategy of selecting every other layer, is hard to beat. We find that only marginal improvements can be gained by searching over the validation set for the best set of 8 layers or by learning which layers to drop. In contrast, dropping chunks of consecutive layers is harmful: removing the first or last half of a model is particularly damaging, as the model loses the ability to process the input or to project to the full vocabulary to predict the subsequent word.
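+
+ As an illustration, the Every Other Layer strategy is nothing more than a strided selection over the trained layer stack; the helper below is a hypothetical sketch, not the paper's implementation.
+
+ ```python
+ def prune_every_other(layers):
+     # Keep layers 0, 2, 4, ...: a 16-layer model becomes an 8-layer model.
+     return [layer for i, layer in enumerate(layers) if i % 2 == 0]
+
+ kept = prune_every_other(list(range(16)))  # -> [0, 2, 4, 6, 8, 10, 12, 14]
+ ```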
200
+
201
+ Choosing which Layers to Prune. Not all layers are equally important. In an experiment on Wikitext-103, we pruned random selections of 8 layers. Figure 5 displays the average perplexity when layer $n$ is removed, computed over 20 pruned models per layer. The input and output layers of a network are the most important, as they process the input and project to the output vocabulary.
202
+
203
+ Relationship between LayerDrop at Training Time and Pruning at Inference Time. Figure 6 displays the relationship between the training time LayerDrop and the performance of a pruned network at test time. If significant depth reduction is desired, training with larger LayerDrop is beneficial — this equalizes the train and test time settings. An analysis for BERT is in the Appendix.
204
+
205
+ # 7 CONCLUSION
206
+
207
+ Structured dropout regularizes neural networks to be more robust to applying structured pruning at inference time. We focus on the setting where structures are layers, enabling pruning of shallow and efficient models of any desired depth. In a variety of text generation and pre-training tasks, we show that LayerDrop enables and stabilizes the training of substantially deeper networks and simultaneously allows for the extraction of models of various depths with strong performance.
208
+
209
+ # REFERENCES
210
+
211
+ Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
212
+ Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. arXiv preprint arXiv:1809.10853, 2018.
213
+ Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
214
+ Gonçalo M Correia, Vlad Niculae, and André FT Martins. Adaptively sparse transformers. arXiv preprint arXiv:1909.00015, 2019.
215
+ Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019.
216
+ Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In Proc. of ICML, 2017.
217
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
218
+ William B Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the International Workshop on Paraphrasing, 2005.
219
+ Sergey Edunov, Alexei Baevski, and Michael Auli. Pre-trained language model representations for language generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4052-4059, 2019.
220
+ Angela Fan, David Grangier, and Michael Auli. Controllable abstractive summarization. arXiv, abs/1711.05217, 2017.
221
+ Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. Eli5: Long form question answering. arXiv preprint arXiv:1907.09190, 2019.
222
+ Aidan N Gomez, Ivan Zhang, Kevin Swersky, Yarin Gal, and Geoffrey E Hinton. Targeted dropout. 2018.
223
+ Edouard Grave, Armand Joulin, Moustapha Cisse, David Grangier, and Herve Jegou. Efficient softmax approximation for gpus. arXiv, abs/1609.04309, 2016.
224
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In Proc. of CVPR, 2015.
225
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
226
+ Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1389-1397, 2017.
227
+
228
+ Karl Moritz Hermann, Tomáš Kočisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proc. of NIPS, 2015.
229
+ Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
230
+ Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In European conference on computer vision, pp. 646-661. Springer, 2016.
231
+ Gao Huang, Shichen Liu, Laurens Van der Maaten, and Kilian Q Weinberger. Condensenet: An efficient densenet using learned group convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2752-2761, 2018.
232
+ Zehao Huang and Naiyan Wang. Data-driven sparse structure selection for deep neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 304-320, 2018.
233
+ Yacine Jernite, Edouard Grave, Armand Joulin, and Tomas Mikolov. Variable computation in recurrent neural networks. arXiv preprint arXiv:1611.06188, 2016.
234
+ Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou, and Tomas Mikolov. Fasttext.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651, 2016.
235
+ Marcin Junczys-Dowmunt. Microsoft translator at wmt 2019: Towards large-scale document-level neural machine translation. arXiv preprint arXiv:1907.06170, 2019.
236
+ Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019.
237
+ Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in neural information processing systems, pp. 598-605, 1990.
238
+ Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.
239
+ Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Workshop on Text Summarization Branches Out, 2004.
240
+ Liyuan Liu, Xiang Ren, Jingbo Shang, Jian Peng, and Jiawei Han. Efficient contextualized representation: Language model pruning for sequence labeling. arXiv preprint arXiv:1804.07827, 2018a.
241
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
242
+ Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. arXiv preprint arXiv:1810.05270, 2018b.
243
+ Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
244
+ Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer Sentinel Mixture Models. arXiv, abs/1609.07843, 2016.
245
+ Paul Michel, Omer Levy, and Graham Neubig. Are sixteen heads really better than one? arXiv preprint arXiv:1905.10650, 2019.
246
+ Deepak Mittal, Shweta Bhardwaj, Mitesh M Khapra, and Balaraman Ravindran. Recovering from random pruning: On the plasticity of deep convolutional neural networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 848-857. IEEE, 2018.
247
+
248
+ Kenton Murray and David Chiang. Auto-sizing neural networks: With applications to n-gram language models. arXiv preprint arXiv:1508.05051, 2015.
249
+ Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine translation. In Proc. of WMT, 2018.
250
+ Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019.
251
+ Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. In Proceedings of the Second International Conference on Learning Representations (ICLR 2014), 2014.
252
+ Ngoc-Quan Pham, Thai-Son Nguyen, Jan Niehues, Markus Müller, and Alex Waibel. Very deep self-attention networks for end-to-end speech recognition. arXiv preprint arXiv:1904.13377, 2019.
253
+ Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
254
+ Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of EMNLP, pp. 2383-2392. Association for Computational Linguistics, 2016.
255
+ Victor Sanh. Smaller, faster, cheaper, lighter: Introducing distilbert, a distilled version of bert. https://medium.com/huggingface/distilbert-8cf3380435b5, 2019.
256
+ Abigail See, Minh-Thang Luong, and Christopher D Manning. Compression of neural machine translation models via pruning. arXiv preprint arXiv:1606.09274, 2016.
257
+ Abigail See, Peter J Liu, and Christopher D Manning. Get to the point: Summarization with pointer-generator networks. In Proc. of ACL, 2017.
258
+ Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.
259
+ Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proc. of ACL, 2016.
260
+ Dima Shulga. Distilling BERT: How to achieve BERT performance using logistic regression. towardsdatascience.com, 2019.
261
+ Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP, pp. 1631-1642, 2013.
262
+ Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 15(1):1929-1958, 2014.
263
+ Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. Adaptive attention span in transformers. arXiv preprint arXiv:1905.07799, 2019.
264
+ Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pp. 1139-1147, 2013.
265
+ Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. Distilling task-specific knowledge from bert into simple neural networks. arXiv preprint arXiv:1903.12136, 2019.
266
+ Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Well-read students learn better: The impact of student initialization on knowledge distillation. arXiv preprint arXiv:1908.08962, 2019.
267
+
268
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998-6008, 2017.
269
+ Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. arXiv preprint arXiv:1905.09418, 2019.
270
+ Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. Regularization of neural networks using dropconnect. In International conference on machine learning, pp. 1058-1066, 2013.
271
+ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of ICLR, 2019a.
272
+ Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F Wong, and Lidia S Chao. Learning deep transformer models for machine translation. arXiv preprint arXiv:1906.01787, 2019b.
273
+ Xin Wang, Fisher Yu, Zi-Yi Dou, Trevor Darrell, and Joseph E Gonzalez. Skipnet: Learning dynamic routing in convolutional networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 409-424, 2018.
274
+ Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In Advances in neural information processing systems, pp. 2074-2082, 2016.
275
+ Adina Williams, Nikita Nangia, and Samuel R. Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of NAACL-HLT, 2018.
276
+ Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. Pay less attention with lightweight and dynamic convolutions. In International Conference on Learning Representations, 2019a. URL https://arxiv.org/abs/1901.10430.
277
+ Lijun Wu, Yiren Wang, Yingce Xia, Fei Tian, Fei Gao, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. Depth growing for neural machine translation. arXiv preprint arXiv:1907.01968, 2019b.
278
+ Zuxuan Wu, Tushar Nagarajan, Abhishek Kumar, Steven Rennie, Larry S Davis, Kristen Grauman, and Rogerio Feris. Blockdrop: Dynamic inference paths in residual networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8817-8826, 2018.
279
+ Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019.
280
+ Hongyi Zhang, Yann N Dauphin, and Tengyu Ma. Fixup initialization: Residual learning without normalization. arXiv preprint arXiv:1901.09321, 2019.
281
+
282
+ # A APPENDIX
283
+
284
+ # A.1 ADDITIONAL IMPLEMENTATION DETAILS
285
+
286
+ # A.1.1 NEURAL MACHINE TRANSLATION
287
+
288
+ WMT en-de: We model a 32K joint byte-pair encoding vocabulary (Sennrich et al., 2015). We train using the cosine learning rate schedule (Loshchilov & Hutter, 2016) from Wu et al. (2019a) with label smoothing 0.1. We train on 8 GPUs for a total training time of 66k seconds.
289
+
290
+ IWSLT de-en: The dataset consists of 160K training pairs, fully lowercased. We model a 10K joint BPE vocabulary and generate with beam size 4. We do not average checkpoints. Following Wu et al. (2019a), we use the Transformer base architecture with 6 encoder layers and 6 decoder layers. As the dataset is small, we decrease the overall model size and instead use the following parameters: FFN size 1024, hidden dimension 512, and 4 attention heads. We train on 1 GPU.
291
+
292
+ Pruning: We apply the Every Other Layer strategy to the decoder and do not finetune.
293
+
294
+ # A.1.2 LANGUAGE MODELING
295
+
296
+ Training: To handle the large vocabulary of Wikitext-103, we follow Dauphin et al. (2017) and Baevski & Auli (2018) in using adaptive softmax (Grave et al., 2016) and adaptive input for computational efficiency. For both input and output embeddings, we use dimension size 1024 and three adaptive bands: 20K, 40K, and 200K. We use a cosine learning rate schedule (Baevski & Auli, 2018; Loshchilov & Hutter, 2016) and train with Nesterov's accelerated gradient (Sutskever et al., 2013). We set the momentum to 0.99 and renormalize gradients if the norm exceeds 0.1 (Pascanu et al., 2014). During training, we partition the data into blocks of contiguous tokens that ignore document boundaries. At test time, we respect sentence boundaries. We train on 8 GPUs for a total training time of 216k seconds.
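+
+ A sketch of this optimization recipe using standard PyTorch APIs; `model`, the learning rate value, and the step count are placeholders rather than values from the paper.
+
+ ```python
+ import torch
+
+ optimizer = torch.optim.SGD(model.parameters(), lr=1.0,
+                             momentum=0.99, nesterov=True)
+ scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100_000)
+
+ def training_step(batch):
+     loss = model(batch)  # assumed to return the language modeling loss
+     optimizer.zero_grad()
+     loss.backward()
+     # Renormalize gradients whenever their norm exceeds 0.1.
+     torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.1)
+     optimizer.step()
+     scheduler.step()
+     return loss
+ ```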
297
+
298
+ Pruning: We apply the Every Other Layer strategy and do not finetune.
299
+
300
+ # A.1.3 SUMMARIZATION
301
+
302
+ Data: We use the full text (non-anonymized) version of CNN-Dailymail introduced by See et al. (2017). Following Fan et al. (2017), we truncate articles to 400 tokens and model a joint byte-pair vocabulary of 32K types (Sennrich et al., 2016).
303
+
304
+ Training: We train using Adam with a cosine learning rate schedule, warming up for 10K steps. We optimize dropout in the range $\{0.2, 0.3\}$ on the validation set and set LayerDrop to 0.2. We train on 1 GPU.
305
+
306
+ Pruning: We apply the Every Other Layer strategy to the decoder and do not finetune.
307
+
308
+ # A.1.4 LONG FORM QUESTION ANSWERING
309
+
310
+ Training: We compare to the full multi-task setting of Fan et al. (2019), where data augmentation and multi-tasking are applied at training time to increase the available data. We train on 8 GPUs.
311
+
312
+ Generation: We set the minimum length to 150 tokens and the maximum length to 200.
313
+
314
+ # A.1.5 BI-DIRECTIONAL PRE-TRAINING
315
+
316
+ Training: The base architecture is a 12 layer model with embedding size 768 and FFN size 3072. The large architecture consists of 24 layers with embedding size 1024 and FFN size 4096. For both settings, we follow Liu et al. (2019) in using the subword tokenization scheme from Radford et al. (2019), which uses bytes as subword units. This eliminates unknown tokens. Note this produces a different vocabulary size than BERT (Devlin et al., 2018), meaning models of the same depth do not have the same number of parameters. We train with large batches of size 8192 and maintain this batch size using gradient accumulation. We do not use next sentence prediction (Lample & Conneau, 2019). We optimize with Adam with a polynomial decay learning rate schedule. For
317
+
318
+ <table><tr><td>Hyperparameter</td><td>Base</td><td>Large</td></tr><tr><td>Number of Layers</td><td>12</td><td>24</td></tr><tr><td>Hidden Size</td><td>768</td><td>1024</td></tr><tr><td>FFN Size</td><td>3072</td><td>4096</td></tr><tr><td>Attention Heads</td><td>12</td><td>16</td></tr><tr><td>LayerDrop</td><td>0.2</td><td>0.2</td></tr><tr><td>Warmup Steps</td><td>24k</td><td>30k</td></tr><tr><td>Peak Learning Rate</td><td>6e-4</td><td>4e-4</td></tr><tr><td>Batch Size</td><td>8192</td><td>8192</td></tr></table>
319
+
320
+ Table 5: Hyperparameters for RoBERTa Pretraining
321
+
322
+ <table><tr><td>Model</td><td>BLEU</td></tr><tr><td>Transformer (Wu et al., 2019a)</td><td>34.4</td></tr><tr><td>Dynamic Conv (Wu et al., 2019a)</td><td>35.2</td></tr><tr><td>Transformer + LayerDrop</td><td>34.5</td></tr></table>
323
+
324
+ Table 6: BLEU for IWSLT (test set).
325
+
326
+ BERT-Base, we use 32 GPUs (total training time 171k seconds) and for BERT-Large, we use 128 GPUs. For the RoBERTa data setting with more data, we use 512 GPUs to train BERT-Large.
327
+
328
+ Finetuning: During finetuning, we hyperparameter search over three learning rates (1e-5, 2e-5, 3e-5) and two batch sizes (16 or 32 sentences). The other parameters are set following Liu et al. (2019). We do single-task finetuning, meaning we only tune on the data provided for the given natural language understanding task. We do not perform ensembling. When finetuning models trained with LayerDrop, we apply LayerDrop during finetuning as well.
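+
+ The sweep reduces to a six-point grid; a hedged sketch, where `finetune` is a hypothetical helper returning validation accuracy for one configuration.
+
+ ```python
+ from itertools import product
+
+ def sweep(task):
+     best = None
+     for lr, batch_size in product([1e-5, 2e-5, 3e-5], [16, 32]):
+         score = finetune(task, lr=lr, batch_size=batch_size)  # hypothetical
+         if best is None or score > best[0]:
+             best = (score, lr, batch_size)
+     return best
+ ```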
329
+
330
+ Training smaller models: We train the 6 and 3 layer RoBERTa models following the same settings, but using the smaller number of layers and without LayerDrop. We finetune with the same sweep parameters. The 6 and 3 layer BERT model results are taken from Devlin et al. (2018).
331
+
332
+ Training larger models: We train the 48 layer RoBERTa model with 0.5 LayerDrop so only 24 layers on average are active during a forward pass.
333
+
334
+ Pruning: When pruning RoBERTa models, we use the Every Other Layer strategy and finetune without LayerDrop for the smaller models.
335
+
336
+ # A.2 ADDITIONAL RESULTS
337
+
338
+ IWSLT. Table 6 displays results on the IWSLT de-en dataset. We see a small improvement, likely because the network is small and already heavily regularized with dropout, attention dropout, and weight decay. The Transformer is not the state-of-the-art architecture on this task, and a large gap remains between the Transformer and the DynamicConv model proposed by Wu et al. (2019a).
339
+
340
+ Pruning BERT Models. The numerical values corresponding to the pruned 6 and 3 layer RoBERTa + LayerDrop models are shown in Table 7.
341
+
342
+ # A.3 ADDITIONAL ANALYSIS
343
+
344
+ Impact of LayerDrop on training time. Figure 7 shows the increase in training speed when training with increasingly large quantities of LayerDrop. The words per second were computed on 8 V100 GPUs with 32GB of memory, without 16-bit floating point, for a 16-layer model trained on Wikitext-103. Assuming fixed layer size, LayerDrop removes layers at random during training, which increases training speed by almost 2x when half the layers are dropped.
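+
+ A back-of-the-envelope check of that speedup, assuming layer computation dominates the cost:
+
+ ```python
+ n_layers, p_drop = 16, 0.5
+ expected_active = (1 - p_drop) * n_layers  # 8 layers on average per forward pass
+ speedup = n_layers / expected_active       # -> 2.0x, matching the observed ~2x
+ print(expected_active, speedup)
+ ```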
345
+
346
+ <table><tr><td>Model</td><td>Dataset</td><td>Layers</td><td>MNLI-m</td><td>MRPC</td><td>QNLI</td><td>SST-2</td></tr><tr><td>BERT</td><td>Books + Wiki</td><td>6</td><td>81.9</td><td>84.8</td><td>-</td><td>91.3</td></tr><tr><td>Distil BERT (Sanh, 2019)</td><td>Books + Wiki</td><td>6</td><td>81.6</td><td>82.4</td><td>85.5</td><td>92.7</td></tr><tr><td>RoBERTa</td><td>Books + Wiki</td><td>6</td><td>82.3</td><td>82.5</td><td>89.7</td><td>92.1</td></tr><tr><td>RoBERTa + LayerDrop</td><td>Books + Wiki</td><td>6</td><td>82.9</td><td>85.3</td><td>89.4</td><td>92.5</td></tr><tr><td>RoBERTa + LayerDrop</td><td>+ more data</td><td>6</td><td>84.1</td><td>86.1</td><td>89.5</td><td>93.2</td></tr><tr><td>BERT</td><td>Books + Wiki</td><td>3</td><td>77.9</td><td>79.8</td><td>-</td><td>88.4</td></tr><tr><td>RoBERTa</td><td>Books + Wiki</td><td>3</td><td>78.1</td><td>79.4</td><td>86.2</td><td>90.3</td></tr><tr><td>RoBERTa + LayerDrop</td><td>Books + Wiki</td><td>3</td><td>78.6</td><td>75.1</td><td>86.0</td><td>90.5</td></tr><tr><td>RoBERTa + LayerDrop</td><td>+ more data</td><td>3</td><td>82.2</td><td>79.4</td><td>88.6</td><td>92.0</td></tr></table>
347
+
348
+ ![](images/9319273796bd788a77fe07fee5493757007ea7ba47636992f96c0f4201699c8e.jpg)
349
+ Figure 7: Effect of LayerDrop on Training Time
350
+
351
+ Table 7: Comparison of BERT base, with and without distillation, to our RoBERTa base trained with LayerDrop. Our models are pruned before finetuning on each individual task. The BERT numbers are taken from Devlin et al. (2018).
352
+
353
+ <table><tr><td>Model</td><td>Valid PPL</td></tr><tr><td>Pruned w/ LayerDrop</td><td>20.78</td></tr><tr><td>+ Finetune</td><td>20.56</td></tr></table>
354
+
355
+ Table 8: Impact of additional finetuning on a 16 layer language model pruned to 8 layers.
356
+
357
+ ![](images/10c81b986624cb3f89801758ce5880c421b1c0469b906bcdff4d63168013deaa.jpg)
358
+ Figure 8: Effect of Train LayerDrop on Inference-time Pruning on MNLI, SST2, and QNLI
359
+
360
+ ![](images/9f9d40912e8582430a7184fb61f0c4ca14b2c181ab954a5672ab074f576cd6f5.jpg)
361
+
362
+ ![](images/f528053d43f8d321050ed749afb6988b2b3334f49892b18345273f5735aca58f.jpg)
363
+
364
+ ![](images/1eeba5351593ec11823eb83a4a10021cabc23827859a63f0687b41e53e997fbf.jpg)
365
+
366
+ BERT: Relationship between LayerDrop at Training Time and Pruning at Inference Time. Similar to the analysis on language modeling, we find that training with larger quantities of LayerDrop allows more aggressive pruning at inference time on various natural language understanding tasks. However, as these tasks involve a finetuning step on the downstream task after pre-training, the effect is less straightforward. Results are shown in Figure 8.
367
+
368
+ Impact of Finetuning. LayerDrop allows models to be pruned to the desired depth at test time. Apart from finetuning for data adaptation on the GLUE tasks, we do not finetune our smaller models on any of the other tasks considered in this work. As shown in Table 8, finetuning the pruned models yields only marginal improvement. Further, the finetuning parameters depend on the depth of the model at test time and are difficult to optimize.
369
+
370
+ <table><tr><td>LayerDrop</td><td>Dropout</td><td>Valid PPL</td></tr><tr><td>0.5</td><td>0.1</td><td>19.03</td></tr><tr><td>0.5</td><td>0.2</td><td>19.22</td></tr><tr><td>0.5</td><td>0.3</td><td>19.31</td></tr><tr><td>0.5</td><td>0.4</td><td>19.62</td></tr><tr><td>0.5</td><td>0.5</td><td>19.95</td></tr></table>
371
+
372
+ Table 9: Performance Varying Dropout with Fixed LayerDrop on a 16 layer language model trained on Wikitext-103 (Valid).
373
+
374
+ <table><tr><td>Structured Dropout</td><td>Valid PPL</td></tr><tr><td>Half FFN</td><td>29.6</td></tr><tr><td>Baseline</td><td>28.3</td></tr><tr><td>Head</td><td>28.1</td></tr><tr><td>Sublayer</td><td>19.9</td></tr><tr><td>Head + Sublayer</td><td>19.8</td></tr><tr><td>Layer</td><td>19.7</td></tr><tr><td>Head + Layer</td><td>19.7</td></tr></table>
375
+
376
+ Table 11: Performance Varying Structured Dropout and Pruning to an 8-layer language model trained on Wikitext-103 (Valid). Pruning is done by removing every other layer to halve the model size.
377
+
378
+ <table><tr><td>Model</td><td>Valid PPL</td></tr><tr><td>Adaptive Input*</td><td>18.4</td></tr><tr><td>Random LayerDrop 0.2</td><td>18.2</td></tr><tr><td>Linear LayerDrop to 0.3</td><td>18.6</td></tr><tr><td>Linear LayerDrop to 0.5</td><td>18.5</td></tr><tr><td>Linear LayerDrop to 0.8</td><td>18.9</td></tr></table>
379
+
380
+ Table 10: Random v. Linear Decay LayerDrop on a 16 layer language model trained on Wikitext-103 (Valid). * result is from Baevski & Auli (2018)
381
+
382
+ Effect of Varying Standard Dropout. LayerDrop adds a strong regularization effect to neural network training. We examine the importance of tuning the standard dropout parameter when training with LayerDrop. In Table 9, we show the performance when LayerDrop is fixed and standard Dropout is varied. We see that when training with LayerDrop, the quantity of standard Dropout can be reduced.
383
+
384
+ LayerDrop Schedule: Random or Linear. We investigate the random structured dropping of layers compared to the linear decay schedule proposed in Huang et al. (2016) in Table 10. We find that the linear decay schedule does not provide performance improvement compared to random dropping, which is more straightforward to implement.
385
+
386
+ Impact of Types of Structured Dropout when Pruning. Figure 4 (left) contrasts the performance of various forms of structured dropout: dropping attention heads, Transformer sub-layers (attention or FFN), portions of FFN matrices, and entire Transformer layers. These results evaluate the full-depth model on language modeling and show that, in general, different types of structured dropout can improve performance.
387
+
388
+ In Table 11, we examine how the type of structured dropout used at training time affects performance when pruning. The trend from Figure 4 carries over to inference-time pruning: Half FFN dropout performs slightly worse, while the other forms of structured dropout remain beneficial.
reducingtransformerdepthondemandwithstructureddropout/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a3ba87638242af64cae91f5ec69f8b30259670a809d5c36875788596c644fcb9
3
+ size 588202
reducingtransformerdepthondemandwithstructureddropout/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d007ad53d2e693f63a6ada9d975283dfb4bdfbcef15a2e3b78be44bb0f70a84e
3
+ size 413215
reformertheefficienttransformer/03b1143d-2161-4439-9449-d2e54d75c122_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5822f823e072100eef2213bfabceea8ebd5cb061b42355cbfaf0c0638aec4440
3
+ size 74069
reformertheefficienttransformer/03b1143d-2161-4439-9449-d2e54d75c122_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:85bfd27f59309200f255055f2f8fc83d6564c531f0dd0da220331e3c0eaf3462
3
+ size 87832
reformertheefficienttransformer/03b1143d-2161-4439-9449-d2e54d75c122_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:10b0d50c0c24a8e3f28445ea1002fd495c9ec640e6502d49e28277a278321235
3
+ size 623413
reformertheefficienttransformer/full.md ADDED
@@ -0,0 +1,304 @@
 
 
 
 
1
+ # REFORMER: THE EFFICIENT TRANSFORMER
2
+
3
+ Nikita Kitaev*
4
+
5
+ U.C. Berkeley & Google Research
6
+
7
+ kitaev@cs.berkeley.edu
8
+
9
+ Łukasz Kaiser*
10
+
11
+ Google Research
12
+
13
+ {lukaszkaiser,levskaya}@google.com
14
+
15
+ Anselm Levskaya
16
+
17
+ Google Research
18
+
19
+ # ABSTRACT
20
+
21
+ Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from $\mathrm{O}(L^2)$ to $\mathrm{O}(L\log L)$ , where $L$ is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of $N$ times, where $N$ is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.
22
+
23
+ # 1 INTRODUCTION
24
+
25
+ The Transformer architecture (Vaswani et al., 2017) is widely used in natural language processing and yields state-of-the-art results on a number of tasks. To obtain these results, researchers have resorted to training ever larger Transformer models. The number of parameters exceeds 0.5B per layer in the largest configuration reported in (Shazeer et al., 2018) while the number of layers goes up to 64 in (Al-Rfou et al., 2018). Transformer models are also used on increasingly long sequences. Up to 11 thousand tokens of text in a single example were processed in (Liu et al., 2018) and when processing other modalities, like music (Huang et al., 2018) and images (Parmar et al., 2018), even longer sequences are commonplace. These large-scale long-sequence models yield great results but strain resources to the point where some argue that this trend is breaking NLP research<sup>1</sup>. Many large Transformer models can only realistically be trained in large industrial research laboratories and such models trained with model parallelism cannot even be fine-tuned on a single GPU as their memory requirements demand a multi-accelerator hardware setup even for a single training step.
26
+
27
+ Do large Transformer models fundamentally require such huge resources or are they simply inefficient? Consider the following calculation: the 0.5B parameters used in the largest reported Transformer layer account for 2GB of memory. Activations for 64K tokens with embedding size 1024 and batch size 8 account for $64\mathrm{K} \times 1\mathrm{K} \times 8 = 0.5\mathrm{B}$ floats, requiring another 2GB of memory. If our memory use was only per-layer, then we should fairly easily fit a large Transformer even on sequences of length 64K on a single accelerator. Further, the whole corpus used to train BERT only requires 17GB to store. Why is it then that we cannot even fine-tune these models on single machines?
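+
+ The arithmetic in this paragraph, spelled out:
+
+ ```python
+ GB = 1e9
+ params_per_layer = 0.5e9                  # 0.5B parameters
+ param_bytes = params_per_layer * 4 / GB   # 32-bit floats -> 2.0 GB
+ activations = 64_000 * 1024 * 8           # 64K tokens x d_model 1K x batch 8 = 0.5B floats
+ activation_bytes = activations * 4 / GB   # -> 2.0 GB
+ print(param_bytes, activation_bytes)
+ ```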
28
+
29
+ The above estimate includes only per-layer memory and input activations cost and does not take into account the following major sources of memory use in the Transformer.
30
+
31
+ - Memory in a model with $N$ layers is $N$ -times larger than in a single-layer model due to the fact that activations need to be stored for back-propagation.
32
+ - Since the depth $d_{ff}$ of intermediate feed-forward layers is often much larger than the depth $d_{model}$ of attention activations, it accounts for a large fraction of memory use.
33
+ - Attention on sequences of length $L$ is $\mathrm{O}(L^2)$ in both computational and memory complexity, so even a single sequence of 64K tokens can exhaust accelerator memory.
34
+
35
+ We introduce the Reformer model which solves these problems using the following techniques:
36
+
37
+ - Reversible layers, first introduced in Gomez et al. (2017), enable storing only a single copy of activations in the whole model, so the $N$ factor disappears.
38
+ - Splitting activations inside feed-forward layers and processing them in chunks removes the $d_{ff}$ factor and saves memory inside feed-forward layers.
39
+ - Approximate attention computation based on locality-sensitive hashing replaces the $\mathrm{O}(L^2)$ factor in attention layers with $\mathrm{O}(L\log L)$ and so allows operating on long sequences.
40
+
41
+ We study these techniques and show that they have negligible impact on the training process compared to the standard Transformer. Splitting activations in fact only affects the implementation; it is numerically identical to the layers used in the Transformer. Applying reversible residuals instead of the standard ones does change the model but has a negligible effect on training in all configurations we experimented with. Finally, locality-sensitive hashing in attention is a more substantial change that can influence the training dynamics, depending on the number of concurrent hashes used. We study this parameter and find a value which is both efficient to use and yields results very close to full attention.
42
+
43
+ We experiment on a synthetic task, a text task (enwik8) with sequences of length 64K and an image generation task (imagenet-64 generation) with sequences of length 12K. In both cases we show that Reformer matches the results obtained with full Transformer but runs much faster, especially on the text task, and with orders of magnitude better memory efficiency.
44
+
45
+ # 2 LOCALITY-SENSITIVE HASHING ATTENTION
46
+
47
+ Dot-product attention. The standard attention used in the Transformer is the scaled dot-product attention (Vaswani et al., 2017). The input consists of queries and keys of dimension $d_{k}$, and values of dimension $d_{v}$. The dot products of the query with all keys are computed, divided by $\sqrt{d_k}$, and a softmax function is applied to obtain the weights on the values. In practice, the attention function on a set of queries is computed simultaneously, packed together into a matrix $Q$. Assuming the keys and values are also packed together into matrices $K$ and $V$, the matrix of outputs is defined as:
48
+
49
+ $$
50
+ \operatorname{Attention}(Q, K, V) = \operatorname{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V \tag{1}
51
+ $$
52
+
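+ A direct NumPy transcription of Equation 1, with batch and head dimensions omitted for clarity:
+
+ ```python
+ import numpy as np
+
+ def attention(Q, K, V):
+     # Q, K: [length, d_k]; V: [length, d_v]
+     scores = Q @ K.T / np.sqrt(Q.shape[-1])
+     weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
+     weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
+     return weights @ V
+ ```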
53
+ Multi-head attention. In the Transformer, instead of performing a single attention function with $d_{model}$ -dimensional keys, values and queries, one linearly projects the queries, keys and values $h$ times with different, learned linear projections to $d_k$ , $d_k$ and $d_v$ dimensions, respectively. Attention is applied to each of these projected versions of queries, keys and values in parallel, yielding $d_v$ -dimensional output values. These are concatenated and once again projected, resulting in the final values. This mechanism is known as multi-head attention.
54
+
55
+ Memory-efficient attention. To calculate the memory use of the attention mechanism, let us focus on the attention computation from Equation 1. Let us assume that Q, K and V all have the shape [batch_size, length, $d_{model}$ ]. The main issue is the term $QK^T$ , which has the shape [batch_size, length, length]. In the experimental section we train a model on sequences of length $64K$ – in this case, even at batch-size of 1, this is a $64K \times 64K$ matrix, which in 32-bit floats would take 16GB of memory. This is impractical and has hindered the use of the Transformer for long sequences. But it is important to note that the $QK^T$ matrix does not need to be fully materialized in memory. The attention can indeed be computed for each query $q_i$ separately, only calculating $\mathrm{softmax}(\frac{q_iK^T}{\sqrt{dk}})V$ once in memory, and then re-computing it on the backward pass when needed for gradients. This way of computing attention may be less efficient but it only uses memory proportional to length. We use this memory-efficient implementation of attention to run the full-attention baselines presented in the experimental section.
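+
+ A sketch of that per-query formulation: the length-by-length score matrix is never materialized, at the cost of looping over queries (and of recomputation on the backward pass).
+
+ ```python
+ import numpy as np
+
+ def attention_per_query(Q, K, V):
+     out = np.empty((Q.shape[0], V.shape[-1]))
+     for i, q in enumerate(Q):
+         s = q @ K.T / np.sqrt(Q.shape[-1])  # one row of scores at a time
+         w = np.exp(s - s.max())
+         out[i] = (w / w.sum()) @ V
+     return out
+ ```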
56
+
57
+ Where do Q, K, V come from? The multi-head attention described above operates on keys, queries and values, but usually we are only given a single tensor of activations A of the shape [batch_size, length, $d_{model}$ ] - e.g., coming from embedding the tokens in a sentence into vectors.
58
+
59
+ ![](images/1531e5b8ff848be21dd68aeb5485fc87f981bbc4e3d06ed95d6a5cf3e1ea192e.jpg)
60
+ Figure 1: An angular locality sensitive hash uses random rotations of spherically projected points to establish buckets by an argmax over signed axes projections. In this highly simplified 2D depiction, two points $x$ and $y$ are unlikely to share the same hash buckets (above) for the three different angular hashes unless their spherical projections are close to one another (below).
61
+
62
+ To build Q, K and V from A, the Transformer uses 3 different linear layers projecting A into Q, K and V with different parameters. For models with LSH attention, we want queries and keys (Q and K) to be identical. This is easily achieved by using the same linear layer to go from A to Q and K, and a separate one for V. We call a model that behaves like this a shared-QK Transformer. It turns out that sharing QK does not affect the performance of Transformer, even if we additionally normalize the length of the keys K, as we show in the experimental Section 5.
63
+
64
+ Hashing attention. For the LSH attention, we start with two tensors, $\mathrm{Q} = \mathrm{K}$ and $\mathrm{V}$ of the shape [batch_size, length, $d_{\text{model}}$ ]. We keep the multi-head mechanism intact and focus on the attention computation from Equation 1. As already mentioned, the main issue is the term $QK^T$ , which has the shape [batch_size, length, length]. But note that we are actually only interested in $\text{softmax}(QK^T)$ . Since softmax is dominated by the largest elements, for each query $q_i$ we only need to focus on the keys in $\mathbf{K}$ that are closest to $q_i$ . For example, if $\mathbf{K}$ is of length 64K, for each $q_i$ we could only consider a small subset of, say, the 32 or 64 closest keys. That is much more efficient, but how can we find the nearest neighbors among the keys?
65
+
66
+ Locality sensitive hashing. The problem of finding nearest neighbors quickly in high-dimensional spaces can be solved by locality-sensitive hashing (LSH). A hashing scheme that assigns each vector $x$ to a hash $h(x)$ is called locality-sensitive if nearby vectors get the same hash with high probability and distant ones do not. In our case, we actually only require that nearby vectors get the same hash with high probability and that hash buckets are of similar size with high probability.
67
+
68
+ We achieve this by employing random projections as follows (see Figure 1). To get $b$ hashes, we first fix a random matrix $R$ of size $[d_k, b/2]$ . We then define $h(x) = \arg \max([xR; -xR])$ where $[u; v]$ denotes the concatenation of two vectors. This method is a known LSH scheme (Andoni et al., 2015) and is easy to implement and apply to batches of vectors.
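+
+ The scheme in code: a fixed random matrix `R` is shared across all vectors being hashed, and the argmax over the concatenation `[xR; -xR]` yields a bucket id in `[0, b)`. A small NumPy sketch:
+
+ ```python
+ import numpy as np
+
+ def lsh_hash(x, n_buckets, seed=0):
+     # x: [length, d_k]; n_buckets must be even.
+     rng = np.random.default_rng(seed)
+     R = rng.standard_normal((x.shape[-1], n_buckets // 2))
+     rotated = x @ R  # [length, n_buckets / 2]
+     return np.argmax(np.concatenate([rotated, -rotated], axis=-1), axis=-1)
+ ```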
69
+
70
+ LSH attention. Knowing our LSH scheme and the general idea of hashing attention, we will now formalize the LSH attention we use in this paper. We first rewrite the equation for normal attention, (1), for a single query position $i$ at a time:
71
+
72
+ $$
73
+ o_i = \sum_{j \in \mathcal{P}_i} \exp\left(q_i \cdot k_j - z(i, \mathcal{P}_i)\right) v_j \quad \text{where } \mathcal{P}_i = \{j : i \geq j\} \tag{2}
74
+ $$
75
+
76
+ We introduce the notation $\mathcal{P}_i$ to represent the set that the query at position $i$ attends to, and $z$ to denote the partition function (i.e. the normalizing term in the softmax). For clarity, we also omit scaling by $\sqrt{d_k}$ .
77
+
78
+ For batching purposes we typically perform attention over a larger set $\widetilde{\mathcal{P}}_i = \{0,1,\dots ,l\} \supseteq \mathcal{P}_i$ while masking out elements not in $\mathcal{P}_i$ :
79
+
80
+ $$
81
+ o_i = \sum_{j \in \widetilde{\mathcal{P}}_i} \exp\left(q_i \cdot k_j - m(j, \mathcal{P}_i) - z(i, \mathcal{P}_i)\right) v_j \quad \text{where } m(j, \mathcal{P}_i) = \begin{cases} \infty & \text{if } j \notin \mathcal{P}_i \\ 0 & \text{otherwise} \end{cases} \tag{3}
82
+ $$
83
+
84
+ ![](images/d8db596c5b76d29a4745034b08c46f37c5f2307dbff244b9ccd5319789866f90.jpg)
85
+ Figure 2: Simplified depiction of LSH Attention showing the hash-bucketing, sorting, and chunking steps and the resulting causal attentions. (a-d) Attention matrices for these varieties of attention.
86
+
87
+ ![](images/ca5686db31bde8a9f26c54f6e8f9370a8456cb776592af3c40e06ff2521fc052.jpg)
88
+
89
+ ![](images/55ce100abc20441e6dee08e5039b45b283e6432b98d0d4f3baecb9c80d36946b.jpg)
90
+
91
+ ![](images/f874caefd32393d631a7b77ae4245115f95b938f91d7d0a2eb8bed1d547f2348.jpg)
92
+
93
+ ![](images/49aa7dcca0e839bac59bceb0664dbd70f7414edb9fdf01482cff31b9f62d3c6f.jpg)
94
+
95
+ Now we turn to LSH attention, which we can think of in terms of restricting the set $\mathcal{P}_i$ of target items a query position $i$ can attend to, by only allowing attention within a single hash bucket.
96
+
97
+ $$
98
+ \mathcal{P}_i = \{j : h(q_i) = h(k_j)\} \tag{4}
99
+ $$
100
+
101
+ Figure 2(a-b) shows a schematic comparison of full-attention with a hashed variant. Part (a) depicts that the attention matrix for full attention is typically sparse, but the computation does not take advantage of this sparsity. In (b), the queries and keys have been sorted according to their hash bucket. Since similar items fall in the same bucket with high probability, the full attention pattern can be approximated by only allowing attention within each bucket.
102
+
103
+ Hash buckets in this formulation tend to be uneven in size, which makes it difficult to batch across buckets. Moreover, the number of queries and the number of keys within a bucket may be unequal – in fact, it is possible for a bucket to contain many queries but no keys. To alleviate these issues, we first ensure that $h(k_{j}) = h(q_{j})$ by setting $k_{j} = \frac{q_{j}}{\|q_{j}\|}$ . Next, we sort the queries by bucket number and, within each bucket, by sequence position; this defines a permutation where $i \mapsto s_i$ after sorting. In the sorted attention matrix, pairs from the same bucket will cluster near the diagonal (as depicted in Figure 2c). We can follow a batching approach where chunks of $m$ consecutive queries (after sorting) attend to each other, and one chunk back (Figure 2d). Following our earlier notation, this corresponds to setting:
104
+
105
+ $$
106
+ \widetilde{\mathcal{P}}_i = \left\{ j : \left\lfloor \frac{s_i}{m} \right\rfloor - 1 \leq \left\lfloor \frac{s_j}{m} \right\rfloor \leq \left\lfloor \frac{s_i}{m} \right\rfloor \right\} \tag{5}
107
+ $$
108
+
109
+ If $\max_i |\mathcal{P}_i| < m$ , then $\mathcal{P}_i \subseteq \widetilde{\mathcal{P}}_i$ . In practice we set $m = \frac{2l}{n_{\text{buckets}}}$ (where $l$ is the sequence length). The average bucket size is $\frac{l}{n_{\text{buckets}}}$ , and we assume that the probability of a bucket growing to twice that size is sufficiently low. The overall process of LSH attention is summarized in Figure 2.
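+
+ A sketch of the sort-and-chunk bookkeeping described above; real implementations vectorize this, but the indexing logic is the same.
+
+ ```python
+ import numpy as np
+
+ def sort_and_chunk(buckets, m):
+     length = len(buckets)
+     # Sort by bucket number, breaking ties by sequence position.
+     order = np.lexsort((np.arange(length), buckets))
+     chunks = [order[i:i + m] for i in range(0, length, m)]
+     # Each chunk attends to itself and to the chunk immediately before it.
+     targets = [np.concatenate(chunks[max(c - 1, 0):c + 1])
+                for c in range(len(chunks))]
+     return chunks, targets
+ ```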
110
+
111
+ Multi-round LSH attention. With hashing, there is always a small probability that similar items nevertheless fall in different buckets. This probability can be reduced by doing multiple rounds of hashing with $n_{rounds}$ distinct hash functions $\{h^{(1)}, h^{(2)}, \ldots\}$ , such that:
112
+
113
+ $$
114
+ \mathcal{P}_i = \bigcup_{r=1}^{n_{\text{rounds}}} \mathcal{P}_i^{(r)} \quad \text{where } \mathcal{P}_i^{(r)} = \left\{ j : h^{(r)}(q_i) = h^{(r)}(q_j) \right\} \tag{6}
115
+ $$
116
+
117
+ The multi-round case essentially involves performing LSH attention $n_{rounds}$ times in parallel; the details of the procedure are described in Appendix A.
118
+
119
+ Causal masking for shared-QK attention. In a Transformer decoder, masking (denoted by $m(j,\mathcal{P}_i)$ in Equation 3) is used to prevent positions from attending into the future. To implement masking in LSH attention, we associate every query/key vector with a position index, re-order the position indices using the same permutations used to sort the query/key vectors, and then use a comparison operation to compute the mask.
120
+
121
+ Table 1: Memory and time complexity of attention variants. We write $l$ for length, $b$ for batch size, $n_h$ for the number of heads, $n_c$ for the number of LSH chunks, $n_r$ for the number of hash repetitions.
122
+
123
+ <table><tr><td>Attention Type</td><td>Memory Complexity</td><td>Time Complexity</td></tr><tr><td>Scaled Dot-Product</td><td>$\max(bn_hld_k, bn_hl^2)$</td><td>$\max(bn_hld_k, bn_hl^2)$</td></tr><tr><td>Memory-Efficient</td><td>$\max(bn_hld_k, bn_hl)$</td><td>$\max(bn_hld_k, bn_hl^2)$</td></tr><tr><td>LSH Attention</td><td>$\max(bn_hld_k, bn_hn_rl(4l/n_c)^2)$</td><td>$\max(bn_hld_k, bn_hn_rl(4l/n_c)^2)$</td></tr></table>
124
+
125
+ Table 2: Accuracies on the duplication task of a 1-layer Transformer model with full attention and with locality-sensitive hashing attention using different numbers of parallel hashes.
126
+
127
+ <table><tr><td>Train \ Eval</td><td>Full Attention</td><td>LSH-8</td><td>LSH-4</td><td>LSH-2</td><td>LSH-1</td></tr><tr><td>Full Attention</td><td>100%</td><td>94.8%</td><td>92.5%</td><td>76.9%</td><td>52.5%</td></tr><tr><td>LSH-4</td><td>0.8%</td><td>100%</td><td>99.9%</td><td>99.4%</td><td>91.9%</td></tr><tr><td>LSH-2</td><td>0.8%</td><td>100%</td><td>99.9%</td><td>98.1%</td><td>86.8%</td></tr><tr><td>LSH-1</td><td>0.8%</td><td>99.9%</td><td>99.6%</td><td>94.8%</td><td>77.9%</td></tr></table>
128
+
129
+ While attention to the future is not allowed, typical implementations of the Transformer do allow a position to attend to itself. Such behavior is undesirable in a shared-QK formulation because the dot-product of a query vector with itself will almost always be greater than the dot product of a query vector with a vector at another position. We therefore modify the masking to forbid a token from attending to itself, except in situations where a token has no other valid attention targets (e.g. the first token in a sequence).
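+
+ A minimal sketch of the mask construction: position indices travel with the sorted query/key vectors, and a strict comparison then forbids both future positions and the token itself. (A complete implementation would re-enable self-attention for tokens with no other valid target.)
+
+ ```python
+ import numpy as np
+
+ def causal_mask(sorted_positions):
+     q = sorted_positions[:, None]
+     k = sorted_positions[None, :]
+     return q > k  # True where attention is allowed: strictly earlier tokens only
+ ```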
130
+
131
+ # 2.1 ANALYSIS ON A SYNTHETIC TASK
132
+
133
+ To verify the performance of LSH attention and study its behavior, we start with the following synthetic task: duplicate a sequence of symbols. In this task, each training and testing example has the form $0w0w$ where $w \in \{1, \dots, N\}^*$ is a sequence of symbols ranging from 1 to $N$ (we use $N = 127$ in our experiments). An example with the word $w$ of length 3 is given below.
134
+
135
+ <table><tr><td>Example:</td><td>0</td><td>19</td><td>113</td><td>72</td><td>0</td><td>19</td><td>113</td><td>72</td></tr></table>
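+
+ Generating such examples is straightforward; a sketch:
+
+ ```python
+ import numpy as np
+
+ def make_example(w_len=511, N=127, rng=np.random.default_rng(0)):
+     w = rng.integers(1, N + 1, size=w_len)  # symbols in {1, ..., N}
+     return np.concatenate([[0], w, [0], w])  # 0w0w, total length 2 * (w_len + 1)
+ ```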
136
+
137
+ To study LSH attention, we train a language model on examples of the above form where each $w$ is of length 511 (so the whole input $0w0w$ is of length 1024). As this is a language modeling task, we always predict the next symbol given all the previous ones, but we mask the loss and accuracy to only consider positions in the second half of the input, i.e., those that can actually be predicted.
138
+
139
+ The above task can be solved perfectly (to accuracy $100\%$ and loss 0) by a 1-layer Transformer model. Note though, that it requires non-local attention lookups, so it cannot be solved by any model relying on sparse attention with a limited span. To make it easy and fast to train but similar to models used in NLP, we use a 1-layer Transformer with $d_{model} = d_{ff} = 256$ and 4 heads. We train it for 150K steps in 4 different settings: with full attention, LSH attention with $n_{rounds} = 1$ , $n_{rounds} = 2$ and $n_{rounds} = 4$ .
140
+
141
+ From the results summarized in Table 2 we see that a model trained with full attention can be immediately used with LSH attention, but at some loss of accuracy. When trained from scratch with LSH attention, the model trained with 4 hashes achieves almost perfect accuracy as well. Interestingly, the accuracy becomes perfect when evaluated with 8 hashes, and degrades when evaluated with 2 or 1 hashes. Models trained with fewer hashes show worse results, but even the model trained with just 1 hash performs almost perfectly when evaluated with 8 hashes.
142
+
143
+ # 3 REVERSIBLE TRANSFORMER
144
+
145
+ As the above section shows, the complexity of attention can be reduced from quadratic in length to linear, provided an approximation is acceptable. But it is clear from Table 1 that each field starts with a $b \cdot n_h \cdot l$ term: the $b \cdot n_h \cdot l \cdot d_k$, or alternatively $b \cdot l \cdot d_{model}$, cost cannot be avoided. Indeed, the activations before each layer are already of size $b \cdot l \cdot d_{model}$, so the memory use of the whole model with $n_l$ layers is at least $b \cdot l \cdot d_{model} \cdot n_l$. Even worse: inside the feed-forward layers of the Transformer this goes up to $b \cdot l \cdot d_{ff} \cdot n_l$. In a big Transformer it is usual to set $d_{ff} = 4K$ and $n_l = 16$, so with $l = 64K$ this again would use an impractical 16GB of memory.
146
+
147
+ In this section, we show how to reduce this cost by first dealing with the $n_l$ part of the term using reversible layers and then showing how chunking can allow us to handle the $d_{ff}$ problem. The effects of each of these approaches on memory and time complexity are summarized in Table 3.
148
+
149
+ RevNets. Reversible residual networks were introduced by Gomez et al. (2017), where it was shown that they can replace ResNets for image classification. The main idea is to allow the activations at any given layer to be recovered from the activations at the following layer, using only the model parameters. Rather than having to checkpoint intermediate values for use in the backward pass, layers can be reversed one-by-one as back-propagation proceeds from the output of the network to its input. Whereas a normal residual layer computes a function $x \mapsto y$ of the form $y = x + F(x)$, operating on a single input and producing a single output, a reversible layer works on pairs of inputs/outputs: $(x_1, x_2) \mapsto (y_1, y_2)$, and follows the equations:
150
+
151
+ $$
152
+ y_1 = x_1 + F(x_2) \qquad y_2 = x_2 + G(y_1) \tag{7}
153
+ $$
154
+
155
+ A layer can be reversed by subtracting (rather than adding) the residuals:
156
+
157
+ $$
158
+ x_2 = y_2 - G(y_1) \qquad x_1 = y_1 - F(x_2) \tag{8}
159
+ $$
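+
+ Equations 7 and 8 translate directly into code, with `F` and `G` standing for the two residual functions:
+
+ ```python
+ def reversible_forward(x1, x2, F, G):
+     y1 = x1 + F(x2)
+     y2 = x2 + G(y1)
+     return y1, y2
+
+ def reversible_inverse(y1, y2, F, G):
+     # Recompute the inputs from the outputs; no activations need to be stored.
+     x2 = y2 - G(y1)
+     x1 = y1 - F(x2)
+     return x1, x2
+ ```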
160
+
161
+ Reversible Transformer. We apply the RevNet idea to the Transformer by combining the attention and feed-forward layers inside the revnet block. In the notation above, F becomes an attention layer while G becomes the feed-forward layer. Note that Layer Normalization (Ba et al., 2016) is moved inside the residual blocks.
162
+
163
+ $$
164
+ Y_1 = X_1 + \operatorname{Attention}(X_2) \qquad Y_2 = X_2 + \operatorname{FeedForward}(Y_1) \tag{9}
165
+ $$
166
+
167
+ The reversible Transformer does not need to store activations in each layer and so gets rid of the $n_l$ term. In Section 5 we show that it performs the same as the normal Transformer when using the same number of parameters; we achieve this by having both $x_1$ and $x_2$ have size $d_{model}$ .
168
+
169
+ Chunking. While reversibility covers the $n_l$ term, the thicker layers can still use a lot of memory. The feed-forward layer in particular can use intermediate vectors of dimensionality $d_{ff} = 4K$ or higher. However, computations in feed-forward layers are completely independent across positions in a sequence, so the computation can be split into $c$ chunks:
170
+
171
+ $$
172
+ Y_2 = \left[ Y_2^{(1)}; \dots; Y_2^{(c)} \right] = \left[ X_2^{(1)} + \operatorname{FeedForward}\left(Y_1^{(1)}\right); \dots; X_2^{(c)} + \operatorname{FeedForward}\left(Y_1^{(c)}\right) \right] \tag{10}
173
+ $$
174
+
175
+ This layer is typically batched by performing operations for all positions in parallel, but operating on one chunk at a time can reduce memory. The reverse computation in (8) and the backward pass are also chunked. In addition to the feed-forward layers, for models with large vocabulary (more than $d_{model}$ word types) we also chunk the log-probabilities at the output and calculate the loss for sections of the sequence at a time.
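+
+ A sketch of the chunked computation in Equation 10; because the feed-forward layer is position-wise, splitting along the length dimension changes memory use but not the result.
+
+ ```python
+ import numpy as np
+
+ def chunked_ffn(x2, y1, ffn, n_chunks):
+     # x2, y1: [length, d_model]; ffn acts independently on each position.
+     parts = [x2c + ffn(y1c)
+              for x2c, y1c in zip(np.array_split(x2, n_chunks),
+                                  np.array_split(y1, n_chunks))]
+     return np.concatenate(parts, axis=0)
+ ```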
176
+
177
+ Chunking, large batches and parameter reuse. With chunking and reversible layers, the memory we use for activations in the whole network is independent of the number of layers. The same is not true for parameters, though, as their number grows with the number of layers. This problem is remedied because we can swap layer parameters to and from CPU memory while a layer is not computing. In a standard Transformer this would be inefficient because memory transfer to CPU is slow; in the Reformer, however, the batch size multiplied by length is much larger, so the amount of compute done with the parameters amortizes the cost of their transfer.
178
+
179
+ Table 3: Memory and time complexity of Transformer variants. We write $d_{model}$ and $d_{ff}$ for model depth and assume $d_{ff} \geq d_{model}$ ; $b$ stands for batch size, $l$ for length, $n_l$ for the number of layers. We assume $n_c = l / 32$ so $4l / n_c = 128$ and we write $c = 128^2$ .
180
+
181
+ <table><tr><td>Model Type</td><td>Memory Complexity</td><td>Time Complexity</td></tr><tr><td>Transformer</td><td>$\max(bld_{ff}, bn_hl^2)n_l$</td><td>$(bld_{ff} + bn_hl^2)n_l$</td></tr><tr><td>Reversible Transformer</td><td>$\max(bld_{ff}, bn_hl^2)$</td><td>$(bld_{ff} + bn_hl^2)n_l$</td></tr><tr><td>Chunked Reversible Transformer</td><td>$\max(bld_{model}, bn_hl^2)$</td><td>$(bld_{ff} + bn_hl^2)n_l$</td></tr><tr><td>LSH Transformer</td><td>$\max(bld_{ff}, bn_hln_rc)n_l$</td><td>$(bld_{ff} + bn_hn_rlc)n_l$</td></tr><tr><td>Reformer</td><td>$\max(bld_{model}, bn_hln_rc)$</td><td>$(bld_{ff} + bn_hn_rlc)n_l$</td></tr></table>
182
+
183
+ # 4 RELATED WORK
184
+
185
+ The Transformer model introduced in (Vaswani et al., 2017) has been used widely in natural language tasks and further extended to model diverse data such as music scores (Huang et al., 2018), and images (Parmar et al., 2018; Ramachandran et al., 2019). Most notably, this model class has been applied successfully in the self-supervised training of extremely large language models (Devlin et al., 2018; Radford et al., 2019).
186
+
187
+ Given the enormous computational requirements of state of the art sequence models, there has been increasing interest in finding methods to reduce the memory footprint and computational requirements of Transformer models. In addition to standard methods such as precision reduction and gradient checkpointing (Sohoni et al., 2019), more efficient versions of the Transformer model's self-attention mechanism (Sukhbaatar et al., 2019a;b) have also recently been explored.
188
+
189
+ In particular, leveraging sparsity in the attention layers has proved fruitful. OpenAI introduced the sparse Transformer (Child et al., 2019), which exploits a factorized sparse representation of attention. Product-key attention has also been used to increase the key space, reducing memory requirements in the feed-forward layers with no loss in performance (Lample et al., 2019).
190
+
191
+ Locality-sensitive hashing (LSH) has, to our knowledge, not been directly applied to Transformer attention layers before. But previous work using external memory with neural networks has dealt with memories of large sizes. The original implementation of memory networks (Weston et al., 2014) and later work on scaling it (Bordes et al., 2015; Chandar et al., 2016) used memories with millions of entries. The cost of doing so is that the memory must be fixed prior to training. Moreover, since at the beginning of training the model is unlikely to query the memory correctly, strong supervision is used to encourage the model to query memory locations that are useful. These hints are either given as additional supervising information by the task or determined heuristically as in Hill et al. (2015). The requirement that the memory be fixed before training was removed in Santoro et al. (2016), at the cost of memory size, and later alleviated by Rae et al. (2016). The last paper considered memory lookups with approximate nearest neighbors, including both LSH and random kd-trees, but only for lookups in external memory.
192
+
193
+ # 5 EXPERIMENTS
194
+
195
+ In this section we present experimental results demonstrating the techniques described above. We analyze the techniques one-by-one to make clear which combinations have impact on performance. We start by showing that reversible layers and shared query-key spaces do not impact performance, then proceed to analyze hashing attention and finally the full Reformer model.
196
+
197
+ We ran our experiments on the imagenet64 and enwik8-64K tasks, where the latter is a variant of enwik8 that is chunked into subsequences of $2^{16} = 64K$ tokens. We use 3-layer models for our ablations so as to make it tractable to compare with the regular Transformer, which has high memory usage and performs full $O(l^2)$ attention. All experiments have $d_{model} = 1024$ , $d_{ff} = 4096$ , $n_{heads} = 8$ , and a total batch size of 8 sequences. We used the Adafactor optimizer (Shazeer & Stern, 2018) for training these models. We also evaluate on the WMT 2014 English-to-German translation task, following the hyperparameters of Vaswani et al. (2017). Training for all experiments
198
+
199
+ ![](images/7297961e091e865731d82461f0096d58c94d43d975d46a924675b3e55203b4bc.jpg)
200
+
201
+ ![](images/4d2e7ac93bd699b725df42cb13c80d1a61e37e2c24d4e464f5a045deb67bb5f1.jpg)
202
+
203
+ ![](images/395b936052fa4387c72774452e08e6b6356e842dcbb47a1d2ff8b377456124bf.jpg)
204
+ Figure 3: Effect of shared query-key space (left) and reversibility (right) on performance on enwik8 and imagenet64 training. The curves show bits per dim on held-out data.
205
+
206
+ ![](images/da871c8ade9da2abbb0f7edc69a26c9f4d790f8e2d39ec83c1c8dddf0e73bb0f.jpg)
207
+
208
+ Table 4: BLEU scores on newstest2014 for WMT English-German (En-De). We additionally report detokenized BLEU scores as computed by sacreBLEU (Post, 2018).
209
+
210
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">BLEU</td><td colspan="2">sacreBLEU</td></tr><tr><td>Uncased</td><td>Cased</td></tr><tr><td>Vaswani et al. (2017), base model</td><td>27.3</td><td></td><td></td></tr><tr><td>Vaswani et al. (2017), big</td><td>28.4</td><td></td><td></td></tr><tr><td>Ott et al. (2018), big</td><td>29.3</td><td></td><td></td></tr><tr><td>Reversible Transformer (base, 100K steps)</td><td>27.6</td><td>27.4</td><td>26.9</td></tr><tr><td>Reversible Transformer (base, 500K steps, no weight sharing)</td><td>28.0</td><td>27.9</td><td>27.4</td></tr><tr><td>Reversible Transformer (big, 300K steps, no weight sharing)</td><td>29.1</td><td>28.9</td><td>28.4</td></tr></table>
211
+
212
+ was parallelized across 8 devices (8 GPUs or 8 TPU v3 cores). Code for training our models is made publicly available.
213
+
214
+ Effect of sharing QK. We first consider the effect of shared-QK attention on a regular Transformer model. Shared-QK attention sets $k_{j} = \frac{q_{j}}{\|q_{j}\|}$ and prevents tokens from attending to themselves (except when no other context is available). In the left part of Figure 3, we plot perplexity curves for both regular and shared-QK attention. A shared query-key space does not perform worse than regular attention; in fact, for enwik8 it appears to train slightly faster. In other words, we are not sacrificing accuracy by switching to shared-QK attention.
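+
+ A minimal sketch of this variant (our own, single-head and non-causal for brevity; the large but finite self-mask matches the masking described in Appendix A) is:
+
+ ```python
+ import numpy as np
+
+ def shared_qk_attention(q, v, self_mask=1e5):
+     """Attention where keys are unit-normalized copies of the queries.
+
+     q: [l, d] queries, v: [l, d] values. The diagonal is penalized with a
+     large but finite value, so a position attends to itself only when all
+     other targets receive negligible weight.
+     """
+     k = q / np.linalg.norm(q, axis=-1, keepdims=True)  # k_j = q_j / ||q_j||
+     scores = q @ k.T
+     scores[np.diag_indices_from(scores)] -= self_mask  # discourage i == j
+     scores -= scores.max(axis=-1, keepdims=True)       # stable softmax
+     w = np.exp(scores)
+     return (w / w.sum(axis=-1, keepdims=True)) @ v
+ ```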
215
+
216
+ Effect of reversible layers. In the two plots on the right in Figure 3, we compare a regular Transformer per Vaswani et al. (2017) with the reversible one described in Section 3. The two models have identical parameter counts, and the learning curves likewise appear to be nearly the same. These results show that the memory savings in the reversible Transformer do not come at the expense of accuracy.
217
+
218
+ Reversible layers in machine translation. We also evaluate reversible layers in the context of an encoder-decoder Transformer model for machine translation from English to German. We start by making both the encoder and the decoder fully reversible in the Transformer-base architecture, and
219
+
220
+ ![](images/7736ac8e603c99db39552882b707a3f02dd55d59f77c9168ee92ae1a323ea538.jpg)
221
+ Figure 4: LSH attention performance as a function of hashing rounds on Imagenet64.
222
+
223
+ ![](images/56f3c5ae38754b95c6af92c9623d1c9e6e33992ad16a495a1e16f0df7aa9b535.jpg)
224
+ Figure 5: Left: LSH attention performance as a function of the number of layers on enwik8. Right: Speed of attention evaluation as a function of input length for full and LSH attention.
225
+
226
+ ![](images/53d3418baf095a3b08f7480057121b89f8f63176de4601b2b2594b5609abbab7.jpg)
227
+
228
+ see that the resulting model performs comparably to Vaswani et al. (2017) when trained for 100K steps. We also evaluate training for a greater number of steps and with a larger model. Reformer models are very memory-efficient, so for the latter two experiments we do not need to save memory by sharing embedding and output projection weight matrices throughout the model. Results are shown in Table 4. We do not apply LSH attention in this setting because examples are single sentences, and sentences tend to be relatively short. Our typical LSH attention configuration uses chunks of 128 tokens after hashing and sorting, whereas the examples in the WMT14 test set are all shorter than 128 tokens.
229
+
230
+ LSH attention in Transformer. LSH attention is an approximation for full attention that, as evidenced in Figure 4, becomes more accurate as the number of hashes increases. At $n_{rounds} = 8$ , it already almost matches full attention. The computational cost of a model grows with the number of hashes, so this hyperparameter can be adjusted depending on the available compute budget. Additionally, as in Table 2, the number of hashes can be increased at evaluation time to produce more accurate results. On the right half of Figure 5, we plot the speed of different attention types vs. the sequence length, while holding the total number of tokens fixed. We see that while regular attention becomes slower at longer sequence length, LSH attention speed remains flat.
231
+
232
+ Large Reformer models. To verify that the Reformer can indeed fit large models on a single core and train fast on long sequences, we train up to 20-layer big Reformers on enwik8 and imagenet64. As can be seen in Figure 5, these models fit into memory and train. We were not able to train Transformer baselines in this case as they are too slow and memory-hungry, but we see clear improvement with the number of layers. A 12-layer model on enwik8 trained for 20K steps with a dropout rate of 0.1 achieves 1.19 bits/dim on the test set. We also trained a 12-layer Reformer model for longer with further tuning and improvements and reached 1.05 bits/dim on the enwik8 test set.
233
+
234
+ # 6 CONCLUSION
235
+
236
+ Reformer combines the modeling capacity of a Transformer with an architecture that can be executed efficiently on long sequences and with small memory use even for models with a large number of layers. We believe that this will help large, richly-parameterized Transformer models become more widespread and accessible. Also, the ability to handle long sequences opens the way for the use of the Reformer on many generative tasks. In addition to generating very long coherent text, the Reformer can bring the power of Transformer models to other domains like time-series forecasting, music, image and video generation.
237
+
238
+ # REFERENCES
239
+
240
+ Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. Character-level language modeling with deeper self-attention. CoRR, abs/1808.04444, 2018. URL http://arxiv.org/abs/1808.04444.
241
+ Alexandr Andoni, Piotr Indyk, Thijs Laarhoven, Ilya P. Razenshteyn, and Ludwig Schmidt. Practical and optimal LSH for angular distance. CoRR, abs/1509.02897, 2015. URL http://arxiv.org/abs/1509.02897.
242
+ Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. URL http://arxiv.org/abs/1607.06450.
243
+ Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. CoRR, abs/1506.02075, 2015. URL http://arxiv.org/abs/1506.02075.
244
+ Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016.
245
+ Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. URL https://openai.com/blog/sparse-transformers, 2019.
246
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805.
247
+ Aidan N Gomez, Mengye Ren, Raquel Urtasun, and Roger B Grosse. The reversible residual network: Backpropagation without storing activations. In Advances in neural information processing systems, pp. 2214-2224, 2017.
248
+ Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children's books with explicit memory representations. CoRR, abs/1511.02301, 2015. URL http://arxiv.org/abs/1511.02301.
249
+ Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Curtis Hawthorne, Andrew M Dai, Matthew D Hoffman, and Douglas Eck. Music transformer: Generating music with long-term structure. arXiv preprint arXiv:1809.04281, 2018.
250
+ Guillaume Lample, Alexandre Sablayrolles, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. Large memory layers with product keys. CoRR, abs/1907.05242, 2019. URL http://arxiv.org/abs/1907.05242.
251
+ Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. Generating wikipedia by summarizing long sequences. CoRR, abs/1801.10198, 2018. URL http://arxiv.org/abs/1801.10198.
252
+ Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pp. 1-9, Brussels, Belgium, October 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-6301. URL https://www.aclweb.org/anthology/W18-6301.
253
+
254
+ Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, and Alexander Ku. Image transformer. CoRR, abs/1802.05751, 2018. URL http://arxiv.org/abs/1802.05751.
255
+ Matt Post. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pp. 186-191, Belgium, Brussels, October 2018. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/W18-6319.
256
+ Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
257
+ Jack W Rae, Jonathan J Hunt, Tim Harley, Ivo Danihelka, Andrew Senior, Greg Wayne, Alex Graves, and Timothy P Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. In Advances in Neural Information Processing Systems, (NIPS), 2016.
258
+ Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jonathon Shlens. Stand-alone self-attention in vision models. CoRR, abs/1906.05909, 2019. URL http://arxiv.org/abs/1906.05909.
259
+ Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy P. Lillicrap. One-shot learning with memory-augmented neural networks. CoRR, abs/1605.06065, 2016. URL http://arxiv.org/abs/1605.06065.
260
+ Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. CoRR, abs/1804.04235, 2018. URL http://arxiv.org/abs/1804.04235.
261
+ Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, and Blake Hechtman. Mesh-tensorflow: Deep learning for supercomputers. CoRR, abs/1811.02084, 2018. URL http://arxiv.org/abs/1811.02084.
262
+ Nimit Sharad Sohoni, Christopher Richard Aberger, Megan Leszczynski, Jian Zhang, and Christopher Ré. Low-memory neural network training: A technical report. CoRR, abs/1904.10631, 2019. URL http://arxiv.org/abs/1904.10631.
263
+ Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. Adaptive attention span in transformers. CoRR, abs/1905.07799, 2019a. URL http://arxiv.org/abs/1905.07799.
264
+ Sainbayar Sukhbaatar, Edouard Grave, Guillaume Lample, Hervé Jégou, and Armand Joulin. Augmenting self-attention with persistent memory. CoRR, abs/1907.01470, 2019b. URL http://arxiv.org/abs/1907.01470.
265
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017. URL http://arxiv.org/abs/1706.03762.
266
+ Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. CoRR, abs/1410.3916, 2014. URL http://arxiv.org/abs/1410.3916.
267
+
268
+ # A MULTI-ROUND LSH ATTENTION
269
+
270
+ In this section we describe in more detail the multi-round version of our LSH attention mechanism. We first repeat Equation (3) from the main text, which describes a general formulation of attention with sparsity:
271
+
272
+ $$
273
+ o_{i} = \sum_{j \in \widetilde{\mathcal{P}}_{i}} \exp\left(q_{i} \cdot k_{j} - m(j, \mathcal{P}_{i}) - z(i, \mathcal{P}_{i})\right) v_{j} \quad \text{where } m(j, \mathcal{P}_{i}) = \begin{cases} \infty & \text{if } j \notin \mathcal{P}_{i} \\ 0 & \text{otherwise} \end{cases} \tag{3}
274
+ $$
275
+
276
+ In the multi-round case, a query position $i$ can attend to key positions $\mathcal{P}_i$ as defined in (6), which we also repeat here:
277
+
278
+ $$
279
+ \mathcal{P}_{i} = \bigcup_{r=1}^{n_{\mathrm{rounds}}} \mathcal{P}_{i}^{(r)} \quad \text{where } \mathcal{P}_{i}^{(r)} = \left\{ j : h^{(r)}(q_{i}) = h^{(r)}(q_{j}) \right\} \tag{6}
280
+ $$
281
+
282
+ For batching purposes, attention is performed on chunks of sorted queries/keys:
283
+
284
+ $$
285
+ \widetilde {\mathcal {P}} _ {i} ^ {(r)} = \left\{j: \left\lfloor \frac {s _ {i} ^ {(r)}}{m} \right\rfloor - 1 \leq \left\lfloor \frac {s _ {j} ^ {(r)}}{m} \right\rfloor \leq \left\lfloor \frac {s _ {i} ^ {(r)}}{m} \right\rfloor \right\} \tag {11}
286
+ $$
287
+
288
+ Combining (3) and (6) gives:
289
+
290
+ $$
291
+ \begin{aligned} o_{i} &= \sum_{j \in \widetilde{\mathcal{P}}_{i}} \exp\left(q_{i} \cdot k_{j} - m(j, \mathcal{P}_{i}) - z(i, \mathcal{P}_{i})\right) v_{j} && (12) \\ &= \sum_{r=1}^{n_{\mathrm{rounds}}} \exp\left(z(i, \mathcal{P}_{i}^{(r)}) - z(i, \mathcal{P}_{i})\right) \sum_{j \in \widetilde{\mathcal{P}}_{i}^{(r)}} \frac{1}{N_{i,j}} \exp\left(q_{i} \cdot k_{j} - m(j, \mathcal{P}_{i}^{(r)}) - z(i, \mathcal{P}_{i}^{(r)})\right) v_{j} && (13) \\ &= \sum_{r=1}^{n_{\mathrm{rounds}}} \exp\left(z(i, \mathcal{P}_{i}^{(r)}) - z(i, \mathcal{P}_{i})\right) o_{i}^{(r)} && (14) \end{aligned}
292
+ $$
293
+
294
+ $$
295
+ o _ {i} ^ {(r)} = \sum_ {j \in \widetilde {\mathcal {P}} _ {i} ^ {(r)}} \exp \left(q _ {i} \cdot k _ {j} - m _ {i, j} ^ {(r)} - z (i, \mathcal {P} _ {i} ^ {(r)})\right) v _ {j} \tag {15}
296
+ $$
297
+
298
+ $$
299
+ \text{where } N_{i,j} = \left| \left\{ r' : j \in \mathcal{P}_{i}^{(r')} \right\} \right| \quad \text{and} \quad m_{i,j}^{(r)} = \begin{cases} \infty & \text{if } j \notin \mathcal{P}_{i}^{(r)} \\ 10^{5} & \text{if } i = j \\ \log N_{i,j} & \text{otherwise} \end{cases} \tag{16}
300
+ $$
301
+
302
+ Each round of LSH attention produces a vector $o_i^{(r)}$ that can be computed independently from other rounds, except for the inclusion of a term $N_{i,j}$ to avoid double-counting elements when constructing the union of $\mathcal{P}_i^{(r)}$ sets. In our implementation we fold the $N_{i,j}$ factor into the masking term $m_{i,j}^{(r)}$ .
303
+
304
+ We also modify $m_{i,j}^{(r)}$ to introduce a special case for $i = j$ . This case is added because causal masking in a standard Transformer allows position $i$ to attend to itself, which is not desirable in a shared-QK formulation. We set the mask to a large but finite value to disallow attention-in-place, except in the situation where a token has no other valid attention targets. For example, the first token in a sequence attends only to itself, because no prior context is available.
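+
+ To make the recombination concrete, here is a dense (unchunked, unbatched) NumPy reference for equations (12)-(16). This is our own sketch for exposition, assuming precomputed bucket assignments; it is not the chunked, batched implementation described in the main text.
+
+ ```python
+ import numpy as np
+
+ def logsumexp(x, axis):
+     m = x.max(axis=axis, keepdims=True)
+     return np.squeeze(m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True)), axis)
+
+ def multi_round_lsh_attention(q, v, buckets, self_penalty=1e5):
+     """q: [l, d] shared queries/keys, v: [l, d] values,
+     buckets: [n_rounds, l] hash bucket id of each position in each round."""
+     n_rounds, l = buckets.shape
+     scores = q @ q.T                                       # q_i . k_j (shared QK)
+     in_round = buckets[:, :, None] == buckets[:, None, :]  # j in P_i^(r)
+     n_ij = in_round.sum(axis=0)                            # N_ij of eq. (16)
+     # Mask m_ij^(r): infinite outside the round's bucket, 10^5 on the
+     # diagonal, log N_ij otherwise (the double-count correction).
+     mask = np.where(in_round, np.log(np.maximum(n_ij, 1))[None], np.inf)
+     mask[:, np.arange(l), np.arange(l)] = self_penalty
+     logits = scores[None] - mask                           # [n_rounds, l, l]
+     z_r = logsumexp(logits, axis=-1)                       # z(i, P_i^(r))
+     o_r = np.exp(logits - z_r[..., None]) @ v              # eq. (15)
+     z = logsumexp(z_r, axis=0)                             # z(i, P_i)
+     return (np.exp(z_r - z[None])[..., None] * o_r).sum(axis=0)  # eq. (14)
+ ```
+
+ Reweighting the per-round outputs by $\exp(z(i, \mathcal{P}_i^{(r)}) - z(i, \mathcal{P}_i))$ reproduces a single softmax over the union of buckets, with each key counted exactly once thanks to the $\log N_{i,j}$ term folded into the mask.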
reformertheefficienttransformer/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:69d472d60f8492009f0e839c67185006140668ea81037ab2281ef8e509fd990e
3
+ size 520909
reformertheefficienttransformer/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ff409a6ae689c650c65d9546ad8919706c26e5b2cb61bf6475938d6f010434ea
3
+ size 409586
regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/6f757651-f26e-4028-9e21-fa728c70a386_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9d2657a2a4682f78e58e413b2785e3f955da9f792b186e59c74c9e6e54ba5a08
3
+ size 75531
regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/6f757651-f26e-4028-9e21-fa728c70a386_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6d5bec1198e4668a91c3b4c0bab6cba3ef98cc71d928e9d50d6e114ddc61f89e
3
+ size 97721
regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/6f757651-f26e-4028-9e21-fa728c70a386_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bb66e130bc991d0242b5c34da6da633b334627ebfeceabe5cf47ecdbad5fe1c9
3
+ size 759994
regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/full.md ADDED
@@ -0,0 +1,311 @@
 
 
 
 
1
+ # REGULARIZING ACTIVATIONS IN NEURAL NETWORKS VIA DISTRIBUTION MATCHING WITH THE WASSERSTEIN METRIC
2
+
3
+ Taejong Joo
4
+
5
+ ESTsoft
6
+
7
+ Republic of Korea
8
+
9
+ tjoo@estsoft.com
10
+
11
+ Donggu Kang
12
+
13
+ ESTsoft
14
+
15
+ Republic of Korea
16
+
17
+ emppurity@gmail.com
18
+
19
+ Byunghoon Kim
20
+
21
+ Hanyang University
22
+
23
+ Republic of Korea
24
+
25
+ byungkim@hanyang.ac.kr
26
+
27
+ # ABSTRACT
28
+
29
+ Regularization and normalization have become indispensable components in training deep neural networks, resulting in faster training and improved generalization performance. We propose the projected error function regularization loss (PER) that encourages activations to follow the standard normal distribution. PER randomly projects activations onto one-dimensional space and computes the regularization loss in the projected space. PER is similar to the Pseudo-Huber loss in the projected space, thus taking advantage of both $L^1$ and $L^2$ regularization losses. Besides, PER can capture the interaction between hidden units by projection vectors drawn from a unit sphere. By doing so, PER minimizes the upper bound of the Wasserstein distance of order one between an empirical distribution of activations and the standard normal distribution. To the best of the authors' knowledge, this is the first work to regularize activations via distribution matching in the probability distribution space. We evaluate the proposed method on the image classification task and the word-level language modeling task.
30
+
31
+ # 1 INTRODUCTION
32
+
33
+ Training of deep neural networks is very challenging due to the vanishing and exploding gradient problem (Hochreiter, 1998; Glorot & Bengio, 2010), the presence of many flat regions and saddle points (Shalev-Shwartz et al., 2017), and the shattered gradient problem (Balduzzi et al., 2017). To remedy these issues, various methods for controlling hidden activations have been proposed such as normalization (Ioffe & Szegedy, 2015; Huang et al., 2018), regularization (Littwin & Wolf, 2018), initialization (Mishkin & Matas, 2016; Zhang et al., 2019), and architecture design (He et al., 2016).
34
+
35
+ Among various techniques of controlling activations, one well-known and successful path is controlling their first and second moments. Back in the 1990s, it was already known that neural network training benefits from normalizing input statistics so that samples have zero mean and an identity covariance matrix (LeCun et al., 1998; Schraudolph, 1998). This idea motivated batch normalization (BN), which considers hidden activations as the input to the next layer and normalizes the scale and shift of the activations (Ioffe & Szegedy, 2015).
36
+
37
+ Recent works show the effectiveness of different sample statistics of activations for normalization and regularization. Deecke et al. (2019) and Kalayeh & Shah (2019) normalize activations to several modes with different scales and translations. Variance constancy loss (VCL) implicitly normalizes the fourth moment by minimizing the variance of sample variances, which enables adaptive mode separation or collapse based on their prior probabilities (Littwin & Wolf, 2018). BN is also extended to whiten activations (Huang et al., 2018; 2019), and to normalize general order of central moment in the sense of $L^p$ norm including $L^0$ and $L^\infty$ (Liao et al., 2016; Hoffer et al., 2018).
38
+
39
+ In this paper, we propose a projected error function regularization (PER) that regularizes activations in the Wasserstein probability distribution space. Specifically, PER pushes the distribution of activations to be close to the standard normal distribution. PER shares a similar strategy with previous approaches that dictate the ideal distribution of activations. Previous approaches, however, deal with a single or a few sample statistics of activations. On the contrary, PER regularizes the activations
40
+
41
+ ![](images/d94d74db259427eea7ecc0b7167ed9ade8527ddea30991d77d365a3952088541.jpg)
42
+ (a)
43
+
44
+ ![](images/9c2bcbb199d7905be3e130f845f1a0ae800bfcc96740920131029fb57c9248c3.jpg)
45
+ (b)
46
+
47
+ ![](images/6f35a4c6f780624deee97ff4fedbbfe4ff616c0d5fd437b60f09bfd7ef2bea89.jpg)
48
+ (c)
49
+ Figure 1: Limitation of statistics in terms of representing the probability distribution. In all subplots, $x$ has zero mean and unit variance and $y \sim \mathcal{N}(0,1)$ . In (a) $(x,y) \sim \mathcal{N}(0,I)$ . In (b), $x \sim \mathcal{N}(0,1)$ but correlated with $y$ . In (c), $x$ follows a skewed distribution. In (d), $x$ follows a bi-modal distribution. Standardization cannot differentiate (a)-(d) and whitening cannot differentiate (a), (c), and (d).
50
+
51
+ ![](images/b9825d5ee328ea215a890a4e676b9a775968eae1b5495a7a50bd5a04238afd43.jpg)
52
+ (d)
53
+
54
+ by matching the probability distributions, which considers different statistics simultaneously, e.g., all orders of moments and correlation between hidden units. The extensive experiments on multiple challenging tasks show the effectiveness of PER.
55
+
56
+ # 2 RELATED WORKS
57
+
58
+ Many modern deep learning architectures employ BN as an essential building block for better performance and stable training even though its theoretical aspects of regularization and optimization are still actively investigated (Santurkar et al., 2018; Kohler et al., 2018; Bjorck et al., 2018; Yang et al., 2019). Several studies have applied the idea of BN that normalizes activations via the sample mean and the sample variance to a wide range of domains such as recurrent neural network (Lei Ba et al., 2016) and small batch size training (Wu & He, 2018).
59
+
60
+ Huang et al. (2018; 2019) propose normalization techniques that whiten the activations of each layer. This additional constraint on the statistical relationship between activations improves the generalization performance of residual networks compared to BN. Although the correlation between activations is not explicitly considered, dropout prevents activations from being active at the same time, called co-adaptation, by randomly dropping the activations (Srivastava et al., 2014), the weights (Wan et al., 2013), or spatially connected activations (Ghiasi et al., 2018).
61
+
62
+ Considering BN as normalization in the $L^2$ space, several works extend BN to other spaces, i.e., other norms. Streaming normalization (Liao et al., 2016) explores the normalization of a different order of central moment with the $L^p$ norm for general $p$. Similarly, Hoffer et al. (2018) explores $L^1$ and $L^\infty$ normalization, which enable low-precision computation. Littwin & Wolf (2018) proposes a regularization loss that reduces the variance of the sample variances of activations, which is closely related to the fourth moment.
63
+
64
+ The idea of controlling activations via their statistical characteristics has also motivated initialization methods. Examples include balancing the variances of each layer (Glorot & Bengio, 2010; He et al., 2015), bounding the scale of activations and gradients (Mishkin & Matas, 2016; Balduzzi et al., 2017; Gehring et al., 2017; Zhang et al., 2019), and norm preservation (Saxe et al., 2013). Although the desired initial state may not be maintained during training, experimental results show that they can stabilize the learning process as well.
65
+
66
+ Recently, the Wasserstein metric has gained much popularity in a wide range of applications in deep learning thanks to nice properties such as being a metric on a probability distribution space without requiring the two distributions to have common support. For instance, it has been successfully applied to multilabel classification (Frogner et al., 2015), gradient flows for policy updates in reinforcement learning (Zhang et al., 2018), training of generative models (Arjovsky et al., 2017; Gulrajani et al., 2017; Kolouri et al., 2019), and capturing long-term semantic structure in sequence-to-sequence language models (Chen et al., 2019).
67
+
68
+ While statistics such as the mean and (co)variance are useful summaries of a probability distribution, they cannot fully represent the underlying structure of the distribution (Fig. 1). Therefore, regularizing
69
+
70
+ or normalizing activations to follow the target distribution via statistics can be ineffective in some cases. For instance, normalizing activations via a single mean and variance, as in BN and decorrelated BN (Huang et al., 2018), can be inadequate for learning multimodal distributions (Bilen & Vedaldi, 2017; Deecke et al., 2019). This limitation motivates us to investigate a more general way of regularizing the distribution of activations. Instead of controlling activations via statistics, we define the target distribution and then minimize the Wasserstein distance between the activation distribution and the target distribution.
71
+
72
+ # 3 PROJECTED ERROR FUNCTION REGULARIZATION
73
+
74
+ We consider a neural network with $L$ layers, each of which has $d_{l}$ hidden units in layer $l$. Let $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^n$ be a set of $n$ training samples, assumed to be drawn i.i.d. from a probability distribution $P_{\mathbf{x},\mathbf{y}}$. In this paper, we consider optimization by stochastic gradient descent with a mini-batch of $b$ samples randomly drawn from $\mathcal{D}$ at each training iteration. For the $i$-th element of the samples, the neural network recursively computes:
75
+
76
+ $$
77
+ \boldsymbol {h} _ {i} ^ {l} = \phi \left(\boldsymbol {W} ^ {l} \boldsymbol {h} _ {i} ^ {l - 1} + \boldsymbol {b} ^ {l}\right) \tag {1}
78
+ $$
79
+
80
+ where $\pmb{h}_i^0 = \pmb{x}_i \in \mathbb{R}^{d_0}$ , $\pmb{h}_i^l \in \mathbb{R}^{d_l}$ is an activation in layer $l$ , and $\phi$ is an activation function. In the case of recurrent neural networks (RNNs), the recursive relationship takes the form of:
81
+
82
+ $$
83
+ \boldsymbol {h} _ {t _ {i}} ^ {l} = \phi \left(\boldsymbol {W} _ {\text {r e c}} ^ {l} \boldsymbol {h} _ {t - 1 _ {i}} ^ {l} + \boldsymbol {W} _ {\text {i n}} ^ {l} \boldsymbol {h} _ {t _ {i}} ^ {l - 1} + \boldsymbol {b} ^ {l}\right) \tag {2}
84
+ $$
85
+
86
+ where $\pmb{h}_{t_i}^l$ is an activation in layer $l$ at time $t$ and $\pmb{h}_{0_i}^l$ is an initial state. Without loss of generality, we focus on activations in layer $l$ of feed-forward networks and the mini-batch of samples $\{(x_{i},y_{i})\}_{i = 1}^{b}$ . Throughout this paper, we let $f^l$ be a function made by compositions of recurrent relation in equation 1 up to layer $l$ , i.e., $\pmb{h}_i^l = f^l (\pmb {x}_i)$ , and $f_{j}^{l}$ be a $j$ -th output of $f^l$ .
87
+
88
+ This paper proposes a new regularization loss, called projected error function regularization (PER), that encourages activations to follow the standard normal distribution. Specifically, PER directly matches the distribution of activations to the target distribution via the Wasserstein metric. Let $\mu \in \mathcal{P}(\mathbb{R}^{d_l})$ be the Gaussian measure defined as $\mu(\mathbb{A}) = \frac{1}{(2\pi)^{d_l / 2}}\int_{\mathbb{A}}\exp\left(-\frac{1}{2}\|\boldsymbol{x}\|^2\right)d\boldsymbol{x}$ and $\nu_{\mathbf{h}^l} = \frac{1}{b}\sum_i\delta_{\mathbf{h}_i^l}\in \mathcal{P}(\mathbb{R}^{d_l})$ be the empirical measure of hidden activations, where $\delta_{\mathbf{h}_i^l}$ is the Dirac unit mass at $h_i^l$. Then, the Wasserstein metric of order $p$ between $\mu$ and $\nu_{\mathbf{h}^l}$ is defined by:
89
+
90
+ $$
91
+ W _ {p} (\mu , \nu_ {\mathbf {h} ^ {l}}) = \left(\inf _ {\pi \in \Pi (\mu , \nu_ {\mathbf {h} ^ {l}})} \int_ {\mathbb {R} ^ {d _ {l}} \times \mathbb {R} ^ {d _ {l}}} d ^ {p} (\boldsymbol {x}, \boldsymbol {y}) \pi (d \boldsymbol {x}, d \boldsymbol {y})\right) ^ {1 / p} \tag {3}
92
+ $$
93
+
94
+ where $\Pi(\mu, \nu_{\mathbf{h}^l})$ is the set of all joint probability measures on $\mathbb{R}^{d_l} \times \mathbb{R}^{d_l}$ whose first and second marginals are $\mu$ and $\nu_{\mathbf{h}^l}$, respectively.
95
+
96
+ Because direct computation of equation 3 is intractable, we consider the sliced Wasserstein distance (Rabin et al., 2011), which approximates the Wasserstein distance by projecting the high-dimensional distributions onto $\mathbb{R}$ (Fig. 2). It has been proved that the sliced Wasserstein metric and the Wasserstein metric are equivalent (Santambrogio, 2015; Bonnotte, 2013). The sliced Wasserstein distance of order one between $\mu$ and $\nu_{\mathbf{h}^l}$ can be formulated as:
97
+
98
+ $$
99
+ SW_{1}\left(\mu, \nu_{\mathbf{h}^{l}}\right) = \int_{\mathbb{S}^{d_l-1}} W_{1}\left(\mu_{\boldsymbol{\theta}}, \nu_{\mathbf{h}_{\boldsymbol{\theta}}^{l}}\right) d\lambda(\boldsymbol{\theta}) = \int_{\mathbb{S}^{d_l-1}} \int_{-\infty}^{\infty} \left| F_{\mu_{\boldsymbol{\theta}}}(x) - \frac{1}{b} \sum_{i=1}^{b} 1_{\langle \mathbf{h}_{i}^{l}, \boldsymbol{\theta} \rangle \leq x} \right| dx \, d\lambda(\boldsymbol{\theta}) \tag{4}
100
+ $$
101
+
102
+ where $\mathbb{S}^{d_l - 1}$ is the unit sphere in $\mathbb{R}^{d_l}$, $\mu_{\theta}$ and $\nu_{\mathbf{h}_{\theta}^l}$ denote the measures projected onto the direction $\theta$, $\lambda$ is the uniform measure on $\mathbb{S}^{d_l - 1}$, and $F_{\mu_{\theta}}(x)$ is the cumulative distribution function of $\mu_{\theta}$. Herein, equation 4 can be evaluated by sorting $\{\langle h_i^l, \pmb{\theta}\rangle\}_i$ for each angle $\pmb{\theta}$.
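+
+ The following sketch (ours, following the sample-based approximation of Rabin et al. (2011); the Gaussian measure is replaced by an equal-sized sample, so each projected distance is a mean absolute difference of sorted values) illustrates this evaluation:
+
+ ```python
+ import numpy as np
+
+ def sliced_w1_to_standard_normal(h, n_slices=256, seed=0):
+     """Monte Carlo estimate of SW1 between activations h [b, d] and N(0, I)."""
+     rng = np.random.default_rng(seed)
+     b, d = h.shape
+     total = 0.0
+     for _ in range(n_slices):
+         theta = rng.normal(size=d)
+         theta /= np.linalg.norm(theta)        # theta uniform on the sphere
+         proj = np.sort(h @ theta)             # sorted projected activations
+         ref = np.sort(rng.normal(size=b))     # projection of N(0, I) is N(0, 1)
+         total += np.abs(proj - ref).mean()    # 1-D W1 between sorted samples
+     return total / n_slices
+ ```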
103
+
104
+ While we could directly use the sliced Wasserstein distance in equation 4 as a regularization loss, it has a computational dependency on the batch dimension due to the sorting. This computational dependency between samples may not be desirable in distributed and large-batch training, which has become more and more prevalent in recent years. For this reason, we remove the dependency by applying the
105
+
106
+ ![](images/778b7476d21e6534ca01abeb82355592661836c92626dd59e847339226e8fa0a.jpg)
107
+ Figure 2: Illustration of minimization of the sliced Wasserstein distance between the current distribution and the target distribution. Note that it only concerns a distance in the projected dimension.
108
+
109
+ # Algorithm 1 Backward pass under PER
110
+
111
+ Input The number of Monte Carlo evaluations $s$ , an activation for $i$ -th sample $h_i$ , the gradient of the loss $\nabla_{h_i}\mathcal{L}$ , a regularization coefficient $\lambda$
112
+
113
+ 1: $\pmb{g} \gets \mathbf{0}$
114
+ 2: for $k \gets 1$ to $s$ do
115
+ 3: Sample $\pmb{v} \sim \mathcal{N}(\pmb{0}, \pmb{I})$
116
+ 4: $\pmb{\theta} \gets \pmb{v} / \| \pmb{v} \|_2$
117
+ 5: Project $h_i^\prime \leftarrow \langle h_i,\pmb {\theta}\rangle$
118
+ 6: $g_{k}\gets \mathrm{erf}\left(h_{i}^{\prime} / \sqrt{2}\right)$
119
+ 7: $\pmb{g} \gets \pmb{g} + g_{k}\pmb{\theta} / s$
120
+ 8: end for
121
+ 9: return $\nabla_{\pmb{h}_i}\mathcal{L} + \lambda \pmb{g}$
122
+
123
+ Minkowski inequality to equation 4, and obtain the regularization loss $\mathcal{L}_{per}(\nu_{\mathbf{h}^l})$
124
+
125
+ $$
126
+ \begin{aligned} SW_{1}(\mu, \nu_{\mathbf{h}^{l}}) &\leq \int_{\mathbb{S}^{d_l-1}} \int_{-\infty}^{\infty} \frac{1}{b} \sum_{i=1}^{b} \left| F_{\mu_{\boldsymbol{\theta}}}(x) - 1_{\langle \boldsymbol{h}_{i}^{l}, \boldsymbol{\theta} \rangle \leq x} \right| dx \, d\lambda(\boldsymbol{\theta}) \\ &= \frac{1}{b} \sum_{i=1}^{b} \int_{\mathbb{S}^{d_l-1}} \left( \langle \boldsymbol{h}_{i}^{l}, \boldsymbol{\theta} \rangle \operatorname{erf}\left(\frac{\langle \boldsymbol{h}_{i}^{l}, \boldsymbol{\theta} \rangle}{\sqrt{2}}\right) + \sqrt{\frac{2}{\pi}} \exp\left(-\frac{\langle \boldsymbol{h}_{i}^{l}, \boldsymbol{\theta} \rangle^{2}}{2}\right) \right) d\lambda(\boldsymbol{\theta}) = \mathcal{L}_{per}\left(\nu_{\mathbf{h}^{l}}\right) \end{aligned} \tag{5}
127
+ $$
128
+
129
+ whose gradient with respect to $h_i^l$ is:
130
+
131
+ $$
132
+ \nabla_{\boldsymbol{h}_{i}^{l}} \mathcal{L}_{per}\left(\nu_{\mathbf{h}^{l}}\right) = \frac{1}{b} \mathbb{E}_{\boldsymbol{\theta} \sim U\left(\mathbb{S}^{d_{l}-1}\right)}\left[ \operatorname{erf}\left(\left\langle \boldsymbol{\theta}, \boldsymbol{h}_{i}^{l} / \sqrt{2} \right\rangle\right) \boldsymbol{\theta} \right] \tag{6}
133
+ $$
134
+
135
+ where $U(\mathbb{S}^{d_l - 1})$ is the uniform distribution on $\mathbb{S}^{d_l - 1}$. In this paper, the expectation over $U(\mathbb{S}^{d_l - 1})$ is approximated by the Monte Carlo method with $s$ samples. Therefore, PER results in a simple modification of the backward pass, as in Alg. 1.
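+
+ A minimal sketch of this backward-pass modification (ours; the use of scipy's erf and the variable names are our own choices, while the loop structure follows Alg. 1):
+
+ ```python
+ import numpy as np
+ from scipy.special import erf
+
+ def per_gradient(h, n_slices=256, seed=0):
+     """Monte Carlo estimate of the PER gradient of eq. (6) for one
+     activation vector h of shape [d], as in the loop of Alg. 1."""
+     rng = np.random.default_rng(seed)
+     g = np.zeros_like(h)
+     for _ in range(n_slices):
+         theta = rng.normal(size=h.shape)
+         theta /= np.linalg.norm(theta)          # theta ~ U(S^{d-1})
+         g += erf((h @ theta) / np.sqrt(2)) * theta
+     return g / n_slices
+
+ # Training then backpropagates dL/dh_i + lambda * per_gradient(h_i),
+ # which is the return value of Alg. 1.
+ ```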
136
+
137
+ Encouraging activations to follow the standard normal distribution can be motivated by the natural gradient (Amari, 1998). The natural gradient is the steepest descent direction in a Riemannian manifold, and it is also the direction that maximizes the probability of not increasing generalization error (Roux et al., 2008). The natural gradient is obtained by multiplying the gradient by the inverse Fisher information matrix. In Raiko et al. (2012) and Desjardins et al. (2015), under the assumption of independence between the forward and backward passes and between activations of different layers, the Fisher information matrix is a block diagonal matrix, each block of which is given by:
138
+
139
+ $$
140
+ \boldsymbol{F}_{l} = \mathbb{E}_{(\boldsymbol{x}, \boldsymbol{y}) \sim (\mathbf{x}, \mathbf{y})}\left[ \frac{\partial \mathcal{L}}{\partial \operatorname{vec}\left(\boldsymbol{W}^{l}\right)} \left( \frac{\partial \mathcal{L}}{\partial \operatorname{vec}\left(\boldsymbol{W}^{l}\right)} \right)^{T} \right] = \mathbb{E}_{\mathbf{x}}\left[ \boldsymbol{h}^{l-1} \left(\boldsymbol{h}^{l-1}\right)^{T} \right] \mathbb{E}_{(\mathbf{x}, \mathbf{y})}\left[ \frac{\partial \mathcal{L}}{\partial \boldsymbol{a}^{l}} \left( \frac{\partial \mathcal{L}}{\partial \boldsymbol{a}^{l}} \right)^{T} \right] \tag{7}
141
+ $$
142
+
143
+ where $\operatorname{vec}(\boldsymbol{W}^l)$ is vectorized $\boldsymbol{W}^l$ , $\boldsymbol{h}^{l-1} = f^{l-1}(\boldsymbol{x})$ , and $\boldsymbol{a}^l = \boldsymbol{W}^l f^{l-1}(\boldsymbol{x}) + \boldsymbol{b}^l$ for $\boldsymbol{x} \sim \mathbf{x}$ .
144
+
145
+ Since computing the inverse Fisher information matrix is too expensive to perform at every iteration, previous studies have put effort into developing reparametrization techniques, activation functions, and
146
+
147
+ ![](images/edb030f2e630fbd7129f747357990646be30b9946d3083b293e5c537d755465d.jpg)
148
+ Figure 3: Illustration of PER and its gradient in $\mathbb{R}$ . Herein, PER is shifted by $c$ so that $\mathcal{L}_{per}(0) - c = 0$ . The Huber loss is defined as $h(x) = |x| - 0.5$ in $|x| > 1$ and $h(x) = x^2 / 2$ in $|x| \leq 1$ and the Pseudo-Huber loss is defined as $g(x) = \sqrt{1 + x^2} - 1$ .
149
+
150
+ regularization losses to make $\boldsymbol{F}_l$ close to $\boldsymbol{I}$, thereby making the gradient close to the natural gradient. For instance, making activations have zero mean and unit variance (LeCun et al., 1998; Schraudolph, 1998; Glorot & Bengio, 2010; Raiko et al., 2012; Wiesler et al., 2014) and decorrelating activations (Cogswell et al., 2016; Xiong et al., 2016; Huang et al., 2018) make $\mathbb{E}\left[\boldsymbol{h}^{l-1}(\boldsymbol{h}^{l-1})^T\right] \approx \boldsymbol{I}$, and these techniques result in faster training and improved generalization performance. From this perspective, PER is expected to enjoy the same advantages by matching $\nu_{\mathbf{h}^l}$ to $\mathcal{N}(\mathbf{0}, \boldsymbol{I})$.
151
+
152
+ # 3.1 COMPARISON TO CONTROLLING ACTIVATIONS IN $L^p$ SPACE
153
+
154
+ In this subsection, we theoretically compare PER with existing methods that control activations in $L^p$ space. $L^p(\mathbb{R}^{d_0})$ is the space of measurable functions whose $p$ -th power of absolute value is Lebesgue integrable, and norm of $f \in L^p(\mathbb{R}^{d_0})$ is given by:
155
+
156
+ $$
157
+ \| f \| _ {p} = \left(\int_ {\mathbb {R} ^ {d _ {0}}} | f (\boldsymbol {x}) | ^ {p} d P _ {\boldsymbol {x}} (\boldsymbol {x})\right) ^ {1 / p} < \infty \tag {8}
158
+ $$
159
+
160
+ where $P_{\mathbf{x}}$ is the unknown probability distribution generating training samples $\{\pmb{x}_i\}_{i=1}^n$ . Since we have no access to $P_{\mathbf{x}}$ , it is approximated by the empirical measure of mini-batch samples.
161
+
162
+ The $L^p$ norm is widely used in the literature for regularization and normalization of neural networks. For instance, activation norm regularization (Merity et al., 2017a) penalizes the $L^2$ norm of activations. As another example, BN and its $p$-th order generalization use the $L^p$ norm such that the norm of the centralized activation, or pre-activation, is bounded:
163
+
164
+ $$
165
+ \psi \left(h _ {i j} ^ {l}\right) = \gamma_ {j} ^ {l} \xi \left(h _ {i j} ^ {l}\right) + \beta_ {j} ^ {l}, \quad \xi \left(h _ {i j} ^ {l}\right) = \frac {h _ {i j} ^ {l} - \bar {\mu} _ {j}}{\left(\sum_ {k} \frac {1}{b} \left| h _ {k j} ^ {l} - \bar {\mu} _ {j} \right| ^ {p}\right) ^ {1 / p}} \tag {9}
166
+ $$
167
+
168
+ where $h_{ij}^{l}$ is the $j$-th unit of $\pmb{h}_i^l$, $\bar{\mu}_j = \frac{1}{b}\sum_k h_{kj}^l$ is the sample mean, $\beta_j^l$ is a learnable shift parameter, and $\gamma_j^l$ is a learnable scale parameter. Herein, we have $\| \xi \circ f_j^l\|_p = 1$ for any unit $j$ and any empirical measure, and thus $\| \psi \circ f_j^l\|_p \leq \| \gamma_j^l \xi \circ f_j^l\|_p + \| \beta_j^l\|_p = |\gamma_j^l| + |\beta_j^l|$.
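+
+ For concreteness, a NumPy sketch of the $L^p$ normalization in equation 9 (our own; the small epsilon for numerical stability is an added assumption):
+
+ ```python
+ import numpy as np
+
+ def lp_batch_norm(h, gamma, beta, p=2, eps=1e-5):
+     """p-th order generalization of BN from eq. (9).
+
+     h: [b, d] activations; gamma, beta: [d] learnable scale and shift.
+     p = 2 recovers standard (variance-based) batch normalization.
+     """
+     mu = h.mean(axis=0)                                  # per-unit sample mean
+     norm = (np.abs(h - mu) ** p).mean(axis=0) ** (1.0 / p)
+     return gamma * (h - mu) / (norm + eps) + beta
+ ```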
169
+
170
+ PER differs from $L^p$ norm-based approaches in two aspects. First, PER can be considered as an $L^p$ norm with an adaptive order in the projected space because it is very similar to the Pseudo-Huber loss in one-dimensional space (Fig. 3). Herein, the Pseudo-Huber loss is a smooth approximation of the Huber loss (Huber, 1964). Therefore, PER smoothly changes its behavior between the $L^1$ and $L^2$ norms, making the regularization loss sensitive to small values and insensitive to outliers with large values. In contrast, the previous approaches use a predetermined order $p$, which makes the norm change insensitively in the near-zero region when $p \leq 1$ or explode in the large-value region when $p > 1$.
171
+
172
+ Second, PER captures the interaction between hidden units by projection vectors, unlike $L^p$ norm. To see this, let $\| f^l\| _p^p = \frac{1}{b}\sum_{i,j}|h_{ij}^l |^p = \frac{1}{b}\sum_{i,j}|\langle h_i^l,e_j\rangle|^p$ where $\{\pmb {e}_j\}_{j = 1}^{d_l}$ is the natural basis of
173
+
174
+ Table 1: Top-1 error rates of ResNets on CIFAR-10. Lower is better. All numbers are rounded to two decimal places. Boldface indicates the minimum error. * and ** are results from Zhang et al. (2019) and He et al. (2016), respectively.
175
+
176
+ <table><tr><td>Model</td><td>Method</td><td>Test error</td></tr><tr><td rowspan="3">ResNet-56</td><td>Vanilla</td><td>7.21</td></tr><tr><td>BN</td><td>6.95</td></tr><tr><td>PER</td><td>6.72</td></tr><tr><td rowspan="3">ResNet-110</td><td>Vanilla</td><td>6.90 (7.24*)</td></tr><tr><td>BN</td><td>6.62 (6.61**)</td></tr><tr><td>PER</td><td>6.19</td></tr></table>
177
+
178
+ Table 2: Top-1 error rates of 11-layer CNNs on tiny ImageNet. Lower is better. All numbers are rounded to two decimal places. Boldface indicates the minimum error. Numbers in parentheses represent results in Littwin & Wolf (2018).
179
+
180
+ <table><tr><td>Method</td><td>Test error</td></tr><tr><td>Vanilla</td><td>37.45 (39.22)</td></tr><tr><td>BN</td><td>39.22 (40.02)</td></tr><tr><td>VCL</td><td>(37.30)</td></tr><tr><td>PER</td><td>36.74</td></tr></table>
181
+
182
+ $\mathbb{R}^{d_l}$ . That is, the norm computes the regularization loss, or the normalizer, of activations with the natural basis as a projection vector. However, PER uses general projection vectors $\theta \sim U(\mathbb{S}^{d_l - 1})$ , capturing the interaction between hidden units when computing the regularization loss. These two differences make PER more delicate criterion for regularizing activations in deep neural networks than $L^p$ norm, as we will show in the next section.
183
+
184
+ # 4 EXPERIMENTS
185
+
186
+ This section illustrates the effectiveness of PER through experiments on different benchmark tasks with various datasets and architectures. We compare PER with BN, which normalizes the first and second moments, and VCL, which regularizes the fourth moment. PER is also compared with $L^1$ and $L^2$ activation norm regularizations, which behave similarly in some regions of the projected space. We then analyze the computational complexity of PER and its impact on the distribution of activations. Throughout all experiments, we use 256 slices and the same regularization coefficient for the regularization losses computed in each layer.
187
+
188
+ # 4.1 IMAGE CLASSIFICATION IN CIFAR-10, CIFAR-100, AND TINY IMAGENET
189
+
190
+ We evaluate PER on the image classification task on CIFAR (Krizhevsky et al., 2009) and a subset of ImageNet (Russakovsky et al., 2015) called tiny ImageNet. We first evaluate PER with ResNet (He et al., 2016) on CIFAR-10 and compare it with BN and a vanilla network initialized by fixup initialization (Zhang et al., 2019). We match the experimental details for training under BN with He et al. (2016) and under PER and vanilla with Zhang et al. (2019), and we obtain performances similar to those presented in the papers. Herein, we search the regularization coefficient over $\{3\mathrm{e}{-4}, 1\mathrm{e}{-4}, 3\mathrm{e}{-5}, 1\mathrm{e}{-5}\}$. Table 1 presents the results of the CIFAR-10 experiments with ResNet-56 and ResNet-110. PER outperforms BN as well as the vanilla networks in both architectures. In particular, PER improves the test errors of the BN-free vanilla networks by $0.49\%$ and $0.71\%$ for ResNet-56 and ResNet-110, respectively.
191
+
192
+ We also performed experiments on an 11-layer convolutional neural network (11-layer CNN) examined in VCL (Littwin & Wolf, 2018). This architecture was originally proposed in Clevert et al. (2016). Following Littwin & Wolf (2018), we perform experiments on 11-layer CNNs with ELU, ReLU, and Leaky ReLU activations, and match the experimental details in Littwin & Wolf (2018), except that we used a 10x smaller learning rate for bias parameters and an additional scalar bias after ReLU and Leaky ReLU based on Zhang et al. (2019). By doing so, we obtain results similar to those presented in Littwin & Wolf (2018). Again, the search space for the regularization coefficient is $\{3\mathrm{e}{-4}, 1\mathrm{e}{-4}, 3\mathrm{e}{-5}, 1\mathrm{e}{-5}\}$. For ReLU and Leaky ReLU on CIFAR-100, however, we additionally search $\{3\mathrm{e}{-6}, 1\mathrm{e}{-6}, 3\mathrm{e}{-7}, 1\mathrm{e}{-7}\}$ because training with PER diverged in these settings. As shown in Table 3, PER shows the best performance in four out of six experiments. In the other cases, PER gives performance comparable to BN or VCL, coming within $0.16\%$ of the best performance.
193
+
194
+ Following Littwin & Wolf (2018), PER is also evaluated on tiny ImageNet. In this experiment, the number of convolutional filters in each layer is doubled. Due to the limited time and resources, we
195
+
196
+ Table 3: Top-1 error rates of 11-layer CNNs on CIFAR-10 and CIFAR-100. Lower is better. All numbers are rounded to two decimal places. Boldface indicates the minimum error. Numbers in parentheses represent results in Littwin & Wolf (2018).
197
+
198
+ <table><tr><td>Activation</td><td>Method</td><td>CIFAR-10</td><td>CIFAR-100</td></tr><tr><td rowspan="4">ReLU</td><td>Vanilla</td><td>8.43 (8.36)</td><td>29.45 (32.80)</td></tr><tr><td>BN</td><td>7.53 (7.78)</td><td>29.13 (29.10)</td></tr><tr><td>VCL</td><td>7.80 (7.80)</td><td>30.30 (30.30)</td></tr><tr><td>PER</td><td>7.21</td><td>29.29</td></tr><tr><td rowspan="4">LeakyReLU</td><td>Vanilla</td><td>6.73 (6.70)</td><td>26.50 (26.80)</td></tr><tr><td>BN</td><td>6.38 (7.08)</td><td>26.83 (27.20)</td></tr><tr><td>VCL</td><td>6.45 (6.45)</td><td>26.30 (26.30)</td></tr><tr><td>PER</td><td>6.29</td><td>25.50</td></tr><tr><td rowspan="4">ELU</td><td>Vanilla</td><td>6.74 (6.98)</td><td>27.53 (28.70)</td></tr><tr><td>BN</td><td>6.69 (6.63)</td><td>26.60 (26.90)</td></tr><tr><td>VCL</td><td>6.26 (6.15)</td><td>25.86 (25.60)</td></tr><tr><td>PER</td><td>6.42</td><td>25.73</td></tr></table>
199
+
200
+ conduct experiments only with ELU that gives good performances for PER, BN, and VCL in CIFAR. As shown in Table 2, PER is also effective in the larger model in the larger image classification dataset.
201
+
202
+ # 4.2 LANGUAGE MODELING IN PTB AND WIKITEXT2
203
+
204
+ We evaluate PER on the word-level language modeling task on PTB (Mikolov et al., 2010) and WikiText2 (Merity et al., 2017b). We apply PER to a two-layer LSTM with 650 hidden units, with and without reused embeddings (RE) as proposed in Inan et al. (2017) and Press & Wolf (2016), and variational dropout (VD) as proposed in Gal & Ghahramani (2016). We used the same configurations as Merity et al. (2017a) but failed to reproduce their results. In particular, when we rescaled gradients whose norm exceeded 10, we observed divergence or poor performance (almost 2x the perplexity of the published result). Therefore, we rescale gradients whose norm exceeds 0.25 instead of 10, based on the default hyperparameter of the PyTorch word-level language model, which is also mentioned in Merity et al. (2017a). We also train the networks for 60 epochs instead of 80 since validation perplexity does not improve after 60 epochs in most cases. In this task, PER is compared with recurrent BN (RBN; Cooijmans et al., 2017) because BN is not directly applicable to LSTM. We also compare PER with $L^1$ and $L^2$ activation norm regularizations. Herein, the search space of regularization coefficients for PER, $L^1$ regularization, and $L^2$ regularization is $\{3\mathrm{e}{-4}, 1\mathrm{e}{-4}, 3\mathrm{e}{-5}\}$. For the $L^1$ and $L^2$ penalties on PTB, we search additional coefficients over $\{1\mathrm{e}{-5}, 3\mathrm{e}{-6}, 1\mathrm{e}{-6}, 3\mathrm{e}{-7}, 1\mathrm{e}{-7}\}$ because the searched coefficients seem to constrain the capacity.
205
+
206
+ We list in Table 4 the perplexities of the methods on PTB and WikiText2. While all regularization techniques show regularization effects by improving test perplexity, PER gives the best test perplexity except for LSTM and RE-LSTM on the PTB dataset, where PER is the second-best method. We also note that naively applying RBN often reduces performance. For instance, RBN increases the test perplexity of VD-LSTM by about 5 on both PTB and WikiText2.
207
+
208
+ # 4.3 ANALYSIS
209
+
210
+ In this subsection, we analyze the computational complexity of PER and its impact on closeness to the standard normal distribution in the 11-layer CNN.
211
+
212
+ Table 4: Validation and test perplexities on PTB and WikiText2. Lower is better. All numbers are rounded to one decimal place. Boldface indicates minimum perplexity.
213
+
214
+ <table><tr><td rowspan="2">Model</td><td rowspan="2">Method</td><td colspan="2">PTB</td><td colspan="2">WikiText2</td></tr><tr><td>Valid</td><td>Test</td><td>Valid</td><td>Test</td></tr><tr><td rowspan="5">LSTM</td><td>Vanilla</td><td>123.2</td><td>122.0</td><td>138.9</td><td>132.7</td></tr><tr><td>\( L^1 \) penalty</td><td>119.6</td><td>114.1</td><td>137.7</td><td>130.0</td></tr><tr><td>\( L^2 \) penalty</td><td>120.5</td><td>115.2</td><td>136.0</td><td>131.1</td></tr><tr><td>RBN</td><td>118.2</td><td>115.1</td><td>156.2</td><td>148.3</td></tr><tr><td>PER</td><td>118.5</td><td>114.5</td><td>134.2</td><td>129.6</td></tr><tr><td rowspan="5">RE-LSTM</td><td>Vanilla</td><td>114.1</td><td>112.2</td><td>129.2</td><td>123.2</td></tr><tr><td>\( L^1 \) penalty</td><td>112.2</td><td>108.5</td><td>128.6</td><td>122.7</td></tr><tr><td>\( L^2 \) penalty</td><td>116.6</td><td>108.2</td><td>126.5</td><td>123.3</td></tr><tr><td>RBN</td><td>113.6</td><td>110.4</td><td>138.1</td><td>131.6</td></tr><tr><td>PER</td><td>110.0</td><td>108.5</td><td>123.2</td><td>117.4</td></tr><tr><td rowspan="5">VD-LSTM</td><td>Vanilla</td><td>84.9</td><td>81.1</td><td>99.6</td><td>94.5</td></tr><tr><td>\( L^1 \) penalty</td><td>84.9</td><td>81.5</td><td>98.2</td><td>92.9</td></tr><tr><td>\( L^2 \) penalty</td><td>84.5</td><td>81.2</td><td>98.8</td><td>94.2</td></tr><tr><td>RBN</td><td>89.7</td><td>86.4</td><td>104.3</td><td>99.4</td></tr><tr><td>PER</td><td>84.1</td><td>80.7</td><td>98.1</td><td>92.6</td></tr><tr><td rowspan="5">RE-VD-LSTM</td><td>Vanilla</td><td>78.9</td><td>75.7</td><td>91.4</td><td>86.4</td></tr><tr><td>\( L^1 \) penalty</td><td>78.3</td><td>75.1</td><td>90.5</td><td>86.1</td></tr><tr><td>\( L^2 \) penalty</td><td>79.2</td><td>75.8</td><td>90.3</td><td>86.1</td></tr><tr><td>RBN</td><td>83.7</td><td>80.5</td><td>95.5</td><td>90.5</td></tr><tr><td>PER</td><td>78.1</td><td>74.9</td><td>90.6</td><td>85.9</td></tr></table>
215
+
216
+ # 4.3.1 COMPUTATIONAL COMPLEXITY
217
+
218
+ PER has no additional parameters. However, BN and VCL require additional parameters for each channel and each location and channel in every layer, respectively; that is, $2.5\mathrm{K}$ and $350\mathrm{K}$ number of parameters are introduced in BN and VCL in the 11-layer CNN, respectively. In terms of time complexity, PER has the complexity of $O(bd_{l}s)$ for projection operation in each layer $l$ . On the other hand, BN and VCL have $O bd_{l})$ complexities. In our benchmarking, each training iteration takes 0.071 seconds for a vanilla network, 0.083 seconds for BN, 0.087 for VCL, and 0.093 seconds for PER on a single NVIDIA Titan X. Even though PER requires slightly more training time than BN and VCL, this disadvantage can be mitigated by computation of PER is only required in training and PER does not have additional parameters.
219
+
220
+ # 4.3.2 CLOSENESS TO THE STANDARD NORMAL DISTRIBUTION
221
+
222
+ To examine the effect of PER on the closeness to $\mathcal{N}(\mathbf{0},\mathbf{I})$, we analyze the distribution of activations in the 11-layer CNN from different perspectives. We first analyze the distribution of a single activation $h_j^l$ for some unit $j$ and layer $l$ (Fig. 4). We observe that changes in probability distributions between two consecutive epochs are small under BN because BN bounds the norm of activations via learned parameters. On the contrary, activation distributions under vanilla and PER fluctuate between consecutive epochs. However, PER prevents the variance from exploding and pushes the mean towards zero. As shown in Fig. 4, while the variance of $\nu_{h_j^6}$ under both PER and vanilla is very high at the beginning of training, it keeps moving towards one under PER during training. Similarly, PER recovers the biased means of $\nu_{h_i^3}$ and $\nu_{h_k^9}$ at the early stage of learning.
223
+
224
+ To precisely evaluate closeness to the standard normal distribution, we also analyze $SW_{1}(\mathcal{N}(\mathbf{0},\mathbf{I}),\nu_{\mathbf{h}^{l}})$ at each epoch (Fig. 5). Herein, the sliced Wasserstein distance is computed by approximating the Gaussian measure with the empirical measure of samples drawn from $\mathcal{N}(\mathbf{0},\mathbf{I})$, as in Rabin et al. (2011). Similar to the previous result, while BN with $\beta_j^l = 0$ and $\gamma_j^l = 1$ at the initial state gives a small $SW_{1}(\mathcal{N}(\mathbf{0},\mathbf{I}),\nu_{\mathbf{h}^{l}})$ in the early stage of training, PER can also effectively control
225
+
226
+ ![](images/7193f95b87f438bf32f82c6a9f326d07d89cda7fa303c58719fd50859084e990.jpg)
227
+ Figure 4: Evolution of the distributions of $\nu_{h_i^3}$, $\nu_{h_j^6}$, and $\nu_{h_k^9}$ for fixed, randomly drawn $i, j, k$ on the training set. (a)-(c) show the (0.25, 0.5, 0.75) quantiles under PER, vanilla, and BN. (d) and (e) show the sample mean and the sample variance of the activations. Variance is clipped at 5 for better visualization.
228
+
229
+ ![](images/3c068c9c138e56c8c52981c8962877d20721b3543036bd5d5ba7878983dfbf21.jpg)
230
+ Figure 5: Closeness to $\mathcal{N}(0, I)$ in the Wasserstein probability distribution space.
231
+
232
+ ![](images/b543488b958b4581b63eb43633b2be4d7a742873253601e32d0938376efecb94.jpg)
233
+
234
+ ![](images/c388685eba93ed1ccbe62fd4dccc145691a848250d2b99e7e38bcf66a660ea4b.jpg)
235
+
236
+ the distribution without such normalization. This confirms that PER prevents the distribution of activations from drifting away from the target distribution.
237
+
238
+ # 5 CONCLUSION
239
+
240
+ We proposed a regularization loss that minimizes an upper bound of the 1-Wasserstein distance between the standard normal distribution and the distribution of activations. In image classification and language modeling experiments, PER gives marginal but consistent improvements over methods based on sample statistics (BN and VCL) as well as over $L^1$ and $L^2$ activation regularization. The analysis of changes in the activations' distribution during training verifies that PER stabilizes the probability distribution of activations without normalization. Considering that the regularization loss can be easily applied to a wide range of tasks without changing architectures or training strategies, unlike BN, we believe that these results indicate the valuable potential of regularizing networks in probability distribution space as a direction for future research.
243
+ The idea of regularizing activations with a metric in probability distribution space can be extended to many useful applications. For instance, one can exploit task-specific priors when choosing a target distribution, e.g., the Laplace distribution to encourage sparse activations. The empirical distribution of activations computed by a pretrained network can also serve as a target distribution to prevent catastrophic forgetting: the activation distribution can be regularized so that it does not drift away from the distribution learned on the previous task, in contrast to previous approaches that constrain changes of the logits in function $L^2$ space (Benjamin et al., 2019).
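+
+ Swapping the target distribution amounts to changing the reference sample in the sliced estimate. A hedged sketch (the sampler interface is our own construction) using a Laplace target to encourage sparser activations:
+
+ ```python
+ import torch
+
+ def sliced_w1_to_target(h: torch.Tensor, sample_target, num_slices: int = 16):
+     """Sliced-W1 penalty against an arbitrary target given via a sampler."""
+     b, d = h.shape
+     theta = torch.randn(d, num_slices, device=h.device)
+     theta = theta / theta.norm(dim=0, keepdim=True)
+     proj_h = (h @ theta).sort(dim=0).values
+     # Sample the target law, project it with the same directions, sort.
+     ref = (sample_target((b, d)).to(h.device) @ theta).sort(dim=0).values
+     return (proj_h - ref).abs().mean()
+
+ # Example: a factorized Laplace(0, 1) target instead of N(0, I).
+ laplace = torch.distributions.Laplace(0.0, 1.0)
+ penalty = lambda h: sliced_w1_to_target(h, lambda shape: laplace.sample(shape))
+ ```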
244
+
245
+ # ACKNOWLEDGMENTS
246
+
247
+ We would like to thank Min-Gwan Seo, Dong-Hyun Lee, Dongmin Shin, and anonymous reviewers for the discussions and suggestions.
248
+
249
+ # REFERENCES
250
+
251
+ Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251-276, 1998.
252
+ Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International Conference on Machine Learning, 2017.
253
+ David Balduzzi, Marcus Frean, Lennox Leary, JP Lewis, Kurt Wan-Duo Ma, and Brian McWilliams. The shattered gradients problem: If resnets are the answer, then what is the question? In International Conference on Machine Learning, 2017.
254
+ Ari S Benjamin, David Rolnick, and Konrad Kording. Measuring and regularizing networks in function space. In International Conference on Learning Representations, 2019.
255
+ Hakan Bilen and Andrea Vedaldi. Universal representations: The missing link between faces, text, planktons, and cat breeds. arXiv preprint arXiv:1701.07275, 2017.
256
+ Nils Bjorck, Carla P Gomes, Bart Selman, and Kilian Q Weinberger. Understanding batch normalization. In Advances in Neural Information Processing Systems, 2018.
257
+ Nicolas Bonnotte. Unidimensional and Evolution Methods for Optimal Transportation. PhD thesis, Paris 11, 2013.
258
+ Liqun Chen, Yizhe Zhang, Ruiyi Zhang, Chenyang Tao, Zhe Gan, Haichao Zhang, Bai Li, Dinghan Shen, Changyou Chen, and Lawrence Carin. Improving sequence-to-sequence learning via optimal transport. In International Conference on Learning Representations, 2019.
259
+ Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). In International Conference of Learning Representations, 2016.
260
+ Michael Cogswell, Faruk Ahmed, Ross Girshick, Larry Zitnick, and Dhruv Batra. Reducing overfitting in deep networks by decorrelating representations. In International Conference on Learning Representations, 2016.
261
+ Tim Cooijmans, Nicolas Ballas, César Laurent, Caglar Gülçehre, and Aaron Courville. Recurrent batch normalization. In International Conference on Learning Representations, 2017.
262
+ Lucas Deecke, Iain Murray, and Hakan Bilen. Mode normalization. In International Conference on Learning Representations, 2019.
263
+ Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, et al. Natural neural networks. In Advances in Neural Information Processing Systems, 2015.
264
+ Charlie Frogner, Chiyuan Zhang, Hossein Mobahi, Mauricio Araya, and Tomaso A Poggio. Learning with a Wasserstein loss. In Advances in Neural Information Processing Systems, 2015.
265
+
266
+ Yarin Gal and Zoubin Ghahramani. A theoretically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems, 2016.
267
+ Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional sequence to sequence learning. In International Conference on Machine Learning, 2017.
268
+ Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V Le. DropBlock: A regularization method for convolutional networks. In Advances in Neural Information Processing Systems, 2018.
269
+ Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Artificial Intelligence and Statistics, 2010.
270
+ Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, 2017.
271
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In IEEE International Conference on Computer Vision, 2015.
272
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.
273
+ Sepp Hochreiter. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(02): 107-116, 1998.
274
+ Elad Hoffer, Ron Banner, Itay Golan, and Daniel Soudry. Norm matters: Efficient and accurate normalization schemes in deep networks. In Advances in Neural Information Processing Systems, 2018.
275
+ Lei Huang, Dawei Yang, Bo Lang, and Jia Deng. Decorrelated batch normalization. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
276
+ Lei Huang, Yi Zhou, Fan Zhu, Li Liu, and Ling Shao. Iterative normalization: Beyond standardization towards efficient whitening. In IEEE Conference on Computer Vision and Pattern Recognition, 2019.
277
+ Peter J Huber. Robust estimation of a location parameter. The Annals of Mathematical Statistics, pp. 73-101, 1964.
278
+ Hakan Inan, Khashayar Khosravi, and Richard Socher. Tying word vectors and word classifiers: A loss framework for language modeling. In International Conference on Learning Representations, 2017.
279
+ Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 2015.
280
+ Mahdi M Kalayeh and Mubarak Shah. Training faster by separating modes of variation in batch-normalized models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
281
+ Jonas Kohler, Hadi Daneshmand, Aurelien Lucchi, Ming Zhou, Klaus Neymeyr, and Thomas Hofmann. Towards a theoretical understanding of batch normalization. arXiv preprint arXiv:1805.10694, 2018.
282
+ Soheil Kolouri, Phillip E. Pope, Charles E. Martin, and Gustavo K. Rohde. Sliced Wasserstein auto-encoders. In International Conference on Learning Representations, 2019.
283
+ Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, 2009.
284
+ Yann LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pp. 9-50. 1998.
285
+
286
+ Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
287
+ Qianli Liao, Kenji Kawaguchi, and Tomaso Poggio. Streaming normalization: Towards simpler and more biologically-plausible normalizations for online and recurrent learning. arXiv preprint arXiv:1610.06160, 2016.
288
+ Etai Littwin and Lior Wolf. Regularizing by the variance of the activations' sample-variances. In Advances in Neural Information Processing Systems, 2018.
289
+ Stephen Merity, Bryan McCann, and Richard Socher. Revisiting activation regularization for language RNNs. In International Conference on Machine Learning, 2017a.
290
+ Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In International Conference on Learning Representations, 2017b.
291
+ Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In Annual Conference of the International Speech Communication Association, 2010.
292
+ Dmytro Mishkin and Jiri Matas. All you need is a good init. In International Conference on Learning Representations, 2016.
293
+ Ofir Press and Lior Wolf. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859, 2016.
294
+ Julien Rabin, Gabriel Peyré, Julie Delon, and Marc Bernot. Wasserstein barycenter and its application to texture mixing. In International Conference on Scale Space and Variational Methods in Computer Vision, 2011.
295
+ Tapani Raiko, Harri Valpola, and Yann LeCun. Deep learning made easier by linear transformations in perceptrons. In Artificial Intelligence and Statistics, 2012.
296
+ Nicolas L Roux, Pierre-Antoine Manzagol, and Yoshua Bengio. Topmoumoute online natural gradient algorithm. In Advances in Neural Information Processing Systems, 2008.
297
+ Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115 (3):211-252, 2015.
298
+ Filippo Santambrogio. Optimal transport for applied mathematicians. Birkhäuser, NY, 55:58-63, 2015.
299
+ Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. How does batch normalization help optimization? In Advances in Neural Information Processing Systems, 2018.
300
+ Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
301
+ Nicol Schraudolph. Accelerated gradient descent by factor-centering decomposition. Technical report, 1998.
302
+ Shai Shalev-Shwartz, Ohad Shamir, and Shaked Shammah. Failures of gradient-based deep learning. In International Conference on Machine Learning, 2017.
303
+ Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958, 2014.
304
+ Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. Regularization of neural networks using dropconnect. In International Conference on Machine Learning, 2013.
305
+
306
+ Simon Wiesler, Alexander Richard, Ralf Schlüter, and Hermann Ney. Mean-normalized stochastic gradient for large-scale deep learning. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2014.
307
+ Yuxin Wu and Kaiming He. Group normalization. In European Conference on Computer Vision, 2018.
308
+ Wei Xiong, Bo Du, Lefei Zhang, Ruimin Hu, and Dacheng Tao. Regularizing deep convolutional neural networks with a structured decorrelation constraint. In IEEE International Conference on Data Mining, 2016.
309
+ Greg Yang, Jeffrey Pennington, Vinay Rao, Jascha Sohl-Dickstein, and Samuel S Schoenholz. A mean field theory of batch normalization. In International Conference on Learning Representations, 2019.
310
+ Hongyi Zhang, Yann N Dauphin, and Tengyu Ma. Fixup initialization: Residual learning without normalization. In International Conference on Learning Representations, 2019.
311
+ Ruiyi Zhang, Changyou Chen, Chunyuan Li, and Lawrence Carin. Policy optimization as Wasserstein gradient flows. In International Conference on Machine Learning, 2018.
regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:46da500d41994a7066e7cd78ba3d317fcb7ab5a6c5cbdebdd714f636c3abf980
3
+ size 483280
regularizingactivationsinneuralnetworksviadistributionmatchingwiththewassersteinmetric/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0f8fad18dbe6de46688801bdacec7ed5614ace78ea13b3ed7266429b677b99bb
3
+ size 484609
reinforcedactivelearningforimagesegmentation/c0e77528-4ae1-4bf4-918b-4bf47e77a12d_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:557d7a88d0015d9c417d9d18383ba2b1cb58ac80750d82b5cf765142ddeb05ab
3
+ size 93086
reinforcedactivelearningforimagesegmentation/c0e77528-4ae1-4bf4-918b-4bf47e77a12d_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:84eb9f760e799e4ff8105019eb24383a725353e26b84138f888e3a237620c8c2
3
+ size 115635