SlowGuess committed
Commit f83d35c · verified · 1 Parent(s): 9193daa

Add Batch 5f774b13-06f2-4ac4-8df7-3b4128a37995

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. adistributionalapproachtocontrolledtextgeneration/d2a08d69-80d2-4000-818a-627ebe0bd905_content_list.json +3 -0
  2. adistributionalapproachtocontrolledtextgeneration/d2a08d69-80d2-4000-818a-627ebe0bd905_model.json +3 -0
  3. adistributionalapproachtocontrolledtextgeneration/d2a08d69-80d2-4000-818a-627ebe0bd905_origin.pdf +3 -0
  4. adistributionalapproachtocontrolledtextgeneration/full.md +0 -0
  5. adistributionalapproachtocontrolledtextgeneration/images.zip +3 -0
  6. adistributionalapproachtocontrolledtextgeneration/layout.json +3 -0
  7. animageisworth16x16wordstransformersforimagerecognitionatscale/88a01229-2f24-4aec-9a0a-528c44e30b21_content_list.json +3 -0
  8. animageisworth16x16wordstransformersforimagerecognitionatscale/88a01229-2f24-4aec-9a0a-528c44e30b21_model.json +3 -0
  9. animageisworth16x16wordstransformersforimagerecognitionatscale/88a01229-2f24-4aec-9a0a-528c44e30b21_origin.pdf +3 -0
  10. animageisworth16x16wordstransformersforimagerecognitionatscale/full.md +419 -0
  11. animageisworth16x16wordstransformersforimagerecognitionatscale/images.zip +3 -0
  12. animageisworth16x16wordstransformersforimagerecognitionatscale/layout.json +3 -0
  13. augmentingphysicalmodelswithdeepnetworksforcomplexdynamicsforecasting/dcf9d3c9-a5ad-45a5-a1a2-7371a1b74fb7_content_list.json +3 -0
  14. augmentingphysicalmodelswithdeepnetworksforcomplexdynamicsforecasting/dcf9d3c9-a5ad-45a5-a1a2-7371a1b74fb7_model.json +3 -0
  15. augmentingphysicalmodelswithdeepnetworksforcomplexdynamicsforecasting/dcf9d3c9-a5ad-45a5-a1a2-7371a1b74fb7_origin.pdf +3 -0
  16. augmentingphysicalmodelswithdeepnetworksforcomplexdynamicsforecasting/full.md +692 -0
  17. augmentingphysicalmodelswithdeepnetworksforcomplexdynamicsforecasting/images.zip +3 -0
  18. augmentingphysicalmodelswithdeepnetworksforcomplexdynamicsforecasting/layout.json +3 -0
  19. comixupsaliencyguidedjointmixupwithsupermodulardiversity/2aa797a7-462e-4ce1-b1dd-ff614c84111b_content_list.json +3 -0
  20. comixupsaliencyguidedjointmixupwithsupermodulardiversity/2aa797a7-462e-4ce1-b1dd-ff614c84111b_model.json +3 -0
  21. comixupsaliencyguidedjointmixupwithsupermodulardiversity/2aa797a7-462e-4ce1-b1dd-ff614c84111b_origin.pdf +3 -0
  22. comixupsaliencyguidedjointmixupwithsupermodulardiversity/full.md +578 -0
  23. comixupsaliencyguidedjointmixupwithsupermodulardiversity/images.zip +3 -0
  24. comixupsaliencyguidedjointmixupwithsupermodulardiversity/layout.json +3 -0
  25. complexqueryansweringwithneurallinkpredictors/7bda427d-f1fd-4825-a924-13ee62b9fa36_content_list.json +3 -0
  26. complexqueryansweringwithneurallinkpredictors/7bda427d-f1fd-4825-a924-13ee62b9fa36_model.json +3 -0
  27. complexqueryansweringwithneurallinkpredictors/7bda427d-f1fd-4825-a924-13ee62b9fa36_origin.pdf +3 -0
  28. complexqueryansweringwithneurallinkpredictors/full.md +292 -0
  29. complexqueryansweringwithneurallinkpredictors/images.zip +3 -0
  30. complexqueryansweringwithneurallinkpredictors/layout.json +3 -0
  31. contrastiveexplanationsforreinforcementlearningviaembeddedselfpredictions/f310a0f8-508f-4aa5-b90c-664bf9e3209c_content_list.json +3 -0
  32. contrastiveexplanationsforreinforcementlearningviaembeddedselfpredictions/f310a0f8-508f-4aa5-b90c-664bf9e3209c_model.json +3 -0
  33. contrastiveexplanationsforreinforcementlearningviaembeddedselfpredictions/f310a0f8-508f-4aa5-b90c-664bf9e3209c_origin.pdf +3 -0
  34. contrastiveexplanationsforreinforcementlearningviaembeddedselfpredictions/full.md +508 -0
  35. contrastiveexplanationsforreinforcementlearningviaembeddedselfpredictions/images.zip +3 -0
  36. contrastiveexplanationsforreinforcementlearningviaembeddedselfpredictions/layout.json +3 -0
  37. coupledoscillatoryrecurrentneuralnetworkcornnanaccurateandgradientstablearchitectureforlearninglongtimedependencies/e1b28daf-a304-41b6-a819-d5ff9b97dab2_content_list.json +3 -0
  38. coupledoscillatoryrecurrentneuralnetworkcornnanaccurateandgradientstablearchitectureforlearninglongtimedependencies/e1b28daf-a304-41b6-a819-d5ff9b97dab2_model.json +3 -0
  39. coupledoscillatoryrecurrentneuralnetworkcornnanaccurateandgradientstablearchitectureforlearninglongtimedependencies/e1b28daf-a304-41b6-a819-d5ff9b97dab2_origin.pdf +3 -0
  40. coupledoscillatoryrecurrentneuralnetworkcornnanaccurateandgradientstablearchitectureforlearninglongtimedependencies/full.md +0 -0
  41. coupledoscillatoryrecurrentneuralnetworkcornnanaccurateandgradientstablearchitectureforlearninglongtimedependencies/images.zip +3 -0
  42. coupledoscillatoryrecurrentneuralnetworkcornnanaccurateandgradientstablearchitectureforlearninglongtimedependencies/layout.json +3 -0
  43. datasetcondensationwithgradientmatching/ab4e802f-b06a-4406-be2e-91bcef94cb64_content_list.json +3 -0
  44. datasetcondensationwithgradientmatching/ab4e802f-b06a-4406-be2e-91bcef94cb64_model.json +3 -0
  45. datasetcondensationwithgradientmatching/ab4e802f-b06a-4406-be2e-91bcef94cb64_origin.pdf +3 -0
  46. datasetcondensationwithgradientmatching/full.md +422 -0
  47. datasetcondensationwithgradientmatching/images.zip +3 -0
  48. datasetcondensationwithgradientmatching/layout.json +3 -0
  49. deepsymbolicregressionrecoveringmathematicalexpressionsfromdataviariskseekingpolicygradients/84be15b4-c5e3-44fc-9542-60302997e137_content_list.json +3 -0
  50. deepsymbolicregressionrecoveringmathematicalexpressionsfromdataviariskseekingpolicygradients/84be15b4-c5e3-44fc-9542-60302997e137_model.json +3 -0
adistributionalapproachtocontrolledtextgeneration/d2a08d69-80d2-4000-818a-627ebe0bd905_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b6b402b454f729c6c04f204f5cfd27fb11c4bbf10c6afb77c11afcec9360fef5
+ size 393210
adistributionalapproachtocontrolledtextgeneration/d2a08d69-80d2-4000-818a-627ebe0bd905_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd9c57d595785be2ccbceaf71e20e7d6c560249550efdbac95f8e95a9f49f6ce
+ size 416838
adistributionalapproachtocontrolledtextgeneration/d2a08d69-80d2-4000-818a-627ebe0bd905_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9e30457cba16147f77356be19b6694b26d901891b1b62273ccde481261105add
+ size 3952064
adistributionalapproachtocontrolledtextgeneration/full.md ADDED
The diff for this file is too large to render. See raw diff
 
adistributionalapproachtocontrolledtextgeneration/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58ee3957c7c9880135ebd3bbc97b6034e2a735d1b9a05c90adae63b8d5ee999f
+ size 10053907
adistributionalapproachtocontrolledtextgeneration/layout.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b37c54e9c8cbe799a0aed6e648407f1cc915e272278711d332bf97d29b703928
+ size 1822079
animageisworth16x16wordstransformersforimagerecognitionatscale/88a01229-2f24-4aec-9a0a-528c44e30b21_content_list.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:473ea91e6d79b46e9630ef2165d5f1c08d56c2dd809e8ba6cf6dcf92112700c1
+ size 114267
animageisworth16x16wordstransformersforimagerecognitionatscale/88a01229-2f24-4aec-9a0a-528c44e30b21_model.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6558932b3609d4b7743ae3f2b3e0fee17589147b33775a87f80225c59e57dcd4
+ size 135822
animageisworth16x16wordstransformersforimagerecognitionatscale/88a01229-2f24-4aec-9a0a-528c44e30b21_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1eb6a81e55cb4739837db2795d4c859e42154f0ae9b50801ad6a111c7c01f782
+ size 3635089
animageisworth16x16wordstransformersforimagerecognitionatscale/full.md ADDED
@@ -0,0 +1,419 @@
# AN IMAGE IS WORTH 16x16 WORDS: TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE

Alexey Dosovitskiy*,†, Lucas Beyer*, Alexander Kolesnikov*, Dirk Weissenborn*, Xiaohua Zhai*, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby*,†

*equal technical contribution, †equal advising. Google Research, Brain Team. {adosovitskiy, neilhoulsby}@google.com

# ABSTRACT

While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.<sup>1</sup>

# 1 INTRODUCTION

Self-attention-based architectures, in particular Transformers (Vaswani et al., 2017), have become the model of choice in natural language processing (NLP). The dominant approach is to pre-train on a large text corpus and then fine-tune on a smaller task-specific dataset (Devlin et al., 2019). Thanks to Transformers' computational efficiency and scalability, it has become possible to train models of unprecedented size, with over 100B parameters (Brown et al., 2020; Lepikhin et al., 2020). With the models and datasets growing, there is still no sign of saturating performance.

In computer vision, however, convolutional architectures remain dominant (LeCun et al., 1989; Krizhevsky et al., 2012; He et al., 2016). Inspired by NLP successes, multiple works try combining CNN-like architectures with self-attention (Wang et al., 2018; Carion et al., 2020), some replacing the convolutions entirely (Ramachandran et al., 2019; Wang et al., 2020a). The latter models, while theoretically efficient, have not yet been scaled effectively on modern hardware accelerators due to the use of specialized attention patterns. Therefore, in large-scale image recognition, classic ResNet-like architectures are still state of the art (Mahajan et al., 2018; Xie et al., 2020; Kolesnikov et al., 2020).

Inspired by the Transformer scaling successes in NLP, we experiment with applying a standard Transformer directly to images, with the fewest possible modifications. To do so, we split an image into patches and provide the sequence of linear embeddings of these patches as an input to a Transformer. Image patches are treated the same way as tokens (words) in an NLP application. We train the model on image classification in supervised fashion.

When trained on mid-sized datasets such as ImageNet without strong regularization, these models yield modest accuracies of a few percentage points below ResNets of comparable size. This seemingly discouraging outcome may be expected: Transformers lack some of the inductive biases inherent to CNNs, such as translation equivariance and locality, and therefore do not generalize well when trained on insufficient amounts of data.

However, the picture changes if the models are trained on larger datasets (14M-300M images). We find that large scale training trumps inductive bias. Our Vision Transformer (ViT) attains excellent results when pre-trained at sufficient scale and transferred to tasks with fewer datapoints. When pre-trained on the public ImageNet-21k dataset or the in-house JFT-300M dataset, ViT approaches or beats state of the art on multiple image recognition benchmarks. In particular, the best model reaches the accuracy of $88.55\%$ on ImageNet, $90.72\%$ on ImageNet-ReaL, $94.55\%$ on CIFAR-100, and $77.63\%$ on the VTAB suite of 19 tasks.

# 2 RELATED WORK

Transformers were proposed by Vaswani et al. (2017) for machine translation, and have since become the state of the art method in many NLP tasks. Large Transformer-based models are often pre-trained on large corpora and then fine-tuned for the task at hand: BERT (Devlin et al., 2019) uses a denoising self-supervised pre-training task, while the GPT line of work uses language modeling as its pre-training task (Radford et al., 2018; 2019; Brown et al., 2020).

Naive application of self-attention to images would require that each pixel attends to every other pixel. With quadratic cost in the number of pixels, this does not scale to realistic input sizes. Thus, to apply Transformers in the context of image processing, several approximations have been tried in the past. Parmar et al. (2018) applied the self-attention only in local neighborhoods for each query pixel instead of globally. Such local multi-head dot-product self-attention blocks can completely replace convolutions (Hu et al., 2019; Ramachandran et al., 2019; Zhao et al., 2020). In a different line of work, Sparse Transformers (Child et al., 2019) employ scalable approximations to global self-attention in order to be applicable to images. An alternative way to scale attention is to apply it in blocks of varying sizes (Weissenborn et al., 2019), in the extreme case only along individual axes (Ho et al., 2019; Wang et al., 2020a). Many of these specialized attention architectures demonstrate promising results on computer vision tasks, but require complex engineering to be implemented efficiently on hardware accelerators.

Most related to ours is the model of Cordonnier et al. (2020), which extracts patches of size $2 \times 2$ from the input image and applies full self-attention on top. This model is very similar to ViT, but our work goes further to demonstrate that large scale pre-training makes vanilla transformers competitive with (or even better than) state-of-the-art CNNs. Moreover, Cordonnier et al. (2020) use a small patch size of $2 \times 2$ pixels, which makes the model applicable only to small-resolution images, while we handle medium-resolution images as well.

There has also been a lot of interest in combining convolutional neural networks (CNNs) with forms of self-attention, e.g. by augmenting feature maps for image classification (Bello et al., 2019) or by further processing the output of a CNN using self-attention, e.g. for object detection (Hu et al., 2018; Carion et al., 2020), video processing (Wang et al., 2018; Sun et al., 2019), image classification (Wu et al., 2020), unsupervised object discovery (Locatello et al., 2020), or unified text-vision tasks (Chen et al., 2020c; Lu et al., 2019; Li et al., 2019).

Another recent related model is image GPT (iGPT) (Chen et al., 2020a), which applies Transformers to image pixels after reducing image resolution and color space. The model is trained in an unsupervised fashion as a generative model, and the resulting representation can then be fine-tuned or probed linearly for classification performance, achieving a maximal accuracy of $72\%$ on ImageNet.

Our work adds to the increasing collection of papers that explore image recognition at larger scales than the standard ImageNet dataset. The use of additional data sources makes it possible to achieve state-of-the-art results on standard benchmarks (Mahajan et al., 2018; Touvron et al., 2019; Xie et al., 2020). Moreover, Sun et al. (2017) study how CNN performance scales with dataset size, and Kolesnikov et al. (2020); Djolonga et al. (2020) perform an empirical exploration of CNN transfer learning from large scale datasets such as ImageNet-21k and JFT-300M. We focus on these two latter datasets as well, but train Transformers instead of ResNet-based models used in prior works.

![](images/d090de35515af72801d6b4e4156cecefb8e51ed6045534174978058270a5a58d.jpg)

Figure 1: Model overview. We split an image into fixed-size patches, linearly embed each of them, add position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. In order to perform classification, we use the standard approach of adding an extra learnable "classification token" to the sequence. The illustration of the Transformer encoder was inspired by Vaswani et al. (2017).

![](images/9bc2b042e4289a4eadc312a52c06cc88f38c693c07b36546ab73532a849f6581.jpg)

# 3 METHOD

In model design we follow the original Transformer (Vaswani et al., 2017) as closely as possible. An advantage of this intentionally simple setup is that scalable NLP Transformer architectures, and their efficient implementations, can be used almost out of the box.

# 3.1 VISION TRANSFORMER (VIT)

An overview of the model is depicted in Figure 1. The standard Transformer receives as input a 1D sequence of token embeddings. To handle 2D images, we reshape the image $\mathbf{x} \in \mathbb{R}^{H \times W \times C}$ into a sequence of flattened 2D patches $\mathbf{x}_p \in \mathbb{R}^{N \times (P^2 \cdot C)}$ , where $(H, W)$ is the resolution of the original image, $C$ is the number of channels, $(P, P)$ is the resolution of each image patch, and $N = HW / P^2$ is the resulting number of patches, which also serves as the effective input sequence length for the Transformer. The Transformer uses constant latent vector size $D$ through all of its layers, so we flatten the patches and map to $D$ dimensions with a trainable linear projection (Eq. 1). We refer to the output of this projection as the patch embeddings.
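As a concrete illustration of the reshape from $\mathbf{x} \in \mathbb{R}^{H \times W \times C}$ to $\mathbf{x}_p \in \mathbb{R}^{N \times (P^2 \cdot C)}$, here is a minimal plain-Python sketch; the `patchify` helper is ours for illustration, not part of the paper's released code:

```python
def patchify(image, P):
    """Split an H x W x C image (nested lists) into N = (H/P) * (W/P)
    flattened patches, each of length P * P * C, in raster order."""
    H, W, C = len(image), len(image[0]), len(image[0][0])
    assert H % P == 0 and W % P == 0, "image must divide evenly into patches"
    patches = []
    for i in range(0, H, P):          # patch rows
        for j in range(0, W, P):      # patch columns
            flat = []
            for di in range(P):       # flatten the P x P x C block
                for dj in range(P):
                    flat.extend(image[i + di][j + dj])
            patches.append(flat)
    return patches

# Toy 4x4 RGB image where pixel (i, j) stores [i, j, 0]
img = [[[i, j, 0] for j in range(4)] for i in range(4)]
patches = patchify(img, 2)            # N = 4 patches of length P^2 * C = 12
assert len(patches) == 4 and len(patches[0]) == 12
```

Each flattened patch is then mapped to $D$ dimensions by the trainable projection $\mathbf{E}$.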

Similar to BERT's [class] token, we prepend a learnable embedding to the sequence of embedded patches $(\mathbf{z}_0^0 = \mathbf{x}_{\mathrm{class}})$ , whose state at the output of the Transformer encoder $(\mathbf{z}_L^0)$ serves as the image representation $\mathbf{y}$ (Eq. 4). Both during pre-training and fine-tuning, a classification head is attached to $\mathbf{z}_L^0$ . The classification head is implemented by an MLP with one hidden layer at pre-training time and by a single linear layer at fine-tuning time.

Position embeddings are added to the patch embeddings to retain positional information. We use standard learnable 1D position embeddings, since we have not observed significant performance gains from using more advanced 2D-aware position embeddings (Appendix D.3). The resulting sequence of embedding vectors serves as input to the encoder.

The Transformer encoder (Vaswani et al., 2017) consists of alternating layers of multiheaded self-attention (MSA, see Appendix A) and MLP blocks (Eq. 2, 3). Layernorm (LN) is applied before every block, and residual connections after every block (Wang et al., 2019; Baevski & Auli, 2019). The MLP contains two layers with a GELU non-linearity.

$$
\mathbf{z}_0 = \left[ \mathbf{x}_{\text{class}};\, \mathbf{x}_p^1 \mathbf{E};\, \mathbf{x}_p^2 \mathbf{E};\, \dots;\, \mathbf{x}_p^N \mathbf{E} \right] + \mathbf{E}_{pos}, \qquad \mathbf{E} \in \mathbb{R}^{(P^2 \cdot C) \times D},\ \mathbf{E}_{pos} \in \mathbb{R}^{(N+1) \times D} \tag{1}
$$

$$
\mathbf{z}'_{\ell} = \operatorname{MSA}(\operatorname{LN}(\mathbf{z}_{\ell-1})) + \mathbf{z}_{\ell-1}, \qquad \ell = 1 \dots L \tag{2}
$$

$$
\mathbf{z}_{\ell} = \operatorname{MLP}(\operatorname{LN}(\mathbf{z}'_{\ell})) + \mathbf{z}'_{\ell}, \qquad \ell = 1 \dots L \tag{3}
$$

$$
\mathbf{y} = \operatorname{LN}(\mathbf{z}_L^0) \tag{4}
$$
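A toy NumPy sketch of Eqs. 1-4, end to end: the sizes are made up, and attention is single-head for brevity (the paper uses multi-head attention, Appendix A), so this is an illustration of the equations rather than the released implementation:

```python
import numpy as np

# Toy sizes: N patches of dimension P^2*C, latent width D, L encoder layers
N, P2C, D, L = 4, 12, 8, 2
rng = np.random.default_rng(0)

def ln(z, eps=1e-6):
    # Layernorm over the feature dimension
    return (z - z.mean(-1, keepdims=True)) / np.sqrt(z.var(-1, keepdims=True) + eps)

def softmax(a):
    e = np.exp(a - a.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def msa(z, Wq, Wk, Wv):
    # Single-head self-attention standing in for MSA
    q, k, v = z @ Wq, z @ Wk, z @ Wv
    return softmax(q @ k.T / np.sqrt(D)) @ v

def gelu(x):
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def vit_forward(x_p, params):
    E, E_pos, x_class = params["E"], params["E_pos"], params["x_class"]
    z = np.vstack([x_class, x_p @ E]) + E_pos       # Eq. 1: embed patches, prepend [class]
    for Wq, Wk, Wv, W1, W2 in params["layers"]:
        z = msa(ln(z), Wq, Wk, Wv) + z              # Eq. 2: pre-LN attention + residual
        z = gelu(ln(z) @ W1) @ W2 + z               # Eq. 3: pre-LN GELU MLP + residual
    return ln(z)[0]                                 # Eq. 4: class-token state as y

params = {
    "E": rng.normal(size=(P2C, D)) * 0.1,
    "E_pos": rng.normal(size=(N + 1, D)) * 0.1,
    "x_class": rng.normal(size=(1, D)) * 0.1,
    "layers": [tuple(rng.normal(size=s) * 0.1
                     for s in [(D, D), (D, D), (D, D), (D, 4 * D), (4 * D, D)])
               for _ in range(L)],
}
y = vit_forward(rng.normal(size=(N, P2C)), params)
assert y.shape == (D,)
```

Note the pre-norm placement (LN inside the residual branch), which is exactly what Eqs. 2-3 specify.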

Inductive bias. We note that Vision Transformer has much less image-specific inductive bias than CNNs. In CNNs, locality, two-dimensional neighborhood structure, and translation equivariance are baked into each layer throughout the whole model. In ViT, only MLP layers are local and translationally equivariant, while the self-attention layers are global. The two-dimensional neighborhood structure is used very sparingly: in the beginning of the model by cutting the image into patches and at fine-tuning time for adjusting the position embeddings for images of different resolution (as described below). Other than that, the position embeddings at initialization time carry no information about the 2D positions of the patches and all spatial relations between the patches have to be learned from scratch.

Hybrid Architecture. As an alternative to raw image patches, the input sequence can be formed from feature maps of a CNN (LeCun et al., 1989). In this hybrid model, the patch embedding projection $\mathbf{E}$ (Eq. 1) is applied to patches extracted from a CNN feature map. As a special case, the patches can have spatial size $1\times 1$ , which means that the input sequence is obtained by simply flattening the spatial dimensions of the feature map and projecting to the Transformer dimension. The classification input embedding and position embeddings are added as described above.

# 3.2 FINE-TUNING AND HIGHER RESOLUTION

Typically, we pre-train ViT on large datasets, and fine-tune to (smaller) downstream tasks. For this, we remove the pre-trained prediction head and attach a zero-initialized $D \times K$ feedforward layer, where $K$ is the number of downstream classes. It is often beneficial to fine-tune at higher resolution than pre-training (Touvron et al., 2019; Kolesnikov et al., 2020). When feeding images of higher resolution, we keep the patch size the same, which results in a larger effective sequence length. The Vision Transformer can handle arbitrary sequence lengths (up to memory constraints), however, the pre-trained position embeddings may no longer be meaningful. We therefore perform 2D interpolation of the pre-trained position embeddings, according to their location in the original image. Note that this resolution adjustment and patch extraction are the only points at which an inductive bias about the 2D structure of the images is manually injected into the Vision Transformer.
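One way this resolution adjustment could look: resample the position-embedding grid to the new number of patches per side. The helper below is a hand-rolled bilinear resample in NumPy; the function name and the choice of bilinear interpolation are our assumptions, since the text only specifies "2D interpolation" (the class-token embedding would be carried over unchanged):

```python
import numpy as np

def resize_pos_embed(pos, old_g, new_g):
    """Bilinearly resample an (old_g*old_g, D) grid of position embeddings
    to a (new_g*new_g, D) grid, for fine-tuning at a different resolution."""
    D = pos.shape[1]
    grid = pos.reshape(old_g, old_g, D)
    coords = np.linspace(0.0, old_g - 1, new_g)   # new grid in old-grid coordinates
    out = np.empty((new_g, new_g, D))
    for a, y in enumerate(coords):
        y0 = int(np.floor(y)); y1 = min(y0 + 1, old_g - 1); wy = y - y0
        for b, x in enumerate(coords):
            x0 = int(np.floor(x)); x1 = min(x0 + 1, old_g - 1); wx = x - x0
            top = (1 - wx) * grid[y0, x0] + wx * grid[y0, x1]
            bot = (1 - wx) * grid[y1, x0] + wx * grid[y1, x1]
            out[a, b] = (1 - wy) * top + wy * bot
    return out.reshape(new_g * new_g, D)

pos = np.arange(4 * 4 * 2, dtype=float).reshape(16, 2)  # 4x4 grid, D = 2
new = resize_pos_embed(pos, 4, 6)                       # higher resolution -> 6x6 grid
assert new.shape == (36, 2)
```

Corner embeddings are preserved exactly; interior positions are blends of their old neighbors, which is why the resampled embeddings remain meaningful at the new sequence length.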

# 4 EXPERIMENTS

We evaluate the representation learning capabilities of ResNet, Vision Transformer (ViT), and the hybrid. To understand the data requirements of each model, we pre-train on datasets of varying size and evaluate many benchmark tasks. When considering the computational cost of pre-training the model, ViT performs very favourably, attaining state of the art on most recognition benchmarks at a lower pre-training cost. Lastly, we perform a small experiment using self-supervision, and show that self-supervised ViT holds promise for the future.

# 4.1 SETUP

Datasets. To explore model scalability, we use the ILSVRC-2012 ImageNet dataset with 1k classes and 1.3M images (we refer to it as ImageNet in what follows), its superset ImageNet-21k with 21k classes and 14M images (Deng et al., 2009), and JFT (Sun et al., 2017) with 18k classes and 303M high-resolution images. We de-duplicate the pre-training datasets w.r.t. the test sets of the downstream tasks following Kolesnikov et al. (2020). We transfer the models trained on these datasets to several benchmark tasks: ImageNet on the original validation labels and the cleaned-up ReaL labels (Beyer et al., 2020), CIFAR-10/100 (Krizhevsky, 2009), Oxford-IIIT Pets (Parkhi et al., 2012), and Oxford Flowers-102 (Nilsback & Zisserman, 2008). For these datasets, pre-processing follows Kolesnikov et al. (2020).

<table><tr><td>Model</td><td>Layers</td><td>Hidden size D</td><td>MLP size</td><td>Heads</td><td>Params</td></tr><tr><td>ViT-Base</td><td>12</td><td>768</td><td>3072</td><td>12</td><td>86M</td></tr><tr><td>ViT-Large</td><td>24</td><td>1024</td><td>4096</td><td>16</td><td>307M</td></tr><tr><td>ViT-Huge</td><td>32</td><td>1280</td><td>5120</td><td>16</td><td>632M</td></tr></table>

Table 1: Details of Vision Transformer model variants.

We also evaluate on the 19-task VTAB classification suite (Zhai et al., 2019b). VTAB evaluates low-data transfer to diverse tasks, using 1000 training examples per task. The tasks are divided into three groups: Natural (tasks like the above: Pets, CIFAR, etc.), Specialized (medical and satellite imagery), and Structured (tasks that require geometric understanding, like localization).

Model Variants. We base ViT configurations on those used for BERT (Devlin et al., 2019), as summarized in Table 1. The "Base" and "Large" models are directly adopted from BERT and we add the larger "Huge" model. In what follows we use brief notation to indicate the model size and the input patch size: for instance, ViT-L/16 means the "Large" variant with $16 \times 16$ input patch size. Note that the Transformer's sequence length is inversely proportional to the square of the patch size, thus models with smaller patch size are computationally more expensive.
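The patch-size/sequence-length relation is easy to check directly, assuming the common $224 \times 224$ input resolution (the helper is ours; the extra token is the [class] token):

```python
def seq_len(H, W, P):
    # N = HW / P^2 patches, plus one [class] token
    assert H % P == 0 and W % P == 0
    return (H // P) * (W // P) + 1

assert seq_len(224, 224, 16) == 197   # /16 models at 224x224: 14*14 patches + [class]
assert seq_len(224, 224, 14) == 257   # /14 models: 16*16 patches + [class]
```

Halving the patch size quadruples the number of patches, and with quadratic-cost self-attention the compute grows steeply, which is the "inversely proportional to the square of the patch size" remark above.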

For the baseline CNNs, we use ResNet (He et al., 2016), but replace the Batch Normalization layers (Ioffe & Szegedy, 2015) with Group Normalization (Wu & He, 2018), and use standardized convolutions (Qiao et al., 2019). These modifications improve transfer (Kolesnikov et al., 2020), and we denote the modified model "ResNet (BiT)". For the hybrids, we feed the intermediate feature maps into ViT with patch size of one "pixel". To experiment with different sequence lengths, we either (i) take the output of stage 4 of a regular ResNet50 or (ii) remove stage 4, place the same number of layers in stage 3 (keeping the total number of layers), and take the output of this extended stage 3. Option (ii) results in a 4x longer sequence length, and a more expensive ViT model.

Training & Fine-tuning. We train all models, including ResNets, using Adam (Kingma & Ba, 2015) with $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ , a batch size of 4096 and apply a high weight decay of 0.1, which we found to be useful for transfer of all models (Appendix D.1 shows that, in contrast to common practices, Adam works slightly better than SGD for ResNets in our setting). We use a linear learning rate warmup and decay, see Appendix B.1 for details. For fine-tuning we use SGD with momentum, batch size 512, for all models, see Appendix B.1.1. For ImageNet results in Table 2, we fine-tuned at higher resolution: 512 for ViT-L/16 and 518 for ViT-H/14, and also used Polyak & Juditsky (1992) averaging with a factor of 0.9999 (Ramachandran et al., 2019; Wang et al., 2020b).

Metrics. We report results on downstream datasets either through few-shot or fine-tuning accuracy. Fine-tuning accuracies capture the performance of each model after fine-tuning it on the respective dataset. Few-shot accuracies are obtained by solving a regularized least-squares regression problem that maps the (frozen) representation of a subset of training images to $\{-1, 1\}^K$ target vectors. This formulation allows us to recover the exact solution in closed form. Though we mainly focus on fine-tuning performance, we sometimes use linear few-shot accuracies for fast on-the-fly evaluation where fine-tuning would be too costly.
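The closed-form solution referred to here is ordinary ridge regression. A small sketch on synthetic stand-in features (the helper, the toy data, and the regularization strength are our assumptions, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def fewshot_linear(Z, Y, lam=1e-3):
    """Closed-form regularized least squares W = (Z^T Z + lam*I)^{-1} Z^T Y,
    mapping frozen features Z (n, d) to {-1, 1}^K one-vs-rest targets Y (n, K)."""
    d = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ Y)

# Toy stand-in for frozen representations: 2 classes separable along feature 0
Z = rng.normal(size=(200, 5))
labels = (Z[:, 0] > 0).astype(int)
Y = 2.0 * np.eye(2)[labels] - 1.0           # one-hot mapped to {-1, 1}^K targets
W = fewshot_linear(Z, Y)
acc = ((Z @ W).argmax(axis=1) == labels).mean()
assert acc > 0.8                            # linear readout separates the toy classes
```

Because the solve is a single linear-algebra call over cached features, this evaluation is far cheaper than running a fine-tuning job per dataset.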

# 4.2 COMPARISON TO STATE OF THE ART

We first compare our largest models - ViT-H/14 and ViT-L/16 - to state-of-the-art CNNs from the literature. The first comparison point is Big Transfer (BiT) (Kolesnikov et al., 2020), which performs supervised transfer learning with large ResNets. The second is Noisy Student (Xie et al., 2020), which is a large EfficientNet trained using semi-supervised learning on ImageNet and JFT-300M with the labels removed. Currently, Noisy Student is the state of the art on ImageNet and BiT-L on the other datasets reported here. All models were trained on TPUv3 hardware, and we report the number of TPUv3-core-days taken to pre-train each of them, that is, the number of TPU v3 cores (2 per chip) used for training multiplied by the training time in days.

Table 2 shows the results. The smaller ViT-L/16 model pre-trained on JFT-300M outperforms BiT-L (which is pre-trained on the same dataset) on all tasks, while requiring substantially less computational resources to train. The larger model, ViT-H/14, further improves the performance, especially on the more challenging datasets - ImageNet, CIFAR-100, and the VTAB suite. Interestingly, this

<table><tr><td></td><td>Ours-JFT(ViT-H/14)</td><td>Ours-JFT(ViT-L/16)</td><td>Ours-I21k(ViT-L/16)</td><td>BiT-L(ResNet152x4)</td><td>Noisy Student(EfficientNet-L2)</td></tr><tr><td>ImageNet</td><td>88.55±0.04</td><td>87.76±0.03</td><td>85.30±0.02</td><td>87.54±0.02</td><td>88.4/88.5*</td></tr><tr><td>ImageNet ReaL</td><td>90.72±0.05</td><td>90.54±0.03</td><td>88.62±0.05</td><td>90.54</td><td>90.55</td></tr><tr><td>CIFAR-10</td><td>99.50±0.06</td><td>99.42±0.03</td><td>99.15±0.03</td><td>99.37±0.06</td><td>-</td></tr><tr><td>CIFAR-100</td><td>94.55±0.04</td><td>93.90±0.05</td><td>93.25±0.05</td><td>93.51±0.08</td><td>-</td></tr><tr><td>Oxford-IIIT Pets</td><td>97.56±0.03</td><td>97.32±0.11</td><td>94.67±0.15</td><td>96.62±0.23</td><td>-</td></tr><tr><td>Oxford Flowers-102</td><td>99.68±0.02</td><td>99.74±0.00</td><td>99.61±0.02</td><td>99.63±0.03</td><td>-</td></tr><tr><td>VTAB (19 tasks)</td><td>77.63±0.23</td><td>76.28±0.46</td><td>72.72±0.21</td><td>76.29±1.70</td><td>-</td></tr><tr><td>TPUv3-core-days</td><td>2.5k</td><td>0.68k</td><td>0.23k</td><td>9.9k</td><td>12.3k</td></tr></table>

Table 2: Comparison with state of the art on popular image classification benchmarks. We report mean and standard deviation of the accuracies, averaged over three fine-tuning runs. Vision Transformer models pre-trained on the JFT-300M dataset outperform ResNet-based baselines on all datasets, while taking substantially less computational resources to pre-train. ViT pre-trained on the smaller public ImageNet-21k dataset performs well too. *Slightly improved $88.5\%$ result reported in Touvron et al. (2020).

![](images/6c927eeccc1a23999efc0ec6bd56e353478eda7b515a9f29be07ad08e7a02b2b.jpg)

Figure 2: Breakdown of VTAB performance in Natural, Specialized, and Structured task groups.

model still took substantially less compute to pre-train than prior state of the art. However, we note that pre-training efficiency may be affected not only by the architecture choice, but also other parameters, such as training schedule, optimizer, weight decay, etc. We provide a controlled study of performance vs. compute for different architectures in Section 4.4. Finally, the ViT-L/16 model pre-trained on the public ImageNet-21k dataset performs well on most datasets too, while taking fewer resources to pre-train: it could be trained using a standard cloud TPUv3 with 8 cores in approximately 30 days.
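The core-days bookkeeping is plain arithmetic, and the quoted 8-core, roughly-30-day figure is consistent with the 0.23k TPUv3-core-days entry for ViT-L/16 (ImageNet-21k) in Table 2:

```python
def tpu_core_days(cores, days):
    # TPUv3-core-days: number of TPU v3 cores (2 per chip) x training time in days
    return cores * days

# A standard 8-core cloud TPUv3 for ~30 days: 240 core-days, i.e. ~0.23k as reported
assert tpu_core_days(8, 30) == 240
```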
120
+
121
+ Figure 2 decomposes the VTAB tasks into their respective groups, and compares to previous SOTA methods on this benchmark: BiT, VIVI - a ResNet co-trained on ImageNet and YouTube (Tschannen et al., 2020), and S4L - supervised plus semi-supervised learning on ImageNet (Zhai et al., 2019a). ViT-H/14 outperforms BiT-R152x4, and other methods, on the Natural and Structured tasks. On the Specialized tasks, the performance of the top two models is similar.
122
+
123
+ # 4.3 PRE-TRAINING DATA REQUIREMENTS
124
+
125
+ The Vision Transformer performs well when pre-trained on a large JFT-300M dataset. With fewer inductive biases for vision than ResNets, how crucial is the dataset size? We perform two series of experiments.
126
+
127
+ First, we pre-train ViT models on datasets of increasing size: ImageNet, ImageNet-21k, and JFT-300M. To boost the performance on the smaller datasets, we optimize three basic regularization parameters - weight decay, dropout, and label smoothing. Figure 3 shows the results after fine-tuning to ImageNet (results on other datasets are shown in Table 5)$^2$. When pre-trained on the smallest dataset, ImageNet, ViT-Large models underperform compared to ViT-Base models, despite (moderate) regularization. With ImageNet-21k pre-training, their performances are similar. Only with JFT-300M do we see the full benefit of larger models. Figure 3 also shows the performance region spanned by BiT models of different sizes. The BiT CNNs outperform ViT on ImageNet, but with the larger datasets, ViT overtakes.
+
+ ![](images/7f79be4a6f52dfcdffb9d19f7d324817b95c64525d26b670f8db7aab60c02716.jpg)
+ Figure 3: Transfer to ImageNet. While large ViT models perform worse than BiT ResNets (shaded area) when pre-trained on small datasets, they shine when pre-trained on larger datasets. Similarly, larger ViT variants overtake smaller ones as the dataset grows.
+
+ ![](images/3ffe61cfc1272b208b5f8d7300903900d7f64e3277574076890a365dee37422f.jpg)
+ Figure 4: Linear few-shot evaluation on ImageNet versus pre-training size. ResNets perform better with smaller pre-training datasets but plateau sooner than ViT, which performs better with larger pre-training. ViT-b is ViT-B with all hidden dimensions halved.
+
+ ![](images/200e08f660d490f1ce4e2f1ffa26270f40210d066829bdedcda98fb17d35044f.jpg)
+ Figure 5: Performance versus cost for different architectures: Vision Transformers, ResNets, and hybrids. Vision Transformers generally outperform ResNets with the same computational budget. Hybrids improve upon pure Transformers for smaller model sizes, but the gap vanishes for larger models.
139
+
140
+ Second, we train our models on random subsets of 9M, 30M, and 90M as well as the full JFT-300M dataset. We do not perform additional regularization on the smaller subsets and use the same hyper-parameters for all settings. This way, we assess the intrinsic model properties, and not the effect of regularization. We do, however, use early-stopping, and report the best validation accuracy achieved during training. To save compute, we report few-shot linear accuracy instead of full fine-tuning accuracy. Figure 4 contains the results. Vision Transformers overfit more than ResNets with comparable computational cost on smaller datasets. For example, ViT-B/32 is slightly faster than ResNet50; it performs much worse on the 9M subset, but better on $90\mathrm{M}+$ subsets. The same is true for ResNet152x2 and ViT-L/16. This result reinforces the intuition that the convolutional inductive bias is useful for smaller datasets, but for larger ones, learning the relevant patterns directly from data is sufficient, even beneficial.
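The few-shot linear accuracy reported here comes from fitting a linear classifier on frozen representations. One standard closed-form recipe for such a probe is ridge regression onto ±1 one-hot targets; the sketch below (NumPy, with toy data and names of our own, not the paper's evaluation code) illustrates the idea:

```python
import numpy as np

def few_shot_linear(train_x, train_y, test_x, num_classes, lam=1e-3):
    """Closed-form linear probe: ridge regression from frozen features
    to +/-1 one-hot targets, then argmax over the linear outputs.
    Illustrative sketch of a common few-shot linear protocol."""
    Y = -np.ones((len(train_y), num_classes))
    Y[np.arange(len(train_y)), train_y] = 1.0
    D = train_x.shape[1]
    # Solve (X^T X + lam I) W = X^T Y for the weight matrix W.
    W = np.linalg.solve(train_x.T @ train_x + lam * np.eye(D), train_x.T @ Y)
    return (test_x @ W).argmax(-1)

# Toy check: three well-separated feature clusters are classified correctly.
train_x = np.repeat(np.eye(3), 5, axis=0)   # 5 "shots" per class
train_y = np.repeat(np.arange(3), 5)
pred = few_shot_linear(train_x, train_y, np.eye(3), num_classes=3)
print(pred)  # [0 1 2]
```

Because the solution is closed-form, this evaluation is much cheaper than full fine-tuning, which is why it is attractive for sweeping many checkpoints.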
141
+
142
+ Overall, the few-shot results on ImageNet (Figure 4), as well as the low-data results on VTAB (Table 2) seem promising for very low-data transfer. Further analysis of few-shot properties of ViT is an exciting direction of future work.
143
+
144
+ # 4.4 SCALING STUDY
145
+
146
+ We perform a controlled scaling study of different models by evaluating transfer performance from JFT-300M. In this setting data size does not bottleneck the models' performances, and we assess performance versus pre-training cost of each model. The model set includes: 7 ResNets, R50x1, R50x2, R101x1, R152x1, R152x2, pre-trained for 7 epochs, plus R152x2 and R200x3 pre-trained for 14 epochs; 6 Vision Transformers, ViT-B/32, B/16, L/32, L/16, pre-trained for 7 epochs, plus L/16 and H/14 pre-trained for 14 epochs; and 5 hybrids, R50+ViT-B/32, B/16, L/32, L/16 pre-trained for 7 epochs, plus R50+ViT-L/16 pre-trained for 14 epochs (for hybrids, the number at the end of the model name stands not for the patch size, but for the total downsampling ratio in the ResNet backbone).
147
+
148
+ Figure 5 contains the transfer performance versus total pre-training compute (see Appendix D.4 for details on computational costs). Detailed results per model are provided in Table 6 in the Appendix. A few patterns can be observed. First, Vision Transformers dominate ResNets on the performance/compute trade-off. ViT uses approximately $2 - 4 \times$ less compute to attain the same performance (average over 5 datasets). Second, hybrids slightly outperform ViT at small computational budgets, but the difference vanishes for larger models. This result is somewhat surprising, since one might expect convolutional local feature processing to assist ViT at any size. Third, Vision Transformers appear not to saturate within the range tried, motivating future scaling efforts.
149
+
150
+ # 4.5 INSPECTING VISION TRANSFORMER
151
+
152
+ To begin to understand how the Vision Transformer processes image data, we analyze its internal representations. The first layer of the Vision Transformer linearly projects the flattened patches into a lower-dimensional space (Eq. 1). Figure 7 (left) shows the top principal components of the learned embedding filters. The components resemble plausible basis functions for a low-dimensional representation of the fine structure within each patch.
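The principal components in Figure 7 (left) can be obtained directly from the learned embedding weight matrix. A minimal sketch (assuming the projection weight has shape (patch·patch·3, D), consistent with Eq. 1; function name and shapes are our own illustration):

```python
import numpy as np

def embedding_filter_components(E, patch, n=8):
    """Top principal components of the patch-embedding filters.

    E: (patch*patch*3, D) weight of the linear patch projection.
    Returns n components reshaped to (patch, patch, 3) for display,
    in the spirit of Figure 7 (left). Illustrative, not the released code.
    """
    E = E - E.mean(axis=1, keepdims=True)       # center across the D filters
    # Columns of U are principal directions in pixel space.
    U, S, _ = np.linalg.svd(E, full_matrices=False)
    return U[:, :n].T.reshape(n, patch, patch, 3)

rng = np.random.default_rng(0)
patch, D = 16, 64                               # toy sizes for illustration
E = rng.normal(size=(patch * patch * 3, D))
comps = embedding_filter_components(E, patch)
print(comps.shape)  # (8, 16, 16, 3)
```

With a trained model, each component can be rendered as a small RGB tile; with the random weights above they are just noise.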
153
+
154
+ After the projection, a learned position embedding is added to the patch representations. Figure 7 (center) shows that the model learns to encode distance within the image in the similarity of position embeddings, i.e. closer patches tend to have more similar position embeddings. Further, the row-column structure appears; patches in the same row/column have similar embeddings. Finally, a sinusoidal structure is sometimes apparent for larger grids (Appendix D). That the position embeddings learn to represent 2D image topology explains why hand-crafted 2D-aware embedding variants do not yield improvements (Appendix D.3).
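The similarity maps of Figure 7 (center) amount to cosine similarities between all pairs of position embeddings; a short sketch (shapes and names are our own, with the class token dropped for simplicity):

```python
import numpy as np

def pos_embedding_similarity(pe, grid):
    """Cosine similarity of each position embedding with all others.

    pe: (grid*grid, D) learned position embeddings.
    Returns (grid, grid, grid, grid): entry [i, j] is the grid x grid
    similarity map for the patch at row i, column j.
    """
    pe = pe / np.linalg.norm(pe, axis=-1, keepdims=True)
    sim = pe @ pe.T                              # (N, N) cosine similarities
    return sim.reshape(grid, grid, grid, grid)

rng = np.random.default_rng(0)
grid, D = 7, 32                                  # toy grid for illustration
pe = rng.normal(size=(grid * grid, D))
maps = pos_embedding_similarity(pe, grid)
print(maps.shape)  # (7, 7, 7, 7)
```

For a trained model, plotting `maps[i, j]` as a heatmap reproduces the row/column structure described above; random embeddings show no such structure.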
155
+
156
+ Self-attention allows ViT to integrate information across the entire image even in the lowest layers. We investigate to what degree the network makes use of this capability. Specifically, we compute the average distance in image space across which information is integrated, based on the attention weights (Figure 7, right). This "attention distance" is analogous to receptive field size in CNNs.
157
+
158
+ We find that some heads attend to most of the image already in the lowest layers, showing that the ability to integrate information globally is indeed used by the model. Other attention heads have consistently small attention distances in the low layers. This highly localized attention is less pronounced in hybrid models that apply a ResNet before the Transformer (Figure 7, right), suggesting that it may serve a similar function as early convolutional layers in CNNs. Further, the attention distance increases with network depth. Globally, we find that the model attends to image regions that are semantically relevant for classification (Figure 6).
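The "attention distance" above can be computed as the attention-weighted average distance between patch centers, averaged over queries. The following is an illustrative reconstruction (not the authors' exact code), with the class token omitted for simplicity:

```python
import numpy as np

def mean_attention_distance(A, grid):
    """Attention-weighted mean distance between patch centers, per head.

    A: (heads, N, N) attention weights over N = grid*grid patches.
    Returns one distance (in patch units) per head.
    """
    ys, xs = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=-1).astype(float)  # (N, 2)
    # Pairwise Euclidean distances between patch centers.
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Expected distance per query, then averaged over queries.
    return (A * dist[None]).sum(-1).mean(-1)

# Uniform attention over a 4x4 grid: every head integrates globally.
grid, heads = 4, 3
N = grid * grid
A = np.full((heads, N, N), 1.0 / N)
print(mean_attention_distance(A, grid))
```

A head that attends only to its own patch (identity attention) scores 0, while uniform attention scores the mean pairwise distance of the grid, so the metric behaves like a receptive-field size.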
159
+
160
+ ![](images/754b59d9f70d3a0c24ea803ddb232df9b5f64403bbe83a9716895157cedc6194.jpg)
161
+ Figure 6: Representative examples of attention from the output token to the input space. See Appendix D.6 for details.
162
+
163
+ # 4.6 SELF-SUPERVISION
164
+
165
+ Transformers show impressive performance on NLP tasks. However, much of their success stems not only from their excellent scalability but also from large scale self-supervised pre-training (Devlin et al., 2019; Radford et al., 2018). We also perform a preliminary exploration on masked patch prediction for self-supervision, mimicking the masked language modeling task used in BERT. With self-supervised pre-training, our smaller ViT-B/16 model achieves $79.9\%$ accuracy on ImageNet, a significant improvement of $2\%$ over training from scratch, but still $4\%$ behind supervised pre-training. Appendix B.1.2 contains further details. We leave exploration of contrastive pre-training (Chen et al., 2020b; He et al., 2020; Bachman et al., 2019; Henaff et al., 2020) to future work.
+
+ ![](images/976a5bed9353e5751518c04c5fc93ebb13e0b0cc27cfb0fc1ea9f5a16b6aae56.jpg)
+ Figure 7: Left: Filters of the initial linear embedding of RGB values of ViT-L/32. Center: Similarity of position embeddings of ViT-L/32. Tiles show the cosine similarity between the position embedding of the patch with the indicated row and column and the position embeddings of all other patches. Right: Size of attended area by head and network depth. Each dot shows the mean attention distance across images for one of 16 heads at one layer. See Appendix D.6 for details.
+
+ ![](images/728cf4b56e5923e6af1672e0e250c070a03051d2b5365042b7f331ab62602ad4.jpg)
+
+ ![](images/6f6ee8e70aa592811e9f6e1e672e069ce795ad8a218e88fa1b7b46775cece471.jpg)
175
+
176
+ # 5 CONCLUSION
177
+
178
+ We have explored the direct application of Transformers to image recognition. Unlike prior works using self-attention in computer vision, we do not introduce image-specific inductive biases into the architecture apart from the initial patch extraction step. Instead, we interpret an image as a sequence of patches and process it by a standard Transformer encoder as used in NLP. This simple, yet scalable, strategy works surprisingly well when coupled with pre-training on large datasets. Thus, Vision Transformer matches or exceeds the state of the art on many image classification datasets, whilst being relatively cheap to pre-train.
179
+
180
+ While these initial results are encouraging, many challenges remain. One is to apply ViT to other computer vision tasks, such as detection and segmentation. Our results, coupled with those in Carion et al. (2020), indicate the promise of this approach. Another challenge is to continue exploring self-supervised pre-training methods. Our initial experiments show improvement from self-supervised pre-training, but there is still a large gap between self-supervised and large-scale supervised pre-training. Finally, further scaling of ViT would likely lead to improved performance.
181
+
182
+ # ACKNOWLEDGEMENTS
183
+
184
+ The work was performed in Berlin, Zürich, and Amsterdam. We thank many colleagues at Google for their help, in particular Andreas Steiner for crucial help with the infrastructure and the open-source release of the code; Joan Puigcerver and Maxim Neumann for help with the large-scale training infrastructure; Dmitry Lepikhin, Aravindh Mahendran, Daniel Keysers, Mario Lučić, Noam Shazeer, and Colin Raffel for useful discussions.
185
+
186
+ # REFERENCES
187
+
188
+ Samira Abnar and Willem Zuidema. Quantifying attention flow in transformers. In ACL, 2020.
189
+
190
+ Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. In NeurIPS, 2019.
191
+
192
+ Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. In ICLR, 2019.
193
+ I. Bello, B. Zoph, Q. Le, A. Vaswani, and J. Shlens. Attention augmented convolutional networks. In ICCV, 2019.
194
+ Lucas Beyer, Olivier J. Henaff, Alexander Kolesnikov, Xiaohua Zhai, and Aaron van den Oord. Are we done with imagenet? arXiv, 2020.
195
+ Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv, 2020.
196
+ Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, 2020.
197
+ Mark Chen, Alec Radford, Rewon Child, Jeff Wu, and Heewoo Jun. Generative pretraining from pixels. In ICML, 2020a.
198
+ Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In ICML, 2020b.
199
+ Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. UNITER: UNiversal Image-TExt Representation Learning. In ECCV, 2020c.
200
+ Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. arXiv, 2019.
201
+ Jean-Baptiste Cordonnier, Andreas Loukas, and Martin Jaggi. On the relationship between self-attention and convolutional layers. In ICLR, 2020.
202
+ J. Deng, W. Dong, R. Socher, L. Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
203
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 2019.
204
+ Josip Djolonga, Jessica Yung, Michael Tschannen, Rob Romijnders, Lucas Beyer, Alexander Kolesnikov, Joan Puigcerver, Matthias Minderer, Alexander D'Amour, Dan Moldovan, Sylvan Gelly, Neil Houlsby, Xiaohua Zhai, and Mario Lucic. On robustness and transferability of convolutional neural networks. arXiv, 2020.
205
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
206
+ Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, 2020.
207
+ Jonathan Ho, Nal Kalchbrenner, Dirk Weissenborn, and Tim Salimans. Axial attention in multidimensional transformers. arXiv, 2019.
208
+ Han Hu, Jiayuan Gu, Zheng Zhang, Jifeng Dai, and Yichen Wei. Relation networks for object detection. In CVPR, 2018.
209
+ Han Hu, Zheng Zhang, Zhenda Xie, and Stephen Lin. Local relation networks for image recognition. In ICCV, 2019.
210
+ Zilong Huang, Xinggang Wang, Yunchao Wei, Lichao Huang, Humphrey Shi, Wenyu Liu, and Thomas S. Huang. Ccnet: Criss-cross attention for semantic segmentation. In ICCV, 2020.
211
+ Olivier J. Henaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. Ali Eslami, and Aaron van den Oord. Data-efficient image recognition with contrastive predictive coding. In ICML, 2020.
212
+
213
+ Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
214
+ Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
215
+ Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Big transfer (BiT): General visual representation learning. In ECCV, 2020.
216
+ Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
217
+ Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
218
+ Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1:541-551, 1989.
219
+ Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. Gshard: Scaling giant models with conditional computation and automatic sharding. arXiv, 2020.
220
+ Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. VisualBERT: A Simple and Performant Baseline for Vision and Language. arXiv, 2019.
221
+ Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, and Thomas Kipf. Object-centric learning with slot attention. arXiv, 2020.
222
+ Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. In NeurIPS, 2019.
223
+ Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. Exploring the limits of weakly supervised pretraining. In ECCV, 2018.
224
+ M. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In ICVGIP, 2008.
225
+ Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar. Cats and dogs. In CVPR, 2012.
226
+ Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In ICML, 2018.
227
+ B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838-855, 1992. doi: 10.1137/0330046. URL https://doi.org/10.1137/0330046.
228
+ Siyuan Qiao, Huiyu Wang, Chenxi Liu, Wei Shen, and Alan Yuille. Weight standardization. arXiv preprint arXiv:1903.10520, 2019.
229
+ Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding with unsupervised learning. Technical Report, 2018.
230
+ Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. Technical Report, 2019.
231
+ Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jon Shlens. Stand-alone self-attention in vision models. In NeurIPS, 2019.
232
+ Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In ICCV, 2017.
233
+ Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. Videobert: A joint model for video and language representation learning. In ICCV, 2019.
234
+
235
+ Hugo Touvron, Andrea Vedaldi, Matthijs Douze, and Herve Jegou. Fixing the train-test resolution discrepancy. In NeurIPS. 2019.
236
+ Hugo Touvron, Andrea Vedaldi, Matthijs Douze, and Herve Jegou. Fixing the train-test resolution discrepancy: Fixefficientnet. arXiv preprint arXiv:2003.08237, 2020.
237
+ Michael Tschannen, Josip Djolonga, Marvin Ritter, Aravindh Mahendran, Neil Houlsby, Sylvain Gelly, and Mario Lucic. Self-supervised learning of video-induced visual invariances. In CVPR, 2020.
238
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.
239
+ Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen. Axial-deeplab: Stand-alone axial-attention for panoptic segmentation. In ECCV, 2020a.
240
+ Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen. Axial-deeplab: Stand-alone axial-attention for panoptic segmentation. arXiv preprint arXiv:2003.07853, 2020b.
241
+ Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. Learning deep transformer models for machine translation. In ACL, 2019.
242
+ Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In CVPR, 2018.
243
+ Dirk Weissenborn, Oscar Täckström, and Jakob Uszkoreit. Scaling autoregressive video models. In ICLR, 2019.
244
+ Bichen Wu, Chenfeng Xu, Xiaoliang Dai, Alvin Wan, Peizhao Zhang, Masayoshi Tomizuka, Kurt Keutzer, and Peter Vajda. Visual transformers: Token-based image representation and processing for computer vision. arxiv, 2020.
245
+ Yuxin Wu and Kaiming He. Group normalization. In ECCV, 2018.
246
+ Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V. Le. Self-training with noisy student improves imagenet classification. In CVPR, 2020.
247
+ Xiaohua Zhai, Avital Oliver, Alexander Kolesnikov, and Lucas Beyer. S$^4$L: Self-Supervised Semi-Supervised Learning. In ICCV, 2019a.
248
+ Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, et al. A large-scale study of representation learning with the visual task adaptation benchmark. arXiv preprint arXiv:1910.04867, 2019b.
249
+ Hengshuang Zhao, Jiaya Jia, and Vladlen Koltun. Exploring self-attention for image recognition. In CVPR, 2020.
250
+
251
+ <table><tr><td>Models</td><td>Dataset</td><td>Epochs</td><td>Base LR</td><td>LR decay</td><td>Weight decay</td><td>Dropout</td></tr><tr><td>ViT-B/{16,32}</td><td>JFT-300M</td><td>7</td><td>8·10^-4</td><td>linear</td><td>0.1</td><td>0.0</td></tr><tr><td>ViT-L/32</td><td>JFT-300M</td><td>7</td><td>6·10^-4</td><td>linear</td><td>0.1</td><td>0.0</td></tr><tr><td>ViT-L/16</td><td>JFT-300M</td><td>7/14</td><td>4·10^-4</td><td>linear</td><td>0.1</td><td>0.0</td></tr><tr><td>ViT-H/14</td><td>JFT-300M</td><td>14</td><td>3·10^-4</td><td>linear</td><td>0.1</td><td>0.0</td></tr><tr><td>R50x{1,2}</td><td>JFT-300M</td><td>7</td><td>10^-3</td><td>linear</td><td>0.1</td><td>0.0</td></tr><tr><td>R101x1</td><td>JFT-300M</td><td>7</td><td>8·10^-4</td><td>linear</td><td>0.1</td><td>0.0</td></tr><tr><td>R152x{1,2}</td><td>JFT-300M</td><td>7</td><td>6·10^-4</td><td>linear</td><td>0.1</td><td>0.0</td></tr><tr><td>R50+ViT-B/{16,32}</td><td>JFT-300M</td><td>7</td><td>8·10^-4</td><td>linear</td><td>0.1</td><td>0.0</td></tr><tr><td>R50+ViT-L/32</td><td>JFT-300M</td><td>7</td><td>2·10^-4</td><td>linear</td><td>0.1</td><td>0.0</td></tr><tr><td>R50+ViT-L/16</td><td>JFT-300M</td><td>7/14</td><td>4·10^-4</td><td>linear</td><td>0.1</td><td>0.0</td></tr><tr><td>ViT-B/{16,32}</td><td>ImageNet-21k</td><td>90</td><td>10^-3</td><td>linear</td><td>0.03</td><td>0.1</td></tr><tr><td>ViT-L/{16,32}</td><td>ImageNet-21k</td><td>30/90</td><td>10^-3</td><td>linear</td><td>0.03</td><td>0.1</td></tr><tr><td>ViT-*</td><td>ImageNet</td><td>300</td><td>3·10^-3</td><td>cosine</td><td>0.3</td><td>0.1</td></tr></table>
252
+
253
+ Table 3: Hyperparameters for training. All models are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet we found it beneficial to additionally apply gradient clipping at global norm 1. Training resolution is 224.
254
+
255
+ # APPENDIX
256
+
257
+ # A MULTIHEAD SELF-ATTENTION
258
+
259
+ Standard qkv self-attention (SA, Vaswani et al. (2017)) is a popular building block for neural architectures. For each element in an input sequence $\mathbf{z} \in \mathbb{R}^{N \times D}$ , we compute a weighted sum over all values $\mathbf{v}$ in the sequence. The attention weights $A_{ij}$ are based on the pairwise similarity between two elements of the sequence and their respective query $\mathbf{q}^i$ and key $\mathbf{k}^j$ representations.
260
+
261
+ $$
262
+ [\mathbf{q}, \mathbf{k}, \mathbf{v}] = \mathbf{z}\mathbf{U}_{qkv} \quad \mathbf{U}_{qkv} \in \mathbb{R}^{D \times 3D_h}, \tag{5}
263
+ $$
264
+
265
+ $$
266
+ A = \operatorname{softmax}\left(\mathbf{q}\mathbf{k}^{\top} / \sqrt{D_h}\right) \quad A \in \mathbb{R}^{N \times N}, \tag{6}
267
+ $$
268
+
269
+ $$
270
+ \operatorname{SA}(\mathbf{z}) = A\mathbf{v}. \tag{7}
271
+ $$
272
+
273
+ Multihead self-attention (MSA) is an extension of SA in which we run $k$ self-attention operations, called "heads", in parallel, and project their concatenated outputs. To keep compute and number of parameters constant when changing $k$ , $D_h$ (Eq. 5) is typically set to $D / k$ .
274
+
275
+ $$
276
+ \operatorname{MSA}(\mathbf{z}) = \left[\operatorname{SA}_1(\mathbf{z}); \operatorname{SA}_2(\mathbf{z}); \cdots; \operatorname{SA}_k(\mathbf{z})\right] \mathbf{U}_{msa} \quad \mathbf{U}_{msa} \in \mathbb{R}^{k \cdot D_h \times D} \tag{8}
277
+ $$
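Equations 5-8 translate almost directly into a few lines of NumPy. The sketch below (our own illustration: random matrices stand in for learned parameters, and a separate $\mathbf{U}_{qkv}$ is stacked per head) makes the shape bookkeeping concrete:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def msa(z, U_qkv, U_msa, k):
    """Multihead self-attention (Eqs. 5-8). z: (N, D) token sequence."""
    N, D = z.shape
    D_h = D // k
    out_heads = []
    for h in range(k):
        # Eq. 5: per-head query/key/value projection.
        qkv = z @ U_qkv[h]                       # (N, 3*D_h)
        q, kk, v = np.split(qkv, 3, axis=-1)
        # Eq. 6: scaled dot-product attention weights.
        A = softmax(q @ kk.T / np.sqrt(D_h))     # (N, N), rows sum to 1
        # Eq. 7: weighted sum over values.
        out_heads.append(A @ v)                  # (N, D_h)
    # Eq. 8: concatenate heads and project back to model width D.
    return np.concatenate(out_heads, axis=-1) @ U_msa

rng = np.random.default_rng(0)
N, D, k = 4, 8, 2
z = rng.normal(size=(N, D))
U_qkv = rng.normal(size=(k, D, 3 * D // k))
U_msa = rng.normal(size=(D, D))
y = msa(z, U_qkv, U_msa, k)
print(y.shape)  # (4, 8)
```

Setting $D_h = D/k$ as described above keeps the total parameter count independent of the number of heads $k$.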
278
+
279
+ # B EXPERIMENT DETAILS
280
+
281
+ # B.1 TRAINING
282
+
283
+ Table 3 summarizes our training setups for our different models. We found strong regularization to be key when training models from scratch on ImageNet. Dropout, when used, is applied after every dense layer except for the qkv-projections and directly after adding positional- to patch embeddings. Hybrid models are trained with the exact setup as their ViT counterparts. Finally, all training is done on resolution 224.
284
+
285
+ # B.1.1 FINE-TUNING
286
+
287
+ We fine-tune all ViT models using SGD with a momentum of 0.9. We run a small grid search over learning rates (see learning rate ranges in Table 4). To do so, we use small sub-splits from the training set (10% for Pets and Flowers, 2% for CIFAR, 1% for ImageNet) as a development set and train on the remaining data. For final results we train on the entire training set and evaluate on the respective test data. For fine-tuning ResNets and hybrid models we use the exact same setup, the only exception being ImageNet, where we add another value, 0.06, to the learning rate sweep. Additionally,
288
+
289
+ <table><tr><td>Dataset</td><td>Steps</td><td>Base LR</td></tr><tr><td>ImageNet</td><td>20000</td><td>{0.003, 0.01, 0.03, 0.06}</td></tr><tr><td>CIFAR100</td><td>10000</td><td>{0.001, 0.003, 0.01, 0.03}</td></tr><tr><td>CIFAR10</td><td>10000</td><td>{0.001, 0.003, 0.01, 0.03}</td></tr><tr><td>Oxford-IIIT Pets</td><td>500</td><td>{0.001, 0.003, 0.01, 0.03}</td></tr><tr><td>Oxford Flowers-102</td><td>500</td><td>{0.001, 0.003, 0.01, 0.03}</td></tr><tr><td>VTAB (19 tasks)</td><td>2500</td><td>0.01</td></tr></table>
290
+
291
+ Table 4: Hyperparameters for fine-tuning. All models are fine-tuned with cosine learning rate decay, a batch size of 512, no weight decay, and grad clipping at global norm 1. If not mentioned otherwise, fine-tuning resolution is 384.
292
+
293
+ for ResNets we also run the setup of Kolesnikov et al. (2020) and select the best results across this run and our sweep. Finally, if not mentioned otherwise, all fine-tuning experiments run at 384 resolution (running fine-tuning at different resolution than training is common practice (Kolesnikov et al., 2020)).
294
+
295
+ When transferring ViT models to another dataset, we remove the whole head (two linear layers) and replace it by a single, zero-initialized linear layer outputting the number of classes required by the target dataset. We found this to be a little more robust than simply re-initializing the very last layer.
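The zero-initialized replacement head can be sketched in a few lines (our own illustration, not the released code):

```python
import numpy as np

class ZeroInitHead:
    """Zero-initialized classification head used when transferring:
    the pre-trained two-layer head is discarded and replaced by this
    single D -> num_classes linear layer (illustrative sketch)."""

    def __init__(self, D, num_classes):
        self.W = np.zeros((D, num_classes))
        self.b = np.zeros(num_classes)

    def __call__(self, features):
        # With zero weights, all classes start with identical logits,
        # so fine-tuning begins from an unbiased, uniform prediction.
        return features @ self.W + self.b

head = ZeroInitHead(D=768, num_classes=10)
logits = head(np.ones((2, 768)))
print(logits.shape)  # (2, 10)
```

Because the initial logits are all zero, the softmax starts uniform over the target classes regardless of the features, which plausibly contributes to the robustness noted above.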
296
+
297
+ For VTAB we follow the protocol in Kolesnikov et al. (2020), and use the same hyperparameter setting for all tasks. We use a learning rate of 0.01 and train for 2500 steps (Tab. 4). We chose this setting by running a small sweep over two learning rates and two schedules, and selecting the setting with the highest VTAB score on the 200-example validation sets. We follow the pre-processing used in Kolesnikov et al. (2020), except that we do not use task-specific input resolutions. Instead we find that Vision Transformer benefits most from a high resolution $(384 \times 384)$ for all tasks.
298
+
299
+ # B.1.2 SELF-SUPERVISION
300
+
301
+ We employ the masked patch prediction objective for preliminary self-supervision experiments. To do so we corrupt $50\%$ of patch embeddings by either replacing their embeddings with a learnable [mask] embedding $(80\%)$, a random other patch embedding $(10\%)$, or just keeping them as is $(10\%)$. This setup is very similar to the one used for language by Devlin et al. (2019). Finally, we predict the 3-bit mean color (i.e., 512 colors in total) of every corrupted patch using their respective patch representations.
302
+
303
+ We trained our self-supervised model for 1M steps (ca. 14 epochs) with batch size 4096 on JFT. We use Adam, with a base learning rate of $2 \cdot 10^{-4}$, warmup of 10k steps and cosine learning rate decay. As prediction targets for pretraining we tried the following settings: 1) predicting only the mean 3-bit color (i.e., 1 prediction of 512 colors), 2) predicting a $4 \times 4$ downsized version of the $16 \times 16$ patch with 3-bit colors in parallel (i.e., 16 predictions of 512 colors), 3) regression on the full patch using L2 (i.e., 256 regressions on the 3 RGB channels). Surprisingly, we found that all worked quite well, though L2 was slightly worse. We report final results only for option 1) because it has shown the best few-shot performance. We also experimented with a $15\%$ corruption rate, as used by Devlin et al. (2019), but results were also slightly worse on our few-shot metrics.
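The 80/10/10 corruption scheme and the 3-bit mean-color target can be sketched as follows (a minimal NumPy illustration under our own assumptions about shapes; in particular, quantizing each RGB channel to 3 bits yields $8^3 = 512$ color classes):

```python
import numpy as np

def mean_color_3bit(patch):
    """Masked-patch-prediction target: the patch's mean RGB color,
    quantized to 3 bits per channel -> one of 8^3 = 512 classes.
    patch: (H, W, 3) array with values in [0, 1]."""
    mean = patch.reshape(-1, 3).mean(axis=0)
    bits = np.minimum((mean * 8).astype(int), 7)   # 3 bits per channel
    return bits[0] * 64 + bits[1] * 8 + bits[2]    # class id in [0, 512)

def corrupt(embeddings, mask_token, rng, rate=0.5):
    """BERT-style corruption: of the selected 50% of patch embeddings,
    80% -> the [mask] embedding, 10% -> a random other patch embedding,
    10% left unchanged. Returns corrupted embeddings and the mask."""
    x = embeddings.copy()
    N = len(x)
    sel = rng.random(N) < rate
    r = rng.random(N)
    x[sel & (r < 0.8)] = mask_token
    swap = sel & (r >= 0.8) & (r < 0.9)
    x[swap] = embeddings[rng.integers(0, N, size=swap.sum())]
    return x, sel

rng = np.random.default_rng(0)
emb = rng.normal(size=(196, 768))                  # e.g. 14x14 patch embeddings
corrupted, selected = corrupt(emb, np.zeros(768), rng)
print(mean_color_3bit(np.ones((16, 16, 3))))       # 511: the brightest class
```

The loss would then be a 512-way cross-entropy on the representations of the corrupted patches only (positions where `selected` is true).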
304
+
305
+ Lastly, we would like to remark that our instantiation of masked patch prediction doesn't require such an enormous amount of pretraining nor a large dataset such as JFT in order to lead to similar performance gains on ImageNet classification. That is, we observed diminishing returns on downstream performance after 100k pretraining steps, and see similar gains when pretraining on ImageNet.
306
+
307
+ # C ADDITIONAL RESULTS
308
+
309
+ We report detailed results corresponding to the figures presented in the paper. Table 5 corresponds to Figure 3 from the paper and shows transfer performance of different ViT models pre-trained on datasets of increasing size: ImageNet, ImageNet-21k, and JFT-300M. Table 6 corresponds to
310
+
311
+ <table><tr><td></td><td></td><td>ViT-B/16</td><td>ViT-B/32</td><td>ViT-L/16</td><td>ViT-L/32</td><td>ViT-H/14</td></tr><tr><td rowspan="6">ImageNet</td><td>CIFAR-10</td><td>98.13</td><td>97.77</td><td>97.86</td><td>97.94</td><td>-</td></tr><tr><td>CIFAR-100</td><td>87.13</td><td>86.31</td><td>86.35</td><td>87.07</td><td>-</td></tr><tr><td>ImageNet</td><td>77.91</td><td>73.38</td><td>76.53</td><td>71.16</td><td>-</td></tr><tr><td>ImageNet ReaL</td><td>83.57</td><td>79.56</td><td>82.19</td><td>77.83</td><td>-</td></tr><tr><td>Oxford Flowers-102</td><td>89.49</td><td>85.43</td><td>89.66</td><td>86.36</td><td>-</td></tr><tr><td>Oxford-IIIT Pets</td><td>93.81</td><td>92.04</td><td>93.64</td><td>91.35</td><td>-</td></tr><tr><td rowspan="6">ImageNet-21k</td><td>CIFAR-10</td><td>98.95</td><td>98.79</td><td>99.16</td><td>99.13</td><td>99.27</td></tr><tr><td>CIFAR-100</td><td>91.67</td><td>91.97</td><td>93.44</td><td>93.04</td><td>93.82</td></tr><tr><td>ImageNet</td><td>83.97</td><td>81.28</td><td>85.15</td><td>80.99</td><td>85.13</td></tr><tr><td>ImageNet ReaL</td><td>88.35</td><td>86.63</td><td>88.40</td><td>85.65</td><td>88.70</td></tr><tr><td>Oxford Flowers-102</td><td>99.38</td><td>99.11</td><td>99.61</td><td>99.19</td><td>99.51</td></tr><tr><td>Oxford-IIIT Pets</td><td>94.43</td><td>93.02</td><td>94.73</td><td>93.09</td><td>94.82</td></tr><tr><td rowspan="6">JFT-300M</td><td>CIFAR-10</td><td>99.00</td><td>98.61</td><td>99.38</td><td>99.19</td><td>99.50</td></tr><tr><td>CIFAR-100</td><td>91.87</td><td>90.49</td><td>94.04</td><td>92.52</td><td>94.55</td></tr><tr><td>ImageNet</td><td>84.15</td><td>80.73</td><td>87.12</td><td>84.37</td><td>88.04</td></tr><tr><td>ImageNet ReaL</td><td>88.85</td><td>86.27</td><td>89.99</td><td>88.28</td><td>90.33</td></tr><tr><td>Oxford Flowers-102</td><td>99.56</td><td>99.27</td><td>99.56</td><td>99.45</td><td>99.68</td></tr><tr><td>Oxford-IIIT Pets</td><td>95.80</td><td>93.40</td><td>97.11</td><td>95.83</td><td>97.56</td></tr></table>
312
+
313
+ Table 5: Top1 accuracy (in %) of Vision Transformer on various datasets when pre-trained on ImageNet, ImageNet-21k or JFT300M. These values correspond to Figure 3 in the main text. Models are fine-tuned at 384 resolution. Note that the ImageNet results are computed without additional techniques (Polyak averaging and 512 resolution images) used to achieve results in Table 2.
314
+
315
+ <table><tr><td>Model</td><td>Epochs</td><td>ImageNet</td><td>ImageNet ReaL</td><td>CIFAR-10</td><td>CIFAR-100</td><td>Pets</td><td>Flowers</td><td>exaFLOPs</td></tr><tr><td>ViT-B/32</td><td>7</td><td>80.73</td><td>86.27</td><td>98.61</td><td>90.49</td><td>93.40</td><td>99.27</td><td>164</td></tr><tr><td>ViT-B/16</td><td>7</td><td>84.15</td><td>88.85</td><td>99.00</td><td>91.87</td><td>95.80</td><td>99.56</td><td>743</td></tr><tr><td>ViT-L/32</td><td>7</td><td>84.37</td><td>88.28</td><td>99.19</td><td>92.52</td><td>95.83</td><td>99.45</td><td>574</td></tr><tr><td>ViT-L/16</td><td>7</td><td>86.30</td><td>89.43</td><td>99.38</td><td>93.46</td><td>96.81</td><td>99.66</td><td>2586</td></tr><tr><td>ViT-L/16</td><td>14</td><td>87.12</td><td>89.99</td><td>99.38</td><td>94.04</td><td>97.11</td><td>99.56</td><td>5172</td></tr><tr><td>ViT-H/14</td><td>14</td><td>88.08</td><td>90.36</td><td>99.50</td><td>94.71</td><td>97.11</td><td>99.71</td><td>12826</td></tr><tr><td>ResNet50x1</td><td>7</td><td>77.54</td><td>84.56</td><td>97.67</td><td>86.07</td><td>91.11</td><td>94.26</td><td>150</td></tr><tr><td>ResNet50x2</td><td>7</td><td>82.12</td><td>87.94</td><td>98.29</td><td>89.20</td><td>93.43</td><td>97.02</td><td>592</td></tr><tr><td>ResNet101x1</td><td>7</td><td>80.67</td><td>87.07</td><td>98.48</td><td>89.17</td><td>94.08</td><td>95.95</td><td>285</td></tr><tr><td>ResNet152x1</td><td>7</td><td>81.88</td><td>87.96</td><td>98.82</td><td>90.22</td><td>94.17</td><td>96.94</td><td>427</td></tr><tr><td>ResNet152x2</td><td>7</td><td>84.97</td><td>89.69</td><td>99.06</td><td>92.05</td><td>95.37</td><td>98.62</td><td>1681</td></tr><tr><td>ResNet152x2</td><td>14</td><td>85.56</td><td>89.89</td><td>99.24</td><td>91.92</td><td>95.75</td><td>98.75</td><td>3362</td></tr><tr><td>ResNet200x3</td><td>14</td><td>87.22</td><td>90.15</td><td>99.34</td><td>93.53</td><td>96.32</td><td>99.04</td><td>10212</td></tr><tr><td>R50x1+ViT-B/32</td><td>7</td><td>84.90</td><td>89.15</td><td>99.01</td><td>92.24</td><td>95.75</td><td>99.46</td><td>315</td></tr><tr><td>R50x1+ViT-B/16</td><td>7</td><td>85.58</td><td>89.65</td><td>99.14</td><td>92.63</td><td>96.65</td><td>99.40</td><td>855</td></tr><tr><td>R50x1+ViT-L/32</td><td>7</td><td>85.68</td><td>89.04</td><td>99.24</td><td>92.93</td><td>96.97</td><td>99.43</td><td>725</td></tr><tr><td>R50x1+ViT-L/16</td><td>7</td><td>86.60</td><td>89.72</td><td>99.18</td><td>93.64</td><td>97.03</td><td>99.40</td><td>2704</td></tr><tr><td>R50x1+ViT-L/16</td><td>14</td><td>87.12</td><td>89.76</td><td>99.31</td><td>93.89</td><td>97.36</td><td>99.11</td><td>5165</td></tr></table>
316
+
317
+ Table 6: Detailed results of model scaling experiments. These correspond to Figure 5 in the main paper.
318
+
319
+ Table 6 corresponds to Figure 5 in the main paper and shows the transfer performance of ViT, ResNet, and hybrid models of varying size, as well as the estimated computational cost of their pre-training.
320
+
321
+ # D ADDITIONAL ANALYSES
322
+
323
+ # D.1 SGD VS. ADAM FOR RESNETS
324
+
325
+ ResNets are typically trained with SGD, and our use of Adam as an optimizer is quite unconventional. Here we show the experiments that motivated this choice. Namely, we compare the fine-tuning performance of two ResNets, 50x1 and 152x2, pre-trained on JFT with SGD and Adam. For SGD, we use the hyperparameters recommended by Kolesnikov et al. (2020). Results are presented
326
+
327
+ <table><tr><td rowspan="2">Dataset</td><td colspan="2">ResNet50</td><td colspan="2">ResNet152x2</td></tr><tr><td>Adam</td><td>SGD</td><td>Adam</td><td>SGD</td></tr><tr><td>ImageNet</td><td>77.54</td><td>78.24</td><td>84.97</td><td>84.37</td></tr><tr><td>CIFAR10</td><td>97.67</td><td>97.46</td><td>99.06</td><td>99.07</td></tr><tr><td>CIFAR100</td><td>86.07</td><td>85.17</td><td>92.05</td><td>91.06</td></tr><tr><td>Oxford-IIIT Pets</td><td>91.11</td><td>91.00</td><td>95.37</td><td>94.79</td></tr><tr><td>Oxford Flowers-102</td><td>94.26</td><td>92.06</td><td>98.62</td><td>99.32</td></tr><tr><td>Average</td><td>89.33</td><td>88.79</td><td>94.01</td><td>93.72</td></tr></table>
328
+
329
+ Table 7: Fine-tuning ResNet models pre-trained with Adam and SGD.
330
+
331
+ ![](images/f2569f011d27a902143deb49a284009583edf0dc1ce09cb054ad619a0d48e226.jpg)
332
+ Figure 8: Scaling different model dimensions of the Vision Transformer.
333
+
334
+ ![](images/80a0a36b6b53252587f6ab5f47f4c71049a770468918a08cf5ddc96cc1be15da.jpg)
335
+
336
+ in Table 7. Adam pre-training outperforms SGD pre-training on most datasets and on average. This justifies the choice of Adam as the optimizer used to pre-train ResNets on JFT. Note that the absolute numbers are lower than those reported by Kolesnikov et al. (2020), since we pre-train only for 7 epochs, not 30.
337
+
338
+ # D.2 TRANSFORMER SHAPE
339
+
340
+ We ran ablations on scaling different dimensions of the Transformer architecture to find out which are best suited for scaling to very large models. Figure 8 shows 5-shot performance on ImageNet for different configurations. All configurations are based on a ViT model with 8 layers, $D = 1024$ , $D_{MLP} = 2048$ and a patch size of 32, the intersection of all lines. We can see that scaling the depth results in the biggest improvements, which are clearly visible up until 64 layers. However, diminishing returns are already visible after 16 layers. Interestingly, scaling the width of the network seems to result in the smallest changes. Decreasing the patch size, and thus increasing the effective sequence length, shows surprisingly robust improvements without introducing parameters. These findings suggest that compute might be a better predictor of performance than the number of parameters, and that scaling should emphasize depth over width, if anything. Overall, we find that scaling all dimensions proportionally results in robust improvements.
341
+
342
+ # D.3 POSITIONAL EMBEDDING
343
+
344
+ We ran ablations on different ways of encoding spatial information using positional embedding. We tried the following cases:
345
+
346
+ - Providing no positional information: Considering the inputs as a bag of patches.
347
+ - 1-dimensional positional embedding: Considering the inputs as a sequence of patches in the raster order (default across all other experiments in this paper).
348
+ - 2-dimensional positional embedding: Considering the inputs as a grid of patches in two dimensions. In this case, two sets of embeddings are learned, each for one of the axes, $X$ -embedding, and $Y$ -embedding, each with size $D / 2$ . Then, based on the coordinate on
349
+
350
+ <table><tr><td>Pos. Emb.</td><td>Default/Stem</td><td>Every Layer</td><td>Every Layer-Shared</td></tr><tr><td>No Pos. Emb.</td><td>0.61382</td><td>N/A</td><td>N/A</td></tr><tr><td>1-D Pos. Emb.</td><td>0.64206</td><td>0.63964</td><td>0.64292</td></tr><tr><td>2-D Pos. Emb.</td><td>0.64001</td><td>0.64046</td><td>0.64022</td></tr><tr><td>Rel. Pos. Emb.</td><td>0.64032</td><td>N/A</td><td>N/A</td></tr></table>
351
+
352
+ Table 8: Results of the ablation study on positional embeddings with ViT-B/16 model evaluated on ImageNet 5-shot linear.
353
+
354
+ ![](images/4fed8df2025ee1284124f071c6aa543bcd54062f3d813d1c997853f677a67109.jpg)
355
+ Figure 9: Position embeddings of models trained with different hyperparameters.
356
+
357
+ ![](images/cb865dfa7ec6660162a6c1426601971381e6c6e0c7701d7d9fd76bfd0080ca4d.jpg)
358
+
359
+ ![](images/864e47e4f73b29589512c76eac1a929910e102b9d0697b2961aa0a6f656aa208.jpg)
360
+
361
+ the patch in the input, we concatenate the $X$ and $Y$ embeddings to get the final positional embedding for that patch.
362
+
363
+ - Relative positional embeddings: Considering the relative distance between patches to encode the spatial information, instead of their absolute position. To do so, we use 1-dimensional Relative Attention, in which we define the relative distance between all possible pairs of patches. Thus, for every given pair (one as query, and the other as key/value in the attention mechanism), we have an offset $p_{q} - p_{k}$ , where each offset is associated with an embedding. Then, we simply run an extra attention, where we use the original query (the content of the query), but use the relative positional embeddings as keys. We then use the logits from the relative attention as a bias term and add it to the logits of the main attention (content-based attention) before applying the softmax.
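As an illustration of the 2-dimensional variant above, the sketch below builds a positional-embedding table for a 14×14 patch grid by concatenating an X-embedding and a Y-embedding of size $D/2$ each. This is a minimal numpy sketch with randomly initialized (rather than learned) embeddings; the function name `make_2d_pos_emb` is ours, not from the paper's codebase.

```python
import numpy as np

def make_2d_pos_emb(grid, d):
    """Build 2-D positional embeddings for a grid x grid patch layout by
    concatenating an X-embedding and a Y-embedding of size d/2 each
    (randomly initialized here; in training they would be learned)."""
    rng = np.random.default_rng(0)
    x_emb = rng.normal(size=(grid, d // 2))  # one row per X coordinate
    y_emb = rng.normal(size=(grid, d // 2))  # one row per Y coordinate
    pos = np.zeros((grid * grid, d))
    for i in range(grid):        # Y coordinate (raster order over rows)
        for j in range(grid):    # X coordinate
            pos[i * grid + j] = np.concatenate([x_emb[j], y_emb[i]])
    return pos

# 224px image with 16px patches -> a 14x14 grid of 196 patch positions
pos = make_2d_pos_emb(grid=14, d=1024)
print(pos.shape)  # (196, 1024)
```

Patches that share an X (or Y) coordinate share the corresponding half of their embedding, which is exactly what makes this encoding 2-dimensional.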
364
+
365
+ In addition to different ways of encoding spatial information, we also tried different ways of incorporating this information in our model. For the 1-dimensional and 2-dimensional positional embeddings, we tried three different cases: (1) add positional embeddings to the inputs right after the stem of the model and before feeding the inputs to the Transformer encoder (default across all other experiments in this paper); (2) learn and add a separate positional embedding to the inputs at the beginning of each layer; (3) add a single learned positional embedding to the inputs at the beginning of each layer, shared between layers.
366
+
367
+ Table 8 summarizes the results from this ablation study on a ViT-B/16 model. As we can see, while there is a large gap between the performances of the model with no positional embedding and models with positional embedding, there is little to no difference between different ways of encoding positional information. We speculate that since our Transformer encoder operates on patch-level inputs, as opposed to pixel-level, the differences in how to encode spatial information are less important. More precisely, in patch-level inputs, the spatial dimensions are much smaller than the original pixel-level inputs, e.g., $14 \times 14$ as opposed to $224 \times 224$ , and learning to represent the spatial relations at this resolution is equally easy for these different positional encoding strategies. Even so, the specific pattern of position embedding similarity learned by the network depends on the training hyperparameters (Figure 9).
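To make the patch-level resolution mentioned above concrete, the following numpy sketch splits a 224×224 RGB image into non-overlapping 16×16 patches, yielding the 14×14 = 196 token positions that the positional embeddings index. This is an illustrative reshape, not the paper's implementation.

```python
import numpy as np

img = np.arange(224 * 224 * 3, dtype=np.float32).reshape(224, 224, 3)
P = 16  # patch size in pixels

# Split the image into non-overlapping P x P patches and flatten each one,
# giving the (num_patches, P*P*3) token sequence the encoder consumes.
patches = img.reshape(224 // P, P, 224 // P, P, 3)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, P * P * 3)
print(patches.shape)  # (196, 768)
```

The spatial grid the model sees is thus 14×14, which is why the various positional encoding strategies above perform so similarly.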
368
+
369
+ ![](images/5c58bd55893e1b1f8795f4b5fb52ab2dd622d3a6ba1de52a13a51f9d170ac7bc.jpg)
370
+ Figure 10: Size of attended area by head and network depth. Attention distance was computed for 128 example images by averaging the distance between the query pixel and all other pixels, weighted by the attention weight. Each dot shows the mean attention distance across images for one of 16 heads at one layer. Image width is 224 pixels.
371
+
372
+ ![](images/07b97843e3ab5f89b0abda62cccd29cc3da39477570e7a7e5662b1f4b937de83.jpg)
373
+
374
+ # D.4 EMPIRICAL COMPUTATIONAL COSTS
375
+
376
+ We are also interested in real-world speed of the architectures on our hardware, which is not always well predicted by theoretical FLOPs due to details like lane widths and cache sizes. For this purpose, we perform timing of inference speed for the main models of interest, on a TPUv3 accelerator; the difference between inference and backprop speed is a constant model-independent factor.
377
+
378
+ Figure 11 (left) shows how many images one core can handle per second, across various input sizes. Every single point refers to the peak performance measured across a wide range of batch-sizes. As can be seen, the theoretical bi-quadratic scaling of ViT with image size only barely starts happening for the largest models at the largest resolutions.
379
+
380
+ Another quantity of interest is the largest batch-size each model can fit onto a core, larger being better for scaling to large datasets. Figure 11 (right) shows this quantity for the same set of models. This shows that large ViT models have a clear advantage in terms of memory-efficiency over ResNet models.
381
+
382
+ # D.5 AXIAL ATTENTION
383
+
384
+ Axial Attention (Huang et al., 2020; Ho et al., 2019) is a simple, yet effective technique to run self-attention on large inputs that are organized as multidimensional tensors. The general idea of axial attention is to perform multiple attention operations, each along a single axis of the input tensor, instead of applying 1-dimensional attention to the flattened version of the input. In axial attention, each attention mixes information along a particular axis, while keeping information along the other axes independent. Along this line, Wang et al. (2020b) proposed the AxialResNet model in which all the convolutions with kernel size $3 \times 3$ in a ResNet50 are replaced by axial self-attention, i.e. a row and a column attention, augmented by relative positional encoding. We have implemented AxialResNet as a baseline model.$^{3}$
385
+
386
+ Moreover, we have modified ViT to process inputs in a 2-dimensional shape, instead of a 1-dimensional sequence of patches, and incorporated Axial Transformer blocks, in which instead of
387
+
388
+ ![](images/d6cda994070dcc53090ac4a25e0a00597f117c3a196ee802aebfa408354ff2ee.jpg)
389
+ Figure 11: Left: Real wall-clock timings of various architectures across input sizes. ViT models have speed comparable to similar ResNets. Right: Largest per-core batch-size fitting on device with various architectures across input sizes. ViT models are clearly more memory-efficient.
390
+
391
+ a self-attention followed by an MLP, we have a row-self-attention plus an MLP followed by a column-self-attention plus an MLP.
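The row/column ordering described above can be sketched as follows. This is a heavily simplified numpy illustration of the axial factorization only: single head, identity Q/K/V projections, and no MLPs, layer normalization, or learned parameters, all of which a real Axial Transformer block would have.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attn(x):
    # Minimal single-head self-attention over the second-to-last axis,
    # with identity Q/K/V projections for brevity.
    scores = softmax(x @ x.swapaxes(-1, -2) / np.sqrt(x.shape[-1]))
    return scores @ x

def axial_block(x):
    """One simplified axial step on an (H, W, D) grid: row attention mixes
    information along W, then column attention mixes along H, each with a
    residual connection. The per-attention MLPs are omitted."""
    x = x + attn(x)  # row attention: each of the H rows attends over its W tokens
    x = x + attn(x.transpose(1, 0, 2)).transpose(1, 0, 2)  # column attention over H
    return x

out = axial_block(np.random.default_rng(0).normal(size=(7, 7, 8)))
print(out.shape)  # (7, 7, 8)
```

Each axial attention operates on a sequence of length $H$ or $W$ rather than $H \cdot W$, which is the source of the compute savings mentioned above.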
392
+
393
+ ![](images/6b9b9109aa6a1b077e74d1766ffe0a1d98f81f6cbd55522d6b0a7d984dfbd12c.jpg)
394
+ Figure 12: Performance of Axial-Attention based models, in terms of top-1 accuracy on ImageNet 5-shot linear, versus their speed in terms of number of FLOPs (left) and inference time (right).
395
+
396
+ ![](images/ce60bdd53d8c7826e2a4af50dcc1eef41765866c5abca4b82b93bf9ff8b99a13.jpg)
397
+
398
+ Figure 12 presents the performance of AxialResNet, Axial-ViT-B/32 and Axial-ViT-B/16 on ImageNet 5-shot linear, when pre-trained on the JFT dataset, versus the pre-training compute, both in terms of number of FLOPs and inference time (examples per second). As we can see, both Axial-ViT-B/32 and Axial-ViT-B/16 outperform their ViT-B counterparts in terms of performance, but this comes at the cost of more compute. This is because in Axial-ViT models, each Transformer block with global self-attention is replaced by two Axial Transformer blocks, one with row and one with column self-attention, and although the sequence length that self-attention operates on is smaller in the axial case, there is an extra MLP per Axial-ViT block. For AxialResNet, although it looks reasonable in terms of accuracy/compute trade-off (Figure 12, left), the naive implementation is extremely slow on TPUs (Figure 12, right).
399
+
400
+ # D.6 ATTENTION DISTANCE
401
+
402
+ To understand how ViT uses self-attention to integrate information across the image, we analyzed the average distance spanned by attention weights at different layers (Figure 10). This "attention distance" is analogous to receptive field size in CNNs. Average attention distance is highly variable
403
+
404
+ across heads in lower layers, with some heads attending to much of the image, while others attend to small regions at or near the query location. As depth increases, attention distance increases for all heads. In the second half of the network, most heads attend widely across tokens.
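The attention-distance statistic of Figure 10 can be computed as sketched below: for each head, the distance between the query and every other position is averaged, weighted by the attention weights. This is our own minimal numpy reconstruction of the described procedure, at patch rather than pixel granularity, and the helper name `mean_attention_distance` is hypothetical.

```python
import numpy as np

def mean_attention_distance(attn, grid, patch_px):
    """Mean distance (in pixels) between a query patch and the patches it
    attends to, weighted by the attention weight and averaged over queries.
    `attn` has shape (heads, grid*grid, grid*grid); each row sums to 1."""
    coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                                  indexing="ij"), -1).reshape(-1, 2) * patch_px
    # pairwise Euclidean distances between patch positions: (N, N)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return (attn * dist[None]).sum(-1).mean(-1)  # one value per head

heads, n = 16, 14 * 14
attn = np.full((heads, n, n), 1.0 / n)  # uniform attention as a baseline
print(mean_attention_distance(attn, grid=14, patch_px=16).shape)  # (16,)
```

A head that attends only to the query's own position scores zero, while a uniformly attending head scores the mean pairwise distance over the grid, matching the "local vs. global" reading of Figure 10.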
405
+
406
+ # D.7 ATTENTION MAPS
407
+
408
+ To compute maps of the attention from the output token to the input space (Figures 6 and 13), we used Attention Rollout (Abnar & Zuidema, 2020). Briefly, we averaged attention weights of ViT-L/16 across all heads and then recursively multiplied the weight matrices of all layers. This accounts for the mixing of attention across tokens through all layers.
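The rollout procedure described above can be sketched in numpy as follows. Note that Attention Rollout as defined by Abnar & Zuidema (2020) also adds the identity matrix at each layer to account for residual connections and renormalizes the rows; we include that step here even though the paragraph above only mentions averaging and multiplying.

```python
import numpy as np

def attention_rollout(attns):
    """Attention Rollout: average attention over heads at each layer, add
    the identity for the residual connection, renormalize rows, then
    multiply the per-layer matrices recursively from input to output."""
    rollout = np.eye(attns[0].shape[-1])
    for a in attns:                       # a: (heads, tokens, tokens)
        a = a.mean(0)                     # average over heads
        a = a + np.eye(a.shape[-1])       # account for residual connections
        a = a / a.sum(-1, keepdims=True)  # renormalize rows
        rollout = a @ rollout
    return rollout

rng = np.random.default_rng(0)
# 4 layers of random (16 heads, 197 tokens) attention, rows normalized to 1
layers = [rng.random((16, 197, 197)) for _ in range(4)]
layers = [a / a.sum(-1, keepdims=True) for a in layers]
print(attention_rollout(layers).shape)  # (197, 197)
```

The row of the result corresponding to the output (class) token gives the attention map over the input patches shown in Figures 6 and 13.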
409
+
410
+ # D.8 VTAB BREAKDOWN
411
+
412
+ Table 9 shows the scores attained on each of the VTAB-1k tasks.
413
+
414
+ Table 9: Breakdown of VTAB-1k performance across tasks.
415
+
416
+ <table><tr><td></td><td>Caltech101</td><td>CIFAR-100</td><td>DTD</td><td>Flowers102</td><td>Pets</td><td>Sun397</td><td>SVHN</td><td>Camelyon</td><td>EuroSAT</td><td>Resisc45</td><td>Retinopathy</td><td>Clevr-Count</td><td>Clevr-Dist</td><td>DMLab</td><td>dSpr-Loc</td><td>dSpr-Ori</td><td>KITTI-Dist</td><td>sNORB-Azim</td><td>sNORB-Elev</td><td>Mean</td></tr><tr><td>ViT-H/14 (JFT)</td><td>95.3</td><td>85.5</td><td>75.2</td><td>99.7</td><td>97.2</td><td>65.0</td><td>88.9</td><td>83.3</td><td>96.7</td><td>91.4</td><td>76.6</td><td>91.7</td><td>63.8</td><td>53.1</td><td>79.4</td><td>63.3</td><td>84.5</td><td>33.2</td><td>51.2</td><td>77.6</td></tr><tr><td>ViT-L/16 (JFT)</td><td>95.4</td><td>81.9</td><td>74.3</td><td>99.7</td><td>96.7</td><td>63.5</td><td>87.4</td><td>83.6</td><td>96.5</td><td>89.7</td><td>77.1</td><td>86.4</td><td>63.1</td><td>49.7</td><td>74.5</td><td>60.5</td><td>82.2</td><td>36.2</td><td>51.1</td><td>76.3</td></tr><tr><td>ViT-L/16 (I21k)</td><td>90.8</td><td>84.1</td><td>74.1</td><td>99.3</td><td>92.7</td><td>61.0</td><td>80.9</td><td>82.5</td><td>95.6</td><td>85.2</td><td>75.3</td><td>70.3</td><td>56.1</td><td>41.9</td><td>74.7</td><td>64.9</td><td>79.9</td><td>30.5</td><td>41.7</td><td>72.7</td></tr></table>
417
+
418
+ ![](images/0db417957ddbb788ebb272f43ec211e2de771fc11a82341cc33451bd6164f9dd.jpg)
419
+ Figure 13: Further example attention maps as in Figure 6 (random selection).
animageisworth16x16wordstransformersforimagerecognitionatscale/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9a1e0e6ea2d8a6afe6152e793310b3abf5d0150bfa0bca82bb263efa574a9828
3
+ size 1324862
animageisworth16x16wordstransformersforimagerecognitionatscale/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2e8347b568e81b9a060bb67e0355ee65f9881833c6bc5a8f2a56b9ab8091279a
3
+ size 493423
augmentingphysicalmodelswithdeepnetworksforcomplexdynamicsforecasting/dcf9d3c9-a5ad-45a5-a1a2-7371a1b74fb7_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:af337ea53966028432df165f966133daef8a5768e90c17a36e3bc4ad7de200f7
3
+ size 159716
augmentingphysicalmodelswithdeepnetworksforcomplexdynamicsforecasting/dcf9d3c9-a5ad-45a5-a1a2-7371a1b74fb7_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fb127056a737d883ac243653a0c2df35e19cafd1b999ab1dc8c77c582505bbad
3
+ size 184067
augmentingphysicalmodelswithdeepnetworksforcomplexdynamicsforecasting/dcf9d3c9-a5ad-45a5-a1a2-7371a1b74fb7_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6f2e4daf0071d8914dbfb79a3c57c82d2e525ccde9327cc57b79cea25f4bdb81
3
+ size 953142
augmentingphysicalmodelswithdeepnetworksforcomplexdynamicsforecasting/full.md ADDED
@@ -0,0 +1,692 @@
 
 
 
 
1
+ # AUGMENTING PHYSICAL MODELS WITH DEEP NETWORKS FOR COMPLEX DYNAMICS FORECASTING
2
+
3
+ * Yuan Yin<sup>1</sup> *Vincent Le Guen<sup>2,3</sup> *Jérémie Dona<sup>1</sup> *Emmanuel de Bézenac<sup>1</sup>
4
+
5
+ * Ibrahim Ayed<sup>1,4</sup> Nicolas Thome<sup>2</sup> Patrick Gallinari<sup>1,5</sup>
6
+
7
+ $^{1}$ Sorbonne Université, CNRS, LIP6, Paris, France
8
+ $^{2}$ Conservatoire National des Arts et Métiers, CEDRIC, Paris, France
9
+ $^{3}$ EDF R&D, Chatou, France
10
+ $^{4}$ Theresis Lab, Thales
11
+ $^{5}$ Criteo AI Lab, Paris, France
12
+
13
+ # ABSTRACT
14
+
15
+ Forecasting complex dynamical phenomena in settings where only partial knowledge of their dynamics is available is a prevalent problem across various scientific fields. While purely data-driven approaches are arguably insufficient in this context, standard physical modeling based approaches tend to be over-simplistic, inducing non-negligible errors. In this work, we introduce the APHYNITY framework, a principled approach for augmenting incomplete physical dynamics described by differential equations with deep data-driven models. It consists in decomposing the dynamics into two components: a physical component accounting for the dynamics for which we have some prior knowledge, and a data-driven component accounting for errors of the physical model. The learning problem is carefully formulated such that the physical model explains as much of the data as possible, while the data-driven component only describes information that cannot be captured by the physical model, no more, no less. This not only provides existence and uniqueness guarantees for the decomposition, but also ensures interpretability and benefits generalization. Experiments made on three important use cases, each representative of a different family of phenomena, i.e. reaction-diffusion equations, wave equations and the non-linear damped pendulum, show that APHYNITY can efficiently leverage approximate physical models to accurately forecast the evolution of the system and correctly identify relevant physical parameters.
16
+
17
+ # 1 INTRODUCTION
18
+
19
+ Modeling and forecasting complex dynamical systems is a major challenge in domains such as environment and climate (Rolnick et al., 2019), health science (Choi et al., 2016), and in many industrial applications (Toubeau et al., 2018). Model Based (MB) approaches typically rely on partial or ordinary differential equations (PDE/ODE) and stem from a deep understanding of the underlying physical phenomena. Machine learning (ML) and deep learning methods are more prior agnostic yet have become state-of-the-art for several spatio-temporal prediction tasks (Shi et al., 2015; Wang et al., 2018; Oreshkin et al., 2020; Dona et al., 2020), and connections have been drawn between deep architectures and numerical ODE solvers, e.g. neural ODEs (Chen et al., 2018; Ayed et al., 2019b). However, modeling complex physical dynamics is still beyond the scope of pure ML methods, which often cannot properly extrapolate to new conditions as MB approaches do.
20
+
21
+ Combining the MB and ML paradigms is an emerging trend aimed at developing the interplay between the two. For example, Brunton et al. (2016); Long et al. (2018b) learn the explicit form of PDEs directly from data, Raissi et al. (2019); Sirignano & Spiliopoulos (2018) use NNs as implicit methods for solving PDEs, Seo et al. (2020) learn spatial differences with a graph network, Ummenhofer et al. (2020) introduce continuous convolutions for fluid simulations, de Bézenac et al. (2018) learn the
22
+
23
+ ![](images/3219626df88e5e4d87fe1a6a737537fdc4975cec255dbf38f3d1f23697213198.jpg)
24
+ (a) Data-driven Neural ODE
25
+
26
+ ![](images/32612817e79ff3247c0339d9ded63f3c58934d0c8b6533ed025caaab95b1b0fc.jpg)
27
+ (b) Simple physical model
28
+ Figure 1: Predicted dynamics for the damped pendulum vs. ground truth (GT) trajectories $\mathrm{d}^2\theta/\mathrm{d}t^2 + \omega_0^2\sin\theta + \alpha\,\mathrm{d}\theta/\mathrm{d}t = 0$ . We show that in (a) the data-driven approach (Chen et al., 2018) fails to properly learn the dynamics due to the lack of training data, while in (b) an ideal pendulum cannot take friction into account. The proposed APHYNITY shown in (c) augments the over-simplified physical model in (b) with a data-driven component. APHYNITY improves both forecasting (MSE) and parameter identification (Error $T_0$ ) compared to (b).
29
+
30
+ ![](images/0386758b984c4287db7fd7fe0da65522ec5b26af56abfc70016557207189771d.jpg)
31
+ (c) Our APHYNITY framework
32
+
33
+ velocity field of an advection-diffusion system, Greydanus et al. (2019); Chen et al. (2020) enforce conservation laws in the network architecture or in the loss function.
34
+
35
+ The large majority of aforementioned MB/ML hybrid approaches assume that the physical model adequately describes the observed dynamics. This assumption is, however, commonly violated in practice. This may be due to various factors, e.g. idealized assumptions and difficulty to explain processes from first principles (Gentine et al., 2018), computational constraints precluding a fine-grained modeling of the system (Ayed et al., 2019a), or unknown external factors, forces and sources which are present (Large & Yeager, 2004). In this paper, we aim at leveraging prior dynamical ODE/PDE knowledge in situations where this physical model is incomplete, i.e. unable to represent the whole complexity of observed data. To handle this case, we introduce a principled learning framework to Augment incomplete PHYsical models for ideNtIfying and forecasTing complex dYnamics (APHYNITY). The rationale of APHYNITY, illustrated in Figure 1 on the pendulum problem, is to augment the physical model when, and only when, it falls short.
36
+
37
+ Designing a general method for combining MB and ML approaches is still a widely open problem, and a clear problem formulation for the latter is lacking (Reichstein et al., 2019). Our contributions towards these goals are the following:
38
+
39
+ - We introduce a simple yet principled framework for combining both approaches. We decompose the data into a physical and a data-driven term such that the data-driven component only models information that cannot be captured by the physical model. We provide existence and uniqueness guarantees (Section 3.1) for the decomposition given mild conditions, and show that this formulation ensures interpretability and benefits generalization.
40
+ - We propose a trajectory-based training formulation (Section 3.2) along with an adaptive optimization scheme (Section 3.3) enabling end-to-end learning for both physical and deep learning components. This allows APHYNITY to automatically adjust the complexity of the neural network to different approximation levels of the physical model, paving the way to flexible learned hybrid models.
41
+ - We demonstrate the generality of the approach on three use cases (reaction-diffusion, wave equations and the pendulum) representative of different PDE families (parabolic, hyperbolic), having a wide spectrum of application domains, e.g. acoustics, electromagnetism, chemistry, biology, physics (Section 4). We show that APHYNITY is able to achieve performances close to complete physical models by augmenting incomplete ones, both in terms of forecasting accuracy and physical parameter identification. Moreover, APHYNITY can also be successfully extended to the partially observable setting (see discussion in Section 5).
42
+
43
+ # 2 RELATED WORK
44
+
45
+ Correction in data assimilation Prediction under approximate physical models has been tackled by traditional statistical calibration techniques, which often rely on Bayesian methods (Pernot & Cailliez, 2017). In data assimilation techniques, e.g. the Kalman filter (Kalman, 1960; Becker et al., 2019) or 4D-var (Courtier et al., 1994), prediction errors are modeled probabilistically and a correction using observed data is applied after each prediction step. Similar residual correction procedures are commonly used in robotics and optimal control (Chen, 2004; Li et al., 2014). However, these sequential (two-stage) procedures prevent the cooperation between prediction and correction. Besides, in model-based reinforcement learning, model deficiencies are typically handled by considering only short-term rollouts (Janner et al., 2019) or by model predictive control (Nagabandi et al., 2018). The originality of APHYNITY is to leverage model-based prior knowledge by augmenting it with neurally parametrized dynamics. It does so while ensuring optimal cooperation between the prior model and the augmentation.
46
+
47
+ Augmented physical models Combining physical models with machine learning (gray-box or hybrid modeling) was first explored in the 1990s: Psichogios & Ungar (1992); Thompson & Kramer (1994); Rico-Martinez et al. (1994) use neural networks to predict the unknown parameters of physical models. The challenge of proper MB/ML cooperation was already raised as a limitation of gray-box approaches but not addressed. Moreover these methods were evaluated on specific applications with a residual targeted to the form of the equation. In the last few years, there has been a renewed interest in deep hybrid models bridging data assimilation techniques and machine learning to identify complex PDE parameters using carefully constrained forward models (Long et al., 2018b; de Bézenac et al., 2018), as discussed in the introduction. Recently, some approaches have specifically targeted the MB/ML cooperation. HybridNet (Long et al., 2018a) and PhICNet (Saha et al., 2020) both use data-driven networks to learn additive perturbations or source terms to a given PDE. The former considers the favorable context where the perturbations can be accessed, and the latter the special case of additive noise on the input. Wang et al. (2019); Mehta et al. (2020) propose several empirical fusion strategies with deep neural networks but lack theoretical groundings. PhyDNet (Le Guen & Thome, 2020) tackles augmentation in partially-observed settings, but with specific recurrent architectures dedicated to video prediction. Crucially, all the aforementioned approaches do not address the issues of uniqueness of the decomposition or of proper cooperation for correct parameter identification. Besides, we found experimentally that this vanilla cooperation is inferior to the APHYNITY learning scheme in terms of forecasting and parameter identification performances (see experiments in Section 4.2).
48
+
49
+ # 3 THE APHYNITY MODEL
50
+
51
+ In the following, we study dynamics driven by an equation of the form:
52
+
53
+ $$
54
+ \frac {\mathrm {d} X _ {t}}{\mathrm {d} t} = F \left(X _ {t}\right) \tag {1}
55
+ $$
56
+
57
+ defined over a finite time interval $[0,T]$ , where the state $X$ is either vector-valued, i.e. we have $X_{t}\in \mathbb{R}^{d}$ for every $t$ (pendulum equations in Section 4), or $X_{t}$ is a $d$ -dimensional vector field over a spatial domain $\Omega \subset \mathbb{R}^k$ , with $k\in \{2,3\}$ , i.e. $X_{t}(x)\in \mathbb{R}^{d}$ for every $(t,x)\in [0,T]\times \Omega$ (reaction-diffusion and wave equations in Section 4). We suppose that we have access to a set of observed trajectories $\mathcal{D} = \{X_{\cdot}:[0,T]\to \mathcal{A}\mid \forall t\in [0,T],\ \mathrm{d}X_{t} / \mathrm{d}t = F(X_{t})\}$ , where $\mathcal{A}$ is the set of $X$ values (either $\mathbb{R}^d$ or a space of vector fields). In our case, the unknown $F$ has $\mathcal{A}$ as domain and we only assume that $F\in \mathcal{F}$ , with $(\mathcal{F},\| \cdot \|)$ a normed vector space.
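As a concrete instance of Eq. (1), the damped pendulum of Figure 1 can be written as a first-order system and integrated numerically to produce trajectories for $\mathcal{D}$. The numpy sketch below uses a classical Runge-Kutta 4 scheme; the parameter values ($\omega_0$, $\alpha$, step size) are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

def F(X, omega0=2 * np.pi, alpha=0.2):
    """Damped pendulum as a first-order system X = (theta, dtheta/dt):
    d2theta/dt2 = -omega0^2 sin(theta) - alpha dtheta/dt."""
    theta, dtheta = X
    return np.array([dtheta, -omega0**2 * np.sin(theta) - alpha * dtheta])

def rk4_trajectory(X0, F, T=10.0, dt=0.01):
    """Integrate dX/dt = F(X) over [0, T] with classical Runge-Kutta 4."""
    xs = [np.asarray(X0, dtype=float)]
    for _ in range(int(T / dt)):
        x = xs[-1]
        k1 = F(x)
        k2 = F(x + dt / 2 * k1)
        k3 = F(x + dt / 2 * k2)
        k4 = F(x + dt * k3)
        xs.append(x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.stack(xs)

traj = rk4_trajectory([1.0, 0.0], F)
print(traj.shape)  # (1001, 2)
```

With friction ($\alpha > 0$) the oscillation amplitude decays over time, which is precisely the behavior an ideal (frictionless) physical prior $F_p$ cannot capture on its own.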
58
+
59
+ # 3.1 DECOMPOSING DYNAMICS INTO PHYSICAL AND AUGMENTED TERMS
60
+
61
+ As introduced in Section 1, we consider the common situation where incomplete information is available on the dynamics, under the form of a family of ODEs or PDEs characterized by their temporal evolution $F_{p} \in \mathcal{F}_{p} \subset \mathcal{F}$ . The APHYNITY framework leverages the knowledge of $\mathcal{F}_{p}$ while mitigating the approximations induced by this simplified model through the combination of physical and data-driven components. $\mathcal{F}$ being a vector space, we can write:
62
+
63
+ $$
64
+ F = F _ {p} + F _ {a}
65
+ $$
66
+
67
+ where $F_{p} \in \mathcal{F}_{p}$ encodes the incomplete physical knowledge and $F_{a} \in \mathcal{F}$ is the data-driven augmentation term complementing $F_{p}$ . The incomplete physical prior is supposed to belong to a known family, but the physical parameters (e.g. propagation speed for the wave equation) are unknown and need to be estimated from data. Both $F_{p}$ and $F_{a}$ parameters are estimated by fitting the trajectories from $\mathcal{D}$ .
68
+
69
+ The decomposition $F = F_{p} + F_{a}$ is in general not unique. For example, all the dynamics could be captured by the $F_{a}$ component. This decomposition is thus ill-defined, which hampers the interpretability and the extrapolation abilities of the model. In other words, one wants the estimated parameters of $F_{p}$ to be as close as possible to the true parameter values of the physical model and $F_{a}$ to play only a complementary role w.r.t $F_{p}$ , so as to model only the information that cannot be captured by the physical prior. For example, when $F \in \mathcal{F}_{p}$ , the data can be fully described by the physical model, and in this case it is sensible to desire $F_{a}$ to be nullified; this is of central importance in a setting where one wishes to identify physical quantities, and for the model to generalize and extrapolate to new conditions. In a more general setting where the physical model is incomplete, the action of $F_{a}$ on the dynamics, as measured through its norm, should be as small as possible.
70
+
71
This general idea is embedded in the following optimization problem:

$$
\min_{F_{p} \in \mathcal{F}_{p},\, F_{a} \in \mathcal{F}} \| F_{a} \| \quad \text{subject to} \quad \forall X \in \mathcal{D},\, \forall t,\ \frac{\mathrm{d} X_{t}}{\mathrm{d} t} = \left(F_{p} + F_{a}\right)\left(X_{t}\right) \tag{2}
$$

The originality of APHYNITY is to leverage model-based prior knowledge by augmenting it with neurally parametrized dynamics. It does so while ensuring optimal cooperation between the prior model and the augmentation.

A first key question is whether the minimum in Eq. (2) is indeed well-defined, in other words whether there indeed exists a decomposition with a minimal-norm $F_{a}$. The answer actually depends on the geometry of $\mathcal{F}_p$, and is formulated in the following proposition, proven in Appendix B:

Proposition 1 (Existence of a minimizing pair). If $\mathcal{F}_p$ is a proximinal set, there exists a decomposition minimizing Eq. (2).

Proximinality is a mild condition which, as shown through the proof of the proposition, cannot be weakened. It is a property verified by any boundedly compact set. In particular, it is true for closed subsets of finite-dimensional spaces. However, if only existence is guaranteed, then, while forecasts would be expected to be accurate, non-uniqueness of the decomposition would hamper the interpretability of $F_{p}$: the identified physical parameters would not be uniquely determined.

It is then natural to ask under which conditions solving problem Eq. (2) leads to a unique decomposition into a physical and a data-driven component. The following result provides guarantees on the existence and uniqueness of the decomposition under mild conditions. The proof is given in Appendix B:

Proposition 2 (Uniqueness of the minimizing pair). If $\mathcal{F}_p$ is a Chebyshev set, Eq. (2) admits a unique minimizer. The $F_p$ in this minimizing pair is the metric projection of the unknown $F$ onto $\mathcal{F}_p$.

The Chebyshev assumption is strictly stronger than proximinality but is still quite mild and necessary. Indeed, in practice, many sets of interest are Chebyshev, including all closed convex sets in strictly convex normed spaces; if $\mathcal{F} = L^2$, $\mathcal{F}_p$ can be any closed convex set, including all finite-dimensional subspaces. In particular, all examples considered in the experiments are Chebyshev sets.

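To make the metric projection of Proposition 2 concrete, here is a small self-contained illustration (our own toy example, not from the paper) in a finite-dimensional $\mathcal{F} = \mathbb{R}^4$, where $\mathcal{F}_p$ is a linear subspace and therefore a Chebyshev set:

```python
import numpy as np

# Toy illustration of Proposition 2: F lives in R^4 and F_p is a
# two-dimensional linear subspace (a Chebyshev set for the L^2 norm).
# The unique minimal-norm augmentation F_a is the residual of the
# metric projection of F onto F_p.
F = np.array([3.0, 1.0, 2.0, -1.0])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])          # basis spanning F_p

coeffs, *_ = np.linalg.lstsq(B, F, rcond=None)
F_p = B @ coeffs                    # metric projection of F onto F_p
F_a = F - F_p                       # unique minimal-norm data-driven term

assert np.allclose(F_p + F_a, F)    # the decomposition is exact
assert np.allclose(B.T @ F_a, 0.0)  # F_a is orthogonal to F_p
```

Any other admissible pair, e.g. $(0, F)$, also satisfies the decomposition constraint but has a strictly larger $\|F_a\|$.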
Propositions 1 and 2 provide, under mild conditions, the theoretical guarantees for the APHYNITY formulation to infer the correct MB/ML decomposition, thus enabling both recovering the proper physical parameters and accurate forecasting.

# 3.2 SOLVING APHYNITY WITH DEEP NEURAL NETWORKS

In the following, both terms of the decomposition are parametrized and denoted $F_{p}^{\theta_{p}}$ and $F_{a}^{\theta_{a}}$. Solving APHYNITY then consists in estimating the parameters $\theta_{p}$ and $\theta_{a}$. $\theta_{p}$ are the physical parameters and are typically low-dimensional, e.g. 2 or 3 in our experiments for the considered physical models. For $F_{a}$, we need sufficiently expressive models able to optimize over all of $\mathcal{F}$: we thus use deep neural networks, which have shown promising performance for the approximation of differential equations (Raissi et al., 2019; Ayed et al., 2019b).

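A minimal sketch of this parametrization (our own stand-in, not the paper's actual architecture): $\theta_p$ is a low-dimensional vector of physical coefficients, while $\theta_a$ collects the weights of a small neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

# theta_p: a single physical coefficient for a toy linear prior.
theta_p = np.array([1.5])

def F_p(X):
    return -theta_p[0] * X              # simple parametric physical prior

# theta_a = (W1, b1, W2, b2): weights of a tiny one-hidden-layer MLP
# standing in for the expressive data-driven component F_a.
W1, b1 = 0.1 * rng.normal(size=(16, 2)), np.zeros(16)
W2, b2 = 0.1 * rng.normal(size=(2, 16)), np.zeros(2)

def F_a(X):
    return W2 @ np.tanh(W1 @ X + b1) + b2

def F(X):
    return F_p(X) + F_a(X)              # full dynamics F = F_p + F_a

dXdt = F(np.array([0.2, -0.1]))         # time derivative at a toy state
```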
When learning the parameters of $F_{p}^{\theta_{p}}$ and $F_{a}^{\theta_{a}}$, we have access to a finite dataset of trajectories discretized with a given temporal resolution $\Delta t$: $\mathcal{D}_{\mathrm{train}} = \{(X_{k\Delta t}^{(i)})_{0 \leq k \leq \lfloor T / \Delta t \rfloor}\}_{1 \leq i \leq N}$. Solving Eq. (2) requires estimating the state derivative $\frac{\mathrm{d}X_{t}}{\mathrm{d}t}$ appearing in the constraint term. One solution is to approximate this derivative using e.g. finite differences, as in (Brunton et al., 2016; Greydanus et al., 2019; Cranmer et al., 2020). This numerical scheme requires high space and time resolutions in the observation space in order to get reliable gradient estimates. Furthermore, it is often unstable, leading to explosive numerical errors as discussed in Appendix D. We propose instead to solve Eq. (2) using an integral trajectory-based approach: we compute $\widetilde{X}_{k\Delta t}^{(i)}$ from an initial state $X_0^{(i)}$ using the current $F_{p}^{\theta_{p}} + F_{a}^{\theta_{a}}$ dynamics, then enforce the constraint $\widetilde{X}_{k\Delta t}^{(i)} = X_{k\Delta t}^{(i)}$. This leads to our final objective function on $(\theta_p,\theta_a)$:

$$
\min_{\theta_{p}, \theta_{a}} \left\| F_{a}^{\theta_{a}} \right\| \quad \text{subject to} \quad \forall i,\, \forall k,\ \widetilde{X}_{k \Delta t}^{(i)} = X_{k \Delta t}^{(i)} \tag{3}
$$

where $\widetilde{X}_{k\Delta t}^{(i)}$ is the approximation of the solution of the ODE at time $k\Delta t$, i.e. $X_0^{(i)} + \int_0^{k\Delta t}(F_p^{\theta_p} + F_a^{\theta_a})(X_s)\,\mathrm{d}s$, obtained by a differentiable ODE solver.

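The trajectory-based constraint can be sketched on a toy 1-D system (our own example; explicit Euler for brevity, whereas the paper integrates with a differentiable RK4 solver): integrate the current $F_p + F_a$ forward from $X_0$ and compare the rollout to the observed states, instead of differentiating noisy observations.

```python
import numpy as np

def F_p(X):                    # toy physical prior: linear decay
    return -0.5 * X

def F_a(X):                    # toy augmentation: constant forcing
    return np.full_like(X, 0.3)

def rollout(X0, dt, n_steps):
    # explicit Euler for illustration only
    traj = [X0]
    for _ in range(n_steps):
        X = traj[-1]
        traj.append(X + dt * (F_p(X) + F_a(X)))
    return np.array(traj)

X_tilde = rollout(np.array([2.0]), dt=0.1, n_steps=25)
# Eq. (3) then enforces X_tilde[k] == X_{k*dt} for every observed index k.
```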
In our setting, where we consider situations for which $F_{p}^{\theta_{p}}$ only partially describes the physical phenomenon, this coupled MB + ML formulation leads to different parameter estimates than using the MB formulation alone, as analyzed more thoroughly in Appendix C. Interestingly, our experiments show that using this formulation also leads to a better identification of the physical parameters $\theta_{p}$ than when fitting the simplified physical model $F_{p}^{\theta_{p}}$ alone (Section 4): with only incomplete knowledge of the physics, the $\theta_{p}$ estimator would be biased by the additional dynamics which need to be fitted to the data. Appendix F also confirms that the integral formulation gives better forecasting results and a more stable behavior than supervising over finite difference approximations of the derivatives.

# 3.3 ADAPTIVELY CONSTRAINED OPTIMIZATION

The formulation in Eq. (3) involves constraints which are difficult to enforce exactly in practice. We consider a variant of the method of multipliers (Bertsekas, 1996) which uses a sequence of Lagrangian relaxations $\mathcal{L}_{\lambda_j}(\theta_p,\theta_a)$:

$$
\mathcal{L}_{\lambda_{j}}\left(\theta_{p}, \theta_{a}\right) = \left\| F_{a}^{\theta_{a}} \right\| + \lambda_{j} \cdot \mathcal{L}_{traj}\left(\theta_{p}, \theta_{a}\right) \tag{4}
$$

where $\mathcal{L}_{traj}(\theta_p, \theta_a) = \sum_{i=1}^{N} \sum_{h=1}^{T/\Delta t} \|X_{h\Delta t}^{(i)} - \widetilde{X}_{h\Delta t}^{(i)}\|$.

This method needs an increasing sequence $(\lambda_j)_j$ such that the successive minima of $\mathcal{L}_{\lambda_j}$ converge to a solution (at least a local one) of the constrained problem in Eq. (3). We select $(\lambda_j)_j$ using an iterative strategy: starting from a value $\lambda_0$, we iteratively minimize $\mathcal{L}_{\lambda_j}$ by gradient descent, then update $\lambda_j$ with $\lambda_{j + 1} = \lambda_j + \tau_2\mathcal{L}_{traj}(\theta_{j + 1})$, where $\tau_{2}$ is a chosen hyper-parameter and $\theta = (\theta_p,\theta_a)$. This procedure is summarized in Algorithm 1. This adaptive iterative procedure allows us to obtain stable and robust results, in a reproducible fashion, as shown in the experiments.

# Algorithm 1: APHYNITY

Initialization: $\lambda_0 \geq 0$, $\tau_1 > 0$, $\tau_2 > 0$

for epoch $= 1:N_{\text{epochs}}$ do

for iter $= 1:N_{\text{iter}}$ do

for batch $= 1:B$ do

$$
\theta_{j + 1} = \theta_{j} - \tau_{1} \nabla_{\theta} \left[ \lambda_{j} \mathcal{L}_{traj}(\theta_{j}) + \left\| F_{a}^{\theta_{a}} \right\| \right]
$$

$$
\lambda_{j + 1} = \lambda_{j} + \tau_{2} \mathcal{L}_{traj}(\theta_{j + 1})
$$

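As a sanity check, Algorithm 1 can be exercised end-to-end on a toy problem (entirely our own construction: scalar dynamics, Euler integration, numerical gradients instead of backpropagation) where $F_p^{\theta_p}(x) = \theta_p x$ and $F_a^{\theta_a}(x) = \theta_a$ is a constant forcing:

```python
import numpy as np

dt, n = 0.1, 20

# Observed trajectory generated by the "true" dynamics dx/dt = -0.5 x + 0.3.
data = [2.0]
for _ in range(n):
    data.append(data[-1] + dt * (-0.5 * data[-1] + 0.3))
data = np.array(data)

def rollout(theta):
    # Euler rollout of F_p + F_a with F_p(x) = theta[0]*x, F_a(x) = theta[1]
    xs = [2.0]
    for _ in range(n):
        xs.append(xs[-1] + dt * (theta[0] * xs[-1] + theta[1]))
    return np.array(xs)

def L_traj(theta):
    return np.sum((rollout(theta) - data) ** 2)

def lagrangian(theta, lam):
    return abs(theta[1]) + lam * L_traj(theta)   # relaxation of Eq. (4)

def num_grad(f, theta, eps=1e-6):
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta); e[i] = eps
        g[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return g

theta, lam = np.zeros(2), 1.0
tau1, tau2 = 1e-3, 0.5
init_err = L_traj(theta)
for _ in range(100):                       # outer updates of lambda
    for _ in range(100):                   # gradient steps on L_lambda
        theta -= tau1 * num_grad(lambda t: lagrangian(t, lam), theta)
    lam += tau2 * L_traj(theta)            # adaptively tighten the constraint
```

The trajectory error shrinks while $\lambda$ grows, driving the solution toward the constrained problem of Eq. (3).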
# 4 EXPERIMENTAL VALIDATION

We validate our approach on 3 classes of challenging physical dynamics: reaction-diffusion, wave propagation, and the damped pendulum, representative of various application domains such as chemistry, biology or ecology (for reaction-diffusion) and earth physics, acoustics, electromagnetism or even neuro-biology (for wave equations). The first two dynamics are described by PDEs and thus in practice must be learned from very high-dimensional vectors, discretized from the original compact domain. This makes learning much more difficult than in the one-dimensional pendulum case. For each problem, we investigate the cooperation between physical models of increasing complexity encoding incomplete knowledge of the dynamics (denoted Incomplete physics in the following) and data-driven models. We show the relevance of APHYNITY (denoted APHYNITY models) both in terms of forecasting accuracy and physical parameter identification.

# 4.1 EXPERIMENTAL SETTING

We describe the three families of equations studied in the experiments. In all experiments, $\mathcal{F} = \mathcal{L}^2(\mathcal{A})$ where $\mathcal{A}$ is the set of all admissible states for each problem, and the $\mathcal{L}^2$ norm is computed on $\mathcal{D}_{train}$ by $\| F\|^2 \approx \sum_{i,k}\| F(X_{k\Delta t}^{(i)})\|^2$. All considered sets of physical functionals $\mathcal{F}_p$ are closed and convex in $\mathcal{F}$, and thus are Chebyshev. In order to enable the evaluation of both prediction and parameter identification, all our experiments are conducted on simulated datasets with known model parameters. Each dataset has been simulated using an appropriate high-precision integration scheme for the corresponding equation. All solver-based models take the first state $X_0$ as input and predict the remaining time-steps by integrating $F$ through the same differentiable generic ODE solver (4th order Runge-Kutta). Implementation details and architectures are given in Appendix E.

Reaction-diffusion equations We consider a 2D FitzHugh-Nagumo type model (Klaasen & Troy, 1984). The system is driven by the PDE $\frac{\partial u}{\partial t} = a\Delta u + R_u(u,v;k)$, $\frac{\partial v}{\partial t} = b\Delta v + R_v(u,v)$, where $a$ and $b$ are respectively the diffusion coefficients of $u$ and $v$, and $\Delta$ is the Laplace operator. The local reaction terms are $R_u(u,v;k) = u - u^3 - k - v$ and $R_v(u,v) = u - v$. The state is $X = (u,v)$, defined over a compact rectangular domain $\Omega$ with periodic boundary conditions. The considered physical models are: Param PDE $(a,b)$, with unknown $(a,b)$ diffusion terms and without reaction terms: $\mathcal{F}_p = \{F_p^{a,b} : (u,v) \mapsto (a\Delta u, b\Delta v) \mid a \geq a_{\min} > 0, b \geq b_{\min} > 0\}$; Param PDE $(a,b,k)$, the full PDE with unknown parameters: $\mathcal{F}_p = \{F_p^{a,b,k} : (u,v) \mapsto (a\Delta u + R_u(u,v;k), b\Delta v + R_v(u,v)) \mid a \geq a_{\min} > 0, b \geq b_{\min} > 0, k \geq k_{\min} > 0\}$.

Damped wave equations We investigate the damped-wave PDE $\frac{\partial^2 w}{\partial t^2} - c^2 \Delta w + k \frac{\partial w}{\partial t} = 0$, where $k$ is the damping coefficient. The state is $X = (w, \frac{\partial w}{\partial t})$ and we consider a compact spatial domain $\Omega$ with homogeneous Neumann boundary conditions. Note that this damping differs from the pendulum one, as its effect is global. Our physical models are: $\bullet$ Param PDE $(c)$, without damping term: $\mathcal{F}_p = \{F_p^c : (u, v) \mapsto (v, c^2 \Delta u) \mid c \in [\epsilon, +\infty)$ with $\epsilon > 0\}$; $\bullet$ Param PDE $(c, k)$: $\mathcal{F}_p = \{F_p^{c,k} : (u, v) \mapsto (v, c^2 \Delta u - kv) \mid c, k \in [\epsilon, +\infty)$ with $\epsilon > 0\}$.

Damped pendulum The evolution follows the ODE $\mathrm{d}^2\theta/\mathrm{d}t^2 + \omega_0^2\sin\theta + \alpha\,\mathrm{d}\theta/\mathrm{d}t = 0$, where $\theta(t)$ is the angle, $\omega_0$ the proper pulsation ($T_0$ the period) and $\alpha$ the damping coefficient. With state $X = (\theta, \mathrm{d}\theta/\mathrm{d}t)$, the evolution term is $F_p^{\omega_0,\alpha}: X \mapsto (\mathrm{d}\theta/\mathrm{d}t, -\omega_0^2\sin\theta - \alpha\,\mathrm{d}\theta/\mathrm{d}t)$. Our physical models are: Hamiltonian (Greydanus et al., 2019), a conservative approximation, with $\mathcal{F}_p = \{F_p^\mathcal{H}:(u,v) \mapsto (\partial_y\mathcal{H}(u,v), -\partial_x\mathcal{H}(u,v)) \mid \mathcal{H} \in H^1(\mathbb{R}^2)\}$, where $H^1(\mathbb{R}^2)$ is the first-order Sobolev space; Param ODE $(\omega_0)$, the frictionless pendulum: $\mathcal{F}_p = \{F_p^{\omega_0,\alpha=0} \mid \omega_0 \in [\epsilon, +\infty)\}$ with $\epsilon > 0$; Param ODE $(\omega_0,\alpha)$, the full pendulum equation: $\mathcal{F}_p = \{F_p^{\omega_0,\alpha} \mid \omega_0,\alpha \in [\epsilon, +\infty)\}$ with $\epsilon > 0$.

Baselines As purely data-driven baselines, we use Neural ODE (Chen et al., 2018) for the three problems, and PredRNN++ (Wang et al., 2018; for reaction-diffusion only), which are competitive models for datasets generated by differential equations and for spatio-temporal data. As MB/ML methods, in the ablation studies (see Appendix F), we compare, for all problems, to the vanilla MB/ML cooperation scheme found in (Wang et al., 2019; Mehta et al., 2020). We also show results for True PDE/ODE, which corresponds to the equation used for data simulation (which does not lead to zero error due to the difference between the simulation and training integration schemes). For the pendulum, we compare to Hamiltonian neural networks (Greydanus et al., 2019; Toth et al., 2020) and to the deep Galerkin method (DGM, Sirignano & Spiliopoulos, 2018). See additional details in Appendix E.

Table 1: Forecasting and identification results on the (a) reaction-diffusion, (b) wave equation, and (c) damped pendulum datasets. We set for (a) $a = 1 \times 10^{-3}$, $b = 5 \times 10^{-3}$, $k = 5 \times 10^{-3}$, for (b) $c = 330$, $k = 50$ and for (c) $T_0 = 6$, $\alpha = 0.2$ as true parameters. log MSEs are computed respectively over 25, 25, and 40 predicted time-steps. %Err param. averages the results when several physical parameters are present. For each level of incorporated physical knowledge, equivalent best results according to a Student t-test are shown in bold. n/a corresponds to non-applicable cases.

<table><tr><td colspan="2">Dataset</td><td>Method</td><td>log MSE</td><td>%Err param.</td><td>\( \left\| F_{a}\right\|^2 \)</td></tr><tr><td rowspan="8">(a)Reaction-diffusion</td><td rowspan="2">Data-driven</td><td>Neural ODE</td><td>-3.76±0.02</td><td>n/a</td><td>n/a</td></tr><tr><td>PredRNN++</td><td>-4.60±0.01</td><td>n/a</td><td>n/a</td></tr><tr><td rowspan="2">Incomplete physics</td><td>Param PDE (a,b)</td><td>-1.26±0.02</td><td>67.6</td><td>n/a</td></tr><tr><td>APHYNITY Param PDE (a,b)</td><td>-5.10±0.21</td><td>2.3</td><td>67</td></tr><tr><td rowspan="4">Complete physics</td><td>Param PDE (a,b,k)</td><td>-9.34±0.20</td><td>0.17</td><td>n/a</td></tr><tr><td>APHYNITY Param PDE (a,b,k)</td><td>-9.35±0.02</td><td>0.096</td><td>1.5e-6</td></tr><tr><td>True PDE</td><td>-8.81±0.05</td><td>n/a</td><td>n/a</td></tr><tr><td>APHYNITY True PDE</td><td>-9.17±0.02</td><td>n/a</td><td>1.4e-7</td></tr><tr><td rowspan="7">(b)Wave equation</td><td>Data-driven</td><td>Neural ODE</td><td>-2.51±0.29</td><td>n/a</td><td>n/a</td></tr><tr><td rowspan="2">Incomplete physics</td><td>Param PDE (c)</td><td>0.51±0.07</td><td>10.4</td><td>n/a</td></tr><tr><td>APHYNITY Param PDE (c)</td><td>-4.64±0.25</td><td>0.31</td><td>71</td></tr><tr><td rowspan="4">Complete physics</td><td>Param PDE (c,k)</td><td>-4.68±0.55</td><td>1.38</td><td>n/a</td></tr><tr><td>APHYNITY Param PDE (c,k)</td><td>-6.09±0.28</td><td>0.70</td><td>4.54</td></tr><tr><td>True PDE</td><td>-4.66±0.30</td><td>n/a</td><td>n/a</td></tr><tr><td>APHYNITY True PDE</td><td>-5.24±0.45</td><td>n/a</td><td>0.14</td></tr><tr><td rowspan="11">(c)Damped pendulum</td><td>Data-driven</td><td>Neural ODE</td><td>-2.84±0.70</td><td>n/a</td><td>n/a</td></tr><tr><td rowspan="5">Incomplete physics</td><td>Hamiltonian</td><td>-0.35±0.10</td><td>n/a</td><td>n/a</td></tr><tr><td>APHYNITY Hamiltonian</td><td>-3.97±1.20</td><td>n/a</td><td>623</td></tr><tr><td>Param ODE (ω0)</td><td>-0.14±0.10</td><td>13.2</td><td>n/a</td></tr><tr><td>Deep Galerkin Method (ω0)</td><td>-3.10±0.40</td><td>22.1</td><td>n/a</td></tr><tr><td>APHYNITY Param ODE (ω0)</td><td>-7.86±0.60</td><td>4.0</td><td>132</td></tr><tr><td rowspan="5">Complete physics</td><td>Param ODE (ω0,α)</td><td>-8.28±0.40</td><td>0.45</td><td>n/a</td></tr><tr><td>Deep Galerkin Method (ω0,α)</td><td>-3.14±0.40</td><td>7.1</td><td>n/a</td></tr><tr><td>APHYNITY Param ODE (ω0,α)</td><td>-8.31±0.30</td><td>0.39</td><td>8.5</td></tr><tr><td>True ODE</td><td>-8.58±0.20</td><td>n/a</td><td>n/a</td></tr><tr><td>APHYNITY True ODE</td><td>-8.44±0.20</td><td>n/a</td><td>2.3</td></tr></table>

# 4.2 RESULTS

We analyze and discuss below the results obtained for the three kinds of dynamics, successively examining different evaluation and quality criteria. The conclusions are consistent across the three problems, which allows us to highlight clear trends for all of them.

Forecasting accuracy The data-driven models do not perform well compared to True PDE/ODE (all values are test errors expressed as log MSE): -4.6 for PredRNN++ vs. -9.17 for reaction-diffusion, -2.51 vs. -5.24 for the wave equation, and -2.84 vs. -8.44 for the pendulum in Table 1. The Deep Galerkin method for the pendulum in the complete physics setting, DGM$(\omega_0, \alpha)$, being constrained by the equation, outperforms Neural ODE but is far inferior to the APHYNITY models. In the incomplete physics case, DGM$(\omega_0)$ fails to compensate for the missing information. The incomplete physical models, Param PDE $(a, b)$ for the reaction-diffusion, Param PDE $(c)$ for the wave equation, and the Param ODE $(\omega_0)$ and Hamiltonian models for the damped pendulum, have even poorer performance than purely data-driven ones, as can be expected since they ignore important dynamical components, e.g. friction in the pendulum case. Using APHYNITY with these imperfect physical models greatly improves forecasting accuracy in all cases, significantly outperforming purely data-driven models, and reaching results often close to the accuracy of the true ODE, when APHYNITY and the true ODE models are integrated with the same numerical scheme (which is different from the one used for data generation, hence the non-null errors even for the true equations), e.g. -5.92 vs. -5.24 for the wave equation in

![](images/924d870b3a94780edce91e344599f223256f76a54c88199fc5c30908a16ff7a2.jpg)
(a) Param PDE $(a,b)$, diffusion-only

![](images/bef58a13f90461fd6d1f966e8f8334261bd417bc0725be07e28b86cbaf0ce023.jpg)

![](images/e37d47b6db8aff15851a2f0ba56b5cf1a5252a42979b1a5b101a8b528f9d874b.jpg)

![](images/de94919d5e273215cea4f0391dfa73f38698a36ed118d9784dcefc22ec375915.jpg)

![](images/6cc067baef4aec3aa0c4ab8072ab40dc148f6541d4e655095722ffeda4189ac2.jpg)

![](images/f574f8ac60d3b29c96983d4ff7e11d84132dabcda775a129e1d61e2db5ea3801.jpg)

![](images/7b7ac58a99bad780e32a25275e67c226377f1db7b3278a34a2d2ba2503b5c18d.jpg)

![](images/c2e2bad5b4f52a6f5e6ec7f15d4f646c5f63990aa59e417c754cfdbd8d462338.jpg)

![](images/1244c05137b10078af6a8d6e949f8e0fd2b46fd046a3dc85a382e4a33206bd43.jpg)

![](images/559d9377732089a50327b9aa089f66fd5337fdd44d6717459cb8d33b902b38a2.jpg)

![](images/eb54e79b812600f608b565f1c750075be95985c290514d88ecb8eb020e1fc579.jpg)

![](images/a4e659e78136e1b720c5b5220d38b7e638929062931dc9eb79fa2e1e8b559263.jpg)

![](images/624647b2bb599a830f3868ee53b34748e1add109502f68ec6ec1059f20b65ce6.jpg)

![](images/bf4f5317278e23c6b448c784bb7109e2d7d3d2a4a9741abc6a3d8309e142f81f.jpg)

![](images/1fa82ff6d96ba2434e65f0ff4f53e6893fe15a3f4369114b49dbdda5814328fb.jpg)

![](images/18fcd8e96c4007de7ee9eaa1d5449176d2b35d7d2c000e4ec238da0a0f8c8a4d.jpg)

![](images/ddb8ffb69d3afb182379c1ce89f1e1c58fdb3df32e798246dd19f92e11de5933.jpg)

![](images/9adfdf74c9d078a6007ef7d2337596449b718be73073c52dbdd179383c6edcbb.jpg)

![](images/4548c4f259762f7c3c9b8a1dd1856900d184b7496f22b0629d5c41ddac1aeb4a.jpg)

![](images/3be3b4b2b23b35222175fe1897219945ce218f3fa17b90761042548704066574.jpg)

![](images/5b08555919df63578604e6a2f71df35dee7133d359bf45bb88196d77ab339919.jpg)

![](images/5216bc106ea13ffeccec63bb63fec98fc4c083c4b4cf701a99f95fc783811f5b.jpg)

![](images/b244ef3c4ba5b95045a5f1daab7bdbeab2fd16d3ae3aa9634af4c22f859a8114.jpg)
(c) Ground truth simulation

![](images/70cb3189661cc0d1b4fc3604cef5e45b993b49f9bfce970c0f84bfbe320c573b.jpg)

![](images/b1090bcee914cecc4cc46a886e1295becee8dc0759540fa0e4a67036ff11b3f2.jpg)

![](images/c05c1d47845a04fe6ea270ab02a6499982ea1d55a6a33c40184bf06ac991053e.jpg)

![](images/cb58f0fefe225406fd331b7c0f038aa2001daf701094af45fe8e44aa9419616c.jpg)

![](images/eac1576048af4394ab3e7fbca98b200f9e7ace87428a4d0a1c397d9f4cdf3dab.jpg)

![](images/d3556de6df42c62f977b9412b6e6db8312d987d0da9d0d258002c413f7c9ca61.jpg)
Figure 2: Comparison of predictions of two components $u$ (top) and $v$ (bottom) of the reaction-diffusion system. Note that $t = 4$ is largely beyond the dataset horizon ($t = 2.5$).

![](images/007a59a044690718246b55d619711fee0e9e7a75e42e4bdd06c5dab732fe5b4b.jpg)

![](images/d16e3eaef1a72b590b79d61aad4c14768fef0f95d9236b3bdfa7b534ab77d03d.jpg)

![](images/d7cfb278a299d17810d2f79c71d6a4e003027cad2570e85cdc011ae8c80ac92f.jpg)

![](images/9346ba0851e9353a0336622e177f03d962f84cbdb6386957c01ead3bfb21ff64.jpg)

![](images/093e97118ffad8551203b582644d290e0d0a9b14b7637b363c33c48010cc89de.jpg)

![](images/394165115eecfec93474f3018224795761eb97bc6d8cf94e10ad090d69ca5e50.jpg)
(a) Neural ODE

![](images/55d1db3c4e1a5b002029a9d426f3cdcc4f505ae26e34fab54f9fb8e270df0fbc.jpg)

![](images/b051f1e4676c7c4991220b176d8c7e6c68c014be659b9afd7667c24f4e522455.jpg)

![](images/fc9cbc6650eeb27353f4b3029a0fe8389bfb3af6eabb1749b8aa17319dc33f82.jpg)

![](images/9d347e2cdd9e070a116f8cf63aebaa0585e65769c9ddd4eb8b12f3e9fa0d02b3.jpg)

![](images/9be2be4b210cca3dc7b9b2e1122efd9bde09a222fcb64532b4059d1af03d4c5a.jpg)

![](images/a583a9ae2fff47c831b8663f6bc3663b17c41110daef5144891d4603046f2f37.jpg)

![](images/aebed9cc49fd496fdf4e3475502a58e7221a05c9fb50d4606800c1c73827ac3f.jpg)
(b) APHYNITY Param PDE (c)

![](images/d088056ce7635131b644c82378f0769b769874c4f350e256b29408700740c872.jpg)

![](images/f0f84ea2cd36cb23ca442673c8bfc685fd8b040f3b13d9be7dcd9c042603fa8a.jpg)

![](images/be83db8cb59bd62267d060dbc49a55d0fc8f06269333718e97aad081ce59702e.jpg)

![](images/0cc6073d22a0d90e10641c7f71f9673e8ff9ccb1cc6bc840871671bc5863fe8d.jpg)

![](images/c4a4c67396f7970f0a7f80b15ea5e16e70e02ff0396baa1e65796e025f718051.jpg)

![](images/f1a896923d1b7a79a68a78607082c469764f9008a365ee6edea4d627354bf386.jpg)

![](images/4cbe67f20cd8dc38cebde335bbaeaf2f897523c7958d4f93d854196793b4c66d.jpg)

![](images/4496a2e44fc0a3234d22f41aea6d5c06f21812f8869122e8521623965cb00114.jpg)
(c) Ground truth simulation

![](images/a1c9514b38827c01f0e2baa78e45896da5bb18bbd2fcbb991e5795d35193a525.jpg)

![](images/4b17f598bf90d5be4fc08ed073fe2df1fff73df0aed9083987f7a9082266bb43.jpg)

![](images/52ff41522cbf271cf933afc17608f84976a8c2af1a1486fed8d74effba3d90cd.jpg)

![](images/077a512bb118c2d2eefb4dde4fdc06ed5eeb57ec3d32cae7da0212c34df2237d.jpg)

![](images/4be61c6db3cf9280166a91cb13dc7322a8962383ef4e05ed3c112f03c8b14cc8.jpg)

![](images/20826d08471339c73197aac26949f2e44c763884959584e1edc5d75ff2616143.jpg)
Figure 3: Comparison between the predictions of APHYNITY when $c$ is estimated and Neural ODE for the damped wave equation. Note that $t + 32$ (last column for (a), (b), (c)) is already beyond the training time horizon ($t + 25$), showing the consistency of the APHYNITY method.

Table 1. This clearly highlights the capacity of our approach to augment incomplete physical models with a learned data-driven component.

Physical parameter estimation Confirming the phenomenon mentioned in the introduction and detailed in Appendix C, incomplete physical models can lead to bad estimates of the relevant physical parameters: an error of up to 67.6% and 10.4% respectively for the parameters of the reaction-diffusion and wave equations, and an error of more than 13% for the pendulum parameters in Table 1. APHYNITY is able to significantly improve physical parameter identification: 2.3% error for the reaction-diffusion, 0.3% for the wave equation, and 4% for the pendulum. This validates the fact that augmenting a simple physical model to compensate for its approximations is not only beneficial for prediction, but also helps to limit errors in parameter identification when dynamical models do not fit the data well. This is crucial for the interpretability and explainability of the estimates.

Ablation study We conduct ablation studies to validate the importance of the APHYNITY augmentation compared to a naive strategy consisting in learning $F = F_{p} + F_{a}$ without taking care of the quality of the decomposition, as done in (Wang et al., 2019; Mehta et al., 2020). Results shown in Table 1 of Appendix F show a consistent gain of APHYNITY for the three use cases and for all physical models: for instance, for Param PDE $(a,b)$ in reaction-diffusion, both forecasting performance ($\log \mathrm{MSE} = -5.10$ vs. $-4.56$) and parameter identification (error $= 2.33\%$ vs. $6.39\%$) improve. Other ablation results are provided in Appendix F, showing the relevance of the trajectory-based approach described in Section 3.2 (vs. supervising over finite difference approximations of the derivative $F$).

Flexibility When applied to complete physical models, APHYNITY does not degrade accuracy, contrary to a vanilla cooperation scheme (see ablations in Appendix F). This is due to the least-action principle of our approach: when the physical knowledge is sufficient for properly predicting the observed dynamics, the model learns to ignore the data-driven augmentation. This is shown by the norm of the trained neural net component $F_{a}$, reported in the last column of Table 1: as expected, $\| F_{a}\|^{2}$ diminishes as the complexity of the corresponding physical model increases, and, relative to incomplete models, the norm becomes very small for complete physical models (for example in the pendulum experiments, we have $\| F_{a}\| = 8.5$ for the APHYNITY model, to be compared with 132 and 623 for the incomplete models). Thus, the norm of $F_{a}$ is a good indication of how imperfect the physical models $\mathcal{F}_p$ are. It highlights the flexibility of APHYNITY to successfully adapt to very different levels of prior knowledge. Note also that APHYNITY sometimes slightly improves over the true ODE, as it compensates for the error introduced by the different numerical integration methods used for data simulation and training (see Appendix E).

Qualitative visualizations Results in Figure 2 for reaction-diffusion show that the incomplete diffusion parametric PDE in Figure 2(a) is unable to properly match ground truth simulations: the

behavior of the two components in Figure 2(a) is reduced to simple independent diffusions due to the lack of interaction terms between $u$ and $v$. By using APHYNITY in Figure 2(b), the correlation between the two components appears together with the formation of Turing patterns, which is very similar to the ground truth. This confirms that $F_{a}$ can learn the reaction terms and improve prediction quality. In Figure 3, we see for the wave equation that the data-driven Neural ODE model fails at approximating $\frac{\mathrm{d}w}{\mathrm{d}t}$ as the forecast horizon increases: it misses crucial details of the second component $\frac{\mathrm{d}w}{\mathrm{d}t}$, which makes the forecast diverge from the ground truth. APHYNITY incorporates a Laplacian term as well as the data-driven $F_{a}$, thus capturing the damping phenomenon and succeeding in maintaining physically sound results for long-term forecasts, unlike Neural ODE.

Extension to non-stationary dynamics We provide additional results in Appendix G to tackle datasets where the physical parameters of the equations vary in each sequence. To this end, we design an encoder able to perform parameter estimation for each sequence. Results show that APHYNITY accommodates well to this setting, with similar trends as those reported in this section.

Additional illustrations We give further visual illustrations to demonstrate how the estimation of parameters in incomplete physical models is improved with APHYNITY. For the reaction-diffusion equation, we show that the incomplete parametric PDE underestimates both diffusion coefficients. The difference is visually recognizable between the poorly estimated diffusion (Figure 4(a)) and the true one (Figure 4(c)), while APHYNITY gives a fairly good estimation of those diffusion parameters, as shown in Figure 4(b).

![](images/2251000125d44e7b6e3ea53a01c142e69b25e5bef89f57cd0c2eee0713723154.jpg)
Figure 4: Diffusion predictions using coefficients learned with (a) the incomplete physical model Param PDE $(a,b)$ and (b) APHYNITY-augmented Param PDE $(a,b)$, compared with (c) the true diffusion.

# 5 CONCLUSION

In this work, we introduce the APHYNITY framework, which can efficiently augment approximate physical models with deep data-driven networks, performing similarly to models for which the underlying dynamics are entirely known. We demonstrate the superiority of APHYNITY over data-driven, incomplete-physics, and state-of-the-art approaches combining ML and MB methods, both in terms of forecasting and parameter identification, on three classes of physical systems. Besides, APHYNITY is flexible enough to adapt to different approximation levels of prior physical knowledge.

An appealing perspective is the applicability of APHYNITY to partially-observable settings, such as video prediction. Besides, we hope that the APHYNITY framework will open up the way to the design of a wide range of more flexible MB/ML models, e.g. in climate science, robotics or reinforcement learning. In particular, analyzing the theoretical decomposition properties in a partially-observed setting is an important direction for future work.

# ACKNOWLEDGEMENTS

Funding (P. Gallinari): Chaires de recherche et d'enseignement en intelligence artificielle (Chaires IA), DL4Clim project.

# REFERENCES

Ibrahim Ayed, Nicolas Cedilnik, Patrick Gallinari, and Maxime Sermesant. Ep-net: Learning cardiac electrophysiology models for physiology-based constraints in data-driven predictions. In Yves Coudière, Valery Ozenne, Edward J. Vigmond, and Nejib Zemzemi (eds.), Functional Imaging and Modeling of the Heart - 10th International Conference, FIMH 2019, Bordeaux, France, June 6-8, 2019, Proceedings, volume 11504 of Lecture Notes in Computer Science, pp. 55-63. Springer, 2019a.

Ibrahim Ayed, Emmanuel de Bézenac, Arthur Pajot, Julien Brajard, and Patrick Gallinari. Learning dynamical systems from partial observations. arXiv preprint arXiv:1902.11136, 2019b.

Philipp Becker, Harit Pandya, Gregor Gebhardt, Cheng Zhao, James Taylor, and Gerhard Neumann. Recurrent kalman networks: Factorized inference in high-dimensional deep feature spaces. International Conference on Machine Learning (ICML), 2019.

Dimitri P. Bertsekas. Constrained Optimization and Lagrange Multiplier Methods (Optimization and Neural Computation Series). Athena Scientific, 1 edition, 1996.

Steven L. Brunton, Joshua L. Proctor, and J. Nathan Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15):3932-3937, 2016.

Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David K. Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems (NeurIPS), pp. 6571-6583, 2018.

Wen-Hua Chen. Disturbance observer based control for nonlinear systems. IEEE/ASME Transactions on Mechatronics, 9(4):706-710, 2004.

Zhengdao Chen, Jianyu Zhang, Martin Arjovsky, and Léon Bottou. Symplectic recurrent neural networks. International Conference on Learning Representations (ICLR), 2020.

Edward Choi, Mohammad Taha Bahadori, Jimeng Sun, Joshua Kulas, Andy Schuetz, and Walter Stewart. RETAIN: An interpretable predictive model for healthcare using reverse time attention mechanism. In Advances in Neural Information Processing Systems (NeurIPS), pp. 3504-3512, 2016.

Philippe Courtier, J-N Thépaut, and Anthony Hollingsworth. A strategy for operational implementation of 4d-var, using an incremental approach. Quarterly Journal of the Royal Meteorological Society, 120(519):1367-1387, 1994.

Miles Cranmer, Sam Greydanus, Stephan Hoyer, Peter Battaglia, David Spergel, and Shirley Ho. Lagrangian neural networks. ICLR 2020 Deep Differential Equations Workshop, 2020.

Emmanuel de Bézenac, Arthur Pajot, and Patrick Gallinari. Deep learning for physical processes: Incorporating prior scientific knowledge. International Conference on Learning Representations (ICLR), 2018.

Jérémie Donà, Jean-Yves Franceschi, Sylvain Lamprier, and Patrick Gallinari. Pde-driven spatiotemporal disentanglement. International Conference on Learning Representations (ICLR), 2020.
+ Jérémie Donà, Jean-Yves Franceschi, Sylvain Lamprier, and Patrick Gallinari. Pde-driven spatiotemporal disentanglement. International Conference on Learning Representations (ICLR), 2020.
330
+ John R Dormand and Peter J Prince. A family of embedded runge-kutta formulae. Journal of computational and applied mathematics, 6(1):19-26, 1980.
331
+ James Fletcher and Warren Moors. Chebyshev sets. Journal of the Australian Mathematical Society, 98:161-231, 04 2014. doi: 10.1017/S1446788714000561.
332
+ P. Gentine, M. Pritchard, S. Rasp, G. Reinaudi, and G. Yacalis. Could machine learning break the convection parameterization deadlock? Geophysical Research Letters, 45(11):5742-5751, 2018.
333
+ Samuel Greydanus, Misko Dzamba, and Jason Yosinski. Hamiltonian neural networks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 15353-15363, 2019.
334
+
335
+ Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In Advances in Neural Information Processing Systems (NeurIPS), pp. 12519-12530, 2019.
336
+ Gordon G Johnson. A nonconvex set which has the unique nearest point property. Journal of Approximation Theory, 51(4):289 - 332, 1987.
337
+ Rudolph Emil Kalman. A new approach to linear filtering and prediction problems. 1960.
338
+ Gene A. Klaasen and William C. Troy. Stationary wave solutions of a system of reaction-diffusion equations derived from the fitzhugh-nagumo equations. SIAM Journal on Applied Mathematics, 44(1):96-110, 1984. doi: 10.1137/0144008.
339
+ William Large and Stephen Yeager. Diurnal to decadal global forcing for ocean and sea-ice models: The data sets and flux climatologies, 05 2004.
340
+ Vincent Le Guen and Nicolas Thome. Disentangling physical dynamics from unknown factors for unsupervised video prediction. In Computer Vision and Pattern Recognition (CVPR). 2020.
341
+ Shihua Li, Jun Yang, Wen-Hua Chen, and Xisong Chen. Disturbance observer-based control: methods and applications. CRC press, 2014.
342
+ Yun Long, Xueyuan She, and Saibal Mukhopadhyay. Hybridnet: integrating model-based and data-driven learning to predict evolution of dynamical systems. Conference on Robot Learning (CoRL), 2018a.
343
+ Zichao Long, Yiping Lu, Xianzhong Ma, and Bin Dong. PDE-Net: Learning PDEs from data. In International Conference on Machine Learning (ICML), 2018b.
344
+ Viraj Mehta, Ian Char, Willie Neiswanger, Youngseog Chung, and Jeff Schneider. Neural dynamical systems. ICLR 2020 Deep Differential Equations Workshop, 2020.
345
+ Anusha Nagabandi, Gregory Kahn, Ronald S Fearing, and Sergey Levine. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 7559-7566. IEEE, 2018.
346
+ Boris N. Oreshkin, Dmitri Carpov, Nicolas Chapados, and Yoshua Bengio. N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. International Conference on Learning Representations (ICLR), 2020.
347
+ Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 8024-8035. 2019.
348
+ Pascal Pernot and Fabien Cailliez. A critical review of statistical calibration/prediction models handling data inconsistency and model inadequacy. AIChE Journal, 63(10):4642-4665, 2017.
349
+ Dimitris C Psichogios and Lyle H Ungar. A hybrid neural network-first principles approach to process modeling. AIChE Journal, 38(10):1499-1511, 1992.
350
+ Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 473:686-707, 2019.
351
+ Markus Reichstein, Gustau Camps-Valls, Bjorn Stevens, Martin Jung, Joachim Denzler, Nuno Carvalhais, and & Prabhat. Deep learning and process understanding for data-driven Earth system science. Nature, 566:195-204, 2019.
352
+ R Rico-Martinez, JS Anderson, and IG Kevrekidis. Continuous-time nonlinear signal processing: a neural network based approach for gray box identification. In Proceedings of IEEE Workshop on Neural Networks for Signal Processing, pp. 596-605. IEEE, 1994.
353
+
354
+ David Rolnick, Priya L Donti, Lynn H Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman-Brown, et al. Tackling climate change with machine learning. In NeurIPS 2019 workshop on Climate Change with Machine Learning, 2019.
355
+ Priyabrata Saha, Saurabh Dash, and Saibal Mukhopadhyay. PHICNet: Physics-incorporated convolutional recurrent neural networks for modeling dynamical systems. arXiv preprint arXiv:2004.06243, 2020.
356
+ Sungyong Seo, Chuizheng Meng, and Yan Liu. Physics-aware difference graph networks for sparsely-observed dynamics. International Conference on Learning Representations (ICLR), 2020.
357
+ Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in neural information processing systems (NeurIPS), pp. 802-810, 2015.
358
+ Justin Sirignano and Konstantinos Spiliopoulos. Dgm: A deep learning algorithm for solving partial differential equations. Journal of computational physics, 375:1339-1364, 2018.
359
+ Michael L Thompson and Mark A Kramer. Modeling chemical processes using prior knowledge and neural networks. *AChE Journal*, 40(8):1328-1340, 1994.
360
+ Peter Toth, Danilo Jimenez Rezende, Andrew Jaegle, Sébastien Racanière, Aleksandar Botev, and Irina Higgins. Hamiltonian generative networks. International Conference on Learning Representations (ICLR), 2020.
361
+ Jean-François Toubeau, Jérémie Bottieau, François Vallée, and Zacharie De Grève. Deep learning-based multivariate probabilistic forecasting for short-term scheduling in power markets. IEEE Transactions on Power Systems, 34(2):1203-1215, 2018.
362
+ Benjamin Ummenhofer, Lukas Prantl, Nils Thuerey, and Vladlen Koltun. Lagrangian fluid simulation with continuous convolutions. International Conference on Learning Representations (ICLR), 2020.
363
+ Qi Wang, Feng Li, Yi Tang, and Yan Xu. Integrating model-driven and data-driven methods for power system frequency stability assessment and control. IEEE Transactions on Power Systems, 34(6):4557-4568, 2019.
364
+ Yunbo Wang, Zhifeng Gao, Mingsheng Long, Jianmin Wang, and Philip S. Yu. PredRNN++: Towards a resolution of the deep-in-time dilemma in spatiotemporal predictive learning. In International Conference on Machine Learning (ICML), 2018.
365
+
366
# A REMINDER ON PROXIMINAL AND CHEBYSHEV SETS

We begin by recalling the definitions of proximinal and Chebyshev sets, taken from (Fletcher & Moors, 2014):

Definition 1. A proximinal set of a normed space $(E, \| \cdot \|)$ is a subset $\mathcal{C} \subset E$ such that every $x \in E$ admits at least one nearest point in $\mathcal{C}$.

Definition 2. A Chebyshev set of a normed space $(E, \| \cdot \|)$ is a subset $\mathcal{C} \subset E$ such that every $x \in E$ admits a unique nearest point in $\mathcal{C}$.

In finite-dimensional spaces, proximinality reduces to a closedness condition. In general, it is a weaker one: boundedly compact sets, for example, are proximinal.

In Euclidean spaces, the Chebyshev sets are exactly the closed convex subsets. Whether every Chebyshev set of an infinite-dimensional Hilbert space is closed and convex is still an open question. In general, there exist non-convex Chebyshev sets; a famous example, in a non-complete inner-product space, is presented in (Johnson, 1987).

Given the importance of this topic in approximation theory, finding necessary conditions for a set to be Chebyshev and studying the properties of such sets have been the subject of many efforts. Some of these properties are summarized below:

- The metric projection onto a boundedly compact Chebyshev set is continuous.
- If the norm is strictly convex, every closed convex set, in particular any finite-dimensional subspace, is Chebyshev.
- In a Hilbert space, every closed convex set is Chebyshev.

# B PROOF OF PROPOSITIONS 1 AND 2

We prove the following result, which implies both propositions in the article:

Proposition 3. The optimization problem:

$$
\min_{F_p \in \mathcal{F}_p, F_a \in \mathcal{F}} \|F_a\| \quad \text{subject to} \quad \forall X \in \mathcal{D}, \forall t, \frac{\mathrm{d}X_t}{\mathrm{d}t} = (F_p + F_a)(X_t) \tag{5}
$$

is equivalent to a metric projection onto $\mathcal{F}_p$.

If $\mathcal{F}_p$ is proximinal, Eq. (5) admits a minimizing pair.

If $\mathcal{F}_p$ is Chebyshev, Eq. (5) admits a unique minimizing pair, for which $F_p$ is the metric projection.

Proof. The idea is to reconstruct the full functional from the trajectories of $\mathcal{D}$. By definition, $\mathcal{A}$ is the set of points reached by trajectories in $\mathcal{D}$, so that:

$$
\mathcal{A} = \{x \in \mathbb{R}^d \mid \exists X \in \mathcal{D}, \exists t, X_t = x\}
$$

Then let us define a function $F^{\mathcal{D}}$ in the following way: for $a \in \mathcal{A}$, we can find $X \in \mathcal{D}$ and $t_0$ such that $X_{t_0} = a$. Differentiating $X$ at $t_0$, which is possible by definition of $\mathcal{D}$, we take:

$$
F^{\mathcal{D}}(a) = \left. \frac{\mathrm{d}X_t}{\mathrm{d}t} \right|_{t = t_0}
$$

For any $(F_p, F_a)$ satisfying the constraint in Eq. (5), we then have $(F_p + F_a)(a) = \frac{\mathrm{d}X_t}{\mathrm{d}t}\big|_{t_0} = F^{\mathcal{D}}(a)$ for all $a \in \mathcal{A}$. Conversely, any pair $(F_p, F_a) \in \mathcal{F}_p \times \mathcal{F}$ such that $F_p + F_a = F^{\mathcal{D}}$ verifies the constraint.

Thus we have the equivalence between Eq. (5) and the metric projection formulated as:

$$
\underset{F_p \in \mathcal{F}_p}{\text{minimize}} \quad \left\| F^{\mathcal{D}} - F_p \right\| \tag{6}
$$

If $\mathcal{F}_p$ is proximinal, the projection problem admits a solution, which we denote $F_p^\star$. Taking $F_a^\star = F^{\mathcal{D}} - F_p^\star$, we have $F_p^\star + F_a^\star = F^{\mathcal{D}}$, so that $(F_p^\star, F_a^\star)$ verifies the constraint of Eq. (5). Moreover, for any $(F_p, F_a)$ satisfying the constraint of Eq. (5), we have $F_p + F_a = F^{\mathcal{D}}$ by what was shown above, and $\|F_a\| = \|F^{\mathcal{D}} - F_p\| \geq \|F^{\mathcal{D}} - F_p^\star\|$ by definition of $F_p^\star$. This shows that $(F_p^\star, F_a^\star)$ is minimal.

Moreover, if $\mathcal{F}_p$ is a Chebyshev set, by uniqueness of the projection, $F_p \neq F_p^\star$ implies $\|F_a\| > \|F_a^\star\|$. Thus the minimal pair is unique.

![](images/42bfaf6089fae7be33d13db6e164762e3003bda51a6bdb3ece0cb7947d4b8c51.jpg)

# C PARAMETER ESTIMATION IN INCOMPLETE PHYSICAL MODELS

Classically, when a set $\mathcal{F}_p \subset \mathcal{F}$ summarizing the most important properties of a system is available, this gives a simplified model of the true dynamics, and the adopted problem is then to fit the trajectories using this model as well as possible, solving:

$$
\underset{F_p \in \mathcal{F}_p}{\text{minimize}} \quad \mathbb{E}_{X \sim \mathcal{D}}\, L\left(\widetilde{X}^{X_0}, X\right) \tag{7}
$$

$$
\text{subject to} \quad \forall g \in \mathcal{I}, \ \widetilde{X}_0^g = g \quad \text{and} \quad \forall t, \ \frac{\mathrm{d}\widetilde{X}_t^g}{\mathrm{d}t} = F_p(\widetilde{X}_t^g)
$$

where $L$ is a discrepancy measure between trajectories. Recall that $\widetilde{X}^{X_0}$ is the trajectory resulting from an ODE solver taking $X_0$ as initial condition. In other words, we try to find a function $F_p$ which gives trajectories as close as possible to those from the dataset. While estimating the function becomes easier, a residual part is then left unexplained, and this can be a non-negligible issue in at least two ways:

- When $F \notin \mathcal{F}_p$, the loss is strictly positive at the minimum. This means that reducing the space of functions $\mathcal{F}_p$ makes us lose in terms of accuracy.<sup>4</sup>
- The obtained function $F_p$ might not even be the most meaningful function from $\mathcal{F}_p$, as it would try to capture phenomena which are not explainable with functions in $\mathcal{F}_p$, thus giving the wrong bias to the estimated function. For example, if one considers a dampened periodic trajectory where only the period can be learned in $\mathcal{F}_p$ but not the dampening, the estimated period will account for the dampening and will thus be biased.

This is confirmed in Section 4 of the paper: the incomplete physical models augmented with APHYNITY yield different, and experimentally better, physical identification results than the physical models alone.

Let us compare our approach with this one on the linearized damped pendulum to show how estimates of physical parameters can differ. The equation is the following:

$$
\frac{\mathrm{d}^2\theta}{\mathrm{d}t^2} + \omega_0^2 \theta + \alpha \frac{\mathrm{d}\theta}{\mathrm{d}t} = 0
$$

We take the same notations as in the article and parametrize the simplified physical models as:

$$
F_p^a : X \mapsto \left(\frac{\mathrm{d}\theta}{\mathrm{d}t}, -a\theta\right)
$$

where $a > 0$ corresponds to $\omega_0^2$. The corresponding solution for an initial state $X_0$, which we denote $X^a$, can then be written explicitly as:

$$
\theta_t^a = \theta_0 \cos \sqrt{a}\, t
$$

Let us consider damped pendulum solutions $X$ written as:

$$
\theta_t = \theta_0 e^{-t} \cos t
$$

which corresponds to:

$$
F : X \mapsto \left(\frac{\mathrm{d}\theta}{\mathrm{d}t}, -2\left(\theta + \frac{\mathrm{d}\theta}{\mathrm{d}t}\right)\right)
$$

It is then easy to see that the estimate of $a$ with the physical model alone can be obtained by minimizing:

$$
\int_0^T \left| e^{-t} \cos t - \cos \sqrt{a}\, t \right|^2 \mathrm{d}t
$$

This expression depends on $T$; thus, depending on the chosen time interval and the way the integral is discretized, the minimization will almost always give biased estimates. In other words, the estimated value of $a$ will not give us the desired solution $t \mapsto \cos t$.
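The dependence on the horizon can be checked numerically. The following sketch is ours, not from the paper's code: it discretizes the integral above with a midpoint rule and grid-searches $a$ for two horizons $T$ (the helper names `fit_loss` and `estimate_a` are hypothetical).

```python
import math

def fit_loss(a, T, n=4000):
    """Midpoint-rule discretization of the integral of
    |e^{-t} cos t - cos(sqrt(a) t)|^2 over [0, T]."""
    dt = T / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        total += (math.exp(-t) * math.cos(t) - math.cos(math.sqrt(a) * t)) ** 2 * dt
    return total

def estimate_a(T):
    """Coarse grid search for the minimizer of the trajectory-fitting loss."""
    grid = [0.05 * k for k in range(1, 80)]  # candidate values of a in (0, 4)
    return min(grid, key=lambda a: fit_loss(a, T))

# The estimate of a depends on the chosen horizon T:
a_short, a_long = estimate_a(5.0), estimate_a(40.0)
```

Running such a search for different horizons generally yields different minimizers, which illustrates how the estimate is tied to the chosen time interval rather than to an intrinsic parameter of the system.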

On the other hand, for a given $a$, in the APHYNITY framework, the residual must be equal to:

$$
F_r^a : X \mapsto \left(0, (a - 2)\theta - 2\frac{\mathrm{d}\theta}{\mathrm{d}t}\right)
$$

in order to satisfy the fitting constraint. Here $a$ corresponds to $1 + \omega_0^2$, not to $\omega_0^2$ as in the simplified case. Minimizing its norm, we obtain $a = 2$, which gives us the desired solution:

$$
\theta_t = \theta_0 e^{-t} \cos t
$$

with the right period.

# D DISCUSSION ON SUPERVISION OVER DERIVATIVES

In order to find the appropriate decomposition $(F_p, F_a)$, we use a trajectory-based error by solving:

$$
\underset{F_p \in \mathcal{F}_p, F_a \in \mathcal{F}}{\text{minimize}} \quad \|F_a\|
$$

$$
\text{subject to} \quad \forall g \in \mathcal{I}, \ \widetilde{X}_0^g = g \quad \text{and} \quad \forall t, \ \frac{\mathrm{d}\widetilde{X}_t^g}{\mathrm{d}t} = (F_p + F_a)(\widetilde{X}_t^g), \tag{8}
$$

$$
\forall X \in \mathcal{D}, \ L(X, \widetilde{X}^{X_0}) = 0
$$

In the continuous setting where the data is available at all times $t$, this problem is in fact equivalent to the following one:

$$
\underset{F_p \in \mathcal{F}_p}{\text{minimize}} \quad \mathbb{E}_{X \sim \mathcal{D}} \int \left\| \frac{\mathrm{d}X_t}{\mathrm{d}t} - F_p(X_t) \right\| \mathrm{d}t \tag{9}
$$

where the supervision is done directly over derivatives, obtained through finite-difference schemes. This echoes the proof in Appendix B, where $F$ can be reconstructed from the continuous data.

However, in practice, data is only available at discrete times with a certain time resolution. While Eq. (9) is indeed equivalent to Eq. (8) in the continuous setting, in the practical discrete one the equivalence no longer holds: in Eq. (8) the error is controlled over integrated trajectories, while in Eq. (9) the supervision is over the approximate derivatives of the trajectories from the dataset. We argue that the trajectory-based approach is more flexible and more robust, for the following reasons:

- In Eq. (8), if $F_a$ is appropriately parameterized, it is possible to perfectly fit the data trajectories at the sampled points.
- The use of finite-difference schemes to estimate $F$, as is done in Eq. (9), necessarily induces a non-zero discretization error.
- This discretization error makes the generated trajectories diverge exponentially from the true ones.

This last point is quite important, especially when time sampling is sparse (though we do observe this adverse effect empirically in our experiments, even with relatively finely time-sampled trajectories). The following gives a heuristic argument as to why this is the case. Let $\widetilde{F} = F + \epsilon$ be the function estimated from the sampled points, with an error $\epsilon$ such that $\|\epsilon\|_\infty \leq \alpha$. Denoting $\widetilde{X}$ the corresponding trajectory generated by $\widetilde{F}$, we then have, for all $X \in \mathcal{D}$:

$$
\forall t, \ \frac{\mathrm{d}(X - \widetilde{X})_t}{\mathrm{d}t} = F(X_t) - F(\widetilde{X}_t) - \epsilon(\widetilde{X}_t)
$$

Integrating over $[0, T]$ and using the triangle inequality as well as the mean value inequality, supposing that $F$ has uniformly bounded spatial derivatives:

$$
\forall t \in [0, T], \ \|(X - \widetilde{X})_t\| \leq \|\nabla F\|_\infty \int_0^t \|X_s - \widetilde{X}_s\|\, \mathrm{d}s + \alpha t
$$

which, using a variant of the Gronwall lemma, gives us the inequality:

$$
\forall t \in [0, T], \ \|X_t - \widetilde{X}_t\| \leq \frac{\alpha}{\|\nabla F\|_\infty} \left(\exp(\|\nabla F\|_\infty t) - 1\right)
$$

When $\alpha$ tends to 0, we recover the true trajectories $X$. However, as $\alpha$ is bounded away from 0 by the available temporal resolution, this inequality gives a rough estimate of the way $\widetilde{X}$ diverges from them, and it can be an equality in many cases. This exponential behaviour explains our choice of a trajectory-based optimization.
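The exponential divergence can be observed even on a toy system. A minimal sketch (our own example, not from the paper: scalar dynamics $F(x) = x$ so that $\|\nabla F\|_\infty = 1$, a constant perturbation of amplitude $\alpha$, and explicit Euler integration):

```python
import math

def euler_rollout(f, x0, dt, steps):
    """Explicit Euler integration of dx/dt = f(x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return xs

alpha = 0.01                     # amplitude of the estimation error epsilon
F = lambda x: x                  # true dynamics, with ||grad F||_inf = 1
F_tilde = lambda x: x + alpha    # estimated dynamics F + epsilon

xs = euler_rollout(F, 1.0, 0.001, 5000)        # t in [0, 5]
xs_tilde = euler_rollout(F_tilde, 1.0, 0.001, 5000)

gap = abs(xs[-1] - xs_tilde[-1])               # empirical divergence at t = 5
bound = alpha * (math.exp(5.0) - 1.0)          # Gronwall-type bound alpha * (e^t - 1)
```

For this linear example the gap at $t = 5$ sits just below the bound $\alpha(e^t - 1)$: a perturbation of size $0.01$ on the derivative has grown into a trajectory error two orders of magnitude larger.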
# E IMPLEMENTATION DETAILS

We describe here the three use cases studied in the paper for validating APHYNITY. All experiments are implemented with PyTorch (Paszke et al., 2019), and the differentiable ODE solvers with the adjoint method are those implemented in torchdiffeq.<sup>5</sup>

# E.1 REACTION-DIFFUSION EQUATIONS

The system is driven by a FitzHugh-Nagumo type PDE (Klaasen & Troy, 1984)

$$
\frac{\partial u}{\partial t} = a\Delta u + R_u(u, v; k), \qquad \frac{\partial v}{\partial t} = b\Delta v + R_v(u, v)
$$

where $a$ and $b$ are respectively the diffusion coefficients of $u$ and $v$, and $\Delta$ is the Laplace operator. The local reaction terms are $R_u(u, v; k) = u - u^3 - k - v$ and $R_v(u, v) = u - v$.

The state $X = (u, v)$ is defined over a compact rectangular domain $\Omega = [-1, 1]^2$ with periodic boundary conditions. $\Omega$ is spatially discretized with a $32 \times 32$ 2D uniform square mesh grid. The periodic boundary condition is implemented with circular padding around the borders. $\Delta$ is systematically estimated with a $3 \times 3$ discrete Laplace operator.
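Concretely, the discretized right-hand side can be sketched in a few lines of plain Python. This is illustrative only: the paper's implementation uses PyTorch convolutions with circular padding, and we omit the grid-spacing factor in the Laplacian.

```python
def laplacian(field):
    """Five-point discrete Laplacian with circular (periodic) padding.

    The 1/dx^2 grid-spacing factor is omitted for simplicity.
    """
    n, m = len(field), len(field[0])
    return [[field[(i - 1) % n][j] + field[(i + 1) % n][j]
             + field[i][(j - 1) % m] + field[i][(j + 1) % m]
             - 4.0 * field[i][j]
             for j in range(m)] for i in range(n)]

def fitzhugh_nagumo_rhs(u, v, a=1e-3, b=5e-3, k=5e-3):
    """Time derivatives (du/dt, dv/dt) of the reaction-diffusion system."""
    lu, lv = laplacian(u), laplacian(v)
    n, m = len(u), len(u[0])
    du = [[a * lu[i][j] + u[i][j] - u[i][j] ** 3 - k - v[i][j]
           for j in range(m)] for i in range(n)]
    dv = [[b * lv[i][j] + u[i][j] - v[i][j]
           for j in range(m)] for i in range(n)]
    return du, dv

# On a constant field the diffusion term vanishes and only the reaction remains
u0 = [[0.5] * 4 for _ in range(4)]
v0 = [[0.5] * 4 for _ in range(4)]
du, dv = fitzhugh_nagumo_rhs(u0, v0)
```

The circular `%`-indexing plays the role of the circular padding described above; on a constant field the Laplacian is exactly zero, which gives a quick sanity check of the diffusion term.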

Dataset Starting from a randomly sampled initial state $X_{\mathrm{init}} \in [0, 1]^{2 \times 32 \times 32}$, we generate states by integrating the true PDE with coefficients fixed across the dataset ($a = 1 \times 10^{-3}$, $b = 5 \times 10^{-3}$, $k = 5 \times 10^{-3}$). We first simulate high time-resolution sequences ($\delta t_{\mathrm{sim}} = 0.001$) with an explicit finite-difference method. We then extract states every $\delta t_{\mathrm{data}} = 0.1$ to construct our low time-resolution datasets.

We set the time of the random initial state to $t = -0.5$ and the time horizon to $t = 2.5$. 1920 sequences are generated, with 1600 for training/validation and 320 for test. We take the state at $t = 0$ as $X_0$ and predict the sequence until the horizon (equivalent to 25 time steps) in all reaction-diffusion experiments. Note that the sub-sequences with $t < 0$ are reserved for the extensive experiments in Appendix G.1.

Neural network architectures Our $F_a$ here is a 3-layer convolutional network (ConvNet). The two input channels are $(u, v)$ and the two output channels are $(\frac{\partial u}{\partial t}, \frac{\partial v}{\partial t})$. The purely data-driven Neural ODE uses such a ConvNet as its $F$. The detailed architecture is provided in Table 2. The estimated physical parameters $\theta_p$ in $F_p$ are simply a trainable vector $(a, b) \in \mathbb{R}_+^2$ or $(a, b, k) \in \mathbb{R}_+^3$.

Table 2: ConvNet architecture in reaction-diffusion and wave equation experiments, used as data-driven derivative operator in APHYNITY and Neural ODE (Chen et al., 2018).

<table><tr><td>Module</td><td>Specification</td></tr><tr><td>2D Conv.</td><td>3 × 3 kernel, 2 input channels, 16 output channels, 1 pixel zero padding</td></tr><tr><td>2D Batch Norm.</td><td>No average tracking</td></tr><tr><td>ReLU activation</td><td>—</td></tr><tr><td>2D Conv.</td><td>3 × 3 kernel, 16 input channels, 16 output channels, 1 pixel zero padding</td></tr><tr><td>2D Batch Norm.</td><td>No average tracking</td></tr><tr><td>ReLU activation</td><td>—</td></tr><tr><td>2D Conv.</td><td>3 × 3 kernel, 16 input channels, 2 output channels, 1 pixel zero padding</td></tr></table>

Optimization hyperparameters We apply the same hyperparameters for all the reaction-diffusion experiments: $N_{\mathrm{iter}} = 1$, $\lambda_0 = 1$, $\tau_1 = 1 \times 10^{-3}$, $\tau_2 = 1 \times 10^3$.

# E.2 WAVE EQUATIONS

The damped wave equation is defined by

$$
\frac{\partial^2 w}{\partial t^2} - c^2 \Delta w + k \frac{\partial w}{\partial t} = 0
$$

where $c$ is the wave speed and $k$ is the damping coefficient. The state is $X = (w, \frac{\partial w}{\partial t})$.
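Writing the second-order equation as a first-order system in the state $X = (w, \frac{\partial w}{\partial t})$ can be sketched as follows. This is our own 1D illustration (a three-point Laplacian with edge replication for the null Neumann boundary), not the paper's 2D implementation:

```python
def damped_wave_rhs(w, w_t, c=330.0, k=50.0, dx=1.0):
    """First-order form of the damped wave equation in one spatial dimension.

    State X = (w, dw/dt); returns (dw/dt, d2w/dt2) with
    d2w/dt2 = c^2 * Lap(w) - k * dw/dt.
    Edge replication gives a zero normal derivative (null Neumann) at the borders.
    """
    n = len(w)
    lap = []
    for i in range(n):
        left = w[i - 1] if i > 0 else w[i]
        right = w[i + 1] if i < n - 1 else w[i]
        lap.append((left - 2.0 * w[i] + right) / dx ** 2)
    d2w = [c ** 2 * lap[i] - k * w_t[i] for i in range(n)]
    return list(w_t), d2w

# A single bump at rest: dw/dt is zero, so only the curvature drives d2w/dt2
w0 = [0.0, 0.0, 1.0, 0.0, 0.0]
dw, d2w = damped_wave_rhs(w0, [0.0] * 5, c=1.0, k=0.5)
```

The first returned component is simply $\frac{\partial w}{\partial t}$ itself, which is what makes the second-order PDE integrable by the first-order ODE solvers used throughout the paper.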

We consider a compact spatial domain $\Omega$ represented as a $64 \times 64$ grid and discretize the Laplacian operator similarly: $\Delta$ is implemented with a $5 \times 5$ discrete Laplace operator in simulation, whereas a $3 \times 3$ Laplace operator is used in the experiments. Null Neumann boundary conditions are imposed for generation.

Dataset $\delta t$ was set to 0.001 to respect the Courant number and provide stable integration. The simulation was integrated using a 4th-order finite-difference Runge-Kutta scheme for 300 steps from an initial Gaussian state, i.e. for every sequence, at $t = 0$ we have:

$$
w(x, y, t = 0) = C \times \exp\left(-\frac{(x - x_0)^2 + (y - y_0)^2}{\sigma^2}\right) \tag{10}
$$

The amplitude $C$ is fixed to 1, and $(x_0, y_0) = (32, 32)$ so that the Gaussian curve is centered for all sequences. However, $\sigma$ is different for each sequence, uniformly sampled in $[10, 100]$. The same $\delta t$ is used for train and test. All initial conditions are Gaussian with varying widths. 250 sequences are generated: 200 are used for training, while 50 are reserved as a test set. In the main paper setting, $c = 330$ and $k = 50$. As in the reaction-diffusion case, the algorithm takes as input a state $X_{t_0} = (w, \frac{\partial w}{\partial t})(t_0)$ and predicts all states from $t_0 + \delta t$ up to $t_0 + 25\delta t$.

Neural network architectures The neural network for $F_a$ is a 3-layer convolutional neural network with the same architecture as in Table 2. For $F_p$, the parameter(s) to be estimated are either a scalar $c \in \mathbb{R}_+$ or a vector $(c, k) \in \mathbb{R}_+^2$. Similarly, Neural ODE networks are built as presented in Table 2.

Optimization hyperparameters We use the same hyperparameters for all the wave equation experiments: $N_{\mathrm{iter}} = 3$, $\lambda_0 = 1$, $\tau_1 = 1 \times 10^{-4}$, $\tau_2 = 1 \times 10^2$.

# E.3 DAMPED PENDULUM

We consider the non-linear damped pendulum problem, governed by the ODE

$$
\frac{\mathrm{d}^2\theta}{\mathrm{d}t^2} + \omega_0^2 \sin\theta + \alpha \frac{\mathrm{d}\theta}{\mathrm{d}t} = 0
$$

where $\theta(t)$ is the angle, $\omega_0 = \frac{2\pi}{T_0}$ is the proper pulsation ($T_0$ being the period) and $\alpha$ is the damping coefficient. With the state $X = (\theta, \frac{\mathrm{d}\theta}{\mathrm{d}t})$, the ODE can be written as $\frac{\mathrm{d}X_t}{\mathrm{d}t} = F(X_t)$ with $F : X \mapsto \left(\frac{\mathrm{d}\theta}{\mathrm{d}t}, -\omega_0^2 \sin\theta - \alpha\frac{\mathrm{d}\theta}{\mathrm{d}t}\right)$.
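The first-order form of $F$ and a short rollout can be sketched directly. This is illustrative: a fixed-step classical RK4 integrator stands in for the adaptive DOPRI5 solver used for the actual dataset, and no observation noise is added.

```python
import math

def pendulum_rhs(state, omega0=2 * math.pi / 12, alpha=0.2):
    """F(X) for the damped pendulum, with X = (theta, dtheta/dt)."""
    theta, dtheta = state
    return (dtheta, -omega0 ** 2 * math.sin(theta) - alpha * dtheta)

def rk4_step(f, state, dt):
    """One classical Runge-Kutta 4 step (fixed-step stand-in for DOPRI5)."""
    k1 = f(state)
    k2 = f((state[0] + 0.5 * dt * k1[0], state[1] + 0.5 * dt * k1[1]))
    k3 = f((state[0] + 0.5 * dt * k2[0], state[1] + 0.5 * dt * k2[1]))
    k4 = f((state[0] + dt * k3[0], state[1] + dt * k3[1]))
    return (state[0] + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# A 40-step trajectory with dt = 0.5, mirroring the dataset description
traj = [(1.0, 0.0)]
for _ in range(40):
    traj.append(rk4_step(pendulum_rhs, traj[-1], 0.5))
```

With $\alpha = 0.2$ the oscillation amplitude decays noticeably over the $[0, 20]$ interval, which is what makes this dataset harder than the frictionless case.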

Dataset For each train/validation/test split, we simulate a dataset with 25 trajectories of 40 timesteps (time interval $[0, 20]$, timestep $\delta t = 0.5$) with fixed ODE coefficients ($T_0 = 12$, $\alpha = 0.2$) and varying initial conditions. The simulation integrator is the Dormand-Prince Runge-Kutta method of order 4(5) (DOPRI5, Dormand & Prince, 1980). We also add a small amount of white Gaussian noise ($\sigma = 0.01$) to the state. Note that our pendulum dataset is much more challenging than the ideal frictionless pendulum considered in Greydanus et al. (2019).

Neural network architectures We detail in Table 3 the neural architectures used for the damped pendulum experiments. All data-driven augmentations approximating the mapping $X_t \mapsto F(X_t)$ are implemented by multi-layer perceptrons (MLP) with 3 layers of 200 neurons and ReLU activation functions (except at the last layer: linear activation). The Hamiltonian (Greydanus et al., 2019; Toth et al., 2020) is implemented by an MLP that takes the state $X_t$ and outputs a scalar estimation of the Hamiltonian $\mathcal{H}$ of the system; the derivative is then computed by an in-graph gradient of $\mathcal{H}$ with respect to the input: $F(X_t) = \left(\frac{\partial \mathcal{H}}{\partial(\mathrm{d}\theta/\mathrm{d}t)}, -\frac{\partial \mathcal{H}}{\partial \theta}\right)$.

Table 3: Neural network architectures for the damped pendulum experiments. n/a corresponds to non-applicable cases.

<table><tr><td>Method</td><td>Physical model</td><td>Data-driven model</td></tr><tr><td>Neural ODE</td><td>n/a</td><td>MLP(in=2, units=200, layers=3, out=2)</td></tr><tr><td>Hamiltonian</td><td>MLP(in=2, units=200, layers=3, out=1)</td><td>n/a</td></tr><tr><td>APHYNITY Hamiltonian</td><td>MLP(in=2, units=200, layers=3, out=1)</td><td>MLP(in=2, units=200, layers=3, out=2)</td></tr><tr><td>Param ODE (ω0)</td><td>1 trainable parameter ω0</td><td>n/a</td></tr><tr><td>APHYNITY Param ODE (ω0)</td><td>1 trainable parameter ω0</td><td>MLP(in=2, units=200, layers=3, out=2)</td></tr><tr><td>Param ODE (ω0, α)</td><td>2 trainable parameters ω0, α</td><td>n/a</td></tr><tr><td>APHYNITY Param ODE (ω0, α)</td><td>2 trainable parameters ω0, α</td><td>MLP(in=2, units=200, layers=3, out=2)</td></tr></table>
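The in-graph gradient used for the Hamiltonian models can be illustrated without autograd. This is our own dependency-free stand-in: a central finite difference replaces PyTorch's automatic differentiation, and an analytic frictionless-pendulum Hamiltonian replaces the learned MLP.

```python
import math

def hamiltonian_vector_field(H, state, eps=1e-6):
    """Approximate F(X) = (dH/dp, -dH/dtheta) via central finite differences.

    In the actual implementation this gradient is an in-graph autograd
    gradient of the scalar network output; the finite difference here is
    only a stand-in so the sketch stays dependency-free.
    """
    theta, p = state
    dH_dtheta = (H(theta + eps, p) - H(theta - eps, p)) / (2.0 * eps)
    dH_dp = (H(theta, p + eps) - H(theta, p - eps)) / (2.0 * eps)
    return (dH_dp, -dH_dtheta)

# Analytic Hamiltonian of the frictionless pendulum, in place of a learned MLP
omega0 = 2 * math.pi / 12
H = lambda theta, p: 0.5 * p ** 2 + omega0 ** 2 * (1.0 - math.cos(theta))

F_X = hamiltonian_vector_field(H, (0.3, 0.1))
```

For this analytic $\mathcal{H}$ the recovered vector field matches the pendulum dynamics, $F(X) = (p, -\omega_0^2 \sin\theta)$, up to the finite-difference error; a purely Hamiltonian model cannot, by construction, represent the damping term, which is what the APHYNITY augmentation adds.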

Optimization hyperparameters The hyperparameters of the APHYNITY optimization algorithm ($N_{\mathrm{iter}}$, $\lambda_0$, $\tau_1$, $\tau_2$) were cross-validated on the validation set and are shown in Table 4. All models were trained for a maximum of 5000 steps with early stopping.

Table 4: Hyperparameters of the damped pendulum experiments.

<table><tr><td>Method</td><td>Niter</td><td>λ0</td><td>τ1</td><td>τ2</td></tr><tr><td>APHYNITY Hamiltonian</td><td>5</td><td>1</td><td>1</td><td>0.1</td></tr><tr><td>APHYNITY Param ODE (ω0)</td><td>5</td><td>1</td><td>1</td><td>10</td></tr><tr><td>APHYNITY Param ODE (ω0, α)</td><td>5</td><td>1000</td><td>1</td><td>100</td></tr></table>

# F ABLATION STUDY

We conduct ablation studies to show the effectiveness of APHYNITY's adaptive optimization and trajectory-based learning scheme.

# F.1 ABLATION TO VANILLA MB/ML COOPERATION

In Table 5, we consider the ablation case with the vanilla augmentation scheme found in Le Guen & Thome (2020); Wang et al. (2019); Mehta et al. (2020), which does not offer any proper decomposition guarantee. We observe that the APHYNITY cooperation scheme outperforms this vanilla scheme in all cases, both in terms of forecasting performance (e.g. log MSE = -0.35 vs. -3.97 for the Hamiltonian in the pendulum case) and parameter identification (e.g. %Err Param = 8.4 vs. 2.3 for Param PDE $(a, b)$ in reaction-diffusion). This confirms the crucial benefit of APHYNITY's principled decomposition scheme.

Table 5: Ablation study comparing APHYNITY to the vanilla augmentation scheme (Wang et al., 2019; Mehta et al., 2020) for the reaction-diffusion equation, wave equation and damped pendulum.

<table><tr><td>Dataset</td><td>Method</td><td>log MSE</td><td>%Err Param.</td><td>$\|F_a\|^2$</td></tr><tr><td rowspan="6">Reaction-diffusion</td><td>Param. PDE (a, b) with vanilla aug.</td><td>-4.56±0.52</td><td>8.4</td><td>(7.5±1.4)e1</td></tr><tr><td>APHYNITY Param. PDE (a, b)</td><td>-5.10±0.21</td><td>2.3</td><td>(6.7±0.4)e1</td></tr><tr><td>Param. PDE (a, b, k) with vanilla aug.</td><td>-8.04±0.03</td><td>25.4</td><td>(1.5±0.2)e-2</td></tr><tr><td>APHYNITY Param. PDE (a, b, k)</td><td>-9.35±0.02</td><td>0.096</td><td>(1.5±0.4)e-6</td></tr><tr><td>True PDE with vanilla aug.</td><td>-8.12±0.05</td><td>n/a</td><td>(6.1±2.3)e-4</td></tr><tr><td>APHYNITY True PDE</td><td>-9.17±0.02</td><td>n/a</td><td>(1.4±0.8)e-7</td></tr><tr><td rowspan="4">Wave equation</td><td>Param PDE (c) with vanilla aug.</td><td>-3.90±0.27</td><td>0.51</td><td>88.66</td></tr><tr><td>APHYNITY Param PDE (c)</td><td>-4.64±0.25</td><td>0.31</td><td>71.0</td></tr><tr><td>Param PDE (c, k) with vanilla aug.</td><td>-5.96±0.10</td><td>0.71</td><td>25.1</td></tr><tr><td>APHYNITY Param PDE (c, k)</td><td>-6.09±0.28</td><td>0.70</td><td>4.54</td></tr><tr><td rowspan="8">Damped pendulum</td><td>Hamiltonian with vanilla aug.</td><td>-0.35±0.1</td><td>n/a</td><td>837±117</td></tr><tr><td>APHYNITY Hamiltonian</td><td>-3.97±1.2</td><td>n/a</td><td>623±68</td></tr><tr><td>Param ODE (ω0) with vanilla aug.</td><td>-7.02±1.7</td><td>4.5</td><td>148±49</td></tr><tr><td>APHYNITY Param ODE (ω0)</td><td>-7.86±0.6</td><td>4.0</td><td>132±11</td></tr><tr><td>Param ODE (ω0, α) with vanilla aug.</td><td>-7.60±0.6</td><td>4.65</td><td>35.5±6.2</td></tr><tr><td>APHYNITY Param ODE (ω0, α)</td><td>-8.31±0.3</td><td>0.39</td><td>8.5±2.0</td></tr><tr><td>Augmented True ODE with vanilla aug.</td><td>-8.40±0.2</td><td>n/a</td><td>3.4±0.8</td></tr><tr><td>APHYNITY True ODE</td><td>-8.44±0.2</td><td>n/a</td><td>2.3±0.4</td></tr></table>
634
+
635
+ # F.2 DETAILED ABLATION STUDY
636
+
637
+ We also conduct two other ablations in Table 6:
638
+
639
+ - derivative supervision: in which $F_{p} + F_{a}$ is trained with supervision over approximated derivatives on the ground-truth trajectory, as performed in Greydanus et al. (2019); Cranmer et al. (2020). More precisely, APHYNITY's $\mathcal{L}_{\mathrm{traj}}$ is here replaced with $\mathcal{L}_{\mathrm{deriv}} = \left\| \frac{\mathrm{d}X_{t}}{\mathrm{d}t} - F(X_{t}) \right\|$ as in Eq. (9), where $\frac{\mathrm{d}X_{t}}{\mathrm{d}t}$ is approximated by finite differences on $X_{t}$.
640
+ - non-adaptive optim.: in which we train APHYNITY by minimizing $\| F_{a}\|$ without the adaptive optimization of $\lambda$ shown in Algorithm 1. This case is equivalent to $\lambda = 1, \tau_{2} = 0$ .
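The derivative-supervision baseline can be sketched as follows (a squared-error variant with forward finite differences; the function name and shapes are our illustrative choices):

```python
import numpy as np

def deriv_loss(X, F, dt):
    """L_deriv-style loss: match F(X_t) to finite-difference derivatives
    of the ground-truth trajectory X of shape [T, d]."""
    dX = (X[1:] - X[:-1]) / dt            # forward differences, [T-1, d]
    return float(np.mean(np.sum((dX - F(X[:-1])) ** 2, axis=-1)))
```

On a constant-velocity trajectory the loss vanishes for the matching constant vector field, which is a quick sanity check of the discretization.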
641
+
642
+ We highlight the importance of using a principled adaptive optimization algorithm (the APHYNITY algorithm described in the paper) over a non-adaptive one: for example, in the reaction-diffusion case, log MSE $= -4.55$ vs. $-5.10$ for Param PDE $(a,b)$. Finally, when the supervision occurs on the derivative, both forecasting and parameter identification results are systematically worse than with APHYNITY's trajectory-based approach: for example, log MSE $= -1.16$ vs. $-4.64$ for Param PDE $(c)$ in the wave equation. This confirms the good properties of the APHYNITY training scheme.
643
+
644
+ Table 6: Detailed ablation study on supervision and optimization for the reaction-diffusion equation, wave equation and damped pendulum.
645
+
646
+ <table><tr><td>Dataset</td><td>Method</td><td>log MSE</td><td>%Err Param.</td><td>\( \| F_a\|^2 \)</td></tr><tr><td rowspan="9">Reaction-diffusion</td><td>Augmented Param. PDE (a, b) derivative supervision</td><td>-4.42±0.25</td><td>12.6</td><td>(6.8±0.6)e1</td></tr><tr><td>Augmented Param. PDE (a, b) non-adaptive optim.</td><td>-4.55±0.11</td><td>7.5</td><td>(7.6±1.0)e1</td></tr><tr><td>APHYNITY Param. PDE (a, b)</td><td>-5.10±0.21</td><td>2.3</td><td>(6.7±0.4)e1</td></tr><tr><td>Augmented Param. PDE (a, b, k) derivative supervision</td><td>-4.90±0.06</td><td>11.7</td><td>(1.9±0.3)e-1</td></tr><tr><td>Augmented Param. PDE (a, b, k) non-adaptive optim.</td><td>-9.10±0.02</td><td>0.21</td><td>(5.5±2.9)e-7</td></tr><tr><td>APHYNITY Param. PDE (a, b, k)</td><td>-9.35±0.02</td><td>0.096</td><td>(1.5±0.4)e-6</td></tr><tr><td>Augmented True PDE derivative supervision</td><td>-6.03±0.01</td><td>n/a</td><td>(3.1±0.8)e-3</td></tr><tr><td>Augmented True PDE non-adaptive optim.</td><td>-9.01±0.01</td><td>n/a</td><td>(1.5±0.8)e-6</td></tr><tr><td>APHYNITY True PDE</td><td>-9.17±0.02</td><td>n/a</td><td>(1.4±0.8)e-7</td></tr><tr><td rowspan="9">Wave equation</td><td>Augmented Param PDE (c) derivative supervision</td><td>-1.16±0.48</td><td>12.1</td><td>0.00024</td></tr><tr><td>Augmented Param PDE (c) non-adaptive optim.</td><td>-2.57±0.21</td><td>3.1</td><td>43.6</td></tr><tr><td>APHYNITY Param PDE (c)</td><td>-4.64±0.25</td><td>0.31</td><td>71.0</td></tr><tr><td>Augmented Param PDE (c, k) derivative supervision</td><td>-4.19±0.36</td><td>7.2</td><td>0.00012</td></tr><tr><td>Augmented Param PDE (c, k) non-adaptive optim.</td><td>-4.93±0.51</td><td>1.32</td><td>0.054</td></tr><tr><td>APHYNITY Param PDE (c, k)</td><td>-6.09±0.28</td><td>0.70</td><td>4.54</td></tr><tr><td>Augmented True PDE derivative supervision</td><td>-4.42 ± 0.33</td><td>n/a</td><td>6.02e-5</td></tr><tr><td>Augmented True PDE non-adaptive 
optim.</td><td>-4.97±0.49</td><td>n/a</td><td>0.23</td></tr><tr><td>APHYNITY True PDE</td><td>-5.24±0.45</td><td>n/a</td><td>0.14</td></tr><tr><td rowspan="12">Damped pendulum</td><td>Augmented Hamiltonian derivative supervision</td><td>-0.83±0.3</td><td>n/a</td><td>642±121</td></tr><tr><td>Augmented Hamiltonian non-adaptive optim.</td><td>-0.49±0.58</td><td>n/a</td><td>165±30</td></tr><tr><td>APHYNITY Hamiltonian</td><td>-3.97±1.2</td><td>n/a</td><td>623±68</td></tr><tr><td>Augmented Param ODE (ω0) derivative supervision</td><td>-1.02±0.04</td><td>5.8</td><td>136±13</td></tr><tr><td>Augmented Param ODE (ω0) non-adaptive optim.</td><td>-4.30±1.3</td><td>4.4</td><td>90.4±27</td></tr><tr><td>APHYNITY Param ODE (ω0)</td><td>-7.86±0.6</td><td>4.0</td><td>132±11</td></tr><tr><td>Augmented Param ODE (ω0, α) derivative supervision</td><td>-2.61±0.2</td><td>5.0</td><td>3.2±1.7</td></tr><tr><td>Augmented Param ODE (ω0, α) non-adaptive optim.</td><td>-7.69±1.3</td><td>1.65</td><td>4.8±7.7</td></tr><tr><td>APHYNITY Param ODE (ω0, α)</td><td>-8.31±0.3</td><td>0.39</td><td>8.5±2.0</td></tr><tr><td>Augmented True ODE derivative supervision</td><td>-2.14±0.3</td><td>n/a</td><td>4.1±0.6</td></tr><tr><td>Augmented True ODE non-adaptive optim.</td><td>-8.34±0.4</td><td>n/a</td><td>1.4±0.3</td></tr><tr><td>APHYNITY True ODE</td><td>-8.44±0.2</td><td>n/a</td><td>2.3±0.4</td></tr></table>
647
+
648
+ # G ADDITIONAL EXPERIMENTS
649
+
650
+ # G.1 REACTION-DIFFUSION SYSTEMS WITH VARYING DIFFUSION PARAMETERS
651
+
652
+ We conduct an extensive evaluation on a setting with varying diffusion parameters for reaction-diffusion equations. The only varying parameters are diffusion coefficients, i.e. individual $a$ and $b$ for each sequence. We randomly sample $a \in [1 \times 10^{-3}, 2 \times 10^{-3}]$ and $b \in [3 \times 10^{-3}, 7 \times 10^{-3}]$ . $k$ is still fixed to $5 \times 10^{-3}$ across the dataset.
653
+
654
+ In order to estimate $a$ and $b$ for each sequence, we use a ConvNet encoder $E$ to estimate the parameters from 5 reserved frames ($t < 0$). The architecture of the encoder $E$ is similar to the one in Table 2, except that $E$ takes 5 frames (10 channels) as input and outputs a vector of estimates $(\tilde{a},\tilde{b})$ after applying a sigmoid activation scaled by $1\times 10^{-2}$ (to avoid possible divergence). For the baseline Neural ODE, we concatenate $a$ and $b$ to each sequence as two channels.
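The scaled sigmoid applied to the encoder output can be sketched as a standalone function (a hypothetical helper; in the setup above it is simply the final activation of $E$):

```python
import numpy as np

def scaled_sigmoid(logits, scale=1e-2):
    """Squash raw encoder outputs into (0, scale), as used for the
    diffusion estimates (a~, b~) to avoid possible divergence."""
    return scale / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
```

The bound keeps estimated diffusion coefficients in a physically plausible range regardless of the raw logit magnitude.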
655
+
656
+ In Table 7, we observe that combining data-driven and physical components outperforms the purely data-driven model. When applying APHYNITY to Param PDE $(a,b)$, the prediction precision is significantly improved (log MSE: -1.32 vs. -4.32), with the estimation errors of $a$ and $b$ reduced from $55.6\%$ and $54.1\%$ to $11.8\%$ and $18.7\%$, respectively. For complete physics cases, parameter estimation also improves for Param PDE $(a,b,k)$: the error on $b$ drops by over $60\%$ (3.10 vs. 1.23) and the errors on $a$ and $k$ by $10\%$ to $20\%$ (resp. 1.55/0.59 vs. 1.29/0.39).
657
+
658
+ These extensive results support the same conclusion as the main article: APHYNITY improves both prediction precision and parameter estimation. The same decreasing tendency of $\| F_{a} \|$ is also confirmed.
659
+
660
+ Table 7: Results of the dataset of reaction-diffusion with varying $(a,b)$ . $k = 5 \times 10^{-3}$ is shared across the dataset.
661
+
662
+ <table><tr><td></td><td>Method</td><td>log MSE</td><td>%Err a</td><td>%Err b</td><td>%Err k</td><td>\( \left\| F_{a}\right\|^{2} \)</td></tr><tr><td>Data-driven</td><td>Neural ODE (Chen et al., 2018)</td><td>-3.61±0.07</td><td>n/a</td><td>n/a</td><td>n/a</td><td>n/a</td></tr><tr><td rowspan="2">Incomplete physics</td><td>Param PDE (a, b)</td><td>-1.32±0.02</td><td>55.6</td><td>54.1</td><td>n/a</td><td>n/a</td></tr><tr><td>APHYNITY Param PDE (a, b)</td><td>-4.32±0.32</td><td>11.8</td><td>18.7</td><td>n/a</td><td>(4.3±0.6)e1</td></tr><tr><td rowspan="4">Complete physics</td><td>Param PDE (a, b, k)</td><td>-5.54±0.38</td><td>1.55</td><td>3.10</td><td>0.59</td><td>n/a</td></tr><tr><td>APHYNITY Param PDE (a, b, k)</td><td>-5.72±0.25</td><td>1.29</td><td>1.23</td><td>0.39</td><td>(5.9±4.3)e-1</td></tr><tr><td>True PDE</td><td>-8.86±0.02</td><td>n/a</td><td>n/a</td><td>n/a</td><td>n/a</td></tr><tr><td>APHYNITY True PDE</td><td>-8.82±0.15</td><td>n/a</td><td>n/a</td><td>n/a</td><td>(1.8±0.6)e-5</td></tr></table>
663
+
664
+ # G.2 ADDITIONAL RESULTS FOR THE WAVE EQUATION
665
+
666
+ We conduct an experiment where each sequence is generated with a different wave celerity. This dataset is challenging because both $c$ and the initial conditions vary across sequences. For each simulated sequence, an initial condition is sampled as described previously, along with a wave celerity $c$ sampled uniformly in $[300, 400]$. Finally, each initial state is integrated with the same Runge-Kutta scheme. 200 such sequences are generated for training, while 50 are kept for testing.
667
+
668
+ For this experiment, we also use a ConvNet encoder to estimate the wave speed $c$ from 5 consecutive reserved states $(w, \frac{\partial w}{\partial t})$. The architecture of the encoder $E$ is the same as in Table 2 but with 10 input channels. Here also, $k$ is fixed for all sequences, with $k = 50$. The hyperparameters used in these experiments are the same as those described in Section E.2.
669
+
670
+ The results with multiple wave speeds $c$ in the dataset are consistent with those obtained when only one is considered. Indeed, while prediction performance is slightly hindered, parameter estimation remains accurate for both $c$ and $k$. This extension attests to the robustness and adaptability of our method in more complex settings. Finally, the purely data-driven Neural ODE fails to cope with the increased difficulty.
671
+
672
+ Table 8: Results for the damped wave equation when considering multiple $c$ sampled uniformly in [300, 400] in the dataset, $k$ is shared across all sequences and $k = 50$ .
673
+
674
+ <table><tr><td></td><td>Method</td><td>log MSE</td><td>%Error c</td><td>%Error k</td><td>\( \left\| F_{a}\right\|^{2} \)</td></tr><tr><td>Data-driven</td><td>Neural ODE</td><td>0.056±0.34</td><td>n/a</td><td>n/a</td><td>n/a</td></tr><tr><td rowspan="2">Incomplete physics</td><td>Param PDE (c)</td><td>-1.32±0.27</td><td>23.9</td><td>n/a</td><td>n/a</td></tr><tr><td>APHYNITY Param PDE (c)</td><td>-4.51±0.38</td><td>3.2</td><td>n/a</td><td>171</td></tr><tr><td rowspan="4">Complete physics</td><td>Param PDE (c,k)</td><td>-4.25±0.28</td><td>3.54</td><td>1.43</td><td>n/a</td></tr><tr><td>APHYNITY Param PDE (c,k)</td><td>-4.84±0.57</td><td>2.41</td><td>0.064</td><td>3.64</td></tr><tr><td>True PDE (c,k)</td><td>-4.51±0.29</td><td>n/a</td><td>n/a</td><td>n/a</td></tr><tr><td>APHYNITY True PDE (c,k)</td><td>-4.49±0.22</td><td>n/a</td><td>n/a</td><td>0.0005</td></tr></table>
675
+
676
+ # G.3 DAMPED PENDULUM WITH VARYING PARAMETERS
677
+
678
+ To extend the experiments conducted in the paper (section 4) with fixed parameters $(T_0 = 6, \alpha = 0.2)$ and varying initial conditions, we evaluate APHYNITY on a much more challenging dataset where we vary both the parameters $(T_0, \alpha)$ and the initial conditions between trajectories.
679
+
680
+ We simulate 500/50/50 trajectories for the train/valid/test sets, integrated with DOPRI5. For each trajectory, the period $T_{0}$ (resp. the damping coefficient $\alpha$) is sampled uniformly in the range [3, 10] (resp. [0, 0.5]).
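The data generation described above can be sketched as follows, assuming the standard damped pendulum ODE $\frac{\mathrm{d}^2\theta}{\mathrm{d}t^2} + \omega_0^2 \sin\theta + \alpha \frac{\mathrm{d}\theta}{\mathrm{d}t} = 0$ with $\omega_0 = 2\pi / T_0$, and a fixed-step RK4 integrator standing in for DOPRI5:

```python
import numpy as np

def rhs(state, omega0, alpha):
    """Damped pendulum: d2th/dt2 + omega0^2 sin(th) + alpha dth/dt = 0."""
    th, dth = state
    return np.array([dth, -omega0 ** 2 * np.sin(th) - alpha * dth])

def simulate(theta0, T0, alpha, dt=0.05, n_steps=40):
    """Fixed-step RK4 integration (stand-in for the DOPRI5 solver)."""
    omega0 = 2.0 * np.pi / T0
    x, traj = np.array([theta0, 0.0]), []
    for _ in range(n_steps + 1):
        traj.append(x)
        k1 = rhs(x, omega0, alpha)
        k2 = rhs(x + 0.5 * dt * k1, omega0, alpha)
        k3 = rhs(x + 0.5 * dt * k2, omega0, alpha)
        k4 = rhs(x + dt * k3, omega0, alpha)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return np.stack(traj)

rng = np.random.default_rng(0)
T0 = rng.uniform(3.0, 10.0)      # period, as sampled in the text
alpha = rng.uniform(0.0, 0.5)    # damping coefficient
traj = simulate(0.5, T0, alpha)  # one trajectory from rest at theta = 0.5
```

Starting from rest, the damped oscillation never exceeds its initial amplitude, which gives a cheap correctness check on the integrator.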
681
+
682
+ We train models that take the first 20 steps as input and predict the next 20 steps. To account for the varying ODE parameters between sequences, we use an encoder that estimates the parameters based
683
+
684
+ on the first 20 timesteps. In practice, we use a recurrent encoder composed of 1 layer of 128 GRU units. The output of the encoder is fed as additional input to the data-driven augmentation models and to an MLP with final softplus activations to estimate the physical parameters when necessary $(\omega_0 \in \mathbb{R}_+ \text{ for Param ODE } (\omega_0), (\omega_0, \alpha) \in \mathbb{R}_+^2 \text{ for Param ODE } (\omega_0, \alpha))$ .
685
+
686
+ In this varying ODE context, we also compare to the state-of-the-art univariate time series forecasting method N-Beats (Oreshkin et al., 2020).
687
+
688
+ Results shown in Table 9 are consistent with those presented in the paper. The pure data-driven models Neural ODE (Chen et al., 2018) and N-Beats (Oreshkin et al., 2020) fail to properly extrapolate the pendulum dynamics. Incomplete physical models (Hamiltonian and Param ODE $(\omega_0)$) are even worse since they do not account for friction. Augmenting them with APHYNITY significantly and consistently improves forecasting results and parameter identification.
689
+
690
+ Table 9: Forecasting and identification results on the damped pendulum dataset with different parameters for each sequence. log MSEs are computed over 20 predicted time-steps. For each level of incorporated physical knowledge, equivalent best results according to a Student t-test are shown in bold. n/a corresponds to non-applicable cases.
691
+
692
+ <table><tr><td></td><td>Method</td><td>log MSE</td><td>%Error T0</td><td>%Error α</td><td>|Fa|2</td></tr><tr><td rowspan="2">data-driven</td><td>Neural ODE (Chen et al., 2018)</td><td>-4.35±0.9</td><td>n/a</td><td>n/a</td><td>n/a</td></tr><tr><td>N-Beats (Oreshkin et al., 2020)</td><td>-4.57±0.5</td><td>n/a</td><td>n/a</td><td>n/a</td></tr><tr><td rowspan="4">Incomplete physics</td><td>Hamiltonian (Greydanus et al., 2019)</td><td>-1.31±0.4</td><td>n/a</td><td>n/a</td><td>n/a</td></tr><tr><td>APHYNITY Hamiltonian</td><td>-4.72±0.4</td><td>n/a</td><td>n/a</td><td>5.6±0.6</td></tr><tr><td>Param ODE (ω0)</td><td>-2.66±0.9</td><td>21.5±19</td><td>n/a</td><td>n/a</td></tr><tr><td>APHYNITY Param ODE (ω0)</td><td>-5.94±0.7</td><td>5.0±1.8</td><td>n/a</td><td>0.49±0.1</td></tr><tr><td rowspan="4">Complete physics</td><td>Param ODE (ω0, α)</td><td>-5.71±0.4</td><td>4.08±0.8</td><td>152±129</td><td>n/a</td></tr><tr><td>APHYNITY Param ODE (ω0, α)</td><td>-6.22±0.7</td><td>3.26±0.6</td><td>62±27</td><td>(5.39±0.1)e-10</td></tr><tr><td>True ODE</td><td>-8.58±0.1</td><td>n/a</td><td>n/a</td><td>n/a</td></tr><tr><td>APHYNITY True ODE</td><td>-8.58±0.1</td><td>n/a</td><td>n/a</td><td>(2.15±1.6)e-4</td></tr></table>
augmentingphysicalmodelswithdeepnetworksforcomplexdynamicsforecasting/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8a370e0b415ab27efab647fa3cf5331ea98cc41f0e64ce7109c7051f55178741
3
+ size 1099988
augmentingphysicalmodelswithdeepnetworksforcomplexdynamicsforecasting/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e843f942c18d2ed998b6d2d1b950c3e73ab88464f1f9b2bd59554d4e0b780330
3
+ size 989652
comixupsaliencyguidedjointmixupwithsupermodulardiversity/2aa797a7-462e-4ce1-b1dd-ff614c84111b_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b632f9f2d7591051e2b8dba26922a71f508d5754ddc1f04c9b90854ee518dec8
3
+ size 133073
comixupsaliencyguidedjointmixupwithsupermodulardiversity/2aa797a7-462e-4ce1-b1dd-ff614c84111b_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d8122c95dde1b5147cff6efc83623189aa8cdccf3562d4ec3a67c13577979ac8
3
+ size 148461
comixupsaliencyguidedjointmixupwithsupermodulardiversity/2aa797a7-462e-4ce1-b1dd-ff614c84111b_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cb4b499cae9b9910eabe7a600205b3b19dd351556e46bfd8eb1f978979126a92
3
+ size 5175942
comixupsaliencyguidedjointmixupwithsupermodulardiversity/full.md ADDED
@@ -0,0 +1,578 @@
1
+ # CO-MIXUP: SALIENCY GUIDED JOINT MIXUP WITH SUPERMODULAR DIVERSITY
2
+
3
+ Jang-Hyun Kim, Wonho Choo, Hosan Jeong, Hyun Oh Song
4
+
5
+ Department of Computer Science and Engineering, Seoul National University Neural Processing Research Center
6
+
7
+ {janghyun, wonho.choo, grazinglion, hyunoh}@mllab.snu.ac.kr
8
+
9
+ # ABSTRACT
10
+
11
+ While deep neural networks show great performance in fitting the training distribution, improving their generalization to the test distribution and their robustness to input perturbations remains a challenge. Although a number of mixup based augmentation strategies have been proposed to partially address them, it remains unclear as to how to best utilize the supervisory signal within each input data for mixup from the optimization perspective. We propose a new perspective on batch mixup and formulate the optimal construction of a batch of mixup data maximizing the data saliency measure of each individual mixup data and encouraging the supermodular diversity among the constructed mixup data. This leads to a novel discrete optimization problem minimizing the difference between submodular functions. We also propose an efficient modular approximation based iterative submodular minimization algorithm for efficient mixup computation per minibatch, suitable for minibatch based neural network training. Our experiments show the proposed method achieves state-of-the-art generalization, calibration, and weakly supervised localization results compared to other mixup methods. The source code is available at https://github.com/snu-mllab/Co-Mixup.
12
+
13
+ # 1 INTRODUCTION
14
+
15
+ Deep neural networks have been applied to a wide range of artificial intelligence tasks such as computer vision, natural language processing, and signal processing with remarkable performance (Ren et al., 2015; Devlin et al., 2018; Oord et al., 2016). However, it has been shown that neural networks have excessive representation capability and can even fit random data (Zhang et al., 2016). Due to these characteristics, the neural networks can easily overfit to training data and show a large generalization gap when tested on previously unseen data.
16
+
17
+ To improve the generalization performance of neural networks, a body of research has been proposed to develop regularizers based on priors or to augment the training data with task-dependent transforms (Bishop, 2006; Cubuk et al., 2019). Recently, a new task-independent data augmentation technique, called mixup, has been proposed (Zhang et al., 2018). The original mixup, called Input Mixup, linearly interpolates a given pair of input data and can be easily applied to various data and tasks, improving the generalization performance and robustness of neural networks. Other mixup methods, such as manifold mixup (Verma et al., 2019) or CutMix (Yun et al., 2019), have also been proposed, addressing different ways to mix a given pair of input data. Puzzle Mix (Kim et al., 2020) utilizes saliency information and local statistics to ensure that mixup data have rich supervisory signals.
18
+
19
+ However, these approaches only consider mixing a given random pair of input data and do not fully utilize the rich informative supervisory signal in training data including collection of object saliency, relative arrangement, etc. In this work, we simultaneously consider mix-matching different salient regions among all input data so that each generated mixup example accumulates as many salient regions from multiple input data as possible while ensuring
20
+
21
+ ![](images/d3a69431f275aaf0c0d3f8d448093690b3e27dec4c24e9bc94935a890c083312.jpg)
22
+ Figure 1: Example comparison of existing mixup methods and the proposed Co-Mixup. We provide more samples in Appendix H.
23
+
24
+ diversity among the generated mixup examples. To this end, we propose a novel optimization problem that maximizes the saliency measure of each individual mixup example while encouraging diversity among them collectively. This formulation results in a novel discrete submodular-supermodular objective. We also propose a practical modular approximation method for the supermodular term and present an efficient iterative submodular minimization algorithm suitable for minibatch-based mixup for neural network training. As illustrated in the Figure 1, while the proposed method, Co-Mixup, mix-matches the collection of salient regions utilizing inter-arrangements among input data, the existing methods do not consider the saliency information (Input Mixup & CutMix) or disassemble salient parts (Puzzle Mix).
25
+
26
+ We verify the performance of the proposed method by training classifiers on CIFAR-100, Tiny-ImageNet, ImageNet, and the Google commands dataset (Krizhevsky et al., 2009; Chrabaszcz et al., 2017; Deng et al., 2009; Warden, 2017). Our experiments show that models trained with Co-Mixup achieve state-of-the-art performance compared to other mixup baselines. In addition to the generalization experiment, we conduct weakly supervised object localization and robustness experiments and confirm that Co-Mixup outperforms the other mixup baselines.
27
+
28
+ # 2 RELATED WORKS
29
+
30
+ Mixup Data augmentation has been widely used to prevent deep neural networks from over-fitting to the training data (Bishop, 1995). The majority of conventional augmentation methods generate new data by applying transformations depending on the data type or the target task (Cubuk et al., 2019). Zhang et al. (2018) proposed mixup, which can be independently applied to various data types and tasks, and improves generalization and robustness of deep neural networks. Input mixup (Zhang et al., 2018) linearly interpolates between two input data and utilizes the mixed data with the corresponding soft label for training. Following this work, manifold mixup (Verma et al., 2019) applies the mixup in the hidden feature space, and CutMix (Yun et al., 2019) suggests a spatial copy and paste based mixup strategy on images. Guo et al. (2019) trains an additional neural network to optimize a mixing ratio. Puzzle Mix (Kim et al., 2020) proposes a mixup method based on saliency and local statistics of the given data. In this paper, we propose a discrete optimization-based mixup method simultaneously finding the best combination of collections of salient regions among all input data while encouraging diversity among the generated mixup examples.
31
+
32
+ Saliency The seminal work from Simonyan et al. (2013) generates a saliency map using a pre-trained neural network classifier without any additional training of the network. Following this work, measuring the saliency of data using neural networks has been studied to obtain a more precise saliency map (Zhao et al., 2015; Wang et al., 2015) or to reduce the saliency computation cost (Zhou et al., 2016; Selvaraju et al., 2017). Saliency information is widely applied to tasks in various domains, such as object segmentation or speech recognition (Jung and Kim, 2011; Kalinli and Narayanan, 2007).
33
+
34
+ Submodular-Supermodular optimization A submodular (supermodular) function is a set function with the diminishing (increasing) returns property (Narasimhan and Bilmes, 2005). It is known that any set function can be expressed as the sum of a submodular and a supermodular function (Lovasz, 1983), called a BP function. Various problems in machine learning can be naturally formulated as BP functions (Fujishige, 2005), but their minimization is known to be NP-hard (Lovasz, 1983). Therefore, approximate algorithms based on modular approximations of the submodular or supermodular terms have been developed (Iyer and Bilmes, 2012). Our formulation falls into the category of BP functions, consisting of a smoothness function within each mixed output (submodular) and a diversity function among the mixup outputs (supermodular).
35
+
36
+ # 3 PRELIMINARY
37
+
38
+ Existing mixup methods return $\{h(x_1, x_{i(1)}), \ldots, h(x_m, x_{i(m)})\}$ for given input data $\{x_1, \ldots, x_m\}$, where $h: \mathcal{X} \times \mathcal{X} \to \mathcal{X}$ is a mixup function and $(i(1), \ldots, i(m))$ is a random permutation of the data indices. In the case of input mixup, $h(x, x')$ is $\lambda x + (1 - \lambda)x'$, where $\lambda \in [0, 1]$ is a random mixing ratio. Manifold mixup applies input mixup in the hidden feature space, and CutMix uses $h(x, x') = \mathbb{1}_B \odot x + (1 - \mathbb{1}_B) \odot x'$, where $\mathbb{1}_B$ is a binary rectangular-shape mask for an image $x$ and $\odot$ represents the element-wise product. Puzzle Mix defines $h(x, x')$ as $z \odot \Pi^{\intercal}x + (1 - z) \odot \Pi'^{\intercal}x'$, where $\Pi$ is a transport plan and $z$ is a discrete mask. In detail, for $x \in \mathbb{R}^n$, $\Pi \in \{0, 1\}^{n \times n}$ and $z \in \mathcal{L}^n$ for $\mathcal{L} = \left\{\frac{l}{L} \mid l = 0, 1, \ldots, L\right\}$.
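The pairwise mixup functions recalled above can be sketched as follows (Input Mixup and CutMix only; Puzzle Mix additionally optimizes the mask and transport plan, which is omitted here):

```python
import numpy as np

def input_mixup(x, x_p, lam):
    """Input Mixup: h(x, x') = lam * x + (1 - lam) * x'."""
    return lam * x + (1.0 - lam) * x_p

def cutmix(x, x_p, mask):
    """CutMix: h(x, x') = 1_B * x + (1 - 1_B) * x' for a binary mask 1_B."""
    return mask * x + (1.0 - mask) * x_p
```

In both cases the mixing ratio used for the data (the scalar `lam`, or the mask's mean) also weights the pair of labels.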
39
+
40
+ In this work, we extend the existing mixup functions as $h: \mathcal{X}^m \to \mathcal{X}^{m'}$ which performs mixup on a collection of input data and returns another collection. Let $x_B \in \mathbb{R}^{m \times n}$ denote the batch of input data in matrix form. Then, our proposed mixup function is
41
+
42
+ $$
43
+ h \left(x _ {B}\right) = \left(g \left(z _ {1} \odot x _ {B}\right), \dots , g \left(z _ {m ^ {\prime}} \odot x _ {B}\right)\right),
44
+ $$
45
+
46
+ where $z_{j} \in \mathcal{L}^{m \times n}$ for $j = 1, \ldots, m'$ with $\mathcal{L} = \left\{\frac{l}{L} \mid l = 0, 1, \ldots, L\right\}$ and $g: \mathbb{R}^{m \times n} \to \mathbb{R}^n$ returns the column-wise sum of a given matrix. Note that the $k^{\text{th}}$ column of $z_{j}$, denoted as $z_{j,k} \in \mathcal{L}^m$, can be interpreted as the mixing ratio among the $m$ inputs at the $k^{\text{th}}$ location. Also, we enforce $\| z_{j,k} \|_1 = 1$ to maintain the overall statistics of the given input batch. Given the one-hot target labels $y_B \in \{0, 1\}^{m \times C}$ of the input data with $C$ classes, we generate soft target labels for mixup data as $y_B^\intercal \tilde{o}_j$ for $j = 1, \ldots, m'$, where $\tilde{o}_j = \frac{1}{n} \sum_{k=1}^{n} z_{j,k} \in [0,1]^m$ represents the input source ratio of the $j^{\text{th}}$ mixup data. We train models to estimate the soft target labels by minimizing the cross-entropy loss.
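A minimal sketch of this batch mixup function and the soft-label construction (shapes and names are our illustrative choices):

```python
import numpy as np

def co_mixup_outputs(x_B, y_B, z):
    """Given inputs x_B [m, n], one-hot labels y_B [m, C], and mixing
    masks z [m', m, n] where each column z[j, :, k] lies on the simplex,
    return the mixed batch [m', n] and its soft labels [m', C]."""
    mixed = np.einsum('jmn,mn->jn', z, x_B)  # g(z_j * x_B): column-wise sums
    o_tilde = z.mean(axis=2)                 # input source ratios, [m', m]
    soft_labels = o_tilde @ y_B              # y_B^T o~_j, stacked over j
    return mixed, soft_labels
```

For example, a mask that copies the first half of input 0 and the second half of input 1 yields a half-and-half soft label.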
47
+
48
+ # 4 METHOD
49
+
50
+ # 4.1 OBJECTIVE
51
+
52
+ Saliency Our main objective is to maximize the saliency measure of mixup data while maintaining the local smoothness of data, i.e., spatially nearby patches in a natural image look similar, temporally adjacent signals have similar spectra in speech, etc. (Kim et al., 2020). As we can see from CutMix in Figure 1, disregarding saliency can give a misleading supervisory signal by generating mixup data that does not match the target soft label. While the existing mixup methods only consider the mixup between two inputs, we generalize the number of inputs $m$ to any positive integer. Note that each $k^{\mathrm{th}}$ location of the outputs has $m$ candidate sources from the inputs. We model the unary labeling cost as the negative value of the saliency, and denote the cost vector at the $k^{\mathrm{th}}$ location as $c_{k} \in \mathbb{R}^{m}$. For the saliency measure, we calculate the gradient values of the training loss with respect to the input and measure the $\ell_2$ norm of the gradient values across input channels (Simonyan et al., 2013; Kim et al., 2020). Note that this method does not require any additional architecture dependent modules for saliency calculation. In addition to the unary cost, we encourage adjacent locations to have similar labels for the smoothness of each mixup data. In summary, the objective can be formulated as follows:
53
+
54
+ $$
55
+ \sum_ {j = 1} ^ {m ^ {\prime}} \sum_ {k = 1} ^ {n} c _ {k} ^ {\mathsf {T}} z _ {j, k} + \beta \sum_ {j = 1} ^ {m ^ {\prime}} \sum_ {(k, k ^ {\prime}) \in \mathcal {N}} (1 - z _ {j, k} ^ {\mathsf {T}} z _ {j, k ^ {\prime}}) - \eta \sum_ {j = 1} ^ {m ^ {\prime}} \sum_ {k = 1} ^ {n} \log p (z _ {j, k}),
56
+ $$
![](images/9e611e3f7098580dd415cea557346a1be531c5686abd561374aa6025c1bd8c6d.jpg)
(a)

![](images/a971d31ac85adc9d6858062d0819949495369503fd8788b740b79f4dbcea9077.jpg)
(b)

![](images/422274f86ffdcd536070cd5cf5217ced3802b3cd09a41edadad2ec0638b4e6b1.jpg)
(c)

![](images/a771ffb4783f852b395102203538cf1b2a85ed5e8888c1c4c44d28ab6a36ac9a.jpg)
(d)

Figure 2: (a) Analysis of our BP optimization problem. The $x$-axis is a one-dimensional arrangement of solutions: the mixed output is more salient but less diverse towards the right, and less salient but more diverse towards the left. The unary term (red) decreases towards the right side of the axis, while the supermodular term (green) increases. By optimizing the sum of the two terms (brown), we obtain the balanced output $z^{*}$. (b) A histogram of the number of inputs mixed for each output, given a batch of 100 examples from the ImageNet dataset. As $\tau$ increases, more inputs are used to create each output on average. (c) Mean batch saliency measurement of a batch of mixup data using the ImageNet dataset. We normalize the saliency measure of each input to sum up to 1. (d) Diversity measurement of a batch of mixup data. We calculate the diversity as $1 - \sum_{j}\sum_{j^{\prime}\neq j}\tilde{o}_{j}^{\top}\tilde{o}_{j^{\prime}} / m$, where $\tilde{o}_j = o_j / \| o_j\| _1$. We can control the diversity among Co-Mixup data (red) and find the optimum by controlling $\tau$.
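The diversity measure quoted in the Figure 2 caption can be sketched as follows; the input array of aggregated source vectors $o_j$ is a hypothetical stand-in for the optimizer's output.

```python
import numpy as np

def batch_diversity(o):
    """Diversity measure from the Figure 2 caption:
    1 - sum_j sum_{j' != j} o~_j^T o~_j' / m, with o~_j = o_j / ||o_j||_1.
    o: array of shape (m', m) of aggregated input-source vectors o_j."""
    m = o.shape[1]
    o_t = o / np.abs(o).sum(axis=1, keepdims=True)
    G = o_t @ o_t.T                       # pairwise inner products
    return 1.0 - (G.sum() - np.trace(G)) / m
```

Two outputs built from disjoint inputs give diversity 1; outputs with identical source mixtures score lower.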
where the prior $p$ is given by $z_{j,k} \sim \frac{1}{L} \mathrm{Multi}(L, \lambda)$ with $\lambda = (\lambda_1, \dots, \lambda_m) \sim \mathrm{Dirichlet}(\alpha, \dots, \alpha)$, which is a generalization of the mixing ratio distribution of Zhang et al. (2018), and $\mathcal{N}$ denotes the set of adjacent locations (i.e., neighboring image patches in vision, subsequent spectra in speech, etc.).

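The prior above can be sampled directly; a minimal numpy sketch with hypothetical values of $m$, $L$, and $\alpha$:

```python
import numpy as np

def sample_prior(m, L, alpha, rng=None):
    """Sample one z_{j,k} from the prior described above:
    lambda ~ Dirichlet(alpha, ..., alpha), z ~ Multinomial(L, lambda) / L,
    so z lies on the (1/L)-grid of the m-dimensional probability simplex."""
    rng = rng or np.random.default_rng(0)
    lam = rng.dirichlet(np.full(m, alpha))
    return rng.multinomial(L, lam) / L

z = sample_prior(m=3, L=4, alpha=2.0)  # m entries, each a multiple of 1/L, summing to 1
```

With $L = 1$ and $m = 2$, this recovers a hard choice between the two inputs; larger $L$ yields the finer mixing-ratio grid used by the label set $\mathcal{L}$.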
Diversity Note that the naive generalization above leads to identical outputs, because the objective is separable and identical for each output. In order to obtain diverse mixup outputs, we model a similarity penalty between outputs. First, we represent the input source information of the $j^{\mathrm{th}}$ output by aggregating the assigned labels as $\sum_{k=1}^{n} z_{j,k}$. For simplicity, let us denote $\sum_{k=1}^{n} z_{j,k}$ as $o_j$. Then, we measure the similarity between the $o_j$'s by using an inner-product on $\mathbb{R}^m$. In addition to the input source similarity between outputs, we model the compatibility between input sources, represented as a symmetric matrix $A_c \in \mathbb{R}_+^{m \times m}$. Specifically, $A_c[i_1, i_2]$ quantifies the degree to which inputs $i_1$ and $i_2$ are suitable to be mixed together. In summary, we use the inner-product induced by $A = (1 - \omega)I + \omega A_c$ for $\omega \in [0,1]$, resulting in a supermodular penalty term. Note that, by minimizing $\langle o_j, o_{j'} \rangle_A = o_j^\top A o_{j'}$, $\forall j \neq j'$, we penalize output mixup examples with similar input sources and encourage each individual mixup example to have high compatibility within. In this work, we measure the distance between the locations of salient objects in each input and use the distance matrix $A_c[i, j] = \| \operatorname{argmax}_k s_i[k] - \operatorname{argmax}_k s_j[k] \|_1$, where $s_i$ is the saliency map of the $i^{\mathrm{th}}$ input and $k$ is a location index (e.g., a 2-D index for image data). From now on, we refer to this inner-product term as the compatibility term.

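The compatibility matrix described in this paragraph can be sketched as follows; the saliency-map shapes and the value of $\omega$ are hypothetical.

```python
import numpy as np

def compatibility_matrix(saliency_maps, omega=0.1):
    """Sketch of A = (1 - omega) I + omega A_c, where A_c[i, j] is the l1
    distance between the most salient locations of inputs i and j.
    saliency_maps: hypothetical array of shape (m, H, W)."""
    m, H, W = saliency_maps.shape
    # 2-D argmax location of each input's saliency map
    locs = np.array([np.unravel_index(s.argmax(), (H, W)) for s in saliency_maps])
    A_c = np.abs(locs[:, None, :] - locs[None, :, :]).sum(axis=-1).astype(float)
    return (1 - omega) * np.eye(m) + omega * A_c

def source_similarity(o_j, o_jp, A):
    """The penalized inner product <o_j, o_j'>_A = o_j^T A o_j'."""
    return o_j @ A @ o_jp
```

Inputs whose salient objects sit far apart get a large $A_c$ entry, so pairing them inside one output incurs a smaller cross-output penalty, which is the "high compatibility within" effect the text describes.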
Over-penalization The conventional mixup methods perform as many mixups as there are examples in a given mini-batch; in our setting, this is the case when $m = m'$. However, the compatibility penalty between outputs is constrained by the pigeonhole principle. For example, suppose the first output consists of two inputs. Then, either those inputs must be used again for the remaining $m' - 1$ outputs, or only $m - 2$ inputs remain available. In the latter case, the number of available inputs ($m - 2$) is less than the number of outputs ($m' - 1$), and thus the same input must be used more than once. Empirically, we found that the compatibility term over-penalizes the optimization, so that a substantial portion of outputs are returned as singletons without any mixup. To mitigate this over-penalization issue, we clip the compatibility penalty term. Specifically, we model the objective so that no extra penalty occurs when the compatibility among outputs is below a certain level.

Now we present our main objective as follows:

$$
z^{*} = \operatorname*{argmin}_{z_{j,k} \in \mathcal{L}^{m},\ \| z_{j,k}\|_{1} = 1} f(z),
$$

where

$$
\begin{aligned}
f(z) := & \sum_{j=1}^{m'} \sum_{k=1}^{n} c_{k}^{\mathsf{T}} z_{j,k} + \beta \sum_{j=1}^{m'} \sum_{(k,k') \in \mathcal{N}} \left(1 - z_{j,k}^{\mathsf{T}} z_{j,k'}\right) \\
& + \gamma \underbrace{\max\left\{\tau,\ \sum_{j=1}^{m'} \sum_{j' \neq j}^{m'} \Big(\sum_{k=1}^{n} z_{j,k}\Big)^{\mathsf{T}} A \Big(\sum_{k=1}^{n} z_{j',k}\Big)\right\}}_{=f_{c}(z)} - \eta \sum_{j=1}^{m'} \sum_{k=1}^{n} \log p(z_{j,k}).
\end{aligned} \tag{1}
$$

In Figure 2, we describe the properties of the BP optimization problem of Equation (1) and statistics of the resulting mixup data. Next, we verify the supermodularity of the compatibility term. We first extend the definition of the submodularity of a multi-label function as follows (Windheuser et al., 2012).

Definition 1. For a given label set $\mathcal{L}$, a function $s: \mathcal{L}^m \times \mathcal{L}^m \to \mathbb{R}$ is pairwise submodular if $\forall x, x' \in \mathcal{L}^m$, $s(x, x) + s(x', x') \leq s(x, x') + s(x', x)$. A function $s$ is pairwise supermodular if $-s$ is pairwise submodular.

Proposition 1. The compatibility term $f_{c}$ in Equation (1) is pairwise supermodular for every pair of $(z_{j_1,k}, z_{j_2,k})$ if $A$ is positive semi-definite.

Proof. See Appendix B.1.

Finally, note that $A = (1 - \omega)I + \omega A_{c}$, where $A_{c}$ is a symmetric matrix. By the spectral decomposition, $A_{c}$ can be represented as $UDU^{\mathsf{T}}$, where $D$ is a diagonal matrix and $U^{\mathsf{T}}U = UU^{\mathsf{T}} = I$. Then, $A = U((1 - \omega)I + \omega D)U^{\mathsf{T}}$, and thus for sufficiently small $\omega > 0$, we can guarantee that $A$ is positive semi-definite.

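This spectral argument can be checked numerically: since $A$ shares eigenvectors with $A_c$, its eigenvalues are $(1 - \omega) + \omega d_i$, and a small enough $\omega$ keeps them all nonnegative. The matrix below is a random symmetric stand-in for $A_c$, not the saliency-based matrix from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A_c = B + B.T                                 # symmetric, possibly indefinite
d_min = np.linalg.eigvalsh(A_c).min()         # smallest eigenvalue d_min of A_c
w = 0.5 / (1.0 - min(d_min, 0.0))             # ensures (1 - w) + w * d_min >= 0.5
A = (1 - w) * np.eye(5) + w * A_c
assert np.linalg.eigvalsh(A).min() >= -1e-10  # A is positive semi-definite
```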
# 4.2 ALGORITHM

Our main objective consists of modular (unary, prior), submodular (smoothness), and supermodular (compatibility) terms. To optimize the main objective, we employ the submodular-supermodular procedure, which iteratively approximates the supermodular term by a modular function (Narasimhan and Bilmes, 2005). Recall that $z_{j}$ denotes the labeling of the $j^{\mathrm{th}}$ output and $o_{j}$ denotes the aggregated input source information of the $j^{\mathrm{th}}$ output, $\sum_{k=1}^{n} z_{j,k}$. Before introducing our algorithm, we first inspect the simpler case without clipping.

Proposition 2. The compatibility term $f_{c}$ without clipping is modular with respect to $z_{j}$.

Proof. Note that $A$ is a positive symmetric matrix by definition. Then, for an index $j_0$, we can represent $f_c$ without clipping in terms of $o_{j_0}$ as $\sum_{j=1}^{m'} \sum_{j'=1, j' \neq j}^{m'} o_j^{\mathsf{T}} A o_{j'} = 2 \sum_{j=1, j \neq j_0}^{m'} o_j^{\mathsf{T}} A o_{j_0} + \sum_{j=1, j \neq j_0}^{m'} \sum_{j'=1, j' \notin \{j_0, j\}}^{m'} o_j^{\mathsf{T}} A o_{j'} = (2 \sum_{j=1, j \neq j_0}^{m'} A o_j)^{\mathsf{T}} o_{j_0} + c = v_{-j_0}^{\mathsf{T}} o_{j_0} + c$, where $v_{-j_0} \in \mathbb{R}^m$ and $c \in \mathbb{R}$ are independent of $o_{j_0}$. Finally, $v_{-j_0}^{\mathsf{T}} o_{j_0} + c = \sum_{k=1}^{n} v_{-j_0}^{\mathsf{T}} z_{j_0, k} + c$ is a modular function of $z_{j_0}$.

By Proposition 2, we can apply a submodular minimization algorithm to optimize the objective with respect to $z_{j}$ when there is no clipping. Thus, we can optimize the main objective without clipping in a coordinate descent fashion (Wright, 2015). For the case with clipping, we modularize the supermodular compatibility term under the following criteria:

1. The modularized function value should increase as the compatibility across outputs increases.
2. The modularized function should not apply an extra penalty for compatibility below a certain level.

![](images/e178ab5b2508185cd283359c12a369a426fa7026a6935f12096bcb8442f71694.jpg)
Figure 3: Visualization of the proposed mixup procedure. For a given batch of input data (left), a batch of mixup data (right) is generated, which mix-matches different salient regions among the input data while preserving the diversity among the mixup examples. The histograms on the right represent the input source information of each mixup example $(o_j)$.

Borrowing the notation from the proof of Proposition 2, for an index $j$, $f_{c}(z) = \max \{\tau, v_{-j}^{\mathsf{T}} o_{j} + c\} = \max \{\tau - c, v_{-j}^{\mathsf{T}} o_{j}\} + c$. Note that $o_{j} = \sum_{k=1}^{n} z_{j,k}$ represents the input source information of the $j^{\text{th}}$ output and $v_{-j} = 2 \sum_{j'=1, j' \neq j}^{m'} A o_{j'}$ encodes the status of the other outputs. Thus, we can interpret the supermodular term as penalizing each label of $o_{j}$ in proportion to the corresponding $v_{-j}$ value (criterion 1), but not for compatibility below $\tau - c$ (criterion 2). As a modular function satisfying the criteria above, we use the following:

$$
f_{c}(z) \approx \max \{\tau', v_{-j}\}^{\mathsf{T}} o_{j} \quad \text{for some } \tau' \in \mathbb{R}. \tag{2}
$$

Note that, by satisfying the criteria above, the modular function reflects the diversity and over-penalization desiderata described in Section 4.1. We illustrate the proposed mixup procedure with the modularized diversity penalty in Figure 3.

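The modular approximation of Equation (2) is cheap to evaluate; a minimal numpy sketch, assuming the outputs' aggregated labelings are given as rows of a matrix:

```python
import numpy as np

def modularized_penalty(o, j, A, tau_p):
    """Modular approximation of the clipped compatibility term, Equation (2):
    f_c(z) ~= max{tau', v_{-j}}^T o_j with v_{-j} = 2 * sum_{j' != j} A o_{j'}.
    o: array of shape (m', m) whose rows are the aggregated labelings o_j."""
    v = 2.0 * (A @ (o.sum(axis=0) - o[j]))  # v_{-j}, one matvec per output
    return np.maximum(tau_p, v) @ o[j]
```

An input already heavily used by the other outputs gets a large entry in $v_{-j}$, so assigning it again to output $j$ is penalized; entries below $\tau'$ are flattened to the same floor, implementing the clipping behavior (criterion 2).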
Proposition 3. The modularization given by Equation (2) satisfies the criteria above.

Proof. See Appendix B.2.

By applying the modular approximation described in Equation (2) to $f_{c}$ in Equation (1), we can iteratively apply a submodular minimization algorithm to obtain the final solution, as described in Algorithm 1. In detail, each step is performed as follows: 1) condition the main objective $f$ on the current values except $z_{j}$, denoted as $f_{j}(z_{j}) = f(z_{j}; z_{1:j-1}, z_{j+1:m'})$; 2) modularize the compatibility term of $f_{j}$ as in Equation (2), resulting in a submodular function $\tilde{f}_j$, where we denote the modularization operator as $\Phi$, i.e., $\tilde{f}_j = \Phi(f_j)$; 3) apply a submodular minimization algorithm to $\tilde{f}_j$. Please refer to Appendix C for implementation details.

Algorithm 1 Iterative submodular minimization

Initialize $z$ as $z^{(0)}$. Let $z^{(t)}$ denote the solution at the $t^{\mathrm{th}}$ step, and let $\Phi$ denote the modularization operator based on Equation (2).

for $t = 1, \dots, T$ do
&nbsp;&nbsp;for $j = 1, \dots, m'$ do
&nbsp;&nbsp;&nbsp;&nbsp;$f_{j}^{(t)}(z_{j}) := f(z_{j}; z_{1:j-1}^{(t)}, z_{j+1:m'}^{(t-1)})$
&nbsp;&nbsp;&nbsp;&nbsp;$\tilde{f}_{j}^{(t)} = \Phi(f_{j}^{(t)})$
&nbsp;&nbsp;&nbsp;&nbsp;Solve $z_{j}^{(t)} = \operatorname{argmin} \tilde{f}_{j}^{(t)}(z_{j})$
&nbsp;&nbsp;end for
end for
return $z^{(T)}$

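The loop structure of Algorithm 1 can be illustrated with a toy sketch restricted to one-hot $z_{j,k}$. Here a per-location argmin of the modularized objective stands in for the full submodular minimization, and the smoothness and prior terms are omitted for brevity, so this is a structural illustration only, not the paper's implementation.

```python
import numpy as np

def co_mixup_labels(costs, A, m_out, tau_p=0.0, T=4):
    """Toy sketch of Algorithm 1 with one-hot z_{j,k}: iterate over outputs,
    build v_{-j} from the other outputs' aggregated labelings, modularize the
    compatibility term as in Equation (2), and take a per-location argmin.

    costs: (m, n) unary costs c_k; returns integer labels of shape (m_out, n).
    """
    m, n = costs.shape
    labels = np.zeros((m_out, n), dtype=int)  # z^{(0)}: everything from input 0
    for _ in range(T):
        for j in range(m_out):
            # aggregate o_{j'} for all j' != j into one count vector
            o_rest = np.zeros(m)
            for jp in range(m_out):
                if jp != j:
                    o_rest += np.bincount(labels[jp], minlength=m)
            v = 2.0 * (A @ o_rest)                       # v_{-j} from Eq. (2)
            per_label = costs + np.maximum(tau_p, v)[:, None]
            labels[j] = per_label.argmin(axis=0)         # greedy per location
    return labels
```

Even in this stripped-down form, the diversity penalty pushes different outputs onto different input sources.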
Analysis Narasimhan and Bilmes (2005) proposed a modularization strategy for general supermodular set functions, together with a submodular minimization algorithm that monotonically decreases the original BP objective. However, the proposed Algorithm 1 based on Equation (2) is much more suitable for minibatch-based mixup for neural network training than the set modularization of Narasimhan and Bilmes (2005), in terms of both complexity and the modularization variance due to randomness. For simplicity, let us assume each $z_{j,k}$ is an $m$-dimensional one-hot vector. Then, our problem is to optimize $m'n$ one-hot $m$-dimensional vectors.

To apply the set modularization method, we need to assign each possible value of $z_{j,k}$ as an element of $\{1, 2, \ldots, m\}$. Then the supermodular term in Equation (1) can be interpreted as a set function with $m'nm$ elements, and to apply the set modularization, $O(m'nm)$ sequential evaluations of the supermodular term are required. In contrast, Algorithm 1 calculates $v_{-j}$ in Equation (2) in only $O(m')$ time per iteration. In addition, each modularization step of the set modularization method requires a random permutation of the $m'nm$ elements. In this case, the optimization can be strongly affected by the randomness of the permutation step. As a result, the optimal labeling of each $z_{j,k}$ from the compatibility term is strongly influenced by the random ordering, undermining the interpretability of the algorithm. Please refer to Appendix D for an empirical comparison between Algorithm 1 and the method of Narasimhan and Bilmes (2005).

# 5 EXPERIMENTS

We evaluate our proposed mixup method on generalization, weakly supervised object localization, calibration, and robustness tasks. First, we compare the generalization performance of the proposed method against baselines by training classifiers on CIFAR-100 (Krizhevsky et al., 2009), Tiny-ImageNet (Chrabaszcz et al., 2017), ImageNet (Deng et al., 2009), and the Google commands speech dataset (Warden, 2017). Next, we test the localization performance of classifiers following the evaluation protocol of Qin and Kim (2019). We also measure the calibration error (Guo et al., 2017) of classifiers to verify that Co-Mixup successfully alleviates the over-confidence issue discussed by Zhang et al. (2018). In Section 5.4, we evaluate the robustness of the classifiers on test datasets with background corruption, in response to the problem raised by Lee et al. (2020) that deep neural network agents often fail to generalize to unseen environments. Finally, we perform a sensitivity analysis of Co-Mixup and provide the results in Appendix F.3.

# 5.1 CLASSIFICATION

We first train PreActResNet18 (He et al., 2016), WRN16-8 (Zagoruyko and Komodakis, 2016), and ResNeXt29-4-24 (Xie et al., 2017) on CIFAR-100 for 300 epochs. We use stochastic gradient descent with an initial learning rate of 0.2, decayed by a factor of 0.1 at epochs 100 and 200. We set the momentum to 0.9 and add a weight decay of 0.0001. With this setup, we train a vanilla classifier and reproduce the mixup baselines (Zhang et al., 2018; Verma et al., 2019; Yun et al., 2019; Kim et al., 2020), which we denote as Vanilla, Input, Manifold, CutMix, and Puzzle Mix in the experiment tables. Note that we use identical hyperparameters for Co-Mixup over all of the experiments with different models and datasets, which are provided in Appendix E.

Table 1 shows that Co-Mixup significantly outperforms all other baselines in Top-1 error rate. Co-Mixup achieves a $19.87\%$ Top-1 error rate with PreActResNet18, outperforming the best baseline by $0.75\%$. We further test Co-Mixup on different models (WRN16-8 & ResNeXt29-4-24) and verify that Co-Mixup improves the Top-1 error rate over the best performing baseline.

<table><tr><td>Dataset (Model)</td><td>Vanilla</td><td>Input</td><td>Manifold</td><td>CutMix</td><td>Puzzle Mix</td><td>Co-Mixup</td></tr><tr><td>CIFAR-100 (PreActResNet18)</td><td>23.59</td><td>22.43</td><td>21.64</td><td>21.29</td><td>20.62</td><td>19.87</td></tr><tr><td>CIFAR-100 (WRN16-8)</td><td>21.70</td><td>20.08</td><td>20.55</td><td>20.14</td><td>19.24</td><td>19.15</td></tr><tr><td>CIFAR-100 (ResNeXt29-4-24)</td><td>21.79</td><td>21.70</td><td>22.28</td><td>21.86</td><td>21.12</td><td>19.78</td></tr><tr><td>TinyImageNet (PreActResNet18)</td><td>43.40</td><td>43.48</td><td>40.76</td><td>43.11</td><td>36.52</td><td>35.85</td></tr><tr><td>ImageNet (ResNet-50, 100 epochs)</td><td>24.03</td><td>22.97</td><td>23.30</td><td>22.92</td><td>22.49</td><td>22.39</td></tr><tr><td>Google commands (VGG-11)</td><td>4.84</td><td>3.91</td><td>3.67</td><td>3.76</td><td>3.70</td><td>3.54</td></tr></table>

Table 1: Top-1 error rate on various datasets and models. For CIFAR-100, we train each model with three different random seeds and report the mean error.

We further test Co-Mixup on other datasets: Tiny-ImageNet, ImageNet, and the Google commands dataset (Table 1). For Tiny-ImageNet, we train PreActResNet18 for 1200 epochs following the training protocol of Kim et al. (2020). As a result, Co-Mixup consistently improves the Top-1 error rate over baselines by $0.67\%$. In the ImageNet experiment, we follow the experimental protocol provided in Puzzle Mix (Kim et al., 2020), which trains ResNet-50 (He et al., 2015) for 100 epochs. As a result, Co-Mixup outperforms all of the baselines in Top-1 error rate. We further test Co-Mixup in the speech domain with the Google commands dataset and VGG-11 (Simonyan and Zisserman, 2014). We provide a detailed experimental setting and dataset description in Appendix F.1. From Table 1, we confirm that Co-Mixup is the most effective in the speech domain as well.

![](images/acada851f8ac2808cd96ed11ed62907a226ae8d6a48b494269c4305142f11d88.jpg)
Figure 4: Confidence-Accuracy plots for classifiers on CIFAR-100. From the figure, the Vanilla network shows over-confident predictions, whereas other mixup baselines tend to have under-confident predictions. We can find that Co-Mixup has the best-calibrated predictions.

# 5.2 LOCALIZATION

We compare the weakly supervised object localization (WSOL) performance of classifiers trained on ImageNet (in Table 1) to demonstrate that our mixup method better guides a classifier to focus on salient regions. We test localization performance using CAM (Zhou et al., 2016), a WSOL method using a pre-trained classifier. We evaluate localization performance following the evaluation protocol of Qin and Kim (2019), with a binarization threshold of 0.25 in CAM. Table 2 summarizes the WSOL performance of various mixup methods and shows that our proposed mixup method outperforms the other baselines.

# 5.3 CALIBRATION

We evaluate the expected calibration error (ECE) (Guo et al., 2017) of classifiers trained on CIFAR-100. Note that ECE is calculated as the weighted average of the absolute difference between the confidence and accuracy of a classifier. As shown in Table 2, the Co-Mixup classifier has the lowest calibration error among the baselines. From Figure 4, we find that the other mixup baselines tend to have under-confident predictions, resulting in ECE values even higher than those of the Vanilla network (also pointed out by Wen et al. (2020)), whereas Co-Mixup has the best-calibrated predictions, resulting in a relative ECE reduction of about $48\%$. We provide more figures and results with other datasets in Appendix F.2.

<table><tr><td>Task</td><td></td><td>Vanilla</td><td>Input</td><td>Manifold</td><td>CutMix</td><td>Puzzle Mix</td><td>Co-Mixup</td></tr><tr><td colspan="2">Localization (Acc. %) (↑)</td><td>54.36</td><td>55.07</td><td>54.86</td><td>54.91</td><td>55.22</td><td>55.32</td></tr><tr><td colspan="2">Calibration (ECE %) (↓)</td><td>3.9</td><td>17.7</td><td>13.1</td><td>5.6</td><td>7.5</td><td>1.9</td></tr></table>

Table 2: WSOL results on ImageNet and ECE (%) measurements of CIFAR-100 classifiers.

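ECE as defined above (Guo et al., 2017) can be sketched in a few lines; the number of bins below is a hypothetical choice, and the paper's exact binning may differ.

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=15):
    """ECE: bin predictions by confidence, then take the weighted average of
    |accuracy(bin) - mean confidence(bin)|, weighted by the bin's sample share.
    conf: predicted confidences in [0, 1]; correct: 0/1 correctness flags."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, N = 0.0, len(conf)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.sum() / N * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```

For example, two predictions at confidence 0.9 of which one is correct land in the same bin with accuracy 0.5, giving an ECE of 0.4.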
# 5.4 ROBUSTNESS

In response to the problem raised by Lee et al. (2020) that deep neural network agents often fail to generalize to unseen environments, we consider the situation where the statistics of the foreground object, such as color or shape, are unchanged, but the background is corrupted (or replaced). In detail, we consider the following operations: 1) replacement with another image and 2) adding Gaussian noise. We use ground-truth bounding boxes to separate the foreground from the background, and then apply the operations above independently to obtain test datasets. We provide a detailed description of the datasets in Appendix G.

With the test datasets described above, we evaluate the robustness of the pre-trained classifiers. As shown in Table 3, Co-Mixup shows significant performance gains under various background corruption tests compared to the other mixup baselines. For each corruption case, the classifier trained with Co-Mixup outperforms the others in Top-1 error rate, with performance margins of $2.86\%$ and $3.33\%$ over the Vanilla model.

<table><tr><td>Corruption type</td><td>Vanilla</td><td>Input</td><td>Manifold</td><td>CutMix</td><td>Puzzle Mix</td><td>Co-Mixup</td></tr><tr><td>Random replacement</td><td>41.63(+17.62)</td><td>39.41(+16.47)</td><td>39.72(+16.47)</td><td>46.20(+23.16)</td><td>39.23(+16.69)</td><td>38.77(+16.38)</td></tr><tr><td>Gaussian noise</td><td>29.22(+5.21)</td><td>26.29(+3.35)</td><td>26.79(+3.54)</td><td>27.13(+4.09)</td><td>26.11(+3.57)</td><td>25.89(+3.49)</td></tr></table>

Table 3: Top-1 error rates of various mixup methods on the background-corrupted ImageNet validation set. The values in parentheses indicate the error rate increment on corrupted inputs compared to clean inputs.

# 5.5 BASELINES WITH MULTIPLE INPUTS

To further investigate the effect of the number of inputs for mixup in isolation, we conduct an ablation study on baselines using multiple mixing inputs. For a fair comparison, we use a Dirichlet$(\alpha, \dots, \alpha)$ prior for the mixing ratio distribution and select the best performing $\alpha$ in $\{0.2, 1.0, 2.0\}$. Note that we overlay multiple boxes in the case of CutMix. Table 4 reports the classification test errors on CIFAR-100 with PreActResNet18. From the table, we find that mixing multiple inputs decreases the performance gains of each mixup baseline. These results demonstrate that mixing multiple inputs can degrade performance, and they support the necessity of considering saliency information and diversity as in Co-Mixup.

<table><tr><td># inputs for mixup</td><td>Input</td><td>Manifold</td><td>CutMix</td><td>Co-Mixup</td></tr><tr><td># inputs = 2</td><td>22.43</td><td>21.64</td><td>21.29</td><td rowspan="3">19.87</td></tr><tr><td># inputs = 3</td><td>23.03</td><td>22.13</td><td>22.01</td></tr><tr><td># inputs = 4</td><td>23.12</td><td>22.07</td><td>22.20</td></tr></table>

Table 4: Top-1 error rates of mixup baselines with multiple mixing inputs on CIFAR-100 with PreActResNet18. We report the mean values over three different random seeds. Note that Co-Mixup optimally determines the number of inputs for each output by solving the optimization problem.

# 6 CONCLUSION

We presented Co-Mixup for the optimal construction of a batch of mixup examples by finding the best combination of salient regions among a collection of input data while encouraging diversity among the generated mixup examples. This leads to a discrete optimization problem minimizing a novel submodular-supermodular objective. In this respect, we presented a practical modular approximation and an iterative submodular optimization algorithm suitable for minibatch-based neural network training. Our experiments on generalization, weakly supervised object localization, and robustness against background corruption show that Co-Mixup achieves state-of-the-art performance compared to other mixup baseline methods. The proposed generalized mixup framework tackles the important question of 'what to mix?' while the existing methods only consider 'how to mix?'. We believe this work can be applied to new applications where the existing mixup methods have not been applied, such as multi-label classification, multi-object detection, or source separation.

# ACKNOWLEDGEMENTS

This research was supported in part by Samsung Advanced Institute of Technology, Samsung Electronics Co., Ltd., the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00882, SW STAR LAB: Development of deployable learning intelligence via self-sustainable and trustworthy machine learning), and the Research Resettlement Fund for the new faculty of Seoul National University. Hyun Oh Song is the corresponding author.

# REFERENCES

C. M. Bishop. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1):108-116, 1995.
C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
P. Chrabaszcz, I. Loshchilov, and F. Hutter. A downsampled variant of ImageNet as an alternative to the CIFAR datasets. arXiv preprint arXiv:1707.08819, 2017.
E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le. AutoAugment: Learning augmentation strategies from data. In CVPR, 2019.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and F. F. Li. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
S. Fujishige. Submodular Functions and Optimization. Elsevier, 2005.
C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On calibration of modern neural networks. In ICML, 2017.
H. Guo, Y. Mao, and R. Zhang. Mixup as locally linear out-of-manifold regularization. In AAAI, 2019.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2015.
K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV, 2016.
T. Horel and Y. Singer. Maximization of approximately submodular functions. In NeurIPS, 2016.
R. Iyer and J. Bilmes. Algorithms for approximate minimization of the difference between submodular functions, with applications. arXiv preprint arXiv:1207.0560, 2012.
C. Jung and C. Kim. A unified spectral-domain approach for saliency detection and its application to automatic object segmentation. IEEE Transactions on Image Processing, 21(3):1272-1283, 2011.
O. Kalinli and S. S. Narayanan. A saliency-based auditory attention model with applications to unsupervised prominent syllable detection in speech. In Eighth Annual Conference of the International Speech Communication Association, 2007.
J.-H. Kim, W. Choo, and H. O. Song. Puzzle Mix: Exploiting saliency and local statistics for optimal mixup. In ICML, 2020.
A. Krizhevsky, G. Hinton, et al. Learning multiple layers of features from tiny images. Technical report, 2009.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
K. Lee, K. Lee, J. Shin, and H. Lee. Network randomization: A simple technique for generalization in deep reinforcement learning. In ICLR, 2020.
L. Lovász. Submodular functions and convexity. In Mathematical Programming: The State of the Art, pages 235-257. Springer, 1983.
T. Miyato, S.-i. Maeda, M. Koyama, and S. Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):1979-1993, 2018.
M. Narasimhan and J. A. Bilmes. A submodular-supermodular procedure with applications to discriminative structure learning. In UAI, 2005.
A. v. d. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
Z. Qin and D. Kim. Rethinking softmax with cross-entropy: Neural network classifier as mutual information estimator. arXiv preprint arXiv:1911.10688, 2019.
S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NeurIPS, 2015.
R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In ICCV, 2017.
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
V. Verma, A. Lamb, C. Beckham, A. Najafi, I. Mitliagkas, A. Courville, D. Lopez-Paz, and Y. Bengio. Manifold mixup: Better representations by interpolating hidden states. In ICML, 2019.
L. Wang, H. Lu, X. Ruan, and M.-H. Yang. Deep networks for saliency detection via local estimation and global search. In CVPR, 2015.
P. Warden. Launching the Speech Commands dataset, 2017. URL https://research.googleblog.com/2017/08/launching-speech-commands-dataset.html.
Y. Wen, G. Jerfel, R. Muller, M. W. Dusenberry, J. Snoek, B. Lakshminarayanan, and D. Tran. Improving calibration of BatchEnsemble with data augmentation. In ICML Workshop on Uncertainty and Robustness in Deep Learning, 2020.
T. Windheuser, H. Ishikawa, and D. Cremers. Generalized roof duality for multi-label optimization: Optimal lower bounds and persistency. In ECCV, 2012.
S. J. Wright. Coordinate descent algorithms. Mathematical Programming, 151(1):3-34, 2015.
S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks. In CVPR, 2017.
S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo. CutMix: Regularization strategy to train strong classifiers with localizable features. In ICCV, 2019.
S. Zagoruyko and N. Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
C. Zhang, S. Bengio, M. Hardt, B. Recht, and O. Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz. mixup: Beyond empirical risk minimization. In ICLR, 2018.
R. Zhao, W. Ouyang, H. Li, and X. Wang. Saliency detection by multi-context deep learning. In CVPR, 2015.
B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning deep features for discriminative localization. In CVPR, 2016.

# A SUPPLEMENTARY NOTES FOR OBJECTIVE

# A.1 NOTATIONS

In Table 5, we provide a summary of notations in the main text.

<table><tr><td>Notation</td><td>Meaning</td></tr><tr><td>$m$, $m'$, $n$</td><td># inputs, # outputs, dimension of data</td></tr><tr><td>$c_k \in \mathbb{R}^m$ ($1 \leq k \leq n$)</td><td>labeling cost for $m$ input sources at the $k^{\mathrm{th}}$ location</td></tr><tr><td>$z_{j,k} \in \mathcal{L}^m$ ($1 \leq j \leq m'$, $1 \leq k \leq n$)</td><td>input source ratio at the $k^{\mathrm{th}}$ location of the $j^{\mathrm{th}}$ output</td></tr><tr><td>$z_j \in \mathcal{L}^{m \times n}$</td><td>labeling of the $j^{\mathrm{th}}$ output</td></tr><tr><td>$o_j \in \mathbb{R}^m$</td><td>aggregation of the labeling of the $j^{\mathrm{th}}$ output</td></tr><tr><td>$A \in \mathbb{R}^{m \times m}$</td><td>compatibility between inputs</td></tr></table>

Table 5: A summary of notations.

# A.2 INTERPRETATION OF COMPATIBILITY

In our main objective, Equation (1), we introduce a compatibility matrix $A = (1 - \omega)I + \omega A_{c}$ between inputs. By minimizing $\langle o_j, o_{j'} \rangle_A$ for $j \neq j'$, we encourage each individual mixup example to have high compatibility within. Figure 5 explains how the compatibility term works by comparing simple cases. Note that our framework can reflect any compatibility measure for the optimal mixup.

![](images/184693becbdd2dd1ba4bd6ac1bb67a131960f1d84708d941e0179e4212c613c2.jpg)

![](images/7d7f17dd70439a94eafbd5dde97b8732beaa8e8fdcbb22dca518aa95d6080a49.jpg)
Figure 5: Let us consider Co-Mixup with three inputs and two outputs. The figure shows two Co-Mixup results. Each input is denoted by a number and color-coded. Let us assume that input 1 and input 2 are more compatible, i.e., $A_{12} \gg A_{23}$ and $A_{12} \gg A_{13}$. Then, the left Co-Mixup result has a larger inner-product value $\langle o_1, o_2 \rangle_A$ than the right. Thus, the mixup result on the right has higher within-output compatibility than the result on the left.

293
+ # B PROOFS
294
+
295
+ # B.1 PROOF OF PROPOSITION 1
296
+
297
+ Lemma 1. For a positive semi-definite matrix $A \in \mathbb{R}_+^{m \times m}$ and $x, x' \in \mathbb{R}^m$ , $s(x, x') = x^\top A x'$ is pairwise supermodular.
298
+
299
+ Proof. $s(x,x) + s(x',x') - s(x,x') - s(x',x) = x^{\intercal}Ax + x'^{\intercal}Ax' - 2x^{\intercal}Ax' = (x - x')^{\intercal}A(x - x')$ , and because $A$ is positive semi-definite, $(x - x')^{\intercal}A(x - x') \geq 0$ .
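The identity in this proof can be checked numerically. The sketch below (our own illustration) constructs a positive semi-definite $A$ as $BB^\top$ and verifies that the supermodularity gap equals $(x - x')^\top A (x - x')$ and is non-negative:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
A = B @ B.T                      # B B^T is positive semi-definite by construction

x, xp = rng.normal(size=4), rng.normal(size=4)
s = lambda u, v: u @ A @ v       # the bilinear form s(x, x') = x^T A x'

gap = s(x, x) + s(xp, xp) - s(x, xp) - s(xp, x)
assert np.isclose(gap, (x - xp) @ A @ (x - xp))  # the identity in the proof
assert gap >= 0                                  # PSD implies a non-negative gap
```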
300
+
301
+ Proposition 1. The compatibility term $f_{c}$ in Equation (1) is pairwise supermodular for every pair of $(z_{j_1,k}, z_{j_2,k})$ if $A$ is positive semi-definite.
302
+
303
+ Proof. For $j_{1}$ and $j_{2}$, s.t., $j_{1} \neq j_{2}$, $\max \left\{\tau, \sum_{j=1}^{m'} \sum_{j'=1, j' \neq j}^{m'} (\sum_{k=1}^{n} z_{j,k})^{\intercal} A (\sum_{k=1}^{n} z_{j',k})\right\} = \max \{\tau, c + 2z_{j_{1},k}^{\intercal} A z_{j_{2},k}\} = -\min \{-\tau, -c - 2z_{j_{1},k}^{\intercal} A z_{j_{2},k}\}$ , for some constant $c \in \mathbb{R}$ that does not depend on $z_{j_1,k}$ or $z_{j_2,k}$. By Lemma 1, $-z_{j_{1},k}^{\intercal} A z_{j_{2},k}$ is pairwise submodular, and because a budget additive function preserves submodularity (Horel and Singer, 2016), $\min \{-\tau, -c - 2z_{j_{1},k}^{\intercal} A z_{j_{2},k}\}$ is pairwise submodular with respect to $(z_{j_{1},k}, z_{j_{2},k})$. Its negation, and hence $f_c$, is therefore pairwise supermodular.
304
+
305
+ # B.2 PROOF OF PROPOSITION 3
306
+
307
+ Proposition 3. The modularization given by Equation (2) satisfies the criteria.
308
+
309
+ Proof. Note that, by the definition in Equation (1), the compatibility between the $j^{th}$ and $j^{\prime th}$ outputs is $o_{j'}^{\mathsf{T}}A o_j$ , and thus, $v_{-j}^{\mathsf{T}}o_{j}$ represents the compatibility between the $j^{th}$ output and the others. In addition, $\| o_j\| _1 = \| \sum_{k = 1}^n z_{j,k}\| _1 = \sum_{k = 1}^n\| z_{j,k}\| _1 = n$ . In a local view, for the given $o_j$ , let us define a vector $o_j^\prime$ as $o_j^\prime [i_1] = o_j[i_1] + \alpha$ and $o_j^\prime [i_2] = o_j[i_2] - \alpha$ for $\alpha >0$ . Without loss of generality, let us assume $v_{-j}$ is sorted in ascending order. Then, $v_{-j}^{\mathsf{T}}o_{j}\leq v_{-j}^{\mathsf{T}}o_{j}^{\prime}$ implies $i_1 > i_2$ , and because the max function preserves the ordering, $\max \{\tau ',v_{-j}\}^{\mathsf{T}}o_{j}\leq \max \{\tau ',v_{-j}\}^{\mathsf{T}}o_{j}^{\prime}$ . Thus, criterion 1 is locally satisfied. Next, for $\tau^{\prime} > 0$ , $\max \{\tau ',v_{-j}\}^{\mathsf{T}}o_{j}\geq \tau^{\prime}\| o_{j}\|_{1} = \tau^{\prime}n$ . Let $i_0$ be an index such that for $i < i_0$ , $v_{-j}[i] < \tau '$ , and for $i\geq i_0$ , $v_{-j}[i]\geq \tau '$ . Then, for $o_j$ containing positive elements only at indices smaller than $i_0$ , $\max \{\tau ',v_{-j}\}^{\mathsf{T}}o_{j} = \tau^{\prime}n$ , which means there is no extra penalty from the compatibility. In this respect, the proposed modularization satisfies criterion 2 as well.
310
+
311
+ # C IMPLEMENTATION DETAILS
312
+
313
+ We perform the optimization after down-sampling the given inputs and saliency maps to the specified size $(4\times 4)$ . After the optimization, we up-sample the optimal labeling to match the size of the inputs and then mix the inputs according to the up-sampled labeling. For the saliency measure, we calculate the gradient of the training loss with respect to the input data and measure the $\ell_2$ norm of the gradient values across input channels (Simonyan et al., 2013). In classification experiments, we retain the gradient information of the network weights obtained from the saliency calculation for regularization. For the distance in the compatibility term, we measure the $\ell_1$ -distance between the most salient regions.
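A minimal sketch of this saliency pipeline, assuming the input gradient is already available as an array; the use of average pooling for the down-sampling and the normalization to sum to 1 follow the descriptions in this appendix, but the exact resizing operator is our assumption:

```python
import numpy as np

def saliency_map(grad, out_hw=(4, 4)):
    """Channel-wise l2 norm of input gradients, pooled to a small grid.

    grad: array of shape (C, H, W), the gradient of the training loss
    w.r.t. the input. Assumes H and W are divisible by the target size.
    """
    sal = np.sqrt((grad ** 2).sum(axis=0))          # l2 over channels -> (H, W)
    H, W = sal.shape
    h, w = out_hw
    pooled = sal.reshape(h, H // h, w, W // w).mean(axis=(1, 3))
    return pooled / pooled.sum()                    # normalize to sum to 1

grad = np.random.default_rng(0).normal(size=(3, 32, 32))
s = saliency_map(grad)
assert s.shape == (4, 4) and np.isclose(s.sum(), 1.0)
```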
314
+
315
+ For the initialization in Algorithm 1, we use i.i.d. samples from a categorical distribution with equal probabilities. We use the alpha-beta swap algorithm from pyGCO<sup>1</sup> to solve the minimization step in Algorithm 1, which can find local minima of a multi-label submodular function. However, the worst-case complexity of the alpha-beta swap algorithm with $|\mathcal{L}| = 2$ is $O(m^2 n)$ , and in the case of a mini-batch with 100 examples, iteratively applying the algorithm can become a bottleneck during network training. To mitigate the computational overhead, we partition the mini-batch into partitions of size 20 and then apply Algorithm 1 independently to each partition.
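The partitioning itself is straightforward; a sketch with a hypothetical `partition_batch` helper (the helper name is ours, not from the paper):

```python
def partition_batch(batch, part_size=20):
    """Split a mini-batch into independent partitions, each solved separately
    by Algorithm 1 to avoid the O(m^2 n) cost of one large problem."""
    return [batch[i:i + part_size] for i in range(0, len(batch), part_size)]

# A mini-batch of 100 examples becomes 5 independent size-20 problems.
parts = partition_batch(list(range(100)), part_size=20)
assert len(parts) == 5 and all(len(p) == 20 for p in parts)
```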
316
+
317
+ The theoretic worst-case complexity of the naive implementation of Algorithm 1 increases exponentially as $|\mathcal{L}|$ increases. Specifically, the worst-case theoretic complexity of the alpha-beta swap algorithm is proportional to the square of the number of possible states of $z_{j,k}$ , which is proportional to $m^{|\mathcal{L}| - 1}$ . To reduce the number of possible states in the multi-label case, we solve the problem for binary labels $(|\mathcal{L}| = 2)$ in the first inner cycle and then extend to multiple labels $(|\mathcal{L}| = 3)$ only for the currently assigned indices of each output in the subsequent cycles. This reduces the number of possible states to $O(m + \bar{m}^{|\mathcal{L}| - 1})$ where $\bar{m} \ll m$ . Here, $\bar{m}$ denotes the number of currently assigned indices for each output.
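The state-count reduction can be made concrete with a small sketch; since the text gives the counts only up to proportionality, the functions below track just the dominant terms:

```python
def naive_states(m, n_labels):
    # Dominant term of the naive state count: proportional to m^(|L| - 1).
    return m ** (n_labels - 1)

def reduced_states(m, m_bar, n_labels):
    # Binary pass over all m inputs, then multi-label only over the
    # m_bar currently assigned inputs: O(m + m_bar^(|L| - 1)).
    return m + m_bar ** (n_labels - 1)

# Classification setting: m = 20, |L| = 3, with e.g. m_bar = 3 assigned inputs.
assert naive_states(20, 3) == 400
assert reduced_states(20, 3, 3) == 29
```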
318
+
319
+ Based on the above implementation, we train models with Co-Mixup in a feasible time. For example, in the case of ImageNet training with 16 Intel I9-9980XE CPU cores and 4 NVIDIA RTX 2080Ti GPUs, Co-Mixup training requires 0.964s per batch, whereas vanilla training without mixup requires 0.374s per batch. Note that Co-Mixup requires saliency computation, and when we compare the algorithm with Puzzle Mix, which performs the same saliency computation, Co-Mixup is only about 1.04 times slower. Besides, as we down-sample the data to a fixed size regardless of the data dimension, the additional computation cost of Co-Mixup decreases in relative terms as the data dimension increases. Finally, we present the empirical time complexity of Algorithm 1 in Figure 6. As shown in the figure, Algorithm 1 empirically has linear time complexity in $|\mathcal{L}|$ . Note that we use $|\mathcal{L}| = 3$ in all of our main experiments, including the classification tasks.
320
+
321
+ ![](images/9a3b85bf0ab00ba0c7b20056fa7f813dbc3bcf11308b5359c9926d086568b4ab.jpg)
323
+
324
+ ![](images/58a771acc4c63695fd1f2a4563ddf135a93a9f4a0b96199e996c9e9a2dcac055.jpg)
326
+ Figure 6: Mean execution time (ms) of Algorithm 1 per batch of data over 100 trials. The left figure shows the time complexity of the algorithm over $|\mathcal{L}|$ and the right figure shows the time complexity over the number of inputs $m$ . Note that the parameters other than the one being varied are fixed to the classification experiment setting: $m = m' = 20$ , $n = 16$ , and $|\mathcal{L}| = 3$ .
327
+
328
+ # D ALGORITHM ANALYSIS
329
+
330
+ In this section, we perform comparison experiments to analyze the proposed Algorithm 1. First, we compare our algorithm with the exact brute force search algorithm to inspect the optimality of the algorithm. Next, we compare our algorithm with the BP algorithm proposed by Narasimhan and Bilmes (2005).
331
+
332
+ # D.1 COMPARISON WITH BRUTE FORCE
333
+
334
+ To inspect the optimality of the proposed algorithm, we compare the function values of the solutions of Algorithm 1, the brute force search algorithm, and random guess. Due to the exponential time complexity of the brute force search, we compare the algorithms in small-scale experiment settings. Specifically, we test the algorithms on settings of $(m = m' = 2, n = 4)$ , $(m = m' = 2, n = 9)$ , and $(m = m' = 3, n = 4)$ , varying the number of inputs and outputs $(m, m')$ and the dimension of data $n$ . We generate the unary cost matrix in the objective $f$ by sampling from a uniform distribution.
335
+
336
+ We perform experiments with 100 different random seeds and summarize the results in Table 6. From the table, we find that the proposed algorithm achieves near-optimal solutions over various settings. We also measure the relative error between ours and the random guess, $(f(z_{\mathrm{ours}}) - f(z_{\mathrm{brute}})) / (f(z_{\mathrm{random}}) - f(z_{\mathrm{brute}}))$ . As a result, our algorithm achieves a relative error of less than 0.01.
337
+
338
+ <table><tr><td>Configuration</td><td>Ours</td><td>Brute force (optimal)</td><td>Random guess</td><td>Rel. error</td></tr><tr><td>(m=m&#x27; = 2, n = 4)</td><td>1.91</td><td>1.90</td><td>3.54</td><td>0.004</td></tr><tr><td>(m=m&#x27; = 2, n = 9)</td><td>1.93</td><td>1.91</td><td>3.66</td><td>0.01</td></tr><tr><td>(m=m&#x27; = 3, n = 4)</td><td>2.89</td><td>2.85</td><td>22.02</td><td>0.002</td></tr></table>
339
+
340
+ Table 6: Mean function values of the solutions over 100 different random seeds. Rel. error means the relative error between ours and random guess.
341
+
342
+ # D.2 COMPARISON WITH ANOTHER BP ALGORITHM
343
+
344
+ We compare the proposed algorithm with the BP algorithm proposed by Narasimhan and Bilmes (2005). We evaluate the function values of the solutions found by each method using a random unary cost matrix drawn from a uniform distribution. We compare the methods over various scales by controlling the number of mixing inputs $m$ .
345
+
346
+ Table 7 shows the averaged function values with standard deviations in parentheses. As we can see from the table, the proposed algorithm achieves much lower function values and deviations than the method by Narasimhan and Bilmes (2005) over various settings. Note that the method by Narasimhan and Bilmes (2005) has high variance due to the randomization in the algorithm. We further compare the algorithm convergence time in Table 8. The experiments verify that the proposed algorithm is much faster and more effective than the method by Narasimhan and Bilmes (2005).
349
+
350
+ <table><tr><td>Algorithm</td><td>m = 5</td><td>m = 10</td><td>m = 20</td><td>m = 50</td><td>m = 100</td></tr><tr><td>Ours</td><td>3.1 (1.7)</td><td>15 (6.6)</td><td>54 (15)</td><td>205 (26)</td><td>469 (31)</td></tr><tr><td>Narasimhan</td><td>269 (58)</td><td>1071 (174)</td><td>4344 (701)</td><td>24955 (4439)</td><td>85782 (14337)</td></tr><tr><td>Random</td><td>809 (22)</td><td>7269 (92)</td><td>60964 (413)</td><td>980973 (2462)</td><td>7925650 (10381)</td></tr></table>
351
+
352
+ Table 7: Mean function values of the solutions over 100 different random seeds. We report the standard deviations in the parenthesis. Random represents the random guess algorithm.
353
+
354
+ <table><tr><td>Algorithm</td><td>m = 5</td><td>m = 10</td><td>m = 20</td><td>m = 50</td><td>m = 100</td></tr><tr><td>Ours</td><td>0.02</td><td>0.04</td><td>0.11</td><td>0.54</td><td>2.71</td></tr><tr><td>Narasimhan</td><td>0.06</td><td>0.09</td><td>0.27</td><td>1.27</td><td>7.08</td></tr></table>
355
+
356
+ Table 8: Convergence time (s) of the algorithms.
357
+
358
+ # E HYPERPARAMETER SETTINGS
359
+
360
+ We perform Co-Mixup after down-sampling the given inputs and saliency maps to pre-defined resolutions regardless of the size of the input data. In addition, we normalize the saliency of each input to sum to 1 and define the unary cost using the normalized saliency. As a result, we use an identical hyperparameter setting for the various datasets: CIFAR-100, Tiny-ImageNet, and ImageNet. In detail, we use $(\beta, \gamma, \eta, \tau) = (0.32, 1.0, 0.05, 0.83)$ for all experiments. Note that $\tau$ is normalized according to the size of inputs ( $n$ ) and the ratio of the number of inputs and outputs ( $m/m'$ ), and we use an isotropic Dirichlet distribution with $\alpha = 2$ for the prior $p$ . For the compatibility matrix, we use $\omega = 0.001$ .
361
+
362
+ For baselines, we tune the mixing ratio hyperparameter, i.e., the beta distribution parameter (Zhang et al., 2018), over $\{0.2, 1.0, 2.0\}$ for all experiments when a specific setting is not provided in the original paper.
363
+
364
+ # F ADDITIONAL EXPERIMENTAL RESULTS
365
+
366
+ # F.1 ANOTHER DOMAIN: SPEECH
367
+
368
+ In addition to the image domain, we conduct experiments on the speech domain, verifying that Co-Mixup works across domains. Following Zhang et al. (2018), we train LeNet (LeCun et al., 1998) and VGG-11 (Simonyan and Zisserman, 2014) on the Google commands dataset (Warden, 2017). The dataset consists of 65,000 utterances, each of which is about one second long and belongs to one out of 30 classes. We train each classifier for 30 epochs with the same training setting and data pre-processing as Zhang et al. (2018). In more detail, we use $160 \times 100$ normalized spectrograms of utterances for training. As shown in Table 9, we verify that Co-Mixup is still effective in the speech domain.
369
+
370
+ ![](images/72282f7e835060bde1174a6fb7d615c5d5244371480b40d0dd02afe4ebe1f3fa.jpg)
371
+ Figure 7: Confidence-Accuracy plots for classifiers on CIFAR-100. Note that ECE is calculated as the mean absolute difference between the two values.
374
+
375
+ <table><tr><td>Model</td><td>Vanilla</td><td>Input</td><td>Manifold</td><td>CutMix</td><td>Puzzle Mix</td><td>Co-Mixup</td></tr><tr><td>LeNet</td><td>11.24</td><td>10.83</td><td>12.33</td><td>12.80</td><td>10.89</td><td>10.67</td></tr><tr><td>VGG-11</td><td>4.84</td><td>3.91</td><td>3.67</td><td>3.76</td><td>3.70</td><td>3.57</td></tr></table>
376
+
377
+ Table 9: Top-1 classification test error on the Google commands dataset. We stop training if validation accuracy does not increase for 5 consecutive epochs.
378
+
379
+ # F.2 CALIBRATION
380
+
381
+ In this section, we summarize the expected calibration error (ECE) (Guo et al., 2017) of classifiers trained with various mixup methods. For evaluation, we use the official code provided by the TensorFlow-Probability library and set the number of bins to 10. As shown in Table 10, Co-Mixup classifiers have the lowest calibration error on CIFAR-100 and Tiny-ImageNet. As pointed out by Guo et al. (2017), the Vanilla networks make overconfident predictions; however, we find that mixup classifiers tend to make under-confident predictions (Figures 7 and 8). As shown in the figures, Co-Mixup successfully alleviates the over-confidence issue without suffering from under-confident predictions.
382
+
383
+ <table><tr><td>Dataset</td><td>Vanilla</td><td>Input</td><td>Manifold</td><td>CutMix</td><td>Puzzle Mix</td><td>Co-Mixup</td></tr><tr><td>CIFAR-100</td><td>3.9</td><td>17.7</td><td>13.1</td><td>5.6</td><td>7.5</td><td>1.9</td></tr><tr><td>Tiny-ImageNet</td><td>4.5</td><td>6.2</td><td>6.8</td><td>12.0</td><td>5.6</td><td>2.5</td></tr><tr><td>ImageNet</td><td>5.9</td><td>1.2</td><td>1.7</td><td>4.3</td><td>2.1</td><td>2.1</td></tr></table>
384
+
385
+ Table 10: Expected calibration error (%) of classifiers trained with various mixup methods on CIFAR-100, Tiny-ImageNet and ImageNet. Note that, on all three datasets, Co-Mixup outperforms all of the baselines in Top-1 accuracy.
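For reference, a minimal NumPy sketch of ECE with equal-width bins, a simplified stand-in for the TensorFlow-Probability implementation used in the evaluation (the binning and weighting follow the standard definition from Guo et al. (2017); the toy inputs are ours):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: population-weighted mean of |accuracy - confidence| per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()        # accuracy within the bin
            conf = confidences[in_bin].mean()   # mean confidence within the bin
            ece += in_bin.mean() * abs(acc - conf)
    return ece

conf = np.array([0.95, 0.85, 0.75, 0.58, 0.55])
hit = np.array([1, 1, 0, 1, 0], dtype=float)
e = expected_calibration_error(conf, hit)
assert 0.0 <= e <= 1.0
```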
386
+
387
+ # F.3 SENSITIVITY ANALYSIS
388
+
389
+ We measure the Top-1 error rate of the model while sweeping each hyperparameter to show its sensitivity, using PreActResNet18 on the CIFAR-100 dataset. We sweep the label smoothness coefficient $\beta \in \{0, 0.16, 0.32, 0.48, 0.64\}$, compatibility coefficient $\gamma \in \{0.6, 0.8, 1.0, 1.2, 1.4\}$, clipping level $\tau \in \{0.79, 0.81, 0.83, 0.85, 0.87\}$, compatibility matrix parameter $\omega \in \{0, 5 \cdot 10^{-4}, 10^{-3}, 5 \cdot 10^{-3}, 10^{-2}\}$, and the size of partition $m \in \{2, 4, 10, 20, 50\}$. Table 11 shows that Co-Mixup outperforms the best baseline (PuzzleMix, $20.62\%$) over a large pool of hyperparameters. We also find that the Top-1 error rate decreases as the partition size $m$ increases up to $m = 20$.
390
+
391
+ ![](images/7e8c6c684c1d08976c76101344bff182a0c34a6a0c36864384c48b4799ec6535.jpg)
392
+ Figure 8: Confidence-Accuracy plots for classifiers on Tiny-ImageNet.
393
+
394
+ ![](images/c038fec10f1f3328002016f08bb71e934ce14107701e0af63095d7e282b390d2.jpg)
395
+ Figure 9: Confidence-Accuracy plots for classifiers on ImageNet.
398
+
399
+ <table><tr><td rowspan="2">Smoothness coefficient, β</td><td>β = 0</td><td>β = 0.16</td><td>β = 0.32</td><td>β = 0.48</td><td>β = 0.64</td></tr><tr><td>20.29</td><td>20.18</td><td>19.87</td><td>20.35</td><td>21.24</td></tr><tr><td rowspan="2">Compatibility coefficient, γ</td><td>γ = 0.6</td><td>γ = 0.8</td><td>γ = 1.0</td><td>γ = 1.2</td><td>γ = 1.4</td></tr><tr><td>20.3</td><td>19.99</td><td>19.87</td><td>20.09</td><td>20.13</td></tr><tr><td rowspan="2">Clipping parameter, τ</td><td>τ = 0.79</td><td>τ = 0.81</td><td>τ = 0.83</td><td>τ = 0.85</td><td>τ = 0.87</td></tr><tr><td>20.45</td><td>20.14</td><td>19.87</td><td>20.15</td><td>20.23</td></tr><tr><td rowspan="2">Compatibility matrix parameter, ω</td><td>ω = 0</td><td>ω = 5·10-4</td><td>ω = 10-3</td><td>ω = 5·10-3</td><td>ω = 10-2</td></tr><tr><td>20.51</td><td>20.42</td><td>19.87</td><td>20.18</td><td>20.14</td></tr><tr><td rowspan="2">Partition size, m</td><td>m = 2</td><td>m = 4</td><td>m = 10</td><td>m = 20</td><td>m = 50</td></tr><tr><td>20.3</td><td>20.22</td><td>20.15</td><td>19.87</td><td>19.96</td></tr></table>
400
+
401
+ Table 11: Hyperparameter sensitivity results (Top-1 error rates) on CIFAR-100 with PreActResNet18. We report the mean values of three different random seeds.
402
+
403
+ # F.4 COMPARISON WITH NON-MIXUP BASELINES
404
+
405
+ We compare the generalization performance of Co-Mixup with non-mixup baselines, verifying that the proposed method achieves state-of-the-art generalization performance not only among mixup-based methods but also among other general regularization methods. VAT (Miyato et al., 2018) is a regularization method that uses a virtual adversarial loss, defined as the KL-divergence between the predictions on the input data and on a locally perturbed version of the input. We perform the experiment with VAT regularization on CIFAR-100 with PreActResNet18 for 300 epochs in the supervised setting. We tune $\alpha$ (coefficient of the VAT regularization term) in $\{0.001, 0.01, 0.1\}$, $\epsilon$ (radius of the $\ell_\infty$ ball) in $\{1, 2\}$, and the number of noise update steps in $\{0, 1\}$. Table 12 shows that Co-Mixup, which achieves a Top-1 error rate of $19.87\%$, outperforms the VAT regularization method.
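A simplified sketch of the VAT term with zero noise-update steps; the `predict` softmax model is hypothetical, and the perturbation is normalized to an $\ell_2$ ball for simplicity, whereas the experiments above use an $\ell_\infty$ radius and, for #update $\geq 1$, power iteration to find the adversarial direction:

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    """Row-wise KL divergence between two batches of distributions."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def vat_loss_no_update(predict, x, epsilon=1.0, rng=None):
    """VAT term with zero power-iteration steps: KL between the prediction
    on x and on x plus a random perturbation of l2 norm epsilon."""
    rng = rng if rng is not None else np.random.default_rng(0)
    r = rng.normal(size=x.shape)
    r = epsilon * r / np.linalg.norm(r)
    return kl_div(predict(x), predict(x + r)).mean()

def predict(x):
    # Hypothetical softmax "model" over 3 classes, for illustration only.
    z = np.stack([x.sum(-1), (x ** 2).sum(-1), -x.sum(-1)], axis=-1)
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

x = np.random.default_rng(1).normal(size=(5, 8))
loss = vat_loss_no_update(predict, x)
```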
406
+
407
+ <table><tr><td rowspan="2">VAT loss coefficient</td><td colspan="2">#update=0</td><td colspan="2">#update=1</td></tr><tr><td>ε = 1</td><td>ε = 2</td><td>ε = 1</td><td>ε = 2</td></tr><tr><td>α = 0.001</td><td>23.38</td><td>23.62</td><td>24.76</td><td>26.22</td></tr><tr><td>α = 0.01</td><td>23.14</td><td>23.67</td><td>28.33</td><td>31.95</td></tr><tr><td>α = 0.1</td><td>23.65</td><td>23.88</td><td>34.75</td><td>39.82</td></tr></table>
408
+
409
+ Table 12: Top-1 error rates of VAT on the CIFAR-100 dataset with PreActResNet18.
410
+
411
+ # G DETAILED DESCRIPTION FOR BACKGROUND CORRUPTION
412
+
413
+ We build the background corrupted test datasets based on the ImageNet validation dataset to compare the robustness of the pre-trained classifiers against background corruption.
414
+
415
+ ImageNet consists of images $\{x_{1},\dots,x_{M}\}$ , labels $\{y_{1},\dots,y_{M}\}$ , and the corresponding ground-truth bounding boxes $\{b_{1},\dots,b_{M}\}$ . We use the ground-truth bounding boxes to separate the foreground from the background. Let $z_{j}$ be a binary mask of image $x_{j}$ , which has value 1 inside the ground-truth bounding box $b_{j}$ . Then, we generate two types of background-corrupted samples $\tilde{x}_{j}$ by considering the following operations:
416
+
417
+ 1. Replacement with another image as $\tilde{x}_j = x_j\odot z_j + x_{i(j)}\odot (1 - z_j)$ for a random permutation $\{i(1),\dots,i(M)\}$ .
418
+ 2. Adding Gaussian noise as $\tilde{x}_j = x_j\odot z_j + \epsilon \odot (1 - z_j)$ , where $\epsilon \sim N(0,0.1^2)$ . We clip pixel values of $\tilde{x}_j$ to [0, 1].
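The two corruption operations can be sketched as follows; the NHWC layout, mask shape, and helper name are our assumptions:

```python
import numpy as np

def corrupt_background(images, masks, mode="replace", sigma=0.1, rng=None):
    """Keep the foreground (mask == 1) and corrupt the background.

    images: (M, H, W, C) array with pixel values in [0, 1].
    masks:  (M, H, W, 1) binary masks from ground-truth bounding boxes.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    if mode == "replace":                      # background from a permuted image
        perm = rng.permutation(len(images))
        background = images[perm]
    elif mode == "noise":                      # Gaussian background noise
        background = rng.normal(0.0, sigma, size=images.shape)
    else:
        raise ValueError(mode)
    out = images * masks + background * (1 - masks)
    return np.clip(out, 0.0, 1.0)              # clip pixel values to [0, 1]

imgs = np.random.default_rng(1).uniform(size=(4, 8, 8, 3))
m = np.zeros((4, 8, 8, 1)); m[:, 2:6, 2:6, :] = 1.0
out = corrupt_background(imgs, m, mode="noise")
assert out.shape == imgs.shape
assert np.allclose(out[:, 2:6, 2:6, :], imgs[:, 2:6, 2:6, :])  # foreground intact
```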
419
+
420
+ Figure 10 visualizes subsets of the background corruption test datasets.
421
+
422
+ ![](images/dced779f5020c8e2e68ae72c7f69462cddc497bab62351dbde91a4ec01f63f9a.jpg)
423
+ (a)
424
+
425
+ ![](images/caed1dfa2b2b241b0b0b4b3520ff1620ed313323bf1068154f46597dfd1fd481.jpg)
426
+ (b)
427
+ Figure 10: Each subfigure shows background corrupted samples used in the robustness experiment. (a) Replacement with another image in ImageNet. (b) Adding Gaussian noise. The red boxes on the images represent ground-truth bounding boxes.
428
+
429
+ # H CO-MIXUP GENERATED SAMPLES
430
+
431
+ In Figure 12, we present Co-Mixup generated image samples using images from ImageNet. We use an input batch consisting of 24 images, which is visualized in Figure 11. As can be seen from Figure 12, Co-Mixup efficiently mixes and matches salient regions of the given inputs, maximizing saliency, and creates diverse outputs. In Figure 12, inputs with the target objects on the left side are mixed with objects on the right side, and objects on the top side are mixed with objects on the bottom side. In Figure 13, we present Co-Mixup generated image samples with a larger $\tau$ using the same input batch. By increasing $\tau$ , we can encourage Co-Mixup to mix more inputs per output.
432
+
433
+ ![](images/b9da9551643dde0953ee54c3dc6c0eeefeb1302696c29887e9a962600a84e116.jpg)
434
+
435
+ ![](images/c5b90ef4ca30b42013196d0052ce16105c4d1ab57940c3eb9f4e73aac716165b.jpg)
436
+
437
+ ![](images/2fa0aecef0c043197da9f6c935d4eb378dcd373bc1e70565b94f6edee2ce2da0.jpg)
438
+
439
+ ![](images/236b35abc34ee2365b642abd5f64e987e8360b3be9299ebbc02f0cb75bb8f6ca.jpg)
440
+
441
+ ![](images/702e319039ad1454d064f3d085361984fd7eb385d2951e19875e9ef430cf2a22.jpg)
442
+
443
+ ![](images/78d845c3ec9d15eb10bfaabf193adda5cbc324e59ff3842ec84d9c6a8d556703.jpg)
444
+
445
+ ![](images/5e9a0e348bbfae40db02f74e399ded1690b1d80ff4b545af1477f18a80821610.jpg)
446
+
447
+ ![](images/ae2220b4a2c5113c4dae55aa9a2b74368bd7f9e034d20b9c9ccbfef06728d397.jpg)
448
+
449
+ ![](images/35eceac6e989929aa0dd1dbb20e4d0d8e64771c7eac7e679f5903c6b3e95a41c.jpg)
450
+
451
+ ![](images/5e9d46ca8237fd98fed027520b34726a944f6fc6ea8f0abb51f665181766f5a6.jpg)
452
+
453
+ ![](images/c391a84e8aa84e0261590f90804e6d633f4f3e9075d405b43a02a22f46c464fe.jpg)
454
+
455
+ ![](images/735a46b31fc1430a110fda80f116fbfffe3eff26502b7810c1d80a4540a98977.jpg)
456
+
457
+ ![](images/690d51fe2b005ca0b5292d83da9e30ab1bbdb3436132fba30a7ec9ed6b64ddb7.jpg)
458
+
459
+ ![](images/9029ad7603582ff3e08553ed40c82e2f57107f0dcb86df4c6f0587732f71bcd6.jpg)
460
+
461
+ ![](images/194bc3e42aed0ebde2706c452dcb445e72b87146996e997b3c5f93561aba2f64.jpg)
462
+
463
+ ![](images/87439656378372497b1cd9d8015507da4b7a0ccc1ce077f633454a1c6886cf1c.jpg)
464
+
465
+ ![](images/07a3bac17091de840b62305bb20c6f12d75f39cab32a3801fda41f456d96ef14.jpg)
466
+
467
+ ![](images/705a77b9ea1a7897027d2f46e0bd9b52ec2129ce2e39e9ea6ab8ab3b0d448c84.jpg)
468
+
469
+ ![](images/39c1faff9c3a5a811a63967fb665be5f5e54d6e42e7e113b9445fda69ee62d14.jpg)
470
+
471
+ ![](images/c8ea9f18e09ed57d1adda1acd868c5dd848fe8275e828b7663d1cf4ec6e5dc3a.jpg)
472
+
473
+ ![](images/16c444999ee056181034d460cc78980d3b56c716680f105bf74c5aba44c0cd31.jpg)
474
+
475
+ ![](images/70a08bb5d63cd3c1db5196a998d14af2125f5f6f544843520e9886ebabe7480c.jpg)
476
+ Figure 11: Input batch.
477
+
478
+ ![](images/bd2567e9c40a406d1efd956527d514f2d3ed9f7dc40f4a775cec145d5f15a507.jpg)
479
+
480
+ ![](images/7e8ad99da2567b09e6713d2f8a3dcfcc5b60a8946c82a06c927a552577310b86.jpg)
481
+
482
+ ![](images/0c994f12a8da440139bbf0a3baff855f6dc1e55d80055362e73215f89342f7e1.jpg)
483
+
484
+ ![](images/c9a4f1be88ca0f0da172431d7f53cfa346d6f20e3f8f02626c6eaad1c698bfda.jpg)
485
+
486
+ ![](images/45dc64105024c95fe1c8b730d2221eb974ead03ff5ddd09fa0706fe60eabbf2b.jpg)
487
+
488
+ ![](images/ad009411f4fd464d827f46cf83eb80a024d063e786330232ac4d0c404831ecf0.jpg)
489
+
490
+ ![](images/381dac665695583a985b6946f09649e89e21d7d762a5829077f57ff6b4adb4fc.jpg)
491
+
492
+ ![](images/80f0b943a75e993a694ca14c98c967ee5deb2102ece2343d9bef025850939e51.jpg)
493
+
494
+ ![](images/7eae258e7dfca3680db37adec8963acf111e5278ef199bbed900935eb31055f0.jpg)
495
+
496
+ ![](images/2b1959c7d2ad553df0132d7da9525cd78f013bfd1aae8a852844fc639a337fa1.jpg)
497
+
498
+ ![](images/295700d557e06cf8dd2d4eeed41dc4e878fa6eaa90185c462e9de36aa9af1125.jpg)
499
+
500
+ ![](images/a0d694868ce48181d03952386c5792ebb5805eb75800e0e9209c35708dad3c39.jpg)
501
+
502
+ ![](images/3da0cee24e99def9bba2881fc78735fa0f588446e760fd11b543815f75172c98.jpg)
503
+
504
+ ![](images/4601ee27443a3eb267b53c06d4667c3b1943d2cfd87f719faa33ac8a51b1fb15.jpg)
505
+
506
+ ![](images/b39c7bbaff9758544bb924339e9b8663b73149d028a1db2414886745cfe63138.jpg)
507
+
508
+ ![](images/04966ed2c4949a7ba5498eaee7f3d638f51efc8ac25af078ea4e67d5e0c21ee5.jpg)
509
+
510
+ ![](images/0d1e85325bd6ea76cdd4cd8448a8f9237c399b510a58ca827194c04d94eb5e39.jpg)
511
+
512
+ ![](images/56b07992a2d1d2a5890cc28e7bcab7ddbe863da33bb7715171e693693324a244.jpg)
513
+
514
+ ![](images/8bb5d96905a850e24b935df03a0ba652cddad7108fb2a49fada648516c564b8b.jpg)
515
+
516
+ ![](images/c002ddb2468cc89a41bd4e2b3d13e044c8ad4acee6f15736e04939bddd925705.jpg)
517
+
518
+ ![](images/265486140f2d46d020ddf6fec45d33a2fc1446a3897e7f5490b99309f8b373ee.jpg)
519
+
520
+ ![](images/23b6a850cb6dbd4e112ac6483882f5585947b3f537e0af91f3d989cf784a0a84.jpg)
521
+
522
+ ![](images/1df3c26ab082ee2d3c977b3af368956c7a39e2228d7a9567bf9bad47eaf26d00.jpg)
523
+
524
+ ![](images/3c8ca25eac34530f7cef208eea12f4f1690e014565b800a454fb4995651ac4a5.jpg)
525
+ Figure 12: Mixed output batch.
526
+
527
+ ![](images/fa1bcc873e91d3770747f8aa158f68ec5010f095b2f49ec1d02edd7e0988ba15.jpg)
528
+
529
+ ![](images/5d3df5e2ad49973a231ea07465227ddc28d6a767eefc2a941c3aacfcd29db374.jpg)
530
+
531
+ ![](images/778d6df2d64296b25db71e0b4b7e0917e28b8a3242ccb94a088aea3159ee70a4.jpg)
532
+
533
+ ![](images/73ea58dcc9b0b1238454df5243314bc2cde8636b1e37e96a8201b7fd2c8a4c53.jpg)
534
+
535
+ ![](images/aa9ca7786a8f9b927fa691feee23858f33b344cd8825b2deb9e11bc6ff73c297.jpg)
536
+
537
+ ![](images/49b9eaef3e14776fd3ae7eec00fa12b3cb2d6f310291df9480bd8f0a6ed45abb.jpg)
538
+
539
+ ![](images/e466dc8fd5710140a488709bde0077ff0abbc95764514e9a53d402bd668891e4.jpg)
540
+
541
+ ![](images/ee8f5140fdb50a02e31d6148e975d27103a3ea3e701e13b4bda63146bf634653.jpg)
542
+
543
+ ![](images/a3bbe17cfdec56977a3e7a8d18a2802b58f0331c0e41a5417011cf85744fea9f.jpg)
544
+
545
+ ![](images/12058ed21c758d9b8ee46a042cc695caca3605419f30ea61d759ca144c95a3db.jpg)
546
+
547
+ ![](images/c5c9a88bd9260b4227567801c285e7cbb0d93c35c94a9cb33b1abaab3bdaa084.jpg)
548
+
549
+ ![](images/da693ce3e22d6e342f2bc430b543d58e72bf83c5dd5d6c929708f6bd9ae0afd9.jpg)
550
+
551
+ ![](images/a28060abfa7e00143d9735ec057f83389932672f3915c61d4f01d31133fba431.jpg)
552
+
553
+ ![](images/8a1d51def6e3abe54ba155c2fdb081bddf04f728f16e83ddbc445a594c7ac1b3.jpg)
554
+
555
+ ![](images/7492b7b0b633e2c381ba2ec8b10ce8ffa9975b8b9b2107c83559fe58bef729df.jpg)
556
+
557
+ ![](images/45f88d679e6ed4baef5cd102685cf98580577a657bdb5087eb91d96c25b19e18.jpg)
558
+
559
+ ![](images/18682235af6e1520cb93042c8f37a43c3358075acd89bd8957b413c43258349a.jpg)
560
+
561
+ ![](images/70026595c094e8ac7cc7985d20e1412e0e5979550acaa39924e902c18f61739e.jpg)
562
+
563
+ ![](images/26ced5eba461bed8d59c2d4fc1a7d364f4ab173ad9dd957db31dd6b6e71be313.jpg)
564
+
565
+ ![](images/e0ffe4956de823ef76cd7b4b9a85d7be9fe0d5edc68fcc56cf7d5c698d008fea.jpg)
566
+
567
+ ![](images/1335a32199201487e1eb941509b244c554a3c550d54404bd2a367177c9c043d3.jpg)
568
+
569
+ ![](images/2ffa8c48448e40466b22d100866e5fc0c51c14741b73bbf1f9e631f92bb5c581.jpg)
570
+
571
+ ![](images/079e6155fa39f1fa9947e25018b4791b9ffd68f8f33733f957b9e7e2d55b9d7e.jpg)
572
+ Figure 13: Another mixed output batch with larger $\tau$ .
573
+
574
+ ![](images/2cbff3aaf8af07c648153286fe2db87d75269987e507390476e2f396c33f8d36.jpg)
575
+
576
+ ![](images/f5035b35bf58a13d884f148dc44a08ba09a590946dda12e837ea5e1a0dbb8eff.jpg)
577
+
578
+ ![](images/57906ec7f43d0557618f55c2eeb79b80f8fe0ab7d88fe1f16b0d0894e6bf034d.jpg)
comixupsaliencyguidedjointmixupwithsupermodulardiversity/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4cfa8cf9a2312634ee3ce903eab79be914aa776f4e65649e0c4ea6d7f94467c6
3
+ size 1533290
comixupsaliencyguidedjointmixupwithsupermodulardiversity/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ddb77b9cf515125d63e03779065c41fa37cf554fd2560baefaeef797654c52cf
3
+ size 805702
complexqueryansweringwithneurallinkpredictors/7bda427d-f1fd-4825-a924-13ee62b9fa36_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e695b566c0237644e8b830102777e13cba759083a4e27de50c27c2cbaba216a0
3
+ size 83886
complexqueryansweringwithneurallinkpredictors/7bda427d-f1fd-4825-a924-13ee62b9fa36_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:01bcf56e7e7c4fd3fd2134e6465b14d937690b8af5cc4d53363c5d073d9fad8b
3
+ size 101823
complexqueryansweringwithneurallinkpredictors/7bda427d-f1fd-4825-a924-13ee62b9fa36_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:774d10e9072863c672d1449b92f62d667f14fbee992f6a8386fa3f93fd7ccda4
3
+ size 1401714
complexqueryansweringwithneurallinkpredictors/full.md ADDED
@@ -0,0 +1,292 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # COMPLEX QUERY ANSWERING WITH NEURAL LINK PREDICTORS
2
+
3
+ Erik Arakelyan $^{1\dagger}$ , Daniel Daza $^{2,3,4\dagger}$ , Pasquale Minervini $^{1\dagger}$ , & Michael Cochez $^{2,4}$
4
+
5
+ <sup>1</sup>UCL Centre for Artificial Intelligence, University College London, United Kingdom
6
+ 2Vrije Universiteit Amsterdam, The Netherlands
7
+ <sup>3</sup>University of Amsterdam, The Netherlands
8
+ <sup>4</sup>Discovery Lab, Elsevier, The Netherlands
9
+
10
+ {erik.arakelyan.18,p.minervini}@ucl.ac.uk
11
+
12
+ {d.dazacruz,m.cochez}@vu.nl
13
+
14
+ # ABSTRACT
15
+
16
+ Neural link predictors are immensely useful for identifying missing edges in large scale Knowledge Graphs. However, it is still not clear how to use these models for answering more complex queries that arise in a number of domains, such as queries using logical conjunctions (∧), disjunctions (∨) and existential quantifiers (∃), while accounting for missing edges. In this work, we propose a framework for efficiently answering complex queries on incomplete Knowledge Graphs. We translate each query into an end-to-end differentiable objective, where the truth value of each atom is computed by a pre-trained neural link predictor. We then analyse two solutions to the optimisation problem, including gradient-based and combinatorial search. In our experiments, the proposed approach produces more accurate results than state-of-the-art methods — black-box neural models trained on millions of generated queries — without the need of training on a large and diverse set of complex queries. Using orders of magnitude less training data, we obtain relative improvements ranging from $8\%$ up to $40\%$ in Hits@3 across different knowledge graphs containing factual information. Finally, we demonstrate that it is possible to explain the outcome of our model in terms of the intermediate solutions identified for each of the complex query atoms. All our source code and datasets are available online<sup>1</sup>.
17
+
18
# 1 INTRODUCTION

Knowledge Graphs (KGs) are graph-structured knowledge bases, where knowledge about the world is stored in the form of relationships between entities. KGs are an extremely flexible and versatile knowledge representation formalism – examples include general-purpose knowledge bases such as DBpedia (Auer et al., 2007) and YAGO (Suchanek et al., 2007), domain-specific ones such as Bio2RDF (Dumontier et al., 2014) and Hetionet (Himmelstein et al., 2017) for life sciences and WordNet (Miller, 1992) for linguistics, and application-driven graphs such as the Google Knowledge Graph, Microsoft's Bing Knowledge Graph, and Facebook's Social Graph (Noy et al., 2019).

Neural link predictors (Nickel et al., 2016) tackle the problem of identifying missing edges in large KGs. However, in many complex domains, an open challenge is developing techniques for answering complex queries involving multiple and potentially unobserved edges, entities, and variables, rather than just single edges.

We focus on First-Order Logical Queries that use conjunctions $(\land)$, disjunctions $(\lor)$, and existential quantifiers $(\exists)$. A multitude of queries can be expressed by using such operators – for instance, the query "Which drugs $D$ interact with proteins associated with diseases $t_1$ or $t_2$?" can be rewritten as $?D: \exists P.\text{interacts}(D, P) \land [\text{assoc}(P, t_1) \lor \text{assoc}(P, t_2)]$, which can be answered via sub-graph matching.

![](images/0dac3cf201b91dd76c55d5578742c0f988ac59a0ea51e3c9216654c8c7c3f554.jpg)
![](images/96626f053ae26acca483213715dade1151b7dc955c63405d14ba4a554949a888.jpg)
![](images/33880d1a7ca31379defea98add72c8d72b0e12338a2dfee3e87e2fbef6dbb77c.jpg)

"Which directors directed actors that won either an Oscar or an Emmy?"
$?D: \exists A.\text{directs}(D, A) \land [\text{prize}(A, \text{Oscar}) \lor \text{prize}(A, \text{Emmy})]$

Figure 1: Examples of First-Order Logical Queries using existential quantification $(\exists)$, conjunction $(\land)$, and disjunction $(\lor)$ operators — their dependency graphs are $D \leftarrow P \leftarrow \{t_1, t_2\}$ and $D \leftarrow A \leftarrow \{\text{Oscar}, \text{Emmy}\}$, respectively.

However, plain sub-graph matching cannot capture semantic similarities between entities and relations, and cannot deal with missing facts in the KG. One possible solution consists of computing all missing entries via KG completion methods (Getoor & Taskar, 2007; De Raedt, 2008; Nickel et al., 2016), but that would materialise a significantly denser KG and would have intractable space and time complexity requirements (Krompaß et al., 2014).

In this work, we propose a framework for answering First-Order Logic Queries, where the query is compiled into an end-to-end differentiable function, modelling the interactions between its atoms. The truth value of each atom is computed by a neural link predictor (Nickel et al., 2016) – a differentiable model that, given an atomic query, returns the likelihood that the fact it represents holds true. We then propose two approaches for identifying the most likely values for the variable nodes in a query – either by continuous or by combinatorial optimisation.

Recent work on embedding logical queries on KGs (Hamilton et al., 2018; Daza & Cochez, 2020; Ren et al., 2020) has suggested that in order to go beyond link prediction, more elaborate architectures and a large and diverse dataset with millions of queries are required. In this work, we show that this is not the case, and demonstrate that it is possible to use an efficient neural link predictor trained for 1-hop query answering to generalise to up to 8 complex query structures. By doing so, we produce more accurate results than state-of-the-art models, while using orders of magnitude less training data.

Summarising, in comparison with other approaches in the literature such as Query2Box (Ren et al., 2020), we find that the proposed framework i) achieves significantly better or equivalent predictive accuracy on a wide range of complex queries, ii) is capable of out-of-distribution generalisation, since it is trained on simple queries only and evaluated on complex queries, and iii) is more explainable, since the intermediate results for its sub-queries and variable assignments can be used to explain any given answer.

# 2 EXISTENTIAL POSITIVE FIRST-ORDER LOGICAL QUERIES

A Knowledge Graph $\mathcal{G} \subseteq \mathcal{E} \times \mathcal{R} \times \mathcal{E}$ can be defined as a set of subject-predicate-object $\langle s, p, o \rangle$ triples, where each triple encodes a relationship of type $p \in \mathcal{R}$ between the subject $s \in \mathcal{E}$ and the object $o \in \mathcal{E}$ of the triple, where $\mathcal{E}$ and $\mathcal{R}$ denote the set of all entities and relation types, respectively. One can think of a Knowledge Graph as a labelled multi-graph, where entities $\mathcal{E}$ represent nodes, and edges are labelled with relation types $\mathcal{R}$. Without loss of generality, a Knowledge Graph can be represented as a First-Order Logic Knowledge Base, where each triple $\langle s, p, o \rangle$ denotes an atomic formula $p(s, o)$, with $p \in \mathcal{R}$ a binary predicate and $s, o \in \mathcal{E}$ its arguments.

Conjunctive queries are a sub-class of First-Order Logical queries that use existential quantification $(\exists)$ and conjunction $(\land)$ operations. We consider conjunctive queries $\mathcal{Q}$ in the following form:

$$
\begin{aligned}
\mathcal{Q}[A] \triangleq\; ?A &: \exists V_1, \dots, V_m .\, e_1 \wedge \dots \wedge e_n \\
\text{where} \quad e_i &= p(c, V), \text{ with } V \in \{A, V_1, \dots, V_m\},\ c \in \mathcal{E},\ p \in \mathcal{R} \\
\text{or} \quad e_i &= p(V, V'), \text{ with } V, V' \in \{A, V_1, \dots, V_m\},\ V \neq V',\ p \in \mathcal{R}.
\end{aligned} \tag{1}
$$

In Eq. (1), the variable $A$ is the target of the query, $V_{1},\ldots ,V_{m}$ denote the bound variable nodes, while $c\in \mathcal{E}$ represent the input anchor nodes. Each $e_i$ denotes a logical atom, with either one $(p(c,V))$ or two variables $(p(V,V'))$, and $e_1\wedge \dots \wedge e_n$ denotes a conjunction between $n$ atoms.

The goal of answering the logical query $\mathcal{Q}$ consists of finding a set of entities $[\![\mathcal{Q}]\!] \subseteq \mathcal{E}$ such that $a \in [\![\mathcal{Q}]\!]$ iff $\mathcal{Q}[a]$ holds true, where $[\![\mathcal{Q}]\!]$ is the answer set of the query $\mathcal{Q}$.

As illustrated in Fig. 1, the dependency graph of a conjunctive query $\mathcal{Q}$ is a graph representation of $\mathcal{Q}$ where nodes correspond to variable or non-variable atom arguments in $\mathcal{Q}$ and edges correspond to atom predicates. We follow Hamilton et al. (2018) and focus on valid conjunctive queries – i.e. the dependency graph needs to be a directed acyclic graph, where anchor entities correspond to source nodes, and the query target $A$ is the unique sink node.

Example 2.1 (Conjunctive Query). Consider the query "Which drugs interact with proteins associated with the disease $t$?". This query can be formalised as a conjunctive query $\mathcal{Q}$ such as $?D : \exists P.\text{interacts}(D, P) \land \text{assoc}(P, t)$, where $t$ is an input anchor node, the variable $D$ is the target of the query, $P$ is a bound variable node, and the dependency graph is $D \leftarrow P \leftarrow t$. The answer set $[\![\mathcal{Q}]\!]$ of $\mathcal{Q}$ corresponds to the set of all drugs in $\mathcal{E}$ interacting with proteins associated with $t$.

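On a fully-observed KG, the answer set of Example 2.1 can be computed by plain sub-graph matching, i.e. by enumerating variable-to-entity substitutions and keeping those under which every atom is a fact of the graph. A minimal sketch – the toy entities and triples below are invented for illustration:

```python
# Toy KG: a set of (subject, predicate, object) triples -- invented for illustration.
kg = {
    ("aspirin", "interacts", "COX1"),
    ("ibuprofen", "interacts", "COX2"),
    ("warfarin", "interacts", "VKORC1"),
    ("COX1", "assoc", "inflammation"),
    ("COX2", "assoc", "inflammation"),
    ("VKORC1", "assoc", "thrombosis"),
}
entities = {e for s, _, o in kg for e in (s, o)}

def answer(t):
    """?D : exists P . interacts(D, P) AND assoc(P, t), by sub-graph matching."""
    return {D for D in entities for P in entities
            if (D, "interacts", P) in kg and (P, "assoc", t) in kg}

print(sorted(answer("inflammation")))  # ['aspirin', 'ibuprofen']
```

On an incomplete KG this procedure misses any answer that depends on an unobserved edge, which is precisely the gap addressed in the rest of the paper.
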
Handling Disjunctions So far, we focused on conjunctive queries defined using the existential quantification (∃) and conjunction (∧) logical operators. Our aim is to answer a wider class of logical queries, namely Existential Positive First-Order (EPFO) queries (Dalvi & Suciu, 2004), which, in addition to existential quantification and conjunction, also involve disjunction (∨). We follow Ren et al. (2020) and, without loss of generality, transform a given EPFO query into Disjunctive Normal Form (DNF, Davey & Priestley, 2002), i.e. a disjunction of conjunctive queries.

Example 2.2 (Disjunctive Normal Form). Consider the following variant of the query in Example 2.1: "Which drugs interact with proteins associated with the diseases $t_1$ or $t_2$?". This query can be formalised as an EPFO query $\mathcal{Q}$ such as $?D : \exists P.\text{interacts}(D, P) \land [\text{assoc}(P, t_1) \lor \text{assoc}(P, t_2)]$. We can transform $\mathcal{Q}$ into the following, equivalent DNF query: $?D : \exists P. [\text{interacts}(D, P) \land \text{assoc}(P, t_1)] \lor [\text{interacts}(D, P) \land \text{assoc}(P, t_2)]$.

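The rewriting in Example 2.2 is just the distribution of ∧ over ∨. A sketch of this DNF conversion for negation-free query trees, with atoms represented as opaque strings (a hypothetical encoding chosen for illustration):

```python
# Minimal DNF conversion for negation-free (EPFO) query trees. A query is an
# atom (a plain string, a hypothetical encoding) or a tuple ("and"/"or", l, r).
def to_dnf(q):
    """Return a disjunction of conjunctions: a list of lists of atoms."""
    if not isinstance(q, tuple):
        return [[q]]                   # a single atom is a trivial DNF
    op, left, right = q
    l_dnf, r_dnf = to_dnf(left), to_dnf(right)
    if op == "or":
        return l_dnf + r_dnf           # union of the two disjunctions
    # "and": distribute the conjunction over the disjunctions on both sides
    return [cl + cr for cl in l_dnf for cr in r_dnf]

# Example 2.2: interacts(D,P) AND (assoc(P,t1) OR assoc(P,t2))
q = ("and", "interacts(D,P)", ("or", "assoc(P,t1)", "assoc(P,t2)"))
print(to_dnf(q))
# [['interacts(D,P)', 'assoc(P,t1)'], ['interacts(D,P)', 'assoc(P,t2)']]
```
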
In our framework, given a DNF query $\mathcal{Q}$, for each of its conjunctive sub-queries we produce a score for all the entities representing the likelihood that they answer that sub-query. Finally, such scores are aggregated using a t-conorm — a continuous relaxation of the logical disjunction.

# 3 COMPLEX QUERY ANSWERING VIA OPTIMISATION

We propose a framework for answering EPFO logical queries in the presence of missing edges. Given a query $\mathcal{Q}$, we define the score of a target node $a \in \mathcal{E}$ as a candidate answer as a function of the scores of all atomic queries in $\mathcal{Q}$, given a variable-to-entity substitution for all variables in $\mathcal{Q}$.

Each variable is mapped to an embedding vector, which can either correspond to an entity $c \in \mathcal{E}$ or to a virtual entity. The score of each of the query atoms is determined individually using a neural link predictor (Nickel et al., 2016). Then, the score of the query with respect to a given candidate answer $\mathcal{Q}[a]$ is computed by aggregating all atom scores using t-norms and t-conorms – continuous relaxations of the logical conjunction and disjunction operators.

Neural Link Prediction A neural link predictor is a differentiable model where atom arguments are first mapped into a $k$-dimensional embedding space, and then used for producing a score for the atom. More formally, given a query atom $p(s,o)$, where $p \in \mathcal{R}$ and $s, o \in \mathcal{E}$, the score for $p(s,o)$ is computed as $\phi_p(\mathbf{e}_s, \mathbf{e}_o)$, where $\mathbf{e}_s, \mathbf{e}_o \in \mathbb{R}^k$ are the embedding vectors of $s$ and $o$, and $\phi_p: \mathbb{R}^k \times \mathbb{R}^k \mapsto [0,1]$ is a scoring function computing the likelihood that entities $s$ and $o$ are related by the relationship $p$.

In our experiments, as the neural link predictor we use ComplEx (Trouillon et al., 2016), regularised using a variational approximation of the tensor nuclear $p$-norm proposed by Lacroix et al. (2018).

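For intuition, the ComplEx scoring function is the real part of a trilinear product of complex-valued embeddings, $\mathrm{Re}(\langle \mathbf{e}_s, \mathbf{w}_p, \overline{\mathbf{e}_o} \rangle)$. Below is a sketch using Python's built-in complex numbers; the sigmoid is one simple way of mapping the raw score into $[0,1]$, and the toy embeddings are invented:

```python
import math

def complex_score(e_s, w_p, e_o):
    """ComplEx: Re(<e_s, w_p, conj(e_o)>), squashed into [0, 1] with a sigmoid."""
    raw = sum((s * p * o.conjugate()).real for s, p, o in zip(e_s, w_p, e_o))
    return 1.0 / (1.0 + math.exp(-raw))

# Toy 2-dimensional complex embeddings (invented).
e_s = [1.0 + 0.5j, -0.3 + 1.0j]
w_p = [0.8 + 0.0j, 0.2 + 0.9j]
e_o = [0.9 - 0.2j, 1.0 + 0.4j]
print(complex_score(e_s, w_p, e_o))
```

With a purely real relation embedding the score is symmetric in $s$ and $o$; the imaginary parts are what allow ComplEx to model asymmetric relations.
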
T-Norms A $t$-norm $\top : [0,1] \times [0,1] \mapsto [0,1]$ is a generalisation of conjunction in logic (Klement et al., 2000; 2004). Some examples include the Gödel $t$-norm $\top_{\min}(x,y) = \min\{x,y\}$, the product $t$-norm $\top_{\operatorname{prod}}(x,y) = x \cdot y$, and the Łukasiewicz $t$-norm $\top_{\operatorname{Luk}}(x,y) = \max\{0,x+y-1\}$. Analogously, $t$-conorms are dual to $t$-norms for disjunctions – given a $t$-norm $\top$, the complementary $t$-conorm is defined by $\bot(x,y) = 1 - \top(1-x,1-y)$.

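These operators are straightforward to implement; a small sketch, including the duality that induces a t-conorm from each t-norm:

```python
def t_min(x, y):  return min(x, y)               # Goedel t-norm
def t_prod(x, y): return x * y                   # product t-norm
def t_luk(x, y):  return max(0.0, x + y - 1.0)   # Lukasiewicz t-norm

def dual_conorm(t_norm):
    """t-conorm induced by a t-norm: bot(x, y) = 1 - top(1 - x, 1 - y)."""
    return lambda x, y: 1.0 - t_norm(1.0 - x, 1.0 - y)

s_max  = dual_conorm(t_min)    # Goedel t-conorm: max{x, y}
s_prob = dual_conorm(t_prod)   # probabilistic sum: x + y - x*y
print(s_max(0.3, 0.7), s_prob(0.6, 0.5))
```
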
Continuous Reformulation of Complex Queries Let $\mathcal{Q}$ denote the following DNF query:

$$
\begin{aligned}
\mathcal{Q}[A] \triangleq\; ?A &: \exists V_1, \dots, V_m .\, \left(e_1^1 \wedge \dots \wedge e_{n_1}^1\right) \vee \dots \vee \left(e_1^d \wedge \dots \wedge e_{n_d}^d\right) \\
\text{where} \quad e_i^j &= p(c, V), \text{ with } V \in \{A, V_1, \dots, V_m\},\ c \in \mathcal{E},\ p \in \mathcal{R} \\
\text{or} \quad e_i^j &= p(V, V'), \text{ with } V, V' \in \{A, V_1, \dots, V_m\},\ V \neq V',\ p \in \mathcal{R}.
\end{aligned} \tag{2}
$$

We want to know the variable assignments that render $\mathcal{Q}$ true. To achieve this, we can cast this as an optimisation problem, where the aim is finding a mapping from variables to entities that maximises the score of $\mathcal{Q}$:

$$
\begin{aligned}
\operatorname*{arg\,max}_{A, V_1, \dots, V_m \in \mathcal{E}} \; &\left(e_1^1 \top \dots \top e_{n_1}^1\right) \perp \dots \perp \left(e_1^d \top \dots \top e_{n_d}^d\right) \\
\text{where} \quad e_i^j &= \phi_p(\mathbf{e}_c, \mathbf{e}_V), \text{ with } V \in \{A, V_1, \dots, V_m\},\ c \in \mathcal{E},\ p \in \mathcal{R} \\
\text{or} \quad e_i^j &= \phi_p(\mathbf{e}_V, \mathbf{e}_{V'}), \text{ with } V, V' \in \{A, V_1, \dots, V_m\},\ V \neq V',\ p \in \mathcal{R}
\end{aligned} \tag{3}
$$

where $\top$ and $\bot$ denote a t-norm and a t-conorm – a continuous generalisation of the logical conjunction and disjunction, respectively – and $\phi_p(\mathbf{e}_s,\mathbf{e}_o)\in [0,1]$ denotes the neural link prediction score for the atom $p(s,o)$. We write t-norms and t-conorms as infix operators since they are both associative.

Note that, in Eq. (3), the bound variable nodes $V_{1},\ldots ,V_{m}$ are only used through their embedding vector: to compute $\phi_p(\mathbf{e}_c,\mathbf{e}_V)$ we only use the embedding representation $\mathbf{e}_V\in \mathbb{R}^k$ of $V$, and do not need to know which entity the variable $V$ corresponds to. This means that we have two possible strategies for finding the optimal variable embeddings $\mathbf{e}_V\in \mathbb{R}^k$ with $V\in \{A,V_1,\dots ,V_m\}$ for maximising the objective in Eq. (3), namely continuous optimisation, where we optimise $\mathbf{e}_V$ using gradient-based optimisation, and combinatorial optimisation, where we search for the optimal variable-to-entity assignment.

# 3.1 COMPLEX QUERY ANSWERING VIA CONTINUOUS OPTIMISATION

One way we can solve the optimisation problem in Eq. (3) is by finding the variable embeddings that maximise the score of a complex query. This can be formalised as the following continuous optimisation problem:

$$
\begin{aligned}
\operatorname*{arg\,max}_{\mathbf{e}_A, \mathbf{e}_{V_1}, \dots, \mathbf{e}_{V_m} \in \mathbb{R}^k} \; &\left(e_1^1 \top \dots \top e_{n_1}^1\right) \perp \dots \perp \left(e_1^d \top \dots \top e_{n_d}^d\right) \\
\text{where} \quad e_i^j &= \phi_p(\mathbf{e}_c, \mathbf{e}_V), \text{ with } V \in \{A, V_1, \dots, V_m\},\ c \in \mathcal{E},\ p \in \mathcal{R} \\
\text{or} \quad e_i^j &= \phi_p(\mathbf{e}_V, \mathbf{e}_{V'}), \text{ with } V, V' \in \{A, V_1, \dots, V_m\},\ V \neq V',\ p \in \mathcal{R}
\end{aligned} \tag{4}
$$

In Eq. (4) we directly optimise the embedding representations $\mathbf{e}_A, \mathbf{e}_{V_1}, \ldots, \mathbf{e}_{V_m} \in \mathbb{R}^k$ of the variables $A, V_1, \ldots, V_m$, rather than exploring the combinatorial space of variable-to-entity mappings. In this way, we can tackle the maximisation problem in Eq. (4) using gradient-based optimisation methods, such as Adam (Kingma & Ba, 2015). Then, after identifying the optimal representations for the variables $A, V_1, \ldots, V_m$, we replace the query target embedding $\mathbf{e}_A$ with the embedding representations $\mathbf{e}_c \in \mathbb{R}^k$ of all entities $c \in \mathcal{E}$, and use the resulting complex query score to compute the likelihood that such entities answer the query.

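As an illustration of the continuous strategy, the sketch below maximises the product t-norm score of a 2-hop query by gradient ascent on the variable embedding. The toy scoring function, the invented embeddings, and the use of finite-difference gradients (in place of Adam on analytic gradients) are all simplifications for illustration:

```python
import math

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def phi(w_p, e_s, e_o):
    # Toy link predictor: sigmoid of a bilinear score with a diagonal relation matrix.
    return sig(sum(s * w * o for s, w, o in zip(e_s, w_p, e_o)))

e_c, e_a = [1.0, 0.2], [0.3, 1.0]   # anchor and candidate answer embeddings (invented)
w1, w2 = [1.0, -0.5], [0.8, 1.0]    # relation embeddings for p1 and p2 (invented)

def query_score(e_v):
    # Product t-norm over the two atoms p1(c, V) and p2(V, a).
    return phi(w1, e_c, e_v) * phi(w2, e_v, e_a)

# Gradient ascent on the variable embedding e_V, with finite-difference
# gradients standing in for Adam on the analytic gradient.
e_v, eps, lr = [0.0, 0.0], 1e-5, 0.5
for _ in range(300):
    grad = []
    for i in range(len(e_v)):
        bumped = list(e_v)
        bumped[i] += eps
        grad.append((query_score(bumped) - query_score(e_v)) / eps)
    e_v = [v + lr * g for v, g in zip(e_v, grad)]

print(query_score(e_v) > query_score([0.0, 0.0]))  # True: the query score improved
```
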
# 3.2 COMPLEX QUERY ANSWERING VIA COMBINATORIAL OPTIMISATION

Another way we tackle the optimisation problem in Eq. (3) is by greedily searching for a set of variable substitutions $S = \{A \gets a, V_1 \gets v_1, \dots, V_m \gets v_m\}$, with $a, v_1, \dots, v_m \in \mathcal{E}$, that maximises the complex query score. We do so by traversing the dependency graph of a query $\mathcal{Q}$ and, whenever we find an atom in the form $p(c, V)$, where $p \in \mathcal{R}$, $c$ is either an entity or a variable for which we already have a substitution, and $V$ is a variable for which we do not have a substitution yet, we replace $V$ with all entities in $\mathcal{E}$ and retain the top-$k$ entities $t \in \mathcal{E}$ that maximise $\phi_p(\mathbf{e}_c, \mathbf{e}_t)$ – i.e. the entities most likely to appear as a substitution of $V$ according to the neural link predictor.

The procedure is akin to beam search: as we traverse the dependency graph of the query, we keep a beam with the most promising variable-to-entity substitutions identified so far.

Example 3.1 (Combinatorial Optimisation). Consider the query "Which drugs $D$ interact with proteins associated with disease $t$?", which can be rewritten as $?D : \exists P.\text{interacts}(D, P) \land \text{assoc}(P, t)$. In order to answer this query via combinatorial optimisation, we first find the top-$k$ proteins $p$ that are most likely to substitute the variable $P$ in $\text{assoc}(P, t)$. Then, we search for the top-$k$ drugs $d$ that are most likely to substitute $D$ in $\text{interacts}(D, P)$, ending up with at most $k^2$ candidate drugs. Finally, we rank the candidate drugs $d$ by using the query score produced by the $t$-norm.

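A minimal sketch of this procedure for the 2-hop query above, with a toy score table standing in for the neural link predictor (all entities, triples, and scores are invented):

```python
# CQD-Beam-style answering of ?D : interacts(D, P) AND assoc(P, t) with beam size k.
entities = ["aspirin", "ibuprofen", "COX1", "COX2", "t"]
toy_scores = {
    ("COX1", "assoc", "t"): 0.9,
    ("COX2", "assoc", "t"): 0.7,
    ("aspirin", "interacts", "COX1"): 0.8,
    ("ibuprofen", "interacts", "COX2"): 0.6,
}

def score(s, p, o):
    return toy_scores.get((s, p, o), 0.01)  # unlisted triples get a low score

def beam_answer(t, k=2):
    # Step 1: retain the top-k substitutions for P in assoc(P, t).
    p_beam = sorted(((score(p, "assoc", t), p) for p in entities), reverse=True)[:k]
    # Step 2: for each retained P, retain the top-k substitutions for D in
    # interacts(D, P); combine the two atom scores with the product t-norm.
    candidates = {}
    for s_p, p in p_beam:
        d_beam = sorted(((score(d, "interacts", p), d) for d in entities),
                        reverse=True)[:k]
        for s_d, d in d_beam:
            candidates[d] = max(candidates.get(d, 0.0), s_p * s_d)
    # Rank the candidate answers by their complex query score.
    return sorted(candidates.items(), key=lambda kv: -kv[1])

print(beam_answer("t"))  # aspirin (0.9 * 0.8) first, then ibuprofen (0.7 * 0.6)
```
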
Note that scoring all possible entities can be done efficiently, in a single step on a GPU, by replacing $V$ with the entity embedding matrix. In our experiments we did not notice any computational bottlenecks due to the branching factor of longer queries; if needed, this could be handled by using alternative graph exploration strategies.

# 4 RELATED WORK

This work is closely related to approaches for learning to traverse Knowledge Graphs (Guu et al., 2015; Das et al., 2017; 2018), and more recent works on answering conjunctive queries via black-box neural models trained on generated queries (Hamilton et al., 2018; Daza & Cochez, 2020; Kotnis et al., 2020). The main difference is that we propose a tractable framework for handling a substantially larger subset of First-Order Logic queries.

More recently, Ren et al. (2020) proposed Query2Box, a neural model for Existential Positive First-Order logical queries, where queries are represented via box embeddings (Li et al., 2019). Such approaches for query answering require a dataset with millions of generated queries to generalise well – for instance, on the FB15k-237 dataset, approx. $15 \times 10^{4}$ training queries for each query type are used, resulting in approx. $1.2 \times 10^{6}$ training queries. Our framework, on the other hand, only uses a simple, state-of-the-art neural link predictor (Lacroix et al., 2018) trained on a set of 1-hop queries that is orders of magnitude smaller.

There is a large body of work on neural link predictors, which learn embeddings of entities and relations in KGs via a simple link prediction training objective (Bordes et al., 2013; Yang et al., 2015; Trouillon et al., 2016; Lacroix et al., 2018). Due to their design, they are often evaluated on answering 1-hop queries only, as their application to more complex queries does not follow directly from their formulation.

Previous work has considered using such embeddings for complex query answering, by partitioning the query graph and using an ad-hoc aggregation function to score candidate answers (Wang et al., 2018), or by using a probabilistic mixture model similar to DistMult (Friedman & den Broeck, 2020). In contrast, our proposed method answers a query in a single pass where aggregation steps are implemented with t-norms and t-conorms, which are continuous relaxations of conjunctions and disjunctions. Such t-norms have been proposed as differentiable formulations of logical operators suitable for gradient-based learning (Serafini & d'Avila Garcez, 2016; Guo et al., 2016; Minervini et al., 2017; van Krieken et al., 2020).

Further alternatives for using embeddings from neural link predictors, such as combinatorial optimisation, have been ruled out as unfeasible (Hamilton et al., 2018; Daza & Cochez, 2020). We show that this approach can scale well by reducing the set of possible intermediate answers, while outperforming the state-of-the-art in query answering.

The framework proposed in this paper is related to neural theorem provers (Rocktäschel & Riedel, 2017; Weber et al., 2019; Minervini et al., 2020a;b), a differentiable relaxation of the backward-chaining reasoning algorithm where comparison between symbols is replaced by a differentiable similarity function between their embedding vectors. During the reasoning process, neural theorem provers check which rules can be used for proving a given atomic query, and then whether the premise of such rules – itself a conjunctive query – is satisfied. The procedure they use for answering conjunctions is akin to the combinatorial optimisation procedure we propose in Section 3.2. The main difference is how atomic queries are answered – we use the ComplEx neural link predictor (Trouillon et al., 2016), while neural theorem provers use the maximum similarity value between a given atomic query and all facts in the Knowledge Graph, which has linear complexity in the number of triples in the graph.

![](images/07bcbd9b31a5ba24028371e9c99c1ec4cdeb4751c80dfe8591e144e088a3354b.jpg)
Figure 2: Query structures considered in our experiments, as proposed by Ren et al. (2020) – the naming of each query structure corresponds to projection $(\mathbf{p})$, intersection $(\mathbf{i})$, and union $(\mathbf{u})$, and reflects how they were implemented in the Query2Box model (Ren et al., 2020). An example of a pi query is $?T : \exists V.p(a, V) \land q(V, T) \land r(b, T)$, where $a$ and $b$ are anchor nodes, $V$ is a variable node, and $T$ is the query target node.

Table 1: Number of queries in the datasets used for the evaluation of query answering performance. Others indicates the number of queries for each of the remaining types.

<table><tr><td rowspan="2">Dataset</td><td colspan="2">Training</td><td colspan="2">Validation</td><td colspan="2">Test</td></tr><tr><td>1p</td><td>Others</td><td>1p</td><td>Others</td><td>1p</td><td>Others</td></tr><tr><td>FB15k</td><td>273,710</td><td>273,710</td><td>59,097</td><td>8,000</td><td>67,016</td><td>8,000</td></tr><tr><td>FB15k-237</td><td>149,689</td><td>149,689</td><td>20,101</td><td>5,000</td><td>22,812</td><td>5,000</td></tr><tr><td>NELL995</td><td>107,982</td><td>107,982</td><td>16,927</td><td>4,000</td><td>17,034</td><td>4,000</td></tr></table>

# 5 EXPERIMENTS

We described a method to answer a query by decomposing it into a continuous formulation, which we refer to as Continuous Query Decomposition (CQD). In this section, we demonstrate the effectiveness of CQD on the task of answering complex queries that cannot be answered using the incomplete KG, and report experimental results for continuous optimisation (CQD-CO, Section 3.1) and beam search (CQD-Beam, Section 3.2). We also provide a qualitative analysis of how our method can be used to obtain explanations for a given complex query answer. For the sake of comparison, we use the same datasets and evaluation metrics as Ren et al. (2020).

# 5.1 DATASETS

Following Ren et al. (2020), we evaluate our approach on FB15k (Bordes et al., 2013) and FB15k-237 (Toutanova & Chen, 2015) – two subsets of the Freebase knowledge graph – and NELL995 (Xiong et al., 2017), a KG generated by the NELL system (Mitchell et al., 2015). In order to compare with previous work on query answering, we use the queries generated by Ren et al. (2020) from these datasets. Dataset statistics are detailed in Table 1. We consider a total of 9 query types, including atomic queries, and 2 query types that contain disjunctions – the different query types are shown in Fig. 2. Note that in our framework, the neural link predictor is only trained on atomic queries, while the evaluation is carried out on the complete set of query types in Fig. 2.

Note that each query in Table 1 can have multiple answers, so the total number of training instances can be higher. For atomic queries (of type 1p), this number is equal to the number of edges in the training graph. Other methods like GQE (Hamilton et al., 2018) and Q2B (Ren et al., 2020) require a dataset with more query types. As an example, the FB15k dataset contains approximately 960k instances for 1p queries. When adding the 2p, 3p, 2i, and 3i queries employed by GQE and Q2B during training, this number increases to 65 million instances.

# 5.2 MODEL DETAILS

To obtain embeddings for the query answering task, we use ComplEx (Trouillon et al., 2016), regularised using a variational approximation of the nuclear tensor $p$-norm (Lacroix et al., 2018). We fix a learning rate of 0.1 and use the Adagrad optimiser. We then tune the hyperparameters of ComplEx on the validation set for each dataset, via grid search. We consider ranks (the size of the embedding) in $\{100, 200, 500, 1000\}$, batch size in $\{100, 500, 1000\}$, and regularisation coefficients in the interval $[10^{-4}, 0.5]$.

Table 2: Complex query answering results (H@3) across all query types; results for Graph Query Embedding (GQE, Hamilton et al., 2018) and Query2Box (Ren et al., 2020) are from Ren et al. (2020).

<table><tr><td>Method</td><td>Avg</td><td>1p</td><td>2p</td><td>3p</td><td>2i</td><td>3i</td><td>ip</td><td>pi</td><td>2u</td><td>up</td></tr><tr><td colspan="11">FB15k</td></tr><tr><td>GQE</td><td>0.384</td><td>0.630</td><td>0.346</td><td>0.250</td><td>0.515</td><td>0.611</td><td>0.153</td><td>0.320</td><td>0.362</td><td>0.271</td></tr><tr><td>Query2Box</td><td>0.484</td><td>0.786</td><td>0.413</td><td>0.303</td><td>0.593</td><td>0.712</td><td>0.211</td><td>0.397</td><td>0.608</td><td>0.330</td></tr><tr><td>CQD-CO</td><td>0.576</td><td>0.918</td><td>0.454</td><td>0.191</td><td>0.796</td><td>0.837</td><td>0.336</td><td>0.513</td><td>0.816</td><td>0.319</td></tr><tr><td>CQD-Beam</td><td>0.680</td><td>0.918</td><td>0.779</td><td>0.577</td><td>0.796</td><td>0.837</td><td>0.375</td><td>0.658</td><td>0.839</td><td>0.345</td></tr><tr><td colspan="11">FB15k-237</td></tr><tr><td>GQE</td><td>0.230</td><td>0.405</td><td>0.213</td><td>0.153</td><td>0.298</td><td>0.411</td><td>0.085</td><td>0.182</td><td>0.167</td><td>0.160</td></tr><tr><td>Query2Box</td><td>0.268</td><td>0.467</td><td>0.240</td><td>0.186</td><td>0.324</td><td>0.453</td><td>0.108</td><td>0.205</td><td>0.239</td><td>0.193</td></tr><tr><td>CQD-CO</td><td>0.272</td><td>0.512</td><td>0.213</td><td>0.131</td><td>0.352</td><td>0.457</td><td>0.146</td><td>0.222</td><td>0.281</td><td>0.132</td></tr><tr><td>CQD-Beam</td><td>0.290</td><td>0.512</td><td>0.288</td><td>0.221</td><td>0.352</td><td>0.457</td><td>0.129</td><td>0.249</td><td>0.284</td><td>0.121</td></tr><tr><td colspan="11">NELL995</td></tr><tr><td>GQE</td><td>0.248</td><td>0.417</td><td>0.231</td><td>0.203</td><td>0.318</td><td>0.454</td><td>0.081</td><td>0.188</td><td>0.200</td><td>0.139</td></tr><tr><td>Query2Box</td><td>0.306</td><td>0.555</td><td>0.266</td><td>0.233</td><td>0.343</td><td>0.480</td><td>0.132</td><td>0.212</td><td>0.369</td><td>0.163</td></tr><tr><td>CQD-CO</td><td>0.368</td><td>0.667</td><td>0.265</td><td>0.220</td><td>0.410</td><td>0.529</td><td>0.196</td><td>0.302</td><td>0.531</td><td>0.194</td></tr><tr><td>CQD-Beam</td><td>0.375</td><td>0.667</td><td>0.350</td><td>0.288</td><td>0.410</td><td>0.529</td><td>0.171</td><td>0.277</td><td>0.531</td><td>0.156</td></tr></table>

For query answering we experimented with the Gödel and product t-norms – we select the best t-norm for each query type according to the best validation accuracy. For CQD-CO, we optimise variable and target embeddings with Adam, using the same initialisation scheme as Lacroix et al. (2018), with an initial learning rate of 0.1 and a maximum of 1,000 iterations. In practice, we observed that the procedure usually converges in less than 300 iterations. For CQD-Beam, the beam size $k \in \{2^2, 2^3, \ldots, 2^8\}$ is found on a held-out validation set.

# 5.3 EVALUATION

As in Ren et al. (2020), for each test query, we assign a score to every entity in the graph, and use these scores to rank the entities. We then compute the Hits at 3 (H@3) metric, which measures the frequency with which the correct answer is contained in the top three entries of the ranking. Since a query can have multiple answers, we use the filtered setting (Bordes et al., 2013), where we filter out the other correct answers from the ranking before calculating the H@3.

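Concretely, the filtered rank of an answer only counts non-answer entities that score higher; a sketch (the entity names and scores below are invented for illustration):

```python
def filtered_hits_at_3(scores, answer, all_answers):
    """scores: entity -> score. Rank `answer` after filtering the other known
    correct answers out of the ranking, and test membership in the top 3."""
    competitors = [s for e, s in scores.items()
                   if e != answer and e not in all_answers]
    rank = 1 + sum(1 for s in competitors if s > scores[answer])
    return rank <= 3

scores = {"a": 0.9, "b": 0.8, "c": 0.7, "d": 0.6}
# "d" is ranked 4th raw, but 2nd once the other correct answers {"a", "b"} are filtered.
print(filtered_hits_at_3(scores, "d", {"a", "b", "d"}))  # True
```
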
As baselines we use two recent state-of-the-art models for complex query answering, namely Graph Query Embedding (GQE, Hamilton et al., 2018) and Query2Box (Q2B, Ren et al., 2020).

# 5.4 RESULTS

We detail the H@3 results for all query types in Table 2. We observe that, on average, CQD produces more accurate results than GQE and Q2B, while using orders of magnitude less training data. In particular, combinatorial optimisation in CQD-Beam consistently outperforms the baselines across all datasets.

The results for chained queries ($2p$ and $3p$) show that CQD-Beam is effective, even when increasing the length of the chain. The most difficult case corresponds to 3p queries, where the number of candidate variable substitutions increases due to the branching factor of the search procedure.

We also note that having more variables does not always translate into worse performance for CQD-CO: it yields the best ranking scores for $ip$ queries on FB15k-237, and for $ip$ and $pi$ queries on NELL995, and both such query types contain two variables.

+ "In what genres of movies did Martin Lawrence appear?"
179
+
180
+ ?G: $\exists M$ .perform(ML,M)∧genre(M,G)
181
+
182
+ ![](images/482a302b25b256f34f73f60059d950110bdbd439ab95fb9e6de83c5b1700b49a.jpg)
183
+
184
+ "What international organisations contain the country of nationality of Thomas Aquinas?"
185
+
186
+ ?O:3.C.nationality(TA,C)∧ member(C,O)
187
+
188
+ ![](images/adedbfccd6d09d771cced474277c11d0c0d751ee7824307c4503622f408b5117.jpg)
189
+ Figure 3: Intermediate variable assignments and ranks for two example queries, obtained with CQD-Beam. Correctness indicates whether the answer belongs to the ground-truth set of answers.
190
+
191
Query: $?G:\exists M.\mathrm{perform}(\mathbf{ML},M)\wedge \mathrm{genre}(M,G)$

<table><tr><td>M</td><td>G</td><td>Rank</td><td>Correctness</td></tr><tr><td rowspan="3">Do the Right Thing</td><td>Drama</td><td>1</td><td>✓</td></tr><tr><td>Comedy</td><td>4</td><td>✓</td></tr><tr><td>Crime Fiction</td><td>7</td><td>✓</td></tr><tr><td rowspan="3">National Security</td><td>Action</td><td>2</td><td>✓</td></tr><tr><td>Thriller</td><td>3</td><td>✓</td></tr><tr><td>Crime Fiction</td><td>5</td><td>✓</td></tr><tr><td rowspan="3">The Nutty Professor</td><td>Comedy</td><td>6</td><td>✓</td></tr><tr><td>Romantic Com.</td><td>8</td><td>✗</td></tr><tr><td>Romance Film</td><td>9</td><td>✗</td></tr></table>

Query: $?O:\exists C.\mathrm{nationality}(\mathbf{TA},C)\wedge \mathrm{memberOf}(C,O)$

<table><tr><td>C</td><td>O</td><td>Rank</td><td>Correctness</td></tr><tr><td rowspan="3">United States</td><td>NATO</td><td>1</td><td>✓</td></tr><tr><td>OECD</td><td>2</td><td>✓</td></tr><tr><td>EU</td><td>9</td><td>✓</td></tr><tr><td rowspan="3">United Kingdom</td><td>NATO</td><td>3</td><td>✓</td></tr><tr><td>OECD</td><td>4</td><td>✓</td></tr><tr><td>EU</td><td>5</td><td>✓</td></tr><tr><td rowspan="3">Germany</td><td>OECD</td><td>6</td><td>✓</td></tr><tr><td>EU</td><td>7</td><td>✓</td></tr><tr><td>WTO</td><td>8</td><td>✓</td></tr></table>

+ The results presented in Table 2 were obtained with a rank of 1,000, as they produced the best performance in the validation set. We present results for other values of the rank in Appendix A, where we observe that even with a rank of 100, CQD still outperforms baselines with a larger embedding size. Furthermore, in Appendix B, we report the number of seconds required to answer each query type, showing that CQD-Beam requires less than 50ms for all considered queries.
200
+
201
+ We also experimented with a variant of CQD-Beam that uses DistMult (Yang et al., 2015) as the link predictor - results are reported in Appendix C. As expected, results when using DistMult are slightly less accurate than when using ComplEx, while still being more accurate than those produced by GQE and Q2B.
202
+
203
+ # 5.5 EXPLAINING ANSWERS TO COMPLEX QUERIES
204
+
205
+ A useful property of our framework is its transparency when computing scores for distinct atoms in a query. Unlike GQE and Q2B – two neural models that encode a query into a vector via a set of non-linear transformations – our framework is able to produce an explanation for a given answer in terms of intermediate variable assignments.
206
+
207
+ Consider the following test query from the FB15k-237 knowledge graph: "In what genres of movies did Martin Lawrence appear?" This query can be formalised as $?G : \exists M. \text{perform}(\mathbf{ML}, M) \land \text{genre}(M, G)$ , where ML is an anchor node representing Martin Lawrence. The ground-truth answers to this query are 7 genres, including Drama, Comedy, and Crime Fiction. In Fig. 3 we show the intermediate assignments to the variable $M$ obtained when using CQD-Beam, together with the rank for each combination of movie $M$ and genre $G$ . We note that the assignments to the variable $M$ are correct, as these are movies in which Martin Lawrence appeared. Furthermore, these intermediate assignments lead to correct answers in the first seven positions of the ranking, all of which belong to the ground-truth set of answers.
208
+
209
+ In a second example, consider the following query: "What international organisations contain the country of nationality of Thomas Aquinas?" Its conjunctive form is $?O:\exists C.$ nationality(TA,C) ∧ memberOf(C,O), where TA is an anchor node representing Thomas Aquinas. The ground-truth answers to this query are the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), the North Atlantic Treaty Organisation (NATO), and the World Trade Organisation (WTO). As shown in Fig. 3, CQD-Beam yields the correct answers in the first four positions in the rank. However, by inspecting the intermediate assignments, we note that such correct answers are produced by an incorrect (although related) intermediate assignment, since the country
210
+
211
+ of nationality of Thomas Aquinas is Italy. By inspecting these decisions we can thus identify failure modes of our framework, even when it produces seemingly correct answers. This is in contrast with other neural black-box models for complex query answering outlined in Section 4, where such an analysis is not possible.
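The beam-search procedure behind these intermediate assignments can be sketched as follows. The `score` function below is a hypothetical stand-in for the neural link predictor, and entities are plain integer ids; the point is that each answer carries its best-scoring intermediate assignment, which serves as the explanation for that answer:

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities = 50

def score(head, relation):
    # Stand-in for a neural link predictor (hypothetical): one score in
    # [0, 1) for every candidate tail entity of (head, relation, ?).
    return rng.random(n_entities)

def beam_answer(anchor, rel1, rel2, k=3):
    """Answer ?G : exists M. rel1(anchor, M) and rel2(M, G) with beam size k."""
    first_hop = score(anchor, rel1)
    beam = np.argsort(-first_hop)[:k]            # top-k assignments to M
    answers = {}
    for m in beam:
        second_hop = score(m, rel2)
        for g in range(n_entities):
            # Gödel t-norm: the conjunction scores as the minimum.
            combined = min(first_hop[m], second_hop[g])
            # Keep the best intermediate assignment per candidate answer;
            # this assignment explains why the answer was produced.
            if g not in answers or combined > answers[g][0]:
                answers[g] = (combined, int(m))
    return sorted(answers.items(), key=lambda kv: -kv[1][0])

ranking = beam_answer(anchor=0, rel1="perform", rel2="genre")
best_answer, (best_score, intermediate) = ranking[0]
```

Inspecting `intermediate` for a given answer corresponds to reading off the movie assignment in the examples above.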
212
+
213
+ # 6 CONCLUSIONS
214
+
215
+ We proposed a framework — Complex Query Decomposition (CQD) — for answering Existential Positive First-Order logical queries by reasoning over sets of entities in embedding space. In our framework, answering a complex query is reduced to answering each of its sub-queries and aggregating the resulting scores via t-norms. The benefit of the method is that we only need to train a neural link prediction model on atomic queries to answer a given complex query, without needing to train on millions of generated complex queries. An added benefit is that we can explain each step of the query-answering process regardless of query complexity, instead of relying on a black-box neural query embedding model.
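As a concrete illustration of the aggregation step, two standard t-norms applied to hypothetical atom scores might look like:

```python
# Two common t-norms for aggregating per-atom scores of a conjunctive
# query; both map [0, 1] x [0, 1] -> [0, 1] and have 1 as identity.
def godel_tnorm(x, y):
    return min(x, y)

def product_tnorm(x, y):
    return x * y

# Illustrative atom scores for perform(ML, M) ∧ genre(M, G):
s_perform, s_genre = 0.9, 0.7
conj_godel = godel_tnorm(s_perform, s_genre)      # minimum: 0.7
conj_product = product_tnorm(s_perform, s_genre)  # product: ~0.63
```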
216
+
217
+ The proposed method is agnostic to the type of query, and is able to generalise without explicitly training on a specific variety of queries. Experimental results show that CQD produces significantly more accurate results than current state-of-the-art complex query answering methods on incomplete Knowledge Graphs.
218
+
219
+ # ACKNOWLEDGEMENTS
220
+
221
+ This research was supported by the European Union's Horizon 2020 research and innovation programme under grant agreement no. 875160. This project was partially funded by Elsevier's Discovery Lab. Finally, we thank NVIDIA for GPU donations.
222
+
223
+ # REFERENCES
224
+
225
+ Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. DBpedia: A nucleus for a web of open data. In ISWC/ASWC, volume 4825 of Lecture Notes in Computer Science, pp. 722-735. Springer, 2007.
226
+ Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In NIPS, pp. 2787-2795, 2013.
227
+ Nilesh N. Dalvi and Dan Suciu. Efficient query evaluation on probabilistic databases. In VLDB, pp. 864-875. Morgan Kaufmann, 2004.
228
+ Rajarshi Das, Arvind Neelakantan, David Belanger, and Andrew McCallum. Chains of reasoning over entities, relations, and text using recurrent neural networks. In EACL (1), pp. 132-141. Association for Computational Linguistics, 2017.
229
+ Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, Luke Vilnis, Ishan Durugkar, Akshay Krishnamurthy, Alex Smola, and Andrew McCallum. Go for a walk and arrive at the answer: Reasoning over paths in knowledge bases using reinforcement learning. In ICLR (Poster). OpenReview.net, 2018.
230
+ Brian A. Davey and Hilary A. Priestley. Introduction to Lattices and Order, Second Edition. Cambridge University Press, 2002.
231
+ Daniel Daza and Michael Cochez. Message passing query embedding. In ICML Workshop - Graph Representation Learning and Beyond, 2020. URL https://arxiv.org/abs/2002.02406.
232
+ Luc De Raedt. Logical and relational learning. Cognitive Technologies. Springer, 2008.
233
+ Michel Dumontier, Alison Callahan, Jose Cruz-Toledo, Peter Ansell, Vincent Emonet, François Belleau, and Arnaud Droit. Bio2RDF release 3: A larger, more connected network of linked data for the life sciences. In International Semantic Web Conference (Posters & Demos), volume 1272 of CEUR Workshop Proceedings, pp. 401-404. CEUR-WS.org, 2014.
234
+
235
+ Tal Friedman and Guy Van den Broeck. Symbolic querying of vector spaces: Probabilistic databases meets relational embeddings. In UAI, volume 124 of Proceedings of Machine Learning Research, pp. 1268-1277. AUAI Press, 2020.
236
+ Lise Getoor and Ben Taskar. Introduction to statistical relational learning. The MIT Press, 2007.
237
+ Shu Guo, Quan Wang, Lihong Wang, Bin Wang, and Li Guo. Jointly embedding knowledge graphs and logical rules. In EMNLP, pp. 192-202. The Association for Computational Linguistics, 2016.
238
+ Kelvin Guu, John Miller, and Percy Liang. Traversing knowledge graphs in vector space. In EMNLP, pp. 318-327. The Association for Computational Linguistics, 2015.
239
+ William L. Hamilton, Payal Bajaj, Marinka Zitnik, Dan Jurafsky, and Jure Leskovec. Embedding logical queries on knowledge graphs. In NeurIPS, pp. 2030-2041, 2018.
240
+ Daniel S. Himmelstein, Antoine Lizee, Christine Hessler, Leo Brueggeman, Sabrina L. Chen, Dexter Hadley, Ari Green, Pouya Khankhanian, and Sergio E. Baranzini. Systematic integration of biomedical knowledge prioritizes drugs for repurposing. bioRxiv, 2017. doi: 10.1101/087619. URL https://www.biorxiv.org/content/early/2017/08/31/087619.
241
+ Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In *ICLR (Poster)*, 2015.
242
+ Erich-Peter Klement, Radko Mesiar, and Endre Pap. Triangular Norms, volume 8 of Trends in Logic. Springer, 2000.
243
+ Erich-Peter Klement, Radko Mesiar, and Endre Pap. Triangular norms. position paper I: basic analytical and algebraic properties. Fuzzy Sets Syst., 143(1):5-26, 2004.
244
+ Bhushan Kotnis, Carolin Lawrence, and Mathias Niepert. Answering complex queries in knowledge graphs with bidirectional sequence encoders. CoRR, abs/2004.02596, 2020.
245
+ Denis Krompaß, Maximilian Nickel, and Volker Tresp. Querying factorized probabilistic triple databases. In International Semantic Web Conference (2), volume 8797 of Lecture Notes in Computer Science, pp. 114-129. Springer, 2014.
246
+ Timothée Lacroix, Nicolas Usunier, and Guillaume Obozinski. Canonical tensor decomposition for knowledge base completion. In ICML, volume 80 of Proceedings of Machine Learning Research, pp. 2869-2878. PMLR, 2018.
247
+ Xiang Li, Luke Vilnis, Dongxu Zhang, Michael Boratko, and Andrew McCallum. Smoothing the geometry of probabilistic box embeddings. In *ICLR*. OpenReview.net, 2019.
248
+ George A. Miller. WordNet: a lexical database for English. In HLT. Morgan Kaufmann, 1992.
249
+ Pasquale Minervini, Thomas Demeester, Tim Rocktäschel, and Sebastian Riedel. Adversarial sets for regularising neural link predictors. In UAI. AUAI Press, 2017.
250
+ Pasquale Minervini, Matko Bosnjak, Tim Rocktäschel, Sebastian Riedel, and Edward Grefenstette. Differentiable reasoning on large knowledge bases and natural language. In AAAI, pp. 5182-5190. AAAI Press, 2020a.
251
+ Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, and Tim Rocktäschel. Learning reasoning strategies in end-to-end differentiable proving. In ICML, Proceedings of Machine Learning Research. PMLR, 2020b.
252
+ Tom M. Mitchell, William W. Cohen, Estevam R. Hruschka Jr., Partha Pratim Talukdar, Justin Betteridge, Andrew Carlson, Bhavana Dalvi Mishra, Matthew Gardner, Bryan Kisiel, Jayant Krishnamurthy, Ni Lao, Kathryn Mazaitis, Thahir Mohamed, Ndapandula Nakashole, Emmanouil A. Platanios, Alan Ritter, Mehdi Samadi, Burr Settles, Richard C. Wang, Derry Wijaya, Abhinav Gupta, Xinlei Chen, Abdulhair Saparov, Malcolm Greaves, and Joel Welling. Never-ending learning. In AAAI, pp. 2302-2310. AAAI Press, 2015.
253
+ Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11-33, 2016.
254
+
255
+ Natalya Fridman Noy, Yuqing Gao, Anshu Jain, Anant Narayanan, Alan Patterson, and Jamie Taylor. Industry-scale knowledge graphs: lessons and challenges. Commun. ACM, 62(8):36-43, 2019.
256
+ Hongyu Ren, Weihua Hu, and Jure Leskovec. Query2box: Reasoning over knowledge graphs in vector space using box embeddings. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=BJgr4kSFDS.
257
+ Tim Rocktäschel and Sebastian Riedel. End-to-end differentiable proving. In NIPS, pp. 3788-3800, 2017.
258
+ Luciano Serafini and Artur S. d'Avila Garcez. Logic tensor networks: Deep learning and logical reasoning from data and knowledge. CoRR, abs/1606.04422, 2016. URL http://arxiv.org/abs/1606.04422.
259
+ Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. Yago: a core of semantic knowledge. In WWW, pp. 697-706. ACM, 2007.
260
+ Kristina Toutanova and Danqi Chen. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pp. 57-66, Beijing, China, July 2015. Association for Computational Linguistics. doi: 10.18653/v1/W15-4007. URL https://www.aclweb.org/anthology/W15-4007.
261
+ Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. Complex embeddings for simple link prediction. In ICML, volume 48 of JMLR Workshop and Conference Proceedings, pp. 2071-2080. JMLR.org, 2016.
262
+ Emile van Krieken, Erman Acar, and Frank van Harmelen. Analyzing Differentiable Fuzzy Implications. In Proceedings of the 17th International Conference on Principles of Knowledge Representation and Reasoning, pp. 893-903, 9 2020. doi: 10.24963/kr.2020/92. URL https://doi.org/10.24963/kr.2020/92.
263
+ Meng Wang, Ruijie Wang, Jun Liu, Yihe Chen, Lei Zhang, and Guilin Qi. Towards empty answers in SPARQL: approximating querying with RDF embedding. In International Semantic Web Conference (1), volume 11136 of Lecture Notes in Computer Science, pp. 513-529. Springer, 2018.
264
+ Leon Weber, Pasquale Minervini, Jannes Münchmeyer, Ulf Leser, and Tim Rocktäschel. Nlprolog: Reasoning with weak unification for question answering in natural language. In ACL (1), pp. 6151-6161. Association for Computational Linguistics, 2019.
265
+ Wenhan Xiong, Thien Hoang, and William Yang Wang. Deeppath: A reinforcement learning method for knowledge graph reasoning. In EMNLP, pp. 564-573. Association for Computational Linguistics, 2017.
266
+ Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
267
+
268
+ # A INFLUENCE OF THE EMBEDDING SIZE ON THE RESULTS
269
+
270
+ Table 3: Complex query answering results (H@3) across all query types, for different rank (embedding size) values - results for Graph Query Embedding (GQE, Hamilton et al., 2018) and Query2Box (Ren et al., 2020) are from Ren et al. (2020).
271
+
272
+ <table><tr><td>Method</td><td>Rank</td><td>1p</td><td>2p</td><td>3p</td><td>2i</td><td>3i</td><td>ip</td><td>pi</td><td>2u</td><td>up</td></tr><tr><td colspan="11">FB15k</td></tr><tr><td>GQE</td><td>800</td><td>0.630</td><td>0.346</td><td>0.250</td><td>0.515</td><td>0.611</td><td>0.153</td><td>0.320</td><td>0.362</td><td>0.271</td></tr><tr><td>Query2Box</td><td>400</td><td>0.786</td><td>0.413</td><td>0.303</td><td>0.593</td><td>0.712</td><td>0.211</td><td>0.397</td><td>0.608</td><td>0.330</td></tr><tr><td rowspan="4">CQD-CO</td><td>100</td><td>0.893</td><td>0.162</td><td>0.076</td><td>0.773</td><td>0.818</td><td>0.118</td><td>0.344</td><td>0.493</td><td>0.073</td></tr><tr><td>200</td><td>0.906</td><td>0.257</td><td>0.092</td><td>0.785</td><td>0.828</td><td>0.210</td><td>0.426</td><td>0.753</td><td>0.110</td></tr><tr><td>500</td><td>0.912</td><td>0.345</td><td>0.123</td><td>0.772</td><td>0.817</td><td>0.257</td><td>0.454</td><td>0.795</td><td>0.206</td></tr><tr><td>1000</td><td>0.918</td><td>0.454</td><td>0.191</td><td>0.796</td><td>0.837</td><td>0.336</td><td>0.513</td><td>0.816</td><td>0.319</td></tr><tr><td rowspan="4">CQD-Beam</td><td>100</td><td>0.893</td><td>0.746</td><td>0.557</td><td>0.773</td><td>0.818</td><td>0.357</td><td>0.669</td><td>0.689</td><td>0.313</td></tr><tr><td>200</td><td>0.906</td><td>0.770</td><td>0.585</td><td>0.785</td><td>0.828</td><td>0.373</td><td>0.679</td><td>0.815</td><td>0.357</td></tr><tr><td>500</td><td>0.912</td><td>0.759</td><td>0.580</td><td>0.772</td><td>0.817</td><td>0.372</td><td>0.650</td><td>0.831</td><td>0.351</td></tr><tr><td>1000</td><td>0.918</td><td>0.779</td><td>0.584</td><td>0.796</td><td>0.837</td><td>0.377</td><td>0.658</td><td>0.839</td><td>0.355</td></tr><tr><td 
colspan="11">FB15k-237</td></tr><tr><td>GQE</td><td>800</td><td>0.405</td><td>0.213</td><td>0.153</td><td>0.298</td><td>0.411</td><td>0.085</td><td>0.182</td><td>0.167</td><td>0.160</td></tr><tr><td>Query2Box</td><td>400</td><td>0.467</td><td>0.240</td><td>0.186</td><td>0.324</td><td>0.453</td><td>0.108</td><td>0.205</td><td>0.239</td><td>0.193</td></tr><tr><td rowspan="4">CQD-CO</td><td>100</td><td>0.493</td><td>0.162</td><td>0.076</td><td>0.311</td><td>0.415</td><td>0.118</td><td>0.199</td><td>0.238</td><td>0.073</td></tr><tr><td>200</td><td>0.500</td><td>0.187</td><td>0.092</td><td>0.329</td><td>0.439</td><td>0.128</td><td>0.204</td><td>0.254</td><td>0.103</td></tr><tr><td>500</td><td>0.508</td><td>0.210</td><td>0.123</td><td>0.346</td><td>0.454</td><td>0.142</td><td>0.216</td><td>0.273</td><td>0.119</td></tr><tr><td>1000</td><td>0.512</td><td>0.213</td><td>0.131</td><td>0.352</td><td>0.457</td><td>0.146</td><td>0.222</td><td>0.281</td><td>0.132</td></tr><tr><td rowspan="4">CQD-Beam</td><td>100</td><td>0.493</td><td>0.256</td><td>0.207</td><td>0.311</td><td>0.415</td><td>0.119</td><td>0.234</td><td>0.254</td><td>0.121</td></tr><tr><td>200</td><td>0.500</td><td>0.272</td><td>0.216</td><td>0.329</td><td>0.439</td><td>0.122</td><td>0.244</td><td>0.264</td><td>0.127</td></tr><tr><td>500</td><td>0.508</td><td>0.280</td><td>0.216</td><td>0.346</td><td>0.454</td><td>0.127</td><td>0.257</td><td>0.280</td><td>0.128</td></tr><tr><td>1000</td><td>0.512</td><td>0.279</td><td>0.219</td><td>0.352</td><td>0.457</td><td>0.129</td><td>0.249</td><td>0.284</td><td>0.128</td></tr><tr><td colspan="11">NELL995</td></tr><tr><td>GQE</td><td>800</td><td>0.417</td><td>0.231</td><td>0.203</td><td>0.318</td><td>0.454</td><td>0.081</td><td>0.188</td><td>0.200</td><td>0.139</td></tr><tr><td>Query2Box</td><td>400</td><td>0.555</td><td>0.266</td><td>0.233</td><td>0.343</td><td>0.480</td><td>0.132</td><td>0.212</td><td>0.369</td><td>0.163</td></tr><tr><td 
rowspan="4">CQD-CO</td><td>100</td><td>0.647</td><td>0.234</td><td>0.145</td><td>0.389</td><td>0.508</td><td>0.165</td><td>0.283</td><td>0.465</td><td>0.126</td></tr><tr><td>200</td><td>0.658</td><td>0.238</td><td>0.164</td><td>0.401</td><td>0.524</td><td>0.172</td><td>0.282</td><td>0.502</td><td>0.148</td></tr><tr><td>500</td><td>0.665</td><td>0.261</td><td>0.208</td><td>0.406</td><td>0.525</td><td>0.187</td><td>0.293</td><td>0.523</td><td>0.171</td></tr><tr><td>1000</td><td>0.667</td><td>0.265</td><td>0.220</td><td>0.410</td><td>0.529</td><td>0.196</td><td>0.302</td><td>0.531</td><td>0.194</td></tr><tr><td rowspan="4">CQD-Beam</td><td>100</td><td>0.647</td><td>0.333</td><td>0.296</td><td>0.389</td><td>0.508</td><td>0.160</td><td>0.293</td><td>0.469</td><td>0.150</td></tr><tr><td>200</td><td>0.658</td><td>0.335</td><td>0.292</td><td>0.401</td><td>0.524</td><td>0.162</td><td>0.290</td><td>0.508</td><td>0.146</td></tr><tr><td>500</td><td>0.665</td><td>0.348</td><td>0.296</td><td>0.406</td><td>0.525</td><td>0.166</td><td>0.291</td><td>0.527</td><td>0.149</td></tr><tr><td>1000</td><td>0.667</td><td>0.343</td><td>0.297</td><td>0.410</td><td>0.529</td><td>0.168</td><td>0.283</td><td>0.536</td><td>0.157</td></tr></table>
273
+
274
+ In Table 3 we report results for CQD-CO (Section 3.1) and CQD-Beam (Section 3.2) for different rank (embedding size) values. We can see that the model produces very accurate results even with significantly fewer parameters.
275
+
276
+ # B TIMING EXPERIMENTS
277
+
278
+ ![](images/e2f267e5495034b5d6860e65941dfb3ebf71269299579cab28a0fa97afcfbb11.jpg)
279
+ Figure 4: Number of seconds required by Q2B (Ren et al., 2020) and CQD-Beam (Section 3.2) for answering each query type in FB15k.
280
+
281
+ ![](images/d9b69e6db9b27f9b66de269a159459dd80d8c9a8b3b7bb5e5d68df1c1bbef775.jpg)
282
+ Figure 5: Number of seconds required by Q2B (Ren et al., 2020) and CQD-Beam (Section 3.2) for answering each query type in FB15k-237.
283
+
284
+ In Fig. 4 and Fig. 5 we report the time (in seconds) required by Q2B (Ren et al., 2020) and CQD-Beam (Section 3.2) for answering each query type in FB15k and FB15k-237, respectively. We can see that, in CQD-Beam, the main computational bottleneck is multi-hop queries, since the model must invoke the neural link prediction model at each step of the chain to obtain the top- $k$ candidates for the next step.
285
+
286
+ # C DISTMULT EXPERIMENTS
287
+
288
+ Table 4: Complex query answering results (H@3) across all query types, for two different neural link prediction models, namely ComplEx (Trouillon et al., 2016) and DistMult (Yang et al., 2015).
289
+
290
+ <table><tr><td>Method</td><td>Model</td><td>1p</td><td>2p</td><td>3p</td><td>2i</td><td>3i</td><td>ip</td><td>pi</td><td>2u</td><td>up</td></tr><tr><td colspan="11">FB15k</td></tr><tr><td rowspan="2">CQD-Beam</td><td>ComplEx</td><td>0.918</td><td>0.779</td><td>0.584</td><td>0.796</td><td>0.837</td><td>0.377</td><td>0.658</td><td>0.839</td><td>0.355</td></tr><tr><td>DistMult</td><td>0.869</td><td>0.761</td><td>0.581</td><td>0.778</td><td>0.824</td><td>0.369</td><td>0.608</td><td>0.822</td><td>0.355</td></tr><tr><td colspan="11">FB15k-237</td></tr><tr><td rowspan="2">CQD-Beam</td><td>ComplEx</td><td>0.512</td><td>0.279</td><td>0.219</td><td>0.352</td><td>0.457</td><td>0.129</td><td>0.249</td><td>0.284</td><td>0.128</td></tr><tr><td>DistMult</td><td>0.485</td><td>0.277</td><td>0.210</td><td>0.332</td><td>0.443</td><td>0.117</td><td>0.224</td><td>0.281</td><td>0.123</td></tr><tr><td colspan="11">NELL995</td></tr><tr><td rowspan="2">CQD-Beam</td><td>ComplEx</td><td>0.667</td><td>0.343</td><td>0.297</td><td>0.410</td><td>0.529</td><td>0.168</td><td>0.283</td><td>0.536</td><td>0.157</td></tr><tr><td>DistMult</td><td>0.642</td><td>0.348</td><td>0.297</td><td>0.392</td><td>0.517</td><td>0.160</td><td>0.260</td><td>0.502</td><td>0.169</td></tr></table>
291
+
292
+ In Table 4 we report the results for CQD-Beam with two different neural link prediction models, namely ComplEx (Trouillon et al., 2016) and DistMult (Yang et al., 2015). Both models were trained using the loss and regulariser proposed by Lacroix et al. (2018), and their hyperparameters were tuned according to their performance on the validation set; in both cases, the embedding size is set to 1,000. As expected, CQD-Beam with DistMult produces slightly less accurate results than with ComplEx, while still yielding more accurate results than the Q2B and GQE baselines.
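For reference, a minimal sketch of the two scoring functions, assuming plain NumPy vectors as embeddings (dimensions and values here are illustrative):

```python
import numpy as np

def distmult_score(h, r, t):
    # DistMult (Yang et al., 2015): a symmetric trilinear product,
    # so it cannot distinguish (h, r, t) from (t, r, h).
    return np.sum(h * r * t)

def complex_score(h, r, t):
    # ComplEx (Trouillon et al., 2016): the real part of the trilinear
    # product with complex embeddings; the conjugate on the tail makes
    # the score asymmetric in h and t.
    return np.real(np.sum(h * r * np.conj(t)))

rng = np.random.default_rng(0)
d = 8  # toy embedding size
h, r, t = (rng.normal(size=d) for _ in range(3))
hc, rc, tc = (rng.normal(size=d) + 1j * rng.normal(size=d) for _ in range(3))
```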
complexqueryansweringwithneurallinkpredictors/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:403f038ec50332aaef92ba404f5ed4f5065414ca1e5b096b08dc2dac36524fdd
3
+ size 613000
complexqueryansweringwithneurallinkpredictors/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d142833518973b77e605ad22f1090041c78cb67b32e63d8741f14d34ca7d2953
3
+ size 482890
contrastiveexplanationsforreinforcementlearningviaembeddedselfpredictions/f310a0f8-508f-4aa5-b90c-664bf9e3209c_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bf30c0ba1eb11f5674a8d9d9e659eea20cdd3c3a335c15f38a17feca3f30415c
3
+ size 136427
contrastiveexplanationsforreinforcementlearningviaembeddedselfpredictions/f310a0f8-508f-4aa5-b90c-664bf9e3209c_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1531a3837a1d504de68cabc0e7375ab166eb996c431b6f43ba05cc034adecfd1
3
+ size 155497
contrastiveexplanationsforreinforcementlearningviaembeddedselfpredictions/f310a0f8-508f-4aa5-b90c-664bf9e3209c_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7d416b3abeb2a7aaa26c09a5c09a1d16b0097c009a025840c2e4f6f264c9e1d9
3
+ size 16974582
contrastiveexplanationsforreinforcementlearningviaembeddedselfpredictions/full.md ADDED
@@ -0,0 +1,508 @@
 
 
 
 
1
+ # CONTRASTIVE EXPLANATIONS FOR REINFORCEMENT LEARNING VIA EMBEDDED SELF PREDICTIONS
2
+
3
+ Zhengxian Lin, Kim-Ho Lam, Alan Fern
4
+
5
+ Department of EECS
6
+
7
+ Oregon State University
8
+
9
+ {linzhe, lamki, alan.fern}@oregonstate.edu
10
+
11
+ # ABSTRACT
12
+
13
+ We investigate a deep reinforcement learning (RL) architecture that supports explaining why a learned agent prefers one action over another. The key idea is to learn action-values that are directly represented via human-understandable properties of expected futures. This is realized via the embedded self-prediction (ESP) model, which learns said properties in terms of human provided features. Action preferences can then be explained by contrasting the future properties predicted for each action. To address cases where there are a large number of features, we develop a novel method for computing minimal sufficient explanations from an ESP. Our case studies in three domains, including a complex strategy game, show that ESP models can be effectively learned and support insightful explanations.
14
+
15
+ # 1 INTRODUCTION
16
+
17
+ A traditional RL agent can only explain a preference for action $A$ over $B$ by revealing the actions' predicted values, which provides little insight into its reasoning. Conversely, a human might explain their preference by contrasting meaningful properties of the predicted futures following each action. In this work, we develop a model that allows RL agents to explain action preferences by contrasting human-understandable future predictions. Our approach learns deep generalized value functions (GVFs) (Sutton et al., 2011) to make the future predictions; GVFs can predict the future accumulation of arbitrary features when following a policy. Thus, given human-understandable features, the corresponding GVFs capture meaningful properties of a policy's future trajectories.
18
+
19
+ To support sound explanation of action preferences via GVFs, it is important that the agent uses the GVFs to form its preferences. To this end, our first contribution is the embedded self-prediction (ESP) model, which: 1) directly "embeds" meaningful GVFs into the agent's action-value function, and 2) trains those GVFs to be "self-predicting" of the greedy policy that maximizes the agent's Q-function. This enables meaningful and sound contrastive explanations in terms of GVFs. However, the ESP model is circularly defined (the policy depends on the GVFs and vice versa), which suggests that training may be difficult. Our second contribution is the ESP-DQN learning algorithm, for which we provide theoretical convergence conditions in the table-based setting and demonstrate empirical effectiveness.
20
+
21
+ Because ESP models combine embedded GVFs non-linearly, comparing the contributions of individual GVFs to a preference can be difficult. Our third contribution is a novel application of the integrated gradient (IG) (Sundararajan et al., 2017) for producing explanations that are sound in a well-defined sense. To further support cases with many features, we use the notion of minimal sufficient explanation (Juozapaitis et al., 2019), which can significantly simplify explanations while remaining sound. Our fourth contribution is case studies in two RL benchmarks and a complex real-time strategy game. These demonstrate insights provided by the explanations, including both validating and uncovering flaws in the reasons for preferences.
22
+
23
+ In Defense of Manually-Designed Features. It can be controversial to provide deep learning algorithms with engineered meaningful features. The key question is whether the utility of providing such features is worth the cost of their acquisition. We argue that for many applications that can benefit from informative explanations, the utility will outweigh the cost. Without meaningful features, explanations must be expressed as visualizations on top of lower-level perceptual information (e.g.
24
+
25
+ ![](images/ceb85e037a7939ed10208cb94d7ef39762cd5eb704f4e2432cd5213351453ba3.jpg)
26
+ Figure 1: The ESP model provides an estimate of the agent's Q-function for any state-action pair. The model first maps a state-action pair $(s, a)$ to a GVF vector $\hat{Q}_F^{\hat{\pi}}$ of the agent's greedy policy $\hat{\pi}(s) = \arg\max_a \hat{Q}(s, a)$ . This vector is then processed by the combining function $\hat{C}$ , which produces a Q-value estimate $\hat{Q}(s, a)$ . The embedded GVF is self-predicting in the sense that it predicts the values of the same greedy policy that it is used to compute.
27
+
28
+ saliency/attention maps). Such explanations have utility, but they may not adequately relate to human-understandable concepts, require subjective interpretation, and can offer limited insight. Further, in many applications, meaningful features already exist and/or the level of effort to acquire them from domain experts and AI engineers is reasonable. It is thus important to develop deep learning methods, such as our ESP model, that can deliver enhanced explainability when such features are available.
29
+
30
+ # 2 EMBEDDED SELF-PREDICTION MODEL
31
+
32
+ An MDP is a tuple $\langle S, A, T, R \rangle$ , with states $S$ , actions $A$ , transition function $T(s, a, s')$ , and reward function $R(s, a)$ . A policy $\pi$ maps states to actions and has Q-function $Q^{\pi}(s, a)$ giving the expected infinite-horizon $\beta$ -discounted reward of following $\pi$ after taking action $a$ in $s$ . The optimal policy $\pi^{*}$ and Q-function $Q^{*}$ satisfy $\pi^{*}(s) = \arg \max_{a} Q^{*}(s, a)$ . $Q^{*}$ can be computed given the MDP by repeated application of the Bellman Backup Operator, which for any Q-function $Q$ , returns a new Q-function $B[Q](s, a) = R(s, a) + \beta \sum_{s'} T(s, a, s') \max_{a'} Q(s', a')$ .
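The Bellman backup operator above can be sketched for a tabular setting as follows; the two-state MDP below is purely illustrative:

```python
import numpy as np

# A tiny 2-state, 2-action MDP (illustrative numbers).
n_s, n_a = 2, 2
T = np.zeros((n_s, n_a, n_s))
T[0, 0] = [0.9, 0.1]; T[0, 1] = [0.2, 0.8]
T[1, 0] = [1.0, 0.0]; T[1, 1] = [0.0, 1.0]
R = np.array([[0.0, 0.0], [1.0, 0.0]])
beta = 0.9

def bellman_backup(Q):
    # B[Q](s, a) = R(s, a) + beta * sum_s' T(s, a, s') * max_a' Q(s', a')
    return R + beta * T @ Q.max(axis=1)

# Iterating the operator converges to Q* (it is a beta-contraction).
Q = np.zeros((n_s, n_a))
for _ in range(500):
    Q = bellman_backup(Q)
```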
33
+
34
+ We focus on RL agents that learn an approximation $\hat{Q}$ of $Q^{*}$ and follow the corresponding greedy policy $\hat{\pi}(s) = \arg \max_{a} \hat{Q}(s, a)$ . We aim to explain a preference for action $a$ over $b$ in a state $s$ , i.e. explain why $\hat{Q}(s, a) > \hat{Q}(s, b)$ . Importantly, the explanations should be meaningful to humans and soundly reflect the actual agent preferences. Below, we define the embedded self-prediction model, which will be used for producing such explanations (Section 4) in terms of generalized value functions.
35
+
36
+ Generalized Value Functions (GVFs). GVFs (Sutton et al., 2011) are a generalization of traditional value functions that accumulate arbitrary feature functions rather than reward functions. Specifically, given a policy $\pi$ , an $n$ -dimensional state-action feature function $F(s,a) = \langle f_1(s,a),\dots ,f_n(s,a)\rangle$ , and a discount factor $\gamma$ , the corresponding $n$ -dimensional GVF, denoted $Q_{F}^{\pi}(s,a)$ , is the expected infinite-horizon $\gamma$ -discounted accumulation of $F$ when following $\pi$ after taking $a$ in $s$ . Given an MDP, policy $\pi$ , and feature function $F$ , the GVF can be computed by iterating the Bellman GVF operator, which takes a GVF $Q_{F}$ and returns a new GVF $B_{F}^{\pi}[Q_{F}](s,a) = F(s,a) + \gamma \sum_{s^{\prime}}T(s,a,s^{\prime})Q_{F}(s^{\prime},\pi (s^{\prime}))$ .
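The GVF operator differs from the standard backup in two ways: the scalar reward is replaced by the feature vector $F$, and the max over next actions is replaced by evaluation of the fixed policy $\pi$. A tabular sketch with illustrative numbers:

```python
import numpy as np

n_s, n_a, n_f = 2, 2, 3
T = np.zeros((n_s, n_a, n_s))
T[0, 0] = [0.9, 0.1]; T[0, 1] = [0.2, 0.8]
T[1, 0] = [1.0, 0.0]; T[1, 1] = [0.0, 1.0]
F = np.arange(n_s * n_a * n_f, dtype=float).reshape(n_s, n_a, n_f)  # toy features
pi = np.array([0, 1])   # fixed policy: one action per state
gamma = 0.9

def gvf_backup(Q_F):
    # B_F^pi[Q_F](s, a) = F(s, a) + gamma * sum_s' T(s, a, s') * Q_F(s', pi(s'))
    next_vals = Q_F[np.arange(n_s), pi]               # shape (n_s, n_f)
    return F + gamma * np.einsum('ijk,kl->ijl', T, next_vals)

# Iterating converges to the GVF of pi for the features F.
Q_F = np.zeros((n_s, n_a, n_f))
for _ in range(500):
    Q_F = gvf_backup(Q_F)
```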
37
+
38
+ To produce human-understandable explanations, we assume semantically-meaningful features are available, so that the corresponding GVFs describe meaningful properties of the expected future—e.g., expected energy usage, or time spent in a particular spatial region, or future change in altitude.
39
+
40
+ ESP Model Definition. Given policy $\pi$ and features $F$ , we can contrast actions $a$ and $b$ via the GVF difference $\Delta_F^\pi(s, a, b) = Q_F^\pi(s, a) - Q_F^\pi(s, b)$ , which may highlight meaningful differences in how the actions impact the future. Such differences, however, cannot necessarily be used to soundly explain an agent preference, since the agent may not explicitly consider those GVFs for action selection. Thus, the ESP model forces agents to directly define action values, and hence preferences, in terms of GVFs of their own policies, which allows for such differences to be used soundly.
41
+
42
+ As depicted in Figure 1, the ESP model embeds a GVF $Q_F^{\hat{\pi}}$ of the agent's greedy policy $\hat{\pi}$ into the agent's Q-function $\hat{Q}$ , via $\hat{Q}(s,a) = \hat{C}(\hat{Q}_F(s,a))$ , where $\hat{C}: R^n \to R$ is a learned combining function from GVF vectors to action values. When the GVF discount factor $\gamma$ is zero, the ESP model becomes a direct combination of the features, i.e. $\hat{Q}(s,a) = \hat{C}(F(s,a))$ , which is the traditional approach to using features for function approximation. By using $\gamma > 0$ we can leverage human-provided features in a potentially more powerful way. Because an ESP agent represents action-values via GVF components, it is possible to produce sound contrastive explanations in terms of GVFs, as described in Section 4.
45
+
46
In general, the ability to learn a quality Q-function, and hence policy, using the ESP model requires that the GVF features are sufficiently expressive. While, in concept, using a single feature equal to the reward is sufficient for learning the Q-function (i.e. the GVF is the Q-function and the identity combining function could be used), that choice does not support explainability. Thus, it is desirable to use a set of features that meaningfully decompose important aspects of the environment and at the same time have GVFs that are expressive enough to combine into the Q-function. In Section 6, we describe the generic schema used for GVF features in our experimental environments.

# 3 ESP MODEL TRAINING: ESP-DQN

We represent the learned combining function $\hat{C}$ and GVF $\hat{Q}_F$ as neural networks with parameters $\theta_C$ and $\theta_F$. The goal is to optimize the parameters so that $\hat{Q}(s,a) = \hat{C}(\hat{Q}_F(s,a))$ approximates $Q^*$ and $\hat{Q}_F(s,a)$ approximates $Q_F^{\pi^*}(s,a)$. The GVF accuracy condition is important since humans will interpret the GVF values in explanations. A potential learning complication is the circular dependence where $Q_F^{\hat{\pi}}$ is both an input to $\hat{Q}$ and depends on $\hat{Q}$ through the greedy policy $\hat{\pi}$. Below we give an overview of our learning algorithm, ESP-DQN, a variant of DQN (Mnih et al., 2015), which we later show to be empirically effective. Full pseudo-code is provided in Appendix A.

ESP-DQN follows an $\epsilon$-greedy exploration policy while adding transitions to a replay buffer $D = \{(s_i, a_i, r_i, F_i, s_i')\}$, where $F_i$ is the feature vector for GVF training. Each learning step updates $\theta_C$ and $\theta_F$ using a random mini-batch. Like DQN, updates are based on a target network, which uses a second set of target parameters $\theta_C'$ and $\theta_F'$ defining target combining and GVF functions $\hat{C}'$ and $\hat{Q}_F'$, yielding the target Q-function $\hat{Q}'(s, a) = \hat{C}'(\hat{Q}_F'(s, a))$. The target parameters are updated to the values of the non-target parameters every $K$ learning steps and otherwise held fixed.

Combining Function Update. Since the output of $\hat{C}$ should approximate $Q^{*}$, optimizing $\theta_{C}$ can use traditional DQN updates. The updates, however, only modify $\theta_{C}$ while keeping $\theta_{F}$ fixed, so that the GVF output $\hat{Q}_F(s,a)$ is viewed as a fixed input to $\hat{C}$. Given a mini-batch, the update to $\theta_{C}$ is based on L2 loss with the target value for sample $i$ being $y_{i} = r_{i} + \beta \hat{Q}^{\prime}(s_{i}^{\prime},\hat{a}_{i}^{\prime})$, where $\hat{a}_i^\prime = \arg\max_a \hat{Q}'(s_i',a)$ is the greedy action of the target network.

GVF Update. Training $\hat{Q}_F$ is similar to learning a critic in actor-critic methods for the evolving greedy policy, but instead of learning to predict long-term reward, we predict the long-term accumulation of $F$. Given a mini-batch, we update $\theta_F$ based on L2 loss at the output of $\hat{Q}_F$ with respect to the target value $y_i = F_i + \gamma \hat{Q}_F'(s_i', \hat{a}_i')$, where $\hat{a}_i'$ is the same target greedy action as above.

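The two target computations above can be sketched for a single transition. Everything here is hypothetical stand-in data: `Q_F_next` plays the role of the target GVF network's outputs $\hat{Q}_F'(s',\cdot)$, and `C_hat_target` is an illustrative linear target combining function:

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_feats = 4, 6
beta, gamma = 0.99, 0.95           # reward and GVF discount factors

# Hypothetical target-network outputs at the next state s': one GVF vector per action.
Q_F_next = rng.normal(size=(n_actions, n_feats))

w = rng.normal(size=n_feats)       # a linear target combining function, for illustration
def C_hat_target(q):
    return float(w @ q)

# Greedy action of the target network: argmax_a C'(Q_F'(s', a)).
a_greedy = int(np.argmax([C_hat_target(Q_F_next[a]) for a in range(n_actions)]))

# Combining-function target: y_C = r_i + beta * Q'(s', a_greedy)  (a scalar).
r = 1.0                            # sampled reward
y_C = r + beta * C_hat_target(Q_F_next[a_greedy])

# GVF target: y_F = F_i + gamma * Q_F'(s', a_greedy)  (a vector).
F_i = rng.normal(size=n_feats)     # sampled feature vector
y_F = F_i + gamma * Q_F_next[a_greedy]

# y_C drives an L2 loss on theta_C only; y_F drives an L2 loss on theta_F only.
```

Note that both targets share the same greedy action of the target network, which is what ties the GVF critic to the evolving greedy policy.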
Convergence. Even with sufficiently expressive features, most combinations of function approximation and Q-learning, including DQN, do not have general convergence guarantees (Sutton & Barto, 2018). In contrast, for table-based representations that record a value for each state-action pair, Q-learning, from which DQN is derived, converges almost surely to $Q^{*}$ (Watkins & Dayan, 1992), which at least shows that DQN is built on sound principles. We now consider convergence for ESP-Table, a table-based analog of ESP-DQN.

ESP-Table uses size-1 mini-batches and updates target tables (i.e. analogs of target networks) every $K$ steps. The $\hat{Q}_F$ table is over state-action pairs, while for $\hat{C}$ we assume a hash function $h$ that maps its continuous GVF inputs to a finite table. For example, since GVFs are bounded, this can be done with arbitrarily small error via quantization. The feature and hash function pair $(F,h)$ must be sufficiently expressive to provide any convergence guarantee. First, we assume $h$ is locally consistent, meaning that for any input $q$ there exists an $\epsilon > 0$ such that for all $|q' - q| \leq \epsilon$, $h(q) = h(q')$. Second, we assume the pair $(F,h)$ is Bellman sufficient, which characterizes the representational capacity of the $\hat{C}$ table after Bellman GVF backups (see Section 2) with respect to representing Bellman backups.

Definition 1 (Bellman Sufficiency). A feature and hash function pair $(F, h)$ is Bellman sufficient if for any ESP model $\hat{Q}(s, a) = \hat{C}(\hat{Q}_F(s, a))$ with greedy policy $\hat{\pi}$ and state-action pairs $(s, a)$ and $(x, y)$, if $h(\hat{Q}_F^+(s, a)) = h(\hat{Q}_F^+(x, y))$ then $B[\hat{Q}](s, a) = B[\hat{Q}](x, y)$, where $\hat{Q}_F^+ = B_F^{\hat{\pi}}[\hat{Q}_F]$.

Let $\hat{C}^t$, $\hat{Q}_F^t$, $\hat{Q}^t$, and $\hat{\pi}^t$ be random variables denoting the learned combining function, GVF, corresponding Q-function, and greedy policy after $t$ updates. The following gives conditions for the convergence of $\hat{\pi}^t$ to $\pi^*$ and of $\hat{Q}_F^t$ to a neighborhood of $Q_F^*$, given a large enough update interval $K$.

Theorem 1. If ESP-Table is run under the standard conditions for the almost sure (a.s.) convergence of Q-learning and uses a Bellman-sufficient pair $(F, h)$ with locally consistent $h$, then for any $\epsilon > 0$ there exists a finite target update interval $K$ such that for all $s$ and $a$, $\hat{\pi}^t(s)$ converges a.s. to $\pi^*(s)$ and $\lim_{t \to \infty} |\hat{Q}_F^t(s, a) - Q_F^*(s, a)| \leq \epsilon$ with probability 1.

The full proof is in Appendix B. It is an open problem whether a stronger convergence result holds for $K = 1$, which would be analogous to results for traditional Q-learning.

# 4 CONTRASTIVE EXPLANATIONS FOR THE ESP MODEL

We focus on contrastive explanation of a preference, $\hat{Q}(s,a) > \hat{Q}(s,b)$, that decomposes the preference magnitude $\hat{Q}(s,a) - \hat{Q}(s,b)$ in terms of components of the GVF difference vector $\Delta_F(s,a,b) = \hat{Q}_F(s,a) - \hat{Q}_F(s,b)$. Explanations will be tuples $\langle \Delta_F(s,a,b), W(s,a,b) \rangle$, where $W(s,a,b) \in R^n$ is an attribution weight vector corresponding to $\Delta_F(s,a,b)$. The meaningfulness of an explanation is largely determined by the meaningfulness of the GVF features. We say that an explanation is sound if $\hat{Q}(s,a) - \hat{Q}(s,b) = W(s,a,b) \cdot \Delta_F(s,a,b)$, i.e. it accounts for the preference magnitude. We are interested in explanation methods that only return sound explanations, since these explanations can be viewed as certificates for the agent's preferences. In particular, the definition implies that $W(s,a,b) \cdot \Delta_F(s,a,b) > 0$ if and only if $\hat{Q}(s,a) > \hat{Q}(s,b)$. In the simple case of a linear combining function $\hat{C}$ with weights $w \in R^n$, the preference magnitude factors as $\hat{Q}(s,a) - \hat{Q}(s,b) = w \cdot \Delta_F(s,a,b)$. Thus, $\langle \Delta_F(s,a,b), w \rangle$ is a sound explanation for any preference.

Non-Linear Combining Functions. Non-linear combining functions are necessary when it is difficult to provide features that support good policies via linear combining functions. Since the above linear factoring does not directly hold for non-linear $\hat{C}$, we draw on Integrated Gradients (IG) (Sundararajan et al., 2017), which was originally developed to score feature importance of a single input relative to a "baseline" input. We adapt IG to our setting by treating the less preferred action as the baseline, as described below in the terminology of this paper.

Let $X_{sa} = \hat{Q}_F(s, a)$ and $X_{sb} = \hat{Q}_F(s, b)$ be the GVF outputs of the compared actions. Given a differentiable combining function $\hat{C}$, IG computes an attribution weight $\theta_i(s, a, b)$ for component $i$ by integrating the gradient of $\hat{C}$ while interpolating between $X_{sa}$ and $X_{sb}$. That is, $\theta_i(s, a, b) = \int_0^1 \frac{\partial \hat{C}(X_{sb} + \alpha \cdot (X_{sa} - X_{sb}))}{\partial X_{sa, i}} \mathrm{d}\alpha$, which we approximate via finite differences. The key property is that the IG weights linearly attribute feature differences to the overall output difference, i.e. $\hat{C}(X_{sa}) - \hat{C}(X_{sb}) = \theta(s, a, b) \cdot (X_{sa} - X_{sb})$. Rewriting this gives the key relationship for the ESP model:

$$
\hat{Q}(s,a) - \hat{Q}(s,b) = \hat{C}(\hat{Q}_F(s,a)) - \hat{C}(\hat{Q}_F(s,b)) = \theta(s,a,b) \cdot \Delta_F(s,a,b) \tag{1}
$$

Thus, $\mathrm{IGX}(s,a,b) = \langle \Delta_F(s,a,b),\theta(s,a,b)\rangle$ is a sound explanation, which generalizes the linear case above, since for linear $\hat{C}$ with weights $w$ we have $\theta(s,a,b) = w$. In practice, we typically visualize $\mathrm{IGX}(s,a,b)$ by showing a bar for each component with magnitude $\theta_{i}(s,a,b)\cdot \Delta_{F,i}(s,a,b)$, which reflects the positive/negative contributions to the preference (e.g. Figure 3a, bottom-right).

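The soundness identity of Equation 1 holds exactly for the true path integral and approximately under a finite-difference approximation. A small numerical check, using a hypothetical smooth combining function and a midpoint Riemann sum with central differences:

```python
import numpy as np

def C_hat(q):
    """A hypothetical non-linear combining function R^n -> R."""
    return np.tanh(q).sum() + 0.5 * q[0] * q[1]

def ig_weights(C, x_a, x_b, steps=2000):
    """Approximate theta_i = int_0^1 dC(x_b + alpha (x_a - x_b))/dx_i dalpha."""
    alphas = (np.arange(steps) + 0.5) / steps        # midpoint rule
    eps = 1e-5
    theta = np.zeros_like(x_a)
    for alpha in alphas:
        z = x_b + alpha * (x_a - x_b)
        for i in range(len(z)):                      # central finite differences
            dz = np.zeros_like(z)
            dz[i] = eps
            theta[i] += (C(z + dz) - C(z - dz)) / (2 * eps)
    return theta / steps

x_a = np.array([0.8, -0.3, 1.2])   # stands in for Q_F(s, a)
x_b = np.array([0.1, 0.4, -0.5])   # stands in for Q_F(s, b), the baseline
theta = ig_weights(C_hat, x_a, x_b)

# Soundness: C(x_a) - C(x_b) == theta . (x_a - x_b), up to approximation error.
lhs = C_hat(x_a) - C_hat(x_b)
rhs = theta @ (x_a - x_b)
assert abs(lhs - rhs) < 1e-3
```

The identity is IG's "completeness" axiom; it is what lets $\langle \Delta_F, \theta \rangle$ certify the preference magnitude rather than merely rank feature importance.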
Minimal Sufficient Explanations. When there are many features, $\mathrm{IGX}(s,a,b)$ will likely overwhelm users. To soundly reduce the size, we use the concept of a minimal sufficient explanation (MSX), which was recently developed for the much more restricted space of linear reward decomposition models (Juozapaitis et al., 2019). Equation 1, however, allows us to adapt the MSX to our non-linear setting. Let $P$ and $N$ be the indices of the GVF components that have positive and negative attribution to the preference, i.e., $P = \{i:\Delta_{F,i}(s,a,b)\cdot \theta_i(s,a,b) > 0\}$ and $N = \{1,\dots,n\} - P$. Also, for an arbitrary subset of indices $E$, let $S(E) = \sum_{i\in E}|\Delta_{F,i}(s,a,b)\cdot \theta_i(s,a,b)|$ be the total magnitude of the components, which lets the preference be expressed as $S(P) > S(N)$. The key idea of the MSX is that often only a small subset of the positive components is required to overcome the negative components and maintain the preference of $a$ over $b$. An MSX is simply a minimal set of such positive components, i.e. a solution to $\arg\min \{|E|:E\subseteq P, S(E) > S(N)\}$, which is not unique in general. We select the solution with the largest positive weights by sorting $P$ and including indices in the MSX from largest to smallest until the total is larger than $S(N)$.

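The greedy MSX construction can be sketched directly from a vector of per-feature contributions $c_i = \theta_i(s,a,b)\cdot \Delta_{F,i}(s,a,b)$; the contribution values below are hypothetical:

```python
import numpy as np

# Per-feature contributions c_i = theta_i * Delta_{F,i}; hypothetical values.
c = np.array([2.1, -0.4, 0.9, -1.1, 0.3, 0.05])

pos = [i for i in range(len(c)) if c[i] > 0]            # P: positive attributions
S_N = sum(-c[i] for i in range(len(c)) if c[i] <= 0)    # S(N): total negative magnitude

# Greedy MSX: add positive components, largest first, until S(E) > S(N).
msx, total = [], 0.0
for i in sorted(pos, key=lambda j: c[j], reverse=True):
    msx.append(i)
    total += c[i]
    if total > S_N:
        break

print(msx)  # -> [0]: feature 0's contribution alone outweighs all negatives
```

Here $S(N) = 1.5$ and the largest positive contribution (2.1 at index 0) already exceeds it, so the MSX contains a single feature, which is the kind of compact certificate the MSX aims for.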
# 5 RELATED WORK

Prior work considered linear reward decomposition models with known weights for speeding up RL (Van Seijen et al., 2017), multi-agent RL (Russell & Zimdars, 2003; Kok & Vlassis, 2004), and explanation (Juozapaitis et al., 2019). This is a special case of the ESP model, with GVF features equal to reward components and a known linear combining function. Generalized value function networks (Schlegel et al., 2018) are a related, but orthogonal, model that combines GVFs (with given policies) by treating GVFs as features accumulated by other GVFs. In contrast, our GVFs are used as input to a combining network, which defines the policy used for the GVF definition. Integrating GVF networks and the ESP model is an interesting direction to consider.

The MSX for linear models was originally introduced for MDP planning (Khan et al., 2009) and more recently for reward decomposition (Juozapaitis et al., 2019); we extend it to the non-linear case. A recent approach to contrastive explanations (Waa et al., 2018) extracts properties from policy simulations at explanation time, which can be expensive or impossible. Further, those explanations are not sound, since they are not tied to the agent's internal preference computation. Saliency explanations have been used in RL to indicate important parts of input images (Greydanus et al., 2018; Iyer et al., 2018; Gupta et al., 2020; Atrey et al., 2020; Olson et al., 2019). These methods lack a clear semantics for the explanations and hence any notion of soundness.

# 6 EXPERIMENTAL CASE STUDIES

Below we introduce our domains and experiments, which address these questions: 1) (Section 6.2) Can we learn ESP models that perform as well as standard models? 2) (Section 6.2) Do the learned ESP models have accurate GVFs? 3) (Section 6.3) Do our explanations provide meaningful insight?

# 6.1 ENVIRONMENT DESCRIPTION

Schema for Selecting GVF Features. Before introducing the environments, we first describe the schema used to select GVF features across them. This schema can serve as a general starting point for applying the ESP model to new environments. In general, episodic environments have two main types of rewards: 1) a terminal reward, which occurs at the end of an episode and can depend on the final state, and 2) pre-terminal rewards, which occur during the episode depending on the states and/or actions. Since the value of a policy will typically depend on both types of rewards, it is important to have GVF features that capture both terminal and pre-terminal rewards and that are potentially relevant and interpretable. Thus, in each domain, as described below, we include simple terminal GVF features that describe basic conditions at the end of the episode (e.g. indicating if the cart went out of bounds in Cart Pole). In addition, we include pre-terminal GVF features that are obtained from the state variables of the environment or from derived reward variables used to compute the reward function, both of which are typically readily available from a domain description.

Discrete state or reward variables can simply be encoded as indicator GVF features. For continuous state and reward variables we consider two options: a) when a variable has a small number of meaningful regions, we can use indicators for those regions as features; the GVFs then indicate how long the agent stays in each region. b) We also consider delta GVF features that equal the change in a variable across a time step; the GVF value for these features can be interpreted as the future change in the variable's value. While we focus on these generic GVF features in this paper, an agent designer can define arbitrary GVF features based on their intuition and knowledge.

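The two encodings can be illustrated for a single continuous variable; the region boundaries below are hypothetical, loosely evoking a cart-position "safe region":

```python
import numpy as np

def indicator_features(x, boundaries):
    """One-hot region indicators for a continuous variable. GVFs of these
    features accumulate discounted time spent in each region."""
    region = int(np.digitize(x, boundaries))
    feats = np.zeros(len(boundaries) + 1)
    feats[region] = 1.0
    return feats

def delta_feature(x_now, x_prev):
    """Change across a time step. The GVF of this feature predicts the
    future change in the variable's value."""
    return x_now - x_prev

# Hypothetical variable with a 'safe' region (-1, 1) and unsafe regions beyond.
print(indicator_features(0.3, boundaries=[-1.0, 1.0]))  # -> [0. 1. 0.]
print(delta_feature(0.3, 0.25))                         # close to 0.05
```

Accumulating the indicator features under discount $\gamma$ yields "discounted time in region" predictions, while accumulating the delta feature telescopes into a prediction of net future change, which is what makes both encodings directly interpretable.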
Lunar Lander. We use the standard OpenAI Gym version of Lunar Lander, a physics-simulation game where the agent aims to safely land a rocket ship in a target region by deciding at each step which of three thrusters (if any) to activate. The raw state variables are the position and velocity vectors, and the reward function penalizes crashing, rewards landing in the goal area, and includes other "shaping" reward variables. These variables are all easily extracted from the simulation environment. In this domain, the continuous variables do not have intuitively meaningful discretizations, so we use the delta features as the main features for the explanation case studies (ESP-continuous). However, to illustrate that learning can also be done with the discretization approach in this domain, we include results for the continuous features discretized into 8 uniform bins (ESP-discrete). The pre-terminal features are based on the variables distance-to-goal, velocity, tilt-angle, right landing leg in goal position, left landing leg in goal position, main engine use, and side engine use. The terminal feature is an indicator of safely landing.

Cart Pole. We use the standard OpenAI Gym Cart Pole environment, a physics simulation where the agent aims to vertically balance a free-swinging pole attached to a cart, to which a force can be applied to the left or right each step. The state variables include the cart position, pole angle, and their velocities, and the reward is a constant $+1$ per step until termination, which occurs when the pole falls below a certain angle from vertical, the cart moves out of bounds, or 500 steps elapse.

Since the Cart Pole variables discretize into a small number of intuitively meaningful regions, we consider both discrete (ESP-discrete) and delta (ESP-continuous) encodings of the GVF features. For ESP-discrete, there are 8 pre-terminal GVF features that discretize the state variables into meaningful regions corresponding to an intuitive notion of safety: two indicators for each of cart position, cart velocity, pole angle, and angular velocity. A perfectly balanced pole will always remain in the defined safe regions. These discretized features also double as the terminal features, since they capture the relevant aspects of the state at termination (being in a safe or unsafe region). For ESP-continuous, we have 12 features: the first 8 are pre-terminal GVF features corresponding to the delta features of the cart position, cart velocity, pole angle, and angular velocity for the left and right sides, and the 4 terminal GVF features are indicators of whether the episode ended by moving out of bounds to the left or right or by the pole falling below the termination angle to the left or right.

Tug of War. Tug of War (ToW) is an adversarial two-player strategy game we designed using PySC2 for StarCraft 2. ToW is interesting for humans and presents many challenges to RL, including an enormous state space, thousands of actions, long horizons, and sparse reward (win/loss). A detailed description is in Appendix C. ToW is played on a rectangular map divided into top and bottom horizontal lanes. Each lane has two base structures at opposite ends, one for each player. The first player to destroy one of the opponent's bases in either lane wins. The game proceeds in 30-second waves. Before each wave, players must decide on either the top or bottom lane and how many of each type of military production building to purchase for that lane. Purchases are constrained by the player's available currency, which is granted in a fixed amount each wave. Each purchased building produces one unit of the specified type at the beginning of each wave. The units move across the lanes toward the opponent, engage enemy units, and attack the enemy base if close enough. The three types of units, Marines, Immortals, and Banelings, have a rock-paper-scissors relationship and different costs. If no base is destroyed after 40 waves, the player with the lowest base health loses. In this work, we trained a single agent against a reasonably strong agent produced via pool-based self-play learning (similar to AlphaStar training (Vinyals et al., 2019)).

We present two ToW ESP agents that use 17 and 131 structured GVF features (noting that the 131 features are very sparse). These feature sets are detailed in Appendix E. For the 17-feature agent, the pre-terminal features correspond to the delta damage to each of the four bases by each of the three types of units, allowing the GVFs to predict the amount of base damage done by each type of unit and giving insight into the strategy. Note that there is no natural discretization of the numeric damage variables, so we only consider the delta encoding. The terminal GVF features are indicators of which base has the lowest health at the end of the game and of whether the game reached 40 waves; together they encode the possible ways the game can end. The 131-feature agent extends these features to keep track of damage done in each lane to and from each combination of unit types, along with additional information about the economy.

# 6.2 LEARNING PERFORMANCE

To evaluate whether using ESP models hurts performance relative to "standard" models, we compare against two DQN instances. DQN-full uses the same overall network architecture as ESP-DQN, i.e. the GVF network structure feeding into the combining network. However, unlike ESP-DQN, the DQN-full agent does not have access to GVF features and does not attempt to train the GVF network explicitly. It is possible that DQN-full will suffer due to the bottleneck introduced at the interface between the GVF and combiner networks. Thus, we also evaluate Vanilla DQN, which only uses the combining network of ESP-DQN, but connects that network directly to the raw agent input. Details of network architectures, optimizers, and hyperparameters are in Appendix D.

![](images/04a7b8fd5dd1138882aeed9f6a9ae8f00acb8d77549731f692821c02c9c10b8d.jpg)

![](images/15d98efd073374d3ce3c19ac77f62222b691165dc6ed4f02f3ddc41ead37ff2b.jpg)

![](images/8c63098774b3c0ea0fafd41134c3f4863897c296c56766abf1e6ffddc5dea7ec.jpg)

![](images/69ceb5aef65e6d9d01cc9b178405355554283c06f7a1ca1ea0563ed3cdb16bac.jpg)
(a) Lunar Lander

![](images/6df2d8ff43a6c14c89799142376469c27e13fc42e280db096e3cda0012442861.jpg)
(b) Cart Pole

![](images/d9ea41b955edea8912f96dee00e4ca7fcedf206c2a5a9a6fd94bba6821029c81.jpg)
(c) Tug-of-war

Figure 2: Reward learning curves (top row) and GVF loss learning curves (bottom row) for the different agents in three environments. We show the mean $\pm$ std over 10 independent runs.

Figure 2 (top row) shows the reward learning curves for the different agents and for a random policy. All curves are averages of 10 full training runs from scratch using 10 random seeds. For the control problems, Cart Pole (with discrete and continuous GVF features) and Lunar Lander, all agents are statistically indistinguishable near the end of learning and reach peak performance after about the same amount of experience. This indicates that the potential complications of training the ESP model did not significantly impact performance in these domains. The discrete-feature version of Cart Pole converged slightly faster than the continuous version, but the difference is relatively small. For ToW, the ESP-DQN agents perform as well as or better than the DQN variants, with all agents showing more variance. ESP-DQN with 17 features consistently converges to a win rate of nearly $100\%$ and is more stable than the 131-feature version and the other DQN variants. Interestingly, DQN-full with 17 features consistently fails to learn, which we hypothesize is due to the extreme 17-feature bottleneck inserted into the architecture. This is supported by the fact that with 131 features DQN-full does learn, though more slowly than ESP-DQN.

To evaluate the GVF accuracy of ESP-DQN we produce ground-truth GVF data along the learning curves. Specifically, given the ESP policy $\hat{\pi}$ at any point, we can use Monte-Carlo simulation to estimate $Q_{F}^{\hat{\pi}}(s,a)$ for all actions at a test set of states generated by running $\hat{\pi}$. Figure 2 (bottom row) shows the mean squared GVF prediction error on the test sets as learning progresses. For each domain, the GVF error is small at the end of learning and tends to decrease rapidly as the policy approaches its peak reward performance. Lunar Lander and ToW show a continual decrease of GVF error as learning progresses. Cart Pole, rather, shows a sharp initial increase followed by a sharp decrease. This is because the initially bad policy always fails quickly, which trivializes GVF prediction; as the policy improves, the GVFs become more challenging to predict, leading to the initial error increase.

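Such a Monte-Carlo ground-truth estimate can be sketched as a rollout average. The environment interface below (`reset_to(s)` and a `step(a)` returning `(next_state, done)`) is a hypothetical simplification, not the Gym or PySC2 API:

```python
import numpy as np

def mc_gvf_estimate(env, pi, F, s, a, gamma=0.95, n_rollouts=100, horizon=200):
    """Monte-Carlo estimate of Q_F^pi(s, a): average discounted accumulation
    of F over rollouts that take a in s and then follow pi."""
    n_feats = len(F(s, a))
    total = np.zeros(n_feats)
    for _ in range(n_rollouts):
        state, action = env.reset_to(s), a        # hypothetical reset-to-state API
        acc, discount = np.zeros(n_feats), 1.0
        for _ in range(horizon):
            acc += discount * F(state, action)    # accumulate F(s_t, a_t)
            state, done = env.step(action)
            if done:
                break
            discount *= gamma
            action = pi(state)                    # follow pi after the first step
        total += acc
    return total / n_rollouts
```

Comparing this estimate to $\hat{Q}_F(s,a)$ over a test set of visited states gives the mean squared GVF prediction error plotted in Figure 2 (bottom row).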
# 6.3 EXAMPLE EXPLANATIONS

Appendix F includes a larger set of examples with detailed analysis in each domain.

Lunar Lander. In Figure 3a, the game state (top) shows a state in Lunar Lander entered by a near-optimal learned ESP policy. The state is dangerous due to the fast downward and clockwise rotational velocity depicted by the arrows. The GVF panel (bottom-left) shows the Q-values for the actions and the predicted GVF bars. We see that the "main engine" and "right engine" actions have nearly the same Q-values, with "main engine" slightly preferred, while "left engine" and "noop" are considered significantly worse. We want to understand the rationale for the strong and weak preferences.

While a user can observe differences among GVFs across actions, it is not clear how they relate to the preference. The IG and MSX panel (bottom-right) shows the IGXs corresponding to the preference of "main engine" over the other three actions. In addition, the MSX is depicted via dashed lines over the IGX components in the MSX. Focusing first on the larger preferences, "main engine" is preferred to

![](images/7ce6c6f7a79b4122b7358bde9578e2715393611190cb2b9fed030ab52e3e068e.jpg)

![](images/5a35a4737de13b7a3f1e287fc2e98bbc4890c4dd824cbe3a605e3c32560409f6.jpg)
(a) Lunar Lander

![](images/c36a906729e43ee40f970cf0878e603b8978cf707fb430ecdeffd098162dab2b.jpg)
(b) Cart Pole

Figure 3: Explanation examples for Lunar Lander (left) and Cart Pole (right). Each example shows the game state, the Q-values and GVF predictions for actions, and the IGX and MSX.

+ "left engine" primarily due to GVF differences in the velocity and landing features, with the MSX showing that landing alone is sufficient for the preference. This rationale agrees with common sense, since the left engine will accelerate the already dangerous clockwise rotation requiring more extreme actions that put the future reward related to landing at risk.
158
+
159
+ For the preference over "noop" the velocity feature dominates the IGX and is the only MSX feature. This agrees with intuition since by doing nothing the dangerous downward velocity will not be addressed, which means the landing velocity will have a more negative impact on reward. Comparing "main engine" to the nearly equally valued "right engine" shows that the slight preference is based on the distance and right leg landing feature. This is more arbitrary, but agrees with intuition since the right engine will both reduce the downward velocity and straighten the ship, but will increase the leftward velocity compared to the main engine. This puts it at greater risk of reducing reward for missing the right leg landing goal and distance reward. Overall the explanations agreed well with intuition, which together with similar confirmation can increase our confidence in the general reasoning of the policy. We also see the MSXs were uniformly very small.
160
+
161
Cart Pole. We compare a Cart Pole state-action explanation to the explanation produced by its reversed state, as shown in Figure 3b. This comparison illustrates how in one case the explanation agrees with intuition and builds confidence, while the other exposes an underlying inaccuracy or flaw.

Our original game state (left) positions the cart in a dangerous position, moving right, close to the end of the track. The pole is almost vertical and has a small angular velocity toward the left. The action "push left" (move the cart left) agrees with intuition, as the cart is at the right edge of the screen and cannot move right without failing the scenario. The IG and MSX (left) concur, showing the cart's current position close to the right edge as the main reason for preferring "push left" over "push right"; moving left will put the cart back within a safe boundary.

Reversing the game state (left) by multiplying each value in the input state vector by -1 produces a flipped game state (right). The cart is now in a dangerous position moving left, close to the end of the track. Once again the pole is almost vertical, now with a small angular velocity toward the right. One would expect the agent to perform the action "push right" (the opposite of the action in game state (left)), as moving left will cause the agent to move off the screen and fail the scenario. However, as depicted in the IG and MSX (right), the agent prefers "push left" over "push right". The agent justifies this action via an MSX that focuses on keeping the pole vertical on the left side. This justification indicates that the agent is putting too much weight on the pole angle versus the boundary condition in this dangerous situation. The agent has not learned the critical importance of the left boundary, which indicates that further training on the left side of the game map is needed. Presumably, the agent did not experience similar situations very often during training.

Tug of War. In Figure 4, we give two examples from a high-performing 17-feature ESP agent, one that agrees with common sense and one that reveals a flaw. Game state (top) shows the ESP agent (blue

![](images/d536c0c809b5ef1a0d7ff4604187d3b467a95b6b7d101a8c560944ae63a37615.jpg)

![](images/05fc5e8a747f41362e179d69692d504a18ed20a0378a6d69c4846cb0b82c3890.jpg)

![](images/bb4a3f085e797fd3840bafe72fbdd02102303c170e6e76f6aac2620ebbc7ec87.jpg)

![](images/b927841995eb5b15666962f787d030291df72670671a8e29509b4751892c2f7d.jpg)

![](images/24a0afd0a3058198073089ee4ab3bea49472767c7643f3d3a39d214158f19f6f.jpg)
Figure 4: Example explanations for the Tug-of-War 17-feature ESP-DQN agent. Each row is a decision point showing: (left) game state; (middle) Q-values and GVFs for the preferred action and a non-preferred action; (right) IGX for the action pair and the corresponding MSX (indicated by highlighted bars). For Game 1 (top), the agent's preferred action is $+4$ Marine, $+1$ Baneling in the top lane and the non-preferred action is $+10$ Marine, $+1$ Baneling on the bottom. For Game 2 (bottom), the highest-ranked action is $+1$ Baneling in the bottom lane and the sub-optimal action is $+2$ Marine, $+4$ Baneling in the bottom lane.

![](images/18c06a28b2972ff0b0baa69145d9067d7d427ce4a1b557c91eda70e3422f55dd.jpg)

![](images/cf270da78f66395de785014322ac5a6f4982bacdbf87fd527a2c7feb815bb187.jpg)

player) with too few Marine buildings to defend against the Immortals. We show information for the best-ranked action and a sub-optimal action (details in the caption). The best action creates top-lane units, while the sub-optimal action creates the maximum number of bottom-lane units. The IGX and MSX (top) show that the GVF feature most responsible for the preference is "damage to the top base from Immortals", which agrees with intuition, since the best action attempts to defend the top base while the sub-optimal action does not. Indeed, the GVFs for the sub-optimal action predict that the top base will take $80\%$ damage from the enemy's Immortals, compared to nearly 0 for the best action.

In the second game state (bottom), the ESP agent plays against an opponent that it was not trained against and loses by having its bottom base destroyed. The state shows a large enemy attack on the bottom, with the ESP agent having enough resources (1500 minerals) to defend if it takes the right action. However, the most preferred action is to add just one Baneling building to the bottom lane, which results in losing. Why was this mistake made?

We compare the preferred action to an action that adds more buildings to the bottom lane, which should be preferred. The IGX and MSX show that the action preference is dominated by the GVF feature related to inflicting damage in the top lane with Banelings. Thus, the agent is "planning" to save minerals to purchase more top-lane Baneling buildings. The IGX does indicate that the agent understands the sub-optimal action would be able to defend the bottom lane; however, this advantage is overtaken by the optimism about the top lane. This misjudgement of relative values causes the agent to lose the game. On further analysis, we found that this misjudgement is likely because the ESP agent never experienced a loss due to such a bottom-lane attack during training.

# 7 SUMMARY

We introduced the ESP model for producing meaningful and sound contrastive explanations for RL agents. The key idea is to structure the agent's action-value function in terms of meaningful predictions of its future behavior. This allows action-value differences to be compared in terms of the deltas in future behavior they entail. To achieve meaningfulness, we required the agent designer to provide semantic features of the environment, upon which GVFs were learned. To achieve soundness, we ensured that our explanations were formally related to the agent's preferences in a well-defined way. Our case studies provide evidence that ESP models can be learned in non-trivial environments and that the explanations give insights into the agent's preferences. An interesting direction for future work is to further enhance the internal structure of the GVFs to allow for explanations at different levels of granularity, which may draw on ideas from GVF networks (Schlegel et al., 2018).
# REFERENCES

Akanksha Atrey, Kaleigh Clary, and David Jensen. Exploratory not explanatory: Counterfactual analysis of saliency maps for deep RL. In International Conference on Learning Representations, 2020.

Dimitri P Bertsekas and John N Tsitsiklis. Neuro-dynamic programming. Athena Scientific, 1996.

Samuel Greydanus, Anurag Koul, Jonathan Dodge, and Alan Fern. Visualizing and understanding Atari agents. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 1792-1801, Stockholm, Sweden, 10-15 Jul 2018. PMLR.

Piyush Gupta, Nikaash Puri, Sukriti Verma, Dhruv Kayastha, Shripad Deshmukh, Balaji Krishnamurthy, and Sameer Singh. Explain your move: Understanding agent actions using focused feature saliency. In International Conference on Learning Representations, 2020.

Rahul Iyer, Yuezhang Li, Huao Li, Michael Lewis, Ramitha Sundar, and Katia P. Sycara. Transparency and explanation in deep reinforcement learning neural networks. CoRR, abs/1809.06061, 2018.

Zoe Juozapaitis, Anurag Koul, Alan Fern, Martin Erwig, and Finale Doshi-Velez. Explainable reinforcement learning via reward decomposition. In Proceedings of the IJCAI 2019 Workshop on Explainable Artificial Intelligence, pp. 47-53, 2019.

Omar Zia Khan, Pascal Poupart, and James P Black. Minimal sufficient explanations for factored Markov decision processes. In Nineteenth International Conference on Automated Planning and Scheduling, 2009.

Jelle R Kok and Nikos Vlassis. Sparse cooperative Q-learning. In Proceedings of the Twenty-First International Conference on Machine Learning, pp. 61. ACM, 2004.

Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

Matthew L Olson, Lawrence Neal, Fuxin Li, and Weng-Keen Wong. Counterfactual states for Atari agents via generative deep learning. arXiv preprint arXiv:1909.12969, 2019.

Stuart J Russell and Andrew Zimdars. Q-decomposition for reinforcement learning agents. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 656-663, 2003.

Matthew Schlegel, Adam White, Andrew Patterson, and Martha White. General value function networks. arXiv preprint arXiv:1807.06763, 2018.

Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, pp. 3319-3328. JMLR.org, 2017.

Richard Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick Pilarski, Adam White, and Doina Precup. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In International Conference on Autonomous Agents and Multiagent Systems, volume 2, 2011.

Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT Press, 2018.

Harm Van Seijen, Mehdi Fatemi, Joshua Romoff, Romain Laroche, Tavian Barnes, and Jeffrey Tsang. Hybrid reward architecture for reinforcement learning. In Advances in Neural Information Processing Systems, pp. 5392-5402, 2017.

Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michael Mathieu, Andrew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 575(7782):350-354, 2019. doi: 10.1038/s41586-019-1724-z.

Jasper van der Waa, Jurriaan van Diggelen, Karel van den Bosch, and Mark Neerincx. Contrastive explanations for reinforcement learning in terms of expected consequences. In Proceedings of the Workshop on Explainable AI at the IJCAI conference, Stockholm, Sweden, 2018.

Christopher JCH Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3-4):279-292, 1992.
# A ESP-DQN PSEUDO-CODE

The pseudo-code for ESP-DQN is given in Algorithm 1.

Algorithm 1 ESP-DQN: Pseudo-code for ESP-DQN agent learning.

Require: Act(s, a) ;; returns tuple $(s', r, F, \text{done})$ of next state $s'$, reward $r$, GVF features $F \in R^n$, and terminal-state indicator done
Require: $K$ - target update interval, $\beta$ - reward discount factor, $\gamma$ - GVF discount factor
Init $\hat{Q}_F, \hat{Q}_F^{\prime}$ ;; the non-target and target GVF networks with parameters $\theta_F$ and $\theta_F^{\prime}$ respectively
Init $\hat{C}, \hat{C}^{\prime}$ ;; the non-target and target combining networks with parameters $\theta_C$ and $\theta_C^{\prime}$ respectively
Init $M \gets \emptyset$ ;; initialize replay buffer
;; Q-function is defined by $\hat{Q}(s,a) = \hat{C}(\hat{Q}_F(s,a))$
;; Target Q-function is defined by $\hat{Q}'(s,a) = \hat{C}'(\hat{Q}_F'(s,a))$
totalUpdates $\leftarrow 0$
repeat
Environment Reset; $s_0 \gets$ Initial State
for $t \gets 0$ to $T$ do
$a_t \gets \epsilon(\hat{Q}, s_t)$ ;; $\epsilon$-greedy
$(s_{t+1}, r_t, F_t, \text{done}_t) \gets \mathrm{Act}(s_t, a_t)$
Add $(s_t, a_t, r_t, F_t, s_{t+1}, \text{done}_t)$ to $M$
;; update networks
Randomly sample a mini-batch $\{(s_i, a_i, r_i, F_i, s_i', \text{done}_i)\}$ from $M$
$\hat{a}_i \gets \arg\max_{a \in A} \hat{Q}'(s_i', a)$
$f_{i}^{\prime}\gets \begin{cases} F_{i} & \text{if done}_{i}\text{ is true}\\ F_{i} + \gamma \hat{Q}_{F}^{\prime}(s_{i}^{\prime},\hat{a}_{i}) & \text{otherwise} \end{cases}$
$q_{i}^{\prime}\gets \begin{cases} r_{i} & \text{if done}_{i}\text{ is true}\\ r_{i} + \beta \hat{Q}^{\prime}(s_{i}^{\prime},\hat{a}_{i}) & \text{otherwise} \end{cases}$
Update $\theta_F$ via gradient descent on the average mini-batch loss $(f_i' - \hat{Q}_F(s_i, a_i))^2$
Update $\theta_C$ via gradient descent on the average mini-batch loss $(q_i' - \hat{Q}(s_i, a_i))^2$
if totalUpdates mod $K == 0$ then
$\theta_F^{\prime} \gets \theta_F$; $\theta_C^{\prime} \gets \theta_C$
end if
totalUpdates $\leftarrow$ totalUpdates + 1
if done$_t$ is true then
break
end if
end for
until convergence
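The target computations at the core of Algorithm 1 can be sketched in a few lines. The following is an illustrative reimplementation, not the authors' released code: the GVF and combining networks are replaced by fixed random linear maps, and all names (`gvf_target`, `q_target`, `esp_dqn_targets`) are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the target networks: gvf_target plays the role of
# \hat{Q}'_F, and q_target the role of \hat{Q}'(s,a) = \hat{C}'(\hat{Q}'_F(s,a)),
# with \hat{C}' taken to be a fixed linear combiner for illustration.
N_FEATURES, N_ACTIONS, STATE_DIM = 4, 3, 5
W_gvf = rng.normal(size=(N_ACTIONS, N_FEATURES, STATE_DIM))
w_comb = rng.normal(size=N_FEATURES)

def gvf_target(s, a):
    return W_gvf[a] @ s

def q_target(s, a):
    return float(w_comb @ gvf_target(s, a))

def esp_dqn_targets(batch, gamma=0.99, beta=0.99):
    """Compute the GVF targets f' and scalar targets q' from Algorithm 1."""
    f_targets, q_targets = [], []
    for (s, a, r, F, s_next, done) in batch:
        if done:
            f_targets.append(F)   # terminal: GVF target is just F
            q_targets.append(r)   # terminal: Q target is just r
        else:
            # greedy action under the *target* Q-function
            a_hat = max(range(N_ACTIONS), key=lambda b: q_target(s_next, b))
            f_targets.append(F + gamma * gvf_target(s_next, a_hat))
            q_targets.append(r + beta * q_target(s_next, a_hat))
    return np.array(f_targets), np.array(q_targets)
```

The mini-batch losses are then the squared differences between these targets and the non-target networks' outputs, as in Algorithm 1.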
# B CONVERGENCE PROOF FOR ESP-TABLE

Algorithm 2 gives the pseudo-code for ESP-Table based on $\epsilon$-greedy exploration. Note that, as for Q-learning, the convergence proof applies to any exploration strategy that guarantees all state-action pairs are visited infinitely often in the limit.

Algorithm 2 ESP-Table: Pseudo-code for a table-based variant of ESP-DQN. The notation $Q \xleftarrow{\alpha} x$ is shorthand for $Q \gets (1 - \alpha)Q + \alpha x$.

Require: Act(s, a) ;; returns tuple $(s^{\prime},r,F)$ of next state $s^\prime$, reward $r$, and GVF features $F\in R^{n}$
Require: $h(q)$ - hash function from $R^n$ to a finite set of indices $I$
Require: $K$ - target update interval
Require: $\gamma, \beta$ - discount factors for GVF and reward respectively
Init $\alpha_{F,0}, \alpha_{C,0}$ ;; learning rates for the GVF and combining function
Init $\hat{Q}_F[s,a]$ ;; GVF table indexed by state-action pairs
Init $\hat{C}[i]$ ;; combining-function table indexed by indices in $I$
Init $\hat{Q}_F^{\prime}[s,a]$ ;; target GVF table indexed by state-action pairs
Init $\hat{C}^{\prime}[i]$ ;; target combining-function table indexed by indices in $I$
;; Q-function is defined by $\hat{Q}(s,a) = \hat{C}[h(\hat{Q}_F[s,a])]$
;; Target Q-function is defined by $\hat{Q}'(s,a) = \hat{C}'[h(\hat{Q}_F'[s,a])]$
$s_0 \gets$ Initial State
$t = 0$
repeat
if $t$ mod $K == 0$ then
$\hat{Q}_F^{\prime} \gets \hat{Q}_F$; $\hat{C}^{\prime} \gets \hat{C}$
end if
$a_t \gets \epsilon(\hat{Q}, s_t)$ ;; $\epsilon$-greedy exploration
$(s_{t+1}, r_t, F_t) \gets \mathrm{Act}(s_t, a_t)$
$a^\prime \gets \arg\max_a \hat{Q}'(s_{t+1}, a)$
$\hat{Q}_F[s_t, a_t] \xleftarrow{\alpha_{F,t}} F_t + \gamma \hat{Q}_F'(s_{t+1}, a')$
$\hat{C}[h(\hat{Q}_F[s_t, a_t])] \xleftarrow{\alpha_{C,t}} r_t + \beta \hat{Q}'(s_{t+1}, a')$
$t \gets t + 1$
until convergence
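The two table updates of Algorithm 2 can be sketched directly. The rounding-based hash `h` below is only an illustrative stand-in for a locally consistent hash, and all names are ours, not the paper's:

```python
from collections import defaultdict
import numpy as np

N_FEATURES = 3

def h(q_f, decimals=1):
    """Illustrative hash: bucket a GVF vector by rounding each component.
    (Algorithm 2 requires a locally consistent h; rounding only stands in.)"""
    return tuple(np.round(q_f, decimals))

Q_F = defaultdict(lambda: np.zeros(N_FEATURES))   # GVF table, keyed by (s, a)
C = defaultdict(float)                            # combining table, keyed by h(.)

def esp_table_update(s, a, r, F, q_f_next, q_next,
                     alpha_f=0.5, alpha_c=0.5, gamma=0.9, beta=0.9):
    """One ESP-Table step: move the GVF entry toward its target, then move
    the combining entry for the *updated* hash bucket toward the Q target."""
    Q_F[(s, a)] = (1 - alpha_f) * Q_F[(s, a)] + alpha_f * (F + gamma * q_f_next)
    key = h(Q_F[(s, a)])
    C[key] = (1 - alpha_c) * C[key] + alpha_c * (r + beta * q_next)
    return Q_F[(s, a)], C[key]
```

Here `q_f_next` and `q_next` stand for the target-table lookups $\hat{Q}_F'(s_{t+1}, a')$ and $\hat{Q}'(s_{t+1}, a')$.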
For the proof we will let $t$ index the number of learning updates and $i = \lfloor t / K\rfloor$ be the number of updates to the target tables. The formal statements refer to the "standard conditions for the almost sure convergence of $Q$-learning". These conditions are: 1) there must be an unbounded number of updates for each state-action pair, and 2) the learning rate schedule $\alpha_{t}$ must satisfy $\sum_{t}\alpha_{t} = \infty$ and $\sum_{t}\alpha_{t}^{2} < \infty$. ESP-Table uses two learning rates, one for the GVF and one for the combining function, both of which must satisfy these conditions.

We will view the algorithm as proceeding through a sequence of target intervals, indexed by $i$, with each interval consisting of $K$ updates. We let $\hat{C}_i^{\prime}$ and $\hat{Q}_{F,i}^{\prime}$ denote the target combining function and target GVF, respectively, for target interval $i$, with corresponding target Q-function $\hat{Q}_i^\prime (s,a) = \hat{C}_i^\prime [h(\hat{Q}_{F,i}^\prime [s,a])]$ and greedy policy $\hat{\pi}_i^\prime (s) = \arg \max_a\hat{Q}_i^\prime (s,a)$. The following lemma relates the targets via the Bellman backup operators. Below, for a GVF $Q_{F}$ we define the max-norm as $|Q_F|_{\infty} = \max_s\max_a\max_k|Q_{f_k}(s,a)|$.

Lemma 1. If ESP-Table is run under the standard conditions for the almost sure (a.s.) convergence of $Q$-learning and uses a Bellman-sufficient pair $(F,h)$ with locally consistent $h$, then for any $\epsilon >0$ there exists a finite target update interval $K$, such that, with probability 1, for all $i$, $\left|\hat{Q}_{i + 1}^{\prime} - B[\hat{Q}_i^{\prime}]\right|_{\infty}\leq \epsilon$ and $\left|\hat{Q}_{F,i + 1}^{\prime} - B_{F}^{\hat{\pi}_{i}^{\prime}}[\hat{Q}_{F,i}^{\prime}]\right|_{\infty}\leq \epsilon$.

That is, after a finite number of learning steps during an interval, the updated target Q-function and GVF are guaranteed to be close to the Bellman backups of the previous target Q-function and GVF.

Note that since the targets are arbitrary on the first iteration, these conditions hold starting from any table-based ESP Q-function.
Proof. Consider an arbitrary iteration $i$ with target functions $\hat{Q}_i^{\prime}$, $\hat{C}_i^{\prime}$, $\hat{Q}_{F,i}^{\prime}$, and let $\hat{Q}_i^t$, $\hat{C}_i^t$, and $\hat{Q}_{F,i}^t$ be the corresponding non-target functions after $t$ updates during the interval. Note that for $t = 0$ the non-targets equal the targets. The primary technical issue is that $\hat{C}_i^t$ is based on a table that can change whenever $\hat{Q}_{F,i}^t$ changes. Thus, the proof strategy is to first show a convergence condition for $\hat{Q}_{F,i}^t$ that implies the table for $\hat{C}_i^t$ will no longer change, which will then lead to the convergence of $\hat{C}_i^t$.

Each update of $\hat{Q}_{F,i}^{t}$ is based on a fixed target policy $\hat{\pi}_i^{\prime}$ and a fixed target GVF $\hat{Q}_{F,i}^{\prime}$, so the series of updates can be viewed as a stochastic approximation algorithm for estimating the result of a single Bellman GVF backup given by

$$
B_{F}^{\hat{\pi}_{i}^{\prime}}\left[\hat{Q}_{F,i}^{\prime}\right](s,a) = F(s,a) + \gamma \sum_{s^{\prime}} T(s,a,s^{\prime}) \cdot \hat{Q}_{F,i}^{\prime}\left[s^{\prime}, \hat{\pi}_{i}^{\prime}(s^{\prime})\right], \tag{2}
$$

which is just the expectation of $F(s, a) + \gamma \hat{Q}_{F,i}'[S', \hat{\pi}_i'(S')]$ with $S' \sim T(s, a, \cdot)$. Given the conditions on the learning rate $\alpha_t$, it is well known that $\hat{Q}_{F,i}^t$ will thus converge almost surely (a.s.) to this expectation, i.e. to $B_{F}^{\hat{\pi}_i'}[\hat{Q}_{F,i}']$. The a.s. convergence of $\hat{Q}_{F,i}^t$ implies that for any $\epsilon'$ there is a finite $t_1$ such that for all $t > t_1$, $|\hat{Q}_{F,i}^t - B_{F}^{\hat{\pi}_i'}[\hat{Q}_{F,i}']|_{\infty} \leq \epsilon'$. This satisfies the second consequence of the lemma if $\epsilon' \leq \epsilon$ and $K > t_1$.

Let $\epsilon' < \epsilon$ be such that it satisfies the local consistency condition of $h$, which implies that for all $t > t_1$ and all $(s, a)$, $h(\hat{Q}_{F,i}^t[s, a]) = h(B_F^{\hat{\pi}_i'}[\hat{Q}_{F,i}'](s, a))$. That is, after $t_1$ updates, $h$ will map the non-target GVF to the same table entry as the Bellman GVF backup of the target GVF and policy. Combining this with the Bellman sufficiency of $(F, h)$ implies that for any state-action pairs $(s, a)$ and $(x, y)$, if $h(\hat{Q}_{F,i}^t[s, a]) = h(\hat{Q}_{F,i}^t[x, y])$ then $B[\hat{Q}_i'](s, a) = B[\hat{Q}_i'](x, y)$. This means that after $t_1$ updates, all of the updates to a table entry $h(\hat{Q}_{F,i}^t[s, a])$ have the same expected value $B[\hat{Q}_i'](s, a)$. By a similar argument as above, this implies that $\hat{Q}_i^t(s, a) = \hat{C}_i^t[h(\hat{Q}_{F,i}^t[s, a])]$ converges a.s. to $B[\hat{Q}_i'](s, a)$ for all $(s, a)$ pairs. Let $t_2$ be the implied finite number of updates after $t_1$ at which the error is within $\epsilon$. The target update interval $K = t_1 + t_2$ satisfies both conditions of the lemma, which completes the proof.
Using Lemma 1 we can prove the main convergence result.

Theorem 2. If ESP-Table is run under the standard conditions for the almost sure (a.s.) convergence of $Q$-learning and uses a Bellman-sufficient pair $(F, h)$ with locally consistent $h$, then for any $\epsilon > 0$ there exists a finite target update interval $K$, such that for all $s$ and $a$, $\hat{\pi}^t(s)$ converges a.s. to $\pi^*(s)$ and $\lim_{t \to \infty} |\hat{Q}_F^t(s, a) - Q_F^{\pi^*}(s, a)| \leq \epsilon$ with probability 1.

Proof. From Lemma 1 we can view ESP-Table as performing approximate Q-value iteration with respect to the sequence of target functions $\hat{Q}_i^{\prime}$. That is, the total updates made during a target interval define an approximate Bellman backup operator $\hat{B}$, such that $\hat{Q}_{i + 1}^{\prime} = \hat{B} [\hat{Q}_i^{\prime}]$. Specifically, there exists a $K$ such that the approximate operator is $\epsilon$-accurate, in the sense that for any Q-function $Q$, $\left|\hat{B}[Q] - B[Q]\right|_{\infty} \leq \epsilon$.

Let $\hat{B}^i [Q]$ denote $i$ applications of the operator starting at $Q$, so that $\hat{\pi}_i^\prime$ is the greedy policy with respect to $\hat{B}^i [\hat{Q}_0']$. Prior work (Bertsekas & Tsitsiklis, 1996) implies that for any starting $Q$, the sub-optimality of this greedy policy is bounded in the limit:

$$
\lim_{i \rightarrow \infty} \left| V^{*} - V^{\hat{\pi}_{i}^{\prime}} \right|_{\infty} \leq \frac{2\beta}{(1 - \beta)^{2}} \epsilon \tag{3}
$$
where $V^{*}$ is the optimal value function and $V^{\pi}$ is the value function of a policy $\pi$.

Now let

$$
\delta = \min_{\pi}\min_{s:V^{*}(s)\neq V^{\pi}(s)}|V^{*}(s) - V^{\pi}(s)|
$$

be the smallest non-zero difference between the optimal value at a state and the value of a sub-optimal policy at that state, across all non-optimal policies. From this definition it follows that if $\left|V^{*} - V^{\hat{\pi}_i'}\right|_{\infty} \leq \delta$, then $\hat{\pi}_i' = \pi^{*}$. From Equation 3, this condition is achieved in the limit as $i \to \infty$ if we select $\epsilon < \frac{(1 - \beta)^2}{2\beta}\delta$. Let $K_{1}$ be the finite target interval implied by Lemma 1 to achieve this constraint on $\epsilon$. Since Lemma 1 holds with probability 1, we have proven that $\hat{\pi}_{i}'$ converges almost surely to $\pi^{*}$ for a finite $K_{1}$. This implies the first part of the theorem.
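The choice of $\epsilon$ in the argument above can be compressed into a single chain, restating the step from Equation 3 to policy optimality:

$$
\epsilon < \frac{(1 - \beta)^{2}}{2\beta}\delta \quad \Longrightarrow \quad \lim_{i \to \infty}\left|V^{*} - V^{\hat{\pi}_{i}^{\prime}}\right|_{\infty} \leq \frac{2\beta}{(1 - \beta)^{2}}\epsilon < \delta \quad \Longrightarrow \quad \hat{\pi}_{i}^{\prime} = \pi^{*} \text{ in the limit.}
$$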
For the second part of the theorem, similar to the above reasoning, Lemma 1 says that we can view the target GVF $\hat{Q}_{F,i}^{\prime}$ as being updated by an approximate Bellman GVF operator $\hat{B}_F^{\hat{\pi}_i^{\prime}}$. That is, for any GVF $Q_{F}$ and policy $\pi$, $\left|B_F^\pi [Q_F] - \hat{B}_F^\pi [Q_F]\right|_\infty \leq \epsilon$. Further, it is straightforward to show that our approximate Bellman GVF operator satisfies a condition analogous to Equation 3, but for GVF evaluation accuracy in the limit. In particular, for any $\pi$ and initial $Q_{F}$, if we define $\hat{Q}_{F,i}^{\pi}$ to be the GVF that results after $i$ approximate backups, the following holds:

$$
\lim_{i \rightarrow \infty} \left| Q_{F}^{\pi} - \hat{Q}_{F, i}^{\pi} \right|_{\infty} \leq \frac{\epsilon}{(1 - \gamma)}. \tag{4}
$$

Thus, for a fixed policy, the approximate backup can be made arbitrarily accurate for small enough $\epsilon$.

From the almost sure convergence of $\hat{\pi}_i^{\prime}$, we can infer that there exists a finite $i^{*}$ such that for all $i > i^{*}$, $\hat{\pi}_i^\prime = \pi^*$. Thus, if $K > K_{1}$, then after the $i^{*}$-th target update the target policy will be optimal thereafter. At this point the algorithm enters a pure policy-evaluation mode for the fixed policy $\pi^{*}$, which means that the approximate GVF operator is continually applied to $\pi^{*}$ across target intervals. From Equation 4, this means that in the limit as $i\to \infty$ we have

$$
\lim_{i \to \infty} \left| Q_{F}^{\pi^{*}} - \hat{Q}_{F, i}^{\prime} \right|_{\infty} \leq \frac{\epsilon}{(1 - \gamma)}.
$$

Thus, we can achieve any desired accuracy tolerance in the limit by selecting a small enough $\epsilon$. Let $K_{2}$ be the target interval size implied by Lemma 1 for that $\epsilon$, and let the target interval be $K = \max \{K_1,K_2\}$. This implies that, using target interval $K$, there is a finite number of target updates $i^{\prime}$ after the first $i^{*}$ updates such that for all $i > i^{*} + i^{\prime}$, $\hat{Q}_{F,i}^{\prime}$ achieves the error tolerance. This completes the second part of the proof.
# C TUG OF WAR DOMAIN

In this section, we overview the real-time strategy (RTS) game Tug of War (ToW) used for this study. Tug of War is an adversarial two-player zero-sum strategy game that we designed using Blizzard's PySC2 interface to StarCraft 2. It is played on a rectangular map divided horizontally into top and bottom lanes, as shown in Figure 5. The game is viewed from an omnipotent camera position looking down at the map. Each lane has two base structures; Player 1 owns the two bases on the left of the map, and Player 2 owns the two bases on the right. The game proceeds in 30-second waves. Before the next wave begins, each player may select either the top or bottom lane in which to purchase some number of military-unit production buildings with their available currency.

We designed Tug of War to allow AI vs. AI, Human vs. Human, and AI vs. Human gameplay. A Human vs. Human ToW game from Player 1's perspective can be watched here: https://www.youtube.com/watch?v=krfDz0xjfKg

Each purchased building produces one unit of the specified type at the beginning of each wave. Buildings have different costs and require players to budget their capital. These three unit types,

![](images/9687f595531beaadd1a200e24eb1f8471ddf6177d14e331a4ba2093dee46e3fd.jpg)
Figure 5: (left) Tug of War game map - top lane and bottom lane. Player 1 owns the two bases on the left (gold star-shaped buildings); Player 2 owns the two bases on the right. Troops from opposing players automatically march towards their opponent's side of the map and attack the closest enemy in their lane. (right) Unit rock-paper-scissors - Marines beat Immortals, Immortals beat Banelings, and Banelings beat Marines. We adjusted unit stats in our custom StarCraft 2 map to suit ToW's balance.

Marines, Immortals, and Banelings, have strengths and weaknesses that form a rock-paper-scissors relationship, as shown in Figure 5. Units automatically move across their lane toward the opponent's side, engage enemy units, and attack the enemy base if close enough. Units only attack enemy troops and bases in their own lane. If no base is destroyed after 40 waves, the player who owns the base with the lowest health loses.

Both players receive a small amount of currency at the beginning of each wave. A player can linearly increase this stipend by saving to purchase up to three expensive economic buildings, referred to as Pylons.

ToW is a near full-information game; players can see all units and buildings up to the current wave. Both players' last purchased buildings are revealed the moment after a wave spawns. The only hidden information is the unspent currency the opponent has saved. One could deduce this value, since the wave number, the cost of each building, the currency earned per wave, and the building quantities up to the current snapshot are all known, but it would be difficult for a human to perform this calculation quickly.

Tug of War is a stochastic domain: there is slight randomness in how opposing units fight and significant uncertainty in how the opponent will play. Winning requires players to assess the current state of the game and balance their economic investment between producing units immediately and saving for the future. Players must always be mindful of what their opponent may do, so as not to fall behind economically or in unit production. Purchasing a Pylon increases one's currency income and gradually allows the player to purchase more buildings, but players must be wary: Pylons are expensive, and saving currency means not purchasing unit-production buildings, which may lead to a vulnerable position. Conversely, if the opponent seems to be saving currency, the player can only guess what the opponent is saving for; the opponent may be saving to purchase a Pylon, or may be planning to purchase many units in a single lane.

Tug of War presents a challenging domain for reinforcement learning (RL). The challenges include a large state space, a large action space, and sparse reward. States in ToW can involve a conceivably unbounded number of combinations of units on the field, building quantities in each lane, and base health values. The number of possible actions in a state corresponds to the number of ways to allocate the current budget, which can range from tens to thousands. Finally, the reward is sparse, giving $+1$ (winning) or 0 (losing) at the end of the game, where games can last up to 40 waves/decisions.
# C.1 TUG OF WAR FEATURE DESIGN

While humans need continuous visual feedback to interact with video games, computer systems can use simple numeric values received at discrete intervals to interpret game-state changes. We designed an abstract "snapshot" of the ToW game state at a single point in time, represented as a 68-dimensional feature vector. Note that for this study we added additional features to capture more granular details, bringing the total to 131 features. At the last moment before a wave spawns, the AI agent receives this feature snapshot and uses it to select an action for the next wave. We call this moment a decision point. The decision point is the only time when the agent receives information about the game and executes an action; the agent does not continuously sample observations from the game. The agent's performance indicates this abstraction is sufficient for it to learn and play the game competently.

The state feature vector includes information such as the current wave number, the health of all 4 bases, the agent's current unspent currency, the agent's current building counts in both the top and bottom lanes, the enemy's last observed building counts in the top and bottom lanes, Pylon quantities, and the number of troops in each of the 4 grid sections of the map, as depicted in Figure 6. The opponent's current unspent mineral count is not sent to the agent, as this hidden information is part of the game's design.

![](images/1ebb16c0cc94afdedbd69ff1edde5c9c941f0c8d0974334f9cbf685cfdd9a80e.jpg)
Figure 6: ToW 2-lane, 4-grid layout - unit quantities and positions on the map are discretized into four sections per lane.
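As an illustration of the snapshot idea, the decision-point features can be assembled by flattening per-field observations into one vector. The field names and sizes below are hypothetical and do not reproduce the paper's exact 68- or 131-feature layout:

```python
import numpy as np

def snapshot_features(state):
    """Flatten a (hypothetical) ToW decision-point snapshot into one
    feature vector. All keys and dimensions here are illustrative."""
    return np.concatenate([
        [state["wave"]],                   # current wave number
        state["base_hp"],                  # health of all 4 bases
        [state["currency"]],               # agent's unspent currency
        state["own_buildings_top"],        # agent building counts, top lane
        state["own_buildings_bot"],        # agent building counts, bottom lane
        state["enemy_buildings_top"],      # last observed enemy counts, top lane
        state["enemy_buildings_bot"],      # last observed enemy counts, bottom lane
        [state["pylons"], state["enemy_pylons"]],
        state["troops_per_grid"].ravel(),  # troop counts per lane/grid/unit type
    ]).astype(np.float32)
```

The opponent's unspent currency is deliberately absent from the dictionary, matching the game's hidden-information design.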
# D AGENT DETAILS: HYPERPARAMETERS AND ARCHITECTURES

The ESP agent code is provided in the Supplementary Material, including pre-trained models for all domains we present.

Table 1 gives the hyperparameters used in our implementation. Note that our implementation of ESP-DQN supports both hard target updates, as shown in the pseudo-code, and "soft target updates" (Lillicrap et al., 2015), where at each step the target network parameters are gradually moved toward the currently learned parameters via a mixing proportion $\tau$. We found that this can sometimes lead to more stable learning and use it in two of our domains, as indicated in the table.

<table><tr><td>Hyper-Parameters</td><td>Lunar Lander</td><td>Cart Pole</td><td>Tug-of-War (both)</td></tr><tr><td>Discount factors (γ and β)</td><td>0.99</td><td>0.99</td><td>0.9999</td></tr><tr><td>Learning Rate (α)</td><td>10<sup>-4</sup></td><td>10<sup>-5</sup></td><td>10<sup>-4</sup></td></tr><tr><td>Start Exploration (ε<sub>s</sub>)</td><td>1.0</td><td>1.0</td><td>1.0</td></tr><tr><td>Final Exploration (ε<sub>f</sub>)</td><td>0.01</td><td>0.05</td><td>0.1</td></tr><tr><td>Exploration Decrease (linear) Steps</td><td>2 × 10<sup>5</sup></td><td>2 × 10<sup>5</sup></td><td>4 × 10<sup>4</sup></td></tr><tr><td>Batch Size</td><td>128</td><td>128</td><td>32</td></tr><tr><td>Soft/Hard Replace</td><td>Soft</td><td>Soft</td><td>Hard</td></tr><tr><td>Soft Replace (τ)</td><td>5 × 10<sup>-4</sup></td><td>5 × 10<sup>-4</sup></td><td>N/A</td></tr><tr><td>Hard Replace Steps</td><td>N/A</td><td>N/A</td><td>6 × 10<sup>3</sup></td></tr><tr><td>GVF Net Optimizer</td><td>Adam</td><td>Adam</td><td>Adam</td></tr><tr><td>Combiner Net Optimizer</td><td>SGD</td><td>SGD</td><td>SGD</td></tr><tr><td>Training Episodes</td><td>5 × 10<sup>3</sup></td><td>10<sup>4</sup></td><td>1.3 × 10<sup>4</sup></td></tr><tr><td>Evaluation Intervals</td><td>200</td><td>100</td><td>40</td></tr><tr><td>Evaluation Episodes</td><td>100</td><td>100</td><td>10</td></tr><tr><td>Riemann approximation steps of IGX</td><td>30</td><td>30</td><td>30</td></tr></table>

Table 1: Hyper-parameters and optimizers used to train our ESP-DQN and DQN agents on Lunar Lander, Cart Pole, and Tug of War.

Table 2 presents the GVF network structures used to train the agents in each domain. The choice of activation function for each GVF output component is based on the output type of the component. For GVF outputs in [0, 1] we use a sigmoid activation; for sets of mutually exclusive indicator GVFs (e.g., the win condition) we use a softmax activation over the set; and for GVF outputs with arbitrary numeric ranges we use a linear activation. Specifically, we use sigmoid activations on features F1-F12 and F17 for our Tug of War ESP-DQN 17-feature agent and on F131 for our Tug of War ESP-DQN 131-feature agent, because those outputs range over (0, 1). We apply a softmax activation to features F13-F16 and F1-F8 for our Tug of War ESP-DQN 17-feature and 131-feature agents, respectively, because those features correspond to probabilities that sum to 1.

<table><tr><td></td><td>Lunar Lander</td><td>Cart Pole</td><td>Tug-of-War (17f)</td><td>Tug-of-War (131f)</td></tr><tr><td>GVF Net</td><td>3-layer MLP</td><td>3-layer MLP</td><td>4-layer MLP</td><td>4-layer MLP</td></tr><tr><td>GVF Output Activation Function</td><td>Linear</td><td>Linear</td><td>Sigmoid (F1-F12, F17), SoftMax (F13-F16)</td><td>SoftMax (F1-F8), Sigmoid (F131)</td></tr><tr><td>Combiner Net</td><td>3-layer MLP</td><td>3-layer MLP</td><td>4-layer MLP</td><td>4-layer MLP</td></tr></table>

Table 2: Network structures used to train our ESP-DQN and DQN agents on Lunar Lander, Cart Pole, and Tug of War.

# E TUG OF WAR 131 FEATURES

We now give a detailed description of the 131 features used to train our Tug of War ESP-DQN agent. These features capture events in ToW, namely:

- Game-ending win-condition probabilities: the likelihood for each base to be destroyed or to have the lowest HP at wave 40.
- P1 and P2 currency: these features allow the GVFs to predict the amount of money players will receive in the future.
- Quantity of units spawned.
- The number of units of each type that will survive at the different map ranges we defined, for both players, allowing the GVFs to predict the future advantage of each unit type in each lane.
- Delta damage to each of the four bases by each of the three unit types: these features allow the GVFs to predict the amount of damage each unit type will inflict on the opponent's base in the unit's respective lane.
- The amount of damage inflicted by each unit type on each other unit type, for both players (e.g., the damage friendly Marines inflicted on enemy Immortals), allowing the GVFs to predict type-versus-type damage.
- An indicator of whether the game reaches the tie-breaker wave.
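The hard and soft target-update rules referenced alongside Table 1 differ only in how the target parameters track the online parameters. A minimal sketch over parameter dictionaries (illustrative, not the released code):

```python
import numpy as np

def soft_update(target, online, tau=5e-4):
    """Soft target update (Lillicrap et al., 2015): each target parameter
    moves a fraction tau toward the corresponding online parameter."""
    return {k: (1 - tau) * target[k] + tau * online[k] for k in target}

def hard_update(online):
    """Hard target update: copy the online parameters wholesale,
    performed once every K training steps."""
    return {k: np.array(v, copy=True) for k, v in online.items()}
```

With $\tau = 5 \times 10^{-4}$ (the value in Table 1), the target network changes only slightly per step, which is the source of the added stability.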
# F EXAMPLE EXPLANATIONS

Cart Pole. Figure 7a shows a Cart Pole state encountered by a learned near-optimal ESP policy, where the cart and the pole are moving to the left, with the pole angle already in a dangerous range. The action "push left" is preferred over "push right", which agrees with intuition. We still wish to verify that the reasons for the preference agree with our common sense. From the IGX and MSX in Figure 7c, the primary reason for the preference is the "pole angle left" GVF, which indicates that pushing to the right would lead to a future where the pole angle spends more time in the dangerous left region. Interestingly, we see that "push right" is considered advantageous compared to "push left" with respect to the left-boundary and left-velocity features, which indicates some risk for "push left" with respect to these components. All of these preference reasons agree with intuition and, along with similar examples, can build our confidence in the agent.
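The IGX values reported in these examples rely on a Riemann-sum approximation of integrated gradients (Sundararajan et al., 2017) with 30 steps, per Table 1. A minimal generic sketch, with a toy scalar function standing in for the learned combiner $\hat{C}$ and a finite-difference gradient standing in for backpropagation (names are ours):

```python
import numpy as np

def numerical_grad(f, x, eps=1e-5):
    """Central-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        d = np.zeros_like(x, dtype=float)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def integrated_gradients(f, x_base, x, steps=30):
    """Midpoint Riemann-sum approximation of integrated gradients of f
    along the straight line from x_base to x."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.zeros_like(x, dtype=float)
    for a in alphas:
        grads += numerical_grad(f, x_base + a * (x - x_base))
    return (x - x_base) * grads / steps
```

A useful sanity check is the completeness property: the attributions sum to $f(x) - f(x_{\text{base}})$, up to Riemann-sum error.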
+ Lunar Lander. Figure 8a illustrates a Lunar Lander state achieved by a near-optimal ESP policy. The lander is moving down to the left and is close to landing within the goal. Additionally, the left leg has touched the ground as marked by the green dot. Figure 8b shows the GVF values of all actions and expects the lander to land successfully with the right leg touching down. The GVFs values of the distance, velocity, and angle are small because the lander is close to the goal.
390
+
391
+ Although the lander can land successfully after taking any action in this state, the IGX shown in Figure 8c illustrates that the agent prefers "use left engine". This is because using the right engine will
392
+
393
+ ![](images/fde7a0876051e1f104cae0a74290918cd41a912163aaeebccb4dac8277aa5c28.jpg)
394
+
395
+ ![](images/c4e3835f3a5f0fadc2b4b79f849de416bb71f4c9363d16eb2e80b1d47fa18bc5.jpg)
396
+
397
+ ![](images/2138de603deea16640827062546c936cd4cf4edc798e72b9cd2deb7f524caea3.jpg)
398
+
399
+ ![](images/cdfc0ada0a21f118500284e2507b85ba085d3b9ae0311ce80ae82f799307be9d.jpg)
400
+ (a) Game State
401
+ (b) GVFs
402
+ (c) IG and MSX
403
+
404
+ ![](images/35d4ce93fee9982f03c07fe17ef20b594ee68c07343cfe35339160568e86cc09.jpg)
405
+ (a) Game State
406
+
407
+ ![](images/4e44240377c679191f26554a96d08b15d8e840b6cb2817433e2a3e750405cc44.jpg)
408
+ Figure 7: Explanation example for Cart Pole. The three sub-figures show the game state, the Q-values and GVF predictions for each action, and the IGX and MSX, respectively.
409
+
410
+ ![](images/31a03666f173156ea08087350c9a7a33bf995a01ff80647685bde475f9d0e8b8.jpg)
411
+
412
+ ![](images/03c28c2f4ba0ff00027dbbe7647d7189fc0d4cb767472b2dbfdda20423dbba64.jpg)
413
+ (b) GVFs
414
+ (c) IG and MSX
415
+ Figure 8: Explanation example for Lunar Lander. The three sub-figures show the game state, the Q-values and GVF predictions for each action, and the IGX and MSX, respectively.
416
+
417
+ increase the velocity of the lander, pushing it towards the left and increasing "F1 Distance" from the goal. This action and its justification make intuitive sense: the lander is unlikely to fail in this state and has chosen an action that reduces its velocity and decreases its landing delay.
418
+
419
+ The "use main engine" action also delays the landing and increases the distance to the goal, as indicated by the MSX bars in Figure 8c. The IGX also shows that "use main engine" risks the left leg leaving the ground, which agrees with intuition since moving up pushes the lander back into space. However, "use main engine" gives the lander another opportunity to adjust its velocity and angle, which may be why the IGX values of velocity and tilt-angle are negative. The "no-op" action has a lower preference than the best action because the lander is slightly drifting and may move out of the goal; the two largest IGX values of "no-op" agree with this rationale. However, the IGX of landing is negative, which may be arbitrary, or may indicate that "no-op" leads the lander to land faster since it is already moving down. Landing faster can sometimes yield less reward, because first moving toward the center of the goal reduces the distance between the lander and the goal center and thus gains more reward.
420
+
421
+ Tug of War: 17 Features. Figure 9a depicts a screenshot of a Tug of War game where our ESP agent (P1, blue) is playing against a new AI opponent (P2, orange) it has never encountered. The ESP agent's top base is destroyed after two waves, losing it the game. The annotated game state shows the ESP agent does not have enough units to defend its top base, as its opponent's Banelings can kill almost all of its units, and the agent's top-lane base has approximately $35\%$ hit points (HP) remaining. We can regard this state as a critical moment in the game because the agent spends all its money to defend the top lane and still loses the base within two waves, even after taking its highest-ranked action. Given our deep ToW game knowledge, we want to understand why the ESP agent chose to purchase Banelings in the bottom lane (arguably sub-optimal) rather than Immortals in the top lane (intuitively a better action).
422
+
423
+ To understand why the agent prefers an action that is worse than one we intuitively recognize to be better, we analyze both actions' GVFs (Figure 9b) and their IG & MSX (Figure 9c). The GVFs show the sub-optimal action is expected to reduce damage from the enemy's top Banelings, indicating the agent understands that taking the sub-optimal action can pose a better
424
+
425
+ ![](images/9209cfc48d1bf71f05aae9d3df7f181b67a36bc9834c4325e4b8599511056588.jpg)
426
+ (a) Game State
427
+
428
+ ![](images/88fd01a96ac277e3534c4f5e9f6877d68c3c6b1df24fc981b12741668f57343c.jpg)
429
+ (b) GVFs
430
+ Figure 9: Explanation example for the Tug-of-War 17-feature ESP-DQN agent. The three sub-figures show the game state, the Q-values and GVF predictions for each action, and the IGX and MSX, respectively. The top-ranked action is $+2$ Banelings in the Bottom Lane and the sub-optimal action is $+1$ Immortal in the Top Lane.
431
+
432
+ ![](images/a30c6f0528a5c47330864378cac3955fbce73319c31c3b465193b628cef7fce8.jpg)
433
+ (c) IG and MSX
434
+
435
+ defense. However, the MSX bar shows the positive IGX of the agent's own bottom Baneling damage still covers the negative IGX of the enemy's top Baneling damage, indicating the agent is focusing on destroying the enemy's bottom base while ignoring the damage its own top base will take. This misjudgement can be attributed to the agent over-fitting to its fixed-policy opponent during training.
436
+
437
+ ![](images/ef960d20fdfa7ff8086a5e6762726388fc2c226da48a3a0205920388189ab657.jpg)
438
+ (a) Game State
439
+
440
+ Tug of War: 131 Features. Figure 10a depicts a screenshot of a Tug of War game where our ESP agent (P1, blue) is playing against the same fixed-policy AI opponent (P2, orange) it was trained against. The ESP agent wins by destroying the opponent's bottom base. The state in Figure 10a indicates both players have a balanced quantity of units in the top lane. We also observe P2 has an advantage in the bottom lane, as the ESP agent does not have enough units to defend. The ESP agent has determined its best action is to spend all its money on $+8$ Marine buildings in the bottom lane to defend, which agrees with intuition as Marines counter Immortals. To justify why one can regard this choice as optimal, we compare the agent's best-ranked action, $+8$ Marine buildings in the bottom lane, to a sub-optimally ranked action, $+5$ Baneling buildings in the bottom lane, since Immortals counter Banelings.
441
+
442
+ Figure 10b shows the GVF values of both actions. Given the dense nature of the 131 features, we summarize the following:
443
+
444
+ - The values of accumulated-quantity features, such as future currency to be earned, are higher for the sub-optimal action than for the best action, because the game is expected to be prolonged if the sub-optimal action is taken. The probability of ending the game by tie-breaker (F131), shown in the "Probability to End by Tie-breaker" graph of Figure 10b, agrees that taking the best action leads to a faster win.
445
+ - The sub-optimal action raises the probability of our ESP agent's bottom base being destroyed (F2) and lowers the probability of the opponent's bottom base being destroyed (F4). This assessment agrees with the game rules, as Banelings do little to counter Immortals.
446
+ - The agent's Expected Bottom Marines to Spawn (F14) is higher if it takes the best action, and its Expected Bottom Banelings to Spawn (F15) is higher if it takes the sub-optimal action.
447
+ - By taking the best action, the agent expects its future surviving bottom Marines to be closer to P2's bottom base (F44, F47, and F50), indicating the agent's units are able to push the enemy back. Contrast this with the sub-optimal action, where the opponent's surviving bottom Immortal
448
+
449
+ ![](images/d0c0e17d6d9b2763e7e4274ff48c50af0950fe15e609084d4dca97e3ef1e8948.jpg)
450
+
451
+ ![](images/f6206a6cfce29ce1cf95d4cf6cd7a656dd97941ec4e456f5fbfea3deb43c220d.jpg)
452
+
453
+ ![](images/43e79d0ea51bb6f903642c8d974029c62c2a9671ca3a40d564bf17a220beffdb.jpg)
454
+
455
+ ![](images/483612a30a89e170398a99a40dbd296beca6e560c4c54bd9dee2528da247495b.jpg)
456
+
457
+ ![](images/3c40cd57c5ced322df177f66426de20ced495340b6ae13c9b32cf94679d13f80.jpg)
458
+
459
+ ![](images/586965bb5fad9451eafef604b0e96e91d7860d5cf08497d3988da5a5d391d0a6.jpg)
460
+
461
+ ![](images/8367c90c9779c8bdcd75c200ec38170491d11f867470894890645fdd076536ba.jpg)
462
+
463
+ ![](images/3d9267ead11ccb7467e29200a70989111115c845300e909a86a28f1af9c151eb.jpg)
464
+
465
+ ![](images/01dcd5ab2da1f38c15816ced743528fdad517b6b4f19a6fcaab2f1972f2cd10f.jpg)
466
+
467
+ ![](images/87e9e117ce340102ee28eceeee81e39c94c621bc031453a95353a14d18a03c9b.jpg)
468
+
469
+ ![](images/27ab51d6c15a8ab9a6c254346c852aa00920450d5054b1a39de25b0be1fa830b.jpg)
470
+ (b) GVFs
471
+
472
+ is expected to be closer to the ESP agent's bottom base (F70, F73, and F76), indicating the opponent has pushed the agent back.
473
+
474
+ - If the ESP agent purchases $+8$ Marine buildings in the bottom lane (best-ranked action), the agent expects to take no damage from the enemy (F89 to F94). This contrasts with the expected damage if the agent were to purchase $+5$ Baneling buildings in the bottom lane (sub-optimal action), where the agent expects to take base damage from P2's Immortals (F94), as shown in the "Units Attacking Top/Bottom Base" graph of Figure 10b.
475
+ - We can validate that the agent understands the rock-paper-scissors interaction between Marines, Banelings, and Immortals from the "P1 Unit on P2 Damage" and "P2 Unit on P1 Damage" graphs in Figure 10b. If the agent produces Marines, it correctly expects to inflict a large amount of damage on P2's Immortals; if it produces Banelings, it correctly expects to inflict a large amount of damage on P2's Marines.
476
+ - There are some flaws in the agent's GVF predictions. Some values, such as Future Surviving Units in Figure 10b, should never be negative, indicating a flaw in the agent's training. This suggests an engineer could add a ReLU on the output to prevent negative values.
477
+
478
+ Explanations produced by our ESP model are sound because they do not depend on GVF comparisons alone. The "Units Attacking Top/Bottom Base" graph of Figure 10c illustrates P2 Immortal damage on the bottom base (F94), the primary MSX contribution for why the agent ranked $+8$ Marine buildings as its best action. Given that P2's Immortals in the bottom lane present a significant threat, producing Marines to defend against the Immortals makes good intuitive sense; Banelings are a sub-optimal choice in this scenario and would do little to defend against Immortals. We summarize the IG and MSX graph in Figure 10c as follows:
479
+
480
+ ![](images/d9c43fe6dbf2bc27cb200ef44af622d13147b942acec455e30e36a27157e685f.jpg)
481
+
482
+ ![](images/17a236003a29dd9617869fcc713e73e9d2b6647cea3e20eb544fc1c38d3eacab.jpg)
483
+
484
+ ![](images/31bf8cb27b63c2eb811d005613fe6b1c1f11dc6856eac754e5b862f26cf78a96.jpg)
485
+
486
+ ![](images/807353c799f8901e1c90a96b3d76c6803eebb59bb4f7b090cde0bbfe9b57b460.jpg)
487
+
488
+ ![](images/c38232ecaaf0851d0aef89565d5ee0836e6b8b734ed2a300fd5bc63ab61c109a.jpg)
489
+
490
+ ![](images/f4ebf247e1c832833ee5a382525b45d43b0d0468ecff101477d136b097e9f2db.jpg)
491
+
492
+ ![](images/c2fd2a569724fb35bf17ace55a5b0ae30cbc2d390694c187cdbcbaf0fc71be26.jpg)
493
+
494
+ ![](images/13899778948759d86e5a59a551ae75c859ce5b2eb97cec0f63c01d9006731632.jpg)
495
+
496
+ ![](images/25747b2bce0b2a170ace3666d168b1095ddf555ff8c722a2aead836220849873.jpg)
497
+
498
+ ![](images/fab4b2bd5f2fb5fc5162865e091ddc215f52312b9f674b43581a48ba09494138.jpg)
499
+ Figure 10: Explanation example for the Tug-of-War 131-feature ESP-DQN agent. Since there are too many features to show in one figure, we separate them into 11 clusters. The three sub-figures show the game state, the Q-values and GVF predictions for each action, and the IGX and MSX, respectively. The top-ranked action is +8 Marines in the Bottom Lane and the sub-optimal action is +5 Banelings in the Bottom Lane.
500
+
501
+ ![](images/9f959e66ed2a501bf37eed6d466ad79f0ab49fcba5bd57b28f8c635c73e2cdb4.jpg)
502
+ (c) IG and MSX
503
+
504
+ - The best action adds more Marine buildings, thus increasing the quantity of Marines spawned per wave, but the agent does not care about the quantity of Marines (F14), as its IGX is close to 0. The agent does care about the damage the Marines inflict (F86), although this is not as important as defending against the opponent's Immortals.
505
+ - The "Destroy and Lowest HP Probability" graph illustrates the two mutually exclusive win types in ToW: winning by destroying one of P2's bases, or winning by ensuring one of P2's bases has the lowest HP at wave 40. The Base Destroyed probability IGX indicates the agent expects to destroy the opponent's bottom base (F4) and defend its own bottom base (F2).
506
+ - The "Future Surviving Friendly (Bottom)" graph illustrates the contribution of P1's surviving troops in the bottom lane. The positive IGX contributions of "P1 Bottom Marine Grid 4" (F47) and "Grid 5" (F50) indicate the agent cares about its Marines moving closer to the enemy's bottom base. The IGX of "P1 Bot Marine Grid 3" (F44) is negative, possibly because Marines remaining at Grid 3 are too far from the opponent's base to be an advantage.
507
+
508
+ Given the large number of features, the MSX is critical for quickly understanding the agent's preference. In general, user-interface design will be an important consideration when the number of features is large; such interfaces should allow users to incrementally explore the IGX and GVFs of different actions, flexibly and on demand.
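The MSX referenced throughout these examples can be made concrete. Below is a minimal sketch, assuming the standard definition of the MSX as the smallest set of positive IGX contributions whose sum outweighs the total magnitude of the negative contributions; the function name `msx` and its list input are illustrative, not the implementation used to produce the figures.

```python
import numpy as np

def msx(deltas):
    """Given per-feature IGX contributions to an action-preference
    difference, return the indices of the minimal sufficient explanation:
    the smallest set of positive contributions whose sum outweighs the
    total magnitude of all negative contributions."""
    deltas = np.asarray(deltas, dtype=float)
    need = -deltas[deltas < 0].sum()      # negative mass to overcome
    total, chosen = 0.0, []
    for i in np.argsort(-deltas):         # largest positive first
        if total > need or deltas[i] <= 0:
            break
        chosen.append(int(i))
        total += deltas[i]
    return chosen

# Features 0 and 3 support the action; 1 and 2 oppose it. Feature 0
# alone already outweighs the opposition, so the MSX is just [0].
print(msx([2.0, -0.5, -0.4, 1.5]))  # → [0]
```

Taking the positive contributions in descending order makes the greedy selection minimal in cardinality under this definition.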
contrastiveexplanationsforreinforcementlearningviaembeddedselfpredictions/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f77b3b541c5e165adc9705c5fc147516116f8d9139720d3c7f17855716cfa965
3
+ size 705973
contrastiveexplanationsforreinforcementlearningviaembeddedselfpredictions/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b6e6d326d802db2e19371726edd122218ec103783f791cfed70d562a298ae54c
3
+ size 867009
coupledoscillatoryrecurrentneuralnetworkcornnanaccurateandgradientstablearchitectureforlearninglongtimedependencies/e1b28daf-a304-41b6-a819-d5ff9b97dab2_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:74a6c422b50767981ea60912517b1020fe5653678b9f38c197fb747420a27cb9
3
+ size 177536
coupledoscillatoryrecurrentneuralnetworkcornnanaccurateandgradientstablearchitectureforlearninglongtimedependencies/e1b28daf-a304-41b6-a819-d5ff9b97dab2_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9e225f16df5f48d24138c46f303c8e4b5d3dba7b8dddd294e1a65a0255e2abd0
3
+ size 201782
coupledoscillatoryrecurrentneuralnetworkcornnanaccurateandgradientstablearchitectureforlearninglongtimedependencies/e1b28daf-a304-41b6-a819-d5ff9b97dab2_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:95c35fb2bfa89fe43f356f038854f8281b9707f8a97c1f3847950cbdf9208b60
3
+ size 2926485
coupledoscillatoryrecurrentneuralnetworkcornnanaccurateandgradientstablearchitectureforlearninglongtimedependencies/full.md ADDED
The diff for this file is too large to render. See raw diff
 
coupledoscillatoryrecurrentneuralnetworkcornnanaccurateandgradientstablearchitectureforlearninglongtimedependencies/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3e2d105b63806d95a03bd127c3e9664a9ff4108572129f18d0586d2f0ff062df
3
+ size 1423168
coupledoscillatoryrecurrentneuralnetworkcornnanaccurateandgradientstablearchitectureforlearninglongtimedependencies/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9eb517ccc5b72241d8be803ef832c8b256985fd4eaa62b5b817c215534034e8b
3
+ size 879587
datasetcondensationwithgradientmatching/ab4e802f-b06a-4406-be2e-91bcef94cb64_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d16712e33fdfb0763a94e9ee892a99a201c5fa8060da24664913ba3c0218f81b
3
+ size 127875
datasetcondensationwithgradientmatching/ab4e802f-b06a-4406-be2e-91bcef94cb64_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:69f33ccc5711382f204f3454fbd19396e219827e83341c00deba045e272fe665
3
+ size 151049
datasetcondensationwithgradientmatching/ab4e802f-b06a-4406-be2e-91bcef94cb64_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:27ea7854abc4ab7a38b7aec9d6329332921aed2dcb9415c1e76442411da7f374
3
+ size 2857557
datasetcondensationwithgradientmatching/full.md ADDED
@@ -0,0 +1,422 @@
 
 
 
 
 
 
 
 
1
+ # DATASET CONDENSATION WITH GRADIENT MATCHING
2
+
3
+ Bo Zhao, Konda Reddy Mopuri, Hakan Bilen
4
+ School of Informatics, The University of Edinburgh
5
+ {bo.zhao, kmopuri, hbilen}@ed.ac.uk
6
+
7
+ # ABSTRACT
8
+
9
+ As the state-of-the-art machine learning methods in many fields rely on larger datasets, storing datasets and training models on them become significantly more expensive. This paper proposes a training set synthesis technique for data-efficient learning, called Dataset Condensation, that learns to condense a large dataset into a small set of informative synthetic samples for training deep neural networks from scratch. We formulate this goal as a gradient matching problem between the gradients of deep neural network weights that are trained on the original and our synthetic data. We rigorously evaluate its performance in several computer vision benchmarks and demonstrate that it significantly outperforms the state-of-the-art methods<sup>1</sup>. Finally we explore the use of our method in continual learning and neural architecture search and report promising gains when limited memory and computations are available.
10
+
11
+ # 1 INTRODUCTION
12
+
13
+ Large-scale datasets, comprising millions of samples, are becoming the norm for obtaining state-of-the-art machine learning models in multiple fields including computer vision, natural language processing and speech recognition. At such scales, even storing and preprocessing the data becomes burdensome, and training machine learning models on them demands specialized equipment and infrastructure. An effective way to deal with large data is data selection – identifying the most representative training samples – which aims to improve the data efficiency of machine learning techniques. While classical data selection methods, also known as coreset construction (Agarwal et al., 2004; Har-Peled & Mazumdar, 2004; Feldman et al., 2013), focus on clustering problems, recent work can be found in continual learning (Rebuffi et al., 2017; Toneva et al., 2019; Castro et al., 2018; Aljundi et al., 2019) and active learning (Sener & Savarese, 2018), where there is typically a fixed budget for storing and labeling training samples respectively. These methods commonly first define a criterion for representativeness (e.g. in terms of compactness (Rebuffi et al., 2017; Castro et al., 2018), diversity (Sener & Savarese, 2018; Aljundi et al., 2019), or forgetfulness (Toneva et al., 2019)), then select representative samples based on the criterion, and finally use the selected small set to train their model for a downstream task.
14
+
15
+ Unfortunately, these methods have two shortcomings: they typically rely on i) heuristics (e.g. picking cluster centers) that do not guarantee any optimal solution for the downstream task (e.g. image classification), and ii) the presence of representative samples, which is not guaranteed either. A recent method, Dataset Distillation (DD) (Wang et al., 2018), goes beyond these limitations by learning a small set of informative images from large training data. In particular, the authors model the network parameters as a function of the synthetic training data and learn them by minimizing the training loss over the original training data w.r.t. the synthetic data. Unlike in the coreset methods, the synthesized data are directly optimized for the downstream task, and thus the success of the method does not rely on the presence of representative samples.
16
+
17
+ Inspired by DD (Wang et al., 2018), we focus on learning to synthesize informative samples that are optimized to train neural networks for downstream tasks and are not limited to individual samples in the original dataset. Like DD, our goal is to obtain the highest generalization performance with a model trained on a small set of synthetic images, ideally comparable to that of a model trained on the original images (see Figure 1(a)). In particular, we investigate the following
18
+
19
+ ![](images/00e4d01e31fbdae433a4f80830470ba38e9b273777da66a779b36e0c7887ddb8.jpg)
20
+ Figure 1: Dataset Condensation (left) aims to generate a small set of synthetic images that can match the performance of a network trained on a large image dataset. Our method (right) realizes this goal by learning a synthetic set such that a deep network trained on it and the large set produces similar gradients w.r.t. its weights. The synthetic data can later be used to train a network from scratch in a small fraction of the original computational load. CE denotes Cross-Entropy.
21
+
22
+ ![](images/f35c49ec3f95fedbc7814ba6fb045655fa74c07c3f35ed7ea2757fdfa56f0b2c.jpg)
23
+
24
+ questions. Is it possible to i) compress a large image classification dataset into a small synthetic set, ii) train an image classification model on the synthetic set that can be further used to classify real images, iii) learn a single set of synthetic images that can be used to train different neural network architectures? To this end, we propose a Dataset Condensation method to learn a small set of "condensed" synthetic samples such that a deep neural network trained on them obtains not only similar performance but also a close solution in the network parameter space to a network trained on the large training data. We formulate this goal as a minimization problem between two sets of gradients of the network parameters that are computed for a training loss over a large fixed training set and a learnable condensed set (see Figure 1(b)). We show that our method enables effective learning of synthetic images and neural networks trained on them, outperforming (Wang et al., 2018) and coreset methods by a wide margin in multiple computer vision benchmarks. In addition, learning a compact set of synthetic samples also benefits other learning problems when there is a fixed budget on training images. We show that our method outperforms popular data selection methods by providing more informative training samples in continual learning. Finally, we explore a promising use case of our method in neural architecture search, and show that - once our condensed images are learned - they can be used to train numerous network architectures extremely efficiently.
25
+
26
+ Our method is related to knowledge distillation (KD) techniques (Hinton et al., 2015; Bucilua et al., 2006; Ba & Caruana, 2014; Romero et al., 2014) that transfer the knowledge in an ensemble of models to a single one. Unlike KD, we distill knowledge of a large training set into a small synthetic set. Our method is also related to Generative Adversarial Networks (Goodfellow et al., 2014a; Mirza & Osindero, 2014; Radford et al., 2015) and Variational AutoEncoders (Kingma & Welling, 2013) that synthesize high-fidelity samples by capturing the data distribution. In contrast, our goal is to generate informative samples for training deep neural networks rather than to produce "real-looking" samples. Finally our method is related to the methods that produce image patches by projecting the feature activations back to the input pixel space (Zeiler & Fergus, 2014), reconstruct the input image by matching the feature activations (Mahendran & Vedaldi, 2015), recover private training images for given training gradients (Zhu et al., 2019; Zhao et al., 2020), synthesize features from semantic embeddings for zero-shot learning (Sariyildiz & Cinbis, 2019). Our goal is however to synthesize a set of condensed training images not to recover the original or missing training images.
27
+
28
+ In the remainder of this paper, we first review the problem of dataset condensation and introduce our method in section 2, present and analyze our results in several image recognition benchmarks in section 3.1, showcase applications in continual learning and network architecture search in section 3.2, and conclude the paper with remarks for future directions in section 4.
29
+
30
+ # 2 METHOD
31
+
32
+ # 2.1 DATASET CONDENSATION
33
+
34
+ Suppose we are given a large dataset consisting of $|\mathcal{T}|$ pairs of a training image and its class label $\mathcal{T} = \{(\pmb{x}_i, y_i)\}_{i=1}^{|\mathcal{T}|}$ where $\pmb{x} \in \mathcal{X} \subset \mathbb{R}^d$ , $y \in \{0, \dots, C-1\}$ , $\mathcal{X}$ is a d-dimensional input space and $C$ is the number of classes. We wish to learn a differentiable function $\phi$ (i.e. deep neural network)
35
+
36
+ with parameters $\theta$ that correctly predicts labels of previously unseen images, i.e. $y = \phi_{\theta}(\pmb{x})$ . One can learn the parameters of this function by minimizing an empirical loss term over the training set:
37
+
38
+ $$
39
+ \boldsymbol {\theta} ^ {\mathcal {T}} = \underset {\boldsymbol {\theta}} {\arg \min } \mathcal {L} ^ {\mathcal {T}} (\boldsymbol {\theta}) \tag {1}
40
+ $$
41
+
42
+ where $\mathcal{L}^{\mathcal{T}}(\pmb{\theta}) = \frac{1}{|\mathcal{T}|}\sum_{(\pmb{x},y)\in \mathcal{T}}\ell (\phi_{\pmb{\theta}}(\pmb{x}), y)$, $\ell (\cdot ,\cdot)$ is a task specific loss (i.e. cross-entropy) and $\pmb{\theta}^{\mathcal{T}}$ is the minimizer of $\mathcal{L}^{\mathcal{T}}$. The generalization performance of the obtained model $\phi_{\pmb{\theta}^{\mathcal{T}}}$ can be written as $\mathbb{E}_{\pmb{x}\sim P_{\mathcal{D}}}[\ell (\phi_{\pmb{\theta}^{\mathcal{T}}}(\pmb{x}), y)]$ where $P_{\mathcal{D}}$ is the data distribution. Our goal is to generate a small set of condensed synthetic samples with their labels, $\mathcal{S} = \{(\pmb{s}_i, y_i)\}_{i=1}^{|\mathcal{S}|}$ where $\pmb{s}\in \mathbb{R}^d$ and $y\in \mathcal{Y}$, $|\mathcal{S}|\ll |\mathcal{T}|$. Similar to eq. (1), once the condensed set is learned, one can train $\phi$ on them as follows
43
+
44
+ $$
45
+ \boldsymbol {\theta} ^ {S} = \underset {\boldsymbol {\theta}} {\arg \min } \mathcal {L} ^ {S} (\boldsymbol {\theta}) \tag {2}
46
+ $$
47
+
48
+ where $\mathcal{L}^{\mathcal{S}}(\pmb{\theta}) = \frac{1}{|\mathcal{S}|}\sum_{(\pmb{s},y)\in \mathcal{S}}\ell (\phi_{\pmb{\theta}}(\pmb{s}), y)$ and $\pmb{\theta}^{\mathcal{S}}$ is the minimizer of $\mathcal{L}^{\mathcal{S}}$. As the synthetic set $\mathcal{S}$ is significantly smaller (2-3 orders of magnitude), we expect the optimization in eq. (2) to be significantly faster than that in eq. (1). We also wish the generalization performance of $\phi_{\pmb{\theta}^{\mathcal{S}}}$ to be close to that of $\phi_{\pmb{\theta}^{\mathcal{T}}}$, i.e. $\mathbb{E}_{\pmb{x}\sim P_{\mathcal{D}}}[\ell (\phi_{\pmb{\theta}^{\mathcal{T}}}(\pmb{x}), y)]\simeq \mathbb{E}_{\pmb{x}\sim P_{\mathcal{D}}}[\ell (\phi_{\pmb{\theta}^{\mathcal{S}}}(\pmb{x}), y)]$ over the real data distribution $P_{\mathcal{D}}$.
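As a concrete toy instance of eqs. (1) and (2), the sketch below trains the same model (a logistic classifier standing in for $\phi_{\pmb{\theta}}$) once on a large set $\mathcal{T}$ and once on a two-sample synthetic set $\mathcal{S}$, then compares generalization on the large set. All data, sizes, and hyperparameters here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def train(X, y, steps=200, lr=0.5):
    """Minimize an empirical cross-entropy over a dataset by gradient
    descent, as in eq. (1)/(2); phi_theta is a logistic classifier."""
    rng = np.random.default_rng(0)
    theta = rng.normal(0.0, 0.1, X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ theta))      # phi_theta(x)
        theta -= lr * X.T @ (p - y) / len(y)      # gradient of the loss
    return theta

# Large "real" set T versus a tiny synthetic set S: one prototype per class.
rng = np.random.default_rng(1)
X_T = np.vstack([rng.normal(-1, 1, (500, 2)), rng.normal(1, 1, (500, 2))])
y_T = np.r_[np.zeros(500), np.ones(500)]
X_S = np.array([[-1.0, -1.0], [1.0, 1.0]])
y_S = np.array([0.0, 1.0])

theta_T, theta_S = train(X_T, y_T), train(X_S, y_S)
acc = lambda th: ((1.0 / (1.0 + np.exp(-X_T @ th)) > 0.5) == y_T).mean()
print(acc(theta_T), acc(theta_S))  # similar accuracy from 1000 vs 2 samples
```

On this deliberately easy problem two well-placed prototypes nearly match the full set; the point of dataset condensation is to *learn* such samples when they are not obvious.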
49
+
50
+ Discussion. The goal of obtaining comparable generalization performance by training on the condensed data can be formulated in different ways. One approach, which is proposed in (Wang et al., 2018) and extended in (Sucholutsky & Schonlau, 2019; Bohdal et al., 2020; Such et al., 2020), is to pose the parameters $\theta^S$ as a function of the synthetic data $S$ :
51
+
52
+ $$
53
+ \mathcal{S}^{*} = \underset{\mathcal{S}}{\arg\min}\, \mathcal{L}^{\mathcal{T}}\left(\boldsymbol{\theta}^{\mathcal{S}}(\mathcal{S})\right) \quad \text{subject to} \quad \boldsymbol{\theta}^{\mathcal{S}}(\mathcal{S}) = \underset{\boldsymbol{\theta}}{\arg\min}\, \mathcal{L}^{\mathcal{S}}(\boldsymbol{\theta}). \tag{3}
54
+ $$
55
+
56
+ The method aims to find the optimal set of synthetic images $\mathcal{S}^*$ such that the model $\phi_{\theta^{\mathcal{S}}}$ trained on them minimizes the training loss over the original data. Optimizing eq. (3) involves a nested-loop optimization: solving the inner loop for $\theta^{\mathcal{S}}(\mathcal{S})$ at each iteration to recover the gradients for $\mathcal{S}$ requires a computationally expensive procedure – unrolling the recursive computation graph for $\mathcal{S}$ over multiple optimization steps for $\theta$ (see (Samuel & Tappen, 2009; Domke, 2012)). Hence, it does not scale to large models and/or accurate inner-loop optimizers with many steps. Next we propose an alternative formulation for dataset condensation.
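To make the nested structure of eq. (3) concrete, here is a deliberately tiny sketch: the inner loop is truncated to a single gradient step on a synthetic least-squares loss, and, for clarity only, the gradient of the outer loss w.r.t. the synthetic data is estimated by finite differences instead of backpropagating through the unrolled graph. The model, sizes, and learning rates are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X_T = rng.normal(size=(100, 3))
y_T = X_T @ np.array([1.0, -2.0, 0.5])             # large "real" set T

def outer_loss(S_flat, inner_lr=0.5):
    """L^T(theta^S(S)) of eq. (3), with the inner loop truncated to a
    single gradient step on the synthetic least-squares loss."""
    X_S, y_S = S_flat[:6].reshape(2, 3), S_flat[6:]    # 2 synthetic samples
    theta = np.zeros(3)
    theta -= inner_lr * X_S.T @ (X_S @ theta - y_S)    # theta^S(S)
    return np.mean((X_T @ theta - y_T) ** 2)

S = rng.normal(size=8)                                 # 2 images + 2 labels
loss0 = outer_loss(S)
for _ in range(2000):                                  # outer loop over S
    # Finite-difference gradient w.r.t. S, purely for clarity; a real
    # implementation backpropagates through the unrolled inner steps.
    g = np.array([(outer_loss(S + 1e-5 * e) - outer_loss(S - 1e-5 * e)) / 2e-5
                  for e in np.eye(8)])
    S -= 0.05 * g
print(loss0, outer_loss(S))  # the outer loss drops sharply
```

Even with one inner step, every outer update requires re-running the inner optimization, which is exactly the cost the text attributes to eq. (3) at scale.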
57
+
58
+ # 2.2 DATASET CONDENSATION WITH PARAMETER MATCHING
59
+
60
+ Here we aim to learn $\mathcal{S}$ such that the model $\phi_{\pmb{\theta}^{\mathcal{S}}}$ trained on it achieves not only comparable generalization performance to $\phi_{\pmb{\theta}^{\mathcal{T}}}$ but also converges to a similar solution in the parameter space (i.e. $\pmb{\theta}^{\mathcal{S}} \approx \pmb{\theta}^{\mathcal{T}}$). If $\phi_{\pmb{\theta}}$ is a locally smooth function, similar weights $(\pmb{\theta}^{\mathcal{S}} \approx \pmb{\theta}^{\mathcal{T}})$ imply similar mappings in a local neighborhood and thus similar generalization performance, i.e. $\mathbb{E}_{\pmb{x} \sim P_{\mathcal{D}}}[\ell(\phi_{\pmb{\theta}^{\mathcal{T}}}(\pmb{x}), y)] \simeq \mathbb{E}_{\pmb{x} \sim P_{\mathcal{D}}}[\ell(\phi_{\pmb{\theta}^{\mathcal{S}}}(\pmb{x}), y)]$. Now we can formulate this goal as
61
+
62
+ $$
63
+ \min_{\mathcal{S}} D\left(\boldsymbol{\theta}^{\mathcal{S}}, \boldsymbol{\theta}^{\mathcal{T}}\right) \quad \text{subject to} \quad \boldsymbol{\theta}^{\mathcal{S}}(\mathcal{S}) = \underset{\boldsymbol{\theta}}{\arg\min}\, \mathcal{L}^{\mathcal{S}}(\boldsymbol{\theta}) \tag{4}
64
+ $$
65
+
66
+ where $\pmb{\theta}^{\mathcal{T}} = \arg \min_{\pmb{\theta}}\mathcal{L}^{\mathcal{T}}(\pmb{\theta})$ and $D(\cdot ,\cdot)$ is a distance function. In a deep neural network, $\pmb{\theta}^{\mathcal{T}}$ typically depends on its initial values $\pmb{\theta}_0$. However, the optimization in eq. (4) aims to obtain an optimum set of synthetic images only for one model $\phi_{\pmb{\theta}^{\mathcal{T}}}$ with the initialization $\pmb{\theta}_0$, while our actual goal is to generate samples that can work with a distribution of random initializations $P_{\pmb{\theta}_0}$. Thus we modify eq. (4) as follows:
67
+
68
+ $$
69
+ \min_{\mathcal{S}} \operatorname{E}_{\boldsymbol{\theta}_0 \sim P_{\boldsymbol{\theta}_0}}\left[D\left(\boldsymbol{\theta}^{\mathcal{S}}(\boldsymbol{\theta}_0), \boldsymbol{\theta}^{\mathcal{T}}(\boldsymbol{\theta}_0)\right)\right] \quad \text{subject to} \quad \boldsymbol{\theta}^{\mathcal{S}}(\mathcal{S}) = \underset{\boldsymbol{\theta}}{\arg\min}\, \mathcal{L}^{\mathcal{S}}\left(\boldsymbol{\theta}(\boldsymbol{\theta}_0)\right) \tag{5}
70
+ $$
71
+
72
+ where $\pmb{\theta}^{\mathcal{T}} = \arg \min_{\pmb{\theta}}\mathcal{L}^{\mathcal{T}}(\pmb{\theta}(\pmb{\theta}_0))$ . For brevity, we use only $\pmb{\theta}^{\mathcal{S}}$ and $\pmb{\theta}^{\mathcal{T}}$ to indicate $\pmb{\theta}^{\mathcal{S}}(\pmb{\theta}_0)$ and $\pmb{\theta}^{\mathcal{T}}(\pmb{\theta}_0)$ respectively in the next sections. The standard approach to solving eq. (5) employs implicit differentiation (see (Domke, 2012) for details), which involves solving an inner loop optimization for $\pmb{\theta}^{\mathcal{S}}$ . As the inner loop optimization $\pmb{\theta}^{\mathcal{S}}(\mathcal{S}) = \arg \min_{\pmb{\theta}}\mathcal{L}^{\mathcal{S}}(\pmb{\theta})$ can be computationally expensive in
73
+
74
+ case of large-scale models, one can adopt the back-optimization approach in (Domke, 2012) which re-defines $\theta^S$ as the output of an incomplete optimization:
75
+
76
+ $$
77
+ \boldsymbol {\theta} ^ {S} (\mathcal {S}) = \operatorname {o p t - a l g} _ {\boldsymbol {\theta}} \left(\mathcal {L} ^ {S} (\boldsymbol {\theta}), \varsigma\right) \tag {6}
78
+ $$
79
+
80
+ where opt-alg is a specific optimization procedure with a fixed number of steps $(\varsigma)$ .
81
+
82
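+ To make the truncation in eq. (6) concrete, here is a minimal numpy sketch that treats opt-alg as plain gradient descent cut off after a fixed number of steps; the quadratic loss, `grad_fn` and `target` are illustrative stand-ins, not part of the paper's actual setup:
+
+ ```python
+ import numpy as np
+
+ def opt_alg(grad_fn, theta0, steps, lr=0.1):
+     """opt-alg from eq. (6): gradient descent truncated to a fixed
+     number of steps, i.e. a deliberately *incomplete* optimization."""
+     theta = theta0.copy()
+     for _ in range(steps):
+         theta = theta - lr * grad_fn(theta)
+     return theta
+
+ # Toy loss L(theta) = 0.5 * ||theta - target||^2 with gradient theta - target;
+ # its exact minimizer is `target`, which a truncated run only approaches.
+ target = np.array([1.0, -2.0])
+ theta_few = opt_alg(lambda th: th - target, np.zeros(2), steps=5)
+ theta_many = opt_alg(lambda th: th - target, np.zeros(2), steps=100)
+ ```
+
+ A few steps leave a visible gap to the optimum while many steps close it, which is exactly the speed/accuracy trade-off that $\varsigma$ controls.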
+ In practice, $\pmb{\theta}^{\mathcal{T}}$ for different initializations can be trained first in an offline stage and then used as the target parameter vector in eq. (5). However, there are two potential issues with learning to regress $\pmb{\theta}^{\mathcal{T}}$ as the target vector. First, the distance between $\pmb{\theta}^{\mathcal{T}}$ and intermediate values of $\pmb{\theta}^{\mathcal{S}}$ can be too large in the parameter space, with multiple local minima traps along the path, and thus the target can be too challenging to reach. Second, opt-alg involves a limited number of optimization steps as a trade-off between speed and accuracy, which may not be sufficient to reach the optimal solution. These problems are similar to those of (Wang et al., 2018), as both involve parameterizing $\pmb{\theta}^{\mathcal{S}}$ with $\mathcal{S}$ and $\pmb{\theta}_0$.
+
+ # 2.3 DATASET CONDENSATION WITH CURRICULUM GRADIENT MATCHING
+
+ Here we propose a curriculum-based approach to address the above-mentioned challenges. The key idea is that we wish $\pmb{\theta}^{\mathcal{S}}$ not only to be close to the final $\pmb{\theta}^{\mathcal{T}}$ but also to follow a similar path to $\pmb{\theta}^{\mathcal{T}}$ throughout the optimization. While this restricts the optimization dynamics for $\pmb{\theta}$, we argue that it also enables a more guided optimization and effective use of the incomplete optimizer. We can now decompose eq. (5) into multiple subproblems:
+
+ $$
+ \min_{\mathcal{S}} \operatorname{E}_{\boldsymbol{\theta}_0 \sim P_{\boldsymbol{\theta}_0}} \left[ \sum_{t=0}^{T-1} D\left(\boldsymbol{\theta}_t^{\mathcal{S}}, \boldsymbol{\theta}_t^{\mathcal{T}}\right) \right] \quad \text{subject to} \tag{7}
+ $$
+
+ $$
+ \boldsymbol{\theta}_{t+1}^{\mathcal{S}}(\mathcal{S}) = \operatorname{opt-alg}_{\boldsymbol{\theta}}\left(\mathcal{L}^{\mathcal{S}}\left(\boldsymbol{\theta}_t^{\mathcal{S}}\right), \varsigma^{\mathcal{S}}\right) \quad \text{and} \quad \boldsymbol{\theta}_{t+1}^{\mathcal{T}} = \operatorname{opt-alg}_{\boldsymbol{\theta}}\left(\mathcal{L}^{\mathcal{T}}\left(\boldsymbol{\theta}_t^{\mathcal{T}}\right), \varsigma^{\mathcal{T}}\right)
+ $$
+
+ where $T$ is the number of iterations, and $\varsigma^{\mathcal{S}}$ and $\varsigma^{\mathcal{T}}$ are the numbers of optimization steps for $\pmb{\theta}^{\mathcal{S}}$ and $\pmb{\theta}^{\mathcal{T}}$ respectively. In words, we wish to generate a set of condensed samples $\mathcal{S}$ such that the network parameters trained on them $(\pmb{\theta}_t^{\mathcal{S}})$ are similar to the ones trained on the original training set $(\pmb{\theta}_t^{\mathcal{T}})$ at each iteration $t$. In our preliminary experiments, we observe that $\pmb{\theta}_{t+1}^{\mathcal{S}}$, which is parameterized with $\mathcal{S}$, can successfully track $\pmb{\theta}_{t+1}^{\mathcal{T}}$ by updating $\mathcal{S}$ and minimizing $D(\pmb{\theta}_t^{\mathcal{S}},\pmb{\theta}_t^{\mathcal{T}})$ close to zero.
+
+ In the case of one-step gradient descent optimization for opt-alg, the update rule is:
+
+ $$
+ \boldsymbol{\theta}_{t+1}^{\mathcal{S}} \leftarrow \boldsymbol{\theta}_t^{\mathcal{S}} - \eta_{\boldsymbol{\theta}} \nabla_{\boldsymbol{\theta}} \mathcal{L}^{\mathcal{S}}\left(\boldsymbol{\theta}_t^{\mathcal{S}}\right) \quad \text{and} \quad \boldsymbol{\theta}_{t+1}^{\mathcal{T}} \leftarrow \boldsymbol{\theta}_t^{\mathcal{T}} - \eta_{\boldsymbol{\theta}} \nabla_{\boldsymbol{\theta}} \mathcal{L}^{\mathcal{T}}\left(\boldsymbol{\theta}_t^{\mathcal{T}}\right), \tag{8}
+ $$
+
+ where $\eta_{\pmb{\theta}}$ is the learning rate. Based on our observation that $D(\pmb{\theta}_t^{\mathcal{S}}, \pmb{\theta}_t^{\mathcal{T}}) \approx 0$, we simplify the formulation in eq. (7) by replacing $\pmb{\theta}_t^{\mathcal{T}}$ with $\pmb{\theta}_t^{\mathcal{S}}$ and use $\pmb{\theta}$ to denote $\pmb{\theta}^{\mathcal{S}}$ in the rest of the paper:
+
+ $$
+ \min_{\mathcal{S}} \operatorname{E}_{\boldsymbol{\theta}_0 \sim P_{\boldsymbol{\theta}_0}} \left[ \sum_{t=0}^{T-1} D\left(\nabla_{\boldsymbol{\theta}} \mathcal{L}^{\mathcal{S}}(\boldsymbol{\theta}_t), \nabla_{\boldsymbol{\theta}} \mathcal{L}^{\mathcal{T}}(\boldsymbol{\theta}_t)\right) \right]. \tag{9}
+ $$
+
+ We now have a single deep network with parameters $\pmb{\theta}$ trained on the synthetic set $\mathcal{S}$, which is optimized such that the distance between the gradients of the loss over the training samples $\mathcal{L}^{\mathcal{T}}$ w.r.t. $\pmb{\theta}$ and the gradients of the loss over the condensed samples $\mathcal{L}^{\mathcal{S}}$ w.r.t. $\pmb{\theta}$ is minimized. In words, our goal reduces to matching the gradients of the real and synthetic training losses w.r.t. $\pmb{\theta}$ by updating the condensed samples. This approximation has the key advantage over (Wang et al., 2018) and eq. (5) that it does not require the expensive unrolling of the recursive computation graph over the previous parameters $\{\pmb{\theta}_0,\dots,\pmb{\theta}_{t-1}\}$. The important consequence is that the optimization is significantly faster and more memory-efficient, and thus scales up to state-of-the-art deep neural networks (e.g. ResNet (He et al., 2016)).
+
+ Discussion. The synthetic data contain not only samples but also their labels $(s, y)$, which in theory can be jointly learned by optimizing eq. (9). However, their joint optimization is challenging, as the content of the samples depends on their labels and vice versa. Thus in our experiments we learn to synthesize images for fixed labels, e.g. one synthetic image per class.
+
+ Algorithm. We depict the optimization details in Alg. 1. At the outer level, it contains a loop over random weight initializations, as we want to obtain condensed images that can later be used to train previously unseen models. Once $\pmb{\theta}$ is randomly initialized, we use $\phi_{\pmb{\theta}}$ to first compute the losses over the training samples $(\mathcal{L}^{\mathcal{T}})$ and the synthetic samples $(\mathcal{L}^{\mathcal{S}})$ and their gradients w.r.t. $\pmb{\theta}$, then optimize the synthetic samples $\mathcal{S}$ to match these gradients, $\nabla_{\pmb{\theta}}\mathcal{L}^{\mathcal{S}}$ to $\nabla_{\pmb{\theta}}\mathcal{L}^{\mathcal{T}}$, by applying $\varsigma_{\mathcal{S}}$ gradient descent steps with learning rate $\eta_{\mathcal{S}}$. We use stochastic gradient descent for both $\mathrm{opt\text{-}alg}_{\pmb{\theta}}$ and $\mathrm{opt\text{-}alg}_{\mathcal{S}}$. Next we train $\pmb{\theta}$ on the updated synthetic images by minimizing the loss $\mathcal{L}^{\mathcal{S}}$ with learning rate $\eta_{\pmb{\theta}}$ for $\varsigma_{\pmb{\theta}}$ steps. Note that we sample each real and synthetic batch pair from $\mathcal{T}$ and $\mathcal{S}$ containing samples from a single class, and the synthetic data for each class are updated separately (or in parallel) at each iteration $(t)$ for the following reasons: i) this reduces memory use at train time; ii) imitating the mean gradients w.r.t. the data from a single class is easier than imitating those of multiple classes. This does not bring any extra computational cost.
+
+ Algorithm 1: Dataset condensation with gradient matching
+ Input: Training set $\mathcal{T}$
+ Required: Randomly initialized set of synthetic samples $\mathcal{S}$ for $C$ classes, probability distribution over randomly initialized weights $P_{\pmb{\theta}_0}$, deep neural network $\phi_{\pmb{\theta}}$, number of outer-loop steps $K$, number of inner-loop steps $T$, numbers of steps $\varsigma_{\pmb{\theta}}$ and $\varsigma_{\mathcal{S}}$ for updating weights and synthetic samples in each inner-loop step respectively, learning rates $\eta_{\pmb{\theta}}$ and $\eta_{\mathcal{S}}$ for updating weights and synthetic samples
+ 1 for $k = 0,\dots,K-1$ do
+ 2 Initialize $\pmb{\theta}_0 \sim P_{\pmb{\theta}_0}$
+ 3 for $t = 0,\dots,T-1$ do
+ 4 for $c = 0,\dots,C-1$ do
+ 5 Sample a minibatch pair $B_c^{\mathcal{T}} \sim \mathcal{T}$ and $B_c^{\mathcal{S}} \sim \mathcal{S}$ $\triangleright$ $B_c^{\mathcal{T}}$ and $B_c^{\mathcal{S}}$ are of the same class $c$
+ 6 Compute $\mathcal{L}_c^{\mathcal{T}} = \frac{1}{|B_c^{\mathcal{T}}|}\sum_{(\boldsymbol {x},\boldsymbol {y})\in B_c^{\mathcal{T}}}\ell (\phi_{\boldsymbol {\theta}_t}(\boldsymbol {x}),\boldsymbol {y})$ and $\mathcal{L}_c^{\mathcal{S}} = \frac{1}{|B_c^{\mathcal{S}}|}\sum_{(\boldsymbol {s},\boldsymbol {y})\in B_c^{\mathcal{S}}}\ell (\phi_{\boldsymbol {\theta}_t}(\boldsymbol {s}),\boldsymbol {y})$
+ 7 Update $\mathcal{S}_c \gets \mathrm{opt\text{-}alg}_{\mathcal{S}}(D(\nabla_{\boldsymbol{\theta}}\mathcal{L}_{c}^{\mathcal{S}}(\boldsymbol{\theta}_{t}),\nabla_{\boldsymbol{\theta}}\mathcal{L}_{c}^{\mathcal{T}}(\boldsymbol{\theta}_{t})),\varsigma_{\mathcal{S}},\eta_{\mathcal{S}})$
+ 8 Update $\pmb{\theta}_{t + 1}\gets \mathrm{opt\text{-}alg}_{\pmb{\theta}}(\mathcal{L}^{\mathcal{S}}(\pmb{\theta}_t),\varsigma_{\pmb{\theta}},\eta_{\pmb{\theta}})$ $\triangleright$ Use the whole $\mathcal{S}$
+ Output: $\mathcal{S}$
+
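+ As a concrete (and heavily simplified) illustration of the core step of Alg. 1, the numpy sketch below learns two synthetic points for a toy regression set by matching gradients at one fixed network state; the linear model, the squared-$\ell_2$ matching distance, the finite-difference gradients, and all names are illustrative assumptions, not the paper's actual implementation:
+
+ ```python
+ import numpy as np
+
+ rng = np.random.default_rng(0)
+ d, n, m = 3, 64, 2                            # features, real samples, synthetic samples
+ X_real = rng.normal(size=(n, d))
+ y_real = X_real @ np.array([1.0, -2.0, 0.5])  # toy real training set T
+ S = rng.normal(size=(m, d))                   # learnable synthetic set S
+ y_syn = np.array([1.0, -1.0])                 # fixed synthetic labels
+ w = rng.normal(size=d)                        # one fixed network state theta_t
+
+ def grad_w(X, y):
+     """Gradient of the mean squared loss 0.5*mean((Xw - y)^2) w.r.t. w."""
+     return X.T @ (X @ w - y) / len(y)
+
+ def match_loss(s_flat):
+     """D(grad L^S, grad L^T): squared l2 stand-in for the paper's eq. (10)."""
+     g_syn = grad_w(s_flat.reshape(m, d), y_syn)
+     return float(np.sum((g_syn - grad_w(X_real, y_real)) ** 2))
+
+ def num_grad(f, x, eps=1e-5):
+     """Finite-difference gradient (fine for m*d = 6 parameters)."""
+     g = np.zeros_like(x)
+     for i in range(x.size):
+         e = np.zeros_like(x); e[i] = eps
+         g[i] = (f(x + e) - f(x - e)) / (2 * eps)
+     return g
+
+ s = S.ravel()
+ d_start = match_loss(s)
+ lr = 0.1
+ for _ in range(300):                          # update S to imitate the real gradients
+     step = s - lr * num_grad(match_loss, s)
+     if match_loss(step) < match_loss(s):
+         s = step
+     else:
+         lr *= 0.5                             # halve the step on overshoot
+ d_end = match_loss(s)
+ ```
+
+ The matching distance decreases as the two synthetic points absorb the information carried by the 64 real points' mean gradient; the full algorithm additionally loops over classes, updates $\pmb{\theta}$ between matching steps, and repeats over many random initializations.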
+ Gradient matching loss. The matching loss $D(\cdot, \cdot)$ in eq. (9) measures the distance between the gradients of $\mathcal{L}^{\mathcal{S}}$ and $\mathcal{L}^{\mathcal{T}}$ w.r.t. $\pmb{\theta}$. When $\phi_{\pmb{\theta}}$ is a multi-layered neural network, the gradients correspond to a set of learnable 2D $(\mathrm{out} \times \mathrm{in})$ and 4D $(\mathrm{out} \times \mathrm{in} \times \mathrm{h} \times \mathrm{w})$ weights for each fully connected (FC) and convolutional layer respectively, where out, in, h and w denote the number of output channels, number of input channels, kernel height and kernel width. The matching loss can be decomposed into a sum of layerwise losses as $D(\nabla_{\pmb{\theta}}\mathcal{L}^{\mathcal{S}},\nabla_{\pmb{\theta}}\mathcal{L}^{\mathcal{T}}) = \sum_{l = 1}^{L}d(\nabla_{\pmb{\theta}^{(l)}}\mathcal{L}^{\mathcal{S}},\nabla_{\pmb{\theta}^{(l)}}\mathcal{L}^{\mathcal{T}})$, where $l$ is the layer index, $L$ is the number of layers with weights, and
+
+ $$
+ d(\mathbf{A}, \mathbf{B}) = \sum_{i=1}^{\mathrm{out}} \left(1 - \frac{\mathbf{A}_{i\cdot} \cdot \mathbf{B}_{i\cdot}}{\|\mathbf{A}_{i\cdot}\| \|\mathbf{B}_{i\cdot}\|}\right) \tag{10}
+ $$
+
+ where $\mathbf{A}_{i\cdot}$ and $\mathbf{B}_{i\cdot}$ are flattened vectors of gradients corresponding to each output node $i$, which are $\mathrm{in}$-dimensional for FC weights and $\mathrm{in} \times \mathrm{h} \times \mathrm{w}$-dimensional for convolutional weights. In contrast to (Lopez-Paz et al., 2017; Aljundi et al., 2019; Zhu et al., 2019), which ignore the layerwise structure by flattening the tensors of all layers into one vector and then computing the distance between the two vectors, we group the gradients for each output node. We found that this is a better distance for gradient matching (see the supplementary) and enables using a single learning rate across all layers.
+
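+ The layerwise distance in eq. (10) is straightforward to transcribe; the sketch below is a plain numpy version, under the assumption that gradients arrive as arrays of shape (out, in) for FC layers and (out, in, h, w) for convolutional layers:
+
+ ```python
+ import numpy as np
+
+ def layer_distance(A, B):
+     """Eq. (10): sum over output nodes i of (1 - cosine similarity)
+     between the flattened per-node gradient slices A_i. and B_i."""
+     A2 = A.reshape(A.shape[0], -1)   # (out, in) or (out, in*h*w)
+     B2 = B.reshape(B.shape[0], -1)
+     cos = np.sum(A2 * B2, axis=1) / (np.linalg.norm(A2, axis=1) * np.linalg.norm(B2, axis=1))
+     return float(np.sum(1.0 - cos))
+
+ def matching_loss(grads_S, grads_T):
+     """D = sum of layerwise distances over the L weight layers."""
+     return sum(layer_distance(a, b) for a, b in zip(grads_S, grads_T))
+
+ # example gradient for a conv layer with out=4, in=3, 3x3 kernels
+ conv_grad = np.arange(1.0, 109.0).reshape(4, 3, 3, 3)
+ ```
+
+ Identical per-node gradients give distance 0 and anti-parallel ones give 2 per output node; since the cosine is scale-invariant, rescaling a layer's gradients does not change the distance, which is what makes a single learning rate workable across layers.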
+ # 3 EXPERIMENTS
+
+ # 3.1 DATASET CONDENSATION
+
+ First we evaluate classification performance with the condensed images on four standard benchmark datasets: digit recognition on MNIST (LeCun et al., 1998) and SVHN (Netzer et al., 2011), and object classification on FashionMNIST (Xiao et al., 2017) and CIFAR10 (Krizhevsky et al., 2009). We test our method using six standard deep network architectures: MLP, ConvNet (Gidaris & Komodakis, 2018), LeNet (LeCun et al., 1998), AlexNet (Krizhevsky et al., 2012), VGG-11 (Simonyan & Zisserman, 2014) and ResNet-18 (He et al., 2016). MLP is a multilayer perceptron with two nonlinear hidden layers, each with 128 units. ConvNet is a modular architecture commonly used in few-shot
+
+ <table><tr><td rowspan="2"></td><td rowspan="2">Img/Cls</td><td rowspan="2">Ratio %</td><td colspan="4">Coreset Selection</td><td rowspan="2">Ours</td><td rowspan="2">Whole Dataset</td></tr><tr><td>Random</td><td>Herding</td><td>K-Center</td><td>Forgetting</td></tr><tr><td rowspan="3">MNIST</td><td>1</td><td>0.017</td><td>64.9±3.5</td><td>89.2±1.6</td><td>89.3±1.5</td><td>35.5±5.6</td><td>91.7±0.5</td><td rowspan="3">99.6±0.0</td></tr><tr><td>10</td><td>0.17</td><td>95.1±0.9</td><td>93.7±0.3</td><td>84.4±1.7</td><td>68.1±3.3</td><td>97.4±0.2</td></tr><tr><td>50</td><td>0.83</td><td>97.9±0.2</td><td>94.9±0.2</td><td>97.4±0.3</td><td>88.2±1.2</td><td>98.8±0.2</td></tr><tr><td rowspan="3">FashionMNIST</td><td>1</td><td>0.017</td><td>51.4±3.8</td><td>67.0±1.9</td><td>66.9±1.8</td><td>42.0±5.5</td><td>70.5±0.6</td><td rowspan="3">93.5±0.1</td></tr><tr><td>10</td><td>0.17</td><td>73.8±0.7</td><td>71.1±0.7</td><td>54.7±1.5</td><td>53.9±2.0</td><td>82.3±0.4</td></tr><tr><td>50</td><td>0.83</td><td>82.5±0.7</td><td>71.9±0.8</td><td>68.3±0.8</td><td>55.0±1.1</td><td>83.6±0.4</td></tr><tr><td rowspan="3">SVHN</td><td>1</td><td>0.014</td><td>14.6±1.6</td><td>20.9±1.3</td><td>21.0±1.5</td><td>12.1±1.7</td><td>31.2±1.4</td><td rowspan="3">95.4±0.1</td></tr><tr><td>10</td><td>0.14</td><td>35.1±4.1</td><td>50.5±3.3</td><td>14.0±1.3</td><td>16.8±1.2</td><td>76.1±0.6</td></tr><tr><td>50</td><td>0.7</td><td>70.9±0.9</td><td>72.6±0.8</td><td>20.1±1.4</td><td>27.2±1.5</td><td>82.3±0.3</td></tr><tr><td rowspan="3">CIFAR10</td><td>1</td><td>0.02</td><td>14.4±2.0</td><td>21.5±1.2</td><td>21.5±1.3</td><td>13.5±1.2</td><td>28.3±0.5</td><td rowspan="3">84.8±0.1</td></tr><tr><td>10</td><td>0.2</td><td>26.0±1.2</td><td>31.6±0.7</td><td>14.7±0.9</td><td>23.3±1.0</td><td>44.9±0.5</td></tr><tr><td>50</td><td>1</td><td>43.4±1.0</td><td>40.4±0.6</td><td>27.0±1.4</td><td>23.3±1.1</td><td>53.9±0.5</td></tr></table>
+
+ Table 1: Performance comparison to coreset methods. This table shows the testing accuracies $(\%)$ of different methods on four datasets. ConvNet is used for training and testing. Img/Cls: image(s) per class; Ratio $(\%)$: the ratio of condensed images to the whole training set.
+
+ learning (Snell et al., 2017; Vinyals et al., 2016; Gidaris & Komodakis, 2018) with $D$ duplicate blocks, where each block has a convolutional layer with $W$ $(3 \times 3)$ filters, a normalization layer $N$, an activation layer $A$ and a pooling layer $P$, denoted as $[W, N, A, P] \times D$. The default ConvNet (unless specified otherwise) includes 3 blocks, each with 128 filters, followed by InstanceNorm (Ulyanov et al., 2016), ReLU and AvgPooling modules. The final block is followed by a linear classifier. We use Kaiming initialization (He et al., 2015) for the network weights. The synthetic images can be initialized from Gaussian noise or from randomly selected real training images. More details about the datasets, networks and hyper-parameters can be found in the supplementary.
+
+ The pipeline for dataset condensation has two stages: learning the condensed images (denoted as C) and training classifiers from scratch on them (denoted as T). Note that the model architectures used in the two stages might differ. For the coreset baselines, the coreset is selected in the first stage. We investigate three settings: 1, 10 and 50 images/class learning, meaning that the condensed set or coreset contains 1, 10 and 50 images per class respectively. Each method is run 5 times, so 5 synthetic sets are generated in the first stage; each generated synthetic set is used to train 20 randomly initialized models in the second stage and evaluated on the test set, which amounts to evaluating 100 models in the second stage. In all experiments, we report the mean and standard deviation over these 100 test results.
+
+ Baselines. We compare our method to four coreset baselines (Random, Herding, K-Center and Forgetting) and also to DD (Wang et al., 2018). In Random, the training samples are randomly selected as the coreset. The Herding baseline, which selects the samples closest to the cluster center, is based on (Welling, 2009) and used in (Rebuffi et al., 2017; Castro et al., 2018; Wu et al., 2019; Belouadah & Popescu, 2020). K-Center (Wolf, 2011; Sener & Savarese, 2018) picks multiple center points such that the largest distance between a data point and its nearest center is minimized. For Herding and K-Center, we use models trained on the whole dataset to extract features and compute $\ell_{2}$ distances to the centers. The Forgetting method (Toneva et al., 2019) selects the training samples that are easy to forget during training. We do not compare to GSS-Greedy (Aljundi et al., 2019): although it is also a similarity-based greedy algorithm like K-Center, GSS-Greedy trains an online learning model to measure the similarity of samples, which differs from the general image classification problem. More detailed comparisons can be found in the supplementary.
+
+ Comparison to coreset methods. We first compare our method to the coreset baselines on MNIST, FashionMNIST, SVHN and CIFAR10 in Table 1, using the default ConvNet and reporting classification accuracy. Whole dataset indicates training on the whole original set, which serves as an approximate upper-bound performance. First, we observe that our method outperforms all the baselines significantly, and on MNIST with 50 images per class achieves a result (98.8%) comparable to the upper bound (99.6%), which uses two orders of magnitude more training images per class (6000). We also obtain promising results on FashionMNIST; however, the gap between our method and the upper bound is bigger on SVHN and CIFAR10, which contain more diverse images with varying foregrounds and backgrounds. We also observe that (i) the random selection baseline is competitive with the other coreset methods at 10 and 50 images per class, and (ii) herding is on average the best coreset technique. We visualize the condensed images produced by our method under the 1 image/class setting in Figure 2. Interestingly, they are interpretable and look like "prototypes" of each class.
+
+ ![](images/ca0816b2e42676b79d335de788e98c57b85031a2ecb190a0ffb2375162a2d851.jpg)
+ Figure 2: Visualization of condensed 1 image/class sets learned with ConvNet for MNIST, FashionMNIST, SVHN and CIFAR10.
+
+ <table><tr><td>C\T</td><td>MLP</td><td>ConvNet</td><td>LeNet</td><td>AlexNet</td><td>VGG</td><td>ResNet</td></tr><tr><td>MLP</td><td>70.5±1.2</td><td>63.9±6.5</td><td>77.3±5.8</td><td>70.9±11.6</td><td>53.2±7.0</td><td>80.9±3.6</td></tr><tr><td>ConvNet</td><td>69.6±1.6</td><td>91.7±0.5</td><td>85.3±1.8</td><td>85.1±3.0</td><td>83.4±1.8</td><td>90.0±0.8</td></tr><tr><td>LeNet</td><td>71.0±1.6</td><td>90.3±1.2</td><td>85.0±1.7</td><td>84.7±2.4</td><td>80.3±2.7</td><td>89.0±0.8</td></tr><tr><td>AlexNet</td><td>72.1±1.7</td><td>87.5±1.6</td><td>84.0±2.8</td><td>82.7±2.9</td><td>81.2±3.0</td><td>88.9±1.1</td></tr><tr><td>VGG</td><td>70.3±1.6</td><td>90.1±0.7</td><td>83.9±2.7</td><td>83.4±3.7</td><td>81.7±2.6</td><td>89.1±0.9</td></tr><tr><td>ResNet</td><td>73.6±1.2</td><td>91.6±0.5</td><td>86.4±1.5</td><td>85.4±1.9</td><td>83.4±2.4</td><td>89.4±0.9</td></tr></table>
+
+ Table 2: Cross-architecture performance in testing accuracy $(\%)$ for condensed 1 image/class on MNIST.
+
+ <table><tr><td>Dataset</td><td>Img/Cls</td><td>DD</td><td>Ours</td><td>Whole Dataset</td></tr><tr><td rowspan="2">MNIST</td><td>1</td><td>-</td><td>85.0±1.6</td><td rowspan="2">99.5±0.0</td></tr><tr><td>10</td><td>79.5±8.1</td><td>93.9±0.6</td></tr><tr><td rowspan="2">CIFAR10</td><td>1</td><td>-</td><td>24.2±0.9</td><td rowspan="2">83.1±0.2</td></tr><tr><td>10</td><td>36.8±1.2</td><td>39.1±1.2</td></tr></table>
+
+ Table 3: Comparison to DD (Wang et al., 2018) in terms of testing accuracy $(\%)$.
+
+ <table><tr><td></td><td>Random</td><td>Herding</td><td>Ours</td><td>Early-stopping</td><td>Whole Dataset</td></tr><tr><td>Performance (%)</td><td>76.2</td><td>76.2</td><td>84.5</td><td>84.5</td><td>85.9</td></tr><tr><td>Correlation</td><td>-0.21</td><td>-0.20</td><td>0.79</td><td>0.42</td><td>1.00</td></tr><tr><td>Time cost (min)</td><td>18.8</td><td>18.8</td><td>18.8</td><td>18.8</td><td>8604.3</td></tr><tr><td>Storage (imgs)</td><td>$10^2$</td><td>$10^2$</td><td>$10^2$</td><td>$10^4$</td><td>$5 \times 10^4$</td></tr></table>
+
+ Table 4: Neural architecture search. Methods are compared in performance, ranking correlation, time and memory cost.
+
+ Comparison to DD (Wang et al., 2018). Unlike the setting in Table 1, DD (Wang et al., 2018) reports results only for 10 images per class on MNIST and CIFAR10, over LeNet and AlexCifarNet (a customized AlexNet). We strictly follow the experimental setting of (Wang et al., 2018), use the same architectures, and report our and their original results in Table 3 for a fair comparison. Our method achieves significantly better performance than DD on both benchmarks, and obtains $5\%$ higher accuracy with only 1 synthetic sample per class than DD with 10 samples per class. In addition, our method produces consistent results over multiple runs, with a standard deviation of only $0.6\%$ on MNIST, while DD's performance varies significantly over different runs $(8.1\%)$. Finally, our method trains 2 times faster than DD and requires $50\%$ less memory in the CIFAR10 experiments. More detailed runtime and qualitative comparisons can be found in the supplementary.
+
+ Cross-architecture generalization. Another key advantage of our method is that condensed images learned using one architecture can be used to train another, unseen one. Here we learn 1 condensed image per class for MNIST over a diverse set of networks including MLP, ConvNet (Gidaris & Komodakis, 2018), LeNet (LeCun et al., 1998), AlexNet (Krizhevsky et al., 2012), VGG-11 (Simonyan & Zisserman, 2014) and ResNet-18 (He et al., 2016) (see Table 2). Once the condensed sets are synthesized, we train every network on each set separately from scratch and evaluate the cross-architecture performance in terms of classification accuracy on the MNIST test set. Table 2 shows that the condensed images, especially those learned with convolutional networks, perform well and are thus architecture-generic. MLP-generated images do not work well for training convolutional architectures, possibly due to the mismatch between the translation invariance properties of MLPs and convolutional networks. Interestingly, MLP achieves better performance with images generated by convolutional networks than with MLP-generated ones. The best results are obtained in most cases with ResNet-generated images and ConvNet or ResNet as classifiers, which is in line with their performance when trained on the original dataset.
+
+ Number of condensed images. We also study the test performance of a ConvNet trained on the condensed sets of MNIST, FashionMNIST, SVHN and CIFAR10 for various numbers of condensed images per class in Figure 3, in absolute and relative terms (normalized by the upper bound). Increasing the number of condensed images improves the accuracy on all benchmarks and further closes the gap to the upper-bound performance, especially on MNIST and FashionMNIST, while the gap remains larger on SVHN and CIFAR10. In addition, our method outperforms the coreset method Herding by a large margin in all cases.
+
+ Activation, normalization & pooling. We also study the effect of various activation functions (sigmoid, ReLU (Nair & Hinton, 2010; Zeiler et al., 2013), leaky ReLU (Maas et al., 2013)), pooling functions (max, average) and normalization functions (batch (Ioffe & Szegedy, 2015), group (Wu & He, 2018), layer (Ba et al., 2016), instance norm (Ulyanov et al., 2016)) and make the following observations: i) leaky ReLU over ReLU and average pooling over max pooling enable learning better condensed images, as they allow for denser gradient flow; ii) instance normalization obtains better classification performance than its alternatives when used in networks that are trained on a small set of condensed images. We refer to the supplementary for detailed results and discussion.
+
+ ![](images/68cc773984a64fd380566f4fefec172570205f77accc5198196c5b9f2e44929d.jpg)
+ Figure 3: Absolute and relative testing accuracies for varying numbers of condensed images/class on MNIST, FashionMNIST, SVHN and CIFAR10. Relative accuracy is the ratio to the upper bound, i.e. training with the whole dataset.
+
+ ![](images/76a7a7abf3f7cf720a4f539848cf5f394d88ae81fdcde9c5c14acda276e6baf8.jpg)
+ Figure 4: Continual learning performance in accuracy $(\%)$. Herding denotes the original E2E (Castro et al., 2018). T1, T2, T3 are three learning stages. The performance at each stage is the mean testing accuracy on all tasks learned so far.
+
+ # 3.2 APPLICATIONS
+
+ Continual learning. First we apply our method to a continual-learning scenario (Rebuffi et al., 2017; Castro et al., 2018) where new tasks are learned incrementally and the goal is to preserve performance on the old tasks while learning the new ones. We build our model on the E2E method (Castro et al., 2018), which uses a limited-budget rehearsal memory (we consider 10 images/class here) to keep representative samples from the old tasks, and knowledge distillation (KD) to regularize the network's output w.r.t. previous predictions. We replace its sample selection mechanism (herding) with ours, such that a set of condensed images is generated and stored in the memory, keep the rest of the model the same, and evaluate this model on the task-incremental learning problem on the digit recognition datasets SVHN (Netzer et al., 2011), MNIST (LeCun et al., 1998) and USPS (Hull, 1994), in that order. MNIST and USPS images are reshaped to $32 \times 32$ RGB images.
+
+ We compare our method to E2E (Castro et al., 2018), depicted as herding in Figure 4, with and without KD regularization. The experiment contains 3 incremental training stages (SVHN $\rightarrow$ MNIST $\rightarrow$ USPS), and testing accuracies are computed by averaging over the test sets of the previous and current tasks after each stage. The desired outcome is a high mean classification accuracy at T3. The results indicate that the condensed images are more data-efficient than the ones sampled by herding, and thus our method outperforms E2E in both settings, by a larger margin (2.3% at T3) when KD is not employed.
+
+ Neural architecture search. Here we explore the use of our method in a simple neural architecture search (NAS) experiment on CIFAR10, which typically requires expensive training of numerous architectures multiple times on the whole training set and picking the best performers on a validation set. Our goal is to verify that our condensed images can be used to efficiently train multiple networks to identify the best one. To this end, we construct a search space of 720 ConvNets, as described in Section 3.1, by varying the hyper-parameters $W$, $N$, $A$, $P$, $D$ over a uniform grid (see the supplementary for more details), and train them for 100 epochs on three small proxy datasets (10 images/class) obtained with random sampling, herding and our method. Note that we train the condensed images only once, with the default ConvNet architecture, and use them to train all candidate architectures. We also compare to early-stopping (Li & Talwalkar, 2020), in which the model is trained on the whole training set but with the same number of training iterations as required for the small proxy datasets, in other words, with the same amount of computation.
+
+ Table 4 depicts i) the average test performance of the best selected model over 5 runs when trained on the whole dataset, ii) Spearman's rank correlation coefficient between the validation accuracies obtained by training the selected top 10 models on the proxy dataset and on the whole dataset, iii) the time for training 720 architectures on an NVIDIA GTX1080-Ti GPU, and iv) the memory footprint of the training images. Our method achieves the highest test performance (84.5%) and performance correlation (0.79), while significantly decreasing the search time (from 8604.3 to 18.8 minutes) and storage (from $5 \times 10^{4}$ to $10^{2}$ images) compared to whole-dataset training. The competitive early-stopping baseline achieves on-par performance with ours for the best-performing model; however, its rank correlation over the top 10 models (0.42) is significantly lower than ours (0.79), which indicates an unreliable correlation between early-stopping and whole-dataset training performances. Furthermore, early-stopping needs 100 times as many training images as ours. Note that training the synthetic images takes around 50 minutes (for $K = 500$), a one-off cost that is negligible when training thousands or even millions of candidate architectures in NAS.
+
+
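+ Spearman's rank correlation, the metric used above to judge how well a proxy dataset preserves the ranking of architectures, is simply the Pearson correlation of the ranks. The numpy sketch below illustrates it on made-up accuracies (the numbers are hypothetical, not the paper's measurements, and unlike scipy.stats.spearmanr this toy version assumes no ties):
+
+ ```python
+ import numpy as np
+
+ def spearman_rho(a, b):
+     """Spearman's rank correlation: Pearson correlation of the ranks (no ties)."""
+     ra = np.argsort(np.argsort(a)).astype(float)  # rank of each entry, 0..n-1
+     rb = np.argsort(np.argsort(b)).astype(float)
+     ra -= ra.mean(); rb -= rb.mean()
+     return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))
+
+ # hypothetical validation accuracies of top-10 models: proxy set vs. whole set
+ proxy = np.array([71.0, 69.5, 73.2, 70.1, 72.4, 68.9, 74.0, 70.8, 69.1, 72.9])
+ whole = np.array([83.1, 82.0, 84.3, 82.5, 83.8, 81.2, 84.5, 82.9, 81.9, 84.1])
+ rho = spearman_rho(proxy, whole)  # these two rank the models identically, so rho is 1.0
+ ```
+
+ A rho near 1 means the cheap proxy ranking can safely stand in for full training when shortlisting architectures, which is exactly what the 0.79 obtained with our condensed images (vs. 0.42 for early-stopping) indicates.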
+ # 4 CONCLUSION
+
+ In this paper, we propose a dataset condensation method that learns to synthesize a small set of informative images. We show that these images are significantly more data-efficient than the same number of original images and than those produced by the previous method, and that they are not architecture-dependent and can be used to train different deep networks. Once trained, they can be used to lower the memory footprint of datasets and to efficiently train numerous networks, which are crucial in continual learning and neural architecture search respectively. For future work, we plan to explore the use of condensed images on more diverse and thus challenging datasets such as ImageNet (Deng et al., 2009), which contains higher-resolution images with larger variations in object appearance, pose and background.
+
+ Acknowledgment. This work is funded by China Scholarship Council 201806010331 and the EPSRC programme grant Visual AI EP/T028572/1. We thank Iain Murray and Oisin Mac Aodha for their valuable feedback.
+
+ # REFERENCES
+
+ Pankaj K Agarwal, Sariel Har-Peled, and Kasturi R Varadarajan. Approximating extent measures of points. Journal of the ACM (JACM), 51(4):606-635, 2004.
+ Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. In Advances in Neural Information Processing Systems, pp. 11816-11825, 2019.
+ Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In Advances in Neural Information Processing Systems, pp. 2654-2662, 2014.
+ Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
+ Eden Belouadah and Adrian Popescu. Scail: Classifier weights scaling for class incremental learning. In The IEEE Winter Conference on Applications of Computer Vision, 2020.
+ Ondrej Bohdal, Yongxin Yang, and Timothy Hospedales. Flexible dataset distillation: Learn labels instead of images. Neural Information Processing Systems Workshop, 2020.
+ Cristian Bucilua, Rich Caruana, and Alexandru Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 535-541, 2006.
+ Francisco M Castro, Manuel J Marín-Jiménez, Nicolás Guil, Cordelia Schmid, and Karteek Alahari. End-to-end incremental learning. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 233-248, 2018.
+ Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248-255. IEEE, 2009.
+ Justin Domke. Generic methods for optimization-based modeling. In Artificial Intelligence and Statistics, pp. 318-326, 2012.
+ Dan Feldman, Melanie Schmidt, and Christian Sohler. Turning big data into tiny data: Constant-size coresets for k-means, pca and projective clustering. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, pp. 1434-1453. SIAM, 2013.
+ Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4367-4375, 2018.
+ Jack Goetz and Ambuj Tewari. Federated learning via synthetic data. arXiv preprint arXiv:2008.04489, 2020.
+ Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014a.
+ Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014b.
+ Sariel Har-Peled and Soham Mazumdar. On coresets for k-means and k-median clustering. In Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, 2004.
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1026-1034, 2015.
+ Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778, 2016.
+ Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
+ Jonathan J. Hull. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):550-554, 1994.
+ Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
+ Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
+ Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1885-1894. JMLR.org, 2017.
+ Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
230
+ Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097-1105, 2012.
231
+ Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
232
+ Guang Li, Ren Togo, Takahiro Ogawa, and Miki Haseyama. Soft-label anonymous gastric x-ray image distillation. In 2020 IEEE International Conference on Image Processing (ICIP), pp. 305-309. IEEE, 2020.
233
+ Liam Li and Ameet Talwalkar. Random search and reproducibility for neural architecture search. In Uncertainty in Artificial Intelligence, pp. 367-377. PMLR, 2020.
234
+ Raphael Gontijo Lopes, Stefano Fenu, and Thad Starner. Data-free knowledge distillation for deep neural networks. In LLD Workshop at Neural Information Processing Systems (NIPS), 2017.
235
+ David Lopez-Paz et al. Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems, pp. 6467-6476, 2017.
236
+
237
+ Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. In International conference on machine learning (ICML), 2013.
238
+ Aravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5188-5196, 2015.
239
+ Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
240
+ Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pp. 807-814, 2010.
241
+ Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, Venkatesh Babu Radhakrishnan, and Anirban Chakraborty. Zero-shot knowledge distillation in deep networks. In Proceedings of the 36th International Conference on Machine Learning, 2019.
242
+ Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.
243
+ Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
244
+ Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. icarl: Incremental classifier and representation learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2001-2010, 2017.
245
+ Salah Rifai, Yoshua Bengio, Yann Dauphin, and Pascal Vincent. A generative process for sampling contractive auto-encoders. arXiv preprint arXiv:1206.6434, 2012.
246
+ Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.
247
+ Kegan GG Samuel and Marshall F Tappen. Learning optimized map estimates in continuously-valued mrf models. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 477-484. IEEE, 2009.
248
+ Mert Bulent Sariyildiz and Ramazan Gokberk Cinbis. Gradient matching generative networks for zero-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2168-2178, 2019.
249
+ Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. *ICLR*, 2018.
250
+ Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
251
+ Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in neural information processing systems, pp. 4077-4087, 2017.
252
+ Felipe Petroski Such, Aditya Rawal, Joel Lehman, Kenneth O Stanley, and Jeff Clune. Generative teaching networks: Accelerating neural architecture search by learning to generate synthetic training data. International Conference on Machine Learning, 2020.
253
+ Ilia Sucholutsky and Matthias Schonlau. Soft-label dataset distillation and text dataset distillation. arXiv preprint arXiv:1910.02551, 2019.
254
+ Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J Gordon. An empirical study of example forgetting during deep neural network learning. *ICLR*, 2019.
255
+ Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
256
+
257
+ Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in neural information processing systems, 2016.
258
+ Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, and Alexei A Efros. Dataset distillation. arXiv preprint arXiv:1811.10959, 2018.
259
+ Max Welling. Herding dynamical weights to learn. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 1121-1128. ACM, 2009.
260
+ G W Wolf. Facility location: concepts, models, algorithms and case studies. 2011.
261
+ Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, and Yun Fu. Large scale incremental learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 374-382, 2019.
262
+ Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 3-19, 2018.
263
+ Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
264
+ Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European conference on computer vision, pp. 818-833. Springer, 2014.
265
+ Matthew D. Zeiler, Marc'Aurelio Ranzato, Rajat Monga, Mark Z. Mao, Kyeongcheol Yang, Quoc V. Le, Patrick Nguyen, Andrew W. Senior, Vincent Vanhoucke, Jeffrey Dean, and Geoffrey E. Hinton. On rectified linear units for speech processing. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3517-3521, 2013.
266
+ Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen. idlg: Improved deep leakage from gradients. arXiv preprint arXiv:2001.02610, 2020.
267
+ Yanlin Zhou, George Pu, Xiyao Ma, Xiaolin Li, and Dapeng Wu. Distilled one-shot federated learning. arXiv preprint arXiv:2009.07999, 2020.
268
+ Ligeng Zhu, Zhijian Liu, and Song Han. Deep leakage from gradients. In Advances in Neural Information Processing Systems, pp. 14747-14756, 2019.
269
+
270
# A IMPLEMENTATION DETAILS

In this part, we explain the implementation details for the dataset condensation, continual learning and neural architecture search experiments.

Dataset condensation. The presented experiments involve tuning six hyperparameters: the number of outer-loop steps $K$ and inner-loop steps $T$, the learning rate $\eta_{S}$ and number of optimization steps $\varsigma_{S}$ for the condensed samples, and the learning rate $\eta_{\theta}$ and number of optimization steps $\varsigma_{\theta}$ for the model weights. In all experiments, we set $K = 1000$, $\eta_{S} = 0.1$, $\eta_{\theta} = 0.01$, $\varsigma_{S} = 1$ and employ Stochastic Gradient Descent (SGD) as the optimizer. The only exception is that we set $\eta_{S}$ to 0.01 for synthesizing data with MLP in the cross-architecture experiments (Table 2), as MLP requires a slightly different treatment. Note that while $K$ is the maximum number of outer-loop steps, the optimization can early-stop automatically if it converges before $K$ steps. For the remaining hyperparameters, we use different sets for 1, 10 and 50 image(s)/class learning. We set $T = 1$, $\varsigma_{\theta} = 1$ for 1 image/class, $T = 10$, $\varsigma_{\theta} = 50$ for 10 images/class, and $T = 50$, $\varsigma_{\theta} = 10$ for 50 images/class learning. Note that when $T = 1$, it is not required to update the model parameters (Step 9 in Algorithm 1), as this model is not further used. For those experiments where more than 10 images/class are synthesized, we set $T$ to the number of synthetic images per class and $\varsigma_{\theta} = 500 / T$, e.g. $T = 20$, $\varsigma_{\theta} = 25$ for 20 images/class learning. The ablation study on hyper-parameters is given in Appendix B, which shows that our method is not sensitive to their values.

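As a toy illustration of this outer/inner-loop structure, the sketch below condenses a linear-regression dataset by matching weight gradients. It is a minimal numpy stand-in for Algorithm 1, not the paper's implementation: the model is linear rather than a ConvNet, the gradient of the matching loss w.r.t. the synthetic set is taken by finite differences rather than back-propagation, and the names `match_loss` and `avg_loss` are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" dataset: linear targets, so weight gradients have a closed form.
X_real = rng.normal(size=(256, 2))
w_true = np.array([1.5, -2.0])
y_real = X_real @ w_true + 0.1 * rng.normal(size=256)

def model_grad(X, y, w):
    # Gradient of the MSE loss of a linear model w.r.t. its weights w.
    return X.T @ (X @ w - y) / len(y)

def match_loss(S_flat, w):
    # Squared distance between real and synthetic weight gradients.
    S = S_flat.reshape(2, 3)               # 2 synthetic samples: (x1, x2, y)
    g_real = model_grad(X_real, y_real, w)
    g_syn = model_grad(S[:, :2], S[:, 2], w)
    return float(np.sum((g_real - g_syn) ** 2))

def avg_loss(S_flat, n=50, seed=1):
    # Average matching loss over fresh random weight initialisations.
    r = np.random.default_rng(seed)
    return float(np.mean([match_loss(S_flat, r.normal(size=2)) for _ in range(n)]))

S = rng.normal(size=6)                     # condensed set, initialised from noise
init_loss = avg_loss(S)
eta_S, eps = 0.05, 1e-5
for k in range(400):                       # outer loop: fresh random weights each step
    w = rng.normal(size=2)
    g = np.zeros_like(S)                   # finite-difference gradient w.r.t. S
    for i in range(len(S)):
        d = np.zeros_like(S); d[i] = eps
        g[i] = (match_loss(S + d, w) - match_loss(S - d, w)) / (2 * eps)
    S -= eta_S * g                         # update the synthetic set (Steps 7-8)
final_loss = avg_loss(S)
print(final_loss < init_loss)
```

After training, the synthetic pair matches the real data's gradients far better than the noise initialisation did, which is the whole objective of the condensation loop.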
We use separate-class mini-batch sampling for Step 6 in Algorithm 1. Specifically, we sample a mini-batch pair $B_{c}^{\mathcal{T}}$ and $B_{c}^{\mathcal{S}}$ that contain real and synthetic images from the same class $c$ at each inner iteration. Then, the matching loss for each class is computed with the sampled mini-batch pair and used to update the corresponding synthetic images $S_{c}$ by back-propagation (Steps 7 and 8). This is repeated separately (or in parallel, given enough computational resources) for every class. Training this way is not slower than using mixed-class batches. Although our method still works well when we randomly sample the real and synthetic mini-batches with mixed labels, we found the separate-class strategy faster to train, as matching gradients w.r.t. data from a single class is easier than matching those of multiple classes. In experiments, we randomly sample 256 real images of a class as a mini-batch to calculate the mean gradient and match it with the mean gradient averaged over all synthetic samples with the same class label. The performance is not sensitive to the size of the real-image mini-batch if it is greater than 64.

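The per-class sampling logic above can be sketched in a few lines; `mean_grad` here is a hypothetical stand-in for the mean network gradient over a batch (we just average features), so only the separate-class batching itself is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, batch_real = 3, 256

# Dummy real dataset: 1000 feature vectors with class labels,
# plus a synthetic set of 10 samples per class.
X_real = rng.normal(size=(1000, 8))
y_real = rng.integers(0, num_classes, size=1000)
X_syn = rng.normal(size=(num_classes, 10, 8))

def mean_grad(batch):
    # Stand-in for the mean network gradient over a batch.
    return batch.mean(axis=0)

total_loss = 0.0
for c in range(num_classes):                       # one matching loss per class
    idx = rng.choice(np.flatnonzero(y_real == c), size=batch_real)
    g_real = mean_grad(X_real[idx])                # mean over 256 real images of c
    g_syn = mean_grad(X_syn[c])                    # mean over all synthetic images of c
    total_loss += float(np.sum((g_real - g_syn) ** 2))

print(total_loss > 0)
```

Each class contributes its own matching term, and only the synthetic images of that class receive the resulting gradient.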
In all experiments, we use the standard train/test splits of the datasets; the train/test statistics are shown in Table T5. We apply data augmentation (crop, scale and rotate) only for experiments (coreset methods and ours) on MNIST. The one exception is that we also use data augmentation when comparing to DD (Wang et al., 2018) on CIFAR10 with AlexCifarNet, since data augmentation is also used in (Wang et al., 2018). For the initialization of condensed images, we tried both Gaussian noise and randomly selected real training images, and obtained overall comparable performance across settings and datasets. We therefore use Gaussian noise for initialization in all experiments.

<table><tr><td></td><td>USPS</td><td>MNIST</td><td>FashionMNIST</td><td>SVHN</td><td>CIFAR10</td><td>CIFAR100</td></tr><tr><td>Train</td><td>7,291</td><td>60,000</td><td>60,000</td><td>73,257</td><td>50,000</td><td>50,000</td></tr><tr><td>Test</td><td>2,007</td><td>10,000</td><td>10,000</td><td>26,032</td><td>10,000</td><td>10,000</td></tr></table>

Table T5: Train/test statistics for the USPS, MNIST, FashionMNIST, SVHN, CIFAR10 and CIFAR100 datasets.

In the first stage (training the condensed images), we use Batch Normalization in the VGG and ResNet networks. For reliable estimation of the running mean and variance, we sample many real training images to estimate them and then freeze them ahead of Step 7. In the second stage (training a deep network on the condensed set), we replace Batch Normalization layers with Instance Normalization in VGG and ResNet, because batch statistics are not reliable when training networks on few condensed images. Another minor modification that we apply to the standard ResNet architecture in the first stage is replacing the strided convolutions (stride = 2) with stride-1 convolutions coupled with an average pooling layer. We observe that this change enables more detailed (per-pixel) gradients w.r.t. the condensed images and leads to better condensed images.

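The swap matters because instance normalization uses no batch statistics at all: each sample is normalized identically whether it is processed alone or inside a batch, while batch normalization is not. A minimal numpy sketch (our own simplified versions, without learnable affine parameters):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Normalise over (batch, H, W) per channel; unreliable for tiny batches.
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # Normalise over (H, W) per sample and channel; no batch statistics.
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3, 8, 8))          # (batch, channels, H, W)

# Instance norm: normalising each sample alone equals normalising the batch.
in_alone = np.stack([instance_norm(x[i:i + 1])[0] for i in range(4)])
# Batch norm: the per-sample result depends on the rest of the batch.
bn_alone = np.stack([batch_norm(x[i:i + 1])[0] for i in range(4)])

print(np.allclose(in_alone, instance_norm(x)))
print(np.allclose(bn_alone, batch_norm(x)))
```

The first comparison holds exactly, the second does not, which is why instance normalization is the safer choice when only a handful of condensed images are available per batch.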
Continual learning. In this experiment, we focus on task-incremental learning on SVHN, MNIST and USPS, in that order. The three tasks share the same label space but have significantly different image statistics. The images of the three datasets are reshaped to $32 \times 32$ RGB for standardization. We use the standard training splits and randomly sample 2,000 test images for each dataset to obtain a balanced evaluation over the three datasets. Thus each model is tested on a growing test set with 2,000, 4,000 and 6,000 images at the three stages respectively. We use the default ConvNet in this experiment and set the weight of the distillation loss to 1.0 and the temperature to 2. We run 5,000 and 500 iterations for training and balanced finetuning as in (Castro et al., 2018), with learning rates 0.01 and 0.001 respectively. We run 5 experiments and report the mean and standard deviation in Figure 4.

![](images/e906fdd61d4b9fa9a7922d8da0e364394a0ac5db603024c2ee2f5be39e120a3e.jpg)
Figure F5: The performance correlation between training on the proxy dataset and on the whole dataset. For each proxy dataset, the best 10 models are selected based on validation set performance. In the figure, each point represents an architecture.

![](images/1c3ac131f5ff05aa52a8414939293b031803b25b069b914c83e13134e28437f9.jpg)

![](images/e43d1ff99306a0dbb12e4a62876f53ed533c98445781aa06590228fe17699c14.jpg)

![](images/136279f940e2abd8ac61e766671e067b801ec67dcf08d22faf6f1c09132c58dd.jpg)

<table><tr><td>C\T</td><td>Sigmoid</td><td>ReLU</td><td>LeakyReLU</td></tr><tr><td>Sigmoid</td><td>86.7±0.7</td><td>91.2±0.6</td><td>91.2±0.6</td></tr><tr><td>ReLU</td><td>86.1±0.9</td><td>91.7±0.5</td><td>91.7±0.5</td></tr><tr><td>LeakyReLU</td><td>86.3±0.9</td><td>91.7±0.5</td><td>91.7±0.4</td></tr></table>

Table T6: Cross-activation experiments in accuracy $(\%)$ for 1 condensed image/class in MNIST.

<table><tr><td>C\T</td><td>None</td><td>MaxPooling</td><td>AvgPooling</td></tr><tr><td>None</td><td>78.7±3.0</td><td>80.8±3.5</td><td>88.3±1.0</td></tr><tr><td>MaxPooling</td><td>81.2±2.8</td><td>89.5±1.1</td><td>91.1±0.6</td></tr><tr><td>AvgPooling</td><td>81.8±2.9</td><td>90.2±0.8</td><td>91.7±0.5</td></tr></table>

Table T7: Cross-pooling experiments in accuracy $(\%)$ for 1 condensed image/class in MNIST.

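The distillation term with temperature 2 can be written as the temperature-softened KL divergence scaled by $T^2$, the common formulation from Hinton et al. (2015); the sketch below is a generic numpy version of that loss, not the exact training code.

```python
import numpy as np

def softmax(z, T=1.0):
    # Numerically stable softmax with temperature T.
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between temperature-softened teacher and student
    # distributions, scaled by T^2 as in Hinton et al. (2015).
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1)))

rng = np.random.default_rng(0)
teacher = rng.normal(size=(5, 10))
loss_same = distillation_loss(teacher, teacher)           # zero when outputs agree
loss_diff = distillation_loss(rng.normal(size=(5, 10)), teacher)
print(loss_same, loss_diff > 0)
```

With the weight set to 1.0, this term is simply added to the classification loss of the current task; the old model plays the teacher role.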
Neural Architecture Search. To construct the search space of 720 ConvNets, we vary the hyperparameters $W \in \{32,64,128,256\}$, $D \in \{1,2,3,4\}$, $N \in \{\text{None}, \text{BatchNorm}, \text{LayerNorm}, \text{InstanceNorm}, \text{GroupNorm}\}$, $A \in \{\text{Sigmoid}, \text{ReLU}, \text{LeakyReLU}\}$, $P \in \{\text{None}, \text{MaxPooling}, \text{AvgPooling}\}$. We randomly sample 5,000 of the 50,000 CIFAR10 training images as the validation set. Every candidate ConvNet is trained on the proxy dataset and then evaluated on the validation set, and the candidates are ranked by validation performance. The 10 architectures with the top validation accuracies are used to calculate Spearman's rank correlation coefficient, since the model we ultimately want comes from these top 10 architectures. We train each ConvNet 5 times to obtain averaged validation and testing accuracies.

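The size of this search space follows directly from the Cartesian product of the hyperparameter choices, which can be enumerated in a couple of lines:

```python
from itertools import product

widths = [32, 64, 128, 256]
depths = [1, 2, 3, 4]
norms = ["None", "BatchNorm", "LayerNorm", "InstanceNorm", "GroupNorm"]
acts = ["Sigmoid", "ReLU", "LeakyReLU"]
pools = ["None", "MaxPooling", "AvgPooling"]

# Every (W, D, N, A, P) combination is one candidate ConvNet.
search_space = list(product(widths, depths, norms, acts, pools))
print(len(search_space))  # 4 * 4 * 5 * 3 * 3 = 720
```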
We visualize the performance correlation for the different proxy datasets in Figure F5. The condensed proxy dataset produced by our method achieves the highest performance correlation (0.79), which is significantly higher than early-stopping (0.42). This means our method produces more reliable results for NAS.

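Spearman's rank correlation used here is simply the Pearson correlation of the rank variables; a small self-contained sketch (ties ignored, and the accuracy values below are hypothetical, chosen only to illustrate identical rankings):

```python
import numpy as np

def spearman(a, b):
    # Spearman's rho = Pearson correlation of the ranks of a and b
    # (tie handling omitted for this sketch).
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

proxy_acc = [0.52, 0.61, 0.58, 0.70, 0.66]   # hypothetical proxy-set accuracies
whole_acc = [0.81, 0.86, 0.84, 0.90, 0.88]   # hypothetical whole-set accuracies
print(spearman(proxy_acc, whole_acc))        # identical rankings give rho = 1.0
```

A correlation of 0.79, as reported for our condensed proxy set, means the proxy ranking of architectures largely agrees with the whole-dataset ranking.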
# B FURTHER ANALYSIS

Next we provide additional ablation studies over various deep network layers, including activation, pooling and normalization functions, as well as over the depth and width of the network architecture. We also study the selection of hyper-parameters and the gradient distance metric, and give an additional qualitative analysis of the learned condensed images.

Ablation study on activation functions. Here we study the use of three activation functions (Sigmoid, ReLU, and LeakyReLU with negative slope 0.01) in two stages: when training condensed images (denoted as C) and when training a ConvNet from scratch on the learned condensed images (denoted as T). The experiments are conducted on the MNIST dataset in the 1 condensed image/class setting. Table T6 shows that all three activation functions work well in the first stage, i.e. they all generate good condensed images; however, Sigmoid performs poorly in the second stage, when learning a classifier on the condensed images: its testing accuracies are lower than those of ReLU and LeakyReLU by around $5\%$. This suggests that ReLU can provide sufficiently informative gradients for learning condensed images, even though the gradient of ReLU w.r.t. its input is typically sparse.

<table><tr><td>C\T</td><td>None</td><td>BatchNorm</td><td>LayerNorm</td><td>InstanceNorm</td><td>GroupNorm</td></tr><tr><td>None</td><td>79.0±2.2</td><td>80.8±2.0</td><td>85.8±1.7</td><td>90.7±0.7</td><td>85.9±1.7</td></tr><tr><td>BatchNorm</td><td>78.6±2.1</td><td>80.7±1.8</td><td>85.7±1.6</td><td>90.9±0.6</td><td>85.9±1.5</td></tr><tr><td>LayerNorm</td><td>81.2±1.8</td><td>78.6±3.0</td><td>87.4±1.3</td><td>90.7±0.7</td><td>87.3±1.4</td></tr><tr><td>InstanceNorm</td><td>72.9±7.1</td><td>56.7±6.5</td><td>82.7±5.3</td><td>91.7±0.5</td><td>84.3±4.2</td></tr><tr><td>GroupNorm</td><td>79.5±2.1</td><td>81.8±2.3</td><td>87.3±1.2</td><td>91.6±0.5</td><td>87.2±1.2</td></tr></table>

Table T8: Cross-normalization experiments in accuracy $(\%)$ for 1 condensed image/class in MNIST.

<table><tr><td>C\T</td><td>1</td><td>2</td><td>3</td><td>4</td></tr><tr><td>1</td><td>61.3±3.5</td><td>78.2±3.0</td><td>77.1±4.0</td><td>76.4±3.5</td></tr><tr><td>2</td><td>78.3±2.3</td><td>89.0±0.8</td><td>91.0±0.6</td><td>89.4±0.8</td></tr><tr><td>3</td><td>81.6±1.5</td><td>89.8±0.8</td><td>91.7±0.5</td><td>90.4±0.6</td></tr><tr><td>4</td><td>82.5±1.3</td><td>89.9±0.8</td><td>91.9±0.5</td><td>90.6±0.4</td></tr></table>

Table T9: Cross-depth performance in accuracy $(\%)$ for 1 condensed image/class in MNIST.

<table><tr><td>C\T</td><td>32</td><td>64</td><td>128</td><td>256</td></tr><tr><td>32</td><td>90.6±0.8</td><td>91.4±0.5</td><td>91.5±0.5</td><td>91.3±0.6</td></tr><tr><td>64</td><td>91.0±0.8</td><td>91.6±0.6</td><td>91.8±0.5</td><td>91.4±0.6</td></tr><tr><td>128</td><td>90.8±0.7</td><td>91.5±0.6</td><td>91.7±0.5</td><td>91.2±0.7</td></tr><tr><td>256</td><td>91.0±0.7</td><td>91.6±0.6</td><td>91.7±0.5</td><td>91.4±0.5</td></tr></table>

Table T10: Cross-width performance in accuracy $(\%)$ for 1 condensed image/class in MNIST.

Ablation study on pooling functions. Next we investigate the performance of two pooling functions (average pooling and max pooling), as well as no pooling, for 1 image/class dataset condensation with ConvNet on MNIST, in terms of classification accuracy. Table T7 shows that max and average pooling both perform significantly better than no pooling (None) when used in the second stage. The best testing accuracy $(91.7 \pm 0.5\%)$ is obtained when the condensed samples are trained and tested on models with average pooling, possibly because average pooling provides informative and smooth gradients for the whole image rather than only for its most discriminative parts.

Ablation study on normalization functions. Next we study the performance of four normalization functions (Batch (Ioffe & Szegedy, 2015), Layer (Ba et al., 2016), Instance (Ulyanov et al., 2016) and Group Normalization (Wu & He, 2018), with the number of groups set to four), as well as no normalization, for 1 image/class dataset condensation with the ConvNet architecture on MNIST, in terms of classification accuracy. Table T8 shows that the normalization layer has little influence on learning the condensed set, while the choice of normalization layer is important for training networks on the condensed set. LayerNorm and GroupNorm perform similarly, and InstanceNorm is the best choice for training a model on condensed images. BatchNorm obtains lower performance, similar to None (no normalization), as it is known to perform poorly when models are trained on few samples, as also observed in (Wu & He, 2018). Note that Batch Normalization does not allow for stable training in the first stage (C); we therefore replace its running mean and variance for each batch with those of randomly sampled real training images.

Ablation study on network depth and width. Here we study the effect of network depth and width for 1 image/class dataset condensation with the ConvNet architecture on MNIST, in terms of classification accuracy. To this end we conduct multiple experiments varying the depth and width of the networks that are used to train the condensed synthetic images and of those that are trained as classifiers, and report the results in Tables T9 and T10. Table T9 shows that deeper ConvNets with more blocks generate better condensed images, which result in better classification performance when a network is trained on them, while a ConvNet with 3 blocks performs best as the classifier. Interestingly, Table T10 shows that the best results are obtained with a classifier that has 128 filters per block, while the network width used in generation has little overall impact on the final classification performance.

Ablation study on hyper-parameters. Our performance is not sensitive to hyper-parameter selection. The testing accuracy for various $K$ and $T$ when learning 10 images/class condensed sets is depicted in Figure F6. The results show that the optimal $K$ and $T$ are around similar values across all datasets; thus we simply set $K = 1000$ and $T = 10$ for all datasets. Similarly, for the remaining hyperparameters, including the learning rate and weight decay, we use a single set that was observed to work well for all datasets and architectures in our preliminary experiments.

Ablation study on gradient distance metric. To demonstrate the effectiveness and robustness of the proposed distance metric for gradients (or weights), we compare it to the traditional ones (Lopez-Paz et al., 2017; Aljundi et al., 2019; Zhu et al., 2019), which vectorize and concatenate the whole gradient, $\mathbf{G}^{\mathcal{T}},\mathbf{G}^{\mathcal{S}}\in \mathbb{R}^{D}$, and compute the squared Euclidean distance $\| \mathbf{G}^{\mathcal{T}} -\mathbf{G}^{\mathcal{S}}\| ^2$ or the Cosine distance $1 - \cos (\mathbf{G}^{\mathcal{T}},\mathbf{G}^{\mathcal{S}})$, where $D$ is the number of network parameters. We conduct a 1 image/class learning experiment on MNIST with different architectures. For simplicity, the synthetic images are learned and tested on the same architecture in this experiment. Table T11 shows that the proposed gradient distance metric remarkably outperforms the others on complex architectures (e.g. LeNet, AlexNet, VGG and ResNet) and achieves the best performance in most settings, i.e. it is more effective and robust than the traditional metrics. Note that we set $\eta_{S} = 0.1$ for MLP-Euclidean and MLP-Cosine because it works better than $\eta_{S} = 0.01$.

![](images/ef3e962550b41e6e213c5f6ff1e549dfcccf18fab65d14c09c2fe454cf03be99.jpg)
Figure F6: Ablation study on the hyper-parameters $K$ and $T$ when learning 10 images/class condensed sets.

![](images/245d4e00477c8a1e3d2a6a81659edd670a1293ad413b5c512581d3c4ae51ec8e.jpg)

<table><tr><td></td><td>MLP</td><td>ConvNet</td><td>LeNet</td><td>AlexNet</td><td>VGG</td><td>ResNet</td></tr><tr><td>Euclidean</td><td>69.3±0.9</td><td>92.7±0.3</td><td>65.0±5.1</td><td>66.2±5.6</td><td>57.1±7.0</td><td>68.0±5.2</td></tr><tr><td>Cosine</td><td>45.2±3.6</td><td>69.2±2.7</td><td>61.1±8.2</td><td>58.3±4.1</td><td>55.0±5.0</td><td>68.8±7.8</td></tr><tr><td>Ours</td><td>70.5±1.2</td><td>91.7±0.5</td><td>85.0±1.7</td><td>82.7±2.9</td><td>81.7±2.6</td><td>89.4±0.9</td></tr></table>

Table T11: Ablation study on different gradient distance metrics. The proposed distance metric is more effective and robust. Euclidean: squared Euclidean distance; Cosine: Cosine distance.

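The three distances can be contrasted in a few lines of numpy. Here `layerwise_dist` is our simplified sketch of the paper's groupwise metric: each output-node row of a layer's weight gradient is treated as one group, and per-row cosine distances are summed per layer instead of flattening the whole network into one vector.

```python
import numpy as np

def flat(grads):
    # Flatten-and-concatenate all layer gradients into one long vector.
    return np.concatenate([g.ravel() for g in grads])

def euclidean_dist(gT, gS):
    a, b = flat(gT), flat(gS)
    return float(np.sum((a - b) ** 2))

def cosine_dist(gT, gS):
    a, b = flat(gT), flat(gS)
    return float(1 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def layerwise_dist(gT, gS):
    # Sum of per-layer, per-output-node cosine distances (our sketch of the
    # groupwise metric; one row of a weight gradient = one output node).
    total = 0.0
    for A, B in zip(gT, gS):
        A2, B2 = A.reshape(A.shape[0], -1), B.reshape(B.shape[0], -1)
        num = np.sum(A2 * B2, axis=1)
        den = np.linalg.norm(A2, axis=1) * np.linalg.norm(B2, axis=1) + 1e-12
        total += float(np.sum(1 - num / den))
    return total

rng = np.random.default_rng(0)
gT = [rng.normal(size=(16, 8)), rng.normal(size=(10, 16))]   # two "layers"
gS = [g + 0.1 * rng.normal(size=g.shape) for g in gT]

print(layerwise_dist(gT, gT), layerwise_dist(gT, gS) > 0)
```

All three metrics vanish when the gradients agree; the layerwise form differs in that it normalizes per output node and per layer, so no single large layer dominates the matching loss.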
Further qualitative analysis. We first depict the condensed images learned on the MNIST, FashionMNIST, SVHN and CIFAR10 datasets in one experiment using the default ConvNet in the 10 images/class setting in Figure F7. Interestingly, the 10 images/class results in Figure F7 are diverse and cover the main variations, while the condensed images for the 1 image/class setting (see Figure 2) look like a "prototype" of each class. For example, in Figure F7 (a), the ten images of "four" show ten different writing styles. The ten "bag" images in Figure F7 (b) are significantly different from each other, resembling a "wallet" (1st row), "shopping bag" (3rd row), "handbag" (8th row) and "schoolbag" (10th row). Figure F7 (c) likewise shows diverse house numbers with different shapes, colors and shadows, and different poses of a "horse" have been learned in Figure F7 (d).

# C COMPARISON TO MORE BASELINES

Optimal random selection. One interesting and strong baseline is Optimal Random Selection (ORS), in which we run random selection 1,000 times and pick the best coresets. Table T12 presents the comparison to the Top 1000 (i.e. all), Top 100 and Top 10 coresets, selected by ranking their performance. The condensed set generated by our method surpasses even the selected Top 10 of the 1,000 coresets by a large margin on all four datasets.

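The ORS baseline can be sketched as follows; the `evaluate` function below is a toy stand-in for training and testing a network on each candidate coreset, so only the select-and-rank procedure itself is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pool of 1000 candidate training examples with a hidden "utility" score.
utility = rng.normal(size=1000)

def evaluate(coreset_idx):
    # Stand-in for the test accuracy of a model trained on this coreset.
    return float(utility[coreset_idx].mean())

# Draw 1000 random coresets of 10 samples each and rank them by performance.
trials = [rng.choice(1000, size=10, replace=False) for _ in range(1000)]
scores = np.array([evaluate(t) for t in trials])
top10 = scores[np.argsort(scores)[-10:]]       # the 10 best of 1000 coresets

print(top10.mean() > scores.mean())            # selection helps, by construction
```

Even this best-of-1000 selection only searches over subsets of real images, whereas condensation optimizes the pixel values themselves, which is why it can exceed the Top 10 coresets.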
Generative model. We also compare to a popular generative model, Conditional Generative Adversarial Networks (cGAN) (Mirza & Osindero, 2014). The generator has two blocks, each consisting of Up-sampling (scale_factor=2), Convolution (stride=1), BatchNorm and LeakyReLU layers. The discriminator has three blocks, each consisting of Convolution (stride=2), BatchNorm and LeakyReLU layers. In addition to the random noise, we also input the class label as the condition. We generate 1 and 10 images per class for each dataset from random noise. Table T12 shows that the images produced by cGAN perform similarly to randomly selected coresets (i.e. Top 1000). This is reasonable, because the aim of cGAN is to generate realistic-looking images, whereas our method aims to generate images that train deep neural networks efficiently.

+ Analysis of coreset performances We find that K-Center (Wolf, 2011; Sener & Savarese, 2018) and Forgetting (Toneva et al., 2019) don't work as well as other general coreset methods, namely
359
+
360
+ ![](images/5497a7d287eab68d7902ab5b2d2f5e8116cfc4abb1529995841399ad36e84964.jpg)
361
+ (a) MNIST
362
+
363
+ ![](images/9a52456cfbf2b73d51fe9373eb9067b58e7b076de572ff8d1ce22559f39853d8.jpg)
364
+ (b) FashionMNIST
365
+
366
+ ![](images/08a48fe45ddaae7a9c42a3894ea5263e8a21f1f33346dda5216619b110e2b9bd.jpg)
367
+ (c) SVHN
368
+
369
+ ![](images/e82b7d5b993bfc21be138a167a69bc2726d7dd9285450619f9c5f128c823d330.jpg)
370
+ (d) CIFAR10
371
+ Figure F7: The synthetic images for MNIST, FashionMNIST, SVHN and CIFAR10 produced by our method with ConvNet under 10 images/class setting.
372
+
373
+ <table><tr><td rowspan="2"></td><td rowspan="2">Img/Cls</td><td rowspan="2">Ratio %</td><td colspan="3">Optimal Random Selection</td><td rowspan="2">cGAN</td><td rowspan="2">Ours</td><td rowspan="2">Whole Dataset</td></tr><tr><td>Top 1000</td><td>Top 100</td><td>Top 10</td></tr><tr><td rowspan="2">MNIST</td><td>1</td><td>0.017</td><td>64.3±6.1</td><td>74.4±1.8</td><td>78.2±1.7</td><td>64.0±3.2</td><td>91.7±0.5</td><td rowspan="2">99.6±0.0</td></tr><tr><td>10</td><td>0.17</td><td>94.8±0.7</td><td>96.0±0.2</td><td>96.4±0.1</td><td>94.9±0.6</td><td>97.4±0.2</td></tr><tr><td rowspan="2">FashionMNIST</td><td>1</td><td>0.017</td><td>51.3±5.4</td><td>59.6±1.3</td><td>62.4±0.9</td><td>51.1±0.8</td><td>70.5±0.6</td><td rowspan="2">93.5±0.1</td></tr><tr><td>10</td><td>0.17</td><td>73.8±1.6</td><td>76.4±0.6</td><td>77.6±0.2</td><td>73.9±0.7</td><td>82.3±0.4</td></tr><tr><td rowspan="2">SVHN</td><td>1</td><td>0.014</td><td>14.3±2.1</td><td>18.1±0.9</td><td>19.9±0.2</td><td>16.1±0.9</td><td>31.2±1.4</td><td rowspan="2">95.4±0.1</td></tr><tr><td>10</td><td>0.14</td><td>34.6±3.2</td><td>40.3±1.3</td><td>42.9±0.9</td><td>33.9±1.1</td><td>76.1±0.6</td></tr><tr><td rowspan="2">CIFAR10</td><td>1</td><td>0.02</td><td>15.0±2.0</td><td>18.5±0.8</td><td>20.1±0.5</td><td>16.3±1.4</td><td>28.3±0.5</td><td rowspan="2">84.8±0.1</td></tr><tr><td>10</td><td>0.2</td><td>27.1±1.6</td><td>29.8±0.7</td><td>31.4±0.2</td><td>27.9±1.1</td><td>44.9±0.5</td></tr></table>
374
+
375
+ Table T12: The performance comparison to optimal random selection (ORS) and conditional generative adversarial networks (cGAN) baselines. This table shows the testing accuracies $(\%)$ of different methods on four datasets. ConvNet is used for training and testing. Img/Cls: image(s) per class, Ratio $(\%)$ : the ratio of condensed images to whole training set. Top 1000, Top 100 and Top 10 means the selected 1000, 100 and 10 optimal coresets by ranking their performances.
376
+
377
+ <table><tr><td rowspan="2"></td><td rowspan="2">Img/Cls</td><td rowspan="2">Ratio %</td><td colspan="4">Core-set Selection</td><td rowspan="2">LD†</td><td rowspan="2">Ours</td><td rowspan="2">Whole Dataset</td></tr><tr><td>Random</td><td>Herding</td><td>K-Center</td><td>Forgetting</td></tr><tr><td rowspan="2">CIFAR100</td><td>1</td><td>0.2</td><td>4.2±0.3</td><td>8.4±0.3</td><td>8.3±0.3</td><td>3.5±0.3</td><td>11.5±0.4</td><td>12.8±0.3</td><td rowspan="2">56.2±0.3</td></tr><tr><td>10</td><td>2</td><td>14.6±0.5</td><td>17.3±0.3</td><td>7.1±0.3</td><td>9.8±0.2</td><td>-</td><td>25.2±0.3</td></tr></table>
+
+ Table T13: The performance comparison on CIFAR100. This table shows the testing accuracies $(\%)$ of different methods. ConvNet is used for training and testing except that $\mathrm{LD}^{\dagger}$ uses AlexNet. Img/Cls: image(s) per class, Ratio $(\%)$ : the ratio of condensed images to whole training set.
+
+ <table><tr><td>Method</td><td>MLP</td><td>ConvNet</td><td>LeNet</td><td>AlexNet</td><td>VGG</td><td>ResNet</td></tr><tr><td>DD</td><td>72.7±2.8</td><td>77.6±2.9</td><td>79.5±8.1</td><td>51.3±19.9</td><td>11.4±2.6</td><td>63.6±12.7</td></tr><tr><td>Ours</td><td>83.0±2.5</td><td>92.9±0.5</td><td>93.9±0.6</td><td>90.6±1.9</td><td>92.9±0.5</td><td>94.5±0.4</td></tr></table>
+
+ Random and Herding (Rebuffi et al., 2017), in this experimental setting. After analyzing the algorithms and coresets, we find two main reasons. 1) K-Center and Forgetting are not designed for training deep networks from scratch; they target active learning and continual learning respectively. 2) Both algorithms tend to select "hard" samples, which are often outliers when only a small number of images is selected, and such outliers mislead training and degrade performance. Specifically, the first sample per class in the K-Center coreset is the one closest to the class center, while the later samples, selected by a greedy criterion that pursues maximum coverage, are often outliers.
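The selection behaviour described above can be made concrete. Below is an illustrative sketch (not the code used in these experiments) of K-Center greedy selection for a single class, assuming `features` is an array of per-sample feature vectors: the first pick is the sample nearest the class mean, and every later pick maximizes its distance to the current selection, which is exactly why small coresets drift toward outliers.

```python
import numpy as np

def k_center_greedy(features, k):
    """Illustrative K-Center greedy selection for one class.

    The first pick is the sample closest to the class mean; each later pick
    maximizes the distance to the nearest already-selected sample
    (maximum coverage), which tends to favour outliers.
    """
    center = features.mean(axis=0)
    first = int(np.argmin(np.linalg.norm(features - center, axis=1)))
    selected = [first]
    # distance from every sample to its nearest selected sample
    min_dist = np.linalg.norm(features - features[first], axis=1)
    while len(selected) < k:
        nxt = int(np.argmax(min_dist))  # farthest-from-selection sample
        selected.append(nxt)
        min_dist = np.minimum(min_dist,
                              np.linalg.norm(features - features[nxt], axis=1))
    return selected
```

Running this on a tight cluster plus one far-away point shows the effect: the outlier is picked immediately after the near-center seed.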
+
+ Performance on CIFAR100. We supplement the comparison with results on CIFAR100, which contains 10 times as many classes as the other benchmarks. Having more classes but fewer images per class makes CIFAR100 significantly more challenging. We use the same set of hyper-parameters for CIFAR100 as for the other datasets. Table T13 reports the performance of coreset selection methods, Label Distillation (LD) (Bohdal et al., 2020) and our method. Ours achieves $12.8\%$ and $25.2\%$ testing accuracy on CIFAR100 when learning 1 and 10 images per class respectively, the best among the compared methods.
+
+ # D FURTHER COMPARISON TO DD (WANG ET AL., 2018)
+
+ Next we compare our method to DD (Wang et al., 2018): first quantitatively in terms of cross-architecture generalization, then qualitatively in terms of synthetic image quality, and finally in terms of the computational load of training synthetic images. Note that all DD results in these experiments are obtained with the original source code provided by its authors.
+
+ Generalization ability comparison. Here we compare the generalization ability across deep network architectures to that of DD. To this end, we use the 10 images/class synthetic set learned with LeNet on MNIST to train MLP, ConvNet, LeNet, AlexNet, VGG11 and ResNet18, and report the results in Table T14. The condensed set produced by our method achieves good classification performance with all architectures, while the synthetic set produced by DD performs poorly when used to train some architectures, e.g. AlexNet, VGG and ResNet. Note that, in addition to the synthetic data, DD learns a separate learning rate for every training step, whereas our method does not tie learning rates to specific steps. Although these tied learning rates improve DD's performance when training and testing on the same architecture, they hinder generalization to unseen architectures.
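The evaluation protocol behind Table T14 can be sketched as follows (a hypothetical simplification, not the authors' released code; `build_model` and the data tensors are placeholders): the condensed set is held fixed, and each architecture is trained on it from scratch before being tested on the real test set.

```python
import torch
import torch.nn.functional as F

def evaluate_condensed(cond_x, cond_y, test_x, test_y, build_model, steps=300):
    """Train a fresh model on the (small) condensed set, then test on real data."""
    model = build_model()
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    for _ in range(steps):  # the condensed set is tiny, so use full-batch steps
        opt.zero_grad()
        F.cross_entropy(model(cond_x), cond_y).backward()
        opt.step()
    with torch.no_grad():
        acc = (model(test_x).argmax(dim=1) == test_y).float().mean().item()
    return acc
```

Repeating this with MLP, ConvNet, LeNet, AlexNet, VGG11 and ResNet18 as `build_model` reproduces the cross-architecture protocol.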
+
+ Table T14: Generalization ability comparison to DD. The 10 condensed images per class are trained with LeNet, and tested on various architectures. It shows that condensed images generated by our method have better generalization ability.
+
+ <table><tr><td>Method</td><td>Dataset</td><td>Architecture</td><td>Memory (MB)</td><td>Time (min)</td><td>Test Acc.</td></tr><tr><td>DD</td><td>MNIST</td><td>LeNet</td><td>785</td><td>160</td><td>79.5±8.1</td></tr><tr><td>Ours</td><td>MNIST</td><td>LeNet</td><td>653</td><td>46</td><td>93.9±0.6</td></tr><tr><td>DD</td><td>CIFAR10</td><td>AlexCifarNet</td><td>3211</td><td>214</td><td>36.8±1.2</td></tr><tr><td>Ours</td><td>CIFAR10</td><td>AlexCifarNet</td><td>1445</td><td>105</td><td>39.1±1.2</td></tr></table>
+
+ Table T15: Training time and memory use for DD and our method in the 10 images/class setting.
+
+ ![](images/c30d4cdab1cf416df2a90a4f20f426b3354d06616b0500aced76650cf83d3b97.jpg)
+ (a) MNIST of DD
+
+ ![](images/146c45a72efe29f2a70031587f0c6c38356c311404cc69d4776aad471a74d6cf.jpg)
+ (b) CIFAR10 of DD
+
+ ![](images/ca946421f1b093b43beaa8a912ddc10de5f22ea5e58acc2e9d6297bd30c2cd42.jpg)
+ (c) MNIST of ours
+
+ ![](images/18f9ddb3f754af2dade8315feb7b346a119fcd86fbad6be8fa90bcbc2f2b1804.jpg)
+ (d) CIFAR10 of ours
+ Figure F8: Qualitative comparison between the condensed images produced by DD and ours under 10 images/class setting. LeNet and AlexCifarNet are utilized for MNIST and CIFAR10 respectively.
+
+ Qualitative comparison. We also compare image quality to DD in Figure F8. Both synthetic sets are learned with LeNet on MNIST and AlexCifarNet on CIFAR10. Our method produces more interpretable and realistic images than DD, although image realism is not our goal. The MNIST images produced by DD are noisy, and its CIFAR10 images show no clear structure of the corresponding class. In contrast, the MNIST and CIFAR10 images produced by our method are both visually meaningful and diverse.
+
+ Training memory and time. One advantage of our method is that we decouple the model weights from their previous states during training, whereas DD has to maintain the recursive computation graph, which does not scale to large models or to inner-loop optimizers with many steps. Hence, our method requires less training time and memory. We compare the training time and memory cost of DD and our method on one NVIDIA GTX1080-Ti GPU. Table T15 shows that our method needs significantly less of both: approximately $17\%$ and $55\%$ less memory and $71\%$ and $51\%$ less training time than DD on MNIST and CIFAR10 respectively. Furthermore, our training time and memory cost can be decreased further by using smaller hyper-parameters, e.g. $K$, $T$ and the batch size of sampled real images, at a slight cost in performance (refer to Figure F6).
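The decoupling can be illustrated with a minimal sketch (our assumptions: a single gradient-matching step with an SGD inner update; all names are placeholders, not the released code). The synthetic images receive gradients through one differentiable matching loss, while the model update happens outside any retained graph, so memory does not grow with the number of training steps:

```python
import torch
import torch.nn.functional as F

def matching_step(model, syn_x, syn_y, real_x, real_y, syn_opt, lr=0.01):
    params = list(model.parameters())
    # gradient of the real-data loss (treated as a constant target)
    g_real = torch.autograd.grad(F.cross_entropy(model(real_x), real_y), params)
    # gradient of the synthetic-data loss, kept differentiable w.r.t. syn_x
    g_syn = torch.autograd.grad(F.cross_entropy(model(syn_x), syn_y), params,
                                create_graph=True)
    # update the synthetic images by matching the two gradients
    match = sum(((a - b.detach()) ** 2).sum() for a, b in zip(g_syn, g_real))
    syn_opt.zero_grad()
    match.backward()
    syn_opt.step()
    # advance the model one SGD step on the (now fixed) synthetic data;
    # no graph is kept, so nothing is unrolled across steps
    g = torch.autograd.grad(F.cross_entropy(model(syn_x.detach()), syn_y), params)
    with torch.no_grad():
        for p, gp in zip(params, g):
            p -= lr * gp
```

In contrast, a DD-style step would keep `create_graph=True` through every inner model update, so the graph (and memory) grows linearly with the number of unrolled steps.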
+
+ # E EXTENDED RELATED WORK
+
+ Variations of Dataset Distillation. Several recent works extend Dataset Distillation (Wang et al., 2018). For example, Sucholutsky & Schonlau (2019) and Bohdal et al. (2020) aim to improve DD by learning soft labels, with and without synthetic images respectively, and Such et al. (2020) synthesizes images with a generator instead of directly updating image pixels. However, their reported quantitative and qualitative improvements over DD are minor compared to ours. In addition, none of these methods thoroughly verifies the cross-architecture generalization ability of the synthetic images.
+
+ Zero-shot Knowledge Distillation. Recent zero-shot KD methods (Lopes et al., 2017; Nayak et al., 2019) perform KD from a trained model in the absence of training data by generating synthetic data as an intermediate product. Unlike them, our method does not require a pretrained teacher model to provide the knowledge, i.e. the features and labels.
+
+ Data Privacy & Federated Learning. Synthetic datasets are also a promising means of protecting data privacy and enabling safe federated learning. Existing work uses synthetic datasets to protect the privacy of medical data (Li et al., 2020) and to reduce the number of communication rounds in federated learning (Zhou et al., 2020). Although transmitting model weights or gradients (Zhu et al., 2019; Zhao et al., 2020) may increase transmission security, the huge number of parameters in modern deep neural networks makes frequent transmission prohibitive. In contrast, transmitting a small synthetic dataset between clients and server is cheap (Goetz & Tewari, 2020).
datasetcondensationwithgradientmatching/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:38cfd6ef742391869c1b68ae917902ddc429e1cd1fdc915f8401b32801c40efc
+ size 1220377
datasetcondensationwithgradientmatching/layout.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:801eed31dd9283aefe981ccbd3227f3910d5876bbdea38d24ac110e1a74c7fff
+ size 672999
deepsymbolicregressionrecoveringmathematicalexpressionsfromdataviariskseekingpolicygradients/84be15b4-c5e3-44fc-9542-60302997e137_content_list.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28e48cd7222cb6f83da6814e3496835674095324b9165ebdeb696b0d96ac025c
+ size 167655
deepsymbolicregressionrecoveringmathematicalexpressionsfromdataviariskseekingpolicygradients/84be15b4-c5e3-44fc-9542-60302997e137_model.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7448ad06386132cbc10abc418ae08c7189e31958e1a9484fe325fe00e7a0e16a
+ size 191912