Add Batch 864e3c22-1a64-4b09-90d3-7b12b0aa94e6
This view is limited to 50 files because it contains too many changes.
- metagmvaemixtureofgaussianvaeforunsupervisedmetalearning/62b1fbbd-7196-49c5-9a18-728592c0f987_content_list.json +3 -0
- metagmvaemixtureofgaussianvaeforunsupervisedmetalearning/62b1fbbd-7196-49c5-9a18-728592c0f987_model.json +3 -0
- metagmvaemixtureofgaussianvaeforunsupervisedmetalearning/62b1fbbd-7196-49c5-9a18-728592c0f987_origin.pdf +3 -0
- metagmvaemixtureofgaussianvaeforunsupervisedmetalearning/full.md +343 -0
- metagmvaemixtureofgaussianvaeforunsupervisedmetalearning/images.zip +3 -0
- metagmvaemixtureofgaussianvaeforunsupervisedmetalearning/layout.json +3 -0
- mindthepadcnnscandevelopblindspots/3552c9a6-8d0a-4fb1-b42b-7481e78bb67d_content_list.json +3 -0
- mindthepadcnnscandevelopblindspots/3552c9a6-8d0a-4fb1-b42b-7481e78bb67d_model.json +3 -0
- mindthepadcnnscandevelopblindspots/3552c9a6-8d0a-4fb1-b42b-7481e78bb67d_origin.pdf +3 -0
- mindthepadcnnscandevelopblindspots/full.md +1492 -0
- mindthepadcnnscandevelopblindspots/images.zip +3 -0
- mindthepadcnnscandevelopblindspots/layout.json +3 -0
- minimumwidthforuniversalapproximation/618e1fd2-0ea8-4023-9875-54c8e635fe2b_content_list.json +3 -0
- minimumwidthforuniversalapproximation/618e1fd2-0ea8-4023-9875-54c8e635fe2b_model.json +3 -0
- minimumwidthforuniversalapproximation/618e1fd2-0ea8-4023-9875-54c8e635fe2b_origin.pdf +3 -0
- minimumwidthforuniversalapproximation/full.md +0 -0
- minimumwidthforuniversalapproximation/images.zip +3 -0
- minimumwidthforuniversalapproximation/layout.json +3 -0
- modelbasedvisualplanningwithselfsupervisedfunctionaldistances/91e786e7-cdc2-4c2c-957e-715d02d5d2ba_content_list.json +3 -0
- modelbasedvisualplanningwithselfsupervisedfunctionaldistances/91e786e7-cdc2-4c2c-957e-715d02d5d2ba_model.json +3 -0
- modelbasedvisualplanningwithselfsupervisedfunctionaldistances/91e786e7-cdc2-4c2c-957e-715d02d5d2ba_origin.pdf +3 -0
- modelbasedvisualplanningwithselfsupervisedfunctionaldistances/full.md +384 -0
- modelbasedvisualplanningwithselfsupervisedfunctionaldistances/images.zip +3 -0
- modelbasedvisualplanningwithselfsupervisedfunctionaldistances/layout.json +3 -0
- multivariateprobabilistictimeseriesforecastingviaconditionednormalizingflows/0d955605-445e-463c-ab48-ea4bd1d83f78_content_list.json +3 -0
- multivariateprobabilistictimeseriesforecastingviaconditionednormalizingflows/0d955605-445e-463c-ab48-ea4bd1d83f78_model.json +3 -0
- multivariateprobabilistictimeseriesforecastingviaconditionednormalizingflows/0d955605-445e-463c-ab48-ea4bd1d83f78_origin.pdf +3 -0
- multivariateprobabilistictimeseriesforecastingviaconditionednormalizingflows/full.md +561 -0
- multivariateprobabilistictimeseriesforecastingviaconditionednormalizingflows/images.zip +3 -0
- multivariateprobabilistictimeseriesforecastingviaconditionednormalizingflows/layout.json +3 -0
- mutualinformationstateintrinsiccontrol/e347bec0-55c8-45aa-aba8-e5cfd14a392b_content_list.json +3 -0
- mutualinformationstateintrinsiccontrol/e347bec0-55c8-45aa-aba8-e5cfd14a392b_model.json +3 -0
- mutualinformationstateintrinsiccontrol/e347bec0-55c8-45aa-aba8-e5cfd14a392b_origin.pdf +3 -0
- mutualinformationstateintrinsiccontrol/full.md +402 -0
- mutualinformationstateintrinsiccontrol/images.zip +3 -0
- mutualinformationstateintrinsiccontrol/layout.json +3 -0
- neuralapproximatesufficientstatisticsforimplicitmodels/16f1e78d-9335-4c06-ac73-02a62d0df26f_content_list.json +3 -0
- neuralapproximatesufficientstatisticsforimplicitmodels/16f1e78d-9335-4c06-ac73-02a62d0df26f_model.json +3 -0
- neuralapproximatesufficientstatisticsforimplicitmodels/16f1e78d-9335-4c06-ac73-02a62d0df26f_origin.pdf +3 -0
- neuralapproximatesufficientstatisticsforimplicitmodels/full.md +432 -0
- neuralapproximatesufficientstatisticsforimplicitmodels/images.zip +3 -0
- neuralapproximatesufficientstatisticsforimplicitmodels/layout.json +3 -0
- neuraltopicmodelviaoptimaltransport/b0b88773-75ad-4d9b-af3f-190d5b40e839_content_list.json +3 -0
- neuraltopicmodelviaoptimaltransport/b0b88773-75ad-4d9b-af3f-190d5b40e839_model.json +3 -0
- neuraltopicmodelviaoptimaltransport/b0b88773-75ad-4d9b-af3f-190d5b40e839_origin.pdf +3 -0
- neuraltopicmodelviaoptimaltransport/full.md +370 -0
- neuraltopicmodelviaoptimaltransport/images.zip +3 -0
- neuraltopicmodelviaoptimaltransport/layout.json +3 -0
- noiseagainstnoisestochasticlabelnoisehelpscombatinherentlabelnoise/a943647e-e121-45ad-a41c-0a21533e1dca_content_list.json +3 -0
- noiseagainstnoisestochasticlabelnoisehelpscombatinherentlabelnoise/a943647e-e121-45ad-a41c-0a21533e1dca_model.json +3 -0

metagmvaemixtureofgaussianvaeforunsupervisedmetalearning/62b1fbbd-7196-49c5-9a18-728592c0f987_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac2cf088b04a0a6e16fe308f379b31f8ea95c730a7392813d316d6e8a4690c21
+size 103906

metagmvaemixtureofgaussianvaeforunsupervisedmetalearning/62b1fbbd-7196-49c5-9a18-728592c0f987_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ddd44e066245f8476644124111e8987c4dccd88b844ef1670049493b1cbefa0f
+size 123950

metagmvaemixtureofgaussianvaeforunsupervisedmetalearning/62b1fbbd-7196-49c5-9a18-728592c0f987_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e5637e0ba3db966afa861c1de18c2d58fd00b778f462917cb20f0c0250db4d67
+size 17643251

metagmvaemixtureofgaussianvaeforunsupervisedmetalearning/full.md
ADDED
@@ -0,0 +1,343 @@

# META-GMVAE: MIXTURE OF GAUSSIAN VAES FOR UNSUPERVISED META-LEARNING

Dong Bok Lee$^{1}$, Dongchan Min$^{1}$, Seanie Lee$^{1}$, and Sung Ju Hwang$^{1,2}$

KAIST$^{1}$, AITRICS$^{2}$, South Korea

{markhi, alsehdcks95, lsnfamily02, sjhwang82}@kaist.ac.kr

# ABSTRACT

Unsupervised learning aims to learn meaningful representations from unlabeled data that capture its intrinsic structure and can be transferred to downstream tasks. Meta-learning, whose objective is to learn to generalize across tasks such that the learned model can rapidly adapt to a novel task, shares the spirit of unsupervised learning in that both seek a more effective and efficient learning procedure than learning from scratch. The fundamental difference between the two is that most meta-learning approaches are supervised, assuming full access to labels. However, acquiring a labeled dataset for meta-training is not only costly, as it requires human effort in labeling, but also limits meta-learning's applications to pre-defined task distributions. In this paper, we propose a principled unsupervised meta-learning model, namely Meta-GMVAE, based on the Variational Autoencoder (VAE) and set-level variational inference. Moreover, we introduce a Gaussian mixture (GMM) prior, assuming that each modality represents a class concept in a randomly sampled episode, which we optimize with Expectation-Maximization (EM). The learned model can then be used for downstream few-shot classification tasks, where we obtain task-specific parameters by performing semi-supervised EM on the latent representations of the support and query set, and predict labels of the query set by computing aggregated posteriors. We validate our model on the Omniglot and Mini-ImageNet datasets by evaluating its performance on downstream few-shot classification tasks. The results show that our model obtains impressive performance gains over existing unsupervised meta-learning baselines, even outperforming supervised MAML in a certain setting.

# 1 INTRODUCTION

Unsupervised learning is one of the most fundamental and challenging problems in machine learning, due to the absence of target labels to guide the learning process. Thanks to enormous research effort, there now exist many unsupervised learning methods that have shown promising results on real-world domains, including image recognition (Le, 2013) and natural language understanding (Ramachandran et al., 2017). The essential goal of unsupervised learning is obtaining meaningful feature representations that best characterize the data, which can later be utilized to improve the performance of downstream tasks, either by training a supervised task-specific model on top of the learned representations (Reed et al., 2014; Cheung et al., 2015; Chen et al., 2016) or by fine-tuning the entire pre-trained model (Erhan et al., 2010).

Meta-learning, whose objective is to learn general knowledge across diverse tasks such that the learned model can rapidly adapt to novel tasks, shares the spirit of unsupervised learning in that both seek a more efficient and effective learning procedure than learning from scratch. However, the essential difference between the two is that most meta-learning approaches are built on a supervised learning scheme and require human-crafted task distributions to be applied to few-shot classification. Acquiring a labeled dataset for meta-training may require a massive amount of human effort and, more importantly, limits meta-learning's applications to pre-defined task distributions (e.g., classification of a specific set of classes).

Two recent works have proposed unsupervised meta-learning to bridge the gap between unsupervised learning and meta-learning, focusing on constructing supervised tasks with pseudo-labels from the unlabeled data. To do so, CACTUs (Hsu et al., 2019) clusters data in the embedding space

Figure 1: During meta-training, Meta-GMVAE learns a multi-modal latent space that best explains the unlabeled data using the EM algorithm. At meta-test time, we use semi-supervised EM to map both the support set (labeled data) and queries (unlabeled data) to each mode learned during meta-training.

learned with several unsupervised learning methods, while UMTRA (Khodadadeh et al., 2019) assumes that each randomly drawn sample represents a different class and augments each pseudo-class with data augmentation (Cubuk et al., 2018). After constructing the meta-training dataset with such heuristics, they simply apply supervised meta-learning algorithms as usual. Despite the success of existing unsupervised meta-learning methods, they are fundamentally limited, since 1) they only use unsupervised learning for heuristic pseudo-labeling of unlabeled data, and 2) the two-stage approach makes it impossible to recover from incorrect pseudo-class assignments made when learning the unsupervised representation space.

In this paper, we propose a principled unsupervised meta-learning model based on the Variational Autoencoder (VAE) (Kingma & Welling, 2014) and set-level variational inference using self-attention (Vaswani et al., 2017). Moreover, we introduce a multi-modal prior distribution, a mixture of Gaussians (GMM), assuming that each modality represents a class concept in any given task. The parameters of the GMM are then optimized by running Expectation-Maximization (EM) on observations sampled from the set-dependent variational posterior. In this framework, however, there is no guarantee that each modality obtained from the EM algorithm corresponds to a label. To associate each modality with a label, we deploy semi-supervised EM at meta-test time, treating the support set and query set as labeled and unlabeled observations, respectively. We refer to our method as Meta-Gaussian Mixture Variational Autoencoder (Meta-GMVAE) (see Figure 1 for the high-level concept). While our method can be used as a full generative model for generating samples (images), the ability to generate samples may not be necessary for capturing the meta-knowledge needed for non-generative downstream tasks. Thus, we propose another version of Meta-GMVAE that reconstructs high-level features learned by unsupervised representation learning approaches (e.g., Chen et al. (2020)).

To investigate the effectiveness of our framework, we run experiments on two benchmark few-shot image classification datasets, namely Omniglot (Lake et al., 2011) and Mini-ImageNet (Ravi & Larochelle, 2017). The experimental results show that our Meta-GMVAE obtains impressive performance gains over the relevant unsupervised meta-learning baselines on both datasets, obtaining even better accuracy than fully supervised MAML (Finn et al., 2017) while utilizing as little as $0.1\%$ of the labeled data in the one-shot setting on the Omniglot dataset. Moreover, our model can generalize to classification tasks with a different number of ways (classes) without loss of accuracy. Our contribution is threefold:

- We propose a novel unsupervised meta-learning model, namely Meta-GMVAE, which meta-learns the set-conditioned prior and posterior network for a VAE. Our Meta-GMVAE is a principled unsupervised meta-learning method, unlike existing unsupervised meta-learning methods that combine heuristic pseudo-labeling with supervised meta-learning.
- We propose to learn the multi-modal structure of a given dataset with a Gaussian mixture prior, such that it can adapt to a novel dataset via the EM algorithm. This flexible adaptation to a new task is not possible with existing methods that propose VAEs with Gaussian mixture priors for single-task learning.
- We show that Meta-GMVAE largely outperforms relevant unsupervised meta-learning baselines on two benchmark datasets, while obtaining even better performance than a supervised meta-learning model under a specific setting. We further show that Meta-GMVAE can generalize to classification tasks with a different number of ways (classes).

# 2 RELATED WORK

Unsupervised learning Many prior unsupervised learning methods have developed proxy objectives based on reconstruction (Vincent et al., 2010; Higgins et al., 2017), adversarially obtained image fidelity (Radford et al., 2016; Salimans et al., 2016; Donahue et al., 2017; Dumoulin et al., 2017), disentanglement (Bengio et al., 2013; Reed et al., 2014; Cheung et al., 2015; Chen et al., 2016; Mathieu et al., 2016; Denton & Birodkar, 2017; Kim & Mnih, 2018; Ding et al., 2020), clustering (Coates & Ng, 2012; Krähenbuhl et al., 2016; Bojanowski & Joulin, 2017; Caron et al., 2018), or contrastive learning (Chen et al., 2020). In the unsupervised learning literature, the works most relevant to ours are methods that use Gaussian mixture priors for variational autoencoders. Dilokthanakul et al. (2016) and Jiang et al. (2017) consider single-task learning; the learned prior parameters are therefore fixed after training and cannot adapt to new tasks. CURL (Rao et al., 2019) learns a network that outputs Gaussian mixture priors over a sequence of tasks for unsupervised continual learning. However, CURL cannot adapt to a new task without training on it, while our framework can generalize to a new task without any training, via amortized inference with a dataset (task) encoder. Also, our model does not learn Gaussian mixture priors but rather obtains them on the fly using the Expectation-Maximization algorithm.

Meta-learning Meta-learning (Thrun & Pratt, 1998) shares the intuition of unsupervised learning in that it aims to improve model performance on an unseen task by leveraging prior knowledge rather than learning from scratch. While the literature on meta-learning is vast, we only discuss existing works relevant to few-shot image classification. Metric-based meta-learning (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017; Oreshkin et al., 2018; Mishra et al., 2018) is one of the most popular approaches, which learns to embed data instances of the same class to be closer in a shared embedding space. One can measure the distance in the embedding space by cosine similarity (Vinyals et al., 2016) or Euclidean distance (Snell et al., 2017). On the other hand, gradient-based meta-learning (Finn et al., 2017; 2018; Li et al., 2017; Lee & Choi, 2018; Ravi & Beatson, 2019; Flennerhag et al., 2020) aims at learning a global initialization of parameters which can rapidly adapt to a novel task with only a few gradient steps. Moreover, some previous works (Hewitt et al., 2018; Edwards & Storkey, 2017; Garnelo et al., 2018) tackle meta-learning by modeling a set-dependent variational posterior with a single global latent variable; in contrast, we model the variational posterior conditioned on each data instance. Finally, while all of these works assume supervised learning scenarios with access to full labels in the meta-training stage, we focus on the unsupervised setting in this paper.

Unsupervised meta-learning One of the main limitations of conventional meta-learning methods is that their application is strictly limited to tasks from a pre-defined task distribution. A few works (Hsu et al., 2019; Khodadadeh et al., 2019) have been proposed to resolve this issue by combining unsupervised learning with meta-learning. The main idea is to construct a meta-training dataset in an unsupervised manner and then leverage existing supervised meta-learning models. CACTUs (Hsu et al., 2019) deploys several unsupervised embedding learning methods (Berthelot et al., 2019; Donahue et al., 2017; Caron et al., 2018; Chen et al., 2016) to episodically cluster the unlabeled dataset, and then trains MAML (Finn et al., 2017) and Prototypical Networks (Snell et al., 2017) on the constructed data. UMTRA (Khodadadeh et al., 2019) assumes that each randomly drawn sample is from a different class than the others, and uses data augmentation (Cubuk et al., 2018) to construct a synthetic task distribution for meta-training. Instead of deploying unsupervised learning only for constructing meta-training task distributions, we propose an unsupervised meta-learning model that meta-learns a set-level variational posterior by matching it with a multi-modal prior distribution representing latent classes.

# 3 UNSUPERVISED META-LEARNING WITH META-GMVAES

In this section, we describe our problem setting for unsupervised meta-learning and present our approach. The graphical illustration of our model for unsupervised meta-training and supervised meta-test is depicted in Figure 2.

# 3.1 PROBLEM STATEMENT

Our goal is to learn unsupervised feature representations which can be transferred to a wide range of downstream few-shot classification tasks. As suggested by Hsu et al. (2019) and Khodadadeh et al.

(a) Unsupervised Meta-training  (b) Supervised Meta-test

Figure 2: The graphical illustration of Meta-GMVAE. The dotted lines denote either variational inference or Expectation-Maximization. (a): We introduce the multi-modal distribution $p_{\psi}(\mathbf{z})$ as the prior distribution, and its optimal task-specific parameter $\psi_i^*$ is obtained by EM in an episodic manner. (b): For meta-test, we obtain the task-specific parameter $\psi^*$ by semi-supervised EM using $\mathbf{x}_s$, $\mathbf{y}_s$, and $\mathbf{x}_q$.

(2019), we only assume an unlabeled dataset $\mathcal{D}_u = \{\mathbf{x}_u\}_{u=1}^U$ in the meta-training stage. We aim to apply the knowledge learned during the unsupervised meta-training stage to novel tasks in the meta-test stage, each of which comes with a modest amount of labeled data (as few as a single example per class). As with most meta-learning methods, we further assume that the labeled data are drawn from the same distribution as the unlabeled data, but with a different set of classes. Specifically, the goal of a $K$-way $S$-shot classification task $\mathcal{T}$ is to correctly predict the labels of query data points $\mathcal{Q} = \{\mathbf{x}_q\}_{q=1}^Q$, using $S$ support data points and labels $\mathcal{S} = \{(\mathbf{x}_s, \mathbf{y}_s)\}_{s=1}^S$ per class, where $S$ is relatively small (i.e., between 1 and 50).
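
To make the episode structure concrete, the following is a minimal sketch of how such a $K$-way $S$-shot task could be assembled from a class-indexed test split; `sample_episode` and its `images_by_class` input are illustrative assumptions, not the paper's data pipeline.

```python
import numpy as np

def sample_episode(images_by_class, K=5, S=1, Q=15, seed=0):
    """Assemble one K-way S-shot episode: S labeled support examples and Q
    unlabeled query examples per class. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    classes = rng.choice(sorted(images_by_class), size=K, replace=False)
    support, query = [], []
    for label, c in enumerate(classes):
        idx = rng.permutation(len(images_by_class[c]))[:S + Q]
        examples = [images_by_class[c][i] for i in idx]
        support += [(x, label) for x in examples[:S]]  # (x_s, y_s) pairs
        query += examples[S:]                          # labels withheld at test time
    return support, query
```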

# 3.2 META-LEVEL GAUSSIAN MIXTURE VAE

Unsupervised meta-training We now describe the meta-learning framework for learning unsupervised latent representations that can be transferred to human-designed few-shot image classification tasks. In particular, we aim to learn multi-modal latent spaces for a Variational Autoencoder (VAE) in an episodic manner. We use a Gaussian mixture for the prior distribution $p_{\psi}(\mathbf{z}) = \sum_{k=1}^{K} p_{\psi}(\mathbf{y} = k) p_{\psi}(\mathbf{z}|\mathbf{y} = k)$, where $\psi$ is the parameter of the prior network. The generative process can then be described as follows (a minimal sampling sketch is given after the list):

- $\mathbf{y} \sim p_{\psi}(\mathbf{y})$, where $\mathbf{y}$ corresponds to the categorical latent variable selecting a single mode.
- $\mathbf{z} \sim p_{\psi}(\mathbf{z}|\mathbf{y})$, where $\mathbf{z}$ corresponds to the Gaussian latent variable responsible for data generation.
- $\mathbf{x} \sim p_{\theta}(\mathbf{x}|\mathbf{z})$, where $\theta$ is the parameter of the generative model.
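
The promised sketch of ancestral sampling from this generative process follows, assuming identity-covariance components; `decode` is a stand-in for the generative network $p_{\theta}(\mathbf{x}|\mathbf{z})$, not the paper's implementation.

```python
import numpy as np

def sample_from_prior(pi, mu, decode, seed=0):
    """y ~ Cat(pi) picks a mode, z ~ N(mu_y, I) draws the Gaussian latent,
    and `decode` maps z to the data space. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    K, D = mu.shape
    y = rng.choice(K, p=pi)             # categorical latent variable (mixture mode)
    z = mu[y] + rng.standard_normal(D)  # Gaussian latent with identity covariance
    return decode(z)                    # parameters of p_theta(x | z)
```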

The above generative process is similar to those of previous works (Dilokthanakul et al., 2016; Jiang et al., 2017) on modeling the VAE prior with Gaussian mixtures. However, they target single-task learning, and the parameters of the prior network are fixed after training (see equation 1c in Dilokthanakul et al. (2016) and equation 5 in Jiang et al. (2017)), which is suboptimal since a meta-learning model should be able to adapt and generalize to a novel task.

To learn the set-dependent multi-modalities, we further assume that there exists a parameter $\psi_{i}$ for each episodic dataset $\mathcal{D}_i = \{\mathbf{x}_j\}_{j = 1}^M$, which is randomly drawn from the unlabeled dataset $\mathcal{D}_u$. We then derive the variational lower bound for the marginal log-likelihood of $\mathcal{D}_i$ as follows:

$$
\begin{aligned}
\log p_{\theta}(\mathcal{D}_i) &= \sum_{j=1}^{M} \log p_{\theta}(\mathbf{x}_j) = \sum_{j=1}^{M} \log \int p_{\theta}(\mathbf{x}_j \mid \mathbf{z}_j)\, p_{\psi_i}(\mathbf{z}_j)\, \frac{q_{\phi}(\mathbf{z}_j \mid \mathbf{x}_j, \mathcal{D}_i)}{q_{\phi}(\mathbf{z}_j \mid \mathbf{x}_j, \mathcal{D}_i)}\, d\mathbf{z}_j \quad (1) \\
&\geq \sum_{j=1}^{M} \mathbb{E}_{\mathbf{z}_j \sim q_{\phi}(\mathbf{z}_j \mid \mathbf{x}_j, \mathcal{D}_i)}\left[\log p_{\theta}(\mathbf{x}_j \mid \mathbf{z}_j) + \log p_{\psi_i}(\mathbf{z}_j) - \log q_{\phi}(\mathbf{z}_j \mid \mathbf{x}_j, \mathcal{D}_i)\right] \quad (2) \\
&\approx \sum_{j=1}^{M} \frac{1}{N} \sum_{n=1}^{N}\left[\log p_{\theta}\left(\mathbf{x}_j \mid \mathbf{z}_j^{(n)}\right) + \log p_{\psi_i}\left(\mathbf{z}_j^{(n)}\right) - \log q_{\phi}\left(\mathbf{z}_j^{(n)} \mid \mathbf{x}_j, \mathcal{D}_i\right)\right] \quad (3) \\
&=: \mathcal{L}(\theta, \phi, \psi_i, \mathcal{D}_i), \qquad \mathbf{z}_j^{(n)} \overset{i.i.d.}{\sim} q_{\phi}(\mathbf{z}_j \mid \mathbf{x}_j, \mathcal{D}_i). \quad (4)
\end{aligned}
$$

Here the lower bound for each datapoint is approximated by Monte Carlo estimation with sample size $N$. Following the convention of the VAE literature, we assume that the variational posterior $q_{\phi}(\mathbf{z}_j|\mathbf{x}_j,\mathcal{D}_i)$ follows an isotropic Gaussian distribution.

Algorithm 1 (Meta-training)
Require: an unlabeled dataset $\mathcal{D}_u$
1: Initialize parameters $\theta$, $\phi$
2: while not done do
3: &emsp; Sample $B$ episode datasets $\{\mathcal{D}_i\}_{i=1}^B$ from $\mathcal{D}_u$
4: &emsp; for all $i \in [1, B]$ do
5: &emsp;&emsp; Draw $N$ MC samples from $q_{\phi}(\mathbf{z}_j \mid \mathbf{x}_j, \mathcal{D}_i)$
6: &emsp;&emsp; Initialize $\pi_k$ as $1/K$ and randomly choose $K$ different points for $\boldsymbol{\mu}_k$
7: &emsp;&emsp; Compute the optimal parameter $\psi_i^*$ using Eq 7
8: &emsp; end for
9: &emsp; Update $\theta$, $\phi$ using $\mathcal{L}(\theta, \phi, \{\mathcal{D}_i\}_{i=1}^B)$ in Eq 9
10: end while

Algorithm 2 (Meta-test for an episode)
Require: a test task $\mathcal{T} = \mathcal{S} \cup \mathcal{Q}$
1: Set $\mathcal{D} = \{\mathbf{x}_s\}_{s=1}^S \cup \{\mathbf{x}_q\}_{q=1}^Q$
2: Draw $N$ MC samples from $q_{\phi}(\mathbf{z}_j \mid \mathbf{x}_j, \mathcal{D})$
3: Initialize $\boldsymbol{\mu}_k$ as the mean of the support latents with label $k$, and $\boldsymbol{\sigma}_k^2 = \boldsymbol{I}$
4: Compute the optimal parameter $\psi^*$ using Eq 10
5: Compute $p(\mathbf{y}_q \mid \mathbf{x}_q, \mathcal{D})$ using Eq 11
6: Infer the label $\hat{\mathbf{y}}_q = \arg\max_k p(\mathbf{y}_q = k \mid \mathbf{x}_q, \mathcal{D})$

Set-dependent variational posterior Our derivation of the evidence lower bound in Eq 4 is similar to that of hierarchical VAE frameworks, such as equation 3 in Edwards & Storkey (2017) and equation 4 in Hewitt et al. (2018), in that we use the i.i.d. assumption that the log-likelihood of a dataset equals the sum of the log-likelihoods of the individual data points. Yet, previous works assume that each input set consists of data instances from a single concept (e.g., a class), and therefore encode the dataset into a single global latent variable (e.g., $q_{\phi}(\mathbf{z}|\mathcal{D})$). This is not appropriate for unsupervised meta-learning, where labels are unavailable. Thus we learn a set-conditioned variational posterior $q_{\phi}(\mathbf{z}_j|\mathbf{x}_j,\mathcal{D}_i)$, which models a latent variable that encodes each data instance $\mathbf{x}_j$ within the given dataset $\mathcal{D}_i$ into the latent space. Specifically, we model the variational posterior $q_{\phi}(\mathbf{z}_j|\mathbf{x}_j,\mathcal{D}_i)$ using the self-attention mechanism (Vaswani et al., 2017) as follows:

$$
\begin{aligned}
H &= \operatorname{TransformerEncoder}(f(\mathcal{D}_i)) \\
\boldsymbol{\mu}_j &= W_{\mu} H_j + \mathbf{b}_{\mu}, \qquad \boldsymbol{\sigma}_j^2 = \exp\left(W_{\sigma^2} H_j + \mathbf{b}_{\sigma^2}\right) \\
q_{\phi}(\mathbf{z}_j \mid \mathbf{x}_j, \mathcal{D}_i) &= \mathcal{N}\left(\mathbf{z}_j;\, \boldsymbol{\mu}_j, \boldsymbol{\sigma}_j^2\right)
\end{aligned} \tag{5}
$$

Here we deploy $\operatorname{TransformerEncoder}(\cdot)$, a neural network based on the multi-head self-attention mechanism proposed by Vaswani et al. (2017), to model the dependency between data instances, and $f$ is a convolutional neural network (or an identity function for Mini-ImageNet) which takes each data instance in $\mathcal{D}_i$ as input. Moreover, we use the reparameterization trick (Kingma & Welling, 2014) to train the model with backpropagation, since the stochastic sampling process $\mathbf{z}_j^{(n)} \overset{i.i.d.}{\sim} q_{\phi}(\mathbf{z}_j|\mathbf{x}_j,\mathcal{D}_i)$ is non-differentiable.
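
A PyTorch-style sketch of this set-conditioned posterior is given below; the layer sizes, number of heads, and the `SetPosterior` name are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SetPosterior(nn.Module):
    """Sketch of q_phi(z_j | x_j, D_i): a Transformer encoder attends over the
    whole episode, then affine heads emit a per-instance mean and log-variance.
    Sizes are illustrative, not the paper's exact configuration."""
    def __init__(self, feat_dim=256, z_dim=64, n_layers=2, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.mu = nn.Linear(feat_dim, z_dim)
        self.logvar = nn.Linear(feat_dim, z_dim)

    def forward(self, feats):  # feats: (M, feat_dim), the episode features f(D_i)
        H = self.encoder(feats.unsqueeze(0)).squeeze(0)  # self-attention over the set
        mu, logvar = self.mu(H), self.logvar(H)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return z, mu, logvar
```

Because the Transformer attends over the whole episode, each instance's mean and variance depend on every other instance in $\mathcal{D}_i$, which is what makes the posterior set-dependent.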

Expectation-Maximization As discussed above, we assume that the parameter $\psi_{i}$ of the Gaussian mixture prior is task-specific and characterizes the given dataset $\mathcal{D}_i$. To obtain the task-specific parameter that optimally explains the given dataset, we propose to locally maximize the lower bound in Eq 4 with respect to the prior parameter $\psi_{i}$. We can obtain the optimal parameter $\psi_{i}^{*}$ by solving the following optimization problem:

$$
\psi_i^* = \underset{\psi_i}{\arg\max}\; \mathcal{L}(\theta, \phi, \psi_i, \mathcal{D}_i) = \underset{\psi_i}{\arg\max} \sum_{j,n=1}^{M,N} \log p_{\psi_i}\left(\mathbf{z}_j^{(n)}\right), \qquad \mathbf{z}_j^{(n)} \overset{i.i.d.}{\sim} q_{\phi}(\mathbf{z}_j \mid \mathbf{x}_j, \mathcal{D}_i), \tag{6}
$$

where we only keep the term involving the task-specific parameter $\psi_{i}$ and drop the normalization term $\frac{1}{N}$, since neither changes the solution of the optimization problem. The formula above implies that the optimal parameter maximizes the log-likelihood of observations drawn from the variational posterior distribution. However, there is no analytic solution for the maximum likelihood estimate (MLE) of a GMM.

The most prevalent approach for estimating the parameters of a mixture of Gaussians is the Expectation-Maximization (EM) algorithm. We therefore optimize the task-specific parameters of the GMM prior distribution with EM as follows:

$$
\begin{aligned}
\text{(E-step)} \quad & Q_{j,n}(k) := p\left(\mathbf{y}_j^{(n)} = k \mid \mathbf{z}_j^{(n)}\right) = \frac{\pi_k\, \mathcal{N}\left(\mathbf{z}_j^{(n)};\, \boldsymbol{\mu}_k, \boldsymbol{I}\right)}{\sum_{k'} \pi_{k'}\, \mathcal{N}\left(\mathbf{z}_j^{(n)};\, \boldsymbol{\mu}_{k'}, \boldsymbol{I}\right)} \\
\text{(M-step)} \quad & \boldsymbol{\mu}_k := \frac{\sum_{j,n=1}^{M,N} Q_{j,n}(k)\, \mathbf{z}_j^{(n)}}{\sum_{j,n=1}^{M,N} Q_{j,n}(k)}, \qquad \pi_k := \frac{\sum_{j,n=1}^{M,N} Q_{j,n}(k)}{\sum_{k'=1}^{K} \sum_{j,n=1}^{M,N} Q_{j,n}(k')} \\
& \psi_i := \left\{\left(\boldsymbol{\mu}_k, \boldsymbol{I}, \pi_k\right)\right\}_{k=1}^{K},
\end{aligned} \tag{7}
$$

where $\pi_k$, $\boldsymbol{\mu}_k$, and $\mathcal{N}(\cdot)$ denote the mixing probability of the $k$-th component, its mean parameter, and the normal density, respectively. We fix the covariance matrix of each Gaussian to the identity matrix $\boldsymbol{I}$, following the assumption of the original VAE on the prior distribution. We initialize each $\pi_k$ as $\frac{1}{K}$ and $\{\boldsymbol{\mu}_k\}_{k=1}^K$ as $K$ randomly drawn distinct points. We can obtain the MLE solution for the GMM parameters by iterating the E-step and M-step until the log-likelihood converges. We found that using a fixed number of EM iterations does not degrade performance, and treat it as a hyperparameter of our framework.
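
The E- and M-steps above reduce to a few lines of array code. The sketch below assumes the $M \cdot N$ posterior samples are stacked into a single `(MN, D)` matrix `z`; it is illustrative, not the authors' implementation.

```python
import numpy as np

def em_prior(z, K, n_iter=10, seed=0):
    """EM for the GMM prior of Eq. 7: identity covariance, learned means and
    mixing weights. z: all M*N posterior samples, shape (MN, D). Sketch only."""
    rng = np.random.default_rng(seed)
    mu = z[rng.choice(len(z), K, replace=False)]   # K random points as initial means
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities under N(z; mu_k, I), in log space for stability
        logp = -0.5 * ((z[:, None, :] - mu[None]) ** 2).sum(-1) + np.log(pi)
        Q = np.exp(logp - logp.max(1, keepdims=True))
        Q /= Q.sum(1, keepdims=True)               # (MN, K)
        # M-step: responsibility-weighted means and mixing probabilities
        Nk = Q.sum(0)
        mu = (Q.T @ z) / Nk[:, None]
        pi = Nk / Nk.sum()
    return mu, pi
```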

Training objective We want to maximize the variational lower bound of the marginal log-likelihood over all episode datasets $\mathcal{D}_i$ that can be sampled from $\mathcal{D}_u$. We use stochastic gradient ascent with respect to the variational parameter $\phi$ and the generative parameter $\theta$ to maximize the following objective:

$$
\begin{aligned}
\mathcal{L}\left(\theta, \phi, \{\mathcal{D}_i\}_{i=1}^{B}\right) &:= \frac{1}{B} \sum_{i=1}^{B} \left[\max_{\psi_i} \mathcal{L}(\theta, \phi, \psi_i, \mathcal{D}_i)\right] \quad (8) \\
&= \frac{1}{B} \sum_{i=1}^{B} \sum_{j=1}^{M} \frac{1}{N} \sum_{n=1}^{N} \left[\log p_{\theta}\left(\mathbf{x}_j \mid \mathbf{z}_j^{(n)}\right) + \log p_{\psi_i^*}\left(\mathbf{z}_j^{(n)}\right) - \log q_{\phi}\left(\mathbf{z}_j^{(n)} \mid \mathbf{x}_j, \mathcal{D}_i\right)\right]. \quad (9)
\end{aligned}
$$

Here we use a mini-batch of $B$ episode datasets, where each dataset consists of $M$ datapoints. The task-specific parameter $\psi_i^*$ for each episode dataset $\mathcal{D}_i$ is obtained by the EM algorithm in Eq 7.
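
For one episode, the Monte Carlo objective in Eq 9 can be sketched as follows, assuming the decoder log-likelihoods have already been computed; the shapes and helper names are illustrative assumptions.

```python
import numpy as np

def gaussian_logpdf(z, mu, var):
    """Log density of a diagonal Gaussian, summed over dimensions."""
    return -0.5 * (np.log(2 * np.pi * var) + (z - mu) ** 2 / var).sum(-1)

def episode_elbo(log_px_given_z, z, q_mu, q_var, prior_mu, prior_pi):
    """Monte Carlo estimate of Eq. 9 for one episode. z: (M, N, D) posterior
    samples; log_px_given_z: (M, N) decoder log-likelihoods. Sketch only."""
    # log p_{psi*}(z): GMM with identity covariance and EM-fitted parameters
    comp = gaussian_logpdf(z[:, :, None, :], prior_mu[None, None], 1.0)  # (M, N, K)
    log_prior = np.log((prior_pi * np.exp(comp)).sum(-1) + 1e-12)        # (M, N)
    # log q_phi(z | x, D): set-conditioned diagonal Gaussian posterior
    log_q = gaussian_logpdf(z, q_mu[:, None, :], q_var[:, None, :])      # (M, N)
    return (log_px_given_z + log_prior - log_q).mean(1).sum()  # sum_j (1/N) sum_n [...]
```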

Supervised meta-test By introducing the multi-modal prior distribution into a generative learning framework, our model learns pseudo-class concepts by clustering latent features with the EM algorithm. However, there is no guarantee that each modality obtained by EM corresponds to the labels we are interested in at the meta-test stage. To associate each modality with a label in downstream few-shot image classification tasks, we instead deploy a semi-supervised EM algorithm. Given a task $\mathcal{T}$ consisting of a support set $\mathcal{S} = \{(\mathbf{x}_s,\mathbf{y}_s)\}_{s = 1}^S$ and a query set $\mathcal{Q} = \{\mathbf{x}_q\}_{q = 1}^Q$, we use both as a single episode dataset $\mathcal{D} = \{\mathbf{x}_s\}_{s = 1}^S\cup \{\mathbf{x}_q\}_{q = 1}^Q$ and draw latent variables from the variational posterior $q_{\phi}(\mathbf{z}_j|\mathbf{x}_j,\mathcal{D})$. Note that we omit the index $i$ since we consider a single task for now. We then perform the semi-supervised EM algorithm as follows:

$$
\begin{aligned}
\text{(E-step)} \quad & Q_{q,n}(k) := p\left(\mathbf{y}_q^{(n)} = k \mid \mathbf{z}_q^{(n)}\right) = \frac{\mathcal{N}\left(\mathbf{z}_q^{(n)};\, \boldsymbol{\mu}_k, \boldsymbol{\sigma}_k^2\right)}{\sum_{k'} \mathcal{N}\left(\mathbf{z}_q^{(n)};\, \boldsymbol{\mu}_{k'}, \boldsymbol{\sigma}_{k'}^2\right)} \\
\text{(M-step)} \quad & \boldsymbol{\mu}_k := \frac{\sum_{s,n=1}^{S,N} \mathbf{1}_{\mathbf{y}_s = k}\, \mathbf{z}_s^{(n)} + \sum_{q,n=1}^{Q,N} Q_{q,n}(k)\, \mathbf{z}_q^{(n)}}{\sum_{s,n=1}^{S,N} \mathbf{1}_{\mathbf{y}_s = k} + \sum_{q,n=1}^{Q,N} Q_{q,n}(k)}, \\
& \boldsymbol{\sigma}_k^2 := \frac{\sum_{s,n=1}^{S,N} \mathbf{1}_{\mathbf{y}_s = k}\left(\mathbf{z}_s^{(n)} - \boldsymbol{\mu}_k\right)^2 + \sum_{q,n=1}^{Q,N} Q_{q,n}(k)\left(\mathbf{z}_q^{(n)} - \boldsymbol{\mu}_k\right)^2}{\sum_{s,n=1}^{S,N} \mathbf{1}_{\mathbf{y}_s = k} + \sum_{q,n=1}^{Q,N} Q_{q,n}(k)} \\
& \psi := \left\{\left(\boldsymbol{\mu}_k, \boldsymbol{\sigma}_k^2, \tfrac{1}{K}\right)\right\}_{k=1}^{K},
\end{aligned} \tag{10}
$$

where $\mathbf{1}$ denotes the indicator function. We fix the mixing probability to $\frac{1}{K}$ since the labels in each task $\mathcal{T}$ are uniformly distributed. Moreover, we utilize a diagonal covariance $\boldsymbol{\sigma}_k^2$ to obtain more accurate statistics for inference. We initialize $\boldsymbol{\mu}_{k}$ as the per-class average of the support latent representations and $\boldsymbol{\sigma}_k^2$ as the identity matrix $\boldsymbol{I}$. As in the meta-training stage, we obtain the MLE solution for the GMM parameters by performing the E-step and M-step for a fixed number of iterations. Finally, we compute the conditional probability $p(\mathbf{y}_q|\mathbf{x}_q,\mathcal{D})$ using the obtained parameters $\psi^{*}$ as follows:
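
A sketch of this semi-supervised EM, with support latents contributing hard (indicator) statistics and query latents contributing soft responsibilities; the array shapes and the `semi_supervised_em` name are illustrative assumptions.

```python
import numpy as np

def semi_supervised_em(z_s, y_s, z_q, K, n_iter=10):
    """Semi-supervised EM of Eq. 10 (illustrative sketch): z_s (S*N, D) with
    integer labels y_s keeps fixed hard assignments; z_q (Q*N, D) gets soft
    responsibilities. The mixing probability stays fixed at 1/K."""
    onehot = np.eye(K)[y_s]                          # indicator 1_{y_s = k}, (S*N, K)
    mu = (onehot.T @ z_s) / onehot.sum(0)[:, None]   # init: per-class support means
    var = np.ones_like(mu)                           # init: identity (diagonal) covariance
    for _ in range(n_iter):
        # E-step: query responsibilities under diagonal Gaussians, equal mixing weights
        logp = -0.5 * (np.log(2 * np.pi * var[None]) +
                       (z_q[:, None, :] - mu[None]) ** 2 / var[None]).sum(-1)
        Q = np.exp(logp - logp.max(1, keepdims=True))
        Q /= Q.sum(1, keepdims=True)                 # (Q*N, K)
        # M-step: pool hard support statistics with soft query statistics
        Nk = onehot.sum(0) + Q.sum(0)
        mu = (onehot.T @ z_s + Q.T @ z_q) / Nk[:, None]
        var = (np.einsum('sk,skd->kd', onehot, (z_s[:, None, :] - mu[None]) ** 2) +
               np.einsum('qk,qkd->kd', Q, (z_q[:, None, :] - mu[None]) ** 2)) / Nk[:, None]
    return mu, var
```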

$$
p\left(\mathbf{y}_q \mid \mathbf{x}_q, \mathcal{D}\right) = \mathbb{E}_{q_{\phi}(\mathbf{z}_q \mid \mathbf{x}_q, \mathcal{D})}\left[p_{\psi^*}\left(\mathbf{y}_q \mid \mathbf{z}_q\right)\right] \approx \frac{1}{N} \sum_{n=1}^{N} p_{\psi^*}\left(\mathbf{y}_q \mid \mathbf{z}_q^{(n)}\right), \qquad \mathbf{z}_q^{(n)} \overset{i.i.d.}{\sim} q_{\phi}\left(\mathbf{z}_q \mid \mathbf{x}_q, \mathcal{D}\right). \tag{11}
$$

Here we compute $p_{\psi^*}(\mathbf{y}_q|\mathbf{z}_q^{(n)})$ with Bayes' rule, reusing the $N$ Monte Carlo samples drawn for Eq 10; the prediction for a query is $\hat{\mathbf{y}}_q = \underset{k}{\arg \max}\, p(\mathbf{y}_q = k|\mathbf{x}_q,\mathcal{D})$. We present the pseudo-code for training and inference of Meta-GMVAE in Algorithms 1 and 2.
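
Eq 11 then amounts to averaging the component posteriors over the $N$ Monte Carlo samples of each query and taking the argmax; a minimal sketch under the same illustrative shape conventions:

```python
import numpy as np

def predict_query(z_q, mu, var):
    """Eq. 11 (sketch): average p_{psi*}(y|z) over the N MC samples of each
    query, then argmax over components. z_q: (Q, N, D); mu, var: (K, D)."""
    logp = -0.5 * (np.log(2 * np.pi * var[None, None]) +
                   (z_q[:, :, None, :] - mu[None, None]) ** 2 / var[None, None]).sum(-1)
    post = np.exp(logp - logp.max(-1, keepdims=True))
    post /= post.sum(-1, keepdims=True)   # Bayes' rule with uniform mixing weights
    return post.mean(1).argmax(-1)        # average over MC samples, argmax over k
```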

Visual feature reconstruction While our method is a generative model that can generate samples from the output distribution, the ability to generate samples may not be necessary for discriminative downstream tasks (Chen et al., 2020). Moreover, we found that VAEs almost completely fail to learn on the Mini-ImageNet dataset under the architectural constraints conventional in the meta-learning literature. Thus, for the Mini-ImageNet dataset we propose a high-level feature reconstruction objective instead. We experimentally find that the recently proposed contrastive learning framework SimCLR (Chen et al., 2020) is the most effective for our setting. Specifically, SimCLR learns high-level representations by performing a contrastive prediction task on pairs of augmented examples derived from a minibatch. We train SimCLR on the unsupervised dataset $\mathcal{D}_u = \{\mathbf{x}_u\}_{u=1}^U$ and use the high-level features extracted by SimCLR as input to our framework.

# 4 EXPERIMENT

In this section, we validate the effectiveness of our Meta-GMVAE on several downstream few-shot classification tasks. The source code is available at https://github.com/db-Lee/Meta-GMVAE.

# 4.1 EXPERIMENTAL SETUPS

Baselines and ours We describe the two supervised meta-learning approaches that we consider as "oracles", the unsupervised meta-learning baselines, and the proposed Meta-GMVAE. 1) MAML (oracle): Model-Agnostic Meta-Learning by Finn et al. (2017); we compare against its performance reported in Hsu et al. (2019). 2) ProtoNets (oracle): the Euclidean distance-based meta-learning approach by Snell et al. (2017); we also compare against its performance reported in Hsu et al. (2019). 3) CACTUs: Clustering to Automatically Construct Tasks for Unsupervised meta-learning by Hsu et al. (2019). It automatically constructs tasks by clustering the unsupervised dataset in an embedding space learned by ACAI (Berthelot et al., 2019), BiGAN (Donahue et al., 2017), or DeepCluster (Caron et al., 2018), and then trains either MAML or ProtoNets using the cluster indices as pseudo-labels. 4) UMTRA: Unsupervised Meta-learning with Tasks constructed by Random sampling and Augmentation by Khodadadeh et al. (2019). To construct a K-way 1-shot task, it randomly samples K datapoints from the unsupervised dataset and augments each datapoint; MAML is then trained on the constructed tasks. 5) Meta-GMVAE: our proposed meta-level Gaussian mixture VAE. It learns latent representations by matching a set-level amortized variational posterior with a task-specific multi-modal prior optimized by the EM algorithm.

Datasets We validate all models on two benchmark datasets for few-shot classification. 1) Omniglot: a collection of $28 \times 28$ gray-scale hand-written characters spanning 1623 different character classes, each of which contains 20 instances. Following the experimental setup of Hsu et al. (2019), we use 1200 classes for unsupervised meta-training, 100 classes for meta-validation, and the remaining 323 classes for meta-test. We further augment each class by rotating the images by 90, 180, and 270 degrees, such that the total number of classes is $1623 \times 4$, following convention. 2) Mini-ImageNet: a subset of ILSVRC-2012 (Deng et al., 2009) introduced by Ravi & Larochelle (2017), consisting of 100 classes, each with 600 images of size $84 \times 84$. We use 64 classes for unsupervised meta-training, 16 classes for meta-validation, and the remaining 20 classes for meta-test, following the standard protocol.

Implementation details We now give the implementation details of Meta-GMVAE on the two benchmark datasets. 1) Variational posterior network $q_{\phi}(\mathbf{z}|\mathbf{x},\mathcal{D}_i)$: on the Omniglot dataset we use the standard Conv4 architecture for a fair comparison against the relevant baselines. On top of the Conv4 architecture, we stack two TransformerEncoder layers and an affine transformation layer to predict the mean and log-variance of the Gaussian distribution. For the Mini-ImageNet dataset, we only use two TransformerEncoder layers and an affine transformation layer, since the input used for Mini-ImageNet is already a high-level visual representation extracted from the Conv5 architecture trained with SimCLR. For both datasets, we set the dimensionality of the latent variable to 64. 2) Generative network $p_{\theta}(\mathbf{x}|\mathbf{z})$: for the Omniglot dataset, the architecture of the generative network is symmetric to the Conv4 architecture of the variational posterior network, and the last layer outputs the parameters of a Bernoulli output distribution. For the Mini-ImageNet dataset, we use a 3-layer MLP with ReLU activations to predict the mean of a Gaussian output distribution. 3) Other details: we use the Adam optimizer (Kingma & Ba, 2015) with constant learning rates of 0.001 and 0.0001 for the Omniglot and Mini-ImageNet experiments, respectively. We set the number of EM iterations to 10 for all experiments. For more details, please see the Appendix.

<table><tr><td></td><td></td><td colspan="4">Omniglot (way, shot)</td><td colspan="4">Mini-ImageNet (way, shot)</td></tr><tr><td>Method</td><td>Clustering</td><td>(5,1)</td><td>(5,5)</td><td>(20,1)</td><td>(20,5)</td><td>(5,1)</td><td>(5,5)</td><td>(5,20)</td><td>(5,50)</td></tr><tr><td>Training from Scratch</td><td>N/A</td><td>52.50</td><td>74.78</td><td>24.91</td><td>47.62</td><td>27.59</td><td>38.48</td><td>51.53</td><td>59.63</td></tr><tr><td>CACTUs-MAML</td><td>BiGAN</td><td>58.18</td><td>78.66</td><td>35.56</td><td>58.62</td><td>36.24</td><td>51.28</td><td>61.33</td><td>66.91</td></tr><tr><td>CACTUs-ProtoNets</td><td>BiGAN</td><td>54.74</td><td>71.69</td><td>33.40</td><td>50.62</td><td>36.62</td><td>50.16</td><td>59.56</td><td>63.27</td></tr><tr><td>CACTUs-MAML</td><td>ACAI/DC</td><td>68.84</td><td>87.78</td><td>48.09</td><td>73.36</td><td>39.90</td><td>53.97</td><td>63.84</td><td>69.64</td></tr><tr><td>CACTUs-ProtoNets</td><td>ACAI/DC</td><td>68.12</td><td>83.58</td><td>47.75</td><td>66.27</td><td>39.18</td><td>53.36</td><td>61.54</td><td>63.55</td></tr><tr><td>UMTRA</td><td>N/A</td><td>83.80</td><td>95.43</td><td>74.25</td><td>92.12</td><td>39.93</td><td>50.73</td><td>61.11</td><td>67.15</td></tr><tr><td>Meta-GMVAE (ours)</td><td>N/A</td><td>94.92</td><td>97.09</td><td>82.21</td><td>90.61</td><td>42.82</td><td>55.73</td><td>63.14</td><td>68.26</td></tr><tr><td>MAML (oracle)</td><td>N/A</td><td>94.46</td><td>98.83</td><td>84.60</td><td>96.29</td><td>46.81</td><td>62.13</td><td>71.03</td><td>75.54</td></tr><tr><td>ProtoNets (oracle)</td><td>N/A</td><td>98.35</td><td>99.58</td><td>95.31</td><td>98.81</td><td>46.56</td><td>62.29</td><td>70.05</td><td>72.04</td></tr></table>

Table 1: Few-shot classification results (way, shot) on the Omniglot and Mini-ImageNet datasets. DC denotes DeepCluster. We report the average accuracy evaluated over 1000 episodes. All values except ours are based on the performance reported in Hsu et al. (2019) and Khodadadeh et al. (2019).

Figure 3: Samples obtained and generated for each mode at the unsupervised meta-training and supervised meta-test steps of Meta-GMVAE. Samples in each row belong to the same modality obtained by EM.
# 4.2 EXPERIMENTAL RESULTS

Few-shot classification Table 1 shows the few-shot classification results obtained by the supervised meta-learning baselines (oracles), the two unsupervised meta-learning baselines, and our Meta-GMVAE. On the Omniglot dataset, Meta-GMVAE outperforms all baselines that only use unsupervised learning for constructing meta-training tasks, except for UMTRA on 20-way 5-shot classification. Meta-GMVAE also outperforms the baselines on the Mini-ImageNet 1-shot and 5-shot settings, which are the most widely used, while it matches the performance of the baselines in the 20-shot and 50-shot settings. This shows that meta-learning the posterior network to capture the multi-modal distribution of any given task, as Meta-GMVAE does, is indeed more effective than unsupervised meta-learning baselines which simply train supervised meta-learning models with pseudo-labels obtained from unlabeled data. Moreover, Meta-GMVAE obtains better performance than supervised MAML on Omniglot 5-way 1-shot classification while utilizing as little as $0.1\%$ of the labeled data. This matches the observation in Chen et al. (2020) that well-calibrated unsupervised learning approaches with a modest amount of labels can obtain performance comparable to or even better than supervised approaches.

Visualization To better understand how Meta-GMVAE learns and realizes class concepts in few-shot classification tasks, we visualize the actual samples in an episode as classified by Meta-GMVAE, together with samples generated by the generative network $p_{\theta}(\mathbf{x}|\mathbf{z})$, during unsupervised meta-training and supervised meta-test. Actual and generated samples that share a modality are shown in the same row. In Figure 3-a and b, we observe that Meta-GMVAE captures similar visual structures within each modality during meta-training, but the modalities are not yet class concepts. However, as shown in Figure 3-c and d, Meta-GMVAE easily realizes each modality as a class concept at meta-test time.

Ablation study Furthermore, we compare the performance of variants of our model obtained by eliminating each of its most important components. The variants are: 1) LR (SimCLR): logistic regression on the support set, on top of features pretrained with SimCLR. 2) Vanilla VAE: a vanilla VAE trained on $\mathcal{D}_u$, predicting labels with semi-supervised EM and a fixed identity covariance $\boldsymbol{I}$. 3) Vanilla VAE (SimCLR): same as 2), except that it is trained on features pretrained with SimCLR. 4) Ep: Meta-GMM with episodic training, where the task-specific parameter $\psi_i^*$ is obtained by EM. The remaining two variants are described below Table 2.

<table><tr><td>Method</td><td>Ep</td><td>Set</td><td>σ²</td><td>O</td><td>M</td></tr><tr><td>Training From Scratch</td><td></td><td></td><td></td><td>24.91</td><td>27.59</td></tr><tr><td>LR (SimCLR)</td><td></td><td></td><td></td><td>N/A</td><td>40.11</td></tr><tr><td>Vanilla VAE</td><td></td><td></td><td></td><td>69.68</td><td>N/A</td></tr><tr><td>Vanilla VAE (SimCLR)</td><td></td><td></td><td></td><td>N/A</td><td>38.40</td></tr><tr><td rowspan="4">Meta-GMVAE</td><td>✓</td><td></td><td></td><td>78.64</td><td>40.51</td></tr><tr><td>✓</td><td>✓</td><td></td><td>81.65</td><td>41.13</td></tr><tr><td>✓</td><td></td><td>✓</td><td>80.94</td><td>40.92</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>82.21</td><td>42.82</td></tr><tr><td>MAML (oracle)</td><td>✓</td><td></td><td></td><td>84.60</td><td>46.81</td></tr><tr><td>ProtoNets (oracle)</td><td>✓</td><td></td><td></td><td>95.31</td><td>46.56</td></tr></table>

<table><tr><td></td><td colspan="4">Training (way, shot)</td></tr><tr><td>Test way</td><td>(5, 1)</td><td>(5, 5)</td><td>(20, 1)</td><td>(20, 5)</td></tr><tr><td>2-way</td><td>98.26</td><td>98.00</td><td>98.36</td><td>98.23</td></tr><tr><td>5-way</td><td>94.92</td><td>94.57</td><td>93.93</td><td>94.01</td></tr><tr><td>10-way</td><td>89.87</td><td>89.99</td><td>89.10</td><td>89.30</td></tr><tr><td>15-way</td><td>85.11</td><td>85.12</td><td>85.36</td><td>85.33</td></tr><tr><td>20-way</td><td>81.38</td><td>81.11</td><td>82.21</td><td>81.98</td></tr><tr><td>30-way</td><td>77.80</td><td>77.42</td><td>78.40</td><td>77.24</td></tr><tr><td>40-way</td><td>73.76</td><td>73.15</td><td>74.03</td><td>73.56</td></tr><tr><td>50-way</td><td>70.92</td><td>70.85</td><td>69.86</td><td>70.02</td></tr></table>

Table 2: Left: results of the ablation study on Meta-GMVAE (O: 20-way 1-shot classification on Omniglot; M: 5-way 1-shot classification on Mini-ImageNet). Right: results of the cross-way 1-shot experiments on Omniglot, where the (way, shot) values in parentheses indicate the setting a model was trained on.

5) Set: Meta-GMM with the set-level variational posterior (i.e., $q_{\phi}(\mathbf{z}|\mathbf{x},\mathcal{D}_i)$) versus an instance-wise one (i.e., $q_{\phi}(\mathbf{z}|\mathbf{x})$). 6) $\sigma^2$: Meta-GMM performing the semi-supervised EM algorithm with a diagonal covariance matrix versus fixing it to the identity matrix $\boldsymbol{I}$. Table 2-Left shows that, as expected, all the components we consider are critical for performance on the few-shot classification tasks. The largest performance gain comes from Ep, which supports our proposal of meta-learning the set-level variational posterior by matching it with the multi-modal prior, where the task-specific parameter is obtained with EM.

Cross-way classification We then evaluate Meta-GMVAE while varying the number of ways (between 2 and 50) and fixing the number of shots to 1. In particular, we set the number of components $K$ to the test way for the meta-test and perform the semi-supervised EM algorithm in Eq 10. Table 2-Right shows that a mismatch between the number of ways used for training and test does not significantly affect performance, which demonstrates the robustness of Meta-GMVAE to a varying number of ways.

Figure 4: Visualization of the latent space for the cross-way generalization experiment: (a) 20-way meta-training; (b) 5-way meta-test.

We also visualize the latent space for the cross-way experiment using t-SNE (Rauber et al., 2016) in Figure 4, which shows that Meta-GMVAE trained with 20 ways can cluster a 5-way meta-test task.

# 5 CONCLUSION

We proposed a novel unsupervised meta-learning model, namely Meta-GMVAE, which can generate a task-dependent posterior for a given unseen task using multi-modal Gaussian mixture priors. Given a random episode consisting of samples from diverse classes, we optimize the task-specific parameters of the Gaussian mixture prior with the Expectation-Maximization algorithm, such that each mode captures an intrinsic grouping in the given data. We meta-train the variational posterior network against such data-driven priors over a large number of episodes. Then, at the meta-test step, we associate each modality with a label by deploying a semi-supervised EM algorithm on both the support and query sets. We validate our method on two few-shot image classification benchmark datasets and show that Meta-GMVAE largely outperforms the relevant unsupervised meta-learning baselines, even achieving better performance than supervised MAML on the Omniglot 5-way 1-shot experiments.

Acknowledgements This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST)), Samsung Research Funding Center of Samsung Electronics (No. SRFC-IT1502-51), Samsung Electronics (IO201214-08145-01), and the Engineering Research Center Program through the National Research Foundation of Korea (NRF) funded by the Korean Government MSIT (NRF-2018R1A5A1059921). We sincerely thank the anonymous reviewers for their constructive comments which helped us significantly improve our paper during the rebuttal period. We also appreciate D. Khue Lé-Huu for the valuable discussion on Rao et al. (2019).

# REFERENCES

Yoshua Bengio, Aaron C. Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell., 35(8):1798-1828, 2013.

David Berthelot, Colin Raffel, Aurko Roy, and Ian J. Goodfellow. Understanding and improving interpolation in autoencoders via an adversarial regularizer. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019.

Piotr Bojanowski and Armand Joulin. Unsupervised learning by predicting noise. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pp. 517-526. PMLR, 2017.

Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu, and Yair Weiss (eds.), Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part XIV, volume 11218 of Lecture Notes in Computer Science, pp. 139-156. Springer, 2018.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. CoRR, abs/2002.05709, 2020.

Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 2172-2180, 2016.

Brian Cheung, Jesse A. Livezey, Arjun K. Bansal, and Bruno A. Olshausen. Discovering hidden factors of variation in deep networks. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Workshop Track Proceedings, 2015.

Adam Coates and Andrew Y. Ng. Learning feature representations with k-means. In Grégoire Montavon, Geneviève B. Orr, and Klaus-Robert Müller (eds.), Neural Networks: Tricks of the Trade - Second Edition, volume 7700 of Lecture Notes in Computer Science, pp. 561-580. Springer, 2012.

Ekin Dogus Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le. Autoaugment: Learning augmentation policies from data. CoRR, abs/1805.09501, 2018.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pp. 248-255. IEEE Computer Society, 2009.

Emily L. Denton and Vighnesh Birodkar. Unsupervised learning of disentangled representations from video. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 4414-4423, 2017.

Nat Dilokthanakul, Pedro A. M. Mediano, Marta Garnelo, Matthew C. H. Lee, Hugh Salimbeni, Kai Arulkumaran, and Murray Shanahan. Deep unsupervised clustering with gaussian mixture variational autoencoders. CoRR, abs/1611.02648, 2016.

Zheng Ding, Yifan Xu, Weijian Xu, Gaurav Parmar, Yang Yang, Max Welling, and Zhuowen Tu. Guided variational autoencoder for disentanglement learning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pp. 7917-7926. IEEE, 2020.

Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017.

Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron C. Courville. Adversarially learned inference. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017.

Harrison Edwards and Amos J. Storkey. Towards a neural statistician. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017.

Dumitru Erhan, Yoshua Bengio, Aaron C. Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. Why does unsupervised pre-training help deep learning? J. Mach. Learn. Res., 11:625-660, 2010.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pp. 1126-1135. PMLR, 2017.

Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic model-agnostic meta-learning. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada, pp. 9537-9548, 2018.

Sebastian Flennerhag, Andrei A. Rusu, Razvan Pascanu, Francesco Visin, Hujun Yin, and Raia Hadsell. Meta-learning with warped gradient descent. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020.

Marta Garnelo, Jonathan Schwarz, Dan Rosenbaum, Fabio Viola, Danilo J. Rezende, S. M. Ali Eslami, and Yee Whye Teh. Neural processes. CoRR, abs/1807.01622, 2018.

Luke B. Hewitt, Maxwell I. Nye, Andreea Gane, Tommi S. Jaakkola, and Joshua B. Tenenbaum. The variational homoencoder: Learning to learn high capacity generative models from few examples. In Amir Globerson and Ricardo Silva (eds.), Proceedings of the Thirty-Fourth Conference on Uncertainty in Artificial Intelligence, UAI 2018, Monterey, California, USA, August 6-10, 2018, pp. 988-997. AUAI Press, 2018.

Irina Higgins, Loïc Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017.

Kyle Hsu, Sergey Levine, and Chelsea Finn. Unsupervised learning via meta-learning. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019.

Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, and Hanning Zhou. Variational deep embedding: An unsupervised and generative approach to clustering. In Carles Sierra (ed.), Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pp. 1965-1972, 2017.

Siavash Khodadadeh, Ladislau Bölöni, and Mubarak Shah. Unsupervised meta-learning for few-shot image classification. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pp. 10132-10142, 2019.

Hyunjik Kim and Andriy Mnih. Disentangling by factorising. In Jennifer G. Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pp. 2654-2663. PMLR, 2018.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.

Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014.

Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for one-shot image recognition. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, 2015.

Philipp Krähenbühl, Carl Doersch, Jeff Donahue, and Trevor Darrell. Data-dependent initializations of convolutional neural networks. In Yoshua Bengio and Yann LeCun (eds.), 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.

Brenden M. Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua B. Tenenbaum. One shot learning of simple visual concepts. In Laura A. Carlson, Christoph Hölscher, and Thomas F. Shipley (eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society, CogSci 2011, Boston, Massachusetts, USA, July 20-23, 2011. cognitivesciencesociety.org, 2011.

Quoc V. Le. Building high-level features using large scale unsupervised learning. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2013, Vancouver, BC, Canada, May 26-31, 2013, pp. 8595-8598. IEEE, 2013.

Yoonho Lee and Seungjin Choi. Gradient-based meta-learning with learned layerwise metric and subspace. In Jennifer G. Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pp. 2933-2942. PMLR, 2018.

Zhenguo Li, Fengwei Zhou, Fei Chen, and Hang Li. Meta-sgd: Learning to learn quickly for few shot learning. CoRR, abs/1707.09835, 2017.

Michaël Mathieu, Junbo Jake Zhao, Pablo Sprechmann, Aditya Ramesh, and Yann LeCun. Disentangling factors of variation in deep representation using adversarial training. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 5041-5049, 2016.

Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018.

Boris N. Oreshkin, Pau Rodríguez López, and Alexandre Lacoste. TADAM: task dependent adaptive metric for improved few-shot learning. In Samy Bengio, Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolò Cesa-Bianchi, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada, pp. 719-729, 2018.

Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In Yoshua Bengio and Yann LeCun (eds.), 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.

Prajit Ramachandran, Peter J. Liu, and Quoc V. Le. Unsupervised pretraining for sequence to sequence learning. In Martha Palmer, Rebecca Hwa, and Sebastian Riedel (eds.), Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pp. 383-391. Association for Computational Linguistics, 2017.

Dushyant Rao, Francesco Visin, Andrei A. Rusu, Razvan Pascanu, Yee Whye Teh, and Raia Hadsell. Continual unsupervised representation learning. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pp. 7645-7655, 2019.

Paulo E. Rauber, Alexandre X. Falcão, and Alexandru C. Telea. Visualizing time-dependent data using dynamic t-sne. In Enrico Bertini, Niklas Elmqvist, and Thomas Wischgoll (eds.), 18th Eurographics Conference on Visualization, EuroVis 2016 - Short Papers, Groningen, The Netherlands, June 6-10, 2016, pp. 73-77. Eurographics Association, 2016.

Sachin Ravi and Alex Beatson. Amortized bayesian meta-learning. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019.

Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017.

Scott E. Reed, Kihyuk Sohn, Yuting Zhang, and Honglak Lee. Learning to disentangle factors of variation with manifold interaction. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, volume 32 of JMLR Workshop and Conference Proceedings, pp. 1431-1439. JMLR.org, 2014.

Oleh Rybkin, Kostas Daniilidis, and Sergey Levine. Simple and effective VAE training with calibrated decoders. CoRR, abs/2006.13202, 2020.

Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 2226-2234, 2016.

Jake Snell, Kevin Swersky, and Richard S. Zemel. Prototypical networks for few-shot learning. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 4077-4087, 2017.

Sebastian Thrun and Lorien Y. Pratt (eds.). Learning to Learn. Springer, 1998.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pp. 5998-6008, 2017.

Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res., 11:3371-3408, 2010.

Oriol Vinyals, Charles Blundell, Tim Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 3630-3638, 2016.
# A OMNIGLOT EXPERIMENTS

# A.1 TRAINING PROCEDURE
Omniglot is a collection of $28 \times 28$ gray-scale hand-written characters comprising 1623 character classes, each of which contains 20 instances. Following the experimental setup of Hsu et al. (2019), we use 1200 classes for unsupervised meta-training, 100 classes for meta-validation, and the remaining 323 classes for meta-test. We further augment each class by rotating the images by 90, 180, and 270 degrees, such that the total number of classes is $1623 \times 4$, following the convention. We evaluate the trained model using 1000 randomly selected tasks from the test set. During evaluation, $K \times S$ data instances are used as support inputs and $K \times 15$ data instances are used as query inputs. We use the Adam (Kingma & Ba, 2015) optimizer with a constant learning rate of 0.001 to train all models. All models are trained for 60,000 iterations. For the 5-way experiments (i.e., $K = 5$), we set the mini-batch size, the number of datapoints, and the Monte Carlo sample size to 4, 200, and 32, respectively (i.e., $B = 4$, $M = 200$, and $N = 32$). For the 20-way experiments (i.e., $K = 20$), we set them to 4, 300, and 32 (i.e., $B = 4$, $M = 300$, and $N = 32$). We set the number of EM iterations to 10.
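
As a sketch of this episode construction (a minimal PyTorch illustration of ours, not the authors' code; the tensor layout and function names are our assumptions):

```python
import torch

def sample_unsupervised_episode(images: torch.Tensor, num_datapoints: int = 200) -> torch.Tensor:
    """Draw M unlabeled images for one meta-training episode.

    `images` is assumed to hold the rotation-augmented training classes with
    shape (num_classes, 20, 1, 28, 28); labels are never used at meta-training.
    """
    flat = images.flatten(0, 1)                        # (num_classes * 20, 1, 28, 28)
    idx = torch.randperm(flat.shape[0])[:num_datapoints]
    return flat[idx]                                   # (M, 1, 28, 28)

# Hyperparameters of Section A.1: B = 4 episodes per batch, M = 200, lr = 0.001.
# `model` is assumed to be the Meta-GMVAE nn.Module:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```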

# A.2 NETWORK ARCHITECTURE
We summarize the network architectures in Tables 3 and 4. We assume that the output follows a Bernoulli distribution; therefore, the output of the generative network $p_{\theta}(\mathbf{x}|\mathbf{z})$ is the mean parameter.
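
Under this assumption, the reconstruction term of the objective reduces to a binary cross-entropy between the input and the decoder mean; a minimal sketch under our naming (`decoder` is assumed to end in the Sigmoid of Table 4):

```python
import torch
import torch.nn.functional as F

def bernoulli_log_likelihood(x: torch.Tensor, z: torch.Tensor, decoder) -> torch.Tensor:
    """log p(x|z) when p is Bernoulli with mean decoder(z) in (0, 1)."""
    mean = decoder(z)  # (batch, 1, 28, 28)
    return -F.binary_cross_entropy(mean, x, reduction="none").flatten(1).sum(-1)
```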

Set-level variational posterior network $q_{\phi}(\mathbf{z}|\mathbf{x},\mathcal{D}_i)$

| Output Size | Layers |
| --- | --- |
| 1 × 28 × 28 | Input images |
| 64 × 14 × 14 | conv2d(3 × 3, stride 1, padding 1), BatchNorm2D, ReLU, Maxpool(2 × 2, stride 2) |
| 64 × 7 × 7 | conv2d(3 × 3, stride 1, padding 1), BatchNorm2D, ReLU, Maxpool(2 × 2, stride 2) |
| 64 × 4 × 4 | conv2d(3 × 3, stride 1, padding 1), BatchNorm2D, ReLU, Maxpool(2 × 2, stride 2) |
| 64 × 2 × 2 | conv2d(3 × 3, stride 1, padding 1), BatchNorm2D, ReLU, Maxpool(2 × 2, stride 2) |
| 256 | Flatten |
| 256 | TransformerEncoder(d_model = 256, d_ff = 256, h = 4, ELU, LayerNorm = False) |
| 256 | TransformerEncoder(d_model = 256, d_ff = 256, h = 4, ELU, LayerNorm = False) |
| 64 × 2 | Linear(256, 64 × 2) |

Table 3: Set-level variational posterior network used for the Omniglot dataset. The hyperparameter notation for TransformerEncoder follows Vaswani et al. (2017).

Generative network $p_{\theta}(\mathbf{x}|\mathbf{z})$

| Output Size | Layers |
| --- | --- |
| 64 | Latent code |
| 256 | Linear(64, 256), ELU |
| 256 | Linear(256, 256), ELU |
| 256 | Linear(256, 256), ELU |
| 64 × 2 × 2 | Unflatten |
| 64 × 4 × 4 | deconv2d(4 × 4, stride 2, padding 1), BatchNorm2D, ReLU |
| 64 × 7 × 7 | deconv2d(3 × 3, stride 2, padding 1), BatchNorm2D, ReLU |
| 64 × 14 × 14 | deconv2d(4 × 4, stride 2, padding 1), BatchNorm2D, ReLU |
| 1 × 28 × 28 | deconv2d(4 × 4, stride 2, padding 1), Sigmoid |

Table 4: Generative network $p_{\theta}(\mathbf{x} \mid \mathbf{z})$ for the Omniglot dataset.
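
For reference, Table 4 translates almost line-by-line into PyTorch; the following is our sketch of that reading (the spatial sizes check out: 2 → 4 → 7 → 14 → 28):

```python
import torch.nn as nn

decoder = nn.Sequential(
    nn.Linear(64, 256), nn.ELU(),
    nn.Linear(256, 256), nn.ELU(),
    nn.Linear(256, 256), nn.ELU(),
    nn.Unflatten(1, (64, 2, 2)),
    nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # 1 x 28 x 28 mean
)
```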

# A.3 $95\%$ CONFIDENCE INTERVAL
We provide the $95\%$ confidence intervals of our model's performance over 1000 episodes on the Omniglot dataset in Table 5.

| Omniglot | (5,1) | (5,5) | (20,1) | (20,5) |
| --- | --- | --- | --- | --- |
| Meta-GMVAE | 94.92 ± 0.42 | 97.09 ± 0.20 | 82.21 ± 0.44 | 90.61 ± 0.19 |

Table 5: Few-shot classification results (way, shot) with $95\%$ confidence intervals on Omniglot.

# B MINI-IMAGENET EXPERIMENTS

# B.1 TRAINING PROCEDURE
Mini-ImageNet is a subset of ILSVRC-2012 (Deng et al., 2009) introduced by Ravi & Larochelle (2017), consisting of 100 classes, each with 600 images of size $84 \times 84$. We first train a Conv5 feature extractor with the SimCLR objective, using a temperature of $\tau = 0.5$, on the Mini-ImageNet unsupervised meta-training set. We train the feature extractor using the Adam optimizer with a learning rate of 0.0001 for 400 epochs. We use 64 classes for unsupervised meta-training, 16 classes for meta-validation, and the remaining 20 classes for meta-test, following the standard protocol. We evaluate the trained model using 1000 randomly selected tasks from the test set. During evaluation, $5 \times S$ data instances are used as support inputs and $5 \times 15$ data instances are used as query inputs. For all experiments, we use the Adam (Kingma & Ba, 2015) optimizer with a constant learning rate of 0.0001, and set the mini-batch size, the number of datapoints, and the Monte Carlo sample size to 16, 5, and 256, respectively (i.e., $B = 16$, $M = 5$, and $N = 256$) for the 1, 5, and 20-shot experiments. For the 50-shot experiment, we set them to 4, 200, and 256, respectively (i.e., $B = 4$, $M = 200$, and $N = 256$). We train the models for 5K, 10K, 15K, 25K, and 30K iterations for the 1, 5, 20, and 50-shot experiments, respectively. We set the number of EM iterations to 10.
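
The SimCLR objective used here is the standard NT-Xent contrastive loss; a minimal re-implementation for reference (our sketch, not the authors' code; `z1`, `z2` are assumed to be the embeddings of two augmented views of the same batch):

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """SimCLR contrastive loss over a batch of positive pairs (z1[i], z2[i])."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, d), unit norm
    sim = z @ z.t() / tau                               # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    n = z.shape[0]
    targets = torch.arange(n, device=z.device).roll(n // 2)  # pair i with i +/- B
    return F.cross_entropy(sim, targets)
```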

# B.2 NETWORK ARCHITECTURE
We summarize the network architectures in Tables 6, 7, and 8. We assume that the output follows a Gaussian distribution; therefore, the output of the generative network $p_{\theta}(\mathbf{x}|\mathbf{z})$ is the mean parameter. The variance of the output Gaussian distribution is obtained as suggested in Rybkin et al. (2020).
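
A minimal sketch of that calibrated-variance rule, following the "optimal $\sigma$" analysis of Rybkin et al. (2020) (our paraphrase with a shared per-batch variance, not the authors' code):

```python
import math
import torch

def gaussian_log_likelihood(x: torch.Tensor, mean: torch.Tensor) -> torch.Tensor:
    """log p(x|z) with the shared decoder variance set analytically to the MSE."""
    var = ((x - mean) ** 2).mean().detach().clamp(min=1e-6)  # sigma^2 = batch MSE
    ll = -0.5 * (((x - mean) ** 2) / var + torch.log(var) + math.log(2 * math.pi))
    return ll.flatten(1).sum(-1)
```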

Feature Extractor for SimCLR

| Output Size | Layers |
| --- | --- |
| 3 × 84 × 84 | Input images |
| 64 × 42 × 42 | conv2d(3 × 3, stride 1, padding 1), BatchNorm2D, ReLU, Maxpool(2 × 2, stride 2) |
| 64 × 21 × 21 | conv2d(3 × 3, stride 1, padding 1), BatchNorm2D, ReLU, Maxpool(2 × 2, stride 2) |
| 64 × 10 × 10 | conv2d(3 × 3, stride 1, padding 1), BatchNorm2D, ReLU, Maxpool(2 × 2, stride 2) |
| 64 × 5 × 5 | conv2d(3 × 3, stride 1, padding 1), BatchNorm2D, ReLU, Maxpool(2 × 2, stride 2) |
| 64 × 2 × 2 | conv2d(3 × 3, stride 1, padding 1), BatchNorm2D, ReLU, Maxpool(2 × 2, stride 2) |
| 256 | Flatten |

Table 6: Feature extractor trained on the Mini-ImageNet dataset with the SimCLR objective.

Set-level variational posterior network $q_{\phi}(\mathbf{z}|\mathbf{x},\mathcal{D}_i)$

| Output Size | Layers |
| --- | --- |
| 256 | Input features |
| 256 | TransformerEncoder(d_model = 256, d_ff = 256, h = 4, ReLU, LayerNorm = False) |
| 256 | TransformerEncoder(d_model = 256, d_ff = 256, h = 4, ReLU, LayerNorm = False) |
| 64 × 2 | Linear(256, 64 × 2) |

Table 7: Set-level variational posterior network used for the Mini-ImageNet dataset. The hyperparameter notation for TransformerEncoder follows Vaswani et al. (2017).

Generative network $p_{\theta}(\mathbf{x}|\mathbf{z})$

| Output Size | Layers |
| --- | --- |
| 64 | Latent code |
| 512 | Linear(64, 512), ReLU |
| 512 | Linear(512, 512), ReLU |
| 256 | Linear(512, 256), ReLU |

Table 8: Generative network $p_{\theta}(\mathbf{x} \mid \mathbf{z})$ for the Mini-ImageNet dataset.

# B.3 95% CONFIDENCE INTERVAL

We provide the $95\%$ confidence intervals of our model's performance over 1000 episodes on the Mini-ImageNet dataset in Table 9.

| Mini-ImageNet | (5,1) | (5,5) | (5,20) | (5,50) |
| --- | --- | --- | --- | --- |
| Meta-GMVAE | 42.82 ± 0.56 | 55.73 ± 0.48 | 63.14 ± 0.47 | 68.26 ± 0.42 |

Table 9: Few-shot classification results (way, shot) with $95\%$ confidence intervals on Mini-ImageNet.

# B.4 ADDITIONAL COMPARISON USING SIMCLR

To further understand where the improvement of Meta-GMVAE on the Mini-ImageNet dataset comes from, we ran the baselines with SimCLR-pretrained features. For CACTUs, we cluster in the embedding space pretrained by SimCLR. For UMTRA, we follow the exact episode-generation procedure proposed by its authors. Moreover, we fix the pretrained SimCLR features for both baselines, matching the setting of Meta-GMVAE. Table 10 shows that Meta-GMVAE outperforms the baselines combined with SimCLR, which supports the effectiveness of Meta-GMVAE on top of SimCLR-pretrained features.

| Mini-ImageNet | (5,1) | (5,5) | (5,20) | (5,50) |
| --- | --- | --- | --- | --- |
| CACTUs-MAML (SimCLR) | 40.39 | 52.35 | 61.09 | 64.89 |
| UMTRA (SimCLR) | 40.85 | 51.47 | 61.03 | 67.30 |
| Meta-GMVAE | 42.82 | 55.73 | 63.14 | 68.26 |

Table 10: Comparison of few-shot classification results (way, shot) using SimCLR-pretrained features.

metagmvaemixtureofgaussianvaeforunsupervisedmetalearning/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a0aa3f3f141e976ff2c9d8a99e0a1876355cf42be94632a2722ebcb83ba56b93
+size 772401

metagmvaemixtureofgaussianvaeforunsupervisedmetalearning/layout.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cfe8e214ee90bdbe0a59c5b783e247b500c62eb2c57ddbf8d04beef413855f1a
+size 479696

mindthepadcnnscandevelopblindspots/3552c9a6-8d0a-4fb1-b42b-7481e78bb67d_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a8e275f6a978bea260bd53d869fd5fe47992c309eb19015f35afd240d838e75
+size 244503

mindthepadcnnscandevelopblindspots/3552c9a6-8d0a-4fb1-b42b-7481e78bb67d_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:03eb34d8919d73684f6402b7f09b5bca986f01c0493d5796e76aadc9db97ced7
+size 266677

mindthepadcnnscandevelopblindspots/3552c9a6-8d0a-4fb1-b42b-7481e78bb67d_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b9653d8fff360f2ce897b04724b0a05d4492ad6704df03400db61023b7bdc5ef
+size 8297752

mindthepadcnnscandevelopblindspots/full.md
ADDED
@@ -0,0 +1,1492 @@

# MIND THE PAD - CNNS CAN DEVELOP BLIND SPOTS

Bilal Alsallakh (Facebook AI), Narine Kokhlikyan (Facebook AI), Vivek Miglani (Facebook AI), Jun Yuan (NYU), Orion Reblitz-Richardson (Facebook AI)

# ABSTRACT

We show how feature maps in convolutional networks are susceptible to spatial bias. Due to a combination of architectural choices, the activation at certain locations is systematically elevated or weakened. The major source of this bias is the padding mechanism. Depending on several aspects of convolution arithmetic, this mechanism can apply the padding unevenly, leading to asymmetries in the learned weights. We demonstrate how such bias can be detrimental to certain tasks such as small object detection: the activation is suppressed if the stimulus lies in the impacted area, leading to blind spots and misdetection. We propose solutions to mitigate spatial bias and demonstrate how they can improve model accuracy.

# 1 MOTIVATION

Convolutional neural networks (CNNs) serve as feature extractors for a wide variety of machine-learning tasks. Little attention has been paid to the spatial distribution of activation in the feature maps a CNN computes. Our interest in analyzing this distribution was triggered by mysterious failure cases of a traffic-light detector: the detector successfully detects a small but visible traffic light in a road scene. However, it fails completely to detect the same traffic light in the next frame captured by the ego-vehicle. The major difference between the two frames is a limited shift along the vertical dimension as the vehicle moves forward. The drastic difference in object detection is therefore surprising, given that CNNs are often assumed to have a high degree of translation invariance [8; 17].

The spatial distribution of activation in feature maps varies with the input. Nevertheless, by closely examining this distribution for a large number of samples, we found consistent patterns among them, often in the form of artifacts that do not resemble any input features. This work aims to analyze the root cause of such artifacts and their impact on CNNs. We show that these artifacts are responsible for the mysterious failure cases mentioned earlier, as they can induce 'blind spots' for the object detection head. Our contributions are:

- Demonstrating how the padding mechanism can induce spatial bias in CNNs (Section 2).
- Demonstrating how spatial bias can impair downstream tasks (Section 3).
- Identifying uneven application of 0-padding as a resolvable source of bias (Section 5).
- Relating the padding mechanism to the foveation behavior of CNNs (Section 6).
- Providing recommendations to mitigate spatial bias and demonstrating how this can prevent blind spots and boost model accuracy.

# 2 THE EMERGENCE OF SPATIAL BIAS IN CNNS

Our aim is to determine to what extent the activation magnitude in CNN feature maps is influenced by location. We demonstrate our analysis on a publicly-available traffic-light detection model [36]. This model implements the SSD architecture [26] in TensorFlow [1], using MobileNet-v1 [13] as a feature extractor. The model is trained on the BSTLD dataset [4], which annotates traffic lights in road scenes. Figure 1 shows two example scenes from the dataset. For each scene, we show two feature maps computed by two filters in the $11^{\text{th}}$ convolutional layer. This layer contains 512 filters whose feature maps are used directly by the first box predictor in the SSD to detect small objects.



Figure 1: Averaging feature maps per input (column marginal) and per filter (row marginal) in the last convolutional layer of a traffic light detector. Color indicates activation strength (the brighter, the higher), revealing line artifacts in the maps. These artifacts are the manifestation of spatial bias.

The bottom row in Figure 1 shows the average response of each of the two aforementioned filters, computed over the test set in BSTLD. The first filter seems to respond mainly to features in the top half of the input, while the second filter responds mainly to street areas. There are visible lines in the two average maps that do not seem to resemble any scene features and are consistently present in the individual feature maps. We analyzed the prevalence of these line artifacts in the feature maps of all 512 filters. The right column in Figure 1 shows the average of these maps per scene, as well as over the entire test set (see supplemental for all 512 maps). The artifacts are largely visible in the average maps, with variations per scene depending on which individual maps are dominant.

A useful way to make the artifacts stand out is to neutralize scene features by computing the feature maps for a zero-valued input. Figure 2 depicts the resulting average map for each convolutional layer after applying ReLU units. The first average map is constant, as we expect with a 0-valued input. The second map is also constant except for a 1-pixel boundary where the value is lower at the left border and higher at the other three borders. We magnify the corners to make these deviations visible. The border deviations increase in thickness and in variance at subsequent layers, creating multiple line artifacts at each border. These artifacts become quite pronounced at ReLU 8, where they start to propagate inwards, resembling the ones in Figure 1.



Figure 2: Activation maps for a 0 input, averaged over each layer's filters (title format: $\mathrm{H}\times \mathrm{W}\times \mathrm{C}$).

It is evident that the 1-pixel border variations in the second map are caused by the padding mechanism in use. This mechanism pads the output of the previous layer with a 1-pixel 0-valued border in order to maintain the size of the feature map after applying $3 \times 3$ convolutions. The maps in the first layer are not impacted because the input we feed is zero-valued. Subsequent layers, however, are increasingly impacted by the padding, as preceding bias terms do not warrant 0-valued input.

It is noticeable in Figure 2 that the artifacts caused by the padding differ across the four borders. To investigate this asymmetry, we analyze the convolutional kernels (often called filters) that produce the feature maps. Figure 3 depicts a per-layer mean of these $3 \times 3$ kernels. These mean kernels exhibit different degrees of asymmetry in the spatial distribution of their weights. For example, the kernels in L1 assign (on average) a negative weight at the left border, and a positive weight at the bottom. This directly impacts the padding-induced variation at each border. Such asymmetries are related to uneven application of padding, as we explain in Section 5.
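
Such mean kernels are simple to reproduce; the following sketch (ours, for a torchvision-style model) averages each layer's $3 \times 3$ kernels over output and input channels:

```python
import torch
import torchvision.models as models

def mean_kernels(model: torch.nn.Module):
    """Per-layer mean of all 3x3 convolution kernels."""
    means = []
    for name, m in model.named_modules():
        if isinstance(m, torch.nn.Conv2d) and m.kernel_size == (3, 3):
            means.append((name, m.weight.detach().mean(dim=(0, 1))))  # (3, 3)
    return means

for name, kernel in mean_kernels(models.resnet18(weights="IMAGENET1K_V1")):
    print(name, kernel)
```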



Figure 3: Mean kernel per convolutional layer. All kernels are $3 \times 3$; the titles show their counts.

# 3 IMPLICATIONS OF SPATIAL BIAS

We demonstrate how feature-map artifacts can cause blind spots for the SSD model. Similar issues arise in several small-object detectors, e.g., for faces and masks, as well as in pixel-oriented tasks such as semantic segmentation and image inpainting (see supplemental for examples).

Figure 4 illustrates how the SSD predicts small objects based on the feature maps of the 11th convolutional layer. The SSD uses the pixel positions in these maps as anchors of object proposals. Each proposal is scored by the SSD to represent a target category, with "background" being an implicit category that is crucial to exclude irrelevant parts of the input. In addition to these scores, the SSD computes a bounding box to localize the predicted object at each anchor. We examine object proposals computed at a 1:2 aspect ratio, as they resemble the shape of most traffic lights in the dataset. We visualize the resulting score maps both for the background category and for traffic lights when feeding a 0-valued input to the SSD. We also visualize the bounding boxes of these proposals in the image space. The SSD predicts the image content to be of the background category at all anchor locations, as evident from the value range in both score maps. Such predictions are expected with an input that contains no traffic lights. However, the line artifacts in the feature maps have a strong impact on the score maps. These artifacts elevate the likelihood of anchors closer to the top to be classified as background (see the yellow band in the background score map). Conversely, these anchors have significantly lower scores for the traffic light category, compared with other anchors in the feature map. Such difference in the impact on the target categories is due to the different weights the SSD assigns to the feature maps for each target. As a result, the artifacts lead to potential blind spots in which the scores for certain categories are artificially muted.



Figure 4: The formation of blind spots in SSD, illustrated via its box predictor internals with a zero-valued input. The predictor uses spatial anchors to detect and localize the target object at $45 \times 80$ possible locations based on 512 feature maps. Certain anchors are predisposed to predict background due to feature-map artifacts, as evident in the logit maps. Traffic lights at the corresponding location cannot be detected, as demonstrated with a real scene (middle one in the bottom).



Figure 5: (a) A map showing via color the detection score the SSD computes for a traffic light when present at various locations. The detection is muted when the stimulus lies in the area impacted by the artifacts. (b) The same map after changing the padding method to SYMMETRIC. The detection scores are rather constant except for periodic variations due to the SSD's reliance on anchors.

To validate whether the blind spots hinder object detection, we examine road scenes that contain highly-visible traffic light instances in the impacted area. The bottom of Figure 4 shows an example of such a scene. The SSD computes a low detection score of $7\%$ when the traffic light lies in the blind spot (see the middle image), far below the detection cutoff, so the instance counts as a false negative. Shifting the scene image upwards or downwards makes the instance detectable with a high score, as long as it lies outside the blind spot. This explains the failure cases mentioned in Section 1. To further validate this effect, we run the SSD on baseline images that each contain one traffic light instance at a specific location in the input. We store the detection score for each instance. Figure 5a depicts the computed scores in a 2D map. It is evident that the model fails to detect the traffic light instance exactly when it is located within the "blind spot" band. The artifacts further disrupt the localization of the objects, as evident in the top-right plot in Figure 4, which shows per-anchor object proposals computed for a 0 input.

# 4 REMINDER: WHY IS PADDING NEEDED IN CNNS?

Padding is applied at most convolutional layers in CNNs to serve two fundamental purposes:

**Maintaining feature map size** A padding that satisfies this property is often described as SAME or HALF padding. FULL padding expands the maps by kernel size - 1 along each dimension. VALID padding performs no padding, eroding the maps by the same amount. SAME padding is important to (1) design deep networks that can handle arbitrary input sizes (a challenge in the presence of gradual erosion), (2) maintain the aspect ratio of non-square input, and (3) concatenate feature maps from different layers as in Inception [39] and ResNet [12] models.

**Reducing information bias against the boundary** Consider a $3\times 3$ kernel applied to a 2D input. An input location at least 2 pixels away from the boundary contributes to nine local convolution operations when computing the feature map. On the other hand, the corner is involved only once under VALID padding, four times under a 1-pixel SAME 0-padding, and nine times under a 2-pixel FULL 0-padding. With SAME 0-padding, the cumulative contribution differences among the input pixels grow exponentially over the CNN layers. We refer to such uneven treatment of input pixels as the foveation behavior of the padding mechanism and elaborate on this in Section 6.

We next explore solutions to the issues that cause padding to induce spatial bias.



Figure 6: (a) Illustrating the problem of uneven padding when down-sampling at a stride of 2. The padding along the x-axis is consumed only at the left side. (b) Mean $3 \times 3$ filters in three ResNet models, trained on ImageNet with two input sizes. Color encodes average weight (green is positive). A size that induces uneven padding (top row) can lead to asymmetries, especially around down-sampling layers. These asymmetries are mitigated when the input size induces no uneven padding (bottom row).





# 5 ELIMINATING UNEVEN APPLICATION OF PADDING

While useful to reduce bias against the boundary, applying padding at down-sampling layers can lead to asymmetry in CNN internals. Figure 6a illustrates the source of this asymmetry when strided convolution is used for downsampling: at one side of the feature map, the padding is consumed by the kernel, while at the other side it is not. To warrant even application of padding throughout the CNN, the following must hold at all $d$ down-sampling layers, where $(h_i, w_i)$ is the output shape at the $i$-th layer with $k_i^h \times k_i^w$ as kernel size, $(s_i^h, s_i^w)$ as strides, and $(p_i^h, p_i^w)$ as padding amount (refer to appendix A for a proof):

$$
\forall i \in \{1, \dots, d\}: \quad h_{i-1} = s_i^h \cdot (h_i - 1) + k_i^h - 2 \cdot p_i^h \ \wedge\ w_{i-1} = s_i^w \cdot (w_i - 1) + k_i^w - 2 \cdot p_i^w \tag{1}
$$

The values $h_0$ and $w_0$ represent the CNN input dimensions. The above constraints are not always satisfied during training or inference with arbitrary input dimensions. For example, ImageNet classifiers based on ResNet [12] and MobileNet [13] contain five down-sampling layers ($d = 5$) that apply 1-pixel 0-padding before performing 2-strided convolution. To avoid uneven application of padding, the input to these CNNs must satisfy the following, as explained in appendix A:

$$
h_0 = a_1 \times 2^d + 1 = 32 \cdot a_1 + 1 \quad \text{and} \quad w_0 = a_2 \times 2^d + 1 = 32 \cdot a_2 + 1 \quad \text{where} \quad a_1, a_2 \in \mathbb{N}^+ \tag{2}
$$

The traditional and prevalent input size for training ImageNet models is $224\times 224$. This size violates Eq. 2, leading to uneven padding at every down-sampling layer in ResNet and MobileNet models, where 0-padding is effectively applied only at the left and top sides of the layer input. This over-represents zeros at the top and left sides of the $3\times 3$ feature-map patches the filters are convolved with during training. The top row of Figure 6b shows per-layer mean filters in three ResNet models in PyTorch [33], pre-trained on ImageNet with $224\times 224$ images. In all of these models, a few of the mean filters, adjacent to down-sampling layers, exhibit stark asymmetry about their centers.
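
The constraint is easy to check programmatically; a small sketch (ours) for the five 2-strided, 1-padded $3 \times 3$ down-sampling layers discussed above:

```python
def downsampling_is_even(size: int, kernel: int = 3, stride: int = 2,
                         pad: int = 1, depth: int = 5) -> bool:
    """Check Eq. 1 at every down-sampling layer for a square input."""
    for _ in range(depth):
        if (size + 2 * pad - kernel) % stride != 0:
            return False  # padding consumed on one side only (Figure 6a)
        size = (size + 2 * pad - kernel) // stride + 1
    return True

print(downsampling_is_even(224))  # False: uneven padding at every stage
print(downsampling_is_even(225))  # True: 225 = 32 * 7 + 1 satisfies Eq. 2
```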

We increase the image size to $225 \times 225$ without introducing additional image information. This size satisfies Eq. 2, warranting even application of padding at every downsampling layer in the above models. Retraining the models with this size strongly reduces this asymmetry, as evident in the bottom row of Figure 6b. This, in turn, visibly boosts the accuracy of all models we experimented with, as we report in Table 1. The accuracy did not improve further when we retrained two of the models, ResNet-18 and ResNet-34, on $226 \times 226$ images. This provides evidence that the boost is due to eliminating uneven padding and not merely due to increasing the input size.

Replacing 0-padding with a padding method that reuses feature-map values can alleviate the asymmetry in the learned filters in the presence of unevenly applied padding. Another possibility is to use a rigid downsampling kernel, such as max-pooling, instead of a learned one. Appendix C demonstrates both possibilities. Finally, antialiasing before downsampling [43] can strongly reduce the asymmetry, as we elaborate in Section 8 and in Appendix E.

Table 1: Top-1 (and top-5) accuracy of five ImageNet classifiers trained with different input sizes.

| Input Size | MobileNet | ResNet-18 | ResNet-34 | ResNet-50 | ResNet-101 |
| --- | --- | --- | --- | --- | --- |
| 224 × 224 | 68.19 (88.44) | 69.93 (89.22) | 73.30 (91.42) | 75.65 (92.47) | 77.37 (93.56) |
| 225 × 225 | 68.80 (88.78) | 70.27 (89.52) | 73.72 (91.58) | 76.01 (92.90) | 77.67 (93.81) |

Even when no padding is applied ($p_i^h = 0$ or $p_i^w = 0$), an input size that does not satisfy Eq. 1 can lead to uneven erosion of feature maps, in turn reducing the contribution of pixels from the impacted sides (Fig. 7e). Satisfying Eq. 1 imposes a restriction on the input size, e.g., to values in increments of $2^d = 32$ with the above models ($193 \times 193$, $225 \times 225$, $257 \times 257$, ...). Depending on the application domain, this can be guaranteed either by resizing an input to the closest increment, or by padding it accordingly with suited values.

# 6 PADDING MECHANISM AND FOVEATION

By foveation we mean the unequal involvement of input pixels in convolutional operations throughout the CNN. Padding plays a fundamental role in the foveation behavior of CNNs. We visualize this behavior by means of a foveation map that counts for each input pixel the number of convolutional paths through which it can propagate information to the CNN output. We obtain these counts by computing the effective receptive field [28] for the sum of the final convolutional layer after assigning all weights in the network to 1 (code in supplemental). Neutralizing the weights is essential to obtain per-pixel counts of input-output paths that reflect the foveation behavior.
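
A rough sketch of this computation (ours, not the paper's supplemental code; we use double precision to avoid overflow with all-ones weights, and pooling tie-breaking can slightly distort the counts):

```python
import torch
import torchvision.models as models

def foveation_map(trunk: torch.nn.Module, size: int = 512) -> torch.Tensor:
    """Per-pixel count of input-output convolutional paths."""
    trunk = trunk.double().eval()
    for m in trunk.modules():
        if isinstance(m, torch.nn.Conv2d):
            torch.nn.init.ones_(m.weight)   # neutralize the learned weights
            if m.bias is not None:
                torch.nn.init.zeros_(m.bias)
    x = torch.ones(1, 3, size, size, dtype=torch.float64, requires_grad=True)
    trunk(x).sum().backward()               # the gradient counts paths per pixel
    return x.grad[0].sum(0)

counts = foveation_map(models.vgg19(weights=None).features)
```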



Figure 7: Foveation behavior of different padding methods applied to VGG-19 [37], and illustrated in a $512 \times 512$ input space (unless otherwise stated). Color represents the number of paths to the output for each input pixel. (a) The difference between VALID, FULL, and SAME 0-padding. (b) SAME alternatives to 0-padding. (c) Dilation amplifies foveation of SAME 0-padding. (d) Strides can lead to checkerboard patterns. (e) Foveation effects are more extensive in smaller inputs (relative to input size) and are sensitive to uneven padding.

Figure 7a shows the extensive foveation effect when no padding is applied. The diminishing contribution of vast areas of the input explains the drastic drop in accuracy recently observed under VALID padding [16]. In contrast, FULL 0-padding does not incur foveation, albeit at the cost of increasing the output size after each layer, making it impractical as explained in Section 4. SAME 0-padding incurs moderate foveation at the periphery, whose absolute extent depends on the number of convolutional layers and their filter sizes. Its relative extent depends on the input size: the larger the input, the larger the ratio of the constant area in yellow (refer to appendix B for a detailed example).

Figure 7b shows the foveation behavior of alternatives to SAME 0-padding that have roots in wavelet analysis [19] and image processing [27]. Mirror padding mirrors pixels at the boundary to fill the padding area. When the border is included (SYMMETRIC mode in TensorFlow), all input pixels have an equal number of input-output paths, resulting in a uniform foveation map. When the border is not included (REFLECT mode both in PyTorch and in TensorFlow), the map exhibits bias against the border and towards a contour in its proximity. This bias is amplified over multiple layers. Replication padding exhibits the opposite bias when the padding area is wider than 1 pixel. This is because it replicates the outer 1-pixel border multiple times to fill this area. The method is equivalent to SYMMETRIC if the padding area is 1-pixel wide. Circular padding wraps opposing borders, enabling the kernels to seamlessly operate on the boundary and resulting in a uniform map. Partial Convolution [22] has been proposed as a padding method that treats pixels outside the original image as missing values and rescales the computed convolutions accordingly [23]. Its foveation behavior resembles reflective padding. Distribution padding [30] resizes the input to fill the padding area around the original feature map, aiming at preserving the distribution of the map. Its foveation map is largely uniform, except for the corners and edges.
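
For reference, most of these alternatives map onto built-in padding modes (a small PyTorch sketch of ours; PyTorch's `reflect` corresponds to REFLECT, and it has no built-in SYMMETRIC mode, though 1-pixel `replicate` coincides with it, as noted above):

```python
import torch
import torch.nn.functional as F

x = torch.arange(9.0).reshape(1, 1, 3, 3)

zeros     = F.pad(x, (1, 1, 1, 1), mode="constant", value=0.0)  # SAME 0-padding
reflect   = F.pad(x, (1, 1, 1, 1), mode="reflect")              # mirror, border excluded
replicate = F.pad(x, (1, 1, 1, 1), mode="replicate")            # repeat the border pixel
circular  = F.pad(x, (1, 1, 1, 1), mode="circular")             # wrap opposing borders
```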
|
| 129 |
+
|
| 130 |
+
Impact of input size Besides influencing the relative extent of foveation effects, the input size also determines the presence of uneven padding (or uneven feature-map erosion), as we discussed in Section 5. Figure 7 shows the foveation map for VGG-19 with a $127 \times 127$ input. This input violates Eq. 1 at every downsampling layer (appendix A), leading to successive feature map erosion at the bottom and right sides which is reflected in the foveation map (see appendix B for a detailed example). The bottom-right part of the input is hence less involved in the CNN computations.
|
| 131 |
+
|
| 132 |
+
Impact of dilation We assign a dilation factor of 2 to all VGG-19 convolutional layers. While this exponentially increases the receptive field of the neurons at deeper layers [42], dilation doubles the extent of the non-uniform peripheral areas that emerge with SAME 0-padding as evident in Figure 7c. SYMMETRIC and circular padding maintain uniform foveation maps regardless of dilation $^{3}$ . In contrast, dilation increases the complexity of these maps for REFLECT and replication padding.
|
| 133 |
+
|
| 134 |
+
Impact of strides Whether learned on based on pooling, downsampling layers can amplify the impact of succeeding convolutional layers on foveation behaviour. Furthermore, these layers can cause input pixels to vary in the count of their input-output paths. This can happen when the kernel size is not divisible by the stride, leading to a checkerboard pattern in the foveation maps. This manifests in ResNet models as we illustrate in appendix B. In VGG-19, all max-pooling layers use a stride of 2 and kernel size of 2. Changing the kernel size to 3 leads to a checkerboard pattern as evident in Figure 7d. Such effects were shown to impact pixel-oriented tasks [32].
|
| 135 |
+
|
| 136 |
+
The padding technique and its foveation behaviour have direct impact on feature-map artifacts (Section 7), and on the ability of CNNs to encode spatial information (Section 8). Understanding the foveation behavior is key to determine how suited a padding method is for a given task. For example, small object detection is known to be challenging close to the boundary [26], in part due to the foveation behavior of SAME 0-padding. In Figure 5b, we change the padding method in the SSD to SYMMETRIC. The stimulus is noticeably more detectable at the boundary, compared with 0-padding. In contrast, ImageNet classification is less sensitive to foveation effects because the target objects are mostly located away from the periphery. Nevertheless, the padding method was shown to impact classification accuracy [23] because it still affects feature map artifacts.
|
| 137 |
+
|
| 138 |
+
# 7 PADDING METHODS AND FEATURE MAP ARTIFACTS
|
| 139 |
+
|
| 140 |
+
It is also noticeable that the score map in Figure 5b is more uniform than in Figure 5a. In particular, under SYMMETRIC padding the model is able to detect traffic lights placed in the blind spots of the original 0-padded model. To verify whether the line artifacts in Figure 2 are mitigated, we inspect the mean feature maps of the adapted model. With a constant input, SYMMETRIC padding warrants constant maps throughout the CNN because it reuses the border to fill the padding area. Instead, we average these maps over 30 samples generated uniformly at random. Figure 8 depicts the mean maps which are largely uniform, unlike the case with 0-padding.
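
The averaging step can be reproduced with a short PyTorch sketch (the helper, shapes, and sample count are our own assumptions): a forward hook captures a layer's output, which is averaged over uniformly random inputs.

```python
import torch

@torch.no_grad()
def mean_feature_maps(model, layer, n_samples=30, shape=(1, 3, 300, 300)):
    """Average `layer`'s feature maps over uniformly random inputs.

    With a constant input, SYMMETRIC padding would yield constant maps,
    hence the averaging over random samples.
    """
    captured = []
    handle = layer.register_forward_hook(lambda m, i, o: captured.append(o))
    for _ in range(n_samples):
        model(torch.rand(shape))            # uniform random input in [0, 1)
    handle.remove()
    return torch.cat(captured).mean(dim=0)  # mean over the sampled inputs
```
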
Figure 8: The same feature maps as in Figure 2, generated under mirror padding and averaged over 30 randomly-generated input samples. The line artifacts induced by 0-padding are largely mitigated.
To further analyze the impact of SYMMETRIC padding, we retrain the adapted model following the original training protocol. This significantly improves the average precision (AP), as reported in Table 2 under different overlap thresholds (matching IoU), confirming that small object detection is particularly sensitive to feature-map artifacts.

Table 2: Performance of the SSD traffic light detector, trained under two different padding schemes.

<table><tr><td>Average Precision (AP)</td><td>AP@.20IOU</td><td>AP@.50IOU</td><td>AP@.75IOU</td><td>AP@.90IOU</td></tr><tr><td>Zero Padding</td><td>80.24%</td><td>49.58%</td><td>3.7%</td><td>0.007%</td></tr><tr><td>Mirror Padding</td><td>83.20%</td><td>57%</td><td>8.44%</td><td>0.02%</td></tr></table>

Of the padding methods listed in Section 6, mirror padding (in both SYMMETRIC and REFLECT modes), PartialConv, and circular padding are generally effective at reducing the feature-map artifacts that emerge under zero padding, in particular salient line patterns. In contrast, distribution padding can induce significant artifacts. Refer to appendix D for comparative examples of artifacts under the aforementioned padding schemes.
Artifact magnitude and propagation While feature-map artifacts are induced by the padding mechanism at the boundary, their magnitude and inward propagation are impacted by several architectural aspects of CNNs. In particular, certain normalization schemes such as batchnorm [15] tend to limit the range of variation within a feature map and to relatively harmonize this range across different maps. This, in turn, impacts how possible artifacts in these maps accumulate when they are processed by the next convolutional layer. Similarly, artifacts that manifest after applying ReLU units are of a positive sign. These factors were instrumental in the formation of the potential blind spots described in Section 3. We hence recommend involving non-convolutional layers when inspecting the feature maps. Besides their possible impact on artifact magnitude, several aspects of convolution arithmetic, such as filter size and dilation factors, can also impact the spatial propagation of these artifacts.
# 8 RELATED FINDINGS AND TAKEAWAYS

Handling the boundary is an inherent challenge when dealing with spatial data [9]. Mean padding is known to cause visual artifacts in traditional image processing, with alternative methods proposed to mitigate them [24]. CNNs have often been assumed to deal with such effects implicitly. Innamorati et al. [14] propose learning separate sets of filters dedicated to the boundaries to avoid impacting the weights learned by regular filters. A grouped padding strategy, proposed to support $2 \times 2$ filters [41], offers avenues to mitigate uneven padding and the corresponding skewness in foveation maps without restrictions on the input size (see our note in appendix B for an explanation). Finally, insights from signal and image processing [10; 11] could inspire further CNN padding schemes.

Zero padding has recently been linked to CNNs' ability to encode position information [7; 16; 18; 29]. In contrast, circular padding was shown to limit this ability [7] and to boost shift invariance [35]. The input sizes in those studies do induce uneven padding, which can be, in part, the underlying mechanism behind the aforementioned ability. Whether or not this ability is desirable depends on the task, with several methods proposed to explicitly encode spatial information [5; 6; 20; 25; 29; 31].

Downsampling using max-pooling or strided convolution has been shown to impact shift invariance in CNNs by incurring aliasing effects [3; 38; 43]. These effects can manifest in the same symptoms we reported in Section 1, albeit for a different reason. Zhang [43] demonstrated how blurring the feature maps before subsampling mitigates aliasing effects and improves the ImageNet classification accuracy of various popular CNNs. We analyzed the mean filters in antialiased MobileNet and ResNet models pre-trained on ImageNet under 0-padding, with $224 \times 224$ as input size (refer to appendix E). We found that antialiasing can also mitigate the asymmetry of mean filters that exhibited high asymmetry in the baseline models, especially at deeper layers. This is remarkable given that these models are trained on $224 \times 224$ images, which incurs one-sided zero padding at every downsampling layer. This could, in part, be attributed to the ability of the BlurPool operator used in antialiased CNNs to smoothen the acuity of zero-padded borders, in turn reducing the value imbalance incurred by one-sided padding. Further analysis is needed to examine the interaction between padding and aliasing effects in CNNs and to establish possible synergy between antialiasing and eliminating the uneven application of padding.
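
For reference, the following is a minimal sketch of the blur-before-subsample idea behind BlurPool (a simplified stand-in for the operator in [43], not the reference implementation): a fixed binomial filter smooths each channel before the strided subsampling, which also softens a hard zero-padded border.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Blur then subsample, in the spirit of Zhang [43]; a sketch only."""
    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        b = torch.tensor([1.0, 2.0, 1.0])            # 1D binomial filter
        k = torch.outer(b, b)                        # separable 3x3 blur
        k = (k / k.sum()).expand(channels, 1, 3, 3)  # one copy per channel
        self.register_buffer("kernel", k.contiguous())
        self.stride, self.channels = stride, channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")   # soften the border
        return F.conv2d(x, self.kernel, stride=self.stride,
                        groups=self.channels)        # depthwise blur + stride

# e.g., replace a stride-2 pooling step with a stride-1 op followed by
# BlurPool2d(num_channels), as in the antialiased models analyzed above.
```
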
Luo et al. [28] drew connections between effective receptive fields and foveated vision. Our analysis links foveation behavior with the padding scheme and suggests that foveation might occur implicitly in CNNs when using VALID or SAME 0-padding, without the need for explicit mechanisms [2; 21]. Furthermore, it explains the drastic accuracy drop noted by [16] under VALID padding, which is amplified by feature-map erosion.

Choosing a padding method SAME 0-padding is by far the most widely-used method. Compared with other methods, it can enable as much as $50\%$ faster training and inference. Problem-specific constraints can dictate different choices [34; 35; 40]. In the absence of a universally superior padding method, we recommend considering multiple ones (see the sketch after the following list for how to switch between them) while paying attention to the nature of the data and the task, as well as to the following aspects:
- Feature-map statistics: 0-padding can alter the value distribution within the feature maps and can shift their mean value in the presence of ReLU units. The alternatives presented in Section 6 tend to preserve this distribution, thanks to reusing existing values in the maps.
- Foveation behavior: 0-padding might not be suited for tasks that require high precision at the periphery, unlike circular and SYMMETRIC mirror padding.
- Interference with image semantics (esp. with a padding amount $>1$ pixel): For example, circular padding could introduce border discontinuities unless the input is panoramic [35].
- Potential to induce feature-map artifacts: All alternatives to 0-padding induce relatively fewer artifacts, except for distribution padding [30] (see appendix D).
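
In PyTorch, several of these alternatives can be compared with a one-line change per convolution. Below is a minimal sketch (the helper is ours; the built-in `padding_mode` values are 'zeros', 'reflect', 'replicate', and 'circular', while SYMMETRIC mirror padding is not built in and would require a manual pre-padding step). Note that switching the mode of a model trained under zero padding changes its boundary behavior; retraining is generally needed, as in Table 2.

```python
import torch.nn as nn

def convert_padding(model: nn.Module, mode: str = "circular") -> nn.Module:
    """Switch every Conv2d in `model` to a different padding mode."""
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            m.padding_mode = mode  # 'zeros', 'reflect', 'replicate', 'circular'
    return model

# e.g., evaluate an ImageNet model under circular instead of zero padding:
# model = convert_padding(torchvision.models.resnet18(pretrained=True))
```
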
We also recommend eliminating uneven padding at downsampling layers, both at training and at inference time, as we illustrated in Section 5. This is especially important when zero padding is applied and the downsampling is learned. The scripts used to generate the visualizations in this paper are available in the supplemental material as well as at http://mind-the-pad.github.io.

Summary We demonstrated how the padding mechanism can induce spatial bias in CNNs, in the form of skewed kernels and feature-map artifacts. These artifacts can be highly pronounced with the widely-used 0-padding when it is applied unevenly at the four sides of the feature maps. We demonstrated how such uneven padding can inherently take place in state-of-the-art CNNs, and how the artifacts it causes can be detrimental to certain tasks such as small object detection. We provided visualization methods to expose these artifacts and to analyze the implications of various padding schemes for boundary pixels. We further proposed solutions to eliminate uneven padding and to mitigate spatial bias in CNNs. Further work is needed to closely examine the implications of spatial bias and foveation in various applications (see the supplementary material for examples), as well as the impact of padding on recurrent models and 1-D CNNs.
# ACKNOWLEDGEMENT

We are thankful to Ross Girshick for providing useful recommendations and experiment ideas, and to Shubham Muttepawar for implementing an interactive tool out of our analysis scripts, guided by our front-end specialist Edward Wang and our AI user-experience designer Sara Zhang.
# REFERENCES

[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

[2] E. Akbas and M. P. Eckstein. Object detection through search with a foveated visual system. PLoS Computational Biology, 13(10):e1005743, 2017.

[3] A. Azulay and Y. Weiss. Why do deep convolutional networks generalize so poorly to small image transformations? Journal of Machine Learning Research (JMLR), 20(184):1-25, 2019.

[4] K. Behrendt, L. Novak, and R. Botros. A deep learning approach to traffic lights: Detection, tracking, and classification. In IEEE International Conference on Robotics and Automation (ICRA), pp. 1370-1377, 2017.

[5] C.-A. Brust, S. Sickert, M. Simon, E. Rodner, and J. Denzler. Convolutional patch networks with spatial prior for road detection and urban scene understanding. In International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP), 2015.

[6] G. F. Elsayed, P. Ramachandran, J. Shlens, and S. Kornblith. Revisiting spatial invariance with low-rank local connectivity. In International Conference on Machine Learning (ICML), 2020.

[7] J. Geiping, H. Bauermeister, H. Droge, and M. Moeller. Inverting gradients - how easy is it to break privacy in federated learning? arXiv preprint arXiv:2003.14053, 2020.

[8] R. Gens and P. M. Domingos. Deep symmetry networks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 2537-2545, 2014.

[9] D. Griffith and C. Amrhein. An evaluation of correction techniques for boundary effects in spatial statistical analysis: traditional methods. Geographical Analysis, 15(4):352-360, 1983.

[10] V. Gupta and N. Ramani. A note on convolution and padding for two-dimensional data. Geophysical Prospecting, 26(1):214-217, 1978.

[11] L. Hamey. A functional approach to border handling in image processing. In International Conference on Digital Image Computing: Techniques and Applications, pp. 1-8, 2015.

[12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.

[13] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.

[14] C. Innamorati, T. Ritschel, T. Weyrich, and N. J. Mitra. Learning on the edge: Investigating boundary filters in CNNs. International Journal of Computer Vision (IJCV), pp. 1-10, 2019.

[15] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pp. 448-456, 2015.

[16] M. A. Islam, S. Jia, and N. D. Bruce. How much position information do convolutional neural networks encode? In International Conference on Learning Representations (ICLR), 2020.

[17] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 2017-2025, 2015.

[18] O. S. Kayhan and J. C. van Gemert. On translation invariance in CNNs: Convolutional layers can exploit absolute spatial location. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

[19] T. L. Kijewski-Correa. Full-scale measurements and system identification: A time-frequency perspective. PhD thesis, University of Notre Dame, 2003.

[20] I. Kim, W. Baek, and S. Kim. Spatially attentive output layer for image classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

[21] H. Larochelle and G. E. Hinton. Learning to combine foveal glimpses with a third-order Boltzmann machine. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1243-1251, 2010.

[22] G. Liu, F. A. Reda, K. J. Shih, T.-C. Wang, A. Tao, and B. Catanzaro. Image inpainting for irregular holes using partial convolutions. In European Conference on Computer Vision (ECCV), 2018.

[23] G. Liu, K. J. Shih, T.-C. Wang, F. A. Reda, K. Sapra, Z. Yu, A. Tao, and B. Catanzaro. Partial convolution based padding. arXiv preprint arXiv:1811.11718, 2018.

[24] R. Liu and J. Jia. Reducing boundary artifacts in image deconvolution. In IEEE International Conference on Image Processing (ICIP), pp. 505-508, 2008.

[25] R. Liu, J. Lehman, P. Molino, F. P. Such, E. Frank, A. Sergeev, and J. Yosinski. An intriguing failing of convolutional neural networks and the CoordConv solution. In Advances in Neural Information Processing Systems (NeurIPS), pp. 9605-9616, 2018.

[26] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. SSD: Single shot multibox detector. In European Conference on Computer Vision (ECCV), pp. 21-37, 2016.

[27] S. Lou, X. Jiang, and P. J. Scott. Fast algorithm for morphological filters. Journal of Physics: Conference Series, 311(1):012001, 2011.

[28] W. Luo, Y. Li, R. Urtasun, and R. Zemel. Understanding the effective receptive field in deep convolutional neural networks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 4898-4906, 2016.

[29] R. Murase, M. Suganuma, and T. Okatani. How can CNNs use image position for segmentation? arXiv preprint arXiv:2005.03463, 2020.

[30] A.-D. Nguyen, S. Choi, W. Kim, S. Ahn, J. Kim, and S. Lee. Distribution padding in convolutional neural networks. In IEEE International Conference on Image Processing (ICIP), pp. 4275-4279, 2019.

[31] D. Novotny, S. Albanie, D. Larlus, and A. Vedaldi. Semi-convolutional operators for instance segmentation. In European Conference on Computer Vision (ECCV), pp. 86-102, 2018.

[32] A. Odena, V. Dumoulin, and C. Olah. Deconvolution and checkerboard artifacts. Distill, 1(10):e3, 2016.

[33] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NeurIPS), pp. 8024-8035, 2019.

[34] P. O. Pinheiro, T.-Y. Lin, R. Collobert, and P. Dollár. Learning to refine object segments. In European Conference on Computer Vision (ECCV), pp. 75-91, 2016.

[35] S. Schubert, P. Neubert, J. Pöschmann, and P. Protzel. Circular convolutional neural networks for panoramic images and laser data. In IEEE Intelligent Vehicles Symposium (IV), pp. 653-660, 2019.

[36] E. Shalnov. BSTLD-demo: A sample project to train and evaluate a model on BSTLD. https://github.com/e-sha/BSTLD_demo, 2019.

[37] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR), 2015.

[38] G. Sundaramoorthi and T. E. Wang. Translation insensitive CNNs. arXiv preprint arXiv:1911.11238, 2019.

[39] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-9, 2015.

[40] S. Vashisth, S. Sanyal, V. Nitin, N. Agrawal, and P. Talukdar. InteractE: Improving convolution-based knowledge graph embeddings by increasing feature interactions. In AAAI Conference on Artificial Intelligence, 2020.

[41] S. Wu, G. Wang, P. Tang, F. Chen, and L. Shi. Convolution with even-sized kernels and symmetric padding. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1192-1203, 2019.

[42] F. Yu and V. Koltun. Multi-scale context aggregation by dilated convolutions. In International Conference on Learning Representations (ICLR), 2016.

[43] R. Zhang. Making convolutional networks shift-invariant again. In International Conference on Machine Learning (ICML), 2019.
# A ELIMINATING UNEVEN APPLICATION OF PADDING

Consider a CNN with $d$ downsampling layers, $L_{1}, L_{2}, \ldots, L_{d}$. To simplify the analysis, and without loss of generality, we assume that the kernels in these layers are square and that all other layers maintain their input size. We denote by $s_i$ and $k_i$ the stride and kernel size of layer $L_i$, by $h_i$ and $w_i$ the dimensions of the feature maps computed by $L_i$, and by $h_0$ and $w_0$ the size of the CNN input. We examine the conditions that warrant no uneven application of padding along the height dimension; parallel conditions apply to the width dimension.
We denote by $\bar{h}_i$ the height of the padded input to $L_{i}$. The effective portion $\hat{h}_i \leq \bar{h}_i$ of this input processed by the convolutional filters in $L_{i}$ is equal to:

$$
\hat{h}_i = s_i \cdot (h_i - 1) + k_i
$$

Our goal is to warrant that $\hat{h}_i = \bar{h}_i$, to prevent information loss and to avoid uneven padding along the vertical dimension when the unconsumed part $\bar{h}_i - \hat{h}_i < s_i$ is an odd number.
Since the non-downsampling layers maintain their input size, we can formulate the height of the padded input as follows:

$$
\bar{h}_i = h_{i-1} + 2 \cdot p_i
$$

where $p_i$ is the amount of padding applied at the top and at the bottom of the input to $L_i$. Accordingly, we can warrant no uneven padding if the following holds:

$$
\forall i \in [1..d]: \quad h_{i-1} = s_i \cdot (h_i - 1) + k_i - 2 \cdot p_i \tag{3}
$$
Example 1: ResNet-18 This network contains five downsampling layers ($d = 5$), all of which use a stride of 2. Despite performing downsampling, all of these layers apply the padding amount entailed by SAME padding to avoid information bias against the boundary. In the four layers with $3 \times 3$ kernels ($k_i = 3$), the amount used is $p_i = 1$. For the first layer, which has $7 \times 7$ kernels, this amount is equal to 3. In both cases, the term $k_i - 2 \cdot p_i$ in Eq. 3 is equal to 1. To warrant no uneven padding along the vertical dimension, the heights of the feature maps at the downsampling layers should hence satisfy:

$$
\forall i \in [1..d]: \quad h_{i-1} = 2 \cdot (h_i - 1) + 1 = 2 \cdot h_i - 1
$$

Accordingly, the input height should satisfy:

$$
h_0 = 2^d \cdot h_d - (2^d - 1) = 2^d \cdot (h_d - 1) + 1
$$

where $h_d$ is the height of the final feature map, which can be any natural number larger than 1 to avoid a degenerate case of a $1 \times 1$ input. The same holds for the input width:

$$
w_0 = 2^d \cdot (w_d - 1) + 1
$$

A $225 \times 225$ input satisfies these constraints since $225 = 2^5 \cdot 7 + 1$, yielding even padding in all five downsampling layers and output feature maps of size $8 \times 8$.
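
This layer-by-layer condition is easy to check programmatically. Below is a minimal sketch (our own helper, with the downsampling layers given as hypothetical (stride, kernel, padding) tuples): Eq. 3 holds at a layer exactly when the padded span the kernel slides over is divisible by the stride.

```python
def check_even_padding(h0: int, layers) -> bool:
    """Check Eq. 3 at every downsampling layer for input height h0.

    `layers` lists (stride, kernel, padding) per downsampling layer, in order.
    """
    h = h0
    for s, k, p in layers:
        consumed = h + 2 * p - k   # span the kernel slides over
        if consumed % s != 0:      # leftover -> uneven (one-sided) padding
            return False
        h = consumed // s + 1      # output height of this layer
    return True

resnet18_downsampling = [(2, 7, 3)] + [(2, 3, 1)] * 4
assert check_even_padding(225, resnet18_downsampling)      # 225 satisfies Eq. 3
assert not check_even_padding(224, resnet18_downsampling)  # 224 violates it
```
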
Example 2: VGG-16 This network contains five max-pooling layers ($d = 5$), all of which use a stride of 2 and a kernel size of 2, and apply no padding. To warrant no uneven padding along the vertical dimension, the heights of the feature maps at all of these layers should hence satisfy:

$$
\forall i \in [1..d]: \quad h_{i-1} = 2 \cdot (h_i - 1) + 2 = 2 \cdot h_i
$$

Accordingly, the input dimensions should satisfy:

$$
h_0 = 2^d \cdot h_d \quad \text{and} \quad w_0 = 2^d \cdot w_d \tag{4}
$$

A $224 \times 224$ input satisfies these constraints since $224 = 2^5 \cdot 7$, causing no feature-map erosion at any downsampling layer and resulting in output feature maps of size $7 \times 7$.
# B THE EXTENT OF FOVEATION UNDER SAME 0-PADDING

We illustrate how the absolute extent of foveation under SAME 0-padding depends on the number of convolutional layers, and how its relative extent depends on the input size.

In the following maps, color represents the number of paths to the CNN output for each input pixel. Note: the checkerboard pattern is caused by downsampling layers in ResNet that use $3 \times 3$ kernels and a stride of 2.
Figure 9: The foveation maps of two ResNet architectures under 0-padding, illustrated with a $225 \times 225$ input. Compared with ResNet-50, ResNet-101 has twice the number of convolutional layers with non-unitary filter sizes. Accordingly, the extent of the foveation effect is doubled.

Figure 10: The foveation maps of ResNet-50 under 0-padding, illustrated with inputs of different sizes. The smaller the input, the larger the relative extent of foveation.

In the next figure, we illustrate how uneven application of padding impacts the foveation maps.

Note: It is possible to rectify the skewness in the second foveation map by alternating the side where one-sided padding is applied between successive downsampling layers. This, however, does not mitigate the skewness in the learned filters (see the next section).

Figure 11: The foveation maps of ResNet-50 under 0-padding, illustrated with two input sizes. With a $257 \times 257$ input, the padding is evenly applied at all downsampling layers, leading to a symmetric foveation map. With a $256 \times 256$ input, the padding is applied only to the left and top sides of the feature maps at all downsampling layers, which limits the number of convolutional input-output paths for pixels at the bottom and right sides, as evident in the skewed foveation map.

# C THE IMPACT OF THE PADDING METHOD ON LEARNED WEIGHTS

In the presence of uneven application of padding, 0-padding causes skewness in the learned weights because the filters are exposed more frequently to feature-map patches with zeros at their top and left sides. Redundancy methods such as circular or mirror padding mitigate such skewness because they fill the padding areas with values taken from the feature maps. PartialConv also mitigates such skewness because it treats the pixels in the padding area as missing and rescales the partial convolutional sum to account for them. Below we show the effectiveness of these alternatives in mitigating the skewness in three ResNet architectures.
(a) Mean filters of ResNet-18 trained on $224 \times 224$ images under two padding methods (0-padding vs. circular padding), reaching $69.93\%$ top-1 accuracy under 0-padding and $70.28\%$ under circular padding.

(b) Mean filters of ResNet-50 trained on $224 \times 224$ images under two padding methods (0-padding vs. PartialConv), reaching $76.15\%$ top-1 accuracy under 0-padding and $76.61\%$ under PartialConv.

Figure 12: Mean filters of two ResNet models trained on ImageNet with $224 \times 224$ images. The input size causes uneven application of padding, leading to frequent asymmetries in the mean filters under 0-padding. We illustrate how two alternatives, circular padding and PartialConv [23], enable learning highly symmetric mean filters despite the uneven application of padding.

Figure 13: Mean filters of ResNet-101 trained on ImageNet with $224 \times 224$ images under both 0-padding and PartialConv [23]. The input size causes uneven application of padding, leading to frequent asymmetries in the mean filters under 0-padding. In contrast, PartialConv produces highly symmetric mean filters, thanks to its treatment of pixels outside the feature map as missing values.
What if no padding is applied during downsampling? VGG models perform downsampling using $2 \times 2$ pooling layers that do not apply any padding. Accordingly, the mean filters do not exhibit significant skewness, even if the input size does not satisfy Eq. 4:

Figure 14: Mean filters of VGG-16 trained on ImageNet under different conditions (0-padding with $224 \times 224$ images, 0-padding with $225 \times 225$ images, and PartialConv with $224 \times 224$ images). Most mean filters exhibit high symmetry even when trained with $225 \times 225$ images, where the size violates Eq. 4.
# D THE IMPACT OF PADDING METHODS ON FEATURE-MAP ARTIFACTS

We show per-layer mean feature maps in ResNet-18 under different padding methods. The mean maps are averaged over 20 input samples generated at random.
Figure 15: Feature-map artifacts under zero padding. Line artifacts accumulate and become significant and asymmetric at deeper layers.

Figure 16: Circular padding largely preserves the randomness and mitigates line artifacts.

Figure 17: SYMMETRIC mirror padding also preserves the randomness and mitigates line artifacts.

Figure 18: REFLECT mirror padding also preserves the randomness and mitigates line artifacts.

Figure 19: PartialConv [23] highly preserves the symmetry of the feature maps. The scaling factors it uses can break the randomness at the boundary.

Figure 20: Feature-map artifacts of a VGG-19 model under distribution padding (interpolation mode) [30]. Due to the multiple resize operations used to fill the padding area, the artifacts grow from the boundary inwards. We use a saturated constant input to make the effect visible.
# E THE IMPACT OF ANTIALIASING ON THE LEARNED WEIGHTS

We demonstrate how antialiasing [43] significantly reduces the asymmetry of mean filters around downsampling layers, even in the presence of unevenly-applied zero padding.

Figure 21: Mean filters of four models trained on ImageNet with $224 \times 224$ images under 0-padding, both without and with antialiasing.

Figure 22: Mean filters of two models trained on ImageNet with $224 \times 224$ images under 0-padding, both without and with antialiasing.
# F FOVEATION ANALYSIS OF PADDING ALGORITHMS

Refer to http://mind-the-pad.github.io for an interactive and animated visual illustration of padding algorithms and their foveation behavior. This appendix serves as a print version.

Among the SAME padding algorithms discussed in the manuscript, two warrant that each input pixel is involved in an equal number of convolutional operations, leading to uniform foveation maps: circular padding and SYMMETRIC mirror padding. In contrast, this number varies under zero padding, REFLECT mirror padding, replication padding, and partial convolution.

We illustrate in detail how each padding algorithm treats the input pixels. For this purpose, we illustrate step by step how each pixel is processed by the convolutional kernel. We choose a set of pixels that is sufficient to expose the behavior of the respective algorithm. This set spans an area within two or three pixels from the boundary that encompasses all relevant cases for the analysis and is situated at the top-left corner. The behavior at the other corners is analogous.

All illustrations use a stride of 1. Except for VALID, all configurations warrant SAME padding.
- VALID padding: This algorithm is illustrated on a $3 \times 3$ kernel without dilation. A larger kernel size or dilation factor will increase the foveation effect.
- Zero padding: This algorithm is illustrated on a $3 \times 3$ kernel without dilation. A larger kernel size or dilation factor will increase the foveation effect.
- Circular padding: This algorithm is illustrated on a $3 \times 3$ kernel without dilation. It is straightforward to prove that the algorithm warrants equal treatment of the pixels irrespective of the kernel size or dilation factor. This is because it effectively applies circular convolution: once the kernel hits one side, it can seamlessly operate on the pixels of the other side. Circular convolution hence renders the feature map as infinite to the kernel, warranting that edge pixels are treated in the same manner as interior pixels.
- Mirror padding (SYMMETRIC): This algorithm warrants that each pixel is involved in the same number of convolutional operations. It is important to notice that, unlike under circular convolution, these operations do not utilize the kernel pixels uniformly, as we demonstrate in detail. We illustrate the algorithm's behavior under the following settings:
  - $3 \times 3$ kernel and dilation factor of 1.
  - $5 \times 5$ kernel and dilation factor of 1.
  - $3 \times 3$ kernel and dilation factor of 2.
  - $2 \times 2$ kernel and dilation factor of 1, along with a grouped padding strategy to compensate for uneven padding [41].
  - $4 \times 4$ kernel and dilation factor of 1, along with a grouped padding strategy.
- Mirror padding (REFLECT): This algorithm is illustrated on a $3 \times 3$ kernel without dilation.
- Replication padding: This algorithm is illustrated on a $5 \times 5$ kernel without dilation. We choose this kernel size since a $3 \times 3$ kernel under SAME padding would render the algorithm equivalent to SYMMETRIC mirror padding.
- Partial convolution: This algorithm is illustrated on a $3 \times 3$ kernel without dilation. Its foveation behavior is analogous to REFLECT mirror padding.
# VALID Padding, Illustrated on a 3x3 Kernel

(Step-by-step diagrams in the interactive version: the input, the number of convolutional operations each pixel is involved in, which kernel cells these operations utilize, and detailed derivations of the counts for the boundary pixels (a)-(i). Under VALID padding the counts are highly non-uniform: a corner pixel is involved in a single convolution, while interior pixels are involved in 9.)

# Zero Padding, Illustrated on a 3x3 Kernel and 1-Pixel Padding

(Step-by-step diagrams: the corner pixel (a) is involved in 4 convolutional operations, the edge pixels (b) and (c) in 6, and the interior pixel (d) in 9 with uniform kernel utilization; sum = 4, 6, 6, and 9, respectively. Other border cases are translations or rotations of (a) or (b).)

# Circular Padding, Illustrated on a 3x3 Kernel and 1-Pixel Padding

(Step-by-step diagrams: every pixel, whether at the boundary or in the interior, is involved in 9 convolutional operations with uniform kernel utilization. Other border cases are translations or rotations of the illustrated ones.)

# Mirror Padding (SYMMETRIC), Illustrated on a 3x3 Kernel and 1-Pixel Padding

(Step-by-step diagrams for boundary pixels (a)-(i): every pixel is involved in 9 convolutional operations (sum = 9, uniform counts); unlike under circular convolution, however, these operations do not utilize the kernel cells uniformly near the boundary.)

# Mirror Padding (SYMMETRIC), Illustrated on a 5x5 Kernel and 2-Pixel Padding

(Step-by-step diagrams: every pixel is involved in 25 convolutional operations (sum = 25, uniform counts), again with non-uniform kernel utilization near the boundary.)

# Mirror Padding (SYMMETRIC), Illustrated on a 3x3 Kernel and 1-Pixel Padding with a Dilation Factor of 2

(Step-by-step diagrams: every pixel is involved in 9 convolutional operations (sum = 9, uniform counts).)

# Mirror Padding (SYMMETRIC) with Grouping, Illustrated on a 2x2 Kernel and 1-Pixel Padding

A grouped padding strategy is applied to balance uneven padding (Wu et al. [41]): the 1-pixel padding is applied at the top-left, bottom-left, top-right, and bottom-right corners in four groups, and the resulting per-pixel operation counts are averaged.

<table><tr><td></td><td>Padded at top-left corner</td><td>Padded at bottom-left corner</td><td>Padded at top-right corner</td><td>Padded at bottom-right corner</td><td>Average (grouped padding strategy)</td></tr><tr><td rowspan="3">Convolutions involving (a)</td><td>3 2</td><td>2 1</td><td>2</td><td>1</td><td>2 0.75</td></tr><tr><td>2 2</td><td></td><td>1</td><td></td><td>0.75 0.5</td></tr><tr><td>sum = 9</td><td>sum = 3</td><td>sum = 3</td><td>sum = 1</td><td>sum = 4</td></tr><tr><td rowspan="3">Convolutions involving (b)</td><td>2 2</td><td>1 1</td><td>2 2</td><td>1 1</td><td>1.5 1.5</td></tr><tr><td>1 1</td><td></td><td>1 1</td><td></td><td>0.5 0.5</td></tr><tr><td>sum = 6</td><td>sum = 2</td><td>sum = 6</td><td>sum = 2</td><td>sum = 4</td></tr><tr><td rowspan="3">Convolutions involving (c)</td><td>2 1</td><td>2 1</td><td>1</td><td>1</td><td>1.5 0.5</td></tr><tr><td>2 1</td><td>2 1</td><td>1</td><td>1</td><td>1.5 0.5</td></tr><tr><td>sum = 6</td><td>sum = 6</td><td>sum = 2</td><td>sum = 2</td><td>sum = 4</td></tr><tr><td rowspan="4">Convolutions involving (d)</td><td>1 1</td><td>1 1</td><td>1 1</td><td>1 1</td><td>1 1</td></tr><tr><td>1 1</td><td>1 1</td><td>1 1</td><td>1 1</td><td>1 1</td></tr><tr><td>sum = 4</td><td>sum = 4</td><td>sum = 4</td><td>sum = 4</td><td>sum = 4</td></tr><tr><td>uniform</td><td>uniform</td><td>uniform</td><td>uniform</td><td>uniform</td></tr></table>

(Detailed step-by-step diagrams follow in the interactive version; cases (c) and (d) are rotated versions of (b) and (a), respectively.)

# Mirror Padding (SYMMETRIC) with Grouping, Illustrated on a 4x4 Kernel and 1-Pixel Padding

(Step-by-step diagrams analogous to the 2x2 case, with grouped padding balancing the uneven padding.)

# Replication Padding, Illustrated on a 5x5 Kernel and 2-Pixel Padding

(Step-by-step diagrams of the per-pixel operation counts and kernel utilization for boundary pixels (a)-(i); the counts are non-uniform near the boundary.)

# Mirror Padding (REFLECT), Illustrated on a 3x3 Kernel and 1-Pixel Padding

(Step-by-step diagrams of the per-pixel operation counts and kernel utilization for boundary pixels (a)-(i); the counts are non-uniform near the boundary.)

# Partial Convolution, Illustrated on a 3x3 Kernel

(Step-by-step diagrams of the weighted per-pixel operation counts: each convolution is weighted by its rescaling factor, $9/4 = 2.25$ when only 4 kernel cells overlap the image, $9/6 = 1.5$ when 6 cells overlap, and $9/9 = 1$ in the interior. The resulting weighted sums vary near the boundary, e.g., $6.25$ at the corner pixel, versus a uniform 9 in the interior.)
mindthepadcnnscandevelopblindspots/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da2a019acb842722acb6c6b65c4d2bc827775f3e7f1953270e3289a84fcfa948
+size 2835581

mindthepadcnnscandevelopblindspots/layout.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dfe0d06861ee16d5a8804a01accc26795d0ef93145594c8b7029ef6a29342ce9
+size 1496026

minimumwidthforuniversalapproximation/618e1fd2-0ea8-4023-9875-54c8e635fe2b_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8865155cad1762942c23c069c78efc5ca89bad97a56c1904bb26ab120211d131
+size 199665

minimumwidthforuniversalapproximation/618e1fd2-0ea8-4023-9875-54c8e635fe2b_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:30b364ae629d264626c6a1041e4d66fe8ff6181ff71d63b0aeb426e42d756fb2
+size 232502

minimumwidthforuniversalapproximation/618e1fd2-0ea8-4023-9875-54c8e635fe2b_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dfce37200aaec6f6209e24154ca3a6beabc4f46a58db4345e0d1a0bfcae86acb
+size 656240

minimumwidthforuniversalapproximation/full.md
ADDED
The diff for this file is too large to render. See raw diff

minimumwidthforuniversalapproximation/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b8b48b752811f90aa4111e9175f11d4bc3c594c4274776a132b9defff5e84322
+size 793667

minimumwidthforuniversalapproximation/layout.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:facf975e2115c8b83f9815bfb36edce245f491312ec2994074840abc7bc689de
+size 1874436

modelbasedvisualplanningwithselfsupervisedfunctionaldistances/91e786e7-cdc2-4c2c-957e-715d02d5d2ba_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b5c37836388d7265dc57acf3d63d7cbfc3d071ae8f08d538392c2954dbb0bfd6
+size 115284

modelbasedvisualplanningwithselfsupervisedfunctionaldistances/91e786e7-cdc2-4c2c-957e-715d02d5d2ba_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d1dda7778c5ca11660fe312abd2477eb61e0d4b4eb62aa8e6c14806b1f70ee7
+size 139562

modelbasedvisualplanningwithselfsupervisedfunctionaldistances/91e786e7-cdc2-4c2c-957e-715d02d5d2ba_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c6bd022720a99635d942904695642a242bec6489766f1f72041a46cbf7580d38
+size 4217680
modelbasedvisualplanningwithselfsupervisedfunctionaldistances/full.md
ADDED
@@ -0,0 +1,384 @@
| 1 |
+
# MODEL-BASED VISUAL PLANNING WITH SELF-SUPERVISED FUNCTIONAL DISTANCES
|
| 2 |
+
|
| 3 |
+
Stephen Tian $^{1}$ , Suraj Nair $^{2}$ , Frederik Ebert $^{1}$ , Sudeep Dasari $^{3}$ , Benjamin Eysenbach $^{3}$ , Chelsea Finn $^{2}$ , Sergey Levine $^{1}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ University of California, Berkeley
|
| 6 |
+
$^{2}$ Stanford University
|
| 7 |
+
$^{3}$ Carnegie Mellon University
|
| 8 |
+
|
| 9 |
+
# ABSTRACT
|
| 10 |
+
|
| 11 |
+
A generalist robot must be able to complete a variety of tasks in its environment. One appealing way to specify each task is in terms of a goal observation. However, learning goal-reaching policies with reinforcement learning remains a challenging problem, particularly when hand-engineered reward functions are not available. Learned dynamics models are a promising approach for learning about the environment without rewards or task-directed data, but planning to reach goals with such a model requires a notion of functional similarity between observations and goal states. We present a self-supervised method for model-based visual goal reaching, which uses both a visual dynamics model as well as a dynamical distance function learned using model-free reinforcement learning. Our approach learns entirely using offline, unlabeled data, making it practical to scale to large and diverse datasets. In our experiments, we find that our method can successfully learn models that perform a variety of tasks at test-time, moving objects amid distractors with a simulated robotic arm and even learning to open and close a drawer using a real-world robot. In comparisons, we find that this approach substantially outperforms both model-free and model-based prior methods. Videos and visualizations are available here: https://sites.google.com/berkeley.edu/mbold.
|
| 12 |
+
|
| 13 |
+
# 1 INTRODUCTION
|
| 14 |
+
|
| 15 |
+
Designing general-purpose robots that can perform a wide range of tasks remains an open problem in AI and robotics. Reinforcement learning (RL) represents a particularly promising tool for learning robotic behaviors when skills can be learned one at a time from user-defined reward functions. However, general-purpose robots will likely require large and diverse repertoires of skills, and learning individual tasks one at a time from manually-specified rewards is onerous and time-consuming. How can we design learning systems that can autonomously acquire general-purpose knowledge that allows them to solve many different downstream tasks?
|
| 16 |
+
|
| 17 |
+
To address this problem, we must resolve three questions. (1) How can the robot be commanded to perform specific downstream tasks? A simple and versatile choice is to define tasks in terms of desired outcomes, such as an example observation of the completed task. (2) What types of data should this robot learn from? In settings where modern machine learning attains the best generalization results (Deng et al., 2009; Rajpurkar et al., 2016; Devlin et al., 2018), a common theme is that excellent generalization is achieved by learning from large and diverse task-agnostic datasets. In the context of RL, this means we need offline methods that can use all sources of prior data, even in the absence of reward labels. As collecting new experience on a physical robot is often expensive, offline data is often more practical to use in real-world settings (Levine et al., 2020). (3) What should the robot learn from this data to enable goal-reaching? Similar to prior work (Botvinick & Weinstein, 2014; Watter et al., 2015; Finn & Levine, 2017; Ebert et al., 2018b), we note that policies and value functions are specific to a particular task, while a predictive model captures the physics of the environment independently of the task, and thus can be used for solving almost any task. This makes model learning particularly effective for learning from large and diverse datasets, which do not necessarily contain successful behaviors.
|
| 18 |
+
|
| 19 |
+
While model-based approaches have demonstrated promising results, including for vision-based tasks in real-world robotic systems (Ebert et al., 2018a; Finn & Levine, 2017), such methods face two major challenges. First, predictive models on raw images are only effective over short horizons, as uncertainty accumulates far into the future (Denton & Fergus, 2018; Finn et al., 2016; Hafner et al., 2019b; Babaeizadeh et al., 2017). Second, using such models for planning toward goals requires a notion of similarity between images. While prior methods have utilized latent variable models (Watter et al., 2015; Nair et al., 2018), $\ell_2$ pixel-space distance (Nair & Finn, 2020), and other heuristic measures of similarity (Ebert et al., 2018b), these metrics only capture visual similarity. To enable reliable control with predictive models, we instead need distances that are aware of dynamics.
|
| 20 |
+
|
| 21 |
+
In this paper, we propose Model-Based RL with Offline Learned Distances (MBOLD), which aims to address both of these challenges by learning predictive models together with image-based distance functions that reflect functionality, from offline, unlabeled data. The learned distance function estimates the number of steps that the optimal policy would take to transition from one state to another, incorporating not just visual appearance, but also an understanding of dynamics. However, when learning dynamical distances from task-agnostic data, supervised regression leads to overestimation, since the paths in the data are not all optimal for any task. Instead, we utilize approximate dynamic programming for distance estimation. While prior work has studied such methods to learn goal-conditioned policies in online model-free RL settings (Eysenbach et al., 2019; Florensa et al., 2019), we extend it to the offline setting and show that approximate dynamic programming techniques derived from Q-learning style Bellman updates can learn effective shortest path dynamical distances. Although this procedure resembles model-free reinforcement learning, we find empirically that it does not by itself produce useful policies. Instead, our method (Fig. 1) combines the strengths of dynamics models and distance functions, using the predictive model to plan over short horizons, and using the learned distances to provide a global cost that captures progress toward distant goals.
|
| 22 |
+

|
| 23 |
+

|
| 24 |
+
|
| 25 |
+
The primary contribution of this work is an offline, self-supervised approach for solving arbitrary goal-reaching tasks by combining planning with predictive models and learned dynamical distances. To our knowledge, our method is the first to directly combine predictive models on images with dynamical distance estimators on images, entirely from random, offline data without reward labels. Through our experimental evaluation on challenging robotic object manipulation tasks, including simulated object relocation and real-world drawer manipulation, we find that our method can outperform previously introduced reward specification methods for visual model-based control with a relative performance improvement of at least $50\%$ across all tasks, and compares favorably to prior work in model-based and model-free RL. We also find that combining Q-functions with planning improves dramatically over policies directly learned with model-free RL.
|
| 26 |
+
|
| 27 |
+

|
| 28 |
+
Figure 1: The robot must find actions that quickly achieve the desired goal. State transitions and the true optimal distances between states are unknown, so our method learns an approximate shortest distance function and dynamics model directly on images. These models allow the robot to find the shortest path to the goal at test-time.
|
| 29 |
+
|
| 30 |
+
# 2 RELATED WORK
|
| 31 |
+
|
| 32 |
+
Offline and Model-based RL: A number of prior works have studied the problem of learning behaviors from existing offline datasets. While recent progress has been made in applying model-free RL techniques to this problem of offline or batch RL (Fujimoto et al., 2019; Wu et al., 2019; Kumar et al., 2019; Nair et al., 2020b), one approach that has shown promise is offline model-based RL (Lowrey et al., 2019; Kidambi et al., 2020; Yu et al., 2020; Argenson & Dulac-Arnold, 2020), where the agent learns a predictive model of the world from data. Such model-based methods have seen success both in the offline and online RL settings, and have a rich history of being effective for planning (Deisenroth & Rasmussen, 2011; Watter et al., 2015; McAllister & Rasmussen, 2016; Chua et al., 2018; Amos et al., 2018; Hafner et al., 2019b; Nagabandi et al., 2018; Kahn et al., 2020; Dong et al., 2020) or policy optimization (Sutton, 1991; Weber et al., 2017; Ha & Schmidhuber, 2018; Janner et al., 2019; Wang & Ba, 2019; Hafner et al., 2019a). However, the vast majority of these prior works consider the single task setting where the agent aims to maximize a single task reward. In contrast, in this work we circumvent the need for task rewards by adopting a self-supervised multi-task approach, where a single learned model is used to perform a variety of tasks, specified in a flexible and general way by desired outcomes, i.e., goal images.
|
| 33 |
+

|
| 34 |
+

|
| 35 |
+
|
| 36 |
+
Self-supervised goal reaching: While the standard RL problem involves optimizing for a task-specific reward, an alternative and potentially more general formulation involves learning a generic goal reaching policy, without task-specific reward labels. In fact, a number of prior works learn goal-conditioned policies using model-free RL (Kaelbling, 1993; Nair et al., 2018; Mandlekar et al., 2019; Nair et al., 2020a), or variants of goal-conditioned behavioral cloning (GCBC) (Ghosh et al., 2019; Ding et al., 2019; Lynch et al., 2020). In our experiments, we show that our method outperforms both model-free approaches and goal-conditioned behavioral cloning. A number of methods combine model-free and model-based elements by planning over a graph representation (Eysenbach et al., 2019; Nasiriany et al., 2019; Savinov et al., 2018; Liu et al., 2020). Such methods can struggle in higher dimensions, where constructing graphs that adequately cover the space may require an excessive number of samples. We compare to these methods in our experiments. Similarly to Finn & Levine (2017); Ebert et al. (2018b); Nair & Finn (2020); Yen-Chen et al. (2019); Suh & Tedrake (2020), our method uses an action-conditioned video prediction model to generate plans. However, these prior methods generally utilize hand-crafted image similarity reward measures such as $\ell_2$ pixel-error (Ebert et al., 2018a; Nair & Finn, 2020) and pixel-flow prediction (Finn & Levine, 2017). In complex scenes, this can become a major bottleneck: predictions degrade rapidly further in the future, making an informative image similarity metric critical for effective planning. We propose to learn functional similarity metrics in terms of dynamical distances, which we find can be combined with predictive models to attain significantly improved results.
|
| 37 |
+
|
| 38 |
+
Dynamical distance learning: Our method learns dynamical distances – distances that represent shortest paths – from offline data. In the literature, dynamical distances have been learned via direct regression using online data (Hartikainen et al., 2019), representation learning (Warde-Farley et al., 2018; Yu et al., 2019b), or via Q-learning by relabeling goals (Eysenbach et al., 2019; Florensa et al., 2019). While these last two works are most similar to ours, in that they also employ approximate dynamic programming to learn distances, our method directly combines these dynamical distances with visual predictive models and planning. Lastly, while prior work has also explored combining model-based planning with value functions (Zhong et al., 2013; Lowrey et al., 2019; Hafner et al., 2019a; Schrittwieser et al., 2019; Argenson & Dulac-Arnold, 2020), these works consider the single task domain with a reward function, while our learned value function considers the multi-task goal reaching domain from entirely random, offline data without reward labels.
|
| 39 |
+
|
| 40 |
+
# 3 THE SELF-SUPERVISED OFFLINE RL PROBLEM STATEMENT
|
| 41 |
+
|
| 42 |
+
In this section, we introduce notation and define the problem setting. We will employ a Markov decision process (MDP) with state observations $s_t \in S$ and actions $a_t \in \mathcal{A}$ , both indexed by time $t \in \{0, 1, \dots, H\}$ , where $H$ denotes the maximum episode length. The initial state is sampled from an initial state distribution $s_0 \sim p_0(s_0)$ , and subsequent states are sampled according to Markovian dynamics: $s_{t + 1} \sim p(s_{t + 1} \mid s_t, a_t)$ . Actions are sampled $a_t \sim \pi(a_t \mid s_t, s_g)$ from a policy that is conditioned on both the current state and a goal state $s_g \in S$ . In our experiments, both the state and goal are images (i.e., $S = \mathbb{R}^{H \times W \times 3}$ ).
|
| 43 |
+
|
| 44 |
+
We tackle offline learning in this setting, assuming access to a fixed dataset $\mathcal{D}$ consisting of trajectories $\{s_0, a_0, s_1, \ldots, s_T\}$ of the agent interacting with the environment. This data can include any environment interactions, from expert demonstrations to trajectories which are not particularly successful at any task. In our experiments, we use data collected using a random policy, which is inexpensive to obtain. The agent does not have access to the environment to collect additional training data. Given this dataset, the objective is to determine the optimal goal-conditioned policy $\pi^{\star}(a_t \mid s_t, s_g)$ , under which the agent is able to transition to any goal state $s_g$ from any starting state $s_t$ in the minimum number of time steps possible. Note that unlike in the standard formulation of the RL problem, the agent does not receive any reward signal from its environment.
|
| 45 |
+
|
| 46 |
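For concreteness, the offline dataset can be pictured as a plain list of reward-free trajectories. A minimal sketch of this layout (the field names are illustrative, not from the paper):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Trajectory:
    """One element of the offline dataset D: observations and actions only;
    no rewards, goal labels, or success indicators are stored."""
    obs: np.ndarray      # (T + 1, H, W, 3) image observations s_0 ... s_T
    actions: np.ndarray  # (T, action_dim) actions a_0 ... a_{T-1}

dataset: list[Trajectory] = []  # e.g. filled by rolling out a random policy
```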
+

|
| 47 |
+
Figure 2: Model-based visual goal reaching: (Left) During offline learning, we train an image-based predictive model and distance function on the same random dataset. (Right) At test time, we use the learned distance model for MPC, plugging in the learned distance as a cost function.
|
| 48 |
+
|
| 49 |
+

|
| 50 |
+
|
| 51 |
+
# 4 MODEL-BASED VISUAL GOAL-REACHING
|
| 52 |
+
|
| 53 |
+
In this section we will introduce our method, MBOLD, for offline, goal-conditioned reinforcement learning. MBOLD, illustrated in Fig. 2, is composed of two neural networks: a predictive model and a learned distance function. The video-predictive dynamics model allows the agent to predict the result of hypothetical sequences of actions. However, this model cannot accurately predict far into the future, and has no notion of whether the predicted outcomes are desirable. Thus, we also learn a distance function, corresponding to a value function with a self-supervised goal-reaching reward, which will estimate the timestep length of the shortest path between a predicted state and a given goal. Both networks are trained on the same offline dataset.
|
| 54 |
+
|
| 55 |
+
At test-time, we use the learned dynamics model and distance function for model-predictive control (MPC). MBOLD predicts future states for candidate action sequences using the learned dynamics model, and uses the learned distance function to determine which action sequence will lead the agent closest to the goal. The first of the actions is then executed, and planning repeats upon receiving the subsequent observation from the environment. The remainder of this section describes how we learn the dynamics model and distance function, and use them to perform control.
|
| 56 |
+
|
| 57 |
+
Dynamics learning. Our method learns environment dynamics in order to solve for actions during test time, without an explicit task reward signal during training. MBOLD can use arbitrary image-based forward models, including latent variable models (Hafner et al., 2019b; Lee et al., 2019). The particular choice of model is a design decision when implementing our method. In our implementation, we use a convolutional video prediction model adapted from SAVP (Lee et al., 2018). The network takes as input the current observation $s_t$ and a sequence of $h$ actions $a_{t:t + h - 1}$ and returns a prediction for the next $h$ image observations, $\hat{f}_{\theta}(s_t,a_{t:t + h - 1}) = \{\hat{s}_{t + 1},\dots ,\hat{s}_{t + h}\}$ . We train this model to minimize the $\ell_2$ image reconstruction loss:
|
| 58 |
+
|
| 59 |
+
$$
|
| 60 |
+
\min_{\theta} \mathbb{E}_{\mathcal{D}} \left[ \frac{1}{h} \sum_{t^{\prime} = t + 1}^{t + h} \left\| \hat{f}_{\theta}\left(s_{t}, a_{t:t + h - 1}\right)\left[t^{\prime} - t\right] - s_{t^{\prime}} \right\|^{2} \right]. \tag{1}
|
| 61 |
+
$$
|
| 62 |
+
|
| 63 |
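In code, the objective in Equation 1 is just an average squared pixel error over the $h$ predicted frames. A minimal PyTorch-style sketch; `model` and the batch layout are assumptions of this illustration, not the paper's released code:

```python
import torch

def dynamics_loss(model, batch, h):
    """Eq. 1: mean l2 reconstruction error over an h-step rollout."""
    s_t = batch["obs"][:, 0]              # (B, C, H, W) current frame s_t
    actions = batch["actions"][:, :h]     # (B, h, action_dim) a_{t:t+h-1}
    targets = batch["obs"][:, 1 : h + 1]  # (B, h, C, H, W) frames s_{t+1..t+h}
    preds = model(s_t, actions)           # (B, h, C, H, W) predicted frames
    return ((preds - targets) ** 2).mean()
```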
+
Distance learning. Our method also learns a dynamical distance function, so that it can evaluate a functional notion of distance from the predicted states to the goal state, for use as a planning cost. However, the environment does not provide a reward signal that might be used to deduce these distances. Indeed, the offline dataset is typically composed of highly suboptimal trajectories, so our method may not even have access to examples of shortest path trajectories between states. Our key observation is that a goal-conditioned Q-function trained on a modified MDP with an indicator reward function yields values that correspond to shortest path distances in the original environment. Thus, Q-learning-like methods can recover optimal distance functions even from sub-optimal data.
|
| 64 |
+
|
| 65 |
+
We therefore formulate an MDP by augmenting environment trajectories with the reward function $r(s_{t}, a, s_{t + 1}, g) = \mathbf{1}_{s_{t + 1} = g}$ , adding a discount factor of $\gamma$ , and considering episodes terminated once they reach the goal state. Note that $s_{t}$ , $s_{t + 1}$ , and $g$ all represent images, and the reward is only given when the next state and goal images exactly match. During training, goals are sampled according to a distribution on $S$ , which we will discuss later. If $\gamma < 1$ , the Q-values for a policy that maximizes expected discounted returns in this MDP can be directly mapped to shortest path distances. Specifically, in discrete state environments, the optimal Q-function can be written as $Q(s,a,g) = \gamma^{d(s,a,g)}$ , where $d(s,a,g)$ is the shortest path distance between $s$ and $g$ after taking action $a$ . Similarly, we can recover $d(s,a,g) = \log_{\gamma}Q(s,a,g)$ . Ultimately, our Q-learning approach corresponds to the following Bellman error optimization objective:
|
| 66 |
+

|
| 67 |
+

|
| 68 |
+
|
| 69 |
+
$$
|
| 70 |
+
\min_{\phi} \mathbb{E}_{s_{t}, a_{t}, s_{t + 1} \sim \mathcal{D},\, g \sim \mathcal{S}} \left[ \left( Q_{\phi}\left(s_{t}, a_{t}, g\right) - \left( \mathbf{1}_{s_{t + 1} = g} + \gamma\, \mathbf{1}_{s_{t + 1} \neq g} \max_{a_{t + 1}} Q_{\phi}\left(s_{t + 1}, a_{t + 1}, g\right) \right) \right)^{2} \right]. \tag{2}
|
| 71 |
+
$$
|
| 72 |
+
|
| 73 |
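A minimal sketch of this Bellman-error objective; the networks, the `reached` indicator, and the batch layout are assumptions of this illustration rather than the paper's released code:

```python
import torch
import torch.nn.functional as F

def distance_q_loss(q_net, q_target, actor, batch, gamma=0.9):
    """Eq. 2: regress Q(s, a, g) toward the indicator-reward backup."""
    s, a, s1, g = batch["s"], batch["a"], batch["s_next"], batch["g"]
    reached = batch["reached"].float()   # 1 where s_next matches the goal g
    with torch.no_grad():                # target side of the backup
        a1 = actor(s1, g)                # approximates argmax_a Q(s1, a, g)
        target = reached + gamma * (1.0 - reached) * q_target(s1, a1, g)
    return F.mse_loss(q_net(s, a, g), target)

# Distances are then read off as d(s, a, g) = log_gamma Q(s, a, g).
```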
+
In practice, we use a deep network to represent the Q-function. During training, we sample transitions $(s_t, a_t, s_{t+1}, g)$ to optimize the objective in Equation 2. The first three components $(s_t, a_t, s_{t+1})$ can be sampled randomly from the dataset. However, trajectories in the offline dataset may not be directed towards any particular goals, so a key challenge lies in selecting which goals $g$ to choose. The next section describes our approach to sampling these goals.
|
| 74 |
+
|
| 75 |
+
Selecting goals for relabeling transitions. Naively choosing $g$ , say by sampling random states uniformly from the dataset, will provide an extremely sparse reward signal, as two random state images will almost never be exactly identical. The sparse reward problem can be mitigated by selectively sampling as goals the states that were actually reached in future time steps along the same trajectory as $s_t$ (Kaelbling, 1993; Andrychowicz et al., 2017). More precisely, to sample goals for a transition at time step $t$ , we sample a discrete time offset $\Delta \sim \mathrm{Geom}(p)$ , where $p \in [0,1]$ is a hyper-parameter, and use the state at time $t + \Delta$ as the goal. Note that if $\Delta = 1$ , the reward for this transition is 1, avoiding the sparsity issue.
|
| 76 |
+
|
| 77 |
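A sketch of this relabeling step (the value of $p$ and the array layout are illustrative):

```python
import numpy as np

def sample_positive_goal(traj_obs, t, p=0.3, rng=None):
    """Relabel the transition at time t with a goal actually reached later in
    the same trajectory: Delta ~ Geom(p), goal = s_{t+Delta}."""
    rng = rng if rng is not None else np.random.default_rng()
    delta = int(rng.geometric(p))                # Delta >= 1
    goal_t = min(t + delta, len(traj_obs) - 1)   # clip to trajectory end
    reward = 1.0 if goal_t == t + 1 else 0.0     # reward iff goal is s_{t+1}
    return traj_obs[goal_t], reward
```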
+
However, relabeling all transitions in this way creates a major issue: since the distance function would only be trained on goals that were actually reached, it would systematically underestimate the distance to unreachable goals. Put another way, goals that were not reached from $s_t$ would be out-of-distribution goals for the resulting Q-function. We found this to result in poor performance. In practice, prior work (Kaelbling, 1993; Andrychowicz et al., 2017) actually relabels with a mixture of reached goals and commanded but not necessarily reached goals.
|
| 78 |
+
|
| 79 |
+
These prior methods can obtain such "negative" goals based on the goals that were commanded during online data collection. This is impossible in our setting, since our offline data may not even have been collected with a goal-directed policy. We therefore need a procedure to select such "negative" goals that are distant yet relevant. Randomly selecting dataset states will lead to pairs of images that are clearly distant with high probability (e.g., pairs in which all objects and the robot have been moved), but not necessarily relevant. We would like a goal sampling procedure that produces less obvious examples of distant states, which are more informative for training. Hard negative mining is one example of such a procedure, where pairs are selected based on the model's predictions, but is computationally expensive with large datasets.
|
| 80 |
+
|
| 81 |
+
Instead, we build upon the intuition that distance functions are likely to pay excessive attention to fully actuated factors in the state, such as the position of the robot's arm, because they are strongly predictive of distances. We propose sampling "negative" goal states $g$ which have similar actuated components to reached states. When randomly sampling pairs of states under this constraint, the underactuated dimensions (e.g. the objects), which are generally not known, are likely to have distinct positions. Hence, these data points can serve as informative hard negatives that encourage the model to pay more attention to the difficult, underactuated parts of the state. Unlike hard negative mining, this sampling approach is computationally inexpensive, as it does not rely on the current distance function, and practical, as actuated components of the state can typically be measured through encoders on the actuator. In practice, we sample these "negative" goals from observations across all dataset trajectories via nearest-neighbors search, using arm joint $\ell_2$ distance as the similarity key. Note that this does assume proprioceptive state information from the agent (e.g. robot joint angles), which is almost always available in real-world robotics settings, but does not require knowledge about object positions or other ground-truth environment information. While we use actuator information for generating training examples, the distance function and dynamics model use only image observations and actions as inputs. See Appendix A.1 for details.
|
| 82 |
+
|
| 83 |
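A sketch of this negative goal sampling step, using the FAISS library that Appendix A.1 says implements the $k$-NN search (shapes and names here are illustrative):

```python
import faiss
import numpy as np

def build_negative_sampler(arm_joints, k=10):
    """Index dataset states by arm joint vector so 'negative' goals can be
    drawn from other trajectories where the arm is in a similar pose but
    the (unobserved) object positions likely differ."""
    index = faiss.IndexFlatL2(arm_joints.shape[1])   # l2 joint-distance key
    index.add(arm_joints.astype("float32"))

    def sample_negative(query_joints, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        _, ids = index.search(query_joints.astype("float32")[None], k)
        return int(rng.choice(ids[0]))   # dataset index of a hard negative

    return sample_negative
```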
+
Control via MBOLD. At test-time, the learned distance function and dynamics model are used together to solve control tasks via MPC. In other words, the dynamics model predicts how candidate
|
| 84 |
+
|
| 85 |
+

|
| 86 |
+
Figure 3: Comparative evaluation results: (Left) Example initial states and task definitions for Sawyer object pushing and Franka door sliding simulated environments, as well as the real-world drawer closing task. Note that "hard" tasks require the arm to take detours from moving to the final arm position in order to relocate the object. Arrows indicate successful trajectories. (Right) MBOLD is consistently able to outperform prior methods on these harder manipulation tasks, and by a larger margin on the most difficult tasks ("hard" variants of object pushing and door sliding). Error bars show standard deviations over 5 seeds.
|
| 87 |
+
|
| 88 |
+
actions will affect the environment, and the distance model rates predicted sequences based on which bring the agent closest to the user-defined goal state. This mechanism works as follows: given the current state $s_t$ , goal state $s_g$ , candidate actions $a_{t:t + h - 1}$ , and predicted future states $\hat{f}_{\theta}(s_t, a_{t:t + h - 1})$ from the learned dynamics model, the learned distance function calculates
|
| 89 |
+
|
| 90 |
+
$$
|
| 91 |
+
V\left(a_{t:t + h - 1}\right) = \max_{\alpha} Q_{\phi}\left(\hat{f}_{\theta}\left(s_{t}, a_{t:t + h - 1}\right)[h],\, \alpha,\, s_{g}\right). \tag{3}
|
| 92 |
+
$$
|
| 93 |
+
|
| 94 |
+
In practice, the maximization over $\alpha$ is performed by an actor network learned simultaneously with the Q-function. $V(a_{t:t + h - 1})$ acts as an objective function for MPC. Plainly, the controller's goal is to find candidate actions $a_{t:t + h - 1}$ which minimize the dynamical distance to the goal $h$ steps into the future. After this process completes, the best action is executed by the agent. Note that this controller re-plans after every action taken in the environment (i.e every timestep), in order to prevent errors in dynamics prediction from compounding.
|
| 95 |
+
|
| 96 |
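A sketch of Equation 3 as a scoring function for the planner; `dynamics`, `actor`, and the tensor shapes are assumptions of this illustration:

```python
import torch

def make_plan_objective(dynamics, q_net, actor, s_t, goal):
    """Eq. 3: score candidate action sequences by the Q-value of the final
    predicted frame; `actor` approximates the max over alpha."""
    def objective(action_seqs):                   # (N, h, act_dim)
        n = action_seqs.shape[0]
        s_h = dynamics(s_t.expand(n, *s_t.shape), action_seqs)[:, -1]
        g = goal.expand(n, *goal.shape)           # broadcast the goal image
        return q_net(s_h, actor(s_h, g), g)       # higher Q = nearer goal
    return objective
```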
+
MPC Algorithm. MBOLD uses the CEM algorithm (De Boer et al., 2005) to optimize the objective in Equation 3. It begins by sampling $N$ random trajectories from a prior multi-variate Gaussian distribution. Then, the top $K$ actions which score highest according to $V(a_{t:t + h - 1})$ are selected as candidates. A new Gaussian distribution is fit on these candidates, and the loop starts over again by sampling fresh actions from this distribution. After $I$ iterations, the loop finishes and returns the best action found so far. See Appendix A.2 for full CEM implementation details.
|
| 97 |
+
|
| 98 |
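A sketch of this inner planning loop; the values of $N$, $K$, and the iteration count are illustrative (Appendix A.2 has the actual settings):

```python
import numpy as np

def cem_plan(objective, horizon, act_dim, n=200, k=20, iters=3, rng=None):
    """Cross-entropy method over action sequences for MPC."""
    rng = rng if rng is not None else np.random.default_rng()
    mean = np.zeros((horizon, act_dim))
    std = np.ones((horizon, act_dim))
    for _ in range(iters):
        samples = mean + std * rng.standard_normal((n, horizon, act_dim))
        scores = objective(samples)                  # score via Eq. 3
        elites = samples[np.argsort(scores)[-k:]]    # keep the top-K
        mean, std = elites.mean(axis=0), elites.std(axis=0)
    return mean[0]   # execute only the first action, then re-plan (MPC)
```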
+
# 5 EXPERIMENTS
|
| 99 |
+
|
| 100 |
+
Our experiments aim to answer three questions: (1) How does MBOLD compare to prior model-based and model-free methods when learning to reach goals from task-agnostic offline data? (2) Can our method perform visual robotic manipulation in real-world settings? (3) How do different dynamical distance learning methods compare to MBOLD in terms of providing effective distance functions for planning?
|
| 101 |
+
|
| 102 |
+
We first evaluate our method, prior methods, and baselines on three simulated tasks with visual observations: (1) a simple reaching task that requires moving a Sawyer 7-DoF arm to a goal location, which provides a way to validate implementations of all methods, (2) object pushing, in which a Sawyer arm must relocate an object to a particular goal location, in environments with 1 or 3 objects, and (3) door sliding, which requires repositioning a sliding door with a Franka 7-DoF arm. These tasks are challenging because they require long-horizon planning without access to intermediate rewards.
|
| 103 |
+
|
| 104 |
+
For each task, we define the action space $\mathcal{A}$ such that actions control the Cartesian position of the robot's end-effector, as well as the robot's gripper. We randomly generate a set of 100 test goals, consisting of a goal image and starting state, for each task, on which all methods are tested. A trial is considered successful if, for each relevant object (e.g., the slide position for the door sliding task), the final distance to the goal is below a given threshold. For the object relocation task, we evaluate each method on two scenes, containing one and three objects. All evaluation goals require the robot to move one of the objects, with the others serving as distractors. We also study two levels of difficulty: "regular," where goals are generated from random trajectories in which the object moves a certain minimum distance, and "hard," where the arm is additionally constrained to be distant from the object in the goal observation, requiring the robot to push the object and then withdraw the arm. We depict the tasks in Fig. 3 (left) and provide full experimental details in Appendix A.3.
|
| 105 |
+

|
| 106 |
+

|
| 107 |
+
Figure 4: Real-world robot evaluation: (Left) Third-person view of an example task setting and (Right) results. Success rates are computed using 10 trials for each task. Each task is specified by a goal image, and as in previous experiments, the same trained models are used across tasks. Task success is determined by the final position of the drawer only.
|
| 108 |
+

|
| 109 |
+

|
| 110 |
+

|
| 111 |
+
<table><tr><td></td><td>MBOLD (ours)</td><td>Visual Foresight ($\ell_2$ pixel error)</td></tr><tr><td>Drawer open</td><td>8/10</td><td>5/10</td></tr><tr><td>Drawer close</td><td>7/10</td><td>0/10</td></tr></table>
|
| 112 |
+

|
| 113 |
+

|
| 114 |
+
|
| 115 |
+
For all tasks, we generate an offline dataset by running random policies for 10,000 episodes of 30 timesteps each. We provide only this offline dataset to all methods, with no online training. At test time, the agent only receives the goal image and current observation at each step, and no intermediate rewards besides those that it computes itself.
|
| 116 |
+
|
| 117 |
+
Comparative evaluation. We compare MBOLD to prior work in model-based and model-free RL. As MBOLD uses purely offline data and does not require rewards from the environment, we make modifications to these methods where necessary to provide a fair comparison. Many of these prior methods (though not all) require the environment to provide a ground truth reward signal. In this case, we provide these methods with simple "uninformative" rewards, following prior work (Nair et al., 2018), which consist of the MSE between the current and goal image. Many of these methods were initially presented in the online setting. The offline setting is harder for RL methods (Fujimoto et al., 2019; Wu et al., 2019; Kumar et al., 2019), partially explaining their poor performance.
|
| 118 |
+
|
| 119 |
+
See Appendix B for details on all baselines. We compare MBOLD to the following methods:
|
| 120 |
+
|
| 121 |
+

|
| 122 |
+
Figure 5: Comparisons on the simple reaching task, where most methods attain good performance.
|
| 123 |
+
|
| 124 |
+
- Reinforcement Learning with Imagined Goals (RIG) (Nair et al., 2018): RIG is a model-free RL method for visual goal-reaching. Unlike the other methods, we still allow RIG to collect additional online data to train its policy.
|
| 125 |
+
- Dreamer (Hafner et al., 2019a): Dreamer, a model-based method for image-based tasks, also uses a combination of value functions and planning, but uses online data collection and, crucially, ground truth reward signals. We adapt Dreamer for the offline, reward-free setting.
|
| 126 |
+
- Dreamer $\ell_{2}$ arm distance: We additionally compare with an "oracle" version of Dreamer that uses privileged information about the ground-truth position of the arm.
|
| 127 |
+
- Search on the Replay Buffer (SoRB) (Eysenbach et al., 2019): SoRB performs planning on a graph constructed using learned distances, learned without a reward function.
|
| 128 |
+
- Goal-Conditioned Behavior Cloning: We train a behavior cloning model using goals sampled from observations achieved further in a given trajectory. This can be viewed as an offline variant of GCSL (Ghosh et al., 2019) or a non-recurrent version of Lynch et al. (2020).
|
| 129 |
+
- Visual Foresight (Ebert et al., 2018b): Visual Foresight also plans with an action-conditioned video prediction model, but uses (among other choices) $\ell_2$ pixel error as a cost function.
|
| 130 |
+
|
| 131 |
+
Since all methods are trained from offline data with no additional environment interaction, we present final performance on the test goals as a bar graph, rather than learning curves. The comparison on the simple reaching task is shown in Figure 5, and suggests that on this task, many of the methods perform quite well. However, on the substantially more complex tasks, shown in Figure 3, we see clearer differentiation between the different algorithms. On harder object pushing tasks, MBOLD attains the best performance, by a considerable margin. Interestingly, simple goal-conditioned behavioral cloning actually represents one of the strongest baselines on this task. On the hardest simulated door sliding task, our method attains the best performance by a large margin.
|
| 132 |
+
|
| 133 |
+

|
| 134 |
+
Figure 6: Heatmap visualizations of our distance functions. Each pixel in every heatmap represents the distance between a generated starting image containing the object at that $(x, y)$ coordinate and the fixed goal image (pictured on left). All three distance functions show a minimum when the object position is near the goal position of $(0.1, -0.05)$ . However, our Q-function produces a better-shaped signal than the direct regression model, and avoids occlusion errors, such as the local minimum at high $y$-values, that plague pixel-wise MSE.
|
| 135 |
+
|
| 136 |
+

|
| 137 |
+
|
| 138 |
+

|
| 139 |
+
|
| 140 |
+

|
| 141 |
+
|
| 142 |
+
Real-world evaluation. We additionally evaluate MBOLD in a real-world drawer manipulation task using a 7-DoF Franka arm. We train the dynamics model and distance function on a preexisting dataset of 1000 trajectories collected by a weakly supervised batch exploration algorithm in prior work (Chen et al., 2020). As shown in Figure 4, MBOLD outperforms visual foresight on both manipulation tasks with visual inputs, particularly on drawer closing, for which simply matching the arm position in the goal image does not solve the task. The success of our method in this domain highlights that our method can be applied to offline datasets collected using different exploration strategies. While MBOLD performs well on manipulation tasks even with complex real-world visuals, we find that the negative sampling procedure we adopt limits precision in matching highly actuated components such as the arm position. We perform additional analysis through simulated experiments detailed in Appendix E.1. Videos of both simulated and real-world task execution can be found at the project website: https://sites.google.com/berkeley.edu/mbold.
|
| 143 |
+
|
| 144 |
+
Qualitative analysis. In this section, we examine the distance functions learned by MBOLD, and show qualitatively that our learned distances better model the dependence of functional separation between two states on the relative positions of objects in their scenes. Figure 6 presents heatmaps of predicted distances for a fixed goal image on the object pushing task, as the initial observation is varied based on object position. The robot arm is set to the same position in each initial image. We see that the Q-function is able to learn a relatively well-shaped distance which accounts for the object position.
|
| 145 |
+
|
| 146 |
+
We additionally visualize baseline distance models for comparison. First, we look at an ablation of our distance model, which is trained via regression to map pairs of states randomly sampled from a given dataset trajectory to the number of timesteps separating them in that trajectory, and can be viewed as an offline variant of DDL (Hartikainen et al., 2019). We call this scheme, which effectively predicts random walk distances, "temporal distance regression." The second baseline we compare to is pixel-wise mean-squared error.
|
| 147 |
+

|
| 148 |
+

|
| 149 |
+
|
| 150 |
+
We find that the temporal distance regression model produces more sharply peaked distances than the Q-function, and performed worse as a reward signal during planning, as we find through our ablation experiments. The pixel-wise MSE metric produces low distances near the goal object position, but is impacted by occlusions of the objects as well as the position of the visually pronounced arm. While this analysis does not necessarily directly correspond to control performance, as it ignores the movement of the robot, it demonstrates that our learned distances are aware of the functional similarity of nearby object positions, despite the fact that they are learned entirely from images with actions corresponding to the movement of the arm, not the object.
|
| 151 |
+
|
| 152 |
+
Ablations. Our ablation studies aim to answer three questions: (1) How does Q-learning for learning dynamical distances compare to alternative distance metrics, such as distance in the latent space of a VAE, or dynamical distances learned using direct regression on temporal distances found in random data? (2) How important is mining negative transitions to the performance of our method? (3) How beneficial is it to combine the learned distance function with planning through a predictive model, as compared to directly acting using the learned policy, as in standard model-free offline RL?
|
| 153 |
+

|
| 154 |
+

|
| 155 |
+
Figure 7: Our learned distance function yields higher success rates than alternative approaches from prior work, such as the $\ell_2$ distance of a VAE latent space (Nair et al., 2018) and temporal distance regression (Hartikainen et al., 2019). We also see consistent improvements from using negative transition mining, especially on "hard" tasks.
|
| 156 |
+

|
| 157 |
+

| 158 |
+
|
| 159 |
+
To answer the first two questions, we perform experiments in the object pushing domain. We evaluate alternative distance metrics for visual planning, by duplicating the planning setup, using the same dynamics model, and only modifying the metric used for scoring candidate trajectories. The first distance we consider is Euclidean distance in the latent space of a VAE, that is, $d(s,g) = \| e(s) - e(g)\| _2$ , where $e$ is a learned encoder, which resembles the reward function used in prior work on image-based goal reaching (Nair et al., 2018). The second is the direct temporal distance regression model described previously. As shown in Figure 7, Q-function distances outperform alternative distances on all of the object pushing tasks. While the temporal distance regression scheme provides competitive performance in some settings, it often provides overestimates of distances between states rather than shortest paths, as shown qualitatively in Figure 6.
|
| 160 |
+
|
| 161 |
+
We also find that the negative transition mining scheme consistently improves performance, and that it is particularly important for the "hard" tasks. We hypothesize this is because augmenting the training data in this way causes learned distance functions to better take into account the positions of objects in the scene, rather than just visually prominent components such as arm position.
|
| 162 |
+
|
| 163 |
+
To address the third question, we compare our method, which uses learned distances for planning, to the policy discovered when performing Q-learning to learn dynamical distances. As shown in Table 1, the policy learned directly from offline RL alone is greatly outperformed by MBOLD. We hypothesize that this is due to challenges in advantage learning from offline data with extremely sparse rewards.
|
| 164 |
+

|
| 165 |
+
Table 1: Comparison of success rates $\pm$ standard deviation across 5 random training seeds for our method, which combines Q-functions and planning with a model, to a baseline that uses the Q-function to choose actions directly without planning.
|
| 166 |
+

|
| 167 |
+
<table><tr><td></td><td>Q-function + planning</td><td>Q-function only</td></tr><tr><td>1 object push</td><td>55.2 ± 4.3%</td><td>19.2 ± 3.6%</td></tr><tr><td>3 object push</td><td>44.8 ± 2.9%</td><td>15.6 ± 3.6%</td></tr><tr><td>Reach</td><td>94.4 ± 3.3%</td><td>31.8 ± 5.2%</td></tr></table>
|
| 168 |
+

|
| 169 |
+

|
| 170 |
+
|
| 171 |
+
# 6 CONCLUSION
|
| 172 |
+
|
| 173 |
+
We presented a self-supervised approach to tackling goal-reaching tasks, which learns to reach unseen visual goals given only an offline, random dataset without reward labels. Our method combines the strengths of predictive models and learned dynamical distances, where a predictive model can provide effective predictions for planning actions over short horizons, while dynamical distances can provide a useful planning cost that captures distance to goals over longer horizons. By performing visual model predictive control with a learned visual dynamics model and a goal conditioned Q-function as the planning cost, we find that our method is able to perform goal reaching tasks more effectively than model-based planning approaches that utilize other reward specification techniques, as well as purely model-free methods. We show that MBOLD can also scale to real-world manipulation settings and learn from offline datasets collected with various exploration strategies, outperforming visual foresight on a drawer manipulation task. By leveraging offline data collected without a specific goal in mind, our method may make it possible to utilize large, unstructured, open-world robotic manipulation datasets. Scaling up this method to more complex real-world systems and large data sources therefore represents a particularly exciting direction for future work, which may broaden the capabilities and generality of robotic systems.
|
| 174 |
+
|
| 175 |
+
Acknowledgements. We thank students from the Robotic AI and Learning Lab for insightful feedback on earlier drafts of this paper and Aurick Zhou and Danijar Hafner for helpful discussions. This work was supported in part by Schmidt Futures, the Fannie and John Hertz Foundation, the Office of Naval Research (grants N00014-20-1-2675, N00014-16-1-2420, & N00014-19-1-2042), and the National Science Foundation (DGE-1745016 and through an NSF GRFP (GRFP 2018259676)). This research used the Savio computational cluster resource provided by the Berkeley Research Computing program at the University of California, Berkeley.
|
| 176 |
+
|
| 177 |
+
# REFERENCES
|
| 178 |
+
|
| 179 |
+
Brandon Amos, Laurent Dinh, Serkan Cabi, Thomas Rothörl, Sergio Gómez Colmenarejo, Alistair Muldal, Tom Erez, Yuval Tassa, Nando de Freitas, and Misha Denil. Learning awareness models, 2018.
|
| 180 |
+
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In Advances in neural information processing systems, pp. 5048-5058, 2017.
|
| 181 |
+
Arthur Argenson and Gabriel Dulac-Arnold. Model-based offline planning. arXiv preprint arXiv:2008.05556, 2020.
|
| 182 |
+
Mohammad Babaeizadeh, Chelsea Finn, Dumitru Erhan, Roy H Campbell, and Sergey Levine. Stochastic variational video prediction. arXiv preprint arXiv:1710.11252, 2017.
|
| 183 |
+
Matthew Botvinick and Ari Weinstein. Model-based hierarchical reinforcement learning and human action control. Philosophical Transactions of the Royal Society B: Biological Sciences, 369 (1655):20130480, 2014.
|
| 184 |
+
Annie S. Chen, HyunJi Nam, Suraj Nair, and Chelsea Finn. Batch exploration with examples for scalable robotic reinforcement learning, 2020.
|
| 185 |
+
Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, pp. 4754-4765, 2018.
|
| 186 |
+
Pieter-Tjerk De Boer, Dirk P Kroese, Shie Mannor, and Reuven Y Rubinstein. A tutorial on the cross-entropy method. Annals of operations research, 134(1):19-67, 2005.
|
| 187 |
+
M. Deisenroth and C. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In ICML, 2011.
|
| 188 |
+
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
|
| 189 |
+
Emily Denton and Rob Fergus. Stochastic video generation with a learned prior. arXiv preprint arXiv:1802.07687, 2018.
|
| 190 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
|
| 191 |
+
Yiming Ding, Carlos Florensa, Pieter Abbeel, and Mariano Phielipp. Goal-conditioned imitation learning. In Advances in Neural Information Processing Systems, pp. 15324-15335, 2019.
|
| 192 |
+
Kefan Dong, Yuping Luo, Tianhe Yu, Chelsea Finn, and Tengyu Ma. On the expressivity of neural networks for deep reinforcement learning. In International Conference on Machine Learning, pp. 2627-2637. PMLR, 2020.
|
| 193 |
+
Frederik Ebert, Sudeep Dasari, Alex X. Lee, S. Levine, and Chelsea Finn. Robustness via retrying: Closed-loop robotic manipulation with self-supervised learning. In Conference on Robot Learning, 2018a.
|
| 194 |
+
Frederik Ebert, Chelsea Finn, Sudeep Dasari, Annie Xie, Alex Lee, and Sergey Levine. Visual foresight: Model-based deep reinforcement learning for vision-based robotic control. arXiv preprint arXiv:1812.00568, 2018b.
|
| 195 |
+
Ben Eysenbach, Russ R Salakhutdinov, and Sergey Levine. Search on the replay buffer: Bridging planning and reinforcement learning. In Advances in Neural Information Processing Systems, pp. 15246-15257, 2019.
|
| 196 |
+
Chelsea Finn and Sergey Levine. Deep visual foresight for planning robot motion. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 2786-2793. IEEE, 2017.
|
| 197 |
+
|
| 198 |
+
Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. In Advances in neural information processing systems, pp. 64-72, 2016.
|
| 199 |
+
Carlos Florensa, Jonas Degrave, Nicolas Heess, Jost Tobias Springenberg, and Martin Riedmiller. Self-supervised learning of image embedding for continuous control, 2019.
|
| 200 |
+
Scott Fujimoto, Herke van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning, pp. 1582-1591, 2018.
|
| 201 |
+
Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In International Conference on Machine Learning, pp. 2052-2062, 2019.
|
| 202 |
+
Dibya Ghosh, Abhishek Gupta, Ashwin Reddy, Justin Fu, Coline Devin, Benjamin Eysenbach, and Sergey Levine. Learning to reach goals via iterated supervised learning, 2019.
|
| 203 |
+
David Ha and Jürgen Schmidhuber. World models, 2018.
|
| 204 |
+
Danijar Hafner, Timothy Lillicrap, Jimmy Ba, and Mohammad Norouzi. Dream to control: Learning behaviors by latent imagination. arXiv preprint arXiv:1912.01603, 2019a.
|
| 205 |
+
Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In International Conference on Machine Learning, pp. 2555-2565. PMLR, 2019b.
|
| 206 |
+
Kristian Hartikainen, Xinyang Geng, Tuomas Haarnoja, and Sergey Levine. Dynamical distance learning for semi-supervised and unsupervised skill discovery. In International Conference on Learning Representations, 2019.
|
| 207 |
+
Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In Advances in Neural Information Processing Systems, pp. 12519-12530, 2019.
|
| 208 |
+
Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with gpus. arXiv preprint arXiv:1702.08734, 2017.
|
| 209 |
+
Leslie Pack Kaelbling. Learning to achieve goals. In Proc. of IJCAI-93, pp. 1094-1098. Morgan Kaufmann, 1993.
|
| 210 |
+
Gregory Kahn, Pieter Abbeel, and Sergey Levine. Badgr: An autonomous self-supervised learning-based navigation system. arXiv preprint arXiv:2002.05700, 2020.
|
| 211 |
+
Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, and Thorsten Joachims. Morel: Model-based offline reinforcement learning. In Advances in neural information processing systems, 2020.
|
| 212 |
+
Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. Stabilizing off-policy q-learning via bootstrapping error reduction. In Advances in Neural Information Processing Systems, pp. 11784-11794, 2019.
|
| 213 |
+
Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. arXiv preprint arXiv:2006.04779, 2020.
|
| 214 |
+
Alex X Lee, Richard Zhang, Frederik Ebert, Pieter Abbeel, Chelsea Finn, and Sergey Levine. Stochastic adversarial video prediction. arXiv preprint arXiv:1804.01523, 2018.
|
| 215 |
+
Alex X. Lee, Anusha Nagabandi, Pieter Abbeel, and Sergey Levine. Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model, 2019.
|
| 216 |
+
Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020.
|
| 217 |
+
Kara Liu, Thanard Kurutach, Christine Tung, Pieter Abbeel, and Aviv Tamar. Hallucinative topological memory for zero-shot visual planning. 2020.
|
| 218 |
+
|
| 219 |
+
Kendall Lowrey, Aravind Rajeswaran, Sham Kakade, Emanuel Todorov, and Igor Mordatch. Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control. In International Conference on Learning Representations (ICLR), 2019.
|
| 220 |
+
Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, and Pierre Sermanet. Learning latent plans from play. In Conference on Robot Learning, pp. 1113-1132, 2020.
|
| 221 |
+
Ajay Mandlekar, Fabio Ramos, Byron Boots, Li Fei-Fei, Animesh Garg, and Dieter Fox. Iris: Implicit reinforcement without interaction at scale for learning control from offline robot manipulation data. arXiv preprint arXiv:1911.05321, 2019.
|
| 222 |
+
Rowan McAllister and C. Rasmussen. Improving PILCO with Bayesian neural network dynamics models. 2016.
|
| 223 |
+
Anusha Nagabandi, Gregory Kahn, Ronald S Fearing, and Sergey Levine. Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 7559-7566. IEEE, 2018.
|
| 224 |
+
Anusha Nagabandi, Kurt Konolige, Sergey Levine, and Vikash Kumar. Deep dynamics models for learning dexterous manipulation. In Conference on Robot Learning, pp. 1101-1112, 2020.
|
| 225 |
+
Ashvin Nair, Shikhar Bahl, Alexander Khazatsky, Vitchyr Pong, Glen Berseth, and Sergey Levine. Contextual imagined goals for self-supervised robotic learning. In Conference on Robot Learning, pp. 530-539, 2020a.
|
| 226 |
+
Ashvin Nair, Murtaza Dalal, Abhishek Gupta, and Sergey Levine. Accelerating online reinforcement learning with offline datasets. arXiv preprint arXiv:2006.09359, 2020b.
|
| 227 |
+
Ashvin V Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, and Sergey Levine. Visual reinforcement learning with imagined goals. In Advances in Neural Information Processing Systems, pp. 9191-9200, 2018.
|
| 228 |
+
Suraj Nair and Chelsea Finn. Hierarchical foresight: Self-supervised learning of long-horizon tasks via visual subgoal generation. In International Conference on Learning Representations (ICLR), 2020.
|
| 229 |
+
Soroush Nasiriany, Vitchyr Pong, Steven Lin, and Sergey Levine. Planning with goal-conditioned policies. In Advances in Neural Information Processing Systems, pp. 14843-14854, 2019.
|
| 230 |
+
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
|
| 231 |
+
Nikolay Savinov, Alexey Dosovitskiy, and Vladlen Koltun. Semi-parametric topological memory for navigation. In International Conference on Learning Representations (ICLR), 2018.
|
| 232 |
+
Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy Lillicrap, and David Silver. Mastering Atari, Go, chess and shogi by planning with a learned model, 2019.
|
| 233 |
+
H. J. Terry Suh and Russ Tedrake. The surprising effectiveness of linear models for visual foresight in object pile manipulation, 2020.
|
| 234 |
+
Richard S Sutton. Dyna, an integrated architecture for learning, planning, and reacting. ACM SIGART Bulletin, 2(4):160-163, 1991.
|
| 235 |
+
Tingwu Wang and Jimmy Ba. Exploring model-based planning with policy networks, 2019.
|
| 236 |
+
David Warde-Farley, Tom Van de Wiele, Tejas Kulkarni, Catalin Ionescu, Steven Hansen, and Volodymyr Mnih. Unsupervised control through non-parametric discriminative rewards, 2018.
|
| 237 |
+
Manuel Watter, Jost Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in Neural Information Processing Systems, pp. 2746-2754, 2015.
|
| 238 |
+
|
| 239 |
+
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, Demis Hassabis, David Silver, and Daan Wierstra. Imagination-augmented agents for deep reinforcement learning, 2017.
|
| 240 |
+
Yifan Wu, George Tucker, and Ofir Nachum. Behavior regularized offline reinforcement learning. arXiv preprint arXiv:1911.11361, 2019.
|
| 241 |
+
Lin Yen-Chen, Maria Bauza, and Phillip Isola. Experience-embedded visual foresight. In Conference on Robot Learning, 2019.
|
| 242 |
+
Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, and Sergey Levine. Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on Robot Learning (CoRL), 2019a. URL https://arxiv.org/abs/1910.10897.
|
| 243 |
+
Tianhe Yu, Gleb Shevchuk, Dorsa Sadigh, and Chelsea Finn. Unsupervised visuomotor control through distributional planning networks. Robotics: Science and Systems XV, Jun 2019b. doi: 10.15607/rss.2019.xv.020. URL http://dx.doi.org/10.15607/RSS.2019.XV.020.
|
| 244 |
+
Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Zou, Sergey Levine, Chelsea Finn, and Tengyu Ma. MOPO: Model-based offline policy optimization. In Advances in Neural Information Processing Systems, 2020.
|
| 245 |
+
Mingyuan Zhong, Mikala Johnson, Yuval Tassa, Tom Erez, and Emanuel Todorov. Value function approximation and model predictive control. In 2013 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), pp. 100-107. IEEE, 2013.
|
| 246 |
+
|
| 247 |
+
# A MBOLD IMPLEMENTATION DETAILS
|
| 248 |
+
|
| 249 |
+
# A.1 DISTANCE FUNCTION
|
| 250 |
+
|
| 251 |
+
This section explains the implementation details of our distance function. Following prior work (Fujimoto et al., 2018), we learn two independent Q-functions and use the minimum of the two when performing Bellman backups. Recall that we sample goals from two distributions: future states in the same trajectory, and states from different trajectories where the robot arm is in a similar position. To implement the second strategy, we fit a $k$-nearest neighbors graph over 200,000 dataset observations (about $60\%$ of the total), using the $\ell_2$ distance between arm joint positions as the similarity key. Each batch contains equal numbers of transitions generated by each goal sampling method. For computational efficiency, we implement the $k$-NN search using the GPU-enabled FAISS library (Johnson et al., 2017).
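As a rough sketch of this retrieval step, the FAISS usage might look as follows; the array names and shapes are illustrative placeholders, not the paper's code.

```python
# Sketch of the kNN index used for "negative" goal sampling.
# `arm_joints` is an assumed (N, J) float32 array of arm joint positions
# extracted from the offline dataset.
import numpy as np
import faiss

arm_joints = np.random.rand(200000, 7).astype("float32")  # placeholder data

index = faiss.IndexFlatL2(arm_joints.shape[1])  # exact L2 search over joints
# Optionally move the index to GPU:
# res = faiss.StandardGpuResources()
# index = faiss.index_cpu_to_gpu(res, 0, index)
index.add(arm_joints)

# For a batch of query states, retrieve the k most similar arm configurations;
# negative goals are then drawn from these neighbors (restricted to other
# trajectories).
k = 10
distances, neighbor_ids = index.search(arm_joints[:64], k)
```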
|
| 252 |
+
|
| 253 |
+
We relabel half of the transitions in each training batch with reached goals and the other half with "negative" goals that have similar actuated components; ablation experiments show that this combination achieves stronger performance than using only reached goals in our evaluation environments. In other domains, more careful consideration is required to determine whether the assumptions motivating this "negative" goal sampling strategy hold.
|
| 254 |
+
|
| 255 |
+
We also modify the reward specification scheme: we provide a small positive reward at each step where the goal is not reached, and a large positive reward upon reaching the goal. Specifically, we give a reward of 1 by default and 10 when the goal is reached (compared to 0 and 1, respectively, as presented in the discussion in Section 4), although we do not extensively tune this parameter. Choosing this reward over the $(0,1)$ rewards does not affect performance in a statistically significant way (results for each reward choice are within 1 standard deviation of one another). Note that this does not change the interpretation of the Q-function as a shortest-path distance; it merely slightly complicates the conversion from Q-values to distances in timesteps.
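As a concrete illustration of the conversion, suppose the episode terminates when the goal is reached after $d$ steps (an assumption on the episode structure for this sketch). The Q-value under the $(1, 10)$ reward choice is then

$$
Q = \sum_{k=0}^{d-2} \gamma^{k} \cdot 1 + \gamma^{d-1} \cdot 10 = \frac{1 - \gamma^{d-1}}{1 - \gamma} + 10\,\gamma^{d-1},
$$

which for $\gamma = 0.8$ (so $1/(1-\gamma) = 5$) reduces to $Q = 5 + 5\gamma^{d-1}$, giving the distance in timesteps as $d = 1 + \log_{0.8}\left((Q - 5)/5\right)$.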
|
| 256 |
+
|
| 257 |
+
Finally, we add an additional loss term to perform conservative Q-learning (CQL) (Kumar et al., 2020), a method for offline model-free RL, which penalizes Q-values of randomly selected actions and increases Q-values of in-dataset actions. We use the Lagrangian version of CQL to automatically tune the weighting term, and detail the parameters below. We find using CQL improves performance on the door sliding task from a mean success rate of $41\%$ to $58\%$ , but does not significantly impact performance on the others.
|
| 258 |
+
|
| 259 |
+
The Q-function network architecture consists of convolutional and fully connected layers. We define a network called the convolutional encoder, which is used throughout the appendix. It takes as input an image of shape $64 \times 64 \times 6$, containing the starting and goal images concatenated channel-wise, and consists of 4 2D convolutional layers with [8, 16, 32, 64] filters, respectively, each with kernel size (4, 4) and stride (2, 2). We use Leaky ReLU activations after each intermediate convolutional layer, and batch-norm layers after the second and third Leaky ReLUs.
|
| 260 |
+
|
| 261 |
+
We flatten the output of the convolutional encoder, concatenate the input actions, and feed the features through 6 fully-connected linear layers of 128 units each, with the final layer outputting a single value. Each intermediate fully-connected layer is followed by a ReLU activation and a batch-norm layer.
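A minimal PyTorch sketch of this architecture is given below; the padding, initialization, and use of `nn.LazyLinear` to infer the flattened feature size are our assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Conv encoder over the channel-wise concatenated (image, goal) pair."""
    def __init__(self):
        super().__init__()
        layers, in_ch = [], 6
        for i, out_ch in enumerate([8, 16, 32, 64]):
            layers.append(nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2))
            layers.append(nn.LeakyReLU())
            if i in (1, 2):  # batch-norm after the 2nd and 3rd Leaky ReLUs
                layers.append(nn.BatchNorm2d(out_ch))
            in_ch = out_ch
        self.net = nn.Sequential(*layers)

    def forward(self, x):  # x: (B, 6, 64, 64)
        return self.net(x).flatten(start_dim=1)

class QNetwork(nn.Module):
    """Flattened conv features + action -> 6 fully-connected layers of 128 units."""
    def __init__(self, action_dim=4):
        super().__init__()
        self.encoder = ConvEncoder()
        blocks = [nn.LazyLinear(128)]  # infers the flattened feature dimension
        for _ in range(4):
            blocks += [nn.ReLU(), nn.BatchNorm1d(128), nn.Linear(128, 128)]
        blocks += [nn.ReLU(), nn.BatchNorm1d(128), nn.Linear(128, 1)]
        self.head = nn.Sequential(*blocks)

    def forward(self, obs_and_goal, action):
        feats = self.encoder(obs_and_goal)
        return self.head(torch.cat([feats, action], dim=-1))
```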
|
| 262 |
+
|
| 263 |
+
The actor network architecture first applies the "convolutional encoder" above, whose outputs are flattened and fed into a 10-layer MLP with 128 fully connected units each, with ReLU activations and batch-norm layers in between. The final output, of dimension 4, is passed through a tanh activation to constrain it to the normalized action space $[-1, 1]$.
|
| 264 |
+
|
| 265 |
+
Additional training hyperparameters are detailed in Table 2.
|
| 266 |
+
|
| 267 |
+
# A.2 MODEL-PREDICTIVE CONTROL
|
| 268 |
+
|
| 269 |
+
In Table 3, we describe the parameters for model-based planning in our experiments. These parameters are shared across all tasks and planning costs (in ablation experiments). Most values are selected based on prior work (Ebert et al., 2018b). We find that replanning every 6 steps produces slightly better performance than replanning every 13 steps, but not by a large margin, and we do not tune this further due to computation constraints. We sample actions using the filtering scheme described in Nagabandi et al. (2020) to make action sequences smoother in time. We initialize sampling distributions using each environment's data collection parameters, as shown in Table 4.
|
| 270 |
+
|
| 271 |
+
Table 2: Hyperparameters for distance learning

<table><tr><td>Parameter</td><td>Value</td></tr><tr><td>Dataset size</td><td>10000 trajectories</td></tr><tr><td>Train/test/val split</td><td>0.9/0.05/0.05</td></tr><tr><td>Trajectory length</td><td>30 steps</td></tr><tr><td>Observation dimensions</td><td>64 × 64 × 3</td></tr><tr><td>State observations in kNN graph</td><td>200000</td></tr><tr><td>Goal relabeling sampling parameter (p)</td><td>0.3 (tuned over [0.2, 0.3])</td></tr><tr><td>Discount factor (γ)</td><td>0.8</td></tr><tr><td>Learning rate</td><td>3e-4</td></tr><tr><td>Target network update Polyak factor</td><td>0.995</td></tr><tr><td>Batch size</td><td>64</td></tr><tr><td>Actor network noise σ</td><td>0.1</td></tr><tr><td>Actor network maximum noise magnitude</td><td>0.2</td></tr><tr><td>Training iterations</td><td>93750 (300 epochs)</td></tr><tr><td>Optimizer</td><td>Adam</td></tr><tr><td>CQL Lagrange multiplier learning rate</td><td>1e-3</td></tr><tr><td>CQL slack parameter τ (object pushing)</td><td>3.0</td></tr><tr><td>CQL slack parameter τ (reaching)</td><td>3.0</td></tr><tr><td>CQL slack parameter τ (door sliding)</td><td>10.0</td></tr><tr><td>CQL number of randomly selected actions</td><td>10</td></tr></table>
|
| 272 |
+
|
| 273 |
+
|
| 274 |
+
|
| 275 |
+
To compute the planning cost described in Equation 3, we maximize over the action $a$ by feeding the final predicted state into the policy network learned by TD3 and using its output action as the maximizer.
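For illustration, a schematic version of the CEM planning loop with these hyperparameters might look like the following; `dynamics` and `cost` stand in for the learned video-prediction model and the distance-based planning cost (assumed interfaces, not the paper's code).

```python
import numpy as np

def cem_plan(s0, goal, dynamics, cost, horizon=13, iters=3,
             n_samples=200, elite_frac=0.05, action_dim=4):
    mu = np.zeros((horizon, action_dim))
    sigma = np.ones((horizon, action_dim))
    n_elite = max(1, int(elite_frac * n_samples))  # 0.05 * 200 = 10 elites
    for _ in range(iters):
        acts = np.clip(np.random.normal(mu, sigma,
                       size=(n_samples, horizon, action_dim)), -1.0, 1.0)
        final_states = dynamics(s0, acts)   # predicted states at t + horizon
        costs = cost(final_states, goal)    # e.g. the learned distance
        elite = acts[np.argsort(costs)[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0)
    return mu[:6]  # execute the first k = 6 actions, then replan
```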
|
| 276 |
+
|
| 277 |
+
|
| 278 |
+
|
| 279 |
+
<table><tr><td>Parameter</td><td>Value</td></tr><tr><td>Planning horizon (h)</td><td>13 steps</td></tr><tr><td>Actions executed per planning step (k)</td><td>6 actions</td></tr><tr><td>CEM Iterations</td><td>3 iterations</td></tr><tr><td>Elite sample fraction</td><td>0.05 (10 samples)</td></tr><tr><td>Samples per CEM iteration</td><td>200 samples</td></tr></table>
|
| 280 |
+
|
| 281 |
+
Table 3: Hyperparameters for model-based planning
|
| 282 |
+
|
| 283 |
+
# A.3 ENVIRONMENTS
|
| 284 |
+
|
| 285 |
+
The Sawyer environments are adapted from the Meta-World benchmark (Yu et al., 2019a), and the door sliding environment is based on the environment presented by Lynch et al. (2020). For each task, we define the 4-dimensional action space $\mathcal{A}$ such that actions control the Cartesian position of the robot's end-effector as well as the robot's gripper.
|
| 286 |
+
|
| 287 |
+
We randomly generate a set of 100 different test goals for each setting. Each task is defined by a goal image and starting state, on which all methods are tested. We define success for each task in terms of the final distance to the goal of each relevant object, e.g. the object position for the object repositioning task. A trial is considered successful if the final distance is below a threshold $\epsilon$, manually chosen for each task and listed in Table 4. We evaluate the success rate of each method over 5 different random training seeds.
|
| 288 |
+
|
| 289 |
+
We generate offline datasets for each task by running random policies for $10^4$ episodes of 30 timesteps each. At the beginning of each episode, object positions are reset uniformly at random over the range of possible positions for each joint. The random policy actions are drawn using a filtering technique which smooths random zero-mean Gaussian samples across time. We apply the correlated noise scheme described by Nagabandi et al. (2020), setting the hyperparameter $\beta = 0.5$. The parameters of the multivariate Gaussian samples in each dimension are listed in Table 4.
|
| 290 |
+
|
| 291 |
+
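A sketch of this filtered noise scheme with $\beta = 0.5$ is shown below; the exact filtering convention and clipping follow our reading of Nagabandi et al. (2020) and are assumptions.

```python
import numpy as np

def filtered_random_actions(T, sigma, beta=0.5):
    """Smooth zero-mean Gaussian noise across time with a low-pass filter."""
    sigma = np.asarray(sigma, dtype=float)
    u = np.zeros((T, sigma.shape[0]))
    n_prev = np.zeros_like(sigma)
    for t in range(T):
        n = beta * np.random.normal(0.0, sigma) + (1.0 - beta) * n_prev
        u[t], n_prev = n, n
    return np.clip(u, -1.0, 1.0)  # actions live in the normalized space

# Example with the reaching-task standard deviations from Table 4.
actions = filtered_random_actions(T=30, sigma=[0.6, 0.6, 0.3, 0.3])
```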
|
| 292 |
+
|
| 293 |
+
<table><tr><td></td><td>Reaching</td><td>Object pushing</td><td>Door sliding</td></tr><tr><td>Data collection stdev (diag(Σ))</td><td>[0.6, 0.6, 0.3, 0.3]</td><td>[0.6, 0.6, 0.3, 0.3]</td><td>[0.3, 0.3, 0.3, 0.15]</td></tr><tr><td>Object compared in success threshold</td><td>Arm end effector</td><td>Block</td><td>Slide</td></tr><tr><td>Success distance threshold</td><td>0.05 m</td><td>0.05 m</td><td>0.075 m</td></tr></table>
|
| 294 |
+
|
| 295 |
+
Table 4: Environment and task details
|
| 296 |
+
|
| 297 |
+
# B COMPARATIVE EVALUATION IMPLEMENTATION DETAILS
|
| 298 |
+
|
| 299 |
+
# B.1 REINFORCEMENT LEARNING WITH IMAGINED GOALS (NAIR ET AL., 2018)
|
| 300 |
+
|
| 301 |
+
In this section, we discuss implementation details of our adaptation of Reinforcement Learning with Imagined Goals (RIG). We begin by training a $\beta$-VAE with latent dimension 8. The VAE is trained on randomly sampled states from the entire offline dataset. The loss combines a maximum likelihood term with a KL divergence term that constrains the latent space to a unit Gaussian. In particular, we compute the mean pixel error, that is, $\frac{1}{HW} \|s - \hat{s}\|_2^2$, where $s$ is the original image and $\hat{s}$ is the reconstruction, both normalized to lie in $[0,1]$. We add to this the KL divergence between the latent distribution and the unit Gaussian, with a weighting factor of $10^{-3}$ on the KL penalty.
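The loss described above can be sketched in PyTorch as follows, assuming `mu` and `logvar` parameterize the approximate Gaussian posterior and all image tensors are normalized to $[0, 1]$; averaging over channels as well as pixels is an assumption.

```python
import torch

def vae_loss(s, s_hat, mu, logvar, kl_weight=1e-3):
    # Mean pixel reconstruction error (averaged over batch, channels, pixels).
    recon = ((s - s_hat) ** 2).mean()
    # KL( N(mu, exp(logvar)) || N(0, I) ), averaged over the batch.
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return recon + kl_weight * kl
```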
|
| 302 |
+
|
| 303 |
+
The VAE encoder consists of the "convolutional encoder" described in Section A.1, whose features are passed through two FC layers with 128 units, with a ReLU activation and batch-norm layer in between. The VAE decoder feeds the latent states into two FC layers with 128 units, each followed by a batch-norm layer and ReLU activation. This is followed by the inverted architecture of the encoder, consisting of transposed 2D convolutions.
|
| 304 |
+
|
| 305 |
+
Then, we perform model-free RL in a modified MDP, using encoded observations as a substitute for environment observations and computing rewards as negative $\ell_2$ distances in latent space. We sample random goals from the multivariate Gaussian prior $\mathcal{N}(0, I)$ at the beginning of every episode. We use the open-source implementation of soft actor-critic (SAC) in RLKit with the default SAC parameters and architecture found in the implementation, making the following modifications: we increase the number of layers of all MLP networks from 2 to 6, use a maximum path length of 30 steps for consistency with our other experiments, and use a discount factor of 0.95. Along with the goal sampled from the prior at the beginning of each episode, we find that relabeling goals with the achieved observation at the end of the trajectory improves performance, and we add these transitions to the replay buffer as well. Note that unlike in the original RIG formulation, we do not update the weights of the learned VAE using data collected online. We evaluate the learned policy after 600 epochs of training, long after environment returns plateau.
|
| 306 |
+
|
| 307 |
+
# B.2 DREAMER (HAFNER ET AL., 2019A)
|
| 308 |
+
|
| 309 |
+
Dreamer, a model-based method for image-based tasks, also uses a combination of value functions and planning. We adapt Dreamer from its original single-task setting to learn a goal-conditioned policy, reward predictor, and value function; however, we do not condition the dynamics model on the goal. Dreamer has previously been demonstrated only in settings where the environment provides rewards to the agent, so we modify the method to learn from unlabeled, offline data by using experience replay. We find that Dreamer struggles to learn when using either an indicator reward function (as in our method) or a heuristically defined reward function (image MSE). We thus additionally report the performance of Dreamer using a manually specified arm distance reward for the Sawyer reaching task.
|
| 310 |
+
|
| 311 |
+
We build off of the open-source implementation of Dreamer by the original authors, written in TensorFlow 2 and found at https://github.com/danijar/dreamer. Specifically, to modify the networks to support goal-conditioning, we add to each network an independent convolutional encoder that takes the goal image as input. Each encoder consists of 2D convolution layers with [32, 64, 128, 256] filters and kernel size 4, and we concatenate the flattened goal features to the inputs of each network.
|
| 312 |
+
|
| 313 |
+
We additionally increase the number of fully-connected layers for the value and actor networks from 3 and 2, respectively, to 10. We use a discount factor of $\gamma = 0.95$. All other hyperparameter values are defaults from the public implementation.
|
| 314 |
+
|
| 315 |
+
For training, we relabel trajectories sampled from the fixed, offline dataset with a uniformly randomly selected observation from the same trajectory as the goal. In most of our experiments, we compute the negative pixel-wise MSE as the reward, but in one reaching experiment, we use the negative $\ell_2$ Euclidean distance between the arm end-effector position and the goal end-effector position. We train for 2000 iterations in each experiment, although initial experiments in which we trained for $20\times$ longer did not yield improved results.
|
| 316 |
+
|
| 317 |
+
# B.3 GOAL-CONDITIONED BEHAVIOR CLONING
|
| 318 |
+
|
| 319 |
+
To train a goal-conditioned behavior cloning policy, we begin by relabeling random transitions from the dataset with goals that are achieved later in the same trajectories. Specifically, we sample state-goal pairs from trajectories in the dataset by first selecting the initial state index $t_i$ uniformly from all timesteps, and then selecting the goal state index $t_g$ uniformly from timesteps greater than $t_i$. We then train a neural network to predict the transition action $a_i$ given the state $s_i$ and the relabeled goal $s_g$, using a mean-squared error loss.
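A minimal sketch of this relabeling and training objective, with an assumed trajectory dictionary layout:

```python
import torch
import torch.nn.functional as F

def sample_relabeled_pair(trajectory):
    """trajectory: assumed dict with 'obs' (T items) and 'actions' (T items)."""
    T = len(trajectory["obs"])
    t_i = torch.randint(0, T - 1, ()).item()
    t_g = torch.randint(t_i + 1, T, ()).item()  # goal strictly after t_i
    return trajectory["obs"][t_i], trajectory["obs"][t_g], trajectory["actions"][t_i]

def bc_loss(policy, s, g, a):
    # Regress the logged action from (state, relabeled goal).
    return F.mse_loss(policy(s, g), a)
```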
|
| 320 |
+
|
| 321 |
+
The network architecture is the same as that of the actor network used in Q-learning for MBOLD, described in Appendix A.1. We train the model for 3125000 iterations (1000 epochs) with a batch size of 32, and use the same optimizer and learning rate as for MBOLD's distance function.
|
| 322 |
+
|
| 323 |
+
# B.4 SEARCH ON THE REPLAY BUFFER (EYSENBACH ET AL., 2019)
|
| 324 |
+
|
| 325 |
+
For Search on the Replay Buffer (SoRB), we train a distributional Q-function to represent distances, as in the original paper. Distributional RL discretizes the possible value estimates into a set of bins; we use 10 bins in all of our experiments. We train this distributional Q-function for 300 epochs, as in the distance function training for MBOLD. We also use the same architecture and training scheme, altering the number of outputs to 10 bins and using the KL-divergence loss for the distributional Q-function as in Eysenbach et al. (2019). However, unlike Eysenbach et al. (2019), we train on just the fixed, offline dataset. We then perform the planning portion of SoRB with the "maxdist" parameter set to 4, after manual tuning. We use a graph size of 2000 states for all experiments, due to computational constraints.
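For illustration, one way to reduce the 10-bin distributional output to a scalar distance estimate is to take the expectation over bins; the binning convention here is an assumption, not the paper's exact procedure.

```python
import torch

def expected_distance(logits, bin_values):
    """logits: (B, 10) unnormalized bin scores; bin_values: (10,) distances."""
    probs = torch.softmax(logits, dim=-1)
    return (probs * bin_values).sum(dim=-1)
```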
|
| 326 |
+
|
| 327 |
+
We find that the policy learned through Q-learning performs very poorly at reaching subgoals, so we instead substitute the goal-conditioned behavior cloning policy for this purpose. We find that this greatly improves performance across all tasks.
|
| 328 |
+
|
| 329 |
+
# B.5 VISUAL FORESIGHT (EBERT ET AL., 2018B)
|
| 330 |
+
|
| 331 |
+
To compare MBOLD to visual foresight, we use the same dynamics model and planning setup as in MBOLD; however, we substitute the learned dynamical distance function with the $\ell_2$ pixel error cost used in visual foresight.
|
| 332 |
+
|
| 333 |
+
# C ABLATION EXPERIMENTS IMPLEMENTATION DETAILS
|
| 334 |
+
|
| 335 |
+
# C.1 VAE DISTANCE
|
| 336 |
+
|
| 337 |
+
We use the same architecture as the VAE used in the RIG comparison described in Appendix B. We set the latent space dimension to 256 and weight the KL divergence term by a factor of $10^{-5}$. We train the model for 3125000 iterations (1000 epochs) using a batch size of 32, and use the same optimizer and learning rate as for MBOLD's distance function.
|
| 338 |
+
|
| 339 |
+
# C.2 TEMPORAL DISTANCE REGRESSION
|
| 340 |
+
|
| 341 |
+
To train the temporal distance regression model, we sample state-goal pairs from trajectories in the dataset by first selecting the initial state index $t_i$ uniformly from all timesteps, and then selecting the goal state index $t_g$ uniformly from timesteps greater than $t_i$.
|
| 342 |
+
|
| 343 |
+
We compute the label for this pair as $\min(t_g - t_i, \text{maxdist})$, where maxdist is a hyperparameter we set to 10. The maxdist parameter helps to improve the optimality of the regressed distances on average, since temporally distant pairs from random trajectories provide only loose upper bounds on the true shortest-path distance. We train the neural network to regress this target label using an $\ell_2$ error loss. We train the network for 3125000 iterations (1000 epochs) with a batch size of 32, and use the same optimizer and learning rate as for MBOLD's distance function.
|
| 344 |
+
|
| 345 |
+
The architecture for the temporal distance regression model begins with the convolutional encoder described in Appendix B. Its flattened outputs are fed into 5 fully-connected layers of 256 units each, with batch-norm and ReLU activations after each intermediate layer.
|
| 346 |
+
|
| 347 |
+
# C.3 Q-FUNCTION POLICY
|
| 348 |
+
|
| 349 |
+
We find that the policy directly learned by our method during distance learning performs extremely poorly. However, performing Q-learning with random shooting, i.e., optimizing the target values over 100 actions sampled uniformly from $[-1,1]^4$, produces much better results when used directly as a policy, compared to using an actor network for this optimization as in our method. Therefore, we report results from acting according to this random shooting method. At test time, we estimate the optimal action $a^{\star} = \arg \max_{a} Q(s_{t}, a, g)$ by again sampling 100 uniformly random actions and selecting the best one.
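A sketch of this random shooting policy, with assumed tensor shapes and Q-function interface:

```python
import torch

def random_shooting_action(q_fn, s, g, n=100, action_dim=4):
    actions = torch.rand(n, action_dim) * 2.0 - 1.0  # Uniform[-1, 1]^4
    # Broadcast the current state and goal across the n sampled actions.
    q_vals = q_fn(s.expand(n, *s.shape), actions, g.expand(n, *g.shape))
    return actions[q_vals.argmax()]
```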
|
| 350 |
+
|
| 351 |
+
# D COMPUTATIONAL COMPLEXITY ANALYSIS
|
| 352 |
+
|
| 353 |
+
In this section, we discuss the computational complexity of training and acting with MBOLD.
|
| 354 |
+
|
| 355 |
+
Training: Training the dynamics model takes about 30 hours, while training the distance function takes about 5 hours. These training times are dwarfed by the cost of collecting data in the real world, which could take on the order of 3-4 days (but the data can be reused for various tasks). In contrast, a single RL approach only requires learning the distance function, not the dynamics model. While this means that MBOLD takes significantly longer to train than the single RL approach, note that the dynamics model can be shared across many tasks. We train the dynamics model for 200k training steps and the distance function for 94k training steps. A training step for the dynamics model involves one forward and one backward pass through the dynamics model. A training step for the distance function requires sampling positive and negative goals, two Q-function forward passes and a policy network forward pass to compute target values and current Q-values, and a backward pass to update model parameters. From the above estimates, training steps for the dynamics model are around 3 times slower than training steps for the distance model. Because the dynamics model can be used to perform many tasks, this cost is amortized over those tasks, as compared to a single RL approach.
|
| 356 |
+
|
| 357 |
+
Acting: Selecting a sequence of actions (6 actions in our experiments) with MBOLD requires one forward pass of the dynamics model per CEM iteration (3 in our experiments), plus one forward pass through the distance function and policy network. Amortized over a trajectory, this amounts to about 2 seconds of wall-clock time per action, which can be sped up by around 2x with similar performance by replanning less frequently. For a single RL approach, each action would require just one forward pass through the policy network.
|
| 358 |
+
|
| 359 |
+
# E ABLATION EXPERIMENTS
|
| 360 |
+
|
| 361 |
+
# E.1 NEGATIVE MINING & ACTUATED STATE COMPONENTS
|
| 362 |
+
|
| 363 |
+
The ablation experiments presented in Section 5 demonstrate that the negative mining technique can improve performance on manipulation tasks, as evaluated by the final position of the object being manipulated. However, in experiments performed in the real-world Franka drawer setting, which only required the robot arm to "reach" a particular location to match the goal, we found that MBOLD achieved a mean final Euclidean distance to goal of $0.14\mathrm{m}$, while Visual Foresight achieved $0.066\mathrm{m}$ over 10 trials. Here, we conduct additional experiments in simulation to investigate the effect of negative mining on reaching goals, measured by how accurately the highly actuated components (for example, the robot arm) match the goal. In the single-object block pushing setting, we evaluate the performance of distance functions trained with and without negative mining on reaching the desired goal arm position.
|
| 364 |
+
|
| 365 |
+
We perform the evaluation using (1) the set of test goals used in our original experiments, which include object movement, and (2) an additional set of test goals which only require robot arm movement. We present the results in Table 6. We find that training without negative mining improves the planner's ability to reach goal arm positions when goals also require object movement, but note that this comes at the cost of weaker performance in actually relocating those objects, establishing a trade-off. When goals require only arm movement, performance is comparable with and without the negative sampling scheme.
|
| 366 |
+
|
| 367 |
+
# E.2 PLANNING HORIZON ABLATIONS
|
| 368 |
+
|
| 369 |
+
In this section, we investigate the effect of the planning horizon $h$ on control performance. After training distance functions according to Appendix A.1, we perform planning with three different settings for $h$ on the simulated block pushing tasks. We present the results in Figure 8. We find that a longer planning horizon is beneficial, especially for solving more difficult tasks. We hypothesize that this is because longer planning horizons allow the planner and distance function to better distinguish promising predicted states, while the fidelity of state predictions remains relatively high.
|
| 370 |
+
|
| 371 |
+
# E.3 RANDOM OBJECT RESET ABLATIONS
|
| 372 |
+
|
| 373 |
+
In this section, we perform experiments to evaluate the impact of the distribution of initial object positions on task performance. In particular, we look at the single-object Sawyer pushing task. We collect an additional dataset with the same policy and other parameters as in the main comparative evaluations, but restrict the random object initialization position to lie within $[-0.05, 0.05]^2$ as opposed to $[-0.2, 0.2]^2$. This represents a 16x reduction in the area of possible initializations. We then train a new dynamics model and distance function from scratch and compare control performance on the same benchmark tasks from the main comparisons. We present the results in Table 5. We find that the control performance on these tasks remains within one standard deviation despite the restriction in reset position.
|
| 374 |
+
|
| 375 |
+

|
| 376 |
+
Figure 8: Results for planning horizon ablations.
|
| 377 |
+
|
| 378 |
+
Table 5: Comparison of success rates for our method when trained using a dataset where object positions at the start of each episode were greatly restricted, compared to uniform over the entire space. Standard deviations are over 5 random seeds.
|
| 379 |
+
|
| 380 |
+
<table><tr><td></td><td>Uniform reset</td><td>Restricted reset</td></tr><tr><td>1 object push (regular)</td><td>55.2 ± 4.3%</td><td>54.5 ± 3.9%</td></tr><tr><td>1 object push (hard)</td><td>40.2 ± 7.2%</td><td>43.2 ± 7.2%</td></tr></table>
|
| 381 |
+
|
| 382 |
+
Table 6: Effect of training using negative mining on final arm position matching performance. A final $\ell_2$ distance to goal arm position of $0.05\mathrm{m}$ or less is considered a success. Standard deviations of success rates are computed over 5 random seeds.
|
| 383 |
+
|
| 384 |
+
<table><tr><td>Test goals</td><td>MBOLD</td><td>MBOLD (no negative mining)</td></tr><tr><td>No object movement</td><td>89.2 ± 1.9%</td><td>91.6 ± 2.3%</td></tr><tr><td>Object movement</td><td>64.4 ± 5.9%</td><td>83.4 ± 4.0%</td></tr></table>
|
modelbasedvisualplanningwithselfsupervisedfunctionaldistances/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:94425aa306a455591db93d0009357bccd521268df04736840e9d32469884b0a1
|
| 3 |
+
size 430870
|
modelbasedvisualplanningwithselfsupervisedfunctionaldistances/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:61c946009abcaef031933d57e40bcee655056466c5b53bf1f2f94566679a2146
|
| 3 |
+
size 532258
|
multivariateprobabilistictimeseriesforecastingviaconditionednormalizingflows/0d955605-445e-463c-ab48-ea4bd1d83f78_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:eba06147085d2b02d1a090617ca3ae8f6db364043718dbe5b871aad88839696a
|
| 3 |
+
size 122608
|
multivariateprobabilistictimeseriesforecastingviaconditionednormalizingflows/0d955605-445e-463c-ab48-ea4bd1d83f78_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:883b52e5c377a89041c109a0ac10cb2a6b39bc1e2c168acf98d8785daba0fd5d
|
| 3 |
+
size 133465
|
multivariateprobabilistictimeseriesforecastingviaconditionednormalizingflows/0d955605-445e-463c-ab48-ea4bd1d83f78_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:48df59934ef3b77e0e75873c3bca2f7ab28520e9434676bef7e82a1d158a9d5e
|
| 3 |
+
size 1879076
|
multivariateprobabilistictimeseriesforecastingviaconditionednormalizingflows/full.md
ADDED
|
@@ -0,0 +1,561 @@
| 1 |
+
# MULTIVARIATE PROBABILISTIC TIME SERIES FORECASTING VIA CONDITIONED NORMALIZING FLOWS
|
| 2 |
+
|
| 3 |
+
Kashif Rasul, Abdul-Saboor Sheikh, Ingmar Schuster, Urs Bergmann & Roland Vollgraf
|
| 4 |
+
|
| 5 |
+
Zalando Research
|
| 6 |
+
|
| 7 |
+
Mühlenstraße 25
|
| 8 |
+
|
| 9 |
+
10243 Berlin
|
| 10 |
+
|
| 11 |
+
Germany
|
| 12 |
+
|
| 13 |
+
{kashif.rasul, ingmar.schuster, roland.vollgraf}@zalando.de
|
| 14 |
+
|
| 15 |
+
# ABSTRACT
|
| 16 |
+
|
| 17 |
+
Time series forecasting is often fundamental to scientific and engineering problems and enables decision making. With ever-increasing data set sizes, a trivial solution to scale up predictions is to assume independence between interacting time series. However, modeling statistical dependencies can improve accuracy and enable analysis of interaction effects. Deep learning methods are well suited for this problem, but multivariate models often assume a simple parametric distribution and do not scale to high dimensions. In this work we model the multivariate temporal dynamics of time series via an autoregressive deep learning model, where the data distribution is represented by a conditioned normalizing flow. This combination retains the power of autoregressive models, such as good performance in extrapolation into the future, with the flexibility of flows as a general purpose high-dimensional distribution model, while remaining computationally tractable. We show that it improves over the state-of-the-art for standard metrics on many real-world data sets with several thousand interacting time series.
|
| 18 |
+
|
| 19 |
+
# 1 INTRODUCTION
|
| 20 |
+
|
| 21 |
+
Classical time series forecasting methods such as those in Hyndman & Athanasopoulos (2018) typically provide univariate forecasts and require hand-tuned features to model seasonality and other parameters. Time series models based on recurrent neural networks (RNN), like LSTM (Hochreiter & Schmidhuber, 1997), have become popular methods due to their end-to-end training, the ease of incorporating exogenous covariates, and their automatic feature extraction abilities, which are the hallmarks of deep learning. Forecasting outputs can either be points or probability distributions, in which case the forecasts typically come with uncertainty bounds.
|
| 22 |
+
|
| 23 |
+
The problem of modeling uncertainties in time series forecasting is of vital importance for assessing how much to trust the predictions for downstream tasks, such as anomaly detection or (business) decision making. Without probabilistic modeling, the importance of the forecast in regions of low noise (small variance around a mean value) versus a scenario with high noise cannot be distinguished. Hence, point estimation models ignore risk stemming from this noise, which would be of particular importance in some contexts such as making (business) decisions.
|
| 24 |
+
|
| 25 |
+
Finally, individual time series, in many cases, are statistically dependent on each other, and models need the capacity to adapt to this in order to improve forecast accuracy (Tsay, 2014). For example, to model the demand for a retail article, it is important to not only model its sales dependent on its own past sales, but also to take into account the effect of interacting articles, which can lead to cannibalization effects in the case of article competition. As another example, consider traffic flow in a network of streets as measured by occupancy sensors. A disruption on one particular street will also ripple to occupancy sensors of nearby streets—a univariate model would arguably not be able to account for these effects.
|
| 26 |
+
|
| 27 |
+
In this work, we propose end-to-end trainable autoregressive deep learning architectures for probabilistic forecasting that explicitly model multivariate time series and their temporal dynamics by employing a normalizing flow, like the Masked Autoregressive Flow (Papamakarios et al., 2017) or
|
| 28 |
+
|
| 29 |
+
Real NVP (Dinh et al., 2017). These models are able to scale to thousands of interacting time series; we show that they learn the ground-truth dependency structure on toy data, and we establish new state-of-the-art results on diverse real-world data sets in comparison with competitive baselines. Additionally, these methods adapt to a broad class of underlying data distributions on account of using a normalizing flow, and our Transformer-based model is highly efficient due to the parallel nature of attention layers during training.
|
| 30 |
+
|
| 31 |
+
The paper first provides some background context in Section 2. We cover related work in Section 3. Section 4 introduces our model and the experiments are detailed in Section 5. We conclude with some discussion in Section 6. The Appendix contains details of the datasets, additional metrics and exploratory plots of forecast intervals as well as details of our model.
|
| 32 |
+
|
| 33 |
+
# 2 BACKGROUND
|
| 34 |
+
|
| 35 |
+
# 2.1 DENSITY ESTIMATION VIA NORMALIZING FLOWS
|
| 36 |
+
|
| 37 |
+
Normalizing flows (Tabak & Turner, 2013; Papamakarios et al., 2019) are mappings from $\mathbb{R}^D$ to $\mathbb{R}^D$ such that densities $p_{\mathcal{X}}$ on the input space $\mathcal{X} = \mathbb{R}^D$ are transformed into some simple distribution $p_{\mathcal{Z}}$ (e.g. an isotropic Gaussian) on the space $\mathcal{Z} = \mathbb{R}^D$ . These mappings, $f\colon \mathcal{X}\mapsto \mathcal{Z}$ , are composed of a sequence of bijections or invertible functions. Due to the change of variables formula we can express $p_{\mathcal{X}}(\mathbf{x})$ by
|
| 38 |
+
|
| 39 |
+
$$
|
| 40 |
+
p _ {\mathcal {X}} (\mathbf {x}) = p _ {\mathcal {Z}} (\mathbf {z}) \left| \det \left(\frac {\partial f (\mathbf {x})}{\partial \mathbf {x}}\right) \right|,
|
| 41 |
+
$$
|
| 42 |
+
|
| 43 |
+
where $\partial f(\mathbf{x}) / \partial \mathbf{x}$ is the Jacobian of $f$ at $\mathbf{x}$ . Normalizing flows have the property that the inverse $\mathbf{x} = f^{-1}(\mathbf{z})$ is easy to evaluate and computing the Jacobian determinant takes $O(D)$ time.
|
| 44 |
+
|
| 45 |
+
The bijection introduced by Real NVP (Dinh et al., 2017) called the coupling layer satisfies the above two properties. It leaves part of its inputs unchanged and transforms the other part via functions of the un-transformed variables (with superscript denoting the coordinate indices)
|
| 46 |
+
|
| 47 |
+
$$
|
| 48 |
+
\left\{ \begin{array}{l} \mathbf {y} ^ {1: d} = \mathbf {x} ^ {1: d} \\ \mathbf {y} ^ {d + 1: D} = \mathbf {x} ^ {d + 1: D} \odot \exp (s (\mathbf {x} ^ {1: d})) + t (\mathbf {x} ^ {1: d}), \end{array} \right.
|
| 49 |
+
$$
|
| 50 |
+
|
| 51 |
+
where $\odot$ is an element-wise product, and $s()$ is a scaling and $t()$ a translation function from $\mathbb{R}^d \mapsto \mathbb{R}^{D - d}$, each given by a neural network. To model a nonlinear density map $f(\mathbf{x})$, a number of coupling layers mapping $\mathcal{X}\mapsto \mathcal{Y}_1\mapsto \dots \mapsto \mathcal{Y}_{K - 1}\mapsto \mathcal{Z}$ are composed together, all the while alternating which dimensions are left unchanged and which are transformed. Via the change of variables formula, the probability density function (PDF) of the flow given a data point can be written as
|
| 52 |
+
|
| 53 |
+
$$
|
| 54 |
+
\log p _ {\mathcal {X}} (\mathbf {x}) = \log p _ {\mathcal {Z}} (\mathbf {z}) + \log | \det (\partial \mathbf {z} / \partial \mathbf {x}) | = \log p _ {\mathcal {Z}} (\mathbf {z}) + \sum_ {i = 1} ^ {K} \log | \det (\partial \mathbf {y} _ {i} / \partial \mathbf {y} _ {i - 1}) |. \tag {1}
|
| 55 |
+
$$
|
| 56 |
+
|
| 57 |
+
Note that the Jacobian for the Real NVP is a block-triangular matrix and thus the log-determinant of each map simply becomes
|
| 58 |
+
|
| 59 |
+
$$
|
| 60 |
+
\log |\det(\partial \mathbf{y}_{i} / \partial \mathbf{y}_{i-1})| = \log \left| \exp\left( \mathrm{sum}\left( s_{i}\left( \mathbf{y}_{i-1}^{1:d} \right) \right) \right) \right| = \mathrm{sum}\left( s_{i}\left( \mathbf{y}_{i-1}^{1:d} \right) \right), \tag {2}
|
| 61 |
+
$$
|
| 62 |
+
|
| 63 |
+
where $\mathrm{sum}()$ is the sum over all the vector elements. This model, parameterized by the weights of the scaling and translation neural networks $\theta$ , is then trained via stochastic gradient descent (SGD) on training data points where for each batch $\mathcal{D}$ we maximize the average log likelihood (1) given by
|
| 64 |
+
|
| 65 |
+
$$
|
| 66 |
+
\mathcal {L} = \frac {1}{| \mathcal {D} |} \sum_ {\mathbf {x} \in \mathcal {D}} \log p _ {\mathcal {X}} (\mathbf {x}; \theta).
|
| 67 |
+
$$
|
| 68 |
+
|
| 69 |
+
In practice, Batch Normalization (Ioffe & Szegedy, 2015) is applied as a bijection to outputs of successive coupling layers to stabilize the training of normalizing flows. This bijection implements the normalization procedure using a weighted moving average of the layer's mean and standard deviation values, which has to be adapted to either training or inference regimes.
|
| 70 |
+
|
| 71 |
+
The Real NVP approach can be generalized, resulting in Masked Autoregressive Flows (Papamakarios et al., 2017) (MAF), where the transformation layer is built as an autoregressive neural network: it takes in some input $\mathbf{x} \in \mathbb{R}^D$ and outputs $\mathbf{y} = (y^1, \ldots, y^D)$ with the requirement that the transformation is invertible and that any output $y^i$ cannot depend on inputs with dimension index $\geq i$, i.e. $\mathbf{x}^{\geq i}$. The Jacobian of this transformation is triangular, and thus its determinant is tractable. Instead of using a RNN to share parameters across the $D$ dimensions of $\mathbf{x}$, the sequential computation is avoided by masking, giving the method its name. The inverse, however, which is needed for generating samples, is sequential.
|
| 72 |
+
|
| 73 |
+
By realizing that the scaling and translation function approximators don't need to be invertible, it is straightforward to implement conditioning of the PDF $p_{\mathcal{X}}(\mathbf{x}|\mathbf{h})$ on some additional information $\mathbf{h} \in \mathbb{R}^H$ : we concatenate $\mathbf{h}$ to the inputs of the scaling and translation function approximators of the coupling layers, i.e. $s(\mathrm{concat}(\mathbf{x}^{1:d}, \mathbf{h}))$ and $t(\mathrm{concat}(\mathbf{x}^{1:d}, \mathbf{h}))$ which are modified to map $\mathbb{R}^{d + H} \mapsto \mathbb{R}^{D - d}$ . Another approach is to add a bias computed from $\mathbf{h}$ to every layer inside the $s$ and $t$ networks as proposed by Korshunova et al. (2018). This does not change the log-determinant of the coupling layers given by (2). More importantly for us, for sequential data, indexed by $t$ , we can share parameters across the different conditioners $\mathbf{h}_t$ by using RNNs or Attention in an autoregressive fashion.
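A minimal PyTorch sketch of such a conditional coupling layer is shown below; the hidden sizes and the tanh stabilization of the scale network are our assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """One Real NVP coupling layer conditioned on h via concatenation."""
    def __init__(self, d, D, H, hidden=64):
        super().__init__()
        self.d = d
        self.s = nn.Sequential(nn.Linear(d + H, hidden), nn.ReLU(),
                               nn.Linear(hidden, D - d), nn.Tanh())
        self.t = nn.Sequential(nn.Linear(d + H, hidden), nn.ReLU(),
                               nn.Linear(hidden, D - d))

    def forward(self, x, h):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s = self.s(torch.cat([x1, h], dim=-1))
        t = self.t(torch.cat([x1, h], dim=-1))
        y2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=-1)  # log|det| of this layer, cf. equation (2)
        return torch.cat([x1, y2], dim=-1), log_det

    def inverse(self, y, h):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s = self.s(torch.cat([y1, h], dim=-1))
        t = self.t(torch.cat([y1, h], dim=-1))
        x2 = (y2 - t) * torch.exp(-s)
        return torch.cat([y1, x2], dim=-1)
```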
|
| 74 |
+
|
| 75 |
+
For discrete data the distribution has differential entropy of negative infinity, which leads to arbitrarily high likelihoods when training normalizing flow models, even on test data. To avoid this, one can dequantize the data, often by adding Uniform$[0, 1)$ noise to integer-valued data. The log-likelihood of the resulting continuous model is then lower-bounded by the log-likelihood of the discrete one, as shown in Theis et al. (2016).
|
| 76 |
+
|
| 77 |
+
# 2.2 SELF-ATTENTION
|
| 78 |
+
|
| 79 |
+
The self-attention based Transformer (Vaswani et al., 2017) model has been used for sequence modeling with great success. The multi-head self-attention mechanism enables it to capture both long- and short-term dependencies in time series data. Essentially, the Transformer takes in a sequence $\mathbf{X} = [\mathbf{x}_1,\dots ,\mathbf{x}_T]^{\intercal}\in \mathbb{R}^{T\times D}$ , and the multi-head self-attention transforms this into $H$ distinct query $\mathbf{Q}_h = \mathbf{X}\mathbf{W}_h^Q$ , key $\mathbf{K}_h = \mathbf{X}\mathbf{W}_h^K$ and value $\mathbf{V}_h = \mathbf{X}\mathbf{W}_h^V$ matrices, where the $\mathbf{W}_h^Q$ , $\mathbf{W}_h^K$ , and $\mathbf{W}_h^V$ are learnable parameters. After these linear projections the scaled dot-product attention computes a sequence of vector outputs via:
|
| 80 |
+
|
| 81 |
+
$$
|
| 82 |
+
\mathbf{O}_{h} = \mathrm{Attention}(\mathbf{Q}_{h}, \mathbf{K}_{h}, \mathbf{V}_{h}) = \mathrm{softmax}\left(\frac{\mathbf{Q}_{h} \mathbf{K}_{h}^{\intercal}}{\sqrt{d_{K}}} \cdot \mathbf{M}\right) \mathbf{V}_{h},
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
where a mask $\mathbf{M}$ can be applied to filter out right-ward attention (and thus future information leakage) by setting its upper-triangular elements to $-\infty$, and we scale by $\sqrt{d_{K}}$, where $d_{K}$ is the dimension of the $\mathbf{W}_h^K$ matrices. Afterwards, all $H$ outputs $\mathbf{O}_h$ are concatenated and linearly projected again.
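For illustration, the causal mask can be realized by filling the upper triangle of the attention scores with $-\infty$ before the softmax; the `masked_fill` form below is a common implementation of the multiplicative mask notation above, not the paper's code.

```python
import torch

def causal_scores(q, k, d_k):
    """q, k: (T, d_k) per-head projections; returns masked (T, T) scores."""
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5
    mask = torch.triu(torch.ones_like(scores), diagonal=1).bool()
    return scores.masked_fill(mask, float("-inf"))
```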
|
| 86 |
+
|
| 87 |
+
One typically uses the Transformer in an encoder-decoder setup, where some warm-up time series is passed through the encoder and the decoder can be used to learn and autoregressively generate outputs.
|
| 88 |
+
|
| 89 |
+
# 3 RELATED WORK
|
| 90 |
+
|
| 91 |
+
Related to this work are models that combine normalizing flows with sequential modeling in some way. Transformation Autoregressive Networks (Oliva et al., 2018) model the density of a multivariate variable $\mathbf{x} \in \mathbb{R}^{D}$ as $D$ conditional distributions $\prod_{i=1}^{D} p_{\mathcal{X}}(x^i | x^{i-1}, \ldots, x^1)$, where the conditioning is given by a mixture model derived from the state of a RNN and is then transformed via a bijection. The PixelSNAIL (Chen et al., 2018) method also models the joint distribution as a product of conditional distributions, optionally with some global conditioning, via causal convolutions and self-attention (Vaswani et al., 2017) to capture long-term temporal dependencies. These methods are well suited to modeling high-dimensional data like images; however, their use in modeling the temporal development of data has only recently been explored, for example in VideoFlow (Kumar et al., 2019), which models the distribution of the next video frame via a flow whose base-distribution parameters are output by a ConvNet, whereas our approach is based on conditioning the PDF as described above.
|
| 92 |
+
|
| 93 |
+
Using RNNs for modeling either multivariate or temporal dynamics introduces sequential computational dependencies that are not amenable to parallelization. Despite this, RNNs have been shown to be very effective in modeling sequential dynamics. A recent work in this direction (Hwang et al., 2019) employs bipartite flows with RNNs for temporal conditioning to develop a conditional generative model of multivariate sequential data. The authors use a bidirectional training procedure to learn a generative model of observations that together with the temporal conditioning through a RNN, can also be conditioned on (observed) covariates that are modeled as additional conditioning variables in the latent space, which adds extra padding dimensions to the normalizing flow.
|
| 94 |
+
|
| 95 |
+
The other aspect of related works deals with multivariate probabilistic time series methods which are able to model high dimensional data. The Gaussian Copula Process method (Salinas et al., 2019a) is a RNN-based time series method with a Gaussian copula process output modeled using a low-rank covariance structure to reduce computational complexity and handle non-Gaussian marginal distributions. By using a low-rank approximation of the covariance matrix they obtain a computationally tractable method and are able to scale to multivariate dimensions in the thousands with state-of-the-art results. We will compare our model to this method in what follows.
|
| 96 |
+
|
| 97 |
+
# 4 TEMPORAL CONDITIONED NORMALIZING FLOWS
|
| 98 |
+
|
| 99 |
+
We denote the entities of a multivariate time series by $x_{t}^{i}\in \mathbb{R}$ for $i\in \{1,\dots ,D\}$ where $t$ is the time index. Thus the multivariate vector at time $t$ is given by $\mathbf{x}_t\in \mathbb{R}^D$ . We will in what follows consider time series with $t\in [1,T]$ , sampled from the complete time series history of our data, where for training we will split this time series by some context window $[1,t_0)$ and prediction window $[t_0,T]$ .
|
| 100 |
+
|
| 101 |
+
In the DeepAR model (Salinas et al., 2019b), the log-likelihood of each entity $x_{t}^{i}$ at a time step $t \in [t_0, T]$ is maximized over each individual time series' prediction window. This is done with respect to the parameters of the chosen distributional model via the state of a RNN derived from the previous time step's $x_{t - 1}^{i}$ and the current covariates $\mathbf{c}_t^i$. The emission distribution model, typically Gaussian for real-valued data or negative binomial for count data, is selected to best match the statistics of the time series, and the network incorporates activation functions that satisfy the constraints of the distribution's parameters, e.g. a softplus() for the scale parameter of the Gaussian.
|
| 102 |
+
|
| 103 |
+
A simple model for multivariate real-valued data could use a factorizing distribution in the emissions. Shared parameters can then learn patterns across the individual time series through the temporal component—but the model falls short of capturing dependencies in the emissions of the model. For this, a full joint distribution at each time step must be modeled, for example by using a multivariate Gaussian model. However, modeling the full covariance matrix not only increases the number of parameters of the neural network by $O(D^2)$ , making learning difficult, but computing the loss becomes expensive when $D$ is large. Furthermore, statistical dependencies in the emissions would be limited to second-order effects. These models are referred to as Vec-LSTM in Salinas et al. (2019a).
|
| 104 |
+
|
| 105 |
+
We wish to have a scalable model of $D$ interacting time series $\mathbf{x}_t$, and further to use a flexible distribution model for the emissions that allows capturing and representing higher-order moments. To this end, we model the conditional joint distribution at time $t$ of all time series, $p_{\mathcal{X}}(\mathbf{x}_t|\mathbf{h}_t;\theta)$, with a flow, e.g. a Real NVP, conditioned on either the hidden state of a RNN at time $t$ or an embedding of the time series up to $t - 1$ from an attention module. In the case of an autoregressive RNN (either a LSTM or a GRU (Chung et al., 2014)), its hidden state $\mathbf{h}_t$ is updated given the previous time step's observation $\mathbf{x}_{t - 1}$ and the current time step's covariates $\mathbf{c}_t$ (as in Figure 1):
|
| 106 |
+
|
| 107 |
+
$$
|
| 108 |
+
\mathbf{h}_{t} = \operatorname{RNN}\left(\operatorname{concat}\left(\mathbf{x}_{t-1}, \mathbf{c}_{t}\right), \mathbf{h}_{t-1}\right). \tag {3}
|
| 109 |
+
$$
|
| 110 |
+
|
| 111 |
+
This model is autoregressive since it consumes the observation of the last time step $\mathbf{x}_{t - 1}$ as well as the recurrent state $\mathbf{h}_{t - 1}$ to produce the state $\mathbf{h}_t$ on which we condition the current observation.
|
| 112 |
+
|
| 113 |
+
To get a powerful and general emission distribution model, we stack $K$ layers of a conditional flow module (Real NVP or MAF) and together with the RNN, we arrive at our model of the conditional distribution of the future of all time series, given its past $t \in [1, t_0)$ and all the covariates in $t \in [1, T]$ . As the model is autoregressive it can be written as a product of factors
|
| 114 |
+
|
| 115 |
+
$$
|
| 116 |
+
p_{\mathcal{X}}\left(\mathbf{x}_{t_{0}:T} | \mathbf{x}_{1:t_{0}-1}, \mathbf{c}_{1:T}; \theta\right) = \prod_{t=t_{0}}^{T} p_{\mathcal{X}}\left(\mathbf{x}_{t} | \mathbf{h}_{t}; \theta\right), \tag {4}
|
| 117 |
+
$$
|
| 118 |
+
|
| 119 |
+
where $\theta$ denotes the set of all parameters of both the flow and the RNN.
|
| 120 |
+
|
| 121 |
+
For modeling the time evolution, we also investigate an encoder-decoder Transformer (Vaswani et al., 2017) architecture where the encoder embeds $\mathbf{x}_{1:t_0 - 1}$ and the decoder outputs the conditioning for the flow over $\mathbf{x}_{t_0:T}$ via a masked attention module. See Figure 2 for a schematic of the overall model in this case. While training, care has to be taken to prevent using information from future time points as well as to preserve the autoregressive property by utilizing a mask that reflects the causal direction of the progressing time, i.e. to mask out future time points. The Transformer allows the model to access any part of the historic time series regardless of temporal distance (Li et al., 2019) and thus is potentially able to generate better conditioning for the normalizing flow head.
|
| 122 |
+
|
| 123 |
+
In real-world data the magnitudes of different time series can vary drastically. To normalize scales, we divide each individual time series by its training window mean before feeding it into the model. At inference time, the distributions are correspondingly transformed with the same mean values to match the original scale. This rescaling technique simplifies the problem for the model, which is reflected in significantly improved empirical performance, as noted in Salinas et al. (2019b).
|
| 124 |
+
|
| 125 |
+
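A sketch of this mean scaling is shown below; the small clamp guarding against all-zero series is our addition.

```python
import torch

def scale(x):
    """x: (T, D) training window; returns scaled series and per-series means."""
    mean = x.mean(dim=0).clamp(min=1e-8)
    return x / mean, mean

def unscale(samples, mean):
    """Map forecast samples back to the original scale at inference."""
    return samples * mean
```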
|
| 126 |
+
|
| 127 |
+

|
| 128 |
+
Figure 1: RNN Conditioned Real NVP model schematic at time $t$ , consisting of $K$ blocks of coupling layers and Batch Normalization, where in each coupling layer we condition $\mathbf{x}_t$ and its transformations on the state of a shared RNN from the previous time step $\mathbf{x}_{t-1}$ and current time covariates $\mathbf{c}_t$ which are typically time dependent and time independent features.
|
| 129 |
+
|
| 130 |
+
# 4.1 TRAINING
|
| 131 |
+
|
| 132 |
+
Given $\mathcal{D}$ , a batch of time series, where for each time series and each time step we have $\mathbf{x}_t \in \mathbb{R}^D$ and their associated covariates $\mathbf{c}_t$ , we maximize the log-likelihood given by (1) and (3), i.e.
|
| 133 |
+
|
| 134 |
+
$$
|
| 135 |
+
\mathcal {L} = \frac {1}{| \mathcal {D} | T} \sum_ {\mathbf {x} _ {1: T} \in \mathcal {D}} \sum_ {t = 1} ^ {T} \log p _ {\mathcal {X}} (\mathbf {x} _ {t} | \mathbf {h} _ {t}; \theta)
|
| 136 |
+
$$
|
| 137 |
+
|
| 138 |
+
via SGD using Adam (Kingma & Ba, 2015) with respect to the parameters $\theta$ of the conditional flow and the RNN or Transformer. In practice, the time series $\mathbf{x}_{1:T}$ in a batch $\mathcal{D}$ are selected from a random time window of size $T$ within our training data, and the relative time steps are kept constant. This allows the model to learn to cold-start given only the covariates. It also increases the effective size of our training data when the available history is short, and allows us to trade off computation time against memory consumption, especially when $D$ or $T$ is large.
|
| 139 |
+
|
| 140 |
+
Note that information about absolute time is only available to the RNN or Transformer via the covariates and not the relative position of $\mathbf{x}_t$ in the training data.
|
| 141 |
+
|
| 142 |
+
The Transformer has computational complexity $O(T^2 D)$, compared to $O(T D^2)$ for a RNN, where $T$ is the time series length and we assume that the dimension of the hidden state is proportional to the number of simultaneously modeled time series. This means that for large multivariate time series, i.e. $D > T$, the Transformer flow model has smaller computational complexity, and unlike the RNN, all computation over the time dimension happens in parallel during training.

Figure 2: Transformer Conditioned Real NVP model schematic, consisting of an encoder-decoder stack where the encoder consumes a context window of the time series and the causally masked decoder stack generates the conditioning for the prediction window. Note that the positional encodings are part of the covariates and, unlike in the RNN model, all time points $\mathbf{x}_{1:T}$ are trained on in parallel.
# 4.2 COVARIATES

We employ embeddings for categorical features (Charrington, 2018), which allow relationships within a category, or its context, to be captured while training. Combining such embeddings as features for time series forecasting yields powerful models, like the first-place winner of the Kaggle Taxi Trajectory Prediction<sup>1</sup> challenge (De Brébisson et al., 2015). The covariates $\mathbf{c}_t$ we use are composed of time-dependent (e.g. day of week, hour of day) and time-independent embeddings, where applicable, as well as lag features depending on the time frequency of the data set we are training on. All covariates are thus known for the time periods we wish to forecast.
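
As an illustration, such covariate embeddings could be assembled as in this sketch; the class and the particular feature choices are ours, for exposition only:

```python
import torch
import torch.nn as nn

class TimeFeatureEmbedding(nn.Module):
    """Illustrative covariate builder: embeds categorical time features
    (hour of day, day of week) and concatenates them with lag values."""
    def __init__(self, embed_dim: int = 4):
        super().__init__()
        self.hour = nn.Embedding(24, embed_dim)
        self.dow = nn.Embedding(7, embed_dim)

    def forward(self, hour_idx, dow_idx, lags):
        # hour_idx, dow_idx: (batch, T) integer indices; lags: (batch, T, L)
        return torch.cat([self.hour(hour_idx), self.dow(dow_idx), lags], dim=-1)
```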
# 4.3 INFERENCE
For inference we either obtain the hidden state $\hat{\mathbf{h}}_{t_1}$ by passing a "warm-up" time series $\mathbf{x}_{1:t_1-1}$ through the RNN, or use the cold-start hidden state, i.e. we set $\hat{\mathbf{h}}_{t_1} = \mathbf{h}_1 = \vec{0}$. We then sample a noise vector $\mathbf{z}_{t_1} \in \mathbb{R}^D$ from an isotropic Gaussian and go backward through the flow to obtain a sample of our time series for the next time step, $\hat{\mathbf{x}}_{t_1} = f^{-1}(\mathbf{z}_{t_1}|\hat{\mathbf{h}}_{t_1})$, conditioned on this starting state. We use this sample and its covariates to obtain the next conditioning state $\hat{\mathbf{h}}_{t_1+1}$ via the RNN, and repeat until the end of our inference horizon. This process of sampling trajectories from some initial state can be repeated many times to obtain empirical quantiles of the uncertainty of our prediction over arbitrarily long forecast horizons.
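
The sampling procedure can be sketched as below, where `flow.inverse` and `rnn.step` are assumed interfaces for the inverse flow and the RNN state update (our naming, not the library's):

```python
import torch

@torch.no_grad()
def sample_forecast(rnn, flow, h, covariates, horizon, num_samples=100):
    """Draw sample paths by inverting the flow step by step.
    h: warm-up (or zero) hidden state; covariates: (horizon, C)."""
    paths = []
    for _ in range(num_samples):
        h_t, xs = h, []
        for t in range(horizon):
            z = torch.randn(flow.dim)              # isotropic Gaussian noise
            x = flow.inverse(z, cond=h_t)          # x_t = f^{-1}(z_t | h_t)
            h_t = rnn.step(x, covariates[t], h_t)  # next conditioning state
            xs.append(x)
        paths.append(torch.stack(xs))
    return torch.stack(paths)  # (num_samples, horizon, D), for quantiles
```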
The attention model similarly uses a warm-up time series $\mathbf{x}_{1:t_1 - 1}$ and its covariates, passes them through the encoder, and uses the decoder to output the conditioning for sampling from the flow. Each sample is then fed back into the decoder to iteratively obtain the next conditioning state, analogous to the inference procedure in sequence-to-sequence models.
Note that we do not sample from a reduced-temperature model, e.g. by scaling the variance of the isotropic Gaussian, unlike what is done in likelihood-based generative models (Parmar et al., 2018) to obtain higher quality samples.
# 5 EXPERIMENTS
Here we discuss a toy experiment for sanity-checking our model and evaluate probabilistic forecasting results on six real-world data sets with competitive baselines. The source code of the model, as well as other time series models, is available at https://github.com/zalandoresearch/pytorch-ts.
# 5.1 SIMULATED FLOW IN A SYSTEM OF PIPES
In this toy experiment, we check whether the model actually learns the relations between time series that its inductive bias is meant to capture, by simulating the flow of a liquid in a system of pipes with valves. See Figure 3 for a depiction of the system.
Liquid flows from left to right; the pressure at the first sensor in the system is given by $S_0 = X + 3$ with $X \sim \mathrm{Gamma}(1, 0.2)$ in the shape/scale parameterization of the Gamma distribution. The valves are given by $V_{1}, V_{2} \sim_{\mathrm{iid}} \mathrm{Beta}(0.5, 0.5)$, and we have

$$
S_{i} = \frac{V_{i}}{V_{1} + V_{2}} S_{0} + \epsilon_{i}
$$

Figure 3: System of pipes with liquid flowing from left to right with sensors $(S_{i})$ and valves $(V_{i})$ .
for $i \in \{1, 2\}$ and finally $S_{3} = S_{1} + S_{2} + \epsilon_{3}$ with $\epsilon_{*} \sim \mathcal{N}(0, 0.1)$ . With this simulation we check whether our model captures correlations in space and time. The correlation between $S_{1}$ and $S_{2}$ results from both having the same source, measured by $S_{0}$ . This is reflected by $\mathrm{Cov}(S_{1}, S_{2}) > 0$ , which is captured by our model as shown in Figure 4 left.
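
For reference, a self-contained NumPy sketch of this data-generating process. We resample the valves at every step, add the one-step transport delay implied by the cross-covariance discussion below, and read $\mathcal{N}(0, 0.1)$ as a standard deviation of $0.1$; all three choices are our assumptions where the text leaves details open:

```python
import numpy as np

def simulate_pipes(T: int, seed: int = 0) -> np.ndarray:
    """Simulate the pipe system with a one-step delay S0 -> (S1, S2) -> S3."""
    rng = np.random.default_rng(seed)
    s0 = rng.gamma(shape=1.0, scale=0.2, size=T) + 3.0   # source pressure
    v1 = rng.beta(0.5, 0.5, size=T)                      # valve openings
    v2 = rng.beta(0.5, 0.5, size=T)
    eps = rng.normal(0.0, 0.1, size=(3, T))
    s1 = v1 / (v1 + v2) * np.roll(s0, 1) + eps[0]
    s2 = v2 / (v1 + v2) * np.roll(s0, 1) + eps[1]
    s3 = np.roll(s1, 1) + np.roll(s2, 1) + eps[2]
    series = np.stack([s0, s1, s2, s3], axis=1)
    return series[2:]  # drop warm-up steps affected by the roll wrap-around

series = simulate_pipes(1000)
print(np.cov(series[:, 1], series[:, 2])[0, 1] > 0)      # Cov(S1, S2) > 0
```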
The cross-covariance structure between consecutive time points in the ground truth and as captured by our trained model is depicted in Figure 4 right. It reflects the true flow of liquid in the system from $S_0$ at time $t$ to $S_1$ and $S_2$ at time $t + 1$ , on to $S_3$ at time $t + 2$ .

Figure 4: Estimated (cross-)covariance matrices. Darker means higher positive values. left: Covariance matrix for a fixed time point capturing the correlation between $S_{1}$ and $S_{2}$ . right: Cross-covariance matrix between consecutive time points capturing true flow of liquid in the pipe system.
Table 1: Test set $\mathrm{CRPS}_{\mathrm{sum}}$ comparison (lower is better) of models from Salinas et al. (2019a) and our models LSTM-Real-NVP, LSTM-MAF and Transformer-MAF. The two best methods are in bold and the mean and standard errors of our methods are obtained by rerunning them 20 times.

| Data set | Vec-LSTM ind-scaling | Vec-LSTM lowrank-Copula | GP scaling | GP Copula | LSTM Real-NVP | LSTM MAF | Transformer MAF |
|---|---|---|---|---|---|---|---|
| Exchange | 0.008±0.001 | 0.007±0.000 | 0.009±0.000 | 0.007±0.000 | 0.0064±0.003 | **0.005±0.003** | **0.005±0.003** |
| Solar | 0.391±0.017 | 0.319±0.011 | 0.368±0.012 | 0.337±0.024 | 0.331±0.02 | **0.315±0.023** | **0.301±0.014** |
| Electricity | 0.025±0.001 | 0.064±0.008 | 0.022±0.000 | 0.024±0.002 | 0.024±0.001 | **0.0208±0.000** | **0.0207±0.000** |
| Traffic | 0.087±0.041 | 0.103±0.006 | 0.079±0.000 | 0.078±0.002 | 0.078±0.001 | **0.069±0.002** | **0.056±0.001** |
| Taxi | 0.506±0.005 | 0.326±0.007 | 0.183±0.395 | 0.208±0.183 | **0.175±0.001** | **0.161±0.002** | 0.179±0.002 |
| Wikipedia | 0.133±0.002 | 0.241±0.033 | 1.483±1.034 | 0.086±0.004 | 0.078±0.001 | **0.067±0.001** | **0.063±0.003** |
# 5.2 REAL WORLD DATA SETS
For evaluation we compute the Continuous Ranked Probability Score (CRPS) (Matheson & Winkler, 1976) on each individual time series, as well as on the sum of all time series (the latter denoted by $\mathrm{CRPS}_{\mathrm{sum}}$ ). CRPS measures the compatibility of a cumulative distribution function $F$ with an observation $x$ as

$$
\mathrm{CRPS}(F, x) = \int_{\mathbb{R}} \left( F(z) - \mathbb{I}\{x \leq z\} \right)^{2} \, \mathrm{d}z \tag{5}
$$
where $\mathbb{I}\{x\leq z\}$ is the indicator function, which is one if $x\leq z$ and zero otherwise. CRPS is a proper scoring rule, hence it attains its minimum when the predictive distribution $F$ equals the data distribution. Employing the empirical CDF of $F$, i.e. $\hat{F}(z) = \frac{1}{n}\sum_{i = 1}^{n}\mathbb{I}\{X_i\leq z\}$ with $n$ samples $X_{i}\sim F$, as a natural approximation of the predictive CDF, CRPS can be computed directly from simulated samples of the conditional distribution (4) at each time point (Jordan et al., 2019). In practice we take 100 samples to estimate the empirical CDF. Finally, $\mathrm{CRPS}_{\mathrm{sum}}$ is obtained by first summing across the $D$ time series, both for the ground-truth data and for the sampled data (yielding $\hat{F}_{\mathrm{sum}}(t)$ for each time point), and then averaging over the prediction horizon, i.e. formally

$$
\mathrm{CRPS}_{\mathrm{sum}} = \mathbb{E}_{t}\left[ \mathrm{CRPS}\left( \hat{F}_{\mathrm{sum}}(t), \sum_{i} x_{t}^{i} \right) \right].
$$
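
For illustration, the sample-based CRPS estimate can also be computed via the standard energy form $\mathrm{CRPS}(F,x) = \mathbb{E}|X - x| - \tfrac{1}{2}\mathbb{E}|X - X'|$, which is equivalent to the integral above; this sketch is ours, not the scoringRules implementation used for evaluation:

```python
import numpy as np

def crps_from_samples(samples: np.ndarray, x: float) -> float:
    """CRPS(F, x) estimated from i.i.d. forecast samples X, X' ~ F."""
    term1 = np.abs(samples - x).mean()
    term2 = 0.5 * np.abs(samples[:, None] - samples[None, :]).mean()
    return term1 - term2

samples = np.random.default_rng(0).normal(size=100)  # e.g. 100 draws from F
print(crps_from_samples(samples, 0.3))
```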
Our model is trained on the training split of each data set; for testing we use rolling-window predictions starting from the last point seen in the training data and compare them against the test set. We train on the Exchange (Lai et al., 2018), Solar (Lai et al., 2018), Electricity $^2$ , Traffic $^3$ , Taxi $^4$ and Wikipedia $^5$ open data sets, preprocessed exactly as in Salinas et al. (2019a), with their properties listed in Table 2 of the appendix. Both Taxi and Wikipedia consist of count data and are thus dequantized (and mean-scaled) before being fed to the flow.

<sup>1</sup><https://www.kaggle.com/c/pkdd-15-predict-taxi-service-trajectory-i>
<sup>2</sup>https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014
<sup>3</sup>https://archive.ics.uci.edu/ml/datasets/PEMS-SF
<sup>4</sup>https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page
<sup>5</sup>https://github.com/mbohlkeschneider/gluon-ts/tree/mv_release/datasets
We compare our method using an LSTM and two different normalizing flows (LSTM-Real-NVP and LSTM-MAF, based on Real NVP and MAF, respectively), as well as a Transformer model with MAF (Transformer-MAF), against the most competitive baseline probabilistic models from Salinas et al. (2019a) on the six data sets, and report the results in Table 1. Vec-LSTM-ind-scaling outputs the parameters of an independent Gaussian distribution with mean-scaling, while Vec-LSTM-lowrank-Copula parametrizes a low-rank plus diagonal covariance via a copula process. GP-scaling unrolls an LSTM with scaling on each individual time series before reconstructing the joint distribution via a low-rank Gaussian. Similarly, GP-Copula unrolls an LSTM on each individual time series, and the joint emission distribution is then given by a low-rank plus diagonal covariance Gaussian copula.

Figure 5: Visual analysis of the dependency structure extrapolation of the model. Left: Cross-covariance matrix computed from the test split of Traffic benchmark. Middle: Cross-covariance matrix computed from the mean of 100 sample trajectories drawn from the Transformer-MAF model's extrapolation into the future (test split). Right: The absolute difference of the two matrices mostly shows small deviations between ground-truth and extrapolation.
In Table 1 we observe that MAF, with either the RNN or the self-attention mechanism for temporal conditioning, achieves state-of-the-art (to the best of our knowledge) $\mathrm{CRPS}_{\mathrm{sum}}$ on all benchmarks. Moreover, the bipartite flows with an RNN either outperform or are competitive with the previous state-of-the-art results listed in the first four columns of Table 1. Further analyses with other metrics (e.g. CRPS and MSE) are reported in Section B of the appendix.
To showcase how well our model captures dependencies when extrapolating the time series into the future, we plot in Figure 5 the cross-covariance matrix of the observations (left) as well as that of the mean of 100 sample trajectories (middle) drawn from the Transformer-MAF model for the test split of the Traffic data set. As can be seen, most of the covariance structure, especially in the top-left region of highly correlated sensors, is very well reflected in the samples drawn from the model.
# 6 CONCLUSION
We have presented a general method for modeling high-dimensional probabilistic multivariate time series by combining conditional normalizing flows with an autoregressive model, such as a recurrent neural network or an attention module. Autoregressive models have a long-standing reputation for working very well in time series forecasting, as they show good performance when extrapolating into the future. The flow model, on the other hand, does not assume any simple fixed distribution class, but can instead adapt to a broad range of high-dimensional data distributions. Their combination thus pairs the extrapolation power of the autoregressive model class with the density-estimation flexibility of flows. Furthermore, it is computationally efficient, without needing to resort to approximations (e.g. low-rank approximations of a covariance structure as in Gaussian copula methods), and is robust compared to deep kernel learning methods, especially for large $D$. Analysis on six commonly used time series benchmarks establishes new state-of-the-art performance against competitive methods.

A natural way to improve our method is to incorporate a better underlying flow model. For example, Table 1 shows that swapping the Real NVP flow for a MAF improved performance, a consequence of Real NVP lagging behind MAF in density modeling. Likewise, we would expect other design choices of the flow model to improve performance, e.g. changes to the dequantization method, the specific affine coupling layer, or more expressive conditioning, say via another Transformer. Recent improvements to flows, e.g. those proposed in Flow++ (Ho et al., 2019) to obtain expressive bipartite flow models, or models that handle discrete categorical data (Tran et al., 2019), are left as future work to assess their usefulness. To our knowledge, it is still an open problem how to model discrete ordinal data via flows, which would best capture the nature of some data sets (e.g. sales data).
# ACKNOWLEDGMENTS
K.R.: I would like to thank Rob Hyndman and Zaeem Burq for the helpful discussions and suggestions. I would like to acknowledge the traditional owners of the land on which I have lived and worked, the Wurundjeri people of the Kulin nation, who have been custodians of their land for thousands of years. I pay my respects to their elders, past and present, as well as to the past and present Aboriginal elders of other communities.
We wish to acknowledge and thank the authors and contributors of the following open source libraries that were used in this work: GluonTS (Alexandrov et al., 2020), NumPy (Harris et al., 2020), Pandas (Pandas development team, 2020), matplotlib (Hunter, 2007) and PyTorch (Paszke et al., 2019). We would also like to thank the reviewers, whose comments and suggestions have without a doubt helped improve this paper.
# REFERENCES

Alexander Alexandrov, Konstantinos Benidis, Michael Bohlke-Schneider, Valentin Flunkert, Jan Gasthaus, Tim Januschowski, Danielle C. Maddix, Syama Rangapuram, David Salinas, Jasper Schulz, Lorenzo Stella, Ali Caner Türkmen, and Yuyang Wang. GluonTS: Probabilistic and Neural Time Series Modeling in Python. Journal of Machine Learning Research, 21(116):1-6, 2020. URL http://jmlr.org/papers/v21/19-820.html.

Sam Charrington. TWiML & AI Podcast: Systems and software for machine learning at scale with Jeff Dean, 2018. URL https://bit.ly/2G0LmGg.

Xi Chen, Nikhil Mishra, Mostafa Rohaninejad, and Pieter Abbeel. PixelSNAIL: An improved autoregressive generative model. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 864-872, Stockholm, Sweden, 2018. PMLR. URL http://proceedings.mlr.press/v80/chen18h.html.

Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS 2014 Workshop on Deep Learning, 2014.

Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). In Yoshua Bengio and Yann LeCun (eds.), 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016. URL http://arxiv.org/abs/1511.07289.

Emmanuel de Bézenac, Syama Sundar Rangapuram, Konstantinos Benidis, Michael Bohlke-Schneider, Richard Kurle, Lorenzo Stella, Hilaf Hasson, Patrick Gallinari, and Tim Januschowski. Normalizing Kalman Filters for Multivariate Time Series Analysis. In Advances in Neural Information Processing Systems, volume 33. Curran Associates, Inc., 2020.

Alexandre De Brébisson, Étienne Simon, Alex Auvolat, Pascal Vincent, and Yoshua Bengio. Artificial neural networks applied to taxi destination prediction. In Proceedings of the 2015th International Conference on ECML PKDD Discovery Challenge - Volume 1526, ECMLPKDDDC'15, pp. 40-51, Aachen, Germany, 2015. CEUR-WS.org. URL http://dl.acm.org/citation.cfm?id=3056172.3056178.

Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using Real NVP. In International Conference on Learning Representations 2017 (Conference Track), 2017. URL https://openreview.net/forum?id=HkpbnH91x.

Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, Robert Kern, Matti Picus, Stephan Hoyer, Marten H. van Kerkwijk, Matthew Brett, Allan Haldane, Jaime Fernández del Río, Mark Wiebe, Pearu Peterson, Pierre Gérard-Marchant, Kevin Sheppard, Tyler Reddy, Warren Weckesser, Hameer Abbasi, Christoph Gohlke, and Travis E. Oliphant. Array programming with NumPy. Nature, 585(7825):357-362, September 2020. doi: 10.1038/s41586-020-2649-2. URL https://doi.org/10.1038/s41586-020-2649-2.

Jonathan Ho, Xi Chen, Aravind Srinivas, Yan Duan, and Pieter Abbeel. Flow++: Improving flow-based generative models with variational dequantization and architecture design. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 2722-2730, Long Beach, California, USA, 2019. PMLR. URL http://proceedings.mlr.press/v97/ho19a.html.

S. Hochreiter and J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735-1780, November 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735.

J. D. Hunter. Matplotlib: A 2D graphics environment. Computing in Science & Engineering, 9(3):90-95, 2007. doi: 10.1109/MCSE.2007.55.

Seong Jae Hwang, Zirui Tao, Won Hwa Kim, and Vikas Singh. Conditional recurrent flow: Conditional generation of longitudinal samples with applications to neuroimaging. In The IEEE International Conference on Computer Vision (ICCV), October 2019.

R.J. Hyndman and G. Athanasopoulos. Forecasting: Principles and practice. OTexts, 2018. ISBN 9780987507112.

Rob Hyndman, Anne Koehler, Keith Ord, and Ralph Snyder. Forecasting with exponential smoothing: The state space approach, chapter 17, pp. 287-300. Springer-Verlag, 2008. doi: 10.1007/978-3-540-71918-2.

Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning - Volume 37, ICML'15, pp. 448-456. JMLR.org, 2015. URL http://dl.acm.org/citation.cfm?id=3045118.3045167.

Alexander Jordan, Fabian Krüger, and Sebastian Lerch. Evaluating probabilistic forecasts with scoringRules. Journal of Statistical Software, Articles, 90(12):1-37, 2019. ISSN 1548-7660. doi: 10.18637/jss.v090.i12. URL https://www.jstatsoft.org/v090/i12.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.

Iryna Korshunova, Yarin Gal, Arthur Gretton, and Joni Dambre. Conditional BRUNO: A Deep Recurrent Process for Exchangeable Labelled Data. In Bayesian Deep Learning workshop, NIPS, 2018.

Rahul G. Krishnan, Uri Shalit, and David Sontag. Structured inference networks for nonlinear state space models. In AAAI, 2017.

Manoj Kumar, Mohammad Babaeizadeh, Dumitru Erhan, Chelsea Finn, Sergey Levine, Laurent Dinh, and Durk Kingma. VideoFlow: A Flow-Based Generative Model for Video. In Workshop on Invertible Neural Nets and Normalizing Flows, ICML, 2019.

Guokun Lai, Wei-Cheng Chang, Yiming Yang, and Hanxiao Liu. Modeling long- and short-term temporal patterns with deep neural networks. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR '18, pp. 95-104, New York, NY, USA, 2018. ACM. ISBN 978-1-4503-5657-2. doi: 10.1145/3209978.3210006. URL http://doi.acm.org/10.1145/3209978.3210006.

Shiyang Li, Xiaoyong Jin, Yao Xuan, Xiyou Zhou, Wenhu Chen, Yu-Xiang Wang, and Xifeng Yan. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 5244-5254. Curran Associates, Inc., 2019.

H. Lütkepohl. New Introduction to Multiple Time Series Analysis. Springer Berlin Heidelberg, 2007. ISBN 9783540262398. URL https://books.google.de/books?id=muorJ6FHIiEC.

James E. Matheson and Robert L. Winkler. Scoring rules for continuous probability distributions. Management Science, 22(10):1087-1096, 1976.

Junier Oliva, Avinava Dubey, Manzil Zaheer, Barnabas Poczos, Ruslan Salakhutdinov, Eric Xing, and Jeff Schneider. Transformation autoregressive networks. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 3898-3907, Stockholm, Sweden, 2018. PMLR. URL http://proceedings.mlr.press/v80/oliva18a.html.

Boris N. Oreshkin, Dmitri Carpov, Nicolas Chapados, and Yoshua Bengio. N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=r1ecqn4YwB.

The Pandas development team. pandas-dev/pandas: Pandas, February 2020. URL https://doi.org/10.5281/zenodo.3509134.

George Papamakarios, Theo Pavlakou, and Iain Murray. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems 30, 2017.

George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference, 2019.

Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pp. 4055-4064, Stockholmsmässan, Stockholm, Sweden, 10-15 Jul 2018. PMLR. URL http://proceedings.mlr.press/v80/parmar18a.html.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 8026-8037. Curran Associates, Inc., 2019.

David Salinas, Michael Bohlke-Schneider, Laurent Callot, Roberto Medico, and Jan Gasthaus. High-dimensional multivariate forecasting with low-rank Gaussian copula processes. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 6824-6834. Curran Associates, Inc., 2019a.

David Salinas, Valentin Flunkert, Jan Gasthaus, and Tim Januschowski. DeepAR: Probabilistic forecasting with autoregressive recurrent networks. International Journal of Forecasting, 2019b. ISSN 0169-2070. URL http://www.sciencedirect.com/science/article/pii/S0169207019301888.

E.G. Tabak and C.V. Turner. A family of nonparametric density estimation algorithms. Communications on Pure and Applied Mathematics, 66(2):145-164, 2013.

L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. In International Conference on Learning Representations, 2016. URL http://arxiv.org/abs/1511.01844.

Dustin Tran, Keyon Vafa, Kumar Agrawal, Laurent Dinh, and Ben Poole. Discrete flows: Invertible generative models of discrete data. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 14692-14701. Curran Associates, Inc., 2019.

Ruey S. Tsay. Multivariate Time Series Analysis: With R and Financial Applications. Wiley Series in Probability and Statistics. Wiley, 2014. ISBN 9781118617908.

Roy van der Weide. GO-GARCH: a multivariate generalized orthogonal GARCH model. Journal of Applied Econometrics, 17(5):549-564, 2002. doi: 10.1002/jae.688.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U.V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 5998-6008. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf.
# A DATA SET DETAILS
Table 2: Properties of the data sets used in experiments.

| Data set | Dimension $D$ | Domain | Freq. | Total time steps | Prediction length |
|---|---|---|---|---|---|
| Exchange | 8 | $\mathbb{R}^+$ | daily | 6,071 | 30 |
| Solar | 137 | $\mathbb{R}^+$ | hourly | 7,009 | 24 |
| Electricity | 370 | $\mathbb{R}^+$ | hourly | 5,833 | 24 |
| Traffic | 963 | (0, 1) | hourly | 4,001 | 24 |
| Taxi | 1,214 | $\mathbb{N}$ | 30-min | 1,488 | 24 |
| Wikipedia | 2,000 | $\mathbb{N}$ | daily | 792 | 30 |
# B ADDITIONAL METRICS
We used exactly the same open source code to evaluate our metrics as provided by the authors of Salinas et al. (2019a).
# B.1 COMPARISON AGAINST CLASSICAL BASELINES
In Table 3 we report test set $\mathrm{CRPS}_{\mathrm{sum}}$ results for VAR (Lütkepohl, 2007), a multivariate linear vector autoregressive model with lags corresponding to the periodicity of the data; VAR-Lasso, a Lasso-regularized VAR; GARCH (van der Weide, 2002), a multivariate conditional heteroskedastic model; GP, a Gaussian process model; KVAE (Krishnan et al., 2017), a variational autoencoder on top of a linear state space model; and VES, an innovation state space model (Hyndman et al., 2008). Note that the VAR-Lasso, KVAE and VES metrics are from de Bézenac et al. (2020).
Table 3: Test set $\mathrm{CRPS}_{\mathrm{sum}}$ (lower is better) of classical methods and our Transformer-MAF model, where the mean and standard errors of our model are obtained over 20 runs.

| Data set | VAR | VAR-Lasso | GP | GARCH | VES | KVAE | Transformer MAF |
|---|---|---|---|---|---|---|---|
| Exchange | 0.010±0.00 | 0.012±0.000 | 0.011±0.001 | 0.020±0.000 | 0.005±0.00 | 0.014±0.002 | 0.005±0.003 |
| Solar | 0.524±0.001 | 0.51±0.006 | 0.828±0.01 | 0.869±0.00 | 0.9±0.003 | 0.34±0.025 | 0.301±0.014 |
| Electricity | 0.031±0.00 | 0.025±0.00 | 0.947±0.016 | 0.278±0.00 | 0.88±0.003 | 0.051±0.019 | 0.0207±0.000 |
| Traffic | 0.144±0.00 | 0.15±0.002 | 2.198±0.774 | 0.368±0.00 | 0.35±0.002 | 0.1±0.005 | 0.056±0.001 |
| Taxi | 0.292±0.00 | - | 0.425±0.199 | - | - | - | 0.179±0.002 |
| Wikipedia | 3.4±0.003 | 3.1±0.004 | 0.933±0.003 | - | - | 0.095±0.012 | 0.063±0.003 |
# B.2 CONTINUOUS RANKED PROBABILITY SCORE (CRPS)
Table 4 reports the marginal CRPS, averaged over the $D$ dimensions and over the predicted time steps, evaluated against the test interval.
Table 4: Test set CRPS comparison (lower is better) of models from Salinas et al. (2019a) and our models LSTM-Real-NVP, LSTM-MAF and Transformer-MAF. The mean and standard errors are obtained by re-running each method three times.

| Data set | Vec-LSTM ind-scaling | Vec-LSTM lowrank-Copula | GP scaling | GP Copula | LSTM Real-NVP | LSTM MAF | Transformer MAF |
|---|---|---|---|---|---|---|---|
| Exchange | 0.013±0.000 | 0.009±0.000 | 0.017±0.000 | 0.008±0.000 | 0.010±0.001 | 0.012±0.003 | 0.012±0.003 |
| Solar | 0.434±0.012 | 0.384±0.010 | 0.415±0.009 | 0.371±0.022 | 0.365±0.02 | 0.378±0.032 | 0.368±0.001 |
| Electricity | 0.059±0.001 | 0.084±0.006 | 0.053±0.000 | 0.056±0.002 | 0.059±0.001 | 0.051±0.000 | 0.052±0.000 |
| Traffic | 0.168±0.037 | 0.165±0.004 | 0.140±0.002 | 0.133±0.001 | 0.172±0.001 | 0.124±0.002 | 0.134±0.001 |
| Taxi | 0.586±0.004 | 0.416±0.004 | 0.346±0.348 | 0.360±0.201 | 0.327±0.001 | 0.314±0.003 | 0.377±0.002 |
| Wikipedia | 0.379±0.004 | 0.247±0.001 | 1.549±1.017 | 0.236±0.000 | 0.333±0.001 | 0.282±0.002 | 0.274±0.007 |
# B.3 MEAN SQUARED ERROR (MSE)
The MSE is defined as the mean squared error over all time series dimensions $D$ and over the whole prediction range with respect to the test data. Table 5 shows the marginal MSE results.
Table 5: Test set MSE comparison (lower is better) of models from Salinas et al. (2019a) and our models LSTM-Real-NVP, LSTM-MAF and Transformer-MAF.

| Data set | Vec-LSTM ind-scaling | Vec-LSTM lowrank-Copula | GP scaling | GP Copula | LSTM Real-NVP | LSTM MAF | Transformer MAF |
|---|---|---|---|---|---|---|---|
| Exchange | $1.6 \times 10^{-4}$ | $1.9 \times 10^{-4}$ | $2.9 \times 10^{-4}$ | $1.7 \times 10^{-4}$ | $2.4 \times 10^{-4}$ | $3.8 \times 10^{-4}$ | $3.4 \times 10^{-4}$ |
| Solar | $9.3 \times 10^{2}$ | $2.9 \times 10^{3}$ | $1.1 \times 10^{3}$ | $9.8 \times 10^{2}$ | $9.1 \times 10^{2}$ | $9.8 \times 10^{2}$ | $9.3 \times 10^{2}$ |
| Electricity | $2.1 \times 10^{5}$ | $5.5 \times 10^{6}$ | $1.8 \times 10^{5}$ | $2.4 \times 10^{5}$ | $2.5 \times 10^{5}$ | $1.8 \times 10^{5}$ | $2.0 \times 10^{5}$ |
| Traffic | $6.3 \times 10^{-4}$ | $1.5 \times 10^{-3}$ | $5.2 \times 10^{-4}$ | $6.9 \times 10^{-4}$ | $6.9 \times 10^{-4}$ | $4.9 \times 10^{-4}$ | $5.0 \times 10^{-4}$ |
| Taxi | $7.3 \times 10^{1}$ | $5.1 \times 10^{1}$ | $2.7 \times 10^{1}$ | $3.1 \times 10^{1}$ | $2.6 \times 10^{1}$ | $2.4 \times 10^{1}$ | $4.5 \times 10^{1}$ |
| Wikipedia | $7.2 \times 10^{7}$ | $3.8 \times 10^{7}$ | $5.5 \times 10^{7}$ | $4.0 \times 10^{7}$ | $4.7 \times 10^{7}$ | $3.8 \times 10^{7}$ | $3.1 \times 10^{7}$ |
# C UNIVARIATE AND POINT FORECASTS
Univariate methods often give better forecasts than multivariate ones, which is counter-intuitive; the reason is the difficulty of estimating cross-series correlations. The additional variance that multivariate methods introduce often ends up harming the forecast, even when the individual time series are known to be related. Thus, as an additional sanity check that our method improves the forecast rather than degrading it, we report metrics for a modern univariate point forecasting method as well as a multivariate point forecasting method on the Traffic data set.
Figure 6 reports the metrics for LSTNet (Lai et al., 2018), a multivariate point forecasting method, and Figure 7 for N-BEATS (Oreshkin et al., 2020), a univariate model. As can be seen, our methods improve on these metrics for the Traffic data set, and this pattern holds for the other data sets in our experiments. As a visual comparison, we have also plotted the prediction intervals from our models in Figures 8, 9, 10 and 11.
# D EXPERIMENT DETAILS
# D.1 FEATURES
For hourly data sets we use hour-of-day, day-of-week and day-of-month features, which are normalized. For daily data sets we use day-of-week features. For data sets with minute granularity we use minute-of-hour, hour-of-day and day-of-week features. The normalized features are concatenated to the RNN or Transformer input at each time step. We also concatenate lag values as inputs according to the data set's time frequency: [1, 24, 168] for hourly data, [1, 7, 14] for daily data and [1, 2, 4, 12, 24, 48] for the half-hourly data, as sketched below.
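
A minimal sketch of this lag-feature construction, assuming a (time, dimension) tensor; the function name is ours:

```python
import torch

def lagged_inputs(x: torch.Tensor, lags=(1, 24, 168)) -> torch.Tensor:
    """Gather lagged values of a (T, D) series as extra input features.
    Output: (T - max(lags), D * len(lags)); row i holds x[t - lag] for
    each lag, where t = max(lags) + i."""
    max_lag = max(lags)
    cols = [x[max_lag - lag : x.shape[0] - lag] for lag in lags]
    return torch.cat(cols, dim=-1)

x = torch.arange(200.0).unsqueeze(-1)  # toy univariate series
print(lagged_inputs(x).shape)          # torch.Size([32, 3]) for hourly lags
```
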

|
| 360 |
+
|
| 361 |
+

|
| 362 |
+
|
| 363 |
+

|
| 364 |
+
|
| 365 |
+

|
| 366 |
+
|
| 367 |
+

|
| 368 |
+
|
| 369 |
+

|
| 370 |
+
|
| 371 |
+

|
| 372 |
+
|
| 373 |
+

|
| 374 |
+
|
| 375 |
+

|
| 376 |
+
|
| 377 |
+

|
| 378 |
+
|
| 379 |
+

|
| 380 |
+
|
| 381 |
+

|
| 382 |
+
|
| 383 |
+

|
| 384 |
+
Figure 6: Point forecast and test set ground-truth from LSTNet multivariate model for Traffic data of the first 16 of 963 time series. CRPS<sub>sum</sub>: 0.125, CRPS: 0.202 and MSE: $7.4 \times 10^{-4}$ .
|
| 385 |
+
|
| 386 |
+

|
| 387 |
+
|
| 388 |
+

|
| 389 |
+
|
| 390 |
+

|
| 391 |
+
|
| 392 |
+
the RNN or Transformer input at each time step. We also concatenate lag values as inputs according to the data set's time frequency: [1, 24, 168] for hourly data, [1, 7, 14] for daily and [1, 2, 4, 12, 24, 48] for the half-hourly data.
|
| 393 |
+
|
| 394 |
+
# D.2 HYPERPARAMETERS
We use a batch size of 64, with 100 batches per epoch, and train for a maximum of 40 epochs with a learning rate of $1\mathrm{e}{-3}$. The LSTM hyperparameters are the ones from Salinas et al. (2019a), and we use $K = 5$ stacked normalizing-flow bijection layers. The components of the normalizing flows ($f$ and $g$) are linear feed-forward layers (with fixed input and final output sizes, because we model bijections) with hidden dimension 100 and ELU (Clevert et al., 2016) activation functions. We sample 100 times to report the metrics on the test set. The Transformer uses $H = 8$ heads, $n = 3$ encoding and $m = 3$ decoding layers, and a dropout rate of 0.1. All experiments ran on a single Nvidia V100 GPU, and the code to reproduce the results will be made available after the review process.
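
For reference, the stated hyperparameters collected into one illustrative configuration dictionary (the key names are ours, not the pytorch-ts API):

```python
# Illustrative hyperparameter configuration, mirroring the values above.
config = {
    "batch_size": 64,
    "batches_per_epoch": 100,
    "max_epochs": 40,
    "learning_rate": 1e-3,
    "num_flow_blocks": 5,         # K coupling-layer blocks
    "flow_hidden_dim": 100,       # linear layers with ELU activations
    "num_parallel_samples": 100,  # samples used for test-set metrics
    "transformer": {"heads": 8, "encoder_layers": 3,
                    "decoder_layers": 3, "dropout": 0.1},
}
```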


Figure 7: Point forecast and test set ground-truth from the N-BEATS univariate model for Traffic data, first 16 of 963 time series. $\mathrm{CRPS}_{\mathrm{sum}}$: 0.174, CRPS: 0.228 and MSE: $8.4 \times 10^{-4}$.

|
| 426 |
+
|
| 427 |
+

|
| 428 |
+
|
| 429 |
+

|
| 430 |
+
|
| 431 |
+

|
| 432 |
+
|
| 433 |
+

|
| 434 |
+
|
| 435 |
+

|
| 436 |
+
|
| 437 |
+

|
| 438 |
+
|
| 439 |
+

|
| 440 |
+
|
| 441 |
+

|
| 442 |
+
|
| 443 |
+

|
| 444 |
+
|
| 445 |
+

|
| 446 |
+
|
| 447 |
+

|
| 448 |
+
|
| 449 |
+

|
| 450 |
+
|
| 451 |
+

|
| 452 |
+
|
| 453 |
+

|
| 454 |
+
|
| 455 |
+

|
| 456 |
+
Figure 8: Prediction intervals and test set ground-truth from LSTM-REAL-NVP model for Traffic data of the first 16 of 963 time series.

|
| 459 |
+
|
| 460 |
+

|
| 461 |
+
|
| 462 |
+

|
| 463 |
+
|
| 464 |
+

|
| 465 |
+
|
| 466 |
+

|
| 467 |
+
|
| 468 |
+

|
| 469 |
+
|
| 470 |
+

|
| 471 |
+
|
| 472 |
+

|
| 473 |
+
|
| 474 |
+

|
| 475 |
+
|
| 476 |
+

|
| 477 |
+
|
| 478 |
+

|
| 479 |
+
|
| 480 |
+

|
| 481 |
+
|
| 482 |
+

|
| 483 |
+
|
| 484 |
+

|
| 485 |
+
|
| 486 |
+

|
| 487 |
+
|
| 488 |
+

|
| 489 |
+
Figure 9: Prediction intervals and test set ground-truth from Transformer-MAF model for Traffic data of the first 16 of 963 time series.

|
| 492 |
+
|
| 493 |
+

|
| 494 |
+
|
| 495 |
+

|
| 496 |
+
|
| 497 |
+

|
| 498 |
+
|
| 499 |
+

|
| 500 |
+
|
| 501 |
+

|
| 502 |
+
|
| 503 |
+

|
| 504 |
+
|
| 505 |
+

|
| 506 |
+
|
| 507 |
+

|
| 508 |
+
|
| 509 |
+

|
| 510 |
+
|
| 511 |
+

|
| 512 |
+
|
| 513 |
+

|
| 514 |
+
|
| 515 |
+

|
| 516 |
+
|
| 517 |
+

|
| 518 |
+
|
| 519 |
+

|
| 520 |
+
|
| 521 |
+

|
| 522 |
+
Figure 10: Prediction intervals and test set ground-truth from LSTM-REAL-NVP model for Electricity data of the first 16 of 370 time series.

|
| 525 |
+
|
| 526 |
+

|
| 527 |
+
|
| 528 |
+

|
| 529 |
+
|
| 530 |
+

|
| 531 |
+
|
| 532 |
+

|
| 533 |
+
|
| 534 |
+

|
| 535 |
+
|
| 536 |
+

|
| 537 |
+
|
| 538 |
+

|
| 539 |
+
|
| 540 |
+

|
| 541 |
+
|
| 542 |
+

|
| 543 |
+
|
| 544 |
+

|
| 545 |
+
|
| 546 |
+

|
| 547 |
+
|
| 548 |
+

|
| 549 |
+
|
| 550 |
+

|
| 551 |
+
|
| 552 |
+

|
| 553 |
+
|
| 554 |
+

|
| 555 |
+
Figure 11: Prediction intervals and test set ground-truth from Transformer-MAF model for Electricity data of the first 16 of 370 time series.
multivariateprobabilistictimeseriesforecastingviaconditionednormalizingflows/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2bbc8e0e6beb947a88f0ff474013b8627728109ffc806dbdc405d2abafc34204
size 1274317

multivariateprobabilistictimeseriesforecastingviaconditionednormalizingflows/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0b7996f5b5649b738a1344c63f0db87186d0c3727ffc5ba648c0cb838b683bcd
size 673170

mutualinformationstateintrinsiccontrol/e347bec0-55c8-45aa-aba8-e5cfd14a392b_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a4a8d7f56162147fb8b74a10a73aebfbb8cd26507769d2ee1f247d4b12d3c3d7
size 94937

mutualinformationstateintrinsiccontrol/e347bec0-55c8-45aa-aba8-e5cfd14a392b_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d556be7f4e7c15bfeb0eac4166c61ca5d0918a75958d672ab7a26993d6c786ba
size 118782

mutualinformationstateintrinsiccontrol/e347bec0-55c8-45aa-aba8-e5cfd14a392b_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f4c6ee8c0ee92285ba09a2c4e3664e4ea73082dd7de0862494f367b2a183bf89
size 3520378

mutualinformationstateintrinsiccontrol/full.md
ADDED
@@ -0,0 +1,402 @@
# MUTUAL INFORMATION STATE INTRINSIC CONTROL
Rui Zhao $^{1,2}$, Yang Gao $^{3}$, Pieter Abbeel $^{4}$, Volker Tresp $^{1,2}$, Wei Xu $^{5}$
$^{1}$ Ludwig Maximilian University of Munich $^{2}$ Siemens AG
$^{3}$ Tsinghua University $^{4}$ University of California, Berkeley $^{5}$ Horizon Robotics
# ABSTRACT
Reinforcement learning has been shown to be highly successful at many challenging tasks. However, success heavily relies on well-shaped rewards. Intrinsically motivated RL attempts to remove this constraint by defining an intrinsic reward function. Motivated by the self-consciousness concept in psychology, we make a natural assumption that the agent knows what constitutes itself, and propose a new intrinsic objective that encourages the agent to have maximum control on the environment. We mathematically formalize this reward as the mutual information between the agent state and the surrounding state under the current agent policy. With this new intrinsic motivation, we are able to outperform previous methods, including being able to complete the pick-and-place task for the first time without using any task reward. A video showing experimental results is available at https://youtu.be/AUCwc9RThpk.
# 1 INTRODUCTION
Reinforcement learning (RL) allows an agent to learn meaningful skills by interacting with an environment and optimizing some reward function provided by the environment. Although RL has achieved impressive results on various tasks (Silver et al., 2017; Mnih et al., 2015; Berner et al., 2019), it is very expensive to provide dense rewards for every task we want a robot to learn. Intrinsically motivated reinforcement learning instead encourages the agent to explore by providing an "internal motivation", such as curiosity (Schmidhuber, 1991; Pathak et al., 2017; Burda et al., 2018), diversity (Gregor et al., 2016; Haarnoja et al., 2018; Eysenbach et al., 2019) or empowerment (Klyubin et al., 2005; Salge et al., 2014; Mohamed & Rezende, 2015). These internal motivations can be computed on the fly while the agent interacts with the environment, without any human-engineered reward. We hope to extract useful "skills" from such internally motivated agents, which could later be used to solve downstream tasks, or simply to augment a sparse task reward with these intrinsic rewards to solve a given task faster.
Most previous works in RL model the environment as a Markov Decision Process (MDP). In an MDP, a single state vector describes the current state of the whole environment, without explicitly distinguishing the agent itself from its surrounding. However, in the physical world there is a clear boundary between an intelligent agent and its surrounding; the skin of any mammal is an example of such a boundary. The separation of the agent and its surrounding also holds for most man-made agents, such as any mechanical robot. This agent-surrounding separation has long been studied in psychology under the concept of self-consciousness. Self-consciousness means that a subject knows that it itself is the object of awareness (Smith, 2020), effectively treating the agent itself differently from everything else. Gallup (1970) has shown that self-consciousness widely exists in chimpanzees, dolphins, some elephants and human infants. To emphasize the agent and its surrounding equally, we call this separation the agent-surrounding separation in this paper. The widely adopted MDP formulation ignores this natural agent-surrounding separation and simply stacks the agent state and its surrounding state together into a single state vector. Although this formulation is mathematically concise, we argue that it is over-simplistic and, as a result, makes learning harder.
With this agent-surrounding separation in mind, we are able to design a much more efficient intrinsically motivated RL algorithm. We propose a new intrinsic motivation: encouraging the agent to perform actions such that the resulting agent state has high Mutual Information (MI) with the surrounding state. Intuitively, the higher the MI, the more control the agent has over its surrounding. We name the proposed method "MUtual information-based State Intrinsic Control", or MUSIC for short. With MUSIC, we are able to learn many complex skills in an unsupervised manner, such as learning to pick up an object without any task reward. We can also augment a sparse task reward with the dense MUSIC intrinsic reward to accelerate learning.
Our contributions are three-fold. First, we propose a novel intrinsic motivation (MUSIC) that encourages the agent to have maximum control over its surrounding, based on the natural agent-surrounding separation assumption. Second, we propose scalable objectives that make the MUSIC intrinsic reward easy to optimize. Last but not least, we show MUSIC's superior performance by comparing it with other competitive intrinsic rewards in multiple environments. Notably, with our method, the pick-and-place task can be solved for the first time without any task reward.
# 2 PRELIMINARIES
For environments, we consider four robotic tasks, including push, slide, pick-and-place, and navigation, as shown in Figure 2. The goal in the manipulation task is to move the target object to a desired position. For the navigation task, the goal is to navigate to a target ball. In the following, we define some terminologies.
# 2.1 AGENT STATE, SURROUNDING STATE, AND REINFORCEMENT LEARNING SETTINGS
In this paper, the agent state $s^a$ refers literally to the state variable of the agent. The surrounding state $s^s$ refers to the state variable describing the surrounding of the agent, for example the state variable of an object. For multi-goal environments, we make the same assumption as previous works (Andrychowicz et al., 2017; Plappert et al., 2018), namely that goals can be represented as states, and we denote the goal variable as $g$. For example, in the manipulation task, a goal is a particular desired position of the object in the episode. These desired positions, i.e., goals, are sampled from the environment.
The division between the agent state and the surrounding state is naturally defined by the agent-surrounding separation concept introduced in Section 1. From a biological point of view, a human can naturally distinguish their own parts, such as hands or legs, from the environment. Analogously, when we design a robotic system, we can easily tell which is the agent state and which is its surrounding state. In this paper, we use upper-case letters, such as $S$, to denote random variables and the corresponding lower-case letters, such as $s$, to represent their values.
We assume the world is fully observable and modeled by a set of states $S$, a set of actions $A$, a distribution of initial states $p(s_0)$, transition probabilities $p(s_{t+1} \mid s_t, a_t)$, a reward function $r \colon S \times A \to \mathbb{R}$, and a discount factor $\gamma \in [0,1]$. These components formulate a Markov Decision Process, represented as a tuple $(S, A, p, r, \gamma)$. We use $\tau$ to denote a trajectory, which contains a series of agent states and surrounding states; its random variable is denoted $\mathcal{T}$.
# 3 METHOD
We focus on the agent learning to control its surrounding purely from its observations and actions, without supervision. Motivated by the idea that an agent that takes control of its surrounding exhibits high MI between the agent state and the surrounding state, we formulate the problem of learning without external supervision as learning a policy $\pi_{\theta}(a_t \mid s_t)$ with parameters $\theta$ that maximizes the intrinsic MI reward, $r = I(S^a; S^s)$. In this section, we formally describe our method, mutual information-based state intrinsic control (MUSIC).
# 3.1 MUTUAL INFORMATION REWARD FUNCTION



Figure 1: MUSIC Algorithm: We update the estimator to better predict the MI, and update the agent to control the surrounding state to have higher MI with the agent state.

Our framework simultaneously learns a policy and an intrinsic reward function by maximizing the MI between the surrounding state and the agent state. Mathematically, the MI between the surrounding state random variable $S^s$ and the agent state random variable $S^a$ is represented as follows:

$$
\begin{aligned}
I(S^{s}; S^{a}) &= KL\left(\mathbb{P}_{S^{s} S^{a}} \,\middle\|\, \mathbb{P}_{S^{s}} \otimes \mathbb{P}_{S^{a}}\right) && (1) \\
&= \sup_{T:\, \Omega \to \mathbb{R}} \mathbb{E}_{\mathbb{P}_{S^{s} S^{a}}}[T] - \log\left(\mathbb{E}_{\mathbb{P}_{S^{s}} \otimes \mathbb{P}_{S^{a}}}\left[e^{T}\right]\right) && (2) \\
&\geq \sup_{\phi \in \Phi} \mathbb{E}_{\mathbb{P}_{S^{s} S^{a}}}[T_{\phi}] - \log\left(\mathbb{E}_{\mathbb{P}_{S^{s}} \otimes \mathbb{P}_{S^{a}}}\left[e^{T_{\phi}}\right]\right) := I_{\Phi}(S^{s}; S^{a}), && (3)
\end{aligned}
$$

where $\mathbb{P}_{S^s S^a}$ is the joint probability distribution; $\mathbb{P}_{S^s} \otimes \mathbb{P}_{S^a}$ is the product of the marginal distributions $\mathbb{P}_{S^s}$ and $\mathbb{P}_{S^a}$ ; $KL$ denotes the Kullback-Leibler (KL) divergence. MI is notoriously difficult to compute in real-world settings (Hjelm et al., 2019). Compared to the variational information maximizing-based approaches (Barber & Agakov, 2003; Alemi et al., 2016; Chalk et al., 2016; Kolchinsky et al., 2017), the recent MINE-based approaches have shown superior performance (Belghazi et al., 2018; Hjelm et al., 2019; Velickovic et al., 2019). Motivated by MINE (Belghazi et al., 2018), we use a lower bound to approximate the MI quantity $I(S^s; S^a)$ . First, we rewrite Equation (1), the KL formulation of the MI objective, using the Donsker-Varadhan representation, to Equation (2) (Donsker & Varadhan, 1975). The input space $\Omega$ is a compact domain of $\mathbb{R}^d$ , i.e., $\Omega \subset \mathbb{R}^d$ , and the supremum is taken over all functions $T$ such that the two expectations are finite. Secondly, we lower bound the MI in the Donsker-Varadhan representation with the compression lemma in the PAC-Bayes literature and derive Equation (3) (Banerjee, 2006; Belghazi et al., 2018). The expectations in Equation (3) are estimated by using empirical samples from $\mathbb{P}_{S^s S^a}$ and $\mathbb{P}_{S^s} \otimes \mathbb{P}_{S^a}$ . The statistics model $T_{\phi}$ is parameterized by a deep neural network with parameters $\phi \in \Phi$ , whose inputs are the empirical samples.
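
To make the estimator concrete, here is a minimal PyTorch sketch of a MINE-style statistics network and the lower bound of Equation (3); the architecture and names are illustrative, not the authors' exact network:

```python
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    """T_phi(s^s, s^a): the statistics model, a small MLP as in MINE."""
    def __init__(self, s_dim: int, a_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(s_dim + a_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, s_s: torch.Tensor, s_a: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s_s, s_a], dim=-1))

def mi_lower_bound(T, s_s, s_a, s_a_bar):
    """Donsker-Varadhan bound of Eq. (3): joint samples (s_s, s_a) versus
    product-of-marginals samples (s_s, s_a_bar)."""
    joint = T(s_s, s_a).mean()
    n = torch.tensor(float(s_s.shape[0]))
    marginal = torch.logsumexp(T(s_s, s_a_bar), dim=0).squeeze() - torch.log(n)
    return joint - marginal
```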
# 3.2 EFFECTIVELY COMPUTING THE MUTUAL INFORMATION REWARD IN PRACTICE
Lemma 1. There is a monotonically increasing relationship between $I_{\phi}(S^s; S^a \mid \mathcal{T})$ and $\mathbb{E}_{\mathbb{P}_{\mathcal{T}'}}[I_{\phi}(S^s; S^a \mid \mathcal{T}')]$ , mathematically,

$$
I_{\phi}\left(S^{s}; S^{a} \mid \mathcal{T}\right) \ltimes \mathbb{E}_{\mathbb{P}_{\mathcal{T}'}}\left[ I_{\phi}\left(S^{s}; S^{a} \mid \mathcal{T}'\right) \right], \tag{4}
$$
where $S^s$, $S^a$, and $\mathcal{T}$ denote the surrounding state, the agent state, and the trajectory, respectively. The trajectory fractions are defined as adjacent state pairs, namely $\mathcal{T}' = \{S_t, S_{t+1}\}$. The symbol $\ltimes$ denotes a monotonically increasing relationship between two variables, and $\phi$ represents the parameters of the statistics model in MINE. *Proof.* See Appendix A.
We define the reward for each transition at a given time-step as the mutual information of the pair of adjacent states at that time-step, see Equation (4) Right-Hand Side (RHS). However, in practice, we find that it is not very efficient to train the MI estimator using state pairs. To counter this issue, we use all the states in the same trajectory in a batch to train the MI estimator, see Equation (4) Left-Hand Side (LHS), since more empirical samples help to reduce variance and therefore accelerate learning. In Lemma 1, we prove the monotonically increasing relationship between Equation (4) RHS and Equation (4) LHS.
In more detail, we divide the process of computing rewards into two phases, a training phase and an evaluation phase. In the training phase, we efficiently train the MI estimator with a large batch of samples from the whole trajectory. To train the MI estimator network, we first randomly sample a trajectory $\tau$ from the replay buffer. The states $\bar{s}_t^a$ used for calculating the product of marginal distributions are then obtained by shuffling the states $s_t^a$ from the joint distribution along the temporal axis $t$ within the trajectory. We use back-propagation to optimize the parameters $\phi$ to maximize the MI lower bound, see Equation (4) LHS.
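
Continuing the sketch above, one training-phase update of the estimator could look as follows (illustrative, not the authors' code):

```python
import torch

def estimator_update(T, optimizer, s_s_traj, s_a_traj):
    """One MI-estimator update on a whole trajectory (Eq. 4 LHS).
    s_s_traj, s_a_traj: (T_steps, dim) surrounding / agent states.
    Marginal samples come from shuffling agent states along time."""
    perm = torch.randperm(s_a_traj.shape[0])
    loss = -mi_lower_bound(T, s_s_traj, s_a_traj, s_a_traj[perm])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return -loss.item()  # current lower-bound estimate of the MI
```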
For evaluating the MI reward, we use the pair of adjacent states of a transition to calculate its reward, see Equation (4) RHS and Equation (5), instead of using the complete trajectory. The reward for each transition is calculated over a small fraction of the complete trajectory $\tau'$, namely $r = I_{\phi}(S^s; S^a \mid \mathcal{T}')$. The trajectory fraction $\tau'$ is defined as the adjacent state pair $\tau' = \{s_t, s_{t+1}\}$, and $\mathcal{T}'$ represents its corresponding random variable.
The derived Lemma 1 brings us two important benefits. First, it enables us to efficiently train the MI estimator using all the states in the same trajectory, where a large batch of empirical samples reduces the variance of the gradients. Second, it allows us to estimate the MI reward for each transition using only the relevant state pair. Estimating MI this way enables us to assign rewards more accurately at the transition level.
Based on Lemma 1, we calculate the transition reward as the MI of each trajectory fraction, namely
$$
r_{\phi}\left(a_{t}, s_{t}\right) := I_{\phi}\left(S^{s}; S^{a} \mid \mathcal{T}'\right) = 0.5 \sum_{i=t}^{t+1} T_{\phi}\left(s_{i}^{s}, s_{i}^{a}\right) - \log\left(0.5 \sum_{i=t}^{t+1} e^{T_{\phi}\left(s_{i}^{s}, \bar{s}_{i}^{a}\right)}\right), \tag{5}
$$
where $(s_i^s, s_i^a) \sim \mathbb{P}_{S^s S^a \mid \mathcal{T}'}$, $\bar{s}_i^a \sim \mathbb{P}_{S^a \mid \mathcal{T}'}$, and $\tau' = \{s_t, s_{t+1}\}$. Since the estimated MI value can be particularly small, we scale the reward with a hyper-parameter $\alpha$ and clip the reward between 0 and 1. MUSIC can be combined with any off-the-shelf reinforcement learning method, such as deep deterministic policy gradient (DDPG) (Lillicrap et al., 2016) or soft actor-critic (SAC) (Haarnoja et al., 2018). We summarize the complete training algorithm in Algorithm 1 and in Figure 1.
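
For illustration, here is a minimal sketch of the per-transition reward of Equation (5); the function name is our own, and the default $\alpha$ follows the reward scale listed in Appendix G.

```python
import torch

def transition_reward(T, s_sur, s_agent, s_agent_bar, alpha=5000.0):
    """MI reward for one transition, following Equation (5).

    Each input has shape (2, dim): the adjacent states {s_t, s_{t+1}};
    s_agent_bar holds the shuffled (marginal) agent states.
    """
    joint = 0.5 * T(s_sur, s_agent).sum()
    marginal = torch.log(0.5 * torch.exp(T(s_sur, s_agent_bar)).sum())
    # Scale the (often tiny) MI estimate and clip the reward to [0, 1].
    return torch.clamp(alpha * (joint - marginal), 0.0, 1.0)
```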
**MUSIC Variants with Task Rewards:** The introduced MUSIC method is an unsupervised reinforcement learning approach, which we denote as "MUSIC-u", where "-u" stands for unsupervised learning. We propose three ways of using MUSIC to accelerate learning. The first is to use the MUSIC-u pretrained policy as the parameter initialization and then fine-tune the agent with the task rewards; we denote this variant as "MUSIC-f", where "-f" stands for fine-tuning. The second variant uses the MI intrinsic reward to help the agent explore more efficiently; here, the MI reward and the task reward are added together. We name this method "MUSIC-r", where "-r" stands for reward. The third approach uses the MI quantity from MUSIC to prioritize trajectories for replay. This approach is similar to TD-error-based prioritized experience replay (PER) (Schaul et al., 2016); the only difference is that we use the estimated MI instead of the TD-error as the priority for sampling. We name this method "MUSIC-p", where "-p" stands for prioritization. The reward and prioritization variants are sketched below.
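
The sketch below shows how the "-r" and "-p" variants plug into an off-the-shelf algorithm; all names here are our own, and note that PER additionally exponentiates priorities by its hyper-parameter $\alpha$ before sampling.

```python
def music_r_reward(task_reward, mi_reward):
    """MUSIC-r: add the MI intrinsic reward to the task reward."""
    return task_reward + mi_reward

def music_p_priority(mi_reward, eps=1e-6):
    """MUSIC-p: the estimated MI replaces the TD-error as the replay
    priority; eps keeps every transition sampleable."""
    return abs(mi_reward) + eps
```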
**Skill Discovery with MUSIC and DIAYN:** One of the relevant works on unsupervised RL, DIAYN (Eysenbach et al., 2019), introduces an information-theoretic objective $\mathcal{F}_{\mathrm{DIAYN}}$, which learns diverse, discriminable skills indexed by the latent variable $Z$; mathematically, $\mathcal{F}_{\mathrm{DIAYN}} = I(S;Z) + \mathcal{H}(A|S,Z)$. The first term, $I(S;Z)$, is implemented via a skill discriminator, which serves as a variational lower bound of the original objective (Barber & Agakov, 2003; Eysenbach et al., 2019). The skill discriminator assigns high rewards to the agent if it can predict the skill-options $Z$ given the states $S$. Here, we substitute the full state $S$ with the surrounding state $S^s$ to encourage the agent to learn control skills. DIAYN and MUSIC can be combined as follows: $\mathcal{F}_{\mathrm{MUSIC} + \mathrm{DIAYN}} = I(S^a;S^s) + I(S^s;Z) + \mathcal{H}(A|S,Z)$. The combined version enables the agent to learn diverse control primitives via a skill-conditioned policy (Eysenbach et al., 2019) in an unsupervised fashion.
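
One plausible reading of the combined objective as a per-step intrinsic reward is sketched below: DIAYN's $I(S^s;Z)$ term is realized by its discriminator lower bound, $\log q_{\phi}(z \mid s^s) - \log p(z)$ (Eysenbach et al., 2019), and the function name is our own assumption.

```python
def music_diayn_reward(mi_reward, log_q_z_given_s_sur, log_p_z):
    """Combined intrinsic reward for MUSIC+DIAYN (a sketch):
    I(S^a; S^s) term from MUSIC plus DIAYN's discriminator reward."""
    return mi_reward + (log_q_z_given_s_sur - log_p_z)
```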
**Comparison and Combination with DISCERN:** Another relevant work is Discriminative Embedding Reward Networks (DISCERN) (Warde-Farley et al., 2019), whose objective is to maximize the MI between the state $S$ and the goal $G$, namely $I(S;G)$, whereas MUSIC's objective is to maximize the MI between the agent state $S^a$ and the surrounding state $S^s$, namely $I(S^a;S^s)$. Intuitively, DISCERN attempts to reach a particular goal in each episode, while our method tries to change the surrounding state to any different value. MUSIC and DISCERN can be combined as $\mathcal{F}_{\text{MUSIC} + \text{DISCERN}} = I(S^a;S^s) + I(S;G)$. Optionally, we can replace the full state $S$ with $S^s$, since empirically this performs better than using $S$. Through this combination, MUSIC helps DISCERN to learn its discriminative objective.
Figure 2: Fetch robot arm manipulation tasks in OpenAI Gym and a navigation task based on the Gazebo simulator: FetchPush, FetchPickAndPlace, FetchSlide, SocialBot-PlayGround.
# 4 EXPERIMENTS
**Environments:** To evaluate the proposed methods, we used robotic manipulation tasks and a navigation task, see Figure 2 (Brockman et al., 2016; Plappert et al., 2018). The navigation task is based on the Gazebo simulator; the task reward is 1 if the agent reaches the ball and 0 otherwise. Here, the agent state is the position of the robot car and the surrounding state is the position of the red ball. The manipulation environments, including push, pick-and-place, and slide, have a set of predefined goals, which are represented as red dots. The task for the RL agent is to manipulate the object to the goal positions. In the manipulation tasks, the agent state is the gripper position and the surrounding state is the object position.
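
For clarity, the agent-surrounding split amounts to slicing the observation vector; a minimal sketch follows, where the index layout is an assumption for illustration.

```python
import numpy as np

def split_state(obs):
    """Split a raw observation into the agent and surrounding states.

    In the manipulation tasks, the agent state is the gripper position
    and the surrounding state is the object position.
    """
    agent_state = obs[0:3]        # gripper (x, y, z), assumed layout
    surrounding_state = obs[3:6]  # object (x, y, z), assumed layout
    return agent_state, surrounding_state
```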
**Experiments:** First, we analyze the control behaviors learned purely from the intrinsic reward, i.e., MUSIC-u. Second, we show that the pretrained models can be used to improve performance in conjunction with the task rewards. Interestingly, we show that the pretrained MI estimator can be transferred between different tasks and still improve performance. We compared MUSIC with other methods, including DDPG (Lillicrap et al., 2016), SAC (Haarnoja et al., 2018), DIAYN (Eysenbach et al., 2019), DISCERN (Warde-Farley et al., 2019), PER (Schaul et al., 2016), VIME (Houthooft et al., 2016), ICM (Pathak et al., 2017), and Empowerment (Mohamed & Rezende, 2015). Third, we provide some insights into how the MUSIC rewards are distributed across a trajectory. The experimental details are given in Appendix G. Our code is available at https://github.com/ruizhaogit/music and https://github.com/ruizhaogit/alf.
# Question 1. What behavior does MUSIC-u learn?
We tested MUSIC-u on the robotic manipulation tasks. During training, the agent only receives the intrinsic MUSIC reward. In all three environments, the behavior of reaching the object emerges. In the push environment, the agent learns to push the object around on the table. In the slide environment, the agent learns to slide the object in different directions. Perhaps surprisingly, in the pick-and-place environment, the agent learns to pick up the object from the table without any task reward. All these observations are shown in the supplementary video.
# Question 2. How does MUSIC-u compare to Empowerment or ICM?
We tested our method in the navigation task. We combined our method with PPO (Schulman et al., 2017) and compared the performance with ICM (Pathak et al., 2017) and Empowerment (Mohamed & Rezende, 2015). During training, we used only one of the intrinsic rewards, i.e., MUSIC, ICM, or Empowerment, to train the agent. Then, we used the averaged task reward as the evaluation metric. The experimental results are shown in Figure 3 (left), where the y-axis represents the mean task reward and the x-axis denotes the training epochs. Figure 3 (right) shows that the MUSIC reward signal $I(S^{a}; S^{s})$ is relatively strong compared to the Empowerment reward signal $I(A; S^{s})$. Consequently, the high MI reward encourages the agent to explore more states with high MI. A theoretical connection between Empowerment and MUSIC is shown in Appendix B. The video starting from 1:28 shows the learned navigation behaviors.

Figure 3: Experimental results
# Question 3. How does MUSIC compare to DIAYN?
Figure 4: Mean success rate with standard deviation: The percentage values after the colon (:) represent the best mean success rate during training. The shaded area denotes the standard deviation. A full comparison is shown in Appendix D Figure 9.
We compared MUSIC, DIAYN, and MUSIC+DIAYN in the pick-and-place environment. For MUSIC+DIAYN, we first pre-train the agent with only MUSIC, and then fine-tune the policy with DIAYN. After pre-training, the MUSIC-trained agent exhibits manipulation behaviors such as reaching, pushing, sliding, and picking up an object. Compared to MUSIC, the DIAYN-trained agent rarely learns to pick up the object; it mostly pushes or flicks the object with the gripper. However, the combined model, MUSIC+DIAYN, learns to pick up the object and move it to different locations, depending on the skill-option. These observations are shown in the video starting from 0:46. From this experiment, we can see that MUSIC helps the agent to learn the DIAYN objective. DIAYN alone does not succeed because DIAYN does not start to learn any skills until the agent touches the object, which rarely happens in the first place; the skill discriminator only encourages the skills to be different.
# Question 4. How does MUSIC+DISCERN compare to DISCERN?
The combination of MUSIC and DISCERN encourages the agent to first learn to control the object via MUSIC and then to move the object to the target position via DISCERN. Table 1 shows that DISCERN+MUSIC significantly outperforms DISCERN.
Table 1: Comparison of DISCERN with and without MUSIC
<table><tr><td>Method</td><td>Push (%)</td><td>Pick & Place (%)</td></tr><tr><td>DISCERN</td><td>7.94% ± 0.71%</td><td>4.23% ± 0.47%</td></tr><tr><td>R (Task Reward)</td><td>11.65% ± 1.36%</td><td>4.21% ± 0.46%</td></tr><tr><td>R+DISCERN</td><td>21.15% ± 5.49%</td><td>4.28% ± 0.52%</td></tr><tr><td>R+DISCERN+MUSIC</td><td>95.15% ± 8.13%</td><td>48.91% ± 12.67%</td></tr></table>
This is because MUSIC places more emphasis on state control and teaches the agent to interact with the object. Afterwards, DISCERN teaches the agent to move the object to the goal position in each episode.
# Question 5. How can we use MUSIC to accelerate learning?
We investigated three ways of using MUSIC to accelerate learning in addition to the task reward: MUSIC-f, MUSIC-p, and MUSIC-r. We combined these three variants with DDPG and SAC and tested them on the multi-goal robotic tasks. From Figure 4, we can see that all three methods accelerate learning in the presence of task rewards. Among the variants, MUSIC-r yields the best overall improvement. In the push and pick-and-place tasks, MUSIC enables the agent to learn within a short period of time. In the slide task, MUSIC-r also improves the performance by a decent margin.
We also compare our methods with their closest related methods. To be more specific, we compare MUSIC-f against parameter initialization using DIAYN (Eysenbach et al., 2019); MUSIC-p against Prioritized Experience Replay (PER), which uses TD-errors for prioritization (Schaul et al., 2016); and MUSIC-r against Variational Information Maximizing Exploration (VIME) (Houthooft et al., 2016). The experimental results are shown in Figure 5. From Figure 5 ($1^{\text{st}}$ column), we can see that MUSIC-f enables the agent to learn, while DIAYN does not. In the $2^{\text{nd}}$ column of Figure 5, MUSIC-r performs better than VIME. This result indicates that the MI between states is a crucial quantity for accelerating learning; the MI intrinsic reward boosts performance significantly compared to VIME. This observation is consistent with the results of MUSIC-p and PER, shown in Figure 5 ($3^{\text{rd}}$ column), where the MI-based prioritization performs better than the TD-error-based approach, PER. On all tasks, MUSIC enables the agent to learn the benchmark task more quickly.

Figure 5: Performance comparison: We compare the MUSIC variants, including MUSIC-f, MUSIC-r, and MUSIC-p, with DIAYN, VIME, and PER, respectively. A full comparison is shown in Appendix D Figure 10.
# Question 6. Can the learned MI estimator be transferred to new tasks?
It would be beneficial if the pretrained MI estimator could be transferred to a new task and still improve performance (Pan et al., 2010; Bengio, 2012). To verify this idea, we directly applied the pretrained MI estimator from the pick-and-place environment to the push and slide environments, respectively, and trained the agent from scratch.
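
A minimal sketch of the transfer setup, assuming a simple MLP statistics network and a checkpoint file name of our own choosing; whether the estimator is frozen or fine-tuned in the new task is an implementation choice.

```python
import torch
import torch.nn as nn

# MUSIC-t sketch: reuse a statistics network pretrained on pick-and-place
# as the intrinsic reward in the push task.
T = nn.Sequential(nn.Linear(6, 256), nn.ReLU(), nn.Linear(256, 1))
T.load_state_dict(torch.load("music_pick_and_place.pt"))
for p in T.parameters():       # frozen here; only the new policy is trained
    p.requires_grad_(False)
```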
Figure 6: Transferred MUSIC
We denote this transferred method as "MUSIC-t", where "-t" stands for transfer. The MUSIC reward function trained in its corresponding environment is denoted as "MUSIC-r". We compared the performances of DDPG, MUSIC-r, and MUSIC-t. The results in Figure 6 show that the transferred MUSIC still improved the performance significantly. Furthermore, as expected, MUSIC-r performed better than MUSIC-t. We can see that the MI estimator can be trained in a task-agnostic (Finn et al., 2017) fashion and later utilized in unseen tasks.
# Question 7. How does MUSIC distribute rewards over a trajectory?
To understand why MUSIC works, we visualize the learned MUSIC-u reward in Figure 7. We can observe that the MI reward peaks between the 4th and 5th frames, where the robot quickly picks up the cube from the table. Around the peak reward value, the mid-range reward values correspond to the relatively slow movement of the object and the gripper (see the 3rd, 9th, and 10th frames). When there is no contact between the gripper and the cube (see the 1st and 2nd frames), or the gripper holds the object still (see the 6th to 8th frames), the intrinsic reward remains nearly zero. From this example, we see that MUSIC distributes positive intrinsic rewards when the surrounding state changes in correlation with the agent state.

Figure 7: MUSIC rewards over a trajectory
# Question 8. How does MUSIC reward compare to reward shaping?
Here, we want to compare MUSIC with reward shaping and show that MUSIC cannot easily be replaced by reward shaping. We consider a simple L2-norm reward shaping based on the distance between the robot's gripper and the object. With this hand-engineered reward, the agent learns to move its gripper close to the object but barely touches it. With the MUSIC reward, in contrast, the agent reaches the object and moves it to different locations. MUSIC automatically subsumes many hand-engineered rewards, including the L2-norm distance reward between the gripper and the object, the contact reward between the agent and the object, the L2-norm distance reward between the object and the goal position, and any other reward that maximizes the mutual information between the agent state and the surrounding state. From this perspective, MUSIC can be considered a meta-reward for state-control tasks, which helps the agent to learn any specific downstream task that falls into this category.
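
For reference, the shaping baseline we compare against is essentially the following one-liner (a sketch; the sign convention is an assumption).

```python
import numpy as np

def l2_shaping_reward(grip_pos, obj_pos):
    """Hand-engineered baseline: reward the gripper for being close to
    the object; by itself this does not reward manipulating the object."""
    return -np.linalg.norm(grip_pos - obj_pos)
```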
# Question 9. Can MUSIC help the agent to learn when there are multiple surrounding objects?
When there are multiple objects, the agent is trained via MUSIC to maximize the MI between the surrounding objects and itself. When there are a red and a blue ball on the ground, with MUSIC, the agent learns to reach both balls and sometimes also learns to use one ball to hit the other. The results are shown in the supplementary video starting from 1:56.
**Summary and Future Work:** We can see that, with different combinations of the surrounding state and the agent state, the agent is able to learn different control behaviors. We can train a skill-conditioned policy corresponding to different combinations of the agent state and the surrounding state and later use the pretrained policy for the tasks at hand, see Appendix F "Skill Discovery for Hierarchical Reinforcement Learning". In some cases, when there is no clear agent-surrounding separation or the existing separation is suboptimal, new methods are needed to divide and select the states automatically. Another future direction is to extend the current method to partially observed settings, for example, by combining MUSIC with state estimation methods.
# 5 RELATED WORK
Intrinsically motivated RL is a challenging topic. We divide the previous works into three categories. In the first category, intrinsic rewards are used to help the agent solve tasks more efficiently. For example, Jung et al. (2011) and Mohamed & Rezende (2015) use empowerment, which is the channel capacity between states and actions. A theoretical connection between MUSIC and empowerment is shown in Appendix B. VIME (Houthooft et al., 2016) and ICM (Pathak et al., 2017) use curiosity as intrinsic rewards to encourage the agents to explore the environment more thoroughly. The second category of work on intrinsic motivation for RL aims to discover meaningful skills, such as Variational Intrinsic Control (VIC) (Gregor et al., 2016), DIAYN (Eysenbach et al., 2019), and Explore Discover Learn (EDL) (Campos et al., 2020). In the third category, intrinsic motivation helps the agent to learn goal-conditioned policies. Warde-Farley et al. (2019) proposed DISCERN, a method that learns an MI objective between states and goals. Based on DISCERN, Pong et al. (2019) introduced Skew-Fit, which adapts a maximum entropy strategy to sample goals from the replay buffer (Zhao et al., 2019) in order to make the agent learn more efficiently in the absence of rewards. However, these methods fail to enable the agent to learn meaningful interaction skills in environments such as the robot manipulation tasks. Our work is based on the agent-surrounding separation concept and derives an efficient state intrinsic control objective, which empowers RL agents to learn meaningful interaction and control skills without any task reward. A recent work with a similar motivation (Song et al., 2020) introduces mega-reward, which aims to maximize the control capabilities of agents over given entities in a given environment and shows promising results in Atari games. Another related work (Dilokthanakul et al., 2019) proposes feature control as intrinsic motivation and shows state-of-the-art results in Montezuma's Revenge.
In this paper, we introduce MUSIC, a method that uses the MI between the surrounding state and the agent state as the intrinsic reward. In contrast to previous works on intrinsic rewards (Mohamed & Rezende, 2015; Houthooft et al., 2016; Pathak et al., 2017; Eysenbach et al., 2019; Warde-Farley et al., 2019), MUSIC encourages the agent to interact with and learn to control the part of the environment of interest, which is represented by the surrounding state. The MUSIC intrinsic reward is critical when controlling a specific subset of the environmental state is the key to completing the task, as is the case in robotic manipulation tasks. Our method is complementary to previous works, such as DIAYN and DISCERN, and can be combined with them. Inspired by previous works (Schaul et al., 2016; Houthooft et al., 2016; Eysenbach et al., 2019), we additionally demonstrate three variants, including MUSIC-based fine-tuning, rewarding, and prioritizing mechanisms, which significantly accelerate learning in the downstream tasks.
# 6 CONCLUSION
This paper introduces Mutual Information-based State Intrinsic Control (MUSIC), an unsupervised RL framework for learning useful control behaviors. The derived efficient MI-based theoretical objective encourages the agent to control states without any task reward. MUSIC enables the agent to self-learn different control behaviors, which are non-trivial, intuitively meaningful, and useful for learning and planning. Additionally, the pretrained policy and the MI estimator significantly accelerate learning in the presence of task rewards. We evaluated three MUSIC-based variants in different environments and demonstrated substantial improvements in learning efficiency compared to state-of-the-art methods.
# REFERENCES

Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. Deep variational information bottleneck. arXiv preprint arXiv:1612.00410, 2016.

Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In Advances in Neural Information Processing Systems, pp. 5048-5058, 2017.

Arindam Banerjee. On Bayesian bounds. In Proceedings of the 23rd International Conference on Machine Learning, pp. 81-88. ACM, 2006.

David Barber and Felix V Agakov. The IM algorithm: A variational approach to information maximization. In Advances in Neural Information Processing Systems, 2003.

Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and R Devon Hjelm. MINE: Mutual information neural estimation. In Proceedings of the 35th International Conference on Machine Learning, pp. 531-540. PMLR, 2018.

Yoshua Bengio. Deep learning of representations for unsupervised and transfer learning. In Proceedings of the ICML Workshop on Unsupervised and Transfer Learning, pp. 17-36, 2012.

Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemyslaw Debiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680, 2019.

Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.

Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, and Alexei A Efros. Large-scale study of curiosity-driven learning. arXiv preprint arXiv:1808.04355, 2018.

Víctor Campos, Alexander Trott, Caiming Xiong, Richard Socher, Xavier Giro-i Nieto, and Jordi Torres. Explore, discover and learn: Unsupervised discovery of state-covering skills. In Proceedings of the 37th International Conference on Machine Learning. PMLR, 2020.

Matthew Chalk, Olivier Marre, and Gasper Tkacik. Relevant sparse codes with variational information bottleneck. In Advances in Neural Information Processing Systems, pp. 1957-1965, 2016.

Nat Dilokthanakul, Christos Kaplanis, Nick Pawlowski, and Murray Shanahan. Feature control as intrinsic motivation for hierarchical reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, 30(11):3409-3418, 2019.

Monroe D Donsker and SR Srinivasa Varadhan. Asymptotic evaluation of certain Markov process expectations for large time, I. Communications on Pure and Applied Mathematics, 28(1):1-47, 1975.

Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=SJx63jRqFm.

Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 1126-1135. JMLR.org, 2017.

Gordon G Gallup. Chimpanzees: Self-recognition. Science, 167(3914):86-87, 1970.

Karol Gregor, Danilo Jimenez Rezende, and Daan Wierstra. Variational intrinsic control. arXiv preprint arXiv:1611.07507, 2016.

Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In Proceedings of the 35th International Conference on Machine Learning, pp. 1861-1870. PMLR, 2018.

R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bklr3j0cKX.

Rein Houthooft, Xi Chen, Yan Duan, John Schulman, Filip De Turck, and Pieter Abbeel. VIME: Variational information maximizing exploration. In Advances in Neural Information Processing Systems, pp. 1109-1117, 2016.

Tobias Jung, Daniel Polani, and Peter Stone. Empowerment for continuous agent-environment systems. Adaptive Behavior, 19(1):16-39, 2011.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Alexander S Klyubin, Daniel Polani, and Chrystopher L Nehaniv. Empowerment: A universal agent-centric measure of control. In 2005 IEEE Congress on Evolutionary Computation, volume 1, pp. 128-135. IEEE, 2005.

Artemy Kolchinsky, Brendan D Tracey, and David H Wolpert. Nonlinear information bottleneck. arXiv preprint arXiv:1705.02436, 2017.

Alexander Kraskov, Harald Stögbauer, and Peter Grassberger. Estimating mutual information. Physical Review E, 69(6):066138, 2004.

Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. In International Conference on Learning Representations, 2016.

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.

Shakir Mohamed and Danilo Jimenez Rezende. Variational information maximisation for intrinsically motivated reinforcement learning. In Advances in Neural Information Processing Systems, pp. 2125-2133, 2015.

Sinno Jialin Pan, Qiang Yang, et al. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359, 2010.

Deepak Pathak, Pulkit Agrawal, Alexei A Efros, and Trevor Darrell. Curiosity-driven exploration by self-supervised prediction. In International Conference on Machine Learning (ICML), 2017.

Matthias Plappert, Marcin Andrychowicz, Alex Ray, Bob McGrew, Bowen Baker, Glenn Powell, Jonas Schneider, Josh Tobin, Maciek Chociej, Peter Welinder, et al. Multi-goal reinforcement learning: Challenging robotics environments and request for research. arXiv preprint arXiv:1802.09464, 2018.

Vitchyr H Pong, Murtaza Dalal, Steven Lin, Ashvin Nair, Shikhar Bahl, and Sergey Levine. Skew-Fit: State-covering self-supervised reinforcement learning. arXiv preprint arXiv:1903.03698, 2019.

Christoph Salge, Cornelius Glackin, and Daniel Polani. Empowerment: An introduction. In Guided Self-Organization: Inception, pp. 67-114. Springer, 2014.

Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. In International Conference on Learning Representations, 2016.

Jürgen Schmidhuber. A possibility for implementing curiosity and boredom in model-building neural controllers. In Proceedings of the International Conference on Simulation of Adaptive Behavior: From Animals to Animats, pp. 222-227, 1991.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of Go without human knowledge. Nature, 550(7676):354-359, 2017.

Joel Smith. Self-consciousness. In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, summer 2020 edition, 2020.

Yuhang Song, Jianyi Wang, Thomas Lukasiewicz, Zhenghua Xu, Shangtong Zhang, Andrzej Wojcicki, and Mai Xu. Mega-reward: Achieving human-level play without extrinsic rewards. In AAAI, pp. 5826-5833, 2020.

Petar Velickovic, William Fedus, William L Hamilton, Pietro Lio, Yoshua Bengio, and R Devon Hjelm. Deep Graph Infomax. In ICLR (Poster), 2019.

David Warde-Farley, Tom Van de Wiele, Tejas Kulkarni, Catalin Ionescu, Steven Hansen, and Volodymyr Mnih. Unsupervised control through non-parametric discriminative rewards. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=r1eVMnA9K7.

Rui Zhao, Xudong Sun, and Volker Tresp. Maximum entropy-regularized multi-goal reinforcement learning. In Proceedings of the 36th International Conference on Machine Learning, pp. 7553-7562. PMLR, 2019.
# APPENDIX
# A MONOTONICALLY INCREASING RELATIONSHIP
Lemma 2. There is a monotonically increasing relationship between $I_{\phi}(S^s; S^a \mid \mathcal{T})$ and $\mathbb{E}_{\mathbb{P}_{\mathcal{T}'}}[I_{\phi}(S^s; S^a \mid \mathcal{T}')]$, mathematically,
$$
I_{\phi}\left(S^{s}; S^{a} \mid \mathcal{T}\right) \ltimes \mathbb{E}_{\mathbb{P}_{\mathcal{T}'}}\left[I_{\phi}\left(S^{s}; S^{a} \mid \mathcal{T}'\right)\right], \tag{6}
$$
where $S^s$, $S^a$, and $\mathcal{T}$ denote the surrounding state, the agent state, and the trajectory, respectively. The trajectory fractions are defined as adjacent state pairs, namely $\mathcal{T}' = \{S_t, S_{t+1}\}$. The symbol $\ltimes$ denotes a monotonically increasing relationship between two variables, and $\phi$ represents the parameters of the statistics model in MINE.
Proof. The derivation of the monotonically increasing relationship is shown as follows:
$$
\begin{aligned}
I_{\phi}(S^{s}; S^{a} \mid \mathcal{T}) &= \mathbb{E}_{\mathbb{P}_{S^{s} S^{a} \mid \mathcal{T}}}\left[T_{\phi}\right] - \log\left(\mathbb{E}_{\mathbb{P}_{S^{s} \mid \mathcal{T}} \otimes \mathbb{P}_{S^{a} \mid \mathcal{T}}}\left[e^{T_{\phi}}\right]\right) && (7) \\
&\ltimes \mathbb{E}_{\mathbb{P}_{S^{s} S^{a} \mid \mathcal{T}}}\left[T_{\phi}\right] - \mathbb{E}_{\mathbb{P}_{S^{s} \mid \mathcal{T}} \otimes \mathbb{P}_{S^{a} \mid \mathcal{T}}}\left[e^{T_{\phi}}\right] && (8) \\
&= \mathbb{E}_{\mathbb{P}_{\mathcal{T}'}}\left[\mathbb{E}_{\mathbb{P}_{S^{s} S^{a} \mid \mathcal{T}'}}\left[T_{\phi}\right] - \mathbb{E}_{\mathbb{P}_{S^{s} \mid \mathcal{T}'} \otimes \mathbb{P}_{S^{a} \mid \mathcal{T}'}}\left[e^{T_{\phi}}\right]\right] && (9) \\
&\ltimes \mathbb{E}_{\mathbb{P}_{\mathcal{T}'}}\left[\mathbb{E}_{\mathbb{P}_{S^{s} S^{a} \mid \mathcal{T}'}}\left[T_{\phi}\right] - \log\left(\mathbb{E}_{\mathbb{P}_{S^{s} \mid \mathcal{T}'} \otimes \mathbb{P}_{S^{a} \mid \mathcal{T}'}}\left[e^{T_{\phi}}\right]\right)\right] = \mathbb{E}_{\mathbb{P}_{\mathcal{T}'}}\left[I_{\phi}(S^{s}; S^{a} \mid \mathcal{T}')\right] && (10)
\end{aligned}
$$
where $T_{\phi}$ represents a neural network, whose inputs are state samples and whose output is a scalar. For simplicity, we use the symbol $\ltimes$ to denote a monotonically increasing relationship between two variables; for example, $\log(x) \ltimes x$ means that as the value of $x$ increases, the value of $\log(x)$ also increases, and vice versa. To decompose the lower bound in Equation (7) into smaller parts, we make the derivations in Equations (8), (9), and (10). To go from Equation (7) to Equation (8), we use the property that $\log(x) \ltimes x$. The new form, Equation (8), allows us to decompose the MI estimation into an expectation over MI estimations of each trajectory fraction, Equation (9). To be more specific, we move the implicit expectation over trajectory fractions in Equation (8) to the front, which gives Equation (9). The quantity inside this expectation is the MI estimation using only the corresponding trajectory fraction. We use the property $\log(x) \ltimes x$ again to go from Equation (9) to Equation (10).
# B CONNECTION TO EMPOWERMENT
The state $S$ contains the surrounding state $S^s$ and the agent state $S^a$. For example, in robotic tasks, the surrounding state and the agent state represent the object state and the end-effector state, respectively. The action space is the change of the gripper position and the status of the gripper, such as open or closed. Note that the agent's action directly alters the agent state.
Here, given the assumption that the transform $S^a = F(A)$ from the action $A$ to the agent state $S^a$ is a smooth and uniquely invertible mapping (Kraskov et al., 2004), we can prove that the MUSIC objective, $I(S^a; S^s)$, is equivalent to the empowerment objective, $I(A; S^s)$.
The empowerment objective (Klyubin et al., 2005; Salge et al., 2014; Mohamed & Rezende, 2015) is defined as the channel capacity in information theory, i.e., the amount of information contained in the action $A$ about the state $S$, mathematically:
$$
\mathcal{E} = I(S; A). \tag{11}
$$
Replacing the state variable $S$ with the surrounding state $S^s$, we obtain the empowerment objective
$$
\mathcal{E} = I(S^{s}; A). \tag{12}
$$
Theorem 3. The MUSIC objective, $I(S^a; S^s)$, is equivalent to the empowerment objective, $I(A; S^s)$, given the assumption that the transform $S^a = F(A)$ is a smooth and uniquely invertible mapping:
$$
I\left(S^{a}; S^{s}\right) = I(A; S^{s}) \tag{13}
$$
where $S^s$ , $S^a$ , and $A$ denote the surrounding state, the agent state, and the action, respectively.
Proof.
$$
\begin{aligned}
I(S^{a}; S^{s}) &= \iint ds^{a}\, ds^{s}\, p(s^{a}, s^{s}) \log \frac{p(s^{a}, s^{s})}{p(s^{a})\, p(s^{s})} && (14) \\
&= \iint ds^{a}\, ds^{s} \left\| \frac{\partial A}{\partial S^{a}} \right\| p(a, s^{s}) \log \frac{\left\| \frac{\partial A}{\partial S^{a}} \right\| p(a, s^{s})}{\left\| \frac{\partial A}{\partial S^{a}} \right\| p(a)\, p(s^{s})} && (15) \\
&= \iint ds^{a}\, ds^{s}\, J_{A}(s^{a})\, p(a, s^{s}) \log \frac{J_{A}(s^{a})\, p(a, s^{s})}{J_{A}(s^{a})\, p(a)\, p(s^{s})} && (16) \\
&= \iint da\, ds^{s}\, p(a, s^{s}) \log \frac{p(a, s^{s})}{p(a)\, p(s^{s})} && (17) \\
&= I(A; S^{s}) && (18)
\end{aligned}
$$
# C MUTUAL INFORMATION NEURAL ESTIMATOR TRAINING
Algorithm 2: MINE (Belghazi et al., 2018)

- $\phi \leftarrow$ initialize network parameters
- repeat:
  - Draw $b$ minibatch samples from the joint distribution: $(\pmb{x}^{(1)}, \pmb{z}^{(1)}), \dots, (\pmb{x}^{(b)}, \pmb{z}^{(b)}) \sim \mathbb{P}_{XZ}$
  - Draw $b$ samples from the $Z$ marginal distribution: $\bar{\pmb{z}}^{(1)}, \dots, \bar{\pmb{z}}^{(b)} \sim \mathbb{P}_{Z}$
  - Evaluate the lower bound: $\mathcal{V}(\phi) \leftarrow \frac{1}{b}\sum_{i=1}^{b} T_{\phi}(\pmb{x}^{(i)}, \pmb{z}^{(i)}) - \log\left(\frac{1}{b}\sum_{i=1}^{b} e^{T_{\phi}(\pmb{x}^{(i)}, \bar{\pmb{z}}^{(i)})}\right)$
  - Evaluate bias-corrected gradients (e.g., with a moving average): $\widehat{G}(\phi) \leftarrow \widetilde{\nabla}_{\phi} \mathcal{V}(\phi)$
  - Update the statistics network parameters: $\phi \leftarrow \phi + \widehat{G}(\phi)$
- until convergence
One potential pitfall of training the RL agent using the MINE reward is that the MINE reward signal can be relatively small compared to the task reward signal. A practical remedy is to scale the MINE reward to a magnitude similar to that of the task reward.
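
Below is a minimal PyTorch sketch of one MINE update with the bias-corrected gradient, assuming a statistics network T(x, z) that returns a (b, 1) tensor; the moving-average bookkeeping is simplified for brevity and all names are our own assumptions.

```python
import torch

ema = {"value": 1.0}  # moving average of the partition term

def mine_step(T, optimizer, x, z, ema_rate=0.99):
    """One MINE update (Algorithm 2), returning the MI lower-bound value."""
    z_bar = z[torch.randperm(z.shape[0])]  # marginal samples by shuffling
    joint = T(x, z).mean()
    exp_term = torch.exp(T(x, z_bar)).mean()
    ema["value"] = ema_rate * ema["value"] + (1 - ema_rate) * exp_term.item()
    # Dividing by the moving average (a constant w.r.t. phi) instead of
    # taking the log of the batch estimate gives the bias-corrected gradient
    # suggested by Belghazi et al. (2018).
    loss = -(joint - exp_term / ema["value"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return (joint - torch.log(exp_term)).item()
```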
# D EXPERIMENTAL RESULTS
The control behaviors learned without supervision are shown in Figure 8 as well as in the supplementary video. The detailed experimental results are shown in Figure 9 and Figure 10.
Figure 8: Learned Control behaviors with MUSIC: Without any reward, MUSIC enables the agent to learn control behaviors, such as reaching, pushing, sliding, and picking up an object. The learned behaviors are shown in the supplementary video.
Figure 9: Mean success rate with standard deviation: The percentage values after colon (:) represent the best mean success rate during training. The shaded area describes the standard deviation.
# E COMPARISON OF VARIATIONAL MI-BASED AND MINE-BASED IMPLEMENTATIONS
Here, we compare the variational approach-based (Barber & Agakov, 2003) implementation of MUSIC and the MINE-based (Belghazi et al., 2018) implementation of MUSIC in Table 2. All experiments are conducted with 5 different random seeds. The performance metric is the mean success rate $(\%)$ $\pm$ standard deviation, and "Task-r" stands for the task reward.
Table 2: Comparison of variational MI (v)-based and MINE (m)-based MUSIC
<table><tr><td>Method</td><td>Push (%)</td><td>Pick & Place (%)</td></tr><tr><td>Task-r+MUSIC(v)</td><td>94.9% ± 5.83%</td><td>49.17% ± 4.9%</td></tr><tr><td>Task-r+MUSIC(m)</td><td>94.83% ± 4.95%</td><td>50.38% ± 8.8%</td></tr></table>
From Table 2, we can see that the performance of the two MI estimation methods is similar. However, the variational method introduces an additional, relatively complicated sampling mechanism and two additional hyper-parameters, i.e., the number of candidates and the type of similarity measurement (Barber & Agakov, 2003; Eysenbach et al., 2019; Warde-Farley et al., 2019). In contrast, MINE-style MUSIC is easier to implement and has fewer hyper-parameters to tune. Furthermore, the derived objective improves the scalability of MINE-style MUSIC.
# F SKILL DISCOVERY FOR HIERARCHICAL REINFORCEMENT LEARNING
In this section, we explore the direction of Hierarchical Reinforcement Learning based on MUSIC.
For example, in the Fetch robot arm pick-and-place environment, we have the following states as the observation: grip_pos, object_pos, object_velp, object_rot, object_velr, where the abbreviation "pos" stands for position; "rot" stands for rotation; "velp" stands for linear velocity, and "velr" stands for rotational velocity.
The grip_pos is the agent state. The surrounding states are object_pos, object_velp, object_rot, and object_velr. In Table 3, we show the MI values for different state-pair combinations before and after training. When the difference between the MI values is large, it means that the agent has made good learning progress with the corresponding MI objective.

Figure 10: Performance comparison: We compare the MUSIC variants, including MUSIC-f, MUSIC-r, and MUSIC-p, with DIAYN, VIME, and PER, respectively.
Table 3: Mutual information estimates before and after training
<table><tr><td>Mutual Information Objective</td><td>Pre-training Value</td><td>Post-training Value</td></tr><tr><td>MI(grip_pos; object_pos)</td><td>0.003 ± 0.017</td><td>0.164 ± 0.055</td></tr><tr><td>MI(grip_pos; object_rot)</td><td>0.017 ± 0.084</td><td>0.461 ± 0.088</td></tr><tr><td>MI(grip_pos; object_velp)</td><td>0.005 ± 0.010</td><td>0.157 ± 0.050</td></tr><tr><td>MI(grip_pos; object_velr)</td><td>0.016 ± 0.083</td><td>0.438 ± 0.084</td></tr></table>
From the first row of Table 3, we can see that with the intrinsic reward MI(grip_pos; object_pos), the agent achieves a high MI value after training, which means that the agent learns to better control the object position using its gripper. Similarly, from the second row of the table, with MI(grip_pos; object_rot), the agent learns to control the object rotation with its gripper.
From the experimental results, we can see that with different combinations of agent and surrounding state pairs, the agent can learn different skills, such as manipulating object positions or rotations. We can connect these learned skills to different skill-options (Eysenbach et al., 2019) and train a meta-controller over these motion primitives to complete long-horizon tasks in a hierarchical reinforcement learning framework (Eysenbach et al., 2019). We consider this a future research direction, which could be a way to solve more challenging and complex long-horizon tasks.
# G EXPERIMENTAL DETAILS

We ran all methods in each environment with 5 different random seeds and report the mean success rate and the standard deviation. The experiments on the robotic manipulation tasks in this paper use the following hyper-parameters:

- Actor and critic networks: 3 layers with 256 units each and ReLU non-linearities
- Adam optimizer (Kingma & Ba, 2014) with a learning rate of $1 \cdot 10^{-3}$ for training both actor and critic
- Buffer size: $10^6$ transitions
- Polyak-averaging coefficient: 0.95
- Action L2 norm coefficient: 1.0
- Observation clipping: $[-200, 200]$
- Batch size: 256
- Rollouts per MPI worker: 2
- Number of MPI workers: 16
- Cycles per epoch: 50
- Batches per cycle: 40
- Test rollouts per epoch: 10
- Probability of random actions: 0.3
- Scale of additive Gaussian noise: 0.2
- Scale of the mutual information reward: 5000

The specific hyper-parameters for DIAYN are as follows:

- Number of skill options: 5
- Discriminate skills based on the surrounding state

The specific hyper-parameters for VIME are as follows:

- Weight of the intrinsic reward $\eta$: 0.2
- Bayesian Neural Network (BNN) learning rate: 0.0001
- BNN number of hidden units: 32
- BNN number of layers: 2
- Prior standard deviation: 0.5
- Use second-order update: True
- Use information gain: True
- Use KL ratio: True
- Number of updates per sample: 1

The specific hyper-parameters for DISCERN are as follows:

- Number of candidates to calculate the contrastive loss: 10
- Calculate the MI using the surrounding state

The specific hyper-parameters for PER are as follows:

- Prioritization strength $\alpha$: 0.6
- Importance sampling factor $\beta$: 0.4

The specific hyper-parameter for SAC is as follows:

- Weight of the entropy reward: 0.02

mutualinformationstateintrinsiccontrol/images.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0636a1f80799fd7b1f6b53b4cb4ff86dbabae4e11f488912230785b2069be6ff
size 709101

mutualinformationstateintrinsiccontrol/layout.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bef16514c22a972a2e053dd5774a0f8ad261c919b499ac99d0a2beea7fd878ad
size 521539

neuralapproximatesufficientstatisticsforimplicitmodels/16f1e78d-9335-4c06-ac73-02a62d0df26f_content_list.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a9c44b56f096ba9264974e784a245da4d7ef41b0d49e1996be9519867f93c99
size 105228

neuralapproximatesufficientstatisticsforimplicitmodels/16f1e78d-9335-4c06-ac73-02a62d0df26f_model.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7f720dddd6f7bf53b0da5dd95edeaa8e7d80237f8298c7524eeabc9e869219c0
size 126939

neuralapproximatesufficientstatisticsforimplicitmodels/16f1e78d-9335-4c06-ac73-02a62d0df26f_origin.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:19766a4438baab50705be694231149d41a4d387841e44a6ac6cfcd5f9c68e7b9
size 1257832

neuralapproximatesufficientstatisticsforimplicitmodels/full.md ADDED
@@ -0,0 +1,432 @@
# NEURAL APPROXIMATE SUFFICIENT STATISTICS FOR IMPLICIT MODELS
Yanzhi Chen$^{1*}$, Dinghuai Zhang$^{2*}$, Michael U. Gutmann$^{1}$, Aaron Courville$^{2}$, Zhanxing Zhu$^{3}$
<sup>1</sup>The University of Edinburgh, <sup>2</sup>MILA, <sup>3</sup>Beijing Institute of Big Data Research
# ABSTRACT
We consider the fundamental problem of how to automatically construct summary statistics for implicit generative models where the evaluation of the likelihood function is intractable but sampling data from the model is possible. The idea is to frame the task of constructing sufficient statistics as learning mutual information maximizing representations of the data with the help of deep neural networks. The infomax learning procedure does not need to estimate any density or density ratio. We apply our approach to both traditional approximate Bayesian computation and recent neural likelihood methods, boosting their performance on a range of tasks.
# 1 INTRODUCTION
Many data generating processes can be well-described by a parametric statistical model that can easily be simulated forward but does not possess an analytical likelihood function. These models are called implicit generative models (Diggle & Gratton, 1984) or simulator-based models (Lintusaari et al., 2017) and are widely used in science and engineering domains, including physics (Sjöstrand et al., 2008), genetics (Järvenpää et al., 2018), computer graphics (Mansinghka et al., 2013), robotics (Lopez-Guevara et al., 2017), finance (Bansal & Yaron, 2004), cosmology (Weyant et al., 2013), ecology (Wood, 2010), and epidemiology (Chinazzi et al., 2020). For example, the number of infected and healthy people in an outbreak can be well modelled by stochastic differential equations (SDEs) simulated via Euler-Maruyama discretization, but the likelihood function of an SDE is generally non-analytical. Directly inferring the parameters of these implicit models is often very challenging.
The techniques known as likelihood-free inference open the door to performing Bayesian inference in such circumstances. Likelihood-free inference needs to evaluate neither the likelihood function nor its derivatives. Rather, it only requires the ability to sample (i.e. simulate) data from the model. Early approaches in approximate Bayesian computation (ABC) perform likelihood-free inference by repeatedly simulating data from the model and picking a small subset of the simulated data close to the observed data to build the posterior (Pritchard et al., 1999; Marjoram et al., 2003; Beaumont et al., 2009; Sisson et al., 2007). Recent advances make use of flexible neural density estimators to approximate either the intractable likelihood (Papamakarios et al., 2019) or the posterior directly (Papamakarios & Murray, 2016; Lueckmann et al., 2017; Greenberg et al., 2019).
Despite the algorithmic differences, a shared ingredient in likelihood-free inference methods is the choice of summary statistics. Well-chosen summary statistics have proven crucial for the performance of likelihood-free inference methods (Blum et al., 2013; Fearnhead & Prangle, 2012; Sisson et al., 2018). Unfortunately, in practice it is often difficult to determine low-dimensional and informative summary statistics without domain knowledge from experts. In this work, we propose a novel deep neural network-based approach for the automatic construction of summary statistics. Neural networks have been previously applied to learning summary statistics for likelihood-free inference (Jiang et al., 2017; Dinev & Gutmann, 2018; Alsing et al., 2018; Brehmer et al., 2020). Our approach is unique in that our learned statistics directly target global sufficiency. The main idea is to exploit the link between statistical sufficiency and information theory, and to formulate the task of learning a sufficient statistic as the task of learning information-maximizing representations of the data. We achieve this with distribution-free mutual information estimators or their proxies (Székely et al., 2014; Hjelm
et al., 2018). Importantly, our statistics can be learned jointly with the posterior, resulting in fast learning where the two can refine each other iteratively. To sum up, our main contributions are:
- We propose a new neural approach to automatically extract compact, near-sufficient statistics from raw data. The approach removes the need for careful handcrafted design of summary statistics.
- With the proposed statistics, we develop two new likelihood-free inference methods, namely SMC-ABC+ and SNL+. Experiments on tasks with various types of data demonstrate their effectiveness.
# 2 BACKGROUND
**Likelihood-free inference.** LFI considers the task of Bayesian inference when the likelihood function of the model is intractable but simulating (sampling) data from the model is possible:
$$
\pi(\boldsymbol{\theta} \mid \mathbf{x}_{o}) \propto \pi(\boldsymbol{\theta}) \underbrace{p\left(\mathbf{x}_{o} \mid \boldsymbol{\theta}\right)}_{?} \tag{1}
$$
where $\mathbf{x}_o$ is the observed data, $\pi(\pmb{\theta})$ is the prior over the model parameters $\pmb{\theta}$ , $p(\mathbf{x}_o|\pmb{\theta})$ is the (possibly) non-analytical likelihood function and $\pi(\pmb{\theta}|\mathbf{x}_o)$ is the posterior over $\pmb{\theta}$ . We assume that, while we do not have access to the exact likelihood, we can still sample (simulate) data from the model with a simulator: $\mathbf{x} \sim p(\mathbf{x}|\pmb{\theta})$ . The task is then to infer $\pi(\pmb{\theta}|\mathbf{x}_o)$ given $\mathbf{x}_o$ and the sampled data: $\mathcal{D} = \{\pmb{\theta}_i, \mathbf{x}_i\}_{i=1}^n$ where $\pmb{\theta}_i \sim p(\pmb{\theta}), \mathbf{x}_i \sim p(\mathbf{x}|\pmb{\theta}_i)$ . Note that $p(\pmb{\theta})$ is not necessarily the prior $\pi(\pmb{\theta})$ .
**Curse of dimensionality.** Different likelihood-free inference algorithms might learn $\pi(\boldsymbol{\theta}|\mathbf{x}_o)$ in different ways; nevertheless, most existing methods suffer from the curse of dimensionality. For example, traditional ABC methods use a small subset of $\mathcal{D}$ closest to $\mathbf{x}_o$ under some metric to build the posterior (Pritchard et al., 1999; Marjoram et al., 2003; Beaumont et al., 2009; Sisson et al., 2007); however, in high-dimensional spaces, measuring the distance sensibly is notoriously hard (Sorzano et al., 2014; Xie et al., 2017). On the other hand, recent advances (Papamakarios et al., 2019; Lueckmann et al., 2017; Papamakarios & Murray, 2016; Greenberg et al., 2019) utilize neural density estimators (NDE) to model the intractable likelihood or the posterior. Unfortunately, accurately modeling high-dimensional distributions with NDEs is also known to be very difficult (Rippel & Adams, 2013; Van Oord et al., 2016), especially when the available training data is scarce.

Our interest here is not to design a new inference algorithm, but to find a low-dimensional statistic $\mathbf{s} = s(\mathbf{x})$ that is (Bayesian) sufficient:

$$
\pi(\boldsymbol{\theta} \mid \mathbf{x}_o) \approx \pi(\boldsymbol{\theta} \mid \mathbf{s}_o) \propto \pi(\boldsymbol{\theta})\, p(\mathbf{s}_o \mid \boldsymbol{\theta}), \tag{2}
$$


where $s: \mathcal{X} \to \mathcal{S}$ is a deterministic function also learned from $\mathcal{D}$. We conjecture that learning $s(\cdot)$ might be an easier task than direct density estimation. The resulting statistic $s$ could then be applied to a wide range of likelihood-free inference algorithms, as we elaborate in Section 3.2.
# 3 METHODOLOGY
# 3.1 NEURAL SUFFICIENT STATISTICS
Our new deep neural network-based approach for automatic construction of near-sufficient statistics is based on the infomax principle, as illustrated by the following proposition (also see Figure 1):

Proposition 1. Let $\pmb{\theta} \sim p(\pmb{\theta})$, $\mathbf{x} \sim p(\mathbf{x}|\pmb{\theta})$, and $s: \mathcal{X} \to \mathcal{S}$ be a deterministic function. Then $\mathbf{s} = s(\mathbf{x})$ is a sufficient statistic for $p(\mathbf{x}|\pmb{\theta})$ if and only if

$$
s = \operatorname*{arg\,max}_{S:\mathcal{X}\to\mathcal{S}} I(\boldsymbol{\theta}; S(\mathbf{x})),
$$

where $S$ is a deterministic mapping and $I(\cdot\,;\cdot)$ denotes the mutual information between random variables.

Proof. We defer the complete proof to the appendix. This proposition is a variant of Theorem 8 in Shamir et al. (2010), with an adaptation to the likelihood-free inference scenario. $\square$

Figure 1: Left: traditional likelihood-free inference algorithms need handcrafted summary statistics, which requires expert knowledge. Right: our method automatically mines a low-dimensional, near-sufficient statistic $\mathbf{s}$ of $\mathbf{x}$ via the infomax principle, which removes the need for careful summary statistic design. Furthermore, this statistic can be re-learned as the posterior inference proceeds.

This important result suggests that we can find the sufficient statistic $\mathbf{s} = s(\mathbf{x})$ for a likelihood function $p(\mathbf{x}|\pmb{\theta})$ by maximizing the mutual information (MI) $I(\pmb{\theta};\mathbf{s}) = KL[p(\pmb{\theta},\mathbf{s})\,\|\,p(\pmb{\theta})p(\mathbf{s})]$ between $\pmb{\theta}$ and $\mathbf{s}$. Moreover, as our interest is in maximizing MI rather than knowing its precise value, we can maximize a non-KL surrogate, which may have advantages in, e.g., estimation accuracy or computational efficiency (Székely et al., 2014; Hjelm et al., 2018; Ozair et al., 2019). To this end, we utilize the following two non-KL estimators:

Jensen-Shannon divergence (JSD) (Hjelm et al., 2018): this non-KL estimator is shown to be more robust than KL-based ones. More specifically, it is defined as:

$$
\hat{I}^{\mathrm{JSD}}(\boldsymbol{\theta}; \mathbf{s}) = \sup_{T: \Theta \times \mathcal{S} \rightarrow \mathbb{R}} \mathbb{E}_{p(\boldsymbol{\theta}, \mathbf{s})}\left[-\operatorname{sp}(-T(\boldsymbol{\theta}, \mathbf{s}))\right] - \mathbb{E}_{p(\boldsymbol{\theta}) p(\mathbf{s})}\left[\operatorname{sp}(T(\boldsymbol{\theta}, \mathbf{s}))\right], \tag{3}
$$

where $\mathrm{sp}(t) = \log (1 + \exp (t))$ is the softplus function. With this estimator, we set up the following objective for learning the sufficient statistics, which simultaneously estimates and maximizes the MI:

$$
\max_{S, T} \mathcal{L}(S, T) = \mathbb{E}_{p(\boldsymbol{\theta}, \mathbf{x})}\left[-\operatorname{sp}(-T(\boldsymbol{\theta}, S(\mathbf{x})))\right] - \mathbb{E}_{p(\boldsymbol{\theta}) p(\mathbf{x})}\left[\operatorname{sp}(T(\boldsymbol{\theta}, S(\mathbf{x})))\right], \tag{4}
$$

where the two deterministic mappings $S$ and $T$ are parameterized by two neural networks. Note that we have used the law of the unconscious statistician (LOTUS) to go from equation 3 to equation 4. The mini-batch version of this objective is given in the appendix.

Distance correlation (DC) (Székely et al., 2014): unlike the JSD estimator, this estimator does not require learning an additional network $T$, so the statistic can be learned much faster. It is defined as:

$$
\hat{I}^{\mathrm{DC}}(\boldsymbol{\theta}; \mathbf{s}) = \frac{\mathbb{E}_{p(\boldsymbol{\theta}, \mathbf{s}) p(\boldsymbol{\theta}', \mathbf{s}')}\left[h(\boldsymbol{\theta}, \boldsymbol{\theta}')\, h(\mathbf{s}, \mathbf{s}')\right]}{\sqrt{\mathbb{E}_{p(\boldsymbol{\theta}) p(\boldsymbol{\theta}')}\left[h^{2}(\boldsymbol{\theta}, \boldsymbol{\theta}')\right]} \cdot \sqrt{\mathbb{E}_{p(\mathbf{s}) p(\mathbf{s}')}\left[h^{2}(\mathbf{s}, \mathbf{s}')\right]}}, \tag{5}
$$

where $h(\mathbf{a}, \mathbf{b}) = \| \mathbf{a} - \mathbf{b} \| - \mathbb{E}_{p(\mathbf{b}')} [ \| \mathbf{a} - \mathbf{b}' \| ] - \mathbb{E}_{p(\mathbf{a}')} [ \| \mathbf{a}' - \mathbf{b} \| ] + \mathbb{E}_{p(\mathbf{a}') p(\mathbf{b}')} [ \| \mathbf{a}' - \mathbf{b}' \| ]$ . Similar to the case of the JSD estimator, we set up the following objective for learning the sufficient statistics:

$$
\max_{S} \mathcal{L}(S) = \frac{\mathbb{E}_{p(\boldsymbol{\theta}, \mathbf{x}) p(\boldsymbol{\theta}', \mathbf{x}')}\left[h(\boldsymbol{\theta}, \boldsymbol{\theta}')\, h(S(\mathbf{x}), S(\mathbf{x}'))\right]}{\sqrt{\mathbb{E}_{p(\boldsymbol{\theta}) p(\boldsymbol{\theta}')}\left[h^{2}(\boldsymbol{\theta}, \boldsymbol{\theta}')\right]} \cdot \sqrt{\mathbb{E}_{p(\mathbf{x}) p(\mathbf{x}')}\left[h^{2}(S(\mathbf{x}), S(\mathbf{x}'))\right]}}, \tag{6}
$$

where the deterministic mapping $S$ is parameterized by a neural network. Again, LOTUS is used to go from equation 5 to equation 6. The mini-batch version of this objective is given in the appendix.

A comparison of the accuracy and efficiency of these two MI estimators (as well as other estimators (Belghazi et al., 2018; Ozair et al., 2019)) for infomax statistics learning is provided in the appendix.

With enough training samples and powerful neural networks, we can obtain near-sufficient statistics with either $s = \arg \max_{S} \max_{T} \mathcal{L}(S, T)$ or $s = \arg \max_{S} \mathcal{L}(S)$, depending on the estimator. The statistic $\mathbf{s}$ of data $\mathbf{x}$ is then given by

$$
\mathbf{s} = s(\mathbf{x}). \tag{7}
$$


In the above construction, we have not specified the form of the networks $S$ and $T$. For $S$, any prior knowledge about the data $\mathbf{x}$ could in principle be incorporated into its design. For example, for sequential data we can realize $S$ as a transformer (Vaswani et al., 2017), and for exchangeable data we can realize $S$ as an exchangeable neural network (Chan et al., 2018). Here we simply adopt a fully-connected architecture for $S$, and leave the problem-specific design of $S$ as future work. For $T$, we choose a split architecture $T(\pmb{\theta}, S(\mathbf{x})) = T'(H(\pmb{\theta}), S(\mathbf{x}))$, where $T'(\cdot, \cdot)$ and $H(\cdot)$ are both MLPs. We therefore learn low-dimensional representations for $\mathbf{x}$ and $\pmb{\theta}$ separately before processing them together. This can be seen as incorporating into the network design the inductive bias that $\mathbf{x}$ and $\pmb{\theta}$ should not interact with each other directly, based on their true relationship (for example, consider the likelihood function of the exponential family: $L(\pmb{\theta};\mathbf{x}) \propto \exp(H(\pmb{\theta})^{\top}S(\mathbf{x}))$).

We are left with the problem of how to select $d$, the dimensionality of the sufficient statistic. The Pitman-Koopman-Darmois theorem (Koopman, 1936) tells us that sufficient statistics of fixed dimensionality only exist for the exponential family, so there is no universal way to select $d$. Here, we propose the following simple heuristic to determine $d$:

$$
d = 2K, \tag{8}
$$

where $K$ is the dimensionality of $\pmb{\theta}$ (which typically satisfies $K \ll D$). The rationale behind this heuristic is that the dimensionality of the sufficient statistic in the exponential family is $K$, and the exponential family has proven reasonably accurate for posterior approximation (see e.g. Thomas et al., 2021; Pacchiardi & Dutta, 2020). By doubling the dimensionality of the statistic to $2K$, we are likely to obtain better representative power than the exponential family while still keeping $d$ small.

Furthermore, we have the following proposition comparing our method to the existing posterior-mean-as-statistic approaches (Fearnhead & Prangle, 2012; Jiang et al., 2017).

Proposition 2. Let $\pmb{\theta} \sim p(\pmb{\theta})$ and $\mathbf{x} \sim p(\mathbf{x}|\pmb{\theta})$. Let $s(\cdot)$ be a deterministic function that satisfies

$$
s = \operatorname*{arg\,min}_{S:\mathcal{X}\to\mathcal{S}} \mathbb{E}_{p(\boldsymbol{\theta},\mathbf{x})}\left[\| S(\mathbf{x}) - \boldsymbol{\theta}\|_{2}^{2}\right],
$$

then $\mathbf{s} = s(\mathbf{x})$ is generally not a maximizer of $I(S(\mathbf{x});\pmb{\theta})$ and hence it is not a sufficient statistic.

Proof. We defer the proof to the appendix. $\square$

This proposition tells us that, unlike our method, the (posterior-)mean-as-statistic approaches widely used in the likelihood-free inference community lose information about the posterior: the resulting statistic is only optimal for predicting the posterior mean (Fearnhead & Prangle, 2012; Jiang et al., 2017). Using such a statistic in inference may therefore yield inaccurate estimates of, e.g., the posterior uncertainty. Nonetheless, which statistic to use depends on the task, e.g. full posterior inference vs. point estimation.
# 3.2 DYNAMIC STATISTICS-POSTERIOR LEARNING
The above neural sufficient statistic could, in principle, be learned via a pilot run before the inference starts, as done, for example, by Drovandi et al. (2011); Fearnhead & Prangle (2012); Jiang et al. (2017). Such a strategy requires extra simulation cost, and the learned statistic is kept fixed during inference. We propose a dynamic learning strategy below to overcome these limitations.

Our idea is to jointly learn the statistic and the posterior in multiple rounds. More concretely, at round $j$ we use the current statistic $s(\cdot)$ to build the $j$-th estimate of the posterior, $q_{j}(\pmb{\theta}|\mathbf{s}_{o}) \approx \pi(\pmb{\theta}|\mathbf{x}_{o})$, and at round $j + 1$ this estimate is used as the new proposal distribution to simulate data: $p_{j + 1}(\pmb{\theta}) \gets q_{j}(\pmb{\theta}|\mathbf{s}_{o})$, $\pmb{\theta}_{i} \sim p_{j + 1}(\pmb{\theta})$, $\mathbf{x}_{i} \sim p(\mathbf{x}|\pmb{\theta}_{i})$. We then re-learn $s(\cdot)$ and $q(\cdot)$ with all the data up to the new round. In this process, $s(\cdot)$ and $q(\cdot)$ refine each other: a good $s(\cdot)$ helps to learn $q(\cdot)$ more accurately, whereas an improved $q(\cdot)$, as a better proposal, in turn helps to learn $s(\cdot)$ more efficiently.

The theoretical basis of this multi-round strategy is provided by Proposition 1, which tells us that the sufficiency of the learned statistic is insensitive to the choice of $p(\pmb{\theta})$, the marginal distribution of $\pmb{\theta}$ in the sampled data $\mathcal{D} = \{\mathbf{x}_i, \pmb{\theta}_i\}_{i=1}^{nj}$. This means that we are indeed free to use any proposal distribution $p_l(\pmb{\theta})$ at any round $l$ of multi-round learning; in that case, $p(\pmb{\theta})$ after round $j$ is the mixture formed by the proposal distributions of the previous rounds, i.e. $p(\pmb{\theta}) = \frac{1}{j} \sum_{l=1}^{j} p_l(\pmb{\theta})$.

```
Algorithm 1 SMC-ABC+
Input: prior π(θ), observed data x_o
Output: estimated posterior π(θ|x_o)
Initialization: D = ∅, p_1(θ) = π(θ)
for j = 1 to r do
    repeat
        sample θ_i ~ p_j(θ)
        simulate x_i ~ p(x|θ_i)
    until n samples
    D ← D ∪ {θ_i, x_i}_{i=1}^n
    fit statistic net s(·) with D by equation 4
    sort D according to ||s(x_i) − s(x_o)||
    fit p(θ|s_o) with the top-m θ's in D
    q_j(θ|s_o) ∝ p(θ|s_o) π(θ) / Σ_{l≤j} p_l(θ)
    p_{j+1}(θ) ← q_j(θ|s_o)
end for
return π(θ|x_o) = q_r(θ|s_o)
```

```
Algorithm 2 SNL+
Input: prior π(θ), observed data x_o
Output: estimated posterior π(θ|x_o)
Initialization: D = ∅, p_1(θ) = π(θ)
for j = 1 to r do
    repeat
        sample θ_i ~ p_j(θ)
        simulate x_i ~ p(x|θ_i)
    until n samples
    D ← D ∪ {θ_i, x_i}_{i=1}^n
    fit statistic net s(·) with D by equation 4
    convert D with the learned s(·)
    fit q(s|θ) with the converted D by equation 11
    q_j(θ|s_o) ∝ π(θ) · q(s_o|θ)
    p_{j+1}(θ) ← q_j(θ|s_o)
end for
return π(θ|x_o) = q_r(θ|s_o)
```
In practice, any likelihood-free inference algorithm that learns the posterior sequentially naturally fits well within the above joint statistic-posterior learning strategy. Here we study two such instances:

Sequential Monte Carlo ABC (SMC-ABC) (Beaumont et al., 2009). This classical algorithm learns the posterior non-parametrically over multiple rounds. Here, we consider a variant of it that better exploits the above neural sufficient statistic and re-uses all previously simulated data. The new SMC-ABC algorithm estimates the posterior $q_{j}(\pmb{\theta}|\mathbf{s}_{o})$ at the $j$-th round as follows. We first sort the data in $\mathcal{D} = \{\mathbf{x}_i,\pmb{\theta}_i\}_{i=1}^{nj}$ according to the distances $\| s(\mathbf{x}_i) - s(\mathbf{x}_o)\|$. We then pick the top-$m$ $\pmb{\theta}$'s whose corresponding distances are smallest. The picked $\pmb{\theta}$'s then follow $\pmb{\theta} \sim p(\pmb{\theta} \mid \mathbf{s}_o)$ as below:

$$
p(\boldsymbol{\theta} \mid \mathbf{s}_o) \propto \sum_{l=1}^{j} p_{l}(\boldsymbol{\theta}) \cdot \Pr\left(\left\| \mathbf{s} - \mathbf{s}_o \right\| < \epsilon \mid \boldsymbol{\theta}\right), \tag{9}
$$

where the threshold $\epsilon$ is implicitly defined by the ratio $\frac{m}{nj}$ (which automatically goes to zero as $j\to \infty$). We then fit $p(\pmb{\theta}|\mathbf{s}_o)$ to the collected $\pmb{\theta}$'s with a flexible parametric model (e.g. a Gaussian copula), with which we obtain the $j$-th estimate of the posterior by importance (re-)weighting:

$$
q_{j}(\boldsymbol{\theta} \mid \mathbf{s}_o) \propto p(\boldsymbol{\theta} \mid \mathbf{s}_o)\, \pi(\boldsymbol{\theta}) \Big/ \sum_{l=1}^{j} p_{l}(\boldsymbol{\theta}). \tag{10}
$$

The whole procedure of this new inference algorithm, SMC-ABC+, is summarized in Algorithm 1.
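
To make the selection step behind equation 9 concrete, here is a small NumPy sketch of the top-$m$ acceptance rule (the function name and interface are ours):

```python
import numpy as np

def select_top_m(thetas, stats, s_o, m):
    """Keep the m parameters whose learned statistics lie closest to s(x_o).

    thetas: (N, K) array; stats: (N, d) array of s(x_i); s_o: (d,) array.
    The implicit ABC threshold epsilon is the m-th smallest distance, so it
    shrinks automatically as the dataset grows across rounds.
    """
    dists = np.linalg.norm(stats - s_o, axis=1)
    idx = np.argsort(dists)[:m]
    return np.asarray(thetas)[idx], dists[idx[-1]]   # accepted thetas, epsilon
```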

Sequential Neural Likelihood (SNL) (Papamakarios et al., 2019). This recent algorithm learns the posterior parametrically, also over multiple rounds. The original SNL method approximates the likelihood function $p(\mathbf{x}|\boldsymbol{\theta})$ by a conditional neural density estimator $q(\mathbf{x}|\boldsymbol{\theta})$, which can be difficult to learn if the dimensionality of $\mathbf{x}$ is high. Here, we alleviate this difficulty with our neural statistic. The new SNL algorithm estimates the posterior $q_{j}(\boldsymbol{\theta}|\mathbf{s}_{o})$ at the $j$-th round as follows. At round $j$, where we have $nj$ simulated data points $\mathcal{D} = \{\pmb{\theta}_i, \mathbf{x}_i\}_{i=1}^{nj}$, we fit a neural density estimator $q(\mathbf{s}|\boldsymbol{\theta})$ as:

$$
q(\mathbf{s} \mid \boldsymbol{\theta}) = \operatorname*{arg\,max}_{Q} \sum_{i=1}^{nj} \log Q(s(\mathbf{x}_i) \mid \boldsymbol{\theta}_i), \tag{11}
$$

where $s(\cdot)$ is the current statistic network and $Q$ is a neural density estimator (e.g. Durkan et al. (2019); Papamakarios et al. (2017)). With $nj$ moderately large and $Q$ flexible enough, this yields $q(\mathbf{s}|\pmb{\theta}) \approx p(\mathbf{s}|\pmb{\theta})$. We then obtain the $j$-th estimate of the posterior by Bayes' rule:

$$
q_{j}(\boldsymbol{\theta} \mid \mathbf{s}_o) \propto \pi(\boldsymbol{\theta}) \cdot q(\mathbf{s}_o \mid \boldsymbol{\theta}). \tag{12}
$$

The whole procedure of this new SNL algorithm, denoted as $\mathrm{SNL}+$ , is summarized in Algorithm 2.
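
As a bird's-eye view of how the pieces fit together, below is a heavily simplified Python sketch of the SNL+ loop; `fit_statistic`, `fit_nde`, and `sample_from` are hypothetical stand-ins for the infomax training (equation 4 or 6), the conditional-NDE fit (equation 11), and sampling from the current posterior estimate, and the NDE interface `q.log_prob` is likewise assumed:

```python
def snl_plus(prior_sample, prior_logpdf, simulate, x_o, rounds=10, n=1000):
    """Sketch of Algorithm 2 (SNL+); all helper functions are placeholders."""
    thetas, xs = [], []
    proposal = prior_sample                      # p_1(theta) = prior
    for _ in range(rounds):
        new_thetas = [proposal() for _ in range(n)]
        thetas += new_thetas
        xs += [simulate(t) for t in new_thetas]
        s = fit_statistic(thetas, xs)            # infomax statistic, eq. 4/6
        q = fit_nde([s(x) for x in xs], thetas)  # q(s | theta), eq. 11
        def post_logpdf(t, s=s, q=q):            # eq. 12 (up to a constant)
            return prior_logpdf(t) + q.log_prob(s(x_o), t)
        proposal = sample_from(post_logpdf)      # proposal for the next round
    return post_logpdf
```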
# 4 RELATED WORK
Approximate Bayesian computation. ABC denotes techniques for likelihood-free inference which work by repeatedly simulating data from the model and picking those similar to the observed data to estimate the posterior (Sisson et al., 2018). Naive ABC performs simulation with the prior, whereas MCMC-ABC (Marjoram et al., 2003; Meeds et al., 2015) and SMC-ABC (Beaumont et al., 2009; Sisson et al., 2007) use informed proposals, and more advanced methods employ experimental design or active learning to accelerate the inference (Gutmann & Corander, 2016; Järvenpää et al., 2019). To measure the similarity to the observed data, it is often wise to use low-dimensional summary statistics rather than the raw data. Here we develop a way to learn compact sufficient statistics for ABC.

Neural density estimator-based inference. Apart from ABC, a recent line of research uses a conditional neural density estimator to (sequentially) learn the intractable likelihood (e.g. SNL; Papamakarios et al. (2019); Lueckmann et al. (2019)) or the posterior directly (e.g. SNPE; Papamakarios & Murray (2016); Lueckmann et al. (2017); Greenberg et al. (2019)). Likelihood-targeting approaches have the advantage that they can readily make use of any proposal distribution in sequential learning, but they rely on low-dimensional, well-chosen summary statistics. Posterior-targeting methods, on the contrary, need no summary statistic design, but they require non-trivial efforts to facilitate sequential learning. Our approach (e.g. SNL+) can be seen as taking the advantages of both worlds.

Automatic construction of summary statistics. A number of methods have been proposed to automatically construct low-dimensional summary statistics. Two lines of work are most related to our approach. The first (Fearnhead & Prangle, 2012; Jiang et al., 2017; Chan et al., 2018; Wiqvist et al., 2019; Dinev & Gutmann, 2018) trains a neural network to predict the posterior mean and uses this prediction as the summary statistic. These mean-as-statistic approaches, as analyzed in Proposition 2, do not guarantee sufficiency. Rather than taking the predicted mean, other works (Alsing et al., 2018; Brehmer et al., 2020) take the score function $\nabla_{\pmb{\theta}}\log p(\mathbf{x}|\pmb{\theta})|_{\pmb{\theta} = \pmb{\theta}^*}$ around some fiducial parameter $\pmb{\theta}^*$ as the summary statistic. However, these score-as-statistic methods are only locally sufficient around $\pmb{\theta}^*$. Our approach differs from all these methods in that it targets global sufficiency for all $\pmb{\theta}$.

MI and ratio estimation. It has been shown in the literature that many variational estimators of the MI $I(X;Y)$ also estimate the ratio $p(X,Y) / p(X)p(Y)$ up to a constant (Nowozin et al., 2016; Nguyen et al., 2010). Our MI-based statistic learning method is therefore closely related to ratio-estimation approaches to posterior inference (Hermans et al., 2020; Thomas et al., 2021). The differences are that 1) we decouple the task of statistics learning from the task of density estimation for LFI, which allows us to use infomax representation learning methods that are ratio-free (Székely et al., 2014; Ozair et al., 2019); and 2) even when we do estimate the ratio, we do so in a low-dimensional space based on a sufficient statistics perspective, which is typically easier than in the original space.
# 5 EXPERIMENTS
# 5.1 SETUP
Baselines. We apply the proposed statistics to the two aforementioned likelihood-free inference methods: (i) SMC-ABC (Beaumont et al., 2009) and (ii) SNL (Papamakarios et al., 2019). We compare the performance of the algorithms augmented with our neural statistics (dubbed SMC-ABC+ and SNL+) to their original versions as well as to versions based on expert-designed statistics (details presented later; we call the corresponding methods SMC-ABC' and SNL'). We also compare to the sequential neural posterior estimation (SNPE) method<sup>1</sup>, which needs no statistic design, as well as to the sequential ratio estimation (SRE) method (Hermans et al., 2020), which is closely related to our MI-based method<sup>2</sup>. All methods are run for 10 rounds with 1,000 simulations each. The results presented below are for the JSD estimator; the DC estimator achieves similar accuracy (see appendix).

Evaluation metric. To assess the quality of the estimated posterior, we compute the Jensen-Shannon divergence (JSD) between the approximate posterior $Q$ and the true posterior $P$ for each method (see appendix). For the problems we consider, the true posterior $P$ is either analytically available, or can be accurately approximated by a standard rejection ABC algorithm (Pritchard et al., 1999) with a known low-dimensional sufficient statistic (e.g. $s(\mathbf{x}) \in \mathbb{Z}$) and extensive simulations (e.g. $10^{6}$).

Figure 2: Ising model. (a) The 64D observed data $\mathbf{x}_o\in \{-1,1\}^{64}$. (b) The JSD between the true and the learned posteriors. (c) The relationship between the learned statistics and the sufficient statistic.

<table><tr><td>SMC'</td><td>SMC+</td><td>SNL'</td><td>SNL+</td><td>SRE</td><td>SNPE</td></tr><tr><td>0.008 ± 0.006</td><td>0.046 ± 0.051</td><td>0.007 ± 0.002</td><td>0.015 ± 0.011</td><td>0.083 ± 0.029</td><td>0.058 ± 0.039</td></tr></table>

Table 1: Ising model. The JSD between the learned and the true posterior with 10,000 simulations. Here SMC' and SNL' utilize the ground-truth sufficient statistics guided by human prior knowledge.
# 5.2 RESULTS
We demonstrate the effectiveness of our method on three models: (a) an Ising model; (b) a Gaussian copula model; (c) an Ornstein-Uhlenbeck process. The Ising model does not have an analytical likelihood, but its posterior can be approximated accurately by rejection ABC thanks to the existence of a low-dimensional, discrete sufficient statistic. The last two models have analytical likelihoods and hence analytical posteriors. These models cover the cases of graph data, i.i.d. data and sequence data.<sup>3</sup>

Ising model. The first model we consider is a mathematical model in statistical physics that describes the states of atomic spins on an $8 \times 8$ lattice (see Figure 2(a)). Each spin has two states, described by a discrete random variable $x_{i} \in \{-1, +1\}$, and is only allowed to interact with its neighbours. Given parameters $\pmb{\theta} = \{\theta_{1}, \theta_{2}\}$, the probability density function of the Ising model is:

$$
p(\mathbf{x} \mid \boldsymbol{\theta}) \propto \exp(-H(\mathbf{x}; \boldsymbol{\theta})),
$$

$$
H(\mathbf{x}; \boldsymbol{\theta}) = -\theta_{1} \sum_{\langle i, j \rangle} x_{i} x_{j} - \theta_{2} \sum_{i} x_{i},
$$

where $\langle i,j\rangle$ denotes that spins $i$ and $j$ are neighbours; $H$ is also called the Hamiltonian of the model. The likelihood function of this model is not analytical due to the intractable normalizing constant $Z(\pmb{\theta}) = \sum_{\mathbf{x}\in \{-1,1\}^{m\cdot m}}\exp[-H(\mathbf{x};\pmb{\theta})]$. However, sampling from the model by MCMC is possible. Note that the sufficient statistics are known for this model: $s^*(\mathbf{x}) = \{\sum_{\langle i,j\rangle}x_ix_j, \sum_ix_i\}$. The true posterior can easily be approximated by rejection ABC with the low-dimensional sufficient statistics and extensive simulations. Here, we assume that $\theta_{2}$ is known, and the task is to infer the posterior of $\theta_{1}$ under a uniform prior $\theta_{1}\sim \mathcal{U}(0,1.5)$ (in this case the sufficient statistic becomes 1D: $s^*(\mathbf{x}) = \sum_{\langle i,j\rangle}x_ix_j$). The true parameters are $\pmb{\theta}^{*} = \{0.3, 0.1\}$.

In Figure 2(c), we investigate whether the proposed statistic achieves sufficiency. Ideally, if the learned statistic $s$ in our method recovers the true sufficient statistic $s^*$ well, the relationship between $s$ and $s^*$ should be nearly monotonic (note that both $s$ and $s^*$ are 1D here). To verify this, we plot the relationship between $s^*$ and $s$. We see from the figure that the $s$ learned by our method does, approximately, increase monotonically with $s^*$, suggesting that $s$ recovers $s^*$ reasonably well. In comparison, the statistic learned with the widely used posterior-mean-as-statistic approach depends only weakly on the true sufficient statistic; it is nearly indistinguishable across different values of $s^*$. In other words, it loses sufficiency. This result empirically verifies our theoretical result in Proposition 2.

Figure 3: Gaussian copula. (a) The observed data $\mathbf{x}_o$ in this problem, which is comprised of a population of 200 i.i.d. samples. (b) The JSD between the true and the learned posteriors. (c) The contours.

<table><tr><td>SMC'</td><td>SMC+</td><td>SNL'</td><td>SNL+</td><td>SRE</td><td>SNPE</td></tr><tr><td>0.183 ± 0.014</td><td>0.047 ± 0.009</td><td>0.054 ± 0.016</td><td>0.042 ± 0.006</td><td>0.052 ± 0.032</td><td>0.037 ± 0.018</td></tr></table>

Table 2: Gaussian copula. The JSD between the learned and the true posterior with 10,000 simulations. Here SMC' and SNL' utilize the hand-crafted summary statistics guided by human prior knowledge.

Figure 2(b) further shows the JSD between the true and the learned posterior for the different methods across the rounds (the vertical bars indicate the standard deviation; each JSD is an average over 5 independent runs, and the experiments below use the same setup). It can be seen from the figure that for this model, the likelihood-free inference methods augmented with the proposed statistic (SMC-ABC+, SNL+) outperform their original counterparts (SMC-ABC, SNL) by a large margin. In Table 1, we further compare our statistics with the expert-designed statistics, whose performance is close to ours (here the expert statistic is taken to be the true sufficient statistic $\mathbf{s}^*$). We further see that our method also outperforms SRE, which directly estimates the ratio $t(\mathbf{x},\pmb{\theta}) = p(\mathbf{x},\pmb{\theta}) / p(\mathbf{x})p(\pmb{\theta}) \propto L(\pmb{\theta};\mathbf{x})$ in the high-dimensional space (note that the true likelihood has the form $L(\pmb{\theta};\mathbf{x}) = \exp(\pmb{\theta} s^{*}(\mathbf{x})) / Z(\pmb{\theta})$), as well as SNPE (version B). The reason why SNPE(-B) does not perform more satisfactorily might be that it relies on importance weights to facilitate sequential learning, which can induce high variance that makes training unstable.

Gaussian copula. The second model we consider is a 2D Gaussian copula model (Chen & Gutmann, 2019). Data $\mathbf{x}$ for this model can be generated with the aid of a latent variable $\mathbf{z}$ as follows:

$$
\mathbf{z} \sim \mathcal{N}\Big(\mathbf{z}; \mathbf{0}, \begin{bmatrix} 1 & \theta_{3} \\ \theta_{3} & 1 \end{bmatrix}\Big),
$$

$$
x_{1} = F_{1}^{-1}\left(\Phi(z_{1}); \theta_{1}\right), \quad x_{2} = F_{2}^{-1}\left(\Phi(z_{2}); \theta_{2}\right),
$$

$$
f_{1}(x_{1}; \theta_{1}) = \operatorname{Beta}(x_{1}; \theta_{1}, 2), \quad f_{2}(x_{2}; \theta_{2}) = \theta_{2}\, \mathcal{N}(x_{2}; 1, 1) + (1 - \theta_{2})\, \mathcal{N}(x_{2}; 4, 1/4),
$$

where $\Phi(\cdot)$, $F_{1}(x_{1};\theta_{1})$ and $F_{2}(x_{2};\theta_{2})$ are the cumulative distribution function (CDF) of the standard normal distribution, the CDF of $f_{1}(x_{1};\theta_{1})$ and the CDF of $f_{2}(x_{2};\theta_{2})$, respectively. We assume that a total of 200 samples are drawn i.i.d. from this model, yielding a population $\mathbf{X} = \{\mathbf{x}_i\}_{i = 1}^{200}$ that serves as our observed data. Note that the likelihood of this model can be computed analytically by the change-of-variables formula. To perform inference, we compute a rudimentary statistic to describe $\mathbf{X}$, namely (a) the 20 equally spaced quantiles of the marginal distributions of $\mathbf{X}$ and (b) the correlation between the latent variables $z_{1},z_{2}$ in $\mathbf{X}$, resulting in a statistic of dimensionality 41. A uniform prior is used: $\theta_{1}\sim \mathcal{U}(0.5,12.5)$, $\theta_{2}\sim \mathcal{U}(0,1)$, $\theta_{3}\sim \mathcal{U}(0.4,0.8)$, and $\pmb{\theta}^{*} = \{6,0.5,0.6\}$.

In Figure 3(b), we demonstrate the power of our neural sufficient statistic learning method on the Gaussian copula problem. Overall, we see that the proposed method improves the accuracy of the existing likelihood-free inference methods, as well as their robustness; see e.g. the reduced variability for SNL+ (the high variability of SNL may be due to a lack of the training data required to learn the 41-dimensional likelihood function well). This is also confirmed by the contour plots in Figure 3(c). In Table 2, we further compare the proposed statistic with an expert-designed low-dimensional statistic (here taken to be the 5 equally spaced marginal quantiles and the correlation between $z_{1}, z_{2}$), from which we see that our proposed statistic achieves better performance. For this model, the average performance of our method is slightly worse than that of SNPE. However, SNPE has higher variability, so the difference in performance is not significant.

Figure 4: OU process. (a) The observed time-series data $\mathbf{x}_o = \{x_t\}_{t=1}^{50}$. (b) The JSD between the true and the learned posteriors. (c) The contours of the true posterior and the learned posteriors.

<table><tr><td>SMC'</td><td>SMC+</td><td>SNL'</td><td>SNL+</td><td>SRE</td><td>SNPE</td></tr><tr><td>0.040 ± 0.006</td><td>0.044 ± 0.018</td><td>0.004 ± 0.001</td><td>0.009 ± 0.002</td><td>0.022 ± 0.013</td><td>0.019 ± 0.009</td></tr></table>

Table 3: OU process. The JSD between the learned and the true posterior with 10,000 simulations. Here SMC' and SNL' utilize the hand-crafted summary statistics guided by human prior knowledge.

Ornstein-Uhlenbeck process. The last model we consider is a stochastic differential equation (SDE). Data $\mathbf{x} = \{x_{t}\}_{t=1}^{D}$ in this model are generated sequentially as:

$$
x_{t+1} = x_{t} + \Delta x_{t},
$$

$$
\Delta x_{t} = \theta_{1}(\exp(\theta_{2}) - x_{t})\,\Delta t + 0.5\,\epsilon, \quad \epsilon \sim \mathcal{N}(\epsilon; 0, \Delta t),
$$

where $D = 50$, $\Delta t = 0.2$ and $x_0 = 10$. This SDE can be simulated by the Euler-Maruyama method, and it has an analytical likelihood. It is widely applied in financial mathematics and the physical sciences. Here, the parameters of interest are $\pmb{\theta} = \{\theta_{1},\theta_{2}\}$, and a uniform prior is placed on them: $\theta_{1}\sim \mathcal{U}(0,1)$, $\theta_{2}\sim \mathcal{U}(-2.0,2.0)$. The true parameters are set to $\pmb{\theta}^{*} = \{0.5,1.0\}$.

Figure 4(b) compares the JSD of each method against the simulation cost. Again, we find that the proposed neural sufficient statistics greatly improve the performance of both SMC-ABC and SNL. In Table 3, we compare our statistics to expert statistics (here taken to be the mean, the standard error and the autocorrelations at lags 1, 2, 3 of the time series). It can be seen that our statistics perform comparably to the expert statistics. Our method also seems to outperform SRE and SNPE.
# 6 CONCLUSION
We proposed a new deep learning-based approach for automatically constructing low-dimensional sufficient statistics for likelihood-free inference. The obtained neural approximate sufficient statistics can be applied to both traditional ABC-based and recent NDE-based methods. Our main hypothesis is that learning such sufficient statistics via the infomax principle might be easier than estimating the density itself. We verified this hypothesis by experiments on various tasks with graph, i.i.d. and sequence data. Our method establishes a link between the representation learning and likelihood-free inference communities. For future work, we may consider further infomax representation learning approaches, as well as more principled ways to determine the dimensionality of the sufficient statistics.
# REFERENCES
Justin Alsing, Benjamin Wandelt, and Stephen Feeney. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology. Monthly Notices of the Royal Astronomical Society, 477(3):2874-2885, 2018.

Ravi Bansal and Amir Yaron. Risks for the long run: A potential resolution of asset pricing puzzles. The Journal of Finance, 59(4):1481-1509, 2004.

Mark A Beaumont, Jean-Marie Cornuet, Jean-Michel Marin, and Christian P Robert. Adaptive approximate Bayesian computation. Biometrika, 96(4):983-990, 2009.

Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. Mutual information neural estimation. In International Conference on Machine Learning, pp. 531-540. PMLR, 2018.

Michael GB Blum, Maria Antonieta Nunes, Dennis Prangle, Scott A Sisson, et al. A comparative review of dimension reduction methods in approximate Bayesian computation. Statistical Science, 28(2):189-208, 2013.

Johann Brehmer, Gilles Louppe, Juan Pavez, and Kyle Cranmer. Mining gold from implicit models to improve likelihood-free inference. Proceedings of the National Academy of Sciences, 117(10):5242-5249, 2020.

Jeffrey Chan, Valerio Perrone, Jeffrey Spence, Paul Jenkins, Sara Mathieson, and Yun Song. A likelihood-free inference framework for population genetic data using exchangeable neural networks. In Advances in Neural Information Processing Systems, pp. 8594-8605, 2018.

Yanzhi Chen and Michael U Gutmann. Adaptive Gaussian copula ABC. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1584-1592. PMLR, 2019.

Matteo Chinazzi, Jessica T Davis, Marco Ajelli, Corrado Gioannini, Maria Litvinova, Stefano Merler, Ana Pastore y Piontti, Kunpeng Mu, Luca Rossi, Kaiyuan Sun, et al. The effect of travel restrictions on the spread of the 2019 novel coronavirus (COVID-19) outbreak. Science, 368(6489):395-400, 2020.

Thomas M Cover and Joy A Thomas. Elements of Information Theory. Tsinghua University Press, 2003.

Peter J Diggle and Richard J Gratton. Monte Carlo methods of inference for implicit statistical models. Journal of the Royal Statistical Society: Series B, pp. 193-227, 1984.

Traiko Dinev and Michael U Gutmann. Dynamic likelihood-free inference via ratio estimation (DIRE). arXiv preprint arXiv:1810.09899, 2018.

Christopher C Drovandi, Anthony N Pettitt, and Malcolm J Faddy. Approximate Bayesian computation using indirect inference. Journal of the Royal Statistical Society: Series C (Applied Statistics), 60(3):317-337, 2011.

Conor Durkan, Artur Bekasov, Iain Murray, and George Papamakarios. Neural spline flows. arXiv preprint arXiv:1906.04032, 2019.

Conor Durkan, Iain Murray, and George Papamakarios. On contrastive learning for likelihood-free inference. arXiv preprint arXiv:2002.03712, 2020.

Paul Fearnhead and Dennis Prangle. Constructing summary statistics for approximate Bayesian computation: semi-automatic approximate Bayesian computation. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(3):419-474, 2012.

David Greenberg, Marcel Nonnenmacher, and Jakob Macke. Automatic posterior transformation for likelihood-free inference. In International Conference on Machine Learning, pp. 2404-2414, 2019.

Michael Gutmann and Jukka Corander. Bayesian optimization for likelihood-free inference of simulator-based statistical models. Journal of Machine Learning Research, 17(1):4256-4302, 2016.

Joeri Hermans, Volodimir Begy, and Gilles Louppe. Likelihood-free MCMC with amortized approximate ratio estimators. In International Conference on Machine Learning, pp. 4239-4248. PMLR, 2020.

R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In International Conference on Learning Representations, 2018.

M. Järvenpää, M.U. Gutmann, A. Vehtari, and P. Marttinen. Gaussian process modeling in approximate Bayesian computation to estimate horizontal gene transfer in bacteria. Annals of Applied Statistics, 2018.

M. Järvenpää, M.U. Gutmann, A. Vehtari, and P. Marttinen. Efficient acquisition rules for model-based approximate Bayesian computation. Bayesian Analysis, 14(2):595-622, 2019. doi: 10.1214/18-BA1121.

Bai Jiang, Tung-yu Wu, Charles Zheng, and Wing H Wong. Learning summary statistic for approximate Bayesian computation via deep neural network. Statistica Sinica, pp. 1595-1618, 2017.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Bernard Osgood Koopman. On distributions admitting a sufficient statistic. Transactions of the American Mathematical Society, 39(3):399-409, 1936.

J. Lintusaari, M.U. Gutmann, R. Dutta, S. Kaski, and J. Corander. Fundamentals and recent developments in approximate Bayesian computation. Systematic Biology, 66(1):e66-e82, January 2017.

T. Lopez-Guevara, N.K. Taylor, M.U. Gutmann, S. Ramamoorthy, and K. Subr. Adaptable pouring: Teaching robots not to spill using fast but approximate fluid simulation. In Sergey Levine, Vincent Vanhoucke, and Ken Goldberg (eds.), Proceedings of the 1st Annual Conference on Robot Learning (CoRL), volume 78 of Proceedings of Machine Learning Research, pp. 77-86, November 2017.

Jan-Matthis Lueckmann, Pedro J Goncalves, Giacomo Bassetto, Kaan Öcal, Marcel Nonnenmacher, and Jakob H Macke. Flexible statistical inference for mechanistic models of neural dynamics. In Advances in Neural Information Processing Systems, pp. 1289-1299, 2017.

Jan-Matthis Lueckmann, Giacomo Bassetto, Theofanis Karaletsos, and Jakob H Macke. Likelihood-free inference with emulator networks. In Symposium on Advances in Approximate Bayesian Inference, pp. 32-53, 2019.

Jan-Matthis Lueckmann, Jan Boelts, David S Greenberg, Pedro J Gonçalves, and Jakob H Macke. Benchmarking simulation-based inference. arXiv preprint arXiv:2101.04653, 2021.

Vikash K Mansinghka, Tejas D Kulkarni, Yura N Perov, and Josh Tenenbaum. Approximate Bayesian image interpretation using generative probabilistic graphics programs. In Advances in Neural Information Processing Systems, pp. 1520-1528, 2013.

Paul Marjoram, John Molitor, Vincent Plagnol, and Simon Tavare. Markov chain Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences, 100(26):15324-15328, 2003.

Edward Meeds, Robert Leenders, and Max Welling. Hamiltonian ABC. In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence, pp. 582-591, 2015.

XuanLong Nguyen, Martin J Wainwright, and Michael I Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11):5847-5861, 2010.

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pp. 271-279, 2016.

Sherjil Ozair, Corey Lynch, Yoshua Bengio, Aaron Van den Oord, Sergey Levine, and Pierre Sermanet. Wasserstein dependency measure for representation learning. In Advances in Neural Information Processing Systems, pp. 15604-15614, 2019.

Lorenzo Pacchiardi and Ritabrata Dutta. Score matched conditional exponential families for likelihood-free inference. arXiv preprint arXiv:2012.10903, 2020.

George Papamakarios and Iain Murray. Fast $\varepsilon$-free inference of simulation models with Bayesian conditional density estimation. In Advances in Neural Information Processing Systems, pp. 1028-1036, 2016.

George Papamakarios, Theo Pavlakou, and Iain Murray. Masked autoregressive flow for density estimation. In Advances in Neural Information Processing Systems, pp. 2338-2347, 2017.

George Papamakarios, David Sterratt, and Iain Murray. Sequential neural likelihood: Fast likelihood-free inference with autoregressive flows. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 837-848. PMLR, 2019.

Jonathan K Pritchard, Mark T Seielstad, Anna Perez-Lezaun, and Marcus W Feldman. Population growth of human Y chromosomes: a study of Y chromosome microsatellites. Molecular Biology and Evolution, 16(12):1791-1798, 1999.

Oren Rippel and Ryan Prescott Adams. High-dimensional probability estimation with deep density models. arXiv preprint arXiv:1302.5125, 2013.

Ohad Shamir, Sivan Sabato, and Naftali Tishby. Learning and generalization with the information bottleneck. Theoretical Computer Science, 411(29-30):2696-2711, 2010.

S.A. Sisson, Y. Fan, and M.A. Beaumont. Handbook of Approximate Bayesian Computation, chapter 1: Overview of Approximate Bayesian Computation. Chapman and Hall/CRC Press, 2018.

Scott A Sisson, Yanan Fan, and Mark M Tanaka. Sequential Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences, 104(6):1760-1765, 2007.

Torbjörn Sjöstrand, Stephen Mrenna, and Peter Skands. A brief introduction to PYTHIA 8.1. Computer Physics Communications, 178(11):852-867, 2008.

Carlos Oscar Sánchez Sorzano, Javier Vargas, and A Pascual Montano. A survey of dimensionality reduction techniques. arXiv preprint arXiv:1403.2877, 2014.

Gábor J Székely, Maria L Rizzo, et al. Partial distance correlation with methods for dissimilarities. Annals of Statistics, 42(6):2382-2412, 2014.

Owen Thomas, Ritabrata Dutta, Jukka Corander, Samuel Kaski, Michael U Gutmann, et al. Likelihood-free inference by ratio estimation. Bayesian Analysis, 2021.

Aaron Van Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International Conference on Machine Learning, pp. 1747-1756. PMLR, 2016.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. arXiv preprint arXiv:1706.03762, 2017.

Anja Weyant, Chad Schafer, and W Michael Wood-Vasey. Likelihood-free cosmological inference with type Ia supernovae: approximate Bayesian computation for a complete treatment of uncertainty. The Astrophysical Journal, 764(2), 2013.

Samuel Wiqvist, Pierre-Alexandre Mattei, Umberto Picchini, and Jes Frellsen. Partially exchangeable networks and architectures for learning summary statistics in approximate Bayesian computation. In International Conference on Machine Learning, pp. 6798-6807, 2019.

Simon N Wood. Statistical inference for noisy nonlinear ecological dynamic systems. Nature, 466(7310):1102, 2010.

Haozhe Xie, Jie Li, and Hanqing Xue. A survey of dimensionality reduction techniques based on random projection. arXiv preprint arXiv:1706.04371, 2017.
# A THEORETICAL PROOFS
# A.1 PROOF OF PROPOSITION 1
Proof. Firstly, assume $s(\cdot)$ is a sufficient statistic. By the definition of a sufficient statistic we know $p(\mathbf{x}|\pmb{\theta}) = p(\mathbf{x}|\mathbf{s})p(\mathbf{s}|\pmb{\theta})$, so we have the Markov chain $\pmb{\theta} \rightarrow \mathbf{s} \rightarrow \mathbf{x}$ for the data-generating process. On the other hand, since $\mathbf{x} \sim p(\mathbf{x}|\pmb{\theta})$ and $S$ is a deterministic function, we have the Markov chain $\pmb{\theta} \rightarrow \mathbf{x} \rightarrow \mathbf{s}$. By the data processing inequality, we have $I(\pmb{\theta}; \mathbf{x}) \leq I(\pmb{\theta}; s(\mathbf{x}))$ for the first chain and $I(\pmb{\theta}; s(\mathbf{x})) \leq I(\pmb{\theta}; \mathbf{x})$ for the second chain. This implies that $I(\pmb{\theta}; \mathbf{x}) = I(\pmb{\theta}; s(\mathbf{x}))$, i.e. $s$ is a maximizer of $I(\pmb{\theta}; S(\mathbf{x}))$. For the other direction, since $I(\pmb{\theta}; s(\mathbf{x})) = \max_{S} I(\pmb{\theta}; S(\mathbf{x}))$, we have $I(\pmb{\theta}; s(\mathbf{x})) = I(\pmb{\theta}; \mathbf{x})$. Noting that $\pmb{\theta} \rightarrow \mathbf{x} \rightarrow \mathbf{s}$ is a Markov chain, from Theorem 2.8.1 of Cover & Thomas (2003) we get that $\pmb{\theta}$ and $\mathbf{x}$ are conditionally independent given $\mathbf{s}$. This implies that $s$ is sufficient. $\square$
# A.2 PROOF OF PROPOSITION 2
Proof. We can write the objective as $\mathbb{E}_{p(\pmb{\theta},\mathbf{x})}[\| S(\mathbf{x}) - \pmb{\theta}\|_2^2] = \int p(\pmb{\theta},\mathbf{x})\log e^{\| S(\mathbf{x}) - \pmb{\theta}\|_2^2}\,d\mathbf{x}\,d\pmb{\theta}$. On the other hand, we have $I(\pmb{\theta};S(\mathbf{x})) = \int p(\pmb{\theta},\mathbf{x})\log \left[p(S(\mathbf{x})|\pmb{\theta}) / p(S(\mathbf{x}))\right]d\mathbf{x}\,d\pmb{\theta}$. Comparing the two, we see that they are generally not equivalent; equivalence holds only in special cases (e.g. Gaussians). $\square$
# B MORE EXPERIMENTAL DETAILS AND RESULTS
# B.1 DETAILED EXPERIMENTAL SETTINGS
Neural network settings. For the statistic network $S$ in our method (for both the JSD and DC estimators), we adopt a $D$-$100$-$100$-$d$ fully-connected architecture, with $D$ being the dimensionality of the input data and $d$ the dimensionality of the statistic. For the network $H$ used to extract the representation of $\pmb{\theta}$, we adopt a $K$-$100$-$100$-$K$ fully-connected architecture, with $K$ being the dimensionality of the model parameters $\pmb{\theta}$. For the critic network, we adopt a $(d+K)$-$100$-$1$ fully-connected architecture. ReLU is adopted as the non-linearity in all networks. For SRE, which is closely related to our method, we use a $(D+K)$-$144$-$144$-$100$-$1$ architecture; this architecture has a similar complexity to our networks. All these neural networks are trained with Adam (Kingma & Ba, 2014) with a learning rate of $1 \times 10^{-4}$ and a batch size of 200. No weight decay is applied. We take $20\%$ of the data for validation, and stop training if the validation error does not improve for 100 epochs. We take the snapshot with the best validation error as the final result.
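
In PyTorch, the statistic network described above can be built as follows (a direct transcription of the stated layer sizes; the helper name is ours):

```python
import torch.nn as nn

def make_statistic_net(D, d):
    """The D-100-100-d fully-connected statistic network S with ReLUs."""
    return nn.Sequential(
        nn.Linear(D, 100), nn.ReLU(),
        nn.Linear(100, 100), nn.ReLU(),
        nn.Linear(100, d))
```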
For the neural density estimator in SNL/SNPE, which is realized by a Masked Autoregressive Flow (MAF) (Papamakarios et al., 2017), we adopt 5 autoregressive layers, each of which has two hidden layers with 50 tanh units. These are the same settings as in SNL. The MAF is trained with Adam with a learning rate of $5 \times 10^{-4}$, a batch size of 500, and a slight weight decay ($1 \times 10^{-4}$). As with the MI networks, we take $20\%$ of the data for validation, and stop training if the validation error does not improve for 100 epochs. The snapshot with the best validation error is taken as the result.

Sampling from the approximate posterior/learnt proposal. For a fair comparison, we adopt simple rejection sampling for all LFI methods (ABC, SNL, SNPE, SRE) when sampling from the learnt posterior, so that the LFI methods differ only in the way they learn the posterior. No MCMC is used.

Empirical version of objective functions. Recall that in the JSD estimator, the statistic network $S(\cdot)$ is trained with the following objective together with the critic network $T(\cdot)$:

$$
\operatorname*{maximize}_{S, T}\ \mathcal{L}(S, T) = \mathbb{E}_{p(\boldsymbol{\theta}, \mathbf{x})}\left[-\operatorname{sp}(-T(\boldsymbol{\theta}, S(\mathbf{x})))\right] - \mathbb{E}_{p(\boldsymbol{\theta}) p(\mathbf{x})}\left[\operatorname{sp}(T(\boldsymbol{\theta}, S(\mathbf{x})))\right];
$$

the mini-batch approximation to this objective is:

$$
\mathcal{L}(S, T) \approx \frac{1}{n} \sum_{i}^{n}\left[-\operatorname{sp}(-T(\pmb{\theta}_i, S(\mathbf{x}_i)))\right] - \frac{1}{m}\frac{1}{n} \sum_{j}^{m} \sum_{i}^{n}\left[\operatorname{sp}(T(\pmb{\theta}_{j_i}, S(\mathbf{x}_i)))\right],
$$

where $\{j_1, j_2, \dots, j_n\}$ is the $j$-th random permutation of the indices $\{1, 2, \dots, n\}$ and the pairs $(\pmb{\theta}_i, \mathbf{x}_i)$ are randomly drawn from the data $\mathcal{D} = \{\pmb{\theta}_i, \mathbf{x}_i\}_{i=1}^N$. Here we set $m = 400$, and $n$ is the batch size.

In the DC estimator, the statistic network is trained by the following objective:

$$
\operatorname*{maximize}_{S}\ \mathcal{L}(S) = \frac{\mathbb{E}_{p(\boldsymbol{\theta}, \mathbf{x}) p(\boldsymbol{\theta}', \mathbf{x}')}\left[h(\boldsymbol{\theta}, \boldsymbol{\theta}')\, h(S(\mathbf{x}), S(\mathbf{x}'))\right]}{\sqrt{\mathbb{E}_{p(\boldsymbol{\theta}) p(\boldsymbol{\theta}')}\left[h^{2}(\boldsymbol{\theta}, \boldsymbol{\theta}')\right]} \cdot \sqrt{\mathbb{E}_{p(\mathbf{x}) p(\mathbf{x}')}\left[h^{2}(S(\mathbf{x}), S(\mathbf{x}'))\right]}},
$$

where $h(\mathbf{a},\mathbf{b}) = \|\mathbf{a} - \mathbf{b}\| - \mathbb{E}_{p(\mathbf{b}')}[\|\mathbf{a} - \mathbf{b}'\|] - \mathbb{E}_{p(\mathbf{a}')}[\|\mathbf{a}' - \mathbf{b}\|] + \mathbb{E}_{p(\mathbf{a}')p(\mathbf{b}')}[\|\mathbf{a}' - \mathbf{b}'\|]$ . The mini-batch approximation to this objective is:

$$
\mathcal{L}(S) \approx \frac{\sum_{i,j}^{n,n} \tilde{h}(\pmb{\theta}_i, \pmb{\theta}_j)\, \tilde{h}(S(\mathbf{x}_i), S(\mathbf{x}_j))}{\sqrt{\sum_{i,j}^{n,n} \tilde{h}^{2}(\pmb{\theta}_i, \pmb{\theta}_j)} \cdot \sqrt{\sum_{i,j}^{n,n} \tilde{h}^{2}(S(\mathbf{x}_i), S(\mathbf{x}_j))}},
$$

where $\tilde{h}(\mathbf{a}_i,\mathbf{b}_j) = \| \mathbf{a}_i - \mathbf{b}_j\| - \frac{1}{n-2}\sum_{j'}^n\| \mathbf{a}_i - \mathbf{b}_{j'}\| - \frac{1}{n-2}\sum_{i'}^n\| \mathbf{a}_{i'} - \mathbf{b}_j\| + \frac{1}{(n-1)(n-2)}\sum_{i',j'}^{n,n}\| \mathbf{a}_{i'} - \mathbf{b}_{j'}\|$. Here $i,j,i',j'$ are indices within the mini-batch, and $n$ is again the batch size.

JSD calculation between the true and approximate posteriors. The calculation of the Jensen-Shannon divergence between the true posterior $P$ and the approximate posterior $Q$, namely $\mathrm{JSD}(P, Q) = \frac{1}{2} \mathrm{KL}[P \| (P + Q)/2] + \frac{1}{2} \mathrm{KL}[Q \| (P + Q)/2]$, is done numerically by a Riemann sum over $30^K$ equally spaced grid points, with $K$ being the dimensionality of $\theta$. The region covered by these grid points is defined by the min and max values of 500 samples drawn from $P$. When we only have samples from the true posterior (e.g. for the Ising model), we approximate $P$ by a mixture of Gaussians with 8 components.
# B.2 ADDITIONAL EXPERIMENTAL RESULTS
Comparison of different MI estimators. We compare the performance of four MI estimators for infomax statistics learning: the Donsker-Varadhan (DV) estimator (Belghazi et al., 2018), the Jensen-Shannon divergence (JSD) estimator (Hjelm et al., 2018), distance correlation (DC) (Székely et al., 2014) and the Wasserstein distance (WD) (Ozair et al., 2019). We highlight that the last two estimators (DC and WD) are ratio-free. We compare the discrepancy between the true posterior and the posterior inferred with the statistics learned by each estimator, as well as the execution time per mini-batch. The results, averaged over 5 independent runs, are shown in the figure and the table below.

From the figure, we see that the JSD estimator generally yields the best accuracy among the four estimators. In terms of execution time, the DC estimator is clearly the winner, with an execution time of only about 1/15 that of the other estimators. Nevertheless, the accuracy of the DC estimator remains comparable to that of the JSD estimator, especially when the number of training samples is large (e.g. 10,000). Based on these results, we suggest using JSD in small-scale settings (e.g. early rounds of sequential learning) and DC in large-scale ones (e.g. later rounds of sequential learning).
|
| 409 |
+
|
| 410 |
+

|
| 411 |
+
(a) Ising model
|
| 412 |
+
|
| 413 |
+

|
| 414 |
+
(b) Gaussian copula
|
| 415 |
+
|
| 416 |
+

|
| 417 |
+
(c) OU process
|
| 418 |
+
Figure 5: Comparing the accuracy of different MI estimators for infomax statistics learning.
|
| 419 |
+
|
| 420 |
+
| Execution time (ms) | DV | JSD | DC | WD |
|---|---|---|---|---|
| Ising model | 115 | 124 | 6 | 230 |
| Gaussian copula | 154 | 167 | 10 | 288 |
| OU process | 143 | 158 | 13 | 256 |
|
| 421 |
+
|
| 422 |
+
Table 4: Comparing the execution time (ms) of different MI estimators for infomax statistics learning.
|
| 423 |
+
|
| 424 |
+
Contrastive learning vs. MLE. In the experiments in the main text, we discovered that our method does not always achieve the best performance; in particular, it does not work better than SNPE-B on the Gaussian copula problem. Here we investigate why this happens.
|
| 425 |
+
|
| 426 |
+
Upon a closer look, we discover that SRE, which is closely related to our method when the latter is used with the JSD estimator, is also outperformed by SNPE-B on the Gaussian copula problem. Note that both SRE and our method (with the JSD estimator) use contrastive learning rather than MLE. Since neither of these contrastive learning methods outperforms the MLE-based SNPE-B, we suspect the reason is imperfect contrastive learning. To verify this, we further conduct experiments with SNPE-C, which shares the same loss function as SRE but parameterizes the density ratio differently (SRE: a fully-connected network; SNPE-C: an NDE-based parameterization, where the NDE is the same as in SNL). The results are as follows:
|
| 427 |
+
|
| 428 |
+
| JSD to true posterior | SRE | SNPE-B | SNPE-C | SNL+ |
|---|---|---|---|---|
| Ising model | 0.083 | 0.058 | 0.030 | 0.017 |
| Gaussian copula | 0.052 | 0.037 | 0.047 | 0.042 |
| OU process | 0.022 | 0.018 | 0.016 | 0.009 |
|
| 429 |
+
|
| 430 |
+
Table 5: Comparing the JSD of the contrastive learning-based methods (SRE, SNPE-C, SNL+) and the MLE-based method (SNPE-B) on the three models considered in the experiments in the main text.
|
| 431 |
+
|
| 432 |
+
Surprisingly, we find that SNPE-C also performs less satisfactorily than SNPE-B on the Gaussian copula problem. This suggests that contrastive learning might be less preferable than MLE on the Gaussian copula problem, which might also explain the less satisfactory performance of our method.
|
neuralapproximatesufficientstatisticsforimplicitmodels/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:daf975b173f814f77983cd1c82c93e7395fe1a9e1e5719b2b779aa950223bcdc
|
| 3 |
+
size 533944
|
neuralapproximatesufficientstatisticsforimplicitmodels/layout.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2e3d9a4440e95ae6706e54e5628e69e34296b86a6092b534e9bcd5d9e6d44988
|
| 3 |
+
size 626093
|
neuraltopicmodelviaoptimaltransport/b0b88773-75ad-4d9b-af3f-190d5b40e839_content_list.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:277099b83a5744c3f51b8a3ba579a9f1a537099c849d98adb9a68f56b26b8574
|
| 3 |
+
size 87592
|
neuraltopicmodelviaoptimaltransport/b0b88773-75ad-4d9b-af3f-190d5b40e839_model.json
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f983ea942691f9aff73b6df3e0bd95d8e3619069416b2e86d47200bce0c286e3
|
| 3 |
+
size 108688
|
neuraltopicmodelviaoptimaltransport/b0b88773-75ad-4d9b-af3f-190d5b40e839_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:8cfb7488e0cce8cbd9ac9eff1fd6274100c960efa5efe56dbf3f6cf27498bab3
|
| 3 |
+
size 1455902
|
neuraltopicmodelviaoptimaltransport/full.md
ADDED
|
@@ -0,0 +1,370 @@
| 1 |
+
# NEURAL TOPIC MODEL VIA OPTIMAL TRANSPORT
|
| 2 |
+
|
| 3 |
+
He Zhao, Dinh Phung, Viet Huynh, Trung Le, Wray Buntine
|
| 4 |
+
|
| 5 |
+
Department of Data Science and Artificial Intelligence, Faculty of Information Technology Monash University, Australia
|
| 6 |
+
|
| 7 |
+
{ethan.zhao,dinh.phung,viet.huynh,trunglm,wray.buntine}@monash.edu
|
| 8 |
+
|
| 9 |
+
# ABSTRACT
|
| 10 |
+
|
| 11 |
+
Recently, Neural Topic Models (NTMs) inspired by variational autoencoders have attracted increasing research interest due to their promising results on text analysis. However, it is usually hard for existing NTMs to achieve good document representation and coherent/diverse topics at the same time. Moreover, their performance often degrades severely on short documents. The requirement of reparameterisation could also compromise their training quality and model flexibility. To address these shortcomings, we present a new neural topic model based on the theory of optimal transport (OT). Specifically, we propose to learn the topic distribution of a document by directly minimising its OT distance to the document's word distribution. Importantly, the cost matrix of the OT distance models the weights between topics and words, and is constructed from the distances between topics and words in an embedding space. Our proposed model can be trained efficiently with a differentiable loss. Extensive experiments show that our framework significantly outperforms the state-of-the-art NTMs on discovering more coherent and diverse topics and deriving better document representations for both regular and short texts.
|
| 12 |
+
|
| 13 |
+
# 1 INTRODUCTION
|
| 14 |
+
|
| 15 |
+
As an unsupervised approach, topic modelling has enjoyed great success in automatic text analysis. In general, a topic model aims to discover a set of latent topics from a collection of documents, each of which describes an interpretable semantic concept. Topic models like Latent Dirichlet Allocation (LDA) (Blei et al., 2003) and its hierarchical/Bayesian extensions, e.g., in Blei et al. (2010); Paisley et al. (2015); Gan et al. (2015); Zhou et al. (2016) have achieved impressive performance for document analysis. Recently, the developments of Variational AutoEncoders (VAEs) and Autoencoding Variational Inference (AVI) (Kingma & Welling, 2013; Rezende et al., 2014) have facilitated the proposal of Neural Topic Models (NTMs) such as in Miao et al. (2016); Srivastava & Sutton (2017); Krishnan et al. (2018); Burkhardt & Kramer (2019). Inspired by VAE, many NTMs use an encoder that takes the Bag-of-Words (BoW) representation of a document as input and approximates the posterior distribution of the latent topics. The posterior samples are further input into a decoder to reconstruct the BoW representation. Compared with conventional topic models, NTMs usually enjoy better flexibility and scalability, which are important for the applications on large-scale data.
|
| 16 |
+
|
| 17 |
+
Despite the promising performance and recent popularity, existing NTMs have several shortcomings, which could hinder their usefulness and further extensions. i) The training and inference processes of NTMs are typically complex due to the prior and posterior constructions of latent topics. To encourage topic sparsity and smoothness, Dirichlet (Burkhardt & Kramer, 2019) or gamma (Zhang et al., 2018) distributions are usually used as the prior and posterior of topics, but reparameterisation is inapplicable to them; thus, complex sampling schemes or approximations have to be used, which could limit the model flexibility. ii) A desideratum of a topic model is to generate better topical representations of documents with more coherent and diverse topics; but for many existing NTMs, it is hard to achieve good document representation and coherent/diverse topics at the same time. This is because the objective of NTMs is to achieve lower reconstruction error, which usually means topics are less coherent and diverse, as observed and analysed in Srivastava & Sutton (2017); Burkhardt & Kramer (2019). iii) It is well-known that topic models degrade severely on short documents such as tweets, news headlines and product reviews, as each individual document contains insufficient word co-occurrence information. This issue can be exacerbated for NTMs because of the use of the encoder and decoder networks, which are usually more vulnerable to data sparsity.
|
| 18 |
+
|
| 19 |
+
To address the above shortcomings for NTMs, we in this paper propose a neural topic model, which is built upon a novel Optimal Transport (OT) framework derived from a new view of topic modelling. For a document, we consider its content to be encoded by two representations: the observed representation, $x$ , a distribution over all the words in the vocabulary and the latent representation, $z$ , a distribution over all the topics. $x$ can be obtained by normalising a document's word count vector while $z$ needs to be learned by a model. For a document collection, the vocabulary size (i.e., the number of unique words) can be very large but one individual document usually consists of a tiny subset of the words. Therefore, $x$ is a sparse and low-level representation of the semantic information of a document. As the number of topics is much smaller than the vocabulary size, $z$ is the relatively dense and high-level representation of the same content. Therefore, the learning of a topic model can be viewed as the process of learning the distribution $z$ to be as close to the distribution $x$ as possible. Accordingly, it is crucial to investigate how to measure the distance between two distributions with different supports (i.e., words to $x$ and topics to $z$ ). As optimal transport is a powerful tool for measuring the distance travelled in transporting the mass in one distribution to match another given a specific cost function, and recent development on computational OT (e.g., in Cuturi (2013); Frogner et al. (2015); Seguy et al. (2018); Peyre et al. (2019)) has shown the promising feasibility to efficiently compute OT for large-scale problems, it is natural for us to develop a new NTM based on the minimisation of OT.
|
| 20 |
+
|
| 21 |
+
Specifically, our model leverages an encoder that outputs topic distribution $z$ of a document by taking its word count vector as input like standard NTMs, but we minimise the OT distance between $x$ and $z$ , which are two discrete distributions on the support of words and topics, respectively. Notably, the cost function of the OT distance specifies the weights between topics and words, which we define as the distance in an embedding space. To represent their semantics, all the topics and words are embedded in this space. By leveraging the pretrained word embeddings, the cost function is then a function of topic embeddings, which will be learned jointly with the encoder. With the advanced properties of OT on modelling geometric structures on spaces of probability distributions, our model is able to achieve a better balance between obtaining good document representation and generating coherent/diverse topics. In addition, our model eases the burden of designing complex sampling schemes for the posterior of NTMs. More interestingly, our model is a natural way of incorporating pretrained word embeddings, which have been demonstrated to alleviate the issue of insufficient word co-occurrence information in short texts (Zhao et al., 2017; Dieng et al., 2020). With extensive experiments, our model can be shown to enjoy the state-of-the-art performance in terms of both topic quality and document representations for both regular and short texts.
|
| 22 |
+
|
| 23 |
+
# 2 BACKGROUND
|
| 24 |
+
|
| 25 |
+
In this section, we recap the essential background of neural topic models and optimal transport.
|
| 26 |
+
|
| 27 |
+
# 2.1 NEURAL TOPIC MODELS
|
| 28 |
+
|
| 29 |
+
Most existing NTMs can be viewed as extensions of the framework of VAEs where the latent variables can be interpreted as topics. Suppose the document collection to be analysed has $V$ unique words (i.e., the vocabulary size). Each document is represented by a word count vector denoted as $\pmb{x} \in \mathbb{N}^{V}$ and a latent distribution over $K$ topics: $\pmb{z} \in \mathbb{R}^{K}$ . An NTM assumes that $\pmb{z}$ for a document is generated from a prior distribution $p(\pmb{z})$ and that $\pmb{x}$ is generated from the conditional distribution $p_{\phi}(\pmb{x}|\pmb{z})$ modelled by a decoder $\phi$ . The model's goal is to infer the topic distribution given the word counts, i.e., to calculate the posterior $p(\pmb{z}|\pmb{x})$ , which is approximated by the variational distribution $q_{\theta}(\pmb{z}|\pmb{x})$ modelled by an encoder $\theta$ . Similar to VAEs, the training objective of NTMs is the maximisation of the Evidence Lower BOund (ELBO):
|
| 30 |
+
|
| 31 |
+
$$
|
| 32 |
+
\max_{\theta,\phi}\left(\mathbb{E}_{q_{\theta}(\boldsymbol{z}|\boldsymbol{x})}[\log p_{\phi}(\boldsymbol{x}|\boldsymbol{z})] - \mathrm{KL}[q_{\theta}(\boldsymbol{z}|\boldsymbol{x}) \,\|\, p(\boldsymbol{z})]\right). \tag{1}
|
| 33 |
+
$$
|
| 34 |
+
|
| 35 |
+
The first term above is the expected log-likelihood or reconstruction error. As $\pmb{x}$ is a count-valued vector, it is usually assumed to be generated from the multinomial distribution: $p_{\phi}(\pmb{x}|\pmb{z}) \coloneqq \mathrm{Multi}(\phi(\pmb{z}))$ , where $\phi(\pmb{z})$ is a probability vector output by the decoder. Therefore, the expected log-likelihood is proportional to $\pmb{x}^T\log \phi(\pmb{z})$ . The second term is the Kullback-Leibler (KL) divergence that regularises $q_{\theta}(\pmb{z}|\pmb{x})$ to be close to its prior $p(\pmb{z})$ . To interpret topics with words, $\phi(\pmb{z})$ is usually constructed by a single-layer network (Srivastava & Sutton, 2017): $\phi(\pmb{z}) \coloneqq \mathrm{softmax}(\mathbf{W}\pmb{z})$ , where $\mathbf{W} \in \mathbb{R}^{V \times K}$ indicates the weights between topics and words.
|
| 36 |
+
|
| 37 |
+
Different NTMs may vary in the prior and the posterior of $z$ ; for example, the model in Miao et al. (2017) applies Gaussian distributions for them, while Srivastava & Sutton (2017); Burkhardt & Kramer (2019) show that the Dirichlet is a better choice. However, reparameterisation cannot be directly applied to a Dirichlet, so various approximations and sampling schemes have been proposed.
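For concreteness, a minimal NumPy sketch of one ELBO evaluation for an NTM of this form, assuming a Gaussian prior/posterior as in Miao et al. (2017) (the `encode` callable and other names are ours, for illustration only):

```python
import numpy as np

def ntm_elbo(x, encode, W, rng):
    # x: (V,) word counts; encode(x) -> (mu, logvar) of q(z|x); W: (V, K) decoder weights.
    mu, logvar = encode(x)
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)  # reparameterise
    logits = W @ z
    log_phi = logits - np.log(np.sum(np.exp(logits)))              # log softmax(Wz)
    log_lik = x @ log_phi                                          # x^T log phi(z)
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)       # KL to N(0, I)
    return log_lik - kl
```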
|
| 38 |
+
|
| 39 |
+
# 2.2 OPTIMAL TRANSPORT
|
| 40 |
+
|
| 41 |
+
OT distances have been widely used for comparing probability distributions. Here we limit our discussion to OT for discrete distributions, although it applies to continuous distributions as well. Specifically, let us consider two probability vectors $\boldsymbol{r} \in \Delta^{D_r}$ and $\boldsymbol{c} \in \Delta^{D_c}$ , where $\Delta^D$ denotes a $(D-1)$ -simplex. The OT distance<sup>1</sup> between the two probability vectors can be defined as:
|
| 42 |
+
|
| 43 |
+
$$
|
| 44 |
+
d_{\mathbf{M}}(\boldsymbol{r},\boldsymbol{c}) := \min_{\mathbf{P} \in U(\boldsymbol{r},\boldsymbol{c})} \left\langle \mathbf{P}, \mathbf{M} \right\rangle, \tag{2}
|
| 45 |
+
$$
|
| 46 |
+
|
| 47 |
+
where $\langle \cdot, \cdot \rangle$ denotes the Frobenius dot-product; $\mathbf{M} \in \mathbb{R}_{\geq 0}^{D_r \times D_c}$ is the cost matrix/function of the transport; $\mathbf{P} \in \mathbb{R}_{>0}^{D_r \times D_c}$ is the transport matrix/plan; $U(\boldsymbol{r}, \boldsymbol{c})$ denotes the transport polytope of $\boldsymbol{r}$ and $\boldsymbol{c}$ , which is the polyhedral set of $D_r \times D_c$ matrices: $U(\boldsymbol{r}, \boldsymbol{c}) \coloneqq \{P \in \mathbb{R}_{>0}^{D_r \times D_c} | P \mathbf{1}_{D_c} = \boldsymbol{r}, P^T \mathbf{1}_{D_r} = \boldsymbol{c}\}$ ; and $\mathbf{1}_D$ is the $D$ dimensional vector of ones. Intuitively, if we consider two discrete random variables $X \sim \text{Categorical}(\boldsymbol{r})$ and $Y \sim \text{Categorical}(\boldsymbol{c})$ , the transport matrix $\mathbf{P}$ is a joint probability of $(X, Y)$ , i.e., $\mathrm{p}(X = i, Y = j) = p_{ij}$ and $U(\boldsymbol{r}, \boldsymbol{c})$ is the set of all the joint probabilities. The above optimal transport distance can be computed by finding the optimal transport matrix $\mathbf{P}^*$ . It is also noteworthy that the Wasserstein distance can be viewed as a specific case of the OT distances.
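To make Eq. (2) concrete, here is a small sketch that solves the discrete OT problem exactly as a linear program with SciPy (practical only for small $D_r$ and $D_c$ ; the helper name is ours):

```python
import numpy as np
from scipy.optimize import linprog

def ot_distance(r, c, M):
    # Minimise <P, M> over P >= 0 with P @ 1 = r and P.T @ 1 = c.
    Dr, Dc = len(r), len(c)
    A_eq = np.zeros((Dr + Dc, Dr * Dc))
    for i in range(Dr):
        A_eq[i, i * Dc:(i + 1) * Dc] = 1.0   # row sums: sum_j P_ij = r_i
    for j in range(Dc):
        A_eq[Dr + j, j::Dc] = 1.0            # column sums: sum_i P_ij = c_j
    b_eq = np.concatenate([r, c])            # r and c must have equal total mass
    res = linprog(M.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun
```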
|
| 48 |
+
|
| 49 |
+
As directly optimising Eq. (2) can be time-consuming for large-scale problems, a regularised optimal transport distance with an entropic constraint is introduced in Cuturi (2013), named the Sinkhorn distance:
|
| 50 |
+
|
| 51 |
+
$$
|
| 52 |
+
d_{\mathbf{M},\alpha}(\boldsymbol{r},\boldsymbol{c}) := \min_{\mathbf{P} \in U_{\alpha}(\boldsymbol{r},\boldsymbol{c})} \left\langle \mathbf{P}, \mathbf{M} \right\rangle, \tag{3}
|
| 53 |
+
$$
|
| 54 |
+
|
| 55 |
+
where $U_{\alpha}(\boldsymbol{r}, \boldsymbol{c}) \coloneqq \{\mathbf{P} \in U(\boldsymbol{r}, \boldsymbol{c}) \mid h(\mathbf{P}) \geq h(\boldsymbol{r}) + h(\boldsymbol{c}) - \alpha\}$ , $h(\cdot)$ is the entropy function, and $\alpha \in [0, \infty)$ . To compute the Sinkhorn distance, a Lagrange multiplier is introduced for the entropy constraint when minimising Eq. (3), resulting in the Sinkhorn algorithm, which is widely used for discrete OT problems.
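A minimal sketch of the resulting Sinkhorn iterations for a single pair $(\boldsymbol{r}, \boldsymbol{c})$ , where `reg` plays the role of the entropic-regularisation weight (the batched variant actually used in this paper appears later in Algorithm 1):

```python
import numpy as np

def sinkhorn(r, c, M, reg=0.05, n_iter=1000, tol=5e-3):
    H = np.exp(-M / reg)                  # Gibbs kernel
    u = np.ones_like(r)
    for _ in range(n_iter):
        u_prev = u
        v = c / (H.T @ u)                 # enforce column marginals
        u = r / (H @ v)                   # enforce row marginals
        if np.max(np.abs(u - u_prev)) < tol:
            break
    P = u[:, None] * H * v[None, :]       # (approximately) optimal transport plan
    return np.sum(P * M)
```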
|
| 56 |
+
|
| 57 |
+
# 3 PROPOSED MODEL
|
| 58 |
+
|
| 59 |
+
Now we introduce the details of our proposed model. Specifically, we represent each document as a distribution over $V$ words, $\tilde{\pmb{x}}\in \Delta^{V}$ , obtained by normalising $\pmb{x}$ : $\tilde{\pmb{x}}\coloneqq \pmb{x}/S$ , where $S\coloneqq \sum_{v=1}^{V} x_v$ is the length of the document. Also, each document is associated with a distribution over $K$ topics: $\pmb{z}\in \Delta^K$ , each entry of which indicates the proportion of one topic in this document. Like other NTMs, we leverage an encoder to generate $\pmb{z}$ from $\tilde{\pmb{x}}$ : $\pmb{z} = \mathrm{softmax}(\theta (\tilde{\pmb{x}}))$ . Notably, $\theta$ is implemented as a neural network with dropout layers for adding randomness. As $\tilde{\pmb{x}}$ and $\pmb{z}$ are two distributions with different supports for the same document, to learn the encoder, we propose to minimise the following OT distance to push $\pmb{z}$ towards $\tilde{\pmb{x}}$ :
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
\min_{\theta} d_{\mathbf{M}}(\tilde{\boldsymbol{x}}, \boldsymbol{z}). \tag{4}
|
| 63 |
+
$$
|
| 64 |
+
|
| 65 |
+
Here $\mathbf{M} \in \mathbb{R}_{>0}^{V \times K}$ is the cost matrix, where $m_{vk}$ indicates the semantic distance between topic $k$ and word $v$ . Therefore, each column of $\mathbf{M}$ captures the importance of the words in the corresponding topic. In addition to the encoder, $\mathbf{M}$ is a variable that needs to be learned in our model. However, learning the cost function is reported to be a non-trivial task (Cuturi & Avis, 2014; Sun et al., 2020). To address this problem, we specify the following construction of $\mathbf{M}$ :
|
| 66 |
+
|
| 67 |
+
$$
|
| 68 |
+
m_{vk} = 1 - \cos\left(\boldsymbol{e}_{v}, \boldsymbol{g}_{k}\right), \tag{5}
|
| 69 |
+
$$
|
| 70 |
+
|
| 71 |
+
where $\cos (\cdot ,\cdot)\in [-1,1]$ is the cosine similarity; $g_{k}\in \mathbb{R}^{L}$ and $\pmb {e}_v\in \mathbb{R}^L$ are the embeddings of topic $k$ and word $v$ , respectively.
|
| 72 |
+
|
| 73 |
+
The embeddings are expected to capture the semantic information of the topics and words. Instead of learning the word embeddings, we propose to set them to pretrained word embeddings such as word2vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014). This not only reduces the parameter space, making the learning of $\mathbf{M}$ more stable, but also enables us to leverage the rich semantic information in pretrained word embeddings, which is beneficial for short documents. The cosine distance rather than another metric is used here for two reasons: it is the most commonly-used distance metric for word embeddings, and the cost matrix $\mathbf{M}$ must be non-negative, so the similarity metric needs to be upper-bounded. As the cosine similarity falls in the range $[-1,1]$ , we have $\mathbf{M} \in [0,2]^{V \times K}$ .
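A small sketch of this construction, using the matrix notation $\mathbf{G} \in \mathbb{R}^{L \times K}$ and $\mathbf{E} \in \mathbb{R}^{L \times V}$ introduced in the next paragraph (the function name is ours):

```python
import numpy as np

def cost_matrix(E, G):
    # Eq. (5): m_vk = 1 - cos(e_v, g_k). E: (L, V) fixed word embeddings;
    # G: (L, K) learned topic embeddings. Returns a (V, K) matrix in [0, 2].
    E_n = E / np.linalg.norm(E, axis=0, keepdims=True)
    G_n = G / np.linalg.norm(G, axis=0, keepdims=True)
    return 1.0 - E_n.T @ G_n
```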
|
| 74 |
+
|
| 75 |
+
For easy presentation, we denote $\mathbf{G} \in \mathbb{R}^{L \times K}$ and $\mathbf{E} \in \mathbb{R}^{L \times V}$ as the collection of the embeddings of all topics and words, respectively. Now we can rewrite Eq. (4) as:
|
| 76 |
+
|
| 77 |
+
$$
|
| 78 |
+
\min_{\theta,\mathbf{G}} d_{\mathbf{M}}(\tilde{\boldsymbol{x}}, \boldsymbol{z}). \tag{6}
|
| 79 |
+
$$
|
| 80 |
+
|
| 81 |
+
Although the mechanisms are totally different, both $\mathbf{M}$ in our model and $\mathbf{W}$ in NTMs (see Section 2.1) capture the relations between topics and words ( $\mathbf{M}$ is a distance while $\mathbf{W}$ is a similarity). Here $\mathbf{M}$ is the cost function of our OT loss while $\mathbf{W}$ holds the weights in the decoder of NTMs. Different from other NTMs based on VAEs, our model does not explicitly have a decoder that projects $z$ back to the word space to reconstruct $x$ , as the OT distance allows us to compute the distance between $z$ and $\tilde{x}$ directly. To further understand our model, we can nevertheless project $z$ to the space of $x$ by "virtually" defining a decoder: $\phi(z) \coloneqq \text{softmax}((2 - \mathbf{M})z)$ . With the notation of $\phi(z)$ , we show the following theorem to reveal the relationship between other NTMs and ours, whose proof is given in Section A of the appendix.
|
| 82 |
+
|
| 83 |
+
Theorem 1. When $V \geq 8$ and $\mathbf{M} \in [0,2]^{V \times K}$ , we have:
|
| 84 |
+
|
| 85 |
+
$$
|
| 86 |
+
d_{\mathbf{M}}(\tilde{\boldsymbol{x}}, \boldsymbol{z}) \leq -\tilde{\boldsymbol{x}}^{T}\log\phi(\boldsymbol{z}). \tag{7}
|
| 87 |
+
$$
|
| 88 |
+
|
| 89 |
+
With Theorem 1, we have:
|
| 90 |
+
|
| 91 |
+
Lemma 1. Maximising the expected multinomial log-likelihood of NTMs is equivalent to minimising the upper bound of the OT distance in our model.
|
| 92 |
+
|
| 93 |
+
Frogner et al. (2015) propose to minimise the OT distance between the predicted and true label distributions for classification tasks. It is reported in that paper that combining the OT loss with the conventional cross-entropy loss gives better performance than using either of them alone. As the expected multinomial log-likelihood is easier to optimise and can help guide the optimisation of the OT distance, empirically inspired by Frogner et al. (2015) and theoretically motivated by Theorem 1, we propose the following joint loss for our model, which combines the OT distance with the expected log-likelihood:
|
| 94 |
+
|
| 95 |
+
$$
|
| 96 |
+
\max_{\theta,\mathbf{G}}\left(\tilde{\boldsymbol{x}}^{T}\log\phi(\boldsymbol{z}) - d_{\mathbf{M}}(\tilde{\boldsymbol{x}}, \boldsymbol{z})\right). \tag{8}
|
| 97 |
+
$$
|
| 98 |
+
|
| 99 |
+
If we compare the above loss with the ELBO of Eq. (1), it can be observed that, similar to the KL divergence in NTMs, our OT distance can be viewed as a regularisation term on the expected log-likelihood $(\tilde{\boldsymbol{x}}^T\log \phi (\boldsymbol {z})\coloneqq \frac{1}{S}\boldsymbol{x}^T\log \phi (\boldsymbol {z}))$ . Compared with other NTMs, our model eases the burden of developing the prior/posterior distributions and the associated sampling schemes. Moreover, with OT's ability to better model geometric structures, our model is able to achieve better performance in terms of both document representation and topic quality. In addition, the cost function of the OT distance provides a natural way of incorporating pretrained word embeddings, which boosts our model's performance on short documents.
|
| 100 |
+
|
| 101 |
+
Finally, we replace the OT distance with the Sinkhorn distance (Cuturi, 2013), which leads to the final loss function:
|
| 102 |
+
|
| 103 |
+
$$
|
| 104 |
+
\max_{\theta,\mathbf{G}}\left(\epsilon\,\tilde{\boldsymbol{x}}^{T}\log\phi(\boldsymbol{z}) - d_{\mathbf{M},\alpha}(\tilde{\boldsymbol{x}}, \boldsymbol{z})\right), \tag{9}
|
| 105 |
+
$$
|
| 106 |
+
|
| 107 |
+
where $z = \mathrm{softmax}(\theta (\tilde{x}))$ ; $\mathbf{M}$ is parameterised by $\mathbf{G}$ ; $\phi (z)\coloneqq \mathrm{softmax}((2 - \mathbf{M})z)$ ; $x$ and $\tilde{x}$ are the word count vector and its normalisation, respectively; $\epsilon$ is the hyperparameter that controls the weight of the expected likelihood; $\alpha$ is the hyperparameter for the Sinkhorn distance.
|
| 108 |
+
|
| 109 |
+
To compute the Sinkhorn distance, we leverage the Sinkhorn algorithm (Cuturi, 2013). Accordingly, we name our model Neural Sinkhorn Topic Model (NSTM), whose training algorithm is shown in Algorithm 1.
|
| 110 |
+
|
| 111 |
+
input: Input documents, pretrained word embeddings $\mathbf{E}$ , topic number $K$ , $\epsilon$ , $\alpha$

output: $\theta$ , $\mathbf{G}$

Randomly initialise $\theta$ and $\mathbf{G}$ ;

while not converged do
&nbsp;&nbsp;&nbsp;&nbsp;Sample a batch of $B$ input documents $\mathbf{X}$ ;
&nbsp;&nbsp;&nbsp;&nbsp;Column-wisely normalise $\mathbf{X}$ to get $\tilde{\mathbf{X}}$ ;
&nbsp;&nbsp;&nbsp;&nbsp;Compute $\mathbf{M}$ with $\mathbf{G}$ and $\mathbf{E}$ by Eq. (5);
&nbsp;&nbsp;&nbsp;&nbsp;Compute $\mathbf{Z} = \mathrm{softmax}(\theta(\tilde{\mathbf{X}}))$ ;
&nbsp;&nbsp;&nbsp;&nbsp;Compute the first term of Eq. (9);
&nbsp;&nbsp;&nbsp;&nbsp;# Sinkhorn iterations #
&nbsp;&nbsp;&nbsp;&nbsp; $\Psi_{1} = \mathrm{ones}(K,B)/K$ ; $\Psi_{2} = \mathrm{ones}(V,B)/V$ ; $\mathbf{H} = e^{-\mathbf{M}/\alpha}$ ;
&nbsp;&nbsp;&nbsp;&nbsp;while $\Psi_{1}$ changes or any other relevant stopping criterion do
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\Psi_{2} = \tilde{\mathbf{X}}\odot 1/(\mathbf{H}\Psi_{1})$ ;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; $\Psi_{1} = \mathbf{Z}\odot 1/(\mathbf{H}^{T}\Psi_{2})$ ;
&nbsp;&nbsp;&nbsp;&nbsp;end
&nbsp;&nbsp;&nbsp;&nbsp;Compute the second term of Eq. (9): $d_{\mathbf{M},\alpha} = \mathrm{sum}(\Psi_2^T (\mathbf{H}\odot \mathbf{M})\Psi_1)$ ;
&nbsp;&nbsp;&nbsp;&nbsp;Compute the gradients of Eq. (9) in terms of $\theta$ , $\mathbf{G}$ ;
&nbsp;&nbsp;&nbsp;&nbsp;Update $\theta$ , $\mathbf{G}$ with the gradients;
end
|
| 116 |
+
|
| 117 |
+
Algorithm 1: Training algorithm for NSTM. $\mathbf{X} \in \mathbb{N}^{V \times B}$ and $\mathbf{Z} \in \mathbb{R}_{>0}^{K \times B}$ consist of the word count vectors and the topic distributions of the documents in the batch, respectively; $\odot$ is the element-wise multiplication.
|
| 118 |
+
|
| 119 |
+
It is noteworthy that the Sinkhorn iterations can be implemented with the tensor operations of TensorFlow/PyTorch (Patrini et al., 2020). Therefore, the loss of Eq. (9) is differentiable in terms of $\theta$ and $\mathbf{G}$ , which can be optimised jointly in one training iteration. After training the model, we can infer $\mathbf{z}$ by a forward pass of the encoder $\theta$ with the input $\tilde{\mathbf{x}}$ . In practice, $\mathbf{x}$ can be normalised by other methods, e.g., softmax, or one can use TF-IDF as the input to the encoder.
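Putting the pieces together, here is a compact NumPy sketch of the per-batch loss of Eq. (9), reusing `cost_matrix` and `sinkhorn` from the sketches above. It loops over documents rather than using the batched $\Psi$ iterations of Algorithm 1 and has no autodiff, so it is illustrative only.

```python
import numpy as np

def nstm_loss(X_tilde, Z, M, alpha=20.0, eps=0.07):
    # X_tilde: (V, B) normalised word distributions; Z: (K, B) topic distributions.
    logits = (2.0 - M) @ Z                               # the "virtual" decoder (2 - M) z
    phi = np.exp(logits - logits.max(axis=0, keepdims=True))
    phi /= phi.sum(axis=0, keepdims=True)                # softmax over words, per column
    log_lik = np.sum(X_tilde * np.log(phi + 1e-30))      # sum_b x_b^T log phi(z_b)
    ot = sum(sinkhorn(X_tilde[:, b], Z[:, b], M, reg=alpha)
             for b in range(Z.shape[1]))                 # H = exp(-M/alpha), as in Alg. 1
    return ot - eps * log_lik                            # negative of the Eq. (9) objective
```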
|
| 120 |
+
|
| 121 |
+
# 4 RELATED WORKS
|
| 122 |
+
|
| 123 |
+
We first consider NTMs (e.g., in Miao et al. (2016); Srivastava & Sutton (2017); Krishnan et al. (2018); Card et al. (2018); Burkhardt & Kramer (2019); Dieng et al. (2020)), reviewed in Section 2.1, as the closest line of related work to ours. For a detailed survey of NTMs, we refer to Zhao et al. (2021). Connections and comparisons between our model and NTMs have been discussed in Section 3. In addition, word embeddings have recently been widely used as complementary metadata for topic models, especially for modelling short texts. For Bayesian probabilistic topic models, word embeddings are usually incorporated into the generative process of word counts, such as in Petterson et al. (2010); Nguyen et al. (2015); Li et al. (2016); Zhao et al. (2017). Due to the flexibility of NTMs, word embeddings can be incorporated as part of the encoder input, such as in Card et al. (2018), or used in the generative process of words, such as in Dieng et al. (2020). Our novelty with NSTM is that word embeddings are naturally incorporated in the cost function of the OT distance.
|
| 124 |
+
|
| 125 |
+
To our knowledge, works that connect topic modelling with OT are still very limited. In Yurochkin et al. (2019), the authors proposed to compare two documents' similarity with the OT distance between their topic distributions extracted from a pretrained LDA, but the aim is not to learn a topic model. Another recent work related to ours is Wasserstein LDA (WLDA) (Nan et al., 2019), which adapts the framework of Wasserstein AutoEncoders (WAEs) (Tolstikhin et al., 2018). The key difference from ours is that WLDA minimises the Wasserstein distance between the fake data generated with topics and the real data, which can be viewed as an OT variant of VAE-NTMs. In contrast, our NSTM directly minimises the OT distance between $z$ and $x$ , with no explicit generative process from topics to data. Two other related works are Distilled Wasserstein Learning (DWL) (Xu et al., 2018) and Optimal Transport LDA (OTLDA) (Huynh et al., 2020), which adapt the ideas of Wasserstein barycentres and Wasserstein Dictionary Learning (Rolet et al., 2016; Schmitz et al., 2018). There are fundamental differences between ours and DWL/OTLDA in terms of the relations between
|
| 126 |
+
|
| 127 |
+
Table 1: Statistics of the datasets
|
| 128 |
+
|
| 129 |
+
<table><tr><td></td><td>Number of docs</td><td>Vocabulary size (V)</td><td>Total number of words</td><td>Number of labels</td></tr><tr><td>20NG</td><td>18,846</td><td>22,636</td><td>2,037,671</td><td>20</td></tr><tr><td>WS</td><td>12,337</td><td>10,052</td><td>192,483</td><td>8</td></tr><tr><td>TMN</td><td>32,597</td><td>13,368</td><td>592,973</td><td>7</td></tr><tr><td>Reuters</td><td>11,367</td><td>8,817</td><td>836,397</td><td>N/A</td></tr><tr><td>RCV2</td><td>804,414</td><td>7,282</td><td>60,209,009</td><td>N/A</td></tr></table>
|
| 130 |
+
|
| 131 |
+
documents, topics, and words. Specifically, in DWL and OTLDA, documents and topics lie in the same space of words (i.e., both are distributions over words) and $x$ can be approximated with the weighted Wasserstein barycentres of all the topic-word distributions, where the weights can be interpreted as the topic proportions of the document, i.e., $z$ . However, in NSTM, a document lies in both the topic space and the word space, while topics and words are embedded in the embedding space. These differences lead to different views of topic modelling and different frameworks as well. Moreover, DWL mainly focuses on learning word embeddings and representations for International Classification of Diseases (ICD) codes, while NSTM aims to be a general method for topic modelling. Finally, DWL and OTLDA are not neural network models while ours is.
|
| 132 |
+
|
| 133 |
+
# 5 EXPERIMENTS
|
| 134 |
+
|
| 135 |
+
We conduct extensive experiments on several benchmark text datasets to evaluate the performance of NSTM against the state-of-the-art neural topic models.
|
| 136 |
+
|
| 137 |
+
# 5.1 EXPERIMENTAL SETTINGS
|
| 138 |
+
|
| 139 |
+
Datasets: Our experiments are conducted on five widely-used benchmark text datasets of varying size: 20 News Groups (20NG)<sup>2</sup>, Web Snippets (WS) (Phan et al., 2008), Tag My News (TMN) (Vitale et al., 2012)<sup>3</sup>, Reuters, extracted from the Reuters-21578 dataset<sup>4</sup>, and Reuters Corpus Volume 2 (RCV2) (Lewis et al., 2004)<sup>5</sup>. The statistics of the datasets are shown in Table 1. In particular, WS and TMN consist of short documents; 20NG, WS, and TMN are associated with document labels<sup>6</sup>.
|
| 140 |
+
|
| 141 |
+
Evaluation metrics: We report Topic Coherence (TC) and Topic Diversity (TD) as performance metrics for topic quality. TC measures the semantic coherence of the most significant words (top words) of a topic, given a reference corpus. We apply the widely-used Normalized Pointwise Mutual Information (NPMI) (Aletras & Stevenson, 2013; Lau et al., 2014), computed over the top 10 words of each topic by the Palmetto package (Röder et al., 2015)<sup>7</sup>. As not all the discovered topics are interpretable (Yang et al., 2015; Zhao et al., 2018), to comprehensively evaluate the topic quality, we choose the topics with the highest NPMI and report the average score over those selected topics. We vary the proportion of the selected topics from $10\%$ to $100\%$ , where $10\%$ indicates that the top $10\%$ of topics with the highest NPMI are selected and $100\%$ means all the topics are used. TD, as its name implies, measures how diverse the discovered topics are. We define topic diversity to be the percentage of unique words in the top 25 words (Dieng et al., 2020) of the selected topics, where topics are selected as for TC. TD close to 0 indicates redundant topics; TD close to 1 indicates more varied topics. As doc-topic distributions can be viewed as unsupervised document representations, to evaluate the quality of such representations, we perform document clustering tasks and report the purity and Normalized Mutual Information (NMI) (Manning et al., 2008) on 20NG, WS, and TMN, where the document labels are available. With the default training/testing splits of the datasets, we train a model on the training documents and infer the topic distributions $z$ on the testing documents. Given $z$ , we
|
| 142 |
+
|
| 143 |
+
<sup>2</sup>http://qwone.com/~jason/20Newsgroups/
|
| 144 |
+
<sup>3</sup>http://acube.di.unipi.it/tmn-dataset/
|
| 145 |
+
<sup>4</sup>https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html
|
| 146 |
+
<sup>5</sup>https://trec.nist.gov/data/reuters/reuters.html
|
| 147 |
+
<sup>6</sup>We do not consider the labels of Reuters and RCV2 as there are multiple labels for one document.
|
| 148 |
+
<sup>7</sup>http://palmetto.aksw.org
|
| 149 |
+
|
| 150 |
+

|
| 151 |
+
(a) 20NG
|
| 152 |
+
|
| 153 |
+

|
| 154 |
+
(b) WS
|
| 155 |
+
|
| 156 |
+

|
| 157 |
+
(c) TMN
|
| 158 |
+
|
| 159 |
+

|
| 160 |
+
(d)Reuters
|
| 161 |
+
|
| 162 |
+

|
| 163 |
+
(e) RCV2
|
| 164 |
+
|
| 165 |
+

|
| 166 |
+
Figure 1: The first row shows the TC scores for all the datasets and the second row shows the corresponding TD scores. In each subfigure, the horizontal axis indicates the proportion of selected topics according to their NPMIs.
|
| 167 |
+
|
| 168 |
+

|
| 169 |
+
|
| 170 |
+

|
| 171 |
+
|
| 172 |
+

|
| 173 |
+
|
| 174 |
+

|
| 175 |
+
|
| 176 |
+

|
| 177 |
+
(a) 20NG
|
| 178 |
+
|
| 179 |
+

|
| 180 |
+
(b) WS
|
| 181 |
+
|
| 182 |
+

|
| 183 |
+
(c) TMN
|
| 184 |
+
|
| 185 |
+

|
| 186 |
+
(a) Over batches
|
| 187 |
+
(b) Over seconds
|
| 188 |
+
|
| 189 |
+

|
| 190 |
+
|
| 191 |
+

|
| 192 |
+
|
| 193 |
+

|
| 194 |
+
|
| 195 |
+

|
| 196 |
+
Figure 2: The first row shows the km-Purity scores and the second row shows the corresponding km-NMI scores. In each subfigure, the horizontal axis indicates the number of KMeans clusters.
|
| 197 |
+
Figure 3: Training loss.
|
| 198 |
+
|
| 199 |
+
adopt two strategies to perform the document clustering task (see the sketch after this paragraph): i) Following Nguyen et al. (2015), we use the most significant topic of a testing document as its clustering assignment to compute purity and NMI (denoted by top-Purity and top-NMI); ii) We apply the KMeans algorithm on $z$ (over all the topics) of the testing documents and report the purity and NMI of the KMeans clusters (denoted by km-Purity and km-NMI). For the first strategy, the number of clusters equals the number of topics, while for the second, we vary the number of KMeans clusters in the range {20, 40, 60, 80, 100}. Note that our goal is not to achieve state-of-the-art document clustering results but to compare the document representations of topic models. For all the metrics, higher values indicate better performance.
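As a reference for the metrics above, here are small sketches of TD and of the KMeans-based purity/NMI evaluation, using scikit-learn and assuming integer-encoded gold labels (the function names are ours):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

def topic_diversity(topics, top_n=25):
    # topics: list of word lists, each ranked by weight within its topic.
    top_words = [w for topic in topics for w in topic[:top_n]]
    return len(set(top_words)) / len(top_words)

def purity(y_true, y_pred):
    # Each cluster is credited with its majority gold label.
    hits = sum(np.bincount(y_true[y_pred == c]).max() for c in np.unique(y_pred))
    return hits / len(y_true)

def km_scores(Z, y_true, n_clusters=20, seed=0):
    # Z: (n_docs, K) inferred doc-topic distributions of the testing documents.
    y_pred = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(Z)
    return purity(y_true, y_pred), normalized_mutual_info_score(y_true, y_pred)
```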
|
| 200 |
+
|
| 201 |
+
Baseline methods and their settings: We compare with the state-of-the-art NTMs, including: LDA with Products of Experts (ProdLDA) (Srivastava & Sutton, 2017), which replaces the mixture model in LDA with a product of experts and uses the AVI for training; Dirichlet VAE (DVAE) (Burkhardt & Kramer, 2019), which is a neural topic model imposing the Dirichlet prior/posterior on $z$ . We use the variant of DVAE with rejection sampling VI, which is reported to perform the best; Embedding Topic Model (ETM) (Dieng et al., 2020), which is a topic model that incorporates word embeddings and is learned by AVI; Wasserstein LDA (WLDA) (Nan et al., 2019), which is a WAE-based topic model. For all the above baselines, we use their official code with the best reported settings.
|
| 202 |
+
|
| 203 |
+
Settings for NSTM: NSTM is implemented in TensorFlow. For the encoder $\theta$ , to keep things simple, we use a fully-connected neural network with one hidden layer of 200 units and ReLU as the activation function, followed by a dropout layer (rate=0.75) and a batch norm layer, the same as the
|
| 204 |
+
|
| 205 |
+
Table 2: top-Purity and top-NMI for document clustering. The best and second scores of each dataset are highlighted in boldface and with an underline, respectively.
|
| 206 |
+
|
| 207 |
+
<table><tr><td rowspan="2"></td><td colspan="3">top-Purity</td><td colspan="3">top-NMI</td></tr><tr><td>20NG</td><td>WS</td><td>TMN</td><td>20NG</td><td>WS</td><td>TMN</td></tr><tr><td>LDA</td><td>0.398±0.013</td><td>0.446±0.022</td><td>0.470±0.008</td><td>0.320±0.010</td><td>0.185±0.013</td><td>0.125±0.006</td></tr><tr><td>ProdLDA</td><td>0.417±0.004</td><td>0.293±0.023</td><td>0.405±0.157</td><td>0.321±0.004</td><td>0.066±0.016</td><td>0.091±0.101</td></tr><tr><td>DVAE</td><td>0.281±0.006</td><td>0.284±0.005</td><td>0.477±0.012</td><td>0.187±0.005</td><td>0.059±0.001</td><td>0.113±0.004</td></tr><tr><td>ETM</td><td>0.063±0.003</td><td>0.215±0.001</td><td>0.556±0.022</td><td>0.005±0.005</td><td>0.003±0.003</td><td>0.328±0.010</td></tr><tr><td>WLDA</td><td>0.117±0.001</td><td>0.239±0.003</td><td>0.260±0.002</td><td>0.060±0.001</td><td>0.026±0.001</td><td>0.009±0.001</td></tr><tr><td>NSTM</td><td>0.477±0.011</td><td>0.451±0.009</td><td>0.637±0.010</td><td>0.415±0.012</td><td>0.201±0.004</td><td>0.334±0.004</td></tr></table>
|
| 208 |
+
|
| 209 |
+
settings of Burkhardt & Kramer (2019). For the Sinkhorn algorithm, following Cuturi (2013), the maximum number of iterations is 1,000 and the stopping tolerance is 0.005<sup>8</sup>. In all the experiments, we fix $\alpha = 20$ and $\epsilon = 0.07$ . We further vary the two hyperparameters to study our model's sensitivity to them in Figure B.1 of the appendix. Finetuning the parameters specifically for a dataset may give better results. The optimisation of NSTM is done by Adam (Kingma & Ba, 2015) with learning rate 0.001 and batch size 200 for at most 50 iterations. For NSTM and ETM, the 50-dimensional (i.e., $L = 50$ , see Eq. (5)) GloVe word embeddings (Pennington et al., 2014) pretrained on Wikipedia are used. We use $K = 100$ topics in most cases and set $K = 500$ on RCV2 to test our model's scalability.
|
| 210 |
+
|
| 211 |
+
# 5.2 RESULTS
|
| 212 |
+
|
| 213 |
+
Quantitative results: We run all the models in comparison five times with different random seeds and report the mean and standard deviation (as error bars). We show the results of TC and TD in Figure 1, top-Purity/NMI in Table 2, and km-Purity/NMI in Figure 2, respectively. We make the following remarks about the results: i) Our proposed NSTM outperforms the others significantly in terms of topic coherence while obtaining high topic diversity on all the datasets. Although others may have higher TD than ours on one or two datasets, they usually cannot achieve a high TC at the same time. ii) In terms of document clustering, our model performs the best in general with a significant gap over the other NTMs, except that ours is second for the KMeans clustering on 20NG. This demonstrates that NSTM is not only able to discover interpretable topics with better quality but also learns good document representations for clustering. It also shows that with the OT distance, our model can achieve a better balance among the comprehensive metrics of topic modelling. iii) For all the evaluation metrics, our model is consistently the best on the short documents, including WS and TMN. This demonstrates the effectiveness of our way of incorporating pretrained word embeddings and shows our model's potential for short text topic modelling. Although ETM also uses pretrained word embeddings, its performance is not comparable to ours.
|
| 214 |
+
|
| 215 |
+
Scalability: NSTM has scalability comparable to other NTMs and is able to scale to large datasets with a large number of topics. To demonstrate this, we run NSTM, DVAE, and ProdLDA (as these three are implemented in TensorFlow, while ETM is in PyTorch and WLDA is in MXNet) on RCV2 with $K = 500$ . The three models run on a Titan RTX GPU with batch size 1,000. Figure 3 shows the training losses, which demonstrate that NSTM has a learning speed similar to ProdLDA's and better than DVAE's. The TC and TD scores of this experiment are shown in Section C of the appendix, where it can be observed that with 500 topics, our model shows a similar performance advantage over the others.
|
| 216 |
+
|
| 217 |
+
Qualitative analysis: As topics in our model are embedded in the same space as pretrained word embeddings, they share similar geometric properties. Figure 4 shows a qualitative analysis. For the t-SNE (Maaten & Hinton, 2008) visualisation, we select the top 50 topics with the highest NPMI learned by a run of NSTM on RCV2 with $K = 100$ and feed their (50 dimensional) embeddings into the t-SNE method. We also show the top five words and the topic number (1 to 50) of each topic. We
|
| 218 |
+
|
| 219 |
+

|
| 220 |
+
Figure 4: Left: t-SNE visualisation of topic embeddings on RCV2. One red dot represents a topic. The top 5 words and the topic number (1 to 50) of each topic are also shown. Right: interactions between word and topic embeddings.
|
| 221 |
+
|
| 222 |
+
can observe that although the words of the topics are different, the semantic similarity between the topics captured by the embeddings is highly interpretable. In addition, we take the GloVe embedding of the polysemous word "apple" and find the 10 closest words among the 0.4 million words of the GloVe vocabulary according to cosine similarity. It can be seen that, by default, "apple" refers more to the Apple company in GloVe. Either adding the embedding of topic 1, which describes the concept of "food", or subtracting the embedding of topic 46, which describes the concept of "tech companies", reveals the fruit sense of the word "apple". More qualitative analysis on topics is provided in Section E of the appendix.
|
| 223 |
+
|
| 224 |
+
# 6 CONCLUSION
|
| 225 |
+
|
| 226 |
+
In this paper, we presented a novel neural topic model based on optimal transport, where a document is endowed with two representations: the word distribution, $\tilde{\pmb{x}}$ , and the topic distribution, $z$ . An OT distance is leveraged to compare the semantic distance between the two distributions, whose cost function is defined according to the cosine similarities between topics and words in an embedding space. $z$ is obtained from an encoder that takes $\tilde{\pmb{x}}$ as input and is trained by minimising the OT distance between $z$ and $\tilde{\pmb{x}}$ . With pretrained word embeddings fixed, topic embeddings are learned through the cost function by the same minimisation of the OT distance. Our model has shown appealing properties that overcome several shortcomings of existing neural topic models. Extensive experiments show that our model achieves state-of-the-art performance in both discovering quality topics and deriving useful document representations, for both regular and short texts.
|
| 227 |
+
|
| 228 |
+
# ACKNOWLEDGMENTS
|
| 229 |
+
|
| 230 |
+
Trung Le was supported by AOARD grant FA2386-19-1-4040. Wray Buntine was supported by the Australian Research Council under award DP190100017.
|
| 231 |
+
|
| 232 |
+
# REFERENCES
|
| 233 |
+
|
| 234 |
+
Nikolaos Aletras and Mark Stevenson. Evaluating topic coherence using distributional semantics. In International Conference on Computational Semantics, pp. 13-22, 2013.
|
| 235 |
+
David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet allocation. JMLR, 3:993-1022, 2003.
|
| 236 |
+
David M Blei, Thomas L Griffiths, and Michael I Jordan. The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies. Journal of the ACM, 57(2):7, 2010.
|
| 237 |
+
Sophie Burkhardt and Stefan Kramer. Decoupling sparsity and smoothness in the Dirichlet variational autoencoder topic model. JMLR, 20(131):1-27, 2019.
|
| 238 |
+
|
| 239 |
+
Dallas Card, Chenhao Tan, and Noah A Smith. Neural models for documents with metadata. In ACL, pp. 2031-2040, 2018.
|
| 240 |
+
Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In NIPS, pp. 2292-2300, 2013.
|
| 241 |
+
Marco Cuturi and David Avis. Ground metric learning. JMLR, 15(1):533-564, 2014.
|
| 242 |
+
Adji B Dieng, Francisco JR Ruiz, and David M Blei. Topic modeling in embedding spaces. TACL, 8: 439-453, 2020.
|
| 243 |
+
Charlie Frogner, Chiyuan Zhang, Hossein Mobahi, Mauricio Araya, and Tomaso A Poggio. Learning with a Wasserstein loss. In NIPS, pp. 2053-2061, 2015.
|
| 244 |
+
Zhe Gan, R. Henao, D. Carlson, and Lawrence Carin. Learning deep sigmoid belief networks with data augmentation. In AISTATS, pp. 268-276, 2015.
|
| 245 |
+
Viet Huynh, He Zhao, and Dinh Phung. OTLDA: A geometry-aware optimal transport approach for topic modeling. NeurIPS, 2020.
|
| 246 |
+
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR. 2015.
|
| 247 |
+
Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. ICLR, 2013.
|
| 248 |
+
Rahul Krishnan, Dawen Liang, and Matthew Hoffman. On the challenges of learning with inference networks on sparse, high-dimensional data. In AISTATS, pp. 143-151, 2018.
|
| 249 |
+
John D Lafferty and David M Blei. Correlated topic models. In NIPS, pp. 147-154, 2006.
|
| 250 |
+
Jey Han Lau, David Newman, and Timothy Baldwin. Machine reading tea leaves: Automatically evaluating topic coherence and topic model quality. In EACL, pp. 530-539, 2014.
|
| 251 |
+
David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. RCV1: A new benchmark collection for text categorization research. JMLR, 5(Apr):361-397, 2004.
|
| 252 |
+
Chenliang Li, Haoran Wang, Zhiqian Zhang, Aixin Sun, and Zongyang Ma. Topic modeling for short texts with auxiliary word embeddings. In SIGIR, pp. 165-174, 2016.
|
| 253 |
+
Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. JMLR, 9(Nov): 2579-2605, 2008.
|
| 254 |
+
Christopher D Manning, Prabhakar Raghavan, Hinrich Schütze, et al. Introduction to Information Retrieval. Cambridge University Press, Cambridge, 2008.
|
| 255 |
+
Yishu Miao, Lei Yu, and Phil Blunsom. Neural variational inference for text processing. In ICML, pp. 1727-1736, 2016.
|
| 256 |
+
Yishu Miao, Edward Grefenstette, and Phil Blunsom. Discovering discrete latent topics with neural variational inference. In ICML, pp. 2410-2419, 2017.
|
| 257 |
+
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. ICLR, 2013.
|
| 258 |
+
Feng Nan, Ran Ding, Ramesh Nallapati, and Bing Xiang. Topic modeling with Wasserstein autoencoders. In ACL, pp. 6345-6381, 2019.
|
| 259 |
+
Dat Quoc Nguyen, Richard Billingsley, Lan Du, and Mark Johnson. Improving topic models with latent feature word representations. TACL, 3:299-313, 2015.
|
| 260 |
+
John Paisley, Chong Wang, David M Blei, and Michael I Jordan. Nested hierarchical Dirichlet processes. TPAMI, 37(2):256-270, 2015.
|
| 261 |
+
Giorgio Patrini, Rianne van den Berg, Patrick Forre, Marcello Carioni, Samarth Bhargav, Max Welling, Tim Genewein, and Frank Nielsen. Sinkhorn autoencoders. In UAI, pp. 733-743, 2020.
|
| 262 |
+
|
| 263 |
+
Jeffrey Pennington, Richard Socher, and Christopher Manning. GloVe: Global vectors for word representation. In EMNLP, pp. 1532-1543, 2014.
|
| 264 |
+
James Petterson, Wray Buntine, Shravan M Narayanamurthy, Tiberio S Caetano, and Alex J Smola. Word features for latent Dirichlet allocation. In NIPS, pp. 1921-1929, 2010.
|
| 265 |
+
Gabriel Peyre, Marco Cuturi, et al. Computational optimal transport. Foundations and Trends in Machine Learning, 11(5-6):355-607, 2019.
|
| 266 |
+
Xuan-Hieu Phan, Le-Minh Nguyen, and Susumu Horiguchi. Learning to classify short and sparse text & web with hidden topics from large-scale data collections. In WWW, pp. 91-100, 2008.
|
| 267 |
+
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pp. 1278-1286, 2014.
|
| 268 |
+
Michael Röder, Andreas Both, and Alexander Hinneburg. Exploring the space of topic coherence measures. In WSDM, pp. 399-408, 2015.
|
| 269 |
+
Antoine Rolet, Marco Cuturi, and Gabriel Peyre. Fast dictionary learning with a smoothed Wasserstein loss. In AISTATS, pp. 630-638, 2016.
|
| 270 |
+
Morgan A Schmitz, Matthieu Heitz, Nicolas Bonneel, Fred Ngole, David Coeurjolly, Marco Cuturi, Gabriel Peyre, and Jean-Luc Starck. Wasserstein dictionary learning: Optimal transport-based unsupervised nonlinear dictionary learning. SIAM Journal on Imaging Sciences, 11(1):643-678, 2018.
|
| 271 |
+
Vivien Seguy, Bharath Bhushan Damodaran, Remi Flamary, Nicolas Courty, Antoine Rolet, and Mathieu Blondel. Large scale optimal transport and mapping estimation. ICLR, 2018.
|
| 272 |
+
Akash Srivastava and Charles Sutton. Autoencoding variational inference for topic models. ICLR, 2017.
|
| 273 |
+
Haodong Sun, Haomin Zhou, Hongyuan Zha, and Xiaojing Ye. Learning cost functions for optimal transport. arXiv preprint arXiv:2002.09650, 2020.
|
| 274 |
+
Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. Wasserstein auto-encoders. ICLR, 2018.
|
| 275 |
+
Daniele Vitale, Paolo Ferragina, and Ugo Scaiella. Classification of short texts by deploying topical annotations. In ECIR, pp. 376-387, 2012.
|
| 276 |
+
Hongteng Xu, Wenlin Wang, Wei Liu, and Lawrence Carin. Distilled Wasserstein learning for word embedding and topic modeling. In NeurIPS, pp. 1716-1725, 2018.
|
| 277 |
+
Yi Yang, Doug Downey, and Jordan Boyd-Graber. Efficient methods for incorporating knowledge into topic models. In EMNLP, pp. 308-317, 2015.
|
| 278 |
+
Mikhail Yurochkin, Sebastian Claici, Edward Chien, Farzaneh Mirzazadeh, and Justin M Solomon. Hierarchical optimal transport for document representation. In NeurIPS, pp. 1599-1609, 2019.
|
| 279 |
+
Hao Zhang, Bo Chen, Dandan Guo, and Mingyuan Zhou. WHAI: Weibull hybrid autoencoding inference for deep topic modeling. ICLR, 2018.
|
| 280 |
+
He Zhao, Lan Du, and Wray Buntine. A word embeddings informed focused topic model. In ACML, pp. 423-438, 2017.
|
| 281 |
+
He Zhao, Lan Du, Wray Buntine, and Mingyuan Zhou. Dirichlet belief networks for topic structure learning. In NeurIPS, pp. 7966-7977, 2018.
|
| 282 |
+
He Zhao, Dinh Phung, Viet Huynh, Yuan Jin, Lan Du, and Wray Buntine. Topic modelling meets deep neural networks: A survey. arXiv preprint arXiv:2103.00498, 2021.
|
| 283 |
+
Mingyuan Zhou, Yulai Cong, and Bo Chen. Augmentable gamma belief networks. JMLR, 17(163): 1-44, 2016.
|
| 284 |
+
|
| 285 |
+
# A PROOF OF THEOREM 1
|
| 286 |
+
|
| 287 |
+
Proof. Before showing the proof, we introduce the following notations: We denote $k \in \{1, \dots, K\}$ and $v \in \{1, \dots, V\}$ as the indexes; The $s^{\text{th}}$ ( $s \in \{1, \dots, S\}$ ) token of the document picks a word in the vocabulary, denoted by $w_s \in \{1, \dots, V\}$ ; the normaliser in the softmax function of $\phi(z)$ is denoted as $\hat{\phi}$ so:
|
| 288 |
+
|
| 289 |
+
$$
|
| 290 |
+
\hat{\phi} = \sum_{v=1}^{V} e^{\sum_{k=1}^{K} z_{k}(2 - m_{vk})} = e^{2}\sum_{v=1}^{V} e^{-\sum_{k=1}^{K} z_{k} m_{vk}}.
|
| 291 |
+
$$
|
| 292 |
+
|
| 293 |
+
With these notations, we first have the following equation for the multinomial log-likelihood:
|
| 294 |
+
|
| 295 |
+
$$
|
| 296 |
+
\begin{aligned} \tilde{\boldsymbol{x}}^{T}\log\phi(\boldsymbol{z}) &= \frac{1}{S}\sum_{s=1}^{S}\log\phi(\boldsymbol{z})_{w_{s}} \\ &= \frac{1}{S}\sum_{s=1}^{S}\left(\sum_{k=1}^{K} z_{k}\left(2 - m_{w_{s}k}\right) - \log\hat{\phi}\right) \\ &= 2 - \log\hat{\phi} - \frac{1}{S}\sum_{s=1}^{S}\sum_{k=1}^{K} z_{k} m_{w_{s}k}. \end{aligned} \tag{A.1}
|
| 297 |
+
$$
|
| 298 |
+
|
| 299 |
+
Recall that in Eq. (2) of the main paper, the transport matrix $\mathbf{P}$ is one of the joint distributions of $\tilde{\pmb{x}}$ and $\pmb{z}$ . We introduce the conditional distribution of $\pmb{z}$ given $\tilde{\pmb{x}}$ as $\mathbf{Q}$ , where $q(v,k)$ indicates the probability of assigning a token of word $v$ to topic $k$ .
|
| 300 |
+
|
| 301 |
+
Given that $\mathbf{P}$ satisfies $\mathbf{P} \in U(\tilde{\boldsymbol{x}},\boldsymbol{z})$ and $p_{vk} = \tilde{x}_v q(v,k)$ , $\mathbf{Q}$ must satisfy $\mathbf{Q} \in U^{\prime}(\tilde{\boldsymbol{x}},\boldsymbol{z}) \coloneqq \{\mathbf{Q} \in \mathbb{R}_{>0}^{V\times K} \mid \sum_{v=1}^{V} \tilde{x}_v q(v,k) = z_k, \forall k\}$ . With $\mathbf{Q}$ , we can rewrite the OT distance as:
|
| 302 |
+
|
| 303 |
+
$$
|
| 304 |
+
\begin{aligned} d_{\mathbf{M}}(\tilde{\boldsymbol{x}}, \boldsymbol{z}) &= \min_{\mathbf{Q} \in U^{\prime}(\tilde{\boldsymbol{x}}, \boldsymbol{z})} \sum_{v=1,k=1}^{V,K} \tilde{x}_{v}\, q(v,k)\, m_{vk} \\ &= \frac{1}{S} \min_{\mathbf{Q} \in U^{\prime}(\tilde{\boldsymbol{x}}, \boldsymbol{z})} \sum_{k=1}^{K} \sum_{s=1}^{S} q(w_{s},k)\, m_{w_{s}k}. \end{aligned}
|
| 305 |
+
$$
|
| 306 |
+
|
| 307 |
+
If we let $q(v, k) = z_k$ , meaning that all the tokens of a document are assigned to the topics according to the document's doc-topic distribution, then $\mathbf{Q}$ satisfies $U'(\tilde{x}, z)$ , which leads to:
|
| 308 |
+
|
| 309 |
+
$$
|
| 310 |
+
d_{\mathbf{M}}(\tilde{\boldsymbol{x}}, \boldsymbol{z}) \leq \frac{1}{S} \sum_{k=1}^{K} \sum_{s=1}^{S} z_{k}\, m_{w_{s}k}. \tag{A.2}
|
| 311 |
+
$$
|
| 312 |
+
|
| 313 |
+
Together with Eq. (A.1), the definition of $\hat{\phi}$ , and the fact that $m_{vk} \leq 2$ , we have:

$$
\begin{aligned}
\tilde{\boldsymbol{x}}^{T} \log \phi(\boldsymbol{z}) &= 2 - \log \hat{\phi} - \frac{1}{S} \sum_{s=1}^{S} \sum_{k=1}^{K} z_k m_{w_s k} \\
&\leq -\log \left( \sum_{v=1}^{V} e^{-\sum_{k=1}^{K} z_k m_{vk}} \right) - d_{\mathbf{M}}(\tilde{\boldsymbol{x}}, \boldsymbol{z}) \\
&\leq -(\log V - 2) - d_{\mathbf{M}}(\tilde{\boldsymbol{x}}, \boldsymbol{z}) \\
&\leq -d_{\mathbf{M}}(\tilde{\boldsymbol{x}}, \boldsymbol{z}),
\end{aligned} \tag{A.3}
$$

where the last inequality holds if $\log V > 2$, i.e., $V \geq 8$.
□
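Theorem 1 itself, $\tilde{\boldsymbol{x}}^{T} \log \phi(\boldsymbol{z}) \leq -d_{\mathbf{M}}(\tilde{\boldsymbol{x}}, \boldsymbol{z})$ for $V \geq 8$, can also be probed empirically on random instances; the sketch below is our check, again assuming the POT library rather than the authors' code.

```python
# Empirical probe of Theorem 1: x_tilde^T log phi(z) <= -d_M(x_tilde, z)
# whenever V >= 8. Random synthetic instances only.
import numpy as np
import ot

rng = np.random.default_rng(2)
V, K, S = 16, 5, 100                             # V >= 8 as the theorem requires
for _ in range(200):
    M = rng.uniform(0.0, 2.0, size=(V, K))
    z = rng.dirichlet(np.ones(K))
    w = rng.integers(0, V, size=S)
    x_tilde = np.bincount(w, minlength=V) / S

    logits = (z * (2.0 - M)).sum(axis=1)
    log_phi = logits - np.log(np.exp(logits).sum())
    lhs = x_tilde @ log_phi                      # multinomial log-likelihood
    d_M = ot.emd2(x_tilde, z, M)                 # exact OT distance
    assert lhs <= -d_M + 1e-9
print("Theorem 1 held on all 200 random instances.")
```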
Figure B.1: Parameter sensitivity of NSTM on 20News. The first and second rows show the performance with different values of $\epsilon$ and $\alpha$, respectively: in the first row, we fix $\alpha = 20$ and vary $\epsilon$, while in the second row, we fix $\epsilon = 0.07$ and vary $\alpha$.
Figure C.1: TC and TD on RCV2 with 500 topics.
# B PARAMETER SENSITIVITY
In the previous experiments, we fixed the values of $\epsilon$ and $\alpha$, which control the weight of the entropic regularisation in the Sinkhorn distance and the weight of the multinomial likelihood in Eq. (9), respectively. Here we report the performance of NSTM on 20NG (blue lines) under different settings of these two hyperparameters in Figure B.1. Moreover, we propose two variants of NSTM. The first removes the Sinkhorn distance from the training loss of Eq. (9) (i.e., only the expected log-likelihood term is left); its performance is shown as the red lines. The second removes the expected log-likelihood term from the training loss of Eq. (9) (i.e., only the Sinkhorn distance is left); its performance is shown as the yellow lines.
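To make the two ablations concrete, below is a schematic PyTorch sketch of a two-term loss in the spirit of Eq. (9); the tensor shapes, the helper name `eq9_style_loss`, and the placement of $\alpha$ are our assumptions for illustration, not the authors' implementation.

```python
# Schematic two-term topic-model loss: expected multinomial log-likelihood
# plus a (precomputed) Sinkhorn distance, with flags for the two ablations.
import torch

def eq9_style_loss(x_tilde, z, topic_word_logits, sinkhorn_dist,
                   alpha=20.0, use_likelihood=True, use_sinkhorn=True):
    """x_tilde: (B, V) normalised word counts; z: (B, K) doc-topic weights;
    topic_word_logits: (K, V); sinkhorn_dist: (B,) d_M(x_tilde, z) per doc."""
    total = x_tilde.new_zeros(())
    if use_likelihood:                    # dropped in the yellow-line variant
        log_phi = torch.log_softmax(z @ topic_word_logits, dim=-1)  # (B, V)
        total = total - alpha * (x_tilde * log_phi).sum(-1).mean()
    if use_sinkhorn:                      # dropped in the red-line variant
        total = total + sinkhorn_dist.mean()
    return total

# Toy usage: full loss and the two ablated variants on random tensors.
B, V, K = 8, 2000, 50
x = torch.rand(B, V); x = x / x.sum(-1, keepdim=True)
z = torch.softmax(torch.randn(B, K), dim=-1)
W, d = torch.randn(K, V), torch.rand(B)
full = eq9_style_loss(x, z, W, d)
likelihood_only = eq9_style_loss(x, z, W, d, use_sinkhorn=False)   # red lines
sinkhorn_only = eq9_style_loss(x, z, W, d, use_likelihood=False)   # yellow lines
```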
# C TC AND TD ON RCV2 WITH 500 TOPICS
The results are shown in Figure C.1.
# D AVERAGE SINKHORN DISTANCE WITH VARIED NUMBER OF TOPICS
In Figure D.1, we show the average Sinkhorn distance with a varied number of topics on 20NG, WS, TMN, and Reuters. It can be observed that as $K$ increases, there is a clear trend that $d_{\mathbf{M}}(\tilde{\boldsymbol{x}}, \boldsymbol{z})$ decreases.
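The measurement behind Figure D.1 can be sketched as follows; the synthetic documents and the POT `sinkhorn2` call are our assumptions (the real experiment averages over the training documents with the learned $\boldsymbol{z}$ and $\mathbf{M}$), so only the mechanics, not the trend, are illustrated here.

```python
# Mean Sinkhorn distance over a set of documents for several values of K,
# mirroring how the vertical axis of Figure D.1 is computed. Synthetic data.
import numpy as np
import ot

rng = np.random.default_rng(3)
V, n_docs, eps = 500, 50, 0.07
for K in (25, 50, 100, 200):
    M = rng.uniform(0.0, 2.0, size=(V, K))       # cost matrix for this K
    dists = [
        ot.sinkhorn2(rng.dirichlet(np.ones(V)),  # a document's x_tilde
                     rng.dirichlet(np.ones(K)),  # its doc-topic z
                     M, reg=eps)
        for _ in range(n_docs)
    ]
    print(f"K = {K:4d}   mean Sinkhorn distance = {np.mean(dists):.4f}")
```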

Figure D.1: Sinkhorn distance with varied $K$ on (a) 20NG, (b) WS, (c) TMN, and (d) Reuters. Vertical axis: the average Sinkhorn distance over all the training documents, i.e., mean $d_{\mathbf{M}}(\tilde{\boldsymbol{x}}, \boldsymbol{z})$. Horizontal axis: the number of topics, i.e., $K$.


|
| 357 |
+
Figure E.1: t-SNE visualisation of topic embeddings on 20NG.
# E MORE TOPIC EMBEDDING VISUALISATIONS
In Figures E.1, E.2, E.3, and E.4, we show the visualisations for 20NG, WS, TMN, and Reuters, respectively. We note that the topic embeddings in general present clear clustering structures of the topics in the semantic space. Conventionally, such topic correlations can only be detected by specialised topic models (e.g., those in Lafferty & Blei (2006); Blei et al. (2010); Zhou et al. (2016)); in our model, the correlations between topics are instead captured implicitly by the semantic embeddings.
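For completeness, here is a generic sketch of how such plots can be produced with scikit-learn's t-SNE; the random stand-in embeddings are an assumption, and this is not the authors' plotting pipeline.

```python
# Generic t-SNE visualisation of topic embeddings, in the style of
# Figures E.1-E.4. The embeddings are random stand-ins for illustration.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(4)
K, D = 100, 64
topic_embeddings = rng.normal(size=(K, D))   # stand-in for learned embeddings

coords = TSNE(n_components=2, perplexity=30,
              random_state=0).fit_transform(topic_embeddings)
plt.scatter(coords[:, 0], coords[:, 1], s=12)
plt.title("t-SNE of topic embeddings")
plt.savefig("topic_embeddings_tsne.png", dpi=150)
```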
Figure E.2: t-SNE visualisation of topic embeddings on WS.
Figure E.3: t-SNE visualisation of topic embeddings on TMN.
Figure E.4: t-SNE visualisation of topic embeddings on Reuters.

neuraltopicmodelviaoptimaltransport/images.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d5f117003b053da45846c6141ab250340361f2ee60c4be7e01356e9a969e3254
+size 860461
neuraltopicmodelviaoptimaltransport/layout.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3973af318e1b1e67883229a825d534f77dea697df7b86d806c31ca488ff1898a
+size 559968
noiseagainstnoisestochasticlabelnoisehelpscombatinherentlabelnoise/a943647e-e121-45ad-a41c-0a21533e1dca_content_list.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1c3563f39f0cf6a2b54eb320a9124fd7bbe7a0707b1b4681ce0c49942922ee82
+size 118012
noiseagainstnoisestochasticlabelnoisehelpscombatinherentlabelnoise/a943647e-e121-45ad-a41c-0a21533e1dca_model.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9f7ceacf1f0cda4c1351200eacd56ee63d661938b49d6b6cd8a1897edc2b59ff
+size 149627