Add Batch 9ba4158e-754b-4fe6-8743-bed9cdfe7400
This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.
- afinegrainedanalysisondistributionshift/81f8dcd8-b5da-4331-ba89-5462246cac3f_content_list.json +3 -0
- afinegrainedanalysisondistributionshift/81f8dcd8-b5da-4331-ba89-5462246cac3f_model.json +3 -0
- afinegrainedanalysisondistributionshift/81f8dcd8-b5da-4331-ba89-5462246cac3f_origin.pdf +3 -0
- afinegrainedanalysisondistributionshift/full.md +326 -0
- afinegrainedanalysisondistributionshift/images.zip +3 -0
- afinegrainedanalysisondistributionshift/layout.json +3 -0
- analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/b5478703-78f4-4a48-a2ec-73165fb1c59c_content_list.json +3 -0
- analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/b5478703-78f4-4a48-a2ec-73165fb1c59c_model.json +3 -0
- analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/b5478703-78f4-4a48-a2ec-73165fb1c59c_origin.pdf +3 -0
- analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/full.md +0 -0
- analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/images.zip +3 -0
- analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/layout.json +3 -0
- anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/1897f62e-bcce-409b-a7fe-f30526d9220c_content_list.json +3 -0
- anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/1897f62e-bcce-409b-a7fe-f30526d9220c_model.json +3 -0
- anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/1897f62e-bcce-409b-a7fe-f30526d9220c_origin.pdf +3 -0
- anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/full.md +563 -0
- anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/images.zip +3 -0
- anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/layout.json +3 -0
- asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/78538d90-c4f0-43bb-8803-fc2021fdd25c_content_list.json +3 -0
- asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/78538d90-c4f0-43bb-8803-fc2021fdd25c_model.json +3 -0
- asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/78538d90-c4f0-43bb-8803-fc2021fdd25c_origin.pdf +3 -0
- asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/full.md +447 -0
- asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/images.zip +3 -0
- asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/layout.json +3 -0
- beitbertpretrainingofimagetransformers/ca973394-5fd9-479e-8af4-ce04c9509247_content_list.json +3 -0
- beitbertpretrainingofimagetransformers/ca973394-5fd9-479e-8af4-ce04c9509247_model.json +3 -0
- beitbertpretrainingofimagetransformers/ca973394-5fd9-479e-8af4-ce04c9509247_origin.pdf +3 -0
- beitbertpretrainingofimagetransformers/full.md +338 -0
- beitbertpretrainingofimagetransformers/images.zip +3 -0
- beitbertpretrainingofimagetransformers/layout.json +3 -0
- betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/1625e013-94b7-45bc-b557-732b97354c2e_content_list.json +3 -0
- betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/1625e013-94b7-45bc-b557-732b97354c2e_model.json +3 -0
- betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/1625e013-94b7-45bc-b557-732b97354c2e_origin.pdf +3 -0
- betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/full.md +0 -0
- betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/images.zip +3 -0
- betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/layout.json +3 -0
- bootstrappedmetalearning/de1b972e-09c6-4fcd-8bde-fab396a261dd_content_list.json +3 -0
- bootstrappedmetalearning/de1b972e-09c6-4fcd-8bde-fab396a261dd_model.json +3 -0
- bootstrappedmetalearning/de1b972e-09c6-4fcd-8bde-fab396a261dd_origin.pdf +3 -0
- bootstrappedmetalearning/full.md +0 -0
- bootstrappedmetalearning/images.zip +3 -0
- bootstrappedmetalearning/layout.json +3 -0
- comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/44a9f630-b762-4301-8f83-a41fcf2d0c4b_content_list.json +3 -0
- comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/44a9f630-b762-4301-8f83-a41fcf2d0c4b_model.json +3 -0
- comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/44a9f630-b762-4301-8f83-a41fcf2d0c4b_origin.pdf +3 -0
- comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/full.md +723 -0
- comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/images.zip +3 -0
- comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/layout.json +3 -0
- coordinationamongneuralmodulesthroughasharedglobalworkspace/a937660a-9b49-49b5-9181-ee4c710a6943_content_list.json +3 -0
- coordinationamongneuralmodulesthroughasharedglobalworkspace/a937660a-9b49-49b5-9181-ee4c710a6943_model.json +3 -0
afinegrainedanalysisondistributionshift/81f8dcd8-b5da-4331-ba89-5462246cac3f_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:03efd4d1344c9ef35f6ab16a180265d1e40a368a64500cdf8f572c580625fd39
+size 87474
afinegrainedanalysisondistributionshift/81f8dcd8-b5da-4331-ba89-5462246cac3f_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5641339625da579d6df610ee67821096412efe9e2c27095953ac8dede93f310a
+size 115742
afinegrainedanalysisondistributionshift/81f8dcd8-b5da-4331-ba89-5462246cac3f_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7152206aed08effcf231b2d26c0e6c33a4dde90ef5860c4775209978661e15e7
+size 1477286
afinegrainedanalysisondistributionshift/full.md
ADDED
@@ -0,0 +1,326 @@
# A FINE-GRAINED ANALYSIS ON DISTRIBUTION SHIFT

Olivia Wiles, Sven Gowal, Florian Stimberg, Sylvestre-Alvise Rebuffi, Ira Ktena, Krishnamurthy (Dj) Dvijotham, Taylan Cemgil

DeepMind, London, UK

{oawiles,sgowal,stimberg,sylvestre,iraktena,taylancemgil}@deepmind.com, dvij@google.com

# ABSTRACT

Robustness to distribution shifts is critical for deploying machine learning models in the real world. Despite this necessity, there has been little work in defining the underlying mechanisms that cause these shifts and evaluating the robustness of algorithms across multiple, different distribution shifts. To this end, we introduce a framework that enables fine-grained analysis of various distribution shifts. We provide a holistic analysis of current state-of-the-art methods by evaluating 19 distinct methods grouped into five categories across both synthetic and real-world datasets. Overall, we train more than 85K models. Our experimental framework can be easily extended to include new methods, shifts, and datasets. We find, unlike previous work (Gulrajani & Lopez-Paz, 2021), that progress has been made over a standard ERM baseline; in particular, pretraining and augmentations (learned or heuristic) offer large gains in many cases. However, the best methods are not consistent over different datasets and shifts. Code is available at github.com/deepmind/distribution_shift_framework.
# 1 INTRODUCTION

If machine learning models are to be ubiquitous in critical applications such as driverless cars (Janai et al., 2020), medical imaging (Erickson et al., 2017), and science (Jumper et al., 2021), it is pivotal to build models that are robust to distribution shifts. Otherwise, models may fail surprisingly in ways that derail trust in the system. For example, Koh et al. (2020); Perone et al. (2019); AlBadawy et al. (2018); Heaven (2020); Castro et al. (2020) find that a model trained on one set of hospitals may not generalise to the imaging conditions of another; Alcorn et al. (2019); Dai & Van Gool (2018) find that a model for driverless cars may not generalise to new lighting conditions or object poses; and Buolamwini & Gebru (2018) find that a model may perform worse on subsets of the distribution, such as different ethnicities, if the training set has an imbalanced distribution. Thus, it is important to understand when we expect a model to generalise and when we do not. This would allow a practitioner to have confidence in the system (e.g. if a model is demonstrated to be robust to the imaging conditions of different hospitals, then it can be deployed in new hospitals with confidence).

While domain generalization is a well studied area, Gulrajani & Lopez-Paz (2021); Schott et al. (2021) have cast doubt on the efficacy of existing methods, raising the question: has any progress been made in domain generalization over a standard empirical risk minimization (ERM) algorithm? Despite these discouraging results, there are many examples of machine learning models generalising across datasets with different distributions. For example, CLIP (Radford et al., 2021), with well engineered prompts, generalizes to many standard image datasets. Taori et al. (2020) found that models trained on one image dataset generalise to another, albeit with some drop in performance; in particular, higher performing models generalise better. However, there is little understanding and experimentation on when and why models generalise, especially in realistic settings inspired by real-world applications. This raises the following question:

Can we define the important distribution shifts to be robust to and then systematically evaluate the robustness of different methods?

To answer the above question, we present a grounded understanding of robustness to distribution shifts. We draw inspiration from the disentanglement literature (see section 6), which aims to separate images into an independent set of factors of variation (or attributes). In brief, we assume the data is composed of some (possibly extremely large) set of attributes. We expect models, having seen some distribution of values for an attribute, to be able to learn invariance to that attribute and so to generalise to unseen examples of the attribute and different distributions over that attribute. Using a simple example to clarify the setup, assume our data has two attributes (shape and color) among others. Given data with some distribution over the set of possible colors (e.g. red and blue) and the task of predicting shape (e.g. circle or square), we want our model to generalise to unseen colors (e.g. green) or a different distribution of colors (e.g. there are very few red circles in the training set, but the samples at evaluation are uniformly sampled from the set of possible colors and shapes).

Using this framework, we evaluate models across three distribution shifts: spurious correlation, low-data drift, and unseen data shift (illustrated in figure 1) and two additional conditions (label noise and dataset size). We choose these settings as they arise in the real world and harm generalization performance. Moreover, in our framework, these distribution shifts are the fundamental building blocks of more complex distribution shifts. We additionally evaluate models when there are varying amounts of label noise (as inspired by noise arising from human raters) and when the total size of the train set varies (to understand how models perform as the number of training examples changes). The unique ability of our framework to evaluate fine-grained performance of models across different distribution shifts and under different conditions is of critical importance when analyzing methods under a variety of real-world settings. This work makes the following contributions:

- We propose a framework to define when and why we expect methods to generalise. We use this framework to define three real-world inspired distribution shifts and to create a systematic evaluation setup across real and synthetic datasets. Our evaluation framework is easily extendable to new distribution shifts, datasets, and methods.
- We evaluate and compare 19 different methods (training more than 85K models) in these settings. These methods span five common approaches: architecture choice, data augmentation, domain generalization, adaptive algorithms, and representation learning. This allows for a direct comparison across different areas in machine learning.
- We find that simple techniques, such as data augmentation and pretraining, are often effective and that domain generalization algorithms do work for certain datasets and distribution shifts. However, there is no easy way to select the best approach a priori and results are inconsistent over different datasets and attributes, demonstrating that there is still much work to be done to improve robustness in real-world settings.
# 2 FRAMEWORK TO EVALUATE GENERALIZATION

In this section we introduce our robustness framework for characterizing distribution shifts in a principled manner. We then describe three common, real-world inspired distribution shifts.

# 2.1 LATENT FACTORISATION

We assume a joint distribution $p$ of inputs $x$ and corresponding attributes $y^{1},y^{2},\ldots ,y^{K}$ (denoted as $y^{1:K}$) with $y^{k}\in \mathbb{A}^{k}$, where $\mathbb{A}^k$ is a finite set. One of these $K$ attributes is a label of interest, denoted as $y^{l}$ (in a mammogram, the label could be cancer/benign and a nuisance attribute $y^{i}$ with $i\neq l$ could be the identity of the hospital where the mammogram was taken). Our aim is to build a classifier $f$ that minimizes the risk $R$. However, in real-world applications, we only have access to a finite set of inputs and attributes of size $n$. Hence, we minimize the empirical risk $\hat{R}$ instead:

$$
R(f) = \mathbb{E}_{(\boldsymbol {x},y^{l})\sim p}\left[\mathcal{L}(y^{l},f(\boldsymbol {x}))\right] \qquad \hat{R}(f;p) = \frac{1}{n}\sum_{i = 1}^{n}\mathcal{L}(y_{i}^{l},f(\boldsymbol{x}_{i})), \quad (y_{i}^{l},\boldsymbol{x}_{i})\sim p,
$$

where $\mathcal{L}$ is a suitable loss function. Here, all nuisance attributes $y^{k}$ with $k\neq l$ are ignored and we work with samples obtained from the marginal $p(y^l,\boldsymbol {x})$. In practice, however, due to selection bias or other confounding factors in data collection, we are only able to train and test our models on data collected from two related but distinct distributions: $p_{\mathrm{train}},p_{\mathrm{test}}$. For example, $p_{\mathrm{train}}$ and $p_{\mathrm{test}}$ may be concentrated on different subsets of hospitals and this discrepancy may result in a distribution shift; for example, hospitals may use different equipment, leading to different staining on their cell images. While we train $f$ on data from $p_{\mathrm{train}}$ by minimizing $\hat{R}(f; p_{\mathrm{train}})$, we aim to learn a model that generalises well to data from $p_{\mathrm{test}}$; that is, it should achieve a small $\hat{R}(f; p_{\mathrm{test}})$.

Figure 1: Visualization of the joint distribution for the different shifts we consider on the DSPRITES example. The lighter the color, the more likely the given sample. Figures 1a-1c visualise the different shifts over $p_{\mathrm{train}}(y^l,y^a)$ discussed in section 2.2: (a) spurious correlation (SC), (b) low-data drift (LDD), and (c) unseen data shift (UDS). Figure 1d visualises the test set $p_{\mathrm{test}}$, where the attributes $y^l, y^a$ are IID and uniformly distributed.
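To make the empirical risk $\hat{R}$ above concrete, here is a minimal NumPy sketch (illustrative only; the function name and the cross-entropy choice of $\mathcal{L}$ are our assumptions, not the paper's):

```python
import numpy as np

def empirical_risk(probs, labels):
    """Empirical risk R_hat(f; p): mean loss over n samples drawn from p.

    probs:  (n, C) array of predicted class probabilities f(x_i)
    labels: (n,) array of true labels y_i^l
    """
    n = probs.shape[0]
    # L(y_i^l, f(x_i)) = -log p(y_i^l), averaged over the finite sample
    return -np.mean(np.log(probs[np.arange(n), labels]))

# Toy check: a confident, mostly-correct classifier has low empirical risk
probs = np.array([[0.9, 0.1], [0.2, 0.8]])
labels = np.array([0, 1])
risk = empirical_risk(probs, labels)  # mean of -log 0.9 and -log 0.8
```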
While generalization in the above sense is desirable for machine learning models, it is not clear why a model $f$ trained on data from $p_{\mathrm{train}}$ should generalise to $p_{\mathrm{test}}$. It is worth noting that while $p_{\mathrm{train}}$ and $p_{\mathrm{test}}$ can be different, they are both related to the true distribution $p$. We take inspiration from the disentanglement literature to express this relationship: we view the data as being decomposed into an underlying set of factors of variation. We formalise various distribution shifts using a latent variable model for the true data generation process:

$$
z \sim p(z), \quad y^{i} \sim p\left(y^{i} \mid z\right), \quad i = 1 \dots K, \quad \boldsymbol{x} \sim p(\boldsymbol{x} \mid z), \tag{1}
$$

where $z$ denotes the latent factors. By a simple refactorization, we can write

$$
p(y^{1:K}, \boldsymbol{x}) = p(y^{1:K}) \int p(\boldsymbol{x} \mid z)\, p(z \mid y^{1:K})\, dz = p(y^{1:K})\, p(\boldsymbol{x} \mid y^{1:K}).
$$

Thus, the true distribution can be expressed as the product of the marginal distribution of the attributes with a conditional generative model. We assume that distribution shifts arise when a new marginal distribution for the attributes is chosen, such as $p(y^{1:K}) \neq p_{\mathrm{train}}(y^{1:K}) \neq p_{\mathrm{test}}(y^{1:K})$, but otherwise the conditional generative model is shared across all distributions, i.e., we have $p_{\mathrm{test}}(y^{1:K}, \boldsymbol{x}) = p_{\mathrm{test}}(y^{1:K}) \int p(\boldsymbol{x} | z) p(z | y^{1:K}) dz$, and similarly for $p_{\mathrm{train}}$.

To provide more context, as a running example, we use the color DSPRITES dataset (Matthey et al., 2017), where in our notation $y^{1}$ defines the color with $\mathbb{A}^1 = \{\text{red, green, blue}\}$, and $y^{2}$ defines the shape with $\mathbb{A}^2 = \{\text{ellipse, heart, square}\}$. We can imagine that a data collector (intentionally or implicitly) selects some marginal distribution over attributes $p_{\text{train}}(y^{1:K})$ when training; for example, they select mostly blue ellipses and red hearts. This induces a new joint distribution over latent factors and attributes: $p_{\text{train}}(z, y^{1:K}) = p(z|y^{1:K})p_{\text{train}}(y^{1:K})$. Consequently, during training, we get images with a different joint distribution $p_{\text{train}}(\boldsymbol{x}, y^{1:K}) = \int p(\boldsymbol{x}|z)\,p_{\text{train}}(z, y^{1:K})\,dz$. This similarly applies when collecting data for the test distribution. We focus on common cases of distribution shifts visualized in figure 1; we discuss these in more detail in section 2.2.
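The latent-variable view above can be simulated directly. The sketch below is entirely hypothetical (the generator, attribute sizes, and correlation strength are illustrative assumptions): one conditional generator $p(\boldsymbol{x}|z)$ is shared across distributions while only the attribute marginal $p(y^{1:K})$ changes, mirroring a data collector who over-samples certain color/shape pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
COLORS, SHAPES = 3, 3  # e.g. {red, green, blue} x {ellipse, heart, square}

def sample(attr_marginal, n):
    """Sample (color, shape, x) using a shared conditional generator."""
    flat = rng.choice(COLORS * SHAPES, size=n, p=attr_marginal.ravel())
    color, shape = np.unravel_index(flat, (COLORS, SHAPES))
    # p(z | y^{1:K}): latent factors concentrated around the attributes
    z = np.stack([color, shape], axis=1) + rng.normal(0, 0.1, (n, 2))
    # p(x | z): observations generated from z, shared by train and test
    x = np.concatenate([z, rng.normal(0, 1.0, (n, 2))], axis=1)
    return color, shape, x

# Test marginal: uniform over all attribute combinations
p_test = np.full((COLORS, SHAPES), 1.0 / (COLORS * SHAPES))
# Train marginal: spuriously correlated (color i mostly co-occurs with shape i)
p_train = np.full((COLORS, SHAPES), 0.01)
np.fill_diagonal(p_train, 1.0)
p_train /= p_train.sum()

color, shape, x = sample(p_train, 10_000)
frac_corr = np.mean(color == shape)  # high under this correlated marginal
```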
The goal of enforcing robustness to distribution shifts is to maintain performance when the data generating distribution $p_{\mathrm{train}}$ changes. In other words, we would like to minimize risk on $p$, $p_{\mathrm{test}}$ given access only to $p_{\mathrm{train}}$. We can achieve robustness in the following ways:

- Weighted resampling: We can resample the training set using importance weights $W(y^{1:K}) = p(y^{1:K}) / p_{\mathrm{train}}(y^{1:K})$. This means that, given the attributes, the $i$-th data point $(y_i^{1:K}, \boldsymbol{x}_i)$ in the training set is used with probability $W(y_i^{1:K}) / \sum_{i' = 1}^n W(y_{i'}^{1:K})$ rather than $1/n$. We refer to this empirical distribution as $p_{\mathrm{reweight}}$. This procedure requires access to the true distribution of attributes $p(y^{1:K})$, so to avoid bias and improve fairness, it is often assumed that all combinations of attributes occur uniformly at random.
- Data augmentation: Alternatively, we can learn a generative model $\hat{p}(\boldsymbol{x}|y^{1:K})$ from the training data that aims to approximate $\int p(\boldsymbol{x}|z)p(z|y^{1:K})dz$, as the true conditional generator is, by our assumption, the same over all (e.g. train and test) distributions. If such a conditional generative model can be learned, we can sample new synthetic data at training time (e.g. according to the true distribution $p(y^{1:K})$) to correct for the distribution shift. More precisely, we can generate data from the augmented distribution $p_{\mathrm{aug}} = (1 - \alpha)p_{\mathrm{reweight}} + \alpha \hat{p}(\boldsymbol{x}|y^{1:K})p(y^{1:K})$ and train a supervised classifier on this augmented dataset. Here, $\alpha \in [0,1]$ is the percentage of synthetic data used for training.
- Representation learning: An alternative factorization of a data generating distribution (e.g. train) is $p_{\mathrm{train}}(y^{1:K} \mid \boldsymbol{x}) = \int p(z|\boldsymbol{x})\, p_{\mathrm{train}}(y^{1:K}|z)\, dz$. We can learn an unsupervised representation that approximates $p(z|\boldsymbol{x})$ using the training data only, and attach a classifier to learn a task specific head that approximates $p_{\mathrm{train}}(y^l | z)$. Again, by our assumption, $p(z|\boldsymbol{x}) \propto p(\boldsymbol{x}|z)p(z)$. Given a good guess of the true prior, the learned representation would not be impacted by the specific attribute distribution and so would generalise to $p_{\mathrm{test}}$ and $p$.

Figure 2: Dataset samples from DSPRITES, MPI3D, SHAPES3D, SMALLNORB, CAMELYON17, and IWILDCAM. Each row fixes an attribute (e.g. color for DSPRITES, MPI3D, SHAPES3D; azimuth for SMALLNORB; hospital for CAMELYON17; and location for IWILDCAM).
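The weighted-resampling strategy above can be sketched as follows (a minimal illustration assuming the attribute marginals are known; names are our own, not from the paper's codebase):

```python
import numpy as np

def resampling_probs(attrs, p_true, p_train):
    """Per-example sampling probabilities W(y_i) / sum_j W(y_j).

    attrs:   (n,) integer attribute combination of each training point
    p_true:  target marginal p(y) over attribute combinations
    p_train: empirical training marginal p_train(y)
    """
    w = p_true[attrs] / p_train[attrs]  # importance weights W(y^{1:K})
    return w / w.sum()

# Toy example: attribute value 0 is over-represented at train time
p_true = np.array([0.5, 0.5])
p_train = np.array([0.9, 0.1])
attrs = np.array([0] * 9 + [1] * 1)  # matches p_train
probs = resampling_probs(attrs, p_true, p_train)
resampled = np.random.default_rng(0).choice(len(attrs), size=10_000, p=probs)
frac_minority = np.mean(attrs[resampled] == 1)
# After reweighting, both attribute values are drawn roughly equally often
```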
# 2.2 DISTRIBUTION SHIFTS

While distribution shifts can happen in a continuum, we consider three types of shifts inspired by real-world challenges. We discuss these shifts and two additional, real-world inspired conditions.

Test distribution $p_{\mathrm{test}}$. We assume that the attributes are distributed uniformly: $p_{\mathrm{test}}(y^{1:K}) = 1 / \prod_i |\mathbb{A}^i|$. This is desirable, as all attributes are represented and a-priori independent.

Shift 1: Spurious correlation - Attributes are correlated under $p_{\mathrm{train}}$ but not $p_{\mathrm{test}}$. Spurious correlation arises in the wild for a number of reasons, including capture bias, environmental factors, and geographical bias (Beery et al., 2018; Torralba & Efros, 2011). These spurious correlations lead to surprising results and poor generalization, so it is important to be able to build models that are robust to such challenges. In our framework, spurious correlation arises when two attributes $y^{a}$, $y^{b}$ are correlated at training time, but this is not true of $p_{\mathrm{test}}$, for which attributes are independent: $p_{\mathrm{train}}(y^{a}|y^{1}\dots y^{b}\dots y^{K}) > p_{\mathrm{train}}(y^{a}|y^{1}\dots y^{b - 1},y^{b + 1}\dots y^{K})$. This is especially problematic when one attribute $y^{b}$ is $y^{l}$, the label. Using the running DSPRITES example, shape and color may be correlated and the model may find it easier to predict color. If color is the label, the model will generalise well. However, if the aim is to predict shape, the model's reliance on color will lead to poor generalization.

Shift 2: Low-data drift - Attribute values are unevenly distributed under $p_{\text{train}}$ but not under $p_{\text{test}}$. Low-data drift arises in the wild (e.g. in (Buolamwini & Gebru, 2018) for different ethnicities) when data has not been collected uniformly across different attributes. When deploying models in the wild, it is important to be able to reason about, and have confidence that, the final predictions will be consistent and fair across different attributes. In the framework above, low-data shifts arise when certain values in the set $\mathbb{A}^a$ of an attribute $y^{a}$ are sampled with a much smaller probability than in $p_{\mathrm{test}}$: $p_{\mathrm{train}}(y^a = v)\ll p_{\mathrm{test}}(y^a = v)$. Using the DSPRITES example, only a handful of red shapes may be seen at training time, yet in $p_{\mathrm{test}}$ all colors are sampled with equal probability.

Figure 3: Spurious correlation. We use all correlated samples and vary the number of samples $N$ from the true, uncorrelated distribution. We plot the percentage change over the baseline ResNet, averaged over all seeds and datasets. Blue is better, red worse. CYCLEGAN performs consistently best, while ImageNet augmentation and pretraining on ImageNet also consistently boost performance.

Figure 4: Low-data drift. We use all samples from the high-data regions and vary the number of samples $N$ from the low-data region. We plot the percentage change over the baseline ResNet, averaged over all seeds and datasets. Blue is better, red worse. Pretraining on ImageNet performs consistently best, while CYCLEGAN, most domain generalization methods, and ImageNet augmentation also provide some boost in performance.

Shift 3: Unseen data shift - Some attribute values are unseen under $p_{\text{train}}$ but are seen under $p_{\text{test}}$. This is a special case of shift 2 (low-data drift), which we make explicit due to its important real-world applications. Unseen data shift arises when a model trained in one setting is expected to work in another, disjoint setting. For example, a model trained to classify animals in images captured at certain times of day should generalise to other times of day. In our framework, unseen data shift arises when some values in the set $\mathbb{A}^a$ of an attribute $y^a$ are unseen in $p_{\text{train}}$ but are present in $p_{\text{test}}$:

$$
p_{\text{train}}\left(y^{a} = v\right) = 0, \quad p_{\text{test}}\left(y^{a} = v\right) > 0, \quad \left|\{v' \mid p_{\text{train}}\left(y^{a} = v'\right) > 0\}\right| > 1. \tag{2}
$$

This is a stronger constraint than in standard out-of-distribution generalization (see section 6), as multiple values in $\mathbb{A}^a$ must be seen under $p_{\mathrm{train}}$, which allows the model to learn invariance to $y^{a}$. In the DSPRITES example, the color red may be unseen at train time but all colors are in $p_{\mathrm{test}}$.

Discussion. We choose this set of shifts as they are the building blocks of more complex distribution shifts. Consider the simplest case of two attributes: the label and a nuisance attribute. If we consider the marginal distribution of the label, it decomposes into two terms: the conditional probability and the probability of a given attribute value: $p(y^l) = \sum_{y^a} p(y^l | y^a) p(y^a)$. The three shifts we consider control these terms independently: unseen data shift and low-data drift control $p(y^a)$ whereas spurious correlation controls $p(y^l | y^a)$. The composition of these terms describes any distribution shift for these two variables.
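For two attributes (label $y^l$ and nuisance attribute $y^a$), the three shifts can be illustrated as toy joint training marginals $p_{\mathrm{train}}(y^l, y^a)$ in the spirit of figure 1 (a hypothetical sketch; the specific probabilities are our own choices, not the paper's):

```python
import numpy as np

K = 3  # attribute values per axis, e.g. 3 shapes x 3 colors

def normalize(p):
    return p / p.sum()

# Test distribution: attributes are IID and uniform (figure 1d)
p_test = np.full((K, K), 1.0 / K**2)

# Spurious correlation (SC): label and attribute co-occur on the diagonal
p_sc = normalize(np.eye(K) + 0.02)

# Low-data drift (LDD): one attribute value is heavily under-sampled
p_ldd = np.full((K, K), 1.0)
p_ldd[:, 0] = 0.02          # attribute value 0 is rare at train time
p_ldd = normalize(p_ldd)

# Unseen data shift (UDS): one attribute value never occurs at train time
p_uds = np.full((K, K), 1.0)
p_uds[:, 0] = 0.0           # p_train(y^a = 0) = 0 while p_test(y^a = 0) > 0
p_uds = normalize(p_uds)
```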
# 2.3 CONDITIONS

Label noise. We investigate the change in performance due to noisy information. This can arise when there are disagreements and errors among the labellers (e.g. in medical imaging (Castro et al., 2020)). We model this as an observed attribute (e.g. the label) being corrupted by noise: $\hat{y}^i\sim c(y^i)$, where $y^{i}\in \mathbb{A}^{i}$ is the true label, $\hat{y}^i\in \mathbb{A}^i$ is the corrupted, observed one, and $c$ is the corrupting function.

Dataset size. We investigate how performance changes with the size of the training dataset. This setting arises when it is unrealistic or expensive to collect additional data (e.g. in medical imaging or in camera trap imagery). Therefore, it is important to understand how performance degrades given fewer total samples. We do this by limiting the total number of samples from $p_{\text{train}}$.
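The label-noise condition $\hat{y}^i \sim c(y^i)$ can be modelled, for example, with a symmetric corrupting function $c$ (one simple choice for illustration; the paper does not commit to this particular $c$):

```python
import numpy as np

def corrupt(labels, num_classes, noise_rate, rng):
    """Symmetric label noise: with probability `noise_rate`, replace the
    true label with one of the other classes, uniformly at random."""
    flip = rng.random(labels.shape) < noise_rate
    # offset in 1..num_classes-1 guarantees the flipped label differs
    offset = rng.integers(1, num_classes, size=labels.shape)
    return np.where(flip, (labels + offset) % num_classes, labels)

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=100_000)      # true labels y^i
y_hat = corrupt(y, num_classes=3, noise_rate=0.2, rng=rng)
frac_noisy = np.mean(y != y_hat)          # observed labels disagreeing
```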
| 137 |
+
|
| 138 |
+
# 3 MODELS EVALUATED
|
| 139 |
+
|
| 140 |
+
We evaluate 19 algorithms to cover a broad range of approaches that can be used to improve model robustness to distribution shifts and demonstrate how they relate to the three ways to achieve robustness, outlined in section 2. We believe this is the first paper to comprehensively evaluate a large set of different approaches in a variety of settings. These algorithms cover the following areas: architecture choice, data augmentation, domain adaptation, adaptive approaches and representation learning. Further discussion on how these models relate to our robustness framework is in appendix E.
|
| 141 |
+
|
| 142 |
+
Architecture choice. We evaluate the following standard vision models: ResNet18, ResNet50, ResNet101 (He et al., 2016), ViT (Dosovitskiy et al., 2021), and an MLP (Vapnik, 1992). We use weighted resampling $p_{\mathrm{reweight}}$ to oversample from the parts of the distribution that have a lower probability of being sampled from under $p_{\mathrm{train}}$ . Performance depends on how robust the learned representation is to distribution shift.
|
| 143 |
+
|
| 144 |
+
Heuristic data augmentation. These approaches attempt to approximate the true underlying generative model $p(x|y^{1:K})$ in order to improve robustness. We analyze the following augmentation methods: standard ImageNet augmentation (He et al., 2016), AugMix without JSD (Hendrycks et al., 2020), RandAugment (Cubuk et al., 2020), and AutoAugment (Cubuk et al., 2019). Performance depends on how well the heuristic augmentations approximate the true generative model.
Learned data augmentation. These approaches approximate the true underlying generative model $p(x|y^{1:K})$ by learning augmentations conditioned on the nuisance attribute. The learned augmentations can be used to transform any image $x$ to have a new attribute, while keeping the other attributes fixed. We follow Goel et al. (2020), who use CYCLEGAN (Zhu et al., 2017), but we do not use their SGDRO objective in order to evaluate the performance of learned data augmentation alone. Performance depends on how well the learned augmentations approximate the true generative model.
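
A toy sketch of the interface such learned augmentation provides (the "generator" here is a trivial channel permutation standing in for a trained CycleGAN-style network; all names are illustrative):

```python
def translate_red_to_green(img):
    """Stand-in generator: 'translate' color attribute by swapping channels.
    img is a list of (r, g, b) pixel tuples."""
    return [(g, r, b) for (r, g, b) in img]

def augment_with_generator(dataset, generator, source_attr, target_attr):
    """Create new samples with attribute a' from samples with attribute a,
    keeping all other attributes (including the label y) fixed."""
    new = []
    for ex in dataset:
        if ex["attr"] == source_attr:
            new.append({"x": generator(ex["x"]), "attr": target_attr, "y": ex["y"]})
    return dataset + new
```

The key property is that the label is untouched: only the nuisance attribute changes, so the augmented data fills in missing (label, attribute) combinations.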
Domain generalization. These approaches aim to recover a representation $z$ that is independent of the attribute: $p(y^a, z) = p(y^a)p(z)$ to allow generalization over that attribute. We evaluate IRM (Arjovsky et al., 2019), DeepCORAL (Sun & Saenko, 2016), domain MixUp (Gulrajani & Lopez-Paz, 2021), DANN (Ganin et al., 2016), and SagNet (Nam et al., 2021). Performance depends on the invariance of the learned representation $z$ .
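
As one concrete example, the IRM penalty measures how far each environment's risk is from being stationary in a dummy classifier scale. A minimal sketch, using central finite differences and a squared-error stand-in for the loss (our simplification of the published formulation):

```python
def risk(scale, logits, labels):
    """Squared-error risk on logits multiplied by a dummy scale w."""
    return sum((scale * z - y) ** 2 for z, y in zip(logits, labels)) / len(logits)

def irm_penalty(logits, labels, eps=1e-4):
    """Squared gradient of the risk w.r.t. the dummy scale at w = 1.0."""
    grad = (risk(1.0 + eps, logits, labels) - risk(1.0 - eps, logits, labels)) / (2 * eps)
    return grad ** 2
```

The penalty vanishes when the per-environment risk is already minimized at $w = 1$, which is the invariance condition IRM enforces across environments.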
Adaptive approaches. These works modify $p_{\mathrm{reweight}}$ dynamically. We evaluate JTT (Liu et al., 2021) and BN-Adapt (Schneider et al., 2020). These methods do not give performance guarantees.
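
BN-Adapt can be illustrated by recomputing normalization statistics on an (unlabeled) test batch; a minimal single-feature sketch with illustrative names:

```python
def bn_stats(batch):
    """Mean and (population) variance of a batch of scalar activations."""
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return mean, var

def bn_adapt(train_stats, test_batch, momentum=1.0):
    """Interpolate BatchNorm statistics toward those of the test batch.
    momentum=1.0 fully replaces the training-time statistics."""
    t_mean, t_var = bn_stats(test_batch)
    mean = (1 - momentum) * train_stats[0] + momentum * t_mean
    var = (1 - momentum) * train_stats[1] + momentum * t_var
    return mean, var
```

Intermediate momentum values trade off between the (possibly stale) training statistics and the (possibly noisy, small-batch) test statistics.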
Representation learning. These works aim to learn a robust representation of $z$ that describes the true prior. We evaluate using a $\beta$ -VAE (Higgins et al., 2017a) and pretraining on ImageNet (Deng et al., 2009). Performance depends on the quality of the learned representation for the specific task.
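
The $\beta$-VAE objective augments the reconstruction term with a KL term weighted by $\beta$; a minimal sketch assuming a diagonal-Gaussian posterior and a standard-normal prior (names are ours):

```python
import math

def gaussian_kl(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, I) ), summed over latent dimensions."""
    return sum(0.5 * (math.exp(lv) + m * m - 1.0 - lv) for m, lv in zip(mu, logvar))

def beta_vae_loss(recon_err, mu, logvar, beta=4.0):
    # beta > 1 up-weights the KL term, encouraging disentangled latents
    return recon_err + beta * gaussian_kl(mu, logvar)
```

With $\beta = 1$ this reduces to the standard VAE evidence lower bound (up to sign).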
# 4 EXPERIMENTS
We first introduce the datasets and experimental setup. We evaluate the 19 different methods across these six datasets, three distribution shifts, varying label noise, and dataset size. We plot aggregate results in figures 3-7 and complete results in the appendix in figures 10-12. We discuss the results by distilling them into seven concrete takeaways in section 4.1 and four practical tips in section 4.2.
Datasets. We evaluate these approaches on six vision classification datasets: DSPRITES (Matthey et al., 2017), MPI3D (Gondal et al., 2019), SMALLNORB (LeCun et al., 2004), SHAPES3D (Burgess & Kim, 2018), CAMELYON17 (Koh et al., 2020; Bandi et al., 2018), and IWILDCAM (Koh et al., 2020; Beery et al., 2018). These datasets have multiple (potentially arbitrarily many) attributes. We select two attributes $y^{l}$ , $y^{a}$ for each dataset and use $y^{l}$ as the label. We then use these two attributes to build the three shifts. Visualizations of samples from the datasets are given in figure 2 and further description in appendix D.1. We discuss precisely how we set up the shifts, choose the attributes, and additional conditions for these datasets in appendix D.2.
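
One way such a spurious-correlation split can be constructed from two binary attributes is sketched below (our illustration; the paper's exact construction is in appendix D.2):

```python
import random

def build_spurious_train(examples, p, seed=0):
    """Subsample a training set so that label y_l and nuisance attribute y_a
    are correlated: examples where the two agree are kept with probability p,
    the rest with probability 1 - p."""
    rng = random.Random(seed)
    train = []
    for ex in examples:
        aligned = ex["y_l"] == ex["y_a"]
        keep_prob = p if aligned else 1.0 - p
        if rng.random() < keep_prob:
            train.append(ex)
    return train
```

Sweeping `p` from 0.5 (no correlation) toward 1.0 gives fine-grained control over the strength of the spurious correlation.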
Model selection. When investigating heuristic data augmentation, domain generalization, learned augmentation, adaptive approaches, and representation learning, we use a ResNet18 for the simpler, synthetic datasets (DSPRITES, MPI3D, SHAPES3D, and SMALLNORB) but a ResNet50 for the

Figure 5: Unseen data shift. We rank the methods (where best is 1, worst 19) for each dataset and seed and plot the rankings, with the overall median rank as the black bar. Pretraining on ImageNet and ImageNet augmentation perform consistently best. DANN, CycleGAN and other heuristic augmentations perform consistently well.
more complex, real-world ones (CAMELYON17 and IWILDCAM). To perform model selection, we choose the best model according to a validation set that matches the distribution of the test set. In the unseen data shift setting for CAMELYON17 and IWILDCAM, we use the given out-of-distribution validation set, a distinct set in $\mathcal{D}$ that is independent of $\mathcal{D}_{\mathrm{train}}$ and $\mathcal{D}_{\mathrm{test}}$ . (We consider using the in-distribution validation set in appendix B.4.)
Hyperparameter choices. We perform a sweep over the hyperparameters (the precise sweeps are given in appendix F.8). We run each set of hyperparameters for five seeds for each setting. To choose the best model for each seed, we perform model selection over all hyperparameters using the top-1 accuracy on the validation set. In the low-data and spurious correlation settings, we choose a different set of samples from the low-data region with each seed. We report the mean and standard deviation over the five seeds.
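
The selection-and-reporting procedure can be sketched as follows (the dictionary field names are illustrative):

```python
import statistics

def select_best(runs):
    """Pick, for one seed, the hyperparameter setting with the highest
    validation top-1 accuracy. Each run is a dict of recorded metrics."""
    return max(runs, key=lambda r: r["val_top1"])

def summarize(test_accs):
    """Mean and (sample) standard deviation of the selected models'
    test accuracy over seeds."""
    return statistics.mean(test_accs), statistics.stdev(test_accs)
```

Note that selection uses only validation accuracy: the setting with the best validation score is reported even if another setting happens to do better on the test set.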
# 4.1 TAKEAWAYS
Takeaway 1: While we can improve over ERM, no one method always performs best. The relative performance between methods varies across datasets and shifts. Under spurious correlation (figure 3), CYCLEGAN consistently performs best but in figure 4, under low-data drift, pretraining consistently performs best. Under unseen data shift (figure 5), pretraining is again one of the best performing models. However, if we drill down on the results in figure 10 (appendix B.1), we can see pretraining performs best on the synthetic datasets, but not on CAMELYON17 (where using augmentation or DANN is best) or IWILDCAM (where using ViT or an MLP is best).
Takeaway 2: Pretraining is a powerful tool across different shifts and datasets. While pretraining is not always helpful (e.g. in appendix B.1 on CAMELYON17 in figures 10-11, IWILDCAM in figures 10-11), it often provides a strong boost in performance. This is presumably because the representation $z$ learned during pretraining is helpful for the downstream task. For example, the representation may have been trained to be invariant to certain useful properties (e.g. scale, shift, and color). If these properties are useful on the downstream tasks, then the learned representation should improve generalization.
Takeaway 3: Heuristic augmentation improves generalization if the augmentation describes an attribute. In all settings (figures 3-5), ImageNet augmentation generally improves performance. However, RandAugment, AugMix, and AutoAugment have more variable performance (as further shown in figures 10-12). These methods are compositions of different augmentations. We investigate the impact of each augmentation in RandAugment in appendix B.2 and find variable performance. Augmentations that approximate the true underlying generative model $p(\pmb{x}|\pmb{y}^{1:K})$ lead to the best results; otherwise, the model may waste capacity. For example, on CAMELYON17 (which consists of cell images), color jitter harms performance but on SHAPES3D and MPI3D it is essential.
Takeaway 4: Learned data augmentation is effective across different conditions and distribution shifts. This approach is highly effective in the spurious correlation setting (figure 3). It can also help in the low-data and unseen data shift settings (figures 4-5), though the gains for these two shifts are not as large as for pretraining. This effectiveness can be explained by the fact that, if the augmentations are learned perfectly, the augmented samples are by design drawn from the true underlying generative model and can cover missing parts of the distribution.
Takeaway 5: Domain generalization algorithms offer limited performance improvement. In some cases these methods (in particular DANN) do improve performance, most notably in the low-data drift and unseen data shift settings (figures 4-5). However, this depends on the dataset (see figures 10-12) and performance is rarely much better than using heuristic augmentation.

Figure 6: Condition 1: Noisy labels. We vary the amount of noise $p$ in the labels. We plot the percentage change over the baseline ResNet, averaged over all seeds and datasets.
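
A minimal sketch of injecting uniform label noise at rate $p$ (our illustration of the condition, not the paper's code):

```python
import random

def corrupt_labels(labels, classes, p, seed=0):
    """With probability p, replace each label by a uniformly sampled
    *different* class; otherwise keep it unchanged."""
    rng = random.Random(seed)
    out = []
    for y in labels:
        if rng.random() < p:
            out.append(rng.choice([c for c in classes if c != y]))
        else:
            out.append(y)
    return out
```

Sampling the replacement from the other classes (rather than all classes) makes $p$ the exact corruption rate rather than an upper bound.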

Figure 7: Condition 2: Fixed data. We vary the total size of the dataset $T$ . We plot the percentage change over the baseline ResNet, averaged over all seeds and datasets.
Takeaway 6: The best algorithms can differ under different conditions. When labels have varying noise (figure 6), relative performance is reasonably consistent. When the dataset size decreases (figure 7), heuristic augmentation methods perform poorly. Pretraining and learned augmentation, however, are consistently robust.
Takeaway 7: The precise attributes we consider directly impact the results. For example, on DSPRITES, if we make color $y^{l}$ and shape $y^{a}$ , we find that all methods generalise perfectly in the unseen data shift setting (as demonstrated in appendix B.3), unlike when shape is $y^{l}$ (figure 10).
# 4.2 PRACTICAL TIPS
While there is no free lunch in terms of the method to choose, we recommend the following tips.
Tip 1: If heuristic augmentations approximate part of the true underlying generative model, use them. Under this constraint, heuristic augmentations can significantly improve performance; they should be a first port of call. How to heuristically choose these augmentations without exhaustively trying all possible combinations is an open research question.
Tip 2: If heuristic augmentations do not help, learn the augmentation. If the true underlying generative model cannot be readily approximated with heuristic techniques, but some subset of the generative model can be learned by conditioning on known attributes, this is a promising way to further improve performance. How to learn the underlying generative model directly from data and use this for augmentation is a promising area to explore more thoroughly.
Tip 3: Use pretraining. In general, pretraining was found to be a useful way to learn a robust representation. While this was not true for all datasets (e.g. CAMELYON17, IWILDCAM), performance could be dramatically improved by pretraining (DSPRITES, MPI3D, SMALLNORB, SHAPES3D). An area to be investigated is the utility of self-supervised pre-training.
Tip 4: More complex approaches lead to limited improvements. Domain generalization, adaptive approaches and disentangling lead to limited improvements, if any, across the different datasets and shifts. Of these approaches, DANN performs generally best. How to make these approaches generically useful for robustness is still an open research question.
# 5 DISCUSSION
Our experiments demonstrate that no one method performs best over all shifts and that performance is dependent on the precise attribute being considered. This leads to the following considerations.
There is no way to decide a priori on the best method given only the dataset. It would be helpful for practitioners to be able to select the best approaches without requiring comprehensive evaluations and comparisons. Moreover, it is unclear how to pinpoint the precise distribution shift (and thereby the methods to explore) in a given application. This should be an important future area of investigation.
We should focus on the cases where we have knowledge about the distribution shift. We found that the ability of a given algorithm to generalize depends heavily on the attribute and dataset being
considered. Instead of trying to make one algorithm for any possible shift, it makes sense to have adaptable algorithms which can use auxiliary information if given. Moreover, algorithms should be evaluated in the context for which we will use them.
It is pivotal to evaluate methods in a variety of conditions. Performance varies due to the number of examples, amount of noise, and size of the dataset. Thus it is important to perform comprehensive evaluations when comparing different methods, as in our framework. This gives others a more realistic view of different models' relative performance in practice.
# 6 RELATED WORK
We briefly summarize benchmarks on distribution shift, leaving a complete review to appendix C.
Benchmarking robustness to out-of-distribution (OOD) generalization. While a multitude of methods report improved OOD generalization, Gulrajani & Lopez-Paz (2021) found that in actuality no evaluated method performed significantly better than a strong ERM baseline on a variety of datasets. However, Hendrycks et al. (2021) found that focusing on better augmentation, larger models, and pretraining can give a sizeable boost in performance. This can be seen on the Koh et al. (2020) benchmark, where the largest boosts come from larger models and better augmentation. Our work is complementary to these methods, as we look at a range of approaches (pretraining, heuristic augmentation, learned augmentation, domain generalization, adaptive approaches, disentangled representations) on a range of both synthetic and real-world datasets. Moreover, we allow for a fine-grained analysis of methods over different distribution shifts.
Benchmarking spurious correlation and low-data drift. Studies on fairness and bias (surveyed by Mehrabi et al. (2021)) have demonstrated the pernicious impact of low-data in face recognition (Buolamwini & Gebru, 2018), medical imaging (Castro et al., 2020), and conservation (Beery et al., 2018) and spurious correlation in classification (Geirhos et al., 2019) and conservation (Beery et al., 2020). Arjovsky et al. (2019) hypothesized that spurious correlation may be the underlying reason for poor generalization of models to unseen data. To our knowledge, there has been no large scale work focused on understanding the benefits of different methods across these distribution shifts systematically across multiple datasets and with fine-grained control on the amount of shift. Here we introduce a framework for creating these shifts in a controllable way to allow such challenges to be investigated robustly.
Benchmarking disentangled representations. A related area, disentangled representation learning, aims to learn a representation where the factors of variation in the data are separated. If this could be achieved, then models should be able to generalise effortlessly to unseen data as investigated in multiple settings such as reinforcement learning (Higgins et al., 2017b). Despite many years of work in disentangled representations (Higgins et al., 2017a; Burgess et al., 2017; Kim & Mnih, 2018; Chen et al., 2018), a benchmark study by Locatello et al. (2019) found that, without supervision or implicit model or data assumptions, one cannot reliably perform disentanglement; however, weak supervision appears sufficient to do so (Locatello et al., 2020). Dittadi et al. (2021); Schott et al. (2021); Montero et al. (2020) further investigated whether representations (disentangled or not) can interpolate, extrapolate, or compose properties; they found that when considering complex combinations of properties and multiple datasets, representations do not do so reliably.
# 7 CONCLUSIONS
This work has put forward a general, comprehensive framework to reason about distribution shifts. We analyzed 19 different methods, spanning a range of techniques, over three distribution shifts – spurious correlation, low-data drift, and unseen data shift – and two additional conditions – label noise and dataset size. We found that while results are not consistent across datasets and methods, a number of methods do better than an ERM baseline in some settings. We then put forward a number of practical tips, promising directions, and open research questions. We hope that our framework and comprehensive benchmark spurs research in this area and provides a useful tool for practitioners to evaluate which methods work best under which conditions and shifts.
# ACKNOWLEDGMENTS
The authors thank Irina Higgins and Timothy Mann for feedback and discussions while developing their work. They also thank Irina, Rosemary Ke, and Dilan Gorur for reviewing earlier drafts.
# REFERENCES
Ehab A AlBadawy, Ashirbani Saha, and Maciej A Mazurowski. Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing. Medical physics, 2018.
Michael A Alcorn, Qi Li, Zhitao Gong, Chengfei Wang, Long Mai, Wei-Shinn Ku, and Anh Nguyen. Strike (with) a pose: Neural networks are easily fooled by strange poses of familiar objects. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.
Peter Bandi, Oscar Geessink, Quirine Manson, Marcory Van Dijk, Maschenka Balkenhol, Meyke Hermsen, Babak Ehteshami Bejnordi, Byungjae Lee, Kyunghyun Paeng, Aoxiao Zhong, et al. From detection of individual metastases to classification of lymph node status at the patient level: the CAMELYON17 challenge. IEEE Transactions on Medical Imaging, 2018.
Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In Proceedings of the European Conference on Computer Vision, 2018.
Sara Beery, Yang Liu, Dan Morris, Jim Piavis, Ashish Kapoor, Neel Joshi, Markus Meister, and Pietro Perona. Synthetic examples improve generalization for rare classes. In Proceedings of the IEEE Workshop on Applications of Computer Vision, 2020.
Joy Buolamwini and Timnit Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency, 2018.
Chris Burgess and Hyunjik Kim. 3D shapes dataset. https://github.com/deepmind/3dshapes-dataset/, 2018.
Christopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in $\beta$ -VAE. In Workshop on Learning Disentangled Representations at the 31st Conference on Neural Information Processing Systems, 2017.
Fabio M Carlucci, Paolo Russo, Tatiana Tommasi, and Barbara Caputo. Hallucinating agnostic images to generalize across domains. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019.
Daniel C Castro, Ian Walker, and Ben Glocker. Causality matters in medical imaging. Nature Communications, 2020.
Ricky TQ Chen, Xuechen Li, Roger Grosse, and David Duvenaud. Isolating sources of disentanglement in variational autoencoders. In Advances in Neural Information Processing Systems, 2018.
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, 2016.
Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. AutoAugment: Learning augmentation strategies from data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. RandAugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2020.
Dengxin Dai and Luc Van Gool. Dark model adaptation: Semantic image segmentation from daytime to nighttime. In International Conference on Intelligent Transportation Systems, 2018.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009.
Andrea Dittadi, Frederik Träuble, Francesco Locatello, Manuel Wüthrich, Vaibhav Agrawal, Ole Winther, Stefan Bauer, and Bernhard Schölkopf. On the transfer of disentangled representations in realistic settings. In Proceedings of the International Conference on Learning Representations, 2021.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In Proceedings of the International Conference on Learning Representations, 2021.
Bradley J Erickson, Panagiotis Korfiatis, Zeynettin Akkus, and Timothy L Kline. Machine learning for medical imaging. Radiographics, 2017.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the International Conference on Machine Learning, 2017.
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(1):2096-2030, 2016.
Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In Proceedings of the International Conference on Learning Representations, 2019.
Karan Goel, Albert Gu, Yixuan Li, and Christopher Ré. Model patching: Closing the subgroup performance gap with data augmentation. arXiv preprint arXiv:2008.06775, 2020.
Muhammad Waleed Gondal, Manuel Wüthrich, Dorde Miladinović, Francesco Locatello, Martin Breidt, Valentin Volchkov, Joel Akpo, Olivier Bachem, Bernhard Schölkopf, and Stefan Bauer. On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset. arXiv preprint arXiv:1906.03292, 2019.
Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2014.
Sven Gowal, Chongli Qin, Po-Sen Huang, Taylan Cemgil, Krishnamurthy Dvijotham, Timothy Mann, and Pushmeet Kohli. Achieving robustness in the wild via adversarial mixing with disentangled representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020.
Keren Gu, Xander Masotto, Vandana Bachani, Balaji Lakshminarayanan, Jack Nikodem, and Dong Yin. An instance-dependent simulation framework for learning with label noise. arXiv preprint arXiv:2107.11413, 2021.
Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. In Proceedings of the International Conference on Learning Representations, 2021.
Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In Advances in Neural Information Processing Systems, 2018.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
Will Douglas Heaven. Google's medical AI was super accurate in a lab. Real life was a different story. MIT Technology Review, 2020.
Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In Proceedings of the International Conference on Learning Representations, 2019.
Dan Hendrycks, Mantas Mazeika, Duncan Wilson, and Kevin Gimpel. Using trusted data to train deep networks on labels corrupted by severe noise. In Advances in Neural Information Processing Systems, 2018.
Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. In Proceedings of the International Conference on Machine Learning, 2019.
Dan Hendrycks, Norman Mu, Ekin D Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. AugMix: A simple data processing method to improve robustness and uncertainty. In Advances in Neural Information Processing Systems, 2020.
Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. Proceedings of the International Conference on Computer Vision, 2021.
Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. $\beta$ -VAE: Learning basic visual concepts with a constrained variational framework. In Proceedings of the International Conference on Learning Representations, 2017a.
Irina Higgins, Arka Pal, Andrei Rusu, Loic Matthey, Christopher Burgess, Alexander Pritzel, Matthew Botvinick, Charles Blundell, and Alexander Lerchner. DARLA: Improving zero-shot transfer in reinforcement learning. In Proceedings of the International Conference on Machine Learning, 2017b.
Joel Janai, Fatma Güney, Aseem Behl, Andreas Geiger, et al. Computer vision for autonomous vehicles: Problems, datasets and state of the art. Foundations and Trends® in Computer Graphics and Vision, 2020.
Fredrik D Johansson, David Sontag, and Rajesh Ranganath. Support and invertibility in domain-invariant representations. In The International Conference on Artificial Intelligence and Statistics. PMLR, 2019.
John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A. A. Kohl, Andrew J. Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Ellen Clancy, Michal Zielinski, Martin Steinegger, Michalina Pacholska, Tamas Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W. Senior, Koray Kavukcuoglu, Pushmeet Kohli, and Demis Hassabis. Highly accurate protein structure prediction with AlphaFold. Nature, 2021.
Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
Ashish Khetan, Zachary C Lipton, and Anima Anandkumar. Learning from noisy singly-labeled data. In Proceedings of the International Conference on Learning Representations, 2018.
Hyunjik Kim and Andriy Mnih. Disentangling by factorising. In Proceedings of the International Conference on Machine Learning, 2018.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. WILDS: A benchmark of in-the-wild distribution shifts. arXiv preprint arXiv:2012.07421, 2020.
Yann LeCun, Fu Jie Huang, and Léon Bottou. Learning methods for generic object recognition with invariance to pose and lighting. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2004.
Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Deeper, broader and artier domain generalization. In Proceedings of the International Conference on Computer Vision, 2017.
Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, and Dacheng Tao. Deep domain generalization via conditional invariant adversarial networks. In Proceedings of the European Conference on Computer Vision, pp. 624-639, 2018.
Evan Z Liu, Behzad Haghgoo, Annie S Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, and Chelsea Finn. Just Train Twice: Improving group robustness without training group information. In Proceedings of the International Conference on Machine Learning, 2021.
Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In Proceedings of the International Conference on Machine Learning, 2019.
Francesco Locatello, Ben Poole, Gunnar Ratsch, Bernhard Schölkopf, Olivier Bachem, and Michael Tschannen. Weakly-supervised disentanglement without compromises. In Proceedings of the International Conference on Machine Learning, 2020.
Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In Proceedings of the International Conference on Machine Learning, 2015.
Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. In Proceedings of the International Conference on Machine Learning, 2017.
Loic Matthey, Irina Higgins, Demis Hassabis, and Alexander Lerchner. dSprites: Disentanglement testing sprites dataset. https://github.com/deepmind/dsprites-dataset/, 2017.
Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 2021.
Milton Llera Montero, Casimir JH Ludwig, Rui Ponte Costa, Gaurav Malhotra, and Jeffrey Bowers. The role of disentanglement in generalisation. In Proceedings of the International Conference on Learning Representations, 2020.
Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the International Conference on Machine Learning, 2010.
Hyeonseob Nam, HyunJae Lee, Jongchan Park, Wonjun Yoon, and Donggeun Yoo. Reducing domain gap by reducing style bias. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021.
Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. Moment matching for multi-source domain adaptation. In Proceedings of the International Conference on Computer Vision, 2019.
Christian S Perone, Pedro Ballester, Rodrigo C Barros, and Julien Cohen-Adad. Unsupervised domain adaptation for medical imaging segmentation with self-ensembling. NeuroImage, 2019.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020, 2021.
|
| 304 |
+
Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classifiers generalize toImagenet? In Proceedings of the International Conference on Machine Learning, 2019.
|
| 305 |
+
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the International Conference on Machine Learning, 2014.
|
| 306 |
+
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 2015.
|
| 307 |
+
Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. Proceedings of the International Conference on Learning Representations, 2020.
|
| 308 |
+
Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, and Matthias Bethge. Improving robustness against common corruptions by covariate shift adaptation. In Proceedings of the International Conference on Learning Representations, 2020.
|
| 309 |
+
Lukas Schott, Julius von Kugelgen, Frederik Trauble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, and Wieland Brendel. Visual representation learning does not generalize strongly within the same domain. In Proceedings of the International Conference on Learning Representations, 2021.
|
| 310 |
+
Vaishaal Shankar, Achal Dave, Rebecca Roelofs, Deva Ramanan, Benjamin Recht, and Ludwig Schmidt. Do image classifiers generalize across time? arXiv preprint arXiv:1906.02168, 2019.
|
| 311 |
+
Baochen Sun and Kate Saenko. Deep coral: Correlation alignment for deep domain adaptation. In Proceedings of the European Conference on Computer Vision, 2016.
|
| 312 |
+
Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, and Ludwig Schmidt. Measuring robustness to natural distribution shifts in image classification. arXiv preprint arXiv:2007.00644, 2020.
|
| 313 |
+
Antonio Torralba and Alexei A Efros. Unbiased look at dataset bias. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2011.
|
| 314 |
+
Vladimir Vapnik. Principles of risk minimization for learning theory. In Advances in Neural Information Processing Systems, 1992.
|
| 315 |
+
Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
|
| 316 |
+
Yufei Wang, Haoliang Li, and Alex C Kot. Heterogeneous domain generalization via domain mixup. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2020.
|
| 317 |
+
Kai Xiao, Logan Engstrom, Andrew Ilyas, and Aleksander Madry. Noise or Signal: The role of image backgrounds in object recognition. *ArXiv preprint arXiv*: 2006.09994, 2020.
|
| 318 |
+
Cihang Xie and Alan Yuille. Intriguing properties of adversarial training at scale. In Proceedings of the International Conference on Learning Representations, 2020.
|
| 319 |
+
|
| 320 |
+
Minghao Xu, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, and Wenjun Zhang. Adversarial domain adaptation with domain mixup. In AAAI Conference on Artificial Intelligence, 2020.
|
| 321 |
+
Shen Yan, Huan Song, Nanxiang Li, Lincan Zou, and Liu Ren. Improve unsupervised domain adaptation with mixup training. arXiv preprint arXiv:2001.00677, 2020.
|
| 322 |
+
Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. MixUp: Beyond empirical risk minimization. In Proceedings of the International Conference on Learning Representations, 2018.
|
| 323 |
+
Ling Zhang, Xiaosong Wang, Dong Yang, Thomas Sanford, Stephanie Harmon, Baris Turkbey, Holger Roth, Andriy Myronenko, Daguang Xu, and Ziyue Xu. When unseen domain generalization is unnecessary? rethinking data augmentation. arXiv preprint arXiv:1906.03347, 2019.
|
| 324 |
+
Han Zhao, Remi Tachet Des Combes, Kun Zhang, and Geoffrey Gordon. On learning invariant representations for domain adaptation. In Proceedings of the International Conference on Machine Learning, 2019.
|
| 325 |
+
Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, and Tao Xiang. Deep domain-adversarial image generation for domain generalisation. In AAAI Conference on Artificial Intelligence, 2020.
|
| 326 |
+
Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the International Conference on Computer Vision, 2017.
|
afinegrainedanalysisondistributionshift/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:97dcedcd6d57ec01fed8ad7150329aa6e8c614581cf1fec85be4c1b441a3e4d7
+size 193026

afinegrainedanalysisondistributionshift/layout.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:60ae21a747b81f712eea3493e2a63ea273fcb949a9efad3a66a7fca9bc98a8fe
+size 494799
analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/b5478703-78f4-4a48-a2ec-73165fb1c59c_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e005c5b6343a6dcba666743b5c9a9ad68e4b2cf624fcf491cd980d95816fe8e
+size 264264

analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/b5478703-78f4-4a48-a2ec-73165fb1c59c_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a9529eef91c725af5e277e1edc4afbb6cc4ce7f3cbd08b361c6109cdc2ce53dd
+size 308923

analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/b5478703-78f4-4a48-a2ec-73165fb1c59c_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4f4826084c9cb875f0fa2faaa82c8d73bf14d57fb4ea3b092d8fcf66ad88b5ae
+size 7954291

analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/full.md
ADDED
The diff for this file is too large to render. See raw diff.

analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4fda9c558137b30f8d6c49a8f96d9b07a57628c64c86f030223e358f60e20fe5
+size 3756025

analyticdpmananalyticestimateoftheoptimalreversevarianceindiffusionprobabilisticmodels/layout.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:676a98c5f27874a217c1843ff3d2a4771e340218e19f2552cb863da08ee88c95
+size 1595727
anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/1897f62e-bcce-409b-a7fe-f30526d9220c_content_list.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8db4e96c12a39ee59ecda82c86d61ee6a8cc4fb8990e2488d035bc5d54f6f293
+size 149307

anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/1897f62e-bcce-409b-a7fe-f30526d9220c_model.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:673a66355f58b033f220126493795b1f91a7478ef3abc1179d09e61c903df6bc
+size 178520

anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/1897f62e-bcce-409b-a7fe-f30526d9220c_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9f0ebaca06f596b87772bb128c63f4017194108be0c208bb949d41a0f4ecd6cd
+size 590348
anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/full.md
ADDED
@@ -0,0 +1,563 @@
# A NEW PERSPECTIVE ON "HOW GRAPH NEURAL NETWORKS GO BEYOND WEISFEILER-LEHMAN?"

Asiri Wijesinghe & Qing Wang

School of Computing, Australian National University, Canberra, Australia

{asiri.wijesinghe,qing.wang}@anu.edu.au

# ABSTRACT

We propose a new perspective on designing powerful Graph Neural Networks (GNNs). In a nutshell, it offers a general solution for injecting structural properties of graphs into the message-passing aggregation scheme of GNNs. As a theoretical basis, we develop a new hierarchy of local isomorphism on neighborhood subgraphs. We then theoretically characterize how message-passing GNNs can be designed to be more expressive than the Weisfeiler-Lehman test. To elaborate this characterization, we propose a novel neural model, called GraphSNN, and prove that this model is strictly more expressive than the Weisfeiler-Lehman test in distinguishing graph structures. We empirically verify the strength of our model on different graph learning tasks. Our model consistently improves over state-of-the-art methods on the benchmark tasks without sacrificing computational simplicity and efficiency.
# 1 INTRODUCTION

Many Graph Neural Networks (GNNs) employ a message-passing aggregation scheme to learn low-dimensional vector space representations for nodes in a graph (Kipf & Welling, 2017; Veličković et al., 2017; Hamilton et al., 2017; Gilmer et al., 2017; Sato, 2020; Loukas, 2020; de Haan et al., 2020). Let $G = (V, E)$ be a graph. For each node $v \in V$, a message-passing aggregation scheme recursively aggregates the feature vectors of nodes in the neighborhood of $v$ and combines the aggregated information with the feature vector of $v$ itself to obtain a representation. Since there is no natural ordering on nodes, such message-passing aggregation schemes are usually required to be permutation-invariant (Maron et al., 2018; Keriven & Peyré, 2019; Garg et al., 2020).

Despite the advances of GNNs in various graph learning tasks such as node classification (Kipf & Welling, 2017; Xu et al., 2018), graph classification (Xu et al., 2019; Wu et al., 2019) and link prediction (Zhang & Chen, 2017), there is still a lack of theoretical understanding of how to design powerful and practically useful GNNs that can capture rich structural information of graphs. Recent studies (Xu et al., 2019; Morris et al., 2019) have explored the connections between GNNs and the Weisfeiler-Lehman (WL) test (Weisfeiler & Leman, 1968). By representing a neighborhood as a multiset of feature vectors and treating the neighborhood aggregation as an aggregation function over multisets, Xu et al. (2019) showed that message-passing GNNs are at most as powerful as the WL test in distinguishing graph structures. However, many simple graph structures still cannot be distinguished by the WL test, e.g., $G_{1}$ and $G_{2}$ shown in Figure 1. A question is: how can we design expressive yet simple GNNs that go beyond the WL test with a theoretically provable guarantee?

Recently, there have been three main directions for extending GNNs beyond WL: (1) building GNNs for higher-order WL (i.e., $k$-WL with $k \geq 3$) or its variants (Maron et al., 2019; Morris et al., 2020; 2019); (2) counting pre-defined substructures as additional features (Bouritsas et al., 2020); (3) augmenting node identifiers or random features in GNNs (You et al., 2021; Vignac et al., 2020; Sato et al., 2021). Unlike these works, we aim to introduce a general solution upon which GNNs can be enhanced to capture structural properties of graphs. This solution enables GNNs to be provably more expressive than the Weisfeiler-Lehman test while remaining computationally efficient. It overcomes the following limitations of existing works. Compared with the higher-order WL methods in (1), which incur high computational overhead and are impractical, our method goes beyond the WL test but is still computationally efficient. Compared with the substructure-counting methods in (2), our method does not require handcrafting substructures. Compared with the methods of augmenting node identifiers or random features in (3), our method can flexibly quantify local structures (see examples in Figure 3) and also capture different classes of local structures w.r.t. different graph learning tasks.



Figure 1: An overview of our proposed framework for GNNs that can go beyond the WL test in distinguishing non-isomorphic graphs $G_{1}$ and $G_{2}$. The overlap subgraphs of $G_{1}$ and $G_{2}$ are structurally different, which is captured by the structural coefficients defined in Eq. 4.

Our work is grounded in three observations: (i) Treating a neighborhood as a multiset of feature vectors ignores the rich structural information among vertices in the neighborhood, thereby limiting the representational capacity of the model. Thus, we represent a neighborhood as a neighborhood subgraph in which vertices are structurally related, and show that the WL test is only as powerful as distinguishing neighborhood subgraphs in terms of their subtree structures in the neighborhood. (ii) There exists a natural class of isomorphic graphs which strictly lies in between neighborhood subgraph isomorphism and neighborhood subtree isomorphism. We call it overlap (subgraph) isomorphism. The notion of overlap subgraph enables us to characterize structural interactions of vertices and inject them into a message-passing aggregation scheme for GNNs. (iii) By designing a proper function for quantifying structural interactions of vertices and preserving the injectiveness of a message-passing aggregation scheme, more expressive GNNs can be developed. We propose a new GNN model that is strictly more expressive than the WL test to demonstrate an instance of this kind.

Contributions. In summary, the main contributions of this work are as follows:

- We introduce a new hierarchy of local isomorphism to characterise different classes of local structures in neighborhood subgraphs, and discuss its connections with the WL test and GNNs (Section 2 and Theorems 1-2).
- We develop a simple yet powerful framework to inject structural properties into a message-passing aggregation scheme, and theoretically characterize how GNNs can be designed to be more expressive than the WL test (Section 3 and Theorem 3).
- We propose a novel neural model for graph learning, called GraphSNN, and prove that GraphSNN is strictly more expressive than the WL test in distinguishing graph structures (Section 4 and Theorem 4).
- We show that, owing to the way structural properties are injected into its structured message-passing aggregation scheme, GraphSNN can overcome the oversmoothing issue (Chen et al., 2020a; Zhao & Akoglu, 2019; Li et al., 2018) (Section 5.4).

We have conducted experiments on benchmark tasks (Hu et al., 2020). The experimental results show that our model is highly efficient and can significantly improve the state-of-the-art methods without sacrificing computational simplicity.
Related work. The Weisfeiler-Lehman (WL) hierarchy is a well-established framework for graph isomorphism tests (Grohe, 2017). Introduced by Weisfeiler and Lehman (Weisfeiler & Leman, 1968), the Weisfeiler-Lehman algorithm (also called 1-WL or color refinement) is a computationally efficient heuristic for testing graph isomorphism (Babai & Kucera, 1979). It is known that $k$-WL is strictly more powerful than $(k-1)$-WL when $k \geq 3$ (Cai et al., 1992; Grohe, 2017).

Message-passing GNNs are typically considered a differentiable neural generalization of the Weisfeiler-Lehman algorithms on graphs. It has been reported (Xu et al., 2019) that some popular GNNs such as GCN (Kipf & Welling, 2017) and GraphSAGE (Hamilton et al., 2017) are at most as powerful as 1-WL in distinguishing graph structures. Xu et al. (2019) showed that the Graph Isomorphism Network (GIN) can be as powerful as 1-WL. At its core, GIN provides an injective aggregation scheme defined as a function over multisets of feature vectors; thus GIN has the representational power to map any two different multisets of feature vectors to different representations in an embedding space.

A considerable amount of effort has been devoted to improving the expressive power of GNNs beyond 1-WL. Generally, there are three directions: (1) Several works proposed higher-order variants of GNNs that are as powerful as $k$-WL with $k \geq 3$ (Azizian & Lelarge, 2020). For example, Morris et al. (2019) introduced $k$-order graph networks that are as expressive as a set-based variant of $k$-WL, Maron et al. (2019) proposed a reduced 2-order graph network that is as expressive as 3-WL, and Morris et al. (2020) proposed a local version of $k$-WL which considers only a subset of vertices in a neighborhood. However, these more expressive GNNs are impractical to use due to their inherent high computational costs and sophisticated design. (2) Some works attempted to incorporate inductive biases based on isomorphism counting of pre-defined topological features such as triangles, cliques, and rings (Bouritsas et al., 2020; Liu et al., 2020; Monti et al., 2018), similar to the traditional ideas of graph kernels (Yanardag & Vishwanathan, 2015). However, pre-defining topological features requires domain-specific expertise, which is often not readily available. (3) Most recently, several works explored the idea of augmenting GNNs with node identifiers or random features. For example, Vignac et al. (2020) proposed a method that maintains a "local context" for each node by manipulating node identifiers in a permutation-equivariant way. You et al. (2021) developed ID-GNNs by taking into account the identity information of vertices. Chen et al. (2020b) and Murphy et al. (2019) assigned one-hot IDs to vertices based on the ideas of relational pooling. Sato et al. (2021) added a random feature to each node to improve the representational capability of GNNs.

Our work is fundamentally different from existing models: it injects properties of structural interactions among vertices, based on a natural class of isomorphic graphs in the local neighborhood (i.e., overlap subgraph isomorphism), into the message-passing aggregation scheme of GNNs.
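To make the repeated references to 1-WL concrete, the colour refinement heuristic can be sketched in a few lines of Python. This is an illustrative sketch (not from the paper); graphs are represented as dicts of neighbour sets, and the function name `wl_colors` is ours.

```python
def wl_colors(adj, rounds=3):
    """1-WL colour refinement: repeatedly relabel each vertex by the pair
    (own colour, sorted multiset of neighbour colours), then compress the
    pairs back into small integer colours."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colors = {v: relabel[sig[v]] for v in adj}
    return colors

# Classic failure case: a 6-cycle and two disjoint triangles are both
# 2-regular, so 1-WL assigns identical colour histograms to both graphs
# even though they are not isomorphic.
hexagon = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
triangles = {0: {1, 2}, 1: {0, 2}, 2: {0, 1},
             3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
hist = lambda c: sorted(c.values())
print(hist(wl_colors(hexagon)) == hist(wl_colors(triangles)))  # True
```

The final comparison illustrates exactly the kind of pair of graphs that motivates going beyond 1-WL.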
# 2 A NEW HIERARCHY OF LOCAL ISOMORPHISM

In this section, we characterize a hierarchy of graph isomorphism based on local neighborhood subgraphs and explore its connections to 1-WL.

Let $G = (V, E)$ be a simple, undirected graph with a set $V$ of vertices and a set $E$ of edges. The set of neighbors of a vertex $v$ is denoted by $\mathcal{N}(v) = \{u \in V \mid (v, u) \in E\}$. The neighborhood subgraph of a vertex $v$, denoted by $S_v$, is the subgraph induced in $G$ by $\tilde{\mathcal{N}}(v) = \mathcal{N}(v) \cup \{v\}$, which contains all edges in $E$ that have both endpoints in $\tilde{\mathcal{N}}(v)$. For two adjacent vertices $v$ and $u$, i.e., $(v, u) \in E$, the overlap subgraph $S_{vu}$ between $v$ and $u$ is defined as $S_{vu} = S_v \cap S_u$.

Let $S_{i}$ and $S_{j}$ be the neighborhood subgraphs of two vertices $i$ and $j$ that are not necessarily adjacent, and let $h_v$ be the feature vector of a vertex $v \in V$. In the following, we define three notions of isomorphism, which correspond to different classes of local structures in neighborhood subgraphs.

Definition 1. $S_{i}$ and $S_{j}$ are subgraph-isomorphic, denoted as $S_{i} \simeq_{\text{subgraph}} S_{j}$, if there exists a bijective mapping $g: \tilde{\mathcal{N}}(i) \to \tilde{\mathcal{N}}(j)$ such that $g(i) = j$ and, for any two vertices $v_{1}, v_{2} \in \tilde{\mathcal{N}}(i)$, $v_{1}$ and $v_{2}$ are adjacent in $S_{i}$ iff $g(v_{1})$ and $g(v_{2})$ are adjacent in $S_{j}$, with $h_{v_{1}} = h_{g(v_{1})}$ and $h_{v_{2}} = h_{g(v_{2})}$.

Definition 2. $S_{i}$ and $S_{j}$ are overlap-isomorphic, denoted as $S_{i} \simeq_{\text{overlap}} S_{j}$, if there exists a bijective mapping $g: \tilde{\mathcal{N}}(i) \to \tilde{\mathcal{N}}(j)$ such that $g(i) = j$ and, for any $v' \in \mathcal{N}(i)$ with $g(v') = u'$, $S_{iv'}$ and $S_{ju'}$ are subgraph-isomorphic.

Definition 3. $S_{i}$ and $S_{j}$ are subtree-isomorphic, denoted as $S_{i} \simeq_{\text{subtree}} S_{j}$, if there exists a bijective mapping $g: \tilde{\mathcal{N}}(i) \to \tilde{\mathcal{N}}(j)$ such that $g(i) = j$ and, for any $v' \in \tilde{\mathcal{N}}(i)$ with $g(v') = u'$, $h_{v'} = h_{u'}$.

Theorem 1 states that there is a hierarchy among these notions of local isomorphism on neighborhood subgraphs, where subgraph-isomorphism is the strongest one, subtree-isomorphism is the weakest, and overlap-isomorphism lies in between. Figure 2 shows two groups of graphs: one is distinguishable w.r.t. subgraph-isomorphism but not overlap-isomorphism, while the other is distinguishable by overlap-isomorphism but not subtree-isomorphism.



Figure 2: (a) $S_{i}$ and $S_{j}$ are overlap-isomorphic (i.e., having the same overlap subgraph) but not subgraph-isomorphic; (b) four neighborhood subgraphs $\{S_{v_i} \mid i = 1,2,3,4\}$ are subtree-isomorphic (i.e., having the same subtree) but not overlap-isomorphic.

Theorem 1. The following statements are true: (a) if $S_{i} \simeq_{\text{subgraph}} S_{j}$, then $S_{i} \simeq_{\text{overlap}} S_{j}$, but not vice versa; (b) if $S_{i} \simeq_{\text{overlap}} S_{j}$, then $S_{i} \simeq_{\text{subtree}} S_{j}$, but not vice versa.

Let $\mathcal{S} = \{S_v \mid v \in V\}$ and let $\zeta : \mathcal{S} \to \mathbb{R}^d$ be a function mapping each neighborhood subgraph in $\mathcal{S}$ to a node embedding in $\mathbb{R}^d$. The following theorem states that GNNs that are as powerful as 1-WL can distinguish two neighborhood subgraphs only w.r.t. subtree-isomorphism at each layer.

Theorem 2. Let $M$ be a GNN. $M$ is as powerful as 1-WL in distinguishing non-isomorphic graphs if $M$ has a sufficient number of layers and each layer can map any $S_{i}$ and $S_{j}$ in $\mathcal{S}$ into two different embeddings (i.e., $\zeta(S_{i}) \neq \zeta(S_{j})$) if and only if $S_{i} \not\simeq_{\text{subtree}} S_{j}$.

The complete proofs of these theorems are provided in Appendix C.
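The overlap subgraph $S_{vu} = S_v \cap S_u$ defined above depends only on the two closed neighborhoods, so it can be computed directly from adjacency sets. The following is a small sketch with hypothetical helper names (`closed_nbhd`, `overlap_subgraph`), assuming an unlabelled simple graph given as a dict of neighbour sets:

```python
def closed_nbhd(adj, v):
    """Closed neighborhood N~(v) = N(v) ∪ {v}."""
    return adj[v] | {v}

def overlap_subgraph(adj, v, u):
    """S_vu = S_v ∩ S_u for adjacent v, u: its vertices are those in both
    closed neighborhoods, and its edges are all graph edges with both
    endpoints in that common vertex set."""
    nodes = closed_nbhd(adj, v) & closed_nbhd(adj, u)
    edges = {frozenset((a, b)) for a in nodes for b in adj[a] & nodes}
    return nodes, edges

# Edge 0--1 inside a triangle {0, 1, 2}, with a pendant vertex 3 attached
# to node 0. The pendant vertex is excluded from the overlap subgraph.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
nodes, edges = overlap_subgraph(adj, 0, 1)
print(sorted(nodes), len(edges))  # [0, 1, 2] 3
```

Here the overlap subgraph of the triangle edge is the whole triangle $K_3$, while vertex 3 (seen only by node 0) drops out, matching the set-intersection definition.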
# 3 A GENERALISED MESSAGE-PASSING FRAMEWORK

In this section, we present a generalised message-passing framework (GMP) which enables local structure to be injected into an aggregation scheme, in light of overlap subgraphs. We theoretically characterize how GNNs can be designed within this framework to be more expressive than 1-WL.

Let $\mathcal{S}^* = \{S_{vu} \mid (v,u) \in E\}$ be the set of overlap subgraphs in $G$. We define structural coefficients for each vertex $v$ and its neighbors via a function $\omega : \mathcal{S} \times \mathcal{S}^{*} \to \mathbb{R}$ such that $A_{vu} = \omega(S_v, S_{vu})$. A question arises: what are the desirable properties of such a function $\omega$? Ideally, it should quantify how a vertex $v$ structurally interacts with its neighbor $u$ in the local neighborhood. Thus, given $S_{vu} = (V_{vu}, E_{vu})$ and $S_{vu'} = (V_{vu'}, E_{vu'})$, a carefully designed $\omega$ should exhibit the following properties:

(1) Local closeness: $\omega(S_v, S_{vu}) > \omega(S_v, S_{vu'})$ if $S_{vu}$ and $S_{vu'}$ are complete graphs with $S_{vu} = K_i$, $S_{vu'} = K_j$, and $i > j$, where $K_i$ refers to a complete graph on $i$ vertices.
(2) Local denseness: $\omega(S_v, S_{vu}) > \omega(S_v, S_{vu'})$ if $S_{vu}$ and $S_{vu'}$ have the same number of vertices but differ in the number of edges, i.e., $|V_{vu}| = |V_{vu'}|$ and $|E_{vu}| > |E_{vu'}|$.
(3) Isomorphic invariance: $\omega(S_v, S_{vu}) = \omega(S_v, S_{vu'})$ if $S_{vu}$ and $S_{vu'}$ are isomorphic.

Figure 3 illustrates the first two properties. Let $\{\{\cdot\}\}$ denote a multiset, $\tilde{A} = (\tilde{A}_{vu})_{v,u \in V}$ where $\tilde{A}_{vu}$ is a normalised value of $A_{vu}$, and let $X \in \mathbb{R}^{|V| \times f}$ be a matrix of input feature vectors, where $x_{v} \in \mathbb{R}^{f}$ is associated with each $v \in V$. We denote the feature vector of $v$ at the $t$-th layer by $h_v^{(t)}$, with $h_v^{(0)} = x_v$. Then, the $(t + 1)$-th layer of an aggregation scheme can be defined as:
$$
m_{a}^{(t)} = \mathrm{AGGREGATE}^{N}\left(\left\{\left\{\left(\tilde{A}_{vu}, h_{u}^{(t)}\right) \mid u \in \mathcal{N}(v)\right\}\right\}\right), \tag{1}
$$

$$
m_{v}^{(t)} = \mathrm{AGGREGATE}^{I}\left(\left\{\left\{\tilde{A}_{vu} \mid u \in \mathcal{N}(v)\right\}\right\}\right) h_{v}^{(t)}, \tag{2}
$$

$$
h_{v}^{(t+1)} = \mathrm{COMBINE}\left(m_{v}^{(t)}, m_{a}^{(t)}\right). \tag{3}
$$
$\mathrm{AGGREGATE}^N(\cdot)$ and $\mathrm{AGGREGATE}^I(\cdot)$ are two possibly different parameterized functions. Here, $m_{a}^{(t)}$ is a message aggregated from the neighbors of $v$ and their structural coefficients, and $m_v^{(t)}$ is an "adjusted" message from $v$, obtained by an element-wise multiplication between $\mathrm{AGGREGATE}^{I}(\cdot)$ and $h_{v}^{(t)}$ to account for structural effects from its neighbors. Then, $m_{v}^{(t)}$ and $m_{a}^{(t)}$ are combined by $\mathrm{COMBINE}(\cdot)$ to obtain the feature vector $h_{v}^{(t + 1)}$.



Figure 3: (a) Local closeness: for overlap subgraphs that are complete graphs, their structural coefficients increase with the number of vertices; (b) local denseness: for overlap subgraphs that have the same number of vertices, their structural coefficients increase with the number of edges.



The following theorem states that a GNN can be more expressive than 1-WL if $\omega$ is powerful enough to distinguish structures beyond neighborhood subtrees and the neighborhood aggregation function $\Phi$ is injective, given a sufficient number of layers. The proof is provided in Appendix C.

Theorem 3. Let $M$ be a GNN whose aggregation scheme $\Phi$ is defined by Eq. 1-Eq. 3. $M$ is strictly more expressive than 1-WL in distinguishing non-isomorphic graphs if $M$ has a sufficient number of layers and also satisfies the following conditions:

(1) $M$ can distinguish at least two neighborhood subgraphs $S_{i}$ and $S_{j}$ with $S_{i} \simeq_{\text{subtree}} S_{j}$, $S_{i} \not\simeq_{\text{subgraph}} S_{j}$ and $\{\{\tilde{A}_{iv'} \mid v' \in \mathcal{N}(i)\}\} \neq \{\{\tilde{A}_{ju'} \mid u' \in \mathcal{N}(j)\}\}$;

(2) $\Phi\left(h_v^{(t)}, \{\{h_u^{(t)} \mid u \in \mathcal{N}(v)\}\}, \{\{(\tilde{A}_{vu}, h_u^{(t)}) \mid u \in \mathcal{N}(v)\}\}\right)$ is injective.
# 4 GRAPHSNN
|
| 115 |
+
|
| 116 |
+
Generally, there are many different ways of designing $\omega$ and $\Phi$ functions, leading to GNNs with different expressive powers. To elaborate this, we propose a novel GNN model, named GraphSNN, whose aggregation scheme is an instantiation of our generalised message-passing framework. We prove that the expressive power of GraphSNN goes beyond 1-WL.
|
| 117 |
+
|
| 118 |
+
Model design. In the following, we provide a definition of $\omega$ that satisfies the properties of local closeness, local denseness, and isomorphic invariant. One key idea behind this definition is to make it capable of being generalized to support different graph learning tasks, controlled by $\lambda > 0$ (will be further discussed in Section 5.3):
$$
\omega\left(S_{v}, S_{vu}\right) = \frac{\left|E_{vu}\right|}{\left|V_{vu}\right| \cdot \left(\left|V_{vu}\right| - 1\right)} \left|V_{vu}\right|^{\lambda}. \tag{4}
$$
This definition allows us to formulate a weighted adjacency matrix $A = (A_{vu})_{v,u\in V}$ for GraphSNN. To compare structural coefficients across different nodes, we normalize $A$ to $\tilde{A}$ by $\tilde{A}_{vu} = \frac{A_{vu}}{\sum_{u\in\mathcal{N}(v)}A_{vu}}$ . Alternatively, $A$ can be normalized using Softmax or other normalization techniques. For each vertex $v\in V$ , the feature vector at the $(t + 1)$ -th layer is generated by
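To make Eq. 4 and the normalization concrete, here is a minimal Python sketch; the helper names `structural_coeff` and `normalize_row` are ours, not from the paper, and the overlap-subgraph counts $|E_{vu}|$ and $|V_{vu}|$ are taken as given:

```python
def structural_coeff(n_edges_vu, n_vertices_vu, lam=1.0):
    """Structural coefficient omega(S_v, S_vu) of Eq. 4.

    n_edges_vu = |E_vu| and n_vertices_vu = |V_vu| of the overlap subgraph;
    lam is the task-dependent exponent lambda > 0. An overlap with fewer
    than two vertices carries no internal edges, so we return 0.
    """
    if n_vertices_vu < 2:
        return 0.0
    return n_edges_vu / (n_vertices_vu * (n_vertices_vu - 1)) * n_vertices_vu ** lam


def normalize_row(coeffs):
    """A_tilde_vu = A_vu / sum_{u in N(v)} A_vu for one vertex v.

    coeffs maps each neighbor u of v to A_vu; a softmax over the row
    could be substituted here, as noted in the text.
    """
    total = sum(coeffs.values())
    if total == 0:
        return dict(coeffs)
    return {u: a / total for u, a in coeffs.items()}
```

For example, an overlap subgraph with 3 vertices and 2 edges gives `structural_coeff(2, 3, lam=1.0)` = 2/(3·2)·3 = 1.0.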
$$
h_v^{(t+1)} = \mathrm{MLP}_{\theta}\left(\gamma^{(t)}\left(\sum_{u \in \mathcal{N}(v)} \tilde{A}_{vu} + 1\right) h_v^{(t)} + \sum_{u \in \mathcal{N}(v)} \left(\tilde{A}_{vu} + 1\right) h_u^{(t)}\right), \tag{5}
$$
where $\gamma^{(t)}$ is a learnable scalar parameter. Since $\mathcal{N}(v)$ refers to the one-hop neighbors of $v$, one can stack multiple layers to aggregate over neighborhoods beyond one hop. Note that, to ensure injectivity of the feature aggregation in the presence of structural coefficients, we add 1 to the first and second terms in Eq. 5. This design is critical for guaranteeing the expressiveness of GraphSNN beyond 1-WL, as will be discussed in the proofs of the lemmas and Theorem 4 later.
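As a sketch, one aggregation step of Eq. 5 can be written in dense NumPy form (the function name `graphsnn_layer` is ours; `mlp` is any callable standing in for $\mathrm{MLP}_{\theta}$, and `A_tilde` is assumed to be zero outside each vertex's neighborhood):

```python
import numpy as np

def graphsnn_layer(H, A_tilde, gamma, mlp):
    """One GraphSNN aggregation step (Eq. 5) for all vertices at once.

    H:       (n, d) feature matrix; row v is h_v^{(t)}.
    A_tilde: (n, n) normalized structural coefficients; entry (v, u) is
             A_tilde_vu for u in N(v) and 0 otherwise.
    gamma:   the learnable scalar gamma^{(t)}.
    mlp:     callable standing in for MLP_theta.
    """
    mask = (A_tilde > 0).astype(H.dtype)       # adjacency pattern of the graph
    deg = A_tilde.sum(axis=1, keepdims=True)   # sum_u A_tilde_vu per vertex
    self_term = gamma * (deg + 1.0) * H        # gamma (sum_u A~_vu + 1) h_v
    nbr_term = (A_tilde + mask) @ H            # sum_u (A~_vu + 1) h_u over N(v)
    return mlp(self_term + nbr_term)
```

Because `A_tilde` vanishes outside $\mathcal{N}(v)$, adding the 0/1 `mask` realizes the "+1" applied only to actual neighbors.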
Expressiveness analysis. We first generalise the result on universal functions over multisets (Xu et al., 2019) to universal functions over pairs of multisets, since Eq. 5 involves not only node features but
<table><tr><td>Method</td><td>Cora</td><td>CiteSeer</td><td>Pubmed</td><td>NELL</td><td>ogbn-arxiv</td></tr><tr><td>GCN</td><td>81.5 ± 0.4</td><td>70.3 ± 0.5</td><td>79.0 ± 0.5</td><td>66.0 ± 1.7</td><td>71.74 ± 0.29</td></tr><tr><td>GraphSNNGCN</td><td>83.1 ± 1.8</td><td>72.3 ± 1.5</td><td>79.8 ± 1.2</td><td>68.3 ± 1.6</td><td>72.20 ± 0.90</td></tr><tr><td>GAT</td><td>83.0 ± 0.6</td><td>72.6 ± 0.6</td><td>78.5 ± 0.3</td><td>-</td><td>-</td></tr><tr><td>GraphSNNGAT</td><td>83.8 ± 1.2</td><td>73.5 ± 1.6</td><td>79.6 ± 1.4</td><td>-</td><td>-</td></tr><tr><td>GIN</td><td>77.6 ± 1.1</td><td>66.1 ± 1.5</td><td>77.0 ± 1.2</td><td>61.5 ± 2.3</td><td>-</td></tr><tr><td>GraphSNNGIN</td><td>79.2 ± 1.7</td><td>68.3 ± 1.5</td><td>78.8 ± 1.3</td><td>63.8 ± 2.7</td><td>-</td></tr><tr><td>GraphSAGE</td><td>79.2 ± 3.7</td><td>71.6 ± 1.9</td><td>77.4 ± 2.2</td><td>63.7 ± 5.2</td><td>71.49 ± 0.27</td></tr><tr><td>GraphSNNGraphSAGE</td><td>80.5 ± 2.5</td><td>72.7 ± 3.2</td><td>79.0 ± 3.5</td><td>66.3 ± 5.6</td><td>71.80 ± 0.70</td></tr></table>
Table 1: Classification accuracy (%) averaged over 10 runs on node classification.
also structural coefficients. Assume that $\mathcal{H}$, $\mathcal{A}$ and $\mathcal{W}$ are countable sets, where $\mathcal{H}$ is a node feature space, $\mathcal{A}$ is a structural coefficient space, and $\mathcal{W} = \{A_{ij}h_i \mid A_{ij}\in \mathcal{A}, h_i\in \mathcal{H}\}$. Let $H$ and $W$ be two multisets containing elements from $\mathcal{H}$ and $\mathcal{W}$, respectively, with $|H| = |W|$. We can then prove Lemma 1, Lemma 2 and Theorem 4 below; the proof details are provided in Appendix C.
Lemma 1. There exists a function $f$ s.t. $\pi(H, W) = \sum_{h \in H, w \in W} f(h, w)$ is unique for any distinct pair of multisets $(H, W)$ .
Then, the injectivity of $\pi(H,W)$ can be extended to $\pi^{\prime}(h_v,H,W)$ as in the lemma below.
Lemma 2. There exists a function $f$ s.t. $\pi^{\prime}(h_v,H,W) = \gamma f(h_v,|H|h_v) + \sum_{h\in H,w\in W}f(h,w)$ is unique for any distinct $(h_v,H,W)$ , where $h_v\in \mathcal{H}$ , $|H|h_v\in \mathcal{W}$ , and $\gamma$ can be an irrational number.
Since any function over $(h_v, H, W)$ can be decomposed as $g(\gamma f(h_v, |H| h_v) + \sum_{h \in H, w \in W} f(h, w))$ , similar to Xu et al. (2019), we use a parameterized multi-layer perceptron (MLP) to learn $f$ and $g$ . The following theorem characterizes the expressive power of GraphSNN.
Theorem 4. GraphSNN is more expressive than 1-WL in testing non-isomorphic graphs.
Since GIN is as powerful as 1-WL (Xu et al., 2019), this theorem implies that GraphSNN is more expressive than GIN, i.e., GraphSNN can map at least two different neighborhood subgraphs that correspond to the same multiset of feature vectors to different representations.
Complexity analysis. Similar to GCN and GIN, GraphSNN is computationally efficient: its time and memory complexities are linear in the number of edges of a graph. Further, due to the locality of GraphSNN, aggregating feature vectors from neighborhood subgraphs at each layer can be parallelized across all vertices. Structural coefficients can be precomputed with time complexity $O(ml)$, where $m$ is the number of edges and $l$ is the maximum vertex degree in a graph; this computation can also be parallelized across all edges. Table 9 in Appendix A summarizes the time and space complexities of several popular message-passing GNNs in comparison with GraphSNN.
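A sketch of this precomputation step, assuming $S_v$ is the subgraph induced on the closed neighborhood $\mathcal{N}(v) \cup \{v\}$, so that the overlap $S_{vu}$ is induced on the intersection of the two closed neighborhoods (`adj` maps each vertex to its neighbor set; the helper names are ours):

```python
def overlap_counts(adj, v, u):
    """|V_vu| and |E_vu| of the overlap of S_v and S_u (assumed induced)."""
    common = (adj[v] | {v}) & (adj[u] | {u})
    # Each undirected edge inside the overlap is seen twice, hence // 2.
    n_edges = sum(1 for a in common for b in adj[a] if b in common) // 2
    return len(common), n_edges


def precompute_coeffs(adj, lam=1.0):
    """Structural coefficient A_vu (Eq. 4) for every edge.

    Each edge is processed independently, so the loop can be
    parallelized across edges.
    """
    A = {}
    for v in adj:
        for u in adj[v]:
            if (u, v) in A:                 # undirected edge already computed
                A[(v, u)] = A[(u, v)]
                continue
            nv, ne = overlap_counts(adj, v, u)
            A[(v, u)] = ne / (nv * (nv - 1)) * nv ** lam if nv > 1 else 0.0
    return A
```

On a triangle, every edge's overlap is the whole triangle (3 vertices, 3 edges), so with $\lambda = 1$ each coefficient is $3/(3\cdot 2)\cdot 3 = 1.5$.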
# 5 NUMERICAL EXPERIMENTS
In this section, we evaluate our models on benchmark node classification and graph classification tasks. All results of our models are statistically significant at the 0.05 significance level.
# 5.1 NODE CLASSIFICATION
Datasets. We use five datasets: three citation network datasets Cora, Citeseer, and Pubmed (Sen et al., 2008) for semi-supervised document classification, one knowledge graph dataset NELL (Carlson et al., 2010) for semi-supervised entity classification, and one OGB dataset ogbn-arxiv from (Hu et al., 2020). Table 10 in Appendix B contains statistics for these datasets.
**Baseline methods.** We consider the popular message-passing GNNs: GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2017), GIN (Xu et al., 2019), and GraphSAGE (Hamilton et al., 2017). For each of these baselines, we construct a $\mathrm{GraphSNN}_M$ model by replacing its aggregation scheme with ours, as detailed in Appendix A. The purpose of this setup is to evaluate
<table><tr><td>Method</td><td>MUTAG</td><td>PTC-MR</td><td>PROTEINS</td><td>D&D</td><td>BZR</td><td>COX2</td><td>IMDB-B</td><td>RDT-M5K</td></tr><tr><td>WL</td><td>90.4 ± 5.7</td><td>59.9 ± 4.3</td><td>75.0 ± 3.1</td><td>79.4 ± 0.3</td><td>78.5 ± 0.6</td><td>81.7 ± 0.7</td><td>73.8 ± 3.9</td><td>52.5 ± 2.1</td></tr><tr><td>RetGK</td><td>90.3 ± 1.1</td><td>62.5 ± 1.6</td><td>75.8 ± 0.6</td><td>81.6 ± 0.3</td><td>-</td><td>-</td><td>71.9 ± 1.0</td><td>-</td></tr><tr><td>GNTK</td><td>90.0 ± 8.5</td><td>67.9 ± 6.9</td><td>75.6 ± 4.2</td><td>75.6 ± 3.9</td><td>83.6 ± 2.9</td><td>-</td><td>76.9 ± 3.6</td><td>-</td></tr><tr><td>P-WL</td><td>90.5 ± 1.3</td><td>64.0 ± 0.8</td><td>75.2 ± 0.3</td><td>78.6 ± 0.3</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>WL-PM</td><td>87.7 ± 0.8</td><td>61.4 ± 0.8</td><td>-</td><td>78.6 ± 0.2</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>WWL</td><td>87.2 ± 1.5</td><td>66.3 ± 1.2</td><td>74.2 ± 0.5</td><td>79.6 ± 0.5</td><td>84.4 ± 2.0</td><td>78.2 ± 0.4</td><td>74.3 ± 0.8</td><td>-</td></tr><tr><td>FGW</td><td>88.4 ± 5.6</td><td>65.3 ± 7.9</td><td>74.5 ± 2.7</td><td>-</td><td>85.1 ± 4.1</td><td>77.2 ± 4.8</td><td>63.8 ± 3.4</td><td>-</td></tr><tr><td>DGCNN</td><td>85.8 ± 1.7</td><td>58.6 ± 2.5</td><td>75.5 ± 0.9</td><td>79.3 ± 0.9</td><td>-</td><td>-</td><td>70.0 ± 0.9</td><td>48.7 ± 4.5</td></tr><tr><td>CapsGNN</td><td>86.6 ± 6.8</td><td>66.0 ± 1.8</td><td>76.2 ± 3.6</td><td>75.4 ± 4.1</td><td>-</td><td>-</td><td>73.1 ± 4.8</td><td>52.9 ± 1.5</td></tr><tr><td>†GraphSAGE</td><td>85.1 ± 7.6</td><td>63.9 ± 7.7</td><td>75.9 ± 3.2</td><td>72.9 ± 2.0</td><td>-</td><td>-</td><td>72.3 ± 5.3</td><td>50.0 ± 1.3</td></tr><tr><td>†GIN</td><td>89.4 ± 5.6</td><td>64.6 ± 7.0</td><td>75.9 ± 2.8</td><td>-</td><td>-</td><td>-</td><td>75.1 ± 5.1</td><td>57.5 ± 1.5</td></tr><tr><td>†GraphSNN (S)</td><td>91.57 ± 2.8</td><td>66.70 ± 3.7</td><td>76.83 ± 2.5</td><td>81.97 ± 2.6</td><td>88.69 ± 3.2</td><td>82.86 ± 3.1</td><td>77.86 ± 3.6</td><td>58.43 ± 2.3</td></tr><tr><td>†GraphSNN (R)</td><td>91.24 ± 2.5</td><td>66.96 ± 3.5</td><td>76.51 ± 2.5</td><td>82.46 ± 2.7</td><td>88.97 ± 2.9</td><td>83.13 ± 3.5</td><td>76.93 ± 3.3</td><td>58.51 ± 2.7</td></tr><tr><td>GraphSNN (S)</td><td>94.70 ± 1.9</td><td>70.58 ± 3.1</td><td>78.42 ± 2.7</td><td>83.92 ± 2.3</td><td>91.12 ± 3.0</td><td>86.28 ± 3.3</td><td>78.51 ± 2.8</td><td>59.86 ± 2.6</td></tr><tr><td>GraphSNN (R)</td><td>94.14 ± 1.2</td><td>71.01 ± 3.6</td><td>78.21 ± 2.9</td><td>84.61 ± 1.5</td><td>91.88 ± 3.2</td><td>86.72 ± 2.9</td><td>77.87 ± 3.1</td><td>60.23 ± 2.2</td></tr></table>
Table 2: Classification accuracy (%) averaged over 10 runs on graph classification. The results of WL and RetGK are taken from (Du et al., 2019), GraphSAGE from (Xu et al., 2019), DGCNN from (Maron et al., 2019) and others from their original papers. $\dagger$ indicates the reporting setting used in GIN and further details on the experimental settings are discussed in Appendix B.
how effectively our aggregation scheme with structural coefficients can learn representations for vertices, compared with the standard message-passing aggregation scheme.
Experimental setup. We use the Adam optimizer (Kingma & Ba, 2015) and $\lambda = 1$. For ogbn-arxiv, our models are trained for 500 epochs with learning rate 0.01, dropout 0.5, 256 hidden units, and $\gamma = 0.1$. For the other datasets, we use 200 epochs with learning rate 0.001, and choose the best values for weight decay from $\{0.001, 0.002, \dots, 0.009\}$ and hidden units from $\{64, 128, 256, 512\}$. For $\gamma$ and dropout at each layer, the best value for each model on each dataset is selected from $\{0.1, 0.2, \dots, 0.6\}$. GraphSNN$_{GAT}$ uses attention dropout 0.6 and 8 multi-attention heads. GraphSNN$_{GraphSAGE}$ uses a neighborhood sample size of 25 with mean aggregation.
We consider two settings of data splits for all datasets except for ogbn-arxiv: (1) the standard splits in Kipf & Welling (2017), i.e., 20 nodes from each class for training, 500 nodes for validation and 1000 nodes for testing, for which the results are presented in Table 1; (2) the random splits in Pei et al. (2020), i.e., randomly splitting nodes into $60\%$ , $20\%$ and $20\%$ for training, validation and testing, respectively, for which the results are presented in Table 13 in Appendix B. For ogbn-arxiv, we follow Hu et al. (2020) to use a time-based data split based on publication dates.
# 5.2 GRAPH CLASSIFICATION
We evaluate GraphSNN from three aspects: (1) small standard graph datasets, (2) large graph datasets, and (3) comparison with GNNs that go beyond 1-WL.
Experiments on small graphs. We use eight datasets from two categories: (1) bioinformatics datasets: MUTAG, PTC-MR, COX2, BZR, PROTEINS, and D&D (Debnath et al., 1991; Kriege et al., 2016; Wale et al., 2008; Shervashidze et al., 2011; Sutherland et al., 2003; Borgwardt & Kriegel, 2005); (2) social network datasets: IMDB-B and RDT-M5K (Yanardag & Vishwanathan, 2015). Table 11 in Appendix B contains statistics for these small graph datasets.
We compare against eleven baselines: (1) Graph kernel based methods: WL subtree kernel (Shervashidze et al., 2011), RetGK (Zhang et al., 2018b), GNTK (Du et al., 2019), P-WL (Rieck et al., 2019), WL-PM (Nikolentzos et al., 2017), WWL (Togninalli et al., 2019) and FGW (Titouan et al., 2019); (2) GNN based methods: DGCNN (Zhang et al., 2018a), CapsGNN (Xinyi & Chen, 2018), GIN (Xu et al., 2019), and GraphSAGE (Hamilton et al., 2017).
Both the standard stratified splits (Xu et al., 2019) and the random splits are considered. We use 10-fold cross validation with $90\%$ training and $10\%$ testing, and report the best mean accuracy. For both settings, we use the Adam optimizer (Kingma & Ba, 2015), batch size 64, hidden dimension 64, weight decay 0.009, a 2-layer MLP with batch normalization, 500 epochs, dropout 0.6, and $\gamma = 0.1$ over all datasets. We use the readout function of Xu et al. (2019), which concatenates the representations of all layers to obtain a final graph representation. For the standard stratified splits, we use learning rate 0.009 over all datasets. For the random splits, we use learning rate 0.008 for MUTAG and RDT-M5K, and 0.007 for the other datasets. Table 2 presents the results.
<table><tr><td>Method</td><td>ogbg-molhiv</td><td>ogbg-moltox21</td><td>ogbg-moltoxcast</td><td>ogbg-ppa</td><td>ogbg-molpcba</td></tr><tr><td>GIN</td><td>75.58±1.40</td><td>74.91±0.51</td><td>63.41±0.74</td><td>68.92±1.00</td><td>22.66±0.28</td></tr><tr><td>GIN+VN</td><td>75.20±1.30</td><td>76.21±0.82</td><td>66.18±0.68</td><td>70.37±1.07</td><td>27.03±0.23</td></tr><tr><td>GSN</td><td>77.99±1.00</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>PNA</td><td>79.05±1.30</td><td>-</td><td>-</td><td>-</td><td>28.38±0.35</td></tr><tr><td>ID-GNN</td><td>78.30±2.00</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>Deep LRP</td><td>77.19±1.40</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>GraphSNN</td><td>78.51±1.70</td><td>75.45±1.10</td><td>65.40±0.71</td><td>70.66±1.65</td><td>24.96±1.50</td></tr><tr><td>GraphSNN+VN</td><td>79.72±1.83</td><td>76.78±1.27</td><td>67.68±0.92</td><td>72.02±1.48</td><td>28.50±1.68</td></tr></table>
Table 3: Classification accuracy (%) averaged over 10 runs on graph classification, where $\lambda = 2$ . The results of the baselines are taken from (Hu et al., 2020) and the leaderboard of the OGB website.
<table><tr><td></td><td>Method</td><td>MUTAG</td><td>PTC-MR</td><td>PROTEINS</td><td>BZR</td><td>IMDB-B</td></tr><tr><td rowspan="2">GSN</td><td>GSN-e</td><td>90.6 ± 7.5</td><td>68.2 ± 7.2</td><td>76.6 ± 5.0</td><td>-</td><td>77.8 ± 3.3</td></tr><tr><td>GSN-v</td><td>92.2 ± 7.5</td><td>67.4 ± 5.7</td><td>74.5 ± 5.0</td><td>-</td><td>76.8 ± 2.0</td></tr><tr><td rowspan="2">ID-GNNs</td><td>ID-GNN Fast</td><td>96.5 ± 3.2</td><td>61.9 ± 5.4</td><td>78.0 ± 3.5</td><td>86.4 ± 3.0</td><td>-</td></tr><tr><td>ID-GNN Full</td><td>93.0 ± 5.6</td><td>62.5 ± 5.3</td><td>77.9 ± 2.4</td><td>88.1 ± 4.0</td><td>-</td></tr><tr><td>Ours</td><td>GraphSNN</td><td>91.57 ± 2.8</td><td>66.70 ± 3.7</td><td>76.83 ± 2.5</td><td>88.69 ± 3.2</td><td>77.86 ± 3.6</td></tr><tr><td rowspan="2">k-WL</td><td>1-GNNNT</td><td>82.7 ± 0.0</td><td>51.2 ± 0.0</td><td>-</td><td>-</td><td>69.4 ± 0.0</td></tr><tr><td>1-GNN</td><td>82.2 ± 0.0</td><td>59.0 ± 0.0</td><td>-</td><td>-</td><td>71.2 ± 0.0</td></tr><tr><td rowspan="2">GNNs</td><td>1-2-3-GNNNT</td><td>84.4 ± 0.0</td><td>59.3 ± 0.0</td><td>-</td><td>-</td><td>70.3 ± 0.0</td></tr><tr><td>1-2-3-GNN</td><td>86.1 ± 0.0</td><td>60.9 ± 0.0</td><td>-</td><td>-</td><td>74.2 ± 0.0</td></tr><tr><td>Ours</td><td>GraphSNN</td><td>87.30 ± 3.1</td><td>61.63 ± 2.8</td><td>74.01 ± 3.2</td><td>82.72 ± 3.9</td><td>74.81 ± 3.5</td></tr></table>
Table 4: Classification accuracy (%) averaged over 10 runs on graph classification, where $\lambda = 2$. The results of the baselines are taken from their original papers. GSN and ID-GNNs use the same experimental setup as GIN, while k-WL GNNs use the same experimental setup as CapsGNN. These experimental setups are detailed in Appendix B.
Experiments on large graphs. We use five large graph datasets from Open Graph Benchmark (OGB) (Hu et al., 2020), including four molecular graph datasets (ogbg-molhiv, ogbg-moltox21, ogbg-moltoxcast and ogbg-molpcba) and one protein-protein association network (ogbg-ppa). Table 12 in Appendix B contains statistics for these large graph datasets.
We compare against the following methods that have reported the results on the above OGB datasets: GIN and $\mathrm{GIN + VN}$ (Hu et al., 2020), GSN (Bouritsas et al., 2020), PNA (Corso et al., 2020), ID-GNNs (You et al., 2021) and Deep LRP (Chen et al., 2020b). In addition to the original model of GraphSNN, we also consider a variant, denoted as GraphSNN+VN, which performs the message passing over augmented graphs with virtual nodes in GraphSNN (Hu et al., 2020; Ishiguro et al., 2019).
We follow the same experimental setup as in Hu et al. (2020). We use the Adam optimizer with learning rate 0.001, batch size 32, dropout 0.5 and 100 epochs for all datasets. GraphSNN uses an 8-layer MLP with embedding dimension 512 for ogbg-moltoxcast and ogbg-moltox21, while GraphSNN+VN uses embedding dimensions 300 and 256 with 8-layer and 5-layer MLPs for ogbg-moltoxcast and ogbg-moltox21, respectively. For ogbg-molhiv, ogbg-molpcba and ogbg-ppa, both GraphSNN and GraphSNN+VN use a 5-layer MLP and embedding dimension 200. Table 3 shows the classification accuracy results. Table 15 in Appendix B shows the running time of the preprocessing step.
Comparison with GNNs beyond 1-WL. We compare GraphSNN with the other GNNs that are more expressive than 1-WL, including: GSN (Bouritsas et al., 2020), ID-GNNs (You et al., 2021) and k-WL GNN (Morris et al., 2019). We use the same experimental setup as in (Xu et al., 2019; Bouritsas et al., 2020; Maron et al., 2019). Table 4 shows the results.
# 5.3 ABLATION STUDY
We perform an ablation study to analyze the effect of $\lambda$ values on model performance. Tables 5 and 6 show that $\lambda = 1$ yields the highest performance for node classification, while $\lambda = 2$ is the best for graph classification. This reflects a critical point: different graph learning tasks need different kinds of structural information. $\lambda = 1$ captures local density, e.g., two overlap subgraphs may
<table><tr><td>Dataset</td><td>Method</td><td>λ=1</td><td>λ=2</td><td>λ=3</td><td>λ=4</td><td>λ=5</td></tr><tr><td rowspan="4">Cora</td><td>GraphSNNGCN</td><td>83.1±1.8</td><td>82.8±1.3</td><td>82.3±2.4</td><td>81.8±1.6</td><td>82.1±1.6</td></tr><tr><td>GraphSNNGIN</td><td>79.2±1.7</td><td>78.8±1.2</td><td>78.5±1.3</td><td>78.1±1.6</td><td>77.7±1.2</td></tr><tr><td>GraphSNNGraphSAGE</td><td>80.5±2.5</td><td>80.3±2.1</td><td>79.8±1.9</td><td>79.2±1.9</td><td>79.4±2.2</td></tr><tr><td>GraphSNNGAT</td><td>83.8±1.2</td><td>83.5±1.5</td><td>83.2±1.7</td><td>82.8±1.3</td><td>83.2±1.9</td></tr><tr><td rowspan="4">Citeseer</td><td>GraphSNNGCN</td><td>72.3±1.5</td><td>71.7±1.3</td><td>71.1±1.6</td><td>70.6±1.2</td><td>70.9±1.1</td></tr><tr><td>GraphSNNGIN</td><td>68.3±1.5</td><td>68.3±1.9</td><td>67.7±1.4</td><td>67.1±1.3</td><td>67.3±1.4</td></tr><tr><td>GraphSNNGraphSAGE</td><td>72.7±3.2</td><td>72.0±2.5</td><td>71.6±2.9</td><td>71.9±2.1</td><td>71.3±2.3</td></tr><tr><td>GraphSNNGAT</td><td>73.5±1.6</td><td>72.9±1.7</td><td>72.5±1.1</td><td>72.6±1.6</td><td>72.0±1.3</td></tr></table>
Table 5: Classification accuracy (%) averaged over 10 runs on node classification with standard splits.
<table><tr><td>Dataset</td><td>Method</td><td>λ=1</td><td>λ=2</td><td>λ=3</td><td>λ=4</td><td>λ=5</td></tr><tr><td>MUTAG</td><td></td><td>92.66±2.4</td><td>94.14±1.2</td><td>93.38±1.5</td><td>92.25±2.1</td><td>92.79±2.0</td></tr><tr><td>PTC-MR</td><td></td><td>70.76±5.1</td><td>71.01±3.6</td><td>70.67±2.8</td><td>69.59±2.1</td><td>69.97±3.1</td></tr><tr><td>PROTEINS</td><td></td><td>77.90±4.9</td><td>78.21±2.9</td><td>78.15±2.1</td><td>77.20±3.1</td><td>76.93±3.2</td></tr><tr><td>D&D</td><td rowspan="2">GraphSNN</td><td>82.70±4.6</td><td>84.61±1.5</td><td>84.34±1.2</td><td>82.60±2.6</td><td>82.30±2.3</td></tr><tr><td>BZR</td><td>87.61±4.9</td><td>91.88±3.2</td><td>91.45±2.6</td><td>91.38±2.1</td><td>90.90±3.1</td></tr><tr><td>COX2</td><td></td><td>86.20±3.3</td><td>86.72±2.9</td><td>83.81±3.1</td><td>83.13±2.6</td><td>83.94±3.2</td></tr><tr><td>IMDB-B</td><td></td><td>77.07±5.2</td><td>77.87±3.1</td><td>77.60±3.6</td><td>77.32±3.2</td><td>77.10±3.3</td></tr><tr><td>RDT-M5K</td><td></td><td>59.53±2.6</td><td>60.23±2.2</td><td>60.10±2.3</td><td>60.00±2.1</td><td>59.90±2.6</td></tr></table>
Table 6: Classification accuracy (%) averaged over 10 runs on graph classification with random splits.
considerably vary in the number of vertices while their local densities remain very close. Our experiments show that injecting such local density helps improve node classification performance. $\lambda = 2$ captures local similarity, i.e., how similar two overlap subgraphs are: two overlap subgraphs that differ considerably in the number of vertices would have very different structural coefficients. Since graph classification requires comparing the similarity of graphs, $\lambda = 2$ is thus the best.
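The effect of $\lambda$ can be seen on two hypothetical overlap subgraphs with the same edge density but different sizes (a toy numeric illustration of Eq. 4, not data from the paper):

```python
def omega(n_edges, n_vertices, lam):
    """Eq. 4 with |E_vu| = n_edges and |V_vu| = n_vertices."""
    return n_edges / (n_vertices * (n_vertices - 1)) * n_vertices ** lam

# Both overlaps have density |E|/(|V|(|V|-1)) = 0.5, but 3 vs. 6 vertices.
small_1, large_1 = omega(3, 3, lam=1), omega(15, 6, lam=1)   # 1.5 vs. 3.0
small_2, large_2 = omega(3, 3, lam=2), omega(15, 6, lam=2)   # 4.5 vs. 18.0

# lambda = 2 amplifies the size gap (ratio 4.0 instead of 2.0), which helps
# when comparing whole graphs; lambda = 1 stays closer to plain density,
# which suits node-level tasks.
print(large_1 / small_1, large_2 / small_2)
```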
# 5.4 OVERSMOOTHING ANALYSIS
We analyse the impact of model depth (number of layers) on node classification performance. In addition to GCN and GraphSNN$_{GCN}$, we also compare these models with a residual connection (i.e., GCN+residual and GraphSNN$_{GCN}$+residual). We evaluate all the models on the Cora dataset using the standard splits and the same hyperparameters as in Section 5.1. Table 7 shows the results. When increasing the model depth, GraphSNN$_{GCN}$ consistently outperforms GCN at each depth. This is because structural coefficients capture the structural connectivity between a target vertex and its neighbors: a neighbor with weak structural connectivity passes little information to the target vertex, whereas a neighbor with strong structural connectivity passes a strong message. GraphSNN thus helps alleviate the oversmoothing issue even in the presence of residual connections. Further results of the oversmoothing analysis are provided in Appendix B.
<table><tr><td>#Layers</td><td>GCN</td><td>GCN+residual</td><td>GraphSNNGCN</td><td>GraphSNNGCN+residual</td></tr><tr><td>1</td><td>79.6±0.5</td><td>80.3±0.7</td><td>80.1±0.8</td><td>81.6±1.6</td></tr><tr><td>2</td><td>81.5±0.4</td><td>82.8±1.2</td><td>83.1±1.8</td><td>84.1±1.7</td></tr><tr><td>3</td><td>80.3±0.6</td><td>82.3±0.5</td><td>82.0±0.8</td><td>83.4±0.7</td></tr><tr><td>4</td><td>78.2±0.9</td><td>81.5±0.9</td><td>80.1±0.7</td><td>82.9±0.9</td></tr><tr><td>5</td><td>74.3±1.3</td><td>81.0±1.3</td><td>79.1±1.2</td><td>82.3±0.3</td></tr><tr><td>6</td><td>35.6±1.5</td><td>80.6±0.5</td><td>76.5±1.3</td><td>81.5±1.2</td></tr><tr><td>7</td><td>31.6±0.9</td><td>79.7±0.6</td><td>76.3±1.3</td><td>80.9±0.9</td></tr><tr><td>8</td><td>16.2±1.2</td><td>78.4±1.1</td><td>75.7±1.2</td><td>80.3±1.3</td></tr></table>
Table 7: Classification accuracy (%) averaged over 10 runs on the Cora dataset.
# 6 CONCLUSIONS
In this paper, we have introduced a GNN framework that enables a general way of injecting structural information into a message-passing aggregation scheme. We have also introduced a novel GNN model, GraphSNN, for graph learning, and proved that GraphSNN is more expressive than 1-WL in distinguishing graph structures. Our experiments show that GraphSNN consistently outperforms state-of-the-art approaches on both node classification and graph classification benchmarks.
# REFERENCES
Waiss Azizian and Marc Lelarge. Expressive power of invariant and equivariant graph neural networks. In International Conference on Learning Representations (ICLR), 2020.
László Babai and Luděk Kučera. Canonical labelling of graphs in linear average time. In 20th Annual Symposium on Foundations of Computer Science (SFCS), pp. 39-46, 1979.
Karsten M Borgwardt and Hans-Peter Kriegel. Shortest-path kernels on graphs. In Fifth IEEE international conference on data mining (ICDM'05), pp. 8-pp. IEEE, 2005.
Giorgos Bouritsas, Fabrizio Frasca, Stefanos Zafeiriou, and Michael M Bronstein. Improving graph neural network expressivity via subgraph isomorphism counting. arXiv preprint arXiv:2006.09252, 2020.
Jin-Yi Cai, Martin Fürer, and Neil Immerman. An optimal lower bound on the number of variables for graph identification. Combinatorica, 12(4):389-410, 1992.
Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka, and Tom M Mitchell. Toward an architecture for never-ending language learning. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2010.
Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, and Xu Sun. Measuring and relieving the oversmoothing problem for graph neural networks from the topological view. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pp. 3438-3445, 2020a.
Zhengdao Chen, Lei Chen, Soledad Villar, and Joan Bruna. Can graph neural networks count substructures? Advances in neural information processing systems (NeurIPS), 2020b.
Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Lio, and Petar Veličković. Principal neighbourhood aggregation for graph nets. Advances in Neural Information Processing Systems (NeurIPS), 2020.
Pim de Haan, Taco Cohen, and Max Welling. Natural graph networks. arXiv preprint arXiv:2007.08349, 2020.
Asim Kumar Debnath, Rosa L Lopez de Compadre, Gargi Debnath, Alan J Shusterman, and Corwin Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. correlation with molecular orbital energies and hydrophobicity. Journal of medicinal chemistry, 34 (2):786-797, 1991.
Simon S Du, Kangcheng Hou, Barnabás Póczos, Ruslan Salakhutdinov, Ruosong Wang, and Keyulu Xu. Graph neural tangent kernel: Fusing graph neural networks with graph kernels. arXiv preprint arXiv:1905.13192, 2019.
Federico Errica, Marco Podda, Davide Bacciu, and Alessio Micheli. A fair comparison of graph neural networks for graph classification. In International Conference on Learning Representations (ICLR), 2020.
Vikas Garg, Stefanie Jegelka, and Tommi Jaakkola. Generalization and representational limits of graph neural networks. In International Conference on Machine Learning (ICML), pp. 3419-3430, 2020.
Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning (ICML), pp. 1263-1272. PMLR, 2017.
Martin Grohe. Descriptive complexity, canonisation, and definable graph structure theory, volume 47. Cambridge University Press, 2017.
Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1024-1034, 2017.
Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in Neural Information Processing Systems (NeurIPS), 2020.
Katsuhiko Ishiguro, Shin-ichi Maeda, and Masanori Koyama. Graph warp module: an auxiliary module for boosting the power of graph neural networks in molecular graph analysis. arXiv preprint arXiv:1902.01020, 2019.
Nicolas Keriven and Gabriel Peyré. Universal invariant and equivariant graph neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
Diederick P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015.
Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR), 2017.
Nils M Kriege, Pierre-Louis Giscard, and Richard Wilson. On valid optimal assignment kernels and applications to graph classification. In Advances in Neural Information Processing Systems (NeurIPS), pp. 1623-1631, 2016.
Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2018.
Xin Liu, Haojie Pan, Mutian He, Yangqiu Song, Xin Jiang, and Lifeng Shang. Neural subgraph isomorphism counting. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1959-1969, 2020.
Andreas Loukas. What graph neural networks cannot learn: depth vs width. In International Conference on Learning Representations (ICLR), 2020.
Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. In International Conference on Learning Representations (ICLR), 2018.
Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. Advances in Neural Information Processing Systems (NeurIPS), 2019.
Federico Monti, Karl Otness, and Michael M Bronstein. Motifnet: a motif-based graph convolutional network for directed graphs. In 2018 IEEE Data Science Workshop (DSW), pp. 225-228, 2018.
Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pp. 4602-4609, 2019.
Christopher Morris, Gaurav Rattan, and Petra Mutzel. Weisfeiler and leman go sparse: Towards scalable higher-order graph embeddings. Advances in Neural Information Processing Systems (NeurIPS), 2020.
Ryan Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Relational pooling for graph representations. In International Conference on Machine Learning (ICML), pp. 4663-4673, 2019.
Giannis Nikolentzos, Polykarpos Meladianos, and Michalis Vazirgiannis. Matching node embeddings for graph similarity. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2017.
Hongbin Pei, Bingzhe Wei, Kevin Chen-Chuan Chang, Yu Lei, and Bo Yang. Geom-gcn: Geometric graph convolutional networks. In International Conference on Learning Representations (ICLR), 2020.
Bastian Rieck, Christian Bock, and Karsten Borgwardt. A persistent weisfeiler-lehman procedure for graph classification. In International Conference on Machine Learning (ICML), pp. 5448-5458. PMLR, 2019.
Ryoma Sato. A survey on the expressive power of graph neural networks. arXiv preprint arXiv:2003.04078, 2020.
Ryoma Sato, Makoto Yamada, and Hisashi Kashima. Random features strengthen graph neural networks. In Proceedings of the 2021 SIAM International Conference on Data Mining (SDM), pp. 333-341, 2021.
Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI magazine, 29(3):93-93, 2008.
|
| 268 |
+
Nino Shervashidze, Pascal Schweitzer, Erik Jan Van Leeuwen, Kurt Mehlhorn, and Karsten M Borgwardt. Weisfeiler-lehman graph kernels. Journal of Machine Learning Research, 12(9), 2011.
|
| 269 |
+
Jeffrey J Sutherland, Lee A O'brien, and Donald F Weaver. Spline-fitting with a genetic algorithm: A method for developing classification structure- activity relationships. Journal of chemical information and computer sciences, 43(6):1906-1915, 2003.
|
| 270 |
+
Vayer Titouan, Nicolas Courty, Romain Tavenard, and Rémi Flamary. Optimal transport for structured data with application on graphs. In International Conference on Machine Learning (ICML), pp. 6275-6284, 2019.
|
| 271 |
+
Matteo Togninalli, Elisabetta Ghisu, Felipe Llinares-López, Bastian Rieck, and Karsten Borgwardt. Wasserstein weisfeiler-lehman graph kernels. In Advances in Neural Information Processing Systems (NeurIPS), pp. 6439-6449, 2019.
|
| 272 |
+
Petar Velicković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. International Conference on Learning Representations (ICLR), 2017.
|
| 273 |
+
Clément Vignac, Andreas Loukas, and Pascal Frossard. Building powerful and equivariant graph neural networks with structural message-passing. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
|
| 274 |
+
Nikil Wale, Ian A Watson, and George Karypis. Comparison of descriptor spaces for chemical compound retrieval and classification. Knowledge and Information Systems, 14(3):347-375, 2008.
|
| 275 |
+
Boris Weisfeiler and Andrei Leman. The reduction of a graph to canonical form and the algebra which appears therein. NTI, Series, 2(9):12-16, 1968.
|
| 276 |
+
Asiri Wijesinghe and Qing Wang. Dfnets: Spectral cnns for graphs with feedback-looped filters. Advances in neural information processing systems (NeurIPS), 2019.
|
| 277 |
+
Jun Wu, Jingrui He, and Jiejun Xu. Demo-net: Degree-specific graph neural networks for node and graph classification. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 406-415, 2019.
|
| 278 |
+
Zhang Xinyi and Lihui Chen. Capsule graph neural network. In International conference on learning representations (ICLR), 2018.
|
| 279 |
+
Keyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie Jegelka. Representation learning on graphs with jumping knowledge networks. In International Conference on Machine Learning (ICML), pp. 5453-5462. PMLR, 2018.
|
| 280 |
+
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? In International Conference on Learning Representations (ICLR), 2019.
|
| 281 |
+
Pinar Yanardag and SVN Vishwanathan. Deep graph kernels. In Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1365-1374, 2015.
|
| 282 |
+
Jiaxuan You, Jonathan Gomes-Selman, Rex Ying, and Jure Leskovec. Identity-aware graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2021.
|
| 283 |
+
Muhan Zhang and Yixin Chen. Weisfeiler-lehman neural machine for link prediction. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 575-583, 2017.
|
| 284 |
+
|
| 285 |
+
Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2018a.
|
| 286 |
+
Zhen Zhang, Mianzhi Wang, Yijian Xiang, Yan Huang, and Arye Nehorai. Retgk: Graph kernels based on return probabilities of random walks. In Advances in Neural Information Processing Systems (NeurIPS), pp. 3964-3974, 2018b.
|
| 287 |
+
Lingxiao Zhao and Leman Akoglu. Pairnorm: Tackling oversmoothing in gnns. In International Conference on Learning Representations (ICLR), 2019.
|
| 288 |
+
|
| 289 |
+
# APPENDIX
# A. CONNECTIONS TO PREVIOUS WORK

# A. CONNECTIONS TO PREVIOUS WORK

In the following, we discuss how our framework generalizes existing message-passing GNNs in the literature, such as GCN (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2017), GAT (Veličković et al., 2017) and GIN (Xu et al., 2019), as special cases. Table 8 presents the local aggregation schemes used by these GNN models. They differ in how they aggregate the feature vectors within a neighborhood and how they combine the result with the current vertex's own feature, i.e., by summation or concatenation. Here, $\alpha_{vu}$ is an attention coefficient capturing the importance of a neighbor in GAT, $\epsilon$ is a learnable or fixed scalar parameter used in GIN, $W$ is a learnable weight matrix, and $\sigma$ is a non-linear activation function such as ReLU.

Note that, as defined in Equation 3, $m_{a}^{(t)}$ and $m_{v}^{(t)}$ refer to the messages aggregated by $\mathrm{AGGREGATE}^{N}(\cdot)$ and $\mathrm{AGGREGATE}^{I}(\cdot)$, respectively.

<table><tr><td>GNN Model</td><td>AGGREGATE<sup>N</sup>(·)</td><td>AGGREGATE<sup>I</sup>(·)</td><td>COMBINE(·)</td></tr><tr><td>GCN</td><td>∑<sub>u∈N(v)</sub> W<sup>(t)</sup>h<sub>u</sub><sup>(t)</sup>/√(|N(u)||N(v)|)</td><td>W<sup>(t)</sup>h<sub>v</sub><sup>(t)</sup>/√(|N(v)||N(v)|)</td><td>σ(SUM(m<sub>v</sub><sup>(t)</sup>, m<sub>a</sub><sup>(t)</sup>))</td></tr><tr><td>GraphSAGE</td><td>∑<sub>u∈N(v)</sub> h<sub>u</sub><sup>(t)</sup>/|N(v)|</td><td>h<sub>v</sub><sup>(t)</sup></td><td>σ(W<sup>(t)</sup>·CONCAT(m<sub>v</sub><sup>(t)</sup>, m<sub>a</sub><sup>(t)</sup>))</td></tr><tr><td>GAT</td><td>∑<sub>u∈N(v)</sub> α<sub>vu</sub>W<sup>(t)</sup>h<sub>u</sub><sup>(t)</sup></td><td>α<sub>vv</sub>W<sup>(t)</sup>h<sub>v</sub><sup>(t)</sup></td><td>σ(SUM(m<sub>v</sub><sup>(t)</sup>, m<sub>a</sub><sup>(t)</sup>))</td></tr><tr><td>GIN</td><td>∑<sub>u∈N(v)</sub> h<sub>u</sub><sup>(t)</sup></td><td>(1+ε)h<sub>v</sub><sup>(t)</sup></td><td>MLP<sub>θ</sub>(SUM(m<sub>v</sub><sup>(t)</sup>, m<sub>a</sub><sup>(t)</sup>))</td></tr></table>

Table 8: Comparison of the aggregation schemes used in existing message-passing GNNs.

# COMPLEXITY ANALYSIS

Table 9 summarizes the time and space complexities of several popular message-passing GNNs and GraphSNN, where $n$ and $m$ are the numbers of vertices and edges in a graph, respectively, $k$ is the number of layers, $f$ and $d$ are the dimensions of input and output feature vectors, respectively, $a$ is the number of attention heads used in GAT, and $s$ is the number of neighbors sampled for each node at each layer in GraphSAGE.

<table><tr><td>GNN Model</td><td>Time Complexity</td><td>Memory Complexity</td></tr><tr><td>GCN (Kipf & Welling, 2017)</td><td>O(kmfd)</td><td>O(m)</td></tr><tr><td>GIN (Xu et al., 2019)</td><td>O(kmfd)</td><td>O(m)</td></tr><tr><td>GAT (Veličković et al., 2017)</td><td>O(k(anfd + amd))</td><td>O(n<sup>2</sup>)</td></tr><tr><td>GraphSAGE (Hamilton et al., 2017)</td><td>O(snfd)</td><td>O(n)</td></tr><tr><td>GraphSNN (ours)</td><td>O(kmfd)</td><td>O(m)</td></tr></table>

Table 9: Time and space complexities of message-passing GNNs and GraphSNN.
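To make the shared AGGREGATE/COMBINE template of Table 8 concrete, the sketch below instantiates the GIN row in plain NumPy (neighbor sum for $\mathrm{AGGREGATE}^N$, $(1+\epsilon)$-scaled self feature for $\mathrm{AGGREGATE}^I$, MLP over their sum for COMBINE). The function and variable names are ours, for illustration only:

```python
import numpy as np

def gin_layer(H, A, eps, mlp):
    """One GIN layer: h_v <- MLP((1 + eps) * h_v + sum_{u in N(v)} h_u).

    H   : (n, d) node feature matrix
    A   : (n, n) binary adjacency matrix without self-loops
    eps : scalar epsilon
    mlp : callable mapping an (n, d) array to the next-layer features
    """
    m_a = A @ H                # AGGREGATE^N: sum over neighbors
    m_v = (1.0 + eps) * H      # AGGREGATE^I: scaled self feature
    return mlp(m_v + m_a)      # COMBINE: SUM, then an MLP

# Tiny example: path graph 0-1-2, one-hot features, identity "MLP".
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.eye(3)
out = gin_layer(H, A, eps=0.0, mlp=lambda X: X)
# Node 1 receives its own feature plus those of nodes 0 and 2.
```

Swapping the two aggregation lines and the combine step yields the other rows of Table 8.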

# FORMULATION OF GRAPHSNN$_M$

For each of these message-passing GNNs, denoted as $M$, we construct a variant GraphSNN$_M$ by replacing its existing aggregation scheme with our aggregation scheme with structural coefficients, as formulated in Eq. 5. These variants are used in our experiments on node classification benchmark tasks (see Section 5.1) to evaluate how our aggregation scheme with structural coefficients improves performance compared with the standard message-passing aggregation schemes. Below are the details of these variants.

# GCN and GraphSNN$_{GCN}$

Graph Convolutional Network (GCN) (Kipf & Welling, 2017) applies a normalized mean aggregation to combine the feature vector of a node $v$ with the feature vectors in its neighborhood $\mathcal{N}(v)$ :
$$
h_{v}^{(t+1)} = \sigma \left( \frac{W^{(t)} h_{v}^{(t)}}{\sqrt{|\mathcal{N}(v)||\mathcal{N}(v)|}} + \sum_{u \in \mathcal{N}(v)} \frac{W^{(t)} h_{u}^{(t)}}{\sqrt{|\mathcal{N}(v)||\mathcal{N}(u)|}} \right). \tag{6}
$$
$\sqrt{|\mathcal{N}(u)||\mathcal{N}(v)|}$ is a normalization constant for the edge $(v,u)$ , which originates from the normalized adjacency matrix $D^{-1 / 2}AD^{-1 / 2}$ . $W^{(t)}$ is a trainable weight matrix and $\sigma$ is a non-linear activation function such as ReLU. We generalise GCN to a model under the GMP framework, namely $\mathrm{GraphSNN}_{GCN}$ , to improve the expressive power of GCN. We first construct a normalized structural coefficient matrix $\tilde{A}$ . Formally, each neural layer of $\mathrm{GraphSNN}_{GCN}$ may then be expressed as:
$$
h_{v}^{(t+1)} = \sigma \left( \gamma^{(t)} \Big( \sum_{u \in \mathcal{N}(v)} \tilde{A}_{vu} + 1 \Big) \frac{W^{(t)} h_{v}^{(t)}}{\sqrt{|\tilde{\mathcal{N}}(v)||\tilde{\mathcal{N}}(v)|}} + \sum_{u \in \mathcal{N}(v)} \big( \tilde{A}_{vu} + 1 \big) \frac{W^{(t)} h_{u}^{(t)}}{\sqrt{|\tilde{\mathcal{N}}(u)||\tilde{\mathcal{N}}(v)|}} \right). \tag{7}
$$
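Eq. 7 can be vectorized over all vertices at once. Below is a minimal NumPy sketch, assuming the normalized structural coefficient matrix $\tilde{A}$ is precomputed and zero off edges, and taking $|\tilde{\mathcal{N}}(v)| = |\mathcal{N}(v)| + 1$ (the closed neighborhood); these assumptions, and all names, are ours rather than from any released code:

```python
import numpy as np

def graphsnn_gcn_layer(H, A_tilde, Adj, W, gamma, sigma=np.tanh):
    """One GraphSNN_GCN layer in the spirit of Eq. 7, fully vectorized.

    H       : (n, f) node features
    A_tilde : (n, n) normalized structural coefficients (zero off edges)
    Adj     : (n, n) binary adjacency matrix without self-loops
    W       : (f, d) weight matrix
    gamma   : scalar gamma^{(t)}
    """
    deg = Adj.sum(axis=1) + 1.0          # assumed |~N(v)| = |N(v)| + 1
    HW = H @ W
    # Self term: gamma * (sum_u A~_vu + 1) * W h_v / |~N(v)|
    self_term = (gamma * (A_tilde.sum(axis=1) + 1.0) / deg)[:, None] * HW
    # Neighbor term: sum_u (A~_vu + 1) W h_u / sqrt(|~N(u)| |~N(v)|)
    d_inv = 1.0 / np.sqrt(deg)
    M = d_inv[:, None] * (A_tilde + Adj) * d_inv[None, :]
    return sigma(self_term + M @ HW)
```

With $\tilde{A} = 0$, $\gamma^{(t)} = 1$ and $\sigma$ the identity, the layer reduces to a GCN-style propagation with self-loop degrees, which makes the generalization easy to sanity-check.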
# GraphSAGE and GraphSNN$_{GraphSAGE}$

GraphSAGE (Hamilton et al., 2017) learns aggregation functions to induce new node feature vectors by sampling and aggregating features from a node's local neighborhood. GraphSAGE considers three aggregation functions: the mean aggregator, the LSTM aggregator and the pooling aggregator. In our work, we mainly focus on the mean aggregator, which, for each vertex $v$, takes the mean of the feature vectors of the nodes in its neighborhood and concatenates it with the feature vector of $v$ as shown below:

$$
h_{v}^{(t+1)} = \sigma \left( W^{(t)} \cdot \mathrm{CONCAT} \left( \frac{1}{|\mathcal{N}(v)|} \sum_{u \in \mathcal{N}(v)} h_{u}^{(t)}, \; h_{v}^{(t)} \right) \right), \tag{8}
$$
where $W^{(t)}$ is a learnable weight matrix, and $\sigma$ represents a non-linear activation function. We also generalise GraphSAGE to a model under the GMP framework, namely $\mathbf{GraphSNN}_{\mathbf{GraphSAGE}}$. This model first takes a mean aggregation of the feature vectors in the neighborhood $\mathcal{N}(v)$ and then concatenates it with the feature vector of $v$ itself in the following manner:

$$
h_{v}^{(t+1)} = \sigma \left( W^{(t)} \cdot \mathrm{CONCAT} \left( \frac{1}{|\mathcal{N}(v)|} \sum_{u \in \mathcal{N}(v)} \big( \tilde{A}_{vu} + 1 \big) h_{u}^{(t)}, \; \gamma^{(t)} \Big( \sum_{u \in \mathcal{N}(v)} \tilde{A}_{vu} + 1 \Big) h_{v}^{(t)} \right) \right). \tag{9}
$$
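Eq. 9 admits the same vectorized treatment as before. The NumPy sketch below again assumes $\tilde{A}$ is precomputed and zero off edges; the names are ours:

```python
import numpy as np

def graphsnn_sage_layer(H, A_tilde, Adj, W, gamma, sigma=np.tanh):
    """One GraphSNN_GraphSAGE layer in the spirit of Eq. 9.

    H       : (n, f) node features
    A_tilde : (n, n) normalized structural coefficients (zero off edges)
    Adj     : (n, n) binary adjacency matrix without self-loops
    W       : (2f, d) weight matrix applied after concatenation
    gamma   : scalar gamma^{(t)}
    """
    deg = Adj.sum(axis=1)
    # Weighted mean over N(v) with weights (A~_vu + 1)
    neigh = ((A_tilde + Adj) @ H) / deg[:, None]
    # Scaled self feature
    self_ = (gamma * (A_tilde.sum(axis=1) + 1.0))[:, None] * H
    return sigma(np.concatenate([neigh, self_], axis=1) @ W)
```

Note that $W$ acts on the concatenation, so its first dimension is twice the input feature dimension.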
# GAT and GraphSNN$_{GAT}$

Graph Attention Network (GAT) (Veličković et al., 2017) linearly transforms the input feature vectors and performs a weighted sum of the transformed feature vectors of the vertices in a neighborhood. GAT computes attention weights $\alpha_{vu}^{(t)}$ using an attention mechanism and aggregates the feature vectors in a neighborhood as follows:

$$
h_{v}^{(t+1)} = \sigma \Big( \sum_{(v,u) \in E} \alpha_{vu}^{(t)} W^{(t)} h_{u}^{(t)} \Big), \tag{10}
$$
where $W^{(t)}$ is a trainable weight matrix and $\sigma$ represents a non-linear activation function. We generalise GAT to a model, called GraphSNN $_{GAT}$ , in the GMP framework. Firstly, we aggregate the feature vectors based on structural coefficients in our aggregation scheme, i.e., we compute
$$
\tilde{h}_{u}^{(t)} = \gamma^{(t)} \Big( \sum_{z \in \mathcal{N}(u)} \tilde{A}_{uz} + 1 \Big) \frac{h_{u}^{(t)}}{\sqrt{|\tilde{\mathcal{N}}(u)||\tilde{\mathcal{N}}(u)|}} + \sum_{z \in \mathcal{N}(u)} \big( \tilde{A}_{uz} + 1 \big) \frac{h_{z}^{(t)}}{\sqrt{|\tilde{\mathcal{N}}(z)||\tilde{\mathcal{N}}(u)|}} \tag{11}
$$
and
$$
\tilde{h}_{v}^{(t)} = \gamma^{(t)} \Big( \sum_{z' \in \mathcal{N}(v)} \tilde{A}_{vz'} + 1 \Big) \frac{h_{v}^{(t)}}{\sqrt{|\tilde{\mathcal{N}}(v)||\tilde{\mathcal{N}}(v)|}} + \sum_{z' \in \mathcal{N}(v)} \big( \tilde{A}_{vz'} + 1 \big) \frac{h_{z'}^{(t)}}{\sqrt{|\tilde{\mathcal{N}}(z')||\tilde{\mathcal{N}}(v)|}}. \tag{12}
$$
We then construct attention coefficients $\alpha_{vu}^{(t)}$ on these aggregated feature vectors as follows:
$$
\alpha_{vu}^{(t)} = \frac{\exp \left( \mathrm{LeakyReLU} \left( a^{T} \left[ W^{(t)} \tilde{h}_{v}^{(t)} \, \| \, W^{(t)} \tilde{h}_{u}^{(t)} \right] \right) \right)}{\sum_{z \in \mathcal{N}(v)} \exp \left( \mathrm{LeakyReLU} \left( a^{T} \left[ W^{(t)} \tilde{h}_{v}^{(t)} \, \| \, W^{(t)} \tilde{h}_{z}^{(t)} \right] \right) \right)}, \tag{13}
$$
where $\|$ denotes concatenation, $W^{(t)}$ is a learnable weight matrix and $a$ is a learnable weight vector. After that, we aggregate the neighborhood features using these attention coefficients:

$$
h_{v}^{(t+1)} = \sigma \Big( \sum_{(v,u) \in E} \alpha_{vu}^{(t)} W^{(t)} \tilde{h}_{u}^{(t)} \Big), \tag{14}
$$
where $W^{(t)}$ is a learnable weight matrix, and $\sigma$ represents a non-linear activation function. We use multi-head attention as in the original work of Veličković et al. (2017).

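A minimal single-head NumPy sketch of Eqs. 13-14, taking the structurally aggregated features $\tilde{h}_v$ of Eqs. 11-12 as input; the decomposition of $a^{T}[z_v \| z_u]$ into two dot products and all names below are our own illustrative choices:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0.0, x, slope * x)

def graphsnn_gat_attention(H_tilde, Adj, W, a):
    """Attention coefficients in the spirit of Eq. 13 over structurally
    aggregated features H_tilde (each row is a \\tilde{h}_v).

    W : (f, d) weight matrix;  a : (2d,) attention vector.
    Returns an (n, n) matrix of alpha_{vu}, row-normalized over N(v).
    """
    Z = H_tilde @ W
    d = Z.shape[1]
    # a^T [z_v || z_u] decomposes as a_left . z_v + a_right . z_u
    scores = (Z @ a[:d])[:, None] + (Z @ a[d:])[None, :]
    e = np.where(Adj > 0, leaky_relu(scores), -1e30)  # mask non-neighbors
    alpha = np.exp(e)
    return alpha / alpha.sum(axis=1, keepdims=True)

def graphsnn_gat_layer(H_tilde, Adj, W, a, sigma=np.tanh):
    """Eq. 14: attention-weighted aggregation of transformed features."""
    alpha = graphsnn_gat_attention(H_tilde, Adj, W, a)
    return sigma(alpha @ (H_tilde @ W))
```

Multi-head attention repeats this with independent $(W, a)$ pairs and concatenates (or averages) the results.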
# GIN and GraphSNN$_{GIN}$

Graph Isomorphism Network (GIN) (Xu et al., 2019) takes the sum aggregation over a neighborhood, followed by a 2-layer MLP, where $\epsilon^{(t+1)}$ is a learnable parameter or a fixed scalar. Each neural layer is expressed as:

$$
h_{v}^{(t+1)} = \mathrm{MLP}^{(t+1)} \left( (1 + \epsilon^{(t+1)}) h_{v}^{(t)} + \sum_{u \in \mathcal{N}(v)} h_{u}^{(t)} \right). \tag{15}
$$
Here, we consider one of the GIN variants employed in the original paper, where the learnable parameter $\epsilon = 0$, and generalise it to $\mathbf{GraphSNN}_{GIN}$ as defined below:

$$
h_{v}^{(t+1)} = \mathrm{MLP}^{(t+1)} \left( \gamma^{(t)} \Big( \sum_{u \in \mathcal{N}(v)} \tilde{A}_{vu} + 1 \Big) h_{v}^{(t)} + \sum_{u \in \mathcal{N}(v)} \big( \tilde{A}_{vu} + 1 \big) h_{u}^{(t)} \right). \tag{16}
$$
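Eq. 16 admits the same vectorized sketch as the previous variants (assumptions as before: $\tilde{A}$ precomputed and zero off edges; names ours). Setting $\tilde{A} = 0$ and $\gamma^{(t)} = 1$ recovers GIN with $\epsilon = 0$, which gives a direct sanity check:

```python
import numpy as np

def graphsnn_gin_layer(H, A_tilde, Adj, gamma, mlp):
    """One GraphSNN_GIN layer in the spirit of Eq. 16.

    mlp : callable standing in for MLP^{(t+1)}
    """
    self_ = (gamma * (A_tilde.sum(axis=1) + 1.0))[:, None] * H
    neigh = (A_tilde + Adj) @ H          # sum_u (A~_vu + 1) h_u
    return mlp(self_ + neigh)

# With A_tilde = 0 and gamma = 1 this reduces exactly to GIN with eps = 0.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.eye(3)
out = graphsnn_gin_layer(H, np.zeros_like(A), A, 1.0, mlp=lambda X: X)
```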
# B. EXPERIMENTS
# DATASETS
Table 10 contains the statistics for the five datasets used in our experiments for node classification in Section 5.1.
<table><tr><td>Dataset</td><td>Type</td><td>#Nodes</td><td>#Edges</td><td>#Classes</td><td>#Features</td></tr><tr><td>Cora</td><td>Citation network</td><td>2,708</td><td>5,429</td><td>7</td><td>1,433</td></tr><tr><td>Citeseer</td><td>Citation network</td><td>3,327</td><td>4,732</td><td>6</td><td>3,703</td></tr><tr><td>Pubmed</td><td>Citation network</td><td>19,717</td><td>44,338</td><td>3</td><td>500</td></tr><tr><td>NELL</td><td>Knowledge graph</td><td>65,755</td><td>266,144</td><td>210</td><td>5,414</td></tr><tr><td>ogbn-arxiv</td><td>Citation network</td><td>169,343</td><td>1,166,243</td><td>40</td><td>128</td></tr></table>
Table 10: Statistics for node classification datasets.
Table 11 below contains the statistics for the datasets used in our experiments on small graph classification in Section 5.2, as well as the datasets used in an additional experiment for graph classification following the data splits and experimental setup in (Errica et al., 2020). The results of this additional experiment are reported under Section "Graph Classification using Setup (Errica et al., 2020)" in Appendix B.
Table 12 contains the statistics for the five large graph datasets from Open Graph Benchmark (OGB) Hu et al. (2020), used in our experiments for large graph classification in Section 5.2.
# EXPERIMENTAL SETUP ON SMALL GRAPHS
Previously, several experimental setups have been considered for evaluating graph classification on small graphs in TUD benchmark datasets (https://chrsmrrs.github.io/datasets/). All the baseline methods in our paper use the 10-fold cross validation technique. However, they differ in how they split training/validation/testing data and how they report the final results in terms of classification accuracy. Below, we discuss the details of their experimental setups.
<table><tr><td>Dataset</td><td>#Graphs</td><td>Avg # Nodes</td><td>Avg # Edges</td><td>#Classes</td></tr><tr><td>MUTAG</td><td>188</td><td>17.93</td><td>19.79</td><td>2</td></tr><tr><td>PTC-MR</td><td>344</td><td>14.29</td><td>14.69</td><td>2</td></tr><tr><td>BZR</td><td>405</td><td>35.75</td><td>38.36</td><td>2</td></tr><tr><td>COX2</td><td>467</td><td>41.22</td><td>43.45</td><td>2</td></tr><tr><td>ENZYMES</td><td>600</td><td>32.63</td><td>64.14</td><td>6</td></tr><tr><td>IMDB-B</td><td>1000</td><td>19.77</td><td>96.53</td><td>2</td></tr><tr><td>PROTEINS</td><td>1113</td><td>39.06</td><td>72.82</td><td>2</td></tr><tr><td>D & D</td><td>1178</td><td>284.32</td><td>715.66</td><td>2</td></tr><tr><td>NCI1</td><td>4110</td><td>29.87</td><td>32.30</td><td>2</td></tr><tr><td>RDT-M5K</td><td>5000</td><td>508.52</td><td>594.87</td><td>5</td></tr><tr><td>COLLAB</td><td>5000</td><td>74.49</td><td>2457.78</td><td>3</td></tr></table>
Table 11: Statistics for small graph classification datasets.
<table><tr><td>Dataset</td><td>#Graphs</td><td>Avg # Nodes</td><td>Avg # Edges</td><td>#Tasks</td><td>Task Type</td></tr><tr><td>ogbg-molhiv</td><td>41,127</td><td>25.5</td><td>27.5</td><td>1</td><td>Binary classification</td></tr><tr><td>ogbg-moltox21</td><td>7,831</td><td>18.6</td><td>19.3</td><td>12</td><td>Binary classification</td></tr><tr><td>ogbg-moltoxcast</td><td>8,576</td><td>18.8</td><td>19.3</td><td>617</td><td>Binary classification</td></tr><tr><td>ogbg-molpcba</td><td>437,929</td><td>26.0</td><td>28.1</td><td>128</td><td>Binary classification</td></tr><tr><td>ogbg-ppa</td><td>158,100</td><td>243.4</td><td>2,266.1</td><td>1</td><td>Multi-class classification</td></tr></table>

Table 12: Statistics for large graph classification datasets (OGB graph datasets).

- CapsGNN (Xinyi & Chen, 2018) splits the datasets into $80\%$ for training, $10\%$ for validation, and $10\%$ for testing. Training is stopped when the validation accuracy reaches its highest value. They then take the test accuracy at the epoch with the highest validation accuracy in each fold. The final results are reported as the mean accuracy and standard deviation over 10 folds.
- DGCNN Zhang et al. (2018a) splits the datasets into $90\%$ for training and $10\%$ for testing. They obtain the test accuracy of the last epoch in each fold. They report the final results by computing the mean accuracy and standard deviation on the test accuracy over 10 folds.
- GIN and GraphSAGE (Xu et al., 2019) split the datasets into $90\%$ for training and $10\%$ for testing. They average the test accuracy on 10 folds and select the epoch with the highest averaged accuracy. Then they report the final results by computing the mean accuracy and standard deviation based on the selected epoch.
- FGW (Titouan et al., 2019) splits the datasets into $90\%$ for training and $10\%$ for testing. Then, they use the nested cross validation technique on the same folds, and repeat the process 10 times. They report the final results by computing the mean accuracy and standard deviation.
- The other baseline methods split the datasets into $90\%$ for training and $10\%$ for testing, and repeat their experiment 10 times. Then they report the final results by computing the mean accuracy and standard deviation.
In our work, we split the datasets into $90\%$ for training and $10\%$ for testing. We obtain the best validation accuracy on each fold. Then we report the final results by computing the mean accuracy and standard deviation over 10 folds<sup>1</sup>.
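The reporting step above reduces to collecting one best-validation accuracy per fold and summarizing by mean and (sample) standard deviation; a trivial sketch, with made-up fold accuracies:

```python
import numpy as np

def summarize_folds(fold_accuracies):
    """Mean and sample standard deviation over the per-fold accuracies."""
    a = np.asarray(fold_accuracies, dtype=float)
    return a.mean(), a.std(ddof=1)

# Hypothetical per-fold best-validation accuracies (percent):
mean_acc, std_acc = summarize_folds(
    [94.7, 93.1, 95.2, 94.0, 94.9, 93.8, 95.5, 94.2, 94.6, 94.4])
# Reported in the tables as, e.g., "94.4 +/- 0.7"
```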
# NODE CLASSIFICATION USING RANDOM SPLITS

Following Pei et al. (2020), we randomly split graph nodes into $60\%$, $20\%$ and $20\%$ for training, validation and testing, respectively. The other hyperparameter settings are the same as in Section 5.1. Table 13 shows the results. We see that our models consistently outperform all of the baseline methods on all benchmark datasets. Specifically, GraphSNN$_{GCN}$ improves upon GCN by a margin of $1.5\%$, $1.7\%$, $1.6\%$ and $2.4\%$ on Cora, Citeseer, Pubmed and NELL, respectively.

GraphSNN$_{GAT}$ improves upon GAT by $1.3\%$, $1.6\%$ and $2.0\%$ on Cora, Citeseer and Pubmed, respectively. GraphSNN$_{GIN}$ improves upon GIN by $3.8\%$, $1.7\%$, $1.8\%$ and $1.6\%$ on Cora, Citeseer, Pubmed and NELL, respectively. GraphSNN$_{GraphSAGE}$ improves upon GraphSAGE by $1.3\%$, $1.7\%$, $1.1\%$ and $2.3\%$ on Cora, Citeseer, Pubmed and NELL, respectively.

<table><tr><td>Method</td><td>Cora</td><td>Citeseer</td><td>Pubmed</td><td>NELL</td></tr><tr><td>GCN</td><td>85.7 ± 1.6</td><td>73.6 ± 1.0</td><td>88.1 ± 1.2</td><td>72.2 ± 5.6</td></tr><tr><td>GraphSNNGCN</td><td>87.2 ± 1.5</td><td>75.3 ± 1.3</td><td>89.7 ± 1.7</td><td>74.6 ± 6.3</td></tr><tr><td>GAT</td><td>86.3 ± 0.3</td><td>74.3 ± 0.3</td><td>87.6 ± 0.1</td><td>-</td></tr><tr><td>GraphSNNGAT</td><td>87.6 ± 0.9</td><td>75.9 ± 0.8</td><td>89.6 ± 0.6</td><td>-</td></tr><tr><td>GIN</td><td>82.5 ± 0.8</td><td>70.8 ± 1.9</td><td>85.0 ± 1.5</td><td>66.7 ± 3.3</td></tr><tr><td>GraphSNNGIN</td><td>86.3 ± 0.7</td><td>72.5 ± 1.5</td><td>86.8 ± 1.2</td><td>68.3 ± 3.7</td></tr><tr><td>GraphSAGE</td><td>86.8 ± 1.9</td><td>74.2 ± 1.8</td><td>88.3 ± 1.1</td><td>69.4 ± 4.3</td></tr><tr><td>GraphSNNGraphSAGE</td><td>88.1 ± 1.5</td><td>75.9 ± 1.3</td><td>89.4 ± 2.4</td><td>71.7 ± 4.5</td></tr></table>

Table 13: Classification accuracy (%) averaged over 10 random splits on node classification.

# GRAPH CLASSIFICATION USING SETUP (ERRICA ET AL., 2020)

Following the data splits and experimental setup introduced in (Errica et al., 2020), we further evaluate our method. The setup in (Errica et al., 2020) provides a fair comparison protocol for GNN methods. The evaluation has two phases: (1) model selection on the validation set, and (2) model assessment on the test set. More specifically, the datasets are first split into $90\%$ for training and $10\%$ for testing. The training set is then further split into $90\%$ training and $10\%$ validation. An inner hold-out method is applied to select the best model based on validation accuracy. After selecting the best model, it is trained three times on the entire training set with early stopping.

We have conducted experiments on four bioinformatics datasets (NCI1, PROTEINS, ENZYMES and D&D) and three social network datasets (COLLAB, IMDB-B and REDDIT-5k) with node features. The results of the baseline, DGCNN and GIN are taken from (Errica et al., 2020). Note that the final results of DGCNN and GIN in (Errica et al., 2020) are reported as the mean accuracy and standard deviation on the test set over these three runs, which differs from the original papers of DGCNN and GIN. Table 14 shows the results.

<table><tr><td>Method</td><td>NCI1</td><td>PROTEINS</td><td>ENZYMES</td><td>D&D</td><td>COLLAB</td><td>IMDB-B</td><td>REDDIT-5k</td></tr><tr><td>Baseline</td><td>69.8±2.2</td><td>75.8 ± 3.7</td><td>65.2±6.4</td><td>78.4 ± 4.5</td><td>70.2±1.5</td><td>70.8±5.0</td><td>52.2±1.5</td></tr><tr><td>DGCNN</td><td>76.4±1.7</td><td>72.9±3.5</td><td>38.9±5.7</td><td>76.6±4.3</td><td>71.2±1.9</td><td>69.2±3.0</td><td>49.2±1.2</td></tr><tr><td>GIN</td><td>80.0±1.4</td><td>73.3±4.0</td><td>59.6±4.5</td><td>75.3±2.9</td><td>75.6±2.3</td><td>71.2±3.9</td><td>56.1±1.7</td></tr><tr><td>GraphSNN</td><td>81.6 ± 2.8</td><td>74.5 ± 3.5</td><td>61.7 ± 3.4</td><td>77.1 ± 3.3</td><td>77.0 ± 3.1</td><td>72.3 ± 3.6</td><td>57.1 ± 3.1</td></tr></table>

Table 14: Classification accuracy (%) averaged over 10 runs on graph classification.
# GRAPH CLASSIFICATION ON OGB GRAPH DATASETS
Table 15 shows the running time of the preprocessing step in our method GraphSNN for the large graph datasets (averaged over 5 runs). Note that the preprocessing step can be parallelized efficiently at the node level. The CPU time is the total preprocessing time of a dataset when each node is preprocessed sequentially, and the CPU time per node is the average preprocessing time per node.

<table><tr><td>Dataset</td><td>CPU time (seconds)</td><td>CPU time per node (milliseconds)</td></tr><tr><td>ogbg-molhiv</td><td>66.97</td><td>0.06383</td></tr><tr><td>ogbg-moltox21</td><td>79.37</td><td>0.54565</td></tr><tr><td>ogbg-moltoxcast</td><td>380.84</td><td>2.36417</td></tr><tr><td>ogbg-ppa</td><td>820.12</td><td>4.71235</td></tr></table>

Table 15: Running time of the preprocessing step for large graph datasets averaged over 5 runs.

# OVERSMOOTHING ANALYSIS

We have also conducted further experiments to analyze the effectiveness of our method in alleviating the over-smoothing issue. We compare GIN (a spatial GNN), DFNets (Wijesinghe & Wang, 2019) (a spectral GNN), GraphSNN$_{GIN}$ and GraphSNN$_{GCN}$. For a fair comparison, we remove the dense-net architecture of DFNets and use the same hyperparameters as in the original paper. We evaluate all models on the Cora dataset using the standard splits. The classification accuracy is averaged over 10 runs on node classification.

<table><tr><td>#Layers</td><td>GIN</td><td>GraphSNNGIN</td><td>DFNet</td><td>GraphSNNGCN</td></tr><tr><td>1</td><td>73.3±1.5</td><td>76.1±1.6</td><td>80.5±0.6</td><td>80.1±0.8</td></tr><tr><td>2</td><td>77.6±1.3</td><td>79.2±1.7</td><td>81.9±0.5</td><td>83.1±1.8</td></tr><tr><td>3</td><td>75.2±1.7</td><td>78.5±1.3</td><td>82.6±0.3</td><td>82.0±0.8</td></tr><tr><td>4</td><td>48.6±2.1</td><td>77.2±2.3</td><td>80.7±0.6</td><td>80.1±0.7</td></tr><tr><td>5</td><td>40.3±1.9</td><td>75.9±2.1</td><td>75.6±0.3</td><td>79.1±1.2</td></tr><tr><td>6</td><td>36.1±2.3</td><td>73.3±1.8</td><td>65.3±1.3</td><td>76.5±1.3</td></tr><tr><td>7</td><td>27.5±2.1</td><td>71.9±1.5</td><td>60.9±1.5</td><td>76.3±1.3</td></tr><tr><td>8</td><td>20.3±1.8</td><td>69.3±2.2</td><td>53.6±1.3</td><td>75.7±1.2</td></tr></table>

Table 16: Oversmoothing analysis of GIN and spectral GNN (DFNet) on the Cora dataset.
The reason why GraphSNN can alleviate over-smoothing is that structural coefficients capture the structural connectivity between a target vertex and its neighbors. Thus, a neighbor with weak structural connectivity passes only a weak message to the target vertex, whereas a neighbor with strong structural connectivity passes a strong message.

Figure 4 shows the results of GCN and GraphSNN $_{GCN}$ on the datasets Cora, Citeseer and Pubmed, in terms of classification accuracy averaged over 10 runs in the setting of standard splits.

Figure 4: Oversmoothing analysis w.r.t. the model depth for node classification.
# ABLATION STUDY WITH AUGMENTED NODE FEATURES
We consider an experimental evaluation setup called BL, which serves as the baseline for all experiments in this ablation study. In the setting of BL, $\mathrm{AGGREGATE}^{I}$ in GraphSNN is set to 1. Different variants of BL then add different local substructure counts as additional node features. This allows us to analyse what types of local substructures our proposed architecture can distinguish.

There are five variants of BL being considered in the ablation study:
(1) $\mathrm{BL}_{SC}$: Setting $\mathrm{AGGREGATE}^{I}$ of GraphSNN to 1 and keeping structural coefficients for neighbors.

(2) $\mathrm{BL}_{NF}^{clique}$: Setting $\mathrm{AGGREGATE}^{I}$ of GraphSNN to 1, removing structural coefficients for neighbors, and adding additional node features (triangle and 4-clique counts) to the original feature vectors.

<table><tr><td>Method</td><td>GSN-v</td><td>\(BL_{NF}^{clique}\)</td><td>\(BL_{SC}\)</td><td>\(BL_{SC+NF}^{clique}\)</td><td>GraphSNN</td></tr><tr><td>MUTAG</td><td>92.20±7.5</td><td>90.21±2.3</td><td>94.06±2.4</td><td>95.16±2.5</td><td>94.70±1.9</td></tr><tr><td>PTC-MR</td><td>67.40±5.7</td><td>67.13±2.9</td><td>70.18±3.1</td><td>71.04±3.1</td><td>70.58±3.1</td></tr><tr><td>PROTEINS</td><td>74.59±5.0</td><td>76.42±2.6</td><td>78.05±2.3</td><td>78.66±2.1</td><td>78.42±2.7</td></tr><tr><td>BZR</td><td>-</td><td>86.82±3.1</td><td>90.67±3.1</td><td>91.98±3.2</td><td>91.12±3.0</td></tr><tr><td>IMDB-B</td><td>76.80±2.0</td><td>77.00±3.1</td><td>77.23±2.8</td><td>78.53±2.9</td><td>78.01±2.8</td></tr></table>

Table 17: Analysis of the effects of our structural coefficients with substructure counts (triangle and 4-clique counts). Classification accuracy (%) averaged over 10 runs on graph classification.

<table><tr><td>Method</td><td>ID-GNN</td><td>\(BL_{NF}^{cycle}\)</td><td>\(BL_{SC}^{cycle}\)</td><td>\(BL_{SC+NF}^{cycle}\)</td><td>GraphSNN</td></tr><tr><td>MUTAG</td><td>96.50±3.2</td><td>91.36±2.1</td><td>94.06±2.4</td><td>96.61±2.3</td><td>94.70±1.9</td></tr><tr><td>PTC-MR</td><td>61.90±5.4</td><td>67.57±3.3</td><td>70.18±3.1</td><td>71.76±3.2</td><td>70.58±3.1</td></tr><tr><td>PROTEINS</td><td>78.00±3.5</td><td>77.26±2.5</td><td>78.05±2.3</td><td>78.95±2.5</td><td>78.42±2.7</td></tr><tr><td>BZR</td><td>86.40±3.0</td><td>86.83±3.3</td><td>90.67±3.1</td><td>91.75±3.4</td><td>91.12±3.0</td></tr><tr><td>IMDB-B</td><td>-</td><td>76.36±2.6</td><td>77.23±2.8</td><td>78.58±2.4</td><td>78.01±2.8</td></tr></table>
Table 18: Analysis of the effects of our structural coefficients with substructure counts (cycle counts). Classification accuracy (%) averaged over 10 runs on graph classification.

(3) $\mathbf{BL}_{SC+NF}^{clique}$: Setting $\mathrm{AGGREGATE}^{I}$ of GraphSNN to 1, keeping structural coefficients for neighbors, and adding additional node features (triangle and 4-clique counts) to the original feature vectors.

(4) $\mathbf{BL}_{NF}^{cycle}$: Setting $\mathrm{AGGREGATE}^{I}$ of GraphSNN to 1, removing structural coefficients for neighbors, and adding additional node features (cycle counts) to the original feature vectors.

(5) $\mathbf{BL}_{SC + NP}^{cycle}$ : Setting AGGREGATION of GraphSNN to 1, keeping structural coefficients for neighbors, and adding additional node features (cycle counts) into the original feature vectors.
We compare GraphSNN with GSN-v (Bouritsas et al., 2020), $\mathrm{BL}_{NF}^{clique}$ , $\mathrm{BL}_{SC}$ , and $\mathrm{BL}_{SC + NF}^{clique}$ to analyze how our proposed architecture relates to models that use triangle and 4-clique counts as additional node features. Similarly, we compare GraphSNN with ID-GNNs (You et al., 2021), $\mathrm{BL}_{NF}^{cycle}$ , $\mathrm{BL}_{SC}$ , and $\mathrm{BL}_{SC + NF}^{cycle}$ to analyze how our proposed architecture relates to models that use cycle counts as additional node features. We concatenate the counts of cycles of length 1 to 4 starting and ending at the given source node with its original feature vector, as in (You et al., 2021). Table 17 and Table 18 show the experimental results. As AGGREGATE is set to 1 in the BL settings, the performance gap between $\mathrm{BL}_{NF}$ and $\mathrm{BL}_{SC + NF}$ reflects the effectiveness of structural coefficients in enhancing relational inference between a target vertex and its neighbors. The performance gap between $\mathrm{BL}_{SC}$ and GraphSNN shows the effectiveness of AGGREGATE in our proposed model GraphSNN. Furthermore, $\mathrm{BL}_{SC + NF}$ consistently performs best, since it incorporates both extra node features and structural coefficients into the feature aggregation. The small performance gap between $\mathrm{BL}_{SC + NF}$ and GraphSNN arises because the augmented node features capture additional structural information that structural coefficients alone cannot.
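As a concrete illustration of how the $\mathrm{BL}_{NF}$-style baselines build their inputs, the sketch below (plain Python, hypothetical helper names) counts triangles per node and concatenates the count to each node's original feature vector; 4-clique and cycle counts would be appended analogously.

```python
from itertools import combinations

def triangle_counts(adj):
    """Number of triangles through each node; adj maps node -> set of neighbors."""
    counts = {v: 0 for v in adj}
    for v in adj:
        for u, w in combinations(sorted(adj[v]), 2):
            if w in adj[u]:  # edge u-w closes the triangle v-u-w
                counts[v] += 1
    return counts

def augment_features(features, adj):
    """Concatenate each node's triangle count to its original feature vector."""
    t = triangle_counts(adj)
    return {v: features[v] + [t[v]] for v in features}

# A 4-cycle 0-1-2-3-0 with chord 0-2: triangles {0,1,2} and {0,2,3}.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
aug = augment_features({v: [1.0] for v in adj}, adj)
```

The same pattern extends to any precomputed substructure statistic: compute a per-node count, then append it as an extra feature dimension before training.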
# C. PROOFS FOR LEMMAS AND THEOREMS
# Proof for Theorem 1
Theorem 1. The following statements are true: (a) If $S_i \simeq_{subgraph} S_j$ , then $S_i \simeq_{overlap} S_j$ ; but not vice versa; (b) If $S_i \simeq_{overlap} S_j$ , then $S_i \simeq_{subtree} S_j$ ; but not vice versa.
Proof. In the following, we prove the statements in this theorem one by one.
For Statement (a), by $S_{i} \simeq_{subgraph} S_{j}$ and Definition 1, we know that there exists a bijective mapping $g': \tilde{\mathcal{N}}(i) \to \tilde{\mathcal{N}}(j)$ such that, for the vertex $i$ and any vertex $v' \in \mathcal{N}(i)$ , $i$ and $v'$ are adjacent in $S_{i}$ iff $j = g(i)$ and $u' = g(v')$ are adjacent in $S_{j}$ , and $h_{i} = h_{j}$ and $h_{v'} = h_{u'}$ , where $g$ is a bijective mapping between $S_{i}$ and $S_{j}$ as defined by Definition 1. Then, for each pair of overlap subgraphs $S_{iv'}$ and $S_{ju'}$ , we can further extend $g'$ along $g$ on $S_{iv'}$ and $S_{ju'}$ ; that is, $g'(v) = u$ iff $g(v) = u$ . If $v$ is in $S_{iv'}$ , by the definition of overlap subgraph, $v$ must either be $i$ or a neighbor of $i$ ; hence $u = g'(v)$ in this case must be either $j$ or a neighbor of $j$ . By the definition of $g$ and the fact that $g'(v) = u$ iff $g(v) = u$ , we know that any two vertices $v_1$ and $v_2$ in $S_{iv'}$ are adjacent in $S_{iv'}$ iff their corresponding vertices $g'(v_1)$ and $g'(v_2)$ are adjacent in $S_{ju'}$ and their corresponding feature vectors are indistinguishable, i.e., $S_{iv'} \simeq_{subgraph} S_{ju'}$ for any $v' \in \mathcal{N}(i)$ with $g(v') = u'$ . Conversely, if $S_i \simeq_{overlap} S_j$ , then it is possible that $S_i \not\simeq_{subgraph} S_j$ , as shown by the two graphs in Figure 2(a).
For Statement (b), if $S_i \simeq_{\text{overlap}} S_j$ , then to prove $S_i \simeq_{\text{subtree}} S_j$ we need to show that there exists a bijective mapping $g: \tilde{\mathcal{N}}(i) \to \tilde{\mathcal{N}}(j)$ such that $g(i) = j$ and, for any $v' \in \tilde{\mathcal{N}}(i)$ with $g(v') = u'$ , the feature vectors of $v'$ and $u'$ are indistinguishable, i.e., $h_{v'} = h_{u'}$ . By Def. 2, we can find a bijective mapping $g': \tilde{\mathcal{N}}(i) \to \tilde{\mathcal{N}}(j)$ such that $g'(i) = j$ and, for any $v' \in \mathcal{N}(i)$ with $g'(v') = u'$ , $S_{iv'}$ and $S_{ju'}$ are subgraph-isomorphic. This implies that $g'$ cannot distinguish the feature vectors of $v'$ and $u'$ for any $v' \in \tilde{\mathcal{N}}(i)$ with $g'(v') = u'$ . Similarly, the converse does not necessarily hold; one counterexample is the pair of graphs shown in Figure 2(b), which are subtree-isomorphic but not overlap-isomorphic.
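Definitions 1 and 2 are not reproduced in this excerpt; reading the overlap subgraph $S_{iv'}$ as the intersection of the closed-neighborhood subgraphs of the adjacent vertices $i$ and $v'$, the objects manipulated in this proof can be sketched as follows (hypothetical helper names):

```python
def closed_neighborhood_subgraph(adj, v):
    """Vertices: v and its neighbors; edges: all graph edges among those vertices."""
    verts = {v} | adj[v]
    edges = {frozenset((a, b)) for a in verts for b in adj[a] if b in verts and a < b}
    return verts, edges

def overlap_subgraph(adj, i, u):
    """Intersection of the closed-neighborhood subgraphs of adjacent i and u."""
    vi, ei = closed_neighborhood_subgraph(adj, i)
    vu, eu = closed_neighborhood_subgraph(adj, u)
    return vi & vu, ei & eu

# Triangle 0-1-2 with a pendant vertex 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```

For instance, the overlap of the adjacent pair (0, 1) is the full triangle, while the overlap of (2, 3) contains only the edge between them.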
# Proof for Theorem 2
Theorem 2. Let $M$ be a GNN. $M$ is as powerful as 1-WL in distinguishing non-isomorphic graphs if $M$ has a sufficient number of layers and each layer can map any $S_{i}$ and $S_{j}$ in $S$ into two different embeddings (i.e., $\zeta(S_{i}) \neq \zeta(S_{j})$ ) if and only if $S_{i} \not\cong_{\text{subtree}} S_{j}$ .
Proof. We first show that, for any two graphs $G_{1}$ and $G_{2}$ , if they can be distinguished by 1-WL, then they must be distinguishable by such a GNN $M$ as well. Suppose that 1-WL takes $k$ iterations to distinguish $G_{1}$ and $G_{2}$ , i.e., 1-WL yields the same multiset of node labels on $G_{1}$ and $G_{2}$ in the iterations from 0 to $k - 1$ , but two different multisets of node labels on $G_{1}$ and $G_{2}$ in the $k$ -th iteration. To derive a contradiction, we assume that a GNN $M$ satisfying the above two conditions cannot distinguish $G_{1}$ and $G_{2}$ in the iterations from 0 to $k$ . Since 1-WL can distinguish $G_{1}$ and $G_{2}$ in the $k$ -th iteration, there must exist two neighborhood subgraphs, say $S_{i}$ and $S_{j}$ , which correspond to two different multisets of node labels on $G_{1}$ and $G_{2}$ at the $k$ -th iteration. These two different multisets of node labels correspond to two different multisets of feature vectors in their neighborhoods, i.e., $\{\{h_v | v \in \mathcal{N}(i)\}\} \neq \{\{h_u | u \in \mathcal{N}(j)\}\}$ . By Def. 3, we know that $S_{i} \not\simeq_{\text{subtree}} S_{j}$ . This means that $\zeta(S_{i}) \neq \zeta(S_{j})$ , which contradicts the assumption that $M$ cannot distinguish $G_{1}$ and $G_{2}$ at the $k$ -th iteration.
Now, we show the other direction: for any two graphs $G_{1} = (V_{1},E_{1})$ and $G_{2} = (V_{2},E_{2})$ , if they can be distinguished by such a GNN $M$ , then they must be distinguishable by 1-WL. Similarly, suppose that at the $k$ -th iteration, $M$ maps the neighborhood subgraphs of these two graphs into two different multisets of node embeddings, i.e., $\{\{\zeta(S_v)|v\in V_1\}\} \neq \{\{\zeta(S_u)|u\in V_2\}\}$ . This means that we can find at least two different neighborhood subgraphs $S_{i}$ and $S_{j}$ such that $\zeta(S_{i}) \neq \zeta(S_{j})$ . For such neighborhood subgraphs $S_{i}$ and $S_{j}$ , we know that $S_{i} \not\simeq_{subtree} S_{j}$ . This means that $S_{i}$ and $S_{j}$ correspond to either $h_{i} \neq h_{j}$ or $\{\{h_v|v\in \mathcal{N}(i)\}\} \neq \{\{h_u|u\in \mathcal{N}(j)\}\}$ , which 1-WL relabels into two different new labels. Thus, 1-WL can also distinguish such neighborhood subgraphs, and accordingly distinguish $G_{1}$ and $G_{2}$ .
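Since Theorem 2 calibrates GNN power against the 1-WL test, a minimal sketch of 1-WL color refinement may help (hypothetical names; graphs given as adjacency dicts, with uniform initial labels standing in for identical node features):

```python
def wl_refine(adj, labels, iters=3):
    """One-dimensional WL: relabel each node by (own label, sorted neighbor labels)."""
    for _ in range(iters):
        sigs = {v: (labels[v], tuple(sorted(labels[u] for u in adj[v]))) for v in adj}
        palette = {s: c for c, s in enumerate(sorted(set(sigs.values())))}
        labels = {v: palette[sigs[v]] for v in adj}
    return labels

def wl_distinguishes(adj1, adj2, iters=3):
    """True if 1-WL yields different label multisets on the two graphs."""
    l1 = wl_refine(adj1, {v: 0 for v in adj1}, iters)
    l2 = wl_refine(adj2, {v: 0 for v in adj2}, iters)
    return sorted(l1.values()) != sorted(l2.values())

# Classic failure case: two disjoint triangles vs. a 6-cycle (both 2-regular).
two_triangles = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}, 3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
```

`wl_distinguishes(two_triangles, cycle6)` stays `False` for any number of iterations, which is exactly the kind of pair a strictly more expressive model (Theorem 3) must separate.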
The proof is completed.
# Proof for Theorem 3
Theorem 3. Let $M$ be a GNN whose aggregation scheme $\Phi$ is defined by Eq. 1-Eq. 3. $M$ is strictly more expressive than 1-WL in distinguishing non-isomorphic graphs if $M$ has a sufficient number of layers and also satisfies the following conditions:
(1) $M$ can distinguish at least two neighborhood subgraphs $S_{i}$ and $S_{j}$ with $S_{i} \simeq_{\text{subtree}} S_{j}$ , $S_{i} \not\simeq_{\text{subgraph}} S_{j}$ and $\{\{\tilde{A}_{iv'}|v' \in \mathcal{N}(i)\}\} \neq \{\{\tilde{A}_{ju'}|u' \in \mathcal{N}(j)\}\}$ ;
(2) $\Phi\left(h_v^{(t)}, \{\{h_u^{(t)}|u \in \mathcal{N}(v)\}\}, \{\{(\tilde{A}_{vu}, h_u^{(t)})|u \in \mathcal{N}(v)\}\}\right)$ is injective.
Proof. We prove this theorem in two steps. First, we prove by contradiction that a GNN $M$ satisfying the above conditions can distinguish any two graphs that are distinguishable by 1-WL. Assume that there exist two graphs $G_{1}$ and $G_{2}$ which can be distinguished by 1-WL but cannot be distinguished by $M$ . Further, suppose that 1-WL cannot distinguish these two graphs in the iterations from 0 to $k - 1$ , but can distinguish them in the $k$ -th iteration. Then, there must exist two neighborhood subgraphs $S_{i}$ and $S_{j}$ whose neighboring nodes correspond to two different multisets of node labels at the $k$ -th iteration, i.e., $\{\{h_v^{(k)}|v\in \mathcal{N}(i)\}\} \neq \{\{h_u^{(k)}|u\in \mathcal{N}(j)\}\}$ . By condition (2) above, $\Phi$ is injective. Thus, for $S_{i}$ and $S_{j}$ , $\Phi$ yields two different feature vectors at the $k$ -th iteration. This means that $M$ can also distinguish $G_{1}$ and $G_{2}$ , which contradicts the assumption. This completes the first step. For the second step, it suffices to exhibit two graphs that can be distinguished by $M$ but cannot be distinguished by 1-WL; Figure 1 presents two such graphs.
# Proof for Theorem 4
We consider that, for each vertex in a graph, its node features are from a countable set; similarly, for each pair of adjacent vertices in a graph, its structural coefficient is also from a countable set. Assume that $\mathcal{H}$ , $\mathcal{A}$ and $\mathcal{W}$ are countable sets where $\mathcal{H}$ is a node feature space, $\mathcal{A}$ is a structural coefficient space, and $\mathcal{W} = \{A_{ij}h_i|A_{ij}\in \mathcal{A},h_i\in \mathcal{H}\}$ . Let $H$ and $W$ be two multisets containing elements from $\mathcal{H}$ and $\mathcal{W}$ , respectively, and $|H| = |W|$ .
Lemma 1. There exists a function $f$ s.t. $\pi(H, W) = \sum_{h \in H, w \in W} f(h, w)$ is unique for any distinct pair of multisets $(H, W)$ .
Proof. Since $\mathcal{H}$ and $\mathcal{W}$ are countable, there must exist two functions $\psi_1: \mathcal{H} \to \mathbb{N}_{odd}$ mapping $h \in \mathcal{H}$ to odd natural numbers and $\psi_2: \mathcal{W} \to \mathbb{N}_{even}$ mapping $w \in \mathcal{W}$ to even natural numbers. Further, for any pair of multisets $(H, W)$ , since the cardinality of $H$ and $W$ is bounded, there must exist a number $N \in \mathbb{N}$ such that $|H| < N$ and $|W| < N$ . Thus, we can find a prime number $P > 2N$ . Then we have a mapping $f$ as $f(h, w) = P^{-\psi_1(h)} + P^{-\psi_2(w)}$ such that $\sum_{h \in H, w \in W} f(h, w)$ is unique for each distinct pair of $(H, W)$ .
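The construction in this proof can be sanity-checked numerically. The sketch below uses illustrative encodings for $\psi_1, \psi_2$, a small prime $P$, and exact rational arithmetic, then brute-forces all pairs of two-element multisets over a three-element space to confirm that $\pi(H, W)$ separates them:

```python
from fractions import Fraction
from itertools import combinations_with_replacement

P = 11  # prime chosen large enough that the base-P "digits" below never carry

def psi1(h):   # illustrative injective map from features to odd naturals
    return 2 * h + 1

def psi2(w):   # illustrative injective map from weighted features to even naturals
    return 2 * w + 2

def f(h, w):
    return Fraction(1, P ** psi1(h)) + Fraction(1, P ** psi2(w))

def pi(H, W):
    """pi(H, W) = sum of f(h, w) over h in H, w in W, computed exactly."""
    return sum(f(h, w) for h in H for w in W)

# Distinct pairs of multisets must receive distinct values.
multisets = list(combinations_with_replacement(range(3), 2))
values = {(H, W): pi(H, W) for H in multisets for W in multisets}
assert len(set(values.values())) == len(values)
```

Intuitively, $\psi_1$ writes $H$ into the odd base-$P$ digit positions and $\psi_2$ writes $W$ into the even positions, so no two distinct pairs can collide.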
Lemma 2. There exists a function $f$ s.t. $\pi'(h_v, H, W) = \gamma f(h_v, |H| h_v) + \sum_{h \in H, w \in W} f(h, w)$ is unique for any distinct $(h_v, H, W)$ , where $h_v \in \mathcal{H}$ , $|H| h_v \in \mathcal{W}$ , and $\gamma$ can be an irrational number.
Proof. As $h_v \in \mathcal{H}$ and $|H|h_v \in \mathcal{W}$ , we may take $f(h_v, |H|h_v) = P^{-\psi_1(h_v)} + P^{-\psi_2(|H|h_v)}$ , where $\psi_1: \mathcal{H} \to \mathbb{N}_{odd}$ and $\psi_2: \mathcal{W} \to \mathbb{N}_{even}$ are as defined in the proof of Lemma 1. Let $(h_{v1}, H_1, W_1)$ and $(h_{v2}, H_2, W_2)$ be two different tuples. Then, there are two cases:
(1) When $h_{v1} = h_{v2}$ but $(H_1, W_1) \neq (H_2, W_2)$ , by Lemma 1, we know that $\sum_{h \in H_1, w \in W_1} f(h, w) \neq \sum_{h \in H_2, w \in W_2} f(h, w)$ . Thus, $\pi'(h_{v1}, H_1, W_1) \neq \pi'(h_{v2}, H_2, W_2)$ .
(2) When $h_{v1} \neq h_{v2}$ , we prove $\pi'(h_{v1}, H_1, W_1) \neq \pi'(h_{v2}, H_2, W_2)$ by contradiction. Assume that $\pi'(h_{v1}, H_1, W_1) = \pi'(h_{v2}, H_2, W_2)$ . Then, we have:
$$
\gamma f(h_{v1}, |H_1| h_{v1}) + \sum_{h \in H_1, w \in W_1} f(h, w) = \gamma f(h_{v2}, |H_2| h_{v2}) + \sum_{h \in H_2, w \in W_2} f(h, w).
$$
This gives us the following equation:
$$
\gamma \Big( f(h_{v1}, |H_1| h_{v1}) - f(h_{v2}, |H_2| h_{v2}) \Big) = \Big( \sum_{h \in H_2, w \in W_2} f(h, w) \Big) - \Big( \sum_{h \in H_1, w \in W_1} f(h, w) \Big).
$$
Since $h_{v1} \neq h_{v2}$ , the difference $f(h_{v1}, |H_1|h_{v1}) - f(h_{v2}, |H_2|h_{v2})$ is a non-zero rational number. Hence, when $\gamma$ is an irrational number, the L.H.S. of the above equation is irrational while the R.H.S. is rational, a contradiction. Thus, $\pi'(h_{v1}, H_1, W_1) \neq \pi'(h_{v2}, H_2, W_2)$ .
□
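Because $\gamma$ multiplies a rational quantity in $\pi'$, each value can be represented exactly as a pair (coefficient of $\gamma$, rational remainder), and for irrational $\gamma$ two values of $\pi'$ coincide iff both components coincide. A small check, reusing the same illustrative encodings as in the Lemma 1 sketch:

```python
from fractions import Fraction

P = 11                       # same illustrative prime as in the Lemma 1 sketch
psi1 = lambda h: 2 * h + 1   # features -> odd naturals
psi2 = lambda w: 2 * w + 2   # weighted features -> even naturals

def f(h, w):
    return Fraction(1, P ** psi1(h)) + Fraction(1, P ** psi2(w))

def pi_prime(h_v, H, W):
    """Exact representation of gamma*f(h_v, |H|*h_v) + sum f(h, w) as the pair
    (coefficient of gamma, rational part); for irrational gamma, two values of
    pi' are equal iff the two pairs are equal."""
    a = f(h_v, len(H) * h_v)                # the term multiplied by gamma
    b = sum(f(h, w) for h in H for w in W)  # the rational part
    return (a, b)

# Tuples differing in the center feature, or only in the multisets, are separated.
assert pi_prime(1, (0, 1), (0, 2)) != pi_prime(2, (0, 1), (0, 2))
assert pi_prime(1, (0, 0), (0, 2)) != pi_prime(1, (0, 1), (0, 2))
```

Representing the value symbolically as $(a, b)$ with $\pi' = \gamma a + b$ sidesteps floating-point comparisons entirely, mirroring the rational-versus-irrational argument in the proof.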
Based on Lemma 1 and Lemma 2, we can prove the following theorem.
Theorem 4. GraphSNN is more expressive than 1-WL in testing non-isomorphic graphs.
Proof. We prove this theorem by showing that GraphSNN is a GNN satisfying the conditions stated in Theorem 3. For the first condition, consider the two graphs shown in Figure 1. GraphSNN can distinguish these two neighborhood subgraphs $S_{i}$ and $S_{j}$ with $\{\{\tilde{A}_{iv'}|v'\in \mathcal{N}(i)\}\} \neq \{\{\tilde{A}_{ju'}|u'\in \mathcal{N}(j)\}\}$ . For the second condition, by Lemmas 1 and 2, together with the fact that an MLP, as a universal approximator (Xu et al., 2019), can be used to model and learn the functions $f$ and $g$ , we know that GraphSNN also satisfies this condition.
|
anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:5b07a0904b01cbac3589db25af21fc24ffa41382b6c64f2b5c2fe178e03bf35f
|
| 3 |
+
size 1196067
|
anewperspectiveonhowgraphneuralnetworksgobeyondweisfeilerlehman/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:629bd5d5e494dc2bdeae3b321b4c07effeac2f0831e0c5549c23b694f62e6145
size 1024088
asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/78538d90-c4f0-43bb-8803-fc2021fdd25c_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d122638af7471cda1c776ae4f87037a880d8d0112d839cccc90c77635056af9d
size 133970
asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/78538d90-c4f0-43bb-8803-fc2021fdd25c_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b323cea79820603d69ef6d550bfd7281a7487551ff78ab1ba6e8579f485690b8
size 161875
asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/78538d90-c4f0-43bb-8803-fc2021fdd25c_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a059e78d8443348afaecad08d17003cee1d59ed0a30c718cc7e0acbd37fda9f8
size 1033813
asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/full.md
ADDED
@@ -0,0 +1,447 @@
# ASYMMETRY LEARNING FOR COUNTERFACTUAL-INVARIANT CLASSIFICATION IN OOD TASKS
# S Chandra Mouli
Department of Computer Science
Purdue University
chandr@purdue.edu
# Bruno Ribeiro
Department of Computer Science
Purdue University
ribeiro@cs.purdue.edu
# ABSTRACT
Generalizing from observed to new related environments (out-of-distribution) is central to the reliability of classifiers. However, most classifiers fail to predict label $Y$ from input $X$ when the change in environment is due to a (stochastic) input transformation $T^{\mathrm{te}} \circ X'$ not observed in training, as in training we observe $T^{\mathrm{tr}} \circ X'$ , where $X'$ is a hidden variable. This work argues that when the transformations in train $T^{\mathrm{tr}}$ and test $T^{\mathrm{te}}$ are (arbitrary) symmetry transformations induced by a collection of $m$ known equivalence relations, the task of finding a robust OOD classifier can be cast as finding the simplest causal model that establishes a causal connection between the target labels and the symmetry transformations associated with label changes. We then propose a new learning paradigm, asymmetry learning, that identifies which symmetries the classifier must break in order to correctly predict $Y$ in both train and test. Asymmetry learning performs a causal model search that, under certain identifiability conditions, finds classifiers that perform equally well in-distribution and out-of-distribution. Finally, we show how to learn counterfactually-invariant representations with asymmetry learning in two simulated physics tasks and six image classification tasks.
# 1 INTRODUCTION
A significant challenge in classification tasks happens when the test distribution differs from the training distribution (i.e., the task requires out-of-distribution (OOD) generalization), since not accounting for the distribution shift can lead to poor generalization accuracy (Geirhos et al., 2020; Hu et al., 2020; Koh et al., 2020; D'Amour et al., 2020). If the learner sees examples from the test distribution, finding a classifier invariant to the distribution shift can still be a data-driven task (e.g., classical domain adaptation Ben-David et al. (2007); Muandet et al. (2013); Zhao et al. (2019)). This includes cases such as invariant risk minimization (Arjovsky et al., 2019) and its generalizations (Bellot & van der Schaar, 2020), where the training data and the test data distributions overlap in a way that can be exploited by data-driven algorithms (Creager et al., 2021; Krueger et al., 2021; Rosenfeld et al., 2020).
However, if the learner sees no examples from the test distribution, the task is not purely data-driven and requires assumptions about the data generation process. More formally, our work considers general OOD tasks with training distribution $P(Y^{\mathrm{tr}}, X^{\mathrm{tr}})$ , where $X^{\mathrm{tr}} \coloneqq T^{\mathrm{tr}} \circ X^{\dagger}$ , with $X^{\dagger}$ a hidden variable with distribution $P(X^{\dagger})$ and $T^{\mathrm{tr}} \in \mathcal{T}$ a random input transformation $T^{\mathrm{tr}}: \mathcal{X} \to \mathcal{X}$ observed in training, where $t \circ x$ denotes the application of transformation $t \in \mathcal{T}$ to $x \in \mathcal{X}$ . The difference between train and test is a change in input transformation, with $Y^{\mathrm{te}} \coloneqq Y^{\mathrm{tr}}$ and $X^{\mathrm{te}} \coloneqq T^{\mathrm{te}} \circ X^{\dagger}$ , where $P(T^{\mathrm{tr}}) \neq P(T^{\mathrm{te}})$ . We are interested in learning an invariant classifier that generalizes well on held-out examples from both the training and test distributions.
The definition of transformation matters in this task. We first seek to generalize the existing literature on transformation invariances, e.g. (Shawe-Taylor, 1993; Kondor & Trivedi, 2018; Finzi et al., 2021; Maron et al., 2018; Murphy et al., 2019b; Mouli & Ribeiro, 2021; Bronstein et al., 2017). Our transformations are tied to equivalence relations rather than transformation groups, which frees them from the requirement of having inverses (as needed to form a transformation group).
We also explain why the task of learning an invariant OOD classifier is not, in general, solvable via traditional data augmentation. Before we continue describing our OOD learning task, it is important to clarify the connection between Pearl's causal hierarchy and invariant representation learning.
Pearl's causal hierarchy and invariant representation learning. Pearl's causal hierarchy (Pearl & Mackenzie, 2018; Bareinboim et al., 2020) has three layers: observational (Layer 1), interventional (Layer 2), and counterfactual (Layer 3). Upper layers can perform lower-layer tasks, but not vice-versa (see Bareinboim et al. (2020)). Tasks should be described using the lowest layer that can solve them.
Layer 1: Any task that can be performed without constraints on the causal model, i.e., by data alone, is observational (Layer 1). Traditional domain adaptation is a Layer 1 task. Note that a classifier that performs well OOD is itself a Layer 1 classifier, since it tries to predict $P(Y^{\mathrm{te}}|X^{\mathrm{te}})$ .
Layer 2: Without observations from $P(X^{\mathrm{te}})$ and/or $P(Y^{\mathrm{te}}|X^{\mathrm{te}})$ , learning an OOD classifier requires some assumptions about the data generation process (Layers 2 or 3 assumptions). Data augmentation is traditionally an interventional task (Layer 2), with new interesting methods increasingly using causal language (Ilse et al., 2021; Teney et al., 2020). For instance, in a task predicting an image's foreground, knowing how to act on an image in training $X^{\mathrm{tr}}$ to change the background seen in training to the backgrounds seen in test $X^{\mathrm{te}} = T \circ X^{\mathrm{tr}}$ with a transformation $T$ , implies we know how to predict $P(Y|X,do(T))$ .
Layer 3: Counterfactuals are the most challenging task. We start our description with an example. Consider a random continuous transformation $T_2^{\mathrm{tr}}$ (in training) which changes to random transformation $T_2^{\mathrm{te}}$ (in test). Let $X^{\dagger}$ describe a hidden variable such that $X^{\mathrm{tr}} := T_1 \circ T_2^{\mathrm{tr}} \circ T_3 \circ X^{\dagger}$ and $X^{\mathrm{te}} := T_1 \circ T_2^{\mathrm{te}} \circ T_3 \circ X^{\dagger}$ , where $T_1$ and $T_3$ are independent continuous random transformations and $P(T_2^{\mathrm{tr}}) \neq P(T_2^{\mathrm{te}})$ . Assume the target variable $Y$ depends only on $X^{\dagger}, T_1,$ and $T_3$ . To counterfactually ask what would have happened to the observed input $x$ if we had forced $do(T_2^{\mathrm{tr}} = \tilde{t}_2)$ , we are inquiring about $X(T_2^{\mathrm{tr}} = \tilde{t}_2)|X^{\mathrm{tr}} = x$ . Note that $do(T_2^{\mathrm{tr}} = \tilde{t}_2)$ does not change $Y$ . Also note that the knowledge of $X^{\mathrm{tr}} = x$ is an indirect statement about $T_2^{\mathrm{tr}}$ since $P(T_2^{\mathrm{tr}}|X^{\mathrm{tr}} = x) \neq P(T_2^{\mathrm{tr}})$ . That is, for $x, x' \in \mathcal{X}$ ,
$$
P\left(X\left(T_{2}^{\mathrm{tr}} = \tilde{t}_{2}\right) = x^{\prime} \mid X^{\mathrm{tr}} = x\right) = \int_{t} P\left(X\left(T_{2}^{\mathrm{tr}} = \tilde{t}_{2}\right) = x^{\prime} \mid T_{2}^{\mathrm{tr}} = t, X^{\mathrm{tr}} = x\right) \, dP\left(T_{2}^{\mathrm{tr}} = t \mid X^{\mathrm{tr}} = x\right). \tag{1}
$$
Equation (1) and the difference between the causal hierarchy layers will be relevant for our results.
Contributions. Our contributions can be described as follows:
1. We introduce a generalization of transformation groups via symmetry transformations tied to equivalence relations, which removes the requirement of invertible transformations common in definitions based on transformation groups.
2. We introduce the concept of counterfactual invariant representations for symmetry transformations and show how it can be described as a counterfactual task for causal structure discovery.
3. Finally, we introduce asymmetry learning, which describes a representation regularization that, under a set of assumptions, learns the correct counterfactual invariant OOD classifier.
# 2 SYMMETRIES AND TRANSFORMATIONS
Geometrically, an object is called symmetric if there is a transformation on the object that does not change its shape (in some definition of shape). For example, a square is symmetric with respect to rotations. The notion of symmetry however is not restricted to geometric notions. In general, we can define a mathematical object as symmetric if there is a transformation on the object that returns another object equivalent to the first (Rosen, 2008, Chapter 10). It is clear from this definition of symmetry that we first need to define what we mean by equivalent objects. For instance, we say two geometrical objects are equivalent if they have the same shape, but we need a more general definition.
We define an input symmetry in a space $\mathcal{X}$ with at least two elements as an equivalence relation $\sim$ . An equivalence relation in $\mathcal{X}$ is a binary relation $\sim$ such that for all $a, b, c \in \mathcal{X}$ , we have (i) $a \sim a$ , (ii) $a \sim b \iff b \sim a$ , and (iii) $(a \sim b$ and $b \sim c) \implies a \sim c$ . Equivalence relations allow us to define equivalent objects in $\mathcal{X}$ : $a \sim b$ means $a$ is equivalent to $b$ . The set of all objects equivalent to some $a \in \mathcal{X}$ is called the equivalence class of $a$ , defined as $[a] := \{x \in \mathcal{X} : x \sim a\}$ . Note that one can define $m \geqslant 2$ equivalence relations on the same input space. The equivalence class of $x$ with respect to equivalence relation $k$ is denoted $[x]^{(k)}, k = 1, \ldots, m$ . Two inputs $a, b \in \mathcal{X}$ might be equivalent under one equivalence relation $\sim_1$ , but not equivalent under a different equivalence relation $\sim_2$ , that is, we can have both $b \in [a]^{(1)}$ and $b \notin [a]^{(2)}$ . Still, even in this last case it is possible that $a$ is equivalent to some other input $c \neq b$ in both equivalence relations, i.e., it is possible $\exists c \in \mathcal{X}, c \neq a$ , s.t. $c \in [a]^{(1)} \cap [a]^{(2)}$ . We denote the collection of equivalence classes of $\mathcal{X}$ under the equivalence relation $\sim_k$ as the quotient space $\mathcal{X} / \sim_k := \{[x]^{(k)} \mid x \in \mathcal{X}\}$ .
Transformation group example. Consider the bijective transformations $t: \mathcal{X} \to \mathcal{X}$ of a transformation group $G$ , $t \in G$ . We now define an equivalence relation over $G$ as $t \circ x \sim_G x$ for all $t \in G$ . The equivalence class $[\pmb{x}]^{(G)}$ is $x$ 's orbit defined as $[\pmb{x}]^{(G)} := \{\pmb{x}' : \exists t \in G, \pmb{x}' = t \circ x\}$ . For example, if $G$ is the group that permutes the elements of vectors in $\mathbb{R}^3$ , then $(1, 2, 3) \sim_G (2, 1, 3)$ .
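A minimal sketch of this example (hypothetical `orbit` helper): the orbit of a tuple under the group permuting its positions.

```python
from itertools import permutations

def orbit(x):
    """Orbit [x]^(G) of a tuple under the group permuting vector positions."""
    return {tuple(x[i] for i in p) for p in permutations(range(len(x)))}

assert (2, 1, 3) in orbit((1, 2, 3))   # (1, 2, 3) ~_G (2, 1, 3)
assert len(orbit((1, 2, 3))) == 6      # all 3! permutations are distinct here
assert orbit((1, 1, 2)) == {(1, 1, 2), (1, 2, 1), (2, 1, 1)}
```

Note that repeated entries shrink the orbit, while the equivalence class structure is unchanged: every member of an orbit has the same orbit.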
Property functions example. Another way of deriving an equivalence relation is via functions of the input space $z: \mathcal{X} \to \mathbb{R}^p$ , where the output $z(\boldsymbol{x})$ is a particular property of the vector $\boldsymbol{x} \in \mathcal{X}$ . For example, given an observation of length $T$ from a dynamical system, $\boldsymbol{x} \in \mathbb{R}^{d \times T}$ , a possible property function could be $z_{\text{energy}}(\cdot)$ that computes the energy of the dynamical system. Assuming there are $m$ known properties $z_1, \ldots, z_m$ with $z_i: \mathcal{X} \to \mathbb{R}^{p_i}$ , we can construct corresponding equivalence relations $\sim_1, \ldots, \sim_m$ such that for any $\boldsymbol{x}, \boldsymbol{x}' \in \mathcal{X}$ , $\boldsymbol{x} \sim_i \boldsymbol{x}'$ if $z_j(\boldsymbol{x}) = z_j(\boldsymbol{x}')$ , $\forall j \neq i$ . In words, two inputs are equivalent under $\sim_i$ if they have the same properties for all $z_j, j \neq i$ .
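The property-function construction can be sketched directly: two points are equivalent under $\sim_i$ exactly when they agree on every property except $z_i$. The names below (`equivalence_classes`, and the toy properties `z_sum`, `z_max`) are hypothetical.

```python
def equivalence_classes(points, properties, i):
    """Partition points by ~_i: x ~_i x' iff z_j(x) == z_j(x') for all j != i."""
    classes = {}
    for x in points:
        key = tuple(z(x) for j, z in enumerate(properties) if j != i)
        classes.setdefault(key, []).append(x)
    return list(classes.values())

z_sum = lambda x: x[0] + x[1]
z_max = lambda x: max(x)
pts = [(1, 2), (2, 1), (0, 3)]

# ~_0 ignores z_sum, so points group by z_max: {(1,2),(2,1)} and {(0,3)}.
by_max = equivalence_classes(pts, [z_sum, z_max], 0)
# ~_1 ignores z_max; all three points share z_sum = 3, so one class.
by_sum = equivalence_classes(pts, [z_sum, z_max], 1)
```

The example also shows why the choice of the dropped property matters: the same three points split into two classes under $\sim_1$ but collapse into one under $\sim_2$.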
Symmetry transformations. As seen above, symmetries can be defined without defining how the input is transformed to create the equivalence classes, although defining a set of transformations is useful when describing the equivalence class. Given an equivalence relation $\sim$ , we can define a set of transformations $\mathcal{T}$ that respect the equivalence relation, such that $\forall t \in \mathcal{T}, \forall x \in \mathcal{X}, t \circ x \sim x$ . We call $\mathcal{T}$ the set of symmetry transformations of $\sim$ . As with transformation groups, $\mathcal{T}$ always contains the identity transformation $t_{\mathrm{id}} \circ x = x$ ; in contrast, the transformations in $\mathcal{T}$ need not be bijective.
Join of equivalence relations. Similar to how two groups can be joined to form a larger group, two equivalence relations can be joined to form a coarser equivalence relation. Given two equivalence relations, $\sim_{1}$ and $\sim_{2}$ , their join $\sim_{1} \vee \sim_{2}$ is defined as: for all $\pmb{x}, \pmb{x}', \pmb{x}(\sim_{1} \vee \sim_{2})\pmb{x}'$ if and only if there exists a chain of equivalence relations $\pmb{x} \sim_{k_1} \pmb{x}_1, \dots, \pmb{x}_{h-1} \sim_{k_h} \pmb{x}'$ with all $k_j \in \{1, 2\}$ . It is easy to check that $\sim_{1} \vee \sim_{2}$ is an equivalence relation.
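The join can be computed with a union-find: merge any two points equated by some single relation, so that chains across different relations collapse into one class, matching the definition above (hypothetical `join_classes`; each relation is given as a pairwise predicate).

```python
def join_classes(points, relations):
    """Classes of the join ~_1 v ... v ~_m over a finite set of points."""
    parent = {x: x for x in points}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for equiv in relations:  # each `equiv` is a predicate on pairs of points
        for a in points:
            for b in points:
                if equiv(a, b):
                    parent[find(a)] = find(b)
    roots = {find(x) for x in points}
    return {frozenset(x for x in points if find(x) == r) for r in roots}

parity = lambda a, b: a % 2 == b % 2                # ~_1: same parity
bridge = lambda a, b: a in (1, 2) and b in (1, 2)   # ~_2: equates 1 and 2 only
```

With `parity` alone the points 0..3 split into {0, 2} and {1, 3}; adding `bridge` links 1 with 2, and the chain 0 ~ 2 ~ 1 ~ 3 merges everything into a single join class.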
We are now ready to define a general causal model that defines the training and test distributions in our setting.
# 3 SCM FOR SYMMETRY-BASED OOD TASKS
Let $\mathcal{X},\mathcal{Y}$ denote the input and output spaces respectively. We define our general structural causal model (SCM) as follows. We define $X^{\dagger}\in \mathcal{X}$ as the unobserved canonically ordered input
$$
X^{\dagger} := g(U_u), \tag{2}
$$
with $U_{u}$ a background random variable and $g: \mathcal{U}_u \to \mathcal{X}$ a measurable map. This definition is general enough to describe any task.
There are $m$ possible symmetries given in the form of equivalence relations $\sim_1, \ldots, \sim_m$ over the input space $\mathcal{X}$ . Let $\mathcal{T}^{(k)}$ denote a set of symmetric transformations $t$ on $\mathcal{X}$ corresponding to the equivalence relation $\sim_k, 1 \leqslant k \leqslant m$ . In other words, for all $\pmb{x} \in \mathcal{X}$ and $t \in \mathcal{T}^{(k)}$ , we have $(t \circ \pmb{x}) \sim_k \pmb{x}$ . Similarly, let $\mathcal{T}$ be the set of all symmetric transformations with respect to the join equivalence relation $\sim_{1,\dots,m} \equiv \sim_1 \vee \dots \vee \sim_m$ . We can think of transformation $t \in \mathcal{T}$ as a path $\pmb{x} \xrightarrow{t^{(k_1)}} \pmb{x}_1 \cdots \pmb{x}_{h-1} \xrightarrow{t^{(k_h)}} \pmb{x}_h$ that starts at $\pmb{x}$ , applies a transformation $t^{(k_1)} \in \mathcal{T}^{(k_1)}$ to get $\pmb{x}_1 \in [\pmb{x}]^{(k_1)}$ , and so on until it stops and outputs a value $\pmb{x}_h$ , $h \geqslant 1$ .
Let $U_{1},\ldots ,U_{m}$ be independent background variables associated with the $m$ symmetries, where $U_{i}\in \mathcal{U}_{i}$ , $i = 1,\dots,m$ . These background variables together select a function $t(U_1,\dots ,U_m)$ from the set $\mathcal{T}$ as follows. Each $U_{k}$ independently selects a countable sequence of transformations $t_{1,U_k}^{(k)},t_{2,U_k}^{(k)},\ldots \in \mathcal{T}^{(k)}$ . Then, $t(U_{1},\ldots ,U_{m})$ is defined by interleaving these transformations

Figure 1: Example that illustrates a few important concepts. (a) Training data shows how Equations (2) to (4) define the training distribution $P(X^{\mathrm{tr}}, Y^{\mathrm{tr}})$ . Task: Given an image of a rod (shown in brown), we wish to predict the orientation of the rod, i.e., whether the rod is upright or flat ( $Y := h(U_{\mathrm{rot}})$ ). In this example, we have $\mathbb{D} = \{\mathrm{rot}\}$ (image rotations $0^\circ$ and $90^\circ$ ) and $\bar{\mathbb{D}} = \{\mathrm{trans}\}$ (horizontal translations of $-5, 0, +5$ units) as any horizontal translation does not affect the orientation of the rod. (b) The test data (only a single example shown) suffers an OOD shift through a different distribution over $P(U_{\mathrm{trans}})$ , where non-zero translations can happen before the second rotation. (c) Here we illustrate why an invariance that is good for traditional data augmentation, such as counting the brown pixels in the green shaded area, would fail in test if, say, a $+5$ units horizontal translation happens before a rotation. (d) Here we illustrate why counterfactual language is needed to define how the input data would change in the presence of changes to $U_{\mathrm{trans}}$ . Using counterfactuals, it is finally clear that the invariant representation must be able to also consider the number of brown pixels inside the horizontal purple and green bands (among other horizontal bands).

$t(U_{1},\ldots ,U_{m}):= (t_{1,U_{1}}^{(1)}\circ \dots \circ t_{1,U_{m}}^{(m)})\circ \dots \circ (t_{r,U_{1}}^{(1)}\circ \dots \circ t_{r,U_{m}}^{(m)})\circ \dots$ to construct the path described above. Since $\mathcal{T}^{(1)},\mathcal{T}^{(2)},\ldots$ contain the identity transformation, $t(U_{1},\ldots ,U_{m})$ can be described by a finite sequence of transformations. The observed $X$ is the result of a transformation of $X^{\dagger}$
$$
X := t(U_1, \dots, U_m) \circ X^{\dagger}. \tag{3}
$$
Finally, the label $Y$ is defined as a function of the untransformed canonical input $X^{\dagger}$ as
$$
Y := h\left(X^{\dagger}, (U_i)_{i \in \mathbb{D}}, U_Y\right), \tag{4}
$$
where $\mathbb{D} \subseteq \{1, \ldots, m\}$ is unknown. This means that $Y$ is not invariant with respect to equivalence relations $\sim_{i}, i \in \mathbb{D}$ , i.e., examples $\pmb{x}$ and $\pmb{x}' \in [\pmb{x}]^{(i)}$ can have different labels. A distribution over the variables $U_{u}, \{U_{i}\}_{i=1}^{m}, U_{Y}$ entails a joint distribution $P(X,Y)$ over the observed variables.
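The full generative process of Equations (2) to (4) can be condensed into a toy sampler. The concrete choices below ( $X^{\dagger} \in \mathbb{R}^2$ , one scaling symmetry in $\mathbb{D}$ , one shift symmetry in $\bar{\mathbb{D}}$ ) are our own illustrative assumptions, not the paper's setup:

```python
import random

def sample(p_u1, p_u2, seed=None):
    rng = random.Random(seed)
    u_u = rng.gauss(0.0, 1.0)                      # background variable U_u
    x_dag = (u_u, -u_u)                            # X† := g(U_u)      (Eq. 2)
    u1 = rng.choices([1.0, 2.0], weights=p_u1)[0]  # scaling, index in D
    u2 = rng.choices([0.0, 5.0], weights=p_u2)[0]  # shift, index in D-bar
    x = (u1 * x_dag[0] + u2, u1 * x_dag[1] + u2)   # X := t(U1,U2) ∘ X† (Eq. 3)
    y = int(u1 * abs(x_dag[0]) > 1.0)              # Y := h(X†, U1)    (Eq. 4)
    return x, y

# OOD shift: only P(U2), the nuisance shift, changes between train and test,
# so P(X) shifts while the labeling mechanism is untouched.
train = sample(p_u1=[0.5, 0.5], p_u2=[1.0, 0.0], seed=0)
test = sample(p_u1=[0.5, 0.5], p_u2=[0.0, 1.0], seed=0)
print(train[1] == test[1])  # True: Y ignores the shift in U2
```

Because $Y$ reads only $X^{\dagger}$ and $U_1$ , re-running the sampler with a different $P(U_2)$ but the same seed leaves the label unchanged.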
Illustrative SCM example. Figure 1 illustrates our data generation process. The training data in Figure 1(a) has $X^{\dagger}$ defined as a centered upright brown rod (i.e., $X^{\dagger}$ is deterministic). The label $Y$ is defined by the rotation transformations $\mathcal{T}^{\mathrm{rot}} = \{T_{0^\circ}^{\mathrm{rot}}, T_{90^\circ}^{\mathrm{rot}}\}$ . The image can also be horizontally translated by $\{-5,0,5\}$ units via transformations $\mathcal{T}^{\mathrm{trans}} = \{T_{-5}^{\mathrm{trans}}, T_0^{\mathrm{trans}}, T_{+5}^{\mathrm{trans}}\}$ (only the $0$ and $+5$ translations are depicted), but $Y$ does not depend on these horizontal translations. The transformations applied to $X^{\dagger}$ are randomly chosen via $U_{\mathrm{rot}}$ and $U_{\mathrm{trans}}$ , two two-dimensional vectors indexing a sequence of four transformations that interleave rotations and translations (see Figure 1). A representation that counts the number of brown pixels in the green shaded area of $X^{\mathrm{tr}}$ is enough to achieve 100% accuracy in the training distribution. We formally define OOD distribution shifts next, using Figure 1 for illustration.
OOD distribution shift. Let $\bar{\mathbb{D}} = \{1,\dots ,m\} \backslash \mathbb{D}$ be the complement of the set of symmetry relations $\mathbb{D}$ that $Y$ depends on. We define the OOD distribution shift between train and test as a shift in the distribution $P((U_i)_{i\in \bar{\mathbb{D}}})$ , influencing the distribution of input transformations in Equation (3), which in turn can shift the distributions $P(X^{\mathrm{tr}}), P(Y^{\mathrm{tr}}|X^{\mathrm{tr}}), P(Y^{\mathrm{tr}},X^{\mathrm{tr}})$ to
$P(X^{\mathrm{te}}), P(Y^{\mathrm{te}}|X^{\mathrm{te}}), P(Y^{\mathrm{te}}, X^{\mathrm{te}})$ respectively. Since $X$ does not causally affect $Y$ in our structural causal model (Equation (4)), changes in input transformations are able to shift $P(Y|X)$ . For example, in Figure 1(b) the test data (only a single example shown) could suffer an OOD shift due to a different distribution over $P(U_{\mathrm{trans}})$ that introduces non-zero translations before the second rotation. Note that the representation that counted the number of brown pixels in the green shaded area, which was perfect for the training inputs $X^{\mathrm{tr}}$ , will achieve poor accuracy in the test inputs $X^{\mathrm{te}}$ .
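The failure mode above can be reproduced with a toy geometry of our own choosing (a vertical "rod" of pixels, 90° rotation as the label-relevant symmetry, horizontal translation as the nuisance); the representation counts rod pixels on the center row, which is perfect in training but fails under the OOD transformation order:

```python
ROD = frozenset((0, y) for y in range(-2, 3))     # upright rod, centered

rot90 = lambda px: frozenset((y, -x) for x, y in px)
shift = lambda px, d: frozenset((x + d, y) for x, y in px)

def center_row_count(px):
    return sum(1 for x, y in px if y == 0)

def classify(px):
    # Train-optimal rule: upright rods have count 1, flat rods count 5.
    return "upright" if center_row_count(px) <= 2 else "flat"

# Training distribution: rotate first, translate after. Always correct:
for d in (-5, 0, 5):
    assert classify(shift(ROD, d)) == "upright"
    assert classify(shift(rot90(ROD), d)) == "flat"

# OOD test: translate before rotating. The flat rod now has center-row
# count 0, and the train-optimal rule misclassifies it.
ood = rot90(shift(ROD, 5))
print(classify(ood))  # "upright", but the true label is "flat"
```

The representation value $0$ is never observed in training, so a classifier that is optimal on $X^{\mathrm{tr}}$ is free to behave arbitrarily badly on it.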
Learning OOD classifiers. Equation (4) shows that the label $Y$ is invariant to changes in the distribution of $(U_i)_{i \in \bar{\mathbb{D}}}$ in the test distribution, but we do not know $\bar{\mathbb{D}}$ . Hence, if our representation of $X$ is invariant to changes in the distribution of $(U_i)_{i \in \bar{\mathbb{D}}}$ , we will be able to perform the OOD task.
# 4 ASYMMETRY LEARNING & FINDING THE RIGHT REPRESENTATION SYMMETRY FOR THE OOD TASK
# 4.1 FINDING OOD-INVARIANT REPRESENTATIONS AS CAUSAL STRUCTURE DISCOVERY
We first define the process of finding an OOD-invariant representation for the symmetries $\{\sim_i\}_{i\in \bar{\mathbb{D}}}$ to which our classifier should be invariant in the test data. Since $Y$ does not depend on $\{U_i\}_{i\in \bar{\mathbb{D}}}$ , we will build a representation of $X$ that is invariant to transformations driven by $\{U_i\}_{i\in \bar{\mathbb{D}}}$ .
Definition 1 introduces the concept of counterfactual invariance for symmetry transformations. We note that this definition is less restrictive than the parallel work of Veitch et al. (2021, Definition 1.1): whereas Veitch et al. (2021, Definition 1.1) require invariance over the entire sample space, we only require invariance over the test support of transformation variable $U_{i}$ . The definitions are equivalent if the test support is the entire sample space of $U_{i}$ .
Definition 1 (Counterfactual-invariant representations for symmetric transformations). Assume the SCM defined in Equations (2) to (4). A representation $\Gamma_i: \mathcal{X} \to \mathbb{R}^d$ , $d \geqslant 1$ , is counterfactual-invariant to the transformations $T_{1,U_i}, T_{2,U_i}, \ldots$ of equivalence relation $\sim_i$ , $1 \leqslant i \leqslant m$ , if
$$
\Gamma_i(x) = \Gamma_i\big(X(U_i = \tilde{u}_i) \mid X = x\big)
$$
almost everywhere, $\forall \tilde{u}_i\in \operatorname {supp}(U_i^{\mathrm{te}}),\forall x\in \operatorname {supp}(X^{\mathrm{tr}})$ , where $\operatorname {supp}(A)$ is the support of random variable $A$ . A representation $\Gamma_{\mathbb{S}}:\mathcal{X}\to \mathbb{R}^d$ , $d\geqslant 1$ , is counterfactual-invariant to a subset $\mathbb{S}\subseteq \{1,\ldots ,m\}$ if it is jointly counterfactual-invariant to the transformation indices $\{U_j\}_{j\in \mathbb{S}}$ of the equivalence relations $\{\sim_j\}_{j\in \mathbb{S}}$ .
We refer the reader to Equation (1) for the relationship between the counterfactual variables $X(U_i = \tilde{u})|U_i = u$ and $X(U_i = \tilde{u})|X = x$ . Figure 1(d) illustrates why counterfactual language is important for our task: It states that given an input $X^{\mathrm{tr}} = x$ we need to know how it would have been different if we had chosen a different distribution $P(U_{\mathrm{trans}})$ resulting in a different sequence of transformations $T_{1,U_{\mathrm{trans}}}, T_{2,U_{\mathrm{trans}}}$ . From Figure 1(c) it is clear that we cannot simply data-augment our training data with translations, since we would think that counting brown pixels in the green shaded area is an invariant representation for $U_{\mathrm{trans}}$ .
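In an SCM with discrete background variables, Definition 1 can be checked by brute force: abduce all latents consistent with the observed $x$ , replay the SCM with $U_2$ forced to each $\tilde{u}_2$ in the test support, and compare representation values. The SCM and the candidate representations below are our own illustrative assumptions:

```python
from itertools import product

U_U = [-1.0, 1.0]     # canonical-input noise
U_1 = [1.0, 2.0]      # scaling (label-relevant)
U_2 = [0.0, 5.0]      # shift (nuisance); also the test support of U_2

def forward(u_u, u1, u2):
    x_dag = (u_u, -u_u)                              # X†
    return (u1 * x_dag[0] + u2, u1 * x_dag[1] + u2)  # observed X

def counterfactuals(x, u2_tilde):
    """All values of X(U_2 = u2_tilde) | X = x: abduce, then replay."""
    return {forward(u_u, u1, u2_tilde)
            for u_u, u1, u2 in product(U_U, U_1, U_2)
            if forward(u_u, u1, u2) == x}

def is_cf_invariant(gamma):
    return all(gamma(xcf) == gamma(x)
               for latents in product(U_U, U_1, U_2)
               for x in [forward(*latents)]
               for u2_tilde in U_2
               for xcf in counterfactuals(x, u2_tilde))

gamma_diff = lambda x: x[0] - x[1]   # removes the common shift
gamma_sum = lambda x: x[0] + x[1]    # exposes the shift
print(is_cf_invariant(gamma_diff))   # True
print(is_cf_invariant(gamma_sum))    # False
```

Note the check quantifies over counterfactuals $X(U_2=\tilde{u}_2)\mid X=x$ , not over naive data augmentation of $x$ , mirroring the discussion of Figure 1(c)-(d).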
Up until now we have not imposed restrictions on the types of transformations $\mathcal{T}^{(i)}, i = 1, \ldots, m$ , we consider in this work. Our next results require imposing conditions on these transformations.
Definition 2 (Equivalence class lumpability). The quotient space $\mathcal{X} / \sim_{i}$ is the set of equivalence classes of $\mathcal{X}$ with respect to equivalence relation $\sim_{i}, i = 1, \ldots, m$ . Let $[\pmb{x}]^{(i)} \in \mathcal{X} / \sim_{i}$ be the equivalence class of $\pmb{x} \in \mathcal{X}$ with respect to equivalence relation $\sim_{i}$ . Then, $\mathcal{X} / \sim_{i}$ is said to be lumpable with respect to a transformation set $\mathcal{T}$ if $\forall [\pmb{x}]^{(i)} \in \mathcal{X} / \sim_{i}$ and $\forall t \in \mathcal{T}$ ,
$$
\exists\, [\boldsymbol{x}']^{(i)} \in \mathcal{X}/\sim_i \ \text{ s.t. } \ \boldsymbol{x}^{*} \in [\boldsymbol{x}]^{(i)} \Rightarrow t \circ \boldsymbol{x}^{*} \in [\boldsymbol{x}']^{(i)}.
$$
In words, if the lumpability condition in Definition 2 holds for an equivalence relation $\sim_{i}$ with respect to a set of transformations $\mathcal{T}$ , then every transformation in $\mathcal{T}$ maps all points within an equivalence class $[\pmb{x}]^{(i)}\in \mathcal{X} / \sim_{i}$ to points within a single equivalence class $[\pmb{x}^{\prime}]^{(i)}\in \mathcal{X} / \sim_{i}$ . To illustrate the lumpability condition, consider two transformation groups $G_{1}$ and $G_{2}$ whose transformations commute, i.e., $\forall (t_1,t_2)\in G_1\times G_2$ , $t_1\circ t_2 = t_2\circ t_1$ . Then the equivalence classes imposed by $G_{i}$ , i.e., the orbits $[\pmb{x}]^{(i)} = \{t_i\circ \pmb {x} : t_i\in G_i\}$ , are lumpable with respect to the transformations in $G_{j}$ , for $i,j\in \{1,2\}$ and $j\neq i$ .
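The commuting-groups example can be verified by brute force on a tiny space. Below (our own illustrative choice, not the paper's setup), the equivalence classes are orbits of cyclic shifts on length-3 vectors; global negation commutes with shifting and is lumpable, while zeroing the first coordinate is not:

```python
from itertools import product

space = list(product([-1, 0, 1], repeat=3))

def cyc_shift(x):
    return x[1:] + x[:1]

def orbit(x):
    out, cur = set(), x
    for _ in range(len(x)):
        out.add(cur)
        cur = cyc_shift(cur)
    return frozenset(out)

classes = {orbit(x) for x in space}     # X / ~ : orbits of cyclic shifts

def class_of(x):
    return next(c for c in classes if x in c)

def is_lumpable(t):
    """Definition 2: t maps each class into a single class."""
    return all(len({class_of(t(x)) for x in cls}) == 1 for cls in classes)

negate = lambda x: tuple(-v for v in x)      # commutes with cyc_shift
zero_first = lambda x: (0,) + x[1:]          # does not commute

print(is_lumpable(negate))      # True
print(is_lumpable(zero_first))  # False
```

For instance, `zero_first` sends the orbit of `(1, 0, 0)` partly to the singleton class of `(0, 0, 0)` and partly back into the original orbit, violating lumpability.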

Figure 2: (a) (i) True causal DAG; (ii) the causal DAG in (i) with a counterfactual-invariant representation of $X$ ; (iii) asymmetry learning: causal model search using the information in asymmetry (illustrated with $m = 3$ ; red arrows indicate the asymmetry being considered in the causal model). (b) Partial order over invariant representations (arrows indicate higher invariance). (c) An example where the training data has a single example per equivalence class in $\mathcal{X} / \sim_{1}$ (green rectangles); then $\mathrm{COMP}(\mathcal{F}_{\{1\}},\mathcal{D}) = \mathrm{COMP}(\mathcal{F}_{\emptyset},\mathcal{D})$ even though $\mathcal{F}_{\{1\}}$ is more invariant (simpler) than $\mathcal{F}_{\emptyset}$ .

Figure 2a(i) shows our structural causal graph where an edge $U_{i} \to Y$ exists only if $i \in \mathbb{D}$ . Then, we use the definition of lumpability to prove that, under certain conditions, a most-expressive representation $\Gamma_{i}$ invariant with respect to $\sim_{i}$ allows us to identify if there is no edge $U_{i} \to Y$ in the causal DAG.
Theorem 1 (Counterfactual invariance & causal DAG identification). Let $\mathcal{X} / \sim_{i}$ be lumpable with respect to every $\mathcal{T}^{(j)}, j \neq i$ , as in Definition 2. Then, the structural causal DAG implied by Equations (2) to (4) (depicted in Figure 2a(i)) does not contain the edge $U_{i} \to Y$ iff
$$
\left| P(Y \mid \Gamma_i(X), U_Y) - P(Y \mid X, U_Y) \right|_{TV} = 0, \tag{5}
$$
$\forall P(X^{\dagger}), \forall P(U_1), \ldots, \forall P(U_m)$ , where $\Gamma_i$ is a most-expressive representation that is invariant with respect to $\sim_i$ .
The proof is in the Appendix. Under the lumpability assumption on $\mathcal{X} / \sim_{i}$ , the representation $\Gamma_{i}$ in Theorem 1 is counterfactual-invariant. We now use Figure 2a(ii) to describe the result in Theorem 1. First, note that the representation $\Gamma_{\bar{\mathbb{D}}}$ depicted in the figure is counterfactual-invariant to $\bar{\mathbb{D}}$ , and hence also counterfactual-invariant to any $k \in \bar{\mathbb{D}}$ . Consequently, there is no arrow $U_{k} \to \Gamma_{\bar{\mathbb{D}}} (X)$ in Figure 2a(ii). If there is no arrow $U_{k} \to Y$ , the missing arrow from $U_{k}$ to $\Gamma_{\bar{\mathbb{D}}} (X)$ has no influence on the ability of $\Gamma_{\bar{\mathbb{D}}} (X)$ to predict $Y$ , assuming $\Gamma_{\bar{\mathbb{D}}}$ is most-expressive. If there is an arrow $U_{k} \to Y$ , cutting the arrow $U_{k} \to \Gamma_{\bar{\mathbb{D}}} (X)$ creates a loss in predictive performance from $\Gamma_{\bar{\mathbb{D}}} (X)$ to $Y$ for some distribution of the background and observable variables. Hence, if $\Gamma_{\bar{\mathbb{D}}} (X)$ never loses any predictive power over $Y$ for any such distribution, then there is no arrow $U_{k} \to Y$ .
Assumption 1 (Asymmetry learning training data). In asymmetry learning we assume that every $\mathcal{X} / \sim_{i}$ , $i \in \{1, \ldots, m\}$ , is lumpable given $\mathcal{T}^{(j)}, j \neq i$ , and that a large training dataset sampled from $(Y^{\mathrm{tr}}, X^{\mathrm{tr}})$ contains, for every arrow $U_{j} \to Y$ in the causal DAG of Figure 2a(i), $j \in \mathbb{D}$ , observations of $U_{j}$ that violate Equation (5). Hence, if Equation (5) holds for some $i \in \{1, \ldots, m\}$ in this dataset, we can conclude that there is no arrow $U_{i} \to Y$ in the true causal DAG. See Appendix A for a justification of this assumption.
Next we use Assumption 1 and the previous results to search for the right OOD invariance.
# 4.2 CAUSAL STRUCTURE DISCOVERY OF RELEVANT SYMMETRIES
We need a general procedure for obtaining the unknown set $\mathbb{D}$ , which is equivalent to finding all transformation indices $\{U_i\}_{i \in \mathbb{D}} \subseteq \{U_1, \ldots, U_m\}$ that act as confounders between $Y$ and $X$ in the causal DAG in Figure 2a(i). Finding whether an edge exists or not in the causal DAG is known as the causal structure discovery problem (e.g., Heinze-Deml et al. (2017)). The principle of our search is learning the causal structure with the fewest possible edges into $Y$ (i.e., where $Y$ is invariant to most $U_i$ , $i = 1, \ldots, m$ ) while also maximizing the likelihood of the observed data. Accordingly, we take the score-based causal discovery approach (Chickering (2002); Huang et al. (2018)) that assigns scores to each allowed DAG based on the training data and the complexity of the DAG to find a minimal causal structure that fits the training data. This idea is visualized in Figure 2a(iii),
where causal graphs with more edges between the transformation indices into $Y$ are defined to have higher complexity and are higher up in the partial ordering. Our search space is simpler than typical structure discovery tasks: The DAGs in our search space have the same structure for $X$ and only differ in edges of the form $U_{i} \rightarrow Y, i \in \{1, \dots, m\}$ . Next, we describe a scoring criterion that uses Theorem 1 and counterfactual-invariant representations to assign scores to the corresponding causal structures.
Proposed DAG scoring criterion. For each DAG in the search space, we wish to assign a score based on the training data $\mathcal{D} = \{(\pmb{x}^{(i)},y^{(i)})\}_{i = 1}^{n^{\mathrm{tr}}}$ under Assumption 1 for a classification task with $C$ classes. Theorem 1 shows that there is a correspondence between a causal structure without the edge $U_{i}\rightarrow Y$ and a gap in predictive probability between the original input and a most-expressive representation $\Gamma_{i}$ that is counterfactually invariant to $U_{i}$ . Thus, under Assumption 1, we can represent the causal search from Figure 2a(iii) in terms of a search over counterfactually-invariant representation function classes as shown in Figures 2a(iii) and 2b. Formally, we are given a collection of function classes $\mathcal{F}:= \{\mathcal{F}_{\mathbb{S}}:\mathbb{S}\subseteq \{1,\dots,m\}\}$ , where $\mathcal{F}_{\mathbb{S}}$ is a family of functions $\Gamma_{\mathbb{S}}$ that are counterfactually invariant to all $U_{i},i\in \mathbb{S}$ (Definition 1). We wish to score each of the function classes $\mathcal{F}_{\mathbb{S}}\in \mathcal{F}$ to indirectly learn the correct causal structure.
The minimum description length (MDL) principle (Schwarz, 1978) is commonly used for causal structure discovery (Budhathoki & Vreeken, 2016; 2017) and comes with the key insight that learning from data can be viewed as compressing it. Given the collection $\mathcal{F}$ and the training dataset $\mathcal{D}$ , MDL finds the function class $\mathcal{F}_{\mathbb{S}} \in \mathcal{F}$ that compresses $\mathcal{D}$ the most. While there are several ways of encoding a dataset given the function class, normalized maximum likelihood (NML) code is known to be optimal (Shtarkov, 1987). NML code is computed as follows
$$
L_{\mathrm{nml}}(\mathcal{F}_{\mathbb{S}}, \mathcal{D}) = -L(\mathcal{F}_{\mathbb{S}} \mid \mathcal{D}) + \mathrm{COMP}(\mathcal{F}_{\mathbb{S}}, \mathcal{D}), \tag{6}
$$
where $L(\mathcal{F}_{\mathbb{S}}|\mathcal{D}) = \sup_{\Gamma_{\mathbb{S}}\in \mathcal{F}_{\mathbb{S}}}\sum_{i = 1}^{n^{\mathrm{tr}}} \log P(\boldsymbol{y}^{(i)}|\Gamma_{\mathbb{S}}(\boldsymbol{x}^{(i)}))$ is the maximum log-likelihood of $\mathcal{F}_{\mathbb{S}}$ given the data and
$$
\operatorname{COMP}(\mathcal{F}_{\mathbb{S}}, \mathcal{D}) = \log \left[ \sum_{\substack{\boldsymbol{y}^{(1)}, \dots, \boldsymbol{y}^{(n^{\mathrm{tr}})}: \\ \boldsymbol{y}^{(i)} \in \{0, \dots, C\}}} \; \sup_{\Gamma_{\mathbb{S}} \in \mathcal{F}_{\mathbb{S}}} \prod_{i=1}^{n^{\mathrm{tr}}} P\left(\boldsymbol{y}^{(i)} \mid \Gamma_{\mathbb{S}}(\boldsymbol{x}^{(i)})\right) \right], \tag{7}
$$
measures the complexity of the function class $\mathcal{F}_{\mathbb{S}}$ by computing how well it can represent different label distributions for the given training inputs $\{\pmb{x}^{(i)}\}_{i = 1}^{n^{\mathrm{tr}}}$ . We can estimate the combinatorial sum in Equation (7) by uniformly sampling random labels for all the training examples.
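For tiny datasets the sum in Equation (7) can be evaluated exactly. The sketch below assumes the function class on top of a representation $\Gamma$ is most-expressive, i.e., it can realize any conditional distribution $P(y \mid \Gamma(x))$ ; the sup for a fixed labeling is then attained by the empirical label frequencies within each $\Gamma$ -group:

```python
from itertools import product
from math import exp, log
from collections import Counter

def sup_loglik(groups, labels):
    """sup over the class of prod_i P(y_i | Γ(x_i)) for one fixed labeling."""
    total = 0.0
    for g in set(groups):
        ys = [y for grp, y in zip(groups, labels) if grp == g]
        total += sum(n * log(n / len(ys)) for n in Counter(ys).values())
    return total

def comp(xs, gamma, n_classes=2):
    groups = [gamma(x) for x in xs]
    return log(sum(exp(sup_loglik(groups, labels))
                   for labels in product(range(n_classes), repeat=len(xs))))

xs = [0, 1, 2, 3]
comp_fine = comp(xs, gamma=lambda x: x)        # no invariance
comp_coarse = comp(xs, gamma=lambda x: x % 2)  # coarser representation
comp_const = comp(xs, gamma=lambda x: 0)       # invariant to everything
print(comp_fine > comp_coarse > comp_const)    # True
```

With four distinct inputs, the fully flexible class fits every labeling perfectly, giving $\mathrm{COMP} = \log 2^4$ ; coarser (more invariant) representations yield strictly smaller COMP, matching the intuition that invariance means simplicity.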
Since $\mathrm{COMP}(\mathcal{F}_{\mathbb{S}},\mathcal{D})$ is computed using the training data, it may underestimate the complexity of function classes if, for instance, all the training examples are generated with $U_{i} = u_{i}$ . Then, $\mathcal{F}_{\{i\}}$ and $\mathcal{F}_{\emptyset}$ are given the same score even though $\mathcal{F}_{\{i\}}$ is clearly more invariant and thus, a simpler function class. This can happen in practice if, say, all images are upright in training with no rotations applied; both rotation-invariant and rotation-sensitive function classes get the same complexity score.
To break such ties in the COMP score, asymmetry learning adds a term to the NML score that prefers models with higher invariance according to the partial order (see Figure 2b). We extend the penalty proposed by Mouli & Ribeiro (2021) and use $R(\mathcal{F}_{\mathbb{S}})\coloneqq |\{\mathcal{F}':\mathcal{F}'\in \mathcal{F},\mathcal{F}'>\mathcal{F}_{\mathbb{S}}\} |$ , the number of function classes that are higher in the partial order than $\mathcal{F}_{\mathbb{S}}$ , as the tie-breaking term. For example, with $m = 3$ , $R(\mathcal{F}_{\{1\}}) = |\{\mathcal{F}_{\{1,2\}},\mathcal{F}_{\{1,3\}},\mathcal{F}_{\{1,2,3\}}\} | = 3$ . We define the final score of each function class $\mathcal{F}_{\mathbb{S}}\in \mathcal{F}$ as
$$
S(\mathcal{F}_{\mathbb{S}}, \mathcal{D}) = L_{\mathrm{nml}}(\mathcal{F}_{\mathbb{S}}, \mathcal{D}) + R(\mathcal{F}_{\mathbb{S}}). \tag{8}
$$
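The tie-breaking term $R(\mathcal{F}_{\mathbb{S}})$ is purely combinatorial: it counts the classes $\mathcal{F}_{\mathbb{S}'}$ with $\mathbb{S}' \supsetneq \mathbb{S}$ . A minimal sketch over all subsets of $\{1,\dots,m\}$ :

```python
from itertools import combinations

def all_subsets(m):
    items = list(range(1, m + 1))
    return [frozenset(c) for r in range(m + 1) for c in combinations(items, r)]

def R(S, collection):
    """Number of function classes strictly higher in the invariance order."""
    S = frozenset(S)
    return sum(1 for Sp in collection if S < Sp)  # S a proper subset of S'

collection = all_subsets(3)
print(R({1}, collection))        # 3: F_{1,2}, F_{1,3}, F_{1,2,3}
print(R(set(), collection))      # 7: every non-empty subset is more invariant
print(R({1, 2, 3}, collection))  # 0: nothing is more invariant
```

The value $R(\mathcal{F}_{\{1\}}) = 3$ reproduces the worked example above.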
The score in Equation (8) can be minimized by a score-based causal discovery algorithm to obtain the final DAG. We use Greedy Equivalence Search (Chickering, 2002) to showcase a concrete instantiation of asymmetry learning. Other score-based structure discovery algorithms could also be used.
Greedy Equivalence Search. Greedy Equivalence Search (GES) is a greedy search algorithm that optimizes a given scoring function over DAGs. In our setting, the search begins with a DAG with no edges of the form $U_{i} \to Y, i \in \{1, \dots, m\}$ . In the first phase, GES adds these edges one at a time
Table 1: Results for different function classes on the pendulum task with $\mathbb{D} = \{1\}$ and $\mathbb{D} = \{1,2\}$ . $R(\mathcal{F})$ , $\widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ and $S(\mathcal{F},\mathcal{D})$ are as defined in Section 4.2. Bold values indicate the function class chosen by the GES method with the proposed scoring criterion. Test accuracy is computed on the extrapolated dataset after shifting the distribution $P(\{U_i\}_{i\in \overline{\mathbb{D}}})$ .
<table><tr><td rowspan="2">Model class</td><td rowspan="2">Architecture</td><td rowspan="2">R(F)</td><td colspan="4">D = {1}</td><td colspan="4">D = {1,2}</td></tr><tr><td>COMP(F,D)</td><td>S(F,D)</td><td>Train Acc.</td><td>Test Acc.</td><td>COMP(F,D)</td><td>S(F,D)</td><td>Train Acc.</td><td>Test Acc.</td></tr><tr><td>F2</td><td>X → z1 → Y</td><td>0</td><td>0.282</td><td>23.89</td><td>98.5 (0.9)</td><td>98.3 (1.4)</td><td>0.501</td><td>532.84</td><td>72.7 (0.4)</td><td>69.4 (0.5)</td></tr><tr><td>F1</td><td>X → z2 → Y</td><td>0</td><td>0.382</td><td>633.32</td><td>63.8 (7.0)</td><td>51.2 (1.0)</td><td>0.292</td><td>284.75</td><td>85.2 (0.5)</td><td>84.6 (0.2)</td></tr><tr><td>F∅</td><td>X → Y</td><td>2</td><td>1.256</td><td>26.80</td><td>98.9 (0.8)</td><td>77.6 (11.5)</td><td>0.995</td><td>4.54</td><td>99.7 (0.2)</td><td>99.5 (0.2)</td></tr></table>
that maximally improve the score in Equation (8), until there is no improvement. In the second phase, GES begins from the DAG obtained at the end of the first phase and deletes edges one at a time until such deletions no longer improve the score. The DAG obtained at the end of the second phase is the final output of the algorithm. Under the causal Markov and faithfulness assumptions, Chickering (2002) showed that GES is optimal in the large-sample limit if the scoring function is locally consistent.
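Because our search space only varies the edges $U_i \to Y$ , the two-phase procedure reduces to a greedy search over subsets of $\{1,\dots,m\}$ . A minimal sketch with an arbitrary score to minimize (the synthetic score below is a stand-in, not the paper's $S(\mathcal{F},\mathcal{D})$ ):

```python
def greedy_edge_search(m, score):
    """Two-phase greedy minimization of `score` over edge sets {U_i -> Y}."""
    edges = set()
    # Phase 1: add edges while the score improves.
    improved = True
    while improved:
        improved = False
        candidates = [edges | {i} for i in range(1, m + 1) if i not in edges]
        best = min(candidates, key=score, default=None)
        if best is not None and score(best) < score(edges):
            edges, improved = best, True
    # Phase 2: delete edges while the score improves.
    improved = True
    while improved:
        improved = False
        candidates = [edges - {i} for i in edges]
        best = min(candidates, key=score, default=None)
        if best is not None and score(best) < score(edges):
            edges, improved = best, True
    return edges

# Synthetic score whose unique minimum is the edge set {1, 2}:
print(greedy_edge_search(3, lambda e: len(e ^ {1, 2})))  # {1, 2}
```

Any other score-based structure discovery routine could be plugged in by swapping the score function.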
# 5 RESULTS
Pendulum task description. We evaluate the proposed method on a simulated classification task. Our input $\pmb{x}$ is a motion vector over time, $(\theta_t, \frac{d\theta_t}{dt})_{t=1}^T$ , of a simple pendulum of unknown length $l$ after it is dropped from some initial angle $\theta_0$ with $\frac{d\theta_0}{dt} = 0$ . After an initial $\tau$ seconds of uninterrupted motion, we simulate an elastic collision by placing another object of the same mass at the bottom. The classification task is to predict whether the kinetic energy imparted by the pendulum is enough to move the second object beyond a certain threshold.
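A hedged sketch of such inputs under the small-angle approximation $\theta(t) = \theta_0 \cos(\sqrt{g/l}\, t)$ ; the paper's exact simulator, damping, and collision mechanics are not reproduced here. It also illustrates a transformation that trades pendulum length against $\theta_0$ to change the collision time while keeping the initial potential energy fixed:

```python
import math

G = 9.81  # gravitational acceleration (assumed constant)

def motion(length, theta0, T=100, dt=0.01):
    """Motion vector (theta_t, dtheta_t/dt) under the small-angle solution."""
    w = math.sqrt(G / length)
    return [(theta0 * math.cos(w * t * dt), -theta0 * w * math.sin(w * t * dt))
            for t in range(1, T + 1)]

def z1(length, theta0, mass=1.0):
    """Initial potential energy relative to the lowest point."""
    return mass * G * length * (1.0 - math.cos(theta0))

def z2(length):
    """First passage through the bottom: a quarter period."""
    return 0.5 * math.pi * math.sqrt(length / G)

# Same potential energy, different collision time: pick theta0' so that
# l' * (1 - cos theta0') equals l * (1 - cos theta0).
l1, th1, l2 = 1.0, 0.5, 2.0
th2 = math.acos(1.0 - (l1 / l2) * (1.0 - math.cos(th1)))
print(abs(z1(l1, th1) - z1(l2, th2)) < 1e-9)  # True: z1 preserved
print(z2(l2) > z2(l1))                        # True: z2 changed
```

The pair of configurations constructed at the end is thus related by a transformation that preserves one property while changing the other, exactly the structure the next paragraph exploits.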
Physical properties and equivalence relations. We consider the following two properties of the dynamical system described above: $z_{1}:\mathcal{X}\to \mathbb{R}$ , which computes the initial potential energy of the system, and $z_{2}:\mathcal{X}\rightarrow \mathbb{R}$ , which returns the time of collision. The equivalence relations $\sim_{1}$ and $\sim_{2}$ are defined from these properties as in Section 2. For instance, two pendulum motion curves $\pmb{x},\pmb{x}^{\prime}$ are equivalent with respect to $\sim_{1}$ , i.e., $\pmb{x}\sim_{1}\pmb{x}^{\prime}$ , if they have the same time of collision, $z_{2}(\pmb {x}) = z_{2}(\pmb{x}^{\prime})$ . Then $\mathcal{T}^{(1)}$ consists of transformations that change the initial potential energy of the system (for example, by changing the length of the pendulum or the initial dropping angle $\theta_0$ ) while keeping the time of collision the same. Similarly, $\pmb{x}\sim_{2}\pmb{x}^{\prime}$ if their respective initial potential energies are the same, and transformations in $\mathcal{T}^{(2)}$ change the time of collision while keeping the initial potential energy the same. Note that the space of equivalence classes $\mathcal{X}/\sim_{1}$ is lumpable with respect to $\mathcal{T}^{(2)}$ and vice versa (Definition 2). Thus, by Theorem 1, we can use the predictive performance of counterfactual-invariant representations to score the causal DAGs.
Unknown $\mathbb{D}$ and OOD classification. We consider two scenarios for the label $Y$ given $X$ . First, if the motion of the pendulum is not damped by friction, then $Y$ depends only on $z_{1}(\pmb{x})$ , i.e., $\mathbb{D} = \{1\}$ . Second, if the motion of the pendulum is damped, then $Y$ depends on both $z_{1}(\pmb{x})$ and $z_{2}(\pmb{x})$ , i.e., $\mathbb{D} = \{1,2\}$ . The extrapolation test data is generated by shifting the distribution of the background variables $\{U_i\}_{i\in \bar{\mathbb{D}}}$ . The task of a structure discovery algorithm is to correctly identify $\mathbb{D}$ .
Results. We use the greedy equivalence search (GES, Section 4.2) algorithm to search over the different causal graphs with the proposed scoring criterion defined in Equation (8). We build classes of counterfactual-invariant representations $\mathcal{F}_{\mathbb{S}}$ corresponding to each possible value of $\mathbb{S} \subsetneq \{1,2\}$ , where every $\Gamma_{\mathbb{S}} \in \mathcal{F}_{\mathbb{S}}$ is invariant to $\{U_i\}_{i \in \mathbb{S}}$ . For example, $\mathcal{F}_{\{1\}}$ is a family of feedforward neural networks that only take $z_2(x)$ as input, i.e., invariant to $z_1(x)$ , whereas $\mathcal{F}_{\emptyset}$ is a sequence model (e.g., an LSTM) with no invariance. Table 1 reports the estimated complexity $\widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ and the final scores $S(\mathcal{F},\mathcal{D})$ for the different function classes on the two tasks. The bold values indicate the function class chosen by the GES algorithm. When $\mathbb{D} = \{1\}$ , the greedy search stops after adding the edge $U_1 \to Y$ , as adding the second edge $U_2 \to Y$ only worsens (increases) the score. When $\mathbb{D} = \{1,2\}$ , the greedy search is able to improve the score by adding both edges, first $U_1 \to Y$ and then $U_2 \to Y$ . In both cases, the model class chosen achieves the highest extrapolation test accuracy.
Image classification task. Appendices A.4 and A.5 also offer an application to image classification using image transformation sets (both groups and non-groups).
# 6 RELATED WORK
Counterfactual inference and invariances. Recent efforts have brought causal inference to machine learning (extensively reviewed in Schölkopf et al. (2021); Schölkopf (2022)). Invariant Causal Prediction (Peters et al., 2015; Heinze-Deml et al., 2018) and Invariant Risk Minimization methods (Arjovsky et al., 2019; Bellot & van der Schaar, 2020) learn representations that are invariant across multiple environments, but these have been shown to be insufficient for OOD generalization in classification tasks without additional assumptions (Ahuja et al., 2021). Wang & Jordan (2021) use counterfactual language to formally define and learn non-spurious representations from a single environment that can extrapolate to new environments. Veitch et al. (2021) define counterfactual-invariant predictors $f(X)$ when $X$ has a single parent $Z$ and provide conditions such predictors must satisfy over the observed distribution (given an SCM). Kaushik et al. (2020; 2021) propose counterfactual data augmentation for text datasets, but they either require a fully-specified toy SCM or rely on humans in the loop to generate the counterfactual data. Other counterfactual methods (Johansson et al., 2016; Shalit et al., 2017; Qidong et al., 2020) learn representations to predict counterfactual changes in some observed variables, whereas in our setting the transformation variables $U_{i}$ that generate the observed $X$ are unobserved. An in-depth comparison of our work with existing counterfactual methods is presented in Appendix A.3.
Domain adaptation and domain generalization. Domain adaptation and domain generalization (e.g. (Long et al., 2017; Muandet et al., 2013; Quionero-Candela et al., 2009; Rojas-Carulla et al., 2018; Shimodaira, 2000; Zhang et al., 2015) and others) consider observed or known shifts in the data distribution, for instance, given the test distribution $P(X^{\mathrm{te}})$ , rather than counterfactual questions.
Causal structure discovery. Methods for causal structure discovery can be broadly classified into two categories. Constraint-based approaches (e.g., Spirtes et al. (2001); Sun et al. (2007)) use conditional independence tests and reject causal graphs that impose more independence than what is observed in data. On the other hand, score-based causal discovery approaches (e.g., Chickering (2002); Huang et al. (2018); Ding et al. (2020); Zhu et al. (2020)) assign scores to each allowed causal graph based on the data and find the one with the best score. While several works (Budhathoki & Vreeken, 2016; 2017; Bornschein et al., 2021) use minimum description length (MDL) (Schwarz, 1978) as a scoring criterion, we show why it is insufficient for out-of-distribution tasks and use an additional term for tie-breaking. Goudet et al. (2017) minimize the divergence between a distribution generated by a learnt causal DAG and the observed data distribution; however, the method is limited to orienting edges over observed variables, whereas our transformation variables $U_{i}$ are unobserved. Recently, GFlowNets (Bengio et al., 2021a;b) have been used to sample DAGs proportionally to a score function for Bayesian structure learning (Deleu et al., 2022); however, we are interested in finding the single best DAG, i.e., the one with the minimum score.
Group-invariant representations. The majority of these works strictly enforce G-invariances either within the architecture (e.g., Zaheer et al. (2017); Cohen et al. (2016); Lyle et al. (2020); Murphy et al. (2019a)) or via data augmentation (Chen et al., 2020), and do not handle the case where the target is actually influenced by a transformation of the input. Other works (Benton et al., 2020; Zhou et al., 2020; van der Wilk et al., 2018; Anselmi et al., 2019) consider learning symmetries from the training data but do not consider the extrapolation task, which we show can be solved only under certain conditions. Mouli & Ribeiro (2021) consider the special case where the transformations come from normal subgroups and do not formally describe the causal task. These works rely on invertible transformations, whereas we define symmetries more generally via equivalence relations. Dubois et al. (2021) also define invariances via equivalence relations and, under the assumption that all such invariances hold in the data, design methods for data compression. Our goal is rather different: we want to discover which equivalence relations (transformations thereof) affect the label.
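As a concrete illustration of the architectural route mentioned above, a representation can be made exactly invariant to a finite transformation group by averaging over the group orbit. The sketch below is our own minimal numpy example (not code from any of the cited works), using the group of $90^{\circ}$ rotations:

```python
import numpy as np

def orbit_average(x, group):
    """Average x over the orbit of a finite transformation group.

    Applying any group element to x merely permutes the orbit,
    so the averaged representation is exactly G-invariant.
    """
    return np.mean([t(x) for t in group], axis=0)

# The cyclic group of 90-degree rotations, represented by its four elements.
rotations = [lambda x, k=k: np.rot90(x, k) for k in range(4)]

x = np.random.rand(8, 8)
gamma = orbit_average(x, rotations)

# Invariance check: rotating the input leaves the representation unchanged.
assert np.allclose(gamma, orbit_average(np.rot90(x), rotations))
```

Orbit averaging is invariant by construction; the flip side, as discussed above, is that it discards precisely the information carried by the transformation, which is harmful when the label actually depends on it.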
# 7 CONCLUSIONS
This work considered an out-of-distribution (OOD) classification task where the shift between train and test environments arises from different symmetry transformations of the input, with symmetry transformations defined via equivalence relations over the input space. We cast the task of finding the symmetries that affect the label as a causal structure discovery task and showed that, under certain conditions, we can use the predictive performance of invariant representations on the observational data to predict whether an edge exists in the causal DAG (Theorem 1). We then proposed an MDL-based scoring criterion for this causal structure discovery. Finally, we tested our approach on two simulated physics tasks and six image classification tasks.
# ACKNOWLEDGMENTS
This work was funded in part by the National Science Foundation (NSF) Awards CAREER IIS-1943364 and CCF-1918483, the Purdue Integrative Data Science Initiative, and the Wabash Heartland Innovation Network. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.
# REFERENCES
Kartik Ahuja, Ethan Caballero, Dinghuai Zhang, Jean-Christophe Gagnon-Audet, Yoshua Bengio, Ioannis Mitliagkas, and Irina Rish. Invariance principle meets information bottleneck for out-of-distribution generalization. Advances in Neural Information Processing Systems, 34, 2021.
Fabio Anselmi, Georgios Evangelopoulos, Lorenzo Rosasco, and Tomaso Poggio. Symmetry-adapted representation learning. Pattern Recognition, 86:201-208, February 2019. ISSN 0031-3203. doi: 10.1016/j.patcog.2018.07.025.
Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.
Elias Bareinboim, Juan Correa, Duligur Ibeling, and Thomas Icard. On Pearl's hierarchy and the foundations of causal inference. ACM special volume in honor of Judea Pearl, 2020.
Alexis Bellot and Mihaela van der Schaar. Accounting for unobserved confounding in domain generalization. arXiv preprint arXiv:2007.10653, 2020.
Shai Ben-David, John Blitzer, Koby Crammer, Fernando Pereira, et al. Analysis of representations for domain adaptation. Advances in neural information processing systems, 19:137, 2007.
Emmanuel Bengio, Moksh Jain, Maksym Korablyov, Doina Precup, and Yoshua Bengio. Flow network based generative models for non-iterative diverse candidate generation. Advances in Neural Information Processing Systems, 34, 2021a.
Yoshua Bengio, Tristan Deleu, Edward J Hu, Salem Lahlou, Mo Tiwari, and Emmanuel Bengio. GFlowNet foundations. arXiv preprint arXiv:2111.09266, 2021b.
Gregory Benton, Marc Finzi, Pavel Izmailov, and Andrew Gordon Wilson. Learning invariances in neural networks from data. NeurIPS, 2020.
Jorg Bornschein, Silvia Chiappa, Alan Malek, and Rosemary Nan Ke. Prequential MDL for Causal Structure Learning with Neural Networks. July 2021.
Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18-42, 2017.
Kailash Budhathoki and Jilles Vreeken. Causal Inference by Compression. In 2016 IEEE 16th International Conference on Data Mining (ICDM), pp. 41-50, Barcelona, Spain, December 2016. IEEE. ISBN 978-1-5090-5473-2. doi: 10.1109/ICDM.2016.0015.
Kailash Budhathoki and Jilles Vreeken. MDL for Causal Inference on Discrete Data. In 2017 IEEE International Conference on Data Mining (ICDM), pp. 751-756, November 2017. doi: 10.1109/ICDM.2017.87.
Shuxiao Chen, Edgar Dobriban, and Jane H. Lee. A group-theoretic framework for data augmentation. Journal of Machine Learning Research, 21(245):1-71, 2020. URL http://jmlr.org/papers/v21/20-163.html.
David Maxwell Chickering. Optimal Structure Identification With Greedy Search. Journal of Machine Learning Research, 3(Nov):507-554, 2002. ISSN 1533-7928.
Taco Cohen and Max Welling. Group equivariant convolutional networks. In International Conference on Machine Learning, pp. 2990-2999. PMLR, 2016.
Elliot Creager, Jorn-Henrik Jacobsen, and Richard Zemel. Environment inference for invariant learning. In International Conference on Machine Learning, pp. 2189-2200. PMLR, 2021.
Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D Hoffman, et al. Underspecification presents challenges for credibility in modern machine learning. arXiv preprint arXiv:2011.03395, 2020.
Tristan Deleu, Antonio Gois, Chris Emezue, Mansi Rankawat, Simon Lacoste-Julien, Stefan Bauer, and Yoshua Bengio. Bayesian structure learning with generative flow networks. arXiv preprint arXiv:2202.13903, 2022.
Chenwei Ding, Biwei Huang, Mingming Gong, Kun Zhang, Tongliang Liu, and Dacheng Tao. Score-based Causal Discovery from Heterogeneous Data. September 2020.
Yann Dubois, Benjamin Bloem-Reddy, Karen Ullrich, and Chris J Maddison. Lossy compression for lossless prediction. arXiv preprint arXiv:2106.10800, 2021.
Marc Finzi, Max Welling, and Andrew Gordon Wilson. A practical method for constructing equivariant multilayer perceptrons for arbitrary matrix groups. arXiv preprint arXiv:2104.09459, 2021.
Robert Geirhos, Jorn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11):665-673, 2020.
Olivier Goudet, Diviyan Kalainathan, Philippe Caillou, Isabelle Guyon, David Lopez-Paz, and Michèle Sebag. Causal generative neural networks. arXiv preprint arXiv:1711.08936, 2017.
Christina Heinze-Deml, Marloes H. Maathuis, and Nicolai Meinshausen. Causal Structure Learning. arXiv:1706.09141 [stat], June 2017.
Christina Heinze-Deml, Jonas Peters, and Nicolai Meinshausen. Invariant Causal Prediction for Nonlinear Models. arXiv:1706.08576 [stat], September 2018.
Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. In Advances in Neural Information Processing Systems, 2020.
Biwei Huang, Kun Zhang, Yizhu Lin, Bernhard Scholkopf, and Clark Glymour. Generalized Score Functions for Causal Discovery. KDD: proceedings. International Conference on Knowledge Discovery & Data Mining, 2018:1551-1560, August 2018. ISSN 2154-817X. doi: 10.1145/3219819.3220104.
Maximilian Ilse, Jakub M Tomczak, and Patrick Forre. Selecting data augmentation for simulating interventions. In International Conference on Machine Learning, pp. 4555-4562. PMLR, 2021.
Fredrik Johansson, Uri Shalit, and David Sontag. Learning representations for counterfactual inference. In International conference on machine learning, pp. 3020-3029, 2016.
Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. Learning the difference that makes a difference with counterfactually-augmented data. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SklgsoNFvr.
Divyansh Kaushik, Amrith Setlur, Eduard H Hovy, and Zachary Chase Lipton. Explaining the efficacy of counterfactually augmented data. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=HHiiQKWsOcv.
Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Sara Beery, et al. Wilds: A benchmark of in-the-wild distribution shifts. arXiv preprint arXiv:2012.07421, 2020.
Risi Kondor and Shubhendu Trivedi. On the generalization of equivariance and convolution in neural networks to the action of compact groups. In International Conference on Machine Learning, pp. 2747-2755. PMLR, 2018.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Dinghuai Zhang, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (rex). In International Conference on Machine Learning, pp. 5815-5826. PMLR, 2021.
Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. In International conference on machine learning, pp. 2208-2217. PMLR, 2017.
Clare Lyle, Mark van der Wilk, Marta Kwiatkowska, Yarin Gal, and Benjamin Bloem-Reddy. On the benefits of invariance in neural networks. arXiv preprint arXiv:2005.00178, 2020.
Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. arXiv preprint arXiv:1812.09902, 2018.
S Chandra Mouli and Bruno Ribeiro. Neural network extrapolations with g-invariances from a single environment. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=7t1FcJUWhi3.
Krikamol Muandet, David Balduzzi, and Bernhard Scholkopf. Domain generalization via invariant feature representation. In International Conference on Machine Learning, pp. 10-18, 2013.
R. Murphy, B. Srinivasan, V. Rao, and B. Ribeiro. Janossy pooling: Learning deep permutation-invariant functions for variable-size inputs. In International Conference on Learning Representations, 2019a.
Ryan Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Relational pooling for graph representations. In Proceedings of the 36th International Conference on Machine Learning, 2019b.
J Pearl and D Mackenzie. The ladder of causation. The book of why: the new science of cause and effect. New York (NY): Basic Books, pp. 23-52, 2018.
Jonas Peters, Peter Buhlmann, and Nicolai Meinshausen. Causal inference using invariant prediction: identification and confidence intervals. arXiv preprint arXiv:1501.01332, 2015.
Liu Qidong, Tian Feng, Ji Weihua, and Zheng Qinghua. A new representation learning method for individual treatment effect estimation: Split covariate representation network. In Asian Conference on Machine Learning, pp. 811-822. PMLR, 2020.
Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. Dataset shift in machine learning. The MIT Press, 2009.
Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, and Jonas Peters. Invariant models for causal transfer learning. The Journal of Machine Learning Research, 19(1):1309-1342, 2018.
Joseph Rosen. Symmetry rules: How science and nature are founded on symmetry. Springer Science & Business Media, 2008.
Elan Rosenfeld, Pradeep Ravikumar, and Andrej Risteski. The risks of invariant risk minimization. arXiv preprint arXiv:2010.05761, 2020.
Bernhard Schölkopf. Causality for machine learning. In Probabilistic and Causal Inference: The Works of Judea Pearl, pp. 765-804. 2022.
Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio. Toward causal representation learning. Proceedings of the IEEE, 109(5):612-634, 2021.
Gideon Schwarz. Estimating the Dimension of a Model. The Annals of Statistics, 6(2):461-464, March 1978. ISSN 0090-5364, 2168-8966. doi: 10.1214/aos/1176344136.
Uri Shalit, Fredrik D Johansson, and David Sontag. Estimating individual treatment effect: generalization bounds and algorithms. In International Conference on Machine Learning, pp. 3076-3085. PMLR, 2017.
John Shawe-Taylor. Symmetries and discriminability in feedforward network architectures. IEEE Transactions on Neural Networks, 4(5):816-826, 1993.
Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of statistical planning and inference, 90(2):227-244, 2000.
Yurii Mikhailovich Shtarkov. Universal sequential coding of single messages. Problemy Peredachi Informatsii, 23(3):3-17, 1987.
Murray Sidman and William Tailby. Conditional discrimination vs. matching to sample: An expansion of the testing paradigm. Journal of the Experimental Analysis of Behavior, 37(1):5-22, 1982.
Murray Sidman, Ricki Rauzin, Ronald Lazar, Sharon Cunningham, William Tailby, and Philip Carrigan. A search for symmetry in the conditional discriminations of rhesus monkeys, baboons, and children. Journal of the Experimental Analysis of Behavior, 37(1):23-44, 1982.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Peter Spirtes, Clark Glymour, and Richard Scheines. Causation, Prediction, and Search, 2nd Edition. MIT Press Books, The MIT Press, 2001.
Xiaohai Sun, Dominik Janzing, Bernhard Scholkopf, and Kenji Fukumizu. A kernel-based causal learning algorithm. In Proceedings of the 24th International Conference on Machine Learning, ICML '07, pp. 855-862, New York, NY, USA, June 2007. Association for Computing Machinery. ISBN 978-1-59593-793-3. doi: 10.1145/1273496.1273604.
Damien Teney, Ehsan Abbasnedjad, and Anton van den Hengel. Learning what makes a difference from counterfactual examples and gradient supervision. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part X 16, pp. 580-599. Springer, 2020.
Mark van der Wilk, Matthias Bauer, ST John, and James Hensman. Learning invariances using the marginal likelihood. In Advances in Neural Information Processing Systems, pp. 9938-9948, 2018.
Victor Veitch, Alexander D'Amour, Steve Yadlowsky, and Jacob Eisenstein. Counterfactual invariance to spurious correlations: Why and how to pass stress tests. arXiv preprint arXiv:2106.00545, 2021.
Yixin Wang and Michael I. Jordan. Desiderata for Representation Learning: A Causal Perspective. arXiv:2109.03795 [cs, stat], September 2021.
Gesche Westphal-Fitch, Ludwig Huber, Juan Carlos Gomez, and W Tecumseh Fitch. Production and perception rules underlying visual patterns: effects of symmetry and hierarchy. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1598):2007-2022, 2012.
Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep Sets. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (eds.), Advances in Neural Information Processing Systems 30, pp. 3391-3401. Curran Associates, Inc., 2017.
Kun Zhang, Mingming Gong, and Bernhard Schölkopf. Multi-source domain adaptation: A causal view. In AAAI, volume 1, pp. 3150-3157, 2015.
Han Zhao, Remi Tachet Des Combes, Kun Zhang, and Geoffrey Gordon. On learning invariant representations for domain adaptation. In International Conference on Machine Learning, pp. 7523-7532. PMLR, 2019.
Allan Zhou, Tom Knowles, and Chelsea Finn. Meta-Learning Symmetries by Reparameterization. arXiv:2007.02933 [cs, stat], October 2020.
Shengyu Zhu, Ignavier Ng, and Zhitang Chen. Causal Discovery with Reinforcement Learning. In International Conference on Learning Representations, 2020.
# A APPENDIX
# A.1 JUSTIFICATION FOR ASSUMPTION 1.
The above assumption is inspired by the deep relationship between symmetries and intelligence. Young children, unlike monkeys and baboons, assume that a conditional stimulus $\mathrm{F}$ given another stimulus $\mathrm{D}$ extrapolates to the symmetric relation $\mathrm{D}$ given $\mathrm{F}$ without ever seeing any such examples (Sidman et al., 1982). That is, if given $\mathrm{D}$, action $\mathrm{F}$ produces a treat, the child assumes that given $\mathrm{F}$, action $\mathrm{D}$ also produces a treat. Young children differ from primates in their ability to use symmetries to build conceptual relations beyond visual patterns (Sidman & Tailby, 1982; Westphal-Fitch et al., 2012), enabling extrapolations through intelligent reasoning. However, forcing symmetries against data evidence is undesirable, since broken symmetries can provide valuable information. Unsurprisingly, humans are generally able to quickly find and pay attention to certain types of asymmetries.
# A.2 PROOF OF THEOREM 1
Theorem 1 (Counterfactual invariance & causal DAG identification). Let $\mathcal{X} / \sim_{i}$ be lumpable given every $\mathcal{T}^{(j)}, j \neq i$ as in Definition 2. Then, the structural causal DAG implied by Equations (2) to (4) (depicted in Figure 2a(i)) does not contain the edge $U_{i} \to Y$ iff
$$
\left| P\left(Y \mid \Gamma_{i}(X), U_{Y}\right) - P\left(Y \mid X, U_{Y}\right) \right|_{\mathrm{TV}} = 0, \tag{5}
$$
$\forall P(X^{\dagger}), \forall P(U_1), \ldots, \forall P(U_m)$ , where $\Gamma_i$ is a most-expressive representation that is invariant with respect to $\sim_i$ .
Proof. Notation (following Equation (3)): The observed input $X$ is $X := t(U_1, \ldots, U_{i-1}, u_i, U_{i+1}, \ldots, U_m) \circ X^\dagger$, where $t(U_1, \ldots, U_m)$ is obtained by interleaving the transformation sequences selected by each of $U_1, \ldots, U_m$, and we have set $U_i = u_i$.
Necessity: We wish to show that if the SCM does not contain the edge $U_{i} \to Y$, then Equation (5) holds for all $P(X^{\dagger}), P(U_1), \ldots, P(U_m)$. By this assumption, $Y$ outputs the same label for any value of $U_{i}$. Consider the collection of equivalence classes $\mathcal{X} / \sim_{i}$. By the lumpability condition of Definition 2, every transformation $t^{(j)} \in \mathcal{T}^{(j)}, j \neq i$, maps all points in one equivalence class of $\sim_{i}$ to points in a single (possibly different) equivalence class. On the other hand, all transformations $t^{(i)} \in \mathcal{T}^{(i)}$ map points to other points within the same equivalence class under $\sim_{i}$. Now, consider the equivalence class of $X$ after all the transformations have been applied to $X^{\dagger}$. The equivalence class of $X = t(U_{1},\dots ,U_{i - 1},u_{i},U_{i + 1},\dots ,U_{m}) \circ X^{\dagger}$ is the same as that of $X^{*} = t(U_{1},\dots ,U_{i - 1},u_{i}^{\mathrm{id}},U_{i + 1},\dots ,U_{m}) \circ X^{\dagger}$, where $U_{i} = u_{i}^{\mathrm{id}}$ always selects identity transformations. This is because changing $u_{i}$ to $u_{i}^{\mathrm{id}}$ only affects the transformations chosen from $\mathcal{T}^{(i)}$, and these transformations do not change the equivalence class under $\sim_{i}$. Thus, we reach the same equivalence class under $\sim_{i}$ for both $X$ and $X^{*}$.
Now let $\Gamma_{i}$ be a most-expressive representation that is invariant with respect to $\sim_{i}$. By definition, $\Gamma_{i}$ outputs the same value within an equivalence class; thus, $\Gamma_{i}(X) = \Gamma_{i}(X^{*})$. But since by assumption the edge $U_{i} \to Y$ does not exist, $X$ and $X^{*}$ always have the same label. Thus, no information for predicting $Y$ is lost by imposing the additional constraint $\Gamma_{i}(X) = \Gamma_{i}(X^{*})$. Since $\Gamma_{i}$ is most-expressive, we have $P(Y = y \mid \Gamma_{i}(X), U_{Y}) = P(Y = y \mid X, U_{Y})$ for all $y \in \mathcal{Y}$. This holds for all values of $u_{i}$, and hence we obtain the desired result for any distribution $P(U_{i})$.
Sufficiency: We wish to show that if Equation (5) holds for all $P(X^{\dagger})$ and $P(U_1),\ldots ,P(U_m)$, then there is no edge $U_{i}\to Y$ in the causal graph. We prove the contrapositive: assuming there is an edge $U_{i}\rightarrow Y$, we show that there exist distributions $P(X^{\dagger})$ and $P(U_{1}),\dots ,P(U_{m})$ such that Equation (5) does not hold.
Define $P(X^{\dagger}) = \delta_{x^{\dagger}}$ for some $x^{\dagger} \in \mathcal{X}$ where $\delta$ denotes a Dirac-delta function. Define $P(U_i = u_i^{\mathrm{id}}) = 0.5$ and $P(U_i = u_i) = 0.5$ for $u_i^{\mathrm{id}}, u_i \in \operatorname{supp}(U_i)$ . As usual, $u_i^{\mathrm{id}}$ always selects the identity transformation, and $u_i$ selects a single transformation $t_{u_i} \in \mathcal{T}^{(i)}$ . Similarly, for all $j \neq i$ , define $P(U_j) = \delta_{u_j^{\mathrm{id}}}$ for $u_j^{\mathrm{id}} \in \operatorname{supp}(U_j)$ that only select identity transformations. Now, there are two possible observed inputs: $\boldsymbol{x} = t(u_1^{\mathrm{id}}, \ldots, u_m^{\mathrm{id}}) \circ x^{\dagger} = x^{\dagger}$ and $\boldsymbol{x}' = t(u_1^{\mathrm{id}}, \ldots, u_i, \ldots, u_m^{\mathrm{id}}) \circ x^{\dagger} = t_{u_i} \circ x^{\dagger}$ . Finally, define $Y := \mathbf{1}(U_i = u_i^{\mathrm{id}})$ , thus $\boldsymbol{x}$ and $\boldsymbol{x}'$ have different labels. But, any invariant
representation $\Gamma_{i}$ by definition has $\Gamma_{i}(\pmb {x}) = \Gamma_{i}(\pmb{x}^{\prime})$ since they belong to the same equivalence class. Thus, even if $\Gamma_{i}$ is most-expressive, we have $|P(Y|\Gamma_i(X),U_Y) - P(Y|X,U_Y)|_{\mathrm{TV}} = 0.5.$
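The counterexample in the sufficiency argument can be checked numerically. The toy sketch below is our own encoding of the construction (not the authors' code): $X^{\dagger}$ is the fixed point $0$, the transformation $t_{u_i}$ maps it to $1$, $Y := \mathbf{1}(U_i = u_i^{\mathrm{id}})$, and any $\sim_i$-invariant $\Gamma_i$ collapses the two observed inputs to a single value:

```python
from fractions import Fraction

# (X, Y, probability): X† = 0 is fixed; U_i picks the identity or t_{u_i}
# with probability 1/2 each, and Y := 1(U_i = identity).
samples = [
    (0, 1, Fraction(1, 2)),  # U_i = identity: X stays 0, label 1
    (1, 0, Fraction(1, 2)),  # U_i = u_i:      X becomes 1, label 0
]

def p_y1_given_x(x):
    """P(Y = 1 | X = x) under the constructed distribution."""
    num = sum(p for (xx, y, p) in samples if xx == x and y == 1)
    den = sum(p for (xx, y, p) in samples if xx == x)
    return num / den

# An invariant Gamma_i maps both inputs to one value, so conditioning on
# Gamma_i(X) gives only the marginal P(Y = 1).
p_y1_given_gamma = sum(p for (_, y, p) in samples if y == 1)

# For binary Y, total variation distance is |p - q|; at x = 0 it is 1/2.
tv = abs(p_y1_given_gamma - p_y1_given_x(0))
print(tv)  # 1/2
```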
# A.3 ADDITIONAL RELATED WORK
Counterfactual invariances. Wang & Jordan (2021) use counterfactual language to formally define and learn non-spurious, disentangled representations from a single environment. Our work is different in the following ways. In the structural causal model (SCM) of their work, the authors assume that there are no confounders between the observed $X$ and the label $Y$ . However, in our SCM (Figure 2a(i)), we allow unobserved confounders $X^{\dagger}$ and $U_{i}, i \in \mathbb{D}$ . The hidden transformation variables $U_{i}, i \in \mathbb{D}$ are confounders because they affect both the observed input $X$ and the labels $Y$ . We leverage the fact that the confounders are related to symmetries (and do not affect $X$ arbitrarily) to resolve the issue with unobserved confounding. Wang & Jordan (2021) also require pinpointability of the cause of the observed $X$ . In our setting, this is typically not possible since there are multiple paths of transformations from $X^{\dagger}$ to the same observed $X$ . Thus, all the parents of $X$ may not be pinpointable, specifically the transformation variables $U_{1}, \ldots, U_{m}$ .
Kaushik et al. (2020; 2021) propose counterfactual data augmentation for text datasets where human annotators are asked to make minimal modifications to the input document so as to change its label (for example, by changing a few positive words to negative words) while keeping style, etc. fixed. This type of augmentation essentially asks the labelers to identify all the causal features in the document and make modifications to those features alone. This can be seen as obtaining new counterfactual examples by simulating the causal model and requires knowing the true function that describes how the features affect the labels. We consider the more realistic setting where we do not have access to such a collection of counterfactual examples. In this work, we consider the traditional automated data augmentations under a mostly unknown data generation process, as opposed to the counterfactual data augmentation (Kaushik et al., 2020) that either considers a fully-specified toy SCM or relies on humans-in-the-loop to generate counterfactual data.
In Figure 1(c) we show that the standard data augmentation is not sufficient for the OOD task. However, if one had access to the fully-specified causal model, one could generate the counterfactual data shown in Figure 1(d) and learn an OOD classifier with the counterfactually augmented data (as done by Kaushik et al. (2020)). But our work does not assume access to these counterfactual examples. Additionally, we prove that a counterfactual invariant classifier can be constructed from traditional data augmentation alone if the lumpability condition (Definition 2) is satisfied. This is not the case in Figure 1(d).
Veitch et al. (2021) define counterfactual invariant predictors $f(X)$ when $X$ has a single parent $Z$ and provide conditions such predictors must satisfy over the observed distribution (given an SCM). Note also that Veitch et al. (2021) assume that a part of the observed input $X$ (denoted $X_{Z}^{\perp}$) is not causally influenced by the confounder $Z$. In our scenarios this is not generally true: for example, under a color change, the entire observed image $X$ changes. Still, we show that the notion of a counterfactual invariant predictor exists. Hence, the definition of Veitch et al. (2021, Lemma 3.1) of a counterfactually invariant predictor, which requires a segment of $X$ that does not causally depend on $Z$ (a fundamental result of their work), unfortunately does not apply to our setting, since $X$ may have no such segment.
# A.4 MNIST-\{3,4\} EXPERIMENTS WITH FINITE TRANSFORMATION GROUPS
We test our proposed method on out-of-distribution tasks on images where the equivalence relations (symmetries) are given by transformation groups (e.g., $90^{\circ}$ rotations). We use the MNIST-\{3, 4\} (colored) dataset (Mouli & Ribeiro, 2021), which only contains the digits 3 and 4, and follow their experimental setup. MNIST-\{3, 4\} is used to avoid confounding factors while testing whether the proposed method can learn the correct invariances, not for any practical considerations (e.g., a rotated 6 is a 9 and would interfere with some experiments).
We consider equivalence relations obtained from 3 different transformation groups: rotations by $90^{\circ}$ (denoted $G_{\mathrm{rot}}$), vertically flipping the image (denoted $G_{\mathrm{v-flip}}$), and permuting the RGB color channels of the image (denoted $G_{\mathrm{col}}$). The 3 corresponding equivalence relations are lumpable (Definition 2) with respect to the transformations in the other two groups in almost all cases. The only exception
Table 2: Results for different function classes on the MNIST-{3,4} classification task with $\bar{\mathbb{D}} = \{\mathrm{rot}, \mathrm{col}, \mathrm{vflip}\}$, $\mathbb{D} = \varnothing$, i.e., the task is invariant to all three groups ($\bar{\mathbb{D}}$) and sensitive to none ($\mathbb{D}$). $R(\mathcal{F})$, $\widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ and $S(\mathcal{F},\mathcal{D})$ are as discussed in Section 4.2. Bold values indicate the function class chosen by the GES method with the proposed scoring criterion. Test accuracy is computed on the extrapolated dataset after shifting the distribution of $P(\{U_i\}_{i\in \bar{\mathbb{D}}})$. We see that the $S(\mathcal{F},\mathcal{D})$ loss selects the correct model class in training.
<table><tr><td>Model class</td><td>R(F)</td><td>+ COMP(F, D)</td><td>+ NLL(F, D)</td><td>= S(F, D)</td><td>Train Acc</td><td>Test Acc</td></tr><tr><td>F{}</td><td>7</td><td>6639.310</td><td>0.013</td><td>6646.324</td><td>100.00 (0.00)</td><td>48.38 (5.22)</td></tr><tr><td>F{vflip}</td><td>3</td><td>6639.241</td><td>0.079</td><td>6642.320</td><td>100.00 (0.00)</td><td>47.08 (5.34)</td></tr><tr><td>F{col}</td><td>3</td><td>6639.241</td><td>0.029</td><td>6642.270</td><td>100.00 (0.00)</td><td>53.92 (2.47)</td></tr><tr><td>F{col,vflip}</td><td>1</td><td>6639.241</td><td>0.099</td><td>6640.340</td><td>100.00 (0.00)</td><td>53.15 (1.83)</td></tr><tr><td>F{rot}</td><td>3</td><td>6639.241</td><td>0.037</td><td>6642.278</td><td>100.00 (0.00)</td><td>53.06 (10.00)</td></tr><tr><td>F{rot,vflip}</td><td>1</td><td>6639.241</td><td>0.580</td><td>6640.821</td><td>100.00 (0.01)</td><td>54.86 (13.60)</td></tr><tr><td>F{rot,col}</td><td>1</td><td>6639.241</td><td>0.043</td><td>6640.284</td><td>100.00 (0.00)</td><td>90.29 (6.76)</td></tr><tr><td>F{rot,col,vflip}</td><td>0</td><td>6639.241</td><td>0.210</td><td>6639.451</td><td>100.00 (0.00)</td><td>92.02 (2.99)</td></tr></table>
Table 3: Results for different function classes on the MNIST-{3,4} classification task with $\bar{\mathbb{D}} = \{\mathrm{rot},\mathrm{vflip}\}$, $\mathbb{D} = \{\mathrm{col}\}$, i.e., the task is invariant to the rotation and vertical-flip groups ($\bar{\mathbb{D}}$) but sensitive to color ($\mathbb{D}$). $R(\mathcal{F})$, $\widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ and $S(\mathcal{F},\mathcal{D})$ are as discussed in Section 4.2. Bold values indicate the function class chosen by the GES method with the proposed scoring criterion. Test accuracy is computed on the extrapolated dataset after shifting the distribution of $P(\{U_i\}_{i\in \bar{\mathbb{D}}})$. We see that the $S(\mathcal{F},\mathcal{D})$ loss selects the correct model class in training.
<table><tr><td>Model class</td><td>R(F)</td><td>+ COMP(F, D)</td><td>+ NLL(F, D)</td><td>= S(F, D)</td><td>Train Acc</td><td>Test Acc</td></tr><tr><td>F{}</td><td>7</td><td>6639.241</td><td>0.010</td><td>6646.251</td><td>100.00 (0.00)</td><td>54.79 (0.74)</td></tr><tr><td>F{vflip}</td><td>3</td><td>6639.241</td><td>0.012</td><td>6642.253</td><td>100.00 (0.00)</td><td>55.05 (1.56)</td></tr><tr><td>F{col}</td><td>3</td><td>6639.240</td><td>8269.480</td><td>14911.720</td><td>41.98 (5.79)</td><td>18.81 (2.94)</td></tr><tr><td>F{col,vflip}</td><td>1</td><td>6639.241</td><td>8275.716</td><td>14915.957</td><td>42.71 (4.07)</td><td>18.62 (2.25)</td></tr><tr><td>F{rot}</td><td>3</td><td>6638.946</td><td>0.132</td><td>6642.078</td><td>100.00 (0.00)</td><td>91.40 (3.19)</td></tr><tr><td>F{rot,vflip}</td><td>1</td><td>6638.428</td><td>0.504</td><td>6639.932</td><td>100.00 (0.00)</td><td>92.32 (1.84)</td></tr><tr><td>F{rot,col}</td><td>1</td><td>6639.241</td><td>8412.954</td><td>15053.194</td><td>37.20 (1.97)</td><td>29.25 (5.18)</td></tr><tr><td>F{rot,col,vflip}</td><td>0</td><td>6639.239</td><td>8389.719</td><td>15028.958</td><td>38.01 (2.02)</td><td>29.98 (3.96)</td></tr></table>
is the equivalence relation $\sim_{\mathrm{v - flip}}$, which is not lumpable with respect to the transformations in $G_{\mathrm{rot}}$. Consequently, we do not consider a task with invariance to vertical flips alone. We test our method on the same four classification tasks proposed by Mouli & Ribeiro (2021), where each task corresponds to the target $Y$ having a different set of invariances: invariant to all three groups, to two, to one, or to none (the task is sensitive to the remaining groups).
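The failure of lumpability for $\sim_{\mathrm{v-flip}}$ under $90^{\circ}$ rotations can be seen on a concrete array. Under our reading of Definition 2, a rotation should map the v-flip equivalence class $\{x, \mathrm{flip}(x)\}$ into a single v-flip class; the numpy check below (a sketch of ours, not the authors' code) shows that it does not:

```python
import numpy as np

x = np.arange(9).reshape(3, 3)  # a generic, asymmetric image

# The v-flip equivalence class of x contains x and its vertical flip.
cls = [x, np.flipud(x)]

# Rotate every member of the class by 90 degrees.
rotated = [np.rot90(m) for m in cls]

# For lumpability, the rotated members should again be v-flip equivalent,
# i.e. rot90(flipud(x)) should equal rot90(x) or flipud(rot90(x)).
same_class = any(
    np.array_equal(rotated[1], cand)
    for cand in (rotated[0], np.flipud(rotated[0]))
)
print(same_class)  # False: the class is torn apart, so lumpability fails
```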
|
| 376 |
+
|
| 377 |
+
We use the VGG architecture (Simonyan & Zisserman, 2014) for image classification and construct a collection of function classes $\mathcal{F} \coloneqq \{\mathcal{F}_{\mathbb{S}} : \mathbb{S} \subseteq \{\mathrm{rot}, \mathrm{col}, \mathrm{v}\text{-flip}\}\}$ corresponding to various invariant representations. For example, $\mathcal{F}_{\{\mathrm{rot}, \mathrm{col}\}}$ is a space of functions (CNNs) that are G-invariant to the rotation and color-permutation groups ( $G_{\mathrm{rot}}$ and $G_{\mathrm{col}}$ ), and $\mathcal{F}_{\emptyset}$ is the space of functions with no invariance (standard CNN).
Results. Our results are shown in Tables 2 to 5 for the four tasks respectively, where the label is (i) invariant to all three groups, (ii) invariant to only rotation and vertical flips, (iii) invariant to color-permutation, and (iv) invariant to none. We show the values for $R(\mathcal{F})$ , $\widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ and $S(\mathcal{F},\mathcal{D})$ as discussed in Section 4.2. Bold values in the tables indicate the function class chosen by the GES method with the proposed scoring criterion (minimizing $S(\mathcal{F},\mathcal{D})$ ). Test accuracy is computed on the extrapolated dataset after shifting the distribution of $P(\{U_i\}_{i\in \bar{\mathbb{D}}})$ (i.e., by applying the transformations that the label is invariant to).
In Tables 2 and 3, we see that the proposed method selects the correct model class in training and achieves the best OOD test accuracy. In Tables 4 and 5, the method is excessively invariant (to vertical flip) but still achieves within $1\%$ of the best OOD test accuracy. The OOD test accuracy of a
Table 4: Results for different function classes on the MNIST-{3,4} classification task with $\bar{\mathbb{D}} = \{\mathrm{col}\}$, $\mathbb{D} = \{\mathrm{rot},\mathrm{vflip}\}$, i.e., the task is invariant to color ($\bar{\mathbb{D}}$) but sensitive to rotations and vertical flips ($\mathbb{D}$). $R(\mathcal{F})$, $\widehat{\mathrm{COMP}} (\mathcal{F},\mathcal{D})$ and $S(\mathcal{F},\mathcal{D})$ are as discussed in Section 4.2. Bold values indicate the function class chosen by the GES method with the proposed scoring criterion. Test accuracy is computed on the extrapolated dataset after shifting the distribution of $P(\{U_i\}_{i\in \bar{\mathbb{D}}})$. We see that the $S(\mathcal{F},\mathcal{D})$ loss selects a model that is excessively invariant in training, but the test accuracy is not much penalized by the extra invariance (vertical flips).
<table><tr><td>Model class</td><td>R(F)</td><td>+ COMP(F, D)</td><td>+ NLL(F, D)</td><td>= S(F, D)</td><td>Train Acc</td><td>Test Acc</td></tr><tr><td>F{}</td><td>7</td><td>6639.241</td><td>2.395</td><td>6648.636</td><td>100.00 (0.01)</td><td>16.87 (5.88)</td></tr><tr><td>F{vflip}</td><td>3</td><td>6639.233</td><td>5.370</td><td>6647.603</td><td>99.99 (0.05)</td><td>15.71 (5.53)</td></tr><tr><td>F{col}</td><td>3</td><td>6639.196</td><td>2.315</td><td>6644.512</td><td>100.00 (0.00)</td><td>97.28 (0.28)</td></tr><tr><td>F{col,vflip}</td><td>1</td><td>6639.240</td><td>3.098</td><td>6643.337</td><td>100.00 (0.00)</td><td>96.82 (0.54)</td></tr><tr><td>F{rot}</td><td>3</td><td>6639.228</td><td>5296.755</td><td>11938.984</td><td>56.17 (3.90)</td><td>6.20 (0.86)</td></tr><tr><td>F{rot,vflip}</td><td>1</td><td>6639.221</td><td>5325.008</td><td>11965.228</td><td>55.96 (5.39)</td><td>7.24 (1.48)</td></tr><tr><td>F{rot,col}</td><td>1</td><td>6639.218</td><td>5322.015</td><td>11962.233</td><td>56.14 (3.31)</td><td>47.98 (1.34)</td></tr><tr><td>F{rot,col,vflip}</td><td>0</td><td>6639.230</td><td>5342.805</td><td>11982.035</td><td>55.32 (3.80)</td><td>49.25 (3.09)</td></tr></table>
Table 5: Results for different function classes on the MNIST-\{3,4\} classification task with $\bar{\mathbb{D}} = \emptyset, \mathbb{D} = \{\text{rot}, \text{col}, \text{vflip}\}$ , i.e., the task is sensitive to all three groups ( $\mathbb{D}$ ) and insensitive to none ( $\bar{\mathbb{D}}$ ). $R(\mathcal{F}), \widehat{\mathrm{COMP}}(\mathcal{F}, \mathcal{D})$ and $S(\mathcal{F}, \mathcal{D})$ are as discussed in Section 4.2. **Bold** values indicate the function class chosen by the GES method with the proposed scoring criterion. Test accuracy is computed on the extrapolated dataset after shifting the distribution of $P(\{U_i\}_{i \in \bar{\mathbb{D}}})$ . We see that the $S(\mathcal{F}, \mathcal{D})$ loss selects a model that is excessively invariant in training, but the test accuracy is not much penalized by the extra invariance (vertical flip).
<table><tr><td>Model class</td><td>R(F)</td><td>+ COMP(F, D)</td><td>+ NLL(F, D)</td><td>= S(F, D)</td><td>Train Acc</td><td>Test Acc</td></tr><tr><td>F{}</td><td>7</td><td>6639.165</td><td>1.195</td><td>6647.360</td><td>100.00 (0.00)</td><td>96.00 (0.60)</td></tr><tr><td>F{vflip}</td><td>3</td><td>6639.117</td><td>3.548</td><td>6645.665</td><td>100.00 (0.00)</td><td>95.18 (0.45)</td></tr><tr><td>F{col}</td><td>3</td><td>6639.192</td><td>7536.167</td><td>14178.359</td><td>58.77 (3.34)</td><td>32.45 (2.18)</td></tr><tr><td>F{col,vflip}</td><td>1</td><td>6639.184</td><td>7902.462</td><td>14542.645</td><td>52.50 (7.64)</td><td>31.21 (2.48)</td></tr><tr><td>F{rot,col}</td><td>1</td><td>6639.088</td><td>13628.356</td><td>20268.443</td><td>23.78 (2.25)</td><td>15.93 (0.71)</td></tr><tr><td>F{rot}</td><td>3</td><td>6639.153</td><td>5259.957</td><td>11902.110</td><td>58.12 (4.05)</td><td>47.23 (1.89)</td></tr><tr><td>F{rot,vflip}</td><td>1</td><td>6639.827</td><td>5267.771</td><td>11908.598</td><td>57.13 (1.38)</td><td>47.57 (2.15)</td></tr><tr><td>F{rot,col,vflip}</td><td>0</td><td>6639.055</td><td>13705.123</td><td>20344.178</td><td>22.97 (3.32)</td><td>16.13 (2.22)</td></tr></table>
standard CNN with no invariance $(\mathcal{F}_{\emptyset})$ is typically very low except in Table 5 where sensitivity to all groups is required. We can also see the importance of $R(\mathcal{F})$ for tie-breaking in these experiments. As discussed in Section 4.2, $\widehat{\mathrm{COMP}} (\mathcal{F},\mathcal{D})$ is unable to distinguish between the different function classes because the training data contains a single example per equivalence class (see Figure 2c).
# A.5 CIFAR10 EXPERIMENTS WITH INFINITE/NON-GROUP TRANSFORMATION SETS
In this section, we test our proposed method on out-of-distribution tasks on CIFAR10 images (Krizhevsky et al., 2009) where the equivalence relations are provided as infinite sets of transformations that may not form a group. We used (a) arbitrary rotation transformations over an image (denoted $\mathcal{T}_{\mathrm{rot}}$ ), and (b) shifting the hue of an image (denoted $\mathcal{T}_{\mathrm{col}}$ ). Note that for a bounded image, arbitrary rotation is not a group due to cropping. Further, transformations from the respective sets commute with each other, and hence, the lumpability condition is satisfied (Definition 2) for the corresponding equivalence relations.
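The commutation claim is easy to check numerically: spatial transformations act only on pixel coordinates while pointwise color maps act only on channel values, so the two always commute. A toy sketch of this, where a 90° rotation and a channel permutation stand in for the actual rotation and hue-shift sets (illustrative choices, not the paper's exact transformations):

```python
import numpy as np

def rotate(img):
    # Spatial transformation: acts only on pixel coordinates.
    return np.rot90(img, k=1, axes=(0, 1))

def shift_color(img):
    # Pointwise color transformation: acts only on channel values
    # (a toy stand-in for a hue shift).
    return img[..., [1, 2, 0]]  # cyclic channel permutation

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))

# The two transformations commute, which is the property the
# lumpability condition (Definition 2) relies on here.
assert np.allclose(rotate(shift_color(img)), shift_color(rotate(img)))
```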
We tested our method on 2 classification tasks: (i) invariant to both sets of transformations (arbitrary rotations and hue shifts), and (ii) invariant to arbitrary rotations, but sensitive to hue shifts. As before, we use the VGG architecture (Simonyan & Zisserman, 2014) for image classification and construct a collection of function classes $\mathcal{F} \coloneqq \{\mathcal{F}_{\mathbb{S}} : \mathbb{S} \subseteq \{\mathrm{rot}, \mathrm{col}\}\}$ corresponding to the various invariant representations. We use data augmentation to construct these invariant representations (this is possible since the lumpability condition holds). For example, $\mathcal{F}_{\{\mathrm{rot}, \mathrm{col}\}}$ refers to CNNs that were trained by
Table 6: Results for different function classes on the CIFAR10 classification task with two sets of transformations (transformations that do not form groups) on images: arbitrary rotations (with cropping due to rotation) and arbitrary hue shifts. The task is invariant to both sets of transformations $(\overline{\mathbb{D}})$ and sensitive to none $(\mathbb{D})$ . $R(\mathcal{F})$ , $\widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ and $S(\mathcal{F},\mathcal{D})$ are as discussed in Section 4.2. Bold values indicate the function class chosen by GES method with the proposed scoring criterion. Test accuracy is computed on the extrapolated dataset after shifting the distribution of $P(\{U_i\}_{i\in \overline{\mathbb{D}}})$ . We see that the $S(\mathcal{F},\mathcal{D})$ loss selects the correct model class in training.
<table><tr><td>Model class</td><td>R(F)</td><td>+ COMP(F, D)</td><td>+ NLL(F, D)</td><td>= S(F, D)</td><td>Train Acc</td><td>Test Acc</td></tr><tr><td>F{}</td><td>3</td><td>27725.875</td><td>17496.615</td><td>45225.490</td><td>85.60</td><td>21.48</td></tr><tr><td>F{col}</td><td>1</td><td>27716.947</td><td>22715.956</td><td>50433.903</td><td>81.28</td><td>21.85</td></tr><tr><td>F{rot}</td><td>1</td><td>-60894.145</td><td>20365.793</td><td>-40527.352</td><td>82.65</td><td>45.12</td></tr><tr><td>F{rot, col}</td><td>0</td><td>-66262.157</td><td>23538.768</td><td>-42723.390</td><td>79.99</td><td>69.35</td></tr></table>
Table 7: Results for different function classes on the CIFAR10 classification task with two sets of transformations (transformations that do not form groups) on images: arbitrary-angle rotations (with cropping due to rotation) and arbitrary hue shifts. The task is invariant to arbitrary rotations of the image ($\bar{\mathbb{D}}$) but sensitive to color ($\mathbb{D}$). $R(\mathcal{F})$ , $\widehat{\mathrm{COMP}}(\mathcal{F},\mathcal{D})$ and $S(\mathcal{F},\mathcal{D})$ are as discussed in Section 4.2. Bold values indicate the function class chosen by the GES method with the proposed scoring criterion. Test accuracy is computed on the extrapolated dataset after shifting the distribution of $P(\{U_i\}_{i\in \bar{\mathbb{D}}})$ . We see that the $S(\mathcal{F},\mathcal{D})$ loss selects the correct model class in training.
<table><tr><td>Model class</td><td>R(F)</td><td>+ COMP(F, D)</td><td>+ NLL(F, D)</td><td>= S(F, D)</td><td>Train Acc</td><td>Test Acc</td></tr><tr><td>F{}</td><td>3</td><td>27724.256</td><td>42166.993</td><td>69894.250</td><td>64.37</td><td>17.16</td></tr><tr><td>F{col}</td><td>1</td><td>27715.023</td><td>49744.680</td><td>77460.703</td><td>42.69</td><td>10.91</td></tr><tr><td>F{rot}</td><td>1</td><td>-91370.533</td><td>46218.086</td><td>-45151.447</td><td>61.77</td><td>52.60</td></tr><tr><td>F{rot,col}</td><td>0</td><td>-92009.184</td><td>50246.908</td><td>-41762.276</td><td>41.45</td><td>35.56</td></tr></table>
augmenting both arbitrarily rotated images and hue-shifted images. Once again, $\mathcal{F}_{\emptyset}$ is the space of functions with no invariance (standard CNN with no data augmentations).
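The paper constructs each invariant class via data augmentation; a closely related construction, orbit averaging, makes the invariance explicit and is easy to verify numerically. A toy sketch (the predictor and the group of 90° rotations are illustrative stand-ins, not the paper's method):

```python
import numpy as np

def orbit_average(predict, x, transforms):
    # Averaging a predictor over a transformation set that is closed
    # under composition yields a predictor invariant to that set.
    return np.mean([predict(t(x)) for t in transforms], axis=0)

# A deliberately non-invariant toy predictor.
predict = lambda img: np.array([img[0, 0], img[0].sum()])
# The four 90-degree rotations form a closed (cyclic) group.
rotations = [lambda im, k=k: np.rot90(im, k) for k in range(4)]

x = np.random.default_rng(1).random((8, 8))
y = orbit_average(predict, x, rotations)
y_rot = orbit_average(predict, np.rot90(x), rotations)
assert np.allclose(y, y_rot)  # invariant under rotation of the input
```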
Results. We show in Tables 6 and 7 that our method is able to find the correct invariance and achieves the best OOD test accuracy whereas the standard CNN with no invariance has poor OOD performance.
# A.6 MORE ON LUMPABILITY (DEFINITION 2)
We show that the lumpability condition of Definition 2 is equivalent to the normal subgroup condition of Mouli & Ribeiro (2021, Theorem 2) when the given equivalence relations are obtained from transformation groups. However, unlike the normal subgroup condition, the lumpability condition applies in the general case when the equivalence relations are not necessarily obtained via transformation groups.
Proposition 1. Let $\sim_{G_1}$ and $\sim_{G_2}$ be two equivalence relations on the input space $\mathcal{X}$ obtained as orbits under transformation groups $G_1$ and $G_2$ respectively, i.e., for $i = 1,2$ , $\pmb{x} \sim_{G_i} \pmb{x}'$ iff there exists $t^{(i)} \in G_i$ with $\pmb{x}' = t^{(i)} \circ \pmb{x}$ . Then, $\sim_{G_1}$ is lumpable with respect to the transformations $G_2$ (Definition 2) if and only if $G_1$ is a normal subgroup of $G_1 \vee G_2$ , where $\vee$ is the join operator.
Proof. First, given $\sim_{G_1}$ is lumpable with respect to $G_2$ , we wish to prove that $G_1$ is a normal subgroup of $G_1 \vee G_2$ . By definition of the join operator on transformation groups, $G_1$ is a subgroup of $G_1 \vee G_2$ .
Next, consider an equivalence class $[x]_{G_1} \in \mathcal{X} / \sim_{G_1}$ . Then, by the lumpability of $\sim_{G_1}$ with respect to $G_2$ , we have that for all $t^{(2)} \in G_2$ , there exists $[x']_{G_1}$ with $x^* \in [x]_{G_1} \Rightarrow t^{(2)} \circ x^* \in [x']_{G_1}$ . In other words, each $t^{(2)}$ maps all points in one equivalence class $[x]_{G_1}$ to another equivalence class
$[\pmb{x}^{\prime}]_{G_1}$ . Specifically, $t^{(2)}$ maps $\pmb{x} \in [\pmb{x}]_{G_1}$ to $t^{(2)} \circ \pmb{x} \in [\pmb{x}^{\prime}]_{G_1}$ . Thus, we can set $\pmb{x}^{\prime} = t^{(2)} \circ \pmb{x}$ without loss of generality.
Then, for all $t^{(2)} \in G_2$ , we have from the lumpability condition that
$$
\boldsymbol{x}^{*} \in [\boldsymbol{x}]_{G_{1}} \Longrightarrow t^{(2)} \circ \boldsymbol{x}^{*} \in \left[t^{(2)} \circ \boldsymbol{x}\right]_{G_{1}}. \tag{9}
$$
Recall from the definition of the equivalence class derived from a transformation group (i.e., the orbit) that $\pmb{x}^{*} \in [\pmb{x}]_{G_{1}}$ means that there exists a transformation $t^{(1)} \in G_{1}$ that maps $\pmb{x}$ to $\pmb{x}^{*}$ , i.e., $\pmb{x}^{*} = t^{(1)} \circ \pmb{x}$ . Similarly, $t^{(2)} \circ \pmb{x}^{*} \in [t^{(2)} \circ \pmb{x}]_{G_{1}}$ means that there exists another transformation $\tilde{t}^{(1)}$ such that $t^{(2)} \circ \pmb{x}^{*} = \tilde{t}^{(1)} \circ t^{(2)} \circ \pmb{x}$ .
Equation (9) then becomes
$$
\exists\, t^{(1)} \in G_{1} \text{ s.t. } \boldsymbol{x}^{*} = t^{(1)} \circ \boldsymbol{x} \;\Rightarrow\; \exists\, \tilde{t}^{(1)} \in G_{1} \text{ s.t. } t^{(2)} \circ \boldsymbol{x}^{*} = \tilde{t}^{(1)} \circ t^{(2)} \circ \boldsymbol{x}, \tag{10}
$$
for all $t^{(2)}\in G_2$.
Since Equation (10) holds for all $\pmb{x}^{*} \in [\pmb{x}]_{G_{1}}$ and for all $\pmb{x} \in \mathcal{X}$ , we have $\forall t^{(2)} \in G_2, \forall t^{(1)} \in G_1, \exists \tilde{t}^{(1)} \in G_1$ such that,
$$
t^{(2)} \circ t^{(1)} = \tilde{t}^{(1)} \circ t^{(2)},
$$
which implies that $G_{1}$ is a normal subgroup of $G_{1} \vee G_{2}$ . The converse can be proved trivially by reversing the steps of the above proof.
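Proposition 1 can be sanity-checked on a small finite group. Here $G_1 = A_3$ (which is normal in $S_3 = G_1 \vee G_2$) and $G_2$ is a transposition subgroup; both are illustrative choices, with permutations represented as tuples:

```python
def compose(p, q):
    # (p ∘ q)(i) = p[q[i]] for permutations given as tuples.
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

# G1 = cyclic subgroup A3, G2 = a transposition subgroup; G1 ∨ G2 = S3.
G1 = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}
G2 = {(0, 1, 2), (1, 0, 2)}

# The condition derived above: for all t2, t1 there exists t1~ in G1 with
# t2 ∘ t1 = t1~ ∘ t2, equivalently t2 ∘ t1 ∘ t2⁻¹ ∈ G1 (normality).
assert all(compose(compose(t2, t1), inverse(t2)) in G1
           for t2 in G2 for t1 in G1)
```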
asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6e272f152a3b1da70638d7ba6e766c147a53d35ce49ab507ad92ba0839be13ae
size 565083
asymmetrylearningforcounterfactuallyinvariantclassificationinoodtasks/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5f5f3bd812216839987fe36b4a792f41ba43b70a4f8ebedf4e20b84fff439752
size 1008502
beitbertpretrainingofimagetransformers/ca973394-5fd9-479e-8af4-ce04c9509247_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7a4b6471d9bdae45a5901fe98c5e30068bae0ad7dc13b0ae7303768fa8ff78cf
size 101171
beitbertpretrainingofimagetransformers/ca973394-5fd9-479e-8af4-ce04c9509247_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0560217af38ab18bd8865281c37a624a752b3453b7ad28ebf9f446b62cc36e90
size 121043
beitbertpretrainingofimagetransformers/ca973394-5fd9-479e-8af4-ce04c9509247_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:21c5145443c8f13f0832d32e56806ecfe4c9350c7bbeede58fb439a5a8ac1729
size 856952
beitbertpretrainingofimagetransformers/full.md
ADDED
@@ -0,0 +1,338 @@
# BEiT: BERT PRE-TRAINING OF IMAGE TRANSFORMERS
Hangbo Bao†, Li Dong‡, Songhao Piao†, Furu Wei‡
† Harbin Institute of Technology
$\ddagger$ Microsoft Research
https://github.com/microsoft/unilm
# ABSTRACT
We introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation from Image Transformers. Following BERT (Devlin et al., 2019) developed in the natural language processing area, we propose a masked image modeling task to pretrain vision Transformers. Specifically, each image has two views in our pre-training, i.e., image patches (such as $16 \times 16$ pixels), and visual tokens (i.e., discrete tokens). We first "tokenize" the original image into visual tokens. Then we randomly mask some image patches and feed them into the backbone Transformer. The pre-training objective is to recover the original visual tokens based on the corrupted image patches. After pre-training BEiT, we directly fine-tune the model parameters on downstream tasks by appending task layers upon the pretrained encoder. Experimental results on image classification and semantic segmentation show that our model achieves competitive results with previous pre-training methods.
# 1 INTRODUCTION
Transformer (Vaswani et al., 2017) has achieved promising performance in computer vision (Dosovitskiy et al., 2020; Touvron et al., 2020). However, empirical studies show that vision Transformers require more training data than convolutional neural networks. In order to solve the data-hungry issue (Liu et al., 2021a), self-supervised pre-training is a promising solution to leverage large-scale image data. Several strands of methods have been explored for vision Transformers, such as contrastive learning (Chen et al., 2021; Xie et al., 2021), and self-distillation (Caron et al., 2021).
Concurrently, BERT (Devlin et al., 2019) has achieved great success in natural language processing. Its masked language modeling task first randomly masks some proportion of tokens within a text, and then recovers the masked tokens based on the Transformer encoding results of the corrupted text. Motivated by BERT, we turn to the denoising auto-encoding idea to pretrain vision Transformers, which has not been well studied by the vision community. It is challenging to directly apply BERT-style pre-training to image data. First of all, there is no pre-existing vocabulary for the vision Transformer's input unit, i.e., image patches. So we cannot simply employ a softmax classifier to predict over all possible candidates for masked patches. In contrast, the language vocabulary, such as words and BPE (Sennrich et al., 2016), is well-defined and eases auto-encoding prediction. A straightforward alternative is regarding the task as a regression problem, which predicts the raw pixels of masked patches. However, such a pixel-level recovery task tends to waste modeling capability on short-range dependencies and high-frequency details (Ramesh et al., 2021). Our goal is to overcome the above issues for pre-training of vision Transformers.
In this work, we introduce a self-supervised vision representation model BEiT, which stands for Bidirectional Encoder representation from Image Transformers. Inspired by BERT, we propose a pre-training task, namely, masked image modeling (MIM). As shown in Figure 1, MIM uses two views for each image, i.e., image patches and visual tokens. We split the image into a grid of patches that are the input representation of the backbone Transformer. Moreover, we "tokenize" the image into discrete visual tokens, which are obtained as the latent codes of a discrete VAE (Ramesh et al., 2021).
Figure 1: Overview of BEiT pre-training. Before pre-training, we learn an "image tokenizer" via autoencoding-style reconstruction, where an image is tokenized into discrete visual tokens according to the learned vocabulary. During pre-training, each image has two views, i.e., image patches, and visual tokens. We randomly mask some proportion of image patches (gray patches in the figure) and replace them with a special mask embedding [M]. Then the patches are fed to a backbone vision Transformer. The pre-training task aims at predicting the visual tokens of the original image based on the encoding vectors of the corrupted image.
During pre-training, we randomly mask some proportion of image patches, and feed the corrupted input to Transformer. The model learns to recover the visual tokens of the original image, instead of the raw pixels of masked patches.
We perform self-supervised learning and then fine-tune the pretrained BEiT on two downstream tasks, i.e., image classification, and semantic segmentation. Experimental results indicate that BEiT outperforms both from-scratch training and previous strong self-supervised models. Moreover, BEiT is complementary to supervised pre-training. Performance of BEiT can be further improved by intermediate fine-tuning with ImageNet labels. Ablation studies show that our proposed techniques are critical to the effectiveness of BERT-style pre-training for image data. Apart from performance, the improvements of convergence speed and stability of fine-tuning reduce training costs on end tasks. In addition, we demonstrate that self-supervised BEiT can learn reasonable semantic regions via pre-training, unleashing the rich supervision signals contained in images.
Our contributions are summarized as follows:
- We propose a masked image modeling task to pretrain vision Transformers in a self-supervised manner. We also provide a theoretical explanation from the perspective of variational autoencoder.
- We pretrain BEiT and conduct extensive fine-tuning experiments on downstream tasks, such as image classification, and semantic segmentation.
- We show that the self-attention mechanism of self-supervised BEiT learns to distinguish semantic regions and object boundaries, without using any human annotation.
# 2 METHODS
Given an input image $x$ , BEiT encodes it to contextualized vector representations. As shown in Figure 1, BEiT is pretrained by the masked image modeling (MIM) task in a self-supervised learning manner. MIM aims at recovering the masked image patches based on encoding vectors. For
downstream tasks (such as image classification, and semantic segmentation), we append task layers upon pretrained BEiT and fine-tune the parameters on the specific datasets.
# 2.1 IMAGE REPRESENTATIONS
Images have two views of representations in our method, namely, image patches and visual tokens. The two types serve as input and output representations during pre-training, respectively.
# 2.1.1 IMAGE PATCH
The 2D image is split into a sequence of patches (Dosovitskiy et al., 2020), so that a standard Transformer can directly accept image data. Formally, we reshape the image $\pmb{x} \in \mathbb{R}^{H \times W \times C}$ into $N = HW / P^2$ patches $\pmb{x}^p \in \mathbb{R}^{N \times (P^2C)}$ , where $C$ is the number of channels, $(H, W)$ is the input image resolution, and $(P, P)$ is the resolution of each patch. The image patches $\{\pmb{x}_i^p\}_{i=1}^N$ are flattened into vectors and are linearly projected, which is similar to word embeddings in BERT (Devlin et al., 2019). Image patches preserve raw pixels and are used as input features in BEiT.
In our experiments, we split each $224 \times 224$ image into a $14 \times 14$ grid of image patches, where each patch is $16 \times 16$ .
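The patch splitting above can be written concretely (a NumPy sketch for illustration, not the paper's implementation), matching $N = HW / P^2 = 196$ patches of dimension $P^2 C = 768$:

```python
import numpy as np

# Split a 224×224×3 image into a 14×14 grid of 16×16 patches
# and flatten each patch into a vector.
H = W = 224; P = 16; C = 3
x = np.random.default_rng(0).random((H, W, C))

patches = (x.reshape(H // P, P, W // P, P, C)   # (14, 16, 14, 16, 3)
            .transpose(0, 2, 1, 3, 4)           # (14, 14, 16, 16, 3)
            .reshape(-1, P * P * C))            # (196, 768)
assert patches.shape == (196, 768)
```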
# 2.1.2 VISUAL TOKEN
Similar to natural language, we represent the image as a sequence of discrete tokens obtained by an "image tokenizer", instead of raw pixels. Specifically, we tokenize the image $\boldsymbol{x} \in \mathbb{R}^{H \times W \times C}$ into $z = [z_1, \ldots, z_N] \in \mathcal{V}^{h \times w}$ , where the vocabulary $\mathcal{V} = \{1, \ldots, |\mathcal{V}|\}$ contains discrete token indices.
Following (Ramesh et al., 2021), we use the image tokenizer learned by discrete variational autoencoder (dVAE). There are two modules during visual token learning, namely, tokenizer and decoder. The tokenizer $q_{\phi}(\boldsymbol{z}|\boldsymbol{x})$ maps image pixels $\boldsymbol{x}$ into discrete tokens $\boldsymbol{z}$ according to a visual codebook (i.e., vocabulary). The decoder $p_{\psi}(\boldsymbol{x}|\boldsymbol{z})$ learns to reconstruct the input image $\boldsymbol{x}$ based on the visual tokens $\boldsymbol{z}$ . The reconstruction objective can be written as $\mathbb{E}_{\boldsymbol{z}\sim q_{\phi}(\boldsymbol{z}|\boldsymbol{x})}[\log p_{\psi}(\boldsymbol{x}|\boldsymbol{z})]$ . Because the latent visual tokens are discrete, the model training is non-differentiable. Gumbel-softmax relaxation (Jang et al., 2017; Maddison et al., 2017) is employed to train the model parameters. Moreover, a uniform prior is put on $q_{\phi}$ during dVAE training. Refer to (Ramesh et al., 2021) for more training details of the image tokenizer.
We tokenize each image to a $14 \times 14$ grid of visual tokens. Notice the number of visual tokens and the number of image patches for one image are the same. The vocabulary size is set to $|\mathcal{V}| = 8192$ . In our work, we directly use the publicly available<sup>1</sup> image tokenizer described in (Ramesh et al., 2021). We also compare it with a re-implemented tokenizer in Appendix C.
# 2.2 BACKBONE NETWORK: IMAGE TRANSFORMER
Following ViT (Dosovitskiy et al., 2020), we use the standard Transformer (Vaswani et al., 2017) as the backbone network. So the results can be directly compared with previous work in terms of the network architecture.
The input of the Transformer is a sequence of image patches $\{\pmb{x}_i^p\}_{i=1}^N$ . The patches are then linearly projected to obtain patch embeddings $\pmb{E}\pmb{x}_i^p$ , where $\pmb{E} \in \mathbb{R}^{(P^2C) \times D}$ . Moreover, we prepend a special token [S] to the input sequence. We also add standard learnable 1D position embeddings $\pmb{E}_{pos} \in \mathbb{R}^{N \times D}$ to patch embeddings. The input vectors $\pmb{H}_0 = [\pmb{e}_{[S]}, \pmb{E}\pmb{x}_1^p, \dots, \pmb{E}\pmb{x}_N^p] + \pmb{E}_{pos}$ are fed into the Transformer. The encoder contains $L$ layers of Transformer blocks $\pmb{H}^l = \mathrm{Transformer}(\pmb{H}^{l-1})$ , where $l = 1, \dots, L$ . The output vectors of the last layer $\pmb{H}^L = [\pmb{h}_{[S]}^L, \pmb{h}_1^L, \dots, \pmb{h}_N^L]$ are used as the encoded representations for the image patches, where $\pmb{h}_i^L$ is the vector of the $i$ -th image patch.
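The input construction above can be sketched in NumPy (random weights stand in for learned parameters; we take the position embeddings to have length $N{+}1$ so they also cover the prepended [S] token, a detail the formula leaves implicit):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, C, D = 196, 16, 3, 768   # patches, patch size, channels, hidden dim

x_p = rng.random((N, P * P * C))          # flattened image patches
E = rng.normal(0, 0.02, (P * P * C, D))   # patch embedding matrix
e_s = rng.normal(0, 0.02, (1, D))         # special [S] token embedding
E_pos = rng.normal(0, 0.02, (N + 1, D))   # learnable 1D position embeddings

# H_0 = [e_[S]; E x_1^p; ...; E x_N^p] + E_pos, the Transformer input.
H0 = np.concatenate([e_s, x_p @ E], axis=0) + E_pos
assert H0.shape == (N + 1, D)
```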
# 2.3 PRE-TRAINING BEIT: MASKED IMAGE MODELING
We propose a masked image modeling (MIM) task. We randomly mask some percentage of image patches, and then predict the visual tokens corresponding to the masked patches.
Figure 1 shows the overview of our method. As presented in Section 2.1, given an input image $\pmb{x}$ , we split it into $N$ image patches $(\{\pmb{x}_i^p\}_{i=1}^N)$ , and tokenize it to $N$ visual tokens $(\{z_i\}_{i=1}^N)$ . We randomly mask approximately $40\%$ image patches, where the masked positions are denoted as $\mathcal{M} \in \{1, \dots, N\}^{0.4N}$ . Next we replace the masked patches with a learnable embedding $e_{[\mathbb{M}]} \in \mathbb{R}^D$ . The corrupted image patches $x^{\mathcal{M}} = \{x_i^p : i \notin \mathcal{M}\}_{i=1}^N \cup \{e_{[\mathbb{M}]} : i \in \mathcal{M}\}_{i=1}^N$ are then fed into the $L$ -layer Transformer as described in Section 2.2. The final hidden vectors $\{h_i^L\}_{i=1}^N$ are regarded as encoded representations of the input patches. For each masked position $\{h_i^L : i \in \mathcal{M}\}_{i=1}^N$ , we use a softmax classifier to predict the corresponding visual tokens $p_{\mathrm{MIM}}(z'|x^{\mathcal{M}}) = \mathrm{softmax}_{z'}(W_c h_i^L + b_c)$ where $x^{\mathcal{M}}$ is the corrupted image, $W_c \in \mathbb{R}^{|\mathcal{V}| \times D}$ , and $b_c \in \mathbb{R}^{|\mathcal{V}|}$ . The pre-training objective is to maximize the log-likelihood of the correct visual tokens $z_i$ given the corrupted image:
$$
\max \sum_{x \in \mathcal{D}} \mathbb{E}_{\mathcal{M}}\left[\sum_{i \in \mathcal{M}} \log p_{\mathrm{MIM}}\left(z_{i} \mid x^{\mathcal{M}}\right)\right] \tag{1}
$$
where $\mathcal{D}$ is the training corpus, $\mathcal{M}$ represents randomly masked positions, and $x^{\mathcal{M}}$ is the corrupted image that is masked according to $\mathcal{M}$ .
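Per image, the inner term of Eq. (1) is a softmax cross-entropy over the visual-token vocabulary, evaluated only at masked positions. A minimal NumPy sketch (random logits and tokens stand in for model outputs; `mim_loss` is a hypothetical helper, not part of the paper's code):

```python
import numpy as np

def mim_loss(logits, targets, masked):
    # Negative log-likelihood of the true visual tokens at the
    # masked positions only (per-image version of Eq. (1)).
    # logits: (N, |V|) scores W_c h_i^L + b_c; targets: (N,) token ids.
    log_probs = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
    return -sum(log_probs[i, targets[i]] for i in masked) / len(masked)

rng = np.random.default_rng(0)
N, V = 196, 8192
logits = rng.normal(size=(N, V))
targets = rng.integers(0, V, size=N)
masked = rng.choice(N, size=79, replace=False)  # ≈ 40% of patches

loss = mim_loss(logits, targets, masked)
assert loss > 0  # cross-entropy is positive for a random classifier
```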
Rather than randomly choosing patches for the masked positions $\mathcal{M}$ , we employ blockwise masking in our work. As summarized in Algorithm 1, a block of image patches is masked each time. For each block, we set the minimum number of patches to 16. Then we randomly choose an aspect ratio for the masking block. We repeat the above two steps until obtaining enough masked patches, i.e., $0.4N$ , where $N$ is the total number of image patches, and 0.4 is masking ratio.
Algorithm 1 Blockwise Masking

Input: $N$ ($h \times w$) image patches
Output: Masked positions $\mathcal{M}$

$\mathcal{M} \gets \{\}$
repeat
  $s \gets \mathrm{Rand}(16, 0.4N - |\mathcal{M}|)$ ▷ Block size
  $r \gets \mathrm{Rand}(0.3, \frac{1}{0.3})$ ▷ Aspect ratio of block
  $a \gets \sqrt{s \cdot r}$; $b \gets \sqrt{s / r}$
  $t \gets \mathrm{Rand}(0, h - a)$; $l \gets \mathrm{Rand}(0, w - b)$
  $\mathcal{M} \gets \mathcal{M} \cup \{(i, j) : i \in [t, t + a), j \in [l, l + b)\}$
until $|\mathcal{M}| > 0.4N$ ▷ Masking ratio is $40\%$
return $\mathcal{M}$
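A direct Python transcription of Algorithm 1, as a sketch: `Rand(a, b)` is read as uniform sampling, block dimensions are rounded to integers, and degenerate blocks are resampled.

```python
import math
import random

def blockwise_mask(h=14, w=14, ratio=0.4, min_patches=16):
    """Sample masked positions M following Algorithm 1."""
    n = h * w
    masked = set()
    while len(masked) <= ratio * n:                 # until |M| > 0.4N
        s = random.uniform(min_patches, ratio * n - len(masked))  # block size
        r = random.uniform(0.3, 1 / 0.3)            # aspect ratio of block
        a = round(math.sqrt(s * r))                 # block height
        b = round(math.sqrt(s / r))                 # block width
        if not (1 <= a <= h and 1 <= b <= w):
            continue                                # resample degenerate blocks
        t = random.randint(0, h - a)                # top-left corner
        l = random.randint(0, w - b)
        masked |= {(i, j) for i in range(t, t + a) for j in range(l, l + b)}
    return masked

random.seed(0)
m = blockwise_mask()
```

Because blocks may overlap, the final count can slightly exceed $0.4N$; Section 2.5 additionally caps the number of masked patches at 75 for the $14 \times 14$ grid.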
The MIM task is greatly inspired by masked language modeling (Devlin et al., 2019), one of the most successful pre-training objectives in natural language processing. Moreover, blockwise (or n-gram) masking is also widely applied in BERT-like models (Joshi et al., 2020; Bao et al., 2020; Raffel et al., 2020). However, directly using pixel-level auto-encoding (i.e., recovering the pixels of masked patches) for vision pre-training pushes the model to focus on short-range dependencies and high-frequency details (Ramesh et al., 2021). BEiT overcomes this issue by predicting discrete visual tokens, which summarize low-level details into high-level abstractions. Ablation studies in Section 3.3 show that our proposed method significantly outperforms pixel-level auto-encoding.

# 2.4 FROM THE PERSPECTIVE OF VARIATIONAL AUTOENCODER

The BEiT pre-training can be viewed as variational autoencoder (Kingma & Welling, 2014) training. Let $x$ denote the original image, $\tilde{x}$ the masked image, and $z$ the visual tokens. Consider the evidence lower bound (ELBO) of the log-likelihood $\log p(x|\tilde{x})$, i.e., of recovering the original image from its corrupted version:
$$
\sum_{(x_i, \tilde{x}_i) \in \mathcal{D}} \log p\left(x_i \mid \tilde{x}_i\right) \geq \sum_{(x_i, \tilde{x}_i) \in \mathcal{D}} \left( \underbrace{\mathbb{E}_{z_i \sim q_{\phi}(\mathbf{z} \mid x_i)}\left[ \log p_{\psi}\left(x_i \mid z_i\right) \right]}_{\text{Visual Token Reconstruction}} - D_{\mathrm{KL}}\left[ q_{\phi}(\mathbf{z} \mid x_i), p_{\theta}(\mathbf{z} \mid \tilde{x}_i) \right] \right) \tag{2}
$$

where (1) $q_{\phi}(z|x)$ denotes the image tokenizer that obtains visual tokens; (2) $p_{\psi}(x|z)$ decodes the original image given input visual tokens; (3) $p_{\theta}(z|\tilde{x})$ recovers the visual tokens based on the masked image, which is our MIM pre-training task.

We learn the model following a two-stage procedure similar to van den Oord et al. (2017) and Razavi et al. (2019). In the first stage, we obtain the image tokenizer as a discrete variational autoencoder (Ramesh et al., 2021). Specifically, the first stage minimizes the reconstruction loss
$-\mathbb{E}_{z_i \sim q_\phi(\mathbf{z} | x_i)}[\log p_\psi(x_i | z_i)]$ with a uniform prior, as described in Equation (2). In the second stage, we learn the prior $p_\theta$ while keeping $q_\phi$ and $p_\psi$ fixed. We simplify $q_\phi(\mathbf{z} | x_i)$ to a one-point distribution with the most likely visual tokens $\hat{z}_i = \arg \max_z q_\phi(z | x_i)$. Then Equation (2) can be rewritten as:
$$
\sum_{(x_i, \tilde{x}_i) \in \mathcal{D}} \left( \underbrace{\mathbb{E}_{z_i \sim q_{\phi}(\mathbf{z} \mid x_i)}\left[ \log p_{\psi}\left(x_i \mid z_i\right) \right]}_{\text{Stage 1: Visual Token Reconstruction}} + \underbrace{\log p_{\theta}\left(\hat{z}_i \mid \tilde{x}_i\right)}_{\text{Stage 2: Masked Image Modeling}} \right) \tag{3}
$$
where the second term is our BEiT pre-training objective.
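To make explicit why the KL term of Equation (2) turns into the masked image modeling term of Equation (3): once $q_{\phi}(\mathbf{z} \mid x_i)$ is simplified to the one-point distribution at $\hat{z}_i$, the divergence collapses to a single log-probability, because the entropy of a one-point distribution is zero:

$$
D_{\mathrm{KL}}\left[ q_{\phi}(\mathbf{z} \mid x_i), p_{\theta}(\mathbf{z} \mid \tilde{x}_i) \right] = \sum_{z} q_{\phi}(z \mid x_i) \log \frac{q_{\phi}(z \mid x_i)}{p_{\theta}(z \mid \tilde{x}_i)} = -\log p_{\theta}(\hat{z}_i \mid \tilde{x}_i)
$$

so minimizing the KL term is exactly maximizing $\log p_{\theta}(\hat{z}_i \mid \tilde{x}_i)$, i.e., the MIM objective.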

# 2.5 PRE-TRAINING SETUP

The network architecture of BEiT follows that of ViT-Base (Dosovitskiy et al., 2020) for a fair comparison. We use a 12-layer Transformer with a hidden size of 768 and 12 attention heads. The intermediate size of the feed-forward networks is 3072. We employ the default $16 \times 16$ input patch size. We directly borrow the image tokenizer trained by Ramesh et al. (2021), whose visual-token vocabulary size is 8192.
We pretrain BEiT on the training set of ImageNet-1K (Russakovsky et al., 2015), which contains about $1.2\mathrm{M}$ images. Our augmentation policy includes random resized cropping, horizontal flipping, and color jittering (Wu et al., 2018). Notice that we do not use the labels for self-supervised learning. We use the $224 \times 224$ resolution in our experiments, so the input is split into $14 \times 14$ image patches and the same number of visual tokens. We randomly mask at most 75 patches (i.e., roughly $40\%$ of the total image patches).
The pre-training runs for about 500k steps (i.e., 800 epochs) with a batch size of 2k. Adam (Loshchilov & Hutter, 2019) with $\beta_{1} = 0.9$ and $\beta_{2} = 0.999$ is employed for optimization. The learning rate is set to 1.5e-3, with a warmup of 10 epochs and cosine learning rate decay. The weight decay is 0.05. We employ stochastic depth (Huang et al., 2016) with a 0.1 rate, and disable dropout. The 500k training steps take about five days using 16 Nvidia Tesla V100 32GB GPU cards.
We find that proper initialization is important to stabilize the Transformer, especially for large-scale pre-training. We first randomly initialize all the parameters within a small range, such as $[-0.02, 0.02]$. Then, for the $l$-th Transformer layer, we rescale the output matrices (i.e., the last linear projection within each sub-layer) of the self-attention module and the feed-forward network by $\frac{1}{\sqrt{2l}}$.
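A NumPy sketch of this initialization recipe; the layer dictionary and matrix names are illustrative stand-ins for a real ViT implementation.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
L, D = 12, 768

def small_init(shape):
    # Step 1: initialize within a small range such as [-0.02, 0.02]
    # (a clipped normal is used here as a stand-in).
    return np.clip(rng.normal(0.0, 0.02, shape), -0.02, 0.02)

# Each layer's "output matrices": the last linear projection of the
# self-attention module and of the feed-forward network.
layers = [{"attn_out": small_init((D, D)),
           "ffn_out": small_init((D, 4 * D))} for _ in range(L)]

# Step 2: rescale the output matrices of the l-th layer (1-indexed)
# by 1/sqrt(2l), so deeper residual branches contribute less at init.
for l, layer in enumerate(layers, start=1):
    layer["attn_out"] /= math.sqrt(2.0 * l)
    layer["ffn_out"] /= math.sqrt(2.0 * l)
```

The $2l$ factor counts the residual sub-layers (attention and FFN) that feed the residual stream up to depth $l$, keeping its magnitude roughly constant with depth.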
# 2.6 FINE-TUNING BEIT ON DOWNSTREAM VISION TASKS

After pre-training BEiT, we append a task layer upon the Transformer, and fine-tune the parameters on downstream tasks, like BERT. We take image classification and semantic segmentation as examples in our work. It is straightforward to leverage the pre-training-then-fine-tuning paradigm on other vision tasks with BEiT.

Image classification. For image classification tasks, we directly employ a simple linear classifier as the task layer. Specifically, we use average pooling to aggregate the representations, and feed the global representation to a softmax classifier. The category probabilities are computed as $\mathrm{softmax}(\mathrm{avg}(\{h_i^L\}_{i=1}^N W_c))$, where $h_i^L$ is the final encoding vector of the $i$-th image patch, $W_c \in \mathbb{R}^{D \times C}$ is a parameter matrix, and $C$ is the number of labels. We maximize the likelihood of labeled data by updating the parameters of BEiT and the softmax classifier.
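The head can be sketched in a few lines of NumPy; the toy dimensions are hypothetical, and `h` stands for the final patch encodings of a single image.

```python
import numpy as np

def classify(h, W_c):
    """Category probabilities: softmax(avg({h_i W_c})).
    h: (N, D) final patch encodings; W_c: (D, C) classifier weights."""
    logits = (h @ W_c).mean(axis=0)        # project each patch, then average-pool
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
N, D, C = 196, 8, 10                       # patches, hidden size, labels
probs = classify(rng.normal(size=(N, D)), rng.normal(size=(D, C)))
```

By linearity, averaging the projected patches is equivalent to projecting the average-pooled representation, so the two readings of the formula coincide.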
Semantic segmentation. For semantic segmentation, we follow the task layer used in SETR-PUP (Zheng et al., 2020). To be specific, we use pretrained BEiT as a backbone encoder, and incorporate several deconvolution layers as the decoder to produce the segmentation map. The model is also fine-tuned end-to-end, similar to image classification.
Intermediate fine-tuning. After self-supervised pre-training, we can further train BEiT on a data-rich intermediate dataset (i.e., ImageNet-1K in our work), and then fine-tune the model on the target downstream tasks. Such intermediate fine-tuning is a common practice of BERT fine-tuning in NLP (Pruksachatkun et al., 2020), and we directly follow it for BEiT.

# 3 EXPERIMENTS

We conduct full fine-tuning experiments on image classification and semantic segmentation. Moreover, we present various ablation studies for pre-training and analyze the representations learned by BEiT. We also report linear probes on ImageNet in Appendix D.

# 3.1 IMAGE CLASSIFICATION
The image classification task classifies input images into various categories. We evaluate BEiT on the ILSVRC-2012 ImageNet dataset (Russakovsky et al., 2015) with 1k classes and 1.3M images. We directly follow most of the hyperparameters of DeiT (Touvron et al., 2020) in our fine-tuning experiments for a fair comparison. We reduce the number of fine-tuning epochs compared with training from scratch, as BEiT has been pre-trained. Accordingly, we use a larger learning rate with layer-wise decay. The detailed hyperparameters are summarized in Appendix H.
Table 1 reports the top-1 accuracy on image classification. We compare BEiT with vision Transformers trained by random initialization, supervised pre-training, and previous self-supervised learning methods. All the compared models are base-size, except that iGPT has 1.36B parameters. Pre-training is conducted on ImageNet for comparison, except that ViT-JFT300M is pretrained on Google's in-house 300M images.
Compared with the models trained by random initialization, we find that pre-trained BEiT significantly improves performance on both datasets. In particular, BEiT improves the performance on ImageNet, which shows its effectiveness even under the rich-resource setting.
Moreover, we compare BEiT with previous state-of-the-art self-supervised methods for Transformers, such as DINO (Caron et al., 2021) and MoCo v3 (Chen et al., 2021). Our proposed method outperforms previous models on ImageNet fine-tuning. Among them, iGPT-1.36B (Chen et al., 2020a) uses many more parameters (i.e., 1.36B vs 86M), and ViT-JFT300M (Dosovitskiy et al., 2020) is pretrained on a larger corpus (i.e., 300M vs 1.3M images), while the others pretrain ViT-Base on ImageNet-1K. iGPT-1.36B and ViT-JFT300M are the most comparable methods, as both also follow auto-encoding pre-training for vision Transformers. Specifically, iGPT uses clustered image tokens as both input and output for image GPT or image BERT. In contrast, we use image patches as input to preserve raw pixels, and employ discrete visual tokens as a prediction bottleneck. ViT-JFT300M predicts the 3-bit mean color of each masked patch, rather than visual tokens learned by a discrete VAE. We also pretrain the self-supervised tasks of BEiT and DINO in a multi-task learning manner, which is presented in Appendix E.

In addition, we evaluate our proposed method with intermediate fine-tuning. In other words, we first pretrain BEiT in a self-supervised manner, and then fine-tune the pretrained model on ImageNet with labeled data. The results show that BEiT is complementary to supervised pre-training, achieving additional gain after intermediate fine-tuning on ImageNet.

Fine-tuning to $384 \times 384$ resolution. After fine-tuning with resolution $224 \times 224$, we additionally fine-tune the model on $384 \times 384$ images for 10 more epochs. We follow the standard higher-resolution setting of DeiT (Touvron et al., 2020), except using fewer epochs. Notice that we keep the patch size the same for both $224 \times 224$ and $384 \times 384$ images, so the input sequence length of the Transformer becomes longer for higher resolutions. Table 1 shows that higher resolution improves the BEiT results by $1+$ points on ImageNet. More importantly, $\mathrm{BEiT}_{384}$ pretrained on ImageNet-1K even outperforms supervised pre-training $\mathrm{ViT}_{384}$ that uses ImageNet-22K, when they use the same input resolution.
Scaling up to larger size. We further scale up BEiT to the large size (same as ViT-L). As shown in Table 1, $\mathrm{ViT}_{384}$-L is worse than $\mathrm{ViT}_{384}$ on ImageNet when training from scratch. The results verify the data-hungry issue of vision Transformers. Supervised pre-training on ImageNet-22K partially relieves the issue, where $\mathrm{ViT}_{384}$-L finally outperforms $\mathrm{ViT}_{384}$ by 1.2. In comparison, BEiT-L is better than BEiT by 2.0, and $\mathrm{BEiT}_{384}$-L outperforms $\mathrm{BEiT}_{384}$ by 1.7. In other words, the benefits of scaling up BEiT from base to large are greater than those of supervised pre-training with ImageNet-22K. More importantly, comparing $\mathrm{BEiT}_{384}$ with $\mathrm{ViT}_{384}$ that conducts supervised pre-training on ImageNet-22K, the improvements of BEiT become greater along with scaling the size from base
<table><tr><td>Models</td><td>Model Size</td><td>Resolution</td><td>ImageNet</td></tr><tr><td colspan="4">Training from scratch (i.e., random initialization)</td></tr><tr><td>ViT384-B (Dosovitskiy et al., 2020)</td><td>86M</td><td>3842</td><td>77.9</td></tr><tr><td>ViT384-L (Dosovitskiy et al., 2020)</td><td>307M</td><td>3842</td><td>76.5</td></tr><tr><td>DeiT-B (Touvron et al., 2020)</td><td>86M</td><td>2242</td><td>81.8</td></tr><tr><td>DeiT384-B (Touvron et al., 2020)</td><td>86M</td><td>3842</td><td>83.1</td></tr><tr><td colspan="4">Supervised Pre-Training on ImageNet-22K (using labeled data)</td></tr><tr><td>ViT384-B (Dosovitskiy et al., 2020)</td><td>86M</td><td>3842</td><td>84.0</td></tr><tr><td>ViT384-L (Dosovitskiy et al., 2020)</td><td>307M</td><td>3842</td><td>85.2</td></tr><tr><td colspan="4">Self-Supervised Pre-Training on ImageNet-1K (without labeled data)</td></tr><tr><td>iGPT-1.36B† (Chen et al., 2020a)</td><td>1.36B</td><td>2242</td><td>66.5</td></tr><tr><td>ViT384-B-JFT300M‡ (Dosovitskiy et al., 2020)</td><td>86M</td><td>3842</td><td>79.9</td></tr><tr><td>MoCo v3-B (Chen et al., 2021)</td><td>86M</td><td>2242</td><td>83.2</td></tr><tr><td>MoCo v3-L (Chen et al., 2021)</td><td>307M</td><td>2242</td><td>84.1</td></tr><tr><td>DINO-B (Caron et al., 2021)</td><td>86M</td><td>2242</td><td>82.8</td></tr><tr><td>BEiT-B (ours)</td><td>86M</td><td>2242</td><td>83.2</td></tr><tr><td>BEiT384-B (ours)</td><td>86M</td><td>3842</td><td>84.6</td></tr><tr><td>BEiT-L (ours)</td><td>307M</td><td>2242</td><td>85.2</td></tr><tr><td>BEiT384-L (ours)</td><td>307M</td><td>3842</td><td>86.3</td></tr></table>
Figure 2: Convergence curves of training DeiT from scratch and fine-tuning BEiT on ImageNet-1K.
Table 1: Top-1 accuracy on ImageNet-1K. We evaluate base- ("-B") and large-size ("-L") models at resolutions ${224} \times {224}$ and ${384} \times {384}$ . †: iGPT-1.36B contains 1.36 billion parameters, while others are base-size models. ‡: ViT ${}_{384}$ -B-JFT300M is pretrained with the "masked patch prediction" task on Google's in-house 300M images, while others use ImageNet.
<table><tr><td>Models</td><td>ADE20K</td></tr><tr><td>Supervised Pre-Training on ImageNet</td><td>45.3</td></tr><tr><td>DINO (Caron et al., 2021)</td><td>44.1</td></tr><tr><td>BEiT (ours)</td><td>45.6</td></tr><tr><td>BEiT + Intermediate Fine-Tuning (ours)</td><td>47.7</td></tr></table>
Table 3: Results of semantic segmentation on ADE20K. We use SETR-PUP (Zheng et al., 2020) as the task layer and report results of single-scale inference.
(i.e., 0.6) to large (i.e., 1.1). The results suggest that BEiT tends to help more for extremely large models (such as 1B or 10B parameters), especially when labeled data are insufficient to conduct supervised pre-training for such large models.
Convergence curves. Figure 2 compares the convergence curves of the training-from-scratch and pre-training-then-fine-tuning paradigms. We find that fine-tuning BEiT not only achieves better performance, but also converges much faster than training DeiT from scratch. Moreover, fine-tuning BEiT can reach reasonable numbers within very few epochs.
<table><tr><td>Models</td><td>ImageNet</td><td>ADE20K</td></tr><tr><td>BEiT (300 Epochs)</td><td>82.86</td><td>44.65</td></tr><tr><td>- Blockwise masking</td><td>82.77</td><td>42.93</td></tr><tr><td>- Visual tokens (i.e., recover masked pixels)</td><td>81.04</td><td>41.38</td></tr><tr><td>- Visual tokens - Blockwise masking</td><td>80.50</td><td>37.09</td></tr><tr><td>+ Recover 100% visual tokens</td><td>82.59</td><td>40.93</td></tr><tr><td>- Masking + Recover 100% visual tokens</td><td>81.67</td><td>36.73</td></tr><tr><td>Pretrain longer (800 epochs)</td><td>83.19</td><td>45.58</td></tr></table>
Table 4: Ablation studies for BEiT pre-training on image classification and semantic segmentation.
# 3.2 SEMANTIC SEGMENTATION
Semantic segmentation aims to predict a corresponding class for each pixel of the input image. We evaluate BEiT on the ADE20K benchmark (Zhou et al., 2019) with 25K images and 150 semantic categories. We report the mean Intersection over Union (mIoU) averaged over all semantic categories. As presented in Section 2.6, we directly follow the task layer and most of the hyperparameters described in SETR-PUP (Zheng et al., 2020). On ADE20K, we use Adam (Loshchilov & Hutter, 2019) as the optimizer. The learning rate is set to 1e-3 with layer-wise decay, similar to image classification. We conduct fine-tuning for 160K steps with a batch size of 16. The detailed hyperparameters are described in Appendix I.
As shown in Table 3, we compare BEiT with supervised pre-training that relies on labeled data of ImageNet. We find that our proposed method achieves better performance than supervised pretraining, although BEiT does not require manual annotations for pre-training. Moreover, we employ intermediate fine-tuning for BEiT on ImageNet, i.e., we first fine-tune pretrained BEiT on ImageNet, and then fine-tune the model on ADE20K. The results indicate that intermediate fine-tuning further improves BEiT on semantic segmentation.

# 3.3 ABLATION STUDIES

We conduct ablation studies to analyze the contribution of each component in BEiT. The models are evaluated on image classification (i.e., ImageNet) and semantic segmentation (i.e., ADE20K). We set the default pre-training schedule to 300 epochs for the ablation studies, which is $37.5\%$ of the schedule used in the previous experiments.
Table 4 reports the results of various model variants. First, we ablate blockwise masking by randomly sampling masked positions. We find that blockwise masking is beneficial on both tasks, especially on semantic segmentation. Second, we ablate the usage of visual tokens by predicting the raw pixels of masked patches, i.e., the pre-training task becomes a pixel regression problem to recover the masked patches. Our proposed masked image modeling task significantly outperforms naive pixel-level auto-encoding. Compared with the results in Table 1, this ablation is even worse than training a vision Transformer from scratch on both tasks. The results indicate that the prediction of visual tokens is the key ingredient of BEiT. Third, we ablate the usage of visual tokens and blockwise masking together. We find that blockwise masking is even more helpful for pixel-level auto-encoding, as it alleviates the bias toward short-range dependencies. Fourth, recovering all the visual tokens (rather than only the masked ones) harms performance on downstream tasks. Fifth, we compare BEiT with different training steps: pre-training the model longer further improves performance on downstream tasks.

# 3.4 ANALYSIS OF SELF-ATTENTION MAP

We show that the self-attention mechanism in BEiT can separate objects, even though our pre-training does not rely on any manual annotation at all. Similar properties are also observed by Caron et al. (2021). The probing images are taken from the MS COCO (Lin et al., 2014) corpus so that they do not appear in the pre-training data.
Figure 3: Self-attention map for different reference points. The self-attention mechanism in BEiT is able to separate objects, although self-supervised pre-training does not use manual annotations.

As shown in Figure 3, we plot the self-attention map for different reference points within an image. The visualizations are produced by the attention scores computed via query-key products in the last layer. For each reference point, we use the corresponding patch as the query, and show which patches it attends to. After pre-training, BEiT learns to distinguish semantic regions using self-attention heads, without any task-specific supervision. This property partially explains why BEiT helps downstream tasks: the knowledge acquired during pre-training potentially improves the generalization ability of fine-tuned models, especially on small-scale datasets.
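A toy sketch of this probing procedure, with random stand-in query/key matrices; a real probe would take `Q` and `K` from the last self-attention layer of the pretrained model.

```python
import numpy as np

def reference_attention(Q, K, ref, grid=14):
    """Softmax of the scaled query-key product from one reference patch
    to every patch, reshaped back onto the patch grid.
    Q, K: (N, d) per-patch query/key vectors; ref: reference patch index."""
    scores = Q[ref] @ K.T / np.sqrt(Q.shape[-1])   # (N,) attention logits
    e = np.exp(scores - scores.max())
    attn = e / e.sum()                             # attention distribution
    return attn.reshape(grid, grid)                # map back to the 14x14 grid

rng = np.random.default_rng(0)
N, d = 196, 8
amap = reference_attention(rng.normal(size=(N, d)),
                           rng.normal(size=(N, d)), ref=0)
```

Upsampling the resulting grid to the image resolution and overlaying it on the input yields attention maps like those in the figure.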

# 4 RELATED WORK

Self-supervised visual representation learning. Various methods have been introduced over the years to pretrain vision models in a self-supervised manner. Pioneering works design clever pretext tasks, such as predicting patch orderings (Noroozi & Favaro, 2016), colorization (Zhang et al., 2016), and predicting rotation angles (Komodakis & Gidaris, 2018). In addition, Trinh et al. (2019) propose to mask some patches within an image, and classify whether each masked patch is real or fake; the method is similar to a masked version of Jigsaw pre-training (Noroozi & Favaro, 2016). The recent strand of research follows the contrastive paradigm (Wu et al., 2018; Oord et al., 2018; Hjelm et al., 2019; Bachman et al., 2019; He et al., 2020; Chen et al., 2020b;c). These models typically regard various data augmentations as different views of an image, and then make the representations of positive pairs similar while pushing negative pairs away. In order to obtain enough informative negative samples, contrastive methods usually rely on large memory banks (Wu et al., 2018; He et al., 2020) or large batch sizes (Chen et al., 2020b). BYOL (Grill et al., 2020) and SimSiam (Chen & He, 2020) further eliminate the requirement of negative samples, using various techniques to avoid representation collapse. Another strand of methods uses clustering to organize image examples (Caron et al., 2018; Asano et al., 2020; Caron et al., 2020; Li et al., 2021).
Self-supervised vision Transformers. Pre-training vision Transformers has received significant attention recently due to the data-hungry issue. iGPT (Chen et al., 2020a) first creates a 9-bit color palette by k-means clustering of RGB pixels, and then uses the clustered tokens to represent images. Next, iGPT uses the tasks of BERT and GPT to pretrain Transformers. In comparison, our proposed method uses image patches as input without losing pixel-level information. Moreover, our visual tokens are obtained by a discrete VAE instead of clustering. ViT (Dosovitskiy et al., 2020) conducts a preliminary exploration with the masked patch prediction task, which predicts the 3-bit mean color of the masked patches. Dosovitskiy et al. (2020) also report that pixel-level auto-encoding performs
worse, although it is the most straightforward translation of BERT from NLP to CV. Rather than using heuristically designed pre-training tasks, our proposed model leverages visual tokens learned by a discrete VAE, which not only achieves better performance but also is better motivated theoretically. Apart from masked auto-encoding, other mainstream research works use contrastive learning (Chen et al., 2021; Xie et al., 2021) and self-distillation (Caron et al., 2021). In comparison, BEiT achieves several-fold improvements in pre-training throughput (Appendix E) and memory consumption. These advantages make BEiT appealing for scaling up vision Transformers.

# 5 CONCLUSION

We introduce a self-supervised pre-training framework for vision Transformers, achieving strong fine-tuning results on downstream tasks, such as image classification and semantic segmentation. We show that the proposed method is critical to making BERT-like pre-training (i.e., auto-encoding with masked input) work well for image Transformers. We also present the intriguing property of automatically acquired knowledge about semantic regions, without using any human-annotated data. In the future, we would like to scale up BEiT pre-training in terms of data size and model size. Moreover, we will conduct multimodal pre-training in a more unified way, using similar objectives and a shared architecture for texts and images.

# REFERENCES

Yuki M. Asano, Christian Rupprecht, and Andrea Vedaldi. Self-labelling via simultaneous clustering and representation learning. In International Conference on Learning Representations (ICLR), 2020.
Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Jianfeng Gao, Songhao Piao, Ming Zhou, and Hsiao-Wuen Hon. UniLMv2: Pseudo-masked language models for unified language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, volume 119 of Proceedings of Machine Learning Research, pp. 642-652. PMLR, 2020. URL http://proceedings.mlr.press/v119/bao20a.html.
Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 132-149, 2018.
Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. In Advances in Neural Information Processing Systems, volume 33, pp. 9912-9924. Curran Associates, Inc., 2020.
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. arXiv preprint arXiv:2104.14294, 2021.
Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In Hal Daumé III and Aarti Singh (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 1691-1703. PMLR, 13-18 Jul 2020a. URL http://proceedings.mlr.press/v119/chen20s.html.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. preprint arXiv:2002.05709, 2020b.
Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. preprint arXiv:2011.10566, 2020.
Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. preprint arXiv:2003.04297, 2020c.
Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. ArXiv, abs/2104.02057, 2021.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 4171–4186. Association for Computational Linguistics, 2019.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. preprint arXiv:2010.11929, 2020.
Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. Bootstrap your own latent: A new approach to self-supervised learning. In NeurIPS, 2020.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, 2020.
R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bklr3j0cKX.
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q. Weinberger. Deep networks with stochastic depth. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling (eds.), Computer Vision - ECCV 2016, pp. 646-661, Cham, 2016. Springer International Publishing. ISBN 978-3-319-46493-0.
Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net, 2017. URL https://openreview.net/forum?id=rkE3y85ee.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77, 2020. doi: 10.1162/tacl_a_00300. URL https://www.aclweb.org/anthology/2020.tacl-1.5.
Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, 2014.
Nikos Komodakis and Spyros Gidaris. Unsupervised representation learning by predicting image rotations. In International Conference on Learning Representations (ICLR), 2018.
A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.
Junnan Li, Pan Zhou, Caiming Xiong, and Steven Hoi. Prototypical contrastive learning of unsupervised representations. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=KmykpuSrjcq.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In European conference on computer vision, pp. 740-755. Springer, 2014.
Yahui Liu, Enver Sangineto, Wei Bi, Nicu Sebe, Bruno Lepri, and Marco De Nadai. Efficient training of visual transformers with small datasets. In Thirty-Fifth Conference on Neural Information Processing Systems, 2021a. URL https://openreview.net/forum?id=SCN8UaetXx.
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. arXiv preprint arXiv:2103.14030, 2021b.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.
Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables. In International Conference on Learning Representations, 2017.
Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European conference on computer vision, pp. 69-84. Springer, 2016.
|
| 236 |
+
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. preprint arXiv:1807.03748, 2018.
|
| 237 |
+
Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bowman. Intermediate-task transfer learning with pretrained language models: When and why does it work? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, July 2020.
|
| 238 |
+
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21:140:1-140:67, 2020. URL http://jmlr.org/papers/v21/20-074.html.
|
| 239 |
+
A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. ArXiv, abs/2102.12092, 2021.
|
| 240 |
+
Ali Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with VQ-VAE-2. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
|
| 241 |
+
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. IJCV, 2015.
|
| 242 |
+
Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715-1725, Berlin, Germany, August 2016. Association for Computational Linguistics. doi: 10.18653/v1/P16-1162. URL https://www.aclweb.org/anthology/P16-1162.
|
| 243 |
+
Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. preprint arXiv:2012.12877, 2020.
|
| 244 |
+
Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, and Hervé Jégou. Going deeper with image transformers. arXiv preprint arXiv:2103.17239, 2021.
|
| 245 |
+
Trieu H Trinh, Minh-Thang Luong, and Quoc V Le. Selfie: Self-supervised pretraining for image embedding. arXiv preprint arXiv:1906.02940, 2019.
|
| 246 |
+
Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pp. 6309-6318, Red Hook, NY, USA, 2017. Curran Associates Inc. ISBN 9781510860964.
|
| 247 |
+
|
| 248 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 5998-6008, 2017.
|
| 249 |
+
Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In CVPR, 2018.
|
| 250 |
+
Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. Unified perceptual parsing for scene understanding. In ECCV, 2018.
|
| 251 |
+
Zhenda Xie, Yutong Lin, Zhuliang Yao, Zheng Zhang, Qi Dai, Yue Cao, and Han Hu. Self-supervised learning with swin transformers. arXiv preprint arXiv:2105.04553, 2021.
|
| 252 |
+
Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. Scaling vision transformers. arXiv preprint arXiv:2106.04560, 2021.
|
| 253 |
+
Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In ECCV, 2016.
|
| 254 |
+
Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip H. S. Torr, and Li Zhang. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers. CoRR, abs/2012.15840, 2020. URL https://arxiv.org/abs/2012.15840.
|
| 255 |
+
Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understanding of scenes through the ADE20K dataset. Int. J. Comput. Vis., 127(3): 302-321, 2019. doi: 10.1007/s11263-018-1140-0. URL https://doi.org/10.1007/s11263-018-1140-0.
|
| 256 |
+
|
| 257 |
+
# A ARCHITECTURE VARIANTS OF VISION TRANSFORMER
We use the standard vision Transformer (ViT; Dosovitskiy et al. 2020) in the experiments for fair comparisons. In addition, we find that LayerScale (Touvron et al., 2021) and relative position bias (Bao et al., 2020; Raffel et al., 2020) improve ViTs on downstream tasks. We employ the same setting as in Section 3.3 for ablation studies, which pretrains base-size models for 300 epochs on ImageNet-1K.
As shown in Table 5, both LayerScale and relative position bias improve performance on ImageNet classification and ADE20K semantic segmentation. We denote the improved architecture as $\mathrm{BEiT}^+$ and use it for the experiments in Appendix B. We empirically observe that the vanilla Transformer is the most stable when scaling the model up to billions of parameters, so we do not use LayerScale for extra-large models.
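For reference, LayerScale multiplies each residual branch's output by a small learnable per-channel vector before the residual addition (Touvron et al., 2021). A minimal sketch with toy dimensions (the function name and values here are ours, not the paper's implementation):

```python
def layer_scale_residual(x, sublayer_out, gamma):
    """Residual connection with LayerScale: the sublayer output is
    rescaled per-channel by a small learnable vector gamma before
    being added back to the input."""
    return [xi + gi * si for xi, gi, si in zip(x, gamma, sublayer_out)]

# gamma is initialized near zero (e.g. 1e-4), so each block starts
# close to an identity mapping, which stabilizes very deep ViTs.
x = [1.0, 1.0, 1.0, 1.0]
sublayer_out = [10.0, -10.0, 10.0, -10.0]
gamma = [1e-4] * 4
print(layer_scale_residual(x, sublayer_out, gamma))  # stays close to x
```

At initialization the block is nearly the identity; gamma is then learned jointly with the rest of the network.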
<table><tr><td>Architecture</td><td>ImageNet</td><td>ADE20K</td></tr><tr><td>ViT (used in this paper)</td><td>82.86</td><td>44.86</td></tr><tr><td>ViT+LayerScale</td><td>83.00</td><td>45.43</td></tr><tr><td>ViT+LayerScale+Relative Position Bias</td><td>83.22</td><td>45.70</td></tr></table>
# B COMPARISON WITH LARGE-SCALE SUPERVISED PRE-TRAINING
We compare with state-of-the-art supervised pre-training at scale. In addition to using ImageNet-1K for fair comparisons with previous work, we pretrain BEiT on ImageNet-22K to boost performance. We employ the architecture improvements (i.e., LayerScale and relative position bias) described in Appendix A, denoted as $\mathrm{BEiT}^+$ in Table 6 and Table 7. We follow the same pre-training setup as in Section 2.5, except that we pretrain for 150 epochs on ImageNet-22K. After self-supervised pre-training, we conduct intermediate fine-tuning on ImageNet-22K for 90 epochs. Moreover, we use an in-house dataset of about 70M labeled images as a drop-in replacement for ImageNet-22K.
Table 5: Ablation studies of architecture variants on image classification and semantic segmentation. For ADE20K, we use UperNet (Xiao et al., 2018) as the task layer, and report mIoU scores of single-scale inference.
<table><tr><td>Models</td><td>Model Size</td><td>Labeled Data Size</td><td>ImageNet 384²</td><td>512²</td></tr><tr><td colspan="5">Supervised Pre-Training on ImageNet-22K (using labeled data)</td></tr><tr><td>ViT-B (Dosovitskiy et al., 2020)</td><td>86M</td><td>14M</td><td>84.0</td><td>-</td></tr><tr><td>ViT-L (Dosovitskiy et al., 2020)</td><td>307M</td><td>14M</td><td>85.2</td><td>85.30</td></tr><tr><td>ViT-H (Dosovitskiy et al., 2020)</td><td>632M</td><td>14M</td><td>85.1</td><td>-</td></tr><tr><td colspan="5">Supervised Pre-Training on Google JFT-300M (using labeled data)</td></tr><tr><td>ViT-B (Dosovitskiy et al., 2020)</td><td>86M</td><td>300M</td><td>84.2</td><td>-</td></tr><tr><td>ViT-L (Dosovitskiy et al., 2020)</td><td>307M</td><td>300M</td><td>87.1</td><td>87.76</td></tr><tr><td>ViT-H (Dosovitskiy et al., 2020)</td><td>632M</td><td>300M</td><td>88.0</td><td>88.55</td></tr><tr><td colspan="5">Supervised Pre-Training on Google JFT-3B (using labeled data)</td></tr><tr><td>ViT-B (Zhai et al., 2021)</td><td>86M</td><td>3000M</td><td>86.6</td><td>-</td></tr><tr><td>ViT-L (Zhai et al., 2021)</td><td>307M</td><td>3000M</td><td>88.5</td><td>-</td></tr><tr><td colspan="5">Self-Supervised Pre-Training, and Intermediate Fine-Tuning on ImageNet-22K</td></tr><tr><td>BEiT-B+ (ours)</td><td>86M</td><td>14M</td><td>86.8</td><td>-</td></tr><tr><td>BEiT-L+ (ours)</td><td>307M</td><td>14M</td><td>88.4</td><td>88.6</td></tr><tr><td colspan="5">Self-Supervised Pre-Training, and Intermediate Fine-Tuning on In-House-70M</td></tr><tr><td>BEiT-L+ (ours)</td><td>307M</td><td>70M</td><td>89.3</td><td>89.5</td></tr></table>
Table 6: Top-1 accuracy on ImageNet-1K fine-tuning. We evaluate models at resolutions $384^{2}$ and $512^{2}$ .
Table 6 compares BEiT with previous state-of-the-art supervised pre-training (Dosovitskiy et al., 2020; Zhai et al., 2021) on ImageNet fine-tuning. Rather than relying heavily on extremely large labeled datasets (such as Google's in-house JFT-300M and JFT-3B), we demonstrate that BEiT pre-training can catch up using only ImageNet-22K (14M images). Specifically, BEiT-L fine-tuned on ImageNet-22K achieves performance comparable to ViT-L trained on Google JFT-3B. Moreover, BEiT-L obtains $89.5\%$ top-1 accuracy on ImageNet after intermediate fine-tuning on an in-house 70M dataset. The results indicate that BEiT pre-training greatly reduces the required labeling effort and sets a new state of the art for large-size vision Transformers.
As shown in Table 7, we report the fine-tuning results on the ADE20K semantic segmentation benchmark. Following Swin (Liu et al., 2021b), we use the same task layer (i.e., UperNet; Xiao et al. 2018) and evaluate the models at resolution $640 \times 640$. The BEiT-L model obtains state-of-the-art performance on ADE20K.
<table><tr><td>Models</td><td>mIoU (%)</td><td>Multi-Scale mIoU (%)</td></tr><tr><td colspan="3">Supervised Pre-Training on ImageNet-22K (using labeled data)</td></tr><tr><td>Swin-B (Liu et al., 2021b)</td><td>50.0</td><td>51.7</td></tr><tr><td>Swin-L (Liu et al., 2021b)</td><td>52.1</td><td>53.5</td></tr><tr><td colspan="3">Self-Supervised Pre-Training, and Intermediate Fine-Tuning on ImageNet-22K</td></tr><tr><td>BEiT-B+ (ours)</td><td>53.6</td><td>54.2</td></tr><tr><td>BEiT-L+ (ours)</td><td>56.7</td><td>57.0</td></tr><tr><td colspan="3">Self-Supervised Pre-Training, and Intermediate Fine-Tuning on In-House-70M</td></tr><tr><td>BEiT-L+ (ours)</td><td>57.9</td><td>58.4</td></tr></table>
# C ABLATION STUDIES OF IMAGE TOKENIZER
For comparison, we re-train the image tokenizer on ImageNet-1K. The reimplementation is based on https://github.com/lucidrains/DALLE-pytorch. We use the same codebook size 8K as in DALL-E (Ramesh et al., 2021). Then we plug the tokenizer into our pre-training process. We follow the same experimental setup of ablation studies as in Section 3.3. Table 8 shows that our reimplemented tokenizer obtains comparable reconstruction loss and ImageNet fine-tuning performance compared with the off-the-shelf DALL-E tokenizer.
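The tokenizer's job during pre-training is to map each patch to a discrete visual token id from the codebook. This can be illustrated with a toy nearest-codebook lookup; it is a simplification of the dVAE encoder, and the function name, tiny codebook, and 2-d features below are ours:

```python
def tokenize_patches(patch_features, codebook):
    """Toy stand-in for the image tokenizer: map each patch feature
    vector to the id of its nearest codebook entry (squared L2)."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: sq_dist(p, codebook[k]))
            for p in patch_features]

codebook = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]  # toy 3-entry codebook; the real one has 8192 entries
patches = [[0.1, -0.1], [0.9, 1.2], [1.8, 2.1]]
print(tokenize_patches(patches, codebook))  # [0, 1, 2]
```

The resulting token ids serve as the prediction targets for masked patches during pre-training.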
Table 7: Performance comparison on the ADE20K semantic segmentation. We follow Swin-L (Liu et al., 2021b) to use UperNet (Xiao et al., 2018) as the task layer and evaluate at resolution $640 \times 640$ .
<table><tr><td>Image Tokenizer</td><td>Reconstruction Error</td><td>ImageNet</td></tr><tr><td>DALL-E Tokenizer (Ramesh et al., 2021)</td><td>0.0856</td><td>82.86</td></tr><tr><td>Our reimplementation</td><td>0.0880</td><td>82.70</td></tr></table>
Table 8: Top-1 accuracy on ImageNet-1K using different image tokenizers during pre-training. For image reconstruction, we report mean absolute error of normalized RGB values. The reimplemented image tokenizer is trained on ImageNet-1K without labels.
# D LINEAR PROBES ON IMAGENET
We evaluate linear probes on ImageNet for various pretrained vision Transformers. We compare BEiT with two main strands of work, namely discriminative and generative self-supervised learning. The first applies discriminative learning for pre-training, such as contrastive learning (Chen et al., 2021) and self-distillation (Caron et al., 2021). These methods typically learn to aggregate image-level features into a global vector, which is relatively well suited to linear probing. In contrast, the second strand, such as iGPT (Chen et al., 2020a) and our method, usually does not pretrain such global feature aggregation, which tends to make linear probing more difficult.
Following iGPT (Chen et al., 2020a), we use average pooling to aggregate the hidden states of the image patches, and add the probing layer at an intermediate layer of the Transformer instead of always at the final layer. We find that the best layer is the 9th for BEiT-B and the 14th for BEiT-L. Specifically, we use AdamW (Loshchilov & Hutter, 2019) to update the linear probe layer for 50 epochs. The learning rate is 4e-3 with cosine decay, the batch size is 1024, and the weight decay is 1e-4. We follow the data augmentation used in DINO (Caron et al., 2021): random resized crops and horizontal flips during training, and evaluation on central crops.
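The probing setup above (average-pool the frozen per-patch hidden states, then apply a single trainable linear layer) can be sketched in plain Python; the shapes and names here are toy values of ours:

```python
def linear_probe_logits(patch_states, weights, bias):
    """Average-pool per-patch hidden states of a frozen encoder,
    then apply the single trainable linear layer of the probe."""
    n = len(patch_states)
    pooled = [sum(col) / n for col in zip(*patch_states)]  # mean over patches
    # weights is hidden_dim x num_classes; iterate over its columns
    return [sum(p * w for p, w in zip(pooled, col)) + b
            for col, b in zip(zip(*weights), bias)]

# 3 patches with 2-dimensional hidden states, probed into 2 classes
patch_states = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]
weights = [[1.0, 0.0], [0.0, 1.0]]
bias = [0.0, 0.0]
print(linear_probe_logits(patch_states, weights, bias))  # [1.0, 1.0]
```

Only `weights` and `bias` are updated during probing; the encoder producing `patch_states` stays frozen.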
<table><tr><td>Models</td><td>Model Size</td><td>Accuracy</td></tr><tr><td colspan="3">Discriminative self-supervised learning</td></tr><tr><td>DINO-B (Caron et al., 2021)</td><td>86M</td><td>78.2</td></tr><tr><td>MoCo v3-B (Chen et al., 2021)</td><td>86M</td><td>76.7</td></tr><tr><td>MoCo v3-L (Chen et al., 2021)</td><td>307M</td><td>77.6</td></tr><tr><td colspan="3">Generative self-supervised learning</td></tr><tr><td>iGPT-L (Chen et al., 2020a)</td><td>1362M</td><td>65.2</td></tr><tr><td>iGPT-XL (Chen et al., 2020a)</td><td>6801M</td><td>68.7</td></tr><tr><td>iGPT-XL (Chen et al., 2020a)</td><td>6801M</td><td>72.0*</td></tr><tr><td>BEiT-B (ours)</td><td>86M</td><td>56.7</td></tr><tr><td>BEiT-L (ours)</td><td>307M</td><td>73.5</td></tr></table>
As shown in Table 9, we evaluate linear probes on ImageNet-1K for self-supervised learning. Overall, discriminative methods perform better than generative pre-training on linear probing. Linear probes keep the Transformer parameters fixed and only update the linear layer, so the pre-trained global aggregation of image-level features benefits linear probing in DINO and MoCo v3, although full fine-tuning eliminates the gap. Moreover, the results indicate that increasing the model size from base (86M) to large (307M) significantly improves accuracy for our proposed method. In contrast, the gap between base- and large-size MoCo v3 is smaller. We also find that BEiT outperforms iGPT by a large margin while using far fewer parameters.
# E MULTI-TASK PRE-TRAINING WITH DINO
We train the pre-training tasks of BEiT and DINO (Caron et al., 2021) together in a multi-task manner. As shown in Table 10, augmenting masked image modeling with DINO improves semantic segmentation on ADE20K and obtains comparable results on ImageNet classification. Moreover, BEiT is more efficient in terms of pre-training speed, as DINO keeps two copies of the Transformer parameters for self-distillation and multi-crop augmentation (Caron et al., 2020). For the throughput comparison between BEiT and BEiT+DINO, we use the same batch size. Because BEiT is also more memory-efficient, we can use a larger batch size to fully utilize the GPUs, which yields a greater speedup in practice than the reported numbers.
Table 9: Linear probing accuracy on ImageNet. " $*$ " denotes that iGPT-XL uses concatenation of five layers for linear probing, while others use the features of single layer.
<table><tr><td>Models</td><td>ImageNet</td><td>ADE20K</td><td>Pre-Training Throughput</td></tr><tr><td>DINO (400 Epochs)</td><td>82.8</td><td>44.08</td><td>-</td></tr><tr><td>BEiT (300 Epochs)</td><td>82.9</td><td>44.65</td><td>4.2x</td></tr><tr><td>BEiT + DINO (300 Epochs)</td><td>82.9</td><td>46.85</td><td>1.0x</td></tr></table>
Table 10: We train the pre-training tasks of BEiT and DINO (Caron et al., 2021) in the way of multi-task learning. We report the performance by fine-tuning on ImageNet-1K image classification and ADE20K semantic segmentation. For ADE20K, we use SETR-PUP (Zheng et al., 2020) as the task layer and report the mIoU score of single-scale inference. The pre-training throughput measures the speed, where larger numbers indicate faster pre-training.
# F IMAGE CLASSIFICATION ON CIFAR-100
In addition to ImageNet classification, we conduct fine-tuning experiments on the CIFAR-100 (Krizhevsky & Hinton, 2009) benchmark with 100 classes and 60k images. The experimental setup is the same as in Section 3.1.
Table 11 reports the top-1 accuracy on CIFAR-100. Notably, on the smaller CIFAR-100 dataset, ViT trained from scratch only reaches $48.5\%$ accuracy (Chen et al., 2021). In comparison, BEiT achieves $90.1\%$ with the help of pre-training. The results indicate that BEiT can greatly reduce the required annotation effort. BEiT also outperforms MoCo v3. Moreover, intermediate fine-tuning on ImageNet-1K further improves the results on CIFAR-100.
<table><tr><td>Models</td><td>CIFAR-100</td></tr><tr><td colspan="2">Training from scratch (i.e., random initialization)</td></tr><tr><td>ViT384 (Dosovitskiy et al., 2020)</td><td>48.5*</td></tr><tr><td colspan="2">Supervised Pre-Training on ImageNet-1K (using labeled data)</td></tr><tr><td>ViT384 (Dosovitskiy et al., 2020)</td><td>87.1</td></tr><tr><td>DeiT (Touvron et al., 2020)</td><td>90.8</td></tr><tr><td colspan="2">Self-Supervised Pre-Training on ImageNet-1K (without labeled data)</td></tr><tr><td>DINO (Caron et al., 2021)</td><td>91.7</td></tr><tr><td>MoCo v3 (Chen et al., 2021)</td><td>87.1</td></tr><tr><td>BEiT (ours)</td><td>90.1</td></tr><tr><td colspan="2">Self-Supervised Pre-Training, and Intermediate Fine-Tuning on ImageNet-1K</td></tr><tr><td>BEiT (ours)</td><td>91.8</td></tr></table>
# G HYPERPARAMETERS FOR PRE-TRAINING
Table 11: Top-1 accuracy of image classification on CIFAR-100. The models are at resolution $224 \times 224$ , except $\mathrm{ViT}_{384}$ uses $384 \times 384$ . The results, unless otherwise indicated, are all obtained by base-size models. \*: result is taken from (Chen et al., 2021).
<table><tr><td>Hyperparameters</td><td>Base Size</td><td>Large Size</td></tr><tr><td>Layers</td><td>12</td><td>24</td></tr><tr><td>Hidden size</td><td>768</td><td>1024</td></tr><tr><td>FFN inner hidden size</td><td>3072</td><td>4096</td></tr><tr><td>Attention heads</td><td>12</td><td>16</td></tr><tr><td>Attention head size</td><td></td><td>64</td></tr><tr><td>Patch size</td><td colspan="2">16 × 16</td></tr><tr><td>Training epochs</td><td colspan="2">800</td></tr><tr><td>Batch size</td><td colspan="2">2048</td></tr><tr><td>Adam ε</td><td colspan="2">1e-8</td></tr><tr><td>Adam β</td><td colspan="2">(0.9, 0.999)</td></tr><tr><td>Peak learning rate</td><td colspan="2">1.5e-3</td></tr><tr><td>Minimal learning rate</td><td colspan="2">1e-5</td></tr><tr><td>Learning rate schedule</td><td colspan="2">Cosine</td></tr><tr><td>Warmup epochs</td><td colspan="2">10</td></tr><tr><td>Gradient clipping</td><td>3.0</td><td>1.0</td></tr><tr><td>Dropout</td><td colspan="2">x</td></tr><tr><td>Stoch. depth</td><td colspan="2">0.1</td></tr><tr><td>Weight decay</td><td colspan="2">0.05</td></tr><tr><td>Data Augment</td><td colspan="2">RandomResizeAndCrop</td></tr><tr><td>Input resolution</td><td colspan="2">224 × 224</td></tr><tr><td>Color jitter</td><td colspan="2">0.4</td></tr></table>
Table 12: Hyperparameters for pre-training BEiT on ImageNet-1K.
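A sketch of the warmup-plus-cosine learning rate schedule implied by Table 12 (peak 1.5e-3, minimal rate 1e-5, 10 warmup epochs, 800 epochs total); the epoch-level granularity and function name are our simplifications:

```python
import math

def lr_at_epoch(epoch, peak=1.5e-3, minimum=1e-5, warmup=10, total=800):
    """Linear warmup to the peak rate, then cosine decay down to the
    minimal rate, matching the pre-training settings in Table 12."""
    if epoch < warmup:
        return peak * (epoch + 1) / warmup          # linear warmup
    progress = (epoch - warmup) / (total - warmup)  # 0 -> 1 after warmup
    return minimum + 0.5 * (peak - minimum) * (1 + math.cos(math.pi * progress))

print(lr_at_epoch(9))    # end of warmup: the peak learning rate
print(lr_at_epoch(799))  # end of training: close to the minimal rate
```

Real training schedules typically step per iteration rather than per epoch; the shape of the curve is the same.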
# H HYPERPARAMETERS FOR IMAGE CLASSIFICATION FINE-TUNING
<table><tr><td>Hyperparameters</td><td>CIFAR-100 Base Size</td><td>ImageNet-1K Base Size</td><td>Large Size</td></tr><tr><td>Peak learning rate</td><td colspan="3">{2e-3, 3e-3, 4e-3, 5e-3}</td></tr><tr><td>Fine-tuning epochs</td><td>150</td><td>100</td><td>50</td></tr><tr><td>Batch size</td><td>512</td><td>1024</td><td>1024</td></tr><tr><td>Warmup epochs</td><td>20</td><td>20</td><td>5</td></tr><tr><td>Layer-wise learning rate decay</td><td>0.65</td><td>0.65</td><td>0.75</td></tr><tr><td>Adam ε</td><td colspan="3">1e-8</td></tr><tr><td>Adam β</td><td colspan="3">(0.9, 0.999)</td></tr><tr><td>Minimal learning rate</td><td colspan="3">1e-6</td></tr><tr><td>Learning rate schedule</td><td colspan="3">Cosine</td></tr><tr><td>Repeated Aug</td><td>✓</td><td>✓</td><td>×</td></tr><tr><td>Weight decay</td><td>0.3</td><td>0.05</td><td>0.05</td></tr><tr><td>Label smoothing ε</td><td colspan="3">0.1</td></tr><tr><td>Stoch. depth</td><td colspan="3">0.1</td></tr><tr><td>Dropout</td><td colspan="3">×</td></tr><tr><td>Gradient clipping</td><td colspan="3">×</td></tr><tr><td>Erasing prob.</td><td>×</td><td>0.25</td><td>0.25</td></tr><tr><td>Input resolution</td><td colspan="3">224 × 224</td></tr><tr><td>Rand Augment</td><td colspan="3">9/0.5</td></tr><tr><td>Mixup prob.</td><td colspan="3">0.8</td></tr><tr><td>Cutmix prob.</td><td colspan="3">1.0</td></tr></table>
Table 13: Hyperparameters for fine-tuning BEiT on ImageNet-1K and CIFAR-100.
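The layer-wise learning rate decay listed in Table 13 scales the learning rate of earlier Transformer layers down geometrically, so low-level pretrained features change more slowly than the top layers. A minimal sketch (the function name is ours):

```python
def layerwise_lrs(peak_lr, num_layers, decay):
    """Layer-wise learning rate decay: the last (top) layer uses the
    peak rate, and each earlier layer is scaled by another factor of
    `decay`."""
    return [peak_lr * decay ** (num_layers - 1 - i) for i in range(num_layers)]

# Base-size fine-tuning: 12 layers with decay 0.65
rates = layerwise_lrs(4e-3, 12, 0.65)
print(rates[-1])             # top layer trains at the full peak rate
print(rates[0] / rates[-1])  # bottom layer is scaled by 0.65**11
```

In practice each layer's parameters go into their own optimizer parameter group with the corresponding rate.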
# I HYPERPARAMETERS FOR ADE20K SEMANTIC SEGMENTATION FINE-TUNING
<table><tr><td>Hyperparameters</td><td>Base Size</td></tr><tr><td>Peak learning rate</td><td>1e-3</td></tr><tr><td>Fine-tuning steps</td><td>160K</td></tr><tr><td>Batch size</td><td>16</td></tr><tr><td>Adam ε</td><td>1e-8</td></tr><tr><td>Adam β</td><td>(0.9, 0.999)</td></tr><tr><td>Layer-wise learning rate decay</td><td>0.65</td></tr><tr><td>Minimal learning rate</td><td>0</td></tr><tr><td>Learning rate schedule</td><td>Linear</td></tr><tr><td>Warmup steps</td><td>1500</td></tr><tr><td>Dropout</td><td>x</td></tr><tr><td>Stoch. depth</td><td>0.1</td></tr><tr><td>Weight decay</td><td>0.05</td></tr><tr><td>Input resolution</td><td>512 × 512</td></tr><tr><td>Position embedding interpolate</td><td>bilinear</td></tr></table>
Table 14: Hyperparameters for fine-tuning BEiT on ADE20K.
beitbertpretrainingofimagetransformers/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0f0c34b69b4922a276fd6e271290fb5fb0bff9a9eba72c93201d066b466973c6
size 863510

beitbertpretrainingofimagetransformers/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2560c5d57e79a6a82eb77567799651d94963170f10103ea8bae988ab18ffc269
size 479521

betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/1625e013-94b7-45bc-b557-732b97354c2e_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f8ec5fa6167700be32fce3a0bed653d9853a526afceb546b722eb0e7bb1cc5fb
size 226730

betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/1625e013-94b7-45bc-b557-732b97354c2e_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:90e86b7d24733f1662d4eebf8e68d46cc3b301f8085413a06e3452301dc440c4
size 265742

betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/1625e013-94b7-45bc-b557-732b97354c2e_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf98d858bbc75802393b3d76fdf70e7e99a1acbd956c53eb591b96512562b99b
size 6576334

betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/full.md
ADDED
The diff for this file is too large to render. See raw diff.

betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a25c51abd042b11d0c48d45b2f360f5f0299a14f4b044e60529e354d7aa21b2c
size 1152116

betaintactvaeidentifyingandestimatingcausaleffectsunderlimitedoverlap/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9ef5b8fdfcdc5c18545ab69ef8fbcfa654550e032a19fb36d788c34d6bdd3572
size 1642158

bootstrappedmetalearning/de1b972e-09c6-4fcd-8bde-fab396a261dd_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8b8a56f5291e7616c7e8699cdf8458827b9f8de7cca79d9195c5c3f94d8d90c0
size 225068

bootstrappedmetalearning/de1b972e-09c6-4fcd-8bde-fab396a261dd_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cd6117484900e957d2194a4bb1148c1edcb581ba53224786fa6bcba2dd9b1c58
size 258683

bootstrappedmetalearning/de1b972e-09c6-4fcd-8bde-fab396a261dd_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c63992aab215d14d33a0ed7b22a48e9d68cd80a177723e477f61ef8a36793c34
size 7797691

bootstrappedmetalearning/full.md
ADDED
The diff for this file is too large to render. See raw diff.

bootstrappedmetalearning/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:47a73eec9551329e20004215efde006d4c67281809def5c04d01ab892429049c
size 2675595

bootstrappedmetalearning/layout.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a4d38d190ceae7c40a059566be8bd32a5196f1d7575d877060f6310bed23db0b
size 1334765

comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/44a9f630-b762-4301-8f83-a41fcf2d0c4b_content_list.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bd76f1700fbc7df7b7e2349705f662dd69891786da848e20a83d58285f17c8aa
size 151060

comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/44a9f630-b762-4301-8f83-a41fcf2d0c4b_model.json
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4fecaf272a4aa4064a203abd815498916cc32d32666eeaa89d0e02fded0d4414
size 173169

comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/44a9f630-b762-4301-8f83-a41fcf2d0c4b_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:47db4fbff55a4c24928633e78dd8c88c5e7723ac89cff1996b7348f483ea116b
size 5216764

comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/full.md
ADDED
@@ -0,0 +1,723 @@
| 1 |
+
# COMPARING DISTRIBUTIONS BY MEASURING DIFFERENCES THAT AFFECT DECISION MAKING

Shengjia Zhao*, Abhishek Sinha*, Yutong He*, Aidan Perreault, Jiaming Song, Stefano Ermon

Department of Computer Science
Stanford University

{sjzhao,a7b23,kellyyhe,aperr,tsong,ermon}@stanford.edu

# ABSTRACT

Measuring the discrepancy between two probability distributions is a fundamental problem in machine learning and statistics. We propose a new class of discrepancies based on the optimal loss for a decision task: two distributions are different if the optimal decision loss is higher on their mixture than on each individual distribution. By suitably choosing the decision task, this generalizes the Jensen-Shannon divergence and the maximum mean discrepancy family. We apply our approach to two-sample tests, and on various benchmarks, we achieve superior test power compared to competing methods. In addition, a modeler can directly specify their preferences when comparing distributions through the decision loss. We apply this property to understanding the effects of climate change on different economic activities and selecting features targeting different decision tasks.
# 1 INTRODUCTION

Quantifying the difference between two probability distributions is a fundamental problem in machine learning. Modelers choose different types of discrepancies (or probability divergences) to encode their prior knowledge about which aspects are relevant to evaluate the difference. Integral probability metrics (IPMs, Müller (1997)) and $f$-divergences (Csiszár, 1964) are widely used discrepancies in machine learning. IPMs, such as the Wasserstein distance and the maximum mean discrepancy (MMD) (Rao, 1982; Burbea & Rao, 1984; Gretton et al., 2012), are based on the idea that if two distributions are identical, any function should have the same expectation under both distributions. IPMs are used to define training objectives for generative models (Arjovsky et al., 2017), perform independence tests (Doran et al., 2014), and carry out robust optimization (Esfahani & Kuhn, 2018), among many other applications. $f$-divergences, such as the KL divergence and the Jensen-Shannon divergence, are based on the idea that if two distributions are identical, they assign the same likelihood to every point. One can then define a discrepancy based on how different the likelihood ratio is from one. The KL divergence underlies some of the most commonly used training objectives for both supervised and unsupervised machine learning algorithms, such as the cross entropy loss.
We propose a third category of divergences, called H-divergences, that overlaps with but also extends the set of integral probability metrics and the set of $f$-divergences. Intuitively, an H-divergence compares two distributions in terms of the optimal loss for a certain decision task. This optimal loss corresponds to a generalized notion of entropy (DeGroot et al., 1962). Instead of measuring the best average code length of any encoding scheme (Shannon entropy), the generalized entropy uses an arbitrary loss function (rather than code length) and an arbitrary set of actions (rather than encoding schemes), and is defined as the best expected loss over the set of actions. In particular, given two distributions $p$ and $q$, we compare the generalized entropy of the mixture distribution $(p + q)/2$ with the generalized entropies of $p$ and $q$ individually. Intuitively, if $p$ and $q$ are different, it is more difficult to minimize expected loss under the mixture distribution $(p + q)/2$, and hence the mixture distribution should have higher generalized entropy; if $p$ and $q$ are identical, the mixture distribution equals $p$ (and $q$), and hence has the same generalized entropy.

Our divergence strictly generalizes the maximum mean discrepancy family and the Jensen-Shannon divergence, which can be obtained with specific choices of the loss function. We illustrate this via
Figure 1: Relationship between H-divergence (this paper) and existing divergences. The Jensen-Shannon divergence is an $f$-divergence but not an IPM; the MMD is an IPM but not always an $f$-divergence; both are H-divergences. There are H-divergences that are neither $f$-divergences nor IPMs.

the Venn diagram in Figure 1. Our formulation allows us to choose alternative losses to leverage inductive biases and machine learning models from different problem domains. For example, if we choose the generalized entropy to be the maximum log likelihood achievable by a family of deep generative models, we can leverage recent progress in modeling high dimensional images.
We demonstrate the effectiveness of H-divergence in two-sample tests, i.e., deciding whether two sets of samples come from the same distribution. A test based on a probability discrepancy declares two sets of samples different if their discrepancy exceeds some threshold. We use H-divergences whose generalized entropy is defined by the log likelihood of off-the-shelf generative models. Compared to state-of-the-art tests based on MMD with deep kernels (Liu et al., 2020), tests based on the H-divergence achieve better test power (at identical type I error) on a large set of benchmarks.

More importantly, scientists and policy makers are often interested not only in whether two distributions are different, but in how they are different and whether the differences affect decision making. Typical divergence measures (such as KL) and two-sample tests only quantify whether two distributions are different, while we show that H-divergence is a useful tool for quantifying how distributions differ, with three application examples: studying the effect of climate change, feature selection, and sample quality evaluation. In each of these examples, we compare different aspects of the distributions by choosing specific decision loss functions. For example, climate change (Figure 3) might impact agriculture in a region but not energy production, or vice versa. By choosing suitable loss functions (related to agriculture, energy, etc.) we can quantify and test whether the change in climate distribution impacts different economic activities.
# 2 BACKGROUND

# 2.1 PROBABILITY DIVERGENCES

Let $\mathcal{X}$ denote a finite set or a finite dimensional vector space, and $\mathcal{P}(\mathcal{X})$ denote the set of probability distributions on $\mathcal{X}$ that have a density. We consider the problem of defining a probability divergence between any two distributions in $\mathcal{P}(\mathcal{X})$, where a probability divergence is any function $D:\mathcal{P}(\mathcal{X})\times \mathcal{P}(\mathcal{X})\to \mathbb{R}$ that satisfies $D(p\|q)\geq 0$ and $D(p\|p) = 0$ for all $p,q\in \mathcal{P}(\mathcal{X})$. We call the divergence $D$ "strict" if $D(p\|q) > 0$ for all $p\neq q$, and "non-strict" otherwise. In this paper we consider both types of divergences.
Integral Probability Metrics Let $\mathcal{F}$ denote a set of functions $\mathcal{X} \to \mathbb{R}$. An integral probability metric is defined as $\mathrm{IPM}_{\mathcal{F}}(p\|q) = \sup_{f \in \mathcal{F}} |\mathbb{E}_p[f(X)] - \mathbb{E}_q[f(X)]|$. Several important divergences are integral probability metrics. Examples include the Wasserstein distance, where $\mathcal{F}$ is the set of 1-Lipschitz functions, and the total variation distance, where $\mathcal{F}$ is the set of functions $\mathcal{X} \to [-1,1]$. The maximum mean discrepancy (MMD) (Rao, 1982; Burbea & Rao, 1984; Gretton et al., 2012) chooses a kernel function $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}_+$ and is defined by

$$
\mathrm{MMD}^2(p \| q) = \mathbb{E}_{p,p} k(X, Y) + \mathbb{E}_{q,q} k(X, Y) - 2 \mathbb{E}_{p,q} k(X, Y)
$$

MMD is an IPM where $\mathcal{F}$ is the set of unit-norm functions in the reproducing kernel Hilbert space (RKHS) associated with the kernel $k$.
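As a concrete illustration (not part of the original experiments), the three expectations in the MMD definition can be estimated by plugging in sample averages. A minimal NumPy sketch for 1-D samples with a Gaussian kernel, with all names our own:

```python
import numpy as np

def mmd2(xs, ys, bandwidth=1.0):
    """Plug-in estimate of MMD^2(p||q) = E_pp k + E_qq k - 2 E_pq k
    for 1-D samples, using a Gaussian kernel."""
    def k(a, b):
        return np.exp(-np.subtract.outer(a, b) ** 2 / (2 * bandwidth ** 2))
    return k(xs, xs).mean() + k(ys, ys).mean() - 2 * k(xs, ys).mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, 500), rng.normal(0, 1, 500))
diff = mmd2(rng.normal(0, 1, 500), rng.normal(2, 1, 500))
print(same, diff)  # the shifted pair yields a much larger estimate
```

This is the simple biased V-statistic; unbiased variants drop the diagonal terms.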
$f$-Divergences Given any continuous convex function $f: \mathbb{R}_+ \to \mathbb{R}$ such that $f(1) = 0$, the $f$-divergence is defined as (assuming densities exist) $D_f(p\|q) = \mathbb{E}_q[f(p(X)/q(X))]$. Examples include the KL divergence, where $f: t \mapsto t\log t$, and the Jensen-Shannon divergence, where $f: t \mapsto (t + 1)\log\left(\frac{2}{t + 1}\right) + t\log t$.

# 2.2 H-ENTROPY
For any action space $\mathcal{A}$ and any loss function $\ell : \mathcal{X} \times \mathcal{A} \to \mathbb{R}$, the H-entropy (DeGroot et al., 1962; DeGroot, 2005; Grünwald et al., 2004) is defined as

$$
H_{\ell}(p) = \inf_{a \in \mathcal{A}} \mathbb{E}_{p}[\ell(X, a)]
$$

In words, H-entropy is the Bayes optimal loss of a decision maker who must select some action $a$ not for a particular $x$, but in expectation for a random $x$ drawn from $p(x)$. H-entropy generalizes several important notions of uncertainty. Examples include: Shannon entropy, where $\mathcal{A}$ is the set of probability distributions $\mathcal{P}(\mathcal{X})$ and $\ell(x, a) = -\log a(x)$; variance, where $\mathcal{A} = \mathcal{X}$ and $\ell(x, a) = \|x - a\|_2^2$; predictive $\mathcal{V}$-entropy, where $\mathcal{A} \subset \mathcal{P}(\mathcal{X})$ is some subset of distributions and $\ell(x, a) = -\log a(x)$ (Xu et al., 2020).
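The variance example can be checked numerically. The sketch below (our own toy distribution, with $\mathcal{A} = \mathbb{R}$ approximated by a dense grid) verifies that with squared loss the optimal action is the mean of $p$ and the H-entropy equals the variance:

```python
import numpy as np

# H-entropy with squared loss: H(p) = inf_a E_p[(X - a)^2].
# The infimum is attained at a* = E[X], with value Var[X].
xs = np.array([0.0, 1.0, 5.0])
ps = np.array([0.2, 0.5, 0.3])          # a toy discrete distribution p

actions = np.linspace(-10, 10, 20001)   # dense grid standing in for A = R
expected_loss = ((xs[None, :] - actions[:, None]) ** 2 * ps[None, :]).sum(axis=1)

a_star = actions[expected_loss.argmin()]
h_entropy = expected_loss.min()

mean = (xs * ps).sum()
var = ((xs - mean) ** 2 * ps).sum()
print(a_star, h_entropy)  # the grid minimizer matches E[X] and Var[X]
```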
A key property we will use is that H-entropy is concave (DeGroot et al., 1962).

Lemma 1. For any choice of $\ell :\mathcal{X}\times \mathcal{A}\to \mathbb{R}$, $H_{\ell}$ is a concave function.

This lemma can be proved by observing that the infimum of linear functions is concave: it is always better to pick an optimal action for $p$ and $q$ separately rather than a single action for both.
$$
\begin{aligned} H_{\ell}\big(\alpha p + (1 - \alpha) q\big) &= \inf_{a} \left(\alpha \mathbb{E}_{p}[\ell(X, a)] + (1 - \alpha) \mathbb{E}_{q}[\ell(X, a)]\right) \\ &\geq \alpha \inf_{a} \mathbb{E}_{p}[\ell(X, a)] + (1 - \alpha) \inf_{a} \mathbb{E}_{q}[\ell(X, a)] = \alpha H_{\ell}(p) + (1 - \alpha) H_{\ell}(q) \end{aligned}
$$

This lemma reflects why $H_{\ell}$ can be thought of as a measure of entropy or uncertainty: if the distribution is more uncertain (e.g., a mixture of $p$ and $q$, rather than $p$ or $q$ separately), then decisions made under it suffer a higher loss.
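The concavity inequality above is easy to observe numerically. As a sketch (our own toy example), take the log loss, for which the H-entropy is the Shannon entropy, and compare the entropy of a mixture against the average of the individual entropies:

```python
import numpy as np

def shannon_entropy(p):
    """H-entropy under log loss: the optimal action a is p itself,
    and the optimal expected loss is the Shannon entropy of p."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

p = np.array([0.9, 0.1])
q = np.array([0.2, 0.8])
alpha = 0.5
mix = alpha * p + (1 - alpha) * q

lhs = shannon_entropy(mix)                                   # H(mixture)
rhs = alpha * shannon_entropy(p) + (1 - alpha) * shannon_entropy(q)
print(lhs, rhs)  # lhs >= rhs, as Lemma 1 predicts
```

For $\alpha = 1/2$ the gap `lhs - rhs` is exactly the classical Jensen-Shannon divergence between $p$ and $q$.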
# 3 DEFINITION AND THEORETICAL PROPERTIES

# 3.1 H-JENSEN SHANNON DIVERGENCE

As a warm-up, we present a special case of our divergence.

Definition 1 (H-Jensen Shannon divergence).

$$
D_{\ell}^{\mathrm{JS}}(p, q) = H_{\ell}\left(\frac{p + q}{2}\right) - \frac{1}{2}\left(H_{\ell}(p) + H_{\ell}(q)\right) \tag{1}
$$

$D_{\ell}^{\mathrm{JS}}$ is always non-negative because H-entropy is concave (Lemma 1), and clearly $D_{\ell}^{\mathrm{JS}}(p,q) = 0$ whenever $p = q$. Therefore, $D_{\ell}^{\mathrm{JS}}$ is a valid probability divergence. In particular, if we choose $H_{\ell}$ to be the Shannon entropy, Definition 1 recovers the Jensen-Shannon divergence. Other special choices of the loss function recover definitions in (Burbea & Rao, 1982).
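The claim that log loss recovers the Jensen-Shannon divergence can be verified directly on discrete distributions. A minimal sketch (helper names are ours) evaluates Eq. (1) with Shannon entropy and compares it against the KL-based form of the Jensen-Shannon divergence:

```python
import numpy as np

def H(p):
    """Shannon entropy: H-entropy with A = P(X) and loss -log a(x)."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def kl(p, q):
    mask = p > 0
    return (p[mask] * np.log(p[mask] / q[mask])).sum()

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.3, 0.6])
m = (p + q) / 2

d_hjs = H(m) - 0.5 * (H(p) + H(q))       # Definition 1 with log loss
jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)    # classical Jensen-Shannon divergence
print(d_hjs, jsd)  # the two agree up to floating point error
```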
# 3.2 GENERAL H-DIVERGENCE

In addition to the H-Jensen Shannon divergence, there are other functions based on the H-entropy that satisfy the requirements of a divergence. For example,

$$
D_{\ell}^{\operatorname{Min}}(p, q) = H_{\ell}\left(\frac{p + q}{2}\right) - \min\left(H_{\ell}(p), H_{\ell}(q)\right) \tag{2}
$$

is also a valid divergence (this will be proved later as a special case of Lemma 2). We can define a general set of divergences that includes both of the above with the following definition:
Definition 2 (H-divergence). For two distributions $p, q$ on $\mathcal{X}$, given any continuous function $\phi : \mathbb{R}^2 \to \mathbb{R}$ such that $\phi(\theta, \lambda) > 0$ whenever $\theta + \lambda > 0$ and $\phi(0, 0) = 0$, define

$$
D_{\ell}^{\phi}(p \| q) = \phi\left(H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(p),\; H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(q)\right)
$$

Intuitively, $H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(p)$ and $H_{\ell}\left(\frac{p + q}{2}\right) - H_{\ell}(q)$ measure how much more difficult it is to minimize loss on the mixture distribution $(p + q)/2$ than on $p$ and $q$ respectively. $\phi$ ranges over a general class of functions that map these differences into a scalar divergence while satisfying some desirable properties described in the next section.
The following proposition shows that the H-divergence generalizes the previous definitions (1) and (2). Therefore, any property of the H-divergence is inherited by, e.g., the H-Jensen Shannon divergence.

Proposition 1. If $\phi(\theta, \lambda) = \frac{\theta + \lambda}{2}$ then $D_{\ell}^{\phi}(p, q)$ is the H-Jensen Shannon divergence in Eq. (1). If $\phi(\theta, \lambda) = \max(\theta, \lambda)$ then $D_{\ell}^{\phi}(p, q)$ is the H-Min divergence in Eq. (2).
# 3.3 PROPERTIES OF THE H-DIVERGENCE

We first verify that $D_{\ell}^{\phi}$ is indeed a (strict or non-strict) probability divergence.

Lemma 2. For any choice of $\ell$ and any choice of $\phi$ that satisfy Definition 2, $D_{\ell}^{\phi}$ is non-negative and $D_{\ell}^{\phi}(p,q) = 0$ whenever $p = q$. Furthermore, $D_{\ell}^{\phi}$ is symmetric whenever $\phi$ is symmetric.

Depending on the choice of $\ell$, the H-divergence may or may not be strict (i.e., $D(p\|q) > 0$ whenever $p \neq q$). The following proposition characterizes the conditions for a strict divergence.

Proposition 2 (Strict Divergence). For any choice of $\phi$, the following are equivalent: 1) $D_{\ell}^{\phi}(p\|q) > 0$ for all $p \neq q$; 2) the H-entropy $H_{\ell}(p) \coloneqq \inf_{a}\mathbb{E}_{p}[\ell(X,a)]$ is strictly concave in $p$; 3) for all $p \neq q$, $\arg\inf_{a}\mathbb{E}_{p}[\ell(X,a)] \cap \arg\inf_{a}\mathbb{E}_{q}[\ell(X,a)] = \emptyset$.
In particular, this proposition can be used to characterize all strict H-divergences, because the set of all losses $\ell$ that induce strictly concave H-entropy functions $H_{\ell}$ can be characterized by Fenchel duality (Duchi et al., 2018).

One important property of the H-divergence is that two distributions have non-zero divergence if and only if they have different optimal actions, i.e., the optimal solutions for their respective H-entropies are disjoint. This is shown in the following proposition (proof in Appendix A).

Proposition 3. $D_{\ell}^{\phi}(p\|q) > 0$ if and only if $\arg\inf_{a}\mathbb{E}_{p}[\ell(X,a)] \cap \arg\inf_{a}\mathbb{E}_{q}[\ell(X,a)] = \emptyset$.

Intuitively, $D_{\ell}^{\phi}$ only takes into account differences between distributions that lead to different optimal action choices. This property allows us to incorporate prior domain knowledge. By choosing $\mathcal{A}$ and $\ell$ we can specify which differences between distributions lead to different optimal actions, and which differences do not. For example, we can choose $\mathcal{A}$ as a set of generative models (e.g., mixtures of Gaussians) and $\ell(x, a)$ as the negative log likelihood of $x$ under generative model $a$. If under two distributions we end up learning the same generative model (by maximizing log likelihood), the H-divergence between them is zero.
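A minimal sketch of this property (our own toy example): with the mean-seeking loss $\ell(x, a) = (x - a)^2$ and $\mathcal{A} = \mathbb{R}$, the optimal action is the mean and $H_{\ell}$ is the variance, so two different distributions with equal means share an optimal action and get divergence exactly zero, as Proposition 3 predicts:

```python
import numpy as np

def h_entropy_sq(xs, ps):
    """H-entropy with squared loss over actions a in R: equals Var[X]."""
    mean = (xs * ps).sum()
    return ((xs - mean) ** 2 * ps).sum()

def d_hjs(xs, ps, qs):
    """H-Jensen Shannon divergence (Eq. 1) for two distributions on the
    same support xs, with the mean-seeking squared loss."""
    ms = (ps + qs) / 2
    return h_entropy_sq(xs, ms) - 0.5 * (h_entropy_sq(xs, ps) + h_entropy_sq(xs, qs))

xs = np.array([-1.0, 0.0, 1.0])
p = np.array([0.5, 0.0, 0.5])   # mean 0, spread out
q = np.array([0.0, 1.0, 0.0])   # mean 0, point mass
r = np.array([0.1, 0.1, 0.8])   # mean 0.7, shifted

print(d_hjs(xs, p, q), d_hjs(xs, p, r))
# p != q but both have mean 0, so a decision maker who only cares about
# the mean cannot tell them apart: the divergence is 0.  p vs r differ
# in mean, so the divergence is positive.
```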
# 3.4 RELATIONSHIP TO MMD

An important special case of the H-divergence is the set of squared maximum mean discrepancies (MMD), as shown by the following theorem:

Theorem 1. The set of H-Jensen Shannon divergences is strictly larger than the set of squared MMD distances.

To prove this theorem, we show that for each choice of kernel $k: \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, there exists an action space $\mathcal{A}$ and loss $\ell$ such that the corresponding squared MMD distance and H-divergence coincide (see proof in Appendix A). In particular, this equivalence can be achieved by choosing $\mathcal{A}$ to be the RKHS $\mathcal{H}$ of $k(\cdot, \cdot)$, and $\ell(x, a) = 4\|k(x, \cdot) - a\|_{\mathcal{H}}^2$. The inclusion is strict because the Jensen-Shannon divergence is an H-Jensen Shannon divergence but not a squared MMD distance.
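On a finite space, this construction can be checked numerically. For $\ell(x, a) = 4\|k(x,\cdot) - a\|_{\mathcal{H}}^2$, the infimum over $a \in \mathcal{H}$ is attained at the kernel mean embedding $\mu_p$, giving $H_{\ell}(p) = 4(\mathbb{E}_p k(X,X) - \mathbb{E}_{p,p} k(X,Y))$; plugging this into Eq. (1) should reproduce $\mathrm{MMD}^2$. A sketch with a random PSD kernel matrix (our own toy setup):

```python
import numpy as np

# Finite space X = {0, 1, 2} with an explicit PSD kernel matrix K.
rng = np.random.default_rng(1)
B = rng.normal(size=(3, 3))
K = B @ B.T                        # K[i, j] = k(i, j), positive semi-definite

def h_kernel(p):
    """H-entropy for l(x, a) = 4 ||k(x,.) - a||_H^2.  The minimizer is the
    mean embedding mu_p, giving H(p) = 4 (E_p k(X, X) - E_{p,p} k(X, Y))."""
    return 4 * (p @ np.diag(K) - p @ K @ p)

p = np.array([0.6, 0.3, 0.1])
q = np.array([0.1, 0.2, 0.7])
m = (p + q) / 2

d_hjs = h_kernel(m) - 0.5 * (h_kernel(p) + h_kernel(q))
mmd2 = p @ K @ p + q @ K @ q - 2 * (p @ K @ q)
print(d_hjs, mmd2)  # the two quantities agree
```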
# 3.5 ESTIMATION AND CONVERGENCE

Many machine learning tasks can be reduced to the problem of estimating the divergence between two distributions given samples. Specifically, suppose we are provided with a set of $m$ i.i.d. samples $\hat{p}_m = (x_1,\dots,x_m)$ drawn from distribution $p$ and $\hat{q}_m = (x_1',\dots,x_m')$ drawn from distribution $q$, and would like to estimate $D_{\ell}^{\phi}(p\|q)$ from the samples; $\hat{p}_m$ and $\hat{q}_m$ also denote the corresponding empirical distributions. In this section we propose an empirical estimator for the H-divergence and show that it has favorable convergence properties.

Let $\hat{D}_{\ell}^{\phi}(\hat{p}_m\|\hat{q}_m)$ be the empirical (random) estimator for $D_{\ell}^{\phi}(p\|q)$ defined by

$$
\hat{D}_{\ell}^{\phi}\left(\hat{p}_{m} \| \hat{q}_{m}\right) = \phi\left(\inf_{a} \frac{1}{m} \sum_{i = 1}^{m} \ell\left(x_{i}^{\prime\prime}, a\right) - \inf_{a} \frac{1}{m} \sum_{i = 1}^{m} \ell\left(x_{i}, a\right),\; \inf_{a} \frac{1}{m} \sum_{i = 1}^{m} \ell\left(x_{i}^{\prime\prime}, a\right) - \inf_{a} \frac{1}{m} \sum_{i = 1}^{m} \ell\left(x_{i}^{\prime}, a\right)\right)
$$

where $x_{i}^{\prime\prime} = x_{i}b_{i} + x_{i}^{\prime}(1 - b_{i})$ and the $b_{i}$ are i.i.d. uniform samples from $\{0,1\}$, so that $(x_{i}^{\prime\prime})_{i=1}^{m}$ is a sample of size $m$ from the mixture distribution $(p + q)/2$.

Using $x_{i}^{\prime\prime}$ as defined above is crucial for the convergence properties we will prove in Theorem 2. It might be tempting to replace the term $\frac{1}{m}\sum_{i = 1}^{m}\ell(x_{i}^{\prime\prime},a)$ with $\frac{1}{2m}\sum_{i = 1}^{m}(\ell(x_i,a) + \ell(x_i',a))$ to use all the available samples. However, optimizing the action based on a finite set of samples (instead of in expectation) is prone to overfitting and introduces bias. Intuitively, using $m$ samples $(x_{i}^{\prime\prime})$ ensures that the bias for the mixture is comparable to that of $p$ and $q$. Without this, Theorem 2 is no longer true, and empirical performance also degrades.
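The estimator, including the Bernoulli mixing that forms $x''$, can be sketched with a toy action space where the inner infima are available in closed form (squared loss over $\mathcal{A} = \mathbb{R}$, so each empirical H-entropy is just a sample variance; $\phi$ is the average, i.e., the H-JS case). All names are ours:

```python
import numpy as np

def emp_h(samples):
    """Empirical H-entropy for l(x, a) = (x - a)^2, A = R:
    inf_a (1/m) sum_i (x_i - a)^2 is attained at the sample mean,
    so the value is the (population-style) sample variance."""
    return np.var(samples)

def emp_h_divergence(xs, ys, rng):
    """Empirical H-Jensen Shannon divergence (phi = average of the two gaps)."""
    b = rng.integers(0, 2, size=len(xs))
    zs = np.where(b == 1, xs, ys)   # x''_i: one size-m sample from (p+q)/2
    gap_p = emp_h(zs) - emp_h(xs)
    gap_q = emp_h(zs) - emp_h(ys)
    return 0.5 * (gap_p + gap_q)

rng = np.random.default_rng(0)
same = emp_h_divergence(rng.normal(0, 1, 2000), rng.normal(0, 1, 2000), rng)
diff = emp_h_divergence(rng.normal(0, 1, 2000), rng.normal(3, 1, 2000), rng)
print(same, diff)
# `same` is near 0 (possibly slightly negative from sampling noise);
# `diff` is near the population value (3 - 0)^2 / 4 = 2.25.
```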
Before presenting the convergence results, we first state several assumptions that make convergence possible. In particular, we assume that the loss function $\ell$ is $C$-bounded, i.e., there exists some $C$ such that $0 \leq \ell(x, a) \leq C$ for all $a, x$. This assumption seemingly excludes important special cases such as the Jensen-Shannon divergence (which is associated with the unbounded log loss). However, we show in the appendix that the Jensen-Shannon divergence cannot be consistently estimated in general, and hence is correctly excluded by our theorem. One practical solution is to clip the log likelihood, which is the approach adopted in (Song & Ermon, 2019) for improved divergence estimation (for a similar KL divergence estimation problem).

In addition, we assume that $\phi$ is 1-Lipschitz under the $\infty$-norm, i.e., $|\phi(\theta + d\theta, \lambda + d\lambda) - \phi(\theta, \lambda)| \leq \max(d\theta, d\lambda)$ for all $\theta, \lambda, d\theta, d\lambda \in \mathbb{R}$. Both $\phi(\theta, \lambda) = \frac{\theta + \lambda}{2}$ and $\phi(\theta, \lambda) = \max\{\theta, \lambda\}$ are 1-Lipschitz under the $\infty$-norm. This is a mild assumption because any Lipschitz $\phi$ can be rescaled to be 1-Lipschitz. Finally, define the Rademacher complexity

$$
\mathcal{R}_{m}^{p}(\ell) = \mathbb{E}_{X_{i} \sim p,\, \epsilon_{i} \sim \mathrm{Uniform}(\{-1, 1\})}\left[\sup_{a \in \mathcal{A}} \frac{1}{m} \sum_{i = 1}^{m} \epsilon_{i} \ell(X_{i}, a)\right]
$$

We define $\mathcal{R}_m^q(\ell)$ analogously. Based on these assumptions and definitions we can bound the difference between $\hat{D}_{\ell}^{\phi}(\hat{p}_m\|\hat{q}_m)$ and $D_{\ell}^{\phi}(p\|q)$.
Theorem 2. If $\ell$ is $C$-bounded and $\phi$ is 1-Lipschitz under the $\infty$-norm, then for any choice of distributions $p, q \in \mathcal{P}(\mathcal{X})$ and any $t > 0$ we have

1. $\operatorname*{Pr}[\hat{D}_{\ell}^{\phi}(\hat{p}_m\|\hat{q}_m)\geq t]\leq 4e^{-\frac{t^2m}{2C^2}}$ if $p = q$

2. $\operatorname*{Pr}\left[\left|\hat{D}_{\ell}^{\phi}(\hat{p}_m\|\hat{q}_m) - D_{\ell}^{\phi}(p\|q)\right|\geq 4\max(\mathcal{R}_m^p(\ell),\mathcal{R}_m^q(\ell)) + t\right]\leq 4e^{-\frac{t^2m}{2C^2}}$

Corollary 1. $\sqrt{\operatorname{Var}[\hat{D}_{\ell}^{\phi}(\hat{p}_m\|\hat{q}_m)]}\leq 4\max(\mathcal{R}_m^p(\ell),\mathcal{R}_m^q(\ell)) + \sqrt{2C^2/m}$

For the proof see Appendix A. Note that when $p = q$, the convergence of $\hat{D}_{\ell}^{\phi}(\hat{p}_m\|\hat{q}_m)$ does not depend on the Rademacher complexity of $\ell$, and the estimator converges to 0 very quickly. When $p \neq q$ the estimator $\hat{D}_{\ell}^{\phi}(\hat{p}_m\|\hat{q}_m)$ is still consistent (under regularity assumptions).

Corollary 2 (Consistency). Under the conditions of Theorem 2, if additionally either 1) $\mathcal{A}$ is a finite set, or 2) $\mathcal{A}$ is a bounded subset of $\mathbb{R}^d$ for some $d\in \mathbb{N}$ and $\ell$ is Lipschitz w.r.t. $a$, then almost surely $\lim_{m\to\infty}\hat{D}_{\ell}^{\phi}(\hat{p}_m\|\hat{q}_m) = D_{\ell}^{\phi}(p\|q)$.

For both cases in Corollary 2, the Rademacher complexity $\mathcal{R}_m^p(\ell)$ goes to zero (as the sample size $m\to\infty$) at a rate of $O(1/\sqrt{m})$. In other words, the estimation error in Theorem 2 and the standard deviation of the estimator are both bounded by $O(1/\sqrt{m})$ as the sample size $m\to\infty$.
# 4 EXPERIMENT: TWO SAMPLE TEST

The first application is to design more powerful two-sample tests. We aim to show that H-divergence allows us to leverage inductive biases for each data type (e.g., image, bio, text) by choosing suitable actions $\mathcal{A}$ and loss $\ell$, which leads to improved test power.

# 4.1 TWO SAMPLE TEST

In a two-sample test, we would like to decide whether two sets of samples are drawn from the same distribution. Specifically, given two sets of samples $\hat{p}_m \coloneqq (x_1, \dots, x_m) \stackrel{\mathrm{i.i.d.}}{\sim} p$ and $\hat{q}_m \coloneqq (x_1', \dots, x_m') \stackrel{\mathrm{i.i.d.}}{\sim} q$, we would like to decide whether $p = q$. Typical approaches estimate a divergence $\hat{D}(\hat{p}_m \| \hat{q}_m)$ and output $p \neq q$ if the divergence exceeds some threshold.

There are two types of errors: a Type I error occurs when the algorithm incorrectly outputs $p \neq q$; the probability of a Type I error is called the significance level. A Type II error occurs when the algorithm incorrectly outputs $p = q$; the probability of not making a Type II error is called the test power (higher is better). Note that both the significance level and the test power are relative to the distributions $p$ and $q$.

We follow the typical setup of guaranteeing a certain significance level while empirically measuring the test power. In particular, the significance level can be guaranteed with a permutation test (Ernst et al., 2004). In a permutation test, in addition to the original sets of samples $\hat{p}_m$ and $\hat{q}_m$, we also uniformly randomly swap elements between $\hat{p}_m$ and $\hat{q}_m$ to obtain multiple randomly swapped datasets $(\hat{p}_m^1, \hat{q}_m^1), (\hat{p}_m^2, \hat{q}_m^2), \cdots$. The testing algorithm outputs $p \neq q$ if $\hat{D}(\hat{p}_m \| \hat{q}_m)$ is in the top $\alpha$-quantile among $\{\hat{D}(\hat{p}_m^1 \| \hat{q}_m^1), \hat{D}(\hat{p}_m^2 \| \hat{q}_m^2), \cdots\}$. The permutation test guarantees the significance level (i.e., low Type I error) because if $p = q$ then swapping elements between $\hat{p}_m$ and $\hat{q}_m$ does not change the joint distribution, so each pair $(\hat{p}_m, \hat{q}_m), (\hat{p}_m^1, \hat{q}_m^1), \cdots$ has the same distribution. Therefore, $\hat{D}(\hat{p}_m \| \hat{q}_m)$ lands in the top $\alpha$-quantile with probability at most $\alpha$. Note that the significance level guarantee does not rely on the accurate estimation of the H-divergence in Theorem 2 (accurate estimation is still important because the test power does depend on it).
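The permutation test can be wrapped around any divergence estimate. A minimal sketch (our own names; the stand-in statistic is a simple mean gap, where the paper would plug in the H-divergence estimator):

```python
import numpy as np

def permutation_test(xs, ys, stat, n_perm=100, alpha=0.05, seed=0):
    """Reject 'p = q' iff stat(xs, ys) lands in the top alpha-quantile of the
    statistic computed over randomly re-paired versions of the pooled data."""
    rng = np.random.default_rng(seed)
    observed = stat(xs, ys)
    pooled = np.concatenate([xs, ys])
    m = len(xs)
    null_stats = []
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        null_stats.append(stat(pooled[perm[:m]], pooled[perm[m:]]))
    # p-value: fraction of permuted statistics reaching the observed value
    p_value = (1 + sum(s >= observed for s in null_stats)) / (1 + n_perm)
    return p_value <= alpha

mean_gap = lambda a, b: abs(a.mean() - b.mean())  # a stand-in divergence
rng = np.random.default_rng(1)
reject = permutation_test(rng.normal(0, 1, 200), rng.normal(1, 1, 200), mean_gap)
print(reject)  # the two samples clearly differ in mean, so the test rejects
```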
When the chosen $D_{\ell}^{\phi}$ is not a strict divergence (see Proposition 2) we may falsely conclude $p = q$ (i.e., $D(p\|q) = 0$) when in reality $p \neq q$. This is true but inconsequential in finite data scenarios. With finite data, it is generally impossible to guarantee the test power (i.e., to bound the probability of concluding $p = q$ when in reality $p \neq q$, for arbitrary $p, q$), and the prior literature does not provide such guarantees. Hence our guarantee is no weaker than that of the prior two-sample test literature.

# 4.2 EXPERIMENT SETUP

Baselines We compare our proposed approach with six other divergences. All methods are based on the permutation test explained in Section 4.1. MMD-D (Liu et al., 2020) measures the MMD distance with a deep kernel, while MMD-O (Gretton et al., 2012) measures the MMD distance with a Gaussian kernel. Mean embedding (ME) and smoothed characteristic functions (SCF) (Chwialkowski et al., 2015; Jitkrittum et al., 2016) are distances based on the difference in Gaussian kernel mean embeddings at a set of optimized points, or a set of optimized frequencies. C2ST-S and C2ST-L (Lopez-Paz & Oquab, 2017; Cheng & Cloninger, 2019) use a classifier's accuracy in distinguishing between the two distributions.

Comparison Metrics and Setup All methods have the same significance level (which is provably equal to $\alpha = 0.05$ because of the permutation test), so we only compare the test power. We follow Liu et al. (2020) and consider four datasets: Blob (Liu et al., 2020), HDGM (Liu et al., 2020), HIGGS (Adam-Bourdarios et al., 2014) and MNIST (LeCun & Cortes, 2010). Our method and all the baseline methods have hyper-parameters. To ensure fair comparison, we follow the same evaluation setup as (Liu et al., 2020) for all methods. We split each dataset into two equal partitions: a training set to tune hyper-parameters, and a validation set to compute the final test output.

Implementation Details We choose $\phi(\theta, \lambda) = \left(\frac{\theta^s + \lambda^s}{2}\right)^{1/s}$ for $s \geq 1$ (which includes the H-Jensen Shannon divergence when $s = 1$ and the H-Min divergence when $s = \infty$). We define $\ell(x, a)$ as the negative log likelihood of $x$ under distribution $a$, where $a$ ranges over a certain model family $\mathcal{A}$. We experiment with mixtures of Gaussians, Parzen density estimators and variational autoencoders (Kingma & Welling, 2013). Our hyper-parameters are the parameter $s$ and the choice of generative model family. Choosing these hyper-parameters might seem cumbersome, but compared to the second best baseline (MMD-D, which tunes thousands of deep kernel parameters), we have far fewer hyper-parameters.
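The interpolation behavior of this $\phi$ family is easy to sanity check: $s = 1$ gives the average of the two entropy gaps (the H-JS case), while large $s$ approaches their maximum (the H-Min case). A small sketch with arbitrary example values:

```python
def phi(theta, lam, s):
    """phi_s(theta, lambda) = ((theta^s + lambda^s) / 2)^(1/s), s >= 1."""
    return ((theta ** s + lam ** s) / 2) ** (1 / s)

theta, lam = 0.3, 0.7
print(phi(theta, lam, 1))   # the average (theta + lambda) / 2 = 0.5
print(phi(theta, lam, 50))  # already close to max(theta, lambda) = 0.7
```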
We use $\alpha = 0.05$ in all two-sample test experiments. Each permutation test uses 100 permutations, and we run each test 100 times to compute the test power (i.e., the fraction of runs in which it correctly outputs $p \neq q$). Finally, we plot and report the standard deviation of the performance by repeating the entire experiment 10 times.

# 4.3 EXPERIMENT RESULTS

The average test powers are reported in Figure 4, Figure 2, Table 1 and Table 3. Our approach achieves superior test power across the board. Notably, on HIGGS we achieve the same test power with 2x fewer samples than the second best test, and on MNIST we achieve perfect test power even at the smallest sample size evaluated in (Liu et al., 2020).

Following (Liu et al., 2020) we also evaluate the test power as the dimension of the problem increases (Figure 2). Our test power decreases gracefully as the dimension increases. We hypothesize that the test power improvements come from leveraging progress in generative model research: for each type of data (e.g., bio, image, text) there have been decades of research into suitable generative models; we use generative models that are standard in the modern literature for each data type (e.g., KDE for low dimensional physics/bio data, VAE for simple images).
|
| 200 |
+
|
| 201 |
+

|
| 202 |
+
Figure 2: Average test power on HDGM dataset. Left: results with the same sample size (4000) and different data dimensions. Right: results with the same sample dimension (10) and different sample sizes. Our method (H-Div, dashed line) achieve better test power for almost every setup. All tests have high test power for low data dimensions, but our method scales better for higher data dimensions.
|
| 203 |
+
|
| 204 |
+

|
| 205 |
+
|
| 206 |
+
<table><tr><td>N</td><td>ME</td><td>SCF</td><td>C2ST-S</td><td>C2ST-L</td><td>MMD-O</td><td>MMD-D</td><td>H-Div</td></tr><tr><td>1000</td><td>0.120±0.007</td><td>0.095±0.007</td><td>0.082±0.015</td><td>0.097±0.014</td><td>0.132±0.005</td><td>0.113±0.013</td><td>0.240±0.020</td></tr><tr><td>2000</td><td>0.165±0.019</td><td>0.130±0.019</td><td>0.183±0.026</td><td>0.232±0.032</td><td>0.291±0.017</td><td>0.304±0.012</td><td>0.380±0.040</td></tr><tr><td>3000</td><td>0.197±0.012</td><td>0.142±0.025</td><td>0.257±0.049</td><td>0.399±0.058</td><td>0.376±0.022</td><td>0.403±0.050</td><td>0.685±0.015</td></tr><tr><td>5000</td><td>0.410±0.041</td><td>0.261±0.044</td><td>0.592±0.037</td><td>0.447±0.045</td><td>0.659±0.018</td><td>0.699±0.047</td><td>0.930±0.010</td></tr><tr><td>8000</td><td>0.691±0.067</td><td>0.467±0.038</td><td>0.892±0.029</td><td>0.878±0.020</td><td>0.923±0.013</td><td>0.952±0.024</td><td>1.000±0.000</td></tr><tr><td>10000</td><td>0.786±0.041</td><td>0.603±0.066</td><td>0.974±0.007</td><td>0.985±0.005</td><td>1.000±0.000</td><td>1.000±0.000</td><td>1.000±0.000</td></tr><tr><td>Avg.</td><td>0.395</td><td>0.283</td><td>0.497</td><td>0.506</td><td>0.564</td><td>0.579</td><td>0.847</td></tr></table>

Table 1: Average test power $\pm$ standard error for $N$ samples over the HIGGS dataset. The results on MNIST are similar and are presented in Table 3, Appendix B.1.

Figure 3: Example plots of H-divergence across different geographical locations for losses $\ell$ related to agriculture (left) and energy production (right). Darker color indicates larger H-divergence. Compared to divergences such as KL, H-divergence measures changes relevant to different social and economic activities (by selecting appropriate loss functions $\ell$). For example, even though climate change significantly impacts high-latitude and high-altitude areas, this change has less relevance to agriculture (because few agricultural activities are possible in these areas).

# 5 EXPERIMENT: DECISION DEPENDENT DISCREPANCY MEASUREMENT
# 5.1 ASSESSING CLIMATE CHANGE
As an illustrative example of how H-divergence can facilitate decision making, we use climate data and study how climate change affects decision making through the lens of H-divergence. Scientists and policy makers are often interested in how climate change disparately affects different geographical locations. Existing methods (Preston et al., 2011) focus on one aspect of climate change (such as the expected economic loss (Burke et al., 2018)) using tailor-designed analyses, while H-divergence provides a general tool for hypothesis testing and visualization for different aspects of climate change. In our example, we choose suitable loss functions to quantitatively measure aspects of climate change that are relevant to decision making in agriculture and renewable energy production.[2]

Setup We use the NOAA database, which contains daily weather from thousands of weather stations at different geographical locations. For each location, we summarize the weather sequence of each year into a few summary statistics (average yearly temperature, humidity, wind speed and rainy days). We are interested in assessing changes in weather at each location over the period 1981-2019, from the perspective of agriculture and renewable energy activities. Further details of these experiments are in Appendix C.2.

Example: Agriculture It is known that climate change affects crop suitability (Lobell et al., 2008). Let $\mathcal{A}$ denote the set of possible crops to plant at each location (e.g. wheat/barley/rice), and $\ell(x, a)$ denote the loss of planting crop $a$ if the yearly weather is $x$. We estimate the function $\ell$ by matching geographical locations in the FAO crop yield dataset (FAOSTAT et al., 2006) to weather stations in the NOAA database, and learn a function to predict crop yield from weather data with kernel ridge regression.

The H-divergence has a natural interpretation: a geographical location could either (1) plant, for the entire period 1981-2019, the single crop that is optimal for the local climate (i.e. choose $a^* = \arg \min_{a\in \mathcal{A}}\mathbb{E}_{(p + q) / 2}[\ell (X,a)]$); or (2) plant the optimal crops for 1981-1999 and for 2000-2019 respectively. The H-divergence measures the additional loss of option (1) compared to option (2). In other words, it is the excess loss of not adapting the crop type to climate change. For each geographical location we can compute the H-divergence $D_{\ell}^{\mathrm{JS}}$ for the estimated $\ell$ (plotted in Figure 3, left).

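This "excess loss of not adapting" reading can be sanity-checked on the simplest possible loss. For the squared loss $\ell(x, a) = \|x - a\|^2$ the optimal action is the mean, so $H_\ell$ is the total variance, and $D_\ell^{\mathrm{JS}}$ has a closed form computable from samples. A minimal sketch (the paper's experiments use richer learned losses; these function names are ours):

```python
import numpy as np

def h_entropy(x):
    # H_l(p) for l(x, a) = ||x - a||^2: the optimal action is the mean,
    # so H_l(p) = E ||X - E X||^2 (the total variance).
    return np.mean(np.sum((x - x.mean(axis=0)) ** 2, axis=1))

def h_divergence_js(xp, xq):
    # D_l^JS(p || q) = H_l((p + q)/2) - (H_l(p) + H_l(q)) / 2.
    # Pooling the two sample sets represents the equal mixture; this
    # assumes len(xp) == len(xq).
    mix = np.concatenate([xp, xq], axis=0)
    return h_entropy(mix) - 0.5 * (h_entropy(xp) + h_entropy(xq))
```

For this loss the divergence reduces to $\frac{1}{4}\|\mu_p - \mu_q\|^2$: the excess loss of committing to a single action (the pooled mean) instead of adapting the action to each distribution.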
Example: Energy production Changes in weather also affect electricity generation, since climate change could affect the amount of wind/solar energy available. Let $\mathcal{A}$ denote the possible numbers of wind/solar/fossil fuel power plants built, and $\ell(x, a)$ denote the loss (negative utility) of choice $a$ when the weather is $x$. We obtain the function $\ell$ using empirical formulas for energy production (Npower, 2012). The H-divergence for this loss function is shown in Figure 3 (right). Intuitively, the H-divergence measures the excess loss of using the same energy generation infrastructure for the entire time period vs. using different infrastructure that adapts to climate change. While this is only an illustrative example, comparing the two maps we see that regions and industries are affected by climate change in different ways; H-divergence provides a quantitative framework for this kind of assessment.

<table><tr><td>Loss Selection</td><td>Selected Features</td></tr><tr><td>Neutral</td><td>education, cap-gain, sex, age, occupation</td></tr><tr><td>Upweight low income</td><td>education, cap-gain, relationship, marital-status, sex</td></tr><tr><td>Upweight high income</td><td>education, cap-gain, sex, age, race</td></tr></table>

Table 2: Features selected by different approaches. With H-divergence we can select different features that are important in different decision problems. For example, if we assign a high / low penalty to making incorrect predictions for higher income groups, we select a different set of features.

# 5.2 FEATURE SELECTION

In a feature selection task, we wish to know which input features are most predictive of the label. Feature selection provides information on which features have the biggest influence on the label, and can be used in scientific discovery (Jović et al., 2015; Zhang et al., 2015).

Off-the-shelf feature selection algorithms often do not take into account problem-specific requirements. For example, denoting the input features as $X_{1}, \dots, X_{K}$ and the label as $Y$, mutual information feature selection algorithms estimate the Shannon mutual information $I(X_{i}, Y) \coloneqq \mathrm{KL}(p(X_{i}, Y) \| p(X_{i}) p(Y))$ and select the features with the largest mutual information. However, scientists and policy makers often need fine-grained control to answer their specific scientific or policy questions. For example, social scientists might want to know which features are more important for high-income as compared to low-income groups (e.g. to understand potential glass ceilings).

With H-divergence we can select features with large $D_{\ell}^{\phi}(p(X_i,Y)\| p(X_i)p(Y))$ (i.e. the optimal action is different under the joint $p(X_i,Y)$ and the product of marginals $p(X_i)p(Y)$). By choosing different loss functions $\ell$ we can get different feature selection results, each reflecting important features for that decision problem. For example, in Table 2 we show the features selected for the UCI income prediction dataset (Blake, 1998). For this dataset, we choose $\mathcal{A}$ as the set of logistic regression functions, $\ell(x,a)$ as the cross-entropy loss of regression function $a$ on the sample $x$, and $\phi(\theta,\lambda) = \max(\theta,\lambda)$. If we want to focus on high-income groups, we can assign a higher weight to the loss of high-income samples, and vice versa. We observe that gender/race is more predictive of income for high-income groups, while relationship or marital status is more predictive of income for lower-income groups. This can help us identify potential inequality or suggest further investigation into the causes of low income and poverty. For example, our results suggest a connection between family and relationship status and poverty, and a connection between gender/race and high income. These connections merit further investigation into their causes and policy remedies.

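A minimal sketch of this decision-dependent selection criterion follows. We replace the paper's logistic regression family with a much simpler action class (predict a label per quantile bin of the feature) so the H-entropies have closed forms, draw product-of-marginal samples by permuting the labels, and support sample weights for the upweighting schemes of Table 2. All names and the binning scheme are our own simplifications, not the authors' implementation:

```python
import numpy as np

def h_entropy(x, y, w, bins):
    # H_l(p) for weighted 0/1 loss with actions that assign one label per
    # bin of x: the optimal action picks, in each bin, the heavier class.
    err = 0.0
    for b in range(bins.size - 1):
        m = (x >= bins[b]) & (x < bins[b + 1])
        w0, w1 = w[m & (y == 0)].sum(), w[m & (y == 1)].sum()
        err += min(w0, w1)          # loss of the best label for this bin
    return err / w.sum()

def decision_feature_score(x, y, w, n_bins=8, seed=0):
    # Estimate D_l^phi(p(X_i, Y) || p(X_i) p(Y)) with phi(u, v) = max(u, v),
    # sampling from the product of marginals by permuting y.
    rng = np.random.default_rng(seed)
    y_perm = rng.permutation(y)                    # ~ p(X_i) p(Y)
    bins = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    bins[-1] += 1e-9                               # include the max in the last bin
    x_mix = np.concatenate([x, x])                 # pooled = empirical mixture
    y_mix = np.concatenate([y, y_perm])
    w_mix = np.concatenate([w, w])
    h_mix = h_entropy(x_mix, y_mix, w_mix, bins)
    return max(h_mix - h_entropy(x, y, w, bins),
               h_mix - h_entropy(x, y_perm, w, bins))
```

Features whose score is large are those for which the optimal per-bin decision genuinely changes between the joint and the product of marginals; changing `w` changes which samples the loss cares about, and therefore which features get selected.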
# 6 ACKNOWLEDGEMENTS

SE acknowledges support by NSF (#1651565, #1522054, #1733686), ONR (N000141912145), AFOSR (FA95501910024), ARO (W911NF-21-1-0125) and a Sloan Fellowship.

# REFERENCES

Claire Adam-Bourdarios, Glen Cowan, Cécile Germain, Isabelle Guyon, Balázs Kégl, and David Rousseau. The Higgs boson machine learning challenge. In Proceedings of the 2014 International Conference on High-Energy Physics and Machine Learning - Volume 42, HEPML'14, pp. 19-55. JMLR.org, 2014.

Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International Conference on Machine Learning, pp. 214-223. PMLR, 2017.

Peter Bartlett. Theoretical statistics, lecture 13. https://www.stat.berkeley.edu/~bartlett/courses/2013spring-stat210b/notes/13notes.pdf, 2013.

Catherine Blake. UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html, 1998.

Jacob Burbea and C Radhakrishna Rao. Entropy differential metric, distance and divergence measures in probability spaces: A unified approach. Journal of Multivariate Analysis, 12(4):575-596, 1982.

Jacob Burbea and C Radhakrishna Rao. Differential metrics in probability spaces. Probab. Math. Stat., 3:241-258, 1984.

Marshall Burke, W Matthew Davis, and Noah S Diffenbaugh. Large potential reduction in economic damages under UN mitigation targets. Nature, 557(7706):549-553, 2018.

X. Cheng and A. Cloninger. Classification logit two-sample testing by neural networks. ArXiv, abs/1909.11298, 2019.

Kacper P Chwialkowski, Aaditya Ramdas, Dino Sejdinovic, and Arthur Gretton. Fast two-sample testing with analytic representations of probability measures. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett (eds.), Advances in Neural Information Processing Systems 28, pp. 1981-1989. Curran Associates, Inc., 2015.

Imre Csiszár. Eine informationstheoretische Ungleichung und ihre Anwendung auf den Beweis der Ergodizität von Markoffschen Ketten. Magyar Tud. Akad. Mat. Kutató Int. Közl., 8:85-108, 1964.

Morris H DeGroot. Optimal Statistical Decisions, volume 82. John Wiley & Sons, 2005.

Morris H DeGroot et al. Uncertainty, information, and sequential experiments. The Annals of Mathematical Statistics, 33(2):404-419, 1962.

Gary Doran, Krikamol Muandet, Kun Zhang, and Bernhard Schölkopf. A permutation-based kernel conditional independence test. In UAI, pp. 132-141, 2014.

John Duchi, Khashayar Khosravi, Feng Ruan, et al. Multiclass classification, information, divergence and surrogate risk. Annals of Statistics, 46(6B):3246-3275, 2018.

Michael D Ernst et al. Permutation methods: a basis for exact inference. Statistical Science, 19(4):676-685, 2004.

Peyman Mohajerin Esfahani and Daniel Kuhn. Data-driven distributionally robust optimization using the Wasserstein metric: Performance guarantees and tractable reformulations. Mathematical Programming, 171(1-2):115-166, 2018.

FAO FAOSTAT et al. FAO statistical databases. Rome: Food and Agriculture Organization of the United Nations, 2006.

Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723-773, 2012.

Peter D Grünwald, A Philip Dawid, et al. Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory. The Annals of Statistics, 32(4):1367-1433, 2004.

Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. arXiv preprint arXiv:1903.12261, 2019.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626-6637, 2017.

Wittawat Jitkrittum, Zoltán Szabó, Kacper P Chwialkowski, and Arthur Gretton. Interpretable distribution features with maximum testing power. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (eds.), Advances in Neural Information Processing Systems 29, pp. 181-189. Curran Associates, Inc., 2016.

Alan Jović, Karla Brkić, and Nikola Bogunović. A review of feature selection methods with applications. In 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 1200-1205. IEEE, 2015.

Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114v10, December 2013.

Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010. URL http://yann.lecun.com/exdb/mnist/.

Feng Liu, Wenkai Xu, Jie Lu, Guangquan Zhang, A. Gretton, and D. Sutherland. Learning deep kernels for non-parametric two-sample tests. ArXiv, abs/2002.09116, 2020.

David B Lobell, Marshall B Burke, Claudia Tebaldi, Michael D Mastrandrea, Walter P Falcon, and Rosamond L Naylor. Prioritizing climate change adaptation needs for food security in 2030. Science, 319(5863):607-610, 2008.

David Lopez-Paz and M. Oquab. Revisiting classifier two-sample tests. arXiv: Machine Learning, 2017.

Tengyu Ma. Machine learning theory. https://github.com/tengyuma/cs229m_notes/blob/main/Winter2021/pdf/02-08-2021.pdf, 2021.

Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, pp. 429-443, 1997.

Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pp. 271-279, 2016.

Npower. Wind turbine power calculations. Mechanical and Electrical Engineering Power Industry, The Royal Academy of Engineering, 2012.

Benjamin L Preston, Emma J Yuen, and Richard M Westaway. Putting vulnerability to climate change on the map: a review of approaches, benefits, and risks. Sustainability Science, 6(2):177-202, 2011.

C Radhakrishna Rao. Diversity and dissimilarity coefficients: a unified approach. Theoretical Population Biology, 21(1):24-43, 1982.

Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.

Jiaming Song and Stefano Ermon. Understanding the limitations of variational mutual information estimators. arXiv preprint arXiv:1910.06222, 2019.

Bharath K Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert RG Lanckriet. On integral probability metrics, φ-divergences and binary classification. arXiv preprint arXiv:0901.2698, 2009.

Yilun Xu, Shengjia Zhao, Jiaming Song, Russell Stewart, and Stefano Ermon. A theory of usable information under computational constraints. arXiv preprint arXiv:2002.10689, 2020.

Yudong Zhang, Zhengchao Dong, Preetha Phillips, Shuihua Wang, Genlin Ji, Jiquan Yang, and Ti-Fei Yuan. Detection of subjects and brain regions related to Alzheimer's disease using 3D MRI scans based on eigenbrain and machine learning. Frontiers in Computational Neuroscience, 9:66, 2015.

# A PROOFS
Lemma 2. For any choice of $\ell$ and for any choice of $\phi$ that satisfy Definition 2, $D_{\ell}^{\phi}$ is non-negative and $D_{\ell}^{\phi}(p\|q) = 0$ whenever $p = q$. Furthermore, $D_{\ell}^{\phi}$ is symmetric whenever $\phi$ is symmetric.

Proof of Lemma 2. For any choice of $p, q$, by the concavity of the H-entropy in Lemma 1 we have

$$
H_{\ell}\left(\frac{p+q}{2}\right) - H_{\ell}(p) \geq \frac{1}{2}\left(H_{\ell}(q) - H_{\ell}(p)\right)
$$

$$
H_{\ell}\left(\frac{p+q}{2}\right) - H_{\ell}(q) \geq \frac{1}{2}\left(H_{\ell}(p) - H_{\ell}(q)\right)
$$

Therefore, by summing the two inequalities we have

$$
\left(H_{\ell}\left(\frac{p+q}{2}\right) - H_{\ell}(p)\right) + \left(H_{\ell}\left(\frac{p+q}{2}\right) - H_{\ell}(q)\right) \geq 0
$$

By the requirement on $\phi$ we know that $D_{\ell}^{\phi}(p\|q) \geq 0$. In addition, when $p = q$, since $(p+q)/2 = p = q$ we have $D_{\ell}^{\phi}(p\|q) = \phi(0,0) = 0$.

To show symmetry, note that whenever $\phi$ is symmetric,

$$
D_{\ell}^{\phi}(p\|q) = \phi\left(H_{\ell}\left(\frac{p+q}{2}\right) - H_{\ell}(p),\; H_{\ell}\left(\frac{p+q}{2}\right) - H_{\ell}(q)\right) = \phi\left(H_{\ell}\left(\frac{p+q}{2}\right) - H_{\ell}(q),\; H_{\ell}\left(\frac{p+q}{2}\right) - H_{\ell}(p)\right) = D_{\ell}^{\phi}(q\|p).
$$

$\square$

Proposition 3. $D_{\ell}^{\phi}(p\|q) > 0$ if and only if $\arg \inf_{a}\mathbb{E}_{p}[\ell(X,a)] \cap \arg \inf_{a}\mathbb{E}_{q}[\ell(X,a)] = \emptyset$.

Proof of Proposition 3. Denote $\mathcal{A}_p^* = \arg \inf_a \mathbb{E}_p[\ell(X,a)]$ and $\mathcal{A}_q^* = \arg \inf_a \mathbb{E}_q[\ell(X,a)]$. Also compute

$$
H_{\ell}\left(\frac{p+q}{2}\right) = \inf_{a} \mathbb{E}_{\frac{p+q}{2}}[\ell(X,a)] = \inf_{a}\left(\frac{1}{2}\mathbb{E}_{p}[\ell(X,a)] + \frac{1}{2}\mathbb{E}_{q}[\ell(X,a)]\right) \tag{3}
$$

If $\mathcal{A}_p^* \cap \mathcal{A}_q^* = \emptyset$, then for any action $a'$ such that $\mathbb{E}_p[\ell(X, a')] = H_\ell(p)$ we must have $a' \in \mathcal{A}_p^*$, so $a' \notin \mathcal{A}_q^*$ and $\mathbb{E}_q[\ell(X, a')] > H_\ell(q)$. Similarly, if we choose $a''$ such that $\mathbb{E}_q[\ell(X, a'')] = H_\ell(q)$, then $\mathbb{E}_p[\ell(X, a'')] > H_\ell(p)$. In other words, for any choice of action $a \in \mathcal{A}$, either $a \notin \mathcal{A}_p^*$ and $\mathbb{E}_p[\ell(X, a)] > H_\ell(p)$, or $a \in \mathcal{A}_p^*$ and $\mathbb{E}_q[\ell(X, a)] > H_\ell(q)$. Therefore

$$
\inf_{a}\left(\frac{1}{2}\mathbb{E}_{p}[\ell(X,a)] + \frac{1}{2}\mathbb{E}_{q}[\ell(X,a)]\right) > \frac{1}{2}H_{\ell}(p) + \frac{1}{2}H_{\ell}(q) \tag{4}
$$

Combining Eq.(3) and Eq.(4) we have

$$
\frac{1}{2}\left(H_{\ell}\left(\frac{p+q}{2}\right) - H_{\ell}(p)\right) + \frac{1}{2}\left(H_{\ell}\left(\frac{p+q}{2}\right) - H_{\ell}(q)\right) > 0
$$

By Definition 2 this implies (for any choice of $\phi$ that satisfies the requirements in Definition 2) that $D_{\ell}^{\phi}(p\|q) > 0$.

To prove the converse, simply observe that if $\mathcal{A}_p^* \cap \mathcal{A}_q^* \neq \emptyset$, then letting $a^* \in \mathcal{A}_p^* \cap \mathcal{A}_q^*$ we have $a^{*} = \arg \inf_{a\in \mathcal{A}}\mathbb{E}_{\frac{p+q}{2}}[\ell(X,a)]$. This implies that

$$
2H_{\ell}\left(\frac{p+q}{2}\right) - H_{\ell}(q) - H_{\ell}(p) = 2\mathbb{E}_{\frac{p+q}{2}}\left[\ell(X, a^{*})\right] - \mathbb{E}_{q}\left[\ell(X, a^{*})\right] - \mathbb{E}_{p}\left[\ell(X, a^{*})\right] = 0
$$

By Definition 2 we can conclude that $D_{\ell}^{\phi}(p\|q) = 0$. $\square$

Theorem 1. The set of H-Jensen Shannon divergences is strictly larger than the set of MMD² distances.

Proof of Theorem 1. Let $k(x,y)$ be some kernel on an input space $\mathcal{X}$, and let $\mathcal{H}$ be the RKHS induced by the kernel. The (squared) MMD distance is defined by

$$
\mathrm{MMD}^{2}(p, q) = \mathbb{E}_{X \sim p, Y \sim p} k(X, Y) + \mathbb{E}_{X \sim q, Y \sim q} k(X, Y) - 2 \mathbb{E}_{X \sim p, Y \sim q} k(X, Y)
$$

which we write more compactly as $\mathrm{MMD}^2(p,q) = \mathbb{E}_{p,p}k(X,Y) + \mathbb{E}_{q,q}k(X,Y) - 2\mathbb{E}_{p,q}k(X,Y)$.

Define $\phi(x,y) = \|k(x,\cdot) - k(y,\cdot)\|_{\mathcal{H}}^2$. We can rewrite the squared MMD in the following form:

$$
\begin{array}{l}
\mathrm{MMD}^{2}(p, q) = \mathbb{E}_{p,q}\phi(X,Y) - \frac{1}{2}\mathbb{E}_{p,p}\phi(X,Y) - \frac{1}{2}\mathbb{E}_{q,q}\phi(X,Y) \tag{5} \\
\quad = \mathbb{E}_{p,q}\left[\|k(X,\cdot)\|_{\mathcal{H}}^{2} + \|k(Y,\cdot)\|_{\mathcal{H}}^{2} - 2k(X,Y)\right] - \frac{1}{2}\mathbb{E}_{p,p}\left[\|k(X,\cdot)\|_{\mathcal{H}}^{2} + \|k(Y,\cdot)\|_{\mathcal{H}}^{2} - 2k(X,Y)\right] \\
\qquad - \frac{1}{2}\mathbb{E}_{q,q}\left[\|k(X,\cdot)\|_{\mathcal{H}}^{2} + \|k(Y,\cdot)\|_{\mathcal{H}}^{2} - 2k(X,Y)\right] = \mathbb{E}_{p,p}k(X,Y) + \mathbb{E}_{q,q}k(X,Y) - 2\mathbb{E}_{p,q}k(X,Y)
\end{array}
$$

We also observe an algebraic relationship for any function $f(x,y)$ such that $f(x,y) = f(y,x)$ for all $x,y$:

$$
\begin{array}{l}
\mathbb{E}_{\frac{p+q}{2},\frac{p+q}{2}} f(X,Y) = \frac{1}{4}\mathbb{E}_{p,p}f(X,Y) + \frac{1}{4}\mathbb{E}_{p,q}f(X,Y) + \frac{1}{4}\mathbb{E}_{q,p}f(X,Y) + \frac{1}{4}\mathbb{E}_{q,q}f(X,Y) \\
\quad = \frac{1}{4}\mathbb{E}_{p,p}f(X,Y) + \frac{1}{4}\mathbb{E}_{p,q}f(X,Y) + \frac{1}{4}\mathbb{E}_{q,p}f(Y,X) + \frac{1}{4}\mathbb{E}_{q,q}f(X,Y) \\
\quad = \frac{1}{4}\mathbb{E}_{p,p}f(X,Y) + \frac{1}{4}\mathbb{E}_{q,q}f(X,Y) + \frac{1}{2}\mathbb{E}_{p,q}f(X,Y) \tag{6}
\end{array}
$$

Furthermore, we have that

$$
\mathbb{E}_{p,p}\|k(X,\cdot) - k(Y,\cdot)\|_{\mathcal{H}}^{2} = 2\,\mathbb{E}_{p}\|k(X,\cdot) - \mathbb{E}_{p}k(Y,\cdot)\|_{\mathcal{H}}^{2} \tag{7}
$$

Based on the above, noting that $\phi(x,y) = \phi(y,x)$, we can derive

$$
\begin{array}{ll}
\mathrm{MMD}^{2}(p, q) = \mathbb{E}_{p,q}\|k(X,\cdot)-k(Y,\cdot)\|_{\mathcal{H}}^{2} - \frac{1}{2}\mathbb{E}_{p,p}\|k(X,\cdot)-k(Y,\cdot)\|_{\mathcal{H}}^{2} - \frac{1}{2}\mathbb{E}_{q,q}\|k(X,\cdot)-k(Y,\cdot)\|_{\mathcal{H}}^{2} & \text{(Eq. 5)} \\
= 2\,\mathbb{E}_{\frac{p+q}{2},\frac{p+q}{2}}\|k(X,\cdot)-k(Y,\cdot)\|_{\mathcal{H}}^{2} - \mathbb{E}_{p,p}\|k(X,\cdot)-k(Y,\cdot)\|_{\mathcal{H}}^{2} - \mathbb{E}_{q,q}\|k(X,\cdot)-k(Y,\cdot)\|_{\mathcal{H}}^{2} & \text{(Eq. 6)} \\
= 4\,\mathbb{E}_{\frac{p+q}{2}}\|k(X,\cdot)-\mathbb{E}_{\frac{p+q}{2}}k(Y,\cdot)\|_{\mathcal{H}}^{2} - 2\,\mathbb{E}_{p}\|k(X,\cdot)-\mathbb{E}_{p}k(Y,\cdot)\|_{\mathcal{H}}^{2} - 2\,\mathbb{E}_{q}\|k(X,\cdot)-\mathbb{E}_{q}k(Y,\cdot)\|_{\mathcal{H}}^{2} & \text{(Eq. 7)} \\
= 4\inf_{a\in\mathcal{H}}\mathbb{E}_{\frac{p+q}{2}}\|k(X,\cdot)-a\|_{\mathcal{H}}^{2} - 2\inf_{a\in\mathcal{H}}\mathbb{E}_{p}\|k(X,\cdot)-a\|_{\mathcal{H}}^{2} - 2\inf_{a\in\mathcal{H}}\mathbb{E}_{q}\|k(X,\cdot)-a\|_{\mathcal{H}}^{2} & \text{(mean def.)}
\end{array}
$$

where the last step uses the fact that the mean embedding minimizes the expected squared RKHS distance. Therefore we can define a loss $\ell: \mathcal{X}\times\mathcal{H}\to\mathbb{R}$ where

$$
\ell(x, a) = 4\|k(x,\cdot) - a\|_{\mathcal{H}}^{2}
$$

Under the new notation we have

$$
\begin{array}{l}
\mathrm{MMD}^{2}(p, q) = \inf_{a\in\mathcal{H}}\mathbb{E}_{\frac{p+q}{2}}\ell(X,a) - \frac{1}{2}\left(\inf_{a\in\mathcal{H}}\mathbb{E}_{p}\ell(X,a) + \inf_{a\in\mathcal{H}}\mathbb{E}_{q}\ell(X,a)\right) \\
\quad = H_{\ell}\left(\frac{p+q}{2}\right) - \frac{1}{2}\left(H_{\ell}(p) + H_{\ell}(q)\right) = D_{\ell}^{\mathrm{JS}}(p\|q)
\end{array}
$$

Conversely, we want to show that not every H-Jensen Shannon divergence is an MMD. For example, take $H_{\ell}$ to be the Shannon entropy; then the corresponding $D_{\ell}^{\mathrm{JS}}$ is the Jensen-Shannon divergence, which is not an MMD. This is because the JS divergence is a type of $f$-divergence, and the only $f$-divergence that is also an IPM is the total variation distance (Sriperumbudur et al., 2009). Therefore, the set of H-Jensen Shannon divergences is strictly larger than the set of MMDs. $\square$

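Since the identity $\mathrm{MMD}^2(p,q) = D_{\ell}^{\mathrm{JS}}(p\|q)$ holds for arbitrary distributions, including empirical ones, it can be checked numerically: with equal sample sizes the pooled sample is exactly the empirical mixture, the optimal RKHS action is the kernel mean embedding, and both sides agree to floating-point precision. A self-contained sketch (function names are ours):

```python
import numpy as np

def gauss_kernel(a, b, gamma=1.0):
    # Gaussian kernel matrix between two sample sets.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def h_entropy_rkhs(x):
    # For l(x, a) = 4 ||k(x,.) - a||_H^2 the optimal action is the kernel
    # mean embedding, giving H_l(p) = 4 (E_p k(X, X) - E_{p,p} k(X, Y)).
    K = gauss_kernel(x, x)
    return 4 * (np.mean(np.diag(K)) - K.mean())

def h_div_js(xp, xq):
    # D_l^JS = H_l((p+q)/2) - (H_l(p) + H_l(q)) / 2; pooling equal-size
    # samples gives the empirical mixture exactly.
    mix = np.concatenate([xp, xq], axis=0)
    return h_entropy_rkhs(mix) - 0.5 * (h_entropy_rkhs(xp) + h_entropy_rkhs(xq))

def mmd2(xp, xq):
    # Biased (V-statistic) estimate of the squared MMD.
    return (gauss_kernel(xp, xp).mean() + gauss_kernel(xq, xq).mean()
            - 2 * gauss_kernel(xp, xq).mean())
```

Both functions compute the same quantity by different routes, which is exactly what the proof above establishes.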
Theorem 2. If $\ell$ is $C$-bounded, and $\phi$ is 1-Lipschitz under the $\infty$-norm, then for any choice of distributions $p, q \in \mathcal{P}(\mathcal{X})$ and $t > 0$ we have

1. $\operatorname*{Pr}[\hat{D}_{\ell}^{\phi}(\hat{p}_m\| \hat{q}_m)\geq t]\leq 4e^{-\frac{t^2m}{2C^2}}$ if $p = q$
2. $\operatorname*{Pr}\left[\left|\hat{D}_{\ell}^{\phi}(\hat{p}_m\| \hat{q}_m) - D_{\ell}^{\phi}(p\| q)\right|\geq 4\max (\mathcal{R}_m^p (\ell),\mathcal{R}_m^q (\ell)) + t\right]\leq 4e^{-\frac{t^2m}{2C^2}}$

Proof of Theorem 2. Let $\hat{p}_m$ be a sequence of $m$ samples $(x_1, \dots, x_m)$ drawn from $p$, and $\hat{q}_m$ a sequence of $m$ samples $(x_1', \dots, x_m')$ drawn from $q$. Let $\hat{r}_m$ be the sub-sampling mixture $(x_1'', \dots, x_m'')$ defined in Section 3.5 (i.e. $x_i'' = x_i b_i + x_i'(1 - b_i)$ where $b_i$ is uniformly sampled from $\{0, 1\}$). We also overload the notation $H_\ell$ by defining $H_\ell(\hat{p}_m) = \inf_{a \in \mathcal{A}} \frac{1}{m} \sum_{i=1}^{m} \ell(x_i, a)$, and define $H_\ell(\hat{q}_m)$, $H_\ell(\hat{r}_m)$ similarly.

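The sub-sampling mixture $\hat{r}_m$ used throughout this proof can be constructed directly from the two sample arrays; a sketch of the Section 3.5 construction (the function name is ours):

```python
import numpy as np

def subsample_mixture(xp, xq, seed=0):
    # x_i'' = x_i * b_i + x_i' * (1 - b_i) with b_i uniform on {0, 1}:
    # each index takes its sample from p or from q with equal probability,
    # so the result is m i.i.d. draws from the mixture (p + q) / 2.
    rng = np.random.default_rng(seed)
    b = rng.integers(0, 2, size=len(xp))
    return np.where(b[:, None] == 1, xp, xq)
```

This assumes `xp` and `xq` are 2-D arrays of the same shape; the output has the same length $m$ as each input, which is what lets the same concentration argument apply to $H_\ell(\hat{r}_m)$.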
Before proving this theorem we need the following lemmas.

Lemma 3. Under the assumptions of Theorem 2,

$$
\operatorname*{Pr}\left[H_{\ell}(\hat{p}_m) - \mathbb{E}[H_{\ell}(\hat{p}_m)] \geq t\right] \leq e^{-\frac{2t^2 m}{C^2}}
$$

$$
\operatorname*{Pr}\left[H_{\ell}(\hat{p}_m) - \mathbb{E}[H_{\ell}(\hat{p}_m)] \leq -t\right] \leq e^{-\frac{2t^2 m}{C^2}}
$$

Lemma 4. Under the assumptions of Theorem 2,

$$
\operatorname*{Pr}\left[|H_{\ell}(p) - H_{\ell}(\hat{p}_m)| \geq 2\mathcal{R}_m(\ell) + t\right] \leq e^{-\frac{2t^2 m}{C^2}}
$$

To prove the first statement of the Theorem, when $p = q$ we can denote $\mu = \mathbb{E}[H_{\ell}(\hat{p}_m)] = \mathbb{E}[H_{\ell}(\hat{q}_m)] = \mathbb{E}[H_{\ell}(\hat{r}_m)]$, and we have

$$
\begin{array}{ll}
\Pr\left[\hat{D}_{\ell}^{\phi}(\hat{p}_m \| \hat{q}_m) \geq t\right] & \\
= \Pr\left[\phi\left(H_{\ell}(\hat{r}_m) - H_{\ell}(\hat{p}_m),\, H_{\ell}(\hat{r}_m) - H_{\ell}(\hat{q}_m)\right) \geq t\right] & \text{Def. 2} \\
\leq \Pr\left[\max\left(H_{\ell}(\hat{r}_m) - H_{\ell}(\hat{p}_m),\, H_{\ell}(\hat{r}_m) - H_{\ell}(\hat{q}_m)\right) \geq t\right] & \text{$\phi$ 1-Lipschitz} \\
\leq \Pr\left[H_{\ell}(\hat{r}_m) - H_{\ell}(\hat{p}_m) \geq t\right] + \Pr\left[H_{\ell}(\hat{r}_m) - H_{\ell}(\hat{q}_m) \geq t\right] & \text{union bound} \\
\leq \Pr\left[H_{\ell}(\hat{p}_m) - \mu \leq -t/2\right] + 2\Pr\left[H_{\ell}(\hat{r}_m) - \mu \geq t/2\right] + \Pr\left[H_{\ell}(\hat{q}_m) - \mu \leq -t/2\right] & \text{union bound} \\
\leq 4 e^{-\frac{t^2 m}{2C^2}} & \text{Lemma 3}
\end{array}
$$

The third inequality is because if $H_{\ell}(\hat{r}_m) - H_{\ell}(\hat{p}_m) \geq t$ then either $H_{\ell}(\hat{p}_m) - \mu \leq -t/2$ or $H_{\ell}(\hat{r}_m) - \mu \geq t/2$ must hold. Similarly, if $H_{\ell}(\hat{r}_m) - H_{\ell}(\hat{q}_m) \geq t$ then either $H_{\ell}(\hat{q}_m) - \mu \leq -t/2$ or $H_{\ell}(\hat{r}_m) - \mu \geq t/2$ must hold.

To prove the second statement of the Theorem, we observe that

$$
\begin{array}{ll}
\left|\hat{D}_{\ell}^{\phi}(\hat{p}_m \| \hat{q}_m) - D_{\ell}^{\phi}(p \| q)\right| & \\
= \left|\phi\left(H_{\ell}(\hat{r}_m) - H_{\ell}(\hat{p}_m),\, H_{\ell}(\hat{r}_m) - H_{\ell}(\hat{q}_m)\right) - \phi\left(H_{\ell}\left(\frac{p+q}{2}\right) - H_{\ell}(p),\, H_{\ell}\left(\frac{p+q}{2}\right) - H_{\ell}(q)\right)\right| & \text{Def. 2} \\
\leq \max\left(\left|H_{\ell}(\hat{r}_m) - H_{\ell}(\hat{p}_m) - H_{\ell}\left(\frac{p+q}{2}\right) + H_{\ell}(p)\right|,\, \left|H_{\ell}(\hat{r}_m) - H_{\ell}(\hat{q}_m) - H_{\ell}\left(\frac{p+q}{2}\right) + H_{\ell}(q)\right|\right) & \text{$\phi$ 1-Lipschitz} \\
\leq \max\left(\left|H_{\ell}(\hat{r}_m) - H_{\ell}\left(\frac{p+q}{2}\right)\right| + \left|H_{\ell}(\hat{p}_m) - H_{\ell}(p)\right|,\, \left|H_{\ell}(\hat{r}_m) - H_{\ell}\left(\frac{p+q}{2}\right)\right| + \left|H_{\ell}(\hat{q}_m) - H_{\ell}(q)\right|\right) & \text{triangle inequality}
\end{array}
$$

Therefore, the event $|\hat{D}_{\ell}^{\phi}(\hat{p}_m \| \hat{q}_m) - D_{\ell}^{\phi}(p \| q)| \geq 4 \max(\mathcal{R}_m^p(\ell), \mathcal{R}_m^q(\ell)) + t$ happens only if at least one of the following events happens:

$$
\left|H_{\ell}(\hat{r}_m) - H_{\ell}\left(\frac{p+q}{2}\right)\right| \geq \mathcal{R}_m^p(\ell) + \mathcal{R}_m^q(\ell) + t/2 \geq 2\mathcal{R}_m^{(p+q)/2}(\ell) + t/2 \quad (\mathcal{R}\ \text{convex})
$$

$$
\begin{array}{l}
\left|H_{\ell}(\hat{p}_m) - H_{\ell}(p)\right| \geq 2\mathcal{R}_m^p(\ell) + t/2 \\
\left|H_{\ell}(\hat{q}_m) - H_{\ell}(q)\right| \geq 2\mathcal{R}_m^q(\ell) + t/2
\end{array}
$$

Based on Lemma 4, each of these events happens with probability at most $e^{-\frac{t^2m}{2C^2}}$. Therefore we can conclude by union bound that

$$
|
| 471 |
+
\operatorname * {P r} [ | \hat {D} _ {\ell} ^ {\phi} (p \| q) - D _ {\ell} ^ {\phi} (p \| q) | \geq 4 \max (\mathcal {R} _ {m} ^ {p} (\ell), \mathcal {R} _ {m} ^ {q} (\ell)) + t ] \leq 4 e ^ {- \frac {t ^ {2} m}{2 C ^ {2}}}
|
| 472 |
+
$$
|
| 473 |
+
|
| 474 |
+
Finally we prove the two Lemmas used in the theorem. Lemma 4 is a standard result in the Radamacher complexity literature. For a proof, see e.g. Section 26.1 in (Shalev-Shwartz & Ben-David, 2014). Lemma 3 can also be proved by standard techniques. We provide the proof here.

Proof of Lemma 3. Consider two sets of samples $x_{1}, \dots, x_{j}, \dots, x_{m}$ and $x_{1}', \dots, x_{j}', \dots, x_{m}'$ where $x_{i} = x_{i}'$ for every index $i = 1, \dots, m$ except index $j$. Then

$$
\begin{array}{l} \left| \inf_{a} \frac{1}{m} \sum_{i} \ell(x_{i}, a) - \inf_{a} \frac{1}{m} \sum_{i} \ell(x_{i}', a) \right| \leq \sup_{a} \left| \frac{1}{m} \sum_{i} \ell(x_{i}, a) - \frac{1}{m} \sum_{i} \ell(x_{i}', a) \right| \\ = \frac{1}{m} \sup_{a} \left| \ell(x_{j}, a) - \ell(x_{j}', a) \right| \leq \frac{C}{m} \\ \end{array}
$$

Then we can conclude by McDiarmid's inequality that

$$
\operatorname*{Pr}\left[ \inf_{a} \frac{1}{m} \sum_{i} \ell(X_{i}, a) - \mathbb{E}\left[ \inf_{a} \frac{1}{m} \sum_{i} \ell(X_{i}, a) \right] \geq t \right] \leq e^{-\frac{2 t^{2}}{C^{2} / m}} = e^{-\frac{2 t^{2} m}{C^{2}}}
$$
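
The bounded-difference property that McDiarmid's inequality relies on here can be checked numerically. The sketch below is illustrative (the loss table, action set, and sizes are arbitrary): it verifies that replacing a single sample changes $\inf_a \frac{1}{m}\sum_i \ell(x_i,a)$ by at most $C/m$ when $\ell$ takes values in $[0, C]$.

```python
import random

def inf_emp_loss(xs, actions, loss):
    # infimum over a finite action set of the empirical average loss
    m = len(xs)
    return min(sum(loss(x, a) for x in xs) / m for a in actions)

random.seed(0)
C = 1.0  # the loss takes values in [0, C]
n_states, actions = 50, range(5)
table = {(x, a): random.uniform(0, C) for x in range(n_states) for a in actions}
loss = lambda x, a: table[(x, a)]

m = 20
xs = [random.randrange(n_states) for _ in range(m)]
base = inf_emp_loss(xs, actions, loss)

# replace each coordinate j by every possible value and record the
# largest change of the infimum; it never exceeds C/m
worst = 0.0
for j in range(m):
    for x_new in range(n_states):
        ys = list(xs)
        ys[j] = x_new
        worst = max(worst, abs(inf_emp_loss(ys, actions, loss) - base))
assert worst <= C / m + 1e-12
```

The bound holds because $|\inf_a f(a) - \inf_a g(a)| \leq \sup_a |f(a) - g(a)|$, exactly the first step in the proof of Lemma 3.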

Corollary 1. $\sqrt{\operatorname{Var}[\hat{D}_{\ell}^{\phi}(\hat{p}_m\|\hat{q}_m)]}\leq 4\max (\mathcal{R}_m^p (\ell),\mathcal{R}_m^q (\ell)) + \sqrt{2C^2 / m}$

Proof of Corollary 1. For notational convenience denote $B = 4\max (\mathcal{R}_m^p (\ell),\mathcal{R}_m^q (\ell))$. Since the variance is bounded by the expected squared deviation from any fixed value,

$$
\begin{array}{l} \mathrm{Var}[\hat{D}_{\ell}^{\phi}(\hat{p}_{m} \| \hat{q}_{m})] \\ \leq \mathbb{E}\left[ \left(\hat{D}_{\ell}^{\phi}(\hat{p}_{m} \| \hat{q}_{m}) - D_{\ell}^{\phi}(p \| q)\right)^{2} \right] \\ = \int_{t=0}^{\infty} \Pr\left[ \left(\hat{D}_{\ell}^{\phi}(\hat{p}_{m} \| \hat{q}_{m}) - D_{\ell}^{\phi}(p \| q)\right)^{2} \geq t \right] dt \\ = \int_{t=0}^{\infty} \Pr\left[ \left| \hat{D}_{\ell}^{\phi}(\hat{p}_{m} \| \hat{q}_{m}) - D_{\ell}^{\phi}(p \| q) \right| \geq \sqrt{t} \right] dt \\ = \int_{s=0}^{\infty} \Pr\left[ \left| \hat{D}_{\ell}^{\phi}(\hat{p}_{m} \| \hat{q}_{m}) - D_{\ell}^{\phi}(p \| q) \right| \geq s \right] 2s\, ds \quad (s = \sqrt{t}) \\ \leq \int_{s=0}^{B} 2s\, ds + \int_{s=0}^{\infty} \Pr\left[ \left| \hat{D}_{\ell}^{\phi}(\hat{p}_{m} \| \hat{q}_{m}) - D_{\ell}^{\phi}(p \| q) \right| \geq B + s \right] 2(B + s)\, ds \\ \leq B^{2} + \int_{s=0}^{\infty} 2(B + s) e^{-\frac{s^{2} m}{2 C^{2}}}\, ds \\ = B^{2} + \int_{t=0}^{\infty} 2\left(B + t \sqrt{\tfrac{2 C^{2}}{m}}\right) e^{-t^{2}} \sqrt{\tfrac{2 C^{2}}{m}}\, dt \quad \left(t = s \sqrt{\tfrac{m}{2 C^{2}}}\right) \\ = B^{2} + 2 B \sqrt{\tfrac{2 C^{2}}{m}} \int_{0}^{\infty} e^{-t^{2}}\, dt + \frac{4 C^{2}}{m} \int_{0}^{\infty} t e^{-t^{2}}\, dt \\ = B^{2} + B \sqrt{\tfrac{2 \pi C^{2}}{m}} + \frac{2 C^{2}}{m} \leq \left(B + \sqrt{2 C^{2} / m}\right)^{2} \\ \end{array}
$$
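
The tail-integral identity $\mathbb{E}[X^2] = \int_0^\infty \Pr[X^2 \geq t]\, dt$ used in the second step can be verified numerically on a small discrete example (the values and probabilities below are illustrative):

```python
# E[X^2] computed directly vs. as the integral of the tail probability of X^2
values = [0.5, 1.0, 2.0]
probs = [0.2, 0.5, 0.3]
lhs = sum(p * v * v for v, p in zip(values, probs))

# Riemann sum of Pr[X^2 >= t] over [0, max(X)^2]
dt = 1e-4
upper = max(values) ** 2
rhs = sum(sum(p for v, p in zip(values, probs) if v * v >= t) * dt
          for t in (i * dt for i in range(int(upper / dt))))
assert abs(lhs - rhs) < 1e-2
```

Here `lhs` is $0.2 \cdot 0.25 + 0.5 \cdot 1 + 0.3 \cdot 4 = 1.75$, and the tail integral gives the same value up to discretization error.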

Corollary 2. [Consistency] Under the conditions of Theorem 2, if additionally either (1) $\mathcal{A}$ is a finite set, or (2) $\mathcal{A}$ is a bounded subset of $\mathbb{R}^d$ for some $d\in \mathbb{N}$ and $\ell$ is Lipschitz w.r.t. $a$, then almost surely $\lim_{m\to \infty}\hat{D}_{\ell}^{\phi}(\hat{p}_m\| \hat{q}_m) = D_{\ell}^{\phi}(p\| q)$.

Proof of Corollary 2. We can deduce both consistency results from Theorem 2 by observing that the expected Rademacher complexity goes to 0 as $m \to \infty$.

The first statement is a simple consequence of Massart's lemma (e.g. see Eq. (8.44) in (Ma, 2021)). In particular, because $\mathcal{A}$ is finite we have

$$
\mathcal{R}_m^p(\ell) \leq \sqrt{2 \log |\mathcal{A}| / m} \longrightarrow 0 \quad \text{as } m \to \infty
$$
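
This bound can be sanity-checked with a Monte-Carlo estimate of the empirical Rademacher complexity of a finite class. The sketch below is an assumption-laden toy (random loss vectors with entries in $[0,1]$, so each has 2-norm at most $\sqrt{m}$ and Massart's lemma gives exactly the $\sqrt{2\log|\mathcal{A}|/m}$ rate):

```python
import math
import random

def empirical_rademacher(loss_vectors, trials=2000, seed=0):
    # Monte-Carlo estimate of E_sigma sup_a (1/m) sum_i sigma_i * l_i(a)
    rng = random.Random(seed)
    m = len(loss_vectors[0])
    total = 0.0
    for _ in range(trials):
        sigma = [rng.choice((-1.0, 1.0)) for _ in range(m)]
        total += max(sum(s * v for s, v in zip(sigma, vec)) / m
                     for vec in loss_vectors)
    return total / trials

random.seed(1)
m, num_actions = 200, 8
# one loss vector per action, entries in [0, 1]
vectors = [[random.random() for _ in range(m)] for _ in range(num_actions)]
est = empirical_rademacher(vectors)
bound = math.sqrt(2 * math.log(num_actions) / m)
assert 0.0 <= est <= bound
```

Increasing `m` shrinks both the estimate and the bound at the $1/\sqrt{m}$ rate, which is what drives the consistency argument.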

To prove the second statement, first observe that because $\mathcal{A}$ is bounded, there must exist some $r\in \mathbb{R}$ such that $\mathcal{A}\subset B_r\coloneqq \{a : \| a\| _2\leq r\}$. In addition, without loss of generality we can assume that there exists $L\in \mathbb{R}$ such that for all $x\in \mathcal{X}$ and $a,a^{\prime}\in \mathcal{A}$,

$$
| \ell(x, a) - \ell(x, a^{\prime}) | \leq L \| a - a^{\prime} \|_{2}
$$

There is no loss of generality because in finite dimensions all norms are equivalent, so if $\ell$ is Lipschitz under any norm, then $\ell$ is Lipschitz under the 2-norm. We can then apply the results on the Rademacher complexity of smoothly parameterized classes proved in (Bartlett, 2013) and conclude that $\lim_{m\to \infty}\mathcal{R}_m^p (\ell) = 0$.

# B ADDITIONAL EXPERIMENTAL RESULTS

# B.1 BLOB DATASET

Figure 4: Left: Average test power on the Blob dataset for different sample sizes and significance level $\alpha = 0.05$ . Our method (H-Div, dashed line) has significantly better test power, especially for setups with small sample sizes. Right: The same plot with significance level $\alpha = 0.01$ .

# B.2 EVALUATING SAMPLE QUALITY

Figure 5: The divergence between corrupted image and original image measured by H-divergence vs. FID. For better comparison we normalize each distance to between $[0,1]$ by a linear transformation. For "speckle" and "impulse" corruption, both divergences are monotonically increasing with more corruption. For "snow" corruption H-divergence is monotonic while FID is not.

The gold standard for evaluating image generative models is human judgment, which is nevertheless expensive. Several surrogate measurements are commonly used, such as the Fréchet Inception Distance (FID) (Heusel et al., 2017) or the Inception Score. Here, by formulating such evaluation as estimating the discrepancy between the generated and the real images, we can quantify the quality of image samples by computing the corresponding H-divergences.

We choose $\mathcal{A}$ as the set of Gaussian mixture distributions on the Inception feature space, $\ell(x,a)$ as the negative log likelihood of $x$ under distribution $a$, and $\phi (\theta ,\lambda) = \max (\theta ,\lambda)$. To evaluate the performance, we use the same setup as (Heusel et al., 2017), where we add corruptions from (Hendrycks & Dietterich, 2019) to a set of images. Intuitively, adding more corruption degrades the sample quality, so a good measurement of sample quality should assign a lower quality score (higher divergence from clean images). The results are plotted in Figure 5, with the plots for the remaining perturbations in Figure 6. Both FID and H-divergence are generally monotonically increasing as we increase the amount of corruption. Our method is slightly better on some perturbations (such as "snow"), where FID fails to be monotonically increasing while our method remains monotonic, better aligning with human intuition.
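
A simplified version of this evaluation can be sketched in one dimension. The snippet below is an assumption-heavy toy: it replaces the Gaussian mixtures with single 1-D Gaussians (where $H_\ell(p) = \inf_a \mathbb{E}_p[\mathrm{NLL}(X; a)]$ has the closed form $\frac{1}{2}\log(2\pi e\,\hat\sigma^2)$ at the MLE fit), uses $\phi(\theta,\lambda)=\max(\theta,\lambda)$ as in the text, and simulates "corruption" as additive noise:

```python
import math
import random
import statistics

def h_entropy_gaussian(xs):
    # H_l(p) with A = 1-D Gaussians and l(x, a) = NLL of x under a;
    # the infimum is attained by the MLE fit: 0.5 * log(2*pi*e*var)
    var = statistics.pvariance(xs)
    return 0.5 * math.log(2 * math.pi * math.e * var)

def h_divergence(xs, ys):
    # phi(theta, lambda) = max(theta, lambda)
    h_mix = h_entropy_gaussian(xs + ys)
    return max(h_mix - h_entropy_gaussian(xs), h_mix - h_entropy_gaussian(ys))

random.seed(0)
clean = [random.gauss(0, 1) for _ in range(4000)]
light = [x + random.gauss(0, 0.5) for x in clean]   # mild corruption
heavy = [x + random.gauss(2, 1.5) for x in clean]   # strong corruption
d_light = h_divergence(clean, light)
d_heavy = h_divergence(clean, heavy)
assert 0 <= d_light < d_heavy  # more corruption -> larger divergence
```

The ordering `d_light < d_heavy` mirrors the monotonicity property plotted in Figure 5; the real experiment fits Gaussian mixtures on Inception features instead.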

Figure 6: Additional plots that extend Figure 5.

# C ADDITIONAL THEORY AND EXPERIMENT DETAILS

# C.1 CONNECTION TO OPTIMAL TRANSPORT

We first show that H-divergence can also have a transportation interpretation. For all the intuitive interpretations we avoid technical difficulty by assuming $\mathcal{X}$ is a finite set, even though all the formulas are applicable when $\mathcal{X}$ has infinite cardinality.

| N | ME | SCF | C2ST-S | C2ST-L | MMD-O | MMD-D | H-Div |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 200 | 0.414±0.050 | 0.107±0.018 | 0.193±0.037 | 0.234±0.031 | 0.188±0.010 | 0.555±0.044 | 1.000±0.000 |
| 400 | 0.921±0.032 | 0.152±0.021 | 0.646±0.039 | 0.706±0.047 | 0.363±0.017 | 0.996±0.004 | 1.000±0.000 |
| 600 | 1.000±0.000 | 0.294±0.008 | 1.000±0.000 | 0.977±0.012 | 0.619±0.021 | 1.000±0.000 | 1.000±0.000 |
| 800 | 1.000±0.000 | 0.317±0.017 | 1.000±0.000 | 1.000±0.000 | 0.797±0.015 | 1.000±0.000 | 1.000±0.000 |
| 1000 | 1.000±0.000 | 0.346±0.019 | 1.000±0.000 | 1.000±0.000 | 0.894±0.016 | 1.000±0.000 | 1.000±0.000 |
| Avg. | 0.867 | 0.243 | 0.768 | 0.783 | 0.572 | 0.910 | 1.000 |

Table 3: Average test power $\pm$ standard error for $N$ samples over the MNIST dataset.

Setup Choose $\mathcal{A} = \mathcal{X}$, and let $\ell(x, a)$ be a symmetric function ($\ell(x, a) = \ell(a, x)$) that denotes the cost of transporting a unit of goods from $x$ to $a$. When we say that a unit of goods is located according to $p$, we mean that there is 1 unit of goods dispersed over the locations in $\mathcal{X}$, where $p(x)$ is the amount of goods at location $x$.

Optimal Transport Distance The optimal transport distance is defined by

$$
O_{\ell}(p, q) = \inf_{r_{XY}:\, r_X = p,\, r_Y = q} \mathbb{E}_{r_{XY}}[\ell(X, Y)]
$$

Intuitively, the optimal transport distance measures the following cost: initially the goods are located according to $p$, and we would like to move them to be located according to $q$; $O_{\ell}(p, q)$ denotes the minimum cost to accomplish this transportation task.

H-Divergence as Optimal Storage We first consider the intuitive interpretation of the H-entropy

$$
H_{\ell}(p) = \inf_{a} \mathbb{E}_{p}[\ell(X, a)] \qquad a^{*} = \arg\inf_{a \in \mathcal{A}} \mathbb{E}_{p}[\ell(X, a)]
$$

Suppose we want to move goods located according to $p$ to a storage location (for example, we want to collect all the mail in a city to a package center); then $a^*$ is the optimal place to build the storage location, and the H-entropy measures the minimum cost to transport all goods to it. Similarly, $2H_{\ell}\left(\frac{p + q}{2}\right)$ measures the minimum cost to transport both the goods located according to $p$ and the goods located according to $q$ to the same storage location. The H-divergence

$$
2 D_{\ell}^{\mathrm{JS}}(p \| q) := 2 H_{\ell}\left(\frac{p + q}{2}\right) - \left(H_{\ell}(q) + H_{\ell}(p)\right)
$$

measures the reduction of transportation cost with two storage locations (one for $p$ and one for $q$ ) rather than a single storage location (for both $p$ and $q$ ).

The H-Divergence is related to the optimal transport distance by the following inequality.

Proposition 4. If $\ell$ satisfies the triangle inequality $\forall x, y, z \in \mathcal{X}, \ell(x, y) + \ell(y, z) \geq \ell(x, z)$, then $D_{\ell}^{\mathrm{JS}}(p \| q) \leq \frac{1}{2} O_{\ell}(p, q)$.

Proof of Proposition 4. Let $a_q^* = \arg \inf_a \mathbb{E}_q[\ell(X, a)]$; then we have

$$
\begin{array}{l} 2 H_{\ell}\left(\frac{p + q}{2}\right) = \inf_{a}\left(\mathbb{E}_{p}[\ell(X, a)] + \mathbb{E}_{q}[\ell(X, a)]\right) \leq \mathbb{E}_{p}[\ell(X, a_{q}^{*})] + \mathbb{E}_{q}[\ell(X, a_{q}^{*})] \\ \leq \inf_{r_{XY}:\, r_X = p,\, r_Y = q} \mathbb{E}_{r_{XY}}\left[\ell(X, a_{q}^{*})\right] + \mathbb{E}_{q}\left[\ell(X, a_{q}^{*})\right] \\ \leq \inf_{r_{XY}:\, r_X = p,\, r_Y = q} \mathbb{E}_{r_{XY}}[\ell(X, Y) + \ell(Y, a_{q}^{*})] + \mathbb{E}_{q}[\ell(X, a_{q}^{*})] \\ = O_{\ell}(p, q) + 2 H_{\ell}(q) \\ \end{array}
$$

Intuitively, to move goods located according to $p$ and goods located according to $q$ to some storage location, one option is to first transport all goods from $p$ to $q$ (so that the amount of goods at location $x$ becomes $2q(x)$), then move the goods located according to $2q$ to the optimal storage location. Similarly we have

$$
2 H_{\ell}\left(\frac{p + q}{2}\right) \leq O_{\ell}(q, p) + 2 H_{\ell}(p)
$$

Averaging the two inequalities and using the symmetry $O_{\ell}(p, q) = O_{\ell}(q, p)$ (which follows from the symmetry of $\ell$), we obtain

$$
2 D_{\ell}^{\mathrm{JS}}(p \| q) = 2 H_{\ell}\left(\frac{p + q}{2}\right) - \left(H_{\ell}(q) + H_{\ell}(p)\right) \leq O_{\ell}(p, q)
$$
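
Proposition 4 can be checked numerically on a finite set. The sketch below makes several simplifying assumptions: $\mathcal{X} = \mathcal{A}$ is a few points on the real line, $\ell(x,a) = |x-a|$ (which is symmetric and satisfies the triangle inequality), and the 1-D optimal transport cost is computed via the well-known CDF formula $O_\ell(p,q) = \int |F_p - F_q|$:

```python
def h_entropy(support, probs, actions):
    # H_l(p) = inf_a E_p[|X - a|] over a finite action set
    return min(sum(p * abs(x - a) for x, p in zip(support, probs))
               for a in actions)

def ot_1d(support, p, q):
    # optimal transport with cost |x - y| on the line:
    # integral of |F_p - F_q| between consecutive support points
    cost, fp, fq = 0.0, 0.0, 0.0
    for (x0, p0, q0), (x1, _, _) in zip(zip(support, p, q),
                                        zip(support[1:], p[1:], q[1:])):
        fp += p0
        fq += q0
        cost += abs(fp - fq) * (x1 - x0)
    return cost

support = [0.0, 1.0, 3.0, 6.0]
p = [0.5, 0.2, 0.2, 0.1]
q = [0.1, 0.1, 0.3, 0.5]
mix = [(a + b) / 2 for a, b in zip(p, q)]
d_js = (h_entropy(support, mix, support)
        - 0.5 * (h_entropy(support, p, support)
                 + h_entropy(support, q, support)))
assert d_js >= -1e-9                               # nonnegativity
assert 2 * d_js <= ot_1d(support, p, q) + 1e-9     # Proposition 4
```

For these numbers, $D_\ell^{\mathrm{JS}} = 0.4$ while $O_\ell(p,q) = 2.6$, so the inequality $2D_\ell^{\mathrm{JS}} \leq O_\ell(p,q)$ holds with room to spare.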

# C.2 IMPOSSIBILITY OF JENSEN-SHANNON DIVERGENCE ESTIMATION

Suppose we have a consistent estimator for the Jensen-Shannon divergence, i.e. a function $\hat{\mathrm{JS}}$ such that for any pair of distributions $p,q$, given $N$ i.i.d. samples $p_N\sim p$ and $q_{N}\sim q$ we have $\lim_{N\to \infty}\hat{\mathrm{JS}} (p_N\| q_N) = \mathrm{JS} (p\| q)$. We derive a contradiction by the probabilistic method.

Let $p$ be a standard Gaussian distribution, and let $Q^{M}$ be a uniform distribution on a set of $M$ i.i.d. samples from $p$ (hence $Q^{M}$ is itself a random variable that depends on the i.i.d. samples). Let $Q^{*}$ be the limit of $Q^{M}$ as $M \to \infty$ (i.e. the uniform distribution on an infinite set of samples). Let $q^{*}$ denote a value that $Q^{*}$ can take. Because $q^{*}$ is always supported on countably many points, $\mathrm{JS}(p\|q^{*}) = 1$. Note that for any $N$ the following two sampling processes lead to identical distributions on $p_{N}, q_{N}$:

$$
p_{N} \sim p,\; q_{N} \sim p \qquad \text{and} \qquad Q^{*} \sim p,\; p_{N} \sim p,\; q_{N} \sim Q^{*}
$$

Hence, the expectation of any function (including $\hat{\mathrm{JS}}$) is also identical:

$$
\mathbb{E}_{Q^{*} \sim p}\left[ \mathbb{E}_{p_{N} \sim p, q_{N} \sim Q^{*}}[\hat{\mathrm{JS}}(p_{N}, q_{N})] \right] = \mathbb{E}_{p_{N} \sim p, q_{N} \sim p}[\hat{\mathrm{JS}}(p_{N}, q_{N})]
$$

Hence the limits as $N\to \infty$ must also be identical:

$$
\lim_{N \to \infty} \mathbb{E}_{Q^{*} \sim p}\left[ \mathbb{E}_{p_{N} \sim p, q_{N} \sim Q^{*}}[\hat{\mathrm{JS}}(p_{N}, q_{N})] \right] = \lim_{N \to \infty} \mathbb{E}_{p_{N} \sim p, q_{N} \sim p}[\hat{\mathrm{JS}}(p_{N}, q_{N})]
$$

Because the Jensen-Shannon divergence is always bounded, any consistent estimator must also be bounded for sufficiently large $N$. By the dominated convergence theorem we can exchange the expectation and the limit:

$$
\mathbb{E}_{Q^{*} \sim p}\left[ \lim_{N \to \infty} \mathbb{E}_{p_{N} \sim p, q_{N} \sim Q^{*}}[\hat{\mathrm{JS}}(p_{N}, q_{N})] \right] = \lim_{N \to \infty} \mathbb{E}_{p_{N} \sim p, q_{N} \sim p}[\hat{\mathrm{JS}}(p_{N}, q_{N})]
$$

By the probabilistic method (i.e. for any function $f$, if $\mathbb{E}_{Q^* \sim p}[f(Q^*)] = 0$ then there must exist some $q^*$ such that $f(q^*) \leq 0$), there must exist some $q^*$ such that

$$
\lim_{N \to \infty} \mathbb{E}_{p_{N} \sim p, q_{N} \sim q^{*}}[\hat{\mathrm{JS}}(p_{N}, q_{N})] \leq \lim_{N \to \infty} \mathbb{E}_{p_{N} \sim p, q_{N} \sim p}[\hat{\mathrm{JS}}(p_{N}, q_{N})] = 0 \neq \mathrm{JS}(p \| q^{*}) = 1
$$

Therefore $\hat{\mathrm{JS}}$ cannot be consistent.
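
The phenomenon underlying this proof is easy to observe directly: the plug-in JS divergence between two empirical distributions drawn from the same continuous distribution saturates at its maximum no matter how many samples are used, because the empirical supports are almost surely disjoint. A minimal illustration (base-2 logarithm, so the maximum is 1):

```python
import math
import random

def js_discrete(p, q):
    # Jensen-Shannon divergence (in bits) between two discrete distributions
    # given as {value: probability} dicts
    support = set(p) | set(q)
    m = {x: 0.5 * (p.get(x, 0.0) + q.get(x, 0.0)) for x in support}
    def kl(r):
        return sum(r[x] * math.log2(r[x] / m[x]) for x in r if r[x] > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

random.seed(0)
errors = []
for n in (10, 100, 1000):
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [random.gauss(0, 1) for _ in range(n)]
    p_hat = {x: 1.0 / n for x in xs}
    q_hat = {y: 1.0 / n for y in ys}
    # empirical supports are a.s. disjoint, so the plug-in estimate
    # equals 1 even though JS(p||p) = 0 for the true distributions
    errors.append(abs(js_discrete(p_hat, q_hat) - 1.0))
assert max(errors) < 1e-9
```

This is exactly why the H-divergence is estimated through the decision losses $H_\ell$ rather than by plugging empirical distributions into the JS formula.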

# C.3 CLIMATE CHANGE EXPERIMENT DETAILS

Setup Details In this experiment, we extract yearly weather statistics for each year from 1981-2019. We use the NOAA dataset, which contains daily weather data from thousands of weather stations across the globe. For each year we compute the following summary statistics: average yearly temperature, average yearly humidity, average yearly wind speed, and average number of rainy days per year. For example, $x_{1990}$ is a 4-dimensional vector where each dimension corresponds to one of the summary statistics above.

Let $p$ denote the uniform distribution over $\{x_{1981},\dots ,x_{1999}\}$ and $q$ the uniform distribution over $\{x_{2000},\dots ,x_{2019}\}$. For example, $\mathbb{E}_p[\ell (X,a)]$ denotes the expected loss of action $a$ for a random year sampled from 1981-1999. Note that for many decision problems, it is possible to make yearly decisions (e.g. decide the best crop to plant each year). However, because we want to measure the difference between the two time periods 1981-1999 vs. 2000-2019, we choose the action space $\mathcal{A}$ to be a single crop selection that will be used for the entire time period (rather than a different crop selection for each year). Similarly, for energy production we choose the action space $\mathcal{A}$ to be the proportions of the different energy production methods that will be used for the entire time period.

Crop yield We obtain the crop yield loss function $\ell (x,a)$ with the following procedure:

1. We obtain the crop yield dataset from (FAOSTAT et al., 2006); each entry we extract is the following tuple: (country code, year, crop type, yield per hectare $(\mathrm{kg / ha})$).
2. We associate each country code with the central coordinate (i.e. the average latitude and longitude) of the country. For each central coordinate we find the nearest weather station in the NOAA database, and use the data from that station as the weather data for the country.
3. Based on step 2, for each (country code, year) pair we can associate the weather statistics (i.e. the 4-dimensional vector described in Setup Details). We update each entry in step 1 to be (weather statistics, crop type, yield per hectare).
4. Based on the data entries we obtain in step 3, we train a kernel ridge regression model to learn the function $\ell(x, a)$, where $x$ is the weather statistics, $a$ is the crop type, and $\ell(x, a)$ is learned to predict the yield (normalized by market price) under weather $x$ for crop type $a$.
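
The regression in step 4 can be sketched in a few lines of kernel ridge regression. Everything below is hypothetical toy data, not the actual NOAA/FAOSTAT pipeline; we fit one model per crop type on the 4-dimensional weather vector with a Gaussian kernel (in practice the features should be standardized first):

```python
import math

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for a small dense system
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def gauss_kernel(u, v, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def kernel_ridge_fit(X, y, lam=0.01):
    # alpha = (K + lam * I)^{-1} y; predict with sum_i alpha_i k(x, x_i)
    n = len(X)
    K = [[gauss_kernel(X[i], X[j]) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, y)
    return lambda x: sum(a * gauss_kernel(x, xi) for a, xi in zip(alpha, X))

# hypothetical (temperature, humidity, wind, rainy-day fraction) -> yield pairs
X = [(15.0, 0.60, 3.0, 0.33), (22.0, 0.70, 2.0, 0.25),
     (28.0, 0.50, 4.0, 0.16), (18.0, 0.80, 1.0, 0.38)]
y = [4.2, 5.1, 3.3, 4.8]
ell = kernel_ridge_fit(X, y)
pred = ell(X[1])
assert abs(pred - y[1]) < 0.2  # near-interpolation of training data
```

With a separate fit per crop type $a$, the resulting predictors play the role of $\ell(x, a)$ (negated, since higher yield means lower loss).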

Energy production We consider three types of energy production methods: solar, wind, and traditional (such as fossil fuel). Solar energy and wind energy both depend heavily on weather, while traditional energy does not. In particular, we use empirical formulas for the solar and wind energy calculations:

solar $\propto$ number of sunny days * daylight hours

wind $\propto$ wind velocity

# D DISCUSSIONS

Future Work In this paper we explored the application of the H-divergence to two-sample testing. Future work can explore other applications of these divergences. Potential applications include

- Generative model training. Many generative model learning algorithms minimize divergences (Nowozin et al., 2016; Arjovsky et al., 2017), and future work can explore whether the new divergence family leads to new generative model learning algorithms.
- Independence tests. Independence tests are two-sample tests between the joint distribution $p_{XY}$ and the product of marginal distributions $p_{X}p_{Y}$ ; therefore the two-sample test results from this paper are applicable to independence tests.
- Robustness. Many robust optimization, estimation, or prediction methods aim to achieve good performance even when the data distribution is perturbed. Typically the perturbation is measured by e.g. the KL divergence or the $L_p$ distances. Future work can measure the perturbation with the H-divergence by choosing loss functions $\ell$ that are tailored to the problem.

Computation Issues There are several situations where estimating the H-divergence is (provably) computationally feasible:

- When $\mathcal{A}$ is a small finite set, in which case we can enumerate all possible values of $a \in \mathcal{A}$ .
- When the loss function $\ell(x, a)$ is convex in $a$ , in which case we can accurately estimate the H-divergence in polynomial time by solving the optimization problem $\inf_{a} \mathbb{E}[\ell(X, a)]$ with gradient descent.

In general, while it is difficult to guarantee computational feasibility, we use a practical technique that works well in our experiments: we use the same number of gradient descent steps when evaluating $H_{\ell}\left(\frac{p + q}{2}\right)$ as when evaluating $H_{\ell}(p)$ and $H_{\ell}(q)$ . Intuitively, the "sub-optimality" in estimating the three terms is then approximately the same in expectation and cancels out.
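
This equal-steps trick can be sanity-checked on a convex case where the answer is known in closed form: for $\ell(x,a) = (x-a)^2$, $H_\ell$ is the variance, and the JS-type H-divergence reduces to $(\mu_p - \mu_q)^2/4$ for an equal-weight mixture. The sketch below (sample sizes, learning rate, and step counts are arbitrary) estimates each of the three $H_\ell$ terms with the same number of gradient descent steps:

```python
import random

def h_entropy_gd(xs, steps=100, lr=0.1):
    # estimate inf_a (1/m) sum_i (x_i - a)^2 by gradient descent on a;
    # the same number of steps is used for every H_l term so the
    # residual sub-optimality approximately cancels in the divergence
    a = 0.0
    m = len(xs)
    for _ in range(steps):
        grad = sum(2 * (a - x) for x in xs) / m
        a -= lr * grad
    return sum((x - a) ** 2 for x in xs) / m

random.seed(0)
p = [random.gauss(0.0, 1.0) for _ in range(2000)]
q = [random.gauss(1.0, 1.0) for _ in range(2000)]
d_js = h_entropy_gd(p + q) - 0.5 * (h_entropy_gd(p) + h_entropy_gd(q))
# closed form: (mu_p - mu_q)^2 / 4 = (0 - 1)^2 / 4 = 0.25
assert abs(d_js - 0.25) < 0.05
```

Because the inner problem is convex and one-dimensional here, gradient descent converges essentially exactly; in the nonconvex neural-network case only the approximate cancellation argument applies.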

comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/images.zip
ADDED

@@ -0,0 +1,3 @@

version https://git-lfs.github.com/spec/v1
oid sha256:0ae6e6554f176eae07f2b4e53072ed4c209b3b88cdb3eab3aac590752b2d7120
size 1033851

comparingdistributionsbymeasuringdifferencesthataffectdecisionmaking/layout.json
ADDED

@@ -0,0 +1,3 @@

version https://git-lfs.github.com/spec/v1
oid sha256:20b8d027af9af259c4e7184f9109c3016601cc98c0a7178c9eded77fbcdc3b24
size 1006089

coordinationamongneuralmodulesthroughasharedglobalworkspace/a937660a-9b49-49b5-9181-ee4c710a6943_content_list.json
ADDED

@@ -0,0 +1,3 @@

version https://git-lfs.github.com/spec/v1
oid sha256:50024d0f29307a7d394fd79a276a6bca50def494a19425bf7b67d7fabf0d4cf5
size 140723

coordinationamongneuralmodulesthroughasharedglobalworkspace/a937660a-9b49-49b5-9181-ee4c710a6943_model.json
ADDED

@@ -0,0 +1,3 @@

version https://git-lfs.github.com/spec/v1
oid sha256:8b5a4ca2aed5e4c64131c61a90cafeaae54a4dc86f53bf58c23e6243de8a8a76
size 173881